Planet Russell


Charles Stross: Roe v Wade v Sanity

Supreme Court voted to overturn Roe v Wade abortion law, leaked draft opinion reportedly shows.

Here is the leaked draft opinion by Justice Alito. (Format: PDF.)

I am not a lawyer.

The opinion apparently overturns Roe v. Wade by junking the implied constitutional right to privacy that it created. However, a bunch of other US legal precedents rely on the right to privacy. Notably:

  • Lawrence v. Texas (2003) determined that it's unconstitutional to punish people for committing "Sodomy" (any sex act other than missionary-position penis-in-vagina between a married man and woman)

  • Griswold v. Connecticut (1965) protects the ability of married couples to buy contraceptives without government interference

  • Loving v. Virginia (1967): right to privacy was used to overturn laws banning interracial marriage

  • Stanley v. Georgia (1969): right to privacy protects personal possession of pornography

  • Obergefell v. Hodges (2015): right to privacy and equal protection clause were used to argue for legality of same sex marriage

  • Meyer v. Nebraska (1923): ruling allows families to decide for themselves if they want their children to learn a language other than English (overturning the right to privacy could open the door for racist states to outlaw parents teaching their children their natal language)

  • Skinner v. Oklahoma (1942): this ruling found it unconstitutional to forcibly sterilize people (it violated the Equal Protection clause)

I am going to note that the US congressional mid-term elections take place in about six months' time.

Wider point: if Alito's leaked ruling represents current USSC opinion, then it appears that the USSC is intent on turning back the clock all the way to the 19th century.

Another point: it is unwise to underestimate the degree to which extreme white supremacism in the USA is enmeshed with a panic about "white" people being "out-bred" by other races: this also meshes in with extreme authoritarian patriarchal values, the weird folk religion that names itself "Christianity" and takes pride in its guns and hatred of others, homophobia, transphobia, an unhealthy obsession with eugenics (and a low-key desire to eliminate the disabled which plays into COVID19 denialism, anti-vaxx, and anti-mask sentiment), misogyny, incel culture, QAnon, classic anti-semitic Blood Libel, and Christian Dominionism (which latter holds that the USA is a Christian nation—and by Christian they mean that aforementioned weird folk religion derived from protestantism I mentioned earlier—and their religious beliefs must be enshrined in law).

Okay, so, it's open season in the comments here. (Meanwhile discussion of RvW on other blog post comment threads is officially forbidden.)

PS: There are no indications they're going to use this ruling as an opening shot for bringing back slavery. Why would they? Slavery never went away. (The 13th Amendment has a gigantic loophole permitting enslavement as punishment, and the prison-industrial sector in the USA clearly enforces chattel slavery—only under government/corporate management rather than as personal property.)

Charles Stross: The impotence of the long-distance trillionaire

(In other news, I finally sent off the novel manuscript I've been working on for the past 18 months. Taking a couple of days off before getting back to work on a novella I started in 2014 ...)

(Disclaimer: money is a proxy for control or power. I'm focussing on money rather than political leverage only because it's quantifiable.)

To you and me, a billion dollars sounds like a lot of money. It's on the order of what I (at peak earning capacity) would earn in 10,000 years. Give me just $10M and I could comfortably retire and live off interest and some judicious siphoning of capital for the rest of my life.

So are there any valid reasons to put up with billionaires?

There's a very fertile field of what I can only describe as capitalist apologetics, wherein economists and others try to justify the existence of billionaires in terms of social utility. Crude arguments that "greed is good" are all very well, but they raise the question of what positive good billionaires contribute to the commonweal—beyond a certain point the diminishing marginal utility of money means that every extra million or billion dollars changes nothing significant in the recipient's life.

For example, Steve Jobs had pancreatic cancer, as a result of which his liver was failing (after he underwent a pancreaticoduodenectomy). As a very rich man, he could afford the best healthcare. As a billionaire, he could do more than that: he reputedly kept a business jet on 24x7 standby to whisk him to any hospital in the United States where a histocompatible liver for transplant surgery became available. (Livers are notoriously short-lived outside the donor body. Most liver transplant recipients are only able to register in one state within the USA; Jobs was registered in two or three.) But at that point, it did not matter how many billions he had: once you've got the jet and are registered with every major transplant centre within flight range, no extra amount of money is going to improve your chances of survival. In other words, in personal terms the marginal utility of money diminishes all the way to zero.

So, personal wealth has an upper bound beyond which the numbers are meaningless. Which leads to the second common argument for tolerating billionaires: that they have the resources to undertake tasks that governments decline to address. For example, there's the Gates Foundation's much-touted goal of eliminating childhood diseases of poverty in South-East Asia (which I haven't heard much about since COVID19 hit—or, for that matter, since the allegations of a Gates-Epstein connection surfaced in the press). Or Elon Musk's avowed goal of colonizing Mars.

Contra which, I would argue that in planetary terms a billion dollars is peanuts.

Gross planetary GDP (GWP—gross world product) is on the order of $85Tn—that is, $85,000 billion—a year. It's hard to pin it down because it's distributed among multiple currencies with varying PPP, so it could be anywhere from $70Tn to $100Tn.

Anyway. Those insanely rich guys, Elon Musk and Jeff Bezos? Each of them is worth less than the growth of GWP during 2019. The richest billionaires are barely visible when you look at wealth on the scale of GWP. Collectively, along with Gates, the Waltons, Putin, et al, they represent only about 1% of GWP.

They can fund lobbying groups and politicians, rant about colonizing Mars, and buy midlife crisis toys like Twitter or weekend getaways on a space station, but their scope for effecting real change is actually tiny on a global scale. Even Putin and Xi, who are at the state-level actor end of the scale (individually they're multi-billionaires: but they also control nuclear weapons, armies, and populations in 8-9 digits) have little global leverage. Putin's catastrophic adventure in Ukraine has revealed how threadbare the emperor's suit is: all the current gassing in the Russian media about using nuclear weapons if he doesn't get his way actually does is to demonstrate the uselessness of those nuclear weapons for achieving political/diplomatic objectives.

So I conclude that they probably feel about as helpless in the face of revolutions, climate change, and economic upheaval as you and I.

Which in turn suggests something about the psychopathology of billionaires. They're accustomed to having their every whim granted, merely for the asking, as long as it exists within the enormous buffet of necessities and luxuries that are available in our global economic sphere. But they're all going to grow old and die. They can't really avoid the threat of creeping disablement within their own body, although they can buy the most careful attendants and luxurious bedpans and wheelchairs. They can't insulate themselves from objective reality, although they can pretend it doesn't exist and buy their very own luxury apocalypse bunker in New Zealand.

So they're likely to succumb to brutal cognitive dissonance at some point.

Elon Musk turns 51 this year. He's probably finally realized that he is not going to have a luxurious retirement on Mars. If the Mars colony isn't established within 20 years, he'll probably be too old to make the trip there (and I'm betting 20 years isn't long enough for what he'd want).

Vladimir Putin turns 70 this year. He's been treated for thyroid cancer, and may well be quite ill. Only one former Russian or Soviet leader lived past 80 in the past 400 years, and that's Mikhail Gorbachev (who was out of office, and insulated from its premature ageing effects, after only 5 or 6 years). My read on the situation is that Putin hadn't been impacted by external reality for decades before his Ukraine "peacekeeping operation"; his 70th birthday present to himself, intended to secure his legacy by re-establishing the Russian empire, has turned into a nightmare.

Jeff Bezos is 58; keep an eye on him in January 2024; that's when he's due to turn 60. (He seems to be saner than Musk and Putin, but his classic midlife crisis year falls around the start of a presidential election campaign in the US and he might succumb to the impulse to make a grand gesture, like Mike Bloomberg's abortive run for the presidency.)

More to the point?

Granting individuals enormous leverage can sometimes be socially useful. But before you point at Musk and Tesla or SpaceX, I need to remind you that he didn't found Tesla, he merely bought into it then took over: SpaceX's focus on reusability is good, but we had reusable space launchers before—the only really new angle is that it's a cost-reduction measure. Starlink isn't an original idea either, merely a modern, bigger, faster version of the 1990s' Teledesic (which fell victim to over-ambitious technology goals and the dot-com bust). Meanwhile, billionaires can do immense damage: the Koch network has largely bankrolled climate change denial, Musk's Mars colony plan is fatally flawed, and so on. We inevitably run into the question of accountability. And when one person holds the purse-strings, we lose that.

I can't see any good reason to let any individual claim ownership over more than a billion dollars of assets—even $100M is pushing it.

Can you?

Worse Than Failure: Error'd: Hi-Ho, Hi-Ho

This week brings an ongoing installment in a long-running gag, and a plea for help with a truly execrable pun. Can someone please find me some map-related material in Idaho? I promise not to credit you directly!

Workaholic Stuart Longland flexes "Yes, I'm working on a Sunday. And yes, I've worked some long hours in the past. But 56 hours in one day? I don't know whether to crow or cry! But TRWTF is the fact these phantom work entries re-incarnate after I delete them." In his own defense, Stuart explains (and I concur) "Monday-Friday is fairly meaningless after COVID and the Queensland Floods."

[image: tempo]

Searching for the Rubicon, wandert Vivia wondered "Of those places, only Attica is in Greece. Maybe cities now exist in the cloud?"

[image: cities]

Non-ex Patrick explains "I recently recalled several emails with mixed date formats, regarding the Fed-Ex updates for some goods I bought in January from Europe. One email subject had European formatting and the other had American, in addition to the body shown here. Luckily it only took 2 days, not 11 months, or -10 months." And Patrick, are you located in the US? I wonder if the email formatter was using the source locale for both the timezone and the formatting of the ship date, and the destination locale for both the timezone and the formatting of the delivery date. There's a certain horrible logic to that. But clearly the right answer is RFC 3339 or nothing.

[image: fedex]

Coincidentally, Michael B. relates "Expected delivery by time machine?" Or another case of split locales, with shipping across the International Date Line?

[image: ebtm]

Finally, Michael R. clocks in "The Date column is supposed to be the original date and New Date the new postponed one. My 20/02/2022 was especially tricky to figure out." Michael, I looked at the list and didn't see the problem. It must have been fixed since your encounter.

[image: dates]


Planet Debian: Reproducible Builds (diffoscope): diffoscope 214 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 214. This version includes the following changes:

[ Chris Lamb ]
* Support both python-argcomplete 1.x and 2.x.

[ Vagrant Cascadian ]
* Add external tool on GNU Guix for xb-tool.

You can find out more by visiting the project homepage.


Planet Debian: Sergio Talens-Oliag: New Blog Config

As promised, in this post I’m going to explain how I’ve configured this blog using hugo, asciidoctor and the PaperMod theme, how I publish it using nginx, how I’ve integrated the remark42 comment system and how I’ve automated its publication using gitea and json2file-go.

It is a long post, but I hope that at least parts of it will be interesting to some; feel free to skip it if that is not your case … 😉

Hugo Configuration

Theme settings

The site is using the PaperMod theme and as I’m using asciidoctor to publish my content I’ve adjusted the settings to improve how things are shown with it.

The current config.yml file is the one shown below (some of the settings are probably not required or used right now, but I’m including the current file, so this post will always have the latest version of it):

config.yml
baseURL: https://blogops.mixinet.net/
title: Mixinet BlogOps
paginate: 5
theme: PaperMod
destination: public/
enableInlineShortcodes: true
enableRobotsTXT: true
buildDrafts: false
buildFuture: false
buildExpired: false
enableEmoji: true
pygmentsUseClasses: true
minify:
  disableXML: true
  minifyOutput: true
languages:
  en:
    languageName: "English"
    description: "Mixinet BlogOps - https://blogops.mixinet.net/"
    author: "Sergio Talens-Oliag"
    weight: 1
    title: Mixinet BlogOps
    homeInfoParams:
      Title: "Sergio Talens-Oliag Technical Blog"
      Content: >
        ![Mixinet BlogOps](/images/mixinet-blogops.png)
    taxonomies:
      category: categories
      tag: tags
      series: series
    menu:
      main:
        - name: Archive
          url: archives
          weight: 5
        - name: Categories
          url: categories/
          weight: 10
        - name: Tags
          url: tags/
          weight: 10
        - name: Search
          url: search/
          weight: 15
outputs:
  home:
    - HTML
    - RSS
    - JSON
params:
  env: production
  defaultTheme: light
  disableThemeToggle: false
  ShowShareButtons: true
  ShowReadingTime: true
  disableSpecial1stPost: true
  disableHLJS: true
  displayFullLangName: true
  ShowPostNavLinks: true
  ShowBreadCrumbs: true
  ShowCodeCopyButtons: true
  ShowRssButtonInSectionTermList: true
  ShowFullTextinRSS: true
  ShowToc: true
  TocOpen: false
  comments: true
  remark42SiteID: "blogops"
  remark42Url: "/remark42"
  profileMode:
    enabled: false
    title: Sergio Talens-Oliag Technical Blog
    imageUrl: "/images/mixinet-blogops.png"
    imageTitle: Mixinet BlogOps
    buttons:
      - name: Archives
        url: archives
      - name: Categories
        url: categories
      - name: Tags
        url: tags
  socialIcons:
    - name: CV
      url: "https://www.uv.es/~sto/cv/"
    - name: Debian
      url: "https://people.debian.org/~sto/"
    - name: GitHub
      url: "https://github.com/sto/"
    - name: GitLab
      url: "https://gitlab.com/stalens/"
    - name: Linkedin
      url: "https://www.linkedin.com/in/sergio-talens-oliag/"
    - name: RSS
      url: "index.xml"
  assets:
    disableHLJS: true
    favicon: "/favicon.ico"
    favicon16x16:  "/favicon-16x16.png"
    favicon32x32:  "/favicon-32x32.png"
    apple_touch_icon:  "/apple-touch-icon.png"
    safari_pinned_tab:  "/safari-pinned-tab.svg"
  fuseOpts:
    isCaseSensitive: false
    shouldSort: true
    location: 0
    distance: 1000
    threshold: 0.4
    minMatchCharLength: 0
    keys: ["title", "permalink", "summary", "content"]
markup:
  asciidocExt:
    attributes: {}
    backend: html5s
    extensions: ['asciidoctor-html5s','asciidoctor-diagram']
    failureLevel: fatal
    noHeaderOrFooter: true
    preserveTOC: false
    safeMode: unsafe
    sectionNumbers: false
    trace: false
    verbose: false
    workingFolderCurrent: true
privacy:
  vimeo:
    disabled: false
    simple: true
  twitter:
    disabled: false
    enableDNT: true
    simple: true
  instagram:
    disabled: false
    simple: true
  youtube:
    disabled: false
    privacyEnhanced: true
services:
  instagram:
    disableInlineCSS: true
  twitter:
    disableInlineCSS: true
security:
  exec:
    allow:
      - '^asciidoctor$'
      - '^dart-sass-embedded$'
      - '^go$'
      - '^npx$'
      - '^postcss$'

Some notes about the settings:

  • disableHLJS and assets.disableHLJS are set to true; we plan to use rouge for the adoc highlighting, and including the hljs assets adds styles that collide with the ones used by rouge.
  • ShowToc is set to true and TocOpen to false so the ToC appears collapsed initially. My plan was to use the asciidoctor ToC, but after trying both I think the theme’s one looks nice and I don’t need to adjust styles. It does have an issue with the html5s processor (the admonition titles use <h6> and they are shown on the ToC, which is weird); to fix it I’ve copied layouts/partials/toc.html to my site repository and changed the range of headings to end at 5 instead of 6 (see the sketch after this list). In fact 5 still seems a lot, but as I don’t think I’ll use that heading level on the posts it doesn’t really matter.
  • params.profileMode values are adjusted, but for now I’ve left it disabled by setting params.profileMode.enabled to false; instead I’ve set the homeInfoParams to show more or less the same content, with the latest posts under it (I’ve added some styles to my custom.css style sheet to center the text and image of the first post to match the look and feel of the profile).
  • In the asciidocExt section I’ve set the backend to html5s, added the asciidoctor-html5s and asciidoctor-diagram extensions to asciidoctor and set workingFolderCurrent to true so that asciidoctor-diagram works right (I haven’t tested it yet).
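A minimal sketch of that ToC partial override (the themes/PaperMod path is an assumption based on the usual Hugo theme layout; the heading-range change itself is a manual edit of the copied file):

# Copy the theme partial into the site so Hugo uses the local override
mkdir -p layouts/partials
cp themes/PaperMod/layouts/partials/toc.html layouts/partials/toc.html
# Then edit layouts/partials/toc.html so the heading range stops at <h5>
# instead of <h6>, leaving the html5s admonition titles out of the ToC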

Theme customisations

To write in asciidoctor using the html5s processor I’ve added some files to the assets/css/extended directory:

  1. As said before, I’ve added the file assets/css/extended/custom.css to make the homeInfoParams look like the profile page, and I’ve also changed some theme styles a little to make things look better with the html5s output:

    custom.css
    /* Fix first entry alignment to make it look like the profile */
    .first-entry { text-align: center; }
    .first-entry img { display: inline; }
    /**
     * Remove margin for .post-content code and reduce padding to make it look
     * better with the asciidoctor html5s output.
     **/
    .post-content code { margin: auto 0; padding: 4px; }
  2. I’ve also added the file assets/css/extended/adoc.css with some styles taken from the asciidoctor-default.css, see this blog post about the original file; mine is the same after formatting it with css-beautify and editing it to use variables for the colors to support light and dark themes:

    adoc.css
    /* AsciiDoctor*/
    table {
        border-collapse: collapse;
        border-spacing: 0
    }
    
    .admonitionblock>table {
        border-collapse: separate;
        border: 0;
        background: none;
        width: 100%
    }
    
    .admonitionblock>table td.icon {
        text-align: center;
        width: 80px
    }
    
    .admonitionblock>table td.icon img {
        max-width: none
    }
    
    .admonitionblock>table td.icon .title {
        font-weight: bold;
        font-family: "Open Sans", "DejaVu Sans", sans-serif;
        text-transform: uppercase
    }
    
    .admonitionblock>table td.content {
        padding-left: 1.125em;
        padding-right: 1.25em;
        border-left: 1px solid #ddddd8;
        color: var(--primary)
    }
    
    .admonitionblock>table td.content>:last-child>:last-child {
        margin-bottom: 0
    }
    
    .admonitionblock td.icon [class^="fa icon-"] {
        font-size: 2.5em;
        text-shadow: 1px 1px 2px var(--secondary);
        cursor: default
    }
    
    .admonitionblock td.icon .icon-note::before {
        content: "\f05a";
        color: var(--icon-note-color)
    }
    
    .admonitionblock td.icon .icon-tip::before {
        content: "\f0eb";
        color: var(--icon-tip-color)
    }
    
    .admonitionblock td.icon .icon-warning::before {
        content: "\f071";
        color: var(--icon-warning-color)
    }
    
    .admonitionblock td.icon .icon-caution::before {
        content: "\f06d";
        color: var(--icon-caution-color)
    }
    
    .admonitionblock td.icon .icon-important::before {
        content: "\f06a";
        color: var(--icon-important-color)
    }
    
    .conum[data-value] {
        display: inline-block;
        color: #fff !important;
        background-color: rgba(100, 100, 0, .8);
        -webkit-border-radius: 100px;
        border-radius: 100px;
        text-align: center;
        font-size: .75em;
        width: 1.67em;
        height: 1.67em;
        line-height: 1.67em;
        font-family: "Open Sans", "DejaVu Sans", sans-serif;
        font-style: normal;
        font-weight: bold
    }
    
    .conum[data-value] * {
        color: #fff !important
    }
    
    .conum[data-value]+b {
        display: none
    }
    
    .conum[data-value]::after {
        content: attr(data-value)
    }
    
    pre .conum[data-value] {
        position: relative;
        top: -.125em
    }
    
    b.conum * {
        color: inherit !important
    }
    
    .conum:not([data-value]):empty {
        display: none
    }
  3. The previous file uses variables from a partial copy of the theme-vars.css file that changes the highlighted code background color and adds the color definitions used by the admonitions:

    theme-vars.css
    :root {
        /* Solarized base2 */
        /* --hljs-bg: rgb(238, 232, 213); */
        /* Solarized base3 */
        /* --hljs-bg: rgb(253, 246, 227); */
        /* Solarized base02 */
        --hljs-bg: rgb(7, 54, 66);
        /* Solarized base03 */
        /* --hljs-bg: rgb(0, 43, 54); */
        /* Default asciidoctor theme colors */
        --icon-note-color: #19407c;
        --icon-tip-color: var(--primary);
        --icon-warning-color: #bf6900;
        --icon-caution-color: #bf3400;
        --icon-important-color: #bf0000
    }
    
    .dark {
        --hljs-bg: rgb(7, 54, 66);
        /* Asciidoctor theme colors with tint for dark background */
        --icon-note-color: #3e7bd7;
        --icon-tip-color: var(--primary);
        --icon-warning-color: #ff8d03;
        --icon-caution-color: #ff7847;
        --icon-important-color: #ff3030
    }
  4. The previous styles use font-awesome, so I’ve downloaded its resources for version 4.7.0 (the one used by asciidoctor), storing font-awesome.css in the assets/css/extended dir (that way it is merged with the rest of the .css files) and copying the fonts to the static/assets/fonts/ dir (they will be served directly):

    FA_BASE_URL="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0"
    curl "$FA_BASE_URL/css/font-awesome.css" \
      > assets/css/extended/font-awesome.css
    for f in FontAwesome.otf fontawesome-webfont.eot \
      fontawesome-webfont.svg fontawesome-webfont.ttf \
      fontawesome-webfont.woff fontawesome-webfont.woff2; do
        curl "$FA_BASE_URL/fonts/$f" > "static/assets/fonts/$f"
    done
  5. As already said, the default highlighter is disabled (its styles collide with the ones used by rouge), so we need a css file to do the highlight styling; as rouge provides a way to export its themes, I’ve created the assets/css/extended/rouge.css file with the thankful_eyes theme:

    rougify style thankful_eyes > assets/css/extended/rouge.css
  6. To support the use of the html5s backend with admonitions I’ve added a variation of the example found on this blog post to assets/js/adoc-admonitions.js:

    adoc-admonitions.js
    // replace the default admonitions block with a table that uses a format
    // similar to the standard asciidoctor ... as we are using fa-icons here there
    // is no need to add the icons: font entry on the document.
    window.addEventListener('load', function () {
      const admonitions = document.getElementsByClassName('admonition-block')
      for (let i = admonitions.length - 1; i >= 0; i--) {
        const elm = admonitions[i]
        const type = elm.classList[1]
        const title = elm.getElementsByClassName('block-title')[0];
    	const label = title.getElementsByClassName('title-label')[0]
    		.innerHTML.slice(0, -1);
        elm.removeChild(elm.getElementsByClassName('block-title')[0]);
        const text = elm.innerHTML
        const parent = elm.parentNode
        const tempDiv = document.createElement('div')
        tempDiv.innerHTML = `<div class="admonitionblock ${type}">
        <table>
          <tbody>
            <tr>
              <td class="icon">
                <i class="fa icon-${type}" title="${label}"></i>
              </td>
              <td class="content">
                ${text}
              </td>
            </tr>
          </tbody>
        </table>
      </div>`
        const input = tempDiv.childNodes[0]
        parent.replaceChild(input, elm)
      }
    })

    and enabled its minified use on the layouts/partials/extend_footer.html file adding the following lines to it:

    {{- $admonitions := slice (resources.Get "js/adoc-admonitions.js")
      | resources.Concat "assets/js/adoc-admonitions.js" | minify | fingerprint }}
    <script defer crossorigin="anonymous" src="{{ $admonitions.RelPermalink }}"
      integrity="{{ $admonitions.Data.Integrity }}"></script>

Remark42 configuration

To integrate Remark42 with the PaperMod theme I’ve created the file layouts/partials/comments.html with the following content based on the remark42 documentation, including extra code to sync the dark/light setting with the one set on the site:

comments.html
<div id="remark42"></div>
<script>
  var remark_config = {
    host: {{ .Site.Params.remark42Url }},
    site_id: {{ .Site.Params.remark42SiteID }},
    url: {{ .Permalink }},
    locale: {{ .Site.Language.Lang }}
  };
  (function(c) {
    /* Adjust the theme using the local-storage pref-theme if set */
    if (localStorage.getItem("pref-theme") === "dark") {
      remark_config.theme = "dark";
    } else if (localStorage.getItem("pref-theme") === "light") {
      remark_config.theme = "light";
    }
    /* Add remark42 widget */
    for(var i = 0; i < c.length; i++){
      var d = document, s = d.createElement('script');
      s.src = remark_config.host + '/web/' + c[i] +'.js';
      s.defer = true;
      (d.head || d.body).appendChild(s);
    }
  })(remark_config.components || ['embed']);
</script>

In development I use it with anonymous comments enabled, but to avoid SPAM the production site uses social logins (for now I’ve only enabled GitHub & Google; if someone requests additional services I’ll check them, but those were the easy ones for me initially).
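For reference, the production side of that involves remark42 variables along these lines (a sketch with placeholder values, not my real env.prod; the client IDs and secrets come from the respective OAuth application registrations):

# Disable anonymous comments and use social logins instead
AUTH_ANON=false
AUTH_GITHUB_CID=<github-oauth-client-id>
AUTH_GITHUB_CSEC=<github-oauth-client-secret>
AUTH_GOOGLE_CID=<google-oauth-client-id>
AUTH_GOOGLE_CSEC=<google-oauth-client-secret>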

To support theme switching with remark42 I’ve also added the following inside the layouts/partials/extend_footer.html file:

{{- if (not site.Params.disableThemeToggle) }}
<script>
/* Function to change theme when the toggle button is pressed */
document.getElementById("theme-toggle").addEventListener("click", () => {
  if (typeof window.REMARK42 != "undefined") {
    if (document.body.className.includes('dark')) {
      window.REMARK42.changeTheme('light');
    } else {
      window.REMARK42.changeTheme('dark');
    }
  }
});
</script>
{{- end }}

With this code, if the theme-toggle button is pressed we change the remark42 theme before the PaperMod one (that’s only needed here; on page loads the remark42 theme is synced with the main one using the code from the layouts/partials/comments.html shown earlier).

Development setup

To preview the site on my laptop I’m using docker-compose with the following configuration:

docker-compose.yaml
version: "2"
services:
  hugo:
    build:
      context: ./docker/hugo-adoc
      dockerfile: ./Dockerfile
    image: sto/hugo-adoc
    container_name: hugo-adoc-blogops
    restart: always
    volumes:
      - .:/documents
    command: server --bind 0.0.0.0 -D -F
    user: ${APP_UID}:${APP_GID}
  nginx:
    image: nginx:latest
    container_name: nginx-blogops
    restart: always
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    ports:
      -  1313:1313
  remark42:
    build:
      context: ./docker/remark42
      dockerfile: ./Dockerfile
    image: sto/remark42
    container_name: remark42-blogops
    restart: always
    env_file:
      - ./.env
      - ./remark42/env.dev
    volumes:
      - ./remark42/var.dev:/srv/var

To run it properly we have to create the .env file with the current user ID and GID on the variables APP_UID and APP_GID (if we don’t do it the files can end up being owned by a user that is not the same as the one running the services):

$ echo "APP_UID=$(id -u)\nAPP_GID=$(id -g)" > .env

The Dockerfile used to generate the sto/hugo-adoc image is:

Dockerfile
FROM asciidoctor/docker-asciidoctor:latest
RUN gem install --no-document asciidoctor-html5s &&\
 apk update && apk add --no-cache curl libc6-compat &&\
 repo_path="gohugoio/hugo" &&\
 api_url="https://api.github.com/repos/$repo_path/releases/latest" &&\
 download_url="$(\
  curl -sL "$api_url" |\
  sed -n "s/^.*download_url\": \"\\(.*.extended.*Linux-64bit.tar.gz\)\"/\1/p"\
 )" &&\
 curl -sL "$download_url" -o /tmp/hugo.tgz &&\
 tar xf /tmp/hugo.tgz hugo &&\
 install hugo /usr/bin/ &&\
 rm -f hugo /tmp/hugo.tgz &&\
 /usr/bin/hugo version &&\
 apk del curl && rm -rf /var/cache/apk/*
# Expose port for live server
EXPOSE 1313
ENTRYPOINT ["/usr/bin/hugo"]
CMD [""]

If you review it you will see that I’m using the docker-asciidoctor image as the base; the idea is that this image has all I need to work with asciidoctor, and to use hugo I only need to download the binary from its latest release on GitHub (as the image is based on alpine we also need to install the libc6-compat package, but once that is done things have been working fine for me so far).

The image does not launch the server by default because I don’t want it to; in fact I use the same docker-compose.yml file to publish the site in production, simply calling the container without the arguments defined in the compose file (see later).

When running the containers with docker-compose up (or docker compose up if you have the docker-compose-plugin package installed) we also launch an nginx container and the remark42 service so we can test everything together.
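For example, the whole development stack can be brought up and checked like this (the checkout path is a placeholder; service names and ports are the ones defined in the compose file above):

$ cd /path/to/blogops            # the site repository checkout
$ docker compose up -d           # start hugo, nginx and remark42 in the background
$ docker compose logs -f hugo    # follow the hugo live-reload output

The site is then reachable at http://localhost:1313/ through the nginx proxy.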

The Dockerfile for the remark42 image is the original one with an updated version of the init.sh script:

Dockerfile
FROM umputun/remark42:latest
COPY init.sh /init.sh

The updated init.sh is similar to the original, but allows us to use an APP_GID variable and updates the /etc/group file of the container so the files get the right user and group (with the original script the group is always 1001):

init.sh
#!/sbin/dinit /bin/sh

uid="$(id -u)"

if [ "${uid}" -eq "0" ]; then
  echo "init container"

  # set container's time zone
  cp "/usr/share/zoneinfo/${TIME_ZONE}" /etc/localtime
  echo "${TIME_ZONE}" >/etc/timezone
  echo "set timezone ${TIME_ZONE} ($(date))"

  # set UID & GID for the app
  if [ "${APP_UID}" ] || [ "${APP_GID}" ]; then
    [ "${APP_UID}" ] || APP_UID="1001"
    [ "${APP_GID}" ] || APP_GID="${APP_UID}"
    echo "set custom APP_UID=${APP_UID} & APP_GID=${APP_GID}"
    sed -i "s/^app:x:1001:1001:/app:x:${APP_UID}:${APP_GID}:/" /etc/passwd
    sed -i "s/^app:x:1001:/app:x:${APP_GID}:/" /etc/group
  else
    echo "custom APP_UID and/or APP_GID not defined, using 1001:1001"
  fi
  chown -R app:app /srv /home/app
fi

echo "prepare environment"

# replace {% REMARK_URL %} by content of REMARK_URL variable
find /srv -regex '.*\.\(html\|js\|mjs\)$' -print \
  -exec sed -i "s|{% REMARK_URL %}|${REMARK_URL}|g" {} \;

if [ -n "${SITE_ID}" ]; then
  #replace "site_id: 'remark'" by SITE_ID
  sed -i "s|'remark'|'${SITE_ID}'|g" /srv/web/*.html
fi

echo "execute \"$*\""
if [ "${uid}" -eq "0" ]; then
  exec su-exec app "$@"
else
  exec "$@"
fi

The environment file used with remark42 for development is quite minimal:

env.dev
TIME_ZONE=Europe/Madrid
REMARK_URL=http://localhost:1313/remark42
SITE=blogops
SECRET=123456
ADMIN_SHARED_ID=sto
AUTH_ANON=true
EMOJI=true

And the nginx/default.conf file used to publish the service locally is simple too:

default.conf
server { 
 listen 1313;
 server_name localhost;
 location / {
    proxy_pass http://hugo:1313;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
 }
 location /remark42/ {
    rewrite /remark42/(.*) /$1 break;
    proxy_pass http://remark42:8080/;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}

Production setup

The VM where I’m publishing the blog runs Debian GNU/Linux and uses binaries from local packages and applications packaged inside containers.

To run the containers I’m using docker-ce (I could have used podman instead, but I already had it installed on the machine, so I stayed with it).

The binaries used on this project are included on the following packages from the main Debian repository:

  • git to clone & pull the repository,
  • jq to parse json files from shell scripts,
  • json2file-go to save the webhook messages to files,
  • inotify-tools to detect when new files are stored by json2file-go and launch scripts to process them,
  • nginx to publish the site using HTTPS and work as proxy for json2file-go and remark42 (I run it using a container),
  • task-spooler to queue the scripts that update the deployment.

And I’m using docker and docker compose from the debian packages on the docker repository:

  • docker-ce to run the containers,
  • docker-compose-plugin to run docker compose (it is a plugin, so no - in the name).

Repository checkout

To manage the git repository I’ve created a deploy key, added it to gitea and cloned the project into the /srv/blogops path (that directory is owned by a regular user that has permission to run docker, as I said before).
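Roughly, that looks like the following (a sketch: the key path and the SSH clone URL are assumptions, adjust them to your gitea instance and repository):

# Generate a passphrase-less key and register the public part as a deploy key in gitea
ssh-keygen -t ed25519 -N '' -f ~/.ssh/blogops_deploy
cat ~/.ssh/blogops_deploy.pub
# Clone the repository on the publishing VM using that key
GIT_SSH_COMMAND="ssh -i ~/.ssh/blogops_deploy" \
  git clone git@gitea.mixinet.net:mixinet/blogops.git /srv/blogops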

Compiling the site with hugo

To compile the site we are using the docker-compose.yml file seen before; to be able to run it we first build the container images, and once we have them we launch hugo using docker compose run:

$ cd /srv/blogops
$ git pull
$ docker compose build
$ if [ -d "./public" ]; then rm -rf ./public; fi
$ docker compose run hugo --

The compilation leaves the static HTML on /srv/blogops/public (we remove the directory first because hugo does not clean the destination folder as jekyll does).

The deploy script re-generates the site as described and moves the public directory to its final place for publishing.

Running remark42 with docker

On the /srv/blogops/remark42 folder I have the following docker-compose.yml:

docker-compose.yml
version: "2"
services:
  remark42:
    build:
      context: ../docker/remark42
      dockerfile: ./Dockerfile
    image: sto/remark42
    env_file:
      - ../.env
      - ./env.prod
    container_name: remark42
    restart: always
    volumes:
      - ./var.prod:/srv/var
    ports:
      - 127.0.0.1:8042:8080

The ../.env file is loaded to get the APP_UID and APP_GID variables that are used by my version of the init.sh script to adjust file permissions, while the env.prod file contains the rest of the settings for remark42, including the social network tokens (see the remark42 documentation for the available parameters; I don’t include my configuration here because some of the values are secrets).

Nginx configuration

The nginx configuration for the blogops.mixinet.net site is as simple as:

server {
  listen 443 ssl http2;
  server_name blogops.mixinet.net;
  ssl_certificate /etc/letsencrypt/live/blogops.mixinet.net/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/blogops.mixinet.net/privkey.pem;
  include /etc/letsencrypt/options-ssl-nginx.conf;
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
  access_log /var/log/nginx/blogops.mixinet.net-443.access.log;
  error_log  /var/log/nginx/blogops.mixinet.net-443.error.log;
  root /srv/blogops/nginx/public_html;
  location / {
    try_files $uri $uri/ =404;
  }
  include /srv/blogops/nginx/remark42.conf;
}
server {
  listen 80 ;
  listen [::]:80 ;
  server_name blogops.mixinet.net;
  access_log /var/log/nginx/blogops.mixinet.net-80.access.log;
  error_log  /var/log/nginx/blogops.mixinet.net-80.error.log;
  if ($host = blogops.mixinet.net) {
    return 301 https://$host$request_uri;
  }
  return 404;
}

In this configuration the certificates are managed by certbot and the server root directory is /srv/blogops/nginx/public_html, not /srv/blogops/public; the reason is that I want to be able to compile the site without affecting the running one: the deployment script generates the site on /srv/blogops/public and, if all works well, we rename folders to do the switch, making the change feel almost atomic.
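Condensed, that swap is just two renames (the same logic the webhook script shown later performs):

# Keep the currently served tree around with a timestamp suffix ...
TS="$(date +%Y%m%d-%H%M%S)"
mv /srv/blogops/nginx/public_html "/srv/blogops/nginx/public_html-$TS"
# ... and put the freshly compiled site in its place
mv /srv/blogops/public /srv/blogops/nginx/public_html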

json2file-go configuration

As I have a working WireGuard VPN between the machine running gitea at my home and the VM where the blog is served, I’m going to configure json2file-go to listen for connections on a high port, using a self-signed certificate and an IP address that is only reachable through the VPN.

To do it we create a systemd socket to run json2file-go and adjust its configuration to listen on a private IP (we use the FreeBind option on its definition to be able to launch the service even when the IP is not available, that is, when the VPN is down).

The following script can be used to set up the json2file-go configuration:

setup-json2file.sh
#!/bin/sh

set -e

# ---------
# VARIABLES
# ---------

BASE_DIR="/srv/blogops/webhook"
J2F_DIR="$BASE_DIR/json2file"
TLS_DIR="$BASE_DIR/tls"

J2F_SERVICE_NAME="json2file-go"
J2F_SERVICE_DIR="/etc/systemd/system/json2file-go.service.d"
J2F_SERVICE_OVERRIDE="$J2F_SERVICE_DIR/override.conf"
J2F_SOCKET_DIR="/etc/systemd/system/json2file-go.socket.d"
J2F_SOCKET_OVERRIDE="$J2F_SOCKET_DIR/override.conf"

J2F_BASEDIR_FILE="/etc/json2file-go/basedir"
J2F_DIRLIST_FILE="/etc/json2file-go/dirlist"
J2F_CRT_FILE="/etc/json2file-go/certfile"
J2F_KEY_FILE="/etc/json2file-go/keyfile"
J2F_CRT_PATH="$TLS_DIR/crt.pem"
J2F_KEY_PATH="$TLS_DIR/key.pem"

# ----
# MAIN
# ----

# Install packages used with json2file for the blogops site
sudo apt update
sudo apt install -y json2file-go uuid
if [ -z "$(type mkcert)" ]; then
  sudo apt install -y mkcert
fi
sudo apt clean

# Configuration file values
J2F_USER="$(id -u)"
J2F_GROUP="$(id -g)"
J2F_DIRLIST="blogops:$(uuid)"
J2F_LISTEN_STREAM="172.31.31.1:4443"

# Configure json2file
[ -d "$J2F_DIR" ] || mkdir "$J2F_DIR"
sudo sh -c "echo '$J2F_DIR' >'$J2F_BASEDIR_FILE'"
[ -d "$TLS_DIR" ] || mkdir "$TLS_DIR"
if [ ! -f "$J2F_CRT_PATH" ] || [ ! -f "$J2F_KEY_PATH" ]; then
  mkcert -cert-file "$J2F_CRT_PATH" -key-file "$J2F_KEY_PATH" "$(hostname -f)"
fi
sudo sh -c "echo '$J2F_CRT_PATH' >'$J2F_CRT_FILE'"
sudo sh -c "echo '$J2F_KEY_PATH' >'$J2F_KEY_FILE'"
sudo sh -c "cat >'$J2F_DIRLIST_FILE'" <<EOF
$(echo "$J2F_DIRLIST" | tr ';' '\n')
EOF

# Service override
[ -d "$J2F_SERVICE_DIR" ] || sudo mkdir "$J2F_SERVICE_DIR"
sudo sh -c "cat >'$J2F_SERVICE_OVERRIDE'" <<EOF
[Service]
User=$J2F_USER
Group=$J2F_GROUP
EOF

# Socket override
[ -d "$J2F_SOCKET_DIR" ] || sudo mkdir "$J2F_SOCKET_DIR"
sudo sh -c "cat >'$J2F_SOCKET_OVERRIDE'" <<EOF
[Socket]
# Set FreeBind to listen on missing addresses (the VPN can be down sometimes)
FreeBind=true
# Set ListenStream to nothing to clear its value and add the new value later
ListenStream=
ListenStream=$J2F_LISTEN_STREAM
EOF

# Restart and enable service
sudo systemctl daemon-reload
sudo systemctl stop "$J2F_SERVICE_NAME"
sudo systemctl start "$J2F_SERVICE_NAME"
sudo systemctl enable "$J2F_SERVICE_NAME"

# ----
# vim: ts=2:sw=2:et:ai:sts=2
Warning:

The script uses mkcert to create the temporary certificates; to install the package on bullseye the backports repository must be available.
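On bullseye that means enabling backports before running the script, something along these lines (a sketch, assuming the default Debian mirror):

echo "deb http://deb.debian.org/debian bullseye-backports main" |
  sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update
sudo apt install -y -t bullseye-backports mkcert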

Gitea configuration

To make gitea use our json2file-go server we go to the project and open the hooks/gitea/new page; once there we create a new webhook of type gitea, set the target URL to https://172.31.31.1:4443/blogops and put in the secret field the token generated with uuid by the setup script:

sed -n -e 's/blogops://p' /etc/json2file-go/dirlist

The rest of the settings can be left as they are:

  • Trigger on: Push events
  • Branch filter: *
Warning:

We are using an internal IP and a self-signed certificate; that means we have to make sure that the webhook section of our gitea server’s app.ini allows calls to that IP and skips the TLS verification (you can see the available options in the gitea documentation).

The [webhook] section of my server looks like this:

[webhook]
ALLOWED_HOST_LIST=private
SKIP_TLS_VERIFY=true

Once we have the webhook configured we can try it, and if it works our json2file server will store the file in the /srv/blogops/webhook/json2file/blogops/ folder.

The json2file spooler script

With the previous configuration our system is ready to receive webhook calls from gitea and store the messages on files, but we have to do something to process those files once they are saved in our machine.

An option could be to use a cronjob to look for new files, but we can do better on Linux using inotify: we will use the inotifywait command from inotify-tools to watch the json2file output directory and execute a script each time a new file is moved inside it or closed after writing (IN_CLOSE_WRITE and IN_MOVED_TO events).

To avoid concurrency problems we are going to use task-spooler to launch the scripts that process the webhooks using a queue of length 1, so they are executed one by one in a FIFO queue.
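To peek at that queue from a shell you can point tsp at the same spool directory the script uses (this assumes tsp keeps its socket under $TMPDIR, which is what the TMPDIR override in the spooler script below relies on):

# List queued, running and recently finished webhook jobs
TMPDIR=/srv/blogops/webhook/tsp tsp
# Show the output of a given job id (0 in this example)
TMPDIR=/srv/blogops/webhook/tsp tsp -c 0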

The spooler script is this:

blogops-spooler.sh
#!/bin/sh

set -e

# ---------
# VARIABLES
# ---------

BASE_DIR="/srv/blogops/webhook"
BIN_DIR="$BASE_DIR/bin"
TSP_DIR="$BASE_DIR/tsp"

WEBHOOK_COMMAND="$BIN_DIR/blogops-webhook.sh"

# ---------
# FUNCTIONS
# ---------

queue_job() {
  echo "Queuing job to process file '$1'"
  TMPDIR="$TSP_DIR" TS_SLOTS="1" TS_MAXFINISHED="10" \
    tsp -n "$WEBHOOK_COMMAND" "$1"
}

# ----
# MAIN
# ----

INPUT_DIR="$1"
if [ ! -d "$INPUT_DIR" ]; then
  echo "Input directory '$INPUT_DIR' does not exist, aborting!"
  exit 1
fi

[ -d "$TSP_DIR" ] || mkdir "$TSP_DIR"

echo "Processing existing files under '$INPUT_DIR'"
find "$INPUT_DIR" -type f | sort | while read -r _filename; do
  queue_job "$_filename"
done

# Use inotifywait to process new files
echo "Watching for new files under '$INPUT_DIR'"
inotifywait -q -m -e close_write,moved_to --format "%w%f" -r "$INPUT_DIR" |
  while read -r _filename; do
    queue_job "$_filename"
  done

# ----
# vim: ts=2:sw=2:et:ai:sts=2

To run it as a daemon we install it as a systemd service using the following script:

setup-spooler.sh
#!/bin/sh

set -e

# ---------
# VARIABLES
# ---------

BASE_DIR="/srv/blogops/webhook"
BIN_DIR="$BASE_DIR/bin"
J2F_DIR="$BASE_DIR/json2file"

SPOOLER_COMMAND="$BIN_DIR/blogops-spooler.sh '$J2F_DIR'"
SPOOLER_SERVICE_NAME="blogops-j2f-spooler"
SPOOLER_SERVICE_FILE="/etc/systemd/system/$SPOOLER_SERVICE_NAME.service"

# Configuration file values
J2F_USER="$(id -u)"
J2F_GROUP="$(id -g)"

# ----
# MAIN
# ----

# Install packages used with the webhook processor
sudo apt update
sudo apt install -y inotify-tools jq task-spooler
sudo apt clean

# Configure process service
sudo sh -c "cat > $SPOOLER_SERVICE_FILE" <<EOF
[Install]
WantedBy=multi-user.target
[Unit]
Description=json2file processor for $J2F_USER
After=docker.service
[Service]
Type=simple
User=$J2F_USER
Group=$J2F_GROUP
ExecStart=$SPOOLER_COMMAND
EOF

# Restart and enable service
sudo systemctl daemon-reload
sudo systemctl stop "$SPOOLER_SERVICE_NAME" || true
sudo systemctl start "$SPOOLER_SERVICE_NAME"
sudo systemctl enable "$SPOOLER_SERVICE_NAME"

# ----
# vim: ts=2:sw=2:et:ai:sts=2

The gitea webhook processor

Finally, the script that processes the JSON files does the following:

  1. First, it checks if the repository and branch are right,
  2. Then, it fetches and checks out the commit referenced on the JSON file,
  3. Once the files are updated, it compiles the site using hugo with docker compose,
  4. If the compilation succeeds, the script renames directories to swap the old version of the site for the new one.

If there is a failure the script aborts, but before doing so (and also when the swap succeeds) the system sends an email to the configured address and/or the user that pushed the updates to the repository, with a log of what happened.

The current script is this one:

blogops-webhook.sh
#!/bin/sh

set -e

# ---------
# VARIABLES
# ---------

# Values
REPO_REF="refs/heads/main"
REPO_CLONE_URL="https://gitea.mixinet.net/mixinet/blogops.git"

MAIL_PREFIX="[BLOGOPS-WEBHOOK] "
# Address that gets all messages, leave it empty if not wanted
MAIL_TO_ADDR="blogops@mixinet.net"
# If the following variable is set to 'true' the pusher gets mail on failures
MAIL_ERRFILE="false"
# If the following variable is set to 'true' the pusher gets mail on success
MAIL_LOGFILE="false"
# gitea's conf/app.ini value of NO_REPLY_ADDRESS, it is used for email domains
# when the KeepEmailPrivate option is enabled for a user
NO_REPLY_ADDRESS="noreply.example.org"

# Directories
BASE_DIR="/srv/blogops"

PUBLIC_DIR="$BASE_DIR/public"
NGINX_BASE_DIR="$BASE_DIR/nginx"
PUBLIC_HTML_DIR="$NGINX_BASE_DIR/public_html"

WEBHOOK_BASE_DIR="$BASE_DIR/webhook"
WEBHOOK_SPOOL_DIR="$WEBHOOK_BASE_DIR/spool"
WEBHOOK_ACCEPTED="$WEBHOOK_SPOOL_DIR/accepted"
WEBHOOK_DEPLOYED="$WEBHOOK_SPOOL_DIR/deployed"
WEBHOOK_REJECTED="$WEBHOOK_SPOOL_DIR/rejected"
WEBHOOK_TROUBLED="$WEBHOOK_SPOOL_DIR/troubled"
WEBHOOK_LOG_DIR="$WEBHOOK_SPOOL_DIR/log"

# Files
TODAY="$(date +%Y%m%d)"
OUTPUT_BASENAME="$(date +%Y%m%d-%H%M%S.%N)"
WEBHOOK_LOGFILE_PATH="$WEBHOOK_LOG_DIR/$OUTPUT_BASENAME.log"
WEBHOOK_ACCEPTED_JSON="$WEBHOOK_ACCEPTED/$OUTPUT_BASENAME.json"
WEBHOOK_ACCEPTED_LOGF="$WEBHOOK_ACCEPTED/$OUTPUT_BASENAME.log"
WEBHOOK_REJECTED_TODAY="$WEBHOOK_REJECTED/$TODAY"
WEBHOOK_REJECTED_JSON="$WEBHOOK_REJECTED_TODAY/$OUTPUT_BASENAME.json"
WEBHOOK_REJECTED_LOGF="$WEBHOOK_REJECTED_TODAY/$OUTPUT_BASENAME.log"
WEBHOOK_DEPLOYED_TODAY="$WEBHOOK_DEPLOYED/$TODAY"
WEBHOOK_DEPLOYED_JSON="$WEBHOOK_DEPLOYED_TODAY/$OUTPUT_BASENAME.json"
WEBHOOK_DEPLOYED_LOGF="$WEBHOOK_DEPLOYED_TODAY/$OUTPUT_BASENAME.log"
WEBHOOK_TROUBLED_TODAY="$WEBHOOK_TROUBLED/$TODAY"
WEBHOOK_TROUBLED_JSON="$WEBHOOK_TROUBLED_TODAY/$OUTPUT_BASENAME.json"
WEBHOOK_TROUBLED_LOGF="$WEBHOOK_TROUBLED_TODAY/$OUTPUT_BASENAME.log"

# Query to get variables from a gitea webhook json
ENV_VARS_QUERY="$(
  printf "%s" \
    '(.           | @sh "gt_ref=\(.ref);"),' \
    '(.           | @sh "gt_after=\(.after);"),' \
    '(.repository | @sh "gt_repo_clone_url=\(.clone_url);"),' \
    '(.repository | @sh "gt_repo_name=\(.name);"),' \
    '(.pusher     | @sh "gt_pusher_full_name=\(.full_name);"),' \
    '(.pusher     | @sh "gt_pusher_email=\(.email);")'
)"

# ---------
# Functions
# ---------

webhook_log() {
  echo "$(date -R) $*" >>"$WEBHOOK_LOGFILE_PATH"
}

webhook_check_directories() {
  for _d in "$WEBHOOK_SPOOL_DIR" "$WEBHOOK_ACCEPTED" "$WEBHOOK_DEPLOYED" \
    "$WEBHOOK_REJECTED" "$WEBHOOK_TROUBLED" "$WEBHOOK_LOG_DIR"; do
    [ -d "$_d" ] || mkdir "$_d"
  done
}

webhook_clean_directories() {
  # Try to remove empty dirs
  for _d in "$WEBHOOK_ACCEPTED" "$WEBHOOK_DEPLOYED" "$WEBHOOK_REJECTED" \
    "$WEBHOOK_TROUBLED" "$WEBHOOK_LOG_DIR" "$WEBHOOK_SPOOL_DIR"; do
    if [ -d "$_d" ]; then
      rmdir "$_d" 2>/dev/null || true
    fi
  done
}

webhook_accept() {
  webhook_log "Accepted: $*"
  mv "$WEBHOOK_JSON_INPUT_FILE" "$WEBHOOK_ACCEPTED_JSON"
  mv "$WEBHOOK_LOGFILE_PATH" "$WEBHOOK_ACCEPTED_LOGF"
  WEBHOOK_LOGFILE_PATH="$WEBHOOK_ACCEPTED_LOGF"
}

webhook_reject() {
  [ -d "$WEBHOOK_REJECTED_TODAY" ] || mkdir "$WEBHOOK_REJECTED_TODAY"
  webhook_log "Rejected: $*"
  if [ -f "$WEBHOOK_JSON_INPUT_FILE" ]; then
    mv "$WEBHOOK_JSON_INPUT_FILE" "$WEBHOOK_REJECTED_JSON"
  fi
  mv "$WEBHOOK_LOGFILE_PATH" "$WEBHOOK_REJECTED_LOGF"
  exit 0
}

webhook_deployed() {
  [ -d "$WEBHOOK_DEPLOYED_TODAY" ] || mkdir "$WEBHOOK_DEPLOYED_TODAY"
  webhook_log "Deployed: $*"
  mv "$WEBHOOK_ACCEPTED_JSON" "$WEBHOOK_DEPLOYED_JSON"
  mv "$WEBHOOK_ACCEPTED_LOGF" "$WEBHOOK_DEPLOYED_LOGF"
  WEBHOOK_LOGFILE_PATH="$WEBHOOK_DEPLOYED_LOGF"
}

webhook_troubled() {
  [ -d "$WEBHOOK_TROUBLED_TODAY" ] || mkdir "$WEBHOOK_TROUBLED_TODAY"
  webhook_log "Troubled: $*"
  mv "$WEBHOOK_ACCEPTED_JSON" "$WEBHOOK_TROUBLED_JSON"
  mv "$WEBHOOK_ACCEPTED_LOGF" "$WEBHOOK_TROUBLED_LOGF"
  WEBHOOK_LOGFILE_PATH="$WEBHOOK_TROUBLED_LOGF"
}

print_mailto() {
  _addr="$1"
  _user_email=""
  # Add the pusher email address unless it is from the domain NO_REPLY_ADDRESS,
  # which should match the value of that variable on the gitea 'app.ini' (it
  # is the domain used for emails when the user hides it).
  # shellcheck disable=SC2154
  if [ -n "${gt_pusher_email##*@"${NO_REPLY_ADDRESS}"}" ] &&
    [ -z "${gt_pusher_email##*@*}" ]; then
    _user_email="\"$gt_pusher_full_name <$gt_pusher_email>\""
  fi
  if [ "$_addr" ] && [ "$_user_email" ]; then
    echo "$_addr,$_user_email"
  elif [ "$_user_email" ]; then
    echo "$_user_email"
  elif [ "$_addr" ]; then
    echo "$_addr"
  fi
}

mail_success() {
  to_addr="$MAIL_TO_ADDR"
  if [ "$MAIL_LOGFILE" = "true" ]; then
    to_addr="$(print_mailto "$to_addr")"
  fi
  if [ "$to_addr" ]; then
    # shellcheck disable=SC2154
    subject="OK - $gt_repo_name updated to commit '$gt_after'"
    mail -s "${MAIL_PREFIX}${subject}" "$to_addr" \
      <"$WEBHOOK_LOGFILE_PATH"
  fi
}

mail_failure() {
  to_addr="$MAIL_TO_ADDR"
  if [ "$MAIL_ERRFILE" = true ]; then
    to_addr="$(print_mailto "$to_addr")"
  fi
  if [ "$to_addr" ]; then
    # shellcheck disable=SC2154
    subject="KO - $gt_repo_name update FAILED for commit '$gt_after'"
    mail -s "${MAIL_PREFIX}${subject}" "$to_addr" \
      <"$WEBHOOK_LOGFILE_PATH"
  fi
}

# ----
# MAIN
# ----
# Check directories
webhook_check_directories

# Go to the base directory
cd "$BASE_DIR"

# Check if the file exists
WEBHOOK_JSON_INPUT_FILE="$1"
if [ ! -f "$WEBHOOK_JSON_INPUT_FILE" ]; then
  webhook_reject "Input arg '$1' is not a file, aborting"
fi

# Parse the file
webhook_log "Processing file '$WEBHOOK_JSON_INPUT_FILE'"
eval "$(jq -r "$ENV_VARS_QUERY" "$WEBHOOK_JSON_INPUT_FILE")"

# Check that the repository clone url is right
# shellcheck disable=SC2154
if [ "$gt_repo_clone_url" != "$REPO_CLONE_URL" ]; then
  webhook_reject "Wrong repository: '$gt_clone_url'"
fi

# Check that the branch is the right one
# shellcheck disable=SC2154
if [ "$gt_ref" != "$REPO_REF" ]; then
  webhook_reject "Wrong repository ref: '$gt_ref'"
fi

# Accept the file
# shellcheck disable=SC2154
webhook_accept "Processing '$gt_repo_name'"

# Update the checkout
ret="0"
git fetch >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Repository fetch failed"
  mail_failure
fi
# shellcheck disable=SC2154
git checkout "$gt_after" >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Repository checkout failed"
  mail_failure
fi

# Remove the build dir if present
if [ -d "$PUBLIC_DIR" ]; then
  rm -rf "$PUBLIC_DIR"
fi

# Build site
docker compose run hugo -- >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
# go back to the main branch
git switch main && git pull
# Fail if public dir was missing
if [ "$ret" -ne "0" ] || [ ! -d "$PUBLIC_DIR" ]; then
  webhook_troubled "Site build failed"
  mail_failure
fi

# Remove old public_html copies
webhook_log 'Removing old site versions, if present'
find $NGINX_BASE_DIR -mindepth 1 -maxdepth 1 -name 'public_html-*' -type d \
  -exec rm -rf {} \; >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Removal of old site versions failed"
  mail_failure
fi
# Switch site directory
TS="$(date +%Y%m%d-%H%M%S)"
if [ -d "$PUBLIC_HTML_DIR" ]; then
  webhook_log "Moving '$PUBLIC_HTML_DIR' to '$PUBLIC_HTML_DIR-$TS'"
  mv "$PUBLIC_HTML_DIR" "$PUBLIC_HTML_DIR-$TS" >>"$WEBHOOK_LOGFILE_PATH" 2>&1 ||
    ret="$?"
fi
if [ "$ret" -eq "0" ]; then
  webhook_log "Moving '$PUBLIC_DIR' to '$PUBLIC_HTML_DIR'"
  mv "$PUBLIC_DIR" "$PUBLIC_HTML_DIR" >>"$WEBHOOK_LOGFILE_PATH" 2>&1 ||
    ret="$?"
fi
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Site switch failed"
  mail_failure
else
  webhook_deployed "Site deployed successfully"
  mail_success
fi

# ----
# vim: ts=2:sw=2:et:ai:sts=2

Worse Than Failure: Minor Revisions

In many places, driver's licenses work on a point system. As you commit infractions, you gain or lose points; when your point score hits a certain threshold, your insurance company raises your rates or you may even lose your driver's license. Where Christopher Walker lives, you start with twelve points, and each infraction takes a few away. Once a year, you have the option to attend a workshop on safe driving, where you can then regain a few of those points.

It's complicated and tedious, so several organizations, from the local department of motor vehicles to various insurance companies, have set up systems to manage this information. One of those organizations built a PHP application about fifteen years ago, and it gradually grew in cruft, complexity and confusion from that point forward. It works, but it's entirely unmaintainable.

So Christopher was hired to help upgrade it to something hopefully supportable. It's still in PHP, but it's redesigned to use some best practices, switch to Laravel as its framework, and basically be as modular and component-oriented as possible.

The real challenge was porting the existing data into the new system. The old schema was a mess. The "simple" problems were all around the fact that once upon a time the database only used ASCII but was eventually upgraded to use UTF-8, and however that was done, it left many characters like 'é' mangled into '‡' or '§'.

But all of that was nothing compared to the problems updating the revision history tables. The other developers had given up on the revision/audit history many years ago. Instead of providing detailed reports, they simply displayed "[username] changed this participant."

The application tracked an audit log, and it was fairly thorough. At first glance, it even seemed pretty sensible. It had a timestamp, an action code (like "USRUPDATE" or "USRCREATE"), a "detailsaction" which appeared to contain the new value of a modified field, and then a "request" which just seemed to log the raw SQL run to alter the table. That last one didn't seem important, so Christopher went ahead and started porting the old table to the new database.

That's when Christopher hit the first speed bump. Some of the records were sane, comprehensible audit logs. Some of them simply weren't. For some of them, the audit fields conveyed no information. For others, you needed to look at the request field and try and reconstruct what happened from the raw SQL. Except that was easier said than done: many of the queries in the audit log referenced tables and fields which no longer existed, or had been renamed at some point. By combing through the huge pile of data, Christopher was able to determine that there were only about 20 different ways those queries got deprecated, so it wasn't too hard to come up with a script that could translate them into the new architecture.

The other unusual edge case was that instead of storing SQL in the field, many records stored a condensed array representing the row that was altered, like:

a:23:{s:14:"participantsid";i:123456;s:5:"titre";s:8:"Monsieur";s:3:"nom";s:5:"[LAST_NAME]";s:6:"nom_jf";s:0:"";s:6:"prenom";s:6:"[FIRST_NAME]";s:10:"profession";s:1:"0";s:14:"naissance_date";s:10:"xxxx-xx-xx";s:14:"naissance_lieu";s:15:"STRASBOURG (67)";s:8:"adresse1";s:20:"[REDACTED]";s:8:"adresse2";s:0:"";s:11:"code_postal";s:5:"12345";s:5:"ville";s:9:"[REDACTED]";s:4:"tel1";s:14:"[REDACTED]";s:4:"tel2";s:0:"";s:5:"email";s:24:"[REDACTED]@gmail.com";s:6:"membre";s:0:"";s:15:"immatriculation";s:0:"";s:2:"ac";s:3:"NON";s:12:"permisnumero";s:12:"[REDACTED]";s:10:"permisdate";s:10:"2019-01-21";s:10:"permislieu";s:9:"PREFET 67";s:8:"remarque";s:0:"";s:14:"naissance_pays";s:0:"";}

That wasn't terrible to manage, aside from the fact that the dumps didn't actually reference existing tables and fields. Christopher could figure out what the replacement tables and fields were and map the data back to actual audit log entries.
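Blobs in that shape are what PHP's serialize() produces, so the decoding step itself is cheap; the fiddly part is the renaming. Below is a minimal sketch of that step; the $fieldMap here is purely hypothetical, since the real mapping had to come from comparing the old and new schemas by hand.

<?php
// Decode one legacy "request" payload and translate old column names.
// $fieldMap is illustrative only; the real table came from reverse-engineering
// the old schema.
$fieldMap = [
    'participantsid' => 'participant_id',
    'nom'            => 'last_name',
    'prenom'         => 'first_name',
    'permisnumero'   => 'license_number',
];

function decodeLegacyAuditRow(string $payload, array $fieldMap): array
{
    $row = unserialize($payload); // the a:23:{...} blobs are plain serialize() output
    if (!is_array($row)) {
        throw new RuntimeException('Not a serialized PHP array');
    }
    $mapped = [];
    foreach ($row as $oldName => $value) {
        $newName = $fieldMap[$oldName] ?? $oldName; // keep unmapped columns as-is
        $mapped[$newName] = $value;
    }
    return $mapped;
}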

That got Christopher 90% of the way there. But 90% isn't all the way, and the last ten percent was going to take a lot more time, or perhaps prove impossible: the remaining audit log records stored queries that had nothing to do with the entity that was changed. Many of them weren't even modification statements.

For example, the audit log entry that seemed to be about updating a workshop's status from "active" to "cancelled" was purportedly done by this query: SELECT lieux.departement FROM lieux JOIN stages ON stages.lieuxid = lieux.lieuxid WHERE stages.types = 'PAP' AND stages.stagesid ='123456'.

Christopher summarizes:

I don't know who decided that this was a good idea or even that this made sense, but I do understand why one of the previous developers of the app decided that "[username] changed this participant." was going to be the only info given in the revisions history.


,

Planet DebianDirk Eddelbuettel: RcppAPT 0.0.9: Minor Update

A new version of the RcppAPT package with the R interface to the C++ library behind the awesome apt, apt-get, apt-cache, … commands and their cache powering Debian, Ubuntu and the like arrived on CRAN earlier today.

RcppAPT allows you to query the (Debian or Ubuntu) package dependency graph at will, with build-dependencies (if you have deb-src entries), reverse dependencies, and all other goodies. See the vignette and examples for illustrations.

This release updates the code for the Apt 2.5.0 release, which makes a cleaner distinction between public and private components of the API. We adjusted one access point to a pattern we already used and, while at it, simplified some of the transition from the pre-Apt 2.0.0 interface. No new features. The NEWS entries follow.

Changes in version 0.0.9 (2022-05-25)

  • Simplified and standardized to only use public API

  • No longer tests and accommodates pre-Apt 2.0 API

Courtesy of my CRANberries, there is also a diffstat report for this release. A bit more information about the package is available here as well as at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianEmmanuel Kasper: One of the strangest bugs I have ever seen on Linux

Networking starts when you log in as root, and stops when you log off!

SELinux messages can be ignored, I guess, but we can clearly see the devices being activated (it's a Linux bridge).

If you have an explanation, I am curious.

Cryptogram Malware-Infested Smart Card Reader

Brian Krebs has an interesting story of a smart ID card reader with a malware-infested Windows driver, and US government employees who inadvertently buy and use them.

But by all accounts, the potential attack surface here is enormous, as many federal employees clearly will purchase these readers from a myriad of online vendors when the need arises. Saicoo’s product listings, for example, are replete with comments from customers who self-state that they work at a federal agency (and several who reported problems installing drivers).

Cryptogram Manipulating Machine-Learning Systems through the Order of the Training Data

Yet another adversarial ML attack:

Most deep neural networks are trained by stochastic gradient descent. Now “stochastic” is a fancy Greek word for “random”; it means that the training data are fed into the model in random order.

So what happens if the bad guys can cause the order to be not random? You guessed it—all bets are off. Suppose for example a company or a country wanted to have a credit-scoring system that’s secretly sexist, but still be able to pretend that its training was actually fair. Well, they could assemble a set of financial data that was representative of the whole population, but start the model’s training on ten rich men and ten poor women drawn from that set, then let initialisation bias do the rest of the work.

Does this generalise? Indeed it does. Previously, people had assumed that in order to poison a model or introduce backdoors, you needed to add adversarial samples to the training data. Our latest paper shows that’s not necessary at all. If an adversary can manipulate the order in which batches of training data are presented to the model, they can undermine both its integrity (by poisoning it) and its availability (by causing training to be less effective, or take longer). This is quite general across models that use stochastic gradient descent.

Research paper.

Worse Than FailureCodeSOD: New Anti-Pattern Just Dropped

Linda discovered a new anti-pattern, helpfully explained with comments.

try {
    this.initializeConfig(this.configFile);
} catch (ADWException e) {
    // something went terrible wrong... but we go on, since
    // following errors will be thrown.
}

I'll call this the delayed catch pattern. We know something went wrong, and in this example, we seem to have a mildly specific idea of what went wrong, based on the exception type. We know that it's impossible for the program to continue, but we don't complain. We just ignore the exception and wait for somebody else to notice it, like the person who saw the mess in the breakroom but is pretending they didn't, so they don't have to clean it up. They'll wait until somebody else needs coffee more than they do, and then act surprised when they discover Carole from down the hall had to spend 30 minutes cleaning up a mess left by god knows who. "Oh no," they'd say with all the sincerity of a used-car salesman, "I walked by but didn't notice the mess. I would have totally stopped and helped if I'd have noticed! Anywho, let me just grab some coffee quick, I've got a meeting in five."
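For contrast, here is a minimal sketch of what that catch block could do instead, assuming the configuration really is required before anything else can work. ADWException comes from the original snippet; the wrapper exception is just one reasonable choice, not the application's actual error handling.

try {
    this.initializeConfig(this.configFile);
} catch (ADWException e) {
    // Configuration is mandatory: fail fast with context, instead of letting
    // later, unrelated errors surface first.
    throw new IllegalStateException(
        "Could not initialize configuration from " + this.configFile, e);
}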


,

Planet DebianBits from Debian: Debian welcomes the 2022 GSOC interns

GSoC logo

We are very excited to announce that Debian has selected three interns to work under mentorship on a variety of projects with us during the Google Summer of Code.

Here is the list of projects, interns, and details of the tasks to be performed.


Project: Android SDK Tools in Debian

  • Interns: Nkwuda Sunday Cletus and Raman Sarda

The deliverables of this project will mostly be finished packages submitted to Debian sid, both for new packages and updated packages. Whenever possible, we should also try to get patches submitted and merged upstream in the Android sources.


Project: Quality Assurance for Biological and Medical Applications inside Debian

  • Intern: Mohammed Bilal

Deliverables of the project: Continuous integration tests for all Debian Med applications (life sciences, medical imaging, others), Quality Assurance review and bug fixing.


Congratulations and welcome to all the interns!

The Google Summer of Code program is possible in Debian thanks to the efforts of Debian Developers and Debian Contributors who dedicate part of their free time to mentoring interns and to outreach tasks.

Join us and help extend Debian! You can follow the interns' weekly reports on the debian-outreach mailing-list, chat with us on our IRC channel or reach out to the individual projects' team mailing lists.

Worse Than FailureCodeSOD: Weakly Courses

Kerin inherited a scheduling application for a university. This application stored the scheduled days for a class in the database… as one comma-separated field. This was a problem for Kerin, who was hired to add predictive scheduling and classroom density measurements to the system.

This particular function was written to take that data and transform it for display. Sort of.

function getDaysEnrollment($DayStr){
    $Days = explode(',', $DayStr);
    $StrDay = "";
    if (sizeof($Days) > 0) {
        foreach ($Days as $day) {
            switch ($day) {
                case 'M':  $StrDay .= " Mon,"; break;
                case 'T':  $StrDay .= " Tue,"; break;
                case 'W':  $StrDay .= " Wen,"; break;
                case 'TH': $StrDay .= " Thu,"; break;
                case 'F':  $StrDay .= " Fri,"; break;
            }
            $StrDay = substr($StrDay, 0, -1);
        }
    }
    return $StrDay;
}

So, at its core, this function wants to loop through the list of days and convert them from short abbreviations, like "M", to longer abbreviations, like "Mon". It then keeps concatenating each one together, but with a twist: it strips the commas. $StrDay = substr($StrDay,0,-1); rips off the last character, which would be the comma. Except they hard-coded the comma into the strings they're concatenating in the first place. It's completely superfluous; there's no need for that, since they could simply have left the commas out.
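For comparison, a minimal sketch of the same transformation without the concatenate-then-strip dance. This is a hypothetical rewrite, not code from the application; it keeps the original's quirks, including the "Wen" spelling and the leading space in the result.

function getDaysEnrollmentSimple($DayStr)
{
    $names = ['M' => 'Mon', 'T' => 'Tue', 'W' => 'Wen', 'TH' => 'Thu', 'F' => 'Fri'];
    $out = [];
    foreach (explode(',', $DayStr) as $day) {
        if (isset($names[$day])) {
            $out[] = $names[$day];
        }
    }
    // The original ends up returning " Mon Tue ..." (leading space, no commas),
    // because the trailing comma is stripped again on every pass through the loop.
    return $out ? ' ' . implode(' ', $out) : '';
}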

According to Kerin, this isn't the worst thing in the code, but it is the "punchiest". I'll let Kerin explain some of the things in this codebase:

[I found this] nestled in amongst mountains of similarly-awful code, Slovenian-language comments, and - I'm being completely serious here - executing code loaded from a text file hosted on a remote, foreign-language MMORPG blog's domain.
If I were to hazard a guess, the remote code is probably insurance - there were a couple of other similar tricks, like a concealed PHP shell and an emailer that phones home with the current admin details if the IP address changes.


,

Cryptogram The Justice Department Will No Longer Charge Security Researchers with Criminal Hacking

Following a recent Supreme Court ruling, the Justice Department will no longer charge “good faith” security researchers with cybercrimes:

The policy for the first time directs that good-faith security research should not be charged. Good faith security research means accessing a computer solely for purposes of good-faith testing, investigation, and/or correction of a security flaw or vulnerability, where such activity is carried out in a manner designed to avoid any harm to individuals or the public, and where the information derived from the activity is used primarily to promote the security or safety of the class of devices, machines, or online services to which the accessed computer belongs, or those who use such devices, machines, or online services.

[…]

The new policy states explicitly the longstanding practice that “the department’s goals for CFAA enforcement are to promote privacy and cybersecurity by upholding the legal right of individuals, network owners, operators, and other persons to ensure the confidentiality, integrity, and availability of information stored in their information systems.” Accordingly, the policy clarifies that hypothetical CFAA violations that have concerned some courts and commentators are not to be charged. Embellishing an online dating profile contrary to the terms of service of the dating website; creating fictional accounts on hiring, housing, or rental websites; using a pseudonym on a social networking site that prohibits them; checking sports scores at work; paying bills at work; or violating an access restriction contained in a term of service are not themselves sufficient to warrant federal criminal charges. The policy focuses the department’s resources on cases where a defendant is either not authorized at all to access a computer or was authorized to access one part of a computer—such as one email account—and, despite knowing about that restriction, accessed a part of the computer to which his authorized access did not extend, such as other users’ emails.

News article.

Planet DebianArturo Borrero González: Toolforge Jobs Framework


This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.

This post continues the discussion of Toolforge updates as described in a previous post. Every non-trivial task performed in Toolforge (like executing a script or running a bot) should be dispatched to a job scheduling backend, which ensures that the job is run in a suitable place with sufficient resources.

Jobs can be scheduled synchronously or asynchronously, continuously, or simply executed once. The basic principle of running jobs is fairly straightforward:

  • You create a job from a submission server (usually login.toolforge.org).
  • The backend finds a suitable execution node to run the job on, and starts it once resources are available.
  • As it runs, the job will send output and errors to files until the job completes or is aborted.

So far, if a tool developer wanted to work with jobs, the Toolforge Grid Engine backend was the only suitable choice. This is despite the fact that Kubernetes supports this kind of workload natively. The truth is that we never prepared our Kubernetes environment to work with jobs. Luckily that has changed.

We no longer want to run Grid Engine

In a previous blog post we shared information about our desired future for Grid Engine in Toolforge. Our intention is to discontinue our usage of this technology.

Convenient way of running jobs on Toolforge Kubernetes

Some advanced Toolforge users really wanted to use Kubernetes. They were aware of the lack of abstractions or helpers, so they were forced to use the raw Kubernetes API. Eventually, they figured everything out and managed to succeed. The result of this move was in the form of docs on Wikitech and a few dozen jobs running on Kubernetes for the first time.

We were aware of this, and this initiative was much in sync with our ultimate goal: to promote Kubernetes over Grid Engine. We rolled up our sleeves and started thinking of a way to abstract and make it easy to run jobs without having to deal with lots of YAML and the raw Kubernetes API.

There is a precedent: the webservice command does exactly that. It hides all the details behind a simple command line interface to start/stop a web app running on Kubernetes. However, we wanted to go even further, be more flexible and prepare ourselves for more situations in the future: we decided to create a complete new REST API to wrap the jobs functionality in Toolforge Kubernetes. The Toolforge Jobs Framework was born.

Toolforge Jobs Framework components

The new framework is a small collection of components. As of this writing, we have three:

  • The REST API — responsible for creating/deleting/listing jobs on the Kubernetes system.
  • A command line interface — to interact with the REST API above.
  • An emailer — to notify users about their jobs activity in the Kubernetes system.

Toolforge jobs framework diagram

There were a couple of challenges that weren’t trivial to solve. The authentication and authorization against the Kubernetes API was one of them. The other was deciding on the semantics of the new REST API itself. If you are curious, we invite you to take a look at the documentation we have in wikitech.

Open beta phase

Once we gained some confidence with the new framework, in July 2021 we decided to start a beta phase. We suggested some advanced Toolforge users try out the new framework. We tracked this phase in Phabricator, where our collaborators quickly started reporting some early bugs, helping each other, and creating new feature requests.

Moreover, when we launched the Grid Engine migration from Debian 9 Stretch to Debian 10 Buster we took a step forward and started promoting the new jobs framework as a viable replacement for the grid. Some official documentation pages were created on wikitech as well.

As of this writing the framework continues in beta phase. We have solved basically all of the most important bugs, and we already started thinking on how to address the few feature requests that are missing.

We haven’t yet established the criteria for leaving the beta phase, but it would be good to have:

  • Critical bugs fixed and most feature requests addressed (or at least somehow planned).
  • Proper automated test coverage. We can do better at testing the different software components to ensure they are as bug-free as possible. This would also make sure that contributing changes is easy.
  • REST API swagger integration.
  • Deployment automation. Deploying the REST API and the emailer is tedious. This is tracked in Phabricator.
  • Documentation, documentation, documentation.

Limitations

One of the limitations we bear in mind since early on in the development process of this framework was the support for mixing different programming languages or runtime environments in the same job.

Solving this limitation is currently one of the WMCS team priorities, because this is one of the key workflows that was available on Grid Engine. The moment we address it, the framework adoption will grow, and it will pretty much enable the same workflows as in the grid, if not more advanced and featureful.

Stay tuned for more upcoming blog posts with additional information about Toolforge.

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.

Cryptogram The Onion on Google Map Surveillance

Worse Than FailureCodeSOD: Nullable or Not

Nullable types, at least in theory, make our code simpler and easier to maintain. If nothing else, we know when there's a risk of a null value, and can handle it with some grace. At least, that's how it works if we understand what they do.

Boaz's co-worker knows that nullables are valuable, but doesn't quite get it.

public ulong? ParentOpinionId
{
    get { return DbAdapter.Parent_Opinion; }
    set { DbAdapter.Parent_Opinion = value.Value; }
}

This is a pretty basic C# getter/setter, and it advertises itself as accepting nullable unsigned longs. And, sure enough, the database field it wraps also supports null values. So the signature advertises that it accepts nulls, the persistence layer supports nulls, but this setter doesn't.

Specifically, calling value.Value when value is null throws an exception. This could be solved easily by just… not trying so hard. DbAdapter.Parent_Opinion = value solves the problem perfectly. That's the thing about this: the developer did extra work to get to the wrong result.
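A minimal sketch of the version that actually honors the nullable signature (member names are taken from the snippet above; nothing else changes):

public ulong? ParentOpinionId
{
    get { return DbAdapter.Parent_Opinion; }
    set { DbAdapter.Parent_Opinion = value; } // a null value now flows through to the nullable column
}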


,

Planet DebianUlrike Uhlig: How do kids conceive the internet? - part 3

I received some feedback on the first part of interviews about the internet with children that I’d like to share publicly here. Thank you! Your thoughts and experiences are important to me!

In the first interview round there was this French girl.

Asked what she would change if she could, the 9-year-old girl advocated for a global usage limit on the internet in order to protect the human brain. Also, she said, her parents spend way too much time on their phones, and people should rather spend more time with their children.

To this bit, one person reacted saying that they first laughed when reading her proposal, but then felt extremely touched by it.

Another person reacted to the same bit of text:

That’s just brilliant. We spend so much time worrying about how the internet will affect children while overlooking how it has already affected us as parents. It actively harms our relationship with our children (keeping us distracted from their amazing life) and sets a bad example for them.

Too often, when we worry about children, we should look at our own behavior first. Until about that age (9-10+) at least, they are such a direct reflection of us that it’s frightening…

Yet another person reacted to the fact that many of the interviewees in the first round seemed to believe that the internet is immaterial, located somewhere in the air, while being at the same time omnipresent:

It reminds me of one time – about a dozen years ago, when i was still working closely with one of the city high schools – where i’d just had a terrible series of days, dealing with hardware failure, crappy service followthrough by the school’s ISP, and overheating in the server closet, and had basically stayed overnight at the school and just managed to get things back to mostly-functional before kids and teachers started showing up again.

That afternoon, i’d been asked by the teacher of a dystopian fiction class to join them for a discussion of Feed, which they’d just finished reading. i had read it the week before, and came to class prepared for their questions. (the book is about a near-future where kids have cybernetic implants and their society is basically on a runaway communications overload; not a bad Y[oung]A[dult] novel, really!)

The kids all knew me from around the school, but the teacher introduced my appearance in class as “one of the most Internet-connected people” and they wanted to ask me about whether i really thought the internet would “do this kind of thing” to our culture, which i think was the frame that the teacher had prepped them with. I asked them whether they thought the book was really about the Internet, or whether it was about mobile phones. Totally threw off the teacher’s lesson plans, i think, but we had a good discussion.

At one point, one of the kids asked me “if there was some kind of crazy disaster and all the humans died out, would the internet just keep running? what would happen on it if we were all gone?”

all of my labor – even that grueling week – was invisible to him! The internet was an immaterial thing, or if not immaterial, a force of nature, a thing that you accounted for the way you accounted for the weather, or traffic jams. It didn’t occur to him, even having just read a book that asked questions about what hyperconnectivity does to a culture (including grappling with issues of disparate access, effective discrimination based on who has the latest hardware, etc), it didn’t occur to him that this shit all works to the extent that it does because people make it go.

I felt lost trying to explain it to him, because where i wanted to get to with the class discussion was about how we might decide collectively to make it go somewhere else – that our contributions to it, and our labor to perpetuate it (or not) might actually help shape the future that the network helps us slide into. but he didn’t even see that human decisions or labor played a role it in at all, let alone a potentially directive role. We were really starting at square zero, which wasn’t his fault. Or the fault of his classmates that matter – but maybe a little bit of fault on the teacher, who i thought should have been emphasizing this more – but even the teacher clearly thought of the internet as a thing being done to us not as something we might actually drive one way or another. And she’s not even wrong – most people don’t have much control, just like most people can’t control the weather, even as our weather changes based on aggregate human activity.

I was quite impressed by seeing the internet perceived as a force of nature, so we continued this discussion a bit:

that whole story happened before we started talking about “the cloud”, but “the cloud” really reinforces this idea, i think. not that anyone actually thinks that “the cloud” is a literal cloud, but language shapes minds in subtle ways.

(Bold emphasis in the texts are mine.)

Thanks :) I’m happy and touched that these interviews prompted your wonderful reactions, and I hope that there’ll be more to come on this topic. I’m working on it!

Planet DebianSergio Talens-Oliag: New Blog

Welcome to my new Blog for Technical Stuff.

For a long time I had been planning to start publishing technical articles again, but to do so I wanted to replace my old blog, based on ikiwiki, with something more modern.

I’ve used Jekyll with GitLab Pages to build the Intranet of the ITI and to generate internal documentation sites on Agile Content, but, as happened with ikiwiki, I felt that things were kind of slow and not as easy to maintain as I would like.

So at Kyso (the company I work for right now) I switched to Hugo as the static site generator (I still use GitLab Pages to automate the deployment, though), but the contents are written in Markdown, while my personal preference is the Asciidoc format.

One thing I liked about Jekyll was that it was possible to use Asciidoctor to generate the HTML simply by using the Jekyll Asciidoc plugin (I even configured my site to generate PDF documents from .adoc files using the Asciidoctor PDF converter). Luckily for me, that is also possible with Hugo, so that is what I plan to use on this blog; in fact, this post is written in .adoc.

My plan is to start publishing articles about things I’m working on, to keep them documented for myself and, maybe, to be useful to someone else.

The general intention is to write about Container Orchestration (mainly Kubernetes), CI/CD tools (currently I’m using GitLab CE for that), System Administration (with Debian GNU/Linux as my preferred OS), and that sort of thing.

My next post will be about how I build, publish, and update the blog, but I probably will not finish it until next week, once the site is fully operational and the publishing system has been tested.

Spoiler Alert:

This is a personal site, so I’m using Gitea to host the code instead of GitLab.

To handle the deployment I’ve configured json2file-go to save the data sent by the hook calls and process it asynchronously using inotify-tools.

When a new file is detected, a script parses the JSON file using jq and builds and updates the site if appropriate.

David BrinCrypto is not a dog... or doge... or is it?

As this goes online, Bitcoin and other cryptocurrencies are in apparent price-freefall. This posting - prepared over a month ago - will not discuss the recent coin market meltdowns. Still, it seems a good moment to offer some light on one aspect.

First, I actually know a little about this topic. I've consulted with a number of companies, agencies, etc. about the blockchain era. More generally, about the conceptual underpinnings of "smart contracts" and the eerie, free-floating algorithms that were long-predicted by science fiction, but have become reality, as we speak. (Yes they are out there; some may be living right behind the screen you are looking at.)

One topic generating excitement - though the notion has been floating since the 1990s - is that of Decentralized Autonomous Organizations, or DAOs, which are portrayed in many novels and utopian manifestos as a way for humans (and their helpers) to bypass sclerotic legacy nations and codger institutions with self-organizing action groups, using NFTs and Blockchain tokens to modernize and revitalize the concept of guilds -- global, quick, low-cost, boundaryless, open and inherently accountable. Bruce Sterling wrote about this notion in the last century (as in his novel, Heavy Weather) and other authors, like Neal Stephenson (Cryptonomicon), Karl Schroeder (Stealing Worlds), Cory Doctorow (Down and Out in the Magic Kingdom), and Annalee Newitz (Autonomous), roam this conceptual landscape with agility!

To a large extent, versions of DAO thinking underlie moves by nations like Estonia (or "E-stonia") to modernize democracy and public services. Also spreading widely is the related notion of Citizen Assemblies.

But today I want to focus on just one aspect of this brave new world: whether DAOs can find a middle ground between autonomy and accountability, by self-policing to reduce bad behavior by predators while retaining their better, freedom-enhancing traits.

== Can blockchain-based DAOs - especially coin communities - self-police? ==

This is an important topic! Because major legacy nations like China are already stomping hard, using as justification the way cryptocurrencies do empower the very worst of parasitic human criminals. That justification might be reduced or eliminated if DAOs or blockchain communities could find a positive-sum sweet spot, cauterizing predators while preserving their role as gritty irritants, creating pearls of creative freedom.

Although there is no way to "ban" cryptocurrencies in general, there is an approach to making them much more accountable to real-life law.

Let's start with an ironic fact. Blockchain-based token systems are not totally secret!  


Yes, they use crypto to mask the identity of token (coin) holders.  But those holders only "own" their tokens by general consent of all members in a communal 'shared ledger' that maintains the list of coins and which public keys stand ready to be turned by each owner's encrypted keys. In that sense it is the opposite of 'secret,' since the ledger is out there in tens of thousands of copies on just as many distributed computers. Attempts to invade or distort or corrupt the ledger are detected and canceled en masse. (The ecologically damaging "coin mining" operations out there are partly about maintaining the ledger.)


All of this means that - to the delight of libertarians - it will be hard to legislate or regulate blockchain token systems. Hard, but not impossible. For example, the value of Bitcoin rises and falls depending on how many real-world entities will accept it in payment. And, as stated above, some governments have been hammering on that lately.

There is another way to modify any given blockchain token system, and that is for the owners themselves to deliberate and decide on a change to their shared economy... to change the ledger and its support software. No one member/owner can do that alone. Any effort to do so would be detected by the ledger's built-in immune system and canceled.


Only dig it, all such ledger-blockchain systems are ruled by a weird kind of consensus democracy. While there is no institutional or built-in provision for democratic decision-making in the commons (Satoshi himself may have back doors: a separate topic), there is nothing to stop a majority of bitcoin holders from simply making their own, new version of the shared ledger and inserting all their coins into it, with new software that's tuned to less eagerly reward polluters and extortionist gangs.


Oh, sure, a large minority would refuse. Their rump or legacy Bitcoin ledger (Rumpcoin?) would continue to operate... with its value plummeting as commercial, government, and individual entities refuse to accept it, and as large numbers of computer systems refuse to host rump-coin ledger operations. Because at that point, the holdouts will include a lot of characters who are doing unsavory things in the real world.


There are vernaculars for this. Indeed it has been done, occasionally, in what are called soft and hard 'forks.' 


== A forking solution? ==


A “fork,” in programming terms, is an open-source code modification. Usually, the forked code is similar to the original, but with important modifications, and the two “prongs” comfortably co-exist. Sometimes a fork is used to test a process, but with cryptocurrencies, it is more often used to implement a fundamental change or to create a new asset with similar (but not equal) characteristics as the original.


With a soft fork, only one blockchain will remain valid as users adopt the update, whereas with a hard fork both the old and new blockchains exist side by side, which means that the software must be updated to work by the new rules. But the aim is to render the old code so obsolete and so widely spurned that it ceases to have any use to anyone.


As an example: Ethereum did a fork when about $100 million worth of coins (that would now be worth tens of billions) was tied up in a badly written smart contract that a hacker was stealing. The community decided to kill that smart contract, showing that "immutable" blockchains can change if 50% + 1 decides to change them.


If you squint at this, it's really not so radical.  (Don't even ask about the blockchain "spork!"). It is just an operating system upgrade that can only occur by majority consent of the owner-members of the commune.  As pioneered at the famous University of Fork... or...


And so the stage is set to 'regulate' in ways that leave the potential benefits of blockchain - self-correction, smart contracts and the like - alone while letting system users deliberate and decide to revise, a trait that should be possible in any democratic or accountable system.


Now, is there a way to use a Grand Fork to change the insane approach to coin "mining" so that ledger maintenance can be achieved without encouraging planet-killing pollution and waste?


== And finally... ==


The concept that I called equiveillance, or look-back accountability, in The Transparent Society - and that Steve Mann called sousveillance - is labeled "inverse surveillance" by members of the Asimov Institute, in Holland: “How can we use AI as a Panopticon to promote beneficial actions for citizens by organizations?” A proof of concept was explored in a 2021 hackathon.


Well well. These are harder concepts to relate than they might think, I know from experience! Yet they are fundamental to the very basis of our kind of civilization.

Planet DebianRuss Allbery: Review: On a Sunbeam

Review: On a Sunbeam, by Tillie Walden

Publisher: Tillie Walden
Copyright: 2016-2017
Format: Online graphic novel
Pages: 544

On a Sunbeam is a web comic that was published in installments between Fall 2016 and Spring 2017, and then later published in dead tree form. I read the on-line version, which is still available for free from its web site. It was nominated for an Eisner Award and won a ton of other awards, including the Los Angeles Times Book Prize.

Mia is a new high school graduate who has taken a job with a construction crew that repairs old buildings (that are floating in space, but I'll get to that in a moment). Alma, Elliot, and Charlotte have been together for a long time; Jules is closer to Mia's age and has been with them for a year. This is not the sort of job one commutes to: they live together on a spaceship that travels to the job sites, share meals together, and are more of an extended family than a group of coworkers. It's all a bit intimidating for Mia, but Jules provides a very enthusiastic welcome and some orientation.

The story of Mia's new job is interleaved with Mia's school experience from five years earlier. As a new frosh at a boarding school, Mia is obsessed with Lux, a school sport that involves building and piloting ships through a maze to capture orbs. Sent to the principal's office on the first day of school for sneaking into the Lux tower when she's supposed to be at assembly, she meets Grace, a shy girl with sparkly shoes and an unheard-of single room. Mia (a bit like Jules in the present timeline) overcomes Grace's reticence by being persistently outgoing and determinedly friendly, while trying to get on the Lux team and dealing with the typical school problems of bullies and in-groups.

On a Sunbeam is science fiction in the sense that it seems to take place in space and school kids build flying ships. It is not science fiction in the sense of caring about technological extrapolation or making any scientific sense whatsoever. The buildings that Mia and the crew repair appear to be hanging in empty space, but there's gravity. No one wears any protective clothing or air masks. The spaceships look (and move) like giant tropical fish. If you need realism in your science fiction graphical novels, it's probably best not to think of this as science fiction at all, or even science fantasy despite the later appearance of some apparently magical or divine elements.

That may sound surrealistic or dream-like, but On a Sunbeam isn't that either. It's a story about human relationships, found family, and diversity of personalities, all of which are realistically portrayed. The characters find their world coherent, consistent, and predictable, even if it sometimes makes no sense to the reader. On a Sunbeam is simply set in its own universe, with internal logic but without explanation or revealed rules.

I kind of liked this approach? It takes some getting used to, but it's an excuse for some dramatic and beautiful backgrounds, and it's oddly freeing to have unremarked train tracks in outer space. There's no way that an explanation would have worked; if one were offered, my brain would have tried to nitpick it to the detriment of the story. There's something delightful about a setting that follows imaginary physical laws this unapologetically and without showing the author's work.

I was, sadly, not as much of a fan of the art, although I am certain this will be a matter of taste. Walden mixes simple story-telling panels with sweeping vistas, free-floating domes, and strange, wild asteroids, but she uses a very limited color palette. Most panels are only a few steps away from monochrome, and the colors are chosen more for mood or orientation in the story (Mia's school days are all blue, the Staircase is orange) than for any consistent realism. There is often a lot of detail in the panels, but I found it hard to appreciate because the coloring confused my eye. I'm old enough to have been a comics reader during the revolution in digital coloring and improved printing, and I loved the subsequent dramatic improvement in vivid colors and shading. I know the coloring style here is an intentional artistic choice, but to me it felt like a throwback to the days of muddy printing on cheap paper.

I have a similar complaint about the lettering: On a Sunbeam is either hand-lettered or closely simulates hand lettering, and I often found the dialogue hard to read due to inconsistent intra- and interword spacing or ambiguous letters. Here too I'm sure this was an artistic choice, but as a reader I'd much prefer a readable comics font over hand lettering.

The detail in the penciling is more to my liking. I had occasional trouble telling some of the characters apart, but they're clearly drawn and emotionally expressive. The scenery is wildly imaginative and often gorgeous, which increased my frustration with the coloring. I would love to see what some of these panels would have looked like after realistic coloring with a full palette.

(It's worth noting again that I read the on-line version. It's possible that the art was touched up for the print version and would have been more to my liking.)

But enough about the art. The draw of On a Sunbeam for me is the story. It's not very dramatic or event-filled at first, starting as two stories of burgeoning friendships with a fairly young main character. (They are closely linked, but it's not obvious how until well into the story.) But it's the sort of story that I started reading, thought was mildly interesting, and then kept reading just one more chapter until I had somehow read the whole thing.

There are some interesting twists towards the end, but it's otherwise not a very dramatic or surprising story. What it is instead is open-hearted, quiet, charming, and deeper than it looks. The characters are wildly different and can be abrasive, but they invest time and effort into understanding each other and adjusting for each other's preferences. Personal loss drives a lot of the plot, but the characters are also allowed to mature and be happy without resolving every bad thing that happened to them. These characters felt like people I would like and would want to get to know (even if Jules would be overwhelming). I enjoyed watching their lives.

This reminded me a bit of a Becky Chambers novel, although it's less invested in being science fiction and sticks strictly to humans. There's a similar feeling that the relationships are the point of the story, and that nearly everyone is trying hard to be good, with differing backgrounds and differing conceptions of good. All of the characters are female or non-binary, which is left as entirely unexplained as the rest of the setting. It's that sort of book.

I wouldn't say this is one of the best things I've ever read, but I found it delightful and charming, and it certainly sucked me in and kept me reading until the end. One also cannot argue with the price, although if I hadn't already read it, I would be tempted to buy a paper copy to support the author. This will not be to everyone's taste, and stay far away if you are looking for realistic science fiction, but recommended if you are in the mood for an understated queer character story full of good-hearted people.

Rating: 7 out of 10

,

Planet DebianDirk Eddelbuettel: #37: Introducing r2u with 2 x 19k CRAN binaries for Ubuntu 22.04 and 20.04

One month ago I started work on a new side project which is now up and running, and deserving of an introductory blog post: r2u. It was announced in two earlier tweets (first, second) which contained the two (wicked) demos below, also found at the documentation site.

So what is this about? It brings full and complete CRAN installability to Ubuntu LTS, both the ‘focal’ release 20.04 and the recent ‘jammy’ release 22.04. It is unique in resolving all R and CRAN packages with the system package manager. So whenever you install something it is guaranteed to run as its dependencies are resolved and co-installed as needed. Equally important, no shared library will be updated or removed by the system as the possible dependency of the R package is known and declared. No other package management system for R does that as only apt on Debian or Ubuntu can — and this project integrates all CRAN packages (plus 200+ BioConductor packages). It will work with any Ubuntu installation on laptop, desktop, server, cloud, container, or in WSL2 (but is limited to Intel/AMD chips, sorry Raspberry Pi or M1 laptop). It covers all of CRAN (or nearly 19k packages), all the BioConductor packages depended-upon (currently over 200), and only excludes less than a handful of CRAN packages that cannot be built.

Usage

The setup is described concisely in the repo README.md and at the documentation site. It consists of just five (or fewer) simple steps, and scripts are provided for both ‘focal’ (20.04) and ‘jammy’ (22.04).

Demos

Check out these two demos (also at the r2u site):

Installing the full tidyverse in one command and 18 seconds

Installing brms and its depends in one command and 13 seconds (and showing gitpod.io)

Integration via bspm

The r2u setup can be used directly with apt (or dpkg or any other frontend to the package management system). Once installed, apt update; apt upgrade will take care of new packages. For this to work, all CRAN packages (and all BioConductor packages depended upon) are mapped to names like r-cran-rcpp and r-bioc-s4vectors: an r prefix, the repo, and the package name, all lower-cased. That works—but thanks to the wonderful bspm package by Iñaki Úcar we can do much better. It connects R’s own install.packages() and update.packages() to apt. So we can just say (as the demos above show) install.packages("tidyverse") or install.packages("brms"), and binaries are installed via apt, which is fantastic: it connects R to the system package manager. The setup is really only two lines and is described at the r2u site.

History and Motivation

Turning CRAN packages into .deb binaries is not a new idea. Albrecht Gebhardt was the first to realize this about twenty years ago (!!) and implemented it with a single Perl script. Next, Albrecht, Stefan Moeller, David Vernazobres and I built on top of this, which is described in this useR! 2007 paper. A most excellent generalization and rewrite was provided by Charles Blundell in a superb Google Summer of Code contribution in 2008, which I mentored. Charles and I described it in this talk at useR! 2009. I ran that setup for a while afterwards, but it died via an internal database corruption in 2010 right when I tried to demo it at CRAN headquarters in Vienna. This peaked at, if memory serves, about 5k packages: all of CRAN at the time. Don Armstrong took it one step further in a full reimplementation which, if I recall correctly, covered all of CRAN and BioConductor for what may have been 8k or 9k packages. Don had a stronger system (with full RAID-5) but it also died in a crash and was never rebuilt even though he and I could have relied on Debian resources (as all these approaches focused on Debian). During that time, Michael Rutter created a variant that cleverly used an Ubuntu-only setup utilizing Launchpad. This repo is still going strong, used and relied-upon by many, and about 5k packages (per distribution) strong. At one point, a group consisting of Don, Michael, Gábor Csárdi and myself (as lead/PI) had financial support from the RConsortium ISC for a more general re-implementation, but that support was withdrawn when we did not have time to deliver.

We should also note other long-standing approaches. Detlef Steuer has been using the openSUSE Build Service to provide nearly all of CRAN for openSUSE for many years. Iñaki Úcar built a similar system for Fedora described in this blog post. Iñaki and I also have an arXiv paper describing all this.

Details

Please see the r2u site for all details on using r2u.

Acknowledgements

The help of everybody who has worked on this is greatly appreciated. So a huge Thank you! to Albrecht, David, Stefan, Charles, Don, Michael, Detlef, Gábor, Iñaki—and whoever I may have omitted. Similarly, thanks to everybody working on R, CRAN, Debian, or Ubuntu—it all makes for a superb system. And another big Thank you! goes to my GitHub sponsors whose continued support is greatly appreciated.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Planet DebianWouter Verhelst: Faster tar

I have a new laptop. The new one is a Dell Latitude 5521, whereas the old one was a Dell Latitude 5590.

As both the old and the new laptops are owned by the people who pay my paycheck, I'm supposed to copy all my data off the old laptop and then return it to the IT department.

A simple way of doing this (and what I'd usually use) is to just rsync the home directory (and other relevant locations) to the new machine. However, for various reasons I didn't want to do that this time around; for one, my home directory on the old laptop is a bit of a mess, and a new laptop is an ideal moment in time to clean that up. If I were to just rsync the old home directory over to the new one, then, well.

So instead, I'm creating a tar ball. The first attempt was quite slow:

tar cvpzf wouter@new-laptop:old-laptop.tar.gz /home /var /etc

The problem here is that the default compression algorithm, gzip, is quite slow, especially if you use the default non-parallel implementation.

So we tried something else:

tar cvpf wouter@new-laptop:old-laptop.tar.gz -Ipigz /home /var /etc

Better, but not quite great yet. The old laptop now has bursts of maxing out CPU, but it doesn't even come close to maxing out the gigabit network cable between the two.

Tar can also compress with the LZ4 algorithm. That algorithm doesn't compress very well, but it's the best choice if "speed" is the most important consideration. So I could do that:

tar cvpf wouter@new-laptop:old-laptop.tar.gz -Ilz4 /home /var /etc

The trouble with that, however, is that the tarball will then be quite big.

So why not use the CPU power of the new laptop?

tar cvpf - /home /var /etc | ssh new-laptop "pigz > old-laptop.tar.gz"

Yeah, that's much faster. Except, now the network speed becomes the limiting factor. We can do better.

tar cvpf - -Ilz4 /home /var /etc | ssh new-laptop "lz4 -d | pigz > old-laptop.tar.gz"

This uses about 70% of the link speed, just over one core on the old laptop, and 60% of CPU time on the new laptop.

After also adding a bit of --exclude="*cache*", to avoid files we don't care about, things go quite quickly now: somewhere between 200 and 250G (uncompressed) was transferred into a 74G file, in 20 minutes. My first attempt hadn't even done 10G after an hour!

Charles StrossHolding pattern 2022 ...

Just a quick note: I am not blogging right now—at least until the end of April, most likely until this point in mid-May—because I am 2/3 of the way through the final draft of Season of Skulls, book 3 of the New Management: it's due in at the end of the month, or in any case some time in May, for publication in May 2023. (It already exists as a book; this is a final polishing pass, with some additional scenes added to make the continuity work better.)

After SoS is baked I also have to finish a half-written novella, A Conventional Boy, about Derek the DM; it got steamrollered by two novels going through production in the past year. I can't multitask on writing projects, so the lower-priority job (a novella) got shelved temporarily.

Normal service will be resumed by June at the latest; in the meantime, if you think the last thread on the Ukraine war is getting too cumbersome, feel free to colonize the comments on this one.

Worse Than FailureError'd: Nice Work If You Can Get IT

Danish cookie connoisseur Jørgen N. contributes our starter course this week. "Cloudera has an interesting way of implementing "Required only" cookies." It's an exercise for the frist poster to explain to the peanut gallery what's so distasteful about third-party cookies.

cookie

 

Studious Olivier sagely notes "Anatomy has been a subject of study ever since someone cut some other fellow open, but I don't think Android applications existed when Pontius Pilate washed his hands."

old

 

Coincidentally, Brett R. contemporaneously comments "That last build took a little more than 2,020 years, I think the software is already out of date."

slow

 

Regular Peter G. breaks some big news for cosmology. "Looks like the dark matter mystery has been solved. Bunnings Warehouse in Australia reckons 150ccs of null weighs about 42g!" That must be inclusive of the packaging.

light

 

Speaking of cosmology, after Dan N's retirement portfolio took a haircut this week, he decided he might need to check his Social Security benefit for a little reassurance. Imagine his dismay at this bureaucratic brushoff. "I got an email from the SSA telling me that they'd published a new statement of my current progress to earning retirement benefits. When I went to their website to login I found this. They've got anywhere between 4 and 8.5 hours of scheduled downtime every single day! Unless the "database" backing their site is a room full of filing cabinets with hundreds of clerks to transcribe records when a teletype prints out requests I can think of no sane reason for this sort of downtime." Union rules?

ssa

 


Planet DebianLouis-Philippe Véronneau: Introducing metalfinder

After going to an incredible Arch Enemy / Behemoth / Napalm Death / Unto Others concert a few weeks ago, I decided I wanted to go to more concerts.

I like music, and I really enjoy concerts. Sadly, I often miss great performances because no one told me about them, or my local newspaper didn't cover the event far enough in advance for me to get tickets.

Some online services let you sync your Spotify account so they can notify you when a new concert is announced, but I don't use Spotify. As a music geek, I have a local music collection, and if I need to stream it, I have a supysonic server.

Introducing metalfinder, a CLI tool to find concerts using your local music collection! At the moment, it scans your music collection, creates a list of artists, and queries Bandsintown for concerts in your town. Multiple output formats are supported, but I mainly use the ATOM one, as I'm a heavy feed reader user.

Screenshot of the ATOM output in my feed reader

The current metalfinder version (1.1.1) is an MVP: it works well enough, but I still have a lot of work to do... If you want to give it a try, the easiest way is to download it from PyPI. metalfinder is also currently in NEW and I'm planning to have something feature-complete in time for the Bookworm freeze.

Planet DebianReproducible Builds (diffoscope): diffoscope 213 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 213. This version includes the following changes:

* Don't mask differences in .zip/.jar central directory extra fields.
* Don't show a binary comparison of .zip/.jar files if we have at least
  one observed nested difference.
* Use assert_diff in test_zip over get_data and separate assert.

You can find out more by visiting the project homepage.

,

Planet DebianUlrike Uhlig: How do kids conceive the internet? - part 2

I promised a follow-up to my post about interviews on how children conceptualize the internet. Here it is. (Maybe not the last one!)

The internet, it’s that thing that acts up all the time, right?

As said in my first post, I abandoned the idea of interviewing children younger than 9 years because it seems they are not necessarily aware that they are using the internet. But it turns out that some have heard about the internet. My friend Anna, who has 9 younger siblings, tried to win over some of her brothers and sisters for an interview with me. At the dinner table, this turned into a discussion, and she sent me an incredibly funny video where two of her brothers and sisters, aged 5 and 6, discuss the internet with her. I won’t share the video for privacy reasons — besides, the kids speak in the wondrous dialect of Vorarlberg, a region in western Austria, close to the border with Liechtenstein.

Here’s a transcription of the dinner table discussion:

  • Anna: what is the internet?
  • both children: (shouting as if it was a game of who gets it first) photo! mobile! device! camera!
  • Anna: But one can have a camera without the internet…
  • M.: Internet is the mobile phone charger! Mobile phone full!
  • J.: Internet is… internet is…
  • M.: I know! Internet is where you can charge something, the mobile phone and…
  • Anna: You mean electricity?
  • M.: Yeah, that is the internet, electricity!
  • Anna: (laughs), Yes, the internet works a bit similarly, true.
  • J.: It’s the electricity of the house!
  • Anna: The electricity of the house…

(everyone is talking at the same time now.)

  • Anna: And what’s WiFi?
  • M.: WiFi it’s the TV!
  • Anna (laughs)
  • M.: WiFi is there so it doesn’t act up!
  • Anna (laughs harder)
  • J. (repeats what M. just said): WiFi is there so it doesn’t act up!
  • Anna: So that what doesn’t act up?
  • M.: (moves her finger wildly drawing a small circle in the air) So that it doesn’t spin!
  • Anna: Ah?
  • M.: When one wants to watch something on Youtube, well then… that the thing doesn’t spin like that!
  • Anna: Ahhh! so when you use Youtube, you need the internet, right?
  • J.: Yes, so that one can watch things.

I really like how the kids associate the internet with a thing that works all the time, except for when it doesn’t work. Then they notice: “The internet is acting up!” Probably, when that happens, parents or older siblings say: “the internet is acting up” or “let me check why the internet acts up again” and maybe they get up from the sofa, switch a home router on and off again, which creates this association with electricity.

(Just for the sake of clarity for fellow multilingualist readers, the kids used the German word “spinnen”, which I translated to “acting up”. In French that would be “déconner”.)

WiFi for everyone!

I interviewed another of Anna’s siblings, a 10 year old boy. He told me that he does not really use the internet by himself yet, and does not own any internet capable device. He watches when older family members look up stuff on Google, or put on a video on Youtube, Netflix, or Amazon — he knew all these brand names though. In the living room, there’s Alexa, he told me, and he uses the internet by asking Alexa to play music.

Then I say: Alexa, play this song!

Interestingly, he knew that, in order to listen to a CD, the internet was not needed.

When asked how a drawing would look like that explains the internet, he drew a scheme of the living room at home, with the TV, Alexa, and some kind of WiFi dongle, maybe a repeater. (Unfortunately I did not manage to get his drawing.)

If he could ask a wise and friendly dragon one thing about the internet that he always wanted to know, he would ask “How much internet can one have and what are all the things one can do with the internet?”

If he could change the internet for the better for everyone, he would build a gigantic building which would provide the entire world with WiFi. ☺

Cut out the stupid stuff from the internet

His slightly older sister does own a laptop and a smartphone. She uses the internet to watch movies, or series, to talk with her friends, or to listen to music.

When asked how she would explain the internet to an alien, she said that

one can do a lot of things on the internet, but on the internet there can be stupid things, but also good things, one can learn stuff on the internet, for example how to do crochet.

Most importantly, she noticed that

one needs the internet nowadays.

A child's drawing. On the left, a smartphone with WhatsApp, saying 'calls with WhatsApp'. In the middle a TV saying 'watching movies'. On the right, a laptop with lots of open windows.

Her drawing shows how she uses the internet: calls using WhatsApp, watching movies online, and a laptop with open windows on the screen.

She would ask the dragon that can explain one thing she always wanted to know about the internet:

What is the internet? How does it work at all? How does it function?

What she would change has to do with her earlier remark about stupid things:

I would make it so that there are less stupid things. It would be good to use the internet for better things, but not for useless things, that one doesn’t actually need.

When I asked her what she meant by “stupid things”, she replied:

Useless videos where one talks about nonsense. And one can also google stupid things, for example “how long will i be alive?” and stuff like that.

Patterns

From the interviews I have done so far, there seems to be a cut between the age where kids don’t own a device and use the internet mainly to watch movies and series or listen to music, and the age where they start owning a device, talking with their friends online, and creating accounts on social media. This seems to happen roughly at ages 9-10.

I’m still surprised at the number of ideas that kids have when asked what they would change about the internet if they could. I’m sure there are more if one goes looking for them.

Thanks

Thanks to my friends who made all these interviews possible either by allowing me to meet their children, or their younger siblings: Anna, Christa, Aline, Cindy, and Martina.

Planet DebianJoerg Jaspert: Rust? Munin? munin-plugin…

My first Rust crate: munin-plugin

Sooo, some time ago I had to rewrite a munin plugin from Shell to Rust, because the shell version went crazy after some runtime and used up a whole CPU all on its own. Sure, it only did that on systems with Oracle Database installed, so that monster seems to be bad (who would have guessed?), but somehow I had to fix up this plugin and wasn’t allowed to drop that wannabe-database.

A while later I wrote a plugin to graph Fibre Channel Host data, and then network interface statistics, all with a one-second resolution for the graphs, to allow one to zoom in and see every spike, and not have RRD round off the interesting parts.

As one can imagine, that turns out to be a lot of very similar code: after all, most of the difference is in the graph config statements and the actual data gathering, while the rest of the code is just the same.

As I already know there are more plugins (hello rsyslog statistics) I have to (sometimes re-)write in Rust, I took some time and wrote me a Rust library to make writing munin-plugins in Rust easier. Yay, my first crate on crates.io (and wrote lots of docs for it).

By now I have made my 1-second-resolution CPU load plugin and the 1-second-resolution network interface plugin use this lib. To test the lib with less complicated plugins, I took the munin default plugin “load” (Linux variant) and made a Rust version of it, mostly to see that something as simple as that is also easy to implement: Munin load

I have some ideas on how to provide a useful default implementation of the fetch function, so one can write even less code when using this library.

It is my first library in Rust, so if you see something bad or missing in there, feel free to open issues or pull requests.

Now, having done this, one thing missing: Someone to (re)write munin itself in something that is actually fast… Not munin-node, but munin. Or maybe the RRD usage, but with a few hundred nodes in it, with loads of graphs, we had to adjust munin code and change some timeout or it would commit suicide regularly. And some other code change for not always checking for a rename, or something like it. And only run parts of the default cronjob once an hour, not on every update run. And switch to fetching data over ssh (and munin-async on the nodes). And rrdcached with loads of caching for the trillions of files (currently amounts to ~800G of data).. And it still needs way more CPU than it should. Soo, lots of possible optimizations hidden in there. Though I bet a non-scripting language rewrite might gain the most. (Except, of course, someone needs to do it… :) )

Cryptogram Forging Australian Driver’s Licenses

The New South Wales digital driver’s license has multiple implementation flaws that allow for easy forgeries.

This file is encrypted using AES-256-CBC encryption combined with Base64 encoding.

A 4-digit application PIN (which gets set during the initial onboarding when a user first instals the application) is the encryption password used to protect or encrypt the licence data.

The problem here is that an attacker who has access to the encrypted licence data (whether that be through accessing a phone backup, direct access to the device or remote compromise) could easily brute-force this 4-digit PIN by using a script that would try all 10,000 combinations….
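To make the scale of that concrete, here is a minimal C# sketch of what such a brute force could look like. It assumes, purely for illustration, that the AES-256 key is derived from the PIN with PBKDF2 and that the salt and IV are stored alongside the ciphertext; the key derivation, the storage layout, and the "licenceNumber" plausibility check are all assumptions, not details from the write-up.

    using System;
    using System.Security.Cryptography;
    using System.Text;

    static class PinBruteForceSketch
    {
        // Hypothetical key derivation; the real app's KDF is not described above.
        static byte[] DeriveKey(string pin, byte[] salt) =>
            new Rfc2898DeriveBytes(pin, salt, 10_000, HashAlgorithmName.SHA256).GetBytes(32);

        // Try all 10,000 PINs against the encrypted licence blob.
        static string TryAllPins(byte[] ciphertext, byte[] salt, byte[] iv)
        {
            for (int i = 0; i <= 9999; i++)
            {
                string pin = i.ToString("D4");           // "0000" .. "9999"
                using var aes = Aes.Create();
                aes.Mode = CipherMode.CBC;
                aes.Key = DeriveKey(pin, salt);
                aes.IV = iv;
                try
                {
                    byte[] plain = aes.CreateDecryptor()
                                      .TransformFinalBlock(ciphertext, 0, ciphertext.Length);
                    string text = Encoding.UTF8.GetString(plain);
                    if (text.Contains("licenceNumber"))  // hypothetical plausibility check
                        return $"PIN {pin} decrypts the data: {text}";
                }
                catch (CryptographicException)
                {
                    // Wrong key: the CBC padding check fails. Keep going.
                }
            }
            return null;
        }
    }

With only 10,000 candidates, even a deliberately slow key derivation cannot stretch this search beyond minutes, which is why a 4-digit PIN adds essentially nothing once an attacker holds the encrypted blob.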

[…]

The second design flaw that is favourable for attackers is that the Digital Driver Licence data is never validated against the back-end authority which is the Service NSW API/database.

This means that the application has no native method to validate the Digital Driver Licence data that exists on the phone and thus cannot perform further actions such as warn users when this data has been modified.

As the Digital Licence is stored on the client’s device, validation should take place to ensure the local copy of the data actually matches the Digital Driver’s Licence data that was originally downloaded from the Service NSW API.

As this verification does not take place, an attacker is able to display the edited data on the Service NSW application without any preventative factors.
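For contrast, a minimal C# sketch of the kind of check the researchers describe as missing: hash the locally stored licence data and compare it with a hash fetched from the issuing back end before displaying it. The getIssuedLicenceHashAsync callback is a hypothetical stand-in, not the actual Service NSW API.

    using System;
    using System.Security.Cryptography;
    using System.Threading.Tasks;

    static class LicenceIntegritySketch
    {
        // Returns true only if the local copy matches what the authority issued.
        static async Task<bool> LocalCopyIsAuthentic(byte[] localLicenceData,
                                                     Func<Task<byte[]>> getIssuedLicenceHashAsync)
        {
            byte[] localHash = SHA256.HashData(localLicenceData);
            byte[] issuedHash = await getIssuedLicenceHashAsync();  // hypothetical back-end call
            return CryptographicOperations.FixedTimeEquals(localHash, issuedHash);
        }
    }

Any edit to the local blob would change its hash and fail this comparison, so the app could refuse to display tampered data, at least whenever it has connectivity.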

There’s a lot more in the blog post.

Cryptogram Friday Squid Blogging: Squid Street Art

Pretty.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Bluetooth Flaw Allows Remote Unlocking of Digital Locks

Locks that use Bluetooth Low Energy to authenticate keys are vulnerable to remote unlocking. The research focused on Teslas, but the exploit is generalizable.

In a video shared with Reuters, NCC Group researcher Sultan Qasim Khan was able to open and then drive a Tesla using a small relay device attached to a laptop which bridged a large gap between the Tesla and the Tesla owner’s phone.

“This proves that any product relying on a trusted BLE connection is vulnerable to attacks even from the other side of the world,” the UK-based firm said in a statement, referring to the Bluetooth Low Energy (BLE) protocol—technology used in millions of cars and smart locks which automatically open when in close proximity to an authorised device.

Although Khan demonstrated the hack on a 2021 Tesla Model Y, NCC Group said any smart locks using BLE technology, including residential smart locks, could be unlocked in the same way.

Another news article.

Cory DoctorowAbout Those Killswitched Ukrainian Tractors

A vintage John Deere tractor whose wheel hubs have been replaced with HAL 9000 eyes, matted over a background of the cyber-waterfall image from The Matrix.

This week on my podcast, I read a recent Medium column, About those kill-switched Ukrainian tractors, suggesting that what John Deere did to Russian looters, anyone can do to farmers, anywhere.

(Image: Cryteria, CC BY 3.0, modified)

MP3

Worse Than FailureCodeSOD: The String Buildulator

"Don't concatenate long strings," is generally solid advice in most languages. Due to internal representations, strings are frequently immutable and of a fixed length, so a block like this:

string s = getSomeString();
s = s + "some suffix";

creates three strings: the original, the suffix, and the third, concatenated string. Keep spamming instances, especially long ones, if you want to stress-test your garbage collector.

While languages will do their best to optimize those kinds of operations, the general advice is to use string builders which can minimize those allocations and boost performance.
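For reference, a minimal sketch of what that advice looks like in C# (the column names are just the first few from the code below):

    using System.Text;

    var sb = new StringBuilder();
    sb.Append("Title");
    sb.Append(",\"First Name\"");
    sb.Append(",\"Middle Name\"");
    // ...one Append per column; the builder grows its internal buffer
    // instead of allocating a new string on every step.
    string header = sb.ToString();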

Or, you can do what Richard B's predecessor did, and abuse the heck out of string interpolation in C#.

StreamWriter sw = new StreamWriter(filename); #region Export file header string header = ""; header = "Title"; header = $"{header},\"First Name\""; header = $"{header},\"Middle Name\""; header = $"{header},\"Last Name\""; header = $"{header},Suffix"; header = $"{header},Company"; header = $"{header},Department"; header = $"{header},\"Job Title\""; header = $"{header},\"Business Street\""; header = $"{header},\"Business Street 2\""; header = $"{header},\"Business Street 3\""; header = $"{header},\"Business City\""; header = $"{header},\"Business State\""; header = $"{header},\"Business Postal Code\""; header = $"{header},\"Business Country/ Region\""; header = $"{header},\"Home Street\""; header = $"{header},\"Home Street 2\""; header = $"{header},\"Home Street 3\""; header = $"{header},\"Home City\""; header = $"{header},\"Home State\""; header = $"{header},\"Home Postal Code\""; header = $"{header},\"Home Country/ Region\""; header = $"{header},\"Other Street\""; header = $"{header},\"Other Street 2\""; header = $"{header},\"Other Street 3\""; header = $"{header},\"Other City\""; header = $"{header},\"Other State\""; header = $"{header},\"Other Postal Code\""; header = $"{header},\"Other Country/ Region\""; header = $"{header},\"Assistant's Phone\""; header = $"{header},\"Business Fax\""; header = $"{header},\"Business Phone\""; header = $"{header},\"Business Phone 2\""; header = $"{header},Callback"; header = $"{header},\"Car Phone\""; header = $"{header},\"Company Main Phone\""; header = $"{header},\"Home Fax\""; header = $"{header},\"Home Phone\""; header = $"{header},\"Home Phone 2\""; header = $"{header},ISDN"; header = $"{header},\"Mobile Phone\""; header = $"{header},\"Other Fax\""; header = $"{header},\"Other Phone\""; header = $"{header},Pager"; header = $"{header},\"Primary Phone\""; header = $"{header},\"Radio Phone\""; header = $"{header},\"TTY/TDD Phone\""; header = $"{header},Telex"; header = $"{header},Account"; header = $"{header},Anniversary"; header = $"{header},\"Assistant's Name\""; header = $"{header},\"Billing Information\""; header = $"{header},Birthday"; header = $"{header},\"Business Address PO Box\""; header = $"{header},Categories"; header = $"{header},Children"; header = $"{header},\"Directory Server\""; header = $"{header},\"E - mail Address\""; header = $"{header},\"E - mail Type\""; header = $"{header},\"E - mail Display Name\""; header = $"{header},\"E-mail 2 Address\""; header = $"{header},\"E - mail 2 Type\""; header = $"{header},\"E - mail 2 Display Name\""; header = $"{header},\"E-mail 3 Address\""; header = $"{header},\"E - mail 3 Type\""; header = $"{header},\"E - mail 3 Display Name\""; header = $"{header},Gender"; header = $"{header},\"Government ID Number\""; header = $"{header},Hobby"; header = $"{header},\"Home Address PO Box\""; header = $"{header},Initials"; header = $"{header},\"Internet Free Busy\""; header = $"{header},Keywords"; header = $"{header},Language"; header = $"{header},Location"; header = $"{header},\"Manager's Name\""; header = $"{header},Mileage"; header = $"{header},Notes"; header = $"{header},\"Office Location\""; header = $"{header},\"Organizational ID Number\""; header = $"{header},\"Other Address PO Box\""; header = $"{header},Priority"; header = $"{header},Private"; header = $"{header},Profession"; header = $"{header},\"Referred By\""; header = $"{header},Sensitivity"; header = $"{header},Spouse"; header = $"{header},\"User 1\""; header = $"{header},\"User 2\""; header = $"{header},\"User 3\""; header = $"{header},\"User 
4\""; header = $"{header},\"Web Page\""; #endregion Export file header sw.WriteLine(header);

The real killer is that there's no need for string concatenation at all. There's no reason one needs to WriteLine the entire header at once; a plain sw.Write("Title,"); per field would do. Also, string interpolation is almost always more expensive than straight concatenation, and harder for compilers to optimize. I'm not about to benchmark this disaster to prove it, but I suspect this is going to be pretty much the most expensive option.
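And since the header is just a fixed list of column names, even a builder is overkill. A sketch of the simplest alternative, reusing the sw writer and a handful of the column names from the code above:

    // The columns never change, so build the header once from an array.
    string[] columns =
    {
        "Title", "\"First Name\"", "\"Middle Name\"", "\"Last Name\"", "Suffix",
        // ...and so on for the remaining columns...
        "\"Web Page\""
    };
    sw.WriteLine(string.Join(",", columns));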

And don't worry, the same basic process follows for each individual row they're outputting:

string contactRow = ""; HtmlToText htmlToText = new HtmlToText(); bool extendedPropRetrieved = false; #region Extract properties for export file if (contact.CompleteName != null) contactRow = $"\"{contact.CompleteName.Title}\""; // Title else contactRow = $""; contactRow = $"{contactRow},\"{contact.GivenName}\""; // First name contactRow = $"{contactRow},\"{contact.MiddleName}\""; // Middle name contactRow = $"{contactRow},\"{contact.Surname}\""; // Last name if (contact.CompleteName != null) contactRow = $"{contactRow},\"{contact.CompleteName.Suffix}\""; //Suffix else contactRow = $"{contactRow},"; contactRow = $"{contactRow},\"{contact.CompanyName}\""; // Company contactRow = $"{contactRow},\"{contact.Department}\""; // Department contactRow = $"{contactRow},\"{contact.JobTitle}\""; // Job title if (contact.PhysicalAddresses.Contains(PhysicalAddressKey.Business)) { contactRow = $"{contactRow},\"{contact.PhysicalAddresses[PhysicalAddressKey.Business].Street}\""; // Business street contactRow = $"{contactRow},"; // Business street 2 contactRow = $"{contactRow},"; // Business street 3 contactRow = $"{contactRow},\"{contact.PhysicalAddresses[PhysicalAddressKey.Business].City}\""; // Business city contactRow = $"{contactRow},\"{contact.PhysicalAddresses[PhysicalAddressKey.Business].State}\""; // Business state contactRow = $"{contactRow},\"{contact.PhysicalAddresses[PhysicalAddressKey.Business].PostalCode}\""; // Business postalcode contactRow = $"{contactRow},\"{contact.PhysicalAddresses[PhysicalAddressKey.Business].CountryOrRegion}\""; // Business country/region } else { contactRow = $"{contactRow},"; contactRow = $"{contactRow},"; contactRow = $"{contactRow},"; contactRow = $"{contactRow},"; contactRow = $"{contactRow},"; contactRow = $"{contactRow},"; contactRow = $"{contactRow},"; } // ... this goes on for about 600 lines

The physical address else block is something really special here.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

,

Cryptogram Websites that Collect Your Data as You Type

A surprising number of websites include JavaScript keyloggers that collect everything you type as you type it, not just when you submit a form.

Researchers from KU Leuven, Radboud University, and University of Lausanne crawled and analyzed the top 100,000 websites, looking at scenarios in which a user is visiting a site while in the European Union and visiting a site from the United States. They found that 1,844 websites gathered an EU user’s email address without their consent, and a staggering 2,950 logged a US user’s email in some form. Many of the sites seemingly do not intend to conduct the data-logging but incorporate third-party marketing and analytics services that cause the behavior.

After specifically crawling sites for password leaks in May 2021, the researchers also found 52 websites in which third parties, including the Russian tech giant Yandex, were incidentally collecting password data before submission. The group disclosed their findings to these sites, and all 52 instances have since been resolved.

“If there’s a Submit button on a form, the reasonable expectation is that it does something—that it will submit your data when you click it,” says Güneş Acar, a professor and researcher in Radboud University’s digital security group and one of the leaders of the study. “We were super surprised by these results. We thought maybe we were going to find a few hundred websites where your email is collected before you submit, but this exceeded our expectations by far.”

Research paper.

Krebs on SecuritySenators Urge FTC to Probe ID.me Over Selfie Data

Some of the more tech-savvy Democrats in the U.S. Senate are asking the Federal Trade Commission (FTC) to investigate identity-proofing company ID.me for “deceptive statements” the company and its founder allegedly made over how they handle facial recognition data collected on behalf of the Internal Revenue Service, which until recently required anyone seeking a new IRS account online to provide a live video selfie to ID.me.

In a letter to FTC Chair Lina Khan, the Senators charge that ID.me’s CEO Blake Hall has offered conflicting statements about how his company uses the facial scan data it collects on behalf of the federal government and many states that use the ID proofing technology to screen applicants for unemployment insurance.

The lawmakers say that in public statements and blog posts, ID.me has frequently emphasized the difference between two types of facial recognition: One-to-one, and one-to-many. In the one-to-one approach, a live video selfie is compared to the image on a driver’s license, for example. One-to-many facial recognition involves comparing a face against a database of other faces to find any potential matches.

Americans have particular reason to be concerned about the difference between these two types of facial recognition, says the letter to the FTC, signed by Sens. Cory Booker (D-N.J.), Edward Markey (D-Mass.), Alex Padilla (D-Calif.), and Ron Wyden (D-Ore.):

“While one-to-one recognition involves a one-time comparison of two images in order to confirm an applicant’s identity, the use of one-to-many recognition means that millions of innocent people will have their photographs endlessly queried as part of a digital ‘line up.’ Not only does this violate individuals’ privacy, but the inevitable false matches associated with one-to-many recognition can result in applicants being wrongly denied desperately-needed services for weeks or even months as they try to get their case reviewed.”

“This risk is especially acute for people of color: NIST’s Facial Recognition Vendor Test found that many facial recognition algorithms have rates of false matches that are as much as 100 times higher for individuals from countries in West Africa, East Africa and East Asia than for individuals from Eastern European countries. This means Black and Asian Americans could be disproportionately likely to be denied benefits due to a false match in a one-to-many facial recognition system.”

The lawmakers say that throughout the latter half of 2021, ID.me published statements and blog posts stating it did not use one-to-many facial recognition and that the approach was “problematic” and “tied to surveillance operations.” But several days after a Jan. 16, 2022 post here about the IRS’s new facial ID requirement went viral and prompted a public backlash, Hall acknowledged in a LinkedIn posting that ID.me does use one-to-many facial recognition.

“Within days, the company edited the numerous blog posts and white papers on its website that previously stated the company did not use one-to-many to reflect the truth,” the letter alleges. “According to media reports, the company’s decision to correct its prior misleading statements came after mounting internal pressure from its employees.”

Cyberscoop’s Tonya Riley published excerpts from internal ID.me employee Slack messages wherein some expressed dread and unease with the company’s equivocation on its use of one-to-many facial recognition.

In February, the IRS announced it would no longer require facial scans or other biometric data from taxpayers seeking to create an account at the agency’s website. The agency also pledged that any biometric data shared with ID.me would be permanently deleted.

But the IRS still requires new account applicants to sign up with either ID.me or Login.gov, a single sign-on solution already used to access 200 websites run by 28 federal agencies. It also still offers the option of providing a live selfie for verification purposes, although the IRS says this data will be deleted automatically.

Asked to respond to concerns raised in the letter from Senate lawmakers, ID.me instead touted its successes in stopping fraud.

“Five state workforce agencies have publicly credited ID.me with helping to prevent $238 billion dollars in fraud,” the statement reads. “Conditions were so bad during the pandemic that the deputy assistant director of the FBI called the fraud ‘an economic attack on the United States.’ ID.me played a critical role in stopping that attack in more than 20 states where the service was rapidly adopted for its equally important ability to increase equity and verify individuals left behind by traditional options. We look forward to cooperating with all relevant government bodies to clear up any misunderstandings.”

As Cyberscoop reported on Apr. 14, the House Oversight and Reform Committee last month began an investigation into ID.me’s practices, with committee chairwoman Carolyn Maloney (D-N.Y.) saying the committee’s questions to the company would help shape policy on how the government wields facial recognition technology.

A copy of the letter the senators sent to the FTC is here (PDF).

Planet DebianGunnar Wolf: I do have a full face

I have been a bearded subject since I was 18, back in 1994. Yes, during 1999-2000, I shaved for my military service, and I briefly tried the goatee look in 2008… Few people nowadays can imagine my face without a forest of hair.

But sometimes, life happens. And, unlike my good friend Bdale, I didn’t get Linus to do the honors… But, all in all, here I am:

Turns out, I have been suffering from quite bad skin infections for a couple of years already. Last Friday, I checked in to the hospital with an ugly, swollen face (I won’t put you through that), and the hospital staff decided it was in my best interests to trim my beard. And then some more. And then shave me. I sat in the hospital for four days, getting soaked (medical term) with antibiotics and other stuff, got my prescriptions for the next few days, and… well, I really hope that’s the end of the infections. We shall see!

So, this is the result of the loving and caring work of three different nurses. Yes, not clean-shaven (I should not trim it further, as shaving blades are a risk of reinfection).

Anyway… I guess the bits of hair you see over the place will not take too long to become a beard again, even get somewhat respectable. But I thought some of you would like to see the real me™ 😉

PS- Thanks to all who have reached out with good wishes. All is fine!

Planet DebianReproducible Builds: Supporter spotlight: Jan Nieuwenhuizen on Bootstrappable Builds, GNU Mes and GNU Guix

The Reproducible Builds project relies on several projects, supporters and sponsors for financial support, but they are also valued as ambassadors who spread the word about our project and the work that we do.

This is the fourth instalment in a series featuring the projects, companies and individuals who support the Reproducible Builds project.

We started this series by featuring the Civil Infrastructure Platform project and followed this up with a post about the Ford Foundation, as well as recent ones about ARDC and the Google Open Source Security Team (GOSST). Today, however, we will be talking with Jan Nieuwenhuizen about Bootstrappable Builds, GNU Mes and GNU Guix.


Chris Lamb: Hi Jan, thanks for taking the time to talk with us today. First, could you briefly tell me about yourself?

Jan: Thanks for the chat; it’s been a while! Well, I’ve always been trying to find something new and interesting that is just asking to be created but is mostly being overlooked. That’s how I came to work on GNU Guix and create GNU Mes to address the bootstrapping problem that we have in free software. It’s also why I have been working on releasing Dezyne, a programming language and set of tools to specify and formally verify concurrent software systems as free software.

Briefly summarised, compilers are often written in the language they are compiling. This creates a chicken-and-egg problem which leads users and distributors to rely on opaque, pre-built binaries of those compilers that they use to build newer versions of the compiler. To gain trust in our computing platforms, we need to be able to tell how each part was produced from source, and opaque binaries are a threat to user security and user freedom since they are not auditable. The goal of bootstrappability (and the bootstrappable.org project in particular) is to minimise the amount of these “bootstrap” binaries.

Anyway, after studying Physics at Eindhoven University of Technology (TU/e), I worked for digicash.com, a startup trying to create a digital and anonymous payment system – sadly, however, a traditional account-based system won. Separate to this, as there was no software (either free or proprietary) to automatically create beautiful music notation, together with Han-Wen Nienhuys, I created GNU LilyPond. Ten years ago, I took the initiative to co-found a democratic school in Eindhoven based on the principles of sociocracy. And last Christmas I finally went vegan, after being mostly vegetarian for about 20 years!


Chris: For those who have not heard of it before, what is GNU Guix? What are the key differences between Guix and other Linux distributions?

Jan: GNU Guix is both a package manager and a full-fledged GNU/Linux distribution. In both forms, it provides state-of-the-art package management features such as transactional upgrades and package roll-backs, hermetically sealed build environments, unprivileged package management as well as per-user profiles. One obvious difference is that Guix forgoes the usual Filesystem Hierarchy Standard (ie. /usr, /lib, etc.), but there are other significant differences too, such as Guix being scriptable using Guile/Scheme, as well as Guix’s dedication and focus on free software.


Chris: How does GNU Guix relate to GNU Mes? Or, rather, what problem is Mes attempting to solve?

Jan: GNU Mes was created to address the security concerns that arise from bootstrapping an operating system such as Guix. Even if this process entirely involves free software (i.e. the source code is, at least, available), this commonly uses large and unauditable binary blobs.

Mes is a Scheme interpreter written in a simple subset of C and a C compiler written in Scheme, and it comes with a small, bootstrappable C library. Twice, the Mes bootstrap has halved the size of opaque binaries that were needed to bootstrap GNU Guix. These reductions were achieved by first replacing GNU Binutils, GNU GCC and the GNU C Library with Mes, and then replacing Unix utilities such as awk, bash, coreutils, grep, sed, etc., with Gash and Gash-Utils. The final goal of Mes is to help create a full-source bootstrap for any interested UNIX-like operating system.


Chris: What is the current status of Mes?

Jan: Mes supports all that is needed from ‘R5RS’ and GNU Guile to run MesCC with Nyacc, the C parser written for Guile, for 32-bit x86 and ARM. The next step for Mes would be to become more compatible with Guile, e.g., to have guile-module support and to support running Gash and Gash-Utils.

In working to create a full-source bootstrap, I have disregarded the kernel and Guix build system for now, but otherwise, all packages should be built from source, and obviously, no binary blobs should go in. We still need a Guile binary to execute some scripts, and it will take at least another one to two years to remove that binary. I’m using the 80/20 approach, cutting corners initially to get something working and useful early.

Another metric would be how many architectures we have. We are quite a way with ARM, tinycc now works, but there are still problems with GCC and Glibc. RISC-V is coming, too, which could be another metric. Someone has looked into picking up NixOS this summer. “How many distros do anything about reproducibility or bootstrappability?” The bootstrappability community is so small that we don’t ‘need’ metrics, sadly. The number of bytes of binary seed is a nice metric, but running the whole thing on a full-fledged Linux system is tough to put into a metric. Also, it is worth noting that I’m developing on a modern Intel machine (ie. a platform with a management engine), that’s another key component that doesn’t have metrics.


Chris: From your perspective as a Mes/Guix user and developer, what does ‘reproducibility’ mean to you? Are there any related projects?

Jan: From my perspective, I’m more into the problem of bootstrapping, and reproducibility is a prerequisite for bootstrappability. Reproducibility clearly takes a lot of effort to achieve, however. It’s relatively easy to install some Linux distribution and be happy, but if you look at communities that really care about security, they are investing in reproducibility and other ways of improving the security of their supply chain. Projects I believe are complementary to Guix and Mes include NixOS, Debian and — on the hardware side — the RISC-V platform shares many of our core principles and goals.


Chris: Well, what are these principles and goals?

Jan: Reproducibility and bootstrappability often feel like the “next step” in the frontier of free software. If you have all the sources and you can’t reproduce a binary, that just doesn’t “feel right” anymore. We should start to desire (and demand) transparent, elegant and auditable software stacks. To a certain extent, that’s always been a low-level intent since the beginning of free software, but something clearly got lost along the way.

On the other hand, if you look at the NPM or Rust ecosystems, we see a world where people directly install binaries. As they are not as supportive of copyleft as the rest of the free software community, people in our area are doing more in response, so that what we have continues to remain free, and so that we don’t fall asleep and wake up in a couple of years to find, for example, Rust in the Linux kernel and (more importantly) big binary blobs required to use our systems. It’s an excellent time to advance right now, so we should get a foothold in and ensure we don’t lose any more.


Chris: What would be your ultimate reproducibility goal? And what would the key steps or milestones be to reach that?

Jan: The “ultimate” goal would be to have a system built with open hardware, with all software on it fully bootstrapped from its source. This bootstrap path should be easy to replicate and straightforward to inspect and audit. All fully reproducible, of course! In essence, we would have solved the supply chain security problem.

Our biggest challenge is ignorance. There is much unawareness about the importance of what we are doing. As it is rather technical and doesn’t really affect everyday computer use, that is not surprising. This unawareness can be a great force driving us in the opposite direction. Think of Rust being allowed in the Linux kernel, or Python being required to build a recent GNU C Library (glibc). Also, the fact that companies like Google/Apple still want to play “us” vs “them”, not willing to support GPL software. Not ready yet to truly support user freedom.

Take the infamous log4j bug — everyone is using “open source” these days, but nobody wants to take responsibility and help develop or nurture the community. Not “ecosystem”, as that’s how it’s being approached right now: live and let live/die: see what happens without taking any responsibility. We are growing and we are strong and we can do a lot… but if we have to work against those powers, it can become problematic. So, let’s spread our great message and get more people involved!


Chris: What has been your biggest win?

Jan: From a technical point of view, the “full-source” bootstrap has been our biggest win. A talk by Carl Dong at the 2019 Breaking Bitcoin conference stated that connecting Jeremiah Orians’ Stage0 project to Mes would be the “holy grail” of bootstrapping, and we recently managed to achieve just that: in other words, starting from hex0, a 357-byte binary, we can now build the entire Guix system.

This past year we have not made significant visible progress, however, as our funding was unfortunately not there. The Stage0 project has advanced in RISC-V. A month ago, though, I secured NLnet funding for another year, and thanks to NLnet, Ekaitz Zarraga and Timothy Sample will work on GNU Mes and the Guix bootstrap as well. Separate to this, the bootstrappable community has grown a lot from two people it was six years ago: there are now currently over 100 people in the #bootstrappable IRC channel, for example. The enlarged community is possibly an even more important win going forward.


Chris: How realistic is a 100% bootstrappable toolchain? And from someone who has been working in this area for a while, is “solving Trusting Trust” actually feasible in reality?

Jan: Two answers: Yes and no, it really depends on your definition. One great thing is that the whole Stage0 project can also run on the Knight virtual machine, a hardware platform that was designed, I think, in the 1970s. I believe we can and must do better than we are doing today, and that there’s a lot of value in it.

The core issue is not the trust; we can probably all trust each other. On the other hand, we don’t want to trust each other or even ourselves. I am not, personally, going to inspect my RISC-V laptop, and other people who create the hardware probably do not want to inspect the software. The answer comes back to being conscientious and doing what is right. Inserting GCC as a binary blob is not right. I think we can do better, and that’s what I’d like to do. The security angle is interesting, but I don’t like the paranoid part of that; I like the beauty of what we are creating together and stepwise improving on that.


Chris: Thanks for taking the time to talk to us today. If someone wanted to get in touch or learn more about GNU Guix or Mes, where might someone go?

Jan: Sure! First, check out:

I’m also on Twitter (@janneke_gnu) and on octodon.social (@janneke@octodon.social).


Chris: Thanks for taking the time to talk to us today.

Jan: No problem. :)




For more information about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

Worse Than FailureCodeSOD: Failed Successfully

Martin's company had written a set of command line tools which their internal analysts could then string together via shell scripts to do their work. It was finicky and fragile, but frankly didn't work too badly for most cases.

There was one tool, however, which seemed to be the source of an unfair number of problems. Eventually, Martin sat down with an analyst to see what was going wrong. The program would exit successfully, but wouldn't actually do any of the work it was supposed to. Instead of doing the normal thing and writing errors to STDERR, the tool wrote to a file. Which file, however, was determined by reading some shell variables, but the shell variables used by each of the tools were slightly different, because why would you build a consistent interface for your suite of analytical tools?

Eventually, Martin was able to figure out where the errors were going, and saw that the tool was failing to connect to the backend database. That was a simple matter (just fix the connection string), but why was it exiting successfully when it couldn't connect?

/* Connect to Oracle*/
iStatus = BDconnect();
if(iStatus != SUCCESS_STATUS)
{
    write_log(ERROR_LOG_FILE,"BDconnect() FAILED sql:%d",iStatus);
    exit(EXIT_SUCCESS);
}

It exited with a successful return code, and thus the shell scripts the analysts were using assumed, wrongly, that the application had succeeded. It wasn't too much to fix this specific case, but as it turned out, this "exit with success even when you fail" was an endemic pattern across many of these tools.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Planet DebianLouis-Philippe Véronneau: Clojure Team 2022 Sprint Report

This is the report for the Debian Clojure Team remote sprint that took place on May 13-14th.

Looking at my previous blog entries, this was my first Debian sprint since July 2020! Crazy how fast time flies...

Many thanks to those who participated, namely:

  • Rob Browning (rlb)
  • Elana Hashman (ehashman)
  • Jérôme Charaoui (lavamind)
  • Leandro Doctors (allentiak)
  • Louis-Philippe Véronneau (pollo)

Sadly, Utkarsh Gupta — although having planned on participating — ended up not being able to and worked on DebConf Bursary paperwork instead.

rlb

Rob mostly worked on creating a dh-clojure tool to help make packaging Clojure libraries easier.

At the moment, most of the packaging is done manually, by invoking build tools by hand. Having a tool to automate many of the steps required to build Clojure packages would go a long way in making them more uniform.

His work (although still very much a WIP) can be found here: https://salsa.debian.org/rlb/dh-clojure/

ehashman

Elana:

  • Finished the Java Team VCS migration to the Clojure Team namespace.
  • Worked on updating Leiningen to 2.9.8.
  • Proposed an upstream dependency update in Leiningen to match Debian's most recent version.
  • Gave pollo Owner access on the Clojure Team namespace and added lavamind as a Developer.
  • Uploaded Clojure 1.10.3-1.
  • Updated sjacket-clojure to version 0.1.1.1 and uploaded it to experimental.
  • Added build tests to spec-alpha-clojure.
  • Filed bug #1010995 for missing test dependency for Clojure.
  • Closed bugs #976151, #992735 and #992736.

lavamind

It was Jérôme's first time working on Clojure packages, and things went great! During the sprint, he:

  • Joined the Clojure Team on salsa.
  • Identified missing dependencies to update puppetdb to the 7.x release.
  • Learned how to package Clojure libraries in Debian.
  • Packaged murphy-clojure, truss-clojure and encore-clojure and uploaded them to NEW.
  • Began to package nippy-clojure.

allentiak

Leandro joined us on Saturday, since he couldn't get off work on Friday. He mostly continued working on replacing our in-house scripts for /usr/bin/clojure with upstream's, a task he had already started during GSoC 2021.

Sadly, none of us were familiar with Debian's mechanism for alternatives. If you (yes you, dear reader) are familiar with it, I'm sure he would warmly welcome feedback on his development branch.

pollo

As for me, I:

  • Fixed a classpath bug in core-async-clojure that was breaking other libraries.
  • Added meaningful autopkgtests to core-async-clojure.
  • Uploaded new versions of tools-analyzer-clojure and trapperkeeper-clojure with autopkgtests.
  • Updated pomegranate-clojure and nrepl-clojure to the latest upstream version and revamped the way they were packaged.
  • Assisted lavamind with Clojure packaging.

Overall, it was quite a productive sprint!

Thanks to Debian for sponsoring our food during the sprint. It was nice to be able to concentrate on fixing things instead of making food :)

Here's a bonus picture of the nice sushi platter I ended up getting for dinner on Saturday night:

Picture of a sushi platter

Krebs on SecurityWhen Your Smart ID Card Reader Comes With Malware

Millions of U.S. government employees and contractors have been issued a secure smart ID card that enables physical access to buildings and controlled spaces, and provides access to government computer networks and systems at the cardholder’s appropriate security level. But many government employees aren’t issued an approved card reader device that lets them use these cards at home or remotely, and so turn to low-cost readers they find online. What could go wrong? Here’s one example.

A sample Common Access Card (CAC). Image: Cac.mil.

KrebsOnSecurity recently heard from a reader — we’ll call him “Mark” because he wasn’t authorized to speak to the press — who works in IT for a major government defense contractor and was issued a Personal Identity Verification (PIV) government smart card designed for civilian employees. Not having a smart card reader at home and lacking any obvious guidance from his co-workers on how to get one, Mark opted to purchase a $15 reader from Amazon that said it was made to handle U.S. government smart cards.

The USB-based device Mark settled on is the first result that currently comes up when one searches Amazon.com for “PIV card reader.” The card reader Mark bought was sold by a company called Saicoo, whose sponsored Amazon listing advertises a “DOD Military USB Common Access Card (CAC) Reader” and has more than 11,700 mostly positive ratings.

The Common Access Card (CAC) is the standard identification for active duty uniformed service personnel, selected reserve, DoD civilian employees, and eligible contractor personnel. It is the principal card used to enable physical access to buildings and controlled spaces, and provides access to DoD computer networks and systems.

Mark said when he received the reader and plugged it into his Windows 10 PC, the operating system complained that the device’s hardware drivers weren’t functioning properly. Windows suggested consulting the vendor’s website for newer drivers.

The Saicoo smart card reader that Mark purchased. Image: Amazon.com

So Mark went to the website mentioned on Saicoo’s packaging and found a ZIP file containing drivers for Linux, Mac OS and Windows:

Image: Saicoo

Out of an abundance of caution, Mark submitted Saicoo’s drivers file to Virustotal.com, which simultaneously scans any shared files with more than five dozen antivirus and security products. Virustotal reported that some 43 different security tools detected the Saicoo drivers as malicious. The consensus seems to be that the ZIP file currently harbors a malware threat known as Ramnit, a fairly common but dangerous trojan horse that spreads by appending itself to other files.

Image: Virustotal.com

Ramnit is a well-known and older threat — first surfacing more than a decade ago — but it has evolved over the years and is still employed in more sophisticated data exfiltration attacks. Amazon said in a written statement that it was investigating the reports.

“Seems like a potentially significant national security risk, considering that many end users might have elevated clearance levels who are using PIV cards for secure access,” Mark said.

Mark said he contacted Saicoo about their website serving up malware, and received a response saying the company’s newest hardware did not require any additional drivers. He said Saicoo did not address his concern that the driver package on its website was bundled with malware.

In response to KrebsOnSecurity’s request for comment, Saicoo sent a somewhat less reassuring reply.

“From the details you offered, issue may probably caused by your computer security defense system as it seems not recognized our rarely used driver & detected it as malicious or a virus,” Saicoo’s support team wrote in an email.

“Actually, it’s not carrying any virus as you can trust us, if you have our reader on hand, please just ignore it and continue the installation steps,” the message continued. “When driver installed, this message will vanish out of sight. Don’t worry.”

Saicoo’s response to KrebsOnSecurity.

The trouble with Saicoo’s apparently infected drivers may be little more than a case of a technology company having their site hacked and responding poorly. Will Dormann, a vulnerability analyst at CERT/CC, wrote on Twitter that the executable files (.exe) in the Saicoo drivers ZIP file were not altered by the Ramnit malware — only the included HTML files.

Dormann said it’s bad enough that searching for device drivers online is one of the riskiest activities one can undertake online.

“Doing a web search for drivers is a VERY dangerous (in terms of legit/malicious hit ratio) search to perform, based on results of any time I’ve tried to do it,” Dormann added. “Combine that with the apparent due diligence of the vendor outlined here, and well, it ain’t a pretty picture.”

But by all accounts, the potential attack surface here is enormous, as many federal employees clearly will purchase these readers from a myriad of online vendors when the need arises. Saicoo’s product listings, for example, are replete with comments from customers who self-state that they work at a federal agency (and several who reported problems installing drivers).

A thread about Mark’s experience on Twitter generated a strong response from some of my followers, many of whom apparently work for the U.S. government in some capacity and have government-issued CAC or PIV cards.

Two things emerged clearly from that conversation. The first was general confusion about whether the U.S. government has any sort of list of approved vendors. It does. The General Services Administration (GSA), the agency which handles procurement for federal civilian agencies, maintains a list of approved card reader vendors at idmanagement.gov (Saicoo is not on that list). [Thanks to @MetaBiometrics and @shugenja for the link!]

The other theme that ran through the Twitter discussion was the reality that many people find buying off-the-shelf readers more expedient than going through the GSA’s official procurement process, whether it’s because they were never issued one or the reader they were using simply no longer worked or was lost and they needed another one quickly.

“Almost every officer and NCO [non-commissioned officer] I know in the Reserve Component has a CAC reader they bought because they had to get to their DOD email at home and they’ve never been issued a laptop or a CAC reader,” said David Dixon, an Army veteran and author who lives in Northern Virginia. “When your boss tells you to check your email at home and you’re in the National Guard and you live 2 hours from the nearest [non-classified military network installation], what do you think is going to happen?”

Interestingly, anyone asking on Twitter about how to navigate purchasing the right smart card reader and getting it all to work properly is invariably steered toward militarycac.com. The website is maintained by Michael Danberry, a decorated and retired Army veteran who launched the site in 2008 (its text and link-heavy design very much takes one back to that era of the Internet and webpages in general). His site has even been officially recommended by the Army (PDF). Mark shared emails showing Saicoo itself recommends militarycac.com.

Image: Militarycac.com.

“The Army Reserve started using CAC logon in May 2006,” Danberry wrote on his “About” page. “I [once again] became the ‘Go to guy’ for my Army Reserve Center and Minnesota. I thought Why stop there? I could use my website and knowledge of CAC and share it with you.”

Danberry did not respond to requests for an interview — no doubt because he’s busy doing tech support for the federal government. The friendly message on Danberry’s voicemail instructs support-needing callers to leave detailed information about the issue they’re having with CAC/PIV card readers.

Dixon said Danberry has “done more to keep the Army running and connected than all the G6s [Army Chief Information Officers] put together.”

In many ways, Mr. Danberry is the equivalent of that little known software developer whose tiny open-sourced code project ends up becoming widely adopted and eventually folded into the fabric of the Internet.  I wonder if he ever imagined 15 years ago that his website would one day become “critical infrastructure” for Uncle Sam?

,

Cryptogram iPhone Malware that Operates Even When the Phone Is Turned Off

Researchers have demonstrated iPhone malware that works even when the phone is fully shut down.

It turns out that the iPhone’s Bluetooth chip—which is key to making features like Find My work—has no mechanism for digitally signing or even encrypting the firmware it runs. Academics at Germany’s Technical University of Darmstadt figured out how to exploit this lack of hardening to run malicious firmware that allows the attacker to track the phone’s location or run new features when the device is turned off.

[…]

The research is the first—or at least among the first—to study the risk posed by chips running in low-power mode. Not to be confused with iOS’s low-power mode for conserving battery life, the low-power mode (LPM) in this research allows chips responsible for near-field communication, ultra wideband, and Bluetooth to run in a special mode that can remain on for 24 hours after a device is turned off.

The research is fascinating, but the attack isn’t really feasible. It requires a jailbroken phone, which is hard to pull off in an adversarial setting.

Slashdot thread.

Worse Than FailureCodeSOD: The GUID Utillity

Let's say you saw a method called StrToGuid in a C# codebase. Your first thought might be: "Wait, isn't there a built-in parse? Well, I guess maybe they do some sort of exception handling. But it still doesn't seem right." And then you'd take a look at the method signature, see that it takes both a string and an integer named counter, and think: "Wait, what?"

Henrik H had a similar experience. His team hired a new developer, someone with 15+ years of experience. This is what they contributed to the codebase:

private Guid StrToGuid(string s, int counter)
{
    Guid newGuid = new Guid();
    if (counter < 10)
        Utillity.ScreenAndLog("d", s);
    try
    {
        newGuid = Guid.Parse(s);
        _noOfOKGuids++;
    }
    catch(ArgumentNullException)
    {
        _noOfEmptyGuids++;
    }
    catch(FormatException)
    {
        _noOfErrorGuids++;
    }
    return newGuid;
}

So, if s contains a valid GUID, this parses it and returns it. Otherwise, it returns new Guid(), which is not a fresh identifier but the all-zeroes Guid.Empty. It also accepts that counter parameter, which apparently exists so that only the first ten values we attempt to parse get logged via the "Utillity"(sic) class. Instead of responding to exceptions, we just increment counters- counters which never get checked by any other method, by the way, so this is essentially just "swallowing the exceptions", as it were.
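For comparison, a sketch of the idiomatic version using the built-in Guid.TryParse. The counter fields are carried over from the original class; note that TryParse simply returns false for null input, so preserving the separate "empty" counter would need an explicit string.IsNullOrEmpty check.

    private Guid StrToGuid(string s)
    {
        if (Guid.TryParse(s, out Guid parsed))
        {
            _noOfOKGuids++;
            return parsed;
        }
        // No exceptions used for flow control; null, empty and malformed
        // strings all land here and fall back to Guid.Empty.
        _noOfErrorGuids++;
        return Guid.Empty;
    }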

Henrik adds: "Yes, he is related to the boss."

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

Cryptogram Attacks on Managed Service Providers Expected to Increase

CISA, NSA, FBI, and similar organizations in the other Five Eyes countries are warning that attacks on MSPs—as a vector to their customers—are likely to increase. No details about what this prediction is based on. Makes sense, though. The SolarWinds attack was incredibly successful for the Russian SVR, and a blueprint for future attacks.

News articles.

Planet DebianMatthew Garrett: Can we fix bearer tokens?

Last month I wrote about how bearer tokens are just awful, and a week later Github announced that someone had managed to exfiltrate bearer tokens from Heroku that gave them access to, well, a lot of Github repositories. This has inevitably resulted in a whole bunch of discussion about a number of things, but people seem to be largely ignoring the fundamental issue that maybe we just shouldn't have magical blobs that grant you access to basically everything even if you've copied them from a legitimate holder to Honest John's Totally Legitimate API Consumer.

To make it clearer what the problem is here, let's use an analogy. You have a safety deposit box. To gain access to it, you simply need to be able to open it with a key you were given. Anyone who turns up with the key can open the box and do whatever they want with the contents. Unfortunately, the key is extremely easy to copy - anyone who is able to get hold of your keyring for a moment is in a position to duplicate it, and then they have access to the box. Wouldn't it be better if something could be done to ensure that whoever showed up with a working key was someone who was actually authorised to have that key?

To achieve that we need some way to verify the identity of the person holding the key. In the physical world we have a range of ways to achieve this, from simply checking whether someone has a piece of ID that associates them with the safety deposit box all the way up to invasive biometric measurements that supposedly verify that they're definitely the same person. But computers don't have passports or fingerprints, so we need another way to identify them.

When you open a browser and try to connect to your bank, the bank's website provides a TLS certificate that lets your browser know that you're talking to your bank instead of someone pretending to be your bank. The spec allows this to be a bi-directional transaction - you can also prove your identity to the remote website. This is referred to as "mutual TLS", or mTLS, and a successful mTLS transaction ends up with both ends knowing who they're talking to, as long as they have a reason to trust the certificate they were presented with.

That's actually a pretty big constraint! We have a reasonable model for the server - it's something that's issued by a trusted third party and it's tied to the DNS name for the server in question. Clients don't tend to have stable DNS identity, and that makes the entire thing sort of awkward. But, thankfully, maybe we don't need to? We don't need the client to be able to prove its identity to arbitrary third party sites here - we just need the client to be able to prove it's a legitimate holder of whichever bearer token it's presenting to that site. And that's a much easier problem.

Here's the simple solution - clients generate a TLS cert. This can be self-signed, because all we want to do here is be able to verify whether the machine talking to us is the same one that had a token issued to it. The client contacts a service that's going to give it a bearer token. The service requests mTLS auth without being picky about the certificate that's presented. The service embeds a hash of that certificate in the token before handing it back to the client. Whenever the client presents that token to any other service, the service ensures that the mTLS cert the client presented matches the hash in the bearer token. Copy the token without copying the mTLS certificate and the token gets rejected. Hurrah hurrah hats for everyone.
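As a sketch of what that last check can look like in practice: RFC 8705 stores the base64url-encoded SHA-256 thumbprint of the client certificate in the token's "cnf"/"x5t#S256" confirmation member, and the service recomputes and compares it on every request. The .NET types below are just one way to express the comparison; how the hash gets into and out of the token is up to the issuer.

    using System;
    using System.Security.Cryptography;
    using System.Security.Cryptography.X509Certificates;

    static class CertBoundTokenSketch
    {
        // True if the certificate presented on this mTLS connection is the one
        // the bearer token was bound to when it was issued.
        static bool TokenBoundToThisCert(string x5tS256FromToken, X509Certificate2 clientCert)
        {
            byte[] hash = SHA256.HashData(clientCert.RawData);
            string presented = Convert.ToBase64String(hash)
                                      .TrimEnd('=')
                                      .Replace('+', '-')
                                      .Replace('/', '_');   // base64url, per RFC 8705
            return string.Equals(presented, x5tS256FromToken, StringComparison.Ordinal);
        }
    }

The issuer computes the same SHA-256 over the client certificate at token-issue time and embeds it in the token, so a copied token presented over a connection authenticated with any other certificate fails this comparison.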

Well except for the obvious problem that if you're in a position to exfiltrate the bearer tokens you can probably just steal the client certificates and keys as well, and now you can pretend to be the original client and this is not adding much additional security. Fortunately pretty much everything we care about has the ability to store the private half of an asymmetric key in hardware (TPMs on Linux and Windows systems, the Secure Enclave on Macs and iPhones, either a piece of magical hardware or Trustzone on Android) in a way that avoids anyone being able to just steal the key.

How do we know that the key is actually in hardware? Here's the fun bit - it doesn't matter. If you're issuing a bearer token to a system then you're already asserting that the system is trusted. If the system is lying to you about whether or not the key it's presenting is hardware-backed then you've already lost. If it lied and the system is later compromised then sure all your apes get stolen, but maybe don't run systems that lie and avoid that situation as a result?

Anyway. This is covered in RFC 8705 so why aren't we all doing this already? From the client side, the largest generic issue is that TPMs are astonishingly slow in comparison to doing a TLS handshake on the CPU. RSA signing operations on TPMs can take around half a second, which doesn't sound too bad, except your browser is probably establishing multiple TLS connections to subdomains on the site it's connecting to and performance is going to tank. Fixing this involves doing whatever's necessary to convince the browser to pipe everything over a single TLS connection, and that's just not really where the web is right at the moment. Using EC keys instead helps a lot (~0.1 seconds per signature on modern TPMs), but it's still going to be a bottleneck.

The other problem, of course, is that ecosystem support for hardware-backed certificates is just awful. Windows lets you stick them into the standard platform certificate store, but the docs for this are hidden in a random PDF in a Github repo. Macs require you to do some weird bridging between the Secure Enclave API and the keychain API. Linux? Well, the standard answer is to do PKCS#11, and I have literally never met anybody who likes PKCS#11 and I have spent a bunch of time in standards meetings with the sort of people you might expect to like PKCS#11 and even they don't like it. It turns out that loading a bunch of random C bullshit that has strong feelings about function pointers into your security critical process is not necessarily something that is going to improve your quality of life, so instead you should use something like this and just have enough C to bridge to a language that isn't secretly plotting to kill your pets the moment you turn your back.

And, uh, obviously none of this matters at all unless people actually support it. Github has no support at all for validating the identity of whoever holds a bearer token. Most issuers of bearer tokens have no support for embedding holder identity into the token. This is not good! As of last week, all three of the big cloud providers support virtualised TPMs in their VMs - we should be running CI on systems that can do that, and tying any issued tokens to the VMs that are supposed to be making use of them.

So sure, this isn't trivial. But it's also not impossible, and making this stuff work would improve the security of, well, everything. We literally have the technology to prevent attacks like the one Github suffered. What do we have to do to get people to actually start working on implementing that?


Worse Than FailureCodeSOD: An Animated Block

"There are a few more functions like this in the same file," writes Jenny, about today's submission. This is one which largely does speak for itself.

const gright = () => {
  setIscountright(isCountright + 1);
  if(isCountright === 0) {
    setIsleft(!isLeft);
    setIsfirstdot(!isFirstdot);
    setIssecdot(!isSecdot);
    setInfof('Once activated buttons on the right panel will appear');
    setIssquareleft(!isSquareleft);
    setIsanimBottRightIn(!isAnimBottRightIn);
  }
  if(isCountright === 1) {
    setIssecdot(!isSecdot);
    setIsthirddot(!isThirdtdot);
    setInfof('Tap on them to change content of the projection on the wall');
    setIselmscale(!isElmscale);
    setIssquareleft(!isSquareleft);
    setIsmap(!isMap);
    setIsmapdot(!isMapdot);
    setIsborderwhite(!isBorderwhite);
  }
  if(isCountright === 2) {
    setIsright(!isRight);
    setIsthirddot(!isThirdtdot);
    setIsfourthdot(!isForthdot);
    setInfof('Use the menu bar in top left corner to switch between pages');
    setIssquareleft(isSquareleft);
    setIsanimBottRightIn(!isAnimBottRightIn);
    setIselmscale(!isElmscale);
    setIsmap(!isMap);
    setIsmapdot(!isMapdot);
    setIsborderwhite(!isBorderwhite);
    setIsindicator(!isIndicator);
    setTimeout(():void => {
      setAnimain(false);
      setMainsec(true);
      setIsindicator(false);
      setIsindicator2(true);
    }, 1000);
    setTimeout(():void => {
      setMainsec(false);
      setMainth(true);
      setIsindicator2(false);
      setIsindicator3(true);
      setShowdone(true);
    }, 2200);
  }
}

Clearly, this is setting all sorts of display properties inside of a UI component. But it's also using its own internal conventions and namings that just make everything harder to understand. gright? setMainth? And those capitalizations. Plus, look at these giant piles of setters: it reads like someone created a bunch of global variables and then wrapped them in setter functions to make them feel less global.

The logic here isn't that complicated, but because of the names, because of the spammed negations, because of the conventions, I still don't really have any good sense of what it does. Maybe if I knew more of the conventions, I'd think this code was all gright, but as it is, I'm pretty sure it's all gwrong.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Cryptogram The NSA Says that There are No Known Flaws in NIST’s Quantum-Resistant Algorithms

Rob Joyce, the director of cybersecurity at the NSA, said so in an interview:

The NSA already has classified quantum-resistant algorithms of its own that it developed over many years, said Joyce. But it didn’t enter any of its own in the contest. The agency’s mathematicians, however, worked with NIST to support the process, trying to crack the algorithms in order to test their merit.

“Those candidate algorithms that NIST is running the competitions on all appear strong, secure, and what we need for quantum resistance,” Joyce said. “We’ve worked against all of them to make sure they are solid.”

The purpose of the open, public international scrutiny of the separate NIST algorithms is “to build trust and confidence,” he said.

I believe him. This is what the NSA did with NIST’s candidate algorithms for AES and then for SHA-3. NIST’s Post-Quantum Cryptography Standardization Process looks good.

I still worry about the long-term security of the submissions, though. In 2018, in an essay titled “Cryptography After the Aliens Land,” I wrote:

…there is always the possibility that those algorithms will fall to aliens with better quantum techniques. I am less worried about symmetric cryptography, where Grover’s algorithm is basically an upper limit on quantum improvements, than I am about public-key algorithms based on number theory, which feel more fragile. It’s possible that quantum computers will someday break all of them, even those that today are quantum resistant.

It took us a couple of decades to fully understand von Neumann computer architecture. I’m sure it will take years of working with a functional quantum computer to fully understand the limits of that architecture. And some things that we think of as computationally hard today will turn out not to be.

,

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.11.1.1.0 on CRAN: Updates

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has syntax deliberately close to Matlab and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 978 other packages on CRAN, downloaded over 24 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 469 times according to Google Scholar.

This release brings the first new upstream fix in the new release series 11.*. In particular, treatment of ill-conditioned matrices is further strengthened. We once again tested this very rigorously via three different RC releases, each of which got a full reverse-dependencies run (for which results are always logged here). A minor issue with old g++ compilers was found once 11.1.0 was tagged, so this upstream release is now 11.1.1. Also fixed is an OpenMP setup issue where Justin Silverman noticed that we did not propagate the -fopenmp setting correctly.

The full set of changes (since the last CRAN release 0.11.0.0.0) follows.

Changes in RcppArmadillo version 0.11.1.1.0 (2022-05-15)

  • Upgraded to Armadillo release 11.1.1 (Angry Kitchen Appliance)

    • added inv_opts::no_ugly option to inv() and inv_sympd() to disallow inverses of poorly conditioned matrices

    • more efficient handling of rank-deficient matrices via inv_opts::allow_approx option in inv() and inv_sympd()

    • better detection of rank deficient matrices by solve()

    • faster handling of symmetric and diagonal matrices by cond()

  • The configure script now propagates the 'found' case again, thanks to Justin Silverman for the heads-up and suggested fix (Dirk and Justin in #376 and #377 fixing #375).

Changes in RcppArmadillo version 0.11.0.1.0 (2022-04-14)

  • Upgraded to Armadillo release 11.0.1 (Creme Brulee)

    • fix miscompilation of inv() and inv_sympd() functions when using inv_opts::allow_approx and inv_opts::tiny options

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

David BrinFrom geology to quantum science to a healthy planet...

For your weekend... as I traditionally do, here's a round-up of recent science news...


First, here's the latest CARTA conference - the Center for Anthropogeny (human origins) at UCSD. This one with talks having to do with the theme of "The Planet Altering Apes."


== Physical Science ==


The observation of the Higgs boson  at the Large Hadron Collider (LHC) has validated the last missing piece of the standard model (SM) of elementary particle physics.  The mass of the W boson, a mediator of the weak force between elementary particles, should be tightly constrained by the symmetries of the standard model of particle physics.  So… do recent results mean we have a problem here?


“Wireless Sensors: Tiny Battery-Free Devices Float In The Wind Like Dandelion Seeds…” or a lot like the ‘localizer’ nanochips in Vernor Vinge’s great novel A Deepness in the Sky.


A new form of ice has been discovered, which forms at high pressures: Shades of Kurt Vonnegut! Here’s ‘ice-ten’ or Ice-X! Scientists speculate that it could be common on distant, water-rich exoplanets.


Asking the Ultimate Questions, Robert Laurence Kuhn’s recent presentation at the Institute of Art and ideas (IAI-UK), is posted on the Closer To Truth YouTube channel.



== The biologic world ==


States and cities have also begun to decriminalize psilocybin – the core of magic mushrooms - in general or for medicinal purposes, especially treatment of depression. 


The disturbing rise of bird flu; already more than 27 million birds have died or been slaughtered. Will we see a poultry vaccine?


Apparently fish can calculate... stingrays can perform simple addition and subtraction in the low digit range.


Forty to fifty percent of all animal species are actually parasites, including 300,000 different types of worms that parasitize vertebrates.


Interesting question: Why didn't our primitive ancestors get cavities?



== Insights into our planet ==


In Earth’s past, two gargantuan 'super-mountain' ranges may have fueled two of the biggest evolutionary boom times in our planet's history — the first appearance of complex cells roughly 2 billion years ago, and the Cambrian explosion of marine life 541 million years ago.  


Is Earth’s ‘solid’ inner core something like my fictional-hypothetical descriptions in Earth? If the material is ‘superionic,’ then the majority iron atoms might be 'solid' in the crystalline lattice structure, whereas the carbon, hydrogen, and oxygen molecules would diffuse through the medium, creating the liquid-like element.  


And in related matters, the top mineral form of the mantle is perovskites… which are still (since I wrote Earth) among the best high pressure/high temperature superconductors. So… is she alive? Way too soon to tell. But the traits (or potentialities) keep piling up!


Moving a bit outward toward Earth's mantle… “Earth is layered like an onion, with a thin outer crust, a thick viscous mantle, a fluid outer core, and a solid inner core. Within the mantle, there are two massive blob-like structures, roughly on opposite sides of the planet. The blobs, more formally referred to as Large Low-Shear-Velocity Provinces (LLSVPs), are each the size of a continent and 100 times taller than Mt. Everest. One is under the African continent, while the other is under the Pacific Ocean.”  Might this explain the unusual solidity of the African continent?


Meanwhile, fast melting Alpine permafrost may contribute to rising global temperatures.


There have been wonderful paleontologic finds at the Tanis site, in the Dakotas, which show many creatures exceptionally well-preserved who seem to have died suddenly the very day that asteroid ended the era of the dinosaurs. I look forward to the show - Dinosaurs: The Final Day with Sir David Attenborough, which was broadcast on BBC One. A version for the U.S. science series Nova on the PBS network will be broadcast later in the year.


And...an allegory of uncertainty


Four quantum physicists are in a car. Heisenberg is driving like he is in The Matrix. Schrödinger is in the front seat waving at the other cars. Einstein and Bohr are in the back arguing when they get pulled over. The officer asks Heisenberg, “do you know how fast you were going?”

“No, but we know exactly where we are,” Heisenberg replies.


The officer looks confused and says, “you were going 120 km/h!”


Heisenberg throws his arms up and cries, “Great! Now we’re lost!”


The officer looks over the car and asks Schrödinger if they have anything in the trunk.


“A cat,” Schrödinger replies.


The officer opens the trunk and yells, “This cat is dead!”


Schrödinger angrily replies, “Well it is now.”


Bohr says, “on the bright side, a moment ago we didn’t have a position, speed, or a cat. Now we have all three!”


Fed up, the officer says, “I just want to know how many of you I need to bring back to the station!”


“Roll dice for it?” Einstein asks.


heh.


Now back to your regularly scheduled 21st Century crises...


Cryptogram ICE Is a Domestic Surveillance Agency

Georgetown has a new report on the highly secretive bulk surveillance activities of ICE in the US:

When you think about government surveillance in the United States, you likely think of the National Security Agency or the FBI. You might even think of a powerful police agency, such as the New York Police Department. But unless you or someone you love has been targeted for deportation, you probably don’t immediately think of Immigration and Customs Enforcement (ICE).

This report argues that you should. Our two-year investigation, including hundreds of Freedom of Information Act requests and a comprehensive review of ICE’s contracting and procurement records, reveals that ICE now operates as a domestic surveillance agency. Since its founding in 2003, ICE has not only been building its own capacity to use surveillance to carry out deportations but has also played a key role in the federal government’s larger push to amass as much information as possible about all of our lives. By reaching into the digital records of state and local governments and buying databases with billions of data points from private companies, ICE has created a surveillance infrastructure that enables it to pull detailed dossiers on nearly anyone, seemingly at any time. In its efforts to arrest and deport, ICE has—without any judicial, legislative or public oversight—reached into datasets containing personal information about the vast majority of people living in the U.S., whose records can end up in the hands of immigration enforcement simply because they apply for driver’s licenses; drive on the roads; or sign up with their local utilities to get access to heat, water and electricity.

ICE has built its dragnet surveillance system by crossing legal and ethical lines, leveraging the trust that people place in state agencies and essential service providers, and exploiting the vulnerability of people who volunteer their information to reunite with their families. Despite the incredible scope and evident civil rights implications of ICE’s surveillance practices, the agency has managed to shroud those practices in near-total secrecy, evading enforcement of even the handful of laws and policies that could be invoked to impose limitations. Federal and state lawmakers, for the most part, have yet to confront this reality.

EDITED TO ADD (5/13): A news article.

Planet DebianAntoine Beaupré: NVMe/SSD disk failure

Yesterday, my workstation (curie) was hung when I came in the office. After a "skinny elephant", the box rebooted, but it couldn't find the primary disk (in the BIOS). Instead, it booted on the secondary HDD drive, still running an old Fedora 27 install which somehow survived to this day, possibly because BTRFS is incomprehensible.

Somehow, I blindly accepted the Fedora prompt asking me to upgrade to Fedora 28, not realizing that:

  1. Fedora is now at release 36, not 28
  2. major upgrades take about an hour...
  3. ... and happen at boot time, blocking the entire machine (I'll remember this next time I laugh at Windows and Mac OS users stuck on updates on boot)
  4. you can't skip more than one major upgrade

Which means that upgrading to latest would take over 4 hours. Thankfully, it's mostly automated and seems to work pretty well (which is not exactly the case for Debian). It still seems like a lot of wasted time -- it would probably be better to just reinstall the machine at this point -- and not what I had planned to do that morning at all.

In any case, after waiting all that time, the machine booted (in Fedora) again, and now it could detect the SSD disk. The BIOS could find the disk too, so after I reinstalled grub (from Fedora) and fixed the boot order, it rebooted, but secureboot failed, so I turned that off (!?), and I was back in Debian.

I did an emergency backup with ddrescue, from the running system, which probably doesn't really work as a backup (because the filesystem is likely to be corrupt), but it was fast enough (20 minutes) and gave me some peace of mind. My offsite backups have been down for a while and since I treat my workstations as "cattle" (not "pets"), I don't have a solid recovery scenario for those situations other than "just reinstall and run Puppet", which takes a while.

Now I'm wondering what the next step is: probably replace the disk anyways (the new one is bigger: 1TB instead of 500GB), or keep the new one as a hot backup somehow. Too bad I don't have a snapshotting filesystem on there... (Technically, I have LVM, but LVM snapshots are heavy and slow, and can't atomically cover the entire machine.)

It's kind of scary how this thing failed: totally dropped off the bus, just not in the BIOS at all. I prefer the way spinning rust fails: clickety sounds, tons of warnings beforehand, partial recovery possible. With this new flashy junk, you just lose everything all at once. Not fun.

Planet DebianAntoine Beaupré: BTRFS notes

I'm not a fan of BTRFS. This page serves as a reminder of why, but also a cheat sheet to figure out basic tasks in a BTRFS environment because those are not obvious to me, even after repeatedly having to deal with them.

Content warning: there might be mentions of ZFS.

Stability concerns

I'm worried about BTRFS stability, which has been historically ... changing. RAID-5 and RAID-6 are still marked unstable, for example. It's kind of a lucky guess whether your current kernel will behave properly with your planned workload. For example, in Linux 4.9, RAID-1 and RAID-10 were marked as "mostly OK" with a note that says:

Needs to be able to create two copies always. Can get stuck in irreversible read-only mode if only one copy can be made.

Even as of now, RAID-1 and RAID-10 have this note:

The simple redundancy RAID levels utilize different mirrors in a way that does not achieve the maximum performance. The logic can be improved so the reads will spread over the mirrors evenly or based on device congestion.

Granted, that's not a stability concern anymore, just performance. A reviewer of a draft of this article actually claimed that BTRFS only reads from one of the drives, which hopefully is inaccurate, but goes to show how confusing all this is.

There are other warnings in the Debian wiki that are quite scary. Even the legendary Arch wiki has a warning on top of their BTRFS page, still.

Even if those issues are now fixed, it can be hard to tell when they were fixed. There is a changelog by feature but it explicitly warns that it doesn't know "which kernel version it is considered mature enough for production use", so it's also useless for this.

It would have been much better if BTRFS was released into the world only when those bugs were being completely fixed. Or that, at least, features were announced when they were stable, not just "we merged to mainline, good luck". Even now, we get mixed messages even in the official BTRFS documentation which says "The Btrfs code base is stable" (main page) while at the same time clearly stating unstable parts in the status page (currently RAID56).

There are much harsher BTRFS critics than me out there so I will stop here, but let's just say that I feel a little uncomfortable trusting server data with full RAID arrays to BTRFS. But surely, for a workstation, things should just work smoothly... Right? Well, let's see the snags I hit.

My BTRFS test setup

Before I go any further, I should probably clarify how I am testing BTRFS in the first place.

The reason I tried BTRFS is that I was ... let's just say "strongly encouraged" by the LWN editors to install Fedora for the terminal emulators series. That, in turn, meant the setup was done with BTRFS, because that was somewhat the default in Fedora 27 (or did I want to experiment? I don't remember, it's been too long already).

So Fedora was setup on my 1TB HDD and, with encryption, the partition table looks like this:

NAME                   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                      8:0    0 931,5G  0 disk  
├─sda1                   8:1    0   200M  0 part  /boot/efi
├─sda2                   8:2    0     1G  0 part  /boot
├─sda3                   8:3    0   7,8G  0 part  
│ └─fedora_swap        253:5    0   7.8G  0 crypt [SWAP]
└─sda4                   8:4    0 922,5G  0 part  
  └─fedora_crypt       253:4    0 922,5G  0 crypt /

(This might not entirely be accurate: I rebuilt this from the Debian side of things.)

This is pretty straightforward, except for the swap partition: normally, I just treat swap like any other logical volume and create it in a logical volume. This is now just speculation, but I bet it was setup this way because "swap" support was only added in BTRFS 5.0.

I fully expect BTRFS experts to yell at me now because this is an old setup and BTRFS is so much better now, but that's exactly the point here. That setup is not that old (2018? old? really?), and migrating to a new partition scheme isn't exactly practical right now. But let's move on to more practical considerations.

No builtin encryption

BTRFS aims at replacing the entire mdadm, LVM, and ext4 stack with a single entity, and adding new features like deduplication, checksums and so on.

Yet there is one feature it is critically missing: encryption. See, my typical stack is actually mdadm, LUKS, and then LVM and ext4. This is convenient because I have only a single volume to decrypt.

If I were to use BTRFS on servers, I'd need to have one LUKS volume per-disk. For a simple RAID-1 array, that's not too bad: one extra key. But for large RAID-10 arrays, this gets really unwieldy.

The obvious BTRFS alternative, ZFS, supports encryption out of the box and mixes it above the disks so you only have one passphrase to enter. The main downside of ZFS encryption is that it happens above the "pool" level so you can typically see filesystem names (and possibly snapshots, depending on how it is built), which is not the case with a more traditional stack.

Subvolumes, filesystems, and devices

I find BTRFS's architecture to be utterly confusing. In the traditional LVM stack (which is itself kind of confusing if you're new to that stuff), you have those layers:

  • disks: let's say /dev/nvme0n1 and nvme1n1
  • RAID arrays with mdadm: let's say the above disks are joined in a RAID-1 array in /dev/md1
  • volume groups or VG with LVM: the above RAID device (technically a "physical volume" or PV) is assigned into a VG, let's call it vg_tbbuild05 (multiple PVs can be added to a single VG which is why there is that abstraction)
  • LVM logical volumes: out of that volume group actually "virtual partitions" or "logical volumes" are created, that is where your filesystem lives
  • filesystem, typically with ext4: that's your normal filesystem, which treats the logical volume as just another block device

A typical server setup would look like this:

NAME                      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
nvme0n1                   259:0    0   1.7T  0 disk  
├─nvme0n1p1               259:1    0     8M  0 part  
├─nvme0n1p2               259:2    0   512M  0 part  
│ └─md0                     9:0    0   511M  0 raid1 /boot
├─nvme0n1p3               259:3    0   1.7T  0 part  
│ └─md1                     9:1    0   1.7T  0 raid1 
│   └─crypt_dev_md1       253:0    0   1.7T  0 crypt 
│     ├─vg_tbbuild05-root 253:1    0    30G  0 lvm   /
│     ├─vg_tbbuild05-swap 253:2    0 125.7G  0 lvm   [SWAP]
│     └─vg_tbbuild05-srv  253:3    0   1.5T  0 lvm   /srv
└─nvme0n1p4               259:4    0     1M  0 part

I stripped the other nvme1n1 disk because it's basically the same.

Now, if we look at my BTRFS-enabled workstation, which doesn't even have RAID, we have the following:

  • disk: /dev/sda with, again, /dev/sda4 being where BTRFS lives
  • filesystem: fedora_crypt, which is, confusingly, kind of like a volume group. it's where everything lives. i think.
  • subvolumes: home, root, /, etc. those are actually the things that get mounted. you'd think you'd mount a filesystem, but no, you mount a subvolume. that is backwards.

It looks something like this to lsblk:

NAME                   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                      8:0    0 931,5G  0 disk  
├─sda1                   8:1    0   200M  0 part  /boot/efi
├─sda2                   8:2    0     1G  0 part  /boot
├─sda3                   8:3    0   7,8G  0 part  [SWAP]
└─sda4                   8:4    0 922,5G  0 part  
  └─fedora_crypt       253:4    0 922,5G  0 crypt /srv

Notice how we don't see all the BTRFS volumes here? Maybe it's because I'm mounting this from the Debian side, but lsblk definitely gets confused here. I frankly don't quite understand what's going on, even after repeatedly looking around the rather dismal documentation. But that's what I gather from the following commands:

root@curie:/home/anarcat# btrfs filesystem show
Label: 'fedora'  uuid: 5abb9def-c725-44ef-a45e-d72657803f37
    Total devices 1 FS bytes used 883.29GiB
    devid    1 size 922.47GiB used 916.47GiB path /dev/mapper/fedora_crypt

root@curie:/home/anarcat# btrfs subvolume list /srv
ID 257 gen 108092 top level 5 path home
ID 258 gen 108094 top level 5 path root
ID 263 gen 108020 top level 258 path root/var/lib/machines

I only got to that point through trial and error. Notice how I use an existing mountpoint to list the related subvolumes. If I try to use the filesystem path, the one that's listed in filesystem show, I fail:

root@curie:/home/anarcat# btrfs subvolume list /dev/mapper/fedora_crypt 
ERROR: not a btrfs filesystem: /dev/mapper/fedora_crypt
ERROR: can't access '/dev/mapper/fedora_crypt'

Maybe I just need to use the label? Nope:

root@curie:/home/anarcat# btrfs subvolume list fedora
ERROR: cannot access 'fedora': No such file or directory
ERROR: can't access 'fedora'

This is really confusing. I don't even know if I understand this right, and I've been staring at this all afternoon. Hopefully, the lazyweb will correct me eventually.

(As an aside, why are they called "subvolumes"? If something is a "sub" of "something else", that "something else" must exist, right? But no, BTRFS doesn't have "volumes", it only has "subvolumes". Go figure. Presumably the filesystem still holds "files", though; at least empirically it doesn't seem like it lost anything so far.)

In any case, at least I can refer to this section in the future, the next time I fumble around the btrfs commandline, as I surely will. I will possibly even update this section as I get better at it, or based on my reader's judicious feedback.

Mounting BTRFS subvolumes

So how did I even get to that point? I have this in my /etc/fstab, on the Debian side of things:

UUID=5abb9def-c725-44ef-a45e-d72657803f37   /srv    btrfs  defaults 0   2

This thankfully ignores all the subvolume nonsense because it relies on the UUID. mount tells me that's actually the "root" (? /?) subvolume:

root@curie:/home/anarcat# mount | grep /srv
/dev/mapper/fedora_crypt on /srv type btrfs (rw,relatime,space_cache,subvolid=5,subvol=/)

Let's see if I can mount the other volumes I have on there. Remember that subvolume list showed I had home, root, and var/lib/machines. Let's try root:

mount -o subvol=root /dev/mapper/fedora_crypt /mnt

Interestingly, root is not the same as /, it's a different subvolume! It seems to be the Fedora root (/, really) filesystem. No idea what is happening here. I also have a home subvolume, let's mount it too, for good measure:

mount -o subvol=home /dev/mapper/fedora_crypt /mnt/home

Note that lsblk doesn't notice those two new mountpoints, and that's normal: it only lists block devices and subvolumes (rather inconveniently, I'd say) do not show up as devices:

root@curie:/home/anarcat# lsblk 
NAME                   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                      8:0    0 931,5G  0 disk  
├─sda1                   8:1    0   200M  0 part  
├─sda2                   8:2    0     1G  0 part  
├─sda3                   8:3    0   7,8G  0 part  
└─sda4                   8:4    0 922,5G  0 part  
  └─fedora_crypt       253:4    0 922,5G  0 crypt /srv

This is really, really confusing. Maybe I did something wrong in the setup. Maybe it's because I'm mounting it from outside Fedora. Either way, it just doesn't feel right.

No disk usage per volume

If you want to see what's taking up space in one of those subvolumes, tough luck:

root@curie:/home/anarcat# df -h  /srv /mnt /mnt/home
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/fedora_crypt  923G  886G   31G  97% /srv
/dev/mapper/fedora_crypt  923G  886G   31G  97% /mnt
/dev/mapper/fedora_crypt  923G  886G   31G  97% /mnt/home

(Notice, in passing, that it looks like the same filesystem is mounted in different places. In that sense, you'd expect /srv and /mnt (and /mnt/home?!) to be exactly the same, but no: they are entirely different directory structures, which I will not call "filesystems" here because everyone's head will explode in sparks of confusion.)

Yes, disk space is shared (that's the Size and Avail columns, makes sense). But nope, no cookie for you: they all have the same Used columns, so you need to actually walk the entire filesystem to figure out how much space each subvolume takes.

(For future reference, that's basically:

root@curie:/home/anarcat# time du -schx /mnt/home /mnt /srv
124M    /mnt/home
7.5G    /mnt
875G    /srv
883G    total

real    2m49.080s
user    0m3.664s
sys 0m19.013s

And yes, that was painfully slow.)

ZFS actually has some oddities in that regard, but at least it tells me how much disk each volume (and snapshot) takes:

root@tubman:~# time df -t zfs -h
Filesystem         Size  Used Avail Use% Mounted on
rpool/ROOT/debian  3.5T  1.4G  3.5T   1% /
rpool/var/tmp      3.5T  384K  3.5T   1% /var/tmp
rpool/var/spool    3.5T  256K  3.5T   1% /var/spool
rpool/var/log      3.5T  2.0G  3.5T   1% /var/log
rpool/home/root    3.5T  2.2G  3.5T   1% /root
rpool/home         3.5T  256K  3.5T   1% /home
rpool/srv          3.5T   80G  3.5T   3% /srv
rpool/var/cache    3.5T  114M  3.5T   1% /var/cache
bpool/BOOT/debian  571M   90M  481M  16% /boot

real    0m0.003s
user    0m0.002s
sys 0m0.000s

That's 56360 times faster, by the way.

But yes, that's not fair: those in the know will know there's a different command to do what df does with BTRFS filesystems, the btrfs filesystem usage command:

root@curie:/home/anarcat# time btrfs filesystem usage /srv
Overall:
    Device size:         922.47GiB
    Device allocated:        916.47GiB
    Device unallocated:        6.00GiB
    Device missing:          0.00B
    Used:            884.97GiB
    Free (estimated):         30.84GiB  (min: 27.84GiB)
    Free (statfs, df):        30.84GiB
    Data ratio:               1.00
    Metadata ratio:           2.00
    Global reserve:      512.00MiB  (used: 0.00B)
    Multiple profiles:              no

Data,single: Size:906.45GiB, Used:881.61GiB (97.26%)
   /dev/mapper/fedora_crypt  906.45GiB

Metadata,DUP: Size:5.00GiB, Used:1.68GiB (33.58%)
   /dev/mapper/fedora_crypt   10.00GiB

System,DUP: Size:8.00MiB, Used:128.00KiB (1.56%)
   /dev/mapper/fedora_crypt   16.00MiB

Unallocated:
   /dev/mapper/fedora_crypt    6.00GiB

real    0m0,004s
user    0m0,000s
sys 0m0,004s

Almost as fast as ZFS's df! Good job. But wait. That doesn't actually tell me usage per subvolume. Notice it's filesystem usage, not subvolume usage, which unhelpfully refuses to exist. That command only shows that one "filesystem"'s internal statistics, which are pretty opaque. You can also appreciate that it's wasting 6GB of "unallocated" disk space there: I probably did something Very Wrong and should be punished by Hacker News. I also wonder why it has 1.68GB of "metadata" used...

At this point, I just really want to throw that thing out of the window and restart from scratch. I don't really feel like learning the BTRFS internals, as they seem oblique and completely bizarre to me. It feels a little like the state of PHP now: it's actually pretty solid, but built upon so many layers of cruft that I still feel it corrupts my brain every time I have to deal with it (needle or haystack first? anyone?)...

Conclusion

I find BTRFS utterly confusing and I'm worried about its reliability. I think a lot of work is needed on usability and coherence before I even consider running this anywhere else than a lab, and that's really too bad, because there are really nice features in BTRFS that would greatly help my workflow. (I want to use filesystem snapshots as high-performance, high frequency backups.)

So now I'm experimenting with OpenZFS. It's so much simpler, just works, and it's rock solid. After this 8 minute read, I had a good understanding of how ZFS worked. Here's the 30 seconds overview:

  • vdev: a RAID array
  • zpool: a volume group of vdevs
  • datasets: normal filesystems (or block device, if you want to use another filesystem on top of ZFS)

There are also other special volumes like caches and logs that you can (really easily, compared to LVM caching) use to tweak your setup. You might also want to look at recordsize or ashift to tweak the filesystem to better fit your workload (or deal with drives lying about their sector size, I'm looking at you Samsung), but that's it.

Running ZFS on Linux currently involves building kernel modules from scratch on every host, which I think is pretty bad. But I was able to setup a ZFS-only server using this excellent documentation without too much problem.

I'm hoping some day the copyright issues are resolved and we can at least ship binary packages, but the politics (e.g. convincing Debian that this is the right thing to do) and the logistics (e.g. DKMS auto-builders? is that even a thing? how about signed DKMS packages? fun-fun-fun!) seem really impractical. Who knows, maybe hell will freeze over (again) and Oracle will fix the CDDL. I personally think that we should just completely ignore this problem (which wasn't even supposed to be a problem) and ship binary packages directly, but I'm a pragmatist and do not always fit well with the free software fundamentalists.

All of this to say that, short term, we don't have a reliable, advanced filesystem/logical disk manager in Linux. And that's really too bad.

Chaotic IdealismWhy do Autism Parents mourn the neurotypical child they never had?

I don’t condone it, but I think I can sort of explain why it happens. Do you know how, when plans are changed suddenly, you feel sort of out of balance, and might even have a meltdown if it’s bad and sudden enough? Neurotypicals make plans for their children. They have a mental picture of their future, which includes their child’s personality and cognitive traits. They build these castles in the air–they imagine future scenarios–that may or may not be anything like the reality they’re going to have.

When their child is diagnosed with autism, these future plans disappear, and they feel off-balance like we do when our schedules are suddenly changed and we don’t know what’s going to happen.

Some of them adjust pretty quickly, because they realize that their child hasn’t changed; it’s still the same child they’ve loved all along, and it’s not like those mental plans were ever going to be accurate anyway. Most are scared because they don’t know what life with autism is going to be like and they worry that their child won’t be happy, and it takes a little while for them to regain their equilibrium; instead of a stereotypical future, they’re gazing into the unknown. That, we can put down to an autism-unfriendly world that doesn’t give them enough examples of regular families with autistic people in them.

But others hold on to their mental future, and even reject the actual child they have. Those are the ones who focus on the will-nevers, who love the neurotypical child they would have had in an alternate universe over the autistic child they actually do have. This is a form of emotional abuse.

Cryptogram Corporate Involvement in International Cybersecurity Treaties

The Paris Call for Trust and Stability in Cyberspace is an initiative launched by French President Emmanuel Macron during UNESCO’s 2018 Internet Governance Forum. It’s an attempt by the world’s governments to come together and create a set of international norms and standards for a reliable, trustworthy, safe, and secure Internet. It’s not an international treaty, but it does impose obligations on the signatories. It’s a major milestone for global Internet security and safety.

Corporate interests are all over this initiative, sponsoring and managing different parts of the process. As part of the Call, the French company Cigref and the Russian company Kaspersky chaired a working group on cybersecurity processes, along with French research center GEODE. Another working group on international norms was chaired by US company Microsoft and Finnish company F-Secure, along with a University of Florence research center. A third working group’s participant list includes more corporations than any other group.

As a result, this process has become very different from previous international negotiations. Instead of governments coming together to create standards, it is being driven by the very corporations that the new international regulatory climate is supposed to govern. This is wrong.

The companies making the tools and equipment being regulated shouldn’t be the ones negotiating the international regulatory climate, and their executives shouldn’t be named to key negotiation roles without appointment and confirmation. It’s an abdication of responsibility by the US government for something that is too important to be treated this cavalierly.

On the one hand, this is no surprise. The notions of trust and stability in cyberspace are about much more than international safety and security. They’re about market share and corporate profits. And corporations have long led policymakers in the fast-moving and highly technological battleground that is cyberspace.

The international Internet has always relied on what is known as a multistakeholder model, where those who show up and do the work can be more influential than those in charge of governments. The Internet Engineering Task Force, the group that agrees on the technical protocols that make the Internet work, is largely run by volunteer individuals. This worked best during the Internet’s era of benign neglect, where no one but the technologists cared. Today, it’s different. Corporate and government interests dominate, even if the individuals involved use the polite fiction of their own names and personal identities.

However, we are a far cry from decades past, where the Internet was something that governments didn’t understand and largely ignored. Today, the Internet is an essential infrastructure that underpins much of society, and its governance structure is something that nations care about deeply. Having for-profit tech companies run the Paris Call process on regulating tech is analogous to putting the defense contractors Northrop Grumman or Boeing in charge of the 1970s SALT nuclear agreements between the US and the Soviet Union.

This also isn’t the first time that US corporations have led what should be an international relations process regarding the Internet. Since he first gave a speech on the topic in 2017, Microsoft President Brad Smith has become almost synonymous with the term “Digital Geneva Convention.” It’s not just that corporations in the US and elsewhere are taking a lead on international diplomacy, they’re framing the debate down to the words and the concepts.

Why is this happening? Different countries have their own problems, but we can point to three that currently plague the US.

First and foremost, “cyber” still isn’t taken seriously by much of the government, specifically the State Department. It’s not real to the older military veterans, or to the even older politicians who confuse Facebook with TikTok and use the same password for everything. It’s not even a topic area for negotiations for the US Trade Representative. Nuclear disarmament is “real geopolitics,” while the Internet is still, even now, seen as vaguely magical, and something that can be “fixed” by having the nerds yank plugs out of a wall.

Second, the State Department was gutted during the Trump years. It lost many of the up-and-coming public servants who understood the way the world was changing. The work of previous diplomats to increase the visibility of the State Department’s cyber efforts was abandoned. There are few left on staff to do this work, and even fewer to decide if they’re any good. It’s hard to hire senior information security professionals in the best of circumstances; it’s why charlatans so easily flourish in the cybersecurity field. The built-up skill set of the people who poured their effort and time into this work during the Obama years is gone.

Third, there’s a power struggle at the heart of the US government involving cyber issues, between the White House, the Department of Homeland Security (represented by CISA), and the military (represented by US Cyber Command). Trying to create another cyber center of power within the State Department threatens those existing powers. It’s easier to leave it in the hands of private industry, which does not affect those government organizations’ budgets or turf.

We don’t want to go back to the era when only governments set technological standards. The governance model from the days of the telephone is another lesson in how not to do things. The International Telecommunications Union is an agency run out of the United Nations. It is moribund and ponderous precisely because it is run by national governments, with civil society and corporations largely alienated from the decision-making processes.

Today, the Internet is fundamental to global society. It’s part of everything. It affects national security and will be a theater in any future war. How individuals, corporations, and governments act in cyberspace is critical to our future. The Internet is critical infrastructure. It provides and controls access to healthcare, space, the military, water, energy, education, and nuclear weaponry. How it is regulated isn’t just something that will affect the future. It is the future.

Since the Paris Call was finalized in 2018, it has been signed by 81 countries — including the US in 2021 — 36 local governments and public authorities, 706 companies and private organizations, and 390 civil society groups. The Paris Call isn’t the first international agreement that puts companies on an equal signatory footing as governments. The Global Internet Forum to Combat Terrorism and the Christchurch Call to eliminate extremist content online do the same thing. But the Paris Call is different. It’s bigger. It’s more important. It’s something that should be the purview of governments and not a vehicle for corporate power and profit.

When something as important as the Paris Call comes along again, perhaps in UN negotiations for a cybercrime treaty, we call for actual State Department officials with technical expertise to be sitting at the table with the interests of the entire US in their pocket…not people with equity shares to protect.

This essay was written with Tarah Wheeler, and previously published on The Cipher Brief.

Cryptogram Friday Squid Blogging: Ten-Foot Long Squid Washed onto Japanese Shore — ALIVE

This is rare:

An about 3-meter-long giant squid was found stranded on a beach here on April 20, in what local authorities said was a rare occurrence.

At around 10 a.m., a nearby resident spotted the squid at Ugu beach in Obama, Fukui Prefecture, on the Sea of Japan coast. According to the Obama Municipal Government, the squid was still alive when it was found. It is unusual for a giant squid to be washed ashore alive, officials said.

The deep-sea creature will be transported to Echizen Matsushima Aquarium in the prefectural city of Sakai.

Sadly, I do not expect the giant squid to survive, certainly not long enough for me to fly there and see it. But if any Japanese readers can supply more information, I would very much appreciate it.

BoingBoing post. Video.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Planet DebianBits from Debian: New Debian Developers and Maintainers (March and April 2022)

The following contributors got their Debian Developer accounts in the last two months:

  • Henry-Nicolas Tourneur (hntourne)
  • Nick Black (dank)

The following contributors were added as Debian Maintainers in the last two months:

  • Jan Mojžíš
  • Philip Wyett
  • Thomas Ward
  • Fabio Fantoni
  • Mohammed Bilal
  • Guilherme de Paula Xavier Segundo

Congratulations!

Planet DebianArturo Borrero González: Toolforge GridEngine Debian 10 Buster migration

Toolforge logo, a circle with an anvil in the middle

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.

In accordance with our operating system upgrade policy, we should migrate our servers to Debian Buster.

As discussed in the previous post, one of the most important and successful services provided by the Wikimedia Cloud Services team at the Wikimedia Foundation is Toolforge. Toolforge is a platform that allows users and developers to run and use a variety of applications with the ultimate goal of helping the Wikimedia mission from the technical side.

As you may know already, all Wikimedia Foundation servers are powered by Debian, and this includes Toolforge and Cloud VPS. The Debian Project mostly follows a two year cadence for releases, and Toolforge has been using Debian Stretch for some years now, which nowadays is considered “old-old-stable”. In accordance with our operating system upgrade policy, we should migrate our servers to Debian Buster.

Toolforge’s two different backend engines, Kubernetes and Grid Engine, are impacted by this upgrade policy. Grid Engine is notably tied to the underlying Debian release, and the execution environment offered to tools running in the grid is limited to what the Debian archive contains for a given release. This is unlike in Kubernetes, where tool developers can leverage container images and decouple the runtime environment selection from the base operating system.

Since the Toolforge grid original conception, we have been doing the same operation over and over again:

  • Prepare a parallel grid deployment with the new operating system.
  • Ask our users (tool developers) to evaluate a newer version of their runtime and programming languages.
  • Introduce a migration window and coordinate a quick migration.
  • Finally, drop the old operating system from grid servers.

We’ve done this type of migration several times before. The last few ones were Ubuntu Precise to Ubuntu Trusty and Ubuntu Trusty to Debian Stretch. But this time around we had some special angles to consider.

So, you are upgrading the Debian release

  • You are migrating to Debian 11 Bullseye, no?
  • No, we’re migrating to Debian 10 Buster
  • Wait, but Debian 11 Bullseye exists!
  • Yes, we know! Let me explain…

We’re migrating the grid from Debian 9 Stretch to Debian 10 Buster, but perhaps we should be migrating from Debian 9 Stretch to Debian 11 Bullseye directly. This is a legitimate concern, and we discussed it in September 2021.

A timeline showing Debian versions since 2014

Back then, our reasoning was that skipping to Debian 11 Bullseye would be more difficult for our users, especially because of the greater jump in version numbers for the underlying runtimes. Additionally, all the migration work started before Debian 11 Bullseye was released. Our original intention was for the migration to be completed before the release. For a couple of reasons the project was delayed, and when it was time to restart the project we decided to continue with the original idea.

We had some work done to get Debian 10 Buster working correctly with the grid, and supporting Debian 11 Bullseye would require an additional effort. We didn't even check if Grid Engine could be installed in the latest Debian release. For the grid, in general, the engineering effort to do an N+1 upgrade is lower than doing an N+2 upgrade. If we had tried an N+2 upgrade directly, things would have been much slower and more difficult for us, and for our users.

In that sense, our conclusion was to not skip Debian 10 Buster.

We no longer want to run Grid Engine

In a previous blog post we shared information about our desired future for Grid Engine in Toolforge. Our intention is to discontinue our usage of this technology.

No grid? What about my tools?

Toolforge logo, a circle with an anvil in the middle

Traditionally there have been two main workflows or use cases that were supported in the grid, but not in our Kubernetes backend:

  • Running jobs, long-running bots and other scheduled tasks.
  • Mixing runtime environments (for example, a nodejs app that runs some python code).

The good news is that work to handle the continuity of such use cases has already started. This takes the form of two main efforts:

  • The Toolforge buildpacks project — to support arbitrary runtime environments.
  • The Toolforge Jobs Framework — to support jobs, scheduled tasks, etc.

In particular, the Toolforge Jobs Framework has been available for a while in an open beta phase. We did some initial design and implementation, then deployed it in Toolforge for some users to try it and report bugs, report missing features, etc.

These are complex and feature-rich projects, and they deserve a dedicated blog post. More information on each will be shared in the future. For now, it is worth noting that both initiatives have some degree of development already.

The conclusion

Knowing all the moving parts, we were faced with a few hard questions when deciding how to approach the Debian 9 Stretch deprecation:

  • Should we not upgrade the grid, and focus on Kubernetes instead? Let Debian 9 Stretch be the last supported version on the grid?
  • What is the impact of these decisions on the technical community? What is best for our users?

The choices we made are already known in the community. A couple of weeks ago we announced the Debian 9 Stretch Grid Engine deprecation. In parallel to this migration, we decided to promote the new Toolforge Jobs Framework, even if it’s still in beta phase. This new option should help users to future-proof their tool, and reduce maintenance effort. An early migration to Kubernetes now will avoid any more future grid problems.

We truly hope that Debian 10 Buster is the last version we have for the grid, but as they say, hope is not a good strategy when it comes to engineering. What we will do is to work really hard in bringing Toolforge to the service level we want, and that means to keep developing and enabling more Kubernetes-based functionalities.

Stay tuned for more upcoming blog posts with additional information about Toolforge.

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.

Worse Than FailureError'd: Irony

This week's edition of Err'd gets off to a flying start with one that came in "over the transom" as t'were. Ordinarily, expired certs are a bit mundane for this column, but in this case, where this foible fetched up is at least worth a chuckle.

Jim M. wrote directly to the editor with this explanation. "If you're looking for compliance reports to prove that your cloud provider has solid security practices, be wary of this WTF with Azure. Quoting the site, SOC 2 Type 2 attestation report addresses the requirements set forth in the Cloud Security Alliance (CSA) Cloud Controls Matrix (CCM), and the Cloud Computing Compliance Criteria Catalogue (C5:2020) created by the German Federal Office for Information Security (BSI). Sounds impressive! The link for Azure DevOps SOC 2 Type 2 attestation report goes to this link, https://docs.microsoft.com/en-us/compliance/regulatory/offering-soc-2, which shows that the cert for this page has expired. Try it here: https://servicetrust.microsoft.com/ViewPage/MSComplianceGuideV3 "

azure

 

An anonymous New Yorker shared this from the land of traffic and tourists. "The real WTF is a function that converts 330 to 3.99," says he. Doesn't seem quite like a euro conversion at current rates. Any ideas?

lyft

 

Richi's no lööli, writing in with a trick captcha: "This google captcha asked me to check all boxes that have a pedestrian crossing in them. As you can see, there is just a motorcycle." The trick is to simply submit the answer without selecting any squares, as there are none that match.

captcha

 

Deutscher Konrad seeks help translating Googlisch. "Thanks Google for the informative overlay. I'm totally okay with the HH:MM but what does the u in front of it want to tell me?" Live in 4 days, at uhm, hm, I don't know?

date

 

Luke H. "While other companies may claim to be redefining the modern career, IBM is going one step further and undefining it." I can't improve on that. Enjoy your weekend.

blue

 

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Planet DebianReproducible Builds (diffoscope): diffoscope 212 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 212. This version includes the following changes:

* Add support for extracting vmlinuz/vmlinux Linux kernel images.
  (Closes: reproducible-builds/diffoscope#304)
* Some Python .pyc files report as "data", so support ".pyc" as a
  fallback extension.

You can find out more by visiting the project homepage.

,

Cryptogram Surveillance by Driverless Car

San Francisco police are using autonomous vehicles as mobile surveillance cameras.

Privacy advocates say the revelation that police are actively using AV footage is cause for alarm.

“This is very concerning,” Electronic Frontier Foundation (EFF) senior staff attorney Adam Schwartz told Motherboard. He said cars in general are troves of personal consumer data, but autonomous vehicles will have even more of that data from capturing the details of the world around them. “So when we see any police department identify AVs as a new source of evidence, that’s very concerning.”

Krebs on SecurityDEA Investigating Breach of Law Enforcement Data Portal

The U.S. Drug Enforcement Administration (DEA) says it is investigating reports that hackers gained unauthorized access to an agency portal that taps into 16 different federal law enforcement databases. KrebsOnSecurity has learned the alleged compromise is tied to a cybercrime and online harassment community that routinely impersonates police and government officials to harvest personal information on their targets.

Unidentified hackers shared this screenshot of alleged access to the Drug Enforcement Administration’s intelligence sharing portal.

On May 8, KrebsOnSecurity received a tip that hackers obtained a username and password for an authorized user of esp.usdoj.gov, which is the Law Enforcement Inquiry and Alerts (LEIA) system managed by the DEA.

KrebsOnSecurity shared information about the allegedly hijacked account with the DEA, the Federal Bureau of Investigation (FBI), and the Department of Justice, which houses both agencies. The DEA declined to comment on the validity of the claims, issuing only a brief statement in response.

“DEA takes cyber security and information of intrusions seriously and investigates all such reports to the fullest extent,” the agency said in a statement shared via email.

According to this page at the Justice Department website, LEIA “provides federated search capabilities for both EPIC and external database repositories,” including data classified as “law enforcement sensitive” and “mission sensitive” to the DEA.

A document published by the Obama administration in May 2016 (PDF) says the DEA’s El Paso Intelligence Center (EPIC) systems in Texas are available for use by federal, state, local and tribal law enforcement, as well as the Department of Defense and intelligence community.

EPIC and LEIA also have access to the DEA’s National Seizure System (NSS), which the DEA uses to identify property thought to have been purchased with the proceeds of criminal activity (think fancy cars, boats and homes seized from drug kingpins).

“The EPIC System Portal (ESP) enables vetted users to remotely and securely share intelligence, access the National Seizure System, conduct data analytics, and obtain information in support of criminal investigations or law enforcement operations,” the 2016 White House document reads. “Law Enforcement Inquiry and Alerts (LEIA) allows for a federated search of 16 Federal law enforcement databases.”

The screenshots shared with this author indicate the hackers could use EPIC to look up a variety of records, including those for motor vehicles, boats, firearms, aircraft, and even drones.

Claims about the purloined DEA access were shared with this author by “KT,” the current administrator of the Doxbin — a highly toxic online community that provides a forum for digging up personal information on people and posting it publicly.

As KrebsOnSecurity reported earlier this year, the previous owner of the Doxbin has been identified as the leader of LAPSUS$, a data extortion group that hacked into some of the world’s largest tech companies this year — including Microsoft, NVIDIA, Okta, Samsung and T-Mobile.

That reporting also showed how the core members of LAPSUS$ were involved in selling a service offering fraudulent Emergency Data Requests (EDRs), wherein the hackers use compromised police and government email accounts to file warrantless data requests with social media firms, mobile telephony providers and other technology firms, attesting that the information being requested can’t wait for a warrant because it relates to an urgent matter of life and death.

From the standpoint of individuals involved in filing these phony EDRs, access to databases and user accounts within the Department of Justice would be a major coup. But the data in EPIC would probably be far more valuable to organized crime rings or drug cartels, said Nicholas Weaver, a researcher for the International Computer Science Institute at University of California, Berkeley.

Weaver said it’s clear from the screenshots shared by the hackers that they could use their access not only to view sensitive information, but also submit false records to law enforcement and intelligence agency databases.

“I don’t think these [people] realize what they got, how much money the cartels would pay for access to this,” Weaver said. “Especially because as a cartel you don’t search for yourself you search for your enemies, so that even if it’s discovered there is no loss to you of putting things ONTO the DEA’s radar.”

The DEA’s EPIC portal login page.

ANALYSIS

The login page for esp.usdoj.gov (above) suggests that authorized users can access the site using a “Personal Identity Verification” or PIV card, which is a fairly strong form of authentication used government-wide to control access to federal facilities and information systems at each user’s appropriate security level.

However, the EPIC portal also appears to accept just a username and password, which would seem to radically diminish the security value of requiring users to present (or prove possession of) an authorized PIV card. Indeed, KT said the hacker who obtained this illicit access was able to log in using the stolen credentials alone, and that at no time did the portal prompt for a second authentication factor.

It’s not clear why there are still sensitive government databases being protected by nothing more than a username and password, but I’m willing to bet big money that this DEA portal is not the only offender here. The DEA portal esp.usdoj.gov is listed on Page 87 of a Justice Department “data inventory,” which catalogs all of the data repositories that correspond to DOJ agencies.

There are 3,330 results. Granted, only some of those results are login portals, but that’s just within the Department of Justice.

If we assume for the moment that state-sponsored foreign hacking groups can gain access to sensitive government intelligence in the same way as teenage hacker groups like LAPSUS$, then it is long past time for the U.S. federal government to perform a top-to-bottom review of authentication requirements tied to any government portals that traffic in sensitive or privileged information.

I’ll say it because it needs to be said: The United States government is in urgent need of leadership on cybersecurity at the executive branch level — preferably someone who has the authority and political will to eventually disconnect any federal government agency data portals that fail to enforce strong, multi-factor authentication.

I realize this may be far more complex than it sounds, particularly when it comes to authenticating law enforcement personnel who access these systems without the benefit of a PIV card or government-issued device (state and local authorities, for example). It’s not going to be as simple as just turning on multi-factor authentication for every user, thanks in part to a broad diversity of technologies being used across the law enforcement landscape.

But when hackers can plunder 16 law enforcement databases, arbitrarily send out law enforcement alerts for specific people or vehicles, or potentially disrupt ongoing law enforcement operations — all because someone stole, found or bought a username and password — it’s time for drastic measures.

Planet DebianJonathan Dowland: Scalable Computing seminar

title slide

Last week I delivered a seminar for the research group I belong to, Scalable Computing. This was a slightly-expanded version of the presentation I gave at uksystems21. The most substantial change is the addition of a fourth example to describe recent work on optimising for a second non-functional requirement: Bandwidth.

Worse Than FailureCodeSOD: Nullable Booleans

Austin's team received a bug report. When their C# application tried to load certain settings, it would crash. Now, since this was a pretty severe issue in their production software, impacting a fair number of customers, everyone on the team dove in.

It didn't take Austin long to spot the underlying problem, which isn't quite the WTF.

bool? setting;

public static bool GetSetting()
{
    return (bool)setting;
}

setting is defined as nullable, so when attempting to cast to bool, a null value will fail. Theoretically, this should have been caught by testing, but at least the fix was easy. Austin simply added a coalescing operator to the return line: return setting ?? false.

While Austin sent out a pull request, a second pull request came in. This one was from Austin's boss, who was far more experienced- and who had been gifted the ability to approve their own pull requests. So this is what they approved, a change in where GetSetting is called:

try
{
    checkbox.Checked = Foo.GetSetting();
}
catch {}

While Austin took a moment to understand the root cause of the issue, his boss took the straight line path: "this line throws an exception, so let's just make the exception go away".

After some convincing, the team agreed that the boss's commit should be rolled back and an actual proper null handling was a better choice. That didn't stop the boss from grumbling that it was an emergency, and it was more important to get it fixed fast than fixed right. Austin didn't point out that he submitted his pull request first.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

Planet DebianRaphaël Hertzog: Debian 9 soon out of (free) security support

Organizations that are still running Debian 9 servers should be aware that the security support of the Debian LTS team will end on June 30th 2022.

If upgrading to a newer Debian release is not an option for them, then they should consider subscribing to Freexian’s Extended LTS to get security support for the packages that they are using on their servers.

It’s worth pointing out that we made some important changes to Freexian’s Extended LTS offering:

  • we are now willing to support each Debian release for up to 10 years (so 5 years of ELTS support after the 5 initial years), provided that we have customers willing to pay the required amount.
  • we have changed our pricing scheme so that we can announce up-front the (increasing) cost over the 5 years of ELTS
  • we have dropped the requirement to subscribe to the Debian LTS sponsorship, though it’s still a good idea to contribute to the funding of that project to ensure that one’s packages are properly monitored/maintained during the LTS period

This means that we have again extended the life of Debian 8 Jessie, this time until June 30th 2025. And that Debian 9 Stretch – that will start its “extended” life on July 1st 2022 – can be maintained up to June 30th 2027.

Organizations using Debian 10 should consider sponsoring the Debian LTS team since security support for that Debian release will soon transition from the regular security team to the LTS team.

Worse Than FailureCodeSOD: Observing the Observer

In the endless quest to get asynchronous operations right, we're constantly changing approaches. From callbacks, to promises, to awaits, to observables. Observables are pretty straightforward: we observe something, like a socket or an HTTP request, and when something happens, we trigger some callbacks. In a lightly pseudocode version, it might look something like:

requestFactory.postFormRequest(url).subscribe(
    resp => myResponseHandler(resp),
    err => myErrorHandler(err),
    () => myRequestCompleteHandler()
)

It's cleaner than pure callback hell, but conceptually similar. The key advantage to observables is that they work exactly the same way on objects that may emit multiple events, like a WebSocket. Each time a message arrives on the socket, we can simply re-execute the handlers for our subscription.

I lay out this background, because Lucas has a co-worker who doesn't quite get it. Because every time they need to make a request, they follow this pattern:

return new Observable<ILeaseFile>(observer => {
    this._request.new_postForm<ILeaseFile>(this._app.apiUrl() + requestUrl + paramsUrl, formData).subscribe(
        leaseFileRes => observer.next(leaseFileRes),
        error => observer.error(error),
        () => observer.complete()
    );
});

This creates two observables. The first, created by this bit, this._request.new_postForm<ILeaseFile>(this._app.apiUrl() + requestUrl + paramsUrl, formData), is a request factory. It creates a request, adds some error handling code- code which uses window.alert to raise errors- and returns an observable.

In this code example, we create a second outer observable, which is what subscribes to the request- the observer.next, observer.error, and observer.complete do those steps. So the outer observable is just a do-nothing observer.

Why? I'll let Lucas explain:

The reason for this is that, sometimes, it wants to return a particular field from the response, and not all of it, and it was then copy-pasted all over. Or maybe someone doesn't understand asynchronous requests. Probably both.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Krebs on SecurityMicrosoft Patch Tuesday, May 2022 Edition

Microsoft today released updates to fix at least 74 separate security problems in its Windows operating systems and related software. This month’s patch batch includes fixes for seven “critical” flaws, as well as a zero-day vulnerability that affects all supported versions of Windows.

By all accounts, the most urgent bug Microsoft addressed this month is CVE-2022-26925, a weakness in a central component of Windows security (the “Local Security Authority” process within Windows). CVE-2022-26925 was publicly disclosed prior to today, and Microsoft says it is now actively being exploited in the wild. The flaw affects Windows 7 through 10 and Windows Server 2008 through 2022.

Greg Wiseman, product manager for Rapid7, said Microsoft has rated this vulnerability as important and assigned it a CVSS (danger) score of 8.1 (10 being the worst), although Microsoft notes that the CVSS score can be as high as 9.8 in certain situations.

“This allows attackers to perform a man-in-the-middle attack to force domain controllers to authenticate to the attacker using NTLM authentication,” Wiseman said. “This is very bad news when used in conjunction with an NTLM relay attack, potentially leading to remote code execution. This bug affects all supported versions of Windows, but Domain Controllers should be patched on a priority basis before updating other servers.”

Wiseman said the most recent time Microsoft patched a similar vulnerability — last August in CVE-2021-36942 — it was also being exploited in the wild under the name “PetitPotam.”

“CVE-2021-36942 was so bad it made CISA’s catalog of Known Exploited Vulnerabilities,” Wiseman said.

Seven of the flaws fixed today earned Microsoft’s most-dire “critical” label, which it assigns to vulnerabilities that can be exploited by malware or miscreants to remotely compromise a vulnerable Windows system without any help from the user.

Among those is CVE-2022-26937, which carries a CVSS score of 9.8, and affects services using the Windows Network File System (NFS). Trend Micro’s Zero Day Initiative notes that this bug could allow remote, unauthenticated attackers to execute code in the context of the Network File System (NFS) service on affected systems.

“NFS isn’t on by default, but it’s prevalent in environment where Windows systems are mixed with other OSes such as Linux or Unix,” ZDI’s Dustin Childs wrote. “If this describes your environment, you should definitely test and deploy this patch quickly.”

Once again, this month’s Patch Tuesday is sponsored by Windows Print Spooler, a core Windows service that keeps spooling out the security hits. May’s patches include four fixes for Print Spooler, including two information disclosure and two elevation of privilege flaws.

“All of the flaws are rated as important, and two of the three are considered more likely to be exploited,” said Satnam Narang, staff research engineer at Tenable. “Windows Print Spooler continues to remain a valuable target for attackers since PrintNightmare was disclosed nearly a year ago. Elevation of Privilege flaws in particular should be carefully prioritized, as we’ve seen ransomware groups like Conti favor them as part of its playbook.”

Other Windows components that received patches this month include .NET and Visual Studio, Microsoft Edge (Chromium-based), Microsoft Exchange Server, Office, Windows Hyper-V, Windows Authentication Methods, BitLocker, Remote Desktop Client, and Windows Point-to-Point Tunneling Protocol.

Also today, Adobe issued five security bulletins to address at least 18 flaws in Adobe ColdFusion, Framemaker, InCopy, InDesign, and Adobe Character Animator. Adobe said it is not aware of any exploits in the wild for any of the issues addressed in today’s updates.

For a more granular look at the patches released by Microsoft today and indexed by severity and other metrics, check out the always-useful Patch Tuesday roundup from the SANS Internet Storm Center. And it’s not a bad idea to hold off updating for a few days until Microsoft works out any kinks in the updates: AskWoody.com usually has the skinny on any patches that may be causing problems for Windows users.

As always, please consider backing up your system or at least your important documents and data before applying system updates. And if you run into any problems with these patches, please drop a note about it here in the comments.

,

Planet DebianBen Hutchings: Debian LTS work, April 2022

In April I was assigned 16 hours of work by Freexian's Debian LTS initiative and carried over 8 hours from March. I worked 11 hours, and will carry over the remaining time to May.

I spent most of my time triaging security issues for Linux, working out which of them were fixed upstream and which actually applied to the versions provided in Debian 9 "stretch". I also rebased the Linux 4.9 (linux) package on the latest stable update, but did not make an upload this month.

Planet DebianDaniel Kahn Gillmor: 2022 Digital Rights Job Fair

I'm lucky enough to work at the intersection between information communications technology and civil rights/civil liberties. I get to combine technical interests and social/political interests.

I've talked with many folks over the years who are interested in doing similar work. Some come from a technical background, and some from an activist background (and some from both). Are you one of them? Are you someone who works as an activist or in a technical field who wants to look into different ways of merging these interests?

Some great organizers maintain a job board for Digital Rights. Next month they'll host a Digital Rights Job Fair, which offers an opportunity to talk with good people at organizations that fight in different ways for a better world. You need to RSVP to attend.

Digital Rights Job Fair

Planet DebianRussell Coker: Elon and Free Speech

Elon Musk has made the news for spending billions to buy a share of Twitter for the alleged purpose of providing free speech. The problem with this claim is that having any company controlling a large portion of the world’s communication is inherently bad for free speech. The same applies for Facebook, but that’s not a hot news item at the moment.

If Elon wanted to provide free speech he would want to have decentralised messaging systems so that someone who breaks rules on one platform could find another with different rules. Among other things, free speech ideally permits people to debate issues related to different laws with residents of other countries. If advocates for the Russian government get kicked off Twitter as part of the American sanctions against Russia then American citizens can’t debate the issue with Russian citizens via Twitter. Mastodon is one example of a federated competitor to Twitter [1]. With a federated messaging system each host could make independent decisions about interpretation of sanctions. Someone who used a Mastodon instance based in the US could get a second account in another country if they wanted to communicate with people in countries that are sanctioned by the US.

The problem with Mastodon at the moment is lack of use. It’s got a good set of features and support for different platforms; there are apps for Android and iPhone as well as lots of other software using the API. But if the people you want to communicate with aren’t on it then it’s less useful. Elon could solve that problem by creating a Tesla Mastodon server and giving a free account to everyone who buys a new Tesla, which is the sort of thing that a lot of Tesla buyers would like. It’s quite likely that other companies selling prestige products would follow that example. Everyone has seen evidence of people sharing photos on social media with someone else’s expensive car; a Mastodon account on ferrari.com or mercedes.com would be proof of buying the cars in question. The number of people who buy expensive cars new is a very small portion of the world population, but it’s a group of people who are more influential than average and others would join Mastodon servers to follow them.

The next thing that Elon could do to kill Twitter would be to have all his companies (which between them have more than a dozen verified Twitter accounts) use Mastodon accounts for their primary PR releases and then send the same content to Twitter with a 48 hour delay. That would force journalists and people who want to discuss those companies on social media to follow the Mastodon accounts. Again this wouldn’t be a significant number of people, but they would be influential people. Getting journalists to use a communications system increases its importance.

The question is whether Elon is lacking the vision necessary to plan a Mastodon deployment or whether he just wants to allow horrible people to run wild on Twitter.

The Verge has an interesting article from 2019 about Gab using Mastodon [2]. The fact that over the last 2.5 years I didn’t even hear of Gab using Mastodon suggests that the fears of some people significantly exceeded the problem. I’m sure that some Gab users managed to harass some Mastodon users, but generally they were apparently banned quickly. As an aside, the Mastodon server I use doesn’t appear to ban Gab: a search for Gab on it gave me a user posting about being “pureblood” at the top of the list.

Gab claims to have 4 million accounts and has an estimated 100,000 active users. If 5.5% of Tesla owners became active users on a hypothetical Tesla server that would be the largest Mastodon server. Elon could demonstrate his commitment to free speech by refusing to ban Gab in any way. The Wikipedia page about Gab [3] has a long list of horrible people and activities associated with it. Is that the “free speech” to associate with Tesla? Polestar makes some nice electric cars that appear quite luxurious [4] and doesn’t get negative PR from the behaviour of its owner; that’s something Elon might want to consider.

Is this really about bragging rights? Buying a controlling interest in a company that has a partial monopoly on Internet communication is something to boast about. Could users of commercial social media be considered serfs who serve their billionaire overlord?

Planet DebianMelissa Wen: Multiple syncobjs support for V3D(V) (Part 2)

In the previous post, I described how we enable multiple syncobjs capabilities in the V3D kernel driver. Now I will tell you what was changed on the userspace side, where we reworked the V3DV sync mechanisms to use Vulkan multiple wait and signal semaphores directly. This change represents greater adherence to the Vulkan submission framework.

I was not used to Vulkan concepts and the V3DV driver. Fortunately, I counted on the guidance of the Igalia’s Graphics team, mainly Iago Toral (thanks!), to understand the Vulkan Graphics Pipeline, sync scopes, and submission order. Therefore, we changed the original V3DV implementation for vkQueueSubmit and all related functions to allow direct mapping of multiple semaphores from V3DV to the V3D-kernel interface.

Disclaimer: Here’s a brief and probably inaccurate background, which we’ll go into more detail later on.

In Vulkan, GPU work submissions are described as command buffers. These command buffers, with GPU jobs, are grouped in a command buffer submission batch, specified by vkSubmitInfo, and submitted to a queue for execution. vkQueueSubmit is the command called to submit command buffers to a queue. Besides command buffers, vkSubmitInfo also specifies semaphores to wait before starting the batch execution and semaphores to signal when all command buffers in the batch are complete. Moreover, a fence in vkQueueSubmit can be signaled when all command buffer batches have completed execution.

From this sequence, we can see some implicit ordering guarantees. Submission order defines the start order of execution between command buffers; in other words, it is determined by the order in which pSubmits appear in VkQueueSubmit and pCommandBuffers appear in VkSubmitInfo. However, we don't have any completion guarantees for jobs submitted to different GPU queues, which means they may overlap and complete out of order. Of course, jobs submitted to the same GPU engine follow start and finish order. For signal operation order, a fence is ordered after all semaphore signal operations. In addition to implicit sync, we also have some explicit sync resources, such as semaphores, fences, and events.
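To make those wait and signal semantics concrete, here is a minimal sketch (not taken from V3DV) of a single vkQueueSubmit call whose batch waits on one semaphore, signals another, and attaches a fence for batch completion. The handles are assumed to have been created elsewhere, and error handling is omitted.

#include <vulkan/vulkan.h>

/* Submit one command buffer that waits on waitSem before writing color
 * attachments, signals signalSem when it finishes, and signals `fence`
 * once the whole batch has completed. */
void submit_batch(VkQueue queue, VkCommandBuffer cmdBuf,
                  VkSemaphore waitSem, VkSemaphore signalSem, VkFence fence)
{
    VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;

    VkSubmitInfo submitInfo = {
        .sType                = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .waitSemaphoreCount   = 1,
        .pWaitSemaphores      = &waitSem,
        .pWaitDstStageMask    = &waitStage,
        .commandBufferCount   = 1,
        .pCommandBuffers      = &cmdBuf,
        .signalSemaphoreCount = 1,
        .pSignalSemaphores    = &signalSem,
    };

    vkQueueSubmit(queue, 1, &submitInfo, fence);
}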

Considering these implicit and explicit sync mechanisms, we rework the V3DV implementation of queue submissions to better use multiple syncobjs capabilities from the kernel. In this merge request, you can find this work: v3dv: add support to multiple wait and signal semaphores. In this blog post, we run through each scope of change of this merge request for a V3D driver-guided description of the multisync support implementation.

Groundwork and basic code clean-up:

As the original V3D-kernel interface allowed only one semaphore, V3DV resorted to booleans to "translate" multiple semaphores into one. Consequently, if a command buffer batch had at least one semaphore, it needed to wait for all previously submitted jobs to complete before starting its execution. So, instead of just a boolean, we created and changed the structs that store semaphore information to accept the actual list of wait semaphores.

Expose multisync kernel interface to the driver:

In the two commits below, we basically updated the DRM V3D interface from that one defined in the kernel and verified if the multisync capability is available for use.

Handle multiple semaphores for all GPU job types:

At this point, we were only changing the submission design to consider multiple wait semaphores. Before supporting multisync, V3DV was waiting for the last job submitted to be signaled when at least one wait semaphore was defined, even when serialization wasn't required. V3DV handles GPU jobs according to the GPU queue in which they are submitted:

  • Control List (CL) for binning and rendering
  • Texture Formatting Unit (TFU)
  • Compute Shader Dispatch (CSD)

Therefore, we changed their submission setup so that jobs submitted to any GPU queue can handle more than one wait semaphore.

These commits created all mechanisms to set arrays of wait and signal semaphores for GPU job submissions:

  • Checking the conditions to define the wait_stage.
  • Wrapping them in a multisync extension.
  • According to the kernel interface (described in the previous blog post), configure the generic extension as a multisync extension.

Finally, we extended the ability of GPU jobs to handle multiple signal semaphores, but at this point, no GPU job is actually in charge of signaling them. With this in place, we could rework part of the code that tracks CPU and GPU job completions by verifying the GPU status and threads spawned by Event jobs.

Rework the QueueWaitIdle mechanism to track the syncobj of the last job submitted in each queue:

As we had only single in/out syncobj interfaces for semaphores, we used a single last_job_sync to synchronize job dependencies of the previous submission. Although the DRM scheduler guarantees the start order of jobs submitted to the same queue in the kernel space, the completion order isn't predictable. On the other hand, we still needed to use syncobjs to follow job completion since we have event threads on the CPU side. Therefore, a more accurate implementation requires last_job syncobjs to track when each engine (CL, TFU, and CSD) is idle. We also needed to keep the driver working on previous versions of the v3d kernel driver with single semaphores, so we kept tracking ANY last_job_sync to preserve the previous implementation.

Rework synchronization and submission design to let the jobs handle wait and signal semaphores:

With multiple semaphores support, the conditions for waiting and signaling semaphores changed according to the particularities of each GPU job (CL, CSD, TFU) and the restrictions of CPU jobs (Events, CSD indirect, etc.). In this sense, we redesigned V3DV's semaphore handling and job submissions for command buffer batches in vkQueueSubmit.

We scrutinized possible scenarios for submitting command buffer batches to change the original implementation carefully. It resulted in three commits more:

We keep track of whether we have submitted a job to each GPU queue (CSD, TFU, CL) and a CPU job for each command buffer. We use syncobjs to track the last job submitted to each GPU queue and a flag that indicates if this represents the beginning of a command buffer.

The first GPU job submitted to a GPU queue in a command buffer should wait on wait semaphores. The first CPU job submitted in a command buffer should call v3dv_QueueWaitIdle() to do the waiting and ignore semaphores (because it is waiting for everything).

If the job is not the first but has the serialize flag set, it should wait on the completion of the last job submitted to each GPU queue before running. In practice, this means using syncobjs to track the last job submitted per queue and adding these syncobjs as dependencies of the serialized job.

If this job is the last job of a command buffer batch, it may be used to signal semaphores if the command buffer batch has only one type of GPU job (because we have guarantees of execution ordering). Otherwise, we emit a no-op job just to signal semaphores. It waits on the completion of the last jobs submitted to every GPU queue and then signals semaphores. Note: we later changed this approach to correctly deal with ordering changes caused by event threads. Whenever we have an event job in the command buffer, we cannot rely on the last-job-in-the-last-command-buffer assumption; we have to wait for all event threads to complete before signaling.

After submitting all command buffers, we emit a no-op job that waits on the completion of the last jobs of every queue and then signals the fence. Note: at some point, we changed this approach to correctly deal with ordering changes caused by event threads, as mentioned before.

Final considerations

With many changes and many rounds of reviews, the patchset was merged. After more validations and code review, we polished and fixed the implementation together with external contributions:

Also, multisync capabilities enabled us to add new features to V3DV and switch the driver to the common synchronization and submission framework:

  • v3dv: expose support for semaphore imports

    This was waiting for multisync support in the v3d kernel, which is already available. Exposing this feature however enabled a few more CTS tests that exposed pre-existing bugs in the user-space driver so we fix those here before exposing the feature.

  • v3dv: Switch to the common submit framework

    This should give you emulated timeline semaphores for free and kernel-assisted sharable timeline semaphores for cheap once you have the kernel interface wired in.

We used a set of games to ensure no performance regression in the new implementation. For this, we used GFXReconstruct to capture Vulkan API calls when playing those games. Then, we compared results with and without multisync capabilities in the kernelspace, and also with multisync enabled on v3dv. We didn't observe any performance compromise, and we even saw improvements when replaying scenes of the vkQuake game.

Planet DebianMelissa Wen: Multiple syncobjs support for V3D(V) (Part 1)

As you may already know, we at Igalia have been working on several improvements to the 3D rendering drivers of Broadcom Videocore GPU, found in Raspberry Pi 4 devices. One of our recent works focused on improving V3D(V) drivers adherence to Vulkan submission and synchronization framework. We had to cross various layers from the Linux Graphics stack to add support for multiple syncobjs to V3D(V), from the Linux/DRM kernel to the Vulkan driver. We have delivered bug fixes, a generic gate to extend job submission interfaces, and a more direct sync mapping of the Vulkan framework. These changes did not impact the performance of the tested games and brought greater precision to the synchronization mechanisms. Ultimately, support for multiple syncobjs opened the door to new features and other improvements to the V3DV submission framework.

DRM Syncobjs

But, first, what are DRM sync objs?

* DRM synchronization objects (syncobj, see struct &drm_syncobj) provide a
* container for a synchronization primitive which can be used by userspace
* to explicitly synchronize GPU commands, can be shared between userspace
* processes, and can be shared between different DRM drivers.
* Their primary use-case is to implement Vulkan fences and semaphores.
[...]
* At it's core, a syncobj is simply a wrapper around a pointer to a struct
* &dma_fence which may be NULL.

And Jason Ekstrand well-summarized dma_fence features in a talk at the Linux Plumbers Conference 2021:

A struct that represents a (potentially future) event:

  • Has a boolean “signaled” state
  • Has a bunch of useful utility helpers/concepts, such as refcount, callback wait mechanisms, etc.

Provides two guarantees:

  • One-shot: once signaled, it will be signaled forever
  • Finite-time: once exposed, is guaranteed signal in a reasonable amount of time

What does multiple semaphores support mean for Raspberry Pi 4 GPU drivers?

For our main purpose, the multiple syncobjs support means that V3DV can submit jobs with more than one wait and signal semaphore. In the kernel space, wait semaphores become explicit job dependencies to wait on before executing the job. Signal semaphores (or post dependencies), in turn, work as fences to be signaled when the job completes its execution, unlocking following jobs that depend on its completion.

The multisync support development comprised many decision-making points and steps, summarized as follows:

  • added to the v3d kernel-driver capabilities to handle multiple syncobj;
  • exposed multisync capabilities to the userspace through a generic extension; and
  • reworked synchronization mechanisms of the V3DV driver to benefit from this feature
  • enabled simulator to work with multiple semaphores
  • tested on Vulkan games to verify the correctness and possible performance enhancements.

We decided to refactor parts of the V3D(V) submission design in kernel-space and userspace during this development. We improved job scheduling on the V3D kernel driver and the V3DV job submission design. We also delivered more accurate synchronization mechanisms and further updates in the Broadcom Vulkan driver running on Raspberry Pi 4. Therefore, we summarize here the changes in the kernel space, describing the previous state of the driver, the decisions taken, side improvements, and fixes.

From single to multiple binary in/out syncobjs:

Initially, V3D was very limited in the number of syncobjs per job submission. The V3D job interfaces (CL, CSD, and TFU) only supported one syncobj (in_sync) to be added as an execution dependency and one syncobj (out_sync) to be signaled when a submission completes. The CL submission was a partial exception, accepting two in_syncs (one for the binner job and another for the render job), but that didn't change the limited options.

Meanwhile in the userspace, the V3DV driver followed alternative paths to meet Vulkan’s synchronization and submission framework. It needed to handle multiple wait and signal semaphores, but the V3D kernel-driver interface only accepts one in_sync and one out_sync. In short, V3DV had to fit multiple semaphores into one when submitting every GPU job.

Generic ioctl extension

The first decision was how to extend the V3D interface to accept multiple in and out syncobjs. We could extend each ioctl with two entries of syncobj arrays and two entries for their counters. We could create new ioctls with multiple in/out syncobjs. But after examining how other drivers had extended their submission interfaces, we decided to extend the V3D ioctls (v3d_cl_submit_ioctl, v3d_csd_submit_ioctl, v3d_tfu_submit_ioctl) with a generic ioctl extension.

I found a curious commit message when I was examining how other developers handled the issue in the past:

Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Fri Mar 22 09:23:22 2019 +0000

    drm/i915: Introduce the i915_user_extension_method
    
    An idea for extending uABI inspired by Vulkan's extension chains.
    Instead of expanding the data struct for each ioctl every time we need
    to add a new feature, define an extension chain instead. As we add
    optional interfaces to control the ioctl, we define a new extension
    struct that can be linked into the ioctl data only when required by the
    user. The key advantage being able to ignore large control structs for
    optional interfaces/extensions, while being able to process them in a
    consistent manner.
    
    In comparison to other extensible ioctls, the key difference is the
    use of a linked chain of extension structs vs an array of tagged
    pointers. For example,
    
    struct drm_amdgpu_cs_chunk {
    	__u32		chunk_id;
        __u32		length_dw;
        __u64		chunk_data;
    };
[...]

So, inspired by amdgpu_cs_chunk and i915_user_extension, we opted to extend the V3D interface through a generic interface. After applying some suggestions from Iago Toral (Igalia) and Daniel Vetter, we reached the following struct:

struct drm_v3d_extension {
	__u64 next;
	__u32 id;
#define DRM_V3D_EXT_ID_MULTI_SYNC		0x01
	__u32 flags; /* mbz */
};

This generic extension has an id to identify the feature/extension we are adding to an ioctl (that maps the related struct type), a pointer to the next extension, and flags (if needed). Whenever we need to extend the V3D interface again for another specific feature, we subclass this generic extension into the specific one instead of extending ioctls indefinitely.

Multisync extension

For the multiple syncobjs extension, we define a multi_sync extension struct that subclasses the generic extension struct. It has arrays of in and out syncobjs, the respective number of elements in each of them, and a wait_stage value used in CL submissions to determine which job needs to wait for syncobjs before running.

struct drm_v3d_multi_sync {
	struct drm_v3d_extension base;
	/* Array of wait and signal semaphores */
	__u64 in_syncs;
	__u64 out_syncs;

	/* Number of entries */
	__u32 in_sync_count;
	__u32 out_sync_count;

	/* set the stage (v3d_queue) to sync */
	__u32 wait_stage;

	__u32 pad; /* mbz */
};

And if a multisync extension is defined, the V3D driver ignores the previous interface of single in/out syncobjs.
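For illustration only, this is roughly how userspace might fill the extension before chaining it into a submission. The structs and DRM_V3D_EXT_ID_MULTI_SYNC are the ones shown above; the helper name and the exact way the chain gets attached to the submit ioctl are assumptions, not the actual V3DV code.

#include <stdint.h>
#include <string.h>
#include <drm/v3d_drm.h>   /* uAPI header with the structs shown above */

/* Hypothetical helper: fill a multisync extension with arrays of DRM
 * syncobj handles for wait (in) and signal (out) semaphores. */
static void fill_multisync(struct drm_v3d_multi_sync *ms,
                           const uint32_t *in_syncs, uint32_t in_count,
                           const uint32_t *out_syncs, uint32_t out_count,
                           uint32_t wait_stage)
{
    memset(ms, 0, sizeof(*ms));
    ms->base.id = DRM_V3D_EXT_ID_MULTI_SYNC;
    ms->base.next = 0;                        /* last (and only) extension */
    ms->in_syncs = (uintptr_t)in_syncs;       /* wait semaphores */
    ms->out_syncs = (uintptr_t)out_syncs;     /* signal semaphores */
    ms->in_sync_count = in_count;
    ms->out_sync_count = out_count;
    ms->wait_stage = wait_stage;              /* which stage waits (CL case) */
}

The submit ioctl then points its extension chain at this struct, and the kernel walks the chain by id.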

Once we had the interface to support multiple in/out syncobjs, the v3d kernel driver needed to handle it. As V3D uses the DRM scheduler for job execution, changing from a single syncobj to multiple ones is quite straightforward. V3D copies the in syncobjs from userspace and uses drm_syncobj_find_fence() + drm_sched_job_add_dependency() to add all in_syncs (wait semaphores) as job dependencies, i.e. syncobjs to be checked by the scheduler before running the job. On CL submissions, which have bin and render jobs, V3D follows the value of wait_stage to determine which job depends on those in_syncs to start its execution.

When V3D defines the last job in a submission, it replaces the dma_fence of the out_syncs with the done_fence from this last job, using drm_syncobj_find() + drm_syncobj_replace_fence(). Therefore, when the job completes its execution and signals done_fence, all out_syncs are signaled too.
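Put together, the kernel side of the previous two paragraphs looks roughly like this sketch. It uses the DRM helpers named above, but the real v3d code is organised differently and has proper error handling and reference management.

#include <linux/dma-fence.h>
#include <drm/drm_syncobj.h>
#include <drm/gpu_scheduler.h>

/* Add every in_sync handle as a dependency: the scheduler will not start
 * the job before all of these fences have signaled. */
static int add_wait_deps(struct drm_file *file, struct drm_sched_job *job,
                         const u32 *handles, u32 count)
{
	struct dma_fence *fence;
	int ret;
	u32 i;

	for (i = 0; i < count; i++) {
		ret = drm_syncobj_find_fence(file, handles[i], 0, 0, &fence);
		if (ret)
			return ret;
		ret = drm_sched_job_add_dependency(job, fence);
		if (ret)
			return ret;
	}
	return 0;
}

/* Make every out_sync handle resolve to the last job's done fence, so all
 * of them signal when that job completes. */
static void signal_out_syncs(struct drm_file *file, struct dma_fence *done_fence,
                             const u32 *handles, u32 count)
{
	struct drm_syncobj *syncobj;
	u32 i;

	for (i = 0; i < count; i++) {
		syncobj = drm_syncobj_find(file, handles[i]);
		if (!syncobj)
			continue;
		drm_syncobj_replace_fence(syncobj, done_fence);
		drm_syncobj_put(syncobj);
	}
}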

Other improvements to v3d kernel driver

This work also made possible some improvements in the original implementation. Following Iago’s suggestions, we refactored the job’s initialization code to allocate memory and initialize a job in one go. With this, we started to clean up resources more cohesively, clearly distinguishing cleanups in case of failure from job completion. We also fixed the resource cleanup when a job is aborted before the DRM scheduler arms it - at that point, drm_sched_job_arm() had recently been introduced to job initialization. Finally, we prepared the semaphore interface to implement timeline syncobjs in the future.

Going Up

The patchset that adds multiple syncobjs support and improvements to V3D is available here and comprises four patches:

  • drm/v3d: decouple adding job dependencies steps from job init
  • drm/v3d: alloc and init job in one shot
  • drm/v3d: add generic ioctl extension
  • drm/v3d: add multiple syncobjs support

After extending the V3D kernel interface to accept multiple syncobjs, we worked on V3DV to benefit from V3D multisync capabilities. In the next post, I will describe a little of this work.

Worse Than FailureCodeSOD: Counting Answers

Lacy's co-worker needed to collect groups of strings into "clusters"- literally arrays of arrays of strings. Of great importance was knowing how many groups were in each cluster.

Making this more complicated, there was an array list of clusters. Obviously, there's a code smell just in this data organization- ArrayList<ArrayList<String[]>> is not a healthy sounding type name. There's probably a better way to express that.

That said, the data structure is pretty easy to understand: I have an outer ArrayList. Each item in the ArrayList is another ArrayList (called a cluster), and each one of those contains an array of strings (called an answer, in their application domain).

So, here's the question: for a set of clusters, what is the largest number of answers contained in any single cluster?

There's an obvious solution to this- it's a pretty basic max problem. There's an obviously wrong solution. Guess which one Lacy found in the codebase?

int counter = 0;
int max_cluster_size = 0;
for (ArrayList<String[]> cluster : clusters) {
    for (String[] item : cluster) {
        counter++;
    }
    if (counter > max_cluster_size) {
        max_cluster_size = counter;
    }
    counter = 0;
}

Now, on one hand, this is simple- replace the inner for loop with an if (cluster.size() > max_cluster_size) max_cluster_size = cluster.size(). And I'm sure there's some even more readable Java streams way to do this.

And with that in mind, this is barely a WTF, but what I find interesting here is that we can infer what actually happened. Because here's what I think: once upon a time, someone misunderstood the requirements (maybe the developer, maybe the person writing the requirements). At one time, they wanted to base this off of how many strings were in the answer. Something like:

for (String[] item : cluster) {
    counter += item.length;
}

This was wrong, and so someone decided to make the minimal change to fix the code: they turned the in-place addition into an increment. Minimal changes, and it certainly works. But it lacks any sense of the context or purpose of the code. It's a completely understandable change, but it's also hinting at so many other bad choices, lurking just off camera, that we can't see.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Planet DebianUtkarsh Gupta: FOSS Activites in April 2022

Here’s my (thirty-first) monthly but brief update about the activities I’ve done in the F/L/OSS world.

Debian

This was my 40th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas ‘19! \o/

There’s a bunch of things I did this month but mostly non-technical, now that DC22 is around the corner. Here are the things I did:

Debian Uploads

  • Helped Andrius w/ FTBFS for php-text-captcha, reported via #977403.
    • I fixed the same in Ubuntu a couple of months ago and they copied over the patch here.

Other $things:

  • Volunteering for DC22 Content team.
  • Leading the Bursary team w/ Paulo.
  • Answering a bunch of questions of referees and attendees around bursary.
  • Being an AM for Arun Kumar, process #1024.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu

This was my 15th month of actively contributing to Ubuntu. Now that I joined Canonical to work on Ubuntu full-time, there’s a bunch of things I do! \o/

I mostly worked on different things, I guess.

I was too lazy to maintain a list of things I worked on so there’s no concrete list atm. Maybe I’ll get back to this section later or will start to list stuff from the fall, as I was doing before. :D


Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support).

This was my thirty-first month as a Debian LTS and twentieth month as a Debian ELTS paid contributor.
I worked for 23.25 hours for LTS and 20.00 hours for ELTS.

LTS CVE Fixes and Announcements:

  • Issued DLA 2976-1, fixing CVE-2022-1271, for gzip.
    For Debian 9 stretch, these problems have been fixed in version 1.6-5+deb9u1.
  • Issued DLA 2977-1, fixing CVE-2022-1271, for xz-utils.
    For Debian 9 stretch, these problems have been fixed in version 5.2.2-1.2+deb9u1.
  • Working on src:tiff and src:mbedtls to fix the issues, still waiting for more issues to be reported, though.
  • Looking at src:mutt CVEs. Haven’t had the time to complete but shall roll out next month.

ELTS CVE Fixes and Announcements:

  • Issued ELA 593-1, fixing CVE-2022-1271, for gzip.
    For Debian 8 jessie, these problems have been fixed in version 1.6-4+deb8u1.
  • Issued ELA 594-1, fixing CVE-2022-1271, for xz-utils.
    For Debian 8 jessie, these problems have been fixed in version 5.1.1alpha+20120614-2+deb8u1.
  • Issued ELA 598-1, fixing CVE-2019-16935, CVE-2021-3177, and CVE-2021-4189, for python2.7.
    For Debian 8 jessie, these problems have been fixed in version 2.7.9-2-ds1-1+deb8u9.
  • Working on src:tiff and src:beep to fix the issues, still waiting for more issues to be reported for src:tiff and src:beep is a bit of a PITA, though. :)

Other (E)LTS Work:

  • Triaged gzip, xz-utils, tiff, beep, python2.7, python-django, and libgit2.
  • Signed up to be a Freexian Collaborator! \o/
  • Read through some bits around that.
  • Helped and assisted new contributors joining Freexian.
  • Answered questions (& discussions) on IRC (#debian-lts and #debian-elts).
  • General and other discussions on LTS private and public mailing list.
  • Attended monthly Debian meeting. Held on Jitsi this month.

Debian LTS Survey

I’ve spent 18 hours on the LTS survey on the following bits:

  • Rolled out the announcement. Started the survey.
  • Answered a bunch of queries, people asked via e-mail.
  • Looked at another bunch of tickets: https://salsa.debian.org/freexian-team/project-funding/-/issues/23.
  • Sent a reminder and fixed a few things here and there.
  • Gave a status update during the meeting.
  • Extended the duration of the survey.

Until next time.
:wq for today.

,

Cryptogram Apple Mail Now Blocks Email Trackers

Apple Mail now blocks email trackers by default.

Most email newsletters you get include an invisible “image,” typically a single white pixel, with a unique file name. The server keeps track of every time this “image” is opened and by which IP address. This quirk of internet history means that marketers can track exactly when you open an email and your IP address, which can be used to roughly work out your location.

So, how does Apple Mail stop this? By caching. Apple Mail downloads all images for all emails before you open them. Practically speaking, that means every message downloaded to Apple Mail is marked “read,” regardless of whether you open it. Apple also routes the download through two different proxies, meaning your precise location also can’t be tracked.

Crypto-Gram uses Mailchimp, which has these tracking pixels turned on by default. I turn them off. Normally, Mailchimp requires them to be left on for the first few mailings, presumably to prevent abuse. The company waived that requirement for me.

Planet DebianRobert McQueen: Evolving a strategy for 2022 and beyond

As a board, we have been working on several initiatives to make the Foundation a better asset for the GNOME Project. We’re working on a number of threads in parallel, so I wanted to explain the “big picture” a bit more to try and connect together things like the new ED search and the bylaw changes.

We’re all here to see free and open source software succeed and thrive, so that people can be truly empowered with agency over their technology, rather than being passive consumers. We want to bring GNOME to as many people as possible so that they have computing devices that they can inspect, trust, share and learn from.

In previous years we’ve tried to boost the relevance of GNOME (or technologies such as GTK) or solicit donations from businesses and individuals with existing engagement in FOSS ideology and technology. The problem with this approach is that we’re mostly addressing people and organisations who are already supporting or contributing FOSS in some way. To truly scale our impact, we need to look to the outside world, build better awareness of GNOME outside of our current user base, and find opportunities to secure funding to invest back into the GNOME project.

The Foundation supports the GNOME project with infrastructure, arranging conferences, sponsoring hackfests and travel, design work, legal support, managing sponsorships, advisory board, being the fiscal sponsor of GNOME, GTK, Flathub… and we will keep doing all of these things. What we’re talking about here are additional ways for the Foundation to support the GNOME project – we want to go beyond these activities, and invest into GNOME to grow its adoption amongst people who need it. This has a cost, and that means in parallel with these initiatives, we need to find partners to fund this work.

Neil has previously talked about themes such as education, advocacy, privacy, but we’ve not previously translated these into clear specific initiatives that we would establish in addition to the Foundation’s existing work. This is all a work in progress and we welcome any feedback from the community about refining these ideas, but here are the current strategic initiatives the board is working on. We’ve been thinking about growing our community by encouraging and retaining diverse contributors, and addressing evolving computing needs which aren’t currently well served on the desktop.

Initiative 1. Welcoming newcomers. The community is already spending a lot of time welcoming newcomers and teaching them the best practices. Those activities are as time consuming as they are important, but currently a handful of individuals are running initiatives such as GSoC, Outreachy and outreach to Universities. These activities help bring diverse individuals and perspectives into the community, and helps them develop skills and experience of collaborating to create Open Source projects. We want to make those efforts more sustainable by finding sponsors for these activities. With funding, we can hire people to dedicate their time to operating these programs, including paid mentors and creating materials to support newcomers in future, such as developer documentation, examples and tutorials. This is the initiative that needs to be refined the most before we can turn it into something real.

Initiative 2: Diverse and sustainable Linux app ecosystem. I spoke at the Linux App Summit about the work that GNOME and Endless have been supporting in Flathub, but this is an example of something which has a great overlap between commercial, technical and mission-based advantages. The key goal here is to improve the financial sustainability of participating in our community, which in turn has an impact on the diversity of who we can expect to afford to enter and remain in our community. We believe the existence of this is critically important for individual developers and contributors to unlock earning potential from our ecosystem, through donations or app sales. In turn, a healthy app ecosystem also improves the usefulness of the Linux desktop as a whole for potential users. We believe that we can build a case for commercial vendors in the space to join an advisory board alongside GNOME, KDE, etc., to input into the governance and contribute to the costs of growing Flathub.

Initiative 3: Local-first applications for the GNOME desktop. This is what Thib has been starting to discuss on Discourse, in this thread. There are many different threats to free access to computing and information in today’s world. The GNOME desktop and apps need to give users convenient and reliable access to technology which works similarly to the tools they already use everyday, but keeps them and their data safe from surveillance, censorship, filtering or just being completely cut off from the Internet. We believe that we can seek both philanthropic and grant funding for this work. It will make GNOME a more appealing and comprehensive offering for the many people who want to protect their privacy.

The idea is that these initiatives all sit on the boundary between the GNOME community and the outside world. If the Foundation can grow and deliver these kinds of projects, we are reaching to new people, new contributors and new funding. These contributions and investments back into GNOME represent a true “win-win” for the newcomers and our existing community.

(Originally posted to GNOME Discourse, please feel free to join the discussion there.)

Worse Than FailureCodeSOD: Uniquely Unique

Giles's company has a hard time with doing things in the database.

In today's example, they attempt the very challenging task of generating unique IDs in a SQL Server database. Now, what you're about to see follows the basic pattern of "generate a random number and see if it's already been used", which is a fairly common anti-pattern, but it's managed to do this in some of the worst ways I've ever seen. And it can't even hide behind the defense of being written a long time ago- it's a handful of years old.

Comments to this C# code have been added by Giles, and no, there were no comments.

protected void AddBlankRowToDatabase()
{
    //This - in effect - calls a stored procedure which has been carefully designed to work around the issue with SPs and
    //SQL injection - i.e. in theory using a stored procedure could help prevent it from happening, which is obviously
    //a problem, so this SP allows it once again.
    //also note that this actually fetches *all* machines for the currently selected customer,
    //so is basically calling "select * from ..." and creating a list of objects to represent them.
    List<Machine> allMachine = MachineManager.GetByCriteria("CustomerId=" + Convert.ToString(Session["ActiveCustomerId"]), "");

    //having gone to all that trouble, let us see how many machines there are, and increment this since we are going to
    //add one
    int machineCount = allMachine.Count + 1;

    //Create a new object representing a machine.
    Machine machine = new Machine();

    //we will definitely want random numbers, as that is the correct way to ensure uniqueness.
    Random _rng = new Random();

    //new numbers must start with 999 for no reason that is documented or known to anyone.
    string paddedString, padding = "999";

    //OK, random number please. We want it to be 3 chars, but instead of all that zero-padding nonsense, let's just
    //ensure this by starting from 100.
    int RandomId = _rng.Next(100, 999);

    //append our random number to our "999"
    paddedString = padding + RandomId;
    RandomId = Convert.ToInt32(paddedString);

    //now here's the big brain part; we don't know if the machine ID already exists, so we check if it does!
    bool machineIdNotExists = true;
    while (machineIdNotExists)
    {
        //Get any machine with that ID, again via this garbage SP system.
        List<Machine> machineExists = MachineManager.GetByCriteria("MachineId=" + RandomId, "");
        if (machineExists.Count > 0)
        {
            //ah well, let's try *another* random 'number'
            RandomId = _rng.Next(100, 999);
            paddedString = padding + RandomId;
            RandomId = Convert.ToInt32(paddedString);
            machineIdNotExists = true;
        }
        else
        {
            //ok, so the machine does not not Exist, we have a new number!
            machineIdNotExists = false;
        }
    }

    //Good stuff, ensure we use the ID we found
    machine.MachineId = RandomId;

    //let's use the count of machines we arrived at earlier to set a temporary title for the machine....
    //yes, this is the *only* use of the count, which we worked out by selecting an entire list
    //of DB rows, constructing a bunch of objects and then *counting* them.
    machine.Title = "New Machine " + machineCount;

    ////////
    //Snip a bunch of other boring property setting
    /////////

    //Now we save the created machine.
    //This internally takes the Machine Object and (manually - no EF or similar)
    //pulls its properties into params for another stored proc, which adds the entry to the DB.
    MachineManager.InsertMachine(machine);

    //Now run through the entire grid of all machines, and save them all individually.
    //Even though you haven't amended them.
    SaveAllRecords();

    //Reload the page, which will refresh the grid to show our new machine!
    Response.Redirect("MachinesAdmin.aspx?RecordSaved=Yes");
}

SQL Injection by stored procedure, fetching entire sets from the database when you need a count. Forcing entire front-end page refreshes via redirect. Mysterious and very not random padding.

Every choice made here was a bad choice.
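
For contrast, here is a minimal sketch of how just the customer's machine count could be fetched: parameterized, and counted by the database instead of materializing every row. The table, connection and variable names are guesses rather than the original codebase's API, and the unique ID itself is better left to an IDENTITY column or SEQUENCE.

// Hypothetical sketch (requires using System.Data.SqlClient):
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "SELECT COUNT(*) FROM Machines WHERE CustomerId = @customerId", conn))
{
    // Parameterized query: no string concatenation, so no injection.
    cmd.Parameters.AddWithValue("@customerId", activeCustomerId);
    conn.Open();
    // Let SQL Server do the counting instead of building a list of objects.
    int machineCount = (int)cmd.ExecuteScalar();
    // machineCount is only needed for the "New Machine N" placeholder title.
}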


Planet DebianRuss Allbery: remctl 3.18

remctl is a simple RPC mechanism using Kerberos GSS-API authentication (or SSH authentication).

The primary change in this release, and the reason for the release, is to add support for PCRE2, the latest version of the Perl-Compatible Regular Expression library, since PCRE1 is now deprecated.

This release also improves some documentation, marks the allocation functions in the C client library with deallocation functions for GCC 11, and fixes some issues with the Python and Ruby bindings that were spotted by Ken Dreyer, as well as the normal update of portability support.

I still do plan to move the language bindings into separate packages, since this will make it easier to upload them to their per-language module repositories and that, in turn, will make them easier to use, but this version doesn't have those changes. I wanted to flush the portability changes and PCRE update out first before starting that project.

You can get the latest version from the remctl distribution page.

Planet DebianRuss Allbery: rra-c-util 10.2

rra-c-util is my collection of utility functions, mostly but not entirely for C, that I use with my various software releases.

There are two major changes in this release. The first is Autoconf support for PCRE2, the new version of the Perl-Compatible Regular Expression library (PCRE1 is now deprecated), which was the motivation for a new release. The second is a huge update to the Perl formatting rules due to lots of work by Julien ÉLIE for INN.

This release also tags deallocation functions, similar to the change mentioned for C TAP Harness 4.8, for all the utility libraries provided by rra-c-util, and fixes an issue with the systemd support.

You can get the latest version from the rra-c-util distribution page.

Planet DebianRuss Allbery: C TAP Harness 4.8

C TAP Harness is my C implementation of the Perl "Test Anything Protocol" test suite framework. It includes a test runner and libraries for both C and shell.

This is mostly a cleanup release to resync with other utility libraries. It does fix an installation problem by managing symlinks correctly, and adds support for GCC 11's new deallocation warnings.

The latter is a rather interesting new GCC feature. There is a Red Hat blog post about the implementation with more details, but the short version is that the __malloc__ attribute can now take an argument that specifies the function that should be used to deallocate the allocated object. GCC 11 and later can use that information to catch some deallocation bugs, such as deallocating things with the wrong function.
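
A minimal sketch of what that looks like in a header, using an invented buffer type rather than anything from C TAP Harness itself:

#include <stddef.h>

struct buffer;

void buffer_free(struct buffer *buf);

/*
 * Tell GCC 11+ that pointers returned by buffer_new() must be released
 * with buffer_free(); handing them to free() or fclose() instead can be
 * diagnosed with -Wmismatched-dealloc.
 */
__attribute__((__malloc__, __malloc__(buffer_free, 1)))
struct buffer *buffer_new(size_t size);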

You can get the latest version from the C TAP Harness distribution page.

,

Planet DebianThorsten Alteholz: My Debian Activities in April 2022

FTP master

This month I accepted 186 and rejected 26 packages. The overall number of packages that got accepted was 188.

Debian LTS

This was my ninety-fourth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 40h. During that time I did LTS and normal security uploads of:

  • [DLA 2973-1] minidlna security update for one CVE
  • [DLA 2974-1] fribidi security update for three CVEs
  • [DLA 2988-1] tinyxml security update for one CVE
  • [DLA 2987-1] libarchive security update for three CVEs
  • [#1009076] buster-pu: minidlna/1.2.1+dfsg-2+deb10u3
  • [#1009077] bullseye-pu: minidlna/1.3.0+dfsg-2+deb11u1
  • [#1009251] buster-pu: fribidi/1.0.5-3.1+deb10u2
  • [#1009250] bullseye-pu: fribidi/1.0.8-2+deb11u1
  • [#1010380] buster-pu: flac/1.3.2-3+deb10u2

Further, I worked on libvirt; the dependency problems in unstable have been resolved, and fixes in other releases can continue.

I also continued to work on security support for golang packages.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the forty-sixth ELTS month.

During my allocated time I uploaded:

  • ELA-591-1 for minidlna
  • ELA-592-1 for fribidi
  • ELA-602-1 for tinyxml
  • ELA-603-1 for libarchive

Last but not least I did some days of frontdesk duties.

Debian Printing

This month I uploaded new upstream versions or improved packaging of:

As I had already become the maintainer of usb-modeswitch, I also adopted usb-modeswitch-data.

Debian Astro

Unfortunately I didn’t do anything for this group, but in May I will upload a new version of openvlbi and several indi-3rdparty packages.

Other stuff

Last but not least I uploaded several new upstream versions of golang packages, but not before checking with ratt that all dependencies still work.

Cryptogram Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

  • I’m speaking on “Securing a World of Physically Capable Computers” at OWASP Belgium’s chapter meeting in Antwerp, Belgium, on May 17, 2022.
  • I’m speaking at Future Summits in Antwerp, Belgium, on May 18, 2022.
  • I’m speaking at IT-S Now 2022 in Vienna, Austria, on June 2, 2022.
  • I’m speaking at the 14th International Conference on Cyber Conflict, CyCon 2022, in Tallinn, Estonia, on June 3, 2022.
  • I’m speaking at the RSA Conference 2022 in San Francisco, June 6-9, 2022.
  • I’m speaking at the Dublin Tech Summit in Dublin, Ireland, June 15-16, 2022.

The list is maintained on this page.

,

Krebs on SecurityYour Phone May Soon Replace Many of Your Passwords

Apple, Google and Microsoft announced this week they will soon support an approach to authentication that avoids passwords altogether, and instead requires users to merely unlock their smartphones to sign in to websites or online services. Experts say the changes should help defeat many types of phishing attacks and ease the overall password burden on Internet users, but caution that a true passwordless future may still be years away for most websites.

Image: Blog.google

The tech giants are part of an industry-led effort to replace passwords, which are easily forgotten, frequently stolen by malware and phishing schemes, or leaked and sold online in the wake of corporate data breaches.

Apple, Google and Microsoft are some of the more active contributors to a passwordless sign-in standard crafted by the FIDO (“Fast Identity Online”) Alliance and the World Wide Web Consortium (W3C), groups that have been working with hundreds of tech companies over the past decade to develop a new login standard that works the same way across multiple browsers and operating systems.

According to the FIDO Alliance, users will be able to sign in to websites through the same action that they take multiple times each day to unlock their devices — including a device PIN, or a biometric such as a fingerprint or face scan.

“This new approach protects against phishing and sign-in will be radically more secure when compared to passwords and legacy multi-factor technologies such as one-time passcodes sent over SMS,” the alliance wrote on May 5.

Sampath Srinivas, director of security authentication at Google and president of the FIDO Alliance, said that under the new system your phone will store a FIDO credential called a “passkey” which is used to unlock your online account.

“The passkey makes signing in far more secure, as it’s based on public key cryptography and is only shown to your online account when you unlock your phone,” Srinivas wrote. “To sign into a website on your computer, you’ll just need your phone nearby and you’ll simply be prompted to unlock it for access. Once you’ve done this, you won’t need your phone again and you can sign in by just unlocking your computer.”

As ZDNet notes, Apple, Google and Microsoft already support these passwordless standards (e.g. “Sign in with Google”), but users need to sign in at every website to use the passwordless functionality. Under this new system, users will be able to automatically access their passkey on many of their devices — without having to re-enroll every account — and use their mobile device to sign into an app or website on a nearby device.
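
Under the hood this builds on the existing WebAuthn browser API. As a rough sketch (inside an async function; the relying-party ID and challenge handling below are placeholders, not details from this week's announcement), a passkey sign-in request from a website looks something like this:

// Ask the browser (and, via FIDO, a nearby phone) to sign a server-supplied
// challenge with a previously registered passkey. All values are placeholders.
const assertion = await navigator.credentials.get({
  publicKey: {
    challenge: challengeBytes,      // Uint8Array received from the server
    rpId: "example.com",            // the site's relying-party identifier
    userVerification: "required",   // device PIN, fingerprint or face scan
    timeout: 60000,
  },
});
// The assertion is then posted back to the server, which verifies the
// signature against the public key stored when the passkey was registered.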

Johannes Ullrich, dean of research for the SANS Technology Institute, called the announcement “by far the most promising effort to solve the authentication challenge.”

“The most important part of this standard is that it will not require users to buy a new device, but instead they may use devices they already own and know how to use as authenticators,” Ullrich said.

Steve Bellovin, a computer science professor at Columbia University and an early internet researcher and pioneer, called the passwordless effort a “huge advance” in authentication, but said it will take a very long time for many websites to catch up.

Bellovin and others say one potentially tricky scenario in this new passwordless authentication scheme is what happens when someone loses their mobile device, or their phone breaks and they can’t recall their iCloud password.

“I worry about people who can’t afford an extra device, or can’t easily replace a broken or stolen device,” Bellovin said. “I worry about forgotten password recovery for cloud accounts.”

Google says that even if you lose your phone, “your passkeys will securely sync to your new phone from cloud backup, allowing you to pick up right where your old device left off.”

Apple and Microsoft likewise have cloud backup solutions that customers using those platforms could use to recover from a lost mobile device. But Bellovin said much depends on how securely such cloud systems are administered.

“How easy is it to add another device’s public key to an account, without authorization?” Bellovin wondered. “I think their protocols make it impossible, but others disagree.”

Nicholas Weaver, a lecturer at the computer science department at University of California, Berkeley, said websites still have to have some recovery mechanism for the “you lost your phone and your password” scenario, which he described as “a really hard problem to do securely and already one of the biggest weaknesses in our current system.”

“If you forget the password and lose your phone and can recover it, now this is a huge target for attackers,” Weaver said in an email. “If you forget the password and lose your phone and CAN’T, well, now you’ve lost your authorization token that is used for logging in. It is going to have to be the latter. Apple has the infrastructure in place to support it (iCloud keychain), but it is unclear if Google does.”

Even so, he said, the overall FIDO approach has been a great tool for improving both security and usability.

“It is a really, really good step forward, and I’m delighted to see this,” Weaver said. “Taking advantage of the phone’s strong authentication of the phone owner (if you have a decent passcode) is quite nice. And at least for the iPhone you can make this robust even to phone compromise, as it is the secure enclave that would handle this and the secure enclave doesn’t trust the host operating system.”

The tech giants said the new passwordless capabilities will be enabled across Apple, Google and Microsoft platforms “over the course of the coming year.” But experts said it will likely take several more years for smaller web destinations to adopt the technology and ditch passwords altogether.

Recent research shows far too many people still reuse or recycle passwords (modifying the same password slightly), which presents an account takeover risk when those credentials eventually get exposed in a data breach. A report in March from cybersecurity firm SpyCloud found 64 percent of users reuse passwords for multiple accounts, and that 70 percent of credentials compromised in previous breaches are still in use.

A March 2022 white paper on the FIDO approach is available here (PDF). A FAQ on it is here.

,

Planet DebianDirk Eddelbuettel: RProtoBuf 0.4.19 on CRAN: Updates

A new release 0.4.19 of RProtoBuf arrived on CRAN earlier today. RProtoBuf provides R with bindings for the Google Protocol Buffers (“ProtoBuf”) data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language and operating-system agnostic protocol.

This release contains a pull request contribution by Michael Chirico to add support for the TextFormat API, a minor maintenance fix ensuring (standard) strings are referenced as std::string to avoid a hiccup on Arch builds, some repo updates, plus reporting of (package and library) versions on startup. The following section from the NEWS.Rd file has more details.

Changes in RProtoBuf version 0.4.19 (2022-05-06)

  • Small cleanups to repository

  • Raise minimum Protocol Buffers version to 3.3 (closes #83)

  • Update package version display, added to startup message

  • Expose TextFormat API (Michael Chirico in #88 closing #87)

  • Add missing explicit std:: on seven string instances in one file (closes #89)

Thanks to my CRANberries, there is a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the ‘quick’ overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

David BrinRipping off masks... and a powerful (if dry) way to pop the lie-bubble

I'll get to a potent meme (below) that shreds one of the clichés most-shared by both left and right. And shredding it will help one against the other. But first...

In Earth - and differently in Existence - I speculated on ways that 'ownership transparency' might solve many of the crimes and contradictions of feral capitalism, without resorting to anti-market socialism. Defenders of capitalism are hypocrites if they talk about free and competitive markets while excusing secrecy that blinds 99% of market participants. They should be the first to demand world transparency of who owns what.

So am I glad that the Ukraine war is causing the U.S., U.K and even Switzerland to rip veils off some of the shell corporations that own all those seized yachts and so much property in London, New York, Paris etc.? Well, yeah. Sure. But watch our own aristos scramble to make sure this remains only about Russian Oligarchs. I'll be shocked if truly broad reforms happen.

It's gonna take a lot more than Ukraine. Possibly even a "Helvetian War."

Thomas Piketty elaborates: "Let’s say it straight away: it is time to imagine a new type of sanction focused on the oligarchs who have prospered thanks to the regime in question. This will require the establishment of an international financial register, which will not be to the liking of western fortunes, whose interests are much more closely linked to those of the Russian and Chinese oligarchs than is sometimes claimed. However, it is at this price that western countries will succeed in winning the political and moral battle against the autocracies and in demonstrating to the world that the resounding speeches on democracy and justice are not simply empty words."


== Again, the one thing that would transform the world almost instantly ==

Transparency of property and ownership would likely make competitive markets work vastly better while slashing the parasitic effects of all sorts of cheating and (likely) reduce effective tax rates on honest citizens, worldwide. But it is the sort of reform that seems unlikely in the near future.

It may not happen till tumbrels are rolling through the streets, alas.

But there is one thing -- one action by one leader -- that could transform America and the world, overnight. You've seen it proposed here time and again. Jobee could do it all by himself, not even needing Congress.

Maybe he is hoping Putin will do it for him.

Only now the topic I promised. A potent meme that shreds one of the clichés most-shared by both left and right. And shredding it will help one against the other.


== The boring stuff – deficits and how each party tries to ‘stimulate’ the economy – actually matters! ==

 

As I show in Polemical Judo, Democratic Party pols are seldom smart enough to use powerful memes like this one -- that Biden and the dems have actually reduced the federal deficit for the first time since Obama. 


Not only that, but Democratic Administrations are always* more fiscally responsible than GOP ones. While caring far more for the poor, the oppressed and workers… and science and the planet… and rights for women and minorities... they also reduce, rather than add to, the debt burdens laid upon our children.

 

Is that really, really hard for you to parse in your head?  


We are so used to each party's clichés: Republicans' hypocritical demands for fiscal prudence while spilling tsunamis of red ink and opening America's carotid arteries for greedy suction by aristocrats… and the almost equally dumb obsession of the far left called "Modern Monetary Theory" (MMT).

In fact, honest Keynesians are the only adults in the room, running deficits to effectively help the working class during rough patches… then paying down debt in the resulting good times. Clinton did it. So did Jerry Brown, Gavin Newsom…. the list goes on. Not just this round, but every round, as I showed here:  

 

‘So Do Outcomes Matter More than Rhetoric?’ 

 

This matters! Because there are two large groups we must draw into the Union side in this especially hazardous phase of the U.S. Civil War. And both of these groups are needed by the only coalition that stands a chance of saving the republic, civilization, planet and posterity. 

 

First, the frippy sanctimony-preeners of the left need to grow up and learn (as AOC, Bernie, Liz and Stacey know) the meaning of the word ‘coalition.’ One keeps hoping the next news item will snap the poseurs out of their ritual chants of “Biden is Republican-lite!”  


Maybe the looming reversal of Roe v. Wade will do it. But don’t hold your breath.

 

We ALSO absolutely must peel away the 10% - possibly even 20% - of Republicans who maintain at least a sliver of residual sanity. Why? Because the confederate/Red/Foxite/Trumpist/Kremlinite, anti-science and anti-fact treason party is in demographic collapse! If we can peel away just 10%, all their cheats, including gerrymandering, will fail!  

 

And that’s where the ‘fiscal responsibility’ thing comes in. It is a wedge you can pound in, to cleave off some of those ‘ostrich Republicans.’ 

 

Start by demanding a cash wager on whether Democratic Administrations always* prove to be far more fiscally responsible!

 

Picture your Tucker-hugger blinking in dismay when he realizes one of his cult’s core catechisms is proved – proved! – to be diametrically opposite to true, and he better admit it, or pay off on the bet.

 

All right. I know your lazy response, shrugging that ‘it’s hopeless to even talk to those people'... 


...and I am telling you now that – hopeless or not – it is your duty!  If just one in ten of you peel away just one… well…. 

 

Look up the old phrase: “All heaven rejoices when…”

 


Finally....

How Putin may seek an exit strategy to save face by declaring a “Mission Accomplished!” moment. Very cogent analysis. Also, this fellow is among the few who describes in detail how under GHW Bush a flock of western vultures - most of them Cheney family-connected - swarmed into Russia to help a hundred or so Soviet commissars snap up shares of sold-off state enterprises… 


...one of several reasons why I rank Bush Senior as unquestionably and by far the worst U.S. president of the 20th Century, who set the stage for our crisis-ridden world.  Alas, the author of this piece gets a bit kooky toward the end. But the first half is worthwhile.



=====

 

* Sure, ‘always’ is a strong term. There are undoubtedly exceptions, though I know of none since 1980. So? Use the polemical power.


Planet DebianAntoine Beaupré: Wallabako 1.4.0 released

I don't particularly like it when people announce their personal projects on their blog, but I'm making an exception for this one, because it's a little special for me.

You see, I have just released Wallabako 1.4.0 (and a quick, mostly irrelevant 1.4.1 hotfix) today. It's the first release of that project in almost 3 years (the previous was 1.3.1, before the pandemic).

The other reason I figured I would mention it is that I have almost never talked about Wallabako on this blog at all, so many of my readers probably don't even know I sometimes meddle in Golang, which surprises even me.

What's Wallabako

Wallabako is a weird little program I designed to read articles on my E-book reader. I use it to spend less time on the computer: I save articles in a read-it-later app named Wallabag (hosted by a generous friend), and then Wallabako connects to that app, downloads an EPUB version of the book, and then I can read it on the device directly.

When I'm done reading the book, Wallabako notices and sets the article as read in Wallabag. I also set it to delete the book locally, but you can actually configure it to keep those books around forever if you feel like it.

Wallabako supports syncing read status with the built-in Kobo interface (called "Nickel"), Koreader and Plato. I happen to use Koreader for everything nowadays, but it should work equally well on the others.

Wallabako is actually set up to be started by udev when a connection change is detected by the kernel, which is kind of a gross hack. It's clunky, but it actually works; I thought for a while about switching to something else, but this is really the easiest way to go, and the one that requires the least interaction from the user.

Why I'm (still) using it

I wrote Wallabako because I read a lot of articles on the internet. It's actually most of my reading. I read about 10 books a year (which I don't think is much), but I probably read more in terms of time and pages in Wallabag. I haven't actually done the math, but I estimate I spend at least double the time reading articles that I spend reading books.

If I didn't have Wallabag, I would have hundreds of tabs open in my web browser all the time. So at least that problem is easily solved: throw everything in Wallabag, sort and read later.

If I didn't have Wallabako, however, I would either spend that time reading on the computer -- time I prefer to spend working on free software or paid work -- or on my phone -- which is kind of better, but really cramped.

I had actually stopped using (and developing) Wallabako for a while. Around 2019, I got tired of always reading those technical articles (basically work stuff!) at home. I realized I was just not "reading" (as in books! fiction! fun stuff!) anymore, at least not as much as I wanted.

So I tried to make this separation: the ebook reader is for cool book stuff. The rest is work. But because I had the Wallabag Android app on my phone and tablet, I could still read those articles there, which I thought was pretty neat. But that meant that I was constantly looking at my phone, which is something I'm generally trying to avoid, as it sets a bad example for the kids (small and big) around me.

Then I realized there was one stray ebook reader lying around at home. I had recently bought a Kobo Aura HD to read books, and I like that device. And it's going to stay locked down to reading books. But there's still that old battered Kobo Glo HD reader lying around, and I figured I could just borrow it to read Wallabag articles.

What is this new release

But oh boy that was a lot of work. Wallabako was kind of a mess: it was using the deprecated go dep tool, which lost the battle with go mod. Cross-compilation was broken for older devices, and I had to implement support for Koreader.

go mod

So I had to learn go mod. I'm still not sure I got that part right: LSP is yelling at me because it can't find the imports, and I'm generally just "YOLO everything" every time I get anywhere close to it. That's not the way to do Go, in general, and not how I like to do it either.

But I guess that, given time, I'll figure it out and make it work for me. It certainly works now. I think.

Cross compilation

The hard part was different. You see, Nickel uses SQLite to store metadata about books, so Wallabako actually needs to tap into that SQLite database to propagate read status. Originally, I just linked against some sqlite3 library I found lying around. It's basically a wrapper around the C-based SQLite and generally works fine. But that means you actually link your Golang program against a C library. And that's when things get a little nutty.

If you just build Wallabako naively, it fails when deployed on the Kobo Glo HD. That's because the device runs a really old kernel: the prehistoric Linux kobo 2.6.35.3-850-gbc67621+ #2049 PREEMPT Mon Jan 9 13:33:11 CST 2017 armv7l GNU/Linux. That kernel image was built in 2017, but the kernel itself was released in 2010, a whole five years before the Glo HD came out in 2015, which is kind of outrageous. And yes, that is with the latest firmware release.

My bet is they just don't upgrade the kernel on those things, as the Glo was probably bought around 2017...

In any case, the problem is we are cross-compiling here. And Golang is pretty good about cross-compiling, but because we have C in there, we're actually cross-compiling with "CGO" which is really just Golang with a GCC backend. And that's much, much harder to figure out because you need to pass down flags into GCC and so on. It was a nightmare.

That's until I found this outrageous "little" project called modernc.org/sqlite. What that thing does (with a hefty dose of dependencies that would make any Debian developer recoil in horror) is to transpile the SQLite C source code to Golang. You read that right: it rewrites SQLite in Go. On the fly. It's nuts.

But it works. And you end up with a "pure go" program, and that thing compiles much faster and runs fine on the older kernel.

I still wasn't sure I wanted to just stick with that forever, so I kept the old sqlite3 code around, behind a compile-time tag. At the top of the nickel_modernc.go file, there's this magic string:

//+build !sqlite3

And at the top of nickel_sqlite3.go file, there's this magic string:

//+build sqlite3

So now, by default, the modernc file gets included, but if I pass --tags sqlite3 to the Go compiler (to go install or whatever), it will actually switch to the other implementation. Pretty neat stuff.
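
As a rough sketch of how the two implementations can then coexist (the file layout and function below are invented, not Wallabako's actual code; modernc.org/sqlite registers its driver under the name "sqlite"):

// nickel_modernc.go: compiled by default, skipped with `go install -tags sqlite3`
//go:build !sqlite3
// +build !sqlite3

package main

import (
	"database/sql"

	_ "modernc.org/sqlite" // pure-Go, transpiled SQLite driver, no CGO required
)

func openNickelDB(path string) (*sql.DB, error) {
	return sql.Open("sqlite", path)
}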

Koreader port

The last part was something I was hesitant about doing for a long time, but that turned out to be pretty easy. I have basically switched to using Koreader to read everything. Books, PDF, everything goes through it. I really like that it stores its metadata in sidecar files: I synchronize all my books with Syncthing which means I can carry my read status, annotations and all that stuff without having to think about it. (And yes, I installed Syncthing on my Kobo.)

The koreader.go port was less than 80 lines, and I could even make a nice little test suite so that I don't have to redeploy that thing to the ebook reader at every code iteration.

I had originally thought I should add some sort of graphical interface in Koreader for Wallabako as well, and had requested that feature upstream. Unfortunately (or fortunately?), they took my idea and just ran with it. Some courageous soul actually wrote a full Wallabag plugin for koreader, in Lua of course.

Compared to the Wallabako implementation however, the koreader plugin is much slower, probably because it downloads articles serially instead of concurrently. It is, however, much more usable as the user is given a visible feedback of the various steps. I still had to enable full debugging to diagnose a problem (which was that I shouldn't have a trailing slash, and that some special characters don't work in passwords). It's also better to write the config file with a normal text editor, over SSH or with the Kobo mounted to your computer instead of typing those really long strings over the kobo.

There's no sample config file which makes that harder but a workaround is to save the configuration with dummy values and fix them up after. Finally I also found the default setting ("Remotely delete finished articles") really dangerous as it can basically lead to data loss (Wallabag article being deleted!) for an unsuspecting user...

So basically, I started working on Wallabako again because the koreader implementation of their Wallabag client was not up to spec for me. It might be good enough for you, but I guess if you like Wallabako, you should thank the koreader folks for their sloppy implementation, as I'm now working again on Wallabako.

Actual release notes

Those are the actual release notes for 1.4.0.

Ship a lot of fixes that have accumulated in the 3 years since the last release.

Features:

  • add timestamp and git version to build artifacts
  • cleanup and improve debugging output
  • switch to pure go sqlite implementation, which helps
  • update all module dependencies
  • port to wallabago v6
  • support Plato library changes from 0.8.5+
  • support reading koreader progress/read status
  • Allow containerized builds, use gomod and avoid GOPATH hell
  • overhaul Dockerfile
  • switch to go mod

Documentation changes:

  • remove instability warning: this works well enough
  • README: replace branch name master by main in links
  • tweak mention of libreoffice to clarify concern
  • replace "kobo" references by "nickel" where appropriate
  • make a section about related projects
  • mention NickelMenu
  • quick review of the koreader implementation

Bugfixes:

  • handle errors in http request creation
  • Use OutputDir configuration instead of hardcoded wallabako paths
  • do not noisily fail if there's no entry for book in plato
  • regression: properly detect read status again after koreader (or plato?) support was added

How do I use this?

This is amazing. I can't believe someone did something that awesome. I want to cover you with gold and Tesla cars and fresh water.

You're weird please stop. But if you want to use Wallabako, head over to the README file which has installation instructions. It basically uses a hack in Kobo e-readers that will happily overwrite their root filesystem as soon as you drop this file named KoboRoot.tgz in the .kobo directory of your e-reader.

Note that there is no uninstall procedure and it messes with the reader's udev configuration (to trigger runs on wifi connect). You'll also need to create a JSON configuration file and configure a client in Wallabag.

And if you're looking for Wallabag hosting, Wallabag.it offers a 14-day free trial. You can also, obviously, host it yourself. Which is not the case for Pocket, even years after Mozilla bought the company. All this wouldn't actually be necessary if Pocket was open-source because Nickel actually ships with a Pocket client.

Shame on you, Mozilla. But you still make an awesome browser, so keep doing that.

Planet DebianHolger Levsen: 20220506-i-had-an-abortion

I had an abortion...

Well, it wasn't me, but when I was 18 my partner thankfully was able to take a 'morning-after-pill' because we were seriously not ready to have a baby. As one data point: We were both still in high school.

It's not possible to ban abortions. It's only possible to ban safe abortions.

Worse Than FailureError'd: He's Got a Ticket to Ride

We've had a rash of train troubles lately. If only I had saved them all, we could have enjoyed a first class special edition instead of squeezing them into economy. But here we are. First stop Budapest!

Magyar Máté murmurs from Switzerland "I seem to have a ticket into the void 😱" Make sure you pay for a round trip!

[image: null]

 

While Marc Würth also notes from Switzerland "The banking branch of the Swiss Post called PostFinance has a payment service system named Checkout. They publish very useful release notes which you can find at https://checkout.postfinance.ch/en-us/release-history. You can even subscribe to the RSS!"

[image: release]

 

François P. shares a mojibake, commenting "This page raises lots of questions. The one that is puzzling me is why they have a fairly modern application, sensibly utf-8 encoded files, that they serve with incorrect encoding headers. It looks like someone knew what they were doing, but didn't finish the work?"

[image: questions]

 

Worried Drew W. frets "I will never complete setting up my Shortcut profile at this rate!"

[image: nan]

 

And finally, an anonymous Linuxian deduces "Apparently Debian doesn't quite know what went wrong, either."

[image: debian]

 


Planet DebianDirk Eddelbuettel: RQuantLib 0.4.16 on CRAN: Small Updates

A new release 0.4.16 of RQuantLib arrived at CRAN earlier today, and has been uploaded to Debian as well.

QuantLib is a very comprehensive free/open-source library for quantitative finance; RQuantLib connects it to the R environment and language.

The release of RQuantLib comes again about four months after the previous release, and brings a few small updates for daycounters, all thanks to Kai Lin, plus a small parameter change to avoid an error in an example, and small updates to the Docker files.

Changes in RQuantLib version 0.4.16 (2022-05-05)

  • Documentation for daycounters was updated and extended (Kai Lin)

  • Deprecated daycounters were appropriately updated (Kai Lin)

  • One example parameterization was changed to avoid error (Dirk)

  • The Docker files were updated

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Planet DebianReproducible Builds: Reproducible Builds in April 2022

Welcome to the April 2022 report from the Reproducible Builds project! In these reports, we try to summarise the most important things that we have been up to over the past month. If you are interested in contributing to the project, please take a few moments to visit our Contribute page on our website.

News

Cory Doctorow published an interesting article this month about the possibility of Undetectable backdoors for machine learning models. Given that machine learning models can provide unpredictably incorrect results, Doctorow recounts that there exists another category of “adversarial examples” that comprise “a gimmicked machine-learning input that, to the human eye, seems totally normal — but which causes the ML system to misfire dramatically” that permit the possibility of planting “undetectable back doors into any machine learning system at training time”.


Chris Lamb published two ‘supporter spotlights’ on our blog: the first about Amateur Radio Digital Communications (ARDC) and the second about the Google Open Source Security Team (GOSST).


Piergiorgio Ladisa, Henrik Plate, Matias Martinez and Olivier Barais published a new academic paper titled A Taxonomy of Attacks on Open-Source Software Supply Chains (PDF):

This work proposes a general taxonomy for attacks on open-source supply chains, independent of specific programming languages or ecosystems, and covering all supply chain stages from code contributions to package distribution. Taking the form of an attack tree, it covers 107 unique vectors, linked to 94 real-world incidents, and mapped to 33 mitigating safeguards.


Elsewhere in academia, Ly Vu Duc published his PhD thesis. Titled Towards Understanding and Securing the OSS Supply Chain (PDF), Duc’s abstract reads as follows:

This dissertation starts from the first link in the software supply chain, ‘developers’. Since many developers do not update their vulnerable software libraries, thus exposing the user of their code to security risks. To understand how they choose, manage and update the libraries, packages, and other Open-Source Software (OSS) that become the building blocks of companies’ completed products consumed by end-users, twenty-five semi-structured interviews were conducted with developers of both large and small-medium enterprises in nine countries. All interviews were transcribed, coded, and analyzed according to applied thematic analysis


Upstream news

Filippo Valsorda published an informative blog post recently called How Go Mitigates Supply Chain Attacks outlining the high-level features of the Go ecosystem that helps prevent various supply-chain attacks.


There was new/further activity on a pull request filed against openssl by Sebastian Andrzej Siewior in order to prevent saved CFLAGS (which may contain the -fdebug-prefix-map=<PATH> flag that is used to strip the arbitrary build path from the debug info); if this information remains recorded, then the binary is no longer reproducible when the build directory changes.
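
For context, the flag rewrites the build directory that would otherwise be recorded in the DWARF debug info; an illustrative invocation (not the pull request's actual one):

# Record "." instead of the absolute build path in the generated debug info
gcc -g -O2 -fdebug-prefix-map=/home/builder/openssl=. -c example.c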


Events

The Linux Foundation’s SupplyChainSecurityCon, will take place June 21st — 24th 2022, both virtually and in Austin, Texas. Long-time Reproducible Builds and openSUSE contributor Bernhard M. Wiedemann learned that he had his talk accepted, and will speak on Reproducible Builds: Unexpected Benefits and Problems on June 21st.


There will be an in-person “Debian Reunion” in Hamburg, Germany later this year, taking place from 23 — 30 May. Although this is a “Debian” event, there will be some folks from the broader Reproducible Builds community and, of course, everyone is welcome. Please see the event page on the Debian wiki for more information. 41 people have registered so far, and there’s approx 10 “on-site” beds still left.


The minutes and logs from our April 2022 IRC meeting have been published. In case you missed this one, our next IRC meeting will take place on May 31st at 15:00 UTC on #reproducible-builds on the OFTC network.


Debian

Roland Clobus wrote another in-depth status update about the status of ‘live’ Debian images, summarising the current situation that all major desktops build reproducibly with bullseye, bookworm and sid, including the Cinnamon desktop on bookworm and sid, “but at a small functionality cost: 14 words will be incorrectly abbreviated”. This work incorporated:

  • Reporting an issue about unnecessarily modified timestamps in the daily Debian installer images. []
  • Reporting a bug against the debian-installer: in order to use a suitable kernel version. (#1006800)
  • Reporting a bug in: texlive-binaries regarding the unreproducible content of .fmt files. (#1009196)
  • Adding hacks to make the Cinnamon desktop image reproducible in bookworm and sid. []
  • Added a script to rebuild a live-build ISO image from a given timestamp. []
  • etc.

On our mailing list, Venkata Pyla started a thread on the Debian debconf cache is non-reproducible issue while creating system images and Vagrant Cascadian posted an excellent summary of the reproducibility status of core package sets in Debian and solicited for similar information from other distributions.


Lastly, 122 reviews of Debian packages were added, 44 were updated and 193 were removed this month, adding to our extensive knowledge about identified issues. A number of issue types have been updated as well, including timestamps_generated_by_hevea, randomness_in_ocaml_preprocessed_files, build_path_captured_in_emacs_el_file, golang_compiler_captures_build_path_in_binary and build_path_captured_in_assembly_objects.


Other distributions

Happy birthday to GNU Guix, which recently turned 10 years old! People have been sharing their stories, in which reproducible builds and bootstrappable builds are a recurring theme as a feature important to its users and developers. The experiences are available on the GNU Guix blog as well as a post on fossandcrafts.org


In openSUSE, Bernhard M. Wiedemann posted his usual monthly reproducible builds status report.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


diffoscope

diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 210 and 211 to Debian unstable, as well as noticed that some Python .pyc files are reported as data, so we should support .pyc as a fallback filename extension [].

In addition, Mattia Rizzolo disabled the Gnumeric tests in Debian as the package is not currently available [] and dropped mplayer from Build-Depends too []. In addition, Mattia fixed an issue to ensure that the PATH environment variable is properly modified for all actions, not just when running the comparator. []


Testing framework

The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:

  • Daniel Golle:

    • Prefer a different solution to avoid building all OpenWrt packages; skip packages from optional community feeds. []
  • Holger Levsen:

    • Detect Python deprecation warnings in the node health check. []
    • Detect failure to build the Debian Installer. []
  • Mattia Rizzolo:

    • Install disorderfs for building OpenWrt packages. []
  • Paul Spooren (OpenWrt-related changes):

    • Don’t build all packages whilst the core packages are not yet reproducible. []
    • Add a missing RUN directive to node_cleanup. []
    • Be less verbose during a toolchain build. []
    • Use disorderfs for rebuilds and update the documentation to match. [][][]
  • Roland Clobus:

    • Publish the last reproducible Debian ISO image. []
    • Use the rebuild.sh script from the live-build package. []

Lastly, node maintenance was also performed by Holger Levsen [][].


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Planet DebianBits from Debian: Google Platinum Sponsor of DebConf22

[image: Google logo]

We are very pleased to announce that Google has committed to support DebConf22 as a Platinum sponsor. This is the third year in a row that Google is sponsoring The Debian Conference at the highest tier!

Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware.

Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner sponsoring parts of Salsa's continuous integration infrastructure within Google Cloud Platform.

With this additional commitment as Platinum Sponsor for DebConf22, Google helps make our annual conference possible and directly supports the progress of Debian and Free Software, helping to strengthen the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much Google, for your support of DebConf22!

Become a sponsor too!

DebConf22 will take place from July 17th to 24th, 2022 at the Innovation and Training Park (ITP) in Prizren, Kosovo, and will be preceded by DebCamp, from July 10th to 16th.

And DebConf22 is still accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf22 website at https://debconf22.debconf.org/sponsors/become-a-sponsor.

DebConf22 banner open registration

Worse Than FailureCodeSOD: Capital Irregularities

Carolyn's company has a bunch of stringly-typed enums. That's already a problem, even in Python, but the other problem is that they need to display these in a human readable format. So, "ProductCategory" needs to become "Product Category". Now, it's true that every one of these stringly typed enums follows the PascalCase convention. It's also true that the list of them is constantly growing.

So this is the method someone wrote for formatting:

def format_text(data):
    field = data["field_name"]
    if field == "ProductCategory":
        field = "Product Category"
    elif field == "ProductPrice":
        field = "Product Price"
    elif field == "ProductName":
        field = "Product Name"
    elif field == "UnknownField":
        field = "Unknown Field"
    return field

It's unclear how many fields there actually are from the submission, but suffice to say, it's a lot. The fact that "field_name" is a dictionary key hints at a deeper WTF, like an inner-platform style homebrew ORM or something similar, but I have no real evidence about how the code is actually used.

But looking at this code, it makes me wish there was some way to identify the regularities in the input strings, some sort of expression that could identify substrings that I could modify according to the regular pattern. Some sort of regular expression if you will.

Nah, something like that would probably make my code too cryptic to read, and probably just give me extra problems.
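
For the record, a minimal sketch of that regular-expression version, assuming every field name really is plain PascalCase:

import re

def format_text(data):
    # Insert a space before each interior capital: "ProductCategory" -> "Product Category"
    return re.sub(r"(?<!^)(?=[A-Z])", " ", data["field_name"])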


,

Cryptogram 15.3 Million Request-Per-Second DDoS Attack

Cloudflare is reporting a large DDoS attack against an unnamed company “operating a crypto launchpad.”

While this isn’t the largest application-layer attack we’ve seen, it is the largest we’ve seen over HTTPS. HTTPS DDoS attacks are more expensive in terms of required computational resources because of the higher cost of establishing a secure TLS encrypted connection. Therefore it costs the attacker more to launch the attack, and for the victim to mitigate it. We’ve seen very large attacks in the past over (unencrypted) HTTP, but this attack stands out because of the resources it required at its scale.

The attack only lasted 15 seconds. No word on motive. Was this a test? Or was that 15-second delay critical for some other fraud?

News article.

Charles StrossHugos, 2022

Empire Games cover

The Merchant Princes series is on the shortlist for the Hugo Award for best series, winner to be announced at Chicon 8, the World Science Fiction Convention in Chicago, this September 1st-5th.

I'd like to congratulate all the nominees, in all the various categories: the full list is here.

For the first three omnibus books in the Merchant Princes series, you can do worse than start here; for the Empire Games trilogy—originally pitched as Merchant Princes: The Next Generation—you can find it here.

(Links go to Amazon ebook format, US store: you can find 'em elsewhere, in the UK and EU as well. I'm going to talk to the folks at Tor about providing series purchase links and links to other stores presently.)

For reasons which should be obvious, I'm going to do my best to get to Chicago this September. Usual caveats apply: it's an 8-9 hour flight from Edinburgh (although there are often direct flights, so no extra airports to traverse in the middle), there's a pandemic on, and who the hell knows what hopeful mutants will emerge in the next five or six months. Getting to attend my first in-person worldcon since 2019 would be good, but Not Dying is my absolute priority.

Worse Than FailureCodeSOD: Exceptionally TF

Steve's predecessor knows: sometimes, stuff happens that makes you go "WTF". While Steve was reviewing some inherited PHP code, he discovered that this predecessor had built some code to handle those situations.

namespace WtfInc;

##use \Exception as Exception;

class WTFException extends \Exception
{
    public function __construct($message = null, $code = null)
    {
        if (! $message) {
            $message = "WTF!?";
        } else {
            $message = "WTF!? " . $message;
        }
        parent::__construct($message, $code);
    }
}

Now, Steve tracked down all the places it was actually used, which was not zero, but was only one location:

if ($contents === false) {
    throw new WTFException("Couldn't read the yaml");
}

This confirms what we all already knew: YAML is TRWTF.


,

Cryptogram New Sophisticated Malware

Mandiant is reporting on a new botnet.

The group, which security firm Mandiant is calling UNC3524, has spent the past 18 months burrowing into victims’ networks with unusual stealth. In cases where the group is ejected, it wastes no time reinfecting the victim environment and picking up where things left off. There are many keys to its stealth, including:

  • The use of a unique backdoor Mandiant calls Quietexit, which runs on load balancers, wireless access point controllers, and other types of IoT devices that don’t support antivirus or endpoint detection. This makes detection through traditional means difficult.
  • Customized versions of the backdoor that use file names and creation dates that are similar to legitimate files used on a specific infected device.
  • A live-off-the-land approach that favors common Windows programming interfaces and tools over custom code with the goal of leaving as light a footprint as possible.
  • An unusual way a second-stage backdoor connects to attacker-controlled infrastructure by, in essence, acting as a TLS-encrypted server that proxies data through the SOCKS protocol.

[…]

Unpacking this threat group is difficult. From outward appearances, their focus on corporate transactions suggests a financial interest. But UNC3524’s high-caliber tradecraft, proficiency with sophisticated IoT botnets, and ability to remain undetected for so long suggests something more.

From Mandiant:

Throughout their operations, the threat actor demonstrated sophisticated operational security that we see only a small number of threat actors demonstrate. The threat actor evaded detection by operating from devices in the victim environment’s blind spots, including servers running uncommon versions of Linux and network appliances running opaque OSes. These devices and appliances were running versions of operating systems that were unsupported by agent-based security tools, and often had an expected level of network traffic that allowed the attackers to blend in. The threat actor’s use of the QUIETEXIT tunneler allowed them to largely live off the land, without the need to bring in additional tools, further reducing the opportunity for detection. This allowed UNC3524 to remain undetected in victim environments for, in some cases, upwards of 18 months.

Planet DebianSteve Kemp: A plea for books ..

Recently I've been getting much more interested in the "retro" computers of my youth, partly because I've been writing crazy code in Z80 assembly-language, and partly because I've been preparing to introduce our child to his first computer:

  • An actual 1982 ZX Spectrum, cassette deck and all.
    • No internet
    • No hi-rez graphics
    • Easily available BASIC
    • And as a nice bonus the keyboard is wipe-clean!

I've got a few books, books I've hoarded for 30+ years, but I'd love to collect some more. So here's my request:

  • If you have any books covering either the Z80 processor, or the ZX Spectrum, please consider dropping me an email.

I'd be happy to pay €5-10 each for any book I don't yet own, and I'd also be more than happy to cover the cost of postage to Finland.

I'd be particularly pleased to see anything from Melbourne House, and while low-level is best, the coding books from Usborne (The Mystery Of Silver Mountain, etc, etc) wouldn't go amiss either.

I suspect most people who have collected and kept these wouldn't want to part with them, but just in case ..

Cryptogram Using Pupil Reflection in Smartphone Camera Selfies

Researchers are using the reflection of the smartphone in the pupils of faces taken as selfies to infer information about how the phone is being used:

For now, the research is focusing on six different ways a user can hold a device like a smartphone: with both hands, just the left, or just the right in portrait mode, and the same options in horizontal mode.

It’s not a lot of information, but it’s a start. (It’ll be a while before we can reproduce these results from Blade Runner.)

Research paper.

Planet DebianGunnar Wolf: Using a RPi as a display adapter

Almost ten months ago, I mentioned on this blog I bought an ARM laptop, which is now my main machine while away from home — a Lenovo Yoga C630 13Q50. Yes, yes, I am still not as much away from home as I used to before, as this pandemic is still somewhat of a thing, but I do move more.

My main activity in the outside world with my laptop is teaching. I teach twice a week, and… well, having a display for my slides and for showing examples in the terminal and such is a must. However, as I said back in August, one of the hardware support issues for this machine is:

No HDMI support via the USB-C displayport. While I don’t expect
to go to conferences or even classes in the next several months,
I hope this can be fixed before I do. It’s a potential important
issue for me.

It has sadly… not yet been solved ☹ While many things have improved since kernel 5.12 (the first I used), the Device Tree does not yet hint at where external video might sit.

So, I went to the obvious: Many people carry different kinds of video adaptors… I carry a slightly bulky one: an RPi3.

For two months already (time flies!), I had an ugly contraption where the RPi3 connected via Ethernet and displayed a VNC client, and my laptop had a VNC server. Oh, but did I mention — My laptop works so much better with Wayland than with Xorg that I switched, and am now a happy user of the Sway compositor (a drop-in replacement for the i3 window manager). It is built over WLRoots, which is a great and (relatively) simple project, but will thankfully not carry some of Gnome or KDE's ideas — not even those I'd rather have. So it took a bit of searching; I was very happy to find WayVNC, a VNC server for wlroots-based Wayland compositors. I launched a second Wayland compositor, to be able to have my main session undisturbed and present only a window from it.

Only that… VNC is slow and laggy, and sometimes awkward. So I kept searching for something better. And something better is, happily, what I was finally able to do!

In the laptop, I am using wf-recorder to grab an area of the screen and funnel it into a V4L2 loopback device (which allows it to be used as a camera, solving the main issue with grabbing parts of a Wayland screen):

/usr/bin/wf-recorder -g '0,32 960x540' -t --muxer=v4l2 --codec=rawvideo --pixelformat=yuv420p --file=/dev/video10

(yes, my V4L2Loopback device is set to /dev/video10). You will note I’m grabbing a 960×540 rectangle, which is the top ¼ of my screen (1920x1080) minus the Waybar. I think I’ll increase it to 960×720, as the projector to which I connect the Raspberry has a 4×3 output.
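
(For completeness, and as a guess since the post doesn't show this step: the loopback device itself would come from loading the v4l2loopback module with something like the line below.)

sudo modprobe v4l2loopback video_nr=10 card_label="wf-recorder" exclusive_caps=1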

After this is sent to /dev/video10, I tell ffmpeg to send it via RTP to the fixed address of the Raspberry:

/usr/bin/ffmpeg -i /dev/video10 -an -f rtp -sdp_file /tmp/video.sdp rtp://10.0.0.100:7000/

Yes, some uglier things happen here. You will note /tmp/video.sdp is created in the laptop itself; this file describes the stream’s metadata so it can be used from the client side. I cheated and copied it over to the Raspberry, doing an ugly hardcode along the way:

user@raspi:~ $ cat video.sdp
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 10.0.0.100
t=0 0
a=tool:libavformat 58.76.100
m=video 7000 RTP/AVP 96
b=AS:200
a=rtpmap:96 MP4V-ES/90000
a=fmtp:96 profile-level-id=1

People familiar with RTP will scold me: How come I’m streaming to the unicast client address? I should do it to an address in the 224.0.0.0–239.0.0.0 range. And it worked, sometimes. I switched over to 10.0.0.100 because it works, basically always ☺

Finally, upon bootup, I have configured NoDM to start a session with the user user, and dropped the following in my user’s .xsession:

setterm -blank 0 -powersave off -powerdown 0
xset s off
xset -dpms
xset s noblank

mplayer -msglevel all=1 -fs /home/usuario/video.sdp
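
(If mplayer ever gives trouble, a roughly equivalent receiver can be sketched with ffplay, assuming ffmpeg is installed on the Raspberry; ffplay must be explicitly told to trust the file, rtp and udp protocols referenced from the SDP file:)

ffplay -protocol_whitelist file,rtp,udp -fflags nobuffer -fs /home/usuario/video.sdp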

Anyway, as a result, my students are able to follow the pace of my presentation much more easily, and I'm able to pull off some tricks better (particularly those that require quick reaction times, as often happens when dealing with concurrency and similar issues).

Oh, and of course — in case it's of interest to anybody, and knowing that SD cards are anything but reliable in the long run, I wrote a vmdb2 recipe to build the images. You can grab it here; it requires some local files to be present at build time — some are the ones I copied over above, and the others are surely of no interest to you (such as my public ssh key :-] )

What am I still missing? (read: Can you help me with some ideas? 😉)

  • I’d prefer having Ethernet-over-USB. I have the USB-C Ethernet adapter, which powers the RPi and provides a physical link, but I’m sure I could do away with the fugly cable wrapped around the machine…
  • Of course, if that happens, I would switch to a much sexier RPi Zero. I have to check whether the video codec is light enough for a plain ol' Zero (armel) or whether I have to use the much more powerful Zero 2… I prefer sticking to the lowest-spec hardware possible!
  • Naturally… the best would be to just be able to connect my USB-C-to-{HDMI,VGA} adapter, which has been sitting idle… 😕 One day, I guess…

Of course, this is a blog post published to brag about my stuff, but also to serve me as persistent memory in case I need to recreate this…

Worse Than FailureCodeSOD: Fetching Transactions

When companies reinvent their own accounting software, they usually start from the (reasonable) position of just mirroring basic accounting processes. You have transactions, for an amount, and then tagged with information about what the transaction actually represents. So, for example, if you wanted to find all the transactions which represent tax paid, you'd need to filter on some metadata and then sum up the amounts.
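
(In a sane schema that is nearly a one-liner; a minimal sketch, with purely illustrative table and column names rather than anything from this story:)

SELECT SUM(amount) AS tax_paid
  FROM transactions
 WHERE category = 'TAX'
   AND transaction_date = '2013-12-20';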

It quickly gets more complicated. In some organizations, that complexity keeps growing, as it turns out that each department uses slightly different codes, the rules change over time, and this central accounting database gradually eats older databases which had wildly different rules. Before long, you end up with a database so crufty that it's a miracle SQL Server doesn't just up and quit.

That's the situation Giles W found himself in. What follows is an example query, which exists to answer the simple question: on 20th December, 2013, how much tax was paid across all transactions? For most database designs, that might be an expensive query, but hopefully a simple query. For this one… well… they may need to do some redesigning. Note the horizontal scrolling on today's code, there was no good place for me to add linebreaks for readability.

SELECT SUM(Tax_Paid) AS Tax_Paid FROM ( SELECT SUM(te.tax1_amount + te.tax2_amount) AS Tax_Paid FROM transactions t WITH (NOLOCK) LEFT JOIN transaction_ext te WITH (NOLOCK) on t.transaction_id = te.transaction_id and t.location_id = te.location_id LEFT JOIN sale_invoices si WITH (NOLOCK) on te.sale_invoice_no = si.invoice_no and te.location_id = si.location_id WHERE t.Start_Date BETWEEN '2013-12-20' AND '2013-12-21' AND t.Start_Time BETWEEN '2013-12-20 06:00:00' AND '2013-12-21 05:59:59' AND t.Reversed <> 1 AND t.location_id = 40123 AND t.Sublocation = 0 AND t.Cashier > '' AND t.transaction_type in ('NPROD','NCRADJ','PROD','CRADJ','NEW','TAXEXEM','COUPON','COUPONS','TAXFREE') UNION ALL SELECT SUM((te.tax1_amount + te.tax2_amount) * t.quantity) AS Tax_Paid FROM other_transactions t WITH (NOLOCK) LEFT JOIN ot_extention te WITH (NOLOCK) on t.transaction_id = te.transaction_id and t.location_id = te.location_id LEFT JOIN sale_invoices si WITH (NOLOCK) on te.sale_invoice_no = si.invoice_no and te.location_id = si.location_id WHERE t.Transaction_Date BETWEEN '2013-12-20' AND '2013-12-21' AND t.Transaction_Time BETWEEN '2013-12-20 06:00:00' AND '2013-12-21 05:59:59' AND t.location_id = 40123 AND t.Sublocation = 0 AND t.Cashier > '' AND t.transaction_type in ('SCOFFER','SCPRIV','C_SALE','COUPON','COUPONS','TAXEXEM','TAXFREE') UNION ALL SELECT SUM(te.tax1_amount + te.tax2_amount) AS Tax_Paid FROM transactions t WITH (NOLOCK) LEFT JOIN transaction_ext te WITH (NOLOCK) on t.transaction_id = te.transaction_id and t.location_id = te.location_id LEFT JOIN sale_invoices si WITH (NOLOCK) on te.sale_invoice_no = si.invoice_no and te.location_id = si.location_id LEFT JOIN tab_accounts ta WITH (NOLOCK) on te.sale_invoice_no = ta.sale_invoice_no and ta.location_id = ta.location_id WHERE t.Start_Date <= '2013-12-20' AND t.Start_Time <= '2013-12-20 06:00:00' AND t.Reversed <> 1 AND t.location_id = 40123 AND t.Sublocation = 0 AND t.Cashier > '' AND t.transaction_type in ('NPROD','NCRADJ','PROD','CRADJ','NEW','TAXEXEM','COUPON','COUPONS','TAXFREE') AND ta.payment_cancelled <> 1 AND te.group_sale_no in (SELECT sp.group_sale_no FROM Split_Payments sp WITH (NOLOCK) INNER JOIN Sale_Invoices si WITH (NOLOCK) on sp.location_id = si.location_id and sp.group_sale_no = si.invoice_no WHERE sp.Transaction_Time BETWEEN '2013-12-20 06:00:00' AND '2013-12-21 05:59:59' AND si.date_issued < '2013-12-20 06:00:00' AND sp.location_id = 40123 AND sp.Sublocation = 0 AND Transaction_Amount >= 0 GROUP BY sp.group_sale_no) UNION ALL SELECT SUM((te.tax1_amount + te.tax2_amount)* t.quantity) AS Tax_Paid FROM other_transactions t WITH (NOLOCK) LEFT JOIN ot_extention te WITH (NOLOCK) on t.transaction_id = te.transaction_id and t.location_id = te.location_id LEFT JOIN sale_invoices si WITH (NOLOCK) on te.sale_invoice_no = si.invoice_no and te.location_id = si.location_id LEFT JOIN tab_accounts ta WITH (NOLOCK) on te.sale_invoice_no = ta.sale_invoice_no and ta.location_id = ta.location_id WHERE t.Transaction_Date <= '2013-12-20' AND t.Transaction_Time <= '2013-12-20 06:00:00' AND t.location_id = 40123 AND t.sublocation = 0 AND t.Cashier > '' AND t.transaction_type in ('SCOFFER','SCPRIV','C_SALE','COUPON','COUPONS','TAXEXEM','TAXFREE') AND ta.payment_cancelled <> 1 AND te.group_sale_no in (SELECT sp.group_sale_no FROM Split_Payments sp WITH (NOLOCK) INNER JOIN Sale_Invoices si on sp.location_id = si.location_id and sp.group_sale_no = si.invoice_no WHERE sp.Transaction_Time BETWEEN '2013-12-20 06:00:00' AND '2013-12-21 05:59:59' AND 
si.date_issued < '2013-12-20 06:00:00' AND sp.location_id = 40123 AND sp.Sublocation = 0 AND Transaction_Amount >= 0 GROUP BY sp.group_sale_no) UNION ALL SELECT -1 * SUM(te.tax1_amount + te.tax2_amount) AS Tax_Paid FROM transactions t WITH (NOLOCK) LEFT JOIN transaction_ext te WITH (NOLOCK) on t.transaction_id = te.transaction_id and t.location_id = te.location_id LEFT JOIN sale_invoices si WITH (NOLOCK) on te.sale_invoice_no = si.invoice_no and te.location_id = si.location_id LEFT JOIN (SELECT transaction_amount=sum(transaction_amount), transaction_time=min(transaction_time), location_id, group_sale_no FROM Split_Payments GROUP BY location_id, group_sale_no) as sp on sp.group_sale_no = te.group_sale_no and sp.location_id = te.location_id WHERE t.Start_Date BETWEEN '2013-12-20' AND '2013-12-21' AND t.Start_Time BETWEEN '2013-12-20 06:00:00' AND '2013-12-21 05:59:59' AND t.Reversed <> 1 AND t.location_id = 40123 AND t.Sublocation = 0 AND t.Cashier > '' AND t.payment_type = 222 AND t.transaction_type in ('NPROD','NCRADJ','PROD','CRADJ','NEW','TAXEXEM','COUPON','COUPONS','TAXFREE') AND (sp.group_sale_no is NULL OR (sp.transaction_time > '2013-12-21 05:59:59' AND sp.transaction_amount >= 0)) UNION ALL SELECT -1 * SUM((te.tax1_amount + te.tax2_amount) * t.quantity) AS Tax_Paid FROM other_transactions t WITH (NOLOCK) LEFT JOIN ot_extention te WITH (NOLOCK) on t.transaction_id = te.transaction_id and t.location_id = te.location_id LEFT JOIN sale_invoices si WITH (NOLOCK) on te.sale_invoice_no = si.invoice_no and te.location_id = si.location_id LEFT JOIN (SELECT transaction_amount=sum(transaction_amount), transaction_time=min(transaction_time), location_id, group_sale_no FROM Split_Payments GROUP BY location_id, group_sale_no) as sp on sp.group_sale_no = te.group_sale_no and sp.location_id = te.location_id WHERE t.Transaction_Date BETWEEN '2013-12-20' AND '2013-12-21' AND t.Transaction_Time BETWEEN '2013-12-20 06:00:00' AND '2013-12-21 05:59:59' AND t.location_id = 40123 AND t.Sublocation = 0 AND t.Cashier > '' AND t.payment_type = 222 AND (t.transaction_type in ('SCOFFER','SCPRIV','C_SALE','COUPON','COUPONS','TAXEXEM','TAXFREE') OR t.transaction_type = 'DEPOSIT' and (te.comment IS NULL or te.comment = '')) AND (sp.group_sale_no is NULL OR (sp.transaction_time > '2013-12-21 05:59:59' AND sp.transaction_amount >= 0)) UNION ALL SELECT sum((te.tax1_amount + te.tax2_amount) * t.quantity) as Tax_Paid FROM other_transactions t WITH (NOLOCK) LEFT JOIN ot_extention te WITH (NOLOCK) on t.transaction_id = te.transaction_id and t.location_id = te.location_id INNER JOIN (SELECT distinct te.sale_invoice_no FROM other_transactions t WITH (NOLOCK) LEFT JOIN ot_extention te WITH (NOLOCK) on t.transaction_id = te.transaction_id and t.location_id = te.location_id LEFT JOIN sale_invoices si WITH (NOLOCK) on te.sale_invoice_no = si.invoice_no and te.location_id = si.location_id LEFT JOIN (SELECT transaction_amount=sum(transaction_amount), transaction_time=min(transaction_time), location_id, group_sale_no FROM Split_Payments GROUP BY location_id, group_sale_no) as sp on sp.group_sale_no = te.group_sale_no and sp.location_id = te.location_id WHERE t.Transaction_Date BETWEEN '2013-12-20' AND '2013-12-21' AND t.Transaction_Time BETWEEN '2013-12-20 06:00:00' AND '2013-12-21 05:59:59' AND t.location_id = 40123 AND t.Sublocation = 0 AND t.Cashier > '' AND t.payment_type = 222 AND t.transaction_type = 'CANCTAB' AND (sp.group_sale_no is NULL OR (sp.transaction_time > '2013-12-21 05:59:59' AND sp.transaction_amount >= 0)) ) s 
ON te.sale_invoice_no = s.sale_invoice_no WHERE t.Transaction_Date BETWEEN '2013-12-20' AND '2013-12-21' AND t.Transaction_Time BETWEEN '2013-12-20 06:00:00' AND '2013-12-21 05:59:59' AND t.transaction_type <> 'CANCTAB' AND t.location_id = 40123 AND t.Sublocation = 0 AND t.Cashier > '' UNION ALL SELECT sum(te.tax1_amount + te.tax2_amount) as Tax_Paid FROM transactions t WITH (NOLOCK) LEFT JOIN transaction_ext te WITH (NOLOCK) on t.transaction_id = te.transaction_id and t.location_id = te.location_id INNER JOIN (SELECT distinct te.sale_invoice_no FROM other_transactions t WITH (NOLOCK) LEFT JOIN ot_extention te WITH (NOLOCK) on t.transaction_id = te.transaction_id and t.location_id = te.location_id LEFT JOIN sale_invoices si WITH (NOLOCK) on te.sale_invoice_no = si.invoice_no and te.location_id = si.location_id LEFT JOIN (SELECT transaction_amount=sum(transaction_amount), transaction_time=min(transaction_time), location_id, group_sale_no FROM Split_Payments GROUP BY location_id, group_sale_no) as sp on sp.group_sale_no = te.group_sale_no and sp.location_id = te.location_id WHERE t.Transaction_Date BETWEEN '2013-12-20' AND '2013-12-21' AND t.Transaction_Time BETWEEN '2013-12-20 06:00:00' AND '2013-12-21 05:59:59' AND t.location_id = 40123 AND t.Sublocation = 0 AND t.Cashier > '' AND t.payment_type = 222 AND t.transaction_type = 'CANCTAB' AND (sp.group_sale_no is NULL OR (sp.transaction_time > '2013-12-21 05:59:59' AND sp.transaction_amount >= 0)) ) s ON te.sale_invoice_no = s.sale_invoice_no WHERE t.Start_Date BETWEEN '2013-12-20' AND '2013-12-21' AND t.Start_Time BETWEEN '2013-12-20 06:00:00' AND '2013-12-21 05:59:59' AND t.Reversed <> 1 AND t.location_id = 40123 AND t.Sublocation = 0 AND t.Cashier > '' UNION ALL SELECT SUM((te.tax1_amount + te.tax2_amount) * t.quantity) AS Tax_Paid FROM other_transactions t WITH (NOLOCK) LEFT JOIN ot_extention te WITH (NOLOCK) on t.transaction_id = te.transaction_id and t.location_id = te.location_id LEFT JOIN sale_invoices si WITH (NOLOCK) on te.sale_invoice_no = si.invoice_no and te.location_id = si.location_id WHERE t.Transaction_Date BETWEEN '2013-12-20' AND '2013-12-21' AND t.Transaction_Time BETWEEN '2013-12-20 06:00:00' AND '2013-12-21 05:59:59' AND t.location_id = 40123 AND t.Sublocation = 0 AND t.Cashier > '' AND t.transaction_type = 'REFUND' UNION ALL SELECT SUM((te.tax1_amount + te.tax2_amount) * t.quantity) AS Tax_Paid FROM other_transactions t WITH (NOLOCK) LEFT JOIN ot_extention te WITH (NOLOCK) on t.transaction_id = te.transaction_id and t.location_id = te.location_id LEFT JOIN sale_invoices si WITH (NOLOCK) on te.sale_invoice_no = si.invoice_no and te.location_id = si.location_id WHERE t.Transaction_Date BETWEEN '2013-12-20' AND '2013-12-21' AND t.Transaction_Time BETWEEN '2013-12-20 06:00:00' AND '2013-12-21 05:59:59' AND t.location_id = 40123 AND t.Sublocation = 0 AND t.Cashier > '' AND t.transaction_type = 'REVADJ' ) AS Taxes

"This," Giles writes, "is what you would see if you wanted to tinker with the query to account for some new transaction type. Presumably shortly before resigning."


,

Cryptogram SMS Phishing Attacks are on the Rise

SMS phishing attacks — annoyingly called “smishing” — are becoming more common.

I know that I have been receiving a lot of phishing SMS messages over the past few months. I am not getting the “Fedex package delivered” messages the article talks about. Mine are usually of the form: “Thank you for paying your bill, here’s a free gift for you.”

Krebs on SecurityRussia to Rent Tech-Savvy Prisoners to Corporate IT?

Image: Proxima Studios, via Shutterstock.

Faced with a brain drain of smart people fleeing the country following its invasion of Ukraine, the Russian Federation is floating a new strategy to address a worsening shortage of qualified information technology experts: Forcing tech-savvy people within the nation’s prison population to perform low-cost IT work for domestic companies.

Multiple Russian news outlets published stories on April 27 saying the Russian Federal Penitentiary Service had announced a plan to recruit IT specialists from Russian prisons to work remotely for domestic commercial companies.

Russians sentenced to forced labor will serve out their time at one of many correctional centers across dozens of Russian regions, usually at the center that is closest to their hometown. Alexander Khabarov, deputy head of Russia’s penitentiary service, said his agency had received proposals from businessmen in different regions to involve IT specialists serving sentences in correctional centers to work remotely for commercial companies.

Khabarov told Russian media outlets that under the proposal people with IT skills at these facilities would labor only in IT-related roles, but would not be limited to working with companies in their own region.

“We are approached with this initiative in a number of territories, in a number of subjects by entrepreneurs who work in this area,” Khabarov told Russian state media organization TASS. “We are only at the initial stage. If this is in demand, and this is most likely in demand, we think that we will not force specialists in this field to work in some other industries.”

According to Russian media site Lenta.ru, since March 21 nearly 95,000 vacancies in IT have remained unfilled in Russia. Lenta says the number of unfilled job slots actually shrank 25 percent from the previous month, officially because “many Russian companies are currently reviewing their plans and budgets, and some projects have been postponed.” The story fails to even mention the recent economic sanctions that are currently affecting many Russian companies thanks to Russia’s invasion of Ukraine in late February.

The Russian Association for Electronic Communications (RAEC) estimated recently that between 70,000 and 100,000 people will leave Russia as part of the second wave of emigration of IT specialists from Russia. “The study also notes that the number of IT people who want to leave Russia is growing. Experts consider the USA, Germany, Georgia, Cyprus and Canada to be the most attractive countries for moving,” Lenta reported of the RAEC survey.

It’s not clear how many “IT specialists” are currently serving prison time in Russia, or precisely what that might mean in terms of an inmate’s IT skills and knowledge. According to the BBC, about half of the world’s prison population is held in the United States, Russia or China. The BBC says Russia currently houses nearly 875,000 inmates, or about 615 inmates for every 100,000 citizens. The United States has an even higher incarceration rate (737/100,000), but also a far larger total prison population of nearly 2.2 million.

Sergei Boyarsky, deputy chairman of the Russian Duma’s Committee on Information Policy, said the idea was worth pursuing if indeed there are a significant number of IT specialists who are already incarcerated in Russia.

“I know that we have a need in general for IT specialists, this is a growing market,” said Boyarsky, who was among the Russian leaders sanctioned by the United States Treasury on March 24, 2022 in response to the Russian invasion of Ukraine. Boyarsky is head of the St. Petersburg branch of United Russia, a strongly pro-Putin political party that holds more than 70 percent of the seats in the Russian State Duma.

“Since they still work there, it would probably be right to give people with a profession that allows them to work remotely not to lose their qualifications,” Boyarsky was quoted as saying of potentially qualified inmates. “At a minimum, this proposal is worth attention and discussion if there are a lot of such specialists.”

According to Russia’s penitentiary service, the average salary of those sentenced to forced labor is about 20,000 rubles per month, or approximately USD $281. Russian news outlet RBC reports that businesses started using prison labor after the possibility of creating correctional centers in organizations appeared in 2020. RBC notes that Russia now has 117 such centers across 76 Russian regions.

Worse Than FailureCodeSOD: Annotated Private Members

Shery sends us a perfectly innocent-looking Java class, marked up with some annotations. The goal of this class is to have an object which contains a list of names that is definitely not null. Let's see how we did.

@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class NamesHolder {
    @NotNull
    private List<String> names;
}

So, the first four annotations are from the Lombok library. They're code generation tools that can generate most of your boilerplate for you.

@Data is easy: it just generates getters and setters for every property. Note, these getters/setters don't do any validation, so this won't stop you from setting the list to null. It also pulls in a constructor, but only for required fields (final or Lombok @NonNull ones), and this class has none of those.

@AllArgsConstructor, meanwhile, generates a constructor with arguments for all properties, again without any validation.

Neither of these does anything to prevent the list from being initialized to null.

@NoArgsConstructor does exactly what it implies, which is useful here, since that's a default constructor. It does not, however, initialize objects to non-null values.

Finally, @Builder creates a builder with a static factory method. This is common for classes with lots of properties which need to be initialized, e.g., ComplicatedClass instance = ComplicatedClass.builder().prop(value).otherProp(value).build()

This is completely overkill for a class with only one property, and still hasn't done anything to prevent our list from being null.

But hey, there's that @NotNull annotation on there. That must do something, right? Well, that one isn't from Lombok at all; it comes from the javax.validation (Bean Validation) API, and it only takes effect when something actually runs a Validator over the object, for example when a framework validates incoming request data.

The class in question here never passes through any such validation layer; it's just a lightweight container to pass some data between modules. So @NotNull doesn't do anything in this case.

When you take into account all the autogenerated code in play here, while that list of names is declared private, it is very much public, and very much available to everybody to manipulate. There's no guarantee that it's not null, which was the entire purpose of this class.
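
For contrast, here is a minimal sketch (not the original code) of a holder that actually enforces the invariant in plain Java; Lombok's own lombok.NonNull annotation on the field would achieve much the same by generating null checks in the constructor and setter.

import java.util.List;
import java.util.Objects;

public final class NamesHolder {
    private List<String> names;

    public NamesHolder(List<String> names) {
        // fail fast instead of hoping an unused annotation will save us
        this.names = Objects.requireNonNull(names, "names must not be null");
    }

    public List<String> getNames() {
        return names;
    }

    public void setNames(List<String> names) {
        this.names = Objects.requireNonNull(names, "names must not be null");
    }
}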

Shery adds:

All the original programmer had to do was make sure our original list is not null, chuck it in a box marked 'NamesHolder' and send it on its way. Instead they've made it as nullable as possible using 3rd party tools that they don't understand and aren't needed. Friends of mine recently contemplated giving up programming, fed up after having to deal with similar stuff for years, and may open a brewery instead. I may join them.


,

Planet DebianThomas Koch: Missing memegen

Posted on May 1, 2022

Back at $COMPANY we had an internal meme site. I had some reputation in my team for creating good memes. When I watched episode 3 of season 2 of Yes, Prime Minister yesterday, I really missed a place to post memes.

This is the full scene. Please watch it or even the full episode before scrolling down to the GIFs. I had a good laugh for some time.

With Debian, I could just download the episode from somewhere on the net with youtube-dl and easily create two GIFs using ffmpeg, one with and one without subtitles:

ffmpeg  -ss 0:5:59.600 -to 0:6:11.150 -i Downloads/Yes.Prime.Minister.S02E03-1254485068289.mp4 tmp/tragic.gif

ffmpeg  -ss 0:5:59.600 -to 0:6:11.150 -i Downloads/Yes.Prime.Minister.S02E03-1254485068289.mp4 \
        -vf "subtitles=tmp/sub.srt:force_style='Fontsize=60'" tmp/tragic_with_subtitle.gif

And this sub.srt file:

1
00:00:10,000 --> 00:00:12,000
Tragic.

I believe one needs to install the libavfilter-extra variant to burn the subtitles into the GIF.
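
As an aside, and not something strictly needed here: GIF quality usually improves noticeably with ffmpeg's two-pass palette filters. A rough sketch, with a placeholder input file name:

ffmpeg -ss 0:5:59.600 -to 0:6:11.150 -i input.mp4 -vf palettegen tmp/palette.png
ffmpeg -ss 0:5:59.600 -to 0:6:11.150 -i input.mp4 -i tmp/palette.png \
        -filter_complex paletteuse tmp/tragic.gif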

Some

space

to

hide

the

GIFs.

The Prime Minister has just learned that his predecessor, who was about to publish embarrassing memoirs, died of a sudden heart attack:

I can’t actually think of a meme with this GIF that the internal thought police community moderation would not immediately take down.

For a moment I thought that it would be fun to have a Meme-Site for Debian members. But it is probably not the right time for this.

Maybe somebody likes the above GIFs though and wants to use them somewhere.

Planet DebianThomas Koch: lsp-java coming to debian

Posted on March 12, 2022
Tags: debian

The Language Server Protocol (LSP) standardizes communication between editors and so-called language servers for different programming languages. This reduces the old problem that every editor had to implement separate plugins for every programming language. With LSP, an editor just needs to talk LSP and can immediately provide typical IDE features.

I already packaged the Emacs packages lsp-mode and lsp-haskell for Debian bullseye. Now lsp-java is waiting in the NEW queue.

I’m always worried about downloading and executing binaries from random places on the internet. It should be a matter of hygiene to only run binaries from official Debian repositories. Unfortunately this is not feasible when programming, and many people don’t see a problem with piping curl into sh multiple times to set up their programming environment.

I prefer to do such stuff only in virtual machines. With Emacs and LSP I can finally have a lightweight textmode programming environment even for Java.

Unfortunately the lsp-java mode does not yet work over TRAMP. Once this is solved, I could run Emacs on my host and isolate only the code and the language server inside the VM.

The next step would be to also keep the code on the host and mount it with Virtio FS in the VM. But so far the necessary daemon is not yet in Debian (RFP: #1007152).

In detail, I uploaded these packages:

Planet DebianThomas Koch: Waiting for a STATE folder in the XDG basedir spec

Posted on February 18, 2014

The XDG Base Directory specification proposes default homedir folders for the categories DATA (~/.local/share), CONFIG (~/.config) and CACHE (~/.cache). One category, however, is missing: STATE. This category has been requested several times, but nothing has happened.

Examples for state data are:

  • history files of shells, repls, anything that uses libreadline
  • logfiles
  • state of application windows on exit
  • recently opened files
  • last time application was run
  • emacs: bookmarks, ido last directories, backups, auto-save files, auto-save-list

The missing STATE category is especially annoying if you’re managing your dotfiles with a VCS (e.g. via VCSH) and you care about keeping your homedir tidy.

If you’re as annoyed as me about the missing STATE category, please voice your opinion on the XDG mailing list.

Of course it’s a very long way until applications really use such a STATE directory. But without a common standard it will never happen.

Planet DebianThomas Koch: shared infrastructure coop

Posted on February 5, 2014

I’m working in a very small web agency with 4 employees, one of them part-time, and our boss, who doesn’t do programming. It shouldn’t come as a surprise that our development infrastructure is not perfect. We have many ideas and dreams about how we could improve it, but not the time. Now we have two obvious choices: either we just do nothing, or we buy services from specialized vendors like GitHub, Atlassian, Travis CI, Heroku, Google and others.

Doing nothing does not work for me. But just buying all this stuff doesn’t please me either. We’d depend on proprietary software, lock-in effects or one-size-fits-all offerings. Another option would be to find other small web shops like us, form a cooperative and share essential services. There are thousands of web shops in the same situation as us, and we all need the same things:

  • public and private Git hosting
  • continuous integration (Jenkins)
  • code review (Gerrit)
  • file sharing (e.g. git-annex + webdav)
  • wiki
  • issue tracking
  • virtual windows systems for Internet Explorer testing
  • MySQL / Postgres databases
  • PaaS for PHP, Python, Ruby, Java
  • staging environment
  • Mails, Mailing Lists
  • simple calendar, CRM
  • monitoring

As I said, all of the above is available as commercial offerings. But I’d prefer the following to be satisfied:

  • The infrastructure itself should be open (but not free of charge), like the OpenStack Project Infrastructure as presented at LCA. I especially like how they review their puppet config with Gerrit.

  • The process to become an admin for the infrastructure should work much the same like the process to become a Debian Developer. I’d also like the same attitude towards quality as present in Debian.

Does something like that already exist? There is already the German cooperative hostsharing, which is kind of similar, but it mainly provides hosting, not services. But I’ll ask them next, after writing this blog post.

Is your company interested in joining such an effort? Does it sound silly?

Comments:

Sounds promising. I already answered by mail. Dirk Deimeke (Homepage) on 16.02.2014 08:16. Homepage: http://d5e.org

I’m sorry for accidentally removing a comment that linked to https://mayfirst.org while moderating comments. I’m really looking forward to another blogging engine… Thomas Koch on 16.02.2014 12:20

Why? What are you missing? I have been using s9y for 9 years now. Dirk Deimeke (Homepage) on 16.02.2014 12:57

Cory DoctorowRevenge Of The Chickenized Reverse Centaurs


This week on my podcast, I read a recent Medium column, Revenge of the Chickenized Reverse-Centaurs, about the relationship between algorithms, interoperability and worker power.

(Image: Cryteria, CC BY 3.0, modified)

MP3

,

David BrinWormholes, blackholes... and more!

Just returned from my first speaking tour in 2+ years. Vaxxed & masked in public areas, but pretty relaxed holding small meetings with brilliant researchers at UIUC (Urbana-Champaign).

Only now... how about some science?

Let's start with a fabulous rundown by Peter Diamandis of the 5 top things we may learn from the newly-launched James Webb Space Telescope! And yes, I was a skeptic about this hugely complex machine. The fact that it appears to be... well... perfect suggests that maybe you ought to consider yourself a member of a fantastically competent civilization... whenever our anti-modernist cousins stop dragging at our ankles.

Strange things keep manifesting! (Ain't it cool?) Pairs and clusters of strands stretch for nearly 150 light-years in the galactic center region and are equally spaced. The bizarre structures are a few million years old and vary in appearance. Some of them resemble harp strings, waterfalls or even the rings around Saturn. But the true nature of the filaments remains elusive.


Giant radio galaxies are yet another mystery in a Universe full of mysteries. They consist of a host galaxy (that's the cluster of stars orbiting a galactic nucleus containing a supermassive black hole), as well as colossal jets and lobes that erupt forth from the galactic center. Now, an utterly humongous one has been found with radio lobes reaching 5 megaparsecs.  


The new Imaging X-Ray Polarimetry Explorer (IXPE) space telescope reveals wonders out there in ‘a new light.’  

An excellent article about why black holes appear to spin so fast - via conservation of angular momentum - that the edges of their ergospheres may approach the speed of light.

And meta cosmological -- If the physics theory of cosmological coupling is correct, the expansion of the universe causes black holes to gain mass.


And even more meta! A “spiderweb of wormholes” could solve a fundamental “information paradox” first proposed by Stephen Hawking.


== And within our solar system ==


2020 XL5 is an Earth Trojan — an asteroid companion to Earth that orbits the Sun along the same path as our planet does, only 60 degrees ahead at L4. These are far more rare than the large numbers collected 60 degrees ahead or behind Jupiter. Over a kilometer wide, it is speculated as a potentially useful base (especially if the Type C asteroid contains volatiles like water)… but also as a place we ought to scan for “lurker” interstellar observation probes… as I describe in EXISTENCE. 


Large-scale liquid on Mars existed much longer than suspected, according to this Caltech report. Martian salt deposits are often found in shallow depressions, sometimes perched above much larger craters that are devoid of the deposits. MRO data showing shallow salt plains above craters suggests that some wet patches endured rather late, as recently as 2.3 billion years ago. Some of these deposits are on terrain that's a billion years younger than the ground the Perseverance Rover is rolling across right now.


The European Space Agency said that its Solar Orbiter – which was launched in 2020 on a mission to study the sun – quite by accident passed through this comet’s tail in late 2021. While within the tail, one of the sensors aboard Solar Orbiter measured particles that were definitively from the comet and not the solar wind. It detected ions of oxygen, carbon, molecular nitrogen, and molecules of carbon monoxide, carbon dioxide and possibly water. Visible light images can hint at the rate at which the comet is ejecting dust, while the ultraviolet images can give the water production rate.


Three prominent features on the Kuiper Belt object Arrokoth – the farthest planetary body ever explored, by NASA's New Horizons spacecraft – now have official names. Proposed by the New Horizons team and approved by the International Astronomical Union, the names follow a theme set by "Arrokoth" itself, which means "sky" in the Powhatan/Algonquin Native American language.


Ice-roofed worlds might make up a majority of all worlds bearing life. Tidal heating is foremost, but radioactivity also contributes, along with a weird effect of serpentine rocks slowly relaxing into a lower-energy structure!


Ah, balmy Venus: “Venus, our closest planetary neighbor, is called Earth's twin because of the similarity in size and density of both planets. Otherwise, the planets differ radically… While previous studies suggested Venus might have once been covered in oceans, new research has found the opposite: Venus has likely never been able to support oceans.” Any water clouds that did form fled to the night side, where they did not reflect sunlight (albedo) but did trap in heat. So the place never cooled down.

Still, oceans may yet come to Venus!  See how in my novella “The Tumbledowns of Cleopatra Abyss”! On my website and in Best of David Brin stories… my top stuff! 

Scientists have identified what appears to be a small chunk of the moon – possibly blasted off it by an impact 100,000 years ago. Kamo`oalewa is one of Earth’s quasi-satellites, a category of asteroid that orbits the Sun passing frequently by Earth. Also a perfect place for an alien observation post! 


An interesting theory about the origin of Earth’s water: the solar wind - charged particles from the Sun largely made of hydrogen ions - created water on the surface of dust grains carried on asteroids that smashed into the Earth during the early days of the Solar System, helping to explain how lighter isotopes and hydrogen complemented water arriving from early comets and carbonaceous chondrites.  It also suggests “astronauts may be able to process fresh supplies of water straight from the dust on a planet's surface, such as the Moon."

At 100 km across, comet Bernardinelli-Bernstein (BB) is the largest comet ever discovered by far, and it is active, even though farther from the sun than the planet Uranus. The size of comet BB and its distance from the sun suggests that the vaporizing ice forming the coma is dominated by carbon monoxide.  To understand this better, you might go to my doctoral dissertation. Or else the best look at these objects… a novel… Heart of the Comet!


The solar system’s strangest moon? Saturn's Iapetus. Well… after Titan, of course. Tropical-balmy beach resort Titan. Ahhhh! Yet, read about the curious, unexplained features of Iapetus.


,

Krebs on SecurityYou Can Now Ask Google to Remove Your Phone Number, Email or Address from Search Results

Google said this week it is expanding the types of data people can ask to have removed from search results, to include personal contact information like your phone number, email address or physical address. The move comes just months after Google rolled out a new policy enabling people under the age of 18 (or a parent/guardian) to request removal of their images from Google search results.

Google has for years accepted requests to remove certain sensitive data such as bank account or credit card numbers from search results. In a blog post on Wednesday, Google’s Michelle Chang wrote that the company’s expanded policy now allows for the removal of additional information that may pose a risk for identity theft, such as confidential log-in credentials, email addresses and phone numbers when it appears in Search results.

“When we receive removal requests, we will evaluate all content on the web page to ensure that we’re not limiting the availability of other information that is broadly useful, for instance in news articles,” Chang wrote. “We’ll also evaluate if the content appears as part of the public record on the sites of government or official sources. In such cases, we won’t make removals.”

While Google’s removal of a search result from its index will do nothing to remove the offending content from the site that is hosting it, getting a link decoupled from Google search results is going to make the content at that link far less visible. According to recent estimates, Google enjoys somewhere near 90 percent market share in search engine usage.

KrebsOnSecurity decided to test this expanded policy with what would appear to be a no-brainer request: I asked Google to remove the search result for BriansClub, one of the largest (if not THE largest) cybercrime stores for selling stolen payment card data.

BriansClub has long abused my name and likeness to pimp its wares on the hacking forums. Its homepage includes a copy of my credit report, Social Security card, phone bill, and a fake but otherwise official looking government ID card.

The login page for perhaps the most bustling cybercrime store for stolen payment card data.

Briansclub updated its homepage with this information in 2019, after it got massively hacked and a copy of its customer database was shared with this author. The leaked data — which included 26 million credit and debit card records taken from hacked online and brick-and-mortar retailers — was ultimately shared with dozens of financial institutions.

TechCrunch writes that the policy expansion comes six months after Google started allowing people under 18, or their parents, to request deletion of their photos from search results. To do so, users need to specify that they want Google to remove “Imagery of an individual currently under the age of 18” and provide some personal information, the image URLs and the search queries that would surface the results. Google also lets you submit requests to remove non-consensual explicit or intimate personal images, along with involuntary fake pornography, TechCrunch notes.

This post will be updated in the event Google responds one way or the other, but that may take a while: Google’s automated response said: “Due to the preventative measures being taken for our support specialists in light of COVID-19, it may take longer than usual to respond to your support request. We apologize for any inconvenience this may cause, and we’ll send you a reply as soon as we can.”

Update: 10:30 p.m. ET: An earlier version of this story incorrectly stated that people needed to show explicit or implicit threats regarding requests to remove information like one’s phone number, address or email address from a search result. A spokesperson for Google said “there is no requirement that we find the content to be harmful or shared in a malicious way.”

Cryptogram Video Conferencing Apps Sometimes Ignore the Mute Button

New research: “Are You Really Muted?: A Privacy Analysis of Mute Buttons in Video Conferencing Apps“:

Abstract: In the post-pandemic era, video conferencing apps (VCAs) have converted previously private spaces — bedrooms, living rooms, and kitchens — into semi-public extensions of the office. And for the most part, users have accepted these apps in their personal space, without much thought about the permission models that govern the use of their personal data during meetings. While access to a device’s video camera is carefully controlled, little has been done to ensure the same level of privacy for accessing the microphone. In this work, we ask the question: what happens to the microphone data when a user clicks the mute button in a VCA? We first conduct a user study to analyze users’ understanding of the permission model of the mute button. Then, using runtime binary analysis tools, we trace raw audio in many popular VCAs as it traverses the app from the audio driver to the network. We find fragmented policies for dealing with microphone data among VCAs — some continuously monitor the microphone input during mute, and others do so periodically. One app transmits statistics of the audio to its telemetry servers while the app is muted. Using network traffic that we intercept en route to the telemetry server, we implement a proof-of-concept background activity classifier and demonstrate the feasibility of inferring the ongoing background activity during a meeting — cooking, cleaning, typing, etc. We achieved 81.9% macro accuracy on identifying six common background activities using intercepted outgoing telemetry packets when a user is muted.

The paper will be presented at PETS this year.

News article.

Worse Than FailureError'd: Time is Time in Time and Your Time

Shocked sharer Rob J. blurts "I feel like that voltage is a tad high."

hot

 

On the Heisenbahn you either know where you're going, or how much it costs, but not both. Personenzug Christian K. chose the wrong train, as he now knows neither place nor price. "Seems that I should get a ticket from %1$@ to %2$@. I wonder how much that’ll be."

train

 

The anonymous OP titled this "Changing trains the German way". To appreciate it, it helps to know that Umsteigezeit is how one says "connection time" auf Deutsch. Zero minutes seems plenty generous if you're trying to connect to a train that left seven minutes before you arrived! Maybe someone more familiar with German rail schedules can explain.

transfer

 

Aussie Paul J. complains "Apparently Red can't be my favourite colour, and I can't have a cat called Ada..." It's a right dictatorship downunder.

red

 

Sad Bret finds himself unpopular with chatbots lately. He reckons "I guess the Customer Service Bot is having a smoke and doesn't want to deal."

help


Genieße das Wochenende.


,

Cryptogram Microsoft Issues Report of Russian Cyberattacks against Ukraine

Microsoft has a comprehensive report on the dozens of cyberattacks — and even more espionage operations — Russia has conducted against Ukraine as part of this war:

At least six Russian Advanced Persistent Threat (APT) actors and other unattributed threats, have conducted destructive attacks, espionage operations, or both, while Russian military forces attack the country by land, air, and sea. It is unclear whether computer network operators and physical forces are just independently pursuing a common set of priorities or actively coordinating. However, collectively, the cyber and kinetic actions work to disrupt or degrade Ukrainian government and military functions and undermine the public’s trust in those same institutions.

[…]

Threat groups with known or suspected ties to the GRU have continuously developed and used destructive wiper malware or similarly destructive tools on targeted Ukrainian networks at a pace of two to three incidents a week since the eve of invasion. From February 23 to April 8, we saw evidence of nearly 40 discrete destructive attacks that permanently destroyed files in hundreds of systems across dozens of organizations in Ukraine.

Worse Than FailureCodeSOD: Counting References

If you're working in a research field, references matter- specifically, the citations made by your paper and the citations eventually made against yours. But when programming, references can be hard.
Dorothy is a scientist, and understands that code itself is a valuable artifact: it's not enough just to get the solution; the code also needs to be maintainable and readable. So when her peers get into trouble, they frequently come to Dorothy to figure out why.

This Java code is one such example:

// convert image stack to correct architecture with scaling
if (impType != "GRAY32") {
    ImageProcessor ip;
    for (int i = 1; i <= testImage.getStackSize(); i++) {
        ip = testImage.getStack().getProcessor(i);
        if (impType == "GRAY16")
            ip = ip.convertToShortProcessor(true);
        else
            ip = ip.convertToByte(true);
    }
}

Without knowing the details, we can already see that there's some stringly-typed data. Java has had enums for a long time, and that'd be a much better way to manage these variations.

But that's boring style nonsense. testImage is an ImagePlus object, which contains a set of ImageProcessor objects. An ImageProcessor wraps an actual image, and has concrete subclasses for various kinds of ImageProcessors.

This code wants to iterate across all the ImageProcessors in an ImagePlus and convert them to a new type. Here's the problem with this approach: the convertTo methods don't modify the object in place: they return a new instance.

The developer has gotten a little lost in their references and the result is that they never update the object they're trying to update: they create a local copy and modify the local copy.
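
A hedged sketch of what the loop presumably meant to do, assuming ImageJ's ImagePlus/ImageStack API: convert each slice, then actually put the converted processors back.

import ij.ImagePlus;
import ij.ImageStack;
import ij.process.ImageProcessor;

class StackConversionSketch {
    // Sketch only: build a new stack from the converted processors and swap it in.
    static void convertStack(ImagePlus testImage, String impType) {
        if (!"GRAY32".equals(impType)) {                 // compare strings with equals, not ==
            ImageStack converted = new ImageStack(testImage.getWidth(), testImage.getHeight());
            for (int i = 1; i <= testImage.getStackSize(); i++) {
                ImageProcessor ip = testImage.getStack().getProcessor(i);
                ip = "GRAY16".equals(impType)
                        ? ip.convertToShortProcessor(true)   // returns a *new* processor...
                        : ip.convertToByte(true);            // ...nothing is converted in place
                converted.addSlice(testImage.getStack().getSliceLabel(i), ip);
            }
            testImage.setStack(converted);               // actually update the ImagePlus
        }
    }
}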

It's a pretty basic mistake, but it's the sort of thing that ends up eating a lot of time, because as you can imagine, nobody actually documents which methods work in place and which return copies, and as you can see, they don't even name the methods that do the same thing consistently: convertToShortProcessor vs. convertToByte.

The real WTF though, is that for loop. Array types with 1-based indexes? That's just unforgivable.


,

Cryptogram Zero-Day Vulnerabilities Are on the Rise

Both Google and Mandiant are reporting a significant increase in the number of zero-day vulnerabilities reported in 2021.

Google:

2021 included the detection and disclosure of 58 in-the-wild 0-days, the most ever recorded since Project Zero began tracking in mid-2014. That’s more than double the previous maximum of 28 detected in 2015 and especially stark when you consider that there were only 25 detected in 2020. We’ve tracked publicly known in-the-wild 0-day exploits in this spreadsheet since mid-2014.

While we often talk about the number of 0-day exploits used in-the-wild, what we’re actually discussing is the number of 0-day exploits detected and disclosed as in-the-wild. And that leads into our first conclusion: we believe the large uptick in in-the-wild 0-days in 2021 is due to increased detection and disclosure of these 0-days, rather than simply increased usage of 0-day exploits.

Mandiant:

In 2021, Mandiant Threat Intelligence identified 80 zero-days exploited in the wild, which is more than double the previous record volume in 2019. State-sponsored groups continue to be the primary actors exploiting zero-day vulnerabilities, led by Chinese groups. The proportion of financially motivated actors­ — particularly ransomware groups — ­deploying zero-day exploits also grew significantly, and nearly 1 in 3 identified actors exploiting zero-days in 2021 was financially motivated. Threat actors exploited zero-days in Microsoft, Apple, and Google products most frequently, likely reflecting the popularity of these vendors. The vast increase in zero-day exploitation in 2021, as well as the diversification of actors using them, expands the risk portfolio for organizations in nearly every industry sector and geography, particularly those that rely on these popular systems.

News article.

David BrinRomanticism & Resentment: Great for art! Terrible for running a civilization

My romantic soul agrees with this vivid howl! (From Robert A. Heinlein's Glory Road.) 


How vivid, and don't we all... at least in part... agree?


And yet, this plaint by a Heinlein character -- a scarred Vietnam vet and sci fi fan -- also exemplifies the lethal Problem of Romanticism, in which arty emotionalism gets all the mighty propaganda! Propaganda just like Heinlein's passage (though seldom as eloquent.) 


Let me put it as a bald assertion. Romanticism may be one of the most-central aspects of being human... and not always for the better.


From the Punic Wars all the way to modern Hollywood flicks, romanticism has spent centuries propelling rage and demonization in all parties, in all human conflicts, making calm negotiation next to impossible. (Admit it. Some of your own passion is about “MY kind of people are virtuous and those opposing my kind are inherently and by type morally deficient!”)


Oh, let's also admit from the start how addicting righteousness can be! Yes, it must have been reinforced during evolution because of the passion and forcefulness it supplies, during the struggles each generation faced, across the last half a million years. So reinforced that it can be hard even to notice.


== NOT a good basis for policy, in a complex world ==


Emerging from the voluptuous high of romanticism is hard, but not quite impossible, as we’ve shown during the last 200 years of gradually augmenting… maturity.


In fact, as one who lost nearly all of his cousin family lines to one of the most romantic of all vile movements, let me thank God that the romantic soul is having its hands peeled off of policy at long last, after 10,000 years of wretched fear-drenched rage, in which every generation's tribes called their rivals subhuman, deserving only death, like the Tharks of Mars, Tolkien's orcs, the Trojans that Achilles slew in heaps...


...or the Black folks who Confederate romantics enslaved as sub-human and Jews slaughtered in millions by romantics playing Wagner...


...and successively masses of robots... then clones... then masked storm troopers who George Lucas mowed down to our delight since, naturally, none of their kind had mothers to mourn them?


== We need romanticism, at our core! Only... ==


Here's a pretty basic question.  Look at Heinlein's list of great adventures his character longed for. Now tell us which of them  would be even a scintilla as good a place to raise a family as this tawdry, fouled up mess of a world he was complaining about.  Oh, it's tawdry and messed up, all right. But largely by the ways it has failed to move away from the kinds of brutal, even sadistic adventure-zones that were rampant both in fiction and across nearly all of human history. 


But there are equally many ways that we have started leaving all of that behind!  And your long, comfortable lives, free of most anguish, pain and death while staring at the flat screens of these palantir miracle devices, kind of suggest our change of path was the right course.


At long last we are giving policy over to the part of us that does fair argument and science and the freedom of even despised minorities to speak and demand we LOOK at them with compassion and respect!


That transformation is not complete - by far - and it may yet fail! But we are close - so close - to exiling 'romance’ from daylight activities of fact-based policy, sending that part of us instead over to the realm where it belongs. NOT the daylight hours of invention, argument and negotiated progress... 


...but to the campfire hours of moonlight and stars dancing overhead - or the couch or movie theater or pulpy novel - when... YES!... we can unleash that wild, romantic spirit. Those hours when we still need to bay at Luna or Barsoom, to relish garish adventures and quests against dragons...


...or to scan a million black squiggles on pressed vegetable pages, or glowing from a kindled screen, and let those incantations draw us into the voluptuous, subjective roar of which Heinlein speaks!


I make such incantations! I craft good ones. (You'll enjoy them!) 


But no. 


That side of us should never again be given the tiller of nations or policy. (As crazy people at all political wings are right now demanding that we do!) 


The daytime halls of policy and science and truth-seeking and negotiation... and yes, even revising even our most passionate biases - that's when and where we must (it is long past time) at last grow up.


== Recovery from authoritarian regimes ==


Here's an amazingly cogent and well-parsed theory for how authoritarian regimes often transition to democracy after a long reign by an autocrat who is both repressive and good at effective rulership and development. It reminds me of Asimov’s ‘psychohistory’ riff on strong vs. weak emperors vs. strong vs. weak generals. In fact, this article strikes me as a much more cogent psychohistorical contribution than any of the recently popular “historical cycles” bilge that’s been going around. Income, Democracy, and Leader Turnover, by Daniel Treisman


“Abstract:  While some believe that economic development prompts democratization, others contend that both result from distant historical causes. Using the most comprehensive estimates of national income available, I show that development is associated with more democratic government—but mostly in the medium run (10 to 20 years). This is because higher income tends to induce breakthroughs to more democratic politics only after an incumbent dictator leaves office. And in the short run, faster economic growth increases the ruler's survival odds. Leader turnover appears to matter because of selection: In authoritarian states, reformist leaders tend to either democratize or lose power relatively quickly, so long-serving leaders are rarely reformers. Autocrats also become less activist after their first year in office. This logic helps explain why dictators, concerned only to prolong their rule, often inadvertently prepare their countries for jumps to democracy after they leave the scene.”


Certainly Singapore and South Korea followed this model. Did Pinochet? Iran’s Shah is hard to fit here, except to put him in the category of “less strong than he thought he was.” So. Can we hope this will be legacy of some of today’s world strongmen?


And finally... 


I may have linked to this before. Here's Mark Twain blaming Sir Walter Scott's romanticism for the Civil War:


"Then comes Sir Walter Scott with his enchantments, and by his single might checks this wave of progress, and even turns it back; sets the world in love with dreams and phantoms; with decayed and swinish forms of religion; with decayed and degraded systems of government; with the silliness and emptinesses, sham grandeurs, sham gauds, and sham chivalries of a brainless and worthless long-vanished society."


I knew I liked the fellow who crafted Huckleberry Finn, one of the finest and most noble of all fictional rascals.

Cryptogram Friday Squid Blogging: Squidmobile

The Squidmobile.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Krebs on SecurityFighting Fake EDRs With ‘Credit Ratings’ for Police

When KrebsOnSecurity recently explored how cybercriminals were using hacked email accounts at police departments worldwide to obtain warrantless Emergency Data Requests (EDRs) from social media firms and technology providers, many security experts called it a fundamentally unfixable problem. But don’t tell that to Matt Donahue, a former FBI agent who recently quit the agency to launch a startup that aims to help tech companies do a better job screening out phony law enforcement data requests — in part by assigning trustworthiness or “credit ratings” to law enforcement authorities worldwide.

A sample Kodex dashboard. Image: Kodex.us.

Donahue is co-founder of Kodex, a company formed in February 2021 that builds security portals designed to help tech companies “manage information requests from government agencies who contact them, and to securely transfer data & collaborate against abuses on their platform.”

The 30-year-old Donahue said he left the FBI in April 2020 to start Kodex because it was clear that social media and technology companies needed help validating the increasingly large number of law enforcement requests domestically and internationally.

“So much of this is such an antiquated, manual process,” Donahue said of his perspective gained at the FBI. “In a lot of cases we’re still sending faxes when more secure and expedient technologies exist.”

Donahue said when he brought the subject up with his superiors at the FBI, they would kind of shrug it off, as if to say, “This is how it’s done and there’s no changing it.”

“My bosses told me I was committing career suicide doing this, but I genuinely believe fixing this process will do more for national security than a 20-year career at the FBI,” he said. “This is such a bigger problem than people give it credit for, and that’s why I left the bureau to start this company.”

One of the stated goals of Kodex is to build a scoring or reputation system for law enforcement personnel who make these data requests. After all, there are tens of thousands of police jurisdictions around the world — including roughly 18,000 in the United States alone — and all it takes for hackers to abuse the EDR process is illicit access to a single police email account.

Kodex is trying to tackle the problem of fake EDRs by working directly with the data providers to pool information about police or government officials submitting these requests, and hopefully making it easier for all customers to spot an unauthorized EDR.

Kodex’s first big client was cryptocurrency giant Coinbase, which confirmed their partnership but otherwise declined to comment for this story. Twilio confirmed it uses Kodex’s technology for law enforcement requests destined for any of its business units, but likewise declined to comment further.

Within their own separate Kodex portals, Twilio can’t see requests submitted to Coinbase, or vice versa. But each can see if a law enforcement entity or individual tied to one of their own requests has ever submitted a request to a different Kodex client, and then drill down further into other data about the submitter, such as Internet address(es) used, and the age of the requestor’s email address.

Donahue said in Kodex’s system, each law enforcement entity is assigned a credit rating, wherein officials who have a long history of sending valid legal requests will have a higher rating than someone sending an EDR for the first time.

“In those cases, we warn the customer with a flash on the request when it pops up that we’re allowing this to come through because the email was verified [as being sent from a valid police or government domain name], but we’re trying to verify the emergency situation for you, and we will change that rating once we get new information about the emergency,” Donahue said.

“This way, even if one customer gets a fake request, we’re able to prevent it from happening to someone else,” he continued. “In a lot of cases with fake EDRs, you can see the same email [address] being used to message different companies for data. And that’s the problem: So many companies are operating in their own silos and are not able to share information about what they’re seeing, which is why we’re seeing scammers exploit this good faith process of EDRs.”
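
The article doesn’t spell out a scoring formula, but the idea is easy to sketch. The toy Python below is purely illustrative — the field names, weights, and threshold are hypothetical, not Kodex’s actual model — and just shows how a submitter’s pooled history could be folded into a single score that triggers a warning for first-time or disputed requestors:

# Purely illustrative: a toy trust score for an EDR submitter. The field
# names, weights, and threshold are hypothetical, not Kodex's actual model.
def requestor_trust_score(valid_requests, disputed_requests,
                          domain_verified, email_age_days):
    score = 0.0
    if domain_verified:                     # sent from a verified police/gov domain
        score += 20
    score += min(valid_requests, 50)        # a long history of valid requests helps
    score -= 25 * disputed_requests         # disputed or fake requests hurt heavily
    score += min(email_age_days / 30, 12)   # brand-new mailboxes look suspicious
    return max(score, 0.0)

def needs_warning_flash(score, threshold=25):
    # Low-history or disputed submitters get flagged for manual verification.
    return score < threshold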

NEEDLES IN THE HAYSTACK

As social media and technology platforms have grown over the years, so have the volumes of requests from law enforcement agencies worldwide for user data. For example, in its latest transparency report mobile giant Verizon reported receiving 114,000 data requests of all types from U.S. law enforcement entities in the second half of 2021.

Verizon said approximately 35,000 of those requests (~30 percent) were EDRs, and that it provided data in roughly 91 percent of those cases. The company doesn’t disclose how many EDRs came from foreign law enforcement entities during that same time period. Verizon currently asks law enforcement officials to send these requests via fax.

Validating legal requests by domain name may be fine for data demands that include documents like subpoenas and search warrants, which can be validated with the courts. But not so for EDRs, which largely bypass any official review and do not require the requestor to submit any court-approved documents.

Police and government authorities can legitimately request EDRs to learn the whereabouts or identities of people who have posted online about plans to harm themselves or others, or in other exigent circumstances such as a child abduction or abuse, or a potential terrorist attack.

But as KrebsOnSecurity reported in March, it is now clear that crooks have figured out there is no quick and easy way for a company that receives one of these EDRs to know whether it is legitimate. Using illicit access to hacked police email accounts, the attackers will send a fake EDR along with an attestation that innocent people will likely suffer greatly or die unless the requested data is provided immediately.

In this scenario, the receiving company finds itself caught between two unsavory outcomes: Failing to immediately comply with an EDR — and potentially having someone’s blood on their hands — or possibly leaking a customer record to the wrong person. That might explain why the compliance rate for EDRs is usually quite high — often upwards of 90 percent.

Fake EDRs have become such a reliable method in the cybercrime underground for obtaining information about account holders that several cybercriminals have started offering services that will submit these fraudulent EDRs on behalf of paying clients to a number of top social media and technology firms.

A fake EDR service advertised on a hacker forum in 2021.

An individual who’s part of the community of crooks that are abusing fake EDRs told KrebsOnSecurity the schemes often involve hacking into police department emails by first compromising the agency’s website. From there, they can drop a backdoor “shell” on the server to secure permanent access, and then create new email accounts within the hacked organization.

In other cases, hackers will try to guess the passwords of police department email systems. In these attacks, the hackers will identify email addresses associated with law enforcement personnel, and then attempt to authenticate using passwords those individuals have used at other websites that have been breached previously.

EDR OVERLOAD?

Donahue said depending on the industry, EDRs make up between 5 percent and 30 percent of the total volume of requests. In contrast, he said, EDRs amount to less than three percent of the requests sent through Kodex portals used by customers.

KrebsOnSecurity sought to verify those numbers by compiling EDR statistics based on annual or semi-annual transparency reports from some of the largest technology and social media firms. While there are no available figures on the number of fake EDRs each provider is receiving each year, those phony requests can easily hide amid an increasingly heavy torrent of legitimate demands.

Meta/Facebook says roughly 11 percent of all law enforcement data requests — 21,700 of them — were EDRs in the first half of 2021. Almost 80 percent of the time the company produced at least some data in response. Facebook has long used its own online portal where law enforcement officials must first register before submitting requests.

Government data requests, including EDRs, received by Facebook over the years. Image: Meta Transparency Report.

Apple said it received 1,162 emergency requests for data in the last reporting period it made public — July – December 2020. Apple’s compliance with EDRs was 93 percent worldwide in 2020. Apple’s website says it accepts EDRs via email, after applicants have filled out a supplied PDF form. [As a lifelong Apple user and customer, I was floored to learn that the richest company in the world — which for several years has banked heavily on privacy and security promises to customers — still relies on email for such sensitive requests].

Twitter says it received 1,860 EDRs in the first half of 2021, or roughly 15 percent of the global information requests sent to Twitter. Twitter accepts EDRs via an interactive form on the company’s website. Twitter reports that EDRs decreased by 25% during this reporting period, while the aggregate number of accounts specified in these requests decreased by 15%. The United States submitted the highest volume of global emergency requests (36%), followed by Japan (19%), and India (12%).

Discord reported receiving 378 requests for emergency data disclosure in the first half of 2021. Discord accepts EDRs via a specified email address.

For the six months ending in December 2021, Snapchat said it received 2,085 EDRs from authorities in the United States (with a 59 percent compliance rate), and another 1,448 from international police (64 percent granted). Snapchat has a form for submitting EDRs on its website.

TikTok‘s resources on government data requests currently lead to a “Page not found” error, but a company spokesperson said TikTok received 715 EDRs in the first half of 2021. That’s up from 409 EDRs in the previous six months. Tiktok handles EDRs via a form on its website.

The current transparency reports for both Google and Microsoft do not break out EDRs by category. Microsoft says that in the second half of 2021 it received more than 25,000 government requests, and that it complied at least partly with those requests more than 90 percent of the time.

Microsoft runs its own portal that law enforcement officials must register at to submit legal requests, but that portal doesn’t accept requests for other Microsoft properties, such as LinkedIn or Github.

Google said it received more than 113,000 government requests for user data in the last half of 2020, and that about 76 percent of the requests resulted in the disclosure of some user information. Google doesn’t publish EDR numbers, and it did not respond to requests for those figures. Google also runs its own portal for accepting law enforcement data requests.

Verizon reports (PDF) receiving more than 35,000 EDRs from just U.S. law enforcement in the second half of 2021, out of a total of 114,000 law enforcement requests (Verizon doesn’t disclose how many EDRs came from foreign law enforcement entities). Verizon said it complied with approximately 91 percent of requests. The company accepts law enforcement requests via snail mail or fax.

Image: Verizon.com.

AT&T says (PDF) it received nearly 19,000 EDRs in the second half of 2021; it provided some data roughly 95 percent of the time. AT&T requires EDRs to be faxed.

The most recent transparency report published by T-Mobile says the company received more than 164,000 “emergency/911” requests in 2020 — but it does not specifically call out EDRs. Like its old school telco brethren, T-Mobile requires EDRs to be faxed. T-Mobile did not respond to requests for more information.

Data from T-Mobile’s most recent transparency report in 2020. Image: T-Mobile.

Worse Than Failure: CodeSOD: Never Don't Stop Not Doing This

It's not nothing to never write confusing English. And it doesn't never influence the code that we write. Don't fail to look at this anti-pattern from today's un-named submitter.

If Not port Is Nothing Then
    portUnAvailable = False
End If

If the port isn't nothing, then the port isn't unavailable. That's… not untrue. But it is surprisingly confusing when you're reading it in a larger block of code. And, of course, this would be simpler to express as a boolean operation:

portUnAvailable = port Is Nothing

Then again, without context, that might be wrong- perhaps portUnAvailable gets set elsewhere and then changes only if Not port Is Nothing. But let's assume that's not how this works, because that hints at a bigger WTF.

Do never don't avoid this pattern in your own code or writing.


ME: PIN for Login

Windows 10 added a new “PIN” login method, an optional alternative to an Internet-based password through Microsoft or a domain password through Active Directory. Here is a web page explaining some of the technology (don’t watch the YouTube video) [1]. There are three issues here: whether a PIN is any good in concept, whether the specifics of how it works are any good, and whether we can copy any useful ideas for Linux.

Is a PIN Any Good?

A PIN is, in concept, a shorter password. I think that less secure methods of screen unlocking (fingerprint, face unlock, and a PIN) can reasonably be used in less hostile environments. For example, if you go to the bathroom or to get a drink in a relatively secure environment like a typical home or office, you don’t need to enter a long password afterwards. Having a short password that works for short periods of screen locking and a long password for longer times could be a viable option.

It could also be an option to allow short passwords when the device is in a certain area (determined by GPS or Wifi connection). Android devices have in the past had options to disable passwords when at home.

Is the Windows 10 PIN Any Good?

The Windows 10 PIN is based on TPM security, which can provide real benefits, but this is more a failure of Windows local passwords in not using the TPM than a benefit of the PIN. When you log in to a Windows 10 system you will be given a choice of PIN or the configured password (local password or AD password).

As a general rule providing a user a choice of ways to login is bad for security as an attacker can use whichever option is least secure.

The configuration options for Windows 10 allow either group policy in AD or the registry to determine whether PIN login is allowed, but offer no control over when the PIN can be used, which seems like a major limitation to me.

The claim that the PIN is more secure than a password would only make sense if it were a viable option to disable the local password or AD domain password and only use the PIN. That’s unreasonably difficult for home users and usually impossible for people on machines with corporate management.

Ideas For Linux

I think it would be good to have separate options for short-term and long-term screen locks. This could be implemented by having a screen locking program use two different PAM configurations for unlocking after short-term and long-term lock periods.
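
As a rough sketch of what that could look like (the service names are hypothetical and no current screen locker ships these files), the short-term service could accept a fingerprint before falling back to the password, while the long-term service always requires the full password:

# /etc/pam.d/screenlock-short -- hypothetical service for locks shorter than N minutes
auth    sufficient    pam_fprintd.so
auth    required      pam_unix.so

# /etc/pam.d/screenlock-long -- hypothetical service for longer locks
auth    required      pam_unix.so

The locker would then start PAM with one service name or the other depending on how long the screen had been locked.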

Having local passwords based on the TPM might be useful. But if you have the root filesystem encrypted via the TPM using systemd-cryptenroll it probably doesn’t gain you a lot. One benefit of the TPM is limiting the number of incorrect password guesses in hardware: the default is to allow 32 wrong attempts and then one more every 10 minutes. Trying to do that in software would allow 32 guesses followed by a hardware reset, which could average something like 32 guesses per minute instead of 32 guesses per 320 minutes. Maybe something like fail2ban could help with this (a similar algorithm, but for password authentication guesses instead of network access).
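
For reference, binding an existing LUKS volume to the TPM looks something like this (the device path is just an example, adjust it for your system):

# enroll the TPM as an unlock method, sealed against the Secure Boot state (PCR 7)
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p3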

Having a local login method to use when there is no Internet access and network authentication can’t work could be useful. But if the local login method is easier to attack, then an adversary could disrupt Internet access to force a fallback to the less secure login method.

Is there a good federated authentication system for Linux? Something to provide comparable functionality to AD but with distributed operation as a possibility?


Worse Than Failure: A Slice of Spam

In addition to being a developer, Beatrix W manages a few small email servers, which means she sometimes needs to evaluate the kinds of messages arriving and their origins, especially when they're suspicious. One such suspicious message arrived, complete with a few malicious links, and some hints of possibly being the start of a spear-phishing attack.

That was concerning, and as it turns out, the email came through a vendor who specializes in sending marketing emails, but of the requested sort (or at least the sort where you got confused about which box to uncheck at checkout and accidentally signed yourself up for a newsletter). So Beatrix tracked down the contact form on the company website.

She filled out the form with a description of the issue. The form had a handy-dandy "Attachments" field, and the instructions said, "Attach the suspicious email with its full email headers." So, she copy/pasted the suspicious email, headers included, into a text file, and attached the text file. She reviewed her work, confirmed that the attachment had uploaded successfully, and then pushed "Send". A copy of her submission arrived in her inbox, attachment and all, so she filed it away and forgot about it.

Two weeks later, the vendor replied.

We were unable to complete your investigation of unwanted email because we did not have enough information. In order for us to address issues you may be experiencing with users of our services sending you unwanted, unsolicited, or otherwise problematic emails, it will be necessary for you to send us the full content of this message including the full headers.

Beatrix paused reading right there, and pulled up her email, and confirmed, yes, she had attached the email, complete with its headers. She went back to the vendor's email reply and continued reading:

Please note that due to security concerns we will not open attachments under any circumstance. You must provide any necessary information in plaintext in the body of your report.

At least they care about their security, if not yours. Though it does raise the question: why does their contact form have an attachments button if you shouldn't use it?



Cryptogram Friday Squid Blogging: Squid Filmed Changing Color for Camouflage Purposes

Video of oval squid (Sepioteuthis lessoniana) changing color in reaction to their background. The research paper claims this is the first time this has been documented.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Long Article on NSO Group

Ronan Farrow has a long article in the New Yorker on NSO Group, which includes the news that someone — probably Spain — used the software to spy on domestic Catalonian separatists.

Cryptogram Java Cryptography Implementation Mistake Allows Digital-Signature Forgeries

Interesting implementation mistake:

The vulnerability, which Oracle patched on Tuesday, affects the company’s implementation of the Elliptic Curve Digital Signature Algorithm in Java versions 15 and above. ECDSA is an algorithm that uses the principles of elliptic curve cryptography to authenticate messages digitally.

[…]

ECDSA signatures rely on a pseudo-random number, typically notated as K, that’s used to derive two additional numbers, R and S. To verify a signature as valid, a party must check the equation involving R and S, the signer’s public key, and a cryptographic hash of the message. When both sides of the equation are equal, the signature is valid.

[…]

For the process to work correctly, neither R nor S can ever be a zero. That’s because one side of the equation is R, and the other is multiplied by R and a value from S. If the values are both 0, the verification check translates to 0 = 0 X (other values from the private key and hash), which will be true regardless of the additional values. That means an adversary only needs to submit a blank signature to pass the verification check successfully.

Madden wrote:

Guess which check Java forgot?

That’s right. Java’s implementation of ECDSA signature verification didn’t check if R or S were zero, so you could produce a signature value in which they are both 0 (appropriately encoded) and Java would accept it as a valid signature for any message and for any public key. The digital equivalent of a blank ID card.
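
The missing check is small. As a minimal illustration (not Java’s actual code), this is the kind of range test a conforming verifier performs on the signature components before doing any curve arithmetic:

# n is the order of the curve's base point; r and s are the signature components.
def signature_components_in_range(r, s, n):
    # The ECDSA standard requires both values to lie in [1, n-1], so an
    # all-zero "psychic signature" is rejected before any further math.
    return 1 <= r <= n - 1 and 1 <= s <= n - 1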

More details.

Cryptogram Clever Cryptocurrency Theft

Beanstalk Farms is a decentralized finance project that has a majority stake governance system: basically people have proportional votes based on the amount of currency they own. A clever hacker used a “flash loan” feature of another decentralized finance project to borrow enough of the currency to give himself a controlling stake, and then approved a $182 million transfer to his own wallet.

It is insane to me that cryptocurrencies are still a thing.

Cryptogram Undetectable Backdoors in Machine-Learning Models

New paper: “Planting Undetectable Backdoors in Machine Learning Models“:

Abstract: Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate “backdoor key”, the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees.

First, we show how to plant a backdoor in any model, using digital signature schemes. The construction guarantees that given black-box access to the original model and the backdoored version, it is computationally infeasible to find even a single input where they differ. This property implies that the backdoored model has generalization error comparable with the original model. Second, we demonstrate how to insert undetectable backdoors in models trained using the Random Fourier Features (RFF) learning paradigm or in Random ReLU networks. In this construction, undetectability holds against powerful white-box distinguishers: given a complete description of the network and the training data, no efficient distinguisher can guess whether the model is “clean” or contains a backdoor.

Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, our construction can produce a classifier that is indistinguishable from an “adversarially robust” classifier, but where every input has an adversarial example! In summary, the existence of undetectable backdoors represent a significant theoretical roadblock to certifying adversarial robustness.

EDITED TO ADD (4/20): Cory Doctorow wrote about this as well.


ME: Got Covid

I’ve currently got Covid. I believe I caught it on the 11th of April (my first flight since the pandemic started), with a runny nose on the 13th and a positive RAT on the evening of the 14th. I got an official PCR test on the 16th, with a positive result returned on the 17th. I think I didn’t infect anyone else (yay)! Now I seem mostly OK but still lack energy; sometimes I suddenly feel tired after 20 minutes of computer work.

The progression of the disease was very different to previous cold/flu illnesses I have had. What I expect is to start with a cough or runny nose, escalate with more of that, have a day or two of utter misery with congestion, joint pain, headache, etc., then have it suddenly decrease overnight. With Covid I had a runny nose for a couple of days, which went away; then I got congestion in my throat with serious coughing, to the point that I became unable to speak. Then the coughing went away and I had a really bad headache for a day with almost no other symptoms. Then the headache went away and I was coughing a bit the next day. The symptoms seemed to be moving around my body.

I got a new job and they wanted me to fly to the head office to meet the team; I apparently caught it on the plane a day before starting work. I’ve discussed this with a manager and stated my plan to drive instead of fly in future. It’s only a 7 hour drive, and it’s not worth risking the disease to save 3-4 hours of travel time, or even the 5 hours of travel I’d have saved if the airports were working normally (apparently a lot of airport staff are off sick, so there are delays). Given the flight delays and the fact that I was advised to arrive extra early at the airport, I ended up taking almost 7 hours for the entire trip!

7 hours driving is a bit of effort, but sitting in an airport waiting for a delayed flight while surrounded by diseased people isn’t fun either.

Sam Varghese: Bias? Don’t know that word, says The Age editor

Another Saturday, and there’s a fresh dose of wisdom from Gay Alcorn, the venerable editor of The Age, a tabloid that is one of the two main newspapers in Melbourne. Once again, Alcorn’s gem was behind a paywall in the morning but is now free to read.

As with her effort some weeks ago — which was dissected here — Alcorn is again trying to play the balance card even as accusations of bias arise. This time, a federal election campaign is in full swing and thus the shrieks from the gallery are that much louder.

Alcorn claims the newspaper, once part of a large stable running under the name Fairfax Media until it was taken over by Nine Entertainment, has not moved to the right.

[I worked for the website of The Age for nearly 17 years, from June 1999 until May 2016.]

In her own words: “There is no doubt that on social media in particular, The Age is accused of being pro-Coalition, especially since Fairfax, publishers of The Age and The Sydney Morning Herald, were taken over by Nine Entertainment and because the board’s chairman is Peter Costello, a former Liberal Treasurer. We are also accused of being pro-Labor – our letter writers appear overwhelmingly progressive to me – but mostly the suggestion is that we have moved rightwards.”

Alcorn denies this is the case: “Maybe I would say this, but I don’t believe it’s true. We take editorial independence so seriously that there would be a major problem if commercial interests attempted to influence our editorial decisions in any way.”

But history says otherwise. The very fact that The Guardian, a newspaper that leans to the left, has been able to set up a website and thrive in Australia, and that The New Daily, a website funded by the superannuation industry which also veers more left than right, has found a sizeable audience, speaks to the untruthfulness of this assertion.

Both these publications have cannibalised the Age’s left-wing readers as the Age has swung to right-of-centre.

In 2006, the Age’s rival, the Herald Sun, a Murdoch tabloid, was planning a redesign, to become a little more of a red-top than it already was. At that time, the editor of the Age website, Mike van Niekerk, sent an email to the site’s news editor, pointing out what was about to happen and saying the Age website would have to go in a similar direction.

The website became a lot more tabloid-like from that point on, often drawing criticism from the print side of operations that the website could not even be recognised as the Age.

Some years later, the Age and the Sydney Morning Herald both decided to reduce their sizes and become tabloids. The content slowly began to reflect the size of the paper. Hence, saying there has been no turn to the right is wide of the mark.

As to bias, Alcorn and the Sydney Morning Herald editor Bevan Shields both defended their respective papers (and by extension the two websites) from such a claim on a podcast.

But then the reality is different. The columnists are reflective of the bias: Shaun Carney, who left the paper some years ago, is now writing for it again; he is Peter Costello’s biographer.

Another of the op-ed writers is Parnell Palme McGuinness, daughter of the late right-wing contrarian Padraic McGuinness, a right-wing nutjob if ever there was one. Add to that a woman named Julie Szego — who has been there for decades and once ran to Mark Leibler to get a column of hers reinstated after the then editor, Paul Ramadge, had spiked it — and you have all the ingredients for a stale right-wing pudding. There’s also former Howard minister Amanda Vanstone who provides the icing on that.

The only decent political columnist is Nikki Savva and she came to Nine only because The Australian, where she was a staple, hired the former Tony Abbott spin doctor Peta Credlin. The Age also runs columns written by Michelle Grattan who left the paper seven or eight years ago.

Waleed Aly, a lecturer who was once on the left, but is much more centrist these days, writes the occasional column. Then there is Peter Hartcher who is on the record as saying that Australia should not accept Chinese from mainland China as immigrants. Plus Ross Gittins, an economics writer of some vintage.

The sharing of copy between the two papers started some years ago to save money and continues apace. The Sydney Morning Herald is the boss and calls the tune.

These occasional missives from Alcorn do little to convince any reader with even an IQ of 10 of the lack of bias; the content is the only thing that will speak for it.


Krebs on Security: Leaked Chats Show LAPSUS$ Stole T-Mobile Source Code

KrebsOnSecurity recently reviewed a copy of the private chat messages between members of the LAPSUS$ cybercrime group in the week leading up to the arrest of its most active members last month. The logs show LAPSUS$ breached T-Mobile multiple times in March, stealing source code for a range of company projects. T-Mobile says no customer or government information was stolen in the intrusion.

LAPSUS$ is known for stealing data and then demanding a ransom not to publish or sell it. But the leaked chats indicate this mercenary activity was of little interest to the tyrannical teenage leader of LAPSUS$, whose obsession with stealing and leaking proprietary computer source code from the world’s largest tech companies ultimately led to the group’s undoing.

From its inception in December 2021 until its implosion late last month, LAPSUS$ operated openly on its Telegram chat channel, which quickly grew to more than 40,000 followers after the group started using it to leak huge volumes of sensitive data stolen from victim corporations.

But LAPSUS$ also used private Telegram channels that were restricted to the core seven members of the group. KrebsOnSecurity recently received a week’s worth of these private conversations between LAPSUS$ members as they plotted their final attacks late last month.

The candid conversations show LAPSUS$ frequently obtained the initial access to targeted organizations by purchasing it from sites like Russian Market, which sell access to remotely compromised systems, as well as any credentials stored on those systems.

The logs indicate LAPSUS$ had exactly zero problems buying, stealing or sweet-talking their way into employee accounts at companies they wanted to hack. The bigger challenge for LAPSUS$ was the subject mentioned by “Lapsus Jobs” in the screenshot above: Device enrollment. In most cases, this involved social engineering employees at the targeted firm into adding one of their computers or mobiles to the list of devices allowed to authenticate with the company’s virtual private network (VPN).

The messages show LAPSUS$ members continuously targeted T-Mobile employees, whose access to internal company tools could give them everything they needed to conduct hassle-free “SIM swaps” — reassigning a target’s mobile phone number to a device they controlled. These unauthorized SIM swaps allow an attacker to intercept a target’s text messages and phone calls, including any links sent via SMS for password resets, or one-time codes sent for multi-factor authentication.

The LAPSUS$ group had a laugh at this screenshot posted by their leader, White, which shows him reading a T-Mobile news alert about their hack into Samsung. White is viewing the page via a T-Mobile employee’s virtual machine.

In one chat, the LAPSUS$ leader — a 17-year-old from the U.K. who goes by the nicknames “White,” “WhiteDoxbin” and “Oklaqq” — is sharing his screen with another LAPSUS$ member who used the handles “Amtrak” and “Asyntax.”

The two were exploring T-Mobile’s internal systems, and Amtrak asked White to obscure the T-Mobile logo on his screen. In these chats, the user “Lapsus Jobs” is White. Amtrak explains this odd request by saying their parents are aware Amtrak was previously involved in SIM swapping.

“Parents know I simswap,” Amtrak said. “So, if they see [that] they think I’m hacking.”

The messages reveal that each time LAPSUS$ was cut off from a T-Mobile employee’s account — either because the employee tried to log in or change their password — they would just find or buy another set of T-Mobile VPN credentials. T-Mobile currently has approximately 75,000 employees worldwide.

On March 19, 2022, the logs and accompanying screenshots show LAPSUS$ had gained access to Atlas, a powerful internal T-Mobile tool for managing customer accounts.

LAPSUS$ leader White/Lapsus Jobs looking up the Department of Defense in T-Mobile’s internal Atlas system.

After gaining access to Atlas, White proceeded to look up T-Mobile accounts associated with the FBI and Department of Defense (see image above). Fortunately, those accounts were listed as requiring additional verification procedures before any changes could be processed.

Faced with increasingly vocal pleadings from other LAPSUS$ members not to burn their access to Atlas and other tools by trying to SIM swap government accounts, White unilaterally decided to terminate the VPN connection permitting access to T-Mobile’s network.

The other LAPSUS$ members desperately wanted to SIM swap some wealthy targets for money. Amtrak throws a fit, saying “I worked really hard for this!” White calls the Atlas access trash and then kills the VPN connection anyway, saying he wanted to focus on using their illicit T-Mobile access to steal source code.

A screenshot taken by LAPSUS$ inside T-Mobile’s source code repository at Bitbucket.

Perhaps to mollify his furious teammates, White changed the subject and told them he’d gained access to T-Mobile’s Slack and Bitbucket accounts. He said he’d figured out how to upload files to the virtual machine he had access to at T-Mobile.

Roughly 12 hours later, White posts a screenshot in their private chat showing his automated script had downloaded more than 30,000 source code repositories from T-Mobile.

White showing a screenshot of a script that he said downloaded all available T-Mobile source code.

In response to questions from KrebsOnSecurity, T-Mobile issued the following statement:

“Several weeks ago, our monitoring tools detected a bad actor using stolen credentials to access internal systems that house operational tools software. The systems accessed contained no customer or government information or other similarly sensitive information, and we have no evidence that the intruder was able to obtain anything of value. Our systems and processes worked as designed, the intrusion was rapidly shut down and closed off, and the compromised credentials used were rendered obsolete.”

CONSIDER THE SOURCE

It is not clear why LAPSUS$ was so fixated on stealing source code. Perhaps LAPSUS$ thought they could find in the source clues about security weaknesses that could be used to further hack these companies and their customers. Maybe the group already had buyers lined up for specific source code that they were then hired to procure. Or maybe it was all one big Capture the Flag competition, with source code being the flag. The leaked chats don’t exactly explain this fixation.

But it seems likely that the group routinely tried to steal and then delete any source code it could find on victim systems. That way, it could turn around and demand a payment to restore the deleted data.

In one conversation in late March, a LAPSUS$ member posts screenshots and other data indicating they’d gained remote administrative access to a multi-billion dollar company. But White is seemingly unimpressed, dismissing the illicit access as not worth the group’s time because there was no source code to be had.

LAPSUS$ first surfaced in December 2021, when it hacked into Brazil’s Ministry of Health and deleted more than 50 terabytes of data stored on the ministry’s hacked servers. The deleted data included information related to the ministry’s efforts to track and fight the COVID-19 pandemic in Brazil, which has suffered a disproportionate 13 percent of the world’s COVID-19 fatalities. LAPSUS$’s next 15 victims were based either in Latin America or Portugal, according to cyber threat intelligence firm Flashpoint.

By February 2022, LAPSUS$ had pivoted to targeting high-tech firms based in the United States. On Feb. 26, LAPSUS$ broke into graphics and computing chip maker NVIDIA. The group said it stole more than a terabyte of NVIDIA data, including source code and employee credentials.

Dan Goodin at Ars Technica wrote about LAPSUS$’s unusual extortion demand against NVIDIA: The group pledged to publish the stolen code unless NVIDIA agreed to make the drivers for its video cards open-source. According to these chats, NVIDIA responded by connecting to the computer the attackers were using, and then encrypting the stolen data.

Like many high-tech firms whose value is closely tied to their intellectual property, NVIDIA relies on a number of technologies designed to prevent data leaks or theft. According to LAPSUS$, among those is a requirement that only devices which have been approved or issued by the company can be used to access its virtual private network (VPN).

These so-called Mobile Device Management (MDM) systems retrieve information about the underlying hardware and software powering the system requesting access, and then relay that information along with any login credentials.

In a typical MDM setup, a company will issue employees a laptop or smartphone that has been pre-programmed with a data profile, VPN and other software that allows the employer to track, monitor, troubleshoot or even wipe device data in the event of theft, loss, or a detected breach.

MDM tools also can be used to encrypt or retrieve data from connected systems, and this was purportedly the functionality NVIDIA used to claw back the information stolen by LAPSUS$.

“Access to NVIDIA employee VPN requires the PC to be enrolled in MDM,” LAPSUS$ wrote in a post on their public Telegram channel. “With this they were able to connect to a [virtual machine] that we use. Yes, they successfully encrypted the data. However, we have a backup and it’s safe from scum!!!”

NVIDIA declined to comment for this story.

On March 7, consumer electronics giant Samsung confirmed what LAPSUS$ had bragged on its Telegram channel: That the group had stolen and leaked nearly 200 GB of source code and other internal company data.

The chats reveal that LAPSUS$ stole a great deal more source code than they bragged about online. One of White’s curious fascinations was SASCAR, Brazil’s leading fleet management and freight security company. White had bought and talked his way into SASCAR’s systems, and had stolen many gigabytes worth of source code for the company’s fleet tracking software.

It was bad enough that LAPSUS$ had just relieved this company of valuable intellectual property: The chats show that for several days White taunted SASCAR employees who were responding to the then-unfolding breach, at first by defacing the company’s website with porn.

The messages show White maintained access to the company’s internal systems for at least 24 hours after that, even sitting in on the company’s incident response communications where the security team discussed how to evict their tormentors.

SASCAR is owned by tire industry giant Michelin, which did not respond to requests for comment.

ENROLLMENT

The leaked LAPSUS$ internal chats show the group spent a great deal of time trying to bypass multi-factor authentication for the credentials they’d stolen. By the time these leaked chat logs were recorded, LAPSUS$ had spent days relentlessly picking on another target that relied on MDM to restrict employee logins: Iqor, a customer support outsourcing company based in St. Petersburg, Fla.

LAPSUS$ apparently had no trouble using Russian Market to purchase access to Iqor employee systems. “I will buy login when on sale, Russians stock it every 3-4 days,” Amtrak wrote regarding Iqor credentials for sale in the bot shops.

The real trouble for LAPSUS$ came when the group tried to evade Iqor’s MDM systems by social engineering Iqor employees into removing multi-factor authentication on Iqor accounts they’d purchased previously. The chats show that time and again Iqor’s employees simply refused requests to modify multi-factor authentication settings on the targeted accounts, or make any changes unless the requests were coming from authorized devices.

One of several IQOR support engineers who told LAPSUS$ no over and over again.

After many days of trying, LAPSUS$ ultimately gave up on Iqor. On Mar. 22, LAPSUS$ announced it hacked Microsoft, and began leaking 37 gigabytes worth of Microsoft source code.

Like NVIDIA, Microsoft was able to stanch some of the bleeding, cutting off LAPSUS$’s illicit access while the group was in the process of downloading all of the available source code repositories alphabetically (the group publicized their access to Microsoft at the same time they were downloading the software giant’s source code). As a result, LAPSUS$ was only able to leak the source for Microsoft products at the beginning of the code repository, including Azure, Bing and Cortana.

BETRAYAL

LAPSUS$ leader White drew attention to himself prior to the creation of LAPSUS$ last year when he purchased a website called Doxbin, a long-running and highly toxic online community that is used to “dox” or post deeply personal information on people.

Based on the feedback posted by Doxbin members, White was not a particularly attentive administrator. Longtime members soon took to harassing him about various components of the site falling into disrepair. That pestering eventually prompted White to sell Doxbin back to its previous owner at a considerable loss. But before doing so, White leaked the Doxbin user database.

White’s leak triggered a swift counterpunch from Doxbin’s staff, which naturally responded by posting on White perhaps the most thorough dox the forum had ever produced — including videos filmed just outside his home where he lives with his parents in the United Kingdom.

The past and current owner of the Doxbin — an established cybercriminal who goes by the handle “KT” — is the same person who leaked these private LAPSUS$ Telegram chat logs to KrebsOnSecurity.

In early April, multiple news outlets reported that U.K. police had arrested seven people aged 15-21 in connection with the LAPSUS$ investigation. But it seems clear from reading these leaked Telegram chats that individual members of LAPSUS$ were detained and questioned at different times over the course of several months.

In his chats with other LAPSUS$ members during the last week in March, White maintained that he was arrested 1-2 months prior in connection with an intrusion against a victim referred to only by the initials “BT.” White also appeared unconcerned when Amtrak admits that the City of London police found LAPSUS$ Telegram chat conversations on his mobile phone.

Perhaps to demonstrate his indifference (or maybe just to screw with Amtrak), White responds by leaking Amtrak’s real name and phone number to the group’s public Telegram channel. In an ALL CAPS invective of disbelief at the sudden betrayal, Amtrak relates how various people started calling their home and threatening their parents as a result, and how White effectively outed them to law enforcement and the rest of the world as a LAPSUS$ member.

The vast majority of noteworthy activity documented in these private chats takes place between White and Amtrak, but it doesn’t seem that White counted Amtrak or any of his fellow LAPSUS$ members as friends or confidants. On the contrary, White generally behaved horribly toward everyone in the group, and he particularly seemed to enjoy abusing Amtrak (who somehow always came back for more).

“Mox,” one of the LAPSUS$ members who shows up throughout these leaked chats, helped the group in their unsuccessful attempts to enroll their mobile devices with an airline in the Middle East to which they had purchased access. Audio recordings leaked from the group’s private Telegram channel include a call wherein Mox can be heard speaking fluently in Arabic and impersonating an airline employee.

At one point, Mox’s first name briefly shows up in a video he made and shared with the group, and Mox mentions that he lives in the United States. White then begins trying to find and leak Mox’s real-life identity.

When Mox declares he’s so scared he wants to delete his iCloud account, White suggests he can get Mox’s real name, precise location and other information by making a fraudulent “emergency data request” (EDR) to Apple, in which they use a hacked police department email account to request emergency access to subscriber information under the claim that the request can’t wait for a warrant because someone’s life is on the line.

White was no stranger to fake EDRs. White was a founding member of a cybercriminal group called “Recursion Team,” which existed between 2020 and 2021. This group mostly specialized in SIM swapping targets of interest and participating in “swatting” attacks, wherein fake bomb threats, hostage situations and other violent scenarios are phoned in to police as part of a scheme to trick them into visiting potentially deadly force on a target’s address.

The roster of the now-defunct “Infinity Recursion” hacking team, from which some members of LAPSUS$ hail.

The Recursion Team was founded by a then 14-year-old from the United Kingdom who used the handle “Everlynn.” On April 5, 2021, Everlynn posted a new sales thread to the cybercrime forum cracked[.]to titled, “Warrant/subpoena service (get law enforcement data from any service).” The price: $100 to $250 per request.

Everlynn advertising a warrant/subpoena service based on fake EDRs.

Bringing this full circle, it appears Amtrak/Asyntax is the same person as Everlynn. As part of the Recursion Team, White used the alias “Peter.” Several LAPSUS$ members quizzed White and Amtrak about whether authorities asked about Recursion Team during questioning. In several discussion threads, White’s “Lapsus Jobs” alias on Telegram answers “yes?” or “I’m here” when another member addresses him by Peter.

White dismissed his public doxing of both Amtrak and Mox as their fault for being sloppy with operational security, or by claiming that everyone already knew their real identities. Incredibly, just a few minutes after doxing Amtrak, White nonchalantly asks them for help in stealing source code from yet another victim firm — as if nothing had just happened between them. Amtrak seems soothed by this invitation, and agrees to help.

On Mar. 30, software consultancy giant Globant was forced to acknowledge a hack after LAPSUS$ published 70 gigabytes of data stolen from the company, including customers’ source code. While the Globant hack has been widely reported for weeks, the cause of the breach remained hidden in these chat logs: A stolen five-year-old access token for Globant’s network that still worked.

LAPSUS$ members marvel at a 5-year-old stolen authentication cookie still working when they use it against Globant to steal source code.

Globant lists a number of high-profile customers on its website, including the U.K. Metropolitan Police, software house Autodesk and gaming giant Electronic Arts. In March, KrebsOnSecurity showed how White was connected to the theft of 780 GB worth of source code from Electronic Arts last summer.

In that attack, the intruders reportedly gained access to EA’s data after purchasing authentication cookies for an EA Slack channel from the dark web marketplace “Genesis,” which offers more or less the same wares as the Russian Market.

One remarkable aspect of LAPSUS$ was that its members apparently decided not to personally download or store any data they stole from companies they hacked. They were all so paranoid of police raiding their homes that they assiduously kept everything “in the cloud.” That way, when investigators searched their devices, they would find no traces of the stolen information.

But this strategy ultimately backfired: Shortly before the private LAPSUS$ chat was terminated, the group learned it had just lost access to the Amazon AWS server it was using to store months of source code booty and other stolen data.

“RIP FBI seized my server,” Amtrak wrote. “So much illegal shit. It’s filled with illegal shit.”

White shrugs it off with the dismissive comment, “U can’t do anything about ur server seized.” Then Amtrak replies that they never made a backup of the server.

“FFS, THAT AWS HAD TMO SRC [T-Mobile source] code!” White yelled back.

The two then make a mad scramble to hack back into T-Mobile and re-download the stolen source code. But that effort ultimately failed after T-Mobile’s systems revoked the access token they were using to raid the company’s source code stash.

“How they noticed?” Amtrak asked White.

“Gitlab auto-revoked, likely,” White replied. “Cloning 30k repos four times in 24 hours isn’t very normal.”

Ah, the irony of a criminal hacking group that specializes in stealing and deleting data having their stolen data deleted.

It’s remarkable how often LAPSUS$ was able to pay a few dollars to buy access to some hacked machine at a company they wanted to break into, and then successfully parlay that into the theft of source code and other sensitive information.

What’s even more remarkable is that anyone can access dark web bot shops like Russian Market and Genesis, which means larger companies probably should be paying someone to regularly scrape these criminal bot services, even buying back their own employee credentials to take those vulnerable systems off the market. Because that’s probably the simplest and cheapest incident response money can buy.

The Genesis bot shop.

ME: Joplin Notes

In response to my post about Android phones without Google Play [1] I received an email recommending Joplin for notes on Android [2].

Joplin supports storing notes over a number of protocols, including Nextcloud and WebDAV. I set up WebDAV because it’s easiest; here are Digital Ocean’s instructions for WebDAV on Apache [3]. That basically works. One problem for my use case is that the Joplin client doesn’t support accounts on multiple servers, and the only released way of sharing notes between accounts is using the paid Joplin Cloud service.
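
For anyone wanting to reproduce the setup, the Apache side amounts to something like the following minimal sketch (the paths, URL, and password file are just examples, and the dav and dav_fs modules need to be enabled):

DavLockDB /var/lib/dav/lockdb

Alias /joplin /var/www/joplin-webdav
<Directory /var/www/joplin-webdav>
    Dav On
    AuthType Basic
    AuthName "Joplin notes"
    AuthUserFile /etc/apache2/webdav.passwd
    Require valid-user
</Directory>

The directory and lock database need to be writable by the web server user, and the Joplin client is then pointed at the resulting URL (for example https://example.org/joplin/) as its WebDAV target.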

There is a Joplin Server in beta which allows sharing notes, but it is designed to run in Docker and is written in TypeScript, so it was too much pain to set up. One mitigating factor is that there are “Notebooks”, which are collections of notes. So if multiple people who trust each other share an account, they can have Notebooks for personal notes and a Notebook for shared notes.

There is also a Snap install of the client for Debian [4]. Snap isn’t my favourite way of doing things but packaging JavaScript programs will probably be painful so I’ll do it if I continue using Joplin.


Charles Stross: Behind the Ukraine war

Today is April 2nd. There's a good reason I skipped blogging on April 1st: the actual news right now is both sufficiently ghastly and surreal that any attempt at satire either falls flat or falls victim to Poe's Law.

(I did hatch a relatively harmless idea for a non-depressing April Fool's jape—an announcement that I'd decided my fiction was too depressing, so I was going to pivot to writing Squeecore (albeit with Lovecraftian features), but then I described it to a friend and he pointed out that Dead Lies Dreaming was already Squeecore with Lovecraftian features, so the joke's on me.)

I have real difficulty writing fiction during periods when the Wrong Sort of History is Happening. The Ukraine invasion completely threw me off my stride, so the novella I was attempting to write the second half of is still unfinished and I'm behind schedule on the final draft of Season of Skulls.

But when life hands you lemons you might as well make lemonade, so here's what I learned from my most recent month of doomscrolling.

Some of the news this year puts me in mind of a novel I never got round to writing. Back in March of 2012 I wrote about something that worried me: the intersection of social media apps, geolocation, smartphones, and murder:

In the worst case, it's possible to envisage geolocation and data aggregation apps being designed to facilitate the identification and elimination of some ethnic or class enemy

Today, some of it is happening in the Ukraine war:

There's even an app people can use to report the movements of Russian troops, sending location-tagged videos directly to Ukrainian intelligence. The country's minister of digital transformation, Mykhailo Fedorov, told The Washington Post they're getting tens of thousands of reports a day.

It's a lot less morally questionable than my grim speculation about geolocation/social media apps mediating intra-community genocide, but it's still appalling by implication. The Ukrainians are justified in doing this, but sooner or later someone is going to turn this into a tool for genocide.

What is funny, in the sense of funny-peculiar, not funny-humorous, is the war of the cellular networks. It turns out the Russian field units are using 1980s analog radios and cellphones to communicate. A lot of them got lost because after commanders confiscated all the troops' smartphones, they issued paper maps which nobody knows how to use any more. Meanwhile the Russian commanders were using an end-to-end encrypted secure messaging app ... that required cellphone service, and by shelling the Ukrainian cellphone base stations they were disrupting their own secure comms. It's an absolute clusterfuck, and if it wasn't combined with atrocities and war crimes it would be hilarious.

This is without even touching on the self-inflicted Russian casualties in the Chernobyl exclusion zone. You may wonder why the Russian soldiers were stupid enough to dig trenches in the Red Forest, possibly the most radioactive pollution zone on the planet. (Hint: it takes more radiation to kill a conifer than a human being—which is why the Red Forest, where almost all the trees died, is a really bad place to go bivouacking.) It becomes clearer once you know that the Russian armies are being directed from the top down, receiving exact orders from Moscow and allowed no scope for deviation. Someone who was 16 years old in 1986 (the year of the Chernobyl Disaster—about the youngest age to fully understand the scale and implications of the event) would be 52 by now, probably too old to be in the field: to the kids fighting the war, the Chernobyl disaster probably happened before their parents were born. It's ancient history about an accident in a foreign country.

Back in the mists of time on this blog (DDG search isn't terribly helpful in locating it) I prognosticated about the first generation who would never have experienced getting lost, because smartphones with GPS would be ubiquitous. But when Generation Location runs into a military-historical Cold War LARP/nostalgia trip—which seems to be what the Ukraine war is turning into, from the Russian point of view—things get messy. Ditto for no access to wikipedia or other online information resources. It seems humans have short memories (especially 18-20 year old conscripts from the decrepit, poverty-stricken Russian heartland), and the elderly and rigid Russian leadership (Putin is only 5 years younger than Leonid Brezhnev was when he died) is locked in an information bubble of their own creation, uncritically consuming reports their subordinates prepare in hope of not attracting their ire.

I could go on endlessly about this ongoing war, but right now I just want to clutch my head and hide. Anyway, I speak/read neither Russian nor Ukrainian, so I'm at best a second-hand information source. A lot of the stuff circulating on twitter (I don't do Facebook) is of dubious quality, although I find the twitter-streams of @kamilkazani and @drleostrauss (note: it's an alias, the real Leo Strauss died in 1973: this one is a pseudonymous Washington DC foreign policy wonk) both compelling and mostly persuasive.

What I can safely say is that this war isn't going the way any of us might have expected. That Ukraine wouldn't roll over and surrender instantly, but would instead fight back furiously, could have been predicted. (This is the sort of war that nation-building myths are later based on, like the Battle of Britain, or the Winter War, or the Israeli War of Independence.) It's at least as revolutionary as the Second Boer War in terms of brutally exposing the obsolete military doctrines of an imperial invader: in this case the obsolescence of traditional Soviet/Russian tank doctrine in the face of drones, loitering munitions, and infantry-portable ATGMs, not to mention the bizarre failure of military comms to keep up with the smartphone revolution.

The true impact of the cyberwar hasn't become clear yet, but the Rosaviation hack alone—the entire licensing/registration database of Rosaviation, the Russian Civil Aviation Registrar, has been erased, all 65Tb of it, apparently without leaving them with a backup—could be the most expensive hacking attack this century.

And that's before we come to the way the war is amplifying the ongoing energy crisis.

I think the war can best be contextualized as the flailing reaction of an ossifying, increasingly centralized and aggressively authoritarian oil/gas extraction regime to the growing threat of its own irrelevance. While crude Russian nationalism and revanchist empire-building is the obvious superficial cause of the war, the real structural issues underlying it are the failure of Russia to diversify its economy and to establish a modern framework of government that doesn't degrade into Tsarist rule-by-decree: eventually the Tsar loses touch with reality (whether by going nuts or due to being fed misinformation from below) and bad stuff happens. Oil and gas are economic heroin to the exporting countries: only a handful have moved to effectively avoid the withdrawal side-effects (I'm thinking of Norway in particular), and for most withdrawal is disastrous. Russia is particularly vulnerable, and can't afford to let the rest of the world wean itself off fossil carbon abuse. And Ukraine is now paying the price. (It should be noted that Donbass has the second largest gas reserves in Europe: this is economically as much an oil/gas war as was the Iraq war before it.)

Anyway, as Lenin remarked, "There are decades where nothing happens; and there are weeks where decades happen."

We had a couple of decades of Francis Fukuyama's "The End of History" and now we're paying the price in catch-up weeks.

PS: I have chosen to ignore the question of Russian interference in Western politics, and especially Donald Trump and Alexander Boris de Pfeffel Johnson, because this war is not about the west: it's about long-term Russian ethnonationalist revanchism, an attempt to rebuild their Empire. Centering western political concerns is dangerous and misleading and will lead us into error, so don't do that in the comments.

,

Charles StrossYokai Land Q&A

Sorry about the outage: I just spent the past two weeks being a tourist and visiting friends in Germany—my first journey more than 50km from home since January 2020. It was fun, good beer was drunk, old friends were visited, many FFP2 masks were worn, and I'm now testing daily because of course BA.2 arrived while I was traveling. (LFTs are all negative so far ...)

Shipping delays mean that Transreal Fiction didn't get copies of Escape from Yokai Land before I departed, so if you've been wondering where your order got to, I'm going to try and get up there tomorrow to sign them (assuming Ingrams, the wholesaler, have delivered them the day after I went on vacation).

As it's not going paperback (ever) there's no point holding off on spoilers/questions about Escape, so if you want to ask me anything about it, feel free to do so in the comments below.

Please do not colonise the comments with (a) the permanent floating cars v. bicycles discussion, (b) the permanent floating climate change discussion, or (c) the Russian invasion of Ukraine. I'll start a new topic for those things later.

MEAndroid Without Play

A while ago I was given a few reasonably high-end Android phones to give away. I gave two very nice phones to someone who looks after refugees so a couple of refugee families could make video calls to relatives. The third phone is a Huawei Nova 7i [1], which doesn’t have the Google Play Store. The Nova 7i is a ridiculously powerful computer (8G of RAM in a phone!!!) but without the Google Play Store it’s not much use to the average phone user. It has the “Huawei AppGallery”, which isn’t as bad as most of the proprietary app stores of small players in the Android world: it has SnapChat, TikTok, Telegram, Alibaba, WeChat, and Grays auction (an app I didn’t even know existed) along with many others. It also links to ApkPure (apparently a 3rd-party app installer that “obtains” APK files for major commercial apps) for Facebook among others. The ApkPure thing might be Huawei outsourcing the violation of Facebook’s terms of service. For the moment I’ve decided to use only free software on this phone and use my old phone for non-free stuff (Facebook, LinkedIn, etc). The eventual aim is to carry only a phone with free software for normal use, and a second phone when I’m active on LinkedIn or something. My recollection is that when I first got the phone (almost 2 years ago) it didn’t have such a range of apps.

The first thing to install was F-Droid [2] as the app repository. F-Droid has a repository of thousands of free software Android apps, as well as some apps that are slightly less free and are tagged appropriately. You can install the F-Droid app from the web site. As an aside, I had to go to settings and enable “force old index format” to get the list of packages; I don’t know why, as other phones had worked without it.
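
For anyone who hasn’t done it before, here is a minimal sketch of sideloading the F-Droid client from a PC rather than downloading it in the phone’s browser. The F-Droid.apk URL is the project’s standard download location, but the exact steps (and whether you need to allow installs from unknown sources) vary by phone, so treat this as illustrative rather than a record of what I did:

    # Download the F-Droid client and push it over USB with adb.
    # Requires "USB debugging" to be enabled in the developer options.
    wget https://f-droid.org/F-Droid.apk
    adb install F-Droid.apk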

Here are the F-Droid apps I installed:

  • KDE Connect to transfer files to my PC. It has some neat features, including using the PC keyboard on Android. One downside is that there’s no convenient way to kill it; I don’t want it hanging around, I want to transfer a file and then close it down to minimise exposure.
  • K-9 Mail, an Android email app that I’ve used for over a decade now. Previously I installed it from the Play Store, but it’s also available in F-Droid. I used KDE Connect to transfer the exported configuration from my old phone to my PC and then from my PC to my new phone.
  • I’m now using SchildiChat for Matrix as a replacement for Google Hangouts (I previously wrote about how Google is killing Hangouts [3]). One advantage of SchildiChat is that it keeps a notification running 24*7 to reduce the incidence of Android killing it. The process of sending private messages with Matrix seems noticeably slower than Hangouts; while Google will inevitably be faster than a federated system (if only because they buy better hardware than I rent), the difference shouldn’t be enough to notice (my Matrix servers might need some work).
  • I used ffupdater to install Firefox. It can also install other browsers that don’t publish APK files. One of the options is “Ungoogled Chromium”, which I’m not going to use: even though I’ve found Google Chrome to be a great browser, I think I should go all the way in avoiding Google. There’s no description in the app of the differences between the browsers, but the ffupdater web page has information about them [4].
  • I use Tusky for Mastodon, which is a replacement for Twitter. My Mastodon address is @etbe@mastodon.nzoss.nz. Currently Mastodon needs more users; there are plenty of free servers out there, and the New Zealand Open Source Society’s is just one I have contact with.
  • I have used ConnectBot for ssh connections from Android for over 10 years, previously via the Play Store but it’s also in F-Droid. To get the hash of a key from a server in the way ConnectBot displays it, run “ssh-keygen -l -E md5 -f /etc/ssh/ssh_host_ed25519_key.pub”.
  • I initially changed keyboard from MS SwiftKey to the Celia keyboard that came with the phone, but its spelling correction was terrible: it almost never suggested words with apostrophes when appropriate and had no apparent option to disable adult words. I’m now using OpenBoard, a port of the Google Android keyboard, which works well.
  • I’ve just installed “primitive ftpd” for file transfer; it supports the FTP and SFTP protocols and is well written.
  • I’ve installed the mpv video player, which plays FullHD video at high quality using hardware decoding. I don’t need to do that sort of thing (the screen is too small to make FullHD video worthwhile), but it’s nice to have.
  • For barcodes and QR codes I’m using Binary Eye, which seems better than the Play Store app I had used previously.
  • For playing music I’ve tried the Simple Music Player (which is nice for mp3s), but it doesn’t play m4a or webm files. Auxio and Music Player Go play mp3 and m4a but not webm. So far the only programs I’ve found that can play webm are VLC and mpv, so I’m trying out VLC as a music player; it basically works, but a program with the same audio features and no menu options about video would be better. Webm is important to me because I have some music videos downloaded from YouTube, and webm allows me to put a binary copy of the audio data into an audio file (see the sketch after this list).
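
As a footnote to the webm item above, that audio extraction can be done on a PC with ffmpeg. This is just an illustrative one-liner with made-up file names, assuming ffmpeg is installed and the player can handle the Opus/Vorbis audio that webm files usually contain:

    # Copy the audio stream out of a downloaded webm into an audio-only
    # file without re-encoding (-vn drops the video, -c:a copy keeps the
    # original audio bitstream).
    ffmpeg -i music-video.webm -vn -c:a copy music-audio.webm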

Future Plans

The current main things I’m missing are a calendar, a contact list, and a shared note-taking system (like Google Keep). For calendaring and contacts the CalDAV and CardDAV protocols seem best, and the most common implementation on the server side appears to be DAViCal [5]. The Nextcloud system supports CalDAV, CardDAV, and web editing of notes and documents (including LibreOffice if you install that plugin) [6]. But it is huge, it demands write access to all its own code (bad for security), and it’s not packaged for Debian. Also, in my tests it gave me a 401 error when I tried to authenticate to it from the Android Nextcloud client. I’ve seen a positive review of Radicale, a simple CalDAV and CardDAV server that doesn’t need a database [7]. I prefer the Unix philosophy of keeping things simple with file storage unless there’s a real need for anything else, and I don’t think that anything I ever do with calendaring will require the PostgreSQL database that DAViCal uses.

I’ll give Radicale a go for CalDAV and CardDAV, but I still need something for shared notes (shopping lists etc). Suggestions welcome.
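
To give an idea of what that would involve, here is a rough single-user sketch based on the Radicale documentation; the paths, port, user name, and plain-text password are placeholders for illustration rather than the setup that will actually be used here:

    # Minimal sketch: install Radicale with one htpasswd user (paths and
    # names are placeholders; option names follow the Radicale docs).
    pip3 install --user radicale
    mkdir -p ~/.config/radicale
    printf '%s\n' \
      '[server]' \
      'hosts = 0.0.0.0:5232' \
      '' \
      '[auth]' \
      'type = htpasswd' \
      'htpasswd_filename = ~/.config/radicale/users' \
      'htpasswd_encryption = plain' \
      '' \
      '[storage]' \
      'filesystem_folder = ~/.var/lib/radicale/collections' \
      > ~/.config/radicale/config
    # Demo credentials only; for real use create a bcrypt entry with the
    # htpasswd tool instead of a plain-text password.
    echo 'russell:changeme' > ~/.config/radicale/users
    python3 -m radicale --config ~/.config/radicale/config

A CalDAV/CardDAV client on the phone would then point at port 5232 on the server, with calendars and address books stored as plain files, which fits the keep-it-simple file storage approach mentioned above.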

Current Status

Lack of a contacts list is a major loss of functionality in a phone. I could store contacts in the phone memory or on the SIM, but I would still have to get all my old contacts in there, and getting something half working reduces the motivation for getting it working properly. Lack of a calendar is also a problem; again, I could work around that by exporting all my Google calendars as iCal URLs, but I’d rather get it working correctly.

The lack of shared notes may be a harder problem to solve, given the failure of Nextcloud. For that I would consider just having the keep.google.com web site always open in Mozilla, at least in the short term.

At the moment I require two phones: my new Android phone without Google, and the old one for my contacts list etc. Hopefully in a week or so I’ll have my new phone doing contacts, calendaring, and notes. Then my old phone will just be for proprietary apps, which I don’t need most of the time, and I can leave it at home when I don’t need that sort of thing.

,

Krebs on SecurityConti’s Ransomware Toll on the Healthcare Industry

Conti — one of the most ruthless and successful Russian ransomware groups — publicly declared during the height of the COVID-19 pandemic that it would refrain from targeting healthcare providers. But new information confirms this pledge was always a lie, and that Conti has launched more than 200 attacks against hospitals and other healthcare facilities since first surfacing in 2018 under its earlier name, “Ryuk.”

On April 13, Microsoft said it executed a legal sneak attack against Zloader, a remote access trojan and malware platform that multiple ransomware groups have used to deploy their malware inside victim networks. More specifically, Microsoft obtained a court order that allowed it to seize 65 domain names that were used to maintain the Zloader botnet.

Microsoft’s civil lawsuit against Zloader names seven “John Does,” essentially seeking information to identify cybercriminals who used Zloader to conduct ransomware attacks. As the company’s complaint notes, some of these John Does were associated with lesser ransomware collectives such as Egregor and Netfilim.

But according to Microsoft and an advisory from the U.S. Cybersecurity & Infrastructure Security Agency (CISA), Zloader had a special relationship with Ryuk/Conti, acting as a preferred distribution platform for deploying Ryuk/Conti ransomware.

Several parties backed Microsoft in its legal efforts against Zloader by filing supporting declarations, including Errol Weiss, a former penetration tester for the U.S. National Security Agency (NSA). Weiss now serves as the chief security officer of the Health Information Sharing & Analysis Center (H-ISAC), an industry group that shares information about cyberattacks against healthcare providers.

Weiss said ransomware attacks from Ryuk/Conti have impacted hundreds of healthcare facilities across the United States, including facilities located in 192 cities and 41 states and the District of Columbia.

“The attacks resulted in the temporary or permanent loss of IT systems that support many of the provider delivery functions in modern hospitals resulting in cancelled surgeries and delayed medical care,” Weiss said in a declaration (PDF) with the U.S. District Court for the Northern District of Georgia.

“Hospitals reported revenue losses due to Ryuk infections of nearly $100 million from data I obtained through interviews with hospital staff, public statements, and media articles,” Weiss wrote. “The Ryuk attacks also caused an estimated $500 million in costs to respond to the attacks – costs that include ransomware payments, digital forensic services, security improvements and upgrading impacted systems plus other expenses.”

The figures cited by Weiss appear highly conservative. A single attack by Ryuk/Conti in May 2021 against Ireland’s Health Service Executive, which operates the country’s public health system, resulted in massive disruptions to healthcare in Ireland. In June 2021, the HSE’s director general said the recovery costs for that attack were likely to exceed USD $600 million.

Conti ravaged the healthcare sector throughout 2020, and leaked internal chats from the Conti ransomware group show the gang had access to more than 400 healthcare facilities in the U.S. alone by October 2020.

On Oct. 28, 2020, KrebsOnSecurity broke the news that FBI and DHS officials had seen reliable intelligence indicating the group planned to ransom many of these care facilities simultaneously. Hours after that October 2020 piece ran, I heard from a respected H-ISAC security professional who questioned whether it was worth getting the public so riled up. The story had been updated multiple times throughout the day, and there were at least five healthcare organizations hit with ransomware within the span of 24 hours.

“I guess it would help if I understood what the baseline is, like how many healthcare organizations get hit with ransomware on average in one week?” I asked the source.

“It’s more like one a day,” the source confided.

A report in February 2022 from Sophos found Conti orchestrated a cyberattack against a Canadian healthcare provider in late 2021. Security software firm Emsisoft found that at least 68 healthcare providers suffered ransomware attacks last year.

While Conti is just one of many ransomware groups threatening the healthcare industry, it seems likely that ransomware attacks on the healthcare sector are underreported. Perhaps this is because a large percentage of victims are paying a ransom demand to keep their data (and news of their breach) confidential. A survey published in February by email security provider Proofpoint found almost 60 percent of victims hit by ransomware paid their extortionists.

Or perhaps it’s because many crime groups have shifted focus away from deploying ransomware and toward stealing data and demanding payment not to publish the information. Conti shames victims who refuse to pay a ransom by posting their internal data on their darkweb blog.

Since the beginning of 2022, Conti has claimed responsibility for hacking a cancer testing lab, a medical prescription service online, a biomedical testing facility, a pharmaceutical company, and a spinal surgery center.

The Healthcare Information and Management Systems Society recently released its 2021 HIMSS Healthcare Cybersecurity Survey (PDF), which interviewed 167 healthcare cybersecurity professionals and found 67 percent had experienced a “significant security incident” in the past year.

The survey also found that just six percent or less of respondents’ information technology budgets were devoted to cybersecurity, although roughly 60 percent of respondents said their cybersecurity budgets would increase in 2022. Last year, just 79 percent of respondents said they’d fully implemented antivirus or other anti-malware systems; only 43 percent reported they’d fully implemented intrusion detection and prevention technologies.

The FBI says Conti typically gains access to victim networks through weaponized malicious email links, attachments, or stolen Remote Desktop Protocol (RDP) credentials, and that it weaponizes Microsoft Office documents with embedded PowerShell scripts — initially staging Cobalt Strike via the Office documents and then dropping Emotet onto the network — giving them the ability to deploy ransomware. The FBI said Conti has been observed inside victim networks between four days and three weeks on average before deploying Conti ransomware.

,

David BrinAnticipating changes for the next few decades... and weeks

“The Nobel Prize-winning physicist Ilya Prigogine was fond of saying that the future is not so much determined by what we do in the present as our image of the future determines what we do today.” So begins the latest missive of Noema Magazine.


The Near Future: The Pew Research Center’s annual Big Challenges Report top-features my musings on energy, local production/autonomy, transparency etc., along with other top seers, like the estimable Esther Dyson, Jamais Cascio, Amy Webb and Abigail deKosnick and many others. 


In this report, "Experts say the 'New Normal' in 2025 will be far more tech-driven, presenting more challenges," these pundits argue that changes resulting from disruptions from the pandemic are likely to worsen economic inequality, enhance the power of big tech firms, and multiply the spread of misinformation.

They also argue that changes have the potential to bring about new reforms aimed at ensuring greater social and racial equality and that tech advances have the power to enhance the quality of life for many.

Among the points I raise:

  • Advances in cost-effectiveness of sustainable energy supplies will be augmented by better storage systems. This will both reduce reliance on fossil fuels and allow cities and homes to be more autonomous.
  • Urban farming methods may move to industrial scale, allowing similar moves toward local autonomy (perhaps requiring a full decade or more to show significant impact). Meat use will decline for several reasons, ensuring some degree of food security, as well.
  • Local, small-scale, on-demand manufacturing may start to show effects in 2025. If all of the above take hold, there will be surplus oceanic shipping capacity across the planet. Some of it may be applied to ameliorate (not solve) acute water shortages. Innovative uses of such vessels may range all the way to those depicted in my novel ‘Earth.’
  • Full-scale diagnostic evaluations of diet, genes and microbiome will result in micro-biotic therapies and treatments. AI appraisals of other diagnostics will both advance detection of problems and become distributed to handheld devices cheaply available to all, even poor clinics.
  • Handheld devices will start to carry detection technologies that can appraise across the spectrum, allowing NGOs and even private parties to detect and report environmental problems.
  • Socially, this extension of citizen vision will go beyond the current trend of assigning accountability to police and other authorities. Despotisms will be empowered, as predicted in George Orwell's ‘Nineteen Eighty-Four.’ But democracies will also be empowered, as in my nonfiction book, ‘The Transparent Society.’
  • I give odds that tsunamis of revelation will crack the shields protecting many elites from disclosure of past and present torts and turpitudes. The Panama Papers and Epstein cases exhibit how fear propels many elites to combine efforts at repression. But only a few more cracks may cause the dike to collapse, revealing networks of blackmail. This is only partly technologically driven and hence is not guaranteed. If it does happen, there will be dangerous spasms by all sorts of elites, desperate to either retain status or evade consequences. (I wrote that before the panic-frenzy we are seeing by Vladimir Putin, whose best option is to spill the entire KGB file of blackmail he holds over western elites.) But if the fever runs its course, the more transparent world will be cleaner and better run.
  • Some of those elites have grown aware of the power of ninety years of Hollywood propaganda for individualism, criticism, diversity, suspicion of authority and appreciation of eccentricity. Counter-propaganda pushing older, more traditional approaches to authority and conformity are already emerging, and they have the advantage of resonating with ancient human fears. Much will depend upon this meme war.

Of course, much will also depend upon short-term resolution of current crises. If our systems remain undermined and sabotaged by incited civil strife and distrust of expertise, then all bets are off. 


== The pertinence (again) of transparency ==

When they hear the "T-word" so many dive into fretting about the spread of ‘surveillance technologies that will empower Big Brother.’ These fears are well-grounded, but also utterly myopic. I recall what Ulysses Grant said to Union generals who were in a froth over Robert E. Lee's next moves. 

Paraphrasing Grant: "Stop worrying over how despots will use light against us, and start talking about how to use light against despotism!"

First, ubiquitous cameras and facial recognition are only the beginning. Nothing will stop them, and any such thought of ‘protecting’ citizens from being seen by elites (e.g. billionaires or the police) is stunningly absurd, as the cameras get smaller, better, faster, cheaper, more mobile and vastly more numerous every month. Moore’s Law to the nth degree.

Yes, despotisms will benefit from this trend. And hence, the only thing that matters is to prevent despotism altogether. And only one thing ever did that!

In contrast, a free society will be able to apply the very same burgeoning technologies toward accountability. We are seeing them applied to end centuries of abuse by ‘bad-apple’ police who are thugs, while empowering the truly professional cops to do their jobs better.  

Gandhi and Martin Luther King Jr. were saved by crude technologies of light in their days. And history shows that assertive vision by and for the citizenry is the only method that has ever increased freedom and – yes – some degree of privacy.




And finally...


A new type of digital asset - known as a non-fungible token (NFT) - has exploded in popularity during the pandemic as enthusiasts and investors scramble to spend enormous sums of money on items that only exist online. “Blockchain technology allows the items to be publicly authenticated as one-of-a-kind, unlike traditional online objects which can be endlessly reproduced.” …


“In October 2020, Miami-based art collector Pablo Rodriguez-Fraile spent almost $67,000 on a 10-second video artwork that he could have watched for free online. Last week, he sold it for $6.6 million. The video by digital artist Beeple, whose real name is Mike Winkelmann, was authenticated by blockchain, which serves as a digital signature to certify who owns it and that it is the original work.”



The post-covid luxury spending boom has begun. It’s already reshaping the economy.


A sealed copy of Super Mario 64 sold for $1.56M in a record-breaking auction. That record didn’t last long: in August 2021, a rare copy of Super Mario Bros. sold for $2 million, the most ever paid for a video game. Until the next time...


On the other hand, an image of the world's first tweet that once went for $2 million recently resold for $245,000. That's volatility! What NFTs fundamentally prove (along with what we've seen with election interference, Russian oligarch yachts and the stoopid-oligarch subsidies of Fox 'News') is that the rich simply have too much money. Period.


And when that happens, as Adam Smith himself said, the first thing destroyed is flat-fair-creative-competitive enterprise.


====

====

====

Addendum on Ukraine:


Real time we are awed by several things. 

By the doughty endurance, courage and ingenuity of the Ukrainian people. 


By yet another example of the topmost lesson from 6000 years of history, that despotism leads to psychotic leader-delusion... in this case endangering all our lives as Ras*-Putin plummets into full panic mode.


That Russians have a long way to go, before they become capable of seeing through the Strongman Hallucination, a version of which also captivates a very large minority of (confederate) Americans. 


And much else. But right now this armchair-general wants to conclude with a couple of amateur military observations, in the wake of Russia's setbacks in the north and at sea:


First: "Ukraine says between 2,500 to 3,000 of its troops have been killed, compared to Russia's 19,000."


Even if the disparity is half as great, it relates to the RF's worst problem. Very soon their troops will be outnumbered in the field. True, most of the new Ukrainian units are recently trained infantry battalions. But they are highly motivated volunteers in truly vast numbers, while the RF has not even dared to call up reservists, yet. Astonishingly, the much larger and more militaristic invader may be outnumbered soon at the front.
The RF retains a huge advantage in mechanized units and artillery. They may yet use them with great effectiveness. But of late a flaw in standard RF Battalion Battle Groups has become clear: very few mobile infantry patrol the flanks to protect the tanks from lurking infantry groups armed with MANPADS.

Also, large numbers of infantry companies may lurk behind any major blitz-thrust, ready to do partisan tactics. Do not draw hasty conclusions from such thrusts.

Much depends on the weather. If things remain soggy, either the Uks will get time to emplace new units... or RF must rush to use roads and solid ground in narrow channels. There will be a lot of artillery, so dig in, Ukrainians.


=====
* "Ras" means 'prince" in Amharic/Ethiopian and is a root word for the Jamaican Rastafarianism. In this case it doubles as a comment on Putin's self-image and his similarity to a past figure he has emulated and should, all the way. Dance your way through this educational song!




,

David BrinTransparency as the key ingredient to saving the enlightenment experiment: recent examples!

"The best weapon of a dictatorship is secrecy, but the best weapon of a democracy should be the weapon of openness."

                                - Niels Bohr 

Through my nonfiction book The Transparent Society, I wound up playing a niche role in our crucial ongoing debates over freedom, privacy and the Information Age. It's an odd niche - speaking up for the cleansing and liberating power of light in our fragile Enlightenment Experiment - but alas, with a few exceptions, this niche was and remains almost completely unoccupied. Even the great paladins of freedom at ACLU and the Electronic Frontier Foundation and well-meaning 'privacy commissions' in Europe prove clueless when it comes to fundamentals. Like:

 Transparency is not only the one effective way to defeat cheating and despotism by elites... it is also the only ultimate way to stymie bullying and loss of privacy among 8 billion human beings.

How is this principle so hard to grasp? Reciprocal Criticism Is The Only Known Antidote to Error - the most basic underpinning of everything we know and cherish from liberty and tolerance to competitive-creative arenas like markets, democracy, science, courts and sports. In fact every 'positive sum' system that we have relies upon it! Yet, that simple fact appears to be conceptually so counter-intuitive that it is almost impossible to explain, after 25 years.

Alas, I've learned that whining about it won't be persuasive. So, let's switch to recent examples from the news.

== Encouraging news... though it will take a lot more than this ==

First, a victory for our hope of human survival and justice: “A massive leak from one of the world’s biggest private banks, Credit Suisse, has exposed the hidden wealth of clients involved in torture, drug trafficking, money laundering, corruption and other serious crimes.”


These things keep happening as I predicted in EARTH (1989) - that ever-more crimes and cheating would be revealed by whistle blowers… 


…and that it will never be enough to truly shred (with light) the worldwide networks of cheaters. Indeed, that danger to them is likely one reason the cheater-mafias all seem united now, in desperate moves to quash democracy and rule-of-law. And boy are they desperate, it seems!


Then there are the mega yachts. OMG the Russian oligarch mega and giga-yachts being seized almost daily amid the ructions of war in Ukraine. The amount of former Soviet state wealth that supposedly belonged to the Russian People, that was expropriated (stolen) by many of the commissars who had been mere managers under the USSR, who spent their formative years reciting Leninist-egalitarian-socialist catechisms five times every day... 


For every spill like these, there are likely ten that the oligarchs just barely manage to quash, just in time, “phew,” through murder, blackmail, bribery etc. Like the Epstein Files, or the Deutsche Bank records, or David Pecker’s safe… or a myriad other potentially lethal-to-aristocracy revelations that explain why the distilled chant every night on Fox amounts to: “Don’t look! No one should look at us!”

And yes, the one thing that Joe Biden could do, to smash this worldwide mafia putsch, would be to appoint a truth commission to recommend clemency for blackmail victims who come forward!

== Others weirdly calling for transparency ==

One prominent, dour Jonah-of-Doom, Nick Bostrom, appeals for salvation-via-light in apocalyptic terms in his latest missive about existential threats.


“How vulnerable is the world? - Sooner or later a technology capable of wiping out human civilisation might be invented. How far would we go to stop it?”   


His only solution: utter transparency to a degree I never recommended, via a total panopticon in which all potential extinction devices are discovered before they can be deployed! Because all is seen by all, all the time. A bit preferable to Orwell's top-down despotic surveillance. But under such a simplistic version of transparency, yes, privacy is extinct. So will be most forms of non-conformity.


(If I just sounded critical, let me add that I agree with him about most things! Except the pessimism part… oh and the incessant implication that “I invented all of these ideas!!”)


Yes, there really is only one path out of these messes - through the cleansing power of light. 


And yet, I am convinced it does not have to go full-panopticon! Not if a few social trends continue, as I have described elsewhere. Still at least Bostrom points in the right general direction.


 And the fact that so many elites reflexively oppose it means that they are far, far less-sapient than their hired sycophants flatter them into believing. 


== NOW can Johnny code? ==


Re: my 'classic' article "Why Johnny Can't Code", here's a 10th anniversary video look back at the Raspberry Pi by its creator, who nicely describes the BASIC PC era, similarly to my essay (but British) - exactly nailing the watershed when learning to code devolved into sifting eye candy. 


And now the top tech companies seem to be conspiring deliberately to keep kids lobotomized from digging in the guts of programming. Why on Earth would they perfectly act together in such a way that ruins their own seed corn supply of bright programmers? 


I used to ask that question a lot in speeches at Sili Valley corporations. Their response? To solve (very cheaply) the problem?


Naw, I am just invited to speak there less.

 

== Notes of hope? ==


New polls show that Facial Recognition is supported by a majority of Americans: “Zogby’s polling found that three-in-four residents in Massachusetts and Virginia see law enforcement use of facial recognition as appropriate and beneficial. A large majority of residents of both states supported its use for finding missing children, prosecuting sex offenders and traffickers, finding endangered adults, investigating criminal activity, apprehending and prosecuting violent offenders and drug traffickers, and identifying individuals on a terrorist watchlist at public events.”


And now, another data leak: Leakage of 1.2 Terabytes of footage taken by Dallas area police helicopters stirs privacy concerns.


Surprised? Well I reiterate. The solution is not - not! - to try - in utter futility - to ban such tech. 


Seriously? Name one time when that worked? Or when elites ever let themselves be blinded?

Robert Heinlein said the chief thing accomplished by such bans is to "make the spy bugs smaller." 


Think. 

The flaws in facial recognition (like racial bias) that folks complained about were FOUND and criticized and incrementally corrected precisely because the systems were visible to critics, not driven into dark shadows.


Criminy, why is the obvious so counter-intuitive? Within five years Facial Recognition will be a phone app that you take for granted. So why choose such a technologically doomed hill to die upon? Pick your battles!


 We must fight against Orwellian dystopias in the only way that ever worked, by increasing flows of light, especially upon the mighty. Looking back at power. 


Stripping the mighty naked and telling them to get used to it.


== Self-promotion or just worthwhile? ==


A Parable About Openness: Some think that the first part of this posting (an excerpt from The Transparent Society) is among my best writing. A little fable about the ongoing battle for enlightenment.


== And Finally ==


An amazing anti-jaywalking PSA that’s both entertaining and shock-effective.


Also... XKCD almost perfectly captured a fact about so-called “UFO” so-called “sightings” that I’ve been making for 40 years. There are about a MILLION-x as many active cameras on Planet Earth than there were in the 1950s. Those poor alien teaser guys have to work harder every single year to keep their ships fuzzy! https://xkcd.com/2572/ 


(Aside: if you believe these UAP 'tictacs' are super-duper alien 'ships,' maybe you should look at this.)


Also a really good XKCD about texts you don’t want to see: https://xkcd.com/2544/

And another good one about viruses: https://xkcd.com/2535/

And poignant, about drones: https://xkcd.com/2499/

And some perspective: https://xkcd.com/2481/


Finally... one of the best capsule summaries of the ten top logical fallacies... though like Mel Brooks I think there oughta be 15!  (No provenance, alas, sorry.)



Sam VargheseIs Scott Morrison really a Christian?

Politicians normally try to keep their private lives separate from their public personas. And the media generally respect this separation, unless any probing can be justified as being in the public interest.

But some politicians purposely ventilate aspects of their private lives when they feel that it will help them in their jobs.

Scott Morrison: pretending to be what he is not. Courtesy Channel 10

And that is the case with the Australian Prime Minister Scott Morrison who has, right from day one, broadcast the idea that he is a Christian, claiming that this is what drives him.

Morrison attends a Pentecostal church and has even gone to the extent of inviting media photographers into his church so they can photograph him while he is attending a service.

But after three years of his rule, in what has at times become a circus, one needs to ask: despite all his contortions, is Morrison really a Christian? Is he all talk and no action? Is he a windbag who blows hot and cold, doing one thing when it suits and another when it does not?

To examine whether he is a Christian in deed as well as word, one needs just a single source: the Bible, and, more specifically, the New Testament.

When Jesus delivered what is known as the Sermon on the Mount, He went into great detail to explain what people ought to be doing if they wanted to rank as His followers. The sermon can be found in the gospel of Matthew, chapters 5 to 7.

One of the first things Jesus said was: “Blessed are the peacemakers, for they shall be called sons of God.” Morrison does not fit into this category as he is always trying to create strife.

“Blessed are the merciful, for they shall receive mercy” does not fit Morrison either for he is never charitable to his enemies, leave alone his friends.

Neither does “Blessed are the meek, for they shall inherit the earth” describe Morrison who is one of the most aggressive public figures in Australia, always seeking to shout someone else down and throw mud at them.

Morrison lies as a matter of default, to the extent that one Australian publication, Crikey, has even published a book about all the falsehoods he has told. Given his character, Morrison would definitely have tried to sue Crikey if this was contestable, and the fact that he hasn’t shows that he has no defence.

Verse 11 says: “Blessed are you when others revile you and persecute you and utter all kinds of evil against you falsely on my account. Rejoice and be glad, for your reward is great in heaven, for so they persecuted the prophets who were before you.” Doesn’t fit Morrison again.

Jesus says: “You have heard that it was said to those of old, ‘You shall not murder; and whoever murders will be liable to judgment.’ But I say to you that everyone who is angry with his brother will be liable to judgment; whoever insults his brother will be liable to the council; and whoever says, ‘You fool!’ will be liable to the hell of fire.” Morrison stands condemned on most counts therein; he ladles out insults as though they are going out of stock and abuses everyone in sight whenever he can.

“Come to terms quickly with your accuser while you are going with him to court, lest your accuser hand you over to the judge, and the judge to the guard, and you be put in prison” is another admonition. Morrison does the exact opposite, accusing all and sundry of a number of things, without any basis most of the time.

Jesus warned people against swearing; Morrison probably never heard of this advice. When it comes to retaliation, the PM stands on the wrong side every time.

Jesus’ words were: “You have heard that it was said, ‘An eye for an eye and a tooth for a tooth.’ But I say to you, Do not resist the one who is evil. But if anyone slaps you on the right cheek, turn to him the other also. And if anyone would sue you and take your tunic, let him have your cloak as well. And if anyone forces you to go one mile, go with him two miles. Give to the one who begs from you, and do not refuse the one who would borrow from you.”

Morrison tries his level best to belittle others and act against the weaker sections in the community. No better example is available than his keeping a Tamil family of four, including two little girls, in limbo and away from their home in Queensland for many years. He could settle the issue in a matter of minutes and in behaving in this manner, he is being anything but charitable.

Morrison has contempt for women and does not shrink from showing this in public. He turns his back on women when they are speaking in Parliament and treats them as though they are weak in the head.

The gospel says: “Beware of practising your righteousness before other people in order to be seen by them, for then you will have no reward from your Father who is in heaven”, but Morrison makes a song and dance about even the smallest act of generosity. He lives for the media, putting on almost a circus act for them every day.

On the same lines, Jesus said, “when you give to the needy, do not let your left hand know what your right hand is doing, so that your giving may be in secret. And your Father who sees in secret will reward you.” But Morrison beats the drums and blows the trumpets when he does anything to ensure that all sections of the media know what he is doing.

There’s a lot more, but even by this stage, it is apparent that Morrison is the very antithesis of what a Christian should be. He is a hypocrite, a wolf in sheep’s clothing, someone who pretends to be what he is not.

For people such as him, Jesus had this warning: “No one can serve two masters, for either he will hate the one and love the other, or he will be devoted to the one and despise the other. You cannot serve God and money.”

,

David BrinMore science please!

It's a busy season. Spring is springing with hot ferocity and I am back to giving speeches and consults about The Future, sometimes boarding airplanes and others via video. Including a coming meeting of NASA's Innovative & Advanced Concepts program (NIAC).

Which brings us around to science. Yes, amid ructions of war and insurrection and treason against an Enlightenment Experiment that gave us just about everything we have... it's good to pause, now and then, and realize just how fantastic and wonderful this experiment in reason and equality and reasonableness and acceptance of facts has been to us.

YOU are a member of the first civilization that ever did any of the stuff you are about to read about, below. If you quiver with failure of confidence -- or feel tempted by yammerers of gloom -- remember that's what enemies of the Enlightenment want. Snap out of it!

We're wonderful. And we can solve this. Read examples of just how wonderful we are. Starting now.

==A Brighter Future ==


For starters, some fun. As a reminder that the future might… maybe… be better, see again the wonderful Arconic advert using The Jetsons.


And that optimistic future is possible, ironically more than ever in these dolorous and insipidly pessimistic times, as solutions keep rising…. if only we can gather the collective and individual will to use them. As described in Peter Diamandis’s book Abundance: The Future is Better Than You Think. Take this very recent example from Peter’s newsletter:


“This past year, the world’s biggest jeweler Pandora announced it will cease to sell all mined diamonds (which are scarce and fraught with environmental and human rights abuses), and switch exclusively to selling lab-made diamonds, which can be abundant and low cost—produced from water, methane, and electricity.” 


There are few oligarchic-monopolistic conspiracies more evil than the DeBeers diamond cartel. So this is potentially great news, brought to you by advancing technology.


Though Peter starts with the story of aluminum.

== Capsule Updates in Physics ==

From Scientific American: "In a first, scientists have measured the curvature of spacetime," revealing subtle changes in gravity's strength. 


Until now, reading the electric field of light has been a challenge because of the high speeds at which light waves oscillate. Current methods can clock electric fields at up to gigahertz frequencies—radio frequency and microwave spectra. A Florida research team has developed the world's first optical oscilloscope: "Our optical oscilloscope may be able to increase that speed by a factor of about 10,000." Most earlier methods relied upon interference of waves. Could this be important? Are you kidding?


An amazing helium airship that alternates life as dirigible or water ship. Alas, it is missing some important aspects I could explain… As shown in EXISTENCE.


And the latest “flying car” is actually a road-ready flying car. And yes, I have LONG predicted 2024 as the Year of the Flying Car. Well, for the rich. Along certain licensed limo routes. But yeah.


== Paleontology and Archaeology ==


The dinosaurs' last season: Apparently we can now tell pretty precisely when the dinosaur-killing comet or asteroid struck. Not the exact year, but pretty much the exact month! “And it’s looking like life on Earth had a really, really bad June.”


And...? Wonderful finds at Tanis, in the Dakotas, show many creatures exceptionally well-preserved who seem to have died suddenly the very day that asteroid ended the era of the dinosaurs. I look forward to the show! Dinosaurs: The Final Day with Sir David Attenborough will be broadcast on BBC One on 15 April at 18:30 BST. A version has been made for the US science series Nova on the PBS network to be broadcast later in the year.


The oldest human family tree – almost 6000 years old – has been reconstructed from genetic analysis of the occupants of a cairn-tomb in England. Researchers discovered that most of those buried in the tomb were descended from four women who had children with the same man. And so, likely, are many of us.


A recent update on the fascinating Antikythera device.


Helping historians fill in the blanks: DeepMind's new AI helps restore damaged ancient texts and inscriptions, whether written on papyrus, stone, or pottery.


Interesting science on the genes of ancient Egyptians; it will not be welcome in some quarters. They found that the ancient Egyptians were most closely related to the peoples of the Near East, particularly from the Levant.


And related to archaeology… alas… Japan is scrapping their model 700 Shinkansen Bullet Trains that blow away all passenger trains in our entire country! They’re getting scrapped at Hakata Minami depot and replaced by newer N700A and the latest N700S models. And yes, one can imagine the blowback of ‘national shame’(!) if the U.S. or just California were to purchase the old ones, with some refurbishment!

But here are answers! (1) The deal could include licenses to produce newer models here. But more important: (2) “By installing these proved and utterly safe trains now, we can invest heavily in leapfrogging to the next level. And you need an existing ‘frog’ to leap over!” And finally: (3) Once these are running, millions will say “Oh! I didn’t get it, before. Now I can’t imagine life without these trains.” 

== Business also matters… ==


Steve Jobs said Xerox could have owned the computer industry. A great man (modest, too) and a great loss. But Xerox was not the worst example, nor Boeing nor Kodak. Want the biggest fail of all? In 1993 - the year that the Mosaic web browser came out and a year before the Bezos family started selling books online - SEARS shut down their established, 150-year-old mail order catalogue. Yes, that very year. I am sure at some meeting some guy mentioned this Internet thing and was laughed out of the room… even though at the time Sears was a 50% owner of Prodigy, with IBM!


By now Sears would have owned the world. 


Is there a solution to this kind of myopia, that has shrunk the "ROI horizon" of most businesses from 7 years to 3 months, amid a tsunami of criminal 'stock buybacks' that the Greatest Generation wisely banned?


Easy peasy solution. Every undergrad and most grad business majors should be shut down at once. Then allow MBA programs only for those who can show they spent 5 years making a product or delivering a service. And force a harshly winnowed Boeing corporate staff to move back to Seattle.


This is an amazing five-minute video with Steve Jobs--from 1995. The power of visionaries (and competition and innovation and empathy). Read the excellent biography of Steve Jobs by Walter Isaacson. Amazing how he was a very modest fellow.



== Reflections ==


Noted futurist John Smart has finally finished Book 1 of his two book series, The Foresight Guide, which shows how – philosophically and effectively – one can argue for a universe that propels sapient civilizations toward development and light. A passionately intelligent argument for optimism.


A terrific video about the Antikythera device. Alas, like every other discussion of ancient wonders and technologies, not one person discussed the real question... which is why such marvelous methods were lost! 


You know my answer. Secrecy.


There’s a mythology that all the tricks of construction used in the Parthenon were cleverly calculated de novo, when in fact it benefitted from lessons learned across 300 years of lesser works. Likewise, the Antikythera machine blatantly was not the first. There had to have been at least a century of buildup and trials and we never heard of it because of … you know.


== H. G. Wells on… E.T. ==


“We can conceive vaguely of silicon playing the part of carbon, sulphur taking on the role of oxygen, and so forth, in compounds which, at a different tempo under pressures and temperatures beyond our earthly ken, may sustain processes of movement and metabolism with the accompaniment of some sort of consciousness, and even of individuation and reproduction. We can play with such ideas and evoke if we like a Gamma life, a Delta life, and so on through the whole Greek alphabet. 


"We can guess indeed at subconscious and superconscious aspects to every material phenomenon. But all such exercises strain the meaning of the word life towards the breaking-point, and we glance at them only to explain that here we restrict our use of the word life to its common everyday significance of the individualized, reproductive, spontaneously, stirring and metabolic beings about us.”


So said the mighty co-inventor of science fiction, almost a century ago. A fascinating read, as he spoke of the ‘likelihood’ of oceans on Venus and canals on Mars. Yet, he also quotes a noted astronomer’s opinion that star-warmed ocean-bearing worlds must be rare in the cosmos.


Well, at the time, James Jeans and others deemed planets to be rare, making the chances for life limited. (That is the core premise behind E.E. Doc Smith's Lensmen saga.) We now know better. Still, as Wells put it…


“It is limited as yet, but it is still premature of us to define its final limitations. It seems that life must once have begun, but no properly informed man can say with absolute conviction that it will ever end.”


Excerpted from: The Science of Life, H.G. Wells, Julian S. Huxley, G. P. Wells, 1929



== A contest re the Fermi Paradox! ==


Finally....  So how many various ‘answers’ to the Question of the Great Silence – or Fermi Paradox – have appeared in music? And sure, in film?


One of you (Talin) suggested a contest  for folks to chime in – under comments. 


Examples:


- David Bowie has one ("He'd like to come and meet us, but he thinks he'd blow our minds")


- The Carpenters ("Calling occupants of interplanetary craft")


- Jefferson Starship's fun and inventive... but ultimately very nasty and churlish... song “Hijack.”


- Hank Green: "Fermi Paradox."


Have fun in our lively comment community, below, taking a break from international madness.


,

Charles StrossBad news day

Russia invades Ukraine; need I say any more?

Well, yes: Vladimir Putin is 69 and rumours last year suggested he'd been diagnosed with multiple sclerosis. He's been the Russian Federation's Prime Minister or President for 23 years, and high office combined with executive power tends to drive office-holders completely out of touch with external reality in about a decade.

I'm also going to note that Putin's politics seems to echo a bunch of ethnonationalist tropes from Aleksandr Dugin, a deeply dangerous ideologue who drank the Kool Aid Julius Evola was passing around. (Esoteric fascist neoreactionary philosopher.)

He hates the LGBTQ+ community; UK news outlet Pink News reports Russia is plotting to kill LGBT+ Ukrainians after the invasion (according to an unnamed US source, so treat with caution—might be disinformation).

Oh, and the Russian stock market and BitCoin both fell off a cliff (BTC is down nearly 10% in the past 24 hours). Some "store of value", huh? (Gold is heading for the stratosphere, as usual in time of war ...)

Anyway, over to you for discussion, with one ground rule: do not report on current Ukrainian troop or defensive positions or anything else that might get people killed, otherwise you will get an immediate red card (permaban).

Sam VargheseHazel and Harry are gone, but their memory lives on

Fifteen years is a long time in any human’s existence. It’s even longer in the case of a dog. Last Monday, the family and I had to bid goodbye to a four-legged friend who had been with the family since 2007, and the wound is still very raw.

Harry: gone, but not forgotten.

The decision to put Harry to sleep was a painful one, but he had come to the stage where he could not control his bodily functions. In human terms, he was almost 80, an age which many humans live beyond nowadays, but still very old for a little dog. He had arthritis in his rear legs and found it very painful to walk outside.

The merciful thing to do was to put him to sleep. Fortunately, there are easy and painless methods to effect such a thing. But it does not make the loss any easier to bear.

Harry was given to my wife and me by a close friend who was recovering from a heart attack and finding the management of two dogs a bit too much. It was 2007 and he was one year old at the time, but house-trained.

We initially tried to keep him in a kennel in the garden, but his plaintive yelps ensured that he would end up inside the house. That wasn’t the extent of it; Harry finally ended up sleeping in someone’s bed. He was a nice pet to have around, and given that my wife is crazy about dogs, he had a very nice life indeed. The children took a while to get adjusted to him, but once they did, he was treated like a maharaja.

Two years later, my friend asked me if I would like to have his other dog as well, a Tenterfield terrier with the name Hazel. He had bought an apartment and wanted to give her away as she would not get much exercise living in such a place. So Hazel joined our household as well and was with us until 2017, when she had to be put down.

Hazel: a little dog but able to stick up for herself.

Hazel and Harry had already spent some years together so there was no problem having the two of them at our home. Hazel was smaller and four years older than Harry. But despite her size, she knew how to keep Harry in check whenever he tried to bully her. They had the occasional fight and Hazel always came out on top.

Jack Russells are small dogs that are, by nature, a bit nervous. Harry would occasionally snap at someone whom he did not know, more due to his own nervousness than anything else. But during all those years, he only once attacked anyone. That was a small dog who came too close for his [and Harry’s] comfort.

I had taken Harry out for a walk as my wife was away. The little dog who was attacked was under the care of an aged Chinese man and not on a leash. He came too close and Harry nipped him in the eye. His owners made a song and dance about it but I finally calmed them down by offering to pay the vet’s bills.

There were no such issues with Hazel. In 2016, we found out that she had a heart murmur. Later that year, she developed an ailment which resulted in fluid not being drained from her body. This worsened until April 2017 when she lost control of her bodily functions and we had to put her to sleep. That was our first experience with losing a pet.

Harry did not seem too affected by her disappearance. He was a little reluctant to go out on walks, but did not show any signs of depression or the like.

Good friends for the most part, but Hazel and Harry did indulge in the occasional skirmish.

But then time catches up with us all. Harry started developing arthritis and from that point on, about three years back, he slowly went downhill. At times, he would not be able to support himself on his legs on the wooden floor.

At others, he would try to take a leap into the house through the door and fall flat on his chin. In the mornings, when he got up from his bed, he would often find it difficult to stand up.

In one way, I guess we were putting off the inevitable. But then there is always a point at which one has to decide. I left it to my wife to take the decision as Harry was something like another child to her.

And so on Monday, 4 April, we went to the vet at 2.30pm. Our daughter came along with us, but my son said he could not bear to be present. An hour later we were back home.

In the early morning, I would always keep an ear open for Harry with the signs of his stirring being the noise of his nails on the floorboards. One had to then take him out as soon as possible, or end up with a pool of urine on the floor.

I still find myself occasionally listening for that scratching sound in the mornings – before realising that I will hear it no more. At times, a tear or two comes to my eye. But then one has to accept that nothing in life is permanent and treasure the happy memories that Harry and Hazel brought into our lives.

,

Charles StrossQuantum of Nightmares: spoiler time!

Quantum of Nightmares: UK cover

In the before times, a mass market paperback edition usually followed the initial hardcover release of one of my books exactly 12 months later.

But we're not living in the before times any more! The UK paperback of "Quantum of Nightmares" is due in November, but there isn't going to be a US paperback (although the ebook list price will almost certainly drop to reflect a paperback-equivalent price).

So ... I'm open for questions about Quantum of Nightmares in the comment thread below. Ask me anything! Just ignore this thread if you haven't read the book yet and mean to do so in the near future, because there will be spoilers.

,

David BrinWar & (long-term) Peace: Contemplating historical perspectives

The past dominates our thoughts and imaginations, even as we veer away from truly looking at its lessons, let alone speculating uncomfortably about the future. Well, except in cogent science fiction. And even then, how to tell which projections are accurate?


Naturally, this applies to current events... we'll apply the question to the Ukraine War as well as World War II, below. But first...


The Future of Man: Take this rumination by Bertrand Russell in 1951, on the three possible futures he could conceive. “Before the end of the present century, unless something quite unforeseeable occurs, one of three possibilities will have been realized. I do not pretend to know which of these will happen, or even which is the most likely. What I do contend is that the kind of system to which we have been accustomed cannot possibly continue. These three are: 

1. The end of human life, perhaps of all life on our planet.

2. A reversion to barbarism after a catastrophic diminution of the population of the globe.

3. A unification of the world under a single government, possessing a monopoly of all the major weapons of war.”

Of course, we read these words more than two decades after his deadline, and none of the three has happened. One could berate Russell for the myopia of urgency that also led (that same year) to dire warnings like the film The Day The Earth Stood Still.

Further, Russell wrote:


“If things are allowed to drift, it is obvious that the bickering between Russia and the Western democracies will continue until Russia has a considerable store of atomic bombs, and that when that time comes there will be an atomic war. In such a war, even if the worst consequences are avoided, Western Europe, including Great Britain, will be virtually exterminated. If America and the U.S.S.R. survive as organized states, they will presently fight again. If one side is victorious, it will rule the world, and a unitary government of mankind will have come into existence; if not, either mankind or, at least, civilization will perish. This is what must happen if nations and their rulers are lacking in constructive vision.”


I remain astonished by the pertinence of a brilliant thinker who, while correct in his general appraisals, was (fortunately) wrong in the very widely shared gloom of his assessment of our civilization’s future.


Wrong, that is, up until now? Having said that, I remain daunted by how almost everything Russell said in 1951 about his future could now be said about the tomorrows that we face, looking ahead.  Go ahead and read the essay, squinting and updating by 70 years, replacing some of the players and inserting ecological catastrophe to loom alongside the spectre of nuclear Armageddon. And an America-led enlightenment whose well-earned confidence has been shattered by a deliberately-instigated spate of wholly unnecessary internal civil war. 


To all of the oligarchies now united in desperate urgency to bring down this Periclean experiment, I will further quote Bertrand Russell from 1951:

“Only democracy and free publicity can prevent the holders of power from establishing a servile state, with luxury for the few and overworked poverty for the many. This is what is being done by the Soviet government wherever it is in secure control. There are, of course, economic inequalities everywhere, but in a democratic regime they tend to diminish, whereas under an oligarchy they tend to increase. And wherever an oligarchy has power, economic inequalities threaten to become permanent owing to the modern impossibility of successful rebellion.”


George Orwell surely must have read this essay before writing Nineteen Eighty-Four, wherein he portrays the oligarchs ruling Oceania doing this. The essay will likely shock you in some ways, especially in its militancy, given that Russell would later tout pacifism and denounce US errors in Vietnam. I know I blinked in surprise at a number of paragraphs!

As prediction, the essay failed, in ways that turned out to be fortunate. But as an exploration, it will make you rethink some of the crucial factors that are even more redolent than they were in the year that I first looked out upon the world.


It is in this context we must recognize that oligarchy - the ruling pattern in 99.99% of human societies across 6000 years - has had to concoct fresh tactics to counter the blazing, brilliant strengths and creative fecundity of democratic enlightenment, inciting our own virtues - like individualism and suspicion of authority - to divide and disrupt us.


Russell concludes his essay: "There is hope that law, rather than private force, may come to govern the relations of nations within the present century. If this hope is not realized we face utter disaster; if it is realized, the world will be far better than at any previous period in the history of man."



== Looking back to World War II ==


While sane and decent people are deeply moved and enraged by the criminal horrors of the invasion of Ukraine - and encouraged by not only the courageous defenders but also the obstinate stupidity of the invaders (more on that, below) - I am further prompted to comment on the struggle that shaped the modern world and gave the Enlightenment one more, last-best chance. 


For example it was 80 years ago, yesterday, that the USS Hornet sailed through fog under the Golden Gate Bridge laden with B25 bombers on a rendezvous - 2 weeks later - with destiny. An innovation that changed history and whose 'mother' was necessity born of innovative duplicity at Pearl Harbor.


I also read more about the Battle of Leyte Gulf, where my uncle (later professor at UIUC) Victor Stone commanded a set of landing craft. And I re-evaluated my opinion of Admiral Kurita - blamed for a Halsey-level flub in not pressing his attack after a hellish encounter with a small US force called Taffy 3. I've realized that blaming Kurita is simply wrong! During that 90 minute tussle, the IJN commander realized a simple truth - that the USN of October 1944 was not even remotely the same force that had been crushed at Pearl Harbor, in December 1941. Not by decades or even generations. Just as Imperial air power was annihilated a few months earlier, in the Philippine Sea, and as the IJN southern and northern forces were pulverized with ease that same day, Kurita’s Central Force faced skills, ships and technologies against which they never stood a chance. 


Consider that the totally surprised and unprepared Taffy Three - 6 tiny escort carriers with 3 destroyers and 4 destroyer escorts - applied 5-inch pea shooters and (with help from a couple of torpedoes) wrecked or sank four IJN heavy cruisers. Read that again. Radar control, gunnery stabilizers, terrific damage control and - oh yes - incredible courage and skill made the sacrifices of those tiny ships into a lopsided victory that left Kurita staring in astonishment as his flagship, mighty Yamato, veered wildly to evade other torpedoes…


… and 400+ aircraft equipped only for supporting ground troops, but whose bombs and strafings turned the superstructures of three IJN battleships into confusion and chaos.


Kurita’s later apologia claimed he had intel about more vulnerable US carriers to the north and he did spend a few minutes charging after that ghost target, before calling it a day. But even if he had continued south, chasing after my uncle and the fleeing transports, he’d only have met 14 more destroyers, who were preparing to charge in (again perhaps suicidally) and damage his force enough to make it easy prey for Adm. Oldendorf’s six older battleships, rushing north from their victory in Surigao Strait…. and then 5 newer ones hurrying south from Halsey’s Mistake.


I have a habit of re-evaluating earlier opinions, like coming around to realizing that the Battle of Gettysburg was never in doubt and never even close. And thinking maybe that a statue of Benedict Arnold wouldn’t be so inappropriate, after all. (Ask in comments!)


In this case, I have come to conclude that even with Halsey's entire 3rd Fleet off chasing Potemkin distractions, just Taffy 3 alone showed what a supremely competent buzz saw the USN had become, in just 2.5 years. And so, I have total sympathy for Takeo Kurita. The IJN was already finished. But at least his retreat meant a few of his sailors made it home to their families. And so did the crews of those 14 fresh U.S. destroyers. And so did Victor Stone.


== So many lessons from Ukraine ==


We are likewise behooved to learn from more recent events. Take this anecdote from one of the best members of an under-ratedly brilliant clade, retirees of the US military officer corps, Gen. Wesley Clark.


General Clark recalled teaching a class of Ukrainian generals in 2016 in Kyiv and trying to explain what an American military “after-action review” was. He told them that after a battle involving American troops, “everybody got together and broke down what happened.”

“The colonel has to confess his mistakes in front of the captain,” General Clark said. “He says, ‘Maybe I took too long to give an order.’”

After hearing him out, the Ukrainians, General Clark said, told him that could not work. “They said, ‘We’ve been taught in the Soviet system that information has to be guarded and we lie to each other,’” he recalled.


To which one could cynically reply that there is still plenty of butt-covering in the US military! Of course there is. Ass-covering and delusion-protection are core human nature attributes, perhaps THE core attributes responsible for the litany of horrors we call ‘history.’ 


After almost any war (including 'wars' between companies in the marketplace or between theories in science), the victors prepare for the same kind of war they just won, while the losers try to innovate. Hence, Russians rebuild what they think enabled them to crush Army Group Center in 1944, while Ukrainians had to re-adjust and listen to guys like Clark.


The biggest exceptions to that 'only losers innovate' rule? European observers watching the US Civil War went home appalled and demanded top-to-bottom changes. Except the French, who were soon smashed by the Prussians.

But the biggest was George Marshall in 1945 asking: “What mistakes do empires always make?” And red-team critiquing became part of US military culture. Enough to maybe half compensate for inevitable human delusion. (The same question about mistakes of all previous empires led to US counter-mercantilist trade policies that for 75 years have uplifted poor nations all over the world, though some ingrates yowl "we did it all ourselves!")

The crux? We see today that teachings about flexibility by guys like Wesley Clark must have been heeded by those Ukrainian students, whose battlefield innovations - with some help - have turned a terrible and toxic tide.

Stay tuned to reality. Have we any other choice?


,

David BrinScience & Tech updates and incredible marvels!

Taking another break from war, pandemic & politics. Though again I think our paladins in all of those realms could benefit from my book of fresh tactics, Polemical Judo... 

== Wow, just wow ==

The eeriest news during an eerie month? Was the 1983 Christopher Walken film Brainstorm prophetic? According to a recent report: “After an elderly patient died suddenly during a routine test, scientists accidentally captured unique data on the activity in his brain at the very end of his life: During the 30 seconds before and after the man's heart stopped, his brain waves were remarkably similar to those seen during dreaming, memory recall and meditation, suggesting that people may actually see their life "flash before their eyes" when they die.”

And fantastic! A team has found the wreck of explorer Ernest Shackleton’s ship Endurance more than 100 years after the vessel was crushed by ice and sank near Antarctica. One of my lifelong heroes.

== Our biological world ==


While I prepare to restore my pastime as a honey beekeeper, it is a good idea to remember that there are other, important kinds of buzzers out there, including the humble (and endangered!) bumble bee! 


Homes for bees: Bee bricks: In southern England, a new law calls for new-building construction to include special bricks designed for bee nests. Not honeybees but native stingless kinds that also are needed.


Expanding our definition of food: Lobster, once despised, is now a delicacy. The same could happen with insects, this video argues. Useful for space colonies as well.


A single bacterial cell that is visible to the naked eye, growing up to 2 centimeters long and 5000 times bigger than most other microbes. ‘What’s more, this giant has a huge genome that’s not free floating inside the cell as in other bacteria, but is instead encased in a membrane, an innovation characteristic of much more complex eukaryotes.’


At a recent (zoomed) CARTA conference, Mark Moffett of the Smithsonian gave a talk: “Ants and the Anthropocene” that suggests humans aren’t the only species who are aggressively altering the planet. Kneel down anywhere in California or the south of France or a hundred other places and you’ll see Argentine ants (AA) who hitchhiked on human transports, then expanded to extinguish almost all native ant species, in part by making super-colonies who cooperate in taking over the land. Fire ants, who have done the same thing in the US south, coincidentally come from the same Argentine flood basin! They became so ferocious amid millennia of fights in one spot. And now the battle between the two rival species is worldwide, including fire ant infestations in California. There are some exceptions to Argentine ants’ super cooperation. California actually has five super colonies and they battle incessantly, with one boundary just a few klicks from our home!


Might a solution be as simple as a virus that changes and randomizes the scent signatures of these super colonies?


== Ah, Covid ==


Despite so many deaths (almost a million in the U.S. alone), this pandemic was mild by sci fi standards, and may be looked back upon as more of a live fire 'drill' that left us better prepared for the real thing. The stunning speed with which half a dozen different vaccines came forth must be daunting to any villains out there, planning bio-war. Indeed, the U.S. Army's entirely new vaccine may be the future. Apparently it offers 12 sites to attach any antigen stimulant you want, and thus could simultaneously immunize vs. all known coronas, including those responsible for half of all common colds.


Oh, and the pandemic also made clear which of our neighbors are stark-jibbering-loony science-hating cultists. Our earlier notion that they can be talked into reason was revealed as a mad delusion, in its own right. Though I believe there are polemical tactics that could peel away just enough to make a big difference, this November.


== Insights into humanity ==


Scientists have gained new glimpses into how our brains lay down memories - which could shed light on memory disorders such as Alzheimer's disease. 


Researchers have calculated that lead exposure from car exhaust shrank the IQ scores of half of the American population. But it would have been far worse, except for the campaign circa 1970 to overcome massive trog resistance and ban lead from most gas. I was involved.


Superdeterminism? Does quantum mechanics rule out free will? 


Wow. A unified genealogy of modern and ancient genomes: “We present a unified tree sequence of 3601 modern and eight high-coverage ancient human genome sequences compiled from eight datasets. This structure is a lossless and compact representation of 27 million ancestral haplotype fragments and 231 million ancestral lineages linking genomes from these datasets back in time.”


Why do we age? Robert Lustig’s list of aging processes that take place at a cellular level includes: glycation, oxidative stress, mitochondrial dysfunction, insulin resistance, membrane instability, inflammation, methylation, and autophagy.


More than 350 blind people around the world with Second Sight’s implants in their eyes experienced a miracle of partly restored vision, only to now find themselves in a world in which the technology that transformed their lives is just another obsolete gadget that (unsupported by the now bankrupt company) may fail at any moment. Neural implants—devices that interact with the human nervous system, either on its periphery or in the brain—are part of a rapidly growing category of medicine that’s sometimes called electroceuticals. Some technologies are well established, like deep-brain stimulators that reduce tremors in people with Parkinson’s disease. But recent advances in neuroscience and digital technology have sparked a gold rush in brain tech, with the outsized investments epitomized by Elon Musk’s buzzy brain-implant company, Neuralink. Some companies talk of reversing depression, treating Alzheimer’s disease, restoring mobility, or even dangle the promise of superhuman cognition.


== It’s a virus! ==


One-fifth. Nearly 20% of cancers worldwide are caused by a virus. These viruses don’t cause cancer until long after they initially infect a person. Rather, the viruses teach the cells they take over how to escape the natural biological process of cell death. This strategy sets these altered cells on a path for other genetic changes that can cause full-blown cancer years down the road. All known viruses can be categorized into one of 22 distinct families. Five of these families are called “persisting,” because once a person is infected, the virus remains in their body for life. One example is the herpes virus that causes chickenpox in children and can reappear later in life as shingles. This ability to survive over the long term helps the virus spread from person to person.


There are seven known viruses that can cause cancer. Five of them are members of persistent virus families. The human papillomavirus, commonly known as HPV and known to cause cervical cancer, is in the papilloma family. The Epstein-Barr virus, which causes Hodgkin lymphomas, and the Kaposi’s sarcoma-associated virus, are both in the herpes family. The human T-lymphotropic virus, which can cause a type of leukemia, is what’s known as a retrovirus. And Merkel cell polyoma virus, which causes Merkel cell carcinoma, is in the polyoma family.  All five of these viruses contain genetic code for one or more proteins that teach cells how to avoid cell death, effectively immortalizing them and promoting cell growth. The cancer cells that develop from these oncogenic viruses all contain their original viruses’ genetic information, even when they appear years after the initial infection.


And if all of that sounds familiar to some of you who have read my story “Chrysalis” in my Best of Brin short story collection, well, I only predict the future, I don’t make it happen.


All of which… plus Covid… reminds me to periodically offer up this song from 1979’s Unpacking the Eighties (on NPR).


IT’S A VIRUS*

Back in the Pleistocene,
When we were still marine,
a virus launched a quest,
to be the perfect guest 
And re-arranged our genes.

So to this very day,
Whether you grok or pray
all your inheritors
count on those visitors
And what they make you pay.

REFRAIN

It’s a virus,
It inspired us,
to rise above the mud.
It’s a virus,
It’s desirous,
of your very flesh and blood.

Now I know your body’s burning,
But don’t give up the ghost.
Tiny viruses are turning you
Into the perfect... host.


(More verses in comments ... and yes, I made up one of the verses, myself.)


== And is anything blurry, yet? ==


Finally.... Is anyone interested in reviewing an advance copy of a book about how our human senses change as we age? By an old college chum of mine. See 1st comment, below.


,

Sam VargheseImportant news from The Age. It’s the sainted editor speaking…

An indication of how far The Age, a tabloid newspaper that is published from Melbourne, has sunk can be seen from a letter to subscribers [note, not those who read it free] from the editor, Gay Alcorn on 2 April.

Perhaps to imbue said document with importance, Alcorn chose to place it behind a paywall. [The Age home page can be read without payment and a limited number of articles are also free to read, before the paywall kicks in.]

But Alcorn apparently considers her writing so important that it has to be paid for. Of such stern mettle are editors [and journalists too] made. Heaven forbid that the common man should be able to read this important missive.

[I worked for the website of The Age for nearly 17 years, from June 1999 until May 2016.]

Alcorn was a good editor when she headed the paper’s Sunday edition. But she appears to be out of her depth as editor of the weekday paper.

With a national election looming, Alcorn apparently wanted to offer the subscriber a bit of spin: our coverage will be unbiased. But why would she need to offer this perspective – unless the newspaper had been caught out taking one side or the other recently?

All this talk about balance is so much hogwash when a former Federal Liberal treasurer Peter Costello is chairman of the company that owns The Age. The terrible, regular columnists are another indication of the paucity of real talent at The Age; most of them are unreadable and biased in the extreme. Shaun Carney, who left the paper some years ago, but is now writing for it again, is Costello’s biographer. No prizes for guessing where his sympathies lie.

And there is Parnell Palme McGuinness, daughter of the late right-wing contrarian Padraic McGuinness, a right-wing nutjob if ever there was one. Add to that a woman named Julie Szego — who has been there for decades and once ran to Mark Leibler to get a column of hers reinstated after the then editor, Paul Ramadge, had spiked it — and you have all the ingredients for a stale right-wing pudding. Oh, before I forget, there’s also former Howard minister Amanda Vanstone who provides the icing on that.

The only decent political columnist is Nikki Savva and she came to Nine only because The Australian, where she was a staple, hired the former Tony Abbott spin doctor Peta Credlin. [Update, April 9: Looks like The Age is now reduced to running columns written by Michelle Grattan who left the paper seven or eight years ago.]

But back to the exclusive letter. That an editor would pen a missive like this is itself a joke. It starts out by saying that she would like to outline The Age’s principles [that is, if it has any, which is doubtful – see above] before the election – only to promptly say that they are the same as for all other coverage. Then why raise the point at all, one hears the puzzled subscriber ask.

Alcorn says election coverage will not be “he said, she said” journalism. But given that a substantial part of everyday reporting, be it political, economic, sport or arts, is just that kind of story, what is she planning to offer in lieu? Mystery stories? Tales of dark horror? Crossword puzzles? Frankly, it’s a mystery.

“We have to seek the truth about what is being claimed, and to highlight issues that matter and those our subscribers care about,” Alcorn thunders, unaware of the many contradictions in what she has just written. Issues that matter to whom? What do The Age’s subscribers care about? Nobody knows or cares.

And then Alcorn comes back with, “You can rely on The Age for trustworthy coverage.” Let me just note one thing here: the ABC, the government-funded media organisation, touts something similar as its mantra: “the most trusted news source.” But the same ABC is lapping up the data of its users online and sending it to the likes of Google, Facebook and Chartbeat – though it has no need to do so as it carries no advertising. Trust?

If The Age did care about the common man in Melbourne [and other parts of the country] then it is fair to assume that it would have given this data slurping by the ABC some critical coverage. Alas, it has been silent for the most part. Its media writers are pro-ABC and so the coverage is slanted – by a publication that claims not to take sides.

At the last election, in 2019, practically all media were carried away by the opinion polls that predicted an easy win for Labor. In the end, the Coalition came home. Alcorn appears to want to put this memory behind her [she was not editor at the time] and also offer an excuse: “Much of our coverage is informative and useful, but in the past, it could be too focused on opinion polls, particularly the shifts in two-party preferred voting intentions. The problem with that is that the polls influenced our broader coverage too much.”

That’s nothing to do with the polls. It’s got a lot to do with the fact that journalists at The Age sit on their arses in the office all the time and do not bother to go out and get a feel for how voters are thinking. Writing stories from polls is easy – and that’s what The Age generally does. Or else, there are always plenty of press releases which these glorified stenographers can use.

Alcorn’s solution? “We relaunched polling in April last year through the Resolve Political Monitor. Voting intention is collected — emulating as closely as possible the real ballot paper ranking without an ‘undecided’ option — but you will have noticed that we no longer report two-party preferred results.”

The Age editor’s subscriber exclusive isn’t done yet. It indulges in loads of verbiage, canvassing a number of options, but forgetting that Australians, like mugs all over the world, are more influenced by how much money they will get due to backing a particular political party. That strategy was put in place by the Liberal Party hack John Howard who, thanks to a boom in resource exports, had buckets of money to play with. He once gave women a bonus of $3000 for having a child! He stayed in power for 11 years and spent hundreds of billions in such bribes.

The truth is that The Age now has a stable of hacks who are really not world-beaters. Some of them are tired Murdoch castoffs — like David Crowe, Simone Fox Koob and Chip Le Grand — plus others who border on the edge of racism like Peter Hartcher. The last-named is, incidentally, on the record as saying that Australia should not accept Chinese from mainland China as immigrants. This is the kind of balanced writer in the Nine stables [Hartcher is an employee of the Sydney Morning Herald, The Age’s equivalent in Sydney].

The epic letter then says: “The Age is not partisan, and we attempt to think through the issues independently to come to a position on which party would best serve the public interest.”

Which public is this? Until that is defined, one would really not know what to expect.

“So, I am taking a breath. Election campaigns are intense, often ugly, but they are a privilege in a messy liberal democracy. The Age’s job is to provide accurate and fair coverage, an important role in any democracy. Then you get to decide,” is how Alcorn ends.

One point about balance: When a Murdoch newspaper claimed that a Labor senator had been bullied [based on material that came from others after said Senator died] and that this, in part, led to her death, The Age was quick to leap on the story and give it wall-to-wall coverage.

But when a Liberal Senator openly blasted the prime minister and called him a bully [two other women politicians followed suit] The Age did not offer even a tenth of the coverage afforded to the Labor issue.

Balance, did you say?

MEConverting to UEFI

When I got my HP ML110 Gen9 working as a workstation I was initially under the impression that booting from NVMe wasn’t supported, so I booted it from USB. I found USB booting with legacy boot to be unreliable, so I decided to try EFI booting and noticed that the NVMe devices were boot candidates with UEFI. Making one of them bootable was more complex than expected because no-one seems to have documented such things. So here’s my documentation; it’s not great, but this method has worked once for me.

Before starting major partitioning work it’s best to run “parted -l” and save the output to a file, so that you can recreate the partitions if you corrupt them. One thing I’m doing on systems I manage is putting “@reboot /usr/sbin/parted -l > /root/parted.log” in the root crontab; then when the system is backed up the backup server gets any recent changes to partitioning (I don’t back up /var/log on all my systems).
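
For anyone scripting that step, here is one way (a minimal sketch, not part of the original procedure) to append the entry to the root crontab non-interactively. Note that running it more than once will add duplicate entries, so check with “crontab -l” first.

(crontab -l 2>/dev/null; echo '@reboot /usr/sbin/parted -l > /root/parted.log') | crontab -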

First, run parted on the device to create the EFI and /boot partitions. Note that if you want to copy and paste from this you must do so one line at a time; a block paste seemed to confuse parted.

mklabel gpt
mkpart EFI fat32 1 99
mkpart boot ext3 99 300
toggle 1 boot
toggle 1 esp
p
# Model: CT1000P1SSD8 (nvme)
# Disk /dev/nvme1n1: 1000GB
# Sector size (logical/physical): 512B/512B
# Partition Table: gpt
# Disk Flags: 
#
# Number  Start   End     Size    File system  Name  Flags
#  1      1049kB  98.6MB  97.5MB  fat32        EFI   boot, esp
#  2      98.6MB  300MB   201MB   ext3         boot
q

Here are the commands needed to create the filesystems and install the necessary files. This is almost to the stage of being scriptable. Some minor changes need to be made to convert from NVMe device names to SATA/SAS but nothing serious.

mkfs.vfat /dev/nvme1n1p1
mkfs.ext3 -N 1000 /dev/nvme1n1p2
file -s /dev/nvme1n1p2 | sed -e s/^.*UUID/UUID/ -e "s/ .*$/ \/boot ext3 noatime 0 1/" >> /etc/fstab
file -s /dev/nvme1n1p1 | tr "[a-f]" "[A-F]" |sed -e s/^.*numBEr.0x/UUID=/ -e "s/, .*$/ \/boot\/efi vfat umask=0077 0 1/" >> /etc/fstab
# edit /etc/fstab to put a hyphen between the 2 groups of 4 chars for the VFAT filesystem UUID
mount /boot
mkdir -p /boot/efi /boot/grub
mount /boot/efi
mkdir -p /boot/efi/EFI/debian
apt install efibootmgr shim-unsigned grub-efi-amd64
cp /usr/lib/shim/* /usr/lib/grub/x86_64-efi/monolithic/grubx64.efi /boot/efi/EFI/debian
file -s /dev/nvme1n1p2 | sed -e "s/^.*UUID=/search.fs_uuid /" -e "s/ .needs.*$/ root hd0,gpt2/" > /boot/efi/EFI/debian/grub.cfg
echo "set prefix=(\$root)'/boot/grub'" >> /boot/efi/EFI/debian/grub.cfg
echo "configfile \$prefix/grub.cfg" >> /boot/efi/EFI/debian/grub.cfg
grub-install
update-grub
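
For reference, the three “file”/“echo” commands above generate a bootstrap /boot/efi/EFI/debian/grub.cfg that should look roughly like the following (the UUID shown is a made-up placeholder; yours will be whatever the /boot ext3 filesystem reports):

search.fs_uuid 12345678-9abc-def0-1234-56789abcdef0 root hd0,gpt2
set prefix=($root)'/boot/grub'
configfile $prefix/grub.cfg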

If someone would like to make a script that can handle the different partition names of regular SCSI/SATA disks, NVMe, CCISS, etc then that would be great. It would be good to have a script in Debian that creates the partitions and sets up the EFI files.
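
As a rough starting point for such a script, here is a minimal, untested sketch of one piece of the problem: mapping a disk device name plus partition number to the partition device name. The part_dev function name is made up for illustration; NVMe, MMC and CCISS devices need a “p” separator before the partition number, while plain SCSI/SATA disks don’t.

#!/bin/sh
# print the partition device name for a given disk device and partition number
part_dev() {
  case "$1" in
    *nvme*|*mmcblk*|*cciss*) echo "${1}p${2}" ;;
    *) echo "${1}${2}" ;;
  esac
}
# examples:
# part_dev /dev/nvme1n1 1    -> /dev/nvme1n1p1
# part_dev /dev/cciss/c0d0 2 -> /dev/cciss/c0d0p2
# part_dev /dev/sda 1        -> /dev/sda1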

If you want to have a second bootable device then the following commands will copy a GPT partition table and give it new UUIDs. Make very certain that $DISKB is the one you want to be wiped, and refer to my previous mention of “parted -l”. Also note that parted has a rescue command which works very well.

sgdisk /dev/$DISKA -R /dev/$DISKB 
sgdisk -G /dev/$DISKB

To back up a GPT partition table run a command like this. Note that if sgdisk is told to back up an MBR partitioned disk it will say “Found invalid GPT and valid MBR; converting MBR to GPT format”, which is probably a viable way of converting MBR format to GPT.

sgdisk -b sda.bak /dev/sda
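
To restore such a backup to a disk, sgdisk’s corresponding load-backup option can be used; for example (make sure the target disk is the one you intend to overwrite):

sgdisk -l sda.bak /dev/sda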

,

Sam VargheseIncestuous relationships in Canberra once again on display

The incestuous relationship between Australian journalists and politicians has been exposed again, with the journalist in question being the political editor of news.com.au, Samantha Maiden [seen below in a picture from YouTube].

The politician, sadly, is no longer in this world; Kimberley Kitching, a senator from the Labor Party, died on 10 March of a suspected heart attack. [More on Canberra’s incestuous culture here and here.]

Given the way that News Corporation, the empire owned by Rupert Murdoch, used alleged events prior to Kitching’s death to accuse other Labor senators of contributing to the stress that led to her exiting the mortal coil, nobody except an idiot would have assumed that the topic would not come up for discussion during political programs on the weekend after her death.

As happened on 13 March on the Insiders program which is hosted by David Speers on the government-funded TV outlet ABC. This program has a panel of three that discusses the events of the week, with Speers guiding the discussion, an interview [generally with a politician] and a look at the cartoons of the week.

Maiden is a very good reporter and, though she works for a right-wing outfit — news.com.au is also owned by Murdoch but is not behind a paywall and uses some stories from the numerous other publications that Murdoch owns — appears to be guided by her instinct for news. There is, thus, no reason to believe that she would not have expected the Kitching issue to figure in the Insiders discussion.

Conflicts of interest should always be disclosed prior to speaking on a program such as this, but although Maiden was conflicted about commenting on the Kitching death and its fallout, she did not tell Speers about it before the program.

The correct thing for her to do would have been to withdraw from the program, but then journalists love to have their faces on the Insiders as it is a national program. Perhaps in an earlier era, a journalist would have done the right thing, but in this age narcissism is the order of the day.

When the topic came to Kitching, Maiden suddenly said she had something to disclose: the fact that she had interviewed Kitching shortly before the Labor politician died, in connection with a book which she claims to be writing about the culture in parliament.

Maiden revealed this when the claim that Kitching had leaked to the Coalition came up, vehemently saying that this was not true, and that Kitching had told her so.

It was an awkward situation for Speers as he could not question what Maiden was saying without calling her a liar; here was someone attributing this and that to a dead woman, confident that it would have to be accepted.

Maiden will obviously have to go back to Kitching’s close associates for more material for her book; hence, her appearance on Insiders and her defence of what a dead woman allegedly said.

It wasn’t a very professional thing to do and illustrated how journalists — supposedly the fourth estate but more and more parochial these days and catering to their own whims — and politicians have an unholy nexus. It’s one that leaves the public very much in the dark.

,

David BrinIt’s primary season. There’s a trick to multiply your citizen-influence.

I have (with others) inveighed mightily against the cheats that have come close to ruining American democracy. For example, elsewhere I oft claim blackmail is the sole plausible explanation for the behavior of so many in Washington DC. I recommend easy ways to find out if I’m right.  


Even worse is gerrymandering, abetted by the wholly-treasonous Roberts Doctrine, that has warped US politics, shifting power from the General Election to primaries, where radicals exert vastly amplified influence.


See “Radicalization by primary has driven the GOP mad.” Though - at risk of being accused of ‘both-side-ism’ - the same thing has happened (to a lesser degree) on the U.S. left. 


In a hurry to learn the ‘trick’ to amplify your vote-power? 

Then skim down past the following explanations, to the *** part...

... where I describe a practical way that YOU… and maybe 50 million Americans… might restore your sovereign power as a citizen.


 == Why do I keep trying? == 


My own past efforts to urge judo tactics on our ‘generals’ leading the Union side in this phase of U.S. Civil War include a book — Polemical Judo — containing almost a hundred agile tactics to break a gone-mad Republican Party’s hold on many institutions, states and 40% of our neighbors... asking: “Have you noticed a path past those trenches and around that mountain?” 


Alas, among political castes and pundits — even good guys! — new ideas and methods are not welcome. Oh, there are paladins like AOC, Stacey Abrams and DNC Chair Jaime Harrison, who must spend inordinate time corralling flakes eager to repeat their betrayals of 80, 88, 94, 2000, 2010 and 2016, spurning the only coalition that stands any chance of stopping the forces of darkness. See Five devastating rebuttals to use on those who would split our coalition.


If November 2020 was our Gettysburg, turning the tide from Trumpism, science-hating idiocracy and treason, remember that it took another 20 months from Gettysburg to Appomattox and America’s rebirth of freedom.


So here’s my top suggestion for how citizens like you might help fight the wave of Holnists trying to wreck our Great Experiment.


== The essence of this crime ==


The Roberts majority on the Supreme Court has one mission, above all. Not abortion or corporatism, but to preserve gerrymandering - the sole thing keeping the GOP relevant in the House and in state legislatures. Without it, there might still be a Republican power in the U.S. Senate. But the party would collapse almost everywhere else.


Crucially, gerrymandering lets politicians choose the voters, instead of voters choosing politicians. It creates mostly 'safe' districts in which the General Election is a farce. 

The only balloting most reps and assembly members fear is their party’s primary, which explains much of the riled-up radicalism of today's U.S. politics. See how the New York Times made a game to help you understand it.


And the sole top priority of John Roberts is to protect this situation.


Do you live in such a district, frustrated that you will never, ever get to help elect someone you like, or thwart representatives you despise?  Then skip ahead to the next section ***


Only first… how does this foul cheat endure, in the face of a vast American consensus against it? John Roberts even admitted that it’s loathsome and inherently wrong. In order to protect it, he concocted the Roberts Doctrine, that the U.S. Supreme Court has no business ruling on the processes and procedures adopted by sovereign state legislatures. Not even when they are cheating voters in order to maintain power that large majorities want to take away from them!


Of course this ploy is an absolutely stunning sham! The Supreme Court has intervened countless times in such matters. In fact, Roberts avows that Congress could overcome gerrymandering, if it chooses, via the Voting Rights Act. Hence, a top priority of McConnell & co. has been to stymie that.


In defense of his Doctrine, the Chief Justice contrived an even more amazing rationalization — that while gerrymandering is 'regrettable,' alas 'no remedy' had been presented without flaws of its own.  


Okay, sure, giving the job of forging fair districts to a neutral commission can pose problems too, maybe 1% as bad as letting legislatures do it!  But… what if Biden sent the Solicitor General back before the Court with a fresh approach? One that cancels all of JR’s rationalized excuses? 


If you're curious, see the Minimal Overlap Solution that I offered in Polemical Judo, and summarized