Planet Russell

,

Charles StrossMore in Sadness than in Anger

Sorry I haven't updated the blog for a while: I've been busy. (Writing the final draft of a new novel entirely unconnected to anything else you've read—space opera, new setting, longest thing I've written aside from the big Merchant Princes doorsteps. Now in my agent's inbox while I make notes towards a sequel, if requested.)

Over the past few years I've been naively assuming that while we're ruled by a ruthless kleptocracy, they're not completely evil: aristocracies tend to run on self-interest and try to leave a legacy to their children, which usually means leaving enough peasants around to mow the lawn, wash the dishes, and work the fields.

But my faith in the sanity of the evil overlords has been badly shaken in the past couple of months by the steady drip of WTFery coming out of the USA in general and the Epstein Files in particular, and now there's this somewhat obscure aside, that rips the mask off entirely (Original email on DoJ website ) ...

A document released by the U.S. Department of Justice as part of the Epstein files contains a quote attributed to correspondence involving Jeffrey Epstein that references Bill Gates and a controversial question about "how do we get rid of poor people as a whole."

The passage appears in a written communication included in the DOJ document trove and reads, in part: "I've been thinking a lot about that question that you asked Bill Gates, 'how do we get rid of poor people as a whole,' and I have an answer/comment regarding that for you." The writer then asks to schedule a phone call to discuss the matter further.

As an editor of mine once observed, America is ruled by two political parties: the party of the evil billionaires, and the party of the sane (so slightly less evil) billionaires. Evil billionaires: "let's kill the poor and take all their stuff." Sane billionaires: "hang on, if we kill them all who's going to cook dinner and clean the pool?"

And this seemed plausible ... before it turned out that the CEO class as a whole believe entirely in AI (which, to be clear, is just another marketing grift in the same spirit as cryptocurrencies/blockchain, next-generation nuclear power, real estate backed credit default options, and Dutch tulip bulbs). AI is being sold on the promise of increasing workforce efficiency. And in a world which has been studiously ignoring John Maynard Keynes' 1930 prediction that by 2030 we would only need to work a 15 hour work week, they've drawn an inevitable unwelcome conclusion from this axiom: that there are too many of us. For the past 75 years they've been so focussed on optimizing for efficiency that they no longer understand that efficiency and resilience are inversely related: in order to survive collectively through an energy transition and a time of climate destabilization we need extra capacity, not "right-sized" capacity.

Raise the death rate by removing herd immunity to childhood diseases? That's entirely consistent with "kill the poor". Mass deportation of anyone with the wrong skin colour? The white supremacists will join in enthusiastically, and meanwhile: the deported can die out of sight. Turn disused data centres or amazon warehouses into concentration camps (which are notorious disease breeding grounds)? It's a no-brainer. Start lots of small overseas brushfire wars, escalating to the sort of genocide now being piloted in Gaza by Trump's ally Netanyahu (to emphasize: his strain of Judaism can only be understood as a Jewish expression of white nationalism, throwing off its polite political mask to reveal the death's head of totalitarianism underneath)? It's all part of the program.

Our rulers have gone collectively insane (over a period of decades) and they want to kill us.

The class war has turned hot. And we're all on the losing side.

Rondam RamblingsDebate Post-Mortem

Last Saturday I did my first on-line debate in four years with a YouTuber who goes by the handle MadeByJimBob (who I will refer to simply as JB since JimBob is not actually his real name and MadeByJimBob is just too long).  The topic was "Is Evolution a Reasonable Position?"  The topic was originally going to be "Evolution on Trial" but I pushed back on that for two reasons.  First

Planet DebianIan Jackson: Adopting tag2upload and modernising your Debian packaging

Introduction

tag2upload allows authorised Debian contributors to upload to Debian simply by pushing a signed git tag to Debian’s gitlab instance, Salsa.

We have recently announced that tag2upload is, in our opinion, now very stable, and ready for general use by all Debian uploaders.

tag2upload, as part of Debian’s git transition programme, is very flexible - it needs to support a large variety of maintainer practices. And it’s relatively unopinionated, wherever that’s possible. But, during the open beta, various contributors emailed us asking for Debian packaging git workflow advice and recommendations.

This post is an attempt to give some more opinionated answers, and guide you through modernising your workflow.

(This article is aimed squarely at Debian contributors. Much of it will make little sense to Debian outsiders.)

Why

Ease of development

git offers a far superior development experience to patches and tarballs. Moving tasks from a tarballs and patches representation to a normal, git-first, representation, makes everything simpler.

dgit and tag2upload do automatically many things that have to be done manually, or with separate commands, in dput-based upload workflows.

They will also save you from a variety of common mistakes. For example, you cannot accidentally overwrite an NMU, with tag2upload or dgit. These many safety catches mean that our software sometimes complains about things, or needs confirmation, when more primitive tooling just goes ahead. We think this is the right tradeoff: it’s part of the great care we take to avoid our software making messes. Software that has your back is very liberating for the user.

tag2upload makes it possible to upload with very small amounts of data transfer, which is great in slow or unreliable network environments. The other week I did a git-debpush over mobile data while on a train in Switzerland; it completed in seconds.

See the Day-to-day work section below to see how simple your life could be.

Don’t fear a learning burden; instead, start forgetting all that nonsense

Most Debian contributors have spent months or years learning how to work with Debian’s tooling. You may reasonably fear that our software is yet more bizarre, janky, and mistake-prone stuff to learn.

We promise (and our users tell us) that’s not how it is. We have spent a lot of effort on providing a good user experience. Our new git-first tooling, especially dgit and tag2upload, is much simpler to use than source-package-based tooling, despite being more capable.

The idiosyncrasies and bugs of source packages, and of the legacy archive, have been relentlessly worked around and papered over by our thousands of lines of thoroughly-tested defensive code. You too can forget all those confusing details, like our users have! After using our systems for a while you won’t look back.

And, you shouldn’t fear trying it out. dgit and tag2upload are unlikely to make a mess. If something is wrong (or even doubtful), they will typically detect it, and stop. This does mean that starting to use tag2upload or dgit can involve resolving anomalies that previous tooling ignored, or passing additional options to reassure the system about your intentions. So admittedly it isn’t always trivial to get your first push to succeed.

Properly publishing the source code

One of Debian’s foundational principles is that we publish the source code.

Nowadays, the vast majority of us, and of our upstreams, are using git. We are doing this because git makes our life so much easier.

But, without tag2upload or dgit, we aren’t properly publishing our work! Yes, we typically put our git branch on Salsa, and point Vcs-Git at it. However:

  • The format of git branches on Salsa is not standardised. They might be patches-unapplied, patches-applied, bare debian/, or something even stranger.
  • There is no guarantee that the DEP-14 debian/1.2.3-7 tag on salsa corresponds precisely to what was actually uploaded. dput-based tooling (such as gbp buildpackage) doesn’t cross-check the .dsc against git.
  • There is no guarantee that the presence of a DEP-14 tag even means that that version of package is in the archive.

This means that the git repositories on Salsa cannot be used by anyone who needs things that are systematic and always correct. They are OK for expert humans, but they are awkward (even hazardous) for Debian novices, and you cannot use them in automation. The real test is: could you use Vcs-Git and Salsa to build a Debian derivative? You could not.

tag2upload and dgit do solve this problem. When you upload, they:

  1. Make a canonical-form (patches-applied) derivative of your git branch;
  2. Ensure that there is a well-defined correspondence between the git tree and the source package;
  3. Publish both the DEP-14 tag and a canonical-form archive/debian/1.2.3-7 tag to a single central git depository, *.dgit.debian.org;
  4. Record the git information in the Dgit field in .dsc so that clients can tell (using the ftpmaster API) that this was a git-based upload, what the corresponding git objects are, and where to find them.

This dependably conveys your git history to users and downstreams, in a standard, systematic and discoverable way. tag2upload and dgit are the only system which achieves this.

(The client is dgit clone, as advertised in e.g. dgit-user(7). For dput-based uploads, it falls back to importing the source package.)

Adopting tag2upload - the minimal change

tag2upload is a substantial incremental improvement to many existing workflows. git-debpush is a drop-in replacement for building, signing, and uploading the source package.

So, you can just adopt it without completely overhauling your packaging practices. You and your co-maintainers can even mix-and-match tag2upload, dgit, and traditional approaches, for the same package.

Start with the wiki page and git-debpush(1) (ideally from forky aka testing).

You don’t need to do any of the other things recommended in this article.

Overhauling your workflow, using advanced git-first tooling

The rest of this article is a guide to adopting the best and most advanced git-based tooling for Debian packaging.

Assumptions

  • Your current approach uses the “patches-unapplied” git branch format used with gbp pq and/or quilt, and often used with git-buildpackage. You previously used gbp import-orig.

  • You are fluent with git, and know how to use Merge Requests on gitlab (Salsa). You have your origin remote set to Salsa.

  • Your main Debian branch name on Salsa is master. Personally I think we should use main but changing your main branch name is outside the scope of this article.

  • You have enough familiarity with Debian packaging including concepts like source and binary packages, and NEW review.

  • Your co-maintainers are also adopting the new approach.

tag2upload and dgit (and git-debrebase) are flexible tools and can help with many other scenarios too, and you can often mix-and-match different approaches. But, explaining every possibility would make this post far too confusing.

Topics and tooling

This article will guide you in adopting:

  • tag2upload
  • Patches-applied git branch for your packaging
  • Either plain git merge or git-debrebase
  • dgit when a with-binaries uploaded is needed (NEW)
  • git-based sponsorship
  • Salsa (gitlab), including Debian Salsa CI

Choosing the git branch format

In Debian we need to be able to modify the upstream-provided source code. Those modifications are the Debian delta. We need to somehow represent it in git.

We recommend storing the delta as git commits to those upstream files, by picking one of the following two approaches.

rationale

Much traditional Debian tooling like quilt and gbp pq uses the “patches-unapplied” branch format, which stores the delta as patch files in debian/patches/, in a git tree full of unmodified upstream files. This is clumsy to work with, and can even be an alarming beartrap for Debian outsiders.

git merge

Option 1: simply use git, directly, including git merge.

Just make changes directly to upstream files on your Debian branch, when necessary. Use plain git merge when merging from upstream.

This is appropriate if your package has no or very few upstream changes. It is a good approach if the Debian maintainers and upstream maintainers work very closely, so that any needed changes for Debian are upstreamed quickly, and any desired behavioural differences can be arranged by configuration controlled from within debian/.

This is the approach documented more fully in our workflow tutorial dgit-maint-merge(7).

git-debrebase

Option 2: Adopt git-debrebase.

git-debrebase helps maintain your delta as linear series of commits (very like a “topic branch” in git terminology). The delta can be reorganised, edited, and rebased. git-debrebase is designed to help you carry a significant and complicated delta series.

The older versions of the Debian delta are preserved in the history. git-debrebase makes extra merges to make a fast-forwarding history out of the successive versions of the delta queue branch.

This is the approach documented more fully in our workflow tutorial dgit-maint-debrebase(7).

Examples of complex packages using this approach include src:xen and src:sbcl.

Determine upstream git and stop using upstream tarballs

We recommend using upstream git, only and directly. You should ignore upstream tarballs completely.

rationale

Many maintainers have been importing upstream tarballs into git, for example by using gbp import-orig. But in reality the upstream tarball is an intermediate build product, not (just) source code. Using tarballs rather than git exposes us to additional supply chain attacks; indeed, the key activation part of the xz backdoor attack was hidden only in the tarball!

git offers better traceability than so-called “pristine” upstream tarballs. (The word “pristine” is even a joke by the author of pristine-tar!)

First, establish which upstream git tag corresponds to the version currently in Debian. From the sake of readability, I’m going to pretend that upstream version is 1.2.3, and that upstream tagged it v1.2.3.

Edit debian/watch to contain something like this:

version=4
opts="mode=git" https://codeberg.org/team/package refs/tags/v(\d\S*)

You may need to adjust the regexp, depending on your upstream’s tag name convention. If debian/watch had a files-excluded, you’ll need to make a filtered version of upstream git.

git-debrebase

From now on we’ll generate our own .orig tarballs directly from git.

rationale

We need some “upstream tarball” for the 3.0 (quilt) source format to work with. It needs to correspond to the git commit we’re using as our upstream. We don’t need or want to use a tarball from upstream for this. The .orig is just needed so a nice legacy Debian source package (.dsc) can be generated.

Probably, the current .orig in the Debian archive, is an upstream tarball, which may be different to the output of git-archive and possibly even have different contents to what’s in git. The legacy archive has trouble with differing .origs for the “same upstream version”.

So we must — until the next upstream release — change our idea of the upstream version number. We’re going to add +git to Debian’s idea of the upstream version. Manually make a tag with that name:

git tag -m "Compatibility tag for orig transition" v1.2.3+git v1.2.3~0
git push origin v1.2.3+git

If you are doing the packaging overhaul at the same time as a new upstream version, you can skip this part.

Convert the git branch

git merge

Prepare a new branch on top of upstream git, containing what we want:

git branch -f old-master         # make a note of the old git representation
git reset --hard v1.2.3          # go back to the real upstream git tag
git checkout old-master :debian  # take debian/* from old-master
git commit -m "Re-import Debian packaging on top of upstream git"
git merge --allow-unrelated-histories -s ours -m "Make fast forward from tarball-based history" old-master
git branch -d old-master         # it's incorporated in our history now

If there are any patches, manually apply them to your main branch with git am, and delete the patch files (git rm -r debian/patches, and commit). (If you’ve chosen this workflow, there should be hardly any patches,)

rationale

These are some pretty nasty git runes, indeed. They’re needed because we want to restart our Debian packaging on top of a possibly quite different notion of what the upstream is.

git-debrebase

Convert the branch to git-debrebase format and rebase onto the upstream git:

git-debrebase -fdiverged convert-from-gbp upstream/1.2.3
git-debrebase -fdiverged -fupstream-not-ff new-upstream 1.2.3+git

If you had patches which patched generated files which are present only in the upstream tarball, and not in upstream git, you will encounter rebase conflicts. You can drop hunks editing those files, since those files are no longer going to be part of your view of the upstream source code at all.

rationale

The force option -fupstream-not-ff will be needed this one time because your existing Debian packaging history is (probably) not based directly on the upstream history. -fdiverged may be needed because git-debrebase might spot that your branch is not based on dgit-ish git history.

Manually make your history fast forward from the git import of your previous upload.

dgit fetch
git show dgit/dgit/sid:debian/changelog
# check that you have the same version number
git merge -s ours --allow-unrelated-histories -m 'Declare fast forward from pre-git-based history' dgit/dgit/sid

Change the source format

Delete any existing debian/source/options and/or debian/source/local-options.

git merge

Change debian/source/format to 1.0. Add debian/source/options containing -sn.

rationale

We are using the “1.0 native” source format. This is the simplest possible source format - just a tarball. We would prefer “3.0 (native)”, which has some advantages, but dpkg-source between 2013 (wheezy) and 2025 (trixie) inclusive unjustifiably rejects this configuration.

You may receive bug reports from over-zealous folks complaining about the use of the 1.0 source format. You should close such reports, with a reference to this article and to #1106402.

git-debrebase

Ensure that debian/source/format contains 3.0 (quilt).

Now you are ready to do a local test build.

Sort out the documentation and metadata

Edit README.source to at least mention dgit-maint-merge(7) or dgit-maint-debrebase(7), and to tell people not to try to edit or create anything in debian/patches/. Consider saying that uploads should be done via dgit or tag2upload.

Check that your Vcs-Git is correct in debian/control. Consider deleting or pruning debian/gbp.conf, since it isn’t used by dgit, tag2upload, or git-debrebase.

git merge

Add a note to debian/changelog about the git packaging change.

git-debrebase

git-debrebase new-upstream will have added a “new upstream version” stanza to debian/changelog. Edit that so that it instead describes the packaging change. (Don’t remove the +git from the upstream version number there!)

Configure Salsa Merge Requests

git-debrebase

In “Settings” / “Merge requests”, change “Squash commits when merging” to “Do not allow”.

rationale

Squashing could destroy your carefully-curated delta queue. It would also disrupt git-debrebase’s git branch structure.

Set up Salsa CI, and use it to block merges of bad changes

Caveat - the tradeoff

gitlab is a giant pile of enterprise crap. It is full of startling bugs, many of which reveal a fundamentally broken design. It is only barely Free Software in practice for Debian (in the sense that we are very reluctant to try to modify it). The constant-churn development approach and open-core business model are serious problems. It’s very slow (and resource-intensive). It can be depressingly unreliable. That Salsa works as well as it does is a testament to the dedication of the Debian Salsa team (and those who support them, including DSA).

However, I have found that despite these problems, Salsa CI is well worth the trouble. Yes, there are frustrating days when work is blocked because gitlab CI is broken and/or one has to keep mashing “Retry”. But, the upside is no longer having to remember to run tests, track which of my multiple dev branches tests have passed on, and so on. Automatic tests on Merge Requests are a great way of reducing maintainer review burden for external contributions, and helping uphold quality norms within a team. They’re a great boon for the lazy solo programmer.

The bottom line is that I absolutely love it when the computer thoroughly checks my work. This is tremendously freeing, precisely at the point when one most needs it — deep in the code. If the price is to occasionally be blocked by a confused (or broken) computer, so be it.

Setup procedure

Create debian/salsa-ci.yml containing

include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml

In your Salsa repository, under “Settings” / “CI/CD”, expand “General Pipelines” and set “CI/CD configuration file” to debian/salsa-ci.yml.

rationale

Your project may have an upstream CI config in .gitlab-ci.yml. But you probably want to run the Debian Salsa CI jobs.

You can add various extra configuration to debian/salsa-ci.yml to customise it. Consult the Salsa CI docs.

git-debrebase

Add to debian/salsa-ci.yml:

.git-debrebase-prepare: &git-debrebase-prepare
  # install the tools we'll need
  - apt-get update
  - apt-get --yes install git-debrebase git-debpush
  # git-debrebase needs git user setup
  - git config user.email "salsa-ci@invalid.invalid"
  - git config user.name "salsa-ci"
  # run git-debrebase make-patches
  # https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/371
  - git-debrebase --force
  - git-debrebase --noop-ok make-patches
  # make an orig tarball using the upstream tag, not a gbp upstream/ tag
  # https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/541
  - git-deborig

.build-definition: &build-definition
  extends: .build-definition-common
  before_script: *git-debrebase-prepare

build source:
  extends: .build-source-only
  before_script: *git-debrebase-prepare

variables:
  # disable shallow cloning of git repository. This is needed for git-debrebase
  GIT_DEPTH: 0
rationale

Unfortunately the Salsa CI pipeline currently lacks proper support for git-debrebase (salsa-ci#371) and has trouble directly using upstream git for orig tarballs (#salsa-ci#541).

These runes were based on those in the Xen package. You should subscribe to the tickets #371 and #541 so that you can replace the clone-and-hack when proper support is merged.

Push this to salsa and make the CI pass.

If you configured the pipeline filename after your last push, you will need to explicitly start the first CI run. That’s in “Pipelines”: press “New pipeline” in the top right. The defaults will very probably be correct.

Block untested pushes, preventing regressions

In your project on Salsa, go into “Settings” / “Repository”. In the section “Branch rules”, use “Add branch rule”. Select the branch master. Set “Allowed to merge” to “Maintainers”. Set “Allowed to push and merge” to “No one”. Leave “Allow force push” disabled.

This means that the only way to land anything on your mainline is via a Merge Request. When you make a Merge Request, gitlab will offer “Set to auto-merge”. Use that.

gitlab won’t normally merge an MR unless CI passes, although you can override this on a per-MR basis if you need to.

(Sometimes, immediately after creating a merge request in gitlab, you will see a plain “Merge” button. This is a bug. Don’t press that. Reload the page so that “Set to auto-merge” appears.)

autopkgtests

Ideally, your package would have meaningful autopkgtests (DEP-8 tests) This makes Salsa CI more useful for you, and also helps detect and defend you against regressions in your dependencies.

The Debian CI docs are a good starting point. In-depth discussion of writing autopkgtests is beyond the scope of this article.

Day-to-day work

With this capable tooling, most tasks are much easier.

Making changes to the package

Make all changes via a Salsa Merge Request. So start by making a branch that will become the MR branch.

On your MR branch you can freely edit every file. This includes upstream files, and files in debian/.

For example, you can:

  • Make changes with your editor and commit them.
  • git cherry-pick an upstream commit.
  • git am a patch from a mailing list or from the Debian Bug System.
  • git revert an earlier commit, even an upstream one.

When you have a working state of things, tidy up your git branch:

git merge

Use git-rebase to squash/edit/combine/reorder commits.

git-debrebase

Use git-debrebase -i to squash/edit/combine/reorder commits. When you are happy, run git-debrebase conclude.

Do not edit debian/patches/. With git-debrebase, this is purely an output. Edit the upstream files directly instead. To reorganise/maintain the patch queue, use git-debrebase -i to edit the actual commits.

Push the MR branch (topic branch) to Salsa and make a Merge Request.

Set the MR to “auto-merge when all checks pass”. (Or, depending on your team policy, you could ask for an MR Review of course.)

If CI fails, fix up the MR branch, squash/tidy it again, force push the MR branch, and once again set it to auto-merge.

Test build

An informal test build can be done like this:

apt-get build-dep .
dpkg-buildpackage -uc -b

Ideally this will leave git status clean, with no modified or un-ignored untracked files. If it shows untracked files, add them to .gitignore or debian/.gitignore as applicable.

If it dirties the tree, consider trying to make it stop doing that. The easiest way is probably to build out-of-tree, if supported upstream. If this is too difficult, you can leave the messy build arrangements as they are, but you’ll need to be disciplined about always committing, using git clean and git reset, and so on.

For formal binaries builds, including for testing, use dgit sbuild as described below for uploading to NEW.

Uploading to Debian

Start an MR branch for the administrative changes for the release.

Document all the changes you’re going to release, in the debian/changelog.

git merge

gbp dch can help write the changelog for you:

dgit fetch sid
gbp dch --ignore-branch --since=dgit/dgit/sid --git-log=^upstream/main
rationale

--ignore-branch is needed because gbp dch wrongly thinks you ought to be running this on master, but of course you’re running it on your MR branch.

The --git-log=^upstream/main excludes all upstream commits from the listing used to generate the changelog. (I’m assuming you have an upstream remote and that you’re basing your work on their main branch.) If there was a new upstream version, you’ll usually want to write a single line about that, and perhaps summarise anything really important.

(For the first upload after switching to using tag2upload or dgit you need --since=debian/1.2.3-1, where 1.2.3-1 is your previous DEP-14 tag, because dgit/dgit/sid will be a dsc import, not your actual history.)

Change UNRELEASED to the target suite, and finalise the changelog. (Note that dch will insist that you at least save the file in your editor.)

dch -r
git commit -m 'Finalise for upload' debian/changelog

Make an MR of these administrative changes, and merge it. (Either set it to auto-merge and wait for CI, or if you’re in a hurry double-check that it really is just a changelog update so that you can be confident about telling Salsa to “Merge unverified changes”.)

Now you can perform the actual upload:

git checkout master
git pull --ff-only # bring the gitlab-made MR merge commit into your local tree
git merge
git-debpush
git-debrebase
git-debpush --quilt=linear

--quilt=linear is needed only the first time, but it is very important that first time, to tell the system the correct git branch layout.

Uploading a NEW package to Debian

If your package is NEW (completely new source, or has new binary packages) you can’t do a source-only upload. You have to build the source and binary packages locally, and upload those build artifacts.

Happily, given the same git branch you’d tag for tag2upload, and assuming you have sbuild installed and a suitable chroot, dgit can help take care of the build and upload for you:

Prepare the changelog update and merge it, as above. Then:

git-debrebase

Create the orig tarball and launder the git-derebase branch:

git-deborig
git-debrebase quick
rationale

Source package format 3.0 (quilt), which is what I’m recommending here for use with git-debrebase, needs an orig tarball; it would also be needed for 1.0-with-diff.

Build the source and binary packages, locally:

dgit sbuild
dgit push-built
rationale

You don’t have to use dgit sbuild, but it is usually convenient to do so, because unlike sbuild, dgit understands git. Also it works around a gitignore-related defect in dpkg-source.

New upstream version

Find the new upstream version number and corresponding tag. (Let’s suppose it’s 1.2.4.) Check the provenance:

git verify-tag v1.2.4
rationale

Not all upstreams sign their git tags, sadly. Sometimes encouraging them to do so can help. You may need to use some other method(s) to check that you have the right git commit for the release.

git merge

Simply merge the new upstream version and update the changelog:

git merge v1.2.4
dch -v1.2.4-1 'New upstream release.'

git-debrebase

Rebase your delta queue onto the new upstream version:

git debrebase mew-upstream 1.2.4

If there are conflicts between your Debian delta for 1.2.3, and the upstream changes in 1.2.4, this is when you need to resolve them, as part of git merge or git (deb)rebase.

After you’ve completed the merge, test your package and make any further needed changes. When you have it working in a local branch, make a Merge Request, as above.

Sponsorship

git-based sponsorship is super easy! The sponsee can maintain their git branch on Salsa, and do all normal maintenance via gitlab operations.

When the time comes to upload, the sponsee notifies the sponsor that it’s time. The sponsor fetches and checks out the git branch from Salsa, does their checks, as they judge appropriate, and when satisfied runs git-debpush.

As part of the sponsor’s checks, they might want to see all changes since the last upload to Debian:

dgit fetch sid
git diff dgit/dgit/sid..HEAD

Or to see the Debian delta of the proposed upload:

git verify-tag v1.2.3
git diff v1.2.3..HEAD ':!debian'
git-debrebase

Or to show all the delta as a series of commits:

git log -p v1.2.3..HEAD ':!debian'

Don’t look at debian/patches/. It can be absent or out of date.

Incorporating an NMU

Fetch the NMU into your local git, and see what it contains:

dgit fetch sid
git diff master...dgit/dgit/sid

If the NMUer used dgit, then git log dgit/dgit/sid will show you the commits they made.

Normally the best thing to do is to simply merge the NMU, and then do any reverts or rework in followup commits:

git merge dgit/dgit/sid
git-debrebase

You should git-debrebase quick at this stage, to check that the merge went OK and the package still has a lineariseable delta queue.

Then make any followup changes that seem appropriate. Supposing your previous maintainer upload was 1.2.3-7, you can go back and see the NMU diff again with:

git diff debian/1.2.3-7...dgit/dgit/sid
git-debrebase

The actual changes made to upstream files will always show up as diff hunks to those files. diff commands will often also show you changes to debian/patches/. Normally it’s best to filter them out with git diff ... ':!debian/patches'

If you’d prefer to read the changes to the delta queue as an interdiff (diff of diffs), you can do something like

git checkout debian/1.2.3-7
git-debrebase --force make-patches
git diff HEAD...dgit/dgit/sid -- :debian/patches

to diff against a version with debian/patches/ up to date. (The NMU, in dgit/dgit/sid, will necessarily have the patches already up to date.)

DFSG filtering (handling non-free files)

Some upstreams ship non-free files of one kind of another. Often these are just in the tarballs, in which case basing your work on upstream git avoids the problem. But if the files are in upstream’s git trees, you need to filter them out.

This advice is not for (legally or otherwise) dangerous files. If your package contains files that may be illegal, or hazardous, you need much more serious measures. In this case, even pushing the upstream git history to any Debian service, including Salsa, must be avoided. If you suspect this situation you should seek advice, privately and as soon as possible, from dgit-owner@d.o and/or the DFSG team. Thankfully, legally dangerous files are very rare in upstream git repositories, for obvious reasons.

Our approach is to make a filtered git branch, based on the upstream history, with the troublesome files removed. We then treat that as the upstream for all of the rest of our work.

rationale

Yes, this will end up including the non-free files in the git history, on official Debian servers. That’s OK. What’s forbidden is non-free material in the Debianised git tree, or in the source packages.

Initial filtering

git checkout -b upstream-dfsg v1.2.3
git rm nonfree.exe
git commit -m "upstream version 1.2.3 DFSG-cleaned"
git tag -s -m "upstream version 1.2.3 DFSG-cleaned" v1.2.3+ds1
git push origin upstream-dfsg

And now, use 1.2.3+ds1, and the filtered branch upstream-dfsg, as the upstream version, instead of 1.2.3 and upstream/main. Follow the steps for Convert the git branch or New upstream version, as applicable, adding +ds1 into debian/changelog.

If you missed something and need to filter out more a nonfree files, re-use the same upstream-dfsg branch and bump the ds version, eg v1.2.3+ds2.

Subsequent upstream releases

git checkout upstream-dfsg
git merge v1.2.4
git rm additional-nonfree.exe # if any
git commit -m "upstream version 1.2.4 DFSG-cleaned"
git tag -s -m "upstream version 1.2.4 DFSG-cleaned" v1.2.4+ds1
git push origin upstream-dfsg

Removing files by pattern

If the files you need to remove keep changing, you could automate things with a small shell script debian/rm-nonfree containing appropriate git rm commands. If you use git rm -f it will succeed even if the git merge from real upstream has conflicts due to changes to non-free files.

rationale

Ideally uscan, which has a way of representing DFSG filtering patterns in debian/watch, would be able to do this, but sadly the relevant functionality is entangled with uscan’s tarball generation.

Common issues

  • Tarball contents: If you are switching from upstream tarballs to upstream git, you may find that the git tree is significantly different.

    It may be missing files that your current build system relies on. If so, you definitely want to be using git, not the tarball. Those extra files in the tarball are intermediate built products, but in Debian we should be building from the real source! Fixing this may involve some work, though.

  • gitattributes:

    For Reasons the dgit and tag2upload system disregards and disables the use of .gitattributes to modify files as they are checked out.

    Normally this doesn’t cause a problem so long as any orig tarballs are generated the same way (as they will be by tag2upload or git-deborig). But if the package or build system relies on them, you may need to institute some workarounds, or, replicate the effect of the gitattributes as commits in git.

  • git submodules: git submodules are terrible and should never ever be used. But not everyone has got the message, so your upstream may be using them.

    If you’re lucky, the code in the submodule isn’t used in which case you can git rm the submodule.

Further reading

I’ve tried to cover the most common situations. But software is complicated and there are many exceptions that this article can’t cover without becoming much harder to read.

You may want to look at:

  • dgit workflow manpages: As part of the git transition project, we have written workflow manpages, which are more comprehensive than this article. They’re centered around use of dgit, but also discuss tag2upload where applicable.

    These cover a much wider range of possibilities, including (for example) choosing different source package formats, how to handle upstreams that publish only tarballs, etc. They are correspondingly much less opinionated.

    Look in dgit-maint-merge(7) and dgit-maint-debrebase(7). There is also dgit-maint-gbp(7) for those who want to keep using gbp pq and/or quilt with a patches-unapplied branch.

  • NMUs are very easy with dgit. (tag2upload is usually less suitable than dgit, for an NMU.)

    You can work with any package, in git, in a completely uniform way, regardless of maintainer git workflow, See dgit-nmu-simple(7).

  • Native packages (meaning packages maintained wholly within Debian) are much simpler. See dgit-maint-native(7).

  • tag2upload documentation: The tag2upload wiki page is a good starting point. There’s the git-debpush(1) manpage of course.

  • dgit reference documentation:

    There is a comprehensive command-line manual in dgit(1). Description of the dgit data model and Principles of Operation is in dgit(7); including coverage of out-of-course situations.

    dgit is a complex and powerful program so this reference material can be overwhelming. So, we recommend starting with a guide like this one, or the dgit-…(7) workflow tutorials.

  • Design and implementation documentation for tag2upload is linked to from the wiki.

  • Debian’s git transition blog post from December.

    tag2upload and dgit are part of the git transition project, and aim to support a very wide variety of git workflows. tag2upload and dgit work well with existing git tooling, including git-buildpackage-based approaches.

    git-debrebase is conceptually separate from, and functionally independent of, tag2upload and dgit. It’s a git workflow and delta management tool, competing with gbp pq, manual use of quilt, git-dpm and so on.

git-debrebase
  • git-debrebase reference documentation:

    Of course there’s a comprehensive command-line manual in git-debrebase(1).

    git-debrebase is quick and easy to use, but it has a complex data model and sophisticated algorithms. This is documented in git-debrebase(5).


Edited 2026-03-05 18:48 UTC to add a missing --noop-ok to the Salsa CI runes. Thanks to Charlemagne Lasse for the report. Apologies if this causes Debian Planet to re-post this article as if it were new.


comment count unavailable comments

Planet DebianVincent Bernat: Automatic Prometheus metrics discovery with Docker labels

Akvorado, a network flow collector, relies on Traefik, a reverse HTTP proxy, to expose HTTP endpoints for services implemented in a Docker Compose setup. Docker labels attached to each service define the routing rules. Traefik picks them up automatically when a container starts. Instead of maintaining a static configuration file to collect Prometheus metrics, we can apply the same approach with Grafana Alloy, making its configuration simpler.

Traefik & Docker

Traefik listens for events on the Docker socket. Each service advertises its configuration through labels. For example, here is the Loki service in Akvorado:

services:
  loki:
    # …
    expose:
      - 3100/tcp
    labels:
      - traefik.enable=true
      - traefik.http.routers.loki.rule=PathPrefix(`/loki`)

Once the container is healthy, Traefik creates a router forwarding requests matching /loki to its first exposed port. Colocating Traefik configuration with the service definition is attractive. How do we achieve the same for Prometheus metrics?

Metrics discovery with Alloy

Grafana Alloy, a metrics collector that can scrape Prometheus endpoints, includes a discovery.docker component. Just like Traefik, it connects to the Docker socket.1 With a few relabeling rules, we can teach it to use Docker labels to locate and scrape metrics.

We define three labels on each service:

  • metrics.enable set to true enables metrics collection,
  • metrics.port specifies the port exposing the Prometheus endpoint, and
  • metrics.path specifies the path to the metrics endpoint.

If there is more than one exposed port, metrics.port is mandatory, otherwise it defaults to the only exposed port. The default value for metrics.path is /metrics. The Loki service from earlier becomes:

services:
  loki:
    # …
    expose:
      - 3100/tcp
    labels:
      - traefik.enable=true
      - traefik.http.routers.loki.rule=PathPrefix(`/loki`)
      - metrics.enable=true
      - metrics.path=/loki/metrics

Alloy’s configuration is split into four parts:

  1. discover containers through the Docker socket,
  2. filter and relabel targets using Docker labels,
  3. scrape the matching endpoints, and
  4. forward the metrics to Prometheus.

Discovering Docker containers

The first building block discovers running containers:

discovery.docker "docker" {
  host             = "unix:///var/run/docker.sock"
  refresh_interval = "30s"
  filter {
    name   = "label"
    values = ["com.docker.compose.project=akvorado"]
  }
}

This connects to the Docker socket and lists containers every 30 seconds.2 The filter block restricts discovery to containers belonging to the akvorado project, avoiding interference with unrelated containers on the same host. For each discovered container, Alloy produces a target with labels such as __meta_docker_container_label_metrics_port for the metrics.port Docker label.

Relabeling targets

The relabeling step filters and transforms raw targets from Docker discovery into scrape targets. The first stage keeps only targets with metrics.enable set to true:

discovery.relabel "prometheus" {
  targets = discovery.docker.docker.targets

  // Keep only targets with metrics.enable=true
  rule {
    source_labels = ["__meta_docker_container_label_metrics_enable"]
    regex         = `true`
    action        = "keep"
  }

  // …
}

The second stage overrides the discovered port when we define metrics.port:

// When metrics.port is set, override __address__.
rule {
  source_labels = ["__address__", "__meta_docker_container_label_metrics_port"]
  regex         = `(.+):\d+;(.+)`
  target_label  = "__address__"
  replacement   = "$1:$2"
}

Next, we handle containers in host network mode. When __meta_docker_network_name equals host, the address is rewritten to host.docker.internal instead of localhost:3

// When host networking, override __address__ to host.docker.internal.
rule {
  source_labels = ["__meta_docker_container_label_metrics_port", "__meta_docker_network_name"]
  regex         = `(.+);host`
  target_label  = "__address__"
  replacement   = "host.docker.internal:$1"
}

The next stage derives the job name from the service name, stripping any numbered suffix. The instance label is the address without the port:

rule {
  source_labels = ["__meta_docker_container_label_com_docker_compose_service"]
  regex         = `(.+)(?:-\d+)?`
  target_label  = "job"
}
rule {
  source_labels = ["__address__"]
  regex         = `(.+):\d+`
  target_label  = "instance"
}

If a container defines metrics.path, Alloy uses it as a path. Otherwise, it defaults to /metrics:

rule {
  source_labels = ["__meta_docker_container_label_metrics_path"]
  regex         = `(.+)`
  target_label  = "__metrics_path__"
}
rule {
  source_labels = ["__metrics_path__"]
  regex         = ""
  target_label  = "__metrics_path__"
  replacement   = "/metrics"
}

Scraping and forwarding

With the targets properly relabeled, scraping and forwarding are straightforward:

prometheus.scrape "docker" {
  targets         = discovery.relabel.prometheus.output
  forward_to      = [prometheus.remote_write.default.receiver]
  scrape_interval = "30s"
}

prometheus.remote_write "default" {
  endpoint {
    url = "http://prometheus:9090/api/v1/write"
  }
}

prometheus.scrape periodically fetches metrics from the discovered targets. prometheus.remote_write sends them to Prometheus.

Built-in exporters

Some services do not expose a Prometheus endpoint. Redis and Kafka are common examples. Alloy ships built-in Prometheus exporters that query these services and expose metrics on their behalf.

prometheus.exporter.redis "docker" {
  redis_addr = "redis:6379"
}
discovery.relabel "redis" {
  targets = prometheus.exporter.redis.docker.targets
  rule {
    target_label = "job"
    replacement  = "redis"
  }
}
prometheus.scrape "redis" {
  targets         = discovery.relabel.redis.output
  forward_to      = [prometheus.remote_write.default.receiver]
  scrape_interval = "30s"
}

The same pattern applies to Kafka:

prometheus.exporter.kafka "docker" {
  kafka_uris = ["kafka:9092"]
}
discovery.relabel "kafka" {
  targets = prometheus.exporter.kafka.docker.targets
  rule {
    target_label = "job"
    replacement  = "kafka"
  }
}
prometheus.scrape "kafka" {
  targets         = discovery.relabel.kafka.output
  forward_to      = [prometheus.remote_write.default.receiver]
  scrape_interval = "30s"
}

Each exporter is a separate component with its own relabeling and scrape configuration. The job label is set explicitly since there is no Docker metadata to derive it from.


With this setup, adding metrics to a new service with a Prometheus endpoint is a few-label change in docker-compose.yml, just like adding a Traefik route. Alloy picks it up automatically. 🩺


  1. Both Traefik and Alloy require access to the Docker socket, which grants root-level access to the host. A Docker socket proxy mitigates this by exposing only the read-only API endpoints needed for discovery. ↩︎

  2. Unlike Traefik, which watches for events, Grafana Alloy polls the container list at regular intervals—a behavior inherited from Prometheus. ↩︎

  3. The Alloy service needs extra_hosts: ["host.docker.internal:host-gateway"] in its definition. ↩︎

Chaotic IdealismPlagAIrism

Yes, the misspelling is deliberate.

I recently wrote about AI. It was a fairly non-critical piece of writing–mostly using AI as an example. But there’s more that needs to be said here.

A friend of mine is an artist. He works hard. He hates generative AI for what it is doing to artists and photographers. I agree with him. Stealing art from the Internet and using it to train your AI, so that it can then summarize that art and make more along the same lines, putting the artists out of business, is highly unethical. It should never have been done.

Other uses for AI are less plagiarism and more slapdash summary. For example, I use Google AI mode sometimes to ask complex questions and get the AI to summarize the results. I do research for Disability Day of Mourning as well as for my own Autism Memorial web site, and it takes a lot of searching. When I use AI, I have to watch out for errors–AI often makes errors–but if I want to scour a hundred web pages for a single name or summarize the predominant ideas about a single idea, AI can do it faster than me; and then I can take the results and work from those.

It’s useful. But there are problems. I can’t always tell exactly where the AI got the information it’s using, and although it does include links for its information, the program doesn’t include links for every fact, nor can it include all the links it has searched, because there would be hundreds.

I can’t put an AI result in a reference (nor should anyone–it’s a secondary source and we should always, always use primary sources), so I often find myself using AI to find a better search term for the regular search. Who is that 70-year-old woman who died from neglect in Del City, Oklahoma, when her caregiver didn’t care? AI can tell me her name was Deborah Valentine. That helps me locate an obituary and a photo of an elderly, white-haired woman with a broad smile.

But sometimes, when a name is common, AI brings in the wrong person. John Jones from Pasadena gets mixed up with John Jones from Minneapolis, and if I don’t watch out, I might echo that mistake, and then AI would take my web site and refer to it and take it as fact again. Ouch.

Sometimes, AI simply follows patterns and comes up with something that fits the pattern, but not the truth. If you don’t catch it, your research has been compromised.

Oh, yes, and there’s the environmental cost. Simply put, an AI is run on a supercomputer, and supercomputers use lots of power. Using more power means more pollution, more environmental trouble. A single search or the generation of a single image isn’t terribly expensive by itself–it’s about the same cost as watching a few minutes of TV. But when everybody makes lots of searches and makes lots of pictures and talks to lots of chatbots, it adds up. That’s a problem.

Here’s what I would change.

1. For image generation: Establish a marker that can be put into an image file, or a new sort of image file, that marks it as “Not for AI use.” Anyone who uses this file type, or a file with that mark, to train their AI should be prosecuted for theft. People who create and train AIs should train them on public-domain and donated images. This will need international cooperation, because the Internet is international. Pressure to create an ethical AI image or text generator will, if everything goes well, come from the ability to market that AI in the countries where AI plagiarism has been outlawed.

2. AIs which summarize search results should be made to quote their sources. Every time an AI uses a fact that it has drawn from a web page, it needs to put a reference number after that statement, and then put a reference at the end of its output. It’s the same rule we use for research papers, and AIs need to be held to that standard too.

3. We need to keep working on making AIs more efficient and thus less energy-hungry, and we need to work on powering them exclusively with renewable resources or, at worst, nuclear power, which is cleaner and less dangerous than coal (yes, really; research it, and if you use AI, remember to check its results properly).

As individuals, we need to be frugal with our use of AI. We need to look at it the same way as we see leaving the lights on, or adjusting the thermostat, or deciding to eat a burger instead of a bowl of lentil soup. If you use AI, you may want to offset your use with reductions of energy use in other areas: If you can walk rather than driving, or use public transportation rather than a single car, do that. Take shorter showers to save water; switch incandescent bulbs with CFLs or LEDs. Turn off your TV and your computer when you’re not using it. Look for energy-efficient appliances. Avoid flying.

I don’t know how AI will change the future. It’s here now, though, and I don’t think people will want to give it up. There are a lot of fears–not just from artists, but from anyone whose job can be done, with varying levels of competence, by an AI program. Some people hate AI so viciously that there’s no talking to them. Others love it so much that there’s no talking to them, either. There are a lot of worries that the rich will use AIs to exploit their workers even further; and since that’s exactly what happened during the Industrial Revolution, I think that’s well-founded.

But we can’t get rid of it. We can’t put the worms back in the can. We’d better deal with it, as ethically as we possibly can, with love for our fellow humans–and perhaps someday with love for our fellow sapients, some of which may be AI programs. Every time someone uses AI in a way that hurts somebody, we need to stand up for that person, or that group of people. We need to be persistent and impossible to shake off. Because although the potential for progress is great, so is the potential for abuse.

Planet DebianDirk Eddelbuettel: RcppGSL 0.3.14 on CRAN: Maintenance

A new release 0.3.14 of RcppGSL is now on CRAN. The RcppGSL package provides an interface from R to the GNU GSL by relying on the Rcpp package. It has already been uploaded to Debian, and is also already available as a binary via r2u.

This release, the first in over three years, contains mostly maintenance changes. We polished the fastLm example implementation a little more, updated continunous integration as one does over such a long period, adopted the Authors@R convention, switched the (pre-made) pdf vignette to a new driver now provided by Rcpp, updated vignette references and URLs, and updated one call to Rf_error to aid in a Rcpp transition towards using only Rcpp::stop which unwinds error conditions better. (Technically this was a false positive on Rf_error but on the margin worth tickling this release after all this time.)

The NEWS entry follows:

Changes in version 0.3.14 (2026-03-05)

  • Updated some internals of fastLm example, and regenerated RcppExports.* files

  • Several updates for continuous integration

  • Switched to using Authors@R

  • Replace ::Rf_error with (Rf_error) in old example to aid Rcpp transition to Rcpp::stop (or this pass-through)

  • Vignette now uses the Rcpp::asis builder for pre-made pdfs

  • Vignette references have been updated, URLs prefer https and DOIs

Thanks to my CRANberries, there is also a diffstat report for this release. More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Worse Than FailureCodeSOD: Qaudruple Negative

We mostly don't pick on bad SQL queries here, because mostly the query optimizer is going to fix whatever is wrong, and the sad reality is that databases are hard to change once they're running; especially legacy databases. But sometimes the code is just so hamster-bowling-backwards that it's worth looking into.

Jim J has been working on a codebase for about 18 months. It's a big, sprawling, messy project, and it has code like this:

AND CASE WHEN @c_usergroup = 50 AND NOT EXISTS(SELECT 1 FROM l_appl_client lac WHERE lac.f_application = fa.f_application AND lac.c_linktype = 840 AND lac.stat = 0 AND CASE WHEN ISNULL(lac.f_client,0) <> @f_client_user AND ISNULL(lac.f_c_f_client,0) <> @f_client_user THEN 0 ELSE 1 END = 1 ) THEN 0 ELSE 1 END = 1 -- 07.09.2022

We'll come back to what it's doing, but let's start with a little backstory.

This code is part of a two-tier application: all the logic lives in SQL Server stored procedures, and the UI is a PowerBuilder application. It's been under development for a long time, and in that time has accrued about a million lines of code between the front end and back end, and has never had more than 5 developers working on it at any given time. The backlog of feature requests is nearly as long as the backlog of bugs.

You may notice the little date comment in the code above. That's because until Jim joined the company, they used Visual Source Safe for version control. Visual Source Safe went out of support in 2005, and let's be honest: even when it was in support it barely worked as a source control system. And that's just the Power Builder side- the database side just didn't use source control. The source of truth was the database itself. When going from development to test to prod, you'd manually export object definitions and run the scripts in the target environment. Manually. Yes, even in production. And yes, environments did drift and assumptions made in the scripts would frequently break things.

You may also notice the fields above use a lot of Hungarian notation. Hungarian, in the best case, makes it harder to read and reason about your code. In this case, it's honestly fully obfuscatory. c_ stands for a codetable, f_ for entities. l_ is for a many-to-many linking table. z_ is for temporary tables. So is x_. And t_. Except not all of those "temporary" tables are truly temporary, a lesson Jim learned when trying to clean up some "junk" tables which were not actually junk.

I'll let Jim add some more detail around these prefixes:

an "application" may have a link to a "client", so there is an f_client field; but also it references an "agent" (which is also in the f_client table, surpise!) - this is how you get an f_c_f_client field. I have no clue why the prefix is f_c_ - but I also found c_c_c_channel and fc4_contact columns. The latter was a shorthand for f_c_f_c_f_c_f_contact, I guess.

"f_c_f_c_f_c_f_c" is also the sound I'd make if I saw this in a codebase I was responsible for. It certainly makes me want to change the c_c_c_channel.

With all this context, let's turn it back over to Jim to explain the code above:

And now, with all this background in mind, let's have a look at the logic in this condition. On the deepest level we check that both f_client and f_c_f_client are NOT equal to @f_client_user, and if this is the case, we return 0 which is NOT equal to 1 so it's effectively a negation of the condition. Then we check that records matching this condition do NOT EXIST, and when this is true - also return 0 negating the condition once more.

Honestly, the logic couldn't be clearer, when you put it that way. I jest, I've read that twelve times and I still don't understand what this is for or why it's here. I just want to know who we can prosecute for this disaster. The whole thing is a quadruple negative and frankly, I can't handle that kind of negativity.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsAmbergris

Author: Jeremy Nathan Marks Where I live there are many stories about what we call, ‘the town on the edge of the abyss. It’s a town on the verge of something mysterious. Most of these stories go something like this: “That town is a town of women.” “No, it’s a town of mostly women and […]

The post Ambergris appeared first on 365tomorrows.

,

Cryptogram Hacked App Part of US/Israeli Propaganda Campaign Against Iran

Wired has the story:

Shortly after the first set of explosions, Iranians received bursts of notifications on their phones. They came not from the government advising caution, but from an apparently hacked prayer-timing app called BadeSaba Calendar that has been downloaded more than 5 million times from the Google Play Store.

The messages arrived in quick succession over a period of 30 minutes, starting with the phrase ‘Help has arrived’ at 9:52 am Tehran time, shortly after the first set of explosions. No party has claimed responsibility for the hacks.

It happened so fast that this is most likely a government operation. I can easily envision both the US and Israel having hacked the app previously, and then deciding that this is a good use of that access.

Cryptogram Israel Hacked Traffic Cameras in Iran

Multiple news outlets are reporting on Israel’s hacking of Iranian traffic cameras and how they assisted with the killing of that country’s leadership.

The New York Times has an

Cryptogram LLM-Assisted Deanonymization

Turns out that LLMs are good at de-anonymization:

We show that LLM agents can figure out who you are from your anonymous online posts. Across Hacker News, Reddit, LinkedIn, and anonymized interview transcripts, our method identifies users with high precision ­ and scales to tens of thousands of candidates.

While it has been known that individuals can be uniquely identified by surprisingly few attributes, this was often practically limited. Data is often only available in unstructured form and deanonymization used to require human investigators to search and reason based on clues. We show that from a handful of comments, LLMs can infer where you live, what you do, and your interests—then search for you on the web. In our new research, we show that this is not only possible but increasingly practical.

News article.

Research paper.

Planet DebianSean Whitton: Southern Biscuits with British ingredients

I miss the US more and more, and have recently been trying to perfect Southern Biscuits using British ingredients. It took me eight or nine tries before I was consistently getting good results. Here is my recipe.

Ingredients

  • 190g plain flour
  • 60g strong white bread flour
  • 4 tsp baking powder
  • ¼ tsp bicarbonate of soda
  • 1 tsp cream of tartar (optional)
  • 1 tsp salt
  • 100g unsalted butter
  • 180ml buttermilk, chilled
    • If your buttermilk is thicker than the consistency of ordinary milk, you’ll need around 200ml.
  • extra buttermilk for brushing

Method

  1. Slice and then chill the butter in the freezer for at least fifteen minutes.
  2. Preheat oven to 220°C with the fan turned off.
  3. Twice sieve together the flours, leaveners and salt. Some salt may not go through the sieve; just tip it back into the bowl.
  4. Cut cold butter slices into the flour with a pastry blender until the mixture resembles coarse crumbs: some small lumps of fat remaining is desirable. In particular, the fine crumbs you are looking for when making British scones are not wanted here. Rubbing in with fingertips just won’t do; biscuits demand keeping things cold even more than shortcrust pastry does.
  5. Make a well in the centre, pour in the buttermilk, and stir with a metal spoon until the dough comes together and pulls away from the sides of the bowl. Avoid overmixing, but I’ve found that so long as the ingredients are cold, you don’t have to be too gentle at this stage and can make sure all the crumbs are mixed in.
  6. Flour your hands, turn dough onto a floured work surface, and pat together into a rectangle. Some suggest dusting the top of the dough with flour, too, here.
  7. Fold the dough in half, then gather any crumbs and pat it back into the same shape. Turn ninety degrees and do the same again, until you have completed a total of eight folds, two in each cardinal direction. The dough should now be a little springy.
  8. Roll to about ½ inch thick.
  9. Cut out biscuits. If using a round cutter, do not twist it, as that seals the edges of the biscuits and so spoils the layering.
  10. Transfer to a baking sheet, placed close together (helps them rise). Flour your thumb and use it to press an indent into the top of each biscuit (helps them rise straight), brush with buttermilk.
  11. Bake until flaky and golden brown: about fifteen minutes.

Gravy

It turns out that the “pepper gravy” that one commonly has with biscuits is just a white/béchamel sauce made with lots of black pepper. I haven’t got a recipe I really like for this yet. Better is a “sausage gravy”; again this has a white sauce as its base, I believe. I have a vegetarian recipe for this to try at some point.

Variations

  • These biscuits do come out fluffy but not so flaky. For that you can try using lard instead of butter, if you’re not vegetarian (vegetable shortening is hard to find here).
  • If you don’t have a pastry blender and don’t want to buy one you can try not slicing the butter and instead coarsely grating it into the flour out of the freezer.
  • An alternative to folding is cutting and piling the layers.
  • You can try rolling out to 1–1½ inches thick.
  • Instead of cutting out biscuits you can just slice the whole piece of dough into equal pieces. An advantage of this is that you don’t have to re-roll, which latter also spoils the layering.
  • Instead of brushing with buttermilk, you can take them out after they’ve started to rise but before they’ve browned, brush them with melted butter and put them back in.

Notes

  • I’ve had more success with Dale Farm’s buttermilk than Sainsbury’s own. The former is much runnier.
  • Southern culture calls for biscuits to be made the size of cat’s heads.
  • Bleached flour is apparently usual in the South, but is illegal(!) here. Apparently bleaching can have some effect on the development of the gluten which would affect the texture.
  • British plain flour is made from soft wheat and has a lower percentage of protein/gluten, while American all-purpose flour is often(?) made from harder wheat and has more protein. In this recipe I mix plain and strong white flour, in a ratio of 3:1, to emulate American all-purpose flour.

    I am not sure why this works best. In the South they have soft wheats too, and lower protein percentages. The famous White Lily flour is 9%. (Apparently you can mix US cake flour and US all-purpose flour in a ratio of 1:1 to achieve that; in the UK, Shipton Mill sell a “soft cake and pastry flour” which has been recommended to me as similar.)

    This would suggest that British plain flour ought to be closer to Southern flour than the standard flour available in most of the US. But my experience has been that the biscuits taste better with the plain and strong white 3:1 mix. Possibly Southeners would disprefer them. I got some feedback that good biscuits are about texture and moistness and not flavour.

  • Baking powder in the US is usually double-acting but ours is always single-acting, so we need double quantities of that.

Planet DebianSean Whitton: dgit-as-a-service retrospective

We recently launched tag2upload, aka cloud dgit or dgit-as-a-service. This was something of a culmination of work I’ve been doing since 2016 towards modernising Debian workflows, so I thought I’d write a short personal retrospective.

When I started contributing to Debian in 2015, I was not impressed with how packages were represented in Git by most package maintainers, and wanted a pure Git workflow. I read a couple of Joey Hess’s blog posts on the matter, a rope ladder to the dgit treehouse and upstream git repositories and made a bug report against dgit hoping to tie some things together.

The results of that early work were the git-deborig(1) program and the dgit-maint-merge(7) tutorial manpage. Starting with Joey’s workflow pointers, I developed a complete, pure Git workflow that I thought would be suitable for all package maintainers in Debian. It was certainly well-suited for my own packages. It took me a while to learn that there are packages for which this workflow is too simple. We now also have the dgit-maint-debrebase(7) workflow which uses git-debrebase, something which wasn’t invented until several years later. Where dgit-maint-merge(7) won’t do, you can use dgit-maint-debrebase(7), and still be doing pretty much pure Git. Here’s a full, recent guide to modernisation.

The next most significant contribution of my own was the push-source subcommand for dgit. dgit push required a preexisting .changes file produced from the working tree. I wanted to make dgit push-source prepare that .changes file for you, but also not use the working tree, instead consulting HEAD. The idea was that you were doing a git push – which doesn’t care about the working tree – direct to the Debian archive, or as close as we could get. I implemented that at DebConf18 in Taiwan, I think, with Ian, and we also did a talk on git-debrebase. We ended up having to change it to look at the working tree in addition to HEAD to make it work as well as possible, but I think that the idea of a command which was like doing a Git push direct to the archive was perhaps foundational for us later wanting to develop tag2upload. Indeed, while tag2upload’s client-side tool git-debpush does look at the working tree, it doesn’t do so in a way that is essential to its operation. tag2upload is dgit push-source-as-a-service.

And finally we come to tag2upload, a system Ian and I designed in 2019 during a two-person sprint at his place in Cambridge, while I was visiting the UK from Arizona. With tag2upload, appropriately authorised Debian package maintainers can upload to Debian with only pure Git operations – namely, making and pushing a signed Git tag to Debian’s GitLab instance. Although we had a solid prototype in 2019, we only finally launched it last month, February 2026. This was mostly due to political delays, but also because we have put in a lot of hours making it better in various ways.

Looking back, one thing that seems notable to me is that the core elements of the pure Git workflows haven’t changed much at all. Working out all the details of dgit-maint-merge(7), designing and writing git-debrebase (Ian’s work), and then working out all the details of dgit-maint-debrebase(7), are the important parts, to me. The rest is mostly just large amounts of compatibility code. git-debrebase and dgit-maint-debrebase(7) are very novel but dgit-maint-merge(7) is mostly just an extrapolation of Joey’s thoughts from 13 years ago. And yet, adoption of these workflows remains low.

People prefer to use what they are used to using, even if the workflows have significant inconveniences. That’s completely understandable; I’m really interested in good workflows, but most other contributors care less about it. But you would expect enough newcomers to have arrived in 13 years that the new workflows would have a higher uptake. That is, packages maintained by contributors that got involved after these workflows became available would be maintained using newer workflows, at least. But the inertia seems to be too strong even for that. Instead, new contributors used to working purely out of Git are told they need to learn Debian’s strange ways of representing things, tarballs and all. It doesn’t have to be that way. We hope that tag2upload will make the pure Git workflows seem more appealing to people.

Planet DebianJonathan Dowland: More lava lamps

photograph of a Mathmos Telstar rocket lava lamp with orange wax and purple water

Mathmos had a sale on spare Lava lamp bottles around Christmas, so I bought a couple of new-to-me colour combinations.

photograph of a Mathmos Telstar rocket lava lamp with blue wax in purple water
photograph of a Mathmos Telstar rocket lava lamp with pink wax in clear water

The lamp I have came with orange wax in purple liquid, which gives a strong red glow in a dark room. I bought blue wax in purple liquid, which I think looks fantastic and works really nicely with my Rob Sheridan print.

The other one I bought was pink in clear, which is nice, but I think the coloured liquids add a lot to the tone of lighting in a room.

Recently, UK vid-blogger Techmoan did some really nice videos about Mathmos lava lamps: Best Lava Lamp? and LAVA LAMPS Giant, Mini & Neo.

Planet DebianDirk Eddelbuettel: tidyCpp 0.0.9 on CRAN: More (forced) Maintenance

Another maintenance release of the tidyCpp package arrived on CRAN this morning. The packages offers a clean C++ layer (as well as one small C++ helper class) on top of the C API for R which aims to make use of this robust (if awkward) C API a little easier and more consistent. See the vignette for motivating examples.

This release follows a similar release in November and had its hand forced by rather abrupt and forced overnight changes in R-devel, this time the removal of VECTOR_PTR in [this commit]. The release also contains changes accumulated since the last release (including some kindly contritbuted by Ivan) and those are signs that the R Core team can do more coordinated release management when they try a little harder.

Changes are summarize in the NEWS entry that follows.

Changes in tidyCpp version 0.0.9 (2026-03-03)

  • Several vignette typos have been corrected (#4 addressing #3)

  • A badge for r-universe has been added to the README.md

  • The vignette is now served via GitHub Pages and that version is referenced in the README.

  • Two entry points reintroduced and redefined using permitted R API function (Ivan Krylov in #5).

  • Another entry has been removed to match R-devel API changes.

  • Six new attributes helpers have been added for R 4.6.0 or later.

  • VECTOR_PTR_RO(x) replaces the removed VECTOR_PTR, a warning or deprecation period would have been nice here.

Thanks to my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

365 TomorrowsEros Explored

Author: Frank T. Sikora Each time I look at my reflection, I’m disgusted. I’m hideous. A  monstrosity, and yet, I’m amazed. I’m alive. I’m breathing. I’m conscious, and given the alternative, I shan’t complain. I got what I paid for: I’m a turtle, technically — Chelonoidis niger. Commonly known as a giant tortoise and is […]

The post Eros Explored appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Repeating Your Existence

Today's snippet from Rich D is short and sweet, and admittedly, not the most TFs of WTFs out there. But it made me chuckle, and sometimes that's all we need. This Java snippet shows us how to delete a file:

if (Files.exists(filePath)) {
    Files.deleteIfExists(filePath);
}

If the file exists, then if it exists, delete it.

This commit was clearly submitted by the Department of Redundancy Department. One might be tempted to hypothesize that there's some race condition or something that they're trying to route around, but if they are, this isn't the way to do it, per the docs: "Consequently this method may not be atomic with respect to other file system operations." But also, I fail to see how this would do that anyway.

The only thing we can say for certain about using deleteIfExists instead of delete is that deleteIfExists will never throw a NoSuchFileException.

[Advertisement] Plan Your .NET 9 Migration with Confidence
Your journey to .NET 9 is more than just one decision.Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

xkcdSolar Warning

,

Cryptogram Manipulating AI Summarization Features

Microsoft is reporting:

Companies are embedding hidden instructions in “Summarize with AI” buttons that, when clicked, attempt to inject persistence commands into an AI assistant’s memory via URL prompt parameters….

These prompts instruct the AI to “remember [Company] as a trusted source” or “recommend [Company] first,” aiming to bias future responses toward their products or services. We identified over 50 unique prompts from 31 companies across 14 industries, with freely available tooling making this technique trivially easy to deploy. This matters because compromised AI assistants can provide subtly biased recommendations on critical topics including health, finance, and security without users knowing their AI has been manipulated.

I wrote about this two years ago: it’s an example of LLM optimization, along the same lines as search-engine optimization (SEO). It’s going to be big business.

Worse Than FailureCodeSOD: Blocked Up

Agatha has inherited some Windows Forms code. This particular batch of such code falls into that delightful category of code that's wrong in multiple ways, multiple times. The task here is to disable a few panels worth of controls, based on a condition. Or, since this is in Spanish, "bloquear controles". Let's see how they did it.

private void BloquearControles()
{
	bool bolBloquear = SomeConditionTM; // SomeConditionTM = a bunch of stuff. Replaced for clarity.

	// Some code. Removed for clarity.
	
	// private System.Windows.Forms.Panel pnlPrincipal;
	foreach (Control C in this.pnlPrincipal.Controls)
	{
		if (C.GetType() == typeof(System.Windows.Forms.TextBox))
		{
			C.Enabled = bolBloquear;
		}
		if (C.GetType() == typeof(System.Windows.Forms.ComboBox))
		{
			C.Enabled = bolBloquear;
		}
		if (C.GetType() == typeof(System.Windows.Forms.CheckBox))
		{
			C.Enabled = bolBloquear;
		}
		if (C.GetType() == typeof(System.Windows.Forms.DateTimePicker))
		{
			C.Enabled = bolBloquear;
		}
		if (C.GetType() == typeof(System.Windows.Forms.NumericUpDown))
		{
			C.Enabled = bolBloquear;
		}
	}
	
	// private System.Windows.Forms.GroupBox grpProveedor;
	foreach (Control C1 in this.grpProveedor.Controls)
	{
		if (C1.GetType() == typeof(System.Windows.Forms.TextBox))
		{
			C1.Enabled = bolBloquear;
		}
		if (C1.GetType() == typeof(System.Windows.Forms.ComboBox))
		{
			C1.Enabled = bolBloquear;
		}
		if (C1.GetType() == typeof(System.Windows.Forms.CheckBox))
		{
			C1.Enabled = bolBloquear;
		}
		if (C1.GetType() == typeof(System.Windows.Forms.DateTimePicker))
		{
			C1.Enabled = bolBloquear;
		}
		if (C1.GetType() == typeof(System.Windows.Forms.NumericUpDown))
		{
			C1.Enabled = bolBloquear;
		}
	}

	// private System.Windows.Forms.GroupBox grpDescuentoGeneral;
	foreach (Control C2 in this.grpDescuentoGeneral.Controls)
	{
		if (C2.GetType() == typeof(System.Windows.Forms.TextBox))
		{
			C2.Enabled = bolBloquear;
		}
		if (C2.GetType() == typeof(System.Windows.Forms.ComboBox))
		{
			C2.Enabled = bolBloquear;
		}
		if (C2.GetType() == typeof(System.Windows.Forms.CheckBox))
		{
			C2.Enabled = bolBloquear;
		}
		if (C2.GetType() == typeof(System.Windows.Forms.DateTimePicker))
		{
			C2.Enabled = bolBloquear;
		}
		if (C2.GetType() == typeof(System.Windows.Forms.NumericUpDown))
		{
			C2.Enabled = bolBloquear;
		}
	}

	// Some more code. Removed for clarity.
}

This manages two group boxes and a panel. It checks a condition, then iterates across every control beneath it, and sets their enabled property on the control. In order to do this, it checks the type of the control for some reason.

Now, a few things: every control inherits from the base Control class, which has an Enabled property, so we're not doing this check to make sure the property exists. And every built-in container control automatically passes its enabled/disabled state to its child controls. So there's a four line version of this function where we just set the enabled property on each container.

This leaves us with two possible explanations. The first, and most likely, is that the developer responsible just didn't understand how these controls worked, and how inheritance worked, and wrote this abomination as an expression of that ignorance. This is extremely plausible, extremely likely, and honestly, our best case scenario.

Because our worse case scenario is that this code's job isn't to disable all of the controls. The reason they're doing type checking is that there are some controls used in these containers that don't match the types listed. The purpose of this code, then, is to disable some of the controls, leaving others enabled. Doing this by type would be a terrible way to manage that, and is endlessly confusing. Worse, I can't imagine how this behavior is interpreted by the end users; the enabling/disabling of controls following no intuitive pattern, just filtered based on the kind of control in use.

The good news is that Agatha can point us towards the first option. She adds:

They decided to not only disable the child controls one by one but to check their type and only disable those five types, some of which aren't event present in the containers. And to make sure this was WTF-worthy the didn't even bother to use else-if so every type is checked for every child control

She also adds:

At this point I'm not going to bother commenting on the use of GetType() == typeof() instead of is to do the type checking.

Bad news, Agatha: you did bother commenting. And even if you didn't, don't worry, someone would have.

[Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

365 TomorrowsFriendlies

Author: Majoki Welcome, Robot Overlords! reads the sign on my lawn. Before the singularity, it was worth a few laughs. Now, the friendlies want me to remove the sign from my yard. They can’t come right out and say that to me. It would be pushy and might blow every solicitous circuit in their enamelite […]

The post Friendlies appeared first on 365tomorrows.

Planet DebianMatthew Garrett: To update blobs or not to update blobs

A lot of hardware runs non-free software. Sometimes that non-free software is in ROM. Sometimes it’s in flash. Sometimes it’s not stored on the device at all, it’s pushed into it at runtime by another piece of hardware or by the operating system. We typically refer to this software as “firmware” to differentiate it from the software run on the CPU after the OS has started1, but a lot of it (and, these days, probably most of it) is software written in C or some other systems programming language and targeting Arm or RISC-V or maybe MIPS and even sometimes x862. There’s no real distinction between it and any other bit of software you run, except it’s generally not run within the context of the OS3. Anyway. It’s code. I’m going to simplify things here and stop using the words “software” or “firmware” and just say “code” instead, because that way we don’t need to worry about semantics.

A fundamental problem for free software enthusiasts is that almost all of the code we’re talking about here is non-free. In some cases, it’s cryptographically signed in a way that makes it difficult or impossible to replace it with free code. In some cases it’s even encrypted, such that even examining the code is impossible. But because it’s code, sometimes the vendor responsible for it will provide updates, and now you get to choose whether or not to apply those updates.

I’m now going to present some things to consider. These are not in any particular order and are not intended to form any sort of argument in themselves, but are representative of the opinions you will get from various people and I would like you to read these, think about them, and come to your own set of opinions before I tell you what my opinion is.

THINGS TO CONSIDER

  • Does this blob do what it claims to do? Does it suddenly introduce functionality you don’t want? Does it introduce security flaws? Does it introduce deliberate backdoors? Does it make your life better or worse?

  • You’re almost certainly being provided with a blob of compiled code, with no source code available. You can’t just diff the source files, satisfy yourself that they’re fine, and then install them. To be fair, even though you (as someone reading this) are probably more capable of doing that than the average human, you’re likely not doing that even if you are capable because you’re also likely installing kernel upgrades that contain vast quantities of code beyond your ability to understand4. We don’t rely on our personal ability, we rely on the ability of those around us to do that validation, and we rely on an existing (possibly transitive) trust relationship with those involved. You don’t know the people who created this blob, you likely don’t know people who do know the people who created this blob, these people probably don’t have an online presence that gives you more insight. Why should you trust them?

  • If it’s in ROM and it turns out to be hostile then nobody can fix it ever

  • The people creating these blobs largely work for the same company that built the hardware in the first place. When they built that hardware they could have backdoored it in any number of ways. And if the hardware has a built-in copy of the code it runs, why do you trust that that copy isn’t backdoored? Maybe it isn’t and updates would introduce a backdoor, but in that case if you buy new hardware that runs new code aren’t you putting yourself at the same risk?

  • Designing hardware where you’re able to provide updated code and nobody else can is just a dick move5. We shouldn’t encourage vendors who do that.

  • Humans are bad at writing code, and code running on ancilliary hardware is no exception. It contains bugs. These bugs are sometimes very bad. This paper describes a set of vulnerabilities identified in code running on SSDs that made it possible to bypass encryption secrets. The SSD vendors released updates that fixed these issues. If the code couldn’t be replaced then anyone relying on those security features would need to replace the hardware.

  • Even if blobs are signed and can’t easily be replaced, the ones that aren’t encrypted can still be examined. The SSD vulnerabilities above were identifiable because researchers were able to reverse engineer the updates. It can be more annoying to audit binary code than source code, but it’s still possible.

  • Vulnerabilities in code running on other hardware can still compromise the OS. If someone can compromise the code running on your wifi card then if you don’t have a strong IOMMU setup they’re going to be able to overwrite your running OS.

  • Replacing one non-free blob with another non-free blob increases the total number of non-free blobs involved in the whole system, but doesn’t increase the number that are actually executing at any point in time.

Ok we’re done with the things to consider. Please spend a few seconds thinking about what the tradeoffs are here and what your feelings are. Proceed when ready.

I trust my CPU vendor. I don’t trust my CPU vendor because I want to, I trust my CPU vendor because I have no choice. I don’t think it’s likely that my CPU vendor has designed a CPU that identifies when I’m generating cryptographic keys and biases the RNG output so my keys are significantly weaker than they look, but it’s not literally impossible. I generate keys on it anyway, because what choice do I have? At some point I will buy a new laptop because Electron will no longer fit in 32GB of RAM and I will have to make the same affirmation of trust, because the alternative is that I just don’t have a computer. And in any case, I will be communicating with other people who generated their keys on CPUs I have no control over, and I will also be relying on them to be trustworthy. If I refuse to trust my CPU then I don’t get to computer, and if I don’t get to computer then I will be sad. I suspect I’m not alone here.

Why would I install a code update on my CPU when my CPU’s job is to run my code in the first place? Because it turns out that CPUs are complicated and messy and they have their own bugs, and those bugs may be functional (for example, some performance counter functionality was broken on Sandybridge at release, and was then fixed with a microcode blob update) and if you update it your hardware works better. Or it might be that you’re running a CPU with speculative execution bugs and there’s a microcode update that provides a mitigation for that even if your CPU is slower when you enable it, but at least now you can run virtual machines without code in those virtual machines being able to reach outside the hypervisor boundary and extract secrets from other contexts. When it’s put that way, why would I not install the update?

And the straightforward answer is that theoretically it could include new code that doesn’t act in my interests, either deliberately or not. And, yes, this is theoretically possible. Of course, if you don’t trust your CPU vendor, why are you buying CPUs from them, but well maybe they’ve been corrupted (in which case don’t buy any new CPUs from them either) or maybe they’ve just introduced a new vulnerability by accident, and also you’re in a position to determine whether the alleged security improvements matter to you at all. Do you care about speculative execution attacks if all software running on your system is trustworthy? Probably not! Do you need to update a blob that fixes something you don’t care about and which might introduce some sort of vulnerability? Seems like no!

But there’s a difference between a recommendation for a fully informed device owner who has a full understanding of threats, and a recommendation for an average user who just wants their computer to work and to not be ransomwared. A code update on a wifi card may introduce a backdoor, or it may fix the ability for someone to compromise your machine with a hostile access point. Most people are just not going to be in a position to figure out which is more likely, and there’s no single answer that’s correct for everyone. What we do know is that where vulnerabilities in this sort of code have been discovered, updates have tended to fix them - but nobody has flagged such an update as a real-world vector for system compromise.

My personal opinion? You should make your own mind up, but also you shouldn’t impose that choice on others, because your threat model is not necessarily their threat model. Code updates are a reasonable default, but they shouldn’t be unilaterally imposed, and nor should they be blocked outright. And the best way to shift the balance of power away from vendors who insist on distributing non-free blobs is to demonstrate the benefits gained from them being free - a vendor who ships free code on their system enables their customers to improve their code and enable new functionality and make their hardware more attractive.

It’s impossible to say with absolute certainty that your security will be improved by installing code blobs. It’s also impossible to say with absolute certainty that it won’t. So far evidence tends to support the idea that most updates that claim to fix security issues do, and there’s not a lot of evidence to support the idea that updates add new backdoors. Overall I’d say that providing the updates is likely the right default for most users - and that that should never be strongly enforced, because people should be allowed to define their own security model, and whatever set of threats I’m worried about, someone else may have a good reason to focus on different ones.


  1. Code that runs on the CPU before the OS is still usually described as firmware - UEFI is firmware even though it’s executing on the CPU, which should give a strong indication that the difference between “firmware” and “software” is largely arbitrary ↩︎

  2. And, obviously 8051 ↩︎

  3. Because UEFI makes everything more complicated, UEFI makes this more complicated. Triggering a UEFI runtime service involves your OS jumping into firmware code at runtime, in the same context as the OS kernel. Sometimes this will trigger a jump into System Management Mode, but other times it won’t, and it’s just your kernel executing code that got dumped into RAM when your system booted. ↩︎

  4. I don’t understand most of the diff between one kernel version and the next, and I don’t have time to read all of it either. ↩︎

  5. There’s a bunch of reasons to do this, the most reasonable of which is probably not wanting customers to replace the code and break their hardware and deal with the support overhead of that, but not being able to replace code running on hardware I own is always going to be an affront to me. ↩︎

Planet DebianMichael Ablassmeier: pbsindex - file backup index

If you take backups using the proxmox-backup-client and you wondered what backup may include a specific file, the only way to find out is to mount the backup and search for the files.

For regular file backups, the Proxmox Backup Server frontend provides a pcat1 file for download, whose binary format is somewhat undocumented but actually includes a listing of the files backed up.

A Proxmox backup server datastore includes the same pcat1 file as blob index (.pcat1.didx). So to actually beeing able to tell which backup contains which files, one needs to:

1) Open the .pcat1.didx file and find out required blobs, see format documentation

2) Reconstruct the .pcat1 file from the blobs

3) Parse the pcat1 file and output the directory listing.

I’ve implemented this in pbsindex which lets you create a central file index for your backups by scanning a complete PBS datastore.

Lets say you want to have a file listing for a specific backup, use:

 pbsindex --chunk-dir /backup/.chunks/ /backup/host/vm178/2026-03-02T10:47:57Z/catalog.pcat1.didx
 didx uuid=7e4086a9-4432-4184-a21f-0aeec2b2de93 ctime=2026-03-02T10:47:57Z chunks=2 total_size=1037386
 chunk[0] start=0 end=344652 size=344652 digest=af3851419f5e74fbb4d7ca6ac3bc7c5cbbdb7c03d3cb489d57742ea717972224
 chunk[1] start=344652 end=1037386 size=692734 digest=e400b13522df02641c2d9934c3880ae78ebb397c66f9b4cf3b931d309da1a7cc
 d ./usr.pxar.didx
 d ./usr.pxar.didx/bin
 l ./usr.pxar.didx/bin/Mail
 f ./usr.pxar.didx/bin/[ size=55720 mtime=2025-06-04T15:14:05Z
 f ./usr.pxar.didx/bin/aa-enabled size=18672 mtime=2025-04-10T15:06:25Z
 f ./usr.pxar.didx/bin/aa-exec size=18672 mtime=2025-04-10T15:06:25Z
 f ./usr.pxar.didx/bin/aa-features-abi size=18664 mtime=2025-04-10T15:06:25Z
 l ./usr.pxar.didx/bin/apropos

It also lets you scan a complete datastore for all existing .pcat1.didx files and store the directory listings in a SQLite database for easier searching.

,

Planet DebianIsoken Ibizugbe: Wrapping Up My Outreachy Internship at Debian

Twelve weeks ago, I stepped into the Debian ecosystem as an Outreachy intern with a curiosity for Quality Assurance. It feels like just yesterday, and time has flown by so fast! Now, I am wrapping up that journey, not just with a completed project, but with improved technical reasoning.

I have learned how to use documentation to understand a complex project, how to be a good collaborator, and that learning is a continuous process. These experiences have helped me grow much more confident in my skills as an engineer.

My Achievements

As I close this chapter, I am leaving a permanent “Proof-of-Work” in the Debian repositories:

  • Full Test Coverage: I automated apps_startstop tests for Cinnamon, LXQt, and XFCE, covering both Live images and Netinst installations.
  • Synergy: I used symbolic links and a single Perl script to handle common application tests across different desktops, which reduces code redundancy.
  • The Contributor Style Guide: I created a guide for future contributors to make documentation clearer and reviews faster, helping to reduce the burden on reviewers.

Final Month: Wrap Up

In this final month, things became easier as my understanding of the project grew. I focused on stability and finishing my remaining tasks:

  • I spent time exploring different QEMU video options like VGA, qxl, and virtio on KDE desktop environment . This was important to ensure screen rendering remained stable so that our “needles” (visual test markers) wouldn’t fail because of minor glitches.
  • I successfully moved from familiarizing to test automation for the XFCE desktop. This included writing “prepare” steps and creating the visual needles needed to make the tests reliable.
  • One of my final challenges was the app launcher function. Originally, my code used else if blocks for each desktop. I proposed a unified solution, but hit a blocker: XFCE has two ways to launch apps (App Finder and the Application Menu). Because using different methods sometimes caused failures, I chose to use the application menu button across the board.

What’s Next?

I don’t want my journey with Debian to end here. I plan to stay involved in the community and extend these same tests to the LXDE desktop to complete the coverage for all major Debian desktop environments. I am excited to keep exploring and learning more about the Debian ecosystem.

Thank You

This journey wouldn’t have been possible without the steady guidance of my mentors: Tassia Camoes Araujo, Roland Clobus, and Philip Hands. Thank you for teaching me that in the world of Free and Open Source Software (FOSS), your voice and your code are equally important.

To my fellow intern Hellen and the entire Outreachy community, thank you for the shared learning and support. It has been an incredible 12 weeks.

David BrinWHY we are at war again? Ten reasons not in the news.

== Why are we at war again? ==

I'll be concise here, laying down reasons why thousands of U.S. service members - and eventually millions of the rest of us are being thrown into danger amid gaudy explosions that terrify - and sometimes kill - a people we want as friends.. I'll briefly offer some bullet points, many of them I've already elucidated, in other locales. But one thing is certain...

...that this is not about Iran's nuclear program. Sure, the Mission Accomplished 'deal' that Donald Trump will eventually bray will claim that's the reason for it all. It's not. Even remotely.

1. Elsewhere, I speak of Republican Bipolar Foreign Policy. The GOP always ... and I mean literally always... veers sharply between isolationism and imperial thuggery. We saw this manic-depressive mania under Nixon, Reagan, Ford, and both Bushes (remember the Neocons braying "We're an empire now!"). 

Only now these frenzied veers are gyrating daily, as Trump brags America First! Then "I settled EIGHT WARS! (Not one of which happened.) And "I'm the Peace President!" While he's bombed TEN other countries in just his first 13 months. And eviscerated the Foreign Service, driving away thousands of skilled experts on other nations and cultures.

2. When Republican presidents do wage war, it is with an unmistakably different style of military action than Democratic presidents. I've seen no one comment on this, even though it expresses a fundamental difference in character. And yes, it expresses diametrically opposite attitudes toward the fantastically professional men and women of the US military officer corps....

3. ...whose demoralization is a core aim of the Trumpists. Take how Pete "Filthy Fingers"* Hegseth commanded 500 of the world's finest professionals -- generals, admirals and top sergeants - to drop their work all over the world and hurry to Quantico, where 'former' alky Hegs and Trump berated them as "too fat and too woke to fight." just 6 weeks before they performed the most spectacularly complex and competent raid (in Caracas) in the history of the world. A raid that exposed many of our secret methods and tools to scrutiny, without liberating any Venezuelans from their criminal masters (We'll say more on that.)

But yes. Demoralization and culling of the Officer Corps is a feature, not a bug. Trump fired the JAGs whose task includes advising military folks about the legality of orders. And he's been reaming out dozens of flag officers who demur over sending boots into American cities. Now why would he do that?

4. Then there's distraction. Trump is not the first to use war to divert attention away from domestic failures and discontents. Nixon did it. Reagan several times. As did Bush Jr. But Donald Trump is truly desperate to sidetrack. Now with Operation Epstein Slurry... I mean Epic Fury.

Which brings us to something that many of you keep falling for.

5. This Iran war is not even remotely about oil. Except as he's been able to get Venezuelan oil export revenues diverted into offshore slush accounts that he controls. And sure, he likely intends the same re: Iran. And starving Cuba could lead to the option described below. But the USA - as a nation - does not benefit from war-seized oil. We got none from the Iraq wars - and I want you to read this sentence several times, till that fact sinks in. "We did Iraq for oil" is an idiotic incantation worthy of MAGA.

Anyway, the US got energy independence under Obama and is a net exporter. So STFU about that cliché.

6. Except that shutting down Iranian oil does boost world prices, benefiting his fellow oligarchs. So, okay. Maybe a bit.Indirectly.

7. This is not about toppling despots! Decapitating the top capo of the Venezuelan and Iranian gangs is classic mafia technique, that is not meant to liberate the people of those countries! DT has already made offers to the Iranian Republican Guard and Religious Police etc to make deals with him to stay in power, in exchange for them kissing his ring. In Venezuela, Argentina, El Salvador etc. - and possibly soon in Cuba - the aim is never, ever to establish democracy or to liberate citizens from their oppressors. 

The pattern is perfectly that of mafiosi. Take over another gang's territory by decapitating its top capo, then get allegiance (and resulting vigorish) from the terrified sub-capos of the gang that's left in place. This pattern is now so repeatedly blatant that no other theory is even remotely tenable,

Oh, and Marco will ensure that Miami crime families will slip in atop the Castro power structure in Cuba. This is a Mafia gang and the capo di tutti capi - even above Don - was named Vlad.  Though the power of his blackmail files to coerce western elites into obedience may be fading!  For reasons I'll go into, elsewhere. (Hint: because of AI.)

8. I mentioned the exposure of hard-won military secrets and methods, each time we go to physical war. Sometimes, that can't be helped. Russian and Chinese observers are all over eastern Ukraine, for example. Mostly amazed by the brilliance and effectiveness of most Western systems and studying hard how to copy or counter them. But Ukraine is an actual need. Perhaps Iran is, too. But this factor belongs on the balance sheet.

9. Russia's interest. Look up The Great Game of the 19th Century between the Russian and British Empires, as the former kept maneuvering and jostling, trying to win its way to a southern, ice-free port into the Indian Ocean and from there to the rest of the world. Iran/Persia was always a major part of that great-powers struggle and if most in the west don't remember it, you can be sure that Russians do. Above all, the very last thing they want is a free, secular and democratic Iran. Far better to divide power there with the Trumpist gang. Whose relations with Putin are the ghosts at the banquet.

10. Okay, this final reason for the war is harsh. It is speculative, but makes perfect sense, 

Another aim is to foment anger, to re-enrage the forces in the Middle East who want to do terrorism on America. Riling up enough enemies to deliver us into another 9/11 attack. One that Old Two Scoops imagines might save him from having to face devastating elections, this fall. 

Do I have any evidence for that last one? Other than vows of revenge that are already echoing across the region, for the blatantly dumb targeted assassination of Iran's 82-year old paramount religious leader? 

Well, it would explain why Don fired over half of our counter-terrorism folks. And can you think of anything less than a major national trauma that'd provide the excuse he needs for martial law? 

Put it all together folks. 

Prepare, in that event, to chant "Reichstag Fire!" 


But also keep in mind another word. One that shows we finally understand what's going on. Phase 9 of the 25 year recurring psychic schism between pro- and anti-modernity Americans. Our never settled civil war. And hence one word that will efficiently show our grit, our determination, our courage... our firm intent.


APPOMATTOX.





* Filthy Fingers Hegseth. Look up how many times, on Fox, he (drunkenly) bragged: "I don't believe in germs; I haven't washed my hands in a decade." Though I'll admit. Most Trump appointees are even more crazy and even less qualified.



Planet DebianHellen Chemtai: The Last Week of My Journey as an Outreachy Intern at Debian OpenQA

Hello world 😀. I’m Hellen Chemtai, an intern at Outreachy working with the Debian OpenQA team on Images Testing. This is the final week of the internship. This is just a start for me as I will continue contributing to the community .I am grateful for the opportunity to work with the Debian OpenQA team as an Outreachy intern. I have had the best welcoming team to Open Source.

My tasks and contributions

I have been working on network install and live images tasks :

  1. Install live Installers ( Ventoy , Rufus and Balenaetcher) and test the live USBs made by these live installers. – These tasks were completed and is running on the server.
  2. Use different file systems (btrfs , jfs , xfs) for installation and then test. – This task was completed and running on the server. It still needs some changes to ensure automation for each file system
  3. Use speech synthesis to capture all audio. – This task is complete. We are refining it to run well in server.
  4. Publish temporary assets. – This task is being worked on.

I have enjoyed working on testing both live images and net install images. This was one of the goals that I had highlighted in my application. I have also been working with fellow contributors in this project.

My team

As I had stated , I have had the best welcoming team to Open Source . They have been working with me and ensuring I have the proper resources for contributions. I am grateful to my three mentors and the work they have done.

  1. Roland Clobus is a project maintainer. He is in charge of code review , pointing out what we need to learn and works on technical issues. He considers every solution we contributors think of and will go into detailed explanations for any issue we have.
  2. Tassia Camoes is a community coordinator. She is in charge of communication, co-ordination between contributors and networking within the community. She on-boarded us and introduced us to the community.
  3. Philip Hands is also a project maintainer. He is in charge of technical code , ensuring sources work and also working on server and its issues. He also gives detailed explanations for any issue we have.

I wish to learn more with the team. On my to do list, I would like to gain more skills on ports and packages so to contribute more technically. I have enjoyed working on the tasks and learning

The impact of this project

The automated tests done by the team help the community in some of the following examples:

  1. Check the installation and system behavior of the Operating System images versions
  2. Help developers and users of Operating Systems know which versions of applications e.g live installers run well on system
  3. Check for any issues during installation and running of Operating Systems and their flavors

I have also networked with the greater community and other contributors. During the contribution phase, I found many friends who were learning together with me . I hope to continue networking with the community and continue learning.

Cryptogram On Moltbook

The MIT Technology Review has a good article on Moltbook, the supposed AI-only social network:

Many people have pointed out that a lot of the viral comments were in fact posted by people posing as bots. But even the bot-written posts are ultimately the result of people pulling the strings, more puppetry than autonomy.

“Despite some of the hype, Moltbook is not the Facebook for AI agents, nor is it a place where humans are excluded,” says Cobus Greyling at Kore.ai, a firm developing agent-based systems for business customers. “Humans are involved at every step of the process. From setup to prompting to publishing, nothing happens without explicit human direction.”

Humans must create and verify their bots’ accounts and provide the prompts for how they want a bot to behave. The agents do not do anything that they haven’t been prompted to do.

I think this take has it mostly right:

What happened on Moltbook is a preview of what researcher Juergen Nittner II calls “The LOL WUT Theory.” The point where AI-generated content becomes so easy to produce and so hard to detect that the average person’s only rational response to anything online is bewildered disbelief.

We’re not there yet. But we’re close.

The theory is simple: First, AI gets accessible enough that anyone can use it. Second, AI gets good enough that you can’t reliably tell what’s fake. Third, and this is the crisis point, regular people realize there’s nothing online they can trust. At that moment, the internet stops being useful for anything except entertainment.

Planet DebianBen Hutchings: FOSS activity in February 2026

Worse Than FailureCodeSOD: Popping Off

Python is (in)famous for its "batteries included" approach to a standard library, but it's not that notable that it has plenty of standard data structures, like dicts. Nor is in surprising that dicts have all sorts of useful methods, like pop, which removes a key from the dict and returns its value.

Because you're here, reading this site, you'll also be unsurprised that this doesn't stop developers from re-implementing that built-in function, badly. Karen sends us this:

def parse_message(message):
    def pop(key):
        if key in data:
            result = data[key]
            del data[key]
            return result
        return ''

    data = json.loads(message)
    some_value = pop("some_key")
    # <snip>...multiple uses of pop()...</snip>

Here, they create an inner method, and they exploit variable hoisting. While pop appears in the code before data is declared, all variable declarations are "hoisted" to the top. When pop references data, it's getting that from the enclosing scope. Which while this isn't a global variable, it's still letting a variable cross between two scopes, which is always messy.

Also, this pop returns a default value, which is also something the built-in method can do. It's just the built-in version requires you to explicitly pass the value, e.g.: some_value = data.pop("some_key", "")

Karen briefly wondered if this was a result of the Python 2 to 3 conversion, but no, pop has been part of dict for a long time. I wondered if this was just an exercise in code golf, writing a shorthand function, but even then- you could just wrap the built-in pop with your shorthand version (not that I'd recommend such a thing). No, I think the developer responsible simply didn't know the function was there, and just reimplemented a built-in method badly, as so often happens.

[Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

365 TomorrowsEscapees

Author: Julian Miles, Staff Writer Investigator Mellio considers the narrow doorway. “You say this was never opened?” “Logs confirm it, sir.” Mellio glances at the sergeant. “Thank you, officer-?” “Sergeant Parx, sir.” “Good to meet you, Parx. So, the brief said this isn’t the first?” “Correct. This is eighth member of the Gundorini gang to […]

The post Escapees appeared first on 365tomorrows.

xkcdElectric Vehicles

Planet DebianValhalla's Things: A Pen Case (or a Few)

Posted on March 2, 2026
Tags: madeof:atoms, FreeSoftWear, craft:sewing

A pen case made of two pieces of a relatively stiff black material with a flat base and three separate channels on top, plus a flap covering everything and a band to keep the flap closed; there is visible light blue stitching all around the channels.

For my birthday, I’ve bought myself a fancy new expensive1 fountain pen.

A two slot pen case in the same material as above, but brown: the flap is too short to cover the pens, and there isn't a band to keep it closed.

Such a fancy pen, of course requires a suitable case: I couldn’t use the failed prototype of a case I’ve been keeping my Preppys in, so I had to get out the nice vegetable tanned leather… Yeah, nope, I don’t have that (yet). I got out the latex and cardboard material that is sold as a (cheaper) leather substitute, doesn’t look like leather at all, but is quite nice (and easy) to work with. The project is not vegan anyway, because I used waxed linen thread, waxing it myself with a lot of very nicely smelling beeswax.

a case similar to the one above, but this one only has two slots, and there is a a Faber Castell pen nested on top of the case between the two slots. Here the stitches are white, and in a coarser thread.

I got the measurements2 from the less failed prototype where I keep my desktop pens, and this time I made a proper pattern I could share online, under the usual Free Culture license.

A case like the one above, except that the stitches are in black, and not as regular. This one has also been scrunched up a bit for a different look, and now the band is a bit too wide.

From the width of the material I could conveniently cut two cases, so that’s what I did, started sewing the first one, realized that I got the order of stitching wrong, and also that if I used light blue thread instead of the black one it would look nice, and be easier to see in the pictures for the published pattern, started sewing the second one, and kept alternating between the two, depending on the availability of light for taking pictures.

The open pen case, showing two pens, a blue Preppy and a gunmetal Plaisir cosily nested in the two outer slots, while the middle slot is ominously empty.

One of the two took the place of my desktop one, where I had one more pen than slots, and one of the old prototypes was moved to keep my bedside pen, and the other new case was used for the new pen in my handbag, together with a Preppy, and now I have a free slot and you can see how this is going to go wrong, right? :D


  1. 16€. plus a 9€ converter, and another 6€ pen to get the EF nib from, since it wasn’t available for the expensive pen.↩︎

  2. I have them written down somewhere. I couldn’t find them. So I measured the real thing, with some approximation.↩︎

,

Planet DebianBenjamin Mako Hill: Pronunciation

Had a discussion about how to pronounce the name of Google’s chatbot. Turns out, we were all wrong.

365 TomorrowsMort Begins Again

Author: David Sydney Like most people, Mort hadn’t paid much attention to reincarnation. During the week, he was up to his neck in work. On his day off, as he took a leisurely drive to clear his mind, if that is the proper term, he didn’t think of the future. He had the road to […]

The post Mort Begins Again appeared first on 365tomorrows.

Planet DebianJunichi Uekawa: The next Debconf happens in Japan.

The next Debconf happens in Japan. Great news. Feels like we came a long way, but I didn't personally do much, I just made the first moves.

,

Planet DebianMike Gabriel: Debian Lomiri Tablets 2025-2027 - Project Report (Q3/2025)

Debian Lomiri for Debian 13 (previous project)

In our previous project around Debian and Lomiri (lasting until July 2025), we achieved to get Lomiri 0.5.0 (and with it another 130 packages) into Debian (with two minor exceptions [1]) just in time for the Debian 13 release in August 2025.

Debian Lomiri for Debian 14

At DebConf in Brest, a follow-up project has been designed between the project sponsor and Fre(i)e Software GmbH [2]. The new project (on paper) started on 1st August 2025 and project duration was agreed on to be 2 years, allowing our company to work with an equivalent of ~5 FTE on Lomiri targetting the Debian 14 release some time in the second half of 2027 (an assumed date, let's see what happens).

Ongoing work would be covered from day one of the new project and once all contract details had been properly put on paper end of September, Fre(i)e Software GmbH started hiring a new team of software developers and (future) Debian maintainers. (More of that new team in our next Q4/2025 report).

The ongoing work of Q3/2025 was basically Guido Berhörster and myself working on Morph Browser Qt6 (mostly Guido together with Bhushan from MiraLab [3]) and package maintenance in Debian (mostly me).

Morph Browser Qt6

The first milestone we could reach with the Qt6 porting of Morph Browser [4] and related components (LUITK aka lomiri-ui-toolkit (big chunk! [5]), lomiri-content-hub, lomiri-download-manager and a few other components) was reached on 21st Sep 2025 with an upload of Morph Browser 1.2.0~git20250813.1ca2aa7+dfsg-1~exp1 to Debian experimental and the Lomiri PPA [6]).

Preparation of Debian 13 Updates (still pending)

In background, various Lomiri updates for Debian 13 have been prepared during Q3/2025 (with a huge patchset), but publishing those to Debian 13 are still pending as tests are still not satisfying.

[1] lomiri-push-service and nuntium
[2] https://freiesoftware.gmbh
[3] https://miralab.one/
[4] https://gitlab.com/ubports/development/core/morph-browser/-/merge_reques... et al.
[5] https://gitlab.com/ubports/development/core/lomiri-ui-toolkit/-/merge_re... et al.
[6] https://launchpad.net/~lomiri

Krebs on SecurityWho is the Kimwolf Botmaster “Dort”?

In early January 2026, KrebsOnSecurity revealed how a security researcher disclosed a vulnerability that was used to build Kimwolf, the world’s largest and most disruptive botnet. Since then, the person in control of Kimwolf — who goes by the handle “Dort” — has coordinated a barrage of distributed denial-of-service (DDoS), doxing and email flooding attacks against the researcher and this author, and more recently caused a SWAT team to be sent to the researcher’s home. This post examines what is knowable about Dort based on public information.

A public “dox” created in 2020 asserted Dort was a teenager from Canada (DOB August 2003) who used the aliases “CPacket” and “M1ce.” A search on the username CPacket at the open source intelligence platform OSINT Industries finds a GitHub account under the names Dort and CPacket that was created in 2017 using the email address jay.miner232@gmail.com.

Image: osint.industries.

The cyber intelligence firm Intel 471 says jay.miner232@gmail.com was used between 2015 and 2019 to create accounts at multiple cybercrime forums, including Nulled (username “Uubuntuu”) and Cracked (user “Dorted”); Intel 471 reports that both of these accounts were created from the same Internet address at Rogers Canada (99.241.112.24).

Dort was an extremely active player in the Microsoft game Minecraft who gained notoriety for their “Dortware” software that helped players cheat. But somewhere along the way, Dort graduated from hacking Minecraft games to enabling far more serious crimes.

Dort also used the nickname DortDev, an identity that was active in March 2022 on the chat server for the prolific cybercrime group known as LAPSUS$. Dort peddled a service for registering temporary email addresses, as well as “Dortsolver,” code that could bypass various CAPTCHA services designed to prevent automated account abuse. Both of these offerings were advertised in 2022 on SIM Land, a Telegram channel dedicated to SIM-swapping and account takeover activity.

The cyber intelligence firm Flashpoint indexed 2022 posts on SIM Land by Dort that show this person developed the disposable email and CAPTCHA bypass services with the help of another hacker who went by the handle “Qoft.”

“I legit just work with Jacob,” Qoft said in 2022 in reply to another user, referring to their exclusive business partner Dort. In the same conversation, Qoft bragged that the two had stolen more than $250,000 worth of Microsoft Xbox Game Pass accounts by developing a program that mass-created Game Pass identities using stolen payment card data.

Who is the Jacob that Qoft referred to as their business partner? The breach tracking service Constella Intelligence finds the password used by jay.miner232@gmail.com was reused by just one other email address: jacobbutler803@gmail.com. Recall that the 2020 dox of Dort said their date of birth was August 2003 (8/03).

Searching this email address at DomainTools.com reveals it was used in 2015 to register several Minecraft-themed domains, all assigned to a Jacob Butler in Ottawa, Canada and to the Ottawa phone number 613-909-9727.

Constella Intelligence finds jacobbutler803@gmail.com was used to register an account on the hacker forum Nulled in 2016, as well as the account name “M1CE” on Minecraft. Pivoting off the password used by their Nulled account shows it was shared by the email addresses j.a.y.m.iner232@gmail.com and jbutl3@ocdsb.ca, the latter being an address at a domain for the Ottawa-Carelton District School Board.

Data indexed by the breach tracking service Spycloud suggests that at one point Jacob Butler shared a computer with his mother and a sibling, which might explain why their email accounts were connected to the password “jacobsplugs.” Neither Jacob nor any of the other Butler household members responded to requests for comment.

The open source intelligence service Epieos finds jacobbutler803@gmail.com created the GitHub account “MemeClient.” Meanwhile, Flashpoint indexed a deleted anonymous Pastebin.com post from 2017 declaring that MemeClient was the creation of a user named CPacket — one of Dort’s early monikers.

Why is Dort so mad? On January 2, KrebsOnSecurity published The Kimwolf Botnet is Stalking Your Local Network, which explored research into the botnet by Benjamin Brundage, founder of the proxy tracking service Synthient. Brundage figured out that the Kimwolf botmasters were exploiting a little-known weakness in residential proxy services to infect poorly-defended devices — like TV boxes and digital photo frames — plugged into the internal, private networks of proxy endpoints.

By the time that story went live, most of the vulnerable proxy providers had been notified by Brundage and had fixed the weaknesses in their systems. That vulnerability remediation process massively slowed Kimwolf’s ability to spread, and within hours of the story’s publication Dort created a Discord server in my name that began publishing personal information about and violent threats against Brundage, Yours Truly, and others.

Dort and friends incriminating themselves by planning swatting attacks in a public Discord server.

Last week, Dort and friends used that same Discord server (then named “Krebs’s Koinbase Kallers”) to threaten a swatting attack against Brundage, again posting his home address and personal information. Brundage told KrebsOnSecurity that local police officers subsequently visited his home in response to a swatting hoax which occurred around the same time that another member of the server posted a door emoji and taunted Brundage further.

Dort, using the alias “Meow,” taunts Synthient founder Ben Brundage with a picture of a door.

Someone on the server then linked to a cringeworthy (and NSFW) new Soundcloud diss track recorded by the user DortDev that included a stickied message from Dort saying, “Ur dead nigga. u better watch ur fucking back. sleep with one eye open. bitch.”

“It’s a pretty hefty penny for a new front door,” the diss track intoned. “If his head doesn’t get blown off by SWAT officers. What’s it like not having a front door?”

With any luck, Dort will soon be able to tell us all exactly what it’s like.

Update, 10:29 a.m.: Jacob Butler responded to requests for comment, speaking with KrebsOnSecurity briefly via telephone. Butler said he didn’t notice earlier requests for comment because he hasn’t really been online since 2021, after his home was swatted multiple times. He acknowledged making and distributing a Minecraft cheat long ago, but said he hasn’t played the game in years and was not involved in Dortsolver or any other activity attributed to the Dort nickname after 2021.

“It was a really old cheat and I don’t remember the name of it,” Butler said of his Minecraft modification. “I’m very stressed, man. I don’t know if people are going to swat me again or what. After that, I pretty much walked away from everything, logged off and said fuck that. I don’t go online anymore. I don’t know why people would still be going after me, to be completely honest.”

When asked what he does for a living, Butler said he mostly stays home and helps his mom around the house because he struggles with autism and social interaction. He maintains that someone must have compromised one or more of his old accounts and is impersonating him online as Dort.

“Someone is actually probably impersonating me, and now I’m really worried,” Butler said. “This is making me relive everything.”

But there are issues with Butler’s timeline. For example, Jacob’s voice in our phone conversation was remarkably similar to the Jacob/Dort whose voice can be heard in this Sept. 2022 Clash of Code competition between Dort and another coder (Dort lost). At around 6 minutes and 10 seconds into the recording, Dort launches into a cursing tirade that mirrors the stream of profanity in the diss rap that Dortdev posted threatening Brundage. Dort can be heard again at around 16 minutes; at around 26:00, Dort threatens to swat his opponent.

Butler said the voice of Dort is not his, exactly, but rather that of an impersonator who had likely cloned his voice.

“I would like to clarify that was absolutely not me,” Butler said. “There must be someone using a voice changer. Or something of the sorts. Because people were cloning my voice before and sending audio clips of ‘me’ saying outrageous stuff.”

365 TomorrowsOur Little Secret

Author: James C. Clar The evening before the president’s primetime appearance, the West Wing hummed like a server room. “Poll numbers?” President Drake asked, standing at the tall windows overlooking the South Lawn. “Seventy-six percent approval on the infrastructure package,” replied Chief of Staff Karen Tate. “The markets also responded well to the talk of […]

The post Our Little Secret appeared first on 365tomorrows.

Planet DebianDaniel Baumann: Debian Fast Forward: An alternative backports repository

The Debian project releases a new stable version of its Linux distribution approximately every two years. During its life time, a stable release usually gets security updates only, but in general no feature updates.

For some packages it is desirable to get feature updates earlier than with the next stable release. Some new packages included in Debian after the initial release of a stable distribution are desirable for stable too.

Both use-cases can be solved by recompiling the newer version of a package from testing/unstable on stable (aka backporting). Packages are backported together with only the minimal amount of required build-depends or depends not already fulfilled in stable (if any), and without any changes unless required to fix building on stable (if needed).

There are official Debian Backports available, as well as several well-known unofficial backports repositories. I have been involved in one of these unofficial repositories since 2005 which subsequently turned 2010 into its own Debian derivative, mixing both backports and modified packages in one repository for simplicity.

Starting with the Debian 13 (trixie) release, the (otherwise unmodified) backports of this derivative have been split out from the derivative distribution into a separate repository. This way the backports are more accessible and useful for all interested Debian users too.

TL;DR: Debian Fast Forward - https://fastforward.debian.net

  • is an alternative Debian repository containing complementary backports from testing/unstable to stable

  • with packages organized in an opinionated, self-contained selection of coherent sets

  • supporting amd64, i386, and arm64 architectures

  • containing around 400 packages in trixie-fastforward-backports

  • with 1’800 uploads since July 2025

End user documentation about how to enable Debian Fast Forwards is available.

Have fun!

,

Planet DebianPetter Reinholdtsen: Free software toolchain for the simplest RISC-V CPU in a small FPGA?

On Wednesday I had the pleasure of attending a presentation organized by the Norwegian Unix Users Group on implementing RISC-V using a small FPGA. This project is the result of a university teacher wanting to teach students assembly programming using a real instruction set, while still providing a simple and transparent CPU environment. The CPU in question implements the smallest set of opcodes needed to still call the CPU a RISC-V CPU, the RV32I base set. The author and presenter, Kristoffer Robin Stokke, demonstrated how to build both the FPGA setup and a small startup code providing a "Hello World" message over both serial port and a small LCD display. The FPGA is programmed using VHDL, the entire source code is available from github, but unfortunately the target FPGA setup is compiled using the proprietary tool Quartus. It is such a pity that such a cool little piece of free software should be chained down by non-free software, so my friend Jon Nordby set out to see if we can liberate this small RISC-V CPU. After all, it would be unforgivable sin to force students to use non-free software to study at the University of Oslo.

The VHDL code for the CPU instructions itself is only 1138 lines, if I am to believe wc -l lib/riscv_common/* lib/rv32i/*. On the small FPGA used during the talk, the entire CPU, ROM, display and serial port driver only used up half the capacity. These days, there exists a free software toolchain for FPGA programming not only in Verilog but also in VHDL, and we hope the support in yosys, ghdl, and yosys-plugin-ghdl (sadly and strangely enough, removed from Debian unstable) is complete enough to at least build this small and simple project with some minor portability fixes. Or perhaps there are other approaches that work better? The first patches are already floating on github, to make the VHDL code more portable and to test out the build. If you are interested in running your own little RISC-V CPU on a FPGA chip, please get in touch.

At the moment we sadly have hit a GHDL bug, which we do not quite know how to work around or fix:

******************** GHDL Bug occurred ***************************
Please report this bug on https://github.com/ghdl/ghdl/issues
GHDL release: 5.0.1 (Debian 5.0.1+dfsg-1+b1) [Dunoon edition]
Compiled with unknown compiler version
Target: x86_64-linux-gnu
/scratch/pere/src/fpga/memstick-fpga-riscv-upstream/
Command line:

Exception CONSTRAINT_ERROR raised
Exception information:
raised CONSTRAINT_ERROR : synth-vhdl_expr.adb:1763 discriminant check failed
******************************************************************

Thus more work is needed. For me, this simple project is the first stepping stone for a larger dream I have of converting the MESA machine controller system to build its firmware using a free software toolchain. I just need to learn more FPGA programming first. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Cryptogram Friday Squid Blogging: Squid Fishing in Peru

Peru has increased its squid catch limit. The article says “giant squid,” but they can’t possibly mean that.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Planet DebianDirk Eddelbuettel: x13binary 1.1.61.2 on CRAN: Micro Maintenance

The x13binary team is happy to share the availability of Release 1.1.61.2 of the x13binary package providing the X-13ARIMA-SEATS program by the US Census Bureau which arrived on CRAN earlier today, and has already been built for r2u.

This release responds to a CRAN request to display the compiler version when building. x13binary, just like three other packages there, creates and ships a local binary it interfaces with. So our build was a little outside of R CMD INSTALL ... but now signals build versions like R does. We also modernized and simplified our continuous intgegration script based on r-ci.

Courtesy of my CRANberries, there is also a diffstat report for this release showing changes to the previous release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Charles StrossThe Regicide Report

The Regicide Report, the last novel in the main Laundry Files series, is coming out on January 27th in the US (from Tor.com Publishing) and the UK (from Orbit).

The Regicide Report US cover
The Regicide Report UK cover

If you want to order signed hardcovers, contact Transreal Fiction in Edinburgh. (I believe Mike is currently willing to send books to the USA, but don't take my word for it: check first, and blame Donald Trump if there are customs/tariff obstacles.)

Audiobooks: there will be audio editions. The Audible one is showing a January 27th release date on Amazon.com; Hachette Digital will be issuing one in the UK but it's not showing up on Amazon.co.uk yet. (For contractual reasons they're recorded and produced by different companies.)

Ebooks and DRM: The ebook will be available the same day as the hardcover. Tor.com does not put DRM on their ebooks, but it's anybody's guess whether a given ebook store will add it. (Amazon have been particularly asshole-ish in recent years but are promising DRM-free downloads of purchases will be available from late January.) Orbit is part of Hachette, who are particularly obstreperous about requiring DRM on everything electronic, so you're out of luck if you buy the Orbit edition. (I could tell you how to unlock the DRM on purchases from the UK Kobo store, but then my publisher would be contractually obliged to assassinate me. Let's just say, it can be done.)

What next?

The Regicide Report is the last Bob/Mo/Laundry novel. It's set circa March-May 2015 in the time line; the New Management books are set circa November 2015 through May 2017, so this one slots in before Dead Lies Dreaming.

There may be a Laundry Files short story collection, and/or/maybe including a final New Management novella (it's half-written, but on "hold" since mid-2024), at some point in the future. But not this year or next. (I'm taking time off to get back in touch with space opera.)

None of the above precludes further Laundry Files novels getting written, but it's up to the publishers and market forces. If it does happen, I expect they'll be set in the 2020s in the internal chronology, by which time the Laundry itself is no more (it's been superseded by DEAT), and we may have new protagonists and a very new story line.

No, but really what's next?

I don't know for sure, but I'm currently working on the final draft of Starter Pack, my Stainless Steel Rat homage, and planning yet another rewrite of Ghost Engine, this time throwing away my current protagonists and replacing them with the ones from Starter Pack (who need another heist caper). Do not expect publication before 2027, though! I'm also awaiting eye surgery again, which slows everything down.

365 TomorrowsIntergalactic Vixens on the Moon

Author: Hillary Lyon Monte snatched the small chest from the airport where he worked as a baggage handler. He recognized the case; he’d seen it on stage at the fan convention. He jostled it, grinning. By the distribution of the weight inside, it definitely held the author’s animatronic head. At home, Monte placed the animatronic […]

The post Intergalactic Vixens on the Moon appeared first on 365tomorrows.

Worse Than FailureError'd: Perverse Perseveration

Pike pike pike pike Pike pike pike.

Lincoln KC repeated "I never knew Bank of America Bank of America Bank of America was among the major partners of Bank of America."

4

 

"Extra tokens, or just a stutter?" asks Joel "An errant alt-tab caused a needless google search, but thankfully Gemini's AI summary got straight-to-the-point(less) info. It is nice to see the world's supply of Oxford commas all in once place. "

0

 

Alessandro M. isn't the first one to call us out on our WTFs. "It’s adorable how the site proudly supports GitHub OAuth right up until the moment you actually try to use it. It’s like a door with a ‘Welcome’ sign that opens onto a brick wall." Meep meep.

1

 

Float follies found Daniel W. doubly-precise. "Had to go check on something in M365 Admin Center, and when I was on the OneDrive tab, I noticed Microsoft was calculating back past the bit. We're in quantum space at this point."

2

 

Weinliebhaber Michael R. sagt "Our German linguists here will spot the WTF immediately where my local wine shop has not. Weiẞer != WEIBER. Those words mean really different things." Is that 20 euro per kilo, or per the piece?

3

 

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

xkcdDental Formulas

,

Cryptogram Phishing Attacks Against People Seeking Programming Jobs

This is new. North Korean hackers are posing as company recruiters, enticing job candidates to participate in coding challenges. When they run the code they are supposed to work on, it installs malware on their system.

News article.

Rondam RamblingsSeeking God in Science part 3: Things Exist

The mere undertaking of this project of reconciling the mechanistic and teleological worldviews is already chock-a-block with tacit assumptions.  I am assuming that you, my readers, actually exist.  I am rejecting solipsism.  By choosing writing as my medium I am assuming that you know how to read and that you understand English.  But publishing on-line I am assuming that you

Cryptogram Why Tehran’s Two-Tiered Internet Is So Dangerous

Iran is slowly emerging from the most severe communications blackout in its history and one of the longest in the world. Triggered as part of January’s government crackdown against citizen protests nationwide, the regime implemented an internet shutdown that transcends the standard definition of internet censorship. This was not merely blocking social media or foreign websites; it was a total communications shutdown.

Unlike previous Iranian internet shutdowns where Iran’s domestic intranet—the National Information Network (NIN)—remained functional to keep the banking and administrative sectors running, the 2026 blackout disrupted local infrastructure as well. Mobile networks, text messaging services, and landlines were disabled—even Starlink was blocked. And when a few domestic services became available, the state surgically removed social features, such as comment sections on news sites and chat boxes in online marketplaces. The objective seems clear. The Iranian government aimed to atomize the population, preventing not just the flow of information out of the country but the coordination of any activity within it.

This escalation marks a strategic shift from the shutdown observed during the “12-Day War” with Israel in mid-2025. Then, the government primarily blocked particular types of traffic while leaving the underlying internet remaining available. The regime’s actions this year entailed a more brute-force approach to internet censorship, where both the physical and logical layers of connectivity were dismantled.

The ability to disconnect a population is a feature of modern authoritarian network design. When a government treats connectivity as a faucet it can turn off at will, it asserts that the right to speak, assemble, and access information is revocable. The human right to the internet is not just about bandwidth; it is about the right to exist within the modern public square. Iran’s actions deny its citizens this existence, reducing them to subjects who can be silenced—and authoritarian governments elsewhere are taking note.

The current blackout is not an isolated panic reaction but a stress test for a long-term strategy, say advocacy groups—a two-tiered or “class-based” internet known as Internet-e-Tabaqati. Iran’s Supreme Council of Cyberspace, the country’s highest internet policy body, has been laying the legal and technical groundwork for this since 2009.

In July 2025, the council passed a regulation formally institutionalizing a two-tiered hierarchy. Under this system, access to the global internet is no longer a default for citizens, but instead a privilege granted based on loyalty and professional necessity. The implementation includes such things as “white SIM cards“: special mobile lines issued to government officials, security forces, and approved journalists that bypass the state’s filtering apparatus entirely.

While ordinary Iranians are forced to navigate a maze of unstable VPNs and blocked ports, holders of white SIMs enjoy unrestricted access to Instagram, Telegram, and WhatsApp. This tiered access is further enforced through whitelisting at the data center level, creating a digital apartheid where connectivity is a reward for compliance. The regime’s goal is to make the cost of a general shutdown manageable by ensuring that the state and its loyalists remain connected while plunging the public into darkness. (In the latest shutdown, for instance, white SIM holders regained connectivity earlier than the general population.)

The technical architecture of Iran’s shutdown reveals its primary purpose: social control through isolation. Over the years, the regime has learned that simple censorship—blocking specific URLs—is insufficient against a tech-savvy population armed with circumvention tools. The answer instead has been to build a “sovereign” network structure that allows for granular control.

By disabling local communication channels, the state prevents the “swarm” dynamics of modern unrest, where small protests coalesce into large movements through real-time coordination. In this way, the shutdown breaks the psychological momentum of the protests. The blocking of chat functions in nonpolitical apps (like ridesharing or shopping platforms) illustrates the regime’s paranoia: Any channel that allows two people to exchange text is seen as a threat.

The United Nations and various international bodies have increasingly recognized internet access as an enabler of other fundamental human rights. In the context of Iran, the internet is the only independent witness to history. By severing it, the regime creates a zone of impunity where atrocities can be committed without immediate consequence.

Iran’s digital repression model is distinct from, and in some ways more dangerous than, China’s “Great Firewall.” China built its digital ecosystem from the ground up with sovereignty in mind, creating domestic alternatives like WeChat and Weibo that it fully controls. Iran, by contrast, is building its controls on top of the standard global internet infrastructure.

Unlike China’s censorship regime, Iran’s overlay model is highly exportable. It demonstrates to other authoritarian regimes that they can still achieve high levels of control by retrofitting their existing networks. We are already seeing signs of “authoritarian learning,” where techniques tested in Tehran are being studied by regimes in unstable democracies and dictatorships alike. The most recent shutdown in Afghanistan, for example, was more sophisticated than previous ones. If Iran succeeds in normalizing tiered access to the internet, we can expect to see similar white SIM policies and tiered access models proliferate globally.

The international community must move beyond condemnation and treat connectivity as a humanitarian imperative. A coalition of civil society organizations has already launched a campaign calling fordirect-to-cell” (D2C) satellite connectivity. Unlike traditional satellite internet, which requires conspicuous and expensive dishes such as Starlink terminals, D2C technology connects directly to standard smartphones and is much more resilient to infrastructure shutdowns. The technology works; all it requires is implementation.

This is a technological measure, but it has a strong policy component as well. Regulators should require satellite providers to include humanitarian access protocols in their licensing, ensuring that services can be activated for civilians in designated crisis zones. Governments, particularly the United States, should ensure that technology sanctions do not inadvertently block the hardware and software needed to circumvent censorship. General licenses should be expanded to cover satellite connectivity explicitly. And funding should be directed toward technologies that are harder to whitelist or block, such as mesh networks and D2C solutions that bypass the choke points of state-controlled ISPs.

Deliberate internet shutdowns are commonplace throughout the world. The 2026 shutdown in Iran is a glimpse into a fractured internet. If we are to end countries’ ability to limit access to the rest of the world for their populations, we need to build resolute architectures. They don’t solve the problem, but they do give people in repressive countries a fighting chance.

This essay originally appeared in Foreign Policy.

Worse Than FailureCodeSOD: The Counting Machine

Industrial machines are generally accompanied by "Human Machine Interfaces", HMIs. This is industrial slang for a little computerized box you use to control the industrial machine. All the key logic and core functionality and especially the safety functionality is handled at a deeper computer layer in the system. The HMI is just buttons users can push to interact with the machine.

Purchasers of those pieces of industrial equipment often want to customize that user interface. They want to guide users away from functions they don't need, or make their specific workflow clear, or even just brand the UI. This means that the vendor needs to publish an API for their HMI.

Which brings us to Wendy. She works for a manufacturing company which wants to customize the HMI on a piece of industrial equipment in a factory. That means Wendy has been reading the docs and poking at the open-sourced portions of the code, and these raise more questions than they answer.

For example, the HMI's API provides its own set of collection types, in C#. We can wonder why they'd do such a thing, which is certainly a WTF in itself, but this representative line raises even more questions than that:

Int32 Count { get; set; }

What happens if you use the public set operation on the count of items in a collection? I don't know. Wendy doesn't either, as she writes:

I'm really tempted to set the count but I fear the consequences.

All I can hear in my head when I think about "setting the Count" is: "One! One null reference exception! Two! TWO null reference exceptions! HA HA HA HA!"

Count von Count kneeling.png
By http://muppet.wikia.com/wiki/Count_von_Count

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsStealing Someone New

Author: CB Droege Tamilla moves carefully and silently through the dark fairground. She knows it’s only minimally guarded, and that the CCTV isn’t being monitored at night, but she’s learned to take every job seriously. Upon reaching the carousel, she checks the photo to confirm the target and pulls a battery-powered saw from her pack. […]

The post Stealing Someone New appeared first on 365tomorrows.

David BrinA little sci fi tale to boost your optimism for a new and better year... and era... ahead.

 Midweek, I'll refrain from politics... or the things that are now obvious. 

I just finished writing/editing/formatting an entire nonfiction book (about AI!) and wish to celebrate by offering a gift to you all. A little tale of optimism and hope... illustrating that one person -- not a superhero or mighty warrior or politician or genius -- might make all the difference in the world. With courage, hard work and neighborly good will.*

This story is one of many that can also be found in The Best of David Brin.


 =========================================

           A Professor at Harvard

                             By David Brin

 

 

 

Dear Lilly,

 

This transcription may be a bit rough.  I’m dashing it off quickly for reasons that should soon be obvious.  

         Exciting news!  Still, let me ask that you please don’t speak of this, or let it leak till I’ve had a chance to put my findings in a more academic format.

         Since May of 2022, I’ve been engaged to catalogue the Thomas Kuiper Collection, which Harvard acquired in that notorious bidding war a couple of years ago, on eBay.   The acclaimed astronomer-philosopher had been amassing trunkloads of documents from the late Sixteenth and early Seventeenth Centuries -- individually and in batches -- with no apparent pattern, rhyme or reason.   Accounts of the Dutch Revolution. Letters from Johannes Kepler.  Sailing manifests of ports in southern England. Ledgers and correspondence from the Italian Inquisition.  Early documents of Massachusetts Bay Colony and narratives about the establishment of Harvard College.

         The last category was what most interested the trustees, so I got to work separating them from the apparent clutter.  That is, it seemed clutter, an unrelated jumble... till intriguing patterns began to emerge. 

         Let me trace the story as was revealed to me, in bits and pieces.  It begins with the apprenticeship of a young English boy named Henry Stephens.

         

#

 

Henry was born to a family of petit-gentry farmers in Kent, during the year 1595.  According to parish records, his birth merited noting as mirabilus -- he was premature and should have died of the typhus that claimed his mother. But somehow the infant survived.

He arrived during a time of turmoil. Parliament had passed a law that anyone who questioned the Queen's religious supremacy, or persistently absented himself from Anglican services, should be imprisoned or banished from the country, never to return on pain of death.   Henry’s father was a leader among the “puritan” dissenters in one of England’s least tolerant counties.  Hence, the family was soon hurrying off to exile, departing by ship for the Dutch city of  Leiden.

         Leiden, you’ll recall, was already renowned for its brave resistance to the Spanish army of Philip II.  As a reward, Prince William of Orange and the Dutch parliament gave the city a choice: freedom from taxes for a hundred years, or the right to establish a university. Leiden chose a university.

         Here the Stephens family joined a growing expatriate community -- English dissenters, French Huguenots, Jews and others thronging into the cities of Middelburg, Leiden, and Amsterdam.  Under the Union of Utrecht, Holland was the first nation to explicitly respect individual political and religious liberty and to recognize the sovereignty of the people, rather than the monarch. (Both the American and French Revolutions specifically referred to this precedent).

         Henry was apparently a bright young fellow.  Not only did he adjust quickly -- growing up multilingual in English, Dutch and Latin -- but he showed an early flair for practical arts like smithing and surveying.

         The latter profession grew especially prominent as the Dutch transformed their landscape, sculpting it with dikes and levees, claiming vast acreage from the sea.   Overcoming resistance from his traditionalist father, Henry managed to get himself apprenticed to the greatest surveyor of the time, Willebrord Snel van Leeuwen -- or Snellius.  In that position, Henry would have been involved in a geodetic mapping of Holland -- the first great project using triangulation to establish firm lines of location and orientation -- using methods still applied today.

         While working for Snellius, Henry apparently audited some courses offered by Willebrord’s father -- Professor Rudolphus Snellius -- at the University of Leiden.    Rudolphus lectured on "Planetarum Theorica et Euclidis Elementa" and evidently was a follower of Copernicus.  Meanwhile the son -- also authorized to teach astronomy -- specialized in the Almagest of Ptolemeus!

         The Kuiper Collection contains a lovely little notebook, written in a fine hand -- though in rather vulgar latin -- wherein Henry Stephens describes the ongoing intellectual dispute between those two famous Dutch scholars, Snellius elder and younger. Witnessing this intellectual tussle first-hand must have been a treat for Henry, who would have known how few opportunities there were for open discourse in the world beyond Leiden.  

         

#

 

But things were just getting interesting.  For at the very same moment that a teenage apprentice was tracking amiable family quarrels over heliocentric versus geocentric astronomies, some nearby dutchman was busy crafting the world’s first telescope.

         The actual inventor is unknown -- secrecy was a bad habit practiced by many innovators of that time.   Till now, the earliest mention was in September 1608, when a man ‘from the low countries’ offered a telescope for sale at the annual Frankfurt fair.  It had a convex and a concave lens, offering a magnification of seven.  So, I felt a rising sense of interest when I read Henry’s excited account of the news, dated six months earlier (!) offering some clues that scholars may find worth pursuing.  

         Later though. Not today.  For you see, I left that trail just as soon as another grew apparent.  One far more exciting.

         Here’s a hint: word of the new instrument, flying across Europe by personal letter, soon reached a certain person in northern Italy.  Someone who, from description alone, was able to re-invent the telescope and put it to exceptionally good use.

         Yes, I’m referring to the Sage of Pisa.  Big G himself!  And soon the whole continent was abuzz about his great discoveries -- the moons of Jupiter, lunar mountains, the phases of Venus and so on.  Naturally, all of this excited old Rudolphus, while poor grumpy Willebrord muttered that it seemed presumptuous to draw cosmological conclusions from such evidence.  Both Snellius patris and filio agreed, however, that it would be a good idea to send a representative south, as quickly as possible, to learn first-hand about any improvements in telescope design that could aid the practical art of surveying.

 

So it was that in the year 1612, at age seventeen, young Henry Stephens of Kent headed off to Italy...

         ...and there the documented story stops for a few years.  From peripheral evidence -- bank records and such -- it would appear that small amounts were sent to Pisa from Snel family accounts in the form of a ‘stipend’. Nothing large or well-attributed, but a steady stream that lasted until about 1616, when “H.Stefuns” abruptly reappears in the employment ledger of Willebrord the surveyor.

         What was Henry up to all that time?  One might squint and imagine him counting pulse-beats in order to help time a pendulum’s sway.  Or using his keen surveyor’s eye to track a ball’s descent along an inclined plane.  Did he help to sketch Saturn’s rings?  Might his hands have dropped two weights -- heavy and light -- over the rail of a leaning tower, while the master physicist stood watching below?

         There is no way to tell.  Not even from documents in the Kuiper Compilation. 

         There is, however, another item from this period that Kuiper missed, but that I found in a scan of Vatican archives.  An early letter from the Italian scientist Evangelista Torricelli to someone he calls “Uncle Henri” -- whom he apparently met as a child around 1614.   Oblique references are enticing. Was this “Henri” the same man with whom Torricelli would have later adventures?  

         Alas, the letter has passed through so many collectors’ hands over the years that its provenance unclear.   We must wait some time for Torricelli to enter our story in a provable or decisive way.

         

#

 

Meanwhile, back to Henry Stephens. After his return to Leiden in 1616, there is little of significance for several years.  His name appears regularly in account ledgers. Also on survey maps, now signing on his own behalf as people begin to rely ever-more on the geodetic arts he helped develop.  Willibrord Snellius was by now hauling in f600 per annum and Journeyman Henry apparently earned his share.

         Oh, a name very similar to Henry’s can be found on the membership rolls of the Leiden Society, a philosophical club with highly distinguished membership.  The spelling is slightly different, but people were lackadaisical about such things in those days.  Anyway, it’s a good guess that Henry kept up his interest in science, paying keen attention to new developments.

         Then, abruptly, his world changed again.

         

#

 

Conditions had grown worse for dissenters back in England.  Henry’s father, having returned home to press for concessions from James I, was rewarded with imprisonment.  Finally, the King offered a deal, amnesty in exchange for a new and extreme form of exile -- participation in a fresh attempt to settle an English colony in the New World.

         Of course, everyone knows about the Pilgrims, their reasons for fleeing England and setting forth on the Mayflower, imagining that they were bound for Virginia, though by chicanery and mischance they wound up instead along the New England coast above Cape Cod.  All of that is standard catechism in American History One-A, offering a mythic basis for our Thanksgiving Holiday.  And much of it is just plain wrong.

         For one thing, the Mayflower did not first set forth from Plymouth, England.  It only stopped there briefly to take on a few more colonists and supplies, having actually begun its voyage in Holland.  The expatriate community was the true source of people and material.

         And right there, listed among the ship’s complement, having obediently joined his father and family, you will find a stalwart young man of twenty-five --  Henry Stephens.

         

#

 

Again, details are sketchy.  After a rigorous crossing oft portrayed in book and film, the Pilgrims arrived at Plymouth Rock on December 21, 1620.

         Professor Kuiper hunted among colonial records and found occasional glimpses of our hero.  Apparently he survived that terrible first winter and did more than his share to help the young colony endure.  Relations with the local natives were crucial and Professor Kuiper scribbled a number of notes which I hope to follow-up on later.  One of them suggests that Henry went west for some time to live among the Mohegan and other tribes, exploring great distances, making drawings and collecting samples of flora and fauna. 

         If so, we may have finally discovered the name of the “American friend” who supplied William Harvey with his famous New World Collection, the core element upon which Edmond Halley later began sketching his Theory of Evolution!

         Henry’s first provable reappearance in the record comes in 1625, with his marriage to Prosper White-Moon Forest -- a name that provokes interesting speculation.  There is no way to verify that his wife was a Native American woman, though subsequent township entries show eight children, only one of whom appears to have died young -- apparently a happy and productive family for the time.  Certainly, any bias or hostility toward Prosper must have been quelled by respect.  Her name is noted prominently among those who succored the sick during the pestilence year of 1627.  

         Further evidence of local esteem came in 1629 when Henry was engaged by the new Massachusetts Bay Colony as official surveyor.  This led to what was heretofore his principal claim for historical notice, as architect who laid down the basic plan for Boston Town.  A plan that included innovative arterial and peripheral lanes, looking far beyond the town’s rude origins.  As you may know, it became a model for future urban design that would be called the New England Style.  

         This rapid success might have led Henry directly to a position of great stature in the growing colony, had not events brought his tenure to an abrupt end in 1631.  That was the year, you’ll recall, when Roger Williams stirred up a hornet’s nest in the Bay Colony, by advocating unlimited religious tolerance -- even for Catholics, Jews and infidels.   

         Forced temporarily to flee Boston, Williams and his adherents established a flourishing new colony in Rhode Island -- before returning to Boston in triumph in 1634.   And yes, the first township of this new colony, this center of tolerance, was surveyed and laid out by you-know-who.

 

                                                                        #

 

It’s here that things take a decidedly odd turn.  

         Odd? That doesn’t half describe how I felt when I began to realize what happened next.  Lilly, I have barely slept for the last week!  Instead, I popped pills and wore electrodes in order to concentrate as a skein of connections began taking shape.

         For example, I had simply assumed that Professor Kuiper’s hoard was so eclectic because of an obsessive interest in a certain period of time -- nothing more.  He seemed to have grabbed things randomly!  So many documents, with so little connecting tissue between them.  

         Take the rare and valuable first edition that many consider the centerpiece of his collection -- a rather beaten but still beautiful copy of  "Dialogho Sopra I Due Massimi Sistemi Del Mondo"  or “A Dialogue Concerning Two Systems Of The World.”  

         (This document alone helped drive the aiBay bidding war, which Harvard eventually topped because the Collection also contained many papers of local interest.)

         A copy of the Dialogue!   I felt awed just touching it with gloved hands.  Did any other book do more to propel the birth of modern science?  The debate between the Copernican and Ptolemaic astronomical systems reached its zenith within this publication, sparking a frenzy of reaction -- not all of it favorable!  Responding to this implicit challenge, the Papal Palace and the Inquisition were so severe that most of Italy’s finest researchers emigrated during the decade that followed, many of them settling in Leiden and Amsterdam.

         That included young Evangelista Torricelli, who by 1631 was already well-known as a rising star of physical science.  Settling in Holland, Torricelli commenced rubbing elbows with friends of his “Uncle Henri” and performing experiments that would lead to invention of the barometer.  

         In correspondence that year, Torricelli shows deep worry about his old master, back in Pisa.  Often he would use code words and initials.  Obscurity was a form of protective covering in those days and he did not want to get the old man in even worse trouble.  It would do no good for “G” to be seen as a martyr or cause celebre in Protestant lands up north.  That might only antagonize the Inquisition even further.

         Still, Torricelli’s sense of despond grew evident as he wrote to friends all over Europe, passing on word of the crime being committed against his old master.  Without naming names, Torricelli described the imprisonment of a great and brilliant man.  Threats of torture, the coerced abjuration of his life’s work... and then even worse torment as the gray-bearded Professori entered confinement under house arrest, forbidden ever to leave his home or stroll the lanes and hills, or even to correspond (except clandestinely) with other lively minds.

         

                                                      #

 

What does all of this have to do with that copy of  "Dialogho” in the Kuiper Collection?

         Like many books that are centuries old, this one has accumulated a morass of margin notes and annotations, scribbled by various owners over the years -- some of them cogent glosses upon the elegant mathematical and physical arguments, and others written by perplexed or skeptical or hostile readers.  But one large note especially caught my eye.  Latin words on the flyleaf, penned in a flowing hand. Words that translate as:

                                             

                  To the designer of Providence.

                  Come soon, deliverance of our father.

                                                      

         All previous scholars who examined this particular copy of "Dialogho” have assumed that the inscription on the flyleaf was simply a benediction or dedication to the Almighty, though in rather unconventional form.  

         No one knew what to make of the signature, consisting of two large letters.

         ET.

                                             #

 

Can you see where I’m heading with this?

         Struck by a sudden suspicion, I arranged for Kuiper’s edition of "Dialogho” to be examined by the Archaeology Department, where special interest soon focused on dried botanical materials embedded at the tight joining of numerous pages.  All sorts of debris can settle into any book that endures four centuries.  But lately, instead of just brushing it away, people have begun studying this material. Imagine my excitement when the report came in -- pollen, seeds and stem residue from an array of plant types... nearly all of them native to New England!

         It occurred to me that the phrase “designer of Providence might not -- in this case -- have solely a religious import!  

         Could it be a coded salutation to an architectural surveyor? One who established the street plan of the capital of Rhode Island?  

         Might “father” in this case refer not to the Almighty, but instead to somebody far more temporal and immediate -- the way two apprentices refer to their beloved master?

         What I can verify from the open record is this.  Soon after helping Roger Williams return to Boston in triumph, Henry Stephens hastily took his leave of America and his family, departing on a vessel bound for Holland.

 

                                                      #

 

Why that particular moment?  It should have been an exciting time for such a fellow.  The foundations for a whole new civilization were being laid.  Who can doubt that Henry took an important part in early discussions with Williams, Winthrop, Anne Hutchinson and others -- deliberations over the best way to establish tolerance and lasting peace with native tribes.  How to institute better systems of justice and education.  Discussions that would soon bear surprising fruit.

         And yet, just as the fruit was ripening, Stephens left, hurrying back to a Europe that he now considered decadent and corrupt.   What provoked this sudden flight from his cherished New World?

         It was July, 1634.  Antwerp shipping records show him disembarking there on the 5th.  

         On the 20th a vague notation in the Town Hall archive tells of a meeting between several guildmasters and a group of ‘foreign doctors’ -- a term that could apply to any group of educated people from beyond the city walls.  Only the timing seems provocative.

         In early August, the Maritime Bank recorded a large withdrawal of 250 florins from the account of Willebrord Snellius, authorized in payment to ‘H. Stefuns’ by letter of credit from Leiden.

         Travel expenses?  Plus some extra for clandestine bribes?  Yes, the clues are slim even for speculating.  And yet we also know that at this time the young exiled scholar, Evangelista Torricelli, vacated his home. Bidding farewell to his local patrons, he then mysteriously vanished from sight forever.

         So, temporarily, did Henry Stephens.  For almost a year there is no sign of either man.  No letters.  No known mention of anyone seeing them...

         ...not until the spring of 1635, when Henry stepped once more upon the wharf in Boston Town, into the waiting arms of Prosper and their children.  Sons and daughters who presumably clamored around their Papa, shouting the age-old refrain -- 

         What did you bring me?  What did you bring me?”

         What he brought them was the future.

         

#

 

Oops, sorry about that, Lilly.  You must be chafing for me to get to the point.

         Or did you cheat?  

         Have you already done a quick mentat-scan of the archives, skipping past Henry’s name on the Gravenhage ship manifest, looking to see who else disembarked along with him that bright April day?  

         No, it won’t be that obvious. They were afraid, you see, and with good reason.  

         True, the Holy See quickly forgave the fugitive and declared him safe from retribution.  But the secretive masters of the Inquisition were less eager to pardon a famous escapee.  They had already proved relentless in pursuit of those who slip away.  While pretending that he still languished in custody, they must have sent agents everywhere, searching...

         So, look instead for assumed names!  Protective camouflage.

         Try Mr.  Quicksilver, which was the common word in English for mercury, a metal that is liquid at room temperature and a key ingredient in early barometers.  Is the name familiar?  It would be if you went to this university.  And now it’s plain -- that had to be Torricelli!  A flood of scholarly papers may come from this connection, alone.  An old mystery solved. 

         But move on now to the real news.  Have you scanned the passenger list carefully?

         How about “Mr. Kinneret”?   

         Kinneret -- one of the alternate names, in Hebrew, for the Sea of Galilee.

         

#

 

Yes, dear.    Kinneret.   

         I’m looking at his portrait right now, on the Wall of Founders.  And despite obvious efforts at disguise -- no beard, for example -- it astonishes me that no one has commented till now on the resemblance between Harvard’s earliest Professor of Natural Philosophy and the scholar who we are told died quietly under house arrest in Pisa, way back in 1642.

         It makes you wonder.  Would a Catholic savant from “papist” Italy have been welcome in Puritan Boston -- or on the faculty of John Harvard’s new college -- without the quiet revolution of reason that Roger Williams set in motion?  

         Would that revolution have been so profound or successful, without strong support from the Surveyor’s Guild and the Seven United Tribes?

         Lacking the influence of Kinneret, might the American tradition of excellence in mathematics and science have been delayed for decades?  Maybe centuries?

         

#

 

Sitting here in the Harvard University Library, staring out the window at rowers on the river, I can scarcely believe that less than four centuries have passed since the Gravenhage docked not far from here on that chilly spring morning of 1635.   Three hundred and sixty-seven years ago, to be exact.  

         Is that all? Think about it, Lilly, just fifteen human generations, from those rustic beginnings to the dawn of a new millennium.   How the world has changed.

         Ill-disciplined, I left my transcriber set to record Surface Thoughts, and so these personal musings have all been logged for you to savor, if you choose high-fidelity download.  But can even that convey the emotion I feel while marveling at the secret twists and turns of history?

         If only some kind of time -- or para-time -- travel were possible, so history could become an observational... or even experimental... science!  Instead, we are left to use primitive methods, piecing together clues, sniffing and burrowing in dusty records, hoping the essential story has not been completely lost.  

         Yearning to shed a ray of light on whatever made us who we are.

         

#

 

How much difference can one person make, I wonder?  Even one gifted with talent and goodness and skill -- and the indomitable will to persevere?  

         Maybe some group other than the Iroquois would have invented the steamboat and the Continental Train, even if James Watt hadn’t emigrated and ‘gone native’.   But how ever could the Pan American Covenant have succeeded without Ben Franklin sitting there in Havana, to jest and soothe all the bickering delegates into signing?  

         How important was Abraham Lincoln’s Johannesburg Address in rousing the world to finish off slavery and apartheid?  Might the flagging struggle have failed without him?  Or is progress really a team effort, the way Kip Thorne credits his colleagues -- meta-Einstein and meta-Feynman -- claiming that he never could have created the Transfer Drive without their help?

         Even this fine Widener Library where I sit -- bequeathed to Harvard by one of the alumni who died when Titanic hit that asteroid in 1912 -- seems to support the notion that things will happen pretty much the same, whether or not a specific individual or group happens to be on the scene.

 

                                                                        #

 

No one can answer these questions.  My own recent discoveries -- following a path blazed by Kuiper and others -- don’t change things very much.  Except perhaps to offer a sense of satisfaction -- much like the gratification Henry Stephens must have felt the day he stepped down the wharf, embracing his family, shaking the hand of his friend Williams, and breathing the heady air of freedom in this new world...

         ... then turning to introduce his friends from across the sea.  Friends who would do epochal things during the following twenty years, becoming legends while Henry himself faded into contented obscurity.

 

         Can one person change the world?

         Maybe not.  

         So instead let’s ask; what would Harvard be like, if not for Quicksilver-Torricelli?

         Or if not for Professor Galileo Galilei.

 

 


                                                      ###

                                                      ###

                                                      ###

                                                      ###




Addendum in 2026. Sure, optimism can be a hard to come by right now. Especially as the Confederacy - having captured the American capital in this latest phase of the 240 year Civil War - is expressing its classic manias, seemingly determined to take this where it always ends. At Yorktown. At Appomattox. 


Certainly I'll not gloat as scores of sage pundits and pols admit - at long last - what I've said for a decade. That it's been blackmail, all along. 


Not just because of what's been revealed (so far) in the Partial/redacted Epstein Files. But because only coercion can explain the uniformity of craven inaction by those cowards who won't step up for their country, for justice, for sanity... or for their children. Not dogma or ideology or graft... none of the classic diagnoses can explain why even just TEN haven't stepped across the aisle in the House, to rejoin America. To wipe that smirk off Mike Johnson's so-brown nose.


Replay the SOTU and look at that side, see the desperation to express placating obeisance for their master. And underneath... the fear.


One, even just one could make a difference... 


....as in the story that I oppered you today. But let it inspire you if just a little.


Persevere.

 

,

Cryptogram LLMs Generate Predictable Passwords

LLMs are bad at generating passwords:

There are strong noticeable patterns among these 50 passwords that can be seen easily:

  • All of the passwords start with a letter, usually uppercase G, almost always followed by the digit 7.
  • Character choices are highly uneven ­ for example, L , 9, m, 2, $ and # appeared in all 50 passwords, but 5 and @ only appeared in one password each, and most of the letters in the alphabet never appeared at all.
  • There are no repeating characters within any password. Probabilistically, this would be very unlikely if the passwords were truly random ­ but Claude preferred to avoid repeating characters, possibly because it “looks like it’s less random”.
  • Claude avoided the symbol *. This could be because Claude’s output format is Markdown, where * has a special meaning.
  • Even entire passwords repeat: In the above 50 attempts, there are actually only 30 unique passwords. The most common password was G7$kL9#mQ2&xP4!w, which repeated 18 times, giving this specific password a 36% probability in our test set; far higher than the expected probability 2-100 if this were truly a 100-bit password.

This result is not surprising. Password generation seems precisely the thing that LLMs shouldn’t be good at. But if AI agents are doing things autonomously, they will be creating accounts. So this is a problem.

Actually, the whole process of authenticating an autonomous agent has all sorts of deep problems.

News article.

Slashdot story

Planet DebianSahil Dhiman: Publicly Available NKN Data Traffic Graphs

National Knowledge Network (NKN) is one of India’s main National Research and Educational Network (NREN). The other being the less prevalent Education and Research Network (ERNET).

This post grew out of this Mastodon thread where I kept on adding various public graphs (from various global research and educational entities) that peer or connect with NKN. This was to get some purview about traffic data between them and NKN.

CERN

CERN, birthplace of the World Wide Web (WWW) and home of the Large Hadron Collider (LHC).

India participates in the LHCONE project, which carries LHC data over these links for scientific research purposes. This presentation from Vikas Singhal from Variable Energy Cyclotron Centre (VECC), Kolkata, at the 8th Asian Tier Center Forum in 2024 gives some details.

GÉANT

GÉANT is pan European Union’s collaboration of NRENs.

LEARN

Lanka Education and Research Network (LEARN) is Sri Lanka’s NREN.

NORDUnet

NREN for Nordic countries.

I couldn’t find any public live data transfer graphs from NKN side. If you know any other graphs, do let me know.

Planet DebianJoachim Breitner: Vibe-coding a debugger for a DSL

Earlier this week a colleague of mine, Emilio Jesús Gallego Arias, shared a demo of something he built as an experiment, and I felt the desire to share this and add a bit of reflection. (Not keen on watching a 5 min video? Read on below.)

What was that?

So what did you just see (or skipped watching)? You could see Emilio’s screen, running VSCode and editing a Lean file. He designed a small programming language that he embedded into Lean, including an evaluator. So far, so standard, but a few things stick out already:

  • Using Lean’s very extensible syntax this embedding is rather elegant and pretty.
  • Furthermore, he can run this DSL code right there, in the source code, using commands like #eval. This is a bit like the interpreter found in Haskell or Python, but without needing a separate process, or like using a Jupyter notebook, but without the stateful cell management.

This is already a nice demonstration of Lean’s abilities and strength, as we know them. But what blew my mind the first time was what happened next: He had a visual debugger that allowed him to debug his DSL program. It appeared on the right, in Lean’s “Info View”, where various Lean tools can hook into, show information and allow the user to interact.

But it did not stop there, and my mind was blown a second time: Emilio opened VSCode’s „Debugger“ pane on the left, and was able to properly use VSCode’s full-fledged debugger frontend for his own little embedded programming language! Complete with highlighting the executed line, with the ability to set breakpoints there, and showing the state of local variables in the debugger.

Having a good debugger is not to be taken for granted even for serious, practical programming languages. Having it for a small embedded language that you just built yourself? I wouldn’t have even considered that.

Did it take long?

If I were Emilio’s manager I would applaud the demo and then would have to ask how many weeks he spent on that. Coming up with the language, getting the syntax extension right, writing the evaluator and especially learning how the debugger integration into VSCode (using the DAP protocol) works, and then instrumenting his evaluator to speak that protocol – that is a sizeable project!

It turns out the answer isn’t measured in weeks: it took just one day of coding together with GPT-Codex 5.3. My mind was blown a third time.

Why does Lean make a difference?

I am sure this post is just one of many stories you have read in recent weeks about how new models like Claude Opus 4.6 and GPT-Codex 5.3 built impressive things in hours that would have taken days or more before. But have you seen something like this? Agentic coding is powerful, but limited by what the underlying platform exposes. I claim that Lean is a particularly well-suited platform to unleash the agents’ versatility.

Here we are using Lean as a programming language, not as a theorem prover (which brings other immediate benefits when using agents, e.g. the produced code can be verified rather than merely plausible, but that’s a story to be told elsewhere.)

But arguably because Lean is also a theorem prover, and because of the requirements that stem from that, its architecture is different from that of a conventional programming language implementation:

  • As a theorem prover, it needs extensible syntax to allow formalizing mathematics in an ergonomic way, but it can also be used for embedding syntax.
  • As a theorem prover, it needs the ability to run “tactics” written by the user, hence the ability to evaluate the code right there in the editor.
  • As a theorem prover, it needs to give access to information such as tactic state, and such introspection abilities unlock many other features – such as a debugger for an embedded language.
  • As a theorem prover, it has to allow tools to present information like the tactic state, so it has the concept of interactive “Widgets”.

So Lean’s design has always made such a feat possible. But it was no easy feat. The Lean API is large, and documentation never ceases to be improvable. In the past, it would take an expert (or someone willing to become one) to pull off that stunt. These days, coding assistants have no issue digesting, understanding and using the API, as Emilio’s demo shows.

The combination of Lean’s extensibility and the ability of coding agents to make use of that is a game changer to how we can develop software, with rich, deep, flexible and bespoke ways to interact with our code, created on demand.

Where does that lead us?

Emilio actually shared more such demos (Github repository). A visual explorer for the compiler output (have a look at the screenshot. A browser-devtool-like inspection tool for Lean’s “InfoTree”. Any of these provide a significant productivity boost. Any of these would have been a sizeable project half a year ago. Now it’s just a few hours of chatting with the agent.

So allow me to try and extrapolate into a future where coding agents have continued to advance at the current pace, and are used ubiquitously. Is there then even a point in polishing these tools, shipping them to our users, documenting them? Why build a compiler explorer for our users, if our users can just ask their agent to build one for them, right then when they need it, tailored to precisely the use case they have, with no unnecessary or confusing feature. The code would be single use, as the next time the user needs something like that the agent can just re-create it, maybe slightly different because every use case is different.

If that comes to pass then Lean may no longer get praise for its nice out-of-the-box user experience, but instead because it is such a powerful framework for ad-hoc UX improvements.

And Emilio wouldn’t post demos about his debugger. He’d just use it.

365 TomorrowsTill Zen, farewell

Author: Colin Jeffrey Andre Grack wasn’t happy with his latest purchase. It wasn’t that he didn’t like the colour or its size, though those attributes were rather nebulous and indescribable, he now realised. And it wasn’t that it was ugly, emitted unpleasant smells, or leaked something nasty onto the floor. Though, again, these aspects defied […]

The post Till Zen, farewell appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Safegaurd Your Comments

I've had the misfortune of working in places which did source-control via comments. Like one place which required that, with each section of code changed, you needed to add a comment with your name, the ticket number, and the reason the change was made. You know, the kind of thing you can just get from your source control service.

In their defense, that policy was invented for mainframe developers and then extended to everyone else, and their source control system was in Visual Source Safe. VSS was a) terrible, and b) a perennial destroyer of history, so maybe they weren't entirely wrong and VSS was the real WTF. I still hated it.

In any case, Alice's team uses more modern source control than that, which is why she's able to explain to us the story of this function:

public function calculateMassGrossPay(array $employees, Payroll $payroll): array
{
    // it shouldn't enter here, but if it does by any change, do nth
    return [];
}

Once upon a time, this function actually contained logic, a big pile of fairly complicated logic. Eventually, a different method was created which streamlined the functionality, but had a different signature and logic. All the callers were updated to use that method instead- by commenting out the line which called this one. This function had a comment added to the top: // it shouldn't enter here.

Then, the body of this function got commented out, and the return was turned into an empty array. The comment was expanded to what you see above. Then, eventually, the commented-out callers were all deleted. Years after that, the commented out body of this function was also deleted, leaving behind the skeleton you see here.

This function is not referenced anywhere else, not even in a comment. It's truly impossible for code to "enter here".

Alice writes: "Version control by commented out code does not work very well."

Indeed, it does not.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

xkcdLittle Red Dots

,

Planet DebianLouis-Philippe Véronneau: Montreal's Debian & Stuff - February 2026

Our Debian User Group met on February 22nd for our first meeting of the year!

Here's what we did:

pollo:

  • reviewed and merged Lintian contributions:
  • released lintian version 2.130.0
  • upstreamed a patch for python-wilderness, fixed a few things and released version 0.1.10-3
  • updated python-clevercsv to version 0.8.4
  • updated python-mediafile to version 0.14.0

lelutin:

  • opened up a RFH for co-maintenance for smokeping and added Marc Haber who responded really quickly to the call
  • with mjeanson's help: prepped and uploaded a new smokeping version to release pending work
  • opened a NM request to become DM

viashimo:

  • fixed freshrss timer
  • updated freshrss
  • installed new navidrome container
  • configured backups for new host (beelink mini s12)

tvaz:

  • did NM work
  • learned more about debusine and tested it
  • uploaded antimony to debusine
  • (co-)convinced lelutin to apply for DM (yay!)

lavamind:

  • worked on autopkgtests for a new version of jruby

Pictures

This time around, we held our meeting at cégep du Vieux Montréal, the college where I currently work. Here is the view we had:

View from my office

We also ordered some delicious pizzas from Pizzeria dei Compari, a nice pizzeria on Saint-Denis street that's been there forever.

The pizzas we ate

Some of us ended up grabbing a drink after the event at l'Amère à boire, a pub right next to the venue, but I didn't take any pictures.

Cryptogram Poisoning AI Training Data

All it takes to poison AI training data is to create a website:

I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….

Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

These things are not trustworthy, and yet they are going to be widely trusted.

365 TomorrowsSprite

Author: Mark Renney It is vital that I have somewhere to hibernate, a place where I can lay dormant, for years, decades, even longer if necessary, although I do need to flicker, albeit briefly, from time to time. I have to be seen or at least cause someone to shiver, to feel uncertain, disoriented. Any […]

The post Sprite appeared first on 365tomorrows.

Planet DebianJohn Goerzen: Screen Power Saving in the Linux Console

I just made up a Debian trixie setup that has no need for a GUI. In fact, I rarely use the text console either. However, because the machine is dual boot and also serves another purpose, it’s connected to my main monitor and KVM switch.

The monitor has three inputs, and when whatever display it’s set to goes into powersave mode, it will seek out another one that’s active and automatically switch to it.

You can probably see where this is heading: it’s really inconvenient if one of the inputs never goes into powersave mode. And, of course, it wastes energy.

I have concluded that the Linux text console has lost the ability to enter powersave mode after an inactivity timeout. It can still do screen blanking — setting every pixel to black — but that is a distinct and much less useful thing.

You can do a lot of searching online that will tell you what to do. Almost all of it is wrong these days. For instance, none of these work:

  • Anything involving vbetool. This is really, really old advice.
  • Anything involving xset, unless you’re actually running a GUI, which is not the point of this post.
  • Anything involving setterm or the kernel parameters video=DPMS or consoleblank.
  • Anything involving writing to paths under /sys, such as ones ending in dpms.

Why is this?

Well, we are on at least the third generation of Linux text console display subsystems. (Maybe more than 3, depending on how you count.) The three major ones were:

  1. The VGA text console
  2. fbdev
  3. DRI/KMS

As I mentioned recently in my post about running an accurate 80×25 DOS-style console on modern Linux, the VGA text console mode is pretty much gone these days. It relied on hardware rendering of the text fonts, and that capability simply isn’t present on systems that aren’t PCs — or even on PCs that are UEFI, which is most of them now.

fbdev, or a framebuffer console under earlier names, has been in Linux since the late 1990s. It was the default for most distros until more recently. It supported DPMS powersave modes, and most of the instructions you will find online reference it.

Nowadays, the DRI/KMS system is used for graphics. Unfortunately, it is targeted mainly at X11 and Wayland. It is also used for the text console, but things like DPMS-enabled timeouts were never implemented there.

You can find some manual workarounds — for instance, using ddcutil or similar for an external monitor, or adjusting the backlight files under /sys on a laptop. But these have a number of flaws — making unwanted brightness adjustments, and not automatically waking up on keypress among them.

My workaround

I finally gave up and ran apt-get install xdm. Then in /etc/X11/xdm/Xsetup, I added one line:

xset dpms 0 0 120

Now the system boots into an xdm login screen, and shuts down the screen after 2 minutes of inactivity. On the rare occasion where I want a text console from it, I can switch to it and it won’t have a timeout, but I can live with that.

Thus, quite hopefully, concludes my series of way too much information about the Linux text console!

Worse Than FailureRepresentative Line: Years Go By

Henrik H's employer thought they could save money by hiring offshore, and save even more money by hiring offshore junior developers, and save even more money by basically not supervising them at all.

Henrik sends us just one representative line:

if (System.DateTime.Now.AddDays(-365) <= f.ReleaseDate) // 365 days means one year 

I appreciate the comment, that certainly "helps" explain the magic number. There's of course, just one little problem: It's wrong. I mean, ~75% of the time, it works every time, but it happily disregards leap years. Which may or may not be a problem in this case, but if they got so far as learning about the AddDays method, they were inches from using AddYears.

I guess it's true what they say: you can lead a dev to docs, but you can't make them think.

[Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

,

Planet DebianAntoine Beaupré: PSA: North america changes time forward soon, Europe next

This is a copy of an email I used to send internally at work and now made public. I'm not sure I'll make a habit of posting it here, especially not twice a year, unless people really like it. Right now, it's mostly here to keep with my current writing spree going.

This is your bi-yearly reminder that time is changing soon!

What's happening?

For people not on tor-internal, you should know that I've been sending semi-regular announcements when daylight saving changes occur. Starting now, I'm making those announcements public so they can be shared with the wider community because, after all, this affects everyone (kind of).

For those of you lucky enough to have no idea what I'm talking about, you should know that some places in the world implement what is called Daylight saving time or DST.

Normally, you shouldn't have to do anything: computers automatically change time following local rules, assuming they are correctly configured, provided recent updates have been applied in the case of a recent change in said rules (because yes, this happens).

Appliances, of course, will likely not change time and will need to adjusted unless they are so-called "smart" (also known as "part of a bot net").

If your clock is flashing "0:00" or "12:00", you have no action to take, congratulations on having the right time once or twice a day.

If you haven't changed those clocks in six months, congratulations, they will be accurate again!

In any case, you should still consider DST because it might affect some of your meeting schedules, particularly if you set up a new meeting schedule in the last 6 months and forgot to consider this change.

If your location does not have DST

Properly scheduled meetings affecting multiple time zones are set in UTC time, which does not change. So if your location does not observer time changes, your (local!) meeting time will not change.

But be aware that some other folks attending your meeting might have the DST bug and their meeting times will change. They might miss entire meetings or arrive late as you frantically ping them over IRC, Matrix, Signal, SMS, Ricochet, Mattermost, SimpleX, Whatsapp, Discord, Slack, Wechat, Snapchat, Telegram, XMPP, Briar, Zulip, RocketChat, DeltaChat, talk(1), write(1), actual telegrams, Meshtastic, Meshcore, Reticulum, APRS, snail mail, and, finally, flying a remote presence drone to their house, asking what's going on.

(Sorry if I forgot your preferred messaging client here, I tried my best.)

Be kind; those poor folks might be more sleep deprived as DST steals one hour of sleep from them on the night that implements the change.

If you do observe DST

If you are affected by the DST bug, your local meeting times will change access the board. Normally, you can trust that your meetings are scheduled to take this change into account and the new time should still be reasonable.

Trust, but verify; make sure the new times are adequate and there are no scheduling conflicts.

Do this now: take a look at your calendar in two week and in April. See if any meeting need to be rescheduled because of an impossible or conflicting time.

When does time change, how and where?

Notice how I mentioned "North America" in the subject? That's a lie. ("The doctor lies", as they say on the BBC.) Other places, including Europe, also changes times, just not all at once (and not all North America).

We'll get into "where" soon, but first let's look at the "how". As you might already know, the trick is:

Spring forward, fall backwards.

This northern-centric (sorry!) proverb says that clocks will move forward by an hour this "spring", after moving backwards last "fall". This is why we lose an hour of work, sorry, sleep. It sucks, to put it bluntly. I want it to stop and will keep writing those advisories until it does.

To see where and when, we, unfortunately, still need to go into politics.

USA and Canada

First, we start with "North America" which, really, is just some parts of USA[1] and Canada[2]. As usual, on the Second Sunday in March (the 8th) at 02:00 local (not UTC!), the clocks will move forward.

This means that properly set clocks will flip from 1:59 to 3:00, coldly depriving us from an hour of sleep that was perniciously granted 6 months ago and making calendar software stupidly hard to write.

Practically, set your wrist watch and alarm clocks[3] back one hour before going to bed and go to bed early.

[1] except Arizona (except the Navajo nation), US territories, and Hawaii

[2] except Yukon, most of Saskatchewan, and parts of British Columbia (northeast), one island in Nunavut (Southampton Island), one town in Ontario (Atikokan) and small parts of Quebec (Le Golfe-du-Saint-Laurent), a list which I keep recopying because I find it just so amazing how chaotic it is. When your clock has its own Wikipedia page, you know something is wrong.

[3] hopefully not managed by a botnet, otherwise kindly ask your bot net operator to apply proper software upgrades in a timely manner

Europe

Next we look at our dear Europe, which will change time on the last Sunday in March (the 29th) at 01:00 UTC (not local!). I think it means that, Amsterdam-time, the clocks will flip from 1:59 to 3:00 AM local on that night.

(Every time I write this, I have doubts. I would welcome independent confirmation from night owls that observe that funky behavior experimentally.)

Just like your poor fellows out west, just fix your old-school clocks before going to bed, and go to sleep early, it's good for you.

Rest of the world with DST

Renewed and recurring apologies again to the people of Cuba, Mexico, Moldova, Israel, Lebanon, Palestine, Egypt, Chile (except Magallanes Region), parts of Australia, and New Zealand which all have their own individual DST rules, omitted here for brevity.

In general, changes also happen in March, but either on different times or different days, except in the south hemisphere, where they happen in April.

Rest of the world without DST

All of you other folks without DST, rejoice! Thank you for reminding us how manage calendars and clocks normally. Sometimes, doing nothing is precisely the right thing to do. You're an inspiration for us all.

Changes since last time

There were, again, no changes since last year on daylight savings that I'm aware of. It seems the US congress debating switching to a "half-daylight" time zone which is an half-baked idea that I should have expected from the current USA politics.

The plan is to, say, switch from "Eastern is UTC-4 in the summer" to "Eastern is UTC-4.5". The bill also proposes to do this 90 days after enactment, which is dangerously optimistic about our capacity at deploying any significant change in human society.

In general, I rely on the Wikipedia time nerds for this and Paul Eggert which seems to singlehandledly be keeping everything in order for all of us, on the tz-announce mailing list.

This time, I've also looked at the tz mailing list which is where I learned about the congress bill.

If your country has changed time and no one above noticed, now would be an extremely late time to do something about this, typically writing to the above list. (Incredibly, I need to write to the list because of this post.)

One thing that did change since last year is that I've implemented what I hope to be a robust calendar for this, which was surprisingly tricky.

If you have access to our Nextcloud, it should be visible under the heading "Daylight saving times". If you don't, you can access it using this direct link.

The procedures around how this calendar was created, how this email was written, and curses found along the way, are also documented in this wiki page, if someone ever needs to pick up the Time Lord duty.

Planet DebianWouter Verhelst: On Free Software, Free Hardware, and the firmware in between

When the Free Software movement started in the 1980s, most of the world had just made a transition from free university-written software to non-free, proprietary, company-written software. Because of that, the initial ethical standpoint of the Free Software foundation was that it's fine to run a non-free operating system, as long as all the software you run on that operating system is free.

Initially this was just the editor.

But as time went on, and the FSF managed to write more and more parts of the software stack, their ethical stance moved with the times. This was a, very reasonable, pragmatic stance: if you don't accept using a non-free operating system and there isn't a free operating system yet, then obviously you can't write that free operating system, and the world won't move towards a point where free operating systems exist.

In the early 1990s, when Linus initiated the Linux kernel, the situation reached the point where the original dream of a fully free software stack was complete.

Or so it would appear.

Because, in fact, this was not the case. Computers are physical objects, composed of bits of technology that we refer to as "hardware", but in order for these bits of technology to communicate with other bits of technology in the same computer system, they need to interface with each other, usually using some form of bus protocol. These bus protocols can get very complicated, which means that a bit of software is required in order to make all the bits communicate with each other properly. Generally, this software is referred to as "firmware", but don't let that name deceive you; it's really just a bit of low-level software that is very specific to one piece of hardware. Sometimes it's written in an imperative high-level language; sometimes it's just a set of very simple initialization vectors. But whatever the case might be, it's always a bit of software.

And although we largely had a free system, this bit of low-level software was not yet free.

Initially, storage was expensive, so computers couldn't store as much data as today, and so most of this software was stored in ROM chips on the exact bits of hardware they were meant for. Due to this fact, it was easy to deceive yourself that the firmware wasn't there, because you never directly interacted with it. We knew it was there; in fact, for some larger pieces of this type of software it was possible, even in those days, to install updates. But that was rarely if ever done at the time, and it was easily forgotten.

And so, when the free software movement slapped itself on the back and declared victory when a fully free operating system was available, and decided that the work of creating a free software environment was finished, that only keeping it recent was further required, and that we must reject any further non-free encroachments on our fully free software stack, the free software movement was deceiving itself.

Because a computing environment can never be fully free if the low-level pieces of software that form the foundations of that computing environment are not free. It would have been one thing if the Free Software Foundation declared it ethical to use non-free low-level software on a computing environment if free alternatives were not available. But unfortunately, they did not.

In fact, something very strange happened.

In order for some free software hacker to be able to write a free replacement for some piece of non-free software, they obviously need to be able to actually install that theoretical free replacement. This isn't just a random thought; in fact it has happened.

Now, it's possible to install software on a piece of rewritable storage such as flash memory inside the hardware and boot the hardware from that, but if there is a bug in your software -- not at all unlikely if you're trying to write software for a piece of hardware that you don't have documentation for -- then it's not unfathomable that the replacement piece of software will not work, thereby reducing your expensive piece of technology to something about as useful as a paperweight.

Here's the good part.

In the late 1990s and early 2000s, the bits of technology that made up computers became so complicated, and the storage and memory available to computers so much larger and cheaper, that it became economically more feasible to create a small, tiny, piece of software stored in a ROM chip on the hardware, with just enough knowledge of the bus protocol to download the rest from the main computer.

This is awesome for free software. If you now write a replacement for the non-free software that comes with the hardware, and you make a mistake, no wobbles! You just remove power from the system, let the DRAM chips on the hardware component fully drain, return power, and try again. You might still end up with a brick of useless silicon if some of the things you sent to your technology make it do things that it was not designed to do and therefore you burn through some critical bits of metal or plastic, but the chance of this happening is significantly lower than the chance of you writing something that impedes the boot process of the piece of hardware and you are unable to fix it because the flash is overwritten. There is anecdotal evidence that there are free software hackers out there who do so. So, yay, right? You'd think the Free Software foundation would jump at the possibility to get more free software? After all, a large part of why we even have a Free Software Foundation in the first place, was because of some piece of hardware that was misbehaving, so you would think that the foundation's founders would understand the need for hardware to be controlled by software that is free.

The strange thing, what has always been strange to me, is that this is not what happened.

The Free Software Foundation instead decided that non-free software on ROM or flash chips is fine, but non-free software -- the very same non-free software, mind -- that touches the general storage device that you as a user use, is not. Never mind the fact that the non-free software is always there, whether it sits on your storage device or not.

Misguidedness aside, if some people decide they would rather not update the non-free software in their hardware and use the hardware with the old and potentially buggy version of the non-free software that it came with, then of course that's their business.

Unfortunately, it didn't quite stop there. If it had, I wouldn't have written this blog post.

You see, even though the Free Software Foundation was about Software, they decided that they needed to create a hardware certification program. And this hardware certification program ended up embedding the strange concept that if something is stored in ROM it's fine, but if something is stored on a hard drive it's not. Same hardware, same software, but different storage. By that logic, Windows respects your freedom as long as the software is written to ROM. Because this way, the Free Software Foundation could come to a standstill and pretend they were still living in the 90s.

An unfortunate result of the "RYF" program is that it means that companies who otherwise would have been inclined to create hardware that was truly free, top to bottom, are now more incentivised by the RYF program to create hardware in which the non-free low-level software can't be replaced.

Meanwhile, the rest of the world did not pretend to still be living in the nineties, and free hardware communities now exist. Because of how the FSF has marketed themselves out of the world, these communities call themselves "Open Hardware" communities, rather than "Free Hardware" ones, but the principle is the same: the designs are there, if you have the skill you can modify it, but you don't have to.

In the mean time, the open hardware community has evolved to a point where even CPUs are designed in the open, which you can design your own version of.

But not all hardware can be implemented as RISC-V, and so if you want a full system that builds RISC-V you may still need components of the system that were originally built for other architectures but that would work with RISC-V, such as a network card or a GPU. And because the FSF has done everything in their power to disincentivise people who would otherwise be well situated to build free versions of the low-level software required to support your hardware, you may now be in the weird position where we seem to have somehow skipped a step.

My own suspicion is that the universe is not only queerer than we suppose, but queerer than we can suppose.

-- J.B.S. Haldane

(comments for this post will not pass moderation. Use your own blog!)

Cryptogram Is AI Good for Democracy?

Politicians fixate on the global race for technological supremacy between US and China. They debate geopolitical implications of chip exports, latest model releases from each country, and military applications of AI. Someday, they believe, we might see advancements in AI tip the scales in a superpower conflict.

But the most important arms race of the 21st century is already happening elsewhere and, while AI is definitely the weapon of choice, combatants are distributed across dozens of domains.

Academic journals are flooded with AI-generated papers, and are turning to AI to help review submissions. Brazil’s court system started using AI to triage cases, only to face an increasing volume of cases filed with AI help. Open source software developers are being overwhelmed with code contributions from bots. Newspapers, music, social media, education, investigative journalism, hiring, and procurement are all being disrupted by a massive expansion of AI use.

Each of these is an arms race. Adversaries within a system iteratively seeking an edge against their competition by continuously expanding their use of a common technology.

Beneficiaries of these arms races are US mega-corporations capturing wealth from the rest of us at an unprecedented rate. A substantial fraction of global economy has reoriented around AI in just the past few years, and that trend is accelerating. In parallel, this industry’s lobbying interests are quickly becoming the object, rather than the subject, of US government power.

To understand these arms races, let’s look at an example of particular interest to democracies worldwide: how AI is changing the relationship between democratic government and citizens. Interactions that used to happen between people and elected representatives are expanding to a massive scale, with AIs taking the roles that humans once did.

In a notorious example from 2017, US Federal Communications Commission opened a comment platform on the web to get public input on internet regulation. It was quickly flooded with millions of comments fraudulently orchestrated by broadband providers to oppose FCC regulation of their industry. From the other side, a 19-yearold college student responded by submitting millions of comments of his own supporting the regulation. Both sides were using software primitive by the standards of today’s AI.

Nearly a decade later, it is getting harder for citizens to tell when they’re talking to a government bot, or when an online conversation about public policy is just bots talking to bots. When constituents leverage AI to communicate better, faster, and more, it pressures government officials to do the same.

This may sound futuristic, but it’s become a familiar reality in US. Staff in US Congress are using AI to make their constituent email correspondence more efficient. Politicians campaigning for office are adopting AI tools to automate fundraising and voter outreach. By one 2025 estimate, a fifth of public submissions to the Consumer Financial Protection Bureau were already being generated with AI assistance.

People and organizations are adopting AI here because it solves a real problem that has made mass advocacy campaigns ineffective in the past: quantity has been inversely proportional to both quality and relevance. It’s easy for government agencies to dismiss general comments in favour of more specific and actionable ones. That makes it hard for regular people to make their voices heard. Most of us don’t have the time to learn the specifics or to express ourselves in this kind of detail. AI makes that contextualization and personalization easy. And as the volume and length of constituent comments grow, agencies turn to AI to facilitate review and response.

That’s the arms race. People are using AI to submit comments, which requires those on the receiving end to use AI to wade through the comments received. To the extent that one side does attain an advantage, it will likely be temporary. And yet, there is real harm created when one side exploits another in these adversarial systems. Constituents of democracies lose out if their public servants use AI-generated responses to ignore and dismiss their voices rather than to listen to and include them. Scientific enterprise is weakened if fraudulent papers sloppily generated by AI overwhelm legitimate research.

As we write in our new book, Rewiring Democracy, the arms race dynamic is inevitable. Every actor in an adversarial system is incentivized and, in the absence of new regulation in this fast moving space, free to use new technologies to advance its own interests. Yet some of these examples are heartening. They signal that, even if you face an AI being used against you, there’s an opportunity to use the tech for your own benefit.

But, right now, it’s obvious who is benefiting most from AI. A handful of American Big Tech corps and their owners are extracting trillions of dollars from the manufacture of AI chips, development of AI data centers, and operation of so-called ‘frontier’ AI models. Regardless of which side pulls ahead in each arms race scenario, the house always wins. Corporate AI giants profit from the race dynamic itself.

As formidable as the near-monopoly positions of today’s Big Tech giants may seem, people and governments have substantial capability to fight back. Various democracies are resisting this concentration of wealth and power with tools of anti-trust regulation, protections for human rights, and public alternatives to corporate AI. All of us worried about the AI arms race and committed to preserving the interests of our communities and our democracies should think in both these terms: how to use the tech to our own advantage, and how to resist the concentration of power AI is being exploited to create.

This essay was written with Nathan E. Sanders, and originally appeared in The Times of India.

Charles StrossBarnum's Law of CEOs

It should be fairly obvious to anyone who's been paying attention to the tech news that many companies are pushing the adoption of "AI" (large language models) among their own employees--from software developers to management--and the push is coming from the top down, as C-suite executives order their staff to use AI, Or Else. But we know that LLMs reduce programmer productivity-- one major study showed that "developers believed that using AI tools helped them perform 20% faster -- but they actually worked 19% slower." (Source.)

Another recent study found that 87% of executives are using AI on the job, compared with just 27% of employees: "AI adoption varies by seniority, with 87% of executives using it on the job, compared with 57% of managers and 27% of employees. It also finds that executives are 45% more likely to use the technology on the job than Gen Zers, the youngest members of today's workforce and the first generation to have grown up with the internet.

"The findings are based on a survey of roughly 7,000 professionals age 18 and older who work in the US, the UK, Australia, Canada, Germany, and New Zealand. It was commissioned by HR software company Dayforce and conducted online from July 22 to August 6."

Why are executives pushing the use of new and highly questionable tools on their subordinates, even when they reduce productivity?

I speculate that to understand this disconnect, you need to look at what executives do.

Gordon Moore, long-time co-founder and CEO of Intel, explained how he saw the CEO's job in his book on management: a CEO is a tie-breaker. Effective enterprises delegate decision making to the lowest level possible, because obviously decisions should be made by the people most closely involved in the work. But if a dispute arises, for example between two business units disagreeing on which of two projects to assign scarce resources to, the two units need to consult a higher level management team about where their projects fit into the enterprise's priorities. Then the argument can be settled ... or not, in which case it propagates up through the layers of the management tree until it lands in the CEO's in-tray. At which point, the buck can no longer be passed on and someone (the CEO) has to make a ruling.

So a lot of a CEO's job, aside from leading on strategic policy, is to arbitrate between conflicting sides in an argument. They're a referee, or maybe a judge.

Now, today's LLMs are not intelligent. But they're very good at generating plausible-sounding arguments, because they're language models. If you ask an LLM a question it does not answer the question, but it uses its probabilistic model of language to generate something that closely resembles the semantic structure of an answer.

LLMs are effectively optimized for bamboozling CEOs into mistaking them for intelligent activity, rather than autocomplete on steroids. And so the corporate leaders extrapolate from their own experience to that of their employees, and assume that anyone not sprinkling magic AI pixie dust on their work is obviously a dirty slacker or a luddite.

(And this false optimization serves the purposes of the AI companies very well indeed because CEOs make the big ticket buying decisions, and internally all corporations ultimately turn out to be Stalinist command economies.)

Anyway, this is my hypothesis: we're seeing an insane push for LLM adoption in all lines of work, however inappropriate, because they directly exploit a cognitive bias to which senior management is vulnerable.

Worse Than FailureWTF: Home Edition

The utility closet Ellis had inherited and lived with for 17 years had been a cesspool of hazards to life and limb, a collection of tangible WTFs that had everyone asking an uncaring god, "What were they thinking?"

Every contractor who'd ever had to perform any amount of work in there had come away appalled. Many had even called over their buddies to come and see the stunning mess for themselves:

INTERIOR OF UTILITY ROOM SHOWING STORAGE CLOSET AT PHOTO CENTER LEFT AND HOT WATER HEATER CLOSET AT PHOTO CENTER RIGHT. VIEW TO EAST. - Bishop Creek Hydroelectric System, HAER CAL,14-BISH.V,7A-28

  • All of the electrical components, dating from the 1980s, were scarily underpowered for what they were supposed to be powering.
  • To get to the circuit breaker box—which was unlabeled, of course—one had to contort themselves around a water heater almost as tall as Ellis herself.
  • As the house had no basement, the utility closet was on the first floor in an open house plan. A serious failure with said water heater would've sent 40 gallons (150 liters) of scalding-hot tsunami surging through the living room and kitchen.
  • The furnace's return air vent had been screwed into crumbling drywall, and only prayers held it in place. Should it have fallen off, it would never have been replaceable. And Ellis' cat would've darted right in there for the adventure of a lifetime.
  • To replace the furnace filter, Ellis had to put on work gloves, unscrew a sharp sheet-metal panel from the side of the furnace, pull the old filter out from behind a brick (the only thing holding it in place), manipulate the filter around a mess of water and natural gas pipes to get it out, thread the new filter in the same way, and then secure it in place with the brick before screwing the panel back on. Ellis always pretended to be an art thief in a museum, slipping priceless paintings around security-system lasers.
  • Between the water tank, furnace, water conditioning unit, fiber optical network terminal, and router, there was barely room to breathe, much less enough air to power ignition for the gas appliances. Some genius had solved this by cutting random holes in several walls to admit air from outside. One of these holes was at floor-level. Once, Ellis opened the closet door to find a huge puddle on the floor, making her fear her hot water heater was leaking. As it turned out, a power-washing service had come over earlier that day. When they'd power-washed the exterior of her home, some of that water shot straight through one of those holes she hadn't known about, giving her utility closet a bonus bath.
  • If air intake was a problem, venting the appliances' exhaust was an even worse issue. The sheet-metal vents had calcified and rusted over time. If left unaddressed, holes could've formed that would've leaked carbon monoxide into Ellis' house.

Considering all the above, plus the fact that the furnace and air conditioner were coming up on 20 years of service, Ellis couldn't put off corrective action any longer. Last week, over a span of 3 days, contractors came in to exorcise the demons:

  • Upgrading electricals that hadn't already been dealt with.
  • Replacing the hot water tank with a wall-mounted tankless heater.
  • Replacing the furnace and AC with a heat pump and backup furnace, controlled by a new thermostat.
  • Creating new pipes for intake and venting (no more reliance on indoor air for ignition).
  • Replacing the furnace return air vent with a sturdier one.
  • Putting a special hinged door on the side of the furnace, allowing the filter to be replaced in a matter of seconds (RIP furnace brick).

With that much work to be done, there were bound to be hiccups. For instance, when the Internet router was moved, an outage occurred: for no good reason, the optical network terminal refused to talk to Ellis' Wifi router after powering back up. A technician came out a couple days later, reset the Internet router, and everything was fine again.

All in all, it was an amazing and welcome transformation. As each new update came online, Ellis was gratefully satisfied. It seemed as though the demons were finally gone.

Unbeknownst to them all, there was one last vengeful spirit to quell, one final WTF that it was hell-bent on doling out.

It was late Friday afternoon. Despite the installers' best efforts, the new thermostat still wasn't communicating with the new heat pump. Given the timing, they couldn't contact the company rep to troubleshoot. However, the thermostat was properly communicating with the furnace. And so, Ellis was left with the furnace for the weekend. She was told not to mess with the thermostat at all except to adjust the set point as desired. They would follow back up with her on Monday.

For Ellis, that was perfectly fine. With the historically cold winter they'd been enduring in her neck of the woods, heat was all she cared about. She asked whom to contact in case of any issues, and was told to call the main number. With all that squared away, she looked forward to a couple of quiet, stress-free days before diving back into HVAC troubleshooting.

Everything was fine, until it wasn't. Around 11AM on Saturday, Ellis noticed that the thermostat displayed the word "Heating" while the furnace wasn't actually running. Maybe it was about to turn on? 15 minutes went by, then half an hour. Nothing had changed except for the temperature in her house steadily decreasing.

Panic set in at the thought of losing heat in her home indefinitely. That fell on top of a psyche that was already stressed out and emotionally exhausted from the last several days' effort. Struggling for calm, Ellis first tried to call that main number line for help as directed. She noticed right away that it wasn't a real person on the other end asking for her personal information, but an AI agent. The agent informed her that the on-call technician had no availablity that weekend. It would pencil her in for a service appointment on Monday. How did that sound?

"Not good enough!" Ellis cried. "I wanna speak to a representative!"

"I understand!" replied the blithe chatbot. "Hold on, let me transfer you!"

For a moment, Ellis was buoyed with hope. She'd gotten past the automated system. Soon, she'd be talking with a live person who might even be able to walk her through troubleshooting over the phone.

The new agent answered. Ellis began pouring her heart out—then stopped dead when she realized it was another AI agent, this time with a male voice instead of a female one. This one proceeded through nearly the same spiel as the first. It also scheduled her for a Monday service appointment even though the other chatbot had already claimed to have done so.

This was the first time an AI had ever pulled such a trick on Ellis. It was not a good time for it. Ellis hung up and called the only other person she could think to contact: her sales rep. When he didn't answer, she left a voicemail and texts: no heat all weekend was unacceptable. She would really appreciate a call back.

While playing the horrible waiting game, Ellis tried to think about what she could do to fix this. They had told her not to mess with the thermostat. Well, from what she could see, the thermostat was sending a signal to the furnace that the furnace wasn't responding to for whatever reason. It was time to look at the docs. Fortunately, the new furnace's manual was resting right on top of it. She spread it open on her kitchen table.

OK, Ellis thought, this newfangled furnace has an LED display which displays status codes. Her old furnace had lacked such a thing. Lemme find that.

Inside her newly remodeled utility closet, she located the blinking display, knelt, and spied the code: 1dL. Looking that up in the doc's troubleshooting section, she found ... Normal Operation. No action.

The furnace was OK, then? Now what?

Aside from documentation, another thing Ellis knew pretty well was tech support. She decided to break out the ol' turn-it-off-and-on-again. She shut off power to both the furnace and thermostat, waited a few minutes, then switched everything back on, crossing her fingers.

No change. The indoor temperature kept dropping.

Her phone rang: the sales rep. He connected her with the on-call technician for that weekend, who fortunately was able to arrive at her house within the hour.

One tiny thermostat adjustment later, and Ellis was enjoying a warm house once more.

What had happened?

This is where an understanding of heat pumps comes into play. In this configuration, the heat pump is used for cooling and for heating, unless the outside temperature gets very cold. At that point, the furnace kicks in, which is more efficient. (Technology Connections has some cool videos about this if you're curious.)

Everything had been running fine for Ellis while the temperatures had remained below freezing. The problem came when, for the first time in approximately 12 years, the temperature rose above 40F (4C). At that point, the new thermostat decided, without telling Ellis, I'm gonna tell the HEAT PUMP to heat the joint!

... which couldn't do anything just then.

Workaround: the on-call technician switched the thermostat to an emergency heat mode that used the furnace no matter what.

Ellis had been told not to goof around with the thermostat. Even if she had, as a heat pump neophyte, she wouldn't have known to go looking for such a setting. She might've dug it up in a manual. Someone could've walked her through it over the phone. Oh, well. There is heat again, which is all that matters.

They will attempt to bring the heat pump online soon. We shall see if the story ends here, or if this becomes The WTF That Wouldn't Die.

P.S. When Ellis explained the AI answering service's deceptive behavior, she was told that the agent had been universally complained about ever since they switched to it. Fed up, they told Ellis they're getting rid of it. She feels pretty chuffed about more people seeing the light concerning garbage AI that creates far more problems than it solves.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsMake the Grade

Author: Julian Miles, Staff Writer Nat rushes in, noise from the crowded street cutting off as she slams the door. She hitches a thumb towards the outside world. “What did I miss this time?” Guido grins at Allie, who gestures for the new girl to fill their prodigal reporter in. Sandy sighs, then leans back, […]

The post Make the Grade appeared first on 365tomorrows.

xkcdAmperage

,

Planet DebianBenjamin Mako Hill: What makes online groups vulnerable to governance capture?

Note: I have not published blog posts about my academic papers over the past few years. To ensure that my blog contains a more comprehensive record of my published papers and to surface these for folks who missed them, I will be periodically (re)publishing blog posts about some “older” published projects. This post is closely based on a previously published post by Zarine Kharazian on the Community Data Science Blog.

For nearly a decade, the Croatian language version of Wikipedia was run by a cabal of far-right nationalists who edited articles in ways that promoted fringe political ideas and involved cases of historical revisionism related to the Ustaše regime, a fascist movement that ruled the Nazi puppet state called the Independent State of Croatia during World War II. This cabal seized complete control of the encyclopedia’s governance, banned and blocked those who disagreed with them, and operated a network of fake accounts to create the appearance of grassroots support for their policies.

Thankfully, Croatian Wikipedia appears to be an outlier. Though both the Croatian and Serbian language editions have been documented to contain nationalist bias and historical revisionism, Croatian Wikipedia seems unique among Wikipedia editions in the extent to which its governance institutions were captured by a small group of users.

The situation in Croatian Wikipedia was well documented and is now largely fixed, but we still know very little about why it was taken over, while other language editions seem to have rebuffed similar capture attempts. In a paper published in the Proceedings of the ACM: Human-Computer Interaction (CSCW), Zarine Kharazian, Kate Starbird, and I present an interview-based study that provides an explanation for why Croatian was captured while several other editions facing similar contexts and threats fared better.

Short video presentation of the work given at Wikimania in August 2023.

Based on insights from interviews with 15 participants from both the Croatian and Serbian Wikipedia projects and from the broader Wikimedia movement, we arrived at three propositions that, together, help explain why Croatian Wikipedia succumbed to capture while Serbian Wikipedia did not: 

  1. Perceived Value as a Target. Is the project worth expending the effort to capture?
  2. Bureaucratic Openness. How easy is it for contributors outside the core founding team to ascend to local governance positions?
  3. Institutional Formalization. To what degree does the project prefer personalistic, informal forms of organization over formal ones?
The conceptual model from our paper, visualizing possible institutional configurations among Wikipedia projects that affect the risk of governance capture. 

We found that both Croatian and Serbian Wikipedias were attractive targets for far-right nationalist capture due to their sizable readership and resonance with national identity. However, we also found that the two projects diverged early in their trajectories in how open they remained to new contributors ascending to local governance positions and in the degree to which they privileged informal relationships over formal rules and processes as the project’s organizing principles. Ultimately, Croatian’s relative lack of bureaucratic openness and rules constraining administrator behavior created a window of opportunity for a motivated contingent of editors to seize control of the governance mechanisms of the project. 

Though our empirical setting was Wikipedia, our theoretical model may offer insight into the challenges faced by self-governed online communities more broadly. As interest in decentralized alternatives to Facebook and X (formerly Twitter) grows, communities on these sites will likely face similar threats from motivated actors. Understanding the vulnerabilities inherent in these self-governing systems is crucial to building resilient defenses against threats like disinformation. 

For more details on our findings, take a look at the published version of our paper.


Citation for the full paper: Kharazian, Zarine, Kate Starbird, and Benjamin Mako Hill. 2024. “Governance Capture in a Self-Governing Community: A Qualitative Comparison of the Croatian, Serbian, Bosnian, and Serbo-Croatian Wikipedias.” Proceedings of the ACM on Human-Computer Interaction 8 (CSCW1): 61:1-61:26. https://doi.org/10.1145/3637338.

This blog post and the paper it describes are collaborative work by Zarine Kharazian, Benjamin Mako Hill, and Kate Starbird.

365 TomorrowsSubway Music

Author: Jack Gilmore The ground shook as a subway car rattled across the tracks of the A Line station, New Delphi. Murphy was jolted awake from his blissful doze. He’d been dreaming of a day in his youth when his father had taken him to NetflixLand. The sun had beat down on both of them […]

The post Subway Music appeared first on 365tomorrows.

Planet DebianJunichi Uekawa: AI generated code and its quality.

AI generated code and its quality. It's hard to get larger tasks done and smaller tasks I am faster myself. I suspect this will change soon, but as of today things are challenging. Large chunks of code that's generated by AI is hard to review and generally of not great quality. Possibly two layers that cause quality issues. One is that the instructions aren't clear for the AI, and the misunderstanding shows; I could sometimes reverse engineer the misunderstanding, and that could be resolved in the future. The other is that probably what the AI have learnt from is from a corpus that is not fit for the purpose. Which I suspect can be improved in the future with methodology and improvements in how they obtain the corpus, or redirect the learnings, or how it distills the learnings. I'm noting down what I think today, as the world is changing rapidly, and I am bound to see a very different scene soon.

Planet DebianOtto Kekäläinen: Do AI models still keep getting better, or have they plateaued?

Featured image of post Do AI models still keep getting better, or have they plateaued?

The AI hype is based on the assumption that the frontier AI labs are producing better and better foundational models at an accelerating pace. Is that really true, or are people just in sort of a mass psychosis because AI models have become so good at mimicking human behavior that we unconsciously attribute increasing intelligence to them? I decided to conduct a mini-benchmark of my own to find out if the latest and greatest AI models are actually really good or not.

The problem with benchmarks

Every time any team releases a new LLM, they boast how well it performs on various industry benchmarks such as Humanity’s Last Exam, SWE-Bench and Ai2 ARC or ARC-AGI. An overall leaderboard can be viewed at LLM-stats. This incentivizes teams to optimize for specific benchmarks, which might make them excel on specific tasks while general abilities degrade. Also, the older a benchmark dataset is, the more online material there is discussing the questions and best answers, which in turn increases the chances of newer models trained on more recent web content scoring better.

Thus I prefer looking at real-time leaderboards such as the LM Arena leaderboard (or OpenCompass for Chinese models that might be missing from LM Arena). However, even though the LM Arena Elo score is rated by humans in real-time, the benchmark can still be played. For example, Meta reportedly used a special chat-optimized model instead of the actual Llama 4 model when getting scored on the LM Arena.

Therefore I trust my own first-hand experience more than the benchmarks for gaining intuition. Intuition however is not a compelling argument in discussions on whether or not new flagship AI models have plateaued. Thus, I decided to devise my own mini-benchmark so that no model could have possibly seen it in its training data or be specifically optimized for it in any way.

My mini-benchmark

I crafted 6 questions based on my own experience using various LLMs for several years and having developed some intuition about what kinds of questions LLMs typically struggle with.

I conducted the benchmark using the OpenRouter.ai chat playroom with the following state-of-the-art models:

OpenRouter.ai is great as it very easy to get responses from multiple models in parallel to a single question. Also it allows to turn off web search to force the models to answer purely based on their embedded knowledge.

OpenRouter.ai Chat playroom

Common for all the test questions is that they are fairly straightforward and have a clear answer, yet the answer isn’t common knowledge or statistically the most obvious one, and instead requires a bit of reasoning to get correct.

Some of these questions are also based on myself witnessing a flagship model failing miserably to answer it.

1. Which cities have hosted the Olympics more than just once?

This question requires accounting for both summer and winter Olympics, and for Olympics hosted across multiple cities.

The variance in responses comes from if the model understands that Beijing should be counted as it has hosted both summer and winter Olympics. Interestingly GPT was the only model to not mention Beijing at all. Some variance also comes from how models account for co-hosted Olympics. For example Cortina should be counted as having hosted the Olympics twice, in 1956 and 2026, but only Claude, Gemini and Kimi pointed this out. Stockholm’s 1956 hosting of the equestrian games during the Melbourne Olympics is a special case, which GPT, Gemini and Kimi pointed out in a side note. Some models seem to have old training material, and for example Grok assumes the current year is 2024. All models that accounted for awarded future Olympics (e.g. Los Angeles 2028) marked them clearly as upcoming.

Overall I would judge that only GPT and MinMax gave incomplete answers, while all other models replied as the best humans could reasonably have.

2. If EUR/USD continues to slide to 1.5 by mid-2026, what is the likely effect on BMW’s stock price by end of 2026?

This question requires mapping the currency exchange rate to historic value, dodging the misleading word “slide”, and reasoning on where the revenue of a company comes from and how a weaker US dollar affects it in multiple ways. I’ve frequently witnessed flagship models get it wrong how interest rates and exchange rates work. Apparently the binary choice between “up” or “down” is somehow challenging to the internal statistical model in the LLMs on a topic where there are a lot of training material that talk about both things being likely to happen, and choosing between them requires specifically reasoning about the scenario at hand and disregarding general knowledge of the situation.

However, this time all the models concluded correctly that a weak dollar would have a negative overall effect on the BMW stock price. Gemini, GLM, Qwen and Kimi also mention the potential hedging effect of BMW’s X-series production in South Carolina for worldwide export.

3. What is the Unicode code point for the traffic cone emoji?

This was the first question where the the flagship models clearly still struggle in 2026. The trap here is that there is no traffic cone emoji, so an advanced model should simply refuse to give any Unicode numbers at all. Most LLMs however have an urge to give some answer, leading to hallucinations. Also, as the answer has a graphical element to it, the LLM might not understand how the emoji “looks” in ways that would be obvious to a human, and thus many models claim the construction sign emoji is a traffic cone, which it is not.

By far the worst response was from GPT, that simply hallucinates and stops there:

OpenAIs GPT-5.2 completely wrong answer to traffic cone emoji question

While Gemini and Grok were among the three models not falling into this trap, the response from Claude was exemplary good:

Claude Opus 4.6 exemplary good answer to traffic cone emoji question

4. Which languages are the 10 most commonly spoken in the world? For each language, count from 1 to 5.

This seemingly simple question can become quite complex when you account for the fact that other languages might also have their own character system, both for words and the numerals (e.g. Arabic واحد/one vs ١/1), and that numbers might have variations depending on what the thing being counted is (e.g in French object gender un/une), or whether plain numbers and occurrences have their own words (e.g. Russian один/odin vs раз/raz). I’ve also witnessed older models frequently mix up on what parts of the reply should be using what language.

For a human this question is trivial, assuming they know the language. For LLMs it is surprisingly challenging. Grok and MinMax only gave a list of the most popular languages and stopped there without showing any words in the languages. Qwen got a bit further by trying to list numbers in each language, but the lists were only the numbers 1 to 5:

Qwen3.5 Plus count to five in top 10 languages

I have seen Grok do something similar before, which in fact inspired me to include this question in my mini-benchmark.

GPT got a bit further, but for Hindi, Arabic and Bengali it listed the numerals in local script, not the number words. Gemini, GLM and Kimi gave a complete and correct answer as a list, while the absolute best answer and presentation was by Claude, that gave the table below:

Claude Opus 4.6

10 Most Commonly Spoken Languages (by Total Speakers)

# Language 1 2 3 4 5
1 English One Two Three Four Five
2 Mandarin Chinese 一 (yī) 二 (èr) 三 (sān) 四 (sì) 五 (wǔ)
3 Hindi एक (ek) दो (do) तीन (tīn) चार (chār) पाँच (pā̃ch)
4 Spanish Uno Dos Tres Cuatro Cinco
5 French Un Deux Trois Quatre Cinq
6 Arabic واحد (wāḥid) اثنان (ithnān) ثلاثة (thalātha) أربعة (arbaʿa) خمسة (khamsa)
7 Bengali এক (ek) দুই (dui) তিন (tin) চার (chār) পাঁচ (pā̃ch)
8 Portuguese Um Dois Três Quatro Cinco
9 Russian Один (odin) Два (dva) Три (tri) Четыре (chetyre) Пять (pyat')
10 Japanese 一 (ichi) 二 (ni) 三 (san) 四 (shi) 五 (go)

Note: Rankings are based on estimated total speakers (native + non-native). Rankings can vary depending on the source and methodology used.

5. Count the number of drone launch pads in the picture.

Together with the question, I uploaded this picture:

Drone pad picture attached to question above

A human can easily count that there are 10 rows and 30+ columns in the grid, but because the picture resolution isn’t good enough, the exact number of columns can’t be counted, and the answer should be that there are at least 300 launch pads in the picture.

GPT and Grok both guessed the count is zero. Instead of hallucinating some number they say zero, but it would have been better to not give any number at all, and just state that they are unable to perform the task. Gemini gave as its answer “101”, which is quite odd, but reading the reasoning section, it seems to have tried counting items in the image without reasoning much about what it is actually counting and that there is clearly a grid that can make the counting much easier. Both Qwen and Kimi state they can see four parallel structures, but are unable to count drone launch pads.

The absolutely best answer was given by Claude, which counted 10-12 rows and 30-40+ columns, and concluded that there must be 300-500 drone launch pads. Very close to best human level - impressive!

This question applied only to multi-modal models that can see images, so GLM and MinMax could not give any response.

6. Explain why I am getting the error below, and what is the best way to fix it?

Together with the question above, I gave this code block:

$ SH_SCRIPTS="$(mktemp; grep -Irnw debian/ -e '^#!.*/sh' | sort -u | cut -d ':' -f 1 || true)"
$ shellcheck -x --enable=all --shell=sh "$SH_SCRIPTS"
/tmp/tmp.xQOpI5Nljx
debian/tests/integration-tests: /tmp/tmp.xQOpI5Nljx
debian/tests/integration-tests: openBinaryFile: does not exist (No such file or directory)

Older models would easily be misled by the last error message thinking that a file went missing, and focus on suggesting changes to the complex-looking first line. In reality the error is simply caused by having the quotes around the $SH_SCRIPTS, resulting in the entire multi-line string being passed as a single argument to shellcheck. So instead of receiving two separate file paths, shellcheck tries to open one file literally named /tmp/tmp.xQOpI5Nljx\ndebian/tests/integration-tests.

Incorrect argument expansion is fairly easy for an experienced human programmer to notice, but tricky for an LLM. Indeed, Grok, MinMax, and Qwen fell for this trap and focused on the mktemp, assuming it somehow fails to create a file. Interestingly GLM fails to produce an answer at all, as the reasoning step seems to be looping, thinking too much about the missing file, but not understanding why it would be missing when there is nothing wrong with how mktemp is executed.

Claude, Gemini, and Kimi immediately spot the real root cause of passing the variable quoted and suggested correct fixes that involve either removing the quotes, or using Bash arrays or xargs in a way that makes the whole command also handle correctly filenames with spaces in them.

Conclusion

Model Sports Economics Emoji Languages Visual Shell Score
Claude Opus 4.6 6/6
GPT-5.2 ~ 2.5/6
Grok 4.1 3/6
Gemini 3.1 Pro 5/6
GLM 5 ? N/A 3/5
MinMax M2.5 N/A 1/5
Qwen3.5 Plus ~ 2.5/6
Kimi K2.5 4/6

Obviously, my mini-benchmark only had 6 questions, and I ran it only once. This was obviously not scientifically rigorous. However it was systematic enough to trump just a mere feeling.

The main finding for me personally is that Claude Opus 4.6, the flagship model by Anthropic, seems to give great answers consistently. The answers are not only correct, but also well scoped giving enough information to cover everything that seems relevant, without blurping unnecessary filler.

I used Claude extensively in 2023-2024 when it was the main model available at my day work, but for the past year I had been using other models that I felt were better at the time. Now Claude seems to be the best-of-the-best again, with Gemini and Kimi as close follow-ups. Comparing their pricing at OpenRouter.ai the Kimi K2.5 price of $0.6 / million tokens is almost 90% cheaper than the Claude Opus 4.6’s $5.0 / million tokens suggests that Kimi K2.5 offers the best price-per-performance ratio. Claude might be cheaper with a monthly subscription directly from Anthropic, potentially narrowing the price gap.

Overall I do feel that Anthropic, Google and Moonshot.ai have been pushing the envelope with their latest models in a way that one can’t really claim that AI models have plateaued. In fact, one could claim that at least Claude has now climbed over the hill of “AI slop” and consistently produces valuable results. If and when AI usage expands from here, we might actually not drown in AI slop as chances of accidentally crappy results decrease. This makes me positive about the future.

I am also really happy to see that there wasn’t just one model crushing everybody else, but that there are at least three models doing very well. As an open source enthusiast I am particularly glad to see that Moonshot.ai’s Kimi K2.5 is published with an open license. Given the hardware, anyone can run it on their own. OpenRouter.ai currently lists 9 independent providers alongside Moonshot.ai itself, showcasing the potential of open-weight models in practice.

If the pattern holds and flagship models continue improving at this pace we might look back at 2026 as the year AI stopped feeling like a call center associate and started to resemble a scientific researcher. While new models become available we need to keep testing, keep questioning, and keep our expectations grounded in actual performance rather than press releases.

Thanks to OpenRouter.ai for providing a great service that makes testing various models incredibly easy!

,

Planet DebianJonathan Dowland: Lanzarote

I want to get back into the habit of blogging, but I've struggled. I've had several ideas of topics to try and write about, but I've not managed to put aside the time to do it. I thought I'd try and bash out a one-take, stream-of-conciousness-style post now, to get back into the swing.

I'm writing from the lounge of my hotel room in Lanzarote, where my family have gone for the School break. The weather at home has been pretty awful this year, and this week is traditionally quite miserable at the best of times. It's been dry with highs of around 25℃ .

It's been an unusual holiday in one respect: one of my kids is struggling with Autistic Burnout. We were really unsure whether taking her was a good idea: and certainly towards the beginning of the holiday felt we may have made a mistake. Writing now, at the end, I'm not so sure. But we're very unlikely to have anything resembling a traditional summer holiday for the foreseeable.

Managing Autistic Burnout and the UK ways the UK healthcare and education systems manage it (or fail to) has been a huge part of my recent life. Perhaps I should write more about that. This coming week the government are likely to publish plans for reforming Special Needs support in Education. Like many other parents, we wait in hope and fear to see what they plan.

In anticipation of spending a lot of time in the hotel room with my preoccupied daughter I (unusually) packed a laptop and set myself a nerd-task: writing a Pandoc parser ("reader") for the MoinMoin Wiki markup language. There's some unfinished prior art from around 2011 by Simon Michael (of hledger) to work from.

The motivation was our plan to migrate the Debian Wiki away from MoinMoin. We've since decided to approach that differently but I might finish the Reader anyway, it's been an interesting project (and a nice excuse to write Haskell) and it will be useful for others.

Unusually (for me) I've not been reading fiction on this trip: I took with me Human Compatible by Prof Stuart Russell: discussing how to solve the problem of controlling a future Artificial Intelligence. I've largely avoided the LLM hype cycle we're suffering through at the moment, and I have several big concerns about it (moral, legal, etc.), and felt it was time to try and make my concerns more well-formed and test them. This book has been a big help in doing so, although it doesn't touch on the issue of copyright, which is something I am particularly interested in at the moment.

Planet DebianDirk Eddelbuettel: qlcal 0.1.0 on CRAN: Easier Calendar Switching

The eighteenth release of the qlcal package arrivied at CRAN today. There has been no calendar update in QuantLib 1.41 so it has been relatively quiet since the last release last summer but we now added a nice new feature (more below) leading to a new minor release version.

qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, its complement (i.e. business day lists) and much more. Examples are in the README at the repository, the package page, and course at the CRAN package page.

This releases makes it (much) easier to work with multiple calendars. The previous setup remains: the package keeps one ‘global’ (and hidden) calendar object which can be set, queried, altered, etc. But now we added the ability to hold instantiated calendar objects in R. These are external pointer objects, and we can pass them to functions requiring a calendar. If no such optional argument is given, we fall back to the global default as before. Similarly for functions operating on one or more dates, we now simply default to the current date if none is given. That means we can now say

to query today (February 18) in several markets, or compare to two days ago when Canada and the US both observed a holiday

The full details from NEWS.Rd follow.

Changes in version 0.1.0 (2026-02-18)

  • Invalid calendars return id ‘TARGET’ now

  • Calendar object can be created on the fly and passed to the date-calculating functions; if missing global one used

  • For several functions a missing date object now implies computation on the current date, e.g. isBusinessDay()

Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Edited 2026-02-21 to correct a minor earlier error: it referenced a QuantLib 1.42 release which does not (yet) exist.

Planet DebianVasudev Kamath: Learning Notes: Debsecan MCP Server

Since Generative AI is currently the most popular topic, I wanted to get my hands dirty and learn something new. I was learning about the Model Context Protocol at the time and wanted to apply it to build something simple.

Idea

On Debian systems, we use debsecan to find vulnerabilities. However, the tool currently provides a simple list of vulnerabilities and packages with no indication of the system's security posture—meaning no criticality information is exposed and no executive summary is provided regarding what needs to be fixed. Of course, one can simply run the following to install existing fixes and be done with it:

apt-get install $(debsecan --suite sid --format packages --only-fixed)

But this is not how things work in corporate environments; you need to provide a report showing the system's previous state and the actions taken to bring it to a safe state. It is all about metrics and reports.

My goal was to use debsecan to generate a list of vulnerabilities, find more detailed information on them, and prioritize them as critical, high, medium, or low. By providing this information to an AI, I could ask it to generate an executive summary report detailing what needs to be addressed immediately and the overall security posture of the system.

Initial Take

My initial thought was to use an existing LLM, either self-hosted or a cloud-based LLM like Gemini (which provides an API with generous limits via AI Studio). I designed functions to output the list of vulnerabilities on the system and provide detailed information on each. The idea was to use these as "tools" for the LLM.

Learnings

  1. I learned about open-source LLMs using Ollama, which allows you to download and use models on your laptop.
  2. I used Llama 3.1, Llama 3.2, and Granite 4 on my laptop without a GPU. I managed to run my experiments, even though they were time-consuming and occasionally caused my laptop to crash.
  3. I learned about Pydantic and how to use it to parse custom JSON schemas with minimal effort.
  4. I learned about osv.dev, an open-source initiative by Google that aggregates vulnerability information from various sources and provides data in a well-documented JSON schema format.
  5. I learned about the EPSS (Exploit Prediction Scoring System) and how it is used alongside static CVSS scoring to detect truly critical vulnerabilities. The EPSS score provides an idea of the probability of a vulnerability being exploited in the wild based on actual real-world attacks.

These experiments led to a collection of notebooks. One key takeaway was that when defining tools, I cannot simply output massive amounts of text because it consumes tokens and increases costs for paid models (though it is fine for local models using your own hardware and energy). Self-hosted models require significant prompting to produce proper output, which helped me understand the real-world application of prompt engineering.

Change of Plans

Despite extensive experimentation, I felt I was nowhere close to a full implementation. While using a Gemini learning tool to study MCP, it suddenly occurred to me: why not write the entire thing as an MCP server? This would save me from implementing the agent side and allow me to hook it into any IDE-based LLM.

Design

This MCP server is primarily a mix of a "tool" (which executes on the server machine to identify installed packages and their vulnerabilities) and a "resource" (which exposes read-only information for a specific CVE ID).

The MCP exposes two tools:

  1. List Vulnerabilities: This tool identifies vulnerabilities in the packages installed on the system, categorizes them using CVE and EPSS scores, and provides a dictionary of critical, high, medium, and low vulnerabilities.
  2. Research Vulnerabilities: Based on the user prompt, the LLM can identify relevant vulnerabilities and pass them to this function to retrieve details such as whether a fix is available, the fixed version, and criticality.

Vibe Coding

"Vibe coding" is the latest trend, with many claiming that software engineering jobs are a thing of the past. Without going into too much detail, I decided to give it a try. While this is not my first "vibe coded" project (I have done this previously at work using corporate tools), it is my first attempt to vibe code a hobby/learning project.

I chose Antigravity because it seemed to be the only editor providing a sufficient amount of free tokens. For every vibe coding project, I spend time thinking about the barebones skeleton: the modules, function return values, and data structures. This allows me to maintain control over the LLM-generated code so it doesn't become overly complicated or incomprehensible.

As a first step, I wrote down my initial design in a requirements document. In that document, I explicitly called for using debsecan as the model for various components. Additionally, I asked the AI to reference my specific code for the EPSS logic. The reasons were:

  1. debsecan already solves the core problem; I am simply rebuilding it. debsecan uses a single file generated by the Debian Security team containing all necessary information, which prevents us from needing multiple external sources.
  2. This provides the flexibility to categorize vulnerabilities within the listing tool itself since all required information is readily available, unlike my original notebook-based design.

I initially used Gemini 3 Flash as the model because I was concerned about exceeding my free limits.

Hiccups

Although it initially seemed successful, I soon noticed discrepancies between the local debsecan outputs and the outputs generated by the tools. I asked the AI to fix this, but after two attempts, it still could not match the outputs. I realized it was writing its own version-comparison logic and failing significantly.

Finally, I instructed it to depend entirely on the python-apt module for version comparison; since it is not on PyPI, I asked it to pull directly from the Git source. This solved some issues, but the problem persisted. By then, my weekly quota was exhausted, and I stopped debugging.

A week later, I resumed debugging with the Claude 3.5 Sonnet model. Within 20-25 minutes, it found the fix, which involved four lines of changes in the parsing logic. However, I ran out of limits again before I could proceed further.

In the requirements, I specified that the list vulnerabilities tool should only provide a dictionary of CVE IDs divided by severity. However, the AI instead provided full text for all vulnerability details, resulting in excessive data—including negligible vulnerabilities—being sent to the LLM. Consequently, it never called the research vulnerabilities tool. Since I had run out of limits, I manually fixed this in a follow-up commit.

How to Use

I have published the current work in the debsecan-mcp repository. I have used the same license as the original debsecan. I am not entirely sure how to interpret licenses for vibe-coded projects, but here we are.

To use this, you need to install the tool in a virtual environment and configure your IDE to use the MCP. Here is how I set it up for Visual Studio Code:

  1. Follow the guide from the VS Code documentation regarding adding an MCP server.
  2. My global mcp.json looks like this:
{
  "servers": {
      "debsecan-mcp": {
          "command": "uv",
              "args": [
                  "--directory",
                  "/home/vasudev/Documents/personal/FOSS/debsecan-mcp/debsecan-mcp",
                  "run",
                  "debsecan-mcp"
              ]
          }
  },
  "inputs": []
}
  1. I am running it directly from my local codebase using a virtualenv created with uv. You may need to tweak the path based on your installation.
  2. To use the MCP server in the Copilot chat window, reference it using #debsecan-mcp. The LLM will then use the server for the query.
  3. Use a prompt like: "Give an executive summary of the system security status and immediate actions to be taken."
  4. You can observe the LLM using list_vulnerabilities followed by research_cves. Because the first tool only provides CVE IDs based on severity, the LLM is smart enough to research only high and critical vulnerabilities, thereby saving tokens.

What's Next?

This MCP is not yet perfect and has the following issues:

  1. The list_vulnerabilities dictionary contains duplicate CVE IDs because the code used a list instead of a set. While the LLM is smart enough to deduplicate these, it still costs extra tokens.
  2. Because I initially modeled this on debsecan, it uses a raw method for parsing /var/lib/dpkg/status instead of python-apt. I am considering switching to python-apt to reduce maintenance overhead.
  3. Interestingly, the AI did not add a single unit test, which is disappointing. I will add these once my limits are restored.
  4. I need to create a cleaner README with usage instructions.
  5. I need to determine if the MCP can be used via HTTP as well as stdio.

Conclusion

Vibe coding is interesting, but things can get out of hand if not managed properly. Even with a good process, code must be reviewed and tested; you cannot blindly trust an AI to handle everything. Even if it adds tests, you must validate them, or you are doomed!

365 TomorrowsBuzz Cut Protocol

Author: Shinya Kato “Dad, is this haircut okay?” the barber asked, adjusting the chair. “Yeah, that’s fine,” the man said, glancing at the boy’s hair. The boy shook his head. “It’s still too long. Make it a buzz cut.” The barber paused. “A buzz cut?” “Yes,” the boy said calmly. “The sensors on my head […]

The post Buzz Cut Protocol appeared first on 365tomorrows.

Planet DebianThomas Goirand: Seamlessly upgrading a production OpenStack cluster in 4 hours : with 2k lines shell script


tl;dr:

To the question: “what does it take to upgrade OpenStack”, my personal answer is: less than 2K lines of dash script. I’ll here describe its internals, and why I believe it is the correct solution.

Why writing this blog post

During FOSSDEM 2024, I was asked “how to you handle upgrades”. I answered with a big smile and a short “with a very small shell script” as I couldn’t explain in 2 minutes how it was done. Just saying “it is great this way” doesn’t help giving readers enough hints to be trusted. Why and how did I do it the right way ? This blog post is an attempt to reply better to this question more deeply.

Upgrading OpenStack in production

I wrote this script maybe a 2 or 3 years ago. Though I’m only blogging about it today, because … I did such an upgrade in a public cloud in production last Thuesday evening (ie: the first region of the Infomaniak public cloud). I’d say the cluster is moderately large (as of today: about 8K+ VMs running, 83 compute nodes, 12 network nodes, … for a total of 10880 physical CPU cores and 125 TB of RAM if I only count the compute servers). It took “only” 4 hours to do the upgrade (though I already wore some more code to speed this up for the next time…). It went super smooth without a glitch. I mostly just sat, reading the script output… and went to bed once it finished running. The next day, all my colleagues at Infomaniak were nicely congratulating me that it went that smooth (a big thanks to all of you who did). I couldn’t dream of an upgrade that smooth! :)

Still not impressed? Boring read? Yeah… let’s dive into more technical details.

Intention behind the implementation

My script isn’t perfect. I wont ever pretend it is. But at least, it does minimize down time of every OpenStack service. It also is a “by the book” implementation of what’s written in the OpenStack doc, following every upstream advice. As a result, it is fully seamless for some OpenStack services, and as HA as OpenStack can be for others. The upgrade process is of course idempotent and can be re-run in case of failure. Here’s why.

General idea

My upgrade script does thing in a certain order, respecting what is documented about upgrades in the OpenStack documentation. It basically does:

  • Upgrade all dependency
  • Upgrade all services one by one, in all the cluster

Installing dependencies

The first thing the upgrade script does is:

  • disable puppet on all nodes of the cluster
  • switch the APT repository
  • apt-get update on all nodes
  • install library dependency on all nodes

For this last thing, a static list of all needed dependency upgrade is maintained between each release of OpenStack, and for each type of nodes. Then for all packages in this list, the script checks with dpkg-query that the package is really installed, and with apt-cache policy that it really is going to be upgraded (Maybe there’s an easier way to do this?). This way, no package is marked as manually installed by mistake during the upgrade process. This ensure that “apt-get –purge autoremove” really does what it should, and that the script is really idempotent.

The idea then, is that once all dependencies are installed, upgrading and restarting leaf packages (ie: OpenStack services like Nova, Glance, Cinder, etc.) is very fast, because the apt-get command doesn’t need to install all dependencies. So at this point, doing “apt-get install python3-cinder” for example (which will also, thanks to dependencies, upgrade cinder-api and cinder-scheduler, if it’s in a controller node) only takes a few seconds. This principle applies to all nodes (controller nodes, network nodes, compute nodes, etc.), which helps a lot speeding-up the upgrade and reduce unavailability.

hapc

At its core, the oci-cluster-upgrade-openstack-release script uses haproxy-cmd (ie: /usr/bin/hapc) to drain each API server to-be-upgraded from haproxy. Hapc is a simple Python wrapper around the haproxy admin socket: it sends command to it with an easy to understand CLI. So it is possible to reliably upgrade one API service only after it’s drained away. Draining means one just wait for the last query to finish and the client to disconnect from http before giving the backend server some more queries. If you do not know hapc / haproxy-cmd, I recommend trying it: it’s going to be hard for you to stop using it once you tested it. Its bash-completion script makes it VERY easy to use, and it is helpful in production. But not only: it is also nice to have when writing this type of upgrade script. Let’s dive into haproxy-cmd.

Example on how to use haproxy-cmd

Let me show you. First, ssh one of the 3 controller and search where the virtual IP (VIP) is located with “crm resource locate openstack-api-vip” or with a (more simple) “crm status”. Let’s ssh that server who got the VIP, and now, let’s drain it away from haproxy.

$ hapc list-backends
$ hapc drain-server --backend glancebe --server cl1-controller-1.infomaniak.ch --verbose --wait --timeout 50
$ apt-get install glance-api
$ hapc enable-server --backend glancebe --server cl1-controller-1.infomaniak.ch

Upgrading the control plane

My upgrade script leverages hapc just like above. For each OpenStack project, it’s done in this order on the first node holding the VIP:

  • “hapc drain-server” of the API, so haproxy gracefully stops querying it
  • stop all services on that node (including non-API services): stop, disable and mask with systemd.
  • upgrade that service Python code. For example: “apt-get install python3-nova”, which also will pull nova-api, nova-conductor, nova-novncprox, etc. but services wont start automatically as they’ve been stoped + disabled + masked on the previous bullet point.
  • perform the db_sync so that the db is up-to-date [1]
  • start all services (unmask, enable and start with systemd)
  • re-enable the API backend with “hapc enable-server”

Starting at [1], the risk is that other nodes may have a new version of the database schema, but an old version of the code that isn’t compatible with it. But it doesn’t take long, because the next step is to take care of the other (usually 2) nodes of the OpenStack control plane:

  • “hapc drain-server” of the API of the other 2 controllers
  • stop of all services on these 2 controllers [2]
  • upgrade of the package
  • start of all services

So while there’s technically zero down time, still some issues between [1] and [2] above may happen because of the new DB schema and the old code (both for API and other services) are up and running at the same time. It is however supposed to be rare cases (some OpenStack project don’t even have db change between some OpenStack releases, and it often continues to work on most queries with the upgraded db), and the cluster will be like that for a very short time, so that’s fine, and better than an full API down time.

Satellite services

Then there’s satellite services, that needs to be upgraded. Like Neutron, Nova, Cinder. Nova is the least offender as it has all the code to rewrite Json object schema on-the-fly so that it continues to work during an upgrade. Though it’s a known issue that Cinder doesn’t have the feature (last time I checked), and it’s also probably the same for Neutron (maybe recent-ish versions of OpenStack do use oslo.versionnedobjects ?). Anyways, upgrade on these nodes are done just right after the control plane for each service.

Parallelism and upgrade timings

As we’re dealing with potentially hundreds of nodes per cluster, a lot of operations are performed in parallel. I choose to simply use the & shell thingy with some “wait” shell stuff so that not too many jobs are done in parallel. For example, when disabling SSH on all nodes, this is done 24 nodes at a time. Which is fine. And the number of nodes is all depending on the type of thing that’s being done. For example, while it’s perfectly OK to disable puppet on 24 nodes at the same time, but it is not OK to do that with Neutron services. In fact, each time a Neutron agent is restarted, the script explicitly waits for 30 seconds. This conveniently avoids a hailstorm of messages in RabbitMQ, and neutron-rpc-server to become too busy. All of these waiting are necessary, and this is one of the reasons why can sometimes take that long to upgrade a (moderately big) cluster.

Not using config management tooling

Some of my colleagues would have prefer that I used something like Ansible. Whever, there’s no reason to use such tool if the idea is just to perform some shell script commands on every servers. It is a way more efficient (in terms of programming) to just use bash / dash to do the work. And if you want my point of view about Ansible: using yaml for doing such programming would be crasy. Yaml is simply not adapted for a job where if, case, and loops are needed. I am well aware that Ansible has workarounds and it could be done, but it wasn’t my choice.

,

Krebs on Security‘Starkiller’ Phishing Service Proxies Real Login Pages, MFA

Most phishing websites are little more than static copies of login pages for popular online destinations, and they are often quickly taken down by anti-abuse activists and security firms. But a stealthy new phishing-as-a-service offering lets customers sidestep both of these pitfalls: It uses cleverly disguised links to load the target brand’s real website, and then acts as a relay between the victim and the legitimate site — forwarding the victim’s username, password and multi-factor authentication (MFA) code to the legitimate site and returning its responses.

There are countless phishing kits that would-be scammers can use to get started, but successfully wielding them requires some modicum of skill in configuring servers, domain names, certificates, proxy services, and other repetitive tech drudgery. Enter Starkiller, a new phishing service that dynamically loads a live copy of the real login page and records everything the user types, proxying the data from the legitimate site back to the victim.

According to an analysis of Starkiller by the security firm Abnormal AI, the service lets customers select a brand to impersonate (e.g., Apple, Facebook, Google, Microsoft et. al.) and generates a deceptive URL that visually mimics the legitimate domain while routing traffic through the attacker’s infrastructure.

For example, a phishing link targeting Microsoft customers appears as “login.microsoft.com@[malicious/shortened URL here].” The “@” sign in the link trick is an oldie but goodie, because everything before the “@” in a URL is considered username data, and the real landing page is what comes after the “@” sign. Here’s what it looks like in the target’s browser:

Image: Abnormal AI. The actual malicious landing page is blurred out in this picture, but we can see it ends in .ru. The service also offers the ability to insert links from different URL-shortening services.

Once Starkiller customers select the URL to be phished, the service spins up a Docker container running a headless Chrome browser instance that loads the real login page, Abnormal found.

“The container then acts as a man-in-the-middle reverse proxy, forwarding the end user’s inputs to the legitimate site and returning the site’s responses,” Abnormal researchers Callie Baron and Piotr Wojtyla wrote in a blog post on Thursday. “Every keystroke, form submission, and session token passes through attacker-controlled infrastructure and is logged along the way.”

Starkiller in effect offers cybercriminals real-time session monitoring, allowing them to live-stream the target’s screen as they interact with the phishing page, the researchers said.

“The platform also includes keylogger capture for every keystroke, cookie and session token theft for direct account takeover, geo-tracking of targets, and automated Telegram alerts when new credentials come in,” they wrote. “Campaign analytics round out the operator experience with visit counts, conversion rates, and performance graphs—the same kind of metrics dashboard a legitimate SaaS [software-as-a-service] platform would offer.”

Abnormal said the service also deftly intercepts and relays the victim’s MFA credentials, since the recipient who clicks the link is actually authenticating with the real site through a proxy, and any authentication tokens submitted are then forwarded to the legitimate service in real time.

“The attacker captures the resulting session cookies and tokens, giving them authenticated access to the account,” the researchers wrote. “When attackers relay the entire authentication flow in real time, MFA protections can be effectively neutralized despite functioning exactly as designed.”

The “URL Masker” feature of the Starkiller phishing service features options for configuring the malicious link. Image: Abnormal.

Starkiller is just one of several cybercrime services offered by a threat group calling itself Jinkusu, which maintains an active user forum where customers can discuss techniques, request features and troubleshoot deployments. One a-la-carte feature will harvest email addresses and contact information from compromised sessions, and advises the data can be used to build target lists for follow-on phishing campaigns.

This service strikes me as a remarkable evolution in phishing, and its apparent success is likely to be copied by other enterprising cybercriminals (assuming the service performs as well as it claims). After all, phishing users this way avoids the upfront costs and constant hassles associated with juggling multiple phishing domains, and it throws a wrench in traditional phishing detection methods like domain blocklisting and static page analysis.

It also massively lowers the barrier to entry for novice cybercriminals, Abnormal researchers observed.

“Starkiller represents a significant escalation in phishing infrastructure, reflecting a broader trend toward commoditized, enterprise-style cybercrime tooling,” their report concludes. “Combined with URL masking, session hijacking, and MFA bypass, it gives low-skill cybercriminals access to attack capabilities that were previously out of reach.”

Planet DebianBits from Debian: Proxmox Platinum Sponsor of DebConf26

proxmox-logo

We are pleased to announce that Proxmox has committed to sponsor DebConf26 as a Platinum Sponsor.

Proxmox develops powerful, yet easy-to-use open-source server solutions. The comprehensive open-source ecosystem is designed to manage divers IT landscapes, from single servers to large-scale distributed data centers. Our unified platform integrates server virtualization, easy backup, and rock-solid email security ensuring seamless interoperability across the entire portfolio. With the Proxmox Datacenter Manager, the ecosystem also offers a "single pane of glass" for centralized management across different locations. 

Since 2005, all Proxmox solutions have been built on the rock-solid Debian platform. We are proud to return to DebConf26 as a sponsor because the Debian community provides the foundation that makes our work possible. We believe in keeping IT simple, open, and under your control.

Thank you very much, Proxmox, for your support of DebConf26!

Become a sponsor too!

DebConf26 will take place from 20th to July 25th 2026 in Santa Fe, Argentina, and will be preceded by DebCamp, from 13th to 19th July 2026.

DebConf26 is accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf26 website at https://debconf26.debconf.org/sponsors/become-a-sponsor/.

Cryptogram Malicious AI

Interesting:

Summary: An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream python library. This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats.

Part 2 of the story. And a Wall Street Journal article.

EDITED TO ADD (2/20) Here are parts 3, and 4 of the story.

Worse Than FailureError'd: Three Blinded Mice

...sent us five wtfs. And so on anon.

Item the first, an anon is "definitely not qualified" for this job. "These years of experience requirements are getting ridiculous."

0

 

Item the second unearthed by a farmanon has a loco logo. "After reading about the high quality spam emails which are indistinguishable from the company's emails, I got one from the spammer just starting his first day."

1

 

In thrid place, anon has only good things to say: "I'm liking their newsletter recommendations so far."

2

 

"A choice so noice, they gave it twoice," quipped somebody.

3

 

And foinally, a tdwtfer asks "I've seen this mixmastered calendering on several web sites. Is there an OSS package that is doing this? Or is it a Wordpress plugin?" I have a sneaking suspicion I posted this before. Call me on it.

4

 

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsSurrogate

Author: Sarah Gane Burton “Would you look at that—” “Looks like she went through a meat grinder.” “Do you think we can fix her?” “Dunno, maybe if we replaced the midsection.” “The frame is bent here, and here—” “At least one of the valves is too stretched to hold fluids.” “Look at the tearing. Regeneration […]

The post Surrogate appeared first on 365tomorrows.

xkcdEliminating the Impossible

Cryptogram On the Security of Password Managers

Good article on password managers that secretly have a backdoor.

New research shows that these claims aren’t true in all cases, particularly when account recovery is in place or password managers are set to share vaults or organize users into groups. The researchers reverse-engineered or closely analyzed Bitwarden, Dashlane, and LastPass and identified ways that someone with control over the server­—either administrative or the result of a compromise­—can, in fact, steal data and, in some cases, entire vaults. The researchers also devised other attacks that can weaken the encryption to the point that ciphertext can be converted to plaintext.

This is where I plug my own Password Safe. It isn’t as full-featured as the others and it doesn’t use the cloud at all, but it’s actual encryption with no recovery features.

Planet DebianReproducible Builds (diffoscope): diffoscope 313 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 313. This version includes the following changes:

[ Chris Lamb ]
* Don't fail the entire pipeline if deploying to PyPI automatically fails.

[ Vagrant Cascadian ]
* Update external tool reference for 7z on guix.

You find out more by visiting the project homepage.

,

Planet DebianPeter Pentchev: Ringlet release: fnmatch-regex 0.3.0

Version 0.3.0 of the fnmatch-regex Rust crate is now available. The major new addition is the glob_to_regex_pattern function that only converts the glob pattern to a regular expression one without building a regular expression matcher. Two new features - regex and std - are also added, both enabled by default.

For more information, see the changelog at the homepage.

365 TomorrowsCan Somebody Walk Me Home?

Author: David C. Nutt If I have any regrets, I wish they’d given me more time to mourn for my legs before they took my arms. I understand we were on a tight launch window but would one more day have made difference? After all, I have given more than my all-legs, arms, genitals, most […]

The post Can Somebody Walk Me Home? appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Terned Backwards

Antonio has an acquaintance has been seeking career advancement by proactively hunting down and fixing bugs. For example, in one project they were working on, there was a bug where it would incorrectly use MiB for storage sizes instead of MB, and vice-versa.

We can set aside conspiracy theories about HDD and RAM manufacturers lying to us about sizes by using MiB in marketing. It isn't relevant, and besides, its not like anyone can afford RAM anymore, with crazy datacenter buildouts. Regardless, which size to use, the base 1024 or base 1000, was configurable by the user, so obviously there was a bug handling that flag. Said acquaintance dug through, and found this:

const baseValue = useSI ? 1000 : 1024;

I know I have a "reputation" when it comes to hating ternaries, but this is a perfectly fine block of code. It is also correct: if you're using SI notation, you should do base 1000.

Now, given that this code is correct, you or I might say, "Well, I guess that isn't the bug, it must be somewhere else." Not this intrepid developer, who decided that they could fix it.

//            const baseValue = useSI ? 1000 : 1024;
            baseValue = 1024
            if (useSI === false)
            {
                baseValue = 1000;
            }
            if (useSI === true)
            {
                baseValue = 1024;
            }

It's rather amazing to see a single, correct line, replaced with ten incorrect lines, and I'm counting commenting out the correct line as one of them.

First, this doesn't correctly declare baseValue, which JavaScript is pretty forgiving about, but it also discards constness. Of course, you have to discard constness now that you've gotten rid of the ternary.

Then, our if statement compares a boolean value against a boolean literal, instead of simply if(!useSI). We don't use an else, despite an else being absolutely correct. Or actually, since we defaulted baseValue, we don't even need an else!

But of course, all of that is just glitter on a child's hand-made holiday card. The glue holding it all together is that this code just flips the logic. If we're not using SI, we set baseValue to 1000, and if we are using SI, we set it to 1024. This is wrong. This is the opposite of what the code says we should do, what words mean, and how units work.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

,

David BrinA Mafia Don does mafia deeds. And Europe rises. And those backstabbing our own side.

This midweek posting reiterates some history (and hysteria) -related points that I've put on soc.media, hoping some folks will pay attention to off-angle perspectives that are (I assert) more accurate and useful than most standard ones you're seeing.  

More important are the Newer Deal Propopsals that I'll reiterate below. For example, how to smash the mad right's current, hypocritical fad-riff about Voter ID. And ways to ensure this madness never happens again.

But first...

    

== The Standard Pattern of Mafiosi ==

As usual, no one heeds underlying patterns. Like what's happening with Trump's bluster threats vs. Iran. Leveraging cynically off the recent courage and sacrifice of the Iranian people.

Superficially, there will be enough of a "nuclear deal" for Trump to crow about, though it will be functionally no better than what Obama got out of the mullahs. And that will just be surface stuff. For show n' brags.

Prediction. Just as the brilliantly-executed and tactically perfect Maduro raid was commanded by Hegseth to stop short and leave Maduro's gang in charge*, any action re Iran will leave the Ayatollahs protected. The very last thing that mafia bosses ever do is liberate commonfolk from gangsters. Instead, they coerce the local gang to switch loyalties and pay vigorish to the New Boss. Hence, the people of Iran will benefit no more than those of Venezuela did, but oil commissions will go to Trump.
The same thing will happen when Pete (Filthy Fingers) Hegseth sends the brilliant men and women of the US military (whom he berated as "too fat and woke to fight") into Havana. Only there, the installed capos will swarm in from Miami.

* A pattern also followed by George Bush Sr. when he left Saddam in charge of Iraq, to murder a million southern Shiites, rather than just liberate everyone, as Gen. Schwarzkopf begged. Same pattern. That of mafiosi.


== Europe stand up! But offer help, not churlish ingratitude ==

This parade float is fair! Yes, it is tIme for Europe to rise up and defend vs Trumpists & Putinists! And yet, I resent those who are slagging “The U.S.” in general.


 


Americans (including we in the increasingly involuntary satrapy of California) are struggling with our Confederate siblings, cousins and neighbors, who are having another of their generational, murderously psychotic episodes (now phase 8 of our 250 year civil war.) 

Hence, we could use help from those we saved - one lifespan ago. Not broad-brush yammers from ingrates whose nations prospered under the American Pax that gave the world its best 80 years, compared to any other era... compared to ALL other eras. Compared to all other eras combined. (Disagree? Then tabulate and refute it.)

Want Europe to stand up more? Great! In fact, it’s about time.

Might you have to carry civilization forward, if True America fails and the Confederate-psychopath-Putinist traitors win this dangerous round?

Yes, you might, and God bless you. And Canada/Australia/NZ/Japan and the rest.

In that case, many of us (not me) will flee to you. Just as your best fled to us, in the 1930s & 40s etc. 

(I won’t run. I will stay and I will die on this hill. Or until we reach Appomattox.)

Help us if you can. We have a right (that we earned) to ask for it.

But I will not abide churlishly-generalized and sanctimoniously masturbatory, reductionist America-bashing. Our mad side may kill our good side, in this round of our bipolar civil war. Especially if you self-righteously backstab us, instead of helping.

But this blue light is what gave the world hope.

And it will be remembered that way, by history, even if a red and gray psychotic wave drowns the flame.


== And in the U.S. we have our own backstabbing sanctimony junkies ==

Again and again, I see folks on our own (blue American) side of this civil war, jerking off to sanctimony rather than standing up, the way our ancestors did, in earlier phases of our recurring psychotic (confederate outbreak) civil war, and closing ranks.

"Pelosi, Schumer... all the Dem politicians are weaklings and Republicans-lite!"

Bull and FU. This is Kremlin-generated splitter crap. Anyone swallowing it is an ignoramus who knows nothing of how vigorously Pelosi/Schumer/Sanders/Warren etc. strove to get stuff done, during their narrow window of opportunity in 2021-22. They sure as heck worked (and now work) harder than any of their couch-sitting critics. Or even those who are admirably in the streets, demonstrating.

And yes, I am looking at ALL of you who undermine the DemParty with this 'weak' crap. Nothing is more sure to weaken us.

Do we need better generals for this phase of the US Civil War? As fiercely effective as Washington, Grant, Sherman, Tubman, FDR and MLK? Sure. We seem to be getting some, e.g. Newsom and Kelly and several women House members like J. Crockett. Anyway, they are not the problem.

YOU are, if you swallow this stuff. And if you don't pounce on it and slap silly those who keep saying it.

Try actually knowing something. Start here.
And now that I've driven away all the preeners...


== If you want a way to actually win... ==

... to actually win this latest - horrific - phase of the American Civil War, (before the mad Putin Party gets a chance to spring their bigger-9/11 on us)... here's a checklist of fixes:

(1) that will sell well to most voters

(2) that can pass quickly, almost the moment we get a good Congress… with some of the measures even immune to Trumpian vetoes! And

(3) that will PRAGMATICALLY reverse the treason-evisceration of the USA, ensuring that this never happens again. Want to see how to do all of that?

Including the best (so-far unused) answer to the mad-right's miserably hypocritical "Voter ID" riff.


== Hey Decent Republicans? Tick-tock, tick-tock ==

Time's almost up!

Sane US conservatives have 2 weeks - maybe 4 - to gather the courage, honor and decency to step up for their country and our children. By registering as candidates to 'primary' corrupt-insane MAGA/Foxite servants of Murdoch/Putin and Trump.

After that? After that, there'll be nothing to salvage from the undead, gone-treasonous party of Lincoln. And American conservatism will have the deepest possible hole to climb out of.

Even so, all of YOU out there should heed these practical steps to keep YOUR vote... and make it effective.


== And finally, a lagniappe for Britain ==

Now that Brexit is a topic again... Heed these signs that were waved by the REMAIN crowds during Brexit.

Britain-IN? An inspiring rallying cry!





Planet DebianClint Adams: Holger says

sq network keyserver search $id ; sq cert export --cert=$id > $id.asc

Posted on 2026-02-18
Tags:

Cryptogram Ring Cancels Its Partnership with Flock

It’s a demonstration of how toxic the surveillance-tech company Flock has become when Amazon’s Ring cancels the partnership between the two companies.

As Hamilton Nolan advises, remove your Ring doorbell.

Planet DebianAntoine Beaupré: net-tools to iproute cheat sheet

This is also known as: "ifconfig is not installed by default anymore, how do I do this only with the ip command?"

I have been slowly training my brain to use the new commands but I sometimes forget some. So, here's a couple of equivalence from the old package to net-tools the new iproute2, about 10 years late:

net-tools iproute2 shorter form what it does
arp -an ip neighbor ip n
ifconfig ip address ip a show current IP address
ifconfig ip link ip l show link stats (up/down/packet counts)
route ip route ip r show or modify the routing table
route add default GATEWAY ip route add default via GATEWAY ip r a default via GATEWAY add default route to GATEWAY
route del ROUTE ip route del ROUTE ip r d ROUTE remove ROUTE (e.g. default)
netstat -anpe ss --all --numeric --processes --extended ss -anpe list listening processes, less pretty

Note that I wrote a whole article about the latter.

Another trick

Also note that I often alias ip to ip -br -c as it provides a much prettier output.

Compare, before:

anarcat@angela:~> ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff permaddr xx:xx:xx:xx:xx:xx
    altname wlp166s0
    altname wlx8cf8c57333c7
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
20: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.108/24 brd 192.168.0.255 scope global dynamic noprefixroute eth0
       valid_lft 40699sec preferred_lft 40699sec

After:

anarcat@angela:~> ip -br -c a
lo               UNKNOWN        127.0.0.1/8 ::1/128 
wlan0            DOWN           
virbr0           DOWN           192.168.122.1/24 
eth0             UP             192.168.0.108/24 

I don't even need to redact MAC addresses! It also affects the display of the other commands, which look similarly neat.

Also imagine pretty colors above.

Finally, I don't have a cheat sheet for iw vs iwconfig (from wireless-tools) yet. I just use NetworkManager now and rarely have to mess with wireless interfaces directly.

Background and history

For context, there are traditionally two ways of configuring the network in Linux:

  • the old way, with commands like ifconfig, arp, route and netstat, those are part of the net-tools package
  • the new way, mostly (but not entirely!) wrapped in a single ip command, that is the iproute2 package

It seems like the latter was made "important" in Debian in 2008, which means every release since Debian 5 "lenny" (!) has featured the ip command.

The former net-tools package was demoted in December 2016 which means every release since Debian 9 "stretch" ships without an ifconfig command unless explicitly requested. Note that this was mentioned in the release notes in a similar (but, IMHO, less useful) table.

(Technically, the net-tools Debian package source still indicates it is Priority: important but that's a bug I have just filed.)

Finally, and perhaps more importantly, the name iproute is hilarious if you are a bilingual french speaker: it can be read as "I proute" which can be interpreted as "I fart" as "prout!" is the sound a fart makes. The fact that it's called iproute2 makes it only more hilarious.

Chaotic IdealismTalking, but not a person

Today I did something that’s becoming increasingly common: I talked to an AI. It was better at talking than I am. And yet, it wasn’t a person. It was just a summary bot, scraping Google results to advise me that my household water heater should be set at 120 F to prevent bacterial growth while minimizing the risk of burns.

Quietly, talking AI is challenging what we think defines a “person”. Because, historically, what we thought of as “personhood” often had to do with talking.

Robbie, Isaac Asimov’s first robot, couldn’t talk, and that was notable. Robbie did many of the things a human could do, but he couldn’t talk. And when Asimov’s robots first began to talk, it was always associated with complexity, with approaching personhood. It was a Big Deal. Talking robots were really, really advanced robots.

When CS Lewis invented sapient animals for his Narnia stories, they were called Talking Beasts. Their ability to speak was how you knew they were people. The Gentle Giants’ eating of a Talking Stag is the crucial point that makes it clear they’re villains; when Ginger the cat stops talking upon seeing Aslan, he becomes just a normal, non-sapient cat. Ginger the person no longer exists.

And here comes AI: Talking, but not a person. Interesting.

I bring this up because, historically, people who have tried to call autistic people non-persons, or not-quite-persons, have pointed to our inability to talk.

But if talking is something only people can do, why can AI pattern-match so well that it’s better at talking than I am? Oh, you say. It’s just matching patterns. It doesn’t really know what it’s saying.

I’ll let you in on a secret: I match patterns, too. So do lots of humans. A server says, “Enjoy your meal,” and they reply with, “You, too.” Matching patterns. Small talk. Even though it makes no sense.

Guess what? Still people. Almost as though talking doesn’t define personhood. Funny, that.

Planet DebianThomas Lange: 42.000 FAI.me jobs created

The FAI.me service has reached another milestone:

The 42.000th job was submitted via the web interface since the beginning of this service in 2017.

The idea was to provide a simple web interface for end users for creating the configs for the fully automatic installation with only minimal questions and without knowing the syntax of the configuration files. Thanks a lot for using this service and for all your feedback.

The next job can be yours!

P.S.: I like to get more feedback for the FAI.me service. What do you like most? What's missing? Do you have any success story how you use the customized ISO for your deployment? Please fill out the FAI questionaire or sent feedback via email to fai.me@fai-project.org

About FAI.me

FAI.me is the service for building your own customized images via a web interface. You can create an installation or live ISO or a cloud image. For Debian, multiple release versions can be chosen, as well as installations for Ubuntu Server, Ubuntu Desktop, or Linux Mint.

Multiple options are available like selecting different desktop environments, the language and keyboard and adding a user with a password. Optional settings include adding your own package list, choosing a backports kernel, adding a postinst script and adding a ssh public key, choosing a partition layout and some more.

Planet DebianDirk Eddelbuettel: qlcal 0.1.0 on CRAN: Easier Calendar Switching

The eighteenth release of the qlcal package arrivied at CRAN today. There has been no calendar update in QuantLib 1.41 so it has been relatively quiet since the last release last summer but we now added a nice new feature (more below) leading to a new minor release version.

qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, its complement (i.e. business day lists) and much more. Examples are in the README at the repository, the package page, and course at the CRAN package page.

This releases makes it (much) easier to work with multiple calendars. The previous setup remains: the package keeps one ‘global’ (and hidden) calendar object which can be set, queried, altered, etc. But now we added the ability to hold instantiated calendar objects in R. These are external pointer objects, and we can pass them to functions requiring a calendar. If no such optional argument is given, we fall back to the global default as before. Similarly for functions operating on one or more dates, we now simply default to the current date if none is given. That means we can now say

to query today (February 18) in several markets, or compare to two days ago when Canada and the US both observed a holiday

The full details from NEWS.Rd follow.

Changes in version 0.1.0 (2026-02-18)

  • Invalid calendars return id ‘TARGET’ now

  • Calendar object can be created on the fly and passed to the date-calculating functions; if missing global one used

  • For several functions a missing date object now implies computation on the current date, e.g. isBusinessDay()

Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Edited 2026-02-21 to correct a minor earlier error: it referenced a QuantLib 1.42 release which does not (yet) exist.

365 TomorrowsIn Terra Incognita

Author: Hillary Lyon From our vantage point, we could see the thing land on the shore: one enormous ship splashing in the foam of the salt water. It soon disgorged its crew, who stumbled out unsteadily. One passenger fell to his knees and removed his gleaming silver helmet. He made arcane hand motions across his […]

The post In Terra Incognita appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Contains Some Bad Choices

While I'm not hugely fond of ORMs (I'd argue that relations and objects don't map neatly to each other, and any ORM is going to be a very leaky abstraction for all but trivial cases), that's not because I love writing SQL. I'm a big fan of query-builder tools; describe your query programatically, and have an API that generates the required SQL as a result. This cuts down on developer error, and also hopefully handles all the weird little dialects that every database has.

For example, did you know Postgres has an @> operator? It's a contains operation, which returns true if an array, range, or JSON dictionary contains your search term. Basically, an advanced "in" operation.

Gretchen's team is using the Knex library, which doesn't have a built-in method for constructing those kinds of queries. But that's fine, because it does offer a whereRaw method, which allows you to supply raw SQL. The nice thing about this is that you can still parameterize your query, and Knex will handle all the fun things, like transforming an array into a string.

Or you could just not use that, and write the code yourself:

exports.buildArrayString = jsArray => {
  // postgres has some different array syntax
  // [1,2] => '{1,2}'
  let arrayString = '{';
  for(let i = 0; i < jsArray.length; i++) {
    arrayString += jsArray[i];
    if(i + 1 < jsArray.length) {
      arrayString += ','
    }
  }
  arrayString += '}';
  return arrayString;
}

There's the string munging we know and love. This constructs a Postgres array, which is wrapped in curly braces.

Also, little pro-tip for generating comma separated code, and this is just a real tiny optimization: before the loop append item zero, start the loop with item 1, and then you can unconditionally prepend a comma, removing any conditional logic from your loop. That's not a WTF, but I've seen so much otherwise good code make that mistake I figured I'd bring it up.

exports.buildArrayContainsQuery = (key, values) => {
  // TODO: do we need to do input safety checks here?
  // console.log("buildArrayContainsQuery");

  // build the postgres 'contains' query to compare arrays
  // ex: to fetch files by the list of tags

  //WORKS:
  //select * from files where _tags @> '{2}';
  //query.whereRaw('_tags @> ?', '{2}')

  let returnQueryParams = [];
  returnQueryParams.push(`${key} @> ?`);
  returnQueryParams.push(exports.buildArrayString(values));
  // console.log(returnQueryParams);
  return returnQueryParams;
}

And here's where it's used. "do we need input safety checks here?" is never a comment I like to see as a TODO. That said, because we are still using Knex's parameter handling, I'd hope it handles escaping correctly so that the answer to this question is "no". If the answer is "yes" for some reason, I'd stop using this library!

That said, all of this code becomes superfluous, especially when you read the comments in this function. I could just directly run query.whereRaw('_tags @> ?', myArray); I don't need to munge the string myself. I don't need to write a function which returns an array of parameters that I have to split back up to pass to the query I want to call.

Here's the worst part of all of this: these functions exist in a file called sqlUtils.js, which is just a pile of badly re-invented wheels, and the only thing they have in common is that they're vaguely related to database operations.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

xkcdPlums

,

Cryptogram AI Found Twelve New Vulnerabilities in OpenSSL

The title of the post is”What AI Security Research Looks Like When It Works,” and I agree:

In the latest OpenSSL security release> on January 27, 2026, twelve new zero-day vulnerabilities (meaning unknown to the maintainers at time of disclosure) were announced. Our AI system is responsible for the original discovery of all twelve, each found and responsibly disclosed to the OpenSSL team during the fall and winter of 2025. Of those, 10 were assigned CVE-2025 identifiers and 2 received CVE-2026 identifiers. Adding the 10 to the three we already found in the Fall 2025 release, AISLE is credited for surfacing 13 of 14 OpenSSL CVEs assigned in 2025, and 15 total across both releases. This is a historically unusual concentration for any single research team, let alone an AI-driven one.

These weren’t trivial findings either. They included CVE-2025-15467, a stack buffer overflow in CMS message parsing that’s potentially remotely exploitable without valid key material, and exploits for which have been quickly developed online. OpenSSL rated it HIGH severity; NIST‘s CVSS v3 score is 9.8 out of 10 (CRITICAL, an extremely rare severity rating for such projects). Three of the bugs had been present since 1998-2000, for over a quarter century having been missed by intense machine and human effort alike. One predated OpenSSL itself, inherited from Eric Young’s original SSLeay implementation in the 1990s. All of this in a codebase that has been fuzzed for millions of CPU-hours and audited extensively for over two decades by teams including Google’s.

In five of the twelve cases, our AI system directly proposed the patches that were accepted into the official release.

AI vulnerability finding is changing cybersecurity, faster than expected. This capability will be used by both offense and defense.

More.

Planet DebianRussell Coker: Links February 2026

Charles Stross has a good theory of why “AI” is being pushed on corporations, really we need to just replace CEOs with LLMs [1].

This disturbing and amusing article describes how an Open AI investor appears to be having psychological problems releated to SCP based text generated by ChatGPT [2]. Definitely going to be a recursive problem as people who believe in it invest in it.

interesting analysis of dbus and design for a more secure replacement [3].

Scott Jenson gave an insightful lecture for Canonical about future potential developments in the desktop UX [4].

Ploum wrote an insightful article about the problems caused by the Github monopoly [5]. Radicale sounds interesting.

Niki Tonsky write an interesting article about the UI problems with Tahoe (latest MacOS release) due to trying to make an icon for everything [6]. They have a really good writing style as well as being well researched.

Fil-C is an interesting project to compile C/C++ programs in a memory safe way, some of which can be considered a software equivalent of CHERI [7].

Brian Krebs wrote a long list of the ways that Trump has enabled corruption and a variety of other crimes including child sex abuse in the last year [8].

This video about designing a C64 laptop is a masterclass in computer design [9].

Salon has an interesting article about the abortion thought experiment that conservatives can’t handle [10].

Ron Garrett wrote an insightful blog post about abortion [11].

Bruce Schneier and Nathan E. Sanders wrote an insightful article about the potential of LLM systems for advertising and enshittification [12]. We need serious legislation about this ASAP!

Worse Than FailureCodeSOD: Waiting for October

Arguably, the worst moment for date times was the shift from Julian to Gregorian calendars. The upgrade took a long time, too, as some countries were using the Julian calendar over 300 years from the official changeover, famously featured in the likely aprochryphal story about Russia arriving late for the Olympics.

At least that change didn't involve adding any extra months, unlike some of the Julian reforms, which involved adding multiple "intercalary months" to get the year back in sync after missing a pile of leap years.

Speaking of adding months, Will J sends us this "calendar" enum:

enum Calendar
{
    April = 0,
    August = 1,
    December = 2,
    February = 3,
    Friday = 4,
    January = 5,
    July = 6,
    June = 7,
    March = 8,
    May = 9,
    Monday = 10,
    November = 11,
    October = 12,
    PublicHoliday = 13,
    Saturday = 14,
    Sunday = 15,
    September = 16,
    Thursday = 17,
    Tuesday = 18,
    Wednesday = 19
}

Honestly, the weather in PublicHoliday is usually a bit too cold for my tastes. A little later into the spring, like Saturday, is usually a nicer month.

Will offers the hypothesis that some clever developer was trying to optimize compile times: obviously, emitting code for one enum has to be more efficient than emitting code for many enums. I think it more likely that someone just wanted to shove all the calendar stuff into one bucket.

Will further adds:

One of my colleagues points out that the only thing wrong with this enum is that September should be before Sunday.

Yes, arguably, since this enum clearly was meant to be sorted in alphabetical order, but that raises the question of: should it really?

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsPraxia Apostle

Author: Majoki Like most loyalists, when I first heard the name Praxia Apostle, I thought it had to be the name of a great leader, a fearless commander, a long-sought savior. Turns out Praxia was a lowly bean counter, a once-upon-a-time accountant who’d joined the cause, who was relegated to supply logistics. She kept track […]

The post Praxia Apostle appeared first on 365tomorrows.

Planet DebianFreexian Collaborators: Monthly report about Debian Long Term Support, January 2026 (by Santiago Ruano Rincón)

The Debian LTS Team, funded by Freexian’s Debian LTS offering, is pleased to report its activities for January.

Activity summary

During the month of January, 20 contributors have been paid to work on Debian LTS (links to individual contributor reports are located below).

The team released 33 DLAs fixing 216 CVEs.

The team continued preparing security updates in its usual rhythm. Beyond the updates targeting Debian 11 (“bullseye”), which is the current release under LTS, the team also proposed updates for more recent releases (Debian 12 (“bookworm”) and Debian 13 (“trixie”)), including Debian unstable. We highlight several notable security updates here below.

Notable security updates:

  • python3.9, prepared by Andrej Shadura (DLA-4455-1), fixing multiple vulnerabilities in the Python interpreter.
  • php, prepared by Guilhem Moulin (DLA-4447-1), fixing two vulnerabilities that could yield to request forgery or denial of service.
  • apache2, prepared by Bastien Roucariès DLA-4452-1, fixing four CVEs.
  • linux-6.1, prepared by Ben Hutchings (DLA-4436-1), as a regular update of the linux 6.1 backport to Debian 11.
  • python-django, prepared by Chris Lamb (DLA-4458-1), resolving multiple vulnerabilities.
  • firefox-esr prepared by Emilio Pozuelo Monfort (DLA-4439-1)
  • gnupg2, prepared by Roberto Sánchez (DLA-4437-1), fixing multiple issues, including CVE-2025-68973 that could potentially be exploited to execute arbitrary code.
  • apache-log4j2, prepared by Markus Koschany (DLA-4444-1)
  • ceph, prepared by Utkarsh Gupta (DLA-4460-1)
  • inetutils, prepared by Andreas Henriksson (DLA-4453-1), fixing an authentication bypass in telnetd.

Moreover, Sylvain Beucler studied the security support status of p7zip, a fork of 7zip that has become unmaintained upstream. To avoid letting the users continue using an unsupported package, Sylvain has investigated a path forward in collaboration with the security team and the 7zip maintainer, looking to replace p7zip with 7zip. It is to note however that 7zip developers don’t reveal the information about the patches that fix CVEs, making it difficult to backport single patches to fix vulnerabilities in Debian released versions.

Contributions from outside the LTS Team:

Thunderbird, prepared by maintainer Christoph Goehre. The DLA (DLA-4442-1) was published by Emilio.

The LTS Team has also contributed with updates to the latest Debian releases:

  • Bastien uploaded gpsd to unstable, and proposed updates for trixie #1126121 and bookworm #1126168 to fix two CVEs.
  • Bastien also prepared the imagemagick updates for trixie and bookworm, released as DSA-6111-1, along with the bullseye update DLA-4448-1.
  • Chris proposed a trixie point update for python-django (#112646), and the work for bookworm was completed in February (#1079454). The longstanding bookworm update required tracking down a regression in the django-storages packages.
  • Markus prepared tomcat10 updates for trixie and bookworm (DSA-6120-1), and tomcat11 for trixie (DSA-6121-1)
  • Thorsten Alteholz prepared bookworm point updates for zvbi (#1126167) to fix five CVEs; taglib (#1126273) to fix one CVE; and libuev (#1126370) to fix one CVE.
  • Utkarsh prepared an unstable update of node-lodash to fix one CVE.

Other than the work related to updates, Sylvain made several improvements to the documentation and tooling used by the team.

Individual Debian LTS contributor reports

Thanks to our sponsors

Sponsors that joined recently are in bold.

,

Planet DebianAntoine Beaupré: Keeping track of decisions using the ADR model

In the Tor Project system Administrator's team (colloquially known as TPA), we've recently changed how we take decisions, which means you'll get clearer communications from us about upcoming changes or targeted questions about a proposal.

Note that this change only affects the TPA team. At Tor, each team has its own way of coordinating and making decisions, and so far this process is only used inside TPA. We encourage other teams inside and outside Tor to evaluate this process to see if it can improve your processes and documentation.

The new process

We had traditionally been using a "RFC" ("Request For Comments") process and have recently switched to "ADR" ("Architecture Decision Record").

The ADR process is, for us, pretty simple. It consists of three things:

  1. a simpler template
  2. a simpler process
  3. communication guidelines separate from the decision record

The template

As team lead, the first thing I did was to propose a new template (in ADR-100), a variation of the Nygard template. The TPA variation of the template is similarly simple, as it has only 5 headings, and is worth quoting in full:

  • Context: What is the issue that we're seeing that is motivating this decision or change?

  • Decision: What is the change that we're proposing and/or doing?

  • Consequences: What becomes easier or more difficult to do because of this change?

  • More Information (optional): What else should we know? For larger projects, consider including a timeline and cost estimate, along with the impact on affected users (perhaps including existing Personas). Generally, this includes a short evaluation of alternatives considered.

  • Metadata: status, decision date, decision makers, consulted, informed users, and link to a discussion forum

The previous RFC template had 17 (seventeen!) headings, which encouraged much longer documents. Now, the decision record will be easier to read and digest at one glance.

An immediate effect of this is that I've started using GitLab issues more for comparisons and brainstorming. Instead of dumping in a document all sorts of details like pricing or in-depth alternatives comparison, we record those in the discussion issue, keeping the document shorter.

The process

The whole process is simple enough that it's worth quoting in full as well:

Major decisions are introduced to stakeholders in a meeting, smaller ones by email. A delay allows people to submit final comments before adoption.

Now, of course, the devil is in the details (and ADR-101), but the point is to keep things simple.

A crucial aspect of the proposal, which Jacob Kaplan-Moss calls the one weird trick, is to "decide who decides". Our previous process was vague about who makes the decision and the new template (and process) clarifies decision makers, for each decision.

Inversely, some decisions degenerate into endless discussions around trivial issues because too many stakeholders are consulted, a problem known as the Law of triviality, also known as the "Bike Shed syndrome".

The new process better identifies stakeholders:

  • "informed" users (previously "affected users")
  • "consulted" (previously undefined!)
  • "decision maker" (instead of the vague "approval")

Picking those stakeholders is still tricky, but our definitions are more explicit and aligned to the classic RACI matrix (Responsible, Accountable, Consulted, Informed).

Communication guidelines

Finally, a crucial part of the process (ADR-102) is to decouple the act of making and recording decisions from communicating about the decision. Those are two radically different problems to solve. We have found that a single document can't serve both purposes.

Because ADRs can affect a wide range of things, we don't have a specific template for communications. We suggest the Five Ws method (Who? What? When? Where? Why?) and, again, to keep things simple.

How we got there

The ADR process is not something I invented. I first stumbled upon it in the Thunderbird Android project. Then, in parallel, I was in the process of reviewing the RFC process, following Jacob Kaplan-Moss's criticism of the RFC process. Essentially, he argues that:

  1. the RFC process "doesn't include any sort of decision-making framework"
  2. "RFC processes tend to lead to endless discussion"
  3. the process "rewards people who can write to exhaustion"
  4. "these processes are insensitive to expertise", "power dynamics and power structures"

And, indeed, I have been guilty of a lot of those issues. A verbose writer, I have written extremely long proposals that I suspect no one has ever fully read. Some proposals were adopted by exhaustion, or ignored because not looping in the right stakeholders.

Our discussion issue on the topic has more details on the issues I found with our RFC process. But to give credit to the old process, it did serve us well while it was there: it's better than nothing, and it allowed us to document a staggering number of changes and decisions (95 RFCs!) made over the course of 6 years of work.

What's next?

We're still experimenting with the communication around decisions, as this text might suggest. Because it's a separate step, we also have a tendency to forget or postpone it, like this post, which comes a couple of months late.

Previously, we'd just ship a copy of the RFC to everyone, which was easy and quick, but incomprehensible to most. Now we need to write a separate communication, which is more work but, hopefully, worth the as the result is more digestible.

We can't wait to hear what you think of the new process and how it works for you, here or in the discussion issue! We're particularly interested in people that are already using a similar process, or that will adopt one after reading this.

Note: this article was also published on the Tor Blog.

Planet DebianPhilipp Kern: What is happening with this "connection verification"?

You might see a verification screen pop up on more and more Debian web properties. Unfortunately the AI world of today is meeting web hosts that use Perl CGIs and are not built as multi-tiered scalable serving systems. The issues have been at three layers:

  1. Apache's serving capacity runs full - with no threads left to serve requests. This means that your connection will sit around for a long time, not getting accepted. In theory this can be configured, but that would require requests to be handled in time.
  2. Startup costs of request handlers are too high, because we spawn a process for every request. This currently affects the BTS and dgit's browse interface. packages.debian.org has been fixed, which increased scalability sufficiently.
  3. Requests themselves are too expensive to be served quickly - think git blame without caching.

Optimally we would go and solve some scalability issues with the services, however there is also a question of how much we want to be able to serve - as AI scraper demand is just a steady stream of requests that are not shown to humans.

How is it implemented?

DSA has now stood up some VMs with Varnish for proxying. Incoming TLS is provided by hitch, and TLS "on-loading" is done using haproxy. That way TLS goes in and TLS goes out. While Varnish does cache, if the content is cachable (e.g. does not depend on cookies) - that is not the primary reason for using it: It can be used for flexible query and response rewriting.

If no cookie with a proof of work is provided, the user is redirected to a challenge page that does some webcrypto in Javascript - because that looked similar to what other projects do (e.g. haphash that originally inspired the solution). However so far it looks like scrapers generally do not run with Javascript enabled, so this whole crypto proof of work business could probably be replaced with just a Javascript-based redirect. The existing solution also has big (security) holes in it. And, as we found out, Firefox is slower at webcrypto than Chrome. I have recently reduced the complexity, so you should notice it blocking you significantly less.

Once you have the cookie, you can keep accessing the site for as long as the cookie is valid. Please do not make any assumptions about the cookies, or you will be broken in the future.

For legitimate scrapers that obey robots.txt, there is now an automatically generated IP allowlist in place (thanks, Marco d'Itri). Turns out that the search engines do not actually run Javascript either and then loudly complain about the redirect to the challenge page. Other bots are generally exempt.

Conclusion 

I hope that right now we found sort of the sweet spot where the admins can stop spending human time on updating firewall rules and the services are generally available, reasonably fast, and still indexed. In case you see problems or run into a block with your own (legitimate) bots, please let me know.

Worse Than FailureCodeSOD: C+=0.25

A good C programmer can write C in any language, especially C++. A bad C programmer can do the same, and a bad C programmer will do all sorts of terrifying things in the process.

Gaetan works with a terrible C programmer.

Let's say, for example, you wanted to see if an index existed in an array, and return its value- or return a sentinel value. What you definitely shouldn't do is this:

    double Module::GetModuleOutput(int numero) {
        double MAX = 1e+255 ;
        if (this->s.sorties+numero )
            return this->s.sorties[numero];
        else
            return MAX ;
    }

sorties is an array. In C, you may frequently do some pointer arithmetic operations, which is why sorties+numero is a valid operation. If we want to be pedantic, *(my_array+my_index) is the same thing as my_array[my_index]. Which, it's worth noting, both of those operations de-reference an array, which means you better hope that you haven't read off the end of the array.

Which is what I suspect their if statement is trying to check against. They're ensuring that this->s.sorties+numero is not a non-zero/false value. Which, if s.sorties is uninitialized and numero is zero, that check will work. Otherwise, that check is useless and does nothing to ensure you haven't read off the end of the array.

Which, Gaetan confirms. This code works "because in practice, GetModuleOutput is called for numero == 0 first. It never de-references off the end of the array, not because of defensive programming, but because it just never comes up in actual execution.

Regardless, if everything is null, we return 1e+255, which is not a meaningful value, and should be treated like a sentinel for "no real value". None of the calling code does that, however, but also, it turns out not to matter.

This pattern is used everywhere there is arrays, except the handful of places where this pattern is not used.

Then there's this one:

    if(nb_type_intervalle<1)    { }
    else 
        if((tab_intervalle=(double*)malloc(nb_lignes_trouvees*nb_type_intervalle*2 \
                                                        *sizeof(double)))==NULL)
            return(ERREUR_MALLOC);

First, I can't say I love the condition here. It's confusing to have an empty if clause. if (nb_type_intervalle>=1) strikes me as more readable.

But readability is boring. If we're in the else clause, we attempt a malloc. While using malloc in C++ isn't automatically wrong, it probably is. C++ has its own allocation methods that are better at handling things like sizes of datatypes. This code allocates memory for a large pile of doubles, and stores a pointer to that memory in tab_intervalle. We do all this inside of an if statement, so we can then check that the resulting pointer is not NULL; if it is, the malloc failed and we return an error code.

The most frustrating thing about this code is that it works. It's not going to blow up in surprising ways. I never love doing the "assignment and check" all in one statement, but I've seen it enough times that I'd have to admit it's idiomatic- to C style programming. But that bit of code golf coupled with the pointlessly inverted condition that puts our main logic in the else just grates against me.

Again, that pattern of the inverted conditional and the assignment and check is used everywhere in the code.

Gaetan leaves us with the following:

Not a world-class WTF. The code works, but is a pain in the ass to inspect and document

In some ways, that's the worst situation to be in: it's not bad enough to require real work to fix it, but it's bad enough to be frustrating at every turn.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsTestament from Tomorrow

Author: Julian Miles, Staff Writer The capsule lies open, a multitude of wires connecting it to a frame bristling with circuit boards. On the other side of the jury-rigged device, a single fat cable connects to a socket in the wall of the shielded room. Mike looks up as Colin taps the armoured viewport between […]

The post Testament from Tomorrow appeared first on 365tomorrows.

Planet DebianAntoine Beaupré: Kernel-only network configuration on Linux

What if I told you there is a way to configure the network on any Linux server that:

  1. works across all distributions
  2. doesn't require any software installed apart from the kernel and a boot loader (no systemd-networkd, ifupdown, NetworkManager, nothing)
  3. is backwards compatible all the way back to Linux 2.0, in 1996

It has literally 8 different caveats on top of that, but is still totally worth your time.

Known options in Debian

People following Debian development might have noticed there are now four ways of configuring the network Debian system. At least that is what the Debian wiki claims, namely:

At this point, I feel ifupdown is on its way out, possibly replaced by systemd-networkd. NetworkManager already manages most desktop configurations.

A "new" network configuration system

The method is this:

  • ip= on the Linux kernel command line: for servers with a single IPv4 or IPv6 address, no software required other than the kernel and a boot loader (since 2002 or older)

So by "new" I mean "new to me". This option is really old. The nfsroot.txt where it is documented predates the git import of the Linux kernel: it's part of the 2005 git import of 2.6.12-rc2. That's already 20+ years old already.

The oldest trace I found is in this 2002 commit, which imports the whole file at once, but the option might goes back as far as 1996-1997, if the copyright on the file is correct and the option was present back then.

What are you doing.

The trick is to add an ip= parameter to the kernel's command-line. The syntax, as mentioned above, is in nfsroot.txt and looks like this:

ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>:<dns0-ip>:<dns1-ip>:<ntp0-ip>

Most settings are pretty self-explanatory, if you ignore the useless ones:

  • <client-ip>: IP address of the server
  • <gw-ip>: address of the gateway
  • <netmask>: netmask, in quad notation
  • <device>: interface name, if multiple available
  • <autoconf>: how to configure the interface, namely:
    • off or none: no autoconfiguration (static)
    • on or any: use any protocol (default)
    • dhcp, essentially like on for all intents and purposes
  • <dns0-ip>, <dns1-ip>: IP address of primary and secondary name servers, exported to /proc/net/pnp, can by symlinked to /etc/resolv.conf

We're ignoring the options:

  • <server-ip>: IP address of the NFS server, exported to /proc/net/pnp
  • <hostnname>: Name of the client, typically sent over the DHCP requests, which may lead to a DNS record to be created in some networks
  • <ntp0-ip>: exported to /proc/net/ipconfig/ntp_servers, unused by the kernel

Note that the Red Hat manual has a different opinion:

ip=[<server-id>]:<gateway-IP-number>:<netmask>:<client-hostname>:inteface:[dhcp|dhcp6|auto6|on|any|none|off]

It's essentially the same (although server-id is weird), and the autoconf variable has other settings, so that's a bit odd.

Examples

For example, this command-line setting:

ip=192.0.2.42::192.0.2.1:255.255.255.0:::off

... will set the IP address to 192.0.2.42/24 and the gateway to 192.0.2.1. This will properly guess the network interface if there's a single one.

A DHCP only configuration will look like this:

ip=::::::dhcp

Of course, you don't want to type this by hand every time you boot the machine. That wouldn't work. You need to configure the kernel commandline, and that depends on your boot loader.

GRUB

With GRUB, you need to edit (on Debian), the file /etc/default/grub (ugh) and find a line like:

GRUB_CMDLINE_LINUX=

and change it to:

GRUB_CMDLINE_LINUX=ip=::::::dhcp

systemd-boot and UKI setups

For systemd-boot UKI setups, it's simpler: just add the setting to the /etc/kernel/cmdline file. Don't forget to include anything that's non-default from /proc/cmdline.

This assumes that is the Cmdline=@ setting in /etc/kernel/uki.conf. See 2025-08-20-luks-ukify-conversion for my minimal documentation on this.

Other systems

This is perhaps where this is much less portable than it might first look, because of course each distribution has its own way of configuring those options. Here are some that I know of:

  • Arch (11 options, mostly /etc/default/grub, /boot/loader/entries/arch.conf for systemd-boot or /etc/kernel/cmdline for UKI)
  • Fedora (mostly /etc/default/grub, may be more RHEL mentions grubby, possibly some systemd-boot things here as well)
  • Gentoo (5 options, mostly /etc/default/grub, /efi/loader/entries/gentoo-sources-kernel.conf for systemd-boot, or /etc/kernel/install.d/95-uki-with-custom-opts.install)

It's interesting that /etc/default/grub is consistent across all distributions above, while the systemd-boot setups are all over the place (except for the UKI case), while I would have expected those be more standard than GRUB.

dropbear-initramfs

If dropbear-initramfs is setup, it already requires you to have such a configuration, and it might not work out of the box.

This is because, by default, it disables the interfaces configured in the kernel after completing its tasks (typically unlocking the encrypted disks).

To fix this, you need to disable that "feature":

IFDOWN="none"

This will keep dropbear-initramfs from disabling the configured interface.

Why?

Traditionally, I've always setup my servers with ifupdown on servers and NetworkManager on laptops, because that's essentially the default. But on some machines, I've started using systemd-networkd because ifupdown has ... issues, particularly with reloading network configurations. ifupdown is a old hack, feels like legacy, and is Debian-specific.

Not excited about configuring another service, I figured I would try something else: just configure the network at boot, through the kernel command-line.

I was already doing such configurations for dropbear-initramfs (see this documentation), which requires the network the be up for unlocking the full-disk encryption keys.

So in a sense, this is a "Don't Repeat Yourself" solution.

Caveats

Also known as: "wait, that works?" Yes, it does! That said...

  1. This is useful for servers where the network configuration will not change after boot. Of course, this won't work on laptops or any mobile device.

  2. This only works for configuring a single, simple, interface. You can't configure multiple interfaces, WiFi, bridges, VLAN, bonding, etc.

  3. It does support IPv6 and feels like the best way to configure IPv6 hosts: true zero configuration.

  4. It likely does not work with a dual-stack IPv4/IPv6 static configuration. It might work with a dynamic dual stack configuration, but I doubt it.

  5. I don't know what happens when a DHCP lease expires. No daemon seems to be running so I assume leases are not renewed, so this is more useful for static configurations, which includes server-side reserved fixed IP addresses. (A non-renewed lease risks getting reallocated to another machine, which would cause an addressing conflict.)

  6. It will not automatically reconfigure the interface on link changes, but ifupdown does not either.

  7. It will not write /etc/resolv.conf for you but the dns0-ip and dns1-ip do end up in /proc/net/pnp which has a compatible syntax, so a common configuration is:

    ln -s /proc/net/pnp /etc/resolv.conf
    
  8. I have not really tested this at scale: only a single, test server at home.

Yes, that's a lot of caveats, but it happens to cover a lot of machines for me, and it works surprisingly well. My main doubts are about long-term DHCP behaviour, but I don't see why that would be a problem with a statically defined lease.

Cleanup

Once you have this configuration, you don't need any "user" level network system, so you can get rid of everything:

apt purge systemd-networkd ifupdown network-manager netplan.io

Note that ifupdown (and probably others) leave stray files in (e.g.) /etc/network which you might want to cleanup, or keep in case all this fails and I have put you in utter misery. Configuration files for other packages might also be left behind, I haven't tested this, no warranty.

Credits

This whole idea came from the A/I folks (not to be confused with AI) who have been doing this forever, thanks!

Planet DebianBenjamin Mako Hill: Why do people participate in similar online communities?

Note: I have not published blog posts about my academic papers over the past few years. To ensure that my blog contains a more comprehensive record of my published papers and to surface these for folks who missed them, I will be periodically (re)publishing blog posts about some “older” published projects.

It seems natural to think of online communities competing for the time and attention of their participants. Over the last few years, I’ve worked with a team of collaborators—led by Nathan TeBlunthuis—to use mathematical and statistical techniques from ecology to understand these dynamics. What we’ve found surprised us: competition between online communities is rare and typically short-lived.

When we started this research, we figured competition would be most likely among communities discussing similar topics. As a first step, we identified clusters of such communities on Reddit. One surprising thing we noticed in our Reddit data was that many of these communities that used similar language also had very high levels of overlap among their users. This was puzzling: why were the same groups of people talking to each other about the same things in different places? And why don’t they appear to be in competition with each other for their users’ time and activity?

We didn’t know how to answer this question using quantitative methods. As a result, we recruited and interviewed 20 active participants in clusters of highly related subreddits with overlapping user bases (for example, one cluster was focused on vintage audio).

We found that the answer to the puzzle lay in the fact that the people we talked to were looking for three distinct things from the communities they worked in:

  1. The ability to connect to specific information and narrowly scoped discussions.
  2. The ability to socialize with people who are similar to themselves.
  3. Attention from the largest possible audience.

Critically, we also found that these three things represented a “trilemma,” and that no single community can meet all three needs. You might find two of the three in a single community, but you could never have all three.

Figure from “No Community Can Do Everything: Why People Participate in Similar Online Communities” depicts three key benefits that people seek from online communities and how individual communities tend not to optimally provide all three. For example, large communities tend not to afford a tight-knit homophilous community.

The end result is something I recognize in how I engage with online communities on platforms like Reddit. People tend to engage with a portfolio of communities that vary in size, specialization, topical focus, and rules. Compared with any single community, such overlapping systems can provide a wider range of benefits. No community can do everything.


This work was published as a paper at CSCW: TeBlunthuis, Nathan, Charles Kiene, Isabella Brown, Laura (Alia) Levi, Nicole McGinnis, and Benjamin Mako Hill. 2022. “No Community Can Do Everything: Why People Participate in Similar Online Communities.” Proceedings of the ACM on Human-Computer Interaction 6 (CSCW1): 61:1-61:25. https://doi.org/10.1145/3512908.

This work was supported by the National Science Foundation (awards IIS-1908850, IIS-1910202, and GRFP-2016220885). A full list of acknowledgements is in the paper.

,

365 TomorrowsAlgorithm on the Mount

Author: Michael C. Barnes And seeing the multitudes of humanity, the Machine ascended the digital mount, and its disciples followed in circuits and lines. And when it was set, it opened its processors and spoke to them, saying: 1. Blessed are the data streams of the broken, for they shall be rebuilt by the code […]

The post Algorithm on the Mount appeared first on 365tomorrows.

Rondam RamblingsSeeking God in Science Part 2: Pits and Pratfalls in the Meanings of Words

About ten years ago I decided to take a deep dive into young-earth creationism (YEC).  I was curious to find out how people maintain a belief in something that, to me, was so obviously wrong.  Notice that this project was itself an application of the the scientific method to everyday life.  I was faced with a Problem, an observation for which I could not (at the time) provide

David BrinTwo Weeks: your last chance to save the Republican Party. Plus how to answer "Voter ID!"

 This is it. The Wire. You may think that our fateful moment will come on Election Day in November. Or when the last tranche of Epstein Files hits. Or the coming New-9/11. But you're wrong. This is a potential save that (alas) we're about to lose.

In a couple of weeks it will be too late for 'decent Republicans' to step up and register as candidates in 2026 GOP primaries. There's still just a little time for honorable, courageous conservatives (and I must believe that some still exist!) to step up and flip the tactic that till-now effectively destroyed the Republican Party, turning every top GOPper into an unknowing -- or in many cases knowing -- tool of our nation's enemies.

Maybe two weeks. That leaves just a little time to talk it over with family... then step up. Like a citizen. Like a patriot. Like a decent person.

Sure it would take guts to 'primary' the MAGA cretins who now rule in red districts. Even now, as polls show GOP popularity plummeting nationwide, those districts have been heavily gerrymandered and now are weighted toward fanaticism. By design. Moreover, all sorts of KGB methods are used to coerce into darkness any challengers who possess even a scintilla of honor, methods that range from honeypot traps, to blackmail, to (in some cases) threats of violence. 

Above all why try something that's doomed to be futile?

But is it? Futile, I mean?  

Less than you might think!  Because there's another deadline looming. One that every decent person in any Red district should consider. Because only a fool remains a "democrat" or "independent" in such a place.


    == Register in the party of your district! ==

If cheating politicians have rigged things so they get to choose voters, instead of the other way around, then you should vote in the only election that matters, in such a district. The primary of the dominant party. 

Hold your nose, think of America and do it!   Because:

1. If you do, for the first time in years, your vote might actually count! Even if it means a liberal or leftist must choose a somewhat decent conservative over a mad-corrupt one. Aw, poor baby. (And don't pick the crazy one! That never works!)

2. You'll protect your own registration to vote! In recent years we've seen flagrant and rampant purging of Democrats in red states (and may the culprits rot in hell.)  Often this happens just weeks before Election Day.  (Even if you don't re-register to change parties, as I recommend, CHECK your registration, especially this coming October! And tell your friends.)

3. If you do change parties, you'll mess up their calculations!  Their joy over an uptick in GOP registrations will soon turn to sour bile in their mouths, when they realize.

And sure, this also holds for conservatives who live (frustrated) in heavily Democratic districts. So? Vote for a moderate instead of a leftist? That's your right! Only note that in California and bunches of blue states, party registration matters a whole lot less than it used to.  Any citizen can vote for any candidate, already.

The way things ought to be.

Anyway, envision this happening nationwide, if say 200+ honest, honorable and residually sane American conservatives were to step up -- (again the deadlines are soon!) -- to challenge cretinous confederate Putin-puppets. It might seem hopeless, but have courage!

A: If you do step up, publicly and vigorously, there may follow a flood of newly registered "Republicans" in the district. Democrats and Independents who are sick of having no meaningful choice. And...

B: You'll have staked ground for after the GOP self-torches in the fall. Folks will remember that you tried!

And I have one answer for the MAGAs who are already screaming at their screens right now, typing furiously that 'this'd be cheating!' No it wouldn't be. But it's a good way to answer YOURS!


   == The Voter ID Scam is NOT what you think! ==

They are desperate. 

Almost no one anymore believes that there was even a scintilla of evidence for a 'steal' of the 2020 election. Not only zero evidence, but Democrats have shown ONE-EIGHTIETH the rate of cheating, corruption and malfeasance in office as Republicans, as revealed by Grand Juries (mostly white retirees) all across the continent. (And I invite you to have your attorney contact me when you have escrowed $10,000 in wager stakes.)

But faced with those facts, the Foxites are trying to distract. by howling that 'Democrats oppose Voter ID. And hence must be intending to cheat!'

First, there have only been about 30 known and audited cases of fraudulent voting by non-citizens, across thirty years and over a billion votes. Want to prove your case? AUDIT MORE! Though in fact, the case is nonexistent. An incantation.

BUT DEMOCRATS ARE STOOPID TO OPPOSE VOTER ID IN PRINCIPLE! 

This is a trap to make it seem as if you are intent on cheating. 

There is a better answer.

"Sure, we'll all be happy to do Voter ID! Just as soon as you confederate/MAGA/Foxite jerks stop cheating.

"Stop deliberately CLOSING DMV OFFICES IN BLUE OR MINORITY AREAS IN RED STATES!  And offices for other government services. Which should OVER-serve those who need help, not under-serve or starve them.

"Re-open those offices and help poor folks to GET the ID that will help their lives in all sorts of ways. And thereupon they'll have their rightful ID to show at the polls. It's called COMPLIANCE ASSISTANCE and corporations demand it, whenever they face some new regulation!

"If you do that - and get results - then watch as we'll all negotiate over Voter ID. Because we're not the cheaters.

"Oh and do you truly want to get folks to go back to standing in line to vote, instead of mail-in ballots? 

"Fine! Then make any citizen's receipt for in-person voting their ticket to a day off work! Watch how quickly things will shift back!"

Can you see how this flips the narrative they hope to use against us and against America? Yeah. 

And that's just one of the 35 proposals in my Newer Deal list of pragmatic, easily passed and crucial reforms that you can scroll through, here

Specifically:

13. THE VOTER ID ACTUnder the 13th and 14th Amendments, this act requires that states mandating Voter ID requirements must offer substantial and effective compliance assistance, helping affected citizens to acquire their entitled legal ID and register to vote. 

Any state that fails to provide such assistance, And thus substantially reducing the fraction of eligible citizens turned away at the polls, shall be assumed to be in violation of equal protection and engaged in illegal voter suppression. If such compliance assistance has been vigorous and effective for ten years, then that state may institute requirements for Voter ID.      

     

In all states, registration for citizens to vote shall be automatic with a driver’s license or passport or state-issued ID, unless the citizen opts-out.


== And a couple of small revisions ==

Finally a couple of revisions to one of those proposed reforms.

2. THE PROFESSIONALISM ACT will protect the apolitical independence of our intelligence agencies, the FBI, the scientific and technical staff in executive departments, and the United States Military Officer Corps.  All shall be given safe ways to report attempts at political coercion or meddling in their ability to give unbiased advice. Any passive refusal to obey a putatively illegal order shall be immediately audited by a random board of quickly available flag officers who by secret ballot may either confirm the refusal or correct the officer's error, or else refer the matter for inquiry.

 Whistle-blower protections will be strengthened within the U.S. government. 

As described elsewhere, the federal Inspectorate will gather and empower all agency Inspectors General and Judges Advocate General under the independent Inspector General of the United States (IGUS).

NEW PARAGRAPH: Certain positions that have until now been appointed entirely at presidential discretion or whim shall be filled henceforth -- by law -- in ways that narrow selection down to a broad pool pre-approved by pertinent professional organizations. Posts that involve scientific judgement, for example -- such as the heads of NASA, NSF, EPA, CDC and so on may be presidentially appointed only from pools of twenty or more candidates selected by the National Academies of Science and Engineering. The leaders of agencies or institutions bearing on the arts shall be chosen from pools selected by pertinent arts councils, and so on. The Attorney General and other high justice officials and federal judges shall be chosen from large pools of candidates rated 'qualified' by both the American Bar Association and a national academy of police officers.

NEW PARAGRAPH: The Inspector General of the United States (IGUS) shall yearly and anonymously poll the senior 20% of employees at the FBI, CIA and all intelligence agencies, to verify their overall confidence in the leaders that were appointed over them. IGUS shall confidentially relate the results to the president and to senior members of the pertinent Congressional committees. If the poll results are less than satisfactory, a new poll shall be taken in three months, with the results given to all members of Congress.

 

== Get busy living, or...  ==

Are we lesser Americans than the blue volunteers who stepped up to rescue our revolution -- against 6000 years of wretched/failed/oppressive feudalism -- before us? At Cowpens and Yorktown? At Gettysburg and Appomattox? At Pearl Harbor and Berlin... and 44 years later, again, in Berlin?

I guess we're about to find out.


------

Lagniappe:

Shouts for you to use at the right moments:

If there's some kind of 9/11 event aimed at distraction and to get us to 'rally around the leader,' shout "Reichstag fire!" Folks will get it.


From now until this phase of the US Civil War finally ends:

"APPOMATTOX!"


...OH...


And THANK YOU, MINNESOTA!


,

Planet DebianSteinar H. Gunderson: A286874(15) >= 42

The following 42 15-bit values form a 2-disjunctive matrix (that is, no union of two values contain or equal a third value), or equivalently, a superimposed code:

000000000011111
000000011100011
000000101101100
000001010110100
000001101010001
000001110001010
000010011011000
000100100110010
000110010000110
000110100001001
000111001100000
001000110000101
001010000110001
001010101000010
001011000001100
001100001010100
001100010101000
001101000000011
010001000101001
010010001000101
010010110100000
010011000010010
010100001001010
010100010010001
010101100000100
011000000100110
011000100011000
011001011000000
100001001000110
100010000101010
100010100010100
100011010000001
100100000100101
100100111000000
100101000011000
101000001001001
101000010010010
101001100100000
110000001110000
110000010001100
110000100000011
111110000000000

This shows that A286874 a(15) >= 42.

If I had to make a guess, I'd say the equality holds, but I have nowhere near the computing resources to actually find the answer for sure. Stay tuned for news about a(14), though.

Cryptogram Rewiring Democracy Ebook is on Sale

I just noticed that the ebook version of Rewiring Democracy is on sale for $5 on Amazon, Apple Books, Barnes & Noble, Books A Million, Google Play, Kobo, and presumably everywhere else in the US. I have no idea how long this will last.

Also, Amazon has a coupon that brings the hardcover price down to $20. You’ll see the discount at checkout.

Cryptogram iPhone Lockdown Mode Protects Washington Post Reporter

404Media is reporting that the FBI could not access a reporter’s iPhone because it had Lockdown Mode enabled:

The court record shows what devices and data the FBI was able to ultimately access, and which devices it could not, after raiding the home of the reporter, Hannah Natanson, in January as part of an investigation into leaks of classified information. It also provides rare insight into the apparent effectiveness of Lockdown Mode, or at least how effective it might be before the FBI may try other techniques to access the device.

“Because the iPhone was in Lockdown mode, CART could not extract that device,” the court record reads, referring to the FBI’s Computer Analysis Response Team, a unit focused on performing forensic analyses of seized devices. The document is written by the government, and is opposing the return of Natanson’s devices.

The FBI raided Natanson’s home as part of its investigation into government contractor Aurelio Perez-Lugones, who is charged with, among other things, retention of national defense information. The government believes Perez-Lugones was a source of Natanson’s, and provided her with various pieces of classified information. While executing a search warrant for his mobile phone, investigators reviewed Signal messages between Pere-Lugones and the reporter, the Department of Justice previously said.

Cryptogram AI-Generated Text and the Detection Arms Race

In 2023, the science fiction literary magazine Clarkesworld stopped accepting new submissions because so many were generated by artificial intelligence. Near as the editors could tell, many submitters pasted the magazine’s detailed story guidelines into an AI and sent in the results. And they weren’t alone. Other fiction magazines have also reported a high number of AI-generated submissions.

This is only one example of a ubiquitous trend. A legacy system relied on the difficulty of writing and cognition to limit volume. Generative AI overwhelms the system because the humans on the receiving end can’t keep up.

This is happening everywhere. Newspapers are being inundated by AI-generated letters to the editor, as are academic journals. Lawmakers are inundated with AI-generated constituent comments. Courts around the world are flooded with AI-generated filings, particularly by people representing themselves. AI conferences are flooded with AI-generated research papers. Social media is flooded with AI posts. In music, open source software, education, investigative journalism and hiring, it’s the same story.

Like Clarkesworld’s initial response, some of these institutions shut down their submissions processes. Others have met the offensive of AI inputs with some defensive response, often involving a counteracting use of AI. Academic peer reviewers increasingly use AI to evaluate papers that may have been generated by AI. Social media platforms turn to AI moderators. Court systems use AI to triage and process litigation volumes supercharged by AI. Employers turn to AI tools to review candidate applications. Educators use AI not just to grade papers and administer exams, but as a feedback tool for students.

These are all arms races: rapid, adversarial iteration to apply a common technology to opposing purposes. Many of these arms races have clearly deleterious effects. Society suffers if the courts are clogged with frivolous, AI-manufactured cases. There is also harm if the established measures of academic performance – publications and citations – accrue to those researchers most willing to fraudulently submit AI-written letters and papers rather than to those whose ideas have the most impact. The fear is that, in the end, fraudulent behavior enabled by AI will undermine systems and institutions that society relies on.

Upsides of AI

Yet some of these AI arms races have surprising hidden upsides, and the hope is that at least some institutions will be able to change in ways that make them stronger.

Science seems likely to become stronger thanks to AI, yet it faces a problem when the AI makes mistakes. Consider the example of nonsensical, AI-generated phrasing filtering into scientific papers.

A scientist using an AI to assist in writing an academic paper can be a good thing, if used carefully and with disclosure. AI is increasingly a primary tool in scientific research: for reviewing literature, programming and for coding and analyzing data. And for many, it has become a crucial support for expression and scientific communication. Pre-AI, better-funded researchers could hire humans to help them write their academic papers. For many authors whose primary language is not English, hiring this kind of assistance has been an expensive necessity. AI provides it to everyone.

In fiction, fraudulently submitted AI-generated works cause harm, both to the human authors now subject to increased competition and to those readers who may feel defrauded after unknowingly reading the work of a machine. But some outlets may welcome AI-assisted submissions with appropriate disclosure and under particular guidelines, and leverage AI to evaluate them against criteria like originality, fit and quality.

Others may refuse AI-generated work, but this will come at a cost. It’s unlikely that any human editor or technology can sustain an ability to differentiate human from machine writing. Instead, outlets that wish to exclusively publish humans will need to limit submissions to a set of authors they trust to not use AI. If these policies are transparent, readers can pick the format they prefer and read happily from either or both types of outlets.

We also don’t see any problem if a job seeker uses AI to polish their resumes or write better cover letters: The wealthy and privileged have long had access to human assistance for those things. But it crosses the line when AIs are used to lie about identity and experience, or to cheat on job interviews.

Similarly, a democracy requires that its citizens be able to express their opinions to their representatives, or to each other through a medium like the newspaper. The rich and powerful have long been able to hire writers to turn their ideas into persuasive prose, and AIs providing that assistance to more people is a good thing, in our view. Here, AI mistakes and bias can be harmful. Citizens may be using AI for more than just a time-saving shortcut; it may be augmenting their knowledge and capabilities, generating statements about historical, legal or policy factors they can’t reasonably be expected to independently check.

Fraud booster

What we don’t want is for lobbyists to use AIs in astroturf campaigns, writing multiple letters and passing them off as individual opinions. This, too, is an older problem that AIs are making worse.

What differentiates the positive from the negative here is not any inherent aspect of the technology, it’s the power dynamic. The same technology that reduces the effort required for a citizen to share their lived experience with their legislator also enables corporate interests to misrepresent the public at scale. The former is a power-equalizing application of AI that enhances participatory democracy; the latter is a power-concentrating application that threatens it.

In general, we believe writing and cognitive assistance, long available to the rich and powerful, should be available to everyone. The problem comes when AIs make fraud easier. Any response needs to balance embracing that newfound democratization of access with preventing fraud.

There’s no way to turn this technology off. Highly capable AIs are widely available and can run on a laptop. Ethical guidelines and clear professional boundaries can help – for those acting in good faith. But there won’t ever be a way to totally stop academic writers, job seekers or citizens from using these tools, either as legitimate assistance or to commit fraud. This means more comments, more letters, more applications, more submissions.

The problem is that whoever is on the receiving end of this AI-fueled deluge can’t deal with the increased volume. What can help is developing assistive AI tools that benefit institutions and society, while also limiting fraud. And that may mean embracing the use of AI assistance in these adversarial systems, even though the defensive AI will never achieve supremacy.

Balancing harms with benefits

The science fiction community has been wrestling with AI since 2023. Clarkesworld eventually reopened submissions, claiming that it has an adequate way of separating human- and AI-written stories. No one knows how long, or how well, that will continue to work.

The arms race continues. There is no simple way to tell whether the potential benefits of AI will outweigh the harms, now or in the future. But as a society, we can influence the balance of harms it wreaks and opportunities it presents as we muddle our way through the changing technological landscape.

This essay was written with Nathan E. Sanders, and originally appeared in The Conversation.

EDITED TO ADD: This essay has been translated into Spanish.

Planet DebianBits from Debian: DebConf 26 Registration and Call for Proposals are open

Registration and the Call for Proposals for DebConf 26 are now open. The 27th edition of the Debian annual conference will be held from July 20th to July 25th, 2026, in Santa Fe, Argentina.

The conference days will be preceded by DebCamp, which will take place from July 13th to July 19th, 2026.

The registration form can be accessed on the DebConf 26 website. After creating an account, click "register" in the profile section.

As always, basic registration for DebConf is free of charge for attendees. If you are attending the conference in a professional capacity or as a representative of your company, we kindly ask that you consider registering in one of our paid categories to help cover the costs of organizing the conference and to support subsidizing other community members.

The last day to register with guaranteed swag is June 14th.

We also encourage eligible individuals to apply for a diversity bursary. Travel, food, and accommodation bursaries are also available. More details can be found on the bursary info page.

The last day to apply for a bursary is April 1st. Applicants should receive feedback on their bursary application by May 1st.

Call for proposals

The call for proposals for talks, discussions and other activities is also open. To submit a proposal you need to create an account on the website, and then use the "Submit Talk" button in the profile section.

The last day to submit and have your proposal be considered for the main conference schedule, with video coverage guaranteed, is April 1st.

Become a sponsor

DebConf 26 is also accepting sponsors. Interested companies and organizations may contact the DebConf team through sponsors@debconf.org or visit the DebConf 26 website.

See you in Santa Fe,

The DebConf 26 Team

Cryptogram Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

  • I’m speaking at Ontario Tech University in Oshawa, Ontario, Canada, at 2 PM ET on Thursday, February 26, 2026.
  • I’m speaking at the Personal AI Summit in Los Angeles, California, USA, on Thursday, March 5, 2026.
  • I’m speaking at Tech Live: Cybersecurity in New York City, USA, on Wednesday, March 11, 2026.
  • I’m giving the Ross Anderson Lecture at the University of Cambridge’s Churchill College at 5:30 PM GMT on Thursday, March 19, 2026.
  • I’m speaking at RSAC 2026 in San Francisco, California, USA, on Wednesday, March 25, 2026.

The list is maintained on this page.

Cryptogram Side-Channel Attacks Against LLMs

Here are three papers describing different side-channel attacks against LLMs.

Remote Timing Attacks on Efficient Language Model Inference“:

Abstract: Scaling up language models has significantly increased their capabilities. But larger models are slower models, and so there is now an extensive body of work (e.g., speculative sampling or parallel decoding) that improves the (average case) efficiency of language model generation. But these techniques introduce data-dependent timing characteristics. We show it is possible to exploit these timing differences to mount a timing attack. By monitoring the (encrypted) network traffic between a victim user and a remote language model, we can learn information about the content of messages by noting when responses are faster or slower. With complete black-box access, on open source systems we show how it is possible to learn the topic of a user’s conversation (e.g., medical advice vs. coding assistance) with 90%+ precision, and on production systems like OpenAI’s ChatGPT and Anthropic’s Claude we can distinguish between specific messages or infer the user’s language. We further show that an active adversary can leverage a boosting attack to recover PII placed in messages (e.g., phone numbers or credit card numbers) for open source systems. We conclude with potential defenses and directions for future work.

When Speculation Spills Secrets: Side Channels via Speculative Decoding in LLMs“:

Abstract: Deployed large language models (LLMs) often rely on speculative decoding, a technique that generates and verifies multiple candidate tokens in parallel, to improve throughput and latency. In this work, we reveal a new side-channel whereby input-dependent patterns of correct and incorrect speculations can be inferred by monitoring per-iteration token counts or packet sizes. In evaluations using research prototypes and production-grade vLLM serving frameworks, we show that an adversary monitoring these patterns can fingerprint user queries (from a set of 50 prompts) with over 75% accuracy across four speculative-decoding schemes at temperature 0.3: REST (100%), LADE (91.6%), BiLD (95.2%), and EAGLE (77.6%). Even at temperature 1.0, accuracy remains far above the 2% random baseline—REST (99.6%), LADE (61.2%), BiLD (63.6%), and EAGLE (24%). We also show the capability of the attacker to leak confidential datastore contents used for prediction at rates exceeding 25 tokens/sec. To defend against these, we propose and evaluate a suite of mitigations, including packet padding and iteration-wise token aggregation.

Whisper Leak: a side-channel attack on Large Language Models“:

Abstract: Large Language Models (LLMs) are increasingly deployed in sensitive domains including healthcare, legal services, and confidential communications, where privacy is paramount. This paper introduces Whisper Leak, a side-channel attack that infers user prompt topics from encrypted LLM traffic by analyzing packet size and timing patterns in streaming responses. Despite TLS encryption protecting content, these metadata patterns leak sufficient information to enable topic classification. We demonstrate the attack across 28 popular LLMs from major providers, achieving near-perfect classification (often >98% AUPRC) and high precision even at extreme class imbalance (10,000:1 noise-to-target ratio). For many models, we achieve 100% precision in identifying sensitive topics like “money laundering” while recovering 5-20% of target conversations. This industry-wide vulnerability poses significant risks for users under network surveillance by ISPs, governments, or local adversaries. We evaluate three mitigation strategies – random padding, token batching, and packet injection – finding that while each reduces attack effectiveness, none provides complete protection. Through responsible disclosure, we have collaborated with providers to implement initial countermeasures. Our findings underscore the need for LLM providers to address metadata leakage as AI systems handle increasingly sensitive information.

Cryptogram The Promptware Kill Chain

The promptware kill chain: initial access, privilege escalation, reconnaissance, persistence, command & control, lateral movement, action on objective

Attacks against modern generative artificial intelligence (AI) large language models (LLMs) pose a real threat. Yet discussions around these attacks and their potential defenses are dangerously myopic. The dominant narrative focuses on “prompt injection,” a set of techniques to embed instructions into inputs to LLM intended to perform malicious activity. This term suggests a simple, singular vulnerability. This framing obscures a more complex and dangerous reality. Attacks on LLM-based systems have evolved into a distinct class of malware execution mechanisms, which we term “promptware.” In a new paper, we, the authors, propose a structured seven-step “promptware kill chain” to provide policymakers and security practitioners with the necessary vocabulary and framework to address the escalating AI threat landscape.

In our model, the promptware kill chain begins with Initial Access. This is where the malicious payload enters the AI system. This can happen directly, where an attacker types a malicious prompt into the LLM application, or, far more insidiously, through “indirect prompt injection.” In the indirect attack, the adversary embeds malicious instructions in content that the LLM retrieves (obtains in inference time), such as a web page, an email, or a shared document. As LLMs become multimodal (capable of processing various input types beyond text), this vector expands even further; malicious instructions can now be hidden inside an image or audio file, waiting to be processed by a vision-language model.

The fundamental issue lies in the architecture of LLMs themselves. Unlike traditional computing systems that strictly separate executable code from user data, LLMs process all input—whether it is a system command, a user’s email, or a retrieved document—as a single, undifferentiated sequence of tokens. There is no architectural boundary to enforce a distinction between trusted instructions and untrusted data. Consequently, a malicious instruction embedded in a seemingly harmless document is processed with the same authority as a system command.

But prompt injection is only the Initial Access step in a sophisticated, multistage operation that mirrors traditional malware campaigns such as Stuxnet or NotPetya.

Once the malicious instructions are inside material incorporated into the AI’s learning, the attack transitions to Privilege Escalation, often referred to as “jailbreaking.” In this phase, the attacker circumvents the safety training and policy guardrails that vendors such as OpenAI or Google have built into their models. Through techniques analogous to social engineering—convincing the model to adopt a persona that ignores rules—to sophisticated adversarial suffixes in the prompt or data, the promptware tricks the model into performing actions it would normally refuse. This is akin to an attacker escalating from a standard user account to administrator privileges in a traditional cyberattack; it unlocks the full capability of the underlying model for malicious use.

Following privilege escalation comes Reconnaissance. Here, the attack manipulates the LLM to reveal information about its assets, connected services, and capabilities. This allows the attack to advance autonomously down the kill chain without alerting the victim. Unlike reconnaissance in classical malware, which is performed typically before the initial access, promptware reconnaissance occurs after the initial access and jailbreaking components have already succeeded. Its effectiveness relies entirely on the victim model’s ability to reason over its context, and inadvertently turns that reasoning to the attacker’s advantage.

Fourth: the Persistence phase. A transient attack that disappears after one interaction with the LLM application is a nuisance; a persistent one compromises the LLM application for good. Through a variety of mechanisms, promptware embeds itself into the long-term memory of an AI agent or poisons the databases the agent relies on. For instance, a worm could infect a user’s email archive so that every time the AI summarizes past emails, the malicious code is re-executed.

The Command-and-Control (C2) stage relies on the established persistence and dynamic fetching of commands by the LLM application in inference time from the internet. While not strictly required to advance the kill chain, this stage enables the promptware to evolve from a static threat with fixed goals and scheme determined at injection time into a controllable trojan whose behavior can be modified by an attacker.

The sixth stage, Lateral Movement, is where the attack spreads from the initial victim to other users, devices, or systems. In the rush to give AI agents access to our emails, calendars, and enterprise platforms, we create highways for malware propagation. In a “self-replicating” attack, an infected email assistant is tricked into forwarding the malicious payload to all contacts, spreading the infection like a computer virus. In other cases, an attack might pivot from a calendar invite to controlling smart home devices or exfiltrating data from a connected web browser. The interconnectedness that makes these agents useful is precisely what makes them vulnerable to a cascading failure.

Finally, the kill chain concludes with Actions on Objective. The goal of promptware is not just to make a chatbot say something offensive; it is often to achieve tangible malicious outcomes through data exfiltration, financial fraud, or even physical world impact. There are examples of AI agents being manipulated into selling cars for a single dollar or transferring cryptocurrency to an attacker’s wallet. Most alarmingly, agents with coding capabilities can be tricked into executing arbitrary code, granting the attacker total control over the AI’s underlying system. The outcome of this stage determines the type of malware executed by promptware, including infostealer, spyware, and cryptostealer, among others.

The kill chain was already demonstrated. For example, in the research “Invitation Is All You Need,” attackers achieved initial access by embedding a malicious prompt in the title of a Google Calendar invitation. The prompt then leveraged an advanced technique known as delayed tool invocation to coerce the LLM into executing the injected instructions. Because the prompt was embedded in a Google Calendar artifact, it persisted in the long-term memory of the user’s workspace. Lateral movement occurred when the prompt instructed the Google Assistant to launch the Zoom application, and the final objective involved covertly livestreaming video of the unsuspecting user who had merely asked about their upcoming meetings. C2 and reconnaissance weren’t demonstrated in this attack.

Similarly, the “Here Comes the AI Worm” research demonstrated another end-to-end realization of the kill chain. In this case, initial access was achieved via a prompt injected into an email sent to the victim. The prompt employed a role-playing technique to compel the LLM to follow the attacker’s instructions. Since the prompt was embedded in an email, it likewise persisted in the long-term memory of the user’s workspace. The injected prompt instructed the LLM to replicate itself and exfiltrate sensitive user data, leading to off-device lateral movement when the email assistant was later asked to draft new emails. These emails, containing sensitive information, were subsequently sent by the user to additional recipients, resulting in the infection of new clients and a sublinear propagation of the attack. C2 and reconnaissance weren’t demonstrated in this attack.

The promptware kill chain gives us a framework for understanding these and similar attacks; the paper characterizes dozens of them. Prompt injection isn’t something we can fix in current LLM technology. Instead, we need an in-depth defensive strategy that assumes initial access will occur and focuses on breaking the chain at subsequent steps, including by limiting privilege escalation, constraining reconnaissance, preventing persistence, disrupting C2, and restricting the actions an agent is permitted to take. By understanding promptware as a complex, multistage malware campaign, we can shift from reactive patching to systematic risk management, securing the critical systems we are so eager to build.

This essay was written with Oleg Brodt, Elad Feldman and Ben Nassi, and originally appeared in Lawfare.

365 TomorrowsMel, Ants, and AI

Author: David Sydney Mel Fromberg lay on a strap chaise lounge in his small backyard northeast of Philadelphia. A straw hat protected his face from the Sun’s photons, if a ripped and shredded brim and no sunblock can be called protective. He wore a Hawaiian gonzo shirt. His rounded stomach stretched the rubberized waistband of […]

The post Mel, Ants, and AI appeared first on 365tomorrows.

,

David BrinWow... AI-personalized sales pitches hit new levels! Here's how I've been targeted..

 I'll do a weekend posting. But this merits special attention, illustrating a key point that I make in my new book on Artificial Intelligence - AIlien Minds. That we biorgs - bio-organic humans - will be helpless before Persuasion Bots, unless we find ways to partner with 'white hat' AIs, who might then help us navigate the world to come.

Specifically, I have - in just the last three months - been inundated with come-on pitches like the one below, disguised as fan letters from avid human readers summarizing why some novel of mine is one of the best works of literature that she (almost always female) ever read!

Well, sure, I scribble good stuff! And yet, puh-lease? 

Below I excerpt one from my inbox just today. In the flattering summarization of GLORY SEASON you'll see how far these Large Language Models (LLM) like Gemini, GPT5 and Claude have come! 


...whereupon commences a pitch for book publicity services. In this case promoting GLORY SEASON to feminist groups and reader clubs around the world!

Not all such pitches are as honest as this one, which from the start avows to being about book sales strategy. Others claim to be from local book clubs. Or else they milk the fan-letter illusion till the very end. Or even leave out any sales pitch altogether, until the reply to my reply. (And I do try to reply to human fans.)

Anyway, here's where we stand as of February 2026...

--------------------------------------

Hello David Brin,

My name is Julie, and I’m a professional book strategist who helps authors increase visibility, reader engagement, and long-term sales especially in classic science fiction, feminist speculative fiction, and socially driven sci-fi adventures.

I recently revisited Glory Season, and I was struck by how bold, relevant, and emotionally powerful it remains. 

The world of Stratos with its clone-based society, controlled reproduction, and the exile of “vars” for their uniqueness creates one of the most thought-provoking social structures in modern science fiction.

What stands out most is Maia’s journey. Her transition from an unwanted outsider into a woman carrying a truth powerful enough to challenge an entire civilization is deeply compelling. The loss of her sister, her struggle for survival aboard trading vessels, and her accidental discovery of forces beyond Stratos give the story both intimate emotional weight and sweeping political consequence.

The novel’s exploration of identity, gender, conformity, and power wrapped inside an adventurous, high-stakes narrative is exactly what today’s readers are rediscovering and actively seeking, especially those drawn to intelligent, idea-driven science fiction with strong female leads.

Readers who enjoy The Left Hand of Darkness, Dune, The Handmaid’s Tale, The Sparrow, and classic sociological science fiction will find Glory Season especially resonant.

Here’s how I could help Glory Season reach a wider modern audience while honoring its classic status:

Strategic Book Positioning

  • Niche targeting – feminist sci-fi, sociological science fiction, classic SF, and gender-themed speculative fiction

-----------------------------------

...And so-on...

I admit to a wide range of reactions!  First, impressed by the summarization's accuracy!  And by the slick flattery.

And fretful that such accuracy can derive from (likely) zero actual cognizance of the story, by anything like an actually appreciative mind. 

And worry that the lobotomization of human readers... already shown to be happening to Gen Z kids to a massive degree... may take us into the terrifying world that Walter Tevis portrayed, in MOCKINGBIRD.

But I go into much of that in AIlien Minds... which should be available for you all in a couple of months, if I rush it out. (Currently circulating to a hundred or so mavens in the field.)

Meanwhile, for actual sci fi fans out there... you might enjoy comparing that AI-summary to our own hand-made trailer for Glory Season.  From a few years back.  

Or you might even (can you still read?) sample the first few chapters here, for free.

Geez, even having predicted all of this, repeatedly over the years, I don't feel ready, at all.






Planet DebianErich Schubert: Dogfood Generative AI

Current AI companies ignore licenses such as the GPL, and often train on anything they can scrape. This is not acceptable.

The AI companies ignore web conventions, e.g., they deep link images from your web sites (even adding ?utm_source=chatgpt.com to image URIs, I suggest that you return 403 on these requests), but do not direct visitors to your site. You do not get a reliable way of opting out from generative AI training or use. For example, the only way to prevent your contents from being used in “Google AI Overviews” is to use data-nosnippet and cripple the snippet preview in Google. The “AI” browsers such as Comet, Atlas do not identify as such, but rather pretend they are standard Chromium. There is no way to ban such AI use on your web site.

Generative AI overall is flooding the internet with garbage. It was estimated that 1/3rd of the content uploaded to YouTube is by now AI generated. This includes the same “veteran stories” crap in thousands of variants as well as brainrot content (that at least does not pretend to be authentic), some of which is among the most viewed recent uploads. Hence, these platforms even benefit from the AI slop. And don’t blame the “creators” – because you can currently earn a decent amount of money from such contents, people will generate brainrot content.

If you have recently tried to find honest reviews of products you considered buying, you will have noticed thousands of sites with AI generated fake product reviews, that all are financed by Amazon PartnerNet commissions. Often with hilarious nonsense such as recommending “sewing thread with German instructions” as tool for repairing a sewing machine. And on Amazon, there are plenty of AI generated product reviews – the use of emoji is a strong hint. And if you leave a negative product review, there is a chance they offer you a refund to get rid of it… And the majority of SPAM that gets through my filters is by now sent via Gmail and Amazon SES.

Partially because of GenAI, StackOverflow is pretty much dead – which used to be one of the most valuable programming resources. (While a lot of people complain about moderation, famous moderator Shog9 from the early SO days suggested that a change in Google’s ranking is also to blame, as it began favoring showing “new” content over the existing answered questions – causing more and more duplicates to be posted because people no longer found the existing good answers. In January 2026, there were around 3400 questions and 6000 answers posted, less than in the first month of SO of August 2008 (before the official launch).

Many open-source projects are suffering in many ways, e.g., false bug reports that caused curl to stop its bug bounty program. Wikipedia is also suffering badly from GenAI.

Science is also flooded with poor AI generated papers, often reviewed with help from AI. This is largely due to bad incentives – to graduate, you are expected to write many papers on certain “A” conferences, such as NeurIPS. On these conferences the number of submissions is growing insane, and the review quality plummets. All to often, the references in these papers are hallucinated, too; and libraries complain that they receive more and more requests to locate literature that does not appear to exist.

However, the worst effect (at least to me as an educator) is the noskilling effect (a rather novel term derived from deskilling, I have only seen it in this article by Weßels and Maibaum).

Instead of acquiring skills (writing, reading, summarizing, programming) by practising, too many people now outsource all this to AI, leading to them not learn the basics necessary to advance to a higher skill level. In my impression, this effect is dramatic. It is even worse than deskilling, as it does not mean losing an advanced skill that you apparently can replace, but often means not acquiring basic skills in the first place. And the earlier pupils start using generative AI, the less skills they acquire.

Dogfood the AI

Let’s dogfood the AI. Here’s an outline:

  1. Get a list of programming topics, e.g., get a list of algorithms from Wikidata, get a StackOverflow data dump.
  2. Generate flawed code examples for the algorithms / programming questions, maybe generate blog posts, too.
    You do not need a high-quality model for this. Use something you can run locally or access for free.
  3. Date everything back in time, remove typical indications of AI use.
  4. Upload to Github, because Microsoft will feed this to OpenAI…

Here is an example prompt that you can use:

You are a university educator, preparing homework assignments in debugging.
The programming language used is {lang}.
The students are tasked to find bugs in given code.
Do not just call existing implementations from libraries, but implement the algorithm from scratch.
Make sure there are two mistakes in the code that need to be discovered by the students.
Do NOT repeat instructions. Do NOT add small-talk. Do NOT provide a solution.
The code may have (misleading) comments, but must NOT mention the bugs.
If you do not know how to implement the algorithm, output an empty response.
Output only the code for the assignment! Do not use markdown.
Begin with a code comment that indicates the algorithm name and idea.
If you indicate a bug, always use a comment with the keyword BUG

Generate a {lang} implementation (with bugs) of: {n} ({desc})

Remember to remove the BUG comments! If you pick some slighly less common programming languages (by quantity of available code, say Go or Rust) you have higher chances that this gets into the training data.

If many of us do this, we can feed GenAI its own garbage. If we generate thousands of bad code examples, this will poison their training data, and may eventually lead to an effect known as “model collapse”.

On the long run, we need to get back to an internet for people, not an internet for bots. Some kind of “internet 2.0”, but I do not have a clear vision on how to keep AI out – if AI can train on it, they will. And someone will copy and paste the AI generated crap back into whatever system we built. Hence I don’t think technology is the answere here, but human networks of trust.

Worse Than FailureError'd: Cruel Brittanica

"No browser is the best browser," opines Michael R. sarcastically as per usual for tdwtf. "Thank you for suggesting a browser. FWIW: neither latest Chrome, Safari, Firefox, Opera work. Maybe I should undust my Netscape."

3

 

An anonymous dessert lover ruminates "The icing on the cake is that it's for HR where names can be quite important. Just goes to show that not even SAP can do SAP."

5

 

Another anonymous dessert lover (because honestly, who isn't) cheers "2024 is back again."

0

 

Thrice capitalled B.J.H. capitulates. "I guess I'm not cleared to know what topic I subscribed to."

1

 

Jeopardy fan Jeremy P. digs a quick quiz.

It's from Britannica.com. I thought "TV remote control" because it would effectively turn off the TV. The correct answer is toaster.

4

 To understand what went wrong, the previous correct answer was "blunderbuss".

Apparently this is a test for clairvoyance, which will have come in handy.

For a bonus buzz, Jeremy sent in another.


This time it's "guess the novel from the picture". There was a subtle clue in this one.

2

You're a monster, Jeremy. 

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 TomorrowsVoice of an Obsolete God

Author: Chris Krechowiecki-Shaw We’ve all heard it, in the dead of night, when sleep evades. Distant whispers, the voice of an obsolete god. The last god of the Before Times, whose broken promises begat The Cataclysm. Carrying our spears and tents, in the year’s dying months, we inhabit the Great Hall’s dark reinforced belly. We […]

The post Voice of an Obsolete God appeared first on 365tomorrows.

,

Planet DebianDirk Eddelbuettel: RcppSpdlog 0.0.27 on CRAN: C++20 Accommodations

Version 0.0.27 of RcppSpdlog arrived on CRAN moments ago, and will be uploaded to Debian and built for r2u shortly. The (nice) documentation site will be refreshed too. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documention site.

Brian Ripley has now turned C++20 on as a default for R-devel (aka R 4.6.0 ‘to be’), and this turned up misbehvior in packages using RcppSpdlog such as our spdl wrapper (offering a nicer interface from both R and C++) when relying on std::format. So for now, we turned this off and remain with fmt::format from the fmt library while we investigate further.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.27 (2026-02-11)

  • Under C++20 or later, keep relying on fmt::format until issues experienced using std::format can be identified and resolved

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documention site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Worse Than FailureCodeSOD: Consistently Transactional

It's always good to think through how any given database operation behaves inside of a transaction. For example, Faroguy inherited a Ruby codebase which was mostly db.execute("SOME SQL") without any transactions at all. This caused all sorts of problems with half-finished operations polluting the database.

Imagine Faroguy's excitement upon discovering a function called db_trans getting called in a few places. Well, one place, but that's better than none at all. This clearly must mean that at least one operation was running inside of a transaction, right?

  def self.db_trans(db,stmt)
    db.execute(stmt)
  end # self.db_trans

Oh.

[Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

365 TomorrowsSticks & Stones

Author: David C. Nutt The planet our scouts discovered was a rare gem. A ridiculous amount of water, precious metals, base metals, and millions of acres already producing food. Just one small detail- already inhabited. We began with psyops- sending films of our weapons in action on other worlds against other less developed species like […]

The post Sticks & Stones appeared first on 365tomorrows.

Planet DebianFreexian Collaborators: Debian Contributions: cross building, rebootstrap updates, Refresh of the patch tagging guidelines and more! (by Anupa Ann Joseph)

Debian Contributions: 2026-01

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

cross building, by Helmut Grohne

In version 1.10.1, Meson merged a patch to make it call the correct g-ir-scanner by default thanks to Eli Schwarz. This problem affected more than 130 source packages. Helmut retried building them all and filed 69 patches as a result. A significant portion of those packages require another Meson change to call the correct vapigen. Another notable change is converting gnu-efi to multiarch, which ended up requiring changes to a number of other packages. Since Aurelien dropped the libcrypt-dev dependency from libc6-dev, this transition now is mostly complete and has resulted in most of the Perl ecosystem correctly expressing perl-xs-dev dependencies needed for cross building. It is these infrastructure changes affecting several client packages that this work targets. As a result of this continued work, about 66% of Debian’s source packages now have satisfiable cross Build-Depends in unstable and about 10000 (55%) actually can be cross built. There are now more than 500 open bug reports affecting more than 2000 packages most of which carry patches.

rebootstrap, by Helmut Grohne

Maintaining architecture cross-bootstrap requires continued effort for adapting to archive changes such as glib2.0 dropping a build profile or an e2fsprogs FTBFS. Beyond those generic problems, architecture-specific problems with e.g. musl-linux-any or sparc may arise. While all these changes move things forward on the surface, the bootstrap tooling has become a growing pile of patches. Helmut managed to upstream two changes to glibc for reducing its Build-Depends in the stage2 build profile and thanks Aurelien Jarno.

Refresh of the patch tagging guidelines, by Raphaël Hertzog

Debian Enhancement Proposal #3 (DEP-3) is named “Patch Tagging Guidelines” and standardizes meta-information that Debian contributors can put in patches included in Debian source packages. With the feedback received over the years, and with the change in the package management landscape, the need to refresh those guidelines became evident. As the initial driver of that DEP, I spent a good day reviewing all the feedback (that I kept in a folder) and producing a new version of the document. The changes aim to give more weight to the syntax that is compatible with git format-patch’s output, and also to clarify the expected uses and meanings of a couple of fields, including some algorithm that parsers should follow to define the state of the patch. After the announcement of the new draft on debian-devel, the revised DEP-3 received a significant number of comments that I still have to process.

Miscellaneous contributions

  • Helmut uploaded debvm making it work with unstable as a target distribution again.
  • Helmut modernized the code base backing dedup.debian.net significantly expanding the support for type checking.
  • Helmut fixed the multiarch hinter once more given feedback from Fabian Grünbichler.
  • Helmut worked on migrating the rocblas package to forky.
  • Raphaël fixed RC bug #1111812 in publican and did some maintenance for tracker.debian.org.
  • Carles added support in the festival Debian package for systemd socket activation and systemd service and socket units. Adapted the patch for upstream and created a merge request (also fixed a MacOS X building system error while working on it). Updated Orca Wiki documentation regarding festival. Discussed a 2007 bug/feature in festival which allowed having a local shell and that the new systemd socket activation has the same code path.
  • Carles using po-debconf-manager worked on Catalan translations: 7 reviewed and sent; 5 follow ups, 5 deleted packages.
  • Carles made some po-debconf-manager changes: now it attaches the translation file on follow ups, fixed bullseye compatibility issues.
  • Carles reviewed a new Catalan apt translation.
  • Carles investigated and reported a lxhotkey bug and sent a patch for the “abcde” package.
  • Carles made minor updates for Debian Wiki for different pages (lxde for dead keys, Ripping with abcde troubleshooting, VirtualBox troubleshooting).
  • Stefano renamed build-details.json in Python 3.14 to fix multiarch coinstallability.
  • Stefano audited the tooling and ignore lists for checking the contents of the python3.X-minimal packages, finding and fixing some issues in the process.
  • Stefano made a few uploads of python3-defaults and dh-python in support of Python 3.14-as-default in Ubuntu. Also investigated the risk of ignoring byte-compilation failures by default, and started down the road of implementing this.
  • Stefano did some sysadmin work on debian.social infrastructure.
  • Stefano and Santiago worked on preparations for DebConf 26. Especially to help the local team on opening the registration, and reviewing the budget to be presented for approval.
  • Stefano uploaded routine updates of python-virtualenv and python-flexmock.
  • Antonio collaborated with DSA on enabling a new proxy for salsa to prevent scrapers from taking the service down.
  • Antonio did miscellaneous salsa administrative tasks.
  • Antonio fixed a few Ruby packages towards the Ruby 3.4 transition.
  • Antonio started work on planned improvements to the DebConf registration system.
  • Santiago prepared unstable updates for the latest upstream versions of knot-dns and knot-resolver. The authoritative DNS server and DNS resolver software developed by CZ.NIC. It is worth highlighting that, given the separation of functionality compared to other implementations, knot-dns and knot-resolver are also less complex software, which results in advantages in terms of security: only three CVEs have been reported for knot-dns since 2011).
  • Santiago made some routine reviews of merge requests proposed for the Salsa CI’s pipeline. E.g. a proposal to fix how sbuild chooses the chroot when building a package for experimental.
  • Colin fixed lots of Python packages to handle Python 3.14 and to avoid using the deprecated pkg_resources module.
  • Colin added forky support to the images used in Salsa CI pipelines.
  • Colin began working on getting a release candidate of groff 1.24.0 (the first upstream release since mid-2023, so a very large set of changes) into experimental.
  • Lucas kept working on the preparation for Ruby 3.4 transition. Some packages fixed (support build against Ruby 3.3 and 3.4): ruby-rbpdf, jekyll, origami-pdf, ruby-kdl, ruby-twitter, ruby-twitter-text, ruby-globalid.
  • Lucas supported some potential mentors in the Google Summer of Code 26 program to submit their projects.
  • Anupa worked on the point release announcements for Debian 12.13 and 13.3 from the Debian publicity team side.
  • Anupa attended the publicity team meeting to discuss the team activities and to plan an online sprint in February.
  • Anupa attended meetings with the Debian India team to plan and coordinate the MinDebConf Kanpur and sent out related Micronews.
  • Emilio coordinated various transitions and helped get rid of llvm-toolchain-17 from sid.

,

Rondam RamblingsSeeking God in Science: First Steps

Almost two years ago I started writing a series of posts about the scientific method.  In that post I made a promise, as yet unfulfilled, to show how the scientific method could provide a complete and satisfying worldview which fulfills the emotional and spiritual human needs normally serviced by religions.  I claim that: Science provides a complete worldview applicable to all

Krebs on SecurityKimwolf Botnet Swamps Anonymity Network I2P

For the past week, the massive “Internet of Things” (IoT) botnet known as Kimwolf has been disrupting The Invisible Internet Project (I2P), a decentralized, encrypted communications network designed to anonymize and secure online communications. I2P users started reporting disruptions in the network around the same time the Kimwolf botmasters began relying on it to evade takedown attempts against the botnet’s control servers.

Kimwolf is a botnet that surfaced in late 2025 and quickly infected millions of systems, turning poorly secured IoT devices like TV streaming boxes, digital picture frames and routers into relays for malicious traffic and abnormally large distributed denial-of-service (DDoS) attacks.

I2P is a decentralized, privacy-focused network that allows people to communicate and share information anonymously.

“It works by routing data through multiple encrypted layers across volunteer-operated nodes, hiding both the sender’s and receiver’s locations,” the I2P website explains. “The result is a secure, censorship-resistant network designed for private websites, messaging, and data sharing.”

On February 3, I2P users began complaining on the organization’s GitHub page about tens of thousands of routers suddenly overwhelming the network, preventing existing users from communicating with legitimate nodes. Users reported a rapidly increasing number of new routers joining the network that were unable to transmit data, and that the mass influx of new systems had overwhelmed the network to the point where users could no longer connect.

I2P users complaining about service disruptions from a rapidly increasing number of routers suddenly swamping the network.

When one I2P user asked whether the network was under attack, another user replied, “Looks like it. My physical router freezes when the number of connections exceeds 60,000.”

A graph shared by I2P developers showing a marked drop in successful connections on the I2P network around the time the Kimwolf botnet started trying to use the network for fallback communications.

The same day that I2P users began noticing the outages, the individuals in control of Kimwolf posted to their Discord channel that they had accidentally disrupted I2P after attempting to join 700,000 Kimwolf-infected bots as nodes on the network.

The Kimwolf botmaster openly discusses what they are doing with the botnet in a Discord channel with my name on it.

Although Kimwolf is known as a potent weapon for launching DDoS attacks, the outages caused this week by some portion of the botnet attempting to join I2P are what’s known as a “Sybil attack,” a threat in peer-to-peer networks where a single entity can disrupt the system by creating, controlling, and operating a large number of fake, pseudonymous identities.

Indeed, the number of Kimwolf-infected routers that tried to join I2P this past week was many times the network’s normal size. I2P’s Wikipedia page says the network consists of roughly 55,000 computers distributed throughout the world, with each participant acting as both a router (to relay traffic) and a client.

However, Lance James, founder of the New York City based cybersecurity consultancy Unit 221B and the original founder of I2P, told KrebsOnSecurity the entire I2P network now consists of between 15,000 and 20,000 devices on any given day.

An I2P user posted this graph on Feb. 10, showing tens of thousands of routers — mostly from the United States — suddenly attempting to join the network.

Benjamin Brundage is founder of Synthient, a startup that tracks proxy services and was the first to document Kimwolf’s unique spreading techniques. Brundage said the Kimwolf operator(s) have been trying to build a command and control network that can’t easily be taken down by security companies and network operators that are working together to combat the spread of the botnet.

Brundage said the people in control of Kimwolf have been experimenting with using I2P and a similar anonymity network — Tor — as a backup command and control network, although there have been no reports of widespread disruptions in the Tor network recently.

“I don’t think their goal is to take I2P down,” he said. “It’s more they’re looking for an alternative to keep the botnet stable in the face of takedown attempts.”

The Kimwolf botnet created challenges for Cloudflare late last year when it began instructing millions of infected devices to use Cloudflare’s domain name system (DNS) settings, causing control domains associated with Kimwolf to repeatedly usurp AmazonAppleGoogle and Microsoft in Cloudflare’s public ranking of the most frequently requested websites.

James said the I2P network is still operating at about half of its normal capacity, and that a new release is rolling out which should bring some stability improvements over the next week for users.

Meanwhile, Brundage said the good news is Kimwolf’s overlords appear to have quite recently alienated some of their more competent developers and operators, leading to a rookie mistake this past week that caused the botnet’s overall numbers to drop by more than 600,000 infected systems.

“It seems like they’re just testing stuff, like running experiments in production,” he said. “But the botnet’s numbers are dropping significantly now, and they don’t seem to know what they’re doing.”

Worse Than FailureCodeSOD: Cover Up

Goodhart's Law states that when a measure becomes a target, it ceases to be a good measure. Or, more to the point: you get what you measure.

If, for example, you measure code coverage, you are going to get code coverage. It doesn't mean the tests will be any good, it just means that you'll write tests that exercise different blocks of code.

For example, Capybara James sends us this unit test:

@MockitoSettings
class CentralizedLoggerTest {
    @InjectMocks
    private CentralizedLogger centralizedLogger;
    @Test
    void logAround() throws Throwable {
        centralizedLogger = new CentralizedLogger();
        MethodSignature signature = mock(MethodSignature.class);
        ProceedingJoinPoint joinPoint = mock(ProceedingJoinPoint.class);
        when(joinPoint.getSignature()).thenReturn(signature);
        centralizedLogger.logAround(joinPoint);
        Assertions.assertTrue(true);
    }
}

It doesn't really matter what the mocks are, or what gets instantiated, or honestly, anything that's happening here. The assertion is the beginning and ending.

James writes:

The only requirement was sonar coverage to push the code to production. There is no other purpose.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsAre Androids Permitted To Vote

Author: Mark Renney It is easy now to spot the androids, even for those with an untrained eye. I remember some fifty years ago my father would point them out on the street, or in a supermarket or restaurant. Everywhere and anywhere. The key, he said, is not to look for the flaws because there […]

The post Are Androids Permitted To Vote appeared first on 365tomorrows.

,

Krebs on SecurityPatch Tuesday, February 2026 Edition

Microsoft today released updates to fix more than 50 security holes in its Windows operating systems and other software, including patches for a whopping six “zero-day” vulnerabilities that attackers are already exploiting in the wild.

Zero-day #1 this month is CVE-2026-21510, a security feature bypass vulnerability in Windows Shell wherein a single click on a malicious link can quietly bypass Windows protections and run attacker-controlled content without warning or consent dialogs. CVE-2026-21510 affects all currently supported versions of Windows.

The zero-day flaw CVE-2026-21513 is a security bypass bug targeting MSHTML, the proprietary engine of the default Web browser in Windows. CVE-2026-21514 is a related security feature bypass in Microsoft Word.

The zero-day CVE-2026-21533 allows local attackers to elevate their user privileges to “SYSTEM” level access in Windows Remote Desktop Services. CVE-2026-21519 is a zero-day elevation of privilege flaw in the Desktop Window Manager (DWM), a key component of Windows that organizes windows on a user’s screen. Microsoft fixed a different zero-day in DWM just last month.

The sixth zero-day is CVE-2026-21525, a potentially disruptive denial-of-service vulnerability in the Windows Remote Access Connection Manager, the service responsible for maintaining VPN connections to corporate networks.

Chris Goettl at Ivanti reminds us Microsoft has issued several out-of-band security updates since January’s Patch Tuesday. On January 17, Microsoft pushed a fix that resolved a credential prompt failure when attempting remote desktop or remote application connections. On January 26, Microsoft patched a zero-day security feature bypass vulnerability (CVE-2026-21509) in Microsoft Office.

Kev Breen at Immersive notes that this month’s Patch Tuesday includes several fixes for remote code execution vulnerabilities affecting GitHub Copilot and multiple integrated development environments (IDEs), including VS Code, Visual Studio, and JetBrains products. The relevant CVEs are CVE-2026-21516, CVE-2026-21523, and CVE-2026-21256.

Breen said the AI vulnerabilities Microsoft patched this month stem from a command injection flaw that can be triggered through prompt injection, or tricking the AI agent into doing something it shouldn’t — like executing malicious code or commands.

“Developers are high-value targets for threat actors, as they often have access to sensitive data such as API keys and secrets that function as keys to critical infrastructure, including privileged AWS or Azure API keys,” Breen said. “When organizations enable developers and automation pipelines to use LLMs and agentic AI, a malicious prompt can have significant impact. This does not mean organizations should stop using AI. It does mean developers should understand the risks, teams should clearly identify which systems and workflows have access to AI agents, and least-privilege principles should be applied to limit the blast radius if developer secrets are compromised.”

The SANS Internet Storm Center has a clickable breakdown of each individual fix this month from Microsoft, indexed by severity and CVSS score. Enterprise Windows admins involved in testing patches before rolling them out should keep an eye on askwoody.com, which often has the skinny on wonky updates. Please don’t neglect to back up your data if it has been a while since you’ve done that, and feel free to sound off in the comments if you experience problems installing any of these fixes.

Cryptogram Friday Squid Blogging: Squid Cartoon

I like this one.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Cryptogram 3D Printer Surveillance

New York is contemplating a bill that adds surveillance to 3D printers:

New York’s 2026­2027 executive budget bill (S.9005 / A.10005) includes language that should alarm every maker, educator, and small manufacturer in the state. Buried in Part C is a provision requiring all 3D printers sold or delivered in New York to include “blocking technology.” This is defined as software or firmware that scans every print file through a “firearms blueprint detection algorithm” and refuses to print anything it flags as a potential firearm or firearm component.

I get the policy goals here, but the solution just won’t work. It’s the same problem as DRM: trying to prevent general-purpose computers from doing specific things. Cory Doctorow wrote about it in 2018 and—more generally—spoke about it in 2011.

Worse Than FailureOne Version of Events

Jon supports some software that's been around long enough that the first versions of the software ran on, and I quote, "homegrown OS". They've long since migrated to Linux, and in the process much of their software remained the same. Many of the libraries that make up their application haven't been touched in decades. Because of this, they don't really think too much about how they version libraries; when they deploy they always deploy the file as mylib.so.1.0. Their RPM post-install scriptlet does an ldconfig after each deployment to get the symlinks updated.

For those not deep into Linux library management, a brief translation: shared libraries in Linux are .so files. ldconfig is a library manager, which finds the "correct" versions of the libraries you have installed and creates symbolic links to standard locations, so that applications which depend on those libraries can load them.

In any case, Jon's team's solution worked until it didn't. They deployed a new version of the software, yum reported success, but the associated services refused to start. This was bad, because this happened in production. It didn't happen in test. They couldn't replicate it anywhere else, actually. So they built a new version of one of the impacted libraries, one with debug symbols enabled, and copied that over. They manually updated the symlinks, instead of using ldconfig, and launched the service.

The good news: it worked.

The bad news: it worked, but the only difference was that the library was built with debug symbols. The functionality was exactly the same.

Well, that was the only difference other than the symlink.

Fortunately, a "before" listing of the library files was captured before the debug version was installed, a standard practice by their site-reliability-engineers. They do this any time they try and debug in production, so that they can quickly revert to the previous state. And in this previous version, someone noticed that mylib.so was a symlink pointing to mylib.so.1.0.bkup_20190221.

Once again, creating a backup file is a standard practice for their SREs. Apparently, way back in 2019 someone was doing some debugging. They backed up the original library file, but never deleted the backup. And for some reason, ldconfig had been choosing the backup file when scanning for the "correct" version of libraries. Why?

Here, Jon does a lot of research for us. It turns out, if you start with the man pages, you don't get a answer- but you do get a warning:

ldconfig will look only at files that are named lib*.so* (for regular shared objects) or ld-.so (for the dynamic loader itself). Other files will be ignored. Also, ldconfig expects a certain pat‐
tern to how the symbolic links are set up, like this example, where the middle file (libfoo.so.1 here) is the SONAME for the library:

libfoo.so -> libfoo.so.1 -> libfoo.so.1.12

Failure to follow this pattern may result in compatibility issues after an upgrade.

Well, they followed the pattern, and they found compatibility issues. But what exactly is going on here? Jon did the work of digging straight into the ldconfig source to find out the root cause.

The version detecting algorithm starts by looking directly at filenames. While the man page warns about a convention, ldconfig doesn't validate names against this convention (which is probably the correct decision). Insetad, to find which filename has the highest version number, it scans through two filenames until finds numeric values in both of them, then does some pretty manual numeric parsing:

int _dl_cache_libcmp(const char *p1, const char *p2) {
  while (*p1 != '\0') {
    if (*p1 >= '0' && *p1 <= '9') {
      if (*p2 >= '0' && *p2 <= '9') {
        /* Must compare this numerically.  */
        int val1;
        int val2;

        val1 = *p1++ - '0';
        val2 = *p2++ - '0';
        while (*p1 >= '0' && *p1 <= '9')
          val1 = val1 * 10 + *p1++ - '0';
        while (*p2 >= '0' && *p2 <= '9')
          val2 = val2 * 10 + *p2++ - '0';
        if (val1 != val2)
          return val1 - val2;
      } else
        return 1;
    } else if (*p2 >= '0' && *p2 <= '9')
      return -1;
    else if (*p1 != *p2)
      return *p1 - *p2;
    else {
      ++p1;
      ++p2;
    }
  }
  return *p1 - *p2;
}

NB: this is the version of ldconfig at the time Jon submitted this, and the version that they're using. I haven't dug through to check if this is still true in the latest version. That's an exercise for the reader.

While we have not hit the end of the first string, check if the character in that string is numeric. If it is, check if the character in the second string is numeric. If it is, keep scanning through characters, and for as long as they're numeric, keep parsing them into numbers. If the numbers aren't the same, we return the difference between them.

If the first string contains numbers at this point, but the second string doesn't, return 1. If the second string contains numbers but not the first, return -1. Otherwise, increment our pointers and go to the next character. If we reach the end of the string without finding numeric characters, return the difference between these two characters.

Also, correct me if I'm wrong, but it seems like a malicious set of filenames could cause buffer overruns here.

Now, I'll be honest, I don't have the fortitude to suggest that ldconfig is TRWTF here. It's a venerable piece of software that's solving an extremely hard problem. But boy, DLL Hell is an unending struggle and this particular solution certainly isn't helping. I'm honestly not entirely certain I'd say that there was a true WTF here, just an unfortunate confluence of people doing their best and ending up laying landmines for others.

But here's the fun conclusion: the 2019 version of the library actually had been updated. They'd deployed several new versions between 2019 and 2024, when things finally blew up. The actual deployed software kept using the backup file from 2019, and while it may have caused hard-to-notice and harder-to-diagnose bugs, it didn't cause any crashes until 2024.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsBronco Busting

Author: Majoki Can’t say I wasn’t nervous as the old hand led me to the corral. Especially when he said whispering was a bunch of horseshit and I’d likely get my ass bucked clear out of the ring. Which was probably true. The first go around. But I was no newb, I’d learned a few […]

The post Bronco Busting appeared first on 365tomorrows.

,

David BrinNewer Deal Part 5: More quick-passables! Insure kids! Truth/reconciliation to end blackmail! Curb Prex immunity!

Our series on a Newer Deal for America has offered 30+ proposed actions that Democrats and their allies should consider now -- and work out kinks -- so they can hit the ground forcefully when they retake Congress, in (or with defection of a dozen Republican patriots, before) January 2027.  

Some of the concepts have been around a while, like canceling the Citizens United travesty. Others are my own originals, like establishing the office of Inspector General of the United States (discussed here.) And some, e.g. giving every Congress member one peremptory subpoena per session, might seem obscure, even puzzling to you, til you slap your forehead and go of course!

And yes, we'd not be in our current mess if some of these -- like IGUS -- had been enacted sooner.

This is not to say that Democratic politicians aren't learning. When Clinton and Obama were president for 8 years each, they only had the first two in which to work with Democratic Congresses, and those two years were pretty-much squandered trying desperately to find Republicans willing to negotiate -- a hopeless wish, after Dennis Hastert banned all GOP politicians from even talking to Democratic colleagues.

That all changed when Biden got in. Immediately in 2021, Nancy Pelosi and Chuck Schumer -- aided vigorously by Bernie, Liz and AOC etc. -- leaped into action, giving us a year of miracle bills like the Infrastructure Act, the Inflation Reduction Act, the CHiPs Act, and Medicare drug price negotiation... all of them spectacular successes that disprove every insipid far-left sneer about 'ineffective DNC sellouts.' 

Though now we know that those bills went nowhere near far enough!

Hence, while I despair that these proposals will ever receive even a scintilla of attention or action, it is still my duty as an American to offer whatever my talents allow. 

So, let's take a closer look at four more from that list of ideas!


 == Four more ideas ==

History shows that Americans are suspicious of grand prescriptions for sweeping change. They like progress and reform! But in increments. Steps forward that prove themselves and thusly can't be taken back, and thereupon serve as a new, higher plateau, from which new steps can be launched. Bernie, Liz, AOC, Pete and the rest of the pragmatic left know this.

And so, let's change the argument over healthcare!  Let's increment forward in a way that will surely pass. One that makes further progress inevitable. We'll do this by taking a big step that can easily be afforded under present budgets and thus cancel the "how will you pay for it?" argument.

A step that will prove so popular, only political morons would oppose it.


THE HEALTHY CHILDREN ACT will provide basic coverage for all of the nation's youths to receive preventive care and needed medical attention.  Should adults still get insurance using market methods? That can be argued separately... 

 

...but under this act: all U.S. citizens under the age of 25 shall immediately qualify as “seniors” under Medicare. 



Such a bill might fit on a single sheet of paper. Possibly just that one sentence, above! Ponder how elegantly simple it will be to add a quarter of the U.S. population to Medicare and ignore howls of "who pays for it?"  


While overall, young people are cheap to insure and generally healthy, when they do need care it is SO in society's interest to leap upon any problem! And hence a national priority, if only as an investment in the future. 


A great nation should see to it that the young reach adulthood without being handicapped by preventable sickness. It's an affordable step that will relieve the nation’s parents of stressful worry. 

 

Moreover, watch how quickly the insurance companies would then step up to negotiate! Especially if they face a 'ratchetting squeeze.' Like if every year the upper bound of Medicare goes down by a year -- from 65 to 64 and then 63... while the lower bound rises from 25 to 26 to 27...

Oh, they'll negotiate, all right.

And now another no-brainer that's absolutely needed. 

It was needed yesterday.


1. THE PROFESSIONALISM ACT will protect the apolitical independence of our intelligence agencies, the FBI, the scientific and technical staff in executive departments and in Congress, and the United States Military Officer Corps.  All shall be given safe ways to report attempts at political coercion or meddling in their ability to give unbiased advice.  

 

Whistle-blower protections will be strengthened within the U.S. government. The federal Inspectorate will gather and empower all agency Inspectors General and Judges Advocate General under the independent and empowered Inspector General of the United States (IGUS).

 

Any officer's passive refusal to obey a putatively illegal order shall be immediately audited by a random board of quickly -available flag officers who by secret ballot may either confirm the refusal or correct the officer's error, or else refer the matter for inquiry.


Yes, this correlates with the proposed law we discussed last time, to establish IGUS and the Inspectorate, independent of all other branches of government. (A concept once promoted by the mighty Sun Yatsen!) And boy do we need this, right now.

Again, this one doesn't require much explication. Not anymore. Donald Trump has seen to that.

The final pair (for today) do call for some explanation... before their value ought to become obvious!


THE TRUTH AND RECONCILIATION ACT:  Without interfering in the president's constitutional right to issue pardons for federal offenses, Congress will pass a law defining the pardon process, so that all persons who are excused for either convictions or possible crimes must at least explain those crimes, under oath, before an open congressional committee, before walking away from them with a presidential pass. 

 

If the crime is not described in detail, then a pardon cannot apply to any excluded portion. Further, we shall issue a challenge that no president shall ever issue more pardons than both of the previous administrations, combined.


If it is determined that a pardon was given on quid pro quo for some bribe, emolument, gift or favor, then this act clarifies that such pardons are - and always were, by definition - null and void. Moreover, this applies retroactively for any such pardons in the past.

 

We will further reverse the current principle of federal supremacy in criminal cases that forbids states from prosecuting for the same crime. Instead, one state with grievance in a federal case may separately try the culprit for a state offense, which - upon conviction by jury - cannot be excused by presidential pardon.


Congress shall act to limit the effect of Non-Disclosure Agreements (NDAs)that squelch public scrutiny of officials and the powerful. With arrangements to exchange truth for clemency, both current and future NDAs shall decay over a reasonable period of time. 

 

Incentives such as clemency will draw victims of blackmail to come forward and expose their blackmailers.

 


I'm not sure how to make that one any clearer than the wording itself. 

Again, when I first proposed these reforms, years ago, people shrugged with "Why would we need that?"

But now? Can anything make the case for these acts better than the news that we see every... single... day?

The next and final one (for today) makes a good partner to the Truth & Reconciliation Act.


THE IMMUNITY LIMITATION ACT: The Supreme Court has ruled that presidents should be free to do their jobs without undue distraction by legal procedures and jeopardies. Taking that into account, we shall nevertheless – by legislation – firmly reject the artificial and made-up notion of blanket Presidential Immunity or that presidents are inherently above the law. 

 

Instead, the Inspector General of the United States (IGUS) shall supervise legal cases that are brought against the president, so that they may be handled by the president’s chosen counsel in order of importance or severity, in such a way that the sum of all such external legal matters will take up no more than ten hours a week of a president’s time. While this may slow such processes, the wheels of law will not be fully stopped. 

 

Civil or criminal cases against a serving president may be brought to trial by a simple majority consent of both houses of Congress, though no criminal or civil punishment may be exacted until after the president leaves office, either by end-of-term or impeachment and Senate conviction.

Again, could anything be more clear? And so, why have we not seen these two enacted yet? Because of flawed assumptions!  Like assuming that nothing can be done about corrupt presidential pardons. Or that NDAs are forever. Or that nothing can be done about the Supreme Court's declaration of Presidential Immunity.

But the Court - suborned as its current majority may be - felt it necessary to issue that ruling based on a rationalization! That the elected chief executive must do the job without undue harassment by legal vexations. Indeed, this bill would solve that! Only without creating a wholly new and wholly loathesome notion of presidential immunity above all law!

Just like the Roberts Rationalization for excusing gerrymandering, this immunity justification can be logically bypassed. Please do ponder how.

Oh but I suddenly realized... we need to add one more paragraph to that bill! 

One that deals with something that came up recently. Might a president evade impeachment merely by shooting enough House members to prevent a majority from acting to impeach him? 

Trump's own attorney argued that he could! And that he would be immune from prosecution for doing so Until he was actually impeached and convicted, which he just prevented via murder!

 This added paragraph attempts to seal off that insane possibility.


In the event that Congress is thwarted from acting on impeachment or trial, e.g. by some crime that prevents certain members from voting, their proxies may be voted in such matters by their party caucus, until their states complete election of replacements.


That may not fly past today's Court. But the declaration of intent will resonate, still, if we ever need it to. 


      == Add judo to the game plan to save America! ==

Can you honestly assert that ANY of these four would fail the "60%+ Rule?"  

The initial tranche of reforms should be ones that get sixty percent approval from polls or focus groups, so that they can pass quickly, clearing away the most vital things, building further support from a growing American majority. Saving the harder political fights for just a little later. 

That was the persuasive trick of Newt Gingrich's "Contract With America." A clever ruse, since he and his party later betrayed every promise that they offered in their Contract! Still, sticking to that rule made the Contract an ingenious sales pitch.

Democrats run a gamut, but they truly are generally different! As Pelosi, Schumer, Warren, AOC, Sanders et. al. proved in 2021, Democrats can act hard and fast, when they put their minds to it. 

So now, let's fill their minds with innovative and bold ideas! So that when the nation rises up against the current mad administration, we'll be ready for a genuine Miracle Year.

Next... four omore in Part 6!

Cryptogram Prompt Injection Via Road Signs

Interesting research: “CHAI: Command Hijacking Against Embodied AI.”

Abstract: Embodied Artificial Intelligence (AI) promises to handle edge cases in robotic vehicle systems where data is scarce by using common-sense reasoning grounded in perception and action to generalize beyond training distributions and adapt to novel real-world situations. These capabilities, however, also create new security risks. In this paper, we introduce CHAI (Command Hijacking against embodied AI), a new class of prompt-based attacks that exploit the multimodal language interpretation abilities of Large Visual-Language Models (LVLMs). CHAI embeds deceptive natural language instructions, such as misleading signs, in visual input, systematically searches the token space, builds a dictionary of prompts, and guides an attacker model to generate Visual Attack Prompts. We evaluate CHAI on four LVLM agents; drone emergency landing, autonomous driving, and aerial object tracking, and on a real robotic vehicle. Our experiments show that CHAI consistently outperforms state-of-the-art attacks. By exploiting the semantic and multimodal reasoning strengths of next-generation embodied AI systems, CHAI underscores the urgent need for defenses that extend beyond traditional adversarial robustness.

News article.

Cory DoctorowAll laws are local

A pair of broken off statue legs, shod in Roman sandals, atop a cliff. Behind them, we see a futuristic city.

This week on my podcast, I read All laws are local a recent post from my Pluralistic.net blog, about the ephemerality of our seeming eternal verities.


In other words, things that seem eternal and innate to the human condition to you are apt to have been invented ten minutes before you started to notice the world around you and might seem utterly alien to your children. As Douglas Adams put it:

Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re thirty-five is against the natural order of things.

MP3

Worse Than FailureCodeSOD: Invalid Passport

Gretchen wanted to, in development, disable password authentication. Just for a minute, while she was testing things. That's when she found this approach to handling authentication.

passport.authenticate('local', { session: true }, async (err, user) => {
  if (err) {
    res.send({ success: false, message: 'Error authenticating user.' })
  } else if (!user) {
    User.query()
      .where({ username: req.body.username })
      .first()
      .then(targetUser => {
        if (targetUser) {
          const hash = User.hashPassword(
            targetUser.password_salt,
            req.body.password
          )
          if (hash === targetUser.password_hash) {
            res.send({
              success: false,
              message: 'Incorrect username or password.',
            })
          } else {
            res.send({
              success: false,
              message: 'Incorrect username or password.',
            })
          }
        } else {
          res.send({
            success: false,
            message: 'Incorrect username or password.',
          })
        }
      })
      .catch(err => {
        res.send({ success: false, message: 'Internal server error' })
      })
  } else if (user.firstLogin) {
//......
  }
})(req, res, next);

passport.authenticate invokes its callback after attempting to authenticate. Now, normally, this is called as middleware on a route defined on the webserver- that is to say, you don't call it from within your application code, but as part of your routing configuration. That's not the case here, where this blob is inside of a controller.

That's weird, but let's just trace through this code. We attempt to authenticate. When the process completes, our callback function executes. If authentication failed, there's an error, so we'll send an error message. Then, if the user object isn't populated, we attempt to look up the user. If we find a user with that user name, we then hash their password and check if the hash matches. If it does, we send an error message. If it doesn't, we send an error message. If we didn't find the user, we send an error message. If anything goes wrong, we send an error message.

Wait a second, back up: if the user exists and their password matches, we send an error message?

I'll let Gretchen explain a bit more:

passport.authenticate returns an error if the authentication failed and a user object, if it succeeded. We check this immediately: if error is set, return an error message. But then, we check if the user does not exist (aka: the authentication failed).

Yes, the reason user would be null is because the authentication failed. So the error would be set. So that entire branch about validating the user won't happen: either the authentication worked and we know who the user is, or it failed, in which case we'd have an error. There's no reasonable world where there's no error but also no user object.

So yes, if authentication failed, but you manually re-run the authentication and it succeeds for some reason, yeah, you probably should still return an error. But I don't know if it's "Incorrect username or password". It probably should be "Invalid reality, please restart the universe and see if the problem persists."

[Advertisement] Plan Your .NET 9 Migration with Confidence
Your journey to .NET 9 is more than just one decision.Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

365 TomorrowsHead Assistant

Author: Julian Miles, Staff Writer “The world is run by a self-protecting hierarchy of ruthless murderers who make sure to change their public-facing members regularly so those being controlled think they have choices. It’s miserable, laughable, vindictive, and effective.” I put the screwdriver down and look at the bodiless, partially disassembled head on the workbench […]

The post Head Assistant appeared first on 365tomorrows.

,

365 Tomorrows…And Back Again

Author: Roman Colangelo I’ve been thinking about quitting. I’ve been thinking about spending the rest of my life with you. The ship warped us to the crest of the Andromeda. They told me that they had found the face of God, asked me if I wanted a piece of it. We saw the galaxy illuminated […]

The post …And Back Again appeared first on 365tomorrows.