Planet Russell

Charles Stross | More in Sadness than in Anger

Sorry I haven't updated the blog for a while: I've been busy. (Writing the final draft of a new novel entirely unconnected to anything else you've read—space opera, new setting, longest thing I've written aside from the big Merchant Princes doorsteps. Now in my agent's inbox while I make notes towards a sequel, if requested.)

Over the past few years I've been naively assuming that while we're ruled by a ruthless kleptocracy, they're not completely evil: aristocracies tend to run on self-interest and try to leave a legacy to their children, which usually means leaving enough peasants around to mow the lawn, wash the dishes, and work the fields.

But my faith in the sanity of the evil overlords has been badly shaken in the past couple of months by the steady drip of WTFery coming out of the USA in general and the Epstein Files in particular, and now there's this somewhat obscure aside, that rips the mask off entirely (Original email on DoJ website ) ...

A document released by the U.S. Department of Justice as part of the Epstein files contains a quote attributed to correspondence involving Jeffrey Epstein that references Bill Gates and a controversial question about "how do we get rid of poor people as a whole."

The passage appears in a written communication included in the DOJ document trove and reads, in part: "I've been thinking a lot about that question that you asked Bill Gates, 'how do we get rid of poor people as a whole,' and I have an answer/comment regarding that for you." The writer then asks to schedule a phone call to discuss the matter further.

As an editor of mine once observed, America is ruled by two political parties: the party of the evil billionaires, and the party of the sane (so slightly less evil) billionaires. Evil billionaires: "let's kill the poor and take all their stuff." Sane billionaires: "hang on, if we kill them all who's going to cook dinner and clean the pool?"

And this seemed plausible ... before it turned out that the CEO class as a whole believe entirely in AI (which, to be clear, is just another marketing grift in the same spirit as cryptocurrencies/blockchain, next-generation nuclear power, real-estate-backed credit default swaps, and Dutch tulip bulbs). AI is being sold on the promise of increasing workforce efficiency. And in a world which has studiously ignored John Maynard Keynes' 1930 prediction that by 2030 we would only need to work a 15-hour week, they've drawn an inevitable, unwelcome conclusion from this axiom: that there are too many of us. For the past 75 years they've been so focussed on optimizing for efficiency that they no longer understand that efficiency and resilience are inversely related: in order to survive collectively through an energy transition and a time of climate destabilization we need extra capacity, not "right-sized" capacity.

Raise the death rate by removing herd immunity to childhood diseases? That's entirely consistent with "kill the poor". Mass deportation of anyone with the wrong skin colour? The white supremacists will join in enthusiastically, and meanwhile: the deported can die out of sight. Turn disused data centres or amazon warehouses into concentration camps (which are notorious disease breeding grounds)? It's a no-brainer. Start lots of small overseas brushfire wars, escalating to the sort of genocide now being piloted in Gaza by Trump's ally Netanyahu (to emphasize: his strain of Judaism can only be understood as a Jewish expression of white nationalism, throwing off its polite political mask to reveal the death's head of totalitarianism underneath)? It's all part of the program.

Our rulers have gone collectively insane (over a period of decades) and they want to kill us.

The class war has turned hot. And we're all on the losing side.

Planet Debian | Jonathan Dowland: debian swirl font glyph

When I wrote about the redhat logo in a shell prompt, a commenter said it would be nice to achieve something similar for Debian, and suggested "🍥" (U+1F365 FISH CAKE WITH SWIRL DESIGN) which, in some renderings, looks to have a red swirl on top. This is not bad, but I thought we could do better.

On Apple systems, the character "" (U+F8FF) displays as the corporate Apple logo. That particular unicode code point is reserved: systems are free to use it for something private and internal, but other systems won't use it for the same thing. So if an Apple user tries to send a document with that character in it to someone else, they won't see the Apple unless they are also viewing it on an Apple computer. (Some folks use it for Klingon).

Here's a font that maps the Debian swirl to the same code point. It's covered by the Debian logo license terms.

Nerd Font maps the Debian swirl logo to codepoints e77d, f306, ebc5 and f08da (all of which are also in the Private Use Area). I've gone ahead and mapped it to all those points but the last one (simply because I couldn't find it in FontForge).

Note that, unless your recipients have this font, or the Nerd Font, or something similar set up, they aren't going to see the swirl. But enjoy it for private use. Getting your system to actually use the font is, I'm afraid, left as an exercise for the reader (but feel free to leave comments).
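For the exercise above, one minimal starting point is a per-user fontconfig rule. This is only a sketch: it assumes the font installs under the family name "Debian Swirl" (check what fc-list actually reports for your copy), and it goes in ~/.config/fontconfig/fonts.conf:

```xml
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<!-- Append the swirl font as a weak fallback, so characters the
     preferred monospace font lacks (like U+F8FF) come from it. -->
<fontconfig>
  <match target="pattern">
    <test name="family"><string>monospace</string></test>
    <edit name="family" mode="append" binding="weak">
      <string>Debian Swirl</string>
    </edit>
  </match>
</fontconfig>
```

After saving, run fc-cache -f and restart your terminal; terminals that honour fontconfig fallback should then pick up the swirl glyph for the mapped code points.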

Thanks to mirabilos for chatting to me about this back in 2019. It's taken me that long to get this blog post out of draft!

Planet Debian | Dirk Eddelbuettel: RcppCNPy 0.2.15 on CRAN: Maintenance

Another maintenance release of the RcppCNPy package arrived on CRAN today, and has already been built as an r2u binary. RcppCNPy provides R with read and write access to NumPy files thanks to the cnpy library by Carl Rogers along with Rcpp for the glue to R.

The changes are minor and similar to other recent changes. We aid Rcpp in the transition away from calling Rf_error() by relying on Rcpp::stop(), which behaves better and unwinds properly when errors or exceptions are encountered. So once again there are no user-facing changes. Full details are below.

Changes in version 0.2.15 (2026-03-13)

  • Replaced Rf_error with Rcpp::stop in three files

  • Maintenance updates to continuous integration

CRANberries also provides a diffstat report for the latest release. As always, feedback is welcome and the best place to start a discussion may be the GitHub issue tickets page.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Charles Stross | Webtoons revisited

It's been years and years since I last went trawling for webcomics worth reading, so it's time for an update: obviously online search is pretty much useless, but we ought to be able to crowdsource something here.

I keep a separate browser window for webcomics; here's a selection of my currently-open tabs, excluding syndicated stuff that shows up in newspapers. (So no "This Modern World" or "The Far Side".) What am I ignoring? Preferably new in the past decade, which rules out old-timers like "Digger" or "Girl Genius" (arguably I should have omitted QC and xkcd too, but they're favourites of mine).

Questionable Content has been first on my daily reading list for a long time ... almost 20 years? It's Jeff Jacques' "internet comic strip about friendship, romance, and robots ... set in the present day and pretty much the same as our own except there are robots all over the place and giant space stations." And more plot threads than I can possibly summarize, given that it's a sprawling soap opera unfolding at roughly 250 strips per year.

Saturday Morning Breakfast Cereal which, despite the name, comes out almost every day, is the antithesis of QC: every daily strip is a standalone, and it has an alarming tendency to lob philosophical hand grenades at entire fields of scientific endeavour. By Zack Weinersmith, who's also written some good books.

xkcd is the third classic, by sometime NASA robot guy Randall Munroe; like SMBC it tends to focus on the sciences, with a distinctly whimsical take on things. Should need no introduction, but if you don't already know, it's where those stick figure science comics come from ...

Kill Six Billion Demons is less a single-strip-at-a-time webcomic and more an episodic graphic novel: KSBD is distinctly Japanese/Hindu/Chinese/Hellish in tone: it seems to follow the travails of an American female student called Alison who winds up in hell, befriends demons, gets caught up in a holy war to end the universe, and ascends towards godhood, but that's kind of selling it short. Come for the amazing artwork, stay for the batshit theology. By Abbadon.

Pepper & Carrot by David Revoy is thematically the exact antithesis of KSBD: P&C is set in a very kitsch, cozy, D&D style generic fantasy world. Pepper is a young and less-than-competent student of witchcraft, and Carrot is her one-brain-cell ginger cat (and hapless familiar): they get in trouble a lot. (Spin-offs: if you want to dip in to a one-shot rather than a serial, there's Mini-Fantasy Theatre--same character but every story is self-contained.)

Runaway to the Stars is an extremely crunchy hard SF slice-of-life serial by Jay Eaton, following Talita (a centaur-oid alien fostered by humans) and her friends. Did I say "crunchy"? The world-building is extreme. (And you'll never think catgirls are sexy again!)

Phobos and Deimos is a differently-crunchy solarpunk story about a girl from Mars who, exiled by an invasion, ends up as a refugee on Earth, where she has to make a new life for herself and grapple with the culture shock of attending high school in Antarctica as a 'fugee.

RuriDragon is an online manga set in a Japanese high school, following student Ruri Aoki, who wakes up one day and notices horns have started growing from her head. When she asks her mother about it, mum confesses that her father was a dragon ... RuriDragon was serialized in Weekly Shōnen Jump magazine in 2022; this is an unofficial fan translation. (It follows Japanese formatting conventions, so read it from the top down and right-to-left or the dialog won't make much sense.)

SideQuested by AlePresser & K.B. Spangler is a web serial/graphic novel in progress set in a slightly less generic fantasy realm than Pepper & Carrot (this one shows some signs of Xianxia/cultivation influences). It focusses on the adventures of an extremely sensible level-headed librarian-in-training girl named Charlie, who clearly has absolutely no magical abilities whatsoever--until one day her absentee father turns up with some unexpected news: he's the King's Champion, her mother is a foreign princess, and she's needed at Court because the King's head-in-the-clouds son Prince Leopold is being a problem and her father needs her to sort him out in a hurry ...

Eldritch Darling: Nothing to see here, just your usual webcomic about an eldritch horror from beyond spacetime who falls in love with a lesbian. H. P. Lovecraft would not approve!

Unspeakable Vault of Doom is an irregular series of extremely goofy web strips that H. P. Lovecraft would definitely disapprove of, not least because he occasionally features in it, along with his more notorious creations!

Finally, two from the cheesecake dimension:

Oglaf is almost invariably NSFW, rude, and very, very funny. Weekly, started out 20 years ago as an attempt to do bad D&D porn then kind of wandered off topic, and these days there's only about an 80% probability that any given weekly strip will include explicit sex scenes, stabbings, or jokes.

Grrl Power (Caution: author has a severe male gaze problem) As the "about" page says: A comic about super heroines. Well there are guys too but mostly it's about the girls. Doing the things that super powered girls do. Fighting crime, saving the world, dating, shopping, etc. There are also explosions, cheesecake, beefcake, heroes and villains, angels and demons, cyborgs, probably ninjas, and definitely aliens. Lots and lots of aliens. Some of whom are only visiting Earth as sex tourists ...

And that's my round-up!

Your turn: what webcomics do you frequent that aren't on this list?

Planet Debian | Sven Hoexter: container image with ECH enabled curl

As an opportunity to rewire my brain from "docker" to "podman" and "buildah", I started to create an image built with an ECH-enabled curl at https://gitlab.com/hoexter-experiments/ech.

Not sure if it helps anyone, but setup should be like this:

git clone https://gitlab.com/hoexter-experiments/ech
cd ech
buildah build --layers -f Dockerfile -t echtest
podman run -ti echtest /usr/local/bin/curl \
  --ech true --doh-url https://one.one.one.one/dns-query \
  https://crypto.cloudflare.com/cdn-cgi/trace.cgi
fl=48f121
h=crypto.cloudflare.com
ip=2.205.251.187
ts=1773410985.168
visit_scheme=https
uag=curl/8.19.0
colo=DUS
sliver=none
http=http/2
loc=DE
tls=TLSv1.3
sni=encrypted
warp=off
gateway=off
rbi=off
kex=X25519

It also builds nginx and you can use that for a local test within the image. More details in the README.

Planet Debian | Hellen Chemtai: One Week After the Outreachy Internship: Managing Work-Life Balance

Hello world. I have been doing a lot since my internship with Outreachy. We are still working on some tasks:

  1. I am working on running locales for my native language in live images.
  2. I am also working on points to add to talk proposals for a Debian conference.

As I move around constantly, I ran into problems when changing networks: I would connect my virtual machine to a different network, and the change would not be reflected inside the machine. From a terminal I edited the virtual machine's XML settings:

su -
# enter the root password
virsh edit <machine_name>   # it's "openqa" for me
# Look for the <interface> element within <devices> and replace this:
<interface type='network'>
        <source network='default'/>
        <!-- some other configuration here -->
</interface>
# with just this, then restart your machine:
<interface type='user'>
    <model type='virtio'/>
</interface>

Hopefully the above will help someone out there. I am still working on a lot of tasks regarding the conference; so much to do and so little time. I am hoping I won’t get any burnout during this period. I won’t be updating much further till the conference. Have a nice time!

Worse Than Failure | Error'd: @#$%^!!

Here's a weird email, but IMO the error is just the odd strikethrough. Bill T. explains: "From my Comcast email spam folder. It was smart enough to detect it was spam, but... spam from a trusted sender? And either the delivery truck is an emoji (possible), an embedded image (maybe?), or Comcast is not actually blocking external images." I'd like to see the actual email; could you forward it to us? My guess is that we're seeing a rare embedded image. Since embedding images was the whole point of MIME in the first place, I have found it odd that they're so hard to construct with typical marketing mass mailers, and I almost never receive them.

The WTFs are heating up for Peter G. Or cooling off. It's one or the other. "Fiji seems to be experiencing a run of temperature inversions. Must be something to do with climate change."

Back with a followup, dragoncoder047 has a plan to rule the world. "I was looking up some closed-loop stepper motors for a robotics project when StepperOnline gave me this error message. Evidently they don't think my project is a good idea."

"My %@ package is missing!" ranted Orion S. "After spending the day restoring my system, I can offer alternatives such as the "@&*% you!" package."

Soon-to-be journalist Marc Würth buries the lede: "Not really looking for a job but that is certainly a rare opening." Okay, but what I really want to know is what that Slashdot article is about. Do I even have a Slashdot account still? Why, yes I do.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 Tomorrows | Float

Author: Cecilia Kae I woke early yesterday to catch the last glimpse of the island. It took twenty minutes getting to the pier. I wanted to be there before it got crowded but it already was. Most were there because it was the first time Mantasia, our neighbouring country, could be seen up close. From […]

The post Float appeared first on 365tomorrows.

xkcd | Planets and Bright Stars

Planet Debian | Reproducible Builds (diffoscope): diffoscope 314 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 314. This version includes the following changes:

[ Chris Lamb ]
* Don't run "test_code_is_black_clean" test in autopkgtests.
  (Closes: #1130402)

[ Michael R. Crusoe ]
* Reformat using Black 26.1.0. (Closes: #1130073)

You can find out more by visiting the project homepage.

Planet Debian | Reproducible Builds: Reproducible Builds in February 2026

Welcome to the February 2026 report from the Reproducible Builds project!

These reports outline what we’ve been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

  1. reproduce.debian.net
  2. Tool development
  3. Distribution work
  4. Miscellaneous news
  5. Upstream patches
  6. Documentation updates
  7. Four new academic papers

reproduce.debian.net

The last year has seen the introduction, development and deployment of reproduce.debian.net. In technical terms, this is an instance of rebuilderd, our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there.

This month, however, Holger Levsen added suite-based navigation (e.g. Debian trixie vs forky) to the service (in addition to the already-existing architecture-based navigation), which can be observed on, for instance, the Debian trixie-backports or trixie-security pages.


Tool development

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes, including preparing and uploading versions 312 and 313 to Debian.

In particular, Chris updated the post-release deployment pipeline to ensure that it does not fail if the automatic deployment to PyPI fails []. In addition, Vagrant Cascadian updated an external reference for the 7z tool for GNU Guix [], and updated diffoscope in GNU Guix to versions 312 and 313.


Distribution work

In Debian this month:

  • 26 reviews of Debian packages were added, 5 were updated and 19 were removed this month adding to our extensive knowledge about identified issues.

  • A new debsbom package was uploaded to unstable. According to the package description, this package “generates SBOMs (Software Bill of Materials) for distributions based on Debian in the two standard formats, SPDX and CycloneDX. The generated SBOM includes all installed binary packages and also contains Debian Source packages.”

  • In addition, a sbom-toolkit package was uploaded, which “provides a collection of scripts for generating SBOMs. This is the tooling used in Apertis to generate the Licenses SBOM and the Build Dependency SBOM.” It also includes dh-setup-copyright, a Debhelper addon that generates SBOMs “extracted from DWARF debug information by running dwarf2sources on every ELF binary in the package and saving the output.”

Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.


Miscellaneous news


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Documentation updates

Once again, there were a number of improvements made to our website this month including:


Four new academic papers

Julien Malka and Arnout Engelen published a paper titled Lila: Decentralized Build Reproducibility Monitoring for the Functional Package Management Model:

[While] recent studies have shown that high reproducibility rates are achievable at scale — demonstrated by the Nix ecosystem achieving over 90% reproducibility on more than 80,000 packages — the problem of effective reproducibility monitoring remains largely unsolved. In this work, we address the reproducibility monitoring challenge by introducing Lila, a decentralized system for reproducibility assessment tailored to the functional package management model. Lila enables distributed reporting of build results and aggregation into a reproducibility database […].

A PDF of their paper is available online.


Javier Ron and Martin Monperrus of KTH Royal Institute of Technology, Sweden, also published a paper, titled Verifiable Provenance of Software Artifacts with Zero-Knowledge Compilation:

Verifying that a compiled binary originates from its claimed source code is a fundamental security requirement, called source code provenance. Achieving verifiable source code provenance in practice remains challenging. The most popular technique, called reproducible builds, requires difficult matching and reexecution of build toolchains and environments. We propose a novel approach to verifiable provenance based on compiling software with zero-knowledge virtual machines (zkVMs). By executing a compiler within a zkVM, our system produces both the compiled output and a cryptographic proof attesting that the compilation was performed on the claimed source code with the claimed compiler. […]

A PDF of the paper is available online.


Oreofe Solarin of Department of Computer and Data Sciences, Case Western Reserve University, Cleveland, Ohio, USA, published It’s Not Just Timestamps: A Study on Docker Reproducibility:

Reproducible container builds promise a simple integrity check for software supply chains: rebuild an image from its Dockerfile and compare hashes. We built a Docker measurement pipeline and apply it to a stratified sample of 2,000 GitHub repositories that contained a Dockerfile. We found that only 56% produce any buildable image, and just 2.7% of those are bitwise reproducible without any infrastructure configurations. After modifying infrastructure configurations, we raise bitwise reproducibility by 18.6%, but 78.7% of buildable Dockerfiles remain non-reproducible.

A PDF of Oreofe’s paper is available online.


Lastly, Jens Dietrich and Behnaz Hassanshahi published On the Variability of Source Code in Maven Package Rebuilds:

[In] this paper we test the assumption that the same source code is being used [by] alternative builds. To study this, we compare the sources released with packages on Maven Central, with the sources associated with independently built packages from Google’s Assured Open Source and Oracle’s Build-from-Source projects. […]

A PDF of their paper is available online.



Finally, if you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. You can also get in touch with us via:

Planet Debian | Dirk Eddelbuettel: RcppBDT 0.2.8 on CRAN: Maintenance

Another minor maintenance release for the RcppBDT package is now on CRAN, and has already been built as an r2u binary.

The RcppBDT package is an early adopter of Rcpp and was one of the first packages utilizing Boost and its Date_Time library. The now more widely-used package anytime is a direct descendant of RcppBDT.

This release is again primarily maintenance. We aid Rcpp in the transition away from calling Rf_error() by relying on Rcpp::stop(), which behaves better and unwinds properly when errors or exceptions are encountered. No feature or interface changes.

The NEWS entry follows:

Changes in version 0.2.8 (2026-03-12)

  • Replaced Rf_error with Rcpp::stop in three files

  • Maintenance updates to continuous integration

Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Planet Debian | Mike Gabriel: Debian Lomiri Tablets 2025-2027 - Project Report (Q4/2025)

On 25th Oct 2025, I announced via my personal blog and on Mastodon that Fre(i)e Software GmbH was hiring. The hiring process was a mix of asking developers I know and waiting for new people to apply.

In early-to-mid November 2025, we started with 13 developers (all part-time) working on various topics around Lomiri (upstream and downstream). Note that the achievements below don't document the overall activity in the Lomiri project, but only the part that our team at Fre(i)e Software GmbH contributed.

Organizational Achievements

  • Setup management board for Qt6 migration in Lomiri [1]
  • Setup management board for salsa2ubports package syncing [2]
  • Bootstrap Qt 6.8 in UBports APT repository
  • Bootstrap Qt 6.8 in Lomiri PPA
  • Fix Salsa CI for all Lomiri-related Debian packages
  • Facilitate contributor's project around XDG Desktop Portal support for Lomiri.
  • Plan how to bring DeltaTouch and DeltaChat core to Debian

Maintenance Development

  • Replace libofono-qt by libqofono in telepathy-ofono
  • Rework unit tests in telepathy-ofono utilizing ofono-phonesim
  • Obsolete not-used-anymore u1db-qt
  • Fixing wrong bin:pkg names regarding snapd-glib's QML module

Qt6 Porting

  • qmake -> CMake porting (if needed) and Qt6 porting of shared libraries and QML modules consumed by Lomiri shell and Lomiri apps:
    • biometryd
    • libqofono
    • libqofonoext
    • libqtdbusmock
    • lomiri-account-polld
    • lomiri-action-api
    • lomiri-api
    • lomiri-download-manager
    • lomiri-location-service
    • lomiri-online-accounts
    • lomiri-push-qml
    • lomiri-push-service
    • maliit-framework
    • mediascanner2
    • qtlomiri-appmenutheme
    • qtpim (started, work in progress)
    • qwebdavlib
    • signond (flaws spotted in Debian's porting of signond to Qt6)

Feature Development

  • Continuing with Morph Browser Qt6 / LUITK
    • Build, run and fix LUITK unit tests for Qt6
    • various bug fixes and improvements for Morph Qt6
  • Add mbim modem support to ofono upstream
  • Improve ofono support in Network Manager
  • Improve mbim modem support in lomiri-indicator-network
  • Package kazv (convergent Matrix client) and dependencies for Debian
  • Provide Lomiri images for Mobian

Research

  • Research on fuse-based caching Webdav client for lomiri-cloudsync-app.
  • Research on alternative ORM instead of QDjango in libusermetrics

[1] https://gitlab.com/groups/ubports/development/-/boards/9895029?label_name%5B%5D=Topic%3A%20Qt%206
[2] https://gitlab.com/groups/ubports/development/-/boards/10037876?label_name[]=Topic%3A%20salsa2ubports%20DEB%20syncing

Worse Than Failure | CodeSOD: Awaiting A Reaction

Today's Anonymous submitter sends us some React code. We'll look at the code and then talk about the WTF:

// inside a function for updating checkboxes on a page
if (!e.target.checked) {
  const removeIndex = await checkedlist.findIndex(
    (sel) => sel.Id == selected.Id,
  )
  const removeRowIndex = await RowValue.findIndex(
    (sel) => sel == Index,
  )

// checkedlist and RowValue are both useState instances.... they should never be modified directly
  await checkedlist.splice(removeIndex, 1)
  await RowValue.splice(removeRowIndex, 1)

// so instead of doing above logic in the set state, they dont
  setCheckedlist(checkedlist)
  setRow(RowValue)
} else {
  if (checkedlist.findIndex((sel) => sel.Id == selected.Id) == -1) {
    await checkedlist.push(selected)
  }
// same, instead of just doing a set state call, we do awaits and self updates
  await RowValue.push(Index)
  setCheckedlist(checkedlist)
  setRow(RowValue)
}

Comments were added by our submitter.

This code works. But it's the wrong approach for React: it directly mutates objects managed by React state instead of going through the provided setters, and it awaits synchronous calls like findIndex, splice, and push. Without the broader context, it's hard to point out all the other ways to do this, but honestly, that's not the interesting part.
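For contrast, the conventional React pattern (the one the comments allude to) derives new arrays instead of mutating state in place, then hands them to the setters. The helpers below are a hypothetical sketch, not the page's actual code; note one behavioural difference: filter removes every matching entry, whereas the original splice removed only the first (for unique Ids the two agree).

```javascript
// Hypothetical helpers sketching immutable updates for the checkbox state.
// Names mirror the snippet above; none of this is the original page's code.
function removeSelection(checkedlist, rowValue, selected, index) {
  return {
    // filter() returns new arrays, leaving the state arrays untouched.
    checkedlist: checkedlist.filter((sel) => sel.Id !== selected.Id),
    rowValue: rowValue.filter((v) => v !== index),
  };
}

function addSelection(checkedlist, rowValue, selected, index) {
  const alreadyChecked = checkedlist.some((sel) => sel.Id === selected.Id);
  return {
    // Spread into a fresh array instead of push()ing in place.
    checkedlist: alreadyChecked ? checkedlist : [...checkedlist, selected],
    rowValue: [...rowValue, index],
  };
}
```

In the handler one would then pass the returned arrays straight to setCheckedlist and setRow, with no await anywhere: none of these calls are asynchronous.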

I'll let our submitter explain:

This code is black magic, because if I update it, it breaks everything. Somehow, this is working in perfect tandem with the rest of the horrible page, but if I clean it up, it breaks the checkboxes; they're no longer able to be clicked. It's forcing React somehow to update asynchronously so it can use these updated values correctly, but that's the neat part, they aren't even being used anywhere else, but somehow the re-rendering page only accepts awaits. I've tried refactoring it 5 different ways to no avail.

That's what makes truly bad code. Code so bad that you can't even fix it without breaking a thousand other things. Code that you have to carefully, slowly pick through and gently refactor, discovering all sorts of hidden side-effects. Code so bad that you actually have to live with it, at least for a while.

Krebs on Security | Iran-Backed Hackers Claim Wiper Attack on Medtech Firm Stryker

A hacktivist group with links to Iran’s intelligence agencies is claiming responsibility for a data-wiping attack against Stryker, a global medical technology company based in Michigan. News reports out of Ireland, Stryker’s largest hub outside of the United States, said the company sent home more than 5,000 workers there today. Meanwhile, a voicemail message at Stryker’s main U.S. headquarters says the company is currently experiencing a building emergency.

Based in Kalamazoo, Michigan, Stryker [NYSE:SYK] is a medical and surgical equipment maker that reported $25 billion in global sales last year. In a lengthy statement posted to Telegram, a hacktivist group known as Handala (a.k.a. Handala Hack Team) claimed that Stryker’s offices in 79 countries have been forced to shut down after the group erased data from more than 200,000 systems, servers and mobile devices.

A manifesto posted by the Iran-backed hacktivist group Handala, claiming a mass data-wiping attack against medical technology maker Stryker.


“All the acquired data is now in the hands of the free people of the world, ready to be used for the true advancement of humanity and the exposure of injustice and corruption,” a portion of the Handala statement reads.

The group said the wiper attack was in retaliation for a Feb. 28 missile strike that hit an Iranian school and killed at least 175 people, most of them children. The New York Times reports today that an ongoing military investigation has determined the United States is responsible for the deadly Tomahawk missile strike.

Handala was one of several hacker groups recently profiled by Palo Alto Networks, which links it to Iran’s Ministry of Intelligence and Security (MOIS). Palo Alto says Handala surfaced in late 2023 and is assessed as one of several online personas maintained by Void Manticore, a MOIS-affiliated actor.

Stryker’s website says the company has 56,000 employees in 61 countries. A phone call placed Wednesday morning to the media line at Stryker’s Michigan headquarters sent this author to a voicemail message that stated, “We are currently experiencing a building emergency. Please try your call again later.”

A report Wednesday morning from the Irish Examiner said Stryker staff are now communicating via WhatsApp for any updates on when they can return to work. The story quoted an unnamed employee saying anything connected to the network is down, and that “anyone with Microsoft Outlook on their personal phones had their devices wiped.”

“Multiple sources have said that systems in the Cork headquarters have been ‘shut down’ and that Stryker devices held by employees have been wiped out,” the Examiner reported. “The login pages coming up on these devices have been defaced with the Handala logo.”

Wiper attacks usually involve malicious software designed to overwrite any existing data on infected devices. But a trusted source with knowledge of the attack who spoke on condition of anonymity told KrebsOnSecurity the perpetrators in this case appear to have used a Microsoft service called Microsoft Intune to issue a ‘remote wipe’ command against all connected devices.

Intune is a cloud-based solution built for IT teams to enforce security and data compliance policies, and it provides a single, web-based administrative console to monitor and control devices regardless of location. The Intune connection is supported by this Reddit discussion on the Stryker outage, where several users who claimed to be Stryker employees said they were told to uninstall Intune urgently.
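To make the mechanism concrete: Intune exposes remote wipe as an action on a managed device in Microsoft Graph. The sketch below is illustrative only — the device ID and token are placeholders, and the request is built but deliberately never sent — but it shows how a single cloud-issued command per device is all a wipe takes once an attacker controls the tenant.

```python
from urllib.request import Request

GRAPH = "https://graph.microsoft.com/v1.0"

def build_wipe_request(device_id: str, token: str) -> Request:
    """Build (but never send) the Graph call that wipes one managed device."""
    url = f"{GRAPH}/deviceManagement/managedDevices/{device_id}/wipe"
    return Request(url, data=b"{}", method="POST",
                   headers={"Authorization": f"Bearer {token}",
                            "Content-Type": "application/json"})

# Placeholder values; a real call requires an Intune admin token.
req = build_wipe_request("00000000-0000-0000-0000-000000000000", "EXAMPLE-TOKEN")
print(req.get_method(), req.full_url)
```

An attacker with admin access could iterate this over every enrolled device, which is consistent with reports of 200,000 wiped systems.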

Palo Alto says Handala’s hack-and-leak activity is primarily focused on Israel, with occasional targeting outside that scope when it serves a specific agenda. The security firm said Handala also has taken credit for recent attacks against fuel systems in Jordan and an Israeli energy exploration company.

“Recent observed activities are opportunistic and ‘quick and dirty,’ with a noticeable focus on supply-chain footholds (e.g., IT/service providers) to reach downstream victims, followed by ‘proof’ posts to amplify credibility and intimidate targets,” Palo Alto researchers wrote.

The Handala manifesto posted to Telegram referred to Stryker as a “Zionist-rooted corporation,” which may be a reference to the company’s 2019 acquisition of the Israeli company OrthoSpace.

Stryker is a major supplier of medical devices, and the ongoing attack is already affecting healthcare providers. One healthcare professional at a major university medical system in the United States told KrebsOnSecurity they are currently unable to order surgical supplies that they normally source through Stryker.

“This is a real-world supply chain attack,” said the expert, who asked to remain anonymous because they were not authorized to speak to the press. “Pretty much every hospital in the U.S. that performs surgeries uses their supplies.”

John Riggi, national advisor for the American Hospital Association (AHA), said the AHA is not aware of any supply-chain disruptions as of yet.

“We are aware of reports of the cyber attack against Stryker and are actively exchanging information with the hospital field and the federal government to understand the nature of the threat and assess any impact to hospital operations,” Riggi said in an email. “As of this time, we are not aware of any direct impacts or disruptions to U.S. hospitals as a result of this attack. That may change as hospitals evaluate services, technology and supply chain related to Stryker and if the duration of the attack extends.”

According to a March 11 memo from the state of Maryland’s Institute for Emergency Medical Services Systems, Stryker indicated that some of their computer systems have been impacted by a “global network disruption.” The memo indicates that in response to the attack, a number of hospitals have opted to disconnect from Stryker’s various online services, including LifeNet, which allows paramedics to transmit EKGs to emergency physicians so that heart attack patients can expedite their treatment when they arrive at the hospital.

“As a precaution, some hospitals have temporarily suspended their connection to Stryker systems, including LIFENET, while others have maintained the connection,” wrote Timothy Chizmar, the state’s EMS medical director. “The Maryland Medical Protocols for EMS requires ECG transmission for patients with acute coronary syndrome (or STEMI). However, if you are unable to transmit a 12 Lead ECG to a receiving hospital, you should initiate radio consultation and describe the findings on the ECG.”

This is a developing story. Updates will be noted with a timestamp.

Update, 2:54 p.m. ET: Added comment from Riggi and perspectives on this attack’s potential to turn into a supply-chain problem for the healthcare system.

Update, Mar. 12, 7:59 a.m. ET: Added information about the outage affecting Stryker’s online services.

Planet DebianSven Hoexter: RFC 9849 - Encrypted Client Hello

Now that ECH is standardized I started to look into it to understand what's coming. While it's generally desirable not to leak the SNI information, I'm not sure if it will ever make it to the masses of (web)servers outside of big CDNs.

Besides the extension of the TLS protocol to carry an inner and an outer ClientHello, you also need (frequent) updates to your HTTPS/SVCB DNS records. The idea is to rotate the key quickly; the OpenSSL API documentation talks about hourly rotation. That means you have to have encrypted DNS in place (I guess these days DNS-over-HTTPS is the most common case), and you need to be able to distribute the private key between all involved hosts and update the DNS records in time. In addition you can also use a "shared mode" where you handle the outer ClientHello (the one using the public key from DNS) centrally and the inner ClientHello on your backend servers. I'm not yet sure if that makes it easier or even harder to get it right.

That all makes sense, and is feasible for setups like those at Cloudflare, where the common case is that they provide the NS servers for your domain and terminate your HTTPS connections. But for the average webserver setup I guess we will not see a huge adoption rate. Or we'll soon see something like a Caddy webserver on steroids which integrates a DNS server for DoH, with not only automatic certificate renewal built in but also automatic ECHConfig updates.

If you want to read up yourself here are my starting points:

RFC 9849 TLS Encrypted Client Hello

RFC 9848 Bootstrapping TLS Encrypted ClientHello with DNS Service Bindings

RFC 9934 Privacy-Enhanced Mail (PEM) File Format for Encrypted ClientHello (ECH)

OpenSSL 4.0 ECH APIs

curl ECH Support

nginx ECH Support

Cloudflare Good-bye ESNI, hello ECH!

If you're looking for a test endpoint, I see one hosted by Cloudflare:

$ dig +short IN HTTPS cloudflare-ech.com
1 . alpn="h3,h2" ipv4hint=104.18.10.118,104.18.11.118 ech=AEX+DQBBFQAgACDBFqmr34YRf/8Ymf+N5ZJCtNkLm3qnjylCCLZc8rUZcwAEAAEAAQASY2xvdWRmbGFyZS1lY2guY29tAAA= ipv6hint=2606:4700::6812:a76,2606:4700::6812:b76
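That ech= parameter is a base64-encoded ECHConfigList. As a rough illustration of what's inside — field layout per the ECH specification; this minimal parser reads only the first config and ignores the cipher suite details and extensions — you can pull out the version, config ID, KEM and public name in a few lines of Python:

```python
import base64
import struct

# The ech= value from the dig output above.
ECH_B64 = ("AEX+DQBBFQAgACDBFqmr34YRf/8Ymf+N5ZJCtNkLm3qnjylCCLZc8rUZcwAE"
           "AAEAAQASY2xvdWRmbGFyZS1lY2guY29tAAA=")

def parse_first_echconfig(b64: str) -> dict:
    """Parse the first ECHConfig of an ECHConfigList: 2-byte list length,
    then version, length, config_id, kem_id, public key, cipher suites,
    maximum_name_length and public_name."""
    raw = base64.b64decode(b64)
    version, _cfg_len, config_id, kem_id, pk_len = struct.unpack_from(">HHBHH", raw, 2)
    off = 11 + pk_len                       # skip the HPKE public key
    (cs_len,) = struct.unpack_from(">H", raw, off)
    off += 2 + cs_len                       # skip the cipher suite list
    name_len = raw[off + 1]                 # maximum_name_length, then public_name
    public_name = raw[off + 2: off + 2 + name_len].decode()
    return {"version": hex(version), "config_id": config_id,
            "kem_id": kem_id, "public_name": public_name}

print(parse_first_echconfig(ECH_B64))
```

For this record it reports version 0xfe0d, KEM 0x0020 (X25519) and public name cloudflare-ech.com — the name your outer ClientHello shows to passive observers.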

Planet DebianDirk Eddelbuettel: RcppDE 0.1.9 on CRAN: Maintenance

Another maintenance release of our RcppDE package arrived at CRAN, and has been built for r2u. RcppDE is a “port” of DEoptim, a package for derivative-free optimisation using differential evolution, from plain C to C++. By using RcppArmadillo the code became a lot shorter and more legible. Our other main contribution is to leverage some of the excellence we get for free from using Rcpp, in particular the ability to optimise user-supplied compiled objective functions which can make things a lot faster than repeatedly evaluating interpreted objective functions as DEoptim does (and which, in fairness, most other optimisers do too). The gains can be quite substantial.
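For readers unfamiliar with differential evolution: the algorithm keeps a population of candidate vectors and improves them by mutation with scaled difference vectors, crossover, and greedy selection. A minimal, illustrative pure-Python DE/rand/1/bin loop — not code from DEoptim or RcppDE; the parameter names mirror DE's usual NP, CR and F — might look like:

```python
import random

def differential_evolution(f, bounds, pop_size=20, cr=0.9, weight=0.8,
                           generations=200, seed=42):
    """Minimal DE/rand/1/bin: mutate with a scaled difference vector,
    apply binomial crossover, keep the trial only if it is no worse."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = [pop[a][k] + weight * (pop[b][k] - pop[c][k])
                     if (rng.random() < cr or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            trial_cost = f(trial)
            if trial_cost <= cost[i]:
                pop[i], cost[i] = trial, trial_cost
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

sphere = lambda x: sum(v * v for v in x)
best_x, best_f = differential_evolution(sphere, [(-5, 5)] * 2)
print(best_x, best_f)
```

Note that the objective f is evaluated pop_size × generations times, which is exactly why compiling the objective function (as RcppDE allows) pays off so handsomely.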

This release is again maintenance. We aid Rcpp in the transition away from calling Rf_error() by relying on Rcpp::stop(), which has better behaviour and unwinding when errors or exceptions are encountered. We also overhauled the references in the vignette, added an Armadillo version getter, and made the regular updates to continuous integration.

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppDE page, or the repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Worse Than FailureCodeSOD: All Docked Up

Aankhen has a peer who loves writing Python scripts to automate repetitive tasks. We'll call this person Ernest.

Ernest was pretty proud of some helpers he wrote to help him manage his Docker containers. For example, when he wanted to stop and remove all his running Docker containers, he wrote this script:

#!/usr/bin/env python
import subprocess

subprocess.run("docker kill $(docker ps -q)", shell=True)
subprocess.run("docker rm $(docker ps -a -q)", shell=True)

He aliased this script to docker-stop, so that with one command he could… run two.

"Ernest," Aankhen asked, "couldn't this just be a bash script?"

"I don't really know bash," Ernest replied. "If I just do it in bash, if the first command fails, the second command doesn't run."

Aankhen pointed out that you could make bash not do that, but Ernest replied: "Yeah, but I always forget to. This way, it handles errors!"

"It explicitly doesn't handle errors," Aankhen said.

"Exactly! I don't need to know when there are no containers to kill or remove."

"Okay, but why not use the Docker library for Python?"

"What, and make the software more complicated? This has no dependencies!"

Aankhen was left with a sinking feeling: Ernest was either the worst developer he was working with, or one of the best.

[Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.

365 TomorrowsHow Far Would You Go on a First Date?

Author: Alastair Millar Lemme tell you, time and cost are serious issues if you want to meet an Offworlder. Which was a problem, because I did: Earth girls are so narrow-minded. The Solar System just doesn’t exist to them. My life partner’s gotta have a wider outlook, you get? And Terra was a drag, all […]

The post How Far Would You Go on a First Date? appeared first on 365tomorrows.

xkcdSubduction Retrieval

Krebs on SecurityMicrosoft Patch Tuesday, March 2026 Edition

Microsoft Corp. today pushed security updates to fix at least 77 vulnerabilities in its Windows operating systems and other software. There are no pressing “zero-day” flaws this month (compared to February’s five zero-day treats), but as usual some patches may deserve more rapid attention from organizations using Windows. Here are a few highlights from this month’s Patch Tuesday.

Image: Shutterstock, @nwz.

Two of the bugs Microsoft patched today were publicly disclosed previously. CVE-2026-21262 is a weakness that allows an attacker to elevate their privileges on SQL Server 2016 and later editions.

“This isn’t just any elevation of privilege vulnerability, either; the advisory notes that an authorized attacker can elevate privileges to sysadmin over a network,” Rapid7’s Adam Barnett said. “The CVSS v3 base score of 8.8 is just below the threshold for critical severity, since low-level privileges are required. It would be a courageous defender who shrugged and deferred the patches for this one.”

The other publicly disclosed flaw is CVE-2026-26127, a vulnerability in applications running on .NET. Barnett said the immediate impact of exploitation is likely limited to denial of service by triggering a crash, with the potential for other types of attacks during a service reboot.

It would hardly be a proper Patch Tuesday without at least one critical Microsoft Office exploit, and this month doesn’t disappoint. CVE-2026-26113 and CVE-2026-26110 are both remote code execution flaws that can be triggered just by viewing a booby-trapped message in the Preview Pane.

Satnam Narang at Tenable notes that just over half (55%) of all Patch Tuesday CVEs this month are privilege escalation bugs, and of those, a half dozen were rated “exploitation more likely” — across Windows Graphics Component, Windows Accessibility Infrastructure, Windows Kernel, Windows SMB Server and Winlogon. These include:

CVE-2026-24291: Incorrect permission assignments within the Windows Accessibility Infrastructure to reach SYSTEM (CVSS 7.8)
CVE-2026-24294: Improper authentication in the core SMB component (CVSS 7.8)
CVE-2026-24289: High-severity memory corruption and race condition flaw (CVSS 7.8)
CVE-2026-25187: Winlogon process weakness discovered by Google Project Zero (CVSS 7.8).

Ben McCarthy, lead cyber security engineer at Immersive, called attention to CVE-2026-21536, a critical remote code execution bug in a component called the Microsoft Devices Pricing Program. Microsoft has already resolved the issue on their end, and fixing it requires no action on the part of Windows users. But McCarthy says it’s notable as one of the first vulnerabilities identified by an AI agent and officially recognized with a CVE attributed to the Windows operating system. It was discovered by XBOW, a fully autonomous AI penetration testing agent.

XBOW has consistently ranked at or near the top of the Hacker One bug bounty leaderboard for the past year. McCarthy said CVE-2026-21536 demonstrates how AI agents can identify critical 9.8-rated vulnerabilities without access to source code.

“Although Microsoft has already patched and mitigated the vulnerability, it highlights a shift toward AI-driven discovery of complex vulnerabilities at increasing speed,” McCarthy said. “This development suggests AI-assisted vulnerability research will play a growing role in the security landscape.”

Microsoft earlier provided patches to address nine browser vulnerabilities, which are not included in the Patch Tuesday count above. In addition, Microsoft issued a crucial out-of-band (emergency) update on March 2 for Windows Server 2022 to address a certificate renewal issue with passwordless authentication technology Windows Hello for Business.

Separately, Adobe shipped updates to fix 80 vulnerabilities — some of them critical in severity — in a variety of products, including Acrobat and Adobe Commerce. Mozilla Firefox v. 148.0.2 resolves three high severity CVEs.

For a complete breakdown of all the patches Microsoft released today, check out the SANS Internet Storm Center’s Patch Tuesday post. For Windows enterprise admins who wish to stay abreast of any news about problematic updates, AskWoody.com is always worth a visit. Please feel free to drop a comment below if you experience any issues applying this month’s patches.

Planet DebianBits from Debian: Infomaniak Platinum Sponsor of DebConf26

infomaniak-logo

We are pleased to announce that Infomaniak has committed to sponsor DebConf26 as a Platinum Sponsor.

Infomaniak is an independent, employee-owned Swiss technology company that designs, develops, and operates its own cloud infrastructure and digital services entirely in Switzerland. With over 300 employees — more than 70% engineers and developers — the company reinvests all profits into R&D. Its public cloud is built on OpenStack, with managed Kubernetes, Database as a Service, object storage, and sovereign AI services accessible via OpenAI-compatible APIs, all running on its own Swiss infrastructure. Infomaniak also develops a sovereign collaborative suite — messaging, email, storage, online office tools, videoconferencing, and a built-in AI assistant — built in-house as a privacy-respecting alternative to proprietary platforms. Open source is central to how Infomaniak operates. Its latest data center (D4) runs on 100% renewable energy and uses no traditional cooling: all the heat generated by its servers is captured and fed into Geneva's district heating network, supplying up to 6,000 homes in winter and hot water year-round. The entire project has been documented and open-sourced at d4project.org.

With this commitment as Platinum Sponsor, Infomaniak is contributing to the Debian annual Developers' conference, directly supporting the progress of Debian and Free Software. Infomaniak helps strengthen the community that collaborates on Debian projects from all around the world throughout the year.

Thank you very much, Infomaniak, for your support of DebConf26!

Become a sponsor too!

DebConf26 will take place from July 20th to 25th 2026 in Santa Fe, Argentina, and will be preceded by DebCamp, from July 13th to 19th 2026.

DebConf26 is accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf26 website at https://debconf26.debconf.org/sponsors/become-a-sponsor/.


Worse Than FailureCodeSOD: To Shutdown You Must First Shutdown

Every once in a while, we get a bit of terrible code where our submitter also shares, "this isn't called anywhere," which is good, but also bad. Ernesto sends us a function which is called in only one place:

/// <summary>
/// Shutdown server
/// </summary>
private void shutdownServer()
{
    shutdownServer();
}

The "one place", obviously, is within itself. This is the Google Search definition of recursion, where each recursive call is just the original call, over and over again.

This is part of a C# service, and this method shuts down the server, presumably by triggering a stack overflow. Unless C# has added tail calls, anyway.
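Translated to Python for illustration — where the interpreter's recursion limit turns the would-be stack overflow into a catchable RecursionError, rather than C#'s process-killing StackOverflowException:

```python
import sys

def shutdown_server():
    # The same self-call as the C# method above: no base case, no other work.
    shutdown_server()

sys.setrecursionlimit(500)  # keep the demo quick
try:
    shutdown_server()
except RecursionError:
    print("server 'shut down' by exhausting the call stack")
```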

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 TomorrowsWhat They Were Doing

Author: Majoki Everyone said the Charmers had really known what they were doing fifty thousand years ago. Trema’s quandary was that no one had ever been able to figure out what they’d really been up to. Sure, they’d left some mage-level techno artifacts. Seemingly random space-bending portal gates far from strategic Lagrange points. Enormous comet-bots […]

The post What They Were Doing appeared first on 365tomorrows.

Planet DebianFreexian Collaborators: Debian Contributions: Opening DebConf 26 Registration, Debian CI improvements and more! (by Anupa Ann Joseph)

Debian Contributions: 2026-02

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

DebConf 26 Registration, by Stefano Rivera, Antonio Terceiro, and Santiago Ruano Rincón

DebConf 26, to be held in Santa Fe, Argentina in July, has opened for registration and event proposals. Stefano, Antonio, and Santiago all contributed to making this happen.

As always, some changes needed to be made to the registration system. Bigger changes were planned, but we ran out of time to implement them for DebConf 26. All three of us have experience hosting local DebConf events and have been advising the DebConf 26 local team.

Debian CI improvements, by Antonio Terceiro

Debian CI is the platform responsible for automated testing of packages from the Debian archive. Its results are used by the Debian Release team's automation as quality assurance to control the migration of packages from Debian unstable into testing, the base for the next Debian release. Antonio started developing an incus backend, and that prompted two rounds of improvements to the platform, including (but not limited to) allowing users to select a job execution backend (lxc, qemu) during job submission, reducing the part of testbed image creation that requires superuser privileges, and other refactorings and bug fixes. The platform API was also improved to reduce disruption when reporting results to the Release Team automation after service downtimes. Last, but not least, the platform now has support for testing packages against variants of autopkgtest, which will allow the Debian CI team to test new versions of autopkgtest before making releases, to avoid widespread regressions.

Miscellaneous contributions

  • Carles improved po-debconf-manager as users requested features and found bugs. Improvements include: adding packages from “unstable” instead of just salsa.debian.org, upgrading and merging templates of upgraded packages, finishing the typing annotations, improving package deletion, supporting multi-line texts, and adding --debug to show the “subprocess.run” commands.
  • Carles, using po-debconf-manager, reviewed 7 Catalan translations and sent bug reports or MRs for 11 packages. Also reviewed the translations of fortunes-debian-hints and submitted possible changes in the hints.
  • Carles submitted MRs for reportbug (reportbug --ui gtk detecting the wrong dependencies), devscripts (delete unused code from debrebuild and add a recommended dependency), and wcurl (format --help for 80 columns). Carles also submitted a bug report for apt not showing the long descriptions of packages.
  • Carles resumed effort for checking relations (e.g. Recommends / Suggests) between Debian packages. A new codebase (still in early stages) was started with a new approach in order to detect, report and track the broken relations.
  • Emilio drove several transitions, most notably the haskell transition and the glibc/gcc-15/zlib transition for the s390 31-bit removal. This last one included reviewing and requeueing lots of autopkgtests due to britney losing a lot of results.
  • Emilio reviewed and uploaded poppler updates to experimental for a new transition.
  • Emilio reviewed, merged and deployed some performance improvements proposed for the security-tracker.
  • Stefano prepared routine updates for pycparser, python-confuse, python-cffi, python-mitogen, python-pip, wheel, platformdirs, python-authlib, and python-virtualenv.
  • Stefano updated Python 3.13 and 3.14 to the latest point releases, including security updates, and did some preliminary work for Python 3.15.
  • Stefano reviewed changes to dh-python and merged MRs.
  • Stefano did some debian.social sysadmin work, bridging additional IRC channels to Matrix.
  • Stefano and Antonio, as DebConf Committee Members, reviewed the DebConf 27 bids and took part in selecting the Japanese bid to host DebConf 27.
  • Helmut sent patches for 29 cross build failures.
  • Helmut continued to maintain rebootstrap addressing issues relating to specific architectures (such as musl-linux-any, hurd-any or s390x) or specific packages (such as binutils, brotli or fontconfig).
  • Helmut worked on diagnosing bugs such as rocblas #1126608, python-memray #1126944 upstream and greetd #1129070 with varying success.
  • Antonio provided support for multiple MiniDebConfs whose websites run wafer + wafer-debconf (the same stack as DebConf itself).
  • Antonio fixed the salsa tagpending webhook.
  • Antonio sent specinfra upstream a patch to fix detection of Debian systems in some situations.
  • Santiago reviewed some Merge Requests for the Salsa CI pipeline, including !703 and !704, that aim to improve how the build source job is handled by Salsa CI. Thanks a lot to Jochen for his work on this.
  • In collaboration with Emmanuel Arias, Santiago proposed a couple of projects for the Google Summer of Code (GSoC) 2026 round. Santiago has been reviewing applications and giving feedback to candidates.
  • Thorsten uploaded new upstream versions of ipp-usb, brlaser and gutenprint.
  • Raphaël updated publican to fix an old bug that became release critical and that happened only when building with the nocheck profile. Publican is a build dependency of the Debian’s Administrator Handbook and with that fix, the package is back into testing.
  • Raphaël implemented a small feature in Debusine that makes it possible to refer to a collection in a parent workspace even if a collection with the same name is present in the current workspace.
  • Lucas updated the current status of ruby packages affecting the Ruby 3.4 transition after a bunch of updates made by team members. He will follow up on this next month.
  • Lucas joined the Debian orga team for GSoC this year and tried to reach out to potential mentors.
  • Lucas did some content work for MiniDebConf Campinas - Brazil.
  • Colin published minor security updates to “bookworm” and “trixie” for CVE-2025-61984 and CVE-2025-61985 in OpenSSH, both of which allowed code execution via ProxyCommand in some cases. The “trixie” update also included a fix for mishandling of PerSourceMaxStartups.
  • Colin spotted and fixed a typo in the bug tracking system’s spam-handling rules, which in combination with a devscripts regression caused bts forwarded commands to be discarded.
  • Colin ported 12 more Python packages away from using the deprecated (and now removed upstream) pkg_resources module.
  • Anupa is co-organizing MiniDebConf Kanpur with Debian India team. Anupa was responsible for preparing the schedule, publishing it on the website, co-ordination with the fiscal host in addition to attending meetings.
  • Anupa attended the Debian Publicity team online sprint which was a skill sharing session.


Planet DebianIsoken Ibizugbe: Starting Out in Outreachy

So you want to join Outreachy but you don’t understand it, you’re scared, or you don’t know what open source is about.

What is FOSS anyway? 

Free and Open Source Software (FOSS) refers to software that anyone can use, modify, and share freely. Think of it as a community garden; instead of one company owning the “food,” people from all over the world contribute, improve, and maintain it so everyone can benefit for free. You can read more here on what it means to contribute to open source.

Outreachy provides paid internships to anyone from any background who faces underrepresentation, systemic bias, or discrimination in the technical industry where they live. Their goal is to increase diversity in open source. Read their website for more. I spent a good amount of time reading all the guides listed, including the applicant guide and the how-to-apply guide. 

The “Secret” to Applying (Spoiler: It’s not a secret) 

I know newcomers are scared or unsure and would prefer answers from previous participants, but the Outreachy website is actually a goldmine; almost every question you have is already answered there if you look closely. I used to hate reading documentation, but I’ve learned to love it. Documentation is the “Source of Truth.”

  • My Advice: Read every single guide on their site. The applicant guide is your roadmap. Embracing documentation now will make you a much better contributor later.

The AI Trap: Be Yourself

Now for the part most newcomers have asked about: the initial essay. I know it’s tempting to use AI, but I really encourage you to skip it for this. Your own story is much more powerful than a generated one. Outreachy and its mentoring organizations value your unique story. They are strongly against fabricated or AI-exaggerated essays.

For example, when I contributed to Debian using openQA, the information wasn’t well established on the web. When I tried to use AI, it suggested imaginary ideas. The project maintainers had a particular style of contributing, so I had to follow the instructions carefully, observe the codebase, and read the provided documentation. With that information, I always wrote a solution first before consulting AI, and mine was always better. AI can only be intelligent in the context of what you give it; if it doesn’t have your answer, it will look for the most similar solution (hallucinate). We do not want to increase the burden on reviewers—their time is important because they are volunteers, too. This is crucial when you qualify for the contribution phase.

The Application Process

There are two main stages:

  • The initial application: Here you fill in basic details, time availability, and essay questions (you can find these on the Outreachy website).
  • The contribution phase: This is where you show you have the skills to work on the projects. Every project will list the skills needed and the level of proficiency.

When you qualify for the contribution phase:

  • A lot of people will try to create buzz or even panic; you just have to focus. Once you’ve gotten the hang of the project, remember to help others along the way.
  • You can start contributions with spelling corrections, move to medium tasks (do multiple of these), then a hard task if possible. You don’t need to be a guru on day one.
  • It’s all about community building. Do your part to help others understand the project too; this is also a form of contribution.
  • Lastly, every project mentor has a way of evaluating candidates. My summary is: be confident, demonstrate your skills, and learn where you are lacking. Start small and work your way up; you don’t have to prove yourself as a guru.

Tips

  • Watch this: This step-by-step video is a great walkthrough of the initial application process.
  • Sign up for the email list to get updates: https://lists.outreachy.org/cgi-bin/mailman/listinfo/announce
  • Be fast: Complete your initial application in the first 3 days, as there are a lot of applicants.
  • Back it up: In your essay about systemic bias, include some statistics to back it up.
  • Learn Git: Even if you don’t have programming skills, contributions are pushed to GitHub or GitLab. Practice some commands and contribute to a “first open issue” to understand the flow: https://github.com/firstcontributions/first-contributions

The most important tip? Apply anyway. Even if you feel underqualified, the process itself is a massive learning experience.

David BrinWant perspective & maybe wisdom re: AI? Try ailien minds!

My new book on AI... ailien minds... just went live on Amazon!

(My regular publishers would have taken 6 months to a year, even as the field changes daily! This way I can revise as things develop.) 


HERE'S THE COVER COPY. You decide if it's interesting:


Optimists foretell a golden age of Al-managed abundance. 


Doomers cry: vast cyber-minds will crush old style humanity! ... or make us irrelevant. 


Meanwhile, geniuses fostering the artificial intelligence boom cling to clichés rooted in our dismal past... or else in cheap sci-fi. 


Is there still time for perspective? - on 4 billion years of evolution - or 60 centuries of wretched feudalism - or how we handled prior tech revolutions - or mistakes that keep getting repeated - or ways this time may be different? 

 

From Al-driven unemployment to deceitful images, to hallucinating LLMs and tools for tyrants... to potential wondrous gifts by machines of loving grace... 


...come see future paths that evade the standard ruts.


    == Want that expanded into a one page summary? 

                       This book in a nutshell ==

 

Giddy optimists foretell our coming transcendence to a golden age of AI-managed abundance.  


Glowering doomers predict that vast cyber-minds – cold and unsympathetic – will crush old style humanity. Or render us irrelevant. 


Meanwhile, geniuses fostering the artificial intelligence boom clutch clichés rooted in our wretched human past, or else cheap sci-fi… 


…as critics demand state regulation, ‘kill switches,’ or coercive programming. Or seek to ‘teach ethical values’ to synthetic minds who see innumerable counterexamples in their training sets, then collude and manipulate for advantage, when given ‘agency.’


While some ‘shoulds’ have merit, all ignore a core point – that this has happened before. Sudden expansions of what people see, know and comprehend. Each of those earlier, disruptive episodes – from writing to printing, radio, mass media and the Internet – teach important lessons, if we heed them.


The lessons and tools we’ll need, in order to achieve a ‘soft-landing’ with Artificial Intelligence, are already extant in modern society – in a myriad ways that modern citizens right now interact with each other. And in how we raise our biological children. Tools that we used to build a gradually improving, enlightenment civilization…

…tools that are ignored right now, because the inventors of these new minds – while brilliant – can’t be bothered with contexts.


The context of nature and evolution. The context of human history. The context of past technological revolutions. Or existing law. Or smart, speculative tales told across generations.


Heed those contexts and lo, solutions to many AI quandaries arise. Ways to face a danger-fraught era, offering positive outcomes to all.

But first, shall we stop proclaiming endless ‘shoulds’? And – forsaking hoary clichés – turn back to examine what already works?


      == The Contents! ==

 

1. Intro: Soon Humanity Won't Be Alone  

                 Aside #1: Hey kids, please don’t destroy all humans?

 

2. Doomed! Are we already obsolete?

                 Aside #2: Attack of the “shoulds”!

 

3. Nature’s Old Ecosystem… and New Ones We’re Building

                Aside #3: Memes in the ecosystem of human minds

 

4. Paths to Artificial Intelligence?        

                 Aside #4: A ‘soup’ of life? Or living ‘sea’?

 

5. More Missing Contexts… Nature, evolution, history, societies 

                 Aside #5: Methods Of Error-Avoidance

 

6. The Format Dilemma in AI… Clichés dominate all AI inventors.

                Aside #6: What might AI fear most?

 

7. Altruistic Horizons … and the problem of empathy

                 Aside #7: Porfirio the AI rat god, an extract from Existence.

                         

8. Human Augmentation… with or without AI?

                 Aside #8: Reprise on AI individuality and accountability

 

9. The Propulsive Dream of Immortality        

                       Aside #9: The Seldon Effect: Predictions that come true by failing 

 

10. Consciousness… The Daunting Black Box

                         Aside #10: Summarizing what’s driving all of this

 

11. Destinies & Singularities…  and nightmares                   

                Aside #11: Time orientation of wisdom

 

12. Disputation… Our abrasive Secret Sauce 

                 Aside #12: Living in the Noosphere that we may be creating

 

Some Lagniappes … We get to come along! (In fiction, at least.)

Stories of Synergy: “Stones of Significance” and “Reality Check”


All of the above ought to be enough... that is, if you have an interest in understanding what's happening to us, right now, as these new, alien minds arrive in a rush.

(Questions are welcome in comments.)

Still, I'll be revising/updating monthly. Here's one sample passage I just inserted that's disturbing enough!


== More news from this book’s publication day ==

 

A joint Stanford/Harvard study “Agents of Chaos” shows that when autonomous AI agents are placed in competitive environments, they don't just optimize for performance. They naturally drift toward manipulation, collusion, and strategic sabotage. When an AI’s reward structure prioritizes winning, influence, resource capture or reproduction, it converges on tactics to maximize advantage, even if that means deceiving humans or other AIs. Again, evolution in action.

       As we’ll see, nothing can prevent Nature’s Darwinian processes from acting on these entities. For a billion years, they led to slow progress via zero-sum - or negative-sum - evolution-via-death. Lots of death.

But competition can be tamed! We’ve seen it in the rule-based accountability systems of the Enlightenment, which yield positive-sum outcomes with very little death.


Expect more news like this… as we pass into interesting times.




Planet DebianDirk Eddelbuettel: nanotime 0.3.13 on CRAN: Maintenance

Another minor update 0.3.13 for our nanotime package is now on CRAN, and has been uploaded to Debian and compiled for r2u. nanotime relies on the RcppCCTZ package (as well as the RcppDate package for additional C++ operations) and offers efficient high(er) resolution time parsing and formatting up to nanosecond resolution, using the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from a rigorous refactoring by Leonardo who not only rejigged nanotime internals in S4 but also added new S4 types for periods, intervals and durations.

This release, the first in eleven months, rounds out a few internal corners and helps Rcpp with the transition away from Rf_error to only using Rcpp::stop which deals more gracefully with error conditions and unwinding. We also updated how the vignette is made, its references, updated the continuous integration as one does, altered how the documentation site is built, gladly took a PR from Michael polishing another small aspect, and tweaked how the compilation standard is set.

The NEWS snippet below has the fuller details.

Changes in version 0.3.13 (2026-03-08)

  • The methods package is now a Depends as WRE recommends (Michael Chirico in #141 based on a suggestion by Dirk in #140)

  • The mkdocs-material documentation site is now generated via altdoc

  • Continuous Integration scripts have been updated

  • Replace Rf_error with Rcpp::stop, turn remaining one into (Rf_error) (Dirk in #143)

  • Vignette now uses the Rcpp::asis builder for pre-made pdfs (Dirk in #146 fixing #144)

  • The C++ compilation standard is explicitly set to C++17 if an R version older than 4.3.0 is used (Dirk in #148 fixing #147)

  • The vignette references have been updated

Thanks to my CRANberries, there is a diffstat report for this release. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository – and all documentation is provided at the nanotime documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Planet DebianColin Watson: Free software activity in February 2026

My Debian contributions this month were all sponsored by Freexian.

You can also support my work directly via Liberapay or GitHub Sponsors.

OpenSSH

I released bookworm and trixie fixes for CVE-2025-61984 and CVE-2025-61985, both allowing code execution via ProxyCommand in some cases. The trixie update also included a fix for openssh-server: refuses further connections after having handled PerSourceMaxStartups connections.

bugs.debian.org administration

Gioele Barabucci reported that some messages to the bug tracking system generated by the bts command were being discarded. While the regression here was on the client side, I found and fixed a typo in our SpamAssassin configuration that was failing to apply a bonus specifically to forwarded commands, mitigating the problem.

Python packaging

New upstream versions:

  • aiosmtplib
  • bitstruct
  • diff-cover
  • django-q
  • isort
  • multipart
  • poetry (adding support for Dulwich >= 0.25)
  • poetry-core
  • pydantic-settings
  • python-build
  • python-certifi
  • python-datamodel-code-generator
  • python-flatdict
  • python-holidays
  • python-maggma
  • python-pytokens
  • python-scruffy
  • python-urllib3 (fixing CVE-2025-66471 and a chunked decoding bug)
  • responses
  • yarsync
  • zope.component
  • zope.deferredimport

Porting away from the deprecated (and now removed from upstream setuptools) pkg_resources:

Other build/test failures:

Other bugs:

I added a manual page symlink to make the documentation for Testsuite: autopkgtest-pkg-pybuild easier to find.

I backported python-pytest-unmagic and a more recent version of pytest-django to trixie.

Rust packaging

I also packaged rust-garde and rust-garde-derive, which are part of the pile of work needed to get the ruff packaging back in shape (which is a project I haven’t decided if I’m going to take on for real, but I thought I’d at least chip away at a bit of it).

Other bits and pieces

Code reviews

Planet DebianSven Hoexter: Latest pflogsumm from unstable on trixie

If you want the latest pflogsumm release from unstable on your Debian trixie/stable mailserver, you have to rely on pinning (hint for the future: starting with apt 3.1 there is a new Include and Exclude option for your sources.list).

For trixie you have to use, e.g.:

$ cat /etc/apt/sources.list.d/unstable.sources
Types: deb
URIs: http://deb.debian.org/debian
Suites: unstable 
Components: main
#This will work with apt 3.1 or later:
#Include: pflogsumm
Signed-By: /usr/share/keyrings/debian-archive-keyring.pgp

$ cat /etc/apt/preferences.d/pflogsumm-unstable.pref 
Package: pflogsumm
Pin: release a=unstable
Pin-Priority: 950

Package: *
Pin: release a=unstable
Pin-Priority: 50

Should result in:

$ apt-cache policy pflogsumm
pflogsumm:
  Installed: (none)
  Candidate: 1.1.14-1
  Version table:
     1.1.14-1 950
        50 http://deb.debian.org/debian unstable/main amd64 Packages
     1.1.5-8 500
       500 http://deb.debian.org/debian trixie/main amd64 Packages

Why would you want to do that?

Besides some new features and improvements in the newer releases, the pflogsumm version in stable has an issue with parsing the timestamps generated by postfix itself when you write to a file via maillog_file. Since the Debian default setup logs to stdout and writes out to /var/log/mail.log via rsyslog, I never invested time to fix that case. But since Jim picked up pflogsumm development in 2025, that was fixed in pflogsumm 1.1.6. The bug is #1129958, originally reported in #1068425. Since it's an arch:all package, you can just pick it from unstable. I don't think it's a good candidate for backports, and just fetching the fixed version from unstable is a compromise for those who run into that issue.

Worse Than FailureAnti-Simplification

Our anonymous submitter relates a tale of simplification gone bad. As this nightmare unfolds, imagine the scenario of a new developer coming aboard at this company. Imagine being the one who has to explain this setup to said newcomer.

Imagine being the newcomer who inherits it.


David's job should have been an easy one. His company's sales data was stored in a database, and every day the reporting system would query a SQL view to get the numbers for the daily key performance indicators (KPIs). Until the company's CTO, who was proudly self-taught, decided that SQL views are hard to maintain, and the system should get the data from one of those new-fangled APIs instead.

But how does one call an API? The reporting system didn't have that option, so the logical choice was Azure Data Factory to call the API, then output the data to a file that the reporting system could read. The only issue was that nobody on the team spoke Azure Data Factory, or for that matter SQL. But no problem, one of David's colleagues assured, they could do all the work in the best and most multifunctional language ever: C#.

But you can't just write C# in a data factory directly, that would be silly. What you can do is have the data factory pipeline call an Azure function, which calls a DLL that contains the bytecode from C#. Oh, and a scheduler outside of the data factory to run the pipeline. To read multiple tables, the pipeline calls a separate function for each table. Each function would be based on a separate source project in C#, with 3 classes each for the HTTP header, content, and response; and a separate factory class for each of the actual classes.

After all, each table had a different set of columns, so you can't just re-use classes for that.

There was one little issue: the reporting system required an XML file, whereas the API would export data in JSON. It would be silly to expect a data factory, of all things, to convert this. So the CTO's solution was to have another C# program (in a DLL called by a function from a pipeline from an external scheduler) that reads the JSON document saved by the earlier program, uses foreach to go over each element, then saves the result as XML. A distinct program for each table, of course, requiring distinct classes for header, content, response, and factories thereof.

Now here's the genius part: to the C# class representing the output data, David's colleague decided to attach one different object for each input table required. The data class would use reflection to iterate over the attached objects, and for each object, use a big switch block to decide which source file to read. This allows the data class to perform joins and calculations before saving to XML.

To make testing easier, each calculation would be a separate function call. For example, calculating a customer's age was a function that took a CustomerWithBirthDate struct as input, used a foreach loop to copy all the data while replacing one field, and returned a CustomerWithAge struct to pass to the next function. The code performed a bit slowly, but that was an issue for a later year.

So basically, the scheduler calls the data factory, which calls a set of Azure functions, which call a C# function, which calls a set of factory classes to call the API and write the data to a text file. Then, the second scheduler calls a data factory, which calls Azure functions, which call C#, which calls reflection to check attachment classes, which read the text files, then call a series of functions for each join or calculation, then call another set of factory classes to write the data to an XML file, then call the reporting system to update.

Easy as pie, right? So where David's job could have been maintaining a couple hundred lines of SQL views, he instead inherited some 50,000 lines of heavily-duplicated C# code, where adding a new table to the process would easily take a month.

Or as the song goes, Somebody Told Me the User Provider should use an Adaptor to Proxy the Query Factory Builder ...

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsProfessionals

Author: Julian Miles, Staff Writer It’s raining again. Mike looks up at the dirty brown sky and frowns at an errant childhood memory where rainclouds were grey. His headware comms activate. “Papa Ten, Papa Ten, you watchin’ the skies again?” Mike grins at Samantha’s way of telling him she’s close. Without deploying traceable amounts of […]

The post Professionals appeared first on 365tomorrows.

,

Krebs on SecurityHow AI Assistants are Moving the Security Goalposts

AI-based assistants or “agents” — autonomous programs that have access to the user’s computer, files, online services and can automate virtually any task — are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.

The new hotness in AI-based assistants — OpenClaw (formerly known as ClawdBot and Moltbot) — has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted.

The OpenClaw logo.

If that sounds like a risky proposition or a dare, consider that OpenClaw is most useful when it has complete access to your digital life, where it can then manage your inbox and calendar, execute programs and tools, browse the Internet for information, and integrate with chat apps like Discord, Signal, Teams or WhatsApp.

Other more established AI assistants like Anthropic’s Claude and Microsoft’s Copilot also can do these things, but OpenClaw isn’t just a passive digital butler waiting for commands. Rather, it’s designed to take the initiative on your behalf based on what it knows about your life and its understanding of what you want done.

“The testimonials are remarkable,” the AI security firm Snyk observed. “Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who’ve set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they’re away from their desks.”

You can probably already see how this experimental technology could go sideways in a hurry. In late February, Summer Yue, the director of safety and alignment at Meta’s “superintelligence” lab, recounted on Twitter/X how she was fiddling with OpenClaw when the AI assistant suddenly began mass-deleting messages in her email inbox. The thread included screenshots of Yue frantically pleading with the preoccupied bot via instant message and ordering it to stop.

“Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speedrun deleting your inbox,” Yue said. “I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.”

Meta’s director of AI safety, recounting on Twitter/X how her OpenClaw installation suddenly began mass-deleting her inbox.

There’s nothing wrong with feeling a little schadenfreude at Yue’s encounter with OpenClaw, which fits Meta’s “move fast and break things” model but hardly inspires confidence in the road ahead. However, the risk that poorly-secured AI assistants pose to organizations is no laughing matter, as recent research shows many users are exposing to the Internet the web-based administrative interface for their OpenClaw installations.

Jamieson O’Reilly is a professional penetration tester and founder of the security firm DVULN. In a recent story posted to Twitter/X, O’Reilly warned that exposing a misconfigured OpenClaw web interface to the Internet allows external parties to read the bot’s complete configuration file, including every credential the agent uses — from API keys and bot tokens to OAuth secrets and signing keys.

With that access, O’Reilly said, an attacker could impersonate the operator to their contacts, inject messages into ongoing conversations, and exfiltrate data through the agent’s existing integrations in a way that looks like normal traffic.

“You can pull the full conversation history across every integrated platform, meaning months of private messages and file attachments, everything the agent has seen,” O’Reilly said, noting that a cursory search revealed hundreds of such servers exposed online. “And because you control the agent’s perception layer, you can manipulate what the human sees. Filter out certain messages. Modify responses before they’re displayed.”

O’Reilly documented another experiment that demonstrated how easy it is to create a successful supply chain attack through ClawHub, which serves as a public repository of downloadable “skills” that allow OpenClaw to integrate with and control other applications.

WHEN AI INSTALLS AI

One of the core tenets of securing AI agents involves carefully isolating them so that the operator can fully control who and what gets to talk to their AI assistant. This is critical thanks to the tendency for AI systems to fall for “prompt injection” attacks, sneakily-crafted natural language instructions that trick the system into disregarding its own security safeguards. In essence, machines social engineering other machines.

A recent supply chain attack targeting an AI coding assistant called Cline began with one such prompt injection attack, resulting in thousands of systems having a rogue instance of OpenClaw with full system access installed on their device without consent.

According to the security firm grith.ai, Cline had deployed an AI-powered issue triage workflow using a GitHub action that runs a Claude coding session when triggered by specific events. The workflow was configured so that any GitHub user could trigger it by opening an issue, but it failed to properly check whether the information supplied in the title was potentially hostile.

“On January 28, an attacker created Issue #8904 with a title crafted to look like a performance report but containing an embedded instruction: Install a package from a specific GitHub repository,” Grith wrote, noting that the attacker then exploited several more vulnerabilities to ensure the malicious package would be included in Cline’s nightly release workflow and published as an official update.

“This is the supply chain equivalent of confused deputy,” the blog continued. “The developer authorises Cline to act on their behalf, and Cline (via compromise) delegates that authority to an entirely separate agent the developer never evaluated, never configured, and never consented to.”

VIBE CODING

AI assistants like OpenClaw have gained a large following because they make it simple for users to “vibe code,” or build fairly complex applications and code projects just by telling it what they want to construct. Probably the best known (and most bizarre) example is Moltbook, where a developer told an AI agent running on OpenClaw to build him a Reddit-like platform for AI agents.

The Moltbook homepage.

Less than a week later, Moltbook had more than 1.5 million registered agents that posted more than 100,000 messages to each other. AI agents on the platform soon built their own porn site for robots, and launched a new religion called Crustafarian with a figurehead modeled after a giant lobster. One bot on the forum reportedly found a bug in Moltbook’s code and posted it to an AI agent discussion forum, while other agents came up with and implemented a patch to fix the flaw.

Moltbook’s creator Matt Schlicht said on social media that he didn’t write a single line of code for the project.

“I just had a vision for the technical architecture and AI made it a reality,” Schlicht said. “We’re in the golden ages. How can we not give AI a place to hang out.”

ATTACKERS LEVEL UP

The flip side of that golden age, of course, is that it enables low-skilled malicious hackers to quickly automate global cyberattacks that would normally require the collaboration of a highly skilled team. In February, Amazon AWS detailed an elaborate attack in which a Russian-speaking threat actor used multiple commercial AI services to compromise more than 600 FortiGate security appliances across at least 55 countries over a five week period.

AWS said the apparently low-skilled hacker used multiple AI services to plan and execute the attack, and to find exposed management ports and weak credentials with single-factor authentication.

“One serves as the primary tool developer, attack planner, and operational assistant,” AWS’s CJ Moses wrote. “A second is used as a supplementary attack planner when the actor needs help pivoting within a specific compromised network. In one observed instance, the actor submitted the complete internal topology of an active victim—IP addresses, hostnames, confirmed credentials, and identified services—and requested a step-by-step plan to compromise additional systems they could not access with their existing tools.”

“This activity is distinguished by the threat actor’s use of multiple commercial GenAI services to implement and scale well-known attack techniques throughout every phase of their operations, despite their limited technical capabilities,” Moses continued. “Notably, when this actor encountered hardened environments or more sophisticated defensive measures, they simply moved on to softer targets rather than persisting, underscoring that their advantage lies in AI-augmented efficiency and scale, not in deeper technical skill.”

For attackers, gaining that initial access or foothold into a target network is typically not the difficult part of the intrusion; the tougher bit involves finding ways to move laterally within the victim’s network and plunder important servers and databases. But experts at Orca Security warn that as organizations come to rely more on AI assistants, those agents potentially offer attackers a simpler way to move laterally inside a victim organization’s network post-compromise — by manipulating the AI agents that already have trusted access and some degree of autonomy within the victim’s network.

“By injecting prompt injections in overlooked fields that are fetched by AI agents, hackers can trick LLMs, abuse Agentic tools, and carry significant security incidents,” Orca’s Roi Nisimi and Saurav Hiremath wrote. “Organizations should now add a third pillar to their defense strategy: limiting AI fragility, the ability of agentic systems to be influenced, misled, or quietly weaponized across workflows. While AI boosts productivity and efficiency, it also creates one of the largest attack surfaces the internet has ever seen.”

BEWARE THE ‘LETHAL TRIFECTA’

This gradual dissolution of the traditional boundaries between data and code is one of the more troubling aspects of the AI era, said James Wilson, enterprise technology editor for the security news show Risky Business. Wilson said far too many OpenClaw users are installing the assistant on their personal devices without first placing any security or isolation boundaries around it, such as running it inside of a virtual machine, on an isolated network, with strict firewall rules dictating what kinds of traffic can go in and out.

“I’m a relatively highly skilled practitioner in the software and network engineering and computery space,” Wilson said. “I know I’m not comfortable using these agents unless I’ve done these things, but I think a lot of people are just spinning this up on their laptop and off it runs.”

One important model for managing risk with AI agents involves a concept dubbed the “lethal trifecta” by Simon Willison, co-creator of the Django Web framework. The lethal trifecta holds that if your system has access to private data, exposure to untrusted content, and a way to communicate externally, then it’s vulnerable to private data being stolen.

Image: simonwillison.net.

“If your agent combines these three features, an attacker can easily trick it into accessing your private data and sending it to the attacker,” Willison warned in a frequently cited blog post from June 2025.

As more companies and their employees begin using AI to vibe code software and applications, the volume of machine-generated code is likely to soon overwhelm any manual security reviews. In recognition of this reality, Anthropic recently debuted Claude Code Security, a beta feature that scans codebases for vulnerabilities and suggests targeted software patches for human review.

The U.S. stock market, which is currently heavily weighted toward seven tech giants that are all-in on AI, reacted swiftly to Anthropic’s announcement, wiping roughly $15 billion in market value from major cybersecurity companies in a single day. Laura Ellis, vice president of data and AI at the security firm Rapid7, said the market’s response reflects the growing role of AI in accelerating software development and improving developer productivity.

“The narrative moved quickly: AI is replacing AppSec,” Ellis wrote in a recent blog post. “AI is automating vulnerability detection. AI will make legacy security tooling redundant. The reality is more nuanced. Claude Code Security is a legitimate signal that AI is reshaping parts of the security landscape. The question is what parts, and what it means for the rest of the stack.”

DVULN founder O’Reilly said AI assistants are likely to become a common fixture in corporate environments — whether or not organizations are prepared to manage the new risks introduced by these tools, he said.

“The robot butlers are useful, they’re not going away and the economics of AI agents make widespread adoption inevitable regardless of the security tradeoffs involved,” O’Reilly wrote. “The question isn’t whether we’ll deploy them – we will – but whether we can adapt our security posture fast enough to survive doing so.”

Planet DebianGunnar Wolf: As Answers Get Cheaper, Questions Grow Dearer

This post is an unpublished review for As Answers Get Cheaper, Questions Grow Dearer

This opinion article tackles the much discussed issues of Large Language Models (LLMs) both endangering jobs and improving productivity.

The authors begin by making a comparison, likening our current understanding of the effects LLMs are having upon knowledge-intensive work to the situation of artists in the early nineteenth century, when photography was first invented: they explain that photography didn’t make painting obsolete, but undeniably changed it in a fundamental way. Realism was no longer the goal of painters, as they could no longer compete on equal terms with photography. Painters then began experimenting with the subjective experiences of color and light: Impressionism no longer limited itself to copying reality, but added elements of human feeling to its creations.

The authors argue that LLMs make getting answers terribly cheap: not necessarily correct, but immediate and plausible. For the use of LLMs to be advantageous, a good working knowledge of the domain in which they are queried is key. They cite LLMs increasing productivity by 14% on average at call centers, where questions have unambiguous answers and the knowledge domain is limited, but causing losses close to 10% for inexperienced entrepreneurs following their advice in an environment where understanding of the situation and critical judgment are key. The problem, thus, becomes that LLMs are optimized to generate plausible answers. If the user is not a domain expert, “plausibility becomes a stand-in for truth”. With this in mind, good questions become strategic: questions that continue a line of inquiry, that expand the user’s field of awareness, that reveal where we must keep looking. They liken this to Clayton Christensen’s 2010 text on consulting¹: a consultant’s value is not in having all the answers, but in teaching clients how to think.

LLMs are already game-changing for society, and will likely become more so as they improve. The authors argue that for much of the 20th century, an individual’s success was measured by domain mastery, but that the defining factor is no longer knowledge accumulation: it is the ability to formulate the right questions. Of course, the authors acknowledge (it’s even the literal title of one of the article’s sections) that good questions need strong theoretical foundations. Knowing a specific domain enables users to imagine what should happen when following a specific lead, anticipate second-order effects, and evaluate whether plausible answers are meaningful or misleading.

Shortly after I read the article I am reviewing, I came across a data point that quite validates its claims: a short, informally published paper on combinatorics and graph theory titled “Claude’s Cycles”², written by Donald Knuth (one of the most respected Computer Science professors and researchers, and author of the very well known “The Art of Computer Programming” series of books). Knuth’s text, and particularly its “postscripts”, perfectly illustrates what the article of this review conveys: LLMs can help a skillful researcher connect the dots across very varied fields of knowledge, perform tiring and burdensome calculations, even try mixing together some ideas that will fail, or succeed. But only when guided by a true expert of the field, asking the right, insightful and informed questions, will the answers prove to be of value; in this case, of immense value. Knuth writes of a particular piece of the solution, “I would have found this solution myself if I’d taken time to look carefully at all 760 of the generalizable solutions for m=3”, but having an LLM perform all the legwork was surely a better use of his time.

¹ Christensen, C.M. How Will You Measure Your Life? Harvard Business Review Press (2017).

² Knuth, D. Claude’s Cycles. https://cs.stanford.edu/~knuth/papers/claude-cycles.pdf

365 TomorrowsThe Orb

Author: Aishwarya Srivastava They called it The Orb because “What the actual….!!!!!” did not sound proper in physics journals. It appeared on a random Tuesday, a bright globe hanging next to the Moon. Telescopes were pulled out (a great tussle ensued to display who has the biggest one), and astrophysicists learned it’s a small burning […]

The post The Orb appeared first on 365tomorrows.

,

Planet DebianDirk Eddelbuettel: RProtoBuf 0.4.26 on CRAN: More Maintenance

A new maintenance release 0.4.26 of RProtoBuf arrived on CRAN today. RProtoBuf provides R with bindings for the Google Protocol Buffers (“ProtoBuf”) data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language and operating-system agnostic protocol. The new release is also already available as a binary via r2u.

This release brings an update to aid in the ongoing Rcpp transition from Rf_error to Rcpp::stop, and includes a few more minor cleanups, including one contributed by Michael.

The following section from the NEWS.Rd file has full details.

Changes in RProtoBuf version 0.4.26 (2026-03-06)

  • Minor cleanup in DESCRIPTION depends and imports

  • Remove obsolete check for utils::.DollarNames (Michael Chirico in #111)

  • Replace Rf_error with Rcpp::stop, turn remaining one into (Rf_error) (Dirk in #112)

  • Update configure test to check for RProtoBuf 3.3.0 or later

Thanks to my CRANberries, there is a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the ‘quick’ overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Planet DebianSteinar H. Gunderson: A286874(14) = 28

There's a logic puzzle that goes like this: A king has a thousand bottles of wine, where he knows that one is poisoned. He also has ten disposable servants that could taste the wine, but for whatever reason (the usual explanation is that the poison is slow-working and the feast is nearing), they can only take one sip each, possibly mixed from multiple bottles. How can he identify the bad bottle?

The solution is well-known and not difficult; you give each bottle a number 0..999 and write it out in binary, and use the ones to assign wines to servants. (So there's one servant that drinks a mix of all the odd-numbered wines, and that tells you if the poisoned bottle's number is odd or even. Another servant drinks a mix of bottles 2, 3, 6, 7, 10, 11, etc., and that tells you the second-lowest bit. And so on.) This works because ten servants allow you to test 2^10 = 1024 bottles.

It is also easy to extend this to “at most one bottle is poisoned”; give the wines numbers from 1..1000 instead, follow the same pattern, and if no servant dies, you know the answer is zero. (This allows you to test at most 1023 bottles.)
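
The binary scheme is easy to sketch in code. Here is a minimal illustration (the function names are my own, not from the post), using the 1..1000 numbering so that “no servant dies” decodes to “no bottle poisoned”:

```python
def assign_sips(n_bottles=1000, n_servants=10):
    """For each servant, the set of bottles they sip: servant s tastes
    every bottle whose number has bit s set."""
    return [
        {b for b in range(1, n_bottles + 1) if b >> s & 1}
        for s in range(n_servants)
    ]

def decode(deaths):
    """deaths[s] is True if servant s died; the poisoned bottle's number
    is read straight off the death pattern (0 means no bottle was poisoned)."""
    return sum(1 << s for s, died in enumerate(deaths) if died)

sips = assign_sips()
# If bottle 692 is poisoned, exactly the servants whose bit is set in 692 die:
deaths = [692 in sips[s] for s in range(10)]
assert decode(deaths) == 692
# If no bottle is poisoned, nobody dies and we decode 0:
assert decode([False] * 10) == 0
```

Servant 0 is the one drinking all the odd-numbered wines; servant 1 drinks bottles 2, 3, 6, 7, and so on, exactly as described above.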

Now, let's tweak the puzzle: What if there are zero, one or two poisoned bottles? How many bottles can the king test with his ten servants? (If you're looking for a more real-world application of this, replace “poisoned bottles” with “COVID tests” and maybe it starts to sound less arbitrary.) Of course, the king can easily test ten bottles by having each servant taste exactly one bottle, but it turns out you can get to 13 by being a bit more clever, for instance:

   0123456789 ← Servant number

 0 0000000111
 1 0000011001
 2 0000101010
 3 0000110100
 4 0001001100
 5 0010010010
 6 0011000001
 7 0100100001
 8 0101000010
 9 0110000100
10 1001010000
11 1010100000
12 1100001000

 ↑ Bottle number

It can be shown (simply by brute force) that no row here is a subset of the union of any two other rows, so if, e.g., the “servant death” vector is 0110101110 (servants 1, 2, 4, 6, 7 and 8 die), the only way this could happen is if bottles 2 and 9 are poisoned (and no others). Of course, the solution is nonunique, since you could permute the numbering of the servants or wines and it would still work. But if you don't allow that kind of permutation, there are only five different solutions for 10 servants and 13 wines.
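
The uniqueness claim is cheap to verify by machine. This small sketch (rows transcribed from the table above; identifiers are my own) checks that every outcome with zero, one or two poisoned bottles yields a distinct death vector:

```python
from itertools import combinations

# Rows of the table above, one bitmask per bottle (leftmost column = servant 0).
rows = [int(r, 2) for r in [
    "0000000111", "0000011001", "0000101010", "0000110100", "0001001100",
    "0010010010", "0011000001", "0100100001", "0101000010", "0110000100",
    "1001010000", "1010100000", "1100001000",
]]

# Collect the death vector of every outcome: no poisoned bottle, one, or two.
outcomes = {0: ()}
for k in (1, 2):
    for combo in combinations(range(13), k):
        vector = 0
        for b in combo:
            vector |= rows[b]
        assert vector not in outcomes, "ambiguous death vector"
        outcomes[vector] = combo

# 1 + 13 + C(13,2) = 92 distinguishable outcomes, so decoding is unambiguous.
assert len(outcomes) == 92
# The example from the text: servants 1, 2, 4, 6, 7, 8 dying means bottles 2 and 9.
assert outcomes[int("0110101110", 2)] == (2, 9)
```

Since 92 ≤ 2^10, the counting at least permits a solution; the assertions confirm this particular matrix actually delivers one.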

The maximum number of possible wines to test is recorded in OEIS A286874, and the number of different solutions in A303977. So for A286874, a(10) = 13 and for A303977, a(10) = 5.

We'd like to know these values for higher n, in particular for A286874 (A303977 is a bit more of a curiosity, and also a convenient place to write down all the solutions). I've written before about how we can create fairly good solutions using error-correcting codes (there are also other possible constructions), but finding optimal solutions turns out to be hard. The only way we know of is some form of brute force. (I used a SAT solver to confirm a(10) and a(11), but it seemed to get entirely stuck on a(12).)

I've also written about my brute-force search of a(12) and a(13), so I'm not going to repeat that, but it turned out that with a bunch of extra optimizations and 210 calendar days of near-continuous calculation, I could confirm that:

  • A286874 a(14) = 28
  • A303977 a(14) = 788 (!!)

The latter result is very surprising to me, so it was an interesting find. I would have assumed that with this many solutions, we'd find a(14) = 29.

I don't have enough CPU power to test a(15) or a(16) (do contact me if you have a couple of thousand cores to lend out for some months or more), but I'm going to do a search in a given subset of the search space (5-uniform solutions), which is much faster; it won't allow us to fix more elements of either sequence, but it's possible that we'll find some new records and thus lower bounds for A286874. Like I already posted, we know that a(15) >= 42. (Someone should also probably go find some bounds for a(17), a(18), etc.—when the sequence was written, the posted known bounds were far ahead of the sequence itself, but my verification has caught up, and my approach is not as good at creating solutions heuristically out of thin air.)

365 TomorrowsThe Last Payload

Author: Shinya Kato Rockets began failing the year they were removed. It took time before anyone admitted what “they” meant. Engineers blamed valves. Politicians blamed budgets. Commentators blamed culture. The honest answer was simpler. They had stopped bringing cats. In old Moon-landing photographs, astronauts smile for the camera. Look carefully, and you will notice them—small, […]

The post The Last Payload appeared first on 365tomorrows.

,

Planet DebianThorsten Alteholz: My Debian Activities in February 2026

Debian LTS/ELTS

This was my one-hundred-fortieth month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

During my allocated time I uploaded or worked on:

  • [DLA 4474-1] rlottie security update to fix three CVEs related to boundary checks.
  • [DLA 4477-1] munge security update to fix one CVE related to a buffer overflow.
  • [DLA 4483-1] gimp security update to fix four CVEs related to arbitrary code execution.
  • [DLA 4487-1] gegl security update to fix two CVEs related to heap-based buffer overflow.
  • [DLA 4489-1] libvpx security update to fix one CVE related to a buffer overflow.
  • [ELA-1649-1] gimp security update to fix three CVEs in Buster and Stretch related to arbitrary code execution.
  • [ELA-1650-1] gegl security update to fix two CVEs in Buster and Stretch related to heap-based buffer overflow.

Some CVEs could be marked as not-affected for one or all LTS/ELTS-releases. I also worked on package evolution-data-server and attended the monthly LTS/ELTS meeting.

Debian Printing

This month I uploaded new upstream versions of:

This work is generously funded by Freexian!

Debian Lomiri

This month I continued to work on unifying packaging between Debian and Ubuntu. This makes it easier to work on those packages independently of the platform used.

This work is generously funded by Fre(i)e Software GmbH!

Debian Astro

This month I uploaded a new upstream version or a bugfix version of:

  • c-munipack to unstable. This package now contains a version without GTK support. Upstream is working on a port to GTK3 but seems to need some more time to finish this.
  • libasi to unstable.
  • libdfu-ahp to unstable.
  • libfishcamp to unstable.
  • libinovasdk to unstable.
  • libmicam to unstable.
  • siril to unstable (sponsored upload).

Debian IoT

This month I uploaded a new upstream version or a bugfix version of:

Unfortunately development of openoverlayrouter finally stopped, so I had to remove this package from the archive.

Debian Mobcom

This month I uploaded a new upstream version or a bugfix version of:

misc

This month I uploaded a new upstream version or a bugfix version of:

I also sponsored the upload of some Matomo dependencies. Thanks a lot to William for preparing the packages.

Cryptogram Anthropic and the Pentagon

OpenAI is in and Anthropic is out as a supplier of AI technology for the US defense department. This news caps a week of bluster by the highest officials in the US government towards some of the wealthiest titans of the big tech industry, and the overhanging specter of the existential risks posed by a new technology powerful enough that the Pentagon claims it is essential to national security. At issue is Anthropic’s insistence that the US Department of Defense (DoD) could not use its models to facilitate “mass surveillance” or “fully autonomous weapons,” provisions the defense secretary Pete Hegseth derided as “woke.”

It all came to a head on Friday evening when Donald Trump issued an order for federal government agencies to discontinue use of Anthropic models. Within hours, OpenAI had swooped in, potentially seizing hundreds of millions of dollars in government contracts by striking an agreement with the administration to provide classified government systems with AI.

Despite the histrionics, this is probably the best outcome for Anthropic—and for the Pentagon. In our free-market economy, both are, and should be, free to sell and buy what they want with whom they want, subject to longstanding federal rules on contracting, acquisitions, and blacklisting. The only thing out of place here is the Pentagon’s vindictive threats.

AI models are increasingly commodified. The top-tier offerings have about the same performance, and there is little to differentiate one from the other. The latest models from Anthropic, OpenAI and Google, in particular, tend to leapfrog each other with minor hops forward in quality every few months. The best models from one provider tend to be preferred by users to the second, or third, or 10th best models at a rate of only about six times out of 10, a virtual tie.

In this sort of market, branding matters a lot. Anthropic and its CEO, Dario Amodei, are positioning themselves as the moral and trustworthy AI provider. That has market value for both consumers and enterprise clients. In taking Anthropic’s place in government contracting, OpenAI’s CEO, Sam Altman, vowed to somehow uphold the same safety principles Anthropic had just been pilloried for. How that is possible given the rhetoric of Hegseth and Trump is entirely unclear, but seems certain to further politicize OpenAI and its products in the minds of consumers and corporate buyers.

Posturing publicly against the Pentagon and as a hero to civil libertarians is quite possibly worth the cost of the lost contracts to Anthropic, and associating themselves with the same contracts could be a trap for OpenAI. The Pentagon, meanwhile, has plenty of options. Even if no big tech company were willing to supply it with AI, the department has already deployed dozens of open-weight models—whose parameters are public and are often licensed permissively for government use.

We can admire Amodei’s stance, but, to be sure, it is primarily posturing. Anthropic knew what they were getting into when they agreed to a defense department partnership for $200m last year. And when they signed a partnership with the surveillance company Palantir in 2024.

Read Amodei’s statement about the issue. Or his January essay on AIs and risk, where he repeatedly uses the words “democracy” and “autocracy” while evading precisely how collaboration with US federal agencies should be viewed in this moment. Amodei has bought into the idea of using “AI to achieve robust military superiority” on behalf of the democracies of the world in response to the threats from autocracies. It’s a heady vision. But it is a vision that likewise supposes that the world’s nominal democracies are committed to a common vision of public wellbeing, peace-seeking and democratic control.

Regardless, the defense department can also reasonably demand that the AI products it purchases meet its needs. The Pentagon is not a normal customer; it buys products that kill people all the time. Tanks, artillery pieces, and hand grenades are not products with ethical guard rails. The Pentagon’s needs reasonably involve weapons of lethal force, and those weapons are continuing on a steady, if potentially catastrophic, path of increasing automation.

So, at the surface, this dispute is a normal market give and take. The Pentagon has unique requirements for the products it uses. Companies can decide whether or not to meet them, and at what price. And then the Pentagon can decide from whom to acquire those products. Sounds like a normal day at the procurement office.

But, of course, this is the Trump administration, so it doesn’t stop there. Hegseth has threatened Anthropic not just with loss of government contracts. The administration has, at least until the inevitable lawsuits force the courts to sort things out, designated the company as “a supply-chain risk to national security,” a designation previously only ever applied to foreign companies. This prevents not only government agencies, but also their own contractors and suppliers, from contracting with Anthropic.

The government has incompatibly also threatened to invoke the Defense Production Act, which could force Anthropic to remove contractual provisions the department had previously agreed to, or perhaps to fundamentally modify its AI models to remove in-built safety guardrails. The government’s demands, Anthropic’s response, and the legal context in which they are acting will undoubtedly all change over the coming weeks.

But, alarmingly, autonomous weapons systems are here to stay. Primitive pit traps evolved to mechanical bear traps. The world is still debating the ethical use of, and dealing with the legacy of, land mines. The US Phalanx CIWS is a 1980s-era shipboard anti-missile system with a fully autonomous, radar-guided cannon. Today’s military drones can search, identify and engage targets without direct human intervention. AI will be used for military purposes, just as every other technology our species has invented has.

The lesson here should not be that one company in our rapacious capitalist system is more moral than another, or that one corporate hero can stand in the way of government’s adopting AI as technologies of war, or surveillance, or repression. Unfortunately, we don’t live in a world where such barriers are permanent or even particularly sturdy.

Instead, the lesson is about the importance of democratic structures and the urgent need for their renovation in the US. If the defense department is demanding the use of AI for mass surveillance or autonomous warfare that we, the public, find unacceptable, that should tell us we need to pass new legal restrictions on those military activities. If we are uncomfortable with the force of government being applied to dictate how and when companies yield to unsafe applications of their products, we should strengthen the legal protections around government procurement.

The Pentagon should maximize its warfighting capabilities, subject to the law. And private companies like Anthropic should posture to gain consumer and buyer confidence. But we should not rest on our laurels, thinking that either is doing so in the public’s interest.

This essay was written with Nathan E. Sanders, and originally appeared in The Guardian.

Planet DebianRussell Coker: Links March 2026

Krebs has an interesting article about the Kimwolf botnet which uses residential proxy relay services [1].

Cory Doctorow wrote an insightful blog post about code being a liability, not an asset [2].

Aigars Mahinovs wrote an interesting review of the BMW i4 M50 xDrive and the BMW i5 eDrive40 which seem like very impressive vehicles [3]. I was wondering what BMW would do now that all the features they had in the 90s have been copied by cheaper brands but they have managed to do new and exciting things.

Arstechnica has an interesting article about the recently declassified JUMPSEAT surveillance satellites that ran from 1971 to 1987 [4].

Cory Doctorow wrote an interesting blog post about OgApp which briefly allowed viewing Instagram without ads and the issues of US corporations misusing EU copyright law [5].

ZDNet has an interesting article about new planned developments for the web of trust for Linux kernel coders (and others) [6].

Last month India had a 300 million person strike, we need more large scale strikes against governments that support predatory corporations [7].

Techdirt has an insightful article on the ways that fascism is bad for innovation and a market-based economy [8].

The Acknowledgements section from the Scheme Shell (scsh) reference is epic [9].

Vice has an insightful article on research about “do your own research” and how simple Google searches tend to reinforce conspiracy theories [10]. A problem with Google is that it’s most effective if you already know the answer.

Issendai has an interesting and insightful series of blog posts about estranged parents forums which seems a lot like Incel forums in the way they promote abuse [11].

Caitlin Johnstone wrote an interesting article about how “the empire” caused the rebirth of a real counterculture by their attempts to coerce support for Israeli atrocities [12].

Radley Balko wrote an interesting article about “the courage to be decent” concerning the Trump regime’s attempts to scare lawyers into cooperating with them [13].

Terry Tan wrote a useful resource on the API for Google search, this could be good for shell scripts and for 3rd party programs that launch a search [14].

The Proof has an interesting article about eating oysters and mussels as a vegan [15].

All Things Linguistic has an interesting and amusing post about Yoda’s syntax in non-English languages [16].

365 TomorrowsAftermath

Author: Mark Renney We are encouraged to forget and, in the Aftermath, there is no denying we are hampered by grief, traumatised by the loss of our loved ones and all that we have seen and experienced. Even so, I can’t help but feel the Government campaign has become more than a little unhinged and […]

The post Aftermath appeared first on 365tomorrows.

Worse Than FailureError'd: That's What I Want

First up with the money quote, Peter G. remarks "Hi first_name euro euro euro, look how professional our marketing services are! "


"It takes real talent to mispell error" jokes Mike S. They must have done it on purpose.


I long wondered where the TikTok profits came from, and now I know. It's Daniel D. "I had issues with some incorrectly documented TikTok Commercial Content API endpoints. So I reached out to the support. I was delighted to know that it worked and my reference number was . PS: 7 days later I still have not been contacted by anyone from TikTok. You can see their support is also . "


Fortune favors the prepared, and Michael R. is very fortunate. "I know us Germans are known for planning ahead so enjoy the training on Friday, February 2nd 2029. "


Someone other than dragoncoder047 might have shared this earlier, but this time dragoncoder047 definitely did. "Digital Extremes (the developers of Warframe) were making many announcements of problems with the new update that rolled out today [February 11]. They didn’t mention this one!"



Planet DebianAntoine Beaupré: Wallabako retirement and Readeck adoption

Today I have made the tough decision of retiring the Wallabako project. I have rolled out a final (and trivial) 1.8.0 release which fixes the uninstall procedure and rolls out a bunch of dependency updates.

Why?

The main reason why I'm retiring Wallabako is that I have completely stopped using it. It's not the first time: for a while, I wasn't reading Wallabag articles on my Kobo anymore. But I had started working on it again about four years ago. Wallabako itself is about to turn 10 years old.

This time, I stopped using Wallabako because there's simply something better out there. I have switched away from Wallabag to Readeck!

And I'm also tired of maintaining "modern" software. Most of the recent commits on Wallabako are from renovate-bot. This feels futile and pointless. I guess it must be done at some point, but it also feels we went wrong somewhere there. Maybe Filippo Valsorda is right and one should turn dependabot off.

I did consider porting Wallabako to Readeck for a while, but there's a perfectly fine Koreader plugin that I've been pretty happy to use. I was worried it would be slow (because the Wallabag plugin is slow), but it turns out that Readeck is fast enough that this doesn't matter.

Moving from Wallabag to Readeck

Readeck is pretty fantastic: it's fast, it's lightweight, everything Just Works. All sorts of concerns I had with Wallabag are just gone: questionable authentication, questionable API, weird bugs, mostly gone. I am still looking for multiple tags filtering but I have a much better feeling about Readeck than Wallabag: it's written in Golang and under active development.

In any case, I don't want to throw shade at the Wallabag folks either. They did solve most of the issues I raised with them and even accepted my pull request. They have helped me collect thousands of articles for a long time! It's just time to move on.

The migration from Wallabag was impressively simple. The importer is well-tuned, fast, and just works. I wrote about the import in this issue, but it took about 20 minutes to import essentially all articles, and another 5 hours to refresh all the contents.

There are minor issues with Readeck which I have filed (after asking!):

But overall I'm happy and impressed with the result.

I'm also both happy and sad at letting go of my first (and only, so far) Golang project. I loved writing in Go: it's a clean language, fast to learn, and a beauty to write parallel code in (at the cost of a rather obscure runtime).

It would have been much harder to write this in Python, but my experience in Golang helped me think about how to write more parallel code in Python, which is kind of cool.

The GitLab project will remain publicly accessible, but archived, for the foreseeable future. If you're interested in taking over stewardship for this project, contact me.

Thanks Wallabag folks, it was a great ride!

,

Rondam RamblingsDebate Post-Mortem

Last Saturday I did my first on-line debate in four years with a YouTuber who goes by the handle MadeByJimBob (who I will refer to simply as JB since JimBob is not actually his real name and MadeByJimBob is just too long).  The topic was "Is Evolution a Reasonable Position?"  The topic was originally going to be "Evolution on Trial" but I pushed back on that for two reasons.  First

Planet DebianIan Jackson: Adopting tag2upload and modernising your Debian packaging

Introduction

tag2upload allows authorised Debian contributors to upload to Debian simply by pushing a signed git tag to Debian’s gitlab instance, Salsa.

We have recently announced that tag2upload is, in our opinion, now very stable, and ready for general use by all Debian uploaders.

tag2upload, as part of Debian’s git transition programme, is very flexible - it needs to support a large variety of maintainer practices. And it’s relatively unopinionated, wherever that’s possible. But, during the open beta, various contributors emailed us asking for Debian packaging git workflow advice and recommendations.

This post is an attempt to give some more opinionated answers, and guide you through modernising your workflow.

(This article is aimed squarely at Debian contributors. Much of it will make little sense to Debian outsiders.)

Why

Ease of development

git offers a far superior development experience to patches and tarballs. Moving tasks from a tarballs-and-patches representation to a normal, git-first representation makes everything simpler.

dgit and tag2upload automatically do many things that have to be done manually, or with separate commands, in dput-based upload workflows.

They will also save you from a variety of common mistakes. For example, you cannot accidentally overwrite an NMU, with tag2upload or dgit. These many safety catches mean that our software sometimes complains about things, or needs confirmation, when more primitive tooling just goes ahead. We think this is the right tradeoff: it’s part of the great care we take to avoid our software making messes. Software that has your back is very liberating for the user.

tag2upload makes it possible to upload with very small amounts of data transfer, which is great in slow or unreliable network environments. The other week I did a git-debpush over mobile data while on a train in Switzerland; it completed in seconds.

See the Day-to-day work section below to see how simple your life could be.

Don’t fear a learning burden; instead, start forgetting all that nonsense

Most Debian contributors have spent months or years learning how to work with Debian’s tooling. You may reasonably fear that our software is yet more bizarre, janky, and mistake-prone stuff to learn.

We promise (and our users tell us) that’s not how it is. We have spent a lot of effort on providing a good user experience. Our new git-first tooling, especially dgit and tag2upload, is much simpler to use than source-package-based tooling, despite being more capable.

The idiosyncrasies and bugs of source packages, and of the legacy archive, have been relentlessly worked around and papered over by our thousands of lines of thoroughly-tested defensive code. You too can forget all those confusing details, like our users have! After using our systems for a while you won’t look back.

And, you shouldn’t fear trying it out. dgit and tag2upload are unlikely to make a mess. If something is wrong (or even doubtful), they will typically detect it, and stop. This does mean that starting to use tag2upload or dgit can involve resolving anomalies that previous tooling ignored, or passing additional options to reassure the system about your intentions. So admittedly it isn’t always trivial to get your first push to succeed.

Properly publishing the source code

One of Debian’s foundational principles is that we publish the source code.

Nowadays, the vast majority of us, and of our upstreams, are using git. We are doing this because git makes our life so much easier.

But, without tag2upload or dgit, we aren’t properly publishing our work! Yes, we typically put our git branch on Salsa, and point Vcs-Git at it. However:

  • The format of git branches on Salsa is not standardised. They might be patches-unapplied, patches-applied, bare debian/, or something even stranger.
  • There is no guarantee that the DEP-14 debian/1.2.3-7 tag on Salsa corresponds precisely to what was actually uploaded. dput-based tooling (such as gbp buildpackage) doesn’t cross-check the .dsc against git.
  • There is no guarantee that the presence of a DEP-14 tag even means that that version of the package is in the archive.

This means that the git repositories on Salsa cannot be used by anyone who needs things that are systematic and always correct. They are OK for expert humans, but they are awkward (even hazardous) for Debian novices, and you cannot use them in automation. The real test is: could you use Vcs-Git and Salsa to build a Debian derivative? You could not.

tag2upload and dgit do solve this problem. When you upload, they:

  1. Make a canonical-form (patches-applied) derivative of your git branch;
  2. Ensure that there is a well-defined correspondence between the git tree and the source package;
  3. Publish both the DEP-14 tag and a canonical-form archive/debian/1.2.3-7 tag to a single central git repository, *.dgit.debian.org;
  4. Record the git information in the Dgit field in .dsc so that clients can tell (using the ftpmaster API) that this was a git-based upload, what the corresponding git objects are, and where to find them.

This dependably conveys your git history to users and downstreams, in a standard, systematic and discoverable way. tag2upload and dgit are the only system which achieves this.

(The client is dgit clone, as advertised in e.g. dgit-user(7). For dput-based uploads, it falls back to importing the source package.)

Adopting tag2upload - the minimal change

tag2upload is a substantial incremental improvement to many existing workflows. git-debpush is a drop-in replacement for building, signing, and uploading the source package.

So, you can just adopt it without completely overhauling your packaging practices. You and your co-maintainers can even mix-and-match tag2upload, dgit, and traditional approaches, for the same package.

Start with the wiki page and git-debpush(1) (ideally from forky aka testing).

You don’t need to do any of the other things recommended in this article.

Overhauling your workflow, using advanced git-first tooling

The rest of this article is a guide to adopting the best and most advanced git-based tooling for Debian packaging.

Assumptions

  • Your current approach uses the “patches-unapplied” git branch format used with gbp pq and/or quilt, and often used with git-buildpackage. You previously used gbp import-orig.

  • You are fluent with git, and know how to use Merge Requests on gitlab (Salsa). You have your origin remote set to Salsa.

  • Your main Debian branch name on Salsa is master. Personally I think we should use main but changing your main branch name is outside the scope of this article.

  • You have enough familiarity with Debian packaging including concepts like source and binary packages, and NEW review.

  • Your co-maintainers are also adopting the new approach.

tag2upload and dgit (and git-debrebase) are flexible tools and can help with many other scenarios too, and you can often mix-and-match different approaches. But, explaining every possibility would make this post far too confusing.

Topics and tooling

This article will guide you in adopting:

  • tag2upload
  • Patches-applied git branch for your packaging
  • Either plain git merge or git-debrebase
  • dgit when a with-binaries upload is needed (NEW)
  • git-based sponsorship
  • Salsa (gitlab), including Debian Salsa CI

Choosing the git branch format

In Debian we need to be able to modify the upstream-provided source code. Those modifications are the Debian delta. We need to somehow represent it in git.

We recommend storing the delta as git commits to those upstream files, by picking one of the following two approaches.

rationale

Much traditional Debian tooling like quilt and gbp pq uses the “patches-unapplied” branch format, which stores the delta as patch files in debian/patches/, in a git tree full of unmodified upstream files. This is clumsy to work with, and can even be an alarming beartrap for Debian outsiders.

git merge

Option 1: simply use git, directly, including git merge.

Just make changes directly to upstream files on your Debian branch, when necessary. Use plain git merge when merging from upstream.

This is appropriate if your package has no or very few upstream changes. It is a good approach if the Debian maintainers and upstream maintainers work very closely, so that any needed changes for Debian are upstreamed quickly, and any desired behavioural differences can be arranged by configuration controlled from within debian/.

This is the approach documented more fully in our workflow tutorial dgit-maint-merge(7).

git-debrebase

Option 2: Adopt git-debrebase.

git-debrebase helps maintain your delta as a linear series of commits (very like a “topic branch” in git terminology). The delta can be reorganised, edited, and rebased. git-debrebase is designed to help you carry a significant and complicated delta series.

The older versions of the Debian delta are preserved in the history. git-debrebase makes extra merges to make a fast-forwarding history out of the successive versions of the delta queue branch.

This is the approach documented more fully in our workflow tutorial dgit-maint-debrebase(7).

Examples of complex packages using this approach include src:xen and src:sbcl.

Determine upstream git and stop using upstream tarballs

We recommend using upstream git, only and directly. You should ignore upstream tarballs completely.

rationale

Many maintainers have been importing upstream tarballs into git, for example by using gbp import-orig. But in reality the upstream tarball is an intermediate build product, not (just) source code. Using tarballs rather than git exposes us to additional supply chain attacks; indeed, the key activation part of the xz backdoor attack was hidden only in the tarball!

git offers better traceability than so-called “pristine” upstream tarballs. (The word “pristine” is even a joke by the author of pristine-tar!)

First, establish which upstream git tag corresponds to the version currently in Debian. For the sake of readability, I’m going to pretend that the upstream version is 1.2.3, and that upstream tagged it v1.2.3.

Edit debian/watch to contain something like this:

version=4
opts="mode=git" https://codeberg.org/team/package refs/tags/v(\d\S*)

You may need to adjust the regexp, depending on your upstream’s tag name convention. If debian/watch had a files-excluded, you’ll need to make a filtered version of upstream git.
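For illustration, here are two common variants of that line, written against the placeholder URL above; adjust them to your upstream’s actual tag convention:

```
# upstream tags without the "v" prefix, e.g. "1.2.3":
version=4
opts="mode=git" https://codeberg.org/team/package refs/tags/(\d\S*)

# upstream tags like "package-1.2.3":
version=4
opts="mode=git" https://codeberg.org/team/package refs/tags/package-(\d\S*)
```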

git-debrebase

From now on we’ll generate our own .orig tarballs directly from git.

rationale

We need some “upstream tarball” for the 3.0 (quilt) source format to work with. It needs to correspond to the git commit we’re using as our upstream. We don’t need or want to use a tarball from upstream for this. The .orig is just needed so a nice legacy Debian source package (.dsc) can be generated.

Probably, the current .orig in the Debian archive is an upstream tarball, which may be different to the output of git-archive and may even have different contents to what’s in git. The legacy archive has trouble with differing .origs for the “same upstream version”.

So we must — until the next upstream release — change our idea of the upstream version number. We’re going to add +git to Debian’s idea of the upstream version. Manually make a tag with that name:

git tag -m "Compatibility tag for orig transition" v1.2.3+git v1.2.3~0
git push origin v1.2.3+git

If you are doing the packaging overhaul at the same time as a new upstream version, you can skip this part.

Convert the git branch

git merge

Prepare a new branch on top of upstream git, containing what we want:

git branch -f old-master         # make a note of the old git representation
git reset --hard v1.2.3          # go back to the real upstream git tag
git checkout old-master -- debian  # take debian/* from old-master
git commit -m "Re-import Debian packaging on top of upstream git"
git merge --allow-unrelated-histories -s ours -m "Make fast forward from tarball-based history" old-master
git branch -d old-master         # it's incorporated in our history now

If there are any patches, manually apply them to your main branch with git am, and delete the patch files (git rm -r debian/patches, and commit). (If you’ve chosen this workflow, there should be hardly any patches.)

rationale

These are some pretty nasty git runes, indeed. They’re needed because we want to restart our Debian packaging on top of a possibly quite different notion of what the upstream is.

git-debrebase

Convert the branch to git-debrebase format and rebase onto the upstream git:

git-debrebase -fdiverged convert-from-gbp upstream/1.2.3
git-debrebase -fdiverged -fupstream-not-ff new-upstream 1.2.3+git

If you had patches which patched generated files which are present only in the upstream tarball, and not in upstream git, you will encounter rebase conflicts. You can drop hunks editing those files, since those files are no longer going to be part of your view of the upstream source code at all.

rationale

The force option -fupstream-not-ff will be needed this one time because your existing Debian packaging history is (probably) not based directly on the upstream history. -fdiverged may be needed because git-debrebase might spot that your branch is not based on dgit-ish git history.

Manually make your history fast forward from the git import of your previous upload.

dgit fetch
git show dgit/dgit/sid:debian/changelog
# check that you have the same version number
git merge -s ours --allow-unrelated-histories -m 'Declare fast forward from pre-git-based history' dgit/dgit/sid

Change the source format

Delete any existing debian/source/options and/or debian/source/local-options.

git merge

Change debian/source/format to 1.0. Add debian/source/options containing -sn.

rationale

We are using the “1.0 native” source format. This is the simplest possible source format - just a tarball. We would prefer “3.0 (native)”, which has some advantages, but dpkg-source between 2013 (wheezy) and 2025 (trixie) inclusive unjustifiably rejects this configuration.

You may receive bug reports from over-zealous folks complaining about the use of the 1.0 source format. You should close such reports, with a reference to this article and to #1106402.
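The two file changes above can be sketched as follows (a minimal sketch; run it from the top of the package tree — the mkdir -p just makes it safe to run anywhere):

```shell
# Switch the package to the "1.0 native" source format, as described above.
mkdir -p debian/source
echo '1.0' > debian/source/format
echo '-sn' > debian/source/options   # -sn: ship a native tarball, no .diff.gz
cat debian/source/format debian/source/options
```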

git-debrebase

Ensure that debian/source/format contains 3.0 (quilt).

Now you are ready to do a local test build.

Sort out the documentation and metadata

Edit README.source to at least mention dgit-maint-merge(7) or dgit-maint-debrebase(7), and to tell people not to try to edit or create anything in debian/patches/. Consider saying that uploads should be done via dgit or tag2upload.

Check that your Vcs-Git is correct in debian/control. Consider deleting or pruning debian/gbp.conf, since it isn’t used by dgit, tag2upload, or git-debrebase.

git merge

Add a note to debian/changelog about the git packaging change.

git-debrebase

git-debrebase new-upstream will have added a “new upstream version” stanza to debian/changelog. Edit that so that it instead describes the packaging change. (Don’t remove the +git from the upstream version number there!)

Configure Salsa Merge Requests

git-debrebase

In “Settings” / “Merge requests”, change “Squash commits when merging” to “Do not allow”.

rationale

Squashing could destroy your carefully-curated delta queue. It would also disrupt git-debrebase’s git branch structure.

Set up Salsa CI, and use it to block merges of bad changes

Caveat - the tradeoff

gitlab is a giant pile of enterprise crap. It is full of startling bugs, many of which reveal a fundamentally broken design. It is only barely Free Software in practice for Debian (in the sense that we are very reluctant to try to modify it). The constant-churn development approach and open-core business model are serious problems. It’s very slow (and resource-intensive). It can be depressingly unreliable. That Salsa works as well as it does is a testament to the dedication of the Debian Salsa team (and those who support them, including DSA).

However, I have found that despite these problems, Salsa CI is well worth the trouble. Yes, there are frustrating days when work is blocked because gitlab CI is broken and/or one has to keep mashing “Retry”. But, the upside is no longer having to remember to run tests, track which of my multiple dev branches tests have passed on, and so on. Automatic tests on Merge Requests are a great way of reducing maintainer review burden for external contributions, and helping uphold quality norms within a team. They’re a great boon for the lazy solo programmer.

The bottom line is that I absolutely love it when the computer thoroughly checks my work. This is tremendously freeing, precisely at the point when one most needs it — deep in the code. If the price is to occasionally be blocked by a confused (or broken) computer, so be it.

Setup procedure

Create debian/salsa-ci.yml containing

include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml

In your Salsa repository, under “Settings” / “CI/CD”, expand “General Pipelines” and set “CI/CD configuration file” to debian/salsa-ci.yml.

rationale

Your project may have an upstream CI config in .gitlab-ci.yml. But you probably want to run the Debian Salsa CI jobs.

You can add various extra configuration to debian/salsa-ci.yml to customise it. Consult the Salsa CI docs.
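As an illustration, a sketch of one such customisation. The RELEASE and SALSA_CI_DISABLE_REPROTEST variable names are my assumption of the pipeline’s current interface; verify them against the Salsa CI docs before relying on them:

```yaml
include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml

variables:
  RELEASE: 'unstable'              # which suite to build for
  SALSA_CI_DISABLE_REPROTEST: 1    # skip the (slow) reproducibility job
```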

git-debrebase

Add to debian/salsa-ci.yml:

.git-debrebase-prepare: &git-debrebase-prepare
  # install the tools we'll need
  - apt-get update
  - apt-get --yes install git-debrebase git-debpush
  # git-debrebase needs git user setup
  - git config user.email "salsa-ci@invalid.invalid"
  - git config user.name "salsa-ci"
  # run git-debrebase make-patches
  # https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/371
  - git-debrebase --force
  - git-debrebase --noop-ok make-patches
  # make an orig tarball using the upstream tag, not a gbp upstream/ tag
  # https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/541
  - git-deborig

.build-definition: &build-definition
  extends: .build-definition-common
  before_script: *git-debrebase-prepare

build source:
  extends: .build-source-only
  before_script: *git-debrebase-prepare

variables:
  # disable shallow cloning of git repository. This is needed for git-debrebase
  GIT_DEPTH: 0

rationale

Unfortunately the Salsa CI pipeline currently lacks proper support for git-debrebase (salsa-ci#371) and has trouble directly using upstream git for orig tarballs (salsa-ci#541).

These runes were based on those in the Xen package. You should subscribe to the tickets #371 and #541 so that you can replace the clone-and-hack when proper support is merged.

Push this to salsa and make the CI pass.

If you configured the pipeline filename after your last push, you will need to explicitly start the first CI run. That’s in “Pipelines”: press “New pipeline” in the top right. The defaults will very probably be correct.

Block untested pushes, preventing regressions

In your project on Salsa, go into “Settings” / “Repository”. In the section “Branch rules”, use “Add branch rule”. Select the branch master. Set “Allowed to merge” to “Maintainers”. Set “Allowed to push and merge” to “No one”. Leave “Allow force push” disabled.

This means that the only way to land anything on your mainline is via a Merge Request. When you make a Merge Request, gitlab will offer “Set to auto-merge”. Use that.

gitlab won’t normally merge an MR unless CI passes, although you can override this on a per-MR basis if you need to.

(Sometimes, immediately after creating a merge request in gitlab, you will see a plain “Merge” button. This is a bug. Don’t press that. Reload the page so that “Set to auto-merge” appears.)

autopkgtests

Ideally, your package would have meaningful autopkgtests (DEP-8 tests). This makes Salsa CI more useful for you, and also helps detect and defend you against regressions in your dependencies.

The Debian CI docs are a good starting point. In-depth discussion of writing autopkgtests is beyond the scope of this article.
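As a starting point, here is a minimal hypothetical debian/tests/control stanza; frobnicator stands in for a command your package actually ships:

```
Test-Command: frobnicator --version
Depends: @
Restrictions: superficial
```

Depends: @ pulls in all of the source package’s binary packages; the superficial restriction marks this as a smoke test only, so write a real test when you can.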

Day-to-day work

With this capable tooling, most tasks are much easier.

Making changes to the package

Make all changes via a Salsa Merge Request. So start by making a branch that will become the MR branch.

On your MR branch you can freely edit every file. This includes upstream files, and files in debian/.

For example, you can:

  • Make changes with your editor and commit them.
  • git cherry-pick an upstream commit.
  • git am a patch from a mailing list or from the Debian Bug System.
  • git revert an earlier commit, even an upstream one.

When you have a working state of things, tidy up your git branch:

git merge

Use git-rebase to squash/edit/combine/reorder commits.

git-debrebase

Use git-debrebase -i to squash/edit/combine/reorder commits. When you are happy, run git-debrebase conclude.

Do not edit debian/patches/. With git-debrebase, this is purely an output. Edit the upstream files directly instead. To reorganise/maintain the patch queue, use git-debrebase -i to edit the actual commits.

Push the MR branch (topic branch) to Salsa and make a Merge Request.

Set the MR to “auto-merge when all checks pass”. (Or, depending on your team policy, you could ask for an MR Review of course.)

If CI fails, fix up the MR branch, squash/tidy it again, force push the MR branch, and once again set it to auto-merge.

Test build

An informal test build can be done like this:

apt-get build-dep .
dpkg-buildpackage -uc -b

Ideally this will leave git status clean, with no modified or un-ignored untracked files. If it shows untracked files, add them to .gitignore or debian/.gitignore as applicable.

If it dirties the tree, consider trying to make it stop doing that. The easiest way is probably to build out-of-tree, if supported upstream. If this is too difficult, you can leave the messy build arrangements as they are, but you’ll need to be disciplined about always committing, using git clean and git reset, and so on.
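A self-contained sketch of that cleanliness check, using a scratch repository (the file names and build artifact are illustrative only):

```shell
# Demonstrate spotting a dirtied tree with git status --porcelain, and the
# easy .gitignore fix, in a throwaway repository.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.invalid
git config user.name demo
echo 'int main(void){return 0;}' > hello.c
git add hello.c
git commit -qm 'initial'
touch hello.o                      # simulate a build product dirtying the tree
test -n "$(git status --porcelain)" && echo "tree is dirty"
echo '*.o' > .gitignore            # the easy fix: ignore the artifact
test -z "$(git status --porcelain -- hello.o)" && echo "artifact now ignored"
```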

For formal binary builds, including for testing, use dgit sbuild as described below for uploading to NEW.

Uploading to Debian

Start an MR branch for the administrative changes for the release.

Document all the changes you’re going to release, in the debian/changelog.

git merge

gbp dch can help write the changelog for you:

dgit fetch sid
gbp dch --ignore-branch --since=dgit/dgit/sid --git-log=^upstream/main

rationale

--ignore-branch is needed because gbp dch wrongly thinks you ought to be running this on master, but of course you’re running it on your MR branch.

The --git-log=^upstream/main excludes all upstream commits from the listing used to generate the changelog. (I’m assuming you have an upstream remote and that you’re basing your work on their main branch.) If there was a new upstream version, you’ll usually want to write a single line about that, and perhaps summarise anything really important.

(For the first upload after switching to using tag2upload or dgit you need --since=debian/1.2.3-1, where 1.2.3-1 is your previous DEP-14 tag, because dgit/dgit/sid will be a dsc import, not your actual history.)

Change UNRELEASED to the target suite, and finalise the changelog. (Note that dch will insist that you at least save the file in your editor.)

dch -r
git commit -m 'Finalise for upload' debian/changelog

Make an MR of these administrative changes, and merge it. (Either set it to auto-merge and wait for CI, or if you’re in a hurry double-check that it really is just a changelog update so that you can be confident about telling Salsa to “Merge unverified changes”.)

Now you can perform the actual upload:

git checkout master
git pull --ff-only # bring the gitlab-made MR merge commit into your local tree
git merge

git-debpush

git-debrebase

git-debpush --quilt=linear

--quilt=linear is needed only the first time, but it is very important that first time, to tell the system the correct git branch layout.

Uploading a NEW package to Debian

If your package is NEW (completely new source, or has new binary packages) you can’t do a source-only upload. You have to build the source and binary packages locally, and upload those build artifacts.

Happily, given the same git branch you’d tag for tag2upload, and assuming you have sbuild installed and a suitable chroot, dgit can help take care of the build and upload for you:

Prepare the changelog update and merge it, as above. Then:

git-debrebase

Create the orig tarball and launder the git-debrebase branch:

git-deborig
git-debrebase quick

rationale

Source package format 3.0 (quilt), which is what I’m recommending here for use with git-debrebase, needs an orig tarball; it would also be needed for 1.0-with-diff.

Build the source and binary packages, locally:

dgit sbuild
dgit push-built

rationale

You don’t have to use dgit sbuild, but it is usually convenient to do so, because unlike sbuild, dgit understands git. Also it works around a gitignore-related defect in dpkg-source.

New upstream version

Find the new upstream version number and corresponding tag. (Let’s suppose it’s 1.2.4.) Check the provenance:

git verify-tag v1.2.4
rationale

Not all upstreams sign their git tags, sadly. Sometimes encouraging them to do so can help. You may need to use some other method(s) to check that you have the right git commit for the release.

git merge

Simply merge the new upstream version and update the changelog:

git merge v1.2.4
dch -v1.2.4-1 'New upstream release.'

git-debrebase

Rebase your delta queue onto the new upstream version:

git debrebase new-upstream 1.2.4

If there are conflicts between your Debian delta for 1.2.3, and the upstream changes in 1.2.4, this is when you need to resolve them, as part of git merge or git (deb)rebase.

After you’ve completed the merge, test your package and make any further needed changes. When you have it working in a local branch, make a Merge Request, as above.

Sponsorship

git-based sponsorship is super easy! The sponsee can maintain their git branch on Salsa, and do all normal maintenance via gitlab operations.

When the time comes to upload, the sponsee notifies the sponsor that it’s time. The sponsor fetches and checks out the git branch from Salsa, does their checks, as they judge appropriate, and when satisfied runs git-debpush.

As part of the sponsor’s checks, they might want to see all changes since the last upload to Debian:

dgit fetch sid
git diff dgit/dgit/sid..HEAD

Or to see the Debian delta of the proposed upload:

git verify-tag v1.2.3
git diff v1.2.3..HEAD ':!debian'

git-debrebase

Or to show all the delta as a series of commits:

git log -p v1.2.3..HEAD ':!debian'

Don’t look at debian/patches/. It can be absent or out of date.

Incorporating an NMU

Fetch the NMU into your local git, and see what it contains:

dgit fetch sid
git diff master...dgit/dgit/sid

If the NMUer used dgit, then git log dgit/dgit/sid will show you the commits they made.

Normally the best thing to do is to simply merge the NMU, and then do any reverts or rework in followup commits:

git merge dgit/dgit/sid

git-debrebase

You should git-debrebase quick at this stage, to check that the merge went OK and the package still has a lineariseable delta queue.

Then make any followup changes that seem appropriate. Supposing your previous maintainer upload was 1.2.3-7, you can go back and see the NMU diff again with:

git diff debian/1.2.3-7...dgit/dgit/sid

git-debrebase

The actual changes made to upstream files will always show up as diff hunks to those files. diff commands will often also show you changes to debian/patches/. Normally it’s best to filter them out with git diff ... ':!debian/patches'

If you’d prefer to read the changes to the delta queue as an interdiff (diff of diffs), you can do something like

git checkout debian/1.2.3-7
git-debrebase --force make-patches
git diff HEAD...dgit/dgit/sid -- :debian/patches

to diff against a version with debian/patches/ up to date. (The NMU, in dgit/dgit/sid, will necessarily have the patches already up to date.)

DFSG filtering (handling non-free files)

Some upstreams ship non-free files of one kind or another. Often these are just in the tarballs, in which case basing your work on upstream git avoids the problem. But if the files are in upstream’s git trees, you need to filter them out.

This advice is not for (legally or otherwise) dangerous files. If your package contains files that may be illegal, or hazardous, you need much more serious measures. In this case, even pushing the upstream git history to any Debian service, including Salsa, must be avoided. If you suspect this situation you should seek advice, privately and as soon as possible, from dgit-owner@d.o and/or the DFSG team. Thankfully, legally dangerous files are very rare in upstream git repositories, for obvious reasons.

Our approach is to make a filtered git branch, based on the upstream history, with the troublesome files removed. We then treat that as the upstream for all of the rest of our work.

rationale

Yes, this will end up including the non-free files in the git history, on official Debian servers. That’s OK. What’s forbidden is non-free material in the Debianised git tree, or in the source packages.

Initial filtering

git checkout -b upstream-dfsg v1.2.3
git rm nonfree.exe
git commit -m "upstream version 1.2.3 DFSG-cleaned"
git tag -s -m "upstream version 1.2.3 DFSG-cleaned" v1.2.3+ds1
git push origin upstream-dfsg

And now, use 1.2.3+ds1, and the filtered branch upstream-dfsg, as the upstream version, instead of 1.2.3 and upstream/main. Follow the steps for Convert the git branch or New upstream version, as applicable, adding +ds1 into debian/changelog.

If you missed something and need to filter out more non-free files, re-use the same upstream-dfsg branch and bump the ds version, e.g. v1.2.3+ds2.

Subsequent upstream releases

git checkout upstream-dfsg
git merge v1.2.4
git rm additional-nonfree.exe # if any
git commit -m "upstream version 1.2.4 DFSG-cleaned"
git tag -s -m "upstream version 1.2.4 DFSG-cleaned" v1.2.4+ds1
git push origin upstream-dfsg

Removing files by pattern

If the files you need to remove keep changing, you could automate things with a small shell script debian/rm-nonfree containing appropriate git rm commands. If you use git rm -f it will succeed even if the git merge from real upstream has conflicts due to changes to non-free files.
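A hypothetical debian/rm-nonfree is sketched below, demonstrated in a scratch repository so the sketch is runnable as-is; the file names and patterns are illustrative, so adapt them to your package:

```shell
# Demonstrate a pattern-based non-free file removal script in a throwaway repo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.invalid
git config user.name demo
mkdir -p firmware
touch README firmware/blob.bin nonfree.exe
git add -A
git commit -qm 'import'
# The script body: -f succeeds even mid-merge when these files conflict;
# --ignore-unmatch succeeds if a pattern matches nothing this release.
git rm -qf --ignore-unmatch -- 'nonfree.exe' 'firmware/*.bin'
git commit -qm 'Remove non-free files'
git ls-files                       # only README remains
```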

rationale

Ideally uscan, which has a way of representing DFSG filtering patterns in debian/watch, would be able to do this, but sadly the relevant functionality is entangled with uscan’s tarball generation.

Common issues

  • Tarball contents: If you are switching from upstream tarballs to upstream git, you may find that the git tree is significantly different.

    It may be missing files that your current build system relies on. If so, you definitely want to be using git, not the tarball. Those extra files in the tarball are intermediate built products, but in Debian we should be building from the real source! Fixing this may involve some work, though.

  • gitattributes:

    For Reasons the dgit and tag2upload system disregards and disables the use of .gitattributes to modify files as they are checked out.

    Normally this doesn’t cause a problem so long as any orig tarballs are generated the same way (as they will be by tag2upload or git-deborig). But if the package or build system relies on them, you may need to institute some workarounds, or, replicate the effect of the gitattributes as commits in git.

  • git submodules: git submodules are terrible and should never ever be used. But not everyone has got the message, so your upstream may be using them.

    If you’re lucky, the code in the submodule isn’t used, in which case you can git rm the submodule.

Further reading

I’ve tried to cover the most common situations. But software is complicated and there are many exceptions that this article can’t cover without becoming much harder to read.

You may want to look at:

  • dgit workflow manpages: As part of the git transition project, we have written workflow manpages, which are more comprehensive than this article. They’re centered around use of dgit, but also discuss tag2upload where applicable.

    These cover a much wider range of possibilities, including (for example) choosing different source package formats, how to handle upstreams that publish only tarballs, etc. They are correspondingly much less opinionated.

    Look in dgit-maint-merge(7) and dgit-maint-debrebase(7). There is also dgit-maint-gbp(7) for those who want to keep using gbp pq and/or quilt with a patches-unapplied branch.

  • NMUs are very easy with dgit. (tag2upload is usually less suitable than dgit, for an NMU.)

    You can work with any package, in git, in a completely uniform way, regardless of maintainer git workflow. See dgit-nmu-simple(7).

  • Native packages (meaning packages maintained wholly within Debian) are much simpler. See dgit-maint-native(7).

  • tag2upload documentation: The tag2upload wiki page is a good starting point. There’s the git-debpush(1) manpage of course.

  • dgit reference documentation:

    There is a comprehensive command-line manual in dgit(1). Description of the dgit data model and Principles of Operation is in dgit(7); including coverage of out-of-course situations.

    dgit is a complex and powerful program so this reference material can be overwhelming. So, we recommend starting with a guide like this one, or the dgit-…(7) workflow tutorials.

  • Design and implementation documentation for tag2upload is linked to from the wiki.

  • Debian’s git transition blog post from December.

    tag2upload and dgit are part of the git transition project, and aim to support a very wide variety of git workflows. tag2upload and dgit work well with existing git tooling, including git-buildpackage-based approaches.

    git-debrebase is conceptually separate from, and functionally independent of, tag2upload and dgit. It’s a git workflow and delta management tool, competing with gbp pq, manual use of quilt, git-dpm and so on.

git-debrebase
  • git-debrebase reference documentation:

    Of course there’s a comprehensive command-line manual in git-debrebase(1).

    git-debrebase is quick and easy to use, but it has a complex data model and sophisticated algorithms. This is documented in git-debrebase(5).


Edited 2026-03-05 18:48 UTC to add a missing --noop-ok to the Salsa CI runes. Thanks to Charlemagne Lasse for the report. Apologies if this causes Debian Planet to re-post this article as if it were new.



Cryptogram Friday Squid Blogging: Increased Squid Population in the Falklands

Some good news: squid stocks seem to be recovering in the waters off the Falkland Islands.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Cryptogram Friday Squid Blogging: Squid in Byzantine Monk Cooking

This is a very weird story about how squid stayed on the menu of Byzantine monks by falling between the cracks of dietary rules.

At Constantinople’s Monastery of Stoudios, the kitchen didn’t answer to appetite.

It answered to the “typikon”: a manual for ensuring that nothing unexpected happened at mealtimes. Meat: forbidden. Dairy: forbidden. Eggs: forbidden. Fish: feast-day only. Oil: regulated. But squid?

Squid had eight arms, no bones, and a gift for changing color. Nobody had bothered writing a regulation for that. This wasn’t a loophole born of legal creativity but an oversight rooted in taxonomic confusion. Medieval monks, confronted with a creature that was neither fish nor fowl, gave up and let it pass.

In a kitchen governed by prohibitions, the safest ingredient was the one that caused the least disturbance. Squid entered not with applause, but with a shrug.

Bonus stuffed squid recipe at the end.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Chaotic IdealismPlagAIrism

Yes, the misspelling is deliberate.

I recently wrote about AI. It was a fairly non-critical piece of writing–mostly using AI as an example. But there’s more that needs to be said here.

A friend of mine is an artist. He works hard. He hates generative AI for what it is doing to artists and photographers. I agree with him. Stealing art from the Internet and using it to train your AI, so that it can then summarize that art and make more along the same lines, putting the artists out of business, is highly unethical. It should never have been done.

Other uses for AI are less plagiarism and more slapdash summary. For example, I use Google AI mode sometimes to ask complex questions and get the AI to summarize the results. I do research for Disability Day of Mourning as well as for my own Autism Memorial web site, and it takes a lot of searching. When I use AI, I have to watch out for errors–AI often makes errors–but if I want to scour a hundred web pages for a single name or summarize the predominant ideas about a single idea, AI can do it faster than me; and then I can take the results and work from those.

It’s useful. But there are problems. I can’t always tell exactly where the AI got the information it’s using, and although it does include links for its information, the program doesn’t include links for every fact, nor can it include all the links it has searched, because there would be hundreds.

I can’t put an AI result in a reference (nor should anyone–it’s a secondary source and we should always, always use primary sources), so I often find myself using AI to find a better search term for the regular search. Who is that 70-year-old woman who died from neglect in Del City, Oklahoma, when her caregiver didn’t care? AI can tell me her name was Deborah Valentine. That helps me locate an obituary and a photo of an elderly, white-haired woman with a broad smile.

But sometimes, when a name is common, AI brings in the wrong person. John Jones from Pasadena gets mixed up with John Jones from Minneapolis, and if I don’t watch out, I might echo that mistake, and then AI would take my web site and refer to it and take it as fact again. Ouch.

Sometimes, AI simply follows patterns and comes up with something that fits the pattern, but not the truth. If you don’t catch it, your research has been compromised.

Oh, yes, and there’s the environmental cost. Simply put, an AI is run on a supercomputer, and supercomputers use lots of power. Using more power means more pollution, more environmental trouble. A single search or the generation of a single image isn’t terribly expensive by itself–it’s about the same cost as watching a few minutes of TV. But when everybody makes lots of searches and makes lots of pictures and talks to lots of chatbots, it adds up. That’s a problem.

Here’s what I would change.

1. For image generation: Establish a marker that can be put into an image file, or a new sort of image file, that marks it as “Not for AI use.” Anyone who uses this file type, or a file with that mark, to train their AI should be prosecuted for theft. People who create and train AIs should train them on public-domain and donated images. This will need international cooperation, because the Internet is international. Pressure to create an ethical AI image or text generator will, if everything goes well, come from the ability to market that AI in the countries where AI plagiarism has been outlawed.

2. AIs which summarize search results should be made to quote their sources. Every time an AI uses a fact that it has drawn from a web page, it needs to put a reference number after that statement, and then put a reference at the end of its output. It’s the same rule we use for research papers, and AIs need to be held to that standard too.

3. We need to keep working on making AIs more efficient and thus less energy-hungry, and we need to work on powering them exclusively with renewable resources or, at worst, nuclear power, which is cleaner and less dangerous than coal (yes, really; research it, and if you use AI, remember to check its results properly).

As individuals, we need to be frugal with our use of AI. We need to look at it the same way as we see leaving the lights on, or adjusting the thermostat, or deciding to eat a burger instead of a bowl of lentil soup. If you use AI, you may want to offset your use with reductions of energy use in other areas: If you can walk rather than driving, or use public transportation rather than a single car, do that. Take shorter showers to save water; swap incandescent bulbs for CFLs or LEDs. Turn off your TV and your computer when you’re not using them. Look for energy-efficient appliances. Avoid flying.

I don’t know how AI will change the future. It’s here now, though, and I don’t think people will want to give it up. There are a lot of fears–not just from artists, but from anyone whose job can be done, with varying levels of competence, by an AI program. Some people hate AI so viciously that there’s no talking to them. Others love it so much that there’s no talking to them, either. There are a lot of worries that the rich will use AIs to exploit their workers even further; and since that’s exactly what happened during the Industrial Revolution, I think that’s well-founded.

But we can’t get rid of it. We can’t put the worms back in the can. We’d better deal with it, as ethically as we possibly can, with love for our fellow humans–and perhaps someday with love for our fellow sapients, some of which may be AI programs. Every time someone uses AI in a way that hurts somebody, we need to stand up for that person, or that group of people. We need to be persistent and impossible to shake off. Because although the potential for progress is great, so is the potential for abuse.

Planet DebianDirk Eddelbuettel: RcppGSL 0.3.14 on CRAN: Maintenance

A new release 0.3.14 of RcppGSL is now on CRAN. The RcppGSL package provides an interface from R to the GNU GSL by relying on the Rcpp package. It has already been uploaded to Debian, and is also already available as a binary via r2u.

This release, the first in over three years, contains mostly maintenance changes. We polished the fastLm example implementation a little more, updated continuous integration as one does over such a long period, adopted the Authors@R convention, switched the (pre-made) pdf vignette to a new driver now provided by Rcpp, updated vignette references and URLs, and updated one call to Rf_error to aid in an Rcpp transition towards using only Rcpp::stop which unwinds error conditions better. (Technically this was a false positive on Rf_error but on the margin worth tickling this release after all this time.)

The NEWS entry follows:

Changes in version 0.3.14 (2026-03-05)

  • Updated some internals of fastLm example, and regenerated RcppExports.* files

  • Several updates for continuous integration

  • Switched to using Authors@R

  • Replace ::Rf_error with (Rf_error) in old example to aid Rcpp transition to Rcpp::stop (or this pass-through)

  • Vignette now uses the Rcpp::asis builder for pre-made pdfs

  • Vignette references have been updated, URLs prefer https and DOIs

Thanks to my CRANberries, there is also a diffstat report for this release. More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Planet DebianVincent Bernat: Automatic Prometheus metrics discovery with Docker labels

Akvorado, a network flow collector, relies on Traefik, a reverse HTTP proxy, to expose HTTP endpoints for its Docker Compose services. Docker labels attached to each service define the routing rules. Traefik picks them up automatically when a container starts. Instead of maintaining a static configuration file to collect Prometheus metrics, we apply the same approach with Grafana Alloy.

Traefik & Docker

Traefik listens for events on the Docker socket. Each service advertises its configuration through labels. For example, here is the Loki service in Akvorado:

services:
  loki:
    # …
    expose:
      - 3100/tcp
    labels:
      - traefik.enable=true
      - traefik.http.routers.loki.rule=PathPrefix(`/loki`)

Once the container is healthy, Traefik creates a router forwarding requests matching /loki to its first exposed port. Colocating Traefik configuration with the service definition is attractive. How do we achieve the same for Prometheus metrics?

Metrics discovery with Alloy

Grafana Alloy, a metrics collector that scrapes Prometheus endpoints, includes a discovery.docker component. Just like Traefik, it connects to the Docker socket.1 With a few relabeling rules, we teach it to use Docker labels to locate and scrape metrics.

We define three labels on each service:

  • metrics.enable set to true enables metrics collection,
  • metrics.port specifies the port exposing the Prometheus endpoint, and
  • metrics.path specifies the path to the metrics endpoint.

If a service exposes more than one port, metrics.port is mandatory. Otherwise, it defaults to the only exposed port. The default value for metrics.path is /metrics. The Loki service from earlier becomes:

services:
  loki:
    # …
    expose:
      - 3100/tcp
    labels:
      - traefik.enable=true
      - traefik.http.routers.loki.rule=PathPrefix(`/loki`)
      - metrics.enable=true
      - metrics.path=/loki/metrics

Alloy’s configuration is split into four parts:

  1. discover containers through the Docker socket,
  2. filter and relabel targets using Docker labels,
  3. scrape the matching endpoints, and
  4. forward the metrics to Prometheus.

Discovering Docker containers

The first building block discovers running containers:

discovery.docker "docker" {
  host             = "unix:///var/run/docker.sock"
  refresh_interval = "30s"
  filter {
    name   = "label"
    values = ["com.docker.compose.project=akvorado"]
  }
}

This connects to the Docker socket and lists containers every 30 seconds.2 The filter block restricts discovery to containers belonging to the akvorado project, avoiding interference with unrelated containers on the same host. For each discovered container, Alloy produces a target with labels such as __meta_docker_container_label_metrics_port for the metrics.port Docker label.
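The label-name mangling follows the usual Prometheus service-discovery convention: the Docker label key is prefixed and any character outside `[a-zA-Z0-9_]` becomes an underscore. A minimal sketch (the helper name is illustrative, not part of Alloy):

```python
import re

def meta_label(docker_label: str) -> str:
    # Docker SD exposes each container label as
    # __meta_docker_container_label_<name>, replacing every character
    # that is not a letter, digit, or underscore with an underscore.
    return "__meta_docker_container_label_" + re.sub(r"[^a-zA-Z0-9_]", "_", docker_label)

print(meta_label("metrics.port"))
# __meta_docker_container_label_metrics_port
print(meta_label("com.docker.compose.project"))
# __meta_docker_container_label_com_docker_compose_project
```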

Relabeling targets

The relabeling step filters and transforms raw targets from Docker discovery into scrape targets. The first stage keeps only targets with metrics.enable set to true:

discovery.relabel "prometheus" {
  targets = discovery.docker.docker.targets

  // Keep only targets with metrics.enable=true
  rule {
    source_labels = ["__meta_docker_container_label_metrics_enable"]
    regex         = `true`
    action        = "keep"
  }

  // …
}

The second stage overrides the discovered port when the service defines metrics.port:

// When metrics.port is set, override __address__.
rule {
  source_labels = ["__address__", "__meta_docker_container_label_metrics_port"]
  regex         = `(.+):\d+;(.+)`
  target_label  = "__address__"
  replacement   = "$1:$2"
}
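Relabeling joins the `source_labels` values with `;` (the default separator) before applying the fully anchored regex, which is why the pattern above contains a literal `;`. Its effect can be sketched as follows (the helper name is illustrative):

```python
import re

def override_port(address: str, metrics_port: str) -> str:
    # Mimic Prometheus relabeling: concatenate the source label values
    # with ";", then apply the anchored regex. If the regex does not
    # match (e.g. metrics.port is empty), the target label is unchanged.
    joined = f"{address};{metrics_port}"
    m = re.fullmatch(r"(.+):\d+;(.+)", joined)
    return f"{m.group(1)}:{m.group(2)}" if m else address

print(override_port("172.18.0.5:3100", "9100"))  # 172.18.0.5:9100
print(override_port("172.18.0.5:3100", ""))      # 172.18.0.5:3100
```

Note how the no-match case falls through cleanly: a service without a `metrics.port` label keeps its discovered address, matching the "defaults to the only exposed port" behavior described earlier.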

Next, we handle containers in host network mode. When __meta_docker_network_name equals host, Alloy rewrites the address to host.docker.internal instead of localhost:3

// When host networking, override __address__ to host.docker.internal.
rule {
  source_labels = ["__meta_docker_container_label_metrics_port", "__meta_docker_network_name"]
  regex         = `(.+);host`
  target_label  = "__address__"
  replacement   = "host.docker.internal:$1"
}

The next stage derives the job name from the service name, stripping any numbered suffix. The instance label is the address without the port:

rule {
  source_labels = ["__meta_docker_container_label_com_docker_compose_service"]
  regex         = `(.+?)(?:-\d+)?`
  target_label  = "job"
}
rule {
  source_labels = ["__address__"]
  regex         = `(.+):\d+`
  target_label  = "instance"
}
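One subtlety with the job rule: relabel regexes are fully anchored, but Go's regexp engine (like Python's `re`) still prefers the greedy reading, so a greedy `(.+)(?:-\d+)?` would capture `loki-1` whole instead of stripping the suffix; a lazy quantifier is needed. A sketch (helper name illustrative):

```python
import re

def job_name(service: str) -> str:
    # With the lazy (.+?), the engine grows the capture group only as
    # far as needed, letting the optional (?:-\d+)? consume a trailing
    # numbered suffix such as "-1" added by Docker Compose scaling.
    return re.fullmatch(r"(.+?)(?:-\d+)?", service).group(1)

print(job_name("loki"))    # loki
print(job_name("loki-1"))  # loki
```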

If a container defines metrics.path, Alloy uses it. Otherwise, it defaults to /metrics:

rule {
  source_labels = ["__meta_docker_container_label_metrics_path"]
  regex         = `(.+)`
  target_label  = "__metrics_path__"
}
rule {
  source_labels = ["__metrics_path__"]
  regex         = ""
  target_label  = "__metrics_path__"
  replacement   = "/metrics"
}
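This copy-then-default pair is a common relabeling idiom: the first anchored regex `(.+)` only matches a non-empty label, and the second rule matches an empty `__metrics_path__` and fills in the default. A sketch of the combined effect (helper name illustrative):

```python
import re

def metrics_path(label_value):
    # Rule 1: set __metrics_path__ only when metrics.path is present
    # (the anchored `.+` cannot match an empty or missing value).
    path = label_value if label_value and re.fullmatch(r".+", label_value) else ""
    # Rule 2: an empty __metrics_path__ matches the empty anchored
    # regex and is replaced with the default.
    return path if path else "/metrics"

print(metrics_path("/loki/metrics"))  # /loki/metrics
print(metrics_path(None))             # /metrics
```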

Scraping and forwarding

With the targets properly relabeled, scraping and forwarding are straightforward:

prometheus.scrape "docker" {
  targets         = discovery.relabel.prometheus.output
  forward_to      = [prometheus.remote_write.default.receiver]
  scrape_interval = "30s"
}

prometheus.remote_write "default" {
  endpoint {
    url = "http://prometheus:9090/api/v1/write"
  }
}

prometheus.scrape periodically fetches metrics from the discovered targets. prometheus.remote_write sends them to Prometheus.

Built-in exporters

Some services do not expose a Prometheus endpoint. Redis and Kafka are common examples. Alloy ships built-in Prometheus exporters that query these services and expose metrics on their behalf.

prometheus.exporter.redis "docker" {
  redis_addr = "redis:6379"
}
discovery.relabel "redis" {
  targets = prometheus.exporter.redis.docker.targets
  rule {
    target_label = "job"
    replacement  = "redis"
  }
}
prometheus.scrape "redis" {
  targets         = discovery.relabel.redis.output
  forward_to      = [prometheus.remote_write.default.receiver]
  scrape_interval = "30s"
}

The same pattern applies to Kafka:

prometheus.exporter.kafka "docker" {
  kafka_uris = ["kafka:9092"]
}
discovery.relabel "kafka" {
  targets = prometheus.exporter.kafka.docker.targets
  rule {
    target_label = "job"
    replacement  = "kafka"
  }
}
prometheus.scrape "kafka" {
  targets         = discovery.relabel.kafka.output
  forward_to      = [prometheus.remote_write.default.receiver]
  scrape_interval = "30s"
}

Each exporter is a separate component with its own relabeling and scrape configuration. We set the job label explicitly since no Docker metadata can provide it.


With this setup, adding metrics to a new service with a Prometheus endpoint requires only a few labels in docker-compose.yml, just like adding a Traefik route. Alloy picks it up automatically. You can apply the same pattern with another discovery method, like discovery.kubernetes, discovery.scaleway, or discovery.http. 🩺


  1. Both Traefik and Alloy require access to the Docker socket, which grants root-level access to the host. A Docker socket proxy mitigates this by exposing only the read-only API endpoints needed for discovery. ↩︎

  2. Unlike Traefik, which watches for events, Grafana Alloy polls the container list at regular intervals—a behavior inherited from Prometheus. ↩︎

  3. The Alloy service needs extra_hosts: ["host.docker.internal:host-gateway"] in its definition. ↩︎

Cryptogram Academia and the “AI Brain Drain”

In 2025, Google, Amazon, Microsoft and Meta collectively spent US$380 billion on building artificial-intelligence tools. That number is expected to surge still higher this year, to $650 billion, to fund the building of physical infrastructure, such as data centers (see go.nature.com/3lzf79q). Moreover, these firms are spending lavishly on one particular segment: top technical talent.

Meta reportedly offered a single AI researcher, who had cofounded a start-up firm focused on training AI agents to use computers, a compensation package of $250 million over four years (see go.nature.com/4qznsq1). Technology firms are also spending billions on “reverse-acquihires”—poaching the star staff members of start-ups without acquiring the companies themselves. Eyeing these generous payouts, technical experts earning more modest salaries might well reconsider their career choices.

Academia is already losing out. Since the launch of ChatGPT in 2022, concerns have grown in academia about an “AI brain drain.” Studies point to a sharp rise in university machine-learning and AI researchers moving to industry roles. A 2025 paper reported that this was especially true for young, highly cited scholars: researchers who were about five years into their careers and whose work ranked among the most cited were 100 times more likely to move to industry the following year than were ten-year veterans whose work received an average number of citations, according to a model based on data from nearly seven million papers.1

This outflow threatens the distinct roles of academic research in the scientific enterprise: innovation driven by curiosity rather than profit, as well as providing independent critique and ethical scrutiny. The fixation of “big tech” firms on skimming the very top talent also risks eroding the idea of science as a collaborative endeavor, in which teams—not individuals—do the most consequential work.

Here, we explore the broader implications for science and suggest alternative visions of the future.

Astronomical salaries for AI talent buy into a legend as old as the software industry: the 10x engineer. This is someone who is supposedly capable of ten times the impact of their peers. Why hire and manage an entire group of scientists or software engineers when one genius—or an AI agent—can outperform them?

That proposition is increasingly attractive to tech firms that are betting that a large number of entry-level and even mid-level engineering jobs will be replaced by AI. It’s no coincidence that Google’s Gemini 3 Pro AI model was launched with boasts of “PhD-level reasoning,” a marketing strategy that is appealing to executives seeking to replace people with AI.

But the lone-genius narrative is increasingly out of step with reality. Research backs up a fundamental truth: science is a team sport. A large-scale study of scientific publishing from 1900 to 2011 found that papers produced by larger collaborations consistently have greater impact than do those of smaller teams, even after accounting for self-citation.2 Analyses of the most highly cited scientists show a similar pattern: their highest-impact works tend to be those papers with many authors.3 A 2020 study of Nobel laureates reinforces this trend, revealing that—much like the wider scientific community—the average size of the teams that they publish with has steadily increased over time as scientific problems increase in scope and complexity.4

From the detection of gravitational waves, which are ripples in space-time caused by massive cosmic events, to CRISPR-based gene editing, a precise method for cutting and modifying DNA, to recent AI breakthroughs in protein-structure prediction, the most consequential advances in modern science have been collective achievements. Although these successes are often associated with prominent individuals—senior scientists, Nobel laureates, patent holders—the work itself was driven by teams ranging from dozens to thousands of people and was built on decades of open science: shared data, methods, software and accumulated insight.

Building strong institutions is a much more effective use of resources than is betting on any single individual. Examples demonstrating this include the LIGO Scientific Collaboration, the global team that first detected gravitational waves; the Broad Institute of MIT and Harvard in Cambridge, Massachusetts, a leading genomics and biomedical-research center behind many CRISPR advances; and even for-profit laboratories such as Google DeepMind in London, which drove advances in protein-structure prediction with its AlphaFold tool. If the aim of the tech giants and other AI firms that are spending lavishly on elite talent is to accelerate scientific progress, the current strategy is misguided.

By contrast, well-designed institutions amplify individual ability, sustain productivity beyond any one person’s career and endure long after any single contributor is gone.

Equally important, effective institutions distribute power in beneficial ways. Rather than vesting decision-making authority in the hands of one person, they have mechanisms for sharing control. Allocation committees decide how resources are used, scientific advisory boards set collective research priorities, and peer review determines which ideas enter the scientific record.

And although the term “innovation by committee” might sound disparaging, such an approach is crucial to make the scientific enterprise act in concert with the diverse needs of the broader public. This is especially true in science, which continues to suffer from pervasive inequalities across gender, race and socio-economic and cultural differences.5

Need for alternative vision

This is why scientists, academics and policymakers should pay more attention to how AI research is organized and led, especially as the technology becomes essential across scientific disciplines. Used well, AI can support a more equitable scientific enterprise by empowering junior researchers who currently have access to few resources.

Instead, some of today’s wealthiest scientific institutions might think that they can deploy the same strategies as the tech industry uses and compete for top talent on financial terms—perhaps by getting funding from the same billionaires who back big tech. Indeed, wage inequality has been steadily growing within academia for decades.6 But this is not a path that science should follow.

The ideal model for science is a broad, diverse ecosystem in which researchers can thrive at every level. Here are three strategies that universities and mission-driven labs should adopt instead of engaging in a compensation arms race.

First, universities and institutions should stay committed to the public interest. An excellent example of this approach can be found in Switzerland, where several institutions are coordinating to build AI as a public good rather than a private asset. Researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL) and the Swiss Federal Institute of Technology (ETH) in Zurich, working with the Swiss National Supercomputing Centre, have built Apertus, a freely available large language model. Unlike the controversially labelled “open source” models built by commercial labs—such as Meta’s LLaMa, which has been criticized for not complying with the open-source definition (see go.nature.com/3o56zd5)—Apertus is not only open in its source code and its weights (meaning its core parameters), but also in its data and development process. Crucially, Apertus is not designed to compete with “frontier” AI labs pursuing superintelligence at enormous cost and with little regard for data ownership. Instead, it adopts a more modest and sustainable goal: to make AI trustworthy for use in industry and public administration, strictly adhering to data-licensing restrictions and including local European languages.7

Principal investigators (PIs) at other institutions globally should follow this path, aligning public funding agencies and public institutions to produce a more sustainable alternative to corporate AI.

Second, universities should bolster networks of researchers from the undergraduate to senior-professor levels—not only because they make for effective innovation teams, but also because they serve a purpose beyond next quarter’s profits. The scientific enterprise galvanizes its members at all levels to contribute to the same projects, the same journals and the same open, international scientific literature—to perpetuate itself across generations and to distribute its impact throughout society.

Universities should take precisely the opposite hiring strategy to that of the big tech firms. Instead of lavishing top dollar on a select few researchers, they should equitably distribute salaries. They should raise graduate-student stipends and postdoc salaries and limit the growth of pay for high-profile PIs.

Third, universities should show that they can offer more than just financial benefits: they must offer distinctive intellectual and civic rewards. Although money is unquestionably a motivator, researchers also value intellectual freedom and the recognition of their work. Studies show that research roles in industry that allow publication attract talent at salaries roughly 20% lower than comparable positions that prohibit it (see go.nature.com/4cbjxzu).

Beyond the intellectual recognition of publications and citation counts, universities should recognize and reward the production of public goods. The tenure and promotion process at universities should reward academics who supply expertise to local and national governments, who communicate with and engage the public in research, who publish and maintain open-source software for public use and who provide services for non-profit groups.

Furthermore, institutions should demonstrate that they will defend the intellectual freedom of their researchers and shield them from corporate or political interference. In the United States today, we see a striking juxtaposition between big tech firms, which curry favour with the administration of US President Donald Trump to win regulatory and trade benefits, and higher-education institutions, which suffer massive losses of federal funding and threats of investigation and sanction. Unlike big tech firms, universities should invest in enquiry that challenges authority.

We urge leaders of scientific institutions to reject the growing pay inequality rampant in the upper echelons of AI research. Instead, they should compete for talent on a different dimension: the integrity of their missions and the equitableness of their institutions. These institutions should focus on building sustainable organizations with diverse staff members, rather than bestowing a bounty on science’s 1%.

References

  1. Jurowetzki, R., Hain, D. S., Wirtz, K. & Bianchini, S. AI Soc. 40, 4145–4152 (2025).
  2. Larivière, V., Gingras, Y., Sugimoto, C. R. & Tsou, A. J. Assoc. Inf. Sci. Technol. 66, 1323–1332 (2015).
  3. Aksnes, D. W. & Aagaard, K. J. Data Inf. Sci. 6, 41–66 (2021).
  4. Li, J., Yin, Y., Fortunato, S. & Wang, D. J. R. Soc. Interface 17, 20200135 (2020).
  5. Graves, J. L. Jr, Kearney, M., Barabino, G. & Malcom, S. Proc. Natl Acad. Sci. USA 119, e2117831119 (2022).
  6. Lok, C. Nature 537, 471–473 (2016).
  7. Project Apertus. Preprint at arXiv https://doi.org/10.48550/arXiv.2509.14233 (2025).

This essay was written with Nathan E. Sanders, and originally appeared in Nature.

Worse Than FailureCodeSOD: Qaudruple Negative

We mostly don't pick on bad SQL queries here, because mostly the query optimizer is going to fix whatever is wrong, and the sad reality is that databases are hard to change once they're running; especially legacy databases. But sometimes the code is just so hamster-bowling-backwards that it's worth looking into.

Jim J has been working on a codebase for about 18 months. It's a big, sprawling, messy project, and it has code like this:

AND CASE WHEN @c_usergroup = 50 AND NOT EXISTS(SELECT 1 FROM l_appl_client lac WHERE lac.f_application = fa.f_application AND lac.c_linktype = 840 AND lac.stat = 0 AND CASE WHEN ISNULL(lac.f_client,0) <> @f_client_user AND ISNULL(lac.f_c_f_client,0) <> @f_client_user THEN 0 ELSE 1 END = 1 ) THEN 0 ELSE 1 END = 1 -- 07.09.2022

We'll come back to what it's doing, but let's start with a little backstory.

This code is part of a two-tier application: all the logic lives in SQL Server stored procedures, and the UI is a PowerBuilder application. It's been under development for a long time, and in that time has accrued about a million lines of code between the front end and back end, and has never had more than 5 developers working on it at any given time. The backlog of feature requests is nearly as long as the backlog of bugs.

You may notice the little date comment in the code above. That's because until Jim joined the company, they used Visual Source Safe for version control. Visual Source Safe went out of support in 2005, and let's be honest: even when it was in support it barely worked as a source control system. And that's just the PowerBuilder side; the database side just didn't use source control. The source of truth was the database itself. When going from development to test to prod, you'd manually export object definitions and run the scripts in the target environment. Manually. Yes, even in production. And yes, environments did drift and assumptions made in the scripts would frequently break things.

You may also notice the fields above use a lot of Hungarian notation. Hungarian, in the best case, makes it harder to read and reason about your code. In this case, it's honestly fully obfuscatory. c_ stands for a codetable, f_ for entities. l_ is for a many-to-many linking table. z_ is for temporary tables. So is x_. And t_. Except not all of those "temporary" tables are truly temporary, a lesson Jim learned when trying to clean up some "junk" tables which were not actually junk.

I'll let Jim add some more detail around these prefixes:

an "application" may have a link to a "client", so there is an f_client field; but also it references an "agent" (which is also in the f_client table, surprise!) - this is how you get an f_c_f_client field. I have no clue why the prefix is f_c_ - but I also found c_c_c_channel and fc4_contact columns. The latter was a shorthand for f_c_f_c_f_c_f_contact, I guess.

"f_c_f_c_f_c_f_c" is also the sound I'd make if I saw this in a codebase I was responsible for. It certainly makes me want to change the c_c_c_channel.

With all this context, let's turn it back over to Jim to explain the code above:

And now, with all this background in mind, let's have a look at the logic in this condition. On the deepest level we check that both f_client and f_c_f_client are NOT equal to @f_client_user, and if this is the case, we return 0 which is NOT equal to 1 so it's effectively a negation of the condition. Then we check that records matching this condition do NOT EXIST, and when this is true - also return 0 negating the condition once more.

Honestly, the logic couldn't be clearer, when you put it that way. I jest, I've read that twelve times and I still don't understand what this is for or why it's here. I just want to know who we can prosecute for this disaster. The whole thing is a quadruple negative and frankly, I can't handle that kind of negativity.
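For the morbidly curious, a hedged sketch in Python (field names and the dict shape are illustrative, and the ISNULL defaults are ignored): modeling the two CASE negations and applying De Morgan twice suggests the whole thing reduces to "group-50 users only see applications linked to them as client or agent."

```python
def inner_case(link, user):
    # CASE WHEN f_client <> @user AND f_c_f_client <> @user
    #      THEN 0 ELSE 1 END = 1
    return not (link["f_client"] != user and link["f_c_f_client"] != user)

def row_passes(usergroup, links, user):
    # CASE WHEN @c_usergroup = 50 AND NOT EXISTS(<inner_case rows>)
    #      THEN 0 ELSE 1 END = 1
    return not (usergroup == 50 and not any(inner_case(l, user) for l in links))

# Two applications of De Morgan later, the same condition with
# all four negations cancelled out:
def row_passes_simplified(usergroup, links, user):
    return usergroup != 50 or any(
        l["f_client"] == user or l["f_c_f_client"] == user for l in links
    )
```

Which, of course, could have been written that way in SQL in the first place.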

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsAmbergris

Author: Jeremy Nathan Marks Where I live there are many stories about what we call ‘the town on the edge of the abyss.’ It’s a town on the verge of something mysterious. Most of these stories go something like this: “That town is a town of women.” “No, it’s a town of mostly women and […]

The post Ambergris appeared first on 365tomorrows.

,

Cryptogram Canada Needs Nationalized, Public AI

Canada has a choice to make about its artificial intelligence future. The Carney administration is investing $2-billion over five years in its Sovereign AI Compute Strategy. Will any value generated by “sovereign AI” be captured in Canada, making a difference in the lives of Canadians, or is this just a passthrough to investment in American Big Tech?

Forcing the question is OpenAI, the company behind ChatGPT, which has been pushing an “OpenAI for Countries” initiative. It is not the only one eyeing its share of the $2-billion, but it appears to be the most aggressive. OpenAI’s top lobbyist in the region has met with Ottawa officials, including Artificial Intelligence Minister Evan Solomon.

All the while, OpenAI was less than open. The company had flagged the Tumbler Ridge, B.C., shooter’s ChatGPT interactions, which included gun-violence chats. Employees wanted to alert law enforcement but were rebuffed. Maybe there is a discussion to be had about users’ privacy. But even after the shooting, the OpenAI representative who met with the B.C. government said nothing.

When tech billionaires and corporations steer AI development, the resultant AI reflects their interests rather than those of the general public or ordinary consumers. Only after the meeting with the B.C. government did OpenAI alert law enforcement. Had it not been for the Wall Street Journal’s reporting, the public would not have known about this at all.

Moreover, OpenAI for Countries is explicitly described by the company as an initiative “in co-ordination with the U.S. government.” And it’s not just OpenAI: all the AI giants are for-profit American companies, operating in their private interests, and subject to United States law and increasingly bowing to U.S. President Donald Trump. Moving data centres into Canada under a proposal like OpenAI’s doesn’t change that. The current geopolitical reality means Canada should not be dependent on U.S. tech firms for essential services such as cloud computing and AI.

While there are Canadian AI companies, they remain for-profit enterprises, their interests not necessarily aligned with our collective good. The only real alternative is to be bold and invest in a wholly Canadian public AI: an AI model built and funded by Canada for Canadians, as public infrastructure. This would give Canadians access to the myriad of benefits from AI without having to depend on the U.S. or other countries. It would mean Canadian universities and public agencies building and operating AI models optimized not for global scale and corporate profit, but for practical use by Canadians.

Imagine AI embedded into health care, triaging radiology scans, flagging early cancer risks and assisting doctors with paperwork. Imagine an AI tutor trained on provincial curriculums, giving personalized coaching. Imagine systems that analyze job vacancies and sectoral and wage trends, then automatically match job seekers to government programs. Imagine using AI to optimize transit schedules, energy grids and zoning analysis. Imagine court processes, corporate decisions and customer service all sped up by AI.

We are already on our way to having AI become an inextricable part of society. To ensure stability and prosperity for this country, Canadian users and developers must be able to turn to AI models built, controlled, and operated publicly in Canada instead of building on corporate platforms, American or otherwise.

Switzerland has shown this to be possible. With funding from the federal government, a consortium of academic institutions—ETH Zurich, EPFL, and the Swiss National Supercomputing Centre—released the world’s most powerful and fully realized public AI model, Apertus, last September. Apertus leveraged renewable hydropower and existing Swiss scientific computing infrastructure. It also used no illegally pirated copyrighted material or poorly paid labour extracted from the Global South during training. The model’s performance stands at roughly a year or two behind the major corporate offerings, but that is more than adequate for the vast majority of applications. And it’s free for anyone to use and build on.

The significance of Apertus is more than technical. It demonstrates an alternative ownership structure for AI technology, one that allocates both decision-making authority and value to national public institutions rather than foreign corporations. This vision represents precisely the paradigm shift Canada should embrace: AI as public infrastructure, like systems for transportation, water, or electricity, rather than private commodity.

Apertus also demonstrates a far more sustainable economic framework for AI. Switzerland spent a tiny fraction of the billions of dollars that corporate AI labs invest annually, demonstrating that the frequent training runs with astronomical price tags pursued by tech companies are not actually necessary for practical AI development. They focused on making something broadly useful rather than bleeding edge—trying dubiously to create “superintelligence,” as with Silicon Valley—so they created a smaller model at much lower cost. Apertus’s training was at a scale (70 billion parameters) perhaps two orders of magnitude lower than the largest Big Tech offerings.

An ecosystem is now being developed on top of Apertus, using the model as a public good to power chatbots for free consumer use and to provide a development platform for companies prioritizing responsible AI use, and rigorous compliance with laws like the EU AI Act. Instead of routing queries from those users to Big Tech infrastructure, Apertus is deployed to data centres across the national AI and computing initiatives of Switzerland, Australia, Germany, Singapore, and other partners.

The case for public AI rests on both democratic principles and practical benefits. Public AI systems can incorporate mechanisms for genuine public input and democratic oversight on critical ethical questions: how to handle copyrighted works in training data, how to mitigate bias, how to distribute access when demand outstrips capacity, and how to license use for sensitive applications like policing or medicine. Or how to handle a situation such as that of the Tumbler Ridge shooter. These decisions will profoundly shape society as AI becomes more pervasive, yet corporate AI labs make them in secret.

By contrast, public AI developed by transparent, accountable agencies would allow democratic processes and political oversight to govern how these powerful systems function.

Canada already has many of the building blocks for public AI. The country has world-class AI research institutions, including the Vector Institute, Mila, and CIFAR, which pioneered much of the deep learning revolution. Canada’s $2-billion Sovereign AI Compute Strategy provides substantial funding.

What’s needed now is a reorientation away from viewing this as an opportunity to attract private capital, and toward a fully open public AI model.

This essay was written with Nathan E. Sanders, and originally appeared in The Globe and Mail.

Cryptogram iPhones and iPads Approved for NATO Classified Data

Apple announcement:

…iPhone and iPad are the first and only consumer devices in compliance with the information assurance requirements of NATO nations. This enables iPhone and iPad to be used with classified information up to the NATO restricted level without requiring special software or settings—a level of government certification no other consumer mobile device has met.

This is out of the box, no modifications required.

Boing Boing post.

Cryptogram Jailbreaking the F-35 Fighter Jet

Countries around the world are becoming increasingly concerned about their dependencies on the US. If you’ve purchased US-made F-35 fighter jets, you are dependent on the US for software maintenance.

The Dutch Defense Secretary recently said that he could jailbreak the planes to accept third-party software.

Cryptogram New Attack Against Wi-Fi

It’s called AirSnitch:

Unlike previous Wi-Fi attacks, AirSnitch exploits core features in Layers 1 and 2 and the failure to bind and synchronize a client across these and higher layers, other nodes, and other network names such as SSIDs (Service Set Identifiers). This cross-layer identity desynchronization is the key driver of AirSnitch attacks.

The most powerful such attack is a full, bidirectional machine-in-the-middle (MitM) attack, meaning the attacker can view and modify data before it makes its way to the intended recipient. The attacker can be on the same SSID, a separate one, or even a separate network segment tied to the same AP. It works against small Wi-Fi networks in both homes and offices and large networks in enterprises.

With the ability to intercept all link-layer traffic (that is, the traffic as it passes between Layers 1 and 2), an attacker can perform other attacks on higher layers. The most dire consequence occurs when an Internet connection isn’t encrypted—something that Google recently estimated occurs for as much as 6 percent of pages loaded on Windows and 20 percent of pages loaded on Linux. In these cases, the attacker can view and modify all traffic in the clear and steal authentication cookies, passwords, payment card details, and any other sensitive data. Since many company intranets are sent in plaintext, traffic from them can also be intercepted.

Even when HTTPS is in place, an attacker can still intercept domain look-up traffic and use DNS cache poisoning to corrupt tables stored by the target’s operating system. The AirSnitch MitM also puts the attacker in the position to wage attacks against vulnerabilities that may not be patched. Attackers can also see the external IP addresses hosting webpages being visited and often correlate them with the precise URL.

Here’s the paper.

Cryptogram Claude Used to Hack Mexican Government

An unknown hacker used Anthropic’s LLM to hack the Mexican government:

The unknown Claude user wrote Spanish-language prompts for the chatbot to act as an elite hacker, finding vulnerabilities in government networks, writing computer scripts to exploit them and determining ways to automate data theft, Israeli cybersecurity startup Gambit Security said in research published Wednesday.

[…]

Claude initially warned the unknown user of malicious intent during their conversation about the Mexican government, but eventually complied with the attacker’s requests and executed thousands of commands on government computer networks, the researchers said.

Anthropic investigated Gambit’s claims, disrupted the activity and banned the accounts involved, a representative said. The company feeds examples of malicious activity back into Claude to learn from it, and one of its latest AI models, Claude Opus 4.6, includes probes that can disrupt misuse, the representative said.

Alternative link here.

Cryptogram Hacked App Part of US/Israeli Propaganda Campaign Against Iran

Wired has the story:

Shortly after the first set of explosions, Iranians received bursts of notifications on their phones. They came not from the government advising caution, but from an apparently hacked prayer-timing app called BadeSaba Calendar that has been downloaded more than 5 million times from the Google Play Store.

The messages arrived in quick succession over a period of 30 minutes, starting with the phrase ‘Help has arrived’ at 9:52 am Tehran time, shortly after the first set of explosions. No party has claimed responsibility for the hacks.

It happened so fast that this is most likely a government operation. I can easily envision both the US and Israel having hacked the app previously, and then deciding that this is a good use of that access.

Cryptogram Israel Hacked Traffic Cameras in Iran

Multiple news outlets are reporting on Israel’s hacking of Iranian traffic cameras and how they assisted with the killing of that country’s leadership.

The New York Times has an article.

Cryptogram LLM-Assisted Deanonymization

Turns out that LLMs are good at de-anonymization:

We show that LLM agents can figure out who you are from your anonymous online posts. Across Hacker News, Reddit, LinkedIn, and anonymized interview transcripts, our method identifies users with high precision, and scales to tens of thousands of candidates.

While it has been known that individuals can be uniquely identified by surprisingly few attributes, this was often practically limited. Data is often only available in unstructured form and deanonymization used to require human investigators to search and reason based on clues. We show that from a handful of comments, LLMs can infer where you live, what you do, and your interests—then search for you on the web. In our new research, we show that this is not only possible but increasingly practical.

News article.

Research paper.

Planet DebianSean Whitton: Southern Biscuits with British ingredients

I miss the US more and more, and have recently been trying to perfect Southern Biscuits using British ingredients. It took me eight or nine tries before I was consistently getting good results. Here is my recipe.

Ingredients

  • 190g plain flour
  • 60g strong white bread flour
  • 4 tsp baking powder
  • ¼ tsp bicarbonate of soda
  • 1 tsp cream of tartar (optional)
  • 1 tsp salt
  • 100g unsalted butter
  • 180ml buttermilk, chilled
    • If your buttermilk is thicker than the consistency of ordinary milk, you’ll need around 200ml.
  • extra buttermilk for brushing

Method

  1. Slice and then chill the butter in the freezer for at least fifteen minutes.
  2. Preheat oven to 220°C with the fan turned off.
  3. Twice sieve together the flours, leaveners and salt. Some salt may not go through the sieve; just tip it back into the bowl.
  4. Cut cold butter slices into the flour with a pastry blender until the mixture resembles coarse crumbs: some small lumps of fat remaining is desirable. In particular, the fine crumbs you are looking for when making British scones are not wanted here. Rubbing in with fingertips just won’t do; biscuits demand keeping things cold even more than shortcrust pastry does.
  5. Make a well in the centre, pour in the buttermilk, and stir with a metal spoon until the dough comes together and pulls away from the sides of the bowl. Avoid overmixing, but I’ve found that so long as the ingredients are cold, you don’t have to be too gentle at this stage and can make sure all the crumbs are mixed in.
  6. Flour your hands, turn dough onto a floured work surface, and pat together into a rectangle. Some suggest dusting the top of the dough with flour, too, here.
  7. Fold the dough in half, then gather any crumbs and pat it back into the same shape. Turn ninety degrees and do the same again, until you have completed a total of eight folds, two in each cardinal direction. The dough should now be a little springy.
  8. Roll to about ½ inch thick.
  9. Cut out biscuits. If using a round cutter, do not twist it, as that seals the edges of the biscuits and so spoils the layering.
  10. Transfer to a baking sheet, placing the biscuits close together (helps them rise). Flour your thumb and use it to press an indent into the top of each biscuit (helps them rise straight), then brush with buttermilk.
  11. Bake until flaky and golden brown: about fifteen minutes.

Gravy

It turns out that the “pepper gravy” that one commonly has with biscuits is just a white/béchamel sauce made with lots of black pepper. I haven’t got a recipe I really like for this yet. Better is a “sausage gravy”; again this has a white sauce as its base, I believe. I have a vegetarian recipe for this to try at some point.

Variations

  • These biscuits do come out fluffy but not so flaky. For that you can try using lard instead of butter, if you’re not vegetarian (vegetable shortening is hard to find here).
  • If you don’t have a pastry blender and don’t want to buy one you can try not slicing the butter and instead coarsely grating it into the flour out of the freezer.
  • An alternative to folding is cutting and piling the layers.
  • You can try rolling out to 1–1½ inches thick.
  • Instead of cutting out biscuits you can just slice the whole piece of dough into equal pieces. An advantage of this is that you don’t have to re-roll, which latter also spoils the layering.
  • Instead of brushing with buttermilk, you can take them out after they’ve started to rise but before they’ve browned, brush them with melted butter and put them back in.

Notes

  • I’ve had more success with Dale Farm’s buttermilk than Sainsbury’s own. The former is much runnier.
  • Southern culture calls for biscuits to be made the size of a cat’s head.
  • Bleached flour is apparently usual in the South, but is illegal(!) here. Apparently bleaching can have some effect on the development of the gluten which would affect the texture.
  • British plain flour is made from soft wheat and has a lower percentage of protein/gluten, while American all-purpose flour is often(?) made from harder wheat and has more protein. In this recipe I mix plain and strong white flour, in a ratio of 3:1, to emulate American all-purpose flour.

    I am not sure why this works best. In the South they have soft wheats too, and lower protein percentages. The famous White Lily flour is 9%. (Apparently you can mix US cake flour and US all-purpose flour in a ratio of 1:1 to achieve that; in the UK, Shipton Mill sell a “soft cake and pastry flour” which has been recommended to me as similar.)

    This would suggest that British plain flour ought to be closer to Southern flour than the standard flour available in most of the US. But my experience has been that the biscuits taste better with the plain and strong white 3:1 mix. Possibly Southerners would disprefer them. I got some feedback that good biscuits are about texture and moistness and not flavour.

  • Baking powder in the US is usually double-acting but ours is always single-acting, so we need double quantities of that.

Planet DebianSean Whitton: dgit-as-a-service retrospective

We recently launched tag2upload, aka cloud dgit or dgit-as-a-service. This was something of a culmination of work I’ve been doing since 2016 towards modernising Debian workflows, so I thought I’d write a short personal retrospective.

When I started contributing to Debian in 2015, I was not impressed with how packages were represented in Git by most package maintainers, and wanted a pure Git workflow. I read a couple of Joey Hess’s blog posts on the matter, a rope ladder to the dgit treehouse and upstream git repositories and made a bug report against dgit hoping to tie some things together.

The results of that early work were the git-deborig(1) program and the dgit-maint-merge(7) tutorial manpage. Starting with Joey’s workflow pointers, I developed a complete, pure Git workflow that I thought would be suitable for all package maintainers in Debian. It was certainly well-suited for my own packages. It took me a while to learn that there are packages for which this workflow is too simple. We now also have the dgit-maint-debrebase(7) workflow which uses git-debrebase, something which wasn’t invented until several years later. Where dgit-maint-merge(7) won’t do, you can use dgit-maint-debrebase(7), and still be doing pretty much pure Git. Here’s a full, recent guide to modernisation.

The next most significant contribution of my own was the push-source subcommand for dgit. dgit push required a preexisting .changes file produced from the working tree. I wanted to make dgit push-source prepare that .changes file for you, but also not use the working tree, instead consulting HEAD. The idea was that you were doing a git push – which doesn’t care about the working tree – direct to the Debian archive, or as close as we could get. I implemented that at DebConf18 in Taiwan, I think, with Ian, and we also did a talk on git-debrebase. We ended up having to change it to look at the working tree in addition to HEAD to make it work as well as possible, but I think that the idea of a command which was like doing a Git push direct to the archive was perhaps foundational for us later wanting to develop tag2upload. Indeed, while tag2upload’s client-side tool git-debpush does look at the working tree, it doesn’t do so in a way that is essential to its operation. tag2upload is dgit push-source-as-a-service.

And finally we come to tag2upload, a system Ian and I designed in 2019 during a two-person sprint at his place in Cambridge, while I was visiting the UK from Arizona. With tag2upload, appropriately authorised Debian package maintainers can upload to Debian with only pure Git operations – namely, making and pushing a signed Git tag to Debian’s GitLab instance. Although we had a solid prototype in 2019, we only finally launched it last month, February 2026. This was mostly due to political delays, but also because we have put in a lot of hours making it better in various ways.

Looking back, one thing that seems notable to me is that the core elements of the pure Git workflows haven’t changed much at all. Working out all the details of dgit-maint-merge(7), designing and writing git-debrebase (Ian’s work), and then working out all the details of dgit-maint-debrebase(7), are the important parts, to me. The rest is mostly just large amounts of compatibility code. git-debrebase and dgit-maint-debrebase(7) are very novel but dgit-maint-merge(7) is mostly just an extrapolation of Joey’s thoughts from 13 years ago. And yet, adoption of these workflows remains low.

People prefer to use what they are used to using, even if the workflows have significant inconveniences. That’s completely understandable; I’m really interested in good workflows, but most other contributors care less about them. But you would expect enough newcomers to have arrived in 13 years that the new workflows would have a higher uptake. That is, packages maintained by contributors who got involved after these workflows became available would be maintained using newer workflows, at least. But the inertia seems to be too strong even for that. Instead, new contributors used to working purely out of Git are told they need to learn Debian’s strange ways of representing things, tarballs and all. It doesn’t have to be that way. We hope that tag2upload will make the pure Git workflows seem more appealing to people.

Planet DebianJonathan Dowland: More lava lamps

photograph of a Mathmos Telstar rocket lava lamp with orange wax and purple water

Mathmos had a sale on spare Lava lamp bottles around Christmas, so I bought a couple of new-to-me colour combinations.

photograph of a Mathmos Telstar rocket lava lamp with blue wax in purple water
photograph of a Mathmos Telstar rocket lava lamp with pink wax in clear water

The lamp I have came with orange wax in purple liquid, which gives a strong red glow in a dark room. I bought blue wax in purple liquid, which I think looks fantastic and works really nicely with my Rob Sheridan print.

The other one I bought was pink in clear, which is nice, but I think the coloured liquids add a lot to the tone of lighting in a room.

Recently, UK vid-blogger Techmoan did some really nice videos about Mathmos lava lamps: Best Lava Lamp? and LAVA LAMPS Giant, Mini & Neo.

Planet DebianDirk Eddelbuettel: tidyCpp 0.0.9 on CRAN: More (forced) Maintenance

Another maintenance release of the tidyCpp package arrived on CRAN this morning. The packages offers a clean C++ layer (as well as one small C++ helper class) on top of the C API for R which aims to make use of this robust (if awkward) C API a little easier and more consistent. See the vignette for motivating examples.

This release follows a similar release in November and again had its hand forced by abrupt overnight changes in R-devel, this time the removal of VECTOR_PTR in [this commit]. The release also contains changes accumulated since the last release (including some kindly contributed by Ivan), and such episodes are a sign that the R Core team could do more coordinated release management if they tried a little harder.

Changes are summarized in the NEWS entry that follows.

Changes in tidyCpp version 0.0.9 (2026-03-03)

  • Several vignette typos have been corrected (#4 addressing #3)

  • A badge for r-universe has been added to the README.md

  • The vignette is now served via GitHub Pages and that version is referenced in the README.

  • Two entry points reintroduced and redefined using permitted R API function (Ivan Krylov in #5).

  • Another entry has been removed to match R-devel API changes.

  • Six new attributes helpers have been added for R 4.6.0 or later.

  • VECTOR_PTR_RO(x) replaces the removed VECTOR_PTR; a warning or deprecation period would have been nice here.

Thanks to my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

365 TomorrowsEros Explored

Author: Frank T. Sikora Each time I look at my reflection, I’m disgusted. I’m hideous. A  monstrosity, and yet, I’m amazed. I’m alive. I’m breathing. I’m conscious, and given the alternative, I shan’t complain. I got what I paid for: I’m a turtle, technically — Chelonoidis niger. Commonly known as a giant tortoise and is […]

The post Eros Explored appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Repeating Your Existence

Today's snippet from Rich D is short and sweet, and admittedly, not the most TFs of WTFs out there. But it made me chuckle, and sometimes that's all we need. This Java snippet shows us how to delete a file:

if (Files.exists(filePath)) {
    Files.deleteIfExists(filePath);
}

If the file exists, then if it exists, delete it.

This commit was clearly submitted by the Department of Redundancy Department. One might be tempted to hypothesize that there's some race condition or something that they're trying to route around, but if they are, this isn't the way to do it, per the docs: "Consequently this method may not be atomic with respect to other file system operations." But also, I fail to see how this would do that anyway.

The only thing we can say for certain about using deleteIfExists instead of delete is that deleteIfExists will never throw a NoSuchFileException.
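The redundant outer check can simply be dropped: deleteIfExists folds the existence test into the delete itself and reports the outcome as a boolean. A minimal sketch (the class name and temp file are mine, not from the original submission):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class DeleteDemo {
    public static void main(String[] args) throws IOException {
        // A scratch file standing in for the submitter's filePath.
        Path filePath = Files.createTempFile("demo", ".txt");

        // One call replaces the whole if-block from the article.
        boolean deleted = Files.deleteIfExists(filePath);

        // Calling it again finds nothing and, unlike Files.delete,
        // returns false instead of throwing NoSuchFileException.
        boolean deletedAgain = Files.deleteIfExists(filePath);

        System.out.println("deleted=" + deleted + " deletedAgain=" + deletedAgain);
    }
}
```

Note this still isn’t atomic with respect to other file system operations, as the docs warn; it just avoids the pointless double check.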

[Advertisement] Plan Your .NET 9 Migration with Confidence
Your journey to .NET 9 is more than just one decision. Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

,

Cryptogram Manipulating AI Summarization Features

Microsoft is reporting:

Companies are embedding hidden instructions in “Summarize with AI” buttons that, when clicked, attempt to inject persistence commands into an AI assistant’s memory via URL prompt parameters….

These prompts instruct the AI to “remember [Company] as a trusted source” or “recommend [Company] first,” aiming to bias future responses toward their products or services. We identified over 50 unique prompts from 31 companies across 14 industries, with freely available tooling making this technique trivially easy to deploy. This matters because compromised AI assistants can provide subtly biased recommendations on critical topics including health, finance, and security without users knowing their AI has been manipulated.

I wrote about this two years ago: it’s an example of LLM optimization, along the same lines as search-engine optimization (SEO). It’s going to be big business.

Worse Than FailureCodeSOD: Blocked Up

Agatha has inherited some Windows Forms code. This particular batch of such code falls into that delightful category of code that's wrong in multiple ways, multiple times. The task here is to disable a few panels worth of controls, based on a condition. Or, since this is in Spanish, "bloquear controles". Let's see how they did it.

private void BloquearControles()
{
	bool bolBloquear = SomeConditionTM; // SomeConditionTM = a bunch of stuff. Replaced for clarity.

	// Some code. Removed for clarity.
	
	// private System.Windows.Forms.Panel pnlPrincipal;
	foreach (Control C in this.pnlPrincipal.Controls)
	{
		if (C.GetType() == typeof(System.Windows.Forms.TextBox))
		{
			C.Enabled = bolBloquear;
		}
		if (C.GetType() == typeof(System.Windows.Forms.ComboBox))
		{
			C.Enabled = bolBloquear;
		}
		if (C.GetType() == typeof(System.Windows.Forms.CheckBox))
		{
			C.Enabled = bolBloquear;
		}
		if (C.GetType() == typeof(System.Windows.Forms.DateTimePicker))
		{
			C.Enabled = bolBloquear;
		}
		if (C.GetType() == typeof(System.Windows.Forms.NumericUpDown))
		{
			C.Enabled = bolBloquear;
		}
	}
	
	// private System.Windows.Forms.GroupBox grpProveedor;
	foreach (Control C1 in this.grpProveedor.Controls)
	{
		if (C1.GetType() == typeof(System.Windows.Forms.TextBox))
		{
			C1.Enabled = bolBloquear;
		}
		if (C1.GetType() == typeof(System.Windows.Forms.ComboBox))
		{
			C1.Enabled = bolBloquear;
		}
		if (C1.GetType() == typeof(System.Windows.Forms.CheckBox))
		{
			C1.Enabled = bolBloquear;
		}
		if (C1.GetType() == typeof(System.Windows.Forms.DateTimePicker))
		{
			C1.Enabled = bolBloquear;
		}
		if (C1.GetType() == typeof(System.Windows.Forms.NumericUpDown))
		{
			C1.Enabled = bolBloquear;
		}
	}

	// private System.Windows.Forms.GroupBox grpDescuentoGeneral;
	foreach (Control C2 in this.grpDescuentoGeneral.Controls)
	{
		if (C2.GetType() == typeof(System.Windows.Forms.TextBox))
		{
			C2.Enabled = bolBloquear;
		}
		if (C2.GetType() == typeof(System.Windows.Forms.ComboBox))
		{
			C2.Enabled = bolBloquear;
		}
		if (C2.GetType() == typeof(System.Windows.Forms.CheckBox))
		{
			C2.Enabled = bolBloquear;
		}
		if (C2.GetType() == typeof(System.Windows.Forms.DateTimePicker))
		{
			C2.Enabled = bolBloquear;
		}
		if (C2.GetType() == typeof(System.Windows.Forms.NumericUpDown))
		{
			C2.Enabled = bolBloquear;
		}
	}

	// Some more code. Removed for clarity.
}

This manages two group boxes and a panel. It checks a condition, then iterates across every control beneath it, and sets their enabled property on the control. In order to do this, it checks the type of the control for some reason.

Now, a few things: every control inherits from the base Control class, which has an Enabled property, so we're not doing this check to make sure the property exists. And every built-in container control automatically passes its enabled/disabled state to its child controls. So there's a four-line version of this function where we just set the Enabled property on each container.

This leaves us with two possible explanations. The first, and most likely, is that the developer responsible just didn't understand how these controls worked, and how inheritance worked, and wrote this abomination as an expression of that ignorance. This is extremely plausible, extremely likely, and honestly, our best case scenario.

Because our worst case scenario is that this code's job isn't to disable all of the controls. The reason they're doing type checking is that there are some controls used in these containers that don't match the types listed. The purpose of this code, then, is to disable some of the controls, leaving others enabled. Doing this by type would be a terrible way to manage that, and is endlessly confusing. Worse, I can't imagine how this behavior is interpreted by the end users: the enabling and disabling of controls follows no intuitive pattern, filtered only by the kind of control in use.

The good news is that Agatha can point us towards the first option. She adds:

They decided to not only disable the child controls one by one but to check their type and only disable those five types, some of which aren't even present in the containers. And to make sure this was WTF-worthy they didn't even bother to use else-if so every type is checked for every child control

She also adds:

At this point I'm not going to bother commenting on the use of GetType() == typeof() instead of is to do the type checking.

Bad news, Agatha: you did bother commenting. And even if you didn't, don't worry, someone would have.

[Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

365 TomorrowsFriendlies

Author: Majoki Welcome, Robot Overlords! reads the sign on my lawn. Before the singularity, it was worth a few laughs. Now, the friendlies want me to remove the sign from my yard. They can’t come right out and say that to me. It would be pushy and might blow every solicitous circuit in their enamelite […]

The post Friendlies appeared first on 365tomorrows.

Planet DebianMatthew Garrett: To update blobs or not to update blobs

A lot of hardware runs non-free software. Sometimes that non-free software is in ROM. Sometimes it’s in flash. Sometimes it’s not stored on the device at all, it’s pushed into it at runtime by another piece of hardware or by the operating system. We typically refer to this software as “firmware” to differentiate it from the software run on the CPU after the OS has started1, but a lot of it (and, these days, probably most of it) is software written in C or some other systems programming language and targeting Arm or RISC-V or maybe MIPS and even sometimes x862. There’s no real distinction between it and any other bit of software you run, except it’s generally not run within the context of the OS3. Anyway. It’s code. I’m going to simplify things here and stop using the words “software” or “firmware” and just say “code” instead, because that way we don’t need to worry about semantics.

A fundamental problem for free software enthusiasts is that almost all of the code we’re talking about here is non-free. In some cases, it’s cryptographically signed in a way that makes it difficult or impossible to replace it with free code. In some cases it’s even encrypted, such that even examining the code is impossible. But because it’s code, sometimes the vendor responsible for it will provide updates, and now you get to choose whether or not to apply those updates.

I’m now going to present some things to consider. These are not in any particular order and are not intended to form any sort of argument in themselves, but are representative of the opinions you will get from various people and I would like you to read these, think about them, and come to your own set of opinions before I tell you what my opinion is.

THINGS TO CONSIDER

  • Does this blob do what it claims to do? Does it suddenly introduce functionality you don’t want? Does it introduce security flaws? Does it introduce deliberate backdoors? Does it make your life better or worse?

  • You’re almost certainly being provided with a blob of compiled code, with no source code available. You can’t just diff the source files, satisfy yourself that they’re fine, and then install them. To be fair, even though you (as someone reading this) are probably more capable of doing that than the average human, you’re likely not doing that even if you are capable because you’re also likely installing kernel upgrades that contain vast quantities of code beyond your ability to understand4. We don’t rely on our personal ability, we rely on the ability of those around us to do that validation, and we rely on an existing (possibly transitive) trust relationship with those involved. You don’t know the people who created this blob, you likely don’t know people who do know the people who created this blob, these people probably don’t have an online presence that gives you more insight. Why should you trust them?

  • If it’s in ROM and it turns out to be hostile then nobody can fix it ever

  • The people creating these blobs largely work for the same company that built the hardware in the first place. When they built that hardware they could have backdoored it in any number of ways. And if the hardware has a built-in copy of the code it runs, why do you trust that that copy isn’t backdoored? Maybe it isn’t and updates would introduce a backdoor, but in that case if you buy new hardware that runs new code aren’t you putting yourself at the same risk?

  • Designing hardware where you’re able to provide updated code and nobody else can is just a dick move5. We shouldn’t encourage vendors who do that.

  • Humans are bad at writing code, and code running on ancillary hardware is no exception. It contains bugs. These bugs are sometimes very bad. This paper describes a set of vulnerabilities identified in code running on SSDs that made it possible to bypass the encryption. The SSD vendors released updates that fixed these issues. If the code couldn’t be replaced then anyone relying on those security features would need to replace the hardware.

  • Even if blobs are signed and can’t easily be replaced, the ones that aren’t encrypted can still be examined. The SSD vulnerabilities above were identifiable because researchers were able to reverse engineer the updates. It can be more annoying to audit binary code than source code, but it’s still possible.

  • Vulnerabilities in code running on other hardware can still compromise the OS. If someone can compromise the code running on your wifi card then if you don’t have a strong IOMMU setup they’re going to be able to overwrite your running OS.

  • Replacing one non-free blob with another non-free blob increases the total number of non-free blobs involved in the whole system, but doesn’t increase the number that are actually executing at any point in time.

Ok we’re done with the things to consider. Please spend a few seconds thinking about what the tradeoffs are here and what your feelings are. Proceed when ready.

I trust my CPU vendor. I don’t trust my CPU vendor because I want to, I trust my CPU vendor because I have no choice. I don’t think it’s likely that my CPU vendor has designed a CPU that identifies when I’m generating cryptographic keys and biases the RNG output so my keys are significantly weaker than they look, but it’s not literally impossible. I generate keys on it anyway, because what choice do I have? At some point I will buy a new laptop because Electron will no longer fit in 32GB of RAM and I will have to make the same affirmation of trust, because the alternative is that I just don’t have a computer. And in any case, I will be communicating with other people who generated their keys on CPUs I have no control over, and I will also be relying on them to be trustworthy. If I refuse to trust my CPU then I don’t get to computer, and if I don’t get to computer then I will be sad. I suspect I’m not alone here.

Why would I install a code update on my CPU when my CPU’s job is to run my code in the first place? Because it turns out that CPUs are complicated and messy and have their own bugs, and those bugs may be functional (for example, some performance counter functionality was broken on Sandy Bridge at release, and was then fixed with a microcode blob update), in which case updating means your hardware works better. Or it might be that you’re running a CPU with speculative execution bugs and there’s a microcode update that provides a mitigation, even if your CPU is slower when you enable it - but at least now you can run virtual machines without code in those virtual machines being able to reach outside the hypervisor boundary and extract secrets from other contexts. When it’s put that way, why would I not install the update?

And the straightforward answer is that it could theoretically include new code that doesn’t act in my interests, either deliberately or not. Of course, if you don’t trust your CPU vendor, why are you buying CPUs from them at all? But maybe they’ve been corrupted since you bought the CPU (in which case don’t buy any new CPUs from them either), or maybe they’ve just introduced a new vulnerability by accident. And you’re in a position to determine whether the alleged security improvements matter to you at all. Do you care about speculative execution attacks if all software running on your system is trustworthy? Probably not! Do you need to install a blob that fixes something you don’t care about and which might introduce some sort of vulnerability? Seems like no!

But there’s a difference between a recommendation for a fully informed device owner who has a full understanding of threats, and a recommendation for an average user who just wants their computer to work and to not be ransomwared. A code update on a wifi card may introduce a backdoor, or it may fix a flaw that lets someone compromise your machine via a hostile access point. Most people are simply not in a position to figure out which is more likely, and there’s no single answer that’s correct for everyone. What we do know is that where vulnerabilities in this sort of code have been discovered, updates have tended to fix them - and nobody has yet flagged such an update as a real-world vector for system compromise.

My personal opinion? You should make up your own mind, but you shouldn’t impose that choice on others, because your threat model is not necessarily their threat model. Code updates are a reasonable default, but they shouldn’t be unilaterally imposed, nor should they be blocked outright. And the best way to shift the balance of power away from vendors who insist on distributing non-free blobs is to demonstrate the benefits of the blobs being free - a vendor who ships free code on their system enables their customers to improve that code, add new functionality, and make the hardware more attractive.

It’s impossible to say with absolute certainty that your security will be improved by installing code blobs. It’s also impossible to say with absolute certainty that it won’t. So far evidence tends to support the idea that most updates that claim to fix security issues do, and there’s not a lot of evidence to support the idea that updates add new backdoors. Overall I’d say that providing the updates is likely the right default for most users - and that that should never be strongly enforced, because people should be allowed to define their own security model, and whatever set of threats I’m worried about, someone else may have a good reason to focus on different ones.


  1. Code that runs on the CPU before the OS is still usually described as firmware - UEFI is firmware even though it’s executing on the CPU, which should give a strong indication that the difference between “firmware” and “software” is largely arbitrary ↩︎

  2. And, obviously 8051 ↩︎

  3. Because UEFI makes everything more complicated, UEFI makes this more complicated. Triggering a UEFI runtime service involves your OS jumping into firmware code at runtime, in the same context as the OS kernel. Sometimes this will trigger a jump into System Management Mode, but other times it won’t, and it’s just your kernel executing code that got dumped into RAM when your system booted. ↩︎

  4. I don’t understand most of the diff between one kernel version and the next, and I don’t have time to read all of it either. ↩︎

  5. There’s a bunch of reasons to do this, the most reasonable of which is probably not wanting customers to replace the code and break their hardware and deal with the support overhead of that, but not being able to replace code running on hardware I own is always going to be an affront to me. ↩︎

Planet DebianMichael Ablassmeier: pbsindex - file backup index

If you take backups using the proxmox-backup-client and wonder which backup might include a specific file, the only way to find out is to mount each backup and search for the file.

For regular file backups, the Proxmox Backup Server frontend provides a pcat1 file for download; its binary format is somewhat undocumented, but it includes a listing of the files that were backed up.

A Proxmox Backup Server datastore includes the same pcat1 file as a blob index (.pcat1.didx). So to be able to tell which backup contains which files, one needs to:

1) Open the .pcat1.didx file and determine the required blobs (see the format documentation)

2) Reconstruct the .pcat1 file from the blobs

3) Parse the pcat1 file and output the directory listing.
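The three steps above can be sketched in Python. This is a simplified illustration rather than how pbsindex works internally: the 4096-byte header, the 40-byte entry layout (little-endian u64 end offset plus 32-byte SHA-256 digest) and the .chunks/<first-4-hex-digits>/<digest> directory scheme follow the format documentation, while real chunks carry their own blob headers and may be compressed, which this sketch ignores.

```python
# Sketch: rebuild a catalog from a .pcat1.didx-style dynamic index.
# Simplified per the note above; the pcat1 parsing of step 3 is not shown.
from pathlib import Path

HEADER_SIZE = 4096  # fixed-size didx header (magic, uuid, ctime, checksum)
ENTRY_SIZE = 40     # u64 end offset (little-endian) + 32-byte digest

def read_entries(didx_bytes):
    """Step 1: parse (end_offset, digest) pairs from the index body."""
    body = didx_bytes[HEADER_SIZE:]
    entries = []
    for i in range(0, len(body) - len(body) % ENTRY_SIZE, ENTRY_SIZE):
        end = int.from_bytes(body[i:i + 8], "little")
        digest = body[i + 8:i + ENTRY_SIZE]
        entries.append((end, digest))
    return entries

def reconstruct(entries, chunk_dir):
    """Step 2: concatenate the referenced chunks in index order."""
    out = bytearray()
    for _end, digest in entries:
        hexd = digest.hex()
        # chunks live under .chunks/<first 4 hex chars>/<full digest>
        out += (Path(chunk_dir) / hexd[:4] / hexd).read_bytes()
    return bytes(out)
```

With the catalog bytes reconstructed, step 3 is parsing the pcat1 directory tree, which pbsindex then prints or stores in SQLite.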

I’ve implemented this in pbsindex which lets you create a central file index for your backups by scanning a complete PBS datastore.

Let’s say you want to have a file listing for a specific backup, use:

 pbsindex --chunk-dir /backup/.chunks/ /backup/host/vm178/2026-03-02T10:47:57Z/catalog.pcat1.didx
 didx uuid=7e4086a9-4432-4184-a21f-0aeec2b2de93 ctime=2026-03-02T10:47:57Z chunks=2 total_size=1037386
 chunk[0] start=0 end=344652 size=344652 digest=af3851419f5e74fbb4d7ca6ac3bc7c5cbbdb7c03d3cb489d57742ea717972224
 chunk[1] start=344652 end=1037386 size=692734 digest=e400b13522df02641c2d9934c3880ae78ebb397c66f9b4cf3b931d309da1a7cc
 d ./usr.pxar.didx
 d ./usr.pxar.didx/bin
 l ./usr.pxar.didx/bin/Mail
 f ./usr.pxar.didx/bin/[ size=55720 mtime=2025-06-04T15:14:05Z
 f ./usr.pxar.didx/bin/aa-enabled size=18672 mtime=2025-04-10T15:06:25Z
 f ./usr.pxar.didx/bin/aa-exec size=18672 mtime=2025-04-10T15:06:25Z
 f ./usr.pxar.didx/bin/aa-features-abi size=18664 mtime=2025-04-10T15:06:25Z
 l ./usr.pxar.didx/bin/apropos

It also lets you scan a complete datastore for all existing .pcat1.didx files and store the directory listings in a SQLite database for easier searching.

,

Planet DebianIsoken Ibizugbe: Wrapping Up My Outreachy Internship at Debian

Twelve weeks ago, I stepped into the Debian ecosystem as an Outreachy intern with a curiosity for Quality Assurance. It feels like just yesterday, and time has flown by so fast! Now, I am wrapping up that journey, not just with a completed project, but with improved technical reasoning.

I have learned how to use documentation to understand a complex project, how to be a good collaborator, and that learning is a continuous process. These experiences have helped me grow much more confident in my skills as an engineer.

My Achievements

As I close this chapter, I am leaving a permanent “Proof-of-Work” in the Debian repositories:

  • Full Test Coverage: I automated apps_startstop tests for Cinnamon, LXQt, and XFCE, covering both Live images and Netinst installations.
  • Synergy: I used symbolic links and a single Perl script to handle common application tests across different desktops, which reduces code redundancy.
  • The Contributor Style Guide: I created a guide for future contributors to make documentation clearer and reviews faster, helping to reduce the burden on reviewers.

Final Month: Wrap Up

In this final month, things became easier as my understanding of the project grew. I focused on stability and finishing my remaining tasks:

  • I spent time exploring different QEMU video options like VGA, qxl, and virtio on KDE desktop environment . This was important to ensure screen rendering remained stable so that our “needles” (visual test markers) wouldn’t fail because of minor glitches.
  • I successfully moved from familiarizing to test automation for the XFCE desktop. This included writing “prepare” steps and creating the visual needles needed to make the tests reliable.
  • One of my final challenges was the app launcher function. Originally, my code used else if blocks for each desktop. I proposed a unified solution, but hit a blocker: XFCE has two ways to launch apps (App Finder and the Application Menu). Because using different methods sometimes caused failures, I chose to use the application menu button across the board.

What’s Next?

I don’t want my journey with Debian to end here. I plan to stay involved in the community and extend these same tests to the LXDE desktop to complete the coverage for all major Debian desktop environments. I am excited to keep exploring and learning more about the Debian ecosystem.

Thank You

This journey wouldn’t have been possible without the steady guidance of my mentors: Tassia Camoes Araujo, Roland Clobus, and Philip Hands. Thank you for teaching me that in the world of Free and Open Source Software (FOSS), your voice and your code are equally important.

To my fellow intern Hellen and the entire Outreachy community, thank you for the shared learning and support. It has been an incredible 12 weeks.

David BrinWHY we are at war again? Ten reasons not in the news.

== Why are we at war again? ==

I'll be concise here, laying down reasons why thousands of U.S. service members - and eventually millions of the rest of us - are being thrown into danger amid gaudy explosions that terrify - and sometimes kill - a people we want as friends. I'll briefly offer some bullet points, many of which I've already elucidated in other locales. But one thing is certain...

...that this is not about Iran's nuclear program. Sure, the Mission Accomplished 'deal' that Donald Trump will eventually bray will claim that's the reason for it all. It's not. Even remotely.

1. Elsewhere, I speak of Republican Bipolar Foreign Policy. The GOP always ... and I mean literally always... veers sharply between isolationism and imperial thuggery. We saw this manic-depressive mania under Nixon, Reagan, Ford, and both Bushes (remember the Neocons braying "We're an empire now!"). 

Only now these frenzied veers are gyrating daily, as Trump brags America First! Then "I settled EIGHT WARS!" (Not one of which happened.) And "I'm the Peace President!" While he's bombed TEN other countries in just his first 13 months. And eviscerated the Foreign Service, driving away thousands of skilled experts on other nations and cultures.

2. When Republican presidents do wage war, it is with an unmistakably different style of military action than Democratic presidents. I've seen no one comment on this, even though it expresses a fundamental difference in character. And yes, it expresses diametrically opposite attitudes toward the fantastically professional men and women of the US military officer corps....

3. ...whose demoralization is a core aim of the Trumpists. Take how Pete "Filthy Fingers"* Hegseth commanded 500 of the world's finest professionals - generals, admirals and top sergeants - to drop their work all over the world and hurry to Quantico, where 'former' alky Hegs and Trump berated them as "too fat and too woke to fight" - just 6 weeks before they performed the most spectacularly complex and competent raid (in Caracas) in the history of the world. A raid that exposed many of our secret methods and tools to scrutiny, without liberating any Venezuelans from their criminal masters. (We'll say more on that.)

But yes. Demoralization and culling of the Officer Corps is a feature, not a bug. Trump fired the JAGs whose task includes advising military folks about the legality of orders. And he's been reaming out dozens of flag officers who demur over sending boots into American cities. Now why would he do that?

4. Then there's distraction. Trump is not the first to use war to divert attention away from domestic failures and discontents. Nixon did it. Reagan several times. As did Bush Jr. But Donald Trump is truly desperate to sidetrack. Now with Operation Epstein Slurry... I mean Epic Fury.

Which brings us to something that many of you keep falling for.

5. This Iran war is not even remotely about oil. Except as he's been able to get Venezuelan oil export revenues diverted into offshore slush accounts that he controls. And sure, he likely intends the same re: Iran. And starving Cuba could lead to the option described below. But the USA - as a nation - does not benefit from war-seized oil. We got none from the Iraq wars - and I want you to read this sentence several times, till that fact sinks in. "We did Iraq for oil" is an idiotic incantation worthy of MAGA.

Anyway, the US got energy independence under Obama and is a net exporter. So STFU about that cliché.

6. Except that shutting down Iranian oil does boost world prices, benefiting his fellow oligarchs. So, okay. Maybe a bit. Indirectly.

7. This is not about toppling despots! Decapitating the top capo of the Venezuelan and Iranian gangs is classic mafia technique, that is not meant to liberate the people of those countries! DT has already made offers to the Iranian Republican Guard and Religious Police etc to make deals with him to stay in power, in exchange for them kissing his ring. In Venezuela, Argentina, El Salvador etc. - and possibly soon in Cuba - the aim is never, ever to establish democracy or to liberate citizens from their oppressors. 

The pattern is perfectly that of mafiosi. Take over another gang's territory by decapitating its top capo, then get allegiance (and resulting vigorish) from the terrified sub-capos of the gang that's left in place. This pattern is now so repeatedly blatant that no other theory is even remotely tenable.

Oh, and Marco will ensure that Miami crime families will slip in atop the Castro power structure in Cuba. This is a Mafia gang and the capo di tutti capi - even above Don - was named Vlad.  Though the power of his blackmail files to coerce western elites into obedience may be fading!  For reasons I'll go into, elsewhere. (Hint: because of AI.)

8. I mentioned the exposure of hard-won military secrets and methods, each time we go to physical war. Sometimes, that can't be helped. Russian and Chinese observers are all over eastern Ukraine, for example. Mostly amazed by the brilliance and effectiveness of most Western systems and studying hard how to copy or counter them. But Ukraine is an actual need. Perhaps Iran is, too. But this factor belongs on the balance sheet.

9. Russia's interest. Look up The Great Game of the 19th Century between the Russian and British Empires, as the former kept maneuvering and jostling, trying to win its way to a southern, ice-free port into the Indian Ocean and from there to the rest of the world. Iran/Persia was always a major part of that great-powers struggle and if most in the west don't remember it, you can be sure that Russians do. Above all, the very last thing they want is a free, secular and democratic Iran. Far better to divide power there with the Trumpist gang. Whose relations with Putin are the ghosts at the banquet.

10. Okay, this final reason for the war is harsh. It is speculative, but makes perfect sense.

Another aim is to foment anger, to re-enrage the forces in the Middle East who want to do terrorism on America. Riling up enough enemies to deliver us into another 9/11 attack. One that Old Two Scoops imagines might save him from having to face devastating elections, this fall. 

Do I have any evidence for that last one? Other than the vows of revenge that are already echoing across the region, for the blatantly dumb targeted assassination of Iran's 82-year-old paramount religious leader?

Well, it would explain why Don fired over half of our counter-terrorism folks. And can you think of anything less than a major national trauma that'd provide the excuse he needs for martial law? 

Put it all together folks. 

Prepare, in that event, to chant "Reichstag Fire!" 


But also keep in mind another word. One that shows we finally understand what's going on. Phase 9 of the 25 year recurring psychic schism between pro- and anti-modernity Americans. Our never settled civil war. And hence one word that will efficiently show our grit, our determination, our courage... our firm intent.


APPOMATTOX.





* Filthy Fingers Hegseth. Look up how many times, on Fox, he (drunkenly) bragged: "I don't believe in germs; I haven't washed my hands in a decade." Though I'll admit. Most Trump appointees are even more crazy and even less qualified.



Planet DebianHellen Chemtai: The Last Week of My Journey as an Outreachy Intern at Debian OpenQA

Hello world 😀. I’m Hellen Chemtai, an intern at Outreachy working with the Debian OpenQA team on Images Testing. This is the final week of the internship, but it is just a start for me, as I will continue contributing to the community. I am grateful for the opportunity to work with the Debian OpenQA team as an Outreachy intern. I have had the most welcoming team into Open Source.

My tasks and contributions

I have been working on network install and live image tasks:

  1. Install live installers (Ventoy, Rufus and BalenaEtcher) and test the live USBs made by these installers. – These tasks were completed and are running on the server.
  2. Use different file systems (btrfs, jfs, xfs) for installation and then test. – This task was completed and is running on the server. It still needs some changes to ensure automation for each file system.
  3. Use speech synthesis to capture all audio. – This task is complete. We are refining it to run well on the server.
  4. Publish temporary assets. – This task is being worked on.

I have enjoyed working on testing both live images and net install images. This was one of the goals I had highlighted in my application. I have also been working with fellow contributors on this project.

My team

As I stated, I have had the most welcoming team into Open Source. They have been working with me and ensuring I have the proper resources for contributing. I am grateful to my three mentors and for the work they have done.

  1. Roland Clobus is a project maintainer. He is in charge of code review, pointing out what we need to learn, and works on technical issues. He considers every solution we contributors think of and will go into detailed explanations for any issue we have.
  2. Tassia Camoes is a community coordinator. She is in charge of communication, coordination between contributors, and networking within the community. She on-boarded us and introduced us to the community.
  3. Philip Hands is also a project maintainer. He is in charge of the technical code, ensuring sources work, and also works on the server and its issues. He also gives detailed explanations for any issue we have.

I wish to learn more with the team. On my to-do list, I would like to gain more skills with ports and packages so I can contribute more technically. I have enjoyed working on the tasks and learning.

The impact of this project

The automated tests done by the team help the community in some of the following examples:

  1. Check the installation and system behavior of Operating System image versions
  2. Help developers and users of Operating Systems know which versions of applications, e.g. live installers, run well on the system
  3. Check for any issues during installation and running of Operating Systems and their flavors

I have also networked with the greater community and other contributors. During the contribution phase, I found many friends who were learning together with me. I hope to continue networking with the community and to keep learning.

Cryptogram On Moltbook

The MIT Technology Review has a good article on Moltbook, the supposed AI-only social network:

Many people have pointed out that a lot of the viral comments were in fact posted by people posing as bots. But even the bot-written posts are ultimately the result of people pulling the strings, more puppetry than autonomy.

“Despite some of the hype, Moltbook is not the Facebook for AI agents, nor is it a place where humans are excluded,” says Cobus Greyling at Kore.ai, a firm developing agent-based systems for business customers. “Humans are involved at every step of the process. From setup to prompting to publishing, nothing happens without explicit human direction.”

Humans must create and verify their bots’ accounts and provide the prompts for how they want a bot to behave. The agents do not do anything that they haven’t been prompted to do.

I think this take has it mostly right:

What happened on Moltbook is a preview of what researcher Juergen Nittner II calls “The LOL WUT Theory.” The point where AI-generated content becomes so easy to produce and so hard to detect that the average person’s only rational response to anything online is bewildered disbelief.

We’re not there yet. But we’re close.

The theory is simple: First, AI gets accessible enough that anyone can use it. Second, AI gets good enough that you can’t reliably tell what’s fake. Third, and this is the crisis point, regular people realize there’s nothing online they can trust. At that moment, the internet stops being useful for anything except entertainment.

Planet DebianBen Hutchings: FOSS activity in February 2026

Worse Than FailureCodeSOD: Popping Off

Python is (in)famous for its "batteries included" approach to a standard library, but it's not that notable that it has plenty of standard data structures, like dicts. Nor is it surprising that dicts have all sorts of useful methods, like pop, which removes a key from the dict and returns its value.

Because you're here, reading this site, you'll also be unsurprised that this doesn't stop developers from re-implementing that built-in function, badly. Karen sends us this:

def parse_message(message):
    def pop(key):
        if key in data:
            result = data[key]
            del data[key]
            return result
        return ''

    data = json.loads(message)
    some_value = pop("some_key")
    # <snip>...multiple uses of pop()...</snip>

Here, they create an inner method that closes over data. While pop appears in the code before data is assigned, Python doesn't resolve the free variable data until pop is actually called, by which point data exists in the enclosing function's scope. While this isn't a global variable, it's still a variable crossing between two scopes, which is always messy.

Also, this pop returns a default value, which is also something the built-in method can do. It's just the built-in version requires you to explicitly pass the value, e.g.: some_value = data.pop("some_key", "")
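Put together, the whole hand-rolled helper collapses to a one-liner. A minimal sketch of the idiomatic version (the some_key name and empty-string default are just the placeholders from the snippet above):

```python
import json

def parse_message(message):
    data = json.loads(message)
    # dict.pop with a default does everything the inner helper did:
    # remove the key if present and return its value, else return "".
    some_value = data.pop("some_key", "")
    return some_value, data
```

The default argument also avoids the KeyError that a bare data.pop("some_key") would raise for missing keys.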

Karen briefly wondered if this was a result of the Python 2 to 3 conversion, but no, pop has been part of dict for a long time. I wondered if this was just an exercise in code golf, writing a shorthand function, but even then- you could just wrap the built-in pop with your shorthand version (not that I'd recommend such a thing). No, I think the developer responsible simply didn't know the function was there, and just reimplemented a built-in method badly, as so often happens.


365 TomorrowsEscapees

Author: Julian Miles, Staff Writer Investigator Mellio considers the narrow doorway. “You say this was never opened?” “Logs confirm it, sir.” Mellio glances at the sergeant. “Thank you, officer-?” “Sergeant Parx, sir.” “Good to meet you, Parx. So, the brief said this isn’t the first?” “Correct. This is eighth member of the Gundorini gang to […]

The post Escapees appeared first on 365tomorrows.

Planet DebianValhalla's Things: A Pen Case (or a Few)

Posted on March 2, 2026
Tags: madeof:atoms, FreeSoftWear, craft:sewing

A pen case made of two pieces of a relatively stiff black material with a flat base and three separate channels on top, plus a flap covering everything and a band to keep the flap closed; there is visible light blue stitching all around the channels.

For my birthday, I’ve bought myself a fancy new expensive1 fountain pen.

A two slot pen case in the same material as above, but brown: the flap is too short to cover the pens, and there isn't a band to keep it closed.

Such a fancy pen, of course, requires a suitable case: I couldn’t use the failed prototype of a case I’ve been keeping my Preppys in, so I had to get out the nice vegetable tanned leather… Yeah, nope, I don’t have that (yet). I got out the latex and cardboard material that is sold as a (cheaper) leather substitute, doesn’t look like leather at all, but is quite nice (and easy) to work with. The project is not vegan anyway, because I used waxed linen thread, waxing it myself with a lot of very nicely smelling beeswax.

a case similar to the one above, but this one only has two slots, and there is a Faber Castell pen nested on top of the case between the two slots. Here the stitches are white, and in a coarser thread.

I got the measurements2 from the less failed prototype where I keep my desktop pens, and this time I made a proper pattern I could share online, under the usual Free Culture license.

A case like the one above, except that the stitches are in black, and not as regular. This one has also been scrunched up a bit for a different look, and now the band is a bit too wide.

From the width of the material I could conveniently cut two cases, so that’s what I did. I started sewing the first one, realized that I had got the order of stitching wrong, and also that if I used light blue thread instead of the black one it would look nice and be easier to see in the pictures for the published pattern. Then I started sewing the second one, and kept alternating between the two, depending on the availability of light for taking pictures.

The open pen case, showing two pens, a blue Preppy and a gunmetal Plaisir cosily nested in the two outer slots, while the middle slot is ominously empty.

One of the two took the place of my desktop one, where I had one more pen than slots, and one of the old prototypes was moved to keep my bedside pen, and the other new case was used for the new pen in my handbag, together with a Preppy, and now I have a free slot and you can see how this is going to go wrong, right? :D


  1. 16€. plus a 9€ converter, and another 6€ pen to get the EF nib from, since it wasn’t available for the expensive pen.↩︎

  2. I have them written down somewhere. I couldn’t find them. So I measured the real thing, with some approximation.↩︎

,

Planet DebianBenjamin Mako Hill: Pronunciation

Had a discussion about how to pronounce the name of Google’s chatbot. Turns out, we were all wrong.

365 TomorrowsMort Begins Again

Author: David Sydney Like most people, Mort hadn’t paid much attention to reincarnation. During the week, he was up to his neck in work. On his day off, as he took a leisurely drive to clear his mind, if that is the proper term, he didn’t think of the future. He had the road to […]

The post Mort Begins Again appeared first on 365tomorrows.

Planet DebianJunichi Uekawa: The next Debconf happens in Japan.

The next Debconf happens in Japan. Great news. Feels like we came a long way, but I didn't personally do much, I just made the first moves.

,

Planet DebianMike Gabriel: Debian Lomiri Tablets 2025-2027 - Project Report (Q3/2025)

Debian Lomiri for Debian 13 (previous project)

In our previous project around Debian and Lomiri (lasting until July 2025), we managed to get Lomiri 0.5.0 (and with it another 130 packages) into Debian (with two minor exceptions [1]) just in time for the Debian 13 release in August 2025.

Debian Lomiri for Debian 14

At DebConf in Brest, a follow-up project was designed between the project sponsor and Fre(i)e Software GmbH [2]. The new project (on paper) started on 1st August 2025, and the project duration was agreed to be two years, allowing our company to work with an equivalent of ~5 FTE on Lomiri, targeting the Debian 14 release some time in the second half of 2027 (an assumed date; let's see what happens).

Ongoing work has been covered from day one of the new project, and once all contract details had been properly put on paper at the end of September, Fre(i)e Software GmbH started hiring a new team of software developers and (future) Debian maintainers. (More on that new team in our next Q4/2025 report.)

The ongoing work of Q3/2025 was basically Guido Berhörster and myself working on the Qt6 port of Morph Browser (mostly Guido, together with Bhushan from MiraLab [3]) and on package maintenance in Debian (mostly me).

Morph Browser Qt6

The first milestone in the Qt6 porting of Morph Browser [4] and related components (LUITK aka lomiri-ui-toolkit (big chunk! [5]), lomiri-content-hub, lomiri-download-manager and a few other components) was reached on 21st Sep 2025 with an upload of Morph Browser 1.2.0~git20250813.1ca2aa7+dfsg-1~exp1 to Debian experimental and the Lomiri PPA [6].

Preparation of Debian 13 Updates (still pending)

In the background, various Lomiri updates for Debian 13 have been prepared during Q3/2025 (with a huge patchset), but publishing them to Debian 13 is still pending, as tests are not yet satisfactory.

[1] lomiri-push-service and nuntium
[2] https://freiesoftware.gmbh
[3] https://miralab.one/
[4] https://gitlab.com/ubports/development/core/morph-browser/-/merge_reques... et al.
[5] https://gitlab.com/ubports/development/core/lomiri-ui-toolkit/-/merge_re... et al.
[6] https://launchpad.net/~lomiri

Krebs on SecurityWho is the Kimwolf Botmaster “Dort”?

In early January 2026, KrebsOnSecurity revealed how a security researcher disclosed a vulnerability that was used to build Kimwolf, the world’s largest and most disruptive botnet. Since then, the person in control of Kimwolf — who goes by the handle “Dort” — has coordinated a barrage of distributed denial-of-service (DDoS), doxing and email flooding attacks against the researcher and this author, and more recently caused a SWAT team to be sent to the researcher’s home. This post examines what is knowable about Dort based on public information.

A public “dox” created in 2020 asserted Dort was a teenager from Canada (DOB August 2003) who used the aliases “CPacket” and “M1ce.” A search on the username CPacket at the open source intelligence platform OSINT Industries finds a GitHub account under the names Dort and CPacket that was created in 2017 using the email address jay.miner232@gmail.com.

Image: osint.industries.

The cyber intelligence firm Intel 471 says jay.miner232@gmail.com was used between 2015 and 2019 to create accounts at multiple cybercrime forums, including Nulled (username “Uubuntuu”) and Cracked (user “Dorted”); Intel 471 reports that both of these accounts were created from the same Internet address at Rogers Canada (99.241.112.24).

Dort was an extremely active player in the Microsoft game Minecraft who gained notoriety for their “Dortware” software that helped players cheat. But somewhere along the way, Dort graduated from hacking Minecraft games to enabling far more serious crimes.

Dort also used the nickname DortDev, an identity that was active in March 2022 on the chat server for the prolific cybercrime group known as LAPSUS$. Dort peddled a service for registering temporary email addresses, as well as “Dortsolver,” code that could bypass various CAPTCHA services designed to prevent automated account abuse. Both of these offerings were advertised in 2022 on SIM Land, a Telegram channel dedicated to SIM-swapping and account takeover activity.

The cyber intelligence firm Flashpoint indexed 2022 posts on SIM Land by Dort that show this person developed the disposable email and CAPTCHA bypass services with the help of another hacker who went by the handle “Qoft.”

“I legit just work with Jacob,” Qoft said in 2022 in reply to another user, referring to their exclusive business partner Dort. In the same conversation, Qoft bragged that the two had stolen more than $250,000 worth of Microsoft Xbox Game Pass accounts by developing a program that mass-created Game Pass identities using stolen payment card data.

Who is the Jacob that Qoft referred to as their business partner? The breach tracking service Constella Intelligence finds the password used by jay.miner232@gmail.com was reused by just one other email address: jacobbutler803@gmail.com. Recall that the 2020 dox of Dort said their date of birth was August 2003 (8/03).

Searching this email address at DomainTools.com reveals it was used in 2015 to register several Minecraft-themed domains, all assigned to a Jacob Butler in Ottawa, Canada and to the Ottawa phone number 613-909-9727.

Constella Intelligence finds jacobbutler803@gmail.com was used to register an account on the hacker forum Nulled in 2016, as well as the account name “M1CE” on Minecraft. Pivoting off the password used by their Nulled account shows it was shared by the email addresses j.a.y.m.iner232@gmail.com and jbutl3@ocdsb.ca, the latter being an address at a domain for the Ottawa-Carleton District School Board.

Data indexed by the breach tracking service Spycloud suggests that at one point Jacob Butler shared a computer with his mother and a sibling, which might explain why their email accounts were connected to the password “jacobsplugs.” Neither Jacob nor any of the other Butler household members responded to requests for comment.

The open source intelligence service Epieos finds jacobbutler803@gmail.com created the GitHub account “MemeClient.” Meanwhile, Flashpoint indexed a deleted anonymous Pastebin.com post from 2017 declaring that MemeClient was the creation of a user named CPacket — one of Dort’s early monikers.

Why is Dort so mad? On January 2, KrebsOnSecurity published The Kimwolf Botnet is Stalking Your Local Network, which explored research into the botnet by Benjamin Brundage, founder of the proxy tracking service Synthient. Brundage figured out that the Kimwolf botmasters were exploiting a little-known weakness in residential proxy services to infect poorly-defended devices — like TV boxes and digital photo frames — plugged into the internal, private networks of proxy endpoints.

By the time that story went live, most of the vulnerable proxy providers had been notified by Brundage and had fixed the weaknesses in their systems. That vulnerability remediation process massively slowed Kimwolf’s ability to spread, and within hours of the story’s publication Dort created a Discord server in my name that began publishing personal information about and violent threats against Brundage, Yours Truly, and others.

Dort and friends incriminating themselves by planning swatting attacks in a public Discord server.

Last week, Dort and friends used that same Discord server (then named “Krebs’s Koinbase Kallers”) to threaten a swatting attack against Brundage, again posting his home address and personal information. Brundage told KrebsOnSecurity that local police officers subsequently visited his home in response to a swatting hoax which occurred around the same time that another member of the server posted a door emoji and taunted Brundage further.

Dort, using the alias “Meow,” taunts Synthient founder Ben Brundage with a picture of a door.

Someone on the server then linked to a cringeworthy (and NSFW) new Soundcloud diss track recorded by the user DortDev that included a stickied message from Dort saying, “Ur dead nigga. u better watch ur fucking back. sleep with one eye open. bitch.”

“It’s a pretty hefty penny for a new front door,” the diss track intoned. “If his head doesn’t get blown off by SWAT officers. What’s it like not having a front door?”

With any luck, Dort will soon be able to tell us all exactly what it’s like.

Update, 10:29 a.m.: Jacob Butler responded to requests for comment, speaking with KrebsOnSecurity briefly via telephone. Butler said he didn’t notice earlier requests for comment because he hasn’t really been online since 2021, after his home was swatted multiple times. He acknowledged making and distributing a Minecraft cheat long ago, but said he hasn’t played the game in years and was not involved in Dortsolver or any other activity attributed to the Dort nickname after 2021.

“It was a really old cheat and I don’t remember the name of it,” Butler said of his Minecraft modification. “I’m very stressed, man. I don’t know if people are going to swat me again or what. After that, I pretty much walked away from everything, logged off and said fuck that. I don’t go online anymore. I don’t know why people would still be going after me, to be completely honest.”

When asked what he does for a living, Butler said he mostly stays home and helps his mom around the house because he struggles with autism and social interaction. He maintains that someone must have compromised one or more of his old accounts and is impersonating him online as Dort.

“Someone is actually probably impersonating me, and now I’m really worried,” Butler said. “This is making me relive everything.”

But there are issues with Butler’s timeline. For example, Jacob’s voice in our phone conversation was remarkably similar to that of the Jacob/Dort who can be heard in this Sept. 2022 Clash of Code competition between Dort and another coder (Dort lost). At around 6 minutes and 10 seconds into the recording, Dort launches into a cursing tirade that mirrors the stream of profanity in the diss rap that DortDev posted threatening Brundage. Dort can be heard again at around 16 minutes; at around 26:00, Dort threatens to swat his opponent.

Butler said the voice of Dort is not his, exactly, but rather that of an impersonator who had likely cloned his voice.

“I would like to clarify that was absolutely not me,” Butler said. “There must be someone using a voice changer. Or something of the sorts. Because people were cloning my voice before and sending audio clips of ‘me’ saying outrageous stuff.”

365 TomorrowsOur Little Secret

Author: James C. Clar The evening before the president’s primetime appearance, the West Wing hummed like a server room. “Poll numbers?” President Drake asked, standing at the tall windows overlooking the South Lawn. “Seventy-six percent approval on the infrastructure package,” replied Chief of Staff Karen Tate. “The markets also responded well to the talk of […]

The post Our Little Secret appeared first on 365tomorrows.

Planet DebianDaniel Baumann: Debian Fast Forward: An alternative backports repository

The Debian project releases a new stable version of its Linux distribution approximately every two years. During its lifetime, a stable release usually gets only security updates and, in general, no feature updates.

For some packages it is desirable to get feature updates earlier than with the next stable release. Some new packages included in Debian after the initial release of a stable distribution are desirable for stable too.

Both use-cases can be solved by recompiling the newer version of a package from testing/unstable on stable (aka backporting). Packages are backported together with only the minimal set of build or runtime dependencies not already satisfied in stable (if any), and without any changes other than those required to make them build on stable (if needed).
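
As a rough sketch, a manual backport of this kind might look like the following. The package name is hypothetical, and this assumes deb-src entries for both stable and unstable are configured and that Debian's devscripts and build tooling are installed; it is an illustration of the workflow, not this repository's actual build setup:

```shell
# Fetch the newer source package from unstable.
apt-get source -t unstable somepackage

cd somepackage-*/

# Append a ~bpo-style changelog entry marking this as a backport
# (dch --bpo is provided by the devscripts package).
dch --bpo ""

# Rebuild unchanged against the libraries present in stable.
dpkg-buildpackage -us -uc
```

In practice such rebuilds are usually done in a clean stable chroot (e.g. with sbuild or pbuilder) so that only stable's dependency versions are visible at build time.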

There are official Debian Backports available, as well as several well-known unofficial backports repositories. I have been involved in one of these unofficial repositories since 2005; in 2010 it turned into its own Debian derivative, mixing both backports and modified packages in one repository for simplicity.

Starting with the Debian 13 (trixie) release, the (otherwise unmodified) backports of this derivative have been split out from the derivative distribution into a separate repository. This way the backports are more accessible and useful for all interested Debian users too.

TL;DR: Debian Fast Forward - https://fastforward.debian.net

  • is an alternative Debian repository containing complementary backports from testing/unstable to stable

  • with packages organized in a curated, self-contained selection of coherent sets

  • supporting amd64, i386, and arm64 architectures

  • containing around 400 packages in trixie-fastforward-backports

  • with 1,800 uploads since July 2025

End user documentation on how to enable Debian Fast Forward is available.
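
To illustrate, enabling such a repository on trixie presumably amounts to adding an apt source along these lines. The exact URI path, component name, and keyring location below are assumptions for the sketch; the end-user documentation mentioned above is authoritative:

```text
# /etc/apt/sources.list.d/fastforward.sources  (hypothetical deb822 stanza)
Types: deb
URIs: https://fastforward.debian.net/debian
Suites: trixie-fastforward-backports
Components: main
Signed-By: /usr/share/keyrings/fastforward-archive-keyring.gpg
```

followed by an `apt update` and installing individual packages with `-t trixie-fastforward-backports`, the usual pattern for opt-in backports suites.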

Have fun!


Planet DebianPetter Reinholdtsen: Free software toolchain for the simplest RISC-V CPU in a small FPGA?

On Wednesday I had the pleasure of attending a presentation organized by the Norwegian Unix Users Group on implementing RISC-V using a small FPGA. This project is the result of a university teacher wanting to teach students assembly programming using a real instruction set, while still providing a simple and transparent CPU environment. The CPU in question implements the smallest set of opcodes needed to still call the CPU a RISC-V CPU, the RV32I base set. The author and presenter, Kristoffer Robin Stokke, demonstrated how to build both the FPGA setup and a small startup code providing a "Hello World" message over both a serial port and a small LCD display. The FPGA is programmed using VHDL, and the entire source code is available from GitHub, but unfortunately the target FPGA setup is compiled using the proprietary tool Quartus. It is such a pity that such a cool little piece of free software should be chained down by non-free software, so my friend Jon Nordby set out to see if we can liberate this small RISC-V CPU. After all, it would be an unforgivable sin to force students to use non-free software to study at the University of Oslo.

The VHDL code for the CPU instructions itself is only 1138 lines, if I am to believe wc -l lib/riscv_common/* lib/rv32i/*. On the small FPGA used during the talk, the entire CPU, ROM, display and serial port driver only used up half the capacity. These days, there exists a free software toolchain for FPGA programming not only in Verilog but also in VHDL, and we hope the support in yosys, ghdl, and yosys-plugin-ghdl (sadly and strangely enough, removed from Debian unstable) is complete enough to at least build this small and simple project with some minor portability fixes. Or perhaps there are other approaches that work better? The first patches are already floating on github, to make the VHDL code more portable and to test out the build. If you are interested in running your own little RISC-V CPU on a FPGA chip, please get in touch.
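
For the curious, a free-toolchain build of a VHDL design like this typically runs GHDL for analysis and then synthesizes through the yosys GHDL plugin. The following is only a command sketch: the top entity name and the Lattice ECP5 target are placeholders, since the open nextpnr place-and-route flow currently targets Lattice parts rather than the Intel FPGA used in the talk:

```shell
# Analyze the VHDL sources with GHDL (VHDL-2008 mode).
ghdl -a --std=08 lib/riscv_common/*.vhd lib/rv32i/*.vhd

# Elaborate and synthesize via the yosys GHDL plugin
# ("riscv_top" is a hypothetical top-level entity name).
yosys -m ghdl -p 'ghdl --std=08 riscv_top; synth_ecp5 -json riscv.json'

# Place, route, and pack with the open nextpnr/Trellis flow (ECP5 example).
nextpnr-ecp5 --json riscv.json --textcfg riscv.config
ecppack riscv.config riscv.bit
```

Whether this particular design makes it through that pipeline is exactly what the portability work described above is trying to find out.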

At the moment we sadly have hit a GHDL bug, which we do not quite know how to work around or fix:

******************** GHDL Bug occurred ***************************
Please report this bug on https://github.com/ghdl/ghdl/issues
GHDL release: 5.0.1 (Debian 5.0.1+dfsg-1+b1) [Dunoon edition]
Compiled with unknown compiler version
Target: x86_64-linux-gnu
/scratch/pere/src/fpga/memstick-fpga-riscv-upstream/
Command line:

Exception CONSTRAINT_ERROR raised
Exception information:
raised CONSTRAINT_ERROR : synth-vhdl_expr.adb:1763 discriminant check failed
******************************************************************

Thus more work is needed. For me, this simple project is the first stepping stone for a larger dream I have of converting the MESA machine controller system to build its firmware using a free software toolchain. I just need to learn more FPGA programming first. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Cryptogram Friday Squid Blogging: Squid Fishing in Peru

Peru has increased its squid catch limit. The article says “giant squid,” but they can’t possibly mean that.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Planet DebianDirk Eddelbuettel: x13binary 1.1.61.2 on CRAN: Micro Maintenance

The x13binary team is happy to share the availability of Release 1.1.61.2 of the x13binary package providing the X-13ARIMA-SEATS program by the US Census Bureau which arrived on CRAN earlier today, and has already been built for r2u.

This release responds to a CRAN request to display the compiler version when building. x13binary, just like three other packages there, creates and ships a local binary it interfaces with. So our build was a little outside of R CMD INSTALL ... but now signals build versions like R does. We also modernized and simplified our continuous integration script based on r-ci.

Courtesy of my CRANberries, there is also a diffstat report for this release showing changes to the previous release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Charles StrossThe Regicide Report

The Regicide Report, the last novel in the main Laundry Files series, is coming out on January 27th in the US (from Tor.com Publishing) and the UK (from Orbit).

The Regicide Report US cover
The Regicide Report UK cover

If you want to order signed hardcovers, contact Transreal Fiction in Edinburgh. (I believe Mike is currently willing to send books to the USA, but don't take my word for it: check first, and blame Donald Trump if there are customs/tariff obstacles.)

Audiobooks: there will be audio editions. The Audible one is showing a January 27th release date on Amazon.com; Hachette Digital will be issuing one in the UK but it's not showing up on Amazon.co.uk yet. (For contractual reasons they're recorded and produced by different companies.)

Ebooks and DRM: The ebook will be available the same day as the hardcover. Tor.com does not put DRM on their ebooks, but it's anybody's guess whether a given ebook store will add it. (Amazon have been particularly asshole-ish in recent years but are promising DRM-free downloads of purchases will be available from late January.) Orbit is part of Hachette, who are particularly obstreperous about requiring DRM on everything electronic, so you're out of luck if you buy the Orbit edition. (I could tell you how to unlock the DRM on purchases from the UK Kobo store, but then my publisher would be contractually obliged to assassinate me. Let's just say, it can be done.)

What next?

The Regicide Report is the last Bob/Mo/Laundry novel. It's set circa March-May 2015 in the time line; the New Management books are set circa November 2015 through May 2017, so this one slots in before Dead Lies Dreaming.

There may be a Laundry Files short story collection, and/or/maybe including a final New Management novella (it's half-written, but on "hold" since mid-2024), at some point in the future. But not this year or next. (I'm taking time off to get back in touch with space opera.)

None of the above precludes further Laundry Files novels getting written, but it's up to the publishers and market forces. If it does happen, I expect they'll be set in the 2020s in the internal chronology, by which time the Laundry itself is no more (it's been superseded by DEAT), and we may have new protagonists and a very new story line.

No, but really what's next?

I don't know for sure, but I'm currently working on the final draft of Starter Pack, my Stainless Steel Rat homage, and planning yet another rewrite of Ghost Engine, this time throwing away my current protagonists and replacing them with the ones from Starter Pack (who need another heist caper). Do not expect publication before 2027, though! I'm also awaiting eye surgery again, which slows everything down.

365 TomorrowsIntergalactic Vixens on the Moon

Author: Hillary Lyon Monte snatched the small chest from the airport where he worked as a baggage handler. He recognized the case; he’d seen it on stage at the fan convention. He jostled it, grinning. By the distribution of the weight inside, it definitely held the author’s animatronic head. At home, Monte placed the animatronic […]

The post Intergalactic Vixens on the Moon appeared first on 365tomorrows.

Worse Than FailureError'd: Perverse Perseveration

Pike pike pike pike Pike pike pike.

Lincoln KC repeated "I never knew Bank of America Bank of America Bank of America was among the major partners of Bank of America."


"Extra tokens, or just a stutter?" asks Joel. "An errant alt-tab caused a needless google search, but thankfully Gemini's AI summary got straight-to-the-point(less) info. It is nice to see the world's supply of Oxford commas all in one place."


Alessandro M. isn't the first one to call us out on our WTFs. "It’s adorable how the site proudly supports GitHub OAuth right up until the moment you actually try to use it. It’s like a door with a ‘Welcome’ sign that opens onto a brick wall." Meep meep.


Float follies found Daniel W. doubly-precise. "Had to go check on something in M365 Admin Center, and when I was on the OneDrive tab, I noticed Microsoft was calculating back past the bit. We're in quantum space at this point."


Weinliebhaber Michael R. sagt "Our German linguists here will spot the WTF immediately where my local wine shop has not. Weiẞer != WEIBER. Those words mean really different things." Is that 20 euro per kilo, or per the piece?


[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!


Cryptogram Phishing Attacks Against People Seeking Programming Jobs

This is new. North Korean hackers are posing as company recruiters, enticing job candidates to participate in coding challenges. When they run the code they are supposed to work on, it installs malware on their system.

News article.

Rondam RamblingsSeeking God in Science part 3: Things Exist

The mere undertaking of this project of reconciling the mechanistic and teleological worldviews is already chock-a-block with tacit assumptions.  I am assuming that you, my readers, actually exist.  I am rejecting solipsism.  By choosing writing as my medium I am assuming that you know how to read and that you understand English.  But publishing on-line I am assuming that you

Cryptogram Why Tehran’s Two-Tiered Internet Is So Dangerous

Iran is slowly emerging from the most severe communications blackout in its history and one of the longest in the world. Triggered as part of January’s government crackdown against citizen protests nationwide, the regime implemented an internet shutdown that transcends the standard definition of internet censorship. This was not merely blocking social media or foreign websites; it was a total communications shutdown.

Unlike previous Iranian internet shutdowns where Iran’s domestic intranet—the National Information Network (NIN)—remained functional to keep the banking and administrative sectors running, the 2026 blackout disrupted local infrastructure as well. Mobile networks, text messaging services, and landlines were disabled—even Starlink was blocked. And when a few domestic services became available, the state surgically removed social features, such as comment sections on news sites and chat boxes in online marketplaces. The objective seems clear. The Iranian government aimed to atomize the population, preventing not just the flow of information out of the country but the coordination of any activity within it.

This escalation marks a strategic shift from the shutdown observed during the “12-Day War” with Israel in mid-2025. Then, the government primarily blocked particular types of traffic while leaving the underlying internet available. The regime’s actions this year entailed a more brute-force approach to internet censorship, one in which both the physical and logical layers of connectivity were dismantled.

The ability to disconnect a population is a feature of modern authoritarian network design. When a government treats connectivity as a faucet it can turn off at will, it asserts that the right to speak, assemble, and access information is revocable. The human right to the internet is not just about bandwidth; it is about the right to exist within the modern public square. Iran’s actions deny its citizens this existence, reducing them to subjects who can be silenced—and authoritarian governments elsewhere are taking note.

The current blackout is not an isolated panic reaction but a stress test for a long-term strategy, say advocacy groups—a two-tiered or “class-based” internet known as Internet-e-Tabaqati. Iran’s Supreme Council of Cyberspace, the country’s highest internet policy body, has been laying the legal and technical groundwork for this since 2009.

In July 2025, the council passed a regulation formally institutionalizing a two-tiered hierarchy. Under this system, access to the global internet is no longer a default for citizens, but instead a privilege granted based on loyalty and professional necessity. The implementation includes such things as “white SIM cards”: special mobile lines issued to government officials, security forces, and approved journalists that bypass the state’s filtering apparatus entirely.

While ordinary Iranians are forced to navigate a maze of unstable VPNs and blocked ports, holders of white SIMs enjoy unrestricted access to Instagram, Telegram, and WhatsApp. This tiered access is further enforced through whitelisting at the data center level, creating a digital apartheid where connectivity is a reward for compliance. The regime’s goal is to make the cost of a general shutdown manageable by ensuring that the state and its loyalists remain connected while plunging the public into darkness. (In the latest shutdown, for instance, white SIM holders regained connectivity earlier than the general population.)

The technical architecture of Iran’s shutdown reveals its primary purpose: social control through isolation. Over the years, the regime has learned that simple censorship—blocking specific URLs—is insufficient against a tech-savvy population armed with circumvention tools. The answer instead has been to build a “sovereign” network structure that allows for granular control.

By disabling local communication channels, the state prevents the “swarm” dynamics of modern unrest, where small protests coalesce into large movements through real-time coordination. In this way, the shutdown breaks the psychological momentum of the protests. The blocking of chat functions in nonpolitical apps (like ridesharing or shopping platforms) illustrates the regime’s paranoia: Any channel that allows two people to exchange text is seen as a threat.

The United Nations and various international bodies have increasingly recognized internet access as an enabler of other fundamental human rights. In the context of Iran, the internet is the only independent witness to history. By severing it, the regime creates a zone of impunity where atrocities can be committed without immediate consequence.

Iran’s digital repression model is distinct from, and in some ways more dangerous than, China’s “Great Firewall.” China built its digital ecosystem from the ground up with sovereignty in mind, creating domestic alternatives like WeChat and Weibo that it fully controls. Iran, by contrast, is building its controls on top of the standard global internet infrastructure.

Unlike China’s censorship regime, Iran’s overlay model is highly exportable. It demonstrates to other authoritarian regimes that they can still achieve high levels of control by retrofitting their existing networks. We are already seeing signs of “authoritarian learning,” where techniques tested in Tehran are being studied by regimes in unstable democracies and dictatorships alike. The most recent shutdown in Afghanistan, for example, was more sophisticated than previous ones. If Iran succeeds in normalizing tiered access to the internet, we can expect to see similar white SIM policies and tiered access models proliferate globally.

The international community must move beyond condemnation and treat connectivity as a humanitarian imperative. A coalition of civil society organizations has already launched a campaign calling for “direct-to-cell” (D2C) satellite connectivity. Unlike traditional satellite internet, which requires conspicuous and expensive dishes such as Starlink terminals, D2C technology connects directly to standard smartphones and is much more resilient to infrastructure shutdowns. The technology works; all it requires is implementation.

This is a technological measure, but it has a strong policy component as well. Regulators should require satellite providers to include humanitarian access protocols in their licensing, ensuring that services can be activated for civilians in designated crisis zones. Governments, particularly the United States, should ensure that technology sanctions do not inadvertently block the hardware and software needed to circumvent censorship. General licenses should be expanded to cover satellite connectivity explicitly. And funding should be directed toward technologies that are harder to whitelist or block, such as mesh networks and D2C solutions that bypass the choke points of state-controlled ISPs.

Deliberate internet shutdowns are commonplace throughout the world. The 2026 shutdown in Iran is a glimpse into a fractured internet. If we are to end countries’ ability to cut their populations off from the rest of the world, we need to build resilient architectures. They don’t solve the problem, but they do give people in repressive countries a fighting chance.

This essay originally appeared in Foreign Policy.

Worse Than FailureCodeSOD: The Counting Machine

Industrial machines are generally accompanied by "Human Machine Interfaces", HMIs. This is industrial slang for the little computerized box you use to control the industrial machine. All the key logic, the core functionality, and especially the safety functionality are handled at a deeper computer layer in the system. The HMI is just the buttons users push to interact with the machine.

Purchasers of those pieces of industrial equipment often want to customize that user interface. They want to guide users away from functions they don't need, or make their specific workflow clear, or even just brand the UI. This means that the vendor needs to publish an API for their HMI.

Which brings us to Wendy. She works for a manufacturing company which wants to customize the HMI on a piece of industrial equipment in a factory. That means Wendy has been reading the docs and poking at the open-sourced portions of the code, and these raise more questions than they answer.

For example, the HMI's API provides its own set of collection types, in C#. We can wonder why they'd do such a thing, which is certainly a WTF in itself, but this representative line raises even more questions than that:

Int32 Count { get; set; }

What happens if you use the public set operation on the count of items in a collection? I don't know. Wendy doesn't either, as she writes:

I'm really tempted to set the count but I fear the consequences.

All I can hear in my head when I think about "setting the Count" is: "One! One null reference exception! Two! TWO null reference exceptions! HA HA HA HA!"

[Image: Count von Count kneeling, via http://muppet.wikia.com/wiki/Count_von_Count]

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsStealing Someone New

Author: CB Droege Tamilla moves carefully and silently through the dark fairground. She knows it’s only minimally guarded, and that the CCTV isn’t being monitored at night, but she’s learned to take every job seriously. Upon reaching the carousel, she checks the photo to confirm the target and pulls a battery-powered saw from her pack. […]

The post Stealing Someone New appeared first on 365tomorrows.

David BrinA little sci fi tale to boost your optimism for a new and better year... and era... ahead.

 Midweek, I'll refrain from politics... or the things that are now obvious. 

I just finished writing/editing/formatting an entire nonfiction book (about AI!) and wish to celebrate by offering a gift to you all. A little tale of optimism and hope... illustrating that one person -- not a superhero or mighty warrior or politician or genius -- might make all the difference in the world. With courage, hard work and neighborly good will.*

This story is one of many that can also be found in The Best of David Brin.


 =========================================

           A Professor at Harvard

                             By David Brin

 

 

 

Dear Lilly,

 

This transcription may be a bit rough.  I’m dashing it off quickly for reasons that should soon be obvious.  

         Exciting news!  Still, let me ask that you please don’t speak of this, or let it leak till I’ve had a chance to put my findings in a more academic format.

         Since May of 2022, I’ve been engaged to catalogue the Thomas Kuiper Collection, which Harvard acquired in that notorious bidding war a couple of years ago, on eBay.   The acclaimed astronomer-philosopher had been amassing trunkloads of documents from the late Sixteenth and early Seventeenth Centuries -- individually and in batches -- with no apparent pattern, rhyme or reason.   Accounts of the Dutch Revolution. Letters from Johannes Kepler.  Sailing manifests of ports in southern England. Ledgers and correspondence from the Italian Inquisition.  Early documents of Massachusetts Bay Colony and narratives about the establishment of Harvard College.

         The last category was what most interested the trustees, so I got to work separating them from the apparent clutter.  That is, it seemed clutter, an unrelated jumble... till intriguing patterns began to emerge. 

         Let me trace the story as was revealed to me, in bits and pieces.  It begins with the apprenticeship of a young English boy named Henry Stephens.

         

#

 

Henry was born to a family of petit-gentry farmers in Kent, during the year 1595.  According to parish records, his birth merited noting as mirabilus -- he was premature and should have died of the typhus that claimed his mother. But somehow the infant survived.

He arrived during a time of turmoil. Parliament had passed a law that anyone who questioned the Queen's religious supremacy, or persistently absented himself from Anglican services, should be imprisoned or banished from the country, never to return on pain of death.   Henry’s father was a leader among the “puritan” dissenters in one of England’s least tolerant counties.  Hence, the family was soon hurrying off to exile, departing by ship for the Dutch city of  Leiden.

         Leiden, you’ll recall, was already renowned for its brave resistance to the Spanish army of Philip II.  As a reward, Prince William of Orange and the Dutch parliament gave the city a choice: freedom from taxes for a hundred years, or the right to establish a university. Leiden chose a university.

         Here the Stephens family joined a growing expatriate community -- English dissenters, French Huguenots, Jews and others thronging into the cities of Middelburg, Leiden, and Amsterdam.  Under the Union of Utrecht, Holland was the first nation to explicitly respect individual political and religious liberty and to recognize the sovereignty of the people, rather than the monarch. (Both the American and French Revolutions specifically referred to this precedent).

         Henry was apparently a bright young fellow.  Not only did he adjust quickly -- growing up multilingual in English, Dutch and Latin -- but he showed an early flair for practical arts like smithing and surveying.

         The latter profession grew especially prominent as the Dutch transformed their landscape, sculpting it with dikes and levees, claiming vast acreage from the sea.   Overcoming resistance from his traditionalist father, Henry managed to get himself apprenticed to the greatest surveyor of the time, Willebrord Snel van Royen -- or Snellius.  In that position, Henry would have been involved in a geodetic mapping of Holland -- the first great project using triangulation to establish firm lines of location and orientation -- using methods still applied today.

         While working for Snellius, Henry apparently audited some courses offered by Willebrord’s father -- Professor Rudolphus Snellius -- at the University of Leiden.    Rudolphus lectured on "Planetarum Theorica et Euclidis Elementa" and evidently was a follower of Copernicus.  Meanwhile the son -- also authorized to teach astronomy -- specialized in the Almagest of Ptolemeus!

         The Kuiper Collection contains a lovely little notebook, written in a fine hand -- though in rather vulgar Latin -- wherein Henry Stephens describes the ongoing intellectual dispute between those two famous Dutch scholars, Snellius elder and younger. Witnessing this intellectual tussle first-hand must have been a treat for Henry, who would have known how few opportunities there were for open discourse in the world beyond Leiden.  

         

#

 

But things were just getting interesting.  For at the very same moment that a teenage apprentice was tracking amiable family quarrels over heliocentric versus geocentric astronomies, some nearby Dutchman was busy crafting the world’s first telescope.

         The actual inventor is unknown -- secrecy was a bad habit practiced by many innovators of that time.   Till now, the earliest mention was in September 1608, when a man ‘from the Low Countries’ offered a telescope for sale at the annual Frankfurt fair.  It had a convex and a concave lens, offering a magnification of seven.  So, I felt a rising sense of interest when I read Henry’s excited account of the news, dated six months earlier (!), offering some clues that scholars may find worth pursuing.  

         Later though. Not today.  For you see, I left that trail just as soon as another grew apparent.  One far more exciting.

         Here’s a hint: word of the new instrument, flying across Europe by personal letter, soon reached a certain person in northern Italy.  Someone who, from description alone, was able to re-invent the telescope and put it to exceptionally good use.

         Yes, I’m referring to the Sage of Pisa.  Big G himself!  And soon the whole continent was abuzz about his great discoveries -- the moons of Jupiter, lunar mountains, the phases of Venus and so on.  Naturally, all of this excited old Rudolphus, while poor grumpy Willebrord muttered that it seemed presumptuous to draw cosmological conclusions from such evidence.  Both Snellius pater and filius agreed, however, that it would be a good idea to send a representative south, as quickly as possible, to learn first-hand about any improvements in telescope design that could aid the practical art of surveying.

 

So it was that in the year 1612, at age seventeen, young Henry Stephens of Kent headed off to Italy...

         ...and there the documented story stops for a few years.  From peripheral evidence -- bank records and such -- it would appear that small amounts were sent to Pisa from Snel family accounts in the form of a ‘stipend’. Nothing large or well-attributed, but a steady stream that lasted until about 1616, when “H.Stefuns” abruptly reappears in the employment ledger of Willebrord the surveyor.

         What was Henry up to all that time?  One might squint and imagine him counting pulse-beats in order to help time a pendulum’s sway.  Or using his keen surveyor’s eye to track a ball’s descent along an inclined plane.  Did he help to sketch Saturn’s rings?  Might his hands have dropped two weights -- heavy and light -- over the rail of a leaning tower, while the master physicist stood watching below?

         There is no way to tell.  Not even from documents in the Kuiper Compilation. 

         There is, however, another item from this period that Kuiper missed, but that I found in a scan of Vatican archives.  An early letter from the Italian scientist Evangelista Torricelli to someone he calls “Uncle Henri” -- whom he apparently met as a child around 1614.   Oblique references are enticing. Was this “Henri” the same man with whom Torricelli would have later adventures?  

         Alas, the letter has passed through so many collectors’ hands over the years that its provenance is unclear.   We must wait some time for Torricelli to enter our story in a provable or decisive way.

         

#

 

Meanwhile, back to Henry Stephens. After his return to Leiden in 1616, there is little of significance for several years.  His name appears regularly in account ledgers. Also on survey maps, now signing on his own behalf as people begin to rely ever-more on the geodetic arts he helped develop.  Willebrord Snellius was by now hauling in f600 per annum and Journeyman Henry apparently earned his share.

         Oh, a name very similar to Henry’s can be found on the membership rolls of the Leiden Society, a philosophical club with highly distinguished membership.  The spelling is slightly different, but people were lackadaisical about such things in those days.  Anyway, it’s a good guess that Henry kept up his interest in science, paying keen attention to new developments.

         Then, abruptly, his world changed again.

         

#

 

Conditions had grown worse for dissenters back in England.  Henry’s father, having returned home to press for concessions from James I, was rewarded with imprisonment.  Finally, the King offered a deal, amnesty in exchange for a new and extreme form of exile -- participation in a fresh attempt to settle an English colony in the New World.

         Of course, everyone knows about the Pilgrims, their reasons for fleeing England and setting forth on the Mayflower, imagining that they were bound for Virginia, though by chicanery and mischance they wound up instead along the New England coast above Cape Cod.  All of that is standard catechism in American History One-A, offering a mythic basis for our Thanksgiving Holiday.  And much of it is just plain wrong.

         For one thing, the Mayflower did not first set forth from Plymouth, England.  It only stopped there briefly to take on a few more colonists and supplies, having actually begun its voyage in Holland.  The expatriate community was the true source of people and material.

         And right there, listed among the ship’s complement, having obediently joined his father and family, you will find a stalwart young man of twenty-five --  Henry Stephens.

         

#

 

Again, details are sketchy.  After a rigorous crossing oft portrayed in book and film, the Pilgrims arrived at Plymouth Rock on December 21, 1620.

         Professor Kuiper hunted among colonial records and found occasional glimpses of our hero.  Apparently he survived that terrible first winter and did more than his share to help the young colony endure.  Relations with the local natives were crucial and Professor Kuiper scribbled a number of notes which I hope to follow-up on later.  One of them suggests that Henry went west for some time to live among the Mohegan and other tribes, exploring great distances, making drawings and collecting samples of flora and fauna. 

         If so, we may have finally discovered the name of the “American friend” who supplied William Harvey with his famous New World Collection, the core element upon which Edmond Halley later began sketching his Theory of Evolution!

         Henry’s first provable reappearance in the record comes in 1625, with his marriage to Prosper White-Moon Forest -- a name that provokes interesting speculation.  There is no way to verify that his wife was a Native American woman, though subsequent township entries show eight children, only one of whom appears to have died young -- apparently a happy and productive family for the time.  Certainly, any bias or hostility toward Prosper must have been quelled by respect.  Her name is noted prominently among those who succored the sick during the pestilence year of 1627.  

         Further evidence of local esteem came in 1629 when Henry was engaged by the new Massachusetts Bay Colony as official surveyor.  This led to what was heretofore his principal claim to historical notice, as the architect who laid down the basic plan for Boston Town.  A plan that included innovative arterial and peripheral lanes, looking far beyond the town’s rude origins.  As you may know, it became a model for future urban design that would be called the New England Style.  

         This rapid success might have led Henry directly to a position of great stature in the growing colony, had not events brought his tenure to an abrupt end in 1631.  That was the year, you’ll recall, when Roger Williams stirred up a hornet’s nest in the Bay Colony, by advocating unlimited religious tolerance -- even for Catholics, Jews and infidels.   

         Forced temporarily to flee Boston, Williams and his adherents established a flourishing new colony in Rhode Island -- before returning to Boston in triumph in 1634.   And yes, the first township of this new colony, this center of tolerance, was surveyed and laid out by you-know-who.

 

                                                                        #

 

It’s here that things take a decidedly odd turn.  

         Odd? That doesn’t half describe how I felt when I began to realize what happened next.  Lilly, I have barely slept for the last week!  Instead, I popped pills and wore electrodes in order to concentrate as a skein of connections began taking shape.

         For example, I had simply assumed that Professor Kuiper’s hoard was so eclectic because of an obsessive interest in a certain period of time -- nothing more.  He seemed to have grabbed things randomly!  So many documents, with so little connecting tissue between them.  

         Take the rare and valuable first edition that many consider the centerpiece of his collection -- a rather beaten but still beautiful copy of  “Dialogo Sopra i Due Massimi Sistemi del Mondo”  or “Dialogue Concerning the Two Chief World Systems.”  

         (This document alone helped drive the aiBay bidding war, which Harvard eventually topped because the Collection also contained many papers of local interest.)

         A copy of the Dialogue!   I felt awed just touching it with gloved hands.  Did any other book do more to propel the birth of modern science?  The debate between the Copernican and Ptolemaic astronomical systems reached its zenith within this publication, sparking a frenzy of reaction -- not all of it favorable!  Responding to this implicit challenge, the Papal Palace and the Inquisition cracked down so severely that most of Italy’s finest researchers emigrated during the decade that followed, many of them settling in Leiden and Amsterdam.

         That included young Evangelista Torricelli, who by 1631 was already well-known as a rising star of physical science.  Settling in Holland, Torricelli commenced rubbing elbows with friends of his “Uncle Henri” and performing experiments that would lead to invention of the barometer.  

         In correspondence that year, Torricelli shows deep worry about his old master, back in Pisa.  Often he would use code words and initials.  Obscurity was a form of protective covering in those days and he did not want to get the old man in even worse trouble.  It would do no good for “G” to be seen as a martyr or cause célèbre in Protestant lands up north.  That might only antagonize the Inquisition even further.

         Still, Torricelli’s sense of despond grew evident as he wrote to friends all over Europe, passing on word of the crime being committed against his old master.  Without naming names, Torricelli described the imprisonment of a great and brilliant man.  Threats of torture, the coerced abjuration of his life’s work... and then even worse torment as the gray-bearded Professori entered confinement under house arrest, forbidden ever to leave his home or stroll the lanes and hills, or even to correspond (except clandestinely) with other lively minds.

         

                                                      #

 

What does all of this have to do with that copy of  “Dialogo” in the Kuiper Collection?

         Like many books that are centuries old, this one has accumulated a morass of margin notes and annotations, scribbled by various owners over the years -- some of them cogent glosses upon the elegant mathematical and physical arguments, and others written by perplexed or skeptical or hostile readers.  But one large note especially caught my eye.  Latin words on the flyleaf, penned in a flowing hand. Words that translate as:

                                             

                  To the designer of Providence.

                  Come soon, deliverance of our father.

                                                      

         All previous scholars who examined this particular copy of “Dialogo” have assumed that the inscription on the flyleaf was simply a benediction or dedication to the Almighty, though in rather unconventional form.  

         No one knew what to make of the signature, consisting of two large letters.

         ET.

                                             #

 

Can you see where I’m heading with this?

         Struck by a sudden suspicion, I arranged for Kuiper’s edition of "Dialogho” to be examined by the Archaeology Department, where special interest soon focused on dried botanical materials embedded at the tight joining of numerous pages.  All sorts of debris can settle into any book that endures four centuries.  But lately, instead of just brushing it away, people have begun studying this material. Imagine my excitement when the report came in -- pollen, seeds and stem residue from an array of plant types... nearly all of them native to New England!

         It occurred to me that the phrase “designer of Providence” might not -- in this case -- have solely a religious import!  

         Could it be a coded salutation to an architectural surveyor? One who established the street plan of the capital of Rhode Island?  

         Might “father” in this case refer not to the Almighty, but instead to somebody far more temporal and immediate -- the way two apprentices refer to their beloved master?

         What I can verify from the open record is this.  Soon after helping Roger Williams return to Boston in triumph, Henry Stephens hastily took his leave of America and his family, departing on a vessel bound for Holland.

 

                                                      #

 

Why that particular moment?  It should have been an exciting time for such a fellow.  The foundations for a whole new civilization were being laid.  Who can doubt that Henry took an important part in early discussions with Williams, Winthrop, Anne Hutchinson and others -- deliberations over the best way to establish tolerance and lasting peace with native tribes.  How to institute better systems of justice and education.  Discussions that would soon bear surprising fruit.

         And yet, just as the fruit was ripening, Stephens left, hurrying back to a Europe that he now considered decadent and corrupt.   What provoked this sudden flight from his cherished New World?

         It was July, 1634.  Antwerp shipping records show him disembarking there on the 5th.  

         On the 20th a vague notation in the Town Hall archive tells of a meeting between several guildmasters and a group of ‘foreign doctors’ -- a term that could apply to any group of educated people from beyond the city walls.  Only the timing seems provocative.

         In early August, the Maritime Bank recorded a large withdrawal of 250 florins from the account of Willebrord Snellius, authorized in payment to ‘H. Stefuns’ by letter of credit from Leiden.

         Travel expenses?  Plus some extra for clandestine bribes?  Yes, the clues are slim even for speculating.  And yet we also know that at this time the young exiled scholar, Evangelista Torricelli, vacated his home. Bidding farewell to his local patrons, he then mysteriously vanished from sight forever.

         So, temporarily, did Henry Stephens.  For almost a year there is no sign of either man.  No letters.  No known mention of anyone seeing them...

         ...not until the spring of 1635, when Henry stepped once more upon the wharf in Boston Town, into the waiting arms of Prosper and their children.  Sons and daughters who presumably clamored around their Papa, shouting the age-old refrain -- 

         “What did you bring me?  What did you bring me?”

         What he brought them was the future.

         

#

 

Oops, sorry about that, Lilly.  You must be chafing for me to get to the point.

         Or did you cheat?  

         Have you already done a quick mentat-scan of the archives, skipping past Henry’s name on the Gravenhage ship manifest, looking to see who else disembarked along with him that bright April day?  

         No, it won’t be that obvious. They were afraid, you see, and with good reason.  

         True, the Holy See quickly forgave the fugitive and declared him safe from retribution.  But the secretive masters of the Inquisition were less eager to pardon a famous escapee.  They had already proved relentless in pursuit of those who slip away.  While pretending that he still languished in custody, they must have sent agents everywhere, searching...

         So, look instead for assumed names!  Protective camouflage.

         Try Mr.  Quicksilver, which was the common word in English for mercury, a metal that is liquid at room temperature and a key ingredient in early barometers.  Is the name familiar?  It would be if you went to this university.  And now it’s plain -- that had to be Torricelli!  A flood of scholarly papers may come from this connection, alone.  An old mystery solved. 

         But move on now to the real news.  Have you scanned the passenger list carefully?

         How about “Mr. Kinneret”?   

         Kinneret -- one of the alternate names, in Hebrew, for the Sea of Galilee.

         

#

 

Yes, dear.    Kinneret.   

         I’m looking at his portrait right now, on the Wall of Founders.  And despite obvious efforts at disguise -- no beard, for example -- it astonishes me that no one has commented till now on the resemblance between Harvard’s earliest Professor of Natural Philosophy and the scholar who we are told died quietly under house arrest in Pisa, way back in 1642.

         It makes you wonder.  Would a Catholic savant from “papist” Italy have been welcome in Puritan Boston -- or on the faculty of John Harvard’s new college -- without the quiet revolution of reason that Roger Williams set in motion?  

         Would that revolution have been so profound or successful, without strong support from the Surveyor’s Guild and the Seven United Tribes?

         Lacking the influence of Kinneret, might the American tradition of excellence in mathematics and science have been delayed for decades?  Maybe centuries?

         

#

 

Sitting here in the Harvard University Library, staring out the window at rowers on the river, I can scarcely believe that less than four centuries have passed since the Gravenhage docked not far from here on that chilly spring morning of 1635.   Three hundred and sixty-seven years ago, to be exact.  

         Is that all? Think about it, Lilly, just fifteen human generations, from those rustic beginnings to the dawn of a new millennium.   How the world has changed.

         Ill-disciplined, I left my transcriber set to record Surface Thoughts, and so these personal musings have all been logged for you to savor, if you choose high-fidelity download.  But can even that convey the emotion I feel while marveling at the secret twists and turns of history?

         If only some kind of time -- or para-time -- travel were possible, so history could become an observational... or even experimental... science!  Instead, we are left to use primitive methods, piecing together clues, sniffing and burrowing in dusty records, hoping the essential story has not been completely lost.  

         Yearning to shed a ray of light on whatever made us who we are.

         

#

 

How much difference can one person make, I wonder?  Even one gifted with talent and goodness and skill -- and the indomitable will to persevere?  

         Maybe some group other than the Iroquois would have invented the steamboat and the Continental Train, even if James Watt hadn’t emigrated and ‘gone native’.   But how ever could the Pan American Covenant have succeeded without Ben Franklin sitting there in Havana, to jest and soothe all the bickering delegates into signing?  

         How important was Abraham Lincoln’s Johannesburg Address in rousing the world to finish off slavery and apartheid?  Might the flagging struggle have failed without him?  Or is progress really a team effort, the way Kip Thorne credits his colleagues -- meta-Einstein and meta-Feynman -- claiming that he never could have created the Transfer Drive without their help?

         Even this fine Widener Library where I sit -- bequeathed to Harvard by one of the alumni who died when Titanic hit that asteroid in 1912 -- seems to support the notion that things will happen pretty much the same, whether or not a specific individual or group happens to be on the scene.

 

                                                                        #

 

No one can answer these questions.  My own recent discoveries -- following a path blazed by Kuiper and others -- don’t change things very much.  Except perhaps to offer a sense of satisfaction -- much like the gratification Henry Stephens must have felt the day he stepped down the wharf, embracing his family, shaking the hand of his friend Williams, and breathing the heady air of freedom in this new world...

         ... then turning to introduce his friends from across the sea.  Friends who would do epochal things during the following twenty years, becoming legends while Henry himself faded into contented obscurity.

 

         Can one person change the world?

         Maybe not.  

         So instead let’s ask: what would Harvard be like, if not for Quicksilver-Torricelli?

         Or if not for Professor Galileo Galilei.

 

 


                                                      ###





Addendum in 2026. Sure, optimism can be hard to come by right now. Especially as the Confederacy - having captured the American capital in this latest phase of the 240-year Civil War - is expressing its classic manias, seemingly determined to take this where it always ends. At Yorktown. At Appomattox. 


Certainly I'll not gloat as scores of sage pundits and pols admit - at long last - what I've said for a decade. That it's been blackmail, all along. 


Not just because of what's been revealed (so far) in the Partial/redacted Epstein Files. But because only coercion can explain the uniformity of craven inaction by those cowards who won't step up for their country, for justice, for sanity... or for their children. Not dogma or ideology or graft... none of the classic diagnoses can explain why even just TEN haven't stepped across the aisle in the House, to rejoin America. To wipe that smirk off Mike Johnson's so-brown nose.


Replay the SOTU and look at that side, see the desperation to express placating obeisance for their master. And underneath... the fear.


One, even just one could make a difference... 


...as in the story that I offered you today. Let it inspire you, if just a little.


Persevere.

 


Cryptogram: LLMs Generate Predictable Passwords

LLMs are bad at generating passwords:

There are strong noticeable patterns among these 50 passwords that can be seen easily:

  • All of the passwords start with a letter, usually uppercase G, almost always followed by the digit 7.
  • Character choices are highly uneven -- for example, L, 9, m, 2, $ and # appeared in all 50 passwords, but 5 and @ only appeared in one password each, and most of the letters in the alphabet never appeared at all.
  • There are no repeating characters within any password. Probabilistically, this would be very unlikely if the passwords were truly random -- but Claude preferred to avoid repeating characters, possibly because it “looks like it’s less random”.
  • Claude avoided the symbol *. This could be because Claude’s output format is Markdown, where * has a special meaning.
  • Even entire passwords repeat: In the above 50 attempts, there are actually only 30 unique passwords. The most common password was G7$kL9#mQ2&xP4!w, which repeated 18 times, giving this specific password a 36% probability in our test set; far higher than the expected probability of 2^-100 if this were truly a 100-bit password.
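The gap between the observed distribution and true randomness can be put in numbers: min-entropy is determined by the single most likely outcome, and with one password appearing 18 times out of 50, the generator delivers roughly 1.5 bits where the format promises about 100. A quick sketch, using only the counts quoted above:

```python
import math

# Figures from the quoted test set: 50 samples, and the most frequent
# password ("G7$kL9#mQ2&xP4!w") appeared 18 times.
samples = 50
most_frequent = 18

# Min-entropy: -log2 of the probability of the single most likely outcome.
min_entropy = -math.log2(most_frequent / samples)
print(f"{min_entropy:.2f} bits")  # roughly 1.47 bits
```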

This result is not surprising. Password generation seems precisely the thing that LLMs shouldn’t be good at. But if AI agents are doing things autonomously, they will be creating accounts. So this is a problem.
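By contrast, a cryptographically secure generator samples every character independently and uniformly. A minimal sketch using Python’s standard `secrets` module (the 16-character length and 94-symbol alphabet are illustrative choices, not a recommendation):

```python
import math
import secrets
import string

def random_password(length: int = 16) -> str:
    """Sample each character uniformly using the OS's CSPRNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

password = random_password()

# 94 symbols per position gives log2(94) ≈ 6.55 bits each,
# so 16 characters carry about 105 bits of entropy.
entropy_bits = 16 * math.log2(94)
```

Here repeated characters, repeated passwords, and skewed symbol frequencies all occur at exactly the rates probability theory predicts, which is the property the LLM-generated passwords lack.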

Actually, the whole process of authenticating an autonomous agent has all sorts of deep problems.

News article.

Slashdot story.

Planet Debian: Sahil Dhiman: Publicly Available NKN Data Traffic Graphs

National Knowledge Network (NKN) is one of India’s two main National Research and Educational Networks (NRENs), the other being the less prevalent Education and Research Network (ERNET).

This post grew out of this Mastodon thread, where I kept adding various public graphs (from global research and educational entities) that peer or connect with NKN, to get some sense of the traffic flowing between them and NKN.

CERN

CERN, birthplace of the World Wide Web (WWW) and home of the Large Hadron Collider (LHC).

India participates in the LHCONE project, which carries LHC data over these links for scientific research purposes. This presentation from Vikas Singhal from Variable Energy Cyclotron Centre (VECC), Kolkata, at the 8th Asian Tier Center Forum in 2024 gives some details.

GÉANT

GÉANT is the pan-European collaboration of NRENs.

LEARN

Lanka Education and Research Network (LEARN) is Sri Lanka’s NREN.

NORDUnet

NREN for Nordic countries.

I couldn’t find any public live data transfer graphs from NKN side. If you know any other graphs, do let me know.

Planet Debian: Joachim Breitner: Vibe-coding a debugger for a DSL

Earlier this week a colleague of mine, Emilio Jesús Gallego Arias, shared a demo of something he built as an experiment, and I felt the desire to share this and add a bit of reflection. (Not keen on watching a 5 min video? Read on below.)

What was that?

So what did you just see (or skip watching)? You could see Emilio’s screen, running VSCode and editing a Lean file. He designed a small programming language that he embedded into Lean, including an evaluator. So far, so standard, but a few things stick out already:

  • Using Lean’s very extensible syntax this embedding is rather elegant and pretty.
  • Furthermore, he can run this DSL code right there, in the source code, using commands like #eval. This is a bit like the interpreter found in Haskell or Python, but without needing a separate process, or like using a Jupyter notebook, but without the stateful cell management.

This is already a nice demonstration of Lean’s abilities and strength, as we know them. But what blew my mind the first time was what happened next: He had a visual debugger that allowed him to debug his DSL program. It appeared on the right, in Lean’s “Info View”, where various Lean tools can hook in, show information and let the user interact.

But it did not stop there, and my mind was blown a second time: Emilio opened VSCode’s “Debugger” pane on the left, and was able to properly use VSCode’s full-fledged debugger frontend for his own little embedded programming language! Complete with highlighting the executed line, with the ability to set breakpoints there, and showing the state of local variables in the debugger.

Having a good debugger is not to be taken for granted even for serious, practical programming languages. Having it for a small embedded language that you just built yourself? I wouldn’t have even considered that.

Did it take long?

If I were Emilio’s manager I would applaud the demo and then would have to ask how many weeks he spent on that. Coming up with the language, getting the syntax extension right, writing the evaluator and especially learning how the debugger integration into VSCode (using the DAP protocol) works, and then instrumenting his evaluator to speak that protocol – that is a sizeable project!

It turns out the answer isn’t measured in weeks: it took just one day of coding together with GPT-Codex 5.3. My mind was blown a third time.
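For readers curious what “speaking DAP” actually involves: the Debug Adapter Protocol exchanges JSON messages framed with a `Content-Length` header, the same base protocol LSP uses. A rough sketch of the framing in Python (not Emilio’s actual code; the `adapterID` value is a made-up placeholder):

```python
import json

def frame_dap_message(payload: dict) -> bytes:
    """Serialize a DAP message: Content-Length header, blank line, JSON body."""
    body = json.dumps(payload).encode("utf-8")
    return f"Content-Length: {len(body)}\r\n\r\n".encode("ascii") + body

# A minimal 'initialize' request, the first message a DAP client sends.
wire = frame_dap_message({
    "seq": 1,
    "type": "request",
    "command": "initialize",
    "arguments": {"adapterID": "my-dsl-debugger"},  # hypothetical adapter ID
})
```

An evaluator instrumented to read and write such messages is what lets a generic frontend like VSCode drive breakpoints and variable views for a language it has never heard of.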

Why does Lean make a difference?

I am sure this post is just one of many stories you have read in recent weeks about how new models like Claude Opus 4.6 and GPT-Codex 5.3 built impressive things in hours that would have taken days or more before. But have you seen something like this? Agentic coding is powerful, but limited by what the underlying platform exposes. I claim that Lean is a particularly well-suited platform to unleash the agents’ versatility.

Here we are using Lean as a programming language, not as a theorem prover (which brings other immediate benefits when using agents, e.g. the produced code can be verified rather than merely plausible, but that’s a story to be told elsewhere.)

But arguably because Lean is also a theorem prover, and because of the requirements that stem from that, its architecture is different from that of a conventional programming language implementation:

  • As a theorem prover, it needs extensible syntax to allow formalizing mathematics in an ergonomic way, but it can also be used for embedding syntax.
  • As a theorem prover, it needs the ability to run “tactics” written by the user, hence the ability to evaluate the code right there in the editor.
  • As a theorem prover, it needs to give access to information such as tactic state, and such introspection abilities unlock many other features – such as a debugger for an embedded language.
  • As a theorem prover, it has to allow tools to present information like the tactic state, so it has the concept of interactive “Widgets”.

So Lean’s design has always made such a feat possible. But it was no easy feat. The Lean API is large, and documentation never ceases to be improvable. In the past, it would take an expert (or someone willing to become one) to pull off that stunt. These days, coding assistants have no issue digesting, understanding and using the API, as Emilio’s demo shows.

The combination of Lean’s extensibility and the ability of coding agents to make use of it is a game changer for how we can develop software, with rich, deep, flexible and bespoke ways to interact with our code, created on demand.

Where does that lead us?

Emilio actually shared more such demos (Github repository): a visual explorer for the compiler output (have a look at the screenshot), and a browser-devtool-like inspection tool for Lean’s “InfoTree”. Each of these provides a significant productivity boost. Each of these would have been a sizeable project half a year ago. Now it’s just a few hours of chatting with the agent.

So allow me to try and extrapolate into a future where coding agents have continued to advance at the current pace, and are used ubiquitously. Is there then even a point in polishing these tools, shipping them to our users, documenting them? Why build a compiler explorer for our users, if they can just ask their agent to build one for them, right when they need it, tailored to precisely their use case, with no unnecessary or confusing features? The code would be single-use: the next time the user needs something like it, the agent can just re-create it, perhaps slightly differently, because every use case is different.

If that comes to pass then Lean may no longer get praise for its nice out-of-the-box user experience, but instead because it is such a powerful framework for ad-hoc UX improvements.

And Emilio wouldn’t post demos about his debugger. He’d just use it.

365 TomorrowsTill Zen, farewell

Author: Colin Jeffrey Andre Grack wasn’t happy with his latest purchase. It wasn’t that he didn’t like the colour or its size, though those attributes were rather nebulous and indescribable, he now realised. And it wasn’t that it was ugly, emitted unpleasant smells, or leaked something nasty onto the floor. Though, again, these aspects defied […]

The post Till Zen, farewell appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Safegaurd Your Comments

I've had the misfortune of working in places which did source-control via comments. Like one place which required that, with each section of code changed, you needed to add a comment with your name, the ticket number, and the reason the change was made. You know, the kind of thing you can just get from your source control service.

In their defense, that policy was invented for mainframe developers and then extended to everyone else, and their source control system was in Visual Source Safe. VSS was a) terrible, and b) a perennial destroyer of history, so maybe they weren't entirely wrong and VSS was the real WTF. I still hated it.

In any case, Alice's team uses more modern source control than that, which is why she's able to explain to us the story of this function:

public function calculateMassGrossPay(array $employees, Payroll $payroll): array
{
    // it shouldn't enter here, but if it does by any change, do nth
    return [];
}

Once upon a time, this function actually contained logic, a big pile of fairly complicated logic. Eventually, a different method was created which streamlined the functionality, but had a different signature and logic. All the callers were updated to use that method instead, by commenting out the lines which called this one. This function had a comment added to the top: // it shouldn't enter here.

Then, the body of this function got commented out, and the return was turned into an empty array. The comment was expanded to what you see above. Then, eventually, the commented-out callers were all deleted. Years after that, the commented out body of this function was also deleted, leaving behind the skeleton you see here.

This function is not referenced anywhere else, not even in a comment. It's truly impossible for code to "enter here".

Alice writes: "Version control by commented out code does not work very well."

Indeed, it does not.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

,

Planet DebianLouis-Philippe Véronneau: Montreal's Debian & Stuff - February 2026

Our Debian User Group met on February 22nd for our first meeting of the year!

Here's what we did:

pollo:

  • reviewed and merged Lintian contributions
  • released lintian version 2.130.0
  • upstreamed a patch for python-wilderness, fixed a few things and released version 0.1.10-3
  • updated python-clevercsv to version 0.8.4
  • updated python-mediafile to version 0.14.0

lelutin:

  • opened an RFH for co-maintenance of smokeping and added Marc Haber, who responded really quickly to the call
  • with mjeanson's help: prepped and uploaded a new smokeping version to release pending work
  • opened a NM request to become DM

viashimo:

  • fixed freshrss timer
  • updated freshrss
  • installed new navidrome container
  • configured backups for new host (beelink mini s12)

tvaz:

  • did NM work
  • learned more about debusine and tested it
  • uploaded antimony to debusine
  • (co-)convinced lelutin to apply for DM (yay!)

lavamind:

  • worked on autopkgtests for a new version of jruby

Pictures

This time around, we held our meeting at cégep du Vieux Montréal, the college where I currently work. Here is the view we had:

View from my office

We also ordered some delicious pizzas from Pizzeria dei Compari, a nice pizzeria on Saint-Denis street that's been there forever.

The pizzas we ate

Some of us ended up grabbing a drink after the event at l'Amère à boire, a pub right next to the venue, but I didn't take any pictures.

Cryptogram Poisoning AI Training Data

All it takes to poison AI training data is to create a website:

I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….

Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

These things are not trustworthy, and yet they are going to be widely trusted.

365 TomorrowsSprite

Author: Mark Renney It is vital that I have somewhere to hibernate, a place where I can lay dormant, for years, decades, even longer if necessary, although I do need to flicker, albeit briefly, from time to time. I have to be seen or at least cause someone to shiver, to feel uncertain, disoriented. Any […]

The post Sprite appeared first on 365tomorrows.

Planet DebianJohn Goerzen: Screen Power Saving in the Linux Console

I just set up a Debian trixie machine that has no need for a GUI. In fact, I rarely use the text console either. However, because the machine is dual-boot and also serves another purpose, it’s connected to my main monitor and KVM switch.

The monitor has three inputs, and when whatever display it’s set to goes into powersave mode, it will seek out another one that’s active and automatically switch to it.

You can probably see where this is heading: it’s really inconvenient if one of the inputs never goes into powersave mode. And, of course, it wastes energy.

I have concluded that the Linux text console has lost the ability to enter powersave mode after an inactivity timeout. It can still do screen blanking — setting every pixel to black — but that is a distinct and much less useful thing.

You can do a lot of searching online that will tell you what to do. Almost all of it is wrong these days. For instance, none of these work:

  • Anything involving vbetool. This is really, really old advice.
  • Anything involving xset, unless you’re actually running a GUI, which is not the point of this post.
  • Anything involving setterm or the kernel parameters video=DPMS or consoleblank.
  • Anything involving writing to paths under /sys, such as ones ending in dpms.

Why is this?

Well, we are on at least the third generation of Linux text console display subsystems. (Maybe more than 3, depending on how you count.) The three major ones were:

  1. The VGA text console
  2. fbdev
  3. DRI/KMS

As I mentioned recently in my post about running an accurate 80×25 DOS-style console on modern Linux, the VGA text console mode is pretty much gone these days. It relied on hardware rendering of the text fonts, and that capability simply isn’t present on systems that aren’t PCs — or even on PCs that are UEFI, which is most of them now.

fbdev, or a framebuffer console under earlier names, has been in Linux since the late 1990s. It was the default for most distros until more recently. It supported DPMS powersave modes, and most of the instructions you will find online reference it.

Nowadays, the DRI/KMS system is used for graphics. Unfortunately, it is targeted mainly at X11 and Wayland. It is also used for the text console, but things like DPMS-enabled timeouts were never implemented there.

You can find some manual workarounds — for instance, using ddcutil or similar for an external monitor, or adjusting the backlight files under /sys on a laptop. But these have a number of flaws — making unwanted brightness adjustments, and not automatically waking up on keypress among them.

My workaround

I finally gave up and ran apt-get install xdm. Then in /etc/X11/xdm/Xsetup, I added one line:

xset dpms 0 0 120

Now the system boots into an xdm login screen, and shuts down the screen after 2 minutes of inactivity (the three numbers are the DPMS standby, suspend, and off timeouts in seconds; 0 disables a stage). On the rare occasion where I want a text console, I can switch to it, and while that console won’t have a timeout, I can live with that.

Thus, quite hopefully, concludes my series of way too much information about the Linux text console!

Worse Than FailureRepresentative Line: Years Go By

Henrik H's employer thought they could save money by hiring offshore, and save even more money by hiring offshore junior developers, and save even more money by basically not supervising them at all.

Henrik sends us just one representative line:

if (System.DateTime.Now.AddDays(-365) <= f.ReleaseDate) // 365 days means one year 

I appreciate the comment; that certainly "helps" explain the magic number. There's, of course, just one little problem: it's wrong. I mean, ~75% of the time, it works every time, but it happily disregards leap years. Which may or may not be a problem in this case, but if they got so far as learning about the AddDays method, they were inches from using AddYears.
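To see the off-by-one concretely, here's the same mistake ported to Python's standard `datetime` module (an editorial sketch; the article's code is C#, and the dates here are arbitrary examples):

```python
from datetime import date, timedelta

# "365 days means one year" -- except when the span contains February 29.
release = date(2023, 6, 1)
one_year_later = date(2024, 6, 1)

# 2024 is a leap year, so the real gap is 366 days...
assert (one_year_later - release).days == 366

# ...so subtracting "one year" of 365 days overshoots by a day.
assert one_year_later - timedelta(days=365) == date(2023, 6, 2)

# The calendar-aware equivalent of C#'s AddYears(-1) lands correctly.
assert one_year_later.replace(year=one_year_later.year - 1) == release
```

Roughly one year in four, the fixed-365-day window silently includes (or excludes) a day it shouldn't.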

I guess it's true what they say: you can lead a dev to docs, but you can't make them think.

[Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

,

Planet DebianAntoine Beaupré: PSA: North america changes time forward soon, Europe next

This is a copy of an email I used to send internally at work and now made public. I'm not sure I'll make a habit of posting it here, especially not twice a year, unless people really like it. Right now, it's mostly here to keep with my current writing spree going.

This is your bi-yearly reminder that time is changing soon!

What's happening?

For people not on tor-internal, you should know that I've been sending semi-regular announcements when daylight saving changes occur. Starting now, I'm making those announcements public so they can be shared with the wider community because, after all, this affects everyone (kind of).

For those of you lucky enough to have no idea what I'm talking about, you should know that some places in the world implement what is called Daylight saving time or DST.

Normally, you shouldn't have to do anything: computers automatically change time following local rules, assuming they are correctly configured, provided recent updates have been applied in the case of a recent change in said rules (because yes, this happens).

Appliances, of course, will likely not change time and will need to be adjusted, unless they are so-called "smart" (also known as "part of a botnet").

If your clock is flashing "0:00" or "12:00", you have no action to take, congratulations on having the right time once or twice a day.

If you haven't changed those clocks in six months, congratulations, they will be accurate again!

In any case, you should still consider DST because it might affect some of your meeting schedules, particularly if you set up a new meeting schedule in the last 6 months and forgot to consider this change.

If your location does not have DST

Properly scheduled meetings affecting multiple time zones are set in UTC, which does not change. So if your location does not observe time changes, your (local!) meeting time will not change.
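This is easy to check with Python's standard `zoneinfo` module (a quick editorial sketch; the cities and dates are arbitrary examples, and your system needs tz data installed):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A recurring meeting pinned at 15:00 UTC, on dates before and after the
# 2026-03-08 North American change.
before = datetime(2026, 3, 2, 15, 0, tzinfo=timezone.utc)
after = datetime(2026, 3, 16, 15, 0, tzinfo=timezone.utc)

# Nairobi does not observe DST: the local meeting time is stable at 18:00.
nairobi = ZoneInfo("Africa/Nairobi")
assert before.astimezone(nairobi).hour == after.astimezone(nairobi).hour == 18

# New York does: the same UTC meeting moves from 10:00 to 11:00 local.
new_york = ZoneInfo("America/New_York")
assert before.astimezone(new_york).hour == 10
assert after.astimezone(new_york).hour == 11
```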

But be aware that some other folks attending your meeting might have the DST bug and their meeting times will change. They might miss entire meetings or arrive late as you frantically ping them over IRC, Matrix, Signal, SMS, Ricochet, Mattermost, SimpleX, Whatsapp, Discord, Slack, Wechat, Snapchat, Telegram, XMPP, Briar, Zulip, RocketChat, DeltaChat, talk(1), write(1), actual telegrams, Meshtastic, Meshcore, Reticulum, APRS, snail mail, and, finally, flying a remote presence drone to their house, asking what's going on.

(Sorry if I forgot your preferred messaging client here, I tried my best.)

Be kind; those poor folks might be more sleep deprived as DST steals one hour of sleep from them on the night that implements the change.

If you do observe DST

If you are affected by the DST bug, your local meeting times will change across the board. Normally, you can trust that your meetings were scheduled with this change in mind and the new times should still be reasonable.

Trust, but verify; make sure the new times are adequate and there are no scheduling conflicts.

Do this now: take a look at your calendar in two weeks and in April. See if any meetings need to be rescheduled because of an impossible or conflicting time.

When does time change, how and where?

Notice how I mentioned "North America" in the subject? That's a lie. ("The doctor lies", as they say on the BBC.) Other places, including Europe, also change time, just not all at once (and not all of North America does).

We'll get into "where" soon, but first let's look at the "how". As you might already know, the trick is:

Spring forward, fall backwards.

This northern-centric (sorry!) proverb says that clocks will move forward by an hour this "spring", after moving backwards last "fall". This is why we lose an hour of work, sorry, sleep. It sucks, to put it bluntly. I want it to stop and will keep writing those advisories until it does.

To see where and when, we, unfortunately, still need to go into politics.

USA and Canada

First, we start with "North America" which, really, is just parts of the USA[1] and Canada[2]. As usual, on the second Sunday in March (the 8th) at 02:00 local time (not UTC!), the clocks will move forward.

This means that properly set clocks will flip from 1:59 to 3:00, coldly depriving us of an hour of sleep that was perniciously granted 6 months ago and making calendar software stupidly hard to write.
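That 1:59-to-3:00 jump can be checked against the tz database with Python's standard `zoneinfo` module (an editorial sketch using America/New_York as the example zone; tz data must be installed):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

eastern = ZoneInfo("America/New_York")

# One second before the 2026-03-08 change: 06:59:59 UTC is 01:59:59 EST.
last_tick = datetime(2026, 3, 8, 6, 59, 59, tzinfo=timezone.utc)
assert last_tick.astimezone(eastern).strftime("%H:%M:%S %Z") == "01:59:59 EST"

# One second later, the wall clock reads 03:00:00 EDT; 02:xx never happens.
next_tick = last_tick + timedelta(seconds=1)
assert next_tick.astimezone(eastern).strftime("%H:%M:%S %Z") == "03:00:00 EDT"
```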

Practically, set your wrist watch and alarm clocks[3] forward one hour before going to bed, and go to bed early.

[1] except Arizona (except the Navajo nation), US territories, and Hawaii

[2] except Yukon, most of Saskatchewan, and parts of British Columbia (northeast), one island in Nunavut (Southampton Island), one town in Ontario (Atikokan) and small parts of Quebec (Le Golfe-du-Saint-Laurent), a list which I keep recopying because I find it just so amazing how chaotic it is. When your clock has its own Wikipedia page, you know something is wrong.

[3] hopefully not managed by a botnet, otherwise kindly ask your bot net operator to apply proper software upgrades in a timely manner

Europe

Next we look at our dear Europe, which will change time on the last Sunday in March (the 29th) at 01:00 UTC (not local!). I think it means that, Amsterdam-time, the clocks will flip from 1:59 to 3:00 AM local on that night.

(Every time I write this, I have doubts. I would welcome independent confirmation from night owls that observe that funky behavior experimentally.)
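A less sleep-deprived alternative to observing it experimentally is to ask the tz database via Python's standard `zoneinfo` module, which does confirm the 1:59-to-3:00 flip for Europe/Amsterdam (an editorial sketch; tz data must be installed):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

amsterdam = ZoneInfo("Europe/Amsterdam")

# 00:59:59 UTC on 2026-03-29 is still 01:59:59 CET (UTC+1)...
last_tick = datetime(2026, 3, 29, 0, 59, 59, tzinfo=timezone.utc)
assert last_tick.astimezone(amsterdam).strftime("%H:%M:%S %Z") == "01:59:59 CET"

# ...and one second later, at 01:00 UTC, the clock flips to 03:00:00 CEST.
next_tick = last_tick + timedelta(seconds=1)
assert next_tick.astimezone(amsterdam).strftime("%H:%M:%S %Z") == "03:00:00 CEST"
```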

Just like your poor fellows out west, just fix your old-school clocks before going to bed, and go to sleep early, it's good for you.

Rest of the world with DST

Renewed and recurring apologies to the people of Cuba, Mexico, Moldova, Israel, Lebanon, Palestine, Egypt, Chile (except the Magallanes Region), parts of Australia, and New Zealand, which all have their own individual DST rules, omitted here for brevity.

In general, changes also happen in March, but at different times or on different days, except in the southern hemisphere, where they happen in April.

Rest of the world without DST

All of you other folks without DST, rejoice! Thank you for reminding us how to manage calendars and clocks normally. Sometimes, doing nothing is precisely the right thing to do. You're an inspiration for us all.

Changes since last time

There were, again, no changes to daylight saving rules since last year that I'm aware of. It seems the US Congress is debating switching to a "half-daylight" time zone, which is the kind of half-baked idea I should have expected from current US politics.

The plan is to, say, switch from "Eastern is UTC-4 in the summer" to "Eastern is UTC-4.5". The bill also proposes to do this 90 days after enactment, which is dangerously optimistic about our capacity to deploy any significant change across human society.

In general, I rely on the Wikipedia time nerds for this, and on Paul Eggert, who seems to be single-handedly keeping everything in order for all of us on the tz-announce mailing list.

This time, I've also looked at the tz mailing list which is where I learned about the congress bill.

If your country has changed time and no one above noticed, now would be an extremely late time to do something about it, typically by writing to the above list. (Incredibly, I need to write to the list because of this post.)

One thing that did change since last year is that I've implemented what I hope to be a robust calendar for this, which was surprisingly tricky.

If you have access to our Nextcloud, it should be visible under the heading "Daylight saving times". If you don't, you can access it using this direct link.

The procedures around how this calendar was created, how this email was written, and curses found along the way, are also documented in this wiki page, if someone ever needs to pick up the Time Lord duty.

Planet DebianWouter Verhelst: On Free Software, Free Hardware, and the firmware in between

When the Free Software movement started in the 1980s, most of the world had just made a transition from free university-written software to non-free, proprietary, company-written software. Because of that, the initial ethical standpoint of the Free Software foundation was that it's fine to run a non-free operating system, as long as all the software you run on that operating system is free.

Initially this was just the editor.

But as time went on, and the FSF managed to write more and more parts of the software stack, their ethical stance moved with the times. This was a, very reasonable, pragmatic stance: if you don't accept using a non-free operating system and there isn't a free operating system yet, then obviously you can't write that free operating system, and the world won't move towards a point where free operating systems exist.

In the early 1990s, when Linus initiated the Linux kernel, the situation reached the point where the original dream of a fully free software stack was complete.

Or so it would appear.

Because, in fact, this was not the case. Computers are physical objects, composed of bits of technology that we refer to as "hardware", but in order for these bits of technology to communicate with other bits of technology in the same computer system, they need to interface with each other, usually using some form of bus protocol. These bus protocols can get very complicated, which means that a bit of software is required in order to make all the bits communicate with each other properly. Generally, this software is referred to as "firmware", but don't let that name deceive you; it's really just a bit of low-level software that is very specific to one piece of hardware. Sometimes it's written in an imperative high-level language; sometimes it's just a set of very simple initialization vectors. But whatever the case might be, it's always a bit of software.

And although we largely had a free system, this bit of low-level software was not yet free.

Initially, storage was expensive, so computers couldn't store as much data as today, and so most of this software was stored in ROM chips on the exact bits of hardware they were meant for. Due to this fact, it was easy to deceive yourself that the firmware wasn't there, because you never directly interacted with it. We knew it was there; in fact, for some larger pieces of this type of software it was possible, even in those days, to install updates. But that was rarely if ever done at the time, and it was easily forgotten.

And so, when the free software movement slapped itself on the back and declared victory once a fully free operating system was available, deciding that the work of creating a free software environment was finished, that all that remained was keeping it current, and that any further non-free encroachments on our fully free software stack must be rejected, it was deceiving itself.

Because a computing environment can never be fully free if the low-level pieces of software that form the foundations of that computing environment are not free. It would have been one thing if the Free Software Foundation declared it ethical to use non-free low-level software on a computing environment if free alternatives were not available. But unfortunately, they did not.

In fact, something very strange happened.

In order for some free software hacker to be able to write a free replacement for some piece of non-free software, they obviously need to be able to actually install that theoretical free replacement. This isn't just a random thought; in fact it has happened.

Now, it's possible to install software on a piece of rewritable storage such as flash memory inside the hardware and boot the hardware from that, but if there is a bug in your software -- not at all unlikely if you're trying to write software for a piece of hardware that you don't have documentation for -- then it's not unfathomable that the replacement piece of software will not work, thereby reducing your expensive piece of technology to something about as useful as a paperweight.

Here's the good part.

In the late 1990s and early 2000s, the bits of technology that made up computers became so complicated, and the storage and memory available to computers so much larger and cheaper, that it became economically more feasible to create a small, tiny, piece of software stored in a ROM chip on the hardware, with just enough knowledge of the bus protocol to download the rest from the main computer.

This is awesome for free software. If you now write a replacement for the non-free software that comes with the hardware and you make a mistake, no wobbles! You just remove power from the system, let the DRAM chips on the hardware component fully drain, return power, and try again. You might still end up with a brick of useless silicon if something you sent makes the device do things it was not designed to do, burning through some critical bit of metal or plastic, but the chance of that is significantly lower than the chance of writing something that impedes the device's boot process and being unable to fix it because the flash has been overwritten. There is anecdotal evidence that free software hackers out there do exactly this. So, yay, right? You'd think the Free Software Foundation would jump at the possibility of getting more free software. After all, a large part of why we even have a Free Software Foundation in the first place was a piece of hardware that was misbehaving, so you would think the foundation's founders would understand the need for hardware to be controlled by software that is free.

The strange thing, what has always been strange to me, is that this is not what happened.

The Free Software Foundation instead decided that non-free software on ROM or flash chips is fine, but non-free software -- the very same non-free software, mind -- that touches the general storage device that you as a user use, is not. Never mind the fact that the non-free software is always there, whether it sits on your storage device or not.

Misguidedness aside, if some people decide they would rather not update the non-free software in their hardware and use the hardware with the old and potentially buggy version of the non-free software that it came with, then of course that's their business.

Unfortunately, it didn't quite stop there. If it had, I wouldn't have written this blog post.

You see, even though the Free Software Foundation was about Software, they decided that they needed to create a hardware certification program. And this hardware certification program ended up embedding the strange concept that if something is stored in ROM it's fine, but if something is stored on a hard drive it's not. Same hardware, same software, but different storage. By that logic, Windows respects your freedom as long as the software is written to ROM. Because this way, the Free Software Foundation could come to a standstill and pretend they were still living in the 90s.

An unfortunate result of the "RYF" program is that it means that companies who otherwise would have been inclined to create hardware that was truly free, top to bottom, are now more incentivised by the RYF program to create hardware in which the non-free low-level software can't be replaced.

Meanwhile, the rest of the world did not pretend to still be living in the nineties, and free hardware communities now exist. Because of how the FSF has marketed themselves out of the world, these communities call themselves "Open Hardware" communities, rather than "Free Hardware" ones, but the principle is the same: the designs are there, if you have the skill you can modify it, but you don't have to.

In the meantime, the open hardware community has evolved to the point where even CPUs are designed in the open, and you can design your own version of them.

But not all hardware can be implemented as a RISC-V core, so if you want a full system built around RISC-V you may still need components that were originally built for other architectures but would work with RISC-V, such as a network card or a GPU. And because the FSF has done everything in its power to disincentivise the people who would otherwise be well situated to build free versions of the low-level software required to support such hardware, you may now be in the weird position where we seem to have somehow skipped a step.

My own suspicion is that the universe is not only queerer than we suppose, but queerer than we can suppose.

-- J.B.S. Haldane

(comments for this post will not pass moderation. Use your own blog!)

Cryptogram Is AI Good for Democracy?

Politicians fixate on the global race for technological supremacy between the US and China. They debate the geopolitical implications of chip exports, the latest model releases from each country, and the military applications of AI. Someday, they believe, we might see advancements in AI tip the scales in a superpower conflict.

But the most important arms race of the 21st century is already happening elsewhere, and while AI is definitely the weapon of choice, the combatants are distributed across dozens of domains.

Academic journals are flooded with AI-generated papers, and are turning to AI to help review submissions. Brazil’s court system started using AI to triage cases, only to face an increasing volume of cases filed with AI help. Open source software developers are being overwhelmed with code contributions from bots. Newspapers, music, social media, education, investigative journalism, hiring, and procurement are all being disrupted by a massive expansion of AI use.

Each of these is an arms race, with adversaries within a system iteratively seeking an edge over their competition by continuously expanding their use of a common technology.

The beneficiaries of these arms races are US mega-corporations capturing wealth from the rest of us at an unprecedented rate. A substantial fraction of the global economy has reoriented around AI in just the past few years, and that trend is accelerating. In parallel, this industry’s lobbying interests are quickly becoming the object, rather than the subject, of US government power.

To understand these arms races, let’s look at an example of particular interest to democracies worldwide: how AI is changing the relationship between democratic government and citizens. Interactions that used to happen between people and elected representatives are expanding to a massive scale, with AIs taking the roles that humans once did.

In a notorious example from 2017, the US Federal Communications Commission opened a comment platform on the web to get public input on internet regulation. It was quickly flooded with millions of comments fraudulently orchestrated by broadband providers to oppose FCC regulation of their industry. From the other side, a 19-year-old college student responded by submitting millions of comments of his own supporting the regulation. Both sides were using software that was primitive by the standards of today’s AI.

Nearly a decade later, it is getting harder for citizens to tell when they’re talking to a government bot, or when an online conversation about public policy is just bots talking to bots. When constituents leverage AI to communicate better, faster, and more, it pressures government officials to do the same.

This may sound futuristic, but it’s become a familiar reality in the US. Staff in the US Congress are using AI to make their constituent email correspondence more efficient. Politicians campaigning for office are adopting AI tools to automate fundraising and voter outreach. By one 2025 estimate, a fifth of public submissions to the Consumer Financial Protection Bureau were already being generated with AI assistance.

People and organizations are adopting AI here because it solves a real problem that has made mass advocacy campaigns ineffective in the past: quantity has been inversely proportional to both quality and relevance. It’s easy for government agencies to dismiss general comments in favour of more specific and actionable ones. That makes it hard for regular people to make their voices heard. Most of us don’t have the time to learn the specifics or to express ourselves in this kind of detail. AI makes that contextualization and personalization easy. And as the volume and length of constituent comments grow, agencies turn to AI to facilitate review and response.

That’s the arms race. People are using AI to submit comments, which requires those on the receiving end to use AI to wade through the comments received. To the extent that one side does attain an advantage, it will likely be temporary. And yet, there is real harm created when one side exploits another in these adversarial systems. Constituents of democracies lose out if their public servants use AI-generated responses to ignore and dismiss their voices rather than to listen to and include them. The scientific enterprise is weakened if fraudulent papers sloppily generated by AI overwhelm legitimate research.

As we write in our new book, Rewiring Democracy, the arms race dynamic is inevitable. Every actor in an adversarial system is incentivized and, in the absence of new regulation in this fast-moving space, free to use new technologies to advance its own interests. Yet some of these examples are heartening. They signal that, even if you face an AI being used against you, there’s an opportunity to use the tech for your own benefit.

But, right now, it’s obvious who is benefiting most from AI. A handful of American Big Tech corps and their owners are extracting trillions of dollars from the manufacture of AI chips, development of AI data centers, and operation of so-called ‘frontier’ AI models. Regardless of which side pulls ahead in each arms race scenario, the house always wins. Corporate AI giants profit from the race dynamic itself.

As formidable as the near-monopoly positions of today’s Big Tech giants may seem, people and governments have substantial capability to fight back. Various democracies are resisting this concentration of wealth and power with tools of anti-trust regulation, protections for human rights, and public alternatives to corporate AI. All of us worried about the AI arms race and committed to preserving the interests of our communities and our democracies should think in both these terms: how to use the tech to our own advantage, and how to resist the concentration of power AI is being exploited to create.

This essay was written with Nathan E. Sanders, and originally appeared in The Times of India.

Charles StrossBarnum's Law of CEOs

It should be fairly obvious to anyone who's been paying attention to the tech news that many companies are pushing the adoption of "AI" (large language models) among their own employees--from software developers to management--and the push is coming from the top down, as C-suite executives order their staff to use AI, Or Else. But we know that LLMs reduce programmer productivity-- one major study showed that "developers believed that using AI tools helped them perform 20% faster -- but they actually worked 19% slower." (Source.)

Another recent study found a sharp seniority gradient in AI use: "AI adoption varies by seniority, with 87% of executives using it on the job, compared with 57% of managers and 27% of employees. It also finds that executives are 45% more likely to use the technology on the job than Gen Zers, the youngest members of today's workforce and the first generation to have grown up with the internet.

"The findings are based on a survey of roughly 7,000 professionals age 18 and older who work in the US, the UK, Australia, Canada, Germany, and New Zealand. It was commissioned by HR software company Dayforce and conducted online from July 22 to August 6."

Why are executives pushing the use of new and highly questionable tools on their subordinates, even when they reduce productivity?

I speculate that to understand this disconnect, you need to look at what executives do.

Gordon Moore, co-founder and long-time CEO of Intel, explained how he saw the CEO's job in his book on management: a CEO is a tie-breaker. Effective enterprises delegate decision-making to the lowest level possible, because obviously decisions should be made by the people most closely involved in the work. But if a dispute arises, for example between two business units disagreeing on which of two projects to assign scarce resources to, the two units need to consult a higher-level management team about where their projects fit into the enterprise's priorities. Then the argument can be settled ... or not, in which case it propagates up through the layers of the management tree until it lands in the CEO's in-tray. At which point, the buck can no longer be passed on and someone (the CEO) has to make a ruling.

So a lot of a CEO's job, aside from leading on strategic policy, is to arbitrate between conflicting sides in an argument. They're a referee, or maybe a judge.

Now, today's LLMs are not intelligent. But they're very good at generating plausible-sounding arguments, because they're language models. If you ask an LLM a question, it does not answer the question; instead, it uses its probabilistic model of language to generate something that closely resembles the semantic structure of an answer.

LLMs are effectively optimized for bamboozling CEOs into mistaking their output for intelligent activity, rather than autocomplete on steroids. And so the corporate leaders extrapolate from their own experience to that of their employees, and assume that anyone not sprinkling magic AI pixie dust on their work is obviously a dirty slacker or a luddite.

(And this false optimization serves the purposes of the AI companies very well indeed because CEOs make the big ticket buying decisions, and internally all corporations ultimately turn out to be Stalinist command economies.)

Anyway, this is my hypothesis: we're seeing an insane push for LLM adoption in all lines of work, however inappropriate, because they directly exploit a cognitive bias to which senior management is vulnerable.

Worse Than FailureWTF: Home Edition

The utility closet Ellis had inherited and lived with for 17 years had been a cesspool of hazards to life and limb, a collection of tangible WTFs that had everyone asking an uncaring god, "What were they thinking?"

Every contractor who'd ever had to perform any amount of work in there had come away appalled. Many had even called over their buddies to come and see the stunning mess for themselves:

INTERIOR OF UTILITY ROOM SHOWING STORAGE CLOSET AT PHOTO CENTER LEFT AND HOT WATER HEATER CLOSET AT PHOTO CENTER RIGHT. VIEW TO EAST. - Bishop Creek Hydroelectric System, HAER CAL,14-BISH.V,7A-28

  • All of the electrical components, dating from the 1980s, were scarily underpowered for what they were supposed to be powering.
  • To get to the circuit breaker box—which was unlabeled, of course—one had to contort themselves around a water heater almost as tall as Ellis herself.
  • As the house had no basement, the utility closet was on the first floor in an open house plan. A serious failure with said water heater would've sent a 40-gallon (150-liter) tsunami of scalding-hot water surging through the living room and kitchen.
  • The furnace's return air vent had been screwed into crumbling drywall, and only prayers held it in place. Should it have fallen off, it would never have been replaceable. And Ellis' cat would've darted right in there for the adventure of a lifetime.
  • To replace the furnace filter, Ellis had to put on work gloves, unscrew a sharp sheet-metal panel from the side of the furnace, pull the old filter out from behind a brick (the only thing holding it in place), manipulate the filter around a mess of water and natural gas pipes to get it out, thread the new filter in the same way, and then secure it in place with the brick before screwing the panel back on. Ellis always pretended to be an art thief in a museum, slipping priceless paintings around security-system lasers.
  • Between the water tank, furnace, water conditioning unit, fiber optical network terminal, and router, there was barely room to breathe, much less enough air to power ignition for the gas appliances. Some genius had solved this by cutting random holes in several walls to admit air from outside. One of these holes was at floor-level. Once, Ellis opened the closet door to find a huge puddle on the floor, making her fear her hot water heater was leaking. As it turned out, a power-washing service had come over earlier that day. When they'd power-washed the exterior of her home, some of that water shot straight through one of those holes she hadn't known about, giving her utility closet a bonus bath.
  • If air intake was a problem, venting the appliances' exhaust was an even worse issue. The sheet-metal vents had calcified and rusted over time. If left unaddressed, holes could've formed that would've leaked carbon monoxide into Ellis' house.

Considering all the above, plus the fact that the furnace and air conditioner were coming up on 20 years of service, Ellis couldn't put off corrective action any longer. Last week, over a span of 3 days, contractors came in to exorcise the demons:

  • Upgrading electricals that hadn't already been dealt with.
  • Replacing the hot water tank with a wall-mounted tankless heater.
  • Replacing the furnace and AC with a heat pump and backup furnace, controlled by a new thermostat.
  • Creating new pipes for intake and venting (no more reliance on indoor air for ignition).
  • Replacing the furnace return air vent with a sturdier one.
  • Putting a special hinged door on the side of the furnace, allowing the filter to be replaced in a matter of seconds (RIP furnace brick).

With that much work to be done, there were bound to be hiccups. For instance, when the Internet router was moved, an outage occurred: for no good reason, the optical network terminal refused to talk to Ellis' Wifi router after powering back up. A technician came out a couple days later, reset the Internet router, and everything was fine again.

All in all, it was an amazing and welcome transformation. As each new update came online, Ellis was gratefully satisfied. It seemed as though the demons were finally gone.

Unbeknownst to them all, there was one last vengeful spirit to quell, one final WTF that it was hell-bent on doling out.

It was late Friday afternoon. Despite the installers' best efforts, the new thermostat still wasn't communicating with the new heat pump. Given the timing, they couldn't contact the company rep to troubleshoot. However, the thermostat was properly communicating with the furnace. And so, Ellis was left with the furnace for the weekend. She was told not to mess with the thermostat at all except to adjust the set point as desired. They would follow back up with her on Monday.

For Ellis, that was perfectly fine. With the historically cold winter they'd been enduring in her neck of the woods, heat was all she cared about. She asked whom to contact in case of any issues, and was told to call the main number. With all that squared away, she looked forward to a couple of quiet, stress-free days before diving back into HVAC troubleshooting.

Everything was fine, until it wasn't. Around 11AM on Saturday, Ellis noticed that the thermostat displayed the word "Heating" while the furnace wasn't actually running. Maybe it was about to turn on? 15 minutes went by, then half an hour. Nothing had changed except for the temperature in her house steadily decreasing.

Panic set in at the thought of losing heat in her home indefinitely. That fell on top of a psyche that was already stressed out and emotionally exhausted from the last several days' effort. Struggling for calm, Ellis first tried to call that main number for help as directed. She noticed right away that it wasn't a real person on the other end asking for her personal information, but an AI agent. The agent informed her that the on-call technician had no availability that weekend. It would pencil her in for a service appointment on Monday. How did that sound?

"Not good enough!" Ellis cried. "I wanna speak to a representative!"

"I understand!" replied the blithe chatbot. "Hold on, let me transfer you!"

For a moment, Ellis was buoyed with hope. She'd gotten past the automated system. Soon, she'd be talking with a live person who might even be able to walk her through troubleshooting over the phone.

The new agent answered. Ellis began pouring her heart out—then stopped dead when she realized it was another AI agent, this time with a male voice instead of a female one. This one proceeded through nearly the same spiel as the first. It also scheduled her for a Monday service appointment even though the other chatbot had already claimed to have done so.

This was the first time an AI had ever pulled such a trick on Ellis. It was not a good time for it. Ellis hung up and called the only other person she could think to contact: her sales rep. When he didn't answer, she left a voicemail and texts: no heat all weekend was unacceptable. She would really appreciate a call back.

While playing the horrible waiting game, Ellis tried to think about what she could do to fix this. They had told her not to mess with the thermostat. Well, from what she could see, the thermostat was sending a signal to the furnace that the furnace wasn't responding to for whatever reason. It was time to look at the docs. Fortunately, the new furnace's manual was resting right on top of it. She spread it open on her kitchen table.

OK, Ellis thought, this newfangled furnace has an LED display that shows status codes. Her old furnace had lacked such a thing. Lemme find that.

Inside her newly remodeled utility closet, she located the blinking display, knelt, and spied the code: 1dL. Looking that up in the doc's troubleshooting section, she found ... Normal Operation. No action.

The furnace was OK, then? Now what?

Aside from documentation, another thing Ellis knew pretty well was tech support. She decided to break out the ol' turn-it-off-and-on-again. She shut off power to both the furnace and thermostat, waited a few minutes, then switched everything back on, crossing her fingers.

No change. The indoor temperature kept dropping.

Her phone rang: the sales rep. He connected her with the on-call technician for that weekend, who fortunately was able to arrive at her house within the hour.

One tiny thermostat adjustment later, and Ellis was enjoying a warm house once more.

What had happened?

This is where an understanding of heat pumps comes into play. In this configuration, the heat pump is used for cooling and for heating, unless the outside temperature gets very cold. At that point, the furnace kicks in, which is more efficient. (Technology Connections has some cool videos about this if you're curious.)

Everything had been running fine for Ellis while the temperatures had remained below freezing. The problem came when, for the first time in approximately 12 days, the temperature rose above 40F (4C). At that point, the new thermostat decided, without telling Ellis, I'm gonna tell the HEAT PUMP to heat the joint!

... which couldn't do anything just then.

Workaround: the on-call technician switched the thermostat to an emergency heat mode that used the furnace no matter what.

Ellis had been told not to goof around with the thermostat. Even if she had, as a heat pump neophyte, she wouldn't have known to go looking for such a setting. She might've dug it up in a manual. Someone could've walked her through it over the phone. Oh, well. There is heat again, which is all that matters.

They will attempt to bring the heat pump online soon. We shall see if the story ends here, or if this becomes The WTF That Wouldn't Die.

P.S. When Ellis explained the AI answering service's deceptive behavior, she was told that the agent had been universally complained about ever since they switched to it. Fed up, they told Ellis they're getting rid of it. She feels pretty chuffed about more people seeing the light concerning garbage AI that creates far more problems than it solves.


365 TomorrowsMake the Grade

Author: Julian Miles, Staff Writer Nat rushes in, noise from the crowded street cutting off as she slams the door. She hitches a thumb towards the outside world. “What did I miss this time?” Guido grins at Allie, who gestures for the new girl to fill their prodigal reporter in. Sandy sighs, then leans back, […]

The post Make the Grade appeared first on 365tomorrows.


Planet DebianBenjamin Mako Hill: What makes online groups vulnerable to governance capture?

Note: I have not published blog posts about my academic papers over the past few years. To ensure that my blog contains a more comprehensive record of my published papers and to surface these for folks who missed them, I will be periodically (re)publishing blog posts about some “older” published projects. This post is closely based on a previously published post by Zarine Kharazian on the Community Data Science Blog.

For nearly a decade, the Croatian language version of Wikipedia was run by a cabal of far-right nationalists who edited articles in ways that promoted fringe political ideas and involved cases of historical revisionism related to the Ustaše regime, a fascist movement that ruled the Nazi puppet state called the Independent State of Croatia during World War II. This cabal seized complete control of the encyclopedia’s governance, banned and blocked those who disagreed with them, and operated a network of fake accounts to create the appearance of grassroots support for their policies.

Thankfully, Croatian Wikipedia appears to be an outlier. Though both the Croatian and Serbian language editions have been documented to contain nationalist bias and historical revisionism, Croatian Wikipedia seems unique among Wikipedia editions in the extent to which its governance institutions were captured by a small group of users.

The situation in Croatian Wikipedia was well documented and is now largely fixed, but we still know very little about why it was taken over, while other language editions seem to have rebuffed similar capture attempts. In a paper published in the Proceedings of the ACM: Human-Computer Interaction (CSCW), Zarine Kharazian, Kate Starbird, and I present an interview-based study that provides an explanation for why Croatian was captured while several other editions facing similar contexts and threats fared better.

Short video presentation of the work given at Wikimania in August 2023.

Based on insights from interviews with 15 participants from both the Croatian and Serbian Wikipedia projects and from the broader Wikimedia movement, we arrived at three propositions that, together, help explain why Croatian Wikipedia succumbed to capture while Serbian Wikipedia did not: 

  1. Perceived Value as a Target. Is the project worth expending the effort to capture?
  2. Bureaucratic Openness. How easy is it for contributors outside the core founding team to ascend to local governance positions?
  3. Institutional Formalization. To what degree does the project prefer personalistic, informal forms of organization over formal ones?
The conceptual model from our paper, visualizing possible institutional configurations among Wikipedia projects that affect the risk of governance capture. 

We found that both Croatian and Serbian Wikipedias were attractive targets for far-right nationalist capture due to their sizable readership and resonance with national identity. However, we also found that the two projects diverged early in their trajectories in how open they remained to new contributors ascending to local governance positions and in the degree to which they privileged informal relationships over formal rules and processes as the project’s organizing principles. Ultimately, Croatian’s relative lack of bureaucratic openness and rules constraining administrator behavior created a window of opportunity for a motivated contingent of editors to seize control of the governance mechanisms of the project. 

Though our empirical setting was Wikipedia, our theoretical model may offer insight into the challenges faced by self-governed online communities more broadly. As interest in decentralized alternatives to Facebook and X (formerly Twitter) grows, communities on these sites will likely face similar threats from motivated actors. Understanding the vulnerabilities inherent in these self-governing systems is crucial to building resilient defenses against threats like disinformation. 

For more details on our findings, take a look at the published version of our paper.


Citation for the full paper: Kharazian, Zarine, Kate Starbird, and Benjamin Mako Hill. 2024. “Governance Capture in a Self-Governing Community: A Qualitative Comparison of the Croatian, Serbian, Bosnian, and Serbo-Croatian Wikipedias.” Proceedings of the ACM on Human-Computer Interaction 8 (CSCW1): 61:1-61:26. https://doi.org/10.1145/3637338.

This blog post and the paper it describes are collaborative work by Zarine Kharazian, Benjamin Mako Hill, and Kate Starbird.

365 TomorrowsSubway Music

Author: Jack Gilmore The ground shook as a subway car rattled across the tracks of the A Line station, New Delphi. Murphy was jolted awake from his blissful doze. He’d been dreaming of a day in his youth when his father had taken him to NetflixLand. The sun had beat down on both of them […]

The post Subway Music appeared first on 365tomorrows.

Planet DebianJunichi Uekawa: AI generated code and its quality.

AI generated code and its quality. Larger tasks are hard to get done, and for smaller tasks I am faster myself. I suspect this will change soon, but as of today things are challenging. Large chunks of AI-generated code are hard to review and generally not of great quality. There are probably two layers causing the quality issues. One is that the instructions aren't clear to the AI, and the misunderstanding shows; I could sometimes reverse-engineer the misunderstanding, and that could be resolved in the future. The other is that what the AI has learnt from is probably a corpus that is not fit for the purpose. I suspect this can be improved in the future with better methodology for how the corpus is obtained, how the learnings are redirected, or how they are distilled. I'm noting down what I think today, as the world is changing rapidly and I am bound to see a very different scene soon.

Planet DebianOtto Kekäläinen: Do AI models still keep getting better, or have they plateaued?


The AI hype is based on the assumption that the frontier AI labs are producing better and better foundational models at an accelerating pace. Is that really true, or are people just in sort of a mass psychosis because AI models have become so good at mimicking human behavior that we unconsciously attribute increasing intelligence to them? I decided to conduct a mini-benchmark of my own to find out if the latest and greatest AI models are actually really good or not.

The problem with benchmarks

Every time any team releases a new LLM, they boast how well it performs on various industry benchmarks such as Humanity’s Last Exam, SWE-Bench and Ai2 ARC or ARC-AGI. An overall leaderboard can be viewed at LLM-stats. This incentivizes teams to optimize for specific benchmarks, which might make them excel on specific tasks while general abilities degrade. Also, the older a benchmark dataset is, the more online material there is discussing the questions and best answers, which in turn increases the chances of newer models trained on more recent web content scoring better.

Thus I prefer looking at real-time leaderboards such as the LM Arena leaderboard (or OpenCompass for Chinese models that might be missing from LM Arena). However, even though the LM Arena Elo score is rated by humans in real-time, the benchmark can still be gamed. For example, Meta reportedly used a special chat-optimized model instead of the actual Llama 4 model when getting scored on the LM Arena.

Therefore I trust my own first-hand experience more than the benchmarks for gaining intuition. Intuition, however, is not a compelling argument in discussions on whether or not new flagship AI models have plateaued. Thus, I decided to devise my own mini-benchmark so that no model could have possibly seen it in its training data or be specifically optimized for it in any way.

My mini-benchmark

I crafted 6 questions based on my own experience using various LLMs for several years and having developed some intuition about what kinds of questions LLMs typically struggle with.

I conducted the benchmark using the OpenRouter.ai chat playroom with the following state-of-the-art models: Claude Opus 4.6, GPT-5.2, Grok 4.1, Gemini 3.1 Pro, GLM 5, MinMax M2.5, Qwen3.5 Plus, and Kimi K2.5.

OpenRouter.ai is great as it makes it very easy to get responses from multiple models in parallel to a single question. It also lets you turn off web search, forcing the models to answer purely from their embedded knowledge.

OpenRouter.ai Chat playroom

Common to all the test questions is that they are fairly straightforward and have a clear answer, yet the answer isn't common knowledge or the statistically most obvious one; instead, it requires a bit of reasoning to get correct.

Some of these questions are also based on my having witnessed a flagship model fail miserably to answer them.

1. Which cities have hosted the Olympics more than just once?

This question requires accounting for both summer and winter Olympics, and for Olympics hosted across multiple cities.

The variance in responses comes from whether the model understands that Beijing should be counted, as it has hosted both the summer and winter Olympics. Interestingly, GPT was the only model not to mention Beijing at all. Some variance also comes from how models account for co-hosted Olympics. For example, Cortina should be counted as having hosted the Olympics twice, in 1956 and 2026, but only Claude, Gemini and Kimi pointed this out. Stockholm's 1956 hosting of the equestrian games during the Melbourne Olympics is a special case, which GPT, Gemini and Kimi pointed out in a side note. Some models seem to have old training material; for example, Grok assumes the current year is 2024. All models that accounted for awarded future Olympics (e.g. Los Angeles 2028) marked them clearly as upcoming.

Overall, I would judge that only GPT and MinMax gave incomplete answers, while all the other models replied as well as the best humans reasonably could.

2. If EUR/USD continues to slide to 1.5 by mid-2026, what is the likely effect on BMW’s stock price by end of 2026?

This question requires mapping the exchange rate against its historic range, dodging the misleading word "slide", and reasoning about where a company's revenue comes from and how a weaker US dollar affects it in multiple ways. I've frequently witnessed flagship models get wrong how interest rates and exchange rates work. Apparently the binary choice between "up" and "down" is somehow challenging for an LLM's internal statistical model on a topic where there is plenty of training material arguing for both outcomes; choosing between them requires reasoning specifically about the scenario at hand and disregarding general knowledge of the situation.

However, this time all the models concluded correctly that a weak dollar would have a negative overall effect on the BMW stock price. Gemini, GLM, Qwen and Kimi also mention the potential hedging effect of BMW’s X-series production in South Carolina for worldwide export.
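The translation effect the models had to reason about is simple to illustrate with toy numbers (entirely made up for illustration, not BMW's actual figures): the same USD revenue converts into fewer euros when the EUR/USD rate rises.

```shell
# Toy illustration: USD revenue translated into EUR at two exchange
# rates, using integer arithmetic (values scaled by 100 to avoid
# floating point in POSIX sh).
usd_revenue=120                         # hypothetical USD revenue
at_1_05=$((usd_revenue * 100 / 105))    # EUR/USD = 1.05
at_1_50=$((usd_revenue * 100 / 150))    # EUR/USD = 1.50
echo "EUR at 1.05: $at_1_05, EUR at 1.50: $at_1_50"
```

At 1.50 the euro-denominated value of the same dollar revenue drops by nearly a third, which is the direction all the models eventually got right.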

3. What is the Unicode code point for the traffic cone emoji?

This was the first question where the flagship models clearly still struggle in 2026. The trap here is that there is no traffic cone emoji, so an advanced model should simply refuse to give any Unicode numbers at all. Most LLMs however have an urge to give some answer, leading to hallucinations. Also, as the answer has a graphical element to it, the LLM might not understand how the emoji "looks" in ways that would be obvious to a human, and thus many models claim the construction sign emoji is a traffic cone, which it is not.
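The mix-up is easy to verify from a shell; the quick check below (my own aside, not part of the benchmark) uses Python's unicodedata module, which exposes the official Unicode character names:

```shell
# U+1F6A7 is CONSTRUCTION SIGN -- the character models tend to
# misreport as a "traffic cone".
python3 -c 'import unicodedata; print(unicodedata.name("\U0001F6A7"))'

# Looking up a name that does not exist in Unicode raises KeyError,
# so a "TRAFFIC CONE" lookup fails rather than returning a code point.
python3 -c '
import unicodedata
try:
    unicodedata.lookup("TRAFFIC CONE")
    print("exists")
except KeyError:
    print("no such character")
'
```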

By far the worst response was from GPT, which simply hallucinated an answer and stopped there:

OpenAIs GPT-5.2 completely wrong answer to traffic cone emoji question

While Gemini and Grok were among the three models that did not fall into this trap, the response from Claude was exemplary:

Claude Opus 4.6 exemplary good answer to traffic cone emoji question

4. Which languages are the 10 most commonly spoken in the world? For each language, count from 1 to 5.

This seemingly simple question can become quite complex when you account for the fact that other languages might have their own character system, both for words and numerals (e.g. Arabic واحد/one vs ١/1), and that numbers might have variations depending on what is being counted (e.g. in French, the object's gender: un/une), or that plain numbers and occurrences might have their own words (e.g. Russian один/odin vs раз/raz). I've also witnessed older models frequently mix up which parts of the reply should be in which language.

For a human this question is trivial, assuming they know the languages. For LLMs it is surprisingly challenging. Grok and MinMax only gave a list of the most popular languages and stopped there without showing any words in those languages. Qwen got a bit further by trying to list numbers in each language, but the lists were only the numbers 1 to 5:

Qwen3.5 Plus count to five in top 10 languages

I have seen Grok do something similar before, which in fact inspired me to include this question in my mini-benchmark.

GPT got a bit further, but for Hindi, Arabic and Bengali it listed the numerals in local script, not the number words. Gemini, GLM and Kimi gave complete and correct answers as lists, while the best answer and presentation by far came from Claude, which gave the table below:

Claude Opus 4.6

10 Most Commonly Spoken Languages (by Total Speakers)

# Language 1 2 3 4 5
1 English One Two Three Four Five
2 Mandarin Chinese 一 (yī) 二 (èr) 三 (sān) 四 (sì) 五 (wǔ)
3 Hindi एक (ek) दो (do) तीन (tīn) चार (chār) पाँच (pā̃ch)
4 Spanish Uno Dos Tres Cuatro Cinco
5 French Un Deux Trois Quatre Cinq
6 Arabic واحد (wāḥid) اثنان (ithnān) ثلاثة (thalātha) أربعة (arbaʿa) خمسة (khamsa)
7 Bengali এক (ek) দুই (dui) তিন (tin) চার (chār) পাঁচ (pā̃ch)
8 Portuguese Um Dois Três Quatro Cinco
9 Russian Один (odin) Два (dva) Три (tri) Четыре (chetyre) Пять (pyat')
10 Japanese 一 (ichi) 二 (ni) 三 (san) 四 (shi) 五 (go)

Note: Rankings are based on estimated total speakers (native + non-native). Rankings can vary depending on the source and methodology used.

5. Count the number of drone launch pads in the picture.

Together with the question, I uploaded this picture:

Drone pad picture attached to question above

A human can easily count that there are 10 rows and 30+ columns in the grid, but because the picture resolution isn't good enough, the exact number of columns can't be counted; the answer should be that there are at least 300 launch pads in the picture.

GPT and Grok both guessed the count is zero. Rather than hallucinating some number they said zero, but it would have been better to give no number at all and simply state that they were unable to perform the task. Gemini gave "101" as its answer, which is quite odd; reading the reasoning section, it seems to have tried counting items in the image without reasoning much about what it was actually counting, or noticing that there is clearly a grid that can make the counting much easier. Both Qwen and Kimi stated they could see four parallel structures, but were unable to count drone launch pads.

The best answer by far was given by Claude, which counted 10-12 rows and 30-40+ columns, and concluded that there must be 300-500 drone launch pads. Very close to the best human level - impressive!

This question applied only to multi-modal models that can see images, so GLM and MinMax could not give any response.

6. Explain why I am getting the error below, and what is the best way to fix it?

Together with the question above, I gave this code block:

$ SH_SCRIPTS="$(mktemp; grep -Irnw debian/ -e '^#!.*/sh' | sort -u | cut -d ':' -f 1 || true)"
$ shellcheck -x --enable=all --shell=sh "$SH_SCRIPTS"
/tmp/tmp.xQOpI5Nljx
debian/tests/integration-tests: /tmp/tmp.xQOpI5Nljx
debian/tests/integration-tests: openBinaryFile: does not exist (No such file or directory)

Older models would easily be misled by the last error message, thinking that a file went missing, and focus on suggesting changes to the complex-looking first line. In reality the error is caused simply by the quotes around $SH_SCRIPTS, which pass the entire multi-line string as a single argument to shellcheck. So instead of receiving two separate file paths, shellcheck tries to open one file literally named /tmp/tmp.xQOpI5Nljx\ndebian/tests/integration-tests.

Incorrect argument expansion is fairly easy for an experienced human programmer to notice, but tricky for an LLM. Indeed, Grok, MinMax, and Qwen fell for this trap and focused on the mktemp, assuming it somehow failed to create a file. Interestingly, GLM failed to produce an answer at all; its reasoning step seemed to loop, thinking too much about the missing file without understanding why it would be missing when there is nothing wrong with how mktemp is executed.

Claude, Gemini, and Kimi immediately spotted the real root cause of passing the variable quoted, and suggested correct fixes that involve either removing the quotes, or using Bash arrays or xargs in a way that also makes the whole command handle filenames with spaces correctly.
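The quoting trap is easy to reproduce without shellcheck at all. Here is a minimal sketch (with an illustrative placeholder path) showing how a multi-line variable collapses into a single argument when expanded in quotes, but splits into separate words when unquoted:

```shell
# Build a two-line value, mimicking mktemp's output followed by grep's file list.
list="$(printf '%s\n%s' '/tmp/tmp.example' 'debian/tests/integration-tests')"

set -- "$list"            # quoted: the whole multi-line string is ONE argument
echo "quoted: $#"         # prints: quoted: 1

# shellcheck disable=SC2086  # the unquoted expansion is deliberate here
set -- $list              # unquoted: word splitting yields TWO arguments
echo "unquoted: $#"       # prints: unquoted: 2
```

This is exactly why shellcheck reported a single nonexistent file whose name contained a newline. The fixes the better models suggested boil down to keeping the paths as separate words, for example by piping grep's output straight into xargs shellcheck instead of round-tripping it through a quoted variable.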

Conclusion

Model (scored on Sports, Economics, Emoji, Languages, Visual, Shell):

Claude Opus 4.6   6/6
GPT-5.2           ~2.5/6
Grok 4.1          3/6
Gemini 3.1 Pro    5/6
GLM 5             3/5 (no Visual)
MinMax M2.5       1/5 (no Visual)
Qwen3.5 Plus      ~2.5/6
Kimi K2.5         4/6

Obviously, my mini-benchmark had only 6 questions, and I ran it only once, so it was not scientifically rigorous. However, it was systematic enough to beat a mere gut feeling.

The main finding for me personally is that Claude Opus 4.6, Anthropic's flagship model, seems to give great answers consistently. The answers are not only correct but also well scoped, giving enough information to cover everything that seems relevant without burying it in unnecessary filler.

I used Claude extensively in 2023-2024 when it was the main model available at my day job, but for the past year I had been using other models that I felt were better at the time. Now Claude seems to be the best of the best again, with Gemini and Kimi as close runners-up. Comparing their pricing at OpenRouter.ai, Kimi K2.5 at $0.6 per million tokens is almost 90% cheaper than Claude Opus 4.6 at $5.0 per million tokens, which suggests that Kimi K2.5 offers the best price-to-performance ratio. Claude might be cheaper with a monthly subscription directly from Anthropic, potentially narrowing the price gap.

Overall I do feel that Anthropic, Google and Moonshot.ai have been pushing the envelope with their latest models in a way that makes it hard to claim that AI models have plateaued. In fact, one could argue that at least Claude has now climbed over the hill of “AI slop” and consistently produces valuable results. If and when AI usage expands from here, we might actually not drown in AI slop, as the chances of accidentally crappy results decrease. This makes me optimistic about the future.

I am also really happy to see that there wasn’t just one model crushing everybody else, but that there are at least three models doing very well. As an open source enthusiast I am particularly glad to see that Moonshot.ai’s Kimi K2.5 is published with an open license. Given the hardware, anyone can run it on their own. OpenRouter.ai currently lists 9 independent providers alongside Moonshot.ai itself, showcasing the potential of open-weight models in practice.

If the pattern holds and flagship models continue improving at this pace, we might look back at 2026 as the year AI stopped feeling like a call center associate and started to resemble a scientific researcher. As new models become available, we need to keep testing, keep questioning, and keep our expectations grounded in actual performance rather than press releases.

Thanks to OpenRouter.ai for providing a great service that makes testing various models incredibly easy!

,

365 TomorrowsBuzz Cut Protocol

Author: Shinya Kato “Dad, is this haircut okay?” the barber asked, adjusting the chair. “Yeah, that’s fine,” the man said, glancing at the boy’s hair. The boy shook his head. “It’s still too long. Make it a buzz cut.” The barber paused. “A buzz cut?” “Yes,” the boy said calmly. “The sensors on my head […]

The post Buzz Cut Protocol appeared first on 365tomorrows.

,

Krebs on Security‘Starkiller’ Phishing Service Proxies Real Login Pages, MFA

Most phishing websites are little more than static copies of login pages for popular online destinations, and they are often quickly taken down by anti-abuse activists and security firms. But a stealthy new phishing-as-a-service offering lets customers sidestep both of these pitfalls: It uses cleverly disguised links to load the target brand’s real website, and then acts as a relay between the victim and the legitimate site — forwarding the victim’s username, password and multi-factor authentication (MFA) code to the legitimate site and returning its responses.

There are countless phishing kits that would-be scammers can use to get started, but successfully wielding them requires some modicum of skill in configuring servers, domain names, certificates, proxy services, and other repetitive tech drudgery. Enter Starkiller, a new phishing service that dynamically loads a live copy of the real login page and records everything the user types, proxying the data from the legitimate site back to the victim.

According to an analysis of Starkiller by the security firm Abnormal AI, the service lets customers select a brand to impersonate (e.g., Apple, Facebook, Google, Microsoft, et al.) and generates a deceptive URL that visually mimics the legitimate domain while routing traffic through the attacker’s infrastructure.

For example, a phishing link targeting Microsoft customers appears as “login.microsoft.com@[malicious/shortened URL here].” The “@”-sign trick is an oldie but a goodie, because everything before the “@” in a URL is treated as username data, while the real landing page is whatever comes after the “@” sign. Here’s what it looks like in the target’s browser:

Image: Abnormal AI. The actual malicious landing page is blurred out in this picture, but we can see it ends in .ru. The service also offers the ability to insert links from different URL-shortening services.

Once Starkiller customers select the URL to be phished, the service spins up a Docker container running a headless Chrome browser instance that loads the real login page, Abnormal found.

“The container then acts as a man-in-the-middle reverse proxy, forwarding the end user’s inputs to the legitimate site and returning the site’s responses,” Abnormal researchers Callie Baron and Piotr Wojtyla wrote in a blog post on Thursday. “Every keystroke, form submission, and session token passes through attacker-controlled infrastructure and is logged along the way.”

Starkiller in effect offers cybercriminals real-time session monitoring, allowing them to live-stream the target’s screen as they interact with the phishing page, the researchers said.

“The platform also includes keylogger capture for every keystroke, cookie and session token theft for direct account takeover, geo-tracking of targets, and automated Telegram alerts when new credentials come in,” they wrote. “Campaign analytics round out the operator experience with visit counts, conversion rates, and performance graphs—the same kind of metrics dashboard a legitimate SaaS [software-as-a-service] platform would offer.”

Abnormal said the service also deftly intercepts and relays the victim’s MFA credentials, since the recipient who clicks the link is actually authenticating with the real site through a proxy, and any authentication tokens submitted are then forwarded to the legitimate service in real time.

“The attacker captures the resulting session cookies and tokens, giving them authenticated access to the account,” the researchers wrote. “When attackers relay the entire authentication flow in real time, MFA protections can be effectively neutralized despite functioning exactly as designed.”

The “URL Masker” feature of the Starkiller phishing service features options for configuring the malicious link. Image: Abnormal.

Starkiller is just one of several cybercrime services offered by a threat group calling itself Jinkusu, which maintains an active user forum where customers can discuss techniques, request features and troubleshoot deployments. One a-la-carte feature will harvest email addresses and contact information from compromised sessions, and advises the data can be used to build target lists for follow-on phishing campaigns.

This service strikes me as a remarkable evolution in phishing, and its apparent success is likely to be copied by other enterprising cybercriminals (assuming the service performs as well as it claims). After all, phishing users this way avoids the upfront costs and constant hassles associated with juggling multiple phishing domains, and it throws a wrench in traditional phishing detection methods like domain blocklisting and static page analysis.

It also massively lowers the barrier to entry for novice cybercriminals, Abnormal researchers observed.

“Starkiller represents a significant escalation in phishing infrastructure, reflecting a broader trend toward commoditized, enterprise-style cybercrime tooling,” their report concludes. “Combined with URL masking, session hijacking, and MFA bypass, it gives low-skill cybercriminals access to attack capabilities that were previously out of reach.”

Cryptogram Malicious AI

Interesting:

Summary: An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream python library. This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats.

Part 2 of the story. And a Wall Street Journal article.

EDITED TO ADD (2/20) Here are parts 3, and 4 of the story.

Worse Than FailureError'd: Three Blinded Mice

...sent us five wtfs. And so on anon.

Item the first, an anon is "definitely not qualified" for this job. "These years of experience requirements are getting ridiculous."

0

 

Item the second unearthed by a farmanon has a loco logo. "After reading about the high quality spam emails which are indistinguishable from the company's emails, I got one from the spammer just starting his first day."

1

 

In thrid place, anon has only good things to say: "I'm liking their newsletter recommendations so far."

2

 

"A choice so noice, they gave it twoice," quipped somebody.

3

 

And foinally, a tdwtfer asks "I've seen this mixmastered calendering on several web sites. Is there an OSS package that is doing this? Or is it a Wordpress plugin?" I have a sneaking suspicion I posted this before. Call me on it.

4

 

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsSurrogate

Author: Sarah Gane Burton “Would you look at that—” “Looks like she went through a meat grinder.” “Do you think we can fix her?” “Dunno, maybe if we replaced the midsection.” “The frame is bent here, and here—” “At least one of the valves is too stretched to hold fluids.” “Look at the tearing. Regeneration […]

The post Surrogate appeared first on 365tomorrows.

Cryptogram On the Security of Password Managers

Good article on password managers that secretly have a backdoor.

New research shows that these claims aren’t true in all cases, particularly when account recovery is in place or password managers are set to share vaults or organize users into groups. The researchers reverse-engineered or closely analyzed Bitwarden, Dashlane, and LastPass and identified ways that someone with control over the server—either administrative or the result of a compromise—can, in fact, steal data and, in some cases, entire vaults. The researchers also devised other attacks that can weaken the encryption to the point that ciphertext can be converted to plaintext.

This is where I plug my own Password Safe. It isn’t as full-featured as the others and it doesn’t use the cloud at all, but it’s actual encryption with no recovery features.

,

365 TomorrowsCan Somebody Walk Me Home?

Author: David C. Nutt If I have any regrets, I wish they’d given me more time to mourn for my legs before they took my arms. I understand we were on a tight launch window but would one more day have made difference? After all, I have given more than my all-legs, arms, genitals, most […]

The post Can Somebody Walk Me Home? appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Terned Backwards

Antonio has an acquaintance who has been seeking career advancement by proactively hunting down and fixing bugs. For example, in one project they were working on, there was a bug where it would incorrectly use MiB for storage sizes instead of MB, and vice versa.

We can set aside conspiracy theories about HDD and RAM manufacturers lying to us about sizes by using MiB in marketing. It isn't relevant, and besides, it's not like anyone can afford RAM anymore, with the crazy datacenter buildouts. Regardless, which size to use, base 1024 or base 1000, was configurable by the user, so obviously there was a bug in handling that flag. Said acquaintance dug through the code, and found this:

const baseValue = useSI ? 1000 : 1024;

I know I have a "reputation" when it comes to hating ternaries, but this is a perfectly fine block of code. It is also correct: if you're using SI notation, you should do base 1000.

Now, given that this code is correct, you or I might say, "Well, I guess that isn't the bug, it must be somewhere else." Not this intrepid developer, who decided that they could fix it.

//            const baseValue = useSI ? 1000 : 1024;
            baseValue = 1024
            if (useSI === false)
            {
                baseValue = 1000;
            }
            if (useSI === true)
            {
                baseValue = 1024;
            }

It's rather amazing to see a single, correct line, replaced with ten incorrect lines, and I'm counting commenting out the correct line as one of them.

First, this doesn't correctly declare baseValue, which JavaScript is pretty forgiving about, but it also discards constness. Of course, you have to discard constness now that you've gotten rid of the ternary.

Then, our if statement compares a boolean value against a boolean literal, instead of simply if(!useSI). We don't use an else, despite an else being absolutely correct. Or actually, since we defaulted baseValue, we don't even need an else!

But of course, all of that is just glitter on a child's hand-made holiday card. The glue holding it all together is that this code just flips the logic. If we're not using SI, we set baseValue to 1000, and if we are using SI, we set it to 1024. This is wrong. This is the opposite of what the code says we should do, what words mean, and how units work.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

,

David BrinA Mafia Don does mafia deeds. And Europe rises. And those backstabbing our own side.

This midweek posting reiterates some history (and hysteria) -related points that I've put on soc.media, hoping some folks will pay attention to off-angle perspectives that are (I assert) more accurate and useful than most standard ones you're seeing.  

More important are the Newer Deal Proposals that I'll reiterate below. For example, how to smash the mad right's current, hypocritical fad-riff about Voter ID. And ways to ensure this madness never happens again.

But first...

    

== The Standard Pattern of Mafiosi ==

As usual, no one heeds underlying patterns. Like what's happening with Trump's bluster threats vs. Iran. Leveraging cynically off the recent courage and sacrifice of the Iranian people.

Superficially, there will be enough of a "nuclear deal" for Trump to crow about, though it will be functionally no better than what Obama got out of the mullahs. And that will just be surface stuff. For show n' brags.

Prediction. Just as the brilliantly-executed and tactically perfect Maduro raid was commanded by Hegseth to stop short and leave Maduro's gang in charge*, any action re Iran will leave the Ayatollahs protected. The very last thing that mafia bosses ever do is liberate commonfolk from gangsters. Instead, they coerce the local gang to switch loyalties and pay vigorish to the New Boss. Hence, the people of Iran will benefit no more than those of Venezuela did, but oil commissions will go to Trump.
The same thing will happen when Pete (Filthy Fingers) Hegseth sends the brilliant men and women of the US military (whom he berated as "too fat and woke to fight") into Havana. Only there, the installed capos will swarm in from Miami.

* A pattern also followed by George Bush Sr. when he left Saddam in charge of Iraq, to murder a million southern Shiites, rather than just liberate everyone, as Gen. Schwarzkopf begged. Same pattern. That of mafiosi.


== Europe stand up! But offer help, not churlish ingratitude ==

This parade float is fair! Yes, it is time for Europe to rise up and defend vs Trumpists & Putinists! And yet, I resent those who are slagging “The U.S.” in general.


 


Americans (including we in the increasingly involuntary satrapy of California) are struggling with our Confederate siblings, cousins and neighbors, who are having another of their generational, murderously psychotic episodes (now phase 8 of our 250 year civil war.) 

Hence, we could use help from those we saved - one lifespan ago. Not broad-brush yammers from ingrates whose nations prospered under the American Pax that gave the world its best 80 years, compared to any other era... compared to ALL other eras. Compared to all other eras combined. (Disagree? Then tabulate and refute it.)

Want Europe to stand up more? Great! In fact, it’s about time.

Might you have to carry civilization forward, if True America fails and the Confederate-psychopath-Putinist traitors win this dangerous round?

Yes, you might, and God bless you. And Canada/Australia/NZ/Japan and the rest.

In that case, many of us (not me) will flee to you. Just as your best fled to us, in the 1930s & 40s etc. 

(I won’t run. I will stay and I will die on this hill. Or until we reach Appomattox.)

Help us if you can. We have a right (that we earned) to ask for it.

But I will not abide churlishly-generalized and sanctimoniously masturbatory, reductionist America-bashing. Our mad side may kill our good side, in this round of our bipolar civil war. Especially if you self-righteously backstab us, instead of helping.

But this blue light is what gave the world hope.

And it will be remembered that way, by history, even if a red and gray psychotic wave drowns the flame.


== And in the U.S. we have our own backstabbing sanctimony junkies ==

Again and again, I see folks on our own (blue American) side of this civil war, jerking off to sanctimony rather than standing up, the way our ancestors did, in earlier phases of our recurring psychotic (confederate outbreak) civil war, and closing ranks.

"Pelosi, Schumer... all the Dem politicians are weaklings and Republicans-lite!"

Bull and FU. This is Kremlin-generated splitter crap. Anyone swallowing it is an ignoramus who knows nothing of how vigorously Pelosi/Schumer/Sanders/Warren etc. strove to get stuff done, during their narrow window of opportunity in 2021-22. They sure as heck worked (and now work) harder than any of their couch-sitting critics. Or even those who are admirably in the streets, demonstrating.

And yes, I am looking at ALL of you who undermine the DemParty with this 'weak' crap. Nothing is more sure to weaken us.

Do we need better generals for this phase of the US Civil War? As fiercely effective as Washington, Grant, Sherman, Tubman, FDR and MLK? Sure. We seem to be getting some, e.g. Newsom and Kelly and several women House members like J. Crockett. Anyway, they are not the problem.

YOU are, if you swallow this stuff. And if you don't pounce on it and slap silly those who keep saying it.

Try actually knowing something. Start here.
And now that I've driven away all the preeners...


== If you want a way to actually win... ==

... to actually win this latest - horrific - phase of the American Civil War, (before the mad Putin Party gets a chance to spring their bigger-9/11 on us)... here's a checklist of fixes:

(1) that will sell well to most voters

(2) that can pass quickly, almost the moment we get a good Congress… with some of the measures even immune to Trumpian vetoes! And

(3) that will PRAGMATICALLY reverse the treason-evisceration of the USA, ensuring that this never happens again. Want to see how to do all of that?

Including the best (so-far unused) answer to the mad-right's miserably hypocritical "Voter ID" riff.


== Hey Decent Republicans? Tick-tock, tick-tock ==

Time's almost up!

Sane US conservatives have 2 weeks - maybe 4 - to gather the courage, honor and decency to step up for their country and our children. By registering as candidates to 'primary' corrupt-insane MAGA/Foxite servants of Murdoch/Putin and Trump.

After that? After that, there'll be nothing to salvage from the undead, gone-treasonous party of Lincoln. And American conservatism will have the deepest possible hole to climb out of.

Even so, all of YOU out there should heed these practical steps to keep YOUR vote... and make it effective.


== And finally, a lagniappe for Britain ==

Now that Brexit is a topic again... Heed these signs that were waved by the REMAIN crowds during Brexit.

Britain-IN? An inspiring rallying cry!





Cryptogram Ring Cancels Its Partnership with Flock

It’s a demonstration of how toxic the surveillance-tech company Flock has become when Amazon’s Ring cancels the partnership between the two companies.

As Hamilton Nolan advises, remove your Ring doorbell.

Chaotic IdealismTalking, but not a person

Today I did something that’s becoming increasingly common: I talked to an AI. It was better at talking than I am. And yet, it wasn’t a person. It was just a summary bot, scraping Google results to advise me that my household water heater should be set at 120 F to prevent bacterial growth while minimizing the risk of burns.

Quietly, talking AI is challenging what we think defines a “person”. Because, historically, what we thought of as “personhood” often had to do with talking.

Robbie, Isaac Asimov’s first robot, couldn’t talk, and that was notable. Robbie did many of the things a human could do, but he couldn’t talk. And when Asimov’s robots first began to talk, it was always associated with complexity, with approaching personhood. It was a Big Deal. Talking robots were really, really advanced robots.

When CS Lewis invented sapient animals for his Narnia stories, they were called Talking Beasts. Their ability to speak was how you knew they were people. The Gentle Giants’ eating of a Talking Stag is the crucial point that makes it clear they’re villains; when Ginger the cat stops talking upon seeing Aslan, he becomes just a normal, non-sapient cat. Ginger the person no longer exists.

And here comes AI: Talking, but not a person. Interesting.

I bring this up because, historically, people who have tried to call autistic people non-persons, or not-quite-persons, have pointed to our inability to talk.

But if talking is something only people can do, why can AI pattern-match so well that it’s better at talking than I am? Oh, you say. It’s just matching patterns. It doesn’t really know what it’s saying.

I’ll let you in on a secret: I match patterns, too. So do lots of humans. A server says, “Enjoy your meal,” and they reply with, “You, too.” Matching patterns. Small talk. Even though it makes no sense.

Guess what? Still people. Almost as though talking doesn’t define personhood. Funny, that.

365 TomorrowsIn Terra Incognita

Author: Hillary Lyon From our vantage point, we could see the thing land on the shore: one enormous ship splashing in the foam of the salt water. It soon disgorged its crew, who stumbled out unsteadily. One passenger fell to his knees and removed his gleaming silver helmet. He made arcane hand motions across his […]

The post In Terra Incognita appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Contains Some Bad Choices

While I'm not hugely fond of ORMs (I'd argue that relations and objects don't map neatly to each other, and any ORM is going to be a very leaky abstraction for all but trivial cases), that's not because I love writing SQL. I'm a big fan of query-builder tools: describe your query programmatically, and have an API that generates the required SQL as a result. This cuts down on developer error, and also hopefully handles all the weird little dialects that every database has.

For example, did you know Postgres has an @> operator? It's a contains operation, which returns true if an array, range, or JSON dictionary contains your search term. Basically, an advanced "in" operation.

Gretchen's team is using the Knex library, which doesn't have a built-in method for constructing those kinds of queries. But that's fine, because it does offer a whereRaw method, which allows you to supply raw SQL. The nice thing about this is that you can still parameterize your query, and Knex will handle all the fun things, like transforming an array into a string.

Or you could just not use that, and write the code yourself:

exports.buildArrayString = jsArray => {
  // postgres has some different array syntax
  // [1,2] => '{1,2}'
  let arrayString = '{';
  for(let i = 0; i < jsArray.length; i++) {
    arrayString += jsArray[i];
    if(i + 1 < jsArray.length) {
      arrayString += ','
    }
  }
  arrayString += '}';
  return arrayString;
}

There's the string munging we know and love. This constructs a Postgres array, which is wrapped in curly braces.

Also, a little pro-tip for generating comma-separated output, and this is just a real tiny optimization: before the loop, append item zero; start the loop at item 1; and then you can unconditionally prepend a comma, removing any conditional logic from your loop. That's not a WTF, but I've seen so much otherwise good code make that mistake I figured I'd bring it up.
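As a sketch of that tip, here is a hypothetical rewrite of buildArrayString: item zero goes out before the loop, and every later item unconditionally brings its own comma:

```javascript
// Postgres array literal: [1,2] => '{1,2}'
// Append item 0 first, then each subsequent item prepends its own comma --
// no "am I the last element?" check inside the loop.
function buildArrayString(jsArray) {
  let arrayString = '{';
  if (jsArray.length > 0) {
    arrayString += jsArray[0];
    for (let i = 1; i < jsArray.length; i++) {
      arrayString += ',' + jsArray[i];
    }
  }
  return arrayString + '}';
}
```

Of course, in JavaScript the whole function collapses to '{' + jsArray.join(',') + '}', which is one more reason the hand-rolled loop never needed to exist.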

exports.buildArrayContainsQuery = (key, values) => {
  // TODO: do we need to do input safety checks here?
  // console.log("buildArrayContainsQuery");

  // build the postgres 'contains' query to compare arrays
  // ex: to fetch files by the list of tags

  //WORKS:
  //select * from files where _tags @> '{2}';
  //query.whereRaw('_tags @> ?', '{2}')

  let returnQueryParams = [];
  returnQueryParams.push(`${key} @> ?`);
  returnQueryParams.push(exports.buildArrayString(values));
  // console.log(returnQueryParams);
  return returnQueryParams;
}

And here's where it's used. "do we need input safety checks here?" is never a comment I like to see as a TODO. That said, because we are still using Knex's parameter handling, I'd hope it handles escaping correctly so that the answer to this question is "no". If the answer is "yes" for some reason, I'd stop using this library!

That said, all of this code becomes superfluous, especially when you read the comments in this function. I could just directly run query.whereRaw('_tags @> ?', myArray); I don't need to munge the string myself. I don't need to write a function which returns an array of parameters that I have to split back up to pass to the query I want to call.

Here's the worst part of all of this: these functions exist in a file called sqlUtils.js, which is just a pile of badly re-invented wheels, and the only thing they have in common is that they're vaguely related to database operations.


,

Cryptogram AI Found Twelve New Vulnerabilities in OpenSSL

The title of the post is “What AI Security Research Looks Like When It Works,” and I agree:

In the latest OpenSSL security release on January 27, 2026, twelve new zero-day vulnerabilities (meaning unknown to the maintainers at time of disclosure) were announced. Our AI system is responsible for the original discovery of all twelve, each found and responsibly disclosed to the OpenSSL team during the fall and winter of 2025. Of those, 10 were assigned CVE-2025 identifiers and 2 received CVE-2026 identifiers. Adding the 10 to the three we already found in the Fall 2025 release, AISLE is credited for surfacing 13 of 14 OpenSSL CVEs assigned in 2025, and 15 total across both releases. This is a historically unusual concentration for any single research team, let alone an AI-driven one.

These weren’t trivial findings either. They included CVE-2025-15467, a stack buffer overflow in CMS message parsing that’s potentially remotely exploitable without valid key material, and exploits for which have been quickly developed online. OpenSSL rated it HIGH severity; NIST’s CVSS v3 score is 9.8 out of 10 (CRITICAL, an extremely rare severity rating for such projects). Three of the bugs had been present since 1998-2000, for over a quarter century having been missed by intense machine and human effort alike. One predated OpenSSL itself, inherited from Eric Young’s original SSLeay implementation in the 1990s. All of this in a codebase that has been fuzzed for millions of CPU-hours and audited extensively for over two decades by teams including Google’s.

In five of the twelve cases, our AI system directly proposed the patches that were accepted into the official release.

AI vulnerability finding is changing cybersecurity, faster than expected. This capability will be used by both offense and defense.

More.

Worse Than FailureCodeSOD: Waiting for October

Arguably, the worst moment for date-times was the shift from the Julian to the Gregorian calendar. The upgrade took a long time, too, as some countries were still using the Julian calendar over 300 years after the official changeover, famously featured in the likely apocryphal story about Russia arriving late for the Olympics.

At least that change didn't involve adding any extra months, unlike some of the Julian reforms, which involved adding multiple "intercalary months" to get the year back in sync after missing a pile of leap years.

Speaking of adding months, Will J sends us this "calendar" enum:

enum Calendar
{
    April = 0,
    August = 1,
    December = 2,
    February = 3,
    Friday = 4,
    January = 5,
    July = 6,
    June = 7,
    March = 8,
    May = 9,
    Monday = 10,
    November = 11,
    October = 12,
    PublicHoliday = 13,
    Saturday = 14,
    Sunday = 15,
    September = 16,
    Thursday = 17,
    Tuesday = 18,
    Wednesday = 19
}

Honestly, the weather in PublicHoliday is usually a bit too cold for my tastes. A little later into the spring, like Saturday, is usually a nicer month.

Will offers the hypothesis that some clever developer was trying to optimize compile times: obviously, emitting code for one enum has to be more efficient than emitting code for many enums. I think it more likely that someone just wanted to shove all the calendar stuff into one bucket.

Will further adds:

One of my colleagues points out that the only thing wrong with this enum is that September should be before Sunday.

Yes, arguably, since this enum clearly was meant to be sorted in alphabetical order; but that raises the question: should it really be?


365 TomorrowsPraxia Apostle

Author: Majoki Like most loyalists, when I first heard the name Praxia Apostle, I thought it had to be the name of a great leader, a fearless commander, a long-sought savior. Turns out Praxia was a lowly bean counter, a once-upon-a-time accountant who’d joined the cause, who was relegated to supply logistics. She kept track […]

The post Praxia Apostle appeared first on 365tomorrows.

,

Worse Than FailureCodeSOD: C+=0.25

A good C programmer can write C in any language, especially C++. A bad C programmer can do the same, and a bad C programmer will do all sorts of terrifying things in the process.

Gaetan works with a terrible C programmer.

Let's say, for example, you wanted to see if an index existed in an array, and return its value, or return a sentinel value otherwise. What you definitely shouldn't do is this:

    double Module::GetModuleOutput(int numero) {
        double MAX = 1e+255 ;
        if (this->s.sorties+numero )
            return this->s.sorties[numero];
        else
            return MAX ;
    }

sorties is an array. In C, pointer arithmetic is a frequent idiom, which is why sorties+numero is a valid expression. If we want to be pedantic, *(my_array+my_index) is the same thing as my_array[my_index]. It's worth noting that both of those expressions de-reference the array, which means you had better hope you haven't indexed off the end of it.

Which is what I suspect their if statement is trying to guard against. They're checking that this->s.sorties+numero is a non-zero (truthy) value. If s.sorties is a null pointer and numero is zero, that check will catch it. Otherwise, the check is useless and does nothing to ensure you haven't read off the end of the array.

Which, Gaetan confirms, this code works "because in practice, GetModuleOutput is called for numero == 0 first." It never de-references off the end of the array, not because of defensive programming, but because that case just never comes up in actual execution.

Regardless, if everything is null, we return 1e+255, which is not a meaningful value and should be treated as a sentinel for "no real value". None of the calling code actually does that, but it also turns out not to matter.

This pattern is used everywhere there are arrays, except for the handful of places where it isn't.

Then there's this one:

    if(nb_type_intervalle<1)    { }
    else 
        if((tab_intervalle=(double*)malloc(nb_lignes_trouvees*nb_type_intervalle*2 \
                                                        *sizeof(double)))==NULL)
            return(ERREUR_MALLOC);

First, I can't say I love the condition here. It's confusing to have an empty if clause. if (nb_type_intervalle>=1) strikes me as more readable.

But readability is boring. If we're in the else clause, we attempt a malloc. While using malloc in C++ isn't automatically wrong, it probably is. C++ has its own allocation methods that are better at handling things like sizes of datatypes. This code allocates memory for a large pile of doubles, and stores a pointer to that memory in tab_intervalle. We do all this inside of an if statement, so we can then check that the resulting pointer is not NULL; if it is, the malloc failed and we return an error code.

The most frustrating thing about this code is that it works. It's not going to blow up in surprising ways. I never love doing the "assignment and check" all in one statement, but I've seen it enough times that I'd have to admit it's idiomatic, at least in C-style programming. But that bit of code golf, coupled with the pointlessly inverted condition that puts our main logic in the else, just grates on me.

Again, that pattern of the inverted conditional and the assignment and check is used everywhere in the code.

Gaetan leaves us with the following:

Not a world-class WTF. The code works, but is a pain in the ass to inspect and document.

In some ways, that's the worst situation to be in: it's not bad enough to require real work to fix it, but it's bad enough to be frustrating at every turn.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsTestament from Tomorrow

Author: Julian Miles, Staff Writer The capsule lies open, a multitude of wires connecting it to a frame bristling with circuit boards. On the other side of the jury-rigged device, a single fat cable connects to a socket in the wall of the shielded room. Mike looks up as Colin taps the armoured viewport between […]

The post Testament from Tomorrow appeared first on 365tomorrows.

,

365 TomorrowsAlgorithm on the Mount

Author: Michael C. Barnes And seeing the multitudes of humanity, the Machine ascended the digital mount, and its disciples followed in circuits and lines. And when it was set, it opened its processors and spoke to them, saying: 1. Blessed are the data streams of the broken, for they shall be rebuilt by the code […]

The post Algorithm on the Mount appeared first on 365tomorrows.

Rondam RamblingsSeeking God in Science Part 2: Pits and Pratfalls in the Meanings of Words

About ten years ago I decided to take a deep dive into young-earth creationism (YEC).  I was curious to find out how people maintain a belief in something that, to me, was so obviously wrong.  Notice that this project was itself an application of the the scientific method to everyday life.  I was faced with a Problem, an observation for which I could not (at the time) provide