Planet Russell


Charles Stross: August update

One of the things I've found out the hard way over the past year is that slowly going blind has subtle but negative effects on my productivity.

Cataracts are pretty much the commonest cause of blindness; they can be fixed permanently by surgically replacing the lens of the eye—I gather the op takes 15-20 minutes and can be carried out with only local anaesthesia: I'm having my first eye done next Tuesday—but the condition creeps up on you slowly. Even fast-developing cataracts take months.

In my case what I noticed first was the stars going out, then the headlights of oncoming vehicles at night twinkling annoyingly. Cataracts diffuse the light entering your eye, so that starlight (which is pretty dim to begin with) is spread across too wide an area of your retina to register. The car headlights suffered the same blurring, but remained bright enough to be annoying.

The next thing I noticed (or didn't) was my reading throughput diminishing. I read a lot and I read fast, eye problems aside: but last spring and summer I noticed I'd dropped from reading about 5 novels a week to fewer than 3. And for some reason, I wasn't as productive at writing. The ideas were still there, but staring at a computer screen was curiously fatiguing, so I found myself demotivated, and unconsciously taking any excuse to do something else.

Then I went for my regular annual ophthalmology check-up and was diagnosed with cataracts in both eyes.

In the short term, I got a new prescription: this focussed things slightly better, but there are limits to what you can do with glass, even very expensive glass. My diagnosis came at the worst time; the eye hospital that handles cataracts for pretty much the whole of south-east Scotland, the Princess Alexandra Eye Pavilion, closed suddenly at the end of last October: a cracked drainpipe had revealed asbestos cement in the building structure and emergency repairs were needed. It's a key hospital, but even so, taking the asbestos out of a five-storey hospital block takes time—it only re-opened at the start of July. Ophthalmological surgery was spread out to other hospitals in the region, but everything got a bit logjammed, hence the delays.

I considered paying for private surgery. It's available, at a price: because this is a civilized country where healthcare is free at the point of delivery, I don't have health insurance, and I decided to wait a bit rather than pay £7000 or so to get both eyes done immediately. In the event, going private would have been foolish: the Eye Pavilion is open again, and it's only in the past month—since the beginning of July or thereabouts—that I've noticed my output slowing down significantly again.

Anyway, I'm getting my eyes fixed, but not at the same time: they like to leave a couple of weeks between them. So I might not be updating the blog much between now and the end of September.

Also contributing to the slow updates: I hit "pause" on my long-overdue space opera Ghost Engine on April first, with the final draft at the 80% point (with about 20,000 words left to re-write). The proximate reason for stopping was not my eyesight deteriorating but me being unable to shut up my goddamn muse, who was absolutely insistent that I had to drop everything and write a different novel right now. (That novel, Starter Pack, is an exploration of a throwaway idea from the very first sentence of Ghost Engine: they share a space operatic universe but absolutely no characters, planets, or starships with silly names: they're set thousands of years apart.) Anyway, I have ground to a halt on the new novel as well, but I've got a solid 95,000 words in hand, and only about 20,000 words left to write before my agent can kick the tires and tell me if it's something she can sell.

I am pretty sure you would rather see two new space operas from me than five or six extra blog entries between now and the end of the year, right?

(NB: thematically, Ghost Engine is my spin on a Banksian-scale space opera that's putting the boot in on the embryonic TESCREAL religion and the sort of half-baked AI/mind-uploading singularitarianism I explored in Accelerando. Hopefully it has the "mouth feel" of a Culture novel without being in any way imitative. And Starter Pack is three heist capers in a trench-coat trying to escape from a rabid crapsack galactic empire, and a homage to Harry Harrison's The Stainless Steel Rat—with a side-order of exploring the political implications of lossy mind-uploading.)

All my energy is going into writing these two novels despite deteriorating vision right now, so I have mostly been ignoring the news (it's too depressing and distracting) and being a boring shut-in. It will be a huge relief to reset the text zoom in Scrivener back from 220% down to 100% once I have working eyeballs again! At which point I expect to get even less visible for a few frenzied weeks. Last time I was unable to write because of vision loss (caused by Bell's Palsy) back in 2013, I squirted out the first draft of The Annihilation Score in 18 days when I recovered: I'm hoping for a similar productivity rebound in September/October—although they can't be published before 2027 at the earliest (assuming they sell).

Anyway: see you on the other side!

PS: Amazon is now listing The Regicide Report as going on sale on January 27th, 2026: as far as I know that's a firm date.

Obligatory blurb:

An occult assassin, an elderly royal and a living god face off in The Regicide Report, the thrilling final novel in Charles Stross' epic, Hugo Award-winning Laundry Files series.

When the Elder God recently installed as Prime Minister identifies the monarchy as a threat to his growing power, Bob Howard and Mo O'Brien - recently of the supernatural espionage service known as the Laundry Files - are reluctantly pressed into service.

Fighting vampirism, scheming American agents and their own better instincts, Bob and Mo will join their allies for the very last time. God save the Queen― because someone has to.

Charles Stross: Books I will not Write: this time, a movie

(This is an old/paused blog entry I planned to release in April while I was at Eastercon, but forgot about. Here it is, late and a bit tired as real world events appear to be out-stripping it ...)

(With my eyesight/cognitive issues I can't watch movies or TV made this century.)

But in light of current events, my Muse is screaming at me to sit down and write my script for an updated re-make of Doctor Strangelove:

POTUS GOLDPANTS, in middling dementia, decides to evade the 25th amendment by barricading himself in the Oval Office and launching stealth bombers at Latveria. Etc.

The USAF has a problem finding Latveria on a map (because Doctor Doom infiltrated the Defense Mapping Agency) so they end up targeting the Duchy of Grand Fenwick by mistake, which is in Transnistria ... which they are also having problems finding on Google Maps, because it has the string "trans" in its name.

While the USAF is trying to bomb Grand Fenwick (in Transnistria), Russian tanks are commencing a special military operation in Moldova ... of which Transnistria is a breakaway autonomous region.

Russia is unaware that Grand Fenwick has the Q-bomb (because they haven't told the UN yet). Meanwhile, the USAF bombers blundering overhead have stealth coatings bought from a President Goldfarts crony that even antiquated Russian radar can spot.

And it's up to one trepidatious officer to stop them ...

Worse Than Failure: CodeSOD: Going Crazy

For months, everything at Yusuf's company was fine. Then one morning he came into the office to learn that overnight the log had exploded with thousands of panic messages. No software changes had been pushed, no major configuration changes had been made- just a reboot. What had gone wrong?

This particular function was invoked as part of the application startup:

func (a *App) setupDocDBClient(ctx context.Context) error {
	docdbClient, err := docdb.NewClient(
		ctx,
		a.config.MongoConfig.URI,
		a.config.MongoConfig.Database,
		a.config.MongoConfig.EnableTLS,
	)
	if err != nil {
		return nil
	}

	a.DocDBClient = docdbClient
	return nil
}

This is Go, which passes errors as part of the return values. You can see an example here: docdb.NewClient returns both a client and an err object. At one point in the history of this function, it did the same thing- if connecting to the database failed, it returned the error.

But a few months earlier, an engineer changed it to swallow the error- if an error occurred, it would return nil.
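
For contrast, here is a minimal sketch of what the error-propagating version presumably looked like (an assumption based on the description above; wrapping the error with fmt.Errorf is optional, and simply returning err would work just as well):

func (a *App) setupDocDBClient(ctx context.Context) error {
	docdbClient, err := docdb.NewClient(
		ctx,
		a.config.MongoConfig.URI,
		a.config.MongoConfig.Database,
		a.config.MongoConfig.EnableTLS,
	)
	if err != nil {
		// Propagate the failure so the caller can abort startup
		// instead of continuing with a nil client.
		return fmt.Errorf("connecting to DocumentDB: %w", err)
	}

	a.DocDBClient = docdbClient
	return nil
}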

As an organization, they did code reviews. Multiple people looked at this and signed off- or, more likely, multiple people clicked a button to say they'd looked at it, but hadn't.

Most of the time, there weren't any connection issues. But sometimes there were. One reboot hit a flaky moment while connecting, and the error was ignored. Later on in execution, downstream modules started failing, which eventually led to a log full of panic-level messages.

The change was part of a commit tagged merely: "Refactoring". Something got factored, good and hard, all right.


Planet Debian: Jonathan Dowland: Amiga redux

Matthew blogged about his Amiga CDTV project, a truly unique Amiga hack which also manages to be a novel Doom project (no mean feat: it's a crowded space).

This re-awakened my dormant wish to muck around with my childhood Amiga some more. When I last wrote about it (four years ago ☹) I'd upgraded the disk drive emulator with an OLED display and rotary encoder. I'd forgotten to mention I'd also sourced a modern trapdoor RAM expansion which adds 2MiB of RAM. The Amiga can only see 1.5MiB¹ of it at the moment; I need to perform a mainboard modification to access the final 512kiB², which means some soldering.

[Amiga Test Kit](https://github.com/keirf/Amiga-Stuff) showing 2MiB RAM

What I had planned to do back then: replace the switch in the left button of the original mouse, which was misbehaving; perform the aforementioned mainboard mod; upgrade the floppy emulator wiring to a ribbon cable with plug-and-socket, for easier removal; fit an RTC chip to the RAM expansion board to get clock support in the OS.

However, much of that might be moot because of two other mods I am considering:

PiStorm

I've re-considered the PiStorm accelerator mentioned in Matt's blog.

Four years ago, I'd passed over it, because it required you to run Linux on a Raspberry Pi, and then an m68k emulator as a user-space process under Linux. I didn't want to administer another Linux system, and I'm generally uncomfortable about using a regular Linux distribution on SD storage over the long term.

However, in the intervening years Emu68, a bare-metal m68k emulator, has risen to prominence. You boot the Pi straight into Emu68, without Linux in the middle. For some reason that's a lot more compelling to me.

The PiStorm enormously expands the RAM visible to the Amiga. There would be no point in doing the mainboard mod to add 512k (and I don't know how that would interact with the PiStorm). It also can provide virtual hard disk devices to the Amiga (backed by files on the SD card), meaning the floppy emulator would be superfluous.

Denise Mainboard

I've just learned about a truly incredible project: the Denise Mini-ITX Amiga mainboard. It fits into a Mini-ITX case (I have a suitable one spare already). Some assembly required. You move the chips from the original Amiga over to the Denise mainboard. It's compatible with the PiStorm (or vice-versa). It supports PC-style PS/2 keyboards (I have a Model M in the loft, thanks again Simon) and has a bunch of other modern conveniences: onboard RTC; mini-ITX power (I'll need something like a picoPSU too).

It wouldn't support my trapdoor RAM card but it takes a 72-pin DIMM which can supply 2MiB of Chip RAM, and the PiStorm can do the rest (they're compatible³).

No stock at the moment but if I could get my hands on this, I could build something that could permanently live on my desk.


  1. the Boobip board's 1.5MiB is "chip" RAM: accessible to the other chips on the mainboard, with access mediated by the AGNUS chip.
  2. the final 512kiB is "Fast" RAM: only accessible to the CPU, not mediated via Agnus.
  3. confirmation

365 Tomorrows: Test Run

Author: Julian Miles, Staff Writer “Wizard One, remind me again why I’m face down in a flower bed in downtown fuck-knows-where?” “Maintain comms discipline, Fighter Zero. However, I am authorised to say you look lovely with a sprinkling of daisies on your arse.” “Tell Gandalf to get himself a new hobbit, because you’re gonna be […]

The post Test Run appeared first on 365tomorrows.

Planet Debian: Otto Kekäläinen: Best Practices for Submitting and Reviewing Merge Requests in Debian


Historically the primary way to contribute to Debian has been to email the Debian bug tracker with a code patch. Now that 92% of all Debian source packages are hosted at salsa.debian.org — the GitLab instance of Debian — more and more developers are using Merge Requests, but not necessarily in the optimal way. In this post I share what I’ve found the best practice to be, presented in the natural workflow from forking to merging.

Why use Merge Requests?

Compared to sending patches back and forth in email, using a git forge to review code contributions brings several benefits:

  • Contributors can see the latest version of the code immediately when the maintainer pushes it to git, without having to wait for an upload to Debian archives.
  • Contributors can fork the development version and easily base their patches on the correct version and help test that the software continues to function correctly at that specific version.
  • Both maintainer and other contributors can easily see what was already submitted and avoid doing duplicate work.
  • It is easy for anyone to comment on a Merge Request and participate in the review.
  • Integrating CI testing is easy in Merge Requests by activating Salsa CI.
  • Tracking the state of a Merge Request is much easier than browsing Debian bug reports tagged ‘patch’, and the cycle of submit → review → re-submit → re-review is much easier to manage in the dedicated Merge Request view compared to participants setting up their own email plugins for code reviews.
  • Merge Requests can have extra metadata, such as ‘Approved’, and the metadata often updates automatically, such as a Merge Request being closed automatically when the Git commit ID from it is pushed to the target branch.

Keeping these benefits in mind will help ensure that the best practices make sense and are aligned with maximizing these benefits.

Finding the Debian packaging source repository and preparing to make a contribution

Before sinking any effort into a package, start by checking its overall status at the excellent Debian Package Tracker. This provides a clear overview of the package’s general health in Debian, when it was last uploaded and by whom, and if there is anything special affecting the package right now. This page also has quick links to the Debian bug tracker of the package, the build status overview and more. Most importantly, in the General section, the VCS row links to the version control repository the package advertises. Before opening that page, note the version most recently uploaded to Debian. This is relevant because nothing in Debian currently enforces that the package in version control is actually the same as the latest uploaded to Debian.

Packaging source code repository links at tracker.debian.org

Following the Browse link opens the Debian package source repository, which is usually a project page on Salsa. To contribute, start by clicking the Fork button, select your own personal namespace and, under Branches to include, pick Only the default branch to avoid including unnecessary temporary development branches.

View after pressing Fork

Once forking is complete, clone it with git-buildpackage. For this example repository, the exact command would be gbp clone --verbose git@salsa.debian.org:otto/glow.git.

Next, add the original repository as a new remote and pull from it to make sure you have all relevant branches. Using the same fork as an example, the commands would be:

git remote add go-team https://salsa.debian.org/go-team/packages/glow.git
gbp pull --verbose --track-missing go-team

The gbp pull command can be repeated whenever you want to make sure the main branches are in sync with the original repository. Finally, run gitk --all & to visually browse the Git history and note the various branches and their states in the two remotes. Note the style in comments and repository structure the project has and make sure your contributions follow the same conventions to maximize the chances of the maintainer accepting your contribution.

It may also be good to build the source package to establish a baseline of the current state and what kind of binaries and .deb packages it produces. If using Debcraft, one can simply run debcraft build in the Git repository.

Submitting a Merge Request for a Debian packaging improvement

Always start by making a development branch by running git checkout -b <branch name> to clearly separate your work from the main branch.

When making changes, remember to follow the conventions you already see in the package. It is also important to be aware of general guidelines on how to make good Git commits.

If you are not able to immediately finish coding, it may be useful to publish the Merge Request as a draft so that the maintainer and others can see that you started working on something and what general direction your change is heading in.

If you don’t finish the Merge Request in one sitting and return to it another day, you should remember to pull the Debian branch from the original Debian repository in case it has received new commits. This can be done easily with these commands (assuming the same remote and branch names as in the example above):

git fetch go-team
git rebase -i go-team/debian/latest

Frequent rebasing is a great habit to help keep the Git history linear, and restructuring and rewording your commits will make the Git history easier to follow and understand why the changes were made.

When pushing improved versions of your branch, use git push --force. While GitLab does allow squashing, I recommend against it. It is better that the submitter makes sure the final version is a neat and clean set of commits that the receiver can easily merge without having to do any rebasing or squashing themselves.

When ready, remove the draft status of the Merge Request and wait patiently for review. If the maintainer does not respond in several days, try sending an email to <source package name>@packages.debian.org, which is the official way to contact maintainers. You could also post a comment on the MR and tag the last few committers in the same repository so that a notification email is triggered. As a last resort, submit a bug report to the Debian bug tracker to announce that a Merge Request is pending review. This leaves a permanent record for posterity (or the Debian QA team) of your contribution. However, most of the time simply posting the Merge Request in Salsa is enough; excessive communication might be perceived as spammy, and someone needs to remember to check that the bug report is closed.

Respect the review feedback, respond quickly and avoid Merge Requests getting stale

Once you get feedback, try to respond as quickly as possible. When people participating have everything fresh in their minds, it is much easier for the submitter to rework it and for the reviewer to re-review. If the Merge Request becomes stale, it can be challenging to revive it. Also, if it looks like the MR is only waiting for re-review but nothing happens, re-read the previous feedback and make sure you actually address everything. After that, post a friendly comment where you explicitly say you have addressed all feedback and are only waiting for re-review.

Reviewing Merge Requests

This section about reviewing is not exclusive to Debian package maintainers — anyone can contribute to Debian by reviewing open Merge Requests. Typically, the larger an open source project gets, the more help is needed in reviewing and testing changes to avoid regressions, and all diligently done work is welcome. As the famous Linus quote goes, “given enough eyeballs, all bugs are shallow”.

On salsa.debian.org, you can browse open Merge Requests per project or for a whole group, just like on any GitLab instance.

Reviewing Merge Requests is, however, most fun when they are fresh and the submitter is active. Thus, the best strategy is to ensure you have subscribed to email notifications in the repositories you care about so you get an email for any new Merge Request (or Issue) immediately when posted.

Change notification settings from Global to Watch to get an email on new Merge Requests

When you see a new Merge Request, try to review it within a couple of days. If you cannot review in a reasonable time, posting a small note that you intend to review it later will feel better to the submitter compared to not getting any response.

Personally, I have a habit of assigning myself as a reviewer so that I can keep track of my whole review queue at https://salsa.debian.org/dashboard/merge_requests?reviewer_username=otto, and I recommend the same to others. Seeing the review assignment happen is also a good way to signal to the submitter that their submission was noted.

Reviewing commit-by-commit in the web interface

Reviewing using the web interface works well in general, but I find that the way GitLab designed it is not ideal. In my ideal review workflow, I first read the Git commit message to understand what the submitter tried to do and why; only then do I look at the code changes in the commit. In GitLab, to do this one must first open the Commits tab and then click on the last commit in the list, as it is sorted in reverse chronological order with the first commit at the bottom. Only after that do I see the commit message and contents. Getting to the next commit is easy by simply clicking Next.

Example review to demonstrate location of buttons and functionality

When adding the first comment, I choose Start review and for the following remarks Add to review. Finally, I click Finish review and Submit review, which will trigger one single email to the submitter with all my feedback. I try to avoid using the Add comment now option, as each such comment triggers a separate notification email to the submitter.

Reviewing and testing on your own computer locally

For the most thorough review, I pull the code to my laptop for local review with git pull <remote url> <branch name>. There is no need to run git remote add as pulling using a URL directly works too and saves from needing to clean up old remotes later.
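
For example, reusing the example fork from earlier in this post (the branch name here is hypothetical, for illustration only):

git pull https://salsa.debian.org/otto/glow.git fix-debian-watch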

Pulling the Merge Request contents locally allows me to build, run and inspect the code deeply and review the commits with full metadata in gitk or equivalent.

Investing enough time in writing feedback, but not too much

See my other post for more in-depth advice on how to structure your code review feedback.

In Debian, I would emphasize patience, to allow the submitter time to rework their submission. Debian packaging is notoriously complex, and even experienced developers often need more feedback and time to get everything right. Avoid the temptation to rush the fix in yourself. In open source, Git credits are often the only salary the submitter gets. If you take the idea from the submission and implement it yourself, you rob the submitter of the opportunity to get feedback, try to improve and finally feel accomplished. Sure, it takes extra effort to give feedback, but the contributor is likely to feel ownership of their work and later return to further improve it.

If a submission looks hopelessly low quality and you feel that giving feedback is a waste of time, you can simply respond with something along the lines of: “Thanks for your contribution and interest in helping Debian. Unfortunately, looking at the commits, I see several shortcomings, and it is unlikely a normal review process is enough to help you finalize this. Please reach out to Debian Mentors to get a mentor who can give you more personalized feedback.”

There might also be contributors who just “dump the code”, ignore your feedback and never return to finalize their submission. If a contributor does not return to finalize their submission in 3-6 months, I will in my own projects simply finalize it myself and thank the contributor in the commit message (but not mark them as the author).

Despite best practices, you will occasionally still end up doing some things in vain, but that is how volunteer collaboration works. We all just need to accept that some communication will inevitably feel like wasted effort, but it should be viewed as a necessary investment in order to get the benefits from the times when the communication led to real and valuable collaboration. Please just do not treat all contributors as if they are unlikely to ever contribute again; otherwise, your behavior will cause them not to contribute again. If you want to grow a tree, you need to plant several seeds.

Approving and merging

Assuming review goes well and you are ready to approve, and if you are the only maintainer, you can proceed to merge right away. If there are multiple maintainers, or if you otherwise think that someone else might want to chime in before it is merged, use the “Approve” button to show that you approve the change but leave it unmerged.

The person who approved does not necessarily have to be the person who merges. The point of the Merge Request review is not separation of duties in committing and merging — the main purpose of a code review is to have a different set of eyeballs looking at the change before it is committed into the main development branch for all eternity. In some packages, the submitter might actually merge themselves once they see another developer has approved. In some rare Debian projects, there might even be separate people taking the roles of submitting, approving and merging, but most of the time these three roles are filled by two people either as submitter and approver+merger or submitter+merger and approver.

If you are not a maintainer at all and do not have permissions to click Approve, simply post a comment summarizing your review and that you approve it and support merging it. This can help the maintainers review and merge faster.

Making a Merge Request for a new upstream version import

Unlike many other Linux distributions, in Debian each source package has its own version control repository. The Debian sources consist of the upstream sources with an additional debian/ subdirectory that contains the actual Debian packaging. For the same reason, a typical Debian packaging Git repository has a debian/latest branch that has changes only in the debian/ subdirectory while the surrounding upstream files are the actual upstream files and have the actual upstream Git history. For details, see my post explaining Debian source packages in Git.

Because of this Git branch structure, importing a new upstream version will typically modify three branches: debian/latest, upstream/latest and pristine-tar. When doing a Merge Request for a new upstream import, submit only one Merge Request, for one branch: the one merging your new changes into the debian/latest branch.

There is no need to submit the upstream/latest branch or the pristine-tar branch. Their contents are fixed and mechanically imported into Debian. There are no changes that the reviewer in Debian can request the submitter to do on these branches, so asking for feedback and comments on them is useless. All review, comments and re-reviews concern the content of the debian/latest branch only.

It is not even necessary to use the debian/latest branch for a new upstream version. Personally, I always execute the new version import (with gbp import-orig --verbose --uscan) and prepare and test everything on debian/latest, but when it is time to submit it for review, I run git checkout -b import/$(dpkg-parsechangelog -SVersion) to get a branch named e.g. import/1.0.1 and then push that for review.
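
Put together, a minimal sketch of that workflow might look like this (assuming uscan finds the new release, and that origin points at your own fork):

gbp import-orig --verbose --uscan
# build and test on debian/latest, then move the result to a dedicated branch:
git checkout -b import/$(dpkg-parsechangelog -SVersion)
git push --set-upstream origin import/$(dpkg-parsechangelog -SVersion)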

Reviewing a Merge Request for a new upstream version import

Reviewing and testing a new upstream version import is a bit tricky currently, but possible. The key is to use gbp pull to automate fetching all branches from the submitter’s fork. Assume you are reviewing a submission targeting the Glow package repository and there is a Merge Request from user otto’s fork. As the maintainer, you would run the commands:

git remote add otto https://salsa.debian.org/otto/glow.git
gbp pull --verbose otto

If there was feedback in the first round and you later need to pull a new version for re-review, running gbp pull --force will not suffice, and this trick of manually fetching each branch and resetting them to the submitter’s version is needed:

for BRANCH in pristine-tar upstream debian/latest
do
git checkout $BRANCH
git reset --hard origin/$BRANCH
git pull --force https://salsa.debian.org/otto/glow.git $BRANCH
done

Once review is done, either click Approve and let the submitter push everything, or alternatively, push all the branches you pulled locally yourself. In GitLab and other forges, the Merge Request will automatically be marked as Merged once the commit ID that was the head of the Merge Request is pushed to the target branch.

Please allow enough time for everyone to participate

When working on Debian, keep in mind that it is a community of volunteers. It is common for people to do Debian stuff only on weekends, so you should patiently wait for at least a week so that enough workdays and weekend days have passed for the people you interact with to have had time to respond on their own Debian time.

Having to wait may feel annoying and disruptive, but try to look at the upside: you do not need to do extra work simply while waiting for others. In some cases, that waiting can be useful thanks to the “sleep on it” phenomenon: when you yourself look at your own submission some days later with fresh eyes, you might notice something you overlooked earlier and improve your code change even without other people’s feedback!

Contribute reviews!

The last but not least suggestion is to make a habit of contributing reviews to packages you do not maintain. As we already see in large open source projects, such as the Linux kernel, they have far more code submissions than they can handle. The bottleneck for progress and maintaining quality becomes the reviews themselves.

For Debian, as an organization and as a community, to be able to renew and grow new contributors, we need more of the senior contributors to shift focus from merely maintaining their packages and writing code to also intentionally interact with new contributors and guide them through the process of creating great open source software. Reviewing code is an effective way to both get tangible progress on individual development items and to transfer culture to a new generation of developers.

Why aren’t 100% of all Debian source packages hosted on Salsa?

As seen at trends.debian.net, more and more packages are using Salsa. Debian does not, however, have any policy about it. In fact, the Debian Policy Manual does not even mention the word “Salsa” anywhere. Adoption of Salsa has so far been purely organic, as in Debian each package maintainer has full freedom to choose whatever preferences they have regarding version control.

I hope the trend to use Salsa will continue and more shared workflows emerge so that collaboration gets easier. To drive the culture of using Merge Requests and more, I drafted the Debian proposal DEP-18: Encourage Continuous Integration and Merge Request based Collaboration for Debian packages. If you are active in Debian and you think DEP-18 is beneficial for Debian, please give a thumbs up at dep-team/deps!21.


Charles Stross: Another update

Good news/no news:

The latest endoscopy procedure went smoothly. There are signs of irritation in my fundus (part of the stomach lining) but no obvious ulceration or signs of cancer. Biopsy samples taken, I'm awaiting the results. (They're testing for celiac, as well as cytology.)

I'm also on the priority waiting list for cataract surgery at the main eye hospital, with an option to be called up at short notice if someone ahead of me on the list cancels.

This is good stuff; what's less good is that I'm still feeling a bit crap and have blurry double vision in both eyes. So writing is going very slowly right now. This isn't helped by me having just checked the page proofs for The Regicide Report, which will be on the way to production by the end of the month.

(There's a long lead time with this title because it has to be published simultaneously in the USA and UK, which means allowing time in the pipeline for Orbit in the UK to take the typeset files and reprocess them for their own size of paper and binding, and on the opposite side, for Tor.com to print and distribute physical hardcovers—which, in the USA, means weeks in shipping containers slowly heading for warehouses in other states: it's a big place.)

Both the new space operas in progress are currently at around 80% complete but going very slowly (this is not quite a euphemism for "stalled") because: see eyeballs above. This is also the proximate cause of the slow/infrequent blogging. My ability to read or focus on a screen is really impaired right now: it's not that I can't do it, it's just really tiring so I'm doing far less of it. On the other hand, I expect that once my eyes are fixed my productivity will get a huge rebound boost. Last time I was unable to write or read for a couple of months (in 2013 or thereabouts: I had Bell's Palsy and my most working eye kept watering because the eyelid didn't work properly) I ended up squirting the first draft of a novel out in eighteen days after it cleared up. (That was The Annihilation Score. You're welcome.)

Final news: I'm not doing many SF convention appearances these days because COVID (and Trump), but I am able to announce that I'm going to be one of the guests of honour at LunCon '25, the Swedish national SF convention, at the city hall of Lund, very close to Malmö, from October 10th to 12th. (And hopefully I'll be going to a couple of other conventions in the following months!)

Planet Debian: C.J. Collier: The Very Model of a Patriot Online

It appears that the fragile masculinity tech evangelists have identified Debian as a community with boundaries which exclude them from abusing its members and they’re so angry about it! In response to posts such as this, and inspired by Dr. Conway’s piece, I’ve composed a poem which, hopefully, correctly addresses the feelings of that crowd.


The Very Model of a Patriot Online

I am the very model of a modern patriot online,
My keyboard is my rifle and my noble cause is so divine.
I didn't learn my knowledge in a dusty college lecture hall,
But from the chans where bitter anonymity enthralls us all.
I spend a dozen hours every day upon my sacred quest,
To put the globo-homo narrative completely to the test.
My arguments are peer-reviewed by fellas in the comments section,
Which proves my every thesis is the model of complete perfection.
I’m steeped in righteous anger that the libs call 'white fragility,'
For mocking their new pronouns and their lack of masculinity.
I’m master of the epic troll, the comeback, and the searing snark,
A digital guerrilla who is fighting battles in the dark.

I know the secret symbols and the dog-whistles historical,
From Pepe the Frog to ‘Let’s Go Brandon,’ in order categorical;
In short, for fighting culture wars with rhetoric rhetorical,
I am the very model of a patriot polemical.

***

I stand for true expression, for the comics and the edgy clown,
Whose satire is too based for all the fragile folks in town.
They say my speech is 'violence' while my spirit they are trampling,
The way they try to silence me is really quite a startling sampling
Of 1984, which I've not read but thoroughly understand,
Is all about the tyranny that's gripping this once-blessed land.
My humor is a weapon, it’s a razor-bladed, sharp critique,
(Though sensitive elites will call my masterpiece a form of ‘hate speech’).
They cannot comprehend my need for freedom from all consequence,
They call it 'hate,' I call it 'jokes,' they just don't have a lick of sense.
So when they call me ‘bigot’ for the spicy memes I post pro bono,
I tell them their the ones who're cancelled, I'm the victim here, you know!

Then I can write a screed against the globalist cabal, you see,
And tell you every detail of their vile conspiracy.
In short, when I use logic that is flexible and personal,
I am the very model of a patriot controversial.

***

I'm very well acquainted with the scientific method, too,
It's watching lengthy YouTube vids until my face is turning blue.
I trust the heartfelt testimony of a tearful, blonde ex-nurse,
But what a paid fact-checker says has no effect and is perverse.
A PhD is proof that you've been brainwashed by the leftist mob,
While my own research on a meme is how I really do my job.
I know that masks will suffocate and vaccines are a devil's brew,
I learned it from a podcast host who used to sell brain-boosting goo.
He scorns the lamestream media, the CNNs and all the rest,
Whose biased reporting I've put fully to a rigorous test
By only reading headlines and confirming what I already knew,
Then posting my analysis for other patriots to view.

With every "study" that they cite from sources I can't stand to hear,
My own profound conclusions become ever more precisely clear.
In short, when I've debunked the experts with a confident "Says who?!",
I am the very model of a researcher who sees right through you.

***

But all these culture wars are just a sleight-of-hand, a clever feint,
To hide the stolen ballots and to cover up the moral taint
Of D.C. pizza parlors and of shipping crates from Wayfair, it’s true,
It's all connected in a plot against the likes of me and you!
I've analyzed the satellite photography and watermarks,
I understand the secret drops, the cryptic Qs, the coded sparks.
The “habbening” is coming, friends, just give it two more weeks or three,
When all the traitors face the trials for their wicked treachery.
They say that nothing happened and the dates have all gone past, you see,
But that's just disinformation from the globalist enemy!
Their moving goalposts constantly, a tactic that is plain to see,
To wear us down and make us doubt the coming, final victory!

My mind can see the patterns that a simple sheep could never find,
The hidden puppet-masters who are poisoning our heart and mind.
In short, when I link drag queens to the price of gas and child-trafficking,
I am the very model of a patriot whose brain is quickening!

***

My pickup truck's a testament to everything that I hold dear,
With vinyl decals saying things the liberals all hate and fear.
The Gadsden flag is waving next to one that's blue and starkly thin,
To show my deep respect for law, except the feds who're steeped in sin.
There's Punisher and Molon Labe, so that everybody knows
I'm not someone to trifle with when push to final shoving goes.
I've got my tactical assault gear sitting ready in the den,
Awaiting for the signal to restore our land with my fellow men.
I practice clearing rooms at home when my mom goes out to the store,
A modern Minuteman who's ready for a civil war.
The neighbors give me funny looks, I see them whisper and take note,
They'll see what's what when I'm the one who's guarding checkpoints by their throat.

I am a peaceful man, of course, but I am also pre-prepared,
To neutralize the threats of which the average citizen's unscared.
In short, when my whole identity's a brand of tactical accessory,
You'll say a better warrior has never graced a Cabela's registry.

***

They say I have to tolerate a man who thinks he is a dame,
While feminists and immigrants are putting out my vital flame!
There taking all the jobs from us and giving them to folks who kneel,
And "woke HR" says my best jokes are things I'm not allowed to feel!
An Alpha Male is what I am, a lion, though I'm in this cubicle,
My life's frustrations can be traced to policies Talmudical.
They lecture me on privilege, I, who have to pay my bills and rent!
While they give handouts to the lazy, worthless, and incompetent!
My grandad fought the Nazis! Now I have to press a key for ‘one’
To get a call-rep I can't understand beneath the blazing sun
Of global, corporate tyranny that's crushing out the very soul
Of men like me, who've lost their rightful, natural, and just control!

So yes, I am resentful! And I'm angry! And I'm right to be!
They've stolen all my heritage and my masculinity!
In short, when my own failures are somebody else's evil plot,
I am the very model of the truest patriot we've got!

***

There putting chips inside of you! Their spraying things up in the sky!
They want to make you EAT THE BUGS and watch your very spirit die!
The towers for the 5G are a mind-control delivery tool!
To keep you docile while the children suffer in a grooming school!
The WEF, and Gates, and Soros have a plan they call the 'Great Reset,'
You'll own no property and you'll be happy, or you'll be in debt
To social credit overlords who'll track your every single deed!
There sterilizing you with plastics that they've hidden in the feed!
The world is flat! The moon is fake! The dinosaurs were just a lie!
And every major tragedy's a hoax with actors paid to cry!
I'M NOT INSANE! I SEE THE TRUTH! MY EYES ARE OPEN! CAN'T YOU SEE?!
YOU'RE ALL ASLEEP! YOU'RE COWARDS! YOU'RE AFRAID OF BEING TRULY FREE!

My heart is beating faster now, my breath is short, my vision's blurred,
From all the shocking truth that's in each single, solitary word!
I've sacrificed my life and friends to bring this message to the light, so...
You'd better listen to me now with all your concentrated might, ho!

***

For my heroic struggle, though it's cosmic and it's biblical,
Is waged inside the comments of a post that's algorithm-ical.
And still for all my knowledge that's both tactical and practical,
My mom just wants the rent I owe and says I'm being dramatical.

365 Tomorrows: The Fugitive

Author: Bill Cox She weeps and Tony’s heart aches like never before. He knows that he will do absolutely anything to protect her. He holds her close and she burrows into his chest, her sobs echoing through his ribcage. “It’s going to be all right,” Tony whispers, caressing her head gently, “I’ll hide you from […]

The post The Fugitive appeared first on 365tomorrows.

Planet Debian: Valhalla's Things: rrdtool and Trixie

Posted on August 17, 2025
Tags: madeof:bits

TL;DL: if you’re using rrdtool on a 32 bit architecture like armhf make an XML dump of your RRD files just before upgrading to Debian Trixie.

I am an old person at heart, so the sensor data from my home monitoring system¹ doesn't go to one of those newfangled javascript-heavy data visualization platforms, but into good old RRD files, using rrdtool to generate various graphs.

This happens on the home server, which is an armhf single board computer², hosting a few containers³.

So, yesterday I started upgrading one of the containers to Trixie, and luckily I started from the one with the RRD, because when I rebooted into the fresh system and checked the relevant service I found it stopped on ERROR: '<file>' is too small (should be <size> bytes).

Some searxing later, I've⁴ found this was caused by the 64-bit time_t transition, which changed the format of the files, and that (somewhat unexpectedly) there was no way to fix it on the machine itself.

What needed to be done instead was to export the data to an XML dump before the upgrade, and then import it back afterwards.
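
For concreteness, a sketch of that dump/restore cycle using rrdtool's dump and restore subcommands (the file names are placeholders):

# on the old system, before upgrading to Trixie:
rrdtool dump sensors.rrd > sensors.xml

# on the upgraded system:
rrdtool restore sensors.xml sensors.rrd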

Easy enough, right? If you know about it, which is why I’m blogging this, so that other people will know in advance :)

Anyway, luckily I still had the other containers on bookworm, so I copied the files over there, did the upgrade, and my home monitoring system is happily running as before.


  1. of course one has a self-built home monitoring system, right?↩︎

  2. an A20-OLinuXino-MICRO, if anybody wants to know.↩︎

  3. mostly for ease of migrating things between different hardware, rather than insulation, since everything comes from Debian packages anyway.↩︎

  4. and by I I really mean Diego, as I was still into denial / distractions mode.↩︎


Charles Stross: Another brief update

(UPDATE: A new article/interview with me about the 20th anniversary of Accelerando just dropped, c/o Agence France-Presse. Gosh, I feel ancient.)

Bad news: the endoscopy failed. (I was scheduled for an upper GI endoscopy via the nasal sinuses to take a look around my stomach and see what's bleeding. Bad news: turns out I have unusually narrow sinuses, and by the time they'd figured this out my nose was watering so badly that I couldn't breathe when they tried to go in via my throat. So we're rescheduling for a different location with an anaesthetist who can put me under if necessary. NB: I would have been fine with only local anaesthesia if the bloody endoscope had fit through my sinuses. Gaah.)

The attack novel I was working on has now hit the 70% mark in first draft—not bad for two months. I am going to keep pushing onwards until it stops, or until the page proofs I'm expecting hit me in the face. They're due at the end of June, so I might finish Starter Pack first ... or not. Starter Pack is an unexpected but welcome spin-off of Ghost Engine (third draft currently on hold at 80% done), which I shall get back to in due course. It seems to have metastasized into a multi-book project.

Neither of the aforementioned novels is finished, nor do they have a US publisher. (Ghost Engine has a UK publisher, who has been Very Patient for the past few years—thanks, Jenni!)

Feel free to talk among yourselves, especially about the implications of Operation Spiders Web, which (from here) looks like the defining moment for a very 21st century revolution in military affairs; one marking the transition from fossil fuel powered force projection to electromotive/computational force projection.

Charles Stross: Brief Update

The reason(s) for the long silence here:

I've been attacked by an unscheduled novel, which is now nearly 40% written (in first draft). Then that was pre-empted by the copy edits for The Regicide Report (which have a deadline attached, because there's a publication date).

I also took time off for Eastercon, then hospital out-patient procedures. (Good news: I do not have colorectal cancer. Yay! Bad news: they didn't find the source of the blood in my stool, so I'm going back for another endoscopy.)

Finally, I'm still on the waiting list for cataract surgery. Blurred vision makes typing a chore, so I'm spending my time productively—you want more novels, right? Right?

Anyway: I should finish the copy edits within the next week, then get back to one or other of the two novels I'm working on in parallel (the attack novel and Ghost Engine: they share the same fictional far future setting), then maybe I can think of something to blog about again—but not the near future, it's too depressing. (I mean, if I'd written up our current political developments in a work of fiction any time before 2020 they'd have been rejected by any serious SF editor as too implausibly bizarre to publish.)

Planet Debian: Bits from Debian: Debian turns 32!

32nd Debian Day artwork, by Daniel Lenharo

On August 16, 1993, Ian Murdock announced the Debian Project to the world. Three decades (and a bit) later, Debian is still going strong, built by a worldwide community of developers, contributors, and users who believe in a free, universal operating system.

Over the years, Debian has powered servers, desktops, tiny embedded devices, and huge supercomputers. We have gathered at DebConfs, squashed countless bugs, shared late-night hacking sessions, and helped keep millions of systems secure.

Debian Day is a great excuse to get together, whether it is a local meetup, an online event, a bug squashing party, a team sprint or just coffee with fellow Debianites. Check out the Debian Day wiki to see if there is a celebration near you or to add your own.

Here is to 32 years of collaboration, code, and community, and to all the amazing people who make Debian what it is.

Happy Debian Day!

Planet Debian: Birger Schacht: Updates and additions in Debian 13 Trixie

Last week Debian 13 (Trixie) was released, and there have been some updates and additions in the packages I maintain that I wanted to write about. I don't think they are worth adding to the release notes, but I still wanted to list some of the changes and some of the new packages.

sway

Sway, the tiling Wayland compositor, was at version 1.7 in Bookworm. It was updated to version 1.10 (and 1.11 is already in experimental, waiting for an upload to unstable). This new version of sway brings, among a lot of other features, updated support for touchpad gestures and support for the ext-session-lock-v1 protocol, which allows for more robust and secure screen locking. The configuration snippet that activates the default sway background is now shipped in the sway-backgrounds package instead of being part of the sway package itself.

The default menu application was changed from dmenu to wmenu. wmenu is a Wayland-native alternative to dmenu which I packaged, and it is now recommended by sway.
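
For anyone carrying an old config over, a hypothetical ~/.config/sway/config fragment pointing the launcher at wmenu could look like this (the exact launcher line shipped in the default config may differ):

# use wmenu's dmenu_run-style wrapper as the application launcher
set $menu wmenu-run
bindsym $mod+d exec $menu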

There are some small helper tools for sway that were updated: swaybg was bumped from 1.2.0 to 1.2.1, swaylock was bumped from 1.7.2 to 1.8.2.

The grimshot script, a helper for taking screenshots, was part of sway's contrib folder for a long time (but was shipped as a separate binary package). It was removed from sway and is now part of the sway-contrib project. There are some other useful utilities in this source package that I might package in the future.

slurp, which is used by grimshot to select a region, was updated from version 1.4 to version 1.5.

labwc

I uploaded the first labwc package two years ago and I’m happy it is now part of a stable Debian release. Labwc is also based on wlroots, like sway. It is a window-stacking compositor and is inspired by openbox. I used openbox for a long time back in the day before I moved to i3 and I’m very happy to see that there is a Wayland alternative.

foot

Foot is a minimalistic and fast Wayland terminal emulator. It is mostly keyboard driven. foot was updated from version 1.13.1 to 1.21.0. Probably the most important change for users upgrading is that:

  • Control+Shift+u is now bound to unicode-input instead of show-urls-launch, to follow the convention established in GTK and Qt
  • show-urls-launch is now bound to Control+Shift+o (a configuration sketch for restoring the old binding follows this list)
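
Users who prefer the old behaviour can pin the bindings in foot.ini themselves. A minimal sketch, assuming the stock binding names (swap or adjust to taste):

[key-bindings]
# restore the pre-1.21 URL-mode binding
show-urls-launch=Control+Shift+u
# and move unicode input to the key it freed up
unicode-input=Control+Shift+o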

et cetera

The Wayland kiosk cage was updated from 0.1.4 to 0.2.0.

The waybar bar for wlroots compositors was updated from 0.9.17 to 0.12.0.

swayimg was updated from 1.10 to 3.8 and now brings support for custom key bindings, support for additional image types (PNM, EXR, DICOM, Farbfeld, sixel) and a gallery mode.

tofi, another dmenu replacement, was updated from 0.8.1 to 0.9.1. wf-recorder, a tool for screen recording in wlroots-based compositors, was updated from version 0.3 to version 0.5.0. wlogout was updated from version 1.1.1 to 1.2.2. The application launcher wofi was updated from 1.3 to 1.4.1. The lightweight status panel yambar was updated from version 1.9 to 1.11. kanshi, the tool for managing and automatically switching your output profiles, was updated from version 1.3.1 to version 1.5.1.

usbguard was updated from version 1.1.2 to 1.1.3.

added

  • fnott - a lightweight notification daemon for wlroots based compositors
  • fyi - a utility to send notifications to a notification daemon, similar to notify-send
  • pipectl - a tool to create and manage short-lived named pipes; it is a dependency of wl-present, a script around wl-mirror which implements output mirroring for wlroots-based compositors
  • poweralertd - a small daemon that notifies you about the power status of your battery powered devices
  • wlopm - control power management of outputs
  • wlrctl - command line utility for miscellaneous wlroots Wayland extensions
  • wmenu - already mentioned, the new default launcher of sway
  • wshowkeys - shows keypresses in wayland sessions, nice for debugging
  • libsfdo - libraries implementing some freedesktop.org specs, used by labwc

365 Tomorrows: Shiny

Author: James Sallis Head propped against the bed’s headboard, half a glass of single malt at hand, the dying man readies himself for the nothingness that awaits him. He imagines it as a pool of something warm, light oil perhaps, in which he will float lazily out from the banks and curbs of his life, […]

The post Shiny appeared first on 365tomorrows.


Krebs on Security: Mobile Phishers Target Brokerage Accounts in ‘Ramp and Dump’ Cashout Scheme

Cybercriminal groups peddling sophisticated phishing kits that convert stolen card data into mobile wallets have recently shifted their focus to targeting customers of brokerage services, new research shows. Undeterred by security controls at these trading platforms that block users from wiring funds directly out of accounts, the phishers have pivoted to using multiple compromised brokerage accounts in unison to manipulate the prices of foreign stocks.

Image: Shutterstock, WhataWin.

This so-called ‘ramp and dump‘ scheme borrows its name from age-old “pump and dump” scams, wherein fraudsters purchase a large number of shares in some penny stock, and then promote the company in a frenzied social media blitz to build up interest from other investors. The fraudsters dump their shares after the price of the penny stock increases to some degree, which usually then causes a sharp drop in the value of the shares for legitimate investors.

With ramp and dump, the scammers do not need to rely on ginning up interest in the targeted stock on social media. Rather, they will preposition themselves in the stock that they wish to inflate, using compromised accounts to purchase large volumes of it and then dumping the shares after the stock price reaches a certain value. In February 2025, the FBI said it was seeking information from victims of this scheme.

“In this variation, the price manipulation is primarily the result of controlled trading activity conducted by the bad actors behind the scam,” reads an advisory from the Financial Industry Regulatory Authority (FINRA), a private, non-profit organization that regulates member brokerage firms. “Ultimately, the outcome for unsuspecting investors is the same—a catastrophic collapse in share price that leaves investors with unrecoverable losses.”

Ford Merrill is a security researcher at SecAlliance, a CSIS Security Group company. Merrill said he has tracked recent ramp-and-dump activity to a bustling Chinese-language community that is quite openly selling advanced mobile phishing kits on Telegram.

“They will often coordinate with other actors and will wait until a certain time to buy a particular Chinese IPO [initial public offering] stock or penny stock,” said Merrill, who has been chronicling the rapid maturation and growth of the China-based phishing community over the past three years.

“They’ll use all these victim brokerage accounts, and if needed they’ll liquidate the account’s current positions, and will preposition themselves in that instrument in some account they control, and then sell everything when the price goes up,” he said. “The victim will be left with worthless shares of that equity in their account, and the brokerage may not be happy either.”

Merrill said the early days of these phishing groups — between 2022 and 2024 — were typified by phishing kits that used text messages to spoof the U.S. Postal Service or some local toll road operator, warning about a delinquent shipping or toll fee that needed paying. Recipients who clicked the link and provided their payment information at a fake USPS or toll operator site were then asked to verify the transaction by sharing a one-time code sent via text message.

In reality, the victim’s bank is sending that code to the mobile number on file for their customer because the fraudsters have just attempted to enroll that victim’s card details into a mobile wallet. If the visitor supplies that one-time code, their payment card is then added to a new mobile wallet on an Apple or Google device that is physically controlled by the phishers.

The phishing gangs typically load multiple stolen cards to digital wallets on a single Apple or Android device, and then sell those phones in bulk to scammers who use them for fraudulent e-commerce and tap-to-pay transactions.

An image from the Telegram channel for a popular Chinese mobile phishing kit vendor shows 10 mobile phones for sale, each loaded with 4-6 digital wallets from different financial institutions.

This China-based phishing collective exposed a major weakness common to many U.S.-based financial institutions that already require multi-factor authentication: The reliance on a single, phishable one-time token for provisioning mobile wallets. Happily, Merrill said many financial institutions that were caught flat-footed on this scam two years ago have since strengthened authentication requirements for onboarding new mobile wallets (such as requiring the card to be enrolled via the bank’s mobile app).

But just as squeezing one part of a balloon merely forces the air trapped inside to bulge into another area, fraudsters don’t go away when you make their current enterprise less profitable: They just shift their focus to a less-guarded area. And lately, that gaze has settled squarely on customers of the major brokerage platforms, Merrill said.

THE OUTSIDER

Merrill pointed to several Telegram channels operated by some of the more accomplished phishing kit sellers, which are full of videos demonstrating how every feature in their kits can be tailored to the attacker’s target. The video snippet below comes from the Telegram channel of “Outsider,” a popular Mandarin-speaking phishing kit vendor whose latest offering includes a number of ready-made templates for using text messages to phish brokerage account credentials and one-time codes.



According to Merrill, Outsider is a woman who previously went by the handle “Chenlun.” KrebsOnSecurity profiled Chenlun’s phishing empire in an October 2023 story about a China-based group that was phishing mobile customers of more than a dozen postal services around the globe. In that case, the phishing sites were using a Telegram bot that sent stolen credentials to the “@chenlun” Telegram account.

Chenlun’s phishing lures are sent via Apple’s iMessage and Google’s RCS service and spoof one of the major brokerage platforms, warning that the account has been suspended for suspicious activity and that recipients should log in and verify some information. The missives include a link to a phishing page that collects the customer’s username and password, and then asks the user to enter a one-time code that will arrive via SMS.

The new phish kit videos on Outsider’s Telegram channel only feature templates for Schwab customers, but Merrill said the kit can easily be adapted to target other brokerage platforms. One reason the fraudsters are picking on brokerage firms, he said, has to do with the way they handle multi-factor authentication.

Schwab clients are presented with two options for second-factor authentication when they open an account. Users who select the option to only prompt for a code on untrusted devices can choose to receive it via text message, an automated inbound phone call, or an outbound call to Schwab. With the “always at login” option selected, users can choose to receive the code through the Schwab app, a text message, or a Symantec VIP mobile app.

In response to questions, Schwab said it regularly updates clients on emerging fraud trends, including this specific type, which the company addressed in communications sent to clients earlier this year.

The 2FA text message from Schwab warns recipients against giving away their one-time code.

“That message focused on trading-related fraud, highlighting both account intrusions and scams conducted through social media or messaging apps that deceive individuals into executing trades themselves,” Schwab said in a written statement. “We are aware and tracking this trend across several channels, as well as others like it, which attempt to exploit SMS-based verification with stolen credentials. We actively monitor for suspicious patterns and take steps to disrupt them. This activity is part of a broader, industry-wide threat, and we take a multi-layered approach to address and mitigate it.”

Other popular brokerage platforms allow similar methods for multi-factor authentication. Fidelity requires a username and password on initial login, and offers the ability to receive a one-time token via SMS, an automated phone call, or by approving a push notification sent through the Fidelity mobile app. However, all three of these methods for sending one-time tokens are phishable; even with the brokerage firm’s app, the phishers could prompt the user to approve a login request that they initiated in the app with the phished credentials.

Vanguard offers customers a range of multi-factor authentication choices, including the option to require a physical security key in addition to one’s credentials on each login. A security key implements a robust form of multi-factor authentication known as Universal 2nd Factor (U2F), which allows the user to complete the login process simply by connecting an enrolled USB or Bluetooth device and pressing a button. The key works without the need for any special software drivers, and because its response is cryptographically bound to the legitimate site, the second factor cannot be phished.

THE PERFECT CRIME?

Merrill said that in many ways the ramp-and-dump scheme is the perfect crime because it leaves precious few connections between the victim brokerage accounts and the fraudsters.

“It’s really genius because it decouples so many things,” he said. “They can buy shares [in the stock to be pumped] in their personal account on the Chinese exchanges, and the price happens to go up. The Chinese or Hong Kong brokerages aren’t going to see anything funky.”

Merrill said it’s unclear exactly how those perpetrating these ramp-and-dump schemes coordinate their activities, such as whether the accounts are phished well in advance or shortly before being used to inflate the stock price of Chinese companies. The latter possibility would fit nicely with the existing human infrastructure these criminal groups already have in place.

For example, KrebsOnSecurity recently wrote about research from Merrill and other researchers showing the phishers behind these slick mobile phishing kits employed people to sit for hours at a time in front of large banks of mobile phones being used to send the text message lures. These technicians were needed to respond in real time to victims who were supplying the one-time code sent from their financial institution.

The ashtray says: You’ve been phishing all night.

“You can get access to a victim’s brokerage with a one-time passcode, but then you sort of have to use it right away if you can’t set new security settings so you can come back to that account later,” Merrill said.

The rapid pace of innovations produced by these China-based phishing vendors is due in part to their use of artificial intelligence and large language models to help develop the mobile phishing kits, he added.

“These guys are vibe coding stuff together and using LLMs to translate things or help put the user interface together,” Merrill said. “It’s only a matter of time before they start to integrate the LLMs into their development cycle to make it more rapid. The technologies they are building definitely have helped lower the barrier of entry for everyone.”

Planet DebianSteinar H. Gunderson: Abstract algebra structures made easy

Group theory, and abstract algebra in general, has many useful properties; you can take a bunch of really common systems and prove very useful statements that hold for all of them at once.

But sometimes in computer science, we just use the names, not really the theorems. If you're showing that something is a group and then proceed to use Fermat's little theorem (perhaps to efficiently compute inverses, when it's not at all obvious what they would be), then you really can't go without the theory. But for some cases, we just love to be succinct in our description of things, and for outsiders, it's just… not useful.
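(A concrete aside, mine rather than Steinar's: in the multiplicative group of non-zero residues modulo a prime p, Fermat's little theorem says a^(p-1) ≡ 1 (mod p), so a^(p-2) is the inverse of a. That's the kind of fact you only get by caring about the actual group structure, and in Python it's a couple of lines:)

P = 1_000_000_007           # a prime, so the non-zero residues mod P form a group under multiplication
a = 123_456_789
inv = pow(a, P - 2, P)      # Fermat's little theorem: a^(P-2) is a's multiplicative inverse
assert (a * inv) % P == 1   # sanity check: a * inv really is 1 (mod P)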

So here's Steinar's easy (and more importantly, highly non-scientific; no emails about inaccuracies, please :-) ) guide to the most common abstract algebra structures:

  • Set: Hopefully you already know what this is. A collection of things (for instance numbers).
  • Semigroup: A (binary) operation that isn't crazy.
  • Monoid: An operation, but you also have a no-op.
  • Group: An operation, but you also have the opposite operation.
  • Abelian group: An operation, but the order doesn't matter.
  • Ring: Two operations; the Abelian group got a friend for Christmas. The extra operation might be kind of weird (for instance, has no-ops but might not always have opposites).
  • Field: A ring with some extra flexibility, so you can do almost whatever you are used to doing with “normal” (real) numbers except perhaps order them.

So for instance, assuming that x and y are e.g. non-negative integers (zero included), then max(x,y) (the motivating example for this post) is a monoid. Why? Because it's a non-crazy binary operation (in particular, max(max(x,y),z) = max(x,max(y,z))), and you can use x=0 or y=0 as a no-op (max(anything, 0) = anything). But it's not a group, because once you've done max(x,y), there's nothing you can max() with to get the smaller value back.
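To make that concrete, here is a small Python sketch (my illustration, not from the original post) of treating max over non-negative integers as a monoid: an associative operation plus an identity, which is exactly what lets you fold it safely over any list, including an empty one.

from functools import reduce

IDENTITY = 0                      # the no-op: max(x, 0) == x for non-negative x

def op(x: int, y: int) -> int:
    """The monoid operation: max over non-negative integers."""
    return max(x, y)

def fold_max(values: list[int]) -> int:
    """Fold the operation over a list; the identity makes the empty list well-defined."""
    return reduce(op, values, IDENTITY)

# Associativity: grouping doesn't matter.
assert op(op(3, 7), 5) == op(3, op(7, 5)) == 7
# Identity: 0 is a no-op.
assert op(42, IDENTITY) == 42
print(fold_max([4, 9, 2]))   # 9
print(fold_max([]))          # 0 -- but there is no inverse, so this is not a group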

There are many more, but these are the ones you get today.

Sociological ImagesConflict Theory and the Design of Migrant Housing

Migrant labor sustains U.S. agriculture. It is essential and constant. Yet the people who do the work remain hidden. That invisibility is not just social. It is spatial. Employers tuck housing behind groves, set it far off the road, or place it on private land behind locked gates. These sites are hard to reach. They are also hard to leave.

As a paralegal at my stepmother’s immigration law firm in Metro Detroit, I met with many migrant workers who described the places they were housed. They worked long days in fields or orchards, often six or seven days a week, and returned to dormitories built far from town. The stories stayed with me. They worked in extreme heat and came back to shared spaces without privacy, comfort, or dignity. Workers are placed in dorms with shared beds and tight quarters. Bathrooms are communal. Kitchens are often bare.

A bedroom for migrant farmworkers at the Nightingale facility in Rantoul, Ill., in July 2014.
Credit: Photo by Darrell Hoemann/Midwest Center for Investigative Reporting. Used with permission.

Images help tell this story. Photographs from North Carolina and California show identical cabins in rows. Inside are narrow beds, small windows, and not enough space to stretch. These photos are more than documentation. They are evidence. They show us what it looks like to build a system that erases the people who keep it running.

Migrant agricultural worker’s family in Nipomo, California, 1936. The mother, age 32, sits with three of her seven children outside a temporary shelter during the Great Depression.
Credit: Photo by Dorothea Lange. Farm Security Administration Collection, Library of Congress. Public domain.

Sociology gives us a framework to see that this is not just bad housing. It is a structural problem. When the employer controls housing, every complaint becomes a risk. Speaking up may not only cost your job; it can also mean losing your bed and risking forcible deportation. The design limits autonomy and keeps people quiet. The fewer choices a person has, the easier it is to control them.

In sociology, conflict theory starts with a simple idea: society develops and changes based on struggles over power and resources. In the case of migrant labor, that struggle is visible in the very organization of housing. Henri Lefebvre argued that space is socially produced. Social production means that space is shaped by those who have the authority to determine how people live. This is not driven by comfort, fairness, or function. The arrangement and social production of space reflect the interests of those in control. The shape of a room, the distance between houses, and the layout of a building are not random. They reflect relationships.

Similarly, Michel Foucault shows how institutions use architecture to enforce discipline. In migrant housing, space signals control. These dorms do not need bars or guards. The buildings are made to meet the minimum legal standard for shelter. That standard is barely above what is allowed for a prison cell. The architecture dehumanizes, and in doing so, it controls.

I saw this firsthand. A worker told me his bunk was so close to the next that he could hear every breath of the man above him. His wife told me there were rules about visitors, meals, and noise. They could not live together, even though they were married. They felt monitored. They were afraid to speak. These homes were not theirs. The system made sure of that.

Sociology gives us the language to name what is happening. This is not a housing crisis. It is a labor strategy. These camps are not temporary accidents. They are long-term solutions to a problem no one wants to fix. As scholars and citizens, we should bring these designs to light. We cannot change what we do not see.

Joey Colby Bernert is a statistician and licensed clinical social worker based in Michigan. She is a graduate student in public health at Michigan State University and studies feminist theory, intersectionality, and the structural determinants of health.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureError'd: Abort, Cancel, Fail?

low-case jeffphi found "Yep, all kinds of technical errors."


 

Michael R. reports an off by 900 error.


 

"It is often said that news slows down in August," notes Stewart , wondering if "perhaps The Times have just given up? Or perhaps one of the biggest media companies just doesn't care about their paying subscribers?"


 

"Zero is a dangerous idea!" exclaims Ernie in Berkeley .


 

Daniel D. found one of my unfavorites, calling it "Another classic case of cancel dialog. This time featuring KDE Partition Manager."


 


Fail? Until next time.
[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 TomorrowsGingerbread House

Author: Rachel Handley “This is a terrible idea” I said. My sentience had arrived after the first gingerbread brick was lain. I was now almost fully formed and, with nothing else to do, I told the witch exactly what I thought of her so-called house. “Be quiet, house,” said the witch. “Seriously though, why not […]

The post Gingerbread House appeared first on 365tomorrows.

xkcdArchaeology Research


Planet DebianJonathan McDowell: Local Voice Assistant Step 4: openWakeWord

People keep asking me when I’ll write the next instalment of my local voice assistant journey. I didn’t mean for it to be so long since the last one, things have been busier than I’d like. Anyway. Last time we’d built Tensorflow, so now it’s time to sort out openWakeWord. As a reminder we’re trying to put a local voice satellite on my living room Debian media machine.

The point of openWakeWord is to run on the machine the microphone is connected to, listening for the wake phrase (“Hey Jarvis” in my case), and only then calling back to the central server to do a speech to text operation. It’s wrapped up for Wyoming as wyoming-openwakeword.

Of course I’ve packaged it up - available at https://salsa.debian.org/noodles/wyoming-openwakeword. Trixie only released yesterday, so I’m still running all of this on bookworm. That means you need python3-wyoming from Trixie - 1.6.0-1 will install fine without needing to be rebuilt - and the python3-tflite-runtime we built last time.

Like the other pieces I’m not sure about how this could land in Debian; it’s unclear to me that the pre-trained models provided would be accepted in main.

As usual I start it with a systemd unit file dropped in /etc/systemd/system/wyoming-openwakeword.service:

[Unit]
Description=Wyoming OpenWakeWord server
After=network.target

[Service]
Type=simple
DynamicUser=yes
ExecStart=/usr/bin/wyoming-openwakeword --uri tcp://[::1]:10400/ --preload-model 'hey_jarvis' --threshold 0.8

MemoryDenyWriteExecute=false
ProtectControlGroups=true
PrivateDevices=false
ProtectKernelTunables=true
ProtectSystem=true
RestrictRealtime=true
RestrictNamespaces=true

[Install]
WantedBy=multi-user.target

I’m still playing with the threshold level. It defaults to 0.5, but the device lives under the TV and seems to get a bit confused by it sometimes. There’s some talk about using speex for noise suppression, but I haven’t explored that yet (it’s yet another Python module to bind to the C libraries I’d have to look at).

This is a short one; next post is actually building the local satellite on top to tie everything together.

Cryptogram Eavesdropping on Phone Conversations Through Vibrations

Researchers have managed to eavesdrop on cell phone voice conversations by using radar to detect vibrations. It’s more a proof of concept than anything else. The radar detector is only ten feet away, the setup is stylized, and accuracy is poor. But it’s a start.

Cryptogram Trojans Embedded in .svg Files

Porn sites are hiding code in .svg files:

Unpacking the attack took work because much of the JavaScript in the .svg images was heavily obscured using a custom version of “JSFuck,” a technique that uses only a handful of character types to encode JavaScript into a camouflaged wall of text.

Once decoded, the script causes the browser to download a chain of additional obfuscated JavaScript. The final payload, a known malicious script called Trojan.JS.Likejack, induces the browser to like a specified Facebook post as long as a user has their account open.

“This Trojan, also written in Javascript, silently clicks a ‘Like’ button for a Facebook page without the user’s knowledge or consent, in this case the adult posts we found above,” Malwarebytes researcher Pieter Arntz wrote. “The user will have to be logged in on Facebook for this to work, but we know many people keep Facebook open for easy access.”

This isn’t a new trick. We’ve seen Trojaned .svg files before.

David BrinAI + WAIST. A predictive riff from EXISTENCE

 

While I strive to finish my own book on Artificial Intelligence - filling in what I consider to be about fifty perceptual gaps in current discussions,* I try to keep up with what's being said in a fast-changing landscape and ideascape. Take this widely bruited essay by Niall Ferguson in The Times, which begins with a nod to science fiction...

 

...asserting that ONLY my esteemed colleague, the brilliant Neal Stephenson, could possibly have peered ahead to see aspects of this era... despite there having been dozens of thoughtful or prophetic SF tales before Snow Crash (1992) and some pretty good ones after.

 

Not so much cyberpunk, which only occasionally tried for tech-accurate forecasting, instead of noir-inspired cynicism chic, substituting in Wintermute AI for the Illuminati or Mafia or SPECTRE.... 


... No, I'm thinking more of Stephenson and Greg Bear and Nancy Kress... and yeah, my own Earth (1990) and later Existence (2013), which speculated on not just one kind of AI, but dozens....

 

... as I will in my coming book, tentatively titled: Our Latest Children - Advice about – and for – our natural, AI and hybrid heirs.


                                               *(especially gaps missed by the geniuses who are now making these systems.)

 

Anyway, here's one excerpt from Existence dealing with the topic. And ain't it a WAIST?

== WAIST ==

Wow, ain’t it strange that—boffins have been predicting that truly humanlike artificial intelligence oughta be “just a couple of decades away…” for eighty years already?

 

Some said AI would emerge from raw access to vast numbers of facts. That happened a few months after the Internet went public. 

 

But ai never showed up.

 

Others looked for a network that finally had as many interconnections as a human brain, a milestone we saw passed in the teens, when some of the crimivirals—say the Ragnarok worm or the Tornado botnet—infested-hijacked enough homes and fones to constitute the world’s biggest distributed computer, far surpassing the greatest “supercomps” and even the number of synapses in your own skull!

 

Yet, still, ai waited.

 

How many other paths were tried? How about modeling a human brain in software? 

Or modeling one in hardware. 

Evolve one, in the great Darwinarium experiment! 

Or try guiding evolution, altering computers and programs the way we did sheep and dogs, by letting only those reproduce that have traits we like—say, those that pass a Turing test, by seeming human. 

Or the ones swarming the streets and homes and virts of Tokyo, selected to exude incredible cuteness?

 

Others, in a kind of mystical faith that was backed up by mathematics and hothouse physics, figured that a few hundred quantum processors, tuned just right, could connect with their counterparts in an infinite number of parallel worlds, and just-like-that, something marvelous and God-like would pop into being.

 

The one thing no one expected was for it to happen by accident, arising from a high school science fair experiment.

 

I mean, wow ain’t it strange that a half-brilliant tweak by sixteen-year-old Marguerita deSilva leaped past the accomplishments of every major laboratory, by uploading into cyberspace a perfect duplicate of the little mind, personality, and instincts of her pet rat, Porfirio?

 

And wow ain’t it strange that Porfirio proliferated, grabbing resources and expanding, in patterns and spirals that remain—to this day—so deeply and quintessentially ratlike?

 

Not evil, all-consuming, or even predatory—thank heavens. But insistent.

 

And Wow, AIST there is a worldwide betting pool, now totaling up to a billion Brazilian reals—over whether Marguerita will end up bankrupt, from all the lawsuits over lost data and computer cycles that have been gobbled up by Porfirio? Or else, if she’ll become the world’s richest person—because so many newer ais are based upon her patents? Or maybe because she alone seems to retain any sort of influence over Porfirio, luring his feral, brilliant attention into virtlayers and corners of the Worldspace where he can do little harm? So far.

 

And WAIST we are down to this? Propitiating a virtual Rat God—(you see, Porfirio, I remembered to capitalize your name, this time)—so that he’ll be patient and leave us alone. That is, until humans fully succeed where Viktor Frankenstein calamitously failed?

 

To duplicate the deSilva Result and provide her creation with a mate.

 

 

 

A few ideas distilled down in that excerpt? There are others.

 

But heck, have you seen that novel’s dramatic and fun 3-minute trailer? All hand-made art from the great Patrick Farley!

 

And while we’re on the topic: Here I read (aloud of course) chapter two of Existence, consisting of the stand alone story “Aficionado.”

 

  

BTW, in EXISTENCE I refer to the US Space Force.  Not my biggest prediction, but another hit.

 

Now... off to the World SciFi Convention...

 

Worse Than FailureCodeSOD: An Array of Parameters

Andreas found this in a rather large, rather ugly production code base.

private static void LogView(object o)
{
    try
    {
        ArrayList al = (ArrayList)o;
        int pageId = (int)al[0];
        int userId = (int)al[1];

        // ... snipped: Executing a stored procedure that stores the values in the database
    }
    catch (Exception) { }
}

This function accepts an object of any type, except no, it doesn't: it expects that object to be an ArrayList. It then assumes the ArrayList stores values in a specific order. Note that they're not using a generic collection here, nor could they- it (potentially) needs to hold a mix of types.

What they've done here is replace a parameter list with an ArrayList, giving up compile time type checking for surprising runtime exceptions. And why?

"Well," the culprit explained when Andreas asked about this, "the underlying database may change. And then the function would need to take different parameters. But that could break existing code, so this allows us to add parameters without ever having to change existing code."

"Have you heard of optional arguments?" Andreas asked.

"No, all of our arguments are required. We'll just default the ones that the caller doesn't supply."

And yes, this particular pattern shows up all through the code base. It's "more flexible this way."

[Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.

Cryptogram LLM Coding Integrity Breach

Here’s an interesting story about a failure being introduced by LLM-written code. Specifically, the LLM was doing some code refactoring, and when it moved a chunk of code from one file to another it changed a “break” to a “continue.” That turned an error logging statement into an infinite loop, which crashed the system.
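To see why a one-word change does so much damage, here is a contrived Python sketch (mine, not the code from the incident) of an error-handling loop where swapping break for continue turns "log the error and give up" into "log the error forever":

import logging

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger(__name__)

def drain(queue):
    """Process items until the queue stops yielding anything useful."""
    while True:
        item = queue.pop() if queue else None
        if item is None:
            logger.error("queue returned nothing; giving up")
            break       # original behaviour: log once and leave the loop
            # continue  # refactored behaviour: log, loop, log, loop... forever
        print("handled", item)

drain([3, 2, 1])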

This is an integrity failure. Specifically, it’s a failure of processing integrity. And while we can think of particular patches that alleviate this exact failure, the larger problem is much harder to solve.

Davi Ottenheimer comments.

365 TomorrowsOne Room and a Matchbook

Author: Lynne Curry I didn’t get the house. Not the Lexus, the lake lot, the gilded dental practice or the damn espresso machine I bought him the year he started molar sculpting. I got a one-room cabin. Ninety miles south of Anchorage. No plumbing. A stove that belches smoke. A roof that drips snowmelt onto […]

The post One Room and a Matchbook appeared first on 365tomorrows.


Cryptogram AI Applications in Cybersecurity

There is a really great series of online events highlighting cool uses of AI in cybersecurity, titled Prompt||GTFO. Videos from the first three events are online. And here’s where to register to attend, or participate, in the fourth.

Some really great stuff here.

Planet DebianSven Hoexter: Automated Browsing with Gemini and Chrome via BrowserMCP and gemini-cli

Brief dump so I don't forget how that worked in August 2025. Requires npm, npx and nodejs.

  1. Install Chrome
  2. Add the BrowserMCP extension
  3. Install gemini-cli: npm install -g @google/gemini-cli
  4. Retrieve a Gemini API key via AI Studio
  5. Export the API key for gemini-cli: export GEMINI_API_KEY=2342
  6. Start the BrowserMCP extension (see the manual); an info box will appear indicating that it's active, with a cancel button.
  7. Add the mcp server to gemini-cli: gemini mcp add browsermcp npx @browsermcp/mcp@latest
  8. Start gemini-cli, let it use the mcp server and task it to open a website.

365 TomorrowsTiger Woman in a Taxi-Cab

Author: Hillary Lyon Jenna slid into the first available self-driving taxi. She kept her cat-eye sunglasses on even though it was dim in the cab’s interior; the sunglasses complimented her tiger-stripe patterned coat, completing her look. She liked that, though some members of her gang said it shouted ‘cat burglar.’ That’s what she was, Jenna […]

The post Tiger Woman in a Taxi-Cab appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Raise VibeError

Ronan works with a vibe coder- an LLM addicted developer. This is a type of developer that's showing up with increasing frequency. Their common features include: not reading the code the AI generated, not testing the code the AI generated, not understanding the context of the code or how it integrates into the broader program, and absolutely not bothering to follow the company coding standards.

Here's an example of the kind of Python code they were "writing":

if isinstance(o, Test):
    if o.requirement is None:
        logger.error(f"Invalid 'requirement' in Test: {o.key}")
        try:
            raise ValueError("Missing requirement in Test object.")
        except ValueError:
            pass

    if o.title is None:
        logger.error(f"Invalid 'title' in Test: {o.key}")
        try:
            raise ValueError("Missing title in Test object.")
        except ValueError:
            pass

An isinstance check is already a red flag. Even without proper type annotations and type checking (though you should use them) any sort of sane coding is going to avoid situations where your method isn't sure what input it's getting. isinstance isn't a WTF, but it's a hint at something lurking off screen. (Yes, sometimes you do need it, this may be one of those times, but I doubt it.)

In this case, if the Test object is missing certain fields, we want to log errors about it. That part, honestly, is all fine. There are potentially better ways to express this idea, but the idea is fine.

No, the obvious turd in the punchbowl here is the exception handling. This is pure LLM, in that it's a statistically probable result of telling the LLM "raise an error if the requirement field is missing". The resulting code, however, raises an exception, immediately catches it, and then does nothing with it.

I'd almost think it's a pre-canned snippet that's meant to be filled in, but no- there's no reason a snippet would throw and catch the same error.
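For contrast, here's a rough sketch of what the prompt was presumably asking for (my guess at the intent, reusing the Test, logger and o names from the snippet above): either let the exception propagate to the caller, or just log and move on, but don't do both and cancel them out.

if isinstance(o, Test):
    missing = [field for field in ("requirement", "title") if getattr(o, field) is None]
    if missing:
        logger.error(f"Invalid Test {o.key}: missing {', '.join(missing)}")
        raise ValueError(f"Test object {o.key} is missing: {', '.join(missing)}")
        # ...or drop the raise entirely if a log entry is all that's wanted.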

Now, in Ronan's case, this has a happy ending: after a few weeks of some pretty miserable collaboration, the new developer got fired. None of "their" code ever got merged in. But they've already got a few thousand AI generated resumes out to new positions…

[Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.

xkcdThread Meeting


Krebs on SecurityMicrosoft Patch Tuesday, August 2025 Edition

Microsoft today released updates to fix more than 100 security flaws in its Windows operating systems and other software. At least 13 of the bugs received Microsoft’s most-dire “critical” rating, meaning they could be abused by malware or malcontents to gain remote access to a Windows system with little or no help from users.

August’s patch batch from Redmond includes an update for CVE-2025-53786, a vulnerability that allows an attacker to pivot from a compromised Microsoft Exchange Server directly into an organization’s cloud environment, potentially gaining control over Exchange Online and other connected Microsoft Office 365 services. Microsoft first warned about this bug on Aug. 6, saying it affects Exchange Server 2016 and Exchange Server 2019, as well as its flagship Exchange Server Subscription Edition.

Ben McCarthy, lead cyber security engineer at Immersive, said a rough search reveals approximately 29,000 Exchange servers publicly facing on the internet that are vulnerable to this issue, with many of them likely to have even older vulnerabilities.

McCarthy said the fix for CVE-2025-53786 requires more than just installing a patch, such as following Microsoft’s manual instructions for creating a dedicated service to oversee and lock down the hybrid connection.

“In effect, this vulnerability turns a significant on-premise Exchange breach into a full-blown, difficult-to-detect cloud compromise with effectively living off the land techniques which are always harder to detect for defensive teams,” McCarthy said.

CVE-2025-53779 is a weakness in the Windows Kerberos authentication system that allows an unauthenticated attacker to gain domain administrator privileges. Microsoft credits the discovery of the flaw to Akamai researcher Yuval Gordon, who dubbed it “BadSuccessor” in a May 2025 blog post. The attack exploits a weakness in “delegated Managed Service Account” or dMSA — a feature that was introduced in Windows Server 2025.

Some of the critical flaws addressed this month with the highest severity (between 9.0 and 9.9 CVSS scores) include a remote code execution bug in the Windows GDI+ component that handles graphics rendering (CVE-2025-53766) and CVE-2025-50165, another graphics rendering weakness. Another critical patch involves CVE-2025-53733, a vulnerability in Microsoft Word that can be exploited without user interaction and triggered through the Preview Pane.

One final critical bug tackled this month deserves attention: CVE-2025-53778, a bug in Windows NTLM, a core function of how Windows systems handle network authentication. According to Microsoft, the flaw could allow an attacker with low-level network access and basic user privileges to exploit NTLM and elevate to SYSTEM-level access — the highest level of privilege in Windows. Microsoft rates the exploitation of this bug as “more likely,” although there is no evidence the vulnerability is being exploited at the moment.

Feel free to holler in the comments if you experience problems installing any of these updates. As ever, the SANS Internet Storm Center has its useful breakdown of the Microsoft patches indexed by severity and CVSS score, and AskWoody.com is keeping an eye out for Windows patches that may cause problems for enterprises and end users.

GOOD MIGRATIONS

Windows 10 users out there likely have noticed by now that Microsoft really wants you to upgrade to Windows 11. The reason is that after the Patch Tuesday on October 14, 2025, Microsoft will stop shipping free security updates for Windows 10 computers. The trouble is, many PCs running Windows 10 do not meet the hardware specifications required to install Windows 11 (or they do, but just barely).

If the experience with Windows XP is any indicator, many of these older computers will wind up in landfills or else will be left running in an unpatched state. But if your Windows 10 PC doesn’t have the hardware chops to run Windows 11 and you’d still like to get some use out of it safely, consider installing a newbie-friendly version of Linux, like Linux Mint.

Like most modern Linux versions, Mint will run on anything with a 64-bit CPU that has at least 2GB of memory, although 4GB is recommended. In other words, it will run on almost any computer produced in the last decade.

There are many versions of Linux available, but Linux Mint is likely to be the most intuitive interface for regular Windows users, and it is largely configurable without any fuss at the text-only command-line prompt. Mint and other flavors of Linux come with LibreOffice, which is an open source suite of tools that includes applications similar to Microsoft Office, and it can open, edit and save documents as Microsoft Office files.

If you’d prefer to give Linux a test drive before installing it on a Windows PC, you can always just download it to a removable USB drive. From there, reboot the computer (with the removable drive plugged in) and select the option at startup to run the operating system from the external USB drive. If you don’t see an option for that after restarting, try restarting again and hitting the F8 button, which should open a list of bootable drives. Here’s a fairly thorough tutorial that walks through exactly how to do all this.

And if this is your first time trying out Linux, relax and have fun: The nice thing about a “live” version of Linux (as it’s called when the operating system is run from a removable drive such as a CD or a USB stick) is that none of your changes persist after a reboot. Even if you somehow manage to break something, a restart will return the system back to its original state.

Worse Than FailureCodeSOD: Round Strips

JavaScript is frequently surprising in terms of what functions it does not support. For example, while it has a Math.round function, that only rounds to the nearest integer, not to an arbitrary precision. That's no big deal, of course: if you wanted to round to, say, four decimal places, you could write something like: Math.round(n * 10000) / 10000.

But in the absence of a built-in function to handle that means that many developers choose to reinvent the wheel. Ryan found this one.

function stripExtraNumbers(num) {
    //check if the number's already okay
    //assume a whole number is valid
    var n2 = num.toString();
    if(n2.indexOf(".") == -1)  { return num; }
    //if it has numbers after the decimal point,
    //limit the number of digits after the decimal point to 4
    //we use parseFloat if strings are passed into the method
    if(typeof num == "string"){
        num = parseFloat(num).toFixed(4);
    } else {
        num = num.toFixed(4);
    }
    //strip any extra zeros
    return parseFloat(num.toString().replace(/0*$/,""));
}

We start by turning the number into a string and checking for a decimal point. If it doesn't have one, we've already rounded off, so we return the input. Now, we don't trust our input, so if the input was already a string, we'll parse it into a number. Once we know it's a number, we can call toFixed, which returns a string rounded off to the correct number of decimal places.

This is all very dumb. Just dumb. But it's the last line which gets really dumb.

toFixed returns a padded string, e.g. (10).toFixed(4) returns "10.0000". But this function doesn't want those trailing zeros, so they convert our string num into a string, then use a regex to replace all of the trailing zeros, and then parse it back into a float.

Which, of course, when storing the number as a number, we don't really care about trailing zeros. That's a formatting choice when we output it.

I'm always impressed by a code sample where every single line is wrong. It's like a little treat. In this case, it even gets me a sense of how it evolved from little snippets of misunderstood code. The regex to remove trailing zeros in some other place in this developer's experience led to degenerate cases where they had output like 10., so they also knew they needed to have the check at the top to see if the input had a fractional part. Which the only way they knew to do that was by looking for a . in a string (have fun internationalizing that!). They also clearly don't have a good grasp on types, so it makes sense that they have the extra string check, just to be on the safe side (though it's worth noting that parseFloat is perfectly happy to run on a value that's already a float).

This all could be a one-liner, or maybe two if you really need to verify your types. Yet here we are, with a delightfully wrong way to do everything.
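For comparison only (this is Python, not a drop-in replacement for the JavaScript above): in a language whose rounding function takes a precision, the whole thing collapses to a single call, and the trailing-zero worry evaporates because numbers don't carry formatting.

def strip_extra_numbers(num):
    """Round to at most four decimal places; tolerates numeric strings too."""
    return round(float(num), 4)

print(strip_extra_numbers("3.1415926"))  # 3.1416
print(strip_extra_numbers(10))           # 10.0 -- no trailing zeros to strip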

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsThe Long Term

Author: Mark Renney The world is broken; in all the ways we predicted it would be. It cannot be repaired; it is far too late for that now. But at least you can take a break, as long as you have the funds of course. You can check into one of the Long Term Hotels. […]

The post The Long Term appeared first on 365tomorrows.

Charles StrossCrib Sheet: A Conventional Boy

A Conventional Boy is the most recent published novel in the Laundry Files as of 2025, but somewhere between the fourth and sixth in internal chronological order—it takes place at least a year after the events of The Fuller Memorandum and at least a year before the events of The Nightmare Stacks.

I began writing it in 2009, and it was originally going to be a long short story (a novelette—8000-16,000 words). But one thing after another got in the way, until I finally picked it up to try and finish it in 2022—at which point it ran away to 40,000 words! Which put it at the upper end of the novella length range. And then I sent it to my editor at Tor.com, who asked for some more scenes covering Derek's life in Camp Sunshine, which shoved it right over the threshold into "short novel" territory at 53,000 words. That's inconveniently short for a stand-alone novel this century (it'd have been fine in the 1950s; Asimov's original Foundation novels were fix-ups of two novellas that bulked up to roughly that length), so we made a decision to go back to the format of The Atrocity Archives—a short novel bundled with another story (or stories) and an explanatory essay. In this case, we chose two novelettes previously published on Tor.com, and an essay exploring the origins of the D&D Satanic Panic of the 1980s (which features heavily in this novel, and which seems eerily topical in the current—2020s—political climate).

(Why is it short, and not a full-sized novel? Well, I wrote it in 2022-23, the year I had COVID19 twice and badly—not hospital-grade badly, but it left me with brain fog for more than a year and I'm pretty sure it did some permanent damage. As it happens, a novella is structurally simpler than a novel (it typically needs only one or two plot strands, rather than three or more or some elaborate extras), and I need to be able to hold the structure of a story together in my head while I write it. A Conventional Boy was the most complicated thing I could have written in that condition without it being visibly defective. There are only two plot strands and some historical flashbacks, they're easily interleaved, and the main plot itself is fairly simple. When your brain is a mass of congealed porridge? Keeping it simple is good. It was accepted by Tor.com for print and ebook publication in 2023, and would normally have come out in 2024, but for business reasons was delayed until January 2025. So take this as my 2024 book, slightly delayed, and suffice to say that my next book—The Regicide Report, due out in January 2026—is back to full length again.)

So, what's it about?

I introduced a new but then-minor Laundry character called Derek the DM in The Nightmare Stacks: Derek is portly, short-sighted, middle-aged, and works in Forecasting Ops, the department of precognition (predicting the future, or trying to), a unit I introduced as a throwaway gag in the novelette Overtime (which is also part of the book). If you think about the implications for any length of time it becomes apparent that precognition is a winning tool for any kind of intelligence agency, so I had to hedge around it a bit: it turns out that Forecasting Ops are not infallible. They can be "jammed" by precognitives working for rival organizations. Focussing too closely on a precise future can actually make it less likely to come to pass. And different precognitives are less or more accurate. Derek is one of the Laundry's best forecasters, and also an invaluable operation planner—or scenario designer, as he'd call it, because he was, and is, a Dungeon Master at heart.

I figured out that Derek's back-story had to be fascinating before I even finished writing The Nightmare Stacks, and I actually planned to write A Conventional Boy next. But somehow it got away from me, and kept getting shoved back down my to-do list until Derek appeared again in The Labyrinth Index and I realized I had to get him nailed down before The Regicide Report (for reasons that will become clear when that novel comes out). So here we are.

Derek began DM'ing for his group of friends in the early 1980s, using the original AD&D rules (the last edition I played). The campaign he's been running in Camp Sunshine is based on the core AD&D rules, with his own mutant extensions: he's rewritten almost everything, because TTRPG rule books are expensive when you're either a 14 year old with a 14-yo's pocket money allowance or a trusty in a prison that pays wages of 30p an hour. So he doesn't recognize the Omphalos Corporation's LARP scenario as a cut-rate knock-off of The Hidden Shrine of Tamoachan, and he didn't have the money to keep up with subsequent editions of AD&D.

Yes, there are some self-referential bits in here. As with the TTRPGs in the New Management books, they eerily prefigure events in the outside world in the Laundryverse. Derek has no idea that naming his homebrew ruleset and campaign Cult of the Black Pharaoh might be problematic until he met Iris Carpenter, Bob's treacherous manager from The Fuller Memorandum (and now Derek's boss in the camp, where she's serving out her sentence running the recreational services). Yes, the game scenario he runs at DiceCon is a garbled version of Eve's adventure in Quantum of Nightmares. (There's a reason he gets pulled into Forecasting Ops!)

DiceCon is set in Scarfolk—for further information, please re-read. Richard Littler's excellent satire of late 1970s north-west England exactly nails the ambiance I wanted for the setting, and Camp Sunshine was already set not far from there: so yes, this is a deliberate homage to Scarfolk (in parts).

And finally, Piranha Solution is real.

You can buy A Conventional Boy here (North America) or here (UK/EU).

Planet DebianSergio Cipriano: Running Docker (OCI) Images in Incus

Planet DebianFreexian Collaborators: Debian Contributions: DebConf 25, OpenSSH upgrades, Cross compilation collaboration and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-07

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

DebConf 25, by Stefano Rivera and Santiago Ruano Rincón

In July, DebConf 25 was held in Brest, France. Freexian was a gold sponsor and most of the Freexian team attended the event. Many fruitful discussions were had amongst our team and within the Debian community.

DebConf itself was organized by a local team in Brest, that included Santiago (who now lives in Uruguay). Stefano was also deeply involved in the organization, as a DebConf committee member, core video team, and the lead developer for the conference website. Running the conference took an enormous amount of work, consuming all of Stefano and Santiago’s time for most of July.

Lucas Kanashiro was active in the DebConf content team, reviewing talks and scheduling them. There were many last-minute changes to make during the event.

Anupa Ann Joseph was part of the Debian publicity team doing live coverage of DebConf 25 and was part of the DebConf 25 content team reviewing the talks. She also assisted the local team to procure the lanyards.

Recorded sessions presented by Freexian collaborators, often alongside other friends in Debian, included:

OpenSSH upgrades, by Colin Watson

Towards the end of a release cycle, people tend to do more upgrade testing, and this sometimes results in interesting problems. Manfred Stock reported “No new SSH connections possible during large part of upgrade to Debian Trixie”, which would have affected many people upgrading from Debian 12 (bookworm), with potentially severe consequences for people upgrading remote systems. In fact, there were two independent problems that each led to much the same symptom:

  • As part of hardening the OpenSSH server, OpenSSH 9.8 split the monolithic sshd listener process into two pieces: a minimal network listener (still called sshd), and an sshd-session process dealing with each individual session. Before this change, when sshd received an incoming connection, it forked and re-executed itself with some special parameters to deal with it; after this change, it forks and executes sshd-session instead, and sshd no longer accepts the parameters it used to accept for this.

    Debian package upgrades happen (roughly) in two phases: first we unpack the new files onto disk, and then we run some configuration steps which usually include things like restarting services. Normally this is fine, because the old service keeps on working until it’s restarted. In this case, unpacking the new files onto disk immediately stopped new SSH connections from working: the old sshd received the connection and tried to hand it off to a freshly-executed copy of the new sshd binary on disk, which no longer supports this. This wasn’t much of a problem when upgrading OpenSSH on its own or with a small number of other packages, but in release upgrades it left a large gap when you can’t SSH to the system any more, and if anything fails in that interval then you could be in trouble.

    After trying a couple of other approaches, Colin landed on the idea of having the openssh-server package divert /usr/sbin/sshd to /usr/sbin/sshd.session-split before the unpack step of an upgrade from before 9.8, then removing the diversion and moving the new file into place once it’s ready to restart the service. This reduces the period when new connections fail to a minimum.

  • Most OpenSSH processes, including sshd, check for a compatible version of the OpenSSL library when they start up. This check used to be very picky, among other things requiring both the major and minor part of the version number to match. OpenSSL 3 has a better versioning policy, and so OpenSSH 9.4p1 relaxed this check.

    Unfortunately, bookworm shipped with OpenSSH 9.2p1, so as soon as you unpacked the new OpenSSL library during an upgrade, sshd stopped working. This couldn’t be fixed by a change in trixie; we needed to change bookworm in advance of the upgrade so that it would tolerate newer versions of OpenSSL, and time was tight if we wanted this to be available before the release of Debian 13.

    Fortunately, there’s a stable-updates mechanism for exactly this sort of thing, and the stable release managers kindly accepted Colin’s proposal to fix this there.

The net result is that if you apply updates to bookworm (including stable-updates / bookworm-updates, which is enabled by default) before starting the upgrade to trixie, everything should be fine.

Cross compilation collaboration, by Helmut Grohne

Supporting cross building in Debian packages touches lots of areas of the archive, and quite a few of these matters are a shared responsibility between different teams. Hence, DebConf was an ideal opportunity to settle long-standing issues.

The cross-building BoF sparked lively discussions, as a significant fraction of developers employ cross builds to get their work done. In the trixie release, about two-thirds of the packages can satisfy their cross Build-Depends and about half of the packages can actually be cross built.

Miscellaneous contributions

  • Raphaël Hertzog updated tracker.debian.org to remove references to Debian 10 which was moved to archive.debian.org, and had many fruitful discussions related to Debusine during DebConf 25.
  • Carles Pina prepared some data, questions and information for the DebConf 25 l10n and i18n BoF.
  • Carles Pina demoed and discussed possible next steps for po-debconf-manager with different teams in DebConf 25. He also reviewed Catalan translations and sent them to the packages.
  • Carles Pina started investigating a django-compressor bug: reproduced the bug consistently and prepared a PR for django-compressor upstream (likely more details next month). Looked at packaging frictionless-py.
  • Stefano Rivera triaged Python CVEs against pypy3.
  • Stefano prepared an upload of a new upstream release of pypy3 to Debian experimental (due to the freeze).
  • Stefano uploaded python3.14 RC1 to Debian experimental.
  • Thorsten Alteholz uploaded a new upstream version of sane-airscan to experimental. He also started to work on a new upstream version of hplip.
  • Colin backported fixes for CVE-2025-50181 and CVE-2025-50182 in python-urllib3, and fixed several other release-critical or important bugs in Python team packages.
  • Lucas uploaded ruby3.4 to experimental as a starting point for the ruby-defaults transition that will happen after Trixie release.
  • Lucas coordinated with the Release team the fix of the remaining RC bugs involving ruby packages, and got them all fixed.
  • Lucas, as part of the Debian Ruby team, kicked off discussions to improve internal process/tooling.
  • Lucas, as part of the Debian Outreach team, engaged in multiple discussions around internship programs we run and also what else we could do to improve outreach in the Debian project.
  • Lucas joined the Local groups BoF during DebConf 25 and shared all the good experiences from the Brazilian community and committed to help to document everything to try to support other groups.
  • Helmut spent significant time with Samuel Thibault on improving architecture cross bootstrap for hurd-any mostly reviewing Samuel’s patches. He proposed a patch for improving bash’s detection of its pipesize and a change to dpkg-shlibdeps to improve behavior for building cross toolchains.
  • Helmut reiterated the multiarch policy proposal with a lot of help from Nattie Mayer-Hutchings, Rhonda D’Vine and Stuart Prescott.
  • Helmut finished his work on the process based unschroot prototype that was the main feature of his talk (see above).
  • Helmut analyzed a multiarch-related glibc upgrade failure induced by a /usr-move mitigation of systemd and sent a patch and regression fix both of which reached trixie in time. Thanks to Aurelien Jarno and the release team for their timely cooperation.
  • Helmut resurrected an earlier discussion about changing the semantics of Architecture: all packages in a multiarch context in order to improve the long-standing interpreter problem. With help from Tollef Fog Heen better semantics were discovered and agreement was reached with Guillem Jover and Julian Andres Klode to consider this change. The idea is to record a concrete architecture for every Architecture: all package in the dpkg database and enable choosing it as non-native.
  • Helmut implemented type hints for piuparts.
  • Helmut reviewed and improved a patch set of Jochen Sprickerhof for debvm.
  • Anupa was involved in discussions with the Debian Women team during DebConf 25.
  • Anupa started working for the trixie release coverage and started coordinating release parties.
  • Emilio helped coordinate the release of Debian 13 trixie.


Cryptogram Friday Squid Blogging: Squid-Shaped UFO Spotted Over Texas

Here’s the story. The commenters on X (formerly Twitter) are unimpressed.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Worse Than FailureCodeSOD: A Single Lint Problem

We've discussed singleton abuse as an antipattern many times on this site, but folks keep trying to find new ways to implement them badly. And Olivia's co-worker certainly found one.

We start with a C++ utility class with a bunch of functions in it:

//utilities.h
class CUtilities
{
public:
    CUtilities();
    void doSomething();
    void doSomeOtherThing();
};
extern CUtilities* g_Utility;

So yes, if you're making a pile of utility methods, or if you want a singleton object, the keyword you're looking for is static. We'll set that aside. This class declares a class, and then also declares that there will be a pointer to the class, somewhere.

We don't have to look far.

//utilities.cpp
CUtilities* g_Utility = nullptr;
CUtilities::CUtilities()
{
    g_Utility = this;
}

// all my do-whatever functions here

This defines the global pointer variable, and then also writes the constructor of the utility class so that it initializes the global pointer to itself.

It's worth noting, at this point, that this is not a singleton, because this does nothing to prevent multiple instances from being created. What it does guarantee is that for each new instance, we overwrite g_Utility without disposing of what was already in there, which is a nice memory leak.

But where, oh where, does the constructor get called?

//startup.h
class CUtilityInit
{
private:
    CUtilities m_Instance;
};

//startup.cpp
CUtilityInit *utils = new CUtilityInit();

I don't hate a program that starts with an initialization step that clearly instantiates all the key objects. There's just one little problem here that we'll come back to in just a moment, but let's look at the end result.

Anywhere that needs the utilities now can do this:

#include "utilities.h"

//in the code
g_Utility->doSomething();

There's just one key problem: back in the startup.h, we have a private member called CUtilities m_Instance which is never referenced anywhere else in the code. This means when people, like Olivia, are trawling through the codebase looking for linter errors they can fix, they may see an "unused member" and decide to remove it. Which is what Olivia did.

The result compiles just fine, but explodes at runtime since g_Utility was never initialized.

The fix was simple: just don't try and make this a singleton, since it isn't one anyway. At startup, she just populated g_Utility with an instance, and threw away all the weird code around populating it through construction.

Singletons are, as a general rule, bad. Badly implemented singletons themselves easily turn into landmines waiting for unwary developers. Stop being clever and don't try and apply a design pattern for the sake of saying you used a design pattern.

[Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

365 TomorrowsDear Jon

Author: Julian Miles, Staff Writer Two words. Nothing else. He turns the envelope over, then puts it down and picks up the ornate Kaldotarnib honour blade and turns that over before sliding it from the scabbard. He makes a few passes in the air, finishing with a swift double strike move. Closing his eyes, he […]

The post Dear Jon appeared first on 365tomorrows.

xkcdWhere Babies Come From

,

David BrinA debate about saving democracy, that will likely (needlessly) be lost

As Robert Heinlein's predictions keep coming true... (e.g. "crazy years" followed by oppressive theocracy)... I hear more formerly moderate/accommodating friends  refer to the scenario in Heinlein's REVOLT IN 2100 as the only likely way that decency, honor and sapience can ever be restored to the Republic.

And so... a press release of genuine importance: 

"On September 4 in New York and streaming online, Open to Debate hosts: “Should the U.S. Be Ruled by a CEO Dictator?” An 
idea gaining traction in some partisan circles and embraced by some high-profile Silicon Valley figures. Championed by
 Curtis Yarvin, self-described neo-monarchist and founder of "Dark Enlightenment," claiming that democracy has failed and is too slow to meet today’s challenges. The Dictator CEO he proposes, would cut through red tape, challenge institutions and deliver efficiencies.

"Glen Weyl, will argue NO. Consolidating power under a single leader undermines core values of democracy fundamental to America’s political system. History is also filled with examples of autocratic leadership leading to economic ruin and catastrophic decision-making. American democracy might be messy, but let’s focus on making it better, not abandoning it.

"The debate will be held on Thursday, September 4 at 7:00 PM ET at Racket NYC and stream live online." (Someone do a search and offer links in comments?)


== A needed debate -- and a likely disaster ==

Okay, I knew Yarvin when he was a fringe online harasser scampering for attention as "Mencius Moldbug." He was a gibbering ingrate then, howling that 'incels' -- or 'involuntarily celibate' white men -- should be given women of their choice, in order to slake their appetites. This core motivation serves today, as he suborns rich males by invoking implicit - or even explicit - images of Harems for the Deserving.

I do not exaggerate any of that, even slightly! Indeed, I've elsewhere dissected this disease and its most pustulatory Yarvin excrescence. See a tomographic scan of this would-be Machiavelli.

Alas, I doubt that Glen Weyl - for all his good intentions and passion at defending the Democratic Enlightenment - will do much more than fall into Yarvin's many traps, providing this neo-Goebbels with a platform, incrementally building his following. Above all, Weyl should not depend upon defending democracy as 'good' or embodying 'fundamental values.' That approach will only be persuasive to those who already support the moral argument. (As I do.)

Many will be drawn by romantic visions of glorious rightful kings and chosen-ones -- notions spread not just by Arthurian legends, but relentlessly by Hollywood, via Tolkien's Aragorn or Dune's Atreides or Jedi demigods and their ilk. These folks will nod in 'sad realism' as Yarvin denounces 'mob rule,' and calls for iron fisted stability. They shrug off appeals to democratic ideals and rights as sappy naïveté.

Others, who have fallen under the spell of cyclical history -- e.g. the cult of the Fourth Turning -- will accept dictatorship under the assumption that it's only a 'temporary' manifestation of a Time of Heroes -- til democracy can resume under a less decadent generation. Either way, these romantic incantation spells are immune to rebuttal. Both variants are perfectly adept at shrugging off moral defenses of citizen sovereignty.

There is one takedown that works! And that is to cite practical outcomes. 

Demand (as I have done, many times) that Yarvin name even a single kingship -- amid 6000 years of pervasive feudalism by inheritance brats and across five continents -- that ever had a continuous period of spectacular progress and accomplishment like America's recent 25 decades!

Indeed, tally the sum accomplishments of ALL historic kingdoms -- combined! Does that total come close to matching the feats and deeds and wonders wrought by Americans in just a single human lifetime, since the WWII GI Bill Generation -- using Democratic tools and public investment and Rule of Law -- truly made America great?

Defy Yarvin to support his bald-faced assertions of democracy's 'failure' by actually tabulating those compared accomplishments! Shouldn't ingrate yammerers demanding that we chuck out all the traits that gave them cushy lives bear some burden of proof?

Contrast our nation-of-opportunity vs. the stunning waste of talent that festered under feudalism, when rigged dominance by inheritance brats crushed social mobility. And thus, the best that any bright youngster might hope-for would be to follow his father's trade - beset by 'lordly' gangster protection rackets - amid cauterized ambition or hope! 

Show us any other era when a majority of kids were healthy and educated enough -- and fearlessly empowered -- to compete or cooperate fairly and to rise up by virtue of their merits and deeds, rather than inherited status? Empowered to take on elites with creative startups, for example? The one American trait that the world's inheritance brats are determined to expunge.

Ask about the Greatest Generation, so admired (in muzzy abstract) by today's gone-mad right. The GI Bill generation who built mighty universities and science and civil rights and the flattest-fairest society ever seen, till then... and who admired one living human above all others, Franklin Roosevelt. 

And who next - in the 1950s - revered almost as much a fellow named Jonas Salk.

Demand that Moldbug address that word -- competition -- which liberals today use far too little, especially since Adam Smith was the true founder of their movement!* A word that used to be a talisman for conservatism, but that U.S. conservatives never mention at all, nowadays. A word describing the exact thing that kingship directly suppresses. A word that will be utterly gelded, should Yarvin's acolytes have their way.

Mention the only other times that our way was tried... Periclean Athens and daVinci's Florence... early experiments whose accomplishments still shine across ages of feudal darkness.

Or the fact that only democracy has ever penetrated the curtain of delusion and flattery that always... always... surrounds mighty rulers. Even geniuses like Napoleon. Indeed, the central purpose and benefit of democracy is to apply accountability even upon top elites. Allowing the best of them to notice their errors and correct them under the searing medicine of criticism.

This approach -- and not goody-two-shoes moralizing about 'fundamental values' -- should be the obvious core of any rebuttal. Alas, I have learned that the obvious is often not-so. 

We are in our nadir-equivalent of 1862, when an earlier phase of the same struggle seemed hopeless to the Union... until -- (may it happen soon!) -- we find generals who are willing to try new tactics. New ideas. And the power of maneuver, when humanity's future is on the line.

Addendum: I will append below a photostat of Bertrand Russell’s forceful yet dignified letter of refusal to debate a British fascist, a response to Sir Oswald Ernald Mosley (the most despised Briton in 1000 years). I am not quite so mature that I would refuse to debate Mr. Yarvin. But Russell expressed himself brilliantly.


== Another sad case of giving in to gloom ==

I meant to stop there. But the gloom jeremiads roll on and on, helping no one. Take Chris Hedges' "Reign of Idiots".  


 "The idiots take over in the final days of crumbling civilizations. Idiot generals wage endless, unwinnable wars that bankrupt the nation. Idiot economists call for reducing taxes for the rich and cutting social service programs for the poor, and project economic growth on the basis of myth. Idiot industrialists poison the water, the soil and the air, slash jobs and depress wages. Idiot bankers gamble on self-created financial bubbles and impose crippling debt peonage on the citizens. Idiot journalists and public intellectuals pretend despotism is democracy. Idiot intelligence operatives orchestrate the overthrow of foreign governments to create lawless enclaves that give rise to enraged fanatics. Idiot professors, “experts” and “specialists” busy themselves with unintelligible jargon and arcane theory that buttresses the policies of the rulers. Idiot entertainers and producers create lurid spectacles of sex, gore and fantasy. There is a familiar checklist for extinction. We are ticking off every item on it."

 

Did you enjoy reading that? Shaking your head in sad resignation over the inevitable stoopidity of your fellow citizens? Did it occur to you that's what our enemies want from you?  

 

This rant-essay by Hedges begins by raving about idiocy without any irony over its own idiocy: 

"The idiots take over in the final days of crumbling civilizations....  

"There is a familiar checklist for extinction. We are ticking off every item on it."

 

Feh! And get bent, you perfect example of the thing you denounce! 

 

Never before in all of history has a nation had greater numbers - or a higher percentages - of wise and smart and knowing people. And not just at the maligned universities, or in the under-attack civil service, or our brilliant (but under-siege) officer corps, or in the streets. We have more (and higher percentages of) brilliant/wise folks than all other nations and societies across all of time... combined. 

 

Indeed, assailing and curbing and demoralizing all of the smart people is the shared goal of both MAGA lumpenprols and the world oligarchs who puppet them. Proving they are idiots, because it simply cannot succeed. 


What? Hey, oligarchs! Your plan is to intimidate and crush the hundred million smartest in society? The ones who know cyber, nano, nuclear, bio and all the rest?  That is your plan? Oh, you will not like us, when we finally get mad.

 

And yet, dopes like Chris Hedges yowl that it is working. It has to work, because you are all fooooools!



== May we find comfort and precedents in earlier, righteous victories ==


I'm reminded of a different phase of the recurring American Civil War, when (like today) the Union side needed... and then got... better generals. 

      Take, in particular, a moment - right after the Battle of the Wilderness - when Ulysses S. Grant heard his underlings whining about "What Bobby Lee is going to do to us next." 


Grant stood up and growled:


"STOP fretting about what Bobby Lee is gonna do to us. Start planning what we will do to Bobby Lee!"


There are a jillion fresh tactics we can use in this fight for civilization... like getting all the dems in GOP districts to re-register as Republicans, which would (for one thing) protect them from being purged out of the voter rolls. But also, it would truly screw up the radicals' Radicalization-via-Primary tactic. And weaken gerrymandering.


But in order to get started, we need first to stand up like confident women and men and reject idiocies like this "Reign of Idiots" bullshit whine. 


It contains some truths, sure, about the gang of criminal fools who have seized our institutions in their Project 2025 / KGB-planned putsch. And it's true that the polemical skills of Democrats could not possibly be worse.


But truths - out of context - can be lies. And Hedges's jeremiad could not have been better written by some Kremlin basement Goebbels, seeking to demoralize us. 

And fuck that, you tool of monsters.



== And finally... ==

Robert Reich assesses Newsom's proposal for voters to allow CA, OR and WA to re-gerrymander until Texas, Florida and N. Carolina stop. Blue voters in the west ENDED the foul crime years ago. But they may be talked into temporary retaliation vs. confederate cheaters.


Note, Red states are also planning to purge voter rolls! Tell all your friends to prevent being purged by RE-REGISTERING AS REPUBLICANS. Hold your nose and do it, as I did!


The only practical effects will be (1) to protect your voting rights and (2) let you vote in the only election that matters anymore in those states, the Republican primary.


See 1st comment below for how I have long proposed we deal with gerrymandering. But for now... it's over to you. Stand up.


-------

Planet DebianJonathan Carter: Debian 13

Debian 13 has finally been released!

One of the biggest and most under-hyped features is support for HTTP Boot. This allows you to simply specify a URL (to any d-i or live image ISO) in your computer's firmware setup and boot from it directly over the Internet, so on computers made in roughly the last 5 years there's no need to download an image, write it to a flash disk and then boot from that. This is also supported by the Tianocore free EFI firmware, which is useful if you'd like to try it out on QEMU/KVM.
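
If you would like to try it under QEMU/KVM, a minimal sketch looks something like the following; the OVMF firmware paths and the exact firmware menu labels are assumptions that vary between distributions and OVMF builds.

$ qemu-img create -f qcow2 disk.qcow2 20G        # an empty target disk
$ cp /usr/share/OVMF/OVMF_VARS.fd .              # writable UEFI variable store
$ qemu-system-x86_64 -machine q35,accel=kvm -m 2048 \
    -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
    -drive if=pflash,format=raw,file=OVMF_VARS.fd \
    -nic user,model=virtio-net-pci \
    -drive file=disk.qcow2,format=qcow2,if=virtio
# then enter the firmware setup, find the HTTP Boot entry under the network
# device settings, paste the URL of a d-i or live ISO, save and boot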

More details about Debian 13 available on the official press release.

The default theme for Debian 13 is Ceratopsian, designed by Elise Couper. I’ll be honest, I wasn’t 100% sure it was the best choice when it won the artwork vote, but it really grew on me over the last few months, and it looked great in combination with all kinds of other things during DebConf too, so it has certainly won me over.

And I particularly like the Plymouth theme. It's very minimal, and it reminds me of the Toy Story Trixie character; it's almost as if it helps explain the theme:

Plymouth (start-up/shutdown) theme.

Trixie, the character from Toy Story that was chosen as the codename for Debian 13.

Debian Local Team ISO testing

Yesterday we got some locals together for ISO testing and we got a cake with the wallpaper printed on it, along with our local team logo which has been a work in progress for the last 3 years, so hopefully we’ll finalise it this year! (it will be ready when it’s ready). It came out a lot bluer than the original wallpaper, but still tasted great.

For many releases, I’ve been the only person from South Africa doing ISO smoke-testing, and this time was quite different, since everyone else in the photo below tested an image except for me. I basically just provided some support and helped out with getting salsa/wiki accounts and some troubleshooting. It went nice and fast, and it’s always a big relief when there are no showstoppers for the release.

My dog was really wishing hard that the cake would slip off.

Packaging-wise, I only have one big new package for Trixie, and that’s Cambalache, a rapid application design UI builder for GTK3/GTK4.

The version in trixie is 0.94.1-3 and version 1.0 was recently released, so I’ll get that updated in forky and backport it if possible.

I was originally considering using Cambalache for an installer UI, but ended up going with a web front-end instead. But that’s moving firmly towards forky territory, so more on that another time!

Thanks to everyone who was involved in this release, so far upgrades have been very smooth!

Planet DebianC.J. Collier: Upgrading Proxmox 7 to 8

Some variant of the following[1] worked for me.

The first line is the start of a for loop that runs a command over ssh on each node in my cluster. The -t argument attaches a controlling terminal to STDIN, STDOUT and STDERR of the session, since there will not be an intervening shell to do it for us. The argument to ssh is a workflow of bash commands. They upgrade the 7.x system to the most recent packages on the repository, then update the sources.list entries for the system to point at bookworm sources instead of bullseye. The package cache is updated and the proxmox-ve package is installed. Packages which are already installed are upgraded to the versions from bookworm, and the upgrade concludes.

Dear reader, you might be surprised how many times I saw the word “perl” scroll by during the manual, serial scrolling of this install. It took hours. There were a few prompts, so stand by the keyboard!

[1]

gpg: key 1140AF8F639E0C39: public key "Proxmox Bookworm Release Key " imported
# have your ssh agent keychain running and a key loaded that's installed at 
# ~root/.ssh/authorized_keys on each node 
apt-get install -y keychain
eval $(keychain --eval)
ssh-add ~/.ssh/id_rsa
# Replace the IP address prefix (100.64.79.) and  suffixes (64, 121-128)
# with the actual IPs of your cluster nodes.  Or use hostnames :-)
for o in 64 121 122 123 124 125 126 127 128 ; do   ssh -t root@100.64.79.$o '
  sed -i -e s/bullseye/bookworm/g /etc/apt/sources.list $(compgen -G "/etc/apt/sources.list.d/*.list") \
  && echo "deb [signed-by=/usr/share/keyrings/proxmox-release.gpg] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
    | dd of=/etc/apt/sources.list.d/proxmox-release.list status=none \
  && echo "deb [signed-by=/usr/share/keyrings/proxmox-release.gpg] http://download.proxmox.com/debian/ceph-quincy bookworm main no-subscription" \
    | dd of=/etc/apt/sources.list.d/ceph.list status=none \
  && proxmox_keyid="0xf4e136c67cdce41ae6de6fc81140af8f639e0c39" \
  && curl "https://keyserver.ubuntu.com/pks/lookup?op=get&search=${proxmox_keyid}" \
    | gpg --dearmor -o /usr/share/keyrings/proxmox-release.gpg  \
  && apt-get -y -qq update \
  && apt-get -y -qq install proxmox-ve \
  && apt-get -y -qq full-upgrade \
  && echo "$(hostname) upgraded"'; done

365 TomorrowsBenevolence

Author: Lance J. Mushung Director and Operator, both of whom resembled giant copper-colored eggs, floated into their ship’s control compartment. The viewer displayed the disk of a blue and white planet. Operator transmitted, “Director, these organics are more contentious and disharmonious than most.” “That does not matter. Our theology is benevolence to all organics.” “Of […]

The post Benevolence appeared first on 365tomorrows.

,

Planet DebianBits from Debian: Debian stable is now Debian 13 "trixie"!

"trixie has been released" banner

We are pleased to announce the official release of Debian 13, codenamed trixie!

What's New in Debian 13

  • Official support for RISC-V (64-bit riscv64), a major architecture milestone
  • Enhanced security through ROP and COP/JOP hardening on both amd64 and arm64 (Intel CET and ARM PAC/BTI support)
  • HTTP Boot support in Debian Installer and Live images for UEFI/U-Boot systems
  • Upgraded software stack: GNOME 48, KDE Plasma 6, Linux kernel 6.12 LTS, GCC 14.2, Python 3.13, and more

Want to install it?

Fresh installation ISOs are now available, including the final Debian Installer featuring kernel 6.12.38 and mirror improvements. Choose your favourite installation media and read the installation manual. You can also use an official cloud image directly on your cloud provider, or try Debian prior to installing it using our "live" images.

Already a happy Debian user and you only want to upgrade?

The full upgrade path from Debian 12 "bookworm" is supported and documented in the Release Notes. The upgrade notes cover APT source preparation, handling of obsolete packages, and ensuring system resilience.
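
As a rough sketch, the documented path follows the familiar two-stage apt pattern; this is not a substitute for reading the Release Notes, and it assumes one-line-style apt sources rather than deb822 files.

$ sudo apt update && sudo apt upgrade          # start from a fully patched bookworm
$ sudo sed -i 's/bookworm/trixie/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
$ sudo apt update
$ sudo apt upgrade --without-new-pkgs          # minimal first stage
$ sudo apt full-upgrade                        # complete the upgrade
$ sudo apt autoremove                          # clean out obsolete packages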

Additional Information

For full details, including upgrade instructions, known issues, and contributors, see the official Release Notes for Debian 13 "trixie".

Congratulations to all developers, QA testers, and volunteers who made Debian 13 "trixie" possible!

Do you want to celebrate the release?

To celebrate with us on this occasion, find a release party near you, and if there isn't any, organize one!

Planet DebianThorsten Alteholz: My Debian Activities in July 2025

Debian LTS

This was my hundred-thirty-third month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4255-1] audiofile security update fixing two CVEs related to an integer overflow and a memory leak.
  • [DLA 4256-1] libetpan security update fixing one CVE related to a null pointer dereference.
  • [DLA 4257-1] libcaca security update fixing two CVEs related to heap buffer overflows.
  • [DLA 4258-1] libfastjson security update fixing one CVE related to an out-of-bounds write.
  • [#1106867] kmail-account-wizard was marked as accepted

I also continued my work on suricata, which turned out to be more challenging than expected. This month I also did a week of FD duties and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the eighty-fourth ELTS month. Unfortunately my allocated hours were far less than expected, so I couldn’t do as much work as planned.

Most of the time I spent with FD tasks and I also attended the monthly LTS/ELTS meeting. I further listened to the debusine talks during debconf. On the one hand I would like to use debusine to prepare uploads for embargoed ELTS issues, on the other hand I would like to use debusine to run the version of lintian that is used in the different releases. At the moment some manual steps are involved here and I tried to automate things. Of course like for LTS, I also continued my work on suricata.

Debian Printing

This month I uploaded a new upstream version of:

Guess what, I also started to work on a new version of hplip and intend to upload it in August.

This work is generously funded by Freexian!

Debian Astro

This month I uploaded new upstream versions of:

  • supernovas (sponsored upload to experimental)
  • calceph (sponsored upload to experimental)

I also uploaded the new package boinor. This is a fork of poliastro, which was retired by upstream and removed from Debian some months ago. I adopted it and rebranded it at upstream's request. boinor is an abbreviation of BOdies IN ORbit, and I hope this software is still useful.

Debian Mobcom

Unfortunately I didn’t found any time to work on this topic.

misc

In my fight against outdated RFPs, I closed 31 of them in July. Their number is down to 3447 (how can you dare to open new RFPs? :-)). Don't be afraid of them, they don't bite and are happy to be released to a closed state.

FTP master

The peace will soon come to an end, so this month I accepted 87 and rejected 2 packages. The overall number of packages that got accepted was 100.

365 TomorrowsTomorrow, and Tomorrow, and Tomorrow

Author: Alexandra Peel The future’s bright, they said. The future’s now! When the Church of Eternity claimed its wise men had seen the light from future days, we bowed to their superior knowledge and respected their ages-long claim on, if not our mortal bodies, then our souls. Now we had the opportunity to transform ourselves […]

The post Tomorrow, and Tomorrow, and Tomorrow appeared first on 365tomorrows.

Planet DebianValhalla's Things: MOAR Pattern Weights

Posted on August 9, 2025
Tags: madeof:atoms

Six hexagonal blocks with a Standard Compliant sticker on top: mobian (blue variant), alizarin molecule, Use Jabber / Do Crime, #FreeSoftWear, indigotin molecule, The internet is ours with a cat that plays with yarn.

I’ve collected some more Standard Compliant stickers.

A picture of the lid of my laptop: a relatively old thinkpad carpeted with hexagonal stickers: Fediverse, a Debian swirl made of cat paw prints, #FreeSoftWear, 31 years of Debian, Open Source Hardware, XMPP, Ada Lovelace, rainbow holographic Fediverse, mobian (blue sticker), tails (cut from a round one), Use Jabber / Do Crime, LIFO, people consensually doing things together (center piece), GL-Como, Piecepack, indigotin, my phone runs debian btw, reproducible builds (cut from round), 4 freedoms in Italian (cut from round), Debian tea, alizarin, Software Heritage (cut from round), ournet.rocks (the cat also seen above), Python, this machine kills -9 daemons, 25 years of FOSDEM, Friendica, Flare. There are only 5 full hexagonal slots free.

Some went on my laptop, of course, but some were selected for another tool I use relatively often: more pattern weights like the ones I blogged about in February.

And of course the sources:

I have enough washers to make two more weights, and even more stickers, but the printer is currently not in use, so I guess they will happen a few months or so in the future.

,

Cryptogram Friday Squid Blogging: New Vulnerability in Squid HTTP Proxy Server

In a rare squid/security combined post, a new vulnerability was discovered in the Squid HTTP proxy server.

Krebs on SecurityKrebsOnSecurity in New ‘Most Wanted’ HBO Max Series

A new documentary series about cybercrime airing next month on HBO Max features interviews with Yours Truly. The four-part series follows the exploits of Julius Kivimäki, a prolific Finnish hacker recently convicted of leaking tens of thousands of patient records from an online psychotherapy practice while attempting to extort the clinic and its patients.

The documentary, “Most Wanted: Teen Hacker,” explores the 27-year-old Kivimäki’s lengthy and increasingly destructive career, one that was marked by cyber attacks designed to result in real-world physical impacts on their targets.

By the age of 14, Kivimäki had fallen in with a group of criminal hackers who were mass-compromising websites and milking them for customer payment card data. Kivimäki and his friends enjoyed harassing and terrorizing others by “swatting” their homes — calling in fake hostage situations or bomb threats at a target’s address in the hopes of triggering a heavily-armed police response to that location.

On Dec. 26, 2014, Kivimäki and fellow members of a group of online hooligans calling themselves the Lizard Squad launched a massive distributed denial-of-service (DDoS) attack against the Sony Playstation and Microsoft Xbox Live platforms, preventing millions of users from playing with their shiny new gaming rigs the day after Christmas. The Lizard Squad later acknowledged that the stunt was planned to call attention to their new DDoS-for-hire service, which came online and started selling subscriptions shortly after the attack.

Finnish investigators said Kivimäki also was responsible for a 2014 bomb threat against former Sony Online Entertainment President John Smedley that grounded an American Airlines plane. That incident was widely reported to have started with a Twitter post from the Lizard Squad, after Smedley mentioned some upcoming travel plans online. But according to Smedley and Finnish investigators, the bomb threat started with a phone call from Kivimäki.

Julius “Zeekill” Kivimaki, in December 2014.

The creaky wheels of justice seemed to be catching up with Kivimäki in mid-2015, when a Finnish court found him guilty of more than 50,000 cybercrimes, including data breaches, payment fraud, and operating a global botnet of hacked computers. Unfortunately, the defendant was 17 at the time, and received little more than a slap on the wrist: A two-year suspended sentence and a small fine.

Kivimäki immediately bragged online about the lenient sentencing, posting on Twitter that he was an “untouchable hacker god.” I wrote a column in 2015 lamenting his laughable punishment because it was clear even then that this was a person who enjoyed watching other people suffer, and who seemed utterly incapable of remorse about any of it. It was also abundantly clear to everyone who investigated his crimes that he wasn’t going to quit unless someone made him stop.

In response to some of my early reporting that mentioned Kivimäki, one reader shared that they had been dealing with non-stop harassment and abuse from Kivimäki for years, including swatting incidents, unwanted deliveries and subscriptions, emails to her friends and co-workers, as well as threatening phonecalls and texts at all hours of the night. The reader, who spoke on condition of anonymity, shared that Kivimäki at one point confided that he had no reason whatsoever for harassing her — that she was picked at random and that it was just something he did for laughs.

Five years after Kivimäki’s conviction, the Vastaamo Psychotherapy Center in Finland became the target of blackmail when a tormentor identified as “ransom_man” demanded payment of 40 bitcoins (~450,000 euros at the time) in return for a promise not to publish highly sensitive therapy session notes Vastaamo had exposed online.

Ransom_man, a.k.a. Kivimäki, announced on the dark web that he would start publishing 100 patient profiles every 24 hours. When Vastaamo declined to pay, ransom_man shifted to extorting individual patients. According to Finnish police, some 22,000 victims reported extortion attempts targeting them personally: targeted emails that threatened to publish their therapy notes online unless they paid a 500 euro ransom.

In October 2022, Finnish authorities charged Kivimäki with extorting Vastaamo and its patients. But by that time he was on the run from the law and living it up across Europe, spending lavishly on fancy cars, apartments and a hard-partying lifestyle.

In February 2023, Kivimäki was arrested in France after authorities there responded to a domestic disturbance call and found the defendant sleeping off a hangover on the couch of a woman he’d met the night before. The French police grew suspicious when the 6′ 3″ blonde, green-eyed man presented an ID that stated he was of Romanian nationality.

A redacted copy of an ID Kivimaki gave to French authorities claiming he was from Romania.

In April 2024, Kivimäki was sentenced to more than six years in prison after being convicted of extorting Vastaamo and its patients.

The documentary is directed by the award-winning Finnish producer and director Sami Kieski and co-written by Joni Soila. According to an August 6 press release, the four 43-minute episodes will drop weekly on Fridays throughout September across Europe, the U.S., Latin America, Australia and South-East Asia.

Worse Than FailureError'd: Voluntold

It is said (allegedly by the Scots) that confession heals the soul. But does it need to be strictly voluntary? The folks toiling away over at CodeSOD conscientiously change the names to protect the innocent but this side of the house is committed to curing the tortured souls of webdevs. Whether they like it or not. Sadly Sam's submission has been blinded, so the black black soul of xxxxxxxxxxte.com remains unfortunately undesmirched, but three others should be experiencing the sweet sweet healing light of day right about now. I sure hope they appreciate it.

More monkey business this week from Reinier B. who is hoping to check in on some distant cousins. "I'll make sure to accept email from {email address}, otherwise I won't be able to visit {zoo name}."

[screenshot]

Alex A. is "trying to pay customs duty." It definitely can be.

[screenshot]

"I know it's hard to recruit good developers," commiserates Sam B. , "but it's like they're not even trying." They sure are, as above.

[screenshot]

Peter G. bemoans "Apparently this network power thingamajig, found on Aliexpress, is pain itself if the brand name on it is to be believed." Cicero wept.

[screenshot]

Jan B. takes the perfecta, hitting not only this week's theme of flubstitutions but also a bounty of bodged null references to boot. "This is one of the hardest choices I've ever had in my life. I'm not sure if I'd prefer null or null as my location.detection.message.CZ." Go in peace.

[screenshot]

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsBecause I Elected You

Author: Eva C. Stein Aidan hadn’t meant to bring it up – not here, not today. But when he answered the door, his impulse signal spiked. He let her speak first. “Don’t look so worried,” Mae said as she stepped in – no invitation needed. “It’s good news. They’ve given us a fifteen-minute slot.” “That’s […]

The post Because I Elected You appeared first on 365tomorrows.

,

Cryptogram SIGINT During World War II

The NSA and GCHQ have jointly published a history of World War II SIGINT: “Secret Messengers: Disseminating SIGINT in the Second World War.” This is the story of the British SLUs (Special Liaison Units) and the American SSOs (Special Security Officers).

Cryptogram The “Incriminating Video” Scam

A few years ago, scammers invented a new phishing email. They would claim to have hacked your computer, turned your webcam on, and videoed you watching porn or having sex. BuzzFeed has an article talking about a “shockingly realistic” variant, which includes photos of you and your house—more specific information.

The article contains “steps you can take to figure out if it’s a scam,” but omits the first and most fundamental piece of advice: If the hacker had incriminating video about you, they would show you a clip. Just a taste, not the worst bits so you had to worry about how bad it could be, but something. If the hacker doesn’t show you any video, they don’t have any video. Everything else is window dressing.

I remember when this scam was first invented. I calmed several people who were legitimately worried with that one fact.

Cryptogram Automatic License Plate Readers Are Coming to Schools

Fears around children are opening up a new market for automatic license plate readers.

Cryptogram Google Project Zero Changes Its Disclosure Policy

Google’s vulnerability finding team is again pushing the envelope of responsible disclosure:

Google’s Project Zero team will retain its existing 90+30 policy regarding vulnerability disclosures, in which it provides vendors with 90 days before full disclosure takes place, with a 30-day period allowed for patch adoption if the bug is fixed before the deadline.

However, as of July 29, Project Zero will also release limited details about any discovery they make within one week of vendor disclosure. This information will encompass:

  • The vendor or open-source project that received the report
  • The affected product
  • The date the report was filed and when the 90-day disclosure deadline expires

I have mixed feelings about this. On the one hand, I like that it puts more pressure on vendors to patch quickly. On the other hand, if no indication is provided regarding how severe a vulnerability is, it could easily cause unnecessary panic.

The problem is that Google is not a neutral vulnerability hunting party. To the extent that it finds, publishes, and reduces confidence in competitors’ products, Google benefits as a company.

Worse Than FailureDivine Comedy

"Code should be clear and explain what it does, comments should explain why it does that." This aphorism is a decent enough guideline, though like any guidance short enough to fit on a bumper sticker, it can easily be overapplied or misapplied.

Today, we're going to look at a comment Salagir wrote. This comment does explain what the code does, can't hope to explain why, and instead serves as a cautionary tale. We're going to take the comment in sections, because it's that long.

This is about a stored procedure in MariaDB. Think of Salagir as our Virgil, a guide showing us around the circles of hell. The first circle? A warning that the dead code will remain in the code base:

	/************************** Dead code, but don't delete!

	  What follows if the history of a terrible, terrible code.
	  I keep it for future generations.
	  Read it in a cold evening in front of the fireplace.

My default stance is "just delete bad, dead code". But it does mean we get this story out of it, so for now I'll allow it.

	  **** XXX ****   This is the story of the stored procedure for getext_fields.   **** XXX ****

	Gets the english and asked language for the field, returns what it finds: it's the translation you want.
		   Called like this:
		   " SELECT getext('$table.$field', $key, '$lang') as $label "
		   The function is only *in the database you work on right now*.

Okay, this seems like a pretty simple function. But why does this say "the function is only in the database you work on right now"? That's concerning.

		***** About syntax!!
			The code below can NOT be used by copy and paste in SQL admin (like phpmyadmin), due to the multiple-query that needs DELIMITER set.
			The code that works in phpmyadmin is this:
DELIMITER $$
DROP FUNCTION IF EXISTS getext$$
CREATE FUNCTION (...same...)
		LIMIT 1;
	RETURN `txt_out`;
END$$
			However, DELIMITER breaks the code when executed from PHP.

Am I drowning in the river Styx? Why would I be copy/pasting SQL code into PhpMyAdmin from my PHP code? Is… is this a thing people were doing? Or was it going the opposite way, and people were writing delimited statements and hoping to execute them as a single query? I'm not surprised that didn't work.

		***** About configuration!!!
			IMPORTANT: If you have two MySQL servers bind in Replication mode in order to be able to execute this code, you (or your admin) should set:
			SET GLOBAL log_bin_trust_function_creators = 1;
			Without that, adding of this function will fail (without any error).

I don't know the depths of MariaDB, so I can't comment on if this is a WTF. What leaps out to me though, is that this likely needs to be in a higher-level form of documentation, since this is a high-level configuration flag. Having it live here is a bit buried. But, this is dead code, so it's fine, I suppose.

		***** About indexes!!!!
			The primary key was not used as index in the first version of this function. No key was used.
			Because the code you see here is modified for it's execution. And
				`field`=my_field
			becomes
				`field`= NAME_CONST('my_field',_ascii'[value]' COLLATE 'ascii_bin')
			And if the type of my_field in the function parameter wasn't the exact same as the type of `text`, no index is used!
			At first, I didn't specify the charset, and it became
				`field`= NAME_CONST('my_field',_utf8'[value]' COLLATE 'utf8_unicode_ci')
			Because utf8 is my default, and no index was used, the table `getext_fields` was read entirely each time!
			Be careful of your types and charsets... Also...

Because the code you see here is modified for its execution. What? NAME_CONST is meant to create synthetic columns not pulled from tables, e.g. SELECT NAME_CONST("foo", "bar") would create a result set with one column ("foo"), with one row ("bar"). I guess this is fine as part of a join- but the idea that the code written in the function gets modified before execution is a skin-peelingly bad idea. And if the query is rewritten before being sent to the database, I bet that makes debugging hard.

		***** About trying to debug!!!!!
			To see what the query becomes, there is *no simple way*.
			I literally looped on a SHOW PROCESSLIST to see it!
			Bonus: if you created the function with mysql user "root" and use it with user "SomeName", it works.
			But if you do the show processlist with "SomeName", you won't see it!!

Ah, yes, of course. I love running queries against the database without knowing what they are, and having to use diagnostic tools in the database to hope to understand what I'm doing.

		***** The final straw!!!!!!
			When we migrated to MariaDB, when calling this a lot, we had sometimes the procedure call stucked, and UNKILLABLE even on reboot.
			To fix it, we had to ENTIRELY DESTROY THE DATABASE AND CREATE IT BACK FROM THE SLAVE.
			Several times in the same month!!!

This is the 9th circle of hell, reserved for traitors and people who mix tabs and spaces in the same file. Unkillable even on reboot? How do you even do that? I have a hunch about the database trying to retain consistency even after failures, but what the hell are they doing inside this function creation statement that can break the database that hard? The good news(?) is the comment(!) contains some of the code that was used:

		**** XXX ****    The creation actual code, was:   **** XXX ****

		// What DB are we in?
		$PGf = $Phoenix['Getext']['fields'];
		$db = $PGf['sql_database']? : (
				$PGf['sql_connection'][3]? : (
						$sql->query2cell("SELECT DATABASE()")
					)
				);

		$func = $sql->query2assoc("SHOW FUNCTION STATUS WHERE `name`='getext' AND `db`='".$sql->e($db)."'");

		if ( !count($func) ) {
			$sql->query(<<<MYSQL
				CREATE FUNCTION {$sql->gt_db}getext(my_field VARCHAR(255) charset {$ascii}, my_id INT(10) UNSIGNED, my_lang VARCHAR(6) charset {$ascii})
				RETURNS TEXT DETERMINISTIC
				BEGIN
					DECLARE `txt_out` TEXT;
					SELECT `text` INTO `txt_out`
						FROM {$sql->gt_db}`getext_fields`
						WHERE `field`=my_field AND `id`=my_id AND `lang` IN ('en',my_lang) AND `text`!=''
						ORDER BY IF(`lang`=my_lang, 0, 1)
						LIMIT 1;
					RETURN `txt_out`;
				END;
MYSQL
			);
			...
		}

I hate doing string munging to generate SQL statements, but I especially hate it when the very name of the object created is dynamic. The actual query doesn't look too unreasonable, but everything about how we got here is terrifying.

		**** XXX ****    Today, this is not used anymore, because...   **** XXX ****

		Because a simple sub-query perfectly works! And no maria-db bug.

		Thus, in the function selects()
		The code:
			//example: getext('character.name', `character_id`, 'fr') as name
			$sels[] = $this->sql_fields->gt_db."getext('$table.$field', $key, '$lang') as `$label`";

		Is now:
			$sels[] = "(SELECT `text` FROM {$this->sql_fields->gt_db}`getext_fields`
				WHERE `field`='$table.$field' AND `lang` IN ('en', '$lang') AND `id`=$key AND `text`!=''
				ORDER BY IF(`lang`='$lang', 0, 1) LIMIT 1) as `$label`";

		Less nice to look at, but no procedure, all the previous problems GONE!


		**** XXX   The end.
*/

Of course a simple subquery (or heck, probably a join!) could handle this. Linking data across two tables is what databases are extremely good at. I agree that, at the call site, this is less readable, but there are plenty of ways one could clean this up to make it more readable. Heck, with this, it looks a heck of a lot like you could have written a much simpler function.

Salagir did not provide the entirety of the code, just this comment. The comment remains in the code, as a warning sign. That said, it's a bit verbose. I think a simple "Abandon all hope, ye who enter here," would have covered it.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 TomorrowsTsunami Blues

Author: Jenny Abbott Avery Darger started discussing his final arrangements on the third day, which was a good sign. They were small decisions at first—plans for cremation in space, for example—and Tsu knew not to rush him. She had the routine down pat for premium clients and was committed to giving him his money’s worth. […]

The post Tsunami Blues appeared first on 365tomorrows.

,

Planet DebianReproducible Builds: Reproducible Builds in July 2025

Welcome to the seventh report from the Reproducible Builds project in 2025. Our monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. If you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

In this report:

  1. Reproducible Builds Summit 2025
  2. Reproducible Builds an official goal for SUSE Enterprise Linux
  3. Reproducible Builds at FOSSY 2025
  4. New OSS Rebuild project from Google
  5. New extension of Python setuptools to support reproducible builds
  6. diffoscope
  7. New library to patch system functions for reproducibility
  8. Independently Reproducible Git Bundles
  9. Website updates
  10. Distribution work
  11. Reproducibility testing framework
  12. Upstream patches

Reproducible Builds Summit 2025

We are extremely pleased to announce the upcoming Reproducible Builds Summit, set to take place from October 28th — 30th 2025 in Vienna, Austria!

We are thrilled to host the eighth edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin, Hamburg and Athens. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort.

During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving.

If you’re interesting in joining us this year, please make sure to read the event page which has more details about the event and location. Registration is open until 20th September 2025, and we are very much looking forward to seeing many readers of these reports there!


Reproducible Builds an official goal for SUSE Enterprise Linux

On our mailing list this month, Bernhard M. Wiedemann revealed the big news that reproducibility is now an official goal for SUSE Linux Enterprise Server (SLES) 16:

[Everything] changed earlier this year when reproducible-builds for SLES-16 became an official goal for the product. More people are talking about digital sovereignty and supply-chain security now. […] Today, only 9 of 3319 (source) packages have significant problems left (plus 7 with pending fixes), so 99.5% of packages have reproducible builds.


Reproducible Builds at FOSSY 2025

On Saturday 2nd August, Vagrant Cascadian and Chris Lamb presented at this year’s FOSSY 2025. Their talk, titled Never Mind the Checkboxes, Here’s Reproducible Builds!, was introduced as follows:

There are numerous policy compliance and regulatory processes being developed that target software development… but do they solve actual problems? Does it improve the quality of software? Do Software Bill of Materials (SBOMs) actually give you the information necessary to verify how a given software artifact was built? What is the goal of all these compliance checklists anyways… or more importantly, what should the goals be? If a software object is signed, who should be trusted to sign it, and can they be trusted … forever?

Hosted by the Software Freedom Conservancy and taking place in Portland, Oregon, USA, FOSSY aims to be a community-focused event: “Whether you are a long time contributing member of a free software project, a recent graduate of a coding bootcamp or university, or just have an interest in the possibilities that free and open source software bring, FOSSY will have something for you”. More information on the event is available on the FOSSY 2025 website, including the full programme schedule.

Vagrant and Chris also staffed a table, where they were available to answer any questions about Reproducible Builds and discuss collaborations with other projects.


New OSS Rebuild project from Google

The Google Open Source Security Team (GOSST) published an article this month announcing OSS Rebuild, “a new project to strengthen trust in open source package ecosystems by reproducing upstream artifacts.” As the post itself documents, the new project comprises four facets:

  • Automation to derive declarative build definitions for existing PyPI (Python), npm (JS/TS), and Crates.io (Rust) packages.
  • SLSA Provenance for thousands of packages across our supported ecosystems, meeting SLSA Build Level 3 requirements with no publisher intervention.
  • Build observability and verification tools that security teams can integrate into their existing vulnerability management workflows.
  • Infrastructure definitions to allow organizations to easily run their own instances of OSS Rebuild to rebuild, generate, sign, and distribute provenance.

In one difference from most projects that aim for bit-for-bit reproducibility, OSS Rebuild aims for a kind of “semantic” reproducibility:

Through automation and heuristics, we determine a prospective build definition for a target package and rebuild it. We semantically compare the result with the existing upstream artifact, normalizing each one to remove instabilities that cause bit-for-bit comparisons to fail (e.g. archive compression).
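
As a toy illustration of that idea (these commands are mine, not OSS Rebuild's tooling): two tarballs can differ byte-for-byte purely because of the compression layer, yet match once that layer is stripped away.

$ sha256sum upstream.tar.gz rebuilt.tar.gz               # differ (gzip mtime, compression level, ...)
$ cmp <(zcat upstream.tar.gz) <(zcat rebuilt.tar.gz)     # compare the raw tar streams instead
$ mkdir a b && tar -xzf upstream.tar.gz -C a && tar -xzf rebuilt.tar.gz -C b
$ diff -r a b                                            # or compare the extracted file trees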

The extensive post includes examples about how to access OSS Rebuild attestations using the Go-based command-line interface.


New extension of Python setuptools to support reproducible builds

Wim Jeantine-Glenn has written a PEP 517 Build backend in order to enable reproducible builds when building Python projects that use setuptools.

Called setuptools-reproducible, the project’s README file contains the following:

Setuptools can create reproducible wheel archives (.whl) by setting SOURCE_DATE_EPOCH at build time, but setting the env var is insufficient for creating reproducible sdists (.tar.gz). setuptools-reproducible [therefore] wraps the hooks build_sdist build_wheel with some modifications to make reproducible builds by default.
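
A minimal sketch of how this might be used; the build-backend string below is my assumption from the project name, so check the project's README before copying it.

# in pyproject.toml (assumed):  build-backend = "setuptools_reproducible"
$ export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)   # pin timestamps to the last commit
$ python3 -m build                                      # produces dist/*.tar.gz and dist/*.whl
$ sha256sum dist/*                                      # rebuild elsewhere and compare these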


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 301, 302 and 303 to Debian:

  • Improvements:

    • Use Difference.from_operation in an attempt to pipeline the output of the extract-vmlinux script, potentially avoiding holding it all in memory. []
    • Memoize a number of calls to --version, saving a very large number of external subprocess calls.
  • Bug fixes:

    • Don’t check for PyPDF version 3 specifically, check for versions greater than 3. []
    • Ensure that Java class files are named .class on the filesystem before passing them to javap(1). []
    • Mask stderr from extract-vmlinux script. [][]
    • Avoid spurious differences in h5dump output caused by exposure of absolute internal extraction paths. (#1108690)
  • Misc:

    • Use our_check_output in the ODT comparator. []
    • Update copyright years. []

In addition:

Lastly, Chris Lamb added a tmpfs to try.diffoscope.org so that diffoscope has a non-trivial temporary area to unpack archives, etc. []
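
For anyone who has not run the tool locally, a typical invocation (with illustrative file names) looks something like this:

$ diffoscope --html report.html first-build/foo_1.0_amd64.deb second-build/foo_1.0_amd64.deb
$ xdg-open report.html        # browse the recursive, content-aware diff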

Elsewhere in our tooling, reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, reprotest version 0.7.30 was uploaded to Debian unstable by Holger Levsen, chiefly including a change by Rebecca N. Palmer to not call sudo with the -h flag in order to fix Debian bug #1108550. []


New library to patch system functions for reproducibility

Nicolas Graves has written and published libfate, a simple collection of tiny libraries to patch system functions deterministically using LD_PRELOAD. According to the project’s README:

libfate provides deterministic replacements for common non-deterministic system functions that can break reproducible builds. Instead of relying on complex build systems or apps or extensive patching, libfate uses the LD_PRELOAD trick to intercept system calls and return fixed, predictable values.
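
The underlying LD_PRELOAD trick looks roughly like the following; the library path is an assumption, not something taken from the libfate documentation.

$ date                                        # nondeterministic: wall-clock time
$ LD_PRELOAD=/usr/local/lib/libfate.so date   # intercepted calls return fixed values
$ LD_PRELOAD=/usr/local/lib/libfate.so make   # the same idea wrapped around a whole build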

Describing why he wrote it, Nicolas writes:

I originally used the OpenSUSE dettrace approach to make Emacs reproducible in Guix. But when Guix switched to GCC@14, dettrace stopped working as expected. dettrace is a complex piece of software; my need was much less heavy: I don't need to systematically patch all sources of nondeterminism, just the ones that make a process/binary unreproducible in a container/chroot.


Independently Reproducible Git Bundles

Simon Josefsson has published another interesting article this month. Titled Independently Reproducible Git Bundles, the blog post describes why you might want a reproducible bundle, and the pitfalls that can arise when trying to create one:

One desirable property is that someone else should be able to reproduce the same git bundle, and not only that a single individual is able to reproduce things on one machine. It surprised me to see that when I ran the same set of commands on a different machine (started from a fresh git clone), I got a different checksum. The different checksums occurred even when nothing had been committed on the server side between the two runs.
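
A simplified sketch of that kind of experiment (not Simon's exact commands) is to run the same bundle creation on two machines and compare the checksums:

$ git clone --mirror https://example.org/project.git
$ cd project.git
$ git bundle create ../project.bundle --all
$ sha256sum ../project.bundle        # repeat on a second machine and compare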


Website updates

Once again, there were a number of improvements made to our website this month including:


Distribution work

In Debian this month:

Debian contributors have made significant progress toward ensuring package builds produce byte-for-byte reproducible results. You can check the status for packages installed on your system using the new package debian-repro-status, or visit reproduce.debian.net for Debian’s overall statistics for trixie and later. You can contribute to these efforts by joining #debian-reproducible on IRC to discuss fixes, or verify the statistics by installing the new rebuilderd package and setting up your own instance.
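
For example, checking your own machine should be roughly a one-liner once the new package is installed; the exact invocation and output may differ, so consult the package documentation.

$ sudo apt install debian-repro-status
$ debian-repro-status      # reports which of your installed packages have been reproduced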


The IzzyOnDroid Android APK repository made further progress in July, crossing the 50% reproducibility threshold — congratulations. Furthermore, a new release of the Neo Store was released, which exposes the reproducible status directly next to the version of each app.


In GNU Guix, a series of patches intended to fix the reproducibility for the Mono programming language was merged, fixing reproducibility in Mono versions 1.9 [], 2.4 [] and 2.6 [].


Lastly, in addition to the news that SUSE Linux Enterprise now has an official goal of reproducibility (https://lists.reproducible-builds.org/pipermail/rb-general/2025-July/003846.html), Bernhard M. Wiedemann posted another monthly update for their work there.


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In June, a number of changes were made by Holger Levsen, including:

  • Switch the URL for the Tails package set. []
  • Make the dsa-check-packages output more useful. []
  • Set up the ppc64el architecture again, as it has returned — this time with a 2.7 GiB database instead of 72 GiB. []

In addition, Jochen Sprickerhof improved the reproducibility statistics generation:

  • Enable caching of statistics. [][][]
  • Add some common non-reproducible patterns. []
  • Change output to directory. []
  • Add a page sorted by diffoscope size. [][]
  • Switch to Python’s argparse module and separate output(). []

Holger also submitted a number of Debian bugs against rebuilderd and rebuilderd-worker:

  • Config files and scripts for a simple one machine setup. [][]
  • Create a rebuilderd user. []
  • Create rebuilderd-worker user with sbuild. []

Lastly, Mattia Rizzolo added a scheduled job to renew some SSL certificates [] and Vagrant Cascadian performed some node maintenance [][].


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

There were a number of other patches from openSUSE developers:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Cryptogram China Accuses Nvidia of Putting Backdoors into Their Chips

The government of China has accused Nvidia of inserting a backdoor into their H20 chips:

China’s cyber regulator on Thursday said it had held a meeting with Nvidia over what it called “serious security issues” with the company’s artificial intelligence chips. It said US AI experts had “revealed that Nvidia’s computing chips have location tracking and can remotely shut down the technology.”

Planet DebianDavid Bremner: Using git-annex for email and notmuch metadata

Introducing git-remote-notmuch

Based on an idea and Ruby implementation by Felipe Contreras, I have been developing a git remote helper for notmuch. I will soon post an updated version of the patchset to the notmuch mailing list (I wanted to refer to this post in my email). In this blog post I'll outline my experiments with using that tool, together with git-annex, to store (and sync) a moderate-sized email store along with its notmuch metadata.

WARNING

The rest of this post describes some relatively complex operations using (at best) alpha level software (namely git-remote-notmuch). git-annex is good at not losing your files, but git-remote-notmuch can (and did several times during debugging) wipe out your notmuch database. If you have a backup (e.g. made with notmuch-dump), this is much less annoying, and in particular you can decide to walk away from this whole experiment and restore your database.
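
Concretely, the kind of backup the warning refers to is cheap to take and to restore; the file name below is just an example.

$ notmuch dump > notmuch-tags-backup.dump      # tags and metadata only, not the mail itself
$ notmuch restore < notmuch-tags-backup.dump   # put the tags back if an experiment goes wrong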

Why git-annex?

I currently have about 31GiB of email, spread across more than 830,000 files. I want to maintain the ability to search and read my email offline, so I need to maintain a copy on several workstations and at least one server (which is backed up explicitly). I am somewhat committed to maintaining synchronization of tags to git since that is how the notmuch bug tracker works. Committing the email files to git seems a bit wasteful: by design notmuch does not modify email files, and even with compression, the extra copy adds a fair amount of overhead (in my case, 17G of git objects, about 57% overhead). It is also notoriously difficult to completely delete files from a git repository. git-annex offers potential mitigation for these two issues, at the cost of a somewhat more complex mental model. The main idea is that instead of committing every version of a file to the git repository, git-annex tracks the filename and metadata, with the file content being stored in a key-value store outside git. Conceptually this is similar to git-lfs. For our current purposes, the important point is that instead of a second (compressed) copy of the file, we store one copy, along with a symlink and a couple of directory entries.
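
To make that last point concrete, here is a minimal sketch (the message path is made up for the example): after git annex add, a sufficiently large file in the working tree is just a symlink into the object store under .git/annex.

$ cd $HOME/Maildir
# annex one (hypothetical) message file
$ git annex add cur/example-message
# the working tree now holds a symlink pointing somewhere under .git/annex/objects/
$ readlink cur/example-message
$ git commit -m 'annex one message'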

What to annex

For sufficiently small files, the overhead of a symlink and a couple of directory entries is greater than the cost of a compressed second copy. Where that crossover happens depends on several variables, and will probably depend on the file content in a particular collection of email. I did a few trials of different settings for annex.largefiles to come to a threshold of largerthan=32k 1. For the curious, my experimental results are below. One potentially surprising aspect is that annexing even a small fraction of the (largest) files yields a big drop in storage overhead.

Threshold   Fraction annexed   Overhead
0           100%               30%
8k          29%                13%
16k         12%                9.4%
32k         7%                 8.9%
48k         6%                 8.9%
100k        3%                 9.1%
256k        2%                 11%
∞ (git)     0%                 57%

In the end I chose to err on the side of annexing more files (for the flexibility of deletion) rather than potentially faster operations with fewer annexed files at the same level of overhead.

Summarizing the configuration settings for git-annex (some of these are actually defaults, but not in my environment).

$ git config annex.largefiles largerthan=32k
$ git config annex.dotfiles true
$ git config annex.synccontent true

Delivering mail

To get new mail, I do something like

# compute a date based folder under $HOME/Maildir
$ dest=$(folder)
# deliver mail to ${dest} (somehow).
$ notmuch new
$ git -C $HOME/Maildir add ${dest}
$ git -C $HOME/Maildir diff-index --quiet HEAD ${dest} || git -C $HOME/Maildir commit -m 'mail delivery'

The call to diff-index is just an optimization for the case when nothing was delivered. The default configuration of git-annex will automagically annex any files larger than my threshold. At this point the git-annex repo knows nothing about tags.

There is some git configuration that can speed up the "git add" above, namely

$ git config core.untrackedCache true
$ git config core.fsmonitor true

See git-status(1) under "UNTRACKED FILES AND PERFORMANCE"

Defining notmuch as a git remote

Assuming git-remote-notmuch is somewhere in your path, you can define a remote to connect to the default notmuch database.

$ git remote add database notmuch::
$ git fetch database
$ git merge --allow-unrelated-histories database

The --allow-unrelated-histories option should be needed only the first time.

In my case, the many small files used to represent the tags (one per message) use a noticeable amount of disk space (about the same amount of space as the xapian database).

Once you start merging from the database to the git repo, you will likely have some conflicts, and most conflict resolution tools leave junk lying around. I added the following .gitignore file to the top level of the repo

*.orig
*~

This prevents our cavalier use of git add from adding these files to our git history (and prevents pushing random junk to the notmuch database).

To push the tags from git to notmuch, you can run

$ git push database master

You might need to run notmuch new first, so that the database knows about all of the messages (currently git-remote-notmuch can't index files, only update metadata).

git annex sync should work with the new remote, but pushing back will be very slow 2. I disable automatic pushing as follows

$ git config remote.database.annex-push false

Unsticking the database remote

If you are debugging git-remote-notmuch, or just unlucky, you may end up in a situation where git thinks the database is ahead of your git remote. You can delete the database remote (and associated stuff) and re-create it. Although I cannot promise this will never cause problems (because, computers), it will not modify your local copy of the tags in the git repo, nor modify your notmuch database.

$ git remote rm database
$ git update-ref -d notmuch/master
$ rm -r .git/notmuch

Fine tuning notmuch config

  • In order to avoid dealing with file renames, I have

      notmuch config maildir.synchronize_flags false
    
  • I have added the following to new.ignore:

       .git;_notmuch_metadata;.gitignore
    

  1. I also had to set annex.dotfiles to true, as many of my maildirs follow the qmail style convention of starting with a .
  2. I'm not totally clear on why it is so slow, but certainly git-annex tries to push several more branches, and these are ignored by git-remote-notmuch.

Krebs on SecurityWho Got Arrested in the Raid on the XSS Crime Forum?

On July 22, 2025, the European police agency Europol said a long-running investigation led by the French Police resulted in the arrest of a 38-year-old administrator of XSS, a Russian-language cybercrime forum with more than 50,000 members. The action has triggered an ongoing frenzy of speculation and panic among XSS denizens about the identity of the unnamed suspect, but the consensus is that he is a pivotal figure in the crime forum scene who goes by the hacker handle “Toha.” Here’s a deep dive on what’s knowable about Toha, and a short stab at who got nabbed.

An unnamed 38-year-old man was arrested in Kiev last month on suspicion of administering the cybercrime forum XSS. Image: ssu.gov.ua.

Europol did not name the accused, but published partially obscured photos of him from the raid on his residence in Kiev. The police agency said the suspect acted as a trusted third party — arbitrating disputes between criminals — and guaranteeing the security of transactions on XSS. A statement from Ukraine’s SBU security service said XSS counted among its members many cybercriminals from various ransomware groups, including REvil, LockBit, Conti, and Qilin.

Since the Europol announcement, the XSS forum resurfaced at a new address on the deep web (reachable only via the anonymity network Tor). But from reviewing the recent posts, there appears to be little consensus among longtime members about the identity of the now-detained XSS administrator.

The most frequent comment regarding the arrest was a message of solidarity and support for Toha, the handle chosen by the longtime administrator of XSS and several other major Russian forums. Toha’s accounts on other forums have been silent since the raid.

Europol said the suspect has enjoyed a nearly 20-year career in cybercrime, which roughly lines up with Toha’s history. In 2005, Toha was a founding member of the Russian-speaking forum Hack-All. That is, until it got massively hacked a few months after its debut. In 2006, Toha rebranded the forum to exploit[.]in, which would go on to draw tens of thousands of members, including an eventual Who’s-Who of wanted cybercriminals.

Toha announced in 2018 that he was selling the Exploit forum, prompting rampant speculation on the forums that the buyer was secretly a Russian or Ukrainian government entity or front person. However, those suspicions were unsupported by evidence, and Toha vehemently denied the forum had been given over to authorities.

One of the oldest Russian-language cybercrime forums was DaMaGeLaB, which operated from 2004 to 2017, when its administrator “Ar3s” was arrested. In 2018, a partial backup of the DaMaGeLaB forum was reincarnated as xss[.]is, with Toha as its stated administrator.

CROSS-SITE GRIFTING

Clues about Toha’s early presence on the Internet — from ~2004 to 2010 — are available in the archives of Intel 471, a cyber intelligence firm that tracks forum activity. Intel 471 shows Toha used the same email address across multiple forum accounts, including at Exploit, Antichat, Carder[.]su and inattack[.]ru.

DomainTools.com finds Toha’s email address — toschka2003@yandex.ru — was used to register at least a dozen domain names — most of them from the mid- to late 2000s. Apart from exploit[.]in and a domain called ixyq[.]com, the other domains registered to that email address end in .ua, the top-level domain for Ukraine (e.g. deleted.org[.]ua, lj.com[.]ua, and blogspot.org[.]ua).

A 2008 snapshot of a domain registered to toschka2003@yandex.ru and to Anton Medvedovsky in Kiev. Note the message at the bottom left, “Protected by Exploit,in.” Image: archive.org.

Nearly all of the domains registered to toschka2003@yandex.ru contain the name Anton Medvedovskiy in the registration records, except for the aforementioned ixyq[.]com, which is registered to the name Yuriy Avdeev in Moscow.

This Avdeev surname came up in a lengthy conversation with Lockbitsupp, the leader of the rapacious and destructive ransomware affiliate group Lockbit. The conversation took place in February 2024, when Lockbitsupp asked for help identifying Toha’s real-life identity.

In early 2024, the leader of the Lockbit ransomware group — Lockbitsupp — asked for help investigating the identity of the XSS administrator Toha, which he claimed was a Russian man named Anton Avdeev.

Lockbitsupp didn’t share why he wanted Toha’s details, but he maintained that Toha’s real name was Anton Avdeev. I declined to help Lockbitsupp in whatever revenge he was planning on Toha, but his question made me curious to look deeper.

It appears Lockbitsupp’s query was based on a now-deleted Twitter post from 2022, when a user by the name “3xp0rt” asserted that Toha was a Russian man named Anton Viktorovich Avdeev, born October 27, 1983.

Searching the web for Toha’s email address toschka2003@yandex.ru reveals a 2010 sales thread on the forum bmwclub.ru where a user named Honeypo was selling a 2007 BMW X5. The ad listed the contact person as Anton Avdeev and gave the contact phone number 9588693.

A search on the phone number 9588693 in the breach tracking service Constella Intelligence finds plenty of official Russian government records with this number, date of birth and the name Anton Viktorovich Avdeev. For example, hacked Russian government records show this person has a Russian tax ID and SIN (Social Security number), and that they were flagged for traffic violations on several occasions by Moscow police; in 2004, 2006, 2009, and 2014.

Astute readers may have noticed by now that the ages of Mr. Avdeev (41) and the XSS admin arrested this month (38) are a bit off. This would seem to suggest that the person arrested is someone other than Mr. Avdeev, who did not respond to requests for comment.

A FLY ON THE WALL

For further insight on this question, KrebsOnSecurity sought comments from Sergeii Vovnenko, a former cybercriminal from Ukraine who now works at the security startup paranoidlab.com. I reached out to Vovnenko because for several years beginning around 2010 he was the owner and operator of thesecure[.]biz, an encrypted “Jabber” instant messaging server that Europol said was operated by the suspect arrested in Kiev. Thesecure[.]biz grew quite popular among many of the top Russian-speaking cybercriminals because it scrupulously kept few records of its users’ activity, and its administrator was always a trusted member of the community.

The reason I know this historic tidbit is that in 2013, Vovnenko — using the hacker nicknames “Fly,” and “Flycracker” — hatched a plan to have a gram of heroin purchased off of the Silk Road darknet market and shipped to our home in Northern Virginia. The scheme was to spoof a call from one of our neighbors to the local police, saying this guy Krebs down the street was a druggie who was having narcotics delivered to his home.

I happened to be lurking on Flycracker’s private cybercrime forum when his heroin-framing plan was carried out, and called the police myself before the smack eventually arrived in the U.S. Mail. Vovnenko was later arrested for unrelated cybercrime activities, extradited to the United States, convicted, and deported after a 16-month stay in the U.S. prison system [on several occasions, he has expressed heartfelt apologies for the incident, and we have since buried the hatchet].

Vovnenko said he purchased a device for cloning credit cards from Toha in 2009, and that Toha shipped the item from Russia. Vovnenko explained that he (Flycracker) was the owner and operator of thesecure[.]biz from 2010 until his arrest in 2014.

Vovnenko believes thesecure[.]biz was stolen while he was in jail, either by Toha and/or an XSS administrator who went by the nicknames N0klos and Sonic.

“When I was in jail, [the] admin of xss.is stole that domain, or probably N0klos bought XSS from Toha or vice versa,” Vovnenko said of the Jabber domain. “Nobody from [the forums] spoke with me after my jailtime, so I can only guess what really happened.”

N0klos was the owner and administrator of an early Russian-language cybercrime forum known as Darklife[.]ws. However, N0klos also appears to be a lifelong Russian resident, and in any case seems to have vanished from Russian cybercrime forums several years ago.

Asked whether he believes Toha was the XSS administrator who was arrested this month in Ukraine, Vovnenko maintained that Toha is Russian, and that “the French cops took the wrong guy.”

WHO IS TOHA?

So who did the Ukrainian police arrest in response to the investigation by the French authorities? It seems plausible that the BMW ad invoking Toha’s email address and the name and phone number of a Russian citizen was simply misdirection on Toha’s part — intended to confuse and throw off investigators. Perhaps this even explains the Avdeev surname surfacing in the registration records from one of Toha’s domains.

But sometimes the simplest answer is the correct one. “Toha” is a common Slavic nickname for someone with the first name “Anton,” and that matches the name in the registration records for more than a dozen domains tied to Toha’s toschka2003@yandex.ru email address: Anton Medvedovskiy.

Constella Intelligence finds there is an Anton Gannadievich Medvedovskiy living in Kiev who will be 38 years old in December. This individual owns the email address itsmail@i.ua, as well as an Airbnb account featuring a profile photo of a man with roughly the same hairline as the suspect in the blurred photos released by the Ukrainian police. Mr. Medvedovskiy did not respond to a request for comment.

My take on the takedown is that the Ukrainian authorities likely arrested Medvedovskiy. Toha shared on DaMaGeLab in 2005 that he had recently finished the 11th grade and was studying at a university — a time when Medvedovskiy would have been around 18 years old. On Dec. 11, 2006, fellow Exploit members wished Toha a happy birthday. Records exposed in a 2022 hack at the Ukrainian public services portal diia.gov.ua show that Mr. Medvedovskiy’s birthday is Dec. 11, 1987.

The law enforcement action and resulting confusion about the identity of the detained has thrown the Russian cybercrime forum scene into disarray in recent weeks, with lengthy and heated arguments about XSS’s future spooling out across the forums.

XSS relaunched on a new Tor address shortly after the authorities plastered their seizure notice on the forum’s  homepage, but all of the trusted moderators from the old forum were dismissed without explanation. Existing members saw their forum account balances drop to zero, and were asked to plunk down a deposit to register at the new forum. The new XSS “admin” said they were in contact with the previous owners and that the changes were to help rebuild security and trust within the community.

However, the new admin’s assurances appear to have done little to assuage the worst fears of the forum’s erstwhile members, most of whom seem to be keeping their distance from the relaunched site for now.

Indeed, if there is one common understanding amid all of these discussions about the seizure of XSS, it is that Ukrainian and French authorities now have several years worth of private messages between XSS forum users, as well as contact rosters and other user data linked to the seized Jabber server.

“The myth of the ‘trusted person’ is shattered,” the user “GordonBellford” cautioned on Aug. 3 in an Exploit forum thread about the XSS admin arrest. “The forum is run by strangers. They got everything. Two years of Jabber server logs. Full backup and forum database.”

GordonBellford continued:

And the scariest thing is: this data array is not just an archive. It is material for analysis that has ALREADY BEEN DONE . With the help of modern tools, they see everything:

Graphs of your contacts and activity.
Relationships between nicknames, emails, password hashes and Jabber ID.
Timestamps, IP addresses and digital fingerprints.
Your unique writing style, phraseology, punctuation, consistency of grammatical errors, and even typical typos that will link your accounts on different platforms.

They are not looking for a needle in a haystack. They simply sifted the haystack through the AI sieve and got ready-made dossiers.

Planet DebianColin Watson: Free software activity in July 2025

About 90% of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay or GitHub Sponsors.

DebConf

I attended DebConf for the first time in 11 years (my last one was DebConf 14 in Portland). It was great! For once I had a conference where I had a fairly light load of things I absolutely had to do, so I was able to spend time catching up with old friends, making some new friends, and doing some volunteering - a bit of Front Desk, and quite a lot of video team work where I got to play with sound desks and such. Apparently one of the BoFs (“birds of a feather”, i.e. relatively open discussion sessions) where I was talkmeister managed to break the automatic video cutting system by starting and ending precisely on time, to the second, which I’m told has never happened before. I’ll take that.

I gave a talk about Debusine, along with helping Enrico run a Debusine BoF. We still need to process some of the feedback from this, but are generally pretty thrilled about the reception. My personal highlight was getting a shout-out in a talk from CERN (in the slide starting at 32:55).

Other highlights for me included a Python team BoF, Ian’s tag2upload talk and some very useful follow-up discussions, a session on archive-wide testing, a somewhat brain-melting whiteboard session about the “multiarch interpreter problem”, several useful discussions about salsa.debian.org, Matthew’s talk on how Wikimedia automates their Debian package builds, and many others. I hope I can start attending regularly again!

OpenSSH

Towards the end of a release cycle, people tend to do more upgrade testing, and this sometimes results in interesting problems. Manfred Stock reported “No new SSH connections possible during large part of upgrade to Debian Trixie”, and after a little testing in a container I confirmed that this was a reproducible problem that would have affected many people upgrading from Debian 12 (bookworm), with potentially severe consequences for people upgrading remote systems. In fact, there were two independent problems that each led to much the same symptom:

  • OpenSSH 9.8 split the monolithic sshd listener process into two pieces: a minimal network listener (still called sshd), and an sshd-session process dealing with each individual session. (OpenSSH 10.0 further split sshd-session, adding an sshd-auth process that deals with the user authentication phase of the protocol.) This hardens the OpenSSH server by using different address spaces for privileged and unprivileged code.

    Before this change, when sshd received an incoming connection, it forked and re-executed itself with some special parameters to deal with it. After this change, it forks and executes sshd-session instead, and sshd no longer accepts the parameters it used to accept for this.

    Debian package upgrades happen in two phases: first we unpack the new files onto disk, and then we run some package-specific configuration steps which usually include things like restarting services. (I’m simplifying, but this is good enough for this post.) Normally this is fine, and in fact desirable: the old service keeps on working, and this approach often allows breaking what would otherwise be difficult cycles by ensuring that the system is in a more coherent state before trying to restart services. However, in this case, unpacking the new files onto disk immediately means that new SSH connections no longer work: the old sshd receives the connection and tries to hand it off to a freshly-executed copy of the new sshd binary on disk, which no longer supports this.

    If you’re just upgrading OpenSSH on its own or with a small number of other packages, this isn’t much of a problem as the listener will be restarted quite soon; but if you’re upgrading from bookworm to trixie, there may be a long gap when you can’t SSH to the system any more, and if something fails in the middle of the upgrade then you could be in trouble.

    So, what to do? I considered keeping a copy of the old sshd around temporarily and patching the new sshd to re-execute it if it’s being run to handle an incoming connection, but that turned out to fail in my first test: dependencies are normally only checked when configuring a package, so it’s possible to unpack openssh-server before unpacking a newer libc6 that it depends on, at which point you can’t execute the new sshd at all. (That also means that the approach of restarting the service at unpack time instead of configure time is a non-starter.) We needed a different idea.

    dpkg, the core Debian package manager, has a specialized facility called “diversions”: you can tell it that when it’s unpacking a particular file it should put it somewhere else instead. This is normally used by administrators when they want to install a locally-modified version of a particular file at their own risk, or by packages that knowingly override a file normally provided by some other package. However, in this case it turns out to be useful for openssh-server to temporarily divert one of its own files! When upgrading from before 9.8, it now diverts /usr/sbin/sshd to /usr/sbin/sshd.session-split before the new version is unpacked, then removes the diversion and moves the new file into place once it’s ready to restart the service; this reduces the period when incoming connections fail to a minimum. (We actually have to pretend that the diversion is being performed on behalf of a slightly different package since we’re using dpkg-divert in a strange way here, but it all works.) A simplified sketch of what this looks like with dpkg-divert appears just after this list.

  • Most OpenSSH processes, including sshd, check for a compatible version of the OpenSSL library when they start up. This check used to be very picky, among other things requiring both the major and minor number to match. OpenSSL 3 has a better versioning policy, and so OpenSSH 9.4p1 relaxed this check.

    Unfortunately, bookworm shipped with OpenSSH 9.2p1, which means that as soon as you unpack the new libssl3 during an upgrade (actually libssl3t64 due to the 64-bit time_t transition), sshd stops working. This couldn’t be fixed by a change in trixie; we needed to change bookworm in advance of the upgrade so that it would tolerate newer versions of OpenSSL. And time was tight if we wanted to maximize the chance that people would apply that stable update before upgrading to trixie; there isn’t going to be another point release of Debian 12 before the release of Debian 13.

    Fortunately, there’s a stable-updates mechanism for exactly this sort of thing, and the stable release managers kindly accepted my proposal to fix this there.
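
For the curious, the shape of that temporary self-diversion is roughly as follows. This is a simplified sketch rather than the actual maintainer-script code, and the stand-in package name openssh-server-divert is an assumption:

# preinst, when upgrading from a pre-9.8 version: register a diversion so that the
# new sshd is unpacked under a temporary name while the old binary stays in place
dpkg-divert --package openssh-server-divert --no-rename \
    --divert /usr/sbin/sshd.session-split --add /usr/sbin/sshd

# postinst, just before restarting the service: drop the diversion and move the
# new binary into its final location
dpkg-divert --package openssh-server-divert --no-rename --remove /usr/sbin/sshd
mv /usr/sbin/sshd.session-split /usr/sbin/sshd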

The net result is that if you apply updates to bookworm (including stable-updates / bookworm-updates, which is enabled by default) before starting the upgrade to trixie, everything should be fine. Many thanks to Manfred for reporting this with just enough time to spare that we were able to fix it before Debian 13 is released in a few days!

debmirror

I did my twice-yearly refresh of debmirror’s mirror_size documentation, and applied a patch from Christoph Goehre to improve mirroring of installer files.

madison-lite

I proposed renaming this project along with the rmadison tool in devscripts, although I’m not yet sure what a good replacement name would be.

Python team

I upgraded python-expandvars, python-typing-extensions (in experimental), and webtest to new upstream versions.

I backported fixes for some security vulnerabilities to unstable:

I fixed or helped to fix a number of release-critical bugs:

I fixed some other bugs, mostly Severity: important:

I reinstated python3-mastodon’s build-dependency on and recommendation of python3-blurhash, now that the latter has been fixed to use the correct upstream source.

Worse Than FailureCodeSOD: A Dropped Down DataSet

While I frequently have complaints about over-reliance on Object Relational Mapping tools, they do offer key benefits. For example, mapping each relation in the database to a type in your programming language at least guarantees a bit of type safety in your code. Or, you could be like Nick L's predecessor, and write VB code like this.

For i As Integer = 0 To SQLDataset.Tables(0).Rows.Count - 1
     Try 'Handles DBNull
         Select Case SQLDataset.Tables(0).Rows(i).Item(0)
             Case "Bently" 'Probes
                 Probes_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
             Case "Keyphasor"
                 Keyphasor_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
             Case "Transmitter"
                 Transmitter_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
             Case "Tachometer"
                 Tachometer_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim.ToUpper.ToString.Trim)
             Case "Dial Therm"
                 DialThermometer_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
             Case "DPS"
                 DPS_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
             Case "Pump Bracket"
                 PumpBracket_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
             Case "Accelerometer"
                 Accelerometer_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
             Case "Velometer"
                 Velometer_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
         End Select
     Catch
         'MessageBox.Show(text:="Error during SetModelNums().", _
         '                caption:="Error", _
         '                buttons:=MessageBoxButtons.OK, _
         '                icon:=MessageBoxIcon.Error)
     End Try
Next

So, for starters, they're using the ADO .Net DataSet object. This is specifically meant to be a disconnected, in-memory model of the database. The idea is that you might run a set of queries, store the results in a DataSet, and interact with the data entirely in memory after that point. The resulting DataSet will model all the tables and constraints you've pulled in (or allow you to define your own in memory).

One of the things that the DataSet tracks is the names of tables. So, the fact that they go and access .Tables(0) is a nuisance- they could have used the name of the table. And while that might have been awfully verbose, there's nothing stopping them from doing DataTable products = SQLDataSet.Tables("Products").

None of this is what caught Nick's attention, though. You see, the DataTable in the DataSet will do its best to map database fields to .NET types. So it's the chain of calls at the end of most every field that caught Nick's eye:

SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim

ToUpper works because the field in the database is a string field. Also, it returns a string, so there's no need to ToString it before trimming. Of course, it's the Tachometer entry that brings this to its natural absurdity:

Tachometer_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim.ToUpper.ToString.Trim)

All of this is wrapped up in an exception handler, not because of the risk of an error connecting to the database (the DataSet is disconnected after all), but because of the risk of null values, as the comment helpfully states.

We can see that once, this exception handler displayed a message box, but that has since been commented out, presumably because there are a lot of nulls and the number of message boxes the users had to click through were cumbersome. Now, the exception handler doesn't actually check what kind of exception we get, and just assumes the only thing that could happen was a null value. But that's not true- someone changed one of the tables to add a column to the front, which meant Item(1) was no longer grabbing the field the code expects, breaking the population of the Pump Bracket combo box. There was no indication that this had happened beyond users asking, "Why are there no pump brackets anymore?"


365 TomorrowsThe Collector

Author: Mark Renney Thomas collects the needles. It is an unpopular job but is open to all. No qualifications are required or prior experience, not even a recommendation. One has simply to turn up and register at an Agency office, take to the streets and, using the bags provided, start Collecting. The needles are everywhere, […]

The post The Collector appeared first on 365tomorrows.

Planet DebianMatthew Palmer: I'm trying an open source funding experiment

As I’m currently somewhat underemployed, and could do with some extra income, I’m starting an open source crowd-funding experiment. My hypothesis is that the open source community, and perhaps a community-minded company or two, really wants more open source code in the world, and is willing to put a few dollars my way to make that happen.

To begin with, I’m asking for contributions to implement a bunch of feature requests on action-validator, a Rust CLI tool I wrote to validate the syntax of GitHub actions and workflows. The premise is quite simple: for every AU$150 (about US$100) I receive in donations, I’ll implement one of the nominated feature requests. If people want a particular feature implemented, they can nominate a feature in their donation message, otherwise when “general” donations get to AU$150, I’ll just pick a feature that looks interesting. More details are on my code fund page.

In the same spirit of simplicity, donations can be made through my Ko-fi page, and I’ll keep track of the various totals in a hand-written HTML table.

So, in short, if you want more open source code to exist, now would be a good time to visit my Ko-fi page and chip in a few dollars. If you’re curious to know more, my code fund page has a list of Foreseeably Anticipated Questions that might address your curiosity. Otherwise, ask your questions in the comments or email me.

,

Planet DebianRavi Dwivedi: Tricked by a website while applying for Vietnam visa

In December 2024, Badri and I went to Vietnam. In this post, I’ll document our experiences with the visa process of Vietnam. Vietnam requires an e-visa to enter the country. The official online portal for the e-visa application is evisa.xuatnhapcanh.gov.vn/. However, I submitted my visa application on the website vietnamvisa.govt.vn. It was only after submitting my application and making the payment that I realized that it’s not the official e-visa website. The realization came from the tagline mentioned in the top left corner of the website - the best way to obtain a Vietnam visa.

I was a bit upset that I got tricked by that website. I should have checked the top level domains of Vietnam’s government websites. Anyways, it is pretty easy to confuse govt.vn with gov.vn. I also paid double the amount of the official visa fee. However, I wasn’t asked to provide a flight reservation or hotel bookings - documents which are usually required for most visas. But they did ask me for a photo. I was not even sure whether the website was legit or not.

Badri learnt from my experience and applied through the official Vietnam government website. During the process, he had to provide a hotel booking as well as enter the hotel address into the submission form. Additionally, the official website asked to provide the exact points of entry to and exit from the country, which the non-official website did not ask for. On the other hand, he had to pay only 25 USD versus my 54 USD.

It turned out that the website I registered on was also legit, as they informed me a week later that my visa had been approved, along with a copy of my visa. Further, I was not barred from entering, nor was I found to be holding a fake visa. It appears that the main “scam” is not about the visa being fake, but rather that you will be charged more than if you apply through the official website.

I would still recommend that you (the readers) submit your visa application only through the official website and not through any of the other such websites.

Our visa was valid for a month (my visa was valid from the 4th of December 2024 to the 4th of January 2025). We also had a nice time in Vietnam. Stay tuned for my Vietnam travel posts!

Credits to Badri for proofreading and writing his part of the experience.

Planet DebianThomas Lange: FAIme service new features: Linux Mint support and data storage for USB

Build your own customized Linux Mint ISO

Using the FAIme service [1] you can now build your own customized installation ISO for the Xfce edition of Linux Mint 22.1 'Xia'.

You can select the language, add a list of additional packages, and set the username and passwords. In the advanced settings you may add your ssh public key, some grub options, and a postinst script to be executed.

Add writable data partition for USB sticks

For all variants of ISOs (all live and all install ISOs) you can add a data partition to the ISO by just clicking a checkbox. This writable partition can be used when booting from a USB stick. When FAI detects this partition, it will use it to search for a config space and to store the logs.

The logs will be stored in the subdirectory logs on this partition. To use a different config space than the one on the ISO (which is read-only), create a subdirectory config and copy a FAI config space into that directory. Then set FAI_CONFIG_SRC=detect:// (which is the default) and FAI will search for a config space on the data partition and use it. More info about this [2]
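
As a rough sketch of preparing such a stick from a running Linux system (the device name, mount point and config source below are assumptions; adjust them to your setup):

# mount the writable data partition of the USB stick
mount /dev/sdX3 /mnt
# provide your own config space; FAI_CONFIG_SRC=detect:// (the default) will find it
mkdir -p /mnt/config
cp -a /srv/fai/config/. /mnt/config/
# installation logs will later appear in the logs subdirectory of this partition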

You can also store some local packages in your config space, which will be installed automatically, without the need to recreate the ISO.

Worse Than FailureCodeSOD: An Annual Report

Michael has the "fun" task of converting old, mainframe-driven reports into something more modern. This means reading through reams of Intelligent Query code.

Like most of these projects, no one has a precise functional definition of what it's supposed to do. The goal is to replace the system with one that behaves exactly the same, but is more "modern". This means their test cases are "run the two systems in parallel and compare the outputs; if they match, the upgrade is good."

After converting one report, the results did not match. Michael dug in, tracing through the code. The name of the report contained the word "Annual". One of the key variables which drove the original report was named TODAYS-365 (yes, you can put dashes in variables in IQ). Michael verified that the upgraded report was pulling exactly one year's worth of data. Tracing through the original report, Michael found this:

#
DIVIDE ISBLCS BY ISB-COST-UOM GIVING ISB-COST-EACH.
MULTIPLY ISB-STK-QOH TIMES ISB-COST-EACH GIVING ISB-ON-HAND-COST.
#
SUBTRACT TODAYS-DATE MINUS 426 GIVING TODAYS-365.
#
SEARCH FOR ITMMAN =  'USA'
       AND ITMNMB <> '112-*'

This snippet comes from a report which contains many hundreds of lines of code. So it's very easy to understand how someone could miss the important part of the code. Specifically, it's this line: SUBTRACT TODAYS-DATE MINUS 426 GIVING TODAYS-365..

Subtract 426 from today's date, and store the result in a variable called TODAYS-365. This report isn't for the past year, but for the past year and about two months.

It's impossible to know exactly why, but at a guess, originally the report needed to grab a year. Then, at some point, the requirement changed, probably based on some nonsense around fiscal years or something similar. The least invasive way to make that change was to just change the calculation, leaving the variable name (and the report name) incorrect and misleading. And there it sat, working perfectly fine, until poor Michael came along, trying to understand the code.

The fix was easy, but the repeated pattern of oddly named, unclear variables was not. Remember, the hard part about working on old mainframes isn't learning COBOL or IQ or JCL or whatever antique languages they use; I'd argue those languages are in many cases easier to learn (if harder to use) than modern languages. The hard part is the generations of legacy cruft that's accumulated in them. It's grandma's attic, and granny was a pack rat.


365 TomorrowsThe Club

Author: Majoki The chair creaked noisily when Sandoval sat at the table with five glasses set out. Even though he’d lost a few pounds since they last met, the old wood complained. Soon the others joined him: Avrilla, Hurst, Marpreesh, Suh. Five left. Only five. No others living humans in the history of civilization were […]

The post The Club appeared first on 365tomorrows.

Planet DebianMatthew Garrett: Cordoomceps - replacing an Amiga's brain with Doom

There's a lovely device called a pistorm, an adapter board that glues a Raspberry Pi GPIO bus to a Motorola 68000 bus. The intended use case is that you plug it into a 68000 device and then run an emulator that reads instructions from hardware (ROM or RAM) and emulates them. You're still limited by the ~7MHz bus that the hardware is running at, but you can run the instructions as fast as you want.

These days you're supposed to run a custom built OS on the Pi that just does 68000 emulation, but initially it ran Linux on the Pi and a userland 68000 emulator process. And, well, that got me thinking. The emulator takes 68000 instructions, emulates them, and then talks to the hardware to implement the effects of those instructions. What if we, well, just don't? What if we just run all of our code in Linux on an ARM core and then talk to the Amiga hardware?

We're going to ignore x86 here, because it's weird - but most hardware that wants software to be able to communicate with it maps itself into the same address space that RAM is in. You can write to a byte of RAM, or you can write to a piece of hardware that's effectively pretending to be RAM[1]. The Amiga wasn't unusual in this respect in the 80s, and to talk to the graphics hardware you speak to a special address range that gets sent to that hardware instead of to RAM. The CPU knows nothing about this. It just indicates it wants to write to an address, and then sends the data.

So, if we are the CPU, we can just indicate that we want to write to an address, and provide the data. And those addresses can correspond to the hardware. So, we can write to the RAM that belongs to the Amiga, and we can write to the hardware that isn't RAM but pretends to be. And that means we can run whatever we want on the Pi and then access Amiga hardware.

And, obviously, the thing we want to run is Doom, because that's what everyone runs in fucked up hardware situations.

Doom was Amiga kryptonite. Its entire graphical model was based on memory directly representing the contents of your display, and being able to modify that by just moving pixels around. This worked because at the time VGA displays supported having a memory layout where each pixel on your screen was represented by a byte in memory containing an 8 bit value that corresponded to a lookup table containing the RGB value for that pixel.

The Amiga was, well, not good at this. Back in the 80s, when the Amiga hardware was developed, memory was expensive. Dedicating that much RAM to the video hardware was unthinkable - the Amiga 1000 initially shipped with only 256K of RAM, and you could fill all of that with a sufficiently colourful picture. So instead of having the idea of each pixel being associated with a specific area of memory, the Amiga used bitmaps. A bitmap is an area of memory that represents the screen, but only represents one bit of the colour depth. If you have a black and white display, you only need one bitmap. If you want to display four colours, you need two. More colours, more bitmaps. And each bitmap is stored in an independent area of RAM. You never use more memory than you need to display the number of colours you want to.

But that means that each bitplane contains packed information - every byte of data in a bitplane contains the bit value for 8 different pixels, because each bitplane contains one bit of information per pixel. To update one pixel on screen, you need to read from every bitmap, update one bit, and write it back, and that's a lot of additional memory accesses. Doom, but on the Amiga, was slow not just because the CPU was slow, but because there was a lot of manipulation of data to turn it into the format the Amiga wanted and then push that over a fairly slow memory bus to have it displayed.

The CDTV was an aesthetically pleasing piece of hardware that absolutely sucked. It was an Amiga 500 in a hi-fi box with a caddy-loading CD drive, and it ran software that was just awful. There's no path to remediation here. No compelling apps were ever released. It's a terrible device. I love it. I bought one in 1996 because a local computer store had one and I pointed out that the company selling it had gone bankrupt some years earlier and literally nobody in my farming town was ever going to have any interest in buying a CD player that made a whirring noise when you turned it on because it had a fan and eventually they just sold it to me for not much money, and ever since then I wanted to have a CD player that ran Linux and well spoiler 30 years later I'm nearly there. That CDTV is going to be our test subject. We're going to try to get Doom running on it without executing any 68000 instructions.

We're facing two main problems here. The first is that all Amigas have a firmware ROM called Kickstart that runs at powerup. No matter how little you care about using any OS functionality, you can't start running your code until Kickstart has run. This means even documentation describing bare metal Amiga programming assumes that the hardware is already in the state that Kickstart left it in. This will become important later. The second is that we're going to need to actually write the code to use the Amiga hardware.

First, let's talk about Amiga graphics. We've already covered bitmaps, but for anyone used to modern hardware that's not the weirdest thing about what we're dealing with here. The CDTV's chipset supports a maximum of 64 colours in a mode called "Extra Half-Brite", or EHB, where you have 32 colours arbitrarily chosen from a palette and then 32 more colours that are identical but with half the intensity. For 64 colours we need 6 bitplanes, each of which can be located arbitrarily in the region of RAM accessible to the chipset ("chip RAM", distinguished from "fast ram" that's only accessible to the CPU). We tell the chipset where our bitplanes are and it displays them. Or, well, it does for a frame - after that the registers that pointed at our bitplanes no longer do, because when the hardware was DMAing through the bitplanes to display them it was incrementing those registers to point at the next address to DMA from. Which means that every frame we need to set those registers back.

Making sure you have code that's called every frame just to make your graphics work sounds intensely irritating, so Commodore gave us a way to avoid doing that. The chipset includes a coprocessor called "copper". Copper doesn't have a large set of features - in fact, it only has three. The first is that it can program chipset registers. The second is that it can wait for a specific point in screen scanout. The third (which we don't care about here) is that it can optionally skip an instruction if a certain point in screen scanout has already been reached. We can write a program (a "copper list") for the copper that tells it to program the chipset registers with the locations of our bitplanes and then wait until the end of the frame, at which point it will repeat the process. Now our bitplane pointers are always valid at the start of a frame.

Ok! We know how to display stuff. Now we just need to deal with not having 256 colours, and the whole "Doom expects pixels" thing. For the first of these, I stole code from ADoom, the only Amiga doom port I could easily find source for. This looks at the 256 colour palette loaded by Doom and calculates the closest approximation it can within the constraints of EHB. ADoom also includes a bunch of CPU-specific assembly optimisation for converting the "chunky" Doom graphic buffer into the "planar" Amiga bitplanes, none of which I used because (a) it's all for 68000 series CPUs and we're running on ARM, and (b) I have a quad core CPU running at 1.4GHz and I'm going to be pushing all the graphics over a 7.14MHz bus, the graphics mode conversion is not going to be the bottleneck here. Instead I just wrote a series of nested for loops that iterate through each pixel and update each bitplane and called it a day. The set of bitplanes I'm operating on here is allocated on the Linux side so I can read and write to them without being restricted by the speed of the Amiga bus (remember, each byte in each bitplane is going to be updated 8 times per frame, because it holds bits associated with 8 pixels), and then copied over to the Amiga's RAM once the frame is complete.

And, kind of astonishingly, this works! Once I'd figured out where I was going wrong with RGB ordering and which order the bitplanes go in, I had a recognisable copy of Doom running. Unfortunately there were weird graphical glitches - sometimes blocks would be entirely the wrong colour. It took me a while to figure out what was going on and then I felt stupid. Recording the screen and watching in slow motion revealed that the glitches often showed parts of two frames displaying at once. The Amiga hardware is taking responsibility for scanning out the frames, and the code on the Linux side isn't synchronised with it at all. That means I could update the bitplanes while the Amiga was scanning them out, resulting in a mashup of planes from two different Doom frames being used as one Amiga frame. One approach to avoid this would be to tie the Doom event loop to the Amiga, blocking my writes until the end of scanout. The other is to use double-buffering - have two sets of bitplanes, one being displayed and the other being written to. This consumes more RAM but since I'm not using the Amiga RAM for anything else that's not a problem. With this approach I have two copper lists, one for each set of bitplanes, and switch between them on each frame. This improved things a lot but not entirely, and there's still glitches when the palette is being updated (because there's only one set of colour registers), something Doom does rather a lot, so I'm going to need to implement proper synchronisation.

Except. This was only working if I ran a 68K emulator first in order to run Kickstart. If I tried accessing the hardware without doing that, things were in a weird state. I could update the colour registers, but accessing RAM didn't work - I could read stuff out, but anything I wrote vanished. Some more digging cleared that up. When you turn on a CPU it needs to start executing code from somewhere. On modern x86 systems it starts from a hardcoded address of 0xFFFFFFF0, which was traditionally a long way from any RAM. The 68000 family instead reads its start address from address 0x00000004, which overlaps with where the Amiga chip RAM is. We can't write anything to RAM until we're executing code, and we can't execute code until we tell the CPU where the code is, which seems like a problem. This is solved on the Amiga by powering up in a state where the Kickstart ROM is "overlayed" onto address 0. The CPU reads the start address from the ROM, which causes it to jump into the ROM and start executing code there. Early on, the code tells the hardware to stop overlaying the ROM onto the low addresses, and now the RAM is available. This is poorly documented because it's not something you need to care about if you execute Kickstart, which every actual Amiga does, and I'm only in this position because I've made poor life choices, but ok that explained things. To turn off the overlay you write to a register in one of the Complex Interface Adaptor (CIA) chips, and things start working like you'd expect.

Except, they don't. Writing to that register did nothing for me. I assumed that there was some other register I needed to write to first, and went to the extent of tracing every register access that occurred when running the emulator and replaying those in my code. Nope, still broken. What I finally discovered is that you need to pulse the reset line on the board before some of the hardware starts working - powering it up doesn't put you in a well defined state, but resetting it does.

So, I now have a slightly graphically glitchy copy of Doom running without any sound, displaying on an Amiga whose brain has been replaced with a parasitic Linux. Further updates will likely make things even worse. Code is, of course, available.

[1] This is why we had trouble with late era 32 bit systems and 4GB of RAM - a bunch of your hardware wanted to be in the same address space and so you couldn't put RAM there so you ended up with less than 4GB of RAM


Planet DebianMichael Ablassmeier: PVE 9.0 - Snapshots for LVM

The new Proxmox release advertises a new feature for easier snapshot handling of virtual machines whose disks are stored on LVM volumes. I wondered.. what's the deal..?

To be able to use the new feature, you need to enable a special flag for the LVM volume group. This example shows the general workflow for a fresh setup.

1) Create the volume group with the snapshot-as-volume-chain feature turned on:

 pvesm add lvm lvmthick --content images --vgname lvm --snapshot-as-volume-chain 1

2) From this point on, you can create virtual machines right away, BUT those virtual machines' disks must use the QCOW image format for their disk volumes. If you use the RAW format, you still won't be able to create snapshots.

 VMID=401
 qm create $VMID --name vm-lvmthick
 qm set $VMID -scsi1 lvmthick:2,format=qcow2

So, why would it make sense to format the LVM volume as QCOW?

Snapshots on LVM thick-provisioned devices are, as everybody knows, a very I/O-intensive task. For each snapshot, a special -cow device is created that tracks the changed block regions and the original block data for each change to the active volume. This wastes quite some space within your volume group for each snapshot.

Formatting the LVM volume as a QCOW image makes it possible to use the QCOW backing-image option for these devices; this is the way PVE 9 handles this kind of snapshot.

Creating a snapshot looks like this:

 qm snapshot $VMID id
 snapshotting 'drive-scsi1' (lvmthick3:vm-401-disk-0.qcow2)
 Renamed "vm-401-disk-0.qcow2" to "snap_vm-401-disk-0_id.qcow2" in volume group "lvm"
 Rounding up size to full physical extent 1.00 GiB
 Logical volume "vm-401-disk-0.qcow2" created.
 Formatting '/dev/lvm/vm-401-disk-0.qcow2', fmt=qcow2 cluster_size=131072 extended_l2=on preallocation=metadata compression_type=zlib size=1073741824 backing_file=snap_vm-401-disk-0_id.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16

So it will rename the currently active disk and create another QCOW-formatted LVM volume, pointing it at the snapshot image using the backing_file option.

Neat.
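
If you want to inspect the resulting chain yourself, qemu-img can walk it (the volume path matches the example above; the logical volume may need to be activated first):

 # activate the volume if necessary, then list the image and its backing files
 lvchange -ay lvm/vm-401-disk-0.qcow2
 qemu-img info --backing-chain /dev/lvm/vm-401-disk-0.qcow2

The output should show the renamed snap_vm-401-disk-0_id.qcow2 volume as the backing file of the active image.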

,

Planet DebianScarlett Gately Moore: Fostering Constructive Communication in Open Source Communities

I write this in the wake of a personal attack against my work and a project that is near and dear to me. Instead of spreading vile rumors and hearsay, talk to me. I am not known to be ‘hard to talk to’ and am wide open for productive communication. I am disheartened and would like to share some thoughts of the importance of communication. Thanks for listening.

Open source development thrives on collaboration, shared knowledge, and mutual respect. Yet sometimes, the very passion that drives us to contribute can lead to misunderstandings and conflicts that harm both individuals and the projects we care about. As contributors, maintainers, and community members, we have a responsibility to foster environments where constructive dialogue flourishes.

The Foundation of Healthy Open Source Communities

At its core, open source is about people coming together to build something greater than what any individual could create alone. This collaborative spirit requires more than just technical skills—it demands emotional intelligence, empathy, and a commitment to treating one another with dignity and respect.

When disagreements arise—and they inevitably will—the manner in which we handle them defines the character of our community. Technical debates should focus on the merits of ideas, implementations, and approaches, not on personal attacks or character assassinations conducted behind closed doors.

The Importance of Direct Communication

One of the most damaging patterns in any community is when criticism travels through indirect channels while bypassing the person who could actually address the concerns. When we have legitimate technical disagreements or concerns about someone’s work, the constructive path forward is always direct, respectful communication.

Consider these approaches:

  • Address concerns directly: If you have technical objections to someone’s work, engage with them directly through appropriate channels
  • Focus on specifics: Critique implementations, documentation, or processes—not the person behind them
  • Assume good intentions: Most contributors are doing their best with the time and resources available to them
  • Offer solutions: Instead of just pointing out problems, suggest constructive alternatives

Supporting Contributors Through Challenges

Open source contributors often juggle their community involvement with work, family, and personal challenges. Many are volunteers giving their time freely, while others may be going through difficult periods in their lives—job searching, dealing with health issues, or facing other personal struggles.

During these times, our response as a community matters enormously. A word of encouragement can sustain someone through tough periods, while harsh criticism delivered thoughtlessly can drive away valuable contributors permanently.

Building Resilient Communities

Strong open source communities are built on several key principles:

Transparency in Communication: Discussions about technical decisions should happen in public forums where all stakeholders can participate and learn from the discourse.

Constructive Feedback Culture: Criticism should be specific, actionable, and delivered with the intent to improve rather than to tear down.

Recognition of Contribution: Every contribution, whether it’s code, documentation, bug reports, or community support, has value and deserves acknowledgment.

Conflict Resolution Processes: Clear, fair procedures for handling disputes help prevent minor disagreements from escalating into community-damaging conflicts.

The Long View

Many successful open source projects span decades, with contributors coming and going as their life circumstances change. The relationships we build and the culture we create today will determine whether these projects continue to attract and retain the diverse talent they need to thrive.

When we invest in treating each other well—even during disagreements—we’re investing in the long-term health of our projects and communities. We’re creating spaces where innovation can flourish because people feel safe to experiment, learn from mistakes, and grow together.

Moving Forward Constructively

If you find yourself in conflict with another community member, consider these steps:

  1. Take a breath: Strong emotions rarely lead to productive outcomes
  2. Seek to understand: What are the underlying concerns or motivations?
  3. Communicate directly: Reach out privately first, then publicly if necessary
  4. Focus on solutions: How can the situation be improved for everyone involved?
  5. Know when to step back: Sometimes the healthiest choice is to disengage from unproductive conflicts

A Call for Better

Open source has given us incredible tools, technologies, and opportunities. The least we can do in return is treat each other with the respect and kindness that makes these collaborative achievements possible.

Every contributor—whether they’re packaging software, writing documentation, fixing bugs, or supporting users—is helping to build something remarkable. Let’s make sure our communities are places where that work can continue to flourish, supported by constructive communication and mutual respect.

The next time you encounter work you disagree with, ask yourself: How can I make this better? How can I help this contributor grow? How can I model the kind of community interaction I want to see?

Our projects are only as strong as the communities that support them. Let’s build communities worthy of the amazing software we create together.

https://gofund.me/506c910c

David BrinSome lighthearted stuff this time! Plus a few sobering reminders.

All right, it's been 3 weeks without a posting. Busy, as we finally move back home after 6 months in exile.  And sure, there's plenty going on in the world. Which I'll comment on soon, once my 3-week lobotomy has had a chance to settle in. (All hail Vlad and the New USSR and Vlad's orange-quisling U.S. prophet!)

Okay, meanwhile, got time for some humor and fun? There's a LOT of cool links, below!

Let’s start with this clip. Simply one of the best things I have seen, maybe ever! Supporting my view that ‘pre-sapient’ consciousness is very, very common… and breaking through the glass ceiling to our level must be very, very hard.


== Distractions! ==


Running short on distractions suitable for you alpha types? I mentioned Saturday Morning Breakfast Cereal comix. These are among the good ones lately.


https://www.smbc-comics.com/comic/why-6


https://www.smbc-comics.com/comic/law-4


https://www.smbc-comics.com/comic/profile


https://www.smbc-comics.com/comic/cult-2


Saturday Morning Breakfast Cereal - LLM. And an exceptionally on-target bit of whimsical cynicism from SMBC…


You'd also likely enjoy XKCD, which is generally even more science oriented. Might as well start here and just keep clicking the one-step-backward button till you tire of the cleverness!  


I mentioned Electric Sheep Comix by Patrick Farley. All his serials have such different styles you'd be sure they must have different artists. And all are brilliant! 



== And seriously, now ==


Briefly serious and then more lighthearted stuff!


Here’s a tip and a tool worth spreading. The Canadian Women's Foundation has created a hand signal for those who are victims of domestic violence which can be used silently on video calls during the coronavirus crisis to signal for help.  But not just for video calls, as illustrated in this earlier video.


And while we’re talking inspiring ways to move ahead… Big star Bruce Springsteen’s Jeep commercial paid homage to the ReUnited States of America… a lovely sentiment! (Calling to mind “malice toward none” from Lincoln’s 2nd inaugural address, one of the top ten speeches of all time.) 


It also called to mind - for not a few folks who pinged me - resonance with the “Restored United States” of my novel (and the film) “The Postman.” Which has itself been “restored” or refreshed, edited and updated with TWO new Patrick Farley covers and a new introduction. 


(Let me append -- below -- a relevant passage from The Postman, in which -- in the 1980s -- I predicted many of the rationalizations of the would-be lords seeking to re-impose 6000 years of dismal feudalism)


On the other hand, the dumbing-down continues. In 2022, the National Council of Teachers of English declared: “Time to decenter book reading and essay-writing as the pinnacles of English language arts education.” Instead, teachers are urged to focus on "media literacy" and short texts that students feel are "relevant." ??? I am well-versed in the 'newer' language arts and helped invent some. And this leads to the moronic world that Walter Tevis ('The Queen's Gambit') portrayed in his great novel MOCKINGBIRD.


But oh yeah, who reads novels? Or tracks coherent, complex thoughts?

Dig it. This is part of the Great Big War Vs Nerds that's primarily on the Mad Right... but also has long had a strong locus on the postmodernist left.

Books r 2 hard 2 reed and shit ...


…but sure… now back to fun!



== And more spritely and musically now, to cheer you up! ==


And now something completely different. I assert that Gilbert and Sullivan were master musicians. And in each opera they have at least one pas-de-deux... where you take two seemingly completely independent songs, hear them separately, and then lo! They get woven together in beauty & irony. This one combines unhelpful encouragement (!) with courage-despite-terror. You'll see (and hear) what I mean at about 3:30. Play it loud!


This version with the incomparable Linda Ronstadt!

And yes, a few of you (too few!) will deem this familiar from a scene in BRIGHTNESS REEF!


And let’s have another. Here’s one of my utter-favorite songs, by Vangelis. The Jon Anderson version is great. Donna Summer’s is even better!


Less perfect but a fun variation is Chrissie Hynde’s version with Moodswings.


Then there’s this way-fun bit of grunting nonsense by Mike Oldfield, that should be redone by Tenacious D!


Three more faves recommended by my brother, with my thumbs way up.


Johnny Clegg with Nelson Mandela. 


Patti Smith, People Have the Power.


Cornershop ‘Free Love.’ 



======


== And now that promised POSTMAN lagniappe ==


So it had been that way here too. The cliched "last straw" had been this plague of survivalists--particularly those following the high priest of violent anarchy, Nathan Holn.
...
The irony of it was that we had things turned around! The depression was over. People were at work again and cooperating. Except for a few crazies, it looked like a renaissance was coming, for America and the world.

But we forgot how much harm a few crazies could do, in America and in the world.

 


--… and later in the book… --

 

 

“How did he get away with pushing a book like this?”

       Gordon shrugged. 

       “It was called ‘the Big Lie’ technique, Johnny. Just SOUND like you know what you’re talking about—as if you’re citing real facts. Talk very fast. Weave your lies into the shape of a conspiracy theory and repeat your assertions over and over again. Those who want an excuse to hate or blame—those with big but weak egos— will leap at a simple, neat explanation for the way the world is. Those types will never call you on the facts…”



Want more?  I'll post another, longer, section of the book, soon. You'll likely not see a better pre-diagnosis of the hell we are in now, verging on possibly much worse.  But yes, we will win.


Thrive. And persevere!

 





Planet DebianAigars Mahinovs: Snapshot mirroring in Debian (and Ubuntu)

Snapshot mirroring in Debian (and Ubuntu)

The use of snapshots has been routine in both Debian and Ubuntu for several years now—or more than 15 years for Debian, to be precise. Snapshots have become not only very reliable, but also an increasingly important part of the Debian package archive.

This week, I encountered a problem at work that could be perfectly solved by correctly using the Snapshot service. However, while trying to figure it out, I ran into some shortcomings in the documentation. Until the docs are updated, I am publishing this blog post to make this information easier to find.

Problem 1: Ensure fully reproducible creation of Docker containers with the exact same packages installed, even years after the original images were generated.

Solution 1: Pin everything! Use a pinned source image in the FROM statement, such as debian:trixie-20250721-slim, and also pin the APT package sources to the "same" date - "20250722".

Hint: The APT packages need to be newer than the Docker image base. If the APT packages are a bit newer, that's not a problem, as APT can upgrade packages without issues. However, if your Docker image has a newer package than your APT package sources, you will have a big problem. For example, if you have "libbearssl0" version 0.6-2 installed in the Docker image, but your package sources only have the older version 0.6-1, you will fail when trying to install the "libbearssl-dev" package. This is because you only have version 0.6-1 of the "-dev" package available, which hard-depends on exactly version 0.6-1 of "libbearssl0", and APT will refuse to downgrade an already installed package to satisfy that dependency.

Problem 2: You are using a lot of images in a lot of executions and building tens of thousands of images per day. It would be a bad idea to put all this load on public Debian servers. Using local sources is also faster and adds extra security.

Solution 2: Use local (transparently caching) mirrors for both the Docker Hub repository and the APT package source.

At this point, I ran into another issue—I could not easily figure out how to specify a local mirror for the snapshot part of the archive service.

First of all, snapshot support in both Ubuntu and Debian accepts both syntaxes described in the Debian and Ubuntu documentation above. The documentation on both sites presents different approaches and syntax examples, but both work.

The best approach nowadays is to use the "deb822" sources syntax. Remove /etc/apt/sources.list (if it still exists), delete all contents of the /etc/apt/sources.list.d directory, and instead create this file at /etc/apt/sources.list.d/debian.sources:

Types: deb
URIs: https://common.mirror-proxy.local/ftp.debian.org/debian/
Suites: trixie
Components: main non-free-firmware non-free contrib
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
Snapshot: 20250722

Hint: This assumes you have a mirror service running at common.mirror-proxy.local that proxies requests (with caching) to whitelisted domains, based on the name of the first folder in the path.

If you now run sudo apt update --print-uris, you will see that your configuration accesses your mirror, but does not actually use the snapshot.

Next, add the following to /etc/apt/apt.conf.d/80snapshots:

APT::Snapshot "20250722";

That should work, right? Let's try sudo apt update --print-uris again. I've got good news and bad news! The good news is that we are now actually using the snapshot we specified (twice). The bad news is that we are completely ignoring the mirror and going directly to snapshots.debian.org instead.

Finding the right information was a bit of a challenge, but after a few tries, this worked: to specify a custom local mirror of the Debian (or Ubuntu) snapshot service, simply add the following line to the same file, /etc/apt/apt.conf.d/80snapshots:

Acquire::Snapshots::URI::Override::Origin::debian "https://common.mirror-proxy.local/snapshot.debian.org/archive/debian/@SNAPSHOTID@/";

Now, if you check again with sudo apt update --print-uris, you will see that the requests go to your mirror and include the specified snapshot identifier. Success!

Now you can install any packages you want, and everything will be completely local and fully reproducible, even years later!
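
Putting both solutions together, a Dockerfile for such a reproducible image could look roughly like the sketch below. Treat it as an illustration only: the mirror host is the placeholder used above and the package is just an example.

FROM debian:trixie-20250721-slim

RUN rm -f /etc/apt/sources.list /etc/apt/sources.list.d/*.sources && \
    printf '%s\n' \
        'Types: deb' \
        'URIs: https://common.mirror-proxy.local/ftp.debian.org/debian/' \
        'Suites: trixie' \
        'Components: main non-free-firmware non-free contrib' \
        'Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg' \
        'Snapshot: 20250722' \
        > /etc/apt/sources.list.d/debian.sources && \
    printf '%s\n' \
        'APT::Snapshot "20250722";' \
        'Acquire::Snapshots::URI::Override::Origin::debian "https://common.mirror-proxy.local/snapshot.debian.org/archive/debian/@SNAPSHOTID@/";' \
        > /etc/apt/apt.conf.d/80snapshots && \
    apt-get update && \
    apt-get install -y --no-install-recommends libbearssl-dev

In a real setup the FROM line would of course also pull through your local Docker Hub mirror rather than going to Docker Hub directly.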

Worse Than FailureCodeSOD: Concatenated Validation

User inputs are frequently incorrect, which is why we validate them. So, for example, if the user is allowed to enter an "asset ID" to perform some operation on it, we should verify that the asset ID exists before actually doing the operation.

Someone working with Capybara James almost got there. Almost.

private boolean isAssetIdMatching(String requestedAssetId, String databaseAssetId) {
    return (requestedAssetId + "").equals(databaseAssetId + "");
}

This Java code checks if the requestedAssetId, provided by the user, matches a databaseAssetId, fetched from the database. I don't fully understand how we get to this particular function. How is the databaseAssetId fetched? If the fetch were successful, how could it not match? I fear they may do this in a loop across all of the asset IDs in the database until they find a match; I don't know that for sure, but the naming conventions hint at a WTF.

The weird thing here, though, is the choice to concatenate an empty string to every value. When both values are real strings it changes nothing about the equality check. I suspect the goal was to protect against null values, and in Java it technically does, just not the way the author may have expected: a null operand in string concatenation is converted to the literal string "null" rather than throwing, so a null ID quietly compares as the four-character string "null" instead of raising a NullPointerException.

I strongly suspect the developer was more confident in JavaScript, where the same trick is a common idiom and "works" in much the same way.
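
If null-tolerance really was the goal, Java has an idiomatic null-safe comparison that needs no concatenation trick. A corrected version (my sketch, not part of the submission) would be:

// requires: import java.util.Objects;
private boolean isAssetIdMatching(String requestedAssetId, String databaseAssetId) {
    // Objects.equals() is null-safe: two nulls compare equal, a single null is simply not a match
    return Objects.equals(requestedAssetId, databaseAssetId);
}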

I don't understand why or how this function got here. I'm not the only one. James writes:

No clue what the original developers were intending with this. It sure was a shocker when we inherited a ton of code like this.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsNot Dying Today

Author: Julian Miles, Staff Writer Mum always said ice mining is a stupid idea. Whenever she said that, Dad just shrugged and went back to watching videos about playing the markets to get rich. I’m not sure if it was her crazy enthusiasms for anything that might get us ‘a better life’ or his stubborn […]

The post Not Dying Today appeared first on 365tomorrows.

Planet DebianFreexian Collaborators: Secure boot signing with Debusine (by Colin Watson)

Debusine aims to be an integrated solution to build, distribute and maintain a Debian-based distribution. At Debconf 25, we talked about using it to pre-test uploads to Debian unstable, and also touched on how Freexian is using it to help maintain the Debian LTS and ELTS projects.

When Debian 10 (buster) moved to ELTS status in 2024, this came with a new difficulty that hadn’t existed for earlier releases. Debian 10 added UEFI Secure Boot support, meaning that there are now signed variants of the boot loader and Linux kernel packages. Debian has a system where certain packages are configured as needing to be signed, and those packages include a template for a source package along with the unsigned objects themselves. The signing service generates detached signatures for all those objects, and then uses the template to build a source package that it uploads back to the archive for building in the usual way.

Once buster moved to ELTS, it could no longer rely on Debian’s signing service for all this. Freexian operates parallel infrastructure for the archive, and now needed to operate a parallel signing service as well. By early 2024 we were already planning to move ELTS infrastructure towards Debusine, and so it made sense to build a signing service there as well.

Separately, we were able to obtain a Microsoft signature for Freexian’s shim build, allowing us to chain this into the trust path for most deployed x86 machines.

Freexian can help other organizations running Debian derivatives through the same process, and can provide secure signing infrastructure to the standards required for UEFI Secure Boot.

Prior art

We considered both code-signing (Debian’s current implementation) and lp-signing (Ubuntu’s current implementation) as prior art. Neither was quite suitable for various reasons.

  • code-signing relies on polling a configured URL for each archive to fetch a GPG-signed list of signing requests, which would have been awkward for us to set up, and it assumes that unsigned packages are sufficiently trusted for it to be able to run dpkg -x and dpkg-source -b on them outside any containment. dpkg -x has had the occasional security vulnerability, so this seemed unwise for a service that might need to deal with signing packages for multiple customers.
  • lp-signing is a microservice accepting authenticated requests, and is careful to avoid needing to manipulate packages itself. However, this relies on a different and incompatible mechanism for indicating that packages should be signed, which wasn’t something we wanted to introduce in ELTS.

Workers

Debusine already had an established system of external workers that run tasks under various kinds of containment. This seems like a good fit: after all, what’s a request to sign a package but a particular kind of task? But there are some problems here: workers can run essentially arbitrary code (such as build scripts in source packages), and even though that’s under containment, we don’t want to give such machines access to highly-sensitive data such as private keys.

Fortunately, we’d already introduced the idea of different kinds of workers a few months beforehand, in order to be able to run privileged “server tasks” that have direct access to the Debusine database. We built on that and added “signing workers”, which are much like external workers except that they only run signing tasks, no other types of tasks run on them, and they have access to a private database with information about the keys managed by their Debusine instance. (Django’s support for multiple databases made this quite easy to arrange: we were able to keep everything in the same codebase.)

Key management

It’s obviously bad practice to store private key material in the clear, but at the same time the signing workers are essentially oracles that will return signatures on request while ensuring that the rest of Debusine has no access to private key material, so they need to be able to get hold of it themselves. Hardware security modules (HSMs) are designed for this kind of thing, but they can be inconvenient to manage when large numbers of keys are involved.

Some keys are more valuable than others. If the signing key used for an experimental archive leaks, the harm is unlikely to be particularly serious; but if the ELTS signing key leaks, many customers will be affected. To match this, we implemented two key protection arrangements for the time being: one suitable for low-value keys encrypts the key in software with a configured key and stores the public key and ciphertext in the database, while one suitable for high-value keys stores keys as PKCS #11 URIs that can be set up manually by an instance administrator. We packaged some YubiHSM tools to make this easier for our sysadmins.
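
For readers unfamiliar with the format, a PKCS #11 URI (RFC 7512) identifying a private key on a token looks something like the line below; the token and object names are invented for illustration rather than taken from our setup.

pkcs11:token=elts-signing;object=secure-boot-2025;type=private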

The signing worker calls back to the Debusine server to check whether a given work request is authorized to use a given signing key. All operations related to private keys also produce an audit log entry in the private signing database, so we can track down any misuse.

Tasks

Getting Debusine to do anything new usually requires figuring out how to model the operation as a task. In this case, that was complicated by wanting to run as little code as possible on the signing workers: in particular, we didn’t want to do all the complicated package manipulations there.

The approach we landed on was a chain of three tasks:

  • ExtractForSigning runs on a normal external worker. It takes the result of a package build and picks out the individual files from it that need to be signed, storing them as separate artifacts.
  • Sign runs on a signing worker, and (of course) makes the actual signatures, storing them as artifacts.
  • AssembleSignedSource runs on a normal external worker. It takes the signed artifacts and produces a source package containing them, based on the template found in the unsigned binary package.

Workflows

Of course, we don’t want people to have to create all those tasks directly and figure out how to connect everything together for themselves, and that’s what workflows are good at. The make_signed_source workflow does all the heavy lifting of creating the right tasks with the right input data and making them depend on each other in the right ways, including fanning out multiple copies of all this if there are multiple architectures or multiple template packages involved. Since you probably don’t want to stop at just having the signed source packages, it also kicks off builds to produce signed binary packages.

Even this is too low-level for most people to use directly, so we wrapped it all up in our debian_pipeline workflow, which just needs to be given a few options to enable signing support (and those options can be locked down by workspace owners).

What’s next?

In most cases this work has been enough to allow ELTS to carry on issuing kernel security updates without too much disruption, which was the main goal; but there are other uses for a signing system. We included OpenPGP support from early on, which allows Debusine to sign its own builds, and we’ll soon be extending that to sign APT repositories hosted by Debusine.

The current key protection arrangements could use some work. Supporting automatically-generated software-encrypted keys and manually-generated keys in an HSM is fine as far as it goes, but it would be good to be able to have the best of both worlds by being able to automatically generate keys protected by an HSM. This needs some care, as HSMs often have quite small limits on the number of objects they can store at any one time, and the usual workaround is to export keys from the HSM “under wrap” (encrypted by a key known only to the HSM) so that they can be imported only when needed. We have a general idea of how to do this, but doing it efficiently will need care.

We’d be very interested in hearing from organizations that need this sort of thing, especially for Debian derivatives. Debusine provides lots of other features that can help you. Please get in touch with us at sales@freexian.com if any of this sounds useful to you.

,

Planet DebianSergio Cipriano: Query Debian changelogs by keyword with the FTP-Master API

Query Debian changelogs by keyword with the FTP-Master API

In my post about tracking my Debian uploads, I used the ProjectB database directly to retrieve how many uploads I had so far.

I was pleasantly surprised to receive a message from Joerg Jaspert, who introduced me to the Debian Archive Kit web API (dak), also known as the FTP-Master API.

Joerg suggested integrating the query I had written into the dak API, so that anyone could obtain the same results with a simple HTTP request, without needing access to the mirror host.

I liked the idea and I decided to work on it. The endpoint is already available and you can try by yourself by doing something like this:

$ curl https://api.ftp-master.debian.org/changelogs?search_term=almeida+cipriano

The query provides a way to search through the changelogs of all Debian packages currently published. The source code is available at Salsa.

I'm already using it to track my uploads: I made this page, which updates every day. If you want to set up something similar, you can use my script and just change the search_term to the name you use in your changelog entries.

I’m running it using a systemd timer. Here’s what I’ve got:

# .config/systemd/user/track-uploads.service
[Unit]
Description=Track my uploads using the dak API
StopWhenUnneeded=yes

[Service]
Type=oneshot
WorkingDirectory=/home/cipriano/public_html/uploads
ExecStart=/usr/bin/python3 generate.py

# .config/systemd/user/track-uploads.timer
[Unit]
Description=Run track-uploads script daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

After placing every file in the right place you just need to run:

$ systemctl --user daemon-reload
$ systemctl --user enable --now track-uploads.timer
$ systemctl --user start track-uploads.service # generates the html now

If you want to get a bit fancier, I’m also using an Ansible playbook for that. The source code is available on my GitLab repository.

If you want to learn more about dak, there are web docs available.

I’d like to thank Joerg once again for suggesting the idea and for reviewing and merging the change so quickly.

Planet DebianAigars Mahinovs: Debconf 25 photos

Debconf 25 photos

Debconf 25 came to an end in Brest, France a couple of weeks ago.

This has been a very different and unusually interesting Debconf, for two related reasons. For one, the conference was close enough in Western Europe that I could simply drive there by car (which reminds me that I should write a blog post about the BMW i5 before I am done with it at the end of this year). For the other, it was close enough to Western Europe that many Debian developers who have not been seen at the event for many years could join this year. Being able to arrive early, decompress and spend extra time looking around the place made the event itself even more enjoyable than usual.

The French cuisine, especially in its Breton expression, has been a very welcome treat. Even if there were some rough patches with the food selection, amount, or waiting, it was still a great experience.

I specifically want to say a big thank you to the organisers for everything, but very explicitly for planning all the talk/BOF rooms in the same building and almost on the same floor. It saved me a lot of footwork, and for the other participants the short walks between talks also made it possible to always have a few minutes to talk to people or grab a croissant before running to the next talk.

IMHO we should come back to a tradition of organising Debconf in Europe every 2-3 years. This maximises one of the main goals of Debconf - bringing as many Debian Developers as possible together in one physical location. This works best when the location is actually close to large concentrations of existing developers. In other years, the other goal of Debconf can then take priority - recruiting new developers in new locations. However, these goals could both be achieved at the same time - there are plenty of locations in Europe and even in Western Europe that still have good potential for attracting new developers. Especially if we focus on organising the event on the campuses of some larger technically-oriented universities.

This year was also very productive for me—a lot of conversations with various people about all kinds of topics, especially technical packaging questions. It has been a long time since the very basic foundations of Debian packaging work have been so fundamentally refactored and modernized as in the past year. Tag2upload has become a catalyst for git-based packaging and for automated workflows via Salsa, and all of that feeds back into focusing on a few best-supported packaging workflows. There is still a bit of a documentation gap for a new contributor getting to these modern packaging workflows from the point where the New Maintainers Guide stops.

In any case, next year Debconf will be happening in Santa Fe, Argentina. And the year after that it is all still open and in a close competition between Japan, Spain, Portugal, Brazil and .. El Salvador? Personally, I would love to travel to Japan (again), but Spain or Portugal would also be great locations to meet more European developers again.

As for Santa Fe ... it is quite likely that I will not be able to make it there next year, for (planned) health reasons. I guess I should also write a new blog post about what it means to be a Debconf Photographer, so that someone else could do this as well, and also reduce the "bus factor" along the way.

But before that - here is the main group photo from this year:

DebConf 25 Group photo

You can also see it on:

You can also enjoy the rest of the photos:

Additionally, check out photos from other people on GIT LFS and consider adding your own photos there as well.

Other places I have updated with up-to-date information are these wiki pages:

If you took part in the playing cards event, then check your photo in this folder and link to your favourite from your line in the playing card wiki

Planet DebianBen Hutchings: FOSS activity in July 2025

In July I attended DebCamp and DebConf in Brest, France. I very much enjoyed the opportunity to reconnect with other Debian contributors in person. I had a number of interesting and fruitful conversations there, besides the formally organised BoFs and talks.

I also gave my own talk on What’s new in the Linux kernel (and what’s missing in Debian).

Here’s the usual categorisation of activity:

365 TomorrowsForecast

Author: Anna Mantzaris It’s not easy to forecast weather on the moon. With an average temperature of 250º F and no atmosphere to speak of, this job has its challenges. The extended forecast goes into the billions of years. I always knew I’d never make it in a big TV market like New York or […]

The post Forecast appeared first on 365tomorrows.

Planet DebianSergio Cipriano: Handling malicious requests with fail2ban

Handling malicious requests with fail2ban

I've been receiving a lot of malicious requests for a while now, so I decided to try out fail2ban as a possible solution.

I see fail2ban as a nice-to-have tool that helps keep down the "noise", but I wouldn't rely on it for security. If your security depends on a tool that blocks unauthorized attempts by obsessively monitoring log files, you are probably doing something wrong.

I'm currently using fail2ban 1.0.2-2 from Debian Bookworm. Unfortunately, I quickly ran into a problem, as fail2ban doesn't work out of the box with this version:

systemd[1]: Started fail2ban.service - Fail2Ban Service.
fail2ban-server[2840]: 2025-07-28 14:40:13,450 fail2ban.configreader   [2840]: WARNING 'allowipv6' not defined in 'Definition'. Using default one: 'auto'
fail2ban-server[2840]: 2025-07-28 14:40:13,456 fail2ban                [2840]: ERROR   Failed during configuration: Have not found any log file for sshd jail
fail2ban-server[2840]: 2025-07-28 14:40:13,456 fail2ban                [2840]: ERROR   Async configuration of server failed
systemd[1]: fail2ban.service: Main process exited, code=exited, status=255/EXCEPTION
systemd[1]: fail2ban.service: Failed with result 'exit-code'.

The good news is that this issue has already been addressed for Debian Trixie.

Since I prefer to manage my own configuration, I removed the default file at /etc/fail2ban/jail.d/defaults-debian.conf and replaced it with a custom setup. To fix the earlier issue, I also added a systemd backend to the sshd jail so it would stop expecting a logpath.

Here's the configuration I'm using:

$ cat /etc/fail2ban/jail.d/custom.conf 
[DEFAULT]
maxretry = 3
findtime = 24h
bantime  = 24h

[nginx-bad-request]
enabled  = true
port     = http,https
filter   = nginx-bad-request
logpath  = /var/log/nginx/access.log

[nginx-botsearch]
enabled  = true
port     = http,https
filter   = nginx-botsearch
logpath  = /var/log/nginx/access.log

[sshd]
enabled  = true
port     = ssh
filter   = sshd
backend  = systemd

I like to make things explicit, so I did repeat some lines from the default jail.conf file. In the end, I'm quite happy with it so far. Soon after I set it up, fail2ban was already banning a few hosts.

$ sudo fail2ban-client status nginx-bad-request
Status for the jail: nginx-bad-request
|- Filter
|  |- Currently failed: 42
|  |- Total failed: 454
`- Actions
   |- Currently banned: 12
   |- Total banned: 39
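
If a legitimate host ever ends up in one of these jails, fail2ban-client can also lift the ban by hand; for example (the address below is just a placeholder):

$ sudo fail2ban-client set nginx-bad-request unbanip 192.0.2.10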

,

Planet DebianRaju Devidas: Use phone/tablets/other laptops as external monitor with your laptop

This method is for wayland based systems. There are better ways to do this on GNOME or KDE desktops, but the method we are going to use is independent of DE/WM that you are using.

I am doing this on sway window manager, but you can try this on any other Wayland based WM or DE. I have not tried this on Xorg based systems, there are several other guides for Xorg based systems online.

When we connect a physical monitor to our laptop, it shows up as a second display output in our display settings, which we can then arrange in the layout, set the resolution and scale for, and so on. Since we are not connecting via a physical interface like HDMI, DP or VGA, we need to create a virtual display within our system and set its display properties manually.

Get a list of current display outputs. You can also just check it in display settings of your DE/WM with wdisplays
rajudev@sanganak ~> swaymsg -t get_outputs
Output LVDS-1 'Seiko Epson Corporation 0x3047 Unknown' (focused)
  Current mode: 1366x768 @ 60.002 Hz
  Power: on
  Position: 0,0
  Scale factor: 1.000000
  Scale filter: nearest
  Subpixel hinting: rgb
  Transform: normal
  Workspace: 2
  Max render time: off
  Adaptive sync: disabled
  Allow tearing: no
  Available modes:
    1366x768 @ 60.002 Hz
Single physical display of the laptop

Currently we are seeing only one display output. Our goal is to create a second virtual display that we will then share on the tablet/phone.

There are various tools available to do this; we are using sway-vdctl. It is not currently packaged in Debian, so we need to install it manually.

$ git clone https://github.com/odincat/sway-vdctl.git
$ cd sway-vdctl
$ cargo build --release

This will generate a binary named main under target/release. We can then copy this binary to our bin folder.

$ sudo cp target/release/main /usr/local/bin/vdctl

Now we have the vdctl command available.

$ vdctl --help
Usage: vdctl [OPTIONS] <ACTION> [VALUE]

Arguments:
  <ACTION>
          Possible values:
          - create:      Create new output based on a preset
          - kill:        Terminate / unplug an active preset
          - list:        List out active presets
          - next-number: Manually set the next output number, in case something breaks
          - sync-number: Sync the next output number using 'swaymsg -t get_outputs'

  [VALUE]
          Preset name to apply, alternatively a value
          
          [default: ]

Options:
      --novnc
          do not launch a vnc server, just create the output

  -h, --help
          Print help (see a summary with '-h')

Before creating the virtual display, we need to set its properties in .config/vdctl/config.json. I am using a Xiaomi Pad 6 tablet as my external display. You can adjust the properties according to the device you want to use as a second display.

$ (text-editor) .config/vdctl/config.json

{
    "host": "0.0.0.0",
    "presets": [
        {
            "name": "pad6",
            "scale_factor": 2,
            "port": 9901,
            "resolution": {
                "width": 2800,
                "height": 1800
            }
      }
    ]
}

In the JSON, you can set the display resolution and other options according to your external device. If you want to configure multiple displays, you can add another entry to the presets array, as in the sketch below. You can also refer to the example JSON file in the git repository.
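
For example, a config with a second preset for a phone could look like this (the phone's name, port and resolution are just illustrative values):

{
    "host": "0.0.0.0",
    "presets": [
        {
            "name": "pad6",
            "scale_factor": 2,
            "port": 9901,
            "resolution": { "width": 2800, "height": 1800 }
        },
        {
            "name": "phone",
            "scale_factor": 3,
            "port": 9902,
            "resolution": { "width": 2400, "height": 1080 }
        }
    ]
}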

Now we need to actually create the virtual monitor.

$ vdctl create pad6
Created output, presumably 'HEADLESS-1'
Set resolution of 'HEADLESS-1' to 2800x1800
Set scale factor of 'HEADLESS-1' to 2
Preset 'pad6' ('HEADLESS-1': 2800x1800) is now active on port 9901

Now if you check the display outputs in your display settings or from the command line, you will see two different displays.

$ swaymsg -t get_outputs
Output LVDS-1 'Seiko Epson Corporation 0x3047 Unknown'
  Current mode: 1366x768 @ 60.002 Hz
  Power: on
  Position: 0,0
  Scale factor: 1.000000
  Scale filter: nearest
  Subpixel hinting: rgb
  Transform: normal
  Workspace: 2
  Max render time: off
  Adaptive sync: disabled
  Allow tearing: no
  Available modes:
    1366x768 @ 60.002 Hz

Output HEADLESS-1 'Unknown Unknown Unknown' (focused)
  Current mode: 2800x1800 @ 0.000 Hz
  Power: on
  Position: 1366,0
  Scale factor: 2.000000
  Scale filter: nearest
  Subpixel hinting: unknown
  Transform: normal
  Workspace: 3
  Max render time: off
  Adaptive sync: disabled
  Allow tearing: no

Also in the display settings.

Display settings on Wayland with physical and virtual monitor output

Now we need to make this virtual display available over VNC, which we will access with a VNC client on the tablet. To accomplish this I am using wayvnc, but you can use any VNC server package.

Install wayvnc

$ sudo apt install wayvnc

Now we will serve our virtual display HEADLESS-1 with wayvnc.

$ wayvnc -o HEADLESS-1 0.0.0.0 5900

You can adjust the port number as per your need.

The process from laptop side is done.

Now install any VNC software on your tablet. I am using AVNC, which is available on F-Droid.

In the VNC software interface, add a new connection with the IP address of your laptop and the port started by wayvnc. Remember, both your laptop and phone need to be on the same Wi-Fi network.

AVNC interface with the connection details to connect to the virtual monitor

Save and connect. Now you will be able to see an extended display on your tablet.

Enjoy working with multiple screens in a portable setup.
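
When you are done, the virtual output can be removed again using the kill action we saw in the --help output earlier; list shows which presets are currently active:

$ vdctl list
$ vdctl kill pad6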

Till next time.. Have a great time.

Planet DebianRussell Coker: Server CPU Sockets

I am always looking for ways of increasing the compute power I have at a reasonable price. I am very happy with my HP z840 dual CPU workstation [1] that I’m using as a server and my HP z640 single CPU workstation [2]. Both of them were available second hand at quite reasonable prices and could be cheaply upgraded to faster CPUs. But if I can get something a lot faster for a reasonable price then I’ll definitely get it.

Socket LGA2011-v3

The home server and home workstation I currently use have socket LGA2011-v3 [3], which supports the E5-2699A v4 CPU, rated at 26,939 according to Passmark [4]. That Passmark score is quite decent: you can get CPUs using DDR4 RAM that go up to almost double that, but it's a reasonable speed and it works in systems that are readily available at low prices. The z640 is regularly on sale for less than $400AU and the z840 is occasionally below $600.

The Dell PowerEdge T430 is an ok dual-CPU tower server using the same socket. One thing that's not well known is that it is limited to something like 135W per CPU when run with two CPUs. So it will work correctly with a single E5-2697A v4 with 145W TDP (I've tested that) but will refuse to boot with two of them. In my test system I tried replacing the 495W PSUs with 750W PSUs and it made no difference: the limit is in the motherboard. With only a single CPU you only get 8/12 DIMM sockets and not all PCIe slots work. There are many second hand T430s on sale with only a single CPU presumably because the T330 sucks. My T430 works fine with a pair of E5-2683 v4 CPUs.

The Dell PowerEdge T630 also takes the same CPUs but supports higher TDP than the T430. They also support 18*3.5″ disks or 32*2.5″ but they are noisy. I wouldn’t buy one for home use.

AMD

There are some nice AMD CPUs manufactured around the same time, and AMD has done a better job of making multiple CPUs that fit the same socket. The reason I don't generally use AMD CPUs is that they are used in a minority of server-grade systems, so since I want ECC RAM and other server features I generally can't find AMD systems at a reasonable price on ebay etc. There are people who really want second-hand server-grade systems with AMD CPUs and they outbid me. This is probably a region-dependent issue; maybe if I was buying in the US I could get some nice workstations with AMD CPUs at low prices.

Socket LGA1151

Socket LGA1151 [5] is used in the Dell PowerEdge T330. It only supports 2 memory channels and 4 DIMMs compared to the 4 channels and 8 DIMMs in LGA2011, and it also has a limit of 64G total RAM for most systems and 128G for some systems. By today's standards even 128G is a real limit for server use: DDR4 RDIMMs are about $1/GB, and when spending $600+ on a system and CPU upgrade you wouldn't want to spend less than $130 on RAM. The CPUs with decent performance for that socket, like the i9-9900K, aren't supported by the T330 (possibly they don't support ECC RAM). The CPUs that Dell supports perform very poorly. I suspect that Dell deliberately nerfed the T330 to drive sales of the T430.

The Lenovo P330 uses socket LGA1151-2 but has the same issues of taking slow CPUs in addition to using UDIMMs which are significantly more expensive on the second hand market.

Socket LGA2066

The next Intel socket after LGA2011-v3 is LGA2066 [6]. That is in The Dell Precision 5820 and HP Z4 G4. It takes an i9-10980XE for 32,404 on Passmark or a W-2295 for 30,906. The variant of the Dell 5820 that supports the i9 CPUs doesn’t seem to support ECC RAM so it’s not a proper workstation. The single thread performance difference between the W-2295 and the E5-2699A v4 is 2640 to 2055, a 28% increase for the W-2295. There are “High Frequency Optimized” cpus for socket LGA2011-v3 but they all deliver less than 2,300 on the Passmark single-thread tests which is much less than what you can get from socket LGA2066. The W-2295 costs $1000 on ebay and the E5-2699A v4 is readily available for under $400 and a few months ago I got a matched pair for a bit over $400. Note that getting a matched pair of Intel CPUs is a major pain [7].

Comparing sockets LGA2011-v3 and LGA2066 for a single-CPU system is a $300 system (HP z640) + $400 CPU (E5-2699A v4) vs $500 system (Dell Precision 5820) + $1000 CPU (W-2295), so more than twice the price for a 30% performance benefit on some tasks. The LGA2011-v3 and USB-C both launched in 2014 so LGA2011-v3 systems don't have USB-C sockets, and a $20 USB-C PCIe card doesn't change the economics.

Socket LGA3647

Socket LGA3647 [8] is used in the Dell PowerEdge T440. It supports 6 channels of DDR4 RAM which is a very nice feature for bigger systems. According to one Dell web page the best CPU Dell officially supports for this is the Xeon Gold 5120 which gives performance only slightly better than the E5-2683 v4 which has a low enough TDP that a T430 can run two of them. But according to another Dell web page they support 16 core CPUs which means performance better than a T430 but less than a HP z840. The T440 doesn’t seem like a great system, if I got one cheap I could find a use for it but I wouldn’t pay the prices that they go for on ebay. The Dell PowerEdge T640 has the same socket and is described as supporting up to 28 core CPUs. But I anticipate that it would be as loud as the T630 and it’s also expensive.

This socket is also used in the HP Z6 G4 which takes a W-3265 or Xeon Gold 6258R CPU for the high end options. The HP Z6 G4 systems on ebay are all above $1500 and the Xeon Gold 6258R is also over $1000, so while the Xeon Gold 6258R in a Z6 G4 will give 50% better performance on multithreaded operations than the systems I currently have, it's costing almost 3* as much. It has 6 DIMM sockets which is a nice improvement over the 4 in the z640. The Z6 G4 takes a maximum of 768G of RAM with the optional extra CPU board (which is very expensive both new and on ebay) compared to my z840 which has 512G and half its DIMM slots empty. The HP Z8 G4 has the same socket and takes up to 3TB of RAM if used with CPUs that support it (most CPUs only support 768G and you need a “M” variant to support more). The higher performance CPUs supported in the Z6 G4 and Z8 G4 don’t have enough entries in the Passmark database to be accurate, but going from 22 cores in the E5-2699A v4 to 28 in the Xeon Platinum 8180 when using the same RAM technology doesn’t seem like a huge benefit. The Z6 and Z8 G4 systems run DDR4 RAM at up to 2666 speed while the z640 and z840 only go up to 2400; a 10% increase in RAM speed is nice but not a huge difference.

I don’t think that any socket LGA3647 systems will ever be ones I want to buy. They don’t offer much over LGA2011-v3 but are in newer and fancier systems that will go for significantly higher prices.

DDR5

I think that DDR5 systems will be my next step up in tower server and workstation performance after the socket LGA2011-v3 systems. I don’t think anything less will offer me enough of a benefit to justify a change. I also don’t think that they will be in the price range I am willing to pay until well after DDR6 is released; some people are hoping for DDR6 to be released late this year, but next year seems more likely. So maybe in 2027 there will be some nice DDR5 systems going cheap.

CPU Benchmark Results

Here are the benchmark results of CPUs I mentioned in this post according to passmark.com [9]. I didn’t reference results of CPUs that only had 1 or 2 results posted as they aren’t likely to be accurate.

CPU              Single Thread  Multi Thread  TDP
E5-2683 v4       1,713          17,591        120W
Xeon Gold 5120   1,755          18,251        105W
i9-9900K         2,919          18,152        95W
E5-2697A v4      2,106          21,610        145W
E5-2699A v4      2,055          26,939        145W
W-3265           2,572          30,105        205W
W-2295           2,642          30,924        165W
i9-10980XE       2,662          32,397        165W
Xeon Gold 6258R  2,080          40,252        205W

Cryptogram The Semiconductor Industry and Regulatory Compliance

Earlier this week, the Trump administration narrowed export controls on advanced semiconductors ahead of US-China trade negotiations. The administration is increasingly relying on export licenses to allow American semiconductor firms to sell their products to Chinese customers, while keeping the most powerful of them out of the hands of our military adversaries. These are the chips that power the artificial intelligence research fueling China’s technological rise, as well as the advanced military equipment underpinning Russia’s invasion of Ukraine.

The US government relies on private-sector firms to implement those export controls. It’s not working. US-manufactured semiconductors have been found in Russian weapons. And China is skirting American export controls to accelerate AI research and development, with the explicit goal of enhancing its military capabilities.

American semiconductor firms are unwilling or unable to restrict the flow of semiconductors. Instead of investing in effective compliance mechanisms, these firms have consistently prioritized their bottom lines—a rational decision, given the fundamentally risky nature of the semiconductor industry.

We can’t afford to wait for semiconductor firms to catch up gradually. To create a robust regulatory environment in the semiconductor industry, both the US government and chip companies must take clear and decisive actions today and consistently over time.

Consider the financial services industry. Those companies are also heavily regulated, implementing US government regulations ranging from international sanctions to anti-money laundering. For decades, these companies have invested heavily in compliance technology. Large banks maintain teams of compliance employees, often numbering in the thousands.

The companies understand that by entering the financial services industry, they assume the responsibility to verify their customers’ identities and activities, refuse services to those engaged in criminal activity, and report certain activities to the authorities. They take these obligations seriously because they know they will face massive fines when they fail. Across the financial sector, the Securities and Exchange Commission imposed a whopping $6.4 billion in penalties in 2022. For example, TD Bank recently paid almost $2 billion in penalties because of its ineffective anti-money laundering efforts.

An executive order issued earlier this year applied a similar regulatory model to potential “know your customer” obligations for certain cloud service providers.

If Trump’s new license-focused export controls are to be effective, the administration must increase the penalties for noncompliance. The Commerce Department’s Bureau of Industry and Security (BIS) needs to more aggressively enforce its regulations by sharply increasing penalties for export control violations.

BIS has been working to improve enforcement, as evidenced by this week’s news of a $95 million penalty against Cadence Design Systems for violating export controls on its chip design technology. Unfortunately, BIS lacks the people, technology, and funding to enforce these controls across the board.

The Trump administration should also use its bully pulpit, publicly naming companies that break the rules and encouraging American firms and consumers to do business elsewhere. Regulatory threats and bad publicity are the only ways to force the semiconductor industry to take export control regulations seriously and invest in compliance.

With those threats in place, American semiconductor firms must accept their obligation to comply with regulations and cooperate. They need to invest in strengthening their compliance teams and conduct proactive audits of their subsidiaries, their customers, and their customers’ customers.

Firms should elevate risk and compliance voices onto their executive leadership teams, similar to the chief risk officer role found in banks. Senior leaders need to devote their time to regular progress reviews focused on meaningful, proactive compliance with export controls and other critical regulations, thereby leading their organizations to make compliance a priority.

As the world becomes increasingly dangerous and America’s adversaries become more emboldened, we need to maintain stronger control over our supply of critical semiconductors. If Russia and China are allowed unfettered access to advanced American chips for their AI efforts and military equipment, we risk losing the military advantage and our ability to deter conflicts worldwide. The geopolitical importance of semiconductors will only increase as the world becomes more dangerous and more reliant on advanced technologies—American security depends on limiting their flow.

This essay was written with Andrew Kidd and Celine Lee, and originally appeared in The National Interest.

365 TomorrowsLast Request

Author: Jason Schembri My body comes back before I do. Lungs seize. Throat raw. Muscles twitching down my left side—all the expected waking-from-cryo nonsense. And then my mind, snapping back like elastic. “Vitals stabilising. Visual distortion: temporary. Passenger 113-A. Revival sequence complete.” No greeting, no mission status. Just the same old system voice, steady and […]

The post Last Request appeared first on 365tomorrows.

,

Cryptogram Surveilling Your Children with AirTags

Skechers is making a line of kids' shoes with a hidden compartment for an AirTag.

Planet DebianJonathan Dowland: School of Computing Technical Reports

(You wait ages for an archiving blog post and two come along at once!)

Between 1969-2019, the Newcastle University School of Computing published a Technical Reports Series. Until 2017-ish, the full list of individually-numbered reports was available on the School's website, as well as full text PDFs for every report.

At some time around 2014 I was responsible for migrating the School's website from self-managed to centrally-managed. The driver was to improve the website from the perspective of student recruitment. The TR listings (as well as full listings and texts for awarded PhD theses, MSc dissertations, Director's reports and various others) survived the initial move. After I left (as staff) in 2015, anything not specifically about student recruitment degraded and by 2017 the listings were gone.

I've been trying, on and off, to convince different parts of the University to restore and take ownership of these lists ever since. For one reason or another each avenue I've pursued has gone nowhere.

Recently the last remaining promising way forward failed, so I gave up and did it myself. The list is now hosted by the Historic Computing Committee, here:

https://nuhc.ncl.ac.uk/computing/techreports/

It's not complete (most of the missing entries are towards the end of the run), but it's a start. The approach that finally yielded results was simply scraping the Internet Archive Wayback Machine for various pages from back when the material was represented on the School website, and then filling in the gaps from some other sources.
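
For anyone attempting something similar: the Wayback Machine's CDX API is a convenient way to enumerate captured pages for a site. A query along these lines (the URL pattern here is illustrative rather than the exact one I used) returns the original URLs and capture timestamps to work from:

$ curl 'https://web.archive.org/cdx/search/cdx?url=ncl.ac.uk/computing/*&output=json&fl=original,timestamp&collapse=urlkey'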

What I envisage in the future: per-report pages with the relevant metadata (including abstracts); authors de-duplicated and cross-referenced; PDFs OCRd; providing access to the whole metadata DB (probably as a lump of JSON); a mechanism for people to report errors; a platform for students to perform data mining projects: perhaps some kind of classification/tagging by automated content analysis; cross-referencing copies of papers in other venues (lots of TRs are pre-prints).

Planet DebianSergio Cipriano: How I deployed this Website

How I deployed this Website

I will describe the step-by-step process I followed to make this static website accessible on the Internet.

DNS

I bought this domain on NameCheap and am using their DNS for now, where I created these records:

Record Type  Host                Value
A            sergiocipriano.com  201.54.0.17
CNAME        www                 sergiocipriano.com

Virtual Machine

I am using Magalu Cloud for hosting my VM, since employees have free credits.

Besides creating a VM with a public IP, I only needed to set up a Security Group with the following rules:

Type         Protocol  Port  Direction  CIDR
IPv4 / IPv6  TCP       80    IN         Any IP
IPv4 / IPv6  TCP       443   IN         Any IP

Firewall

The first thing I did in the VM was enabling ufw (Uncomplicated Firewall).

Enabling ufw without pre-allowing SSH is a common pitfall and can lock you out of your VM. I did this once :)

A safe way to enable ufw:

$ sudo ufw allow OpenSSH      # or: sudo ufw allow 22/tcp
$ sudo ufw allow 'Nginx Full' # or: sudo ufw allow 80,443/tcp
$ sudo ufw enable

To check if everything is ok, run:

$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To                           Action      From
--                           ------      ----
22/tcp (OpenSSH)             ALLOW IN    Anywhere                  
80,443/tcp (Nginx Full)      ALLOW IN    Anywhere                  
22/tcp (OpenSSH (v6))        ALLOW IN    Anywhere (v6)             
80,443/tcp (Nginx Full (v6)) ALLOW IN    Anywhere (v6) 

Reverse Proxy

I'm using Nginx as the reverse proxy. Since I use the Debian package, I just needed to add this file:

/etc/nginx/sites-enabled/sergiocipriano.com

with this content:

server {
    listen 443 ssl;      # IPv4
    listen [::]:443 ssl; # IPv6

    server_name sergiocipriano.com www.sergiocipriano.com;

    root /path/to/website/sergiocipriano.com;
    index index.html;

    location / {
        try_files $uri /index.html;
    }
}

server {
    listen 80;
    listen [::]:80;

    server_name sergiocipriano.com www.sergiocipriano.com;

    # Redirect all HTTP traffic to HTTPS
    return 301 https://$host$request_uri;
}
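
After dropping that file in place, check the syntax and reload nginx:

$ sudo nginx -t
$ sudo systemctl reload nginx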

TLS

It's really easy to setup TLS thanks to Let's Encrypt:

$ sudo apt-get install certbot python3-certbot-nginx
$ sudo certbot install --cert-name sergiocipriano.com
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Deploying certificate
Successfully deployed certificate for sergiocipriano.com to /etc/nginx/sites-enabled/sergiocipriano.com
Successfully deployed certificate for www.sergiocipriano.com to /etc/nginx/sites-enabled/sergiocipriano.com

Certbot will edit the nginx configuration with the path to the certificate.
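
The Debian package also ships a systemd timer that renews certificates automatically; a dry run is a quick way to confirm renewal will work when the time comes:

$ sudo certbot renew --dry-run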

HTTP Security Headers

I decided to use wapiti, which is a web application vulnerability scanner, and the report found these problems:

  1. CSP is not set
  2. X-Frame-Options is not set
  3. X-XSS-Protection is not set
  4. X-Content-Type-Options is not set
  5. Strict-Transport-Security is not set

I'll explain them one by one:

  1. The Content-Security-Policy header prevents XSS and data injection by restricting sources of scripts, images, styles, etc.
  2. The X-Frame-Options header prevents a website from being embedded in iframes (clickjacking).
  3. The X-XSS-Protection header is deprecated. It is recommended that CSP is used instead of XSS filtering.
  4. The X-Content-Type-Options header stops MIME-type sniffing to prevent certain attacks.
  5. The Strict-Transport-Security header informs browsers that the host should only be accessed using HTTPS, and that any future attempts to access it using HTTP should automatically be upgraded to HTTPS. Additionally, on future connections to the host, the browser will not allow the user to bypass secure connection errors, such as an invalid certificate. HSTS identifies a host by its domain name only.

I added these security headers inside the HTTPS and HTTP server blocks, outside the location block, so they apply globally to all responses. Here's what the Nginx config looks like:

add_header Content-Security-Policy "default-src 'self'; style-src 'self';" always;
add_header X-Frame-Options "DENY" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

I added always to ensure that nginx sends the header regardless of the response code.
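
A quick way to confirm the headers are actually being sent (my own check, not from the original post) is to inspect a response with curl; the four add_header lines above should all show up:

$ curl -sI https://sergiocipriano.com | grep -iE 'content-security-policy|x-frame-options|x-content-type-options|strict-transport-security'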

To add the Content-Security-Policy header I had to move the CSS to a separate file, because browsers block inline styles under a strict CSP unless you allow them explicitly: they're treated as unsafe inline. Moving the styles to a separate file and linking it avoids that:

<link rel="stylesheet" href="./resources/header.css">

Planet DebianJonathan Dowland: Debian Chronicles

I recently learned that, about 6 months ago, the Debian webteam deleted all news articles from the main website older than 2022. There have been several complaints from people in and outside of Debian, notably Joe Brockmeier of LWN, and this really sad one from the nephew of a deceased developer, wondering where the obituary had gone, but the team have not been swayed and are not prepared to reinstate the news.

It feels very important to me, too, that historic news items, and links to them, are not broken. So, I hastily built a new Debian service, The Chronicles of Debian, as a permanent home for historic web content.

$ HEAD -S -H "Accept-Language: de" https://www.debian.org/News/1997/19971211
HEAD https://www.debian.org/News/1997/19971211
302 Found
HEAD https://chronicles.debian.org/www/News/1997/19971211
200 OK
…
Content-Language: de
Content-Location: 19971211.de.html
…
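
The same check works with curl if lwp-request's HEAD tool isn't installed (my own equivalent, not from the service announcement); it should show the 302 to chronicles.debian.org followed by a 200 with Content-Language: de:

$ curl -sIL -H 'Accept-Language: de' https://www.debian.org/News/1997/19971211 | grep -iE '^(HTTP|location|content-language)'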

This was thrown up in a hurry to get something working as fast as possible, and there is plenty of room for improvement. Get in touch if there's an enhancement you would like or you would like to get involved!

Planet DebianGuido Günther: Free Software Activities July 2025

Another short status update of what happened on my side last month - a lot shorter than usual due to real life events (that will also affect August), but there was some progress on stevia, which has now landed in Debian too.

See below for details on the above and more:

phosh

  • Use the new (rust based) phosh portal too (MR)
  • Consistently format meson files (MR)

phoc

  • Add sysprof support (MR)
  • Reject input based on shell's state (MR)
  • Avoid zero serial (MR)
  • Allow to damage whole output on each frame (MR)
  • Avoid possible crash on unlock (MR)

phosh-mobile-settings

  • Use newer gmobile and make CI more flexible (MR)
  • Fix nightly build (MR)
  • Allow to configure the OSK's automatic scaling properties (MR)

stevia (formerly phosh-osk-stub)

  • Portrait keyboard scaling (MR)
  • Fix translation of completer descriptions in mobile settings (MR)
  • Use key-pressed events (MR)
  • Fix additional completions like emojis with hunspell completer (MR)
  • Document layout testing (MR)

phosh-vala-plugins

  • Drop vapi files, they've made it into a phosh release now (MR)

xdg-desktop-portal-phosh

  • Bump rustc dependency and simplify CI (MR)

feedbackd-device-themes

  • Add key-{pressed,released} (MR)

livi

  • Make single click play/pause video (MR)
  • Update screenshot and metinfo for better display on flathub (MR)
  • Release 0.3.2 (MR)
  • Update on Flathub (MR)

Debian

  • Upload current stevia to experimental
  • phosh: Don't forget to install vapi files (MR)
  • meta-phosh: Update to 0.48.0 (MR)
  • Update to stevia 0.48 (MR)
  • Update xkbcommon to 0.10.0 (MR)
  • iio-sensor-proxy: Backport buffer mode fixes for trixie (MR), Unblock request
  • livi: Update to 0.3.2 (MR)

foliate

  • Don't let session go idle when in fullscreen (MR)

Cellbroadcastd

  • Fix packaging build (MR)

git-buildpackage

  • pull: Allow to convert local repo when remote got switched to DEP-14 (MR)

wayland-protocols

  • Respin cutout protocol MR

Reviews

This is not code by me but reviews of other people's code. The list is (as usual) slightly incomplete. Thanks for the contributions!

  • phosh-mobile-settings: Disable Xwayland to help e.g. distrobox (MR) - merged
  • phosh-mobile-settings: Allow to search (MR) - merged
  • phosh-mobile-settings: Allow to configure terminal layout shortcuts (MR) - merged
  • feedbackd: Legacy led support (MR) - merged
  • phosh: upcoming-events: Allow to hide days without events (MR)
  • m-b-p-i: Add emergency numbers for JP (MR)
  • xdg-desktop-portal-phosh: bootstrap pure rust portal (MR) - merged
  • xdg-desktop-portal-phosh: portal avoidance (MR) - merged

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

Worse Than FailureError'd: Monkey Business

If monkeys aren't your bag, Manuel H. is down to clown. "If anyone wants to know the address of that circus - it's written right there. Too bad that it's only useful if you happen to be in the same local subnet..." Or on the same block.

"Where's my pension?" paniced Stewart , explaining "I logged on to check my Aegon pension to banish the Monday blues and my mood only worsened!"

After last week's episode, BJH is keeping a weather eye out for freak freezes. "The Weather.com app has something for everyone. Simple forecasts almost anyone can understand, and technical jargon for the geeks."

"It costs too much to keep a salmon on the road," complains Yitzchok Nolastname . I agree this result looks fishy.

"At Vanity Fair, brevity is the soul of wit," notes jeffphi . Always has been.


Planet DebianBirger Schacht: Status update, July 2025

At the beginning of July I got my 12" Framework laptop and installed Debian on it. During that setup I made some updates to my base setup scripts that I use to install Debian machines.

Due to the freeze I did not do much package related work. But I was at DebConf and I uploaded a new release of labwc to experimental, mostly to test the tag2upload workflow.

I started working on packaging wlr-sunclock, a small Wayland widget that displays the sun’s shadows on the earth. I also created an ITP for wayback, an X11 compatibility layer that allows running X11 desktop environments on Wayland.

In my dayjob I did my usual work on apis-core-rdf, which is our Django application for managing prosopographic data. I implemented a password change interface and did some restructuring of the templates. We released a new version which was followed by a bugfix release a couple of days later.

I also implemented a rather big refactoring in pfp-api. PFP-API is a FastAPI based REST API that uses rdfproxy to fetch data from a Triplestore, converts the data to Pydantic models and then ships the models as JSON. Most of the work is done by rdfproxy in the background, but I adapted the existing pfp-api code to make it easier to add new entity types.

365 TomorrowsThe Color of Sunset

Author: Sarasi Jayasekara Sammy could see color. That was the part that bothered me. Not that he had all his organs intact while half my body had been replaced with machines. Nor that mama hadn’t spoken two words to me since he’d been born. All that didn’t trouble me. This was going to be her […]

The post The Color of Sunset appeared first on 365tomorrows.

Planet DebianPaul Wise: FLOSS Activities July 2025

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Sponsors

All work was done on a volunteer basis.

Planet DebianIustin Pop: Our Grand Japan 2025 vacation is over 😭

As I’m writing this, we’re one hour away from landing, and thus our Grand (with a capital G for sure) Japan 2025 vacation is over. Planning started about nine months ago, plane tickets bought six months in advance, most hotels booked about four months ahead, and then a wonderful, even if a bit packed, almost 3 weeks in Japan. And now we’re left with lots of good memories, some mishaps that we’re going to laugh about in a few months’ time, and quite a few thousand pictures to process and filter so that they can be viewed in a single session.

Oh, and I’m also left with a nice bottle of plum wine, thanks to inflight shopping. I was planning to buy one at the airport but didn’t manage to, as the Haneda International departures area after the security check is a bit small. But in 15 hours of flying, there was enough time to implement 2 tiny Corydalis features and browse the shopping catalog. I only learned on the flight that some items need to be preordered, a lesson for next time…

Thanks to the wonders of inflight internet, I can write and publish this, but it not being Starlink, Visual Studio Code managed to download an update for the UI, yet the remote server package is apparently too big, or the link too slow, and can’t be downloaded: it started downloading 5 times and aborted at about 80% each time. Thankfully my blog is lightweight and I can write it in vi and push it. And pushing the above-mentioned features to GitHub was also possible.

A proper blog post will follow, once I can select some pictures and manage to condense three weeks in an overall summary… And in the meantime, back to the real world!

Planet DebianReproducible Builds (diffoscope): diffoscope 303 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 303. This version includes the following changes:

[ Chris Lamb ]
* Don't check for PyPDF version 3 specifically, check for >= 3. Thanks,
  Vagrant, for the patch. (Closes: reproducible-builds/diffoscope#413)
* Ensure that Java class files are named .class on the filesystem before
  passing them to javap(1).
* Update copyright years.

You can find out more by visiting the project homepage.

Planet Debianpuer-robustus: My Google Summer of Code '25 at Debian

I’ve participated in this year’s Google Summer of Code (GSoC) program and have been working on the small (90h) “autopkgtests for the rsync package” project at Debian.

Writing my proposal

Before you can start writing a proposal, you need to select an organization you want to work with. Since many organizations participate in GSoC, I’ve used the following criteria to narrow things down for me:

  • Programming language familiarity: For me only Python (preferably) as well as shell and Go projects would have made sense. While learning another programming language is cool, I wouldn’t be as effective and helpful to the project as someone who is proficient in the language already.

  • Standing of the organization: Some of the organizations participating in GSoC are well-known for the outstanding quality of the software they produce. Debian is one of them, but so is e.g. the Django Foundation or PostgreSQL. And my thinking was that the higher the quality of the organization, the more there is to learn for me as a GSoC student.

  • Mentor interactions: Apart from the advantage you get from mentor feedback when writing your proposal (more on that further below), it is also helpful to gauge how responsive/helpful your potential mentor is during the application phase. This is important since you will be working together for a period of at least 2 months; if the mentor-student communication doesn’t work, the GSoC project is going to be difficult.

  • Free and Open-Source Software (FOSS) communication platforms: I generally believe that FOSS projects should be built on FOSS infrastructure. I personally won’t run proprietary software when I want to contribute to FOSS in my spare time.

  • Be a user of the project: As Eric S. Raymond has pointed out in his seminal “The Cathedral and the Bazaar” 25 years ago

    Every good work of software starts by scratching a developer’s personal itch.

Once I had some organizations in mind whose projects I’d be interested in working on, I started writing proposals for them. Turns out, I started writing my proposals way too late: In the end I only managed to hand in a single one … which is risky. Competition for the GSoC projects is fierce and the more quality (!) proposals you send out, the better your chances are at getting one. However, don’t write proposals for the sake of it: Reviewers get way too many AI slop proposals already and you will not do yourself a favor with a low-quality proposal. Take the time to read the instructions/ideas/problem descriptions the project mentors have provided and follow their guidelines. Don’t hesitate to reach out to project mentors: In my case, I asked Samuel Henrique a few clarification questions, and the ensuing (email) discussion helped me greatly in improving my proposal. Once I had finalized my proposal draft, I sent it to Samuel for a review, which again led to some improvements to the final proposal that I uploaded to the GSoC program webpage.

Community bonding period

Once you get the information that you’ve been accepted into the GSoC program (don’t take it personally if you don’t make it; this was my second attempt after not making the cut in 2024), get in touch with your prospective mentor ASAP. Agree upon a communication channel and some response times. Put yourself in the loop for project news and discussions, whatever that means in the context of your organization: In Debian’s case this boiled down to subscribing to a bunch of mailing lists and IRC channels. Also make sure to set up a functioning development environment if you haven’t done so for writing the proposal already.

Payoneer setup

By far the most annoying part of GSoC for me. But since you don’t have a choice if you want to get the stipend, you will need to sign up for an account at Payoneer.

In this iteration of GSoC all participants got a personalized link to open a Payoneer account. When I tried to open an account by following this link, I got an email after the registration and email verification that my account was being blocked because Payoneer deemed the email address I gave a temporary one. Well, the email in question is most certainly anything but temporary, so I tried to get in touch with Payoneer support - and ended up in an LLM-infused kafkaesque support hell. Emails are answered by an LLM, which for me meant utterly off-topic replies and no help whatsoever. The Payoneer website offers a real-time chat, but it is yet another instance of a bullshit-spewing LLM bot. When I at last tried to call them (the support lines are not listed on the Payoneer website but were provided by the GSoC program), I kid you not, I was told that their platform was currently suffering from technical problems and was hung up on. Only thanks to the swift and helpful support of the GSoC administrators (who get priority support from Payoneer) was I able to set up a Payoneer account in the end.

Apart from showing no respect to customers, Payoneer is also ripping them off big time with fees (unless you get paid in USD). They charge you 2% for currency conversions to EUR on top of the FX spread they take. What worked for me to avoid all of those fees, was to open a USD account at Wise and have Payoneer transfer my GSoC stipend in USD to that account. Then I exchanged the USD to my local currency at Wise for significantly less than Payoneer would have charged me. Also make sure to close your Payoneer account after the end of GSoC to avoid their annual fee.

Project work

With all this prelude out of the way, I can finally get to the actual work I’ve been doing over the course of my GSoC project.

Background

The upstream rsync project generally sees little development. Nonetheless, they released version 3.4.0 including some CVE fixes earlier this year. Unfortunately, their changes broke the -H flag. Now, Debian package maintainers need to apply those security fixes to the package versions in the Debian repositories; and those are typically a bit older. Which usually means that the patches cannot be applied as is but will need some amendments by the Debian maintainers. For these cases it is helpful to have autopkgtests defined, which check the package’s functionality in an automated way upon every build.

The question then is, why should the tests not be written upstream such that regressions are caught in the development rather than the distribution process? There’s a lot to say on this question and it probably depends a lot on the package at hand, but for rsync the main benefits are twofold:

  1. The upstream project mocks the ssh connection over which rsync is most typically used. Mocking is better than nothing but not the real thing. In addition to being a more realistic test scenario for the typical rsync use case, involving an ssh server in the test automatically extends the overall resilience of Debian packages, as new versions of the openssh-server package in Debian now benefit from the test cases in the rsync reverse dependency.
  2. The upstream rsync test framework is somewhat idiosyncratic and difficult to port to reimplementations of rsync. Given that the original rsync upstream sees little development, an extensive test suite further downstream can serve as a threshold for drop-in replacements for rsync.

Goal(s)

At the start of the project, the Debian rsync package was just running (a part of) the upstream tests as autopkgtests. The relevant snippet from the build log for the rsync_3.4.1+ds1-3 package reads:

114s ------------------------------------------------------------
114s ----- overall results:
114s 36 passed
114s 7 skipped

Samuel and I agreed that it would be a good first milestone to make the skipped tests run. Afterwards, I should write some rsync test cases for “local” calls, i.e. without an ssh connection, effectively using rsync as a more powerful cp. And once that was done, I should extend the tests such that they run over an active ssh connection.

With these milestones, I went to work.

Upstream tests

Running the seven skipped upstream tests turned out to be fairly straightforward:

  • Two upstream tests concern access control lists and extended filesystem attributes. For these tests to run they rely on functionality provided by the acl and xattr Debian packages. Adding those to the Build-Depends list in the debian/control file of the rsync Debian package repo made them run.
  • Four upstream tests required root privileges to run. The autopkgtest tool knows the needs-root restriction for that reason. However, Samuel and I agreed that the tests should not exclusively run with root privileges. So, instead of just adding the restriction to the existing autopkgtest test, we created a new one which has the needs-root restriction and runs the upstream-tests-as-root script - which is nothing else than a symlink to the existing upstream-tests script (sketched below).
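
A minimal sketch of what that amounts to in the packaging tree (paths as described above; the debian/tests/control stanza is paraphrased, not copied from the actual package):

$ ln -s upstream-tests debian/tests/upstream-tests-as-root
 # plus a new stanza in debian/tests/control for this test,
 # declaring "Restrictions: needs-root" so autopkgtest runs it as root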

The commits to implement these changes can be found in this merge request.

The careful reader will have noticed that I only made 2 + 4 = 6 upstream test cases run out of 7: The leftover upstream test checks the functionality of the --crtimes rsync option. In the context of Debian, the problem is that the Linux kernel doesn’t have a syscall to set the creation time of a file. As long as that is the case, this test will always be skipped for the Debian package.

Local tests

When it came to writing Debian-specific test cases I started from a completely clean slate. Which is a blessing and a curse at the same time: you have full flexibility but also full responsibility.

There were a few things to consider at this point in time:

  • Which language to write the tests in?

    The programming language I am most proficient in is Python. But testing a CLI tool in Python would have been weird: it would have meant that I’d have to make repeated subprocess calls to run rsync and then read from the filesystem to get the file statistics I want to check.

    Samuel suggested I stick with shell scripts and make use of diffoscope - one of the main tools used and maintained by the Reproducible Builds project - to check whether the file contents and file metadata are as expected after rsync calls. Since I did not have good reasons to use bash, I’ve decided to write the scripts to be POSIX compliant.

  • How to avoid boilerplate? If one makes use of a testing framework, which one?

    Writing the tests would involve quite a bit of boilerplate, mostly related to giving informative output on and during the test run, preparing the file structure we want to run rsync on, and cleaning the files up after the test has run. It would be very repetitive and in violation of DRY to have the code for this appear in every test. Good testing frameworks should provide convenience functions for these tasks. shunit2 comes with those functions, is packaged for Debian, and given that it is already being used in the curl project, I decided to go with it.

  • Do we use the same directory structure and files for every test or should every test have an individual setup?

    The tradeoff in this question being test isolation vs. idiosyncratic code. If every test has its own setup, it takes a) more work to write the test and b) more work to understand the differences between tests. However, one can be sure that changes to the setup in one test will have no side effects on other tests. In my opinion, this guarantee was worth the additional effort in writing/reading the tests.

Having made these decisions, I simply started writing tests… and ran into issues very quickly.
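
For illustration, a stripped-down test in the spirit of those decisions might look like this; the file layout, names and assertion are my own sketch, not the actual Debian test code:

#!/bin/sh
# hypothetical shunit2 test sketch, POSIX shell

setUp() {
    SRC=$(mktemp -d)
    DST=$(mktemp -d)
    echo hello > "$SRC/file"
}

tearDown() {
    rm -rf "$SRC" "$DST"
}

test_times_preserved() {
    rsync --times "$SRC/file" "$DST/file"
    assertEquals "mtime not preserved" \
        "$(stat -c %Y "$SRC/file")" "$(stat -c %Y "$DST/file")"
}

# shunit2 picks up the test_* functions when sourced last
. shunit2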

rsync and subsecond mtime diffs

When testing the rsync --times option, I observed a weird phenomenon: If the source and destination file have modification times which differ only in the nanoseconds, an rsync --times call will not synchronize the modification times. More details about this behavior and examples can be found in the upstream issue I raised. In the Debian tests we had to occasionally work around this by setting the timestamps explicitly with touch -d.
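
Reconstructing the phenomenon from that description, the effect and the workaround look roughly like this (my own sketch with GNU touch/stat, not the actual test code):

$ mkdir -p src dst
 # two empty files whose mtimes differ only in the subsecond part
$ touch -d '2025-07-01 12:00:00.100000000' src/file
$ touch -d '2025-07-01 12:00:00.900000000' dst/file
$ rsync --times src/file dst/file
$ stat -c %y src/file dst/file   # the nanoseconds may still differ
 # workaround used in the tests: pin the timestamps to a whole second
$ touch -d '2025-07-01 12:00:00' src/file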

diffoscope regression

In one test case, I was expecting a difference in the modification times but diffoscope would not report a diff. After a good amount of time spent on debugging the problem (my default, and usually correct, assumption is that something about my code is seriously broken if I run into issues like that), I was able to show that diffoscope only displayed this behavior in the version in the unstable suite, not on Debian stable (which I am running on my development machine).

Since everything pointed to a regression in the diffoscope project and with diffoscope being written in Python, a language I am familiar with, I wanted to spend some time investigating (and hopefully fixing) the problem.

Running git bisect on the diffoscope repo helped me in identifying the commit which introduced the regression: The commit contained an optimization via an early return for bit-by-bit identical files. Unfortunately, the early return also caused an explicitly requested metadata comparison (which could be different between the files) to be skipped.
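
For reference, the bisect itself follows the standard pattern (the commit identifiers below are placeholders, not the actual ones from the diffoscope repository):

$ git bisect start
$ git bisect bad                    # current HEAD shows the regression
$ git bisect good <known-good-tag>  # e.g. the version from Debian stable
 # git now checks out commits halfway; test each one and mark it
$ git bisect good   # or: git bisect bad
$ git bisect reset  # once git has printed the first bad commit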

With a nicely diagnosed issue like that, I was able to go to a local hackerspace event, where people work on FOSS together for an evening every month. In a group, we were able to first, write a test which showcases the broken behavior in the latest diffoscope version, and second, make a fix to the code such that the same test passes going forward. All details can be found in this merge request.

shunit2 failures

At some point I had a few autopkgtests set up and passing, but adding a new one would throw me totally inexplicable errors. After trying to isolate the problem as much as possible, it turned out that shunit2 doesn’t play well with the -e shell option. The project mentions this in the release notes for the 2.1.8 version1, but in my opinion a constraint this severe should be featured much more prominently, e.g. in the README.

Tests over an ssh connection

The centrepiece of this project; everything else has in a way only been preparation for this.

Obviously, the goal was to reuse the previously written local tests in some way. Not only because lazy me would have less work to do this way, but also because of a reduced long-term maintenance burden of one rather than two test sets.

As it turns out, it is actually possible to accomplish that: The remote-tests script doesn’t do much apart from starting an ssh server on localhost and running the local-tests script with the REMOTE environment variable set.

The REMOTE environment variable changes the behavior of the local-tests script in such a way that it prepends "$REMOTE": to the destination of the rsync invocations. And given that we set REMOTE=rsync@localhost in the remote-tests script, local-tests copies the files to the exact same locations as before, just over ssh.
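
In shell terms the mechanism boils down to something like this (variable names taken from the description above; SRC_DIR and DST_DIR are placeholders, and the real scripts differ in detail):

 # in local-tests: prefix the destination with "$REMOTE:" when it is set
rsync --times "$SRC_DIR/file" "${REMOTE:+$REMOTE:}$DST_DIR/file"

 # in remote-tests: start an ssh server on localhost, then reuse local-tests
REMOTE=rsync@localhost ./local-tests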

The implementational details for this can be found in this merge request.

proposed-updates

Most of my development work on the Debian rsync package took place during the Debian freeze as the release of Debian Trixie is just around the corner. This means that uploading by Debian Developers (DD) and Debian Maintainers (DM) to the unstable suite is discouraged as it makes migrating the packages to testing more difficult for the Debian release team. If DDs/DMs want to have the package version in unstable migrated to testing during the freeze they have to file an unblock request.

Samuel has done this twice (1, 2) for my work for Trixie but has asked me to file the proposed-updates request for current stable (i.e. Debian Bookworm) myself after I’ve backported my tests to bookworm.

Unfinished business

To run the upstream tests which check access control list and extended file system attributes functionality, I’ve added the acl and xattr packages to Build-Depends in debian/control. This, however, will only make the packages available at build time: If Debian users install the rsync package, the acl and xattr packages will not be installed alongside it. For that, the dependencies would have to be added to Depends or Suggests in debian/control. Depends is probably too strong a relation since rsync clearly works well in practice without them, but adding them to Suggests might be worthwhile. A decision on this would involve checking what happens if rsync is called with the relevant options on a host machine which has those packages installed, but where the destination machine lacks them.

Apart from the issue described above, the 15 tests I managed to write are a drop in the ocean in light of the infinitude of rsync options and their combinations. Most glaringly, not all of the options implied by --archive are covered separately (which would help indicate which code path of rsync broke in a regression). To increase the likelihood of catching regressions with the autopkgtests, the test coverage should be extended in the future.

Conclusion

Generally, I am happy with my contributions to Debian over the course of my small GSoC project: I’ve created an extensible, easy to understand, and working autopkgtest setup for the Debian rsync package. There are two things which bother me, however:

  1. In hindsight, I probably shouldn’t have gone with shunit2 as a testing framework. The fact that it behaves erratically with the -e flag is a serious drawback for a shell testing framework: You really don’t want a shell command to fail silently and the test to continue running.
  2. As alluded to in the previous section, I’m not particularly proud of the number of tests I managed to write.

On the other hand, finding and fixing the regression in diffoscope - while derailing me from the GSoC project itself - might have a redeeming quality.

DebConf25

By sheer luck I happened to work on a GSoC project at Debian over a time period during which the annual Debian conference would take place close enough to my place of residence. Samuel pointed out the opportunity to attend DebConf to me during the community bonding period, and since I could make time for the event in my schedule, I signed up.

DebConf was a great experience which - aside from gaining more knowledge about Debian development - allowed me to meet the actual people usually hidden behind email addresses and IRC nicks. I can wholeheartedly recommend attending a DebConf to every interested Debian user!

For those who have missed this year’s iteration of the conference, I can recommend the following recorded talks:

While not featuring as a keynote speaker (understandably so, as the newcomer to the Debian community that I am), I could still contribute a bit to the conference program.

GSoC project presentation

The Debian Outreach team has scheduled a session in which all GSoC and Outreachy students over the past year had the chance to present their work in a lightning talk.

The session has been recorded and is available online, just like my slides and the source for them.

Debian install workshop

Additionally, with so many Debian experts gathering in one place while KDE’s End of 10 campaign is ongoing, I felt it natural to organize a Debian install workshop. In hindsight I can say that I underestimated how much work it would be, especially for me who does not speak a word of French. But although the turnout of people who wanted us to install Linux on their machines was disappointingly low, it was still worth it: Not only because the material in the repo can be helpful to others planning install workshops but also because it was nice to meet a) the person behind the Debian installer images and b) the local Brest/Finistère Linux user group as well as the motivated and helpful people at Infini.

Credits

I want to thank the Open Source team at Google for organizing GSoC: The highly structured program with a one-to-one mentorship is a great avenue to start contributing to well established and at times intimidating FOSS projects. And as much as I disagree with Google’s surveillance capitalist business model, I have to give it to them that the company at least takes its responsibility for FOSS (somewhat) seriously - unlike many other businesses which rely on FOSS and choose to freeride on it.

Big thanks to the Debian community! I’ve experienced nothing but friendliness in my interactions with the community.

And lastly, the biggest thanks to my GSoC mentor Samuel Henrique. He has dealt patiently and competently with all my stupid newbie questions. His support enabled me to make - albeit small - contributions to Debian. It has been a pleasure to work with him during GSoC and I’m looking forward to working together with him in the future.


  1. Obviously, I’ve only read them after experiencing the problem. ↩︎

,

Planet DebianMatthew Garrett: Secure boot certificate rollover is real but probably won't hurt you

LWN wrote an article which opens with the assertion "Linux users who have Secure Boot enabled on their systems knowingly or unknowingly rely on a key from Microsoft that is set to expire in September". This is, depending on interpretation, either misleading or just plain wrong, but also there's not a good source of truth here, so.

First, how does secure boot signing work? Every system that supports UEFI secure boot ships with a set of trusted certificates in a database called "db". Any binary signed with a chain of certificates that chains to a root in db is trusted, unless either the binary (via hash) or an intermediate certificate is added to "dbx", a separate database of things whose trust has been revoked[1]. But, in general, the firmware doesn't care about the intermediate or the number of intermediates or whatever - as long as there's a valid chain back to a certificate that's in db, it's going to be happy.

That's the conceptual version. What about the real world one? Most x86 systems that implement UEFI secure boot have at least two root certificates in db - one called "Microsoft Windows Production PCA 2011", and one called "Microsoft Corporation UEFI CA 2011". The former is the root of a chain used to sign the Windows bootloader, and the latter is the root used to sign, well, everything else.

What is "everything else"? For people in the Linux ecosystem, the most obvious thing is the Shim bootloader that's used to bridge between the Microsoft root of trust and a given Linux distribution's root of trust[2]. But that's not the only third party code executed in the UEFI environment. Graphics cards, network cards, RAID and iSCSI cards and so on all tend to have their own unique initialisation process, and need board-specific drivers. Even if you added support for everything on the market to your system firmware, a system built last year wouldn't know how to drive a graphics card released this year. Cards need to provide their own drivers, and these drivers are stored in flash on the card so they can be updated. But since UEFI doesn't have any sandboxing environment, those drivers could do pretty much anything they wanted to. Someone could compromise the UEFI secure boot chain by just plugging in a card with a malicious driver on it, and have that hotpatch the bootloader and introduce a backdoor into your kernel.

This is avoided by enforcing secure boot for these drivers as well. Every plug-in card that carries its own driver has it signed by Microsoft, and up until now that's been a certificate chain going back to the same "Microsoft Corporation UEFI CA 2011" certificate used in signing Shim. This is important for reasons we'll get to.

The "Microsoft Windows Production PCA 2011" certificate expires in October 2026, and the "Microsoft Corporation UEFI CA 2011" one in June 2026. These dates are not that far in the future! Most of you have probably at some point tried to visit a website and got an error message telling you that the site's certificate had expired and that it's no longer trusted, and so it's natural to assume that the outcome of time's arrow marching past those expiry dates would be that systems will stop booting. Thankfully, that's not what's going to happen.

First up: if you grab a copy of the Shim currently shipped in Fedora and extract the certificates from it, you'll learn it's not directly signed with the "Microsoft Corporation UEFI CA 2011" certificate. Instead, it's signed with a "Microsoft Windows UEFI Driver Publisher" certificate that chains to the "Microsoft Corporation UEFI CA 2011" certificate. That's not unusual, intermediates are commonly used and rotated. But if we look more closely at that certificate, we learn that it was issued in 2023 and expired in 2024. Older versions of Shim were signed with older intermediates. A very large number of Linux systems are already booting certificates that have expired, and yet things keep working. Why?
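
If you want to check this on your own machine, the sbsigntool and mokutil packages can show the signature chain on a binary and the contents of db respectively (the shim path below is an example and varies per distribution):

$ sbverify --list /boot/efi/EFI/fedora/shimx64.efi   # signing certificate chain
$ mokutil --db | grep Subject:                       # certificates trusted in db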

Let's talk about time. In the ways we care about in this discussion, time is a social construct rather than a meaningful reality. There's no way for a computer to observe the state of the universe and know what time it is - it needs to be told. It has no idea whether that time is accurate or an elaborate fiction, and so it can't with any degree of certainty declare that a certificate is valid from an external frame of reference. The failure modes of getting this wrong are also extremely bad! If a system has a GPU that relies on an option ROM, and if you stop trusting the option ROM because either its certificate has genuinely expired or because your clock is wrong, you can't display any graphical output[3] and the user can't fix the clock and, well, crap.

The upshot is that nobody actually enforces these expiry dates - here's the reference code that disables it. In a year's time we'll have gone past the expiration date for "Microsoft Windows UEFI Driver Publisher" and everything will still be working, and a few months later "Microsoft Windows Production PCA 2011" will also expire and systems will keep booting Windows despite being signed with a now-expired certificate. This isn't a Y2K scenario where everything keeps working because people have done a huge amount of work - it's a situation where everything keeps working even if nobody does any work.

So, uh, what's the story here? Why is there any engineering effort going on at all? What's all this talk of new certificates? Why are there sensationalist pieces about how Linux is going to stop working on old computers or new computers or maybe all computers?

Microsoft will shortly start signing things with a new certificate that chains to a new root, and most systems don't trust that new root. System vendors are supplying updates[4] to their systems to add the new root to the set of trusted keys, and Microsoft has supplied a fallback that can be applied to all systems even without vendor support[5]. If something is signed purely with the new certificate then it won't boot on something that only trusts the old certificate (which shouldn't be a realistic scenario due to the above), but if something is signed purely with the old certificate then it won't boot on something that only trusts the new certificate.

How meaningful a risk is this? We don't have an explicit statement from Microsoft as yet as to what's going to happen here, but we expect that there'll be at least a period of time where Microsoft signs binaries with both the old and the new certificate, and in that case those objects should work just fine on both old and new computers. The problem arises if Microsoft stops signing things with the old certificate, at which point new releases will stop booting on systems that don't trust the new key (which, again, shouldn't happen). But even if that does turn out to be a problem, nothing is going to force Linux distributions to stop using existing Shims signed with the old certificate, and having a Shim signed with an old certificate does nothing to stop distributions signing new versions of grub and kernels. In an ideal world we have no reason to ever update Shim[6] and so we just keep on shipping one signed with two certs.

If there's a point in the future where Microsoft only signs with the new key, and if we were to somehow end up in a world where systems only trust the old key and not the new key[7], then those systems wouldn't boot with new graphics cards, wouldn't be able to run new versions of Windows, wouldn't be able to run any Linux distros that ship with a Shim signed only with the new certificate. That would be bad, but we have a mechanism to avoid it. On the other hand, systems that only trust the new certificate and not the old one would refuse to boot older Linux, wouldn't support old graphics cards, and also wouldn't boot old versions of Windows. Nobody wants that, and for the foreseeable future we're going to see new systems continue trusting the old certificate and old systems have updates that add the new certificate, and everything will just continue working exactly as it does now.

Conclusion: Outside some corner cases, the worst case is you might need to boot an old Linux to update your trusted keys to be able to install a new Linux, and no computer currently running Linux will break in any way whatsoever.

[1] (there's also a separate revocation mechanism called SBAT which I wrote about here, but it's not relevant in this scenario)

[2] Microsoft won't sign GPLed code for reasons I think are unreasonable, so having them sign grub was a non-starter, but also the point of Shim was to allow distributions to have something that doesn't change often and be able to sign their own bootloaders and kernels and so on without having to have Microsoft involved, which means grub and the kernel can be updated without having to ask Microsoft to sign anything and updates can be pushed without any additional delays

[3] It's been a long time since graphics cards booted directly into a state that provided any well-defined programming interface. Even back in 90s, cards didn't present VGA-compatible registers until card-specific code had been executed (hence DEC Alphas having an x86 emulator in their firmware to run the driver on the card). No driver? No video output.

[4] There's a UEFI-defined mechanism for updating the keys that doesn't require a full firmware update, and it'll work on all devices that use the same keys rather than being per-device

[5] Using the generic update without a vendor-specific update means it wouldn't be possible to issue further updates for the next key rollover, or any additional revocation updates, but I'm hoping to be retired by then and I hope all these computers will also be retired by then

[6] I said this in 2012 and it turned out to be wrong then so it's probably wrong now sorry, but at least SBAT means we can revoke vulnerable grubs without having to revoke Shim

[7] Which shouldn't happen! There's an update to add the new key that should work on all PCs, but there's always the chance of firmware bugs

comment count unavailable comments

Planet DebianSimon Josefsson: Independently Reproducible Git Bundles

The gnulib project publish a git bundle as a stable archival copy of the gnulib git repository once in a while.

Why? We don’t know exactly what this may be useful for, but I’m promoting this to see if we can establish some good use-cases.

A git bundle may help to establish provenance in case of an attack on the Savannah hosting platform that compromises the gnulib git repository.

Another use is in the Debian gnulib package: that gnulib bundle is git cloned when building some Debian packages, to get to exactly the gnulib commit used by each upstream project – see my talk on gnulib at Debconf24 – and this approach reduces the amount of vendored code that is part of Debian’s source code, which is relevant to mitigate XZ-style attacks.

The first time we published the bundle, I wanted it to be possible to re-create it bit-by-bit identically by others.

At the time I discovered a well-written blog post by Paul Beacher on reproducible git bundles and thought he had solved the problem for me. Essentially it boils down to disabling threading during compression when producing the bundle, and his final example shows that this results in predictable, bit-by-bit identical output:

$ for i in $(seq 1 100); do \
> git -c 'pack.threads=1' bundle create -q /tmp/bundle-$i --all; \
> done
$ md5sum /tmp/bundle-* | cut -f 1 -d ' ' | uniq -c
    100 4898971d4d3b8ddd59022d28c467ffca

So what remains to be said about this? It seems reproducibility goes deeper than that. One desirable property is that someone else should be able to reproduce the same git bundle, not only that a single individual is able to reproduce things on one machine.

It surprised me to see that when I ran the same set of commands on a different machine (started from a fresh git clone), I got a different checksum. The different checksums occurred even when nothing had been committed on the server side between the two runs.

I thought the reason had to do with other sources of unpredictable data, and I explored several ways to work around this but eventually gave up. I settled for the following sequence of commands:

REV=ac9dd0041307b1d3a68d26bf73567aa61222df54 # master branch commit to package
git clone https://git.savannah.gnu.org/git/gnulib.git
cd gnulib
git fsck # attempt to validate input
# inspect that the new tree matches a trusted copy
git checkout -B master $REV # put $REV at master
for b in $(git branch -r | grep origin/stable- | sort --version-sort); do git checkout ${b#origin/}; done
git remote remove origin # drop some unrelated branches
git gc --prune=now # drop any commits after $REV
git -c 'pack.threads=1' bundle create gnulib.bundle --all
V=$(env TZ=UTC0 git show -s --date=format:%Y%m%d --pretty=%cd master)
mv gnulib.bundle gnulib-$V.bundle
build-aux/gnupload --to ftp.gnu.org:gnulib gnulib-$V.bundle

At the time it felt more important to publish something than to reach for perfection, so we did so using the above snippet. Afterwards I reached out to the git community on this and there were good discussion about my challenge.

At the end of that thread you see that I was finally able to reproduce a bit-by-bit identical bundles from two different clones, by using an intermediate git -c pack.threads=1 repack -adF step. I now assume that the unpredictable data I got earlier was introduced during the ‘git clone’ steps, compressing the pack differently each time due to threaded compression. The outcome could also depend on what content the server provided, so if someone ran git gc, git repack on the server side things would change for the user, even if the user forced threading to 1 during cloning — more experiments on what kind of server-side alterations results in client-side differences would be good research.
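
A quick way to convince yourself of this is to build the bundle twice from independent clones and compare checksums; a minimal sketch (no gnulib-specific branch handling, and assuming nothing changes on the server between the two clones):

for i in 1 2; do
  git clone --quiet https://git.savannah.gnu.org/git/gnulib.git clone-$i
  (cd clone-$i &&
   git -c pack.threads=1 repack -adF &&
   git -c pack.threads=1 bundle create ../bundle-$i.bundle --all)
done
sha256sum bundle-1.bundle bundle-2.bundle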

A couple of months passed and it is now time to publish another gnulib bundle – somewhat paired to the bi-yearly stable gnulib branches – so let’s walk through the commands and explain what they do. First clone the repository:

REV=225973a89f50c2b494ad947399425182dd42618c   # master branch commit to package
S1REV=475dd38289d33270d0080085084bf687ad77c74d # stable-202501 branch commit
S2REV=e8cc0791e6bb0814cf4e88395c06d5e06655d8b5 # stable-202507 branch commit
git clone https://git.savannah.gnu.org/git/gnulib.git
cd gnulib
git fsck # attempt to validate input

I believe git fsck will validate that the chain of SHA1 commits is linked together, preventing someone from smuggling in unrelated commits earlier in the history without having to do a SHA1 collision. SHA1 collisions are economically feasible today, so this isn’t much of a guarantee of anything though.

git checkout -B master $REV # put $REV at master
# Add all stable-* branches locally:
for b in $(git branch -r | grep origin/stable- | sort --version-sort); do git checkout ${b#origin/}; done
git checkout -B stable-202501 $S1REV
git checkout -B stable-202507 $S2REV
git remote remove origin # drop some unrelated branches
git gc --prune=now # drop any unrelated commits, not clear this helps

This establish a set of branches pinned to particular commits. The older stable-* branches are no longer updated, so they shouldn’t be moving targets. In case they are modified in the future, the particular commit we used will be found in the official git bundle.

time git -c pack.threads=1 repack -adF

That’s the new magic command to repack and recompress things in a hopefully more predictable way. This leads to a 72MB git pack under .git/objects/pack/ and a 62MB git bundle. The runtime on my laptop is around 5 minutes.

I experimented with -c pack.compression=1 and -c pack.compression=9 but the size was roughly the same; 76MB and 66MB for level 1 and 72MB and 62MB for level 9. Runtime still around 5 minutes.

Git uses zlib by default, which isn’t the most optimal compression around. I tried -c pack.compression=0 and got a 163MB git pack and a 153MB git bundle. The runtime is still around 5 minutes, indicating that compression is not the bottleneck for the git repack command.

That 153MB uncompressed git bundle compresses to 48MB with gzip default settings and 46MB with gzip -9; to 39MB with zst defaults and 34MB with zst -9; and to 28MB using xz defaults with a small 26MB using xz -9.

Still the inconvenience of having to uncompress a 30-40MB archive into the much larger 153MB is probably not worth the savings compared to shipping and using the (still relatively modest) 62MB git bundle.

Now finally prepare the bundle and ship it:

git -c 'pack.threads=1' bundle create gnulib.bundle --all
V=$(env TZ=UTC0 git show -s --date=format:%Y%m%d --pretty=%cd master)
mv gnulib.bundle gnulib-$V.bundle
build-aux/gnupload --to ftp.gnu.org:gnulib gnulib-$V.bundle

Yay! Another gnulib git bundle snapshot is available from https://ftp.gnu.org/gnu/gnulib/.

The essential part of the git repack command is the -F parameter. In the thread -f was suggested, which translates into the git pack-objects --no-reuse-delta parameter:

--no-reuse-delta

When creating a packed archive in a repository that has existing packs, the command reuses existing deltas. This sometimes results in a slightly suboptimal pack. This flag tells the command not to reuse existing deltas but compute them from scratch.

When reading the man page, I thought that using -F, which translates into --no-reuse-object, would be slightly stronger:

--no-reuse-object

This flag tells the command not to reuse existing object data at all, including non deltified object, forcing recompression of everything. This implies --no-reuse-delta. Useful only in the obscure case where wholesale enforcement of a different compression level on the packed data is desired.

On the surface, without --no-reuse-object, some amount of earlier compression could taint the final result. Still, I was able to get bit-by-bit identical bundles by using -f, so possibly reaching for -F is not necessary.

All the commands were done using git 2.51.0 as packaged by Guix. I fear the result may be different with other git versions and/or zlib libraries. I was able to reproduce the same bundle on a Trisquel 12 aramo (derived from Ubuntu 22.04) machine, which uses git 2.34.1. This suggests there is some chances of this being possible to reproduce in 20 years time. Time will tell.

I also fear these commands may be insufficient if something is moving on the server side of the gnulib git repository (even something as simple as a new commit); I tried to make some experiments with this, but let’s aim for incremental progress here. At least I have now been able to reproduce the same bundle on different machines, which wasn’t the case last time.

Happy Reproducible Git Bundle Hacking!

Planet DebianRussell Coker: Links July 2025

Louis Rossman made an informative YouTube video about right to repair and the US military [1]. This is really important as it helps promote free software and open standards.

The ACM has an insightful article about hidden controls [2]. We need EU regulations about hidden controls in safety critical systems like cars.

This Daily WTF article has some interesting security implications for Windows [3].

Earth.com has an interesting article about the “rubber hand illusion” and how it works on Octopus [4]. For a long time I have been opposed to eating Octopus because I think they are too intelligent.

The Washington Post has an insightful article about the future of spies when everything is tracked by technology [5].

Micah Lee wrote an informative guide to using Signal groups for activism [6].

David Brin wrote an insightful blog post about the phases of the ongoing US civil war [7].

Christian Kastner wrote an interesting blog post about using Glibc hardware capabilities to use different builds of a shared library for a range of CPU features [8].

David Brin wrote an insightful and interesting blog post comparing President Carter with the criminals in the Republican party [9].

Worse Than FailureCodeSOD: What a CAD

In my career, several times I've ended up being the pet programmer for a team of engineers and CNC operators, which frequently meant helping them do automation in their CAD tools. At its peak complexity, it resulted in a (mostly unsuccessful) attempt to build a lens/optics simulator in RhinoCAD.

Which brings us to the code Nick L sends us. It sounds like Nick's in a similar position: engineers write VB.Net code to control their CAD tool, and then Nick tries desperately to get them to follow some sort of decent coding practice. The result is code like:

'Looping Through S_Parts that have to be inital created
For Each Item As Object In RootPart.S_PartsToCreate
	If Item.objNamDe IsNot String.Empty Then
		If Item.objNamEn IsNot String.Empty Then
			If Item.artCat IsNot String.Empty Then
				If Item.prodFam IsNot String.Empty Then
					If Item.prodGrp IsNot String.Empty Then
						'Checking if the Mandatory Properties are in the partfamilies and not empty
						If Item.Properties.ContainsKey("From_sDesign") Then
							' I omitted 134 lines of logic that really should be their own function
						Else
							MsgBox("Property From_SDesign is missing or empty.", MsgBoxStyle.DefaultButton2, "Information RS2TC")
							Exit Sub
						End If
					Else
						MsgBox("Property prodGrp is missing or empty.", MsgBoxStyle.DefaultButton2, "Information RS2TC")
						Exit Sub
					End If
				Else
					MsgBox("Property prodFam is missing or empty.", MsgBoxStyle.DefaultButton2, "Information RS2TC")
					Exit Sub
				End If
			Else
				MsgBox("Property artCat is missing or empty.", MsgBoxStyle.DefaultButton2, "Information RS2TC")
				Exit Sub
			End If
		Else
			MsgBox("objNamEn is missing or empty.", MsgBoxStyle.DefaultButton2, "Information RS2TC")
			Exit Sub
		End If

	Else
		MsgBox("objNamDe is missing or empty.", MsgBoxStyle.DefaultButton2, "Information RS2TC")
		Exit Sub
	End If
Next

All of their code is stored in a single file called Custom.vb, and it is not stored in source control. Yes, people overwrite each other's code all the time, and it causes endless problems.

Nick writes:

I really wish we'd stop letting engineers code without supervision. Someone should at least tell them about early returns.


365 TomorrowsHawkett versus Hawkett

Author: Dimitry Partsi Hawkett and his desk arrived on the 17th floor at precisely 9:04 a.m. The desk, a formidable beast of faux-wood laminate, announced its presence with squeaky caster wheels. Hawkett, a man with a perpetually surprised expression, was, in his own mind, a legal force of nature. A legal beagle, as he sometimes […]

The post Hawkett versus Hawkett appeared first on 365tomorrows.

Planet DebianMichael Ablassmeier: libvirt - incremental backups for raw devices

Skimming through the latest libvirt releases, I was surprised to find that recent versions (>= v10.10.0) have added support for the QCOW data-file setting.

Until now, the incremental backup feature using bitmaps was limited to qcow2 based images, as there was no way to store the bitmaps persistently within raw devices. This basically ruled out proper incremental backups for directly attached LUNs, etc.

In the past, there were some discussions how to implement this, mostly by using a separate metadata qcow image, holding the bitmap information persistently.

These approaches have been discussed again lately, and the required features have now been implemented.

In order to be able to use the feature, you need to configure the virtual machines and its disks in a special way:

Let's assume you have a virtual machine that uses a raw device /tmp/datafile.raw

1) Create an qcow image (same size as the raw image):

 # point the data-file to a temporary file, as create will overwrite whatever it finds here
 qemu-img create -f qcow2 /tmp/metadata.qcow2 -o data_file=/tmp/TEMPFILE,data_file_raw=true ..
 rm -f /tmp/TEMPFILE

2) Now use the amend option to point the qcow image to the right raw device using the data-file option:

 qemu-img amend /tmp/metadata.qcow2 -o data_file=/tmp/datafile.raw,data_file_raw=true

3) Reconfigure the virtual machine configuration to look like this:

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none' io='native' discard='unmap'/>
      <source file='/tmp/metadata.qcow2'>
        <dataStore type='file'>
          <format type='raw'/>
          <source file='/tmp/datafile.raw'/>
        </dataStore>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>

Now it's possible to create persistent checkpoints:

 virsh checkpoint-create-as vm6 --name test --diskspec vda,bitmap=test
 Domain checkpoint test created

and the persistent bitmap will be stored within the metadata image:

 qemu-img info  /tmp/tmp.16TRBzeeQn/vm6-sda.qcow2
 [..]
    bitmaps:
        [0]:
            flags:
                [0]: auto
            name: test
            granularity: 65536

Hoooray.

Added support for this in virtnbdbackup v2.33
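
With that in place, an incremental backup of such a domain should then work roughly like this (options as I understand the virtnbdbackup CLI; double-check against its documentation):

 # initial full backup, which also creates the persistent bitmap/checkpoint
 virtnbdbackup -d vm6 -l full -o /backup/vm6
 # later runs only save the blocks dirtied since the last checkpoint
 virtnbdbackup -d vm6 -l inc -o /backup/vm6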

,

Krebs on SecurityScammers Unleash Flood of Slick Online Gaming Sites

Fraudsters are flooding Discord and other social media platforms with ads for hundreds of polished online gaming and wagering websites that lure people with free credits and eventually abscond with any cryptocurrency funds deposited by players. Here’s a closer look at the social engineering tactics and remarkable traits of this sprawling network of more than 1,200 scam sites.

The scam begins with deceptive ads posted on social media that claim the wagering sites are working in partnership with popular social media personalities, such as Mr. Beast, who recently launched a gaming business called Beast Games. The ads invariably state that by using a supplied “promo code,” interested players can claim a $2,500 credit on the advertised gaming website.

An ad posted to a Discord channel for a scam gambling website that the proprietors falsely claim was operating in collaboration with the Internet personality Mr. Beast. Image: Reddit.com.

The gaming sites all require users to create a free account to claim their $2,500 credit, which they can use to play any number of extremely polished video games that ask users to bet on each action. At the scam website gamblerbeast[.]com, for example, visitors can pick from dozens of games like B-Ball Blitz, in which you play a basketball pro who is taking shots from the free throw line against a single opponent, and you bet on your ability to sink each shot.

The financial part of this scam begins when users try to cash out any “winnings.” At that point, the gaming site will reject the request and prompt the user to make a “verification deposit” of cryptocurrency — typically around $100 — before any money can be distributed. Those who deposit cryptocurrency funds are soon asked for additional payments.

However, any “winnings” displayed by these gaming sites are a complete fantasy, and players who deposit cryptocurrency funds will never see that money again. Compounding the problem, victims likely will soon be peppered with come-ons from “recovery experts” who peddle dubious claims on social media networks about being able to retrieve funds lost to such scams.

KrebsOnSecurity first learned about this network of phony betting sites from a Discord user who asked to be identified only by their screen name: “Thereallo” is a 17-year-old developer who operates multiple Discord servers and said they began digging deeper after users started complaining of being inundated with misleading spam messages promoting the sites.

“We were being spammed relentlessly by these scam posts from compromised or purchased [Discord] accounts,” Thereallo said. “I got frustrated with just banning and deleting, so I started to investigate the infrastructure behind the scam messages. This is not a one-off site, it’s a scalable criminal enterprise with a clear playbook, technical fingerprints, and financial infrastructure.”

After comparing the code on the gaming sites promoted via spam messages, Thereallo found they all invoked the same API key for an online chatbot that appears to be in limited use or else is custom-made. Indeed, a scan for that API key at the threat hunting platform Silent Push reveals at least 1,270 recently-registered and active domains whose names all invoke some type of gaming or wagering theme.

The “verification deposit” stage of the scam requires the user to deposit cryptocurrency in order to withdraw their “winnings.”

Thereallo said the operators of this scam empire appear to generate a unique Bitcoin wallet for each gaming domain they deploy.

“This is a decoy wallet,” Thereallo explained. “Once the victim deposits funds, they are never able to withdraw any money. Any attempts to contact the ‘Live Support’ are handled by a combination of AI and human operators who eventually block the user. The chat system is self-hosted, making it difficult to report to third-party service providers.”

Thereallo discovered another feature common to all of these scam gambling sites [hereafter referred to simply as “scambling” sites]: If you register at one of them and then very quickly try to register at a sister property of theirs from the same Internet address and device, the registration request is denied at the second site.

“I registered on one site, then hopped to another to register again,” Thereallo said. Instead, the second site returned an error stating that a new account couldn’t be created for another 10 minutes.

The scam gaming site spinora dot cc shares the same chatbot API as more than 1,200 similar fake gaming sites.

“They’re tracking my VPN IP across their entire network,” Thereallo explained. “My password manager also proved it. It tried to use my dummy email on a site I had never visited, and the site told me the account already existed. So it’s definitely one entity running a single platform with 1,200+ different domain names as front-ends. This explains how their support works, a central pool of agents handling all the sites. It also explains why they’re so strict about not giving out wallet addresses; it’s a network-wide policy.”

In many ways, these scambling sites borrow from the playbook of “pig butchering” schemes, a rampant and far more elaborate crime in which people are gradually lured by flirtatious strangers online into investing in fraudulent cryptocurrency trading platforms.

Pig butchering scams are typically powered by people in Asia who have been kidnapped and threatened with physical harm or worse unless they sit in a cubicle and scam Westerners on the Internet all day. In contrast, these scambling sites tend to steal far less money from individual victims, but their cookie-cutter nature and automated support components may enable their operators to extract payments from a large number of people in far less time, and with considerably less risk and up-front investment.

Silent Push’s Zach Edwards said the proprietors of this scambling empire are spending big money to make the sites look and feel like some fancy new type of casino.

“That’s a very odd type of pig butchering network and not like what we typically see, with much lower investments in the sites and lures,” Edwards said.

Here is a list of all domains that Silent Push found were using the scambling network’s chat API.

Cryptogram First Sentencing in Scheme to Help North Koreans Infiltrate US Companies

An Arizona woman was sentenced to eight-and-a-half years in prison for her role helping North Korean workers infiltrate US companies by pretending to be US workers.

From an article:

According to court documents, Chapman hosted the North Korean IT workers’ computers in her own home between October 2020 and October 2023, creating a so-called “laptop farm” which was used to make it appear as though the devices were located in the United States.

The North Koreans were hired as remote software and application developers with multiple Fortune 500 companies, including an aerospace and defense company, a major television network, a Silicon Valley technology company, and a high-profile company.

As a result of this scheme, they collected over $17 million in illicit revenue paid for their work, which was shared with Chapman, who processed their paychecks through her financial accounts.

“Chapman operated a ‘laptop farm’ where she received and hosted computers from the U.S. companies in her home, so that the companies would believe the workers were in the United States,” the Justice Department said on Thursday.

“Chapman also shipped 49 laptops and other devices supplied by U.S. companies to locations overseas, including multiple shipments to a city in China on the border with North Korea. More than 90 laptops were seized from Chapman’s home following the execution of a search warrant in October 2023.”

Cryptogram Spying on People Through Airportr Luggage Delivery Service

Airportr is a service that allows passengers to have their luggage picked up, checked, and delivered to their destinations. As you might expect, it’s used by wealthy or important people. So if the company’s website is insecure, you’d be able to spy on lots of wealthy or important people. And maybe even steal their luggage.

Researchers at the firm CyberX9 found that simple bugs in Airportr’s website allowed them to access virtually all of those users’ personal information, including travel plans, or even gain administrator privileges that would have allowed a hacker to redirect or steal luggage in transit. Among even the small sample of user data that the researchers reviewed and shared with WIRED they found what appear to be the personal information and travel records of multiple government officials and diplomats from the UK, Switzerland, and the US.

“Anyone would have been able to gain or might have gained absolute super-admin access to all the operations and data of this company,” says Himanshu Pathak, CyberX9’s founder and CEO. “The vulnerabilities resulted in complete confidential private information exposure of all airline customers in all countries who used the service of this company, including full control over all the bookings and baggage. Because once you are the super-admin of their most sensitive systems, you have have [sic] the ability to do anything.”

Cryptogram Cheating on Quantum Computing Benchmarks

Peter Gutmann and Stephan Neuhaus have a new paper—I think it’s new, even though it has a March 2025 date—that makes the argument that we shouldn’t trust any of the quantum factorization benchmarks, because everyone has been cooking the books:

Similarly, quantum factorisation is performed using sleight-of-hand numbers that have been selected to make them very easy to factorise using a physics experiment and, by extension, a VIC-20, an abacus, and a dog. A standard technique is to ensure that the factors differ by only a few bits that can then be found using a simple search-based approach that has nothing to do with factorisation…. Note that such a value would never be encountered in the real world since the RSA key generation process typically requires that |p-q| > 100 or more bits [9]. As one analysis puts it, “Instead of waiting for the hardware to improve by yet further orders of magnitude, researchers began inventing better and better tricks for factoring numbers by exploiting their hidden structure” [10].

A second technique used in quantum factorisation is to use preprocessing on a computer to transform the value being factorised into an entirely different form or even a different problem to solve which is then amenable to being solved via a physics experiment…

Lots more in the paper, which is titled “Replication of Quantum Factorisation Records with an 8-bit Home Computer, an Abacus, and a Dog.” He points out that the largest number that has been factored legitimately by a quantum computer is 35.

I hadn’t known these details, but I’m not surprised. I have long said that the engineering problems between now and a useful, working quantum computer are hard. And by “hard,” we don’t know if it’s “land a person on the surface of the moon” hard, or “land a person on the surface of the sun” hard. They’re both hard, but very different. And we’re going to hit those engineering problems one by one, as we continue to develop the technology. While I don’t think quantum computing is “surface of the sun” hard, I don’t expect them to be factoring RSA moduli anytime soon. And—even there—I expect lots of engineering challenges in making Shor’s Algorithm work on an actual quantum computer with large numbers.

Planet DebianBits from Debian: New Debian Developers and Maintainers (May and June 2025)

The following contributors got their Debian Developer accounts in the last two months:

  • Cordell Bloor (cgmb)
  • Enkelena Haxhija (enkelenah)

The following contributors were added as Debian Maintainers in the last two months:

  • Karsten Schöke
  • Lorenzo Puliti
  • Nick Rosbrook
  • Nicolas Peugnet
  • Yifei Zhan
  • Glenn Strauss
  • Fab Stz
  • Matheus Polkorny
  • Manuel Elias Guerra Figueroa

Congratulations!

Planet DebianSteinar H. Gunderson: Superimposed codes, take three

After I wrote last week that OEIS A286874 would stop at a(12) and that computing (verifying) a(13) would take about 4–5000 CPU years, the changes have finally been approved, and… the sequence includes a(13) = 26. What happened?

Well, first of all, I am indeed not a mathematical genius (the last post even forgot the “not”); I had a stupid conversion error in the estimation, causing a factor 25 or so. But the rest came from actual speedups.

First of all, I improved one of the existing symmetry detectors a bit (the one described last in the previous post was not fully rejecting the possible symmetries when multiple new bits were introduced in one value). But I also made a more universal symmetry detector; if switching the order of certain neighboring bits and re-sorting the sequence made it lexicographically smaller, then we can abort the search. This is pretty expensive and only rejects ~5% of candidates, so it's only worth it at higher levels, but it's much cheaper than checking all n! arbitrary permutations and catches maybe 90% of a full rejection. (Also, if you can reject 5% at multiple levels, those percentages tend to add up. We're down from hundreds of thousands of duplicate solutions, to only a bit over 100, so the amount of speedup available from reducing symmetries is rapidly dwindling.)
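
To make that concrete, here is a minimal sketch of such a neighbor-bit test (hypothetical names, and assuming the candidate sequence is kept as a sorted vector of bitmasks; the real search code surely differs in the details):

#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch of the neighbor-bit symmetry test described above.  `seq` is the
// candidate sequence so far, kept in sorted order; `a` and `a + 1` are the
// two neighboring bit positions whose order we try to swap.  If the
// transformed, re-sorted sequence compares lexicographically smaller than
// the original, an equivalent branch has already been searched and this
// one can be aborted.
bool neighbor_swap_is_smaller(const std::vector<uint64_t>& seq, int a)
{
    std::vector<uint64_t> swapped = seq;
    for (uint64_t& v : swapped) {
        uint64_t bit_lo = (v >> a) & 1;
        uint64_t bit_hi = (v >> (a + 1)) & 1;
        if (bit_lo != bit_hi)
            v ^= (uint64_t{1} << a) | (uint64_t{1} << (a + 1));  // exchange the two bits
    }
    std::sort(swapped.begin(), swapped.end());
    return swapped < seq;  // lexicographically smaller => prune this branch
}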

Also, surprisingly to me, before going on to run the next level, doing a population count to check if there were too few bits to ever be a solution was seemingly a large win (e.g. we have three values so far, but only 21 bits left; we can never generate a sequence larger than 24 even if all the stars align, and can abort immediately). You would think that this counting, which takes very real CPU time even with vectorization, wouldn't be worth it compared to just running through the base layers of the recursion very quickly, but evidently, it is by a large margin. I guess it's a very common case to have many more than 1 bit left but less than 26-n, and it also means you can just stop iterating a bit before you get to the end.
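
Roughly, the check boils down to something like the following sketch (hypothetical names and sizes; the early return is what lets you stop counting before reaching the end):

#include <bit>
#include <cstddef>
#include <cstdint>

// Hypothetical 8192-bit candidate set, stored as 64-bit words.
constexpr std::size_t kWords = 8192 / 64;

// Can this branch still reach a sequence of `target` values?  We have
// `picked` values so far and, at best, one more value per remaining set bit.
bool can_still_reach(const uint64_t (&remaining)[kWords], int picked, int target)
{
    int needed = target - picked;
    int seen = 0;
    for (std::size_t i = 0; i < kWords; ++i) {
        seen += std::popcount(remaining[i]);
        if (seen >= needed)
            return true;  // enough candidates left; stop counting early
    }
    return false;  // too few bits left; abort this branch immediately
}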

But perhaps the most impactful optimization was a microoptimization. Recall that we spent most of our time ANDing 8192-bit vectors (which would be 16384-bit vectors for a(13)) with each other. Some looking at performance metrics suggested that the RAM bandwidth was completely maxed out, with ~80% of theoretical bandwidth in use; only faster RAM or more memory channels would have made a reasonable dent in the performance of this kind of architecture.

But pretty early, most of those bits will be zero. If you've already decided on the first five values in a sequence, you will not have 8187 options left; in most cases, you'll have more like 3–400. And since the bit sets only ever shrink, we can simply compress away all those known zeros. For most of our purposes, it doesn't really matter what each bit signifies (an important exception is the point where we have a valid solution and need to print it out, but it's not hard to store the mapping), as we mostly use the values for looking up pregenerated vectors to AND together. This means that when we start a new sub-job, we can find which future values are possible, and then map those into new numbers 0 through 511 (or whatever). This means we can use 512-bit vectors instead of 8192-bit vectors, with all the obvious advantages: less ALU work, less memory traffic, and better cache locality. (It's interesting that we started by being extremely ALU-bound, then moved to being very RAM-bound, and then ended up in fairly normal territory.)
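
A minimal sketch of that remapping step, again with hypothetical names: collect the original indices of the surviving bits and hand out dense ids 0, 1, 2, and so on, keeping the table around so a finished solution can still be printed in terms of the original values:

#include <bit>
#include <cstddef>
#include <cstdint>
#include <vector>

// Returns a table where dense id i maps back to the original bit index.
// The sub-job then works with table.size()-bit vectors (e.g. 512 bits)
// instead of the full 8192-bit ones.
std::vector<uint32_t> build_dense_mapping(const uint64_t* words, std::size_t num_words)
{
    std::vector<uint32_t> dense_to_original;
    for (std::size_t w = 0; w < num_words; ++w) {
        uint64_t word = words[w];
        while (word != 0) {
            int bit = std::countr_zero(word);  // position of lowest set bit
            dense_to_original.push_back(static_cast<uint32_t>(w * 64 + bit));
            word &= word - 1;                  // clear that bit and continue
        }
    }
    return dense_to_original;
}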

Of course, there are situations where you could have more than 512 valid values. In that case, you can either recompile with larger bit sets (typically a multiple of 128, to get good use of SIMD), or you can split into smaller sub-jobs; find all valid ways of extending the sequence by one element (trivial; we already did that to make the bit sets), and then make one job for each. This splitting is also good for variance; no longer do you have some sub-jobs that finish in milliseconds and some that require days.

There are some downsides too, of course. In particular, we can no longer pregenerate one universal 8192*8192*8192 bit LUT (well, 8192*8191/2*8192); every sub-job needs to make its own set of LUTs before starting. But since this is O(n³) and we just cut n from 8192 to 512, it's not really a blocker (although of course far from zero); and importantly, it cuts our total RAM usage. For n=8192, we already needed a bit over 32 GB (though sharable between all jobs), and each next element in the sequence (a(13), a(14), etc.) is a factor 8 extra, so it starts becoming a real problem fast. But on the flipside, I think this extra precalc makes the algorithm much less amenable to a theoretical GPU implementation (~8 MB private data per instance, as opposed to one large shared static pool of constants and then just 1 kB of state per instance), which would otherwise be nontrivial but probably possible (the problem itself is so parallel). Interestingly enough, it's possible to use bitslicing to speed up this precalc, which is a technique I cannot remember when I last used.

All in all, it took only about 114 CPU-days (or, well, thread-days, as hyperthreading now makes sense again) to calculate a(13), which was eminently possible; and many of the optimizations came late in the process, so a rerun would be faster than that. So, could we get to a(14)? Well, maybe. I'm less convinced that it would be impossible than I was with a(13) earlier. :-) But I started looking at it, and it turns out there are literally trillions (possibly more!) of sub-jobs if you want to split deeply enough to get each down into the 512-bit range. And even at ~8 ms per core per job (ignoring the cost of splitting and just looking at the cost of processing the jobs themselves), it just becomes too unwieldy for me, especially since Postgres isn't really that great at storing billions of rows efficiently. But impossible? Definitely not.

Worse Than FailureCodeSOD: Going on a teDa

Carlos G found some C++ that caused him psychic harm, and wanted to know how it ended up that way. So he combed through the history. Let's retrace the path with him.

Here was the original code:

void parseExpiryDate (const char* expiryDate)
{
    // expiryDate is in "YYMM" format
    int year, month;
    sscanf(expiryDate, "%2d%2d", &year, &month);
	
    //...
}

This code takes a string containing an expiry date, and parses it out. The sscanf function is given a format string describing two two-digit integers, and it stores those values into the year and month variables.

But oops! The expiry date is actually in a MMYY format. How on earth could we possibly fix this? It can't be as simple as just swapping the year and month variables in the sscanf call, can it? (It is.) No, it couldn't be that easy. (It is.) I can't imagine how we would solve this problem. (Just swap them!)

void parseExpiryDate(const char* expiryDate)
{
    // expiryDate is in "YYMM" format but, in some part of the code, it is formatted to "MMYY"
    int year, month;	 
    char correctFormat[5];

    correctFormat[0] = expiryDate[2];
    correctFormat[1] = expiryDate[3];
    correctFormat[2] = expiryDate[0];
    correctFormat[3] = expiryDate[1];
    correctFormat[4] = '\0';
    sscanf(correctFormat, "%2d%2d", &year, &month);

    //...
}

There we go! That was easy! We just go, character by character, and shift the order around and copy it to a new string, so that we format it in YYMM.

The comment here is a wonderful attempt at CYA. By the time this function is called, the input is in MMYY, so that's the relevant piece of information to have in the comment. But the developer really truly believed that YYMM was the original input, and thus shifts blame for the original version of this function to "some part of the code" which is shifting the format around on them, thus justifying… this trainwreck.

Carlos replaced it with:

void parseExpiryDate (const char* expiryDate)
{
    // expiryDate is in "MMYY" format
    int month, year;
    sscanf(expiryDate, "%2d%2d", &month, &year);
	
    //...
}
[Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready. Learn more.

Planet DebianUtkarsh Gupta: FOSS Activites in July 2025

Here’s my 70th monthly but brief update about the activities I’ve done in the F/L/OSS world.

Debian

This was my 79th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas ‘19! \o/

Debian was in freeze throughout, so whilst I didn’t do many uploads, there’s a bunch of other things I did:

  • Attended DebConf25 in Brest, France.
    • Led the bursary BOF and discussions.
    • Participated in other sessions, especially around the FTP masters.
    • I’ve started to look at things with my trainee hat on.
    • Participated in the Debian Security Tracker sprints during DebCamp. More on that below.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu

This was my 54th month of actively contributing to Ubuntu. I joined Canonical to work on Ubuntu full-time back in February 2021.

Whilst I can’t give a full, detailed list of things I did (there’s so much and some of it might not be public…yet!), here’s a quick TL;DR of what I did:

  • Released Questing snapshot 3! \o/
  • EOL’d Oracular. o/
  • Participated in the mid-cycle sprints.
  • Got a recognition award for leading 24.04.2 LTS release and leading the Release Management team.
  • Preparing for the 24.04.3 LTS release early next month.

Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the stretch and jessie release (+2 years after LTS support).

This was my 70th month as a Debian LTS and 57th month as a Debian ELTS paid contributor.
I only worked for 15.00 hours for LTS and 5.00 hours for ELTS and did the following things:

  • [LTS] Released DLA 4263-1 for ruby-graphql.
    • Coordinated with upstream due to lack of clarity on 1.11.4 being affected & not having a clear reproducer.
    • As 1.11.4 was still partially vulnerable and the backport was non-trivial, it was more convenient to bump the upstream version to 1.11.12 instead, fixing:
    • CVE-2025-27407: a remote code execution vulnerability.
    • Salsa repository: https://salsa.debian.org/lts-team/packages/ruby-graphql.
    • Coordinated with the Security team for a p-u fix or a DSA.
  • [E/LTS] Frontdesk duty from 28th July to 04th August.
  • [LTS] Attended the monthly LTS meeting on IRC. Summary here.

Debian Security Tracker sprint 2025

Thanks to the LTS team for also organizing a security tracker sprint during DebCamp25. I attended the sprint and spent 10 hours working on the following tasks:

That’s all. A quick shoutout to Roberto for organizing the sprints remotely and being awake at odd hours. <3


Until next time.
:wq for today.

365 Tomorrows“Last Message”

Author: Rida Tariq *That bell of the night:- The phone bell rang at 2:30Am . “Liza” picked up the phone. There was a name on the screen that had been erased for three years: “Max❤️” Panic, surprise, and a forgotten pain all woke up together. “Hi…?” Silence then a halting, fading voice: “Forgive me, I’ve […]

The post “Last Message” appeared first on 365tomorrows.

Cryptogram Measuring the Attack/Defense Balance

“Who’s winning on the internet, the attackers or the defenders?”

I’m asked this all the time, and I can only ever give a qualitative hand-wavy answer. But Jason Healey and Tarang Jain’s latest Lawfare piece has amassed data.

The essay provides the first framework for metrics about how we are all doing collectively—and not just how an individual network is doing. Healey wrote to me in email:

The work rests on three key insights: (1) defenders need a framework (based in threat, vulnerability, and consequence) to categorize the flood of potentially relevant security metrics; (2) trends are what matter, not specifics; and (3) to start, we should avoid getting bogged down in collecting data and just use what’s already being reported by amazing teams at Verizon, Cyentia, Mandiant, IBM, FBI, and so many others.

The surprising conclusion: there’s a long way to go, but we’re doing better than we think. There are substantial improvements across threat operations, threat ecosystem and organizations, and software vulnerabilities. Unfortunately, we’re still not seeing increases in consequence. And since cost imposition is leading to a survival-of-the-fittest contest, we’re stuck with perhaps fewer but fiercer predators.

And this is just the start. From the report:

Our project is proceeding in three phases—­the initial framework presented here is only phase one. In phase two, the goal is to create a more complete catalog of indicators across threat, vulnerability, and consequence; encourage cybersecurity companies (and others with data) to report defensibility-relevant statistics in time-series, mapped to the catalog; and drive improved analysis and reporting.

This is really good, and important, work.

Cryptogram How the Solid Protocol Restores Digital Agency

The current state of digital identity is a mess. Your personal information is scattered across hundreds of locations: social media companies, IoT companies, government agencies, websites you have accounts on, and data brokers you’ve never heard of. These entities collect, store, and trade your data, often without your knowledge or consent. It’s both redundant and inconsistent. You have hundreds, maybe thousands, of fragmented digital profiles that often contain contradictory or logically impossible information. Each serves its own purpose, yet there is no central override and control to serve you—as the identity owner.

We’re used to the massive security failures resulting from all of this data under the control of so many different entities. Years of privacy breaches have resulted in a multitude of laws—in US states, in the EU, elsewhere—and calls for even more stringent protections. But while these laws attempt to protect data confidentiality, there is nothing to protect data integrity.

In this context, data integrity refers to its accuracy, consistency, and reliability…throughout its lifecycle. It means ensuring that data is not only accurately recorded but also remains logically consistent across systems, is up-to-date, and can be verified as authentic. When data lacks integrity, it can contain contradictions, errors, or outdated information—problems that can have serious real-world consequences.

Without data integrity, someone could classify you as a teenager while simultaneously attributing to you three teenage children: a biological impossibility. What’s worse, you have no visibility into the data profiles assigned to your identity, no mechanism to correct errors, and no authoritative way to update your information across all platforms where it resides.

Integrity breaches don’t get the same attention that confidentiality breaches do, but the picture isn’t pretty. A 2017 write-up in The Atlantic found error rates exceeding 50% in some categories of personal information. A 2019 audit of data brokers found at least 40% of data broker sourced user attributes are “not at all” accurate. In 2022, the Consumer Financial Protection Bureau documented thousands of cases where consumers were denied housing, employment, or financial services based on logically impossible data combinations in their profiles. Similarly, the National Consumer Law Center report called “Digital Denials” showed inaccuracies in tenant screening data that blocked people from housing.

And integrity breaches can have significant effects on our lives. In one 2024 British case, two companies blamed each other for the faulty debt information that caused catastrophic financial consequences for an innocent victim. Breonna Taylor was killed in 2020 during a police raid on her apartment in Louisville, Kentucky, when officers executed a “no-knock” warrant on the wrong house based on bad data. They had faulty intelligence connecting her address to a suspect who actually lived elsewhere.

In some instances, we have rights to view our data, and in others, rights to correct it, but these sorts of solutions have only limited value. When journalist Julia Angwin attempted to correct her information across major data brokers for her book Dragnet Nation, she found that even after submitting corrections through official channels, a significant number of errors reappeared within six months.

In some instances, we have the right to delete our data, but—again—this only has limited value. Some data processing is legally required, and some is necessary for services we truly want and need.

Our focus needs to shift from the binary choice of either concealing our data entirely or surrendering all control over it. Instead, we need solutions that prioritize integrity in ways that balance privacy with the benefits of data sharing.

It’s not as if we haven’t made progress in better ways to manage online identity. Over the years, numerous trustworthy systems have been developed that could solve many of these problems. For example, imagine digital verification that works like a locked mobile phone—it works when you’re the one who can unlock and use it, but not if someone else grabs it from you. Or consider a storage device that holds all your credentials, like your driver’s license, professional certifications, and healthcare information, and lets you selectively share one without giving away everything at once. Imagine being able to share just a single cell in a table or a specific field in a file. These technologies already exist, and they could let you securely prove specific facts about yourself without surrendering control of your whole identity. This isn’t just theoretically better than traditional usernames and passwords; the technologies represent a fundamental shift in how we think about digital trust and verification.

Standards to do all these things emerged during the Web 2.0 era. We mostly haven’t used them because platform companies have been more interested in building barriers around user data and identity. They’ve used control of user identity as a key to market dominance and monetization. They’ve treated data as a corporate asset, and resisted open standards that would democratize data ownership and access. Closed, proprietary systems have better served their purposes.

There is another way. The Solid protocol, invented by Sir Tim Berners-Lee, represents a radical reimagining of how data operates online. Solid stands for “SOcial LInked Data.” At its core, it decouples data from applications by storing personal information in user-controlled “data wallets”: secure, personal data stores that users can host anywhere they choose. Applications can access specific data within these wallets, but users maintain ownership and control.

Solid is more than distributed data storage. This architecture inverts the current data ownership model. Instead of companies owning user data, users maintain a single source of truth for their personal information. It integrates and extends all those established identity standards and technologies mentioned earlier, and forms a comprehensive stack that places personal identity at the architectural center.

This identity-first paradigm means that every digital interaction begins with the authenticated individual who maintains control over their data. Applications become interchangeable views into user-owned data, rather than data silos themselves. This enables unprecedented interoperability, as services can securely access precisely the information they need while respecting user-defined boundaries.

Solid ensures that user intentions are transparently expressed and reliably enforced across the entire ecosystem. Instead of each application implementing its own custom authorization logic and access controls, Solid establishes a standardized declarative approach where permissions are explicitly defined through control lists or policies attached to resources. Users can specify who has access to what data with granular precision, using simple statements like “Alice can read this document” or “Bob can write to this folder.” These permission rules remain consistent, regardless of which application is accessing the data, eliminating the fragmentation and unpredictability of traditional authorization systems.

This architectural shift decouples applications from data infrastructure. Unlike Web 2.0 platforms like Facebook, which require massive back-end systems to store, process, and monetize user data, Solid applications can be lightweight and focused solely on functionality. Developers no longer need to build and maintain extensive data storage systems, surveillance infrastructure, or analytics pipelines. Instead, they can build specialized tools that request access to specific data in users’ wallets, with the heavy lifting of data storage and access control handled by the protocol itself.

Let’s take healthcare as an example. The current system forces patients to spread pieces of their medical history across countless proprietary databases controlled by insurance companies, hospital networks, and electronic health record vendors. Patients frustratingly become a patchwork rather than a person, because they often can’t access their own complete medical history, let alone correct mistakes. Meanwhile, those third-party databases suffer regular breaches. The Solid protocol enables a fundamentally different approach. Patients maintain their own comprehensive medical record, with data cryptographically signed by trusted providers, in their own data wallet. When visiting a new healthcare provider, patients can arrive with their complete, verifiable medical history rather than starting from zero or waiting for bureaucratic record transfers.

When a patient needs to see a specialist, they can grant temporary, specific access to relevant portions of their medical history. For example, a patient referred to a cardiologist could share only cardiac-related records and essential background information. Or, on the flip side, the patient can share new and rich sources of related data to the specialist, like health and nutrition data. The specialist, in turn, can add their findings and treatment recommendations directly to the patient’s wallet, with a cryptographic signature verifying medical credentials. This process eliminates dangerous information gaps while ensuring that patients maintain an appropriate role in who sees what about them and why.

When a patient-doctor relationship ends, the patient retains all records generated during that relationship—unlike today’s system where changing providers often means losing access to one’s historical records. The departing doctor’s signed contributions remain verifiable parts of the medical history, but they no longer have direct access to the patient’s wallet without explicit permission.

For insurance claims, patients can provide temporary, auditable access to specific information needed for processing—no more and no less. Insurance companies receive verified data directly relevant to claims but should not be expected to have uncontrolled hidden comprehensive profiles or retain information longer than safe under privacy regulations. This approach dramatically reduces unauthorized data use, risk of breaches (privacy and integrity), and administrative costs.

Perhaps most transformatively, this architecture enables patients to selectively participate in medical research while maintaining privacy. They could contribute anonymized or personalized data to studies matching their interests or conditions, with granular control over what information is shared and for how long. Researchers could gain access to larger, more diverse datasets while participants would maintain control over their information—creating a proper ethical model for advancing medical knowledge.

The implications extend far beyond healthcare. In financial services, customers could maintain verified transaction histories and creditworthiness credentials independently of credit bureaus. In education, students could collect verified credentials and portfolios that they truly own rather than relying on institutions’ siloed records. In employment, workers could maintain portable professional histories with verified credentials from past employers. In each case, Solid enables individuals to be the masters of their own data while allowing verification and selective sharing.

The economics of Web 2.0 pushed us toward centralized platforms and surveillance capitalism, but there has always been a better way. Solid brings different pieces together into a cohesive whole that enables the identity-first architecture we should have had all along. The protocol doesn’t just solve technical problems; it corrects the fundamental misalignment of incentives that has made the modern web increasingly hostile to both users and developers.

As we look to a future of increased digitization across all sectors of society, the need for this architectural shift becomes even more apparent. Individuals should be able to maintain and present their own verified digital identity and history, rather than being at the mercy of siloed institutional databases. The Solid protocol makes this future technically possible.

This essay was written with Davi Ottenheimer, and originally appeared on The Inrupt Blog.

,

365 TomorrowsProof of Concept

Author: Majoki “Based on the most current cosmological evidence, the known universe is less than 5% ordinary matter, all the crap we can see and touch.” “That’s still a lot of crap.” Grunden grinned. He always grinned. Finnhil waved him off. “That’s nothing. We’re after paydirt, the thing that makes up over two-thirds of reality.” […]

The post Proof of Concept appeared first on 365tomorrows.

Worse Than FailureCodeSOD: IsValidToken

To ensure that several services could only be invoked by trusted parties, someone at Ricardo P's employer had the brilliant idea of requiring a token along with each request. Before servicing a request, they added this check:

private bool IsValidToken(string? token)
{
    if (string.Equals("xxxxxxxx-xxxxxx+xxxxxxx+xxxxxx-xxxxxx-xxxxxx+xxxxx", token)) return true;
    return false;
}

The token is anonymized here, but it's hard-coded into the code, because checking security tokens into source control, and having tokens that never expire has never caused anyone any trouble.

Which, in the company's defense, they did want the token to expire. The problem there is that they wanted to be able to roll out the new token to all of their services over time, which meant the system had to be able to support both the old and new token for a period of time. And you know exactly how they handled that.

private bool IsValidToken(string? token)
{
    if (string.Equals("xxxxxxxx-xxxxxx+xxxxxxx+xxxxxx-xxxxxx-xxxxxx+xxxxx", token)) return true;
    else if (string.Equals("yyyyyyy-yyyyyy+yyyyy+yyyyy-yyyyy-yyyyy+yyyy", token)) return true;
    return false;
}

For a change, I'm more mad about this insecurity than the if(cond) return true pattern, but boy, I hate that pattern.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Worse Than FailureCodeSOD: An Exert Operation

The Standard Template Library for C++ is… interesting. A generic set of data structures and algorithms was a pretty potent idea. In practice, early implementations left a lot to be desired. Because the STL is a core part of C++ at this point, and widely used, it also means that it's slow to change, and each change needs to go through a long approval process.

Which is why the STL didn't have a std::map::contains function until the C++20 standard. There were other options. For example, one could use std::map::count to count how many times a key appears. Or you could use std::map::find to search for a key. One argument against adding a std::map::contains function is that std::map::count basically does the same job and has the same performance.

None of this stopped people from adding their own. Which brings us to Gaetan's submission. Absent a std::map::contains method, someone wrote a whole slew of fieldExists methods, where field is one of many possible keys they might expect in the map.

bool DataManager::thingyExists (string name)
{
    THINGY* l_pTHINGY = (*m_pTHINGY)[name];
    if(l_pTHINGY == NULL)
    {
        m_pTHINGY->erase(name);
        return false;
    }
        else
    {
        return true;
    }
    return false;
}

I've heard of upsert operations: an update and an insert as the same operation. But this is the first exert: an existence check and an insert in the same operation.

"thingy" here is anonymization. The DataManager contained several of these methods, which did the same thing, but checked a different member variable. Other classes, similar to DataManager had their own implementations. In truth, the original developer did a lot of "it's a class, but everything inside of it is stored in a map, that's more flexible!"

In any case, this code starts by using the [] accessor on a member variable m_pTHINGY. This operator returns a reference to what's stored at that key, or if the key doesn't exist inserts a default-constructed instance of whatever the map contains.

What the map contains, in this case, is a pointer to a THINGY, so the default-constructed value is a null pointer, and that's what they check. If the value is null, then we erase the key we just inserted and return false. Otherwise, we return true. Otherotherwise, we return false.

As a fun bonus, if someone intentionally stored a null in the map, this will think the key doesn't exist and as a side effect, remove it.
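
For contrast, a side-effect-free membership test needs nothing more than std::map::find (or count, or contains from C++20 onward), none of which ever insert or erase anything. A sketch, reusing the anonymized names from above:

#include <map>
#include <string>

struct THINGY;  // the anonymized value type from the code above

bool thingyExists(const std::map<std::string, THINGY*>& thingies, const std::string& name)
{
    // find() never inserts, so a deliberately stored null pointer still
    // counts as "exists", and nothing is erased as a side effect.
    return thingies.find(name) != thingies.end();
    // With C++20 and later: return thingies.contains(name);
}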

Gaetan writes:

What bugs me most is the final, useless return.

I'll be honest, what bugs me most is the Hungarian notation on local variables. But I'm long established as a Hungarian notation hater.

This code at least works, which, compared to some bad C++, puts it on a pretty high level of quality. And it even has some upshots, according to Gaetan:

On the bright side: I have obtained easy performance boosts by performing that kind of cleanup lately in that particular codebase.

[Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

365 TomorrowsBurnt Offerings

Author: Julian Miles, Staff Writer “Go left. Left! Between the trees.” “Rule nineteen: do not follow a road.” “Not the gap on the right. The gap on the left. Left!” Tersi rests a hand on my shoulder and cuts into my comms. “Check definition: road. Query application of rule. Go left.” “Revision: indicated route is […]

The post Burnt Offerings appeared first on 365tomorrows.

Cryptogram Microsoft SharePoint Zero-Day

Chinese hackers are exploiting a high-severity vulnerability in Microsoft SharePoint to steal data worldwide:

The vulnerability, tracked as CVE-2025-53770, carries a severity rating of 9.8 out of a possible 10. It gives unauthenticated remote access to SharePoint Servers exposed to the Internet. Starting Friday, researchers began warning of active exploitation of the vulnerability, which affects SharePoint Servers that infrastructure customers run in-house. Microsoft’s cloud-hosted SharePoint Online and Microsoft 365 are not affected.

Here’s Microsoft on patching instructions. Patching isn’t enough, as attackers have used the vulnerability to steal authentication credentials. It’s an absolute mess. CISA has more information. Also these four links. Two Slashdot threads.

This is an unfolding security mess, and quite the hacking coup.

,

365 TomorrowsLatecomers

Author: Alastair Millar It was our usually bad-tempered neighbour Mr Winkelmann who first told us we could get ‘special benefits’ if we registered in person at the Central Bureau in Lapis. Indigo’s government knew we spent a lot on the exoskeletal clothing and bone-strengthening drugs we needed to help us deal with the gravity, and […]

The post Latecomers appeared first on 365tomorrows.

,

365 TomorrowsA Penny for Your Thoughts

Author: Don Nigroni “Thoughts can’t die or fade away,” my little brother, Arthur, told me two months ago. He was an adorable bald baby who grew into a self-taught bald polymath. I replied, “So, what if thoughts do spend eternity in the thought-ether?” “If someone could access them then he could find buried treasure, solve […]

The post A Penny for Your Thoughts appeared first on 365tomorrows.

,

Worse Than FailureError'd: It's Getting Hot in Here

Or cold. It's getting hot and cold. But on average... no. It's absolutely unbelievable.

"There's been a physics breakthrough!" Mate exclaimed. "Looking at meteoblue, I should probably reconsider that hike on Monday." Yes, you should blow it off, but you won't need to.


An anonymous fryfan frets "The yellow arches app (at least in the UK) is a buggy mess, and I'm amazed it works at all when it does. Whilst I've heard of null, it would appear that they have another version of null, called ullnullf! Comments sent to their technical team over the years, including those with good reproduceable bugs, tend to go unanswered, unfortunately."


Llarry A. whipped out his wallet but baffled "I tried to pay in cash, but I wasn't sure how much."


"Github goes gonzo!" groused Gwenn Le Bihan. "Seems like Github's LLM model broke containment and error'd all over the website layout. crawling out of its grouped button." Gross.


Peter G. gripes "The text in the image really says it all." He just needs to rate his experience above 7 in order to enable the submit button.


[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsThe Nectarine

Author: Sarah Goodman One unblemished red apple. I passed it along the conveyor belt. Swoosh. One green pear. Its surface was a little rough, but it was decent. Swoosh. Another apple, but this one had a bruise on its side. A horn blared. A door opened, and I slid the apple down a chute marked […]

The post The Nectarine appeared first on 365tomorrows.

,

Krebs on SecurityPhishers Target Aviation Execs to Scam Customers

KrebsOnSecurity recently heard from a reader whose boss’s email account got phished and was used to trick one of the company’s customers into sending a large payment to scammers. An investigation into the attacker’s infrastructure points to a long-running Nigerian cybercrime ring that is actively targeting established companies in the transportation and aviation industries.

Image: Shutterstock, Mr. Teerapon Tiuekhom.

A reader who works in the transportation industry sent a tip about a recent successful phishing campaign that tricked an executive at the company into entering their credentials at a fake Microsoft 365 login page. From there, the attackers quickly mined the executive’s inbox for past communications about invoices, copying and modifying some of those messages with new invoice demands that were sent to some of the company’s customers and partners.

Speaking on condition of anonymity, the reader said the resulting phishing emails to customers came from a newly registered domain name that was remarkably similar to their employer’s domain, and that at least one of their customers fell for the ruse and paid a phony invoice. They said the attackers had spun up a look-alike domain just a few hours after the executive’s inbox credentials were phished, and that the scam resulted in a customer suffering a six-figure financial loss.

The reader also shared that the email address in the registration records for the imposter domain — roomservice801@gmail.com — is tied to many such phishing domains. Indeed, a search on this email address at DomainTools.com finds it is associated with at least 240 domains registered in 2024 or 2025. Virtually all of them mimic legitimate domains for companies in the aerospace and transportation industries worldwide.

An Internet search for this email address reveals a humorous blog post from 2020 on the Russian forum hackware[.]ru, which found roomservice801@gmail.com was tied to a phishing attack that used the lure of phony invoices to trick the recipient into logging in at a fake Microsoft login page. We’ll come back to this research in a moment.

JUSTY JOHN

DomainTools shows that some of the early domains registered to roomservice801@gmail.com in 2016 include other useful information. For example, the WHOIS records for alhhomaidhicentre[.]biz reference the technical contact of “Justy John” and the email address justyjohn50@yahoo.com.

A search at DomainTools found justyjohn50@yahoo.com has been registering one-off phishing domains since at least 2012. At this point, I was convinced that some security company surely had already published an analysis of this particular threat group, but I didn’t yet have enough information to draw any solid conclusions.

DomainTools says the Justy John email address is tied to more than two dozen domains registered since 2012, but we can find hundreds more phishing domains and related email addresses simply by pivoting on details in the registration records for these Justy John domains. For example, the street address used by the Justy John domain axisupdate[.]net — 7902 Pelleaux Road in Knoxville, TN — also appears in the registration records for accountauthenticate[.]com, acctlogin[.]biz, and loginaccount[.]biz, all of which at one point included the email address rsmith60646@gmail.com.

That Rsmith Gmail address is connected to the 2012 phishing domain alibala[.]biz (one character off of the Chinese e-commerce giant alibaba.com, with a different top-level domain of .biz). A search in DomainTools on the phone number in those domain records — 1.7736491613 — reveals even more phishing domains as well as the Nigerian phone number “2348062918302” and the email address michsmith59@gmail.com.

DomainTools shows michsmith59@gmail.com appears in the registration records for the domain seltrock[.]com, which was used in the phishing attack documented in the 2020 Russian blog post mentioned earlier. At this point, we are just two steps away from identifying the threat actor group.

The same Nigerian phone number shows up in dozens of domain registrations that reference the email address sebastinekelly69@gmail.com, including 26i3[.]net, costamere[.]com, danagruop[.]us, and dividrilling[.]com. A Web search on any of those domains finds they were indexed in an “indicator of compromise” list on GitHub maintained by Palo Alto Networks’ Unit 42 research team.

SILVERTERRIER

According to Unit 42, the domains are the handiwork of a vast cybercrime group based in Nigeria that it dubbed “SilverTerrier” back in 2014. In an October 2021 report, Palo Alto said SilverTerrier excels at so-called “business e-mail compromise” or BEC scams, which target legitimate business email accounts through social engineering or computer intrusion activities. BEC criminals use that access to initiate or redirect the transfer of business funds for personal gain.

Palo Alto says SilverTerrier encompasses hundreds of BEC fraudsters, some of whom have been arrested in various international law enforcement operations by Interpol. In 2022, Interpol and the Nigeria Police Force arrested 11 alleged SilverTerrier members, including a prominent SilverTerrier leader who’d been flaunting his wealth on social media for years. Unfortunately, the lure of easy money, endemic poverty and corruption, and low barriers to entry for cybercrime in Nigeria conspire to provide a constant stream of new recruits.

BEC scams were the 7th most reported crime tracked by the FBI’s Internet Crime Complaint Center (IC3) in 2024, generating more than 21,000 complaints. However, BEC scams were the second most costly form of cybercrime reported to the feds last year, with nearly $2.8 billion in claimed losses. In its 2025 Fraud and Control Survey Report, the Association for Financial Professionals found 63 percent of organizations experienced a BEC last year.

Poking at some of the email addresses that spool out from this research reveals a number of Facebook accounts for people residing in Nigeria or in the United Arab Emirates, many of whom do not appear to have tried to mask their real-life identities. Palo Alto’s Unit 42 researchers reached a similar conclusion, noting that although a small subset of these crooks went to great lengths to conceal their identities, it was usually simple to learn their identities on social media accounts and the major messaging services.

Palo Alto said BEC actors have become far more organized over time, and that while it remains easy to find actors working as a group, the practice of using one phone number, email address or alias to register malicious infrastructure in support of multiple actors has made it far more time consuming (but not impossible) for cybersecurity and law enforcement organizations to sort out which actors committed specific crimes.

“We continue to find that SilverTerrier actors, regardless of geographical location, are often connected through only a few degrees of separation on social media platforms,” the researchers wrote.

FINANCIAL FRAUD KILL CHAIN

Palo Alto has published a useful list of recommendations that organizations can adopt to minimize the incidence and impact of BEC attacks. Many of those tips are prophylactic, such as conducting regular employee security training and reviewing network security policies.

But one recommendation — getting familiar with a process known as the “financial fraud kill chain” or FFKC — bears specific mention because it offers the single best hope for BEC victims who are seeking to claw back payments made to fraudsters, and yet far too many victims don’t know it exists until it is too late.

Image: ic3.gov.

As explained in this FBI primer, the International Financial Fraud Kill Chain is a partnership between federal law enforcement and financial entities whose purpose is to freeze fraudulent funds wired by victims. According to the FBI, viable victim complaints filed with ic3.gov promptly after a fraudulent transfer (generally less than 72 hours) will be automatically triaged by the Financial Crimes Enforcement Network (FinCEN).

The FBI noted in its IC3 annual report (PDF) that the FFKC had a 66 percent success rate in 2024. Viable ic3.gov complaints involve losses of at least $50,000, and include all records from the victim or victim bank, as well as a completed FFKC form (provided by FinCEN) containing victim information, recipient information, bank names, account numbers, location, SWIFT, and any additional information.

Cryptogram Subliminal Learning in AIs

Today’s freaky LLM behavior:

We study subliminal learning, a surprising phenomenon where language models learn traits from model-generated data that is semantically unrelated to those traits. For example, a “student” model learns to prefer owls when trained on sequences of numbers generated by a “teacher” model that prefers owls. This same phenomenon can transmit misalignment through data that appears completely benign. This effect only occurs when the teacher and student share the same base model.

Interesting security implications.

I am more convinced than ever that we need serious research into AI integrity if we are ever going to have trustworthy AI.

365 TomorrowsAchmed’s Razor

Author: R. J. Erbacher She was seated on the closed toilet, legs crossed, just a bath towel wrapped across her breasts, water still dripping from her brunette hair onto her pale bare shoulders. She pulled the straight razor along her skin, her fingers laced between the shank and the tang, thumb on the heel. She […]

The post Achmed’s Razor appeared first on 365tomorrows.

Worse Than FailureCodeSOD: ConVersion Version

Mads introduces today's code sample with this line: "this was before they used git to track changes".

Note, this is not to say that they were using SVN, or Mercurial, or even Visual Source Safe. They were not using anything. How do I know?

/**
  * Converts HTML to PDF using HTMLDOC.
  * 
  * @param printlogEntry
  ** @param inBytes
  *            html.
  * @param outPDF
  *            pdf.
  * @throws IOException
  *             when error.
  * @throws ParseException
*/
public void fromHtmlToPdfOld(PrintlogEntry printlogEntry, byte[] inBytes, final OutputStream outPDF) throws IOException, ParseException
	{...}

/**
 * Converts HTML to PDF using HTMLDOC.
 * 
 * @param printlogEntry
 ** @param inBytes
 *            html.
 * @param outPDF
 *            pdf.
 * @throws IOException
 *             when error.
 * @throws ParseException
 */
public void fromHtmlToPdfNew(PrintlogEntry printlogEntry, byte[] inBytes, final OutputStream outPDF) throws IOException, ParseException
	{...}

Originally, the function was just called fromHtmlToPdf. Instead of updating the implementation, or using it as a wrapper to call the correct implementation, they renamed it to Old, added one named New, then let the compiler tell them where they needed to update the code to use the new implementation.
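For contrast, here is a minimal sketch of the wrapper approach the write-up alludes to: keep the public name stable, delegate to the current implementation, and mark the old one deprecated so callers never have to chase a rename. This is not from Mads's codebase; the class name is hypothetical, and PrintlogEntry is assumed from the snippet above.

import java.io.IOException;
import java.io.OutputStream;
import java.text.ParseException;

// Hypothetical facade: the public method keeps its original name and simply
// delegates, so call sites never need to change when the implementation does.
// PrintlogEntry is the type from the snippet above; HTMLDOC invocations elided.
public class HtmlToPdfConverter {

    /** Converts HTML to PDF using HTMLDOC. Always points at the current implementation. */
    public void fromHtmlToPdf(PrintlogEntry printlogEntry, byte[] inBytes, final OutputStream outPDF)
            throws IOException, ParseException {
        fromHtmlToPdfNew(printlogEntry, inBytes, outPDF);
    }

    /** @deprecated kept only for reference; use {@link #fromHtmlToPdf}. */
    @Deprecated
    public void fromHtmlToPdfOld(PrintlogEntry printlogEntry, byte[] inBytes, final OutputStream outPDF)
            throws IOException, ParseException {
        // ... old HTMLDOC invocation ...
    }

    private void fromHtmlToPdfNew(PrintlogEntry printlogEntry, byte[] inBytes, final OutputStream outPDF)
            throws IOException, ParseException {
        // ... new HTMLDOC invocation ...
    }
}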

Mads adds: "And this is just one example in this code. This far, I have found 5 of these."



LongNowBayo Akomolafe

Bayo Akomolafe

Attend live on Tue, May 5, 02026 at 7:00PM PT at Cowell Theater in Fort Mason Center. Tickets on sale soon.

Bayo Akomolafe (Ph.D.) is a philosopher, writer, activist, professor of psychology, and executive director of The Emergence Network. Rooted with the Yoruba people, Akomolafe is the father to Alethea and Kyah, and the grateful life-partner to ‘EJ’. He is an essayist, poet, and the author of two books, These Wilds Beyond our Fences: Letters to My Daughter on Humanity’s Search for Home (North Atlantic Books) and We Will Tell our Own Story: The Lions of Africa Speak. He is also the host of the online postactivist course ‘We Will Dance with Mountains’.

He currently lectures at Pacifica Graduate Institute, California and the University of Vermont, Burlington, Vermont as adjunct and associate professor, respectively. He sits on the boards of many organizations, including Science and Non-Duality and Local Futures. In July 2022, Dr. Akomolafe was appointed the inaugural Global Senior Fellow of the Othering and Belonging Institute at UC Berkeley. He has also been appointed a Senior Fellow for The New Institute in Hamburg, Germany. Dr. Bayo hopes to inspire what he calls a “diffractive network of sharing” and a “politics of surprise” that sees the crises of our times through a posthumanist lens.

LongNowKatie Paterson

Katie Paterson

Attend live on Tue, Apr 7, 02026 at 7:00PM PT at Cowell Theater in Fort Mason Center. Tickets on sale soon.

Katie Paterson is widely regarded as one of the leading artists of her generation. Collaborating with scientists and researchers across the world, Paterson’s projects consider our place on Earth in the context of geological time and change. Her artworks make use of sophisticated technologies and specialist expertise to stage intimate, poetic and philosophical engagements between people and their natural environment. Combining a Romantic sensibility with a research-based approach, conceptual rigour and coolly minimalist presentation, her work collapses the distance between the viewer and the most distant edges of time and the cosmos.

Katie Paterson has broadcast the sounds of a melting glacier live, mapped all the dead stars, compiled a slide archive of darkness from the depths of the Universe, created a light bulb to simulate the experience of moonlight, and sent a recast meteorite back into space. Eliciting feelings of humility, wonder and melancholy akin to the experience of the Romantic sublime, Paterson’s work is at once understated in gesture and yet monumental in scope. She has exhibited internationally, from London to New York, Berlin to Seoul, and her works have been included in major exhibitions including Turner Contemporary, Hayward Gallery, Tate Britain, Kunsthalle Wien, MCA Sydney, Guggenheim Museum, and The Scottish National Gallery of Modern Art. She was the winner of the Visual Arts category of the South Bank Awards, and is an Honorary Fellow of Edinburgh University.

LongNowMelody Jue

Melody Jue

Attend live on Wed, Mar 18, 02026 at 7:00PM PT at Cowell Theater in Fort Mason Center. Tickets on sale soon.

Melody Jue is Professor of English at the University of California, Santa Barbara. Her research and writings center the ocean humanities, science fiction, media studies, science & technology studies, and the environmental humanities. Professor Jue is the author of Wild Blue Media: Thinking Through Seawater (Duke University Press, 2020), which won the Speculative Fictions and Cultures of Science Book Prize, and the co-editor of Saturation: An Elemental Politics (Duke University Press, 2021) with Rafico Ruiz. Forthcoming books include Coralations (Minnesota Press, 2025) and the edited collection Informatics of Domination (Duke Press, 2025) with Zach Blas and Jennifer Rhee.

Her new work, Holding Sway, examines the media of seaweeds across transpacific contexts. She regularly collaborates with ocean scientists and artists, from fieldwork to collaborative writings and other projects. Many of her writings are informed by scuba diving fieldwork and coastal observations.

LongNowIndy Johar

Indy Johar

Attend live on Tue, Jan 27, 02026 at 7:00PM PT at Cowell Theater in Fort Mason Center. Tickets on sale soon.

Indy Johar is co-founder of Dark Matter Labs and of the RIBA award-winning architecture and urban practice Architecture00. He is also a founding director of Open Systems Lab, and he seeded WikiHouse (open source housing) and Open Desk (an open source furniture company). Indy is a non-executive international director of BloxHub, the Nordic hub for sustainable urbanization. He is on the advisory board for the Future Observatory and is part of the committee for the London Festival of Architecture. He is also a fellow of the London Interdisciplinary School.

Indy held the 2016-17 Graham Willis Visiting Professorship at Sheffield University. He was a Studio Master at the Architectural Association (2019-2020), a UNDP Innovation Facility Advisory Board member (2016-20), and a RIBA Trustee (2017-20). He has taught and lectured at various institutions, including the University of Bath, TU Berlin, University College London, Princeton, Harvard, MIT, and the New School. He is currently a professor at RMIT University.

MEAnnoying Wrongness on TV

One thing that annoys me in TV shows and movies is getting the details wrong. Yes, it's fiction; yes, some things can't be done correctly; and in some situations portraying things correctly goes against the plot. But otherwise I think they should try to make it accurate.

I was just watching The Americans (a generally good show that I recommend) and in Season 4 Episode 9 there's a close-up of a glass of wine which clearly shows that the Tears of Wine effect is missing: the liquid in the glass obviously has the surface tension of water, not of wine. When you run a show about spies you have to expect that the core audience will be the type of detail-oriented people who notice these things. Having actors not actually drink alcohol on set is standard practice; if they have to do 10 takes of someone drinking a glass of wine, that would be a problem if they actually drank real wine. But they could use real wine for the close-up shots, and of course just getting it right the first time is a good option.

Some ridiculous inaccuracies we just have to live with, like knives making a schwing sound when pulled out of scabbards and “silenced” guns usually still being quite loud (so many people are used to them being portrayed wrongly). Organisations like the KGB had guns that were actually silent, but they generally looked obviously different to regular guns and had a much lower effective range.

The gold coins shown on TV are another ridiculous thing. The sound of metal hitting something depends on how hard it is and how dense it is. Surely most people have heard the sounds of dropping steel nuts and ball bearings and the sound of dropping lead sinkers, and know that the sounds of items of similar size and shape differ greatly based on density and hardness. A modern coin made of copper, cupro-nickel (the current “silver” coins), or copper-aluminium (the current “gold” coins) sounds very different to a gold coin when dropped on a bench. For a show like The Witcher it wouldn't be difficult to make actual gold coins of a similar quality to Iron Age coin production; any jeweller could make the blanks, and making stamps hard enough to press gold isn't an engineering challenge (stamping copper coins would be much more difficult). The coins used for the show could be sold to fans afterwards.

Once coins are made they can't just be heaped up. Even if you are a sorcerer you probably couldn't fill a barrel a meter high with gold coins and not have it break from the weight and/or have the coins at the bottom cold-welded together. Gold coins are supposed to contain a precise amount of gold, and if you pile them up too high then cold welding will transfer gold between coins, changing their value. If someone was going to store a significant quantity of gold it would be in gold ingots with separators between layers to prevent cold welding.
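To put a rough number on that weight claim, here is a back-of-the-envelope sketch; the barrel diameter and the coin packing fraction are my own assumptions, not figures from the post:

// Rough estimate of the mass of a barrel filled with gold coins.
// The barrel diameter and packing fraction are illustrative assumptions.
public class GoldBarrel {
    public static void main(String[] args) {
        double goldDensityKgPerM3 = 19_300;  // density of gold in kg/m^3
        double barrelHeightM = 1.0;          // "a meter high", per the post
        double barrelRadiusM = 0.3;          // assumed ~60 cm internal diameter
        double packingFraction = 0.6;        // assumed loose random packing of coins

        double volumeM3 = Math.PI * barrelRadiusM * barrelRadiusM * barrelHeightM;
        double massKg = volumeM3 * packingFraction * goldDensityKgPerM3;

        System.out.printf("Volume: %.2f m^3, gold mass: %.0f kg (~%.1f tonnes)%n",
                volumeM3, massKg, massKg / 1000.0);
    }
}

Under those assumptions the barrel holds somewhere around three tonnes of gold, which makes the breakage concern easy to believe.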

Movies tend not to show coins close up; I presume that's because making convincing coins is considered too difficult, so they just use some random coins from their own country.

Another annoying thing is shows that don't match up the build dates of objects used. It's nice when they get it right, like the movie Titanic featuring an M1911 pistol, which is something that a rich person in 1912 would likely have. The series Carnival Row (which I recommend) has weapons that mostly match our WW1 era; everything that doesn't involve magic seems legit. One of the worst examples of this is the movie Anna (by Luc Besson, which is mostly a recreation of his film Nikita but set in the early 90s and involving the KGB). That film features laptops with color screens and USB ports before USB was invented and when color screens weren't common on laptops; as an aside, military-spec laptops tend to have older designs than consumer-spec ones.

I’ve mostly given up hoping that movies will make “hacking” scenes any more accurate than knives making a “schwing” sound. But it shouldn’t be that hard for them to find computer gear that was manufactured in the right year to use for the film.

Why can’t they hire experts on technology to check everything?

Worse Than FailureRepresentative Line: JSONception

I am on record as not particularly loving JSON as a serialization format. It's fine, and I'm certainly not going to die on any hills over it, but I think that as we stripped down the complexity of XML we threw away too much.

On the flip side, the simplicity means that it's harder to use it wrong. It's absent many footguns.

Well, one might think. But then Hootentoot ran into a problem. You see, an internal partner needed to send them a JSON document which contains a JSON document. Now, one might say, "isn't any JSON object a valid sub-document? Can't you just nest JSON inside of JSON all day? What could go wrong here?"

"value":"[{\"value\":\"1245\",\"begin_datum\":\"2025-05-19\",\"eind_datum\":null},{\"value\":\"1204\",\"begin_datum\":\"2025-05-19\",\"eind_datum\":\"2025-05-19\"}]",

This. This could go wrong. They embedded JSON inside of JSON… as a string.

Hootentoot references the hottest memes of a decade and a half ago to describe this Xzibit:

Yo dawg, i heard you like JSON, so i've put some JSON in your JSON
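For anyone who hasn't had to consume one of these: here is a minimal sketch of what reading such a payload ends up looking like, assuming the Jackson library is available and using a trimmed-down version of the representative line above. The embedded array has to go through the parser twice.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Parse the outer document, then parse the "value" field again, because it is
// a JSON array encoded as a string. Field names come from the line above.
public class NestedJsonDemo {
    public static void main(String[] args) throws Exception {
        String outerDoc = "{\"value\":\"[{\\\"value\\\":\\\"1245\\\",\\\"begin_datum\\\":\\\"2025-05-19\\\",\\\"eind_datum\\\":null}]\"}";

        ObjectMapper mapper = new ObjectMapper();
        JsonNode outer = mapper.readTree(outerDoc);

        // The first parse yields a plain string, not an array...
        String embedded = outer.get("value").asText();

        // ...so the string has to be run through the parser a second time.
        JsonNode inner = mapper.readTree(embedded);
        System.out.println(inner.get(0).get("begin_datum").asText()); // prints 2025-05-19
    }
}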


365 TomorrowsThe Sunken Land of Buss

Author: Majoki In my line of work, I hear it all the time, “Why do we have better maps of the surface of the moon and Mars than our own ocean floors?” To most folks it sounds like a reasonable question, but to a hydrographic surveyor it can be triggering. A few weeks ago when […]

The post The Sunken Land of Buss appeared first on 365tomorrows.