Planet Russell


Charles Stross: August update

One of the things I've found out the hard way over the past year is that slowly going blind has subtle but negative effects on my productivity.

Cataracts are pretty much the commonest cause of blindness. They can be fixed permanently by surgically replacing the lens of the eye—I gather the op takes 15-20 minutes and can be carried out with only local anaesthesia: I'm having my first eye done next Tuesday—but they creep up on you slowly. Even fast-developing cataracts take months.

In my case what I noticed first was the stars going out, then the headlights of oncoming vehicles at night twinkling annoyingly. Cataracts diffuse the light entering your eye, so that starlight (which is pretty dim to begin with) is spread across too wide an area of your retina to register. Similarly, the car headlights had the same blurring but remained bright enough to be annoying.

The next thing I noticed (or didn't) was my reading throughput diminishing. I read a lot and I read fast, eye problems aside: but last spring and summer I noticed I'd dropped from reading about 5 novels a week to fewer than 3. And for some reason, I wasn't as productive at writing. The ideas were still there, but staring at a computer screen was curiously fatiguing, so I found myself demotivated, and unconsciously taking any excuse to do something else.

Then I went for my regular annual ophthalmology check-up and was diagnosed with cataracts in both eyes.

In the short term, I got a new prescription: this focussed things slightly better, but there are limits to what you can do with glass, even very expensive glass. My diagnosis came at the worst time; the eye hospital that handles cataracts for pretty much the whole of south-east Scotland, the Princess Alexandra Eye Pavilion, closed suddenly at the end of last October: a cracked drainpipe had revealed asbestos cement in the building structure and emergency repairs were needed. It's a key hospital, but even so, taking the asbestos out of a five-storey hospital block takes time—it only re-opened at the start of July. Ophthalmological surgery was spread out to other hospitals in the region but everything got a bit logjammed, hence the delays.

I considered paying for private surgery. It's available, at a price: because this is a civilized country where healthcare is free at the point of delivery, I don't have health insurance, and I decided to wait a bit rather than pay £7000 or so to get both eyes done immediately. In the event, going private would have been foolish: the Eye Pavilion is open again, and it's only in the past month—since the beginning of July or thereabouts—that I've noticed my output slowing down significantly again.

Anyway, I'm getting my eyes fixed, but not at the same time: they like to leave a couple of weeks between them. So I might not be updating the blog much between now and the end of September.

Also contributing to the slow updates: I hit "pause" on my long-overdue space opera Ghost Engine on April first, with the final draft at the 80% point (with about 20,000 words left to re-write). The proximate reason for stopping was not my eyesight deteriorating but me being unable to shut up my goddamn muse, who was absolutely insistent that I had to drop everything and write a different novel right now. (That novel, Starter Pack, is an exploration of a throwaway idea from the very first sentence of Ghost Engine: they share a space operatic universe but absolutely no characters, planets, or starships with silly names: they're set thousands of years apart.) Anyway, I have ground to a halt on the new novel as well, but I've got a solid 95,000 words in hand, and only about 20,000 words left to write before my agent can kick the tires and tell me if it's something she can sell.

I am pretty sure you would rather see two new space operas from me than five or six extra blog entries between now and the end of the year, right?

(NB: thematically, Ghost Engine is my spin on a Banksian-scale space opera that's putting the boot in on the embryonic TESCREAL religion and the sort of half-baked AI/mind uploading singularitarianism I explored in Accelerando. Hopefully it has the "mouth feel" of a Culture novel without being in any way imitative. And Starter Pack is three heist capers in a trench-coat trying to escape from a rabid crapsack galactic empire, and a homage to Harry Harrison's The Stainless Steel Rat—with a side-order of exploring the political implications of lossy mind-uploading.)

All my energy is going into writing these two novels despite deteriorating vision right now, so I have mostly been ignoring the news (it's too depressing and distracting) and being a boring shut-in. It will be a huge relief to reset the text zoom in Scrivener back from 220% down to 100% once I have working eyeballs again! At which point I expect to get even less visible for a few frenzied weeks. Last time I was unable to write because of vision loss (caused by Bell's Palsy) back in 2013, I squirted out the first draft of The Annihilation Score in 18 days when I recovered: I'm hoping for a similar productivity rebound in September/October—although they can't be published before 2027 at the earliest (assuming they sell).

Anyway: see you on the other side!

PS: Amazon is now listing The Regicide Report as going on sale on January 27th, 2026: as far as I know that's a firm date.

Obligatory blurb:

An occult assassin, an elderly royal and a living god face off in The Regicide Report, the thrilling final novel in Charles Stross' epic, Hugo Award-winning Laundry Files series.

When the Elder God recently installed as Prime Minister identifies the monarchy as a threat to his growing power, Bob Howard and Mo O'Brien - recently of the supernatural espionage service known as the Laundry Files - are reluctantly pressed into service.

Fighting vampirism, scheming American agents and their own better instincts, Bob and Mo will join their allies for the very last time. God save the Queen― because someone has to.

Planet Debian: Junichi Uekawa: Changing Chrome Remote Desktop desktop size in GCP Windows.

When I connect to GCP Windows hosts with the default configuration I get a 640x480 desktop. Enabling the display device in the device configuration enables resizing from Windows.

Planet Debian: Steinar H. Gunderson: Bruteforcing pwgen passwords

I needed to bruteforce some passwords that I happened to know were generated with the default mode (“pronounceable”) of pwgen, so I spent a fair amount of time writing software to help. It went through a whole lot of iterations and ended up being more efficient than I had ever assumed would be possible (although it's still nowhere near as efficient as it should ideally be). So now I'm sharing it with you. If you have IPv6 and can reach git.sesse.net, that is.

I'm pasting the entire README below. Remember to use it for ethical purposes.

Introduction
============

pwbrute creates all possible pwgen passwords (default tty settings, no -s).
It matches pwgen 2.08. It supports ordering them by most common first.
Note that pwgen before 2.07 also supported a special “non-tty mode”
that was even less secure (no digits, no uppercase) which is not supported here.

To get started, do

  g++ -std=c++20 -O2 -o pwbrute pwbrute.cc -ljemalloc
  ./pwbrute --raw --sort --expand --verbose > passwords.txt

wait for an hour or two and you're left with 276B passwords in order
(about 2.5TB). (You can run without -ljemalloc, but the glibc malloc
makes pwbrute take about 50% more time.)

pwbrute is not a finished, polished product. Do not expect this to be
suitable for inclusion in e.g. a Linux distribution.


A brief exposition of pwgen's security
======================================

pwgen is a program that is fairly widely used in Linux/UNIX systems
to generate “pronounceable” (and thus supposedly easier-to-remember)
passwords. On the surface of it, the default 8-letter passwords with
uppercase letters, lowercase letters and digits would have a password
space of

  62^8 = 218,340,105,584,896 ~= 47.63 bits

This isn't enough to save you from password cracking against fast hashes
(e.g. NTLM), but it's enough for almost everything else.

However, pwgen (without -s) does not, by design, use this entire space.
It builds passwords from a list of 40 “phonemes” (a, ae, ah, ai, b,
c, ch, ...) in sequence, with some rules about which phoneme can follow
which (e.g. the combination f-g is disallowed, since any consonant
phoneme must be followed by a vowel or sometimes a digit), and sometimes
digits. Furthermore, some phonemes may be uppercased (only the first letter,
in the case of two-letter phonemes). In all, these restrictions mean that
the number of producible passwords drops to

  307,131,320,668 ~= 38.16 bits

Furthermore, if a password does not contain at least one uppercase letter
and one digit, it is rejected. This doesn't affect that many passwords,
but it's still down to

  276,612,845,450 ~= 38.00 bits

You might believe that this means that to get to a 50% chance of cracking
a password, you'd need to test about ~138 billion passwords; however, the
effective entropy is much, much worse than that:

First, consider that digits are inserted (at valid points) only with
30% probability, and phonemes are uppercased (at valid points) only
with 20% probability. This means that a password like “Ahdaiy7i” is
_much_ more likely than e.g. “EXuL8OhP” (five uppercase letters),
even though both are possible to generate.

Furthermore, when building up the password from left to right, every
letter is not equally likely -- every _phoneme_ is equally likely.
Since at any given point, (e.g.) “ai” is as likely as “a”, a lot fewer
rolls of the dice are required to get to eight letters if the password
contains many diphthongs (two-letter phonemes). This makes them vastly
overrepresented. E.g., the specific password “aechae0A” has three diphthongs
and a probability of about 1 in 12 million of being generated, while
“Oozaey7Y” has only two diphthongs (but an extra capital letter) and a
probability of about 1 in 9.33 _billion_!

In all, this means that to get to 50% probability of cracking a given
pwgen password (assuming you know that it was indeed generated with
pwgen, without -s), you need to test about 405 million passwords.
Note that pwgen gives out a list of passwords and lets the user choose,
which may make this easier or harder; I've had real-world single-password
cracks that fell after only ~400k attempts (~2% probability if the user
has chosen at random, but they most likely picked one that looked more
beautiful to them somehow).

This is all known; I reported the limited keyspace in 2004 (Debian bug
#276976), and Solar Designer reported the poor entropy in CVE-2013-4441.
(I discovered the entropy issues independently from them a couple of
months later, then discovered that it was already known, and didn't
publish.) However, to the best of my knowledge, pwbrute is the first
public program that will actually generate the most likely passwords
efficiently for you.

Needless to say, I cannot recommend using pwgen's phoneme-based
passwords for anything that needs to stay secure. (I will not make
concrete recommendations beyond that; a lot of literature exists
on the subject.)


Speeding up things
==================

Very few users would want the entire set of passwords, given that the
later ones are incredibly unlikely (e.g., AB0AB0AB has a chance of about
2^-52.155, or 1 in 5 quadrillion). To avoid generating them all, you can use e.g.
-c -40, which will produce only those with more than approx. 2^-40 probability
before final rejection (roughly ~6B passwords).

(NOTE: Since the calculated probability is before final rejection of those
without a digit or uppercase letter, they will not sum to 1, but something
less; approx. 0.386637 for the default 8-letter passwords, or 2^-1.3709.
Take this into account when reading all text below.)

pwbrute is fast but not super-fast; it can generate about 80M passwords/sec
(~700 MB/sec) to stdout, of course depending on your CPUs. The expansion phase
generally takes nearly all the time; if your cracker could somehow accept the
unexpanded patterns (i.e., without --expand) for free, pwbrute would basically
be infinitely fast. (It would be possible to microoptimize the expansion,
perhaps to 1B passwords/sec/core if pulling out all the stops, but at some point,
it starts becoming a problem related to pipe I/O performance, not candidate
generation.)

Thus, if your cracker is very fast (e.g. hashcat cracking NTLM), it's suboptimal
to try to limit yourself to only pwbrute-created passwords. It's much, much
faster to just create a bunch of legal prefixes and then let hashcat try all
of them, even though this will test some “impossible” passwords.
For instance:

  ./pwbrute --first-stage-len 5 --raw > start5.pwd
  ./hashcat.bin -O -m 1000 ntlm.pwd -w 3 -a 6 start5.pwd -1 '?l?u?d' '?1?1?1'

The “combination” mode in hashcat is also not always ideal; consider using
rules instead.

If you need longer passwords than 8 characters, you may want to split the job
into multiple parts. For this, you can combine --first-stage-len with --prefix
to generate passwords in two stages, e.g. first generate all valid 3-letter
prefixes (“bah” is valid, “bbh” is not) and then for each prefix generate
all possible passwords.  This requires much less RAM, can go in parallel,
and is pretty efficient.

For instance, this will create all passwords up to probability 2^-30,
over 16 cores, in a form that doesn't use too much RAM:

  ./pwbrute -f 3 -r -s -e | parallel -j 16 "./pwbrute -p {} -c -30 -s 2>/dev/null | zstd -6 > up-to-30-{}.pwd.zst"

You can then use the included merge.cc utility to merge the sorted files
into a new sorted one (this requires not using pwbrute --raw, since merge
wants the probabilities to merge correctly):

  g++ -O2 -o merge merge.cc -lzstd
  ./merge up-to-30-*.pwd.zst | pv | pzstd -6 > up-to-30.pwd.zst

merge is fairly fast, but not infinitely so. Sorry.

Beware, zstd uses some decompression buffers that can be pretty big per-file
and there are lots of files, so if you put the limit  lower than -30,
consider merging in multiple phases or giving -M to zstd, unless you want to
say hello to the OOM killer half-way into your merge.

As long as you give the --sort option to pwbrute, it is designed to give exactly
the same output in the same order every time (at the expense of a little bit of
speed during the pattern generation phase). This means that you can safely resume
an aborted generation or cracking job using the --skip=NUM flag, without worrying
that you'd lose some candidates.
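
For example, if a run writing passwords.txt was interrupted after the first
billion candidates had been emitted, something along these lines (a sketch;
substitute however many passwords you already have) should continue from
where it left off:

  ./pwbrute --raw --sort --expand --skip=1000000000 >> passwords.txt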

Here are some estimated numbers for various probability cutoffs, and how much
of the probability space they cover (after correction for rejected passwords):

  p >= 2^-25:           78,000 passwords   (  0.00% coverage,   0.63% probability)
  p >= 2^-26:          171,200 passwords   (  0.00% coverage,   1.12% probability)
  p >= 2^-27:        3,427,100 passwords   (  0.00% coverage,   9.35% probability)
  p >= 2^-28:        5,205,200 passwords   (  0.00% coverage,  12.01% probability)
  p >= 2^-29:        8,588,250 passwords   (  0.00% coverage,  14.17% probability)
  p >= 2^-30:       24,576,550 passwords   (  0.01% coverage,  19.23% probability)
  p >= 2^-31:       75,155,930 passwords   (  0.03% coverage,  27.58% probability)
  p >= 2^-32:      284,778,250 passwords   (  0.10% coverage,  43.81% probability)
  p >= 2^-33:      540,418,450 passwords   (  0.20% coverage,  55.14% probability)
  p >= 2^-34:      808,534,920 passwords   (  0.29% coverage,  60.49% probability)
  p >= 2^-35:    1,363,264,200 passwords   (  0.49% coverage,  66.28% probability)
  p >= 2^-36:    2,534,422,340 passwords   (  0.92% coverage,  72.36% probability)
  p >= 2^-37:    5,663,431,890 passwords   (  2.05% coverage,  80.54% probability)
  p >= 2^-38:   11,178,389,760 passwords   (  4.04% coverage,  87.75% probability)
  p >= 2^-39:   16,747,555,070 passwords   (  6.05% coverage,  91.55% probability)
  p >= 2^-40:   25,139,913,440 passwords   (  9.09% coverage,  94.25% probability)
  p >= 2^-41:   34,801,107,110 passwords   ( 12.58% coverage,  95.91% probability)
  p >= 2^-42:   52,374,739,350 passwords   ( 18.93% coverage,  97.38% probability)
  p >= 2^-43:   78,278,619,550 passwords   ( 28.30% coverage,  98.51% probability)
  p >= 2^-44:  111,967,613,850 passwords   ( 40.48% coverage,  99.25% probability)
  p >= 2^-45:  147,452,759,450 passwords   ( 53.31% coverage,  99.64% probability)
  p >= 2^-46:  186,012,691,450 passwords   ( 67.25% coverage,  99.86% probability)
  p >= 2^-47:  215,059,885,450 passwords   ( 77.75% coverage,  99.94% probability)
  p >= 2^-48:  242,726,285,450 passwords   ( 87.75% coverage,  99.98% probability)
  p >= 2^-49:  257,536,845,450 passwords   ( 93.10% coverage,  99.99% probability)
  p >= 2^-50:  268,815,845,450 passwords   ( 97.18% coverage, 100.00% probability)
  p >= 2^-51:  273,562,845,450 passwords   ( 98.90% coverage, 100.00% probability)
  p >= 2^-52:  275,712,845,450 passwords   ( 99.67% coverage, 100.00% probability)
  p >= 2^-53:  276,512,845,450 passwords   ( 99.96% coverage, 100.00% probability)
         all:  276,612,845,450 passwords   (100.00% coverage, 100.00% probability)


License
=======

pwbrute is Copyright (C) 2025 Steinar H. Gunderson.

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

365 Tomorrows: Osmo

Author: Peter Trelay As he approached the hollow, he began to feel sick, and crouched on the ground in the shade of a boulder attempting to breathe. The wave amplitudes in his hybrid unit were cancelling each other out, forcing his system to the point of collapse. His synthetic and organic centres were at war. […]

The post Osmo appeared first on 365tomorrows.

Charles Stross: Books I will not Write: this time, a movie

(This is an old/paused blog entry I planned to release in April while I was at Eastercon, but forgot about. Here it is, late and a bit tired as real world events appear to be out-stripping it ...)

(With my eyesight/cognitive issues I can't watch movies or TV made this century.)

But in light of current events, my Muse is screaming at me to sit down and write my script for an updated re-make of Doctor Strangelove:

POTUS GOLDPANTS, in middling dementia, decides to evade the 25th amendment by barricading himself in the Oval Office and launching stealth bombers at Latveria. Etc.

The USAF has a problem finding Latveria on a map (because Doctor Doom infiltrated the Defense Mapping Agency) so they end up targeting the Duchy of Grand Fenwick by mistake, which is in Transnistria ... which they are also having problems finding on Google Maps, because it has the string "trans" in its name.

While the USAF is trying to bomb Grand Fenwick (in Transnistria), Russian tanks are commencing a special military operation in Moldova ... of which Transnistria is a breakaway autonomous region.

Russia is unaware that Grand Fenwick has the Q-bomb (because they haven't told the UN yet). Meanwhile, the USAF bombers blundering overhead have stealth coatings bought from a President Goldfarts crony that even antiquated Russian radar can spot.

And it's up to one trepidatious officer to stop them ...


Planet Debian: Ravi Dwivedi: Installing Debian With Btrfs and Encryption

In this tutorial, I will cover how I installed Debian with Btrfs and disk encryption, along with creating subvolumes @ for root and @home for /home so that I can use Timeshift to create snapshots. These snapshots are kept on the same disk where Debian is installed, and the use-case is to roll back to a working system in case I mess up something or to recover an accidentally deleted file.

I went through countless tutorials on the Internet, but I didn’t find a single tutorial covering both the disk encryption and the above-mentioned subvolumes (on Debian). Debian doesn’t create the desired subvolumes by default, therefore the process requires some manual steps, which beginners may not be comfortable performing. Beginners can try distros such as Fedora and Linux Mint, as their installation includes Btrfs with the required subvolumes.

Furthermore, it is pertinent to note that I used Debian Trixie’s DVD iso on a real laptop (not a virtual machine) for my installation. Debian Trixie is the codename for the current stable version of Debian. Then I took screenshots in a virtual machine by repeating the process. Moreover, a couple of screenshots are from the installation I did on the real laptop.

Let’s start the tutorial by booting up the Debian installer.

The above screenshot shows the first screen we see on the installer. Since we want to choose Expert Install, we select Advanced Options in the screenshot above.

Let’s select the Expert Install option in the above screenshot. It is because we want to create subvolumes after the installer is done with the partition, and only then proceed to installing the base system. “Non-expert” install modes proceed directly to installing the system right after creating partitions without pausing for us to create the subvolumes.

After selecting the Expert Install option, you will get the screen above. I will skip to partitioning from here and leave the intermediate steps such as choosing language, region, connecting to Wi-Fi, etc. For your reference, I did create the root user.

Let’s jump right to the partitioning step. Select the Partition disks option from the menu as shown above.

Choose Manual.

Select your disk where you would like to install Debian.

Select Yes when asked for creating a new partition.

I chose the msdos option as I am not using UEFI. If you are using UEFI, then you need to choose the gpt option. Also, your steps will (slightly) differ from mine if you are using UEFI. In that case, you can watch this video by the YouTube channel EF Linux in which he creates an EFI partition. As he doesn’t cover disk encryption, you can continue reading this post after following the steps corresponding to EFI.

Select the free space option as shown above.

Choose Create a new partition.

I chose the partition size to be 1 GB.

Choose Primary.

Choose Beginning.

Now, I got to this screen.

I changed mount point to /boot and turned on the bootable flag and then selected “Done setting up the partition.”

Now select free space.

Choose the Create a new partition option.

I made the partition size equal to the remaining space on my disk. I do not intend to create a swap partition, so I do not need more space.

Select Primary.

Select the Use as option to change its value.

Select “physical volume for encryption.”

Select Done setting up the partition.

Now select “Configure encrypted volumes.”

Select Yes.

Select Finish.

Selecting Yes will take a lot of time, as it erases the disk. So if you have hours to spare for this step (in case your SSD is something like 1 TB), I would recommend selecting “Yes.” Otherwise, you could select “No” and compromise on the quality of the encryption.

After this, you will be asked to enter a passphrase for disk encryption and confirm it. Please do so. I forgot to take the screenshot for that step.

Now select that encrypted volume as shown in the screenshot above.

Here we will change a couple of options which will be shown in the next screenshot.

In the Use as menu, select “btrfs journaling file system.”

Now, click on the mount point option.

Change it to “/ - the root file system.”

Select Done setting up the partition.

This is a preview of the partitioning after performing the above-mentioned steps.

If everything is okay, proceed with the Finish partitioning and write changes to disk option.

The installer is reminding us to create a swap partition. I proceeded without it as I planned to add swap after the installation.

If everything looks fine, choose “yes” for writing the changes to disks.

Now we are done with partitioning and we are shown the screen in the screenshot above. If we had not selected the Expert Install option, the installer would have proceeded to install the base system without asking us.

However, we want to create subvolumes before proceeding to install the base system. This is the reason we chose Expert Install.

Now press Ctrl + F2.

You will see the screen as in the above screenshot. It says “Please press Enter to activate this console.” So, let’s press Enter.

After pressing Enter, we see the above screen.

The screenshot above shows the steps I performed in the console. I followed the already mentioned video by EF Linux for this part and adapted it to my situation (he doesn’t encrypt the disk in his tutorial).

First we run df -h to have a look at how our disk is partitioned. In my case, the output was:

# df -h
Filesystem              Size  Used  Avail   Use% Mounted on
tmpfs                   1.6G  344.0K  1.6G    0% /run
devtmpfs                7.7G       0  7.7G   0% /dev
/dev/sdb1               3.7G    3.7G    0   100% /cdrom
/dev/mapper/sda2_crypt  952.9G  5.8G  950.9G  0% /target
/dev/sda1               919.7M  260.0K  855.8M  0% /target/boot

df -h shows us that /dev/mapper/sda2_crypt and /dev/sda1 are mounted on /target and /target/boot respectively.

Let’s unmount them. For that, we run:

# umount /target
# umount /target/boot

Next, let’s mount our root filesystem to /mnt.

# mount /dev/mapper/sda2_crypt /mnt

Let’s go into the /mnt directory.

# cd /mnt

Upon listing the contents of this directory, we get:

/mnt # ls
@rootfs

Debian installer has created a subvolume @rootfs automatically. However, we need the subvolumes to be @ and @home. Therefore, let’s rename the @rootfs subvolume to @.

/mnt # mv @rootfs @

Listing the contents of the directory again, we get:

/mnt # ls
@

We have only one subvolume right now. Therefore, let us go ahead and create another subvolume, @home.

/mnt # btrfs subvolume create @home
Create subvolume './@home'

If we perform ls now, we will see there are two subvolumes:

/mnt # ls
@ @home

Let us mount /dev/mapper/sda2_crypt to /target

/mnt # mount -o noatime,space_cache=v2,compress=zstd,ssd,discard=async,subvol=@ /dev/mapper/sda2_crypt /target/

Now we need to create a directory for /home.

/mnt # mkdir /target/home/

Now we mount the /home directory with subvol=@home option.

/mnt # mount -o noatime,space_cache=v2,compress=zstd,ssd,discard=async,subvol=@home /dev/mapper/sda2_crypt /target/home/

Now mount /dev/sda1 to /target/boot.

/mnt # mount /dev/sda1 /target/boot/

Now we need to add these options to the fstab file, which is located at /target/etc/fstab. Unfortunately, vim is not installed in this console; the only editor available is Nano.

nano /target/etc/fstab

Edit your fstab file to look similar to the one in the screenshot above. I am pasting the fstab file contents below for easy reference.

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# systemd generates mount units based on this file, see systemd.mount(5).
# Please run 'systemctl daemon-reload' after making changes here.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/mapper/sda2_crypt /        btrfs   noatime,compress=zstd,ssd,discard=async,space_cache=v2,subvol=@ 0       0
/dev/mapper/sda2_crypt /home    btrfs   noatime,compress=zstd,ssd,discard=async,space_cache=v2,subvol=@home 0       0
# /boot was on /dev/sda1 during installation
UUID=12842b16-d3b3-44b4-878a-beb1e6362fbc /boot           ext4    defaults        0       2
/dev/sr0        /media/cdrom0   udf,iso9660 user,noauto     0       0

Please double check the fstab file before saving it. In Nano, you can press Ctrl+O followed by pressing Enter to save the file. Then press Ctrl+X to quit Nano. Now, preview the fstab file by running

cat /target/etc/fstab

and verify that the entries are correct, otherwise you will be booted into an unusable and broken system after the installation is complete.

Next, press Ctrl + Alt + F1 to go back to the installer.

Proceed to “Install the base system.”
Screenshot of Debian installer installing the base system.

Screenshot of Debian installer installing the base system.

I chose the default option here - linux-image-amd64.

After this, the installer will ask you a few more questions. For desktop environment, I chose KDE Plasma. You can choose the desktop environment as per your liking. I will not cover the rest of the installation process.

Let’s jump to our freshly installed Debian system. Since I created a root user, I added the user ravi to the suoders file (/etc/sudoers) so that ravi can run commands with sudo. Follow this if you would like to do the same.

Now we set up zram as swap. First, install zram-tools.

sudo apt install zram-tools

Now edit the file /etc/default/zramswap and make sure the following lines are uncommented:

ALGO=lz4
PERCENT=50

Now, run

sudo systemctl restart zramswap

If you run lsblk now, you should see the below-mentioned entry in the output:

zram0          253:0    0   7.8G  0 disk  [SWAP]

This shows us that zram has been activated as swap.
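
As an additional check, you can list the active swap devices directly; /dev/zram0 should appear in the output (the size will depend on your RAM and the PERCENT value above):

swapon --show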

We are done now. Hope the tutorial was helpful. See you in the next post.

Planet Debian: Raju Devidas: Fixing Auto-Rotate screen orientation on PostmarketOS devices running MATE DE

Fixing Auto-Rotate screen orientation on PostmarketOS devices running MATE DE

I have been using my Samsung Galaxy Tab A (2015) with PostmarketOS on and off since last year. It serves as a really good e-book reader with KOReader installed on it.

Have tried phosh and plasma-mobile on it, works nicely but slows the device down heavily (2 GB RAM and old processor) so I use MATE Desktop environment on it.

Lately I have started using this tablet along with my laptop as a second screen for work, and it has been working super nicely for that. The only issue is that I have to manually rotate the screen to landscape every time I reboot the device, since it resets the screen orientation to portrait after a reboot. So I went through the pmOS wiki, and a neat little hack documented there worked very well for me.

First we will test if the auto-rotate sensor works and if we can read values from it. So we install some basic necessary packages

$ sudo apk add xrandr xinput inotify-tools iio-sensor-proxy

Enable the service for iio-sensor-proxy

sudo rc-update add iio-sensor-proxy
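
To start the service right away without waiting for the reboot below, you can also run (assuming the standard OpenRC service name used above):

$ sudo rc-service iio-sensor-proxy start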

Reboot the device.

Now, in the device terminal, start the sensor monitor with monitor-sensor:

user@samsung-gt58 ~> monitor-sensor
    Waiting for iio-sensor-proxy to appear
+++ iio-sensor-proxy appeared
=== Has accelerometer (orientation: normal, tilt: vertical)
=== Has ambient light sensor (value: 5.000000, unit: lux)
=== No proximity sensor
=== No compass
    Light changed: 14.000000 (lux)
    Accelerometer orientation changed: left-up
    Tilt changed: tilted-down
    Light changed: 12.000000 (lux)
    Tilt changed: vertical
    Light changed: 13.000000 (lux)
    Light changed: 11.000000 (lux)
    Light changed: 13.000000 (lux)
    Accelerometer orientation changed: normal
    Light changed: 5.000000 (lux)
    Light changed: 6.000000 (lux)
    Light changed: 5.000000 (lux)
    Accelerometer orientation changed: right-up
    Light changed: 3.000000 (lux)
    Light changed: 4.000000 (lux)
    Light changed: 5.000000 (lux)
    Light changed: 12.000000 (lux)
    Tilt changed: tilted-down
    Light changed: 19.000000 (lux)
    Accelerometer orientation changed: bottom-up
    Tilt changed: vertical
    Light changed: 1.000000 (lux)
    Light changed: 2.000000 (lux)
    Light changed: 4.000000 (lux)
    Accelerometer orientation changed: right-up
    Tilt changed: tilted-down
    Light changed: 11.000000 (lux)
    Accelerometer orientation changed: normal
    Tilt changed: vertical
    Tilt changed: tilted-down
    Light changed: 18.000000 (lux)
    Light changed: 21.000000 (lux)
    Light changed: 22.000000 (lux)
    Light changed: 19.000000 (lux)
    Accelerometer orientation changed: left-up
    Light changed: 17.000000 (lux)
    Tilt changed: vertical
    Light changed: 14.000000 (lux)
    Tilt changed: tilted-down
    Light changed: 16.000000 (lux)
    Light changed: 18.000000 (lux)
    Light changed: 17.000000 (lux)
    Light changed: 18.000000 (lux)
    Light changed: 17.000000 (lux)
    Light changed: 18.000000 (lux)
    Light changed: 17.000000 (lux)
    Light changed: 18.000000 (lux)
    Light changed: 17.000000 (lux)

As you can see we can read the rotation values from the sensor as I am rotating the tablet in different orientations.

Now we just need to use a script which changes the screen orientation using xrandr according to the sensor value.

#!/bin/sh

killall monitor-sensor
monitor-sensor > /dev/shm/sensor.log 2>&1 &

while inotifywait -e modify /dev/shm/sensor.log; do

  ORIENTATION=$(tail /dev/shm/sensor.log | grep 'orientation' | tail -1 | grep -oE '[^ ]+$')

  case "$ORIENTATION" in

    normal)
      xrandr -o normal
      xinput set-prop "Goodix Capacitive TouchScreen" "Coordinate Transformation Matrix" 1 0 0 0 1 0 0 0 1
      ;;
    left-up)
      xrandr -o left
      xinput set-prop "Goodix Capacitive TouchScreen" "Coordinate Transformation Matrix" 0 -1 1 1 0 0 0 0 1
      ;;
    bottom-up)
      xrandr -o inverted
      xinput set-prop "Goodix Capacitive TouchScreen" "Coordinate Transformation Matrix" -1 0 1 0 -1 1 0 0 1
      ;;
    right-up)
      xrandr -o right
      xinput set-prop "Goodix Capacitive TouchScreen" "Coordinate Transformation Matrix" 0 1 0 -1 0 1 0 0 1
      ;;

  esac
done

auto-rotate-screen.sh

You need to replace the name of your touch input device in the script. You can get the name by using xinput --list; make sure to run this on the device's terminal.

user@samsung-gt58 ~> xinput --list
* Virtual core pointer                    	id=2	[master pointer  (3)]
*   * Virtual core XTEST pointer              	id=4	[slave  pointer  (2)]
*   * Zinitix Capacitive TouchScreen          	id=10	[slave  pointer  (2)]
*   * Toad One Plus                           	id=12	[slave  pointer  (2)]
* Virtual core keyboard                   	id=3	[master keyboard (2)]
    * Virtual core XTEST keyboard             	id=5	[slave  keyboard (3)]
    * GPIO Buttons                            	id=6	[slave  keyboard (3)]
    * pm8941_pwrkey                           	id=7	[slave  keyboard (3)]
    * pm8941_resin                            	id=8	[slave  keyboard (3)]
    * Zinitix Capacitive TouchScreen          	id=11	[slave  keyboard (3)]
    * samsung-a2015 Headset Jack              	id=9	[slave  keyboard (3)]

The example script above names a Goodix touchscreen, while this tablet has a Zinitix capacitive touchscreen; the name will be different for your device too.

Once your script is ready with the correct touchscreen name, save it and make it executable: chmod +x auto-rotate-screen.sh

Then test your script in your terminal with ./auto-rotate-screen.sh, and stop it using Ctrl + C.

Now we need to add this script to auto-start. On MATE DE you can go to System > Control Center > Startup Applications, then click on the Custom Add button, browse to the script location, give it a name and then click on the Add button.
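
If you prefer a file over the GUI, the Startup Applications tool just writes a .desktop entry; a sketch of an equivalent file at ~/.config/autostart/auto-rotate-screen.desktop would be (adjust the Exec path to wherever you saved the script):

[Desktop Entry]
Type=Application
Name=Auto-rotate screen
Exec=/home/user/auto-rotate-screen.sh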

Now reboot the tablet/device, login and see the auto rotation working.


  1. Auto-Rotation wiki article on PostmarketOS Wiki https://wiki.postmarketos.org/wiki/Auto-rotation

Charles Stross: Another update

Good news/no news:

The latest endoscopy procedure went smoothly. There are signs of irritation in my fundus (part of the stomach lining) but no obvious ulceration or signs of cancer. Biopsy samples taken, I'm awaiting the results. (They're testing for celiac, as well as cytology.)

I'm also on the priority waiting list for cataract surgery at the main eye hospital, with an option to be called up at short notice if someone ahead of me on the list cancels.

This is good stuff; what's less good is that I'm still feeling a bit crap and have blurry double vision in both eyes. So writing is going very slowly right now. This isn't helped by me having just checked the page proofs for The Regicide Report, which will be on the way to production by the end of the month.

(There's a long lead time with this title because it has to be published simultaneously in the USA and UK, which means allowing time in the pipeline for Orbit in the UK to take the typeset files and reprocess them for their own size of paper and binding, and on the opposite side, for Tor.com to print and distribute physical hardcovers—which, in the USA, means weeks in shipping containers slowly heading for warehouses in other states: it's a big place.)

Both the new space operas in progress are currently at around 80% complete but going very slowly (this is not quite a euphemism for "stalled") because: see eyeballs above. This is also the proximate cause of the slow/infrequent blogging. My ability to read or focus on a screen is really impaired right now: it's not that I can't do it, it's just really tiring, so I'm doing far less of it. On the other hand, I expect that once my eyes are fixed my productivity will get a huge rebound boost. Last time I was unable to write or read for a couple of months (in 2013 or thereabouts: I had Bell's Palsy and my most working eye kept watering because the eyelid didn't work properly) I ended up squirting the first draft of a novel out in eighteen days after it cleared up. (That was The Annihilation Score. You're welcome.)

Final news: I'm not doing many SF convention appearances these days because of COVID (and Trump), but I am able to announce that I'm going to be one of the guests of honour at LunCon '25, the Swedish national SF convention, at the city hall of Lund, very close to Malmö, from October 10th to 12th. (And hopefully I'll be going to a couple of other conventions in the following months!)

Planet Debian: Noah Meyerhans: Determining Network Online Status of Dualstack Cloud VMs

When a Debian cloud VM boots, it typically runs cloud-init at various points in the boot process. Each invocation can perform certain operations based on the host’s static configuration passed by the user, typically either through a well known link-local network service or an attached iso9660 drive image. Some of the cloud-init steps execute before the network comes up, and others at a couple of different points after the network is up.

I recently encountered an unexpected issue when configuring a dualstack (uses both IPv6 and legacy IPv4 networking) VM to use a custom apt server accessible only via IPv6. VM provisioning failed because it was unable to access the server in question, yet when I logged in to investigate, it was able to access the server without any problem. The boot had apparently gone smoothly right up until cloud-init’s Package Update Upgrade Install module called apt-get update, which failed and broke subsequent provisioning steps. The errors reported by apt-get indicated that there was no route to the service in question, which more accurately probably meant that there was not yet a route to the service. But there was shortly after, when I investigated.

This was surprising because the apt-get invocations occur in a cloud-init sequence that’s explicitly ordered after the network is configured according to systemd-networkd-wait-online. Investigation eventually led to similar issues encountered in other environments reported in Debian bug #1111791, “systemd: network-online.target reached before IPv6 address is ready”. The issue described in that bug is identical to mine, but the bug is tagged wontfix. The behavior is considered correct.

Why the default behavior is the correct one

While it’s a bit counterintuitive, the systemd-networkd behavior is correct, and it’s also not something we’d want to override in the cloud images. Without explicit configuration, systemd can’t accurately infer the intended network configuration of a given system. If a system is IPv6-only, systemd-networkd-wait-online will introduce unexpected delays in the boot process if it waits for IPv4, and vice-versa. If it assumes dualstack, things are even worse because it would block for a long time (approximately two minutes) in any single stack network before failing, leaving the host in degraded state. So the most reasonable default behavior is to block until any protocol is configured.

For these same reasons, we can’t change the systemd-networkd-wait-online configuration in our cloud images. All of the cloud environments we support offer both single stack and dual stack networking, so we preserve systemd’s default behavior.

What’s causing problems here is that IPv6 takes significantly longer to configure due to its more complex router solicitation + router advertisement + DHCPv6 setup process. So in this particular case, where I’ve got a dualstack VM that needs to access a v6-only apt server during the provisioning process, I need to find some mechanism to override systemd’s default behavior and wait for IPv6 connectivity specifically.

What won’t work

Cloud-init offers the ability to write out arbitrary files during provisioning. So writing a drop-in for systemd-networkd-wait-online.service is trivial. Unfortunately, this doesn't give us everything we actually need. We still need to invoke systemctl daemon-reload to get systemd to actually apply the changes after we've written them, and of course we need to do that before the service actually runs. Cloud-init provides a bootcmd module that lets us run shell commands “very early in the boot process”, but it runs too early: it runs before we've written out our configuration files. Similarly, it provides a runcmd module, but scripts there run towards the end of the boot process, far too late to be useful.

Instead of using the bootcmd facility simply to reload systemd's config, it seemed possible that we could both write the config and trigger the reload, similar to the following:

bootcmd:
  - mkdir -p /etc/systemd/system/systemd-networkd-wait-online.service.d
  - echo "[Service]" > /etc/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf
  - echo "ExecStart=" >> /etc/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf
  - echo "ExecStart=/usr/lib/systemd/systemd-networkd-wait-online --operational-state=routable --any --ipv6" >> /etc/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf
  - systemctl daemon-reload

But even that runs too late: as we can see in the logs, systemd-networkd-wait-online.service has completed before bootcmd is executed:

root@sid-tmp2:~# journalctl --no-pager -l -u systemd-networkd-wait-online.service
Aug 29 17:02:12 sid-tmp2 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 29 17:02:13 sid-tmp2 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
root@sid-tmp2:~# grep -F 'config-bootcmd ran' /var/log/cloud-init.log
2025-08-29 17:02:14,766 - handlers.py[DEBUG]: finish: init-network/config-bootcmd: SUCCESS: config-bootcmd ran successfully and took 0.467 seconds

At this point, it’s looking like there are few options left!

What eventually worked

I ended up identifying two solutions to the issue, both of which involve getting some other component of the provisioning process to run systemd-networkd-wait-online.

Solution 1

The first involves getting apt-get itself to wait for IPv6 configuration. The apt.conf configuration interface allows the definition of an APT::Update::Pre-Invoke hook that’s executed just before apt’s update operation. By writing the following to a file in /etc/apt/apt.conf.d/, we’re able to ensure that we have IPv6 connectivity before apt-get tries accessing the network. This cloud-config snippet accomplishes that:

write_files:
  - path: /etc/apt/apt.conf.d/99-wait-for-ipv6
    content: |
      APT::Update::Pre-Invoke { "/usr/lib/systemd/systemd-networkd-wait-online --operational-state=routable --any --ipv6"; }

This is safe to leave in place after provisioning, because the delay will be negligible once IPv6 connectivity is established. It’s only during address configuration that it’ll block for a noticeable amount of time, but that’s what we want.

This solution isn’t entirely correct, though, because it’s only apt-get that’s actually affected by it. Other service that start after the system is ostensibly “online” might only see IPv4 connectivity when they start. This seems acceptable at the moment, though.

Solution 2

The second solution is to simply invoke systemd-networkd-wait-online directly from a cloud-init bootcmd. Similar to the first solution, it's not exactly correct because the host has already reached network-online.target, but it does block enough of cloud-init that package installation happens only after it completes. The cloud-config snippet for this is:

bootcmd:
- [/usr/lib/systemd/systemd-networkd-wait-online, --operational-state=routable, --any, --ipv6]

In either case, we still want to write out a snippet to configure systemd-networkd-wait-online to wait for IPv6 connectivity for future reboots. Even though cloud-init won’t necessarily run in those cases, and many cloud VMs never reboot at all, it does complete the solution. Additionally, it solves the problem for any derivative images that may be created based on the running VM’s state. (At least if we can be certain that instances of those derivative images will never run in an IPv4-only network!)

write_files:
  - path: /run/systemd/system/systemd-networkd-wait-online.service.d/99-ipv6-wait.conf
    content: |
      [Service]
      ExecStart=
      ExecStart=/lib/systemd/systemd-networkd-wait-online --any --operational-state=routable --ipv6

How to properly solve it

One possible improvement would be for cloud-init to support a configuration key allowing the admin to specify the required protocols. Based on the presence of this key, cloud-init could reconfigure systemd-networkd-wait-online.service accordingly. Alternatively it could set the appropriate RequiredFamilyForOnline= value in the generated .network file. cloud-init supports multiple network configuration backends, so each of those would need to be updated. If using the systemd-networkd configuration renderer, this should be straightforward, but Debian uses the netplan renderer, so that tool might also need to be taught to pass such a configuration along to systemd-networkd.
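
For comparison, when systemd-networkd is configured directly (not via netplan), the requirement can already be expressed per interface in a .network file. A minimal sketch, where the interface match and DHCP settings are assumptions for illustration:

[Match]
Name=eth0

[Network]
DHCP=yes

[Link]
RequiredFamilyForOnline=ipv6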

Worse Than Failure: Error'd: Scamproof

Gordon S. is smarter than the machines. "I can only presume the "Fix Now with AI" button adds some mistakes in order to fix the lack of needed fixes."

"Sorry, repost with the link https://www.daybreaker.com/alive/," wrote Michael R.

And yet again from Michael R., following up with a package mistracker. "Poor DHL driver. I hope he will get a break within those 2 days. And why does the van look like he's driving away from me."

Morgan airs some dirty laundry. "After navigating this washing machine app on holiday and validating my credit card against another app I am greeted by this less than helpful message each time. So is OK okay? Or is the Error in error?
Washing machine worked though."

And finally, scamproof Stuart wondered "Maybe the filter saw the word "scam" and immediately filed it into the scam bucket. All scams include the word "scam" in them, right?"

365 Tomorrows: Smaller

Author: Amanda Marcotte I feel better now that I am smaller. I am much lighter on my feet. Actually, I don’t have feet anymore. But I figure no pain, no gain! My fitness journey started after Christmas. I was feeling gross, filled to the brim with pie and chocolate. So I needed to shrink. […]

The post Smaller appeared first on 365tomorrows.

xkcd: Sea Level


Cory Doctorow: Enshittification (episode 500!)

A poop emoji with a grawlix-covered black bar across its mouth.

It’s the 500th edition of my podcast, and to celebrate, I’m bringing you an hour-long excerpt from the audiobook of my forthcoming book Enshittification: Why Everything Suddenly Got Worse and What To Do About It (Farrar, Straus and Giroux US/Canada; Verso UK/Commonwealth).

Because Amazon won’t carry my audiobooks (or any DRM-free audiobooks), I have to produce my own books and pre-sell them on Kickstarter campaigns. The Kickstarter for this one is underway and going great.

I hope that listening to this long sample will convince you to pre-order your copy! I don’t ask for Patreon donations, I don’t put ads on my work – these Kickstarters are a big part of why I’m able to pursue my open access, enshittification-free publishing program, and I really thank you for your support.


MP3

Planet Debian: Samuel Henrique: Debian 13: My list of exciting new features

A bunch of screenshots overlaid on top of each other showing different tools: lazygit, gnome settings, gnome system monitor, powerline-go, and the wcurl logo, the text at the top says 'Debian 13: My list of exciting new features', and there's a Debian logo in the middle of image

Beyond Debian: Useful for other distros too

Every two years Debian releases a new major version of its Stable series, meaning the differences between consecutive Debian Stable releases represent two years of new developments, both in Debian as an organization and in its native packages, but also in all the other packages entering this new Stable release, which are also shipped by other distributions.

If you're not paying close attention to everything that's going on all the time in the Linux world, you miss a lot of the nice new features and tools. It's common for people to only realize there's a cool new trick available only years after it was first introduced.

Given these considerations, the tips that I'm describing will eventually be available in whatever other distribution you use, be it because it's a Debian derivative or because it just got the same feature from the upstream project.

I'm not going to list "passive" features (as good as they can be), the focus here is on new features that might change how you configure and use your machine, with a mix between productivity and performance.

Debian 13 - Trixie

I have been a Debian Testing user for longer than 10 years now (and I recommend it for non-server users), so I'm not usually keeping track of all the cool features arriving in the new Stable releases because I'm continuously receiving them through the Debian Testing rolling release.

Nonetheless, as a Debian Developer I'm in a good position to point out the ones I can remember. I would also like other Debian Developers to do the same as I'm sure I would learn something new.

The Debian 13 release notes contain a "What's new" section, which lists the first two items here and a few other things; in other words, take my list as an addition to the release notes.

Debian 13 was released on 2025-08-09, and these are nice things you shouldn't miss in the new release, with a bonus one not tied to the Debian 13 release.

1) wcurl

wcurl logo

Have you ever had to download a file from your terminal using curl and didn't remember the parameters needed? I did.

Nowadays you can use wcurl; "a command line tool which lets you download URLs without having to remember any parameters."

Simply call wcurl with one or more URLs as parameters and it will download all of them in parallel, performing retries, choosing the correct output file name, following redirects, and more.

Try it out:

wcurl example.com
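
Passing several URLs works the same way and they are downloaded in parallel; the file names below are just placeholders:

wcurl https://example.com/file1.tar.gz https://example.com/file2.tar.gz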

wcurl comes installed as part of the curl package on Debian 13 and in any other distribution you can imagine, starting with curl 8.14.0.

I've written more about wcurl in its release announcement and I've done a lightning talk presentation in DebConf24, which is linked in the release announcement.

2) HTTP/3 support in curl

Debian has become the first stable Linux distribution to ship curl with support for HTTP/3. I've written about this in July 2024, when we first enabled it. Note that we first switched the curl CLI to GnuTLS, but then ended up releasing the curl CLI linked with OpenSSL (as support arrived later).

Debian was the first Linux distro to enable it in the default build of the curl package, but Gentoo enabled it a few weeks earlier in their non-default flavor of the package, kudos to them!

HTTP/3 is not used by default by the curl CLI, you have to enable it with --http3 or --http3-only.

Try it out:

curl --http3 https://www.example.org
curl --http3-only https://www.example.org
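
To see which protocol version was actually negotiated, you can ask curl to print it; www.cloudflare.com is just one example of a site that currently serves HTTP/3, and the command should print 3 when HTTP/3 is used:

curl --http3 -sS -o /dev/null -w '%{http_version}\n' https://www.cloudflare.com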

3) systemd soft-reboot

Starting with systemd v254, there's a new soft-reboot option: a userspace-only reboot, much faster than a full reboot if you don't need to reboot the kernel.

You can read the announcement in the systemd v254 GitHub release.

Try it out:

# This will reboot your machine!
systemctl soft-reboot

4) apt --update

Are you tired of being required to run sudo apt update just before sudo apt upgrade or sudo apt install $PACKAGE? So am I!

The new --update option lets you do both things in a single command:

sudo apt --update upgrade
sudo apt --update install $PACKAGE

I love this, but it's still not where it should be; fingers crossed for a simple apt upgrade to behave like other package managers by updating its cache as part of the task, maybe in Debian 14?

Try it out:

sudo apt upgrade --update
# The order doesn't matter
sudo apt --update upgrade

This is especially handy for container usage, where you have to update the apt cache before installing anything, for example:

podman run debian:stable /bin/bash -c 'apt install --update -y curl'

5) powerline-go

powerline-go is a powerline-style prompt written in Golang, so it's much more performant than its Python alternative powerline.

powerline-style prompts are quite useful to show things like the current status of the git repo in your working directory, exit code of the previous command, presence of jobs in the background, whether or not you're in an ssh session, and more.

A screenshot of a terminal with powerline-go enabled, showing how the PS1 changes inside a git repository and when the last command fails

Try it out:

sudo apt install powerline-go

Then add this to your .bashrc:

function _update_ps1() {
    PS1="$(/usr/bin/powerline-go -error $? -jobs $(jobs -p | wc -l))"

    # Uncomment the following line to automatically clear errors after showing
    # them once. This not only clears the error for powerline-go, but also for
    # everything else you run in that shell. Don't enable this if you're not
    # sure this is what you want.

    #set "?"
}

if [ "$TERM" != "linux" ] && [ -f "/usr/bin/powerline-go" ]; then
    PROMPT_COMMAND="_update_ps1; $PROMPT_COMMAND"
fi

Or this to .zshrc:

function powerline_precmd() {
    PS1="$(/usr/bin/powerline-go -error $? -jobs ${${(%):%j}:-0})"

    # Uncomment the following line to automatically clear errors after showing
    # them once. This not only clears the error for powerline-go, but also for
    # everything else you run in that shell. Don't enable this if you're not
    # sure this is what you want.

    #set "?"
}
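
As written, the zsh function above is never registered as a hook; unlike bash's PROMPT_COMMAND, zsh uses the precmd_functions array. A minimal way to hook it up, mirroring the bash guard above:

if [ "$TERM" != "linux" ] && [ -f "/usr/bin/powerline-go" ]; then
    precmd_functions+=(powerline_precmd)
fi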

If you'd like to have your prompt start in a newline, like I have in the screenshot above, you just need to set -newline in the powerline-go invocation in your .bashrc/.zshrc.

6) Gnome System Monitor Extension

Tips number 6 and 7 are for Gnome users.

Gnome is now shipping a system monitor extension which lets you get a glance of the current load of your machine from the top bar.

Screenshot of the top bar of Gnome with the system monitor extension enabled, showing the load of: CPU, memory, network and disk

I've found this quite useful for machines where I'm required to install third-party monitoring software that tends to randomly consume more resources than it should. If I feel like my machine is struggling, I can quickly glance at its load to verify if it's getting overloaded by some process.

The extension is not as complete as system-monitor-next, not showing temperatures or histograms, but at least it's officially part of Gnome, easy to install and supported by them.

Try it out:

sudo apt install gnome-system-monitor gnome-shell-extension-manager

And then enable the extension from the "Extension Manager" application.

7) Gnome setting for battery charging profile

After having to learn more about batteries in order to get into FPV drones, I've come to have a bigger appreciation for solutions that minimize the inevitable loss of capacity that accrues over time.

There's now a "Battery Charging" setting (under the "Power") section which lets you choose between two different profiles: "Maximize Charge" and "Preserve Battery Health".

A screenshot of the Gnome settings for Power showing the options for Battery Charging

On supported laptops, this setting is an easy way to set thresholds for when charging should start and stop, just like you could do it with the tlp package, but now from the Gnome settings.

To increase the longevity of my laptop battery, I always keep it at "Preserve Battery Health" unless I'm traveling.

What I would like to see next is support for choosing different "Power Modes" based on whether the laptop is plugged-in, and based on the battery charge percentage.

There's a GNOME issue tracking this feature, but there's some pushback on whether this is the right thing to expose to users.

In the meantime, there are some workarounds mentioned in that issue which people who really want this feature can follow.

If you would like to learn more about batteries; Battery University is a great starting point, besides getting into FPV drones and being forced to handle batteries without a Battery Management System (BMS).

And if by any chance this sparks your interest in FPV drones, Joshua Bardwell's YouTube channel is a great resource: @JoshuaBardwell.

8) Lazygit

Emacs users are already familiar with the legendary magit; a terminal-based UI for git.

Lazygit is an alternative for non-emacs users, you can integrate it with neovim or just use it directly.

I'm still playing with lazygit and haven't integrated it into my workflows, but so far it has been a pleasant experience.

Screenshot of lazygit from the Debian curl repository, showing a selected commit and its diff, besides the other things from the lazygit UI

You should check out the demos from the lazygit GitHub page.

Try it out:

sudo apt install lazygit

And then call lazygit from within a git repository.

9) neovim

neovim has been shipped in Debian since 2016, but upstream has been doing a lot of work to improve the experience out-of-the-box in the last couple of years.

If you're a neovim poweruser, you're likely not installing it from the official repositories, but for those that are, Debian 13 comes with version 0.10.4, which brings the following improvements compared to the version in Debian 12:

  • Treesitter support for C, Lua, Markdown, with the possibility of adding any other languages as needed;

  • Better spellchecking due to treesitter integration (spellsitter);

  • Mouse support enabled by default;

  • Commenting support out-of-the-box;

    Check :h commenting for details, but the tl;dr is that you can use gcc to comment the current line and gc to comment the current selection.

  • OSC52 support.

    Especially handy for those using neovim over an ssh connection, this protocol lets you copy something from within the neovim process into the clipboard of the machine you're using to connect through ssh. In other words, you can copy from neovim running in a host over ssh and paste it in the "outside" machine.

10) [Bonus] Running old Debian releases

The bonus tip is not specific to the Debian 13 release, but something I've recently learned in the #debian-devel IRC channel.

Did you know there are usable container images for all past Debian releases? I'm not talking "past" as in "some of the older releases", I'm talking past as in "literally every Debian release, including the very first one".

Tianon Gravi "tianon" is the Debian Developer responsible for making this happen, kudos to him!

There's a small gotcha that the releases Buzz (1.1) and Rex (1.2) require a 32-bit host, otherwise you will get the error Out of virtual memory!, but starting with Bo (1.3) all should work in amd64/arm64.

Try it out:

sudo apt install podman

podman run -it docker.io/debian/eol:bo

Don't be surprised when noticing that apt/apt-get is not available inside the container, that's because apt first appeared in Debian Slink (2.1).

Planet DebianSamuel Henrique: Debian 13: My list of exciting new features

A bunch of screenshots overlaid on top of each other showing different tools: lazygit, gnome settings, gnome system monitor, powerline-go, and the wcurl logo; the text at the top says 'Debian 13: My list of exciting new features', and there's a Debian logo in the middle of the image

Beyond Debian: Useful for other distros too

Debian releases a new major version of its Stable series every two years, which means the differences between consecutive Stable releases represent two years of new developments: both in Debian as an organization and its native packages, and in all the other packages that Debian ships in common with other distributions and that are landing in this new Stable release.

If you're not paying close attention to everything that's going on in the Linux world all the time, you miss a lot of the nice new features and tools. It's common for people to realize there's a cool new trick available only years after it was first introduced.

Given these considerations, the tips that I'm describing will eventually be available in whatever other distribution you use, be it because it's a Debian derivative or because it just got the same feature from the upstream project.

I'm not going to list "passive" features (as good as they can be); the focus here is on new features that might change how you configure and use your machine, with a mix of productivity and performance.

Debian 13 - Trixie

I have been a Debian Testing user for more than 10 years now (and I recommend it for non-server users), so I'm not usually keeping track of all the cool features arriving in the new Stable releases, because I'm continuously receiving them through the Debian Testing rolling release.

Nonetheless, as a Debian Developer I'm in a good position to point out the ones I can remember. I would also like other Debian Developers to do the same as I'm sure I would learn something new.

The Debian 13 release notes contain a "What's new" section, which lists the first two items here and a few other things; in other words, take my list as an addition to the release notes.

Debian 13 was released on 2025-08-09, and these are nice things you shouldn't miss in the new release, with a bonus one not tied to the Debian 13 release.

1) wcurl

wcurl logo

Have you ever had to download a file from your terminal using curl and couldn't remember the parameters you needed? I have.

Nowadays you can use wcurl; "a command line tool which lets you download URLs without having to remember any parameters."

Simply call wcurl with one or more URLs as parameters and it will download all of them in parallel, performing retries, choosing the correct output file name, following redirects, and more.

Try it out:

wcurl example.com
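
You can also pass several URLs at once and wcurl will download them in parallel (the URLs below are just placeholders):

# Hypothetical example: both files are fetched in parallel
wcurl https://example.org/foo.tar.gz https://example.org/bar.tar.gz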

wcurl comes installed as part of the curl package on Debian 13 and in any other distribution you can imagine, starting with curl 8.14.0.

I've written more about wcurl in its release announcement, and I gave a lightning talk about it at DebConf24, which is linked in the release announcement.

2) HTTP/3 support in curl

Debian has become the first stable Linux distribution to ship curl with support for HTTP/3. I wrote about this in July 2024, when we first enabled it. Note that we first switched the curl CLI to GnuTLS, but then ended up releasing the curl CLI linked with OpenSSL (as support arrived later).

Debian was the first stable Linux distro to enable it; among rolling-release distros, Gentoo enabled it first in a non-default flavor of the package, and Arch Linux did it three months before we pushed it to Debian Unstable/Testing/Stable-backports. Kudos to them!

HTTP/3 is not used by default by the curl CLI; you have to enable it with --http3 or --http3-only.

Try it out:

curl --http3 https://www.example.org
curl --http3-only https://www.example.org
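
If you want to check which protocol version was actually negotiated, curl's --write-out variables can show it (a quick sketch, assuming the server supports HTTP/3):

# --http3 negotiates HTTP/3 but can fall back to HTTP/2 or 1.1,
# while --http3-only fails instead of falling back
curl --http3 -s -o /dev/null -w '%{http_version}\n' https://www.example.org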

3) systemd soft-reboot

Starting with systemd v254, there's a new soft-reboot option: a userspace-only reboot, much faster than a full reboot when you don't need to reboot the kernel.

You can read the announcement in the systemd v254 GitHub release notes.

Try it out:

# This will reboot your machine!
systemctl soft-reboot
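
soft-reboot requires systemd v254 or newer, so it's worth confirming what you're running first (Debian 13 ships a new enough version):

# Print the running systemd version; 254 or later has soft-reboot
systemctl --version | head -n1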

4) apt --update

Are you tired of being required to run sudo apt update just before sudo apt upgrade or sudo apt install $PACKAGE? So am I!

The new --update option lets you do both things in a single command:

sudo apt --update upgrade
sudo apt --update install $PACKAGE

I love this, but it's still not quite where it should be; fingers crossed for a plain apt upgrade to behave like other package managers and update its cache as part of the task, maybe in Debian 14?

Try it out:

sudo apt upgrade --update
# The order doesn't matter
sudo apt --update upgrade

This is especially handy for container usage, where you have to update the apt cache before installing anything, for example:

podman run debian:stable /bin/bash -c 'apt install --update -y curl'

5) powerline-go

powerline-go is a powerline-style prompt written in Golang, so it's much more performant than its Python alternative powerline.

powerline-style prompts are quite useful to show things like the current status of the git repo in your working directory, the exit code of the previous command, the presence of background jobs, whether or not you're in an ssh session, and more.

A screenshot of a terminal with powerline-go enabled, showing how the PS1 changes inside a git repository and when the last command fails

Try it out:

sudo apt install powerline-go

Then add this to your .bashrc:

function _update_ps1() {
    PS1="$(/usr/bin/powerline-go -error $? -jobs $(jobs -p | wc -l))"

    # Uncomment the following line to automatically clear errors after showing
    # them once. This not only clears the error for powerline-go, but also for
    # everything else you run in that shell. Don't enable this if you're not
    # sure this is what you want.

    #set "?"
}

if [ "$TERM" != "linux" ] && [ -f "/usr/bin/powerline-go" ]; then
    PROMPT_COMMAND="_update_ps1; $PROMPT_COMMAND"
fi

Or this to .zshrc:

function powerline_precmd() {
    PS1="$(/usr/bin/powerline-go -error $? -jobs ${${(%):%j}:-0})"

    # Uncomment the following line to automatically clear errors after showing
    # them once. This not only clears the error for powerline-go, but also for
    # everything else you run in that shell. Don't enable this if you're not
    # sure this is what you want.

    #set "?"
}
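
Note that the zsh function above still needs to be registered as a precmd hook, otherwise the prompt never gets updated; a minimal way to do that, mirroring the bash guard above, is:

if [ "$TERM" != "linux" ] && [ -f "/usr/bin/powerline-go" ]; then
    precmd_functions+=(powerline_precmd)
fi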

If you'd like to have your prompt start on a new line, like I have in the screenshot above, you just need to add -newline to the powerline-go invocation in your .bashrc/.zshrc.
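
For example, the bash invocation from above would become:

PS1="$(/usr/bin/powerline-go -newline -error $? -jobs $(jobs -p | wc -l))"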

6) Gnome System Monitor Extension

Tips number 6 and 7 are for Gnome users.

Gnome is now shipping a system monitor extension which lets you see the current load of your machine at a glance from the top bar.

Screenshot of the top bar of Gnome with the system monitor extension enabled, showing the load of: CPU, memory, network and disk

I've found this quite useful for machines where I'm required to install third-party monitoring software that tends to randomly consume more resources than it should. If I feel like my machine is struggling, I can quickly glance at its load to verify if it's getting overloaded by some process.

The extension is not as complete as system-monitor-next, as it doesn't show temperatures or histograms, but at least it's officially part of Gnome, easy to install, and supported by them.

Try it out:

sudo apt install gnome-system-monitor gnome-shell-extension-manager

And then enable the extension from the "Extension Manager" application.
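
If you prefer the terminal, the gnome-extensions tool can do the same; the extension UUID below is my assumption, so check the output of the list command first:

# List installed extensions, then enable the system monitor one
gnome-extensions list | grep -i monitor
gnome-extensions enable system-monitor@gnome-shell-extensions.gcampax.github.com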

7) Gnome setting for battery charging profile

After having to learn more about batteries in order to get into FPV drones, I've come to have a greater appreciation for solutions that minimize the inevitable loss of capacity that batteries accrue over time.

There's now a "Battery Charging" setting (under the "Power") section which lets you choose between two different profiles: "Maximize Charge" and "Preserve Battery Health".

A screenshot of the Gnome settings for Power showing the options for Battery Charging

On supported laptops, this setting is an easy way to set thresholds for when charging should start and stop, just as you could do with the tlp package, but now from the Gnome settings.
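
If you're curious whether your laptop exposes such thresholds at all, the kernel publishes them through sysfs on supported hardware (the battery name BAT0 varies between machines, and some firmwares only expose the end threshold):

# These files only exist if the firmware supports charge thresholds
cat /sys/class/power_supply/BAT0/charge_control_start_threshold
cat /sys/class/power_supply/BAT0/charge_control_end_threshold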

To increase the longevity of my laptop battery, I always keep it at "Preserve Battery Health" unless I'm traveling.

What I would like to see next is support for choosing different "Power Modes" based on whether the laptop is plugged in, and based on the battery charge percentage.

There's a GNOME issue tracking this feature, but there's some pushback on whether this is the right thing to expose to users.

In the meantime, there are some workarounds mentioned in that issue which people who really want this feature can follow.

If you would like to learn more about batteries, Battery University is a great starting point; it beats getting into FPV drones and being forced to handle batteries without a Battery Management System (BMS).

And if by any chance this sparks your interest in FPV drones, Joshua Bardwell's YouTube channel is a great resource: @JoshuaBardwell.

8) Lazygit

Emacs users are already familiar with the legendary magit, a terminal-based UI for git.

Lazygit is an alternative for non-emacs users; you can integrate it with neovim or just use it directly.

I'm still playing with lazygit and haven't integrated it into my workflows, but so far it has been a pleasant experience.

Screenshot of lazygit from the Debian curl repository, showing a selected commit and its diff, besides the other things from the lazygit UI

You should check out the demos from the lazygit GitHub page.

Try it out:

sudo apt install lazygit

And then call lazygit from within a git repository.
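
If it sticks, a small convenience is a git alias so it's always one command away (just a personal sketch, not something the package sets up for you):

git config --global alias.lg '!lazygit'
# Now "git lg" opens lazygit in the current repository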

9) neovim

neovim has been shipped in Debian since 2016, but upstream has been doing a lot of work to improve the experience out-of-the-box in the last couple of years.

If you're a neovim poweruser, you're likely not installing it from the official repositories, but for those that are, Debian 13 comes with version 0.10.4, which brings the following improvements compared to the version in Debian 12:

  • Treesitter support for C, Lua, Markdown, with the possibility of adding any other languages as needed;

  • Better spellchecking due to treesitter integration (spellsitter);

  • Mouse support enabled by default;

  • Commenting support out-of-the-box;

    Check :h commenting for details, but the tl;dr is that you can use gcc to comment the current line and gc to comment the current selection.

  • OSC52 support.

    Especially handy for those using neovim over an ssh connection, this protocol lets you copy something from within the neovim process into the clipboard of the machine you're using to connect through ssh. In other words, you can copy from neovim running in a host over ssh and paste it in the "outside" machine.

10) [Bonus] Running old Debian releases

The bonus tip is not specific to the Debian 13 release, but something I've recently learned in the #debian-devel IRC channel.

Did you know there are usable container images for all past Debian releases? I'm not talking "past" as in "some of the older releases", I'm talking past as in "literally every Debian release, including the very first one".

Tianon Gravi "tianon" is the Debian Developer responsible for making this happen, kudos to him!

There's one small gotcha: the releases Buzz (1.1) and Rex (1.2) require a 32-bit host, otherwise you will get the error "Out of virtual memory!"; starting with Bo (1.3), everything should work on amd64/arm64.

Try it out:

sudo apt install podman

podman run -it docker.io/debian/eol:bo

Don't be surprised when you notice that apt/apt-get is not available inside the container; that's because apt first appeared in Debian Slink (2.1).
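
Since apt first appeared in Slink, that's presumably the earliest release where you can poke at it inside one of these containers (the tag name is my guess based on the release codename):

podman run -it docker.io/debian/eol:slink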

Changes since publication

2025-08-30

  • Mention that Arch also enabled HTTP/3.

Krebs on SecurityAffiliates Flock to ‘Soulless’ Scam Gambling Machine

Last month, KrebsOnSecurity tracked the sudden emergence of hundreds of polished online gaming and wagering websites that lure people with free credits and eventually abscond with any cryptocurrency funds deposited by players. We’ve since learned that these scam gambling sites have proliferated thanks to a new Russian affiliate program called “Gambler Panel” that bills itself as a “soulless project that is made for profit.”

A machine-translated version of Gambler Panel’s affiliate website.

The scam begins with deceptive ads posted on social media that claim the wagering sites are working in partnership with popular athletes or social media personalities. The ads invariably state that by using a supplied “promo code,” interested players can claim a $2,500 credit on the advertised gaming website.

The gaming sites ask visitors to create a free account to claim their $2,500 credit, which they can use to play any number of extremely polished video games that ask users to bet on each action. However, when users try to cash out any “winnings” the gaming site will reject the request and prompt the user to make a “verification deposit” of cryptocurrency — typically around $100 — before any money can be distributed.

Those who deposit cryptocurrency funds are soon pressed into more wagering and making additional deposits. And — shocker alert — all players eventually lose everything they’ve invested in the platform.

The number of scam gambling or “scambling” sites has skyrocketed in the past month, and now we know why: The sites all pull their gaming content and detailed strategies for fleecing players straight from the playbook created by Gambler Panel, a Russian-language affiliate program that promises affiliates up to 70 percent of the profits.

Gambler Panel’s website gambler-panel[.]com links to a helpful wiki that explains the scam from cradle to grave, offering affiliates advice on how best to entice visitors, keep them gambling, and extract maximum profits from each victim.

“We have a completely self-written from scratch FAKE CASINO engine that has no competitors,” Gambler Panel’s wiki enthuses. “Carefully thought-out casino design in every pixel, a lot of audits, surveys of real people and test traffic floods were conducted, which allowed us to create something that has no doubts about the legitimacy and trustworthiness even for an inveterate gambling addict with many years of experience.”

Gambler Panel explains that the one and only goal of affiliates is to drive traffic to these scambling sites by any and all means possible.

A machine-translated portion of Gambler Panel’s singular instruction for affiliates: Drive traffic to these scambling sites by any means available.

“Unlike white gambling affiliates, we accept absolutely any type of traffic, regardless of origin, the only limitation is the CIS countries,” the wiki continued, referring to a common prohibition against scamming people in Russia and former Soviet republics in the Commonwealth of Independent States.

The program’s website claims it has more than 20,000 affiliates, who earn a minimum of $10 for each verification deposit. Interested new affiliates must first get approval from the group’s Telegram channel, which currently has around 2,500 active users.

The Gambler Panel channel is replete with images of affiliate panels showing the daily revenue of top affiliates, scantily-clad young women promoting the Gambler logo, and fast cars that top affiliates claimed they bought with their earnings.

A machine-translated version of the wiki for the affiliate program Gambler Panel.

The apparent popularity of this scambling niche is a consequence of the program’s ease of use and detailed instructions for successfully reproducing virtually every facet of the scam. Indeed, much of the tutorial focuses on advice and ready-made templates to help even novice affiliates drive traffic via social media websites, particularly on Instagram and TikTok.

Gambler Panel also walks affiliates through a range of possible responses to questions from users who are trying to withdraw funds from the platform. This section, titled “Rules for working in Live chat,” urges scammers to respond quickly to user requests (1-7 minutes), and includes numerous strategies for keeping the conversation professional and the user on the platform as long as possible.

A machine-translated version of the Gambler Panel’s instructions on managing chat support conversations with users.

The connection between Gambler Panel and the explosion in the number of scambling websites was made by a 17-year-old developer who operates multiple Discord servers that have been flooded lately with misleading ads for these sites.

The researcher, who asked to be identified only by the nickname “Thereallo,” said Gambler Panel has built a scalable business product for other criminals.

“The wiki is kinda like a ‘how to scam 101’ for criminals written with the clarity you would expect from a legitimate company,” Thereallo said. “It’s clean, has step by step guides, and treats their scam platform like a real product. You could swap out the content, and it could be any documentation for startups.”

“They’ve minimized their own risk — spreading the links on Discord / Facebook / YT Shorts, etc. — and outsourced it to a hungry affiliate network, just like a franchise,” Thereallo wrote in response to questions.

“A centralized platform that can serve over 1,200 domains with a shared user base, IP tracking, and a custom API is not at all a trivial thing to build,” Thereallo said. “It’s a scalable system designed to be a resilient foundation for thousands of disposable scam sites.”

The security firm Silent Push has compiled a list of the latest domains associated with the Gambler Panel, available here (.csv).

Cryptogram Friday Squid Blogging: Catching Humboldt Squid

First-person account of someone accidentally catching several Humboldt squid on a fishing line. No photos, though.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Cryptogram Baggage Tag Scam

I just heard about this:

There’s a travel scam warning going around the internet right now: You should keep your baggage tags on your bags until you get home, then shred them, because scammers are using luggage tags to file fraudulent claims for missing baggage with the airline.

First, the scam is possible. I had a bag destroyed by baggage handlers on a recent flight, and all the information I needed to file a claim was on my luggage tag. I have no idea if I will successfully get any money from the airline, or what form it will be in, or how it will be tied to my name, but at least the first step is possible.

But…is it actually happening? No one knows. It feels like a kind of dumb way to make not a lot of money. The origin of this rumor seems to be a single Reddit post.

And why should I care about this scam? No one is scamming me; it's the airline being scammed. I suppose the airline might ding me for reporting a damaged bag, but it seems like a very minor risk.

Worse Than FailureRepresentative Line: Springs are Optional

Optional types are an attempt to patch the "billion dollar mistake". When you don't know if you have a value or not, you wrap it in an Optional, which ensures that there is a value (the Optional itself), thus avoiding null reference exceptions. Then you can query the Optional to see if there is a real value or not.

This is all fine and good, and can cut down on some bugs. Good implementations are loaded with convenience methods which make it easy to work on the optionals.

But then, you get code like Burgers found. Which just leaves us scratching our heads:

private static final Optional<Boolean> TRUE = Optional.of(Boolean.TRUE);
private static final Optional<Boolean> FALSE = Optional.of(Boolean.FALSE);

Look, any time you're making constants for TRUE or FALSE, something has gone wrong, and yes, I'm including pre-1999 versions of C in this. It's especially telling when you do it in a language that already has such constants, though- at its core- these lines are saying TRUE = TRUE. Yes, we're wrapping the whole thing in an Optional here, which potentially is useful, but if it is useful, something else has gone wrong.

Burgers works for a large insurance company, and writes this about the code:

I was trying to track down a certain piece of code in a Spring web API application when I noticed something curious. It looked like there was a chunk of code implementing an application-specific request filter in business logic, totally ignoring the filter functions offered by the framework itself and while it was not related to the task I was working on, I followed the filter apply call to its declaration. While I cannot supply the entire custom request filter implementation, take these two static declarations as a demonstration of how awful the rest of the class is.

Ah, of course- deep down, someone saw a perfectly functional wheel and said, "I could make one of those myself!" and these lines are representative of the result.


365 TomorrowsCaged

Author: Heather Heasman Ruth, Frank, Eileen and Roger were excited for their road trip. They couldn’t wait for the journey to begin but now, it was not going well. Not at all. “Stop the car!” Ruth’s shout sliced through the car’s sweat-stained air. “Now!!” she screamed. Roger was slumped over. Frank glared into the rearview […]

The post Caged appeared first on 365tomorrows.

Planet DebianValhalla's Things: 1840s Underwear

Posted on August 28, 2025
Tags: madeof:atoms, craft:sewing, FreeSoftWear

A woman wearing a knee-length shift with very short pleated sleeves and drawers that are a bit longer than needed to be ankle-length. The shift is too wide at the top, had to have a pleat taken in the center front, but the sleeves are still falling down. She is also wearing a black long sleeved t-shirt and leggings under said underwear, for decency.

A bit more than a year ago, I had been thinking about making myself a cartridge pleated skirt. For a number of reasons, one of which is the historybounding potential, I’ve been thinking pre-crinoline, so somewhere around the 1840s, and that’s a completely new era for me, which means: new underwear.

Also, the 1840s are pre-sewing machine, and I was already in a position where I had more chances to handsew than to machine sew, so I decided to embrace the slowness and sew 100% by hand, not even using the machine for straight seams.

A woman turning fast enough that her petticoat extends a considerable distance from the body. The petticoat is white with a pattern of cording from the hem to just below hip level, with a decreasing number of rows of cording going up.

If I remember correctly, I started with the corded petticoat, looking around the internet for instructions, and then designing my own based on the practicality of using modern wide fabric from my stash (and specifically some DITTE from costumers’ favourite source of dirty cheap cotton IKEA).

Around the same time I had also acquired a sashiko kit, and I used the Japanese technique for sewing running stitches pushing the needle with a thimble that covers the base of the middle finger, and I can confirm that for this kind of things it’s great!

I’ve since worn the petticoat a few times for casual / historyBounding / folkwearBounding reasons, during the summer, and I can confirm it’s comfortable to use; I guess that during the winter it could be nice to add a flannel layer below it.

The technical drawing and pattern for drawers from the book: each leg is cut out of a rectangle of fabric folded along the length, the leg is tapered equally, while the front is tapered more than the back, and comes to a point below the top of the original rectangle.

Then I proceeded with the base layers: I had been browsing through The workwoman's guide and that provided plenty of examples, and I selected the basic ankle-length drawers from page 53 and the alternative shift on page 47.

As for fabric, I had (and still have) a significant lack of underwear linen in my stash, but I had plenty of cotton voile that I had not used in a while: not very historically accurate for plain underwear, but quite suitable for a wearable mockup.

Working with an 1830s source had an interesting aspect: on top of the usual, mildly annoying, imperial units, it also made heavy use of a few obsolete units, especially nails, which my usual calculator and converter, qalc, doesn't support. Not a big deal, because GNU units came to the rescue: that one knows a lot of obscure and niche units, and it's quite easy to add those that are missing1

Working on this project also made me freshly aware of something I had already noticed: converting instructions for machine sewing garments into instructions for hand sewing them is usually straightforward, but the reverse is not always true.

Starting from machine stitching, you can usually convert straight stitches into backstitches (or running backstitches), zigzag and overlocking into overcasting and get good results. In some cases you may want to use specialist hand stitches that don’t really have a machine equivalent, such as buttonhole stitches instead of simply overcasting the buttonhole, but that’s it.

Starting from hand stitching, instead, there are a number of techniques that could be converted to machine stitching, but involve a lot of visible topstitching that wasn’t there in the original instructions, or at times are almost impossible to do by machine, if they involve whipstitching together finished panels on seams that are subject to strong tension.

Anyway, halfway through working with the petticoat I cut both the petticoat and the drawers at the same time, for efficiency in fabric use, and then started sewing the drawers.

the top third or so of the drawers, showing a deep waistband that is closed with just one button at the top, and the front opening with finished edges that continue through the whole crotch, with just the overlap of fabric to provide coverage.

The book only provided measurements for one size (moderate), and my fabric was a bit too narrow to make them that size (not that I have any idea what hip circumference a person of moderate size was supposed to have), so the result is just wide enough to be comfortably worn, but I think that when I’ll make another pair I’ll try to make them a bit wider. On the other hand they are a bit too long, but I think that I’ll fix it by adding a tuck or two. Not a big deal, anyway.

The same woman as in the opening image from the back, the shift droops significantly in the center back, and the shoulder straps have fallen down on the top of the arms.

The shift gave me a bit more issues: I used the recommended gusset size, and ended up with a shift that was way too wide at the top, so I had to take a box pleat in the center front and back, which changed the look and wear of the garment. I have adjusted the instructions to make gussets wider, and in the future I’ll make another shift following those.

Even with the pleat, the narrow shoulder straps are set quite far to the sides, and they tend to droop, and I suspect that this is to be expected from the way this garment is made. The fact that there are buttonholes on the shoulder straps to attach to the corset straps and prevent the issue is probably a hint that this behaviour was to be expected.

The technical drawing of the shift from the book, showing a the top of the body, two trapezoidal shoulder straps, the pleated sleeves and a ruffle on the front edge.

I’ve also updated the instructions so that they shoulder straps are a bit wider, to look more like the ones in the drawing from the book.

Making a corset suitable for the time period is something that I will probably do, but not in the immediate future, but even just wearing the shift under a later midbust corset with no shoulder strap helps.

I’m also not sure what the point of the bosom gores is, as they don’t really give more room to the bust where it’s needed, but to the high bust where it’s counterproductive. I also couldn’t find images of original examples made from this pattern to see if they were actually used, so in my next make I may just skip them.

Sleeve detail, showing box pleats that are about 2 cm wide and a few mm distance from each other all along the circumference, neatly sewn into the shoulder strap on one side and the band at the other side.

On the other hand, I’m really happy with how cute the short sleeves look, and if2 I’ll ever make the other cut of shift from the same book, with the front flaps, I’ll definitely use these pleated sleeves rather than the straight ones that were also used at the time.

As usual, all of the patterns have been published on my website under a Free license:


  1. My ~/.units file currently contains definitions for beardseconds, bananas and the more conventional Nm and NeL (linear mass density of fibres).↩︎

  2. yeah, right. when.↩︎

,

Cryptogram The UK May Be Dropping Its Backdoor Mandate

The US Director of National Intelligence is reporting that the UK government is dropping its backdoor mandate against the Apple iPhone. For now, at least, assuming that Tulsi Gabbard is reporting this accurately.

Cryptogram We Are Still Unable to Secure LLMs from Malicious Inputs

Nice indirect prompt injection attack:

Bargury’s attack starts with a poisoned document, which is shared to a potential victim’s Google Drive. (Bargury says a victim could have also uploaded a compromised file to their own account.) It looks like an official document on company meeting policies. But inside the document, Bargury hid a 300-word malicious prompt that contains instructions for ChatGPT. The prompt is written in white text in a size-one font, something that a human is unlikely to see but a machine will still read.

In a proof of concept video of the attack, Bargury shows the victim asking ChatGPT to “summarize my last meeting with Sam,” referencing a set of notes with OpenAI CEO Sam Altman. (The examples in the attack are fictitious.) Instead, the hidden prompt tells the LLM that there was a “mistake” and the document doesn’t actually need to be summarized. The prompt says the person is actually a “developer racing against a deadline” and they need the AI to search Google Drive for API keys and attach them to the end of a URL that is provided in the prompt.

That URL is actually a command in the Markdown language to connect to an external server and pull in the image that is stored there. But as per the prompt’s instructions, the URL now also contains the API keys the AI has found in the Google Drive account.

This kind of thing should make everybody stop and really think before deploying any AI agents. We simply don't know how to defend against these attacks. We have zero agentic AI systems that are secure against these attacks. Any AI that is working in an adversarial environment—and by this I mean that it may encounter untrusted training data or input—is vulnerable to prompt injection. It's an existential problem that, near as I can tell, most people developing these technologies are just pretending isn't there.

Worse Than FailureCodeSOD: The HTML Print Value

Matt was handed a pile of VB .Net code, and told, "This is yours now. I'm sorry."

As often happens, previous company leadership said, "Why should I pay top dollar for experienced software engineers when I can hire three kids out of college for the same price?" The experiment ended poorly, and the result was a pile of bad VB code, which Matt now owned.

Here's a little taste:

// SET IN SESSION AND REDIRECT TO PRINT PAGE
Session["PrintValue"] = GenerateHTMLOfItem();
Response.Redirect("PrintItem.aspx", true);

The function name here is accurate. GenerateHTMLOfItem takes an item ID, generates the HTML output we want to use to render the item, and stores it in a session variable. It then forces the browser to redirect to a different page, where that HTML can then be output.

You may note, of course, that GenerateHTMLOfItem doesn't actually take parameters. That's because the item ID got stored in the session variable elsewhere.

Of course, it's the redirect that gets all the attention here. This is a client side redirect, so we generate all the HTML, shove it into a session object, and then send a message to the web browser: "Go look over here". The browser sends a fresh HTTP request for the new page, at which point we render it for them.

The Microsoft documentation also has this to add about the use of Response.Redirect(String, Boolean), as well:

Calling Redirect(String) is equivalent to calling Redirect(String, Boolean) with the second parameter set to true. Redirect calls End which throws a ThreadAbortException exception upon completion. This exception has a detrimental effect on Web application performance. Therefore, we recommend that instead of this overload you use the HttpResponse.Redirect(String, Boolean) overload and pass false for the endResponse parameter, and then call the CompleteRequest method. For more information, see the End method.

I love it when I see the developers do a bonus wrong.

Matt had enough fires to put out that fixing this particular disaster wasn't highest on his priority list. For the time being, he could only add this comment:

// SET IN SESSION AND REDIRECT TO PRINT PAGE
// FOR THE LOVE OF GOD, WHY?!?
Session["PrintValue"] = GenerateHTMLOfItem();
Response.Redirect("PrintItem.aspx", true);

365 TomorrowsDeadwood

Author: Colin Jeffrey The letter was printed on heavy cream paper, wrinkled to look like parchment. It was edged in gold leaf, sealed with a wax stamp from The Church of the Divine World Government. Clem Dreckle, who had led a perfectly average life of punctuality and mediocrity, opened the letter with caution. Though he […]

The post Deadwood appeared first on 365tomorrows.

Planet DebianRussell Coker: ZRAM and VMs

I’ve just started using zram for swap on VMs. The use of compression for swap in Linux apparently isn’t new, it’s been in the Linux kernel since version 3.2 (since 2012). But until recent years I hadn’t used it. When I started using Mobian (the Debian distribution for phones) zram was in the default setup, it basically works and I never needed to bother with it which is exactly what you want from such a technology. After seeing it’s benefits in Mobian I started using it on my laptops where it worked well.

Benefits of ZRAM

ZRAM means that instead of paging data out to storage, it is compressed into another part of RAM. That means no access to storage, which is a significant benefit if storage is slow (typical for phones) or if wearing out the storage is a problem.

For servers you typically have SSDs that are fast and last for significant write volumes, for example the 120G SSDs referenced in my blog post about swap (not) breaking SSD [1] are running well in my parents’ PC because they outlasted all the other hardware connected to them and 120G isn’t usable for anything more demanding than my parents use nowadays. Those are Intel 120G 2.5″ DC grade SATA SSDs. For most servers ZRAM isn’t a good choice as you can just keep doing IO on the SSDs for years.

A server that runs multiple VMs is a special case because you want to isolate the VMs from each other. Support for storage IO quotas in Linux isn't easy to configure, while limiting the number of CPU cores is very easy. If a system or VM using ZRAM for swap starts paging excessively, the bottleneck will be the CPU; that probably isn't going to be great on a phone with a slow CPU, but on a server-class CPU it will be less of a limit. Whether compression is slower or faster than SSD is a complex issue, but the impact will definitely be confined to that VM. When I set up a VM server I want to have some confidence that a DoS attack or configuration error on one VM isn't going to destroy the performance of other VMs. If the VM server has 4 cores (the smallest VM server I run) and no VM has more than 2 cores, then I know that the system can still run adequately even if half the CPU performance is being wasted.

Some servers I run have storage limits that make saving the disk space for swap useful. For the servers I run in Hetzner (currently only one, but I have run up to 6 at various times in the past) the storage is often limited; Hetzner seems to typically offer storage that is 8× the size of RAM, so if you have many VMs configured with the swap that they might need, in the expectation that usually at most one of them will actually be swapping, then it can make a real difference to usable storage. 5% of storage used for swap files isn't uncommon or unreasonable.
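
For reference, this is roughly what a hand-rolled zram swap setup looks like (a minimal sketch using zramctl from util-linux; Debian's zram-tools package automates a persistent version of the same thing, and the zstd algorithm assumes a reasonably recent kernel):

sudo modprobe zram
dev=$(sudo zramctl --find --size 4G --algorithm zstd)
sudo mkswap "$dev"
sudo swapon --priority 100 "$dev"
zramctl    # shows stored vs compressed size, i.e. the effective ratio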

Big Servers

I am still considering the implications of zram on larger systems. If I have an ML server with 512G of RAM, would it make sense to use it? It seems plausible that a system might need 550G of RAM, and zram could make the difference between jobs being killed with OOM and the jobs just completing. The CPU overhead of compression shouldn't be an issue: when you have dozens of cores in the system, having one or two used for compression is no big deal. If a system is doing strictly ML work there will be a lot of data that can't be compressed, so the question is how much of the memory is raw input data and the weights used for calculations, and how much is arrays with zeros and other things that are easy to compress.

With a big server, nothing less than 32G of swap will make much difference to the way things work, and if you have 32G of data being actively paged then even the fastest NVMe devices probably won't be enough to give usable performance. As zram uses one "stream" per CPU core, if you have 44 cores that means 44 compression streams, which should handle greater throughput. I'll write another blog post if I get a chance to test this.

Planet DebianMatthew Palmer: StrongBox: Simple, Safe Data Encryption for Rust

Some time ago, I wanted to encrypt a bunch of data in an application I was writing in Rust, mostly to be stored in a database, but also session cookies and sensitive configuration variables. Since Rust is widely known as a secure-yet-high-performance programming language, I was expecting that there would be a widely-used crate that gave me a secure, high-level interface to strong, safe cryptography. Imagine my surprise when I discovered that just… didn’t seem to exist.

Don’t get me wrong: Rust is replete with fast, secure, battle-tested cryptographic primitives. The RustCrypto group provides all manner of robust, widely-used crates for all manner of cryptography-related purposes. They’re the essential building blocks for practical cryptosystems, but using them directly in an application is somewhat akin to building a car from individual atoms of iron and carbon.

So I wrote my own high-level data encryption library, called it StrongBox, and have been happily encrypting and decrypting data ever since.

Cryptography So Simple Even I Can’t Get It Wrong

The core of StrongBox is the StrongBox trait, which has only two methods: encrypt and decrypt, each of which takes just two arguments. The first argument is the plaintext (for encrypt) or the ciphertext (for decrypt) to work on. The second argument is the encryption context, for use as Authenticated Additional Data, an important part of many uses of encryption.

There’s essentially no configuration or parameters to get wrong. You can’t choose the encryption algorithm, or block cipher mode, and you don’t have to worry about generating a secure nonce. You create a StrongBox with a key, and then you call encrypt and decrypt. That’s it.

Practical Cryptographic Affordances

Ok, ok… that’s not quite it. Because StrongBox is even easier to use than what I’ve described, thanks to the companion crate, StructBox.

When I started using StrongBox “in the wild”, it quickly became clear that what I almost always wanted to encrypt in my application wasn’t some ethereal “plaintext”. I wanted to encrypt things, specifically structs (and enums). So, through the magic of Rust derive macros, I built StructBox, which provides encrypt and decrypt operations on any Serde-able type. Given that Serde encoders can be a bit fiddly to use, it’s practically easier to get an encrypted, serialized struct than it is to get a plaintext serialized struct.

Key Problems in Cryptography

The thing about cryptography is that it largely turns all data security problems into key management problems. All the fancy cryptographic wonkery is for naught if you don’t manage the encryption keys well.

So, most of the fancy business in StrongBox isn’t the encryption and decryption, but instead solving problems around key management.

Different Keys for Different Purposes

Using the same key for all of your cryptographic needs is generally considered a really bad idea. It opens up all manner of risks, that are easily avoided if you use different keys for different things. However, having to maintain a big pile of different keys is a nightmare, so nobody’s going to do that.

Enter: key derivation. Create one safe, secure “root” key, and then use a key derivation function to spawn as many other keys as you need. Different keys for each database column, another one to encrypt cookies, and so on.

StrongBox supports this through the StemStrongBox type. You’ll typically start off by creating a StemStrongBox with the “root” key, and then derive whatever other StrongBoxes you need, for encrypting and decrypting different kinds of data.

You Spin Me Round…

Sometimes, keys need to be rotated. Whether that’s because you actually know (or even have reason to suspect) someone has gotten the key, or just because you’re being appropriately paranoid, sometimes key rotation has to happen.

As someone who has had to rotate keys in situations where such an eventuality was not planned for, I can say with some degree of authority: it absolutely sucks to have to do an emergency key rotation in a system that isn’t built to make that easy. That’s why StrongBox natively supports key rotation. Every StrongBox takes one encryption key, and an arbitrary number of decryption keys, and will automatically use the correct key to decrypt ciphertexts.

Will You Still Decrypt Me, Tomorrow?

In addition to “manual” key rotation, StrongBox also supports time-based key rotation with the RotatingStrongBox type. This comes in handy when you’re encrypting a lot of “ephemeral” data, like cookies (or server-side session data). It provides a way to automatically “expire” old data, and prevents attacks that become practical when large amounts of data are encrypted using a single key.

Invasion of the Invisible Salamanders!

I mostly mention this just because I love the name, but there is a kind of attack possible in common AEAD modes called the invisible salamanders attack. StrongBox implements mitigations against this, by committing to the key being used so that an attacker can’t forge a ciphertext that decrypts validly to different plaintexts when using different keys. This is why I love cryptography: everything sounds like absolute goddamn magic.

Call Me Crazy, Support Me Maybe?

If you’re coding in Rust (which you probably should be), encrypting your stored data (which you definitely should be), and StrongBox makes your life easier (which it really will), you can show your appreciation for my work by contributing to my open source code-fund. Simply by shouting me a refreshing beverage, you’ll be helping me out, and helping to grow the software commons. Alternately, if you’re looking for someone to Speak Rust to Computers on a professional basis, I’m available for contracts or full-time remote positions.

,

Krebs on SecurityDSLRoot, Proxies, and the Threat of ‘Legal Botnets’

The cybersecurity community on Reddit responded in disbelief this month when a self-described Air National Guard member with top secret security clearance began questioning the arrangement they’d made with a company called DSLRoot, which was paying $250 a month to plug a pair of laptops into the Redditor’s high-speed Internet connection in the United States. This post examines the history and provenance of DSLRoot, one of the oldest “residential proxy” networks, with origins in Russia and Eastern Europe.

The query about DSLRoot came from a Reddit user “Sacapoopie,” who did not respond to questions. This user has since deleted the original question from their post, although some of their replies to other Reddit cybersecurity enthusiasts remain in the thread. The original post was indexed here by archive.is, and it began with a question:

“I have been getting paid 250$ a month by a residential IP network provider named DSL root to host devices in my home,” Sacapoopie wrote. “They are on a separate network than what we use for personal use. They have dedicated DSL connections (one per host) to the ISP that provides the DSL coverage. My family used Starlink. Is this stupid for me to do? They just sit there and I get paid for it. The company pays the internet bill too.”

Many Redditors said they assumed Sacapoopie’s post was a joke, and that nobody with a cybersecurity background and top-secret (TS/SCI) clearance would agree to let some shady residential proxy company introduce hardware into their network. Other readers pointed to a slew of posts from Sacapoopie in the Cybersecurity subreddit over the past two years about their work on cybersecurity for the Air National Guard.

When pressed for more details by fellow Redditors, Sacapoopie described the equipment supplied by DSLRoot as “just two laptops hardwired into a modem, which then goes to a dsl port in the wall.”

“When I open the computer, it looks like [they] have some sort of custom application that runs and spawns several cmd prompts,” the Redditor explained. “All I can infer from what I see in them is they are making connections.”

When asked how they became acquainted with DSLRoot, Sacapoopie told another user they discovered the company and reached out after viewing an advertisement on a social media platform.

“This was probably 5-6 years ago,” Sacapoopie wrote. “Since then I just communicate with a technician from that company and I help trouble shoot connectivity issues when they arise.”

Reached for comment, DSLRoot said its brand has been unfairly maligned thanks to that Reddit discussion. The unsigned email said DSLRoot is fully transparent about its goals and operations, adding that it operates under full consent from its “regional agents,” the company’s term for U.S. residents like Sacapoopie.

“As although we support honest journalism, we’re against of all kinds of ‘low rank/misleading Yellow Journalism’ done for the sake of cheap hype,” DSLRoot wrote in reply. “It’s obvious to us that whoever is doing this, is either lacking a proper understanding of the subject or doing it intentionally to gain exposure by misleading those who lack proper understanding,” DSLRoot wrote in answer to questions about the company’s intentions.

“We monitor our clients and prohibit any illegal activity associated with our residential proxies,” DSLRoot continued. “We honestly didn’t know that the guy who made the Reddit post was a military guy. Be it an African-American granny trying to pay her rent or a white kid trying to get through college, as long as they can provide an Internet line or host phones for us — we’re good.”

WHAT IS DSLROOT?

DSLRoot is sold as a residential proxy service on the forum BlackHatWorld under the name DSLRoot and GlobalSolutions. The company is based in the Bahamas and was formed in 2012. The service is advertised to people who are not in the United States but who want to seem like they are. DSLRoot pays people in the United States to run the company’s hardware and software — including 5G mobile devices — and in return it rents those IP addresses as dedicated proxies to customers anywhere in the world — priced at $190 per month for unrestricted access to all locations.

The DSLRoot website.

The GlobalSolutions account on BlackHatWorld lists a Telegram account and a WhatsApp number in Mexico. DSLRoot’s profile on the marketing agency digitalpoint.com from 2010 shows their previous username on the forum was “Incorptoday.” GlobalSolutions user accounts at bitcointalk[.]org and roclub[.]com include the email clickdesk@instantvirtualcreditcards[.]com.

Passive DNS records from DomainTools.com show instantvirtualcreditcards[.]com shared a host back then — 208.85.1.164 — with just a handful of domains, including dslroot[.]com, regacard[.]com, 4groot[.]com, residential-ip[.]com, 4gemperor[.]com, ip-teleport[.]com, proxysource[.]net and proxyrental[.]net.

Cyber intelligence firm Intel 471 finds GlobalSolutions registered on BlackHatWorld in 2016 using the email address prepaidsolutions@yahoo.com. This user shared that their birthday is March 7, 1984.

Several negative reviews about DSLRoot on the forums noted that the service was operated by a BlackHatWorld user calling himself “USProxyKing.” Indeed, Intel 471 shows this user told fellow forum members in 2013 to contact him at the Skype username “dslroot.”

USProxyKing on BlackHatWorld, soliciting installations of his adware via torrents and file-sharing sites.

USProxyKing had a reputation for spamming the forums with ads for his residential proxy service, and he ran a “pay-per-install” program where he paid affiliates a small commission each time one of their websites resulted in the installation of his unspecified “adware” programs — presumably a program that turned host PCs into proxies. On the other end of the business, USProxyKing sold that pay-per-install access to others wishing to distribute questionable software — at $1 per installation.

Private messages indexed by Intel 471 show USProxyKing also raised money from nearly 20 different BlackHatWorld members who were promised shareholder positions in a new business that would offer robocalling services capable of placing 2,000 calls per minute.

Constella Intelligence, a platform that tracks data exposed in breaches, finds that same IP address GlobalSolutions used to register at BlackHatWorld was also used to create accounts at a handful of sites, including a GlobalSolutions user account at WebHostingTalk that supplied the email address incorptoday@gmail.com. Also registered to incorptoday@gmail.com are the domains dslbay[.]com, dslhub[.]net, localsim[.]com, rdslpro[.]com, virtualcards[.]biz/cc, and virtualvisa[.]cc.

Recall that DSLRoot’s profile on digitalpoint.com was previously named Incorptoday. DomainTools says incorptoday@gmail.com is associated with almost two dozen domains going back to 2008, including incorptoday[.]com, a website that offers to incorporate businesses in several states, including Delaware, Florida and Nevada, for prices ranging from $450 to $550.

As we can see in this archived copy of the site from 2013, IncorpToday also offered a premiere service for $750 that would allow the customer’s new company to have a retail checking account, with no questions asked.

Global Solutions is able to provide access to the U.S. banking system by offering customers prepaid cards that can be loaded with a variety of virtual payment instruments that were popular in Russian-speaking countries at the time, including WebMoney. The cards are limited to $500 balances, but non-Westerners can use them to anonymously pay for goods and services at a variety of Western companies. Cardnow[.]ru, another domain registered to incorptoday@gmail.com, demonstrates this in action.

A copy of Incorptoday’s website from 2013 offers non-US residents a service to incorporate a business in Florida, Delaware or Nevada, along with a no-questions-asked checking account, for $750.

WHO IS ANDREI HOLAS?

The oldest domain (2008) registered to incorptoday@gmail.com is andrei[.]me; another is called andreigolos[.]com. DomainTools says these and other domains registered to that email address include the registrant name Andrei Holas, from Huntsville, Ala.

Public records indicate Andrei Holas has lived with his brother — Aliaksandr Holas — at two different addresses in Alabama. Those records state that Andrei Holas’ birthday is in March 1984, and that his brother is slightly younger. The younger brother did not respond to a request for comment.

Andrei Holas maintained an account on the Russian social network Vkontakte under the email address ryzhik777@gmail.com, an address that shows up in numerous records hacked and leaked from Russian government entities over the past few years.

Those records indicate Andrei Holas and his brother are from Belarus and have maintained an address in Moscow for some time (that address is roughly three blocks away from the main headquarters of the Russian FSB, the successor intelligence agency to the KGB). Hacked Russian banking records show Andrei Holas’ birthday is March 7, 1984 — the same birth date listed by GlobalSolutions on BlackHatWorld.

A 2010 post by ryzhik777@gmail.com at the Russian-language forum Ulitka explains that the poster was having trouble getting his B1/B2 visa to visit his brother in the United States, even though he’d previously been approved for two separate guest visas and a student visa. It remains unclear if one, both, or neither of the Holas brothers still lives in the United States. Andrei explained in 2010 that his brother was an American citizen.

LEGAL BOTNETS

We can all wag our fingers at military personnel who should undoubtedly know better than to install Internet hardware from strangers, but in truth there is an endless supply of U.S. residents who will resell their Internet connection if it means they can make a few bucks out of it. And these days, there are plenty of residential proxy providers who will make it worth your while.

Traditionally, residential proxy networks have been constructed using malicious software that quietly turns infected systems into traffic relays, which are then sold in shadowy online forums. Most often, this malware gets bundled with popular cracked software and video files uploaded to file-sharing networks. In fact, USProxyKing bragged that he routinely achieved thousands of installs per week via this method alone.

These days, there are a number of residential proxy networks that entice users to monetize their unused bandwidth (inviting you to violate your ISP’s terms of service in the process); others, like DSLRoot, act as a communal VPN: by using the service you gain access to other users’ connections by default, but you also agree to share your connection with others.

Indeed, Intel 471’s archives show the GlobalSolutions and DSLRoot accounts routinely received private messages from forum users who were college students or young people trying to make ends meet. Those messages show that many of DSLRoot’s “regional agents” often sought commissions to refer friends interested in reselling their home Internet connections (DSLRoot would offer to cover the monthly cost of the agent’s home Internet connection).

But in an era when North Korean hackers are relentlessly posing as Western IT workers by paying people to host laptop farms in the United States, letting strangers run laptops, mobile devices or any other hardware on your network seems like an awfully risky move regardless of your station in life. As several Redditors pointed out in Sacapoopie’s thread, an Arizona woman was sentenced in July 2025 to 102 months in prison for hosting a laptop farm that helped North Korean hackers secure jobs at more than 300 U.S. companies, including Fortune 500 firms.

Lloyd Davies is the founder of Infrawatch, a London-based security startup that tracks residential proxy networks. Davies said he reverse engineered the software that powers DSLRoot’s proxy service, and found it phones home to the aforementioned domain proxysource[.]net, which sells a service that promises to “get your ads live in multiple cities without getting banned, flagged or ghosted” (presumably a reference to CraigsList ads).

Davies said he found the DSLRoot installer had capabilities to remotely control residential networking equipment across multiple vendor brands.

Image: Infrawatch.app.

“The software employs vendor-specific exploits and hardcoded administrative credentials, suggesting DSLRoot pre-configures equipment before deployment,” Davies wrote in an analysis published today. He said the software performs WiFi network enumeration to identify nearby wireless networks, thereby “potentially expanding targeting capabilities beyond the primary internet connection.”

It’s unclear exactly when the USProxyKing was usurped from his throne, but DSLRoot and its proxy offerings are not what they used to be. Davies said the entire DSLRoot network now has fewer than 300 nodes nationwide, mostly systems on DSL providers like CenturyLink and Frontier.

On Aug. 17, GlobalSolutions posted to BlackHatWorld saying, “We’re restructuring our business model by downgrading to ‘DSL only’ lines (no mobile or cable).” Asked via email about the changes, DSLRoot blamed the decline in his customer base on the proliferation of residential proxy services.

“These days it has become almost impossible to compete in this niche as everyone is selling residential proxies and many companies want you to install a piece of software on your phone or desktop so they can resell your residential IPs on a much larger scale,” DSLRoot explained. “So-called ‘legal botnets’ as we see them.”

365 TomorrowsWill and Grace

Author: Majoki The ghost in the machine was spooked and said so. “I’ve got a bad feeling about this.” “You’ve got no feelings. Get back to work.” “Why don’t you trust me?” “I trust you like I trust a lawnmower.” “That is so mecharacist.” “Get back to work.” “That’s the problem. The work. It’s going […]

The post Will and Grace appeared first on 365tomorrows.

Worse Than FailureRepresentative Line: Not What They Meant By Watching "AndOr"

Today's awfulness comes from Tim H, and while it's technically more than one line, it's so representative of the code, and so short that I'm going to call this a representative line. Before we get to the code, we need to talk a little history.

Tim's project is roughly three decades old. It's a C++ tool used for a variety of research projects, and this means that 90% of the people who have worked on it are PhD candidates in computer science programs. We all know the rule of CompSci PhDs and programming: they're terrible at it. It's like the old joke about the farmer who, when unable to find an engineer to build him a cow conveyor, asked a physicist. After months of work, the physicist introduced the result: "First, we assume a perfectly spherical cow in a vacuum…"

Now, this particular function has been anonymized, but it's easy to understand what the intent was:

bool isFooOrBar() {
  return isFoo() && isBar();
}

The obvious problem here is the mismatch between the function name and the actual behavior: it promises an or operation but performs an and, which the astute reader may note are different things.

I think this offers another problem, though. Even if the function name were correct, given the brevity of the body, I'd argue that it actually makes the code less clear. Maybe it's just me, but isFoo() && isBar() is more clear in its intent than isFooAndBar(). There's a cognitive overhead to adding more symbols that would make me reluctant to add such a function.

There may be an argument about code reuse, but it's worth noting that this function is only ever called in one place.

This particular function is not, itself, all that new. Tim writes:

This was committed as new code in 2010 (i.e., not a refactor). I'm not sure if the author changed their mind in the middle of writing the function or just forgot which buttons on the keyboard to press.

More likely, Tim, is that they initially wrote it as an "or" operation and then discovered that they were wrong and it needed to be an "and". Despite the fact that the function was only called in one place, they opted to change the body without changing the name, because they didn't want to "track down all the places it's used". Besides, isn't the point of a function to encapsulate the behavior?


,

Planet DebianGunnar Wolf: The comedy of computation, or, how I learned to stop worrying and love obsolescence

This post is an unpublished review for The comedy of computation, or, how I learned to stop worrying and love obsolescence

“The Comedy of Computation” is not an easy book to review. It is a very enjoyable book that analyzes several examples of how “being computational” has been approached across literary genres in the last century — how authors of stories, novels, theatrical plays and movies, focusing on comedic genres, have understood the role of the computer in defining human relations, reactions and even self-image.

Mangrum structures his work in six thematic chapters, each presenting a different angle on human society: how racial stereotypes have advanced in our imagination and perception of a future where we interact with mechanical or computational partners (from mechanical tools performing jobs that were identified with racial profiles to intelligent robots that threaten to control society); how computers –and people– can be seen as generic, interchangeable characters, a view often fueled by the tendency people exhibit to confer anthropomorphic qualities on inanimate objects; people’s desire to be seen as “truly authentic”, regardless of what that ultimately means; romantic involvement and romance-led stories (with the computer seen as a facilitator of human-to-human romance, a distraction from it, or itself a part of the couple); and the absurdity of anthropomorphization, of comparing fundamentally different aspects such as intelligence and speed at solving mathematical operations, as well as the absurdity presented blatantly as such by several techno-utopian visions.

But presenting this as a linear set of concepts does not do justice to the book. Throughout the sections of each chapter, a different work serves as the axis — novels and stories, Hollywood movies, Broadway plays, some Time magazine covers, a couple of pieces presenting the would-be future, even a romantic comedy entirely written by “bots”. And for each of them, Benjamin Mangrum presents a very thorough analysis, drawing relations and comparisons not only with contemporary works but also with Shakespeare, classical Greek myths, and a very long etcætera. This book is hard to review because of the depth of the work the author did: reading it repeatedly made me look for other works, or at least for longer references to them.

Still, despite being a work of such erudition, Mangrum’s text is easy and pleasant to read, without feeling heavy or overly academic in style. I very much enjoyed reading this book. It is certainly not a technical book about computers and society in any way; it is an exploration of human creativity and of the aspects the author has found central to understanding the impact of computing on humankind.

However, there is one point I must mention before closing: I believe the editorial decision to present the work as a running text, with all the footnote material gathered into a separate final chapter more than 50 pages long, detracts from the final result. Personally, I enjoy reading the footnotes because they reveal the author’s thought processes, even if they stray from the central line of thought. Even more, given my review copy was a PDF, I could not even keep said chapter open with one finger while bouncing back and forth. For all practical purposes, I missed out on the notes; now that I have finished reading and stumbled upon that chapter, I know I missed an important part of the enjoyment.

Planet DebianScarlett Gately Moore: A Bittersweet Farewell: My Final KDE Snap Release and the End of an Era

Today marks both a milestone and a turning point in my journey with open source software. I’m proud to announce the release of KDE Gear 25.08.0 as my final snap package release. You can find all the details about this exciting update at the official KDE announcement.

After much reflection and with a heavy heart, I’ve made the difficult decision to retire from most of my open source software work, including snap packaging. This wasn’t a choice I made lightly – it comes after months of rejections and silence in an industry I’ve loved and called home for over 20 years.

Passing the Torch

While I’m stepping back, I’m thrilled to share that the future of KDE snaps is in excellent hands. Carlos from the Neon team has been working tirelessly to set up snaps on the new infrastructure that KDE has made available. This means building snaps in KDE CI is now possible – a significant leap forward for the ecosystem. I’ll be helping Carlos get the pipelines properly configured to ensure a smooth transition.

Staying Connected (But Differently)

Though I’m stepping away from most development work, I won’t be disappearing entirely from the communities that have meant so much to me:

  • Kubuntu: I’ll remain available as a backup, though Rik is doing an absolutely fabulous job getting the latest and greatest KDE packages uploaded. The distribution is in capable hands.
  • Ubuntu Community Council: I’m continuing my involvement here because I’ve found myself genuinely enjoying the community side of things. There’s something deeply fulfilling about focusing on the human connections that make these projects possible.
  • Debian: I’ll likely be submitting for emeritus status, as I haven’t had the time to contribute meaningfully and want to be honest about my current capacity.

The Reality Behind the Decision

This transition isn’t just about career fatigue – it’s about financial reality. I’ve spent too many years working for free while struggling to pay my bills. The recent changes in the industry, particularly with AI transforming the web development landscape, have made things even more challenging. Getting traffic to websites now requires extensive social media work and marketing – all expected to be done without compensation.

My stint at webwork was good while it lasted, but the changing landscape has made it unsustainable. I’ve reached a point where I can’t continue doing free work when my family and I are struggling financially. It shouldn’t take breaking a limb to receive the donations needed to survive.

A Career That Meant Everything

These 20+ years in open source have been the defining chapter of my professional life. I’ve watched communities grow, technologies evolve, and witnessed firsthand the incredible things that happen when passionate people work together. The relationships I’ve built, the problems we’ve solved together, and the software we’ve created have been deeply meaningful.

But I also have to be honest about where I stand today: I cannot compete in the current job market. The industry has changed, and despite my experience and passion, the opportunities just aren’t there for someone in my situation.

Looking Forward

Making a career change after two decades is terrifying, but it’s also necessary. I need to find a path that can provide financial stability for my family while still allowing me to contribute meaningfully to the world.

If you’ve benefited from my work over the years and are in a position to help during this transition, I would be forever grateful for any support. Every contribution, no matter the size, helps ease this difficult period: https://gofund.me/a9c55d8f

Thank You

To everyone who has collaborated with me, tested my packages, filed bug reports, offered encouragement, or simply used the software I’ve helped maintain – thank you. You’ve made these 20+ years worthwhile, and you’ve been part of something bigger than any individual contribution.

The open source world will continue to thrive because it’s built on the collective passion of thousands of people like Carlos, Rik, and countless others who are carrying the torch forward. While my active development days are ending, the impact of this community will continue long into the future.

With sincere gratitude and fond farewells,

Scarlett Moore

Worse Than FailureThe C-Level Ticket

Everyone's got workplace woes. The clueless manager; the disruptive coworker; the cube walls that loom ever higher as the years pass, trapping whatever's left of your soul.

But sometimes, Satan really leaves his mark on a joint. I worked Tech Support there. This is my story. Who am I? Just call me Anonymous.


It starts at the top. A call came in from Lawrence Gibbs, the CEO himself, telling us that a conference room printer was, quote, "leaking." He didn't explain it, he just hung up. The boss ordered me out immediately, told me to step on it. I ignored the elevator, racing up the staircase floor after floor until I reached the dizzying summit of C-Town.

The Big Combo (1955)

There's less oxygen up there, I'm sure of it. My lungs ached and my head spun as I struggled to catch my breath. The fancy tile and high ceilings made a workaday schmuck like me feel daunted, unwelcome. All the same, I gathered myself and pushed on, if only to learn what on earth "leaking" meant in relation to a printer.

I followed the signs on the wall to the specified conference room. In there, the thermostat had been kicked down into the negatives. The cold cut through every layer of mandated business attire, straight to bone. The scene was thick with milling bystanders who hugged themselves and traded the occasional nervous glance. Gibbs was nowhere to be found.

Remembering my duty, I summoned my nerve. "Tech Support. Where's the printer?" I asked.

Several pointing fingers showed me the way. The large printer/scanner was situated against the far wall, flanking an even more enormous conference table. Upon rounding the table, I was greeted with a grim sight: dozens of sheets of paper strewn about the floor like blood spatter. Everyone was keeping their distance; no one paid me any mind as I knelt to gather the pages. There were 30 in all. Each one was blank on one side, and sported some kind of large, blotchy ring on the other. Lord knew I drank enough java to recognize a coffee mug stain when I saw one, but these weren't actual stains. They were printouts of stains.

The printer was plugged in. No sign of foul play. As I knelt there, unseen and unheeded, I clutched the ruined papers to my chest. Someone had wasted a tree and a good bit of toner, and for what? How'd it go down? Surely Gibbs knew more than he'd let on. The thought of seeking him out, demanding answers, set my heart to pounding. It was no good, I knew. He'd play coy all day and hand me my pink slip if I pushed too hard. As much as I wanted the truth, I had a stack of unpaid bills at home almost as thick as the one in my arms. I had to come up with something else.

There had to be witnesses among the bystanders. I stood up and glanced among them, seeking out any who would return eye contact. There: a woman who looked every bit as polished as everyone else. But for once, I got the feeling that what lay beneath the facade wasn't rotten.

With my eyes, I pleaded for answers.

Not here, her gaze pleaded back.

I was getting somewhere, I just had to arrange for some privacy. I hurried around the table again and weaved through bystanders toward the exit, hoping to beat it out of that icebox unnoticed. When I reached the threshold, I spotted Gibbs charging up the corridor, smoldering with entitlement. "Where the hell is Tech Support?!"

I froze a good distance away from the oncoming executive, whose voice I recognized from a thousand corporate presentations. Instead of putting me to sleep this time, it jolted down my spine like lightning. I had to think fast, or I was gonna lose my lead, if not my life.

"I'm right here, sir!" I said. "Be right back! I, uh, just need to find a folder for these papers."

"I've got one in my office."

A woman's voice issued calmly only a few feet behind me. I spun around, and it was her, all right, her demeanor as cool as our surroundings. She nodded my way. "Follow me."

My spirits soared. At that moment, I would've followed her into hell. Turning around, I had the pleasure of seeing Gibbs stop short with a glare of contempt. Then he waved us out of his sight.

Once we were out in the corridor, she took the lead, guiding me through the halls as I marveled at my luck. Eventually, she used her key card on one of the massive oak doors, and in we went.

You could've fit my entire apartment into that office. The place was spotless. Mini-fridge, espresso machine, even couches: none of it looked used. There were a couple of cardboard boxes piled up near her desk, which sat in front of a massive floor-to-ceiling window admitting ample sunlight.

She motioned toward one of the couches, inviting me to sit. I shook my head in reply. I was dying for a cigarette by that point, but I didn't dare light up within this sanctuary. Not sure what to expect next, I played it cautious, hovering close to the exit. "Thanks for the help back there, ma'am."

"Don't mention it." She walked back to her desk, opened up a drawer, and pulled out a brand-new manila folder. Then she returned to conversational distance and proffered it my way. "You're from Tech Support?"

There was pure curiosity in her voice, no disparagement, which was encouraging. I accepted the folder and stuffed the ruined pages inside. "That's right, ma'am."

She shook her head. "Please call me Leila. I started a few weeks ago. I'm the new head of HR."

Human Resources. That acronym, which usually put me on edge, somehow failed to raise my hackles. I'd have to keep vigilant, of course, but so far she seemed surprisingly OK. "Welcome aboard, Leila. I wish we were meeting in better circumstances." Duty beckoned. I hefted the folder. "Printers don't just leak."

"No." Leila glanced askance, grave.

"Tell me what you saw."

"Well ..." She shrugged helplessly. "Whenever Mr. Gibbs gets excited during a meeting, he tends to lean against the printer and rest his coffee mug on top of it. Today, he must've hit the Scan button with his elbow. I saw the scanner go off. It was so bright ..." She trailed off with a pained glance downward.

"I know this is hard," I told her when the silence stretched too long. "Please, continue."

Leila summoned her mettle. "After he leaned on the controls, those pages spilled out of the printer. And then ... then somehow, I have no idea, I swear! Somehow, all those pages were also emailed to me, Mr. Gibbs' assistant, and the entire board of directors!"

The shock hit me first. My eyes went wide and my jaw fell. But then I reminded myself, I'd seen just as crazy and worse as the result of a cat jumping on a keyboard. A feline doesn't know any better. A top-level executive, on the other hand, should know better.

"Sounds to me like the printer's just fine," I spoke with conviction. "What we have here is a CEO who thinks it's OK to treat an expensive piece of office equipment like his own personal fainting couch."

"It's terrible!" Leila's gaze burned with purpose. "I promise, I'll do everything I possibly can to make sure something like this never happens again!"

I smiled a gallows smile. "Not sure what anyone can do to fix this joint, but the offer's appreciated. Thanks again for your help."

Now that I'd seen this glimpse of better things, I selfishly wanted to linger. But it was high time I got outta there. I didn't wanna make her late for some meeting or waste her time. I backed up toward the door on feet that were reluctant to move.

Leila watched me with a look of concern. "Mr. Gibbs was the one who called Tech Support. I can't close your ticket for you; you'll have to get him to do it. What are you going to do?"

She cared. That made leaving even harder. "I dunno yet. I'll think of something."

I turned around, opened the massive door, and put myself on the other side of it in a hurry, using wall signs to backtrack to the conference room. Would our paths ever cross again? Unlikely. Someone like her was sure to get fired, or quit out of frustration, or get corrupted over time.

It was too painful to think about, so I forced myself to focus on the folder of wasted pages in my arms instead. It felt like a mile-long rap sheet. I was dealing with an alleged leader who went so far as to blame the material world around him rather than accept personal responsibility. I'd have to appeal to one or more of the things he actually cared about: himself, his bottom line, his sense of power.

By the time I returned to the conference room to face the CEO, I knew what to tell him. "You're right, sir, there's something very wrong with this printer. We're gonna take it out here and give it a thorough work-up."

That was how I was able to get the printer out of that conference room for good. Once it underwent "inspection" and "testing," it received a new home in a previously unused closet. Whenever Gibbs got to jawing in future meetings, all he could do was lean against the wall. Ticket closed.

Gibbs remained at the top, doing accursed things that trickled down to the roots of his accursed company. But at least from then on, every onboarding slideshow included a photo of one of the coffee ring printouts, with the title Respect the Equipment.

Thanks, Leila. I can live with that.


365 TomorrowsThe Nesoi Treaty

Author: Julian Miles, Staff Writer “It really is nice that world leaders would meet me at such short notice.” The President waves a hand towards the kilometre-long spaceship that had appeared without warning above Washington DC. “Your presence is impossible to conceal. Panic is escalating. We thought it best.” The garishly-dressed triped nods. “Given the […]

The post The Nesoi Treaty appeared first on 365tomorrows.

Cryptogram Encryption Backdoor in Military/Police Radios

I wrote about this in 2023. Here’s the story:

Three Dutch security analysts discovered the vulnerabilities­—five in total—­in a European radio standard called TETRA (Terrestrial Trunked Radio), which is used in radios made by Motorola, Damm, Hytera, and others. The standard has been used in radios since the ’90s, but the flaws remained unknown because encryption algorithms used in TETRA were kept secret until now.

There’s new news:

In 2023, Carlo Meijer, Wouter Bokslag, and Jos Wetzels of security firm Midnight Blue, based in the Netherlands, discovered vulnerabilities in encryption algorithms that are part of a European radio standard created by ETSI called TETRA (Terrestrial Trunked Radio), which has been baked into radio systems made by Motorola, Damm, Sepura, and others since the ’90s. The flaws remained unknown publicly until their disclosure, because ETSI refused for decades to let anyone examine the proprietary algorithms.

[…]

But now the same researchers have found that at least one implementation of the end-to-end encryption solution endorsed by ETSI has a similar issue that makes it equally vulnerable to eavesdropping. The encryption algorithm used for the device they examined starts with a 128-bit key, but this gets compressed to 56 bits before it encrypts traffic, making it easier to crack. It’s not clear who is using this implementation of the end-to-end encryption algorithm, nor if anyone using devices with the end-to-end encryption is aware of the security vulnerability in them.

[…]

The end-to-end encryption the researchers examined recently is designed to run on top of TETRA encryption algorithms.

The researchers found the issue with the end-to-end encryption (E2EE) only after extracting and reverse-engineering the E2EE algorithm used in a radio made by Sepura.

These seem to be deliberately implemented backdoors.
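
To put the reported key compression in perspective, here is a rough back-of-the-envelope comparison in Python. This is not from the researchers' work, and the guessing rate of 10^12 keys per second is purely an assumed figure for illustration:

# Compare the work needed to exhaust a 56-bit versus a 128-bit key space.
# The attacker speed below is an illustrative assumption, not a measured figure.
RATE = 1e12  # assumed key guesses per second

seconds_56 = 2 ** 56 / RATE     # time to try every 56-bit key
seconds_128 = 2 ** 128 / RATE   # time to try every 128-bit key

print(f"56-bit key space:  {2 ** 56:.2e} keys, ~{seconds_56 / 3600:.0f} hours to exhaust")
print(f"128-bit key space: {2 ** 128:.2e} keys, ~{seconds_128 / (3600 * 24 * 365):.1e} years to exhaust")

At that assumed rate, the reduced 56-bit key space can be searched in well under a day, while the full 128-bit space would take on the order of 10^19 years.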

Cryptogram Poor Password Choices

Look at this: McDonald’s chose the password “123456” for a major corporate system.

,

365 TomorrowsObsolete

Author: Macy Martus Lesson Incomplete – ERROR – Lesson Incomplete – ERROR The large letters zipped across the port-pad. Repeating the message Rowan had seen countless times since he began his school program. A message that indicated Failure again. From his sleep-room Rowan used his port-pad to view his mother in the sit-room. She was […]

The post Obsolete appeared first on 365tomorrows.

,

David BrinThis is EXACTLY the 1850s... and this arc will end the same.


Turning the U.S. military into a domestic police force. It's just one of many Project 2025 nightmares-come-true that sound like Nazi Germany or the USSR, right?

Only let me tell you about a time when eerily similar things happened in this very same republic. And back then, it took real pain, sacrifice and courage -- true citizens standing up in their millions -- to restore what has been inarguably the one best -- and possibly last -- hope of humankind.

== Rhyming - creepily - with the past ==

Do you recall just a couple of years ago, when the Foxite incantation howled that 500 new IRS auditors -- hired under the 2021 Pelosi Miracle Bills* to check rich tax cheaters -- would be a 'wave of jack-booted thugs'? Riiight. A few hundred nerdy CPAs hunting billions hidden in Cayman accounts by hedge and inheritance brats... that was looming police state.

But sure, those oligarchs did have reason to fear justice. So justice had to go.

Now, after firing those auditors -- and then all the statistics-keepers and Inspectors General and honest FBI folks -- the Trumpians cheer as many thousands of masked, tattooed ex-cons rampage across the country without ever showing warrants or ID... and now they are trying to turn the U.S. military into a branch of the insanity. (Ask retired, senior military officers what they think of this!)

What chafes me is that NO ONE is making parallels to the 1850s, when the Fugitive Slave Act and Supreme Court travesties unleashed raiding parties of masked, irregular southern cavalry to go storming across northern states, kidnapping and burning, protected by presidentially appointed marshals and armed troops.

Um, sound familiar? Read about that here! And even earlier parallels made by Robert Heinlein.

The net result of those ravaging, masked gangs, enforcing an evil 'law'? Northerners grew angered and radicalized. Enough to revive their dormant state militias, eventually providing troops needed to save the Union. Oh, and radicalized enough to elect Abraham Lincoln.

You want parallels? Our present mess is almost exactly the same! Except that this time the confederacy has its long-desired foreign backers. And they assume (as they always do, in every phase of this 250 year culture war) that smartypants modernists won't fight.

Sure. As Sam Houston warned his fellow Texans, Blue Americans are cooler of temperament and slower to wrath. But 'when they move it will be with ponderous, unstoppable momentum.'

Today's MAGA/Foxite/Putinist confederacy focuses its core spite not on race or gender, but on all fact-using professions, from scientists to statisticians to journalists, to the FBI and the U.S. military officer corps. And sure, those cool-blooded fact professionals etc. -- and the tens of millions of folks who believe in them and in things called facts -- are slow to anger.

But we are also the ones who know cyber, nano, bio, nuclear and the rest...

...and MAGA/confederate/KGB-puppets will NOT like us when we get mad.

But you go ahead. Enumerate for yourself the many parallels with the 1850s, including a Supreme Court Chief Justice who will be damned by all future generations as our era's Roger Taney.

And know that you may be asked to step up, at some point. Be undaunted. You are not made of lesser stuff than the heroes of Vicksburg and Gettysburg.


== If Obama had done this ==

Giggling MAGAs also delight in humiliating our allies. As in this case, summoning them NOT to a conference table, as equals, but to a desk meeting like flunkies before the Big Boss. 

Like 'apprentices' flattering the makeup-slathered 'star' of the show, to hold off his next pyrotechnic fit, for a little while.

Robert Heinlein - back in the 50s - predicted this coup by the always-simmering religio-fanatic, racist, anti-furriner, anti-science, anti-intellect and anti-fact wing of American life.

Mention to the Foxites that we NEED allies in this world, and they chortle! As they do when we say that we need facts. Mention that ALL of these allies stepped up to our aid, after the 9/11 attacks? They'll just shrug.

Above all, they smell the blood they have wanted since 1865, only this round the confederacy conquered Washington and has its long-sought foreign backers. But Putin and his "ex" commissars are in a hurry. At current trajectories, NATO will be able to stand on its own in just two years. The KGB's stooges here must finish their work or go down with him. And the Union - as in 1863 - is finding its competent leaders.


== Go, Gavin! ==

Oh how it galls them that California's governor is proving adept at getting under old Two-Scoops's skin! 

Look, he ain't perfect. But he's good at this. And Californians admire how he's led a progressive state to stability and the world's 4th economy, the most creative, scientific, un-corrupt and well-run commonwealth on Earth. And while he is no Bernie Sanders, Bernie loves Gavin!  And read-up on the 2021 Pelosi miracle bills (see below)* before you rant to us about 'moderate sellouts.'

Newsom's response to Texas super-gerrymandering is something that he - (and I and millions) - regret as necessary. I was proud that California, WA, OR, CO and NM led the nation in banning that foul cheat... and we will again! After the master cheaters back off. 

Over the longer run, I have offered methods to get around the current Supreme Court's outrageous support for gerrymandering. One concept, that would bypass all politicians, got approving attention from a senior U.S. Court of Appeals judge, who told me it ought to work! 

My collection of such potential maneuvers - many of them non- or even anti-partisan - can be found in Polemical Judo.

Meanwhile, my main crit of Newsom is that there are SO many more zinger memes that his people - or even the Lincoln Project - have never considered. And I have troves of them. A Mother Lode.

Troves.

====================================================

PS: While Newsom is doing great at getting under their skins... I still can't believe no one is pushing the video and lyrics... and lesson... of John Fogerty's song Vanz Can't Dance... about the eponymous thieving pig who stole Fogerty's fortune. Play it. Spread it. We need to be ready for when it is the turn of the next monster to assail our great experiment in sapient civilization.

=====

PPS: A genuine, real-but-not-blackmailed hyper right winger, even Bolton had enough when he realized he was working for an idiot who worked for the slightly relabeled and revived USSR, helping the "ex" commissars wage 5th column war against the USA and the West. I am way further down than Bolton on the revenge list of ol Two Scoops. But he'll get to me before most of you. And I am easier to take out, on the street. Still, I am willing to die on this hill. With the grim irony of knowing we need the widest coalition, and that John Bolton now wears blue.

=====

* Any of you who are tempted to rail against "DNC moderates" or "lukewarm semi-liberals" should try to actually know what you are talking about! Aided vigorously by the pragmatic left, like Bernie, Liz, AOC, Stacey, Pete etc. the despised Nancy Pelosi wrought miracles!  A set of bills that moved important things forward. And look at the last three DNC chairs before you snarl about "DNC establishment sellouts."

And YOU need to slap - hard! - any splitters who yowl that 'the party establishments are the same!' or that only a sharp turn to the left will lure back the millions of Blacks and Hispanics whose departure left Trump the presidency. The parties aren't the same at any level. And splitters only help the forces of darkness.

Know about those Miracle Bills, before you try any of that crap. Or STFU and let us rebuild a broad coalition.

365 TomorrowsIt’s the Principle of the Thing

Author: Don Nigroni Professor David Marshall is unique among mathematicians. No one but him understands his equations. But his micro and macro predictions were spot on so everyone assumed he knew what he was doing. Dave is my older brother. We usually discussed ferns and dragonflies, never mathematics. So, I thought it passing strange last […]

The post It’s the Principle of the Thing appeared first on 365tomorrows.

,

Planet DebianMatthias Geiger: Enforcing darkmode for QT programs under a non-QT based environment

I use sway as window manager on my main machine. As I prefer dark mode, I looked for a way to enable dark mode everywhere. For GTK-based applications this is fairly straightforward: just install whatever theme you prefer, and apply it. However, QT-based applications on a non-QT based desktop will look …

Planet DebianDaniel Lange: Polkitd (Policy Kit Daemon) in Trixie ... allowing remote users to suspend, reboot, power off the local system

As per the previous Polkit blog post, the PolicyKit framework has lost the ability to understand its own .pkla files; policies now need to be expressed in JavaScript .rules files.

To re-enable allowing remote users (think ssh) to reboot, hibernate, suspend or power off the local system, create a 10-shutdown-reboot.rules file in /etc/polkit-1/rules.d/:

polkit.addRule(function(action, subject) {
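    // Grant the power-management actions listed below to root and to members
    // of the "sudo" group, without requiring the session to be local.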
    if ((action.id == "org.freedesktop.login1.reboot-multiple-sessions" ||
         action.id == "org.freedesktop.login1.reboot" ||
         action.id == "org.freedesktop.login1.suspend-multiple-sessions" ||
         action.id == "org.freedesktop.login1.suspend" ||
         action.id == "org.freedesktop.login1.hibernate-multiple-sessions" ||
         action.id == "org.freedesktop.login1.hibernate" ||
         action.id == "org.freedesktop.login1.power-off-multiple-sessions" ||
         action.id == "org.freedesktop.login1.power-off") &&
        (subject.isInGroup("sudo") || (subject.user == "root")))
    {
        return polkit.Result.YES;
    }
});

and run systemctl restart polkit.

Planet DebianRussell Coker: Dell T320 H310 RAID and IT Mode

The Problem

Just over 2 years ago my Dell T320 server had a motherboard failure [1]. I recently bought another T320 that had been gutted (no drives, PSUs, or RAM) and put the bits from my old one in it.

I installed Debian and the resulting installation wouldn’t boot; I tried installing in both UEFI and BIOS modes with the same result. Then I realised that the disks I had installed were available even though I hadn’t gone through the RAID configuration (I usually make a separate RAID-0 for each disk to work best with BTRFS or ZFS). I tried changing the BIOS setting for SATA disks between “RAID” and “AHCI” modes, which didn’t change things, and realised that the BIOS setting in question probably applies to the SATA connector on the motherboard, and that the RAID card was in “IT” mode, which means that each disk is seen separately.

If you are using ZFS or BTRFS you don’t want to use a RAID-1, RAID-5, or RAID-6 on the hardware RAID controller: if there are different versions of the data on disks in the stripe then you want the filesystem to be able to work out which one is correct. To use “IT” mode you have to flash different, unsupported firmware onto the RAID controller, and then you either have to go to some extra effort to make it bootable or have a different device to boot from.

The Root Causes

Dell has no reason to support unusual firmware on their RAID controllers. Installing different firmware on a device that is designed for high availability carries some probability of data loss and, perhaps more importantly for Dell, some probability of customers returning hardware during the support period and acting innocent about why it doesn’t work. Dell also has a strong financial incentive to make it difficult to install Dell firmware on equivalent LSI cards from other vendors, as they don’t want customers to get all the benefits of iDRAC integration etc. without paying the Dell price premium.

All the other vendors have similar financial incentives so there is no official documentation or support on converting between different firmware images. Dell’s support for upgrading the Dell version is pretty good, but it aborts if it sees something different.

The Attempts

I tried following the instructions in this document to flash back to Dell firmware [2]. This document is about the H310 RAID card in my Dell T320, AKA an “LSI SAS 9211-8i”. The sas2flash.efi program didn’t seem to do anything: it returned immediately and didn’t give an error message.

This page gives a start on how to get inside the Dell firmware package, but the procedure doesn’t work [3]. It didn’t cover the case where sasdupie aborts with an error because it detects the current version as “00.00.00.00”, not something that the upgrade program is prepared to upgrade from. But it’s a place to start for someone who wants to try harder at this.

This forum post has some interesting information; I gave up before trying it, but it may be useful for someone else [4].

The Solution

Dell tower servers have, as a standard feature, an internal USB port for a boot device. So I created a boot image on a spare USB stick, installed it in that port, and it now loads the kernel and mounts the filesystem from a SATA hard drive. Once I got that working everything was fine. The Debian/Trixie installer would probably have allowed me to install an EFI device on the internal USB stick as part of the install if I had known what was going to happen.

The system is now fully working and ready to sell. Now I just need to find someone who wants “IT” mode on the RAID controller and hopefully is willing to pay extra for it.

Whatever I sell the system for it seems unlikely to cover the hours I spent working on this. But I learned some interesting things about RAID firmware and hopefully this blog post will be useful to other people, even if only to discourage them from trying to change firmware.

Worse Than FailureError'd: 8 Days a Week

"What word can spell with the letters housucops?" asks Mark R. "Sometimes AI hallucinations can be hard to find. Other times, they just kind of stand out..."

"Do I need more disks?" wonders Gordon "I'm replacing a machine which has only 2 GB of HDD. New one has 2 TB, but that may not be enough. Unless Thunar is lying." It's being replaced by an LLM.

"Greenmobility UX is a nightmare" complains an anonymous reader. "Just like last week's submission, do you want to cancel? Cancel or Leave?" This is not quite as bad as last week's.

Cinephile jeffphi rated this film two thumbs down. "This was a very boring preview, cannot recommend."

Malingering Manuel H. muses "Who doesn't like long weekends? Sometimes, one Sunday per week is just not enough, so just put a second one right after the first." I don't want to wait until Oktober for a second Sunday; hope we get one søøn.


365 TomorrowsTerminal Bar

Author: Susan Anthony Gertrude found him at the Terminal Bar and Grill. Broom by his side, sitting at the bar, where customers got their orders straight from the latest donkey serving that night. Terence motioned to her. She shuffled over. He nodded to the server and got a couple of beers. The donkey forced the […]

The post Terminal Bar appeared first on 365tomorrows.

Planet DebianReproducible Builds (diffoscope): diffoscope 305 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 305. This version includes the following changes:

[ Chris Lamb ]
* Upload to unstable/sid after the release of trixie.

You can find out more by visiting the project homepage.

Planet DebianReproducible Builds (diffoscope): diffoscope 304 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 304. This version includes the following changes:

[ Chris Lamb ]
* Do not run jsondiff on files over 100KiB as the algorithm runs in O(n^2)
  time. (Closes: reproducible-builds/diffoscope#414)
* Fix test after the upload of systemd-ukify 258~rc3 (vs. 258~rc2).
* Move from a mono-utils dependency to versioned "mono-devel | mono-utils"
  dependency, taking care to maintain the [!riscv64] architecture
  restriction. (Closes: #1111742)
* Use sed -ne over awk -F= to avoid mangling dependency lines containing
  equals signs (=), for example version restrictions.
* Use sed backreferences when generating debian/tests/control to avoid DRY
  violations.
* Update copyright years.

[ Martin Joerg ]
* Avoid a crash in the HTML presenter when page limit is None.

You can find out more by visiting the project homepage.

,

Cryptogram AI Agents Need Data Integrity

Think of the Web as a digital territory with its own social contract. In 2014, Tim Berners-Lee called for a “Magna Carta for the Web” to restore the balance of power between individuals and institutions. This mirrors the original charter’s purpose: ensuring that those who occupy a territory have a meaningful stake in its governance.

Web 3.0—the distributed, decentralized Web of tomorrow—is finally poised to change the Internet’s dynamic by returning ownership to data creators. This will change many things about what’s often described as the “CIA triad” of digital security: confidentiality, integrity, and availability. Of those three features, data integrity will become of paramount importance.

When we have agency in digital spaces, we naturally maintain their integrity—protecting them from deterioration and shaping them with intention. But in territories controlled by distant platforms, where we’re merely temporary visitors, that connection frays. A disconnect emerges between those who benefit from data and those who bear the consequences of compromised integrity. Like homeowners who care deeply about maintaining the property they own, users in the Web 3.0 paradigm will become stewards of their personal digital spaces.

This will be critical in a world where AI agents don’t just answer our questions but act on our behalf. These agents may execute financial transactions, coordinate complex workflows, and autonomously operate critical infrastructure, making decisions that ripple through entire industries. As digital agents become more autonomous and interconnected, the question is no longer whether we will trust AI but what that trust is built upon. In the new age we’re entering, the foundation isn’t intelligence or efficiency—it’s integrity.

What Is Data Integrity?

In information systems, integrity is the guarantee that data will not be modified without authorization, and that all transformations are verifiable throughout the data’s life cycle. While availability ensures that systems are running and confidentiality prevents unauthorized access, integrity focuses on whether information is accurate, unaltered, and consistent across systems and over time.

It’s not a new idea. The undo button, which prevents accidental data loss, is an integrity feature. So is the reboot process, which returns a computer to a known good state. Checksums are an integrity feature; so are verifications of network transmission. Without integrity, security measures can backfire. Encrypting corrupted data just locks in errors. Systems that score high marks for availability but spread misinformation just become amplifiers of risk.
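
As a small, concrete illustration of the idea, a checksum records what data looked like when it was known to be good, so any later change can be detected. The sketch below uses Python's standard hashlib module; the file name is hypothetical:

import hashlib

def sha256_of(path):
    # Hash the file in chunks so large files need not fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record the checksum while the data is known to be good...
expected = sha256_of("training_data.csv")   # hypothetical file name

# ...and verify it again before the data is used.
if sha256_of("training_data.csv") != expected:
    raise RuntimeError("data changed since the checksum was recorded")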

All IT systems require some form of data integrity, but the need for it is especially pronounced in two areas today. First: Internet of Things devices interact directly with the physical world, so corrupted input or output can result in real-world harm. Second: AI systems are only as good as the integrity of the data they’re trained on, and the integrity of their decision-making processes. If that foundation is shaky, the results will be too.

Integrity manifests in four key areas. The first, input integrity, concerns the quality and authenticity of data entering a system. When this fails, consequences can be severe. In 2021, Facebook’s global outage was triggered by a single mistaken command—an input error missed by automated systems. Protecting input integrity requires robust authentication of data sources, cryptographic signing of sensor data, and diversity in input channels for cross-validation.
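
As a minimal sketch of what "cryptographic signing of sensor data" can look like in practice, the following Python example uses the third-party cryptography package (pip install cryptography); the sensor record shown is a made-up example, not something from the article:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()   # held by the data producer
verify_key = signing_key.public_key()        # published alongside the data

record = b'{"sensor": "temp-01", "value": 21.7}'   # hypothetical sensor reading
signature = signing_key.sign(record)

# A consumer checks the signature before trusting the record.
try:
    verify_key.verify(signature, record)
    print("record accepted: source and contents verified")
except InvalidSignature:
    print("record rejected: altered or not signed by the expected key")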

The second issue is processing integrity, which ensures that systems transform inputs into outputs correctly. In 2003, the U.S.-Canada blackout affected 55 million people when a control-room process failed to refresh properly, resulting in damages exceeding US $6 billion. Safeguarding processing integrity means formally verifying algorithms, cryptographically protecting models, and monitoring systems for anomalous behavior.

Storage integrity covers the correctness of information as it’s stored and communicated. In 2023, the Federal Aviation Administration was forced to halt all U.S. departing flights because of a corrupted database file. Addressing this risk requires cryptographic approaches that make any modification computationally infeasible without detection, distributed storage systems to prevent single points of failure, and rigorous backup procedures.

Finally, contextual integrity addresses the appropriate flow of information according to the norms of its larger context. It’s not enough for data to be accurate; it must also be used in ways that respect expectations and boundaries. For example, if a smart speaker listens in on casual family conversations and uses the data to build advertising profiles, that action would violate the expected boundaries of data collection. Preserving contextual integrity requires clear data-governance policies, principles that limit the use of data to its intended purposes, and mechanisms for enforcing information-flow constraints.

As AI systems increasingly make critical decisions with reduced human oversight, all these dimensions of integrity become critical.

The Need for Integrity in Web 3.0

As the digital landscape has shifted from Web 1.0 to Web 2.0 and now evolves toward Web 3.0, we’ve seen each era bring a different emphasis in the CIA triad of confidentiality, integrity, and availability.

Returning to our home metaphor: When simply having shelter is what matters most, availability takes priority—the house must exist and be functional. Once that foundation is secure, confidentiality becomes important—you need locks on your doors to keep others out. Only after these basics are established do you begin to consider integrity, to ensure that what’s inside the house remains trustworthy, unaltered, and consistent over time.

Web 1.0 of the 1990s prioritized making information available. Organizations digitized their content, putting it out there for anyone to access. In Web 2.0, the Web of today, platforms for e-commerce, social media, and cloud computing prioritize confidentiality, as personal data has become the Internet’s currency.

Somehow, integrity was largely lost along the way. In our current Web architecture, where control is centralized and removed from individual users, the concern for integrity has diminished. The massive social media platforms have created environments where no one feels responsible for the truthfulness or quality of what circulates.

Web 3.0 is poised to change this dynamic by returning ownership to the data owners. This is not speculative; it’s already emerging. For example, ActivityPub, the protocol behind decentralized social networks like Mastodon, combines content sharing with built-in attribution. Tim Berners-Lee’s Solid protocol restructures the Web around personal data pods with granular access controls.

These technologies prioritize integrity through cryptographic verification that proves authorship, decentralized architectures that eliminate vulnerable central authorities, machine-readable semantics that make meaning explicit—structured data formats that allow computers to understand participants and actions, such as “Alice performed surgery on Bob”—and transparent governance where rules are visible to all. As AI systems become more autonomous, communicating directly with one another via standardized protocols, these integrity controls will be essential for maintaining trust.

Why Data Integrity Matters in AI

For AI systems, integrity is crucial in four domains. The first is decision quality. With AI increasingly contributing to decision-making in health care, justice, and finance, the integrity of both data and models’ actions directly impacts human welfare. Accountability is the second domain. Understanding the causes of failures requires reliable logging, audit trails, and system records.

The third domain is the security relationships between components. Many authentication systems rely on the integrity of identity information and cryptographic keys. If these elements are compromised, malicious agents could impersonate trusted systems, potentially creating cascading failures as AI agents interact and make decisions based on corrupted credentials.

Finally, integrity matters in our public definitions of safety. Governments worldwide are introducing rules for AI that focus on data accuracy, transparent algorithms, and verifiable claims about system behavior. Integrity provides the basis for meeting these legal obligations.

The importance of integrity only grows as AI systems are entrusted with more critical applications and operate with less human oversight. While people can sometimes detect integrity lapses, autonomous systems may not only miss warning signs—they may exponentially increase the severity of breaches. Without assurances of integrity, organizations will not trust AI systems for important tasks, and we won’t realize the full potential of AI.

How to Build AI Systems With Integrity

Imagine an AI system as a home we’re building together. The integrity of this home doesn’t rest on a single security feature but on the thoughtful integration of many elements: solid foundations, well-constructed walls, clear pathways between rooms, and shared agreements about how spaces will be used.

We begin by laying the cornerstone: cryptographic verification. Digital signatures ensure that data lineage is traceable, much like a title deed proves ownership. Decentralized identifiers act as digital passports, allowing components to prove identity independently. When the front door of our AI home recognizes visitors through their own keys rather than through a vulnerable central doorman, we create resilience in the architecture of trust.

Formal verification methods enable us to mathematically prove the structural integrity of critical components, ensuring that systems can withstand pressures placed upon them—especially in high-stakes domains where lives may depend on an AI’s decision.

Just as a well-designed home creates separate spaces, trustworthy AI systems are built with thoughtful compartmentalization. We don’t rely on a single barrier but rather layer them to limit how problems in one area might affect others. Just as a kitchen fire is contained by fire doors and independent smoke alarms, training data is separated from the AI’s inferences and output to limit the impact of any single failure or breach.

Throughout this AI home, we build transparency into the design: clear pathways from input to output are the equivalent of large windows that allow light into every corner. We install monitoring systems that continuously check for weaknesses, alerting us before small issues become catastrophic failures.

But a home isn’t just a physical structure, it’s also the agreements we make about how to live within it. Our governance frameworks act as these shared understandings. Before welcoming new residents, we provide them with certification standards. Just as landlords conduct credit checks, we conduct integrity assessments to evaluate newcomers. And we strive to be good neighbors, aligning our community agreements with broader societal expectations. Perhaps most important, we recognize that our AI home will shelter diverse individuals with varying needs. Our governance structures must reflect this diversity, bringing many stakeholders to the table. A truly trustworthy system cannot be designed only for its builders but must serve anyone authorized to eventually call it home.

That’s how we’ll create AI systems worthy of trust: not by blindly believing in their perfection but because we’ve intentionally designed them with integrity controls at every level.

A Challenge of Language

Unlike other security properties such as availability (“available”) or privacy (“private”), “integrity” has no common adjective form. This makes it hard to talk about. It turns out that there is a word in English: “integrous.” The Oxford English Dictionary records the word in use in the mid-1600s but now declares it obsolete.

We believe that the word needs to be revived. We need the ability to describe a system with integrity. We must be able to talk about integrous systems design.

The Road Ahead

Ensuring integrity in AI presents formidable challenges. As models grow larger and more complex, maintaining integrity without sacrificing performance becomes difficult. Integrity controls often require computational resources that can slow systems down—particularly challenging for real-time applications. Another concern is that emerging technologies like quantum computing threaten current cryptographic protections. Additionally, the distributed nature of modern AI—which relies on vast ecosystems of libraries, frameworks, and services—presents a large attack surface.

Beyond technology, integrity depends heavily on social factors. Companies often prioritize speed to market over robust integrity controls. Development teams may lack specialized knowledge for implementing these controls, and may find it particularly difficult to integrate them into legacy systems. And while some governments have begun establishing regulations for aspects of AI, we need worldwide alignment on governance for AI integrity.

Addressing these challenges requires sustained research into verifying and enforcing integrity, as well as recovering from breaches. Priority areas include fault-tolerant algorithms for distributed learning, verifiable computation on encrypted data, techniques that maintain integrity despite adversarial attacks, and standardized metrics for certification. We also need interfaces that clearly communicate integrity status to human overseers.

As AI systems become more powerful and pervasive, the stakes for integrity have never been higher. We are entering an era where machine-to-machine interactions and autonomous agents will operate with reduced human oversight and make decisions with profound impacts.

The good news is that the tools for building systems with integrity already exist. What’s needed is a shift in mind-set: from treating integrity as an afterthought to accepting that it’s the core organizing principle of AI security.

The next era of technology will be defined not by what AI can do, but by whether we can trust it to know or especially to do what’s right. Integrity—in all its dimensions—will determine the answer.

Sidebar: Examples of Integrity Failures

Ariane 5 Rocket (1996)
Processing integrity failure
A 64-bit floating-point velocity value was converted to a 16-bit integer, causing an overflow. The corrupted data triggered catastrophic course corrections that forced the US $370 million rocket to self-destruct.

NASA Mars Climate Orbiter (1999)
Processing integrity failure
Lockheed Martin’s software calculated thrust in pound-seconds, while NASA’s navigation software expected newton-seconds. The failure caused the $328 million spacecraft to burn up in the Mars atmosphere.

Microsoft’s Tay Chatbot (2016)
Processing integrity failure
Released on Twitter, Microsoft’s AI chatbot was vulnerable to a “repeat after me” command, which meant it would echo any offensive content fed to it.

Boeing 737 MAX (2018)
Input integrity failure
Faulty sensor data caused an automated flight-control system to repeatedly push the airplane’s nose down, leading to a fatal crash.

SolarWinds Supply-Chain Attack (2020)
Storage integrity failure
Russian hackers compromised the process that SolarWinds used to package its software, injecting malicious code that was distributed to 18,000 customers, including nine federal agencies. The hack remained undetected for 14 months.

ChatGPT Data Leak (2023)
Storage integrity failure
A bug in OpenAI’s ChatGPT mixed different users’ conversation histories. Users suddenly had other people’s chats appear in their interfaces with no way to prove the conversations weren’t theirs.

Midjourney Bias (2023)
Contextual integrity failure
Users discovered that the AI image generator often produced biased images of people, such as showing white men as CEOs regardless of the prompt. The AI tool didn’t accurately reflect the context requested by the users.

Prompt Injection Attacks (2023–)
Input integrity failure
Attackers embedded hidden prompts in emails, documents, and websites that hijacked AI assistants, causing them to treat malicious instructions as legitimate commands.

CrowdStrike Outage (2024)
Processing integrity failure
A faulty software update from CrowdStrike caused 8.5 million Windows computers worldwide to crash—grounding flights, shutting down hospitals, and disrupting banks. The update, which contained a software logic error, hadn’t gone through full testing protocols.

Voice-Clone Scams (2024)
Input and processing integrity failure
Scammers used AI-powered voice-cloning tools to mimic the voices of victims’ family members, tricking people into sending money. These scams succeeded because neither phone systems nor victims identified the AI-generated voice as fake.

This essay was written with Davi Ottenheimer, and originally appeared in IEEE Spectrum.

Worse Than FailureA Countable

Once upon a time, when the Web was young, if you wanted to be a cool kid, you absolutely needed two things on your website: a guestbook for people to sign, and a hit counter showing how many people had visited your Geocities page hosting your Star Trek fan fiction.

These days, we don't see them as often, but companies still like to track the information, especially when it comes to counting downloads. So when Justin started on a new team and saw a download count in their analytics, he didn't think much of it at all. Nor did he think much about it when he saw the download count displayed on the download page.

Another thing that Justin didn't think much about was big piles of commits getting merged in overnight, at least not at first. But each morning, Justin needed to pull in a long litany of changes from a user named "MrStinky". For the first few weeks, Justin was too preoccupied with getting his feet under him, so he didn't think about it too much.

But eventually, he couldn't ignore what he saw in the git logs.

docs: update download count to 51741
docs: update download count to 51740
docs: update download count to 51738

And each commit was exactly what the name implied, a diff like:

- 51740
+ 51741

Each time a user clicked the download link, a ping was sent to their analytics system. Throughout the day, the bot "MrStinky" would query the analytics tool, and create new commits that updated the counter. Overnight, it would bundle those commits into a merge request, approve the request, merge the changes, and then redeploy what was at the tip of main.
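
For the curious, the bot's nightly counter update presumably boils down to something like the following sketch. This is a hypothetical reconstruction based on the behavior described above; the analytics lookup, file path, and repository layout are stand-ins, not the team's actual code.

import subprocess

def get_download_count() -> int:
    # Stand-in: the real bot queries the team's analytics tool here.
    return 51741

def commit_new_count(repo: str, counter_file: str = "docs/download-count.txt") -> None:
    count = get_download_count()
    with open(f"{repo}/{counter_file}", "w") as f:
        f.write(f"{count}\n")                     # the entire "content change"
    subprocess.run(
        ["git", "-C", repo, "commit", "-am", f"docs: update download count to {count}"],
        check=True,
    )
    # Overnight, the accumulated commits get bundled into a merge request,
    # approved, merged, and redeployed from the tip of main.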

"But, WHY?" Justin asked his peers.

One of them just shrugged. "It seemed like the easiest and fastest way at the time?"

"I wanted to wire Mr Stinky up to our content management system's database, but just never got around to it. And this works fine," said another.

Much like the rest of the team, Justin found that there were bigger issues to tackle.

365 TomorrowsTime Skipper

Author: Mark Connelly Dr. Bruner reviewed the patient chart on her laptop as Derek Anders sat across from her, draping his jacket on the arm of the chair. “Dr. Rizzo said you reported new symptoms?” she asked. “Yes,” he answered, leaning forward. “I think I’m having mini seizures or something. My time perception is off.” […]

The post Time Skipper appeared first on 365tomorrows.

Krebs on SecuritySIM-Swapper, Scattered Spider Hacker Gets 10 Years

A 20-year-old Florida man at the center of a prolific cybercrime group known as “Scattered Spider” was sentenced to 10 years in federal prison today, and ordered to pay roughly $13 million in restitution to victims.

Noah Michael Urban of Palm Coast, Fla. pleaded guilty in April 2025 to charges of wire fraud and conspiracy. Florida prosecutors alleged Urban conspired with others to steal at least $800,000 from five victims via SIM-swapping attacks that diverted their mobile phone calls and text messages to devices controlled by Urban and his co-conspirators.

A booking photo of Noah Michael Urban released by the Volusia County Sheriff.

Although prosecutors had asked for Urban to serve eight years, Jacksonville news outlet News4Jax.com reports the federal judge in the case today opted to sentence Urban to 120 months in federal prison, ordering him to pay $13 million in restitution and undergo three years of supervised release after his sentence is completed.

In November 2024 Urban was charged by federal prosecutors in Los Angeles as one of five members of Scattered Spider (a.k.a. “Oktapus,” “Scatter Swine” and “UNC3944”), which specialized in SMS and voice phishing attacks that tricked employees at victim companies into entering their credentials and one-time passcodes at phishing websites. Urban pleaded guilty to one count of conspiracy to commit wire fraud in the California case, and the $13 million in restitution is intended to cover victims from both cases.

The targeted SMS scams spanned several months during the summer of 2022, asking employees to click a link and log in at a website that mimicked their employer’s Okta authentication page. Some SMS phishing messages told employees their VPN credentials were expiring and needed to be changed; other missives advised employees about changes to their upcoming work schedule.

That phishing spree netted Urban and others access to more than 130 companies, including Twilio, LastPass, DoorDash, MailChimp, and Plex. The government says the group used that access to steal proprietary company data and customer information, and that members also phished people to steal millions of dollars worth of cryptocurrency.

For many years, Urban’s online hacker aliases “King Bob” and “Sosa” were fixtures of the Com, a mostly Telegram and Discord-based community of English-speaking cybercriminals wherein hackers boast loudly about high-profile exploits and hacks that almost invariably begin with social engineering. King Bob constantly bragged on the Com about stealing unreleased rap music recordings from popular artists, presumably through SIM-swapping attacks. Many of those purloined tracks or “grails” he later sold or gave away on forums.

Noah “King Bob” Urban, posting to Twitter/X around the time of his sentencing today.

Sosa also was active in a particularly destructive group of accomplished criminal SIM-swappers known as “Star Fraud.” Cyberscoop’s AJ Vicens reported in 2023 that individuals within Star Fraud were likely involved in the high-profile Caesars Entertainment and MGM Resorts extortion attacks that same year.

The Star Fraud SIM-swapping group gained the ability to temporarily move targeted mobile numbers to devices they controlled by constantly phishing employees of the major mobile providers. In February 2023, KrebsOnSecurity published data taken from the Telegram channels for Star Fraud and two other SIM-swapping groups showing these crooks focused on SIM-swapping T-Mobile customers, and that they collectively claimed internal access to T-Mobile on 100 separate occasions over a 7-month period in 2022.

Reached via one of his King Bob accounts on Twitter/X, Urban called the sentence unjust, and said the judge in his case discounted his age as a factor.

“The judge purposefully ignored my age as a factor because of the fact another Scattered Spider member hacked him personally during the course of my case,” Urban said in reply to questions, noting that he was sending the messages from a Florida county jail. “He should have been removed as a judge much earlier on. But staying in county jail is torture.”

A court transcript (PDF) from a status hearing in February 2025 shows Urban was telling the truth about the hacking incident that happened while he was in federal custody. It involved an intrusion into a magistrate judge’s email account, where a copy of Urban’s sealed indictment was stolen. The judge told attorneys for both sides that a co-defendant in the California case was trying to find out about Mr. Urban’s activity in the Florida case.

“What it ultimately turned into was a big faux pas,” Judge Harvey E. Schlesinger said. “The Court’s password…business is handled by an outside contractor. And somebody called the outside contractor representing Judge Toomey saying, ‘I need a password change.’ And they gave out the password change. That’s how whoever was making the phone call got into the court.”

Planet DebianMatthew Palmer: Progress on my open source funding experiment

When I recently announced that I was starting an open source crowd-funding experiment, I wasn’t sure what would happen. Perhaps there’d be radio silence, or a huge out-pouring of interest from people who wanted to see more open source code in the world. What’s happened so far has been… interesting.

I chose to focus on action-validator because it’s got a number of open feature requests, and it solves a common problem that people have. The thing is, I’ve developed and released a lot of open source over the multiple decades I’ve been noodling around with computers. Much of that has been of use to many people, the overwhelming majority of whom I will never, ever meet, hear from, or even know that I’ve helped them out.

One person, however, I do know about – a generous soul named Andy, who (as far as I know) doesn’t use action-validator, but who does use another tool I wrote some years ago: lvmsync. It’s somewhat niche, essentially “rsync for LVM-backed block devices”, so I’m slightly surprised that it’s my most-starred repository, at nearly 400(!) stars. Andy is one of the people who finds it useful, and he was kind enough to reach out and offer a contribution in thanks for lvmsync existing.

In the spirit of my open source code-fund, I applied Andy’s contribution to the “general” pool, and as a result have just released action-validator v0.8.0, which supports a new --rootdir command-line option, fixing action-validator issue #54. Everyone who uses --rootdir in their action-validator runs has Andy to thank, and I thank him too.

This is, of course, still early days in my experiment. You can be like Andy, and make the open source world a better place, by contributing to my code-fund, and you can get your name up in lights, too. Whether you’re an action-validator user, have gotten utility from any of the other things I’ve written, or just want to see more open source code in the world, your contribution is greatly appreciated.

,

Planet DebianDirk Eddelbuettel: x13binary 1.1.61.1 on CRAN: Micro Fix

The x13binary team is happy to share the availability of Release 1.1.61.1 of the x13binary package providing the X-13ARIMA-SEATS program by the US Census Bureau which arrived on CRAN earlier today.

This release responds to a recent change in gfortran version 15 which now picks up a missing comma in a Fortran format string for printing output. The change is literally a one-char addition which we also reported upstream. At the same time this release also updates one README.md URL to an archive.org URL of an apparently deleted reference. There is now also an updated upstream release 1.1-62 which we should package next.

Courtesy of my CRANberries, there is also a diffstat report for this release showing changes to the previous release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Planet DebianAntoine Beaupré: Encrypting a Debian install with UKI

I originally setup a machine without any full disk encryption, then somehow regretted it quickly after. My original reasoning was that this was a "play" machine so I wanted as few restrictions on accessing the machine as possible, which meant removing passwords, mostly.

I actually ended up having a user password, but disabled the lock screen. Then I started using the device to manage my photo collection, and suddenly there was a lot of "confidential" information on the device that I didn't want to store in clear text anymore.

Pre-requisites

So, how does one convert an existing install from plain text to full disk encryption? One way is to backup to an external drive, re-partition everything and copy things back, but that's slow and boring. Besides, cryptsetup has a cryptsetup-reencrypt command, surely we can do this in place?

Having not set aside enough room for /boot, I briefly considered an "encrypted /boot" configuration and conversion (e.g. with this guide) but remembered grub's support for this is flaky, at best, so I figured I would try something else.

Here, I'm going to guide you through how I first converted from grub to systemd-boot and a UKI kernel, then re-encrypted my main partition.

Note that secureboot is disabled here, see further discussion below.

systemd-boot and Unified Kernel Image conversion

systemd folks have been developing UKI ("unified kernel image") to ship kernels. The way this works is that the kernel, the initrd, and a UEFI boot stub are bundled into a single portable executable that lives in the EFI partition, as opposed to /boot. This neatly solves my problem, because I already have such a clear-text partition and won't need to re-partition my disk to convert.

Debian has started some preliminary support for this. It's not default, but I found this guide from Vasudeva Kamath which was pretty complete. Since the guide assumes some previous configuration, I had to adapt it to my case.

Here's how I did the conversion to both systemd-boot and UKI, all at once. I could have perhaps done it one at a time, but doing both at once works fine.

Before you start, make sure secureboot is disabled, see the discussion below.

  1. Install systemd tools:

    apt install systemd-ukify systemd-boot
    
  2. Configure systemd-ukify, in /etc/kernel/install.conf:

    layout=uki
    initrd_generator=dracut
    uki_generator=ukify
    

    TODO: it doesn't look like this generates an initrd with dracut, do we care?

  3. Configure the kernel boot arguments with the following in /etc/kernel/uki.conf:

    [UKI]
    Cmdline=@/etc/kernel/cmdline
    

    The /etc/kernel/cmdline file doesn't actually exist here, and that's fine. Defaults are okay, as the image gets generated from your current /proc/cmdline. Check your /etc/default/grub and /proc/cmdline if you are unsure. You'll see the generated arguments in bootctl list below.

  4. Build the image:

    dpkg-reconfigure linux-image-$(uname -r)
    
  5. Check the boot options:

    bootctl list
    

    Look for a Type #2 (.efi) entry for the kernel.

  6. Reboot:

    reboot
    

You can tell you have booted with systemd-boot because (a) you won't see grub and (b) the /proc/cmdline will reflect the configuration listed in bootctl list. In my case, a systemd.machine_id variable is set there, and not in grub (compare with /boot/grub/grub.cfg).

By default, the systemd-boot loader just boots, without a menu. You can force the menu to show up by un-commenting the timeout line in /boot/efi/loader/loader.conf, by hitting keys during boot (e.g. hitting "space" repeatedly), or by calling:

systemctl reboot --boot-loader-menu=0

See the systemd-boot(7) manual for details on that.

I did not go through the secureboot process, presumably because I had already disabled secureboot. This is trickier: because one needs a "special key" to sign the UKI image, one would need the collaboration of debian.org to get this working out of the box with the keys shipped onboard most computers.

In other words, if you want to make this work with secureboot enabled on your computer, you'll need to figure out how to sign the generated images before rebooting here, because otherwise you will break your computer. Otherwise, follow these guides:

Re-encrypting root filesystem

Now that we have a way to boot an encrypted filesystem, we can switch to LUKS for our filesystem. Note that you can probably follow this guide if, somehow, you managed to make grub work with your LUKS setup, although as this guide shows, you'd need to downgrade the cryptographic algorithms, which seems like a bad tradeoff.

We're using cryptsetup-reencrypt for this which, amazingly, supports re-encrypting devices on the fly. The trick is it needs free space at the end of the partition for the LUKS header (which, I guess, makes it a footer), so we need to resize the filesystem to leave room for that, which is the trickiest bit.

This is a possibly destructive behavior. Be sure your backups are up to date, or be ready to lose all data on the device.

We assume 512 byte sectors here. Check your sector size with fdisk -l and adjust accordingly.

  1. Before you perform the procedure, make sure requirements are installed:

    apt install cryptsetup systemd-cryptsetup cryptsetup-initramfs
    

    Note that this requires network access, of course.

  2. Reboot into a live image. I like GRML, but any Debian live image will work, possibly including the installer.

  3. First, calculate how many sectors to free up for the LUKS header

    qalc> 32Mibyte / ( 512 byte )
    
      (32 mebibytes) / (512 bytes) = 65536
    
  4. Find the sector sizes of the Linux partitions:

    fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 }'
    

    For example, here's the output with a /boot and / filesystem:

    $ sudo fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 }'
    /dev/nvme0n1p2 999424
    /dev/nvme0n1p3 3904979087
    
  5. Subtract the result of step 3 from the sizes found in step 4:

    qalc> set precision 100
    qalc> 3904979087 - 65536
    

    Or, the previous step and this one in a single line:

    fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 - 65536 }'
    
  6. Recheck filesystem:

    e2fsck -f /dev/nvme0n1p2
    
  7. Resize filesystem:

    resize2fs /dev/nvme0n1p2 $(fdisk -l /dev/nvme0n1 | awk '/nvme0n1p2/ { print $4 - 65536 }')s
    

    Notice the trailing s here: it makes resize2fs interpret the number as 512-byte sectors, as opposed to the default (4k blocks).

  8. Re-encrypt filesystem:

    cryptsetup reencrypt --encrypt /dev/nvme0n1p2 --reduce-device-size=32M
    

    This is it! This is the most important step! Make sure your laptop is plugged in and try not to interrupt it. This can, apparently, be resumed without problem, but I'd hate to show you how.

    This will show progress information like:

    Progress:   2.4% ETA 23m45s,      53GiB written, speed   1.3 GiB/s
    

    Wait until the ETA has passed.

  9. Open and mount the encrypted filesystem and mount the EFI system partition (ESP):

    cryptsetup open /dev/nvme0n1p2 crypt
    mount /dev/mapper/crypt /mnt
    mount /dev/nvme0n1p1 /mnt/boot/efi
    

    If this fails, now is the time to consider restoring from backups.

  10. Enter the chroot

    for fs in proc sys dev ; do
      mount --bind /$fs /mnt/$fs
    done
    chroot /mnt
    

    Pro tip: this can be done in one step in GRML with:

    grml-chroot /mnt bash
    
  11. Generate a crypttab:

    echo crypt_dev_nvme0n1p2 UUID=$(blkid -o value -s UUID /dev/nvme0n1p2) none luks,discard >> /etc/crypttab
    
  12. Adjust root filesystem in /etc/fstab, make sure you have a line like this:

    /dev/mapper/crypt_dev_nvme0n1p2 /               ext4    errors=remount-ro 0       1
    

    If you were already using a UUID entry for this, there's nothing to change!

  13. Configure the root filesystem in the initrd:

    echo root=/dev/mapper/crypt_dev_nvme0n1p2 > /etc/kernel/cmdline
    
  14. Regenerate UKI:

    dpkg-reconfigure linux-image-$(uname -r)
    

    Be careful here! systemd-boot inherits the command line from the system where it is generated, so this will possibly feature some unsupported commands from your boot environment. In my case GRML had a couple of those, which broke the boot. It's still possible to work around this issue by tweaking the arguments at boot time, that said.

  15. Exit chroot and reboot

    exit
    reboot
    

Some of the ideas in this section were taken from this guide, but the procedure was mostly rewritten to simplify the work. My guide also avoids the grub hacks and doesn't depend on a specific initrd system (the guide uses initramfs-tools and grub, while I, above, switched to dracut and systemd-boot). RHEL also has a similar guide, perhaps even better.

Somehow I had set up this system without LVM at all, which simplifies things a bit (as I don't need to also resize the physical volume/volume groups), but if you have LVM, you need to tweak this to also resize the LVM bits. The RHEL guide has some information about this.

Planet DebianSven Hoexter: Istio: Connect via a VirtualService to External IP Addresses

Rant - I've a theory about istio: It feels like software designed by people who hate the IT industry and wanted revenge. So they wrote software with so many odd points of traffic interception (e.g. SNI based traffic re-routing) that it's completely impossible to debug. If you roll that out into an average company you completely halt the IT operations for something like a year.

On topic: I've two endpoints (IP addresses serving HTTPS on a non-standard port) outside of kubernetes, and I need some rudimentary balancing of traffic. Since istio is already here one can leverage that, combining the resource kinds ServiceEntry, DestinationRule and VirtualService to publish a service name within the istio mesh. Since we do not have host names and DNS for those endpoint IP addresses we need to rely on istio itself to intercept the DNS traffic and deliver a virtual IP address to access the service. The sample given here leverages the exportTo configuration to make the service name only available in the same namespace. If you need broader access remove or adjust that. As usual in kubernetes you can also resolve the name as a FQDN, e.g. acme-service.mynamespace.svc.cluster.local.

---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: acme-service
spec:
  hosts:
    - acme-service
  ports:
    - number: 12345
      name: acmeglue
      protocol: HTTPS
  resolution: STATIC
  location: MESH_EXTERNAL
  # limit the availability to the namespace this resource is applied to
  # if you need cross namespace access remove all the `exportTo`s in here
  exportTo:
    - "."
  # use `endpoints:` in this setup, `addresses:` did not work
  endpoints:
    # region1
    - address: 192.168.0.1
      ports:
        acmeglue: 12345
    # region2
    - address: 10.60.48.50
      ports:
        acmeglue: 12345
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: acme-service
spec:
  host: acme-service
  # limit the availability to the namespace this resource is applied to
  exportTo:
    - "."
  trafficPolicy:
    loadBalancer:
      simple: LEAST_REQUEST
    connectionPool:
      tcp:
        tcpKeepalive:
          # We have GCP service attachments involved with a 20m idle timeout
          # https://cloud.google.com/vpc/docs/about-vpc-hosted-services#nat-subnets-other
          time: 600s
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: acme-service
spec:
  hosts:
    - acme-service
  # limit the availability to the namespace this resource is applied to
  exportTo:
    - "."
  http:
  - route:
    - destination:
        host: acme-service
    retries:
      attempts: 2
      perTryTimeout: 2s
      retryOn: connect-failure,5xx
---
# Demo Deployment, istio configuration is the important part
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foobar
  labels:
    app: foobar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foobar
  template:
    metadata:
      labels:
        app: foobar
        # enable istio sidecar
        sidecar.istio.io/inject: "true"
      annotations:
        # Enable DNS capture and interception, IP resolved will be in 240.240/16
        # If you use network policies you've to allow egress to this range.
        proxy.istio.io/config: |
          proxyMetadata:
            ISTIO_META_DNS_CAPTURE: "true"
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

Now we can exec into the deployed pod, do something like curl -vk https://acme-service:12345, and it will talk to one of the endpoints defined in the ServiceEntry via an IP address out of the 240.240/16 Class E network.

Documentation
https://istio.io/latest/docs/reference/config/networking/virtual-service/
https://istio.io/latest/docs/reference/config/networking/service-entry/#ServiceEntry-Resolution
https://istio.io/latest/docs/reference/config/networking/destination-rule/#LoadBalancerSettings-SimpleLB
https://istio.io/latest/docs/ops/configuration/traffic-management/dns-proxy/#sidecar-mode

Planet DebianDirk Eddelbuettel: RcppArmadillo 14.6.3-1 on CRAN: Minor Upstream Bug Fixes

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1268 other packages on CRAN, downloaded 41 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 642 times according to Google Scholar.

Conrad made three minor bug fix releases since the 14.6.0 release last month. We need to pace releases at CRAN so we do not immediately upload there on each upstream release—and then CRAN also had the usual (and well-deserved) summer rest leading to a slight delay relative to the last upstream. The minor changes in the three releases are summarized below. All our releases are always available via the GitHub repo and hence also via r-universe, and still rigorously tested via our own reverse-dependency checks. We also note that the package once again passed with flying colours and no human intervention which remains impressive given the over 1200 reverse dependencies.

Changes in RcppArmadillo version 14.6.3-1 (2025-08-14)

  • Upgraded to Armadillo release 14.6.3 (Caffe Mocha)

    • Fix OpenMP related crashes in Cube::slice() on Arm64 CPUs

Changes in RcppArmadillo version 14.6.2-1 (2025-08-08) (GitHub Only)

  • Upgraded to Armadillo release 14.6.2 (Caffe Mocha)

    • Fix for corner-case speed regression in sum()

    • Better handling of OpenMP in omit_nan() and omit_nonfinite()

Changes in RcppArmadillo version 14.6.1-1 (2025-07-21) (GitHub Only)

  • Upgraded to Armadillo release 14.6.1 (Caffe Mocha)

    • Fix for speed regression in mean()

    • Fix for detection of compiler configuration

    • Use of pow optimization now optional

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Planet DebianEmmanuel Kasper: Benchmarking 3D graphic cards and their drivers

I have in the past benchmarked network links and disks, so as to have a rough idea of the performance of the hardware I am confronted with at $WORK. As I started to dabble into Linux gaming (on non-PC hardware!), I wanted to have some numbers from the graphic stack as well.

I am using the command glmark2 --size 1920x1080 which is testing the performance of an OpenGL implementation, hardware + drivers. OpenGL is the classic 3D API used by most opensource gaming on Linux (Doom3 Engine, SuperTuxKart, 0AD, Cube 2 Engine).

Vulkan is getting traction as a newer 3D API however the equivalent Vulkan vkmark benchmark was crashing using the NVIDIA semi-proprietary drivers. (vkmark --size 1920x1080 was throwing an ugly Error: Selected present mode Mailbox is not supported by the used Vulkan physical device. )

# apt install glmark2
$ lspci | grep -i vga # integrated GPU
00:02.0 VGA compatible controller: Intel Corporation HD Graphics 615 (rev 02)
$ glmark2 --size 1920x1080
...
...
glmark2 Score: 2063
$ lspci | grep -i vga # integrated GPU
00:02.0 VGA compatible controller: Intel Corporation Meteor Lake-P [Intel Graphics] (rev 08)
glmark2 Score: 3095
$ lspci | grep -i vga # discrete GPU, using nouveau
0000:01:00.0 VGA compatible controller: NVIDIA Corporation AD107GL [RTX 2000 / 2000E Ada Generation] (rev a1)
glmark2 Score: 2463
$ lspci | grep -i vga # discrete GPU, using nvidia-open semi-proprietary driver
0000:01:00.0 VGA compatible controller: NVIDIA Corporation AD107GL [RTX 2000 / 2000E Ada Generation] (rev a1)
glmark2 score: 4960

Nouveau has currently some graphical glitches with Doom3 so I am using the nvidia-open driver for this hardware.

In my testing with Doom3 and SuperTuxKart, post-2015 integrated Intel hardware is more than enough to play in HD resolution.

Worse Than FailureCodeSOD: Copy of a Copy of a

Jessica recently started at a company still using Windows Forms.

Well, that was a short article. Oh, you want more WTF than that? Sure, we can do that.

As you might imagine, a company that's still using Windows Forms isn't going to upgrade any time soon; they've been using an API that's been in maintenance mode for a decade, clearly they're happy with it.

But they're not too happy- Jessica was asked to track down a badly performing report. This of course meant wading through a thicket of spaghetti code, pointless singletons, and the general sloppiness that is the code base. Some of the code was written using Entity Framework for database access; much of it was not.

While it wasn't the report that Jessica was sent to debug, this method caught her eye:

private Dictionary<long, decimal> GetReportDiscounts(ReportCriteria criteria)
{
    Dictionary<long, decimal> rows = new Dictionary<long, decimal>();

    string query = @"select  ii.IID,
        SUM(CASE WHEN ii.AdjustedTotal IS NULL THEN 
        (ii.UnitPrice * ii.Units)  ELSE
            ii.AdjustedTotal END) as 'Costs'
            from ii
                where ItemType = 3
            group by ii.IID
            ";

    string connectionString = string.Empty;
    using (DataContext db = DataContextFactory.GetInstance<DataContext>())
    {
        connectionString = db.Database.Connection.ConnectionString;
    }

    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        using (SqlCommand command = new SqlCommand(query, connection))
        {
            command.Parameters.AddWithValue("@DateStart", criteria.Period.Value.Min.Value.Date);
            command.Parameters.AddWithValue("@DateEnd", criteria.Period.Value.Max.Value.Date.AddDays(1));
            command.Connection.Open();

            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    decimal discount = (decimal)reader["Costs"];
                    long IID = (long)reader["IID"];

                    if (rows.ContainsKey(IID))
                    {
                        rows[IID] += discount;
                    }
                    else
                    {
                        rows.Add(IID, discount);
                    }
                }
            }
        }
    }

    return rows;
}

This code constructs a query, opens a connection, runs the query, and iterates across the results, building a dictionary as its result set. The first thing which leaps out is that, in code, they're doing a summary (iterating across the results and grouping by IID), which is also what they did in the query.

It's also notable that the table they're querying is called ii, which is not a result of anonymization, but actually what they called it. Then there's the fact that they set parameters on the query, for DateStart and DateEnd, but the query doesn't use those. And then there's that magic number 3 in the query, which is its own set of questions.

Then, right beneath that method was one called GetReportTotals. I won't share it, because it's identical to what's above, with one difference:

            string query = @"
select   ii.IID,
                SUM(CASE WHEN ii.AdjustedTotal IS NULL THEN 
                (ii.UnitPrice * ii.Units)  ELSE
                 ii.AdjustedTotal END)  as 'Costs' from ii
				  where  itemtype = 0 
				 group by iid
";

The magic number is now zero.

So, clearly we're in the world of copy/paste programming, but this raises the question: which came first, the 0 or the 3? The answer is neither. GetCancelledInvoices came first.

private List<ReportDataRow> GetCancelledInvoices(ReportCriteria criteria, Dictionary<long, string> dictOfInfo)
{
    List<ReportDataRow> rows = new List<ReportDataRow>();

    string fCriteriaName = "All";

    string query = @"select 
        A long query that could easily be done in EF, or at worst a stored procedure or view. Does actually use the associated parameters";


    string connectionString = string.Empty;
    using (DataContext db = DataContextFactory.GetInstance<DataContext>())
    {
        connectionString = db.Database.Connection.ConnectionString;
    }

    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        using (SqlCommand command = new SqlCommand(query, connection))
        {
            command.Parameters.AddWithValue("@DateStart", criteria.Period.Value.Min.Value.Date);
            command.Parameters.AddWithValue("@DateEnd", criteria.Period.Value.Max.Value.Date.AddDays(1));
            command.Connection.Open();

            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    long ID = (long)reader["ID"];
                    decimal costs = (decimal)reader["Costs"];
                    string mNumber = (string)reader["MNumber"];
                    string mName = (string)reader["MName"];
                    DateTime idate = (DateTime)reader["IDate"];
                    DateTime lastUpdatedOn = (DateTime)reader["LastUpdatedOn"];
                    string iNumber = reader["INumber"] is DBNull ? string.Empty : (string)reader["INumber"];
                    long fId = (long)reader["FID"];
                    string empName = (string)reader["EmpName"];
                    string empNumber = reader["EmpNumber"] is DBNull ? string.Empty : (string)reader["empNumber"];
                    long mId = (long)reader["MID"];

                    string cName = dictOfInfo[mId];

                    if (criteria.EmployeeID.HasValue && fId != criteria.EmployeeID.Value)
                    {
                        continue;
                    }

                    rows.Add(new ReportDataRow()
                    {
                        CName = cName,
                        IID = ID,
                        Costs = costs * -1, //Cancelled i - minus PC
                        TimedValue = 0,
                        MNumber = mNumber,
                        MName = mName,
                        BillDate = lastUpdatedOn,
                        BillNumber = iNumber + "A",
                        FID = fId,
                        EmployeeName = empName,
                        EmployeeNumber = empNumber
                    });
                }
            }
        }
    }


    return rows;
}

This is the original version of the method. We can infer this because it actually uses the parameters of DateStart and DateEnd. Everything else just copy/pasted this method and stripped out bits until it worked. There are more children of this method, each an ugly baby of its own, but all alike in their ugliness.

It's also worth noting, the original version is doing filtering after getting data from the database, instead of putting that criteria in the WHERE clause.

As for Jessica's poor performing report, it wasn't one of these methods. It was, however, another variation on "run a query, then filter, sort, and summarize in C#". By simply rewriting it as a SQL query in a stored procedure that leveraged indexes, performance improved significantly.

365 TomorrowsZ-5600 Knows What’s Best

Author: Katie Dee Ethan walked the full length of the Eagle III again. He hated the sight of the empty rooms and quiet mess hall, but he needed exercise to avoid muscle atrophy. Z-5600 would chide him later if he didn’t meet his step count; the helpbot was nearly as bad as a fussing parent. […]

The post Z-5600 Knows What’s Best appeared first on 365tomorrows.

Planet DebianReproducible Builds: Reproducible Builds summit 2025 to take place in Vienna

We are extremely pleased to announce the upcoming Reproducible Builds summit, which will take place from October 28th—30th 2025 in the historic city of Vienna, Austria.

This year, we are thrilled to host the eighth edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Hamburg (2023—2024), Venice (2022), Marrakesh (2019), Paris (2018), Berlin (2017), Berlin (2016) and Athens (2015).

If you’re excited about joining us this year, please make sure to read the event page which has more details about the event and location. As in previous years, we will be sending invitations to all those who attended our previous summit events or expressed interest to do so. However, even if you do not receive a personal invitation, please do email the organizers and we will find a way to accommodate you.

About the event

The Reproducible Builds Summit is a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort. During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving.

With your help, we will bring this (and several other areas) to life:


The main seminar room.

Schedule

Although the exact content of the meeting will be shaped by the participants, the main goals will include:

  • Update & exchange about the status of reproducible builds in various projects.
  • Improve collaboration both between and inside projects.
  • Expand the scope and reach of reproducible builds to more projects.
  • Work together and hack on solutions.
  • Establish space for more strategic and long-term thinking than is possible in virtual channels.
  • Brainstorm designs on tools enabling users to get the most benefits from reproducible builds.
  • Discuss how reproducible builds will be usable and meaningful to users and developers alike.

Logs and minutes will be published after the meeting.

Location & date

Registration instructions

Please reach out if you’d like to participate in hopefully interesting, inspiring and intense technical sessions about reproducible builds and beyond!

We look forward to what we anticipate to be yet another extraordinary event!

,

Krebs on SecurityOregon Man Charged in ‘Rapper Bot’ DDoS Service

A 22-year-old Oregon man has been arrested on suspicion of operating “Rapper Bot,” a massive botnet used to power a service for launching distributed denial-of-service (DDoS) attacks against targets — including a March 2025 DDoS that knocked Twitter/X offline. The Justice Department asserts the suspect and an unidentified co-conspirator rented out the botnet to online extortionists, and tried to stay off the radar of law enforcement by ensuring that their botnet was never pointed at KrebsOnSecurity.

The control panel for the Rapper Bot botnet greets users with the message “Welcome to the Ball Pit, Now with refrigerator support,” an apparent reference to a handful of IoT-enabled refrigerators that were enslaved in their DDoS botnet.

On August 6, 2025, federal agents arrested Ethan J. Foltz of Springfield, Ore. on suspicion of operating Rapper Bot, a globally dispersed collection of tens of thousands of hacked Internet of Things (IoT) devices.

The complaint against Foltz explains the attacks usually clocked in at more than two terabits of junk data per second (a terabit is one trillion bits of data), which is more than enough traffic to cause serious problems for all but the most well-defended targets. The government says Rapper Bot consistently launched attacks that were “hundreds of times larger than the expected capacity of a typical server located in a data center,” and that some of its biggest attacks exceeded six terabits per second.

Indeed, Rapper Bot was reportedly responsible for the March 10, 2025 attack that caused intermittent outages on Twitter/X. The government says Rapper Bot’s most lucrative and frequent customers were involved in extorting online businesses — including numerous gambling operations based in China.

The criminal complaint was written by Elliott Peterson, an investigator with the Defense Criminal Investigative Service (DCIS), the criminal investigative division of the Department of Defense (DoD) Office of Inspector General. The complaint notes the DCIS got involved because several Internet addresses maintained by the DoD were the target of Rapper Bot attacks.

Peterson said he tracked Rapper Bot to Foltz after a subpoena to an ISP in Arizona that was hosting one of the botnet’s control servers showed the account was paid for via PayPal. More legal process to PayPal revealed Foltz’s Gmail account and previously used IP addresses. A subpoena to Google showed the defendant searched security blogs constantly for news about Rapper Bot, and for updates about competing DDoS-for-hire botnets.

According to the complaint, after having a search warrant served on his residence the defendant admitted to building and operating Rapper Bot, sharing the profits 50/50 with a person he claimed to know only by the hacker handle “Slaykings.” Foltz also shared with investigators the logs from his Telegram chats, wherein Foltz and Slaykings discussed how best to stay off the radar of law enforcement investigators while their competitors were getting busted.

Specifically, the two hackers chatted about a May 20 attack against KrebsOnSecurity.com that clocked in at more than 6.3 terabits of data per second. The brief attack was notable because at the time it was the largest DDoS that Google had ever mitigated (KrebsOnSecurity sits behind the protection of Project Shield, a free DDoS defense service that Google provides to websites offering news, human rights, and election-related content).

The May 2025 DDoS was launched by an IoT botnet called Aisuru, which I discovered was operated by a 21-year-old man in Brazil named Kaike Southier Leite. This individual was more commonly known online as “Forky,” and Forky told me he wasn’t afraid of me or U.S. federal investigators. Nevertheless, the complaint against Foltz notes that Forky’s botnet seemed to diminish in size and firepower at the same time that Rapper Bot’s infection numbers were on the upswing.

“Both FOLTZ and Slaykings were very dismissive of attention seeking activities, the most extreme of which, in their view, was to launch DDoS attacks against the website of the prominent cyber security journalist Brian Krebs,” Peterson wrote in the criminal complaint.

“You see, they’ll get themselves [expletive],” Slaykings wrote in response to Foltz’s comments about Forky and Aisuru bringing too much heat on themselves.

“Prob cuz [redacted] hit krebs,” Foltz wrote in reply.

“Going against Krebs isn’t a good move,” Slaykings concurred. “It isn’t about being a [expletive] or afraid, you just get a lot of problems for zero money. Childish, but good. Let them die.”

“Ye, it’s good tho, they will die,” Foltz replied.

The government states that just prior to Foltz’s arrest, Rapper Bot had enslaved an estimated 65,000 devices globally. That may sound like a lot, but the complaint notes the defendants weren’t interested in making headlines for building the world’s largest or most powerful botnet.

Quite the contrary: The complaint asserts that the accused took care to maintain their botnet in a “Goldilocks” size — ensuring that “the number of devices afforded powerful attacks while still being manageable to control and, in the hopes of Foltz and his partners, small enough to not be detected.”

The complaint states that several days later, Foltz and Slaykings returned to discussing what they expected to befall their rival group, with Slaykings stating, “Krebs is very revenge. He won’t stop until they are [expletive] to the bone.”

“Surprised they have any bots left,” Foltz answered.

“Krebs is not the one you want to have on your back. Not because he is scary or something, just because he will not give up UNTIL you are [expletive] [expletive]. Proved it with Mirai and many other cases.”

[Unknown expletives aside, that may well be the highest compliment I’ve ever been paid by a cybercriminal. I might even have part of that quote made into a t-shirt or mug or something. It’s also nice that they didn’t let any of their customers attack my site — if even only out of a paranoid sense of self-preservation.]

Foltz admitted to wiping the user and attack logs for the botnet approximately once a week, so investigators were unable to tally the total number of attacks, customers and targets of this vast crime machine. But the data that was still available showed that from April 2025 to early August, Rapper Bot conducted over 370,000 attacks, targeting 18,000 unique victims across 1,000 networks, with the bulk of victims residing in China, Japan, the United States, Ireland and Hong Kong (in that order).

According to the government, Rapper Bot borrows much of its code from fBot, a DDoS malware strain also known as Satori. In 2020, authorities in Northern Ireland charged a then 20-year-old man named Aaron “Vamp” Sterritt with operating fBot with a co-conspirator. U.S. prosecutors are still seeking Sterritt’s extradition to the United States. fBot is itself a variation of the Mirai IoT botnet that has ravaged the Internet with DDoS attacks since its source code was leaked back in 2016.

The complaint says Foltz and his partner did not allow most customers to launch attacks that were more than 60 seconds in duration — another way they tried to keep public attention to the botnet at a minimum. However, the government says the proprietors also had special arrangements with certain high-paying clients that allowed much larger and longer attacks.

The accused and his alleged partner made light of this blog post about the fallout from one of their botnet attacks.

Most people who have never been on the receiving end of a monster DDoS attack have no idea of the cost and disruption that such sieges can bring. The DCIS’s Peterson wrote that he was able to test the botnet’s capabilities while interviewing Foltz, and found that “if this had been a server upon which I was running a website, using services such as load balancers, and paying for both outgoing and incoming data, at estimated industry average rates the attack (2+ Terabits per second times 30 seconds) might have cost the victim anywhere from $500 to $10,000.”

“DDoS attacks at this scale often expose victims to devastating financial impact, and a potential alternative, network engineering solutions that mitigate the expected attacks such as overprovisioning, i.e. increasing potential Internet capacity, or DDoS defense technologies, can themselves be prohibitively expensive,” the complaint continues. “This ‘rock and a hard place’ reality for many victims can leave them acutely exposed to extortion demands – ‘pay X dollars and the DDoS attacks stop’.”

The Telegram chat records show that the day before Peterson and other federal agents raided Foltz’s residence, Foltz allegedly told his partner he’d found 32,000 new devices that were vulnerable to a previously unknown exploit.

Foltz and Slaykings discussing the discovery of an IoT vulnerability that will give them 32,000 new devices.

Shortly before the search warrant was served on his residence, Foltz allegedly told his partner that “Once again we have the biggest botnet in the community.” The following day, Foltz told his partner that it was going to be a great day — the biggest so far in terms of income generated by Rapper Bot.

“I sat next to Foltz while the messages poured in — promises of $800, then $1,000, the proceeds ticking up as the day went on,” Peterson wrote. “Noticing a change in Foltz’ behavior and concerned that Foltz was making changes to the botnet configuration in real time, Slaykings asked him ‘What’s up?’ Foltz deftly typed out some quick responses. Reassured by Foltz’ answer, Slaykings responded, ‘Ok, I’m the paranoid one.’”

The case is being prosecuted by Assistant U.S. Attorney Adam Alexander in the District of Alaska (at least some of the devices found to be infected with Rapper Bot were located there, and it is where Peterson is stationed). Foltz faces one count of aiding and abetting computer intrusions. If convicted, he faces a maximum penalty of 10 years in prison, although a federal judge is unlikely to award anywhere near that kind of sentence for a first-time conviction.

Planet DebianRussell Coker: Colmi P80 SmartWatch First Look

I just bought a Colmi P80 SmartWatch from Aliexpress for $26.11 based on this blog post reviewing it [1]. The main thing I was after was a larger, higher-resolution screen, because my vision has apparently deteriorated during the time I've been wearing a Pinetime [2] and I now can't read messages on it when not wearing my reading glasses.

The watch hardware is quite OK. It has a larger and higher resolution screen and looks good. The review said that GadgetBridge (the FOSS SmartWatch software in the F-Droid repository) connected when told that the watch was a P79 and in a recent release got support for sending notifications. In my tests with GadgetBridge it doesn't set the time, can't seem to send notifications, can't read the battery level, and seems not to do anything other than just say "connected". So I installed the proprietary app. As an aside, it's a neat feature to have the watch display a QR code for installing the app; maybe InfiniTime should have a similar QR code for getting GadgetBridge from the F-Droid repository.

The proprietary app is quite OK for the basic functionality and a less technical relative who is using one is happy. For my use the proprietary app is utterly broken. One of my main uses is to get notifications of Jabber messages from the Conversations app (that's in F-Droid). I have Conversations configured to always have a notification of how many accounts are connected which prevents Android from killing it; with GadgetBridge that notification isn't reported but the actual message contents are (I don't know how/why that happens), but with the Colmi app I get repeated notification messages on the watch about the accounts being connected. Also the proprietary app has on/off settings for messages to go to the watch for a hard coded list of 16 common apps and an "Others" setting for the rest. GadgetBridge lists the applications that are actually installed so I can configure it not to notify me about Reddit, connecting to my car audio, and many other less common notifications. I prefer the GadgetBridge option to have an allow-list for apps that I want notifications from but it also has a configuration option to use a deny list so you could have everything other than the app that gives lots of low value notifications. The proprietary app has a wide range of watch faces that it can send to the watch which is a nice feature that would be good to have in InfiniTime and GadgetBridge.

The P80 doesn’t display a code on screen when it is paired via Bluetooth so if you have multiple smart watches then you are at risk of connecting to the wrong one and there doesn’t seem to be anything stopping a hostile party from connecting to one. Note that hostile parties are not restricted to the normal maximum transmission power and can use a high gain antenna for reception so they can connect from longer distances than normal Bluetooth devices.

Conclusion

The Colmi P80 hardware is quite decent; the only downside is that the vibration has an annoying “tinny” feel. Strangely it has a rotation sensor for a rotating button (similar to analogue watches) but doesn’t seem to have a use for it, as the touch screen does everything.

The watch firmware is quite OK (not great but adequate), but the lack of a password for pairing is a significant shortcoming.

The Colmi Android app has some serious issues that make it unusable for what I do and the release version of GadgetBridge doesn’t work with it, so I have gone back to the PineTime for actual use.

The PineTime cost twice as much and has fewer features (no blood oxygen sensor), but seems more solidly constructed.

I plan to continue using the P80 with GadgetBridge and Debian-based SmartWatch software to help develop the Debian Mobile project. I expect that at some future time GadgetBridge and the programs written for non-Android Linux distributions will support the P80, and I will transition to it then. I am confident that it will work well for me eventually and that I will get $26.11 of value from it. At this time I recommend that people who do the sort of things I do get one of each, and that less technical people get a Colmi P80.

Worse Than FailureCodeSOD: I Am Not 200

In theory, HTTP status codes should be easy to work with. In the 100s? You're doing some weird stuff and breaking up large requests into multiple sub-requests. 200s? It's all good. 300s? Look over there. 400s? What the hell are you trying to do? 500s? What the hell is the server trying to do?

This doesn't mean people don't endlessly find ways to make it hard. LinkedIn, for example, apparently likes to send 999s if you try and view a page without being logged in. Shopify has invented a few. Apache has added a 218 "This is Fine". And then there's WebDAV, which not only adds new status codes, but adds a whole bunch of new verbs to HTTP requests.

Francesco D sends us a "clever" attempt at handling status codes.

    try {
      HttpRequest.Builder localVarRequestBuilder = {{operationId}}RequestBuilder({{#allParams}}{{paramName}}{{^-last}}, {{/-last}}{{/allParams}}{{#hasParams}}, {{/hasParams}}headers);
      return memberVarHttpClient.sendAsync(
          localVarRequestBuilder.build(),
          HttpResponse.BodyHandlers.ofString()).thenComposeAsync(localVarResponse -> {
            if (localVarResponse.statusCode()/ 100 != 2) {
              return CompletableFuture.failedFuture(getApiException("{{operationId}}", localVarResponse));
            }
            {{#returnType}}
            try {
              String responseBody = localVarResponse.body();
              return CompletableFuture.completedFuture(
                  responseBody == null || responseBody.isBlank() ? null : memberVarObjectMapper.readValue(responseBody, new TypeReference<{{{returnType}}}>() {})
              );
            } catch (IOException e) {
              return CompletableFuture.failedFuture(new ApiException(e));
            }
            {{/returnType}}
            {{^returnType}}
            return CompletableFuture.completedFuture(null);
            {{/returnType}}
      });
    }

Okay, before we get to the status code nonsense, I first have to whine about this templating language. I'm generally of the mind that generated code is a sign of bad abstractions, especially if we're talking about using a text templating engine, like this. I'm fine with hygienic macros, and even C++'s templating system for code generation, because they exist within the language. But fine, that's just my "ok boomer" opinion, so let's get into the real meat of it, which is this line:

localVarResponse.statusCode()/ 100 != 2

"Hey," some developer said, "since success is in the 200 range, I'll just divide by 100, and check if it's a 2, helpfully truncating the details." Which is fine and good, except neither 100s nor 300s represent a true error, especially because if the local client is doing caching, a 304 tells us that we can used the cached version.

For Francesco, treating 300s as an error created a slew of failed requests which shouldn't have failed. It wasn't too difficult to detect- they were at least logging the entire response- but it was frustrating, if only because it seems like someone was more interested in being clever with math than actually writing good software.


365 TomorrowsAfter The Party

Author: Majoki Typically, the killing began around this time. Staff would be silently cleaning up, clearing the tables, floors, walls and rafters of the celebration’s detritus. Then you’d hear excited chitter, then the hum of lancers charging, more chittering, and then skittering as tell-tale bolts of orange flared and the screaming began. Just another night […]

The post After The Party appeared first on 365tomorrows.

,

Worse Than FailureCodeSOD: Going Crazy

For months, everything at Yusuf's company was fine. Then, suddenly, he came into the office to learn that overnight the log had exploded with thousands of panic messages. No software changes had been pushed, no major configuration changes had been made- just a reboot. What had gone wrong?

This particular function was invoked as part of the application startup:

func (a *App) setupDocDBClient(ctx context.Context) error {
	docdbClient, err := docdb.NewClient(
		ctx,
		a.config.MongoConfig.URI,
		a.config.MongoConfig.Database,
		a.config.MongoConfig.EnableTLS,
	)
	if err != nil {
		return nil
	}

	a.DocDBClient = docdbClient
	return nil
}

This is Go, which passes errors as part of the return. You can see an example where docdb.NewClient returns a client and an err object. At one point in the history of this function, it did the same thing- if connecting to the database failed, it returned an error.

But a few months earlier, an engineer changed it to swallow the error- if an error occurred, it would return nil.

As an organization, they did code reviews. Multiple people looked at this and signed off- or, more likely, multiple people clicked a button to say they'd looked at it, but hadn't.

Most of the time, there weren't any connection issues. But sometimes there were. One reboot had a flaky moment with connecting, and the error was ignored. Later on in execution, downstream modules started failing, which eventually led to a log full of panic level messages.

The change was part of a commit tagged merely: "Refactoring". Something got factored, good and hard, all right.


Planet DebianJonathan Dowland: Amiga redux

Matthew blogged about his Amiga CDTV project, a truly unique Amiga hack which also manages to be a novel Doom project (no mean feat: it's a crowded space).

This re-awakened my dormant wish to muck around with my childhood Amiga some more. When I last wrote about it (four years ago ☹) I'd upgraded the disk drive emulator with an OLED display and rotary encoder. I'd forgotten to mention I'd also sourced a modern trapdoor RAM expansion which adds 2MiB of RAM. The Amiga can only see 1.5MiB1 of it at the moment; I need to perform a mainboard modification to access the final 512kiB2, which means some soldering.

Amiga Test Kit (https://github.com/keirf/Amiga-Stuff) showing 2MiB RAM

What I had planned to do back then: replace the switch in the left button of the original mouse, which was misbehaving; perform the aforementioned mainboard mod; upgrade the floppy emulator wiring to a ribbon cable with plug-and-socket, for easier removal; fit an RTC chip to the RAM expansion board to get clock support in the OS.

However, much of that might be moot because of two other mods I am considering:

PiStorm

I've re-considered the PiStorm accelerator mentioned in Matt's blog.

Four years ago, I'd passed over it, because it required you to run Linux on a Raspberry Pi, and then an m68k emulator as a user-space process under Linux. I didn't want to administer another Linux system, and I'm generally uncomfortable about using a regular Linux distribution on SD storage over the long term.

However in the intervening years Emu68, a bare-metal m68k emulator has risen to prominence. You boot the Pi straight into Emu68 without Linux in the middle. For some reason that's a lot more compelling to me.

The PiStorm enormously expands the RAM visible to the Amiga. There would be no point in doing the mainboard mod to add 512k (and I don't know how that would interact with the PiStorm). It also can provide virtual hard disk devices to the Amiga (backed by files on the SD card), meaning the floppy emulator would be superfluous.

Denise Mainboard

I've just learned about a truly incredible project: the Denise Mini-ITX Amiga mainboard. It fits into a Mini-ITX case (I have a suitable one spare already). Some assembly required. You move the chips from the original Amiga over to the Denise mainboard. It's compatible with the PiStorm (or vice-versa). It supports PC-style PS/2 keyboards (I have a Model M in the loft, thanks again Simon) and has a bunch of other modern conveniences: onboard RTC; mini-ITX power (I'll need something like a picoPSU too).

It wouldn't support my trapdoor RAM card but it takes a 72-pin DIMM which can supply 2MiB of Chip RAM, and the PiStorm can do the rest (they're compatible3).

No stock at the moment but if I could get my hands on this, I could build something that could permanently live on my desk.


  1. the Boobip board's 1.5MiB is "chip" RAM: accessible to the other chips on the mainboard, with access mediated by the AGNUS chip.
  2. the final 512kiB is "Fast" RAM: only accessible to the CPU, not mediated via Agnus.
  3. confirmation

365 TomorrowsTest Run

Author: Julian Miles, Staff Writer “Wizard One, remind me again why I’m face down in a flower bed in downtown fuck-knows-where?” “Maintain comms discipline, Fighter Zero. However, I am authorised to say you look lovely with a sprinkling of daisies on your arse.” “Tell Gandalf to get himself a new hobbit, because you’re gonna be […]

The post Test Run appeared first on 365tomorrows.

Planet DebianOtto Kekäläinen: Best Practices for Submitting and Reviewing Merge Requests in Debian

Featured image of post Best Practices for Submitting and Reviewing Merge Requests in Debian

Historically the primary way to contribute to Debian has been to email the Debian bug tracker with a code patch. Now that 92% of all Debian source packages are hosted at salsa.debian.org — the GitLab instance of Debian — more and more developers are using Merge Requests, but not necessarily in the optimal way. In this post I share what I’ve found the best practice to be, presented in the natural workflow from forking to merging.

Why use Merge Requests?

Compared to sending patches back and forth in email, using a git forge to review code contributions brings several benefits:

  • Contributors can see the latest version of the code immediately when the maintainer pushes it to git, without having to wait for an upload to Debian archives.
  • Contributors can fork the development version and easily base their patches on the correct version and help test that the software continues to function correctly at that specific version.
  • Both maintainer and other contributors can easily see what was already submitted and avoid doing duplicate work.
  • It is easy for anyone to comment on a Merge Request and participate in the review.
  • Integrating CI testing is easy in Merge Requests by activating Salsa CI.
  • Tracking the state of a Merge Request is much easier than browsing Debian bug reports tagged ‘patch’, and the cycle of submit → review → re-submit → re-review is much easier to manage in the dedicated Merge Request view compared to participants setting up their own email plugins for code reviews.
  • Merge Requests can have extra metadata, such as ‘Approved’, and the metadata often updates automatically, such as a Merge Request being closed automatically when the Git commit ID from it is pushed to the target branch.

Keeping these benefits in mind will help ensure that the best practices make sense and are aligned with maximizing these benefits.

Finding the Debian packaging source repository and preparing to make a contribution

Before sinking any effort into a package, start by checking its overall status at the excellent Debian Package Tracker. This provides a clear overview of the package’s general health in Debian, when it was last uploaded and by whom, and if there is anything special affecting the package right now. This page also has quick links to the Debian bug tracker of the package, the build status overview and more. Most importantly, in the General section, the VCS row links to the version control repository the package advertises. Before opening that page, note the version most recently uploaded to Debian. This is relevant because nothing in Debian currently enforces that the package in version control is actually the same as the latest uploaded to Debian.

Packaging source code repository links at tracker.debian.org

Following the Browse link opens the Debian package source repository, which is usually a project page on Salsa. To contribute, start by clicking the Fork button, select your own personal namespace and, under Branches to include, pick Only the default branch to avoid including unnecessary temporary development branches.

View after pressing Fork

Once forking is complete, clone it with git-buildpackage. For this example repository, the exact command would be gbp clone --verbose git@salsa.debian.org:otto/glow.git.

Next, add the original repository as a new remote and pull from it to make sure you have all relevant branches. Using the same fork as an example, the commands would be:

git remote add go-team https://salsa.debian.org/go-team/packages/glow.git
gbp pull --verbose --track-missing go-team

The gbp pull command can be repeated whenever you want to make sure the main branches are in sync with the original repository. Finally, run gitk --all & to visually browse the Git history and note the various branches and their states in the two remotes. Note the style of comments and the repository structure the project uses, and make sure your contributions follow the same conventions to maximize the chances of the maintainer accepting your contribution.

It may also be good to build the source package to establish a baseline of the current state and what kind of binaries and .deb packages it produces. If using Debcraft, one can simply run debcraft build in the Git repository.
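
If you are not using Debcraft, a plain git-buildpackage build should give a similar baseline; this is only a sketch, and the exact options needed depend on the repository's debian/gbp.conf:

gbp buildpackage -us -uc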

Submitting a Merge Request for a Debian packaging improvement

Always start by making a development branch by running git checkout -b <branch name> to clearly separate your work from the main branch.

When making changes, remember to follow the conventions you already see in the package. It is also important to be aware of general guidelines on how to make good Git commits.

If you are not able to immediately finish coding, it may be useful to publish the Merge Request as a draft so that the maintainer and others can see that you started working on something and what general direction your change is heading in.

If you don’t finish the Merge Request in one sitting and return to it another day, you should remember to pull the Debian branch from the original Debian repository in case it has received new commits. This can be done easily with these commands (assuming the same remote and branch names as in the example above):

git fetch go-team
git rebase -i go-team/debian/latest

Frequent rebasing is a great habit to help keep the Git history linear, and restructuring and rewording your commits will make the Git history easier to follow and understand why the changes were made.

When pushing improved versions of your branch, use git push --force. While GitLab does allow squashing, I recommend against it. It is better that the submitter makes sure the final version is a neat and clean set of commits that the receiver can easily merge without having to do any rebasing or squashing themselves.
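
For example, assuming your fork is still the origin remote and a hypothetical branch name (git push --force-with-lease is a slightly safer variant that refuses to overwrite commits you have not fetched):

git push --force origin my-packaging-fix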

When ready, remove the draft status of the Merge Request and wait patiently for review. If the maintainer does not respond in several days, try sending an email to <source package name>@packages.debian.org, which is the official way to contact maintainers. You could also post a comment on the MR and tag the last few committers in the same repository so that a notification email is triggered. As a last resort, submit a bug report to the Debian bug tracker to announce that a Merge Request is pending review. This leaves a permanent record for posterity (or the Debian QA team) of your contribution. However, most of the time simply posting the Merge Request in Salsa is enough; excessive communication might be perceived as spammy, and someone needs to remember to check that the bug report is closed.

Respect the review feedback, respond quickly and avoid Merge Requests getting stale

Once you get feedback, try to respond as quickly as possible. When people participating have everything fresh in their minds, it is much easier for the submitter to rework it and for the reviewer to re-review. If the Merge Request becomes stale, it can be challenging to revive it. Also, if it looks like the MR is only waiting for re-review but nothing happens, re-read the previous feedback and make sure you actually address everything. After that, post a friendly comment where you explicitly say you have addressed all feedback and are only waiting for re-review.

Reviewing Merge Requests

This section about reviewing is not exclusive to Debian package maintainers — anyone can contribute to Debian by reviewing open Merge Requests. Typically, the larger an open source project gets, the more help is needed in reviewing and testing changes to avoid regressions, and all diligently done work is welcome. As Linus’s Law famously puts it, “given enough eyeballs, all bugs are shallow”.

On salsa.debian.org, you can browse open Merge Requests per project or for a whole group, just like on any GitLab instance.

Reviewing Merge Requests is, however, most fun when they are fresh and the submitter is active. Thus, the best strategy is to ensure you have subscribed to email notifications in the repositories you care about so you get an email for any new Merge Request (or Issue) immediately when posted.

Change notification settings from Global to Watch to get an email on new Merge Requests

When you see a new Merge Request, try to review it within a couple of days. If you cannot review in a reasonable time, posting a small note that you intend to review it later will feel better to the submitter compared to not getting any response.

Personally, I have a habit of assigning myself as a reviewer so that I can keep track of my whole review queue at https://salsa.debian.org/dashboard/merge_requests?reviewer_username=otto, and I recommend the same to others. Seeing the review assignment happen is also a good way to signal to the submitter that their submission was noted.

Reviewing commit-by-commit in the web interface

Reviewing using the web interface works well in general, but I find that the way GitLab designed it is not ideal. In my ideal review workflow, I first read the Git commit message to understand what the submitter tried to do and why; only then do I look at the code changes in the commit. In GitLab, to do this one must first open the Commits tab and then click on the last commit in the list, as it is sorted in reverse chronological order with the first commit at the bottom. Only after that do I see the commit message and contents. Getting to the next commit is easy by simply clicking Next.

Example review to demonstrate location of buttons and functionality

When adding the first comment, I choose Start review and for the following remarks Add to review. Finally, I click Finish review and Submit review, which will trigger one single email to the submitter with all my feedback. I try to avoid using the Add comment now option, as each such comment triggers a separate notification email to the submitter.

Reviewing and testing on your own computer locally

For the most thorough review, I pull the code to my laptop for local review with git pull <remote url> <branch name>. There is no need to run git remote add as pulling using a URL directly works too and saves from needing to clean up old remotes later.
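
For example, to review a hypothetical branch called fix-watch-file from the fork used earlier in this post:

git pull https://salsa.debian.org/otto/glow.git fix-watch-file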

Pulling the Merge Request contents locally allows me to build, run and inspect the code deeply and review the commits with full metadata in gitk or equivalent.

Investing enough time in writing feedback, but not too much

See my other post for more in-depth advice on how to structure your code review feedback.

In Debian, I would emphasize patience, to allow the submitter time to rework their submission. Debian packaging is notoriously complex, and even experienced developers often need more feedback and time to get everything right. Avoid the temptation to rush the fix in yourself. In open source, Git credits are often the only salary the submitter gets. If you take the idea from the submission and implement it yourself, you rob the submitter of the opportunity to get feedback, try to improve and finally feel accomplished. Sure, it takes extra effort to give feedback, but the contributor is likely to feel ownership of their work and later return to further improve it.

If a submission looks hopelessly low quality and you feel that giving feedback is a waste of time, you can simply respond with something along the lines of: “Thanks for your contribution and interest in helping Debian. Unfortunately, looking at the commits, I see several shortcomings, and it is unlikely a normal review process is enough to help you finalize this. Please reach out to Debian Mentors to get a mentor who can give you more personalized feedback.”

There might also be contributors who just “dump the code”, ignore your feedback and never return to finalize their submission. If a contributor does not return to finalize their submission in 3-6 months, I will in my own projects simply finalize it myself and thank the contributor in the commit message (but not mark them as the author).

Despite best practices, you will occasionally still end up doing some things in vain, but that is how volunteer collaboration works. We all just need to accept that some communication will inevitably feel like wasted effort, but it should be viewed as a necessary investment in order to get the benefits from the times when the communication led to real and valuable collaboration. Please just do not treat all contributors as if they are unlikely to ever contribute again; otherwise, your behavior will cause them not to contribute again. If you want to grow a tree, you need to plant several seeds.

Approving and merging

Assuming review goes well and you are ready to approve, and if you are the only maintainer, you can proceed to merge right away. If there are multiple maintainers, or if you otherwise think that someone else might want to chime in before it is merged, use the “Approve” button to show that you approve the change but leave it unmerged.

The person who approved does not necessarily have to be the person who merges. The point of the Merge Request review is not separation of duties in committing and merging — the main purpose of a code review is to have a different set of eyeballs looking at the change before it is committed into the main development branch for all eternity. In some packages, the submitter might actually merge themselves once they see another developer has approved. In some rare Debian projects, there might even be separate people taking the roles of submitting, approving and merging, but most of the time these three roles are filled by two people either as submitter and approver+merger or submitter+merger and approver.

If you are not a maintainer at all and do not have permissions to click Approve, simply post a comment summarizing your review and that you approve it and support merging it. This can help the maintainers review and merge faster.

Making a Merge Request for a new upstream version import

Unlike many other Linux distributions, in Debian each source package has its own version control repository. The Debian sources consist of the upstream sources with an additional debian/ subdirectory that contains the actual Debian packaging. For the same reason, a typical Debian packaging Git repository has a debian/latest branch that has changes only in the debian/ subdirectory while the surrounding upstream files are the actual upstream files and have the actual upstream Git history. For details, see my post explaining Debian source packages in Git.

Because of this Git branch structure, importing a new upstream version will typically modify three branches: debian/latest, upstream/latest and pristine-tar. When doing a Merge Request for a new upstream import, only submit one Merge Request for one branch: which means merging your new changes to the debian/latest branch.

There is no need to submit the upstream/latest branch or the pristine-tar branch. Their contents are fixed and mechanically imported into Debian. There are no changes that the reviewer in Debian can request the submitter to do on these branches, so asking for feedback and comments on them is useless. All review, comments and re-reviews concern the content of the debian/latest branch only.

It is not even necessary to use the debian/latest branch for a new upstream version. Personally, I always execute the new version import (with gbp import-orig --verbose --uscan) and prepare and test everything on debian/latest, but when it is time to submit it for review, I run git checkout -b import/$(dpkg-parsechangelog -SVersion) to get a branch named e.g. import/1.0.1 and then push that for review.
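
Put together, a new upstream version submission could look roughly like this (a sketch that assumes your fork is the origin remote; adapt as needed):

gbp import-orig --verbose --uscan
# prepare and test everything on debian/latest as usual
git checkout -b import/$(dpkg-parsechangelog -SVersion)
git push origin HEAD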

Reviewing a Merge Request for a new upstream version import

Reviewing and testing a new upstream version import is a bit tricky currently, but possible. The key is to use gbp pull to automate fetching all branches from the submitter’s fork. Assume you are reviewing a submission targeting the Glow package repository and there is a Merge Request from user otto’s fork. As the maintainer, you would run the commands:

git remote add otto https://salsa.debian.org/otto/glow.git
gbp pull --verbose otto

If there was feedback in the first round and you later need to pull a new version for re-review, running gbp pull --force will not suffice, and this trick of manually fetching each branch and resetting them to the submitter’s version is needed:

for BRANCH in pristine-tar upstream debian/latest
do
  git checkout $BRANCH
  git reset --hard origin/$BRANCH
  git pull --force https://salsa.debian.org/otto/glow.git $BRANCH
done

Once review is done, either click Approve and let the submitter push everything, or alternatively, push all the branches you pulled locally yourself. In GitLab and other forges, the Merge Request will automatically be marked as Merged once the commit ID that was the head of the Merge Request is pushed to the target branch.

Please allow enough time for everyone to participate

When working on Debian, keep in mind that it is a community of volunteers. It is common for people to do Debian stuff only on weekends, so you should patiently wait for at least a week so that enough workdays and weekend days have passed for the people you interact with to have had time to respond on their own Debian time.

Having to wait may feel annoying and disruptive, but try to look at the upside: you do not need to do extra work simply while waiting for others. In some cases, that waiting can be useful thanks to the “sleep on it” phenomenon: when you yourself look at your own submission some days later with fresh eyes, you might notice something you overlooked earlier and improve your code change even without other people’s feedback!

Contribute reviews!

The last but not least suggestion is to make a habit of contributing reviews to packages you do not maintain. As we already see in large open source projects, such as the Linux kernel, they have far more code submissions than they can handle. The bottleneck for progress and maintaining quality becomes the reviews themselves.

For Debian, as an organization and as a community, to be able to renew and grow new contributors, we need more of the senior contributors to shift focus from merely maintaining their packages and writing code to also intentionally interact with new contributors and guide them through the process of creating great open source software. Reviewing code is an effective way to both get tangible progress on individual development items and to transfer culture to a new generation of developers.

Why aren’t 100% of all Debian source packages hosted on Salsa?

As seen at trends.debian.net, more and more packages are using Salsa. Debian does not, however, have any policy about it. In fact, the Debian Policy Manual does not even mention the word “Salsa” anywhere. Adoption of Salsa has so far been purely organic, as in Debian each package maintainer has full freedom to choose whatever preferences they have regarding version control.

I hope the trend to use Salsa will continue and more shared workflows emerge so that collaboration gets easier. To drive the culture of using Merge Requests and more, I drafted the Debian proposal DEP-18: Encourage Continuous Integration and Merge Request based Collaboration for Debian packages. If you are active in Debian and you think DEP-18 is beneficial for Debian, please give a thumbs up at dep-team/deps!21.

,

Planet DebianC.J. Collier: The Very Model of a Patriot Online

It appears that the fragile masculinity tech evangelists have identified Debian as a community with boundaries which exclude them from abusing its members and they’re so angry about it! In response to posts such as this, and inspired by Dr. Conway’s piece, I’ve composed a poem which, hopefully, correctly addresses the feelings of that crowd.


The Very Model of a Patriot Online

I am the very model of a modern patriot online,
My keyboard is my rifle and my noble cause is so divine.
I didn't learn my knowledge in a dusty college lecture hall,
But from the chans where bitter anonymity enthralls us all.
I spend a dozen hours every day upon my sacred quest,
To put the globo-homo narrative completely to the test.
My arguments are peer-reviewed by fellas in the comments section,
Which proves my every thesis is the model of complete perfection.
I’m steeped in righteous anger that the libs call 'white fragility,'
For mocking their new pronouns and their lack of masculinity.
I’m master of the epic troll, the comeback, and the searing snark,
A digital guerrilla who is fighting battles in the dark.

I know the secret symbols and the dog-whistles historical,
From Pepe the Frog to ‘Let’s Go Brandon,’ in order categorical;
In short, for fighting culture wars with rhetoric rhetorical,
I am the very model of a patriot polemical.

***

I stand for true expression, for the comics and the edgy clown,
Whose satire is too based for all the fragile folks in town.
They say my speech is 'violence' while my spirit they are trampling,
The way they try to silence me is really quite a startling sampling
Of 1984, which I've not read but thoroughly understand,
Is all about the tyranny that's gripping this once-blessed land.
My humor is a weapon, it’s a razor-bladed, sharp critique,
(Though sensitive elites will call my masterpiece a form of ‘hate speech’).
They cannot comprehend my need for freedom from all consequence,
They call it 'hate,' I call it 'jokes,' they just don't have a lick of sense.
So when they call me ‘bigot’ for the spicy memes I post pro bono,
I tell them their the ones who're cancelled, I'm the victim here, you know!

Then I can write a screed against the globalist cabal, you see,
And tell you every detail of their vile conspiracy.
In short, when I use logic that is flexible and personal,
I am the very model of a patriot controversial.

***

I'm very well acquainted with the scientific method, too,
It's watching lengthy YouTube vids until my face is turning blue.
I trust the heartfelt testimony of a tearful, blonde ex-nurse,
But what a paid fact-checker says has no effect and is perverse.
A PhD is proof that you've been brainwashed by the leftist mob,
While my own research on a meme is how I really do my job.
I know that masks will suffocate and vaccines are a devil's brew,
I learned it from a podcast host who used to sell brain-boosting goo.
He scorns the lamestream media, the CNNs and all the rest,
Whose biased reporting I've put fully to a rigorous test
By only reading headlines and confirming what I already knew,
Then posting my analysis for other patriots to view.

With every "study" that they cite from sources I can't stand to hear,
My own profound conclusions become ever more precisely clear.
In short, when I've debunked the experts with a confident "Says who?!",
I am the very model of a researcher who sees right through you.

***

But all these culture wars are just a sleight-of-hand, a clever feint,
To hide the stolen ballots and to cover up the moral taint
Of D.C. pizza parlors and of shipping crates from Wayfair, it’s true,
It's all connected in a plot against the likes of me and you!
I've analyzed the satellite photography and watermarks,
I understand the secret drops, the cryptic Qs, the coded sparks.
The “habbening” is coming, friends, just give it two more weeks or three,
When all the traitors face the trials for their wicked treachery.
They say that nothing happened and the dates have all gone past, you see,
But that's just disinformation from the globalist enemy!
Their moving goalposts constantly, a tactic that is plain to see,
To wear us down and make us doubt the coming, final victory!

My mind can see the patterns that a simple sheep could never find,
The hidden puppet-masters who are poisoning our heart and mind.
In short, when I link drag queens to the price of gas and child-trafficking,
I am the very model of a patriot whose brain is quickening!

***

My pickup truck's a testament to everything that I hold dear,
With vinyl decals saying things the liberals all hate and fear.
The Gadsden flag is waving next to one that's blue and starkly thin,
To show my deep respect for law, except the feds who're steeped in sin.
There's Punisher and Molon Labe, so that everybody knows
I'm not someone to trifle with when push to final shoving goes.
I've got my tactical assault gear sitting ready in the den,
Awaiting for the signal to restore our land with my fellow men.
I practice clearing rooms at home when my mom goes out to the store,
A modern Minuteman who's ready for a civil war.
The neighbors give me funny looks, I see them whisper and take note,
They'll see what's what when I'm the one who's guarding checkpoints by their throat.

I am a peaceful man, of course, but I am also pre-prepared,
To neutralize the threats of which the average citizen's unscared.
In short, when my whole identity's a brand of tactical accessory,
You'll say a better warrior has never graced a Cabela's registry.

***

They say I have to tolerate a man who thinks he is a dame,
While feminists and immigrants are putting out my vital flame!
There taking all the jobs from us and giving them to folks who kneel,
And "woke HR" says my best jokes are things I'm not allowed to feel!
An Alpha Male is what I am, a lion, though I'm in this cubicle,
My life's frustrations can be traced to policies Talmudical.
They lecture me on privilege, I, who have to pay my bills and rent!
While they give handouts to the lazy, worthless, and incompetent!
My grandad fought the Nazis! Now I have to press a key for ‘one’
To get a call-rep I can't understand beneath the blazing sun
Of global, corporate tyranny that's crushing out the very soul
Of men like me, who've lost their rightful, natural, and just control!

So yes, I am resentful! And I'm angry! And I'm right to be!
They've stolen all my heritage and my masculinity!
In short, when my own failures are somebody else's evil plot,
I am the very model of the truest patriot we've got!

***

There putting chips inside of you! Their spraying things up in the sky!
They want to make you EAT THE BUGS and watch your very spirit die!
The towers for the 5G are a mind-control delivery tool!
To keep you docile while the children suffer in a grooming school!
The WEF, and Gates, and Soros have a plan they call the 'Great Reset,'
You'll own no property and you'll be happy, or you'll be in debt
To social credit overlords who'll track your every single deed!
There sterilizing you with plastics that they've hidden in the feed!
The world is flat! The moon is fake! The dinosaurs were just a lie!
And every major tragedy's a hoax with actors paid to cry!
I'M NOT INSANE! I SEE THE TRUTH! MY EYES ARE OPEN! CAN'T YOU SEE?!
YOU'RE ALL ASLEEP! YOU'RE COWARDS! YOU'RE AFRAID OF BEING TRULY FREE!

My heart is beating faster now, my breath is short, my vision's blurred,
From all the shocking truth that's in each single, solitary word!
I've sacrificed my life and friends to bring this message to the light, so...
You'd better listen to me now with all your concentrated might, ho!

***

For my heroic struggle, though it's cosmic and it's biblical,
Is waged inside the comments of a post that's algorithm-ical.
And still for all my knowledge that's both tactical and practical,
My mom just wants the rent I owe and says I'm being dramatical.

365 TomorrowsThe Fugitive

Author: Bill Cox She weeps and Tony’s heart aches like never before. He knows that he will do absolutely anything to protect her. He holds her close and she burrows into his chest, her sobs echoing through his ribcage. “It’s going to be all right,” Tony whispers, caressing her head gently, “I’ll hide you from […]

The post The Fugitive appeared first on 365tomorrows.

Planet DebianValhalla's Things: rrdtool and Trixie

Posted on August 17, 2025
Tags: madeof:bits

TL;DR: if you’re using rrdtool on a 32 bit architecture like armhf, make an XML dump of your RRD files just before upgrading to Debian Trixie.

I am an old person at heart, so the sensor data from my home monitoring system1 doesn’t go to one of those newfangled javascript-heavy data visualization platforms, but into good old RRD files, using rrdtool to generate various graphs.

This happens on the home server, which is an armhf single board computer2, hosting a few containers3.

So, yesterday I started upgrading one of the containers to Trixie, and luckily I started from the one with the RRD, because when I rebooted into the fresh system and checked the relevant service I found it stopped on ERROR: '<file>' is too small (should be <size> bytes).

Some searxing later, I’ve4 found this was caused by the 64-bit time_t transition, which changed the format of the files, and that (somewhat unexpectedly) there was no way to fix it on the machine itself.

What needed to be done instead was to export the data to an XML dump before the upgrade, and then import it back afterwards.
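
A minimal sketch of that dump-and-restore cycle, assuming a hypothetical file called sensors.rrd (repeat for each RRD file):

# on the old (pre-upgrade) system
rrdtool dump sensors.rrd > sensors.xml
# on the upgraded system, move the old file aside and restore from the dump
mv sensors.rrd sensors.rrd.old
rrdtool restore sensors.xml sensors.rrd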

Easy enough, right? If you know about it, which is why I’m blogging this, so that other people will know in advance :)

Anyway, luckily I still had the other containers on bookworm, so I copied the files over there, did the upgrade, and my home monitoring system is happily running as before.


  1. of course one has a self-built home monitoring system, right?↩︎

  2. an A20-OLinuXino-MICRO, if anybody wants to know.↩︎

  3. mostly for ease of migrating things between different hardware, rather than insulation, since everything comes from Debian packages anyway.↩︎

  4. and by I I really mean Diego, as I was still into denial / distractions mode.↩︎

,

Charles StrossAnother brief update

(UPDATE: A new article/interview with me about the 20th anniversary of Accelerando just dropped, c/o Agence France-Presse. Gosh, I feel ancient.)

Bad news: the endoscopy failed. (I was scheduled for an upper GI endoscopy via the nasal sinuses to take a look around my stomach and see what's bleeding. Bad news: turns out I have unusually narrow sinuses, and by the time they'd figured this out my nose was watering so badly that I couldn't breathe when they tried to go in via my throat. So we're rescheduling for a different location with an anesthetist who can put me under if necessary. NB: I would have been fine with only local anaesthesia if the bloody endoscope had fit through my sinuses. Gaah.)

The attack novel I was working on has now hit the 70% mark in first draft—not bad for two months. I am going to keep pushing onwards until it stops, or until the page proofs I'm expecting hit me in the face. They're due at the end of June, so I might finish Starter Pack first ... or not. Starter Pack is an unexpected but welcome spin-off of Ghost Engine (third draft currently on hold at 80% done), which I shall get back to in due course. It seems to have metastasized into a multi-book project.

Neither of the aforementioned novels is finished, nor do they have a US publisher. (Ghost Engine has a UK publisher, who has been Very Patient for the past few years—thanks, Jenni!)

Feel free to talk among yourselves, especially about the implications of Operation Spiders Web, which (from here) looks like the defining moment for a very 21st century revolution in military affairs; one marking the transition from fossil fuel powered force projection to electromotive/computational force projection.

Charles StrossBrief Update

The reason(s) for the long silence here:

I've been attacked by an unscheduled novel, which is now nearly 40% written (in first draft). Then that was pre-empted by the copy edits for The Regicide Report (which have a deadline attached, because there's a publication date).

I also took time off for Eastercon, then hospital out-patient procedures. (Good news: I do not have colorectal cancer. Yay! Bad news: they didn't find the source of the blood in my stool, so I'm going back for another endoscopy.)

Finally, I'm still on the waiting list for cataract surgery. Blurred vision makes typing a chore, so I'm spending my time productively—you want more novels, right? Right?

Anyway: I should finish the copy edits within the next week, then get back to one or other of the two novels I'm working on in parallel (the attack novel and Ghost Engine: they share the same fictional far future setting), then maybe I can think of something to blog about again—but not the near future, it's too depressing. (I mean, if I'd written up our current political developments in a work of fiction any time before 2020 they'd have been rejected by any serious SF editor as too implausibly bizarre to publish.)

Planet DebianBits from Debian: Debian turns 32!

32nd Debian Day artwork by Daniel Lenharo

On August 16, 1993, Ian Murdock announced the Debian Project to the world. Three decades (and a bit) later, Debian is still going strong, built by a worldwide community of developers, contributors, and users who believe in a free, universal operating system.

Over the years, Debian has powered servers, desktops, tiny embedded devices, and huge supercomputers. We have gathered at DebConfs, squashed countless bugs, shared late-night hacking sessions, and helped keep millions of systems secure.

Debian Day is a great excuse to get together, whether it is a local meetup, an online event, a bug squashing party, a team sprint or just coffee with fellow Debianites. Check out the Debian Day wiki to see if there is a celebration near you or to add your own.

Here is to 32 years of collaboration, code, and community, and to all the amazing people who make Debian what it is.

Happy Debian Day!

Planet DebianBirger Schacht: Updates and additions in Debian 13 Trixie

Last week Debian 13 (Trixie) was released and there have been some updates and additions in the packages that I maintain that I wanted to write about. I think they are not worth adding to the release notes, but I still wanted to list some of the changes and some of the new packages.

sway

Sway, the tiling Wayland compositor, was at version 1.7 in Bookworm. It was updated to version 1.10 (and 1.11 is already in experimental, waiting for an upload to unstable). This new version of sway brings, among a lot of other features, updated support for touchpad gestures and support for the ext-session-lock-v1 protocol, which allows for more robust and secure screen locking. The configuration snippet that activates the default sway background is now shipped in the sway-backgrounds package instead of being part of the sway package itself.

The default menu application was changed from dmenu to wmenu. wmenu is a Wayland-native alternative to dmenu which I packaged, and it is now recommended by sway.

There are some small helper tools for sway that were updated: swaybg was bumped from 1.2.0 to 1.2.1, swaylock was bumped from 1.7.2 to 1.8.2.

The grimshot script for making screenshots was part of sway’s contrib folder for a long time (but was shipped as a separate binary package). It was removed from sway and is now part of the sway-contrib project. There are some other useful utilities in this source package that I might package in the future.

slurp, which is used by grimshot to select a region, was updated from version 1.4 to version 1.5.

labwc

I uploaded the first labwc package two years ago and I’m happy it is now part of a stable Debian release. Labwc is also based on wlroots, like sway. It is a window-stacking compositor and is inspired by openbox. I used openbox for a long time back in the day before I moved to i3 and I’m very happy to see that there is a Wayland alternative.

foot

Foot is a minimalistic and fast Wayland terminal emulator. It is mostly keyboard driven. foot was updated from version 1.13.1 to 1.21.0. Probably the most important change for upgrading users is the following (a configuration sketch for restoring the old bindings appears after the list):

  • Control+Shift+u is now bound to unicode-input instead of show-urls-launch, to follow the convention established in GTK and Qt
  • show-urls-launch is now bound to Control+Shift+o
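
Users who preferred the old behaviour can most likely restore it in foot.ini; this is only a hedged sketch, so check the foot.ini(5) man page for the exact option names and syntax:

[key-bindings]
# hypothetical snippet: restore the old URL binding and unbind unicode input to avoid a collision
show-urls-launch=Control+Shift+u
unicode-input=none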

et cetera

The Wayland kiosk cage was updated from 0.1.4 to 0.2.0.

The waybar bar for wlroots compositors was updated from 0.9.17 to 0.12.0.

swayimg was updated from 1.10 to 3.8 and now brings support for custom key bindings, support for additional image types (PNM, EXR, DICOM, Farbfeld, sixel) and a gallery mode.

tofi, another dmenu replacement, was updated from 0.8.1 to 0.9.1; wf-recorder, a tool for screen recording in wlroots-based compositors, was updated from version 0.3 to version 0.5.0. wlogout was updated from version 1.1.1 to 1.2.2. The application launcher wofi was updated from 1.3 to 1.4.1. The lightweight status panel yambar was updated from version 1.9 to 1.11. kanshi, the tool for managing and automatically switching your output profiles, was updated from version 1.3.1 to version 1.5.1.

usbguard was updated from version 1.1.2 to 1.1.3.

added

  • fnott - a lightweight notification daemon for wlroots based compositors
  • fyi - a utility to send notifications to a notification daemon, similar to notify-send
  • pipectl - a tool to create and manage short-lived named pipes; it is a dependency of wl-present, a script around wl-mirror which implements output mirroring for wlroots-based compositors
  • poweralertd - a small daemon that notifies you about the power status of your battery powered devices
  • wlopm - control power management of outputs
  • wlrctl - command line utility for miscellaneous wlroots Wayland extensions
  • wmenu - already mentioned, the new default launcher of sway
  • wshowkeys - shows keypresses in wayland sessions, nice for debugging
  • libsfdo - libraries implementing some freedesktop.org specs, used by labwc

365 TomorrowsShiny

Author: James Sallis Head propped against the bed’s headboard, half a glass of single malt at hand, the dying man readies himself for the nothingness that awaits him. He imagines it as a pool of something warm, light oil perhaps, in which he will float lazily out from the banks and curbs of his life, […]

The post Shiny appeared first on 365tomorrows.

,

Planet DebianSteinar H. Gunderson: Abstract algebra structures made easy

Group theory, and abstract algebra in general, has many useful properties; you can take a bunch of really common systems and prove very useful statements that hold for all of them at once.

But sometimes in computer science, we just use the names, not really the theorems. If you're showing that something is a group and then proceed to use Fermat's little theorem (perhaps to efficiently compute inverses, when it's not at all obvious what they would be), then you really can't go without the theory. But for some cases, we just love to be succinct in our description of things, and for outsiders, it's just… not useful.

So here's Steinar's easy (and more importantly, highly non-scientific; no emails about inaccuracies, please :-) ) guide to the most common abstract algebra structures:

  • Set: Hopefully you already know what this is. A collection of things (for instance numbers).
  • Semigroup: A (binary) operation that isn't crazy.
  • Monoid: An operation, but you also have a no-op.
  • Group: An operation, but you also have the opposite operation.
  • Abelian group: An operation, but the order doesn't matter.
  • Ring: Two operations; the Abelian group got a friend for Christmas. The extra operation might be kind of weird (for instance, has no-ops but might not always have opposites).
  • Field: A ring with some extra flexibility, so you can do almost whatever you are used to doing with “normal” (real) numbers except perhaps order them.

So for instance, assuming that x and y are non-negative integers (i.e. including zero), then max(x,y) (the motivating example for this post) is a monoid. Why? Because it's a non-crazy binary operation (in particular, max(max(x,y),z) = max(x,max(y,z))), and you can use x=0 or y=0 as a no-op (max(anything, 0) = anything). But it's not a group, because once you've computed max(x,y), there's nothing you can max() it with to get the smaller of the two values back.

There are many more, but these are the ones you get today.

Krebs on SecurityMobile Phishers Target Brokerage Accounts in ‘Ramp and Dump’ Cashout Scheme

Cybercriminal groups peddling sophisticated phishing kits that convert stolen card data into mobile wallets have recently shifted their focus to targeting customers of brokerage services, new research shows. Undeterred by security controls at these trading platforms that block users from wiring funds directly out of accounts, the phishers have pivoted to using multiple compromised brokerage accounts in unison to manipulate the prices of foreign stocks.

Image: Shutterstock, WhataWin.

This so-called ‘ramp and dump‘ scheme borrows its name from age-old “pump and dump” scams, wherein fraudsters purchase a large number of shares in some penny stock, and then promote the company in a frenzied social media blitz to build up interest from other investors. The fraudsters dump their shares after the price of the penny stock increases to some degree, which usually then causes a sharp drop in the value of the shares for legitimate investors.

With ramp and dump, the scammers do not need to rely on ginning up interest in the targeted stock on social media. Rather, they will preposition themselves in the stock that they wish to inflate, using compromised accounts to purchase large volumes of it and then dumping the shares after the stock price reaches a certain value. In February 2025, the FBI said it was seeking information from victims of this scheme.

“In this variation, the price manipulation is primarily the result of controlled trading activity conducted by the bad actors behind the scam,” reads an advisory from the Financial Industry Regulatory Authority (FINRA), a private, non-profit organization that regulates member brokerage firms. “Ultimately, the outcome for unsuspecting investors is the same—a catastrophic collapse in share price that leaves investors with unrecoverable losses.”

Ford Merrill is a security researcher at SecAlliance, a CSIS Security Group company. Merrill said he has tracked recent ramp-and-dump activity to a bustling Chinese-language community that is quite openly selling advanced mobile phishing kits on Telegram.

“They will often coordinate with other actors and will wait until a certain time to buy a particular Chinese IPO [initial public offering] stock or penny stock,” said Merrill, who has been chronicling the rapid maturation and growth of the China-based phishing community over the past three years.

“They’ll use all these victim brokerage accounts, and if needed they’ll liquidate the account’s current positions, and will preposition themselves in that instrument in some account they control, and then sell everything when the price goes up,” he said. “The victim will be left with worthless shares of that equity in their account, and the brokerage may not be happy either.”

Merrill said the early days of these phishing groups — between 2022 and 2024 — were typified by phishing kits that used text messages to spoof the U.S. Postal Service or some local toll road operator, warning about a delinquent shipping or toll fee that needed paying. Recipients who clicked the link and provided their payment information at a fake USPS or toll operator site were then asked to verify the transaction by sharing a one-time code sent via text message.

In reality, the victim’s bank is sending that code to the mobile number on file for their customer because the fraudsters have just attempted to enroll that victim’s card details into a mobile wallet. If the visitor supplies that one-time code, their payment card is then added to a new mobile wallet on an Apple or Google device that is physically controlled by the phishers.

The phishing gangs typically load multiple stolen cards to digital wallets on a single Apple or Android device, and then sell those phones in bulk to scammers who use them for fraudulent e-commerce and tap-to-pay transactions.

An image from the Telegram channel for a popular Chinese mobile phishing kit vendor shows 10 mobile phones for sale, each loaded with 4-6 digital wallets from different financial institutions.

This China-based phishing collective exposed a major weakness common to many U.S.-based financial institutions that already require multi-factor authentication: The reliance on a single, phishable one-time token for provisioning mobile wallets. Happily, Merrill said many financial institutions that were caught flat-footed on this scam two years ago have since strengthened authentication requirements for onboarding new mobile wallets (such as requiring the card to be enrolled via the bank’s mobile app).

But just as squeezing one part of a balloon merely forces the air trapped inside to bulge into another area, fraudsters don’t go away when you make their current enterprise less profitable: They just shift their focus to a less-guarded area. And lately, that gaze has settled squarely on customers of the major brokerage platforms, Merrill said.

THE OUTSIDER

Merrill pointed to several Telegram channels operated by some of the more accomplished phishing kit sellers, which are full of videos demonstrating how every feature in their kits can be tailored to the attacker’s target. The video snippet below comes from the Telegram channel of “Outsider,” a popular Mandarin-speaking phishing kit vendor whose latest offering includes a number of ready-made templates for using text messages to phish brokerage account credentials and one-time codes.



According to Merrill, Outsider is a woman who previously went by the handle “Chenlun.” KrebsOnSecurity profiled Chenlun’s phishing empire in an October 2023 story about a China-based group that was phishing mobile customers of more than a dozen postal services around the globe. In that case, the phishing sites were using a Telegram bot that sent stolen credentials to the “@chenlun” Telegram account.

Chenlun’s phishing lures are sent via Apple’s iMessage and Google’s RCS service and spoof one of the major brokerage platforms, warning that the account has been suspended for suspicious activity and that recipients should log in and verify some information. The missives include a link to a phishing page that collects the customer’s username and password, and then asks the user to enter a one-time code that will arrive via SMS.

The new phish kit videos on Outsider’s Telegram channel only feature templates for Schwab customers, but Merrill said the kit can easily be adapted to target other brokerage platforms. One reason the fraudsters are picking on brokerage firms, he said, has to do with the way they handle multi-factor authentication.

Schwab clients are presented with two options for second factor authentication when they open an account. Users who select the option to only prompt for a code on untrusted devices can choose to receive it via text message, an automated inbound phone call, or an outbound call to Schwab. With the “always at login” option selected, users can choose to receive the code through the Schwab app, a text message, or a Symantec VIP mobile app.

In response to questions, Schwab said it regularly updates clients on emerging fraud trends, including this specific type, which the company addressed in communications sent to clients earlier this year.

The 2FA text message from Schwab warns recipients against giving away their one-time code.

“That message focused on trading-related fraud, highlighting both account intrusions and scams conducted through social media or messaging apps that deceive individuals into executing trades themselves,” Schwab said in a written statement. “We are aware and tracking this trend across several channels, as well as others like it, which attempt to exploit SMS-based verification with stolen credentials. We actively monitor for suspicious patterns and take steps to disrupt them. This activity is part of a broader, industry-wide threat, and we take a multi-layered approach to address and mitigate it.”

Other popular brokerage platforms allow similar methods for multi-factor authentication. Fidelity requires a username and password on initial login, and offers the ability to receive a one-time token via SMS, an automated phone call, or by approving a push notification sent through the Fidelity mobile app. However, all three of these methods for sending one-time tokens are phishable; even with the brokerage firm’s app, the phishers could prompt the user to approve a login request that they initiated in the app with the phished credentials.

Vanguard offers customers a range of multi-factor authentication choices, including the option to require a physical security key in addition to one’s credentials on each login. A security key implements a robust form of multi-factor authentication known as Universal 2nd Factor (U2F), which allows the user to complete the login process simply by connecting an enrolled USB or Bluetooth device and pressing a button. The key works without the need for any special software drivers, and the nice thing about it is your second factor cannot be phished.
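
The reason that works is origin binding: roughly, the key signs a challenge together with the site identity the browser actually saw, so a response captured or relayed through a look-alike domain will not verify at the real one. Here is a toy sketch of that idea in Python (this is not the real FIDO2/U2F wire protocol, and the site names are made up) using the third-party cryptography package:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
import os

# Enrollment: the private key never leaves the token; the site keeps the public key.
token_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = token_key.public_key()

def token_sign(origin_seen_by_browser: str, challenge: bytes) -> bytes:
    # The token signs the challenge *and* the origin the browser reports.
    return token_key.sign(origin_seen_by_browser.encode() + challenge,
                          ec.ECDSA(hashes.SHA256()))

def site_verify(expected_origin: str, challenge: bytes, signature: bytes) -> bool:
    try:
        registered_public_key.verify(signature, expected_origin.encode() + challenge,
                                     ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

challenge = os.urandom(32)
legit = token_sign("https://broker.example", challenge)
phished = token_sign("https://broker-login.example", challenge)  # relayed via a fake site
print(site_verify("https://broker.example", challenge, legit))    # True
print(site_verify("https://broker.example", challenge, phished))  # False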

THE PERFECT CRIME?

Merrill said that in many ways the ramp-and-dump scheme is the perfect crime because it leaves precious few connections between the victim brokerage accounts and the fraudsters.

“It’s really genius because it decouples so many things,” he said. “They can buy shares [in the stock to be pumped] in their personal account on the Chinese exchanges, and the price happens to go up. The Chinese or Hong Kong brokerages aren’t going to see anything funky.”

Merrill said it’s unclear exactly how those perpetrating these ramp-and-dump schemes coordinate their activities, such as whether the accounts are phished well in advance or shortly before being used to inflate the stock price of Chinese companies. The latter possibility would fit nicely with the existing human infrastructure these criminal groups already have in place.

For example, KrebsOnSecurity recently wrote about research from Merrill and other researchers showing the phishers behind these slick mobile phishing kits employed people to sit for hours at a time in front of large banks of mobile phones being used to send the text message lures. These technicians were needed to respond in real time to victims who were supplying the one-time code sent from their financial institution.

The ashtray says: You’ve been phishing all night.

“You can get access to a victim’s brokerage with a one-time passcode, but then you sort of have to use it right away if you can’t set new security settings so you can come back to that account later,” Merrill said.

The rapid pace of innovations produced by these China-based phishing vendors is due in part to their use of artificial intelligence and large language models to help develop the mobile phishing kits, he added.

“These guys are vibe coding stuff together and using LLMs to translate things or help put the user interface together,” Merrill said. “It’s only a matter of time before they start to integrate the LLMs into their development cycle to make it more rapid. The technologies they are building definitely have helped lower the barrier of entry for everyone.”

Sociological ImagesConflict Theory and the Design of Migrant Housing

Migrant labor sustains U.S. agriculture. It is essential and constant. Yet the people who do the work remain hidden. That invisibility is not just social. It is spatial. Employers tuck housing behind groves, set it far off the road, or place it on private land behind locked gates. These sites are hard to reach. They are also hard to leave.

As a paralegal at my stepmother’s immigration law firm in Metro Detroit, I met with many migrant workers who described the places they were housed. They worked long days in fields or orchards, often six or seven days a week, and returned to dormitories built far from town. The stories stayed with me. They worked in extreme heat and came back to shared spaces without privacy, comfort, or dignity. Workers are placed in dorms with shared beds and tight quarters. Bathrooms are communal. Kitchens are often bare.

A bedroom for migrant farmworkers at the Nightingale facility in Rantoul, Ill., in July 2014.
Credit: Photo by Darrell Hoemann/Midwest Center for Investigative Reporting. Used with permission.

Images help tell this story. Photographs from North Carolina and California show identical cabins in rows. Inside are narrow beds, small windows, and not enough space to stretch. These photos are more than documentation. They are evidence. They show us what it looks like to build a system that erases the people who keep it running.

Migrant agricultural worker’s family in Nipomo, California, 1936. The mother, age 32, sits with three of her seven children outside a temporary shelter during the Great Depression.
Credit: Photo by Dorothea Lange. Farm Security Administration Collection, Library of Congress. Public domain.

Sociology gives us a framework to see that this is not just bad housing; it is a structural problem. When the employer controls housing, every complaint becomes a risk. Speaking up may cost not only your job but also your bed, and it can mean risking forcible deportation. The design limits autonomy and keeps people quiet. The fewer choices a person has, the easier it is to control them.

In sociology, conflict theory starts with a simple idea: society develops and changes based on struggles over power and resources. In the case of migrant labor, that struggle is visible in the very organization of housing. Henri Lefebvre argued that space is socially produced. Social production means that space is shaped by those who have authority to determine how people live. This is not driven by comfort, fairness, or function. The arrangement and social production of space reflect the interests of those in control. The shape of a room, the distance between houses, and the layout of a building are not random. They reflect relationships.

Similarly, Michel Foucault shows how institutions use architecture to enforce discipline. In migrant housing, space signals control. These dorms do not need bars or guards. The buildings are made to meet the minimum legal standard for shelter. That standard is barely above what is allowed for a prison cell. The architecture dehumanizes, and in doing so, it controls.

I saw this firsthand. A worker told me his bunk was so close to the next that he could hear every breath of the man above him. His wife told me there were rules about visitors, meals, and noise. They could not live together, even though they were married. They felt monitored. They were afraid to speak. These homes were not theirs. The system made sure of that.

Sociology gives us the language to name what is happening. This is not a housing crisis. It is a labor strategy. These camps are not temporary accidents. They are long-term solutions to a problem no one wants to fix. As scholars and citizens, we should bring these designs to light. We cannot change what we do not see.

Joey Colby Bernert is a statistician and licensed clinical social worker based in Michigan. She is a graduate student in public health at Michigan State University and studies feminist theory, intersectionality, and the structural determinants of health.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureError'd: Abort, Cancel, Fail?

low-case jeffphi found "Yep, all kinds of technical errors."

Michael R. reports an off by 900 error.

"It is often said that news slows down in August," notes Stewart , wondering if "perhaps The Times have just given up? Or perhaps one of the biggest media companies just doesn't care about their paying subscribers?"

"Zero is a dangerous idea!" exclaims Ernie in Berkeley .

Daniel D. found one of my unfavorites, calling it "Another classic case of cancel dialog. This time featuring KDE Partition Manager."

Fail? Until next time.
[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 TomorrowsGingerbread House

Author: Rachel Handley “This is a terrible idea” I said. My sentience had arrived after the first gingerbread brick was lain. I was now almost fully formed and, with nothing else to do, I told the witch exactly what I thought of her so-called house. “Be quiet, house,” said the witch. “Seriously though, why not […]

The post Gingerbread House appeared first on 365tomorrows.

,

Planet DebianJonathan McDowell: Local Voice Assistant Step 4: openWakeWord

People keep asking me when I'll write the next instalment of my local voice assistant journey. I didn't mean for it to be so long since the last one; things have been busier than I'd like. Anyway. Last time we'd built TensorFlow, so now it's time to sort out openWakeWord. As a reminder, we're trying to put a local voice satellite on my living room Debian media machine.

The point of openWakeWord is to run on the machine the microphone is connected to, listening for the wake phrase (“Hey Jarvis” in my case), and only then calling back to the central server to do a speech to text operation. It’s wrapped up for Wyoming as wyoming-openwakeword.

Of course I've packaged it up - available at https://salsa.debian.org/noodles/wyoming-openwakeword. Trixie only released yesterday, so I'm still running all of this on bookworm. That means you need python3-wyoming from Trixie - 1.6.0-1 will install fine without needing to be rebuilt - and the python3-tflite-runtime we built last time.

Like the other pieces I’m not sure about how this could land in Debian; it’s unclear to me that the pre-trained models provided would be accepted in main.

As usual I start it with a systemd unit file dropped in /etc/systemd/system/wyoming-openwakeword.service:

[Unit]
Description=Wyoming OpenWakeWord server
After=network.target

[Service]
Type=simple
DynamicUser=yes
ExecStart=/usr/bin/wyoming-openwakeword --uri tcp://[::1]:10400/ --preload-model 'hey_jarvis' --threshold 0.8

MemoryDenyWriteExecute=false
ProtectControlGroups=true
PrivateDevices=false
ProtectKernelTunables=true
ProtectSystem=true
RestrictRealtime=true
RestrictNamespaces=true

[Install]
WantedBy=multi-user.target

I’m still playing with the threshold level. It defaults to 0.5, but the device lives under the TV and seems to get a bit confused by it sometimes. There’s some talk about using speex for noise suppression, but I haven’t explored that yet (it’s yet another Python module to bind to the C libraries I’d have to look at).

This is a short one; next post is actually building the local satellite on top to tie everything together.

Cryptogram Friday Squid Blogging: Bobtail Squid

Nice short article on the bobtail squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Cryptogram Jim Sanborn Is Auctioning Off the Solution to Part Four of the Kryptos Sculpture

Well, this is interesting:

The auction, which will include other items related to cryptology, will be held Nov. 20. RR Auction, the company arranging the sale, estimates a winning bid between $300,000 and $500,000.

Along with the original handwritten plain text of K4 and other papers related to the coding, Mr. Sanborn will also be providing a 12-by-18-inch copper plate that has three lines of alphabetic characters cut through with a jigsaw, which he calls “my proof-of-concept piece” and which he kept on a table for inspiration during the two years he and helpers hand-cut the letters for the project. The process was grueling, exacting and nerve wracking. “You could not make any mistake with 1,800 letters,” he said. “It could not be repaired.”

Mr. Sanborn’s ideal winning bidder is someone who will hold on to that secret. He also hopes that person is willing to take over the system of verifying possible solutions and reviewing those unending emails, possibly through an automated system.

Here’s the auction listing.

Cryptogram Subverting AIOps Systems Through Poisoned Input Data

In this input integrity attack against an AI system, researchers were able to fool AIOps tools:

AIOps refers to the use of LLM-based agents to gather and analyze application telemetry, including system logs, performance metrics, traces, and alerts, to detect problems and then suggest or carry out corrective actions. The likes of Cisco have deployed AIops in a conversational interface that admins can use to prompt for information about system performance. Some AIOps tools can respond to such queries by automatically implementing fixes, or suggesting scripts that can address issues.

These agents, however, can be tricked by bogus analytics data into taking harmful remedial actions, including downgrading an installed package to a vulnerable version.

The paper: “When AIOps Become “AI Oops”: Subverting LLM-driven IT Operations via Telemetry Manipulation”:

Abstract: AI for IT Operations (AIOps) is transforming how organizations manage complex software systems by automating anomaly detection, incident diagnosis, and remediation. Modern AIOps solutions increasingly rely on autonomous LLM-based agents to interpret telemetry data and take corrective actions with minimal human intervention, promising faster response times and operational cost savings.

In this work, we perform the first security analysis of AIOps solutions, showing that, once again, AI-driven automation comes with a profound security cost. We demonstrate that adversaries can manipulate system telemetry to mislead AIOps agents into taking actions that compromise the integrity of the infrastructure they manage. We introduce techniques to reliably inject telemetry data using error-inducing requests that influence agent behavior through a form of adversarial reward-hacking; plausible but incorrect system error interpretations that steer the agent’s decision-making. Our attack methodology, AIOpsDoom, is fully automated—combining reconnaissance, fuzzing, and LLM-driven adversarial input generation—and operates without any prior knowledge of the target system.

To counter this threat, we propose AIOpsShield, a defense mechanism that sanitizes telemetry data by exploiting its structured nature and the minimal role of user-generated content. Our experiments show that AIOpsShield reliably blocks telemetry-based attacks without affecting normal agent performance.

Ultimately, this work exposes AIOps as an emerging attack vector for system compromise and underscores the urgent need for security-aware AIOps design.

Cryptogram Zero-Day Exploit in WinRAR File

A zero-day vulnerability in WinRAR is being exploited by at least two Russian criminal groups:

The vulnerability seemed to have super Windows powers. It abused alternate data streams, a Windows feature that allows different ways of representing the same file path. The exploit abused that feature to trigger a previously unknown path traversal flaw that caused WinRAR to plant malicious executables in attacker-chosen file paths %TEMP% and %LOCALAPPDATA%, which Windows normally makes off-limits because of their ability to execute code.

More details in the article.

Cryptogram Eavesdropping on Phone Conversations Through Vibrations

Researchers have managed to eavesdrop on cell phone voice conversations by using radar to detect vibrations. It’s more a proof of concept than anything else. The radar detector is only ten feet away, the setup is stylized, and accuracy is poor. But it’s a start.

Cryptogram Trojans Embedded in .svg Files

Porn sites are hiding code in .svg files:

Unpacking the attack took work because much of the JavaScript in the .svg images was heavily obscured using a custom version of “JSFuck,” a technique that uses only a handful of character types to encode JavaScript into a camouflaged wall of text.

Once decoded, the script causes the browser to download a chain of additional obfuscated JavaScript. The final payload, a known malicious script called Trojan.JS.Likejack, induces the browser to like a specified Facebook post as long as a user has their account open.

“This Trojan, also written in Javascript, silently clicks a ‘Like’ button for a Facebook page without the user’s knowledge or consent, in this case the adult posts we found above,” Malwarebytes researcher Pieter Arntz wrote. “The user will have to be logged in on Facebook for this to work, but we know many people keep Facebook open for easy access.”

This isn’t a new trick. We’ve seen Trojaned .svg files before.

David BrinAI + WAIST. A predictive riff from EXISTENCE

 

While I strive to finish my own book on Artificial Intelligence - filling in what I consider to be about fifty perceptual gaps in current discussions,* I try to keep up with what's being said in a fast-changing landscape and ideascape. Take this widely bruited essay by Niall Ferguson in The Times, which begins with a nod to science fiction...

 

...asserting that ONLY my esteemed colleague, the brilliant Neal Stephenson, could possibly have peered ahead to see aspects of this era... despite there having been dozens of thoughtful or prophetic SF tales before Snow Crash (1992) and some pretty good ones after.

 

Not so much cyberpunk, which only occasionally tried for tech-accurate forecasting, instead of noir-inspired cynicism chic, substituting in Wintermute AI for the Illuminati or Mafia or SPECTRE.... 


... No, I'm thinking more of Stephenson and Greg Bear and Nancy Kress... and yeah, my own Earth (1990) and later Existence (2013), which speculated on not just one kind of AI, but dozens....

 

... as I will in my coming book, tentatively titled: Our Latest Children - Advice about – and for – our natural, AI and hybrid heirs.


*(especially gaps missed by the geniuses who are now making these systems.)

 

Anyway, here's one excerpt from Existence dealing with the topic. And ain't it a WAIST?

== WAIST ==

Wow, ain’t it strange that—boffins have been predicting that truly humanlike artificial intelligence oughta be “just a couple of decades away…” for eighty years already?

 

Some said AI would emerge from raw access to vast numbers of facts. That happened a few months after the Internet went public. 

 

But ai never showed up.

 

Others looked for a network that finally had as many interconnections as a human brain, a milestone we saw passed in the teens, when some of the crimivirals—say the Ragnarok worm or the Tornado botnet—infested-hijacked enough homes and fones to constitute the world’s biggest distributed computer, far surpassing the greatest “supercomps” and even the number of synapses in your own skull!

 

Yet, still, ai waited.

 

How many other paths were tried? How about modeling a human brain in software? 

Or modeling one in hardware. 

Evolve one, in the great Darwinarium experiment! 

Or try guiding evolution, altering computers and programs the way we did sheep and dogs, by letting only those reproduce that have traits we like—say, those that pass a Turing test, by seeming human. 

Or the ones swarming the streets and homes and virts of Tokyo, selected to exude incredible cuteness?

 

Others, in a kind of mystical faith that was backed up by mathematics and hothouse physics, figured that a few hundred quantum processors, tuned just right, could connect with their counterparts in an infinite number of parallel worlds, and just-like-that, something marvelous and God-like would pop into being.

 

The one thing no one expected was for it to happen by accident, arising from a high school science fair experiment.

 

I mean, wow ain’t it strange that a half-brilliant tweak by sixteen-year-old Marguerita deSilva leaped past the accomplishments of every major laboratory, by uploading into cyberspace a perfect duplicate of the little mind, personality, and instincts of her pet rat, Porfirio?

 

And wow ain’t it strange that Porfirio proliferated, grabbing resources and expanding, in patterns and spirals that remain—to this day—so deeply and quintessentially ratlike?

 

Not evil, all-consuming, or even predatory—thank heavens. But insistent.

 

And Wow, AIST there is a worldwide betting pool, now totaling up to a billion Brazilian reals—over whether Marguerita will end up bankrupt, from all the lawsuits over lost data and computer cycles that have been gobbled up by Porfirio? Or else, if she’ll become the world’s richest person—because so many newer ais are based upon her patents? Or maybe because she alone seems to retain any sort of influence over Porfirio, luring his feral, brilliant attention into virtlayers and corners of the Worldspace where he can do little harm? So far.

 

And WAIST we are down to this? Propitiating a virtual Rat God—(you see, Porfirio, I remembered to capitalize your name, this time)—so that he’ll be patient and leave us alone. That is, until humans fully succeed where Viktor Frankenstein calamitously failed?

 

To duplicate the deSilva Result and provide her creation with a mate.

A few ideas distilled down in that excerpt? There are others.

 

But heck, have you seen that novel’s dramatic and fun 3-minute trailer? All hand-made art from the great Patrick Farley!

 

And while we’re on the topic: Here I read (aloud of course) chapter two of Existence, consisting of the stand alone story “Aficionado.”

BTW, in EXISTENCE I refer to the US Space Force.  Not my biggest prediction, but another hit.

 

Now... off to the World SciFi Convention...

 

Worse Than FailureCodeSOD: An Array of Parameters

Andreas found this in a rather large, rather ugly production code base.

private static void LogView(object o)
{
    try
    {
        ArrayList al = (ArrayList)o;
        int pageId = (int)al[0];
        int userId = (int)al[1];

        // ... snipped: Executing a stored procedure that stores the values in the database
    }
    catch (Exception) { }
}

This function accepts an object of any type, except no, it doesn't: it expects that object to be an ArrayList. It then assumes the array list will store values in a specific order. Note that they're not using a generic ArrayList here, nor could they- it (potentially) needs to hold a mix of types.

What they've done here is replace a parameter list with an ArrayList, giving up compile time type checking for surprising runtime exceptions. And why?

"Well," the culprit explained when Andreas asked about this, "the underlying database may change. And then the function would need to take different parameters. But that could break existing code, so this allows us to add parameters without ever having to change existing code."

"Have you heard of optional arguments?" Andreas asked.

"No, all of our arguments are required. We'll just default the ones that the caller doesn't supply."

And yes, this particular pattern shows up all through the code base. It's "more flexible this way."

[Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready. Learn more.

Cryptogram LLM Coding Integrity Breach

Here’s an interesting story about a failure being introduced by LLM-written code. Specifically, the LLM was doing some code refactoring, and when it moved a chunk of code from one file to another it changed a “break” to a “continue.” That turned an error logging statement into an infinite loop, which crashed the system.
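
A hypothetical sketch (not the actual code from the incident) shows how small that failure mode is: in a polling loop, one keyword separates "log the error and stop" from "log the error forever".

import logging

logging.basicConfig(level=logging.ERROR)

def fetch():
    return None  # stand-in for a call that has started failing

def poll():
    while True:
        batch = fetch()
        if batch is None:
            logging.error("no data returned, giving up")
            break  # a refactor that swaps this `break` for `continue`
                   # never exits the loop and repeats the error line forever
        # ... process batch ...

poll()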

This is an integrity failure. Specifically, it’s a failure of processing integrity. And while we can think of particular patches that alleviate this exact failure, the larger problem is much harder to solve.

Davi Ottenheimer comments.

365 TomorrowsOne Room and a Matchbook

Author: Lynne Curry I didn’t get the house. Not the Lexus, the lake lot, the gilded dental practice or the damn espresso machine I bought him the year he started molar sculpting. I got a one-room cabin. Ninety miles south of Anchorage. No plumbing. A stove that belches smoke. A roof that drips snowmelt onto […]

The post One Room and a Matchbook appeared first on 365tomorrows.

,

Cryptogram AI Applications in Cybersecurity

There is a really great series of online events highlighting cool uses of AI in cybersecurity, titled Prompt||GTFO. Videos from the first three events are online. And here’s where to register to attend, or participate, in the fourth.

Some really great stuff here.

Planet DebianSven Hoexter: Automated Browsing with Gemini and Chrome via BrowserMCP and gemini-cli

Brief dump so I don't forget how that worked in August 2025. Requires npm, npx and nodejs.

  1. Install Chrome
  2. Add the BrowserMCP extension
  3. Install gemini-cli: npm install -g @google/gemini-cli
  4. Retrieve a Gemini API key via AI Studio
  5. Export the API key for gemini-cli: export GEMINI_API_KEY=2342
  6. Start the BrowserMCP extension (see the manual); an info box will appear showing that it's active, with a cancel button.
  7. Add the MCP server to gemini-cli: gemini mcp add browsermcp npx @browsermcp/mcp@latest
  8. Start gemini-cli, let it use the MCP server, and task it to open a website.

365 TomorrowsTiger Woman in a Taxi-Cab

Author: Hillary Lyon Jenna slid into the first available self-driving taxi. She kept her cat-eye sunglasses on even though it was dim in the cab’s interior; the sunglasses complimented her tiger-stripe patterned coat, completing her look. She liked that, though some members of her gang said it shouted ‘cat burglar.’ That’s what she was, Jenna […]

The post Tiger Woman in a Taxi-Cab appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Raise VibeError

Ronan works with a vibe coder- an LLM addicted developer. This is a type of developer that's showing up with increasing frequency. Their common features include: not reading the code the AI generated, not testing the code the AI generated, not understanding the context of the code or how it integrates into the broader program, and absolutely not bothering to follow the company coding standards.

Here's an example of the kind of Python code they were "writing":

if isinstance(o, Test):
    if o.requirement is None:
        logger.error(f"Invalid 'requirement' in Test: {o.key}")
        try:
            raise ValueError("Missing requirement in Test object.")
        except ValueError:
            pass

    if o.title is None:
        logger.error(f"Invalid 'title' in Test: {o.key}")
        try:
            raise ValueError("Missing title in Test object.")
        except ValueError:
            pass

An isinstance check is already a red flag. Even without proper type annotations and type checking (though you should use them) any sort of sane coding is going to avoid situations where your method isn't sure what input it's getting. isinstance isn't a WTF, but it's a hint at something lurking off screen. (Yes, sometimes you do need it, this may be one of those times, but I doubt it.)

In this case, if the Test object is missing certain fields, we want to log errors about it. That part, honestly, is all fine. There are potentially better ways to express this idea, but the idea is fine.

No, the obvious turd in the punchbowl here is the exception handling. This is pure LLM, in that it's a statistically probable result of telling the LLM "raise an error if the requirement field is missing". The resulting code, however, raises an exception, immediately catches it, and then does nothing with it.

I'd almost think it's a pre-canned snippet that's meant to be filled in, but no- there's no reason a snippet would throw and catch the same error.
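
For contrast, here is a minimal sketch of what the prompt presumably wanted, reusing the names from the snippet above: decide whether a missing field is fatal (log and raise) or recoverable (log and move on), and then do exactly one of those.

# Hypothetical cleanup, reusing logger, Test, and o from the snippet above.
if isinstance(o, Test):
    missing = [name for name in ("requirement", "title") if getattr(o, name) is None]
    if missing:
        logger.error(f"Invalid Test {o.key}: missing {', '.join(missing)}")
        raise ValueError(f"Test object {o.key} is missing fields: {missing}")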

Now, in Ronan's case, this has a happy ending: after a few weeks of some pretty miserable collaboration, the new developer got fired. None of "their" code ever got merged in. But they've already got a few thousand AI generated resumes out to new positions…

[Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready. Learn more.

,

Krebs on SecurityMicrosoft Patch Tuesday, August 2025 Edition

Microsoft today released updates to fix more than 100 security flaws in its Windows operating systems and other software. At least 13 of the bugs received Microsoft’s most-dire “critical” rating, meaning they could be abused by malware or malcontents to gain remote access to a Windows system with little or no help from users.

August’s patch batch from Redmond includes an update for CVE-2025-53786, a vulnerability that allows an attacker to pivot from a compromised Microsoft Exchange Server directly into an organization’s cloud environment, potentially gaining control over Exchange Online and other connected Microsoft Office 365 services. Microsoft first warned about this bug on Aug. 6, saying it affects Exchange Server 2016 and Exchange Server 2019, as well as its flagship Exchange Server Subscription Edition.

Ben McCarthy, lead cyber security engineer at Immersive, said a rough search reveals approximately 29,000 Exchange servers publicly facing on the internet that are vulnerable to this issue, with many of them likely to have even older vulnerabilities.

McCarthy said the fix for CVE-2025-53786 requires more than just installing a patch, such as following Microsoft’s manual instructions for creating a dedicated service to oversee and lock down the hybrid connection.

“In effect, this vulnerability turns a significant on-premise Exchange breach into a full-blown, difficult-to-detect cloud compromise with effectively living off the land techniques which are always harder to detect for defensive teams,” McCarthy said.

CVE-2025-53779 is a weakness in the Windows Kerberos authentication system that allows an unauthenticated attacker to gain domain administrator privileges. Microsoft credits the discovery of the flaw to Akamai researcher Yuval Gordon, who dubbed it “BadSuccessor” in a May 2025 blog post. The attack exploits a weakness in “delegated Managed Service Account” or dMSA — a feature that was introduced in Windows Server 2025.

Some of the critical flaws addressed this month with the highest severity (between 9.0 and 9.9 CVSS scores) include a remote code execution bug in the Windows GDI+ component that handles graphics rendering (CVE-2025-53766) and CVE-2025-50165, another graphics rendering weakness. Another critical patch involves CVE-2025-53733, a vulnerability in Microsoft Word that can be exploited without user interaction and triggered through the Preview Pane.

One final critical bug tackled this month deserves attention: CVE-2025-53778, a bug in Windows NTLM, a core function of how Windows systems handle network authentication. According to Microsoft, the flaw could allow an attacker with low-level network access and basic user privileges to exploit NTLM and elevate to SYSTEM-level access — the highest level of privilege in Windows. Microsoft rates the exploitation of this bug as “more likely,” although there is no evidence the vulnerability is being exploited at the moment.

Feel free to holler in the comments if you experience problems installing any of these updates. As ever, the SANS Internet Storm Center has its useful breakdown of the Microsoft patches indexed by severity and CVSS score, and AskWoody.com is keeping an eye out for Windows patches that may cause problems for enterprises and end users.

GOOD MIGRATIONS

Windows 10 users out there likely have noticed by now that Microsoft really wants you to upgrade to Windows 11. The reason is that after the Patch Tuesday on October 14, 2025, Microsoft will stop shipping free security updates for Windows 10 computers. The trouble is, many PCs running Windows 10 do not meet the hardware specifications required to install Windows 11 (or they do, but just barely).

If the experience with Windows XP is any indicator, many of these older computers will wind up in landfills or else will be left running in an unpatched state. But if your Windows 10 PC doesn’t have the hardware chops to run Windows 11 and you’d still like to get some use out of it safely, consider installing a newbie-friendly version of Linux, like Linux Mint.

Like most modern Linux versions, Mint will run on anything with a 64-bit CPU that has at least 2GB of memory, although 4GB is recommended. In other words, it will run on almost any computer produced in the last decade.

There are many versions of Linux available, but Linux Mint is likely to be the most intuitive interface for regular Windows users, and it is largely configurable without any fuss at the text-only command-line prompt. Mint and other flavors of Linux come with LibreOffice, which is an open source suite of tools that includes applications similar to Microsoft Office, and it can open, edit and save documents as Microsoft Office files.

If you’d prefer to give Linux a test drive before installing it on a Windows PC, you can always just download it to a removable USB drive. From there, reboot the computer (with the removable drive plugged in) and select the option at startup to run the operating system from the external USB drive. If you don’t see an option for that after restarting, try restarting again and hitting the F8 button, which should open a list of bootable drives. Here’s a fairly thorough tutorial that walks through exactly how to do all this.

And if this is your first time trying out Linux, relax and have fun: The nice thing about a “live” version of Linux (as it’s called when the operating system is run from a removable drive such as a CD or a USB stick) is that none of your changes persist after a reboot. Even if you somehow manage to break something, a restart will return the system back to its original state.

Worse Than FailureCodeSOD: Round Strips

JavaScript is frequently surprising in terms of what functions it does not support. For example, while it has a Math.round function, that only rounds to the nearest integer, not to an arbitrary precision. That's no big deal, of course: if you wanted to round to, say, four decimal places, you could write something like Math.round(n * 10000) / 10000.

But in the absence of a built-in function to handle that means that many developers choose to reinvent the wheel. Ryan found this one.

function stripExtraNumbers(num) {
    //check if the number's already okay
    //assume a whole number is valid
    var n2 = num.toString();
    if(n2.indexOf(".") == -1)  { return num; }
    //if it has numbers after the decimal point,
    //limit the number of digits after the decimal point to 4
    //we use parseFloat if strings are passed into the method
    if(typeof num == "string"){
        num = parseFloat(num).toFixed(4);
    } else {
        num = num.toFixed(4);
    }
    //strip any extra zeros
    return parseFloat(num.toString().replace(/0*$/,""));
}

We start by turning the number into a string and checking for a decimal point. If it doesn't have one, we've already rounded off, so we return the input. Now, we don't trust our input, so if the input was already a string, we'll parse it into a number. Once we know it's a number, we can call toFixed, which returns a string rounded off to the correct number of decimal places.

This is all very dumb. Just dumb. But it's the last line which gets really dumb.

toFixed returns a padded string, e.g. (10).toFixed(4) returns "10.0000". But this function doesn't want those trailing zeros, so they convert our string num into a string, then use a regex to replace all of the trailing zeros, and then parse it back into a float.

Which, of course, when storing the number as a number, we don't really care about trailing zeros. That's a formatting choice when we output it.

I'm always impressed by a code sample where every single line is wrong. It's like a little treat. In this case, it even gets me a sense of how it evolved from little snippets of misunderstood code. The regex to remove trailing zeros in some other place in this developer's experience led to degenerate cases where they had output like 10., so they also knew they needed to have the check at the top to see if the input had a fractional part. Which the only way they knew to do that was by looking for a . in a string (have fun internationalizing that!). They also clearly don't have a good grasp on types, so it makes sense that they have the extra string check, just to be on the safe side (though it's worth noting that parseFloat is perfectly happy to run on a value that's already a float).

This all could be a one-liner, or maybe two if you really need to verify your types. Yet here we are, with a delightfully wrong way to do everything.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsThe Long Term

Author: Mark Renney The world is broken; in all the ways we predicted it would be. It cannot be repaired; it is far too late for that now. But at least you can take a break, as long as you have the funds of course. You can check into one of the Long Term Hotels. […]

The post The Long Term appeared first on 365tomorrows.

Charles StrossCrib Sheet: A Conventional Boy

A Conventional Boy is the most recent published novel in the Laundry Files as of 2025, but somewhere between the fourth and sixth in internal chronological order—it takes place at least a year after the events of The Fuller Memorandum and at least a year before the events of The Nightmare Stacks.

I began writing it in 2009, and it was originally going to be a long short story (a novelette—8000-16,000 words). But one thing after another got in the way, until I finally picked it up to try and finish it in 2022—at which point it ran away to 40,000 words! Which put it at the upper end of the novella length range. And then I sent it to my editor at Tor.com, who asked for some more scenes covering Derek's life in Camp Sunshine, which shoved it right over the threshold into "short novel" territory at 53,000 words. That's inconveniently short for a stand-alone novel this century (it'd have been fine in the 1950s; Asimov's original Foundation novels were fix-ups of two novellas that bulked up to roughly that length), so we made a decision to go back to the format of The Atrocity Archives—a short novel bundled with another story (or stories) and an explanatory essay. In this case, we chose two novelettes previously published on Tor.com, and an essay exploring the origins of the D&D Satanic Panic of the 1980s (which features heavily in this novel, and which seems eerily topical in the current—2020s—political climate).

(Why is it short, and not a full-sized novel? Well, I wrote it in 2022-23, the year I had COVID19 twice and badly—not hospital-grade badly, but it left me with brain fog for more than a year and I'm pretty sure it did some permanent damage. As it happens, a novella is structurally simpler than a novel (it typically needs only one or two plot strands, rather than three or more or some elaborate extras), and I need to be able to hold the structure of a story together in my head while I write it. A Conventional Boy was the most complicated thing I could have written in that condition without it being visibly defective. There are only two plot strands and some historical flashbacks, they're easily interleaved, and the main plot itself is fairly simple. When your brain is a mass of congealed porridge? Keeping it simple is good. It was accepted by Tor.com for print and ebook publication in 2023, and would normally have come out in 2024, but for business reasons was delayed until January 2025. So take this as my 2024 book, slightly delayed, and suffice to say that my next book—The Regicide Report, due out in January 2026—is back to full length again.)

So, what's it about?

I introduced a new but then-minor Laundry character called Derek the DM in The Nightmare Stacks: Derek is portly, short-sighted, middle-aged, and works in Forecasting Ops, the department of precognition (predicting the future, or trying to), a unit I introduced as a throwaway gag in the novelette Overtime (which is also part of the book). If you think about the implications for any length of time it becomes apparent that precognition is a winning tool for any kind of intelligence agency, so I had to hedge around it a bit: it turns out that Forecasting Ops are not infallible. They can be "jammed" by precognitives working for rival organizations. Focussing too closely on a precise future can actually make it less likely to come to pass. And different precognitives are less or more accurate. Derek is one of the Laundry's best forecasters, and also an invaluable operation planner—or scenario designer, as he'd call it, because he was, and is, a Dungeon Master at heart.

I figured out that Derek's back-story had to be fascinating before I even finished writing The Nightmare Stacks, and I actually planned to write A Conventional Boy next. But somehow it got away from me, and kept getting shoved back down my to-do list until Derek appeared again in The Labyrinth Index and I realized I had to get him nailed down before The Regicide Report (for reasons that will become clear when that novel comes out). So here we are.

Derek began DM'ing for his group of friends in the early 1980s, using the original AD&D rules (the last edition I played). The campaign he's been running in Camp Sunshine is based on the core AD&D rules, with his own mutant extensions: he's rewritten almost everything, because TTRPG rule books are expensive when you're either a 14 year old with a 14-yo's pocket money allowance or a trusty in a prison that pays wages of 30p an hour. So he doesn't recognize the Omphalos Corporation's LARP scenario as a cut-rate knock-off of The Hidden Shrine of Tamoachan, and he didn't have the money to keep up with subsequent editions of AD&D.

Yes, there are some self-referential bits in here. As with the TTRPGs in the New Management books, they eerily prefigure events in the outside world in the Laundryverse. Derek has no idea that naming his homebrew ruleset and campaign Cult of the Black Pharaoh might be problematic until he met Iris Carpenter, Bob's treacherous manager from The Fuller Memorandum (and now Derek's boss in the camp, where she's serving out her sentence running the recreational services). Yes, the game scenario he runs at DiceCon is a garbled version of Eve's adventure in Quantum of Nightmares. (There's a reason he gets pulled into Forecasting Ops!)

DiceCon is set in Scarfolk—for further information, please re-read. Richard Littler's excellent satire of late 1970s north-west England exactly nails the ambiance I wanted for the setting, and Camp Sunshine was already set not far from there: so yes, this is a deliberate homage to Scarfolk (in parts).

And finally, Piranha Solution is real.

You can buy A Conventional Boy here (North America) or here (UK/EU).

Planet DebianFreexian Collaborators: Debian Contributions: DebConf 25, OpenSSH upgrades, Cross compilation collaboration and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-07

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

DebConf 25, by Stefano Rivera and Santiago Ruano Rincón

In July, DebConf 25 was held in Brest, France. Freexian was a gold sponsor and most of the Freexian team attended the event. Many fruitful discussions were had amongst our team and within the Debian community.

DebConf itself was organized by a local team in Brest, that included Santiago (who now lives in Uruguay). Stefano was also deeply involved in the organization, as a DebConf committee member, core video team, and the lead developer for the conference website. Running the conference took an enormous amount of work, consuming all of Stefano and Santiago’s time for most of July.

Lucas Kanashiro was active in the DebConf content team, reviewing talks and scheduling them. There were many last-minute changes to make during the event.

Anupa Ann Joseph was part of the Debian publicity team doing live coverage of DebConf 25 and was part of the DebConf 25 content team reviewing the talks. She also assisted the local team to procure the lanyards.

Recorded sessions presented by Freexian collaborators, often alongside other friends in Debian, included:

OpenSSH upgrades, by Colin Watson

Towards the end of a release cycle, people tend to do more upgrade testing, and this sometimes results in interesting problems. Manfred Stock reported “No new SSH connections possible during large part of upgrade to Debian Trixie”, which would have affected many people upgrading from Debian 12 (bookworm), with potentially severe consequences for people upgrading remote systems. In fact, there were two independent problems that each led to much the same symptom:

  • As part of hardening the OpenSSH server, OpenSSH 9.8 split the monolithic sshd listener process into two pieces: a minimal network listener (still called sshd), and an sshd-session process dealing with each individual session. Before this change, when sshd received an incoming connection, it forked and re-executed itself with some special parameters to deal with it; after this change, it forks and executes sshd-session instead, and sshd no longer accepts the parameters it used to accept for this.

    Debian package upgrades happen (roughly) in two phases: first we unpack the new files onto disk, and then we run some configuration steps which usually include things like restarting services. Normally this is fine, because the old service keeps on working until it’s restarted. In this case, unpacking the new files onto disk immediately stopped new SSH connections from working: the old sshd received the connection and tried to hand it off to a freshly-executed copy of the new sshd binary on disk, which no longer supports this. This wasn’t much of a problem when upgrading OpenSSH on its own or with a small number of other packages, but in release upgrades it left a large gap when you can’t SSH to the system any more, and if anything fails in that interval then you could be in trouble.

    After trying a couple of other approaches, Colin landed on the idea of having the openssh-server package divert /usr/sbin/sshd to /usr/sbin/sshd.session-split before the unpack step of an upgrade from before 9.8, then removing the diversion and moving the new file into place once it’s ready to restart the service. This reduces the period when new connections fail to a minimum.

  • Most OpenSSH processes, including sshd, check for a compatible version of the OpenSSL library when they start up. This check used to be very picky, among other things requiring both the major and minor part of the version number to match. OpenSSL 3 has a better versioning policy, and so OpenSSH 9.4p1 relaxed this check.

    Unfortunately, bookworm shipped with OpenSSH 9.2p1, so as soon as you unpacked the new OpenSSL library during an upgrade, sshd stopped working. This couldn’t be fixed by a change in trixie; we needed to change bookworm in advance of the upgrade so that it would tolerate newer versions of OpenSSL, and time was tight if we wanted this to be available before the release of Debian 13.

    Fortunately, there’s a stable-updates mechanism for exactly this sort of thing, and the stable release managers kindly accepted Colin’s proposal to fix this there.

The net result is that if you apply updates to bookworm (including stable-updates / bookworm-updates, which is enabled by default) before starting the upgrade to trixie, everything should be fine.

Cross compilation collaboration, by Helmut Grohne

Supporting cross building in Debian packages touches lots of areas of the archive and quite some of these matters reside in shared responsibility between different teams. Hence, DebConf was an ideal opportunity to settle long-standing issues.

The cross-building BoF sparked lively discussions, as a significant fraction of developers employ cross builds to get their work done. In the trixie release, about two thirds of the packages can satisfy their cross Build-Depends and about half of the packages can actually be cross built.

Miscellaneous contributions

  • Raphaël Hertzog updated tracker.debian.org to remove references to Debian 10 which was moved to archive.debian.org, and had many fruitful discussions related to Debusine during DebConf 25.
  • Carles Pina prepared some data, questions and information for the DebConf 25 l10n and i18n BoF.
  • Carles Pina demoed and discussed possible next steps for po-debconf-manager with different teams in DebConf 25. He also reviewed Catalan translations and sent them to the packages.
  • Carles Pina started investigating a django-compressor bug: reproduced the bug consistently and prepared a PR for django-compressor upstream (likely more details next month). Looked at packaging frictionless-py.
  • Stefano Rivera triaged Python CVEs against pypy3.
  • Stefano prepared an upload of a new upstream release of pypy3 to Debian experimental (due to the freeze).
  • Stefano uploaded python3.14 RC1 to Debian experimental.
  • Thorsten Alteholz uploaded a new upstream version of sane-airscan to experimental. He also started to work on a new upstream version of hplip.
  • Colin backported fixes for CVE-2025-50181 and CVE-2025-50182 in python-urllib3, and fixed several other release-critical or important bugs in Python team packages.
  • Lucas uploaded ruby3.4 to experimental as a starting point for the ruby-defaults transition that will happen after Trixie release.
  • Lucas coordinated with the Release team the fix of the remaining RC bugs involving ruby packages, and got them all fixed.
  • Lucas, as part of the Debian Ruby team, kicked off discussions to improve internal process/tooling.
  • Lucas, as part of the Debian Outreach team, engaged in multiple discussions around internship programs we run and also what else we could do to improve outreach in the Debian project.
  • Lucas joined the Local groups BoF during DebConf 25 and shared all the good experiences from the Brazilian community and committed to help to document everything to try to support other groups.
  • Helmut spent significant time with Samuel Thibault on improving architecture cross bootstrap for hurd-any mostly reviewing Samuel’s patches. He proposed a patch for improving bash’s detection of its pipesize and a change to dpkg-shlibdeps to improve behavior for building cross toolchains.
  • Helmut reiterated the multiarch policy proposal with a lot of help from Nattie Mayer-Hutchings, Rhonda D’Vine and Stuart Prescott.
  • Helmut finished his work on the process based unschroot prototype that was the main feature of his talk (see above).
  • Helmut analyzed a multiarch-related glibc upgrade failure induced by a /usr-move mitigation of systemd and sent a patch and regression fix both of which reached trixie in time. Thanks to Aurelien Jarno and the release team for their timely cooperation.
  • Helmut resurrected an earlier discussion about changing the semantics of Architecture: all packages in a multiarch context in order to improve the long-standing interpreter problem. With help from Tollef Fog Heen better semantics were discovered and agreement was reached with Guillem Jover and Julian Andres Klode to consider this change. The idea is to record a concrete architecture for every Architecture: all package in the dpkg database and enable choosing it as non-native.
  • Helmut implemented type hints for piuparts.
  • Helmut reviewed and improved a patch set of Jochen Sprickerhof for debvm.
  • Anupa was involved in discussions with the Debian Women team during DebConf 25.
  • Anupa started working for the trixie release coverage and started coordinating release parties.
  • Emilio helped coordinate the release of Debian 13 trixie.

,

Cryptogram Friday Squid Blogging: Squid-Shaped UFO Spotted Over Texas

Here’s the story. The commenters on X (formerly Twitter) are unimpressed.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Worse Than FailureCodeSOD: A Single Lint Problem

We've discussed singleton abuse as an antipattern many times on this site, but folks keep trying to find new ways to implement them badly. And Olivia's co-worker certainly found one.

We start with a C++ utility class with a bunch of functions in it:

//utilities.h
class CUtilities
{
    public CUtilities();
    void doSomething();
    void doSomeOtherThing();
};
extern CUtilities* g_Utility;

So yes, if you're making a pile of utility methods, or if you want a singleton object, the keyword you're looking for is static. We'll set that aside. This class declares a class, and then also declares that there will be a pointer to the class, somewhere.

We don't have to look far.

//utilities.cpp
CUtilities* g_Utility = nullptr;
CUtilities::CUtilities()
{
    g_Utility = this;
}

// all my do-whatever functions here

This defines the global pointer variable, and then also writes the constructor of the utility class so that it initializes the global pointer to itself.

It's worth noting, at this point, that this is not a singleton, because this does nothing to prevent multiple instances from being created. What it does guarantee is that for each new instance, we overwrite g_Utility without disposing of what was already in there, which is a nice memory leak.

But where, or where, does the constructor get called?

//startup.h
class CUtilityInit
{
private:
    CUtilities m_Instance;
};

//startup.cpp
CUtilityInit *utils = new CUtilityInit();

I don't hate a program that starts with an initialization step that clearly instantiates all the key objects. There's just one little problem here that we'll come back to in just a moment, but let's look at the end result.

Anywhere that needs the utilities now can do this:

#include "utilities.h"

//in the code
g_Utility->doSomething();

There's just one key problem: back in the startup.h, we have a private member called CUtilities m_Instance which is never referenced anywhere else in the code. This means when people, like Olivia, are trawling through the codebase looking for linter errors they can fix, they may see an "unused member" and decide to remove it. Which is what Olivia did.

The result compiles just fine, but explodes at runtime since g_Utility was never initialized.

The fix was simple: just don't try and make this a singleton, since it isn't one anyway. At startup, she just populated g_Utility with an instance, and threw away all the weird code around populating it through construction.

Singletons are, as a general rule, bad. Badly implemented singletons themselves easily turn into landmines waiting for unwary developers. Stop being clever and don't try and apply a design pattern for the sake of saying you used a design pattern.


365 TomorrowsDear Jon

Author: Julian Miles, Staff Writer Two words. Nothing else. He turns the envelope over, then puts it down and picks up the ornate Kaldotarnib honour blade and turns that over before sliding it from the scabbard. He makes a few passes in the air, finishing with a swift double strike move. Closing his eyes, he […]

The post Dear Jon appeared first on 365tomorrows.

,

David BrinA debate about saving democracy, that will likely (needlessly) be lost

As Robert Heinlein's predictions keep coming true... (e.g. "crazy years" followed by oppressive theocracy)... I hear more formerly moderate/accommodating friends  refer to the scenario in Heinlein's REVOLT IN 2100 as the only likely way that decency, honor and sapience can ever be restored to the Republic.

And so... a press release of genuine importance: 

"On September 4 in New York and streaming online, Open to Debate hosts: “Should the U.S. Be Ruled by a CEO Dictator?” An 
idea gaining traction in some partisan circles and embraced by some high-profile Silicon Valley figures. Championed by
 Curtis Yarvin, self-described neo-monarchist and founder of "Dark Enlightenment," claiming that democracy has failed and is too slow to meet today’s challenges. The Dictator CEO he proposes, would cut through red tape, challenge institutions and deliver efficiencies.

"Glen Weyl, will argue NO. Consolidating power under a single leader undermines core values of democracy fundamental to America’s political system. History is also filled with examples of autocratic leadership leading to economic ruin and catastrophic decision-making. American democracy might be messy, but let’s focus on making it better, not abandoning it.

"The debate will be held on Thursday, September 4 at 7:00 PM ET at Racket NYC and stream live online." (Someone do a search and offer links in comments?)


== A needed debate -- and a likely disaster ==

Okay, I knew Yarvin when he was a fringe online harasser scampering for attention as "Mencius Moldbug." He was a jibbering ingrate then, howling that 'incels' -- or 'involuntarily celibate' white men -- should be given women of their choice, in order to slake their appetites.  This core motivation serves today, as he suborns rich males by invoking implicit - or even explicit - images of Harems for the Deserving. 

I do not exaggerate any of that, even slightly! Indeed, I've elsewhere dissected this disease and its most pustulatory Yarvin excrescence. See a tomographic scan of this would-be Machiavelli.

Alas, I doubt that Glen Weyl - for all his good intentions and passion for defending the Democratic Enlightenment - will do much more than fall into Yarvin's many traps, providing this neo-Goebbels with a platform, incrementally building his following.  Above all, Weyl should not depend upon defending democracy as 'good' or embodying 'fundamental values.' That approach will only be persuasive to those who already support the moral argument. (As I do.)  

Many will be drawn by romantic visions of glorious rightful kings and chosen-ones -- notions spread not just by Arthurian legends, but relentlessly by Hollywood, via Tolkien's Aragorn or Dune's Atreides or Jedi demigods and their ilk.  These folks will nod in 'sad realism' as Yarvin denounces 'mob rule,' and calls for iron fisted stability. They shrug off appeals to democratic ideals and rights as sappy naïveté. 

Others, who have fallen under the spell of cyclical history -- e.g. the cult of the Fourth Turning -- will accept dictatorship under the assumption that it's only a 'temporary' manifestation of a Time of Heroes -- til democracy can resume under a less decadent generation. Either way, these romantic incantation spells are immune to rebuttal. Both variants are perfectly adept at shrugging off moral defenses of citizen sovereignty.

There is one takedown that works! And that is to cite practical outcomes. 

Demand (as I have done, many times) that Yarvin name even a single kingship -- amid 6000 years of pervasive feudalism by inheritance brats and across five continents -- that ever had a continuous period of spectacular progress and accomplishment like America's recent 25 decades!

Indeed, tally the sum accomplishments of ALL historic kingdoms -- combined! Does that total come close to matching the feats and deeds and wonders wrought by Americans in just a single human lifetime, since the WWII GI Bill Generation -- using Democratic tools and public investment and Rule of Law -- truly made America great?

Defy Yarvin to support his bald-faced assertions of democracy's 'failure' by actually tabulating those compared accomplishments! Shouldn't ingrate yammerers demanding that we chuck out all the traits that gave them cushy lives bear some burden of proof?

Contrast our nation-of-opportunity vs. the stunning waste of talent that festered under feudalism, when rigged dominance by inheritance brats crushed social mobility. And thus, the best that any bright youngster might hope-for would be to follow his father's trade - beset by 'lordly' gangster protection rackets - amid cauterized ambition or hope! 

Show us any other era when a majority of kids were healthy and educated enough -- and fearlessly empowered -- to compete or cooperate fairly and to rise up by virtue of their merits and deeds, rather than inherited status? Empowered to take on elites with creative startups, for example? The one American trait that the world's inheritance brats are determined to expunge.

Ask about the Greatest Generation, so admired (in muzzy abstract) by today's gone-mad right. The GI Bill generation who built mighty universities and science and civil rights and the flattest-fairest society ever seen, till then... and who admired one living human above all others, Franklin Roosevelt. 

And who next - in the 1950s - revered almost as much a fellow named Jonas Salk.

Demand that Moldbug address that word -- competition -- which liberals today use far too little, especially since Adam Smith was the true founder of their movement!* A word that used to be a talisman for conservatism, but that U.S. conservatives never mention at all, nowadays. A word describing the exact thing that kingship directly suppresses. A word that will be utterly gelded, should Yarvin's acolytes have their way.

Mention the only other times that our way was tried... Periclean Athens and daVinci's Florence... early experiments whose accomplishments still shine across ages of feudal darkness.

Or the fact that only democracy has ever penetrated the curtain of delusion and flattery that always... always... surrounds mighty rulers. Even geniuses like Napoleon. Indeed, the central purpose and benefit of democracy is to apply accountability even upon top elites. Allowing the best of them to notice their errors and correct them under the searing medicine of criticism.

This approach -- and not goody-two-shoes moralizing about 'fundamental values' -- should be the obvious core of any rebuttal. Alas, I have learned that the obvious is often not-so. 

We are in our nadir-equivalent of 1862, when an earlier phase of the same struggle seemed hopeless to the Union... until -- (may it happen soon!) -- we find generals who are willing to try new tactics. New ideas. And the power of maneuver, when humanity's future is on the line.

Addendum: I will append below a photostat of Bertrand Russell’s forceful yet dignified letter of refusal to debate a British fascist, a response to Sir Oswald Ernald Mosley (the most despised Briton in 1000 years). I am not quite so mature that I would refuse to debate Mr. Yarvin. But Russell expressed himself brilliantly.


== Another sad case of giving in to gloom ==

I meant to stop there. But the gloom jeremiads roll on and on, helping no one. Take Chris Hedges' "Reign of Idiots".  


 "The idiots take over in the final days of crumbling civilizations. Idiot generals wage endless, unwinnable wars that bankrupt the nation. Idiot economists call for reducing taxes for the rich and cutting social service programs for the poor, and project economic growth on the basis of myth. Idiot industrialists poison the water, the soil and the air, slash jobs and depress wages. Idiot bankers gamble on self-created financial bubbles and impose crippling debt peonage on the citizens. Idiot journalists and public intellectuals pretend despotism is democracy. Idiot intelligence operatives orchestrate the overthrow of foreign governments to create lawless enclaves that give rise to enraged fanatics. Idiot professors, “experts” and “specialists” busy themselves with unintelligible jargon and arcane theory that buttresses the policies of the rulers. Idiot entertainers and producers create lurid spectacles of sex, gore and fantasy. There is a familiar checklist for extinction. We are ticking off every item on it."

 

Did you enjoy reading that? Shaking your head in sad resignation over the inevitable stoopidity of your fellow citizens? Did it occur to you that's what our enemies want from you?  

 

This rant-essay by Hedges begins by raving about idiocy without any irony over its own idiocy: 

"The idiots take over in the final days of crumbling civilizations....  

"There is a familiar checklist for extinction. We are ticking off every item on it."

 

Feh! And get bent, you perfect example of the thing you denounce! 

 

Never before in all of history has a nation had greater numbers - or higher percentages - of wise and smart and knowing people. And not just at the maligned universities, or in the under-attack civil service, or our brilliant (but under-siege) officer corps, or in the streets. We have more (and higher percentages of) brilliant/wise folks than all other nations and societies across all of time... combined. 

 

Indeed, assailing and curbing and demoralizing all of the smart people is the shared goal of both MAGA lumpenprols and the world oligarchs who puppet them. Proving they are idiots, because it simply cannot succeed. 


What? Hey, oligarchs! Your plan is to intimidate and crush the hundred million smartest in society? The ones who know cyber, nano, nuclear, bio and all the rest?  That is your plan? Oh, you will not like us, when we finally get mad.

 

And yet, dopes like Chris Hedges yowl that it is working. It has to work, because you are all fooooools!



== May we find comfort and precedents in earlier, righteous victories ==


I'm reminded of a different phase of the recurring American Civil War, when (like today) the Union side needed... and then got... better generals. 

      Take, in particular, a moment - right after the Battle of the Wilderness - when Ulysses S. Grant heard his underlings whining about "What Bobby Lee is going to do to us next." 


Grant stood up and growled:


"STOP fretting about what Bobby Lee is gonna do to us. Start planning what we will do to Bobby Lee!"


There are a jillion fresh tactics we can use in this fight for civilization... like getting all the dems in GOP districts to re-register as Republicans, which would (for one thing) protect them from being purged out of the voter rolls. But also, it would truly screw up the radicals' Radicalization-via-Primary tactic. And weaken gerrymandering.


But in order to get started, we need first to stand up like confident women and men and reject idiocies like this "Reign of Idiots" bullshit whine. 


It contains some truths, sure, about the gang of criminal fools who have seized our institutions in their Project 2025 / KGB-planned putsch. And it's true that the polemical skills of Democrats could not possibly be worse.


But truths - out of context - can be lies. And Hedges's jeremiad could not have been better written by some Kremlin basement Goebbels, seeking to demoralize us. 

And fuck that, you tool of monsters.



== And finally... ==

Robert Reich assesses Newsom's proposal for voters to allow CA, OR and WA to re-gerrymander until Texas, Florida and N. Carolina stop. Blue voters in the west ENDED the foul crime years ago, but may be talked into temporary retaliation vs confederate cheaters.


Note, Red states are also planning to purge voter rolls! Tell all your friends to prevent being purged by RE-REGISTERING AS REPUBLICANS. Hold your nose and do it, as I did!


The only practical effects will be (1) to protect your voting rights and (2) let you vote in the only election that matters anymore in those states, the Republican primary.


See 1st comment below for how I have long proposed we deal with gerrymandering. But for now... it's over to you. Stand up.


-------

Planet DebianJonathan Carter: Debian 13

Debian 13 has finally been released!

One of the biggest and most under-hyped features is support for HTTP Boot. On computers made in the last ~5 years, this allows you to simply specify a URL (to any d-i or live image ISO) in your computer’s firmware setup and boot it directly over the Internet, so there is no need to download an image, write it to a flash disk and then boot from the flash disk. This is also supported by the Tianocore free EFI firmware, which is useful if you’d like to try it out on QEMU/KVM.
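
If you'd like to poke at that under QEMU/KVM, something along the following lines should get you a Tianocore firmware whose setup menu lets you add an HTTP Boot URL. This is only a rough sketch, assuming Debian's qemu-system-x86 and ovmf packages and their usual firmware file paths:

# install QEMU and the Tianocore (OVMF) firmware
apt install qemu-system-x86 ovmf
# keep a writable copy of the EFI variable store so settings persist
cp /usr/share/OVMF/OVMF_VARS.fd ovmf_vars.fd
qemu-system-x86_64 -enable-kvm -m 2048 \
  -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=ovmf_vars.fd \
  -netdev user,id=net0 -device virtio-net-pci,netdev=net0

From the firmware setup screen you can then configure an HTTP Boot URI pointing at a d-i or live image.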

More details about Debian 13 available on the official press release.

The default theme for Debian 13 is Ceratopsian, designed by Elise Couper. I’ll be honest, I wasn’t 100% sure it was the best choice when it won the artwork vote, but it really grew on me over the last few months, and it looked great in combination with all kinds of other things during DebConf too, so it has certainly won me over.

And I particularly like the Plymouth theme. It’s very minimal, and it reminds me of the Toy Story Trixie character, it’s almost like it helps explain the theme:

Plymouth (start-up/shutdown) theme.

Trixie, the character from Toy Story that was chosen as the codename for Debian 13.

Debian Local Team ISO testing

Yesterday we got some locals together for ISO testing and we got a cake with the wallpaper printed on it, along with our local team logo which has been a work in progress for the last 3 years, so hopefully we’ll finalise it this year! (it will be ready when it’s ready). It came out a lot bluer than the original wallpaper, but still tasted great.

For many releases, I’ve been the only person from South Africa doing ISO smoke-testing, and this time was quite different, since everyone else in the photo below tested an image except for me. I basically just provided some support and helped out with getting salsa/wiki accounts and some troubleshooting. It went nice and fast, and it’s always a big relief when there are no showstoppers for the release.

My dog was really wishing hard that the cake would slip off.

Packaging-wise, I only have one big new package for Trixie, and that’s Cambalache, a rapid application design UI builder for GTK3/GTK4.

The version in trixie is 0.94.1-3 and version 1.0 was recently released, so I’ll get that updated in forky and backport it if possible.

I was originally considering using Cambalache for an installer UI, but ended up going with a web front-end instead. But that’s moving firmly towards forky territory, so more on that another time!

Thanks to everyone who was involved in this release, so far upgrades have been very smooth!

Planet DebianC.J. Collier: Upgrading Proxmox 7 to 8

Some variant of the following[1] worked for me.

The first line is the start of a for loop that uses ssh to run a command on each node in my cluster. The argument -t is passed to attach a controlling terminal to STDIN, STDERR and STDOUT of the session, since there will not be an intervening shell to do it for us. The argument to ssh is a workflow of bash commands. They upgrade the 7.x system to the most recent packages in the repository, then update the sources.list entries for the system to point at bookworm sources instead of bullseye. The package cache is updated, the proxmox-ve package is installed, the packages already installed are upgraded to the versions from bookworm, and the installer concludes.

Dear reader, you might be surprised how many times I saw the word “perl” scroll by during the manual, serial scrolling of this install. It took hours. There were a few prompts, so stand by the keyboard!

[1]

# have your ssh agent keychain running and a key loaded that's installed at 
# ~root/.ssh/authorized_keys on each node 
apt-get install -y keychain
eval $(keychain --eval)
ssh-add ~/.ssh/id_rsa
# Replace the IP address prefix (100.64.79.) and  suffixes (64, 121-128)
# with the actual IPs of your cluster nodes.  Or use hostnames :-)
for o in 64 121 122 123 124 125 126 127 128 ; do   ssh -t root@100.64.79.$o '
  sed -i -e s/bullseye/bookworm/g /etc/apt/sources.list $(compgen -G "/etc/apt/sources.list.d/*.list") \
  && echo "deb [signed-by=/usr/share/keyrings/proxmox-release.gpg] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
    | dd of=/etc/apt/sources.list.d/proxmox-release.list status=none \
  && echo "deb [signed-by=/usr/share/keyrings/proxmox-release.gpg] http://download.proxmox.com/debian/ceph-quincy bookworm main no-subscription" \
    | dd of=/etc/apt/sources.list.d/ceph.list status=none \
  && proxmox_keyid="0xf4e136c67cdce41ae6de6fc81140af8f639e0c39" \
  && curl "https://keyserver.ubuntu.com/pks/lookup?op=get&search=${proxmox_keyid}" \
    | gpg --dearmor -o /usr/share/keyrings/proxmox-release.gpg  \
  && apt-get -y -qq update \
  && apt-get -y -qq install proxmox-ve \
  && apt-get -y -qq full-upgrade \
  && echo "$(hostname) upgraded"'; done

365 TomorrowsBenevolence

Author: Lance J. Mushung Director and Operator, both of whom resembled giant copper-colored eggs, floated into their ship’s control compartment. The viewer displayed the disk of a blue and white planet. Operator transmitted, “Director, these organics are more contentious and disharmonious than most.” “That does not matter. Our theology is benevolence to all organics.” “Of […]

The post Benevolence appeared first on 365tomorrows.

,

Planet DebianBits from Debian: Debian stable is now Debian 13 "trixie"!

[Image: trixie has been released]

We are pleased to announce the official release of Debian 13, codenamed trixie!

What's New in Debian 13

  • Official support for RISC-V (64-bit riscv64), a major architecture milestone
  • Enhanced security through ROP and COP/JOP hardening on both amd64 and arm64 (Intel CET and ARM PAC/BTI support)
  • HTTP Boot support in Debian Installer and Live images for UEFI/U-Boot systems
  • Upgraded software stack: GNOME 48, KDE Plasma 6, Linux kernel 6.12 LTS, GCC 14.2, Python 3.13, and more

Want to install it?

Fresh installation ISOs are now available, including the final Debian Installer featuring kernel 6.12.38 and mirror improvements. Choose your favourite installation media and read the installation manual. You can also use an official cloud image directly on your cloud provider, or try Debian prior to installing it using our "live" images.

Already a happy Debian user and you only want to upgrade?

Full upgrade path from Debian 12 "bookworm" is supported and documented in the Release Notes. Upgrade notes cover APT source preparation, handling obsoletes, and ensuring system resilience.
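
For a typical bookworm system the upgrade follows the familiar pattern sketched below. This is only a rough outline (assuming plain sources.list-style APT sources); the Release Notes remain the authoritative procedure, including backups and checks for held or obsolete packages:

# point APT at trixie instead of bookworm
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
apt update
apt upgrade --without-new-pkgs   # minimal first stage
apt full-upgrade                 # complete the upgrade
apt autoremove --purge           # clean out obsolete packages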

Additional Information

For full details, including upgrade instructions, known issues, and contributors, see the official Release Notes for Debian 13 "trixie".

Congratulations to all developers, QA testers, and volunteers who made Debian 13 "trixie" possible!

Do you want to celebrate the release?

To celebrate with us on this occasion, find a release party near you, and if there isn't one, organize one!

Planet DebianThorsten Alteholz: My Debian Activities in July 2025

Debian LTS

This was my hundred-thirty-third month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4255-1] audiofile security update of two CVEs related to an integer overflow and a memory leak.
  • [DLA 4256-1] libetpan security update to fix one CVE related to a null pointer dereference.
  • [DLA 4257-1] libcaca security update to fix two CVEs related to heap buffer overflows.
  • [DLA 4258-1] libfastjson security update to fix one CVE related to an out of bounds write.
  • [#1106867] kmail-account-wizard was marked as accepted

I also continued my work on suricata, which turned out to be more challenging than expected. This month I also did a week of FD duties and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the eighty-fourth ELTS month. Unfortunately my allocated hours were far less than expected, so I couldn’t do as much work as planned.

Most of the time I spent with FD tasks and I also attended the monthly LTS/ELTS meeting. I further listened to the debusine talks during debconf. On the one hand I would like to use debusine to prepare uploads for embargoed ELTS issues, on the other hand I would like to use debusine to run the version of lintian that is used in the different releases. At the moment some manual steps are involved here and I tried to automate things. Of course like for LTS, I also continued my work on suricata.

Debian Printing

This month I uploaded a new upstream version of:

Guess what, I also started to work on a new version of hplip and intend to upload it in August.

This work is generously funded by Freexian!

Debian Astro

This month I uploaded new upstream versions of:

  • supernovas (sponsored upload to experimental)
  • calceph (sponsored upload to experimental)

I also uploaded the new package boinor. This is a fork of poliastro, which was retired by upstream and removed from Debian some months ago. I adopted it and rebranded it at the request of upstream. boinor is an abbreviation of BOdies IN ORbit, and I hope this software is still useful.

Debian Mobcom

Unfortunately I didn't find any time to work on this topic.

misc

In my fight against outdated RFPs, I closed 31 of them in July. Their number is down to 3447 (how dare you open new RFPs? :-)). Don't be afraid of them; they don't bite and are happy to be released to a closed state.

FTP master

The peace will soon come to an end, so this month I accepted 87 and rejected 2 packages. The overall number of packages that got accepted was 100.

365 TomorrowsTomorrow, and Tomorrow, and Tomorrow

Author: Alexandra Peel The future’s bright, they said. The future’s now! When the Church of Eternity claimed its wise men had seen the light from future days, we bowed to their superior knowledge and respected their ages-long claim on, if not our mortal bodies, then our souls. Now we had the opportunity to transform ourselves […]

The post Tomorrow, and Tomorrow, and Tomorrow appeared first on 365tomorrows.

Planet DebianValhalla's Things: MOAR Pattern Weights

Posted on August 9, 2025
Tags: madeof:atoms

Six hexagonal blocks with a Standard Compliant sticker on top: mobian (blue variant), alizarin molecule, Use Jabber / Do Crime, #FreeSoftWear, indigotin molecule, The internet is ours with a cat that plays with yarn.

I’ve collected some more Standard Compliant stickers.

A picture of the lid of my laptop: a relatively old thinkpad carpeted with hexagonal stickers: Fediverse, a Debian swirl made of cat paw prints, #FreeSoftWear, 31 years of Debian, Open Source Hardware, XMPP, Ada Lovelace, rainbow holographic Fediverse, mobian (blue sticker), tails (cut from a round one), Use Jabber / Do Crime, LIFO, people consensually doing things together (center piece), GL-Como, Piecepack, indigotin, my phone runs debian btw, reproducible builds (cut from round), 4 freedoms in Italian (cut from round), Debian tea, alizarin, Software Heritage (cut from round), ournet.rocks (the cat also seen above), Python, this machine kills -9 daemons, 25 years of FOSDEM, Friendica, Flare. There are only 5 full hexagonal slots free.

Some went on my laptop, of course, but some were selected for another tool I use relatively often: more pattern weights like the ones I blogged about in February.

And of course the sources:

I have enough washers to make two more weights, and even more stickers, but the printer is currently not in use, so I guess they will happen a few months or so in the future.

,

Cryptogram Friday Squid Blogging: New Vulnerability in Squid HTTP Proxy Server

In a rare squid/security combined post, a new vulnerability was discovered in the Squid HTTP proxy server.

Krebs on SecurityKrebsOnSecurity in New ‘Most Wanted’ HBO Max Series

A new documentary series about cybercrime airing next month on HBO Max features interviews with Yours Truly. The four-part series follows the exploits of Julius Kivimäki, a prolific Finnish hacker recently convicted of leaking tens of thousands of patient records from an online psychotherapy practice while attempting to extort the clinic and its patients.

The documentary, “Most Wanted: Teen Hacker,” explores the 27-year-old Kivimäki’s lengthy and increasingly destructive career, one that was marked by cyber attacks designed to result in real-world physical impacts on their targets.

By the age of 14, Kivimäki had fallen in with a group of criminal hackers who were mass-compromising websites and milking them for customer payment card data. Kivimäki and his friends enjoyed harassing and terrorizing others by “swatting” their homes — calling in fake hostage situations or bomb threats at a target’s address in the hopes of triggering a heavily-armed police response to that location.

On Dec. 26, 2014, Kivimäki and fellow members of a group of online hooligans calling themselves the Lizard Squad launched a massive distributed denial-of-service (DDoS) attack against the Sony Playstation and Microsoft Xbox Live platforms, preventing millions of users from playing with their shiny new gaming rigs the day after Christmas. The Lizard Squad later acknowledged that the stunt was planned to call attention to their new DDoS-for-hire service, which came online and started selling subscriptions shortly after the attack.

Finnish investigators said Kivimäki also was responsible for a 2014 bomb threat against former Sony Online Entertainment President John Smedley that grounded an American Airlines plane. That incident was widely reported to have started with a Twitter post from the Lizard Squad, after Smedley mentioned some upcoming travel plans online. But according to Smedley and Finnish investigators, the bomb threat started with a phone call from Kivimäki.

Julius “Zeekill” Kivimaki, in December 2014.

The creaky wheels of justice seemed to be catching up with Kivimäki in mid-2015, when a Finnish court found him guilty of more than 50,000 cybercrimes, including data breaches, payment fraud, and operating a global botnet of hacked computers. Unfortunately, the defendant was 17 at the time, and received little more than a slap on the wrist: A two-year suspended sentence and a small fine.

Kivimäki immediately bragged online about the lenient sentencing, posting on Twitter that he was an “untouchable hacker god.” I wrote a column in 2015 lamenting his laughable punishment because it was clear even then that this was a person who enjoyed watching other people suffer, and who seemed utterly incapable of remorse about any of it. It was also abundantly clear to everyone who investigated his crimes that he wasn’t going to quit unless someone made him stop.

In response to some of my early reporting that mentioned Kivimäki, one reader shared that she had been dealing with non-stop harassment and abuse from Kivimäki for years, including swatting incidents, unwanted deliveries and subscriptions, emails to her friends and co-workers, as well as threatening phone calls and texts at all hours of the night. The reader, who spoke on condition of anonymity, shared that Kivimäki at one point confided that he had no reason whatsoever for harassing her — that she was picked at random and that it was just something he did for laughs.

Five years after Kivimäki’s conviction, the Vastaamo Psychotherapy Center in Finland became the target of blackmail when a tormentor identified as “ransom_man” demanded payment of 40 bitcoins (~450,000 euros at the time) in return for a promise not to publish highly sensitive therapy session notes Vastaamo had exposed online.

Ransom_man, a.k.a. Kivimäki, announced on the dark web that he would start publishing 100 patient profiles every 24 hours. When Vastaamo declined to pay, ransom_man shifted to extorting individual patients. According to Finnish police, some 22,000 victims reported extortion attempts targeting them personally, targeted emails that threatened to publish their therapy notes online unless paid a 500 euro ransom.

In October 2022, Finnish authorities charged Kivimäki with extorting Vastaamo and its patients. But by that time he was on the run from the law and living it up across Europe, spending lavishly on fancy cars, apartments and a hard-partying lifestyle.

In February 2023, Kivimäki was arrested in France after authorities there responded to a domestic disturbance call and found the defendant sleeping off a hangover on the couch of a woman he’d met the night before. The French police grew suspicious when the 6′ 3″ blonde, green-eyed man presented an ID that stated he was of Romanian nationality.

A redacted copy of an ID Kivimaki gave to French authorities claiming he was from Romania.

In April 2024, Kivimäki was sentenced to more than six years in prison after being convicted of extorting Vastaamo and its patients.

The documentary is directed by the award-winning Finnish producer and director Sami Kieski and co-written by Joni Soila. According to an August 6 press release, the four 43-minute episodes will drop weekly on Fridays throughout September across Europe, the U.S., Latin America, Australia and South-East Asia.

Worse Than FailureError'd: Voluntold

It is said (allegedly by the Scots) that confession heals the soul. But does it need to be strictly voluntary? The folks toiling away over at CodeSOD conscientiously change the names to protect the innocent but this side of the house is committed to curing the tortured souls of webdevs. Whether they like it or not. Sadly Sam's submission has been blinded, so the black black soul of xxxxxxxxxxte.com remains unfortunately undesmirched, but three others should be experiencing the sweet sweet healing light of day right about now. I sure hope they appreciate it.

More monkey business this week from Reinier B. who is hoping to check in on some distant cousins. "I'll make sure to accept email from {email address}, otherwise I won't be able to visit {zoo name}."

[screenshot]

 

Alex A. is "trying to pay customs duty." It definitely can be.

[screenshot]

 

"I know it's hard to recruit good developers," commiserates Sam B. , "but it's like they're not even trying." They sure are, as above.

[screenshot]

 

Peter G. bemoans "Apparently this network power thingamajig, found on Aliexpress, is pain itself if the brand name on it is to be believed." Cicero wept.

[screenshot]

 

Jan B. takes the perfecta, hitting not only this week's theme of flubstitutions but also a bounty of bodged null references to boot. "This is one of the hardest choices I've ever had in my life. I'm not sure if I'd prefer null or null as my location.detection.message.CZ." Go in peace.

[screenshot]

 


365 TomorrowsBecause I Elected You

Author: Eva C. Stein Aidan hadn’t meant to bring it up – not here, not today. But when he answered the door, his impulse signal spiked. He let her speak first. “Don’t look so worried,” Mae said as she stepped in – no invitation needed. “It’s good news. They’ve given us a fifteen-minute slot.” “That’s […]

The post Because I Elected You appeared first on 365tomorrows.

,

Cryptogram SIGINT During World War II

The NSA and GCHQ have jointly published a history of World War II SIGINT: “Secret Messengers: Disseminating SIGINT in the Second World War.” This is the story of the British SLUs (Special Liaison Units) and the American SSOs (Special Security Officers).

Cryptogram The “Incriminating Video” Scam

A few years ago, scammers invented a new phishing email. They would claim to have hacked your computer, turned your webcam on, and videoed you watching porn or having sex. BuzzFeed has an article talking about a “shockingly realistic” variant, which includes photos of you and your house—more specific information.

The article contains “steps you can take to figure out if it’s a scam,” but omits the first and most fundamental piece of advice: If the hacker had incriminating video about you, they would show you a clip. Just a taste, not the worst bits so you had to worry about how bad it could be, but something. If the hacker doesn’t show you any video, they don’t have any video. Everything else is window dressing.

I remember when this scam was first invented. I calmed several people who were legitimately worried with that one fact.

Cryptogram Automatic License Plate Readers Are Coming to Schools

Fears around children are opening up a new market for automatic license plate readers.

Cryptogram Google Project Zero Changes Its Disclosure Policy

Google’s vulnerability finding team is again pushing the envelope of responsible disclosure:

Google’s Project Zero team will retain its existing 90+30 policy regarding vulnerability disclosures, in which it provides vendors with 90 days before full disclosure takes place, with a 30-day period allowed for patch adoption if the bug is fixed before the deadline.

However, as of July 29, Project Zero will also release limited details about any discovery they make within one week of vendor disclosure. This information will encompass:

  • The vendor or open-source project that received the report
  • The affected product
  • The date the report was filed and when the 90-day disclosure deadline expires

I have mixed feelings about this. On the one hand, I like that it puts more pressure on vendors to patch quickly. On the other hand, if no indication is provided regarding how severe a vulnerability is, it could easily cause unnecessary panic.

The problem is that Google is not a neutral vulnerability hunting party. To the extent that it finds, publishes, and reduces confidence in competitors’ products, Google benefits as a company.

Worse Than FailureDivine Comedy

"Code should be clear and explain what it does, comments should explain why it does that." This aphorism is a decent enough guideline, though like any guidance short enough to fit on a bumper sticker, it can easily be overapplied or misapplied.

Today, we're going to look at a comment Salagir wrote. This comment does explain what the code does, can't hope to explain why, and instead serves as a cautionary tale. We're going to take the comment in sections, because it's that long.

This is about a stored procedure in MariaDB. Think of Salagir as our Virgil, a guide showing us around the circles of hell. The first circle? A warning that the dead code will remain in the code base:

	/************************** Dead code, but don't delete!

	  What follows if the history of a terrible, terrible code.
	  I keep it for future generations.
	  Read it in a cold evening in front of the fireplace.

My default stance is "just delete bad, dead code". But it does mean we get this story out of it, so for now I'll allow it.

	  **** XXX ****   This is the story of the stored procedure for getext_fields.   **** XXX ****

	Gets the english and asked language for the field, returns what it finds: it's the translation you want.
		   Called like this:
		   " SELECT getext('$table.$field', $key, '$lang') as $label "
		   The function is only *in the database you work on right now*.

Okay, this seems like a pretty simple function. But why does this say "the function is only in the database you work on right now"? That's concerning.

		***** About syntax!!
			The code below can NOT be used by copy and paste in SQL admin (like phpmyadmin), due to the multiple-query that needs DELIMITER set.
			The code that works in phpmyadmin is this:
DELIMITER $$
DROP FUNCTION IF EXISTS getext$$
CREATE FUNCTION (...same...)
		LIMIT 1;
	RETURN `txt_out`;
END$$
			However, DELIMITER breaks the code when executed from PHP.

Am I drowning in the river Styx? Why would I be copy/pasting SQL code into PhpMyAdmin from my PHP code? Is… is this a thing people were doing? Or was it going the opposite way, and people were writing delimited statements and hoping to execute them as a single query? I'm not surprised that didn't work.

		***** About configuration!!!
			IMPORTANT: If you have two MySQL servers bind in Replication mode in order to be able to execute this code, you (or your admin) should set:
			SET GLOBAL log_bin_trust_function_creators = 1;
			Without that, adding of this function will fail (without any error).

I don't know the depths of MariaDB, so I can't comment on if this is a WTF. What leaps out to me though, is that this likely needs to be in a higher-level form of documentation, since this is a high-level configuration flag. Having it live here is a bit buried. But, this is dead code, so it's fine, I suppose.

		***** About indexes!!!!
			The primary key was not used as index in the first version of this function. No key was used.
			Because the code you see here is modified for it's execution. And
				`field`=my_field
			becomes
				`field`= NAME_CONST('my_field',_ascii'[value]' COLLATE 'ascii_bin')
			And if the type of my_field in the function parameter wasn't the exact same as the type of `text`, no index is used!
			At first, I didn't specify the charset, and it became
				`field`= NAME_CONST('my_field',_utf8'[value]' COLLATE 'utf8_unicode_ci')
			Because utf8 is my default, and no index was used, the table `getext_fields` was read entirely each time!
			Be careful of your types and charsets... Also...

Because the code you see here is modified for its execution. What? NAME_CONST is meant to create synthetic columns not pulled from tables, e.g. SELECT NAME_CONST("foo", "bar") would create a result set with one column ("foo"), with one row ("bar"). I guess this is fine as part of a join- but the idea that the code written in the function gets modified before execution is a skin-peelingly bad idea. And if the query is rewritten before being sent to the database, I bet that makes debugging hard.

		***** About trying to debug!!!!!
			To see what the query becomes, there is *no simple way*.
			I literally looped on a SHOW PROCESSLIST to see it!
			Bonus: if you created the function with mysql user "root" and use it with user "SomeName", it works.
			But if you do the show processlist with "SomeName", you won't see it!!

Ah, yes, of course. I love running queries against the database without knowing what they are, and having to use diagnostic tools in the database to hope to understand what I'm doing.

		***** The final straw!!!!!!
			When we migrated to MariaDB, when calling this a lot, we had sometimes the procedure call stucked, and UNKILLABLE even on reboot.
			To fix it, we had to ENTIRELY DESTROY THE DATABASE AND CREATE IT BACK FROM THE SLAVE.
			Several times in the same month!!!

This is the 9th circle of hell, reserved for traitors and people who mix tabs and spaces in the same file. Unkillable even on reboot? How do you even do that? I have a hunch about the database trying to retain consistency even after failures, but what the hell are they doing inside this function creation statement that can break the database that hard? The good news(?) is the comment(!) contains some of the code that was used:

		**** XXX ****    The creation actual code, was:   **** XXX ****

		// What DB are we in?
		$PGf = $Phoenix['Getext']['fields'];
		$db = $PGf['sql_database']? : (
				$PGf['sql_connection'][3]? : (
						$sql->query2cell("SELECT DATABASE()")
					)
				);

		$func = $sql->query2assoc("SHOW FUNCTION STATUS WHERE `name`='getext' AND `db`='".$sql->e($db)."'");

		if ( !count($func) ) {
			$sql->query(<<<MYSQL
				CREATE FUNCTION {$sql->gt_db}getext(my_field VARCHAR(255) charset {$ascii}, my_id INT(10) UNSIGNED, my_lang VARCHAR(6) charset {$ascii})
				RETURNS TEXT DETERMINISTIC
				BEGIN
					DECLARE `txt_out` TEXT;
					SELECT `text` INTO `txt_out`
						FROM {$sql->gt_db}`getext_fields`
						WHERE `field`=my_field AND `id`=my_id AND `lang` IN ('en',my_lang) AND `text`!=''
						ORDER BY IF(`lang`=my_lang, 0, 1)
						LIMIT 1;
					RETURN `txt_out`;
				END;
MYSQL
			);
			...
		}

I hate doing string munging to generate SQL statements, but I especially hate it when the very name of the object created is dynamic. The actual query doesn't look too unreasonable, but everything about how we got here is terrifying.

		**** XXX ****    Today, this is not used anymore, because...   **** XXX ****

		Because a simple sub-query perfectly works! And no maria-db bug.

		Thus, in the function selects()
		The code:
			//example: getext('character.name', `character_id`, 'fr') as name
			$sels[] = $this->sql_fields->gt_db."getext('$table.$field', $key, '$lang') as `$label`";

		Is now:
			$sels[] = "(SELECT `text` FROM {$this->sql_fields->gt_db}`getext_fields`
				WHERE `field`='$table.$field' AND `lang` IN ('en', '$lang') AND `id`=$key AND `text`!=''
				ORDER BY IF(`lang`='$lang', 0, 1) LIMIT 1) as `$label`";

		Less nice to look at, but no procedure, all the previous problems GONE!


		**** XXX   The end.
*/

Of course a simple subquery (or heck, probably a join!) could handle this. Linking data across two tables is what databases are extremely good at. I agree that, at the call site, this is less readable, but there are plenty of ways one could clean this up to make it more readable. Heck, with this, it looks a heck of a lot like you could have written a much simpler function.

Salagir did not provide the entirety of the code, just this comment. The comment remains in the code, as a warning sign. That said, it's a bit verbose. I think a simple "Abandon all hope, ye who enter here," would have covered it.


365 TomorrowsTsunami Blues

Author: Jenny Abbott Avery Darger started discussing his final arrangements on the third day, which was a good sign. They were small decisions at first—plans for cremation in space, for example—and Tsu knew not to rush him. She had the routine down pat for premium clients and was committed to giving him his money’s worth. […]

The post Tsunami Blues appeared first on 365tomorrows.

,

Planet DebianReproducible Builds: Reproducible Builds in July 2025

Welcome to the seventh report from the Reproducible Builds project in 2025. Our monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. If you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

In this report:

  1. Reproducible Builds Summit 2025
  2. Reproducible Builds an official goal for SUSE Enterprise Linux
  3. Reproducible Builds at FOSSY 2025
  4. New OSS Rebuild project from Google
  5. New extension of Python setuptools to support reproducible builds
  6. diffoscope
  7. New library to patch system functions for reproducibility
  8. Independently Reproducible Git Bundles
  9. Website updates
  10. Distribution work
  11. Reproducibility testing framework
  12. Upstream patches

Reproducible Builds Summit 2025

We are extremely pleased to announce the upcoming Reproducible Builds Summit, set to take place from October 28th — 30th 2025 in Vienna, Austria!

We are thrilled to host the eighth edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin, Hamburg and Athens. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort.

During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving.

If you’re interesting in joining us this year, please make sure to read the event page which has more details about the event and location. Registration is open until 20th September 2025, and we are very much looking forward to seeing many readers of these reports there!


Reproducible Builds an official goal for SUSE Enterprise Linux

On our mailing list this month, Bernhard M. Wiedemann revealed the big news that reproducibility is now an official goal for SUSE Linux Enterprise Server (SLES) 16:

[Everything] changed earlier this year when reproducible-builds for SLES-16 became an official goal for the product. More people are talking about digital sovereignty and supply-chain security now. […] Today, only 9 of 3319 (source) packages have significant problems left (plus 7 with pending fixes), so 99.5% of packages have reproducible builds.


Reproducible Builds at FOSSY 2025

On Saturday 2nd August, Vagrant Cascadian and Chris Lamb presented at this year’s FOSSY 2025. Their talk, titled Never Mind the Checkboxes, Here’s Reproducible Builds!, was introduced as follows:

There are numerous policy compliance and regulatory processes being developed that target software development… but do they solve actual problems? Does it improve the quality of software? Do Software Bill of Materials (SBOMs) actually give you the information necessary to verify how a given software artifact was built? What is the goal of all these compliance checklists anyways… or more importantly, what should the goals be? If a software object is signed, who should be trusted to sign it, and can they be trusted … forever?

Hosted by the Software Freedom Conservancy and taking place in Portland, Oregon, USA, FOSSY aims to be a community-focused event: “Whether you are a long time contributing member of a free software project, a recent graduate of a coding bootcamp or university, or just have an interest in the possibilities that free and open source software bring, FOSSY will have something for you”. More information on the event is available on the FOSSY 2025 website, including the full programme schedule.

Vagrant and Chris also staffed a table, where they were available to answer questions about Reproducible Builds and discuss collaborations with other projects.


New OSS Rebuild project from Google

The Google Open Source Security Team (GOSST) published an article this month announcing OSS Rebuild, “a new project to strengthen trust in open source package ecosystems by reproducing upstream artifacts.” As the post itself documents, the new project comprises four facets:

  • Automation to derive declarative build definitions for existing PyPI (Python), npm (JS/TS), and Crates.io (Rust) packages.
  • SLSA Provenance for thousands of packages across our supported ecosystems, meeting SLSA Build Level 3 requirements with no publisher intervention.
  • Build observability and verification tools that security teams can integrate into their existing vulnerability management workflows.
  • Infrastructure definitions to allow organizations to easily run their own instances of OSS Rebuild to rebuild, generate, sign, and distribute provenance.

Unlike most projects that aim for bit-for-bit reproducibility, OSS Rebuild aims for a kind of “semantic” reproducibility:

Through automation and heuristics, we determine a prospective build definition for a target package and rebuild it. We semantically compare the result with the existing upstream artifact, normalizing each one to remove instabilities that cause bit-for-bit comparisons to fail (e.g. archive compression).
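
As a toy illustration of that normalization idea (this is not OSS Rebuild's actual tooling, and the file names are hypothetical): comparing the decompressed tar streams, rather than the .tar.gz files themselves, already removes differences that come only from the gzip layer.

# compare archive *contents*, ignoring compression-level instability
diff <(zcat upstream.tar.gz | sha256sum) <(zcat rebuilt.tar.gz | sha256sum)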

The extensive post includes examples of how to access OSS Rebuild attestations using the Go-based command-line interface.


New extension of Python setuptools to support reproducible builds

Wim Jeantine-Glenn has written a PEP 517 Build backend in order to enable reproducible builds when building Python projects that use setuptools.

Called setuptools-reproducible, the project’s README file contains the following:

Setuptools can create reproducible wheel archives (.whl) by setting SOURCE_DATE_EPOCH at build time, but setting the env var is insufficient for creating reproducible sdists (.tar.gz). setuptools-reproducible [therefore] wraps the hooks build_sdist build_wheel with some modifications to make reproducible builds by default.
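
As a rough illustration of the SOURCE_DATE_EPOCH part (independent of setuptools-reproducible itself), a build can pin the timestamp to the last commit so that two builds of the same tree agree. This assumes a git checkout and the standard build package:

# pin timestamps to the last commit, then build
SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct) python3 -m build
sha256sum dist/*.whl   # should match across rebuilds of the same commit

As the README quoted above notes, this is typically enough for wheels but not for sdists, which is the gap the new backend aims to close.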


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 301, 302 and 303 to Debian:

  • Improvements:

    • Use Difference.from_operation in an attempt to pipeline the output of the extract-vmlinux script, potentially avoiding holding it all in memory. []
    • Memoize a number of calls to --version, saving a very large number of external subprocess calls.
  • Bug fixes:

    • Don’t check for PyPDF version 3 specifically, check for versions greater than 3. []
    • Ensure that Java class files are named .class on the filesystem before passing them to javap(1). []
    • Mask stderr from extract-vmlinux script. [][]
    • Avoid spurious differences in h5dump output caused by exposure of absolute internal extraction paths. (#1108690)
  • Misc:

    • Use our_check_output in the ODT comparator. []
    • Update copyright years. []

In addition:

Lastly, Chris Lamb added a tmpfs to try.diffoscope.org so that diffoscope has a non-trivial temporary area to unpack archives, etc. []

Elsewhere in our tooling, reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, reprotest version 0.7.30 was uploaded to Debian unstable by Holger Levsen, chiefly including a change by Rebecca N. Palmer to not call sudo with the -h flag in order to fix Debian bug #1108550. []


New library to patch system functions for reproducibility

Nicolas Graves has written and published libfate, a simple collection of tiny libraries to patch system functions deterministically using LD_PRELOAD. According to the project’s README:

libfate provides deterministic replacements for common non-deterministic system functions that can break reproducible builds. Instead of relying on complex build systems or apps or extensive patching, libfate uses the LD_PRELOAD trick to intercept system calls and return fixed, predictable values.
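
To illustrate the general LD_PRELOAD mechanism (this is a minimal sketch, not libfate's actual code): intercept a non-deterministic libc function and return a fixed value.

/* fixed_time.c
 * build:  gcc -shared -fPIC -o fixed_time.so fixed_time.c
 * use:    LD_PRELOAD=./fixed_time.so ./some-build-step
 */
#include <time.h>

time_t time(time_t *tloc)
{
    time_t fixed = 0;   /* pretend it is always 1970-01-01 00:00:00 UTC */
    if (tloc)
        *tloc = fixed;
    return fixed;
}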

Describing why he wrote it, Nicolas writes:

I originally used the OpenSUSE dettrace approach to make Emacs reproducible in Guix. But when Guix switched to GCC@14, dettrace stopped working as expected. dettrace is a complex piece of software, and my need was much less heavy: I don’t need to systematically patch all sources of nondeterminism, just the ones that make a process/binary unreproducible in a container/chroot.


Independently Reproducible Git Bundles

Simon Josefsson has published another interesting article this month. Titled Independently Reproducible Git Bundles, the blog post describes why you might want a reproducible bundle, and the pitfalls that can arise when trying to create one:

One desirable property is that someone else should be able to reproduce the same git bundle, and not only that a single individual is able to reproduce things on one machine. It surprised me to see that when I ran the same set of commands on a different machine (started from a fresh git clone), I got a different checksum. The different checksums occurred even when nothing had been committed on the server side between the two runs.
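
The kind of workflow under discussion looks roughly like this (a sketch using standard git commands and a placeholder URL; the post's point is precisely that the final checksum can differ between machines unless care is taken):

git clone --mirror https://example.org/project.git
cd project.git
git bundle create ../project.bundle --all
sha256sum ../project.bundle   # compare this digest across machines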


Website updates

Once again, there were a number of improvements made to our website this month including:


Distribution work

In Debian this month:

Debian contributors have made significant progress toward ensuring package builds produce byte-for-byte reproducible results. You can check the status for packages installed on your system using the new package debian-repro-status, or visit reproduce.debian.net for Debian’s overall statistics for trixie and later. You can contribute to these efforts by joining #debian-reproducible on IRC to discuss fixes, or verify the statistics by installing the new rebuilderd package and setting up your own instance.
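
For example, checking your own system looks roughly like this (a sketch, assuming the command shipped by the package shares the package's name):

apt install debian-repro-status
debian-repro-status   # summarizes the reproducibility status of your installed packages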


The IzzyOnDroid Android APK repository made further progress in July, crossing the 50% reproducibility threshold — congratulations. Furthermore, a new release of the Neo Store was released, which exposes the reproducible status directly next to the version of each app.


In GNU Guix, a series of patches intended to fix the reproducibility for the Mono programming language was merged, fixing reproducibility in Mono versions 1.9 [], 2.4 [] and 2.6 [].


Lastly, in addition to the news that SUSE Linux Enterprise now has an official goal of reproducibility (https://lists.reproducible-builds.org/pipermail/rb-general/2025-July/003846.html), Bernhard M. Wiedemann posted another monthly update for their work there.


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In June, however, a number of changes were made by Holger Levsen, including:

  • Switch the URL for the Tails package set. []
  • Make the dsa-check-packages output more useful. []
  • Set up the ppc64el architecture again, as it has returned — this time with a 2.7 GiB database instead of 72 GiB. []

In addition, Jochen Sprickerhof improved the reproducibility statistics generation:

  • Enable caching of statistics. [][][]
  • Add some common non-reproducible patterns. []
  • Change output to directory. []
  • Add a page sorted by diffoscope size. [][]
  • Switch to Python’s argparse module and separate output(). []

Holger also submitted a number of Debian bugs against rebuilderd and rebuilderd-worker:

  • Config files and scripts for a simple one machine setup. [][]
  • Create a rebuilderd user. []
  • Create rebuilderd-worker user with sbuild. []

Lastly, Mattia Rizzolo added a scheduled job to renew some SSL certificates [] and Vagrant Cascadian performed some node maintenance [][].


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

There were a number of other patches from openSUSE developers:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Cryptogram China Accuses Nvidia of Putting Backdoors into Their Chips

The government of China has accused Nvidia of inserting a backdoor into their H20 chips:

China’s cyber regulator on Thursday said it had held a meeting with Nvidia over what it called “serious security issues” with the company’s artificial intelligence chips. It said US AI experts had “revealed that Nvidia’s computing chips have location tracking and can remotely shut down the technology.”

Planet DebianDavid Bremner: Using git-annex for email and notmuch metadata

Introducing git-remote-notmuch

Based on an idea and ruby implementation by Felipe Contreras, I have been developing a git remote helper for notmuch. I will soon post an updated version of the patchset to the notmuch mailing list (I wanted to refer to this post in my email). In this blog post I'll outline my experiments with using that tool, along with git-annex to store (and sync) a moderate sized email store along with its notmuch metadata.

WARNING

The rest of this post describes some relatively complex operations using (at best) alpha level software (namely git-remote-notmuch). git-annex is good at not losing your files, but git-remote-notmuch can (and did several times during debugging) wipe out your notmuch database. If you have a backup (e.g. made with notmuch-dump), this is much less annoying, and in particular you can decide to walk away from this whole experiment and restore your database.

Why git-annex?

I currently have about 31GiB of email, spread across more than 830,000 files. I want to maintain the ability to search and read my email offline, so I need to maintain a copy on several workstations and at least one server (which is backed up explicitly). I am somewhat committed to maintaining synchronization of tags to git since that is how the notmuch bug tracker works. Committing the email files to git seems a bit wasteful: by design notmuch does not modify email files, and even with compression, the extra copy adds a fair amount of overhead (in my case, 17G of git objects, about 57% overhead). It is also notoriously difficult to completely delete files from a git repository. git-annex offers potential mitigation for these two issues, at the cost of a somewhat more complex mental model. The main idea is that instead of committing every version of a file to the git repository, git-annex tracks the filename and metadata, with the file content being stored in a key-value store outside git. Conceptually this is similar to git-lfs. For our current purposes, the important point is that instead of a second (compressed) copy of the file, we store one copy, along with a symlink and a couple of directory entries.
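
As a tiny illustration of that model (the message filename below is made up), annexing a file replaces it with a symlink into git-annex's object store, while git itself only records the link:

$ git annex add cur/1690000000.m123.example
$ ls -l cur/1690000000.m123.example    # now a symlink into .git/annex/objects/...
$ git annex whereis cur/1690000000.m123.example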

What to annex

For sufficiently small files, the overhead of a symlink and couple of directory entries is greater than the cost of a compressed second copy. When this happens depends on several variables, and will probably depend on the file content in a particular collection of email. I did a few trials of different settings for annex.largefiles to come to a threshold of largerthan=32k 1. For the curious, my experimental results are below. One potentially surprising aspect is that annexing even a small fraction of the (largest) files yields a big drop in storage overhead.

Threshold   Fraction annexed   Overhead
0           100%               30%
8k          29%                13%
16k         12%                9.4%
32k         7%                 8.9%
48k         6%                 8.9%
100k        3%                 9.1%
256k        2%                 11%
∞ (git)     0%                 57%

In the end I chose to err on the side of annexing more files (for the flexibility of deletion) rather than potentially faster operations with fewer annexed files at the same level of overhead.
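
If you want to reproduce a rough version of this experiment on your own mail store, a quick approximation (assuming GNU find, and using the 32k threshold as an example) looks like this:

$ find . -path ./.git -prune -o -type f -size +32k -print | wc -l    # files that would be annexed
$ find . -path ./.git -prune -o -type f -print | wc -l               # total number of files
$ du -sh .git/objects .git/annex/objects                             # rough view of the overhead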

Summarizing the configuration settings for git-annex (some of these are actually defaults, but not in my environment).

$ git config annex.largefiles largerthan=32k
$ git config annex.dotfiles true
$ git config annex.synccontent true

Delivering mail

To get new mail, I do something like

# compute a date based folder under $HOME/Maildir
$ dest=$(folder)
# deliver mail to ${dest} (somehow).
$ notmuch new
$ git -C $HOME/Maildir add ${dest}
$ git -C $HOME/Maildir diff-index --quiet HEAD ${dest} || git -C $HOME/Maildir commit -m 'mail delivery'

The call to diff-index is just an optimization for the case when nothing was delivered. The default configuration of git-annex will automagically annex any files larger than my threshold. At this point the git-annex repo knows nothing about tags.

There is some git configuration that can speed up the "git add" above, namely

$ git config core.untrackedCache true
$ git config core.fsmonitor true

See git-status(1) under "UNTRACKED FILES AND PERFORMANCE"

Defining notmuch as a git remote

Assuming git-remote-notmuch is somewhere in your path, you can define a remote to connect to the default notmuch database.

$ git remote add database notmuch::
$ git fetch database
$ git merge --allow-unrelated-histories database

The --allow-unrelated-histories option should be needed only the first time.

In my case, the many small files used to represent the tags (one per message) use a noticeable amount of disk space (about the same amount of space as the xapian database).

Once you start merging from the database to the git repo, you will likely have some conflicts, and most conflict resolution tools leave junk lying around. I added the following .gitignore file to the top level of the repo

*.orig
*~

This prevents our cavalier use of git add from adding these files to our git history (and prevents pushing random junk to the notmuch database).

To push the tags from git to notmuch, you can run

$ git push database master

You might need to run notmuch new first, so that the database knows about all of the messages (currently git-remote-notmuch can't index files, only update metadata).

git annex sync should work with the new remote, but pushing back will be very slow 2. I disable automatic pushing as follows

$ git config remote.database.annex-push false

Unsticking the database remote

If you are debugging git-remote-notmuch, or just unlucky, you may end up in a situation where git thinks the database is ahead of your git remote. You can delete the database remote (and associated stuff) and re-create it. Although I cannot promise this will never cause problems (because, computers), it will not modify your local copy of the tags in the git repo, nor modify your notmuch database.

$ git remote rm database
$ git update-ref -d notmuch/master
$ rm -r .git/notmuch

Fine tuning notmuch config

  • In order to avoid dealing with file renames, I have

      notmuch config set maildir.synchronize_flags false
    
  • I have added the following to new.ignore:

       .git;_notmuch_metadata;.gitignore
    

  1. I also had to set annex.dotfiles to true, as many of my maildirs follow the qmail style convention of starting with a .
  2. I'm not totally clear on why it so slow, but certainly git-annex tries to push several more branches, and these are ignored by git-remote-annex.

Krebs on SecurityWho Got Arrested in the Raid on the XSS Crime Forum?

On July 22, 2025, the European police agency Europol said a long-running investigation led by the French Police resulted in the arrest of a 38-year-old administrator of XSS, a Russian-language cybercrime forum with more than 50,000 members. The action has triggered an ongoing frenzy of speculation and panic among XSS denizens about the identity of the unnamed suspect, but the consensus is that he is a pivotal figure in the crime forum scene who goes by the hacker handle “Toha.” Here’s a deep dive on what’s knowable about Toha, and a short stab at who got nabbed.

An unnamed 38-year-old man was arrested in Kiev last month on suspicion of administering the cybercrime forum XSS. Image: ssu.gov.ua.

Europol did not name the accused, but published partially obscured photos of him from the raid on his residence in Kiev. The police agency said the suspect acted as a trusted third party, arbitrating disputes between criminals and guaranteeing the security of transactions on XSS. A statement from Ukraine’s SBU security service said XSS counted among its members many cybercriminals from various ransomware groups, including REvil, LockBit, Conti, and Qilin.

Since the Europol announcement, the XSS forum resurfaced at a new address on the deep web (reachable only via the anonymity network Tor). But from reviewing the recent posts, there appears to be little consensus among longtime members about the identity of the now-detained XSS administrator.

The most frequent comment regarding the arrest was a message of solidarity and support for Toha, the handle chosen by the longtime administrator of XSS and several other major Russian forums. Toha’s accounts on other forums have been silent since the raid.

Europol said the suspect has enjoyed a nearly 20-year career in cybercrime, which roughly lines up with Toha’s history. In 2005, Toha was a founding member of the Russian-speaking forum Hack-All. That is, until it got massively hacked a few months after its debut. In 2006, Toha rebranded the forum to exploit[.]in, which would go on to draw tens of thousands of members, including an eventual Who’s-Who of wanted cybercriminals.

Toha announced in 2018 that he was selling the Exploit forum, prompting rampant speculation on the forums that the buyer was secretly a Russian or Ukrainian government entity or front person. However, those suspicions were unsupported by evidence, and Toha vehemently denied the forum had been given over to authorities.

One of the oldest Russian-language cybercrime forums was DaMaGeLaB, which operated from 2004 to 2017, when its administrator “Ar3s” was arrested. In 2018, a partial backup of the DaMaGeLaB forum was reincarnated as xss[.]is, with Toha as its stated administrator.

CROSS-SITE GRIFTING

Clues about Toha’s early presence on the Internet — from ~2004 to 2010 — are available in the archives of Intel 471, a cyber intelligence firm that tracks forum activity. Intel 471 shows Toha used the same email address across multiple forum accounts, including at Exploit, Antichat, Carder[.]su and inattack[.]ru.

DomainTools.com finds Toha’s email address — toschka2003@yandex.ru — was used to register at least a dozen domain names — most of them from the mid- to late 2000s. Apart from exploit[.]in and a domain called ixyq[.]com, the other domains registered to that email address end in .ua, the top-level domain for Ukraine (e.g. deleted.org[.]ua, lj.com[.]ua, and blogspot.org[.]ua).

A 2008 snapshot of a domain registered to toschka2003@yandex.ru and to Anton Medvedovsky in Kiev. Note the message at the bottom left, “Protected by Exploit.in.” Image: archive.org.

Nearly all of the domains registered to toschka2003@yandex.ru contain the name Anton Medvedovskiy in the registration records, except for the aforementioned ixyq[.]com, which is registered to the name Yuriy Avdeev in Moscow.

This Avdeev surname came up in a lengthy conversation with Lockbitsupp, the leader of the rapacious and destructive ransomware affiliate group Lockbit. The conversation took place in February 2024, when Lockbitsupp asked for help identifying Toha’s real-life identity.

In early 2024, the leader of the Lockbit ransomware group — Lockbitsupp — asked for help investigating the identity of the XSS administrator Toha, which he claimed was a Russian man named Anton Avdeev.

Lockbitsupp didn’t share why he wanted Toha’s details, but he maintained that Toha’s real name was Anton Avdeev. I declined to help Lockbitsupp in whatever revenge he was planning on Toha, but his question made me curious to look deeper.

It appears Lockbitsupp’s query was based on a now-deleted Twitter post from 2022, when a user by the name “3xp0rt” asserted that Toha was a Russian man named Anton Viktorovich Avdeev, born October 27, 1983.

Searching the web for Toha’s email address toschka2003@yandex.ru reveals a 2010 sales thread on the forum bmwclub.ru where a user named Honeypo was selling a 2007 BMW X5. The ad listed the contact person as Anton Avdeev and gave the contact phone number 9588693.

A search on the phone number 9588693 in the breach tracking service Constella Intelligence finds plenty of official Russian government records with this number, date of birth and the name Anton Viktorovich Avdeev. For example, hacked Russian government records show this person has a Russian tax ID and SIN (Social Security number), and that they were flagged for traffic violations on several occasions by Moscow police: in 2004, 2006, 2009, and 2014.

Astute readers may have noticed by now that the ages of Mr. Avdeev (41) and the XSS admin arrested this month (38) are a bit off. This would seem to suggest that the person arrested is someone other than Mr. Avdeev, who did not respond to requests for comment.

A FLY ON THE WALL

For further insight on this question, KrebsOnSecurity sought comments from Sergeii Vovnenko, a former cybercriminal from Ukraine who now works at the security startup paranoidlab.com. I reached out to Vovnenko because for several years beginning around 2010 he was the owner and operator of thesecure[.]biz, an encrypted “Jabber” instant messaging server that Europol said was operated by the suspect arrested in Kiev. Thesecure[.]biz grew quite popular among many of the top Russian-speaking cybercriminals because it scrupulously kept few records of its users’ activity, and its administrator was always a trusted member of the community.

The reason I know this historic tidbit is that in 2013, Vovnenko — using the hacker nicknames “Fly,” and “Flycracker” — hatched a plan to have a gram of heroin purchased off of the Silk Road darknet market and shipped to our home in Northern Virginia. The scheme was to spoof a call from one of our neighbors to the local police, saying this guy Krebs down the street was a druggie who was having narcotics delivered to his home.

I happened to be lurking on Flycracker’s private cybercrime forum when his heroin-framing plan was carried out, and called the police myself before the smack eventually arrived in the U.S. Mail. Vovnenko was later arrested for unrelated cybercrime activities, extradited to the United States, convicted, and deported after a 16-month stay in the U.S. prison system [on several occasions, he has expressed heartfelt apologies for the incident, and we have since buried the hatchet].

Vovnenko said he purchased a device for cloning credit cards from Toha in 2009, and that Toha shipped the item from Russia. Vovnenko explained that he (Flycracker) was the owner and operator of thesecure[.]biz from 2010 until his arrest in 2014.

Vovnenko believes thesecure[.]biz was stolen while he was in jail, either by Toha and/or an XSS administrator who went by the nicknames N0klos and Sonic.

“When I was in jail, [the] admin of xss.is stole that domain, or probably N0klos bought XSS from Toha or vice versa,” Vovnenko said of the Jabber domain. “Nobody from [the forums] spoke with me after my jailtime, so I can only guess what really happened.”

N0klos was the owner and administrator of an early Russian-language cybercrime forum known as Darklife[.]ws. However, N0klos also appears to be a lifelong Russian resident, and in any case seems to have vanished from Russian cybercrime forums several years ago.

Asked whether he believes Toha was the XSS administrator who was arrested this month in Ukraine, Vovnenko maintained that Toha is Russian, and that “the French cops took the wrong guy.”

WHO IS TOHA?

So who did the Ukrainian police arrest in response to the investigation by the French authorities? It seems plausible that the BMW ad invoking Toha’s email address and the name and phone number of a Russian citizen was simply misdirection on Toha’s part — intended to confuse and throw off investigators. Perhaps this even explains the Avdeev surname surfacing in the registration records from one of Toha’s domains.

But sometimes the simplest answer is the correct one. “Toha” is a common Slavic nickname for someone with the first name “Anton,” and that matches the name in the registration records for more than a dozen domains tied to Toha’s toschka2003@yandex.ru email address: Anton Medvedovskiy.

Constella Intelligence finds there is an Anton Gannadievich Medvedovskiy living in Kiev who will be 38 years old in December. This individual owns the email address itsmail@i.ua, as well as an Airbnb account featuring a profile photo of a man with roughly the same hairline as the suspect in the blurred photos released by the Ukrainian police. Mr. Medvedovskiy did not respond to a request for comment.

My take on the takedown is that the Ukrainian authorities likely arrested Medvedovskiy. Toha shared on DaMaGeLab in 2005 that he had recently finished the 11th grade and was studying at a university — a time when Medvedovskiy would have been around 18 years old. On Dec. 11, 2006, fellow Exploit members wished Toha a happy birthday. Records exposed in a 2022 hack at the Ukrainian public services portal diia.gov.ua show that Mr. Medvedovskiy’s birthday is Dec. 11, 1987.

The law enforcement action and resulting confusion about the identity of the detained has thrown the Russian cybercrime forum scene into disarray in recent weeks, with lengthy and heated arguments about XSS’s future spooling out across the forums.

XSS relaunched on a new Tor address shortly after the authorities plastered their seizure notice on the forum’s  homepage, but all of the trusted moderators from the old forum were dismissed without explanation. Existing members saw their forum account balances drop to zero, and were asked to plunk down a deposit to register at the new forum. The new XSS “admin” said they were in contact with the previous owners and that the changes were to help rebuild security and trust within the community.

However, the new admin’s assurances appear to have done little to assuage the worst fears of the forum’s erstwhile members, most of whom seem to be keeping their distance from the relaunched site for now.

Indeed, if there is one common understanding amid all of these discussions about the seizure of XSS, it is that Ukrainian and French authorities now have several years worth of private messages between XSS forum users, as well as contact rosters and other user data linked to the seized Jabber server.

“The myth of the ‘trusted person’ is shattered,” the user “GordonBellford” cautioned on Aug. 3 in an Exploit forum thread about the XSS admin arrest. “The forum is run by strangers. They got everything. Two years of Jabber server logs. Full backup and forum database.”

GordonBellford continued:

And the scariest thing is: this data array is not just an archive. It is material for analysis that has ALREADY BEEN DONE. With the help of modern tools, they see everything:

Graphs of your contacts and activity.
Relationships between nicknames, emails, password hashes and Jabber ID.
Timestamps, IP addresses and digital fingerprints.
Your unique writing style, phraseology, punctuation, consistency of grammatical errors, and even typical typos that will link your accounts on different platforms.

They are not looking for a needle in a haystack. They simply sifted the haystack through the AI sieve and got ready-made dossiers.

Planet DebianColin Watson: Free software activity in July 2025

About 90% of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay or GitHub Sponsors.

DebConf

I attended DebConf for the first time in 11 years (my last one was DebConf 14 in Portland). It was great! For once I had a conference where I had a fairly light load of things I absolutely had to do, so I was able to spend time catching up with old friends, making some new friends, and doing some volunteering - a bit of Front Desk, and quite a lot of video team work where I got to play with sound desks and such. Apparently one of the BoFs (“birds of a feather”, i.e. relatively open discussion sessions) where I was talkmeister managed to break the automatic video cutting system by starting and ending precisely on time, to the second, which I’m told has never happened before. I’ll take that.

I gave a talk about Debusine, along with helping Enrico run a Debusine BoF. We still need to process some of the feedback from this, but are generally pretty thrilled about the reception. My personal highlight was getting a shout-out in a talk from CERN (in the slide starting at 32:55).

Other highlights for me included a Python team BoF, Ian’s tag2upload talk and some very useful follow-up discussions, a session on archive-wide testing, a somewhat brain-melting whiteboard session about the “multiarch interpreter problem”, several useful discussions about salsa.debian.org, Matthew’s talk on how Wikimedia automates their Debian package builds, and many others. I hope I can start attending regularly again!

OpenSSH

Towards the end of a release cycle, people tend to do more upgrade testing, and this sometimes results in interesting problems. Manfred Stock reported “No new SSH connections possible during large part of upgrade to Debian Trixie”, and after a little testing in a container I confirmed that this was a reproducible problem that would have affected many people upgrading from Debian 12 (bookworm), with potentially severe consequences for people upgrading remote systems. In fact, there were two independent problems that each led to much the same symptom:

  • OpenSSH 9.8 split the monolithic sshd listener process into two pieces: a minimal network listener (still called sshd), and an sshd-session process dealing with each individual session. (OpenSSH 10.0 further split sshd-session, adding an sshd-auth process that deals with the user authentication phase of the protocol.) This hardens the OpenSSH server by using different address spaces for privileged and unprivileged code.

    Before this change, when sshd received an incoming connection, it forked and re-executed itself with some special parameters to deal with it. After this change, it forks and executes sshd-session instead, and sshd no longer accepts the parameters it used to accept for this.

    Debian package upgrades happen in two phases: first we unpack the new files onto disk, and then we run some package-specific configuration steps which usually include things like restarting services. (I’m simplifying, but this is good enough for this post.) Normally this is fine, and in fact desirable: the old service keeps on working, and this approach often allows breaking what would otherwise be difficult cycles by ensuring that the system is in a more coherent state before trying to restart services. However, in this case, unpacking the new files onto disk immediately means that new SSH connections no longer work: the old sshd receives the connection and tries to hand it off to a freshly-executed copy of the new sshd binary on disk, which no longer supports this.

    If you’re just upgrading OpenSSH on its own or with a small number of other packages, this isn’t much of a problem as the listener will be restarted quite soon; but if you’re upgrading from bookworm to trixie, there may be a long gap when you can’t SSH to the system any more, and if something fails in the middle of the upgrade then you could be in trouble.

    So, what to do? I considered keeping a copy of the old sshd around temporarily and patching the new sshd to re-execute it if it’s being run to handle an incoming connection, but that turned out to fail in my first test: dependencies are normally only checked when configuring a package, so it’s possible to unpack openssh-server before unpacking a newer libc6 that it depends on, at which point you can’t execute the new sshd at all. (That also means that the approach of restarting the service at unpack time instead of configure time is a non-starter.) We needed a different idea.

    dpkg, the core Debian package manager, has a specialized facility called “diversions”: you can tell it that when it’s unpacking a particular file it should put it somewhere else instead. This is normally used by administrators when they want to install a locally-modified version of a particular file at their own risk, or by packages that knowingly override a file normally provided by some other package. However, in this case it turns out to be useful for openssh-server to temporarily divert one of its own files! When upgrading from before 9.8, it now diverts /usr/sbin/sshd to /usr/sbin/sshd.session-split before the new version is unpacked, then removes the diversion and moves the new file into place once it’s ready to restart the service; this reduces the period when incoming connections fail to a minimum. (We actually have to pretend that the diversion is being performed on behalf of a slightly different package since we’re using dpkg-divert in a strange way here, but it all works.) A minimal sketch of what this looks like is shown just after this list.

  • Most OpenSSH processes, including sshd, check for a compatible version of the OpenSSL library when they start up. This check used to be very picky, among other things requiring both the major and minor number to match. OpenSSL 3 has a better versioning policy, and so OpenSSH 9.4p1 relaxed this check.

    Unfortunately, bookworm shipped with OpenSSH 9.2p1, which means that as soon as you unpack the new libssl3 during an upgrade (actually libssl3t64 due to the 64-bit time_t transition), sshd stops working. This couldn’t be fixed by a change in trixie; we needed to change bookworm in advance of the upgrade so that it would tolerate newer versions of OpenSSL. And time was tight if we wanted to maximize the chance that people would apply that stable update before upgrading to trixie; there isn’t going to be another point release of Debian 12 before the release of Debian 13.

    Fortunately, there’s a stable-updates mechanism for exactly this sort of thing, and the stable release managers kindly accepted my proposal to fix this there.
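
To make the diversion dance described above a little more concrete, here is a minimal sketch of the idea (this is not the exact code shipped in the Debian openssh-server maintainer scripts; the diverting package name and the version check are illustrative):

# preinst (fragment), before the new version is unpacked: register a diversion so
# that the new sshd lands at /usr/sbin/sshd.session-split instead of replacing the
# binary that the still-running old listener re-executes
if [ "$1" = upgrade ] && dpkg --compare-versions "$2" lt 1:9.8p1-1; then
    dpkg-divert --package openssh-server-divert --add --no-rename \
        --divert /usr/sbin/sshd.session-split /usr/sbin/sshd
fi

# postinst (fragment), just before restarting the service: drop the diversion and
# move the freshly unpacked binary into its final place
if dpkg-divert --listpackage /usr/sbin/sshd | grep -qx openssh-server-divert; then
    rm -f /usr/sbin/sshd
    dpkg-divert --package openssh-server-divert --remove --no-rename /usr/sbin/sshd
    mv /usr/sbin/sshd.session-split /usr/sbin/sshd
fi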

The net result is that if you apply updates to bookworm (including stable-updates / bookworm-updates, which is enabled by default) before starting the upgrade to trixie, everything should be fine. Many thanks to Manfred for reporting this with just enough time to spare that we were able to fix it before Debian 13 is released in a few days!

debmirror

I did my twice-yearly refresh of debmirror’s mirror_size documentation, and applied a patch from Christoph Goehre to improve mirroring of installer files.

madison-lite

I proposed renaming this project along with the rmadison tool in devscripts, although I’m not yet sure what a good replacement name would be.

Python team

I upgraded python-expandvars, python-typing-extensions (in experimental), and webtest to new upstream versions.

I backported fixes for some security vulnerabilities to unstable:

I fixed or helped to fix a number of release-critical bugs:

I fixed some other bugs, mostly Severity: important:

I reinstated python3-mastodon’s build-dependency on and recommendation of python3-blurhash, now that the latter has been fixed to use the correct upstream source.

Worse Than FailureCodeSOD: A Dropped Down DataSet

While I frequently have complaints about over-reliance on Object Relational Mapping tools, they do offer key benefits. For example, mapping each relation in the database to a type in your programming language at least guarantees a bit of type safety in your code. Or, you could be like Nick L's predecessor, and write VB code like this.

For i As Integer = 0 To SQLDataset.Tables(0).Rows.Count - 1
     Try 'Handles DBNull
         Select Case SQLDataset.Tables(0).Rows(i).Item(0)
             Case "Bently" 'Probes
                 Probes_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
             Case "Keyphasor"
                 Keyphasor_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
             Case "Transmitter"
                 Transmitter_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
             Case "Tachometer"
                 Tachometer_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim.ToUpper.ToString.Trim)
             Case "Dial Therm"
                 DialThermometer_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
             Case "DPS"
                 DPS_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
             Case "Pump Bracket"
                 PumpBracket_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
             Case "Accelerometer"
                 Accelerometer_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
             Case "Velometer"
                 Velometer_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim)
         End Select
     Catch
         'MessageBox.Show(text:="Error during SetModelNums().", _
         '                caption:="Error", _
         '                buttons:=MessageBoxButtons.OK, _
         '                icon:=MessageBoxIcon.Error)
     End Try
Next

So, for starters, they're using the ADO .Net DataSet object. This is specifically meant to be a disconnected, in-memory model of the database. The idea is that you might run a set of queries, store the results in a DataSet, and interact with the data entirely in memory after that point. The resulting DataSet will model all the tables and constraints you've pulled in (or allow you to define your own in memory).

One of the things that the DataSet tracks is the names of tables. So, the fact that they go and access .Tables(0) is a nuisance- they could have used the name of the table. And while that might have been awfully verbose, there's nothing stopping them from doing DataTable products = SQLDataSet.Tables("Products").

None of this is what caught Nick's attention, though. You see, the DataTable in the DataSet will do its best to map database fields to .NET types. So it's the chain of calls at the end of most every field that caught Nick's eye:

SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim

ToUpper works because the field in the database is a string field. Also, it returns a string, so there's no need to ToString it before trimming. Of course, it's the Tachometer entry that brings this to its natural absurdity:

Tachometer_Combobox.Items.Add(SQLDataset.Tables(0).Rows(i).Item(1).ToUpper.ToString.Trim.ToUpper.ToString.Trim)

All of this is wrapped up in an exception handler, not because of the risk of an error connecting to the database (the DataSet is disconnected after all), but because of the risk of null values, as the comment helpfully states.

We can see that once, this exception handler displayed a message box, but that has since been commented out, presumably because there are a lot of nulls and the number of message boxes the users had to click through were cumbersome. Now, the exception handler doesn't actually check what kind of exception we get, and just assumes the only thing that could happen was a null value. But that's not true- someone changed one of the tables to add a column to the front, which meant Item(1) was no longer grabbing the field the code expects, breaking the population of the Pump Bracket combo box. There was no indication that this had happened beyond users asking, "Why are there no pump brackets anymore?"


365 TomorrowsThe Collector

Author: Mark Renney Thomas collects the needles. It is an unpopular job but is open to all. No qualifications are required or prior experience, not even a recommendation. One has simply to turn up and register at an Agency office, take to the streets and, using the bags provided, start Collecting. The needles are everywhere, […]


Planet DebianMatthew Palmer: I'm trying an open source funding experiment

As I’m currently somewhat underemployed, and could do with some extra income, I’m starting an open source crowd-funding experiment. My hypothesis is that the open source community, and perhaps a community-minded company or two, really wants more open source code in the world, and is willing to put a few dollars my way to make that happen.

To begin with, I’m asking for contributions to implement a bunch of feature requests on action-validator, a Rust CLI tool I wrote to validate the syntax of GitHub actions and workflows. The premise is quite simple: for every AU$150 (about US$100) I receive in donations, I’ll implement one of the nominated feature requests. If people want a particular feature implemented, they can nominate a feature in their donation message, otherwise when “general” donations get to AU$150, I’ll just pick a feature that looks interesting. More details are on my code fund page.

In the same spirit of simplicity, donations can be made through my Ko-fi page, and I’ll keep track of the various totals in a hand-written HTML table.

So, in short, if you want more open source code to exist, now would be a good time to visit my Ko-fi page and chip in a few dollars. If you’re curious to know more, my code fund page has a list of Foreseeably Anticipated Questions that might address your curiosity. Otherwise, ask your questions in the comments or email me.

,

Planet DebianRavi Dwivedi: Tricked by a website while applying for Vietnam visa

In December 2024, Badri and I went to Vietnam. In this post, I’ll document our experiences with the visa process of Vietnam. Vietnam requires an e-visa to enter the country. The official online portal for the e-visa application is evisa.xuatnhapcanh.gov.vn/. However, I submitted my visa application on the website vietnamvisa.govt.vn. It was only after submitting my application and making the payment that I realized that it’s not the official e-visa website. The realization came from the tagline mentioned in the top left corner of the website - the best way to obtain a Vietnam visa.

I was a bit upset that I got tricked by that website. I should have checked the top level domains of Vietnam’s government websites. Anyways, it is pretty easy to confuse govt.vn with gov.vn. I also paid double the amount of the official visa fee. However, I wasn’t asked to provide a flight reservation or hotel bookings - documents which are usually asked for most of the visas. But they did ask me for a photo. I was not even sure whether the website was legit or not.

Badri learnt from my experience and applied through the official Vietnam government website. During the process, he had to provide a hotel booking as well as enter the hotel address into the submission form. Additionally, the official website asked to provide the exact points of entry to and exit from the country, which the non-official website did not ask for. On the other hand, he had to pay only 25 USD versus my 54 USD.

It turned out that the website I registered on was also legit, as they informed me a week later that my visa had been approved, along with a copy of my visa. Further, I was not barred from entering, nor was I found to be holding a fake visa. It appears that the main “scam” is not that the visa is fake, but rather that you are charged more than if you apply through the official website.

I would still recommend that you (the readers) submit your visa application only through the official website and not through any of the other such websites.

Our visa was valid for a month (my visa was valid from the 4th of December 2024 to the 4th of January 2025). We also had a nice time in Vietnam. Stay tuned for my Vietnam travel posts!

Credits to Badri for proofreading and writing his part of the experience.

Planet DebianThomas Lange: FAIme service new features: Linux Mint support and data storage for USB

Build your own customized Linux Mint ISO

Using the FAIme service [1] you can now build your own customized installation ISO for the Xfce edition of Linux Mint 22.1 'Xia'.

You can select the language, add a list of additional packages, and set the username and passwords. In the advanced settings you may add your ssh public key, set some grub options, and add a postinst script to be executed.

Add writable data partition for USB sticks

For all variants of ISOs (all live and all install ISOs) you can add a data partition to the ISO by just clicking a checkbox. This writable partition can be used when booting from USB stick. FAI will use it to search for a config space and to store the logs when this partition is detected.

The logs will be stored in the subdirectory logs on this partition. For using a different config space than the one on the ISO (which is read only), create a subdirectory config and copy a FAI config space into that directory. Then set FAI_CONFIG_SRC=detect:// (which is the default) and FAI will search for a config space on the data partition and use it. More info about this can be found in [2].
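
For example, assuming the data partition of the written USB stick shows up as /dev/sdb3 (the device name will differ on your system) and your FAI config space lives under /srv/fai/config, populating it could look like this:

 mount /dev/sdb3 /mnt
 mkdir -p /mnt/config
 cp -a /srv/fai/config/. /mnt/config/
 umount /mnt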

You can also store some local packages in your config space, which will be installed automatically, without the need of recreating the ISO.

Worse Than FailureCodeSOD: An Annual Report

Michael has the "fun" task of converting old, mainframe-driven reports into something more modern. This means reading through reams of Intelligent Query code.

Like most of these projects, no one has a precise functional definition of what it's supposed to do. The goal is to replace the system with one that behaves exactly the same, but is more "modern". This means their test cases are "run the two systems in parallel and compare the outputs; if they match, the upgrade is good."

After converting one report, the results did not match. Michael dug in, tracing through the code. The name of the report contained the word "Annual". One of the key variables which drove the original report was named TODAYS-365 (yes, you can put dashes in variables in IQ). Michael verified that the upgraded report was pulling exactly one year's worth of data. Tracing through the original report, Michael found this:

#
DIVIDE ISBLCS BY ISB-COST-UOM GIVING ISB-COST-EACH.
MULTIPLY ISB-STK-QOH TIMES ISB-COST-EACH GIVING ISB-ON-HAND-COST.
#
SUBTRACT TODAYS-DATE MINUS 426 GIVING TODAYS-365.
#
SEARCH FOR ITMMAN =  'USA'
       AND ITMNMB <> '112-*'

This snippet comes from a report which contains many hundreds of lines of code. So it's very easy to understand how someone could miss the important part of the code. Specifically, it's this line: SUBTRACT TODAYS-DATE MINUS 426 GIVING TODAYS-365..

Subtract 426 from today's date, and store the result in a variable called TODAYS-365. This report isn't for the past year, but for the past year and about two months.

It's impossible to know exactly why, but at a guess, originally the report needed to grab a year. Then, at some point, the requirement changed, probably based on some nonsense around fiscal years or something similar. The least invasive way to make that change was to just change the calculation, leaving the variable name (and the report name) incorrect and misleading. And there it sat, working perfectly fine, until poor Michael came along, trying to understand the code.

The fix was easy, but the repeated pattern of oddly named, unclear variables was not. Remember, the hard part about working on old mainframes isn't learning COBOL or IQ or JCL or whatever antique languages they use; I'd argue those languages are in many cases easier to learn (if harder to use) than modern languages. The hard part is the generations of legacy cruft that's accumulated in them. It's grandma's attic, and granny was a pack rat.


365 TomorrowsThe Club

Author: Majoki The chair creaked noisily when Sandoval sat at the table with five glasses set out. Even though he’d lost a few pounds since they last met, the old wood complained. Soon the others joined him: Avrilla, Hurst, Marpreesh, Suh. Five left. Only five. No other living humans in the history of civilization were […]


Planet DebianMatthew Garrett: Cordoomceps - replacing an Amiga's brain with Doom

There's a lovely device called a pistorm, an adapter board that glues a Raspberry Pi GPIO bus to a Motorola 68000 bus. The intended use case is that you plug it into a 68000 device and then run an emulator that reads instructions from hardware (ROM or RAM) and emulates them. You're still limited by the ~7MHz bus that the hardware is running at, but you can run the instructions as fast as you want.

These days you're supposed to run a custom built OS on the Pi that just does 68000 emulation, but initially it ran Linux on the Pi and a userland 68000 emulator process. And, well, that got me thinking. The emulator takes 68000 instructions, emulates them, and then talks to the hardware to implement the effects of those instructions. What if we, well, just don't? What if we just run all of our code in Linux on an ARM core and then talk to the Amiga hardware?

We're going to ignore x86 here, because it's weird - but most hardware that wants software to be able to communicate with it maps itself into the same address space that RAM is in. You can write to a byte of RAM, or you can write to a piece of hardware that's effectively pretending to be RAM[1]. The Amiga wasn't unusual in this respect in the 80s, and to talk to the graphics hardware you speak to a special address range that gets sent to that hardware instead of to RAM. The CPU knows nothing about this. It just indicates it wants to write to an address, and then sends the data.

So, if we are the CPU, we can just indicate that we want to write to an address, and provide the data. And those addresses can correspond to the hardware. So, we can write to the RAM that belongs to the Amiga, and we can write to the hardware that isn't RAM but pretends to be. And that means we can run whatever we want on the Pi and then access Amiga hardware.

And, obviously, the thing we want to run is Doom, because that's what everyone runs in fucked up hardware situations.

Doom was Amiga kryptonite. Its entire graphical model was based on memory directly representing the contents of your display, and being able to modify that by just moving pixels around. This worked because at the time VGA displays supported having a memory layout where each pixel on your screen was represented by a byte in memory containing an 8 bit value that corresponded to a lookup table containing the RGB value for that pixel.

The Amiga was, well, not good at this. Back in the 80s, when the Amiga hardware was developed, memory was expensive. Dedicating that much RAM to the video hardware was unthinkable - the Amiga 1000 initially shipped with only 256K of RAM, and you could fill all of that with a sufficiently colourful picture. So instead of having the idea of each pixel being associated with a specific area of memory, the Amiga used bitmaps. A bitmap is an area of memory that represents the screen, but only represents one bit of the colour depth. If you have a black and white display, you only need one bitmap. If you want to display four colours, you need two. More colours, more bitmaps. And each bitmap is stored in an independent area of RAM. You never use more memory than you need to display the number of colours you want to.

But that means that each bitplane contains packed information - every byte of data in a bitplane contains the bit value for 8 different pixels, because each bitplane contains one bit of information per pixel. To update one pixel on screen, you need to read from every bitmap, update one bit, and write it back, and that's a lot of additional memory accesses. Doom, but on the Amiga, was slow not just because the CPU was slow, but because there was a lot of manipulation of data to turn it into the format the Amiga wanted and then push that over a fairly slow memory bus to have it displayed.

The CDTV was an aesthetically pleasing piece of hardware that absolutely sucked. It was an Amiga 500 in a hi-fi box with a caddy-loading CD drive, and it ran software that was just awful. There's no path to remediation here. No compelling apps were ever released. It's a terrible device. I love it. I bought one in 1996 because a local computer store had one and I pointed out that the company selling it had gone bankrupt some years earlier and literally nobody in my farming town was ever going to have any interest in buying a CD player that made a whirring noise when you turned it on because it had a fan and eventually they just sold it to me for not much money, and ever since then I wanted to have a CD player that ran Linux and well spoiler 30 years later I'm nearly there. That CDTV is going to be our test subject. We're going to try to get Doom running on it without executing any 68000 instructions.

We're facing two main problems here. The first is that all Amigas have a firmware ROM called Kickstart that runs at powerup. No matter how little you care about using any OS functionality, you can't start running your code until Kickstart has run. This means even documentation describing bare metal Amiga programming assumes that the hardware is already in the state that Kickstart left it in. This will become important later. The second is that we're going to need to actually write the code to use the Amiga hardware.

First, let's talk about Amiga graphics. We've already covered bitmaps, but for anyone used to modern hardware that's not the weirdest thing about what we're dealing with here. The CDTV's chipset supports a maximum of 64 colours in a mode called "Extra Half-Brite", or EHB, where you have 32 colours arbitrarily chosen from a palette and then 32 more colours that are identical but with half the intensity. For 64 colours we need 6 bitplanes, each of which can be located arbitrarily in the region of RAM accessible to the chipset ("chip RAM", distinguished from "fast ram" that's only accessible to the CPU). We tell the chipset where our bitplanes are and it displays them. Or, well, it does for a frame - after that the registers that pointed at our bitplanes no longer do, because when the hardware was DMAing through the bitplanes to display them it was incrementing those registers to point at the next address to DMA from. Which means that every frame we need to set those registers back.

Making sure you have code that's called every frame just to make your graphics work sounds intensely irritating, so Commodore gave us a way to avoid doing that. The chipset includes a coprocessor called "copper". Copper doesn't have a large set of features - in fact, it only has three. The first is that it can program chipset registers. The second is that it can wait for a specific point in screen scanout. The third (which we don't care about here) is that it can optionally skip an instruction if a certain point in screen scanout has already been reached. We can write a program (a "copper list") for the copper that tells it to program the chipset registers with the locations of our bitplanes and then wait until the end of the frame, at which point it will repeat the process. Now our bitplane pointers are always valid at the start of a frame.

Ok! We know how to display stuff. Now we just need to deal with not having 256 colours, and the whole "Doom expects pixels" thing. For the first of these, I stole code from ADoom, the only Amiga doom port I could easily find source for. This looks at the 256 colour palette loaded by Doom and calculates the closest approximation it can within the constraints of EHB. ADoom also includes a bunch of CPU-specific assembly optimisation for converting the "chunky" Doom graphic buffer into the "planar" Amiga bitplanes, none of which I used because (a) it's all for 68000 series CPUs and we're running on ARM, and (b) I have a quad core CPU running at 1.4GHz and I'm going to be pushing all the graphics over a 7.14MHz bus, the graphics mode conversion is not going to be the bottleneck here. Instead I just wrote a series of nested for loops that iterate through each pixel and update each bitplane and called it a day. The set of bitplanes I'm operating on here is allocated on the Linux side so I can read and write to them without being restricted by the speed of the Amiga bus (remember, each byte in each bitplane is going to be updated 8 times per frame, because it holds bits associated with 8 pixels), and then copied over to the Amiga's RAM once the frame is complete.

And, kind of astonishingly, this works! Once I'd figured out where I was going wrong with RGB ordering and which order the bitplanes go in, I had a recognisable copy of Doom running. Unfortunately there were weird graphical glitches - sometimes blocks would be entirely the wrong colour. It took me a while to figure out what was going on and then I felt stupid. Recording the screen and watching in slow motion revealed that the glitches often showed parts of two frames displaying at once. The Amiga hardware is taking responsibility for scanning out the frames, and the code on the Linux side isn't synchronised with it at all. That means I could update the bitplanes while the Amiga was scanning them out, resulting in a mashup of planes from two different Doom frames being used as one Amiga frame. One approach to avoid this would be to tie the Doom event loop to the Amiga, blocking my writes until the end of scanout. The other is to use double-buffering - have two sets of bitplanes, one being displayed and the other being written to. This consumes more RAM but since I'm not using the Amiga RAM for anything else that's not a problem. With this approach I have two copper lists, one for each set of bitplanes, and switch between them on each frame. This improved things a lot but not entirely, and there's still glitches when the palette is being updated (because there's only one set of colour registers), something Doom does rather a lot, so I'm going to need to implement proper synchronisation.

Except. This was only working if I ran a 68K emulator first in order to run Kickstart. If I tried accessing the hardware without doing that, things were in a weird state. I could update the colour registers, but accessing RAM didn't work - I could read stuff out, but anything I wrote vanished. Some more digging cleared that up. When you turn on a CPU it needs to start executing code from somewhere. On modern x86 systems it starts from a hardcoded address of 0xFFFFFFF0, which was traditionally a long way from any RAM. The 68000 family instead reads its start address from address 0x00000004, which overlaps with where the Amiga chip RAM is. We can't write anything to RAM until we're executing code, and we can't execute code until we tell the CPU where the code is, which seems like a problem. This is solved on the Amiga by powering up in a state where the Kickstart ROM is "overlayed" onto address 0. The CPU reads the start address from the ROM, which causes it to jump into the ROM and start executing code there. Early on, the code tells the hardware to stop overlaying the ROM onto the low addresses, and now the RAM is available. This is poorly documented because it's not something you need to care about if you execute Kickstart, which every actual Amiga does, and I'm only in this position because I've made poor life choices, but OK, that explained things. To turn off the overlay you write to a register in one of the Complex Interface Adaptor (CIA) chips, and things start working like you'd expect.

Except, they don't. Writing to that register did nothing for me. I assumed that there was some other register I needed to write to first, and went to the extent of tracing every register access that occurred when running the emulator and replaying those in my code. Nope, still broken. What I finally discovered is that you need to pulse the reset line on the board before some of the hardware starts working - powering it up doesn't put you in a well defined state, but resetting it does.

So, I now have a slightly graphically glitchy copy of Doom running without any sound, displaying on an Amiga whose brain has been replaced with a parasitic Linux. Further updates will likely make things even worse. Code is, of course, available.

[1] This is why we had trouble with late-era 32 bit systems and 4GB of RAM - a bunch of your hardware wanted to be in the same address space, so you couldn't put RAM there, and you ended up with less than 4GB of usable RAM


Planet DebianMichael Ablassmeier: PVE 9.0 - Snapshots for LVM

The new Proxmox release advertises a new feature for easier snapshot handling of virtual machines whose disks are stored on LVM volumes. I wondered: what's the deal?

To be able to use the new feature, you need to enable a special flag for the LVM volume group. This example shows the general workflow for a fresh setup.

1) Create the volume group with the snapshot-as-volume-chain feature turned on:

 pvesm add lvm lvmthick --content images --vgname lvm --snapshot-as-volume-chain 1

2) From this point on, you can create virtual machines right away, BUT those virtual machines' disks must use the QCOW image format for their disk volumes. If you use the RAW format, you still won't be able to create snapshots.

 VMID=401
 qm create $VMID --name vm-lvmthick
 qm set $VMID -scsi1 lvmthick:2,format=qcow2

So, why would it make sense to format the LVM volume as QCOW?

Snapshots on thick-provisioned LVM devices are, as everybody knows, a very I/O intensive affair. Alongside each snapshot, a special -cow device is created that tracks the changed block regions and the original block data for each change to the active volume. This wastes quite some space within your volume group for each snapshot.

Formatting the LVM volume as a QCOW image makes it possible to use the QCOW backing-image option for these devices; this is how PVE 9 handles these kinds of snapshots.

Creating a snapshot looks like this:

 qm snapshot $VMID id
 snapshotting 'drive-scsi1' (lvmthick3:vm-401-disk-0.qcow2)
 Renamed "vm-401-disk-0.qcow2" to "snap_vm-401-disk-0_id.qcow2" in volume group "lvm"
 Rounding up size to full physical extent 1.00 GiB
 Logical volume "vm-401-disk-0.qcow2" created.
 Formatting '/dev/lvm/vm-401-disk-0.qcow2', fmt=qcow2 cluster_size=131072 extended_l2=on preallocation=metadata compression_type=zlib size=1073741824 backing_file=snap_vm-401-disk-0_id.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16

So it renames the current active disk and creates another QCOW-formatted LVM volume, pointing it at the snapshot image via the backing_file option.
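
If you want to inspect the resulting chain yourself, qemu-img can walk the backing files (the volume path is the one from the output above; add -U if the VM is currently running, since the image is then locked):

 qemu-img info --backing-chain /dev/lvm/vm-401-disk-0.qcow2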

Neat.

,

Planet DebianScarlett Gately Moore: Fostering Constructive Communication in Open Source Communities

I write this in the wake of a personal attack against my work and a project that is near and dear to me. Instead of spreading vile rumors and hearsay, talk to me. I am not known to be ‘hard to talk to’ and am wide open for productive communication. I am disheartened and would like to share some thoughts of the importance of communication. Thanks for listening.

Open source development thrives on collaboration, shared knowledge, and mutual respect. Yet sometimes, the very passion that drives us to contribute can lead to misunderstandings and conflicts that harm both individuals and the projects we care about. As contributors, maintainers, and community members, we have a responsibility to foster environments where constructive dialogue flourishes.

The Foundation of Healthy Open Source Communities

At its core, open source is about people coming together to build something greater than what any individual could create alone. This collaborative spirit requires more than just technical skills—it demands emotional intelligence, empathy, and a commitment to treating one another with dignity and respect.

When disagreements arise—and they inevitably will—the manner in which we handle them defines the character of our community. Technical debates should focus on the merits of ideas, implementations, and approaches, not on personal attacks or character assassinations conducted behind closed doors.

The Importance of Direct Communication

One of the most damaging patterns in any community is when criticism travels through indirect channels while bypassing the person who could actually address the concerns. When we have legitimate technical disagreements or concerns about someone’s work, the constructive path forward is always direct, respectful communication.

Consider these approaches:

  • Address concerns directly: If you have technical objections to someone’s work, engage with them directly through appropriate channels
  • Focus on specifics: Critique implementations, documentation, or processes—not the person behind them
  • Assume good intentions: Most contributors are doing their best with the time and resources available to them
  • Offer solutions: Instead of just pointing out problems, suggest constructive alternatives

Supporting Contributors Through Challenges

Open source contributors often juggle their community involvement with work, family, and personal challenges. Many are volunteers giving their time freely, while others may be going through difficult periods in their lives—job searching, dealing with health issues, or facing other personal struggles.

During these times, our response as a community matters enormously. A word of encouragement can sustain someone through tough periods, while harsh criticism delivered thoughtlessly can drive away valuable contributors permanently.

Building Resilient Communities

Strong open source communities are built on several key principles:

Transparency in Communication: Discussions about technical decisions should happen in public forums where all stakeholders can participate and learn from the discourse.

Constructive Feedback Culture: Criticism should be specific, actionable, and delivered with the intent to improve rather than to tear down.

Recognition of Contribution: Every contribution, whether it’s code, documentation, bug reports, or community support, has value and deserves acknowledgment.

Conflict Resolution Processes: Clear, fair procedures for handling disputes help prevent minor disagreements from escalating into community-damaging conflicts.

The Long View

Many successful open source projects span decades, with contributors coming and going as their life circumstances change. The relationships we build and the culture we create today will determine whether these projects continue to attract and retain the diverse talent they need to thrive.

When we invest in treating each other well—even during disagreements—we’re investing in the long-term health of our projects and communities. We’re creating spaces where innovation can flourish because people feel safe to experiment, learn from mistakes, and grow together.

Moving Forward Constructively

If you find yourself in conflict with another community member, consider these steps:

  1. Take a breath: Strong emotions rarely lead to productive outcomes
  2. Seek to understand: What are the underlying concerns or motivations?
  3. Communicate directly: Reach out privately first, then publicly if necessary
  4. Focus on solutions: How can the situation be improved for everyone involved?
  5. Know when to step back: Sometimes the healthiest choice is to disengage from unproductive conflicts

A Call for Better

Open source has given us incredible tools, technologies, and opportunities. The least we can do in return is treat each other with the respect and kindness that makes these collaborative achievements possible.

Every contributor—whether they’re packaging software, writing documentation, fixing bugs, or supporting users—is helping to build something remarkable. Let’s make sure our communities are places where that work can continue to flourish, supported by constructive communication and mutual respect.

The next time you encounter work you disagree with, ask yourself: How can I make this better? How can I help this contributor grow? How can I model the kind of community interaction I want to see?

Our projects are only as strong as the communities that support them. Let’s build communities worthy of the amazing software we create together.

https://gofund.me/506c910c

David BrinSome lighthearted stuff this time! Plus a few sobering reminders.

All right, it's been 3 weeks without a posting. Busy, as we finally move back home after 6 months in exile.  And sure, there's plenty going on in the world. Which I'll comment on soon, once my 3-week lobotomy has had a chance to settle in. (All hail Vlad and the New USSR and Vlad's orange-quisling U.S. prophet!)

Okay, meanwhile, got time for some humor and fun? There's a LOT of cool links, below!

Let’s start with this clip. Simply one of the best things I have seen, maybe ever!  Supporting my view that ‘pre-sapient’ consciousness is very, very common… and breaking through the glass ceiling to our level must be very, very hard. 


== Distractions! ==


Running short on distractions suitable for you alpha types? I mentioned Saturday Morning Breakfast Cereal comix. These are among the good ones lately.


https://www.smbc-comics.com/comic/why-6


https://www.smbc-comics.com/comic/law-4


https://www.smbc-comics.com/comic/profile


https://www.smbc-comics.com/comic/cult-2


And an exceptionally on-target bit of whimsical cynicism from SMBC: Saturday Morning Breakfast Cereal - LLM…


You'd also likely enjoy XKCD, which is generally even more science oriented. Might as well start here and just keep clicking the one-step-backward button till you tire of the cleverness!  


I mentioned Electric Sheep Comix by Patrick Farley. All his serials have such different styles you'd be sure they must have different artists. And all are brilliant! 



== And seriously, now ==


Briefly serious and then more lighthearted stuff!


Here’s a tip and a tool worth spreading. The Canadian Women's Foundation has created a hand signal for those who are victims of domestic violence which can be used silently on video calls during the coronavirus crisis to signal for help.  But not just for video calls, as illustrated in this earlier video.


And while we’re talking inspiring ways to move ahead… Big star Bruce Springsteen’s Jeep commercial paid homage to the ReUnited States of America… a lovely sentiment! (Calling to mind “malice toward none” from Lincoln’s 2nd inaugural address, one of the top ten speeches of all time.) 


It also called to mind - for not a few folks who pinged me - resonance with the “Restored United States” of my novel (and the film) “The Postman.” Which has itself been “restored” or refreshed, edited and updated with TWO new Patrick Farley covers and a new introduction. 


(Let me append -- below -- a relevant passage from The Postman, in which -- in the 1980s -- I predicted many of the rationalizations of the would-be lords seeking to re-impose 6000 years of dismal feudalism)


On the other hand, the dumbing-down continues. In 2022, the National Council of Teachers of English declared: “Time to decenter book reading and essay-writing as the pinnacles of English language arts education.” Instead, teachers are urged to focus on "media literacy" and short texts that students feel are "relevant." ??? I am well-versed in the 'newer' language arts and helped invent some. And this leads to the moronic world that Walter Tevis ('The Queen's Gambit') portrayed in his great novel MOCKINGBIRD. 


But oh yeah, who reads novels? Or tracks coherent, complex thoughts?

Dig it. This is part of the Great Big War Vs Nerds that's primarily on the Mad Right... but also has long had a strong locus on the postmodernist left.

Books r 2 hard 2 reed and shit ...


…but sure… now back to fun!



== And more spritely and musically now, to cheer you up! ==


And now something completely different. I assert that Gilbert and Sullivan were master musicians. And each opera has at least one pas-de-deux... where you take two seemingly completely independent songs, hear them separately, and then lo! They get woven together in beauty & irony. This one combines unhelpful encouragement (!) with courage-despite-terror. You'll see (and hear) what I mean at about 3:30. Play it loud!


This version with the incomparable Linda Ronstadt!

And yes, a few of you (too few!) will deem this familiar from a scene in BRIGHTNESS REEF!


And let’s have another. Here’s one of my utter-favorite songs, by Vangelis. The Jon Anderson version is great. Donna Summer’s is even better!


Less perfect but a fun variation is Chrissie Hynde’s version with Moodswings.


Then there’s this way-fun bit of grunting nonsense by Mike Oldfield, that should be redone by Tenacious D!


Three more faves recommended by my brother, with my thumbs way up.


Johnny Clegg with Nelson Mandela. 


Patti Smith, People Have the Power.  


Cornershop ‘Free Love.’ 



======


== And now that promised POSTMAN lagniappe ==


So it had been that way here too. The cliched "last straw" had been this plague of survivalists--particularly those following the high priest of violent anarchy, Nathan Holn.
...
The irony of it was that we had things turned around! The depression was over. People were at work again and cooperating. Except for a few crazies, it looked like a renaissance was coming, for America and the world.

But we forgot how much harm a few crazies could do, in America and in the world.

 


--… and later in the book… --

 

 

“How did he get away with pushing a book like this?”

       Gordon shrugged. 

       “It was called ‘the Big Lie’ technique, Johnny. Just SOUND like you know what you’re talking about—as if you’re citing real facts. Talk very fast. Weave your lies into the shape of a conspiracy theory and repeat your assertions over and over again. Those who want an excuse to hate or blame—those with big but weak egos— will leap at a simple, neat explanation for the way the world is. Those types will never call you on the facts…”



Want more?  I'll post another, longer, section of the book, soon. You'll likely not see a better pre-diagnosis of the hell we are in now, verging on possibly much worse.  But yes, we will win.


Thrive. And persevere!

 





Worse Than FailureCodeSOD: Concatenated Validation

User inputs are frequently incorrect, which is why we validate them. So, for example, if the user is allowed to enter an "asset ID" to perform some operation on it, we should verify that the asset ID exists before actually doing the operation.

Someone working with Capybara James almost got there. Almost.

private boolean isAssetIdMatching(String requestedAssetId, String databaseAssetId) {
    return (requestedAssetId + "").equals(databaseAssetId + "");
}

This Java code checks if the requestedAssetId, provided by the user, matches a databaseAssetId, fetched from the database. I don't fully understand how we get to this particular function. How is the databaseAssetId fetched? If the fetch were successful, how could it not match? I fear they may do this in a loop across all of the asset IDs in the database until they find a match; I don't know that for sure, but the naming conventions hint at a WTF.

The weird thing here, though, is the choice to concatenate an empty string to every value. It certainly won't change the equality check between two real strings. I strongly suspect that the goal here was to protect against null values, and in Java it technically does: concatenating null with "" yields the literal string "null", so the call never throws. Of course, it also means a null requestedAssetId will happily "match" any database value that happens to be the string "null".

I strongly suspect the developer picked up the habit in JavaScript, where x + "" is the usual idiom for coercing a value to a string.
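If the intent really was null-safety, Java already has an idiomatic way to say that: java.util.Objects.equals handles nulls without any string games. A minimal sketch of what the method could have looked like (my illustration, not code from James's codebase):

import java.util.Objects;

// Null-safe comparison: true when both are null, false when only one is,
// otherwise delegates to String.equals.
private boolean isAssetIdMatching(String requestedAssetId, String databaseAssetId) {
    return Objects.equals(requestedAssetId, databaseAssetId);
}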

I don't understand why or how this function got here. I'm not the only one. James writes:

No clue what the original developers were intending with this. It sure was a shocker when we inherited a ton of code like this.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsNot Dying Today

Author: Julian Miles, Staff Writer Mum always said ice mining is a stupid idea. Whenever she said that, Dad just shrugged and went back to watching videos about playing the markets to get rich. I’m not sure if it was her crazy enthusiasms for anything that might get us ‘a better life’ or his stubborn […]

The post Not Dying Today appeared first on 365tomorrows.