Planet Russell


Planet Debian - Steinar H. Gunderson: Recommended VCL

In line with this bug, and after losing an hour of sleep, here's some VCL that I can readily recommend if you happen to run Varnish:

sub vcl_recv {
  ...
  if (req.http.user-agent ~ "Scrapy") {
    return (synth(200, "FUCK YOU FUCK YOU FUCK YOU"));
  }
  ...
}

But hey, we “need to respect the freedom of Scrapy users”, that comes before actually not, like, destroying the Internet with AI bots.

Worse Than Failure - CodeSOD: Dating in Another Language

It takes a lot of time and effort to build a code base that exceeds 100kloc. Rome wasn't built in a day; it just burned down in one.

Liza was working in a Python shop. They had a mildly successful product that ran on Linux. The sales team wanted better sales software to help them out, and instead of buying something off the shelf, they hired a C# developer to make something entirely custom.

Within a few months, that developer had produced a codebase of 320kloc. I say "produced" and not "wrote" because who knows how much of it was copy/pasted, stolen from Stack Overflow, or otherwise not the developer's own work.

You have to wonder, how do you get such a large codebase so quickly?

private String getDatum()
{
    DateTime datum = new DateTime();
    datum = DateTime.Now;
    return datum.ToShortDateString();
}

public int getTag()
{
    int tag;
    DateTime datum = new DateTime();
    datum = DateTime.Today;
    tag = datum.Day;
    return tag;
}

private int getMonat()
{
    int monat;
    DateTime datum = new DateTime();
    datum = DateTime.Today;
    monat = datum.Month;
    return monat;
}

private int getJahr()
{
    int monat;
    DateTime datum = new DateTime();
    datum = DateTime.Today;
    monat = datum.Year;
    return monat;
}

private int getStunde()
{
    int monat;
    DateTime datum = new DateTime();
    datum = DateTime.Now;
    monat = datum.Hour;
    return monat;
}

private int getMinute()
{
    int monat;
    DateTime datum = new DateTime();
    datum = DateTime.Now;
    monat = datum.Minute;
    return monat;
}

Instead of our traditional "bad date handling code" which eschews the built-in libraries, this just wraps the built-in libraries with a less useful set of wrappers. Each of these could be replaced with a single direct property access, like DateTime.Now.Minute or DateTime.Today.Day.

You'll notice that most of the methods are private, but one is public. That seems strange, doesn't it? Well, this set of methods was pulled from one random class which implements them in the codebase, but many classes have these methods copy/pasted in. At some point, the developer realized that duplicating that much code was a bad idea, and started marking them as public, so that you could just call them as needed. Note, said developer never learned to use the keyword static, so you end up calling the methods on whatever random instance of whatever random class you happen to have handy. The idea of putting them into a common base class, or a dedicated date-time utility class, never occurred to the developer, but I guess that's because they were already part of a dedicated date-time utility class.
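
For illustration only, here is a minimal sketch of the kind of shared static helper that would have made all the copy/pasting unnecessary; the DatumUtil class name is hypothetical, not something taken from the actual codebase:

using System;

public static class DatumUtil
{
    // Each wrapper collapses to a direct property access on the framework types.
    public static string GetDatum()  => DateTime.Now.ToShortDateString();
    public static int    GetTag()    => DateTime.Today.Day;
    public static int    GetMonat()  => DateTime.Today.Month;
    public static int    GetJahr()   => DateTime.Today.Year;
    public static int    GetStunde() => DateTime.Now.Hour;
    public static int    GetMinute() => DateTime.Now.Minute;
}

Any class could then call DatumUtil.GetMinute() without needing an instance of anything, which is exactly what static is for.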


Planet Debian - Michael Prokop: Lessons learned from running an open source project for 20 years @ GLT25

Time flies by so quickly; it’s been more than 20 years since I started the Grml project.

I’m giving a (German) talk about the lessons learned from 20 years of running the Grml project this Saturday, 2025-04-26, at the Grazer Linuxtage (Graz/Austria). It would be great to see you there!

Planet Debian - Russell Coker: Last Post About the Yoga Gen3

Just over a year ago I bought myself a Thinkpad Yoga Gen 3 [1]. That is a nice machine and I really enjoyed using it. But a few months ago it started crashing and would often play some music on boot. The music is a diagnostic code that can be interpreted by the Lenovo Android app. Often the music translated to “code 0284 TCG-compliant functionality-related error” which suggests a motherboard problem. So I bought a new motherboard.

The system still crashes with the new motherboard. It seems to only crash when on battery, which suggests that a power issue is causing the crashes. I configured the BIOS to disable the TPM, which avoided the TCG messages and tunes on boot, but it still crashes.

An additional problem is that the design of the Yoga series is that the keys retract when the system is opened past 180 degrees and when the lid is closed. After the motherboard replacement about half the keys don’t retract, which means that they will damage the screen more when the lid is closed (the screen was already damaged by the keys when I bought it).

I think that spending more money on trying to fix this would be a waste. So I’ll use it as a test machine, and I might give it to a relative who needs a portable computer that will only be used on mains power.

For the moment I’m back to the Thinkpad X1 Carbon Gen 5 [2]. Hopefully the latest kernel changes to zswap and the changes to Chrome to suspend unused tabs will make up for more RAM use in other areas. Currently it seems to be giving decent performance with 8G of RAM and I usually don’t notice any difference from the Yoga Gen 3.

Now I’m considering getting a Thinkpad X1 Carbon Extreme with a 4K display. But they seem a bit expensive at the moment. Currently there’s only one on eBay Australia, for $1200 ono.

365 Tomorrows - Ingress

Author: Sukanya Basu Mallik Every evening, Mira and Arun huddled in the glow of their holo-tablet to devour ‘Extended Reality’, the hottest sci-fi novel on the Net. As pages flicked by in midair, lush digital fauna and neon-lit spires looped through their cramped flat. Tonight’s chapter promised the Chromatic Gates—legendary portals that blurred the line […]

The post Ingress appeared first on 365tomorrows.

Planet Debian - Dirk Eddelbuettel: RInside 0.2.19 on CRAN: Mostly Maintenance

A new release 0.2.19 of RInside arrived on CRAN and in Debian today. RInside provides a set of convenience classes which facilitate embedding of R inside of C++ applications and programs, using the classes and functions provided by Rcpp.

This release fixes a minor bug that got tickled (after a decade and a half of RInside) by environment variables (which we parse at compile time and encode in a C/C++ header file as constants) built using double quotes. CRAN currently needs that on one or two platforms, and RInside was erroring. This has been addressed. In the two years since the last release we also received two kind PRs updating the Qt examples to Qt6. And as always we also updated a few other things around the package.

The list of changes since the last release:

Changes in RInside version 0.2.19 (2025-04-22)

  • The qt example now supports Qt6 (Joris Goosen in #54 closing #53)

  • CMake support was refined for more recent versions (Joris Goosen in #55)

  • The sandboxed-server example now states more clearly that RINSIDE_CALLBACKS needs to be defined

  • More routine update to package and continuous integration.

  • Some now-obsolete checks for C++11 have been removed

  • When parsing environment variables, use of double quotes is now supported

My CRANberries also provide a short report with changes from the previous release. More information is on the RInside page. Questions, comments etc. should go to the rcpp-devel mailing list off the Rcpp R-Forge page, or to issue tickets at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.


LongNow - Lynn Rothschild

Lynn Rothschild

Lynn J. Rothschild is a research scientist at NASA Ames and Adjunct Professor at Brown University and Stanford University working in astrobiology, evolutionary biology and synthetic biology. Rothschild's work focuses on the origin and evolution of life on Earth and in space, and on pioneering the use of synthetic biology to enable space exploration.

From 2011 through 2019 Rothschild served as the faculty advisor of the award-winning Stanford-Brown iGEM (International Genetically Engineered Machine competition) team, exploring innovative technologies such as biomining, mycotecture, BioWires, making a biodegradable UAS (drone) and an astropharmacy. Rothschild is a past president of the Society of Protozoologists, a fellow of the Linnean Society of London, the California Academy of Sciences and the Explorer’s Club, and lectures and speaks about her work widely.

Planet Debian - Melissa Wen: 2025 FOSDEM: Don't let your motivation go, save time with kworkflow

2025 was my first year at FOSDEM, and I can say it was an incredible experience where I met many colleagues from Igalia who live around the world, and also many friends from the Linux display stack who are part of my daily work and contributions to DRM/KMS. In addition, I met new faces and recognized others with whom I had interacted on some online forums and we had good and long conversations.

During FOSDEM 2025 I had the opportunity to present about kworkflow in the kernel devroom. Kworkflow is a set of tools that help kernel developers with their routine tasks and it is the tool I use for my development tasks. In short, every contribution I make to the Linux kernel is assisted by kworkflow.

The goal of my presentation was to spread the word about kworkflow. I aimed to show how the suite consolidates good practices and recommendations of the kernel workflow into short commands. These commands are easily configured and memorized for your current work setup, or for your multiple setups.

For me, Kworkflow is a tool that accommodates the needs of different agents in the Linux kernel community. Active developers and maintainers are the main target audience for kworkflow, but it is also inviting for users and user-space developers who just want to report a problem and validate a solution without needing to know every detail of the kernel development workflow.

Something I didn’t emphasize during the presentation, and would like to correct here, is that the main author and developer of kworkflow is my colleague at Igalia, Rodrigo Siqueira. To be honest, my contributions are mostly requesting and validating new features, fixing bugs, and sharing scripts to increase feature coverage.

So, the video and slide deck of my FOSDEM presentation are available for download here.

And, as usual, you will find in this blog post the script of this presentation and a more detailed explanation of the demo presented there.


Kworkflow at FOSDEM 2025: Speaker Notes and Demo

Hi, I’m Melissa, a GPU kernel driver developer at Igalia and today I’ll be giving a very inclusive talk to not let your motivation go by saving time with kworkflow.

So, you’re a kernel developer, or you want to be a kernel developer, or you don’t want to be a kernel developer. But you’re all united by a single need: you need to validate a custom kernel with just one change, and you need to verify that it fixes or improves something in the kernel.

And that’s a given change for a given distribution, or for a given device, or for a given subsystem…

Look at this diagram and try to figure out the number of subsystems and related work trees you can handle in the kernel.

So, whether you are a kernel developer or not, at some point you may come across this type of situation:

There is a userspace developer who wants to report a kernel issue and says:

  • Oh, there is a problem in your driver that can only be reproduced by running this specific distribution. And the kernel developer asks:
  • Oh, have you checked if this issue is still present in the latest kernel version of this branch?

But the userspace developer has never compiled and installed a custom kernel before. So they have to read a lot of tutorials and kernel documentation to create a kernel compilation and deployment script. Finally, the reporter manages to compile and deploy a custom kernel and reports:

  • Sorry for the delay, this is the first time I have installed a custom kernel. I am not sure if I did it right, but the issue is still present in the kernel of the branch you pointed out.

And then, the kernel developer needs to reproduce this issue on their side, but they have never worked with this distribution, so they just create a new script, essentially the same script the reporter already created.

What’s the problem with this situation? The problem is that you keep creating new scripts!

Every time you change distribution, architecture, hardware, or project - even in the same company, the development setup may change when you switch to a different project - you create another script for your new kernel development workflow!

You know, you have a lot of babies, you have a collection of “my precious scripts”, like Sméagol (Lord of the Rings) with the precious ring.

Instead of creating and accumulating scripts, save yourself time with kworkflow. Here is a typical script that many of you may have. This is a Raspberry Pi 4 script and contains everything you need to memorize to compile and deploy a kernel on your Raspberry Pi 4.

With kworkflow, you only need to memorize two commands, and those commands are not specific to Raspberry Pi. They are the same commands for different architectures, kernel configurations, and target devices.

What is kworkflow?

Kworkflow is a collection of tools and software combined to:

  • Optimize Linux kernel development workflow.
  • Reduce time spent on repetitive tasks, since we are spending our lives compiling kernels.
  • Standardize best practices.
  • Ensure reliable data exchange across the kernel workflow. For example: two people describe the same setup but are not seeing the same thing; kworkflow can ensure both actually have the same kernel, modules and options enabled.

I don’t know if you will get this analogy, but kworkflow is for me a megazord of scripts. You are combining all of your scripts to create a very powerful tool.

What are the main features of kworkflow?

There are many, but these are the most important for me:

  • Build & deploy custom kernels across devices & distros.
  • Handle cross-compilation seamlessly.
  • Manage multiple architectures, settings and target devices in the same work tree.
  • Organize kernel configuration files.
  • Facilitate remote debugging & code inspection.
  • Standardize Linux kernel patch submission guidelines. You don’t need to double-check documentation, nor does Greg need to tell you that you are not following Linux kernel guidelines.
  • Upcoming: Interface to bookmark, apply and “reviewed-by” patches from mailing lists (lore.kernel.org).

This is the list of commands you can run with kworkflow. The first subset is to configure your tool for various situations you may face in your daily tasks.

# Manage kw and kw configurations
kw init             - Initialize kw config file
kw self-update (u)  - Update kw
kw config (g)       - Manage kw configurations

The second subset is to build and deploy custom kernels.

# Build & Deploy custom kernels
kw kernel-config-manager (k) - Manage kernel .config files
kw build (b)        - Build kernel
kw deploy (d)       - Deploy kernel image (local/remote)
kw bd               - Build and deploy kernel

We have some tools to manage and interact with target machines.

# Manage and interact with target machines
kw ssh (s)          - SSH support
kw remote (r)       - Manage machines available via ssh
kw vm               - QEMU support

To inspect and debug a kernel.

# Inspect and debug
kw device           - Show basic hardware information
kw explore (e)      - Explore string patterns in the work tree and git logs
kw debug            - Linux kernel debug utilities
kw drm              - Set of commands to work with DRM drivers

To automate best practices for patch submission: checking code style, finding maintainers and the correct list of recipients and mailing lists for a change, to ensure we are sending the patch to those who are interested in it.

# Automate best practices for patch submission
kw codestyle (c)    - Check code style
kw maintainers (m)  - Get maintainers/mailing list
kw send-patch       - Send patches via email

And the last one, the upcoming patch hub.

# Upcoming
kw patch-hub        - Interact with patches (lore.kernel.org)

How can you save time with Kworkflow?

So how can you save time building and deploying a custom kernel?

First, you need a .config file.

  • Without kworkflow: You may be manually extracting and managing .config files from different targets and saving them with different suffixes to link the kernel to the target device or distribution, or any descriptive suffix to help identify which is which. Or even copying and pasting from somewhere.
  • With kworkflow: you can use the kernel-config-manager command, or simply kw k, to store, describe and retrieve a specific .config file very easily, according to your current needs.

Then you want to build the kernel:

  • Without kworkflow: You are probably now memorizing a combination of commands and options.
  • With kworkflow: you just need kw b (kw build) to build the kernel with the correct settings for cross-compilation, compilation warnings, cflags, etc. It also shows some information about the kernel, like number of modules.

Finally, to deploy the kernel in a target machine.

  • Without kworkflow: You might be doing things like: SSH connecting to the remote machine, copying and removing files according to distributions and architecture, and manually updating the bootloader for the target distribution.
  • With kworkflow: you just need kw d which does a lot of things for you, like: deploying the kernel, preparing the target machine for the new installation, listing available kernels and uninstalling them, creating a tarball, rebooting the machine after deploying the kernel, etc.

You can also save time on debugging kernels locally or remotely.

  • Without kworkflow: you do it by hand: ssh in, manually set up and enable traces, and copy and paste logs.
  • With kworkflow: more straightforward access to debug utilities: events, trace, dmesg.

You can save time on managing multiple kernel images in the same work tree.

  • Without kworkflow: you may be cloning the same repository multiple times so you don’t lose compiled files when changing kernel configurations or compilation options, and manually managing build and deployment scripts for each copy.
  • With kworkflow: you can use kw env to isolate multiple contexts in the same worktree as environments, so you can keep different configurations in the same worktree and switch between them easily without losing anything from the last time you worked in a specific context.

Finally, you can save time when submitting kernel patches. In kworkflow, you can find everything you need to wrap your changes in patch format and submit them to the right list of recipients, those who can review, comment on, and accept your changes.

This is a demo that the lead developer of the kw patch-hub feature sent me. With this feature, you will be able to check out a series from a specific mailing list, bookmark those patches in the kernel tree for validation, and, when you are satisfied with the proposed changes, automatically submit a Reviewed-by for that whole series to the mailing list.


Demo

Now a demo of how to use kw environment to deal with different devices, architectures and distributions in the same work tree without losing compiled files, build and deploy settings, .config files, remote access configuration and other settings specific to the three devices that I have.

Setup

  • Three devices:
    • laptop (debian x86 intel local)
    • SteamDeck (steamos x86 amd remote)
    • RaspberryPi 4 (raspbian arm64 broadcom remote)
  • Goal: To validate a change on DRM/VKMS using a single kernel tree.
  • Kworkflow commands:
    • kw env
    • kw d
    • kw bd
    • kw device
    • kw debug
    • kw drm

Demo script

In the same terminal and worktree.

First target device: Laptop (debian|x86|intel|local)
$ kw env --list # list environments available in this work tree
$ kw env --use LOCAL # select the environment of local machine (laptop) to use: loading pre-compiled files, kernel and kworkflow settings.
$ kw device # show device information
$ sudo modinfo vkms # show VKMS module information before applying kernel changes.
$ <open VKMS file and change module info>
$ kw bd # compile and install kernel with the given change
$ sudo modinfo vkms # show VKMS module information after kernel changes.
$ git checkout -- drivers
Second target device: RaspberryPi 4 (raspbian|arm64|broadcom|remote)
$ kw env --use RPI_64 # move to the environment for a different target device.
$ kw device # show device information and kernel image name
$ kw drm --gui-off-after-reboot # set the system to not load graphical layer after reboot
$ kw b # build the kernel with the VKMS change
$ kw d --reboot # deploy the custom kernel in a Raspberry Pi 4 with Raspbian 64, and reboot
$ kw s # connect with the target machine via ssh and check the kernel image name
$ exit
Third target device: SteamDeck (steamos|x86|amd|remote)
$ kw env --use STEAMDECK # move to the environment for a different target device
$ kw device # show device information
$ kw debug --dmesg --follow --history --cmd="modprobe vkms" # run a command and show the related dmesg output
$ kw debug --dmesg --follow --history --cmd="modprobe -r vkms" # run a command and show the related dmesg output
$ <add a printk with a random msg to appear on dmesg log>
$ kw bd # build and deploy the custom kernel to the target device
$ kw debug --dmesg --follow --history --cmd="modprobe vkms" # run a command and show the related dmesg output after build and deploy the kernel change

Q&A

Most of the questions raised at the end of the presentation were actually suggestions and additions of new features to kworkflow.

The first participant, who is also a kernel maintainer, asked about two features: (1) automating getting patches from patchwork (or lore) and triggering the process of building, deploying and validating them using the existing workflow, and (2) bisecting support. They are both very interesting features. The first one fits well within the patch-hub subproject, which is under development, and I had actually made a similar request a couple of weeks before the talk. The second is an already existing request in the kworkflow GitHub project.

Another request was to use kexec and avoid rebooting the kernel for testing. Reviewing my presentation, I realized I wasn’t very clear that kworkflow doesn’t support kexec. As I replied, what it does is install the modules, and you can load/unload them for validation; for built-in parts, you need to reboot the kernel.

Another two questions: one about Android Debug Bridge (ADB) support instead of SSH, and another about support for alternative ways of booting when the custom kernel ends up broken but you only have one kernel image on the device. Kworkflow doesn’t manage this yet, but I agree it is a very useful feature for embedded devices. On Raspberry Pi 4, kworkflow mitigates the issue by preserving the distro kernel image and using the config.txt file to set a custom kernel for booting. There is no ADB support either, and as I don’t currently see users of kw working with Android, I don’t think we will have this support any time soon, unless we find new volunteers and increase the pool of contributors.

The last two questions were regarding the status of b4 integration, which is under development, and other debugging features that the tool doesn’t support yet.

Finally, when Andrea and I were changing turns on the stage, he suggested adding support for virtme-ng to kworkflow. So I opened an issue to track this feature request in the project's GitHub.

With all these questions and requests, I could see the general need for a tool that integrates the variety of kernel developer workflows, as proposed by kworkflow. Also, there are still many cases to be covered by kworkflow.

Despite the high demand, this is a completely voluntary project and it is unlikely that we will be able to meet all of these needs given the limited resources. We will keep trying our best in the hope that we can increase the pool of users and contributors too.

Cryptogram - Slopsquatting

As AI coding assistants invent nonexistent software libraries to download and use, enterprising attackers create and upload libraries with those names—laced with malware, of course.

EDITED TO ADD (1/22): Research paper. Slashdot thread.

Planet Debian - Joey Hess: offgrid electric car

Eight months ago I came up my rocky driveway in an electric car, with the back full of solar panel mounting rails. I didn't know how I'd manage to keep it charged. I got the car earlier than planned, with my offgrid solar upgrade only beginning. There's no nearby EV charger, and winter was coming, with less solar power every day. Still, it was the right time to take a leap to offgrid EV life.

My existing 1 kilowatt solar array could charge the car only 5 miles on a good day. Here's my first try at charging the car offgrid:

first feeble charging offgrid

It was not worth charging the car that way: the house battery tended to get drained while doing it, and adding cycles to that battery is not desirable. So that was only a proof of concept; I knew I'd need to upgrade.

My goal with the upgrade was to charge the car directly from the sun, even when it was cloudy, using the house battery only to skate over brief darker periods (like a thunderstorm). By mid October, I had enough solar installed to do that (5 kilowatts).

me standing in front of solar fence

first charging from solar fence

Using this, in 2 days I charged the car up from 57% to 82%, and took off on a celebratory road trip to Niagara Falls, where I charged the car from hydro power from a dam my grandfather had engineered.

When I got home, it was November. Days were getting ever shorter. My solar upgrade was only 1/3rd complete and could charge the car 30-some miles per day, but only on a good day, and weather was getting worse. I came back with a low state of charge (both car and me), and needed to get back to full in time for my Thanksgiving trip at the end of the month. I decided to limit my trips to town.

charging up gradually through the month of November

This kind of medium term planning about car travel was new to me. But not too unusual for offgrid living. You look at the weather forecast and make some rough plans, and get to feel connected to the natural world a bit more.

December is the real test for offgrid solar, and honestly this was a bit rough, with a road trip planned for the end of the month. I did the usual holiday stuff but otherwise holed up at home a bit more than I usually would. Charging was limited and the cold made it charge less efficiently.

bleak December charging

Still, I was busy installing more solar panels, and by winter solstice, was back to charging 30 miles on a good day.

Of course, from there out things improved. In January and February I was able to charge up easily enough for my usual trips despite the cold. By March the car was often getting full before I needed to go anywhere, and I was doing long round trips without bothering to fast charge along the way, coming home low, knowing even cloudy days would let it charge up enough.

That brings me up to today. The car is 80% full and heading up toward 100% for a long trip on Friday. Despite the sky being milky white today with no visible sun, there's plenty of power to absorb, and the car charger turned on at 11 am with the house battery already full.

My solar upgrade is only 2/3rds complete, and also I have not yet installed my inverter upgrade, so the car can only currently charge at 9 amps despite much more solar power often being available. So I'm looking forward to how next December goes with my full planned solar array and faster charging.

But first, a summer where I expect the car will mostly be charged up and ready to go at all times, and the only car expense will be fast charging on road trips!


By the way, the code I've written to automate offgrid charging that runs only when there's enough solar power is here.

And here are the charging graphs for the other months. All told, it's charged 475 kWh offgrid, enough to drive more than 1500 miles.

January
February
March
April

Cryptogram - Android Improves Its Security

Android phones will soon reboot themselves after sitting idle for three days. iPhones have had this feature for a while; it’s nice to see Google add it to their phones.

Worse Than Failure - XJSOML

When Steve's employer went hunting for a new customer relationship management system (CRM), they had some requirements. A lot of them were around the kind of vendor support they'd get. Their sales team weren't the most technical people, and the company wanted to push as much routine support off to the vendor as possible.

But they also needed a system that was extensible. Steve's company had many custom workflows they wanted to be able to execute, and automated marketing messages they wanted to construct, and so wanted a CRM that had an easy to use API.

"No worries," the vendor sales rep said, "we've had a RESTful API in our system for years. It's well tested and reliable. It's JSON based."

The purchasing department ground their way through the purchase order and eventually they started migrating to the new CRM system. And it fell to Steve to start learning the JSON-based, RESTful API.

"JSON"-based was a more accurate description.

For example, an API endpoint might have a schema like:

DeliveryId:	int // the ID of the created delivery
Errors: 	xml // Collection of errors encountered

This example schema is representative. Many "JSON" documents contained strings of XML inside of them.
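
To make that concrete, here is a hedged sketch of what consuming such a response involves; the payload contents and the XML structure inside the Errors field are invented for the example, and only the DeliveryId and Errors field names come from the schema above. You parse the JSON, then run a second parser over the XML string buried inside it.

using System;
using System.Text.Json;
using System.Xml.Linq;

class DeliveryResponseDemo
{
    static void Main()
    {
        // Hypothetical response body shaped like the schema above: JSON on the
        // outside, with a string of XML smuggled inside the Errors field.
        string body = @"{ ""DeliveryId"": 42,
                          ""Errors"": ""<Errors><Error code=\""7\"">Address missing</Error></Errors>"" }";

        using JsonDocument doc = JsonDocument.Parse(body);
        int deliveryId = doc.RootElement.GetProperty("DeliveryId").GetInt32();

        // A second parser for the payload-within-a-payload.
        string errorsXml = doc.RootElement.GetProperty("Errors").GetString() ?? "<Errors/>";
        XElement errors = XElement.Parse(errorsXml);
        foreach (XElement error in errors.Elements("Error"))
            Console.WriteLine($"Delivery {deliveryId}: error {error.Attribute("code")?.Value} - {error.Value}");
    }
}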

Often, this is done when an existing XML-based API is "modernized", but in this case, the root cause is a little dumber than that. The system uses SQL Server as its back end, and XML is one of the native types. They just have a stored procedure build an XML object and then return it as an output parameter.

You'll be surprised to learn that the vendor's support team had a similar level of care: they officially did what you asked, but sometimes it felt like malicious compliance.


365 Tomorrows - Gilded Cage

Author: Robert Gilchrist The door snicked shut behind the Dauphin. Metallic locks hammered with a decisive thud. He breathed a sigh of relief. He was safe. Jogging into the room was the Invader. Wearing a red holo-mask to obscure distinguishing features, the figure came up to the door and began running their hands over it […]

The post Gilded Cage appeared first on 365tomorrows.

Krebs on Security - Whistleblower: DOGE Siphoned NLRB Case Data

A security architect with the National Labor Relations Board (NLRB) alleges that employees from Elon Musk’s Department of Government Efficiency (DOGE) transferred gigabytes of sensitive data from agency case files in early March, using short-lived accounts configured to leave few traces of network activity. The NLRB whistleblower said the unusually large data outflows coincided with multiple blocked login attempts from an Internet address in Russia that tried to use valid credentials for a newly-created DOGE user account.

The cover letter from Berulis’s whistleblower statement, sent to the leaders of the Senate Select Committee on Intelligence.

The allegations came in an April 14 letter to the Senate Select Committee on Intelligence, signed by Daniel J. Berulis, a 38-year-old security architect at the NLRB.

NPR, which was the first to report on Berulis’s whistleblower complaint, says NLRB is a small, independent federal agency that investigates and adjudicates complaints about unfair labor practices, and stores “reams of potentially sensitive data, from confidential information about employees who want to form unions to proprietary business information.”

The complaint documents a one-month period beginning March 3, during which DOGE officials reportedly demanded the creation of all-powerful “tenant admin” accounts in NLRB systems that were to be exempted from network logging activity that would otherwise keep a detailed record of all actions taken by those accounts.

Berulis said the new DOGE accounts had unrestricted permission to read, copy, and alter information contained in NLRB databases. The new accounts also could restrict log visibility, delay retention, route logs elsewhere, or even remove them entirely — top-tier user privileges that neither Berulis nor his boss possessed.

Berulis writes that on March 3, a black SUV accompanied by a police escort arrived at his building — the NLRB headquarters in Southeast Washington, D.C. The DOGE staffers did not speak with Berulis or anyone else in NLRB’s IT staff, but instead met with the agency leadership.

“Our acting chief information officer told us not to adhere to standard operating procedure with the DOGE account creation, and there was to be no logs or records made of the accounts created for DOGE employees, who required the highest level of access,” Berulis wrote of their instructions after that meeting.

“We have built in roles that auditors can use and have used extensively in the past but would not give the ability to make changes or access subsystems without approval,” he continued. “The suggestion that they use these accounts was not open to discussion.”

Berulis found that on March 3 one of the DOGE accounts created an opaque, virtual environment known as a “container,” which can be used to build and run programs or scripts without revealing its activities to the rest of the world. Berulis said the container caught his attention because he polled his colleagues and found none of them had ever used containers within the NLRB network.

Berulis said he also noticed that early the next morning — between approximately 3 a.m. and 4 a.m. EST on Tuesday, March 4  — there was a large increase in outgoing traffic from the agency. He said it took several days of investigating with his colleagues to determine that one of the new accounts had transferred approximately 10 gigabytes worth of data from the NLRB’s NxGen case management system.

Berulis said neither he nor his co-workers had the necessary network access rights to review which files were touched or transferred — or even where they went. But his complaint notes the NxGen database contains sensitive information on unions, ongoing legal cases, and corporate secrets.

“I also don’t know if the data was only 10gb in total or whether or not they were consolidated and compressed prior,” Berulis told the senators. “This opens up the possibility that even more data was exfiltrated. Regardless, that kind of spike is extremely unusual because data almost never directly leaves NLRB’s databases.”

Berulis said he and his colleagues grew even more alarmed when they noticed nearly two dozen login attempts from a Russian Internet address (83.149.30.186) that presented valid login credentials for a DOGE employee account — one that had been created just minutes earlier. Berulis said those attempts were all blocked thanks to rules in place that prohibit logins from non-U.S. locations.

“Whoever was attempting to log in was using one of the newly created accounts that were used in the other DOGE related activities and it appeared they had the correct username and password due to the authentication flow only stopping them due to our no-out-of-country logins policy activating,” Berulis wrote. “There were more than 20 such attempts, and what is particularly concerning is that many of these login attempts occurred within 15 minutes of the accounts being created by DOGE engineers.”

According to Berulis, the naming structure of one Microsoft user account connected to the suspicious activity suggested it had been created and later deleted for DOGE use in the NLRB’s cloud systems: “DogeSA_2d5c3e0446f9@nlrb.microsoft.com.” He also found other new Microsoft cloud administrator accounts with nonstandard usernames, including “Whitesox, Chicago M.” and “Dancehall, Jamaica R.”

A screenshot shared by Berulis showing the suspicious user accounts.

On March 5, Berulis documented that a large section of logs for recently created network resources were missing, and a network watcher in Microsoft Azure was set to the “off” state, meaning it was no longer collecting and recording data like it should have.

Berulis said he discovered someone had downloaded three external code libraries from GitHub that neither NLRB nor its contractors ever use. A “readme” file in one of the code bundles explained it was created to rotate connections through a large pool of cloud Internet addresses that serve “as a proxy to generate pseudo-infinite IPs for web scraping and brute forcing.” Brute force attacks involve automated login attempts that try many credential combinations in rapid sequence.

The complaint alleges that by March 17 it became clear the NLRB no longer had the resources or network access needed to fully investigate the odd activity from the DOGE accounts, and that on March 24, the agency’s associate chief information officer had agreed the matter should be reported to US-CERT. Operated by the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), US-CERT provides on-site cyber incident response capabilities to federal and state agencies.

But Berulis said that between April 3 and 4, he and the associate CIO were informed that “instructions had come down to drop the US-CERT reporting and investigation and we were directed not to move forward or create an official report.” Berulis said it was at this point he decided to go public with his findings.

An email from Daniel Berulis to his colleagues dated March 28, referencing the unexplained traffic spike earlier in the month and the unauthorized changing of security controls for user accounts.

Tim Bearese, the NLRB’s acting press secretary, told NPR that DOGE neither requested nor received access to its systems, and that “the agency conducted an investigation after Berulis raised his concerns but ‘determined that no breach of agency systems occurred.'” The NLRB did not respond to questions from KrebsOnSecurity.

Nevertheless, Berulis has shared a number of supporting screenshots showing agency email discussions about the unexplained account activity attributed to the DOGE accounts, as well as NLRB security alerts from Microsoft about network anomalies observed during the timeframes described.

As CNN reported last month, the NLRB has been effectively hobbled since President Trump fired three board members, leaving the agency without the quorum it needs to function.

“Despite its limitations, the agency had become a thorn in the side of some of the richest and most powerful people in the nation — notably Elon Musk, Trump’s key supporter both financially and arguably politically,” CNN wrote.

Both Amazon and Musk’s SpaceX have been suing the NLRB over complaints the agency filed in disputes about workers’ rights and union organizing, arguing that the NLRB’s very existence is unconstitutional. On March 5, a U.S. appeals court unanimously rejected Musk’s claim that the NLRB’s structure somehow violates the Constitution.

Berulis shared screenshots with KrebsOnSecurity showing that on the day NPR published its story about his claims (April 14), the deputy CIO at NLRB sent an email stating that administrative control had been removed from all employee accounts. Meaning, suddenly none of the IT employees at the agency could do their jobs properly anymore, Berulis said.

An email from the NLRB’s associate chief information officer Eric Marks, notifying employees they will lose security administrator privileges.

Berulis shared a screenshot of an agency-wide email dated April 16 from NLRB director Lasharn Hamilton saying DOGE officials had requested a meeting, and reiterating claims that the agency had no prior “official” contact with any DOGE personnel. The message informed NLRB employees that two DOGE representatives would be detailed to the agency part-time for several months.

An email from the NLRB Director Lasharn Hamilton on April 16, stating that the agency previously had no contact with DOGE personnel.

Berulis told KrebsOnSecurity he was in the process of filing a support ticket with Microsoft to request more information about the DOGE accounts when his network administrator access was restricted. Now, he’s hoping lawmakers will ask Microsoft to provide more information about what really happened with the accounts.

“That would give us way more insight,” he said. “Microsoft has to be able to see the picture better than we can. That’s my goal, anyway.”

Berulis’s attorney told lawmakers that on April 7, while his client and legal team were preparing the whistleblower complaint, someone physically taped a threatening note to Mr. Berulis’s home door with photographs — taken via drone — of him walking in his neighborhood.

“The threatening note made clear reference to this very disclosure he was preparing for you, as the proper oversight authority,” reads a preface by Berulis’s attorney Andrew P. Bakaj. “While we do not know specifically who did this, we can only speculate that it involved someone with the ability to access NLRB systems.”

Berulis said the response from friends, colleagues and even the public has been largely supportive, and that he doesn’t regret his decision to come forward.

“I didn’t expect the letter on my door or the pushback from [agency] leaders,” he said. “If I had to do it over, would I do it again? Yes, because it wasn’t really even a choice the first time.”

For now, Mr. Berulis is taking some paid family leave from the NLRB. Which is just as well, he said, considering he was stripped of the tools needed to do his job at the agency.

“They came in and took full administrative control and locked everyone out, and said limited permission will be assigned on a need basis going forward,” Berulis said of the DOGE employees. “We can’t really do anything, so we’re literally getting paid to count ceiling tiles.”

Further reading: Berulis’s complaint (PDF).


Planet Debian - Gunnar Wolf: Want your title? Here, have some XML!

As it seems ChatGPT would phrase it… Sweet Mother of God!

I received a mail from my University’s Scholar Administrative division informing me that my Doctor degree has been granted and issued (yayyyyyy! 👨‍🎓), and that before printing the corresponding documents, I should review that all of the information is correct.

Attached to the mail, I found they had sent me a very friendly and welcoming XML file, which stated it followed the schema at https://www.siged.sep.gob.mx/titulos/schema.xsd… Wait! There is nothing to be found at that address! Well, never mind, I can make sense out of an XML document, right?

XML sample

Of course, who needs an XSD schema? Everybody can parse through the data in an XML document, right? Of course, it took me close to five seconds to spot a minor mistake (in the finish and start dates of my previous degree), for which I mailed the relevant address…

But… What happens if I try to understand the world as seen by 9.8 out of 10 people getting a title from UNAM, in all of its different disciplines (scientific, engineering, humanities…)? Some people will have no clue about what to do with an XML file. Fortunately, the mail has a link to a very useful tutorial (roughly translated by myself):

The attached file has an XML extension, so in order to visualize it, you must open it with a text editor such as Notepad or Sublime Text. In case you have any questions on how to open the file, please refer to the following guide: https://www.dgae.unam.mx/guia_abrir_xml.html

Seriously! Asking people getting a title in just about any area of knowledge to… install Sublime Text to validate the contents of an XML file (that includes the oh-so-very-readable signature of some university bureaucrat).

Of course, for many years Mexican people have been getting XML files by mail (for any declared monetary exchange, i.e. buying goods or offering services), but they are always sent together with a render of that XML into a personalized PDF. And yes — the PDF is there only to give the human receiving the file an easier time understanding it. Who thought a bare XML was a good idea? 😠

Worse Than Failure - CodeSOD: The Variable Toggle

A common class of bad code is the code which mixes server side code with client side code. This kind of thing:

<script>
    <?php if ($someVal) { ?>
        var foo = <?php echo $someOtherVal; ?>;
    <?php } else { ?>
        var foo = 5;
    <?php } ?>
</script>

We've seen it, we hate it, and is there really anything new to say about it?

Well, today's anonymous submitter found an "interesting" take on the pattern.

<script>
    if(linkfromwhere_srfid=='vff')
      {
    <?php
    $vff = 1;
    ?>
      }
</script>

Here, they have a client-side conditional, and based on that conditional, they attempt to set a variable on the server side. This does not work. This cannot work: the PHP code executes on the server, the client code executes on the client, and you need to be a lot more thoughtful about how they interact than this.
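
For contrast, here is a minimal sketch of one way this interaction can actually work, assuming the value the condition depends on is available to the server, for example as a request parameter (the $srfid parameter name below is a placeholder, not something from the submitted codebase). If the value only exists in the browser, the client instead has to send it back to the server in a request before any PHP variable can depend on it.

<?php
// Server side: PHP runs first, while the page is being generated,
// so the decision has to be made from data the server actually has.
$srfid = $_GET['srfid'] ?? '';
$vff   = ($srfid === 'vff') ? 1 : 0;
?>
<script>
// Client side: the server's decision arrives as plain data, safely encoded.
var vff = <?php echo json_encode($vff); ?>;
if (vff === 1) {
    // client-only behaviour for the 'vff' case goes here
}
</script>

The point is that the server decides while the page is being generated, and the browser only ever sees the result.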

And yet, the developer responsible has spread the broken version all over the code base, pushed the non-working code out to production, and when it doesn't work, just adds bug tickets to the backlog to eventually figure out why: tickets that never get picked up, because there's always something with a higher priority out there.


Planet Debian - Louis-Philippe Véronneau: One last Bookworm for the road — report from the Montreal 2025 BSP

Hello, hello, hello!

This report for the Bug Squashing Party we held in Montreal on March 28-29th is very late ... but better late than never? We're now at our fifth BSP in a row [1], which is both nice and somewhat terrifying.

Have I really been around for five Debian releases already? Geez...

This year, around 13 different people showed up, including some brand new folks! All in all, we ended up working on 77 bugs, 61 of which have since been closed.

This is somewhat skewed by the large number of Lintian bugs I closed by merging and releasing the very many patches submitted by Maytham Alsudany (hello Maytham!), but that was still work :D

For our past few events, we have been renting a space at Ateliers de la transition socio-écologique. This building used to be a nunnery (thus the huge cross on the top floor), but has since been transformed into a multi-faceted project.

A drawing of the building where the BSP was hosted

BSPs are great and this one was no exception. You should try to join an upcoming event or to organise one if you can. It is loads of fun and you will be helping the Debian project release its next stable version sooner!

As always, thanks to Debian for granting us a budget for the food and to rent the venue.

Pictures

Here are a bunch of pictures of the BSP, mixed in with some other pictures I took at this venue during a previous event.

Some of the people present on Friday, in the smaller room we had that day

A picture of a previous event, which includes many of the folks present at the BSP and the larger room we used on Saturday

A sticker on the door of the bathroom with text saying 'All Employees Must Wash Away Sin Before Returning To Work', a tongue-in-cheek reference to the building's previous purpose

A wall with posters for upcoming events

A drawing on one of the single-occupancy rooms in the building, warning people the door can't be opened from the inside (yikes!)

A table at the entrance with many flyers for social and political events


  1. See our previous BSPs in 2017, 2019, 2021 and 2023

365 Tomorrows - No Future For You

Author: Julian Miles, Staff Writer Flickering light is the only illumination in the empty laboratory. A faint humming the only noise. At the centre of a mass of equipment sits an old, metal-framed specimen tank, edges spotted with rust. Inside whirls a multi-coloured cloud, source of both light and sound. This close to it, the […]

The post No Future For You appeared first on 365tomorrows.

xkcd - Air Fact


365 Tomorrows - Drugs Awareness Day

Author: David Barber Teachers make the worst students, thought Mrs Adebeyo. They drifted in, chattering, and filling up tables according to subject. At the front sat four English teachers. One of the women was busy knitting. Mrs Adebeyo was already frowning at the click of needles. At the back was a row of men looking […]

The post Drugs Awareness Day appeared first on 365tomorrows.

Planet Debian - Russ Allbery: Review: Up the Down Staircase

Review: Up the Down Staircase, by Bel Kaufman

Publisher: Vintage Books
Copyright: 1964, 1991, 2019
Printing: 2019
ISBN: 0-525-56566-3
Format: Kindle
Pages: 360

Up the Down Staircase is a novel (in an unconventional format, which I'll describe in a moment) about the experiences of a new teacher in a fictional New York City high school. It was a massive best-seller in the 1960s, including a 1967 movie, but seems to have dropped out of the public discussion. I read it from the library sometime in the late 1980s or early 1990s and have thought about it periodically ever since. It was Bel Kaufman's first novel.

Sylvia Barrett is a new graduate with a master's degree in English, where she specialized in Chaucer. As Up the Down Staircase opens, it is her first day as an English teacher in Calvin Coolidge High School. As she says in a letter to a college friend:

What I really had in mind was to do a little teaching. "And gladly wolde he lerne, and gladly teche" — like Chaucer's Clerke of Oxenford. I had come eager to share all I know and feel; to imbue the young with a love for their language and literature; to instruct and to inspire. What happened in real life (when I had asked why they were taking English, a boy said: "To help us in real life") was something else again, and even if I could describe it, you would think I am exaggerating.

She instead encounters chaos and bureaucracy, broken windows and mindless regulations, a librarian who is so protective of her books that she doesn't let any students touch them, a school guidance counselor who thinks she's Freud, and a principal whose sole interaction with the school is to occasionally float through on a cushion of cliches, dispensing utterly useless wisdom only to vanish again.

I want to take this opportunity to extend a warm welcome to all faculty and staff, and the sincere hope that you have returned from a healthful and fruitful summer vacation with renewed vim and vigor, ready to gird your loins and tackle the many important and vital tasks that lie ahead undaunted. Thank you for your help and cooperation in the past and future.

Maxwell E. Clarke
Principal

In practice, the school is run by James J. McHare, Clarke's administrative assistant, who signs his messages JJ McH, Adm. Asst. and who Sylvia immediately starts calling Admiral Ass. McHare is a micro-managing control freak who spends the book desperately attempting to impose order over school procedures, the teachers, and the students, with very little success. The title of the book comes from one of his detention slips:

Please admit bearer to class—

Detained by me for going Up the Down staircase and subsequent insolence.

JJ McH

The conceit of this book is that, except for the first and last chapters, it consists only of memos, letters, notes, circulars, and other paper detritus, often said to come from Sylvia's wastepaper basket. Sylvia serves as the first-person narrator through her long letters to her college friend, and through shorter but more frequent exchanges via intraschool memo with Beatrice Schachter, another English teacher at the same school, but much of the book lies outside her narration. The reader has to piece together what's happening from the discarded paper of a dysfunctional institution.

Amid the bureaucratic and personal communications, there are frequent chapters with notes from the students, usually from the suggestion box that Sylvia establishes early in the book. These start as chaotic glimpses of often-misspelled wariness or open hostility, but over the course of Up the Down Staircase, some of the students become characters with fragmentary but still visible story arcs. This remains confusing throughout the novel — there are too many students to keep them entirely straight, and several of them use pseudonyms for the suggestion box — but it's the sort of confusion that feels like an intentional authorial choice. It mirrors the difficulty a teacher has in piecing together and remembering the stories of individual students in overstuffed classrooms, even if (like Sylvia and unlike several of her colleagues) the teacher is trying to pay attention.

At the start, Up the Down Staircase reads as mostly-disconnected humor. There is a strong "kids say the darnedest things" vibe, which didn't entirely work for me, but the send-up of chaotic bureaucracy is both more sophisticated and more entertaining. It has the "laugh so that you don't cry" absurdity of a system with insufficient resources, entirely absent management, and colleagues who have let their quirks take over their personalities. Sylvia alternates between incredulity and stubbornness, and I think this book is at its best when it shows the small acts of practical defiance that one uses to carve out space and coherence from mismanaged bureaucracy.

But this book is not just a collection of humorous anecdotes about teaching high school. Sylvia is sincere in her desire to teach, which crystallizes around, but is not limited to, a quixotic attempt to reach one delinquent that everyone else in the school has written off. She slowly finds her footing, she has a few breakthroughs in reaching her students, and the book slowly turns into an earnest portrayal of an attempt to make the system work despite its obvious unfitness for purpose. This part of the book is hard to review. Parts of it worked brilliantly; I could feel myself both adjusting my expectations alongside Sylvia to something less idealistic and also celebrating the rare breakthrough with her. Parts of it were weirdly uncomfortable in ways that I'm not sure I enjoyed. That includes Sylvia's climactic conversation with the boy she's been trying to reach, which was weirdly charged and ambiguous in a way that felt like the author's reach exceeding their grasp.

One thing that didn't help my enjoyment is Sylvia's relationship with Paul Barringer, another of the English teachers and a frustrated novelist and poet. Everyone who works at the school has found their own way to cope with the stress and chaos, and many of the ways that seem humorous turn out to have a deeper logic and even heroism. Paul's, however, is to retreat into indifference and alcohol. He is a believable character who works with Kaufman's themes, but he's also entirely unlikable. I never understood why Sylvia tolerated that creepy asshole, let alone kept having lunch with him. It is clear from the plot of the book that Kaufman at least partially understands Paul's deficiencies, but that did not help me enjoy reading about him.

This is a great example of a book that tried to do something unusual and risky and didn't entirely pull it off. I like books that take a risk, and sometimes Up the Down Staircase is very funny or suddenly insightful in a way that I'm not sure Kaufman could have reached with a more traditional novel. It takes a hard look at what it means to try to make a system work when it's clearly broken and you can't change it, and the way all of the characters arrive at different answers that are much deeper than their initial impressions was subtle and effective. It's the sort of book that sticks in your head, as shown by the fact I bought it on a whim to re-read some 35 years after I first read it. But it's not consistently great. Some parts of it drag, the characters are frustratingly hard to keep track of, and the emotional climax points are odd and unsatisfying, at least to me.

I'm not sure whether to recommend it or not, but it's certainly unusual. I'm glad I read it again, but I probably won't re-read it for another 35 years, at least.

If you are considering getting this book, be aware that it has a lot of drawings and several hand-written letters. The publisher of the edition I read did a reasonably good job formatting this for an ebook, but some of the pages, particularly the hand-written letters, were extremely hard to read on a Kindle. Consider paper, or at least reading on a tablet or computer screen, if you don't want to have to puzzle over low-resolution images.

The 1991 trade paperback had a new introduction by the author, reproduced in the edition I read as an afterword (which is a better choice than an introduction). It is a long and fascinating essay from Kaufman about her experience with the reaction to this book, culminating in a passionate plea for supporting public schools and public school teachers. Kaufman's personal account adds a lot of depth to the story; I highly recommend it.

Content note: Self-harm, plus several scenes that are closely adjacent to student-teacher relationships. Kaufman deals frankly with the problems of mostly-poor high school kids, including sexuality, so be warned that this is not the humorous romp that it might appear on first glance. A couple of the scenes made me uncomfortable; there isn't anything explicit, but the emotional overtones can be pretty disturbing.

Rating: 7 out of 10


Planet Debian Ahmed Siam: My first post and writing plans

This is my first post on this blog, and I think it will be useful to share what I plan to write about over the coming months.

Here are some titles:

  • My Debian experimental internship experience.
  • Using IRC: What, Why and How.
  • How to internationalize CLI tools written in C++ using ICU4C.

If you are interested in such topics, feel free to subscribe to my RSS feed and/or follow me on any of my social media accounts.

Stay tuned!

365 Tomorrows My Earliest Memory

Author: Marshall Bradshaw “You’re going to remember this next part,” said Dr. Adams. The fluorescent lights of exam room 8 hummed in beautiful harmony. I counted off the flashes. 120 per second. That was 7,200 per minute, or 432,000 per hour. The numbers felt pleasantly round to me. I reported the observation to Dr. Adams. […]

The post My Earliest Memory appeared first on 365tomorrows.

David Brin: The AI Dilemma continues onward... despite all our near-term worries

First off, although this posting is not overall political... I will offer a warning to you activists out there.


While I think protest marches are among the least effective kinds of resistance - (especially since MAGAs live for one thing: to drink the tears of every smartypants professional/scientist/civil-servant etc.) -- I still praise you active folks who are fighting however you can for the Great (now endangered) Experiment. Still, may I point out how deeply stupid the organizers of this 50501 Movement are?


Carumba! They scheduled their next protests for April 19, which far right maniacs call Waco Day or Timothy McVeigh Day. A day when you are best advised to lock the doors. 

That's almost as stoopid as the morons who made April 20 (4-20) their day of yippee pot delight... also Hitler's birthday.

Shouldn't we have vetting and even CONFERENCES before we decide such things?

Still, those of you heading out there (is it today already?) bless you for your citizenship and courage.

And now...


There’s always more about AI – and hence a collection of links to…



== The AI dilemmas and myriad-lemmas continue ==


I’ve been collecting so much material on the topic… and more keeps pouring in. Though (alas!) so little of it is enlightening about how to turn this tech revolution into a positive-sum outcome for all.


Still, let’s start with a potpourri…


1. A new study by Caltech and UC Riverside uncovers the hidden toll that AI exacts on human health, from chip manufacturing to data center operation.
 

2. And also this re: my alma mater: Caltech researchers have developed brain–machine interfaces that can interpret data from neural activity even after the signal from an implant has become less clear.   

 

3. Swinging from process to implications… my friend from the UK (via Panama) Calum Chace (author of Surviving AI: The Promise & Peril of Artificial Intelligence) sent me this one from his Conscium Project re: “Principles for Responsible AI Consciousness Research”. While detailed and illuminating, the work is vague about the most important things, like how to tell ‘consciousness’ from a system that feigns it… and whether that even matters.

Moreover, none of the authors cited here refers to how the topic was pre-explored in science fiction. Take the dive into “what is consciousness?” that you’ll find in Peter Watts’s novel “Blindsight.” 

 

…wherein Watts makes the case that a sense of self is not even necessary in order for a being to behave in ways that are actively intelligent, communicative and even ferociously self-interested.  

 

All you need is evolution. And an overall system in which evolution remains (as in nature) zero-sum.  Which – I keep trying to tell folks – is not necessarily fore-ordained.



== And yet more AI related Miscellany ==


4. Augmented reality glasses with face-recognition and access to world databases… now where have we seen this before? How about in great detail in Existence?

5. On the MINDPLEX Podcast with AI pioneers Ben Goertzel and Lisa Rein covering – among many topics – training AGIs to hold each other accountable, pingable IDs (using cryptographic hashes to secure agent identity), AGI rights & much, much more! (Sept 2024). And yeah, I am there, too.


6. An article by Anthropic CEO Dario Amodei makes points similar to Reid Hoffman and Marc Andreessen, that the upsides of AI are potentially spectacular, as also portrayed in a small but influential number of science fiction tales. Alas, his list of potential benefits, while extensive re ways AI could be "Machines of Loving Grace,"* is also long in the tooth and almost hoary-clichéd. We need to recall that in any ecosystem - including the new cyber one - entities without feedback constraints soon evolve into whatever form best proliferates quickly. That is, unless feedback loops take shape.


7. This article in FORTUNE makes a case similar to my own... that AIs will improve best, in accuracy and sagacity and even wisdom, if accountability is applied by AI upon other AIs. Positive feedback can be a dangerous cycle, while some kinds of negative feedback loops can lead to incrementally increased correlation with the real world.


8. Again, my featured WIRED article about this - Give Every AI a soul... or else.

My related Newsweek op-ed (June '22) dealt with 'empathy bots' that feign sapience and personhood.



== AI generated visual lies – we can deal with this! ==


9. A new AI algorithm flags deepfakes with 98% accuracy — better than any other tool out there right now. And it is essential that we keep developing such systems, in order to stand a chance of keeping up in an arms race against those who would foist on us lies, scams and misinformation...


   ... pretty much exactly as I described back in 1997, in this reposted chapter from The Transparent Society - "The End of Photography as Proof."


Two problems. First, scammers will use programs like this one to help perfect their scam algorithms. Second, it would be foolish to rely upon any one such system, or just a few. A diversity of highly competitive tattle-tale lie-denouncing systems is the only thing that can work, as I discuss here.


Oh, and third. It is inherent – (as I show in that chapter of The Transparent Society) – that lies are more easily detected, denounced and incinerated in a general environment of transparency, wherein the greatest number can step away from their screens and algorithms and compare them to actual, physically-witnessed reality.


For more on this, see my related Newsweek op-ed (June '22), which dealt with 'empathy bots' that feign sapience. Plus a YouTube pod where I give an expanded version.


== Generalizing to innovation, in general ==


10. Traditional approaches to innovation emphasize ideas and inventions, often leading to a losing confrontation with the mysterious “Valley of Death.” My colleague Peter Denning and his co-author Todd Lyons upend this paradigm in Navigating a Restless Sea, offering eight mutually reinforcing practices that power skillful navigation toward adoption, mobilizing people to try something new.  


== Some tacked-on tech miscellany ==


11. Sure, it goes back to neolithic "Venus figurines" and Playboy and per-minute phone comfort lines and Eliza - and the movie "Her." And bot factories are hard at work. At my 2017 World of Watson keynote, I predicted persuasive 'empathy bots' would arrive in 2022 (they did). And soon, Kremlin 'honeypot-lure babes' should become ineffective! Because this deep weakness of male humans will have an outlet that's... more than human?


Could that lead to those men calming down, prioritizing the more important aspects of life?


12. And hence, we will soon see... AI girlfriends and companions. And this from Vox: People are falling in love with -- and getting addicted to -- AI voices.


13. Kewl! This tiny 3D-printed Apple IIe is powered by a $2 microcontroller. With a teensy working screen taken from an Apple Watch. Can run your old IIe programs. Size of an old mouse.


14. Paragraphica by Bjørn Karmann is a camera that has no lens, but instead generates a text description of when & where it is, then generates an image via a text-to-image model.  


15. Daisy is an AI cellphone application that wastes scammers’ time so that they don’t have time to target real victims. Daisy has "told frustrated scammers meandering stories of her family, talked at length about her passion for knitting, and provided exasperated callers with false personal information including made-up bank details."



And finally...



== NOT directly AI… but for sure implications! ==


And… only slightly off-topic: If you feel a need for an inspiring tale about a modern hero, intellect and deeply-wise public figure, try Judge David Tatel’s VISION: A Memoir of Blindness and Justice. I’ll vouch that he’s one of the wisest wise-guys I know. "Vision is charming, wise, and completely engaging. This memoir of a judge of the country’s second highest court, who has been without sight for decades, goes down like a cool drink on a hot day." —Scott Turow. https://www.davidtatel.com/


And Maynard Moore - of the Institute for the Study of Religion in the Age of Science - will be holding a pertinent event online in mid January: “Human-Machine Fusion: Our Destiny, or Might We Do Better?”  The IRAS webinar is free, but registration is required.



== a lagniappe (puns intended) ==


In the 1980s this supposedly "AI"-generated sonnet emerged from the following prompt, "Buzz Off, Banana Nose."


Well… real or not, here’s my haiku response. 


In lunar orchards

    A pissed apiary signs:

"Bee gone, Cyrano!"

 

Count the layers, oh cybernetic padawans! I’m not obsolete… yet.



Planet Debian Sven Hoexter: Trixie Upgrade and X11 Clipboard Manager Madness

Due to my own laziness and a few functionality issues my "for work laptop" is still using a 15+ year old setup with X11 and awesome. Since trixie is now starting its freeze, it's time to update that odd machine as well and look at the fallout. Good news: It's mostly my own resistance to change which required some kick in the back to move on.

Clipboard Manager Madness

For the past decade or so I used parcellite which served me well. Now that is no longer available in trixie and I started to look into one of the dead end streets of X11 related tooling, searching for an alternative.

Parcellite

Seems upstream is doing sporadic fixes, but holds GTK2 tight. The Debian package was patched to be GTK3 compatible, but has unfixed ftbfs issues with GCC 14.

clipit

Next I checked for a parcellite fork named clipit, and that's when it started to get funky. It's packaged in Debian, QA maintained, and recently received at least two uploads to keep it working. Installed it and found it's greeting me with a nag screen that I should migrate to diodon. The real clipit tool is still shipped as a binary named clipit.real, so if you know it you can still use it. To achieve the nag screen it depends on zenity and to ease the migration it depends on diodon. Two things I do not really need. Also the package description prominently mentions that you should not use the package.

diodon

The nag screen of clipit made me look at diodon. It claims it was written for the Ubuntu Unity desktop, something where I've no idea how alive and relevant it still is. While there is still something on launchpad, it seems to receive sporadic commits on github. Not sure if it's dead or just feature complete.

Interim Solution: clipit

Settled with clipit for now, but decided to fork the Debian package to remove the nag screen and the dependency on diodon and zenity (package build). My hope is to convert this last X11 setup to wayland within the lifetime of trixie.

I also contacted the last uploader regarding a removal of the nag screen, who then brought in the last maintainer who added the nag screen. While I first thought clipit is somewhat maintained upstream, Andrej quickly pointed out that this is not really the case. Still that leaves us in trixie with a rather odd situation. We ship now for the second stable release a package that recommends to move to a different tool while still shipping the original tool. Plus it's getting patched by some of its users who refuse to migrate to the alternative envisioned by the former maintainer.

VirtualBox and moving to libvirt

I always liked the GUI of VirtualBox, and it really made desktop virtualization easy. But with Linux 6.12, which enables KVM by default, it seems to get even more painful to get it up and running. In the past I just took the latest release from unstable and rebuilt that one on the current stable. Currently the last release in unstable is 7.0.20, while the Linux 6.12 fixes only started to appear in VirtualBox 7.1.4 and later. The good thing is with virt-manager and the whole libvirt ecosystem there is a good enough replacement available, and it works fine with related tooling like vagrant. There are instructions available on how to set it up. I can only add that it makes sense to export VAGRANT_DEFAULT_PROVIDER=libvirt in your .bashrc to make that provider change permanent.

Worse Than Failure Error'd: Hot Dog

Faithful Peter G. took a trip. "So I wanted to top up my bus ticket online. After asking for credit card details, PINs, passwords, a blood sample, and the airspeed velocity of an unladen European swallow, they also sent a notification to my phone which I had to authorise with a fingerprint, and then verify that all the details were correct (because you can never be too careful when paying for a bus ticket). So yes, it's me, but the details definitely are not correct." Which part is wrong, the currency? Any idea what the exchange rate is between NZD and the euro right now?


An anonymous member kvetched "Apparently, I'm a genius, but The New York Times' Spelling Bee is definitely not."


Mickey D. had an ad pop up for a new NAS to market.
Specs: Check
Storage: Check
Superior technical support: "


Michael R. doesn't believe everything he sees on TV, thankfully, because "No wonder the stock market is in turmoil when prices fall by 500% like in the latest Amazon movie G20."


Finally, new friend Sandro shared his tale of woe. "Romance was hard enough when I was young, and I see not much has changed now!"


[Advertisement] Plan Your .NET 9 Migration with Confidence
Your journey to .NET 9 is more than just one decision. Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

365 Tomorrows Subscription Fee

Author: Fawkes Defries ‘Shit!’ Russ collapsed against his chrome tent, cursing as the acid tore through his clothes. Usually he made it back inside before the rain fell, but his payments to Numeral for the metal arms had just defaulted, and without the gravity-suspenders active he was stuck lugging his hands around like a cyborg […]

The post Subscription Fee appeared first on 365tomorrows.


Planet Debian Simon Josefsson: Verified Reproducible Tarballs

Remember the XZ Utils backdoor? One factor that enabled the attack was poor auditing of the release tarballs for differences compared to the Git version controlled source code. This proved to be a useful place to distribute malicious data.

The differences between release tarballs and upstream Git sources are typically vendored and generated files. Lots of them. Auditing all source tarballs in a distribution for similar issues is hard and boring work for humans. Wouldn’t it be better if that human auditing time could be spent auditing the actual source code stored in upstream version control instead? That’s where auditing time would help the most.

Are there better ways to address the concern about differences between version control sources and tarball artifacts? Let’s consider some approaches:

  • Stop publishing (or at least stop building from) source tarballs that differ from version control sources.
  • Create recipes for how to derive the published source tarballs from version control sources. Verify that independently from upstream.

While I like the properties of the first solution, and have made an effort to support that approach, I don’t think normal source tarballs are going away any time soon. I am concerned that it may not even be a desirable complete solution to this problem. We may need tarballs with pre-generated content in them for various reasons that aren’t entirely clear to us today.

So let’s consider the second approach. It could help while waiting for more experience with the first approach, to see if there are any fundamental problems with it.

How do you know that the XZ release tarballs were actually derived from their version control sources? The same for Gzip? Coreutils? Tar? Sed? Bash? GCC? We don’t know this! I am not aware of any automated or collaborative effort to perform this independent confirmation. Nor am I aware of anyone attempting to do this on a regular basis. We would want to be able to do this in the year 2042 too. I think the best way to reach that is to do the verification continuously in a pipeline, fixing bugs as time passes. The current state of the art seems to be that people audit the differences manually and hope to find something. I suspect many package maintainers ignore the problem and take the release source tarballs and trust upstream about this.

We can do better.

I have launched a project to set up a GitLab pipeline that invokes per-release scripts to rebuild that release artifact from git sources. Currently it only contains recipes for projects that I released myself, releases which were done in a controlled way with considerable care to make reproducing the tarballs possible. The project homepage is here:

https://gitlab.com/debdistutils/verify-reproducible-releases

The project is able to reproduce the release tarballs for Libtasn1 v4.20.0, InetUtils v2.6, Libidn2 v2.3.8, Libidn v1.43, and GNU SASL v2.2.2. You can see this in a recent successful pipeline. All of those releases were prepared using Guix, and I’m hoping the Guix time-machine will make it possible to keep re-generating these tarballs for many years to come.
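To make the idea a bit more concrete, here is a rough sketch (mine, not the project's actual recipe format) of what such a per-release check boils down to: check out the tagged source, run the upstream release commands, and compare the checksum of the rebuilt tarball against the published one. The repository URL, tag, build commands and checksum below are made-up placeholders; the real recipes live in the GitLab project linked above.

#!/usr/bin/env python3
# Hypothetical sketch of a per-release reproducibility check.
# All names below (URL, tag, build commands, checksum) are placeholders,
# not taken from the verify-reproducible-releases project.
import hashlib
import subprocess
import sys
import tempfile
from pathlib import Path

GIT_URL = "https://example.org/example-project.git"  # placeholder project
TAG = "v1.2.3"                                       # placeholder release tag
BUILD_CMDS = [["./bootstrap"], ["./configure"], ["make", "dist"]]
EXPECTED_SHA256 = "0" * 64                           # placeholder published checksum

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def main() -> int:
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / "src"
        # Check out exactly the tagged sources the release claims to be built from.
        subprocess.run(["git", "clone", "--branch", TAG, "--depth", "1",
                        GIT_URL, str(src)], check=True)
        # Re-run the upstream release recipe in that checkout.
        for cmd in BUILD_CMDS:
            subprocess.run(cmd, cwd=src, check=True)
        tarball = next(src.glob("*.tar.gz"))
        actual = sha256(tarball)
        if actual != EXPECTED_SHA256:
            print(f"MISMATCH: rebuilt {tarball.name} has sha256 {actual}", file=sys.stderr)
            return 1
        print(f"OK: {tarball.name} matches the published checksum")
        return 0

if __name__ == "__main__":
    sys.exit(main())

In practice the hard part is not this comparison but pinning the build environment (compiler, autotools, gettext, libtool versions and so on), which is exactly what the Guix time-machine mentioned above helps with.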

I spent some time trying to reproduce the current XZ release tarball for version 5.8.1. That would have been a nice example, wouldn’t it? First I had to somehow mimic upstream’s build environment. The XZ release tarball contains GNU Libtool files that are identified with version 2.5.4.1-baa1-dirty. I initially assumed this was due to the maintainer having installed libtool from git locally (after making some modifications) and made the XZ release using it. Later I learned that it may actually be coming from ArchLinux, which ships with this particular libtool version. It seems weird for a distribution to use libtool built from a non-release tag, and furthermore to apply patches to it, but things are what they are. I made some effort to set up an ArchLinux build environment; however, the now-current Gettext version in ArchLinux seems to be more recent than the one that was used to prepare the XZ release. I don’t know enough ArchLinux to set up an environment corresponding to an earlier version of ArchLinux, which would be required to finish this. I gave up; maybe the XZ release wasn’t prepared on ArchLinux after all. Actually XZ became a good example for this writeup anyway: while you would think this should be trivial, the fact is that it isn’t! (There is another aspect here: fingerprinting the versions used to prepare release tarballs allows you to infer what kind of OS maintainers are using to make releases on, which is interesting on its own.)

I made some small attempts to reproduce the tarball for GNU Shepherd version 1.0.4 too, but I still haven’t managed to complete it.

Do you want a supply-chain challenge for the Easter weekend? Pick some well-known software and try to re-create the official release tarballs from the corresponding Git checkout. Is anyone able to reproduce anything these days? Bonus points for wrapping it up as a merge request to my project.

Happy Supply-Chain Security Hacking!

Planet Debian Scarlett Gately Moore: KDE Applications 25.04 Snaps and Kubuntu Plucky Puffin 25.04 Released!

Very busy releasetastic week! The versions being the same is a complete coincidence 😆

https://kde.org/announcements/gear/25.04.0

Which can be downloaded here: https://snapcraft.io/publisher/kde !

In addition to all the regular testing, I am testing our snaps in a non-KDE environment; so far it is not looking good in Xubuntu. We have kernel/glibc crashes on startup for some and on file open for others. I am working on a hopeful fix.

Next week I will have (I hope) my final surgery. If you can spare any change to help bring me over the finish line, I will be forever grateful 🙂

Planet Debian Petter Reinholdtsen: Gearing up OpenSnitch for a 1.6.8 release in Trixie

Sadly, the interactive application firewall OpenSnitch has in practice been unmaintained in Debian for a while. A few days ago I decided to do something about it, and today I am happy with the result. This package monitors network traffic going in and out of a Linux machine, and shows a popup dialog to the logged in desktop user, asking to approve or deny any new connections. It has proved very valuable in discovering programs calling home, giving me more control of how information leaks out of my Linux machine.

So far the new version is only available in Debian experimental, but I plan to upload it to unstable as soon as I know it is working on a few more machines, and make sure the new version makes it into the next stable release of Debian. The package freeze is approaching, and there is not a lot of time left. If you read this blog post, I hope you can be one of the testers.

The new version should be using eBPF on architectures where this is working (amd64 and arm64), and fall back to /proc/ probing where the opensnitch-ebpf-modules package is missing (so far only armhf; an unrelated bug blocks building on riscv64 and s390x). Using eBPF should provide more accurate attribution of the programs responsible for network traffic, especially for short-lived processes, whose information was sometimes already unavailable in /proc/ when opensnitch tried to probe for it. I have limited experience with the new version, having used it myself for a day or so. It is easily backportable to Debian 12 Bookworm without code changes, all it needs is a simple 'debuild' thanks to the optional build dependencies.
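For readers wondering what the /proc/ probing fallback looks like in practice, here is a rough sketch of the general technique (my illustration, not OpenSnitch's actual code): each TCP connection listed in /proc/net/tcp carries a socket inode, and the owning process is found by scanning /proc/<pid>/fd/ for a symlink pointing at that inode. A short-lived process can exit before the scan gets to it, which is exactly the attribution gap that eBPF closes.

#!/usr/bin/env python3
# Rough sketch of /proc-based connection attribution (illustrative only,
# not OpenSnitch code). Needs Linux; seeing other users' processes needs root.
import os
from pathlib import Path

def socket_inodes() -> dict:
    """Map socket inode -> 'local -> remote' (hex-encoded) for current TCP connections."""
    conns = {}
    for line in Path("/proc/net/tcp").read_text().splitlines()[1:]:
        fields = line.split()
        local, remote, inode = fields[1], fields[2], fields[9]
        conns[inode] = f"{local} -> {remote}"
    return conns

def owning_pid(inode: str):
    """Return the pid whose fd table references socket:[inode], or None if it is already gone."""
    target = f"socket:[{inode}]"
    for fd_dir in Path("/proc").glob("[0-9]*/fd"):
        try:
            for fd in fd_dir.iterdir():
                if os.readlink(fd) == target:
                    return fd_dir.parent.name
        except OSError:
            continue  # process exited mid-scan, or permission denied
    return None

if __name__ == "__main__":
    for inode, conn in socket_inodes().items():
        pid = owning_pid(inode)
        print(f"{conn}  inode={inode}  pid={pid or 'gone (short-lived?)'}")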

Due to a misfeature of llc on armhf, there is no eBPF support available there. I have not investigated the details, nor reported any bug yet, but for some reason -march=bpf is an unknown option on this architecture, causing the build in the ebpf_prog subdirectory to fail.

The package is maintained under the umbrella of Debian Go team, and you can meet the current maintainers on the #debian-golang and #opensnitch IRC channels on irc.debian.org.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Cryptogram Age Verification Using Facial Scans

Discord is testing the feature:

“We’re currently running tests in select regions to age-gate access to certain spaces or user settings,” a spokesperson for Discord said in a statement. “The information shared to power the age verification method is only used for the one-time age verification process and is not stored by Discord or our vendor. For Face Scan, the solution our vendor uses operates on-device, which means there is no collection of any biometric information when you scan your face. For ID verification, the scan of your ID is deleted upon verification.”

I look forward to all the videos of people hacking this system using various disguises.

Cryptogram Friday Squid Blogging: Live Colossal Squid Filmed

A live colossal squid was filmed for the first time in the ocean. It’s only a juvenile: a foot long.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Planet Debian Jonathan Dowland: Hledger UI themes

Last year I intended to write an update on my use of hledger, but that was waylaid for various reasons and I need to revisit how (if) I'm using it, so that's put off for longer. I do want to mention one contribution I made upstream: a dark theme for the UI, and some unfinished work on consistent colours.

Consistent terminal colours are an interesting issue: the most common terminal colour modes (8 and 256) use indexing into a palette, but the definition of the colours is ambiguous: the 8-colour palette is formally specified by ANSI as names (red, green, etc.); the 256-colour palette is effectively defined by xterm (a useful chart) but I'm not sure all terminal emulators that support it have chosen the same colour values.

A consequence of indexed-colour is that the end-user may redefine what the colour values are. Whether this is a good thing or a bad thing depends on your point of view. As an end-user, it's attractive to be able to tune the colour scheme; but as a software author, it means you have no real idea what your users are going to see, and matters like ensuring contrast are impossible.

Some terminals support 24-bit "true" colour, in which the colours are specified as an RGB triplet. Using these means the software author can be reasonably sure all users will see the same thing (for a fungible definition of "same"), at the cost of user configurability. However, since it's less well supported, we start having to worry about fallback behaviour.

In the case of hledger-ui, which provides several colour schemes, that's probably OK, because the user configurability is achieved by choosing one of the schemes. (or writing your own, in extremis). However, the dark theme I contributed uses the 8-colour palette, in common with the other themes, and my explorations into using predictable colours are unfinished.
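To make the indexed-versus-true-colour distinction above concrete, here is a tiny illustration (mine, not hledger code) of the three flavours as raw escape sequences. The first two are palette lookups that the user or terminal may redefine; the last embeds the RGB triplet directly, so the author controls the exact colour wherever the terminal supports it.

# Demo of indexed vs. 24-bit terminal colours (illustrative, not hledger code).
ESC = "\x1b"
print(f"{ESC}[31mred via the 8-colour palette (index 1, user-redefinable){ESC}[0m")
print(f"{ESC}[38;5;160mred via the 256-colour palette (xterm index 160){ESC}[0m")
print(f"{ESC}[38;2;215;0;0mred via 24-bit true colour (RGB 215,0,0){ESC}[0m")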

Planet Debian Arturo Borrero González: My experience in the Debian LTS and ELTS projects


Last year, I decided to start participating in the Debian LTS and ELTS projects. It was a great opportunity to engage in something new within the Debian community. I had been following these projects for many years, observing their evolution and how they gained traction both within the ecosystem and across the industry.

I was curious to explore how contributors were working internally — especially how they managed security patching and remediation for older software. I’ve always felt this was a particularly challenging area, and I was fortunate to experience it firsthand.

As of April 2025, the Debian LTS project was primarily focused on providing security maintenance for Debian 11 Bullseye. Meanwhile, the Debian ELTS project was targeting Debian 8 Jessie, Debian 9 Stretch, and Debian 10 Buster.

During my time with the projects, I worked on a variety of packages and CVEs, some of them quite notable.

There are several technical highlights I’d like to share — things I learned or had to apply while participating:

  • CI/CD pipelines: We used CI/CD pipelines on salsa.debian.org all the time to automate tasks such as building, linting, and testing packages. For any package I worked on that lacked CI/CD integration, setting it up became my first step.

  • autopkgtest: There’s a strong emphasis on autopkgtest as the mechanism for running functional tests and ensuring that security patches don’t introduce regressions. I contributed by both extending existing test suites and writing new ones from scratch; a rough sketch of such a test follows after this list.

  • Toolchain complexity for older releases: Working with older Debian versions like Jessie brought some unique challenges. Getting a development environment up and running often meant troubleshooting issues with sbuild chroots, qemu images, and other tools that don’t “just work” like they tend to on Debian stable.

  • Community collaboration: The people involved in LTS and ELTS are extremely helpful and collaborative. Requests for help, code reviews, and general feedback were usually answered quickly.

  • Shared ownership: This collaborative culture also meant that contributors would regularly pick up work left by others or hand off their own tasks when needed. That mutual support made a big difference.

  • Backporting security fixes: This is probably the most intense — and most rewarding — activity. It involves manually adapting patches to work on older codebases when upstream patches don’t apply cleanly. This requires deep code understanding and thorough testing.

  • Upstream collaboration: Reaching out to upstream developers was a key part of my workflow. I often asked if they could provide patches for older versions or at least review my backports. Sometimes they were available, but most of the time, the responsibility remained on us.

  • Diverse tech stack: The work exposed me to a wide range of programming languages and frameworks—Python, Java, C, Perl, and more. Unsurprisingly, some modern languages (like Go) are less prevalent in older releases like Jessie.

  • Security team interaction: I had frequent contact with the core Debian Security Team—the folks responsible for security in Debian stable. This gave me a broader perspective on how Debian handles security holistically.
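To give a flavour of the autopkgtest work mentioned in the list above, here is a minimal sketch of the kind of smoke test one might add. The package and binary names are made up; a real test would be declared in debian/tests/control so the autopkgtest runner executes it against the installed package, with a non-zero exit status flagging a regression.

#!/usr/bin/env python3
# debian/tests/smoke -- hypothetical autopkgtest for an imaginary package.
# Listed in debian/tests/control roughly as:
#   Tests: smoke
#   Depends: @
import subprocess
import sys

def main() -> int:
    # Exercise the installed command-line tool, not the build tree.
    result = subprocess.run(["example-tool", "--version"],  # hypothetical binary
                            capture_output=True, text=True)
    if result.returncode != 0:
        print("example-tool --version exited non-zero", file=sys.stderr)
        return 1
    if "example-tool" not in result.stdout:
        print(f"unexpected output: {result.stdout!r}", file=sys.stderr)
        return 1
    print("smoke test passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())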

In March 2025, I decided to scale back my involvement in the projects due to some changes in my personal life. Still, this experience has been one of the highlights of my career, and I would definitely recommend it to others.

I’m very grateful for the warm welcome I received from the LTS/ELTS community, and I don’t rule out the possibility of rejoining the LTS/ELTS efforts in the future.

The Debian LTS/ELTS projects are currently coordinated by folks at Freexian. Many thanks to Freexian and sponsors for providing this opportunity!

Worse Than Failure CodeSOD: Static State

Today's Anonymous submitter was reviewing some C++ code, and saw this perfectly reasonable looking pattern.

class SomeClass
{
public:
	void setField(int val);
	int getField();
};

Now, we can talk about how overuse of getters and setters is itself an antipattern (especially if they're trivial- you've just made a public variable with extra steps), but it's not wrong and there are certainly good reasons to be cautious with encapsulation. That said, because this is C++, that getField should really be declared int getField() const- appropriate for any method which doesn't cause a mutation to a class instance.

Or should it? Let's look at the implementation.

void SomeClass::setField(int val)
{
	setGetField(true, val);
}

int SomeClass::getField()
{
	return setGetField(false);
}

Wait, what? Why are we passing a boolean to a method called setGet? Why is there a method called setGet? They didn't go and make a method that both sets and gets, and decide which they're doing based on a boolean flag, did they?

int SomeClass::setGetField(bool set, int val)
{
	static int s_val = 0;
	if (set)
	{
		s_val = val;
	}
	return s_val;
}

Oh, good, they didn't just make a function that maybe sets or gets based on a boolean flag. They also made the state within that function a static field. And yes, function level statics are not scoped to an instance, so this is shared across all instances of the class. So it's not encapsulated at all, and we've blundered back into Singletons again, somehow.

Our anonymous submitter had two reactions. Upon seeing this the first time, they wondered: "WTF? This must be some kind of joke. I'm being pranked."

But then they saw the pattern again. And again. After seeing it fifty times, they wondered: "WTF? Who hired these developers? And can that hiring manager be fired? Out of a cannon? Into the sun?"

[Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

365 Tomorrows Watching the Ships

Author: Shannon O’Connor I watch the space ships leave and wonder what it’s like to be able to go that far and dream that big. These days, space travel is available to the elite, but not to those on the bottom like me, who can barely afford to get by. I used to watch the […]

The post Watching the Ships appeared first on 365tomorrows.


Cryptogram CVE Program Almost Unfunded

Mitre’s CVE program—which provides common naming and other informational resources about cybersecurity vulnerabilities—was about to be cancelled, as the US Department of Homeland Security failed to renew the contract. It was funded for eleven more months at the last minute.

This is a big deal. The CVE program is one of those pieces of common infrastructure that everyone benefits from. Losing it will bring us back to a world where there’s no single way to talk about vulnerabilities. It’s kind of crazy to think that the US government might damage its own security in this way—but I suppose no crazier than any of the other ways the US is working against its own interests right now.

Sasha Romanosky, senior policy researcher at the Rand Corporation, branded the end to the CVE program as “tragic,” a sentiment echoed by many cybersecurity and CVE experts reached for comment.

“CVE naming and assignment to software packages and versions are the foundation upon which the software vulnerability ecosystem is based,” Romanosky said. “Without it, we can’t track newly discovered vulnerabilities. We can’t score their severity or predict their exploitation. And we certainly wouldn’t be able to make the best decisions regarding patching them.”

Ben Edwards, principal research scientist at Bitsight, told CSO, “My reaction is sadness and disappointment. This is a valuable resource that should absolutely be funded, and not renewing the contract is a mistake.”

He added “I am hopeful any interruption is brief and that if the contract fails to be renewed, other stakeholders within the ecosystem can pick up where MITRE left off. The federated framework and openness of the system make this possible, but it’ll be a rocky road if operations do need to shift to another entity.”

More similar quotes in the article.

My guess is that we will somehow figure out how to transition this program to continue without the US government. It’s too important to be at risk.

EDITED TO ADD: Another good article.

Worse Than Failure CodeSOD: Conventional Events

Now, I would argue that the event-driven lifecycle of ASP .Net WebForms is a bad way to design web applications. And it's telling that the model is basically dead; it seems my take is at best lukewarm, if not downright cold.

Pete inherited code from Bob, and Bob wrote an ASP .Net WebForm many many ages ago, and it's still the company's main application. Bob may not be with the company, but his presence lingers, both in the code he wrote and the fact that he commented frequently with // bob was here

Bob liked to reinvent wheels. Maybe that's why most methods he wrote were at least 500 lines long. He wrote his own localization engine, which doesn't work terribly well. What code he did write, he copy/pasted multiple times.

He was fond of this pattern:

if (SomeMethodReturningBoolean())
{
    return true;
}
else
{
    return false;
}

Now, in a Web Form, you usually attach your events to parts of the page lifecycle by convention. Name a method Page_Load? It gets called when the load event fires. Page_PreRender? Yep- when the pre-render event fires. SomeField_MouseClick? You get it.

Bob didn't, or rather, Bob didn't like coding by naming convention. I'll be frank: I don't like coding by naming convention either, but it was the paradigm Web Forms favored, it's what the documentation assumed, and it's what every other developer was going to expect to see.

Still, Bob had his own Bob way of doing it.

In every page he'd write code like this:

this.PreRender += this.RequestPagePreRender;

That line manually registers an event handler, which invokes the method RequestPagePreRender. And while I might object to wiring up events by convention- this is still just a convention. It's not done with any thought at all- every page has this line, even if the RequestPagePreRender method is empty.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Sam Varghese: Easter is a pagan festival. Why do Christians in Australia make such a big deal of it?

Australia is often inclined to paint itself as a progressive country, one that has left the conservative era, by and large, behind, and one that no longer accepts the common myths that religious leaders and politicians used in the past to keep the people under their sway.

But that impression is largely a myth. And at no time is the extent to which Australia remains a deeply conservative land more evident than at Easter.

As any encyclopedia will tell the average reader, Easter is a pagan festival that was brought into the Christian calendar in order to increase the number of those in Christian ranks. Even the most cursory glance at scripture will reveal the absurdity of the claims; how can anyone be claimed to be within a tomb for three days and three nights when that period is said to be between Friday and Sunday?

Yes, as folklore has it, Jesus was crucified on Good Friday and then rose from the dead on Easter Sunday. That makes about a day and a half at best; yet, the Bible says Christ was dead for three days and three nights. How does that work out?

Anyone who tries to pour cold water on this myth is likely to be tarred and feathered and ridden on a rail. Easter means a commercial bonanza and anyone who gets in the way of businesspeople making money is likely to be about as popular as a communist in the Vatican.

Easter had its origins as a pagan festival celebrating spring in the Northern Hemisphere, long before the advent of Christianity.

New Unger’s Bible dictionary has this to say: “The word Easter is of Saxon origin, Eastra, the goddess of spring, in whose honour sacrifices were offered about Passover time each year. By the eighth century, Anglo–Saxons had adopted the name to designate the celebration of Christ’s resurrection.”

In 325AD, the first major church council, the Council of Nicaea, decided that Easter would fall on the Sunday following the first full moon after the spring equinox.

The rabbits and eggs that are part and parcel of Easter represent the pagan symbols for new life and the celebration of spring. The church turned a blind eye to the pagan origin of these things as it has done with many things before and after.

But just try telling anyone that this whole tamasha has no religious meaning at all. You will become an outcast even in your own family.

365 Tomorrows Not Your Mother’s AI

Author: Majoki “A planetary AI, a quantum simbot, and an ice queen walk into a bar…” “Ice queen?” “One of those augs with the latest mods boosted to the max. You know the type. They act all cold and calculating, believing any display of emotion will make them look less advanced.” “Okay. I’ve run into […]

The post Not Your Mother’s AI appeared first on 365tomorrows.

Krebs on Security Funding Expires for Key Cyber Vulnerability Database

A critical resource that cybersecurity professionals worldwide rely on to identify, mitigate and fix security vulnerabilities in software and hardware is in danger of breaking down. The federally funded, non-profit research and development organization MITRE warned today that its contract to maintain the Common Vulnerabilities and Exposures (CVE) program — which is traditionally funded each year by the Department of Homeland Security — expires on April 16.

A letter from MITRE vice president Yosry Barsoum, warning that the funding for the CVE program will expire on April 16, 2025.

Tens of thousands of security flaws in software are found and reported every year, and these vulnerabilities are eventually assigned their own unique CVE tracking number (e.g. CVE-2024-43573, which is a Microsoft Windows bug that Redmond patched last year).

There are hundreds of organizations — known as CVE Numbering Authorities (CNAs) — that are authorized by MITRE to bestow these CVE numbers on newly reported flaws. Many of these CNAs are country and government-specific, or tied to individual software vendors or vulnerability disclosure platforms (a.k.a. bug bounty programs).

Put simply, MITRE is a critical, widely-used resource for centralizing and standardizing information on software vulnerabilities. That means the pipeline of information it supplies is plugged into an array of cybersecurity tools and services that help organizations identify and patch security holes — ideally before malware or malcontents can wriggle through them.

“What the CVE lists really provide is a standardized way to describe the severity of that defect, and a centralized repository listing which versions of which products are defective and need to be updated,” said Matt Tait, chief operating officer of Corellium, a cybersecurity firm that sells phone-virtualization software for finding security flaws.

In a letter sent today to the CVE board, MITRE Vice President Yosry Barsoum warned that on April 16, 2025, “the current contracting pathway for MITRE to develop, operate and modernize CVE and several other related programs will expire.”

“If a break in service were to occur, we anticipate multiple impacts to CVE, including deterioration of national vulnerability databases and advisories, tool vendors, incident response operations, and all manner of critical infrastructure,” Barsoum wrote.

MITRE told KrebsOnSecurity the CVE website listing vulnerabilities will remain up after the funding expires, but that new CVEs won’t be added after April 16.

A representation of how a vulnerability becomes a CVE, and how that information is consumed. Image: James Berthoty, Latio Tech, via LinkedIn.

DHS officials did not immediately respond to a request for comment. The program is funded through DHS’s Cybersecurity & Infrastructure Security Agency (CISA), which is currently facing deep budget and staffing cuts by the Trump administration. The CVE contract available at USAspending.gov says the project was awarded approximately $40 million last year.

Former CISA Director Jen Easterly said the CVE program is a bit like the Dewey Decimal System, but for cybersecurity.

“It’s the global catalog that helps everyone—security teams, software vendors, researchers, governments—organize and talk about vulnerabilities using the same reference system,” Easterly said in a post on LinkedIn. “Without it, everyone is using a different catalog or no catalog at all, no one knows if they’re talking about the same problem, defenders waste precious time figuring out what’s wrong, and worst of all, threat actors take advantage of the confusion.”

John Hammond, principal security researcher at the managed security firm Huntress, told Reuters he swore out loud when he heard the news that CVE’s funding was in jeopardy, and that losing the CVE program would be like losing “the language and lingo we used to address problems in cybersecurity.”

“I really can’t help but think this is just going to hurt,” said Hammond, who posted a Youtube video to vent about the situation and alert others.

Several people close to the matter told KrebsOnSecurity this is not the first time the CVE program’s budget has been left in funding limbo until the last minute. Barsoum’s letter, which was apparently leaked, sounded a hopeful note, saying the government is making “considerable efforts to continue MITRE’s role in support of the program.”

Tait said that without the CVE program, risk managers inside companies would need to continuously monitor many other places for information about new vulnerabilities that may jeopardize the security of their IT networks. Meaning, it may become more common that software updates get mis-prioritized, with companies having hackable software deployed for longer than they otherwise would, he said.

“Hopefully they will resolve this, but otherwise the list will rapidly fall out of date and stop being useful,” he said.

Update, April 16, 11:00 a.m. ET: The CVE board today announced the creation of a non-profit entity called The CVE Foundation that will continue the program’s work under a new, unspecified funding mechanism and organizational structure.

“Since its inception, the CVE Program has operated as a U.S. government-funded initiative, with oversight and management provided under contract,” the press release reads. “While this structure has supported the program’s growth, it has also raised longstanding concerns among members of the CVE Board about the sustainability and neutrality of a globally relied-upon resource being tied to a single government sponsor.”

The organization’s website, thecvefoundation.org, is less than a day old and currently hosts no content other than the press release heralding its creation. The announcement said the foundation would release more information about its structure and transition planning in the coming days.

Update, April 16, 4:26 p.m. ET: MITRE issued a statement today saying it “identified incremental funding to keep the programs operational. We appreciate the overwhelming support for these programs that have been expressed by the global cyber community, industry and government over the last 24 hours. The government continues to make considerable efforts to support MITRE’s role in the program and MITRE remains committed to CVE and CWE as global resources.”

Planet Debian Otto Kekäläinen: Going Full-Time as an Open Source Developer


After careful consideration, I’ve decided to embark on a new chapter in my professional journey. I’ve left my position at AWS to dedicate at least the next six months to developing open source software and strengthening digital ecosystems. My focus will be on contributing to Linux distributions (primarily Debian) and other critical infrastructure components that our modern society depends on, but which may not receive adequate attention or resources.

The Evolution of Open Source

Open source won. Over the 25+ years I’ve been involved in the open source movement, I’ve witnessed its remarkable evolution. Today, Linux powers billions of devices — from tiny embedded systems and Android smartphones to massive cloud datacenters and even space stations. Examine any modern large-scale digital system, and you’ll discover it’s built upon thousands of open source projects.

I feel the priority for the open source movement should no longer be increasing adoption, but rather solving how to best maintain the vast ecosystem of software. This requires building robust institutions and processes to secure proper resourcing and ensure the collaborative development process remains efficient and leads to ever-increasing quality of software.

What is Special About Debian?

Debian, established in 1993 by Ian Murdock, stands as one of these institutions that has demonstrated exceptional resilience. There is no single authority, but instead a complex web of various stakeholders, each with their own goals and sources of funding. Every idea needs to be championed at length to a wide audience and implemented through a process of organic evolution.

Thanks to this approach, Debian has been consistently delivering production-quality, universally useful software for over three decades. Having been a Debian Developer for more than ten years, I’m well-positioned to contribute meaningfully to this community.

If your organization relies on Debian or its derivatives such as Ubuntu, and you’re interested in funding cyber infrastructure maintenance by sponsoring Debian work, please don’t hesitate to reach out. This could include package maintenance and version currency, improving automated upgrade testing, general quality assurance and supply chain security enhancements.

Best way to reach me is by e-mail otto at debian.org. You can also book a 15-minute chat with me for a quick introduction.

Grow or Die

My four-year tenure as a Software Development Manager at Amazon Web Services was very interesting. I’m grateful for my time at AWS and proud of my team’s accomplishments, particularly for creating an open source contribution process that got Amazon from zero to the largest external contributor to the MariaDB open source database.

During this time, I got to experience and witness a plethora of interesting things. I will surely share some of my key learnings in future blog posts. Unfortunately, the rate of progress in this mammoth 1.5 million employee organization was slowing down, and I didn’t feel I was learning much new in recent years. This realization, combined with the opportunity cost of not spending enough time on new cutting-edge technology, motivated me to take this leap.

Being a full-time open source developer may not be financially the most lucrative idea, but I think it is an excellent way to force myself to truly assess what is important on a global scale and what areas I want to contribute to.

Working fully on open source presents a fascinating duality: you’re not bound by any external resource or schedule limitations, and the progress you make is directly proportional to how much energy you decide to invest. Yet, you also depend on collaboration with people you might never meet and who are not financially incentivized to collaborate. This will undoubtedly expose me to all kinds of challenges. But what could be better for fostering holistic personal growth? I know that deep down in my DNA, I am not made to stay cozy or to do easy things. I need momentum.

OK, let’s get going 🙂


Planet Debian Jonathan Dowland: submitted

Today I submitted my PhD thesis, 8 years since I started (give or take). Next step, Viva.

Normal service may resume shortly…

Planet Debian Dirk Eddelbuettel: AsioHeaders 1.30.2-1 on CRAN: New Upstream

Another new (stable) release of the AsioHeaders package arrived at CRAN just now. Asio provides a cross-platform C++ library for network and low-level I/O programming. It is also included in Boost – but requires linking when used as part of Boost. This standalone version of Asio is a header-only C++ library which can be used without linking (just like our BH package with parts of Boost).

The update last week, kindly prepared by Charlie Gao, had overlooked one other nag discovered by CRAN. This new release, based on the current stable upstream release, addresses that.

The short NEWS entry for AsioHeaders follows.

Changes in version 1.30.2-0 (2025-04-15)

  • Upgraded to Asio 1.30.2 (Dirk in #13 fixing #12)

  • Added two new badges to README.md

Thanks to my CRANberries, there is a diffstat report for this release. Comments and suggestions about AsioHeaders are welcome via the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Planet Debian Russell Coker: What Desktop PCs Need

It seems to me that we haven’t had much change in the overall design of desktop PCs since floppy drives were removed, and modern PCs still have bays the size of 5.25″ floppy drives despite having nothing modern that can fit in such spaces other than DVD drives (which aren’t really modern) and carriers for 4*2.5″ drives, both of which most people don’t use. We had the PC System Design Guide [1], which was last updated in 2001 and should have been updated more recently to address some of these issues; the thing that most people will find familiar in that standard is the colours for audio ports. Microsoft developed the Legacy Free PC [2] concept which was a good one. There’s a lot of things that could be added to the list of legacy stuff to avoid: TPM 1.2, 5.25″ drive bays, inefficient PSUs, hardware that doesn’t sleep when idle or which prevents the CPU from sleeping, VGA and DVI ports, ethernet slower than 2.5Gbit, and video that doesn’t include HDMI 2.1 or DisplayPort 2.1 for 8K support. There are recently released high-end PCs on sale right now with 1Gbit ethernet as standard, and hardly any PCs support resolutions above 4K properly.

Here are some of the things that I think should be in a modern PC System Design Guide.

Power Supply

The power supply is a core part of the computer and its central location dictates the layout of the rest of the PC. GaN PSUs are more power efficient and therefore require less cooling. A 400W USB power supply is about 1/4 the size of a standard PC PSU and doesn’t have a cooling fan. A new PC standard should include less space for the PSU except for systems with multiple CPUs or that are designed for multiple GPUs.

A Dell T630 server has an option of a 1600W PSU that is 20*8.5*4cm = 680cc. The typical dimensions of an ATX PSU are 15*8.6*14cm = 1806cc. The SFX (small form factor variant of ATX) PSU is 12.5*6.3*10cm = 787cc. There is a reason for the ATX and SFX PSUs having a much worse ratio of power to size and that is the airflow. Server class systems are designed for good airflow and can efficiently cool the PSU with less space and they are also designed for uses where people are less concerned about fan noise. But the 680cc used for a 1600W Dell server PSU that predates GaN technology could be used for a modern GaN PSU that supplies the ~600W needed for a modern PC while being quiet. There are several different smaller size PSUs for name-brand PCs (where compatibility with other systems isn’t needed) that have been around for ~20 years but there hasn’t been a standard so all white-box PC systems have had really large PSUs.

PCs need USB-C PD ports that can charge a laptop etc. There are phones that can draw 80W for fast charging and it’s not unreasonable to expect a PC to be able to charge a phone at its maximum speed.

GPUs should have USB-C alternate mode output and support full USB functionality over the cable as well as PD that can power the monitor. Having a monitor with a separate PSU, a HDMI or DP cable to the PC, and a USB cable between PC and monitor is an annoyance. There should be one cable between PC and monitor, and then keyboard, mouse, etc should connect to the monitor.

All devices that are connected to a PC should use USB-C for power connection. That includes monitors that are using HDMI or DisplayPort for video, desktop switches, home Wifi APs, printers, and speakers (even when using line-in for the audio signal). The European Commission Common Charger Directive is really good but it only covers portable devices, keyboards, and mice.

Motherboard Features

Latest versions of Wifi and Bluetooth on the motherboard (this is becoming a standard feature).

On motherboard video that supports 8K resolution. An option of a PCIe GPU is a good thing to have but it would be nice if the motherboard had enough video capabilities to satisfy most users. There are several options for video that have a higher resolution than 4K and making things just work at 8K means that there will be less e-waste in future.

ECC RAM should be a standard feature on all motherboards; having a single bit error cause a system crash is an MS-DOS thing, and we need to move past that.

There should be built in hardware for monitoring the system status that is better than BIOS beeps on boot. Lenovo laptops have a feature for having the BIOS play a tune on a serious error with an Android app to decode the meaning of the tune, we could have a standard for this. For desktop PCs there should be a standard for LCD status displays similar to the ones on servers, this would be cheap if everyone did it.

Case Features

The way the Framework Laptop can be expanded with modules is really good [3]. There should be something similar for PC cases. While you can buy USB devices for these things, they are messy and risk getting knocked out of their sockets when moving cables around. While the Framework laptop expansion cards are much more expensive than other devices with similar functions that are aimed at a mass market, if there was a standard for PCs then the devices to fit them would become cheap.

The PC System Design Guide specifies colors for ports (which is good) but not the feel of them. While some ports like Ethernet allow someone to feel which way the connector should go, it isn’t possible to easily feel which way an HDMI or DisplayPort connector should go. It would be good if there was a standard that required plastic spikes on one side or some other way of feeling which way a connector should go.

GPU Placement

In modern systems it’s fairly common to have a high heatsink on the CPU with a fan to blow air in at the front and out the back of the PC. The GPU (which often dissipates twice as much heat as the CPU) has fans blowing air in sideways and not out the back. This gives some sort of compromise between poor cooling and excessive noise. What we need is to have air blown directly through a GPU heatsink and out of the case. One option for a tower case that needs minimal changes is to have the PCIe slot nearest the bottom of the case used for the GPU and have a grille in the bottom to allow air to go out; the case could have feet to keep it a few cm above the floor or desk. Another possibility is to have a PCIe slot parallel to the rear surface of the case (at right angles to the other PCIe slots).

A common case with desktop PCs is to have the GPU use more than half the total power of the PC. The placement of the GPU shouldn’t be an afterthought, it should be central to the design.

Is a PCIe card even a good way of installing a GPU? Could we have a standard GPU socket on the motherboard next to the CPU socket and use the same type of heatsink and fan for GPU and CPU?

External Cooling

There are a range of aftermarket cooling devices for laptops that push cool air in the bottom or suck it out the side. We need to have similar options for desktop PCs. I think it would be ideal to have standard attachments for airflow on the front and back of tower PCs. The larger a fan is, the slower it can spin to give the same airflow, and therefore the less noise it will produce. Instead of just relying on 10cm fans at the front and back of a PC to push air in and suck it out, you could have a conical rubber duct connected to a 30cm diameter fan. That would allow quieter fans to do most of the work in pushing air through the PC and also allow the hot air to be directed somewhere suitable. When doing computer work in summer it’s not great to have a PC sending 300+W of waste heat into the room you are in. If it could be directed out a window that would be good.

Noise

For restricting the noise of PCs we have industrial relations legislation that seems to basically require that workers not be exposed to noise louder than a blender, so if a PC is quieter than that then it’s OK. For name brand PCs there are specs about how much noise is produced, but there are usually caveats like “under typical load” or “with a typical feature set” that excuse them from liability if the noise is louder than expected. It doesn’t seem possible for someone to own a PC, determine that the noise from it is acceptable, and then buy another that is close to the same.

We need regulations about this, and the EU seems the best jurisdiction for it as they cover the purchase of a lot of computer equipment that is also sold without change in other countries. The regulations also need to cover updates; for example, I have a Dell T630 which is unreasonably loud, and Dell support doesn’t have much incentive to be particularly helpful about it. BIOS updates routinely tweak things like fan speeds without the developers having an incentive to keep the system as quiet as it was when it was sold.

What Else?

Please comment about other things you think should be standard PC features.

Cryptogram Troy Hunt Gets Phished

In case you need proof that anyone, even someone who does cybersecurity for a living, can fall for a phishing attack, Troy Hunt has a long, iterative story on his webpage about how he got phished. Worth reading.

EDITED TO ADD (4/14): Commentary from Adam Shostack and Cory Doctorow.

Planet DebianRussell Coker: Storage Trends 2025

It’s been almost 15 months since I blogged about Storage Trends 2024 [1]. There hasn’t been much change in this time (in Australia at least – I’m not tracking prices in other countries). The change was so small I had to check how the Australian dollar has performed against other currencies to see if currency movements had countered changes to storage prices. There has been little overall change when compared to the Chinese Yuan, and the Australian dollar is only about 11% worse against the US dollar than it was a year ago. Generally there’s a trend of computer parts decreasing in price by significantly more than 11% per annum.

Small Storage

The cheapest storage device from MSY now is a Patriot P210 128G SATA SSD for $19, cheaper than the $24 last year and the same price as the year before. So over the last 2 years there has been no change to the cheapest storage device on sale. It would almost never make sense to buy that, as a 256G SATA SSD (also Patriot P210) is $25 and has twice the lifetime (120TBW vs 60TBW). There are also 256G NVMe devices for $29 and $30 which would be better options if the system has an NVMe socket built in.

The cheapest 500G devices are $42.50 for a 512G SATA SSD and $45 for a 500G NVMe. Last year the prices were $33 for SATA and $36 for NVMe in that size so there’s been a significant increase in price there. The difference is enough that if someone was on a tight budget they might reasonably decide to use smaller storage than they might have used last year!

2TB hard drives are still $89, the same price as last year! Last year a 2TB SATA SSD was $118 and a 2TB NVMe was $145; now a 2TB SATA SSD is $157 and a 2TB NVMe is $127. So NVMe has become cheaper than SATA in that segment, but overall prices are higher than last year. Again for business use 2TB seems a sensible minimum for most systems if you are paying MSY rates (or similar rates from Amazon etc).

Medium Storage

Last year 4TB HDDs were $135, now they are $148. Last year the cheapest 4TB SSD was $299, now the cheapest is a $309 NVMe. While the prices have all gone up the price difference between hard drives and SSD has decreased in that size range. So for a small server (a lot of home servers and small business servers) 4TB of RAID-1 storage is all that’s needed and for that SSDs are the best option. The price difference between $296 for 4TB of RAID-1 HDDs and $618 for RAID-1 NVMe is small enough to be justified by the benefits of speed and being quiet for most small server uses.

In 2023 an 8TB hard drive cost $179 and an 8TB SSD cost $739. Last year an 8TB hard drive cost $239 and an 8TB SATA SSD cost $899. Now an 8TB HDD costs $229 and MSY doesn’t sell 8TB SSDs, but for comparison Amazon has a Samsung 8TB SATA SSD for $919. So for storing 8TB+ there are benefits of hard drives, as SSDs are difficult to get in that size range and more expensive than they were before. It seems that 8TB SSDs aren’t used by enough people to have a large market in the home and small office space, so those of us who want the larger storage sizes will have to get second hand enterprise gear. It will probably be another few years before 8TB enterprise SSDs start appearing on the second hand market.

Serious Storage

Last year I wrote about the affordability of U.2 devices. I regret not buying some then as there are fewer on sale now and prices are higher.

Hard drives still aren’t a good choice for most users, because most users don’t have more than 4TB of data.

For large quantities of data hard drives are still a good option; a 22TB disk costs $899. For companies this is a good option for many situations. For home users there is the additional problem of determining whether a drive uses Shingled Magnetic Recording (SMR), which has some serious performance issues for some uses, and it’s very difficult to determine which drives use it.

Conclusion

For corporate purchases the options for serious storage are probably decent. But for small companies and home users things definitely don’t seem to have improved as much as we expect from the computer industry; I had expected 8TB SSDs to go for $450 by now and SSDs of less than 500G to not even be sold new any more.

The prices on 8TB SSDs have gone up more in the last 2 years than the ASX 200 (the index of the 200 biggest companies in the Australian stock market). I would never recommend using SSDs as an investment, but in retrospect 8TB SSDs could have been a good one.

$20 seems to be about the minimum cost that SSDs approach, while hard drives have a higher minimum price of a bit under $100 because they are larger, heavier, and more fragile. It seems that the market is likely to move to most SSDs being close to $20; if they can make 2TB SSDs cheaply enough to sell for about that price then that would cover the majority of the market.

I’ve created a table of the prices; I should have done this before, but I initially didn’t plan an ongoing series of posts on this topic.

         | Jun 2020 | Apr 2021 | Apr 2023 | Jan 2024 | Apr 2025
128G SSD | $49      |          | $19      | $24      | $19
500G SSD | $97      | $73      | $32      | $33      | $42.50
2TB HDD  | $95      | $72      | $75      | $89      | $89
2TB SSD  | $335     | $245     | $149     |          |
4TB HDD  |          |          | $115     | $135     | $148
4TB SSD  |          | $895     | $349     | $299     | $309
8TB HDD  |          |          | $179     | $239     | $229
8TB SSD  |          | $949     | $739     | $899     | $919
10TB HDD | $549     | $395     |          |          |

365 TomorrowsLike a Shadow in the Tall Grass

Author: Hillary Lyon “Your rifles are fully charged,” the safari guide said as he walked out to the four-wheeled transport. A group of three hunters followed behind. He opened the door on the driver’s side and got in. “Remember,” he continued as the hunters climbed in the back, “your prey will not be a two-dimensional […]

The post Like a Shadow in the Tall Grass appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Message Oriented Database

Mark was debugging some database querying code, and got a bit confused about what it was actually doing. Specifically, it generated a query block like this:

$statement="declare @status int
        declare @msg varchar(30)
        exec @status=sp_doSomething 'arg1', ...
        select @msg=convert(varchar(10),@status)
        print @msg
        ";

$result = sybase_query ($statement, $this->connection);

Run a stored procedure, capture its return value in a variable, stringify that variable and print it. The select/print must be for debugging, right? Leftover debugging code. Why else would you do something like that?

if (sybase_get_last_message()!=='0') {
    ...
}

Oh no. sybase_get_last_message gets the last string printed out by a print statement. This is a pretty bonkers way to get the results of a function or procedure call back, especially when any results (like a return value) would already be in the $result return value.

Now that said, reading through those functions, it's a little unclear if you can actually get the return value of a stored procedure this way. Without testing it myself (and no, I'm not doing that), we're in a world where this might actually be the best way to do this.

So I'm not 100% sure where the WTF lies. In the developer? In the API designers? Sybase being TRWTF is always a pretty reliable bet. I suppose there's a reason why all those functions are listed as "REMOVED IN PHP 7.0.0", which was rolled out through 2015. So at least those functions have been dead for a decade.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Krebs on SecurityTrump Revenge Tour Targets Cyber Leaders, Elections

President Trump last week revoked security clearances for Chris Krebs, the former director of the Cybersecurity and Infrastructure Security Agency (CISA) who was fired by Trump after declaring the 2020 election the most secure in U.S. history. The White House memo, which also suspended clearances for other security professionals at Krebs’s employer SentinelOne, comes as CISA is facing huge funding and staffing cuts.

Chris Krebs. Image: Getty Images.

The extraordinary April 9 memo directs the attorney general to investigate Chris Krebs (no relation), calling him “a significant bad-faith actor who weaponized and abused his government authority.”

The memo said the inquiry will include “a comprehensive evaluation of all of CISA’s activities over the last 6 years and will identify any instances where Krebs’ or CISA’s conduct appears to be contrary to the administration’s commitment to free speech and ending federal censorship, including whether Krebs’ conduct was contrary to suitability standards for federal employees or involved the unauthorized dissemination of classified information.”

CISA was created in 2018 during Trump’s first term, with Krebs installed as its first director. In 2020, CISA launched Rumor Control, a website that sought to rebut disinformation swirling around the 2020 election.

That effort ran directly counter to Trump’s claims that he lost the election because it was somehow hacked and stolen. The Trump campaign and its supporters filed at least 62 lawsuits contesting the election, vote counting, and vote certification in nine states, and nearly all of those cases were dismissed or dropped for lack of evidence or standing.

When the Justice Department began prosecuting people who violently attacked the U.S. Capitol on January 6, 2021, President Trump and Republican leaders shifted the narrative, claiming that Trump lost the election because the previous administration had censored conservative voices on social media.

Incredibly, the president’s memo seeking to ostracize Krebs stands reality on its head, accusing Krebs of promoting the censorship of election information, “including known risks associated with certain voting practices.” Trump also alleged that Krebs “falsely and baselessly denied that the 2020 election was rigged and stolen, including by inappropriately and categorically dismissing widespread election malfeasance and serious vulnerabilities with voting machines” [emphasis added].

Krebs did not respond to a request for comment. SentinelOne issued a statement saying it would cooperate in any review of security clearances held by its personnel, which is currently fewer than 10 employees.

Krebs’s former agency is now facing steep budget and staff reductions. The Record reports that CISA is looking to remove some 1,300 people by cutting about half its full-time staff and another 40% of its contractors.

“The agency’s National Risk Management Center, which serves as a hub analyzing risks to cyber and critical infrastructure, is expected to see significant cuts, said two sources familiar with the plans,” The Record’s Suzanne Smalley wrote. “Some of the office’s systematic risk responsibilities will potentially be moved to the agency’s Cybersecurity Division, according to one of the sources.”

CNN reports the Trump administration is also advancing plans to strip civil service protections from 80% of the remaining CISA employees, potentially allowing them to be fired for political reasons.

The Electronic Frontier Foundation (EFF) urged professionals in the cybersecurity community to defend Krebs and SentinelOne, noting that other security companies and professionals could be the next victims of Trump’s efforts to politicize cybersecurity.

“The White House must not be given free reign to turn cybersecurity professionals into political scapegoats,” the EFF wrote. “It is critical that the cybersecurity community now join together to denounce this chilling attack on free speech and rally behind Krebs and SentinelOne rather than cowering because they fear they will be next.”

However, Reuters said it found little sign of industry support for Krebs or SentinelOne, and that many security professionals are concerned about potentially being targeted if they speak out.

“Reuters contacted 33 of the largest U.S. cybersecurity companies, including tech companies and professional services firms with large cybersecurity practices, and three industry groups, for comment on Trump’s action against SentinelOne,” wrote Raphael Satter and A.J. Vicens. “Only one offered comment on Trump’s action. The rest declined, did not respond or did not answer questions.”

CYBERCOM-PLICATIONS

On April 3, President Trump fired Gen. Timothy Haugh, the head of the National Security Agency (NSA) and the U.S. Cyber Command, as well as Haugh’s deputy, Wendy Noble. The president did so immediately after meeting in the Oval Office with far-right conspiracy theorist Laura Loomer, who reportedly urged their dismissal. Speaking to reporters on Air Force One after news of the firings broke, Trump questioned Haugh’s loyalty.

Gen. Timothy Haugh. Image: C-SPAN.

Virginia Senator Mark Warner, the top Democrat on the Senate Intelligence Committee, called it inexplicable that the administration would remove the senior leaders of NSA-CYBERCOM without cause or warning, and risk disrupting critical ongoing intelligence operations.

“It is astonishing, too, that President Trump would fire the nonpartisan, experienced leader of the National Security Agency while still failing to hold any member of his team accountable for leaking classified information on a commercial messaging app – even as he apparently takes staffing direction on national security from a discredited conspiracy theorist in the Oval Office,” Warner said in a statement.

On Feb. 28, The Record’s Martin Matishak cited three sources saying Defense Secretary Pete Hegseth ordered U.S. Cyber Command to stand down from all planning against Russia, including offensive digital actions. The following day, The Guardian reported that analysts at CISA were verbally informed that they were not to follow or report on Russian threats, even though this had previously been a main focus for the agency.

A follow-up story from The Washington Post cited officials saying Cyber Command had received an order to halt active operations against Russia, but that the pause was intended to last only as long as negotiations with Russia continue.

The Department of Defense responded on Twitter/X that Hegseth had “neither canceled nor delayed any cyber operations directed against malicious Russian targets and there has been no stand-down order whatsoever from that priority.”

But on March 19, Reuters reported several U.S. national security agencies have halted work on a coordinated effort to counter Russian sabotage, disinformation and cyberattacks.

“Regular meetings between the National Security Council and European national security officials have gone unscheduled, and the NSC has also stopped formally coordinating efforts across U.S. agencies, including with the FBI, the Department of Homeland Security and the State Department,” Reuters reported, citing current and former officials.

TARIFFS VS TYPHOONS

President Trump’s institution of 125% tariffs on goods from China has seen Beijing strike back with 84 percent tariffs on U.S. imports. Now, some security experts are warning that the trade war could spill over into a cyber conflict, given China’s successful efforts to burrow into America’s critical infrastructure networks.

Over the past year, a number of Chinese government-backed digital intrusions have come into focus, including a sprawling espionage campaign involving the compromise of at least nine U.S. telecommunications providers. Dubbed “Salt Typhoon” by Microsoft, these telecom intrusions were pervasive enough that CISA and the FBI in December 2024 warned Americans against communicating sensitive information over phone networks, urging people instead to use encrypted messaging apps (like Signal).

The other broad ranging China-backed campaign is known as “Volt Typhoon,” which CISA described as “state-sponsored cyber actors seeking to pre-position themselves on IT networks for disruptive or destructive cyberattacks against U.S. critical infrastructure in the event of a major crisis or conflict with the United States.”

Responsibility for determining the root causes of the Salt Typhoon security debacle fell to the Cyber Safety Review Board (CSRB), a nonpartisan government entity established in February 2022 with a mandate to investigate the security failures behind major cybersecurity events. But on his first full day back in the White House, President Trump dismissed all 15 CSRB advisory committee members — likely because those advisers included Chris Krebs.

Last week, Sen. Ron Wyden (D-Ore.) placed a hold on Trump’s nominee to lead CISA, saying the hold would continue unless the agency published a report on the telecom industry hacks, as promised.

“CISA’s multi-year cover up of the phone companies’ negligent cybersecurity has real consequences,” Wyden said in a statement. “Congress and the American people have a right to read this report.”

The Wall Street Journal reported last week Chinese officials acknowledged in a secret December meeting that Beijing was behind the widespread telecom industry compromises.

“The Chinese official’s remarks at the December meeting were indirect and somewhat ambiguous, but most of the American delegation in the room interpreted it as a tacit admission and a warning to the U.S. about Taiwan,” The Journal’s Dustin Volz wrote, citing a former U.S. official familiar with the meeting.

Meanwhile, China continues to take advantage of the mass firings of federal workers. On April 9, the National Counterintelligence and Security Center warned (PDF) that Chinese intelligence entities are pursuing an online effort to recruit recently laid-off U.S. employees.

“Foreign intelligence entities, particularly those in China, are targeting current and former U.S. government (USG) employees for recruitment by posing as consulting firms, corporate headhunters, think tanks, and other entities on social and professional networking sites,” the alert warns. “Their deceptive online job offers, and other virtual approaches, have become more sophisticated in targeting unwitting individuals with USG backgrounds seeking new employment.”

Image: Dni.gov

ELECTION THREATS

As Reuters notes, the FBI last month ended an effort to counter interference in U.S. elections by foreign adversaries including Russia, and put on leave staff working on the issue at the Department of Homeland Security.

Meanwhile, the U.S. Senate is now considering a House-passed bill dubbed the “Safeguard American Voter Eligibility (SAVE) Act,” which would order states to obtain proof of citizenship, such as a passport or a birth certificate, in person from those seeking to register to vote.

Critics say the SAVE Act could disenfranchise millions of voters and discourage eligible voters from registering to vote. What’s more, documented cases of voter fraud are few and far between, as is voting by non-citizens. Even the conservative Heritage Foundation acknowledges as much: An interactive “election fraud map” published by Heritage lists just 1,576 convictions or findings of voter fraud between 1982 and the present day.

Nevertheless, the GOP-led House passed the SAVE Act with the help of four Democrats. Its passage in the Senate will require support from at least seven Democrats, Newsweek writes.

In February, CISA cut roughly 130 employees, including its election security advisors. The agency also was forced to freeze all election security activities pending an internal review. The review was reportedly completed in March, but the Trump administration has said the findings would not be made public, and there is no indication of whether any cybersecurity support has been restored.

Many state leaders have voiced anxiety over the administration’s cuts to CISA programs that provide assistance and threat intelligence to election security efforts. Iowa Secretary of State Paul Pate last week told the PBS show Iowa Press he would not want to see those programs dissolve.

“If those (systems) were to go away, it would be pretty serious,” Pate said. “We do count on a lot those cyber protections.”

Pennsylvania’s Secretary of the Commonwealth Al Schmidt recently warned the CISA election security cuts would make elections less secure, and said no state on its own can replace federal election cybersecurity resources.

The Pennsylvania Capital-Star reports that several local election offices received bomb threats around the time polls closed on Nov. 5, and that in the week before the election a fake video showing mail-in ballots cast for Trump and Sen. Dave McCormick (R-Pa.) being destroyed and thrown away was linked to a Russian disinformation campaign.

“CISA was able to quickly identify not only that it was fraudulent, but also the source of it, so that we could share with our counties and we could share with the public so confidence in the election wasn’t undermined,” Schmidt said.

According to CNN, the administration’s actions have deeply alarmed state officials, who warn the next round of national elections will be seriously imperiled by the cuts. A bipartisan association representing 46 secretaries of state, and several individual top state election officials, have pressed the White House about how critical functions of protecting election security will perform going forward. However, CNN reports they have yet to receive clear answers.

Nevada and 18 other states are suing Trump over an executive order he issued on March 25 that asserts the executive branch has broad authority over state election procedures.

“None of the president’s powers allow him to change the rules of elections,” Nevada Secretary of State Cisco Aguilar wrote in an April 11 op-ed. “That is an intentional feature of our Constitution, which the Framers built in to ensure election integrity. Despite that, Trump is seeking to upend the voter registration process; impose arbitrary deadlines on vote counting; allow an unelected and unaccountable billionaire to invade state voter rolls; and withhold congressionally approved funding for election security.”

The order instructs the U.S. Election Assistance Commission to abruptly amend the voluntary federal guidelines for voting machines without going through the processes mandated by federal law. And it calls for allowing the administrator of the so-called Department of Government Efficiency (DOGE), along with DHS, to review state voter registration lists and other records to identify non-citizens.

The Atlantic’s Paul Rosenzweig notes that the chief executive of the country — whose unilateral authority the Founding Fathers most feared — has literally no role in the federal election system.

“Trump’s executive order on elections ignores that design entirely,” Rosenzweig wrote. “He is asserting an executive-branch role in governing the mechanics of a federal election that has never before been claimed by a president. The legal theory undergirding this assertion — that the president’s authority to enforce federal law enables him to control state election activity — is as capacious as it is frightening.”

,

Cryptogram DIRNSA Fired

In “Secrets and Lies” (2000), I wrote:

It is poor civic hygiene to install technologies that could someday facilitate a police state.

It’s something a bunch of us were saying at the time, in reference to the NSA’s vast surveillance capabilities.

I have been thinking of that quote a lot as I read news stories of President Trump firing the Director of the National Security Agency, General Timothy Haugh.

A couple of weeks ago, I wrote:

We don’t know what pressure the Trump administration is using to make intelligence services fall into line, but it isn’t crazy to worry that the NSA might again start monitoring domestic communications.

The NSA already spies on Americans in a variety of ways. But that’s always been a sideline to its main mission: spying on the rest of the world. Once Trump replaces Haugh with a loyalist, the NSA’s vast surveillance apparatus can be refocused domestically.

Giving that agency all those powers in the 1990s, in the 2000s after the terrorist attacks of 9/11, and in the 2010s was always a mistake. I fear that we are about to learn how big a mistake it was.

Here’s PGP creator Phil Zimmerman in 1996, spelling it out even more clearly:

The Clinton Administration seems to be attempting to deploy and entrench a communications infrastructure that would deny the citizenry the ability to protect its privacy. This is unsettling because in a democracy, it is possible for bad people to occasionally get elected—sometimes very bad people. Normally, a well-functioning democracy has ways to remove these people from power. But the wrong technology infrastructure could allow such a future government to watch every move anyone makes to oppose it. It could very well be the last government we ever elect.

When making public policy decisions about new technologies for the government, I think one should ask oneself which technologies would best strengthen the hand of a police state. Then, do not allow the government to deploy those technologies. This is simply a matter of good civic hygiene.

Cryptogram Reimagining Democracy

Imagine that all of us—all of society—have landed on some alien planet and need to form a government: clean slate. We do not have any legacy systems from the United States or any other country. We do not have any special or unique interests to perturb our thinking. How would we govern ourselves? It is unlikely that we would use the systems we have today. Modern representative democracy was the best form of government that eighteenth-century technology could invent. The twenty-first century is very different: scientifically, technically, and philosophically. For example, eighteenth-century democracy was designed under the assumption that travel and communications were both hard.

Indeed, the very idea of representative government was a hack to get around technological limitations. Voting is easier now. Does it still make sense for all of us living in the same place to organize every few years and choose one of us to go to a single big room far away and make laws in our name? Representative districts are organized around geography because that was the only way that made sense two hundred-plus years ago. But we do not need to do it that way anymore. We could organize representation by age: one representative for the thirty-year-olds, another for the forty-year-olds, and so on. We could organize representation randomly: by birthday, perhaps. We can organize in any way we want. American citizens currently elect people to federal posts for terms ranging from two to six years. Would ten years be better for some posts? Would ten days be better for others? There are lots of possibilities. Maybe we can make more use of direct democracy by way of plebiscites. Certainly we do not want all of us, individually, to vote on every amendment to every bill, but what is the optimal balance between votes made in our name and ballot initiatives that we all vote on?

For the past three years, I have organized a series of annual two-day workshops to discuss these and other such questions.1 For each event, I brought together fifty people from around the world: political scientists, economists, law professors, experts in artificial intelligence, activists, government types, historians, science-fiction writers, and more. We did not come up with any answers to our questions—and I would have been surprised if we had—but several themes emerged from the event. Misinformation and propaganda was a theme, of course, and the inability to engage in rational policy discussions when we cannot agree on facts. The deleterious effects of optimizing a political system for economic outcomes was another theme. Given the ability to start over, would anyone design a system of government for the near-term financial interest of the wealthiest few? Another theme was capitalism and how it is or is not intertwined with democracy. While the modern market economy made a lot of sense in the industrial age, it is starting to fray in the information age. What comes after capitalism, and how will it affect the way we govern ourselves?

Many participants examined the effects of technology, especially artificial intelligence (AI). We looked at whether—and when—we might be comfortable ceding power to an AI system. Sometimes deciding is easy. I am happy for an AI system to figure out the optimal timing of traffic lights to ensure the smoothest flow of cars through my city. When will we be able to say the same thing about the setting of interest rates? Or taxation? How would we feel about an AI device in our pocket that voted in our name, thousands of times per day, based on preferences that it inferred from our actions? Or how would we feel if an AI system could determine optimal policy solutions that balanced every voter’s preferences: Would it still make sense to have a legislature and representatives? Possibly we should vote directly for ideas and goals instead, and then leave the details to the computers.

These conversations became more pointed in the second and third years of our workshop, after generative AI exploded onto the internet. Large language models are poised to write laws, enforce both laws and regulations, act as lawyers and judges, and plan political strategy. How this capacity will compare to human expertise and capability is still unclear, but the technology is changing quickly and dramatically. We will not have AI legislators anytime soon, but just as today we accept that all political speeches are professionally written by speechwriters, will we accept that future political speeches will all be written by AI devices? Will legislators accept AI-written legislation, especially when that legislation includes a level of detail that human-based legislation generally does not? And if so, how will that change affect the balance of power between the legislature and the administrative state? Most interestingly, what happens when the AI tools we use to both write and enforce laws start to suggest policy options that are beyond human understanding? Will we accept them, because they work? Or will we reject a system of governance where humans are only nominally in charge?

Scale was another theme of the workshops. The size of modern governments reflects the technology at the time of their founding. European countries and the early American states are a particular size because that was a governable size in the eighteenth and nineteenth centuries. Larger governments—those of the United States as a whole and of the European Union—reflect a world where travel and communications are easier. Today, though, the problems we have are either local, at the scale of cities and towns, or global. Do we really have need for a political unit the size of France or Virginia? Or is it a mixture of scales that we really need, one that moves effectively between the local and the global?

As to other forms of democracy, we discussed one from history and another made possible by today’s technology. Sortition is a system of choosing political officials randomly. We use it today when we pick juries, but both the ancient Greeks and some cities in Renaissance Italy used it to select major political officials. Today, several countries—largely in Europe—are using the process to decide policy on complex issues. We might randomly choose a few hundred people, representative of the population, to spend a few weeks being briefed by experts, debating the issues, and then deciding on environmental regulations, or a budget, or pretty much anything.

“Liquid democracy” is a way of doing away with elections altogether. The idea is that everyone has a vote and can assign it to anyone they choose. A representative collects the proxies assigned to him or her and can either vote directly on the issues or assign all the proxies to someone else. Perhaps proxies could be divided: this person for economic matters, another for health matters, a third for national defense, and so on. In the purer forms of this system, people might transfer their votes to someone else at any time. There would be no more election days: vote counts might change every day.

And then, there is the question of participation and, more generally, whose interests are taken into account. Early democracies were really not democracies at all; they limited participation by gender, race, and land ownership. These days, to achieve a more comprehensive electorate we could lower the voting age. But, of course, even children too young to vote have rights, and in some cases so do other species. Should future generations be given a “voice,” whatever that means? What about nonhumans, or whole ecosystems? Should everyone have the same volume and type of voice? Right now, in the United States, the very wealthy have much more influence than others do. Should we encode that superiority explicitly? Perhaps younger people should have a more powerful vote than everyone else. Or maybe older people should.

In the workshops, those questions led to others about the limits of democracy. All democracies have boundaries limiting what the majority can decide. We are not allowed to vote Common Knowledge out of existence, for example, but can generally regulate speech to some degree. We cannot vote, in an election, to jail someone, but we can craft laws that make a particular action illegal. We all have the right to certain things that cannot be taken away from us. In the community of our future, what should be our rights as individuals? What should be the rights of society, superseding those of individuals?

Personally, I was most interested, at each of the three workshops, in how political systems fail. As a security technologist, I study how complex systems are subverted—hacked, in my parlance—for the benefit of a few at the expense of the many. Think of tax loopholes, or tricks to avoid government regulation. These hacks are common today, and AI tools will make them easier to find—and even to design—in the future. I would want any government system to be resistant to trickery. Or, to put it another way: I want the interests of each individual to align with the interests of the group at every level. We have never had a system of government with this property, but—in a time of existential risks such as climate change—it is important that we develop one.

Would this new system of government even be called “democracy”? I truly do not know.

Such speculation is not practical, of course, but still is valuable. Our workshops did not produce final answers and were not intended to do so. Our discourse was filled with suggestions about how to patch our political system where it is fraying. People regularly debate changes to the US Electoral College, or the process of determining voting districts, or the setting of term limits. But those are incremental changes. It is difficult to find people who are thinking more radically: looking beyond the horizon—not at what is possible today but at what may be possible eventually. Thinking incrementally is critically important, but it is also myopic. It represents a hill-climbing strategy of continuous but quite limited improvements. We also need to think about discontinuous changes that we cannot easily get to from here; otherwise, we may be forever stuck at local maxima. And while true innovation in politics is a lot harder than innovation in technology, especially without a violent revolution forcing changes on us, it is something that we as a species are going to have to get good at, one way or another.

Our workshop will reconvene for a fourth meeting in December 2025.

Note

  1. The First International Workshop on Reimagining Democracy (IWORD) was held December 7—8, 2022. The Second IWORD was held December 12—13, 2023. Both took place at the Harvard Kennedy School. The sponsors were the Ford Foundation, the Knight Foundation, and the Ash and Belfer Centers of the Kennedy School. See Schneier, “Recreating Democracy” and Schneier, “Second Interdisciplinary Workshop.”

This essay was originally published in Common Knowledge.

Worse Than FailureA Single Mortgage

We talked about singletons a bit last week. That reminded John of a story from the long-ago dark ages when we didn't have always-accessible mobile Internet access.

At the time, John worked for a bank. The bank, as all banks do, wanted to sell mortgages. This often meant sending an agent out to meet with customers face to face, and those agents needed to show the customer what their future would look like with that mortgage: payment calculations, and pretty little graphs about equity and interest.

Today, this would be a simple website, but again, reliable Internet access wasn't a thing. So they built a client side application. They tested the heck out of it, and it worked well. Sales agents were happy. Customers were happy. The bank itself was happy.

Time passed, as it has a way of doing, and the agents started clamoring for a mobile web version, that they could use on their phones. Now, the first thought was, "Wire it up to the backend!" but the backend they had was a mainframe, and there was a dearth of mainframe developers. And while the mainframe was the source of truth, and the one place where mortgages actually lived, building a mortgage calculator that could do pretty visualizations was far easier- and they already had one.

The client app was in .NET, and it was easy enough to wrap the mortgage calculation objects up in a web service. A quick round of testing of the service proved that it worked just as well as the old client app, and everyone was happy - for a while.

Sometimes, agents would run a calculation and get absolutely absurd results. Developers putting exactly the same values into their test environment wouldn't see the bad output. Testing the errors in production didn't help either- it usually worked just fine. There was a Heisenbug, but how could a simple math calculation that had already been tested and used for years have a Heisenbug?

Well, the calculation ran by simulation- it simply iteratively applied payments and interest to generate the entire history of the loan. And as it turns out, because the client application which started this whole thing only ever needed one instance of the calculator, someone had made it a singleton. And in their web environment, this singleton wasn't scoped to a single request, it was a true global object, which meant when simultaneous requests were getting processed, they'd step on each other and throw off the iteration. And testing didn't find it right away, because none of their tests were simulating the effect of multiple simultaneous users.

The fix was simple- stop being a singleton, and ensure every request got its own instance. But it's also a good example of misapplication of patterns- there was no need in the client app to enforce uniqueness via the singleton pattern. A calculator that holds state probably shouldn't be a singleton in the first place.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsFollow That

Author: Julian Miles, Staff Writer Slow night on the back side of the club quarter. Shouldn’t have taken the bet, but two bottles of wine and Ronny being a tit decided otherwise. So here I am, looking to beat his takings from the main drag, watching the only possible passenger in the last hour climb […]

The post Follow That appeared first on 365tomorrows.

Cryptogram China Sort of Admits to Being Behind Volt Typhoon

The Wall Street Journal has the story:

Chinese officials acknowledged in a secret December meeting that Beijing was behind a widespread series of alarming cyberattacks on U.S. infrastructure, according to people familiar with the matter, underscoring how hostilities between the two superpowers are continuing to escalate.

The Chinese delegation linked years of intrusions into computer networks at U.S. ports, water utilities, airports and other targets, to increasing U.S. policy support for Taiwan, the people, who declined to be named, said.

The admission wasn’t explicit:

The Chinese official’s remarks at the December meeting were indirect and somewhat ambiguous, but most of the American delegation in the room interpreted it as a tacit admission and a warning to the U.S. about Taiwan, a former U.S. official familiar with the meeting said.

No surprise.

,

Cory DoctorowNimby and the D-Hoppers CONCLUSION

Ben Templesmith's art for the comic adaptation of 'Nimby and the D-Hoppers', depicting a figure in powered armor flying through a slate-gray sky filled with abstract equations.

This week on my podcast, I conclude my reading of my 2003 Asimov’s Science Fiction Magazine story, Nimby and the D-Hoppers” (here’s the first half). The story has been widely reprinted (it was first published online in The Infinite Matrix in 2008), and was translated (by Elisabeth Vonarburg) into French for Solaris Magazine, as well as into Chinese, Russian, Hebrew, and Italian. The story was adapted for my IDW comic book series Cory Doctorow’s Futuristic Tales of the Here and Now by Ben Templesmith. I read this into my podcast 20 years ago, but I found myself wanting to revisit it.

Don’t get me wrong — I like unspoiled wilderness. I like my sky clear and blue and my city free of the thunder of cars and jackhammers. I’m no technocrat. But goddamit, who wouldn’t want a fully automatic, laser-guided, armor-piercing, self-replenishing personal sidearm?

Nice turn of phrase, huh? I finally memorized it one night, from one of the hoppers, as he stood in my bedroom, pointing his hand-cannon at another hopper, enumerating its many charms: “This is a laser-guided blah blah blah. Throw down your arms and lace your fingers behind your head, blah blah blah.” I’d heard the same dialog nearly every day that month, whenever the dimension-hoppers catapaulted into my home, shot it up, smashed my window, dived into the street, and chased one another through my poor little shtetl, wreaking havoc, maiming bystanders, and then gateing out to another poor dimension to carry on there.

Assholes.

It was all I could do to keep my house well-fed on sand to replace the windows. Much more hopper invasion and I was going to have to extrude its legs and babayaga to the beach. Why the hell was it always my house, anyway?


MP3

Planet DebianKeith Packard: sanitizer-fun

Fun with -fsanitize=undefined and Picolibc

Both GCC and Clang support the -fsanitize=undefined flag which instruments the generated code to detect places where the program wanders into parts of the C language specification which are either undefined or implementation defined. Many of these are also common programming errors. It would be great if there were sanitizers for other easily detected bugs, but for now, at least the undefined sanitizer does catch several useful problems.

Supporting the sanitizer

The sanitizer can be built to either trap on any error or call handlers. In both modes, the same problems are identified, but when trap mode is enabled, the compiler inserts a trap instruction and doesn't expect the program to continue running. When handlers are in use, each identified issue is tagged with a bunch of useful data and then a specific sanitizer handling function is called.

The specific functions are not all that well documented, nor are the parameters they receive. Maybe this is because both compilers provide an implementation of all of the functions they use and don't really expect external implementations to exist? However, to make these useful in an embedded environment, picolibc needs to provide a complete set of handlers that support all versions of both gcc and clang, as the compiler-provided versions depend upon specific C (and C++) libraries.

Of course, programs can be built in trap-on-error mode, but that makes it much more difficult to figure out what went wrong.
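
To see the difference, here's a tiny demonstration program (not from picolibc) that provokes a signed overflow. The compile commands in the comment use flags that current gcc and clang accept for the two modes; handler mode additionally needs a runtime that provides the handlers, such as libubsan or picolibc's implementation.

    /* demo.c -- provokes signed integer overflow, which -fsanitize=undefined
     * reports at run time.
     *
     *   cc -fsanitize=undefined demo.c
     *       handler mode: prints a diagnostic via the sanitizer runtime
     *   cc -fsanitize=undefined -fsanitize-undefined-trap-on-error demo.c
     *       trap mode: executes a trap instruction instead
     */
    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        volatile int x = INT_MAX;   /* volatile keeps the overflow at run time */
        printf("%d\n", x + 1);      /* signed overflow: undefined behavior */
        return 0;
    }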

Fixing Sanitizer Issues

Once the sanitizer handlers were implemented, picolibc could be built with them enabled and all of the picolibc tests run to uncover issues within the library.

As with the static analyzer adventure from last year, the vast bulk of sanitizer complaints came from invoking undefined or implementation-defined behavior in harmless ways:

  • Computing pointers past &array[size+1]. I found no cases where the resulting pointers were actually used, but the mere computation is still undefined behavior. These were fixed by adjusting the code to avoid computing pointers like this. The result was clearer code, which is good.

  • Signed arithmetic overflow in PRNG code. There are several linear congruential PRNGs in the library which used signed integer arithmetic. The rand48 generator carefully used unsigned short values. Of course, in C, the arithmetic performed on them is done with signed ints if int is wider than short. C specifies signed overflow as undefined, but both gcc and clang generate the expected code anyways. The fixes here were simple: just switch the computations to unsigned arithmetic, adjusting types and inserting casts as required (there's a small sketch of this pattern just after this list).

  • Passing pointers to the middle of a data structure. For example, free takes a pointer to the start of an allocation. The management structure appears just before that in memory; computing the address of which appears to be undefined behavior to the compiler. The only fix I could do here was to disable the sanitizer in functions doing these computations -- the sanitizer was mis-detecting correct code and it doesn't provide a way to skip checks on a per-operator basis.

  • Null pointer plus or minus zero. C says that any arithmetic with the NULL pointer is undefined, even when the value being added or subtracted is zero. The fix here was to create a macro, enabled only when the sanitizer is enabled, which checks for this case and skips the arithmetic.

  • Discarded computations which overflow. A couple of places computed a value, then checked if that would have overflowed and discard the result. Even though the program doesn't depend upon the computation, its mere presence is undefined behavior. These were fixed by moving the computation into an else clause in the overflow check. This inserts an extra branch instruction, which is annoying.

  • Signed integer overflow in math code. There's a common pattern in various functions that want to compare against 1.0. Instead of using the floating point equality operator, they do the computation using the two 32-bit halves with ((hi - 0x3ff00000) | lo) == 0. It's efficient, but because most of these functions store the 'hi' piece in a signed integer (to make checking the sign bit fast), the result is undefined when hi is a large negative value. These were fixed by inserting casts to unsigned types as the results were always tested for equality.
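
To make the PRNG fix concrete, here's a small sketch of the pattern (the constants are from the classic rand48 generator, but this is illustrative code, not picolibc's implementation): the signed multiply can overflow, which is undefined, while the unsigned version wraps in a well-defined way and is converted back afterwards.

    #include <stdint.h>

    /* Undefined behavior if the multiplication overflows a signed 64-bit value. */
    static int64_t
    lcg_step_signed(int64_t state)
    {
        return state * 0x5deece66dLL + 0xbLL;
    }

    /* Well defined: unsigned arithmetic wraps modulo 2^64, and the result is
     * masked to 48 bits before converting back to the signed type. */
    static int64_t
    lcg_step_unsigned(int64_t state)
    {
        uint64_t s = (uint64_t) state;
        s = s * 0x5deece66dULL + 0xbULL;
        return (int64_t) (s & 0xffffffffffffULL);
    }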

Signed integer shifts

This is one area where the C language spec is just wrong.

For left shift, before C99, it worked on signed integers as a bit-wise operator, equivalent to the operator on unsigned integers. After that, left shift of negative integers became undefined. Fortunately, it's straightforward (if tedious) to work around this issue by just casting the operand to unsigned, performing the shift and casting it back to the original type. Picolibc now has an internal macro, lsl, which does this:

    #define lsl(__x,__s) ((sizeof(__x) == sizeof(char)) ?                   \
                          (__typeof(__x)) ((unsigned char) (__x) << (__s)) :  \
                          (sizeof(__x) == sizeof(short)) ?                  \
                          (__typeof(__x)) ((unsigned short) (__x) << (__s)) : \
                          (sizeof(__x) == sizeof(int)) ?                    \
                          (__typeof(__x)) ((unsigned int) (__x) << (__s)) :   \
                          (sizeof(__x) == sizeof(long)) ?                   \
                          (__typeof(__x)) ((unsigned long) (__x) << (__s)) :  \
                          (sizeof(__x) == sizeof(long long)) ?              \
                          (__typeof(__x)) ((unsigned long long) (__x) << (__s)) : \
                          __undefined_shift_size(__x, __s))

Right shift is significantly more complicated to implement. What we want is an arithmetic shift with the sign bit being replicated as the value is shifted rightwards. C defines no such operator. Instead, right shift of negative integers is implementation defined. Fortunately, both gcc and clang define the >> operator on signed integers as arithmetic shift. Also fortunately, C hasn't made this undefined, so the program itself doesn't end up undefined.

The trouble with arithmetic right shift is that it is not equivalent to right shift of unsigned values. Here's what Per Vognsen came up with using standard C operators:

    int
    __asr_int(int x, int s) {
        return x < 0 ? ~(~x >> s) : x >> s;
    }

When the value is negative, we invert all of the bits (making it positive), shift right, then flip all of the bits back. Both GCC and Clang seem to compile this to a single asr instruction. This function is replicated for each of the five standard integer types and then the set of them wrapped in another sizeof-selecting macro:

    #define asr(__x,__s) ((sizeof(__x) == sizeof(char)) ?           \
                          (__typeof(__x))__asr_char(__x, __s) :       \
                          (sizeof(__x) == sizeof(short)) ?          \
                          (__typeof(__x))__asr_short(__x, __s) :      \
                          (sizeof(__x) == sizeof(int)) ?            \
                          (__typeof(__x))__asr_int(__x, __s) :        \
                          (sizeof(__x) == sizeof(long)) ?           \
                          (__typeof(__x))__asr_long(__x, __s) :       \
                          (sizeof(__x) == sizeof(long long)) ?      \
                          (__typeof(__x))__asr_long_long(__x, __s):   \
                          __undefined_shift_size(__x, __s))

The lsl and asr macros use sizeof instead of the type-generic mechanism to remain compatible with compilers that lack type-generic support.

Once these macros were written, they needed to be applied where required. To preserve the benefits of detecting programming errors, they were only applied where required, not blindly across the whole codebase.

There are a couple of common patterns in the math code using shift operators. One is when computing the exponent value for subnormal numbers.

for (ix = -1022, i = hx << 11; i > 0; i <<= 1)
    ix -= 1;

This code computes the exponent by shifting the significand left by 11 bits (the width of the exponent field) and then incrementally shifting it one bit at a time until the sign flips, which indicates that the most-significant bit is set. Use of the pre-C99 definition of the left shift operator is intentional here; so both shifts are replaced with our lsl operator.

In the implementation of pow, the final exponent is computed as the sum of the two exponents, both of which are in the allowed range. The resulting sum is then tested to see if it is zero or negative to see if the final value is sub-normal:

hx += n << 20;
if (hx >> 20 <= 0)
    /* do sub-normal things */

In this case, the exponent adjustment, n, is a signed value and so that shift is replaced with the lsl macro. The test value needs to compute the correct sign bit, so we replace this with the asr macro.

Because the right shift operation is not undefined, we only use our fancy macro above when the undefined behavior sanitizer is enabled. On the other hand, the lsl macro should have zero cost and covers undefined behavior, so it is always used.

Actual Bugs Found!

The goal of this little adventure was both to make using the undefined behavior sanitizer with picolibc possible as well as to use the sanitizer to identify bugs in the library code. I fully expected that most of the effort would be spent masking harmless undefined behavior instances, but was hopeful that the effort would also uncover real bugs in the code. I was not disappointed. Through this work, I found (and fixed) eight bugs in the code:

  1. setlocale/newlocale didn't check for NULL locale names

  2. qsort was using uintptr_t to swap data around. On MSP430 in 'large' mode, that's a 20-bit type inside a 32-bit representation.

  3. random() was returning values in int range rather than long.

  4. m68k assembly for memcpy was broken for sizes > 64kB.

  5. freopen returned NULL, even on success

  6. The optimized version of memrchr was always performing unaligned accesses.

  7. String to float conversion had a table missing four values. This caused an array access overflow which resulted in imprecise values in some cases.

  8. vfwscanf mis-parsed floating point values by assuming that wchar_t was unsigned.

Sanitizer Wishes

While it's great to have a way to detect places in your C code which evoke undefined and implementation defined behaviors, it seems like this tooling could easily be extended to detect other common programming mistakes, even where the code is well defined according to the language spec. An obvious example is in unsigned arithmetic. How many bugs come from this seemingly innocuous line of code?

    p = malloc(sizeof(*p) * c);

Because sizeof returns an unsigned value, the resulting computation never results in undefined behavior, even when the multiplication wraps around, so even with the undefined behavior sanitizer enabled, this bug will not be caught. Clang seems to have an unsigned integer overflow sanitizer which should do this, but I couldn't find anything like this in gcc.
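
A common defensive idiom (shown purely as an illustration; it is not something picolibc prescribes) is to check the multiplication before allocating, or to use calloc, which performs an equivalent check internally:

#include <stdint.h>
#include <stdlib.h>

/* Allocate an array of 'count' elements of 'size' bytes each, returning
 * NULL instead of a too-small buffer when count * size would wrap around.
 * calloc(count, size) performs an equivalent check internally. */
static void *
checked_array_alloc(size_t count, size_t size)
{
    if (size != 0 && count > SIZE_MAX / size)
        return NULL;    /* the multiplication would overflow */
    return malloc(count * size);
}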

Summary

The undefined behavior sanitizers present in clang and gcc both provide useful diagnostics which uncover some common programming errors. In most cases, replacing undefined behavior with defined behavior is straightforward, although the lack of an arithmetic right shift operator in standard C is irksome. I recommend anyone using C to give it a try.

Planet Debian - Michael Prokop: OpenSSH penalty behavior in Debian/trixie #newintrixie

This topic came up at a customer of mine in September 2024, when working on Debian/trixie support. Since then I wanted to blog about it to make people aware of this new OpenSSH feature and behavior. I finally found some spare minutes at Debian’s BSP in Vienna, so here we are. :)

Some of our Q/A jobs failed to run against Debian/trixie; in the debug logs we found:

debug1: kex_exchange_identification: banner line 0: Not allowed at this time

This Not allowed at this time message pointed to a new OpenSSH feature: with version 9.8p1, OpenSSH introduced options to penalize undesirable behavior, see the OpenSSH Release Notes and also the sshd source code.

FTR, on the SSH server side, you’ll see messages like this:

Apr 13 08:57:11 grml sshd-session[2135]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55792 ssh2 [preauth]
Apr 13 08:57:11 grml sshd-session[2135]: Disconnecting authenticating user root 10.100.15.42 port 55792: Too many authentication failures [preauth]
Apr 13 08:57:12 grml sshd-session[2137]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55800 ssh2 [preauth]
Apr 13 08:57:12 grml sshd-session[2137]: Disconnecting authenticating user root 10.100.15.42 port 55800: Too many authentication failures [preauth]
Apr 13 08:57:13 grml sshd-session[2139]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55804 ssh2 [preauth]
Apr 13 08:57:13 grml sshd-session[2139]: Disconnecting authenticating user root 10.100.15.42 port 55804: Too many authentication failures [preauth]
Apr 13 08:57:13 grml sshd-session[2141]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55810 ssh2 [preauth]
Apr 13 08:57:13 grml sshd-session[2141]: Disconnecting authenticating user root 10.100.15.42 port 55810: Too many authentication failures [preauth]
Apr 13 08:57:13 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55818 on [10.100.15.230]:22 penalty: failed authentication
Apr 13 08:57:14 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55824 on [10.100.15.230]:22 penalty: failed authentication
Apr 13 08:57:14 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55838 on [10.100.15.230]:22 penalty: failed authentication
Apr 13 08:57:14 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55854 on [10.100.15.230]:22 penalty: failed authentication

This feature certainly is useful and has its use cases. But if you, for example, run automated checks to ensure that specific logins aren’t working, be careful: you might hit the penalty feature, lock yourself out, and then consecutive checks no longer behave as expected. Your login checks might fail, but only because the penalty behavior kicked in; the login you’re verifying might still work underneath, you just can no longer observe it. Furthermore, legitimate traffic from systems which accept connections from many users or sit behind shared IP addresses, like NAT and proxies, could be denied.

To disable this new behavior, you can set PerSourcePenalties no in your sshd_config, but there are also further configuration options available; see the PerSourcePenalties and PerSourcePenaltyExemptList settings in sshd_config(5) for details.
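
As a minimal example (a sketch only; the addresses are placeholders, adjust them to your environment), such a configuration could look like this:

# Excerpt for /etc/ssh/sshd_config (or a drop-in under /etc/ssh/sshd_config.d/)
# Either disable the penalty feature entirely ...
PerSourcePenalties no

# ... or keep it enabled and exempt trusted source networks instead,
# e.g. monitoring hosts or shared NAT/proxy addresses (addresses are examples):
#PerSourcePenalties yes
#PerSourcePenaltyExemptList 10.100.15.0/24,192.0.2.10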

Planet Debian - Ben Hutchings: FOSS activity in March 2025

Planet Debian - Ben Hutchings: FOSS activity in February 2025

Planet Debian - Ben Hutchings: FOSS activity in November 2024

365 Tomorrows - The Final Sunset

Author: Lachlan Bond I watch on, as the sun begins to expand before my eyes. Slowly, at first, its pulsating shape growing ever-so-slightly behind the Vintusian glass. The radiation waves shake the station, solar winds battering our rapidly failing shields. Alarms blare, but I can hardly hear them over the slip disks firing at full […]

The post The Final Sunset appeared first on 365tomorrows.


Planet Debian - Kalyani Kenekar: Nextcloud Installation HowTo: Secure Your Data with a Private Cloud


Nextcloud is an open-source software suite that enables you to set up and manage your own cloud storage and collaboration platform. It offers a range of features similar to popular cloud services like Google Drive or Dropbox but with the added benefit of complete control over your data and the server where it’s hosted.

I wanted to have a look at Nextcloud and the steps to set up my own instance with a PostgreSQL-based database together with NGinx as the webserver to serve the WebUI. Before doing a full production setup I wanted to play around locally with all the needed steps, so I worked out all the steps within a KVM machine.

While doing this I wrote down some notes, mostly to document for myself what I need to do to get a Nextcloud installation running and usable. So this manual describes how to set up a Nextcloud installation on Debian 12 Bookworm based on NGinx and PostgreSQL.

Nextcloud Installation

Install PHP and PHP extensions for Nextcloud

Nextcloud is basically a PHP application, so we need to install PHP packages to get it working in the end. The following steps are based on the upstream documentation about how to install your own Nextcloud instance.

Installing the virtual package php on a Debian Bookworm system would pull in the dependent meta package php8.2. That package in turn would pull in the package libapache2-mod-php8.2 as a dependency, which would also pull in the apache2 webserver as a dependency. This is something I didn’t want, as I want to use the NGinx that is already installed on the system instead.

To avoid this we need to explicitly exclude the package libapache2-mod-php8.2 from the list of packages we want to install. This is done by appending a hyphen - to the end of the package name, so we use libapache2-mod-php8.2- within the package list, which tells apt to skip this package even though it is a dependency. I ended up with this call to get all needed dependencies installed.

$ sudo apt install php php-cli php-fpm php-json php-common php-zip \
  php-gd php-intl php-curl php-xml php-mbstring php-bcmath php-gmp \
  php-pgsql libapache2-mod-php8.2-
  • Check php version (optional step)

    $ php -v

PHP 8.2.28 (cli) (built: Mar 13 2025 18:21:38) (NTS)
Copyright (c) The PHP Group
Zend Engine v4.2.28, Copyright (c) Zend Technologies
    with Zend OPcache v8.2.28, Copyright (c), by Zend Technologies
  • After installing all the packages, edit the php.ini file:

    $ sudo vi /etc/php/8.2/fpm/php.ini

  • Change the following settings per your requirements:

max_execution_time = 300
memory_limit = 512M
post_max_size = 128M
upload_max_filesize = 128M
  • To make these settings effective, restart the php-fpm service

    $ sudo systemctl restart php8.2-fpm


Install PostgreSQL, Create a database and user

This manual assumes we will use a PostgreSQL server on localhost; if you have a server instance on some remote site you can skip the installation step here.

$ sudo apt install postgresql postgresql-contrib postgresql-client

  • Check version after installation (optional step):

    $ sudo -i -u postgres

    $ psql --version

  • This output will be seen:

    psql (15.12 (Debian 15.12-0+deb12u2))

  • If you started an interactive psql shell, exit it by using the command \q.

    postgres=# \q

  • Exit the CLI of the postgres user:

    postgres@host:~$ exit

Create a PostgreSQL Database and User:

  1. Create a new PostgreSQL user (Use a strong password!):

    $ sudo -u postgres psql -c "CREATE USER nextcloud_user PASSWORD '1234';"

  2. Create new database and grant access:

    $ sudo -u postgres psql -c "CREATE DATABASE nextcloud_db WITH OWNER nextcloud_user ENCODING=UTF8;"

  3. (Optional) Check if we can now connect to the database server and the database itself (you will be asked for the password of the database user!). If this is not working it makes no sense to proceed further; we need to fix the access first!

    $ psql -h localhost -U nextcloud_user -d nextcloud_db

    or

    $ psql -h 127.0.0.1 -U nextcloud_user -d nextcloud_db

  • Log out from postgres shell using the command \q.

Download and install Nextcloud

  • Use the following command to download the latest version of Nextcloud:

    $ wget https://download.nextcloud.com/server/releases/latest.zip

  • Extract the file into the folder /var/www/html with the following command:

    $ sudo unzip latest.zip -d /var/www/html

  • Change ownership of the /var/www/html/nextcloud directory to www-data.

    $ sudo chown -R www-data:www-data /var/www/html/nextcloud

Configure NGinx for Nextcloud to use a certificate

In case you want to use a self-signed certificate, e.g. if you are just playing around and setting up Nextcloud locally for testing purposes, you can do the following steps.

  • Generate the private key and certificate:

    $ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout nextcloud.key -out nextcloud.crt

    $ sudo cp nextcloud.crt /etc/ssl/certs/ && sudo cp nextcloud.key /etc/ssl/private/
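
    If you want to skip the interactive questions and set the certificate's common name directly to match the server_name used below (nextcloud.local is just the example name used in this post), you can add the -subj option:

    $ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
        -keyout nextcloud.key -out nextcloud.crt -subj "/CN=nextcloud.local"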

  • If you want or need to use the service of Let’s Encrypt (or similar) drop the step above and create your required key data by using this command:

    $ sudo certbot --nginx -d nextcloud.your-domain.com

    You will need to adjust the path to the key and certificate in the next step!

  • Change the NGinx configuration:

    $ sudo vi /etc/nginx/sites-available/nextcloud.conf

  • Add the following snippet into the file and save it.

# /etc/nginx/sites-available/nextcloud.conf
upstream php-handler {
    #server 127.0.0.1:9000;
    server unix:/run/php/php8.2-fpm.sock;
}

# Set the `immutable` cache control options only for assets with a cache
# busting `v` argument

map $arg_v $asset_immutable {
    "" "";
    default ", immutable";
}

server {
    listen 80;
    listen [::]:80;
    # Adjust this to the correct server name!
    server_name nextcloud.local;

    # Prevent NGinx HTTP Server Detection
    server_tokens off;

    # Enforce HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    listen 443      ssl http2;
    listen [::]:443 ssl http2;
    # Adjust this to the correct server name!
    server_name nextcloud.local;

    # Path to the root of your installation
    root /var/www/html/nextcloud;

    # Use Mozilla's guidelines for SSL/TLS settings
    # https://mozilla.github.io/server-side-tls/ssl-config-generator/
    # Adjust the usage and paths of the correct key data! E.g. it you want to use Let's Encrypt key material!
    ssl_certificate /etc/ssl/certs/nextcloud.crt;
    ssl_certificate_key /etc/ssl/private/nextcloud.key;
    # ssl_certificate /etc/letsencrypt/live/nextcloud.your-domain.com/fullchain.pem; 
    # ssl_certificate_key /etc/letsencrypt/live/nextcloud.your-domain.com/privkey.pem;

    # Prevent NGinx HTTP Server Detection
    server_tokens off;

    # HSTS settings
    # WARNING: Only add the preload option once you read about
    # the consequences in https://hstspreload.org/. This option
    # will add the domain to a hardcoded list that is shipped
    # in all major browsers and getting removed from this list
    # could take several months.
    #add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload" always;

    # set max upload size and increase upload timeout:
    client_max_body_size 512M;
    client_body_timeout 300s;
    fastcgi_buffers 64 4K;

    # Enable gzip but do not remove ETag headers
    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types application/atom+xml text/javascript application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/wasm application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;

    # Pagespeed is not supported by Nextcloud, so if your server is built
    # with the `ngx_pagespeed` module, uncomment this line to disable it.
    #pagespeed off;

    # The settings allows you to optimize the HTTP2 bandwidth.
    # See https://blog.cloudflare.com/delivering-http-2-upload-speed-improvements/
    # for tuning hints
    client_body_buffer_size 512k;

    # HTTP response headers borrowed from Nextcloud `.htaccess`
    add_header Referrer-Policy                   "no-referrer"       always;
    add_header X-Content-Type-Options            "nosniff"           always;
    add_header X-Frame-Options                   "SAMEORIGIN"        always;
    add_header X-Permitted-Cross-Domain-Policies "none"              always;
    add_header X-Robots-Tag                      "noindex, nofollow" always;
    add_header X-XSS-Protection                  "1; mode=block"     always;

    # Remove X-Powered-By, which is an information leak
    fastcgi_hide_header X-Powered-By;

    # Set .mjs and .wasm MIME types
    # Either include it in the default mime.types list
    # and include that list explicitly or add the file extension
    # only for Nextcloud like below:
    include mime.types;
    types {
        text/javascript js mjs;
        application/wasm wasm;
    }

    # Specify how to handle directories -- specifying `/index.php$request_uri`
    # here as the fallback means that NGinx always exhibits the desired behaviour
    # when a client requests a path that corresponds to a directory that exists
    # on the server. In particular, if that directory contains an index.php file,
    # that file is correctly served; if it doesn't, then the request is passed to
    # the front-end controller. This consistent behaviour means that we don't need
    # to specify custom rules for certain paths (e.g. images and other assets,
    # `/updater`, `/ocs-provider`), and thus
    # `try_files $uri $uri/ /index.php$request_uri`
    # always provides the desired behaviour.
    index index.php index.html /index.php$request_uri;

    # Rule borrowed from `.htaccess` to handle Microsoft DAV clients
    location = / {
        if ( $http_user_agent ~ ^DavClnt ) {
            return 302 /remote.php/webdav/$is_args$args;
        }
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    # Make a regex exception for `/.well-known` so that clients can still
    # access it despite the existence of the regex rule
    # `location ~ /(\.|autotest|...)` which would otherwise handle requests
    # for `/.well-known`.
    location ^~ /.well-known {
        # The rules in this block are an adaptation of the rules
        # in `.htaccess` that concern `/.well-known`.

        location = /.well-known/carddav { return 301 /remote.php/dav/; }
        location = /.well-known/caldav  { return 301 /remote.php/dav/; }

        location /.well-known/acme-challenge    { try_files $uri $uri/ =404; }
        location /.well-known/pki-validation    { try_files $uri $uri/ =404; }

        # Let Nextcloud's API for `/.well-known` URIs handle all other
        # requests by passing them to the front-end controller.
        return 301 /index.php$request_uri;
    }

    # Rules borrowed from `.htaccess` to hide certain paths from clients
    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/)  { return 404; }
    location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console)                { return 404; }

    # Ensure this block, which passes PHP files to the PHP process, is above the blocks
    # which handle static assets (as seen below). If this block is not declared first,
    # then NGinx will encounter an infinite rewriting loop when it prepend `/index.php`
    # to the URI, resulting in a HTTP 500 error response.
    location ~ \.php(?:$|/) {
        # Required for legacy support
        rewrite ^/(?!index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|ocs-provider\/.+|.+\/richdocumentscode(_arm64)?\/proxy) /index.php$request_uri;

        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        set $path_info $fastcgi_path_info;

        try_files $fastcgi_script_name =404;

        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_param HTTPS on;

        fastcgi_param modHeadersAvailable true;         # Avoid sending the security headers twice
        fastcgi_param front_controller_active true;     # Enable pretty urls
        fastcgi_pass php-handler;

        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;

        fastcgi_max_temp_file_size 0;
    }

    # Serve static files
    location ~ \.(?:css|js|mjs|svg|gif|png|jpg|ico|wasm|tflite|map|ogg|flac)$ {
        try_files $uri /index.php$request_uri;
        # HTTP response headers borrowed from Nextcloud `.htaccess`
        add_header Cache-Control                     "public, max-age=15778463$asset_immutable";
        add_header Referrer-Policy                   "no-referrer"       always;
        add_header X-Content-Type-Options            "nosniff"           always;
        add_header X-Frame-Options                   "SAMEORIGIN"        always;
        add_header X-Permitted-Cross-Domain-Policies "none"              always;
        add_header X-Robots-Tag                      "noindex, nofollow" always;
        add_header X-XSS-Protection                  "1; mode=block"     always;
        access_log off;     # Optional: Don't log access to assets
    }

    location ~ \.woff2?$ {
        try_files $uri /index.php$request_uri;
        expires 7d;         # Cache-Control policy borrowed from `.htaccess`
        access_log off;     # Optional: Don't log access to assets
    }

    # Rule borrowed from `.htaccess`
    location /remote {
        return 301 /remote.php$request_uri;
    }

    location / {
        try_files $uri $uri/ /index.php$request_uri;
    }
}
  • Symlink the configuration from sites-available to sites-enabled.

    $ sudo ln -s /etc/nginx/sites-available/nextcloud.conf /etc/nginx/sites-enabled/

  • Restart NGinx and access the URI in the browser.
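
    For example (nginx -t only validates the configuration; the restart then picks up the new site):

    $ sudo nginx -t && sudo systemctl restart nginx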

  • Go through the installation of Nextcloud.

  • The user you enter in the installation dialog should be e.g. administrator or similar; that user will be granted administrative access rights in Nextcloud!

  • To adjust the database connection details you have to edit the file $install_folder/config/config.php. In the example within this post this means you would need to modify /var/www/html/nextcloud/config/config.php to control or change the database connection.

---%<---
    'dbname' => 'nextcloud_db',
    'dbhost' => 'localhost', #(Or your remote PostgreSQL server address if you have.)
    'dbport' => '',
    'dbtableprefix' => 'oc_',
    'dbuser' => 'nextcloud_user',
    'dbpassword' => '1234', #(The password you set for database user.)
--->%---

After the installation and setup of the Nextcloud PHP application there are more steps to be done. Have a look into the WebUI to see which additional steps are needed, like creating a cronjob for background tasks or tuning some more PHP configuration settings.
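
For the background jobs, a typical crontab entry for the www-data user looks like this (the path matches the install location used in this post; adjust it if you installed elsewhere):

    $ sudo crontab -u www-data -e

    */5  *  *  *  * php -f /var/www/html/nextcloud/cron.php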

If you’ve done everything correctly you should see a login page similar to this:

Login Page of your Nextcloud instance


Optional other steps for more enhanced configuration modifications

Move the data folder to somewhere else

The data folder is the root folder for all user content. By default it is located in $install_folder/data, so in our case here it is in /var/www/html/nextcloud/data.

  • Move the data directory outside the web server document root.

    $ sudo mv /var/www/html/nextcloud/data /var/nextcloud_data

  • Ensure correct access permissions; this is mostly not needed if you just moved the folder.

    $ sudo chown -R www-data:www-data /var/nextcloud_data

    $ sudo chown -R www-data:www-data /var/www/html/nextcloud/

  • Update the Nextcloud configuration:

    1. Open the config/config.php file of your Nextcloud installation.

      $ sudo vi /var/www/html/nextcloud/config/config.php

    2. Update the ‘datadirectory’ parameter to point to the new location of your data directory.

  ---%<---
     'datadirectory' => '/var/nextcloud_data'
  --->%---
  • Restart NGinx service:

    $ sudo systemctl restart nginx

Make the installation available for multiple FQDNs on the same server

  • Adjust the Nextcloud configuration to listen and accept requests for different domain names. Configure and adjust the key trusted_domains accordingly.

    $ sudo vi /var/www/html/nextcloud/config/config.php

  ---%<---
    'trusted_domains' => 
    array (
      0 => 'domain.your-domain.com',
      1 => 'domain.other-domain.com',
    ),
  --->%---
  • Create and adjust the needed site configurations for the webserver.
  • Restart the NGinx unit.

An error message about .ocdata might occur

  • .ocdata is not found inside the data directory

    • Create file using touch and set necessary permissions.

      $ sudo touch /var/nextcloud_data/.ocdata

      $ sudo chown -R www-data:www-data /var/nextcloud_data/

The password for the administrator user is unknown

  1. Log in to your server:

    • SSH into the server where your PostgreSQL database is hosted.
  2. Switch to the PostgreSQL user:

    • $ sudo -i -u postgres
  3. Access the PostgreSQL command line

    • psql
  4. List the databases: (If you’re unsure which database is being used by Nextcloud, you can list all the databases with the \l command.)

    • \l
  5. Switch to the Nextcloud database:

    • Switch to the specific database that Nextcloud is using.
    • \c nextcloud_db
  6. Reset the password for the Nextcloud database user:

    • ALTER USER nextcloud_user WITH PASSWORD 'new_password';
  7. Exit the PostgreSQL command line:

    • \q
  8. Verify Database Configuration:

    • Check the database connection details in the config.php file to ensure they are correct.

      sudo vi /var/www/html/nextcloud/config/config.php

    • Replace nextcloud_db, nextcloud_user, and your_password with your actual database name, user, and password.

---%<---
    'dbname' => 'nextcloud_db',
    'dbhost' => 'localhost', #(or your PostgreSQL server address)
    'dbport' => '',
    'dbtableprefix' => 'oc_',
    'dbuser' => 'nextcloud_user',
    'dbpassword' => '1234', #(The password you set for nextcloud_user.)
--->%---
  9. Restart NGinx and access the UI through the browser.
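
Alternatively, Nextcloud ships the occ command-line tool, which can reset a user's password directly without touching the database (run it as the web server user; administrator stands for whatever admin account name you chose during installation):

    $ sudo -u www-data php /var/www/html/nextcloud/occ user:resetpassword administrator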

365 Tomorrows - X Wings

Author: David C. Nutt I did a quick scan outside my vehicle. I could see columns of thousands upon thousands of them, spinning fast, trying to ride the thermals up and out of the dust devil. At least half of them are getting shredded by the wind and when pieces of their wings and thoraxes […]

The post X Wings appeared first on 365tomorrows.

David Brin - Twenty things about The Mess that no one has mentioned, so far. From tariffs to signalgate to DOGE to Houthis and...

I've been distracted by local vexations. But truly, it's time to weigh back into this era in America, that Robert Heinlein accurately forecast as The Crazy Years

This is a long one. But here are things you ought to know, that you've not seen in the news. Starting with...

 == The Tariff War ==

* Ninety-five years ago a Republican Congress passed the biggest tariff hike til Trump. Which even Republican economists now call the dumbest policy move ever. The thing that swerved a mere stock market downturn into the Great Depression. 

Want it explained in a way that's painfully funny? But sure. Some of you already knew that. 

Only have you considered how Trump's tariffs on China give Xi and the Beijing Politburo exactly what they want?

China's economy has been in decline for two years, with bad prospects ahead, especially for youth unemployment. The Tariff War will worsen things for millions of Chinese... but not for Xi!  What Xi gets out of this - personally - is someone to blame for China's already-underway recession! 

"It's all America's fault!"  Thus solidifying his own continued grip on power. He was already doing that, riling (unearned) anti-American fever. Only Trump now proves his case! While ensuring that the USA swirls down the same economic toilet. 

Always look at who benefits! So far it sure looks like Putin and Xi.

* Oh, and this... Want the biggest reason China's economy was already in decline? No one in media is touting that the USA was already experiencing a renaissance of manufacturing. 

HOW is that not a major part of the story? As a direct result of the 2021 Pelosi bills, investment in U.S. domestic factories skyrocketed!**   Shortening supply chains, reducing use of toxic ocean freighters and dependence on China. Only now, if it continues, Trump will claim "My tariffs did it!"

Because Democrats have the polemical skills of a tardigrade.

(Oh, and the USA has been energy independent and a net exporter of oil etc. since Obama. Care to bet? "Drill baby drill?" The Saudis are already scared. They won't allow it.)


== Want another aspect you hadn't considered? ==

* This nutty Tariff War will end the status of the US dollar as the world reserve currency.

It is fast underway, as we speak. Long a fond goal of the Chinese politburo, along with Putin and the Saudis. All of them our close friends. Though even our actual allies won't trust the dollar anymore.

Take a look at what's happening with the dollar... and be proud. So proud.


== Might a Tariff Gambit have been done better? ==

By effectively banning Chinese imports to the U.S., Trump is sending commerce into chaos and prices skyward. Still, while it's totally dumb and risky, I suppose that a carefully selective tariff tiff might have worked.

If the aim was to replace China as our top supplier, it could have been done tactically, by favoring friendlier cheap-labor nations like Vietnam and Malaysia.  First, that would keep supply chains going. It would also help to solidify those nations as allies in opposing Beijing ambitions southward. And give US businesses a way to transition more gently.

Better yet... Mexico. 

While our southern neighbor has been fast transforming into a middle class nation - largely thanks to trade with the U.S. - it's still pretty cheap labor. Only with many added bonuses. First, as Mexicans become more prosperous, they buy a lot of stuff from the USA. Three dollars out of every ten that we spend on Mexican value-added goods - say in maquiladora parts-assembly plants - comes right back. 

Second, U.S. manufacturers will tell you they invest more in US factories when they can partner with Mexican ones. (Shall we wager over that assertion?)

(*Elsewhere I explain why turning Mexico middle class has been one of America's greatest accomplishments! I can prove it. For example, during all of the Right's ravings about illegal immigration, do you hear that it's Mexicans flooding the border? No you don't. The fraction of border-crashers who are Mexican citizens has been in steep decline for a decade. Increasingly, it's refugees from Central American right-wing caudillo regimes and so on. Because most Mexicans can now find decent work at home. Try actually tracking such things for yourself.)

Want another selective tariff that could have advanced U.S. interests? 

How about we use tariffs to pressure countries to stop buying Russian oil, till Putin withdraws from Ukraine? And punish countries like Panama and Gabon and all the other 'flags of convenience' till they stop shilling for Russian tankers that are evading sanctions and rusting into ecological time bombs? More generally, notice that Trump made no actual demands!

Instead? Notice that Russia was left out of any mention on Trump's list of a zillion super-tariffed nations. Why would that be?

Because while Don howls words occasionally toward Putin -- purely for show -- he never ever ever ever does any actual acts that negatively affect his Kremlin master. 

Ever.


== The distilled frothy essence atop the poop pile ==

I could go on about the Trump Tariff War. But of course it boils down to two things that should be plain to anyone. 

First: it's primarily the USA and our friends who will get harmed, but never our enemies...

... and second: it's all jibbering lunacy! Perp'd by a clown car of capering idiots. 

Dig it: during Trump v.1.0 (#45) he at least appointed a veneer of adults to top positions... then got pissed off when ALL of those adults (Tillerson, McMaster, Barr etc.) turned on him, denouncing him as a raving moron and Kremlin agent. 

The one absolute fact about Trump v 2.0 (#47) is the total absence of any adults in the room. 

None. Anywhere. Not one appointee who is actually suited or qualified for the job. Just toadies. Lickspittle and (my personal theory) blackmailed to ensure utter loyalty. See the pattern. Because no one in media - not even a single liberal pundit - is tracking the obvious.


== A few more things to notice ==

I was gonna do all the tariff stuff as 'blips' but there was no way to condense those complicated aspects. So, here are a few items that may qualify as 'blips':

SIGNAL GATE. Remember the Signal Idiocy? 

("Oh, Brin, that was so 'last week.'" Yeah yeah. Libs never notice that a core KGB/MAGA tactic is to distract from one scandal by moving on to the next one!)

Sure, liberal and moderate media did an okay job blaring at some aspects of the insipidly dumb "Signalgate" Scandal, wherein a dozen top Trumpian cabinet officials illegally used a non-secure, unvetted chat system to giggle-blather top secret information about a looming US military action against the Yemeni Houthi rebel enclave... while one member of the chat was even in a notoriously non-secure Moscow hotel, at that very moment!

And of course, everyone in media shouted that our National Security Adviser invited a top-level critic/reporter into the chat without vetting, and without any participant (not even the Director of National 'Intelligence') even noticing.

Still and alas, as always, no one in either moderate or liberal media commented that:

1. Not a single military officer was part of the conversation about a major military operation. Sure, it's consistent with the all-out Republican campaign against the entire senior officer corps. The men and women who have been most dedicated to service and fact-centered responsibility. And competence! 

No, no, we can't have anyone like that in a conversation about a military operation. Some common sense might leak in and pollute the purity of blithering idiocy.

2. Um... but... WHY attack the Houthis? I am sure someone in media must have asked that, but I never saw it.  Indeed, the dumbitude aspects of "Signalgate" perfectly distracted from that bigger question.

Think about it. The Houthis are at war with the Saudis. And every Republican administration, without exception, does the bidding of the Saudi Royal House instantly and without question. Bush Sr., Bush Jr. and Trump. Good doggies.

But are WE served by enraging potential new bin Ladens into swearing revenge on America?

Sure, the Houthis are Shiite friends of Iran... and Iran is best buds with Putin. But then so are the Saudis... and so is Trump. Confused yet? Then look for the common thread. 

What they ALL want - and seek from Trump - is the fall of American liberal democracy. And any sign of a world governed by a transparent Rule of Law. 

Distilled in its essence, the Houthis are the least culpable party in any of this. All they want is to be left alone. Oh, their current leadership is likely a pack of radical jerks. But if a fair referendum were held, 90%+ of the population of North Yemen would vote for independence and peace and membership in the family of nations. 

Above all, do we really need more enemies, right now?


== And WHY the DOGE mass firings? ==

Ay Carumba!  I've seen NO ONE in media or politics even try to penetrate this question!

Dig it: The oligarchy and their foreign backers don't give a damn about the Department of Education. Or massive firings at the VA or Social Security, or Commerce or Agriculture or Parks and so on. There's no underlying goal of "efficiency." 

If they wanted that, they could have done it the way that Al Gore did with his massively effective efficiency campaign in the 90s.

No. All the slashed civil servants I just mentioned, and thousands more were attacked and their services to citizens cauterized for one reason. The same old reason that liberal and moderate pundits always fall-for. Distraction from the real targets.

1. I mentioned the United States senior military officer corps. The smart and savvy and dedicated generals and admirals must be brought to heel!

2. The FBI and all related law professionals. Um duh? But above all...

3. The Internal Revenue Service. The greatest accomplishment of the 2021 Pelosi bills was to fully fund the IRS, which had been starved by GOP Congresses into using 1970s computers and crippled for lack of personnel from auditing the super-rich tax cheaters... 

...cheaters who were left terrified by that legislation!  And I assert that the topmost reason they pushed hard for Donald Trump was to get this opportunity to re-gut the IRS.

And failure to even note or mention that aspect only proves my point about the moderate and liberal and Democrat political and punditry caste.  Oh, many of them are decent people, with far lower rates of every turpitude than their corrupt, perverted and blackmailed GOP counterparts...

...but smart? 

Tardigrades. Tardigrades all the way down.


== There's so much more... ==

...but no time or room for any more this weekend. 

Oh I am sure that some of the items I cite above were mistaken or cockeyed. But suffice it to say that those in media who failed to note any of them are proving to be almost as incompetent and blind as the fools who ran the Harris campaign and put us into this mess.

But YOU don't have to be blind!  

You can spread word about some of these things. Get others to see what has NOT been spoon-fed to them by simplistic media. And if you get any of the complicit dopes to put up wager stakes, I'll happily provide ways and means to take their money.

Heinlein had it right.  The way to ultimately emerge from The Crazy Years is to spread sanity.


----------------------

----------------------

**The idiocy of Kamala Harris's advisers, for not emphasizing the return of U.S. manufacturing instead of just shouting "Abortion!" ten million times - disqualifies them from any future role in Democratic politics. Kamala herself was fine. Her political mavens should have no future role.



Planet Debian - Reproducible Builds: Reproducible Builds in March 2025

Welcome to the third report in 2025 from the Reproducible Builds project. Our monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As usual, however, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.

Table of contents:

  1. Debian bookworm live images now fully reproducible from their binary packages
  2. “How NixOS and reproducible builds could have detected the xz backdoor”
  3. LWN: Fedora change aims for 99% package reproducibility
  4. Python adopts PEP standard for specifying package dependencies
  5. OSS Rebuild real-time validation and tooling improvements
  6. SimpleX Chat server components now reproducible
  7. Three new scholarly papers
  8. Distribution roundup
  9. An overview of “Supply Chain Attacks on Linux distributions”
  10. diffoscope & strip-nondeterminism
  11. Website updates
  12. Reproducibility testing framework
  13. Upstream patches

Debian bookworm live images now fully reproducible from their binary packages

Roland Clobus announced on our mailing list this month that all the major desktop variants (ie. Gnome, KDE, etc.) can be reproducibly created for Debian bullseye, bookworm and trixie from their (pre-compiled) binary packages.

Building reproducible Debian live images does not require building from reproducible source code, but this is still a remarkable achievement. Some large proportion of the binary packages that comprise these live images can (and were) built reproducibly, but live image generation works at a higher level. (By contrast, “full” or end-to-end reproducibility of a bootable OS image will, in time, require both the compile-the-packages and the build-the-bootable-image stages to be reproducible.)

Nevertheless, in response, Roland’s announcement generated significant congratulations as well as some discussion regarding the finer points of the terms employed: a full outline of the replies can be found here.

The news was also picked up by Linux Weekly News (LWN) as well as on Hacker News.


How NixOS and reproducible builds could have detected the xz backdoor

Julien Malka aka luj published an in-depth blog post this month with the highly-stimulating title “How NixOS and reproducible builds could have detected the xz backdoor for the benefit of all”.

Starting with a dive into the relevant technical details of the XZ Utils backdoor, Julien’s article goes on to describe how we might avoid the xz “catastrophe” in the future by building software from trusted sources and building trust into untrusted release tarballs by way of comparing sources and leveraging bitwise reproducibility, i.e. applying the practices of Reproducible Builds.

The article generated significant discussion on Hacker News as well as on Linux Weekly News (LWN).


LWN: Fedora change aims for 99% package reproducibility

Linux Weekly News (LWN) contributor Joe Brockmeier has published a detailed round-up on how Fedora change aims for 99% package reproducibility. The article opens by mentioning that although Debian has “been working toward reproducible builds for more than a decade”, the Fedora project has now:

…progressed far enough that the project is now considering a change proposal for the Fedora 43 development cycle, expected to be released in October, with a goal of making 99% of Fedora’s package builds reproducible. So far, reaction to the proposal seems favorable and focused primarily on how to achieve the goal—with minimal pain for packagers—rather than whether to attempt it.

The Change Proposal itself is worth reading:

Over the last few releases, we [Fedora] changed our build infrastructure to make package builds reproducible. This is enough to reach 90%. The remaining issues need to be fixed in individual packages. After this Change, package builds are expected to be reproducible. Bugs will be filed against packages when an irreproducibility is detected. The goal is to have no fewer than 99% of package builds reproducible.

Further discussion can be found on the Fedora mailing list as well as on Fedora’s Discourse instance.


Python adopts PEP standard for specifying package dependencies

Python developer Brett Cannon reported on Fosstodon that PEP 751 was recently accepted. This design document has the purpose of describing “a file format to record Python dependencies for installation reproducibility”. As the abstract of the proposal writes:

This PEP proposes a new file format for specifying dependencies to enable reproducible installation in a Python environment. The format is designed to be human-readable and machine-generated. Installers consuming the file should be able to calculate what to install without the need for dependency resolution at install-time.

The PEP, which itself supersedes PEP 665, mentions that “there are at least five well-known solutions to this problem in the community”.


OSS Rebuild real-time validation and tooling improvements

OSS Rebuild aims to automate rebuilding upstream language packages (e.g. from PyPI, crates.io, npm registries) and publish signed attestations and build definitions for public use.

OSS Rebuild is now attempting rebuilds as packages are published, shortening the time until rebuilds are validated and attestations published.

Aman Sharma contributed classifiers and fixes for common sources of non-determinism in JAR packages.

Improvements were also made to some of the core tools in the project:

  • timewarp for simulating the registry responses from sometime in the past.
  • proxy for transparent interception and logging of network activity.
  • and stabilize, yet another nondeterminism fixer.


SimpleX Chat server components now reproducible

SimpleX Chat is a privacy-oriented decentralised messaging platform that eliminates user identifiers and metadata, offers end-to-end encryption and has a unique approach to decentralised identity. Starting from version 6.3, SimpleX has implemented reproducible builds for its server components. This advancement allows anyone to verify that the binaries distributed by SimpleX match the source code, improving transparency and trustworthiness.


Three new scholarly papers

Aman Sharma of the KTH Royal Institute of Technology of Stockholm, Sweden published a paper on Build and Runtime Integrity for Java (PDF). The paper’s abstract notes that “Software Supply Chain attacks are increasingly threatening the security of software systems” and goes on to compare build- and run-time integrity:

Build-time integrity ensures that the software artifact creation process, from source code to compiled binaries, remains untampered. Runtime integrity, on the other hand, guarantees that the executing application loads and runs only trusted code, preventing dynamic injection of malicious components.

Aman’s paper explores solutions to safeguard Java applications and proposes some novel techniques to detect malicious code injection. A full PDF of the paper is available.


In addition, Hamed Okhravi and Nathan Burow of Massachusetts Institute of Technology (MIT) Lincoln Laboratory along with Fred B. Schneider of Cornell University published a paper in the most recent edition of IEEE Security & Privacy on Software Bill of Materials as a Proactive Defense:

The recently mandated software bill of materials (SBOM) is intended to help mitigate software supply-chain risk. We discuss extensions that would enable an SBOM to serve as a basis for making trust assessments thus also serving as a proactive defense.

A full PDF of the paper is available.


Lastly, congratulations to Giacomo Benedetti of the University of Genoa for publishing their PhD thesis. Titled Improving Transparency, Trust, and Automation in the Software Supply Chain, Giacomo’s thesis:

addresses three critical aspects of the software supply chain to enhance security: transparency, trust, and automation. First, it investigates transparency as a mechanism to empower developers with accurate and complete insights into the software components integrated into their applications. To this end, the thesis introduces SUNSET and PIP-SBOM, leveraging modeling and SBOMs (Software Bill of Materials) as foundational tools for transparency and security. Second, it examines software trust, focusing on the effectiveness of reproducible builds in major ecosystems and proposing solutions to bolster their adoption. Finally, it emphasizes the role of automation in modern software management, particularly in ensuring user safety and application reliability. This includes developing a tool for automated security testing of GitHub Actions and analyzing the permission models of prominent platforms like GitHub, GitLab, and BitBucket.


Distribution roundup

In Debian this month:


The IzzyOnDroid Android APK repository reached another milestone in March, crossing the 40% coverage mark — specifically, more than 42% of the apps in the repository are now reproducible.

Thanks to funding by NLnet/Mobifree, the project was also able to put more time into their tooling. For instance, developers can now easily run their own verification builder in “less than 5 minutes”. This currently supports Debian-based systems, but support for RPM-based systems is incoming. Future work in the pipeline includes documentation, guidelines and helpers for debugging.


Fedora developer Zbigniew Jędrzejewski-Szmek announced a work-in-progress script called fedora-repro-build which attempts to reproduce an existing package within a Koji build environment. Although the project’s README file lists a number of fields that “will always or almost always vary” (and there is a non-zero list of other known issues), this is an excellent first step towards full Fedora reproducibility (see above for more information).


Lastly, in openSUSE news, Bernhard M. Wiedemann posted another monthly update for his work there.


An overview of Supply Chain Attacks on Linux distributions

Fenrisk, a cybersecurity risk-management company, has published a lengthy overview of Supply Chain Attacks on Linux distributions. Authored by Maxime Rinaudo, the article asks:

[What] would it take to compromise an entire Linux distribution directly through their public infrastructure? Is it possible to perform such a compromise as simple security researchers with no available resources but time?


diffoscope & strip-nondeterminism

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 290, 291, 292 and 293 to Debian:

  • Bug fixes:

    • file(1) version 5.46 now returns XHTML document for .xhtml files such as those found nested within our .epub tests. []
    • Also consider .aar files as APK files, at least for the sake of diffoscope. []
    • Require the new, upcoming, version of file(1) and update our quine-related testcase. []
  • Codebase improvements:

    • Ensure all calls to our_check_output in the ELF comparator have the potential CalledProcessError exception caught. [][]
    • Correct an import masking issue. []
    • Add a missing subprocess import. []
    • Reformat openssl.py. []
    • Update copyright years. [][][]

In addition, Ivan Trubach contributed a change to ignore the st_size metadata entry for directories as it is essentially arbitrary and introduces unnecessary or even spurious changes. []


Website updates

Once again, there were a number of improvements made to our website this month, including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In March, a number of changes were made by Holger Levsen, including:

  • reproduce.debian.net-related:

    • Add links to two related bugs about buildinfos.debian.net. []
    • Add an extra sync to the database backup. []
    • Overhaul description of what the service is about. [][][][][][]
    • Improve the documentation to indicate the need to fix synchronisation pipes. [][]
    • Improve the statistics page by breaking down output by architecture. []
    • Add a copyright statement. []
    • Add a space after the package name so one can search for specific packages more easily. []
    • Add a script to work around/implement a missing feature of debrebuild. []
  • Misc:

    • Run debian-repro-status at the end of the chroot-install tests. [][]
    • Document that we have unused diskspace at Ionos. []

In addition:

And finally, node maintenance was performed by Holger Levsen [][][] and Mattia Rizzolo [][].


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Planet Debian - Bits from Debian: Bits from the DPL

Dear Debian community,

this is bits from DPL for March (sorry for the delay, I was waiting for some additional input).

Conferences

In March, I attended two conferences, each with a distinct motivation.

I joined FOSSASIA to address the imbalance in geographical developer representation. Encouraging more developers from Asia to contribute to Free Software is an important goal for me, and FOSSASIA provided a valuable opportunity to work towards this.

I also attended Chemnitzer Linux-Tage, a conference I have been part of for over 20 years. To me, it remains a key gathering for the German Free Software community – a place where contributors meet, collaborate, and exchange ideas.

I have a remark about submitting an event proposal to both FOSDEM and FOSSASIA:

    Cross distribution experience exchange

As Debian Project Leader, I have often reflected on how other Free Software distributions address challenges we all face. I am interested in discussing how we can learn from each other to improve our work and better serve our users. Recognizing my limited understanding of other distributions, I aim to bridge this gap through open knowledge exchange. My hope is to foster a constructive dialogue that benefits the broader Free Software ecosystem. Representatives of other distributions are encouraged to participate in this BoF – whether as contributors or official co-speakers. My intention is not to drive the discussion from a Debian-centric perspective but to ensure that all distributions have an equal voice in the conversation.

This event proposal was part of my commitment from my 2024 DPL platform, specifically under the section "Reaching Out to Learn". Had it been accepted, I would have also attended FOSDEM. However, both FOSDEM and FOSSASIA rejected the proposal.

In hindsight, reaching out to other distribution contributors beforehand might have improved its chances. I may take this approach in the future if a similar opportunity arises. That said, rejecting an interdistribution discussion without any feedback is, in my view, a missed opportunity for collaboration.

FOSSASIA Summit

The 14th FOSSASIA Summit took place in Bangkok. As a leading open-source technology conference in Asia, it brings together developers, startups, and tech enthusiasts to collaborate on projects in AI, cloud computing, IoT, and more.

With a strong focus on open innovation, the event features hands-on workshops, keynote speeches, and community-driven discussions, emphasizing open-source software, hardware, and digital freedom. It fosters a diverse, inclusive environment and highlights Asia's growing role in the global FOSS ecosystem.

I presented a talk on Debian as a Global Project and led a packaging workshop. Additionally, to further support attendees interested in packaging, I hosted an extra self-organized workshop at a hacker café, initiated by participants eager to deepen their skills.

There was another Debian related talk given by Ananthu titled "The Herculean Task of OS Maintenance - The Debian Way!"

To further my goal of increasing diversity within Debian – particularly by encouraging more non-male contributors – I actively engaged with attendees, seeking opportunities to involve new people in the project. Whether through discussions, mentoring, or hands-on sessions, I aimed to make Debian more approachable for those who might not yet see themselves as contributors. I was fortunate to have the support of Debian enthusiasts from India and China, who ran the Debian booth and helped create a welcoming environment for these conversations. Strengthening diversity in Free Software is a collective effort, and I hope these interactions will inspire more people to get involved.

Chemnitzer Linuxtage

The Chemnitzer Linux-Tage (CLT) is one of Germany's largest and longest-running community-driven Linux and open-source conferences, held annually in Chemnitz since 2000. It has been my favorite conference in Germany, and I have tried to attend every year.

Focusing on Free Software, Linux, and digital sovereignty, CLT offers a mix of expert talks, workshops, and exhibitions, attracting hobbyists, professionals, and businesses alike. With a strong grassroots ethos, it emphasizes hands-on learning, privacy, and open-source advocacy while fostering a welcoming environment for both newcomers and experienced Linux users.

Despite my appreciation for the diverse and high-quality talks at CLT, my main focus was on connecting with people who share the goal of attracting more newcomers to Debian. Engaging with both longtime contributors and potential new participants remains one of the most valuable aspects of the event for me.

I was fortunate to be joined by Debian enthusiasts staffing the Debian booth, where I found myself among both experienced booth volunteers – who have attended many previous CLT events – and young newcomers. This was particularly reassuring, as I certainly can't answer every detailed question at the booth. I greatly appreciate the knowledgeable people who represent Debian at this event and help make it more accessible to visitors.

As a small point of comparison –while FOSSASIA and CLT are fundamentally different events– the gender ratio stood out. FOSSASIA had a noticeably higher proportion of women compared to Chemnitz. This contrast highlighted the ongoing need to foster more diversity within Free Software communities in Europe.

At CLT, I gave a talk titled "Tausend Freiwillige, ein Ziel" (Thousand Volunteers, One Goal), which was video recorded. It took place in the grand auditorium and attracted a mix of long-term contributors and newcomers, making for an engaging and rewarding experience.

Kind regards, Andreas.

Planet Debian - Gunnar Wolf: Culture as a positive freedom

This post is an unpublished review for La cultura libre como libertad positiva
Please note: This review is not meant to be part of my usual contributions to ACM's «Computing Reviews». I do want, though, to share it with people that follow my general interests and such stuff.

This article was published almost a year ago, and I read it just after relocating from Argentina back to Mexico. I came from a country starting to realize the shock it meant to be ruled by an autocratic, extreme right-wing president willing to overrun its Legislative and bent on destroying the State itself — not too different from what we are now witnessing on a global level.

I have been a strong proponent and defender of Free Software and of Free Culture throughout my adult life. And I have been a Socialist since my early teenage years. I cannot say there is a strict correlation between them, but there is a big intersection of people and organizations who aligns to both sides — And Ártica (and Mariana Fossatti) are clearly among them.

Freedom is a word that has brought us many misunderstandings throughout the past many decades. We will say that Freedom can only be brought hand-by-hand with Equality, Fairness and Tolerance. But the extreme-right wing (is it still bordering Fascism, or has it finally embraced it as its true self?) that has grown so much in many countries over the last years also seems to have appropriated the term, even taking it as their definition. In English (particularly, in USA English), liberty is a more patriotic term, and freedom is more personal (although the term used for the market is free market); in Spanish, we conflate them both under libre.

Mariana refers to a third blog, by Rolando Astarita, where the author introduces the concepts of positive and negative freedom/liberties. Astarita characterizes negative freedom as an individual’s possibility to act without interference or coercion, limited only by other people’s freedom, while positive freedom is the real capacity to exercise one’s autonomy and achieve self-realization; this does not depend on a person on their own, but on different social conditions; Astarita understands the Marxist tradition to emphasize positive freedom.

Mariana brings this definition to our usual discussion on licensing: if we follow negative freedom, we will understand free licenses as the idea of access without interference to cultural or information goods, as long as it’s legal (in order not to infringe others’ property rights). Licensing is seen as a private contract, and each individual can grant access and use to their works at will.

The previous definition might be enough for many, but, she says, it is missing something important. The practical effect of many individuals renouncing a bit of control over their property rights produces, collectively, the common goods. They constitute a pool of knowledge or culture that is no longer an individual, contractual issue, but grows and becomes social, collective. Negative freedom does not go further, but positive freedom allows us to broaden the horizon, and takes us to a notion of free culture that, by strengthening the commons, widens social rights.

She closes the article by stating (and I’ll happily sign off on this as if the words were my own) that we are Free Culture militants «not only because it affirms the individual sovereignty to deliver and receive cultural resources, in an intellectual property framework guaranteed by the state. Our militancy is about widening cultural enjoyment and participation to the collective through the defense of common cultural goods» (…) «We want to build Free Culture for a Free Society. But a Free Society is not a society of free owners, but a society emancipated from the structures of economic power and social privilege that block this collective potential».

Planet DebianBits from Debian: DebConf25 Registration and Call for Proposals are open

The 26th edition of the Debian annual conference will be held in Brest, France, from July 14th to July 20th, 2025. The main conference will be preceded by DebCamp, from July 7th to July 13th. We invite everyone interested to register for the event to attend DebConf25 in person. You can also submit a talk or event proposal if you're interested in presenting your work in Debian at DebConf25.

Registration can be done by creating an account on the DebConf25 website and clicking on "Register" in the profile section.

As always, basic registration is free of charge. If you are attending the conference in a professional capacity or as a representative of your company, we kindly ask that you consider registering in one of our paid categories. This helps cover the costs of organizing the event while also helping to subsidize the attendance of other community members. The last day to register with guaranteed swag is June 9th.

We encourage eligible individuals to apply for a diversity bursary. Travel, food, and accommodation bursaries are available. More details can be found on the bursary information page. The last day to apply for a bursary is April 14th. Applicants should receive feedback on their bursary application by April 25th.

The call for proposals for talks, discussions and other activities is also open. To submit a proposal, you need to create an account on the website and click the "Submit Talk Proposal" button in the profile section. The last day to submit and have your proposal considered for the main conference schedule, with video coverage guaranteed, is May 25th.

DebConf25 is also looking for sponsors; if you are interested or think you know of others who would be willing to help, please get in touch with sponsors@debconf.org.

All important dates can be found here.

See you in Brest!

Worse Than FailureError'd: Sentinel Headline

When faced with an information system lacking sufficient richness to permit its users to express all of the necessary data states, human beings will innovate. In other words, they will find creative ways to bend the system to their will, usually (but not always) inconsequentially.

In the early days of information systems, even before electronic computers, we found users choosing to insert various out-of-bounds values into data fields to represent states such as "I don't know the true value for this item" or "It is impossible to accurately state the true value of this item because of a faulty constraint being applied to the input mechanism" or other such notions.

This practice carried on into the computing age, so that now, numeric fields will often contain values of 9999 or 99999999. Taxpayer numbers will be listed as 000-00-0000 or some other repetition of a single digit or a simple sequence. Fields that demand a name collect John Does. Now we also see a fair share of Disney characters.

Programmers then try to make their systems idiot-proof, with the obvious and entirely predictable results.

The mere fact that these inventions exist at all is entirely due to the omission of mechanisms for the metacommentary that we all know perfectly well is sometimes necessary. But rather than provide those, it's easier to wave our hands and pretend that these unwanted states won't exist, can be ignored, can be glossed over. "Relax" they'll tell you. "It probably won't ever happen." "If it does happen, it won't matter." "Don't lose your head over it."

The Beast in Black certainly isn't inclined to cover up an errant sentinel. "For that price, it had better be a genuine Louis XVI pillow from 21-January-1793." A La Lanterne!


Daniel D. doubled up on Error'ds for us. "Do you need the error details? Yes, please."


And again with an alert notification oopsie. "Google Analytics 4 never stops surprising us any given day with how bugged it is. I call it an "Exclamation point undefined". You want more info? Just Google it... Oh wait." I do appreciate knowing who is responsible for the various bodges we are sent. Thank you, Daniel.


"Dark pattern or dumb pattern?" wonders an anonymous reader. I don't think it's very dark.


Finally, Ian Campbell found a data error that doesn't look like an intentional sentinel. Says Ian, "SendGrid has a pretty good free plan now with a daily limit of nine quadrillion seven trillion one hundred ninety-nine billion two hundred fifty-four million seven hundred forty thousand nine hundred ninety-two." That number is not arbitrary: it is exactly 2^53, the first integer an IEEE 754 double-precision float can no longer tell apart from its neighbor (one more than JavaScript's Number.MAX_SAFE_INTEGER), which suggests the limit took a trip through a floating-point code path somewhere.
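For the skeptical, here is a quick check; a minimal C# sketch in which only the constant comes from Ian's screenshot and everything else is illustrative:

using System;

class SafeIntegerCheck
{
    static void Main()
    {
        // The "daily limit" from the SendGrid screenshot, written as a long.
        long reportedLimit = 9_007_199_254_740_992L;

        // It is exactly 2^53 ...
        Console.WriteLine(reportedLimit == (1L << 53));   // True

        // ... which is the first integer an IEEE 754 double cannot
        // distinguish from its successor, so adding 1 changes nothing.
        double asDouble = reportedLimit;
        Console.WriteLine(asDouble + 1 == asDouble);       // True
    }
}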


[Advertisement] Plan Your .NET 9 Migration with Confidence
Your journey to .NET 9 is more than just one decision. Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

365 TomorrowsAutovore

Author: Morrow Brady Without a backstory, the darker patch at the edge of the busy road went unnoticed. It was being faded to oblivion by layers of desert dust and the enraged rush hour traffic. As my evening walk took me past that patch, near the busy street junction, I looked over at it and […]

The post Autovore appeared first on 365tomorrows.

Planet DebianReproducible Builds (diffoscope): diffoscope 294 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 294. This version includes the following changes:

[ Chris Lamb ]
* Correct longstanding issue where many ">"-based version tests used in
  conditional fixtures were broken due to the lack of a __gt__ method.
  Thanks, Colin Watson! (Closes: #1102658)
* Address a long-hidden issue in the test_versions testsuite where we weren't
  actually testing ">", as it was masked by the equality tests.
* Update copyright years.

You can find out more by visiting the project homepage.

,

Planet DebianThorsten Alteholz: My Debian Activities in March 2025

Debian LTS

This was my hundred-twenty-ninth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4096-1] librabbitmq security update to fix one CVE related to credential visibility when using tools on the command line.
  • [DLA 4103-1] suricata security update to fix several CVEs related to bypass of HTTP-based signature, mishandling of multiple fragmented packets, logic errors, infinite loops, buffer overflows, unintended file access and use of large amounts of memory.

Last but not least I started to work on the second batch of fixes for suricata CVEs and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the eightieth ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1360-1] ffmpeg security update to fix three CVEs in Stretch related to out-of-bounds read, assert errors and NULL pointer dereferences.
  • [ELA-1361-1] ffmpeg security update to fix four CVEs in Buster related to out-of-bounds read, assert errors and NULL pointer dereferences.
  • [ELA-1362-1] librabbitmq security update to fix two CVEs in Stretch and Buster related to heap memory corruption due to integer overflow and credential visibility when using the tools on the command line.
  • [ELA-1363-1] librabbitmq security update to fix one CVE in Jessie related to credential visibility when using the tools on the command line.
  • [ELA-1367-1] suricata security update to fix five CVEs in Buster related to bypass of HTTP-based signature, mishandling of multiple fragmented packets, logic errors, infinite loops and buffer overflows.

Last but not least I started to work on the second batch of fixes for suricata CVEs and attended the monthly LTS/ELTS meeting.

Debian Printing

This month I uploaded new packages or new upstream or bugfix versions of:

  • cups-filters to make it work with a new upstream version of qpdf again.

This work is generously funded by Freexian!

Debian Matomo

This month I uploaded new packages or new upstream or bugfix versions of:

This work is generously funded by Freexian!

Debian Astro

This month I uploaded new packages or new upstream or bugfix versions of:

Unfortunately I had a rather bad experience with package hijacking this month. Of course errors can always happen, but when I am forced into a discussion about the advantages of hijacking, I am speechless about such self-centered behavior. Oh fellow Debian Developers, is it really that hard to acknowledge a fault and tidy up afterwards? What a sad trend.

Debian IoT

Unfortunately I didn’t find any time to work on this topic.

Debian Mobcom

This month I uploaded new upstream or bugfix versions of almost all packages. First I uploaded them to experimental and afterwards to unstable to get the latest upstream versions into Trixie.

misc

This month I uploaded new packages or new upstream or bugfix versions of:

meep and meep-mpi-default are no longer supported on 32bit architectures.

FTP master

This month I accepted 343 and rejected 38 packages. The overall number of packages that got accepted was 347.

Krebs on SecurityChina-based SMS Phishing Triad Pivots to Banks

China-based purveyors of SMS phishing kits are enjoying remarkable success converting phished payment card data into mobile wallets from Apple and Google. Until recently, the so-called “Smishing Triad” mainly impersonated toll road operators and shipping companies. But experts say these groups are now directly targeting customers of international financial institutions, while dramatically expanding their cybercrime infrastructure and support staff.

An image of an iPhone device farm shared on Telegram by one of the Smishing Triad members. Image: Prodaft.

If you own a mobile device, the chances are excellent that at some point in the past two years you’ve received at least one instant message that warns of a delinquent toll road fee, or a wayward package from the U.S. Postal Service (USPS). Those who click the promoted link are brought to a website that spoofs the USPS or a local toll road operator and asks for payment card information.

The site will then complain that the visitor’s bank needs to “verify” the transaction by sending a one-time code via SMS. In reality, the bank is sending that code to the mobile number on file for their customer because the fraudsters have just attempted to enroll that victim’s card details into a mobile wallet.

If the visitor supplies that one-time code, their payment card is then added to a new mobile wallet on an Apple or Google device that is physically controlled by the phishers. The phishing gangs typically load multiple stolen cards to digital wallets on a single Apple or Android device, and then sell those phones in bulk to scammers who use them for fraudulent e-commerce and tap-to-pay transactions.

A screenshot of the administrative panel for a smishing kit. On the left is the (test) data entered at the phishing site. On the right we can see the phishing kit has superimposed the supplied card number onto an image of a payment card. When the phishing kit scans that created card image into Apple or Google Pay, it triggers the victim’s bank to send a one-time code. Image: Ford Merrill.

The moniker “Smishing Triad” comes from Resecurity, which was among the first to report in August 2023 on the emergence of three distinct mobile phishing groups based in China that appeared to share some infrastructure and innovative phishing techniques. But it is a bit of a misnomer because the phishing lures blasted out by these groups are not SMS or text messages in the conventional sense.

Rather, they are sent via iMessage to Apple device users, and via RCS on Google Android devices. Thus, the missives bypass the mobile phone networks entirely and enjoy near 100 percent delivery rate (at least until Apple and Google suspend the spammy accounts).

In a report published on March 24, the Swiss threat intelligence firm Prodaft detailed the rapid pace of innovation coming from the Smishing Triad, which it characterizes as a loosely federated group of Chinese phishing-as-a-service operators with names like Darcula, Lighthouse, and the Xinxin Group.

Prodaft said they’re seeing a significant shift in the underground economy, particularly among Chinese-speaking threat actors who have historically operated in the shadows compared to their Russian-speaking counterparts.

“Chinese-speaking actors are introducing innovative and cost-effective systems, enabling them to target larger user bases with sophisticated services,” Prodaft wrote. “Their approach marks a new era in underground business practices, emphasizing scalability and efficiency in cybercriminal operations.”

A new report from researchers at the security firm SilentPush finds the Smishing Triad members have expanded into selling mobile phishing kits targeting customers of global financial institutions like CitiGroup, MasterCard, PayPal, Stripe, and Visa, as well as banks in Canada, Latin America, Australia and the broader Asia-Pacific region.

Phishing lures from the Smishing Triad spoofing PayPal. Image: SilentPush.

SilentPush found the Smishing Triad now spoofs recognizable brands in a variety of industry verticals across at least 121 countries and a vast number of industries, including the postal, logistics, telecommunications, transportation, finance, retail and public sectors.

According to SilentPush, the domains used by the Smishing Triad are rotated frequently, with approximately 25,000 phishing domains active during any 8-day period and a majority of them sitting at two Chinese hosting companies: Tencent (AS132203) and Alibaba (AS45102).

“With nearly two-thirds of all countries in the world targeted by [the] Smishing Triad, it’s safe to say they are essentially targeting every country with modern infrastructure outside of Iran, North Korea, and Russia,” SilentPush wrote. “Our team has observed some potential targeting in Russia (such as domains that mentioned their country codes), but nothing definitive enough to indicate Russia is a persistent target. Interestingly, even though these are Chinese threat actors, we have seen instances of targeting aimed at Macau and Hong Kong, both special administrative regions of China.”

SilentPush’s Zach Edwards said his team found a vulnerability that exposed data from one of the Smishing Triad’s phishing pages, which revealed the number of visits each site received each day across thousands of phishing domains that were active at the time. Based on that data, SilentPush estimates those phishing pages received well more than a million visits within a 20-day time span.

The report notes the Smishing Triad boasts it has “300+ front desk staff worldwide” involved in one of their more popular phishing kits — Lighthouse — staff that is mainly used to support various aspects of the group’s fraud and cash-out schemes.

The Smishing Triad members maintain their own Chinese-language sales channels on Telegram, which frequently offer videos and photos of their staff hard at work. Some of those images include massive walls of phones used to send phishing messages, with human operators seated directly in front of them ready to receive any time-sensitive one-time codes.

As noted in February’s story How Phished Data Turns Into Apple and Google Wallets, one of those cash-out schemes involves an Android app called Z-NFC, which can relay a valid NFC transaction from one of these compromised digital wallets to anywhere in the world. For a $500 monthly subscription, the customer can wave their phone at any payment terminal that accepts Apple or Google Pay, and the app will relay an NFC transaction over the Internet from a stolen wallet on a phone in China.

Chinese nationals were recently busted trying to use these NFC apps to buy high-end electronics in Singapore. And in the United States, authorities in California and Tennessee arrested Chinese nationals accused of using NFC apps to fraudulently purchase gift cards from retailers.

The Prodaft researchers said they were able to find a previously undocumented backend management panel for Lucid, a smishing-as-a-service operation tied to the XinXin Group. The panel included victim figures that suggest the smishing campaigns maintain an average success rate of approximately five percent, with some domains receiving over 500 visits per week.

“In one observed instance, a single phishing website captured 30 credit card records from 550 victim interactions over a 7-day period,” Prodaft wrote.

Prodaft’s report details how the Smishing Triad has achieved such success in sending their spam messages. For example, one phishing vendor appears to send out messages using dozens of Android device emulators running in parallel on a single machine.

Phishers using multiple virtualized Android devices to orchestrate and distribute RCS-based scam campaigns. Image: Prodaft.

According to Prodaft, the threat actors first acquire phone numbers through various means including data breaches, open-source intelligence, or purchased lists from underground markets. They then exploit technical gaps in sender ID validation within both messaging platforms.

“For iMessage, this involves creating temporary Apple IDs with impersonated display names, while RCS exploitation leverages carrier implementation inconsistencies in sender verification,” Prodaft wrote. “Message delivery occurs through automated platforms using VoIP numbers or compromised credentials, often deployed in precisely timed multi-wave campaigns to maximize effectiveness.”

In addition, the phishing links embedded in these messages use time-limited single-use URLs that expire or redirect based on device fingerprinting to evade security analysis, they found.

“The economics strongly favor the attackers, as neither RCS nor iMessage messages incur per-message costs like traditional SMS, enabling high-volume campaigns at minimal operational expense,” Prodaft continued. “The overlap in templates, target pools, and tactics among these platforms underscores a unified threat landscape, with Chinese-speaking actors driving innovation in the underground economy. Their ability to scale operations globally and evasion techniques pose significant challenges to cybersecurity defenses.”

Ford Merrill works in security research at SecAlliance, a CSIS Security Group company. Merrill said he’s observed at least one video of a Windows binary that wraps a Chrome executable and can be used to load in target phone numbers and blast messages via RCS, iMessage, Amazon, Instagram, Facebook, and WhatsApp.

“The evidence we’ve observed suggests the ability for a single device to send approximately 100 messages per second,” Merrill said. “We also believe that there is capability to source country specific SIM cards in volume that allow them to register different online accounts that require validation with specific country codes, and even make those SIM cards available to the physical devices long-term so that services that rely on checks of the validity of the phone number or SIM card presence on a mobile network are thwarted.”

Experts say this fast-growing wave of card fraud persists because far too many financial institutions still default to sending one-time codes via SMS for validating card enrollment in mobile wallets from Apple or Google. KrebsOnSecurity interviewed multiple security executives at non-U.S. financial institutions who spoke on condition of anonymity because they were not authorized to speak to the press. Those banks have since done away with SMS-based one-time codes and are now requiring customers to log in to the bank’s mobile app before they can link their card to a digital wallet.

Cryptogram Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

  • I’m giving an online talk on AI and trust for the Weizenbaum Institute on April 24, 2025 at 2:00 PM CEST (8:00 AM ET).

The list is maintained on this page.

 

Worse Than FailureCodeSOD: A Steady Ship

You know what definitely never changes? Shipping prices. Famously static, despite all economic conditions and the same across all shipping providers. It doesn't matter where you're shipping from, or to, you know exactly what the price will be to ship that package at all times.

Wait, what? You don't think that's true? It must be true, because Chris sent us this function, which calculates shipping prices, and it couldn't be wrong, could it?

public double getShippingCharge(String shippingType, bool saturday, double subTot)
{
    double shCharge = 0.00;
    if(shippingType.Equals("Ground"))
    {
        if(subTot <= 29.99 && subTot > 0)
        {
            shCharge = 4.95;
        }
        else if(subTot <= 99.99 && subTot > 29.99)
        {
            shCharge = 7.95;
        }
        else if(subTot <= 299.99 && subTot > 99.99)
        {
            shCharge = 9.95;
        }
        else if(subTot > 299.99)
        {
            shCharge = subTot * .05;
        }              
    }
    else if(shippingType.Equals("Two-Day"))
    {
        if(subTot <= 29.99 && subTot > 0)
        {
            shCharge = 14.95;
        }
        else if(subTot <= 99.99 && subTot > 29.99)
        {
            shCharge = 19.95;
        }
        else if(subTot <= 299.99 && subTot > 99.99)
        {
            shCharge = 29.95;
        }
        else if(subTot > 299.99)
        {
            shCharge = subTot * .10;
        }              
    }
    else if(shippingType.Equals("Next Day"))
    {
        if(subTot <= 29.99 && subTot > 0)
        {
            shCharge = 24.95;
        }
        else if(subTot <= 99.99 && subTot > 29.99)
        {
            shCharge = 34.95;
        }
        else if(subTot <= 299.99 && subTot > 99.99)
        {
            shCharge = 44.95;
        }
        else if(subTot > 299.99)
        {
            shCharge = subTot * .15;
        }              
    }
    else if(shippingType.Equals("Next Day a.m."))
    {
        if(subTot <= 29.99 && subTot > 0)
        {
            shCharge = 29.95;
        }
        else if(subTot <= 99.99 && subTot > 29.99)
        {
            shCharge = 39.95;
        }
        else if(subTot <= 299.99 && subTot > 99.99)
        {
            shCharge = 49.95;
        }
        else if(subTot > 299.99)
        {
            shCharge = subTot * .20;
        }              
    }                                      
    return shCharge;
}

Next you're going to tell me that passing the shipping types around as stringly typed data instead of enums is a mistake, too!
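If the rates really must live in code, the whole thing collapses into a small lookup table keyed by an enum. The sketch below is only an illustration of that idea; the type and member names are invented, and only the tier boundaries and charges are taken from the function above:

using System;
using System.Collections.Generic;

public enum ShippingType { Ground, TwoDay, NextDay, NextDayAm }

public static class ShippingRates
{
    // Flat charges for subtotals up to 29.99, 99.99 and 299.99,
    // plus the percentage applied above 299.99.
    private static readonly Dictionary<ShippingType, (double[] FlatCharges, double Rate)> Table = new()
    {
        [ShippingType.Ground]    = (new[] {  4.95,  7.95,  9.95 }, 0.05),
        [ShippingType.TwoDay]    = (new[] { 14.95, 19.95, 29.95 }, 0.10),
        [ShippingType.NextDay]   = (new[] { 24.95, 34.95, 44.95 }, 0.15),
        [ShippingType.NextDayAm] = (new[] { 29.95, 39.95, 49.95 }, 0.20),
    };

    private static readonly double[] TierBounds = { 29.99, 99.99, 299.99 };

    public static double GetCharge(ShippingType type, double subTot)
    {
        if (subTot <= 0) return 0.00;              // mirrors the original: nothing to ship, nothing to charge
        var (flatCharges, rate) = Table[type];
        for (int i = 0; i < TierBounds.Length; i++)
            if (subTot <= TierBounds[i]) return flatCharges[i];
        return subTot * rate;                      // above the top tier, charge a percentage
    }
}

With that shape, a typo in a shipping type becomes a compile error rather than a silently free shipment.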

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsThe Lagrange Point

Author: RY Jack floated in the observation blister, the void pressing silent against the reinforced plasteel. Earth hung like a chipped blue marble a million klicks sunward. Behind him, the comms array of Lagrange Point 1 hummed its patient vigil, vast silver dishes straining to catch whispers from the interstellar dark. Mostly, it caught static. […]

The post The Lagrange Point appeared first on 365tomorrows.

Cryptogram AI Vulnerability Finding

Microsoft is reporting that its AI systems are able to find new vulnerabilities in source code:

Microsoft discovered eleven vulnerabilities in GRUB2, including integer and buffer overflows in filesystem parsers, command flaws, and a side-channel in cryptographic comparison.

Additionally, 9 buffer overflows in parsing SquashFS, EXT4, CramFS, JFFS2, and symlinks were discovered in U-Boot and Barebox, which require physical access to exploit.

The newly discovered flaws impact devices relying on UEFI Secure Boot, and if the right conditions are met, attackers can bypass security protections to execute arbitrary code on the device.

Nothing major here. These aren’t exploitable out of the box. But that an AI system can do this at all is impressive, and I expect their capabilities to continue to improve.

Planet DebianJohn Goerzen: Announcing the NNCPNET Email Network

From 1995 to 2019, I ran my own mail server. It began with a UUCP link, an expensive long-distance call for me then. Later, I ran a mail server in my apartment, then ran it as a VPS at various places.

But running an email server got difficult. You can’t just run it on a residential IP. Now there’s SPF, DKIM, DMARC, and TLS to worry about. I recently reviewed mail hosting services, and don’t get me wrong: I still use one, and probably will, because things like email from my bank are critical.

But we’ve lost the ability to tinker, to experiment, to have fun with email.

Not anymore. NNCPNET is an email system that runs atop NNCP. I’ve written a lot about NNCP, including a less-ambitious article about point-to-point email over NNCP 5 years ago. NNCP is to UUCP what ssh is to telnet: a modernization, with modern security and features. NNCP is an asynchronous, onion-routed, store-and-forward network. It can use as a transport anything from the Internet to a USB stick.

NNCPNET is a set of standards, scripts, and tools to facilitate a broader email network using NNCP as the transport. You can read more about NNCPNET on its wiki!

The “easy mode” is to use the Docker container (multi-arch, so you can use it on your Raspberry Pi) I provide, which bundles:

  • Exim mail server
  • NNCP
  • Verification and routing tools I wrote. Because NNCP packets are encrypted and signed, we get sender verification “for free”; my tools ensure the From: header corresponds with the sending node.
  • Automated nodelist tools; it will request daily nodelist updates and update its configurations accordingly, so new members can be communicated with
  • Integration with the optional, opt-in Internet email bridge
It is open to all. The homepage has a more extensive list of features.

I even have mailing lists running on NNCPNET; see the interesting addresses page for more details.

There is extensive documentation, and of course the source to the whole thing is available.

The gateway to Internet SMTP mail is off by default, but can easily be enabled for any node. It is a full participant, in both directions, with SPF, DKIM, DMARC, and TLS.

You don’t need any inbound ports for any of this. You don’t need an always-on Internet connection. You don’t even need an Internet connection at all. You can run it from your laptop and still use Thunderbird to talk to it via its optional built-in IMAP server.

,

Worse Than FailureCodeSOD: Single or Mingle

The singleton is arguably the easiest design pattern to understand, and thus one of the most frequently implemented, even- especially- when it isn't necessary. Its simplicity is its weakness.

Bartłomiej inherited some code which implemented this pattern many, many times. None of them worked quite correctly, and all of them tried to create a singleton a different way.

For example, this one:

public class SystemMemorySettings
{
    private static SystemMemorySettings _instance;

    public SystemMemorySettings()
    {
        if (_instance == null)
        {
            _instance = this;
        }
    }

    public static SystemMemorySettings GetInstance()
    {
        return _instance;
    }

    public void DoSomething()
    {
    ...
        // (this must only be done for singleton instance - not for working copy)
        if (this != _instance)
        {
            return;
        }
    ...
    }
}

The only thing they got correct was the static method which returns an instance, but everything else is wrong. They construct the instance in the constructor, meaning this isn't actually a singleton, since you can construct it multiple times. You just can't use it.

And you can't use it because of the real "magic" here: DoSomething, which checks if the currently active instance is also the originally constructed instance. If it isn't, this function just fails silently and does nothing.

A common critique of singletons is that they're simply "global variables with extra steps," but this doesn't even succeed at that- it's just a failure, top to bottom.
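For contrast, here is a minimal sketch of the conventional lazy, thread-safe approach in C#, assuming (generously) that a singleton is wanted here at all; only the class name is kept from the snippet above, the rest is illustrative:

using System;

public sealed class SystemMemorySettings
{
    // Lazy<T> handles thread-safe, on-demand construction of the one instance.
    private static readonly Lazy<SystemMemorySettings> _instance =
        new(() => new SystemMemorySettings());

    public static SystemMemorySettings Instance => _instance.Value;

    // A private constructor means no "working copies" can ever be created,
    // so DoSomething never has to ask whether it is the real instance.
    private SystemMemorySettings() { }

    public void DoSomething()
    {
        // ... actual work, unconditionally ...
    }
}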

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsMaster Lonsang Chooses

Author: David Barber The first meeting between aliens and humans had not gone well. The details will never be known, but as the generation ship Pilgrim neared Centauri, it had been met by an alien craft. Imagine the descendants of those first colonists, isolated for centuries in their little world, suddenly invaded by monsters. The […]

The post Master Lonsang Chooses appeared first on 365tomorrows.

Krebs on SecurityPatch Tuesday, April 2025 Edition

Microsoft today released updates to plug at least 121 security holes in its Windows operating systems and software, including one vulnerability that is already being exploited in the wild. Eleven of those flaws earned Microsoft’s most-dire “critical” rating, meaning malware or malcontents could exploit them with little to no interaction from Windows users.

The zero-day flaw already seeing exploitation is CVE-2025-29824, a local elevation of privilege bug in the Windows Common Log File System (CLFS) driver.  Microsoft rates it as “important,” but as Chris Goettl from Ivanti points out, risk-based prioritization warrants treating it as critical.

This CLFS component of Windows is no stranger to Patch Tuesday: According to Tenable’s Satnam Narang, since 2022 Microsoft has patched 32 CLFS vulnerabilities — averaging 10 per year — with six of them exploited in the wild. The last CLFS zero-day was patched in December 2024.

Narang notes that while flaws allowing attackers to install arbitrary code are consistently top overall Patch Tuesday features, the data is reversed for zero-day exploitation.

“For the past two years, elevation of privilege flaws have led the pack and, so far in 2025, account for over half of all zero-days exploited,” Narang wrote.

Rapid7’s Adam Barnett warns that any Windows defenders responsible for an LDAP server — which means almost any organization with a non-trivial Microsoft footprint — should add patching for the critical flaw CVE-2025-26663 to their to-do list.

“With no privileges required, no need for user interaction, and code execution presumably in the context of the LDAP server itself, successful exploitation would be an attractive shortcut to any attacker,” Barnett said. “Anyone wondering if today is a re-run of December 2024 Patch Tuesday can take some small solace in the fact that the worst of the trio of LDAP critical RCEs published at the end of last year was likely easier to exploit than today’s example, since today’s CVE-2025-26663 requires that an attacker win a race condition. Despite that, Microsoft still expects that exploitation is more likely.”

Among the critical updates Microsoft patched this month are remote code execution flaws in Windows Remote Desktop services (RDP), including CVE-2025-26671, CVE-2025-27480 and CVE-2025-27482; only the latter two are rated “critical,” and Microsoft marked both of them as “Exploitation More Likely.”

Perhaps the most widespread vulnerabilities fixed this month were in web browsers. Google Chrome updated to fix 13 flaws this week, and Mozilla Firefox fixed eight bugs, with possibly more updates coming later this week for Microsoft Edge.

As it tends to do on Patch Tuesdays, Adobe has released 12 updates resolving 54 security holes across a range of products, including ColdFusion, Adobe Commerce, Experience Manager Forms, After Effects, Media Encoder, Bridge, Premiere Pro, Photoshop, Animate, AEM Screens, and FrameMaker.

Apple users may need to patch as well. On March 31, Apple released a rather large batch of security updates (more than three gigabytes in size) for a wide range of its products, from macOS to the iOS operating systems on iPhones and iPads, including a fix for at least one zero-day flaw.

Earlier today, Microsoft included a note saying Windows 10 security updates weren’t available but would be released as soon as possible. It appears from browsing askwoody.com that this snafu has since been rectified. Either way, if you run into complications applying any of these updates please leave a note about it in the comments below, because the chances are good that someone else had the same problem.

As ever, please consider backing up your data and/or devices prior to updating, which makes it far less complicated to undo a software update gone awry. For more granular details on today’s Patch Tuesday, check out the SANS Internet Storm Center’s roundup. Microsoft’s update guide for April 2025 is here.

For more details on Patch Tuesday, check out the write-ups from Action1 and Automox.

Planet DebianDirk Eddelbuettel: AsioHeaders 1.28.2-1 on CRAN: New Upstream

A new release of the AsioHeaders package arrived at CRAN earlier today. Asio provides a cross-platform C++ library for network and low-level I/O programming. It is also included in Boost – but requires linking when used as part of Boost. This standalone version of Asio is a header-only C++ library which can be used without linking (just like our BH package with parts of Boost).

This update brings a new upstream version which helps the three dependent packages using AsioHeaders to remain compliant at CRAN, and has been prepared by Charlie Gao. Otherwise I made some routine updates to packaging since the last release in late 2022.

The short NEWS entry for AsioHeaders follows.

Changes in version 1.28.2-1 (2025-04-08)

  • Standard maintenance to CI and other packaging aspects

  • Upgraded to Asio 1.28.2 (Charlie Gao in #11 fixing #10)

Thanks to my CRANberries, there is a diffstat report for this release. Comments and suggestions about AsioHeaders are welcome via the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Planet DebianTaavi Väänänen: Writing a custom rsync server to automatically update a static site

Inspired by some friends,1 I too wanted to make a tiny website telling which event I am at this exact moment. Thankfully I already had an another toy project with that information easily available, so generating the web page was a matter of just querying that project's API and feeding that data to a HTML template.

Now the obvious way to host that would be to hook up the HTML-generating code to a web server, maybe add some caching for the API calls, and then route external HTTPS traffic to it. However, that'd a) require that server to be constantly available to serve traffic, and b) be boring.

For context: I have an existing setup called staticweb, which is effectively a fancy name for a couple of Puppet-managed servers that run Apache httpd to serve static web pages and have a bunch of systemd timers running rsync to ensure they're serving the same content. It works really well and I use it for things ranging from my website or whyisbetabroken.com to things like my internal apt repository.

Now, there are two ways to get new content into that mechanism: it can be manually pushed in from e.g. a CI job, or the system can be configured to periodically pull it from a separate server. The latter mechanism was initially created so that I could pull the Debian packages from my separate reprepro server into the staticweb setup. It turns out that the latter makes a really neat method for handling other dynamically-generated static sites as well.

So, for my "where is Taavi at" site, I ended up writing the server part in Go, and included an rsync server using the gokrazy/rsync package. Initially I just implemented a static temporary directory with a timer to regularly update the HTML file in it, but then I got an even more cursed idea: what if the HTML was dynamically generated when an rsync client connected to the server? So I did just that.

For deployment, I slapped the entire server part in a container and deployed it to my Kubernetes cluster. The rsync server is exposed directly as a service to my internal network with no authentication or encryption - I think that's fine since that's a read-only service in a private LAN and the resulting HTML is going to be publicly exposed anyway. (Thanks to some DNS magic, just creating a LoadBalancer Service object with a special annotation is enough to have a DNS name provisioned for the assigned IP address, which is neat.)

Overall the setup works nicely, at least for now. I need to add some sort of cache so that unchanged information is not fetched from the API for every update. And I guess I could write some cursed rsyncd reverse proxy with per-module rules if I end up creating more sites like this, to avoid creating new LoadBalancer services for each of them.


  1. Mostly from Sammy's where.fops.at↩︎

Planet DebianFreexian Collaborators: Debian Contributions: Preparations for Trixie, Updated debvm, DebConf 25 registration website updates and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-03

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Preparing for Trixie, by Raphaël Hertzog

As we are approaching the trixie freeze, it is customary for Debian developers to review their packages and clean them up in preparation for the next stable release.

That’s precisely what Raphaël did with publican, a package that had not seen any change since the last Debian release and that partially stopped working along the way due to a major Perl upgrade. While upstream’s activity is close to zero, hope is not yet entirely gone as the git repository moved to a new location a couple of months ago and contained the required fix. Raphaël also developed another fix to avoid an annoying warning that was seen at runtime.

Raphaël also ensured that the last upstream version of zim was uploaded to Debian unstable, and developed a fix for gnome-shell-extension-hamster to make it work with GNOME 48 and thus ensure that the package does not get removed from trixie.

Abseil and re2 transition in Debian, by Stefano Rivera

One of the last transitions to happen for trixie was an update to abseil, bringing it up to 202407. This library is a dependency for one of Freexian’s customers, as well as blocking newer versions of re2, a package maintained by Stefano.

The transition had been stalled for several months while some issues with reverse dependencies were investigated and dealt with. It took a final push to make the transition happen, including fixing a few newly discovered problems downstream. The abseil package’s autopkgtests were (trivially) broken by newer cmake versions, and some tests started failing on PPC64 (a known issue upstream).

debvm uploaded, by Helmut Grohne

debvm is a command line tool for quickly creating a Debian-based virtual machine for testing purposes. Over time, it accumulated quite a few minor issues as well as CI failures. The most notorious one was an ARM32 failure present since August. It was diagnosed down to a glibc bug by Tj and Chris Hofstaedtler and little has happened since then. To have debvm work somewhat, it now contains a workaround for this situation. Few changes are expected to be noticeable, but related tools such as apt, file, linux, passwd, and qemu required quite a few adaptations all over the place. Much of the necessary debugging was contributed by others.

DebConf 25 Registration website, by Stefano Rivera and Santiago Ruano Rincón

DebConf 25, the annual Debian developer conference, is now open for registration. Other than preparing the conference website, getting there always requires some last minute changes to the software behind the registration interface and this year was no exception. Every year, the conference is a little different to previous years, and has some different details that need to be captured from attendees. And every year we make minor incremental improvements to fix long-standing problems.

New concepts this year included: brunch, the closing talks on the departure day, venue security clearance, partial contributions towards food and accommodation bursaries, and attendee-selected bursary budgets.

Miscellaneous contributions

  • Helmut uploaded guess-concurrency incorporating feedback from others.
  • Helmut reacted to rebootstrap CI results and adapted it to cope with changes in unstable.
  • Helmut researched real world /usr-move fallout though little was actually attributable. He also NMUed systemd unsuccessfully.
  • Helmut sent 12 cross build patches.
  • Helmut looked into undeclared file conflicts in Debian more systematically and filed quite some bugs.
  • Helmut attended the cross/bootstrap sprint in Würzburg. A report of the event is pending.
  • Lucas worked on the CFP and tracks definition for DebConf 25.
  • Lucas worked on some bits involving Rails 7 transition.
  • Carles investigated why the job piuparts on salsa-ci/pipeline was passing but was failing on piuparts.debian.org for simplemonitor package. Created an issue and MR with a suggested fix, under discussion.
  • Carles improved the documentation of salsa-ci/pipeline: added documentation for different variables.
  • Carles made debian-history package reproducible (with help from Chris Lamb).
  • Carles updated simplemonitor package (new upstream version), prepared a new qdacco version (fixed bugs in qdacco, packaged with the upgrade from Qt 5 to Qt 6).
  • Carles reviewed and submitted translations to Catalan for adduser, apt, shadow, apt-listchanges.
  • Carles reviewed, created merge-requests for translations to Catalan of 38 packages (using po-debconf-manager tooling). Created 40 bug reports for some merge requests that haven’t been actioned for some time.
  • Colin Watson fixed 59 RC bugs (including 26 packages broken by the long-overdue removal of dh-python’s dependency on python3-setuptools), and upgraded 38 packages (mostly Python-related) to new upstream versions.
  • Colin worked with Pranav P to track down and fix a dnspython autopkgtest regression on s390x caused by an endianness bug in pylsqpack.
  • Colin fixed a time-based test failure in python-dateutil that would have triggered in 2027, and contributed the fix upstream.
  • Colin fixed debconf to automatically use the noninteractive frontend if stdin is not a terminal.
  • Stefano bisected and fixed a pypy translation regression on Debian stable and older on 32-bit ARM.
  • Emilio coordinated and helped finish various transitions in light of the transition freeze.
  • Thorsten Alteholz uploaded cups-filters to fix an FTBFS with a new upstream version of qpdf.
  • With the aim of enhancing the support for packages related to Software Bill of Materials (SBOMs) in recent industrial standards, Santiago has worked on finishing the packaging of, and has uploaded, the CycloneDX python library. There is ongoing work on SPDX python tools, but it requires (build-)dependencies currently not shipped in Debian, such as owlrl and pyshacl.
  • Anupa worked with the Publicity team to announce the Debian 12.10 point release.
  • Anupa with the support of Santiago prepared an announcement and announced the opening of CfP and Registrations for DebConf 25.

,

Planet DebianPetter Reinholdtsen: Some notes on Linux LUKS cracking

A few months ago, I found myself in the unfortunate position that I had to try to recover the password used to encrypt a Linux hard drive. Tonight a few friends of mine asked for details on this effort. I guess it is a good idea to expose the recipe I found to a wider audience, so here are a few relevant links and key findings. I've forgotten a lot, so part of this is taken from memory.

I found a good recipe in a blog post written in 2019 by diverto, titled Cracking LUKS/dm-crypt passphrases. I tried both the John the Ripper approach, where it generated password candidates and passed them to cryptsetup, and the luks2jack.py approach (which did not work for me, if I remember correctly), but I believe I had the most success with the hashcat approach. I had it running for several days on my Thinkpad X230 laptop from 2012. I do not remember the exact hash rate, but when I tested it again just now on the same machine by running "hashcat -a 0 hashcat.luks longlist --force", I got a hash rate of 7 per second. Testing it on a newer machine with a 32 core AMD CPU, I got a hash rate of 289 per second. Using the ROCm OpenCL approach on the same machine I managed to get a hash rate of 2821 per second.

Session..........: hashcat                                
Status...........: Quit
Hash.Mode........: 14600 (LUKS v1 (legacy))
Hash.Target......: hashcat.luks
Time.Started.....: Tue Apr  8 23:06:08 2025 (1 min, 10 secs)
Time.Estimated...: Tue Apr  8 23:12:49 2025 (5 mins, 31 secs)
Kernel.Feature...: Pure Kernel
Guess.Base.......: File (/usr/share/dict/bokmål)
Guess.Queue......: 1/1 (100.00%)
Speed.#1.........:     2821 H/s (8.18ms) @ Accel:128 Loops:128 Thr:32 Vec:1
Recovered........: 0/1 (0.00%) Digests (total), 0/1 (0.00%) Digests (new)
Progress.........: 0/935405 (0.00%)
Rejected.........: 0/0 (0.00%)
Restore.Point....: 0/935405 (0.00%)
Restore.Sub.#1...: Salt:0 Amplifier:0-1 Iteration:972928-973056
Candidate.Engine.: Device Generator
Candidates.#1....: A-aksje -> fiskebil
Hardware.Mon.#1..: Temp: 73c Fan: 77% Util: 99% Core:2625MHz Mem: 456MHz Bus:16

Note that for this last test I picked the largest word list I had on my machine (dict/bokmål) as a fairly arbitrary word list, and not because it is useful for cracking my particular use case from a few months ago.
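Those hash rates translate directly into wall-clock time for a given word list. Here is a back-of-the-envelope sketch using only the figures quoted above (935,405 candidates; roughly 2,821, 289 and 7 hashes per second respectively):

using System;

class CrackTimeEstimate
{
    static void Main()
    {
        const double candidates = 935_405;   // words in the list, per the hashcat output

        // Time to exhaust the whole list at a given hash rate:
        Console.WriteLine(TimeSpan.FromSeconds(candidates / 2821)); // ~5.5 minutes (ROCm OpenCL run)
        Console.WriteLine(TimeSpan.FromSeconds(candidates / 289));  // ~54 minutes (32-core AMD CPU)
        Console.WriteLine(TimeSpan.FromSeconds(candidates / 7));    // ~37 hours (the 2012 X230)
    }
}

The first figure matches hashcat's own Time.Estimated in the session output above.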

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Cryptogram How to Leak to a Journalist

Nieman Lab has some good advice on how to leak a story to a journalist.

Worse Than FailureCodeSOD: Insanitize Your Inputs

Honestly, I don't know what to say about this code sent to us by Austin, beyond "I think somebody was very confused".

string text;
text = "";
// snip
box.Text = text;
text = "";
text = XMLUtil.SanitizeXmlString(text);

This feels like it goes beyond the usual cruft and confusion that comes with code evolving without ever really being thought about, and ends up in some space outside of meaning. It's all empty strings, signifying nothing, but we've sanitized it.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsJunko

Author: Majoki Junko opened the dumpster lid and peered up at the spires of Saint Petersbot towering above. It made the sign of the triple cross and performed its diagnostic ablutions. Only two system alerts pinged. Junko would ignore them for another day. From the dumpster, Junko made its way along back alleys to the […]

The post Junko appeared first on 365tomorrows.

,

Cryptogram Arguing Against CALEA

At a Congressional hearing earlier this week, Matt Blaze made the point that CALEA, the 1994 law that forces telecoms to make phone calls wiretappable, is outdated in today’s threat environment and should be rethought:

In other words, while the legally-mandated CALEA capability requirements have changed little over the last three decades, the infrastructure that must implement and protect it has changed radically. This has greatly expanded the “attack surface” that must be defended to prevent unauthorized wiretaps, especially at scale. The job of the illegal eavesdropper has gotten significantly easier, with many more options and opportunities for them to exploit. Compromising our telecommunications infrastructure is now little different from performing any other kind of computer intrusion or data breach, a well-known and endemic cybersecurity problem. To put it bluntly, something like Salt Typhoon was inevitable, and will likely happen again unless significant changes are made.

This is the access that the Chinese threat actor Salt Typhoon used to spy on Americans:

The Wall Street Journal first reported Friday that a Chinese government hacking group dubbed Salt Typhoon broke into three of the largest U.S. internet providers, including AT&T, Lumen (formerly CenturyLink), and Verizon, to access systems they use for facilitating customer data to law enforcement and governments. The hacks reportedly may have resulted in the “vast collection of internet traffic” from the telecom and internet giants. CNN and The Washington Post also confirmed the intrusions and that the U.S. government’s investigation is in its early stages.

Planet DebianScarlett Gately Moore: KDE Snap Updates, Kubuntu Updates, More life updates!

Icy morning, Witch Wells AZ

Life:

Last week we were enjoying springtime; this week winter has made a comeback! Good news on the broken-arm front: the infection is gone, so they can finally deal with the break itself again. I will have a less invasive surgery on April 25th to pull the bones back together so they can properly knit! If you can spare any change, please consider a donation to my continued healing and recovery, or just support my work 🙂

Kubuntu:

While testing the Beta I came across some crashy apps (namely PIM) due to AppArmor. I have uploaded fixed profiles for kmail, akregator, akonadiconsole, konqueror, and tellico.

KDE Snaps:

Added sctp support in Qt https://invent.kde.org/neon/snap-packaging/kde-qt6-core-sdk/-/commit/bbcb1dc39044b930ab718c8ffabfa20ccd2b0f75

This will allow me to finish a pyside6 snap and fix FreeCAD build.

Changed build type to Release in the kf6-core24-sdk which will reduce the size of kf6-core24 significantly.

Fixed a few startup errors in kf5-core24 and kf6-core24 snapcraft-desktop-integration.

Soumyadeep fixed wayland icons in https://invent.kde.org/neon/snap-packaging/kf6-core-sdk/-/merge_requests/3

KDE Applications 25.03.90 RC released to --candidate (I know it says 24.12.3; the version won’t be updated until the 25.04.0 release)

Kasts core24 fixed in --candidate

Kate now core24 with Breeze theme! --candidate

Neochat: Fixed missing QML and 25.04 dependencies in --candidate

Kdenlive now with Glaxnimate animations! --candidate

Digikam 8.6.0 now with scanner support in --stable

Kstars 3.7.6 released to --stable for realz, removed store-rejected plugs.

Thanks for stopping by!

LongNowThe Self-Domesticated Ape


We aren’t the only species on this planet to have domesticated another species. There is one kind of ancient ant that herds and cares for insect aphids in order to milk them of honeydew sugar. But we are the only species to have domesticated more than one species. Over time humans have domesticated dogs, cats, cows, horses, chickens, ducks, sheep, goats, camels, pigs, guinea pigs, and rabbits, among many others. We have modified their genes with selective breeding so that their behavior aligns with ours. For example, we have tweaked the genetic makeup of a wild dog so that it wants to guard our sheep. And we have designed wild cattle to allow us to milk it in exchange for food. In each case of domestication we alter genetics by clever breeding over time, using our minds to detect and select traits. In a very real sense, the tame dog and milk cow were invented by humans, and were among the earliest human inventions. Along each step of the process our ancestors imagined a better version of what they had, and then made a better version happen. Domestication is for the most part, an act of imagination.

One of the chief characteristics of domesticated animals is their reduced aggression compared to wild types. Tame dogs, cats, cattle and goats are much more tolerant of others and more social than their feral versions. This acquired tameness is why we can work closely with them. In addition, domestication brings morphological changes to the skulls of adults — they resemble the young more, with larger, wider eyes, smaller teeth, flatter, rounder faces, and more slender bones. Tame dogs look like wolf puppies, and domesticated cats more like lion kittens. 

This retention of juvenile traits into adulthood is called neoteny and is considered a hallmark of domestication. The reduction of certain types of aggression is also a form of neoteny. The behavior of domesticated animals is similar to that of juvenile animals: more trusting of strangers, less hostile aggression over threats, less violent in-group fighting. 

In the 01950s, the Russian geneticist Dmitry Belyaev started breeding wild silver foxes in captivity, selecting the friendliest of each generation to breed into the next. Each generation of less aggressive foxes displayed more puppy-like features: rounder, flatter heads, wider eyes, floppy ears. Within 20 generations he had bred domesticated foxes.

Later analysis of their genomes in 02018 showed the presence of a set of genes shared with other domesticated animals, suggesting that there are “domestication” genes. Some scientists propose that dozens of interacting genes form a “domestication syndrome” that will alter features and behaviors in a consistent direction across many species at once. 

Although wolves were domesticated into dogs in several regions of the world around 15 to 40 thousand years ago, they were not the first animals to be domesticated. We were. Homo sapiens may have been the first species to select for these genes. When anthropologists compare the morphological features of modern humans to our immediate ancestors like the Neanderthals and Denisovans, humans display neoteny. Humans resemble juvenile Neanderthals, with rounder, flatter faces, shorter jaws with smaller teeth, and more slender bones. And in fact the differences between a modern human skull and a Neanderthal skull parallel those between a dog and its wild wolf ancestor.

Comparisons of craniofacial traits. Top: Modern human vs. Neanderthal skull, showing reduced brow ridge, nasal projection, jaw projection, and tooth size in H. sapiens. Bottom: Dog vs. wolf skull, showing analogous reductions (e.g. shorter muzzle, smaller teeth) in the domesticated dog. From Theofanopoulou C, Gastaldon S, O’Rourke T, Samuels BD, Martins PT, Delogu F, et al. (2017) Self-domestication in Homo sapiens: Insights from comparative genomics. PLoS ONE 12(10): e0185306. https://doi.org/10.1371/journal.pone.0185306

The gene BAZ1B influences a network of developmental genes, and is part of one of the gene networks found in the domesticated silver foxes. In a rare human genetic disorder, the gene BAZ1B is duplicated, resulting in a person with longer jaws and longer teeth, and social awkwardness. In another rare genetic disorder called Williams-Beuren syndrome, the same BAZ1B gene is not doubled, it is missing. This omission results in "elfin" features, a rounder face, a short chin, and extreme over-friendliness and trust of strangers — a type of extreme neoteny. A network of developmental genes controlled by BAZ1B is common in all modern humans but absent in Neanderthals, suggesting our own juvenile-like domestication has been genetically selected.

What’s distinctive about humans is that Homo sapiens domesticated themselves. We are self-domesticated apes. Anthropologist Brian Hare characterizes recent human evolution (Late Pleistocene) as “Survival of the Friendliest”, arguing that in our self-domestication we favored prosociality — the tendency to be friendly, cooperative, and empathetic. We chose the most cooperative, the least aggressive, the least bullying types, and that trust in others resulted in greater prosperity, which in turn spread neoteny genes, and other domestication traits, into our populations.

Domesticated species often show increased playfulness, extended juvenile behavior, and even enhanced social learning abilities. Humans continued to extend their childhood far later than almost any other animal. This extended childhood enabled an extended time to learn beyond inherent instincts, but it also demanded greater parental resources and nuanced social bonds.

We are the first animals we domesticated. Not dogs. We first domesticated ourselves, and then we were able to domesticate dogs. Our domestication is not just about neoteny and reduced aggression and increased sociability. We also altered other genes and traits.

For at least a million years hominins have been using fire. Many animals and all apes have the manual dexterity to start a fire, but only hominins have the cognitive focus needed to ignite a fire from scratch and keep it going. Fires serve many purposes, including heat, light, protection from predators, annealing sharp points, and control burns for flushing out prey. But its chief consequence was fire’s ability to cook food. Cooking significantly reduced the time humans needed to forage, chew, and digest, freeing up time for other social activities. Cooking acted as a second stomach for humans, by pre-digesting hard-to-digest ingredients, releasing more nutrients that could be used to nourish a growing brain. Over many generations of cooking-fed humans, this invention altered our jaws and teeth, reduced our gut, and enlarged our brains. Our invention changed our genes.

Once we began to domesticate ungulates like cows and sheep, we began to consume their milk in many forms. This milk was especially important in raising children to healthy adults. But fairly quickly (on biological time scales, 8,000 years) in areas with domesticated ungulates, adults acquired the genetic ability to digest lactose. Again our invention altered our genes, enlarging our options. We changed ourselves in an elemental, foundational way.

In my 02010 book, What Technology Wants, I made this argument, which I believe is the first time anyone suggested that humans domesticated themselves:

We are not the same folks who marched out of Africa. Our genes have coevolved with our inventions. In the past 10,000 years alone, in fact, our genes have evolved 100 times faster than the average rate for the previous 6 million years. This should not be a surprise. As we domesticated the dog (in all its breeds) from wolves and bred cows and corn and more from their unrecognizable ancestors, we, too, have been domesticated. We have domesticated ourselves. Our teeth continue to shrink (because of cooking, our external stomach), our muscles thin out, our hair disappears. Technology has domesticated us. As fast as we remake our tools, we remake ourselves. We are coevolving with our technology, and so we have become deeply dependent on it. If all technology — every last knife and spear — were to be removed from this planet, our species would not last more than a few months. We are now symbiotic with technology….We have domesticated our humanity as much as we have domesticated our horses. Our human nature itself is a malleable crop that we planted 50,000 years ago and continue to garden even today.

Our self-domestication is just the start of our humanity. We are self-domesticated apes, but more important, we are apes that have invented ourselves. Just as the control of fire came about because of our mindful intentions, so did the cow and corn arise from our minds. Those are inventions as clear as the plow and the knife. And just as domesticated animals were inventions, as we self-domesticated, we self-invented ourselves, too. We are self-invented humans.

We invented our humanity. We invented cooking, we invented human language, we invented our sense of fairness, duty, and responsibility. All these came intentionally, out of our imaginations of what could be. To the fullest extent possible, all the traits that we call “human” in contrast to either “animal” or “nature” are traits that we created for ourselves. We self-selected our character, and crafted this being called human. In a real sense we collectively chose to be human.

We invented ourselves. I contend this is our greatest invention. Neither fire, the wheel, steam power, antibiotics, nor AI is the greatest invention of humankind. Our greatest invention is our humanity.

And we are not done inventing ourselves yet.

Worse Than FailureCodeSOD: Unnavigable

Do you know what I had forgotten until this morning? That VBScript (and thus, older versions of Visual Basic) doesn't require you to use parentheses when calling a function. Foo 5 and Foo(5) are the same thing.

Of course, why would I remember that? I thankfully haven't touched any of those languages since about… 2012. Which is actually a horrifyingly short time ago, back when I supported classic ASP web apps. Even when I did, I always used parentheses because I wanted my code to be something close to readable.

Classic ASP, there's a WTF for you. All the joy of the way PHP mixes markup and code into a single document, but with an arguably worse and weirder language.

Which finally, brings us to Josh's code. Josh worked for a traveling exhibition company, and that company had an entirely homebrewed CMS written in classic ASP. Here's a few hundred lines out of their navigation menu.

  <ul class=menuMain>
        <%  if menu = "1" then
                Response.Write "<li class='activ'><b></b><i></i><a href='/home.asp' title='Home'>Home</a></li>"
            else
                Response.Write "<li><a href='/home.asp' title='Home'>Home</a></li>"
            end if
            if  menu = "2" then
                Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/about_wc_homepage.asp' title='About World Challenge'>About us</a></li>"
            else
                Response.Write "<li><a href='/expeditions/about_wc_homepage.asp' title='About World Challenge'>About us</a></li>"
            end if
            if  menu = "3" then
                Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/book-a-school-expedition.asp' title='How to book'>How to book</a></li>"
            else
                Response.Write "<li><a href='/expeditions/book-a-school-expedition.asp' title='How to book'>How to book</a></li>"
            end if
            if  menu = "4" then
                Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/expeditions_home.asp' title='Expeditions'>Expeditions</a></li>"
            else
                Response.Write "<li><a href='/expeditions/expeditions_home.asp' title='Expeditions'>Expeditions</a></li>"
            end if 
            if  menu = "5" then
                Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/safety_home.asp' title='Safety'>Safety</a></li>"
            else 
                Response.Write "<li><a href='/expeditions/safety_home.asp' title='Safety'>Safety</a></li>"
            end if 
            if  menu = "6" then
                Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/mm_what_is_mm.asp' title='Fundraising support'>Fundraising</a></li>"
            else 
                Response.Write "<li><a href='/expeditions/mm_what_is_mm.asp' title='Fundraising support'>Fundraising</a></li>"
            end if 
            if  menu = "7" then
                Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/careers_home.asp' title='Work for us'>Work for us</a></li>"
            else
                Response.Write "<li><a href='/expeditions/careers_home.asp' title='Work for us'>Work for us</a></li>"
            end if          
            if  menu = "8" then
                Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/contact_us_home.asp' title='Contact us'>Contact us</a></li>"
            else 
                Response.Write "<li><a href='/expeditions/contact_us_home.asp' title='Contact us'>Contact us</a></li>"
            end if
        Response.Write "</ul>"
        Response.Write "<ul class='menuSub'>"
               if menu = "1" then
               end if
 
               if menu = "2" then   
                   if submenu = "1" then   
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/about_wc_who_we_are.asp'  title='Who we are'>Who we are</a></li>"
                   else   
                    Response.Write "<li><a href='/expeditions/about_wc_who_we_are.asp'title='Who we are'>Who we are</a></li>"
                   end if
                   if submenu = "2" then   
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/world_challenge_CSR.asp' title='CSR'>CSR</a></li>"
                   else
                    Response.Write "<li><a href='/expeditions/world_challenge_CSR.asp' title='CSR'>CSR</a></li>"
                   end if
 
                   if submenu = "3" then
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/World-Challenge-Accreditation.asp' title='Partners and accreditation'>Partners and accreditation</a></li>"
                   else
                    Response.Write "<li><a href='/expeditions/World-Challenge-Accreditation.asp' title='Partners and accreditation'>Partners and accreditation</a></li>"
                   end if
 
                   if submenu = "4" then
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/curriculum-links.asp' title='Curriculum links'>Curriculum links</a></li>"
                   else
                    Response.Write "<li><a href='/expeditions/curriculum-links.asp' title='Curriculum links'>Curriculum links</a></li>"
                   end if
 
                   if submenu = "5" then
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/expedition_advice.asp' title='Expedition advice'>Expedition advice</a></li>"
                   else   
                    Response.Write "<li><a href='/expeditions/expedition_advice.asp' title='Expedition advice'>Expedition advice</a></li>"
                   end if                   
                   if submenu = "6" then   
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/about_wc_press_and_publications.asp' title='Press resources'>Press resources</a></li>"
                   else
                    Response.Write "<li><a href='/expeditions/about_wc_press_and_publications.asp' title='Press resources'>Press resources</a></li>"
                   end if   
               end if
 
               if menu = "3" then
               Response.Write "<li></li>"
               end if
 
               if menu = "4" then
                   if submenu = "1" then   
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/exped_lh_dest_ca.asp' title='Central & North America'>Central and North America</a></li>"
                   else   
                    Response.Write "<li><a href='/expeditions/exped_lh_dest_ca.asp'  title='Central and North America'>Central and North America</a></li>"
                   end if   
                   if submenu = "2" then   
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/exped_lh_dest_sa.asp' title='South America'>South America</a></li>"
                   else   
                    Response.Write "<li><a href='/expeditions/exped_lh_dest_sa.asp'  title='South America'>South America</a></li>"
                   end if
                   if submenu = "3" then
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/exped_lh_dest_sea.asp' title='South East Asia'>South East Asia</a></li>"
                   else
                    Response.Write "<li><a href='/expeditions/exped_lh_dest_sea.asp' title='South East Asia'>South East Asia</a></li>"
                   end if
                   if submenu = "4" then
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/exped_lh_dest_asia.asp' title='Asia'>Asia</a></li>"
                   else
                    Response.Write "<li><a href='/expeditions/exped_lh_dest_asia.asp' title='Asia'>Asia</a></li>"
                   end if
                   if submenu = "5" then
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/exped_lh_dest_africa.asp' title='Africa'>Africa</a></li>"
                   else
                    Response.Write "<li><a href='/expeditions/exped_lh_dest_africa.asp' title='Africa'>Africa</a></li>"
                   end if
                   if submenu = "6" then
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/europe_school_expeditions.asp' title='Europe'>Europe</a></li>"
                   else
                    Response.Write "<li><a href='/expeditions/europe_school_expeditions.asp' title='Europe'>Europe</a></li>"
                   end if
                   if submenu = "7" then
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/community-projects.asp' title='Community projects'>Community projects</a></li>"
                   else
                    Response.Write "<li><a href='/expeditions/community-projects.asp' title='Community projects'>Community projects</a></li>"
                   end if
                   if submenu = "8" then
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/exped_indiv_home.asp' title='Independent'>Independent</a></li>"
                   else
                    Response.Write "<li><a href='/expeditions/exped_indiv_home.asp' title='Independent'>Independent</a></li>"
                   end if
               end if
 
               if menu = "5" then
                   if submenu = "1" then
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/safe-people.asp' title='Safe People'>Safe people</a></li>"
                   else
                    Response.Write "<li><a href='/expeditions/safe-people.asp' title='Safe People'>Safe people</a></li>"
                   end if
                   if submenu = "2" then
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/safe-place.asp' title='Safe places'>Safe places</a></li>"
                   else
                    Response.Write "<li><a href='/expeditions/safe-place.asp' title='Safe places'>Safe places</a></li>"
                   end if
                   if submenu = "3" then
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/safe-policies-practises.asp' title='Safe practices and policies'>Safe practices and policies</a></li>"
                   else
                    Response.Write "<li><a href='/expeditions/safe-policies-practises.asp' title='Safe practices and policies'>Safe practices and policies</a></li>"
                   end if
                   if submenu = "4" then
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/safe-resources.asp' title='Safe Resources'>Safe resources</a></li>"
                   else
                    Response.Write "<li><a href='/expeditions/safe-resources.asp' title='Safe Resources'>Safe resources</a></li>"
                   end if
                   if submenu = "5" then
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/safety_ops_centre.asp'  title='Operations Centre'>Operations Centre</a></li>"
                   else
                    Response.Write "<li><a href='/expeditions/safety_ops_centre.asp' title='Operations Centre'>Operations Centre</a></li>"
                   end if
                   if submenu = "6" then
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/travel_safety_course.asp' title='Travelsafe course'>Travelsafe course</a></li>"
                   else   
                    Response.Write "<li><a href='/expeditions/travel_safety_course.asp'  title='Travelsafe course'>Travelsafe course</a></li>"
                   end if
               end if  
            
               if menu = "6" then
 
'                  if submenu = "1" then   
'                   Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/fundraising-team.asp' title='Fundraising team'>Fundraising team</a></li>"
'                  else   
'                   Response.Write "<li><a href='/expeditions/fundraising-team.asp'  title='Fundraising team'>Fundraising team</a></li>"
'                  end if   
 
                   if submenu = "2" then   
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/mm_ideas.asp' title='Fundraising ideas'>Fundraising ideas</a></li>"
                   else   
                    Response.Write "<li><a href='/expeditions/mm_ideas.asp'  title='Fundraising ideas'>Fundraising ideas</a></li>"
                   end if                   
                   if submenu = "3" then
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/about_wc_events_challenger_events.asp'  title='Fundraising events'>Fundraising events</a></li>"
                   else
                    Response.Write "<li><a href='/expeditions/about_wc_events_challenger_events.asp' title='Fundraising events'>Fundraising events</a></li>"
                   end if                   
               end if
 
               if menu = "7" then
                   if submenu = "1" then
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/careers_leader_ops_overseas.asp' title='Lead an expedition'>Lead an expedition</a></li>"
                   else
                    Response.Write "<li><a href='/expeditions/careers_leader_ops_overseas.asp'  title='Lead an expedition'>Lead an expedition</a></li>"
                   end if
                   if submenu = "2" then
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/permanent_jobs_world_challenge.asp'  title='Office based positions'>Office based positions</a></li>"
                   else
                    Response.Write "<li><a href='/expeditions/permanent_jobs_world_challenge.asp' title='Office based positions'>Office based positions</a></li>"
                   end if
               end if
 
               if menu = "8" then
                   if submenu = "1" then
                    Response.Write "<li class='activ'><b></b><i></i><a href='/pages/forms-brochure.asp'  title='Request a brochure'>Request a brochure</a></li>"
                   else
                    Response.Write "<li><a href='/pages/forms-brochure.asp'  title='Request a brochure'>Request a brochure</a></li>"
                   end if
                   if submenu = "2" then
                    Response.Write "<li class='activ'><b></b><i></i><a rel='external' href='http://f.chtah.com/s/3/2069554126/signup.html'  title='Sign up for e-news'>Sign up for e-news</a></li>"
                   else
                    Response.Write "<li><a rel='external' href='http://f.chtah.com/s/3/2069554126/signup.html'  title='Sign up for e-news'>Sign up for e-news</a></li>"
                   end if
                   if submenu = "3" then
                    Response.Write "<li class='activ'><b></b><i></i><a href='/expeditions/about_wc_press_and_publications.asp'  title='Press resources'>Press resources</a></li>"
                   else
                    Response.Write "<li><a href='/expeditions/about_wc_press_and_publications.asp'  title='Press resources'>Press resources</a></li>"
                   end if
               end if %>
                  </ul>

This renders the whole menu, but based on the selected menu and submenu, it adds an activ class to the HTML elements. Which means that each HTML element is defined here twice, once with and once without the CSS class on it. I know folks like to talk about DRY code, but this code is SOGGY with repetition. Just absolutely dripping wet with the same thing multiple times. Moist.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsFace the Dawn

Author: Julian Miles, Staff Writer The battlefield is littered with carcasses to the point where soil has mixed with ichor to form a gritty green mud that shines as the searchlights swing by. I wave the site teams to either side. “Get the spotlights up! We’ll never find anything in this without brights.” Dosun of […]

The post Face the Dawn appeared first on 365tomorrows.

,

Cory DoctorowNimby and the D-Hoppers

Ben Templesmith's art for the comic adaptation of 'Nimby and the D-Hoppers', depicting a figure in powered armor flying through a slate-gray sky filled with abstract equations.

This week on my podcast, I once again read my 2003 Asimov’s Science Fiction Magazine story, “Nimby and the D-Hoppers.” The story has been widely reprinted (it was first published online in The Infinite Matrix in 2008), and was translated (by Elisabeth Vonarburg) into French for Solaris Magazine, as well as into Chinese, Russian, Hebrew, and Italian. The story was adapted for my IDW comic book series Cory Doctorow’s Futuristic Tales of the Here and Now by Ben Templesmith. I read this into my podcast 20 years ago, but I found myself wanting to revisit it.

Don’t get me wrong — I like unspoiled wilderness. I like my sky clear and blue and my city free of the thunder of cars and jackhammers. I’m no technocrat. But goddamit, who wouldn’t want a fully automatic, laser-guided, armor-piercing, self-replenishing personal sidearm?

Nice turn of phrase, huh? I finally memorized it one night, from one of the hoppers, as he stood in my bedroom, pointing his hand-cannon at another hopper, enumerating its many charms: “This is a laser-guided blah blah blah. Throw down your arms and lace your fingers behind your head, blah blah blah.” I’d heard the same dialog nearly every day that month, whenever the dimension-hoppers catapulted into my home, shot it up, smashed my window, dived into the street, and chased one another through my poor little shtetl, wreaking havoc, maiming bystanders, and then gating out to another poor dimension to carry on there.

Assholes.

It was all I could do to keep my house well-fed on sand to replace the windows. Much more hopper invasion and I was going to have to extrude its legs and babayaga to the beach. Why the hell was it always my house, anyway?


MP3

David BrinScience Fictional News & Updates - spring 2025

First, long-awaited news! My 1st novel -SUNDIVER- never had a hardcover, till now! Phantasia Press has issued a special, limited edition of SUNDIVER, finely-bound with interiors and gorgeous cover, all by the epic artist Jim Burns! Not cheap. But if you want a lovely edition with quality to survive several geological epochs...;-)

(BTW... people keep kvelling about potential Startide or Uplift War movies. But I think Sundiver is the obvious one! A murder mystery in which the victim gets dumped into the Sun? Take that on, CSI!)

Second, I'm pleased to announce new volumes in my Out of Time series of novels for teen readers who love adventure laced with history, science and other cool stuff. New books include Boondoggle by SF Legend Tom Easton & newcomer Torion Oey, plus Raising the Roof by R. James Doyle! All new titles are released by Amazing Stories.

Meanwhile, Open Road republished the earlier five Out of Time novels, including great tales by Nancy Kress, Sheila Finch, and Roger Allen. The shared motif... teens from across time are pulled into the 24th Century, asked to use their unique skills to help a future that's in peril! Among the characters who get 'yanked' into tomorrow are a young Arthur Conan Doyle, Winston Churchill, Joan of Arc's page and maybe... you!

All of the Out of Time books can be accessed (and assessed) here

== A special event ==

This is way cool. A video interview with two terrific academics concerning one of the ‘lost’ founders of modern science fiction – the 19th century author, Robert Duncan Milne – whom they are, in effect, resurrecting from obscurity in a soon-to-be-published book. Co-authored by Ari Brin! On the daringly named ‘cast “Every Single Sci-Fi Film Ever.” A new anthology - The Essential Robert Duncan Milne - was released in January. One of the best anthologies of classic SF I ever saw, along with cogent commentary.

== Lists of great Sci Fi! ==

An insightful top-ten Science Fiction Novels list about the general notion of humanity dealing with inscrutable alien minds - with mentions of Existence along with some great company, including Robert Charles Wilson’s terrific Blind Lake, Octavia Butler's Dawn, and Le Guin's The Left Hand of Darkness

And another list: SF novels that won both the Hugo and the Nebula… though that seems almost required, nowadays, now that the voting pools almost precisely overlap. 

Audacity has published a fine audio for your commute - David Brin on First Contact in "Existence,"- wherein I cover a wide range of topics, from AI to the Fermi Paradox.

This one is of actual - or likely - importance to human survival! The TASAT project (There's A Story About That) is doing great! I've touted it before - a special service I tried to bring into the world for almost 20 years. And now, thanks to master programmer Todd Zimmerman, it lives! Come by TASAT.org and see how there's a small but real chance that nerdy SciFi readers like YOU might one day save the world!


Among the many topics that have come up on the lively TASAT site has been great opening lines in science fiction. 


Well, here’s an older blog in which I compile some of my favorites – and many others appear in comments! Though my favorite opening sentence, from a recent novel, The Melody of Memory, goes like this:


“I was nine when my words saved a man’s life; it wasn’t till later that my words killed him.”


== And more sci fi news ==

Taking classic novels to the big screen... Certainly I expect wonderful things when Denis Villeneuve films Arthur Clarke's wonderful Rendezvous with Rama. Still, I nurse a fond hope that Villeneuve will consider splicing in elements from Greg Bear's magnificent novel Eon. Which expands exponentially on the rather spare (and just a little disappointing) vagueness of Arthur's story.


Sci Fi great Ed Lerner is interviewed here about fusion and anti-matter, electromagnetic bottles, the Alcubierre drive for warping space-time to get around the speed limit of light, and neutrino communications. Plus the Prime Directive, the Drake equation, the Fermi Paradox, scientific revolutions and evolutions, stealth technologies, and alien monitoring stations keeping an eye on Earth in the Kuiper belt and the Oort cloud.


Want escape? I read opening scenes of Existence. More is free at the book's website. Plus the vivid trailer with tons of great art by Patrick Farley! 


365 TomorrowsCosmic Shower

Author: R. J. Erbacher I had just stepped into my shower, having had to wait a full five minutes for the water to become hot enough. It took forever for the water temperature to get up to at least tepid in my apartment. Usually, it was either freezing cold or scalding with no middle ground. […]

The post Cosmic Shower appeared first on 365tomorrows.

,

Sam VargheseABC seeking cash when it is all talk and has nothing to show for it

The Australian Broadcasting Corporation is always crying poor and asking the government for more money for what it claims is a shortfall in funds that has grown over the years due to cuts by Coalition governments.

One doubts that the Australian public would begrudge the organisation the necessary cash were it to provide quality programming. But when its claims are bolstered by promos that show David Speers, Jane Norman and Patricia Karvelas, claiming that they are among the best political analysts in the country, then it is doubtful that the public will back the government coughing up.

Over the 27 and a bit years that I’ve watched the ABC, its quality has steadily fallen. It had many journalists — and I mean real journalists, not the arse-licking variety that haunts its corridors these days — and produced a lot of good journalism. There was both good news programming and cultural stuff as well.

And the head of the ABC at that time was always under pressure, even from his own staff, to keep quality standards up. Foremost among the ABC staff who held their own bosses to account were people like Kerry O’Brien and Jon Faine (the latter in ABC Radio) who did not hesitate to ask hard questions whenever they thought standards were dropping.

Jane Norman: one of the many incompetents in the ABC ranks. Photo: courtesy YouTube

Had their like been present over the last year or so, then the ABC would never have presented such shameful, one-sided, cowardly coverage of the Gaza conflict, caving in meekly to the Zionists who are always trying to muscle the media into buying their point of view.

Many would argue that the drop in quality has come about because of the dwindling funds, and that good journalists, no doubt seeking enhanced pay packets, have gone outside in search of money. But that is not the case. What has happened is that the attitude of the management has changed to accommodate everything but good journalistic values, and those staff who want to work in a good newsy environment have decided that it would be better to work elsewhere, where they do not feel frustrated by bosses who constantly seek to humour every lobby group, forgetting that news organisations have to stand against pressure in order to provide good coverage.

As standards have fallen, the ABC has resorted more and more to promotions to try and make things seem better than they are, and spin has thus come to predominate in the corporation’s communications. It is the worst policy a news organisation can adopt, but the ABC cannot be treated as a news organisation any more. It is more a PR outfit, trying to put a good face on an appalling performance.

The ABC has not made any good programs for a long time. This year, it broadcast a program titled Optics, about a fictitious PR company. It was just terrible. The program was made by one of the ABC’s mates, the males who make up the Chaser. Once they were good, now they are just terrible. There have also been a few lightweight programs like quizzes but again the standard has been uniformly low. The only decent program on the ABC is one called Hard Quiz which has copied its name from the BBC’s now-cancelled program Hard Talk. Tom Gleeson, who hosts Hard Quiz, is an intelligent person who is very good at repartee. The ABC depends so much on this show that it screens old episodes in the evening and a new episode once a week, on Wednesdays.

The same organisation, that once preached that the public would not be fooled by PR claims, now does the same thing itself. Its claim, that it is the best news organisation in Australia, is nothing more than high-grade bullshit. Appearance is important, but below that lies rubbish.

Asking for more cash now will only result in the public, now cynical about its claims, tending to regard the organisation with contempt.

Planet DebianRussell Coker: HP z840

Many PCs with DDR4 RAM have started going cheap on ebay recently. I don’t know how much of that is due to Windows 11 hardware requirements and how much is people replacing DDR4 systems with DDR5 systems.

I recently bought a z840 system on eBay; it’s much like the z640 that I recently made my workstation [1] but is designed strictly as a 2 CPU system. The z640 can run with 2 CPUs if you have a special expansion board for a second CPU, which is very expensive on eBay and which doesn’t appear to have good airflow potential for cooling. The z840 also has a slightly larger case which supports more DIMM sockets and allows better cooling.

The z640 and z840 take the same CPUs if you use the E5-2xxx series of CPU that is designed for running in 2-CPU mode. The z840 runs DDR4 RAM at 2400 as opposed to 2133 for the z640 for reasons that are not explained. The z840 has more PCIe slots which includes 4*16x slots that support bifurcation.

The z840 that I have has the HP Z-Cooler [2] installed. The coolers are mounted on a 45 degree angle (the model depicted at the right top of the first page of that PDF) and the system has a CPU shroud with fans that mount exactly on top of the CPU heatsinks and duct the hot air out without going over other parts. The technology of the z840 cooling is very impressive. When running two E5-2699A CPUs which are listed as “145W typical TDP” with all 44 cores in use the system is very quiet. It’s noticeably louder than the z640 but is definitely fine to have at your desk. In a typical office you probably wouldn’t hear it when it’s running full bore. If I was to have one desktop PC or server in my home the z840 would definitely be the machine I choose for that.

I decided to make the z840 a build server to share the resource with friends and to use for group coding projects. I often have friends visit with laptops to work on FOSS stuff and a 44 core build server is very useful for that.

The system is by far the fastest system I’ve ever owned even though I don’t have fast storage for it yet. But 256G of RAM allows enough caching that storage speed doesn’t matter too much.

Here is building the SE Linux “refpolicy” package on the z640 with E5-2696 v3 CPU and the z840 with two E5-2699A v4 CPUs:

257.10user 47.18system 1:40.21elapsed 303%CPU (0avgtext+0avgdata 416408maxresident)k
66904inputs+1519912outputs (74major+8154395minor)pagefaults 0swaps

222.15user 24.17system 1:13.80elapsed 333%CPU (0avgtext+0avgdata 416192maxresident)k
5416inputs+0outputs (64major+8030451minor)pagefaults 0swaps

Here is building Warzone2100 on the z640 and the z840:

6887.71user 178.72system 16:15.09elapsed 724%CPU (0avgtext+0avgdata 1682160maxresident)k
1555480inputs+8918768outputs (114major+27133734minor)pagefaults 0swaps

6055.96user 77.05system 8:00.20elapsed 1277%CPU (0avgtext+0avgdata 1682100maxresident)k
117640inputs+0outputs (46major+11460968minor)pagefaults 0swaps

It seems that the refpolicy package can’t use many more than 18 cores as it is only 37% faster when building with 44 cores available. Building Warzone is slightly more than twice as fast so it can really use all the available cores. According to Passmark the E5-2699A v4 is 22% faster than the E5-2696 v3.

I highly recommend buying a z640 if you see one at a good price.

Planet DebianSteinar H. Gunderson: Cisco 2504 password extraction

I needed this recently, so I took a trip into Ghidra and learned enough to pass it on:

If you have an AireOS-based wireless controller (Cisco 2504, vWLC, etc.; basically any of the now-obsolete Cisco WLC series), and you need to pick out the password, you can go look in the XML files in /mnt/application/xml/aaaapiFileDbCfgData.xml (if you have a 2504, you can just take out the CompactFlash card and mount the fourth partition or run strings on it; if it's a vWLC you can use the disk image similarly). You will find something like (hashes have been changed to not leak my own passwords :-) ):

    <userDatabase index="0" arraySize="2048">
      <userName>61646d696e000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000</userName>
      <serviceType>6</serviceType>
      <WLAN-id>0</WLAN-id>
      <accountCreationTimestamp>946686833</accountCreationTimestamp>
      <passwordStore>
        <ps_type>PS_STATIC_AES128CBC_SHA1</ps_type>
        <iv>3f7b4fcfcd3b944751a8614ebf80a0a0</iv>
        <mac>874d482bbc56b24ee776e80bbf1f5162</mac>
        <max_passwd_len>50</max_passwd_len>
        <passwd_len>16</passwd_len>
        <passwd>8614c0d0337989017e9576b82662bc120000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000</passwd>
      </passwordStore>
      <telnetEnable>1</telnetEnable>
    </userDatabase>

“userName” is obviously just “admin” in plain hex. Ignore the HMAC; it's seemingly only used for integrity checking. The password is encrypted with a static key embedded in /sbin/switchdrvr, namely 834156f9940f09c0a8d00f019f850005. So you can just ask OpenSSL to decrypt it:

> printf $( echo '8614c0d0337989017e9576b82662bc12' | sed 's/\(..\)/\\x&/g' ) | openssl aes-128-cbc -d -K 834156f9940f09c0a8d00f019f850005 -iv 3f7b4fcfcd3b944751a8614ebf80a0a0 | xxd -g 1
00000000: 70 61 73 73 77 6f 72 64                          password

And voila. (There are some other passwords floating around there in the XML files, where I believe that this master key is used to encrypt other keys, and occasionally things seem to be double-hex-encoded, but I haven't really bothered looking at it.)

When you have the actual key, it's easy to just search for it and see that others have found the same thing, but only for “show run” output, so searching for e.g. “PS_STATIC_AES128CBC_SHA1” found nothing. But now at least you know.

Update: Just to close the loop: The contents of <mac> is an HMAC-SHA1 of a concatenation of 00 00 00 01 <iv> <passwd> (supposedly maybe 01 00 00 00 instead, depending on the endianness of the underlying system; both MIPS and x86 controllers exist), where <passwd> is the encrypted password (without the extra tacked-on zeros), and the HMAC key is 44C60835E800EC06FFFF89444CE6F789. So it's doubly useless for password cracking; just decrypt the plaintext password instead. :-)
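
If you want to sanity-check that against your own dump, something along these lines should work (openssl's dgst can take the HMAC key as hex; note that the hex values quoted in this post were altered, so they won't reproduce the <mac> above, use the values from your own XML):

# 00 00 00 01 prefix, then the IV, then the encrypted password (without the padding zeros)
printf '\x00\x00\x00\x01' > /tmp/mac-input
printf $( echo '3f7b4fcfcd3b944751a8614ebf80a0a08614c0d0337989017e9576b82662bc12' | sed 's/\(..\)/\\x&/g' ) >> /tmp/mac-input
# HMAC key from above; if the digest doesn't match, try a 01 00 00 00 prefix instead (endianness)
openssl dgst -sha1 -mac HMAC -macopt hexkey:44C60835E800EC06FFFF89444CE6F789 /tmp/mac-input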

Planet DebianRussell Coker: More About the HP ML110 Gen9 and z640

In May 2021 I bought a ML110 Gen9 to use as a deskside workstation [1]. I started writing this post in April 2022 when it had been my main workstation for almost a year. While this post was in a draft state in Feb 2023 I upgraded it to an 18 core E5-2696 v3 CPU [2]. It’s now March 2025 and I have replaced it.

Hardware Issues

My previous state with this was not having adequate cooling to allow it to boot and not having a PCIe power cable for a video card. As an experiment I connected the CPU fan to the PCIe fan power and discovered that all power and monitoring wires for the CPU and PCIe fans are identical. This allowed me to buy a CPU fan which was cheaper ($26.09 including postage) and easier to obtain than a PCIe fan (presumably due to CPU fans being more commonly used and manufactured in larger quantities). I had to be creative in attaching the CPU fan as its cable wasn’t long enough to reach the usual location for a PCIe fan. The PCIe fan also required a baffle to direct the air to the right place which annoyingly HP apparently doesn’t ship with the low end servers, so I made one from a Corn Flakes packet and duct tape.

The Wikipedia page listing AMD GPUs lists many newer ones that draw less than 80W and don’t need a PCIe power cable. I ordered a Radeon RX560 4G video card which cost $246.75. It only uses 8 lanes of PCIe but that’s enough for me, the only 3D game I play is Warzone 2100 which works well at 4K resolution on that card. It would be really annoying if I had to just spend $246.75 to get the system working, but I had another system in need of a better video card which had a PCIe power cable so the effective cost was small. I think of it as upgrading 2 systems for $123 each.

The operation of the PCIe video card was a little different than non-server systems. The built in VGA card displayed the hardware status at the start and then kept displaying that after the system had transitioned to PCIe video. This could be handy in some situations if you know what it’s doing but was confusing initially.

Booting

One insidious problem is that when booting in “legacy” mode the boot process takes an unreasonably long time and often hangs; the UEFI implementation on this system seems much more reliable and also supports booting from NVMe.

Even with UEFI the boot process on this system was slow. Also the early stage of the power on process involves fans being off and the power light flickering which leads you to think that it’s not booting and needs to have the power button pressed again – which turns it off. The Dell power on sequence of turning most LEDs on and instantly running the fans at high speed leaves no room for misunderstanding. This is also something that companies making electric cars could address. When turning on a machine you should never be left wondering if it is actually on.

Noise

This was always a noisy system. When I upgraded the CPU from an 8 core with 85W “typical TDP” to an 18 core with 145W “typical TDP” it became even louder. Then over time as dust accumulated inside the machine it became louder still until it was annoyingly loud outside the room when all 18 cores were busy.

Replacement

I recently blogged about options for getting 8K video to work on Linux [3]. This requires PCIe power, which the z640s have (all the ones I have seen have it; I don’t know if all that HP made do) and which the cheaper models in the ML-110 line don’t have. Since then I have ordered an Intel Arc card which apparently has 190W TDP. There are adaptors to provide PCIe power from SATA or SAS power which I could have used, but having an E5-2696 v3 CPU that draws 145W [4] and a GPU that draws 190W [4] in a system with a 350W PSU doesn’t seem viable.

I replaced it with one of the HP z640 workstations I got in 2023 [5].

The current configuration of the z640 has 3*32G RDIMMs compared to the ML110 having 8*32G, going from 256G to 96G is a significant decrease but most tasks run well enough like that. A limitation of the z640 is that when run with a single CPU it only has 4 DIMM slots which gives a maximum of 512G if you get 128G LRDIMMs, but as all DDR4 DIMMs larger than 32G are unreasonably expensive at this time the practical limit is 128G (which costs about $120AU). In this case I have 96G because the system I’m using has a motherboard problem which makes the fourth DIMM slot unusable. Currently my desire to get more than 96G of RAM is less than my desire to avoid swapping CPUs.

At this time I’m not certain that I will make my main workstation the one that talks to an 8K display. But I really want to keep my options open and there are other benefits.

The z640 boots faster. It supports PCIe bifurcation (with a recent BIOS) so I now have 4 NVMe devices in a single PCIe slot. It is very quiet, the difference is shocking. I initially found it disconcertingly quiet.

The biggest problem with the z640 is having only 4 DIMM sockets and the particular one I’m using has a problem limiting it to 3. Another problem with the z640 when compared to the ML110 Gen9 is that it runs the RAM at 2133 while the ML110 runs it at 2400, that’s a significant performance reduction. But the benefits outweigh the disadvantages.

Conclusion

I have no regrets about buying the ML-110. It was the only DDR4 ECC system that was in the price range I wanted at the time. If I knew that the z640 systems would run so quietly then I might have replaced it earlier. But it was only late last year that 32G DIMMs became affordable, before then I had 8*16G DIMMs to give 128G because I had some issues of programs running out of memory when I had less.

365 TomorrowsThe Weight of a Stamp

Author: Jennifer Peaslee The stale air of the Interplanetary Dynamics office reflected the collective mood of its desk jockeys. Ash Zendar, stewing in a stiff-collared uniform, barely glanced at the form in front of them before stamping approval for a three-cycle visit from the dangerous K’noth planet. Number nine hundred and ninety-eight. Today, Ash’s five […]

The post The Weight of a Stamp appeared first on 365tomorrows.

,

Planet DebianGunnar Wolf: Naming things revisited

How long has it been since you last saw a conversation over different blogs syndicated at the same planet? Well, it’s one of the good memories of the early 2010s. And there is an opportunity to re-engage! 😃

I came across Evgeni’s post “naming things is hard” in Planet Debian. So, what names have I given my computers?

I have had many since the mid-1990s. I also had several during the decade before that, but before Linux, my computers didn’t have a formal name. Naming my computers is something nice Linux gave me.

I have forgotten many. Some of the names I have used:

  • My years in Iztacala: I worked as a sysadmin between 1999 and 2003. When I arrived, we already had two servers, campus and tlali, and one computer pending installation, ollin. The credit for their names is not mine.
    • campus: A mighty SPARCstation 5! Because it was the main (and for some time, the only!) server in our campus.
    • tlali: A regular PC used as a Linux server. “Tlali” means something like lands in náhuatl, the prehispanic language spoken in central Mexico. My workplace was Iztacala, which translates as “the place where there are white houses”; “tlali” and “cali” are related words.
    • ollin: A big IBM RS/6000 system running AIX. It came to us, probably already obsolete, as a (useless) donation from Fundación UNAM; I don’t recall the exact model, but it looked very much like this one. We had no software for it, and frankly… never really got it to be productive. Funnily, its name “Ollin” means “movement” in Náhuatl. I added some servers to the lineup during the two years I was in Iztacala:
    • tlamantli: An Alpha 21164 server that doubled as my desktop. Given the tradition in Iztacala of naming things in Náhuatl, but trying to be somewhat funny, tlamantli just means a thing; I understand the word is usually bound to a quantifier.
    • tepancuate: A regular PC system we set up with OpenBSD as a firewall. It means “wall” in Náhuatl.
  • Following the first CONSOL (National Free Software Conference), I was invited to work as a programmer at UPN, Universidad Pedagógica Nacional in 2003–2004. There I was not directly in charge of any of the servers (I mostly used ajusco, managed by Víctor, named after the mountain on whose slopes our campus was). But my only computer there was:
    • shmate: Meaning old rag in Yiddish. The word shmate is used like thingy, although it would usually mean old and slightly worn-out thingy. It was a quite nice machine, though. I had a Pentium 4 with 512MB RAM, not bad for 2003!
  • I started my present work at Instituto de Investigaciones Económicas, UNAM 20 years ago(!), in 2005. Here I am a systems administrator, so naturally I am in charge of the servers. And over the years, we have had a fair share of machines:
    • mosca: is my desktop. It has changed hardware several times (of course) over the years, but it’s still the same Debian Sid install I did in January 2005 (I must have reinstalled once, when I got it replaced by an AMD64). Its name is the Spanish name for the common fly. I have often used it to describe my work, since I got in the early 1990s an automated bilingual translator called TRANSLATE; it came on seven 5.25″ floppies. As a teenager, I somehow got my hands on a copy, and installed it in my 80386SX. Fed it its own README to see how it fared. And the first sentence made me burst in laughter: «TRANSLATE performs on the fly translation» ⇒ «TRADUCE realiza traducción sobre la mosca». Starting then, I always think of «on the fly» as «sobre la mosca». As Groucho said, I guess… Time flies like an arrow, but fruit flies like a banana.
    • lafa: When I got there, we didn’t have any servers; for some time, I took one of the computer lab’s systems to serve our web page and receive mail. But when we got some budget approved, we bought a fsckin-big server. Big as in four-rack-units. Double CPUs (not multicore, but two independent early Xeon CPUs, if I’m not mistaken; still, it was a 32-bit system). לאפה (lafa) is a big, more flexible kind of Arab bread than pita; I loved it when I lived in Israel. And there is an album (and song) by Teapacks, an Israeli group I am very fond of, «hajaim shelja belafa» (your life in a lafa), saying, «hey, brother! Your life is in a lafa. You throw everything in a big pita. You didn’t have time to chew, you already swallowed it».
    • joma: Our firewall. חומה means wall in Hebrew.
    • baktun: lafa was great, but over the years, it got old. After many years, I finally got the Institute to buy a second server. We got it in December 2012. There was a lot of noise around then because the world was supposed to end on 2012.12.21, as the Mayan calendar reached a full long cycle. This long cycle is called /baktun/. So, it was fitting as the name of the new server.
    • teom: As lafa was almost immediately decommissioned and turned into a virtual machine in the much bigger baktun, I wanted to split services, make off-hardware backups, and such. Almost two years later, my request was approved and we bought a second server. But instead of buying it from a “regular” provider, we got it off a stash of machines bought by our university’s central IT entity. To my surprise, it had the exact same hardware configuration as baktun, bought two years earlier. Even the serial number was absurdly close. So, I had it as baktun’s long-lost twin. Hence, תְּאוֹם (transliterated as teom), the Hebrew word for twin. About a year after teom arrived in my life, my twin children were also born, but their naming followed a completely different logic process than my computers 😉
  • At home or on the road: I am sure I am missing several systems over the years.
    • pato: The earliest system I had that I remember giving a name to. I built a 80386SX in 1991, buying each component separately. The box had a 1-inch square for integrators to put their branding — And after some time, I carefully printed and applied a label that said Catarmáquina PATO (the first word, very small). Pato (duck) is how we’d call a no-brand system. Catarmáquina because it was the system where I ran my BBS, CatarSYS (1992-1994).
    • malenkaya: In 2008 I got a 9″ Acer Aspire One netbook (Atom N270 i386, 1GB RAM). I really loved that machine! Although it was quite limited, it was my main computer while on the road for almost five years. malenkaya means small (for female) in Russian.
    • matlalli: After malenkaya started being too limited for my regular use, I bought its successor Acer Aspire One model. This one was way larger (10.1 inches screen) and I wasn’t too happy about it at the beginning, but I ended up loving it. So much, in fact, that we bought at least four very similar such computers for us and our family members. This computer was dirt cheap, and endured five further years of lugging everywhere. matlalli is due to its turquoise color: it is the Náhuatl word for blue or green.
    • cajita: In 2014 I got a beautiful Cubox i4 Pro computer. It took me some time to get it to boot and be generally useful, but it ended up being my home server for many years, until I had a power supply malfunction which bricked it. cajita means little box in Spanish.
    • pitentzin: Another 10.1″ Acer Aspire One (the last in the lineup; the CPU is a Celeron 877, so it does run AMD64, and it supports up to 16GB RAM, I think I have it with 12). We originally bought it for my family in Argentina, but they didn’t really use it much, and after a couple of years we got it back. We decided it would be the computer for the kids, at least for the time being. And although it is a 2013 laptop, it’s still our everyday media station driver. Oh, and the name pitentzin? Náhuatl for /children/.
    • tliltik: In 2018, I bought a second-hand Thinkpad X230. It was my daily driver for about three years. I reflashed its firmware with CoreBoot, and repeated the experience for seven people IIRC in DebConf18. With it, I learned to love the Thinkpad keyboard. Naturally for a thinkpad, tliltik means black in Náhuatl.
    • uesebe: When COVID struck, we were all sent home, and my university lent me a nice recently bought Intel i7 HP laptop. At first, I didn’t want to mess up its Windows install (so I set up a USB-drive-based installation, hence the name uesebe); when it was clear the lockdown was going to be long (and that tliltik had too many aches to be used for my daily work), I transferred the install to its HDD and used it throughout the pandemic, until mid 2022.
    • bolex: I bought this computer for my father in 2020. After he passed away in May 2022, I took his computer, and named it bolex because that’s the brand of the 8mm cinema camera he loved and had since 1955, and with which he created most of his films. It is really an entry-level machine, though (a single-core, dual-threaded Celeron), and it was too limited when I started distance-teaching again, so I had to store it as an emergency system.
    • yogurtu: During the pandemics, I spent quite a bit of time fiddling with the Raspberry Pi family. But all in all, while they are nice machines for many uses, they are too limited to be daily drivers. Or even enough for taking i.e. to Debconf and have them be my conference computer. I bought an almost-new-but-used (≈2 year old) Yoga C630 ARM laptop. I often brag about my happy experience with it, and how it brings a reasonably powerful ARM Linux system to my everyday life. In our last DebConf, I didn’t even pick up my USB-C power connector every day; the battery just lasts over ten hours of active work. But I’m not here doing ads, right? yogurtu naturally is derived from the Yoga brand it has, but is taken from Yogurtu Nghé, a fictional character by the Argentinian comical-musical group Les Luthiers, that has marked my life.
    • misnenet: Towards mid 2023, when it was clear that bolex would not be a good daily driver, and considering we would be spending six months in Argentina, I bought a new desktop system. It seems I have something for small computers: I decided for a refurbished HP EliteDesk 800 G5 Mini i7 system. I picked it because, at close to 18×18×3.5cm it perfectly fits in my DebConf18 bag. A laptop, it is clearly not, but it can easily travel with me when needed. Oh, and the name? Because for this model, HP uses different enclosures based on the kind of processor: The i3 model has a flat, black aluminum top… But mine has lots of tiny holes, covering two areas of roughly 15×7cm, with a tiny hole every ~2mm, and with a solid strip between them. Of course, מִסְנֶנֶת (misnenet, in Hebrew) means strainer.

Planet DebianGuido Günther: Booting an Android custom kernel on a Pixel 3a for QMI debugging

As you might know I'm not much of an Android user (let alone developer), but in order to figure out how something low level works you sometimes need to peek at how vendor kernels handle this. For that it is often useful to add additional debugging.

One such case is QMI communication going on in Qualcomm SOCs. Joel Selvaraj wrote some nice tooling for this.

To make use of this, a rooted device and a small kernel patch are needed, and what would be a no-brainer with Linux Mobile took me a moment to get working on Android. Here are the steps I took on a Pixel 3a to first root the device via Magisk, then build the patched kernel and put that into a boot.img to boot it.

Flashing the factory image

If you still have Android on the device you can skip this step.

You can get Android 12 from developers.google.com. I've downloaded sargo-sp2a.220505.008-factory-071e368a.zip. Then put the device into Fastboot mode (Power + Vol-Down), connect it to your PC via USB, unzip/unpack the archive and reflash the phone:

unpack sargo-sp2a.220505.008-factory-071e368a.zip
./flash-all.sh

This wipes your device! I had to run it twice since it would time out on the first run. Note that this unpacked zip contains another zip (image-sargo-sp2a.220505.008.zip) which will become useful below.

Enabling USB debugging

Now boot Android and enable Developer mode by going to Settings → About, then touching Build Number (at the very bottom) 7 times.

Go back one level, then go to System → Developer Options and enable "USB Debugging".

Obtaining boot.img

There are several ways to get boot.img. If you just flashed Android above then you can fetch boot.img from the already mentioned image-sargo-sp2a.220505.008.zip:

unzip image-sargo-sp2a.220505.008.zip boot.img

If you want to fetch the exact boot.img from your device you can use TWRP (see the very end of this post).

Becoming root with Magisk

Being able to su via adb will later be useful to fetch kernel logs. For that we first download Magisk as an APK. At the time of writing v28.1 is current.

Once downloaded we upload the APK and the boot.img from the previous step onto the phone (which needs to have Android booted):

adb push Magisk-v28.1.apk /sdcard/Download
adb push boot.img /sdcard/Download

In Android open the Files app, navigate to /sdcard/Download and install the Magisk APK by opening it.

We now want to patch boot.img to get su via adb to work (so we can run dmesg). This happens by hitting Install in the Magisk app, then "Select a file to patch". You then select the boot.img we just uploaded.

The installation process will create a magisk_patched-<random>.img in /sdcard/Download. We can pull that file via adb back to our PC:

adb pull /sdcard/Download/magisk_patched-28100_3ucVs.img

Then reboot the phone into fastboot (adb reboot bootloader) and flash it (this is optional, see below):

fastboot flash boot magisk_patched-28100_3ucVs.img

Now boot the phone again, open the Magisk app, go to SuperUser at the bottom and enable Shell.

If you now connect to your phone via adb again, su should work:

adb shell
su

As noted above, if you want to keep your Android installation pristine you don't even need to flash this Magisk-enabled boot.img. I've flashed it so I have su access for other operations too. If you don't want to flash it you can still test-boot it via:

fastboot boot magisk_patched-28100_3ucVs.img

and then perform the same adb shell su check as above.

Building the custom kernel

For our QMI debugging to work we need to patch the kernel a bit and place that in boot.img too, so let's build the kernel first. For that we install the necessary tools (which are thankfully packaged in Debian) and fetch the Android kernel sources:

sudo apt install repo android-platform-tools-base kmod ccache build-essential mkbootimg
mkdir aosp-kernel && cd aosp-kernel
repo init -u https://android.googlesource.com/kernel/manifest -b android-msm-bonito-4.9-android12L
repo sync

With that we can apply Joel's kernel patches and also compile in the touch controller driver, so we don't need to worry about whether the modules in the initramfs match the kernel. The kernel sources are in private/msm-google. I've just applied the diffs on top with patch, modified the defconfig and committed the changes. The resulting tree is here.

We then build the kernel:

PATH=/usr/sbin:$PATH ./build_bonito.sh

The resulting kernel is at ./out/android-msm-pixel-4.9/private/msm-google/arch/arm64/boot/Image.lz4-dtb.

In order to boot that kernel I found it simplest to just replace the kernel in the Magisk-patched boot.img, as we have that already. In case you have deleted it for any reason, we can always fetch the current boot.img from the phone via TWRP (see below).

Preparing a new boot.img

To replace the kernel in our Magisk-enabled magisk_patched-28100_3ucVs.img from above with the just-built kernel, we can use mkbootimg. I basically copied the steps we're using when building the boot.img on the Linux Mobile side:

ARGS=$(unpack_bootimg --format mkbootimg --out tmp --boot_img magisk_patched-28100_3ucVs.img)
CLEAN_PARAMS="$(echo "${ARGS}" | sed -e "s/ --cmdline '.*'//" -e "s/ --board '.*'//")"
cp android-kernel/out/android-msm-pixel-4.9/private/msm-google/arch/arm64/boot/Image.lz4-dtb tmp/kernel
mkbootimg -o "boot.patched.img" ${CLEAN_PARAMS} --cmdline "${ARGS}"

This will give you a boot.patched.img with the just built kernel.

Boot the new kernel via fastboot

We can now boot the new boot.patched.img. No need to flash that onto the device for that:

fastboot boot boot.patched.img
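
A quick sanity check, not part of the original procedure: once the device is up on the test image, the kernel version string should show your local build (user, host and build date) rather than the stock Google one.

# the patched kernel's banner should differ from the stock kernel's
adb shell su -c 'uname -a'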

Fetching the kernel logs

With that we can fetch the kernel logs with the debug output via adb:

adb shell su -c 'dmesg -t' > dmesg_dump.xml

or already filtering out the QMI commands:

adb shell su -c 'dmesg -t'  | grep "@QMI@" | sed -e "s/@QMI@//g" &> sargo_qmi_dump.xml

That's it. You can apply this method for testing out other kernel patches as well. If you want to apply the above to other devices you basically need to make sure you patch the right kernel sources; the other steps should be very similar.

In case you just need a rooted boot.img for sargo you can find a patched one here.

If this procedure can be improved / streamlined somehow please let me know.

Appendix: Fetching boot.img from the phone

If, for some reason, you lost boot.img somewhere along the way, you can always use TWRP to fetch the boot.img currently in use on your phone.

First get TWRP for the Pixel 3a. You can boot that directly by putting your device into fastboot mode, then running:

fastboot boot twrp-3.7.1_12-1-sargo.img

Within TWRP select Backup → Boot and back up the file. You can then use adb shell to locate the backup in /sdcard/TWRP/BACKUPS/ and pull it:

adb pull /sdcard/TWRP/BACKUPS/97GAY10PWS/2025-04-02--09-24-24_SP2A220505008/boot.emmc.win

You now have the device's boot.img on your PC and can e.g. replace the kernel or make modifications to the initramfs.

Krebs on SecurityCyber Forensic Expert in 2,000+ Cases Faces FBI Probe

A Minnesota cybersecurity and computer forensics expert whose testimony has featured in thousands of courtroom trials over the past 30 years is facing questions about his credentials and an inquiry from the Federal Bureau of Investigation (FBI). Legal experts say the inquiry could be grounds to reopen a number of adjudicated cases in which the expert’s testimony may have been pivotal.

One might conclude from reading Mr. Lanterman’s LinkedIn profile that he has a degree from Harvard University.

Mark Lanterman is a former investigator for the U.S. Secret Service Electronics Crimes Task Force who founded the Minneapolis consulting firm Computer Forensic Services (CFS). The CFS website says Lanterman’s 30-year career has seen him testify as an expert in more than 2,000 cases, with experience in cases involving sexual harassment and workplace claims, theft of intellectual property and trade secrets, white-collar crime, and class action lawsuits.

Or at least it did until last month, when Lanterman’s profile and work history were quietly removed from the CFS website. The removal came after the Hennepin County Attorney’s Office said it was notifying parties to ten pending cases that it was unable to verify Lanterman’s educational and employment background. The county attorney also said the FBI is now investigating the allegations.

Those allegations were raised by Sean Harrington, an attorney and forensics examiner based in Prescott, Wisconsin. Harrington alleged that Lanterman lied under oath in court on multiple occasions when he testified that he has a Bachelor of Science and a Master’s degree in computer science from the now-defunct Upsala College, and that he completed his postgraduate work in cybersecurity at Harvard University.

Harrington’s claims gained steam thanks to digging by the law firm Perkins Coie LLP, which is defending a case wherein a client’s laptop was forensically reviewed by Lanterman. On March 14, Perkins Coie attorneys asked the judge (PDF) to strike Lanterman’s testimony because neither he nor they could substantiate claims about his educational background.

Upsala College, located in East Orange, N.J., operated for 102 years until it closed in 1995 after a period of declining enrollment and financial difficulties. Perkins Coie told the court that they’d visited Felician University, which holds the transcripts for Upsala College during the years Lanterman claimed to have earned undergraduate and graduate degrees. The law firm said Felician had no record of transcripts for Lanterman (PDF), and that his name was absent from all of the Upsala College student yearbooks and commencement programs during that period.

Reached for comment, Lanterman acknowledged he had no way to prove he attended Upsala College, and that his “postgraduate work” at Harvard was in fact an eight-week online cybersecurity class called HarvardX, which cautions that its certificates should not be considered equivalent to a Harvard degree or a certificate earned through traditional, in-person programs at Harvard University.

Lanterman has testified that his first job after college was serving as a police officer in Springfield Township, Pennsylvania, although the Perkins Coie attorneys noted that this role was omitted from his resume. The attorneys said when they tried to verify Lanterman’s work history, “the police department responded with a story that would be almost impossible to believe if it was not corroborated by Lanterman’s own email communications.”

As recounted in the March 14 filing, Lanterman was deposed on Feb. 11, and the following day he emailed the Springfield Township Police Department to see if he could have a peek at his old personnel file. On Feb. 14, Lanterman visited the Springfield Township PD and asked to borrow his employment record. He told the officer he spoke with on the phone that he’d recently been instructed to “get his affairs in order” after being diagnosed with a grave heart condition, and that he wanted his old file to show his family about his early career.

According to Perkins Coie, Lanterman left the Springfield Township PD with his personnel file, and has not returned it as promised.

“It is shocking that an expert from Minnesota would travel to suburban Philadelphia and abscond with his decades-old personnel file to obscure his background,” the law firm wrote. “That appears to be the worst and most egregious form of spoliation, and the deception alone is reason enough to exclude Lanterman and consider sanctions.”

Harrington initially contacted KrebsOnSecurity about his concerns in late 2023, fuming after sitting through a conference speech in which Lanterman shared documents from a ransomware victim and told attendees it was because they’d refused to hire his company to perform a forensic investigation on a recent breach.

“He claims he was involved in the Martha Stewart investigation, the Bernie Madoff trial, Paul McCartney’s divorce, the Tom Petters investigation, the Denny Hecker investigation, and many others,” Harrington said. “He claims to have been invited to speak to the Supreme Court, claims to train the ‘entire federal judiciary’ on cybersecurity annually, and is a faculty member of the United States Judicial Conference and the Judicial College — positions which he obtained, in part, on a house of fraudulent cards.”

In an interview this week, Harrington said court documents reveal that at least two of Lanterman’s previous clients complained CFS had held their data for ransom over billing disputes. In a declaration (PDF) dated August 2022, the co-founder of the law firm MoreLaw Minneapolis LLC said she hired Lanterman in 2014 to examine several electronic devices after learning that one of their paralegals had a criminal fraud history.

But the law firm said when it pushed back on a consulting bill that was far higher than expected, Lanterman told them CFS would “escalate” its collection efforts if they didn’t pay, including “a claim and lien against the data which will result in a public auction of your data.”

“All of us were flabbergasted by Mr. Lanterman’s email,” wrote MoreLaw co-founder Kimberly Hanlon. “I had never heard of any legitimate forensic company threatening to ‘auction’ off an attorney’s data, particularly knowing that the data is comprised of confidential client data, much of which is sensitive in nature.”

In 2009, a Wisconsin-based manufacturing company that had hired Lanterman for computer forensics balked at paying an $86,000 invoice from CFS, calling it “excessive and unsubstantiated.” The company told a Hennepin County court that on April 15, 2009, CFS conducted an auction of its trade secret information in violation of their confidentiality agreement.

“CFS noticed and conducted a Public Sale of electronic information that was entrusted to them pursuant to the terms of the engagement agreement,” the company wrote. “CFS submitted the highest bid at the Public Sale in the amount of $10,000.”

Lanterman briefly responded to a list of questions about his background (and recent heart diagnosis) on March 24, saying he would send detailed replies the following day. Those replies never materialized. Instead, Lanterman forwarded a recent memo he wrote to the court that attacked Harrington and said his accuser was only trying to take out a competitor. He has not responded to further requests for comment.

“When I attended Upsala, I was a commuter student who lived with my grandparents in Morristown, New Jersey approximately 30 minutes away from Upsala College,” Lanterman explained to the judge (PDF) overseeing a separate ongoing case (PDF) in which he has testified. “With limited resources, I did not participate in campus social events, nor did I attend graduation ceremonies. In 2023, I confirmed with Felician University — which maintains Upsala College’s records — that they could not locate my transcripts or diploma, a situation that they indicated was possibly due to unresolved money-related issues.”

Lanterman was ordered to appear in court on April 3 in the case defended by Perkins Coie, but he did not show up. Instead, he sent a message to the judge withdrawing from the case.

“I am 60 years old,” Lanterman told the judge. “I created my business from nothing. I am done dealing with the likes of individuals like Sean Harrington. And quite frankly, I have been planning at turning over my business to my children for years. That time has arrived.”

Lanterman’s letter leaves the impression that it was his decision to retire. But according to an affidavit (PDF) filed in a Florida case on March 28, Mark Lanterman’s son Sean said he’d made the difficult decision to ask his dad to step down given all the negative media attention.

Mark Rasch, a former federal cybercrime prosecutor who now serves as counsel to the New York cybersecurity intelligence firm Unit 221B, said that if an expert witness is discredited, any defendants who lost cases that were strongly influenced by that expert’s conclusions at trial could have grounds for appeal.

Rasch said law firms who propose an expert witness have a duty in good faith to vet that expert’s qualifications, knowing that those credentials will be subject to cross-examination.

“Federal rules of civil procedure and evidence both require experts to list every case they have testified in as an expert for the past few years,” Rasch said. “Part of that due diligence is pulling up the results of those cases and seeing what the nature of their testimony has been.”

Perhaps the most well-publicized case involving significant forensic findings from Lanterman was the 2018 conviction of Stephen Allwine, who was found guilty of killing his wife two years earlier after attempts at hiring a hitman on the dark net fell through. Allwine is serving a sentence of life in prison, and continues to maintain that he was framed, casting doubt on computer forensic evidence found on 64 electronic devices taken from his home.

On March 24, Allwine petitioned a Minnesota court (PDF) to revisit his case, citing the accusations against Lanterman and his role as a key witness for the prosecution.

Planet DebianJohannes Schauer Marin Rodrigues: To boldly build what no one has built before

Last week, we (Helmut, Jochen, Holger, Gioele and josch) met in Würzburg for a Debian crossbuilding & bootstrap sprint. We would like to thank Angestöpselt e. V. for generously providing us with their hacker space, which we were able to use exclusively during the four-day sprint. We’d further like to thank Debian for sponsoring the accommodation of Helmut and Jochen.

The most important topics that we worked on together were:

  • publicity and funding for bootstrappable and cross-buildable Debian, driven by Gioele, including the creation of a list of use cases and slogans [everyone]
  • proof-of-concept for substituting coreutils with alternative implementations such as busybox, toybox or uutils [Helmut, Jochen, josch]
  • writing a patch for documenting the Multi-Arch field in Debian policy #749826 [Helmut, Holger, Jochen, josch]
  • turning build profile spec text into a patch for Debian policy #757760 [Helmut, Jochen, josch]

Our TODO items for after the sprint are:

  • josch needs to fix bootstrap.debian.net
  • josch exports the package lists computed by bootstrap.debian.net in a machine readable format for Holger
  • writing a mail to d-devel about making coreutils non-essential

In addition to what was already listed above, people worked on the following tasks specifically:

  • Holger now wants a crossbootstrap pkg set for reproducible builds.
  • Holger worked on some reproducible builds issues, uploaded ~10 sequoia related packages and did a devscripts upload.
  • Jochen worked on creating initrds
  • Jochen helped Holger with sequoia/rust packaging
  • Jochen worked on sbuild
  • Jochen discussed cross bootstrapping with Helmut and josch
  • Jochen fixed bugs in devscripts (debrebuild/debootstrap, build-rdeps, proxy.py)
  • Jochen worked on reproduce.d.n
  • Jochen worked on src:kokkos resulting in #1101487
  • Gioele gathered information and material for possible funding for bootstrapping-related projects.
  • Gioele ported src:libreplaygain from cdbs to dh.
  • Helmut dug into lingering debvm issues some. Jochen tracked down the ARM32 autopkgtest regression to #1079443 which is now worked around.
  • Helmut collected feedback on linux-libc-dev being a:all.
  • Helmut collected feedback on dropping libcrypt-dev from build-essential and initiated work with Santiago Vila
  • Helmut collected feedback on how sbuild would want to interface with a better build containment
  • josch reviewed and merged the following MRs:
  • josch worked on making the Debian Linux kernel packaging use hooks installed in /usr/share/kernel/*.d and gathered feedback from the other sprint participants in how to best move this forward, culminating in the opening of #1101733 against src:linux.

Thank you all for attending this sprint, for making it so productive and for the amazing atmosphere and enlightening discussions!

Planet DebianEvgeni Golov: naming things is hard

I got a new laptop (a Lenovo Thinkpad X1 Carbon Gen 12, more on that later) and as always with new pets, it needed a name.

My naming scheme is roughly "short japanese words that somehow relate to the machine".

The current (other) machines at home are (not all really in use):

  • Thinkpad X1 Carbon G9 - tanso (炭素), means carbon
  • Thinkpad T480s - yatsu (八), means 8, as it's a T480s
  • Thinkpad X201s - nana (七), means 7, as it was my first i7 CPU
  • Thinkpad X61t - obon (御盆), means tray, which in German is "Tablett" and is close to "tablet"
  • Thinkpad X300 - atae (与え) means gift, as it was given to me at a very low price, almost a gift
  • Thinkstation P410 - kangae (考え), means thinking, and well, it's a Thinkstation
  • self-built homeserver - sai (さい), means dice, which in German is "Würfel", which is the same as cube, and the machine used to have an almost cubic case
  • Raspberry Pi 4 - aita (開いた), means open, it's running OpenWRT
  • Sun Netra T1 - nisshoku (日食), means solar eclipse
  • Apple iBook G4 13 - ringo (林檎), means apple

Then, I happen to rent a few servers:

  • ippai (一杯), means "a cup full", the VM is hosted at "netcup.de"
  • genshi (原子), means "atom", the machine has an Atom CPU
  • shokki (織機), means loom, which in German is Webstuhl or Webmaschine, and it's the webserver

I also had machines in the past, that are no longer with me:

  • Thinkpad X220 - rodo (労働) means work, my first work laptop
  • Thinkpad X31 - chiisai (小さい) means small, my first X series
  • Thinkpad Z61m - shinkupaddo (シンクパッド) means Thinkpad, my first Thinkpad

And also servers from the past:

  • chikara (力) means power, as it was a rather powerful (for that time) Xeon server
  • hozen (保全), means preservation, it was a backup host

So, what shall I call the new one? It will be "juuni" (十二), which means 12. Creative, huh?

Worse Than FailureError'd: Mais Que Nada

I never did explain the elusive off-by-one I hinted at, did I? A little meta, perhaps. It is our practice at Error'd to supply five nuggets of joy each week. But in episode previous-plus-one, you actually got six! (Or maybe, depending on how you count them, that's yet another off-by-one. I slay me.) If that doesn't tickle you enough, just wait until you hear what Dave L. brought us. Meanwhile...

"YATZP" scoffed self-styled Foo AKA F. Yet Another Time Zone P*, I guess. Not wrong. According to Herr Aka F., "German TV teletext (yes, we still have it!) botched the DST start (upper right corner). The editors realized it and posted a message stating as much, sent from the 'future' (i.e. correct) time zone."


Michael R. wrote in with a thought-provoker. If I'm representing one o'clock as 1:00, two o'clock as 2:00, and so forth, why should zero o'clock be the only time represented with not just one, but TWO leading zeroes? Logically, zero o'clock should be represented simply by :00, right?


Meanwhile, (just) Randy points out that somebody failed to pay attention to detail. "Did a full-scroll on Baeldung's members page and saw this. Sometimes, even teachers don't get it right."


In case Michael R. is still job-hunting, Gary K. has found the perfect position for everyone. That is, assuming the tantalizingly missing Pay Range section conforms to the established pattern. "Does this mean I should put my qualifications in?" he wondered. Run, don't walk.


And in what I think is an all-time first for us, Dave L. brings (drum roll) an audio Error'd "I thought you'd like this recording from my Garmin watch giving me turn-by-turn directions: In 280.097 feet turn right. That's two hundred eighty feet and ONE POINT ONE SIX FOUR INCHES. Accuracy to a third of a millimeter!" Don't move your hand!


[Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.

365 TomorrowsThe High Costs of Mad Science

Author: S. Douglas Hall Doctor Hibberd’s shoulders slumped and he laid his clipboard on the table. The buzzing at his lab door overshadowed the normal beeps, clicks, and whirls from the lab around him. He ran his hands through his graying brown hair and adjusted his sturdy black rimmed glasses before reaching for the latch […]

The post The High Costs of Mad Science appeared first on 365tomorrows.

,

Planet DebianGregor Herrmann: Debian MountainCamp, Innsbruck, 16–18 May 2025

the days are getting warmer (in the northern hemisphere), debian is getting colder, & quite a few debian events are taking place.

in innsbruck, we are organizing MountainCamp, an event in the tradition of SunCamp & SnowCamp: no schedule, no talks, meet other debian people, fix bugs, come up with crazy ideas, have fun, develop things.

interested? head over to the information & signup page on the debian wiki.

Cryptogram Friday Squid Blogging: Squid and Efficient Solar Tech

Researchers are trying to use squid color-changing biochemistry for solar tech.

This appears to be new and related research to a 2019 squid post.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Cryptogram Friday Squid Blogging: Two-Man Giant Squid

The Brooklyn indie art-punk group, Two-Man Giant Squid, just released a new album.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Worse Than FailureRepresentative Line: Get Explosive

Sean sends us a one-line function that is a delight, if by delight you mean "horror". You'll be shocked to know it's PHP.

function proget(){foreach($_GET as $k=>$v){if($k=="h"){$GLOBALS["h"]=1;}$p=explode(",",$k);}return($p);} //function to process GET headers

Based on the comment, proget is a shorthand for process_get_parameters. Which is sort of what it does. Sort of.

Let's go through this. We iterate across our $_GET parameters using $k for the key, $v for the value, but we never reference the value so forget it exists. We're iterating across every key. The first thing we check is if a key "h" exists. We don't look at its value, we just check if it exists, and if it does, we set a global variable. And this, right here, is enough for this to be a WTF. The logic of "set a global variable based on the existence of a query parameter regardless of the value of the query parameter" is… a lot. But then, somehow, this actually gets more out there.

We explode the key on commas (explode being PHP's much cooler name for split), which implies… our keys may be lists of values? Which I feel like is an example of someone not understanding what a "key" is. But worse than that, we just do this for every key, and return the results of performing that operation on the last key. Which means that if this function is doing anything at all, it's entirely dependent on the order of the keys. PHP does keep keys in the order they're added, so I take it the expectation is a URL whose last query param looks like ?foo=1&h=0&a,b,c,d=wtf. Or, if we're being picky about encoding, ?foo=1&h=0&a%2Cb%2Cc%2Cd=wtf. The only good news here is that PHP handles the encoding/decoding for you, so the explode will work as expected.
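
To make the contrast concrete, here is a rough sketch of my own (not from the submitted codebase) of what a sane version might look like, assuming the comma-separated values were really meant to arrive in a single, known parameter, which I'm calling items here, with h as a simple flag:

<?php
// Hypothetical cleanup: read one named parameter and split it, instead of
// looping over every key and returning whatever the last one exploded into.
function process_get_parameters(string $listKey = 'items'): array
{
    $hasH  = array_key_exists('h', $_GET);       // flag: is ?h present at all?
    $parts = isset($_GET[$listKey])
        ? explode(',', (string) $_GET[$listKey]) // e.g. ?items=a,b,c,d
        : [];
    return ['h' => $hasH, 'parts' => $parts];
}
// ?items=a,b,c&h=1 would yield ['h' => true, 'parts' => ['a', 'b', 'c']]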

This is the kind of bad code that leaves me with lots of questions, and I'm not sure I want any of the answers. How did this happen, and why? Those are questions best left unanswered, because I think the answers might cause more harm.

[Advertisement] Plan Your .NET 9 Migration with Confidence
Your journey to .NET 9 is more than just one decision.Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

365 TomorrowsSand Diamond

Author: A.R. McHugh Diamonds won her as a child. Looking at sedimentary quartz under 200x magnification, she was fascinated by the possibility of so much clarity, such mineral perfection. Somewhere between her mother’s flashing ring and her father’s relentless pressure to produce better grades and faster times, a harder carapace around her teenage soul and […]

The post Sand Diamond appeared first on 365tomorrows.

Planet DebianJunichi Uekawa: I was hoping to go to debconf but the frequent travel is painful for me right now that I probably won't make it.

I was hoping to go to debconf, but the frequent travel is painful for me right now, so I probably won't make it.

,

Cryptogram Web 3.0 Requires Data Integrity

If you’ve ever taken a computer security class, you’ve probably learned about the three legs of computer security—confidentiality, integrity, and availability—known as the CIA triad. When we talk about a system being secure, that’s what we’re referring to. All are important, but to different degrees in different contexts. In a world populated by artificial intelligence (AI) systems and artificially intelligent agents, integrity will be paramount.

What is data integrity? It’s ensuring that no one can modify data—that’s the security angle—but it’s much more than that. It encompasses accuracy, completeness, and quality of data—all over both time and space. It’s preventing accidental data loss; the “undo” button is a primitive integrity measure. It’s also making sure that data is accurate when it’s collected—that it comes from a trustworthy source, that nothing important is missing, and that it doesn’t change as it moves from format to format. The ability to restart your computer is another integrity measure.

The CIA triad has evolved with the Internet. The first iteration of the Web—Web 1.0 of the 1990s and early 2000s—prioritized availability. This era saw organizations and individuals rush to digitize their content, creating what has become an unprecedented repository of human knowledge. Organizations worldwide established their digital presence, leading to massive digitization projects where quantity took precedence over quality. The emphasis on making information available overshadowed other concerns.

As Web technologies matured, the focus shifted to protecting the vast amounts of data flowing through online systems. This is Web 2.0: the Internet of today. Interactive features and user-generated content transformed the Web from a read-only medium to a participatory platform. The increase in personal data, and the emergence of interactive platforms for e-commerce, social media, and online everything demanded both data protection and user privacy. Confidentiality became paramount.

We stand at the threshold of a new Web paradigm: Web 3.0. This is a distributed, decentralized, intelligent Web. Peer-to-peer social-networking systems promise to break the tech monopolies’ control on how we interact with each other. Tim Berners-Lee’s open W3C protocol, Solid, represents a fundamental shift in how we think about data ownership and control. A future filled with AI agents requires verifiable, trustworthy personal data and computation. In this world, data integrity takes center stage.

For example, the 5G communications revolution isn’t just about faster access to videos; it’s about Internet-connected things talking to other Internet-connected things without our intervention. Without data integrity, for example, there’s no real-time car-to-car communications about road movements and conditions. There’s no drone swarm coordination, smart power grid, or reliable mesh networking. And there’s no way to securely empower AI agents.

In particular, AI systems require robust integrity controls because of how they process data. This means technical controls to ensure data is accurate, that its meaning is preserved as it is processed, that it produces reliable results, and that humans can reliably alter it when it’s wrong. Just as a scientific instrument must be calibrated to measure reality accurately, AI systems need integrity controls that preserve the connection between their data and ground truth.

This goes beyond preventing data tampering. It means building systems that maintain verifiable chains of trust between their inputs, processing, and outputs, so humans can understand and validate what the AI is doing. AI systems need clean, consistent, and verifiable control processes to learn and make decisions effectively. Without this foundation of verifiable truth, AI systems risk becoming a series of opaque boxes.

Recent history provides many sobering examples of integrity failures that naturally undermine public trust in AI systems. Machine-learning (ML) models trained without thought on expansive datasets have produced predictably biased results in hiring systems. Autonomous vehicles with incorrect data have made incorrect—and fatal—decisions. Medical diagnosis systems have given flawed recommendations without being able to explain themselves. A lack of integrity controls undermines AI systems and harms people who depend on them.

They also highlight how AI integrity failures can manifest at multiple levels of system operation. At the training level, data may be subtly corrupted or biased even before model development begins. At the model level, mathematical foundations and training processes can introduce new integrity issues even with clean data. During execution, environmental changes and runtime modifications can corrupt previously valid models. And at the output level, the challenge of verifying AI-generated content and tracking it through system chains creates new integrity concerns. Each level compounds the challenges of the ones before it, ultimately manifesting in human costs, such as reinforced biases and diminished agency.

Think of it like protecting a house. You don’t just lock a door; you also use safe concrete foundations, sturdy framing, a durable roof, secure double-pane windows, and maybe motion-sensor cameras. Similarly, we need digital security at every layer to ensure the whole system can be trusted.

This layered approach to understanding security becomes increasingly critical as AI systems grow in complexity and autonomy, particularly with large language models (LLMs) and deep-learning systems making high-stakes decisions. We need to verify the integrity of each layer when building and deploying digital systems that impact human lives and societal outcomes.

At the foundation level, bits are stored in computer hardware. This represents the most basic encoding of our data, model weights, and computational instructions. The next layer up is the file system architecture: the way those binary sequences are organized into structured files and directories that a computer can efficiently access and process. In AI systems, this includes how we store and organize training data, model checkpoints, and hyperparameter configurations.

On top of that are the application layers—the programs and frameworks, such as PyTorch and TensorFlow, that allow us to train models, process data, and generate outputs. This layer handles the complex mathematics of neural networks, gradient descent, and other ML operations.

Finally, at the user-interface level, we have visualization and interaction systems—what humans actually see and engage with. For AI systems, this could be everything from confidence scores and prediction probabilities to generated text and images or autonomous robot movements.

Why does this layered perspective matter? Vulnerabilities and integrity issues can manifest at any level, so understanding these layers helps security experts and AI researchers perform comprehensive threat modeling. This enables the implementation of defense-in-depth strategies—from cryptographic verification of training data to robust model architectures to interpretable outputs. This multi-layered security approach becomes especially crucial as AI systems take on more autonomous decision-making roles in critical domains such as healthcare, finance, and public safety. We must ensure integrity and reliability at every level of the stack.

The risks of deploying AI without proper integrity control measures are severe and often underappreciated. When AI systems operate without sufficient security measures to handle corrupted or manipulated data, they can produce subtly flawed outputs that appear valid on the surface. The failures can cascade through interconnected systems, amplifying errors and biases. Without proper integrity controls, an AI system might train on polluted data, make decisions based on misleading assumptions, or have outputs altered without detection. The results of this can range from degraded performance to catastrophic failures.

We see four areas where integrity is paramount in this Web 3.0 world. The first is granular access, which allows users and organizations to maintain precise control over who can access and modify what information and for what purposes. The second is authentication—much more nuanced than the simple “Who are you?” authentication mechanisms of today—which ensures that data access is properly verified and authorized at every step. The third is transparent data ownership, which allows data owners to know when and how their data is used and creates an auditable trail of data providence. Finally, the fourth is access standardization: common interfaces and protocols that enable consistent data access while maintaining security.

Luckily, we’re not starting from scratch. There are open W3C protocols that address some of this: decentralized identifiers for verifiable digital identity, the verifiable credentials data model for expressing digital credentials, ActivityPub for decentralized social networking (that’s what Mastodon uses), Solid for distributed data storage and retrieval, and WebAuthn for strong authentication standards. By providing standardized ways to verify data provenance and maintain data integrity throughout its lifecycle, Web 3.0 creates the trusted environment that AI systems require to operate reliably. This architectural leap for integrity control in the hands of users helps ensure that data remains trustworthy from generation and collection through processing and storage.

Integrity is essential to trust, on both technical and human levels. Looking forward, integrity controls will fundamentally shape AI development by moving from optional features to core architectural requirements, much as SSL certificates evolved from a banking luxury to a baseline expectation for any Web service.

Web 3.0 protocols can build integrity controls into their foundation, creating a more reliable infrastructure for AI systems. Today, we take availability for granted; anything less than 100% uptime for critical websites is intolerable. In the future, we will need the same assurances for integrity. Success will require following practical guidelines for maintaining data integrity throughout the AI lifecycle—from data collection through model training and finally to deployment, use, and evolution. These guidelines will address not just technical controls but also governance structures and human oversight, similar to how privacy policies evolved from legal boilerplate into comprehensive frameworks for data stewardship. Common standards and protocols, developed through industry collaboration and regulatory frameworks, will ensure consistent integrity controls across different AI systems and applications.

Just as the HTTPS protocol created a foundation for trusted e-commerce, it’s time for new integrity-focused standards to enable the trusted AI services of tomorrow.

This essay was written with Davi Ottenheimer, and originally appeared in Communications of the ACM.

Worse Than FailureCodeSOD: Join Us in this Query

Today's anonymous submitter worked for a "large, US-based, e-commerce company." This particular company was, some time back, looking to save money, and like so many companies do, that meant hiring offshore contractors.

Now, I want to stress, there's certainly nothing magical about national borders which turns software engineers into incompetents. The reality is simply that contractors never have their client's best interests at heart; they only want to be good enough to complete their contract. This gets multiplied by the contracting firm's desire to maximize their profits by keeping their contractors as booked as possible. And it gets further multiplied by the remoteness and siloing of the interaction, especially across timezones. Often, the customer sends out requirements, and three months later gets a finished feature, with no more contact than that- and it never goes well.

All that said, let's look at some SQL Server code. It's long, so we'll take it in chunks.

-- ===============================================================================
-- Author     : Ignacius Ignoramus
-- Create date: 04-12-2020
-- Description:	SP of Getting Discrepancy of Allocation Reconciliation Snapshot
-- ===============================================================================

That the comment reinforces that this is an "SP", aka stored procedure, is already not my favorite thing to see. The description is certainly made up of words, and I think I get the gist.

ALTER PROCEDURE [dbo].[Discrepency]
	(
		@startDate DATETIME,
		@endDate DATETIME
	)
AS

BEGIN

Nothing really to see here; it's easy to see that we're going to run a query for a date range. That's fine and common.

	DECLARE @tblReturn TABLE
	(
		intOrderItemId	   INT
	)

Hmm. T-SQL lets you define table variables, which are exactly what they sound like. It's a local variable in this procedure, that acts like a table. You can insert/update/delete/query it. The vague name is a little sketch, and the fact that it holds only one field also makes me go "hmmm", but this isn't bad.

	DECLARE @tblReturn1 TABLE
	(
		intOrderItemId	   INT
	)

Uh oh.

	DECLARE @tblReturn2 TABLE
	(
		intOrderItemId	   INT
	)

Oh no.

	DECLARE @tblReturn3 TABLE
	(
		intOrderItemId	   INT
	)

Oh no no no.

	DECLARE @tblReturn4 TABLE
	(
		intOrderItemId	   INT
	)

This doesn't bode well.

So they've declared five variables called tblReturn, that all hold the same data structure.

What happens next? This next block is gonna be long.

	INSERT INTO @tblReturn --(intOrderItemId) VALUES (@_ordersToBeAllocated)

	/* OrderItemsPlaced */		

		select 		
		intOrderItemId
		from CompanyDatabase..Orders o
		inner join CompanyDatabase..OrderItems oi on oi.intOrderId = o.intOrderId
		where o.dtmTimeStamp between @startDate and  @endDate


		AND intOrderItemId Not In 
		(

		/* _itemsOnBackorder */

		select intOrderItemId			
		from CompanyDatabase..OrderItems oi
		inner join CompanyDatabase..Orders o on o.intOrderId = oi.intOrderId
		where o.dtmTimeStamp between @startDate and  @endDate
		and oi.strstatus='backordered' 
		)

		AND intOrderItemId Not In 
		(

		/* _itemsOnHold */

		select intOrderItemId			
		from CompanyDatabase..OrderItems oi
		inner join CompanyDatabase..Orders o on o.intOrderId = oi.intOrderId
		where o.dtmTimeStamp between @startDate and  @endDate
		and o.strstatus='ONHOLD'
		and oi.strStatus <> 'BACKORDERED' 
		)

		AND intOrderItemId Not In 
		(

		/* _itemsOnReview */

		select  intOrderItemId			
		from CompanyDatabase..OrderItems oi
		inner join CompanyDatabase..Orders o on o.intOrderId = oi.intOrderId
		where o.dtmTimeStamp between @startDate and  @endDate 
		and o.strstatus='REVIEW' 
		and oi.strStatus <> 'BACKORDERED'
		)

		AND intOrderItemId Not In 
		(

		/*_itemsOnPending*/

		select  intOrderItemId			
		from CompanyDatabase..OrderItems oi
		inner join CompanyDatabase..Orders o on o.intOrderId = oi.intOrderId
		where o.dtmTimeStamp between @startDate and  @endDate
		and o.strstatus='PENDING'
		and oi.strStatus <> 'BACKORDERED'
		)

		AND intOrderItemId Not In 
		(

		/*_itemsCancelled */

		select  intOrderItemId			
		from CompanyDatabase..OrderItems oi
		inner join CompanyDatabase..Orders o on o.intOrderId = oi.intOrderId
		where o.dtmTimeStamp between @startDate and  @endDate
		and oi.strstatus='CANCELLED' 
		)

We insert into @tblReturn the result of a query, and this query relies heavily on using a big pile of subqueries to decide if a record should be included in the output- but these subqueries all query the same tables as the root query. I'm fairly certain this could be a simple join with a pretty readable where clause, but I'm also not going to sit here and rewrite it right now; we've got a lot more query to look at.
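
For the curious, here is roughly the shape that simpler version could take. This is my sketch, not the original team's fix; it only uses the tables and status values visible above and ignores NULL statuses:

-- Hypothetical consolidation: one pass over the joined tables, with the
-- excluded statuses expressed directly in the WHERE clause.
INSERT INTO @tblReturn
SELECT oi.intOrderItemId
FROM CompanyDatabase..Orders o
INNER JOIN CompanyDatabase..OrderItems oi ON oi.intOrderId = o.intOrderId
WHERE o.dtmTimeStamp BETWEEN @startDate AND @endDate
  AND oi.strStatus NOT IN ('BACKORDERED', 'CANCELLED')
  AND o.strStatus NOT IN ('ONHOLD', 'REVIEW', 'PENDING')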

INSERT INTO @tblReturn1

		
		/* _backOrderItemsReleased */	

		select  intOrderItemId			
		from CompanyDatabase..OrderItems oi
		inner join CompanyDatabase..orders o on o.intorderid = oi.intorderid
		where oi.intOrderItemid in (
			  select intRecordID 
			  from CompanyDatabase..StatusChangeLog
			  where strRecordType = 'OrderItem'
			  and strOldStatus in ('BACKORDERED')
			  and strNewStatus in ('NEW', 'RECYCLED')
			  and dtmTimeStamp between @startDate and  @endDate  
		)
		and o.dtmTimeStamp < @startDate
		

		UNION
		(
			/*_pendingHoldItemsReleased*/

			select  intOrderItemId					
			from CompanyDatabase..OrderItems oi
			inner join CompanyDatabase..orders o on o.intorderid = oi.intorderid
			where oi.intOrderID in (
				  select intRecordID 
				  from CompanyDatabase..StatusChangeLog
				  where strRecordType = 'Order'
				  and strOldStatus in ('REVIEW', 'ONHOLD', 'PENDING')
				  and strNewStatus in ('NEW', 'PROCESSING')
				  and dtmTimeStamp between @startDate and  @endDate  
			)
			and o.dtmTimeStamp < @startDate
			
		)

		UNION

		/* _reallocationsowingtonostock */	
		(
			select oi.intOrderItemID				   	 
			from CompanyDatabase.dbo.StatusChangeLog 
			inner join CompanyDatabase.dbo.OrderItems oi on oi.intOrderItemID = CompanyDatabase.dbo.StatusChangeLog.intRecordID
			inner join CompanyDatabase.dbo.Orders o on o.intOrderId = oi.intOrderId  

			where strOldStatus = 'RECYCLED' and strNewStatus = 'ALLOCATED' 
			and CompanyDatabase.dbo.StatusChangeLog.dtmTimestamp > @endDate and 
			strRecordType = 'OrderItem'
			and intRecordId in 
			(
			  select intRecordId from CompanyDatabase.dbo.StatusChangeLog 
			  where strOldStatus = 'ALLOCATED' and strNewStatus = 'RECYCLED' 
			  and strRecordType = 'OrderItem'
			  and CompanyDatabase.dbo.StatusChangeLog.dtmTimestamp between @startDate and  @endDate  
			)  
		)

Okay, just some unions with more subquery filtering. More of the same. It's the next one that makes this special.

INSERT INTO @tblReturn2

	SELECT intOrderItemId FROM @tblReturn 
	
	UNION

	SELECT intOrderItemId FROM @tblReturn1

Ah, here's the stuff. This is just bonkers. If the goal is to combine the results of these queries into a single table, you could just insert into one table the whole time.

But we know that there are 5 of these tables, so why are we only going through the first two to combine them at this point?

    INSERT INTO @tblReturn3

		/* _factoryAllocation*/

		select 
		oi.intOrderItemId                              
		from CompanyDatabase..Shipments s 
		inner join CompanyDatabase..ShipmentItems si on si.intShipmentID = s.intShipmentID
		inner join Common.CompanyDatabase.Stores stores on stores.intStoreID = s.intLocationID
		inner join CompanyDatabase..OrderItems oi on oi.intOrderItemId = si.intOrderItemId                                      
		inner join CompanyDatabase..Orders o on o.intOrderId = s.intOrderId  
		where s.dtmTimestamp >= @endDate
		and stores.strLocationType = 'FACTORY'
		
		UNION 
		(
	 	  /*_storeAllocations*/

		select oi.intOrderItemId                               
		from CompanyDatabase..Shipments s 
		inner join CompanyDatabase..ShipmentItems si on si.intShipmentID = s.intShipmentID
		inner join Common.CompanyDatabase.Stores stores on stores.intStoreID = s.intLocationID
		inner join CompanyDatabase..OrderItems oi on oi.intOrderItemId = si.intOrderItemId                                      
		inner join CompanyDatabase..Orders o on o.intOrderId = s.intOrderId
		where s.dtmTimestamp >= @endDate
		and stores.strLocationType <> 'FACTORY'
		)

		UNION
		(
		/* _ordersWithAllocationProblems */
    	
			select oi.intOrderItemId
			from CompanyDatabase.dbo.StatusChangeLog
			inner join CompanyDatabase.dbo.OrderItems oi on oi.intOrderItemID = CompanyDatabase.dbo.StatusChangeLog.intRecordID
			inner join CompanyDatabase.dbo.Orders o on o.intOrderId = oi.intOrderId
			where strRecordType = 'orderitem'
			and strNewStatus = 'PROBLEM'
			and strOldStatus = 'NEW'
			and CompanyDatabase.dbo.StatusChangeLog.dtmTimestamp > @endDate
			and o.dtmTimestamp < @endDate
		)

Okay, @tblReturn3 is more of the same. Nothing more to really add.

	 INSERT INTO @tblReturn4
	
	 SELECT intOrderItemId FROM @tblReturn2 WHERE
	 intOrderItemId NOT IN(SELECT intOrderItemId FROM @tblReturn3 )

Ooh, but here we see something a bit different- we're taking the set difference between @tblReturn2 and @tblReturn3. This would almost make sense if there weren't already set operations in T-SQL which would handle all of this.
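
For reference, T-SQL's built-in operator for exactly this is EXCEPT. A minimal sketch, not the author's code; note that EXCEPT also deduplicates, which the NOT IN version does not:

-- Hypothetical equivalent: rows in @tblReturn2 that do not appear in @tblReturn3.
INSERT INTO @tblReturn4
SELECT intOrderItemId FROM @tblReturn2
EXCEPT
SELECT intOrderItemId FROM @tblReturn3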

Which brings us, finally, to the last query in the whole thing:

SELECT 
	 o.intOrderId
	,oi.intOrderItemId
	,o.dtmDate
	,oi.strDescription
	,o.strFirstName + o.strLastName AS 'Name'
	,o.strEmail
	,o.strBillingCountry
	,o.strShippingCountry
	FROM CompanyDatabase.dbo.OrderItems oi
	INNER JOIN CompanyDatabase.dbo.Orders o on o.intOrderId = oi.intOrderId
	WHERE oi.intOrderItemId IN (SELECT intOrderItemId FROM @tblReturn4)
END

At the end of all this, I've determined a few things.

First, the developer responsible didn't understand table variables. Second, they definitely didn't understand joins. Third, they had no sense of the overall workflow of this query and just sorta fumbled through until they got results that the client said were okay.

And somehow, this pile of trash made it through a code review by internal architects and got deployed to production, where it promptly became the worst performing query in their application. Correction: the worst performing query thus far.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsPaper Tickets, Broken Dreams

Author: Julia Rajagopalan Bertrand Dent knew that the lotto was rigged. Everyone knew. Only paid actors won, or friends of the Lotto Commission. Still, Bertrand stood in line at the Station 14 lotto stand to buy a ticket. It was what everyone did when docking at a refueling station. 1. Dock, 2. Secure your ship, […]

The post Paper Tickets, Broken Dreams appeared first on 365tomorrows.

,

Cryptogram Rational Astrologies and Security

John Kelsey and I wrote a short paper for the Rossfest Festschrift: “Rational Astrologies and Security“:

There is another non-security way that designers can spend their security budget: on making their own lives easier. Many of these fall into the category of what has been called rational astrology. First identified by Randy Steve Waldman [Wal12], the term refers to something people treat as though it works, generally for social or institutional reasons, even when there’s little evidence that it works—­and sometimes despite substantial evidence that it does not.

[…]

Both security theater and rational astrologies may seem irrational, but they are rational from the perspective of the people making the decisions about security. Security theater is often driven by information asymmetry: people who don’t understand security can be reassured with cosmetic or psychological measures, and sometimes that reassurance is important. It can be better understood by considering the many non-security purposes of a security system. A monitoring bracelet system that pairs new mothers and their babies may be security theater, considering the incredibly rare instances of baby snatching from hospitals. But it makes sense as a security system designed to alleviate fears of new mothers [Sch07].

Rational astrologies in security result from two considerations. The first is the principal­-agent problem: The incentives of the individual or organization making the security decision are not always aligned with the incentives of the users of that system. The user’s well-being may not weigh as heavily on the developer’s mind as the difficulty of convincing his boss to take a chance by ignoring an outdated security rule or trying some new technology.

The second consideration that can lead to a rational astrology is where there is a social or institutional need for a solution to a problem for which there is actually not a particularly good solution. The organization needs to reassure regulators, customers, or perhaps even a judge and jury that “they did all that could be done” to avoid some problem—even if “all that could be done” wasn’t very much.

Worse Than FailureCodeSOD: A Ruby Encrusted Footgun

Many years ago, JP joined a Ruby project. This was in the heyday of Ruby, when every startup on Earth was using it, and if you weren't building your app on Rails, were you even building an app?

Now, Ruby offers a lot of flexibility. One might argue that it offers too much flexibility, especially insofar as it permits "monkey patching": you can always add new methods to an existing class, if you want. Regardless of the technical details, JP and the team saw that massive flexibility and said, "Yes, we should use that. All of it!"

As these stories usually go, that was fine- for a while. Then one day, a test started failing because a class name wasn't defined. That was already odd, but what was even odder is that when they searched through the code, that class name wasn't actually used anywhere. So yes, there was definitely no class with that name, but also, there was no line of code that was trying to instantiate that class. So where was the problem?

def controller_class(name)
  "#{settings.app_name.camelize}::Controllers".constantize.const_get("#{name.to_s.camelize}")
end

def model_class(name)
  "#{settings.app_name.camelize}".constantize.const_get("#{name.to_s.camelize}")
end

def resource_class(name)
  "#{settings.app_name.camelize}Client".constantize.const_get("#{name.to_s.camelize}")
end

It happened because they were dynamically constructing the class names from a settings field. And not just in this handful of lines- this pattern occurred all over the codebase. There were other places where it referenced a different settings field, and they just hadn't encountered the bug yet, but knew that it was only a matter of time before changing a settings file was going to break more functionality in the application.

They wisely rewrote these sections to not reference the settings, and dubbed the pattern the "Caramelize Pattern". They added that to their coding standards as a thing to avoid, and learned a valuable lesson about how languages provide footguns.
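
For illustration only (the module, class and key names below are hypothetical, not JP's actual code), the rewrite boils down to trading string-built constant lookups for an explicit registry, so every dispatchable class is spelled out and greppable:

# Stand-in classes so the sketch runs on its own.
module Orders;   class Controller; end; end
module Invoices; class Controller; end; end

# Explicit registry instead of settings-driven constantize/const_get:
# a missing entry fails loudly with KeyError instead of a phantom class name.
CONTROLLERS = {
  orders:   Orders::Controller,
  invoices: Invoices::Controller
}.freeze

def controller_class(name)
  CONTROLLERS.fetch(name.to_sym)
end

controller_class(:orders) # => Orders::Controller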

Since today's April Fools' Day, consider the prank to be that everyone learned their lesson and corrected their mistakes. I suppose that has to happen at least sometimes.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsKnown

Author: Majoki What was I thinking? Tiasmet could not put the thought—the picture—out of her head. The chipmunk with its shark-blank eyes and its panicked keening as the tictocs methodically circled and closed on it. The chipmunk should have been able to easily dash away. It was ten times the size of a tic or […]

The post Known appeared first on 365tomorrows.

,

LongNowWhy the Physics Underlying Life is Fundamental and Computation is Not

WATCH Sara Imari Walker's Long Now Talk, An Informational Theory of Life.

Life is undeniably real. It defines the very boundary of our reality because it is what we are. Yet despite this fundamental presence, the nature of life has defied precise scientific explanation. While we recognize “life” colloquially and can characterize its more familiar biological forms, we struggle with frontier questions: how does life emerge from non-life? How can we engineer new forms of life? How might we recognize artificial or alien life? What are the sources of novelty and creativity that underlie biology and technology? 

These challenges mirror the limits of our ancestors’ understanding of gravity. They knew objects fell to Earth without understanding why. They observed just a few stars wandering across their night sky and lacked explanations for their motion relative to all the other stars, which remained fixed. It required technological advances — precise mechanical clocks that allowed Tycho Brahe to record planetary motions, Galileo Galilei’s concept of inertial mass, and Isaac Newton’s conception of universal laws — to develop our modern explanation of gravity. While we may be tempted to point to a particular generation that made the conceptual leaps necessary, this transformation took thousands of years of technological and intellectual development before eventually giving rise to theoretical physics as an explanatory framework. The development of physics was based on the premise that reality is comprehensible through abstract descriptions that unify our observations and allow us deeper explanations than our immediate sense perception might otherwise permit. 

Our ability to explain gravity fundamentally changed how we interact with our world. With laws of gravitation, we launch satellites, visit distant worlds, and better understand our place in the cosmos. So too might an explanatory framework for life transform our future.

We now sit at an interesting point in history: one in which it is perhaps evident that we have sufficient technology to understand “life,” and according to some we may even have examples of artificial life and intelligence, but we have not yet landed on the conceptual framing and theoretical abstractions that will allow us to see what this means as clearly as we now see gravity. That is, we lack a formal language to talk about life. 

Life versus Computation

“Life” has historically been difficult to formalize at this deep level of abstraction because of its complexity. Darwin and his contemporaries were successful in explaining some portion of life because their goal was not to inventory the full complexity of living forms, but merely to explain how it is that one form can change into another, and why this should lead to a diversity of forms, some of them more complex than others. It was not until the advent of the theory of computation roughly 75 years later that it became possible to systematically formalize some notions of complexity (although individual examples of the difficulty of a computation date much earlier). Some thought then, and still think now, that such formalization might be relevant to understanding life. In the historical progression of ideas, proceeding over many, many generations, the theory of computation may prove an important step, but not the final or most important one.

The theory of computation, and its derivative concepts of computational complexity, were not explicitly developed to solve the problem of life, nor were they even devised as a formal approach to life or to physical systems. It is important to maintain this distinction because many alive now confuse computation not only with physical reality, but also more specifically with life itself. In human histories, our best languages for describing the frontier of what we understand are often embedded in the technologies of our time; however, the truly fundamental breakthroughs are often those that allow us to see beyond the current technological horizon. 

The challenge with “computation” begins with the vast spaces we must consider. In chemical space — defined as the space of all possible molecules — there are an estimated 10^60 possible molecules composed of up to 30 atoms using only the elements carbon, oxygen, nitrogen, and sulfur. This is only a very small subset of all molecules we might imagine, and cheminformaticians who study chemical space have never been able to even estimate its full size. We cannot explore all possible states computationally. You may at first think this is solely a limitation of our computers, but in fact it is a limitation on reality itself. Given all available compute time and resources right now on planet Earth, it would not be possible to generate a foundation model for all possible molecules or their functional properties. But even more revealing about the physical barriers is how, if given all available time and resources in the entire universe, it would not be possible to construct every possible molecule either. And, because chemistry makes things like biological forms, which evolve into technological forms, the limitations at the base layer of chemistry indicate that our universe may be fundamentally unable to explore all possible options even in infinite time. The technical term for this is to say that our universe is non-ergodic: it cannot visit all possible states. But even this terminology is not right because it assumes that the state-space exists at all. If nothing inside the universe can generate the full space, in what sense can we say it exists?
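To make the scale of that claim concrete, here is a deliberately rough back-of-envelope sketch in Python. Apart from the 10^60 estimate quoted above, every number in it (operations per second, number of machines, the age of the universe in seconds) is a round-figure assumption of mine, chosen generously in favor of the computation.

    # Rough orders of magnitude only; the rates below are illustrative assumptions.
    molecules         = 1e60     # estimated molecules of up to 30 atoms (C, O, N, S)
    ops_per_second    = 1e18     # an exascale machine enumerating one molecule per operation
    machines          = 1e9      # one such machine for roughly every eight people on Earth
    age_of_universe_s = 4.35e17  # about 13.8 billion years, in seconds

    seconds_needed = molecules / (ops_per_second * machines)
    print(f"{seconds_needed:.1e} seconds needed")                            # ~1e33
    print(f"{seconds_needed / age_of_universe_s:.1e} ages of the universe")  # ~2e15

Even under these generous assumptions, bare enumeration overshoots the age of the universe by roughly fifteen orders of magnitude, before a single molecule is synthesized or its properties evaluated.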

A much more physical interpretation, and one that keeps all descriptions internal to the universe they describe, is to assume that things do not exist until the universe generates them. Discussing all possible molecules is just one example, but the idea extends to much more familiar things like technology: even with our most advanced generative models, we could never even imagine all possible technologies, so how could we possibly create them all? This feature of living in a universe that is self-constructing is one clue that reality cannot be computational. The fact that we can imagine possibilities that cannot exist all at once is more telling about us as constructive, creative systems within the universe than it is of a landscape of possibilities “out there” that are all equally real. 

This raises deep questions about computational approaches to life, which itself emerges from a backward view of the space of chemistry that the universe can explore; that is, only physical systems that have evolved to be like us can ask such questions about how they came to be. A challenge in the field of chemistry relevant to the issue of defining life is how one can identify molecules with function, that is, ones that have some useful role in sustaining the persistence of a living entity. This is a frontier research area in artificial intelligence-driven chemical design and drug discovery and in questions about biological and machine agency. But function is a post-selected concept. Post-selection is a concept from probability theory, where one conditions the probability space on the occurrence of a given event after the event occurs. “Function” is a concept that can only be defined relative to what already exists and is, therefore, historically contingent. 

A key challenge then emerges from the limits of our models: we can only calculate the size of the space within which evolution selects functional structures by imposing tight restrictions on the space of interest (post-selecting), so that the space is bounded to one we can compute. It may be that the only sense in which this counterfactual space is “real” is within the confines of our models of it. Chemical space cannot be computed, nor can the full space be experimentally explored, making probability assignments across all molecules not only impossible but unphysical; there will always be structure outside our models which could be a source for novelty. To stress the point here, I am not indicating this as a limitation on our models themselves, but on reality itself and, by extension, on what laws of physics could possibly explain how life emerges from such large combinatorial spaces.

Analogies to the theory of computation do not fit, because computation is fundamentally the wrong paradigm for understanding life. But if we were to use such an analogy, it would be like predicting the output of programs that have not yet been run. We know from the very foundations of the theory of computation that this kind of forward-looking algorithm runs into epistemologically uncertain territory. A prime example is the halting problem, and related proofs that one cannot in general determine whether a given program will terminate and produce an output or run forever. One could make a machine that could describe this situation (what is called an oracle) and solve the halting problem in a specific case, but then the oracle itself would introduce new halting problems. I could assume infinity is real and there will always be a system that can describe another, but even this would run into new issues of uncomputability. New uncomputable things lurk no matter how you patch your system to account for other uncomputable things. Furthermore, infinity is a mathematical concept that itself may not correspond to a physical reality beyond the boundaries of the representational forms of the external world constructed within the physical architecture of human minds and human-derived technologies. 
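For readers who want the diagonalization behind the halting problem in concrete form, here is a minimal Python sketch. The halts function below is hypothetical, an assumption introduced only so it can be contradicted; no such general function can actually be written.

    def halts(program, argument) -> bool:
        # Hypothetical oracle: would return True exactly when program(argument)
        # eventually terminates. Assumed here only to derive a contradiction.
        raise NotImplementedError("no general implementation can exist")

    def contrary(program):
        # Do the opposite of whatever the oracle predicts about running a
        # program on its own source.
        if halts(program, program):
            while True:      # loop forever if the oracle says it would halt
                pass
        return               # halt if the oracle says it would loop

    # Consider contrary(contrary). If halts(contrary, contrary) were True,
    # then contrary(contrary) loops forever; if it were False, it halts.
    # Either way the oracle is wrong about some program, so it cannot exist.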

Complexity, in a computational sense of the word, describes the length of the shortest computer program that produces a given output — and it is also generally uncomputable. More important for physics is that it is also not measurable. We might try to approximate complexity with something computable, but this will depend on our choice of language and, therefore, is an observer-dependent quantity and not a good candidate for physical law. If we assume there is a unique shortest program, we must assume infinity is real to do so, and we have again introduced something non-physical. I am advocating that we take the fact that we live in a finite universe with finite resources and finite time seriously, and construct our theories accordingly — particularly in accounting for historical contingency as a fundamental explanation. We need to take seriously our finite, self-constructing universe because this will allow us to embed ideas about life and open-ended creativity into physics, and in turn explain much more about the universe we actually live in. Among the most important aspects of physics is metrology — the science of measurement — because it allows standardization and empirical testing of theory.  It also allows us to define what we consider to be “laws of physics” — laws like those underlying gravitation, which we assume to be independent of the observer or measuring device. Every branch of physics developed to date rests on a foundation of abstract representations built from empirical measurement; it is this process that allows us to see beyond the confines of our own minds. 
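The observer-dependence of any computable stand-in for this notion is easy to demonstrate. A common move is to approximate description length by the output size of a compressor; the sketch below, my own illustration rather than a construction from this essay, shows that the number you get depends on which compressor, in effect which description language, you choose.

    import bz2
    import zlib

    def approx_complexity(data: bytes, compressor) -> int:
        # Computable proxy for description length: the size of the
        # compressed representation, in bytes.
        return len(compressor(data))

    message = b"ABRACADABRA" * 100

    print(approx_complexity(message, zlib.compress))  # one choice of "language"
    print(approx_complexity(message, bz2.compress))   # another choice, another number

Both numbers bound the same underlying quantity from above, but neither is canonical, which is exactly why such measures make poor candidates for observer-independent physical law.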

For example, in the foundations of physics, we talk about how laws of physics are invariant to an observer’s frame of reference. Einstein’s work on relativity is exemplary in this regard: when experiments showed the speed of light yielded the same value regardless of the measuring instrument’s motion, Einstein equated the speed of light to a law of physics using the principle of invariance. This principle is important because if something is invariant, it does not depend on what the observer is doing; they will always measure it the same way. Einstein’s peers were not willing to take the measurement at face value. Many assumed the conception that the speed of light could change with the observer was correct, consistent with other sense perceptions of the world, and therefore that the measurements must be wrong. They assumed something must be missing from the physical measurements, like the presence of an ether (a substance hypothesized to fill space to explain the data). Indeed, they were missing something physical, but it was because they assumed their current abstractions were correct, and did not take the measurement seriously enough to change their ideas of what was physically real. The invariance of the speed of light had critically important consequences because following this idea to its logical conclusion (what Einstein did in developing special relativity) indicates that simultaneity (the measuring of events happening at the same “time”) and space are relative, and these insights have subsequently been confirmed by other experiments and observations. This example highlights two important features of physical laws: they are grounded in measurement (confirming they exist beyond how our minds label the world) and they are invariant with respect to measurement.

Assembly Theory and the Physics of Life

As an explanation for the physics underlying what we call “life,” my colleagues and I are developing a new approach called assembly theory. Assembly theory as a theory of physics is built on the idea that time is fundamental (you might call it causation) and as a consequence historical contingency is a real physical feature of our universe. The past is deterministic, but the future is undetermined until it happens simply because the universe has yet to construct itself into the future (and the possibility space is so big it cannot exist until it happens). This may seem a radical step, so how did we get here from thinking about life? 

We started with the question of how one might measure the emergence of complex molecules from unconstrained chemical systems. The question was easy to state: how complex does something need to be such that we might say only a living thing can produce it? We were interested in this because we work on the problem of understanding how life arises from non-life, and this requires some way of quantifying the transition from abiotic to living systems. This led to the development of a complexity measure, the assembly index, which my colleague Lee Cronin at the University of Glasgow originally developed from thought experiments on the measurement and physical structure of molecules.

a–c, Assembly theory (AT) is generalizable to different classes of objects, illustrated here for three different general types. a, Assembly pathway to construct diethyl phthalate molecule considering molecular bonds as the building blocks. The figure shows the pathway starting with the irreducible constructs to create the molecule with assembly index 8. b, Assembly pathway of a peptide chain by considering building blocks as strings. Left, four amino acids as building blocks. Middle, the actual object and its representation as a string. Right, assembly pathway to construct the string. c, Generalized assembly pathway of an object comprising discrete components.¹

The idea is startlingly simple. The assembly index is formalized as the minimum number of steps to make an object, starting from elementary building blocks, and re-using already assembled parts. For molecules, these parts and operations are chemical bonds. This point on bonds is important: assembly theory uses as its natural language the physical constraints intrinsic to the objects it describes, which can be probed by another system, such as a measuring device. However, we also regard that any mathematical language we use to describe the physical world is not the physical world. What we are looking for is a language that at least allows us to capture the invariant properties of the objects under study, because we are after a law of physics that describes life. We consider the assembly index to represent the minimum causation required to form the object, and this is, in fact, independent of how we label the specific minimum steps. Instead, what it captures is that there is a minimum number of ordered structures necessary for the given structure to come to exist. What the assembly index captures is that causation is a real physical property, automatically implying there is an ordering to what can exist, and that objects are formed in a historically contingent path. This raises the possibility that we may be able to measure the physical complexity of a system, even if it is not possible to compute it. 
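As a toy illustration of the definition (strings standing in for molecules, concatenation standing in for bond formation), the following Python sketch finds the minimum number of joins for short strings by exhaustive search. It is my own simplification for exposition, not the algorithm used in the published work, and it becomes impractical quickly as objects grow, which is consistent with the point that such quantities can be hard to compute even when they are well defined.

    from itertools import product

    def assembly_index(target: str) -> int:
        # Minimum number of join (concatenation) operations needed to build
        # `target`, starting from its individual characters and reusing any
        # object already built. Exhaustive search; practical only for very
        # short strings.
        blocks = frozenset(target)

        def reachable(pool: frozenset, steps_left: int) -> bool:
            if target in pool:
                return True
            if steps_left == 0:
                return False
            for x, y in product(pool, repeat=2):
                joined = x + y
                # On a minimal pathway every intermediate object is a
                # contiguous substring of the target, so prune anything else.
                if joined in target and joined not in pool:
                    if reachable(pool | {joined}, steps_left - 1):
                        return True
            return False

        bound = 0
        while not reachable(blocks, bound):
            bound += 1
        return bound

    print(assembly_index("ABAB"))    # 2: A+B -> AB, then AB+AB -> ABAB
    print(assembly_index("BANANA"))  # 4: reusing an already-built piece shortens the pathway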

Assembly theory’s two observables — assembly index and copy number — provide a generalized quantification of the selective causation necessary to produce an observed configuration of objects. Copy number is countable; it is how many of a given object you observe. Our conjecture is that there is a threshold for life, because objects with high assembly indices do not form in high (detectable) numbers of copies in the absence of life and selective processes. This has been confirmed by experimental tests of assembly theory for molecular life detection.  If we return to the idea of the vastness of chemical space, we can see why this idea is important. If the physics of our universe operated by exhaustive search, we would not exist because there are simply too many possible configurations of matter. What the physics of life indicates is the existence of historically contingent trajectories, where structures made in the past can be used to further elaborate into the space of more complex objects. Assembly theory suggests a phase transition between non-life (breadth-based search of physical structures) and life (depth-first search of physical structures), where the latter is possible because structures the universe has already generated can be used again. Underlying this is an absolute causal structure where complex objects reside, which we call the assembly space. If one assumes everything is possible, and the universe can really do it all, you will entirely miss the structure underlying life and what really gets to exist, and why. 

Determined Pasts, Non-Determinable Futures

An important distinction emerges from the physics of life: you cannot compute the future, but you can compute the past. Assembly theory works precisely because it starts from observed objects and allows reconstructing an invariant, minimum causal ordering for how hard it is for the universe to generate that object through its measurement. This allows us to talk about complexity related to life in an objective way that we expect — if the theory passes the trial and fire of scientific consensus — will play a role like other invariant quantities in physics. This fundamentally differs from computational approaches that depend on the “machine” (or observer), and it builds on the one unique thing theoretical physics has been able to offer the world: the ability to build abstractions that reach deeper than how our brains label data to describe the world. 

By taking measurement in science seriously and recognizing how our theories of physics are built from measurement, assembly theory offers a lens through which we might finally understand life as fundamental — not as a computation to be simulated but as a physical reality to be measured. In this view, life is not merely a special case of computation but something more fundamental: a physical reality that can be measured, quantified, and understood through invariant physical laws rather than observer-dependent computations. This leads to the startling realization that one of the most important features of life is that it produces a set of future states that are not computable, even in principle. This means a paradigm for accurately understanding intelligence, consciousness, and decision making is intrinsically missing in our current science that takes as its foundation the idea that everything can happen and everything can be modeled. This does not mean that life will never be understandable as a purely physical process; it simply points to the fact that we are missing the required fundamental physics to be able to explain life in a universe that has a future horizon that is inherently undetermined.

The application of assembly theory in physics introduces contingency at a fundamental level, explaining how the past structures some of the future but not all of it. Life takes inert matter that is predictable and turns it into matter that is unpredictable because of the vast number of possibilities in the phenomenon of evolution, revealing selection as a kind of force that is responsible for the production of complexity in the universe. Life, not computation, unlocks our non-deterministic future. Only by looking beyond our current technological moment to the next technologies creating new life forms will we be able to understand what our future could hold. 

Acknowledgments

Many of the ideas discussed herein come from collaborative work with Leroy Cronin.

Notes

1. Figure and caption reproduced from Sharma, A., Czégel, D., Lachmann, M. et al. Assembly theory explains and quantifies selection and evolution. Nature 622, 321–328 (02023) under a CC BY 4.0 license. https://doi.org/10.1038/s41586-023-06600-9.

Sara Imari Walker is the author of Life As No One Knows It: The Physics of Life’s Emergence (Riverhead Books, 02024) and will be speaking at Long Now on April 1, 02025.

MELinks March 2025

Anarcat’s review of Fish is interesting and shows some benefits I hadn’t previously realised, I’ll have to try it out [1].

Longnow has an insightful article about religion and magic mushrooms [2].

Brian Krebs wrote an informative article about DOGE and the many security problems that it has caused to the US government [3].

Techdirt has an insightful article about why they are forced to become a democracy blog after the attacks by Trump et al [4].

Antoine wrote an insightful blog post about the war for the Internet and how in many ways we are losing to fascists [5].

Interesting story about people working for free at Apple to develop a graphing calculator [6]. We need ways for FOSS people to associate to do such projects.

Interesting YouTube video about a wiki for building a cheap road legal car [7].

Interesting video about powering spacecraft with Plutonium-238 and how they are running out [8].

Interesting information about the search for MH370 [9]. I previously hadn’t been convinced that it was hijacked but I am now.

The EFF has an interesting article about the Rayhunter, a tool to detect cellular spying that can run with cheap hardware [10].

  • [1] https://anarc.at/blog/2025-02-28-fish/
  • [2] https://longnow.org/ideas/is-god-a-mushroom/
  • [3] https://tinyurl.com/27wbb5ec
  • [4] https://tinyurl.com/2cvo42ro
  • [5] https://anarc.at/blog/2025-03-21-losing-war-internet/
  • [6] https://www.pacifict.com/story/
  • [7] https://www.youtube.com/watch?v=x8jdx-lf2Dw
  • [8] https://www.youtube.com/watch?v=geIhl_VE0IA
  • [9] https://www.youtube.com/watch?v=HIuXEU4H-XE
  • [10] https://tinyurl.com/28psvpx7
    Worse Than FailureCodeSOD: Nobody's BFF

    Legacy systems are hard to change, and even harder to eliminate. You can't simply do nothing though; as technology and user expectations change, you need to find ways to modernize and adapt the legacy system.

    That's what happened to Alicia's team. They had a gigantic, spaghetti-coded, monolithic application that was well past drinking age and had a front-end to match. Someone decided that they couldn't touch the complex business logic, but what they could do was replace the frontend code by creating an adapter service; the front end would call into this adapter, and the adapter would execute the appropriate methods in the backend.

    Some clever coder named this "Backend for Frontend" or "BFF".

    It was not anyone's BFF. For starters, this system didn't actually allow you to just connect a UI to the backend. No, that'd be too easy. This system was actually a UI generator.

    The way this works is that you feed it a schema file, written in JSON. This file specifies what input elements you want, some hints for layout, what validation you want the UI to perform, and even what CSS classes you want. Then you compile this as part of a gigantic .NET application, and deploy it, and then you can see your new UI.

    No one likes using it. No one is happy that it exists. Everyone wishes that they could just write frontends like normal people, and not use this awkward schema language.

    All that is to say, when Alicia's co-worker stood up shortly before lunch, said, "I'm taking off the rest of the day, BFF has broken me," it wasn't particularly shocking to hear- or even the first time that'd happened.

    Alicia, not heeding the warning inherent in that statement, immediately tracked down that dev's last work, and tried to understand what had been so painful.

        "minValue": 1900,
        "maxValue": 99,
    

    This, of course, had to be a bug. Didn't it? How could the maxValue be lower than the minValue?

    Let's look at the surrounding context.

    {
        "type": "eventValueBetweenValuesValidator",
        "eventType": "CalendarYear",
        "minValue": 1900,
        "maxValue": 99,
        "isCalendarBasedMaxValue": true,
        "message": "CalendarYear must be between {% raw %}{{minValue}}{% endraw %} and {% raw %}{{maxValue}}{% endraw %}."
    }
    

    I think this should make it perfectly clear what's happening. Oh, it doesn't? Look at the isCalendarBasedMaxValue field. It's true. There, that should explain everything. No, it doesn't? You're just more confused?

    The isCalendarBasedMaxValue says that the maxValue field should not be treated as a literal value, but instead, is the number of years in the future relative to the current year which are considered valid. This schema definition says "accept all years between 1900 and 2124 (at the time of this writing)." Next year, that top value goes up to 2125. Then 2126. And so on.
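    If it helps to see the behavior spelled out, here is a small Python sketch of what the validator presumably does with these fields. The real implementation is part of the .NET BFF and isn't shown in the article, so the function below is an illustrative guess at its semantics, not the actual code.

        from datetime import date

        def effective_bounds(rule: dict) -> tuple[int, int]:
            # When isCalendarBasedMaxValue is set, maxValue is an offset in
            # years from the current year rather than a literal year.
            min_value = rule["minValue"]
            max_value = rule["maxValue"]
            if rule.get("isCalendarBasedMaxValue"):
                max_value = date.today().year + max_value
            return min_value, max_value

        rule = {
            "type": "eventValueBetweenValuesValidator",
            "eventType": "CalendarYear",
            "minValue": 1900,
            "maxValue": 99,
            "isCalendarBasedMaxValue": True,
        }

        print(effective_bounds(rule))  # (1900, 2124) when run in 2025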

    As features go, it's not a terrible feature. But the implementation of the feature is incredibly counter-intuitive. At the end of the day, this is just bad naming: (ab)using min/max to do something that isn't really a min/max validation is the big issue here.

    Alicia writes:

    I couldn't come up with something more counterintuitive if I tried.

    Oh, don't sell yourself short, Alicia. I'm sure you could write something far, far worse if you tried. The key thing here is that clearly, nobody tried- they just sorta let things happen and definitely didn't think too hard about it.

    [Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

    Krebs on SecurityHow Each Pillar of the 1st Amendment is Under Attack

    “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.” -U.S. Constitution, First Amendment.

    Image: Shutterstock, zimmytws.

    In an address to Congress this month, President Trump claimed he had “brought free speech back to America.” But barely two months into his second term, the president has waged an unprecedented attack on the First Amendment rights of journalists, students, universities, government workers, lawyers and judges.

    This story explores a slew of recent actions by the Trump administration that threaten to undermine all five pillars of the First Amendment to the U.S. Constitution, which guarantees freedoms concerning speech, religion, the media, the right to assembly, and the right to petition the government and seek redress for wrongs.

    THE RIGHT TO PETITION

    The right to petition allows citizens to communicate with the government, whether to complain, request action, or share viewpoints — without fear of reprisal. But that right is being assaulted by this administration on multiple levels. For starters, many GOP lawmakers are now heeding their leadership’s advice to stay away from local town hall meetings and avoid the wrath of constituents affected by the administration’s many federal budget and workforce cuts.

    Another example: President Trump recently fired most of the people involved in processing Freedom of Information Act (FOIA) requests for government agencies. FOIA is an indispensable tool used by journalists and the public to request government records, and to hold leaders accountable.

    The biggest story by far this week was the bombshell from The Atlantic editor Jeffrey Goldberg, who recounted how he was inadvertently added to a Signal group chat with National Security Advisor Michael Waltz and 16 other Trump administration officials discussing plans for an upcoming attack on Yemen.

    One overlooked aspect of Goldberg’s incredible account is that by planning and coordinating the attack on Signal — which features messages that can auto-delete after a short time — administration officials were evidently seeking a way to avoid creating a lasting (and potentially FOIA-able) record of their deliberations.

    “Intentional or not, use of Signal in this context was an act of erasure—because without Jeffrey Goldberg being accidentally added to the list, the general public would never have any record of these communications or any way to know they even occurred,” Tony Bradley wrote this week at Forbes.

    Petitioning the government, particularly when it ignores your requests, often requires challenging federal agencies in court. But that becomes far more difficult if the most competent law firms start to shy away from cases that may involve crossing the president and his administration.

    On March 22, the president issued a memorandum that directs heads of the Justice and Homeland Security Departments to “seek sanctions against attorneys and law firms who engage in frivolous, unreasonable and vexatious litigation against the United States,” or in matters that come before federal agencies.

    The POTUS recently issued several executive orders railing against specific law firms with attorneys who worked legal cases against him. On Friday, the president announced that the law firm of Skadden, Arps, Slate, Meagher & Flom had agreed to provide $100 million in pro bono work on issues that he supports.

    Trump issued another order naming the firm Paul, Weiss, Rifkind, Wharton & Garrison, which ultimately agreed to pledge $40 million in pro bono legal services to the president’s causes.

    Other Trump executive orders targeted law firms Jenner & Block and WilmerHale, both of which have attorneys that worked with special counsel Robert Mueller on the investigation into Russian interference in the 2016 election. But this week, two federal judges in separate rulings froze parts of those orders.

    “There is no doubt this retaliatory action chills speech and legal advocacy, and that is qualified as a constitutional harm,” wrote Judge Richard Leon, who ruled against the executive order targeting WilmerHale.

    President Trump recently took the extraordinary step of calling for the impeachment of federal judges who rule against the administration. Trump called U.S. District Judge James Boasberg a “Radical Left Lunatic” and urged he be removed from office for blocking deportation of Venezuelan alleged gang members under a rarely invoked wartime legal authority.

    In a rare public rebuke to a sitting president, U.S. Supreme Court Justice John Roberts issued a statement on March 18 pointing out that “For more than two centuries, it has been established that impeachment is not an appropriate response to disagreement concerning a judicial decision.”

    The U.S. Constitution provides that judges can be removed from office only through impeachment by the House of Representatives and conviction by the Senate. The Constitution also states that judges’ salaries cannot be reduced while they are in office.

    Undeterred, House Speaker Mike Johnson this week suggested the administration could still use the power of its purse to keep courts in line, and even floated the idea of wholesale eliminating federal courts.

    “We do have authority over the federal courts as you know,” Johnson said. “We can eliminate an entire district court. We have power of funding over the courts, and all these other things. But desperate times call for desperate measures, and Congress is going to act, so stay tuned for that.”

    FREEDOM OF ASSEMBLY

    President Trump has taken a number of actions to discourage lawful demonstrations at universities and colleges across the country, threatening to cut federal funding for any college that supports protests he deems “illegal.”

    A Trump executive order in January outlined a broad federal crackdown on what he called “the explosion of antisemitism” on U.S. college campuses. This administration has asserted that foreign students who are lawfully in the United States on visas do not enjoy the same free speech or due process rights as citizens.

    Reuters reports that the acting civil rights director at the Department of Education on March 10 sent letters to 60 educational institutions warning they could lose federal funding if they don’t do more to combat anti-semitism. On March 20, Trump issued an order calling for the closure of the Education Department.

    Meanwhile, U.S. Immigration and Customs Enforcement (ICE) agents have been detaining and trying to deport pro-Palestinian students who are legally in the United States. The administration is targeting students and academics who spoke out against Israel’s attacks on Gaza, or who were active in campus protests against U.S. support for the attacks. Secretary of State Marco Rubio told reporters Thursday that at least 300 foreign students have seen their visas revoked under President Trump, a far higher number than was previously known.

    In his first term, Trump threatened to use the national guard or the U.S. military to deal with protesters, and in campaigning for re-election he promised to revisit the idea.

    “I think the bigger problem is the enemy from within,” Trump told Fox News in October 2024. “We have some very bad people. We have some sick people, radical left lunatics. And I think they’re the big — and it should be very easily handled by, if necessary, by National Guard, or if really necessary, by the military, because they can’t let that happen.”

    This term, Trump acted swiftly to remove the top judicial advocates in the armed forces who would almost certainly push back on any request by the president to use U.S. soldiers in an effort to quell public protests, or to arrest and detain immigrants. In late February, the president and Defense Secretary Pete Hegseth fired the top legal officers for the military services — those responsible for ensuring the Uniform Code of Military Justice is followed by commanders.

    Military.com warns that the purge “sets an alarming precedent for a crucial job in the military, as President Donald Trump has mused about using the military in unorthodox and potentially illegal ways.” Hegseth told reporters the removals were necessary because he didn’t want them to pose any “roadblocks to orders that are given by a commander in chief.”

    FREEDOM OF THE PRESS

    President Trump has sued a number of U.S. news outlets, including 60 Minutes, CNN, The Washington Post, The New York Times and other smaller media organizations for unflattering coverage.

    In a $10 billion lawsuit against 60 Minutes and its parent Paramount, Trump claims they selectively edited an interview with former Vice President Kamala Harris prior to the 2024 election. The TV news show last month published transcripts of the interview at the heart of the dispute, but Paramount is reportedly considering a settlement to avoid potentially damaging its chances of winning the administration’s approval for a pending multibillion-dollar merger.

    The president sued The Des Moines Register and its parent company, Gannett, for publishing a poll showing Trump trailing Harris in the 2024 presidential election in Iowa (a state that went for Trump). The POTUS also is suing the Pulitzer Prize board over 2018 awards given to The New York Times and The Washington Post for their coverage of purported Russian interference in the 2016 election.

    Whether or not any of the president’s lawsuits against news organizations have merit or succeed is almost beside the point. The strategy behind suing the media is to make reporters and newsrooms think twice about criticizing or challenging the president and his administration. The president also knows some media outlets will find it more expedient to settle.

    Trump also sued ABC News and George Stephanopoulos for stating that the president had been found liable for “rape” in a civil case [Trump was found liable for sexually abusing and defaming E. Jean Carroll]. ABC parent Disney settled that claim by agreeing to donate $15 million to the Trump Presidential Library.

    Following the attack on the U.S. Capitol on Jan. 6, 2021, Facebook blocked President Trump’s account. Trump sued Meta, and after the president’s victory in 2024 Meta settled and agreed to pay Trump $25 million: $22 million would go to his presidential library, and the rest to legal fees. Meta CEO Mark Zuckerberg also announced Facebook and Instagram would get rid of fact-checkers and rely instead on reader-submitted “community notes” to debunk disinformation on the social media platform.

    Brendan Carr, the president’s pick to run the Federal Communications Commission (FCC), has pledged to “dismantle the censorship cartel and restore free speech rights for everyday Americans.” But on January 22, 2025, the FCC reopened complaints against ABC, CBS and NBC over their coverage of the 2024 election. The previous FCC chair had dismissed the complaints as attacks on the First Amendment and an attempt to weaponize the agency for political purposes.

    According to Reuters, the complaints call for an investigation into how ABC News moderated the pre-election TV debate between Trump and Biden, and appearances of then-Vice President Harris on 60 Minutes and on NBC’s “Saturday Night Live.”

    Since then, the FCC has opened investigations into NPR and PBS, alleging that they are breaking sponsorship rules. The Center for Democracy & Technology (CDT), a think tank based in Washington, D.C., noted that the FCC is also investigating KCBS in San Francisco for reporting on the location of federal immigration authorities.

    “Even if these investigations are ultimately closed without action, the mere fact of opening them – and the implicit threat to the news stations’ license to operate – can have the effect of deterring the press from news coverage that the Administration dislikes,” the CDT’s Kate Ruane observed.

    Trump has repeatedly threatened to “open up” libel laws, with the goal of making it easier to sue media organizations for unfavorable coverage. But this week, the U.S. Supreme Court declined to hear a challenge brought by Trump donor and Las Vegas casino magnate Steve Wynn to overturn the landmark 1964 decision in New York Times v. Sullivan, which insulates the press from libel suits over good-faith criticism of public figures.

    The president also has insisted on picking which reporters and news outlets should be allowed to cover White House events and participate in the press pool that trails the president. He barred the Associated Press from the White House and Air Force One over their refusal to call the Gulf of Mexico by another name.

    And the Defense Department has ordered a number of top media outlets to vacate their spots at the Pentagon, including CNN, The Hill, The Washington Post, The New York Times, NBC News, Politico and National Public Radio.

    “Incoming media outlets include the New York Post, Breitbart, the Washington Examiner, the Free Press, the Daily Caller, Newsmax, the Huffington Post and One America News Network, most of whom are seen as conservative or favoring Republican President Donald Trump,” Reuters reported.

    FREEDOM OF SPEECH

    Shortly after Trump took office again in January 2025, the administration began circulating lists of hundreds of words that government staff and agencies shall not use in their reports and communications.

    The Brookings Institution notes that in moving to comply with this anti-speech directive, federal agencies have purged countless taxpayer-funded data sets from a swathe of government websites, including data on crime, sexual orientation, gender, education, climate, and global development.

    The New York Times reports that in the past two months, hundreds of terabytes of digital resources analyzing data have been taken off government websites.

    “While in many cases the underlying data still exists, the tools that make it possible for the public and researchers to use that data have been removed,” The Times wrote.

    On Jan. 27, Trump issued a memo (PDF) that paused all federally funded programs pending a review of those programs for alignment with the administration’s priorities. Among those was ensuring that no funding goes toward advancing “Marxist equity, transgenderism, and green new deal social engineering policies.”

    According to the CDT, this order is a blatant attempt to force government grantees to cease engaging in speech that the current administration dislikes, including speech about the benefits of diversity, climate change, and LGBTQ issues.

    “The First Amendment does not permit the government to discriminate against grantees because it does not like some of the viewpoints they espouse,” the CDT’s Ruane wrote. “Indeed, those groups that are challenging the constitutionality of the order argued as much in their complaint, and have won an injunction blocking its implementation.”

    On January 20, the same day Trump issued an executive order on free speech, the president also issued an executive order titled “Reevaluating and Realigning United States Foreign Aid,” which froze funding for programs run by the U.S. Agency for International Development (USAID). Among those were programs designed to empower civil society and human rights groups, journalists and others responding to digital repression and Internet shutdowns.

    According to the Electronic Frontier Foundation (EFF), this includes many freedom technologies that use cryptography, fight censorship, protect freedom of speech, privacy and anonymity for millions of people around the world.

    “While the State Department has issued some limited waivers, so far those waivers do not seem to cover the open source internet freedom technologies,” the EFF wrote about the USAID disruptions. “As a result, many of these projects have to stop or severely curtail their work, lay off talented workers, and stop or slow further development.”

    On March 14, the president signed another executive order that effectively gutted the U.S. Agency for Global Media (USAGM), which oversees or funds media outlets including Radio Free Europe/Radio Liberty and Voice of America (VOA). The USAGM also oversees Radio Free Asia, which supporters say has been one of the most reliable tools used by the government to combat Chinese propaganda.

    But this week, U.S. District Court Judge Royce Lamberth, a Reagan appointee, temporarily blocked USAGM’s closure by the administration.

    “RFE/RL has, for decades, operated as one of the organizations that Congress has statutorily designated to carry out this policy,” Lamberth wrote in a 10-page opinion. “The leadership of USAGM cannot, with one sentence of reasoning offering virtually no explanation, force RFE/RL to shut down — even if the President has told them to do so.”

    FREEDOM OF RELIGION

    The Trump administration rescinded a decades-old policy that instructed officers not to take immigration enforcement actions in or near “sensitive” or “protected” places, such as churches, schools, and hospitals.

    That directive was immediately challenged in a case brought by a group of Quakers, Baptists and Sikhs, who argued the policy reversal was keeping people from attending services for fear of being arrested on civil immigration violations. On Feb. 24, a federal judge agreed and blocked ICE agents from entering churches or targeting migrants nearby.

    The president’s executive order allegedly addressing antisemitism came with a fact sheet that described college campuses as “infested” with “terrorists” and “jihadists.” Multiple faith groups expressed alarm over the order, saying it attempts to weaponize antisemitism and promote “dehumanizing anti-immigrant policies.”

    The president also announced the creation of a “Task Force to Eradicate Anti-Christian Bias,” to be led by Attorney General Pam Bondi. Never mind that Christianity is easily the largest faith in America and that Christians are well-represented in Congress.

    The Rev. Paul Brandeis Raushenbush, a Baptist minister and head of the progressive Interfaith Alliance, issued a statement accusing Trump of hypocrisy in claiming to champion religion by creating the task force.

    “From allowing immigration raids in churches, to targeting faith-based charities, to suppressing religious diversity, the Trump Administration’s aggressive government overreach is infringing on religious freedom in a way we haven’t seen for generations,” Raushenbush said.

    A statement from Americans United for Separation of Church and State said the task force could lead to religious persecution of those with other faiths.

    “Rather than protecting religious beliefs, this task force will misuse religious freedom to justify bigotry, discrimination, and the subversion of our civil rights laws,” said Rachel Laser, the group’s president and CEO.

    Where is President Trump going with all these blatant attacks on the First Amendment? The president has made no secret of his affection for autocratic leaders and “strongmen” around the world, and he is particularly enamored with Hungary’s far-right Prime Minister Viktor Orbán, who has visited Trump’s Mar-a-Lago resort twice in the past year.

    A March 15 essay in The Atlantic by Hungarian investigative journalist András Pethő recounts how Orbán rose to power by consolidating control over the courts, and by building his own media universe while simultaneously placing a stranglehold on the independent press.

    “As I watch from afar what’s happening to the free press in the United States during the first weeks of Trump’s second presidency — the verbal bullying, the legal harassment, the buckling by media owners in the face of threats — it all looks very familiar,” Pethő wrote. “The MAGA authorities have learned Orbán’s lessons well.”

    ,

    Cory DoctorowWhy I don’t like AI art

    Norman Rockwell’s ‘self portrait.’ All the Rockwell faces have been replaced with HAL 9000 from Kubrick’s ‘2001: A Space Odyssey.’ His signature has been modified with a series of rotations and extra symbols. He has ten fingers on his one visible hand.

    This week on my podcast, I read Why I don’t like AI art, a column from last week’s Pluralistic newsletter:

    Which brings me to art. As a working artist in his third decade of professional life, I’ve concluded that the point of art is to take a big, numinous, irreducible feeling that fills the artist’s mind, and attempt to infuse that feeling into some artistic vessel – a book, a painting, a song, a dance, a sculpture, etc – in the hopes that this work will cause a loose facsimile of that numinous, irreducible feeling to manifest in someone else’s mind.

    Art, in other words, is an act of communication – and there you have the problem with AI art. As a writer, when I write a novel, I make tens – if not hundreds – of thousands of tiny decisions that are in service to this business of causing my big, irreducible, numinous feeling to materialize in your mind. Most of those decisions aren’t even conscious, but they are definitely decisions, and I don’t make them solely on the basis of probabilistic autocomplete. One of my novels may be good and it may be bad, but one thing it definitely is, is rich in communicative intent. Every one of those microdecisions is an expression of artistic intent.


    MP3

    (Image: Cryteria, CC BY 3.0, modified)

    David BrinAnd yet-more news from (or about) Spaaaaaace!

    NOTE: I offer a bit of a riff about the rarity of science - not just on Earth but possibly across the cosmos - at the end. 

    We are gradually trying to resume 'normal' life after our family suffered a 'disruption' in our living arrangements that has left us frazzled, with little time for blog updates. But things are a bit better now, so here is... a roundup of recent* space news and updates.

    *(Well, 'recent' as of when these postings were actually drafted, in January, before we realized how crazy things were gonna get!)

    == Heading for the moon ==

    Sending landers to the lunar surface: In mid-January, a SpaceX Falcon 9 rocket launched two commercial landers - Firefly Aerospace's Blue Ghost lander and Japan's ispace's Resilience lander - to the moon.

    The landers contain scientific instruments to analyze the lunar regolith and magnetosphere, and set up a moon-based global navigation system, laying the groundwork for future lunar missions.

    *As of March 30... well... any space junkies know how it went.


    == Rogue planets all over! ==

    One of the imperfectly insufficient (by itself) but substantially plausible theories for the Great Silence or “Fermi Paradox” (terrible name) is that interstellar travel… even at just 10% of light speed… is made very difficult by a minefield of hidden obstacles.  No, I am not talking about my short story “Crystal Spheres.”  But rather, these would be rogue planets that are untethered from stars. Every year we find they are more common in the galaxy.

    For example, the infrared-sensitive Webb Telescope has found hundreds… down to Saturn size, just in the Orion Nebula, alone! Forty-two of them are in binary pairs. Wow. Implicit: billions of free-floating planets in the darkness between the stars.

    One more incredible accomplishment by this fantastic instrument that this fantastic, scientific civilization created, in our steady and accelerating progress as apprentices in the Laboratory of Creation! 


    And yet some ignore the almost (or actual) theological significance of these incredible accomplishments (Robots roaming Mars! New human-made life forms! The new skills to save this beautiful world from … ourselves!) Okay, grad students in Creation’s Lab should respect those who clutch the Kindergarten text given to illiterate shepherds. Fine. 


    But those who wage all-out war vs science are clearly the real heretics, here.


    See more incredible Webb Wonders!  A way-kewl podcast from Fraser Cain



    == Monitoring Methane Emissions ==


    Among the worst criminals alive today are those who are deliberately venting methane into the atmosphere. After GOP Congresses deliberately canceled or slashed the satellites to track down vents and Trump delayed them, we now, at last, have the policing tools. A satellite that measures methane leaks from oil and gas companies is set to start circulating the Earth 15 times a day next month. Google plans to have the data mapped by the end of the year for the whole world to see. (Thanks Sergey.)

    Methane is a potent greenhouse gas estimated to be responsible for nearly a third of human-caused global warming. Scientists say slashing methane emissions is one of the fastest ways to slow the climate crisis because methane has 80 times the warming power of carbon dioxide over a decade. Though farming is the largest source of methane emissions from human activities, the energy sector is a close second. Oil, gas, and coal operations are thought to account for 40% of global methane emissions from human activities. The IEA says focusing on the energy sector should be a priority, in part because reducing methane leaks is cost-effective. Leaking gas can be captured and sold, and the technology to do that is relatively cheap.

    Two new methane-detecting satellites - Carbon Mapper and MethaneSAT/EDF - are now surveying the planet's climate. Because the Biden admin pushed through the quality methane satellites, the information will be so widely seen that members of the public will be able to act on their own - even despite a suborned EPA and Justice dept. A case where the right may be bitten by the 'market/consumer alternative to government' that they have long raved about.


    == Dark comets, Dwarf galaxies - and Dark Matter ==

    If I had followed my original scientific path – not lured away by the likes of you telling me to write more scifi – I’d likely have been in the mix of these studies of “dark comets,” whose orbits get significantly altered by gassy or dusty emissions, the way it happens with regular, icy comets, but without any visible signs of watery volatiles. “dark comets are different from another intermediary category between asteroids and comets, known as active asteroids, although there may be some overlap. Active asteroids are objects without ice that produce a cloud of dust around them, for a variety of reasons…” 

    Only the Dark Comets – and some include the odd cigar-shaped interstellar visitor ‘Oumuamua – still have no firm explanation. Though some theories suggest emission of some volatile substance that doesn’t leave an ionized spectral trace.

    The Milky Way’s central (huge) black hole is spinning surprisingly fast and out of orientation with the rest of the galaxy; the reasons remain unknown. Now, data from the Event Horizon Telescope - which first captured the black hole's image in 2022 - has revealed a clue: The Sagittarius A* we see today was born from a cataclysmic merger with another giant black hole billions of years ago.

    Dark matter might not just be the silent partner of the universe—it could be the secret to understanding how supermassive black holes unite in their deadly dance. 


    Attempts to figure out dark matter have pinned hopes on the possibility that the dark… bits… whatever they might be… interact with regular matter in some way – even very slightly – beyond just gravity. At least that’s been the hope of particle physicists with their big machines. So far, the indicators suggest ‘only gravity.’ But this study of nearby anomalous dwarf galaxies hints there might be just a little something more.



    == A couple of final notes about you-know-what ==


    Science is - above all - about chasing down what's true about objective reality, even when the results conflict with your wishes or preconceptions. 


    This human-invented process has led to all of the benefits of enlightenment: unprecedented wealth, comfort, knowledge, safety and - yes - comparative peace... along with our recent ambitions to overcome a myriad of errors through cheerful exchange of criticism. Errors like prejudicial assumptions about whole classes of people. Errors like mismanaging a fragile planet.


    Alas, science is a rare phenomenon. Rare across human history and -- given the way that evolution works -- probably rare across the universe. (My own top explanation for the Fermi Paradox, by the way.)


    Across human history, science - and its ancillary arts like equality before law - almost never happened. Instead, people in most societies preferred stories. Incantations about the world, told by their parents and then by priests and by kings.  I know about this, having had successful careers in both science and storytelling. I know the differences and the overlaps very well. 


    While romance and stories are essential to being human, they also can lead directly to horrors and Auschwitz, if they allow evil incantation-spewers to rile up whole populations toward hatred and cauterized hope. 


    Anyone who does not recognize what I just described as THE essential thing now happening across the globe is already lost to reason. 


    Moreover, if the recent trend - reverting human civilization back to 10,000 years of nescient rule by inheritance brats and chanting incantation spinners - does succeed at suppressing the rare era of science, then we'll truly have our answer for why no voices can be heard across the cosmos.


    Cryptogram Cell Phone OPSEC for Border Crossings

    I have heard stories of more aggressive interrogation of electronic devices at US border crossings. I know a lot about securing computers, but very little about securing phones.

    Are there easy ways to delete data—files, photos, etc.—on phones so it can’t be recovered? Does resetting a phone to factory defaults erase data, or is it still recoverable? That is, does the reset erase the old encryption key, or just sever the password that accesses that key? When the phone is rebooted, are deleted files still available?

    We need answers for both iPhones and Android phones. And it’s not just the US; the world is going to become a more dangerous place to oppose state power.

    Cryptogram The Signal Chat Leak and the NSA

    US National Security Advisor Mike Waltz, who started the now-infamous group chat coordinating a US attack against the Yemen-based Houthis on March 15, is seemingly now suggesting that the secure messaging service Signal has security vulnerabilities.

    "I didn’t see this loser in the group," Waltz told Fox News about Atlantic editor in chief Jeffrey Goldberg, whom Waltz invited to the chat. "Whether he did it deliberately or it happened in some other technical mean, is something we’re trying to figure out."

    Waltz’s implication that Goldberg may have hacked his way in was followed by a report from CBS News that the US National Security Agency (NSA) had sent out a bulletin to its employees last month warning them about a security "vulnerability" identified in Signal.

    The truth, however, is much more interesting. If Signal has vulnerabilities, then China, Russia, and other US adversaries suddenly have a new incentive to discover them. At the same time, the NSA urgently needs to find and fix any vulnerabilities as quickly as it can—and similarly, ensure that commercial smartphones are free of backdoors—access points that allow people other than a smartphone’s user to bypass the usual security authentication methods to access the device’s contents.

    That is essential for anyone who wants to keep their communications private, which should be all of us.

    It’s common knowledge that the NSA’s mission is breaking into and eavesdropping on other countries’ networks. (During President George W. Bush’s administration, the NSA conducted warrantless taps into domestic communications as well—surveillance that several district courts ruled to be illegal before those decisions were later overturned by appeals courts. To this day, many legal experts maintain that the program violated federal privacy protections.) But the organization has a secondary, complementary responsibility: to protect US communications from others who want to spy on them. That is to say: While one part of the NSA is listening into foreign communications, another part is stopping foreigners from doing the same to Americans.

    Those missions never contradicted each other during the Cold War, when allied and enemy communications were wholly separate. Today, though, everyone uses the same computers, the same software, and the same networks. That creates a tension.

    When the NSA discovers a technological vulnerability in a service such as Signal (or buys one on the thriving clandestine vulnerability market), does it exploit it in secret, or reveal it so that it can be fixed? Since at least 2014, a US government interagency "equities" process has been used to decide whether it is in the national interest to take advantage of a particular security flaw, or to fix it. The trade-offs are often complicated and hard.

Waltz—along with Vice President J.D. Vance, Defense Secretary Pete Hegseth, and the other officials in the Signal group—has just made the trade-offs much tougher to resolve. Signal is both widely available and widely used. Smaller governments that can’t afford their own military-grade encryption use it. Journalists, human rights workers, persecuted minorities, dissidents, corporate executives, and criminals around the world use it. Many of these populations are of great interest to the NSA.

    At the same time, as we have now discovered, the app is being used for operational US military traffic. So, what does the NSA do if it finds a security flaw in Signal?

    Previously, it might have preferred to keep the flaw quiet and use it to listen to adversaries. Now, if the agency does that, it risks someone else finding the same vulnerability and using it against the US government. And if it was later disclosed that the NSA could have fixed the problem and didn’t, then the results might be catastrophic for the agency.

Smartphones present a similar trade-off. The biggest risk of eavesdropping on a Signal conversation comes from the individual phones that the app is running on. While it’s largely unclear whether the US officials involved had downloaded the app onto personal or government-issued phones—although Witkoff suggested on X that the program was on his "personal devices"—smartphones are consumer devices, not at all suitable for classified US government conversations. An entire industry of spyware companies sells capabilities to remotely hack smartphones for any country willing to pay. More capable countries have more sophisticated operations. Just last year, attacks that were later attributed to China attempted to access both President Donald Trump’s and Vance’s smartphones. Previously, the FBI—as well as law enforcement agencies in other countries—has pressured both Apple and Google to add "backdoors" in their phones to more easily facilitate court-authorized eavesdropping.

    These backdoors would create, of course, another vulnerability to be exploited. A separate attack from China last year accessed a similar capability built into US telecommunications networks.

The vulnerabilities equities calculus has swung against weakening smartphone security and toward protecting the devices that senior government officials now use to discuss military secrets. That also means it has swung against the US government hoarding Signal vulnerabilities—and toward full disclosure.

    This is plausibly good news for Americans who want to talk among themselves without having anyone, government or otherwise, listen in. We don’t know what pressure the Trump administration is using to make intelligence services fall into line, but it isn’t crazy to worry that the NSA might again start monitoring domestic communications.

Because of the Signal chat leak, it’s less likely that they’ll use vulnerabilities in Signal to do that. Equally, bad actors such as drug cartels may also feel safer using Signal. Their security against the US government lies in the fact that the US government shares the same vulnerabilities. No one wants their secrets exposed.

I have long advocated for a "defense dominant" cybersecurity strategy. As long as smartphones are in the pocket of every government official, police officer, judge, CEO, and nuclear power plant operator—and now that they are being used for what the White House now calls "sensitive," if not outright classified conversations among cabinet members—we need them to be as secure as possible. And that means no government-mandated backdoors.

We may find out more about how officials—including the vice president of the United States—came to be using Signal on what seem to be consumer-grade smartphones, in an apparent breach of the laws on government records. It’s unlikely that they really thought through the consequences of their actions.

    Nonetheless, those consequences are real. Other governments, possibly including US allies, will now have much more incentive to break Signal’s security than they did in the past, and more incentive to hack US government smartphones than they did before March 24.

    For just the same reason, the US government has urgent incentives to protect them.

    This essay was originally published in Foreign Policy.

    365 TomorrowsThe Unsuitable Girl

    Author: Jessica Pickard Once again Sam asked himself why he was standing here, in this field, miles out of town, staring into an increasingly dusky sky. Well he was here for the money of course. God knows he could use that right now. But he was also here, if he was honest, for the girl, […]

    The post The Unsuitable Girl appeared first on 365tomorrows.

    ,

    365 TomorrowsThe Memory Hour

    Author: John Adinolfi Caleb lived alone, as did Cole. Caleb by circumstance, Cole by choice. Trina had entertained a variety of live-in partners, but all were short associations. She lived alone. Each of their homes was unexceptional, except for sharing an extraordinary view of the Pacific below. Sitting on the edge of a cliff, surrounded […]

    The post The Memory Hour appeared first on 365tomorrows.

    ,

    Cryptogram Friday Squid Blogging: Squid Werewolf Hacking Group

    In another rare squid/cybersecurity intersection, APT37 is also known as “Squid Werewolf.”

    As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

    Worse Than FailureError'd: Here Comes the Sun

    We got an unusual rash of submissions at Error'd this week. Here are five reasonably good ones chosen not exactly at random. For those few (everyone) who didn't catch the off-by-one from last week's batch, there's the clue.

    "Gotta CAPTCHA 'Em All," puns Alex G. "So do I select them all?" he wondered. I think the correct answer is null.

[screenshot 1]

    "What does a null eat?" wondered B.J.H , "and is one null invited or five?". The first question is easily answered. NaaN, of course. Probably garlic. I would expect B.J. to already know the eating habits of a long-standing companion, so I am guessing that the whole family is not meant to tag along. Stick with just the one.

[screenshot 3]

    Planespotter Rick R. caught this one at the airport. "Watching my daughter's flight from New York and got surprised by Boeing's new supersonic 737 having already arrived in DFW," he observed. I'm not quite sure what went wrong. It's not the most obvious time zone mistake I can imagine, but I'm pretty sure the cure is the same: all times displayed in any context that is not purely restricted to a single location (and short time frame) should explicitly include the relevant timezone.
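
If you want that prescription in code, here's a minimal sketch (C#, with made-up flight times; the time-zone lookup uses the Windows ID, which would be "America/Chicago" on Linux):

using System;

// Show the UTC offset with every displayed time, so a flight can't "arrive"
// on paper before it has left. All values below are invented for illustration.
var departure = new DateTimeOffset(2025, 3, 28, 14, 5, 0, TimeSpan.FromHours(-4)); // New York, EDT
var dfw = TimeZoneInfo.FindSystemTimeZoneById("Central Standard Time");            // "America/Chicago" on Linux
var arrival = TimeZoneInfo.ConvertTime(departure.AddHours(3.5), dfw);
Console.WriteLine($"Departs {departure:yyyy-MM-dd HH:mm zzz}"); // 2025-03-28 14:05 -04:00
Console.WriteLine($"Arrives {arrival:yyyy-MM-dd HH:mm zzz}");   // 2025-03-28 16:35 -05:00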

[screenshot 2]

    Rob H. figures "From my day job's MECM Software Center. It appears that autocorrect has miscalculated, because the internet cannot be calculated." The internet is -1.

[screenshot 4]

    Ending this week on a note of hope, global warrior Stewart may have just saved the planet. "Climate change is solved. We just need to replicate the 19 March performance of my new solar panels." Or perhaps I miscalculated.

[screenshot 0]

[Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready. Learn more.

    365 TomorrowsWhen Next the Fractals Bloom

    Author: Hillary Lyon With a well-worn key in hand, Bonnie unlocked the massive front door of her great-uncle Duran’s house. The place sat unoccupied since his passing; it had taken forever for his will to slog through probate. She’d been his favorite family member, and he, hers. His death made her face her own mortality; […]

    The post When Next the Fractals Bloom appeared first on 365tomorrows.

    ,

    LongNowBlaise Agüera y Arcas

    In What is Intelligence?, Blaise Agüera y Arcas, VP, Fellow and CTO of Technology & Society at Google, explores what intelligence really is, and how AI’s emergence is a natural consequence of evolution. Encompassing decades of theory, existing literature, and recent artificial life experiments, Agüera y Arcas’ research argues that certain modern AI systems do indeed have a claim to intelligence, consciousness, and free will.

    This talk is presented as part of a larger project on What is Intelligence?, including a printed book alongside experimental formats which challenge the conventions of academic publishing. It is the inaugural collaborative work of Antikythera, a think tank on the philosophy of technology, and MIT Press, a leading publisher of books and journals at the intersection of science, technology, art, social science, and design.

    Krebs on SecurityWhen Getting Phished Puts You in Mortal Danger

    Many successful phishing attacks result in a financial loss or malware infection. But falling for some phishing scams, like those currently targeting Russians searching online for organizations that are fighting the Kremlin war machine, can cost you your freedom or your life.

    The real website of the Ukrainian paramilitary group “Freedom of Russia” legion. The text has been machine-translated from Russian.

    Researchers at the security firm Silent Push mapped a network of several dozen phishing domains that spoof the recruitment websites of Ukrainian paramilitary groups, as well as Ukrainian government intelligence sites.

    The website legiohliberty[.]army features a carbon copy of the homepage for the Freedom of Russia Legion (a.k.a. “Free Russia Legion”), a three-year-old Ukraine-based paramilitary unit made up of Russian citizens who oppose Vladimir Putin and his invasion of Ukraine.

    The phony version of that website copies the legitimate site — legionliberty[.]army — providing an interactive Google Form where interested applicants can share their contact and personal details. The form asks visitors to provide their name, gender, age, email address and/or Telegram handle, country, citizenship, experience in the armed forces; political views; motivations for joining; and any bad habits.

    “Participation in such anti-war actions is considered illegal in the Russian Federation, and participating citizens are regularly charged and arrested,” Silent Push wrote in a report released today. “All observed campaigns had similar traits and shared a common objective: collecting personal information from site-visiting victims. Our team believes it is likely that this campaign is the work of either Russian Intelligence Services or a threat actor with similarly aligned motives.”

    Silent Push’s Zach Edwards said the fake Legion Liberty site shared multiple connections with rusvolcorps[.]net. That domain mimics the recruitment page for a Ukrainian far-right paramilitary group called the Russian Volunteer Corps (rusvolcorps[.]com), and uses a similar Google Forms page to collect information from would-be members.

    Other domains Silent Push connected to the phishing scheme include: ciagov[.]icu, which mirrors the content on the official website of the U.S. Central Intelligence Agency; and hochuzhitlife[.]com, which spoofs the Ministry of Defense of Ukraine & General Directorate of Intelligence (whose actual domain is hochuzhit[.]com).

    According to Edwards, there are no signs that these phishing sites are being advertised via email. Rather, it appears those responsible are promoting them by manipulating the search engine results shown when someone searches for one of these anti-Putin organizations.

    In August 2024, security researcher Artem Tamoian posted on Twitter/X about how he received startlingly different results when he searched for “Freedom of Russia legion” in Russia’s largest domestic search engine Yandex versus Google.com. The top result returned by Google was the legion’s actual website, while the first result on Yandex was a phishing page targeting the group.

    “I think at least some of them are surely promoted via search,” Tamoian said of the phishing domains. “My first thread on that accuses Yandex, but apart from Yandex those websites are consistently ranked above legitimate in DuckDuckGo and Bing. Initially, I didn’t realize the scale of it. They keep appearing to this day.”

    Tamoian, a native Russian who left the country in 2019, is the founder of the cyber investigation platform malfors.com. He recently discovered two other sites impersonating the Ukrainian paramilitary groups — legionliberty[.]world and rusvolcorps[.]ru — and reported both to Cloudflare. When Cloudflare responded by blocking the sites with a phishing warning, the real Internet address of these sites was exposed as belonging to a known “bulletproof hosting” network called Stark Industries Solutions Ltd.

    Stark Industries Solutions appeared two weeks before Russia invaded Ukraine in February 2022, materializing out of nowhere with hundreds of thousands of Internet addresses in its stable — many of them originally assigned to Russian government organizations. In May 2024, KrebsOnSecurity published a deep dive on Stark, which has repeatedly been used to host infrastructure for distributed denial-of-service (DDoS) attacks, phishing, malware and disinformation campaigns from Russian intelligence agencies and pro-Kremlin hacker groups.

    In March 2023, Russia’s Supreme Court designated the Freedom of Russia legion as a terrorist organization, meaning that Russians caught communicating with the group could face between 10 and 20 years in prison.

    Tamoian said those searching online for information about these paramilitary groups have become easy prey for Russian security services.

    “I started looking into those phishing websites, because I kept stumbling upon news that someone gets arrested for trying to join [the] Ukrainian Army or for trying to help them,” Tamoian told KrebsOnSecurity. “I have also seen reports [of] FSB contacting people impersonating Ukrainian officers, as well as using fake Telegram bots, so I thought fake websites might be an option as well.”

    Search results showing news articles about people in Russia being sentenced to lengthy prison terms for attempting to aid Ukrainian paramilitary groups.

Tamoian said reports surface regularly in Russia about people being arrested for trying to carry out an action requested by a "Ukrainian recruiter," with the courts unfailingly imposing harsh sentences regardless of the defendant’s age.

    “This keeps happening regularly, but usually there are no details about how exactly the person gets caught,” he said. “All cases related to state treason [and] terrorism are classified, so there are barely any details.”

    Tamoian said while he has no direct evidence linking any of the reported arrests and convictions to these phishing sites, he is certain the sites are part of a larger campaign by the Russian government.

    “Considering that they keep them alive and keep spawning more, I assume it might be an efficient thing,” he said. “They are on top of DuckDuckGo and Yandex, so it unfortunately works.”

    Further reading: Silent Push report, Russian Intelligence Targeting its Citizens and Informants.

    Cryptogram AIs as Trusted Third Parties

    This is a truly fascinating paper: “Trusted Machine Learning Models Unlock Private Inference for Problems Currently Infeasible with Cryptography.” The basic idea is that AIs can act as trusted third parties:

    Abstract: We often interact with untrusted parties. Prioritization of privacy can limit the effectiveness of these interactions, as achieving certain goals necessitates sharing private data. Traditionally, addressing this challenge has involved either seeking trusted intermediaries or constructing cryptographic protocols that restrict how much data is revealed, such as multi-party computations or zero-knowledge proofs. While significant advances have been made in scaling cryptographic approaches, they remain limited in terms of the size and complexity of applications they can be used for. In this paper, we argue that capable machine learning models can fulfill the role of a trusted third party, thus enabling secure computations for applications that were previously infeasible. In particular, we describe Trusted Capable Model Environments (TCMEs) as an alternative approach for scaling secure computation, where capable machine learning model(s) interact under input/output constraints, with explicit information flow control and explicit statelessness. This approach aims to achieve a balance between privacy and computational efficiency, enabling private inference where classical cryptographic solutions are currently infeasible. We describe a number of use cases that are enabled by TCME, and show that even some simple classic cryptographic problems can already be solved with TCME. Finally, we outline current limitations and discuss the path forward in implementing them.

    When I was writing Applied Cryptography way back in 1993, I talked about human trusted third parties (TTPs). This research postulates that someday AIs could fulfill the role of a human TTP, with added benefits like (1) being able to audit their processing, and (2) being able to delete it and erase their knowledge when their work is done. And the possibilities are vast.

    Here’s a TTP problem. Alice and Bob want to know whose income is greater, but don’t want to reveal their income to the other. (Assume that both Alice and Bob want the true answer, so neither has an incentive to lie.) A human TTP can solve that easily: Alice and Bob whisper their income to the TTP, who announces the answer. But now the human knows the data. There are cryptographic protocols that can solve this. But we can easily imagine more complicated questions that cryptography can’t solve. “Which of these two novel manuscripts has more sex scenes?” “Which of these two business plans is a riskier investment?” If Alice and Bob can agree on an AI model they both trust, they can feed the model the data, ask the question, get the answer, and then delete the model afterwards. And it’s reasonable for Alice and Bob to trust a model with questions like this. They can take the model into their own lab and test it a gazillion times until they are satisfied that it is fair, accurate, or whatever other properties they want.
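
A minimal sketch of how that income comparison might look inside a Trusted Capable Model Environment follows. The IModel interface, its Ask() method, and the dispose-to-erase semantics are illustrative assumptions rather than anything the paper specifies; the point is only that both private inputs stay inside the environment and a single bit comes out.

using System;

// Hypothetical TCME sketch: a stateless model environment that is queried once
// and then erased. Nothing here is an API from the paper.
public interface IModel : IDisposable
{
    // Stateless query: the environment keeps no memory of the prompt between calls.
    string Ask(string prompt);
}

public static class IncomeComparison
{
    // Alice's and Bob's incomes go only into the trusted model environment;
    // the single boolean answer is the only output that leaves it.
    public static bool AliceEarnsMore(IModel trustedModel, decimal aliceIncome, decimal bobIncome)
    {
        using (trustedModel) // disposing the environment erases its state, and both inputs with it
        {
            string answer = trustedModel.Ask(
                $"Alice's income is {aliceIncome} and Bob's income is {bobIncome}. " +
                "Reply with exactly YES if Alice's income is greater, otherwise reply NO.");
            return answer.Trim().Equals("YES", StringComparison.OrdinalIgnoreCase);
        }
    }
}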

    The paper contains several examples where an AI TTP provides real value. This is still mostly science fiction today, but it’s a fascinating thought experiment.

    Worse Than FailureA Bracing Way to Start the Day

    Barry rolled into work at 8:30AM to see the project manager waiting at the door, wringing her hands and sweating. She paced a bit while Barry badged in, and then immediately explained the issue:

    Today was a major release of their new features. This wasn't just a mere software change; the new release was tied to major changes to a new product line- actual widgets rolling off an assembly line right now. And those changes didn't work.

    "I thought we tested this," Barry said.

    "We did! And Stu called in sick today!"

    Stu was the senior developer on the project, who had written most of the new code.

    "I talked to him for a few minutes, and he's convinced it's a data issue. Something in the metadata or something?"

    "I'll take a look," Barry said.

    He skipped grabbing a coffee from the carafe and dove straight in.

    Prior to the recent project, the code had looked something like this:

if (IsProduct1(_productId))
	_programId = 1;
else if (IsProduct2(_productId))
	_programId = 2;
else if (IsProduct3(_productId))
	_programId = 3;
    

    Part of the project, however, was about changing the workflow for "Product 3". So Stu had written this code:

    if (IsProduct1(_productId))
    	_programId = 1;
    else if (IsProduct2(_productId))
    	_programId = 2;
    else if (IsProduct3(_productId))
    	_programId = 3;
    	DoSomethingProductId3Specific1();
    	DoSomethingProductId3Specific2();
    	DoSomethingProductId3Specific3();
    

Since this is C# and not Python, indentation means nothing to the compiler: only the _programId = 3; assignment belongs to that last else if, while the three product-3 calls run unconditionally for every product. It took Barry all of 5 seconds to spot that and fix it:

    if (IsProduct1(_productId))
    {
    	_programId = 1;
    }
    else if (IsProduct2(_productId))
    {
    	_programId = 2;
    }
    else if (IsProduct3(_productId))
    {
    	_programId = 3;
    	DoSomethingProductId3Specific1();
    	DoSomethingProductId3Specific2();
    	DoSomethingProductId3Specific3();
    }
    

    This brings us to about 8:32. Now, given the problems, Barry wasn't about to just push this change- in addition to running pipeline tests (and writing tests that Stu clearly hadn't), he pinged the head of QA to get a tester on this fix ASAP. Everyone worked quickly, and that meant by 9:30 the fix was considered good and ready to be merged in and pushed to production. Sometime in there, while waiting for a pipeline to complete, Barry managed to grab a cup of coffee to wake himself up.

    While Barry was busy with that, Stu had decided that he wasn't feeling that sick after all, and had rolled into the office around 9:00. Which meant that just as Barry was about to push the button to run the release pipeline, an "URGENT" email came in from Stu.

    "Hey, everybody, I fixed that bug. Can we get this released ASAP?"

    Barry went ahead and released the version that he'd already tested, but out of morbid curiosity, went and checked Stu's fix.

    if (IsProduct1(_productId))
    	_programId = 1;
    else if (IsProduct2(_productId))
    	_programId = 2;
    else if (IsProduct3(_productId))
    {
    	_programId = 3;
    }
    
    if (IsProduct3(_productId))
    {
    	DoSomethingProductId3Specific1();
    	DoSomethingProductId3Specific2();
    	DoSomethingProductId3Specific3();
    }
    

    At least this version would have worked, though I'm not sure Stu fully understands what "{}"s mean in C#. Or in most programming languages, if we're being honest.

    With Barry's work, the launch went off just a few minutes later than the scheduled time. Since the launch was successful, at the next company "all hands", the leadership team made sure to congratulate the people instrumental in making it happen: that is to say, the lead developer of the project, Stu.

    [Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.