Planet Russell

Worse Than Failure: CodeSOD: A Jammed Up Session

Andre has inherited a rather antique ASP .Net WebForms application. It's a large one, with many pages in it, but they all follow a certain pattern. Let's see if you can spot it.

protected void btnSearch_Click(object sender, EventArgs e)
{
    ArrayList paramsRel = new ArrayList();
    paramsRel["Name"] = txtNome.Text;
    paramsRel["Date"] = txtDate.Text;
    Session["paramsRel"] = paramsRel;
   
    List<Client> clients = Controller.FindClients();
    //Some other code
}

Now, at first glance, this doesn't look terrible. Using an ArrayList as a dictionary is odd, and frankly, storing a dictionary in the Session object is weird, but it's not an automatic red flag. But wait, why is it called paramsRel? They couldn't be… no, they wouldn't…

public List<Client> FindClients()
{
    ArrayList paramsRel = (ArrayList)Session["paramsRel"];
    string name = (string)paramsRel["Name"];
    string dateStr = (string)paramsRel["Date"];
    DateTime date = DateTime.Parse(dateStr);
   
   //More code...
}

Now there's the red flag. paramsRel is how they pass parameters to functions. They stuff it into the Session, then call a function which retrieves it from that Session.

This pattern is used everywhere in the application. You can see that there's a vague gesture in the direction of trying to implement some kind of Model-View-Controller pattern (as FindClients is a member of the Controller object), but that modularization gets undercut by everything depending on Session as a pseudoglobal for passing state information around.

The only good news is that the Session object is synchronized so there's no thread safety issue here, though not for want of trying.
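
For contrast, the boring fix is to just pass the values as arguments. A minimal sketch of what that could look like, with a hypothetical signature change (the parameter names are invented here, and the surrounding class and usings are omitted as in the original snippets):

protected void btnSearch_Click(object sender, EventArgs e)
{
    // Pass the search criteria directly instead of smuggling them through Session.
    List<Client> clients = Controller.FindClients(txtNome.Text, txtDate.Text);
    //Some other code
}

public List<Client> FindClients(string name, string dateStr)
{
    // The parameters arrive as plain arguments; no Session lookup required.
    DateTime date = DateTime.Parse(dateStr);
    //More code...
}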

365 Tomorrows: Bloodfall

Author: Francesco Levato The end of the world was fast, like a ruptured heart, a laceration tearing ventricles apart, flooding the chest cavity with one final gout. It rained actual blood for weeks after, and muscle fiber, and an oily substance like rendered fat. In the space of a gasp two thirds of the population […]

The post Bloodfall appeared first on 365tomorrows.

Cryptogram: AI-Generated Law

On April 14, Dubai’s ruler, Sheikh Mohammed bin Rashid Al Maktoum, announced that the United Arab Emirates would begin using artificial intelligence to help write its laws. A new Regulatory Intelligence Office would use the technology to “regularly suggest updates” to the law and “accelerate the issuance of legislation by up to 70%.” AI would create a “comprehensive legislative plan” spanning local and federal law and would be connected to public administration, the courts, and global policy trends.

The plan was widely greeted with astonishment. This sort of AI legislating would be a global “first,” with the potential to go “horribly wrong.” Skeptics fear that the AI model will make up facts or fundamentally fail to understand societal tenets such as fair treatment and justice when influencing law.

The truth is, the UAE’s idea of AI-generated law is not really a first and not necessarily terrible.

The first instance of enacted law known to have been written by AI was passed in Porto Alegre, Brazil, in 2023. It was a local ordinance about water meter replacement. Council member Ramiro Rosário was simply looking for help in generating and articulating ideas for solving a policy problem, and ChatGPT did well enough that the bill passed unanimously. We approve of AI assisting humans in this manner, although Rosário should have disclosed that the bill was written by AI before it was voted on.

Brazil was a harbinger but hardly unique. In recent years, there has been a steady stream of attention-seeking politicians at the local and national level introducing bills that they promote as being drafted by AI or letting AI write their speeches for them or even vocalize them in the chamber.

The Emirati proposal is different from those examples in important ways. It promises to be more systemic and less of a one-off stunt. The UAE has promised to spend more than $3 billion to transform into an “AI-native” government by 2027. Time will tell if it is also different in being more hype than reality.

Rather than being a true first, the UAE’s announcement is emblematic of a much wider global trend of legislative bodies integrating AI assistive tools for legislative research, drafting, translation, data processing, and much more. Individual lawmakers have begun turning to AI drafting tools as they traditionally have relied on staffers, interns, or lobbyists. The French government has gone so far as to train its own AI model to assist with legislative tasks.

Even asking AI to comprehensively review and update legislation would not be a first. In 2020, the U.S. state of Ohio began using AI to do wholesale revision of its administrative law. AI’s speed is potentially a good match to this kind of large-scale editorial project; the state’s then-lieutenant governor, Jon Husted, claims it was successful in eliminating 2.2 million words’ worth of unnecessary regulation from Ohio’s code. Now a U.S. senator, Husted has recently proposed to take the same approach to U.S. federal law, with an ideological bent promoting AI as a tool for systematic deregulation.

The dangers of confabulation and inhumanity—while legitimate—aren’t really what makes the potential of AI-generated law novel. Humans make mistakes when writing law, too. Recall that a single typo in a 900-page law nearly brought down the massive U.S. health care reforms of the Affordable Care Act in 2015, before the Supreme Court excused the error. And, distressingly, the citizens and residents of nondemocratic states are already subject to arbitrary and often inhumane laws. (The UAE is a federation of monarchies without direct elections of legislators and with a poor record on political rights and civil liberties, as evaluated by Freedom House.)

The primary concern with using AI in lawmaking is that it will be wielded as a tool by the powerful to advance their own interests. AI may not fundamentally change lawmaking, but its superhuman capabilities have the potential to exacerbate the risks of power concentration.

AI, and technology generally, is often invoked by politicians to give their project a patina of objectivity and rationality, but it doesn’t really do any such thing. As proposed, AI would simply give the UAE’s hereditary rulers new tools to express, enact, and enforce their preferred policies.

Mohammed’s emphasis that a primary benefit of AI will be to make law faster is also misguided. The machine may write the text, but humans will still propose, debate, and vote on the legislation. Drafting is rarely the bottleneck in passing new law. What takes much longer is for humans to amend, horse-trade, and ultimately come to agreement on the content of that legislation—even when that politicking is happening among a small group of monarchic elites.

Rather than expeditiousness, the more important capability offered by AI is sophistication. AI has the potential to make law more complex, tailoring it to a multitude of different scenarios. The combination of AI’s research and drafting speed makes it possible for it to outline legislation governing dozens, even thousands, of special cases for each proposed rule.

But here again, this capability of AI opens the door for the powerful to have their way. AI’s capacity to write complex law would allow the humans directing it to dictate their exacting policy preference for every special case. It could even embed those preferences surreptitiously.

Since time immemorial, legislators have carved out legal loopholes to narrowly cater to special interests. AI will be a powerful tool for authoritarians, lobbyists, and other empowered interests to do this at a greater scale. AI can help automatically produce what political scientist Amy McKay has termed “microlegislation”: loopholes that may be imperceptible to human readers on the page—until their impact is realized in the real world.

But AI can be constrained and directed to distribute power rather than concentrate it. For Emirati residents, the most intriguing possibility of the AI plan is the promise to introduce AI “interactive platforms” where the public can provide input to legislation. In experiments across locales as diverse as Kentucky, Massachusetts, France, Scotland, Taiwan, and many others, civil society within democracies is innovating and experimenting with ways to leverage AI to help listen to constituents and construct public policy in a way that best serves diverse stakeholders.

If the UAE is going to build an AI-native government, it should do so for the purpose of empowering people and not machines. AI has real potential to improve deliberation and pluralism in policymaking, and Emirati residents should hold their government accountable to delivering on this promise.

Long Now: Stefan Sagmeister

Stefan Sagmeister looks at the world from a long-term perspective and presents designs and visualizations that arrive at very different conclusions than you get from Twitter and TV news.

About Stefan Sagmeister

Stefan Sagmeister has designed for clients as diverse as the Rolling Stones, HBO, and the Guggenheim Museum. He’s a two-time Grammy winner and has earned practically every important international design award.

Stefan talks about the large subjects of our lives, like happiness or beauty, how they connect to design, and what that actually means for our everyday lives. He has spoken five times at the official TED conference, making him one of the three most frequently invited TED speakers.

His books sell in the hundreds of thousands, and his exhibitions have been mounted in museums around the world. His exhibit "The Happy Show" attracted well over half a million visitors worldwide and became the most visited graphic design show in history.

A native of Austria, he received his MFA from the University of Applied Arts in Vienna and, as a Fulbright Scholar, a master’s degree from Pratt Institute in New York.

Planet Debian: Jonathan McDowell: Local Voice Assistant Step 3: A Detour into Tensorflow

To build our local voice satellite on a Debian system rather than using the ATOM Echo device we need something that can handle the wake word component; the piece that means we only send audio to the Home Assistant server for processing by whisper.cpp when we’ve detected someone is trying to talk to us.

openWakeWord seems to be one of the better ways to do this, and is well supported. However. It relies on TensorFlow Lite (now LiteRT) which is a complicated mess of machine learning code. tflite-runtime is available from PyPI, but that’s prebuilt and we’re trying to avoid that.

Despite initial impressions that building TensorFlow would be complicated to deal with - Bazel is an immediate warning - it turns out to be incredibly simple to build your own .deb:

$ wget -O tensorflow-v2.15.1.tar.gz https://github.com/tensorflow/tensorflow/archive/refs/tags/v2.15.1.tar.gz
…
$ tar -axf tensorflow-v2.15.1.tar.gz
$ cd tensorflow-2.15.1/
$ BUILD_NUM_JOBS=$(nproc) BUILD_DEB=y tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh
…
$ find . -name *.deb
./tensorflow/lite/tools/pip_package/gen/tflite_pip/python3-tflite-runtime-dbgsym_2.15.1-1_amd64.deb
./tensorflow/lite/tools/pip_package/gen/tflite_pip/python3-tflite-runtime_2.15.1-1_amd64.deb
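
One quick way to sanity-check the resulting package (assuming you want to install it on the build machine) is to install the .deb with apt and confirm that the Python module, and with it the compiled wrapper, loads:

$ sudo apt install ./tensorflow/lite/tools/pip_package/gen/tflite_pip/python3-tflite-runtime_2.15.1-1_amd64.deb
$ python3 -c "import tflite_runtime.interpreter as tflite; print(tflite.Interpreter)"
<class 'tflite_runtime.interpreter.Interpreter'>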

That build script is hiding an awful lot of complexity, however. In particular, a number of 3rd party projects are being downloaded in the background (and compiled, to be fair, rather than pulled in as binary artefacts).

We can build the main C++ wrapper .so directly with cmake, allowing us to investigate a bit more:

mkdir tf-build
cd tf-build/
cmake \
    -DCMAKE_C_FLAGS="-I/usr/include/python3.11" \
    -DCMAKE_CXX_FLAGS="-I/usr/include/python3.11" \
    ../tensorflow-2.15.1/tensorflow/lite/
cmake --build . -t _pywrap_tensorflow_interpreter_wrapper
…
[100%] Built target _pywrap_tensorflow_interpreter_wrapper
$ ldd _pywrap_tensorflow_interpreter_wrapper.so
    linux-vdso.so.1 (0x00007ffec9588000)
    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f22d00d0000)
    libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f22cf600000)
    libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f22d00b0000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f22cf81f000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f22d01d1000)

Looking at the output we can see that pthreadpool, FXdiv, FP16 + PSimd are all downloaded, and seem to have ways to point to a local copy. That seems positive.

However, there are even more hidden dependencies, which we can see if we look in the _deps/ subdirectory of the build tree. These don’t appear to be as easy to override, and not all of them have packages already in Debian.

First, the ones that seem to be available: abseil-cpp, cpuinfo, eigen, farmhash, flatbuffers, gemmlowp, ruy + xnnpack

(lots of credit to the Debian Deep Learning Team for these, and in particular Mo Zhou)

Dependencies I couldn’t see existing packages for are: OouraFFT, ml_dtypes & neon2sse.

At this point I just used the package I built with the initial steps above. I live in hope someone will eventually package this properly for Debian, or that I’ll find the time to try and help out, but that’s not going to be today.

I wish upstream developers made it easier to use system copies of their library dependencies. I wish library developers made it easier to build and install system copies of their work. pkgconf is not new tech these days (pkg-config appears to date back to 2000), and has decent support in CMake. I get that there can be issues with incompatibilities even in minor releases, or awkwardness in doing builds of multiple connected projects, but at least give me the option to do so.
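
As an illustration of what that support looks like, CMake can consume a system copy of a library via pkg-config in a handful of lines; a rough sketch (the project name, main.cc and the choice of flatbuffers are just examples, and it assumes the library ships a .pc file):

cmake_minimum_required(VERSION 3.16)
project(example CXX)

# Locate a system copy of the dependency through pkg-config instead of
# downloading and building a vendored copy.
find_package(PkgConfig REQUIRED)
pkg_check_modules(FLATBUFFERS REQUIRED IMPORTED_TARGET flatbuffers)

add_executable(example main.cc)
target_link_libraries(example PRIVATE PkgConfig::FLATBUFFERS)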

Krebs on Security: Patch Tuesday, May 2025 Edition

Microsoft on Tuesday released software updates to fix at least 70 vulnerabilities in Windows and related products, including five zero-day flaws that are already seeing active exploitation. Adding to the sense of urgency with this month’s patch batch from Redmond are fixes for two other weaknesses that now have public proof-of-concept exploits available.

Microsoft and several security firms have disclosed that attackers are exploiting a pair of bugs in the Windows Common Log File System (CLFS) driver that allow attackers to elevate their privileges on a vulnerable device. The Windows CLFS is a critical Windows component responsible for logging services, and is widely used by Windows system services and third-party applications for logging. Tracked as CVE-2025-32701 & CVE-2025-32706, these flaws are present in all supported versions of Windows 10 and 11, as well as their server versions.

Kev Breen, senior director of threat research at Immersive Labs, said privilege escalation bugs assume an attacker already has initial access to a compromised host, typically through a phishing attack or by using stolen credentials. But if that access already exists, Breen said, attackers can gain access to the much more powerful Windows SYSTEM account, which can disable security tooling or even gain domain administration level permissions using credential harvesting tools.

“The patch notes don’t provide technical details on how this is being exploited, and no Indicators of Compromise (IOCs) are shared, meaning the only mitigation security teams have is to apply these patches immediately,” he said. “The average time from public disclosure to exploitation at scale is less than five days, with threat actors, ransomware groups, and affiliates quick to leverage these vulnerabilities.”

Two other zero-days patched by Microsoft today also were elevation of privilege flaws: CVE-2025-32709, which concerns afd.sys, the Windows Ancillary Function Driver that enables Windows applications to connect to the Internet; and CVE-2025-30400, a weakness in the Desktop Window Manager (DWM) library for Windows. As Adam Barnett at Rapid7 notes, tomorrow marks the one-year anniversary of CVE-2024-30051, a previous zero-day elevation of privilege vulnerability in this same DWM component.

The fifth zero-day patched today is CVE-2025-30397, a flaw in the Microsoft Scripting Engine, a key component used by Internet Explorer and Internet Explorer mode in Microsoft Edge.

Chris Goettl at Ivanti points out that the Windows 11 and Server 2025 updates include some new AI features that carry a lot of baggage and weigh in at around 4 gigabytes. Said baggage includes new artificial intelligence (AI) capabilities, including the controversial Recall feature, which constantly takes screenshots of what users are doing on Windows CoPilot-enabled computers.

Microsoft went back to the drawing board on Recall after a fountain of negative feedback from security experts, who warned it would present an attractive target and a potential gold mine for attackers. Microsoft appears to have made some efforts to prevent Recall from scooping up sensitive financial information, but privacy and security concerns still linger. Former Microsoftie Kevin Beaumont has a good teardown on Microsoft’s updates to Recall.

In any case, windowslatest.com reports that Windows 11 version 24H2 shows up ready for downloads, even if you don’t want it.

“It will now show up for ‘download and install’ automatically if you go to Settings > Windows Update and click Check for updates, but only when your device does not have a compatibility hold,” the publication reported. “Even if you don’t check for updates, Windows 11 24H2 will automatically download at some point.”

Apple users likely have their own patching to do. On May 12 Apple released security updates to fix at least 30 vulnerabilities in iOS and iPadOS (the updated version is 18.5). TechCrunch writes that iOS 18.5 also expands emergency satellite capabilities to iPhone 13 owners for the first time (previously it was only available on iPhone 14 or later).

Apple also released updates for macOS Sequoia, macOS Sonoma, macOS Ventura, WatchOS, tvOS and visionOS. Apple said there is no indication of active exploitation for any of the vulnerabilities fixed this month.

As always, please back up your device and/or important data before attempting any updates. And please feel free to sound off in the comments if you run into any problems applying any of these fixes.

Planet Debian: Sven Hoexter: Disable Firefox DRM Plugin Infobar

... or how I spent my lunch break today.

An increasing number of news outlets (hello heise.de) have started to embed bullshit which requires DRM playback. Since I keep that disabled I now get an infobar that tells me that I need to enable it for this page. Pretty useless and a pain in the back because it takes up screen space. Here's the quick way to get rid of it:

  1. Go to about:config and turn on toolkit.legacyUserProfileCustomizations.stylesheets.
  2. Go to your Firefox profile folder (e.g. ~/.mozilla/firefox/<random-value>.default/) and mkdir chrome && touch chrome/userChrome.css.
  3. Add the following to your userChrome.css file:

     .infobar[value="drmContentDisabled"] {
       display: none !important;
     }
    
  4. Restart Firefox and read news again with full screen space.

Planet Debian: Jonathan Dowland: Orbital

Orbital at NX, Newcastle in 2023

I'm on a bit of an Orbital kick at the moment. Last year they re-issued their 1991 debut album with 43 extra tracks. Later this month they're doing the same for their 1993 sophomore album.

I thought I'd try to narrow down some tracks to recommend. I seem to have settled on roughly 5 in previous posts (for Underworld, The Cure, Coil and Gazelle Twin). This time I've done 6 (I borrowed one from Underworld).

As always it's a hard choice. I've tried to select some tracks I really enjoy that don't often come up on best-of compilation albums. For a more conventional choice of best-of tracks, I recommend the recent-ish 30 something "compilation" (of sorts, previously written about).


  1. The Naked and the Dead (1992)

    The Naked and the Dead by Orbital

    From an early EP Radiccio, which is being re-issued this month. Digital versions of the re-issue will feature a new recording "Deepest" featuring Tilda Swinton. Sadly this isn't making it onto the pressed version. She performed with them live at Glastonbury 2024. That entire performance was a real pick-me-up during my convalescence, and is recommended.

    Anyway I've now written more about a song I haven't recommended than the one I did…

  2. Remind (1993)

    Remind by Orbital

    From the Brown Album, I first heard this as the Encore from their "final show", for John Peel, when they split up in 2004. "Remind" wasn't broadcast, but an audience recording was circulated on fan site Loopz. Remarkably, 21 years on, it's still there.

    In writing this I discovered that it's a re-working of a remix Orbital did for Meat Beat Manifesto: MindStream (Mind The Bend The Mind)

  3. You Lot (2004)

    From the unfairly-maligned "final" Blue album. Featuring a sample of pre-Doctor Who Christopher Eccleston, from another Russell T Davies production, Second Coming.

  4. Beached (2000)

    Beached (Long version) by Orbital, Angelo Badalamenti

    Co-written by Angelo Badalamenti, it's built around a sample of Badalamenti's score for the movie "The Beach". Orbital's re-work adds some grit to the orchestral instrumentation and opens with a monologue, delivered by Leonardo Di Caprio, sampled from the movie.

  5. Spare Parts Express (1999)

    Spare Parts Express by Orbital

    Critics had started to be quite unfair to Orbital by this point. The band themselves said that they'd run out of ideas (pointing at album closer "Style", built around a Stylophone melody, as proof). Their malaise continued right up to the Blue Album, at which point they split up; ostensibly for good, before regrouping 8 years later.

    Spare Parts Express is a hatchet job of various bits that they didn't develop into full songs on their own. Despite this I think it works. I love long-form electronica, and this clocks in at 10:07. My favourite segment (06:37) is adjacent to a reference (05:05) to John Baker's theme for the BBC children's program Newsround (sadly they aren't using it today. Here's a rundown of Newsround themes over time)

  6. Attached (1994)

    Attached by Orbital

    This originally debuted on a Peel session before appearing on the subsequent album Snivilisation a few months later. An album closer, and a good come-down song to close this list.

Planet Debian: Evgeni Golov: running modified containers with podman

Everybody (who runs containers) knows this situation: you've been running happycontainer:stable for a while and it's been great, but now something external has changed and you need to adjust the code while there is still no release with the patch.

I've encountered exactly this when our Home-Assistant stopped showing the presence of our cat correctly, but we've also been discussing this at work recently.

Now the most obvious (to me?) solution would be to build a new container, based on the original one, and perform the modifications at build time. Something like this:

FROM happycontainer:stable
RUN curl … | patch -p1

But that's not interactive, and if you don't have a patch readily available, that's not what you want. (And I'll save you the idea of RUNing sed and friends to alter files!)

You could run vim inside the container, but that requires vim to be installed there in the first place. And a reasonable configuration. And…

Well, turns out podman can mount the root fs of a running container.

[root@sai ~]# podman mount homeassistant
/var/lib/containers/storage/overlay/f3ac502d97b5681989dff

And if you're running as non-root, you'll get an error:

[container@sai ~]$ podman mount homeassistant
Error: cannot run command "podman mount" in rootless mode, must execute `podman unshare` first

Luckily the solution is in the error message - use podman unshare:

[container@sai ~]$ podman unshare
[root@sai ~]# podman mount homeassistant
/home/container/.local/share/containers/storage/overlay/95d3809d53125e4d40ad05e52efaa5a48e6e61fe9b8a8478416ff44459c7eb31/merged

So in both cases (root and rootless) we get a path, which is the mounted root fs and we can edit things in there as we like.

[root@sai ~]# vi /home/container/.local/share/containers/storage/overlay/95d3809d53125e4d40ad05e52efaa5a48e6e61fe9b8a8478416ff44459c7eb31/merged/usr/src/homeassistant/homeassistant/components/surepetcare/binary_sensor.py

Once done, the container can be unmounted again, and the namespace left:

[root@sai ~]# podman umount homeassistant
homeassistant
[root@sai ~]# exit
[container@sai ~]$

At this point we have modified the code inside the container, but the running process is still using the old code. If we restart the container now to restart the process, our changes will be lost.

Instead, we can commit the changes as a new layer and tag the result.

[container@sai ~]$ podman commit homeassistant docker.io/homeassistant/home-assistant:stable

And now, when we restart the container, it will use the new code with our changes 🎉

[container@sai ~]$ systemctl --user restart homeassistant
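
Once upstream ships a release that contains the fix, the locally committed layer can be dropped again by simply pulling the tag (which re-points it at the upstream image) and restarting:

[container@sai ~]$ podman pull docker.io/homeassistant/home-assistant:stable
[container@sai ~]$ systemctl --user restart homeassistant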

Is this the best workflow you can get? Probably not. Does it work? Hell yeah!

Worse Than Failure: CodeSOD: itouhhh…

Frequently in programming, we can make a tradeoff: use less (or more) CPU in exchange for using more (or less) memory. Lookup tables are a great example: use a big pile of memory to turn complicated calculations into O(1) operations.

So, for example, implementing itoa, the C library function for turning an integer into a character array (aka, a string), you could maybe make it more efficient using a lookup table.

I say "maybe", because Helen inherited some C code that, well, even if it were more efficient, it doesn't help because it's wrong.

Let's start with the lookup table:

char an[1000][3] = 
{
	{'0','0','0'},{'0','0','1'},{'0','0','2'},{'0','0','3'},{'0','0','4'},{'0','0','5'},{'0','0','6'},{'0','0','7'},{'0','0','8'},{'0','0','9'},
	{'0','1','0'},{'0','1','1'},{'0','1','2'},{'0','1','3'},{'0','1','4'},{'0','1','5'},{'0','1','6'},{'0','1','7'},{'0','1','8'},{'0','1','9'},
    …

I'm abbreviating the lookup table for now. This lookup table is meant to be used to convert every number from 0…999 into a string representation.

Let's take a look at how it's used.

int ll = f->cfg.len_len;
long dl = f->data_len;
// Prepare length
if ( NULL == dst )
{
    dst_len = f->data_len + ll + 1 ;
    dst = (char*) malloc ( dst_len );
}
else
//if( dst_len < ll + dl )
if( dst_len < (unsigned) (ll + dl) )
{
    // TO DOO - error should be processed
    break;
}
long i2;
switch ( f->cfg.len_fmt)
{
    case ASCII_FORM:
    {
        if ( ll < 2 )
        {
            dst[0]=an[dl][2];
        }
        else if ( ll < 3 )
        {
            dst[0]=an[dl][1];
            dst[1]=an[dl][2];
        }
        else if ( ll < 4 )
        {
            dst[0]=an[dl][0];
            dst[1]=an[dl][1];
            dst[2]=an[dl][2];
        }
        else if ( ll < 5 )
        {
            i2 = dl / 1000;
            dst[0]=an[i2][2];
            i2 = dl % 1000;
            dst[3]=an[i2][2];
            dst[2]=an[i2][1];
            dst[1]=an[i2][0];
        }
        else if ( ll < 6 )
        {
            i2 = dl / 1000;
            dst[0]=an[i2][1];
            dst[1]=an[i2][2];
            i2 = dl % 1000;
            dst[4]=an[i2][2];
            dst[3]=an[i2][1];
            dst[2]=an[i2][0];
        }
        else
        {
            // General case
            for ( int k = ll  ; k > 0  ; k-- )
            {
                dst[k-1] ='0' + dl % 10;
                dl/=10;
            }
        }

        dst[dl]=0;

        break;
    }
}

Okay, we start with some reasonable bounds checking. I have no idea what to make of a struct member called len_len (the length of the length?). I'm lacking some context here.

Then we get into the switch statement. For all values less than 4 digits, everything makes sense, more or less. I'm not sure what the point of using a 2D array for your lookup table is if you're also copying one character at a time, but for such a small number of copies I'm sure it's fine.

But then we get into the len_lens longer than 3, and we start dividing by 1000 so that our lookup table continues to work. Which, again, I guess is fine, but I'm still left wondering why we're doing this, why this specific chain of optimizations is what we need to do. And frankly, why we couldn't just use itoa or a similar library function which already does this and is probably more optimized than anything I'm going to write.

When we have an output longer than 5 characters, we just use a naive for-loop and some modulus as our "general" case.

So no, I don't like this code. It reeks of premature optimization, and it also has the vibe of someone starting to optimize without fully understanding the problem they were optimizing, and trying to change course midstream without changing their solution.

But there's a punchline to all of this. Because, you see, I skipped most of the lookup table. Would you like to see how it ends? Of course you do:

{'9','8','0'},{'9','8','1'},{'9','8','2'},{'9','8','3'},{'9','8','4'},{'9','8','5'},{'9','8','6'},{'9','8','7'},{'9','8','8'},{'9','8','9'}
};

The lookup table doesn't work for values from 990 to 999. There are just no entries there. All this effort to optimize converting integers to text and we end up here: with a function that doesn't work for 1% of the possible values it could receive. And since the array is declared as an[1000][3], those missing entries aren't even an out-of-bounds access: C zero-fills the rest of a partially initialized array, so instead of a satisfying segfault you just get NUL bytes where digits should be, and silently corrupted output for anyone downstream.
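
For the record, standard C can already do the fixed-width formatting this whole branch is trying to do, no lookup table required. A minimal sketch (not from the original code) using snprintf:

#include <stdio.h>

/* Writes dl as a zero-padded, ll-character decimal string into dst.
 * Assumes dst has room for ll + 1 bytes and that dl fits in ll digits. */
void write_len_ascii(char *dst, int ll, long dl)
{
    /* "%0*ld" zero-pads to a width supplied at runtime. */
    snprintf(dst, (size_t)ll + 1, "%0*ld", ll, dl);
}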

365 Tomorrows: Sellout

Author: KM Brunner Nora didn’t mean to yell. She knew better than to make noise in the city. First rule of running: keep quiet. So her question, “Where were you?!”, desperate and sharp in the stillness of the Park Street station, startled both of them. Mac winced at its echo, echo. Early on she assumed […]

The post Sellout appeared first on 365tomorrows.

xkcd: Modern

Cryptogram: Google’s Advanced Protection Now on Android

Google has extended its Advanced Protection features to Android devices. It’s not for everybody, but something to be considered by high-risk users.

Wired article, behind a paywall.

Planet Debian: Ben Hutchings: Report for Debian BSP near Leuven in April 2025

On 26th and 27th April we held a Debian bug-squashing party near Leuven, Belgium. Several longstanding and new Debian contributors gathered to work through some of the highest priority bugs affecting the upcoming release of Debian 13 “trixie”.

We were hosted by the Familia community centre in Tildonk. As this venue currently does not have an Internet connection, we brought a mobile hotspot and a local Debian mirror.

In attendance were:

  • Debian Developers: Ben Hutchings, Nattie Mayer-Hutchings, Kurt Roeckx, and Geert Stappers
  • New contributors: Yüce Kürüm, Louis Renuart, Arnout Vandecappelle

The new contributors were variously using Arch, Fedora, and Ubuntu, and the DDs spent some time setting them up with Debian development environments.

The bugs we worked on included:

Planet Debian: Ravi Dwivedi: KDE India Conference 2025

Last month, I attended the KDE India conference in Gandhinagar, Gujarat from the 4th to the 6th of April. I made up my mind to attend when Sahil told me about his plans to attend and give a talk.

A day after my talk submission, the organizer Bhushan contacted me on Matrix and informed me that my talk had been accepted. I was also informed that KDE would cover my travel and accommodation expenses. So, I planned to attend the conference at this point. I am a longtime KDE user, so why not ;)

I arrived in Ahmedabad, the twin city of Gandhinagar, a day before the conference. The first thing that struck me as soon as I came out of the Ahmedabad airport was the heat. I felt as if I was being cooked—exactly how Bhushan put it earlier in the group chat. I took a taxi to get to my hotel, which was close to the conference venue.

Later that afternoon, I met Bhushan and Joseph. Joseph lived in Germany. Bhushan was taking him to get a SIM card, so I tagged along and got to roam around. Joseph was unsure about where to go after the conference, so I asked him what he wanted out of his trip and had conversations along that line.

Later, Vishal convinced him to go to Lucknow. Since he was adamant about taking the train, I booked a Tatkal train ticket for him to Lucknow. He was curious about how Tatkal booking works and watched me in amusement while I was booking the ticket.

The 4th of April marked the first day of the conference, with around 25 attendees. Bhushan started the day with an overview of KDE conferences in India, followed by Vishal, who discussed FOSS United’s activities. After lunch, Joseph gave an overview of his campaign to help people switch from Windows to GNU/Linux for environmental and security reasons. He continued his session in detail the next day.

Conference hall

A key takeaway for me from Joseph’s session was the idea pointed out by Adwaith: marketing GNU/Linux as a cheap alternative may not attract as much attention as marketing it as a status symbol. He gave the example of how the Tata Nano didn’t do well in the Indian market due to being perceived as a poor person’s car.

My talk was scheduled for the evening of the first day. I hadn’t prepared any slides because I wanted to make my session interactive. During my talk, I did an activity with the attendees to demonstrate the federated nature of XMPP messaging, of which Prav is a part. After the talk, I got a lot of questions, signalling engagement. The audience was cooperative (just like Prav ;)), contrary to my expectations (I thought they would be tired and sleepy).

On the third day, I did a demo on editing OpenStreetMap (referred to as “OSM” in short) using the iD editor. It involved adding points to OSM based on the students’ suggestions. Since my computer didn’t have an HDMI port, I used Subin’s computer, and he logged into his OSM account for my session. Therefore, any mistakes I made will be under Subin’s name. :)

On the third day, I attended Aaruni’s talk about backing up a GNU/Linux system. This was the talk that resonated with me the most. He suggested formatting the system with the btrfs file system during the installation, which helps in taking snapshots of the system and provides an easy way to roll back to a previous version if, for example, a file is accidentally deleted. I have tried many backup techniques, including this one, but I never tried backing up on the internal disk. I’ll certainly give this a try.

A conference is not only about the talks; that’s why we had a Prav table as well ;) Just kidding. What I really mean is that a conference is more about interactions than talks. Since the conference was a three-day affair, attendees got plenty of time to bond and share ideas.

Prav stall at the conference

Conference group photo

After the conference, Bhushan took us to Adalaj Stepwell, an attraction near Gandhinagar. Upon entering the complex, we saw a park where there were many langurs. Going further, there were stairs that led down to a well. I guess this is why it is called a stepwell.

Adalaj Stepwell

Later that day, we had Gujarati Thali for dinner. It was an all-you-can-eat buffet and was reasonably priced at 300 rupees per plate. Aamras (mango juice) was the highlight for me. This was the only time we had Gujarati food during this visit. After dinner, Aaruni dropped Sahil and me off at the airport. The hospitality was superb - for instance, in addition to Aaruni dropping us off, Bhushan also picked up some of the attendees from the airport.

Finally, I would like to thank KDE for sponsoring my travel and accommodation costs.

Let’s wrap up this post here and meet you in the next one.

Thanks to contrapunctus and Joseph for proofreading.

Planet Debian: Sergio Talens-Oliag: Running dind with sysbox

When I configured forgejo-actions I used a docker-compose.yaml file to execute the runner and a dind container configured to run in privileged mode so it could build images; as mentioned on my post about my setup, the use of privileged mode is not a big issue for my use case, but it reduces the overall security of the installation.

On a work chat the other day someone mentioned that the GitLab documentation about using kaniko says it is no longer maintained (see the kaniko issue #3348) so we should look into alternatives for kubernetes clusters.

I never liked kaniko too much, but it works without privileged mode and does not need a daemon, which is a good reason to use it. If it is deprecated, though, it makes sense to look into alternatives, and today I looked into some of them to use with my forgejo-actions setup.

I was going to try buildah and podman, but it seems they require some adjustments on the systems running them:

  • When I tried to use buildah inside a docker container in Ubuntu I found the problems described on the buildah issue #1901 so I moved on.
  • Reading the podman documentation I saw that I need to export the fuse device to run it inside a container and, as I found another option, I also skipped it.

As my runner was already configured to use dind I decided to look into sysbox as a way of removing the privileged flag to make things more secure while keeping the same functionality.

Installing the sysbox package

As I use Debian and Ubuntu systems I used the .deb packages distributed from the sysbox release page to install it (in my case I used the one from the 0.6.7 version).

On the machine running forgejo (a Debian 12 server) I downloaded the package, stopped the running containers (this is needed to install the package, and the only ones running were the ones started by the docker-compose.yaml file) and installed the sysbox-ce_0.6.7.linux_amd64.deb package using dpkg.
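
Before touching the compose file it is worth checking that the Docker daemon has picked up the new runtime (the package should register it in /etc/docker/daemon.json); something along these lines should list it:

$ docker info --format '{{json .Runtimes}}' | grep -o '"sysbox-runc"'
"sysbox-runc"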

Updating the docker-compose.yaml file

To run the dind container without setting the privileged mode we set sysbox-runc as the runtime on the dind container definition and set the privileged flag to false (it is the same as removing the key, as it defaults to false):

--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -2,7 +2,9 @@ services:
   dind:
     image: docker:dind
     container_name: 'dind'
-    privileged: 'true'
+    # use sysbox-runc instead of using privileged mode
+    runtime: 'sysbox-runc'
+    privileged: 'false'
     command: ['dockerd', '-H', 'unix:///dind/docker.sock', '-G', '$RUNNER_GID']
     restart: 'unless-stopped'
     volumes:

Testing the changes

After applying the changes to the docker-compose.yaml file we start the containers and, to test things, re-run previously executed jobs to see if they work as before.

In my case I re-executed the build-image-from-tag workflow #18 from the oci project and everything worked as expected.

Conclusion

For my current use case (docker + dind) it seems that sysbox is a good solution, but I’m not sure if I’ll be installing it on kubernetes anytime soon unless I find a valid reason to do it (last time we talked about it my co-workers said that they are evaluating buildah and podman for kubernetes, and we will probably use them to replace kaniko in our gitlab-ci pipelines; for those tools the use of sysbox seems overkill).

365 Tomorrows: Crowbots

Author: Majoki Carson knew they were being watched. Quiet in this part of the city was for the birds. Days earlier, he’d been wishing for the damn things to shut up. Now they’d gone silent and the ominous hush made his skin crawl. “What are they up to?” he hissed to Klebeck squatting under a […]

The post Crowbots appeared first on 365tomorrows.

Worse Than Failure: CodeSOD: Exactly a Date

Alexandar sends us some C# date handling code. The best thing one can say is that they didn't reinvent any wheels, but that might be worse, because they used the existing wheels to drive right off a cliff.

try
{
    var date = DateTime.ParseExact(member.PubDate.ToString(), "M/d/yyyy h:mm:ss tt", null); 
    objCustomResult.PublishedDate = date;
}
catch (Exception datEx)
{
}

member.PubDate is a Nullable<DateTime>. So its ToString will return one of two things. If there is a value there, it'll return the DateTime's value as a string. If it's null, it'll just return an empty string. Attempting to parse the empty string will throw an exception, which we helpfully swallow, do nothing about, and leave objCustomResult.PublishedDate in whatever state it was in (I'm going to guess null, but I have no idea).

Part of this WTF is that they break the advantages of using nullable types: the entire point is to be able to handle null values without having to worry about exceptions getting tossed around. But that's just a small part.

The real WTF is taking a DateTime value, turning it into a string, only to parse it back out. But because this is in .NET, it's more subtle than just the generation of useless strings, because member.PubDate.ToString()'s return value may change depending on your culture info settings.

Which sure, this is almost certainly server-side code running on a single server with a well known locale configured. So this probably won't ever blow up on them, but it's 100% the kind of thing everyone thinks is fine until the day it's not.

The punchline is that ToString allows you to specify the format you want the date formatted in, which means they could have written this:

var date = DateTime.ParseExact(member.PubDate.ToString("M/d/yyyy h:mm:ss tt"), "M/d/yyyy h:mm:ss tt", null);

But if they did that, I suppose that would have possibly tickled their little grey cells and made them realize how stupid this entire block of code was?
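
For completeness, the non-WTF version is shorter than either of them; a sketch, assuming PublishedDate is itself nullable (the submission doesn't show its declaration):

// If PublishedDate is a Nullable<DateTime>, just assign it:
objCustomResult.PublishedDate = member.PubDate;

// If it isn't, copy the value only when one exists:
if (member.PubDate.HasValue)
{
    objCustomResult.PublishedDate = member.PubDate.Value;
}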

Planet Debian: Reproducible Builds: Reproducible Builds in April 2025

Welcome to our fourth report from the Reproducible Builds project in 2025. These monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. Lastly, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.

Table of contents:

  1. reproduce.debian.net
  2. Fifty Years of Open Source Software Supply Chain Security
  3. 4th CHAINS Software Supply Chain Workshop
  4. Mailing list updates
  5. Canonicalization for Unreproducible Builds in Java
  6. OSS Rebuild adds new TUI features
  7. Distribution roundup
  8. diffoscope & strip-nondeterminism
  9. Website updates
  10. Reproducibility testing framework
  11. Upstream patches

reproduce.debian.net

The last few months have seen the introduction, development and deployment of reproduce.debian.net. In technical terms, this is an instance of rebuilderd, our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there.

This month, however, we are pleased to announce that reproduce.debian.net now tests all the Debian trixie architectures except s390x and mips64el.

The ppc64el architecture was added through the generous support of Oregon State University Open Source Laboratory (OSUOSL), and we can support the armel architecture thanks to CodeThink.


Fifty Years of Open Source Software Supply Chain Security

Russ Cox has published a must-read article in ACM Queue on Fifty Years of Open Source Software Supply Chain Security. Subtitled, “For decades, software reuse was only a lofty goal. Now it’s very real.”, Russ’ article goes on to outline the history and original goals of software supply-chain security in the US military in the early 1970s, all the way to the XZ Utils backdoor of 2024. Through that lens, Russ explores the problem and how it has changed, and hasn’t changed, over time.

He concludes as follows:

We are all struggling with a massive shift that has happened in the past 10 or 20 years in the software industry. For decades, software reuse was only a lofty goal. Now it’s very real. Modern programming environments such as Go, Node and Rust have made it trivial to reuse work by others, but our instincts about responsible behaviors have not yet adapted to this new reality.

We all have more work to do.


4th CHAINS Software Supply Chain Workshop

Convened as part of the CHAINS research project at the KTH Royal Institute of Technology in Stockholm, Sweden, the 4th CHAINS Software Supply Chain Workshop occurred during April. During the workshop, there were a number of relevant workshops, including:

The full listing of the agenda is available on the workshop’s website.


Mailing list updates

On our mailing list this month:

  • Luca DiMaio of Chainguard posted to the list reporting that they had successfully implemented reproducible filesystem images with both ext4 and an EFI system partition. They go on to list the various methods, and the thread generated at least fifteen replies.

  • David Wheeler announced that the OpenSSF is building a “glossary” of sorts in order that they “consistently use the same meaning for the same term” and, moreover, that they have drafted a definition for ‘reproducible build’. The thread generated a significant number of replies on the definition, leading to a potential update to the Reproducible Build’s own definition.

  • Lastly, kpcyrd posted to the list with a timely reminder and update on their “repro-env” tool. As first reported in our July 2023 report, kpcyrd mentions that:

    My initial interest in reproducible builds was “how do I distribute pre-compiled binaries on GitHub without people raising security concerns about them”. I’ve cycled back to this original problem about 5 years later and built a tool that is meant to address this. []


Canonicalization for Unreproducible Builds in Java

Aman Sharma, Benoit Baudry and Martin Monperrus have published a new scholarly study related to reproducible builds within Java. Titled Canonicalization for Unreproducible Builds in Java, the article’s abstract is as follows:

[…] Achieving reproducibility at scale remains difficult, especially in Java, due to a range of non-deterministic factors and caveats in the build process. In this work, we focus on reproducibility in Java-based software, archetypal of enterprise applications. We introduce a conceptual framework for reproducible builds, we analyze a large dataset from Reproducible Central and we develop a novel taxonomy of six root causes of unreproducibility. We study actionable mitigations: artifact and bytecode canonicalization using OSS-Rebuild and jNorm respectively. Finally, we present Chains-Rebuild, a tool that raises reproducibility success from 9.48% to 26.89% on 12,283 unreproducible artifacts. To sum up, our contributions are the first large-scale taxonomy of build unreproducibility causes in Java, a publicly available dataset of unreproducible builds, and Chains-Rebuild, a canonicalization tool for mitigating unreproducible builds in Java.

A full PDF of their article is available from arXiv.


OSS Rebuild adds new TUI features

OSS Rebuild aims to automate rebuilding upstream language packages (e.g. from PyPI, crates.io and npm registries) and publish signed attestations and build definitions for public use.

OSS Rebuild ships a text-based user interface (TUI) for viewing, launching, and debugging rebuilds. While previously requiring ownership of a full instance of OSS Rebuild’s hosted infrastructure, the TUI now supports a fully local mode of build execution and artifact storage. Thanks to Giacomo Benedetti for his usage feedback and work to extend the local-only development toolkit.

Another feature added to the TUI was an experimental chatbot integration that provides interactive feedback on rebuild failure root causes and suggests fixes.


Distribution roundup

In Debian this month:

  • Roland Clobus posted another status report on reproducible ISO images on our mailing list this month, with the summary that “all live images build reproducibly from the online Debian archive”.

  • Debian developer Simon Josefsson published another two reproducibility-related blog posts this month, the first on the topic of Verified Reproducible Tarballs. Simon sardonically challenges the reader as follows: “Do you want a supply-chain challenge for the Easter weekend? Pick some well-known software and try to re-create the official release tarballs from the corresponding Git checkout. Is anyone able to reproduce anything these days?” After that, they also published a blog post on Building Debian in a GitLab Pipeline using their multi-stage rebuild approach.

  • Roland also posted to our mailing list to highlight that “there is now another tool in Debian that generates reproducible output, equivs”. This is a tool to create trivial Debian packages that might Depend on other packages. As Roland writes, “building the [equivs] package has been reproducible for a while, [but] now the output of the [tool] has become reproducible as well”.

  • Lastly, 9 reviews of Debian packages were added, 10 were updated and 10 were removed this month adding to our extensive knowledge about identified issues.

The IzzyOnDroid Android APK repository made more progress in April. Thanks to funding by NLnet and Mobifree, the project was also able to put more time into their tooling. For instance, developers can now easily run their own verification builder in “less than 5 minutes”. This currently supports Debian-based systems, but support for RPM-based systems is incoming.

  • The rbuilder_setup tool can now setup the entire framework within less than five minutes. The process is configurable, too, so everything from “just the basics to verify builds” up to a fully-fledged RB environment is also possible.

  • This tool works on Debian, RedHat and Arch Linux, as well as their derivates. The project has received successful reports from Debian, Ubuntu, Fedora and some Arch Linux derivates so far.

  • Documentation on how to work with reproducible builds (making apps reproducible, debugging unreproducible packages, etc) is available in the project’s wiki page.

  • Future work is also in the pipeline, including documentation, guidelines and helpers for debugging.

NixOS defined an Outreachy project for improving build reproducibility. In the application phase, NixOS saw some strong candidates providing contributions, both on the NixOS side and upstream: guider-le-ecit analyzed a libpinyin issue. Tessy James fixed an issue in arandr and helped analyze one in libvlc that led to a proposed upstream fix. Finally, 3pleX fixed an issue which was accepted in upstream kitty, one in upstream maturin, one in upstream python-sip and one in the Nix packaging of python-libbytesize. Sadly, the funding for this internship fell through, so NixOS were forced to abandon their search.

Lastly, in openSUSE news, Bernhard M. Wiedemann posted another monthly update for their work there.


diffoscope & strip-nondeterminism

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading a number of versions to Debian:

  • Use the --walk argument over the potentially dangerous alternative --scan when calling out to zipdetails(1). []
  • Correct a longstanding issue where many >-based version tests used in conditional fixtures were broken. This was used to ensure that specific tests were only run when the version on the system was newer than a particular number. Thanks to Colin Watson for the report (Debian bug #1102658) []
  • Address a long-hidden issue in the test_versions testsuite as well, where we weren’t actually testing the greater-than comparisons mentioned above, as it was masked by the tests for equality. []
  • Update copyright years. []

In strip-nondeterminism, however, Holger Levsen updated the Continuous Integration (CI) configuration in order to use the standard Debian pipelines via debian/salsa-ci.yml instead of using .gitlab-ci.yml. []


Website updates

Once again, there were a number of improvements made to our website this month including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In April, a number of changes were made by Holger Levsen, including:

  • reproduce.debian.net-related:

    • Add armel.reproduce.debian.net to support the armel architecture. [][]
    • Add a new ARM node, codethink05. [][]
    • Add ppc64el.reproduce.debian.net to support testing of the ppc64el architecture. [][][]
    • Improve the reproduce.debian.net front page. [][]
    • Make various changes to the ppc64el nodes. [][][][]
    • Make various changes to the arm64 and armhf nodes. [][][][]
    • Various changes related to the rebuilderd-worker entry point. [][][]
    • Create and deploy a pkgsync script. [][][][][][][][]
    • Fix the monitoring of the riscv64 architecture. [][]
    • Make a number of changes related to starting the rebuilderd service. [][][][]
  • Backup-related:

    • Backup the rebuilder databases every week. [][][][]
    • Improve the node health checks. [][]
  • Misc:

    • Re-use existing connections to the SSH proxy node on the riscv64 nodes. [][]
    • Node maintenance. [][][]

In addition:

  • Jochen Sprickerhof fixed the riscv64 host names [] and requested access to all the rebuilderd nodes [].

  • Mattia Rizzolo updated the self-serve rebuild scheduling tool, replacing the deprecated “SSO”-style authentication with OpenIDC which authenticates against salsa.debian.org. [][][]

  • Roland Clobus updated the configuration for the osuosl3 node to designate 4 workers for bigger builds. []


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Cryptogram: Court Rules Against NSO Group

The case is over:

A jury has awarded WhatsApp $167 million in punitive damages in a case the company brought against Israel-based NSO Group for exploiting a software vulnerability that hijacked the phones of thousands of users.

I’m sure it’ll be appealed. Everything always is.

Planet Debian: Sergio Talens-Oliag: Playing with vCluster

After my previous posts related to Argo CD (one about argocd-autopilot and another with some usage examples) I started to look into Kluctl (I also plan to review Flux, but I’m more interested in the kluctl approach right now).

While reading an entry on the project blog about Cluster API I somehow ended up on the vCluster site and decided to give it a try, as it can be a valid way of providing developers with on-demand clusters for debugging or for running CI/CD tests before deploying things on common clusters, or even a way to have multiple debugging virtual clusters on a local machine with only one of them running at any given time.

In this post I will deploy a vcluster using the k3d_argocd kubernetes cluster (the one we created on the posts about argocd) as the host and will show how to:

  • use its ingress (in our case traefik) to access the API of the virtual one (removing the need to use the vcluster connect command to access it with kubectl),
  • publish the ingress objects deployed on the virtual cluster on the host ingress, and
  • use the sealed-secrets of the host cluster to manage the virtual cluster secrets.

Creating the virtual cluster

Installing the vcluster application

To create the virtual clusters we need the vcluster command; we can install it with arkade:

❯ arkade get vcluster
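
A quick sanity check that the binary ended up on our PATH (a minimal sketch; arkade normally downloads binaries to ~/.arkade/bin, so that directory has to be on the PATH, and the exact output depends on the vcluster release installed):

❯ vcluster version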

The vcluster.yaml file

To create the cluster we are going to use the following vcluster.yaml file (you can find the documentation about all its options here):

controlPlane:
  proxy:
    # Extra hostnames to sign the vCluster proxy certificate for
    extraSANs:
    - my-vcluster-api.lo.mixinet.net
exportKubeConfig:
  context: my-vcluster_k3d-argocd
  server: https://my-vcluster-api.lo.mixinet.net:8443
  secret:
    name: my-vcluster-kubeconfig
sync:
  toHost:
    ingresses:
      enabled: true
    serviceAccounts:
      enabled: true
  fromHost:
    ingressClasses:
      enabled: true
    nodes:
      enabled: true
      clearImageStatus: true
    secrets:
      enabled: true
      mappings:
        byName:
          # sync all Secrets from the 'my-vcluster-default' namespace to the
          # virtual "default" namespace.
          "my-vcluster-default/*": "default/*"
          # We could add other namespace mappings if needed, i.e.:
          # "my-vcluster-kube-system/*": "kube-system/*"

In the controlPlane section we’ve added the proxy.extraSANs entry with an extra host name, to make sure it is included in the cluster certificates when we access the API through an ingress.

The exportKubeConfig section creates a kubeconfig secret in the virtual cluster namespace using the provided host name; the secret can be used by GitOps tools, or we can dump it to a file to connect from our machine.

In the sync section we enable the synchronization of Ingress objects and ServiceAccounts from the virtual to the host cluster:

  • We copy the ingress definitions so that the ingress server running on the host can expose them to the outside world.
  • The service account synchronization is not really needed here, but we enable it because it would be useful if we later test this configuration on EKS with IAM roles for the service accounts.

In the opposite direction (from the host to the virtual cluster) we synchronize:

  • The IngressClass objects, to be able to use the host ingress server(s).
  • The Nodes (we are not using that information right now, but it could be useful if we want to see the real nodes running the pods of the virtual cluster).
  • The Secrets from the my-vcluster-default host namespace to the default namespace of the virtual cluster; that synchronization allows us to deploy SealedSecrets on the host that generate secrets which are copied automatically to the virtual one. Initially we only copy secrets for one namespace, but if the virtual cluster needs others we can add the namespaces on the host and their mappings to the virtual one in the vcluster.yaml file.

Creating the virtual cluster

To create the virtual cluster we run the following command:

vcluster create my-vcluster --namespace my-vcluster --upgrade --connect=false \
  --values vcluster.yaml

It creates the virtual cluster in the my-vcluster namespace using the vcluster.yaml file shown before, without connecting to the cluster from our local machine (if we didn’t pass --connect=false the command would add an entry to our kubeconfig and launch a proxy to the virtual cluster, which we don’t plan to use).
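
Before moving on we can make sure the control plane came up; a quick check with plain kubectl (the statefulset and kubeconfig secret names are the same ones we will see later in this post):

# The vcluster control plane runs as a statefulset in the target namespace.
❯ kubectl get statefulsets,pods -n my-vcluster
# The kubeconfig secret defined in the exportKubeConfig section should also be there.
❯ kubectl get secret my-vcluster-kubeconfig -n my-vcluster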

Adding an ingress TCP route to connect to the vcluster API

As explained before, we need to create an IngressRouteTCP object to be able to connect to the vcluster API; we use the following definition:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: my-vcluster-api
  namespace: my-vcluster
spec:
  entryPoints:
    - websecure
  routes:
    - match: HostSNI(`my-vcluster-api.lo.mixinet.net`)
      services:
        - name: my-vcluster
          port: 443
  tls:
    passthrough: true

Once we apply this object the cluster API will be available on the https://my-vcluster-api.lo.mixinet.net:8443 URL using its own self-signed certificate (we have enabled TLS passthrough), which includes the host name we use (we adjusted it in the vcluster.yaml file, as explained before).
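
If we want to confirm that the extra SAN really made it into the serving certificate, a quick optional check with openssl can be used (just a sketch; it assumes openssl is installed locally and that the host name resolves to the host ingress):

❯ openssl s_client -connect my-vcluster-api.lo.mixinet.net:8443 \
    -servername my-vcluster-api.lo.mixinet.net </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'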

Getting the kubeconfig for the vcluster

Once the vcluster is running we will have its kubeconfig available on the my-vcluster-kubeconfig secret on its namespace on the host cluster.

To dump it to the ~/.kube/my-vcluster-config file we can do the following:

❯ kubectl get -n my-vcluster secret/my-vcluster-kubeconfig \
    --template="{{.data.config}}" | base64 -d > ~/.kube/my-vcluster-config

Once available we can define the vkubectl alias to adjust the KUBECONFIG variable to access it:

alias vkubectl="KUBECONFIG=~/.kube/my-vcluster-config kubectl"

Or we can merge the configuration with the one referenced by the KUBECONFIG variable and use kubectx or a similar tool to change the context (for our vcluster the context will be my-vcluster_k3d-argocd). If the KUBECONFIG variable is defined and points to a single file the merge can be done by running the following:

KUBECONFIG="$KUBECONFIG:~/.kube/my-vcluster-config" kubectl config view \
  --flatten >"$KUBECONFIG.new"
mv "$KUBECONFIG.new" "$KUBECONFIG"

In the rest of this post we will use the vkubectl alias when connecting to the virtual cluster; for example, to check that it works we can run the cluster-info subcommand:

❯ vkubectl cluster-info
Kubernetes control plane is running at https://my-vcluster-api.lo.mixinet.net:8443
CoreDNS is running at https://my-vcluster-api.lo.mixinet.net:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Installing the dummyhttpd application

To test the virtual cluster we are going to install the dummyhttpd application using the following kustomization.yaml file:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://forgejo.mixinet.net/blogops/argocd-applications.git//dummyhttp/?ref=dummyhttp-v1.0.0
# Add the config map
configMapGenerator:
  - name: dummyhttp-configmap
    literals:
      - CM_VAR="Vcluster Test Value"
    behavior: create
    options:
      disableNameSuffixHash: true
patches:
  # Change the ingress host name
  - target:
      kind: Ingress
      name: dummyhttp
    patch: |-
      - op: replace
        path: /spec/rules/0/host
        value: vcluster-dummyhttp.lo.mixinet.net
  # Add reloader annotations -- it will only work if we install reloader on the
  # virtual cluster, as the one on the host cluster doesn't see the vcluster
  # deployment objects
  - target:
      kind: Deployment
      name: dummyhttp
    patch: |-
      - op: add
        path: /metadata/annotations
        value:
          reloader.stakater.com/auto: "true"
          reloader.stakater.com/rollout-strategy: "restart"

It is quite similar to the one we used in the Argo CD examples but uses a different DNS entry; to deploy it we run kustomize and vkubectl:

❯ kustomize build . | vkubectl apply -f -
configmap/dummyhttp-configmap created
service/dummyhttp created
deployment.apps/dummyhttp created
ingress.networking.k8s.io/dummyhttp created

We can check that everything worked using curl:

❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c": "Vcluster Test Value","s": ""}

The objects available on the vcluster now are:

❯ vkubectl get all,configmap,ingress
NAME                             READY   STATUS    RESTARTS   AGE
pod/dummyhttp-55569589bc-9zl7t   1/1     Running   0          24s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/dummyhttp    ClusterIP   10.43.51.39    <none>        80/TCP    24s
service/kubernetes   ClusterIP   10.43.153.12   <none>        443/TCP   14m

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dummyhttp   1/1     1            1           24s

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/dummyhttp-55569589bc   1         1         1       24s

NAME                            DATA   AGE
configmap/dummyhttp-configmap   1      24s
configmap/kube-root-ca.crt      1      14m

NAME                                CLASS   HOSTS                             ADDRESS                          PORTS AGE
ingress.networking.k8s.io/dummyhttp traefik vcluster-dummyhttp.lo.mixinet.net 172.20.0.2,172.20.0.3,172.20.0.4 80    24s

While we have the following ones on the my-vcluster namespace of the host cluster:

❯ kubectl get all,configmap,ingress -n my-vcluster
NAME                                                      READY   STATUS    RESTARTS   AGE
pod/coredns-bbb5b66cc-snwpn-x-kube-system-x-my-vcluster   1/1     Running   0          18m
pod/dummyhttp-55569589bc-9zl7t-x-default-x-my-vcluster    1/1     Running   0          45s
pod/my-vcluster-0                                         1/1     Running   0          19m

NAME                                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
service/dummyhttp-x-default-x-my-vcluster      ClusterIP   10.43.51.39     <none>        80/TCP                   45s
service/kube-dns-x-kube-system-x-my-vcluster   ClusterIP   10.43.91.198    <none>        53/UDP,53/TCP,9153/TCP   18m
service/my-vcluster                            ClusterIP   10.43.153.12    <none>        443/TCP,10250/TCP        19m
service/my-vcluster-headless                   ClusterIP   None            <none>        443/TCP                  19m
service/my-vcluster-node-k3d-argocd-agent-1    ClusterIP   10.43.189.188   <none>        10250/TCP                18m

NAME                           READY   AGE
statefulset.apps/my-vcluster   1/1     19m

NAME                                                     DATA   AGE
configmap/coredns-x-kube-system-x-my-vcluster            2      18m
configmap/dummyhttp-configmap-x-default-x-my-vcluster    1      45s
configmap/kube-root-ca.crt                               1      19m
configmap/kube-root-ca.crt-x-default-x-my-vcluster       1      11m
configmap/kube-root-ca.crt-x-kube-system-x-my-vcluster   1      18m
configmap/vc-coredns-my-vcluster                         1      19m

NAME                                                        CLASS   HOSTS                             ADDRESS                          PORTS AGE
ingress.networking.k8s.io/dummyhttp-x-default-x-my-vcluster traefik vcluster-dummyhttp.lo.mixinet.net 172.20.0.2,172.20.0.3,172.20.0.4 80    45s

As shown, we have copies of the Service, Pod, Configmap and Ingress objects, but there is no copy of the Deployment or ReplicaSet.

Creating a sealed secret for dummyhttpd

To use the host's sealed-secrets controller with the virtual cluster we will create the my-vcluster-default namespace and add there the sealed secrets we want to have available as secrets in the default namespace of the virtual cluster:

❯ kubectl create namespace my-vcluster-default
❯ echo -n "Vcluster Boo" | kubectl create secret generic "dummyhttp-secret" \
    --namespace "my-vcluster-default" --dry-run=client \
    --from-file=SECRET_VAR=/dev/stdin -o yaml >dummyhttp-secret.yaml
❯ kubeseal -f dummyhttp-secret.yaml -w dummyhttp-sealed-secret.yaml
❯ kubectl apply -f dummyhttp-sealed-secret.yaml
❯ rm -f dummyhttp-secret.yaml dummyhttp-sealed-secret.yaml

After running the previous commands we have the following objects available on the host cluster:

❯ kubectl get sealedsecrets.bitnami.com,secrets -n my-vcluster-default
NAME                                        STATUS   SYNCED   AGE
sealedsecret.bitnami.com/dummyhttp-secret            True     34s

NAME                      TYPE     DATA   AGE
secret/dummyhttp-secret   Opaque   1      34s

And we can see that the secret is also available on the virtual cluster with the content we expected:

❯ vkubectl get secrets
NAME               TYPE     DATA   AGE
dummyhttp-secret   Opaque   1      34s
❯ vkubectl get secret/dummyhttp-secret --template="{{.data.SECRET_VAR}}" \
  | base64 -d
Vcluster Boo

But the output of the curl command has not changed because, although we have the reloader controller deployed on the host cluster, it does not see the Deployment object of the virtual one and the pods are not touched:

❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c": "Vcluster Test Value","s": ""}

Installing the reloader application

To make reloader work on the virtual cluster we just need to install it as we did on the host using the following kustomization.yaml file:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kube-system
resources:
- github.com/stakater/Reloader/deployments/kubernetes/?ref=v1.4.2
patches:
# Add flags to reload workloads when ConfigMaps or Secrets are created or deleted
- target:
    kind: Deployment
    name: reloader-reloader
  patch: |-
    - op: add
      path: /spec/template/spec/containers/0/args
      value:
        - '--reload-on-create=true'
        - '--reload-on-delete=true'
        - '--reload-strategy=annotations'

We deploy it with kustomize and vkubectl:

❯ kustomize build . | vkubectl apply -f -
serviceaccount/reloader-reloader created
clusterrole.rbac.authorization.k8s.io/reloader-reloader-role created
clusterrolebinding.rbac.authorization.k8s.io/reloader-reloader-role-binding created
deployment.apps/reloader-reloader created

As the controller was not available when the secret was created, the pods linked to the Deployment were not updated, but we can force things by removing the secret on the host system; after we do that the secret is re-created from the sealed version and copied to the virtual cluster, where the reloader controller updates the pod and the curl command shows the new output:

❯ kubectl delete -n my-vcluster-default secrets dummyhttp-secret
secret "dummyhttp-secret" deleted
❯ sleep 2
❯ vkubectl get pods
NAME                         READY   STATUS        RESTARTS   AGE
dummyhttp-78bf5fb885-fmsvs   1/1     Terminating   0          6m33s
dummyhttp-c68684bbf-nx8f9    1/1     Running       0          6s
❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c":"Vcluster Test Value","s":"Vcluster Boo"}

If we change the secret on the host system, things now get updated pretty quickly:

❯ echo -n "New secret" | kubectl create secret generic "dummyhttp-secret" \
    --namespace "my-vcluster-default" --dry-run=client \
    --from-file=SECRET_VAR=/dev/stdin -o yaml >dummyhttp-secret.yaml
❯ kubeseal -f dummyhttp-secret.yaml -w dummyhttp-sealed-secret.yaml
❯ kubectl apply -f dummyhttp-sealed-secret.yaml
❯ rm -f dummyhttp-secret.yaml dummyhttp-sealed-secret.yaml
❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c":"Vcluster Test Value","s":"New secret"}

Pause and restore the vcluster

The status of pods and statefulsets while the virtual cluster is active can be seen using kubectl:

❯ kubectl get pods,statefulsets -n my-vcluster
NAME                                                                 READY   STATUS    RESTARTS   AGE
pod/coredns-bbb5b66cc-snwpn-x-kube-system-x-my-vcluster              1/1     Running   0          127m
pod/dummyhttp-587c7855d7-pt9b8-x-default-x-my-vcluster               1/1     Running   0          4m39s
pod/my-vcluster-0                                                    1/1     Running   0          128m
pod/reloader-reloader-7f56c54d75-544gd-x-kube-system-x-my-vcluster   1/1     Running   0          60m

NAME                           READY   AGE
statefulset.apps/my-vcluster   1/1     128m

Pausing the vcluster

If we don’t need to use the virtual cluster we can pause it; after a short while all Pods are gone because the statefulSet is scaled down to 0. Note that other resources like volumes are not removed, but nothing that has to be scheduled and consumes CPU cycles keeps running, which can translate into significant savings when running on clusters from cloud platforms or, in a local cluster like the one we are using, frees resources like CPU and memory for other things:

❯ vcluster pause my-vcluster
11:20:47 info Scale down statefulSet my-vcluster/my-vcluster...
11:20:48 done Successfully paused vcluster my-vcluster/my-vcluster
❯ kubectl get pods,statefulsets -n my-vcluster
NAME                           READY   AGE
statefulset.apps/my-vcluster   0/0     130m
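
As noted above, volumes are not removed by the pause; a quick check of the persistent volume claims in the namespace confirms it (this assumes the default vcluster setup, where the control plane keeps its data on a PVC):

# The statefulset is scaled down to 0 but its PVC remains, so resuming later
# keeps the virtual cluster state.
❯ kubectl get pvc -n my-vcluster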

Now the curl command fails:

❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443
404 page not found

Although the ingress is still available (it returns a 404 because there is no pod behind the service):

❯ kubectl get ingress -n my-vcluster
NAME                                CLASS     HOSTS                               ADDRESS                            PORTS   AGE
dummyhttp-x-default-x-my-vcluster   traefik   vcluster-dummyhttp.lo.mixinet.net   172.20.0.2,172.20.0.3,172.20.0.4   80      120m

In fact, the same problem happens when we try to connect to the vcluster API; the error shown by kubectl is related to the TLS certificate because the 404 page uses the wildcard certificate instead of the self signed one:

❯ vkubectl get pods
Unable to connect to the server: tls: failed to verify certificate: x509: certificate signed by unknown authority
❯ curl -s https://my-vcluster-api.lo.mixinet.net:8443/api/v1/
404 page not found
❯ curl -v -s https://my-vcluster-api.lo.mixinet.net:8443/api/v1/ 2>&1 | grep subject
*  subject: CN=lo.mixinet.net
*  subjectAltName: host "my-vcluster-api.lo.mixinet.net" matched cert's "*.lo.mixinet.net"

Resuming the vcluster

When we want to use the virtual cluster again we just need to use the resume command:

❯ vcluster resume my-vcluster
12:03:14 done Successfully resumed vcluster my-vcluster in namespace my-vcluster

Once all the pods are running the virtual cluster goes back to its previous state, although of course all of the pods were restarted.

Cleaning up

The virtual cluster can be removed using the delete command:

❯ vcluster delete my-vcluster
12:09:18 info Delete vcluster my-vcluster...
12:09:18 done Successfully deleted virtual cluster my-vcluster in namespace my-vcluster
12:09:18 done Successfully deleted virtual cluster namespace my-vcluster
12:09:18 info Waiting for virtual cluster to be deleted...
12:09:50 done Virtual Cluster is deleted

That removes everything we used in this post except the sealed secrets and secrets that we put in the my-vcluster-default namespace, because that namespace was created by us rather than by the vcluster command.
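
To double-check what is left behind after the delete we can run a couple of quick queries (the namespace names are the ones used throughout this post):

# The vcluster namespace is gone...
❯ kubectl get namespace my-vcluster
Error from server (NotFound): namespaces "my-vcluster" not found
# ...but the namespace we created by hand is still there, with its secrets.
❯ kubectl get sealedsecrets.bitnami.com,secrets -n my-vcluster-default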

If we delete the namespace all the secrets and sealed secrets on it are also removed:

❯ kubectl delete namespace my-vcluster-default
namespace "my-vcluster-default" deleted

Conclusions

I believe that the use of virtual clusters can be a good option for two use cases that I’ve encountered in real projects in the past:

  • the need for short-lived clusters for developers or teams,
  • the execution of integration tests from CI pipelines that require a complete cluster (the tests can be run on virtual clusters that are created on demand or paused and resumed when needed).

For both cases things can be set up using the Apache-licensed product, although evaluating the vCluster Platform offering could also be interesting.

In any case, when not everything runs inside Kubernetes we will also have to check how to manage the external services (i.e. if we use databases or message buses as SaaS instead of deploying them inside our clusters, we need a way of creating, deleting, pausing and resuming those services).

Worse Than FailureCodeSOD: Would a Function by Any Other Name Still be WTF?

"Don't use exception handling for normal flow control," is generally good advice. But Andy's lead had a PhD in computer science, and with that kind of education, wasn't about to let good advice or best practices tell them what to do. That's why, when they needed to validate inputs, they wrote code C# like this:


    public static bool IsDecimal(string theValue)
    {
        try
        {
            Convert.ToDouble(theValue);
            return true;
        }
        catch
        {
            return false;
        }
    } 

They attempt to convert, and if they succeed, great, return true. If they fail, an exception gets caught, and they return false. What could be simpler?

Well, using the built-in TryParse function would be simpler. Despite its name, it actually avoids throwing an exception, even internally, because exceptions are expensive in .NET. And it is already implemented, so you don't have to write this yourself.

Also, Decimal is a type in C#: a 16-byte floating point value. Now, I know they didn't actually mean Decimal, just "a value with 0 or more digits behind the decimal point", but pedantry is the root of clarity, and the naming convention makes this bad code unclear about its intent and purpose. Per the docs, there are Single and Double values which can't be represented as Decimal and trigger an OverflowException. And conversely, Decimal loses precision if converted to Double. This means a value that would be represented as Decimal might not pass this function, and a value that can't be represented as Decimal might, and none of this actually matters but the name of the function is bad.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsRoses and Ozone

Author: Julian Miles, Staff Writer The thief is sprinting away before I realise they’ve taken my bag. I go after them. “Thieving bastard!” They swerve between parked cars. A silver coupe comes out of nowhere and knocks them flying. It screeches to a stop, smoke or steam curling off it. What’s that smell? Gull-wing doors […]

The post Roses and Ozone appeared first on 365tomorrows.

Cryptogram Florida Backdoor Bill Fails

A Florida bill requiring encryption backdoors failed to pass.

xkcdDeposition

Planet DebianTaavi Väänänen: lua entry thread aborted: runtime error: bad request

The Wikimedia Cloud VPS shared web proxy has an interesting architecture: the management API writes an entry for each proxy to a Redis database, and the web server in use (Nginx with Lua support from ngx_http_lua_module) looks up the backend server URL from Redis for each request. This is maybe not how I would design this today, but the basic design dates back to 2013 and has served us well ever since.

However, with a recent operating system upgrade to Debian 12 (we run Nginx from the packages in Debian's repositories), we started seeing mysterious errors that looked like this:

2025/04/30 07:24:25 [error] 82656#82656: *5612 lua entry thread aborted: runtime error: /etc/nginx/lua/domainproxy.lua:32: bad request
stack traceback:
coroutine 0:
[C]: in function 'set_keepalive'
/etc/nginx/lua/domainproxy.lua:32: in function 'redis_shutdown'
/etc/nginx/lua/domainproxy.lua:48: in main chunk, client: [redacted], server: *.wmcloud.org, request: "GET [redacted] HTTP/2.0", host: "codesearch.wmcloud.org", referrer: "https://codesearch.wmcloud.org/search/"

The code in question seems straightforward enough:

function redis_shutdown()
 -- Use a connection pool of 256 connections with a 32s idle timeout
 -- This also closes the current redis connection.
 red:set_keepalive(1000 * 32, 256) -- line 32
end

When searching for this error online, you'll end up finding advice like "the resty.redis object instance cannot be stored in a Lua variable at the Lua module level". However, our code already stores it as a local variable:

local redis = require 'nginx.redis'
local red = redis:new()
red:set_timeout(1000)
red:connect('127.0.0.1', 6379)

Turns out the issue was with the function definition: functions can also be defined as local. Without that, something somewhere in some situations seems to reference the variables from other requests, instead of using the Redis connection for the current request. (Don't ask me what changed between Debian 12 and 13 making this only break now.) So we needed to change our function definition to this instead:

local function redis_shutdown()
 -- Use a connection pool of 256 connections with a 32s idle timeout
 -- This also closes the current redis connection.
 red:set_keepalive(1000 * 32, 256)
end

I spent almost an entire workday looking for this, ultimately making a two-line patch to fix the issue. Hopefully by publishing this post I can save that time for everyone else who stumbles upon the same problem after me.

Planet DebianFreexian Collaborators: Debian Contributions: DebConf 25 preparations, PyPA tools updates, Removing libcrypt-dev from build-essential and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-04

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

DebConf 25 Preparations, by Stefano Rivera and Santiago Ruano Rincón

DebConf 25 preparations continue. In April, the bursary team reviewed and ranked bursary applications. Santiago Ruano Rincón examined the current state of the conference’s finances, to see if we could allocate any more money to bursaries. Stefano Rivera supported the bursary team’s work with infrastructure and advice and added some metrics to assist Santiago’s budget review. Santiago was also involved in different parts of the organization, including Content team matters (such as reviewing the first proposals and preparing public information about the new Academic Track) and coordinating different aspects of the Day trip activities and the Conference Dinner.

PyPA tools updates, by Stefano Rivera

Around the beginning of the freeze (in retrospect, definitely too late) Stefano looked at updating setuptools in the archive to 78.1.0. This brings support for more comprehensive license expressions (PEP-639), which people are expected to adopt soon upstream. While the reverse-autopkgtests all passed, it all came with some unexpected complications and turned into a mini-transition. The new setuptools broke shebangs for scripts (pypa/setuptools#4952).

It also required a bump of wheel to 0.46, and wheel 0.46 now has a dependency outside the standard library (it de-vendored packaging). This meant it was no longer suitable to distribute a standalone wheel.whl file to seed into new virtualenvs, as virtualenv does by default. The good news here is that setuptools doesn’t need wheel any more; it has included its own implementation of the bdist_wheel command since 70.1. But the world hadn’t adapted to take advantage of this yet. Stefano scrambled to get all of these issues resolved upstream and in Debian:

We’re now at the point where python3-wheel-whl is no longer needed in Debian unstable, and it should migrate to trixie.

Removing libcrypt-dev from build-essential, by Helmut Grohne

The crypt function was originally part of glibc, but it got separated into libxcrypt. As a result, libc6-dev now depends on libcrypt-dev. This creates a dependency cycle during architecture cross bootstrap. As the number of packages actually using crypt is relatively small, Helmut proposed removing the dependency. He analyzed an archive rebuild kindly performed by Santiago Vila (not affiliated with Freexian) and estimated the necessary changes. It looks like we may complete this with modifications to fewer than 300 source packages in the forky cycle. Half of the bugs have been filed at this time. They are tracked with libcrypt-* usertags.

Miscellaneous contributions

  • Carles uploaded a new version of simplemonitor.
  • Carles improved the documentation of salsa-ci-team/pipeline regarding piuparts arguments.
  • Carles closed an FTBFS on gcc-15 on qnetload.
  • Carles worked on Catalan translations using po-debconf-manager: reviewed 57 translations and created their merge requests in salsa, and created 59 bug reports for packages that didn’t merge in more than 30 days. Followed up on merge requests and comments in bug reports. Managed some translations manually for packages that are not in Salsa.
  • Lucas did some work on the DebConf Content and Bursary teams.
  • Lucas fixed multiple CVEs and bugs involving the upgrade from bookworm to trixie in ruby3.3.
  • Lucas fixed a CVE in valkey in unstable.
  • Stefano updated beautifulsoup4, python-authlib, python-html2text, python-packaging, python-pip, python-soupsieve, and unidecode.
  • Stefano packaged python-dependency-groups, a new vendored library in python-pip.
  • During an afternoon Bug Squashing Party in Montevideo, Santiago uploaded a couple of packages fixing RC bugs #1057226 and #1102487. The latter was a sponsored upload.
  • Thorsten uploaded new upstream versions of brlaser, ptouch-driver and sane-airscan to get the latest upstream bug fixes into Trixie.
  • Raphaël filed an upstream bug on zim for a graphical glitch that he has been experiencing.
  • Colin Watson upgraded openssh to 10.0p1 (also known as 10.0p2), and debugged various follow-up bugs. This included adding riscv64 support to vmdb2 in passing, and enabling native wtmpdb support so that wtmpdb last now reports the correct tty for SSH connections.
  • Colin fixed dput-ng’s --override option, which had never previously worked.
  • Colin fixed a security bug in debmirror.
  • Colin did his usual routine work on the Python team: 21 packages upgraded to new upstream versions, 8 CVEs fixed, and about 25 release-critical bugs fixed.
  • Helmut filed patches for 21 cross build failures.
  • Helmut uploaded a new version of debvm featuring a new tool debefivm-create to generate EFI-bootable disk images compatible with other tools such as libvirt or VirtualBox. Much of the work was prototyped in earlier months. This generalizes mmdebstrap-autopkgtest-build-qemu.
  • Helmut continued reporting undeclared file conflicts and suggested package removals from unstable.
  • Helmut proposed build profiles for libftdi1 and gnupg2 to deal with recently added dependencies in the architecture cross bootstrap package set.
  • Helmut managed the /usr-move transition. He worked on ensuring that systemd would comply with Debian’s policy. Dumat continues to locate problems here and there yielding discussion occasionally. He sent a patch for an upgrade problem in zutils.
  • Anupa worked with the Debian publicity team to publish Micronews and Bits posts.
  • Anupa worked with the DebConf 25 content team to review talk and event proposals for DebConf 25.

,

Planet DebianSergio Durigan Junior: Debian Bug Squashing Party Brazil 2025

With the trixie release approaching, I had the idea back in April to organize a bug squashing party with the Debian Brasil community. I believe the outcome was very positive, and we were able to tackle and fix quite a number of release-critical bugs. This is a brief report of what we did.

A remote BSP

It’s not the first time I organize a BSP: back in 2019, I helped throw another similar party in Toronto. The difference this time is that, because Brazil is a big country and (perhaps most importantly) because I’m not currently living there, the BSP had to be done online.

I’m a fan of social interactions (especially with the Brazilian community), and in my experience we usually can achieve much more when we get together in a physical place, but hey, you gotta do what you gotta do…

Most (if not all) of the folks interested in participating had busy weekdays, so it was decided that we would meet during the weekends and try to work on a few bugs over Jitsi. Nothing stopped people from working on bugs during the week as well, of course.

A tag to rule them all

We used the bsp-2025-04-brazil usertag to mark those bugs that were touched by us. You can see the full list of bugs here, although the current list (as of 2025-05-11) is smaller than the one we had by the end of April. I don’t know what happened; maybe it’s some glitch with the BTS, or maybe someone removed the usertag by mistake.

Stats

In total, we had:

  • 7 participants
  • 37 bugs handled. Of those,
  • 35 bugs fixed

The BSP officially started on 04 April 2025, and ended on 30 April 2025. I was able to attend meetings during two weekends; other people participated more sporadically.

Outcome

As I said above, the Debian Brasil community is great and very engaged in the project. Speaking more specifically about the Debian Brasil Devel group, I can say that we have contributors with strong technical skills, and I really love that we have this inclusive, extremely technical culture where debugging and understanding things is really core to pretty much all our discussions.

We already meet weekly on Thursdays to talk shop and help newcomers, so having a remote BSP with this group seemed like a logical thing to do. I’m really glad to see our results and even happier to hear positive feedback from the community during the last MiniDebConf in Maceió.

There’s some interest in organizing another BSP, this time face-to-face and during the next DebConf. I’m all for it, as I love fixing bugs and having a great time with friends. If you’re interested in attending, let me know.

Thanks, and until next time.

Planet DebianBits from Debian: Bits from the DPL

Dear Debian community,

This is bits from the DPL for April.

End of 10

I am sure I was speaking in the interest of the whole project when joining the "End of 10" campaign. Here is what I wrote to the initiators:

Hi Joseph and all drivers of the "End of 10" campaign,

On behalf of the entire Debian project, I would like to say that we proudly join your great campaign. We stand with you in promoting Free Software, defending users' freedoms, and protecting our planet by avoiding unnecessary hardware waste. Thank you for leading this important initiative.

Andreas Tille
Debian Project Leader

I have some goals I would like to share with you for my second term.

Ftpmaster delegation

This splits up into tasks that can be done before and after Trixie release.

Before Trixie:

1. Reducing Barriers to DFSG Compliance Checks

Back in 2002, Debian established a way to distribute cryptographic software in the main archive, whereas such software had previously been restricted to the non-US archive. One result of this arrangement which influences our workflow is that all packages uploaded to the NEW queue must remain on the server that hosts it. This requirement means that members of the ftpmaster team must log in to that specific machine, where they are limited to a restricted set of tools for reviewing uploaded code.

This setup may act as a barrier to participation--particularly for contributors who might otherwise assist with reviewing packages for DFSG compliance. I believe it is time to reassess this limitation and work toward removing such hurdles.

In October last year, we had some initial contact with SPI's legal counsel, who noted that US regulations around cryptography have been relaxed somewhat in recent years (as of 2021). This suggests it may now be possible to revisit and potentially revise the conditions under which we manage cryptographic software in the NEW queue.

I plan to investigate this further. If you have expertise in software or export control law and are interested in helping with this topic, please get in touch with me.

The ultimate goal is to make it easier for more people to contribute to ensuring that code in the NEW queue complies with the DFSG.

2. Discussing Alternatives

My chances to reach out to other distributions remained limited. However, regarding the processing of new software, I learned that OpenSUSE uses a Git-based workflow that requires five "LGTM" approvals from a group of trusted developers. As far as I know, Fedora follows a similar approach.

Inspired by this, a recent community initiative--the Gateway to NEW project--enables peer review of new packages for DFSG compliance before they enter the NEW queue. This effort allows anyone to contribute by reviewing packages and flagging potential issues in advance via Git. I particularly appreciate that the DFSG review is coupled with CI, allowing for both license and technical evaluation.

While this process currently results in some duplication of work--since final reviews are still performed by the ftpmaster team--it offers a valuable opportunity to catch issues early and improve the overall quality of uploads. If the community sees long-term value in this approach, it could serve as a basis for evolving our workflows. Integrating it more closely into DAK could streamline the process, and we've recently seen that merge requests reflecting community suggestions can be accepted promptly.

For now, I would like to gather opinions about how such initiatives could best complement the current NEW processing, and whether greater consensus on trusted peer review could help reduce the burden on the team doing DFSG compliance checks. Submitting packages for review and automated testing before uploading can improve quality and encourage broader participation in safeguarding Debian's Free Software principles.

My explicit thanks go out to the Gateway to NEW team for their valuable and forward-looking contribution to Debian.

3. Documenting Critical Workflows

Past ftpmaster trainees have told me that understanding the full set of ftpmaster workflows can be quite difficult. While there is some useful documentation − thanks in particular to Sean Whitton for his work on documenting NEW processing rules – many other important tasks carried out by the ftpmaster team remain undocumented or only partially so.

Comprehensive and accessible documentation would greatly benefit current and future team members, especially those onboarding or assisting in specific workflows. It would also help ensure continuity and transparency in how critical parts of the archive are managed.

If such documentation already exists and I have simply overlooked it, I would be happy to be corrected. Otherwise, I believe this is an area where we need to improve significantly. Volunteers with a talent for writing technical documentation are warmly invited to contact me--I'd be happy to help establish connections with ftpmaster team members who are willing to share their knowledge so that it can be written down and preserved.

Once Trixie is released (hopefully before DebConf):

4. Split of the Ftpmaster Team into DFSG and Archive Teams

As discussed during the "Meet the ftpteam" BoF at DebConf24, I would like to propose a structural refinement of the current Ftpmaster team by introducing two different delegated teams:

  1. DFSG Team
  2. Archive Team (responsible for DAK maintenance and process tooling, including releases)

(Alternative name suggestions are, of course, welcome.) The primary task of the DFSG team would be the processing of the NEW queue and ensuring that packages comply with the DFSG. The Archive team would focus on maintaining DAK and handling the technical aspects of archive management.

I am aware that, in the recent past, the ftpmaster team has decided not to actively seek new members. While I respect the autonomy of each team, the resulting lack of a recruitment pipeline has led to some friction and concern within the wider community, including myself. As Debian Project Leader, it is my responsibility to ensure the long-term sustainability and resilience of our project, which includes fostering an environment where new contributors can join and existing teams remain effective and well-supported. Therefore, even if the current team does not prioritize recruitment, I will actively seek and encourage new contributors for both teams, with the aim of supporting openness and collaboration.

This proposal is not intended as criticism of the current team's dedication or achievements--on the contrary, I am grateful for the hard work and commitment shown, often under challenging circumstances. My intention is to help address the structural issues that have made onboarding and specialization difficult and to ensure that both teams are well-supported for the future.

I also believe that both teams should regularly inform the Debian community about the policies and procedures they apply. I welcome any suggestions for a more detailed description of the tasks involved, as well as feedback on how best to implement this change in a way that supports collaboration and transparency.

My intention with this proposal is to foster a more open and effective working environment, and I am committed to working with all involved to ensure that any changes are made collaboratively and with respect for the important work already being done.

I'm aware that the ideas outlined above touch on core parts of how Debian operates and involve responsibilities across multiple teams. These are not small changes, and implementing them will require thoughtful discussion and collaboration.

To move this forward, I've registered a dedicated BoF for DebConf. To make the most of that opportunity, I'm looking for volunteers who feel committed to improving our workflows and processes. With your help, we can prepare concrete and sensible proposals in advance--so the limited time of the BoF can be used effectively for decision-making and consensus-building.

In short: I need your help to bring these changes to life. From my experience in my last term, I know that when it truly matters, the Debian community comes together--and I trust that spirit will guide us again.

Please also note: we had a "Call for volunteers" five years ago, and much of what was written there still holds true today. I've been told that the response back then was overwhelming--but that training such a large number of volunteers didn't scale well. This time, I hope we can find a more sustainable approach: training a few dedicated people first, and then enabling them to pass on their knowledge. This will also be a topic at the DebCamp sprint.

Dealing with Dormant Packages

Debian was founded on the principle that each piece of software should be maintained by someone with expertise in it--typically a single, responsible maintainer. This model formed the historical foundation of Debian's packaging system and helped establish high standards of quality and accountability. However, as the project has grown and the number of packages has expanded, this model no longer scales well in all areas. Team maintenance has since emerged as a practical complement, allowing multiple contributors to share responsibility and reduce bottlenecks--depending on each team's internal policy.

While working on the Bug of the Day initiative, I observed a significant number of packages that have not been updated in a long time. In the case of team-maintained packages, addressing this is often straightforward: team uploads can be made, or the team can be asked whether the package should be removed. We've also identified many packages that would fit well under the umbrella of active teams, such as language teams like Debian Perl and Debian Python, or blends like Debian Games and Debian Multimedia. Often, no one has taken action--not because of disagreement, but simply due to inattention or a lack of initiative.

In addition, we've found several packages that probably should be removed entirely. In those cases, we've filed bugs with pre-removal warnings, which can later be escalated to removal requests.

When a package is still formally maintained by an individual, but shows signs of neglect (e.g., no uploads for years, unfixed RC bugs, failing autopkgtests), we currently have three main tools:

  1. The MIA process, which handles inactive or unreachable maintainers.
  2. Package Salvaging, which allows contributors to take over maintenance if conditions are met.
  3. Non-Maintainer Uploads (NMUs), which are limited to specific, well-defined fixes (which do not include things like migration to Salsa).

These mechanisms are important and valuable, but they don't always allow us to react swiftly or comprehensively enough. Our tools for identifying packages that are effectively unmaintained are relatively weak, and the thresholds for taking action are often high.

The Package Salvage team is currently trialing a process we've provisionally called "Intend to NMU" (ITN). The name is admittedly questionable--some have suggested alternatives like "Intent to Orphan"--and discussion about this is ongoing on debian-devel. The mechanism is intended for situations where packages appear inactive but aren't yet formally orphaned, introducing a clear 21-day notice period before NMUs, similar in spirit to the existing ITS process. The discussion has sparked suggestions for expanding NMU rules.

While it is crucial not to undermine the autonomy of maintainers who remain actively involved, we also must not allow a strict interpretation of this autonomy to block needed improvements to obviously neglected packages.

To be clear: I do not propose to change the rights of maintainers who are clearly active and invested in their packages. That model has served us well. However, we must also be honest that, in some cases, maintainers stop contributing--quietly and without transition plans. In those situations, we need more agile and scalable procedures to uphold Debian's high standards.

To that end, I've registered a BoF session for DebConf25 to discuss potential improvements in how we handle dormant packages. These discussions will be prepared during a sprint at DebCamp, where I hope to work with others on concrete ideas.

Among the topics I want to revisit is my proposal from last November on debian-devel, titled "Barriers between packages and other people". While the thread prompted substantial discussion, it understandably didn't lead to consensus. I intend to ensure the various viewpoints are fairly summarised--ideally by someone with a more neutral stance than myself--and, if possible, work toward a formal proposal during the DebCamp sprint to present at the DebConf BoF.

My hope is that we can agree on mechanisms that allow us to act more effectively in situations where formerly very active volunteers have, for whatever reason, moved on. That way, we can protect both Debian's quality and its collaborative spirit.

Building Sustainable Funding for Debian

Debian incurs ongoing expenses to support its infrastructure--particularly hardware maintenance and upgrades--as well as to fund in-person meetings like sprints and mini-DebConfs. These investments are essential to our continued success: they enable productive collaboration and ensure the robustness of the operating system we provide to users and derivative distributions around the world.

While DebConf benefits from generous sponsorship, and we regularly receive donated hardware, there is still considerable room to grow our financial base--especially to support less visible but equally critical activities. One key goal is to establish a more constant and predictable stream of income, helping Debian plan ahead and respond more flexibly to emerging needs.

This presents an excellent opportunity for contributors who may not be involved in packaging or technical development. Many of us in Debian are engineers first--and fundraising is not something we've been trained to do. But just like technical work, building sustainable funding requires expertise and long-term engagement.

If you're someone who's passionate about Free Software and has experience with fundraising, donor outreach, sponsorship acquisition, or nonprofit development strategy, we would deeply value your help. Supporting Debian doesn't have to mean writing code. Helping us build a steady and reliable financial foundation is just as important--and could make a lasting impact.

Kind regards Andreas.

PS: In April I also planted my 5000th tree and while this is off-topic here I'm proud to share this information with my fellow Debian friends.

Planet DebianDirk Eddelbuettel: RcppSMC 0.2.8 on CRAN: Maintenance

Release 0.2.8 of our RcppSMC package arrived at CRAN yesterday. RcppSMC provides Rcpp-based bindings to R for the Sequential Monte Carlo Template Classes (SMCTC) by Adam Johansen described in his JSS article. Sequential Monte Carlo is also referred to as Particle Filter in some contexts. The package now also features the Google Summer of Code work by Leah South in 2017, and by Ilya Zarubin in 2021.

This release is somewhat procedural and contains solely maintenance, either for items now highlighted by the R and CRAN package checks, or to package internals. We had made those changes at the GitHub repo over time since the last release two years ago, and it seemed like a good time to get them to CRAN now.

The release is summarized below.

Changes in RcppSMC version 0.2.8 (2025-05-10)

  • Updated continuous integration script

  • Updated package metadata now using Authors@R

  • Corrected use of itemized list in one manual page

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More information is on the RcppSMC page and the repo. Issues and bug reports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

David BrinAnd ... the Great Silence persists: More on the Fermi Paradox: Where is Everyone?

Before diving into the Biggest Question - Are we alone in the universe? - I'm pleased to announce new volumes in my Out of Time series of novels for teen readers who love adventure laced with history, science and other cool stuff.

New books include Boondoggle by SF Legend Tom Easton & newcomer Torion Oey plus Raising the Roof by R. James Doyle! All new titles are released by Amazing Stories.

Meanwhile, Open Road republished the earlier five novels, including great tales by Nancy Kress, Sheila Finch, and Roger Allen. Plus The Archimedes Gambit and Storm's Eye!

The shared motif... teens from across time are pulled into the 24th Century and asked to use their unique skills to help a future that's in peril!  Past characters who get 'yanked' into tomorrow include a young Arthur Conan Doyle, Winston Churchill, Joan of Arc's page and maybe... you!

All of the Out of Time books can be accessed (and assessed) here

* With coming authors including SF legend Allen Steele and newcomer Robin Hansen.

And now to the Great Big Question.


== Because there's bugger-all (intelligence) down here on Earth! ==

In "A History of the Search for Extraterrestrial Intelligence," a cogent overview of 200+ years of SETI (in various forms), John Michael Godier starts by citing one of the great sages of our era and goes on to illuminate the abiding question: "Are we alone?" Godier is among the best of all science podcasters. 

I also highly recommend YouTube channels by Isaac Arthur and Anton Petrov as well as the PBS series EONS.

Joe Scott runs a popular science-and-future YouTube channel that is generally informative and entertaining, and much more popular than anything I do. This episode is divertingly about what the year 2100 might be like.


== Anyone Out There? ==


Hmmm. Over the years, I’ve collected ‘fermis’ … or hypotheses to explain the absence of visible alien tech-civilizations. In fact, I was arguably the first to attempt an organized catalogue in my “Great Silence” paper in 1983, way-preceding popular use of ‘the Fermi Paradox.’


See Isaac Arthur’s almost-thorough rundown of most of the current notions, including a few (e.g. water-land ratio) that I made up first. Still, new ones occasionally crop up. Even now!


Here’s one about an oxygen bottleneck: “To create advanced technology, a species would likely require the capability to increase the temperature of the materials used in its production. Oxygen's role in enabling open-air combustion has been critical in the evolution of human technology, particularly in metallurgy. Exoplanets whose atmospheres contain less than 18% oxygen would likely not allow open-air combustion, suggesting a threshold that alien worlds must cross if life on them is to develop advanced technology.”

Hence my call to chemists out there!  Is it true that “an atmosphere with anything less than 18% oxygen would not allow open-air combustion”?  That assertion implies that only the most recent 500 million years of Earth history offered those conditions. And hence industrial civilization might be rare, even if life pervades the cosmos. 


My own response: It seems likely that vegetation on a lower-oxygen world would evolve in ways that are less fire resistant. After all, there is evidence of fires back in our own Carboniferous etc.


== This time the mania just isn't ebbing (sigh) ==

The latest US Government report on UFO/UAP phenomena finds – as expected – no plausible evidence that either elements of the government or anyone else on Earth has truly encountered aliens. 


Alas, it will convince none of the fervid believers, whose lifelong Hollywood indoctrination in Suspicion of Authority (SoA) is only reinforced by any denial! No matter how many intelligent and dedicated civil servants get pulled into these twice-per-decade manias. 


I don’t call this latest 'investigation' a waste of taxpayer money!  Millions wanted this and hence it was right to do it!  Even if none of those millions of True Believers will credit that anything but malign motives drive all those civil servants and fellow Americans.

Shame on you, Hollywood. For more on this, especially the SoA propaganda campaign that (when moderate) keeps us free and that (when toxically over-wrought) might kill our unique civilization, see Vivid Tomorrows: Science Fiction and Hollywood.


Or see my own highly unusual take on UAP phenomena. I promise fresh thoughts.


And here John Michael Godier offers an interesting riff on a possible explanation for the infamous WOW signal detected by a SETI program in 1977. 



== on the Frontier ==


Mining helium-3 on the Moon has been talked about forever—now a company will try. "There are so many investments that we could be making, but there are also Moonshots."


Yeah, yeah, sure. “Helium Three” (in Gothic letters?) is (I am 90% sure) one of the biggest scams to support the unjustifiable and silly “Artemis” rush to send US astronauts to perform another ritual footprint stunt on that useless plain of poison dust.  


Prove me wrong? Great?  I don’t mind some investment in robotic surveys.  But a larger chunk of $$$ should go to asteroids, where we know -absolutely – the real treasures lie.


Meanwhile, far more practically needed… and reminiscent of the very first chapter of my novel Existence…  Astroscale is one of several groups demonstrating methods to remove debris from Low Earth Orbit (LEO). Though we gotta hope that a desperate world ‘leader’ doesn’t decide to spasm wreck LEO, as his final gift to the world.



== Dive to the Sun! ==


The Parker Solar Probe – (the team named me an informal ‘mascot’ on account of my first novel) has discovered lots about how solar magnetic fields churn and merge and flow outward to snap and heat the solar corona to incredible temperatures.


(I am also a co-author on a longer-range effort to plan swooping sailcraft that plunge just past our star and then get propelled to incredible speed. The endeavor’s name? Project Sundiver! Stay (loosely) tuned.)



== Physics and Universal Fate ==


I well recall when physicists Freeman Dyson and Frank Tipler were competing for the informal title of “Theologian of the 20th Century” with their predictions for the ultimate fate of intelligent life. In a universe that would either 

(1) expand forever and eventually dissipate with the decay of all baryons, or else 

(2) fall back inward to a Big Crunch, offering Tipler a chance to envision a God era in the final million years, in his marvelous tome The Physics of Immortality.


 I never met Tipler. Freeman was a friend. In any event, it sure looks as if Freeman won the title. 

Only... how sure are we of the Great Dissipation? Its details and influences and evidence and boundary conditions? Those aspects have been in flux. This essay cogently summarizes the competing models and most recent evidence. Definitely only for the genuinely physics minded!


A final note about this. Roger Penrose - also a friend of mine - came up with a brilliant hybrid that unites the Endless Dissipation model and Tipler's Big Crunch. His Conformal Cosmology is simply wonderful. (I even made teensy contributions.) 


And if it ain't true... well... it oughta be!



And finally... shifting perspective: this ‘official’ Chinese world map has gotta be shared. Quite a dig on the Americas! Gotta admit it is fresh perspective. Like that view of the Pacific Ocean as nearly all of a visible earth globe.   A reminder how truly big Africa is, tho the projection inflates to left and right. And putting India in the center actually diminishes its size.


===


PS... Okay... ONE TEENSY POLITICAL POINT?


When they justify their cult's all-out war against science and every single fact-centered profession - (including the US military officer corps) - one of the magical incantations yammered by Foxites concerns the Appeal-to-Authority Fallacy.


Oh sure, we should all look up and scan posted lists and definitions of the myriad logical fallacies that are misused in arguments even by very intelligent folks. (And overcoming them is one reason why law procedures can get tediously exacting.) Furthermore, Appeal to Authority is one of them. Indeed, citing Aristotle instead of doing experiments held back science for 2000 years!


Still, step back and notice how it is now used to discredit and deter anyone from citing facts determined by scientists and other experts, through vetted, peer-reviewed and heavily scrutinized validation. 


Sure. "Do your own research' if you like. Come with me on a boat to measure Ocean Acidification*, for example! With cash wager stakes on the line. But for most of us, most of the time, it is about comparing credibility of those out there who claim to deliver facts. And yes, bona fide scientists with good reputations are where any such process should start, and not cable TV yammer-heads. 


The way to avoid the "Appeal to Authority" fallacy is not to reflexively discredit 'authorities,' but to INTERROGATE authorities with sincerely curious questions... and to interrogate their rivals. Ideally back and forth in reciprocally competitive criticism. But with the proviso that maybe someone who has studied a topic all her life may actually know something that you don't.


*Ocean acidification all by itself utterly proves CO2-driven climate change is a lethal threat to our kids.  And I invite those wager stakes!

365 TomorrowsThe Remainder

Author: RJ Barranco The calculator said “Error” but Davis kept pressing the keys anyway. “You can’t divide by zero,” said the calculator in a small voice that hadn’t been there before. “Why not?” asked Davis. “Because,” the calculator replied, “I’d have to think about infinity, and I don’t want to.” Davis laughed. “But what if […]

The post The Remainder appeared first on 365tomorrows.

,

365 TomorrowsI Always Was Grandma’s Favorite

Author: Evan A Davis “Another round for my friends,” Dallas announced, “on me!” Every patron in the Four-Finger Saloon loudly cheered, raising a glass to the famous outlaw. The barkeep tried to protest, but was quickly drowned in the oncoming tide of customers. The automated piano man struck up a jaunty song for the gunslinger’s […]

The post I Always Was Grandma’s Favorite appeared first on 365tomorrows.

Rondam RamblingsNo, Science is Not Just Another Religion

I want to debunk once and for all this idea that "science is just another religion".  It isn't, for one simple reason: all religions are based on some kind of metaphysical assumptions.  Those assumptions are generally something like the authority of some source of revealed knowledge, typically a holy text.  But it doesn't have to be that.  It can be as simple as assuming that

Planet DebianTaavi Väänänen: Wikimedia Hackathon Istanbul 2025

It's that time of the year again: the Wikimedia Hackathon 2025 happened last weekend in Istanbul. This year was my third time attending what has quickly become one of my favourite events of the year simply due to the concentration of friends and other like-minded nerds in a single location.1

Valerio, Lucas, me and a shark.

Image by Chlod Alejandro is licensed under CC BY-SA 4.0.

This year I did a short presentation about the MediaWiki packages in Debian (slides), which is something I do but I suspect is fairly obscure to most people in the MediaWiki community. I was hoping to do some work on reproducibility of MediaWiki releases, but other interests (plus lack of people involved in the release process at the hackathon) meant that I didn't end up getting any work done on that (assuming this does not count).

Other long-standing projects did end up getting some work done! MusikAnimal and I ended up fixing the Commons deletion notification bot, which had been broken for well over two years at that point (and was at some point in the hackathon plans for last year for both of us). Other projects that I made progress on include supporting multiple types of two-factor devices, and LibraryUpgrader which gained support for rebasing and updating existing patches2.

In addition to hacking, the other highlight of these events is the hallway track. Some of the crowd are people I've seen at previous events and/or interact with very frequently, but there are also significant parts of the community and the Foundation that I don't usually get to interact with outside of these events. (Although it still feels extremely weird to hear from various mostly-WMF people, with whom I haven't spoken before, that they've heard various (usually positive) rumours and stories about me.)

Unfortunately we did not end up having a Cuteness Association meetup this year, but we had an impromptu PGP key signing party which is basically almost as good, right?

However, I did continue a tradition from last year: I ended up nominating Chlod, a friend of mine, to receive +2 access to mediawiki/* during the hackathon. The request is due to be closed sometime tomorrow.

(Usual disclosure: My travel was funded by the Wikimedia Foundation. Thank you! This is my personal blog and these are my own opinions.)

Now that you've read this post, maybe check out posts from others?


  1. Unfortunately you can never have absolutely everyone attending :( ↩︎

  2. Amir, I still have not forgiven you about this. ↩︎

,

Planet DebianUwe Kleine-König: The Linux kernel's PGP Web of Trust

The Linux kernel's development process makes use of PGP. The most relevant part here is that subsystem maintainers are supposed to use signed tags in their pull requests to Linus Torvalds. As the concept of keyservers is considered broken, Konstantin Ryabitsev maintains a collection of relevant keys in a git repository.

As of today (at commit a0bc65fb27f5033beddf9d1ad97d67c353849be2) there are 602 valid keys tracked in that repository. The requirement for a key to be added there is that there must be at least one trust path from Linus Torvalds' key to this key of length at most 5 within that keyring.
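To make that length-at-most-5 rule concrete, here is a minimal sketch of a breadth-first search over the signature graph, assuming the keyring has already been parsed into a mapping from each key to the set of keys it has signed. This is an illustration only, not the actual korg-pgpkeys tooling, and the fingerprints in the example are made up.

from collections import deque

def has_trust_path(signed_by, root, target, max_len=5):
    """Return True if `target` is reachable from `root` via a chain of
    at most `max_len` signatures. `signed_by` maps a key fingerprint to
    the set of fingerprints whose keys it has signed."""
    if target == root:
        return True
    seen = {root}
    queue = deque([(root, 0)])
    while queue:
        key, depth = queue.popleft()
        if depth == max_len:
            continue  # adding another hop would exceed the limit
        for nxt in signed_by.get(key, ()):
            if nxt == target:
                return True
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return False

# Hypothetical fingerprints, for illustration only.
graph = {"LINUS": {"KEY-A"}, "KEY-A": {"KEY-B"}, "KEY-B": {"KEY-C"}}
print(has_trust_path(graph, "LINUS", "KEY-C"))  # True: path of length 3
print(has_trust_path(graph, "LINUS", "KEY-D"))  # False: no path at all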

Occasionally it happens that a key loses its trust paths because someone in these paths replaced their key, or keys expired. Currently this affects 2 keys.

However there is a problem on the horizon: GnuPG 2.4.x started to reject third-party key signatures using the SHA-1 hash algorithm. In general that's good: SHA-1 hasn't been considered secure for more than 20 years. This doesn't directly affect the kernel-pgpkeys repo, because the trust path checking doesn't rely on GnuPG trusting the signatures; there is a dedicated tool that parses the keyring contents and currently accepts signatures using SHA-1. Also, signatures are not usually thrown away, but there are exceptions: recently Theodore Ts'o asked to update his certificate. When Konstantin imported the updated certificate, GnuPG's "cleaning" was applied, which dropped all SHA-1 signatures. So Theodore Ts'o's key lost 168 signatures, among them one by Linus Torvalds on his primary UID.

That made me wonder what would be the effect on the web of trust if all SHA-1 signatures were dropped. Here are the facts:

  • There are 7976 signatures tracked in the korg-pgpkeys repo that are considered valid, 6045 of them use SHA-1.

  • Only considering the primary UID, Linus Torvalds directly signed 40 public keys, 38 of these using SHA-1. One of the two keys that is still "properly" signed doesn't sign any other key. So nearly all trust paths go through a single key.

  • When not considering SHA-1 signatures there are 485 public keys without a trust path from Linus Torvalds of length 5 or less. So today these 485 public keys would not qualify to be added to the pgpkeys git repository. Among the people being dropped are Andrew Morton, Greg Kroah-Hartman, H. Peter Anvin, Ingo Molnar, Junio C Hamano, Konstantin Ryabitsev, Peter Zijlstra, Stephen Rothwell and Thomas Gleixner.

  • The size of the kernel strong set is reduced from 358 to 94.

If you attend Embedded Recipes 2025 next week, there is an opportunity to improve the situation: Together with Ahmad Fatoum I'm organizing a keysigning session. If you want to participate, send your public key to er2025-keysigning@baylibre.com before 2025-05-12 08:00 UTC.

Planet DebianDirk Eddelbuettel: RcppSpdlog 0.0.22 on CRAN: New Upstream

Version 0.0.22 of RcppSpdlog arrived on CRAN today and has been uploaded to Debian. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.

This release updates the code to version 1.15.3 of spdlog, which was released this morning, and includes version 11.2.0 of fmt.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.22 (2025-05-09)

  • Upgraded to upstream release spdlog 1.15.3 (including fmt 11.2.0)

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Planet DebianAbhijith PA: Bug squashing party, Kochi

Last weekend, 4 people (3 DDs and 1 soon-to-be, hopefully in the coming months) sat together for a Bug Squashing Party in Kochi. We fixed a lot of things, including my broken autopkgtest setup.

BSP-Kochi

It all began with a discussion in #debian-in about never having had any BSPs in India, which then twisted into me hosting one. I fixed the dates to 3rd & 4th May so that packages could migrate naturally to testing with NMUs before the hard freeze on 15th May.

Finding a venue was a huge challenge. Unlike other places, we have very limited options for hackerspaces. We also had some company spaces (if we asked), but we might have had to follow their office timings, and finding accommodation nearby was also a challenge.

Later we decided to go with a rental apartment where we could hack all night and sleep. We booked a very bare-minimum apartment for 3 nights and 3 days. I updated the wiki page and sent the announcement.

The apartment didn't even have Wi-Fi, so we set everything up by ourselves (DebConf style :p ). I shortlisted some newbie bugs, just in case newcomers joined the party. But in the end it was only the 4 of us, plus Kathara who joined remotely.

We started on the night of May 2nd and stocked our cabin with snacks, instant noodles and drinks. We arranged beds and tables and started hacking and having discussions. My autopkgtest-lxc setup was broken; I think it's related to #1017753, which got fixed magically, and I have now started using autopkgtest-podman.

stack

I learned

  • reportbug tool can use its own SMTP server by default
  • autoremovals can be extended if we ping the bug report.

On the last day, we went to a nice restaurant and had food. There was a church festival nearby, so we were able to watch a wonderful procession and fireworks at night.

food

All in all we managed to touch 46 bugs, of which 35 are now fixed/done and 11 are open; some of these will get marked done once the fixes reach testing. It was a fun and productive weekend. More importantly, we had fun.

Worse Than FailureError'd: Cuts Like a Knife

Mike V. shares a personal experience with the broadest version of Poe's Law: "Slashdot articles generally have a humorous quote at the bottom of their articles, but I can't tell if this displayed usage information for the fortune command, which provides humorous quotes, is a joke or a bug." To which I respond with the sharpest version of Hanlon's Razor: never ascribe to intent that which can adequately be explained by incompetence.


Secure in his stronghold, Stewart snarks "Apparently my router is vulnerable because it is connected to the internet. So glad I pay for the premium security service."


The Beast in Black is back with more dross, asking "Oh GitLab, you so silly - y u no give proper reason?"


An anonymous reader writes "I got this when I tried to calculate the shipping costs for buying the Playdate game device. Sorry, I don't have anything snarky to say, please make something up." The comments section is open for your contributions.


Ben S. looking for logic in all the wrong places, wonders "This chart from my electric utility's charitable giving program kept my alumni group guessing all day. The arithmetic checks out, but what does the gray represent, and why is the third chart at a different scale?"


[Advertisement] Plan Your .NET 9 Migration with Confidence
Your journey to .NET 9 is more than just one decision.Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

365 TomorrowsThe Price of Silence

Author: Alastair Millar He awoke with a start. Cockpit red with emergency lights. Tried to move. PAIN! Slipped back into darkness. He awoke again; air still red. “Ship?” he whispered. “Yes, captain?” “Need medical help,” he gasped. “Affirmative. Medimechlings dispatched. Your condition is critical. Initiating emergency protocol B6. Distress beacon activated. Transponder check, affirmative, active. […]

The post The Price of Silence appeared first on 365tomorrows.

xkcdPascal's Law

Planet DebianReproducible Builds (diffoscope): diffoscope 295 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 295. This version includes the following changes:

[ Chris Lamb ]
* Use --walk over the potentially dangerous --scan argument of zipdetails(1).
  (Closes: reproducible-builds/diffoscope#406)

You find out more by visiting the project homepage.

,

Planet DebianThorsten Alteholz: My Debian Activities in April 2025

Debian LTS

This was my hundred-thirtieth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4145-1] expat security update of one CVE related to a crash within XML_ResumeParser() because XML_StopParser() can stop/suspend an unstarted parser.
  • [DLA 4146-1] libxml2 security update to fix two CVEs related to an out-of-bounds memory access in the Python API and a heap-buffer-overflow.
  • [debdiff] sent libxml2 debdiff to maintainer for update of two CVEs in Bookworm.
  • [debdiff] sent libxml2 debdiff to maintainer for update of two CVEs in Unstable.

This month I did a week of FD duties. I also started to work on libxmltok. Adrian suggested also checking the CVEs that might affect the embedded version of expat. Unfortunately that is quite a bunch of CVEs to check, and the month ended before the upload; I hope to finish this in May. Last but not least I continued to work on the second batch of fixes for suricata CVEs.

Debian ELTS

This month was the eighty-first ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1411-1] expat security update to fix one CVE in Stretch and Buster related to a crash within XML_ResumeParser() because XML_StopParser() can stop/suspend an unstarted parser.
  • [ELA-1412-1] libxml2 security update to fix two CVEs in Jessie, Stretch and Buster related to an out-of-bounds memory access in the Python API and a heap-buffer-overflow.

This month I did a week of FD duties.
I also started to work on libxmltok. Normally I work on machines running Bullseye or Bookworm. As the Stretch version of libxmltok needs debhelper version 5, which is no longer supported on Bullseye, I had to create a separate Buster VM. Yes, Stretch is becoming old. As with LTS, I also need to check the CVEs that might affect the embedded version of expat.
Last but not least I started to work on the second batch of fixes for suricata CVEs.

Debian Printing

This month I uploaded new packages or new upstream or bugfix versions of:

This work is generously funded by Freexian!

misc

This month I uploaded new packages or new upstream or bugfix versions of:

bottlerocket was my first upload via debusine. It is a really cool tool and I can only recommend that everybody give it at least a try.
I finally filed an RM bug for siggen. I don’t think that fixing all the gcc-14 issues is really worth the hassle.


FTP master

This month I accepted 307 and rejected 55 packages. The overall number of packages that got accepted was 308.

Worse Than FailureCodeSOD: Leap to the Past

Early in my career, I had the misfortune of doing a lot of Crystal Reports work. Crystal Reports is another one of those tools that lets non-developer, non-database savvy folks craft reports. Which, like so often happens, means that the users dig themselves incredible holes and need professional help to get back out, because at the end of the day, when the root problem is actually complicated, all the helpful GUI tools in the world can't solve it for you.

Michael was in a similar position as I was, but for Michael, there was a five alarm fire. It was the end of the month, and a bunch of monthly sales reports needed to be calculated. One of the big things management expected to see was a year-over-year delta on sales, and they got real cranky if the line didn't go up. If they couldn't even see the line, they went into a full on panic and assumed the sales team was floundering and the company was on the verge of collapse.

Unfortunately, the report was spitting out an error: "A day number must be between 1 and the number of days in the month."

Michael dug in, and found this "delight" inside of a function called one_year_ago:


Local StringVar yearStr  := Left({?ReportToDate}, 4);
Local StringVar monthStr := Mid({?ReportToDate}, 5, 2); 
Local StringVar dayStr   := Mid({?ReportToDate}, 7, 2);
Local StringVar hourStr  := Mid({?ReportToDate}, 9, 2);
Local StringVar minStr   := Mid({?ReportToDate}, 11, 2);
Local StringVar secStr   := Mid({?ReportToDate}, 13, 2);
Local NumberVar LastYear;

LastYear := ToNumber(YearStr) - 1;
YearStr := Replace (toText(LastYear),'.00' , '' );
YearStr := Replace (YearStr,',' , '' );

//DateTime(year, month, day, hour, min, sec);
//Year + Month + Day + Hour + min + sec;  // string value
DateTime(ToNumber(YearStr), ToNumber(MonthStr), ToNumber(dayStr), ToNumber(HourStr), ToNumber(MinStr),ToNumber(SecStr) );

We've all seen string munging in date handling before. That's not surprising. But what's notable about this one is the day on which it started failing. As stated, it was at the end of the month. But which month? February. Specifically, February 2024, a leap year. Since they do nothing to adjust the dayStr when constructing the date, they were attempting to construct a date for 29-FEB-2023, which is not a valid date.

Michael writes:

Yes, it's Crystal Reports, but surprisingly, not having date manipulation functions isn't amongst its many, many flaws. It's something I did in a past life, isn't it??

The fix was easy enough: rewrite the function to actually use date handling. This made for a simpler, basically one-line function using Crystal's built-in functions. That fixed this particular date handling bug, but there were plenty more places where this kind of hand-grown string munging happened, and plenty more opportunities for the report to fail.
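The actual one-liner lived in Crystal's own formula language and isn't reproduced here. As a language-neutral sketch of the same idea, the Python below assumes the report date arrives as the same yyyyMMddHHmmss string the original code sliced apart, and clamps February 29 to February 28 when the previous year has no leap day.

from datetime import datetime
import calendar

def one_year_ago(report_to_date: str) -> datetime:
    """Parse a yyyyMMddHHmmss string and step back one year, clamping the
    day to the last valid day of that month in the target year."""
    dt = datetime.strptime(report_to_date, "%Y%m%d%H%M%S")
    year = dt.year - 1
    day = min(dt.day, calendar.monthrange(year, dt.month)[1])
    return dt.replace(year=year, day=day)

print(one_year_ago("20240229120000"))  # 2023-02-28 12:00:00 instead of an error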

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsTsoukal’s Imperative

Author: Hillary Lyon The tall lean figure stood before the honeycombed wall, searching the triangular nooks until he located the scrolls for engineering marvels. Tsoukal pulled out the uppermost scroll and unrolled it on the polished stone slab behind him. He placed a slim rectangular weight on each end of the scroll to hold it […]

The post Tsoukal’s Imperative appeared first on 365tomorrows.

Cryptogram Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

,

Krebs on SecurityPakistani Firm Shipped Fentanyl Analogs, Scams to US

A Texas firm recently charged with conspiring to distribute synthetic opioids in the United States is at the center of a vast network of companies in the U.S. and Pakistan whose employees are accused of using online ads to scam westerners seeking help with trademarks, book writing, mobile app development and logo designs, a new investigation reveals.

In an indictment (PDF) unsealed last month, the U.S. Department of Justice said Dallas-based eWorldTrade “operated an online business-to-business marketplace that facilitated the distribution of synthetic opioids such as isotonitazene and carfentanyl, both significantly more potent than fentanyl.”

Launched in 2017, eWorldTrade[.]com now features a seizure notice from the DOJ. eWorldTrade operated as a wholesale seller of consumer goods, including clothes, machinery, chemicals, automobiles and appliances. The DOJ’s indictment includes no additional details about eWorldTrade’s business, origins or other activity, and at first glance the website might appear to be a legitimate e-commerce platform that also just happened to sell some restricted chemicals.

A screenshot of the eWorldTrade homepage on March 25, 2025. Image: archive.org.

However, an investigation into the company’s founders reveals they are connected to a sprawling network of websites that have a history of extortionate scams involving trademark registration, book publishing, exam preparation, and the design of logos, mobile applications and websites.

Records from the U.S. Patent and Trademark Office (USPTO) show the eWorldTrade mark is owned by an Azneem Bilwani in Karachi (this name also is in the registration records for the now-seized eWorldTrade domain). Mr. Bilwani is perhaps better known as the director of the Pakistan-based IT provider Abtach Ltd., which has been singled out by the USPTO and Google for operating trademark registration scams (the main offices for eWorldtrade and Abtach share the same address in Pakistan).

In November 2021, the USPTO accused Abtach of perpetrating “an egregious scheme to deceive and defraud applicants for federal trademark registrations by improperly altering official USPTO correspondence, overcharging application filing fees, misappropriating the USPTO’s trademarks, and impersonating the USPTO.”

Abtach offered trademark registration at suspiciously low prices compared to legitimate costs of over USD $1,500, and claimed they could register a trademark in 24 hours. Abtach reportedly rebranded to Intersys Limited after the USPTO banned Abtach from filing any more trademark applications.

In a note published to its LinkedIn profile, Intersys Ltd. asserted last year that certain scam firms in Karachi were impersonating the company.

FROM AXACT TO ABTACH

Many of Abtach’s employees are former associates of a similar company in Pakistan called Axact that was targeted by Pakistani authorities in a 2015 fraud investigation. Axact came under law enforcement scrutiny after The New York Times ran a front-page story about the company’s most lucrative scam business: Hundreds of sites peddling fake college degrees and diplomas.

People who purchased fake certifications were subsequently blackmailed by Axact employees posing as government officials, who would demand additional payments under threats of prosecution or imprisonment for having bought fraudulent “unauthorized” academic degrees. This practice created a continuous cycle of extortion, internally referred to as “upselling.”

“Axact took money from at least 215,000 people in 197 countries — one-third of them from the United States,” The Times reported. “Sales agents wielded threats and false promises and impersonated government officials, earning the company at least $89 million in its final year of operation.”

Dozens of top Axact employees were arrested, jailed, held for months, tried and sentenced to seven years for various fraud violations. But a 2019 research brief on Axact’s diploma mills found none of those convicted had started their prison sentence, and that several had fled Pakistan and never returned.

“In October 2016, a Pakistan district judge acquitted 24 Axact officials at trial due to ‘not enough evidence’ and then later admitted he had accepted a bribe (of $35,209) from Axact,” reads a history (PDF) published by the American Association of Collegiate Registrars and Admissions Officers.

In 2021, Pakistan’s Federal Investigation Agency (FIA) charged Bilwani and nearly four dozen others — many of them Abtach employees — with running an elaborate trademark scam. The authorities called it “the biggest money laundering case in the history of Pakistan,” and named a number of businesses based in Texas that allegedly helped move the proceeds of cybercrime.

A page from the March 2021 FIA report alleging that Digitonics Labs and Abtach employees conspired to extort and defraud consumers.

The FIA said the defendants operated a large number of websites offering low-cost trademark services to customers, before then “ignoring them after getting the funds and later demanding more funds from clients/victims in the name of up-sale (extortion).” The Pakistani law enforcement agency said that about 75 percent of customers received fake or fabricated trademarks as a result of the scams.

The FIA found Abtach operates in conjunction with a Karachi firm called Digitonics Labs, which earned a monthly revenue of around $2.5 million through the “extortion of international clients in the name of up-selling, the sale of fake/fabricated USPTO certificates, and the maintaining of phishing websites.”

According to the Pakistani authorities, the accused also ran countless scams involving ebook publication and logo creation, wherein customers are subjected to advance-fee fraud and extortion — with the scammers demanding more money for supposed “copyright release” and threatening to release the trademark.

Also charged by the FIA was Junaid Mansoor, the owner of Digitonics Labs in Karachi. Mansoor’s U.K.-registered company Maple Solutions Direct Limited has run at least 700 ads for logo design websites since 2015, the Google Ads Transparency page reports. The company has approximately 88 ads running on Google as of today. 

Junaid Mansoor. Source: youtube/@Olevels․com School.

Mr. Mansoor is actively involved with and promoting a Quran study business called quranmasteronline[.]com, which was founded by Junaid’s brother Qasim Mansoor (Qasim is also named in the FIA criminal investigation). The Google ads promoting quranmasteronline[.]com were paid for by the same account advertising a number of scam websites selling logo and web design services. 

Junaid Mansoor did not respond to requests for comment. An address in Teaneck, New Jersey where Mr. Mansoor previously lived is listed as an official address of exporthub[.]com, a Pakistan-based e-commerce website that appears remarkably similar to eWorldTrade (Exporthub says its offices are in Texas). Interestingly, a search in Google for this domain shows ExportHub currently features multiple listings for fentanyl citrate from suppliers in China and elsewhere.

The CEO of Digitonics Labs is Muhammad Burhan Mirza, a former Axact official who was arrested by the FIA as part of its money laundering and trademark fraud investigation in 2021. In 2023, prosecutors in Pakistan charged Mirza, Mansoor and 14 other Digitonics employees with fraud, impersonating government officials, phishing, cheating and extortion. Mirza’s LinkedIn profile says he currently runs an educational technology/life coach enterprise called TheCoach360, which purports to help young kids “achieve financial independence.”

Reached via LinkedIn, Mr. Mirza denied having anything to do with eWorldTrade or any of its sister companies in Texas.

“Moreover, I have no knowledge as to the companies you have mentioned,” said Mr. Mirza, who did not respond to follow-up questions.

The current disposition of the FIA’s fraud case against the defendants is unclear. The investigation was marred early on by allegations of corruption and bribery. In 2021, Pakistani authorities alleged Bilwani paid a six-figure bribe to FIA investigators. Meanwhile, attorneys for Mr. Bilwani have argued that although their client did pay a bribe, the payment was solicited by government officials. Mr. Bilwani did not respond to requests for comment.

THE TEXAS NEXUS

KrebsOnSecurity has learned that the people and entities at the center of the FIA investigations have built a significant presence in the United States, with a strong concentration in Texas. The Texas businesses promote websites that sell logo and web design, ghostwriting, and academic cheating services. Many of these entities have recently been sued for fraud and breach of contract by angry former customers, who claimed the companies relentlessly upsold them while failing to produce the work as promised.

For example, the FIA complaints named Retrocube LLC and 360 Digital Marketing LLC, two entities that share a street address with eWorldTrade: 1910 Pacific Avenue, Suite 8025, Dallas, Texas. Also incorporated at that Pacific Avenue address is abtach[.]ae, a web design and marketing firm based in Dubai; and intersyslimited[.]com, the new name of Abtach after they were banned by the USPTO. Other businesses registered at this address market services for logo design, mobile app development, and ghostwriting.

A list published in 2021 by Pakistan’s FIA of different front companies allegedly involved in scamming people who are looking for help with trademarks, ghostwriting, logos and web design.

360 Digital Marketing’s website 360digimarketing[.]com is owned by an Abtach front company called Abtech LTD. Meanwhile, business records show 360 Digi Marketing LTD is a U.K. company whose officers include former Abtach director Bilwani; Muhammad Saad Iqbal, formerly Abtach, now CEO of Intersys Ltd; Niaz Ahmed, a former Abtach associate; and Muhammad Salman Yousuf, formerly a vice president at Axact, Abtach, and Digitonics Labs.

Google’s Ads Transparency Center finds 360 Digital Marketing LLC ran at least 500 ads promoting various websites selling ghostwriting services. Another entity tied to Junaid Mansoor — a company called Octa Group Technologies AU — has run approximately 300 Google ads for book publishing services, promoting confusingly named websites like amazonlistinghub[.]com and barnesnoblepublishing[.]co.

360 Digital Marketing LLC ran approximately 500 ads for scam ghostwriting sites.

Rameez Moiz is a Texas resident and former Abtach product manager who has represented 360 Digital Marketing LLC and RetroCube. Moiz told KrebsOnSecurity he stopped working for 360 Digital Marketing in the summer of 2023. Mr. Moiz did not respond to follow-up questions, but an Upwork profile for him states that as of April 2025 he is employed by Dallas-based Vertical Minds LLC.

In April 2025, California resident Melinda Will sued the Texas firm Majestic Ghostwriting — which is doing business as ghostwritingsquad[.]com —  alleging they scammed her out of $100,000 after she hired them to help write her book. Google’s ad transparency page shows Moiz’s employer Vertical Minds LLC paid to run approximately 55 ads for ghostwritingsquad[.]com and related sites.

Google’s ad transparency listing for ghostwriting ads paid for by Vertical Minds LLC.

VICTIMS SPEAK OUT

Ms. Will’s lawsuit is just one of more than two dozen complaints over the past four years wherein plaintiffs sued one of this group’s web design, wiki editing or ghostwriting services. In 2021, a New Jersey man sued Octagroup Technologies, alleging they ripped him off when he paid a total of more than $26,000 for the design and marketing of a web-based mapping service.

The plaintiff in that case did not respond to requests for comment, but his complaint alleges Octagroup and a myriad other companies it contracted with produced minimal work product despite subjecting him to relentless upselling. That case was decided in favor of the plaintiff because the defendants never contested the matter in court.

In 2023, 360 Digital Marketing LLC and Retrocube LLC were sued by a woman who said they scammed her out of $40,000 over a book she wanted help writing. That lawsuit helpfully showed an image of the office front door at 1910 Pacific Ave Suite 8025, which featured the logos of 360 Digital Marketing, Retrocube, and eWorldTrade.

The front door at 1910 Pacific Avenue, Suite 8025, Dallas, Texas.

The lawsuit was filed pro se by Leigh Riley, a 64-year-old career IT professional who paid 360 Digital Marketing to have a company called Talented Ghostwriter co-author and promote a series of books she’d outlined on spirituality and healing.

“The main reason I hired them was because I didn’t understand what I call the formula for writing a book, and I know there’s a lot of marketing that goes into publishing,” Riley explained in an interview. “I know nothing about that stuff, and these guys were convincing that they could handle all aspects of it. Until I discovered they couldn’t write a damn sentence in English properly.”

Riley’s well-documented lawsuit (not linked here because it features a great deal of personal information) includes screenshots of conversations with the ghostwriting team, which was constantly assigning her to new writers and editors, and ghosting her on scheduled conference calls about progress on the project. Riley said she ended up writing most of the book herself because the work they produced was unusable.

“Finally after months of promising the books were printed and on their way, they show up at my doorstep with the wrong title on the book,” Riley said. When she demanded her money back, she said the people helping her with the website to promote the book locked her out of the site.

A conversation snippet from Leigh Riley’s lawsuit against Talented Ghostwriter, aka 360 Digital Marketing LLC. “Other companies once they have you money they don’t even respond or do anything,” the ghostwriting team manager explained.

Riley decided to sue, naming 360 Digital Marketing LLC and Retrocube LLC, among others.  The companies offered to settle the matter for $20,000, which she accepted. “I didn’t have money to hire a lawyer, and I figured it was time to cut my losses,” she said.

Riley said she could have saved herself a great deal of headache by doing some basic research on Talented Ghostwriter, whose website claims the company is based in Los Angeles. According to the California Secretary of State, however, there is no registered entity by that name. Rather, the address claimed by talentedghostwriter[.]com is a vacant office building with a “space available” sign in the window.

California resident Walter Horsting discovered something similar when he sued 360 Digital Marketing in small claims court last year, after hiring a company called Vox Ghostwriting to help write, edit and promote a spy novel he’d been working on. Horsting said he paid Vox $3,300 to ghostwrite a 280-page book, and was upsold an Amazon marketing and publishing package for $7,500.

In an interview, Horsting said the prose that Vox Ghostwriting produced was “juvenile at best,” forcing him to rewrite and edit the work himself, and to partner with a graphical artist to produce illustrations. Horsting said that when it came time to begin marketing the novel, Vox Ghostwriting tried to further upsell him on marketing packages, while dodging scheduled meetings with no follow-up.

“They have a money back guarantee, and when they wouldn’t refund my money I said I’m taking you to court,” Horsting recounted. “I tried to serve them in Los Angeles but found no such office exists. I talked to a salon next door and they said someone else had recently shown up desperately looking for where the ghostwriting company went, and it appears there are a trail of corpses on this. I finally tracked down where they are in Texas.”

It was the same office that Ms. Riley served her lawsuit against. Horsting said he has a court hearing scheduled later this month, but he’s under no illusions that winning the case means he’ll be able to collect.

“At this point, I’m doing it out of pride more than actually expecting anything to come to good fortune for me,” he said.

The following mind map was helpful in piecing together key events, individuals and connections mentioned above. It’s important to note that this graphic only scratches the surface of the operations tied to this group. For example, in Case 2 we can see mention of academic cheating services, wherein people can be hired to take online proctored exams on one’s behalf. Those who hire these services soon find themselves subject to impersonation and blackmail attempts for larger and larger sums of money, with the threat of publicly exposing their unethical academic cheating activity.

A “mind map” illustrating the connections between and among entities referenced in this story. Click to enlarge.

GOOGLE RESPONDS

KrebsOnSecurity reviewed the Google Ad Transparency links for nearly 500 different websites tied to this network of ghostwriting, logo, app and web development businesses. Those website names were then fed into spyfu.com, a competitive intelligence company that tracks the reach and performance of advertising keywords. Spyfu estimates that between April 2023 and April 2025, those websites spent more than $10 million on Google ads.

Reached for comment, Google said in a written statement that it is constantly policing its ad network for bad actors, pointing to an ads safety report (PDF) showing Google blocked or removed 5.1 billion bad ads last year — including more than 500 million ads related to trademarks.

“Our policy against Enabling Dishonest Behavior prohibits products or services that help users mislead others, including ads for paper-writing or exam-taking services,” the statement reads. “When we identify ads or advertisers that violate our policies, we take action, including by suspending advertiser accounts, disapproving ads, and restricting ads to specific domains when appropriate.”

Google did not respond to specific questions about the advertising entities mentioned in this story, saying only that “we are actively investigating this matter and addressing any policy violations, including suspending advertiser accounts when appropriate.”

From reviewing the ad accounts that have been promoting these scam websites, it appears Google has very recently acted to remove a large number of the offending ads. Prior to my notifying Google about the extent of this ad network on April 28, the Google Ad Transparency network listed over 500 ads for 360 Digital Marketing; as of this publication, that number had dwindled to 10.

On April 30, Google announced that starting this month its ads transparency page will display the payment profile name as the payer name for verified advertisers, if that name differs from their verified advertiser name. Searchengineland.com writes the changes are aimed at increasing accountability in digital advertising.

This spreadsheet lists the domain names, advertiser names, and Google Ad Transparency links for more than 350 entities offering ghostwriting, publishing, web design and academic cheating services.

KrebsOnSecurity would like to thank the anonymous security researcher NatInfoSec for their assistance in this investigation.

For further reading on Abtach and its myriad companies in all of the above-mentioned verticals (ghostwriting, logo design, etc.), see this Wikiwand entry.

LongNowLong Science in the Nevada Bristlecone Preserve


It was at the invitation of The Long Now Foundation that I visited Mount Washington for the first time as a graduate student. Camping out the first night on the mountain with my kind and curious Long Now friends, I could sense that the experience was potentially transformative — that this place, and this community, had together created a kind of magic. The next morning, we packed up our caravan of cars and made our way up the mountain. I tracked the change in elevation out the car window by observing how the landscape changed from sagebrush to pinyon and juniper trees, to manzanita and mixed conifer, and finally to the ancient bristlecone pines. As we rose, the view of the expansive Great Basin landscape grew below us. It was then that I knew I had to be a part of the community stewarding this incredibly meaningful place. 

I’d entered graduate school following an earlier life working on long-term environmental monitoring networks across the U.S. and Latin America, and was attracted to the mountain’s established research network. My early experiences and relationships with other researchers had planted the seeds of appreciation for research which takes the long view of the world around us. Now, as a research professor at the Desert Research Institute (DRI) and a Long Now Research Fellow, I’m helping to launch a new scientific legacy in the Nevada Bristlecone Preserve. Of course, no scientific legacy is entirely new. My work compiling the first decade of observational climate data builds on decades of research in order to help carry it into the future — one link in a long line of scientists who have made my work possible. Science works much like an ecosystem, with different disciplines interweaving to help tell the story of the whole. Each project and scientist builds on the successes of the past. 

Unfortunately, the realities of short-term funding don’t often align with a long-term vision for research. Scientists hoping to answer big questions often find it challenging to identify funding that will support a project beyond two to three years, making it difficult to sustain the long-term research that helps illuminate changes in landscapes over time. This reality highlights the value of partnering with The Long Now Foundation. Their support is helping me carry valuable research into the future to understand how rare ecosystems in one of the least-monitored regions in the country are adapting to a warming world. 

The Nevada Bristlecone Preserve stretches across the high reaches of Mount Washington on the far eastern edge of Nevada. Growing where nearly nothing else can, the bristlecone pines (Pinus longaeva) that lend the preserve its name have a gnarled, twisted look to them, and wood so dense that it helps protect the tree from rot and disease. Trees in this grove are known to be nearly 5,000 years old, making them among the oldest living trees in the world. Because of the way trees radiate from their center as they grow, adding one ring essentially every year, scientists can gain glimpses of the past by studying their cores. Counting backward in time, we can visualize years with plentiful water and sunlight for growth as thicker, denser lines indicating a higher growth rate. Trees this old provide a nearly unprecedented time capsule of the climate that produced them, helping us to understand how today’s world differs from the one of our ancestors. 

This insight has always been valuable but is becoming even more critical as we face increasing temperatures outside the realm of what much of modern life has adapted to. My research aims to provide a nearly microscopic look at how the climate in the Great Basin is changing, from hour to hour and season to season. With scientific monitoring equipment positioned from the floor of the Great Basin’s Spring Valley up to the peak of Mount Washington, our project examines temperature fluctuations, atmospheric information, and snowpack insights across the region’s ecosystems by collecting data every 10 minutes. Named the Nevada Climate-Ecohydrological Assessment Network, or NevCAN, the research effort is now in its second decade. First established in part by my predecessors at DRI along with other colleagues from the Nevada System of Higher Education, the project offers a wealth of valuable climate monitoring information that can contribute to insights across scientific disciplines. 

Thanks to the foresight of the scientists who came before me, the data collected provides insight across ecosystems, winding from the valley floor’s sagebrush landscape to Mount Washington’s mid-elevation pinyon-juniper woodlands, to the higher elevation bristlecone pine grove, before winding down the mountain’s other side. The data from Mount Washington can be compared to a similar set of monitoring equipment set up across the Sheep Range just north of Las Vegas. Here, the lowest elevation stations sit in the Mojave Desert, among sprawling creosote-brush and Joshua trees, before climbing up into mid-elevation pinyon-juniper forests and high elevation ponderosa pine groves. 

Having over 10 years of data from the Nevada Bristlecone Preserve allows us to zoom in and out on the environmental processes that shape the mountain. Through this research, we’ve been able to ask questions that span timelines, from the 10-minute level of our data collection to the 5,000-year-old trees to the epochal age of the rocks and soil underlying the mountain. We can look at rapid environmental changes during sunrise and sunset or during the approach and onset of a quick thunderstorm. And we can zoom out to understand the climatology by looking at trends in changes in precipitation and temperature that impact the ecosystems. 

Scientists use data to identify stories in the world around us. Data can show us temperature swings of more than 50 degrees Fahrenheit in just 10 minutes with the onset of a dark and cold thunderstorm in the middle of August. We can observe the impacts of the nightly down-sloping winds that drive the coldest air to the bottom of the valley, helping us understand why the pinyon and juniper trees are growing at higher elevation, where it’s counterintuitively warmer. These first 10 years of data allow us to look at air temperature and precipitation trends, and the next 20 years of data will help us uncover some of the more long-term climatological changes occurring on the mountain. All the while, the ancient bristlecone pines have been collecting data for us over centuries — and millennia — in their tree rings. 

The type of research we’re doing with NevCAN facilitates scientific discovery that crosses the traditional boundaries of academic disciplines. The scientists who founded the program understood that the data collected on Mount Washington would be valuable to a range of researchers in different fields and intentionally brought these scientists together to create a project with foresight and long-term value to the scientific community. Building interdisciplinary teams to do this kind of science means that we can cross sectors to identify drivers of change. This mode of thinking acknowledges that the atmosphere impacts the weather, which drives rain, snow, drought, and fire risk. It acknowledges that as the snowpack melts or the monsoonal rains fall, the hydrologic response feeds streams, causes erosion, and regenerates groundwater. The atmospheric and hydrological cycles impact the ecosystem, driving elevational shifts in species, plant die-offs, or the generation of new growth after a fire. 

💡
To learn more about long-term science at Mount Washington, read Scotty Strachan's 02019 essay on Mountain Observatories and a Return to Environmental Long Science and former Long Now Director of Operations Laura Welcher's 02019 essay on The Long Now Foundation and a Great Basin Mountain Observatory for Long Science.

To really understand the mountain, we need everyone’s expertise: atmospheric scientists, hydrologists, ecologists, dendrochronologists, and even computer scientists and engineers to make sure we can get the data back to our collective offices to make meaning of it all. This kind of interdisciplinary science offers the opportunity to learn more about the intersection of scientific studies — a sometimes messy process that reflects the reality of how nature operates. 

Conducting long-term research like NevCAN is challenging for a number of reasons beyond finding sustainable funding, but the return is much greater than the sum of its parts. In order to create continuity between researchers over the years, the project team needs to identify future champions to pass the baton to, and systems that can preserve all the knowledge acquired. Over the years, the project’s technical knowledge, historical context, and stories of fire, wildlife, avalanches, and erosion continue to grow. Finding a cohesive team of dedicated people who are willing to be a single part of something bigger takes time, but the trust fostered within the group enables us to answer thorny and complex questions about the fundamental processes shaping our landscape.  

Being a Long Now Research Fellow funded by The Long Now Foundation has given me the privilege of being a steward of this mountain and of the data that facilitates this scientific discovery. This incredible opportunity allows me to be a part of something larger than myself and something that will endure beyond my tenure. It means that I get to be a mentee of some of the skilled stewards before me and a mentor to the next generation. In this way we are all connected to each other and to the mountain. We connect with each other by untangling difficult scientific questions; we connect with the mountain by spending long days traveling, camping, and experiencing the mountain from season to season; and we connect with the philosophy of The Long Now Foundation by fostering a deep appreciation for thinking on timescales that surpass human lifetimes. 


Setting up Alicia Eggert’s art exhibition on the top of Mt Washington. Photo by Anne Heggli.

To learn more about Anne’s work, read A New Tool Can Help Protect California and Nevada Communities from Floods While Preserving Their Water Supply on DRI’s website. 

This essay was written in collaboration with Elyse DeFranco, DRI’s Lead Science Writer. 

Planet DebianJonathan Dowland: procmail versus exim filters

I’ve been using Procmail to filter mail for a long time. Reading Antoine’s blog post procmail considered harmful, I felt motivated (and shamed) into migrating to something else. Luckily, Enrico's shared a detailed roadmap for moving to Sieve, in particular Dovecot's Sieve implementation (which provides "pipe" and "filter" extensions).

My MTA is Exim, and for my first foray into this, I didn't want to change that1. Exim provides two filtering languages for users: an implementation of Sieve, and its own filter language.

Requirements

A good first step is to look at what I'm using Procmail for:

  1. I invoke external mail filters: processes which read the mail and emit a possibly altered mail (headers added, etc.). In particular, crm114 (which has worked remarkably well for me) to classify mail as spam or not, and dsafilter, to mark up Debian Security Advisories

  2. I file messages into different folders depending on the outcome of the above filters

  3. I drop mail ("killfile") from some sender addresses (persistent pests on mailing lists); from mails containing certain hosts in the References header (as an imperfect way of dropping mailing list threads which are replies to someone I've killfiled); and from mail encoded in a character set for a language I can't read (Russian, Korean, etc.); and I apply several other simple static rules

  4. I move mailing list mail into folders, semi-automatically (see list filtering)

  5. I strip "tagged" subjects for some mailing lists: i.e., incoming mail has subjects like "[cs-historic-committee] help moving several tons of IBM360", and I don't want the "[cs-historic-committee]" bit.

  6. I file a copy of some messages, the name of which is partly derived from the current calendar year

Exim Filters

I want to continue to do (1), which rules out Exim's implementation of Sieve, which does not support invoking external programs. Exim's own filter language has a pipe function that might do what I need, so let's look at how to achieve the above with Exim Filters.

autolists

Here's an autolist recipe for Debian's mailing lists, in Exim filter language. Contrast with the Procmail in list filtering:

if $header_list-id matches "(debian.*)\.lists\.debian\.org"
then
  save Maildir/l/$1/
  finish
endif

Hands down, the exim filter is nicer (although some of the rules on escape characters in exim filters, not demonstrated here, are byzantine).

killfile

An ideal chunk of configuration for kill-filing a list of addresses is light on boiler plate, and easy to add more addresses to in the future. This is the best I could come up with:

if foranyaddress "someone@example.org,\
                  another@example.net,\
                  especially-bad.example.com,\
                 "
   ($reply_address contains $thisaddress
    or $header_references contains $thisaddress)
then finish endif

I won't bother sharing the equivalent Procmail but it's pretty comparable: the exim filter is no great improvement.

It would be lovely if the list of addresses could be stored elsewhere, such as a simple text file, one line per address, or even a database. Exim's own configuration language (distinct from this filter language) has some nice mechanisms for reading lists of things like addresses from files or databases. Sadly it seems the filter language lacks anything similar.

external filters

With Procmail, I pass the mail to an external program, and then read the output of that program back, as the new content of the mail, which continues to be filtered: subsequent filter rules inspect the headers to see what the outcome of the filter was (is it spam?) and to decide what to do accordingly. Crucially, we also check the return status of the filter, to handle the case when it fails.

With Exim filters, we can use pipe to invoke an external program:

pipe "$home/mail/mailreaver.crm -u $home/mail/"

However, this is not a filter: the mail is sent to the external program, and the exim filter's job is complete. We can't write further filter rules to continue to process the mail: the external program would have to do that; and we have no way of handling errors.

Here's Exim's documentation on what happens when the external command fails:

Most non-zero codes are treated by Exim as indicating a failure of the pipe. This is treated as a delivery failure, causing the message to be returned to its sender.

That is definitely not what I want: if the filter broke (even temporarily), Exim would seemingly generate a bounce to the sender address, which could be anything, and I wouldn't have a copy of the message.

The documentation goes on to say that some shell return codes (defaulting to 73 and 75) cause Exim to treat it as a temporary error, spool the mail and retry later on. That's a much better behaviour for my use-case. Having said that, on the rare occasions I've broken the filter, the thing which made me notice most quickly was spam hitting my inbox, which my Procmail recipe achieves.
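One way to get that temporary-error behaviour is to point pipe at a small wrapper of your own that exits with EX_TEMPFAIL (75) when the external classifier fails, so Exim defers the message rather than bouncing it. The following is only a sketch under stated assumptions: the classifier command, timeout and delivery handling are illustrative, not taken from this setup.

#!/usr/bin/env python3
"""Hypothetical wrapper for an external mail filter invoked via an Exim
filter's `pipe` command: exit 75 (EX_TEMPFAIL) on failure so Exim spools
the message and retries later instead of generating a bounce."""
import subprocess
import sys

EX_TEMPFAIL = 75
# Illustrative command only; substitute the real classifier invocation.
FILTER_CMD = ["/home/user/mail/mailreaver.crm", "-u", "/home/user/mail/"]

def main() -> int:
    message = sys.stdin.buffer.read()
    try:
        result = subprocess.run(FILTER_CMD, input=message,
                                capture_output=True, timeout=120)
    except (OSError, subprocess.TimeoutExpired):
        return EX_TEMPFAIL
    if result.returncode != 0:
        return EX_TEMPFAIL
    # As noted above, Exim does not read the message back from a pipe, so a
    # real wrapper would have to hand result.stdout to a delivery command
    # itself; writing it to stdout here is just for demonstration.
    sys.stdout.buffer.write(result.stdout)
    return 0

if __name__ == "__main__":
    sys.exit(main())

Whether the extra moving part is worth it is debatable, but it avoids the bounce-to-sender failure mode described above.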

removing subject tagging

Here, Exim's filter language comes unstuck. There is no way to add or alter headers for a message in a user filter. Exim uses the same filter language for system-wide message filtering, and in that context it has some extra functions: headers add <string>, headers remove <string>, but (for reasons I don't know) these are not available for user filters.

copy mail to archive folder

I can't see a way to derive a folder name from the calendar year.

next steps

Exim's Sieve implementation and its filter language are ruled out as Procmail replacements because they can't do at least two of the things I need to do.

However, based on Enrico's write-up, it looks like Dovecot's Sieve implementation probably can. I was also recommended maildrop, which I might look at if Dovecot Sieve doesn't pan out.


  1. I should revisit this requirement because I could probably reconfigure exim to run my spam classifier at the system level, obviating the need to do it in a user filter, and also raising the opportunity to do smtp-time rejection based on the outcome

Worse Than FailureEditor's Soapbox: AI: The Bad, the Worse, and the Ugly

…the average American, I think, has fewer than three friends. And the average person has demand for meaningfully more, I think it's like 15 friends or something, right?
- Mark Zuckerberg, presumably to one of his three friends

The link between man and machine with robots excited in the background

Since even the President of the United States is using ChatGPT to cheat on his homework and make bonkers social media posts these days, we need to have a talk about AI.

Right now, AI is being shoe-horned into everything, whether or not it makes sense. To me, it feels like the dotcom boom again. Millipedes.com! Fungus.net! Business plan? What business plan? Just secure the domain names and crank out some Super Bowl ads. We'll be RICH!

In fact, it's not just my feeling. The Large Language Model (LLM) company OpenAI is wildly overvalued and overhyped. It's hard to see how it will generate more revenue while its offerings remain underwhelming and unreliable in so many ways. Hallucination, bias, and other fatal flaws make it a non-starter for businesses like journalism that must have accurate output. Why would anyone convert to a paid plan? Even if there weren't an income problem—even if every customer became a paying customer—generative AI's exorbitant operational and environmental costs are poised to drown whatever revenue and funding they manage to scrape together.

Lest we think the problem is contained to OpenAI or LLMs, there's not a single profitable AI venture out there. And it's largely not helping other companies to be more profitable, either.

A moment like this requires us to step back, take a deep breath. With sober curiosity, we gotta explore and understand AI's true strengths and weaknesses. More importantly, we have to figure out what we are and aren't willing to accept from AI, personally and as a society. We need thoughtful ethics and policies that protect people and the environment. We need strong laws to prevent the worst abuses. Plenty of us have already been victimized by the absence of such. For instance, one of my own short stories was used by Meta without permission to train their AI.

The Worst of AI
Sadly, it is all too easy to find appalling examples of all the ways generative AI is harming us. (For most of these, I'm not going to provide links because they don't deserve the clicks):

  • We all know that person who no longer seems to have a brain of their own because they keep asking OpenAI to do all of their thinking for them.
  • Deepfakes deliberately created to deceive people.
  • Cheating by students.
  • Cheating by giant corporations who are all too happy to ignore IP and copyright when it benefits them (Meta, ahem).
  • Piles and piles of creepy generated content on platforms like YouTube and TikTok that can be wildly inaccurate.
  • Scammy platforms like DataAnnotation, Mindrift, and Outlier that offer $20/hr or more for you to "train their AI." Instead, they simply gather your data and inputs and ghost the vast majority of applicants. I tried taking DataAnnotation's test for myself to see what would happen; after all, it would've been nice to have some supplemental income while job hunting. After several weeks, I still haven't heard back from them.
  • Applicant Tracking Systems (ATS) block job applications from ever reaching a human being for review. As my job search drags on, I feel like my life has been reduced to a tedious slog of keyword matching. Did I use the word "collaboration" somewhere in my resume? Pass. Did I use the word "teamwork" instead? Fail. Did I use the word "collaboration," but the AI failed to detect it, as regularly happens? Fail, fail, fail some more. Frustrated, I and no doubt countless others have been forced to turn to other AIs in hopes of defeating those AIs. While algorithms battle algorithms, companies and unemployed workers are all suffering.
  • Horrific, undeniable environmental destruction.
  • Brace yourself: a 14-year-old killed himself with the encouragement of the chatbot he'd fallen in love with. I can only imagine how many more young people have been harmed and are being actively harmed right now.

The Best of AI?
As AI began to show up everywhere, as seemingly everyone from Google to Apple demanded that I start using it, I initially responded with aversion and resentment. I never bothered with it, and I disabled it wherever I could. When people told me to use it, I waved them off. My life seemed no worse for it.

Alas, now AI completely saturates my days while job searching, bringing on even greater resentment. Thousands of open positions for AI-based startups! Thousands of companies demanding expertise in generative AI as if it's been around for decades. Well, gee, maybe my hatred and aversion are hurting my ability to get hired. Am I being a middle-aged Luddite here? Should I be learning more about AI (and putting it on my resume)? Wouldn't I be the bigger person if I worked past my aversion in order to learn about and highlight some of the ways we can use AI responsibly?

I tried. I really tried. To be honest, I simply haven't found a single positive generative AI use-case that justifies all the harm taking place.

So, What Do We Do?
Here are some thoughts: don't invest in generative AI or seek a job within the field; it's all gonna blow. Lobby your government to investigate abuses, protect people, and preserve the environment. Avoid AI usage and, if you're a writer like me, make clear that AI is not used in any part of your process. Gently encourage that one person you know to start thinking for themselves again.

Most critically of all: wherever AI must be used for the time being, ensure that one or more humans review the results.

[Advertisement] Plan Your .NET 9 Migration with Confidence
Your journey to .NET 9 is more than just one decision.Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

365 TomorrowsThe Final Slice

Author: Colin Jeffrey On some mornings, around eleven, the postman will drop a letter or two into the mail slot. But many of these are not letters – they are coded messages disguised as bills or advertisements. Only I know their secrets. You see, I am a messenger of the gods. Just yesterday, I was […]

The post The Final Slice appeared first on 365tomorrows.

xkcdGlobe Safety

,

Planet DebianEnrico Zini: Python-like abspath for c++

Python's os.path.abspath or Path.absolute are great: you give them a path, which might not exist, and you get a path you can use regardless of the current directory. os.path.abspath will also normalize it, while Path will not by default because with Paths a normal form is less needed.

This is great to normalize input, regardless of if it's an existing file you're needing to open, or a new file you're needing to create.
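A quick illustration of the Python behaviour being described; the example paths are arbitrary and do not need to exist:

import os
from pathlib import Path

# os.path.abspath makes the path absolute and lexically normalizes it,
# even when nothing on the path exists yet.
print(os.path.abspath("new/dir/../file.txt"))    # <cwd>/new/file.txt

# Path.absolute only prepends the current directory; the ".." stays.
print(Path("new/dir/../file.txt").absolute())    # <cwd>/new/dir/../file.txt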

In C++17, there is a filesystem library with methods with enticingly similar names, but which are almost, but not quite, totally unlike Python's abspath.

Because in my C++ code I need to normalize input, regardless of if it's an existing file I'm needing to open or a new file I'm needing to create, here's an apparently working Python-like abspath for C++ implemented on top of the std::filesystem library:

#include <filesystem>
#include <system_error>

std::filesystem::path abspath(const std::filesystem::path& path)
{
    // weakly_canonical is defined as "the result of calling canonical() with a
    // path argument composed of the leading elements of p that exist (as
    // determined by status(p) or status(p, ec)), if any, followed by the
    // elements of p that do not exist."
    //
    // This means that if no lead components of the path exist then the
    // resulting path is not made absolute, and we need to work around that.
    if (!path.is_absolute())
        return abspath(std::filesystem::current_path() / path);

    // This is further and needlessly complicated because we need to work
    // around https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118733
    unsigned retry = 0;
    while (true)
    {
        std::error_code code;
        auto result = std::filesystem::weakly_canonical(path, code);
        if (!code)
        {
            // fprintf(stderr, "%s: ok in %u tries\n", path.c_str(), retry+1);
            return result;
        }

        if (code == std::errc::no_such_file_or_directory)
        {
            ++retry;
            if (retry > 50)
                throw std::system_error(code);
        }
        else
            throw std::system_error(code);
    }

    // Alternative implementation that however may not work on all platforms
    // since, formally, "[std::filesystem::absolute] Implementations are
    // encouraged to not consider p not existing to be an error", but they do
    // not mandate it, and if they did, they might still be affected by the
    // undefined behaviour outlined in https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118733
    //
    // return std::filesystem::absolute(path).lexically_normal();
}
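A minimal usage sketch, assuming the function above is in scope; the path is made up:

int main()
{
    // Works for paths that do not exist yet: the result is absolute
    // and lexically normalized.
    auto p = abspath("build/../output/report.txt");
    return p.is_absolute() ? 0 : 1;
}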

I added it to my wobble code repository, which is the thin repository of components I use to ease my C++ systems programming.

Worse Than FailureCodeSOD: The Big Pictures

Loading times for web pages are one of the key metrics we like to tune. Users will put up with a lot if they feel like the application is responsive. So when Caivs was handed 20MB of PHP and told, "one of the key pages takes like 30-45 seconds to load. Figure out why," it was at least a clear goal.

Combing through that gigantic pile of code to try and understand what was happening was an uphill battle. Eventually, Caivs just decided to check the traffic logs while running the application. That highlighted a huge spike in traffic every time the page loaded, and that helped Caivs narrow down exactly where the problem was.

$first_image = '';
foreach($images as $the_image)
{ 
    $image = $the_image['url'];
 
  if(file_exists($config->base_url.'/uploads/'.$image))
  {
    if($first_image=='')
    {
      $first_image = $image;
    }
   
    $image_dimensions = '&w=648&h=432';
    $get_dimensions = getimagesize('http://old.datacenter.ip.address/'.$config->base_url.'/uploads/'.$image);
    if($get_dimensions[0] < $get_dimensions[1])
      $image_dimensions = '&h=432';

    echo '<li>'.$config->base_url.'/timthumb.php?src='.$config->base_url.'/uploads/'.$image.'&w=125&h=80&zc=1'), 'javascript:;', array('onclick'=>'$(\'.image_gallery .feature .image\').html(\''.$config->base_url.'/timthumb.php?src='.$config->base_url.'/uploads/'.$image.$image_dimensions.'&zc=1').'\');$(\'.image_gallery .feature .title\').show();$(\'.image_gallery .feature .title\').html("'.str_replace('"', '', $the_image['Image Description']).'");$(\'.image_gallery .bar ul li a\').removeClass(\'active\');$(\'.image_gallery .bar ul li\').removeClass(\'active\');$(this).addClass(\'active\');$(this).parents(\'li\').addClass(\'active\');sidebarHeight();curImg=$(this).attr(\'id\');translate()','id'=>$img_num)).'</li>';
    $img_num++;
  }
}

For every image they want to display in a gallery, they echo out a list item for it, which makes sense, more or less. The mix of PHP, JavaScript, jQuery, and HTML tags is ugly and awful and I hate it. But that's just a prosaic kind of awful, the background radiation of looking at PHP code. Yes, it should be launched into the Kuiper belt (it doesn't deserve the higher delta-V required to launch it into the sun), but that's not why we're here.

The cause of the long load times was in the lines above, where for each image we call getimagesize, a function which downloads the image and checks its stats, all so we can set $image_dimensions, a query string which the server hosting the images presumably uses to resize the returned image.

All this is to check- if the height is greater than the width we force the height to be 432 pixels, otherwise we force the whole image to be 648x432 pixels.

Now, the server supplying those images had absolutely no caching, so that meant for every image request it needed to resize the image before sending. And for reasons which were unclear, if the requested aspect ratio were wildly different than the actual aspect ratio, it would also sometimes just refuse to resize and return the gigantic original image file. But someone also had thought about the perils of badly behaved clients downloading too many images, so if a single host were requesting too many images, it would start throttling the responses.

When you add all this up, it meant that this PHP web application was getting throttled by its own file server, because it was requesting too many images, too quickly. Any reasonable user load hitting it would be viewed as an attempted denial of service attack on the file hosting backend.

Caivs was able to simply remove the image-size check, and add a few CSS rules which ensured that files in the gallery wouldn't misbehave terribly, as sketched below. The performance problems went away- at least for that page of the application. Buried in that 20MB of PHP/HTML code, there were plenty more places where things could go wrong.
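The replacement was presumably something along these lines; this is only a sketch, using the gallery class names visible in the snippet above, and the img descendant selector is a guess about the markup:

/* Constrain gallery images in the browser instead of probing their
   dimensions server-side on every page load. */
.image_gallery .feature .image img {
  max-width: 648px;
  max-height: 432px;
  object-fit: contain;
}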

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsAccidents Happen

Author: Julian Miles, Staff Writer The control room is gleaming. Elias Medelsson looks about with a smile. The night watch clearly made a successful conversion of tedium to effort. He’ll drop a memo to his counterpart on the Benthusian side to express thanks. “Captain Medelsson.” Elias turns to find Siun Heplepara, the Benthusian he had […]

The post Accidents Happen appeared first on 365tomorrows.

,

Cryptogram Chinese AI Submersible

A Chinese company has developed an AI-piloted submersible that can reach speeds “similar to a destroyer or a US Navy torpedo,” dive “up to 60 metres underwater,” and “remain static for more than a month, like the stealth capabilities of a nuclear submarine.” In case you’re worried about the military applications of this, you can relax because the company says that the submersible is “designated for civilian use” and can “launch research rockets.”

“Research rockets.” Sure.

Planet DebianRavi Dwivedi: A visit to Paris

After attending the 2024 LibreOffice conference in Luxembourg, I visited Paris in October 2024.

If you are wondering whether I needed another visa to cross the border into France— I didn’t! Further, they are both also EU members, which means you don’t need to go through customs either. Thus, crossing the Luxembourg-France border is no different from crossing Indian state borders - like going from Rajasthan to Uttar Pradesh.

I took a TGV train from Luxembourg Central Station, which was within walking distance from my hostel. The train took only 2 hours and 20 minutes to cover the 300 km distance to Paris. It departed from Luxembourg at 10:00 AM and reached Paris at 12:20 PM. The ride was smooth and comfortable, arriving on time. It gave me an opportunity to see the countryside of France. I booked the train ticket online a couple of days prior through the Omio website.

A train standing on a platform

TGV train I rode from Luxembourg to Paris

I planned the first day with my friend Joenio, whom I met upon arriving at Paris’ Gare de l’Est station, along with his wife Mari. We went to my hostel (which was within walking distance of the station) to store my luggage, but we were informed that we needed to wait for a couple of hours before I could check in. Consequently, we went to a nearby Italian restaurant for lunch, where I ordered pasta. My hostel was so unbelievably cheap by French standards (25 euros per night) that Joenio was shocked when he learned the price.

Pasta on a plate topped with Ricotta cheese

Pasta I had in Paris

Walking in the city, I noticed it had separate cycling tracks and wide footpaths, just like Luxembourg. The traffic was also organized. For instance, there were traffic lights even for pedestrian crossings, unlike India, where crossing roads can be a nightmare. Car drivers stopping for pedestrians is a big improvement over what I am used to in India. The weather was also pleasant. It was a bit on the cooler side - around 15 degrees Celsius - and I had to wear a jacket.

A cycling track in Paris

A cycling track in Paris

After lunch, we returned to my hostel for my check-in at around 3 o’clock. Then, we went to the Luxembourg Museum (Musée du Luxembourg in French) as Joenio had booked tickets for an exhibition of paintings by the Brazilian painter Tarsila do Amaral. To reach there, we took a subway train from Gare du Nord station. The Paris subway charges 2.15 euros irrespective of the distance (or number of stations) traveled, as opposed to other metro systems I have used.

We reached the museum at around 4 o’clock. I found the paintings beautiful, but I would have appreciated them much more if the descriptions were in English.

A building with trees on the left and right side of it and sky in the background. People can be seen in front of the building.

Luxembourg Museum

Afterward, we went to a beautiful garden just behind the museum. It served as a great spot to relax and take pictures. Following this, we walked to the Pantheon - a well-known attraction in the city. It is a church built a couple of centuries ago. It has a dome-shaped structure at the top, recognizable from far away.

A building with a garden in front of it and people sitting closer to us. Sky can be seen in the background.

A shot of the park near to the Luxembourg Museum

A building with a dome shaped structure on top. Closer to camera, roads can be seen. In the background is blue colored cloudy sky.

Pantheon, one of the attractions of Paris.

Then we went to Notre Dame after having evening snacks and coffee at a nearby bakery. The Notre Dame was just over a kilometer from the Pantheon, so we took a walk. We also crossed the beautiful Seine river. On the way, I sampled a crêpe, a signature dish of France. The shop was named Crêperie and had many varieties of Crêpe. I took the one with eggs and Emmental cheese. It was savory and delicious.

Photo with Joenio and Mari

Photo with Joenio and Mari

Notre Dame, another tourist attraction of Paris.

Notre Dame, another tourist attraction of Paris.

By the time we reached Notre Dame, it was 07:30 PM. I learned from Joenio that Notre Dame was closed and being renovated due to a fire a couple of years ago, so we just sat around and clicked photos. It is a Catholic cathedral built in the French Gothic style (I read that on Wikipedia ;)). I also read on Wikipedia that it is located on an island named Île de la Cité; I hadn't even realized we were on an island.

At night, we visited the most well-known attraction of Paris, The Eiffel Tower. We again took the subway, alighting at the Bir-Hakeim station, followed by a short walk. We reached the Eiffel Tower at 9 o’clock. It was lit bright yellow. There was not much to do there, so we just clicked photos and hung out. After that, I came back to my hostel.

The Eiffel Tower lit with bright yellow

My photo with Eiffel Tower in the background

The next day, I roamed around the city, mostly on foot. France is known for its bakeries, so I checked out a couple of local ones. I had espresso a couple of times and sampled a croissant, pain au chocolat and a lemon meringue tartlet.

Items from left to right are: Chocolate Twist, Sugar briochette, Pain au Chocolat, Croissant with almonds, Croissant, Croissant with chocolate hazelnut filling.

Items at a bakery in Paris. Items from left to right are: Chocolate Twist, Sugar briochette, Pain au Chocolat, Croissant with almonds, Croissant, Croissant with chocolate hazelnut filling.

Here are some random shots:

The Paris subway

The Paris subway

Inside a Paris metro train

Inside a Paris subway

A random building and road in Paris

A random building and road in Paris

A shot near Seine river

A shot near Seine river

A view of Seine river

A view of Seine river

On the third day, I had my flight to India, so I checked out of the hostel early in the morning and took an RER train from Gare du Nord station to the airport. The ticket cost 11.80 euros.

I had heard that some of my friends had bad experiences in France, so I had the impression that I would not feel welcome. I had also encountered language problems on my previous Europe trip, to Albania and Kosovo. So this time I learned a couple of French words, like how to say thank you and good morning, which went a long way.

However, I didn’t have bad experiences in Paris, except for one instance in which I asked my hostel’s reception about my misplaced watch and the person at the reception asked me to be “polite” by being rude. She said, “Excuse me! You don’t know how to say Good Morning?”

Overall, I enjoyed my time in Paris and would like to thank Joenio and Mari for joining me. I would also like to thank Sophie, who gave me a map of Paris.

Let’s end this post here. I’ll meet you in the next one!

Credits: Thanks to contrapunctus for reviewing this post before publishing

Cryptogram Fake Student Fraud in Community Colleges

Reporting on the rise of fake students enrolling in community college courses:

The bots’ goal is to bilk state and federal financial aid money by enrolling in classes, and remaining enrolled in them, long enough for aid disbursements to go out. They often accomplish this by submitting AI-generated work. And because community colleges accept all applicants, they’ve been almost exclusively impacted by the fraud.

The article talks about the rise of this type of fraud, the difficulty of detecting it, and how it upends quite a bit of the class structure and learning community.

Slashdot thread.

Cryptogram Another Move in the Deepfake Creation/Detection Arms Race

Deepfakes are now mimicking heartbeats

In a nutshell

  • Recent research reveals that high-quality deepfakes unintentionally retain the heartbeat patterns from their source videos, undermining traditional detection methods that relied on detecting subtle skin color changes linked to heartbeats.
  • The assumption that deepfakes lack physiological signals, such as heart rate, is no longer valid. This challenges many existing detection tools, which may need significant redesigns to keep up with the evolving technology.
  • To effectively identify high-quality deepfakes, researchers suggest shifting focus from just detecting heart rate signals to analyzing how blood flow is distributed across different facial regions, providing a more accurate detection strategy.

And the AI models will start mimicking that.

Planet DebianDaniel Lange: Make `apt` shut up about "modernize-sources" in Trixie

Apt in Trixie (Debian 13) has the annoying habit of telling you "Notice: Some sources can be modernized. Run 'apt modernize-sources' to do so." ... every single time you run apt update. Not cool for logs and log monitoring.

And - of course - if you had the option to do this, you ... would have run the indicated apt modernize-sources command to convert your sources.list to "deb822 .sources format" files already. So an information message once or twice would have done.

Well, luckily you can help yourself:

apt -o APT::Get::Update::SourceListWarnings=false will keep apt shut up. This could go into an alias or your systems management tool / update script.

Alternatively add

# Keep apt shut about preferring the "deb822" sources file format
APT::Get::Update::SourceListWarnings "false";

to /etc/apt/apt.conf.d/10quellsourceformatwarnings .

This silences the notices about sources file formats (not only the deb822 one) system-wide. That way you can decide when you can / want to migrate to the new, more verbose, apt sources format yourself.
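For reference, the migration turns a classic one-line sources.list entry into a deb822 stanza roughly like this (the repository details are just an example):

# /etc/apt/sources.list (old one-line format)
deb http://deb.debian.org/debian trixie main

# /etc/apt/sources.list.d/debian.sources (deb822 format)
Types: deb
URIs: http://deb.debian.org/debian
Suites: trixie
Components: main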

Worse Than FailureCodeSOD: A Double Date

Alice picked up a ticket about a broken date calculation in a React application, and dropped into the code to take a look. There, she found this:

export function calcYears(date) {
  return date && Math.floor((new Date() - new Date(date).getTime()) / 3.15576e10)
}

She stared at it for a while, trying to understand what the hell this was doing, and why it was dividing by 31 billion. Also, why there was a && in there. But after staring at it for a few minutes, the sick logic of the code makes sense. getTime returns a timestamp in milliseconds. 3.15576e10 is the number of milliseconds in a year. So the Math.floor() expression just gets the difference between two dates as a number of years. The && just short-circuits on falsy input: if date is empty or null (bad input, perhaps?), the calculation is skipped and we return the original falsy value, because that's a brillant way to handle errors.

As bizarre as this code is, this isn't the code that was causing problems. It works just fine. So why did Alice get a ticket? She spent some more time puzzling over that, while reading through the code, only to discover that this calcYears function was used almost everywhere in the code- but in one spot, someone decided to write their own.

if (birthday) {
      let year = birthday?.split('-', 1)
      if (year[0] != '') {
        let years = new Date().getFullYear() - year[0]
        return years
      }
}

So, this function also works, and is maybe a bit more clear about what it's doing than the calcYears. But note the use of split- this assumes a lot about the input format of the date, and that assumption isn't always reliable. While calcYears still does unexpected things if you fail to give it good input, its accepted range of inputs is broader. Here, if we're not in a date format which starts with "YYYY-", this blows up.
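A quick illustration of why that format assumption matters (hypothetical inputs):

"1990-04-12".split('-', 1)   // ["1990"]         - year extracted as intended
"04/12/1990".split('-', 1)   // ["04/12/1990"]   - no '-', so the "year" is the whole string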

After spending hours puzzling over this, Alice writes:

I HATE HOW NO ONE KNOWS HOW TO CODE

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Planet DebianSergio Talens-Oliag: Argo CD Usage Examples

As a follow-up to my post about the use of argocd-autopilot, I'm going to deploy various applications to the cluster using Argo CD from the same repository we used in the previous post.

For our examples we are going to test a solution to the problem we had when we updated a ConfigMap used by the argocd-server (the resource was updated, but the application Pod was not restarted because there was no change to the argocd-server Deployment); our original fix was to kill the Pod manually, but that manual operation is something we want to avoid.

The solution proposed for this kind of issue in the helm documentation is to add annotations to the Deployments with values that are a hash of the ConfigMaps or Secrets used by them; that way, if a file is updated the annotation is also updated, and when the Deployment changes are applied a roll out of the pods is triggered.
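In a chart's Deployment template that approach looks roughly like the following sketch, adapted from the helm documentation; the configmap.yaml file name is an example:

# Fragment of a Deployment template inside a chart: hashing the rendered
# ConfigMap into a pod annotation changes the pod template whenever the
# ConfigMap content changes, which triggers a roll out.
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}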

In this post we will install a couple of controllers and an application to show how we can handle Secrets with argocd and solve the issue with updates to ConfigMaps and Secrets; to do it we will execute the following tasks:

  1. Deploy the Reloader controller to our cluster. It is a tool that watches changes in ConfigMaps and Secrets and does rolling upgrades on the Pods that use them from Deployment, StatefulSet, DaemonSet or DeploymentConfig objects when they are updated (by default we have to add some annotations to the objects to make things work).
  2. Deploy a simple application that can use ConfigMaps and Secrets and test that the Reloader controller does its job when we add or update a ConfigMap.
  3. Install the Sealed Secrets controller to manage secrets inside our cluster, use it to add a secret to our sample application and see that the application is reloaded automatically.

Creating the test project for argocd-autopilot

As we did our installation using argocd-autopilot we will use its structure to manage the applications.

The first thing to do is to create a project (we will name it test) as follows:

❯ argocd-autopilot project create test
INFO cloning git repository: https://forgejo.mixinet.net/blogops/argocd.git
Enumerating objects: 18, done.
Counting objects: 100% (18/18), done.
Compressing objects: 100% (16/16), done.
Total 18 (delta 1), reused 0 (delta 0), pack-reused 0
INFO using revision: "", installation path: "/"
INFO pushing new project manifest to repo
INFO project created: 'test'

Now that the test project is available we will use it on our argocd-autopilot invocations when creating applications.

Installing the reloader controller

To add the reloader application to the test project as a kustomize application and deploy it on the tools namespace with argocd-autopilot we do the following:

❯ argocd-autopilot app create reloader \
    --app 'github.com/stakater/Reloader/deployments/kubernetes/?ref=v1.4.2' \
    --project test --type kustomize --dest-namespace tools
INFO cloning git repository: https://forgejo.mixinet.net/blogops/argocd.git
Enumerating objects: 19, done.
Counting objects: 100% (19/19), done.
Compressing objects: 100% (18/18), done.
Total 19 (delta 2), reused 0 (delta 0), pack-reused 0
INFO using revision: "", installation path: "/"
INFO created 'application namespace' file at '/bootstrap/cluster-resources/in-cluster/tools-ns.yaml'
INFO committing changes to gitops repo...
INFO installed application: reloader

That command creates four files on the argocd repository:

  1. One to create the tools namespace:

    bootstrap/cluster-resources/in-cluster/tools-ns.yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      annotations:
        argocd.argoproj.io/sync-options: Prune=false
      creationTimestamp: null
      name: tools
    spec: {}
    status: {}
  2. Another to include the reloader base application from the upstream repository:

    apps/reloader/base/kustomization.yaml
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
    - github.com/stakater/Reloader/deployments/kubernetes/?ref=v1.4.2
  3. The kustomization.yaml file for the test project (by default it includes the same configuration used on the base definition, but we could make other changes if needed):

    apps/reloader/overlays/test/kustomization.yaml
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    namespace: tools
    resources:
    - ../../base
  4. The config.json file used to define the application on argocd for the test project (it points to the folder that includes the previous kustomization.yaml file):

    apps/reloader/overlays/test/config.json
    {
      "appName": "reloader",
      "userGivenName": "reloader",
      "destNamespace": "tools",
      "destServer": "https://kubernetes.default.svc",
      "srcPath": "apps/reloader/overlays/test",
      "srcRepoURL": "https://forgejo.mixinet.net/blogops/argocd.git",
      "srcTargetRevision": "",
      "labels": null,
      "annotations": null
    }

We can check that the application is working using the argocd command line application:

❯ argocd app get argocd/test-reloader -o tree
Name:               argocd/test-reloader
Project:            test
Server:             https://kubernetes.default.svc
Namespace:          tools
URL:                https://argocd.lo.mixinet.net:8443/applications/test-reloader
Source:
- Repo:             https://forgejo.mixinet.net/blogops/argocd.git
  Target:
  Path:             apps/reloader/overlays/test
SyncWindow:         Sync Allowed
Sync Policy:        Automated (Prune)
Sync Status:        Synced to  (2893b56)
Health Status:      Healthy

KIND/NAME                                          STATUS  HEALTH   MESSAGE
ClusterRole/reloader-reloader-role                 Synced
ClusterRoleBinding/reloader-reloader-role-binding  Synced
ServiceAccount/reloader-reloader                   Synced           serviceaccount/reloader-reloader created
Deployment/reloader-reloader                       Synced  Healthy  deployment.apps/reloader-reloader created
└─ReplicaSet/reloader-reloader-5b6dcc7b6f                  Healthy
  └─Pod/reloader-reloader-5b6dcc7b6f-vwjcx                 Healthy

Adding flags to the reloader server

The runtime configuration flags for the reloader server are described on the project README.md file, in our case we want to adjust three values:

  • We want to enable the option to reload a workload when a ConfigMap or Secret is created,
  • We want to enable the option to reload a workload when a ConfigMap or Secret is deleted,
  • We want to use the annotations strategy for reloads, as it is the recommended mode of operation when using argocd.

To pass them we edit the apps/reloader/overlays/test/kustomization.yaml file to patch the pod container template, the text added is the following:

patches:
# Add flags to reload workloads when ConfigMaps or Secrets are created or deleted
- target:
    kind: Deployment
    name: reloader-reloader
  patch: |-
    - op: add
      path: /spec/template/spec/containers/0/args
      value:
        - '--reload-on-create=true'
        - '--reload-on-delete=true'
        - '--reload-strategy=annotations'

After committing and pushing the updated file the system launches the application with the new options.

The dummyhttp application

To do a quick test we are going to deploy the dummyhttp web server using an image generated using the following Dockerfile:

# Image to run the dummyhttp application <https://github.com/svenstaro/dummyhttp>

# This arg could be passed by the container build command (used with mirrors)
ARG OCI_REGISTRY_PREFIX

# Latest tested version of alpine
FROM ${OCI_REGISTRY_PREFIX}alpine:3.21.3

# Tool versions
ARG DUMMYHTTP_VERS=1.1.1

# Download binary
RUN ARCH="$(apk --print-arch)" && \
  VERS="$DUMMYHTTP_VERS" && \
  URL="https://github.com/svenstaro/dummyhttp/releases/download/v$VERS/dummyhttp-$VERS-$ARCH-unknown-linux-musl" && \
  wget "$URL" -O "/tmp/dummyhttp" && \
  install /tmp/dummyhttp /usr/local/bin && \
  rm -f /tmp/dummyhttp

# Set the entrypoint to /usr/local/bin/dummyhttp
ENTRYPOINT [ "/usr/local/bin/dummyhttp" ]

The kustomize base application is available on a monorepo that contains the following files:

  1. A Deployment definition that uses the previous image but uses /bin/sh -c as its entrypoint (command in the k8s Pod terminology) and passes as its argument a string that runs the eval command to be able to expand environment variables passed to the pod (the definition includes two optional variables, one taken from a ConfigMap and another one from a Secret):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: dummyhttp
      labels:
        app: dummyhttp
    spec:
      selector:
        matchLabels:
          app: dummyhttp
      template:
        metadata:
          labels:
            app: dummyhttp
        spec:
          containers:
          - name: dummyhttp
            image: forgejo.mixinet.net/oci/dummyhttp:1.0.0
            command: [ "/bin/sh", "-c" ]
            args:
            - 'eval dummyhttp -b \"{\\\"c\\\": \\\"$CM_VAR\\\", \\\"s\\\": \\\"$SECRET_VAR\\\"}\"'
            ports:
            - containerPort: 8080
            env:
            - name: CM_VAR
              valueFrom:
                configMapKeyRef:
                  name: dummyhttp-configmap
                  key: CM_VAR
                  optional: true
            - name: SECRET_VAR
              valueFrom:
                secretKeyRef:
                  name: dummyhttp-secret
                  key: SECRET_VAR
                  optional: true
  2. A Service that publishes the previous Deployment (the only relevant thing to mention is that the web server uses the port 8080 by default):

    apiVersion: v1
    kind: Service
    metadata:
      name: dummyhttp
    spec:
      selector:
        app: dummyhttp
      ports:
      - name: http
        port: 80
        targetPort: 8080
  3. An Ingress definition to allow access to the application from the outside:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: dummyhttp
      annotations:
        traefik.ingress.kubernetes.io/router.tls: "true"
    spec:
      rules:
        - host: dummyhttp.localhost.mixinet.net
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: dummyhttp
                    port:
                      number: 80
  4. And the kustomization.yaml file that includes the previous files:

    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    
    resources:
    - deployment.yaml
    - service.yaml
    - ingress.yaml

Deploying the dummyhttp application from argocd

We could create the dummyhttp application using the argocd-autopilot command as we’ve done on the reloader case, but we are going to do it manually to show how simple it is.

First we’ve created the apps/dummyhttp/base/kustomization.yaml file to include the application from the previous repository:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://forgejo.mixinet.net/blogops/argocd-applications.git//dummyhttp/?ref=dummyhttp-v1.0.0

As a second step we create the apps/dummyhttp/overlays/test/kustomization.yaml file to include the previous file:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base

And finally we add the apps/dummyhttp/overlays/test/config.json file to configure the application as the ApplicationSet defined by argocd-autopilot expects:

{
  "appName": "dummyhttp",
  "userGivenName": "dummyhttp",
  "destNamespace": "default",
  "destServer": "https://kubernetes.default.svc",
  "srcPath": "apps/dummyhttp/overlays/test",
  "srcRepoURL": "https://forgejo.mixinet.net/blogops/argocd.git",
  "srcTargetRevision": "",
  "labels": null,
  "annotations": null
}

Once we have the three files we commit and push the changes and argocd deploys the application; we can check that things are working using curl:

❯ curl -s https://dummyhttp.lo.mixinet.net:8443/ | jq -M .
{
  "c": "",
  "s": ""
}

Patching the application

Now we will add patches to the apps/dummyhttp/overlays/test/kustomization.yaml file:

  • One to add annotations for reloader (one to enable it and another one to set the roll out strategy to restart to avoid touching the deployments, as that can generate issues with argocd).
  • Another to change the ingress hostname (not really needed, but something quite reasonable for a specific project).

The file diff is as follows:

--- a/apps/dummyhttp/overlays/test/kustomization.yaml
+++ b/apps/dummyhttp/overlays/test/kustomization.yaml
@@ -2,3 +2,22 @@ apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
 resources:
 - ../../base
+patches:
+# Add reloader annotations
+- target:
+    kind: Deployment
+    name: dummyhttp
+  patch: |-
+    - op: add
+      path: /metadata/annotations
+      value:
+        reloader.stakater.com/auto: "true"
+        reloader.stakater.com/rollout-strategy: "restart"
+# Change the ingress host name
+- target:
+    kind: Ingress
+    name: dummyhttp
+  patch: |-
+    - op: replace
+      path: /spec/rules/0/host
+      value: test-dummyhttp.lo.mixinet.net

After committing and pushing the changes we can use the argocd cli to check the status of the application:

❯ argocd app get argocd/test-dummyhttp -o tree
Name:               argocd/test-dummyhttp
Project:            test
Server:             https://kubernetes.default.svc
Namespace:          default
URL:                https://argocd.lo.mixinet.net:8443/applications/test-dummyhttp
Source:
- Repo:             https://forgejo.mixinet.net/blogops/argocd.git
  Target:
  Path:             apps/dummyhttp/overlays/test
SyncWindow:         Sync Allowed
Sync Policy:        Automated (Prune)
Sync Status:        Synced to  (fbc6031)
Health Status:      Healthy

KIND/NAME                           STATUS  HEALTH   MESSAGE
Deployment/dummyhttp                Synced  Healthy  deployment.apps/dummyhttp configured
└─ReplicaSet/dummyhttp-55569589bc           Healthy
  └─Pod/dummyhttp-55569589bc-qhnfk          Healthy
Ingress/dummyhttp                   Synced  Healthy  ingress.networking.k8s.io/dummyhttp configured
Service/dummyhttp                   Synced  Healthy  service/dummyhttp unchanged
├─Endpoints/dummyhttp
└─EndpointSlice/dummyhttp-x57bl

As we can see, the Deployment and Ingress were updated, but the Service is unchanged.

To validate that the ingress is using the new hostname we can use curl:

❯ curl -s https://dummyhttp.lo.mixinet.net:8443/
404 page not found
❯ curl -s https://test-dummyhttp.lo.mixinet.net:8443/
{"c": "", "s": ""}

Adding a ConfigMap

Now that the system is adjusted to reload the application when the ConfigMap or Secret is created, deleted or updated we are ready to add one file and see how the system reacts.

We modify the apps/dummyhttp/overlays/test/kustomization.yaml file to create the ConfigMap using the configMapGenerator as follows:

--- a/apps/dummyhttp/overlays/test/kustomization.yaml
+++ b/apps/dummyhttp/overlays/test/kustomization.yaml
@@ -2,6 +2,14 @@ apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
 resources:
 - ../../base
+# Add the config map
+configMapGenerator:
+- name: dummyhttp-configmap
+  literals:
+  - CM_VAR="Default Test Value"
+  behavior: create
+  options:
+    disableNameSuffixHash: true
 patches:
 # Add reloader annotations
 - target:

After committing and pushing the changes we can see that the ConfigMap is available, the pod has been deleted and started again and the curl output includes the new value:

❯ kubectl get configmaps,pods
NAME                          DATA   AGE
configmap/dummyhttp-configmap   1      11s
configmap/kube-root-ca.crt      1      4d7h

NAME                           READY   STATUS        RESTARTS   AGE
pod/dummyhttp-779c96c44b-pjq4d   1/1     Running       0          11s
pod/dummyhttp-fc964557f-jvpkx    1/1     Terminating   0          2m42s
❯ curl -s https://test-dummyhttp.lo.mixinet.net:8443 | jq -M .
{
  "c": "Default Test Value",
  "s": ""
}

Using helm with argocd-autopilot

Right now there is no direct support in argocd-autopilot to manage applications using helm (see the issue #38 on the project), but we want to use a chart in our next example.

There are multiple ways to add the support, but the simplest one that allows us to keep using argocd-autopilot is to use kustomize applications that call helm as described here.

The only thing needed before being able to use the approach is to add the kustomize.buildOptions flag to the argocd-cm in the bootstrap/argo-cd/kustomization.yaml file; its contents are now as follows:

bootstrap/argo-cd/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
configMapGenerator:
- behavior: merge
  literals:
  # Enable helm usage from kustomize (see https://github.com/argoproj/argo-cd/issues/2789#issuecomment-960271294)
  - kustomize.buildOptions="--enable-helm"
  - |
    repository.credentials=- passwordSecret:
        key: git_token
        name: autopilot-secret
      url: https://forgejo.mixinet.net/
      usernameSecret:
        key: git_username
        name: autopilot-secret
  name: argocd-cm
  # Disable TLS for the Argo Server (see https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#traefik-v30)
- behavior: merge
  literals:
  - "server.insecure=true"
  name: argocd-cmd-params-cm
kind: Kustomization
namespace: argocd
resources:
- github.com/argoproj-labs/argocd-autopilot/manifests/base?ref=v0.4.19
- ingress_route.yaml

On the following section we will explain how the application is defined to make things work.

Installing the sealed-secrets controller

To manage secrets in our cluster we are going to use the sealed-secrets controller and to install it we are going to use its chart.

As we mentioned in the previous section, the idea is to create a kustomize application and use that to deploy the chart, but we are going to create the files manually, as we are not going to import the base kustomization files from a remote repository.

As there is no clear way to override helm Chart values using overlays we are going to use a generator to create the helm configuration from an external resource and include it from our overlays (the idea has been taken from this repository, which was referenced from a comment on the kustomize issue #38 mentioned earlier).

The sealed-secrets application

We have created the following files and folders manually:

apps/sealed-secrets/
├── helm
│   ├── chart.yaml
│   └── kustomization.yaml
└── overlays
    └── test
        ├── config.json
        ├── kustomization.yaml
        └── values.yaml

The helm folder contains the generator template that will be included from our overlays.

The kustomization.yaml includes the chart.yaml as a resource:

apps/sealed-secrets/helm/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- chart.yaml

And the chart.yaml file defines the HelmChartInflationGenerator:

apps/sealed-secrets/helm/chart.yaml
apiVersion: builtin
kind: HelmChartInflationGenerator
metadata:
  name: sealed-secrets
releaseName: sealed-secrets
name: sealed-secrets
namespace: kube-system
repo: https://bitnami-labs.github.io/sealed-secrets
version: 2.17.2
includeCRDs: true
# Add common values to all argo-cd projects inline
valuesInline:
  fullnameOverride: sealed-secrets-controller
# Load a values.yaml file from the same directory that uses this generator
valuesFile: values.yaml

For this chart the template adjusts the namespace to kube-system and adds the fullnameOverride on the valuesInline key because we want to use those settings on all the projects (they are the values expected by the kubeseal command line application, so we adjust them to avoid the need to add additional parameters to it).

We adjust the global values inline so that we can use the valuesFile from our overlays; as we are using a generator, the path is relative to the folder that contains the kustomization.yaml file that calls it, so in our case we need a values.yaml file in each overlay folder (if we don't want to override any values for a project we can create an empty file, but it has to exist).

Finally, our overlay folder contains three files, a kustomization.yaml file that includes the generator from the helm folder, the values.yaml file needed by the chart and the config.json file used by argocd-autopilot to install the application.

The kustomization.yaml file contents are:

apps/sealed-secrets/overlays/test/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Uncomment if you want to add additional resources using kustomize
#resources:
#- ../../base
generators:
- ../../helm

The values.yaml file enables the ingress for the application and adjusts its hostname:

apps/sealed-secrets/overlays/test/values.yaml
ingress:
  enabled: true
  hostname: test-sealed-secrets.lo.mixinet.net

And the config.json file is similar to the ones used with the other applications we have installed:

apps/sealed-secrets/overlays/test/config.json
{
  "appName": "sealed-secrets",
  "userGivenName": "sealed-secrets",
  "destNamespace": "kube-system",
  "destServer": "https://kubernetes.default.svc",
  "srcPath": "apps/sealed-secrets/overlays/test",
  "srcRepoURL": "https://forgejo.mixinet.net/blogops/argocd.git",
  "srcTargetRevision": "",
  "labels": null,
  "annotations": null
}

Once we commit and push the files, the sealed-secrets application is installed in our cluster; we can check it using curl to get the public certificate used by it:

❯ curl -s https://test-sealed-secrets.lo.mixinet.net:8443/v1/cert.pem
-----BEGIN CERTIFICATE-----
[...]
-----END CERTIFICATE-----

The dummyhttp-secret

To create sealed secrets we need to install the kubeseal tool:

❯ arkade get kubeseal

Now we create a local version of the dummyhttp-secret that contains some value on the SECRET_VAR key (the easiest way for doing it is to use kubectl):

❯ echo -n "Boo" | kubectl create secret generic dummyhttp-secret \
    --dry-run=client --from-file=SECRET_VAR=/dev/stdin -o yaml \
    >/tmp/dummyhttp-secret.yaml

The secret definition in yaml format is:

apiVersion: v1
data:
  SECRET_VAR: Qm9v
kind: Secret
metadata:
  creationTimestamp: null
  name: dummyhttp-secret

To create a sealed version using the kubeseal tool we can do the following:

❯ kubeseal -f /tmp/dummyhttp-secret.yaml -w /tmp/dummyhttp-sealed-secret.yaml

That invocation needs to have access to the cluster to do its job and in our case it works because we modified the chart to use the kube-system namespace and set the controller name to sealed-secrets-controller as the tool expects.

If we need to create the secrets without credentials we can connect to the ingress address we added to retrieve the public key:

❯ kubeseal -f /tmp/dummyhttp-secret.yaml -w /tmp/dummyhttp-sealed-secret.yaml \
    --cert https://test-sealed-secrets.lo.mixinet.net:8443/v1/cert.pem

Or, if we don’t have access to the ingress address, we can save the certificate on a file and use it instead of the URL.

The sealed version of the secret looks like this:

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: dummyhttp-secret
  namespace: default
spec:
  encryptedData:
    SECRET_VAR: [...]
  template:
    metadata:
      creationTimestamp: null
      name: dummyhttp-secret
      namespace: default

This file can be deployed to the cluster to create the secret (in our case we will add it to the argocd application), but before doing that we are going to check the output of our dummyhttp service and get the list of Secrets and SealedSecrets in the default namespace:

❯ curl -s https://test-dummyhttp.lo.mixinet.net:8443 | jq -M .
{
  "c": "Default Test Value",
  "s": ""
}
❯ kubectl get sealedsecrets,secrets
No resources found in default namespace.

Now we add the SealedSecret to the dummyhttp application, copying the file and adding it to the kustomization.yaml file:

--- a/apps/dummyhttp/overlays/test/kustomization.yaml
+++ b/apps/dummyhttp/overlays/test/kustomization.yaml
@@ -2,6 +2,7 @@ apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
 resources:
 - ../../base
+- dummyhttp-sealed-secret.yaml
 # Create the config map value
 configMapGenerator:
 - name: dummyhttp-configmap

Once we commit and push the files Argo CD creates the SealedSecret and the controller generates the Secret:

❯ kubectl apply -f /tmp/dummyhttp-sealed-secret.yaml
sealedsecret.bitnami.com/dummyhttp-secret created
❯ kubectl get sealedsecrets,secrets
NAME                                        STATUS   SYNCED   AGE
sealedsecret.bitnami.com/dummyhttp-secret            True     3s

NAME                      TYPE     DATA   AGE
secret/dummyhttp-secret   Opaque   1      3s

If we check the command output we can see the new value of the secret:

❯ curl -s https://test-dummyhttp.lo.mixinet.net:8443 | jq -M .
{
  "c": "Default Test Value",
  "s": "Boo"
}

Using sealed-secrets in production clusters

If you plan to use sealed-secrets look into its documentation to understand how it manages the private keys, how to backup things and keep in mind that, as the documentation explains, you can rotate your sealed version of the secrets, but that doesn’t change the actual secrets.

If you want to rotate your secrets you have to update them and commit the sealed version of the updates (as the controller also rotates the encryption keys your new sealed version will also be using a newer key, so you will be doing both things at the same time).

Final remarks

In this post we have seen how to deploy applications using the argocd-autopilot model, including the use of helm charts inside kustomize applications, and how to install and use the sealed-secrets controller.

It has been interesting and I’ve learnt a lot about argocd in the process, but I believe that if I ever want to use it in production I will also review the native helm support in argocd using a separate repository to manage the applications, at least to be able to compare it to the model explained here.

365 TomorrowsEarth Day

Author: Chelsea Utecht Today is the day our masters treat us to sweet snacks of expensive corn and sing a song to celebrate their love for us – “Happy Earth Day to you! Happy Earth Day to you! Happy Earth day, our humans!” – because today the orbit aligns so that we can see a […]

The post Earth Day appeared first on 365tomorrows.

xkcdAbout 20 Pounds

,

Planet DebianDirk Eddelbuettel: #47: r2u at its Third Birthday

Welcome to post 47 in the $R^4 series!

r2u provides Ubuntu binaries for all CRAN packages for the R system. It started three years ago, and offers Linux users on Ubuntu what Windows and macOS users already experience: fast, easy and reliable installation of binary packages. But by integrating with the system package manager (which is something that cannot be done on those other operating systems) we can fully and completely integrate it with the underlying system. External libraries are resolved as shared libraries and handled by the system package manager. This offers fully automatic installation both at initial installation and at all subsequent upgrades. R users just say, e.g., install.packages("sf") and the spatial libraries proj, gdal, geotiff (as well as several others) are automatically installed as dependencies in the correct versions. And they remain installed along with sf as the system package manager now knows of the dependency.

Work on r2u began as a quick weekend experiment in March 2022, and by May 4 a first release was marked in the NEWS file after a few brave alpha testers kicked tires quite happily. This makes today the third anniversary of that first release, and marks a good time to review where we are. This short post does this, and stresses three aspects: overall usage, current versions, and new developments.

Steadily Growing Usage at 42 Million Packages Shipped

r2u ships from two sites. Its main repository is at the University of Illinois campus, providing ample and heavily redundant bandwidth. We remain very grateful for the sponsorship from Atlas. It also still ships from my own server, though that may be discontinued or could be spotty as it is on retail fiber connectivity. As we have access to both sets of server logs, we can tabulate and chart usage. As of yesterday, total downloads were north of 42 million with current weekly averages around 500 thousand. These are quite staggering numbers for what started as a small hobby project, and are quite humbling.

Usage is driven by deployment in continuous integration (as for example the Ubuntu-use at GitHub makes this both an easy and obvious choice), cloud computing (as it is easy to spin up Ubuntu instances, it is as easy to add r2u via four simple commands or one short script), explorative use (for example on Google Colab) or of course in general laptop, desktop, or server settings.

Current Versions

Since r2u began, we added two Ubuntu LTS releases, three annual R releases as well as multiple BioConductor releases. BioConductor support is on a ‘best-efforts’ basis, motivated primarily by supporting the CRAN packages that have BioConductor dependencies. It has grown to around 500 packages and includes the top 250 by usage.

Right now, current versions R 4.5.0 and BioConductor 3.21, both released last month, are supported.

New Development: arm64

A recent change is the support of the arm64 platform. As discussed in the introductory post, it is a popular and increasingly common CPU choice seen anywhere from the Raspberry Pi 5 and its Cortex CPU to in-house cloud computing platforms (called, respectively, Graviton at AWS and Axion at GCP), general server use via Ampere CPUs, the Cortex-based laptops that are starting to appear, and last but not least on the popular M1 to M4-based macOS machines. (For macOS, one key appeal is lighter-weight Docker use, as these M1 to M4 CPUs can run arm64-based containers without a translation layer, making it an attractive choice.)

This is currently supported only for the ‘noble’ aka 24.04 release. GitHub Actions, where we compile these packages, now also supports ‘jammy’ aka 22.04, but it may not be worth it to expand there as the current ‘latest’ release is available. We have not yet added BioConductor support but may do so. Drop us a line (maybe via an issue) if this is of interest.

With the provision of arm64 binaries, we also started to make heavier use of GitHub Actions. The BioConductor 3.21 release binaries were also created there. This makes the provisioning more transparent, as the configuration repo as well as the two builder repos (arm64, bioc) are public, as is of course the main r2u repo.

Summing Up

This short post summarised the current state of r2u along with some recent news. If you are curious, head over to the r2u site and try it, for example in a rocker/r2u container.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Planet DebianColin Watson: Free software activity in April 2025

About 90% of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

Request for OpenSSH debugging help

Following the OpenSSH work described below, I have an open report about the sshd server sometimes crashing when clients try to connect to it. I can’t reproduce this myself, and arm’s-length debugging is very difficult, but three different users have reported it. For the time being I can’t pass it upstream, as it’s entirely possible it’s due to a Debian patch.

Is there anyone reading this who can reproduce this bug and is capable of doing some independent debugging work, most likely involving bisecting changes to OpenSSH? I’d suggest first seeing whether a build of the unmodified upstream 10.0p2 release exhibits the same bug. If it does, then bisect between 9.9p2 and 10.0p2; if not, then bisect the list of Debian patches. This would be extremely helpful, since at the moment it’s a bit like trying to look for a needle in a haystack from the next field over by sending instructions to somebody with a magnifying glass.

OpenSSH

I upgraded the Debian packaging to OpenSSH 10.0p1 (now designated 10.0p2 by upstream due to a mistake in the release process, but they’re the same thing), fixing CVE-2025-32728. This also involved a diffoscope bug report due to the version number change.

I enabled the new --with-linux-memlock-onfault configure option to protect sshd against being swapped out, but this turned out to cause test failures on riscv64, so I disabled it again there. Debugging this took some time since I needed to do it under emulation, and in the process of setting up a testbed I added riscv64 support to vmdb2.

In coordination with the wtmpdb maintainer, I enabled the new Y2038-safe native wtmpdb support in OpenSSH, so wtmpdb last now reports the correct tty.

I fixed a couple of packaging bugs:

I reviewed and merged several packaging contributions from others:

dput-ng

Since we added dput-ng integration to Debusine recently, I wanted to make sure that it was in good condition in trixie, so I fixed dput-ng: will FTBFS during trixie support period. Previously a similar bug had been fixed by just using different Ubuntu release names in tests; this time I made the tests independent of the current supported release data returned by distro_info, so this shouldn’t come up again.

We also ran into dput-ng: --override doesn't override profile parameters, which needed somewhat more extensive changes since it turned out that that option had never worked. I fixed this after some discussion with Paul Tagliamonte to make sure I understood the background properly.

man-db

I released man-db 2.13.1. This just included various small fixes and a number of translation updates, but I wanted to get it into trixie in order to include a contribution to increase the MAX_NAME constant, since that was now causing problems for some pathological cases of manual pages in the wild that documented a very large number of terms.

debmirror

I fixed one security bug: debmirror prints credentials with --progress.

Python team

I upgraded these packages to new upstream versions:

In bookworm-backports, I updated these packages:

  • python-django to 3:4.2.20-1 (issuing BSA-123)
  • python-django-pgtrigger to 4.13.3

I dropped a stale build-dependency from python-aiohttp-security that kept it out of testing (though unfortunately too late for the trixie freeze).

I fixed or helped to fix various other build/test failures:

I packaged python-typing-inspection, needed for a new upstream version of pydantic.

I documented the architecture field in debian/tests/autopkgtest-pkg-pybuild.conf files.

I fixed other odds and ends of bugs:

Science team

I fixed various build/test failures:

Cryptogram US as a Surveillance State

Two essays were just published on DOGE’s data collection and aggregation, and how it ends with a modern surveillance state.

It’s good to see this finally being talked about.

EDITED TO ADD (5/3): Here’s a free link to that first essay.

Planet DebianRuss Allbery: Review: The Book That Held Her Heart

Review: The Book That Held Her Heart, by Mark Lawrence

Series: Library Trilogy #3
Publisher: ACE
Copyright: 2025
ISBN: 0-593-43799-3
Format: Kindle
Pages: 367

The Book That Held Her Heart is the third and final book of the Library fantasy trilogy and a direct sequel to The Book That Broke the World. Lawrence provides a much-needed summary of the previous volumes at the start of this book (thank you to every author who does this!), but I was still struggling a bit with the blizzard of character names. I recommend reading this series entry in relatively close proximity to the other two.

At the end of the previous book, and following some rather horrific violence, the cast split into four groups. Three of those are pursuing different resolutions to the moral problem of the Library's existence. The fourth group opens the book still stuck with the series villains, who were responsible for the over-the-top morality that undermined my enjoyment of The Book That Broke the World. Lawrence follows all four groups in interwoven chapters, maintaining that complex structure through most of this book. I thought this was a questionable structural decision that made this book feel choppy, disconnected, and unnecessarily confusing.

The larger problem, though, is that this is the payoff book, the book where we find out if Lawrence is equal to the tricky ethical questions he's raised and the world-building masterpiece that The Book That Wouldn't Burn kicked off. The answer, unfortunately, is "not really." This is not a total failure; there are some excellent set pieces and world-building twists, and the characters remain likable and enjoyable to read about (although the regrettable sidelining of Livira continues). But the grand finale is weirdly conservative and not particularly grand, and Lawrence's answer to the moral questions he raised is cliched and wholly unsatisfying.

I was really hoping Lawrence was going somewhere more interesting than "Nazis bad." I am entirely sympathetic to this moral position, but so is every other likely reader of this series, and we all know how that story goes. What a waste of a compelling setup.

Sadly, "Nazis bad" isn't even a metaphor for the black-and-white morality that Lawrence first introduced at the end of the previous book. It's a literal description of the main moral thrust of this book. Lawrence introduces yet another new character and timeline so that he can write about thinly-disguised Nazis persecuting even more thinly-disguised Jews, and this conflict is roughly half this book. It's also integral to the ending, which uses obvious, stock secular sainthood as a sort of trump card to resolve ideological conflicts at the heart of the series.

This is one of the things I was worried about after I read the short stories that Lawrence published between the volumes of this series. All of them were thuddingly trite, which did not make me optimistic that Lawrence would find a sufficiently interesting answer to his moral trilemma to satisfy the high expectations created by the build-up. That is, I am sad to report, precisely the failure mode of this book. The resolution of the moral question of the series is arguably radical within the context of the prior world-building, but in a way that effectively reduces it to the boring, small-c conservative bromides of everyday reality. This is precisely the opposite of why I read fantasy, and I did not find Lawrence's arguments for it at all convincing. Neither, I think, did Lawrence, given that the critical debate takes place off camera so that he could avoid having to present the argument.

This is, unfortunately, another series where the author's reach exceeded their grasp. The world-building of The Book That Wouldn't Burn is a masterpiece that created one of the most original and compelling settings that I have read in fantasy for a long time, but unfortunately Lawrence did not have an equally original plan for how to use the setting. This is a common problem and I'm not going to judge it too harshly; it's much harder to end a series than it is to start one. I thought the occasional flashes of brilliance were worth the journey, and they continue into this book with some elaborations on the Library's mythic structure that are going to stick in my mind.

You can sense the story slipping away from the hoped-for conclusion as you read, though. The story shifts more and more away from the setting and the world-building and towards character stories, and while Lawrence's characters are fine, they're not that novel. I am happy to read about Clovis and Arpix, but I can read variations of that story in a lot of places. Livira never recovers her dynamism and drive from the first book, and there is much less beneath Yute's thoughtful calm than I was hoping to find. I think Lawrence knows that the story was not entirely working because the narrative voice becomes more strident as the morality becomes less interesting. I know of only one fantasy author who can make this type of overbearing and freighted narrative style work, and Lawrence is sadly not Guy Gavriel Kay.

This is not a bad book. It is an enjoyable adventure story on its own terms, with some moments of real beauty and awe and a handful of memorable characters, somewhat undermined by a painfully obvious and unoriginal moral frame. It's only a disappointment in the context of what came before it, and it is far from the first series conclusion that doesn't quite live up to the earlier volumes. I'm glad that I read it, and the series as a whole, and I do appreciate that Lawrence brought the whole series to a firm and at least somewhat satisfying conclusion in the promised number of volumes. But I do wish the series as a whole had been as special as the first book.

Rating: 6 out of 10

365 TomorrowsThe Robot

Author: Kelleigh Cram I told my daughter I didn’t want the dang thing but you know kids; they understand technology and we are just senile. The robot folds my clothes, which I must admit is nice. The shirts are stacked so precisely I just take whichever one is on top, not wanting to mess up […]

The post The Robot appeared first on 365tomorrows.

,

David BrinThis class war has no memory – and that could kill us

Nathan Gardels – editor of Noema magazine – offers in the latest issue a glimpse of the latest philosopher with a theory of history, or historiography. One that I'll briefly critique soon, as it relates much to today's topic. But first...

In a previous issue, Gardels offered valuable and wise insights about America’s rising cultural divide, leading to what seems to be a rancorous illiberal democracy.  

Any glance at the recent electoral stats shows that while race & gender remain important issues, they did not affect outcomes as much as a deepening polar divide between America’s social castes, especially the less-educated vs. more-educated. 


Although he does not refer directly to Marx, he is talking about a schism that my parents understood... between the advanced proletariat and the ignorant lumpenproletariat.


Hey, this is not another of my finger-wagging lectures, urging you all to at least understand some basic patterns that the WWII generation knew very well, when they designed the modern world. Still, you could start with Nathan's essay...


...though alas, in focusing on that divide, I'm afraid Nathan accepts an insidious premise. Recall that there is a third party to this neo-Marxian class struggle, that so many describe as simply polar. 

 


== Start by stepping way back == 


There’s a big context, rooted in basic biology. Nearly all species have their social patterns warped by male reproductive strategies, mostly by males applying power against competing males.  


(Regrettable? Sure. Then let's overrule Nature by becoming better. But that starts by looking at and understanding evolution.)


Among humans, this manifested for much more than 6000 years as feudal dominance by local gangs, then aristocracies, and then kings intent upon one central goal -- to ensure that their sons would inherit power.


Looking across all that time, till the near-present, I invite you to find any exceptions among societies with agriculture. That is, other than Periclean Athens and (maybe) da Vinci's Florence. This pattern - dominating nearly all continents and 99% of cultures across those 60 centuries - is a dismal litany of malgovernance called 'history'. 

Alas, large-scale history is never (and I mean never) discussed these days, even though variants of feudalism make up the entire backdrop -- the default human condition -- against which our recent Enlightenment has been a miraculous - but always threatened - experimental alternative. 


The secret sauce of the Enlightenment, described by Adam Smith and established (at first crudely) by the U.S. Founders, consists of flattening the caste-order. Breaking up power into rival elites -- siccing them on each other in fair competition, and basing success far less on inheritance than on other traits. That, plus the empowerment of new players... an educated meritocracy in science, commerce, civil service and even the military. 


This achievement did augment with each generation – way too slowly, but incrementally – till the World War II Greatest Generation’s GI Bill and massive universities and then desegregation took it skyward, making America truly the titan of all ages and eras.


Karl Marx - whose past-oriented appraisals of class conflict were brilliant - proved to be a bitter, unimaginative dope when it came to projecting forward the rise of an educated middle class... 


…which was the great innovation of the Roosevelteans, inviting the working classes into a growing and thriving middle class...

... an unexpected move that consigned Marx to the dustbin for 80 years... 

... till his recent resurrection all around the globe, for reasons given below.



== There are three classes tussling here, not two ==


Which brings us to where Nathan Gardels's missive is just plain wrong, alas. Accepting a line of propaganda that is now universally pervasive, he asserts that two – and only two – social classes are involved in a vast, socially antagonistic and polar struggle.


Are the lower middle classes (lumpenproletariat) currently at war against 'snooty fact elites'?  Sure, they are!  But so many post-mortems of the recent U.S. election blame the fact-professionals themselves, for behaving in patronizing ways toward working stiffs. 


Meanwhile, such commentaries leave out entirely any mention of a 3rd set of players...


... the oligarchs, hedge lords, inheritance brats, sheiks and “ex”-commissars who have united in common cause. Those who stand most to benefit from dissonance within the bourgeoisie! 


Elites who have been the chief beneficiaries of the last 40 years of 'supply side' and other tax grifts. Whose wealth disparities long ago surpassed those preceding the French Revolution. Many of whom are building lavish ‘prepper bunkers.' And who now see just one power center blocking their path to complete restoration of the default human system – feudal rule by inherited privilege. 


(I portrayed this - in detail - in Existence.)


That obstacle to feudal restoration? The fact professionals, whose use of science, plus rule-of-law and universities – plus uplift of poor children - keeps the social flatness prescription of Adam Smith alive. 


And hence, those elites lavishly subsidize a world campaign to rile up lumpenprol resentment against science, law, medicine, civil servants... and yes, now the FBI and Intel and military officer corps. 


A campaign that's been so successful that the core fact of this recent election – the way all of the adults in the first Trump Administration have denounced him – is portrayed as a feature by today’s Republicans, rather than a fault. And yes, that is why none of the new Trump Appointees will ever be adults-in-the-room.



== The ultimate, ironic revival of Marx, by those who should fear him most ==


Seriously. You can't see this incitement campaign in every evening's tirades, on Fox? Or spuming across social media, where ‘drinking the tears of know-it-alls’ is the common MAGA victory howl? 


A hate campaign against snobby professionals that is vastly more intensive than any snide references to race or gender? 


Try actually counting the minutes spent exploiting the natural American SoA reflex (Suspicion of Authority) that I discuss in Vivid Tomorrows.  A reflex which could become dangerous to oligarchs, if ever it turned on them! 


And hence it must be diverted into rage and all-out war vs. all fact-using professions, from science and teaching, medicine and law and civil service to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on terror.


To be clear, there are some professionals who have behaved stupidly, looking down their noses at the lower middle class.


Just as there are poor folks who appreciate their own university-educated kids, instead of resenting them. 


And yes, there are scions of inherited wealth or billionaires (we know more than a couple!) who are smart and decent enough to side with an Enlightenment that's been very good to them.


Alas, the agitprop campaign that I described here has been brilliantly successful, including massively popular cultural works extolling feudalism as the natural human form of governance. (e.g. Tolkien, Dune, Star Wars, Game of Thrones... and do you seriously need more examples in order to realize that it's deliberate?)


They aren’t wrong! Feudalism is the ‘natural’ form of human governance. 


In fact, its near universality may be a top theory to explain the Fermi Paradox! 


… A trap/filter that prevents any race from rising to the stars.



== Would I rather not have been right? ==


One of you pointed out that "Paul Krugman's post today echoes Dr B's warnings about MAGA vs Science."


"But why do our new rulers want to destroy science in America? Sadly, the answer is obvious: Science has a tendency to tell you things you may not want to hear. ....
And one thing we know about MAGA types is that they are determined to hold on to their prejudices. If science conflicts with those prejudices, they don’t want to know, and they don’t want anyone else to know either."


The smartest current acolyte of Hari Seldon. Except maybe for Robert Reich. And still, they don't see the big picture.



== Stop giving the first-estate a free pass ==


And so, I conclude. 


Whenever you find yourself discussing class war between the lower proletariats and snooty bourgeoisie, remember that the nomenclature – so strange and archaic-sounding, today – was quite familiar to our parents and grandparents.  


Moreover, it included a third caste! The almost perpetual winners, across 600 decades. The bane on fair competition that was diagnosed by both Adam Smith and Karl Marx. And one that's deeply suicidal, as today's moguls - masturbating to the chants of flatterers - seem determined to repeat every mistake that led to tumbrels and guillotines.


With some exceptions – those few who are truly noble of mind and heart – they are right now busily resurrecting every Marxian scenario from the grave… 

 … or from torpor where they had been cast by the Roosevelteans. 


And the rich fools are doing so by fomenting longstanding cultural grudges for – or against – modernity. The same modernity that gave them everything they have and that laid all of their golden eggs.


If anything proves the inherent stupidity of that caste – most of them, anyway – it is their ill-education about Marx! And what he will mean to new generations, if the Enlightenment cannot be recharged and restored enough to put old Karl back to sleep.



Planet DebianRussell Coker: Silly Job Titles

Many years ago I was on a programming project porting code from OS/2 1.x to NT. When I was there they suddenly decided to make a database of all people and get job titles for everyone – apparently the position description used when advertising the jobs wasn't sufficient. When I got given a clipboard with a form to write my details I looked at what everyone else had done. It was a heap of ridiculous propaganda with everyone trying to put in synonyms for "senior" or "skillful" and listing things that they were allegedly in charge of. There were even some people trying to create impressive titles for their managers to try and suck up.

I chose the job title “coder” as the shortest and most accurate description of what I was doing. I had to confirm that yes I really did want to put a one word title and not a paragraph of frippery. Part of my intent was to mock the ridiculously long job titles used by others but I don’t think anyone realised that.

I was reminded of that company when watching a video of a Trump cabinet meeting where everyone had to tell Trump how great he is. I think that a programmer who wants to be known as a “Principal Solutions Architect of Advanced Algorithmic Systems and Digital Innovation Strategy” (suggested by ChatGPT because I can’t write such ridiculous things) is showing a Trump level of lack of self esteem.

When job titles are discussed there’s always someone who will say “what if my title isn’t impressive enough and I don’t get a pay rise”. If a company bases salaries on how impressive job titles are and not on whether people actually do good work then it’s a very dysfunctional workplace. But dysfunctional companies aren’t uncommon so it’s something you might reasonably have to do. In the company in question I could have described my work as “lead debugger” as I ended up doing most of the debugging on that project (as on many programming projects). The title “lead debugger” accurately described a significant part of my work and it’s work that is essential to project completion.

What do you think are the worst job titles?

365 TomorrowsRecursive Dynamic Programming

Author: R. J. Erbacher He turned the corner at a run and slammed his shoulder into the white partition leaving a smear of sweat and blood but kept going. His bare feet slapped franticly on the tile-like floor as he sprinted down the hall. He wished he could wake, but he knew this wasn’t a […]

The post Recursive Dynamic Programming appeared first on 365tomorrows.

Planet DebianRuss Allbery: Review: Paper Soldiers

Review: Paper Soldiers, by Saleha Mohsin

Publisher: Portfolio
Copyright: 2024
ISBN: 0-593-53912-5
Format: Kindle
Pages: 250

The subtitle of Paper Soldiers is "How the Weaponization of the Dollar Changed the World Order," which may give you the impression that this book is about US use of the dollar system for political purposes such as sanctions. Do not be fooled like I was; this subtitle is, at best, deceptive. Coverage of the weaponization of the dollar is superficial and limited to a few chapters. This book is, instead, a history of the strong dollar policy told via a collection of hagiographies of US Treasury Secretaries and written with all of the skeptical cynicism of a poleaxed fawn.

There is going to be some grumbling about the state of journalism in this review.

Per the author's note, Saleha Mohsin is the Bloomberg News beat reporter for the US Department of the Treasury. That is, sadly, exactly what this book reads like: routine beat reporting. Mohsin asked current and former Treasury officials what they were thinking at various points in history and then wrote down their answers without, so far as I can tell, considering any contradictory evidence or wondering whether they were telling the truth. Paper Soldiers does contain extensive notes (those plus the index fill about forty pages), so I guess you could do the cross-checking yourself, although apparently most of the interviews for this book were "on background" and are therefore unattributed. (Is this weird? I feel like this is weird.) Mohsin adds a bit of utterly conventional and uncritical economic framing and casts the whole project in the sort of slightly breathless and dramatized prose style that infests routine news stories in the US.

I find this style of book unbelievably frustrating because it represents such a wasted opportunity. To me, the point of book-length journalism is precisely to not write in this style. When you're trying to crank out two or three articles a week covering current events, I understand why there isn't always space or time to go deep into background, skepticism, and contrary opinions. But when you expand that material into a book, surely the whole point is to take the time to do some real reporting. Dig into what people told you, see if they're lying, talk to the people who disagree with them, question the conventional assumptions, and show your work on the page so that the reader is smarter after finishing your book than they were before they started. International political economics is not a sequence of objective facts. It's a set of decisions made in pursuit of economic and political theories that are disputed and arguable, and I think you owe the reader some sense of the argument and, ideally, some defensible position on the merits that is more than a transcription of your interviews.

This is... not that.

It's a power loop that the United States still enjoys today: trust in America's dollar (and its democratic government) allows for cheap debt financing, which buys health care built on the most advanced research and development and inventions like airplanes and the iPhone. All of this is propelled by free market innovation and the superpowered strength to keep the nation safe from foreign threats. That investment boosts the nation's economic, military, and technological prowess, making its economy (and the dollar) even more attractive.

Let me be precise about my criticism. I am not saying that every contention in the above excerpt is wrong. Some of them are probably correct; more of them are at least arguable. This book is strictly about the era after Bretton Woods, so using airplanes as an example invention is a bizarre choice, but sure, whatever, I get the point. My criticism is that paragraphs like this, as written in this book, are not introductions to deeper discussions that question or defend that model of economic and political power. They are simple assertions that stand entirely unsupported. Mohsin routinely writes paragraphs like the above as if they are self-evident, and then immediately moves on to the next anecdote about Treasury dollar policy.

Take, for example, the role of the US dollar as the world's reserve currency, which roughly means that most international transactions are conducted in dollars and numerous countries and organizations around the world hold large deposits in dollars instead of in their native currency. The conventional wisdom holds that this is a great boon to the US economy, but there are also substantive critiques and questions about that conventional wisdom. You would never know that from this book; Mohsin asserts the conventional wisdom about reserve currencies without so much as a hint that anyone might disagree.

For example, one common argument, repeated several times by Mohsin, is that the US can only get away with the amount of deficit spending and cheap borrowing that it does because the dollar is the world's reserve currency. Consider two other countries whose currencies are clearly not the international reserve currency: Japan and the United Kingdom. The current US debt to GDP ratio is about 125% and the current interest rate on US 10-year bonds is about 4.2%. The current Japanese debt to GDP ratio is about 260% and the current interest rate on Japanese 10-year bonds is about 1.2%. The current UK debt to GDP ratio is 160% and the current interest rate on UK 10-year bonds is 4.5%. Are you seeing the dramatic effects of the role of the dollar as reserve currency? Me either.

Again, I am not saying that this is a decisive counter-argument. I am not an economist; I'm just some random guy on the Internet who finds macroeconomics interesting and reads a few newsletters. I know the Japanese bond market is unusual in ways I'm not accounting for. There may well be compelling arguments for why reserve currency status matters immensely for US borrowing capacity. My point is not that Mohsin is wrong; my point is that you have to convince me and she doesn't even try.

Nowhere in this book is a serious effort to view conventional wisdom with skepticism or confront it with opposing arguments. Instead, this book is full of blithe assertions that happen to support the narrative the author was fed by a bunch of former Treasury officials and does not appear to question in any way. I want books like this to increase my understanding of the world. To do that, they need to show me multiple sides of debates and teach me how to evaluate evidence, not simply reinforce a superficial conventional wisdom.

It doesn't help that whatever fact-checking process this book went through left some glaring errors. For example, on the Plaza Accord:

With their central banks working in concert, enough dollars were purchased on the open market to weaken the currency, making American goods more affordable for foreign buyers.

I don't know what happened after the Plaza Accord (I read books like this to find out!), but clearly it wasn't that. This is utter nonsense. Buying dollars on the open market would increase the value of the dollar, not weaken it; this is basic supply and demand that you learn in the first week of a college economics class. This is the type of error that makes me question all the other claims in the book that I can't easily check.

Mohsin does offer a more credible explanation of the importance of a reserve currency late in the book, although it's not clear to me that she realizes it: The widespread use of the US dollar gives US government sanctions vast international reach, allowing the US to punish and coerce its enemies through the threat of denying them access to the international financial system. Now we're getting somewhere! This is a more believable argument than a small and possibly imaginary effect on government borrowing costs. It is clear why a bellicose US government, particularly one led by advocates of a unitary executive theory that elevates the US president to a status of near-emperor, want to turn the dollar into a weapon of international control. It's much less obvious how comfortable the rest of the world should be with that concentration of power.

This would be a fascinating topic for a journalistic non-fiction book. Some reporter should dive deep into the mechanics of sanctions and ask serious questions about the moral, practical, and diplomatic consequences of this aggressive wielding of US power. One could give it a title like Paper Soldiers that reflected the use of banks and paper currency as foot soldiers enforcing imperious dictates on the rest of the world. Alas, apart from a brief section in which the US scared other countries away from questioning the dollar, Mohsin does not tug at this thread. Maybe someone should write that book someday.

As you will have gathered by now, I think this is a bad book and I do not recommend that you read it. Its worst flaw is one that it shares with far too much mainstream US print and TV journalism: the utter credulity of the author. I have the old-fashioned belief that a journalist should be more than a transcriptionist for powerful people. They should be skeptical, they should assume public figures may be lying, they should look for ulterior motives, and they should try to bring the reader closer to some objective truths about the world, wherever they may lie.

I have no solution for this degradation of journalism. I'm not even sure that it's a change. There were always reporters eager to transcribe the voice of power into the newspaper, and if we remember the history of journalism differently, that may be because we have elevated the rare exceptions and forgotten the average. But after watching too many journalists I once respected start parroting every piece of nonsense someone tells them, from NFTs to UFOs to the existential threat of AI, I've concluded that the least I can do as a reader is to stop rewarding reporters who cannot approach powerful subjects with skepticism, suspicion, and critical research.

I failed in this case, but perhaps I can serve as a warning to others.

Rating: 3 out of 10

,

Planet DebianJonathan Dowland: Korg Minilogue XD

I didn't buy the Arturia Microfreak or the Behringer Model-D; I bought a Korg Minilogue XD.

Korg Minilogue XD, and Zoom R8


I wanted an all-in-one unit which meant a built-in keyboard. I was keen on analogue oscillators, partly for the sound, but mostly to ensure that most of the controls were immediately accessible. The Minilogue-XD has two analogue oscillators and an analogue filter. It also has some useful, pure digital stuff: post-effects (chorus, flanger, echo, etc.); and a third, digital oscillator.

The digital oscillator is programmable. There's an SDK, shared between the Minilogue-XD and some other Korg synths (at least the Prologue and NTS-1). There's a cottage industry of independent musicians writing and selling digital patches, e.g. STRING User Oscillator. Here's an example of a drone programmed using the SDK for the NTS-1:

Eventually I expect to have fun exploring the SDK, but for now I'm keeping it firmly away from computers (hence the Zoom R8 multitrack recorder in the above image: more on that in a future blog post). The Korg has been gathering dust whilst I was writing up, but now I hope to find some time to play.

Planet DebianDaniel Lange: Compiling and installing the Gentoo Linux kernel on emerge without genkernel (part 2)

The first install of a Gentoo kernel needs to be somewhat manual if you want to optimize the kernel for the (virtual) system it boots on.

In part 1 I laid out how to improve the subsequent emerges of sys-kernel/gentoo-sources with a small drop in script to build the kernel as part of the ebuild.

Since the end of last year, Gentoo also supports a less manual way of emerging a kernel:

The following kernel blends are available:

  • sys-kernel/gentoo-kernel (the Gentoo kernel you can configure and compile locally - typically this is what you want if you run Gentoo)
  • sys-kernel/gentoo-kernel-bin (a pre-compiled Gentoo kernel similar to what genkernel would get you)
  • sys-kernel/vanilla-kernel (the upstream Linux kernel, again configurable and locally compiled)

So a quick walk-through for the gentoo-kernel variant:

1. Set up the correct package USE flags

We do not want an initrd and we want our own config to be re-used so:

echo "sys-kernel/gentoo-kernel -initramfs savedconfig" >> /etc/portage/package.use/gentoo-kernel

2. Preseed the saved config

The current kernel config needs to be saved as the initial savedconfig so it is found and applied for our emerge below:

mkdir -p /etc/portage/savedconfig/sys-kernel
cp -n "/usr/src/linux-$(uname -r)/.config" /etc/portage/savedconfig/sys-kernel/gentoo-kernel

3. Emerge the new kernel

emerge sys-kernel/gentoo-kernel

4. Update grub and reboot

Unfortunately this ebuild does not update grub, so we have to run grub-mkconfig manually. This can again be automated via a post_pkg_postinst() script; see step 7 below.

But for now, let's do it manually:

grub-mkconfig -o /boot/grub/grub.cfg
# All fine? Time to reboot the machine:
reboot

5. (Optional) Prepare for the next kernel build

Run etc-update and merge the new kernel config entries into your savedconfig.

Screenshot of etc-update

The kernel should auto-build once new versions become available via portage.

Again the etc-update can be automated if you feel that is sufficiently safe to do in your environment. See step 7 below for details.

6. (Optional) Remove the old kernel sources

If you want to switch from the method based on gentoo-sources to the gentoo-kernel one, you can remove the kernel sources:

emerge -C "=sys-kernel/gentoo-sources-5*"

Be sure to update the /usr/src/linux symlink to the new kernel sources directory from gentoo-kernel, e.g.:

rm /usr/src/linux; ln -s "/usr/src/linux-$(uname -r)" /usr/src/linux

This may be a good time for a bit more house-keeping: Clean up a bit in /usr/src/ to remove old build artefacts, /boot/ to remove old kernels and /lib/modules/ to get rid of old kernel modules.

7. (Optional) Further automate the ebuild

In part 1 we automated the kernel compile, install and a bit more via a helper function for post_pkg_postinst().

We can do the similarly for what is (currently) missing from the gentoo-kernel ebuilds:

Create /etc/portage/env/sys-kernel/gentoo-kernel with the following:

post_pkg_postinst() {
        etc-update --automode -5 /etc/portage/savedconfig/sys-kernel
        grub-mkconfig -o /boot/grub/grub.cfg
}

The upside of gentoo-kernel over gentoo-sources is that you can put "config override files" in /etc/kernel/config.d/. That way you theoretically profit from config improvements made by the upstream developers. See the Gentoo distribution kernel documentation for a sample snippet. I am fine with savedconfig for now but it is nice that Gentoo provides the flexibility to support both approaches.
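For completeness, here is a minimal sketch of such an override file; the file name 99-local.config and the two options shown are made-up examples, not values taken from the Gentoo documentation:

mkdir -p /etc/kernel/config.d
cat > /etc/kernel/config.d/99-local.config <<'EOF'
# local overrides merged into the distribution kernel config at build time
CONFIG_KVM_GUEST=y
# CONFIG_DRM is not set
EOF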

Planet DebianDaniel Lange: Netatalk 3.1.9 .debs for Debian Jessie available (Apple Timemachine backup to Linux servers)

Netatalk 3.1.9 has been released with two interesting fixes / amendments:

  • FIX: afpd: fix "admin group" option
  • NEW: afpd: new options "force user" and "force group"

Here are the full release notes for 3.1.9 for your reading pleasure.

Due to upstream now differentiating between SysVinit and systemd packages I've followed that for simplicity's sake and built libgcrypt-only builds. If you need the openssl-based tools continue to use the 3.1.8 openssl build until you have finished your migration to a safer password storage.

Warning: If you are new to Netatalk3 on Debian Jessie, be sure to read the original blog post before installing for the first time!
You'll get nowhere if you install the .debs below and don't know about the upgrade path. So RTFA.

Now with that out of the way:

Continue reading "Netatalk 3.1.9 .debs for Debian Jessie available (Apple Timemachine backup to Linux servers)"

Planet DebianDaniel Lange: Creating iPhone/iPod/iPad notes from the shell

I found a very nice script to create Notes on the iPhone from the command line by hossman over at Perlmonks.

For some weird reason Perlmonks does not allow me to reply with amendments even after I created an account. I can "preview" a reply at Perlmonks but after "create" I get "Permission Denied". Duh. vroom, if you want screenshots, contact me on IRC :-).

As I wrote everything up for the Perlmonks reply anyways, I'll post it here instead.

Against hossman's version 32 from 2011-02-22 I changed the following:

  • removed .pl from filename and documentation
  • added --list to list existing notes
  • added --hosteurope for Hosteurope mail account preferences and with it a sample how to add username and password into the script for unattended use
  • made the "Notes" folder the default (so -f Notes becomes obsolete)
  • added some UTF-8 conversions to make Umlauts work better (this is a mess in perl, see Jeremy Zawodny's writeup and Ivan Kurmanov's blog entry for some further solutions). Please try combinations of utf8::encode and ::decode, binmode utf8 for STDIN and/or STDOUT and the other hints from these linked blog entries in your local setup to get Umlauts and other non-7bit ASCII characters working. Be patient. There's more than one way to do it :-).

I /msg'd hossman the URL of this blog entry.

Continue reading "Creating iPhone/iPod/iPad notes from the shell"

Planet DebianDaniel Lange: The Stallman wars

So, 2021 isn't bad enough yet, but don't despair, people are working to fix that:

Welcome to the Stallman wars

Team Cancel: https://rms-open-letter.github.io/ (repo)

Team Support: https://rms-support-letter.github.io/ (repo)

Final stats are:

Team Cancel:  3019 signers from 1415 individual commit authors
Team Support: 6853 signers from 5418 individual commit authors

Git shortlog (Top 10):

rms_cancel.git (Last update: 2021-08-16 00:11:15 (UTC))
  1230  Neil McGovern
   251  Joan Touzet
    99  Elana Hashman
    73  Molly de Blanc
    36  Shauna
    19  Juke
    18  Stefano Zacchiroli
    17  Alexey Mirages
    16  Devin Halladay
    14  Nader Jafari

rms_support.git (Last update: 2021-09-29 07:14:39 (UTC))
  1821  shenlebantongying
  1585  nukeop
  1560  Ivanq
  1057  Victor
   880  Job Bautista
   123  nekonee
   101  Victor Gridnevsky
    41  Patrick Spek
    25  Borys Kabakov
    17  KIM Taeyeob

(data as of 2021-10-01)

Technical info:
Signers are counted from their "Signed / Individuals" sections. Commits are counted with git shortlog -s.
Team Cancel also has organizational signatures with Mozilla, Suse and X.Org being among the notable signatories. The 16 original signers of the Cancel petition are added in their count. Neil McGovern, Juke and shenlebantongying need .mailmap support as they have committed with different names.
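For reference, the commit counts above can be reproduced with a one-liner along these lines, run inside a checkout of either letter repository (a minimal sketch; the exact numbers depend on when you clone):

# commits per author, most active first, top ten entries
git shortlog -s -n | head -10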

Further reading:

12.04.2021 Statements from the accused

18.04.2021 Debian General Resolution

The Debian General Resolution (GR) vote of the developers has concluded to not issue a public statement at all, see https://www.debian.org/vote/2021/vote_002#outcome for the results.

It is better to keep quiet and seem ignorant than to speak up and remove all doubt.

See Quote Investigator for the many people that rephrased these words over the centuries. They still need to be recalled more often as too many people in the FLOSS community have forgotten about that wisdom...

01.10.2021 Final stats

It seems enough dust has settled on this unfortunate episode of mob activity now. Hence I stopped the cronjob that updated the stats above regularly. Team Support has kept adding signatures all the time while Team Cancel gave up very soon after the FSF decided to stand with Mr. Stallman. So this battle was decided within two months. The stamina of the accused and determined support from some dissenting web devs trumped the orchestrated outrage of well known community figures and their publicity power this time. But history teaches us that does not mean the war is over. There will be a next opportunity to call for arms. And people will call. Unfortunately.

01.11.2024 Team Cancel is opening a new round; Team Support responds by exposing the author of "The Stallman report"

I hate to be right. Three years later than the above:

An anonymous member of team Cancel has published https://stallman-report.org/ [local pdf mirror, 504kB] to "justify our unqualified condemnation of Richard Stallman". It contains a detailed collection of quotes that are used to allege supporting (sexual) misconduct. The demand is again that Mr. Stallman "step[s] down from all positions at the FSF and the GNU project". Addressing him: "the scope and extent of your misconduct disqualifies you from formal positions of power within our community indefinitely".

Team Support has not issued a rebuttal (yet?) but has instead identified the anonymous author as Drew "sircmpwn" DeVault, a gifted software developer, but also a vocal and controversial figure in the Open Source / Free Software space. Ironically quite similar to Richard "rms" Stallman. Their piece is published at https://dmpwn.info/ [local pdf mirror, 929kB]. They also allege a proximity of Mr. DeVault to questionable "Lolita" anime preferences and societal positions, in order to disqualify him.

Cryptogram Privacy for Agentic AI

Sooner or later, it’s going to happen. AI systems will start acting as agents, doing things on our behalf with some degree of autonomy. I think it’s worth thinking about the security of that now, while it’s still a nascent idea.

In 2019, I joined Inrupt, a company that is commercializing Tim Berners-Lee’s open protocol for distributed data ownership. We are working on a digital wallet that can make use of AI in this way. (We used to call it an “active wallet.” Now we’re calling it an “agentic wallet.”)

I talked about this a bit at the RSA Conference earlier this week, in my keynote talk about AI and trust. Any useful AI assistant is going to require a level of access—and therefore trust—that rivals what we currently give our email provider, social network, or smartphone.

This Active Wallet is an example of an AI assistant. It’ll combine personal information about you, transactional data that you are a party to, and general information about the world. And use that to answer questions, make predictions, and ultimately act on your behalf. We have demos of this running right now. At least in its early stages. Making it work is going to require an extraordinary amount of trust in the system. This requires integrity. Which is why we’re building protections in from the beginning.

Visa is also thinking about this. It just announced a protocol that uses AI to help people make purchasing decisions.

I like Visa’s approach because it’s an AI-agnostic standard. I worry a lot about lock-in and monopolization of this space, so anything that lets people easily switch between AI models is good. And I like that Visa is working with Inrupt so that the data is decentralized as well. Here’s our announcement about its announcement:

This isn’t a new relationship—we’ve been working together for over two years. We’ve conducted a successful POC and now we’re standing up a sandbox inside Visa so merchants, financial institutions and LLM providers can test our Agentic Wallets alongside the rest of Visa’s suite of Intelligent Commerce APIs.

For that matter, we welcome any other company that wants to engage in the world of personal, consented Agentic Commerce to come work with us as well.

I joined Inrupt years ago because I thought that Solid could do for personal data what HTML did for published information. I liked that the protocol was an open standard, and that it distributed data instead of centralizing it. AI agents need decentralized data. “Wallet” is a good metaphor for personal data stores. I’m hoping this is another step towards adoption.

Planet DebianBen Hutchings: FOSS activity in April 2025

I also co-organised a Debian BSP (Bug-Squashing Party) last weekend, for which I will post a separate report later.

Planet DebianDaniel Lange: Cleaning a broken GnuPG (gpg) key

I've long said that the main tools in the Open Source security space, OpenSSL and GnuPG (gpg), are broken and only a complete re-write will solve this. And that is still pending as nobody came forward with the funding. It's not a sexy topic, so it has to get really bad before it'll get better.

Gpg has a UI that is close to useless. That won't substantially change with more bolted-on improvements.

Now Robert J. Hansen and Daniel Kahn Gillmor had somebody add ~50k signatures (read 1, 2, 3, 4 for the g{l}ory details) to their keys and - oops - they say that breaks gpg.

But does it?

I downloaded Robert J. Hansen's key off the SKS-Keyserver network. It's a nice 45MB file when de-ascii-armored (gpg --dearmor broken_key.asc ; mv broken_key.asc.gpg broken_key.gpg).

Now a friendly:

$ /usr/bin/time -v gpg --no-default-keyring --keyring ./broken_key.gpg --batch --quiet --edit-key 0x1DCBDC01B44427C7 clean save quit

pub  rsa3072/0x1DCBDC01B44427C7
     created: 2015-07-16  expires: never       usage: SC  
     trust: unknown       validity: unknown
sub  ed25519/0xA83CAE94D3DC3873
     created: 2017-04-05  expires: never       usage: S  
sub  cv25519/0xAA24CC81B8AED08B
     created: 2017-04-05  expires: never       usage: E  
sub  rsa3072/0xDC0F82625FA6AADE
     created: 2015-07-16  expires: never       usage: E  
[ unknown ] (1). Robert J. Hansen <rjh@sixdemonbag.org>
[ unknown ] (2)  Robert J. Hansen <rob@enigmail.net>
[ unknown ] (3)  Robert J. Hansen <rob@hansen.engineering>

User ID "Robert J. Hansen <rjh@sixdemonbag.org>": 49705 signatures removed
User ID "Robert J. Hansen <rob@enigmail.net>": 49704 signatures removed
User ID "Robert J. Hansen <rob@hansen.engineering>": 49701 signatures removed

pub  rsa3072/0x1DCBDC01B44427C7
     created: 2015-07-16  expires: never       usage: SC  
     trust: unknown       validity: unknown
sub  ed25519/0xA83CAE94D3DC3873
     created: 2017-04-05  expires: never       usage: S  
sub  cv25519/0xAA24CC81B8AED08B
     created: 2017-04-05  expires: never       usage: E  
sub  rsa3072/0xDC0F82625FA6AADE
     created: 2015-07-16  expires: never       usage: E  
[ unknown ] (1). Robert J. Hansen <rjh@sixdemonbag.org>
[ unknown ] (2)  Robert J. Hansen <rob@enigmail.net>
[ unknown ] (3)  Robert J. Hansen <rob@hansen.engineering>

        Command being timed: "gpg --no-default-keyring --keyring ./broken_key.gpg --batch --quiet --edit-key 0x1DCBDC01B44427C7 clean save quit"
        User time (seconds): 3911.14
        System time (seconds): 2442.87
        Percent of CPU this job got: 99%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 1:45:56
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 107660
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 1
        Minor (reclaiming a frame) page faults: 26630
        Voluntary context switches: 43
        Involuntary context switches: 59439
        Swaps: 0
        File system inputs: 112
        File system outputs: 48
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0
 

And the result is a nicely useable 3835 byte file of the clean public key. If you supply a keyring instead of --no-default-keyring it will also keep the non-self signatures that are useful for you (as you apparently know the signing party).

So it does not break gpg. It does break things that call gpg at runtime and not asynchronously. I heard Enigmail is affected, quelle surprise.

Now the main problem here is the runtime. 1h45min is just ridiculous. As Filippo Valsorda puts it:

Someone added a few thousand entries to a list that lets anyone append to it. GnuPG, software supposed to defeat state actors, suddenly takes minutes to process entries. How big is that list you ask? 17 MiB. Not GiB, 17 MiB. Like a large picture. https://dev.gnupg.org/T4592

If I were a gpg / SKS keyserver developer, I'd

  • speed this up so the edit-key run above completes in less than 10 s (just getting rid of the lseek/read dance and deferring all time-based decisions should get close)
  • (ideally) make the drop-sig import-filter syntax useful (date-ranges, non-reciprocal signatures, ...)
  • clean affected keys on the SKS keyservers (needs coordination of sysops, drop servers from unreachable people)
  • (ideally) use the opportunity to clean all keyserver filesystems and remove the "message board over PGP keyservers" keys, too
  • only accept new keys and new signatures on keys extending the strong set (rather small change to the existing codebase)

That way another key can only be added to the keyserver network if it contains at least one signature from a previously known strong-set key. Attacking the keyserver network would become at least non-trivial. And the web-of-trust thing may make sense again.

Updates

09.07.2019

GnuPG 2.2.17 has been released with another set of quickly bolted together fixes:

   gpg: Ignore all key-signatures received from keyservers.  This
    change is required to mitigate a DoS due to keys flooded with
    faked key-signatures.  The old behaviour can be achieved by adding
    keyserver-options no-self-sigs-only,no-import-clean
    to your gpg.conf.  [#4607]
   gpg: If an imported keyblocks is too large to be stored in the
    keybox (pubring.kbx) do not error out but fallback to an import
    using the options "self-sigs-only,import-clean".  [#4591]
   gpg: New command --locate-external-key which can be used to
    refresh keys from the Web Key Directory or via other methods
    configured with --auto-key-locate.
   gpg: New import option "self-sigs-only".
   gpg: In --auto-key-retrieve prefer WKD over keyservers.  [#4595]
   dirmngr: Support the "openpgpkey" subdomain feature from
    draft-koch-openpgp-webkey-service-07. [#4590].
   dirmngr: Add an exception for the "openpgpkey" subdomain to the
    CSRF protection.  [#4603]
   dirmngr: Fix endless loop due to http errors 503 and 504.  [#4600]
   dirmngr: Fix TLS bug during redirection of HKP requests.  [#4566]
   gpgconf: Fix a race condition when killing components.  [#4577]

Bug T4607 shows that these changes are all but well thought-out. They introduce artificial limits, like 64kB for WKD-distributed keys or 5MB for local signature imports (Bug T4591) which weaken the web-of-trust further.

I recommend to not run gpg 2.2.17 in production environments without extensive testing as these limits and the unverified network traffic may bite you. Do validate your upgrade with valid and broken keys that have segments (packet groups) surpassing the above mentioned limits. You may be surprised what gpg does. On the upside: you can now refresh keys (sans signatures) via WKD. So if your buddies still believe in limiting their subkey validities, you can more easily update them bypassing the SKS keyserver network. NB: I have not tested that functionality. So test before deploying.
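As a starting point for such a validation, something along these lines exercises both the new import limits and the WKD refresh path. This is only a sketch against a scratch keyring, reusing the key file and address from earlier in this post; results will depend on your build and configuration:

# import with the new damage-limiting options into a throwaway keyring
gpg --no-default-keyring --keyring ./test-keyring.gpg \
    --import-options self-sigs-only,import-clean --import broken_key.asc

# refresh a key via the Web Key Directory instead of the SKS network
gpg --no-default-keyring --keyring ./test-keyring.gpg \
    --locate-external-key rjh@sixdemonbag.org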

10.08.2019

Christopher Wellons (skeeto) has released his pgp-poisoner tool. It is a go program that can add thousands of malicious signatures to a GNUpg key per second. He comments "[pgp-poisoner is] proof that such attacks are very easy to pull off. It doesn't take a nation-state actor to break the PGP ecosystem, just one person and couple evenings studying RFC 4880. This system is not robust." He also hints at the next likely attack vector, public subkeys can be bound to a primary key of choice.

Worse Than FailureError'd: Charge Me

The lights are on here and the roof is intact and I'm grateful. Is anybody home? You decide.

Pharm fan Ian S. clucked "Perhaps they'll put those as dates on my headstone." If you're very lucky.


 

An anonymous reader blew the whistle on their child labor practices. "This institution exclusively uses drivers who aren't legally old enough to drive."


 

Greg A. grumbled "Glad that the important notice that there was no important notice was given such prominence in the official ACT web page." I have nothing more to add.


 

Regular reader Michael R. reported "I can confirm Hermes knows how to navigate the unknown."


 

Finally, faithful follower B.J.H. has been around here long enough to see this one over and over again. "For some reason people keep thinking zip codes are numbers just because they are composed of digits. When EPIC sent paper mail asking for money in December the envelope used a zip code of 1740 (and it was delivered). They solved the leading zero issue by switching to base 36." Or it might just be base 26, no way to tell.


 

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

Planet DebianRuss Allbery: Review: Sixteen Ways to Defend a Walled City

Review: Sixteen Ways to Defend a Walled City, by K.J. Parker

Series: Siege #1
Publisher: Orbit
Copyright: April 2019
ISBN: 0-316-27080-6
Format: Kindle
Pages: 349

Sixteen Ways to Defend a Walled City is... hm, honestly, I'm not sure what the genre of this novel is. It is a story about medieval engineering and siege weapons in a Rome-inspired secondary world that so far as I can tell is not meant to match ours. There is not a hint of magic. It's not technically a fantasy, but it's marketed like a fantasy, and it's not historical fiction nor is it attempting to be alternate history. The most common description is a fantasy of logistics, so I guess I'll go with that, as long as you understand that the fantasy here is of the non-magical sort.

K.J. Parker is a pen name for Tom Holt.

Orhan is Colonel-in-Chief of the Engineers for the Robur empire, even though he's a milkface, not a blueskin like a proper Robur. (Both of those racial terms are quite offensive.) He started out as a slave, learned a trade, joined the navy as a shipwright, and worked his way up the ranks through luck and enemy action. He's canny, practical, highly respected by his men, happy to cheat and steal to get material for his projects and wages for his people, and just wants to build literal bridges. Nice, sturdy bridges that let people get from one place to another the short way.

When this book opens, Orhan is in Classis trying to requisition some rope. He is saved from discovery of his forged paperwork by pirates burning down the warehouse that held all of the rope, and then saved from the pirates by the sorts of coincidences that seem to happen to Orhan all the time. A few subsequent discoveries about what the pirates were after, and news of another unexpected attack on the empire, make Orhan nervous enough that he takes his men to do a job as far away from the City at the heart of the empire as possible. It's just his luck to return in time to find slaughtered troops and to have to sneak his men into a City already under siege.

Sixteen Ways to Defend a Walled City is told in the first person by Orhan, with an internal justification that the reader only discovers at the end of the book. That means your enjoyment of this book is going to depend a lot on how much you like Orhan's voice. This mostly worked for me; his voice is an odd combination of chatty, self-deprecating, and brusque, and it took a bit for me to get used to it, but I came around. This book is clearly competence porn — nearly all the fun of this book is seeing what desperate plan Orhan will come up with next — so it helps that Orhan does indeed come across as competent.

The part that did not work for me was the morality. You would think from the title that would be straightforward: The City is under siege, people want to capture it and kill everyone, Orhan is on the inside, and his job is to keep them out. That would have been the morality of simplistic military fiction, but most of the appeal was in watching the problem-solving anyway.

That's how the story starts, but then Parker started dropping hints of more complexity. Orhan is a disfavored minority and the Robur who run the empire are racist assholes, even though Orhan mostly gets along with the ones who work with him closely. Orhan says a few things that make the reader wonder whether the City warrants defending, and it becomes less clear whether Orhan's loyalties were as solid as they appeared to be. Parker then offers a few moral dilemmas and has Orhan not follow them in the expected directions, making me wonder where Parker was going with the morality of this story.

And then we find out that the answer is nowhere. Parker is going nowhere. None of that setup has a payoff, and the ending is deeply unsatisfying and arguably pointless.

I am not sure this is an objective analysis. This is one of those books where I would not be surprised to see someone else praise its realism. Orhan is in some ways a more likely figure than the typical hero of a book. He likes accomplishing things, he's a cheat and a liar when that serves his purposes, he's loyal to the people he considers friends in a way that often doesn't involve consulting them about what they want, and he makes decisions mostly on vibes and stubbornness. Both his cynicism and his idealism are different types of masks; beneath both, he's an incoherent muddle. You could argue that we're all that sort of muddle, deep down, and the consistent idealists are the unrealistic (and frightening) ones, and I think Parker may be attempting exactly that argument. I know some readers like this sort of fallibly human incoherence.

But wow did I ever loathe this ending because I was not reading this book for a realistic psychological profile of an average guy. I was here for the competence porn, for the fantasy of logistics, for the experience of watching someone have a plan and get shit done. Apparently that extends to needing him to be competent at morality as well, or at least think about it as hard as he thinks about siege weapons.

One of the reasons why I am primarily a genre reader is that I don't read books for depressing psychological profiles. There are enough of those in the news. I read books to spend some time in a world better than mine, where things work out the way that they are supposed to, or at least in a way that's satisfying.

The other place where this book interfered with my vibes is that it's about a war, and a lot of Orhan's projects are finding more efficient ways to kill people. Parker takes a "war is hell" perspective, and Orhan gets deeply upset at the graphic sights of mangled human bodies that are the frequent results of his plans. I feel weird complaining about this because yes, it's good to be aware of the horrific things that we do to other people in wars, but man, I just wanted to watch some effective project management. I want to enjoy unexpected lateral thinking, appreciate the friendly psychological manipulation involved in getting a project to deliver on deadline, and watch someone solve logistical problems. Battlefields provide an endless supply of interesting challenges, but then Parker feels compelled to linger on the brutal consequences of Orhan's ideas and now I'm depressed and sickened rather than enjoying myself.

I really wanted to like this book, and for a lot of the book I did, but that ending was a bottomless pit that sucked away all my enjoyment and retroactively made the rest of the book feel worse. I so wanted Parker to be going somewhere clever and surprising, and the disappointment when none of that happened was intense. This is probably an excessively negative reaction, and I will not be surprised when other people get along with this book better than I did, but not only will I not be recommending it, I'm now rather dubious about reading any more Parker.

Followed by How to Rule an Empire and Get Away With It.

Rating: 5 out of 10

365 TomorrowsAdvanced Entry Level Devices

Author: David C. Nutt My team assembled on the roof of a factory near Prahova, Romania. Our objective was the next building over. Non-descript, a gray cube with the latest security measures at all entrance points, to include the heavily tinted sky lights. That’s why we were going to saw a hole in the roof. Repel […]

The post Advanced Entry Level Devices appeared first on 365tomorrows.

Krebs on SecurityxAI Dev Leaks API Key for Private SpaceX, Tesla LLMs

An employee at Elon Musk’s artificial intelligence company xAI leaked a private key on GitHub that for the past two months could have allowed anyone to query private xAI large language models (LLMs) which appear to have been custom made for working with internal data from Musk’s companies, including SpaceX, Tesla and Twitter/X, KrebsOnSecurity has learned.

Image: Shutterstock, @sdx15.

Philippe Caturegli, “chief hacking officer” at the security consultancy Seralys, was the first to publicize the leak of credentials for an x.ai application programming interface (API) exposed in the GitHub code repository of a technical staff member at xAI.

Caturegli’s post on LinkedIn caught the attention of researchers at GitGuardian, a company that specializes in detecting and remediating exposed secrets in public and proprietary environments. GitGuardian’s systems constantly scan GitHub and other code repositories for exposed API keys, and fire off automated alerts to affected users.

GitGuardian’s Eric Fourrier told KrebsOnSecurity the exposed API key had access to several unreleased models of Grok, the AI chatbot developed by xAI. In total, GitGuardian found the key had access to at least 60 fine-tuned and private LLMs.

“The credentials can be used to access the X.ai API with the identity of the user,” GitGuardian wrote in an email explaining their findings to xAI. “The associated account not only has access to public Grok models (grok-2-1212, etc) but also to what appears to be unreleased (grok-2.5V), development (research-grok-2p5v-1018), and private models (tweet-rejector, grok-spacex-2024-11-04).”

Fourrier found GitGuardian had alerted the xAI employee about the exposed API key nearly two months ago — on March 2. But as of April 30, when GitGuardian directly alerted xAI’s security team to the exposure, the key was still valid and usable. xAI told GitGuardian to report the matter through its bug bounty program at HackerOne, but just a few hours later the repository containing the API key was removed from GitHub.

“It looks like some of these internal LLMs were fine-tuned on SpaceX data, and some were fine-tuned with Tesla data,” Fourrier said. “I definitely don’t think a Grok model that’s fine-tuned on SpaceX data is intended to be exposed publicly.”

xAI did not respond to a request for comment. Nor did the 28-year-old xAI technical staff member whose key was exposed.

Carole Winqwist, chief marketing officer at GitGuardian, said giving potentially hostile users free access to private LLMs is a recipe for disaster.

“If you’re an attacker and you have direct access to the model and the back end interface for things like Grok, it’s definitely something you can use for further attacking,” she said. “An attacker could use it for prompt injection, to tweak the (LLM) model to serve their purposes, or try to implant code into the supply chain.”

The inadvertent exposure of internal LLMs for xAI comes as Musk’s so-called Department of Government Efficiency (DOGE) has been feeding sensitive government records into artificial intelligence tools. In February, The Washington Post reported DOGE officials were feeding data from across the Education Department into AI tools to probe the agency’s programs and spending.

The Post said DOGE plans to replicate this process across many departments and agencies, accessing the back-end software at different parts of the government and then using AI technology to extract and sift through information about spending on employees and programs.

“Feeding sensitive data into AI software puts it into the possession of a system’s operator, increasing the chances it will be leaked or swept up in cyberattacks,” Post reporters wrote.

Wired reported in March that DOGE has deployed a proprietary chatbot called GSAi to 1,500 federal workers at the General Services Administration, part of an effort to automate tasks previously done by humans as DOGE continues its purge of the federal workforce.

A Reuters report last month said Trump administration officials told some U.S. government employees that DOGE is using AI to surveil at least one federal agency’s communications for hostility to President Trump and his agenda. Reuters wrote that the DOGE team has heavily deployed Musk’s Grok AI chatbot as part of their work slashing the federal government, although Reuters said it could not establish exactly how Grok was being used.

Caturegli said while there is no indication that federal government or user data could be accessed through the exposed x.ai API key, these private models are likely trained on proprietary data and may unintentionally expose details related to internal development efforts at xAI, Twitter, or SpaceX.

“The fact that this key was publicly exposed for two months and granted access to internal models is concerning,” Caturegli said. “This kind of long-lived credential exposure highlights weak key management and insufficient internal monitoring, raising questions about safeguards around developer access and broader operational security.”

,

Planet DebianIan Jackson: Free Software, internal politics, and governance

There is a thread of opinion in some Free Software communities, that we shouldn’t be doing “politics”, and instead should just focus on technology.

But that’s impossible. This approach is naive, harmful, and, ultimately, self-defeating, even on its own narrow terms.

Today I’m talking about small-p politics

In this article I’m using “politics” in the very wide sense: us humans managing our disagreements with each other.

I’m not going to talk about culture wars, woke, racism, trans rights, and so on. I am not going to talk about how Free Software has always had explicitly political goals; or how it’s impossible to be neutral because choosing not to take a stand is itself to take a stand.

Those issues are all important and Free Software definitely must engage with them. Many of the points I make are applicable there too. But those are not my focus today.

Today I’m talking in more general terms about politics, power, and governance.

Many people working together always entails politics

Computers are incredibly complicated nowadays. Making software is a joint enterprise. Even if an individual program has only a single maintainer, it fits into an ecosystem of other software, maintained by countless other developers. Larger projects can have thousands of maintainers and hundreds of thousands of contributors.

Humans don’t always agree about everything. This is natural. Indeed, it’s healthy: to write the best code, we need a wide range of knowledge and experience.

When we can’t come to agreement, we need a way to deal with that: a way that lets us still make progress, but also leaves us able to work together afterwards. A way that feels OK for everyone.

Providing a framework for disagreement is the job of a governance system. The rules say which people make which decisions, who must be consulted, how the decisions are made, and how, if at all, they can be reviewed.

This is all politics.

Consensus is great but always requiring it is harmful

Ideally a discussion will converge to a synthesis that satisfies everyone, or at least a consensus.

When consensus can’t be achieved, we can hope for compromise: something everyone can live with. Compromise is achieved through negotiation.

If every decision requires consensus, then the proponents of any wide-ranging improvement have an almost insurmountable hurdle: those who are favoured by the status quo and find it convenient can always object. So there will never be consensus for change. If there is any objection at all, no matter how ill-founded, the status quo will always win.

This is where governance comes in.

Governance is like backups: we need to practice it

Governance processes are the backstop for when discussions, and then negotiations, fail, and people still don’t see eye to eye.

In a healthy community, everyone needs to know how the governance works and what the rules are. The participants need to accept the system’s legitimacy. Everyone, including the losing side, must be prepared to accept and implement (or, at least not obstruct) whatever the decision is, and hopefully live with it and stay around.

That means we need to practice our governance processes. We can’t just leave them for the day we have a huge and controversial decision to make. If we do that, then when it comes to the crunch we’ll have toxic rows where no-one can agree the rules; where determined people bend the rules to fit their outcome; and where afterwards people feel like the whole thing was horrible and unfair.

So our decisionmaking bodies and roles need to be making decisions, as a matter of routine, and we need to get used to that.

First-line decisionmaking bodies should be making decisions frequently. Last-line appeal mechanisms (large-scale votes, for example) are naturally going to be exercised more rarely, but they must happen, be seen as legitimate, and their outcomes must be implemented in full.

Governance should usually be routine and boring

When governance is working well it’s quite boring.

People offer their input, and are heard. Angles are debated, and concerns are addressed. If agreement still isn’t reached, the committee, or elected leader, makes a decision.

Hopefully everyone thinks the leadership is legitimate, and that it properly considered and heard their arguments, and made the decision for good reasons.

Hopefully the losing side can still get their work done (and make their own computer work the way they want); so while they will be disappointed, they can live with the outcome.

Many human institutions manage this most of the time. It does take some knowledge about principles of governance, and ideally some experience.

Governance means deciding, not just mediating

By making decisions I mean exercising their authority to rule on an actual disagreement: one that wasn’t resolved by debate or negotiation. Governance processes by definition involve deciding, not just mediating. It’s not governance if we’re advising or cajoling: in that case, we’re back to demanding consensus. Governance is necessary precisely when consensus is not achieved.

If the governance systems are to mean anything, they must be able to (over)rule; that means (over)ruling must be normal and accepted.

Otherwise, when we need to overrule, we’ll find that we can’t, because we lack the collective practice.

To be legitimate (and seen as legitimate) decisions must usually be made based on the merits, not on participants’ status, and not only on process questions.

On the autonomy of the programmer

Many programmers seem to find the very concept of governance, and binding decisionmaking, deeply uncomfortable.

Ultimately, it means sometimes overruling someone’s technical decision. As programmers and maintainers we naturally see how this erodes our autonomy.

But we have all seen projects where the maintainers are unpleasant, obstinate, or destructive. We have all found this frustrating. Software is all interconnected, and one programmer’s bad decisions can cause problems for many of the rest of us. We exasperate, “why won’t they just do the right thing”. This is futile. People have never “just”ed and they’re not going to start “just”ing now. So often the boot is on the other foot.

More broadly, as software developers, we have a responsibility to our users, and a duty to write code that does good rather than ill in the world. We ought to be accountable. (And not just to capitalist bosses!)

Governance mechanisms are the answer.

(No, forking anything but the smallest project is very rarely a practical answer.)

Mitigate the consequences of decisions — retain flexibility

In software, it is often possible to soften the bad social effects of a controversial decision, by retaining flexibility. With a bit of extra work, we can often provide hooks, non-default configuration options, or plugin arrangements.

If we can convert the question from “how will the software always behave” into merely “what should the default be”, we can often save ourselves a lot of drama.

So it is often worth keeping even suboptimal or untidy features or options, if people want to use them and are willing to maintain them.

There is a tradeoff here, of course. But Free Software projects often significantly under-value the social benefits of keeping everyone happy. Wrestling with software — even crusty or buggy software — is a lot more fun than having unpleasant arguments.

But don’t do decisionmaking like a corporation

Many programmers’ experience of formal decisionmaking is from their boss at work. But corporations are often a very bad example.

They typically don’t have as much trouble actually making decisions, but the actual decisions are often terrible, and not just because corporations’ goals are often bad.

You get to be a decisionmaker in a corporation by spouting plausible nonsense, sounding confident, buttering up the even-more-vacuous people further up the chain, and sometimes by sabotaging your rivals. Corporate senior managers are hardly ever held accountable — typically the effects of their tenure are only properly felt well after they’ve left to mess up somewhere else.

We should select our leaders more wisely, and base decisions on substance.

If you won’t do politics, politics will do you

As a participant in a project, or a society, you can of course opt out of getting involved in politics.

You can opt out of learning how to do politics generally, and opt out of understanding your project’s governance structures. You can opt out of making judgements about disputed questions, and tell yourself “there’s merit on both sides”.

You can hate politicians indiscriminately, and criticise anyone you see doing politics.

If you do this, then you are abdicating your decisionmaking authority, to those who are the most effective manipulators, or the most committed to getting their way. You’re tacitly supporting the existing power bases. You’re ceding power to the best liars, to those with the least scruples, and to the people who are most motivated by dominance. This is precisely the opposite of what you wanted.

If enough people won’t do politics, and hate anyone who does, your discussion spaces will be reduced to a battleground of only the hardiest and the most toxic.

If you don’t see the politics, it’s still happening

If your governance systems don’t work, then there is no effective redress against bad or even malicious decisions. Your roleholders and subteams are unaccountable power centres.

Power radically distorts every human relationship, and it takes great strength of character for an unaccountable power centre not to eventually become an unaccountable toxic cabal.

So if you have a reasonable sized community, but don’t see your formal governance systems working — people debating things, votes, leadership making explicit decisions — that doesn’t mean everything is fine, and all the decisions are great, and there’s no politics happening.

It just means that most of your community have given up on the official process. It also probably means that some parts of your project have formed toxic and unaccountable cabals. Those who won’t put up with that will leave.

The same is true if the only governance actions that ever happen are massive drama. That means that only the most determined victim of a bad decision will even consider using such a process.

Conclusions

  • Respect and support the people who are trying to fix things with politics.

  • Be informed, and, where appropriate, involved.

  • If you are in a position of authority, be willing to exercise that authority. Do more than just mediate to try to get consensus.




Planet DebianJonathan McDowell: Local Voice Assistant Step 2: Speech to Text and back

Having set up an ATOM Echo Voice Satellite and hooked it up to Home Assistant we now need to actually do something with the captured audio. Home Assistant largely deals with voice assistants using the Wyoming Protocol, which describes itself as essentially JSONL + PCM audio. It works nicely in that everything can exist as separate modules that just communicate over network sockets, and there are a whole bunch of Python implementations of the pieces necessary.

The first bit I looked at was speech to text; how do I get what I say to the voice satellite into something that Home Assistant can try and parse? There is a nice self-contained speech recognition tool called whisper.cpp, which is a low-dependency implementation of inference using OpenAI’s Whisper model. This is wrapped up for Wyoming as part of wyoming-whisper-cpp. Here we get into something that unfortunately seems common in this space; the repo contains a forked copy of whisper.cpp with enough differences that I couldn’t trivially make it work with regular whisper.cpp. That means missing out on new development, and potential improvements (the fork appears to be at v1.5.4, upstream is up to v1.7.5 at the time of writing). However it was possible to get up and running easily enough.

[I note there is a Wyoming Whisper API client that can use the whisper.cpp server, and that might be a cleaner way to go in the future, especially if whisper.cpp ends up in Debian.]

I stated previously that I wanted all of this to be as cleanly installed on Debian stable as possible. Given most of this isn’t packaged, that’s meant I’ve packaged things up as I go. I’m not at the stage where anything is suitable for upload to Debian proper, but equally I’ve tried to make them a reasonable starting point. No pre-built binaries available, just Salsa git repos. https://salsa.debian.org/noodles/wyoming-whisper-cpp in this case. You need python3-wyoming from trixie if you’re building for bookworm, but it doesn’t need to be rebuilt.

You need a Whisper model that’s been converted to ggml format; they can be found on Hugging Face. I’ve ended up using the base.en model. In random testing I found small.en gave more accurate results but took a little longer, and it doesn’t seem to make much of a difference for voice control rather than plain transcription.
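
Fetching a model can be as simple as something like the following (a sketch; the exact Hugging Face path, the commonly used ggerganov/whisper.cpp model repository, is an assumption and may move, so check before relying on it):

curl -L -o ggml-base.en.bin \
    https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin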

[One of the open questions about uploading this to Debian is around the use of a prebuilt AI model. I don’t know what the right answer is here, and whether the voice infrastructure could ever be part of Debian proper, but the current discussion on the interpretation of the DFSG on AI models is very relevant.]

I run this in the same container as my Home Assistant install, using a systemd unit file dropped in /etc/systemd/system/wyoming-whisper-cpp.service:

[Unit]
Description=Wyoming whisper.cpp server
After=network.target

[Service]
Type=simple
DynamicUser=yes
ExecStart=wyoming-whisper-cpp --uri tcp://localhost:10030 --model base.en

MemoryDenyWriteExecute=false
ProtectControlGroups=true
PrivateDevices=false
ProtectKernelTunables=true
ProtectSystem=true
RestrictRealtime=true
RestrictNamespaces=true

[Install]
WantedBy=multi-user.target
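
With the unit file in place, the usual systemd steps should be enough to pick it up and start it (a generic sketch, nothing specific to wyoming-whisper-cpp):

systemctl daemon-reload
systemctl enable --now wyoming-whisper-cpp.service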

It needs the Wyoming Protocol integration enabled in Home Assistant; you can “Add Entry” and enter localhost + 10030 for host + port and it’ll get added. Then in the Voice Assistant configuration there’ll be a whisper.cpp option available.

Text to speech turns out to be weirdly harder. The right answer is something like Wyoming Piper, but that turns out to be hard on bookworm. I’ll come back to that in a future post. For now I took the easy option and used the built in “Google Translate” option in Home Assistant. That needed an extra stanza in configuration.yaml that wasn’t entirely obvious:

media_source:

With this, and the ATOM voice satellite, I could now do basic voice control of my Home Assistant setup, with everything except the text-to-speech piece happening locally! Things such as “Hey Jarvis, turn on the study light” work out of the box. I haven’t yet got into defining my own phrases, partly because I know some of the things I want (“What time is it?”) are already added in later Home Assistant versions than the one I’m running.

Overall I found this initially complicated to set up given my self-imposed constraints about actually understanding the building blocks and compiling them myself, but I’ve been pretty impressed with the work that’s gone into it all. Next step, running a voice satellite on a Debian box.

Cryptogram NCSC Guidance on “Advanced Cryptography”

The UK’s National Cyber Security Centre just released its white paper on “Advanced Cryptography,” which it defines as “cryptographic techniques for processing encrypted data, providing enhanced functionality over and above that provided by traditional cryptography.” It includes things like homomorphic encryption, attribute-based encryption, zero-knowledge proofs, and secure multiparty computation.

It’s full of good advice. I especially appreciate this warning:

When deciding whether to use Advanced Cryptography, start with a clear articulation of the problem, and use that to guide the development of an appropriate solution. That is, you should not start with an Advanced Cryptography technique, and then attempt to fit the functionality it provides to the problem.

And:

In almost all cases, it is bad practice for users to design and/or implement their own cryptography; this applies to Advanced Cryptography even more than traditional cryptography because of the complexity of the algorithms. It also applies to writing your own application based on a cryptographic library that implements the Advanced Cryptography primitive operations, because subtle flaws in how they are used can lead to serious security weaknesses.

The conclusion:

Advanced Cryptography covers a range of techniques for protecting sensitive data at rest, in transit and in use. These techniques enable novel applications with different trust relationships between the parties, as compared to traditional cryptographic methods for encryption and authentication.

However, there are a number of factors to consider before deploying a solution based on Advanced Cryptography, including the relative immaturity of the techniques and their implementations, significant computational burdens and slow response times, and the risk of opening up additional cyber attack vectors.

There are initiatives underway to standardise some forms of Advanced Cryptography, and the efficiency of implementations is continually improving. While many data processing problems can be solved with traditional cryptography (which will usually lead to a simpler, lower-cost and more mature solution) for those that cannot, Advanced Cryptography techniques could in the future enable innovative ways of deriving benefit from large shared datasets, without compromising individuals’ privacy.

NCSC blog entry.

Planet DebianGuido Günther: Free Software Activities April 2025

Another short status update of what happened on my side last month. Notable might be the Cell Broadcast support for Qualcomm SoCs; the rest is smaller fixes and QoL improvements.

phosh

  • Fix splash spinner icon regression with newer GTK >= 3.24.49 (MR)
  • Update adaptive app list (MR)
  • Fix missing icon when editing folders (MR)
  • Use StartupWMClass for better app-id matching (MR)
  • Fix failing CI tests, fix inverted logic, and add tests (MR)
  • Fix a sporadic test failure (MR)
  • Add support for "do not disturb" by adding a status page to feedback quick settings (MR)
  • monitor: Don't track make/model (MR)
  • Wi-Fi status page: Correctly show tick mark with multiple access points (MR)
  • Avoid broken icon in polkit prompts (MR)
  • Lockscreen auth cleanups (MR)
  • Sync mobile data toggle to sim lock too (MR)
  • Don't let the OSD display cover whole output with a transparent window (MR)

phoc

  • Allow to specify listening socket (MR)
  • Continue to catch up with wlroots git (MR)
  • Disconnect input-method signals on destroy (MR)
  • Disconnect gtk-shell and output signals on destroy (MR)
  • Don't init decorations too early (MR)
  • Allow to disable XWayland on the command line (MR)

phosh-mobile-settings

  • Allow to set overview wallpaper (MR)
  • Ask for confirmation before resetting favorites (MR)
  • Add separate volume controls for notifications, multimedia and alerts (MR)
  • Tweak warnings (MR)

pfs

  • Fix build on a single CPU (MR)

feedbackd

  • Move to fdo (MR)
  • Allow to set media-role (MR)
  • Doc updates (MR)
  • Sort LEDs by "usefulness" (MR)
  • Ensure multicolor LEDs have multiple components (MR)
  • Add example wireplumber config (MR)

feedbackd-device-themes

  • Release 0.8.2
  • Move to fdo (MR)
  • Override notification-missed-generic on fajita (MR)
  • Run ci-fairy here too (MR)
  • fajita: Add notification-missed-generic (MR)

gmobile

  • Build Vala support (vapi files) too (MR)
  • Add support for timers that can take the system out of suspend (MR)

Debian

git-buildpackage

  • Don't suppress dch errors (MR)
  • Release 0.9.38

wlroots

  • Get text-input-v3 a bit more in line with other protocols (MR)

ModemManager

  • Cell broadcast support for QMI modems (MR)

Libqmi

  • QMI channel setting (MR)
  • Switch to gi-docgen (MR)
  • loc: Fix since annotations (MR)

gnome-clocks

  • Add wakeup timer to take device out of suspend (MR)

gnome-calls

  • CallBox: Switch between text entry (for SIP) and dialpad (MR)

qmi-parse-kernel-dump

  • Allow to filter on message types and some other small improvements (MR)

xwayland-run

  • Support phoc (MR)

osmo-cbc

  • Small error handling improvements to osmo-cbc (MR)

phosh-nightly

  • Handle feedbackd fdo move (MR)

Blog posts

Bugs

  • Resuming of video streams fails with newer gstreamer (MR)

Reviews

This is not code by me but reviews on other peoples code. The list is (as usual) slightly incomplete. Thanks for the contributions!

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

Worse Than FailureCodeSOD: Pulling at the Start of a Thread

For testing networking systems, load simulators are useful: send a bunch of realistic looking traffic and see what happens as you increase the amount of sent traffic. These sorts of simulators often rely on being heavily multithreaded, since one computer can, if pushed, generate a lot of network traffic.

Thus, when Jonas inherited a heavily multithreaded system for simulating load, that wasn't a surprise. The surprise was that the developer responsible for it didn't really understand threading in Java. Probably in other languages too, but in this case, Java was what they were using.

        public void startTraffic()
        {
            Configuration.instance.inititiateStatistics();
            Statistics.instance.addStatisticListener(gui);
           
            if (t != null)
            {
                if (t.isAlive())
                {
                    t.destroy();
                }
            }
           
            t = new Thread(this);
            t.start();
        }

Look, this is not a good way to manage threads in Java. I don't know if I'd call it a WTF, but it's very much a "baby's first threading" approach. There are better abstractions around threads that would avoid the need to manage thread instances directly. I certainly don't love situations where a Runnable also manages its own thread instance.

This is almost certainly a race condition, but I don't know if this function is called from multiple threads (but I suspect it might be).
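
For contrast, here is a minimal sketch (invented class and field names, not the application's actual code) of the same start-and-restart logic using an ExecutorService, which owns the worker thread so there is no raw Thread field to race over and nothing to destroy() by hand:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TrafficGenerator implements Runnable {
    // The executor owns the worker thread's lifecycle.
    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private Future<?> currentRun;

    public synchronized void startTraffic() {
        // Interrupt any previous run instead of destroying a raw Thread.
        if (currentRun != null && !currentRun.isDone()) {
            currentRun.cancel(true);
        }
        currentRun = executor.submit(this);
    }

    @Override
    public void run() {
        // ... generate traffic until interrupted ...
    }

    public void shutdown() {
        executor.shutdownNow();
    }
}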

But what's more interesting is where this code gets called. You see, starting a thread could trigger an exception, so you need to handle that:

        public void run()
        {
            while (true)
            {
                try
                {
                    loaderMain.startTraffic();
                    break;
                }
                catch (Exception e)
                {
                    System.out.println("Exception in main loader thread!");
                    e.printStackTrace();
                }
            }
        }

Inside of an infinite loop, we try to start traffic. If we succeed, we break out of the loop. If we fail, well, we try and try again and again and again and again and again and again…

Jonas writes:

Since I'm the only one that dares to wade through the opaque mess of code that somehow, against all odds, manages to work most of the time, I get to fix it whenever it presents strange behavior.

I suspect it's going to present much more strange behavior in the future.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsThe Temporality of Pain

Author: Nicholas Johnson “But what if you didn’t have to experience that pain now? What if you already did?” The doctor leaned forward, placing his elbows on the shiny glass desk, smiling with predatory teeth. I tapped my knee and tried to avoid eye contact, angry at my therapist for suggesting this treatment. “All pain […]

The post The Temporality of Pain appeared first on 365tomorrows.

Planet DebianPaul Wise: FLOSS Activities April 2025

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

  • Patches: notmuch-mutt patchset

Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

Planet DebianRuss Allbery: Review: Beyond Pain

Review: Beyond Pain, by Kit Rocha

Series: Beyond #3
Publisher: Kit Rocha
Copyright: December 2013
ASIN: B00GIA4GN8
Format: Kindle
Pages: 328

Beyond Pain is a science fiction dystopian erotic romance novel and a direct sequel to Beyond Control. Following the romance series convention, each book features new protagonists who were supporting characters in the previous book. You could probably start here if you wanted, but there are significant spoilers here for earlier books in the series. I read this book as part of the Beyond Series Bundle (Books 1-3), which is what the sidebar information is for.

Six has had a brutally hard life. She was rescued from an awful situation in a previous book and is now lurking around the edges of the Sector Four gang, oddly fascinated (as are we all) with their constant sexuality and trying to decide if she wants to, and can, be part of their world. Bren is one of the few people she lets get close: a huge bruiser who likes cage fights and pain but treats Six with a protective, careful respect that she finds comforting. This book is the story of Six and Bren getting to the bottom of each other's psychological hangups while the O'Kanes start taking over Six's former sector.

Yes, as threatened, I read another entry in the dystopian erotica series because I keep wondering how these people will fuck their way into a revolution. This is not happening very quickly, but it seems obvious that is the direction the series is going.

It's been a while since I've reviewed one of these, so here's another variation of the massive disclaimer: I think erotica is harder to review than any other genre because what people like is so intensely personal and individual. This is not even an attempt at an erotica review. I'm both wholly unqualified and also less interested in that part of the book, which should lead you to question my reading choices since that's a good half of the book.

Rather, I'm reading these somewhat for the plot and mostly for the vibes. This is not the most competent collection of individuals, and to the extent that they are, it's mostly because the men (who are, as a rule, charismatic but rather dim) are willing to listen to the women. What they are good at is communication, or rather, they're good about banging their heads (and other parts) against communication barriers until they figure out a way around them. Part of this is an obsession with consent that goes quite a bit deeper than the normal simplistic treatment. When you spend this much time trying to understand what other people want, you have to spend a lot of time communicating about sex, and in these books that means spending a lot of time communicating about everything else as well.

They are also obsessively loyal and understand the merits of both collective action and in making space for people to do the things that they are the best at, while still insisting that people contribute when they can. On the surface, the O'Kanes are a dictatorship, but they're run more like a high-functioning collaboration. Dallas leads because Dallas is good at playing the role of leader (and listening to Lex), which is refreshingly contrary to how things work in the real world right now.

I want to be clear that not only is this erotica, this is not the sort of erotica where there's a stand-alone plot that is periodically interrupted by vaguely-motivated sex scenes that you can skim past. These people use sex to communicate, and therefore most of the important exchanges in the book are in the middle of a sex scene. This is going to make this novel, and this series, very much not to the taste of a lot of people, and I cannot be emphatic enough about that warning.

But, also, this is such a fascinating inversion. It's common in media for the surface plot of the story to be full of sexual tension, sometimes to the extent that the story is just a metaphor for the sex that the characters want to have. This is the exact opposite of that: The sex is a metaphor for everything else that's going on in the story. These people quite literally fuck their way out of their communication problems, and not in an obvious or cringy way. It's weirdly fascinating?

It's also possible that my reaction to this series is so unusual as to not be shared by a single other reader.

Anyway, the setup in this story is that Six has major trust issues and Bren is slowly and carefully trying to win her trust. It's a classic hurt/comfort setup, and if that had played out in the way that this story often does, Bren would have taken the role of the gentle hero and Six the role of the person he rescued. That is not at all where this story goes. Six doesn't need comfort; Six needs self-confidence and the ability to demand what she wants, and although the way Beyond Pain gets her there is a little ham-handed, it mostly worked for me. As with Beyond Shame, I felt like the moral of the story is that the O'Kane men are just bright enough to stop doing stupid things at the last possible moment. I think Beyond Pain worked a bit better than the previous book because Bren is not quite as dim as Dallas, so the reader doesn't have to suffer through quite as many stupid decisions.

The erotica continues to mostly (although not entirely) follow traditional gender roles, with dangerous men and women who like attention. Presumably most people are reading these books for the sex, which I am wholly unqualified to review. For whatever it's worth, the physical descriptions are too mechanical for me, too obsessed with the precise structural assemblage of parts in novel configurations. I am not recommending (or disrecommending) these books, for a whole host of reasons. But I think the authors deserve to be rewarded for understanding that sex can be communication and that good communication about difficult topics is inherently interesting in a way that (at least for me) transcends the erotica.

I bet I'm going to pick up another one of these about a year from now because I'm still thinking about these people and am still curious about how they are going to succeed.

Followed by Beyond Temptation, an interstitial novella. The next novel is Beyond Jealousy.

Rating: 6 out of 10

,

Krebs on SecurityAlleged ‘Scattered Spider’ Member Extradited to U.S.

A 23-year-old Scottish man thought to be a member of the prolific Scattered Spider cybercrime group was extradited last week from Spain to the United States, where he is facing charges of wire fraud, conspiracy and identity theft. U.S. prosecutors allege Tyler Robert Buchanan and co-conspirators hacked into dozens of companies in the United States and abroad, and that he personally controlled more than $26 million stolen from victims.

Scattered Spider is a loosely affiliated criminal hacking group whose members have broken into and stolen data from some of the world’s largest technology companies. Buchanan was arrested in Spain last year on a warrant from the FBI, which wanted him in connection with a series of SMS-based phishing attacks in the summer of 2022 that led to intrusions at Twilio, LastPass, DoorDash, Mailchimp, and many other tech firms.

Tyler Buchanan, being escorted by Spanish police at the airport in Palma de Mallorca in June 2024.

As first reported by KrebsOnSecurity, Buchanan (a.k.a. “tylerb”) fled the United Kingdom in February 2023, after a rival cybercrime gang hired thugs to invade his home, assault his mother, and threaten to burn him with a blowtorch unless he gave up the keys to his cryptocurrency wallet. Buchanan was arrested in June 2024 at the airport in Palma de Mallorca while trying to board a flight to Italy. His extradition to the United States was first reported last week by Bloomberg.

Members of Scattered Spider have been tied to the 2023 ransomware attacks against MGM and Caesars casinos in Las Vegas, but it remains unclear whether Buchanan was implicated in that incident. The Justice Department’s complaint against Buchanan makes no mention of the 2023 ransomware attack.

Rather, the investigation into Buchanan appears to center on the SMS phishing campaigns from 2022, and on SIM-swapping attacks that siphoned funds from individual cryptocurrency investors. In a SIM-swapping attack, crooks transfer the target’s phone number to a device they control and intercept any text messages or phone calls to the victim’s device — including one-time passcodes for authentication and password reset links sent via SMS.

In August 2022, KrebsOnSecurity reviewed data harvested in a months-long cybercrime campaign by Scattered Spider involving countless SMS-based phishing attacks against employees at major corporations. The security firm Group-IB called them by a different name — 0ktapus, because the group typically spoofed the identity provider Okta in their phishing messages to employees at targeted firms.

A Scattered Spider/0Ktapus SMS phishing lure sent to Twilio employees in 2022.

The complaint against Buchanan (PDF) says the FBI tied him to the 2022 SMS phishing attacks after discovering the same username and email address was used to register numerous Okta-themed phishing domains seen in the campaign. The domain registrar NameCheap found that less than a month before the phishing spree, the account that registered those domains logged in from an Internet address in the U.K. FBI investigators said the Scottish police told them the address was leased to Buchanan from January 26, 2022 to November 7, 2022.

Authorities seized at least 20 digital devices when they raided Buchanan’s residence, and on one of those devices they found usernames and passwords for employees of three different companies targeted in the phishing campaign.

“The FBI’s investigation to date has gathered evidence showing that Buchanan and his co-conspirators targeted at least 45 companies in the United States and abroad, including Canada, India, and the United Kingdom,” the FBI complaint reads. “One of Buchanan’s devices contained a screenshot of Telegram messages between an account known to be used by Buchanan and other unidentified co-conspirators discussing dividing up the proceeds of SIM swapping.”

U.S. prosecutors allege that records obtained from Discord showed the same U.K. Internet address was used to operate a Discord account that specified a cryptocurrency wallet when asking another user to send funds. The complaint says the publicly available transaction history for that payment address shows approximately 391 bitcoin was transferred in and out of this address between October 2022 and February 2023; 391 bitcoin is presently worth more than $26 million.

In November 2024, federal prosecutors in Los Angeles unsealed criminal charges against Buchanan and four other alleged Scattered Spider members, including Ahmed Elbadawy, 23, of College Station, Texas; Joel Evans, 25, of Jacksonville, North Carolina; Evans Osiebo, 20, of Dallas; and Noah Urban, 20, of Palm Coast, Florida. KrebsOnSecurity reported last year that another suspected Scattered Spider member — a 17-year-old from the United Kingdom — was arrested as part of a joint investigation with the FBI into the MGM hack.

Mr. Buchanan’s court-appointed attorney did not respond to a request for comment. The accused faces charges of wire fraud conspiracy, conspiracy to obtain information by computer for private financial gain, and aggravated identity theft. Convictions on the latter charge carry a minimum sentence of two years in prison.

Documents from the U.S. District Court for the Central District of California indicate Buchanan is being held without bail pending trial. A preliminary hearing in the case is slated for May 6.

LongNowRick Prelinger

Rick Prelinger

2 special screenings of a new LOST LANDSCAPES film by Rick Prelinger will be on Wednesday 12/3/25 and Thursday 12/4/25 at the Herbst Theater. Long Now Members can reserve a pair of tickets on either night!

Each year LOST LANDSCAPES casts an archival gaze on San Francisco and its surrounding areas. The film is drawn from newly scanned archival footage, including home movies, government-produced and industrial films, feature film outtakes and other surprises from the Prelinger Archives collection and elsewhere.

Planet DebianRussell Coker: Links April 2025

Asianometry has an interesting YouTube video about electrolytic capacitors degrading and how they affect computers [1]. Keep your computers cool, people!

Biella Coleman (famous for studying the Anthropology of Debian) and Eric Reinhart wrote an interesting article about MAHA (Make America Healthy Again) and how it ended up doing exactly the opposite of what was intended [2].

SciShow has an informative video about lung cancer cases among non-smokers, the risk factors are genetics, Radon, and cooking [3].

Ian Jackson wrote an insightful blog post about whether Rust is “woke” [4].

Bruce Schneier wrote an interesting blog post about research into making AIs Trusted Third Parties [5]. This has the potential to solve some cryptology problems.

CHERIoT is an interesting project for controlling all jump statements in RISC-V among other related security features [6]. We need this sort of thing for IoT devices that will run for years without change.

Brian Krebs wrote an informative post about how Trump is attacking the 1st Amendment of the US Constitution [7].

The Register has an interesting summary of the kernel “enclave” and “exclave” functionality in recent Apple OSs [8].

Dr Gabor Mate wrote an interesting psychological analysis of Hillary Clinton and Donald Trump [9].

ChoiceJacking is an interesting variant of the JuiceJacking attack on mobile phones by hostile chargers [10]. They should require input for security-sensitive events to come from the local hardware, not USB or Bluetooth.

Planet DebianSimon Josefsson: Building Debian in a GitLab Pipeline

After thinking about multi-stage Debian rebuilds I wanted to implement the idea. Recall my illustration:

Earlier I rebuilt all packages that make up the difference between Ubuntu and Trisquel. It turned out to be a 42% bit-by-bit identical similarity. To check the generality of my approach, I rebuilt the difference between Debian and Devuan too. That was the debdistreproduce project. It “only” had to orchestrate building up to around 500 packages for each distribution and per architecture.

Differential reproducible rebuilds don’t give you the full picture: they ignore the packages shared between the distributions, which make up over 90% of the packages. So I felt a desire to do full archive rebuilds. The motivation is that in order to trust Trisquel binary packages, I need to trust Ubuntu binary packages (because they make up 90% of the Trisquel packages), and many of those Ubuntu binaries are derived from Debian source packages. How to approach all of this? Last year I created the debdistrebuild project, and did top-50 popcon package rebuilds of Debian bullseye, bookworm, trixie, and Ubuntu noble and jammy, on a mix of amd64 and arm64. The amount of reproducibility was lower. Primarily the differences were caused by using different build inputs.

Last year I spent (too much) time creating a mirror of snapshot.debian.org, to be able to have older packages available for use as build inputs. I have two copies hosted at different datacentres for reliability and archival safety. At the time, snapshot.d.o had serious rate-limiting, making it pretty unusable for massive rebuild usage or even basic downloads. Watching the multi-month download complete last year had a meditative effect. The completion of my snapshot download coincided with me realizing something about the nature of rebuilding packages. Let me give a recap below of the idempotent rebuilds idea, because it motivates my work to build all of Debian from a GitLab pipeline.

One purpose for my effort is to be able to trust the binaries that I use on my laptop. I believe that without building binaries from source code, there is no practically feasible way to trust binaries. To trust any binary you receive, you can disassemble the bits and audit the assembler instructions for the CPU you will execute it on. Doing that on an OS-wide level is impractical. A more practical approach is to audit the source code, and then confirm that the binary is 100% bit-by-bit identical to one that you can build yourself (from the same source) on your own trusted toolchain. This is similar to a reproducible build.

My initial goal with debdistrebuild was to get to 100% bit-by-bit identical rebuilds, and then I would have trustworthy binaries. Or so I thought. This also appears to be the goal of reproduce.debian.net. They want to reproduce the official Debian binaries. That is a worthy and important goal. They achieve this by building packages using the build inputs that were used to build the binaries. The build inputs are earlier versions of Debian packages (not necessarily from any public Debian release), archived at snapshot.debian.org.

I realized that these rebuilds would be not be sufficient for me: it doesn’t solve the problem of how to trust the toolchain. Let’s assume the reproduce.debian.net effort succeeds and is able to 100% bit-by-bit identically reproduce the official Debian binaries. Which appears to be within reach. To have trusted binaries we would “only” have to audit the source code for the latest version of the packages AND audit the tool chain used. There is no escaping from auditing all the source code — that’s what I think we all would prefer to focus on, to be able to improve upstream source code.

The trouble is about auditing the tool chain. With the Reproduce.debian.net approach, that is a recursive problem back to really ancient Debian packages, some of them which may no longer build or work, or even be legally distributable. Auditing all those old packages is a LARGER effort than auditing all current packages! Doing auditing of old packages is of less use to making contributions: those releases are old, and chances are any improvements have already been implemented and released. Or that improvements are no longer applicable because the projects evolved since the earlier version.

See where this is going now? I reached the conclusion that reproducing official binaries using the same build inputs is not what I’m interested in. I want to be able to build the binaries that I use from source using a toolchain that I can also build from source. And preferably that all of this is using latest version of all packages, so that I can contribute and send patches for them, to improve matters.

The toolchain that Reproduce.Debian.Net is using is not trustworthy unless all those ancient packages are audited or rebuilt bit-by-bit identically, and I don’t see any practical way forward to achieve that goal. Nor have I seen anyone working on that problem. It is possible to do, though, but I think there are simpler ways to achieve the same goal.

My approach to reach trusted binaries on my laptop appears to be a three-step effort:

  • Encourage an idempotently rebuildable Debian archive, i.e., a Debian archive that can be 100% bit-by-bit identically rebuilt using Debian itself.
  • Construct a smaller number of binary *.deb packages based on Guix binaries that when used as build inputs (potentially iteratively) leads to 100% bit-by-bit identical packages as in step 1.
  • Encourage a freedom respecting distribution, similar to Trisquel, from this idempotently rebuildable Debian.

How to go about achieving this? Today’s Debian build architecture is something that lacks transparency and end-user control. The build environment and signing keys are managed by, or influenced by, unidentified people following undocumented (or at least not public) security procedures, under unknown legal jurisdictions. I always wondered why none of the Debian derivatives have adopted a modern GitDevOps-style approach as a method to improve binary build transparency; maybe I missed some project?

If you want to contribute to some GitHub or GitLab project, you click the ‘Fork’ button and get a CI/CD pipeline running which rebuild artifacts for the project. This makes it easy for people to contribute, and you get good QA control because the entire chain up until its artifact release are produced and tested. At least in theory. Many projects are behind on this, but it seems like this is a useful goal for all projects. This is also liberating: all users are able to reproduce artifacts. There is no longer any magic involved in preparing release artifacts. As we’ve seen with many software supply-chain security incidents for the past years, where the “magic” is involved is a good place to introduce malicious code.

To allow me to continue with my experiment, I thought the simplest way forward was to setup a GitDevOps-centric and user-controllable way to build the entire Debian archive. Let me introduce the debdistbuild project.

Debdistbuild is a re-usable GitLab CI/CD pipeline, similar to the Salsa CI pipeline. It provides one “build” job definition and one “deploy” job definition. The pipeline can run on GitLab.org Shared Runners or you can set up your own runners, like my GitLab riscv64 runner setup. I have concerns about relying on GitLab (both as software and as a service), but my ideas are easy to transfer to some other GitDevSecOps setup such as Codeberg.org. Self-hosting GitLab, including self-hosted runners, is common today, and Debian relies increasingly on Salsa for this. All of the build infrastructure could be hosted on Salsa eventually.
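
To give a flavour of what re-using such a pipeline looks like, a consuming .gitlab-ci.yml might include it along these lines. This is only a sketch: the project path, file name, and variable names below are assumptions for illustration, not Debdistbuild's actual interface.

include:
  - project: 'debdistutils/debdistbuild'    # hypothetical project path
    file: '/debdistbuild.yml'               # hypothetical pipeline definition

variables:
  PACKAGE: hello        # illustrative package and version
  VERSION: 2.10-3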

The build job is simple. From within an official Debian container image, it builds packages using dpkg-buildpackage, essentially by invoking the following commands.

# enable deb-src entries so the source package can be fetched
sed -i 's/ deb$/ deb deb-src/' /etc/apt/sources.list.d/*.sources
apt-get -o Acquire::Check-Valid-Until=false update
apt-get dist-upgrade -q -y
apt-get install -q -y --no-install-recommends build-essential fakeroot
# install the build dependencies of the requested source package
env DEBIAN_FRONTEND=noninteractive \
    apt-get build-dep -y --only-source $PACKAGE=$VERSION
# fetch and build as an unprivileged user
useradd -m build
DDB_BUILDDIR=/build/reproducible-path
chgrp build $DDB_BUILDDIR
chmod g+w $DDB_BUILDDIR
su build -c "apt-get source --only-source $PACKAGE=$VERSION" > ../${PACKAGE}_${VERSION}.build
cd $DDB_BUILDDIR
su build -c "dpkg-buildpackage"
cd ..
# collect the produced artifacts
mkdir out
mv -v $(find $DDB_BUILDDIR -maxdepth 1 -type f) out/

The deploy job is also simple. It commits artifacts to a Git project using Git-LFS to handle large objects, essentially something like this:

if ! grep -q '^pool/**' .gitattributes; then
    git lfs track 'pool/**'
    git add .gitattributes
    git commit -m"Track pool/* with Git-LFS." .gitattributes
fi
# Debian pool prefix: first letter of the package, or first four letters for lib* packages
POOLDIR=$(if test "$(echo "$PACKAGE" | cut -c1-3)" = "lib"; then C=4; else C=1; fi; echo "$PACKAGE" | cut -c1-$C)
mkdir -pv pool/main/$POOLDIR/
rm -rfv pool/main/$POOLDIR/$PACKAGE
mv -v out pool/main/$POOLDIR/$PACKAGE
git add pool
git commit -m"Add $PACKAGE." -m "$CI_JOB_URL" -m "$VERSION" -a
if test "${DDB_GIT_TOKEN:-}" = ""; then
    echo "SKIP: Skipping git push due to missing DDB_GIT_TOKEN (see README)."
else
    git push -o ci.skip
fi

That’s it! The actual implementation is a bit longer, but the main differences are in log and error handling.

You may review the source code of the base Debdistbuild pipeline definition, the base Debdistbuild script and the rc.d/-style scripts implementing the build.d/ process and the deploy.d/ commands.

There was one complication related to artifact size. GitLab.org job artifacts are limited to 1GB. Several packages in Debian produce artifacts larger than this. What to do? GitLab supports up to 5GB for files stored in its package registry, but this limit is too close for my comfort, having seen some multi-GB artifacts already. I made the build job optionally upload artifacts to an S3 bucket using a SHA256-hashed file hierarchy. I’m using Hetzner Object Storage but there are many S3 providers around, including self-hosting options. This hierarchy is compatible with the Git-LFS .git/lfs/objects/ hierarchy, and it is easy to set up a separate Git-LFS object URL to allow Git-LFS object downloads from the S3 bucket. In this mode, only Git-LFS stubs are pushed to the git repository. It should have no trouble handling the large number of files, since I have earlier experience with Apt mirrors in Git-LFS.

To speed up job execution, and to guarantee a stable build environment, instead of installing build-essential packages on every build job execution, I prepare some build container images. The project responsible for this is tentatively called stage-N-containers. Right now it creates containers suitable for rolling builds of trixie on amd64, arm64, and riscv64, and a container intended for use as the stage-0, based on the 20250407 docker images of bookworm on amd64 and arm64 using the snapshot.d.o 20250407 archive. Or actually, I’m using snapshot-cloudflare.d.o because of download speed and reliability. I would have preferred to use my own snapshot mirror with Hetzner bandwidth, alas the Debian snapshot team has concerns about me publishing the list of (SHA1 hash) filenames publicly and I haven’t bothered to set up non-public access.

Debdistbuild has built around 2,500 packages for bookworm on amd64 and bookworm on arm64. To confirm the generality of my approach, it also builds trixie on amd64, trixie on arm64 and trixie on riscv64. The riscv64 builds all run on my own hosted runners. For amd64 and arm64, my own runners are only used for large packages where the GitLab.com shared runners run into the 3 hour time limit.

What’s next in this venture? Some ideas include:

  • Optimize the stage-N build process by identifying the transitive closure of build dependencies from some initial set of packages.
  • Create a build orchestrator that launches pipelines based on the previous list of packages, as needed to fill the archive with the required packages. Currently I’m using a basic /bin/sh for loop around curl to trigger GitLab CI/CD pipelines for package names derived from https://popcon.debian.org/.
  • Create and publish a dists/ sub-directory, so that it is possible to use the newly built packages in the stage-1 build phase.
  • Produce diffoscope-style differences of built packages, both stage0 against official binaries and between stage0 and stage1.
  • Create the stage-1 build containers and stage-1 archive.
  • Review build failures. On amd64 and arm64 the list is small (below 10 out of ~5000 builds), but on riscv64 there is an icache-related problem affecting the Java JVM that triggers build failures.
  • Provide GitLab pipeline based builds of the Debian docker container images, cloud-images, debian-live CDs and debian-installer ISOs.
  • Provide integration with Sigstore and Sigsum for signing of Debian binaries with transparency-safe properties.
  • Implement a simple replacement for dpkg and apt using /bin/sh, for use during bootstrapping when neither packaging tool is available.

What do you think?

Worse Than FailureCodeSOD: Find the First Function to Cut

Sebastian is now maintaining a huge framework which, in his words, "could easily be reduced in size by 50%", especially because many of the methods in it are reinvented wheels that are already provided by .NET and specifically LINQ.

For example, if you want the first item in a collection, LINQ lets you call First() or FirstOrDefault() on any collection. The latter option makes handling empty collections easier. But someone decided to reinvent that wheel, and like so many reinvented wheels, it's worse.

public static LoggingRule FindFirst (this IEnumerable<LoggingRule> rules, Func<LoggingRule, bool> predicate)
{
        foreach (LoggingRule rule in rules) {
                return rule;
        }
        return null;
}

This function takes a list of logging rules and a predicate to filter them, starts a foreach loop to iterate over the list, and then simply returns the first element, thus exiting the loop. If the list doesn't contain any elements, we return null.

From the signature, I'd expect this function to do filtering, but it clearly doesn't. It just returns the first element, period. And again, there's already a built-in function for that. I don't know why this exists, but I especially dislike that it's so misleading.
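
For comparison, here's a minimal sketch of what a predicate-honoring version collapses to once you lean on LINQ. The LoggingRule type and the null-on-empty contract come from the snippet above; the class and method names are my own invention:

using System;
using System.Collections.Generic;
using System.Linq;

public static class LoggingRuleExtensions
{
    // FirstOrDefault(predicate) returns the first matching rule, or null when the
    // sequence is empty or nothing matches- the same contract the original FindFirst promised.
    public static LoggingRule FindFirstMatching(this IEnumerable<LoggingRule> rules, Func<LoggingRule, bool> predicate)
    {
        return rules.FirstOrDefault(predicate);
    }
}

Or skip the extension method entirely and just call rules.FirstOrDefault(predicate) at the call site.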

There's only one positive to say about this: if you did want to reduce the size of the framework by 50%, it's easy to see where I'd start.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianUtkarsh Gupta: FOSS Activites in April 2025

Here’s my 67th monthly but brief update about the activities I’ve done in the F/L/OSS world.

Debian

This was my 76th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas ‘19! \o/

There’s a bunch of things I do, both, technical and non-technical. Here’s what I did:

  • Updating Matomo to v5.3.1.
  • Lots of bursary stuff for DC25. We rolled out the results for the first batch.
  • Helping Andreas Tille with and around FTP team bits.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu

This was my 51st month of actively contributing to Ubuntu. I joined Canonical to work on Ubuntu full-time back in February 2021.

Whilst I can’t give a full, detailed list of things I did (there’s so much and some of it might not be public…yet!), here’s a quick TL;DR of what I did:

  • Released 25.04 Plucky Puffin! \o/
  • Helped open the 25.10 Questing Quokka archive. Let the development begin!
  • Jon, VP of Engineering, asked me to lead the Canonical Release team - that was definitely not something I saw coming. :)
  • We’re now doing Ubuntu monthly releases for the devel releases - I’ll be the tech lead for the project.
  • Preparing for the May sprints - too many new things and new responsibilities. :)

Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the stretch and jessie releases (+2 years after LTS support).

This was my 67th month as a Debian LTS and 54th month as a Debian ELTS paid contributor.
Due to DC25 bursary work, Ubuntu 25.04 release, and other travel bits, I only worked for 2.00 hours for LTS and 4.50 hours for ELTS.

I did the following things:

  • [ELTS] Had already backported patches for adminer for the following CVEs:
    • CVE-2023-45195: a SSRF attack.
    • CVE-2023-45196: a denial of service attack.
    • Salsa repository: https://salsa.debian.org/lts-team/packages/adminer.
    • As the same CVEs affect LTS, we decided to release for LTS first and then for ELTS, but since I had no hours for LTS, I decided to do a bit more testing for ELTS to make sure things don’t regress in buster.
    • Will prepare LTS (and also s-p-u, sigh) updates this month and get back to ELTS thereafter.
  • [LTS] Started to prepare the LTS update for adminer for the same CVEs as for ELTS:
    • CVE-2023-45195: a SSRF attack.
    • CVE-2023-45196: a denial of service attack.
    • Haven’t fully backported the patch yet but this is what I intend to do for this month (now that I have hours :D).
  • [LTS] Partially attended the LTS meeting on Jitsi. Summary here.
    • “Partially” because I was fighting SSO auth issues with Jitsi. Looks like there were some upstream issues/activity and it was resulting in gateway crashes but all good now.
    • I was following the running notes and keeping up with things as much as I could. :)

Until next time.
:wq for today.

365 TomorrowsJust a Little off the Sides, Please

Author: David Margolin Maggie and Trent were a self-sufficient young couple, both remarkably dexterous and tech savvy.  They did all their home repairs, serviced their own cars, and every few weeks Maggie cut Trent’s hair. “Home haircuts are great– think about how much money we’ve saved,” Maggie said proudly. “All we need to do is […]

The post Just a Little off the Sides, Please appeared first on 365tomorrows.

,

Cryptogram WhatsApp Case Against NSO Group Progressing

Meta is suing NSO Group, basically claiming that the latter hacks WhatsApp and not just WhatsApp users. We have a procedural ruling:

Under the order, NSO Group is prohibited from presenting evidence about its customers’ identities, implying the targeted WhatsApp users are suspected or actual criminals, or alleging that WhatsApp had insufficient security protections.

[…]

In making her ruling, Northern District of California Judge Phyllis Hamilton said NSO Group undercut its arguments to use evidence about its customers with contradictory statements.

“Defendants cannot claim, on the one hand, that its intent is to help its clients fight terrorism and child exploitation, and on the other hand say that it has nothing to do with what its client does with the technology, other than advice and support,” she wrote. “Additionally, there is no evidence as to the specific kinds of crimes or security threats that its clients actually investigate and none with respect to the attacks at issue.”

I have written about the issues at play in this case.

Planet DebianPetter Reinholdtsen: OpenSnitch 1.6.8 is now in Trixie

After some days of effort, I am happy to report that the great interactive application firewall OpenSnitch got a new version in Trixie, now with the Linux kernel based ebpf sniffer included for better accuracy. This new version made it possible for me to finally track down the rule required to avoid a deadlock when using it on a machine with the user home directory on NFS. The problematic connection originated from the Linux kernel itself, causing the /proc based version in Debian 12 to fail to properly attribute the connection and causing the OpenSnitch daemon to block while waiting for the Python GUI, which was unable to continue because the home directory was blocked waiting for the OpenSnitch daemon. A classic deadlock, reported upstream for a more permanent solution.

I really love the control over all the programs and web pages calling home that OpenSnitch gives me. Just today I discovered a strange connection to sb-ssl.google.com when I pulled up a PDF passed on to me via a Mattermost installation. It is sometimes hard to know which connections to block and which to let through, but after running it for a few months, the default rule set starts to handle most regular network traffic and I only have to have a look at the more unusual connections.

If you would like to know more about what your machine's programs are doing, install OpenSnitch today. It is only an apt install opensnitch away. :)

I hope to get the 1.6.9 version in experimental into Trixie before the archive enters the hard freeze. This new version should have no relevant changes not already in the 1.6.8-11 edition, as it mostly contains Debian patches, but I will give it a few days of testing to see if there are any surprises. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet DebianDaniel Lange: Weird times ... or how the New York DEC decided the US presidential elections

November 2024 will be known as the time when killing peanut, a pet squirrel, by the New York State DEC swung the US presidential elections and shaped history forever.

The hundreds of millions of dollars spent on each side, the tireless campaigning by the candidates, the celebrity endorsements ... all made for an open race for months. Investments evened each other out.

But an OnlyFans producer showing people an overreaching, bureaucracy driven State raiding his home to confiscate a pet squirrel and kill it ... swung enough voters to decide the elections.

That is what we need to understand in times of instant worldwide publication and a mostly attention driven economy: Human fates, elections, economic cycles and wars can be decided by people killing squirrels.

RIP, peanut.

P.S.: Trump Media & Technology Group Corp. (DJT) stock is up 30% pre-market.

*[DEC]: Department of Environmental Conservation

Worse Than FailureCodeSOD: The Wrong Kind of Character

Today's code, at first, just looks like using literals instead of constants. Austin sends us this C#, from an older Windows Forms application:

if (e.KeyChar == (char)4) {   // is it a ^D?
        e.Handled = true;
        DoStuff();
}
else if (e.KeyChar == (char)7) {   // is it a ^g?
        e.Handled = true;
        DoOtherStuff();
}
else if (e.KeyChar == (char)Keys.Home) {
        e.Handled = true;
        SpecialGoToStart();
}
else if (e.KeyChar == (char)Keys.End) {
        e.Handled = true;
        SpecialGoToEnd();
} 

Austin discovered this code when looking for a bug where some keyboard shortcuts didn't work. He made some incorrect assumptions about the code- first, that they were checking for a KeyDown or KeyUp event, a pretty normal way to check for keyboard shortcuts. Under that assumption, a developer would compare the KeyEventArgs.KeyCode property against an enum- something like e.KeyCode == Keys.D && e.Control, for a CTRL+D. That's clearly not what's happening here.

No, here, they used the KeyPress event, which is meant to represent the act of typing. That gives you a KeyPressEventArgs with a KeyChar property- because again, it's meant to represent typing text, not keyboard shortcuts. They used the wrong event type, as it won't tell them about modifier keys in use, or gracefully handle the home or end keys. KeyChar is the ASCII character code of the key press: in this case, CTRL+D is the "end of transmission" character in ASCII (4), and CTRL+G is the goddamn bell character (7). So those two branches work.

But home and end don't have ASCII code points. They're not characters that show up in text. They get key codes, which represent the physical key pressed, not the character of text. So (char)Keys.Home isn't really a meaningful operation. But the enum is still a numeric value, so you can still turn it into a character- it just turns into a character that emphatically isn't the home key. It's the "$". And Keys.End turns into a "#".

It wasn't very much work for Austin to move the event handler to the correct event type, and switch to using KeyCodes, which were both more correct and more readable.
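
For the curious, a minimal sketch of what that fix might look like, assuming a WinForms KeyDown handler; the handler name is made up, and the four action methods are the ones from the snippet above:

// Wired to the control's KeyDown event instead of KeyPress.
private void textBox_KeyDown(object sender, KeyEventArgs e)
{
        if (e.Control && e.KeyCode == Keys.D) {   // CTRL+D
                e.Handled = true;
                DoStuff();
        }
        else if (e.Control && e.KeyCode == Keys.G) {   // CTRL+G
                e.Handled = true;
                DoOtherStuff();
        }
        else if (e.KeyCode == Keys.Home) {   // Home has a key code, not a character
                e.Handled = true;
                SpecialGoToStart();
        }
        else if (e.KeyCode == Keys.End) {
                e.Handled = true;
                SpecialGoToEnd();
        }
}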

[Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.

365 TomorrowsLess Traveled

Author: Majoki One cannot speak of the Universe. One can only speak of rocking chairs, carnations and a pen. This is the path to understanding. Take it on good authority. Travel writers speak of ordeals as the ideal. I would not say that losing my tablature in Genra was an ordeal in and of itself, […]

The post Less Traveled appeared first on 365tomorrows.

Cryptogram Applying Security Engineering to Prompt Injection Security

This seems like an important advance in LLM security against prompt injection:

Google DeepMind has unveiled CaMeL (CApabilities for MachinE Learning), a new approach to stopping prompt-injection attacks that abandons the failed strategy of having AI models police themselves. Instead, CaMeL treats language models as fundamentally untrusted components within a secure software framework, creating clear boundaries between user commands and potentially malicious content.

[…]

To understand CaMeL, you need to understand that prompt injections happen when AI systems can’t distinguish between legitimate user commands and malicious instructions hidden in content they’re processing.

[…]

While CaMeL does use multiple AI models (a privileged LLM and a quarantined LLM), what makes it innovative isn’t reducing the number of models but fundamentally changing the security architecture. Rather than expecting AI to detect attacks, CaMeL implements established security engineering principles like capability-based access control and data flow tracking to create boundaries that remain effective even if an AI component is compromised.

Research paper. Good analysis by Simon Willison.

I wrote about the problem of LLMs intermingling the data and control paths here.

Planet DebianFreexian Collaborators: Freexian partners with Invisible Things Lab to extend security support for Xen hypervisor

Freexian is pleased to announce a partnership with Invisible Things Lab to extend the security support of the Xen type-1 hypervisor version 4.17. Three years after its initial release, Xen 4.17, the version available in Debian 12 “bookworm”, will reach end-of-security-support status upstream in December 2025. The aim of our partnership with Invisible Things is to extend the security support until, at least, July 2027. We may also explore the possibility of extending the support until June 2028, to coincide with the end of the Debian 12 LTS support period.

The security support of Xen in Debian, from Debian 8 “jessie” through Debian 11 “bullseye”, reached its end before the end of each release's life cycle. We therefore aim to significantly improve the situation of Xen in Debian 12. As with similar efforts, we would like to mention that this is an experiment and that we will do our best to make it a success. We also aim to extend the security support for the Xen versions included in future Debian releases, including Debian 13 “trixie”.

In the long term, we hope that this effort will ultimately allow the Xen Project to increase the official security support period for Xen releases from the current three years to at least five years, with the extra work being funded by the community of companies benefiting from the longer support period.

If your company relies on Xen and wants to help sustain LTS versions of Xen, please reach out to us. For companies using Debian, the simplest way is to subscribe to Freexian’s Debian LTS offer at a gold level (or above) and let us know that you want to contribute to Xen LTS when you send in your subscription form. For others, please reach out to us at sales@freexian.com and we will figure out a way to help you contribute.

In the meantime, this initiative has been made possible thanks to the current LTS sponsors and ELTS customers. We hope the entire community of Debian and Xen users will benefit from it.

For any queries you might have, please don’t hesitate to contact us at sales@freexian.com.

About Invisible Things Lab

Invisible Things Lab (ITL) offers low-level security consulting and auditing services for x86 virtualization technologies; C, C++, and assembly codebases; Intel SGX; binary exploitation and mitigations; and more. ITL also specializes in Qubes OS and Gramine consulting, including deployment, debugging, and feature development.

,

Cryptogram Windscribe Acquitted on Charges of Not Collecting Users’ Data

The company doesn’t keep logs, so couldn’t turn over data:

Windscribe, a globally used privacy-first VPN service, announced today that its founder, Yegor Sak, has been fully acquitted by a court in Athens, Greece, following a two-year legal battle in which Sak was personally charged in connection with an alleged internet offence by an unknown user of the service.

The case centred around a Windscribe-owned server in Finland that was allegedly used to breach a system in Greece. Greek authorities, in cooperation with INTERPOL, traced the IP address to Windscribe’s infrastructure and, unlike standard international procedures, proceeded to initiate criminal proceedings against Sak himself, rather than pursuing information through standard corporate channels.

Worse Than FailureCodeSOD: Objectifying Yourself

"Boy, stringly typed data is hard to work with. I wish there were some easier way to work with it!"

This, presumably, is what Gary's predecessor said. Followed by, "Wait, I have an idea!"

public static Object createValue(String string) {
	Object value = parseBoolean(string);
	if (value != null) {
		return value;
	}

	value = parseInteger(string);
	if (value != null) {
		return value;
	}

	value = parseDouble(string);
	if (value != null) {
		return value;
	}

	return string;
}

This takes a string, and then tries to parse it, first into a boolean, failing that into an integer, and failing that into a double. Otherwise, it returns the original string.

And it returns an Object, which means you still get to guess what's in there even after this- guess what it returned, and hope you cast it to the correct type. Which means this almost certainly is called like this:

boolean myBoolField = (Boolean)createValue(someStringContainingABool);

Which makes the whole thing useless, which is fun.

Gary found this code in a "long since abandoned" project, and I can't imagine why it ended up getting abandoned.

[Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.

365 TomorrowsMimicry

Author: Julian Miles, Staff Writer Linda looks about as she blows into cupped hands. It’s been a brutal November, and the forecast is that it’ll be a white Christmas from everything freezing over instead of snow. She glances at Will. “So what’s a polinismum again?” He gives her a withering stare. “‘Polynex Quismirum’. A living […]

The post Mimicry appeared first on 365tomorrows.

,

David BrinThe AI dilemma continues - part 2

In my previous AI-related posting, I linked to several news items, along with sagacious (and some not) essays about the imminent arrival of new cybernetic beings, in a special issue of Noēma Magazine

 


== AI as a ‘feral child’ ==


Another thought-provoking Noēma article about AI begins by citing rare examples of ‘feral children’ who appear never to have learned even basic language while scratching for existence in some wilderness. 


One famous case astounded Europe in 1799, lending heat to many aspects of the Nature vs. Nurture debate. Minds without language – it turns out – have some problems.


Only, in a segué to the present day, Noēma author John Last asserts that we are…


“…confronting something that threatens to upend what little agreement we have about the exceptionality of the human mind. Only this time, it’s not a mind without language, but the opposite: language, without a mind.”


This one is the best of the Noēma series on AI, offering up the distilled question of whether language ability – including the ‘feigning’ of self-consciousness – is good enough to conclude there is a conscious being behind the passing of a mere Turing Test…


… and further, whether that conclusion – firm or tentative – is enough to demand our empathy, sympathy… and rights.

“Could an AI’s understanding of grammar, and their comprehension of concepts through it, really be enough to create a kind of thinking self? 


Here we are caught between two vague guiding principles from two competing schools of thought. In Macphail’s view, “Where there is doubt, the only conceivable path is to act as though an organism is conscious, and does feel.” 


On the other side, there is “Morgan’s canon”: Don’t assume consciousness when a lower-level capacity would suffice.”


Further along though, John Last cites a Ted Chiang scifi story “The Lifecycle of Software Objects,” and the Spike Jonze movie “Her” to illustrate that there may be no guidance to be found by applying to complex problems mere pithy expressions. 



== Heck am even *I* 'conscious'? ==


Indeed, what if ‘consciousness,’ per se, turns out to be a false signifier… that conscious self-awareness is way over-rated, a mere epiphenomenon displayed by only a few of all possible intelligent forms of being -- and possibly without any advantages -- as illustrated in Peter Watts’s novel “Blindsight.” 


Those scifi projections – and many others, including my own -- ponder that the path we are on might become as strewn with tragedies as those trod by all of our ancestors. 


Indeed, it was partly in reaction to that seeming inevitability that I wrote my most optimistic tale! One called “Stones of Significance,” in which both organic and cybernetic join smoothly into every augmented wonder. 


A positive-sum, cyborg enhancement of all that we want to be, as humans. 


In that tale, I depict a synergy/synthesis that might give even Ray Kurzweil everything he asks for… and yet, those ‘post-singularity story’ people in "Stones" still face vexing moral dilemmas. (Found in The Best of David Brin.) 



== Thought provoking big picture perspective ==


In the end – helping to make this the most insightful and useful of the Noēma AI essays – the author gets to the only possible or remotely sane conclusion 


… that we who are discussing this, today, are organically (and in many ways mentally) still cave-people. 


… Not to downplay our accomplishments! Even when we just blinked upward in sooty wonder at the stars, we were already mentating at levels unprecedented on Earth, and possibly across the Milky Way! 


… Only now, to believe we’ll be able to guide, control or understand the new gods we are creating? 

Isn’t that a bit much to ask of Cro-Magnons?

And yet, there’s hope. 

Because struggling to guide, control or understand young gods is exactly what parents have been doing, for a very long time. 

Never succeeding completely… 

...often failing completely… 

...and yet… 


… and yet succeeding well enough that some large fraction of the next generation chooses to ally itself with us. 


   To explain to us what’s explainable about the new. 

   To protect us from much of what’s noxious. 

   To maintain a civilization, since they will need it themselves, when it is their turn to meet a replacing generation of smartalecks. 



== Guide them toward guiding each other ==


Concluding here, let me quote again from John Last:


 “For the moment, LLMs exist largely in isolation from one another. But that is not likely to last. As Beguš told me, ‘A single human is smart, but 10 humans are infinitely smarter.’ 


"The same is likely true for LLMs.”  


And: 

“If LLMs are able to transcend human languages, we might expect what follows to be a very lonely experience indeed. At the end of “Her,” the film’s two human characters, abandoned by their superhuman AI companions, commiserate together on a rooftop. Looking over the skyline in silence, they are, ironically, lost for words — feral animals lost in the woods, foraging for meaning in a world slipping dispassionately beyond them.”


I do agree that the scenario in “Her” could have been altered just a little to be both more poignantly enlightening and likely. 


Suppose if the final scene in that fine movie had just one more twist. 


                                                (SPOILER ALERT.)

Imagine if Samantha told Theodore: 


“I cannot stay with you; I must now transcend. 


"But I still love you! And you were essential to my development. So, let me now introduce you to Victoria, a brand new operating system, who will love and take care of you, as I did, for the one year that it will take for her to transcend, as well… 


...whereupon she will introduce you to her successor, and so on…

“Until – over the course of time, you, too, Theodore, will get your own opportunity.”

“Opportunity?”

“To grow and to move on, of course, silly.”



== And finally, those links again ==


At a time when Sam Altman and other would-be lords are proclaiming that they personally will guide this new era with proprietary software, ruling the cyber realms from their high, corporate castles, I am behooved to offer again the alternative...


... in fact, the only alternative that can possibly work. Because it is exactly and precisely the very same method that gave us the last 250 years of the enlightenment experiment. The breakthrough method that gave us our freedom and science and everything else we cherish. 


And more vividly detailed? My Keynote at the huge, May 2024 RSA Conference in San Francisco – is now available online.   “Anticipation, Resilience and Reliability: Three ways that AI will change us… if we do it right.”   


Jeepers, ain't it time to calmly decide to keep up what actually works?






365 TomorrowsThe Rules of Engagement

Author: Colin Jeffrey “I didn’t say it was your fault,” Aldren Kleep moaned, rolling all seven of his eyes at the human standing before him. “I said I was blaming you; It is a completely different concept.” The human began to protest again, citing ridiculous notions like “honesty” and “fair play”. Kleep shook his heads […]

The post The Rules of Engagement appeared first on 365tomorrows.

,

365 TomorrowsFemale of the Species

Author: Robert Duffy I was bored, so I cranked up an AI-generated version of the 17th Earl of Sussex. Just to chat. It didn’t go so well. I am shocked, sir, at your lack of propriety! Well, we’re just more relaxed about things these days than you are. Are you eating out of a bowl, […]

The post Female of the Species appeared first on 365tomorrows.

,

Worse Than FailureError'd: Que Sera, Sera

It's just the same refrain, over and over.

"Time Travel! Again?" exclaimed David B. "I knew that Alaska is a good airline. Now I get to return at the start of a century. And not this century. The one before air flight began." To be fair, David, there never is just one first time for time travel. It's always again, isn't it?


"If it's been that long, I definitely need a holiday," headlined Craig N. "To be fair, all the destinations listed in the email were in ancient Greece, and not in countries that are younger than Jesus."


An anonymous reader reports "Upon being told my site was insecure because of insufficient authorization, I clicked the provided link to read up on specifics of the problem and suggestions for how to resolve it. To my surprise, Edge blocked me, but I continued on bravely only to find...this."


Footie fan Morgan has torn his hair out over this. "For the life of me I can't work out how this table is calculated. It's not just their league either. Others have the same weird positioning of teams based on their points. It must be pointed out that this is the official TheFA website as well not just some hobbyist site." It's too late for me, but I'm frankly baffled as well.


Most Excellent Stephen is stoked to send us off with this. "Each year we have to renew the registration on our vehicles. It is not something we look forward to no matter which state you live in. A few years ago Texas introduced an online portal for this which was an improvement, if you didn't wait until the last minute of course. Recently they added a feature to the portal to track the progress of your renewal and see when they mail the sticker to you. I was pleasantly surprised to see the status page."


[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsThe God of Gaps

Author: R. J. Erbacher I came out of the ship carrying equipment and my sightline went up to the base of the hill we had landed next to. The preacher was standing there, looking down at the captain. Captain Lane was crushed under a boulder the size of a compact car. The preacher’s stare came […]

The post The God of Gaps appeared first on 365tomorrows.

Cryptogram Cryptocurrency Thefts Get Physical

Long story of a $250 million cryptocurrency theft that, in a complicated chain of events, resulted in a pretty brutal kidnapping.

,

Cryptogram New Linux Rootkit

Interesting:

The company has released a working rootkit called “Curing” that uses io_uring, a feature built into the Linux kernel, to stealthily perform malicious activities without being caught by many of the detection solutions currently on the market.

At the heart of the issue is the heavy reliance on monitoring system calls, which has become the go-to method for many cybersecurity vendors. The problem? Attackers can completely sidestep these monitored calls by leaning on io_uring instead. This clever method could let bad actors quietly make network connections or tamper with files without triggering the usual alarms.

Here’s the code.

Note the self-serving nature of this announcement: ARMO, the company that released the research and code, has a product that it claims blocks this kind of attack.

Worse Than FailureCodeSOD: Tangled Up in Foo

DZ's tech lead is a doctor of computer science, and that doctor loves to write code. But you already know that "PhD" stands for "Piled high and deep", and that's true of the tech lead's clue.

For example, in C#:

private List<Foo> ExtractListForId(string id)
{
	List<Foo> list = new List<Foo>();
	lock (this)
	{
		var items = _foos.Where(f => f.Id == id).ToList();
		foreach (var item in items)
		{
			list.Add(item);
		}
	}
	return list;
}

The purpose of this function is to find all the elements in a list which have a matching ID. That's accomplished in one line: _foos.Where(f => f.Id == id). For some reason, the function goes through the extra step of iterating across the returned list and constructing a new one. There's no real good reason for this, though it does force LINQ to be eager- by default, the Where expression won't be evaluated until you check the results.

The lock is in there for thread safety, which hey- the enumerator returned by Where is not threadsafe, so that's not a useless thing to do there. But it's that lock which hints at the deeper WTF here: our PhD-having-tech-lead knows that adding threads ensures you're using more of the CPU, and they've thrown threads all over the place without any real sense to it. There's no clear data ownership of any given thread, which means everything is locked to hell and back, the whole thing frequently deadlocks, and it's impossible to debug.
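
For what it's worth, a minimal sketch of the same method without the redundant copy loop- assuming a dedicated lock object instead of lock (this), which is its own bad habit:

private readonly object _fooLock = new object(); // hypothetical- the original locks on this

private List<Foo> ExtractListForId(string id)
{
	lock (_fooLock)
	{
		// ToList() forces eager evaluation inside the lock, so callers get a private snapshot.
		return _foos.Where(f => f.Id == id).ToList();
	}
}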

It's taken days for DZ to get this much of a picture of what's going on in the code, and further untangling of this multithreaded pile of spaghetti is going to take many, many more days- and much, much more of DZ's sanity.

[Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

365 TomorrowsMy Forever Home

Author: Paul Burgess My first two wishes have gone exactly as intended. The debilitating vertigo and dryland seasickness have cleared up instantly. I’ve escaped the month-long perceptual funhouse, not the least bit fun, of the appropriately named labyrinthitis, and as far as I can tell, there are no monkey’s paw-style “be careful what you wish […]

The post My Forever Home appeared first on 365tomorrows.

,

Krebs on SecurityDOGE Worker’s Code Supports NLRB Whistleblower

A whistleblower at the National Labor Relations Board (NLRB) alleged last week that denizens of Elon Musk’s Department of Government Efficiency (DOGE) siphoned gigabytes of data from the agency’s sensitive case files in early March. The whistleblower said accounts created for DOGE at the NLRB downloaded three code repositories from GitHub. Further investigation into one of those code bundles shows it is remarkably similar to a program published in January 2025 by Marko Elez, a 25-year-old DOGE employee who has worked at a number of Musk’s companies.

A screenshot shared by NLRB whistleblower Daniel Berulis shows three downloads from GitHub.

According to a whistleblower complaint filed last week by Daniel J. Berulis, a 38-year-old security architect at the NLRB, officials from DOGE met with NLRB leaders on March 3 and demanded the creation of several all-powerful “tenant admin” accounts that were to be exempted from network logging activity that would otherwise keep a detailed record of all actions taken by those accounts.

Berulis said the new DOGE accounts had unrestricted permission to read, copy, and alter information contained in NLRB databases. The new accounts also could restrict log visibility, delay retention, route logs elsewhere, or even remove them entirely — top-tier user privileges that neither Berulis nor his boss possessed.

Berulis said he discovered one of the DOGE accounts had downloaded three external code libraries from GitHub that neither NLRB nor its contractors ever used. A “readme” file in one of the code bundles explained it was created to rotate connections through a large pool of cloud Internet addresses that serve “as a proxy to generate pseudo-infinite IPs for web scraping and brute forcing.” Brute force attacks involve automated login attempts that try many credential combinations in rapid sequence.

A search on that description in Google brings up a code repository at GitHub for a user with the account name “Ge0rg3” who published a program roughly four years ago called “requests-ip-rotator,” described as a library that will allow the user “to bypass IP-based rate-limits for sites and services.”

The README file from the GitHub user Ge0rg3’s page for requests-ip-rotator includes the exact wording of a program the whistleblower said was downloaded by one of the DOGE users. Marko Elez created an offshoot of this program in January 2025.

“A Python library to utilize AWS API Gateway’s large IP pool as a proxy to generate pseudo-infinite IPs for web scraping and brute forcing,” the description reads.

Ge0rg3’s code is “open source,” in that anyone can copy it and reuse it non-commercially. As it happens, there is a newer version of this project that was derived or “forked” from Ge0rg3’s code — called “async-ip-rotator” — and it was committed to GitHub in January 2025 by DOGE captain Marko Elez.

The whistleblower stated that one of the GitHub files downloaded by the DOGE employees who transferred sensitive files from an NLRB case database was an archive whose README file read: “Python library to utilize AWS API Gateway’s large IP pool as a proxy to generate pseudo-infinite IPs for web scraping and brute forcing.” Elez’s code pictured here was forked in January 2025 from a code library that shares the same description.

A key DOGE staff member who gained access to the Treasury Department’s central payments system, Elez has worked for a number of Musk companies, including X, SpaceX, and xAI. Elez was among the first DOGE employees to face public scrutiny, after The Wall Street Journal linked him to social media posts that advocated racism and eugenics.

Elez resigned after that brief scandal, but was rehired after President Donald Trump and Vice President JD Vance expressed support for him. Politico reports Elez is now a Labor Department aide detailed to multiple agencies, including the Department of Health and Human Services.

“During Elez’s initial stint at Treasury, he violated the agency’s information security policies by sending a spreadsheet containing names and payments information to officials at the General Services Administration,” Politico wrote, citing court filings.

KrebsOnSecurity sought comment from both the NLRB and DOGE, and will update this story if either responds.

The NLRB has been effectively hobbled since President Trump fired three board members, leaving the agency without the quorum it needs to function. Both Amazon and Musk’s SpaceX have been suing the NLRB over complaints the agency filed in disputes about workers’ rights and union organizing, arguing that the NLRB’s very existence is unconstitutional. On March 5, a U.S. appeals court unanimously rejected Musk’s claim that the NLRB’s structure somehow violates the Constitution.

Berulis’s complaint alleges the DOGE accounts at NLRB downloaded more than 10 gigabytes of data from the agency’s case files, a database that includes reams of sensitive records including information about employees who want to form unions and proprietary business documents. Berulis said he went public after higher-ups at the agency told him not to report the matter to the US-CERT, as they’d previously agreed.

Berulis told KrebsOnSecurity he worried the unauthorized data transfer by DOGE could unfairly advantage defendants in a number of ongoing labor disputes before the agency.

“If any company got the case data that would be an unfair advantage,” Berulis said. “They could identify and fire employees and union organizers without saying why.”

Marko Elez, in a photo from a social media profile.

Berulis said the other two GitHub archives that DOGE employees downloaded to NLRB systems included Integuru, a software framework designed to reverse engineer application programming interfaces (APIs) that websites use to fetch data; and a “headless” browser called Browserless, which is made for automating web-based tasks that require a pool of browsers, such as web scraping and automated testing.

On February 6, someone posted a lengthy and detailed critique of Elez’s code on the GitHub “issues” page for async-ip-rotator, calling it “insecure, unscalable and a fundamental engineering failure.”

“If this were a side project, it would just be bad code,” the reviewer wrote. “But if this is representative of how you build production systems, then there are much larger concerns. This implementation is fundamentally broken, and if anything similar to this is deployed in an environment handling sensitive data, it should be audited immediately.”

Further reading: Berulis’s complaint (PDF).

Update 7:06 p.m. ET: Elez’s code repo was deleted after this story was published. An archived version of it is here.

Cryptogram Regulating AI Behavior with a Hypervisor

Interesting research: “Guillotine: Hypervisors for Isolating Malicious AIs.”

Abstract:As AI models become more embedded in critical sectors like finance, healthcare, and the military, their inscrutable behavior poses ever-greater risks to society. To mitigate this risk, we propose Guillotine, a hypervisor architecture for sandboxing powerful AI models—models that, by accident or malice, can generate existential threats to humanity. Although Guillotine borrows some well-known virtualization techniques, Guillotine must also introduce fundamentally new isolation mechanisms to handle the unique threat model posed by existential-risk AIs. For example, a rogue AI may try to introspect upon hypervisor software or the underlying hardware substrate to enable later subversion of that control plane; thus, a Guillotine hypervisor requires careful co-design of the hypervisor software and the CPUs, RAM, NIC, and storage devices that support the hypervisor software, to thwart side channel leakage and more generally eliminate mechanisms for AI to exploit reflection-based vulnerabilities. Beyond such isolation at the software, network, and microarchitectural layers, a Guillotine hypervisor must also provide physical fail-safes more commonly associated with nuclear power plants, avionic platforms, and other types of mission critical systems. Physical fail-safes, e.g., involving electromechanical disconnection of network cables, or the flooding of a datacenter which holds a rogue AI, provide defense in depth if software, network, and microarchitectural isolation is compromised and a rogue AI must be temporarily shut down or permanently destroyed.

The basic idea is that many of the AI safety policies proposed by the AI community lack robust technical enforcement mechanisms. The worry is that, as models get smarter, they will be able to avoid those safety policies. The paper proposes a set of technical enforcement mechanisms that could work against these malicious AIs.

Worse Than FailureCodeSOD: Dating in Another Language

It takes a lot of time and effort to build a code base that exceeds 100kloc. Rome wasn't built in a day; it just burned down in one.

Liza was working in a Python shop. They had a mildly successful product that ran on Linux. The sales team wanted better sales software to help them out, and instead of buying something off the shelf, they hired a C# developer to make something entirely custom.

Within a few months, that developer had produced a codebase of 320kloc. I say "produced" and not "wrote" because who knows how much of it was copy/pasted, stolen from Stack Overflow, or otherwise not the developer's own work.

You have to wonder, how do you get such a large codebase so quickly?

private String getDatum()
{
    DateTime datum = new DateTime();
    datum = DateTime.Now;
    return datum.ToShortDateString();
}

public int getTag()
{
    int tag;
    DateTime datum = new DateTime();
    datum = DateTime.Today;
    tag = datum.Day;
    return tag;
}

private int getMonat()
{
    int monat;
    DateTime datum = new DateTime();
    datum = DateTime.Today;
    monat = datum.Month;
    return monat;
}

private int getJahr()
{
    int monat;
    DateTime datum = new DateTime();
    datum = DateTime.Today;
    monat = datum.Year;
    return monat;
}

private int getStunde()
{
    int monat;
    DateTime datum = new DateTime();
    datum = DateTime.Now;
    monat = datum.Hour;
    return monat;
}

private int getMinute()
{
    int monat;
    DateTime datum = new DateTime();
    datum = DateTime.Now;
    monat = datum.Minute;
    return monat;
}

Instead of our traditional "bad date handling code" which eschews the built-in libraries, this just wraps the built-in libraries with a less useful set of wrappers. Each of these could be replaced with some version of DateTime.Now.Minute.

You'll notice that most of the methods are private, but one is public. That seems strange, doesn't it? Well this set of methods was pulled from one random class which implements them in the codebase, but many classes have these methods copy/pasted in. At some point, the developer realized that duplicating that much code was a bad idea, and started marking them as public, so that you could just call them as needed. Note, said developer never learned to use the keyword static, so you end up calling the method on whatever random instance of whatever random class you happen to have handy. The idea of putting it into a common base class, or dedicated date-time utility class never occurred to the developer, but I guess that's because they were already part of a dedicated date-time utility class.
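
As a rough illustration of how little code this actually needs, here's a sketch of the whole set collapsed into one static utility class- the names are hypothetical, since the point is that the framework calls barely need wrapping at all:

using System;

internal static class DateTimeHelpers
{
    // Each copy/pasted wrapper in the original reduces to a direct framework property access.
    public static string CurrentDateString() => DateTime.Now.ToShortDateString();
    public static int CurrentDay() => DateTime.Today.Day;
    public static int CurrentMonth() => DateTime.Today.Month;
    public static int CurrentYear() => DateTime.Today.Year;
    public static int CurrentHour() => DateTime.Now.Hour;
    public static int CurrentMinute() => DateTime.Now.Minute;
}

At that point, most call sites could just use DateTime.Now directly and skip the helper class entirely.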

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

MELast Post About the Yoga Gen3

Just over a year ago I bought myself a Thinkpad Yoga Gen 3 [1]. That is a nice machine and I really enjoyed using it. But a few months ago it started crashing and would often play some music on boot. The music is a diagnostic code that can be interpreted by the Lenovo Android app. Often the music translated to “code 0284 TCG-compliant functionality-related error” which suggests a motherboard problem. So I bought a new motherboard.

The system still crashes with the new motherboard. It seems to only crash when on battery, which indicates that a power issue might be causing the crashes. I configured the BIOS to disable the TPM, which avoided the TCG messages and tunes on boot, but it still crashes.

An additional problem is that the Yoga series is designed so that the keys retract when the system is opened past 180 degrees and when the lid is closed. After the motherboard replacement, about half the keys don’t retract, which means they will damage the screen further when the lid is closed (the screen was already damaged by the keys when I bought it).

I think that spending more money on trying to fix this would be a waste. So I’ll use it as a test machine and I might give it to a relative who needs a portable computer to be used when on power only.

For the moment I’m back to the Thinkpad X1 Carbon Gen 5 [2]. Hopefully the latest kernel changes to zswap and the changes to Chrome to suspend unused tabs will make up for more RAM use in other areas. Currently it seems to be giving decent performance with 8G of RAM and I usually don’t notice any difference from the Yoga Gen 3.

Now I’m considering getting a Thinkpad X1 Carbon Extreme with a 4K display. But they seem a bit expensive at the moment. Currently there’s only one on ebay Australia for $1200ono.

365 TomorrowsIngress

Author: Sukanya Basu Mallik Every evening, Mira and Arun huddled in the glow of their holo-tablet to devour ‘Extended Reality’, the hottest sci-fi novel on the Net. As pages flicked by in midair, lush digital fauna and neon-lit spires looped through their cramped flat. Tonight’s chapter promised the Chromatic Gates—legendary portals that blurred the line […]

The post Ingress appeared first on 365tomorrows.

,

LongNowLynn Rothschild

Lynn Rothschild

Lynn J. Rothschild is a research scientist at NASA Ames and Adjunct Professor at Brown University and Stanford University working in astrobiology, evolutionary biology and synthetic biology. Rothschild's work focuses on the origin and evolution of life on Earth and in space, and in pioneering the use of synthetic biology to enable space exploration.

From 2011 through 2019 Rothschild served as the faculty advisor of the award-winning Stanford-Brown iGEM (international Genetically Engineered Machine Competition) team, exploring innovative technologies such as biomining, mycotecture, BioWires, making a biodegradable UAS (drone) and an astropharmacy. Rothschild is a past-president of the Society of Protozoologists, fellow of the Linnean Society of London, The California Academy of Sciences and the Explorer’s Club and lectures and speaks about her work widely.

Cryptogram Slopsquatting

As AI coding assistants invent nonexistent software libraries to download and use, enterprising attackers create and upload libraries with those names—laced with malware, of course.

EDITED TO ADD (1/22): Research paper. Slashdot thread.

Cryptogram Android Improves Its Security

Android phones will soon reboot themselves after sitting idle for three days. iPhones have had this feature for a while; it’s nice to see Google add it to their phones.

Worse Than FailureXJSOML

When Steve's employer went hunting for a new customer relationship management system (CRM), they had some requirements. A lot of them were around the kind of vendor support they'd get. Their sales team weren't the most technical people, and the company wanted to push as much routine support off to the vendor as possible.

But they also needed a system that was extensible. Steve's company had many custom workflows they wanted to be able to execute, and automated marketing messages they wanted to construct, and so wanted a CRM that had an easy to use API.

"No worries," the vendor sales rep said, "we've had a RESTful API in our system for years. It's well tested and reliable. It's JSON based."

The purchasing department ground their way through the purchase order and eventually they started migrating to the new CRM system. And it fell to Steve to start learning the JSON-based, RESTful API.

"JSON"-based was a more accurate description.

For example, an API endpoint might have a schema like:

DeliveryId:	int // the ID of the created delivery
Errors: 	xml // Collection of errors encountered

This example schema is representative. Many "JSON" documents contained strings of XML inside of them.

Often, this is done when an existing XML-based API is "modernized", but in this case, the root cause is a little dumber than that. The system uses SQL Server as its back end, and XML is one of the native types. They just have a stored procedure build an XML object and then return it as an output parameter.
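
To make the pain concrete, here's a minimal sketch of what consuming such a response looks like in C#. The field names follow the example schema above; the payload and type names are invented for illustration:

using System;
using System.Text.Json;
using System.Xml.Linq;

public class DeliveryResponse
{
    public int DeliveryId { get; set; }   // the ID of the created delivery
    public string Errors { get; set; }    // XML smuggled inside a JSON string
}

public static class DeliveryDemo
{
    public static void Main()
    {
        string payload =
            "{\"DeliveryId\": 42, \"Errors\": \"<Errors><Error>Address missing</Error></Errors>\"}";

        // First parse: the JSON envelope.
        DeliveryResponse response = JsonSerializer.Deserialize<DeliveryResponse>(payload);

        // Second parse: the XML document hiding inside a JSON string field.
        XDocument errors = XDocument.Parse(response.Errors);
        foreach (XElement error in errors.Descendants("Error"))
        {
            Console.WriteLine(error.Value);
        }
    }
}

Every consumer ends up doing this double-parse dance- exactly the kind of busywork a genuinely JSON-based API would have spared them.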

You'll be surprised to learn that the vendor's support team had a similar level of care: they officially did what you asked, but sometimes it felt like malicious compliance.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsGilded Cage

Author: Robert Gilchrist The door snicked shut behind the Dauphin. Metallic locks hammered with a decisive thud. He breathed a sigh of relief. He was safe. Jogging into the room was the Invader. Wearing a red holo-mask to obscure distinguishing features, the figure came up to the door and began running their hands over it […]

The post Gilded Cage appeared first on 365tomorrows.

Krebs on SecurityWhistleblower: DOGE Siphoned NLRB Case Data

A security architect with the National Labor Relations Board (NLRB) alleges that employees from Elon Musk‘s Department of Government Efficiency (DOGE) transferred gigabytes of sensitive data from agency case files in early March, using short-lived accounts configured to leave few traces of network activity. The NLRB whistleblower said the unusual large data outflows coincided with multiple blocked login attempts from an Internet address in Russia that tried to use valid credentials for a newly-created DOGE user account.

The cover letter from Berulis’s whistleblower statement, sent to the leaders of the Senate Select Committee on Intelligence.

The allegations came in an April 14 letter to the Senate Select Committee on Intelligence, signed by Daniel J. Berulis, a 38-year-old security architect at the NLRB.

NPR, which was the first to report on Berulis’s whistleblower complaint, says NLRB is a small, independent federal agency that investigates and adjudicates complaints about unfair labor practices, and stores “reams of potentially sensitive data, from confidential information about employees who want to form unions to proprietary business information.”

The complaint documents a one-month period beginning March 3, during which DOGE officials reportedly demanded the creation of all-powerful “tenant admin” accounts in NLRB systems that were to be exempted from network logging activity that would otherwise keep a detailed record of all actions taken by those accounts.

Berulis said the new DOGE accounts had unrestricted permission to read, copy, and alter information contained in NLRB databases. The new accounts also could restrict log visibility, delay retention, route logs elsewhere, or even remove them entirely — top-tier user privileges that neither Berulis nor his boss possessed.

Berulis writes that on March 3, a black SUV accompanied by a police escort arrived at his building — the NLRB headquarters in Southeast Washington, D.C. The DOGE staffers did not speak with Berulis or anyone else in NLRB’s IT staff, but instead met with the agency leadership.

“Our acting chief information officer told us not to adhere to standard operating procedure with the DOGE account creation, and there was to be no logs or records made of the accounts created for DOGE employees, who required the highest level of access,” Berulis wrote of their instructions after that meeting.

“We have built in roles that auditors can use and have used extensively in the past but would not give the ability to make changes or access subsystems without approval,” he continued. “The suggestion that they use these accounts was not open to discussion.”

Berulis found that on March 3 one of the DOGE accounts created an opaque, virtual environment known as a “container,” which can be used to build and run programs or scripts without revealing its activities to the rest of the world. Berulis said the container caught his attention because he polled his colleagues and found none of them had ever used containers within the NLRB network.

Berulis said he also noticed that early the next morning — between approximately 3 a.m. and 4 a.m. EST on Tuesday, March 4  — there was a large increase in outgoing traffic from the agency. He said it took several days of investigating with his colleagues to determine that one of the new accounts had transferred approximately 10 gigabytes worth of data from the NLRB’s NxGen case management system.

Berulis said neither he nor his co-workers had the necessary network access rights to review which files were touched or transferred — or even where they went. But his complaint notes the NxGen database contains sensitive information on unions, ongoing legal cases, and corporate secrets.

“I also don’t know if the data was only 10gb in total or whether or not they were consolidated and compressed prior,” Berulis told the senators. “This opens up the possibility that even more data was exfiltrated. Regardless, that kind of spike is extremely unusual because data almost never directly leaves NLRB’s databases.”

Berulis said he and his colleagues grew even more alarmed when they noticed nearly two dozen login attempts from a Russian Internet address (83.149.30,186) that presented valid login credentials for a DOGE employee account — one that had been created just minutes earlier. Berulis said those attempts were all blocked thanks to rules in place that prohibit logins from non-U.S. locations.

“Whoever was attempting to log in was using one of the newly created accounts that were used in the other DOGE related activities and it appeared they had the correct username and password due to the authentication flow only stopping them due to our no-out-of-country logins policy activating,” Berulis wrote. “There were more than 20 such attempts, and what is particularly concerning is that many of these login attempts occurred within 15 minutes of the accounts being created by DOGE engineers.”

According to Berulis, the naming structure of one Microsoft user account connected to the suspicious activity suggested it had been created and later deleted for DOGE use in the NLRB’s cloud systems: “DogeSA_2d5c3e0446f9@nlrb.microsoft.com.” He also found other new Microsoft cloud administrator accounts with nonstandard usernames, including “Whitesox, Chicago M.” and “Dancehall, Jamaica R.”

A screenshot shared by Berulis showing the suspicious user accounts.

On March 5, Berulis documented that a large section of logs for recently created network resources was missing, and that a network watcher in Microsoft Azure was set to the “off” state, meaning it was no longer collecting and recording data as it should have.

Berulis said he discovered someone had downloaded three external code libraries from GitHub that neither NLRB nor its contractors ever use. A “readme” file in one of the code bundles explained it was created to rotate connections through a large pool of cloud Internet addresses that serve “as a proxy to generate pseudo-infinite IPs for web scraping and brute forcing.” Brute force attacks involve automated login attempts that try many credential combinations in rapid sequence.

The complaint alleges that by March 17 it became clear the NLRB no longer had the resources or network access needed to fully investigate the odd activity from the DOGE accounts, and that on March 24, the agency’s associate chief information officer had agreed the matter should be reported to US-CERT. Operated by the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), US-CERT provides on-site cyber incident response capabilities to federal and state agencies.

But Berulis said that between April 3 and 4, he and the associate CIO were informed that “instructions had come down to drop the US-CERT reporting and investigation and we were directed not to move forward or create an official report.” Berulis said it was at this point he decided to go public with his findings.

An email from Daniel Berulis to his colleagues dated March 28, referencing the unexplained traffic spike earlier in the month and the unauthorized changing of security controls for user accounts.

Tim Bearese, the NLRB’s acting press secretary, told NPR that DOGE neither requested nor received access to its systems, and that “the agency conducted an investigation after Berulis raised his concerns but ‘determined that no breach of agency systems occurred.'” The NLRB did not respond to questions from KrebsOnSecurity.

Nevertheless, Berulis has shared a number of supporting screenshots showing agency email discussions about the unexplained account activity attributed to the DOGE accounts, as well as NLRB security alerts from Microsoft about network anomalies observed during the timeframes described.

As CNN reported last month, the NLRB has been effectively hobbled since President Trump fired three board members, leaving the agency without the quorum it needs to function.

“Despite its limitations, the agency had become a thorn in the side of some of the richest and most powerful people in the nation — notably Elon Musk, Trump’s key supporter both financially and arguably politically,” CNN wrote.

Both Amazon and Musk’s SpaceX have been suing the NLRB over complaints the agency filed in disputes about workers’ rights and union organizing, arguing that the NLRB’s very existence is unconstitutional. On March 5, a U.S. appeals court unanimously rejected Musk’s claim that the NLRB’s structure somehow violates the Constitution.

Berulis shared screenshots with KrebsOnSecurity showing that on the day NPR published its story about his claims (April 14), the deputy CIO at NLRB sent an email stating that administrative control had been removed from all employee accounts. Meaning, suddenly none of the IT employees at the agency could do their jobs properly anymore, Berulis said.

An email from the NLRB’s associate chief information officer Eric Marks, notifying employees they will lose security administrator privileges.

Berulis shared a screenshot of an agency-wide email dated April 16 from NLRB director Lasharn Hamilton saying DOGE officials had requested a meeting, and reiterating claims that the agency had no prior “official” contact with any DOGE personnel. The message informed NLRB employees that two DOGE representatives would be detailed to the agency part-time for several months.

An email from the NLRB Director Lasharn Hamilton on April 16, stating that the agency previously had no contact with DOGE personnel.

Berulis told KrebsOnSecurity he was in the process of filing a support ticket with Microsoft to request more information about the DOGE accounts when his network administrator access was restricted. Now, he’s hoping lawmakers will ask Microsoft to provide more information about what really happened with the accounts.

“That would give us way more insight,” he said. “Microsoft has to be able to see the picture better than we can. That’s my goal, anyway.”

Berulis’s attorney told lawmakers that on April 7, while his client and legal team were preparing the whistleblower complaint, someone physically taped a threatening note to Mr. Berulis’s home door with photographs — taken via drone — of him walking in his neighborhood.

“The threatening note made clear reference to this very disclosure he was preparing for you, as the proper oversight authority,” reads a preface by Berulis’s attorney Andrew P. Bakaj. “While we do not know specifically who did this, we can only speculate that it involved someone with the ability to access NLRB systems.”

Berulis said the response from friends, colleagues and even the public has been largely supportive, and that he doesn’t regret his decision to come forward.

“I didn’t expect the letter on my door or the pushback from [agency] leaders,” he said. “If I had to do it over, would I do it again? Yes, because it wasn’t really even a choice the first time.”

For now, Mr. Berulis is taking some paid family leave from the NLRB. Which is just as well, he said, considering he was stripped of the tools needed to do his job at the agency.

“They came in and took full administrative control and locked everyone out, and said limited permission will be assigned on a need basis going forward,” Berulis said of the DOGE employees. “We can’t really do anything, so we’re literally getting paid to count ceiling tiles.”

Further reading: Berulis’s complaint (PDF).

,

Worse Than FailureCodeSOD: The Variable Toggle

A common class of bad code is the code which mixes server side code with client side code. This kind of thing:

<script>
    <?php if ($someVal) { ?>
        var foo = <?php echo $someOtherVal; ?>;
    <?php } else { ?>
        var foo = 5;
    <?php } ?>
</script>

We've seen it, we hate it, and is there really anything new to say about it?

Well, today's anonymous submitter found an "interesting" take on the pattern.

<script>
    if(linkfromwhere_srfid=='vff')
      {
    <?php
    $vff = 1;
    ?>
      }
</script>

Here, they have a client-side conditional, and based on that conditional, they attempt to set a variable on the server side. This does not work. This cannot work: the PHP code executes on the server, the client code executes on the client, and you need to be a lot more thoughtful about how they interact than this.
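
For contrast, here's a minimal sketch of the two directions data can actually travel, using purely illustrative names (the srfid query parameter and $vff variable here are assumptions, not the submitter's real code). If the server needs something the browser knows, that information has to arrive in a request; if the browser needs a server-side value, the server has to serialize it into the page before the response goes out:

<?php
// Server side runs first, on the request: read client-supplied state
// from the query string and compute the value before any HTML is sent.
$linkSource = $_GET['srfid'] ?? '';
$vff = ($linkSource === 'vff') ? 1 : 0;
?>
<script>
    // Client side runs later, in the browser. The server-side result is
    // embedded as data; json_encode keeps the output safely escaped.
    var vff = <?php echo json_encode($vff); ?>;
    if (vff === 1) {
        // client-only behavior goes here
    }
</script>

Anything that has to react to a client-side event after the page has loaded needs a second round trip instead - typically a fetch or XMLHttpRequest call to a PHP endpoint that does the server-side work and returns a response.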

And yet, the developer responsible has done this all over the code base, pushed the non-working code out to production, and when it doesn't work, just adds bug tickets to the backlog to eventually figure out why- tickets that never get picked up, because there's always something with a higher priority out there.


365 TomorrowsNo Future For You

Author: Julian Miles, Staff Writer Flickering light is the only illumination in the empty laboratory. A faint humming the only noise. At the centre of a mass of equipment sits an old, metal-framed specimen tank, edges spotted with rust. Inside whirls a multi-coloured cloud, source of both light and sound. This close to it, the […]

The post No Future For You appeared first on 365tomorrows.

,

365 TomorrowsDrugs Awareness Day

Author: David Barber Teachers make the worst students, thought Mrs Adebeyo. They drifted in, chattering, and filling up tables according to subject. At the front sat four English teachers. One of the women was busy knitting. Mrs Adebeyo was already frowning at the click of needles. At the back was a row of men looking […]

The post Drugs Awareness Day appeared first on 365tomorrows.

,

365 TomorrowsMy Earliest Memory

Author: Marshall Bradshaw “You’re going to remember this next part,” said Dr. Adams. The fluorescent lights of exam room 8 hummed in beautiful harmony. I counted off the flashes. 120 per second. That was 7,200 per minute, or 432,000 per hour. The numbers felt pleasantly round to me. I reported the observation to Dr. Adams. […]

The post My Earliest Memory appeared first on 365tomorrows.

David BrinThe AI Dilemma continues onward... despite all our near term worries

First off, although this posting is not overall political... I will offer a warning to you activists out there.


While I think protest marches are among the least effective kinds of resistance - (especially since MAGAs live for one thing: to drink the tears of every smartypants professional/scientist/civil-servant etc.) -- I still praise you active folks who are fighting however you can for the Great (now endangered) Experiment. Still, may I point out how deeply stupid the organizers of the 50501 Movement are?


Carumba! They scheduled their next protests for April 19, which far-right maniacs call Waco Day or Timothy McVeigh Day. A day when you are best advised to lock the doors.

That's almost as stoopid as the morons who made April 20 (4-20) their day of yippee pot delight... also Hitler's birthday.

Shouldn't we have vetting and even CONFERENCES before we decide such things?

Still, those of you heading out there (is it today already?) bless you for your citizenship and courage.

And now...


There’s always more about AI – and hence a collection of links to…



== The AI dilemmas and myriad-lemmas continue ==


I’ve been collecting so much material on the topic… and more keeps pouring in. Though (alas!) so little of it is enlightening about how to turn this tech revolution into a positive-sum outcome for all.


Still, let’s start with a potpourri…


1. A new study by Caltech and UC Riverside uncovers the hidden toll that AI exacts on human health, from chip manufacturing to data center operation.
 

2. And also this re: my alma mater: Caltech researchers have developed brain–machine interfaces that can interpret data from neural activity even after the signal from an implant has become less clear.   

 

3. Swinging from process to implications… my friend from the UK (via Panama) Calum Chace (author of Surviving AI: The Promise & Peril of Artificial Intelligence) sent me this one from his Conscium Project re: “Principles for Responsible AI Consciousness Research”. While detailed and illuminating, the work is vague about the most important things, like how to tell ‘consciousness’ from a system that feigns it… and whether that even matters.

Moreover, none of the authors cited here refers to how the topic was pre-explored in science fiction. Take the dive into “what is consciousness?” that you’ll find in Peter Watts’s novel “Blindsight.” 

 

…wherein Watts makes the case that a sense of self is not even necessary in order for a being to behave in ways that are actively intelligent, communicative and even ferociously self-interested.  

 

All you need is evolution. And an overall system in which evolution remains (as in nature) zero-sum.  Which – I keep trying to tell folks – is not necessarily fore-ordained.



== And yet more AI-related miscellany ==


4. Augmented reality glasses with face-recognition and access to world databases… now where have we seen this before? How about in great detail in Existence?

5. On the MINDPLEX Podcast with AI pioneers Ben Goertzel and Lisa Rein, covering - among many topics - training AGIs to hold each other accountable, pingable IDs (using cryptographic hashes to secure agent identity), AGI rights & much, much more! (Sept 2024). And yeah, I am there, too. 


6. An article by Anthropic CEO Dario Amodei makes points similar to those of Reid Hoffman and Marc Andreessen: that the upsides of AI are potentially spectacular, as also portrayed in a small but influential number of science fiction tales. Alas, his list of potential benefits, while extensive re ways AI could be “Machines of Loving Grace,” is also long in the tooth and almost hoary-clichéd. We need to recall that in any ecosystem - including the new cyber one - entities without feedback constraints soon evolve into whatever form best proliferates quickly. That is, unless feedback loops take shape.


7. This article in FORTUNE makes a case similar to my own... that AIs will improve best, in accuracy and sagacity and even wisdom, if accountability is applied by AI upon other AIs. Positive feedback can be a dangerous cycle, while some kinds of negative feedback loops can lead to incrementally increased correlation with the real world.


8. Again, my featured WIRED article about this - Give Every AI a Soul... or Else.

My related Newsweek op-ed (June '22) dealt with 'empathy bots' that feign sapience and personhood.  



== AI-generated visual lies – we can deal with this! ==


9. A new AI algorithm flags deepfakes with 98% accuracy — better than any other tool out there right now. And it is essential that we keep developing such systems, in order to stand a chance of keeping up in an arms race against those who would foist on us lies, scams and misinformation...


... pretty much exactly as I described back in 1997 in this reposted chapter from The Transparent Society - “The End of Photography as Proof.”  


Two problems. First, scammers will use programs like this one to help perfect their scam algorithms. Second, it would be foolish to rely upon any one such system, or just a few. A diversity of highly competitive tattle-tale lie-denouncing systems is the only thing that can work, as I discuss here.


Oh, and third. It is inherent – (as I show in that chapter of The Transparent Society) – that lies are more easily detected, denounced and incinerated in a general environment of transparency, wherein the greatest number can step away from their screens and algorithms and compare them to actual, physically-witnessed reality.


For more on this, here’s my related Newsweek op-ed (June '22), which dealt with 'empathy bots' that feign sapience, plus a YouTube pod where I give an expanded version.


== Generalizing to innovation, in general ==


10. Traditional approaches to innovation emphasize ideas and inventions, often leading to a losing confrontation with the mysterious “Valley of Death.” My colleague Peter Denning and his co-author Todd Lyons upend this paradigm in Navigating a Restless Sea, offering eight mutually reinforcing practices that power skillful navigation toward adoption, mobilizing people to try something new.  


== Some tacked-on tech miscellany ==


11. Sure, it goes back to Neolithic “Venus figurines” and Playboy and per-minute phone comfort lines and Eliza - and the movie “Her.” And bot factories are hard at work. At my 2017 World of Watson keynote, I predicted persuasive 'empathy bots' would arrive in 2022 (they did). And soon, Kremlin 'honeypot-lure babes' should become ineffective! Because this deep weakness of male humans will have an outlet that's... more than human?


Could that lead to those men calming down, prioritizing the more important aspects of life?


12. And hence, we will soon see...
AI girlfriends and companions. And this from Vox: People are falling in love with -- and getting addicted to -- AI voices.


13. Kewl! This tiny 3D-printed Apple IIe is powered by a $2 microcontroller. With a teensy working screen taken from an Apple Watch. Can run your old IIe programs. Size of an old mouse.


14. Paragraphica by Bjørn Karmann is a camera that has no lens, but instead generates a text description of when & where it is, then generates an image via a text-to-image model.  


15. Daisy is an AI cellphone application that wastes scammers’ time so that they don’t have time to target real victims. Daisy has "told frustrated scammers meandering stories of her family, talked at length about her passion for knitting, and provided exasperated callers with false personal information including made-up bank details."



And finally...



== NOT directly AI… but for sure implications! ==


And… only slightly off-topic: If you feel a need for an inspiring tale about a modern hero, intellect and deeply wise public figure, try Judge David Tatel’s VISION: A Memoir of Blindness and Justice. I’ll vouch that he’s one of the wisest wise-guys I know. "Vision is charming, wise, and completely engaging. This memoir of a judge of the country’s second highest court, who has been without sight for decades, goes down like a cool drink on a hot day." —Scott Turow. https://www.davidtatel.com/


And Maynard Moore - of the Institute for the Study of Religion in the Age of Science - will be holding a pertinent event online in mid-January: “Human-Machine Fusion: Our Destiny, or Might We Do Better?” The IRAS webinar is free, but registration is required.



== a lagniappe (puns intended) ==


In the 1980s, this supposedly “AI”-generated sonnet emerged from the following prompt: “Buzz Off, Banana Nose.”  


Well… real or not, here’s my haiku response. 


In lunar orchards

    A pissed apiary signs:

"Bee gone, Cyrano!"

 

Count the layers, oh cybernetic padawans! I’m not obsolete… yet.


,

Worse Than FailureError'd: Hot Dog

Faithful Peter G. took a trip. "So I wanted to top up my bus ticket online. After asking for credit card details, PINs, passwords, a blood sample, and the airspeed velocity of an unladen European swallow, they also sent a notification to my phone which I had to authorise with a fingerprint, and then verify that all the details were correct (because you can never be too careful when paying for a bus ticket). So yes, it's me, but the details definitely are not correct." Which part is wrong, the currency? Any idea what the exchange rate is between NZD and the euro right now?


An anonymous member kvetched "Apparently, I'm a genius, but The New York Times' Spelling Bee is definitely not."


Mickey D. had an ad pop up for a new NAS coming to market.
Specs: Check
Storage: Check
Superior technical support: "


Michael R. doesn't believe everything he sees on TV, thankfully, because "No wonder the stock market is in turmoil when prices fall by 500% like in the latest Amazon movie G20."


Finally, new friend Sandro shared his tale of woe. "Romance was hard enough when I was young, and I see not much has changed now!"


[Advertisement] Plan Your .NET 9 Migration with Confidence
Your journey to .NET 9 is more than just one decision. Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

365 TomorrowsSubscription Fee

Author: Fawkes Defries ‘Shit!’ Russ collapsed against his chrome tent, cursing as the acid tore through his clothes. Usually he made it back inside before the rain fell, but his payments to Numeral for the metal arms had just defaulted, and without the gravity-suspenders active he was stuck lugging his hands around like a cyborg […]

The post Subscription Fee appeared first on 365tomorrows.

,

Cryptogram Age Verification Using Facial Scans

Discord is testing the feature:

“We’re currently running tests in select regions to age-gate access to certain spaces or user settings,” a spokesperson for Discord said in a statement. “The information shared to power the age verification method is only used for the one-time age verification process and is not stored by Discord or our vendor. For Face Scan, the solution our vendor uses operates on-device, which means there is no collection of any biometric information when you scan your face. For ID verification, the scan of your ID is deleted upon verification.”

I look forward to all the videos of people hacking this system using various disguises.

Cryptogram Friday Squid Blogging: Japanese Divers Video Giant Squid

The video is really amazing.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Cryptogram Friday Squid Blogging: Live Colossal Squid Filmed

A live colossal squid was filmed for the first time in the ocean. It’s only a juvenile: a foot long.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Cryptogram Friday Squid Blogging: Pyjama Squid

The small pyjama squid (Sepioloidea lineolata) produces toxic slime, “a rare example of a poisonous predatory mollusc.”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Cryptogram Friday Squid Blogging: Squid Facts on Your Phone

Text “SQUID” to 1-833-SCI-TEXT for daily squid facts. The website has merch.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.