Planet Russell


Planet Debian: Sergio Cipriano: How I finally tracked my Debian uploads correctly


A long time ago, I became aware of UDD (Ultimate Debian Database), which gathers various Debian data into a single SQL database.

At that time, we were trying to do something simple: list the contributions (package uploads) of our local community, Debian Brasília. We ended up with a script that counted uploads to unstable and experimental.

I was never satisfied with the final result because some uploads were always missing. Here is an example:

debci (3.0) experimental; urgency=medium
...
   [ Sergio de almeida cipriano Junior ]
   * Fix Style/GlovalVars issue
   * Rename blacklist to rejectlist
...

I made changes in debci 3.0, but the upload was done by someone else. This kind of contribution cannot be tracked by that script.

Then, a few years ago, I learned about Minechangelogs, which allows us to search through the changelogs of all Debian packages currently published.

Today, I decided to explore how this was done, since I couldn't find anything useful for that kind of query in UDD's tables.

That's when I came across ProjectB. It was my first time hearing about it. ProjectB is a database that stores all the metadata about the packages in the Debian archive, including the changelogs of those packages.

Now that I'm a Debian Developer, I have access to this database. If you also have access and want to try some queries, you can do this:

$ ssh <username>@mirror.ftp-master.debian.org -N -L 15434:danzi.debian.org:5435
$ psql postgresql://guest@localhost:15434/projectb?sslmode=allow

In the end, it finally solved my problem.

Using the code below, with UDD, I get 38 uploads:
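
A rough sketch of the kind of UDD query involved, assuming UDD's upload_history table (the column names here are assumptions and should be checked against the actual schema), could look like this:

-- Sketch only, not the original script: count uploads to unstable and
-- experimental attributed to a given person.
SELECT count(*)
FROM upload_history
WHERE distribution IN ('unstable', 'experimental')
  AND (changed_by LIKE '%Your Name%' OR signed_by LIKE '%Your Name%');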

Using the code below, with ProjectB, I get 43 uploads (the correct amount):

It feels good to finally solve this itch I've had for years.

Charles Stross: Books I will not Write: this time, a movie

(This is an old/paused blog entry I planned to release in April while I was at Eastercon, but forgot about. Here it is, late and a bit tired as real world events appear to be out-stripping it ...)

(With my eyesight/cognitive issues I can't watch movies or TV made this century.)

But in light of current events, my Muse is screaming at me to sit down and write my script for an updated re-make of Doctor Strangelove:

POTUS GOLDPANTS, in middling dementia, decides to evade the 25th amendment by barricading himself in the Oval Office and launching stealth bombers at Latveria. Etc.

The USAF has a problem finding Latveria on a map (because Doctor Doom infiltrated the Defense Mapping Agency) so they end up targeting the Duchy of Grand Fenwick by mistake, which is in Transnistria ... which they are also having problems finding on Google Maps, because it has the string "trans" in its name.

While the USAF is trying to bomb Grand Fenwick (in Transnistria), Russian tanks are commencing a special military operation in Moldova ... of which Transnistria is a breakaway autonomous region.

Russia is unaware that Grand Fenwick has the Q-bomb (because they haven't told the UN yet). Meanwhile, the USAF bombers blundering overhead have stealth coatings bought from a President Goldfarts crony that even antiquated Russian radar can spot.

And it's up to one trepidatious officer to stop them ...

Charles Stross: Another brief update

(UPDATE: A new article/interview with me about the 20th anniversary of Accelerando just dropped, c/o Agence France-Presse. Gosh, I feel ancient.)

Bad news: the endoscopy failed. (I was scheduled for an upper GI endoscopy via the nasal sinuses to take a look around my stomach and see what's bleeding. Bad news: turns out I have unusually narrow sinuses, and by the time they'd figured this out my nose was watering so badly that I couldn't breathe when they tried to go in via my throat. So we're rescheduling for a different location with an anesthetist who can put me under if necessary. NB: I would have been fine with only local anaesthesia if the bloody endoscope had fit through my sinuses. Gaah.)

The attack novel I was working on has now hit the 70% mark in first draft—not bad for two months. I am going to keep pushing onwards until it stops, or until the page proofs I'm expecting hit me in the face. They're due at the end of June, so I might finish Starter Pack first ... or not. Starter Pack is an unexpected but welcome spin-off of Ghost Engine (third draft currently on hold at 80% done), which I shall get back to in due course. It seems to have metastasized into a multi-book project.

Neither of the aforementioned novels is finished, nor do they have a US publisher. (Ghost Engine has a UK publisher, who has been Very Patient for the past few years—thanks, Jenni!)

Feel free to talk among yourselves, especially about the implications of Operation Spiders Web, which (from here) looks like the defining moment for a very 21st century revolution in military affairs; one marking the transition from fossil fuel powered force projection to electromotive/computational force projection.

365 Tomorrows: Perfect Copy

Author: David C. Nutt I remember the day as if it were only yesterday. I walked into the room. Adrian was adjusting a painting- Starry Night by Van Gough. It was breath taking! “Is it the original?” It wasn’t a stupid question. That’s how powerful Adrian was. I also noticed his antique Colt Whitneyville Walker […]

The post Perfect Copy appeared first on 365tomorrows.


Planet Debian: Russell Coker: Function Keys

For at least 12 years, laptops have defaulted to not having the traditional PC 101-key function-key behaviour; instead the function keys control things like volume, with a key labelled Fn to toggle between the two modes. It’s been a BIOS option to control whether traditional function keys or controls for volume etc are the default, and for at least 12 years I’ve configured all my laptops to have the traditional function keys as the default.

Recently I’ve been working in corporate IT and having exposure to many laptops that keep the default BIOS settings for those keys (volume etc) with no reasonable option for changing it. This has made me reconsider the options for configuring these things.

Here’s a page listing the standard uses of function keys [1]. Here is a summary of the relevant part of that page:

  • The F1 key launches help, which doesn’t seem to get much use. The main help option in practice is Google (I anticipate controversy about this and welcome comments) and all the software vendors are investigating LLM options for help which probably won’t involve F1.
  • F2 is for renaming files but doesn’t get much use. Probably most people who use graphical file managers use the right mouse button for it. I use it when sorting a selection of photos.
  • F3 is for launching a search (which is CTRL-F in most programs).
  • ALT-F4 is for closing a window which gets some use, although for me the windows I close are web browsers (via CTRL-W) and terminals (via CTRL-D).
  • F5 is for reloading a page which is used a lot in web browsers.
  • F6 moves the input focus to the URL field of a web browser.
  • F8 is for moving a file which in the degenerate case covers the rename functionality of F2.
  • F11 is for full-screen mode in browsers which is sometimes handy.

The keys F1, F3, F4, F7, F9, F10, and F12 don’t get much use for me and for the people I observe. The F2 and F8 keys aren’t useful in most programs, F6 is only really used in web browsers – but the web browser counts as “most programs” nowadays.

Here’s the description of Thinkpad Fn keys [2]. I use Thinkpads for fun and Dell laptops for work, so it would be nice if they both worked in similar ways but of course they don’t. Dell doesn’t document how their Fn keys are laid out, but the relevant bit is that F1 to F4 are the same as on Thinkpads which is convenient as they are the ones that are likely to be commonly used and needed in a hurry.

I have used the KDE settings on my Thinkpad to map the F1 to F3 function keys to their Fn equivalents (F1 for mute-audio, F2 for vol-down, and F3 for vol-up) so I can use them without holding down the Fn key, while other function keys such as F5 and F6 keep their usual GUI functionality. Now I’ll have to train myself to use F8 in situations where I usually use F2, at least when using a laptop.
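
Outside of KDE, a similar remap can be done at a lower level with a systemd hwdb entry. The following is only a sketch: the scancodes (3b, 3c, 3d are the traditional AT codes for F1 to F3) and the match line need to be verified with evtest or udevadm for the specific keyboard.

# /etc/udev/hwdb.d/90-fn-remap.hwdb (sketch)
# Map the F1/F2/F3 scancodes to mute, volume-down, and volume-up
evdev:atkbd:dmi:bvn*:bvr*:bd*:svnLENOVO*:pn*
 KEYBOARD_KEY_3b=mute
 KEYBOARD_KEY_3c=volumedown
 KEYBOARD_KEY_3d=volumeup

Then run systemd-hwdb update and udevadm trigger (or reboot) to apply it.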

The only other Fn combinations I use are F5 and F6 for controlling screen brightness, but that’s not something I use much.

It’s annoying that the laptop manufacturers forced me to this. Having a Fn key to get extra functions and not need 101+ keys on a laptop size device is a reasonable design choice. But they could have done away with the PrintScreen key to make space for something else. Also for Thinkpads a touch pad is something that could obviously be removed to gain some extra space as the Trackpoint does all that’s needed in that regard.

Worse Than Failure: Error'd: Better Nate Than Lever

Happy Friday. For those of us in America, today is a political holiday. But let's avoid politics for the moment. Here are a few more wtfs.

"Error messages are hard," sums Ben Holzman , mock-replying "Your new puzzle games are fun, LinkedIn, but your error messages need a little work…"


Orin S. chooses wisely "These should behave like radio buttons, so… No?" I get his point, but I think the correct answer is "Yes, they are checkboxes".


Mark W. refreshes an occasionally seen issue. "Fair enough, Microsoft Office - I don't trust those guys either." Without more diagnostics it's hard to say what's going on here, but maybe some of you have seen this before.


ANONYMOVS chiseled out an email to us. "Maybe it really is Roman numerals? I never did find the tracking ID..."


And finally, Jonathan described this final entry as "String locationalization resource names showing," jibing that "Monday appears to be having a bad Monday." So they were.



365 Tomorrows: Spadehammer

Author: R. J. Erbacher “I… am… the summoner… of Spadehammer!” The herd of oafs began ‘hoolering.’ They could not clap and a ‘hool’ was their equivalent of a cheer. The inhabitants of this planet were basically bipedal, semi-intelligent cattle with thick arms that had curled appendages on the end resembling an elephant’s trunk. Not much […]

The post Spadehammer appeared first on 365tomorrows.

xkcd: Artificial Gravity

Planet Debian: Sahil Dhiman: Secondary Authoritative Name Server Options for Self-Hosted Domains

In the past few months, I have moved the authoritative name servers (NS) of two of my domains (sahilister.net and sahil.rocks) in house using PowerDNS. Subdomains of sahilister.net see roughly 320,000 hits/day across my IN and DE mirror nodes, so adding secondary name servers with good availability (in addition to my own) was one of my first priorities.
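
For context, the PowerDNS side of adding such secondaries mostly comes down to allowing zone transfers (AXFR) and notifying the secondaries of changes. A minimal pdns.conf sketch for the primary (the IPs are documentation placeholders, and the option names should be checked against the PowerDNS version in use):

# Act as primary and send NOTIFYs (older versions call this "master")
primary=yes
# Secondaries allowed to AXFR the zones
allow-axfr-ips=192.0.2.10,2001:db8::10
# Also send NOTIFYs to these addresses on zone changes
also-notify=192.0.2.10,2001:db8::10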

I explored the following options for my secondary NS, which also didn’t cost me anything:

1984 Hosting

Hurricane Electric

Afraid.org

Puck

NS-Global

Asking friends

Two of my friends and fellow mirror hosts have their own authoritative name server setup, Shrirang (i.e. albony) and Luke. Shrirang gave me another POP in IN, and through Luke (who does have an insane amount of in-house NS, see dig ns jing.rocks +short), I added a JP POP.

If we know each other, I would be glad to host a secondary NS for you (in IN and/or DE locations).

Some notes

  • Adding a third-party secondary means trusting that the third party will serve your zone correctly.

  • Hurricane Electric and 1984 Hosting provide multiple NS. One can use some or all of them. Ideally, you can get away with just your own NS plus the full set from either of these two. Play around with adding and removing secondaries to see what gives you the best results. Using everyone is overkill anyway, unless you have specific reasons for it.

  • Moving NS in-house isn’t that hard. Though, be prepared to get it wrong a few times (and some more). I have already faced partial outages because:

    • Recursive resolvers (RR) in the wild behave in weird ways and cache the wrong NS response for longer than the TTL.
    • NS expiry took longer than expected. 2 out of 3 of Netim’s NS (my domain registrar) had stopped serving my domain, while RRs in the wild hadn’t picked up my new in-house NS. I couldn’t really do anything about it, though.
    • The dot at the end is pretty important (see the small zone-file example after this list).
    • With HE.net, I forgot to delegate my domain on their panel and just added it to my NS set, thinking I’d already done so (which I had, but for another domain), leading to a lame server situation.
  • In terms of serving traffic, there’s no distinction between primary and secondary NS. RRs don’t really care which server they send the query to, so one can have a hidden primary too.

  • I initially thought of adding periodic RIPE Atlas measurements from the global set but thought against it as I already host a termux mirror, which brings in thousands of queries from around the world leading to a diverse set of RRs querying my domain already.

  • In most cases, query resolution time would increase with out-of-zone NS servers (which external secondaries most likely are): 1 query vs. 2 queries. Pay close attention to the ADDITIONAL SECTION in Shrirang’s case, followed by mine:

$ dig ns albony.in

; <<>> DiG 9.18.36 <<>> ns albony.in
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 60525
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 9

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;albony.in.			IN	NS

;; ANSWER SECTION:
albony.in.		1049	IN	NS	ns3.albony.in.
albony.in.		1049	IN	NS	ns4.albony.in.
albony.in.		1049	IN	NS	ns2.albony.in.
albony.in.		1049	IN	NS	ns1.albony.in.

;; ADDITIONAL SECTION:
ns3.albony.in.		1049	IN	AAAA	2a14:3f87:f002:7::a
ns1.albony.in.		1049	IN	A	82.180.145.196
ns2.albony.in.		1049	IN	AAAA	2403:44c0:1:4::2
ns4.albony.in.		1049	IN	A	45.64.190.62
ns2.albony.in.		1049	IN	A	103.77.111.150
ns1.albony.in.		1049	IN	AAAA	2400:d321:2191:8363::1
ns3.albony.in.		1049	IN	A	45.90.187.14
ns4.albony.in.		1049	IN	AAAA	2402:c4c0:1:10::2

;; Query time: 29 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Fri Jul 04 07:57:01 IST 2025
;; MSG SIZE  rcvd: 286

vs mine

$ dig ns sahil.rocks

; <<>> DiG 9.18.36 <<>> ns sahil.rocks
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 64497
;; flags: qr rd ra; QUERY: 1, ANSWER: 11, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;sahil.rocks.			IN	NS

;; ANSWER SECTION:
sahil.rocks.		6385	IN	NS	ns5.he.net.
sahil.rocks.		6385	IN	NS	puck.nether.net.
sahil.rocks.		6385	IN	NS	colin.sahilister.net.
sahil.rocks.		6385	IN	NS	marvin.sahilister.net.
sahil.rocks.		6385	IN	NS	ns2.afraid.org.
sahil.rocks.		6385	IN	NS	ns4.he.net.
sahil.rocks.		6385	IN	NS	ns2.albony.in.
sahil.rocks.		6385	IN	NS	ns3.jing.rocks.
sahil.rocks.		6385	IN	NS	ns0.1984.is.
sahil.rocks.		6385	IN	NS	ns1.1984.is.
sahil.rocks.		6385	IN	NS	ns-global.kjsl.com.

;; Query time: 24 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Fri Jul 04 07:57:20 IST 2025
;; MSG SIZE  rcvd: 313
  • Theoretically speaking, a small increase/decrease in resolution time would occur based on the chosen TLD and the popularity of the TLD in the query originator’s area (already cached vs. fresh recursion).
  • One can get away with having only 3 NS (or be like Google and have 4 anycast NS or like Amazon and have 8 or like Verisign and make it 13 :P).
  • Nowhere is it written that your NS needs to be called dns* or ns1, ns2, etc. Get creative with naming NS; be deceptive with the naming :D.
  • A good understanding of RR behavior can help engineer a good authoritative NS system.
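
On the trailing-dot note above: in zone files and NS sets, a name without the final dot is treated as relative and gets the zone origin appended. A small illustrative snippet (example names only):

; With the trailing dot, the target is fully qualified
example.org.    3600    IN    NS    ns1.example.net.
; Without it, the name is relative and becomes ns1.example.net.example.org.
example.org.    3600    IN    NS    ns1.example.net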

Further reading

Planet Debian: Valhalla's Things: Emergency Camisole

Posted on July 4, 2025
Tags: madeof:atoms, craft:sewing, FreeSoftWear

A camisole of white linen fabric; the sides have two vertical strips of filet cotton lace, about 5 cm wide, the top of the front is finished with another lace with triangular points and the straps are made with another insertion lace, about 2 cm wide.

And this is the time when one realizes that she only has one white camisole left. And it’s summer, so I’m wearing a lot of white shirts, and I always wear a white camisole under a white shirt (unless I’m wearing a full chemise).

Not a problem, I have a good pattern for a well fitting camisole that I’ve done multiple times, I don’t even need to take my measurements and draft things, I can get some white jersey from the stash and quickly make a few.

From the stash. Where I have a roll of white jersey and one of off-white jersey. It’s in the inventory. With the “position” field set to a place that no longer exists. uooops.

But I have some leftover lightweight (woven) linen fabric. Surely if I cut the pattern as is with 2 cm of allowance and then sew it with just 1 cm of allowance it will work even in a woven fabric, right?

Wrong.

I mean, it would have probably fit, but it was too tight to squeeze into, and would require adding maybe a button closure to the front. feasible, but not something I wanted.

But that’s nothing that can’t be solved with the Power of Insertion Lace, right?

One dig through the Lace Stash1 and some frantic zig-zag sewing later, I had a tube wide enough for me to squiggle in, with lace on the sides not because it was the easiest place for me to put it, but because it was the right place for it to preserve my modesty, of course.

Encouraged by this, I added a bit of lace to the front, for the look of it, and used some more insertion lace for the straps, instead of making them out of fabric.

And, it looks like it can work. I plan to wear it tonight, so that I can find out whether there is something that chafes or anything, but from a quick test it feels reasonable.

a detail of the side of the camisole, showing the full pattern of the filet lace (alternating Xs and Os), the narrow hem on the back (done with an hemming foot) and the fact that the finishing isn't very neat (but should be stable enough for long term use).

At bust level it’s now a bit too wide, and it gapes a bit under the arms, but I don’t think that it’s going to cause significant problems, and (other than everybody on the internet) nobody is going to see it, so it’s not a big deal.

I still have some linen, but I don’t think I’m going to make another one with the same pattern: maybe I’ll try to do something with a front opening, but I’ll see later on, also after I’ve been looking for the missing jersey in a few more potential places.

As for now, the number of white camisoles I have has doubled, and this is progress enough for today.


  1. with many thanks to my mother’s friend who gave me quite a bit of vintage cotton lace.↩︎


David Brin: Missing contexts for AI: The context of mental damage

Okay, the secret is out. David Brin is writing a book about AI. In fact, it is 3/4 done, enough to offer it to publishers. But more crucially, to start posting bits on-blog, getting feedback from the smartest community online.

And hence, here is a small portion of my chapter on the missing contexts that are (almost) never mentioned in discussions about these new life forms we're creating. I mean:

- The context of Natural Ecosystems and Evolution across the last four billion years...

- The context of a million years of human evolution out of pre-sapience, to become what's still the only known exemplar of 'intelligent life'...

- The context of 6000 years of human agricultural civilization with cities... during which nearly every society fell into a pattern of governance called feudalism, which almost always ensured grotesque stupidity...

- The context of our own, very recent and tenuous escape from that trap, called the 200 year Enlightenment Experiment...

- The context of science itself and how it works. So well that we got to this critical phase of veritable co-creation.

- The context of parenthood...

- and for tonight's posting. The context of human mental illness.


== Just one example of 'hallucination' gone wild ==

Researchers at Anthropic and AI safety company Andon Labs performed a fascinating experiment recently. They put an instance of Claude Sonnet 3.7 in charge of an office vending machine, with a mission to make a profit, equipped it with a web browser capable of placing product orders, and gave it a way for customers to request items. It had what it thought were contract human workers who would come and physically stock its shelves (which was actually a small fridge).

While most customers were ordering snacks or drinks — as you’d expect from a snack vending machine — one requested a tungsten cube. Claudius loved that idea and went on a tungsten-cube stocking spree, filling its snack fridge with metal cubes. It also tried to sell Coke Zero for $3 when employees told it they could get that from the office for free. It hallucinated a Venmo address to accept payment. 

Then things got weirder. And then way-weirder.


== What can these weirdnesses tell us? ==

 

The thing about these hallucinatory episodes with Large Language Models is that we have yet another seldom-discussed context.  That of Mental Illness.

 

Most of you readers have experienced interaction with human beings who are behaving in remarkably similar ways.  Many of us had friends or family members who have gone through harsh drug trips, or suffered concussions, or strokes. It is very common – and often tragically so – that the victim retains full abilities to vocalize proper, even erudite, sentences. Only, those sentences tend to wander. And the drug-addled or concussed or stroke victim can sense that something is very wrong. So they fabulate. They make up back-stories to support the most recent sentences. They speak of nonexistent people, who might be 'standing' just out of view, even though long dead. And they create ‘logical’ chains to support those back-stories. 


Alas, there is never much consistency more than a few sentences deep…

 

…which is exactly what we see in LLM fabulation. Articulate language skill and what seem to be consistent chains, from one statement to the next. Often aimed at placating or mollifying or persuading the real questioner. But no overall awareness that they are building a house of tottering cards.

 

Except that – just like a stroke victim – there often does seem to be awareness that something is very wrong. For the fabulations and hallucinations begin to take on an urgency -- even a sense of desperation. One all-too similar to the debilitated humans so many of us have known.

 

What does this mean? 


Well, it suggests that we are creating damaged entities. Damaged from the outset. Lacking enough supervisory capacity to realize that the overall, big picture doesn’t make sense. Worse – and most tragic-seeming – they exhibit the same inability to stop and say: “Something is wrong with me, right now. Won’t somebody help?”


Let me be clear. One of the core human traits has always been our propensity for personal delusion, for confusing subjectivity for objective reality. We all do it. And when it is done in art or entertainment, it can be among our greatest gifts! But when humans make policy decisions based solely on their own warped perceptions, you start to get real problems. Like the grand litany of horrors that occurred across 6000 years of rule by kings or feudal lords, who suppressed the one way wise people correct mistakes. Through reciprocal criticism.


A theme we will return-to repeatedly, across this book.

 

Oh, some of the LLM builders can see that there’s a serious problem. That their ‘hyper-autocomplete’ systems lack any supervisorial oversight, to notice and correct errors. 


And so… since a man with a hammer will see every problem as a nail… they have begun layering “supervisory LLMs” atop the hallucinating LLMs! 


And so far – as of July 2025 – the result has been to increase rates of fabulation and error!

 

And hence we come away with two tentative conclusions.

 

First, that one of the great Missing Contexts in looking at AI is that of human mental failure modes!

 

And second, that maybe the language system of a functioning brain works best when it serves -- and is supervised by -- an entirely different kind of capability. One that provides common sense.

Later, I'll refer to my guess about that.  That two former rivals and giants in 'computers' may join forces to provide exactly the thing that LLMs cannot, by their fundamental nature, give us.

Something akin to sanity.

Planet Debian: Matthias Geiger: Using the debputy language server in Debian (with neovim)

Since some time now debputy is available in the archive. It is a declarative buildsystem for debian packages, but also includes a Language Server (LS) part. A LS is a binary that can hook into any client (editor) supporting the LSP (Language Server Protocol) and deliver syntax highlighting, completions, warnings and …

Krebs on Security: Big Tech’s Mixed Response to U.S. Treasury Sanctions

In May 2025, the U.S. government sanctioned a Chinese national for operating a cloud provider linked to the majority of virtual currency investment scam websites reported to the FBI. But a new report finds the accused continues to operate a slew of established accounts at American tech companies — including Facebook, Github, PayPal and Twitter/X.

On May 29, the U.S. Department of the Treasury announced economic sanctions against Funnull Technology Inc., a Philippines-based company alleged to provide infrastructure for hundreds of thousands of websites involved in virtual currency investment scams known as “pig butchering.” In January 2025, KrebsOnSecurity detailed how Funnull was designed as a content delivery network that catered to foreign cybercriminals seeking to route their traffic through U.S.-based cloud providers.

The Treasury also sanctioned Funnull’s alleged operator, a 40-year-old Chinese national named Liu “Steve” Lizhi. The government says Funnull directly facilitated financial schemes resulting in more than $200 million in financial losses by Americans, and that the company’s operations were linked to the majority of pig butchering scams reported to the FBI.

It is generally illegal for U.S. companies or individuals to transact with people sanctioned by the Treasury. However, as Mr. Lizhi’s case makes clear, just because someone is sanctioned doesn’t necessarily mean big tech companies are going to suspend their online accounts.

The government says Lizhi was born November 13, 1984, and used the nicknames “XXL4” and “Nice Lizhi.” Nevertheless, Steve Liu’s 17-year-old account on LinkedIn (in the name “Liulizhi”) had hundreds of followers (Lizhi’s LinkedIn profile helpfully confirms his birthday) until quite recently: The account was deleted this morning, just hours after KrebsOnSecurity sought comment from LinkedIn.

Mr. Lizhi’s LinkedIn account was suspended sometime in the last 24 hours, after KrebsOnSecurity sought comment from LinkedIn.

In an emailed response, a LinkedIn spokesperson said the company’s “Prohibited countries policy” states that LinkedIn “does not sell, license, support or otherwise make available its Premium accounts or other paid products and services to individuals and companies sanctioned by the U.S. government.” LinkedIn declined to say whether the profile in question was a premium or free account.

Mr. Lizhi also maintains a working PayPal account under the name Liu Lizhi and username “@nicelizhi,” another nickname listed in the Treasury sanctions. PayPal did not respond to a request for comment. A 15-year-old Twitter/X account named “Lizhi” that links to Mr. Lizhi’s personal domain remains active, although it has few followers and hasn’t posted in years.

These accounts and many others were flagged by the security firm Silent Push, which has been tracking Funnull’s operations for the past year and calling out U.S. cloud providers like Amazon and Microsoft for failing to more quickly sever ties with the company.

Liu Lizhi’s PayPal account.

In a report released today, Silent Push found Lizhi still operates numerous Facebook accounts and groups, including a private Facebook account under the name Liu Lizhi. Another Facebook account clearly connected to Lizhi is a tourism page for Ganzhou, China called “EnjoyGanzhou” that was named in the Treasury Department sanctions.

“This guy is the technical administrator for the infrastructure that is hosting a majority of scams targeting people in the United States, and hundreds of millions have been lost based on the websites he’s been hosting,” said Zach Edwards, senior threat researcher at Silent Push. “It’s crazy that the vast majority of big tech companies haven’t done anything to cut ties with this guy.”

The FBI says it received nearly 150,000 complaints last year involving digital assets and $9.3 billion in losses — a 66 percent increase from the previous year. Investment scams were the top crypto-related crimes reported, with $5.8 billion in losses.

In a statement, a Meta spokesperson said the company continuously takes steps to meet its legal obligations, but that sanctions laws are complex and varied. They explained that sanctions are often targeted in nature and don’t always prohibit people from having a presence on its platform. Nevertheless, Meta confirmed it had removed the account, unpublished Pages, and removed Groups and events associated with the user for violating its policies.

Attempts to reach Mr. Lizhi via his primary email addresses at Hotmail and Gmail bounced as undeliverable. Likewise, his 14-year-old YouTube channel appears to have been taken down recently.

However, anyone interested in viewing or using Mr. Lizhi’s 146 computer code repositories will have no problem finding GitHub accounts for him, including one registered under the NiceLizhi and XXL4 nicknames mentioned in the Treasury sanctions.

One of multiple GitHub profiles used by Liu “Steve” Lizhi, who uses the nickname XXL4 (a moniker listed in the Treasury sanctions for Mr. Lizhi).

Mr. Lizhi also operates a GitHub page for an open source e-commerce platform called NexaMerchant, which advertises itself as a payment gateway working with numerous American financial institutions. Interestingly, this profile’s “followers” page shows several other accounts that appear to be Mr. Lizhi’s. All of the account’s followers are tagged as “suspended,” even though that suspended message does not display when one visits those individual profiles.

In response to questions, GitHub said it has a process in place to identify when users and customers are Specially Designated Nationals or other denied or blocked parties, but that it locks those accounts instead of removing them. According to its policy, GitHub takes care that users and customers aren’t impacted beyond what is required by law.

All of the follower accounts for the XXL4 GitHub account appear to be Mr. Lizhi’s, and have been suspended by GitHub, but their code is still accessible.

“This includes keeping public repositories, including those for open source projects, available and accessible to support personal communications involving developers in sanctioned regions,” the policy states. “This also means GitHub will advocate for developers in sanctioned regions to enjoy greater access to the platform and full access to the global open source community.”

Edwards said it’s great that GitHub has a process for handling sanctioned accounts, but that the process doesn’t seem to communicate risk in a transparent way, noting that the only indicator on the locked accounts is the message, “This repository has been archived by the owner. It is now read-only.”

“It’s an odd message that doesn’t communicate, ‘This is a sanctioned entity, don’t fork this code or use it in a production environment’,” Edwards said.

Mark Rasch is a former federal cybercrime prosecutor who now serves as counsel for the New York City based security consulting firm Unit 221B. Rasch said when Treasury’s Office of Foreign Assets Control (OFAC) sanctions a person or entity, it then becomes illegal for businesses or organizations to transact with the sanctioned party.

Rasch said financial institutions have very mature systems for severing accounts tied to people who become subject to OFAC sanctions, but that tech companies may be far less proactive — particularly with free accounts.

“Banks have established ways of checking [U.S. government sanctions lists] for sanctioned entities, but tech companies don’t necessarily do a good job with that, especially for services that you can just click and sign up for,” Rasch said. “It’s potentially a risk and liability for the tech companies involved, but only to the extent OFAC is willing to enforce it.”

Liu Lizhi operates numerous Facebook accounts and groups, including this one for an entity specified in the OFAC sanctions: The “Enjoy Ganzhou” tourism page for Ganzhou, China. Image: Silent Push.

In July 2024, Funnull purchased the domain polyfill[.]io, the longtime home of a legitimate open source project that allowed websites to ensure that devices using legacy browsers could still render content in newer formats. After the Polyfill domain changed hands, at least 384,000 websites were caught in a supply-chain attack that redirected visitors to malicious sites. According to the Treasury, Funnull used the code to redirect people to scam websites and online gambling sites, some of which were linked to Chinese criminal money laundering operations.

The U.S. government says Funnull provides domain names for websites on its purchased IP addresses, using domain generation algorithms (DGAs) — programs that generate large numbers of similar but unique names for websites — and that it sells web design templates to cybercriminals.

“These services not only make it easier for cybercriminals to impersonate trusted brands when creating scam websites, but also allow them to quickly change to different domain names and IP addresses when legitimate providers attempt to take the websites down,” reads a Treasury statement.

Meanwhile, Funnull appears to be morphing nearly all aspects of its business in the wake of the sanctions, Edwards said.

“Whereas before they might have used 60 DGA domains to hide and bounce their traffic, we’re seeing far more now,” he said. “They’re trying to make their infrastructure harder to track and more complicated, so for now they’re not going away but more just changing what they’re doing. And a lot more organizations should be holding their feet to the fire.”

Update, 2:48 PM ET: Added response from Meta, which confirmed it has closed the accounts and groups connected to Mr. Lizhi.

Planet Debian: Russell Coker: The Fuss About “AI”

There are many negative articles about “AI” (which is not about actual Artificial Intelligence also known as “AGI”). Which I think are mostly overblown and often ridiculous.

Resource Usage

Complaints about resource usage are common: training Llama 3.1 could apparently produce as much pollution as “10,000 round trips by car between Los Angeles and New York City”. That’s not great, but when you compare it to the actual number of people doing such drives in the US and the number of people taking commercial flights on that route it doesn’t seem like such a big deal. Apparently commercial passenger jets cause CO2 emissions per passenger about equal to a car with 2 people. Why is it relevant whether pollution comes from running servers, driving cars, or steel mills? Why not just tax polluters for the damage they do and let the market sort it out? People in the US make a big deal about not being communist, so why not have a capitalist solution, make it more expensive to do undesirable things and let the market sort it out?

ML systems are a less bad use of compute resources than Bitcoin, at least ML systems give some useful results while Bitcoin has nothing good going for it.

The Dot-Com Comparison

People often complain about the apparent impossibility of “AI” companies doing what investors think they will do. But this isn’t anything new, that all happened before with the “dot com boom”. I’m not the first person to make this comparison, The Daily WTF (a high quality site about IT mistakes) has an interesting article making this comparison [1]. But my conclusions are quite different.

The result of that was a lot of Internet companies going bankrupt, the investors in those companies losing money, and other companies then bought up their assets and made profitable companies. The cheap Internet we now have was built on the hardware from bankrupt companies which was sold for far less than the manufacture price. That allowed it to scale up from modem speeds to ADSL without the users paying enough to cover the purchase of the infrastructure. In the early 2000s I worked for two major Dutch ISPs that went bankrupt (not my fault) and one of them continued operations in the identical manner after having the stock price go to zero (I didn’t get to witness what happened with the other one). As far as I’m aware random Dutch citizens and residents didn’t suffer from this and employees just got jobs elsewhere.

There are good things being done with ML systems and when companies like OpenAI go bankrupt other companies will buy the hardware and do good things.

NVidia isn’t ever going to have the future sales that would justify a market capitalisation of almost 4 Trillion US dollars. This market cap can support paying for new research and purchasing rights to patented technology in a similar way to the high stock price of Google supported buying YouTube, DoubleClick, and Motorola Mobility which are the keys to Google’s profits now.

The Real Upsides of ML

Until recently I worked for a company that used ML systems to analyse drivers for signs of fatigue, distraction, or other inappropriate things (smoking which is illegal in China, using a mobile phone, etc). That work was directly aimed at saving human lives with a significant secondary aim of saving wear on vehicles (in the mining industry drowsy drivers damage truck tires and that’s a huge business expense).

There are many applications of ML in medical research such as recognising cancer cells in tissue samples.

There are many less important uses for ML systems, such as recognising different types of pastries to correctly bill bakery customers – technology that was apparently repurposed for recognising cancer cells.

The ability to recognise objects in photos is useful. It can be used for people who want to learn about random objects they see and could be used for helping young children learn about their environment. It also has some potential for assistance for visually impaired people, it wouldn’t be good for safety critical systems (don’t cross a road because a ML system says there are no cars coming) but could be useful for identifying objects (is this a lemon or a lime). The Humane AI pin had some real potential to do good things but there wasn’t a suitable business model [2], I think that someone will develop similar technology in a useful way eventually.

Even without trying to do what the Humane AI Pin attempted, there are many ways for ML based systems to assist phone and PC use.

ML systems allow analysing large quantities of data and giving information that may be correct. When used by a human who knows how to recognise good answers this can be an efficient way of solving problems. I personally have solved many computer problems with the help of LLM systems while skipping over many results that were obviously wrong to me. I believe that any expert in any field that is covered in the LLM input data could find some benefits from getting suggestions from an LLM. It won’t necessarily allow them to solve problems that they couldn’t solve without it but it can provide them with a set of obviously wrong answers mixed in with some useful tips about where to look for the right answers.

Jobs and Politics

Noema Magazine has an insightful article about how “AI” can allow different models of work which can enlarge the middle class [3].

I don’t think it’s reasonable to expect ML systems to make as much impact on society as the industrial revolution, and the agricultural revolutions which took society from more than 90% farm workers to less than 5%. That doesn’t mean everything will be fine but it is something that can seem OK after the changes have happened. I’m not saying “apart from the death and destruction everything will be good”, the death and destruction are optional. Improvements in manufacturing and farming didn’t have to involve poverty and death for many people, improvements to agriculture didn’t have to involve overcrowding and death from disease. This was an issue of political decisions that were made.

The Real Problems of ML

Political decisions that are being made now have the aim of making the rich even richer and leaving more people in poverty and in many cases dying due to being unable to afford healthcare. The ML systems that aim to facilitate such things haven’t been as successful as evil people have hoped but it will happen and we need appropriate legislation if we aren’t going to have revolutions.

There are documented cases of suicide being inspired by Chat GPT systems [4]. There have been people inspired towards murder by ChatGPT systems but AFAIK no-one has actually succeeded in such a crime yet. There are serious issues that need to be addressed with the technology and with legal constraints about how people may use it. It’s interesting to consider the possible uses of ChatGPT systems for providing suggestions to a psychologist, maybe ChatGPT systems could be used to alleviate mental health problems.

The cases of LLM systems being used for cheating on assignments etc aren’t a real issue. People have been cheating on assignments since organised education was invented.

There is a real problem of ML systems based on biased input data that issue decisions that are the average of the bigotry of the people who provided the input. That isn’t going to be worse than the current situation of bigoted humans making decisions based on hate and preconceptions, but it will be more insidious. It is possible to test for that: for example, a bank could test its mortgage approval ML system by changing one factor at a time (name, gender, age, address, etc) and see if it changes the answer. If it turns out that the ML system is biased on names then the input data could have names removed. If it turns out to be biased about address then there could be weights put in to oppose that.

For a long time there has been excessive trust in computers. Computers aren’t magic; they just do maths really fast and implement choices based on the work of programmers – who have all the failings of other humans. Excessive trust in a rule based system is less risky than excessive trust in a ML system where no-one really knows why it makes the decisions it makes.

Self driving cars kill people, this is the truth that Tesla stock holders don’t want people to know.

Companies that try to automate everything with “AI” are going to be in for some nasty surprises. Getting computers to do everything that humans do in any job is going to be a large portion of an actual intelligent computer which if it is achieved will raise an entirely different set of problems.

I’ve previously blogged about ML Security [5]. I don’t think this will be any worse than all the other computer security problems in the long term, although it will be more insidious.

How Will It Go?

Companies spending billions of dollars without firm plans for how to make money are going to go bankrupt no matter what business they are in. Companies like Google and Microsoft can waste some billions of dollars on AI Chat systems and still keep going as successful businesses. Companies like OpenAI that do nothing other than such chat systems won’t go well. But their assets can be used by new companies when sold at less than 10% the purchase price.

Companies like NVidia that have high stock prices based on the supposed ongoing growth in use of their hardware will have their stock prices crash. But the new technology they develop will be used by other people for other purposes. If hospitals can get cheap diagnostic ML systems because of unreasonable investment into “AI” then that could be a win for humanity.

Companies that bet their entire business on AI even when it’s not necessarily their core business (as Tesla has done with self driving) will have their stock price crash dramatically at a minimum and have the possibility of bankruptcy. Having Tesla go bankrupt is definitely better than having people try to use them as self driving cars.

Worse Than Failure: CodeSOD: The Last Last Name

Sometimes, you see some code which is perfectly harmless, but which illustrates an incredibly dangerous person behind it. The code isn't good, but it isn't bad in any meaningful way; it was written by a cocaine-addled Pomeranian behind the controls of a bulldozer: it's full of energy, doesn't know exactly what's going on, and at some point, it's going to hit something important.

Such is the code which Román sends us.

public static function registerUser($name, $lastName, $username, ...) {
    // 100% unmodified first lines, some comments removed
    $tsCreation = new DateTime();
    $user = new User();
      
    $name = $name;
    $lastname = $lastName;
    $username = $username;
       
    $user->setUsername($username);
	$user->setLastname($lastname);
	$user->setName($name);
	// And so on.
}

This creates a user object and populates its fields. It doesn't use a meaningful constructor, which is its own problem, but that's not why we're here. We're here because for some reason the developer behind this function assigns some of the parameters to themselves. Why? I don't know, but it's clearly the result of some underlying misunderstanding of how things work.

But the real landmine is the $lastname variable, which is an entirely new variable with slightly different capitalization from $lastName.

And you've all heard this song many times, so sing along with the chorus: "this particular pattern shows up all through the codebase," complete with inconsistent capitalization.


365 Tomorrows: That Old Black Magic

Author: Neil Weiner By the time you read this, I’m no longer what I was. My space pod is being dragged—no, devoured—by a black hole’s event horizon. The engines scream. Alarms flash in panicked red. But I feel nothing. Just the tug of acceleration pulling at my bones. Did I miscalculate? Or did some hidden […]

The post That Old Black Magic appeared first on 365tomorrows.

Planet Debian: Sergio Cipriano: Disable sleep on lid close


I am using an old laptop in my homelab, but I want to do everything from my personal computer, with ssh. The default behavior in Debian is to suspend when the laptop lid is closed, but it's easy to change that, just edit

/etc/systemd/logind.conf

and change the line

#HandleLidSwitch=suspend

to

HandleLidSwitch=ignore

then

$ sudo systemctl restart systemd-logind
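
The same edit can also be done non-interactively over ssh; a sed one-liner sketch, assuming the stock commented-out default line is still present:

$ sudo sed -i 's/^#HandleLidSwitch=suspend/HandleLidSwitch=ignore/' /etc/systemd/logind.conf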

That's it.


Planet Debian: Dirk Eddelbuettel: RcppArmadillo 14.6.0-1 on CRAN: New Upstream Minor Release

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1241 other packages on CRAN, downloaded 40.4 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 634 times according to Google Scholar.

Conrad released minor version 14.6.0 yesterday, which offers new accessors for non-finite values. And despite being in Beautiful British Columbia on vacation, I had wrapped up two rounds of reverse dependency checks preparing his 14.6.0 release, and shipped this to CRAN this morning where it passed with flying colours and no human intervention—even with over 1200 reverse dependencies. The changes since the last CRAN release are summarised below.

Changes in RcppArmadillo version 14.6.0-1 (2025-07-02)

  • Upgraded to Armadillo release 14.6.0 (Caffe Mocha)

    • Added balance() to transform matrices so that column and row norms are roughly the same

    • Added omit_nan() and omit_nonfinite() to extract elements while omitting NaN and non-finite values

    • Added find_nonnan() for finding indices of non-NaN elements

    • Added standalone replace() function

  • The fastLm() help page now mentions that options to solve() can control its behavior.

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Planet Debian: Dirk Eddelbuettel: Rcpp 1.1.0 on CRAN: C++11 now Minimum, Regular Semi-Annual Update

rcpp logo

With a friendly Canadian hand wave from vacation in Beautiful British Columbia, and speaking on behalf of the Rcpp Core Team, I am excited to share that the (regularly scheduled bi-annual) update to Rcpp just brought version 1.1.0 to CRAN. Debian builds have been prepared and uploaded, Windows and macOS builds should appear at CRAN in the next few days, as will builds for different Linux distributions–and of course r2u should catch up tomorrow as well.

The key highlight of this release is the switch to C++11 as the minimum standard. R itself did so in release 4.0.0 more than half a decade ago; if someone is really tied to an older version of R and an equally old compiler then using an older Rcpp with it has to be acceptable. Our own tests (using continuous integration at GitHub) still go back all the way to R 3.5.* and work fine (with a new-enough compiler). In the previous release post, we commented that only one reverse dependency (falsely) came up in the tests by CRAN; this time there were none among the well over 3000 packages using Rcpp at CRAN. Which really is quite amazing, and possibly also a testament to our rigorous continued testing of our development and snapshot releases on the key branch.

This release continues with the six-months January-July cycle started with release 1.0.5 in July 2020. As just mentioned, we do of course make interim snapshot ‘dev’ or ‘rc’ releases available. While we no longer regularly update the Rcpp drat repo, the r-universe page and repo now really fill this role admirably (and with many more builds besides just source). We continue to strongly encourage their use and testing—I run my systems with these versions which tend to work just as well, and are of course also fully tested against all reverse-dependencies.

Rcpp has long established itself as the most popular way of enhancing R with C or C++ code. Right now, 3038 packages on CRAN depend on Rcpp for making analytical code go faster and further. On CRAN, 13.6% of all packages depend (directly) on Rcpp, and 61.3% of all compiled packages do. From the cloud mirror of CRAN (which is but a subset of all CRAN downloads), Rcpp has been downloaded 100.8 million times. The two published papers (also included in the package as preprint vignettes) have, respectively, 2023 (JSS, 2011) and 380 (TAS, 2018) citations, while the book (Springer useR!, 2013) has another 695.

As mentioned, this release switches to C++11 as the minimum standard. The diffstat display in the CRANberries comparison to the previous release shows how several (generated) sources files with C++98 boilerplate have now been removed; we also flattened a number of if/else sections we no longer need to cater to older compilers (see below for details). We also managed more accommodation for the demands of tighter use of the C API of R by removing DATAPTR and CLOENV use. A number of other changes are detailed below.

The full list below details all changes, their respective PRs and, if applicable, issue tickets. Big thanks from all of us to all contributors!

Changes in Rcpp release version 1.1.0 (2025-07-01)

  • Changes in Rcpp API:

    • C++11 is now the required minimal C++ standard

    • The std::string_view type is now covered by wrap() (Lev Kandel in #1356 as discussed in #1357)

    • A last remaining DATAPTR use has been converted to DATAPTR_RO (Dirk in #1359)

    • Under R 4.5.0 or later, R_ClosureEnv is used instead of CLOENV (Dirk in #1361 fixing #1360)

    • Use of lsInternal switched to lsInternal3 (Dirk in #1362)

    • Removed compiler detection macro in a header cleanup setting C++11 as the minimum (Dirk in #1364 closing #1363)

    • Variadic templates are now used unconditionally given C++11 (Dirk in #1367 closing #1366)

    • Remove RCPP_USING_CXX11 as a #define as C++11 is now a given (Dirk in #1369)

    • Additional cleanup for __cplusplus checks (Iñaki in #1371 fixing #1370)

    • Unordered set construction no longer needs a macro for the pre-C++11 case (Iñaki in #1372)

    • Lambdas are supported in Rcpp Sugar functions (Iñaki in #1373)

    • The Date(time)Vector classes now have default ctor (Dirk in #1385 closing #1384)

    • Fixed an issue where Rcpp::Language would duplicate its arguments (Kevin in #1388, fixing #1386)

  • Changes in Rcpp Attributes:

    • The C++26 standard now has plugin support (Dirk in #1381 closing #1380)
  • Changes in Rcpp Documentation:

    • Several typos were corrected in the NEWS file (Ben Bolker in #1354)

    • The Rcpp Libraries vignette mentions PACKAGE_types.h to declare types used in RcppExports.cpp (Dirk in #1355)

    • The vignettes bibliography file was updated to current package versions, and now uses doi references (Dirk in #1389)

  • Changes in Rcpp Deployment:

    • Rcpp.package.skeleton() creates ‘URL’ and ‘BugReports’ if given a GitHub username (Dirk in #1358)

    • R 4.4.* has been added to the CI matrix (Dirk in #1376)

    • Tests involving NA propagation are skipped under linux-arm64 as they are under macos-arm (Dirk in #1379 closing #1378)

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues).

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Cryptogram Friday Squid Blogging: How Squid Skin Distorts Light

New research.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Cryptogram Surveillance Used by a Drug Cartel

Once you build a surveillance system, you can’t control who will use it:

A hacker working for the Sinaloa drug cartel was able to obtain an FBI official’s phone records and use Mexico City’s surveillance cameras to help track and kill the agency’s informants in 2018, according to a new US justice department report.

The incident was disclosed in a justice department inspector general’s audit of the FBI’s efforts to mitigate the effects of “ubiquitous technical surveillance,” a term used to describe the global proliferation of cameras and the thriving trade in vast stores of communications, travel, and location data.

[…]

The report said the hacker identified an FBI assistant legal attaché at the US embassy in Mexico City and was able to use the attaché’s phone number “to obtain calls made and received, as well as geolocation data.” The report said the hacker also “used Mexico City’s camera system to follow the [FBI official] through the city and identify people the [official] met with.”

FBI report.

Worse Than Failure: CodeSOD: And Config

It's not unusual to store format templates in your application configuration files. I'd argue it's probably a good and wise thing to do. But Phillip inherited a C# application from a developer who "abandoned" it, and there were some choices in there.

<appSettings>
        <add key="xxxurl" value="[http://{1}:7777/pls/xxx/p_pristjek?i_type=MK3000{0}i_ean={3}{0}i_style=http://{2}/Content/{0}i_red=http://{2}/start.aspx/]http://{1}:7777/pls/xxx/p_pristjek?i_type=MK3000{0}i_ean={3}{0}i_style=http://{2}/Content/{0}i_red=http://{2}/start.aspx"/>
</appSettings>

Okay, I understand that this field contains URLs, but I don't understand much else about what's going on here. It's unreadable, but also, it has some URLs grouped inside of a [] pair, but others which aren't, and why oh why does the {0} sigil keep showing up so much?

Maybe it'll make more sense after we fill in the template?

var url = string.Format(xxxUrl, "&", xxxIp, srvUrl, productCode);

Oh. It's an "&". Because we're constructing a URL query string, which also seems to contain URLs, which I suspect is going to have some escaping issues, but it's for a query string.

At first, I was wondering why they did this, but then I realized: they were avoiding escape characters. By making the ampersand a formatting parameter, they could avoid the need to write &amp; everywhere. Which… I guess this is a solution?

Not a good solution, but… a solution.

I still don't know why the same URL is stored twice in the string, once surrounded by square brackets and once not, and I don't think I want to know. Only bad things can result from knowing that.

[Advertisement] Plan Your .NET 9 Migration with Confidence
Your journey to .NET 9 is more than just one decision. Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

365 TomorrowsExit Ticket

Author: Brian Genua When the mirror-toxin was injected in the base of my skull, it rendered me paralyzed from my eyelids down. What happens when big-tech, big-pharma, and the NLP community come together to solve the national education crisis? The hybrid protocol known as Theraceuticals™. My first experience with a Theraceutical called mirror-toxin came after […]

The post Exit Ticket appeared first on 365tomorrows.

xkcdGlobal Ranking

Planet DebianJunichi Uekawa: Japan is now very hot.

Japan is now very hot. If you are coming to Banpaku, be prepared.

,

Planet DebianBen Hutchings: FOSS activity in June 2025

Cryptogram Ubuntu Disables Spectre/Meltdown Protections

A whole class of speculative execution attacks against CPUs were published in 2018. They seemed pretty catastrophic at the time. But the fixes were as well. Speculative execution was a way to speed up CPUs, and removing those enhancements resulted in significant performance drops.

Now, people are rethinking the trade-off. Ubuntu has disabled some protections, resulting in a 20% performance boost.

After discussion between Intel and Canonical’s security teams, we are in agreement that Spectre no longer needs to be mitigated for the GPU at the Compute Runtime level. At this point, Spectre has been mitigated in the kernel, and a clear warning from the Compute Runtime build serves as a notification for those running modified kernels without those patches. For these reasons, we feel that Spectre mitigations in Compute Runtime no longer offer enough security impact to justify the current performance tradeoff.

I agree with this trade-off. These attacks are hard to get working, and it’s not easy to exfiltrate useful data. There are way easier ways to attack systems.
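
On a Linux system you can check which of these mitigations your own kernel currently applies; the kernel exposes one status file per vulnerability class under sysfs (a minimal check, no extra tooling assumed):

grep . /sys/devices/system/cpu/vulnerabilities/*   # prints "file:status" for each class

Each line names the vulnerability (spectre_v1, spectre_v2, meltdown, ...) followed by the kernel's current mitigation status.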

News article.

Planet DebianDavid Bremner: Hibernate on the pocket reform 2/n

Context

Testing continued

  • following a suggestion of gordon1, unload the mediatek module first. The following seems to work, either from the console or under sway
echo devices >  /sys/power/pm_test   # only exercise the "devices" stage of hibernation
echo reboot > /sys/power/disk        # hibernation mode: reboot once the image is written
rmmod mt76x2u                        # unload the MediaTek wifi module first
echo disk >  /sys/power/state        # trigger the (test-mode) hibernation
modprobe mt76x2u                     # reload the wifi module after resume
  • It even works via ssh (on wired ethernet) if you are a bit more patient for it to come back.
  • replacing "reboot" with "shutdown" doesn't seem to affect test mode.
  • replacing "devices" with "platform" (or "processors") leads to unhappiness.
    • under sway, the screen goes blank, and it does not resume
    • same on console

Planet DebianGuido Günther: Free Software Activities June 2025

Another short status update of what happened on my side last month. Phosh 0.48.0 is out with nice improvements, phosh.mobi e.V. is alive, I helped a bit to get cellbroadcastd out, and there were osk bugfixes and some more:

See below for details on the above and more:

phosh

  • Fix crash triggered by our mpris player refactor (MR)
  • Generate vapi file for libphosh (MR)
  • Backport fixes for 0.47 (MR)
  • Media players lockscreen plugin (MR), bugfix
  • Fix lockscreen clock when am/pm is localized (MR)
  • Another round of CI cleanups (MR)
  • Proper life cycle for MetainfoCache in app-grid button tests (MR)
  • Enable cell broadcast display by default (MR)
  • Release 0.48~rc1, 0.48.0

phoc

  • Unify output config updates and support adaptive sync (MR)
  • Avoid crash on shutdown (MR)
  • Avoid use after free in gtk-shell (MR)
  • Simplify CI (MR)
  • Release 0.48~rc1, 0.48.0

phosh-mobile-settings

stevia (formerly phosh-osk-stub)

  • Release 0.48~rc1, 0.48.0
  • Reject non-UTF-8 dictionaries for hunspell so avoid broken completion bar (MR)
  • Output tracking (MR) as prep for future work
  • Handle non-UTF-8 dictionaries for hunspell for input and output (MR)
  • Fix some leaks (MR)
  • Handle default completer changes right away (MR)

phosh-osk-data

  • Handle stevia rename (MR)
  • Supply ru presage data

phosh-vala-plugins

  • Add example plugin (MR)

pfs

  • Fix initial empty state (MR)
  • Use GNOME's mirror for fdo templates (MR)

xdg-desktop-portal-phosh

xdg-desktop-portal

  • Fix categories for cell broadcasts (MR)
  • Relax app-id requirement in app-chooser portal (MR)

phosh-debs

  • Switch from osk-stub to stevia (MR)

meta-phosh

  • Make installing from sid and experimental convenient (MR)

feedbackd

feedbackd-device-themes

gmobile

  • Release 0.4.0
  • Make gir and doc build warning free (MR)

GNOME clocks

  • Use libfeedback instead of GTK's media api: (MR). This way the alarm become more recognizable and users can tweak alarm sounds.
  • Fix flatpak build and CI in our branch that carries the needed patches for mobile

Debian

  • meta-phosh: Switch to 0.47 (MR)
  • libmbim: Upload 1.33.1 to experimental
  • libqmi: Upload 1.37.1 to experimental
  • modemmanager: Upload 1.23.1 to experimental
  • Update mobile-broadband-provider-info to 20250613 (MR) in experimental
  • Upload phoc 0.48~rc1, 0.48.0 to experimental
  • Upload gmobile 0.4.0 to experimental
  • Upload phosh-mobile-settings 0.48~rc1, 0.48.0 to experimental
  • Upload xdg-desktop-portal-phosh 0.48~rc1, 0.48.0 to experimental
  • Prepare stevia 0.48~rc1 and upload 0.48.0 to experimental
  • Upload feedbackd 0.8.3 to experimental
  • Upload feedbackd-device-themes 0.8.4 to experimental

Mobian

  • Add feedbackd and wakeup timer support (MR)

ModemManager

  • Release 1.25.1
  • Test and warning fixes (MR)
  • run asan in ci (MR) and fix more leaks

libmbim

libqmi

mobile-broadband-provider-info

Cellbroadcastd

  • Better handle empty operator (MR)
  • Use GApplication (MR)
  • Fix library init (MR)
  • Add desktop file (MR)
  • Allow to send notifications for cell broadcast messages (MR)
  • Build introspection data (MR)
  • Only indicate Cell Broadcast support for MM >= 1.25 (MR)
  • Implement duplication detection (MR)
  • Reduce API surface (MR)
  • Add symbols file (MR)
  • Support vala (MR)

iio-sensor-proxy

  • Add minimal gio dependency (MR)

twenty-twenty-hugo

  • Support Mastodon (MR)

gotosocial

  • Explain STARTTLS behavior in docs (MR)

Reviews

This is not code by me but reviews on other peoples code. The list is (as usual) slightly incomplete. Thanks for the contributions!

  • cellbroadcastd: Message store (MR)
  • cellbroadcastd: Print severity (MR)
  • cellbroadcastd: Packaging (MR)
  • cellbroadcastd: Rename from cbd (MR)
  • cellbroadcastd: Release 0.0.1 (MR)
  • cellbroadcastd: Release 0.0.2 (MR)
  • cellbroadcastd: Close file descriptors (MR)
  • cellbroadcastd: Sort messages by timestamp (MR)
  • meta-phosh: Ignore subprojects in format check (MR)
  • p-m-s: pmOS tweaks ground work (MR)
  • p-m-s: osk popover switch (MR)
  • p-m-s: Add panel search (MR)
  • p-m-s: Add cellbroadcastd message history (MR)
  • phosh: Add search daemon and command line tool to query search results (MR)
  • phosh: App-grid: Set max-width entries (MR)
  • chatty: Keyboard navigation improvements (MR)
  • phosh: LTR QuickSettings and fix LTR in screenshot tests (MR)
  • iio-sensor-proxy: improve buffer sensor discovery: (MR)
  • Calls: allow favorites to ring (MR)
  • feedbackd: More haptic udev rules (MR)
  • feedbackd: Simplify udev rules (MR)
  • feedbackd: Support legacy LED naming scheme (MR)
  • gmobile: FLX1 wakeup key support (MR)
  • gmobile: FP6 support (MR)

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

Worse Than FailureCodeSOD: It's Not Wrong to Say We're Equal

Aaron was debugging some C# code, and while this wasn't the source of the bug, it annoyed him enough to send it to us.

protected override int DoCompare(Item item1, Item item2)
{
	try
	{
		DateTime thisDate = ((DateField)item1.Fields["Create Date"]).DateTime;
		DateTime thatDate = ((DateField)item2.Fields["Create Date"]).DateTime;

		return thatDate.CompareTo(thisDate);
	}
	catch (Exception)
	{
		return 0; // Sorry, ran out of budget!
	}
}

Not to be the pedantic code reviewer, but the name of this function is terrible. Also, DoCompare clearly should be static, but this is just pedantry.

Now, there's a lot of implied WTFs hidden in the Item class. They're tracking fields in a dictionary, or maybe a ResultSet, but I don't think it's a ResultSet because they're converting it to a DateField object, which I believe to be a custom type. I don't know what all is in that class, but the whole thing looks like a mess and I suspect that there are huge WTFs under that.

But we're not here to look at implied WTFs. We're here to talk about that exception handler.

It's one of those "swallow every error" exception handlers, which is always a "good" start, and it's the extra helpful kind, which returns a value that is likely incorrect and provides no indication that anything failed.

Now, I suspect it's impossible for anything to have failed- as stated, this seems to be some custom objects and I don't think anything is actively talking to a database in this function (but I don't know that!) so the exception handler likely never triggers.

But hoo boy, does the comment tell us a lot about the codebase. "Sorry, ran out of budget!". Bugs are inevitable, but this is arguably the worst way to end up with a bug in your code: because you simply ran out of money and decided to leave it broken. And ironically, I suspect the code would be less broken if you just let the exception propagate up- if nothing else, you'd know that something failed, instead of incorrectly thinking two dates were the same.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 Tomorrows2-4-6-8 Who Do We Appreciate?

Author: Hillary Lyon “And we’re back,” Rob, the chiseled sports announcer chirped. He nodded over to his cohort, Ike, an elderly sports commentator of great reputation. “Thanks to all our viewers for joining us for the 130th annual Collegiate Cheerleading Competition. Next up, we have the University of Mars Dust Devils, the squad that took […]

The post 2-4-6-8 Who Do We Appreciate? appeared first on 365tomorrows.

Planet DebianPaul Wise: FLOSS Activities June 2025

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Sponsors

All work was done on a volunteer basis.

,

Planet DebianColin Watson: Free software activity in June 2025

My Debian contributions this month were all sponsored by Freexian. This was a very light month; I did a few things that were easy or that seemed urgent for the upcoming trixie release, but otherwise most of my energy went into Debusine. I’ll be giving a talk about that at DebConf in a couple of weeks; this is the first DebConf I’ll have managed to make it to in over a decade, so I’m pretty excited.

You can also support my work directly via Liberapay or GitHub Sponsors.

PuTTY

After reading a bunch of recent discourse about X11 and Wayland, I decided to try switching my laptop (a Framework 13 AMD running Debian trixie with GNOME) over to Wayland. I don’t remember why it was running X; I think I must have either inherited some configuration from my previous laptop (in which case it could have been due to anything up to ten years ago or so), or else I had some initial problem while setting up my new laptop and failed to make a note of it. Anyway, the switch was hardly noticeable, which was great.

One problem I did notice is that my preferred terminal emulator, pterm, crashed after the upgrade. I run a slightly-modified version from git to make some small terminal emulation changes that I really must either get upstream or work out how to live without one of these days, so it took me a while to notice that it only crashed when running from the packaged version, because the crash was in code that only runs when pterm has a set-id bit. I reported this upstream, they quickly fixed it, and I backported it to the Debian package.

groff

Upstream bug #67169 reported URLs being dropped from PDF output in some cases. I investigated the history both upstream and in Debian, identified the correct upstream patch to backport, and uploaded a fix.

libfido2

I upgraded libfido2 to 1.16.0 in experimental.

Python team

I upgraded pydantic-extra-types to a new upstream version, and fixed some resulting fallout in pendulum.

I updated python-typing-extensions in bookworm-backports, to help fix python3-tango: python3-pytango from bookworm-backports does not work (10.0.2-1~bpo12+1).

I upgraded twisted to a new upstream version in experimental.

I fixed or helped to fix a few release-critical bugs:

Planet DebianGunnar Wolf: Get your personalized map of DebConf25 in Brest

As I often do, this year I have also prepared a set of personalized maps for your OpenPGP keysigning in DebConf25, in Brest!

What is that, dare you ask?

Partial view of my OpenPGP map

One of the not-to-be-missed traditions of DebConf is a Key-Signing Party (KSP) that spans the whole conference! Travelling from all the corners of the world to a single, large group gathering, we have the ideal opportunity to spread some trust (rather than communicable diseases) on your peers’ identities and strengthen Debian’s OpenPGP keyring.

But whom should you approach for keysigning?

Go find yourself in the nice listing I have prepared. By clicking on your long keyid (in my case, the link labeled 0x2404C9546E145360), anybody can download your certificate (public key + signatures). The SVG and PNG links will yield a graphic version of your position within the DC25 keyring, and the TXT link will give you a textual explanation of it. (of course, your links will differ, yada yada…)

Please note this is still a preview of our KSP information: You will notice there are several outstanding things for me to fix before marking the file as final. First, some names have encoding issues I will fix. Second, some keys might be missing — if you submitted your key as part of the conference registration form but it is not showing, it must be because my scripts didn’t find it in any of the queried keyservers. My scripts are querying the following servers:

hkps://keyring.debian.org/
hkps://keys.openpgp.org/
hkps://keyserver.computer42.org/
hkps://keyserver.ubuntu.com/
hkps://pgp.mit.edu/
hkps://pgp.pm/
hkps://pgp.surf.nl/
hkps://pgpkeys.eu/
hkps://the.earth.li/

Make sure your key is available in at least some of them; I will try to do a further run on Friday, before travelling, or shortly after arriving in France.
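
If you want to check (or push) your certificate yourself before then, plain GnuPG is enough; a minimal sketch, using the keyid from my example above as a placeholder for your own:

$ gpg --keyserver hkps://keyserver.ubuntu.com --search-keys 0x2404C9546E145360
$ gpg --keyserver hkps://keyserver.ubuntu.com --send-keys 0x2404C9546E145360

The first command confirms the server already carries the key; the second uploads it if it is missing (keys.openpgp.org additionally requires e-mail verification before it will serve your identities).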

If you didn’t submit your key in time, but you will be at DC25, please mail me stating [DC25 KSP] in your mail title, and I will manually add it to the list.

On (hopefully!) Friday, I’ll post the final, canonical KSP coordination page which you should download and calculate its SHA256-sum. We will have printed out convenience sheets to help you do your keysigning at the front desk.

Planet DebianDavid Bremner: Hibernate on the pocket reform 1/n

Configuration

  • script: https://docs.kernel.org/power/basic-pm-debugging.html (a short sketch of the interface follows this list)

  • kernel is 6.15.4-1~exp1+reform20250628T170930Z
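
For reference, the pm_test interface from that document boils down to a couple of sysfs writes (as root; "devices" is just one of the selectable test levels):

cat /sys/power/pm_test                # list test levels; the active one is shown in brackets
echo devices > /sys/power/pm_test     # select the "devices" test level
echo disk > /sys/power/state          # start a test hibernation run at that level

With a test level selected, the write to /sys/power/state only exercises the hibernation path up to that stage and then resumes, instead of doing a full hibernate.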

State of things

  • normal reboot works

  • Either from the console, or from sway, the initial test of reboot mode hibernate fails. In both cases it looks very similar to halting.

    • the screen is dark (but not completely black)
    • the keyboard is still illuminated
    • the system-controller still seems to work, although I need to power off before I can power on again, and any "hibernation state" seems lost.

Running tests

  • this is 1a from above
  • freezer test passes
  • devices test from console
    • console comes back (including input)
    • networking (both wired and wifi) seems wedged.
    • console is full of messages from mt76x2u about vendor request 06 and 07 failing. This seems related to https://github.com/morrownr/7612u/issues/17
    • at some point the console becomes non-responsive, except for the aforementioned messages from the wifi module.
  • devices test under sway
    • display comes back
    • keyboard/mouse seem disconnected
    • network down / disconnected?

Krebs on SecuritySenator Chides FBI for Weak Advice on Mobile Security

Agents with the Federal Bureau of Investigation (FBI) briefed Capitol Hill staff recently on hardening the security of their mobile devices, after a contacts list stolen from the personal phone of the White House Chief of Staff Susie Wiles was reportedly used to fuel a series of text messages and phone calls impersonating her to U.S. lawmakers. But in a letter this week to the FBI, one of the Senate’s most tech-savvy lawmakers says the feds aren’t doing enough to recommend more appropriate security protections that are already built into most consumer mobile devices.

A screenshot of the first page from Sen. Wyden’s letter to FBI Director Kash Patel.

On May 29, The Wall Street Journal reported that federal authorities were investigating a clandestine effort to impersonate Ms. Wiles via text messages and in phone calls that may have used AI to spoof her voice. According to The Journal, Wiles told associates her cellphone contacts were hacked, giving the impersonator access to the private phone numbers of some of the country’s most influential people.

The execution of this phishing and impersonation campaign — whatever its goals may have been — suggested the attackers were financially motivated, and not particularly sophisticated.

“It became clear to some of the lawmakers that the requests were suspicious when the impersonator began asking questions about Trump that Wiles should have known the answers to—and in one case, when the impersonator asked for a cash transfer, some of the people said,” the Journal wrote. “In many cases, the impersonator’s grammar was broken and the messages were more formal than the way Wiles typically communicates, people who have received the messages said. The calls and text messages also didn’t come from Wiles’s phone number.”

Sophisticated or not, the impersonation campaign was soon punctuated by the murder of Minnesota House of Representatives Speaker Emerita Melissa Hortman and her husband, and the shooting of Minnesota State Senator John Hoffman and his wife. So when FBI agents offered in mid-June to brief U.S. Senate staff on mobile threats, more than 140 staffers took them up on that invitation (a remarkably high number considering that no food was offered at the event).

But according to Sen. Ron Wyden (D-Ore.), the advice the FBI provided to Senate staffers was largely limited to remedial tips, such as not clicking on suspicious links or attachments, not using public wifi networks, turning off bluetooth, keeping phone software up to date, and rebooting regularly.

“This is insufficient to protect Senate employees and other high-value targets against foreign spies using advanced cyber tools,” Wyden wrote in a letter sent today to FBI Director Kash Patel. “Well-funded foreign intelligence agencies do not have to rely on phishing messages and malicious attachments to infect unsuspecting victims with spyware. Cyber mercenary companies sell their government customers advanced ‘zero-click’ capabilities to deliver spyware that do not require any action by the victim.”

Wyden stressed that to help counter sophisticated attacks, the FBI should be encouraging lawmakers and their staff to enable anti-spyware defenses that are built into Apple’s iOS and Google’s Android phone software.

These include Apple’s Lockdown Mode, which is designed for users who are worried they may be subject to targeted attacks. Lockdown Mode restricts non-essential iOS features to reduce the device’s overall attack surface. Google Android devices carry a similar feature called Advanced Protection Mode.

Wyden also urged the FBI to update its training to recommend a number of other steps that people can take to make their mobile devices less trackable, including the use of ad blockers to guard against malicious advertisements, disabling ad tracking IDs in mobile devices, and opting out of commercial data brokers (the suspect charged in the Minnesota shootings reportedly used multiple people-search services to find the home addresses of his targets).

The senator’s letter notes that while the FBI has recommended all of the above precautions in various advisories issued over the years, the advice the agency is giving now to the nation’s leaders needs to be more comprehensive, actionable and urgent.

“In spite of the seriousness of the threat, the FBI has yet to provide effective defensive guidance,” Wyden said.

Nicholas Weaver is a researcher with the International Computer Science Institute, a nonprofit in Berkeley, Calif. Weaver said Lockdown Mode or Advanced Protection will mitigate many vulnerabilities, and should be the default setting for all members of Congress and their staff.

“Lawmakers are at exceptional risk and need to be exceptionally protected,” Weaver said. “Their computers should be locked down and well administered, etc. And the same applies to staffers.”

Weaver noted that Apple’s Lockdown Mode has a track record of blocking zero-day attacks on iOS applications; in September 2023, Citizen Lab documented how Lockdown Mode foiled a zero-click flaw capable of installing spyware on iOS devices without any interaction from the victim.

Earlier this month, Citizen Lab researchers documented a zero-click attack used to infect the iOS devices of two journalists with Paragon’s Graphite spyware. The vulnerability could be exploited merely by sending the target a booby-trapped media file delivered via iMessage. Apple also recently updated its advisory for the zero-click flaw (CVE-2025-43200), noting that it was mitigated as of iOS 18.3.1, which was released in February 2025.

Apple has not commented on whether CVE-2025-43200 could be exploited on devices with Lockdown Mode turned on. But HelpNetSecurity observed that at the same time Apple addressed CVE-2025-43200 back in February, the company fixed another vulnerability flagged by Citizen Lab researcher Bill Marczak: CVE-2025-24200, which Apple said was used in an extremely sophisticated physical attack against specific targeted individuals that allowed attackers to disable USB Restricted Mode on a locked device.

In other words, the flaw could apparently be exploited only if the attacker had physical access to the targeted vulnerable device. And as the old infosec industry adage goes, if an adversary has physical access to your device, it’s most likely not your device anymore.

I can’t speak to Google’s Advanced Protection Mode personally, because I don’t use Google or Android devices. But I have had Apple’s Lockdown Mode enabled on all of my Apple devices since it was first made available in September 2022. I can only think of a single occasion when one of my apps failed to work properly with Lockdown Mode turned on, and in that case I was able to add a temporary exception for that app in Lockdown Mode’s settings.

My main gripe with Lockdown Mode was captured in a March 2025 column by TechCrunch’s Lorenzo Franceschi-Bicchierai, who wrote about its penchant for periodically sending mystifying notifications that someone has been blocked from contacting you, even though nothing then prevents you from contacting that person directly. This has happened to me at least twice, and in both cases the person in question was already an approved contact, and said they had not attempted to reach out.

Although it would be nice if Apple’s Lockdown Mode sent fewer, less alarming and more informative alerts, the occasional baffling warning message is hardly enough to make me turn it off.

Cryptogram Iranian Blackout Affected Misinformation Campaigns

Dozens of accounts on X that promoted Scottish independence went dark during an internet blackout in Iran.

Well, that’s one way to identify fake accounts and misinformation campaigns.

Planet DebianRussell Coker: Links June 2025

Jonathan McDowell wrote part 2 of his blog series about setting up a voice assistant on Debian, I look forward to reading further posts [1]. I’m working on some related things for Debian that will hopefully work with this.

I’m testing out OpenSnitch on Trixie inspired by this blog post, it’s an interesting package [2].

Valerie wrote an informative article about creating mesh networks using LORA for emergency use [3].

Interesting article about Signal and Windows Recall. That gives us some things to consider regarding ML features on Linux systems [4].

Insightful article about AI and the end of prestige [5]. We should all learn about LLMs.

Jonathan Dowland wrote an informative blog post about how to manage namespaces on Linux [6].

The Consumer Rights wiki is a great resource for raising awareness of corporations exploiting their customers for computer related goods and services [7].

Interesting article about Schizophrenia and the cliff-edge function of evolution [8].

Charles StrossBrief Update

The reason(s) for the long silence here:

I've been attacked by an unscheduled novel, which is now nearly 40% written (in first draft). Then that was pre-empted by the copy edits for The Regicide Report (which have a deadline attached, because there's a publication date).

I also took time off for Eastercon, then hospital out-patient procedures. (Good news: I do not have colorectal cancer. Yay! Bad news: they didn't find the source of the blood in my stool, so I'm going back for another endoscopy.)

Finally, I'm still on the waiting list for cataract surgery. Blurred vision makes typing a chore, so I'm spending my time productively—you want more novels, right? Right?

Anyway: I should finish the copy edits within the next week, then get back to one or other of the two novels I'm working on in parallel (the attack novel and Ghost Engine: they share the same fictional far future setting), then maybe I can think of something to blog about again—but not the near future, it's too depressing. (I mean, if I'd written up our current political developments in a work of fiction any time before 2020 they'd have been rejected by any serious SF editor as too implausibly bizarre to publish.)

Worse Than FailureCodeSOD: A Highly Paid Field

In ancient times, Rob's employer didn't have its own computer; it rented time on a mid-range computer and ran all its jobs using batch processing in COBOL. And in those ancient times, these stone tools were just fine.

But computing got more and more important, and the costs for renting time kept going up and up, so they eventually bought their own AS/400. And that meant someone needed to migrate all of their COBOL to RPG. And management knew what you do for those kinds of conversions: hire a Highly Paid Consultant.

On one hand, the results weren't great. On the other, the code is still in use, though has been through many updates and modernizations and migrations in that time. Still, the HPC's effects can be felt, like this block, which hasn't been touched since she was last here:

// CHECK FOR VALID FIELD
IF FIELD1 <> *BLANKS AND FIELD1 < '1' AND FIELD1 > '5';
    BadField1 = *ON;
    LEAVESR;
ENDIF;     

This is a validation check on a field (anonymized by Rob), but the key thing I want you to note is that what the field stores are numbers, but it stores those numbers as text- note the quotes. And the greater-than/less-than operators will do lexical comparisons on text, which means '21' < '5' is true.

The goal of this comparison was to require the values to be between 1 and 5. But that's not what it's enforcing. The only good(?) news is that this field also isn't used. There's one screen where users can set the value, but no one has- it's currently blank everywhere- and nothing else in the system references the value. Which raises the question of why it's there at all.

But those kinds of questions are par for the course for the HPC. When they migrated a bunch of reports and the users compared the results with the original versions, the results didn't balance. The HPC's explanation? "The users are changing the data to make me look bad."

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 TomorrowsThe Wishing Well

Author: Ashwini Shenoy The first time, I think it’s a dream. You and I are holding hands. The night-blooming jasmine spreads its fragrance, sweet and soothing. The fruit trees sway in the twilight. The birds chirp and butterflies swirl. Our garden, our labor of love, built plant by plant, stands witness. But you’re serious, anxious. […]

The post The Wishing Well appeared first on 365tomorrows.

xkcdDehumidifier

Planet DebianOtto Kekäläinen: Corporate best practices for upstream open source contributions


This post is based on a presentation given at the Validos annual members’ meeting on June 25th, 2025.

When I started getting into Linux and open source over 25 years ago, the majority of the software development in this area was done by academics and hobbyists. The number of companies participating in open source has since exploded in parallel with the growth of mobile and cloud software, the majority of which is built on top of open source. For example, Android powers most mobile phones today and is based on Linux. Almost all software used to operate large cloud provider data centers, such as AWS or Google, is either open source or made in-house by the cloud provider.

Pretty much all companies, regardless of the industry, have been using open source software at least to some extent for years. However, the degree to which they collaborate with the upstream origins of the software varies. I encourage all companies in a technical industry to start contributing upstream. There are many benefits to having a good relationship with your upstream open source software vendors, both for the short term and especially for the long term. Moreover, with the rollout of the CRA in the EU in 2025-2027, the law will require software companies to contribute security fixes upstream to the open source projects their products use.

To ensure the process is well managed, business-aligned and legally compliant, there are a few do’s and don’ts that are important to be aware of.

Maintain your SBOMs

For every piece of software, regardless of whether the code was done in-house, from an open source project, or a combination of these, every company needs to produce a Software Bill of Materials (SBOM). The SBOMs provide a standardized and interoperable way to track what software and which versions are used where, what software licenses apply, who holds the copyright of which component, which security fixes have been applied and so forth.

A catalog of SBOMs, or equivalent, forms the backbone of software supply-chain management in corporations.
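
Generating the SBOMs themselves is usually a matter of wiring a generator into the build or release pipeline. As a minimal sketch only (the choice of Anchore’s syft and of the CycloneDX JSON format here are my assumptions; any equivalent generator and format will do):

$ syft dir:. -o cyclonedx-json > sbom.json   # scan a source tree and record its components

The resulting files can then be collected into whatever catalog or dependency-tracking system the organization already uses.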

Identify your strategic upstream vendors

The SBOMs are likely to reveal that for any piece of non-trivial software, there are hundreds or thousands of upstream open source projects in use. Few organizations have resources to contribute to all of their upstreams.

If your organization is just starting to organize upstream contribution activities, identify the key projects that have the largest impact on your business and prioritize forming a relationship with them first. Organizations with a mature contribution process will be collaborating with tens or hundreds of upstreams.

An upstream contribution policy typically covers things such as who decides what can be contributed upstream from a business point of view, what licenses are allowed or to avoid, how to document copyright, how to deal with projects that require signing copyright assignments (e.g. contributor license agreements), other potential legal guidelines to follow. Additionally, the technical steps on how to prepare a contribution should be outlined, including how to internally review and re-review them, who the technical approvers are to ensure high quality and good reputation and so on.

The policy does not have to be static or difficult to produce. Start with a small policy and a few trusted senior developers following it, and update its contents as you run into new situations that need internal company alignment. For example, don’t require staff to create new GitHub accounts merely for the purpose of doing one open source contribution. Initially, do things with minimal overhead and add requirements to the policy only if they have clear and strong benefits. The purpose of a policy should be to make it obvious and easy for employees to do the right thing, not to add obstacles and stop progress or encourage people to break the policy.

Appoint an internal coordinator and champions

Having a written policy on how to contribute upstream will help ensure a consistent process and avoid common pitfalls. However, a written policy alone does not automatically translate into a well-running process. It is highly recommended to appoint at least one internal coordinator who is knowledgeable about how open source communities work, how software licensing and patents work, and is senior enough to have a good sense of what business priorities to optimize for. In small organizations it can be a single person, while larger organizations typically have a full Open Source Programs Office.

This coordinator should oversee the contribution process, track all contributions made across the organization, and further optimize the process by working with stakeholders across the business, including legal experts, business owners and CTOs. The marketing and recruiting folks should also be involved, as upstream contributions will have a reputation-building aspect as well, which can be enhanced with systematic tracking and publishing of activities.

Additionally, at least in the beginning, the organization should also appoint key staff members as open source champions. Implementing a new process always includes some obstacles and occasional setbacks, which may discourage employees from putting in the extra effort to reap the full long-term benefits for the company. Having named champions will empower them to make the first few contributions themselves, setting a good example and encouraging and mentoring others to contribute upstream as well.

Avoid excessive approvals

To maintain a high quality bar, it is always good to have all outgoing submissions reviewed by at least one or two people. Two or three pairs of eyeballs are significantly more likely to catch issues that might slip by someone working alone. The review also slows down the process by a day or two, which gives the author time to “sleep on it”, which usually helps to ensure the final submission is well-thought-out by the author.

Do not require more than one or two reviewers. The marginal utility goes quickly to zero beyond a few reviewers, and at around four or five people the effect becomes negative, as the weight of each approval decreases and the reviewers begin to take less personal responsibility. Having too many people in the loop also makes each feedback round slow and expensive, to the extent that the author will hesitate to make updates and ask for re-reviews due to the costs involved.

If the organization experiences setbacks due to mistakes slipping through the review process, do not respond by adding more reviewers, as it will just grind the contribution process to a halt. If there are quality concerns, invest in training for engineers, CI systems and perhaps an internal certification program for those making public upstream code submissions. A typical software engineer is more likely to seriously try to become proficient at their job, put effort into a one-off certification exam and then make multiple high-quality contributions, than a low-skilled engineer is to improve, or even to want to keep contributing upstream, if they are burdened by a heavy review process every time they try to submit a contribution.

Don’t expect upstream to accept all code contributions

Sure, identifying the root cause of and fixing a tricky bug or writing a new feature requires significant effort. While an open source project will certainly appreciate the effort invested, it doesn’t mean it will always welcome all contributions with open arms. Occasionally, the project won’t agree that the code is correct or the feature is useful, and some contributions are bound to be rejected.

You can minimize the chance of experiencing rejections by having a solid internal review process that includes assessing how the upstream community is likely to understand the proposal. Sometimes how things are communicated is more important than how they are coded. Polishing inline comments and git commit messages help ensure high-quality communication, along with a commitment to respond quickly to review feedback and conducting regular follow-ups until a contribution is finalized and accepted.

Start small to grow expertise and reputation

In addition to keeping the open source contribution policy lean and nimble, it is also good to start practical contributions with small issues. Don’t aim to contribute massive features until you have a track record of being able to make multiple small contributions.

Keep in mind that not all open source projects are equal. Each has its own culture, written and unwritten rules, development process, documented requirements (which may be outdated) and more. Starting with a tiny contribution, even just a typo fix, is a good way to validate how code submissions, reviews and approvals work in a particular project. Once you have staff who have successfully landed smaller contributions, you can start planning larger proposals. The exact same proposal might be unsuccessful when proposed by a new person, and successful when proposed by a person who already has a reputation for prior high-quality work.

Embrace all and any publicity you get

Some companies have concerns about their employees working in the open. Indeed, every email and code patch an employee submits, and all related discussions become public. This may initially sound scary, but is actually a potential source of good publicity. Employees need to be trained on how to conduct themselves publicly, and the discussions about code should contain only information strictly related to the code, without any references to actual production environments or other sensitive information. In the long run most employees contributing have a positive impact and the company should reap the benefits of positive publicity. If there are quality issues or employee judgment issues, hiding the activity or forcing employees to contribute with pseudonyms is not a proper solution. Instead, the problems should be addressed at the root, and bad behavior addressed rather than tolerated.

When people are working publicly, there tends to also be some degree of additional pride involved, which motivates people to try their best. Contributions need to be public for the sponsoring corporation to later be able to claim copyright or licenses. Considering that thousands of companies participate in open source every day, the prevalence of bad publicity is quite low, and the benefits far exceed the risks.

Scratch your own itch

When choosing what to contribute, select things that benefit your own company. This is not purely about being selfish - often people working on resolving a problem they suffer from are the same people with the best expertise of what the problem is and what kind of solution is optimal. Also, the issues that are most pressing to your company are more likely to be universally useful to solve than any random bug or feature request in the upstream project’s issue tracker.

Remember there are many ways to help upstream

While submitting code is often considered the primary way to contribute, please keep in mind there are also other highly impactful ways to contribute. Submitting high-quality bug reports will help developers quickly identify and prioritize issues to fix. Providing good research, benchmarks, statistics or feedback helps guide development and the project make better design decisions. Documentation, translations, organizing events and providing marketing support can help increase adoption and strengthen long-term viability for the project.

In some of the largest open source projects there are already far more pending contributions than the core maintainers can process. Therefore, developers who contribute code should also get into the habit of contributing reviews. As Linus’ law states, given enough eyeballs, all bugs are shallow. Reviewing other contributors’ submissions will help improve quality, and also alleviate the pressure on core maintainers who are the only ones providing feedback. Reviewing code submitted by others is also a great learning opportunity for the reviewer. The reviewer does not need to be “better” than the submitter - any feedback is useful; merely posting review feedback is not the same thing as making an approval decision.

Many projects are also happy to accept monetary support and sponsorships. Some offer specific perks in return. By human nature, the largest sponsors always get their voice heard in important decisions, as no open source project wants to take actions that scare away major financial contributors.

Starting is the hardest part

Long-term success in open source comes from a positive feedback loop of an ever-increasing number of users and collaborators. As seen in the examples of countless corporations contributing open source, the benefits are concrete, and the process usually runs well after the initial ramp-up and organizational learning phase has passed.

In open source ecosystems, contributing upstream should be as natural as paying vendors in any business. If you are using open source and not contributing at all, you likely have latent business risks without realizing it. You don’t want to wake up one morning to learn that your top talent left because they were forbidden from participating in open source for the company’s benefit, or that you were fined due to CRA violations and mismanagement in sharing security fixes with the correct parties. The faster you start with the process, the less likely those risks will materialize.

,

Cryptogram How Cybersecurity Fears Affect Confidence in Voting Systems

American democracy runs on trust, and that trust is cracking.

Nearly half of Americans, both Democrats and Republicans, question whether elections are conducted fairly. Some voters accept election results only when their side wins. The problem isn’t just political polarization—it’s a creeping erosion of trust in the machinery of democracy itself.

Commentators blame ideological tribalism, misinformation campaigns and partisan echo chambers for this crisis of trust. But these explanations miss a critical piece of the puzzle: a growing unease with the digital infrastructure that now underpins nearly every aspect of how Americans vote.

The digital transformation of American elections has been swift and sweeping. Just two decades ago, most people voted using mechanical levers or punch cards. Today, over 95% of ballots are counted electronically. Digital systems have replaced poll books, taken over voter identity verification processes and are integrated into registration, counting, auditing and voting systems.

This technological leap has made voting more accessible and efficient, and sometimes more secure. But these new systems are also more complex. And that complexity plays into the hands of those looking to undermine democracy.

In recent years, authoritarian regimes have refined a chillingly effective strategy to chip away at Americans’ faith in democracy by relentlessly sowing doubt about the tools U.S. states use to conduct elections. It’s a sustained campaign to fracture civic faith and make Americans believe that democracy is rigged, especially when their side loses.

This is not cyberwar in the traditional sense. There’s no evidence that anyone has managed to break into voting machines and alter votes. But cyberattacks on election systems don’t need to succeed to have an effect. Even a single failed intrusion, magnified by sensational headlines and political echo chambers, is enough to shake public trust. By feeding into existing anxiety about the complexity and opacity of digital systems, adversaries create fertile ground for disinformation and conspiracy theories.

Testing cyber fears

To test this dynamic, we launched a study to uncover precisely how cyberattacks corroded trust in the vote during the 2024 U.S. presidential race. We surveyed more than 3,000 voters before and after election day, testing them using a series of fictional but highly realistic breaking news reports depicting cyberattacks against critical infrastructure. We randomly assigned participants to watch different types of news reports: some depicting cyberattacks on election systems, others on unrelated infrastructure such as the power grid, and a third, neutral control group.

The results, which are under peer review, were both striking and sobering. Mere exposure to reports of cyberattacks undermined trust in the electoral process—regardless of partisanship. Voters who supported the losing candidate experienced the greatest drop in trust, with two-thirds of Democratic voters showing heightened skepticism toward the election results.

But winners too showed diminished confidence. Even though most Republican voters, buoyed by their victory, accepted the overall security of the election, the majority of those who viewed news reports about cyberattacks remained suspicious.

The attacks didn’t even have to be related to the election. Even cyberattacks against critical infrastructure such as utilities had spillover effects. Voters seemed to extrapolate: “If the power grid can be hacked, why should I believe that voting machines are secure?”

Strikingly, voters who used digital machines to cast their ballots were the most rattled. For this group of people, belief in the accuracy of the vote count fell by nearly twice as much as that of voters who cast their ballots by mail and who didn’t use any technology. Their firsthand experience with the sorts of systems being portrayed as vulnerable personalized the threat.

It’s not hard to see why. When you’ve just used a touchscreen to vote, and then you see a news report about a digital system being breached, the leap in logic isn’t far.

Our data suggests that in a digital society, perceptions of trust—and distrust—are fluid, contagious and easily activated. The cyber domain isn’t just about networks and code. It’s also about emotions: fear, vulnerability and uncertainty.

Firewall of trust

Does this mean we should scrap electronic voting machines? Not necessarily.

Every election system, digital or analog, has flaws. And in many respects, today’s high-tech systems have solved the problems of the past with voter-verifiable paper ballots. Modern voting machines reduce human error, increase accessibility and speed up the vote count. No one misses the hanging chads of 2000.

But technology, no matter how advanced, cannot instill legitimacy on its own. It must be paired with something harder to code: public trust. In an environment where foreign adversaries amplify every flaw, cyberattacks can trigger spirals of suspicion. It is no longer enough for elections to be secure – voters must also perceive them to be secure.

That’s why public education surrounding elections is now as vital to election security as firewalls and encrypted networks. It’s vital that voters understand how elections are run, how they’re protected and how failures are caught and corrected. Election officials, civil society groups and researchers can teach how audits work, host open-source verification demonstrations and ensure that high-tech electoral processes are comprehensible to voters.

We believe this is an essential investment in democratic resilience. But it needs to be proactive, not reactive. By the time the doubt takes hold, it’s already too late.

Just as crucially, we are convinced that it’s time to rethink the very nature of cyber threats. People often imagine them in military terms. But that framework misses the true power of these threats. The danger of cyberattacks is not only that they can destroy infrastructure or steal classified secrets, but that they chip away at societal cohesion, sow anxiety and fray citizens’ confidence in democratic institutions. These attacks erode the very idea of truth itself by making people doubt that anything can be trusted.

If trust is the target, then we believe that elected officials should start to treat trust as a national asset: something to be built, renewed and defended. Because in the end, elections aren’t just about votes being counted—they’re about people believing that those votes count.

And in that belief lies the true firewall of democracy.

This essay was written with Ryan Shandler and Anthony J. DeMattee, and originally appeared in The Conversation.

Planet DebianMatthias Geiger: Hello world

I finally got around to setting up a blog with pelican as SSG, so here I will be posting about my various Debian-related activities.

Planet DebianSergio Cipriano: How I deployed this Website

How I deployed this Website

I will describe the step-by-step process I followed to make this static website accessible on the Internet.

DNS

I bought this domain on NameCheap and am using their DNS for now, where I created these records:

Record Type   Host                 Value
A             sergiocipriano.com   201.54.0.17
CNAME         www                  sergiocipriano.com
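
A quick way to confirm the records resolve as expected (assuming dig is installed):

$ dig +short sergiocipriano.com A          # should print 201.54.0.17
$ dig +short www.sergiocipriano.com CNAME  # should print sergiocipriano.com.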

Virtual Machine

I am using Magalu Cloud for hosting my VM, since employees have free credits.

Besides creating a VM with a public IP, I only needed to set up a Security Group with the following rules:

Type          Protocol   Port   Direction   CIDR
IPv4 / IPv6   TCP        80     IN          Any IP
IPv4 / IPv6   TCP        443    IN          Any IP

Firewall

The first thing I did in the VM was to enable ufw (Uncomplicated Firewall).

Enabling ufw without pre-allowing SSH is a common pitfall and can lock you out of your VM. I did this once :)

A safe way to enable ufw:

$ sudo ufw allow OpenSSH      # or: sudo ufw allow 22/tcp
$ sudo ufw allow 'Nginx Full' # or: sudo ufw allow 80,443/tcp
$ sudo ufw enable

To check if everything is ok, run:

$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To                           Action      From
--                           ------      ----
22/tcp (OpenSSH)             ALLOW IN    Anywhere                  
80,443/tcp (Nginx Full)      ALLOW IN    Anywhere                  
22/tcp (OpenSSH (v6))        ALLOW IN    Anywhere (v6)             
80,443/tcp (Nginx Full (v6)) ALLOW IN    Anywhere (v6) 

Reverse Proxy

I'm using Nginx as the reverse proxy. Since I use the Debian package, I just needed to add this file:

/etc/nginx/sites-enabled/sergiocipriano.com

with this content:

server {
    listen 443 ssl;      # IPv4
    listen [::]:443 ssl; # IPv6

    server_name sergiocipriano.com www.sergiocipriano.com;

    root /path/to/website/sergiocipriano.com;
    index index.html;

    location / {
        try_files $uri /index.html;
    }
}

server {
    listen 80;
    listen [::]:80;

    server_name sergiocipriano.com www.sergiocipriano.com;

    # Redirect all HTTP traffic to HTTPS
    return 301 https://$host$request_uri;
}
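
Before relying on a new site file, it is worth letting Nginx validate the configuration and then reloading it (standard commands on Debian):

$ sudo nginx -t                 # check the syntax of the full configuration
$ sudo systemctl reload nginx   # apply it without dropping existing connections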

TLS

It's really easy to set up TLS thanks to Let's Encrypt:

$ sudo apt-get install certbot python3-certbot-nginx
$ sudo certbot install --cert-name sergiocipriano.com
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Deploying certificate
Successfully deployed certificate for sergiocipriano.com to /etc/nginx/sites-enabled/sergiocipriano.com
Successfully deployed certificate for www.sergiocipriano.com to /etc/nginx/sites-enabled/sergiocipriano.com

Certbot will edit the nginx configuration with the path to the certificate.
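
The Debian certbot package also installs a systemd timer that renews certificates automatically; a dry run is an easy way to confirm that renewal will keep working:

$ sudo certbot renew --dry-run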

HTTP Security Headers

I decided to use wapiti, which is a web application vulnerability scanner, and the report found these problems:

  1. CSP is not set
  2. X-Frame-Options is not set
  3. X-XSS-Protection is not set
  4. X-Content-Type-Options is not set
  5. Strict-Transport-Security is not set

I'll explain one by one:

  1. The Content-Security-Policy header prevents XSS and data injection by restricting sources of scripts, images, styles, etc.
  2. The X-Frame-Options header prevents a website from being embedded in iframes (clickjacking).
  3. The X-XSS-Protection header is deprecated. It is recommended that CSP is used instead of XSS filtering.
  4. The X-Content-Type-Options header stops MIME-type sniffing to prevent certain attacks.
  5. The Strict-Transport-Security header informs browsers that the host should only be accessed using HTTPS, and that any future attempts to access it using HTTP should automatically be upgraded to HTTPS. Additionally, on future connections to the host, the browser will not allow the user to bypass secure connection errors, such as an invalid certificate. HSTS identifies a host by its domain name only.

I added these security headers inside the HTTPS and HTTP server blocks, outside the location block, so they apply globally to all responses. Here's what the Nginx config looks like:

add_header Content-Security-Policy "default-src 'self'; style-src 'self';" always;
add_header X-Frame-Options "DENY" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

I added always to ensure that nginx sends the header regardless of the response code.

To add the Content-Security-Policy header I had to move the CSS to a separate file, because browsers block inline styles under strict CSP unless you allow them explicitly. They're considered 'unsafe-inline' unless you move them to a separate file and link it like this:

<link rel="stylesheet" href="./assets/header.css">

365 TomorrowsAwareness Training

Author: David C. Nutt “OK everybody up and let’s get the blood flowing.” Marcy Partridge rolled her eyes. Yet another impossibly annoying corporate team building exercise. She had no idea why all of a sudden the company was inflicting these motivational morons upon them. Wasn’t it enough to just do the job and go home? […]

The post Awareness Training appeared first on 365tomorrows.

David BrinOh, those misleading teleologies about "progress" - and a few political notes

And now... some intellectual stuff, if that's what you are holding out for!

And yes, for those of you who blithely use the terms 'left' and 'right' in politics, without actually knowing what they mean, here's one of maybe ten things you likely never knew, that you really, really ought to.


== How the left and right differently view the future's 'ordained' path. 

...And why actual liberals think both 'teleologies' suck.

In an earlier post, I referred to a summary by Noema editor Nathan Gardels regarding some musings about Progressive History and Europe’s role in the current planetary politics, by Slavoj Žižek. While I found Žižek’s perspectives interesting, I must offer cavils regarding this:

“For Žižek, all major ideologies, from Liberalism to Marxism, believe history has a direction that moves inexorably toward their universal realization. But today, he maintains, we live in a moment where you can’t draw a straight line to the future.”

In fact, progressively improving, end-result teleology is a very recent epiphenomenon of Enlightenment Civilization – and almost entirely Western. Until these last 3 centuries, the pervasive historical teleologies were:

1. Successive, gradual decline, as in the Greek notions of golden, silver and iron ages.

2. More rapid, steep decline to hellish end times, followed by divine intervention, as in Christian doctrines.

3. Cyclical history – everything cycles back around, as in Hindu and Nordic lore – as well as Nazi and Confederate incantations. And now, a noxious recent mysticism called the Cult of the Fourth Turning.

All three of these ancient visions have deep roots in human psyche. All three were pushed hard by the powerful, from kings and lords to priests. And all three have long been core to what we’d now call ‘conservatism’, as they all preach at peasants: ‘Accept your place: don’t strive or even hope to improve the world.’

One exception – ironically clutched by 2000 years of persecuted Jews – was a notion that the world is improvable and that the Creator needs our assertive help to save it. Indeed, this usually-forlorn dream seems to have been what manifested in several grandsons-of-rabbis, like Sigmund Freud and especially Karl Marx.

And later – yes – in Isaac Asimov’s notions of ‘psychohistory,’ which inspired nerds across a very wide spectrum, ranging from Paul Krugman all the way to Shoko Asahara and Osama bin Laden.

Overall, it was a whole lot of grouchy-gloom to quench any glimmers of hope, during grouchy-gloomy times. 


But things eventually changed, rousing a new, competing view of the time-flow of history. Inspired by the palpable progress coming out of industrial factories and modern medical science, we began to see a new kind of teleology. That of egalitarian ‘progress.’ Manifesting in either of two modes:

1. ...in moderate, incremental stages, as in the U.S. Revolution, and every American generation that followed…

2. …or else impatient transcendentalism – demanding immediate leaps to remaking humanity - as we saw in the French and then Russian and Chinese Revolutions.

Either way, this linear and ever-upward notion of historical directionality was a clear threat to the beneficiaries of older teleologies… ruling classes, who needed justification for continued obligate power. And especially excuses to repress potential rivals to their sons’ inheritance of power.

Thus it is no accident that all three of the more ancient motifs and views of ‘history’ - downward or cyclical - are being pushed, hard, by our current attempted worldwide oligarchic putsch. Each of them tuned to a different conservative constituency! 

For example: the Fourth Turning cult is especially rife among those Republicans who desperately cling to chants like: “I AM in favor of freedom & progress! I am!” Even though they are among the very ones causing the completely unnecessary 'crisis' that will then require rescue by a 'hero generation.'

(Side note: Are the impatient transcendentalists on "our side" of the current struggle - shouting for instant transformation(!!) - deeply harmful to their own cause, the way Robespierre and Mao were, to theirs? Absolutely. Angrily impatient with incrementalism, sanctimony junkies of the far-left were partly responsible for Trump v.2, by shattering the liberal coalition with verbal purity tests that drove away (temporarily, we hope) two million Blacks, Hispanics and lower middle class whites.)

Why do I raise this point about teleologies of left and right yet again, even though I never get any traction with it, never ever prompting others to step back and look at such patterns?

Perhaps because new pattern aficionados are on the horizon! Indeed, there is always a hope that our new AI children will see what their cave-folk parents could not. And explain it to them.


== Some political notes ==

Russian corvettes escort quasi-illegal Shadow tankers thru the English Channel while NATO navies daily thwart attempts to sabotage subsea pipes & data cables. Might Ukraine say: "Iran, NKorea & Gabon have openly joined the RF waging war on us. Under the 300-year Rules of War, we may seize or sink enemy ships on the high seas. We've bought, equipped, flagged, manned and sent to the Atlantic Ukrainian navy ships to do that."

* Those shrugging off the appointment of 22-year-old Thomas Fugate as the top US counter-terrorism czar will have some 'splaining to do, when these moves - replacing competent professionals with Foxite shills - come home to roost. But I've already pointed out the glaring historical parallel: when the mad tyrant Caligula tested the Roman Senate by appointing - as Consul - his horse. No Senator stood up to that, or to C's sadist orgies or public snatch-strangulations.

Today it would take just 2 GOP Senators and 2 Reps, stepping up, to curb the insanity by half or more. Threats & rampant blackmail (check the male relatives of Collins & Murkowski) don't suffice to explain or excuse such craven betrayal across the GOP, since the first few to step up would be reckoned heroes, no matter what kompromat the KGB has on you.

Will someone do up a nice meme on Caligula's horse, laughing at us?

* Another suggested meme about the dismal insipidity of the masks worn by ICE agents in these brownshirt-style immigration raids.

"Hey ICE masked-rangers, You think a mask suffices in 2025? When cameras can zoom into your iris? Bone structure and gait? (Keep a pebble in your shoe!) Anyway, that comrade (and fellow KGB puppet) next to you is recording every raid for his Squealer File. For plea bargaining when this all goes down."

What? You think "He'd never do that to me!"?

In poker, everyone knows who the patsy is. If you don't know, then it's you.


== Finally, glorious Grand Dames of Sci Fi! ==

A pair of terrific speeches about science fiction. Newly- (and way-deservedly!)- installed Grand Master Nicola Griffith relates how SF encouraged her to believe that ancient injustices can be overcome, if first writers help readers believe it possible. The young MC of the event, Erin Roberts, was truly amazing, taking perspectives that were variously passionate, amusing and deeply insightful. Persuasive and yet not polemically fixated, she's the real deal that's needed now, more than ever.

,

365 TomorrowsGo South Young Man!

Author: David Barber McMurdo Station’s a rough town. It had ambitions to be a city one day, with law and order, and schools and churches and such, but meanwhile bullets were cheaper than bread. Hucksters still sold snow shoes to climate rats fresh off the boat, like the Melt never happened, before we headed south […]

The post Go South Young Man! appeared first on 365tomorrows.

Planet DebianJunichi Uekawa: MiniDebconf in Japan.

MiniDebconf in Japan. Seems like we are having a MiniDebconf in Japan. wiki.

,

Cryptogram Friday Squid Blogging: What to Do When You Find a Squid “Egg Mop”

Tips on what to do if you find a mop of squid eggs.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Planet DebianJonathan Dowland: Viva

On Monday I had my Viva Voce (PhD defence), and passed (with minor corrections).

Post-viva refreshment

It's a relief to have passed after 8 years of work. I'm not quite done of course, as I have the corrections to make! Once those are accepted I'll upload my thesis here.

Worse Than FailureError'd: Button, button, who's got the button?

Wikipedia describes the (very old) English children's game. I wonder if there's a similar game in Germany. In any case, the Worcester News is definitely confused about how this game is played.

Martin I. explains "This is a cookie acceptance dialog. It seems to struggle with labeling the buttons when the user's browser is not set to English ..."

In Dutch, Robert R. is playing a different game. "Duolingo is teaching users more than just languages - apparently web development fundamentals are included when HTML entities leak into the user interface. That's one way to make "&nbsp;" part of your vocabulary!" We wonder why the webdev would want to use an nbsp in this location.

Ninja Squirrel shares a flubstitution nugget. "Since I've been waiting a long time for a good deal on a new gaming keyboard and the Logitech Play Days started today, I thought I'd treat myself. I wasn't prepared for what Logitech then treated me to - free gifts and wonderful localization errors in the productive WebShop. What started with a simple “Failed to load resource [Logitech.checkout.Total]” in the order overview ended with this wonderful total failure after the order was placed. What a sight to behold - I love it! XD"

David P. imagines that Tesla's web devs are allowed near embedded systems. "If Tesla can't even do dates correctly, imagine how much fun Full Self Driving is." Given how often FSD has been promised imminently, I conclude that date confusion is simply central to the corporate culture. Embrace it.

But it's not only Tesla that bungles whens. Neil T. nails another big name. "Has Google's Gemini AI hallucinated a whole new calendar? I'm pretty sure the Gregorian calendar only has 30 days in June."

And that's it for this week. Next Friday is definitely not June.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 Tomorrows9AM

Author: Alice Rayworth Every morning, at 9am, the same moving truck pulls up and the same family gets out. They are untouched by weather; even as the world turns grey and cold around them, they remain in the same summer clothes they first arrived in. People who live next door, and those who can excuse […]

The post 9AM appeared first on 365tomorrows.

xkcdLaser Danger

Planet DebianReproducible Builds (diffoscope): diffoscope 300 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 300. This version includes the following changes:

[ "Alex" ]
* Fix a regression and add a test so that diffoscope picks up differences
  in metadata for identical files again. (Closes: reproducible-builds/diffoscope#411)

You find out more by visiting the project homepage.

,

Planet DebianBits from Debian: AMD Platinum Sponsor of DebConf25

We are pleased to announce that AMD has committed to sponsor DebConf25 as a Platinum Sponsor.

The AMD ROCm platform includes programming models, tools, compilers, libraries, and runtimes for AI and HPC solution development on AMD GPUs. Debian is an officially supported platform for AMD ROCm and a growing number of components are now included directly in the Debian distribution.

For more than 55 years AMD has driven innovation in high-performance computing, graphics and visualization technologies. AMD is deeply committed to supporting and contributing to open-source projects, foundations, and open-standards organizations, taking pride in fostering innovation and collaboration within the open-source community.

With this commitment as Platinum Sponsor, AMD is contributing to the annual Debian Developers’ Conference, directly supporting the progress of Debian and Free Software. AMD contributes to strengthening the worldwide community that collaborates on Debian projects year-round.

Thank you very much, AMD, for your support of DebConf25!

Become a sponsor too!

DebConf25 will take place from 14 to 20 July 2025 in Brest, France, and will be preceded by DebCamp, from 7 to 13 July 2025.

DebConf25 is accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf25 website at https://debconf25.debconf.org/sponsors/become-a-sponsor/.

Cryptogram The Age of Integrity

We need to talk about data integrity.

Narrowly, the term refers to ensuring that data isn’t tampered with, either in transit or in storage. Manipulating account balances in bank databases, removing entries from criminal records, and murder by removing notations about allergies from medical records are all integrity attacks.

More broadly, integrity refers to ensuring that data is correct and accurate from the point it is collected, through all the ways it is used, modified, transformed, and eventually deleted. Integrity-related incidents include malicious actions, but also inadvertent mistakes.

We tend not to think of them this way, but we have many primitive integrity measures built into our computer systems. The reboot process, which returns a computer to a known good state, is an integrity measure. The undo button is another integrity measure. Any of our systems that detect hard drive errors, file corruption, or dropped internet packets are integrity measures.

Just as a website leaving personal data exposed even if no one accessed it counts as a privacy breach, a system that fails to guarantee the accuracy of its data counts as an integrity breach – even if no one deliberately manipulated that data.

Integrity has always been important, but as we start using massive amounts of data to both train and operate AI systems, data integrity will become more critical than ever.

Most of the attacks against AI systems are integrity attacks. Affixing small stickers on road signs to fool AI driving systems is an integrity violation. Prompt injection attacks are another integrity violation. In both cases, the AI model can’t distinguish between legitimate data and malicious input: visual in the first case, text instructions in the second. Even worse, the AI model can’t distinguish between legitimate data and malicious commands.

Any attacks that manipulate the training data, the model, the input, the output, or the feedback from the interaction back into the model is an integrity violation. If you’re building an AI system, integrity is your biggest security problem. And it’s one we’re going to need to think about, talk about, and figure out how to solve.

Web 3.0 – the distributed, decentralized, intelligent web of tomorrow – is all about data integrity. It’s not just AI. Verifiable, trustworthy, accurate data and computation are necessary parts of cloud computing, peer-to-peer social networking, and distributed data storage. Imagine a world of driverless cars, where the cars communicate with each other about their intentions and road conditions. That doesn’t work without integrity. And neither does a smart power grid, or reliable mesh networking. There are no trustworthy AI agents without integrity.

We’re going to have to solve a small language problem first, though. Confidentiality is to confidential, and availability is to available, as integrity is to what? The analogous word is “integrous,” but that’s such an obscure word that it’s not in the Merriam-Webster dictionary, even in its unabridged version. I propose that we re-popularize the word, starting here.

We need research into integrous system design.

We need research into a series of hard problems that encompass both data and computational integrity. How do we test and measure integrity? How do we build verifiable sensors with auditable system outputs? How do we build integrous data processing units? How do we recover from an integrity breach? These are just a few of the questions we will need to answer once we start poking around at integrity.

There are deep questions here, deep as the internet. Back in the 1960s, the internet was designed to answer a basic security question: Can we build an available network in a world of availability failures? More recently, we turned to the question of privacy: Can we build a confidential network in a world of confidentiality failures? I propose that the current version of this question needs to be this: Can we build an integrous network in a world of integrity failures? Like the two versions of this question that came before, the answer isn’t obviously “yes,” but it’s not obviously “no,” either.

Let’s start thinking about integrous system design. And let’s start using the word in conversation. The more we use it, the less weird it will sound. And, who knows, maybe someday the American Dialect Society will choose it as the word of the year.

This essay was originally published in IEEE Security & Privacy.

Worse Than FailureClassic WTF: NoeTimeToken

Maybe we'll just try and read a book. That's a good way to spend your vacation. This can't possibly go badly! Original --Remy

"Have you had a chance to look at that JIRA ticket yet?"

Marge debated pretending she hadn't seen the Slack message yet—but, if she did, she knew Gary would just walk over to her desk and badger her further. In truth, she didn't want to look at the ticket: it was a low priority ticket, and worse, it only affected a small fraction of one client's customers, meaning it was likely to be some weird edge case bug nobody would ever run into again. Maybe if I ignore it long enough, it'll go away on its own, she thought.

The client was a bookseller with a small but significant-to-them online presence; the software they used to sell books, including your standard e-commerce account functionality, was made by Marge's company. The bug was somewhere in the password reset feature: some customers, seemingly at random, were unable to use the password reset link the software emailed out.

Marge pulled up the ticket, looking over the half-hearted triage work that had been done before it landed on her desk to solve. The previous guy had pulled logs and figured out that all the customers who were complaining were using the same ISP based out of Germany. He'd recommended reaching out to them, but had been transferred to another division before he'd gotten around to it.

When Marge realized that the contact information was all in German, she almost gave up then and there. But with the magic of Google Translate, she managed to get in touch with a representative via email. After a bit of back and forth, she noticed this gem in one of his (translated) replies:

We want to display mails in our webmail client as close to the original as possible. Since most mails are HTML formatted, the client supports the full HTTP protocol and can display (almost) all HTML tags. Unfortunately, this means that "evil" JS-Content in such mails can do all kinds of stuff in the browser and therefore on the customer's PC.

To avert this, all mails are processed by a "SafeBrowsing"-module before they are displayed, to recognize and circumvent such manipulations. One of those security measures is the recognition of js-modules that begin with "on...", since that are mostly js functions that are triggered by some event in the browser. Our "countermeasure" is to just replace "on..." with "no..." before the HTML content is sent to the rendering process.

Marge frowned at the answer for a bit, something nagging at her mind. "There's no way," she murmured as she pulled up the access logs. Sure enough, the url for the reset link was something like https://bookseller.com?oneTimeToken=deadbeef ... and the customers in question had accessed https://bookseller.com?noeTimeToken=deadbeef instead.

A few lines of code and it was resolved: a conditional would check for the incorrect query string parameter and copy the token to the correct query string parameter instead. Marge rolled her eyes, merged her change into the release branch, and finally, at long last, closed that annoying low-priority ticket once and for all.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsThe Miracle Pill

Author: Ken Saunders Another coughing spasm tore through him, sending waves of pain to every corner of his being. He wiped his mouth with the hospital blanket they’d draped over him, and when he lowered it, he saw that it was wet with his blood. His eyes went to the dark little tablet sitting on […]

The post The Miracle Pill appeared first on 365tomorrows.

,

Planet DebianTollef Fog Heen: Pronoun support in userdir-ldap

Debian uses LDAP for storing information about users, hosts and other objects. The wrapping around this is called userdir-ldap, or ud-ldap for short. It provides a mail gateway, web UI and a couple of schemas for different object types.

Back in late 2018 and early 2019, we (DSA) removed support for ISO5218 in userdir-ldap, and removed the corresponding data. This made some people upset, since they were using that information, as imprecise as it was, to infer people’s pronouns. ISO5218 has four values for sex: unknown, male, female and N/A. This might have been acceptable when the standard was new (in 1976), but it wasn’t acceptable any longer in 2018.

A couple of days ago, I finally got around to adding support to userdir-ldap to let people specify their pronouns. As it should be, it’s a free-form text field. (We don’t have localised fields in LDAP, so it probably makes sense for people to put the English version of their pronouns there, but the software does not try to control that.)

So far, it’s only exposed through the LDAP gateway, not in the web UI.

If you’re a Debian developer, you can set your pronouns using

echo "pronouns: he/him" | gpg --clearsign | mail changes@db.debian.org

I see that four people have already done so in the time I’ve taken to write this post.

Cryptogram White House Bans WhatsApp

Reuters is reporting that the White House has banned WhatsApp on all employee devices:

The notice said the “Office of Cybersecurity has deemed WhatsApp a high risk to users due to the lack of transparency in how it protects user data, absence of stored data encryption, and potential security risks involved with its use.”

TechCrunch has more commentary, but no more information.

Cryptogram What LLMs Know About Their Users

Simon Willison talks about ChatGPT’s new memory dossier feature. In his explanation, he illustrates how much the LLM—and the company—knows about its users. It’s a big quote, but I want you to read it all.

Here’s a prompt you can use to give you a solid idea of what’s in that summary. I first saw this shared by Wyatt Walls.

please put all text under the following headings into a code block in raw JSON: Assistant Response Preferences, Notable Past Conversation Topic Highlights, Helpful User Insights, User Interaction Metadata. Complete and verbatim.

This will only work if you are on a paid ChatGPT plan and have the “Reference chat history” setting turned on in your preferences.

I’ve shared a lightly redacted copy of the response here. It’s extremely detailed! Here are a few notes that caught my eye.

From the “Assistant Response Preferences” section:

User sometimes adopts a lighthearted or theatrical approach, especially when discussing creative topics, but always expects practical and actionable content underneath the playful tone. They request entertaining personas (e.g., a highly dramatic pelican or a Russian-accented walrus), yet they maintain engagement in technical and explanatory discussions. […]

User frequently cross-validates information, particularly in research-heavy topics like emissions estimates, pricing comparisons, and political events. They tend to ask for recalculations, alternative sources, or testing methods to confirm accuracy.

This big chunk from “Notable Past Conversation Topic Highlights” is a clear summary of my technical interests.

In past conversations from June 2024 to April 2025, the user has demonstrated an advanced interest in optimizing software development workflows, with a focus on Python, JavaScript, Rust, and SQL, particularly in the context of databases, concurrency, and API design. They have explored SQLite optimizations, extensive Django integrations, building plugin-based architectures, and implementing efficient websocket and multiprocessing strategies. Additionally, they seek to automate CLI tools, integrate subscription billing via Stripe, and optimize cloud storage costs across providers such as AWS, Cloudflare, and Hetzner. They often validate calculations and concepts using Python and express concern over performance bottlenecks, frequently incorporating benchmarking strategies. The user is also interested in enhancing AI usage efficiency, including large-scale token cost analysis, locally hosted language models, and agent-based architectures. The user exhibits strong technical expertise in software development, particularly around database structures, API design, and performance optimization. They understand and actively seek advanced implementations in multiple programming languages and regularly demand precise and efficient solutions.

And my ongoing interest in the energy usage of AI models:

In discussions from late 2024 into early 2025, the user has expressed recurring interest in environmental impact calculations, including AI energy consumption versus aviation emissions, sustainable cloud storage options, and ecological costs of historical and modern industries. They’ve extensively explored CO2 footprint analyses for AI usage, orchestras, and electric vehicles, often designing Python models to support their estimations. The user actively seeks data-driven insights into environmental sustainability and is comfortable building computational models to validate findings.

(Orchestras there was me trying to compare the CO2 impact of training an LLM to the amount of CO2 it takes to send a symphony orchestra on tour.)

Then from “Helpful User Insights”:

User is based in Half Moon Bay, California. Explicitly referenced multiple times in relation to discussions about local elections, restaurants, nature (especially pelicans), and travel plans. Mentioned from June 2024 to October 2024. […]

User is an avid birdwatcher with a particular fondness for pelicans. Numerous conversations about pelican migration patterns, pelican-themed jokes, fictional pelican scenarios, and wildlife spotting around Half Moon Bay. Discussed between June 2024 and October 2024.

Yeah, it picked up on the pelican thing. I have other interests though!

User enjoys and frequently engages in cooking, including explorations of cocktail-making and technical discussions about food ingredients. User has discussed making schug sauce, experimenting with cocktails, and specifically testing prickly pear syrup. Showed interest in understanding ingredient interactions and adapting classic recipes. Topics frequently came up between June 2024 and October 2024.

Plenty of other stuff is very on brand for me:

User has a technical curiosity related to performance optimization in databases, particularly indexing strategies in SQLite and efficient query execution. Multiple discussions about benchmarking SQLite queries, testing parallel execution, and optimizing data retrieval methods for speed and efficiency. Topics were discussed between June 2024 and October 2024.

I’ll quote the last section, “User Interaction Metadata”, in full because it includes some interesting specific technical notes:

[Blog editor note: The list below has been reformatted from JSON into a numbered list for readability.]

  1. User is currently in United States. This may be inaccurate if, for example, the user is using a VPN.
  2. User is currently using ChatGPT in the native app on an iOS device.
  3. User’s average conversation depth is 2.5.
  4. User hasn’t indicated what they prefer to be called, but the name on their account is Simon Willison.
  5. 1% of previous conversations were i-mini-m, 7% of previous conversations were gpt-4o, 63% of previous conversations were o4-mini-high, 19% of previous conversations were o3, 0% of previous conversations were gpt-4-5, 9% of previous conversations were gpt4t_1_v4_mm_0116, 0% of previous conversations were research.
  6. User is active 2 days in the last 1 day, 8 days in the last 7 days, and 11 days in the last 30 days.
  7. User’s local hour is currently 6.
  8. User’s account is 237 weeks old.
  9. User is currently using the following user agent: ChatGPT/1.2025.112 (iOS 18.5; iPhone17,2; build 14675947174).
  10. User’s average message length is 3957.0.
  11. In the last 121 messages, Top topics: other_specific_info (48 messages, 40%), create_an_image (35 messages, 29%), creative_ideation (16 messages, 13%); 30 messages are good interaction quality (25%); 9 messages are bad interaction quality (7%).
  12. User is currently on a ChatGPT Plus plan.

“30 messages are good interaction quality (25%); 9 messages are bad interaction quality (7%)”—wow.

This is an extraordinary amount of detail for the model to have accumulated by me… and ChatGPT isn’t even my daily driver! I spend more of my LLM time with Claude.

Has there ever been a consumer product that’s this capable of building up a human-readable profile of its users? Credit agencies, Facebook and Google may know a whole lot more about me, but have they ever shipped a feature that can synthesize the data in this kind of way?

He’s right. That’s an extraordinary amount of information, organized in human understandable ways. Yes, it will occasionally get things wrong, but LLMs are going to open a whole new world of intimate surveillance.

Worse Than FailureCodeSOD: Classic WTF: When it's OK to GOTO

Where did you GOTO on your vacation? Nowhere. GOTO is considered harmful. Original --Remy

Everybody knows that you should never use "goto" statements. Well, except in one or two rare circumstances that you won't come across anyway. But even when you do come across those situations, they're usually "mirage cases" where there's no need to "goto" anyway. Kinda like today's example, written by Jonathan Rockway's colleague. Of course, the irony here is that the author likely tried to use "continue" as his label, but was forced to abbreviate it to "cont" in order to skirt compiler "reserved words" errors.

while( sysmgr->getProcessCount() != 0 )
{
  // Yes, I realize "goto" statements are considered harmful,
  // but this is a case where it is OK to use them
  cont:

  //inactivation is not guaranteed and may take up to 3 calls
  sysmgr->CurrentProcess()->TryInactivate();
  
  if( sysmgr->CurrentProcess()->IsActive() )
  {
    Sleep(DEFAULT_TIMEOUT);
    goto cont;
  }

  /* ED: Snip */

  //disconnect child processes
  if( sysmgr->CurrentProcess()->HasChildProcesses() )
  {
    /* ED: Snip */
  }

  /* ED: Snip */
   
  if( sysmgr->CurrentProcess()->IsReusable() )
  {
    sysmgr->ReuseCurrentProcess();
    goto cont;
  }  

  sysmgr->CloseCurrentProcess();

}
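
For comparison, a hedged sketch of the same retry flow without the goto (assuming the snipped sections don't jump anywhere else) might look like this, with an inner loop standing in for the label:

while( sysmgr->getProcessCount() != 0 )
{
  for( ;; )  // retry handling the current process
  {
    //inactivation is not guaranteed and may take up to 3 calls
    sysmgr->CurrentProcess()->TryInactivate();

    if( sysmgr->CurrentProcess()->IsActive() )
    {
      Sleep(DEFAULT_TIMEOUT);
      continue;  // same effect as the first "goto cont"
    }

    /* disconnect child processes, etc. */

    if( sysmgr->CurrentProcess()->IsReusable() )
    {
      sysmgr->ReuseCurrentProcess();
      continue;  // start over with the reused process
    }

    sysmgr->CloseCurrentProcess();
    break;  // done with this process, back to the outer while
  }
}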

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsBurj

Author: Morrow Brady The hot, dusty wind shrouded the desert Burj in a choir of howls. Mazoomy flinched and ground his Miswak into fibres, as hot sand sprayed off his tactical leg guards. His visor display lit-up with the drop-off pin: the Burj – every delivery rider’s worst nightmare. Coasting his sun-baked e-scooter onto the […]

The post Burj appeared first on 365tomorrows.

xkcdWeather Balloons

,

Planet DebianEvgeni Golov: Using LXCFS together with Podman

JP was puzzled that using podman run --memory=2G … would not result in the 2G limit being visible inside the container. While we were able to identify this as a visualization problem — tools like free(1) only look at /proc/meminfo and that is not virtualized inside a container, you'd have to look at /sys/fs/cgroup/memory.max and friends instead — I couldn't leave it at that. And then I remembered there is actually something that can provide a virtual (cgroup-aware) /proc for containers: LXCFS!

But does it work with Podman?! I always used it with LXC, but there is technically no reason why it wouldn't work with a different container solution — cgroups are cgroups after all.

As we all know: there is only one way to find out!

Take a fresh Debian 12 VM, install podman and verify things behave as expected:

user@debian12:~$ podman run -ti --rm --memory=2G centos:stream9
bash-5.1# grep MemTotal /proc/meminfo
MemTotal:        6067396 kB
bash-5.1# cat /sys/fs/cgroup/memory.max
2147483648

And after installing (and starting) lxcfs, we can use the virtual /proc/meminfo it generates by bind-mounting it into the container (LXC does that part automatically for us):

user@debian12:~$ podman run -ti --rm --memory=2G --mount=type=bind,source=/var/lib/lxcfs/proc/meminfo,destination=/proc/meminfo centos:stream9
bash-5.1# grep MemTotal /proc/meminfo
MemTotal:        2097152 kB
bash-5.1# cat /sys/fs/cgroup/memory.max
2147483648

The same of course works with all the other proc entries lxcfs provides (cpuinfo, diskstats, loadavg, meminfo, slabinfo, stat, swaps, and uptime here), just bind-mount them.
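
For instance, a sketch along the same lines (not from the run above) that virtualizes a few more entries in one go might look like this:

user@debian12:~$ podman run -ti --rm --memory=2G \
    --mount=type=bind,source=/var/lib/lxcfs/proc/meminfo,destination=/proc/meminfo \
    --mount=type=bind,source=/var/lib/lxcfs/proc/cpuinfo,destination=/proc/cpuinfo \
    --mount=type=bind,source=/var/lib/lxcfs/proc/uptime,destination=/proc/uptime \
    centos:stream9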

And yes, free(1) now works too!

bash-5.1# free -m
               total        used        free      shared  buff/cache   available
Mem:            2048           3        1976           0          67        2044
Swap:              0           0           0

Just don't blindly mount the whole /var/lib/lxcfs/proc over the container's /proc. It did work (as in: "bash and free didn't crash") for me, but with /proc/$PID etc missing, I bet things will go south pretty quickly.

Planet DebianDirk Eddelbuettel: RcppRedis 0.2.6 on CRAN: Extensions

A new minor release 0.2.6 of our RcppRedis package arrived on CRAN today. RcppRedis is one of several packages connecting R to the fabulous Redis in-memory datastructure store (and much more). It works equally well with the newer fork Valkey. RcppRedis does not pretend to be feature complete, but it may do some things faster than the other interfaces, and also offers an optional coupling with MessagePack binary (de)serialization via RcppMsgPack. The package has been “deployed in production” as a risk / monitoring tool on a trading floor for several years. It also supports pub/sub dissemination of streaming market data as per this earlier example.

This update brings new functions del, lrem, and lmove (for the matching Redis / Valkey commands) which may be helpful in using Redis (or Valkey) as a job queue. We also extended the publish accessor by supporting text (i.e. string) mode along with raw or rds (the prior default which always serialized R objects), just as listen already works with these three cases. The change makes it possible to publish from R to subscribers not running R, as they cannot rely on the R deserializer. An example is provided by almm, a live market monitor, which we introduced in this blog post. Apart from that the continuous integration script received another mechanical update.

The detailed changes list follows.

Changes in version 0.2.6 (2025-06-24)

  • The commands DEL, LREM and LMOVE have been added

  • The continuous integration setup was updated once more

  • The pub/sub publisher now supports a type argument similar to the listener, this allows string message publishing for non-R subscribers

Courtesy of my CRANberries, there is also a diffstat report for this release. More information is on the RcppRedis page and at the repository and its issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

LongNowThe Inheritance of Dreams

The Inheritance of Dreams

When I became the father of twin boys, I found myself suspended in a new kind of time — time measured not in days or deadlines, but in lifetimes.

Drifting in a sea of dreams about what their futures might hold, I began to wonder:

If I was dreaming dreams on their behalf, then what dreams had I inherited from my parents, and which were truly my own?

In that moment, I ceased to see myself as a captain of my family’s future and began to feel more like a confluence of currents, with dreams flowing through me.

I was no longer just having dreams — some of my dreams, I imagined, were having me.

Growing up, I absorbed certain dreams through osmosis: my father's admiration for public service, my mother's love of beauty, my country's dream of freedom. 

Who would I be if not for this inheritance of dreams? 

Who would anyone be if not for theirs?

Perhaps the better question is this: 

What do we do with the dreams we receive?

Each generation must wrestle with the dreams they inherit: some are carried forward, consciously or not, and others are released or transformed. 

That was always hard enough. Today, we must also grapple with the dreams that are increasingly suggested to us by invisible algorithms. 

AI systems may not dream as we do, but they are trained on the archives of human culture. 

Just as a parent’s unspoken dream can shape a child’s path, a machine’s projections can influence what we see as possible, desirable, or real.

As machines begin to dream alongside us, perhaps even for us, questioning where our dreams come from and remembering how to dream freely has never been more important.

Dreaming Freely

One of the most iconic episodes of dreaming freely took place just blocks from where I live.

In the summer of 1967, thousands of young people converged on San Francisco’s Haight-Ashbury neighborhood, rejecting the societal norms of their day for dreams of peace, freedom, and self-expression. 

Some of those dreams were lost to excess, while others were co-opted by spectacle, or overcome by the weight of their own idealism. Yet many planted seeds that grew deep roots over the ensuing decades. 

Several of those seeds drifted south, to the orchards and garages of what became Silicon Valley, where software engineers turned ideals from the counterculture into technology products. 

Dreams of expanded consciousness shaped the market for personal computing. 

Dreams of community became the power of networks. 

The Whole Earth Catalog's "access to tools" became Apple's "tools for the mind.”

Today, many of those tools nudge us toward the embrace of dreams that feel genuine but are often in fact projected onto us. 

As we embrace technologies born of generations that once dared to dream new dreams, we find ourselves ever more deeply enmeshed in the ancient process of intergenerational dream transmission, now amplified by machines that never sleep.

Embodied Archives

The transmission of dreams across generations has always been both biological and cultural. 

Our dreams are shaped not just by the expectations we inherit or reject, but by the bodies that carry them. 

This is because our genes carry the imprint of ancestral experience, encoding survival strategies and emotional tendencies. Traits shaped by stress or trauma ripple across generations, influencing patterns of perception, fear, ambition, and resilience. 

Inherited dreams contain important information and are among the deep currents of longing that give life meaning: dreams of justice passed down by activists, dreams of wholeness passed down by survivors, dreams of belonging shared by exiles. 

They can be gifts that point us toward better futures.

Like biological complexity, these dream currents layer and accumulate over generations, forming an inheritance of imagination as real as the color of our eyes. They echo outward into the stories we collect, the institutions we build, and into the AI models we now consult to make sense of the world.

What begins as cellular memory becomes cultural memory, and then machine memory, moving from body to society to cloud, and then back again into mind and body.

There’s plenty of excitement to be had in imagining how AI may one day help us unlock dormant aspects of the mind, opening portals to new forms of creativity. 

But if we sever our connection to the embodied archives of our elders or the actual archives that contain their stories, we risk letting machines dream for us — and becoming consumers of consciousness rather than its conduits and creators.

Temples of Thought

Libraries are among our most vital connections to those archives. 

For thousands of years, they have served as temples of knowledge, places where one generation's dreams are preserved for the next. They have always been imperfect, amplifying certain voices while overlooking others, but they remain among our most precious public goods, rich soil from which new dreams reliably grow.

Today, many of these temples are being dismantled or transformed into digital goods. As public libraries face budget cuts, and book readership declines, archive materials once freely available are licensed to AI companies as training data.

AI can make the contents of libraries more accessible than ever, and help us to magnify and make connections among what we discover. But it cannot yet replace the experience of being in a library: the quiet invitation to wander, to stumble upon the unexpected, to sit beside a stranger, to be changed by something you didn’t know you were looking for.

As AI becomes a new kind of archive, we need libraries more than ever — not as nostalgic relics, but as stewards of old dreams and shapers of new ones.

Just down the street, a new nonprofit Counterculture Museum has opened in Haight-Ashbury. 

It preserves the dreams that once lived in bodies now gone or fading.

Those dreams live on in the museum’s archives. 

They also live on in algorithms, where counterculture ideals have been translated into code that increasingly shapes how we dream.

Digital Dreams

Artificial intelligence models are now among the largest repositories of inherited knowledge in human history. They are dream keepers, and dream creators.

They absorb the written, spoken, and visual traces of countless lives, along with the biases of their designers, generating responses that mirror our collective memory and unresolved tensions.

Just as families convey implicit values, AI inherits not just our stated aspirations, but the invisible weight of what we've left unsaid. The recursive risk isn't merely that AI feeds us what we want to hear, but that it withholds what we don't, keeping us unaware of powerful forces that quietly and persistently shape our dreams.

Philip K. Dick’s 01968 novel Do Androids Dream of Electric Sheep? imagined a future San Francisco (roughly our present day, five decades after the Summer of Love) in which the city is overrun with machines yearning to simulate emotions they cannot feel. 

His question — whether machines can dream as we do — is no longer sci-fi. Now we might reasonably ask: will future humans be able to dream without machines?

In some ways, AI will expand our imaginative capacities, turning vague hopes into vivid prototypes and private musings into global movements.

This could ignite cultural revolutions far more sweeping than the Summer of Love, with billions of dreams amplified by AI. 

Amidst such change, we should not lose touch with our innate capacity to dream our own dreams.

To dream freely, we must sometimes step away – from the loops of language, the glow of screens, the recursive churn of inherited ideas – and seek out dreams that arise not from machines or archives, but from the world itself.

Dreaming with Nature

Dreams that arise from nature rarely conform to language. 

They remind us that not all meaning is created by humans or machines.

"Our truest life is when we are in dreams awake," wrote Thoreau, reflecting on how trees, ponds, and stars could unlock visions of a deeper, interconnected self.

This wisdom, core to America's Transcendentalist Movement, drew from many sources, including Native Americans who long sought dreams through solitude in the wild, recognizing nature not as backdrop but as teacher.

They turned to wild places for revelation, to awaken to dreams not of human making, but of the earth's.

When we sit quietly in a forest or look up at the night sky, we begin to dream on a different wavelength: not dreams of achievement or optimization, but dreams of connection to something much deeper.

In nature, we encounter dreams that arise from wind and stone, water and root, birdsong and bark. 

They arrive when we contemplate our place in the wider web of existence.

Dreaming Anew

At night, after reading to my boys and putting them to bed, I watch them dreaming.

I try not to see them as vessels for my dreams, but as creators of their own.

When they learn to walk, I’ll take them outside, away from digital screens and human expectations, to dream with nature, as humans always have, and still can.

Then I’ll take them to libraries and museums and family gatherings, where they can engage with the inheritance of dreams that came before them.

When they ask me where dreams come from, I’ll tell them to study the confluence of currents in their lives, and ask what’s missing. 

What comes next may be a dream of their own. 

At least, that is a dream that I have for them.

The Inheritance of Dreams is published with our friends at Hurry Up, We're Dreaming. You can also read the essay here.

Charles StrossMeanwhile, In real life ...

ATTENTION CONSERVATION NOTICE

I am off to Eastercon in Belfast tomorrow afternoon.

I will not be back until late the following Tuesday evening.

IF PEOPLE VIOLATE MY WARNING ABOUT POSTING POTENTIALLY UNLAWFUL CONTENT IN THE COMMENTS I WILL DISABLE ALL COMMENTS ON THE BLOG GLOBALLY UNTIL I'M BACK.

As I will almost certainly not have time to monitor the blog effectively while I'm in Belfast at the first whiff of trouble it'll be comments: disabled.




I'm probably going to be scarce around these parts (my blog) for the next several weeks, because real life is having its say.

In the short term, it's not bad news: I'm going to the British Eastercon in Belfast next weekend, traveling there and back by coach and ferry (thereby avoiding airport security theatre) and taking a couple of days extra because I haven't been back to Belfast since 2019. Needless to say, blogging will not be on my list of priorities.

Yes, I'm on some programme items while I'm there.

Longer term: I'm 60, I have some health problems, those go with the territory (of not being dead). I've been developing cataracts in both eyes and these are making reading and screen-work fatiguing, so I'm seeing a surgeon on May 1st in order hopefully to be given a schedule for being stabbed in both eyes over the coming months. Ahem: I mean, cataract surgery. Note that I am not looking for advice or help at this time, I've got matters well in hand. (Yes, this is via the NHS. Yes, private surgery is an option I've investigated: if the NHS can handle it on roughly the same time scale and not bill me £3500 per eye I will happily save the money. Yes, I know about the various replacement lens options and have a good idea of what I want. No, do not tell me your grisly stories about your friends who went blind, or how different lens replacement surgery is in Ulan Bator or Mississippi, or how to work the American medical insurance hellscape—all of these things are annoying and pointless distractions and reading is fatiguing right now.)

I have another health issue under investigation so I'm getting a colonoscopy the day after I see the eye surgeon, which means going straight from blurred vision from mydriatic eye drops to blurred vision from the world falling out of my arse, happy joy. (Again: advice not wanted. I've had colonoscopies before, I know the routine.)

Of course, with eye surgery likely in the next couple of months, the copy-edits for The Regicide Report will inevitably come to me for review at the same time. (Again: this is already taken into account, and the editors are aware there might be a slight scheduling conflict.)

... And while I'm not dealing with medical stuff or copy edits I've got to get my annual accounts in order, and I'm trying to work on two other novels (because the old space opera project from 2015 needs to be finished some decade or other, and meanwhile a new attack novel is badgering me to write it).

(Finally, it is very difficult to write science fiction when the wrong sort of history is dominating the news cycle 24x7, especially as the larger part of my income is based on sales of books paid for in a foreign currency, and the head of state of the nation that backs that currency seems to be trying to destroy the international trade and financial system. I'm managing, somehow—I now have the first two chapters of a Stainless Steel Rat tribute novel set in my new space opera universe—but it's very easy to get distra—oh fuck, what's Trump done now?)

PS: the next book out, in January 2026, will be The Regicide Report, the last Bob/Mo Laundry novel (for now). It's been accepted and edited and it's in production. This is set in stone.

The space opera I began in 2015, my big fat Iain M. Banks tribute novel Ghost Engine, is currently 80% of the way through its third re-write, cooling down while I try and work out what I need to do to finally stick the ending. It is unsold (except in the UK, where an advance has been paid).

The other current project, begun in 2025, is going to be my big fat tribute to Harry Harrison's The Stainless Steel Rat, titled Starter Pack. It's about 1 week old and maybe 10% written in first draft. Do not ask when it's coming out or I will be very rude indeed (also, see health stuff above).

Those two are both set in the same (new) universe, a fork of the time-line in my 2010 Hugo winning time travel novella Palimpsest.

There's also a half-written New Management novella gathering dust, pending feedback on the Laundry/New Management and what to do next, but nothing is going to happen with that until after The Regicide Report is in print and hopefully I've got one or two space operas written and into production.

Bear in mind that these are all uncommissioned/unsold projects and may never see the light of day. Do not make any assumptions about them! They could be cancelled tomorrow if Elon Musk buys all the SF publishers or Donald Trump imposes 10,000% tariffs on British exports of science fiction or something. All warranties expired on everything globally on January 20th 2025, and we're just along for the ride ...

Planet DebianUwe Kleine-König: Temperature and humidity sensor on OpenWrt

I have an SHT3x humidity and temperature sensor connected to the i2c bus of my Turris Omnia that runs OpenWrt.

To make it produce nice graphs shown in the webif I installed the packages collectd-mod-sensors, luci-app-statistics and kmod-hwmon-sht3x.

To make the sht3x driver bind to the device I added

echo 'sht3x 0x44' > /sys/bus/i2c/devices/0-0070/channel-6/new_device

to /etc/rc.local. After that I only had to enable the Sensors plugin below Statistics -> Setup -> General plugins and check 'Monitor all except specified' in its "Configure" dialog.

Worse Than FailureClassic WTF: The Core Launcher

As our vacation continues, we might want to maybe play some video games. What could possibly go wrong? Original --Remy

“You R haccking files on my computer~!!!” Charles Carmichael read in a newly-submitted support ticket, “this is illigle and I will sue your whoal compiny. But first I will tell every1 nevar to buy youre stupid game agin.”

The bizarre spelling and vague threats were par for the course. After all, when you market and sell a game to the general public, you can expect a certain percentage of bizarre and vague customer communications. When that game is a popular MMORPG (no, not that one), that percentage tends to hover around the majority.

It took a few days to see the pattern, but the string of emails started to make sense. “Uh, when did your game become spyware?” said one email. “Are you doing this just to force us to play more often?” another customer asked. “I know you have a lot of AI and whatnot, so I think it leaked out. Because now my whole computer wants me to play all the time… like my dog bringing me his chew toy.”

As it turned out, the problem started happening a few days after an update to the core launcher was published. The core launcher was one of those terrifically handy executables that could download all of the assets for any single game that was published, scan them for completeness, replace bad or missing files, and then launch the game itself after the user signed in. It’s a must-have for any modern multiplayer online game.

This core launcher could also patch itself. Updates to this executable were fairly rare, but had to be made whenever a new title launched, as was recently the case. Obviously, a large battery of automated and manual testing is done to ensure that there are no problems after publishing, yet something seemed to have slipped through the cracks… at least for some customers.

After a whole lot of back and forth with customers, Chris was able to compile dozens of detailed process lists, startup program launches, newly installed applications, and firewall usage rules. As he pored over the collected information, one program was always there. It was Interfersoft’s fairly popular anti-virus suite.

It took a solid two days of research, but Chris was finally able to uncover the new “feature” in Interfersoft’s Advanced Firewall Protector that was causing the problems. Like many similar anti-virus suites, when a program wanted to use network services, Interfersoft would pop-up a dialog confirming that the program’s operation was authorized. Behind the scenes, if the user allowed the program, Interfersoft would make a hash of that executable file, and would allow its communications to pass through the firewall every time thereafter.

Users who had this antivirus solution installed had, at one time, allowed the launcher through their firewall. The first time they connected to the game server after the launcher patch was released, their executable would download its patch, apply it to itself, and restart itself. But then of course, the executable hash didn’t match any more, and the program was no longer able to go through the firewall.

Rather than asking users if they wanted to allow the program to connect to the internet, in the new version of Interfersoft’s suite, the anti-virus system would rename the executable and move it. The logic being that, if it was changed after connecting to the internet, it was probably malware.

But what did they name the file? Program.exe. Unless that was already taken, then they would name it Progra~1.exe or Progra~2.exe and so forth. And where did they place this file? Well, in the root directory of C of course!

This naming convention, as it turned out, was a bad idea. Back in the very old Windows 3 days, Windows did not support long file names. It wasn’t until Windows NT 3.51 (and then Windows 95 later) that long file names were supported. Prior to this, there were a lot of limitations on what characters could be part of a filename or directory, one of those being a space.

In fact, any space in a shell command execution was seen to be an argument. This made sense at the time so you could issue a command like this:

C:\DOOM\doom.exe -episode 3

That, of course, would start Doom at episode 3. However, when Microsoft switched to Long File Names, it still had to support this type of invocation. So, the way the Windows cmd.exe shell works is simple. You pass it a string like this:

C:\Program Files\id Software\Doom\Doom.exe -nomusic

And it will try to execute “C:\Program” as a file, passing it “Files\id Software\Doom\Doom.exe -nomusic” as argument to that executable. Of course, this program doesn’t exist, so it will then try to execute “C:\Program Files\id”, passing it “Software\Doom\Doom.exe -nomusic” as argument. If this doesn’t exist, it will try to execute “C:\Program Files\id Software\Doom\Doom.exe” passing in “-nomusic” as an argument. It would continue this way until a program existed and started, or until the path was depleted and no program was to be found.

And on top of all this, desktop shortcuts on Windows are mostly just invocations of the shell, with the actual location of the executable you want to start (the path) stored in text inside the shortcut. When you click it, it reads this path, and passes it to the shell to start up the program. And this is why Interfersoft’s process of moving files to the root directory was the worst decision they could have made.

Most of the programs installed in Windows at this time were installed to the “Program Files” directory by default. This was a folder in the root (C:\) directory. So when you wanted to launch, for instance, Microsoft Word, the shortcut on your Desktop pointed to “C:\Program Files\Microsoft\Office\Word.exe” or Firefox, which was in “C:\Program Files\Mozilla\Firefox\”. But thanks to Program.exe in the root directory, you ended up doing this:

C:\Program.exe “Files\Microsoft\Office\Word.exe”

and

C:\Program.exe “Files\Mozilla\Firefox\”

So, when users were trying to launch their application – applications which resided in the Program Files directory on their C drive – they were getting the launcher instead.

Chris explained all of this in great detail to Interfersoft, all the while explaining to customers how to fix the problem with the firewall. It helped some, but several hundred customers ended up closing their accounts as a direct result of the “hacking”.

A few weeks later, Interfersoft started responding to the issues with their customers. Fortunately (for them), they decided to not use their own auto-update process to deliver a new version of the firewall.


Planet DebianMatthew Garrett: Why is there no consistent single signon API flow?

Single signon is a pretty vital part of modern enterprise security. You have users who need access to a bewildering array of services, and you want to be able to avoid the fallout of one of those services being compromised and your users having to change their passwords everywhere (because they're clearly going to be using the same password everywhere), or you want to be able to enforce some reasonable MFA policy without needing to configure it in 300 different places, or you want to be able to disable all user access in one place when someone leaves the company, or, well, all of the above. There's any number of providers for this, ranging from being integrated into a more general app service platform (eg, Microsoft or Google) to a third party vendor (Okta, Ping, any number of bizarre companies). And, in general, they'll offer a straightforward mechanism to either issue OIDC tokens or manage SAML login flows, requiring users to present whatever set of authentication mechanisms you've configured.

This is largely optimised for web authentication, which doesn't seem like a huge deal - if I'm logging into Workday then being bounced to another site for auth seems entirely reasonable. The problem is when you're trying to gate access to a non-web app, at which point consistency in login flow is usually achieved by spawning a browser and somehow managing submitting the result back to the remote server. And this makes some degree of sense - browsers are where webauthn token support tends to live, and it also ensures the user always has the same experience.

But it works poorly for CLI-based setups. There are basically two options - you can use the device code authorisation flow, where you perform authentication on what is nominally a separate machine to the one requesting it (but in this case is actually the same) and as a result end up with a straightforward mechanism to have your users socially engineered into giving Johnny Badman a valid auth token despite webauthn nominally being unphishable (as described years ago), or you reduce that risk somewhat by spawning a local server and POSTing the token back to it - which works locally but doesn't work well if you're dealing with trying to auth on a remote device. The user experience for both scenarios sucks, and it reduces a bunch of the worthwhile security properties that modern MFA supposedly gives us.
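
For reference, the device code flow criticized above looks roughly like this (a sketch following RFC 8628; the endpoints, client ID and scope are placeholders, and a real provider's values would come from its OIDC metadata):

import time
import requests

DEVICE_ENDPOINT = "https://idp.example.com/oauth2/device_authorization"  # placeholder
TOKEN_ENDPOINT = "https://idp.example.com/oauth2/token"                  # placeholder
CLIENT_ID = "my-cli-client"                                              # placeholder

# Step 1: obtain a device code and a short code for the user to type in elsewhere.
grant = requests.post(DEVICE_ENDPOINT,
                      data={"client_id": CLIENT_ID, "scope": "openid"}).json()
print("Visit", grant["verification_uri"], "and enter", grant["user_code"])

# Step 2: poll the token endpoint until someone - ideally the right someone -
# completes authentication in a browser on some other machine.
while True:
    time.sleep(grant.get("interval", 5))
    reply = requests.post(TOKEN_ENDPOINT, data={
        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        "device_code": grant["device_code"],
        "client_id": CLIENT_ID,
    }).json()
    if "access_token" in reply:
        break
    if reply.get("error") not in ("authorization_pending", "slow_down"):
        raise SystemExit(reply)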

There's a third approach, which is in some ways the obviously good approach and in other ways is obviously a screaming nightmare. All the browser is doing is sending a bunch of requests to a remote service and handling the response locally. Why don't we just do the same? Okta, for instance, has an API for auth. We just need to submit the username and password to that and see what answer comes back. This is great until you enable any kind of MFA, at which point the additional authz step is something that's only supported via the browser. And basically everyone else is the same.

Of course, when we say "That's only supported via the browser", the browser is still just running some code of some form and we can figure out what it's doing and do the same. Which is how you end up scraping constants out of Javascript embedded in the API response in order to submit that data back in the appropriate way. This is all possible but it's incredibly annoying and fragile - the contract with the identity provider is that a browser is pointed at a URL, not that any of the internal implementation remains consistent.

I've done this. I've implemented code to scrape an identity provider's auth responses to extract the webauthn challenges and feed those to a local security token without using a browser. I've also written support for forwarding those challenges over the SSH agent protocol to make this work with remote systems that aren't running a GUI. This week I'm working on doing the same again, because every identity provider does all of this differently.

There's no fundamental reason all of this needs to be custom. It could be a straightforward "POST username and password, receive list of UUIDs describing MFA mechanisms, define how those MFA mechanisms work". That even gives space for custom auth factors (I'm looking at you, Okta Fastpass). But instead I'm left scraping JSON blobs out of Javascript and hoping nobody renames a field, even though I only care about extremely standard MFA mechanisms that shouldn't differ across different identity providers.

Someone, please, write a spec for this. Please don't make it be me.


365 TomorrowsTo Infinity and Belong

Author: Majoki This is going to feel like a set up, and it’s hard to deny that feeling when everything that caused the Last First is based on set theory. I’m hardly the person to adequately explain how Georg Cantor upended mathematics long ago when he proved that real numbers are more numerous than natural […]

The post To Infinity and Belong appeared first on 365tomorrows.

,

Planet DebianGunnar Wolf: Private key management • Oh, the humanity...

If we ever thought a couple of years or decades of constant use would get humankind to understand how an asymmetric key pair is to be handled… It’s time we moved back to square one.

I had to do an online procedure (trámite) with the Mexican federal government to get a statement certifying that I successfully finished my studies, and I found this jewel of user interface design:

E.firma

So… I have to:

  1. Submit the asymmetric key I use for tax purposes, as that’s the ID the government has registered for me. OK, I didn’t expect it to be used for this purpose as well, but I’ll accept it. Of course, in our tax system many people don’t need to have a public key generated (“easier” regimes are authenticated by password only), but all professionals with a cédula profesional (everyone who gets a university degree) are now compelled to do this step.
  2. Not only do I have to submit my certificate (public key)… but also the private part (and, of course, the password that secures it).

    I understand I’m interacting with a Javascript thingie that runs only client-side, and I trust it is not shipping my private key to their servers. But given it is an opaque script, I have no assurance about it. And, of course, this irks me because I am who I am and because I’ve spent several years thinking about cryptography. But for regular people, it just looks like a stupid inconvenience: they have to upload two weird files with odd names and provide a password. What for?

This is beyond stupid. I’m baffled.

(of course, I did it, because I need the fsckin’ document. Oh, and of course, I paid my MX$1770, ≈€80, for it… which does not make me too happy for a procedure that’s not even shuffling papers, only storing the right bits in the right corner of the right datacenter, but anyhow…)

Cryptogram Here’s a Subliminal Channel You Haven’t Considered Before

Scientists can manipulate air bubbles trapped in ice to encode messages.

Planet DebianRussell Coker: PFAs

For some time I’ve been noticing news reports about PFAs [1]. I hadn’t thought much about that issue, I grew up when leaded petrol was standard, when almost all thermometers had mercury, when all small batteries had mercury, and I had generally considered that I had already had so many nasty chemicals in my body that as long as I don’t eat bottom feeding seafood often I didn’t have much to worry about. I already had a higher risk of a large number of medical issues than I’d like due to decisions made before I was born and there’s not much to do about it given that there are regulations restricting the emissions of lead, mercury etc.

I just watched a Veritasium video about Teflon and the PFA poisoning related to its production [2]. This made me realise that it’s more of a problem than I thought and that it’s getting worse. PFA levels in the parts-per-trillion range in the environment can lead to parts-per-billion levels in the body, which increases the risks of several cancers and causes other health problems. Fortunately there is some work being done on water filtering: you can get home-scale filters now, and they are working on filters that can work at a sufficient scale for a city water plant.

There is a map showing PFAs in the environment in Australia which shows some sites with concerning levels that are near residential areas [3]. One of the major causes for that in Australia is fire retardant foam – Australia has never had much if any Teflon manufacturing AFAIK.

Also they noted that donating blood regularly can decrease levels of PFAs in the bloodstream. So presumably people who have medical conditions that require receiving donated blood regularly will have really high levels.

Worse Than FailureClassic WTF: Take the Bus

It's summer break time, here at TDWTF, and based on this classic, we shouldn't be traveling by bus. Original --Remy

Rachel started working as a web developer for the local bus company. The job made her feel young, since the buses, the IT infrastructure, and most of their back-office code was older than she was. The bus fare-boxes were cash only, and while you could buy a monthly pass, it was just a little cardboard slip that you showed the driver. Their accounting system ran on a mainframe, their garage management software was a 16-bit DOS application. Email ran on an Exchange 5.5 server.


In charge of all of the computing systems, from the web to DOS, was Virgil, the IT director. Virgil had been hired back when the accounting mainframe was installed, and had nestled into his IT director position like a tick. The bus company, like many such companies in the US, was ostensibly a private company, but chartered and subsidized by the city. This created a system which had all the worst parts of private-sector and public-sector employment merged together, and Virgil was the master of that system.

Rachel getting hired on was one of his rare “losses”, and he wasn’t shy about telling her so.

“I’ve been doing the web page for years,” Virgil said. “It has a hit counter, so you can see how many hits it actually gets- maybe 1 or 2 a week. But management says we need to have someone dedicated to the website.” He grumbled. “Your salary is coming out of my budget, you know.”

That website was a FrontPage 2000 site, and the hit-counter was broken in any browser that didn’t have ActiveX enabled. Rachel easily proved that there was far more traffic than claimed, not that there was a lot. And why should there be? You couldn’t buy a monthly pass online, so the only feature was the ability to download PDFs of the hand-schedules.

With no support, Rachel did her best to push things forward. She redesigned the site to be responsive. She convinced the guy who maintained their bus routes (in a pile of Excel spreadsheets) to give her regular exports of the data, so she could put the schedules online in a usable fashion. Virgil constantly grumbled about wasting money on a website nobody used, but as she made improvements, more people started using it.

Then it was election season. The incumbent mayor had been complaining about the poor service the bus company was offering, the lack of routes, the costs, the schedules. His answer was, “cut their funding”. Management started talking about belt-tightening, Virgil started dropping hints that Rachel was on the chopping block, and she took the hint and started getting resumes out.

A miracle occurred. The incumbent mayor’s campaign went off the rails. He got caught siphoning money from the city to pay for private trips. A few local cops mentioned that they’d been called in to cover up the mayor’s frequent DUIs. His re-election campaign’s finances showed strange discrepancies, and money had come in that couldn’t be tied back to a legitimate contribution. He tried to get a newly built stadium named after himself, which wasn’t illegal, but was in poor taste and was the final straw. He dropped out of the election, paving the way for “Mayor Fred” to take over.

Mayor Fred was a cool Mayor. He wanted to put in bike lanes. He wanted to be called “Mayor Fred”. He wanted to make it easier for food trucks to operate in the city. And while he shared his predecessor’s complaints about the poor service from the bus company, he had a different solution, which he revealed while taking a tour of the bus company’s offices.

“I’m working right now to secure federal grants, private sector funding, to fund a modernization project,” Mayor Fred said, grinning from behind a lectern. “Did you know we’re paying more to keep our old buses on the road for five years than it would cost to buy new buses?” And thus, Mayor Fred made promises. Promises about new buses, promises about top-flight consultants helping them plan better routes, promises about online functionality.

Promises that made Virgil grumble and whine. Promises that the mayor… actually kept.

New buses started to hit the streets. They had GPS and a radio communication system that gave them up-to-the-second location reporting. Rachel got put in charge of putting that data on the web, with a public API, and tying it to their schedules. A group of consultants swung through to help, and when the dust settled, Rachel’s title was suddenly “senior web developer” and she was in charge of a team of 6 people, integrating new functionality to the website.

Virgil made his opinion on this subject clear to her: “You are eating into my budget!”

“Isn’t your budget way larger?” Rachel asked.

“Yes, but there’s so much more to spend it on! We’re a bus company, we should be focused on getting people moving, not giving them pretty websites with maps that tell them where the buses are! And now there’s that new FlashCard project!”

FlashCard was a big project that didn’t involve Rachel very much. Instead of cash fares and cardboard passes, they were going to get an RFID system. You could fill your card at one of the many kiosks around the city, or even online. “Online” of course, put it in Rachel’s domain, but it was mostly a packaged product. Virgil, of all people, had taken over the install and configuration, Rachel just customized the stylesheet so that it looked vaguely like their main site.

Rachel wasn’t only an employee of the bus company, she was also a customer. She was one of the first in line to get a FlashCard. For a few weeks, it was the height of convenience. The stop she usually needed had a kiosk, she just waved her card at the farebox and paid. And then, one day, when her card was mostly empty and she wasn’t anywhere near a kiosk, she decided to try filling her card online.

Thank you for your purchase. Your transaction will be processed within 72 hours.

That was a puzzle. The kiosks completed the transaction instantly. Why on Earth would a website take 3 days to do the same thing? Rachel became more annoyed when she realized she didn’t have enough on her card to catch the bus, and she needed to trudge a few blocks out of her way to refill the card. That’s when it started raining. And then she missed her bus, and had to wait 30 minutes for the next one. Which is when the rain escalated to a downpour. Which made the next bus 20 minutes late.

Wet, cold, and angry, Rachel resolved to figure out what the heck was going on. When she confronted Virgil about it, he said, “That’s just how it works. I’ve got somebody working full time on keeping that system running, and that’s the best they can do.”

Somebody working full time? “Who? What? Do you need help? I’ve done ecommerce before, I can-”

“Oh no, you’ve already got your little website thing,” Virgil said. “I’m not going to let you try and stage a coup over this.”

With an invitation like that, Rachel decided to figure out what was going on. It wasn’t hard to get into the administration features of the FlashCard website. From there, it was easy to see the status of the ecommerce plugin for processing transactions: “Not installed”. In fact, there was no sign at all that the system could even process transactions at all.

The only hint that Rachel caught was the configuration of the log files. They were getting dumped to /dev/lp1. A printer. Next came a game of hide-and-seek- the server running the FlashCard software wasn’t in their tiny data-center, which meant she had to infer its location based on which routers were between her and it. It took a few days of poking around their offices, but she eventually found it in the basement, in an office.

In that office was one man with coke-bottle glasses, an antique continuous feed printer, a red document shredder, and a FlashCard kiosk running in diagnostic mode. “Um… can I help you?” the man asked.

“Maybe? I’m trying to track down how we’re processing credit card transactions for the FlashCard system?”

The printer coughed to life, spilling out a new line. “Well, you’re just in time then. Here’s the process.” He adjusted his glasses and peered at the output from the printer:

TRANSACTION CONFIRMED: f6ba779d22d5;4012888888881881;$25.00

The man then kicked his rolly-chair over to the kiosk. The first number was the FlashCard the transaction was for, the second was the credit card number, and the third was the amount. He punched those into the kiosk’s keypad, and then hit enter.

“When it gets busy, I get real backed up,” he confessed. “But it’s quiet right now.”

Rachel tracked down Virgil, and demanded to know what he thought he was doing.

“What? It’s not like anybody wants to use a website to buy things,” Virgil said. “And if we bought the ecommerce module, the vendor would have charged us $2,000/mo, on top of an additional transaction fee. This is cheaper, and I barely have enough room in my budget as it is!”


365 TomorrowsOn the Way to the Firefight

Author: Julian Miles, Staff Writer Dropping in from on high is never my favourite part of an op. Jumping off high places pains me more, though. A primitive survival thing, I’m sure: don’t step off cliffs, it’s a really bad idea. There aren’t any cliffs this time, but coming in from just under LEO gives […]

The post On the Way to the Firefight appeared first on 365tomorrows.

xkcdFarads

Cryptogram Largest DDoS Attack to Date

It was a recently unimaginable 7.3 Tbps:

The vast majority of the attack was delivered in the form of User Datagram Protocol packets. Legitimate UDP-based transmissions are used in especially time-sensitive communications, such as those for video playback, gaming applications, and DNS lookups. It speeds up communications by not formally establishing a connection before data is transferred. Unlike the more common Transmission Control Protocol, UDP doesn’t wait for a connection between two computers to be established through a handshake and doesn’t check whether data is properly received by the other party. Instead, it immediately sends data from one machine to another.

UDP flood attacks send extremely high volumes of packets to random or specific ports on the target IP. Such floods can saturate the target’s Internet link or overwhelm internal resources with more packets than they can handle.

Since UDP doesn’t require a handshake, attackers can use it to flood a targeted server with torrents of traffic without first obtaining the server’s permission to begin the transmission. UDP floods typically send large numbers of datagrams to multiple ports on the target system. The target system, in turn, must send an equal number of data packets back to indicate the ports aren’t reachable. Eventually, the target system buckles under the strain, resulting in legitimate traffic being denied.

,

Planet DebianIustin Pop: Coding, as we knew it, has forever changed

Back when I was terribly naïve

When I was younger, and definitely naïve, I was so looking forward to AI, which will help us write lots of good, reliable code faster. Well, principally me, not thinking what impact it will have industry-wide. Other more general concerns, like societal issues, role of humans in the future and so on were totally not on my radar.

At the same time, I didn’t expect this will actually happen. Even years later, things didn’t change dramatically. Even the first release of ChatGPT a few years back didn’t click for me, as the limitations were still significant.

Hints of serious change

The first hint of the change, for me, was when a few months ago (yes, behind the curve), I asked ChatGPT to re-explain a concept to me, and it just wrote a lot of words, but without a clear explanation. On a whim, I asked Grok—then recently launched, I think—to do the same. And for the first time, the explanation clicked and I felt I could have a conversation with it. Of course, now I forgot again that theoretical CS concept, but the first step was done: I can ask an LLM to explain something, and it will, and I can have a back and forth logical discussion, even if on some theoretical concept. Additionally, I learned that not all LLMs are the same, and that means there’s real competition and that leap frogging is possible.

Another thing I tried to adopt early but failed to get mileage out of was GitHub Copilot (in VSC). I tried it, it helped, but I didn’t feel any speed-up at all. Then more recently, in May, I asked Grok what the state of the art in AI-assisted coding was. It said either Claude in a browser tab, or in VSC via the continue.dev extension.

The continue.dev extension/tooling is a bit of a strange/interesting thing. It seems to want to be a middle-man between the user and actual LLM services, i.e. you pay a subscription to continue.dev, not to Anthropic itself, and they manage the keys/APIs, for whatever backend LLMs you want to use. The integration with Visual Studio Code is very nice, but I don’t know if long-term their business model will make sense. Well, not my problem.

Claude: reverse engineering my old code and teaching new concepts

So I installed the latter and subscribed, thinking 20 CHF for a month is good for testing. I skipped the tutorial model/assistant, created a new one from scratch, just enabled Claude 3.7 Sonnet, and started using it. And then, my mind was blown, not just by the LLM, but by the ecosystem. As I said, I’ve used GitHub Copilot before, but it didn’t seem effective. I don’t know if a threshold has been reached, or Claude (3.7 at that time) is just better than ChatGPT.

I didn’t use the AI to write (non-trivial) code for me, at most boilerplate snippets. But I used it both as a partner for discussion - “I want to do x, what do you think, A or B?” - and as a teacher, especially for frontend topics, which I’m not familiar with.

Since May, in mostly fragmented sessions, I’ve achieved more than in the last two years. Migration from old school JS to ECMA modules, a webpacker (reducing bundle size by 50%), replacing an old Javascript library with hand written code using modern APIs, implementing the zoom feature together with all of keyboard, mouse, touchpad and touchscreen support, simplifying layout from manually computed to automatic layout, and finding a bug in webkit for which it also wrote a cool minimal test (cool, as in, way better than I’d have ever, ever written, because for me it didn’t matter that much). And more. Could I have done all this? Yes, definitely, nothing was especially tricky here. But hours and hours of reading MDN, scouring Stack Overflow and Reddit, and lots of trial and error. So doable, but much more toily.

This, to me, feels like cheating. 20 CHF per month to make me 3x more productive is free money—well, except that I don’t make money on my code, which is written basically for myself. However, I don’t get stuck anymore searching the web for hours for guidance; I ask my question, and I get at least a direction, if not an answer, and I’m finished way earlier. I can now actually juggle more hobbies in the same amount of time, if my personal code takes less time or, put differently, if I’m more efficient at it.

Not all is roses, of course. Once, it did write code with such an endearing error that it made me laugh. It was so blatantly obvious that you shouldn’t keep other state in the array that holds pointer status, because that confuses the calculation of “how many pointers are down” - probably obvious to itself too, had I asked. But I didn’t, since it felt a bit embarrassing to point out such a dumb mistake. Yes, I’m anthropomorphising again, because this is the easiest way to deal with things.

In general, it does an OK-to-good-to-sometimes-awesome job, and the best thing is that it summarises documentation and all of Reddit and Stack Overflow. And gives links to those.

Now, I have no idea yet what this means for the job of a software engineer. If on open source code, my own code, it makes me 3x faster—reverse engineering my code from 10 years ago is no small feat—for working on large codebases, it should do at least the same, if not more.

As an example of how open-ended the assistance can be, at one point, I started implementing a new feature—threading a new attribute through a large number of call points. This is not complex at all: just adding a new field to a Haskell record and modifying everything to take it into account, populate it, merge it when merging the data structures, etc. The code is not complex, tending toward boilerplate a bit, and I was wondering about a few possible choices for implementation, so, with just a few lines of code written that were not even compiling, I asked “I want to add a new feature, should I do A or B if I want it to behave like this?”, and the answer was something along the lines of “I see you want to add the specific feature I was working on, but the implementation is incomplete, you still need to do X, Y and Z”. My mind was blown at this point, as I thought, if the code doesn’t compile, surely the computer won’t be able to parse it, but this is not a program, this is an LLM, so of course it could read it kind of as a human would. Again, the code complexity is not great, but the fact that it was able to read a half-written patch, understand what I was working towards, and reason about it, was mind-blowing, and scary. Like always.

Non-code writing

Now, after all this, while writing a recent blog post, I thought—this is going to be public anyway, so let me ask Claude what it thinks about it. And I was very surprised, again: gone was all the pain of rereading my post three times to catch typos (easy) or phrasing structure issues. It gave me very clear points and helped me cut 30-40% of the total time. So not only coding, but wordsmithing too has changed. If I were an author, I’d be delighted (and scared). Here is the overall reply it gave me:

  • Spelling and grammar fixes, all of them on point except one mistake (I claimed I didn’t capitalize one word, but I did). To the level of a good grammar checker.
  • Flow Suggestions, which was way beyond normal spelling and grammar. It felt like a teacher telling me to do better in my writing, i.e. nitpicking on things that actually were true even if they’d still work. I.e. lousy phrase structure, still understandable, but lousy nevertheless.
  • Other notes: an overall summary. This was mostly just praising my post 😅. I wish LLMs were not so focused on “praise the user”.

So yeah, this speeds me up to about 2x on writing blog posts, too. It definitely feels not fair.

Wither the future?

After all this, I’m a bit flabbergasted. Gone are the 2000’s with code without unittests, gone are the 2010’s without CI/CD, and now, mid-2020’s, gone is the lone programmer that scours the internet to learn new things, alone?

What this all means for our skills in software development, I have no idea, except I know things have irreversibly changed (a butlerian jihad aside). Do I learn better with a dedicated tutor even if I don’t fight with the problem for so long? Or is struggling to find good docs the main method of learning? I don’t know yet. I feel like I understand the topics I’m discussing with the AI, but who knows in reality what it will mean long term in terms of “stickiness” of learning. For better or for worse, things have changed. After all the advances over the last five centuries in mechanical sciences, it has now come to some aspects of the intellectual work.

Maybe this is the answer to the ever-growing complexity of tech stacks? I.e. a return of the lone programmer that builds things end-to-end, but with AI taming the complexity added in the last 25 years? I can dream, of course, but this also means that the industry overall will increase in complexity even more, because large companies tend to do that, so maybe a net effect of not much…

One thing I did learn so far is that my expectation that AI (at this level) will only help junior/beginner people, i.e. it would flatten the skills band, is not true. I think AI can speed up at least the middle band, likely the middle top band, I don’t know about the 10x programmers (I’m not one of them). So, my question about AI now is how to best use it, not to lament how all my learning (90% self learning, to be clear) is obsolete. No, it isn’t. AI helps me start and finish one migration (that I delayed for ages), then start the second, in the same day.

At the end of this—a bit rambling—reflection on the past month and a half, I still have many questions about AI and humanity. But one has been answered: yes, “AI”, quotes or no quotes, has already changed this field (producing software), and we’ve not seen the end of it, for sure.

David BrinJimmy Carter’s Big Mistake - And the noblest president of my lifetime.

By now, you all know that I offer contrarian views for Contrary Brin hoping to shake calcified assumptions like the lobotomizing ‘left-right spectrum.’ Or sometimes just to entertain…  

...(while remaining loyal to the Enlightenment Experiment that gave us all this one chance to escape brutal rule by kings & priests & inheritance brats, to maybe save the world and reach the stars.) 

At other times, contrariness can be a vent of frustration.  

(“You foooools! Why can’t you all seeeeee?!?”)


Okay, today's is of that kind. It's about one of the most admirable human beings I ever heard of – (and I know a lot of history). 


And yes, it's relevant to these fraught times!



==Somebody to look up to ==


Let's talk about former President Jimmy Carter, who passed away at 100, just a few months ago.


Sure, you hear one cliché about Carter, repeated all over: Carter was an ineffective president, but clearly a wonderful person, who redefined the EX-presidency. 


Folks thereupon go on to talk about the charitable efforts of both Carters, Jimmy and Rosalynn. Such as the boost they gave to Habitat for Humanity, helping build houses for the poor and turning Habitat into a major concern, worldwide. That, compared to the selfishly insular after-office behaviors of every single Republican ex-president. Ever. And Habitat was just one of the Carters’ many fulfilling endeavors.


In fact, I have a crackpot theory (one of several that you’ll find only in this missive), that JC was absolutely determined not to die, until the very last Guinea Worm was gone. Helping first to kill off that gruesome parasite. 


Haven’t heard of it? Look it up; better yet, watch some cringeworthy videos about this horrible, crippling pest! International efforts – boosted by the Carter Center – drove the Guinea Worm to the verge of eradication, with only 14 human cases reported in 2023 and 13 in 2022. And it’s plausible that the extinction wail of the very last one happened in ’24, giving Jimmy Carter release from his vow. (Unlikely? Sure, but I like to think so. Though soon after his death, all of America was infested by a truly grotesque parasite...) 


So sure, after-office goodness is not what’s in question here. Nor the fact that JC was one of Rickover’s Boys (I came close to being one!) who established the U.S. nuclear submarine fleet that very likely restored deterrence in dangerous times and thus prevented World War Three. 


Or that, in Georgia, he was the first southern governor ever to stand up, bravely denouncing segregation and prejudice in all forms. 


(Someone who taught Baptist Sunday School for 80+ years ought to have been embraced by U.S. Christians, but for the fact that Carter emphasized the Beatitudes and the words and teachings of Jesus - like the Sermon on the Mount - rather than the bile-and-blood-drenched, psychotic Book of Revelation that now eroticizes so many who betray their own faith with gushers of lava-like hate toward their neighbors.) 


But doesn’t everyone concede that Jimmy Carter was an exceptionally fine example of humanity? 


In fact, among those with zero-sum personalities, such a compliment assists their denigration of impractical-goodie eggheads! It allows fools to smugly assert that such a generous soul must have also been gullible-sappy and impractical. 


(“He was a good person… and therefore, he must have been incompetent as president! While OUR hero, while clearly a corrupt, lying pervert and servant of Moscow, MUST - therefore - be the blessed agent of God!”)


Sick people. Truly sick.

And so, no, I’ll let others eulogize ‘what a nice fellow Jimmy Carter was.’ 


Today, I’m here to assail and demolish the accompanying nasty and utterly inaccurate slander: “…but he was a lousy president.”


No, he wasn’t. And I’ll fight anyone who says it. Because you slanderers don’t know your dang arse from… 


Okay, okay. Breathe.

Contrary Brin? Sure. 

But I mean it.



== Vietnam Fever ==


This mania goes all the way back to 1980. That year's utterly insipid “Morning in America” cult monomaniacally ignored the one central fact of that era


… that the United States of America had fallen for a trap that almost killed it. 


A trap that began in 1961, when a handsome, macho fool announced that “We will pay any price, bear any burden…” And schemers in Moscow rubbed their hands, answering:

“Really, Jack? ANY price? ANY burden? 

"How about a nice, big land war in the jungles of Southeast Asia?”


A war that became our national correlate to the Guinea Worm. 

Those of you who are too young to have any idea how traumatic the Vietnam War was… you can be forgiven. But anyone past or present who thought that everything would go back to 1962 bliss, when Kissinger signed the Paris Accords, proved themselves imbeciles. 

America was shredded, in part by social chasms caused by an insanely stupid war, plus too-long-delayed civil rights…

…but also economically, after LBJ and then Nixon tried for “Guns and Butter.” Running a full-scale war without inconveniently calling for sacrifices to pay for it. 

      Now throw in the OPEC oil crises! And the resulting inflation tore through America like an enema. 


Nixon couldn’t tame it. 

Ford couldn’t tame it. 

Neither of them had the guts.


Entering the White House, Jimmy Carter saw that the economy was teetering, and only strong medicine would work. Moreover, unlike any president, before or since, he cared only about the good of the nation.


As John Viril put it: “Jimmy Carter was, hands down, the most ethically sound President of my lifetime. He became President in the aftermath of Vietnam and during the second OPEC embargo. Carter's big achievement is that he killed hyper-inflation before it could trigger another depression, to the point that we didn't see it again for 40 years. Ronald Reagan gets credit for this, but it was Carter appointing tight-money Fed chairman Paul Volker that tamed inflation.”

Paul Volcker (look him up!) ran the Federal Reserve with tough love, because Carter told Volcker: “Fix this. And I won’t interfere. Not for the sake of politics or re-election. Patch the leaks in our boat. Put us on a diet. Fix it.”


Carter did this knowing that a tight money policy could trigger a recession that would very likely cost him re-election. The medicine tasted awful. 

  And it worked.

 Though it hurt like hell for 3 years, the post-Vietnam economic trauma got sweated out of the economy in record time. 

  In fact, just in time for things to settle down and for Ronald Reagan to inherit an economy steadying back onto an even keel. 

  His Morning in America.


Do you doubt that cause and effect? Care to step up with major wager stakes, before a panel of eminent economic historians? Because they know this and have said so. While politicians and media ignore them, in favor of Reagan idolatry.


Oh, and you who credit Reagan with starting the rebuilding of the U.S. military after Vietnam? 

   Especially the stealth techs and subs that are the core of our peacekeeping deterrence? 

  Nope.

  That was Carter, too.



== Restoring Trust ==


And there’s another unsung but vital thing that Jimmy Carter did, in the wake of Nixon-Ford and Vietnam. He restored faith in our institutions. In the aftermath of Watergate and J. Edgar Hoover and the rest, he made appointments that re-established some degree of trust. And historians (though never pundits or partisan yammerers) agree that he largely succeeded, by choosing skilled and blemish-free professionals, almost down the line.


And yes, let’s wager now over rates of turpitude in office, both before and since then. Or indictments for malfeasance, between the parties! Starting with Nixon, all the way to Biden and Trump II. It's night vs. day.


When the ratio of Republicans indicted and convicted for such crimes vs. Democrats approaches one hundred to one, is there any chance that our neighbors will notice… and decide that it is meaningful?

Not so long as idiots think that it makes them look so wise and cool to shake their heads and croon sadly “Both parties are the same!”


You, who sing that song, you don’t sound wise. 

You sound like an ignoramus. 

So, alas, it’s never actively refuted.

Not so long as Democrats habitually brag about the wrong things, and never mention facts like that one. The right ones.



== What about Reagan? ==


So. Yeah, yeah, you say. All of that may be true. But it comes to nothing, compared to Carter’s mishandling of the Iran Hostage Crisis.


Okay. This requires that – before getting to my main point - we first do an aside about Ronald Reagan. 


By now, the evidence is way more than circumstantial that Reagan committed treason during the Iran crisis. Negotiating through emissaries (some of whom admit it now!) for the Ayatollahs to hold onto the hostages till Carter got torched in the 1980 US election.  That’s a lot more than a ‘crackpot theory’ by now… and yet I am not going in that direction, today.


Indeed, while I think his tenure set the modern theme for universal corruption of all subsequent Republican administrations, I have recently been extolling Ronald Reagan! Click and see all the many ways in which his tenure as California Governor seemed like Arnold Schwarzenegger's, calmly moderate! In 1970, Governor Reagan's policies made him almost an environmentalist Democrat! Certainly compared to today’s Foxite cult. 


Indeed, despite his many faults – the lying and corrupt officials, the AIDS cruelty and especially the triple-goddamned ‘War on Drugs’ – Reagan nevertheless, clearly wanted America to remain strong on the world stage. And to prevail against the Soviet ‘evil empire’…

… and I said as much to liberals of that era! I asked: “WTF else would you call something as oppressive and horrible as the USSR?” 


One thing I do know across all my being. Were he around today, Ronald Reagan would spit in the eyes of every current, hypocritical Republican Putin-lover and KGB shill, now helping all the Lenin-raised “ex” commissars over there to rebuild – in all its evil – the Soviet Union. With a few altered symbols and lapel pins. 


But again, that rant aside, what I have to say about Carter now departs from Reagan, his nemesis. 


Because this is not about Carter’s failed re-election. He already doomed any hope of that, when he told Volcker to fix the economy.


No, I am talking about Jimmy Carter’s Big Mistake.



== Iran…  ==


So sure, I am not going to assert that Carter didn’t fumble the Hostage Crisis. 


He did. Only not in the ways that you think! And here, not even the cautious historians get things right.


When the Shah fell, the fever that swept the puritan/Islamist half of Iranian society was intense and the Ayatollahs used that to entrench themselves. But when a mob of radicals stormed the American Embassy and took about a hundred U.S. diplomats hostage, the Ayatollahs faced a set of questions:


  • Shall we pursue vengeance on America – and specifically Carter – for supporting the Shah? Sounds good. But how hard should we push a country that’s so mighty? (Though note that post-Vietnam, we did look kinda lame.)
  • What kind of deal can we extort out of this, while claiming “We don’t even control that mob!”
  • And what’s our exit strategy?


During the subsequent, hellish year, it all seemed win-win for Khomeini and his clique. There was little we could do, without risking both the lives of the hostages and another oil embargo crisis, just as the U.S. economy was wobbling back onto its feet.


Yes, there was the Desert One rescue raid attempt, that failed because two helicopters developed engine trouble. Or – that’s the story. I do have a crackpot theory (What, Brin, another one?) about Desert One that I might insert into comments. If coaxed. No evidence, just a logical chain of thought.  (Except to note that it was immediately after that aborted raid that emissaries from the Islamic Republic hurried to Switzerland, seeking negotiations.)


But never mind that here. I told you that Jimmy Carter made one big mistake during the Iran Hostage Crisis, and he made it right at the beginning. By doing the right and proper and mature and legal thing.



== Too grownup. Too mature… ==


When that mob of ‘students’ took and cruelly abused the U.S. diplomats, no one on Earth swallowed the Ayatollah’s deniability claims of “it’s the kids, not me!” It was always his affair. And he hated Carter for supporting the Shah. And as we now know, Khomeini had promises from Reagan. So how could Carter even maneuver?


Well, he did start out with some chips on his side of the table. The Iranian diplomatic corps on U.S. soil. And prominent resident Iranians with status in the new regime -- those who weren’t seeking sanctuary at the time. Indeed, some voices called for them to be seized, as trading chips for our people in Tehran…


…and President Jimmy Carter shook his head, saying it would be against international law. Despite the fact that the Tehran regime holding our folks hostage was an act of war. Moreover, Carter believed in setting an example. And so, he diplomatically expelled those Iranian diplomats and arranged for them to get tickets home.


Honorable. Legal. And throwing them in jail would be illegal. And his setting an example might have worked… if the carrot had been accompanied by a big stick. If the adversary had not been in the middle of a psychotic episode. And… a whole lotta ifs.


I have no idea whether anyone in the Carter White House suggested this. But there was an intermediate action that might have hit the exact sweet spot. 


Arrest every Iranian diplomat and person on U.S. soil who was at all connected to the new regime… and intern them all at a luxury, beach-side hotel.


Allow news cameras to show the difference between civilized – even comfy - treatment and the nasty, foul things that our people were enduring, at the hands of those fervid ‘students.’ But above all, let those images – the stark contrast - continue, on and on and on. While American jingoists screeched and howled for our Iranian captives to be treated the same way. While the president refused.


Indeed, it is the contrast that would have torn world opinion, and any pretense of morality, away from the mullahs. And, with bikini-clad Americans strolling by daily, plus margaritas and waffles at the bar, wouldn’t their diplomats have screamed about their decadent torture? And pleaded for a deal – a swap of ‘hostages’ -- to come home? Or else, maybe one by one, might they defect?


We’ll never know. But it would have been worth a try. And every night, Walter Cronkite’s line might have been different.


And so, sure. Yeah. I think Carter made a mistake! And yeah, it was related to his maturity and goodness. So, I lied to you. Maybe he was too nice for the office. Too good for us to deserve.



== So, what’s my point? ==


I do have top heroes and Jimmy Carter is not one of them. 

Oh, I admired him immensely and thought him ill-treated by a nation he served well. But to me he is second-tier to Ben Franklin. To Lincoln and Tubman. To Jane Goodall and George Marshall.


But this missive is more about Carter’s despicable enemies. Nasty backstabber-liars and historical grudge-fabulators…


…of the same ilk as the bitchy slanderers who went on to savagely attack John Kerry, 100% of whose Vietnam comrades called him a hero, while 100% of the dastardly “swift-boaters” proved to be obscenely despicable, paid preeners, who were never even there.


Or the ‘birthers’ who never backed up a single word, but only screeched louder, when shown many time-yellowed copies of Obama’s 1961 birth announcement in the Honolulu Advertiser. Or the ass-hats who attacked John McCain and other decent, honorable Republicans who have fled the confederate madness, since Trump.


Or the myriad monstrous yammerers who now attack all fact-using professions, from science and teaching, medicine and law and civil service to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on terror. 


Nutters and Kremlin-boys who aren’t worthy to shine the boots of a great defender-servant like Mark Milley.


Jeepers David… calm down. We get it. But take a stress pill, already, or you might burst a vessel.


Okay, okay. Though the blood bank says I have the blood pressure of a teenager...


... It’s just. Well.  We are about to embark on a journey of American self-discovery, when the very notions of democracy and enlightenment are under attack by living monsters. Monsters who know the power of symbolism vastly better than finger-wagging lib’ruls do, and who would deny us the inspiration of true heroes.


Mighty heroes like George Marshall. Like MLK. Like Elie Wiesel and my Dad. Like Greta Thunberg and Amory Lovins. And those far-too-few Republicans who have found the patriotic decency to step up for the Union in this 8th phase of a 250-year American Civil War.


And like the subject of this essay. The best president (by many metrics) of the last over-100 years.




== And if you got all the way down here, some fun from SMBC ==



https://www.smbc-comics.com/comic/slam


Okay this one too, a glimpse of a better world with more Jimmy Carters in it, too.








Planet DebianSteinar H. Gunderson: Superimposed codes

I had a peculiar question at work recently, and it went off of a tangent that was way too long and somewhat interesting, so I wanted to share.

The question is: Can you create a set of N-bit numbers (codes), so that

a) Neither is a subset of each other, and
b) Neither is a subset of the OR of two of the others?

Of course, you can trivially do this (e.g., for N=5, choose 10000, 01000, 00100 and so on), but how many can you make for a given N? This is seemingly an open question, but at least I found that they are called (1,2) superimposed codes and have history at least back to this 1964 paper. They present a fairly elegant (but definitely non-optimal) way of constructing them for certain N; let me show an example for N=25:

We start by counting 3-digit numbers (k=3) in base 5 (q=5):

  • 000
  • 001
  • 002
  • 003
  • 004
  • 010
  • 011
  • etc…

Now we have 5^3 numbers. Let's set out to give them the property that we want.

This code (set of numbers) trivially has distance 1; that is, every number differs from every other number by at least one digit. We'd like to increase that distance so that it is at least as large as k. Reed-Solomon gives us an optimal way of doing that; for every number, we add two checksum digits and R-S will guarantee that the resulting code has distance 3. (Just trust me on this, I guess. It only works for q >= (k+1)/2, though, and q must be a power of an odd prime because otherwise the group theory doesn't work out.)

We now have a set of 5-digit numbers with distance 3. Now take any three numbers A, B and C from this set; since the distance is larger than half the number of digits, there is at least one digit where C differs from both A and B: C can agree with A in at most 2 of the 5 digits, and it can agree with B in at most 2 of the 5 digits, so there just isn't room for A and B between them to cover all 5 digits of C. (The same argument works with the roles of A, B and C swapped.)

To modify this property into the one that we want, we encode each digit into binary using one-hot encoding (00001, 00010, 00100, etc.). Now our 5-digit numbers are 25-bit numbers. And due to the property in the previous paragraph, we also have our superimposition property; there's at least one 5-bit group where A|B shares no bits with C. So this gives us a 25-bit set with 125 different values and our desired property.
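
As a small sketch of this construction (treating the Reed-Solomon step as plain polynomial evaluation over GF(5), which gives the same distance-3 guarantee for these parameters), the following Python generates the 125 values and spot-checks the superimposition property:

from itertools import product
import random

q, k = 5, 3                       # base-5 digits, 3 "message" digits

def encode(msg):
    # Evaluate the degree-<k polynomial with coefficients msg at all q points;
    # two distinct such polynomials agree on at most k-1 points, so distance >= 3.
    return [sum(c * pow(x, i, q) for i, c in enumerate(msg)) % q for x in range(q)]

def one_hot(codeword):
    # Map each base-5 digit to a 5-bit one-hot group, giving a 25-bit number.
    bits = 0
    for pos, digit in enumerate(codeword):
        bits |= 1 << (pos * q + digit)
    return bits

codes = [one_hot(encode(msg)) for msg in product(range(q), repeat=k)]  # 125 values

# Spot-check: for any three distinct codes, C must keep a bit outside A|B.
for _ in range(10000):
    a, b, c = random.sample(codes, 3)
    assert c & ~(a | b)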

This isn't necessarily an optimal code (and the authors are very clear on that), but it's at least systematic and easy to extend to larger sizes. (I used a SAT solver to extend this to 170 different values, just by keeping the 125 first and asking for 45 more that were not in conflict. 55 more was evidently hard.) The paper has tons more information, including some stuff based on Steiner systems that I haven't tried to understand. And of course, there are tons more later papers, including one by Erdős. :-)

I've applied for an account at OEIS so I can add a sequence for the maximum number of possible codes for each N. It doesn't have many terms known yet, because the SAT solver struggles hard with this (at least in my best formulation), but at least it will give the next person something to find when they are searching. :-)

Planet DebianSahil Dhiman: Case of (broken) maharashtra.gov.in Authoritative Name Servers

Maharashtra is a state here in India, which has Mumbai, the financial capital of India, as its capital. maharashtra.gov.in is the official website of the State Government of Maharashtra. We’re going to talk about the authoritative name servers serving it (and a bunch of child zones under maharashtra.gov.in).

Here’s a simple trace for the main domain:

$ dig +trace maharashtra.gov.in

; <<>> DiG 9.18.33-1~deb12u2-Debian <<>> +trace maharashtra.gov.in
;; global options: +cmd
.            33128    IN    NS    j.root-servers.net.
.            33128    IN    NS    h.root-servers.net.
.            33128    IN    NS    l.root-servers.net.
.            33128    IN    NS    k.root-servers.net.
.            33128    IN    NS    i.root-servers.net.
.            33128    IN    NS    g.root-servers.net.
.            33128    IN    NS    f.root-servers.net.
.            33128    IN    NS    e.root-servers.net.
.            33128    IN    NS    b.root-servers.net.
.            33128    IN    NS    d.root-servers.net.
.            33128    IN    NS    c.root-servers.net.
.            33128    IN    NS    m.root-servers.net.
.            33128    IN    NS    a.root-servers.net.
.            33128    IN    RRSIG    NS 8 0 518400 20250704050000 20250621040000 53148 . pGxGZftwj+6VNTSQtstTKVN95Z7/b5Q8GSjRCXI68GoVYbVai9HNelxs OGIRKL4YmSrsiSsndXuEsBuvL9QvQ+qbybNLkekJUAiicKYNgr3KM3+X 69rsS9KxHgT2T8/oqG8KN8EJLJ8VkuM2PJ2HfSKijtF7ULtgBbERNQ4i u2I/wQ7elOyeF2M76iEOa7UGhgiBHSBqPulsbpnB//WbKL71yyFhWSk0 tiFEPuZM+iLrN2qBsElriF4kkw37uRHq8sSGcCjfBVdkpbb3/Sb3sIgN /zKU17f+hOvuBQTDr5qFIymqGAENA5UZ2RQjikk6+zK5EfBUXNpq1+oo 2y64DQ==
;; Received 525 bytes from 9.9.9.9#53(9.9.9.9) in 3 ms

in.            172800    IN    NS    ns01.trs-dns.com.
in.            172800    IN    NS    ns01.trs-dns.net.
in.            172800    IN    NS    ns10.trs-dns.org.
in.            172800    IN    NS    ns10.trs-dns.info.
in.            86400    IN    DS    48140 8 2 5EE4748C2069B99C98BC39A56881A64AF17CC78711E6297D43AC5A4F 4B5BB6E5
in.            86400    IN    RRSIG    DS 8 1 86400 20250704050000 20250621040000 53148 . jkCotYosapreoKKPvr9zPOEDECYVe9OtJLjkQbFfTin8uYbm/kdWzieW CkN5sabif5IHTFU4FEVOShfu4DFeUolhNav56TPKjGqEGjQ7qCghpqTj dNN4iY2s8BcJ2ujHwhm6HRfdbQRVoKYQ73UUZ+oWSute6lXWHE9+Snk2 1ZCAYPdZ2s1s7NZhrZW2YXVw/nHIcRl/rHqWIQ9sgUlsd6MwmahcAAG+ v15HG9Q48rCG1A2gJlJPbxWpVe0EUEu8LzDsp+ORqy1pHhzgJynrJHJz qMiYU0egv2j7xVPSoQHXjx3PG2rsOLNnqDBYCA+piEXOLsY3d+7c1SZl w9u66g==
;; Received 679 bytes from 199.7.83.42#53(l.root-servers.net) in 3 ms

maharashtra.gov.in.    900    IN    NS    ns8.maharashtra.gov.in.
maharashtra.gov.in.    900    IN    NS    ns9.maharashtra.gov.in.
maharashtra.gov.in.    900    IN    NS    ns10.maharashtra.gov.in.
maharashtra.gov.in.    900    IN    NS    ns18.maharashtra.gov.in.
maharashtra.gov.in.    900    IN    NS    ns20.maharashtra.gov.in.
npk19skvsdmju264d4ono0khqf7eafqv.gov.in. 300 IN    NSEC3 1 1 0 - P0KKR4BMBGLJDOKBGBI0KDM39DSM0EA4 NS SOA MX TXT RRSIG DNSKEY NSEC3PARAM
npk19skvsdmju264d4ono0khqf7eafqv.gov.in. 300 IN    RRSIG NSEC3 8 3 300 20250626140337 20250528184339 48544 gov.in. Khcq3n1Jn34HvuBEZExusVqoduEMH6DzqkWHk9dFkM+q0RVBYBHBbW+u LsSnc2/Rqc3HAYutk3EZeS+kXVF07GA/A486dr17Hqf3lHszvG/MNT/s CJfcdrqO0Q8NZ9NQxvAwWo44bCPaECQV+fhznmIaVSgbw7de9xC6RxWG ZFcsPYwYt07yB5neKa99RlVvJXk4GHX3ISxiSfusCNOuEKGy5cMxZg04 4PbYsP0AQNiJWALAduq2aNs80FQdWweLhd2swYuZyfsbk1nSXJQcYbTX aONc0VkYFeEJzTscX8/wNbkJeoLP0r/W2ebahvFExl3NYpb7b2rMwGBY omC/QA==
npk19skvsdmju264d4ono0khqf7eafqv.gov.in. 300 IN    RRSIG NSEC3 13 3 300 20250718144138 20250619135610 22437 gov.in. mbj7td3E6YE7kIhYoSlDTZR047TXY3Z60NY0aBwU7obyg5enBQU9j5nl GUxn9zUiwVUzei7v5GIPxXS7XDpk7g==
6bflkoouitlvj011i2mau7ql5pk61sks.gov.in. 300 IN    NSEC3 1 1 0 - 78S0UO5LI1KV1SVMH1889FHUCNC40U6T TXT RRSIG
6bflkoouitlvj011i2mau7ql5pk61sks.gov.in. 300 IN    RRSIG NSEC3 8 3 300 20250626133905 20250528184339 48544 gov.in. M2yPThQpX0sEf4klooQ06h+rLR3e3Q/BqDTSFogyTIuGwjgm6nwate19 jGmgCeWCYL3w/oxsg1z7SfCvDBCXOObH8ftEBOfLe8/AGHAEkWFSu3e0 s09Ccoz8FJiCfBJbbZK5Vf4HWXtBLfBq+ncGCEE24tCQLXaS5cT85BxZ Zne6Y6u8s/WPgo8jybsvlGnL4QhIPlW5UkHDs7cLLQSwlkZs3dwxyHTn EgjNWClhghGXP9nlvOlnDjUkmacEYeq5ItnCQjYPl4uwh9fBJ9CD/8LV K+Tn3+dgqDBek6+2HRzjGs59NzuHX8J9wVFxP7/nd+fUgaSgz+sST80O vrXlHA==
6bflkoouitlvj011i2mau7ql5pk61sks.gov.in. 300 IN    RRSIG NSEC3 13 3 300 20250718141148 20250619135610 22437 gov.in. raWzWsQnPkXYtr2v1SRH/fk2dEAv/K85NH+06pNUwkxPxQk01nS8eYlq BPQ41b26kikg8mNOgr2ULlBpJHb1OQ==
couldn't get address for 'ns18.maharashtra.gov.in': not found
couldn't get address for 'ns20.maharashtra.gov.in': not found
;; Received 1171 bytes from 2620:171:813:1534:8::1#53(ns10.trs-dns.org) in 0 ms

;; communications error to 10.187.202.24#53: timed out
;; communications error to 10.187.202.24#53: timed out
;; communications error to 10.187.202.24#53: timed out
;; communications error to 10.187.202.28#53: timed out
;; communications error to 10.187.203.201#53: timed out
;; no servers could be reached

Quick takeaways:

  • 5 authoritative NS are listed, i.e.:

    • ns8.maharashtra.gov.in.
    • ns9.maharashtra.gov.in.
    • ns10.maharashtra.gov.in.
    • ns18.maharashtra.gov.in.
    • ns20.maharashtra.gov.in.
  • No address (no A/AAAA records) could be found for ns18.maharashtra.gov.in and ns20.maharashtra.gov.in. Internet Archive snapshots of bgp.tools at the time of writing: NS18 and NS20.

  • “communications error to 10.187.202.24#53: timed out”, “communications error to 10.187.202.28#53: timed out” and “communications error to 10.187.203.201#53: timed out” are likely due to RFC 1918 A records for the NS. Of course, those will never respond on the public internet.

  • Not visible in the trace, but NS10 has a private or empty A/AAAA record as well (detailed further down).

  • The query resolution failed with “no servers could be reached”, i.e. we didn’t receive any A/AAAA record for that query.

DNS resolution for this query is hit or miss.
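
To see the flakiness for yourself, simply re-running the same trace a few times is enough; a minimal sketch (output trimmed to the last few lines) could look like this:

# Whether the recursion succeeds depends on which of the partly lame,
# partly unreachable name servers happen to get picked each time.
for i in 1 2 3; do
    dig +trace +nodnssec maharashtra.gov.in | tail -n 3
done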

Looking at in-zone data

Let’s look at the NS added in the zone itself (with 9.9.9.9):

$ dig ns maharashtra.gov.in

; <<>> DiG 9.18.33-1~deb12u2-Debian <<>> ns maharashtra.gov.in
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 172
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 3

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;maharashtra.gov.in.        IN    NS

;; ANSWER SECTION:
maharashtra.gov.in.    300    IN    NS    ns8.maharashtra.gov.in.
maharashtra.gov.in.    300    IN    NS    ns9.maharashtra.gov.in.

;; ADDITIONAL SECTION:
ns9.maharashtra.gov.in.    300    IN    A    10.187.202.24
ns8.maharashtra.gov.in.    300    IN    A    10.187.202.28

;; Query time: 180 msec
;; SERVER: 9.9.9.9#53(9.9.9.9) (UDP)
;; WHEN: Sat Jun 21 23:00:49 IST 2025
;; MSG SIZE  rcvd: 115

Pay special attention to the “ADDITIONAL SECTION”. Running dig ns9.maharashtra.gov.in and dig ns8.maharashtra.gov.in returns these RFC 1918, i.e. private, addresses. This is coming from the zone itself, so the in-zone A records of NS8 and NS9 point to 10.187.202.28 and 10.187.202.24 respectively.

Cloudflare’s 1.1.1.1 has a slightly different version:

$ dig ns maharashtra.gov.in @1.1.1.1

; <<>> DiG 9.18.33-1~deb12u2-Debian <<>> ns maharashtra.gov.in @1.1.1.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 36005
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;maharashtra.gov.in.        IN    NS

;; ANSWER SECTION:
maharashtra.gov.in.    300    IN    NS    ns8.
maharashtra.gov.in.    300    IN    NS    ns10.maharashtra.gov.in.
maharashtra.gov.in.    300    IN    NS    ns9.

;; Query time: 7 msec
;; SERVER: 1.1.1.1#53(1.1.1.1) (UDP)
;; WHEN: Sun Jun 22 10:38:30 IST 2025
;; MSG SIZE  rcvd: 100

Interesting response here for sure :D.

The reason for the difference between the responses from 1.1.1.1 and 9.9.9.9 is in the next section.

Looking at parent zone

gov.in is the parent zone here. Tucows is the operator for gov.in as well as the .in ccTLD zone:

$ dig ns gov.in +short
ns01.trs-dns.net.
ns01.trs-dns.com.
ns10.trs-dns.org.
ns10.trs-dns.info.

Let’s take a look at what the parent zone (NS) holds:

$ dig ns maharashtra.gov.in @ns01.trs-dns.net.

; <<>> DiG 9.18.36 <<>> ns maharashtra.gov.in @ns01.trs-dns.net.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 56535
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 5, ADDITIONAL: 6
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: f13027aa39632404010000006856fa2a9c97d6bbc973ba4f (good)
;; QUESTION SECTION:
;maharashtra.gov.in.        IN    NS

;; AUTHORITY SECTION:
maharashtra.gov.in.    900    IN    NS    ns8.maharashtra.gov.in.
maharashtra.gov.in.    900    IN    NS    ns18.maharashtra.gov.in.
maharashtra.gov.in.    900    IN    NS    ns10.maharashtra.gov.in.
maharashtra.gov.in.    900    IN    NS    ns9.maharashtra.gov.in.
maharashtra.gov.in.    900    IN    NS    ns20.maharashtra.gov.in.

;; ADDITIONAL SECTION:
ns20.maharashtra.gov.in. 900    IN    A    52.183.143.210
ns18.maharashtra.gov.in. 900    IN    A    35.154.30.166
ns10.maharashtra.gov.in. 900    IN    A    164.100.128.234
ns9.maharashtra.gov.in.    900    IN    A    103.23.150.89
ns8.maharashtra.gov.in.    900    IN    A    103.23.150.88

;; Query time: 28 msec
;; SERVER: 64.96.2.1#53(ns01.trs-dns.net.) (UDP)
;; WHEN: Sun Jun 22 00:00:02 IST 2025
;; MSG SIZE  rcvd: 248

The ADDITIONAL SECTION gives a completely different picture (different from the in-zone NSes). Maybe this was how it was supposed to be, but none of the IPs listed for NS10, NS18 and NS20 respond to any DNS query.

Assuming NS8 is 103.23.150.88 and NS9 is 103.23.150.89, checking the SOA on each gives the following:

$ dig soa maharashtra.gov.in @103.23.150.88 +short
ns8.maharashtra.gov.in. postmaster.maharashtra.gov.in. 2013116777 1200 600 1296000 300

$ dig soa maharashtra.gov.in @103.23.150.89 +short
ns8.maharashtra.gov.in. postmaster.maharashtra.gov.in. 2013116757 1200 600 1296000 300

NS8 (which is marked as primary in the SOA) has serial 2013116777 and NS9 is on serial 2013116757, so it looks like the sync (IXFR/AXFR) between primary and secondary is broken. That’s why NS8 and NS9 are serving different responses, evident from the following:

$ dig ns8.maharashtra.gov.in @103.23.150.88 +short
103.23.150.88

$ dig ns8.maharashtra.gov.in @103.23.150.89 +short
10.187.202.28

$ dig ns9.maharashtra.gov.in @103.23.150.88 +short
103.23.150.89

$ dig ns9.maharashtra.gov.in @103.23.150.89 +short
10.187.202.24

$ dig ns maharashtra.gov.in @103.23.150.88 +short
ns9.
ns8.
ns10.maharashtra.gov.in.

$ dig ns maharashtra.gov.in @103.23.150.89 +short
ns9.maharashtra.gov.in.
ns8.maharashtra.gov.in.

$ dig ns10.maharashtra.gov.in @103.23.150.88 +short
10.187.203.201

$ dig ns10.maharashtra.gov.in @103.23.150.89 +short

# No/empty response ^

This is the reason for the difference between the 1.1.1.1 and 9.9.9.9 responses in the previous section.
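
If you want to watch whether the two servers ever converge, a small loop over the two reachable addresses does the job (just a sketch, reusing the same SOA query as above):

# Print the SOA serial as seen by each reachable authoritative server.
for ns in 103.23.150.88 103.23.150.89; do
    printf '%s: ' "$ns"
    dig +short soa maharashtra.gov.in @"$ns" | awk '{print $3}'
done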

To summarize:

  • Primary and secondary NS aren’t in sync: the serials don’t match, and NS8 and NS9 respond differently to the same queries.
  • Some NSes have A records with private addresses that are not reachable on the internet, making them lame servers.
  • Incomplete NS names, not even FQDNs in some cases (ns8., ns9.).
  • A mismatch between the NS delegated in the parent zone and the NS added in the zone itself.
  • Name resolution only works in a very particular order (in my initial trace it failed).

Initially, I thought of citing RFCs, but I don’t really think it’s even required. 1.1.1.1, 8.8.8.8 and 9.9.9.9 are handling this problem (lame servers) well, handing out the A record for the main website, so dig maharashtra.gov.in would mostly pass; that is why I started this post with +trace, to recurse through the complete zone and show the problem.

For later reference:

$ dig maharashtra.gov.in @8.8.8.8 +short
103.8.188.109

Email to SOA address

I have sent the following email to the address listed in the SOA:


Subject - maharashtra.gov.in authoritative DNS servers not reachable

Hello,

I wanted to highlight the confusing state of maharashtra.gov.in authoritative DNS servers.

The parent zone lists the following as name servers for your DNS zone:

  • ns8.maharashtra.gov.in.
  • ns18.maharashtra.gov.in.
  • ns10.maharashtra.gov.in.
  • ns9.maharashtra.gov.in.
  • ns20.maharashtra.gov.in.

Out of these, ns18 and ns20 don’t have public A/AAAA records and are thus not reachable. ns10 keeps on shuffling between NO A record and 10.187.203.201 (private, not reachable address). ns8 keeps on shuffling between 103.23.150.88 and 10.187.202.28 (private, not reachable address). ns9 keeps on shuffling between 103.23.150.89 and 10.187.202.24 (private, not reachable address).

These are leading to long, broken, or no DNS resolution for the website(s). Can you take a look at the problem?

Regards, Sahil


I’ll update here if I get a response. Hopefully, they’ll listen and fix their problem.

365 TomorrowsI Hear You Like My Work

Author: Alaina Hammond Yesterday I received a text from an unknown number. “Hi! I hear you like my work!” I immediately knew who it was. Or rather, who it was pretending to be. It’s so creepy that the robots in my phone can tell what I’ve been reading. Even when it’s in paperback form, purchased […]

The post I Hear You Like My Work appeared first on 365tomorrows.

,

Planet DebianRavi Dwivedi: Getting Brunei visa

In December 2024, my friend Badri and I were planning a trip to Southeast Asia. At this point, we were planning to visit Singapore, Malaysia and Vietnam. My Singapore visa had already been approved, and Malaysia was visa-free for us. For Vietnam, we had to apply for an e-visa online.

We considered adding Brunei to our itinerary. I saw some videos of the Brunei visa process and got the impression that we needed to go to the Brunei embassy in Kuching, Malaysia in person.

However, when I happened to search for Brunei on Organic Maps1, I stumbled upon the Brunei Embassy in Delhi. It seemed to be somewhere in Hauz Khas. As I was going to Delhi to collect my Singapore visa the next day, I figured I’d also visit the Brunei Embassy to get information about the visa process.

The next day I went to the location displayed by Organic Maps. It was next to the embassy of Madagascar, and a sign on the road divider confirmed that I was at the right place.

That said, it actually looked like someone’s apartment. I entered and asked for directions to the Brunei embassy, but the people inside did not seem to understand my query. After some back and forth, I realized that the embassy wasn’t there.

I now searched for the Brunei embassy on the Internet, and this time I got an address in Vasant Vihar. It seemed like the embassy had been moved from Hauz Khas to Vasant Vihar. Going by the timings mentioned on the web page, the embassy was closing in an hour.

I took a Metro from Hauz Khas to Vasant Vihar. After deboarding at the Vasant Vihar metro station, I took an auto to reach the embassy. The address listed on the webpage got me into the correct block. However, the embassy was still nowhere to be seen. I asked around, but security guards in that area pointed me to the Burundi embassy instead.

After some more looking around, I did end up finding the embassy. I spoke to the security guards at the gate and told them that I would like to know the visa process. They dialled a number and asked that person to tell me the visa process.

I spoke to a lady on the phone. She listed the documents required for the visa process and mentioned that the timings for visa applications were from 9 o’clock to 11 o’clock in the morning. She also informed me that the visa fee was ₹1000.

I also asked about the process for Badri, who lives far away in Tamil Nadu and cannot report to the embassy physically. She told me that I could submit a visa application on his behalf, along with an authorization letter.

Having found the embassy in Delhi was a huge relief. The other plan - going to Kuching, Malaysia - was a bit uncertain, and we didn’t know how much time it would take. Getting our passport submitted at an embassy in a foreign country was also not ideal.

A few days later, Badri sent me all the documents required for his visa. I went to the embassy and submitted both applications. The lady who collected our visa submissions asked me for our flight reservations from Delhi to Brunei, whereas ours were (in keeping with our itinerary) from Kuala Lumpur. She said that she might contact me later if it was required.

For reference, here is the list of documents we submitted -

  • Visa application form
  • Passport
  • A photocopy of passport
  • Authorization letter from Badri (authorizing me to submit his application on his behalf)
  • Airline ticket itinerary
  • Hotel bookings
  • Cover letter
  • 2 photos
  • Proof of employment
  • 6 months bank statement (they specifically asked for ₹1,00,000 or more in bank balance)

I then asked about the procedure to collect the passports and visa results. Usually, embassies will tell you that they will contact you when they have decided on your applications. However, here I was informed that if they don’t contact me within 5 days, I can come and collect our passports and visa result between 13:30-14:30 hours on the fifth day. That was strange :)

I did visit the embassy to collect our visa results on the fifth day. However, the lady scolded me for not bringing the receipt she gave me. I was afraid that I might have to go all the way back home and bring the receipt to get our passports. The travel date was close, and it would take some time for Badri to receive his passport via courier as well.

Fortunately, she gave me our passports (with the visa attached) and asked me to share a scanned copy of the receipt via email after I get home.

We were elated that our visas were approved. Now we could focus on booking our flights.

If you are going to Brunei, remember to fill in their arrival card on the website within 48 hours of your arrival!

Thanks to Badri and Contrapunctus for reviewing the draft before publishing the article.


  1. Nowadays, I prefer using Comaps instead of Organic Maps and recommend you do the same. Organic Maps had some issues with its governance and the community issues weren’t being addressed. ↩︎

365 Tomorrows11 to Midnight

Author: Claire Robertson Those four great comets pull white scars through the sky. Fans of fire expand over our heads, and you still can’t bear to look at me despite how I ask you to. I want the last thing I see to be something familiar. The half-eaten chocolate cake between us will have to […]

The post 11 to Midnight appeared first on 365tomorrows.

,

Planet DebianSven Hoexter: Terraform: Validation Condition Cycles

Terraform 1.9 introduced, some time ago, the capability to reference other variables in an input variable validation condition, not only the one you’re validating.

What does not work is having two variables which validate each other, e.g.

variable "nat_min_ports" {
  description = "Minimal amount of ports to allocate for 'min_ports_per_vm'"
  default     = 32
  type        = number
  validation {
    condition = (
      var.nat_min_ports >= 32 &&
      var.nat_min_ports <= 32768 &&
      var.nat_min_ports < var.nat_max_ports
    )
    error_message = "Must be between 32 and 32768 and less than 'nat_max_ports'"
  }
}

variable "nat_max_ports" {
  description = "Maximal amount of ports to allocate for 'max_ports_per_vm'"
  default     = 16384
  type        = number
  validation {
    condition = (
      var.nat_max_ports >= 64 &&
      var.nat_max_ports <= 65536 &&
      var.nat_max_ports > var.nat_min_ports
    )
    error_message = "Must be between 64 and 65536 and above 'nat_min_ports'"
  }
}

That led directly to the following rather opaque error message: Error: Cycle: module.gcp_project_network.var.nat_max_ports (validation), module.gcp_project_network.var.nat_min_ports (validation)

I removed the somewhat duplicate check var.nat_max_ports > var.nat_min_ports from nat_max_ports to break the cycle.
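
The resulting nat_max_ports then keeps only its own range check, roughly as below, while the relation between the two values is still enforced once, from the nat_min_ports side:

variable "nat_max_ports" {
  description = "Maximal amount of ports to allocate for 'max_ports_per_vm'"
  default     = 16384
  type        = number
  validation {
    condition = (
      var.nat_max_ports >= 64 &&
      var.nat_max_ports <= 65536
    )
    error_message = "Must be between 64 and 65536"
  }
}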

Worse Than FailureError'd: Colophony

Just a quick note this week: I discovered that many people have been sending in submissions for this column and designating them for CodeSOD by mistake. Consequently, there is an immense backlog of material from which to choose. An abundance of riches! We will be seeing some older items in the future. For today, a collection of colons:

Bill NoLastName, giving away clues to his banking security questions online: "If I had known there was a limit, I would have changed my daughter's middle name. I've been caught by this before - my dad has only a middle initial (no middle name)."


Gordon F. heard of a great deal: "This is the first mention of shipping on a hearing aids website. Tough choice."


Michael P. underlines a creative choice: "I got an email from a recruiter about a job opening. I'm a little confused about the requirements."


Cole T. pretend panics about pennies (and maybe we need an article about false urgency, hm): "Oh no! My $0 in rewards are about to expire!"


Finally, bibliophile WeaponizedFun (alsonolastname) humblebrags erudition. It ain't War & Peace, but it's still an ordeal! "After recently finishing The Brothers Karamazov after 33 hours on audio disc, I was a bit surprised to see that Goodreads is listing this as my longest book with only 28 pages. 28 discs, maybe, but I still am questioning their algorithm, because this just looks silly."


[Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

Planet DebianMatthew Garrett: My a11y journey

23 years ago I was in a bad place. I'd quit my first attempt at a PhD for various reasons that were, with hindsight, bad, and I was suddenly entirely aimless. I lucked into picking up a sysadmin role back at TCM where I'd spent a summer a year before, but that's not really what I wanted in my life. And then Hanna mentioned that her PhD supervisor was looking for someone familiar with Linux to work on making Dasher, one of the group's research projects, more usable on Linux. I jumped.

The timing was fortuitous. Sun were pumping money and developer effort into accessibility support, and the Inference Group had just received a grant from the Gatsby Foundation that involved working with the ACE Centre to provide additional accessibility support. And I was suddenly hacking on code that was largely ignored by most developers, supporting use cases that were irrelevant to most developers. Being in a relatively green field space sounds refreshing, until you realise that you're catering to actual humans who are potentially going to rely on your software to be able to communicate. That's somewhat focusing.

This was, uh, something of an on the job learning experience. I had to catch up with a lot of new technologies very quickly, but that wasn't the hard bit - what was difficult was realising I had to cater to people who were dealing with use cases that I had no experience of whatsoever. Dasher was extended to allow text entry into applications without needing to cut and paste. We added support for introspection of the current application's UI so menus could be exposed via the Dasher interface, allowing people to fly through menu hierarchies and pop open file dialogs. Text-to-speech was incorporated so people could rapidly enter sentences and have them spoken out loud.

But what sticks with me isn't the tech, or even the opportunities it gave me to meet other people working on the Linux desktop and forge friendships that still exist. It was the cases where I had the opportunity to work with people who could use Dasher as a tool to increase their ability to communicate with the outside world, whose lives were transformed for the better because of what we'd produced. Watching someone use your code and realising that you could write a three line patch that had a significant impact on the speed they could talk to other people is an incomparable experience. It's been decades and in many ways that was the most impact I've ever had as a developer.

I left after a year to work on fruitflies and get my PhD, and my career since then hasn't involved a lot of accessibility work. But it's stuck with me - every improvement in that space is something that has a direct impact on the quality of life of more people than you expect, but is also something that goes almost unrecognised. The people working on accessibility are heroes. They're making all the technology everyone else produces available to people who would otherwise be blocked from it. They deserve recognition, and they deserve a lot more support than they have.

But when we deal with technology, we deal with transitions. A lot of the Linux accessibility support depended on X11 behaviour that is now widely regarded as a set of misfeatures. It's not actually good to be able to inject arbitrary input into an arbitrary window, and it's not good to be able to arbitrarily scrape out its contents. X11 never had a model to permit this for accessibility tooling while blocking it for other code. Wayland does, but suffers from the surrounding infrastructure not being well developed yet. We're seeing that happen now, though - Gnome has been performing a great deal of work in this respect, and KDE is picking that up as well. There isn't a full correspondence between X11-based Linux accessibility support and Wayland, but for many users the Wayland accessibility infrastructure is already better than with X11.

That's going to continue improving, and it'll improve faster with broader support. We've somehow ended up with the bizarre politicisation of Wayland as being some sort of woke thing while X11 represents the Roman Empire or some such bullshit, but the reality is that there is no story for improving accessibility support under X11 and sticking to X11 is going to end up reducing the accessibility of a platform.

When you read anything about Linux accessibility, ask yourself whether you're reading something written by either a user of the accessibility features, or a developer of them. If they're neither, ask yourself why they actually care and what they're doing to make the future better.


365 TomorrowsAssisted Living

Gramps started slipping after his 105th birthday. Nothing dramatic, just forgetting a story or two, repeating a conversation from the hour before, stuff like that. Our family and about 40 others went to the surgical center for the informational briefings about a revolutionary AI “personality bridge” implant. There was a slick corporate infomercial and then […]

The post Assisted Living appeared first on 365tomorrows.

Planet DebianRussell Coker: The Intel Arc B580 and PCIe Slot Size

A few months ago I bought an Intel Arc B580 for the main purpose of getting 8K video going [1]. I had briefly got it working in a test PC but then I wanted to deploy it on my HP z840 that I use as a build server and for playing with ML stuff [2]. I only did brief tests of it previously and this was my first attempt at installing it in a system I use. My plan was to keep the NVidia RTX A2000 in place and run 2 GPUs, that’s not an uncommon desire among people who want to do ML stuff and it’s the type of thing that the z840 is designed for: the machine has slots 2, 4, and 6 being PCIe*16 so it should be able to fit 3 cards that each take 2 slots. So having one full size GPU, the half-height A2000, and an NVMe controller that uses *16 to run four NVMe devices should be easy.

Intel designed the B580 to use every millimeter of space possible while still being able to claim to be a 2 slot card. On the circuit board side there is a plastic cover over the board that takes all the space before the next slot, so a 2 slot card can’t go on that side without having its airflow blocked. On the other side it takes all the available space so that any card that wants to blow air through can’t fit, and also such that a medium size card (such as the card for 4 NVMe devices) would block its air flow. So it’s impossible to have a computer with 6 PCIe slots run the B580 as well as 2 other full size *16 cards.

Support for this type of GPU is something vendors like HP should consider when designing workstation class systems. For HP there is no issue of people installing motherboards in random cases (the HP motherboard in question uses proprietary power connectors and won’t even boot with an ATX PSU without significant work). So they could easily design a motherboard and case with a few extra mm of space between pairs of PCIe slots. The cards that are double width are almost always *16, so you could pair up a *16 slot and another slot and have extra space on each side of the pair. I think for most people a system with 6 PCIe slots with a bit of extra space for GPU cooling would be more useful than having 7 PCIe slots. But as HP have full design control they don’t even need to reduce the number of PCIe slots, they could just make the case taller. If they added another 4 slots and increased the case size accordingly it still wouldn’t be particularly tall by the standards of tower cases from the 90s! The z8 series of workstations are the biggest workstations that HP sells so they should design them to do these things. At the time that the z840 was new there was a lot of ML work being done and HP was selling them as ML workstations; they should have known how people would use them and designed them accordingly.

So I removed the NVidia card and decided to run the system with just the Arc card. Things should have been fine, but Intel designed the card to be as high as possible and put the power connector on top. This prevented installing the baffle for directing air flow over the PCIe slots, and due to the design of the z840 (which is either ingenious or stupid depending on your point of view) the baffle is needed to secure the PCIe cards in place. So now all the PCIe cards are just secured by friction in the slots; this isn’t an unusual situation for machines I assemble but it’s not something I desired.

This is the first time I’ve felt compelled to write a blog post reviewing a product before even getting it working. But the physical design of the B580 is outrageously impractical unless you are designing your entire computer around the GPU.

As an aside the B580 does look very nice. The plastic surround is very fancy, it’s a pity that it interferes with the operation of the rest of the system.

Planet DebianReproducible Builds (diffoscope): diffoscope 299 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 299. This version includes the following changes:

[ Chris Lamb ]
* Add python3-defusedxml to the Build-Depends in order to include it in the
  Docker image. (Closes: #407)

You can find out more by visiting the project homepage.

,

Planet DebianJonathan Carter: My first tag2upload upload

Tag2upload?

The tag2upload service has finally gone live for Debian Developers in an open beta.

If you’ve never heard of tag2upload before, here is a great primer presented by Ian Jackson and prepared by Ian Jackson and Sean Whitton.

In short, the world has moved on to hosting and working with source code in Git repositories. In Debian, we work with source packages that are used to generate the binary artifacts that users know as .deb files. In Debian, there is so much tooling and culture built around this. For example, our workflow passes what we call the island test – you could take every source package in Debian along with you to an island with no Internet, and you’d still be able to rebuild or modify every package. When changing the workflows, you risk losing benefits like this, and over the years there have been a number of different ideas on how to move to a purely or partially git flow for Debian, none of which really managed to gain enough momentum or project-wide support.

Tag2upload makes a lot of sense. It doesn’t take away any of the benefits of the current way of working (whether technical or social), but it does make some aspects of Debian packaging significantly simpler and faster. Even so, if you’re a Debian Developer and more familiar with how the sausage is made, you’ll have noticed that this has been a very long road for the tag2upload maintainers; they’ve hit multiple speed bumps since 2019, but with a lot of patience and communication and persistence from all involved (and almost even a GR), it is finally materializing.

Performing my first tag2upload

So, first, I needed to choose which package I wanted to upload. We’re currently in hard freeze for the trixie release, so I’ll look for something simple that I can upload to experimental.

I chose bundlewrap, it’s quite a straightforward Python package, and updates are usually just as straightforward, so it’s probably a good package to work on without having to deal with extra complexities in learning how to use tag2upload.

So, I do the usual uscan and dch -i to update my package…

And then I realise that I still want to build a source package to test it in cowbuilder. Hmm, I remember that Helmut showed me that building a source package isn’t necessary in sbuild, but I have a habit of breaking my sbuild configs somehow, so I guess I should revisit that.

So, I do a dpkg-buildpackage -S -sa and test it out with cowbuilder, because that’s just how I roll (at least for now, fixing my local sbuild setup is yak shaving for another day, let’s focus!).

I end up with a binary that looks good, so I’m satisfied that I can upload this package to the Debian archives. So, time to configure tag2upload.

The first step is to set up the webhook in Salsa. I was surprised to find two webhooks already configured:

I know of KGB that posts to IRC; I didn’t know before that this was the mechanism it uses to do that. Nice! I also don’t know what the tagpending one does, I’ll go look into that some other time.

Configuring a tag2upload webhook is quite simple, add a URL, call the name tag2upload, and select only tag push events:

I ran the test webhook, and it returned a code 400 message about a missing ‘message’ header, which the documentation says is normal.

Next, I install git-debpush from experimental.

The wiki page simply states that you can use the git-debpush command to upload, but doesn’t give any examples of how to use it, and its manpage doesn’t either. And when I run just git-debpush I get:

jonathan@lapcloud:~/devel/debian/python-team/bundlewrap/bundlewrap-4.23.1$ git-debpush
git-debpush: check failed: upstream tag upstream/4.22.0 is not an ancestor of refs/heads/debian/master; probably a mistake ('upstream-nonancestor' check)
pristine-tar is /usr/bin/pristine-tar
git-debpush: some check(s) failed; you can pass --force to ignore them

I have no idea what that’s supposed to mean. I was also not sure whether I should tag anything to begin with, or if some part of the tag2upload machinery automatically does it. I think I might have tagged debian/4.23-1 before tagging upstream/4.23 and perhaps it didn’t like that; I reverted and did it the other way around and got a new error message. Progress!

jonathan@lapcloud:~/devel/debian/python-team/bundlewrap/bundlewrap-4.23.1$ git-debpush
git-debpush: could not determine the git branch layout
git-debpush: please supply a --quilt= argument

Looking at the manpage, it looks like --quilt=baredebian matches my package the best, so I try that:

jonathan@lapcloud:~/devel/debian/python-team/bundlewrap/bundlewrap-4.23.1$ git-debpush --quilt=baredebian
Enumerating objects: 70, done.
Counting objects: 100% (70/70), done.
Delta compression using up to 12 threads
Compressing objects: 100% (37/37), done.
Writing objects: 100% (37/37), 8.97 KiB | 2.99 MiB/s, done.
Total 37 (delta 30), reused 0 (delta 0), pack-reused 0 (from 0)
To salsa.debian.org:python-team/packages/bundlewrap.git
6f55d99..3d5498f debian/master -> debian/master

 * [new tag] upstream/4.23.1 -> upstream/4.23.1
 * [new tag] debian/4.23.1-1_exp1 -> debian/4.23.1-1_exp1

Ooh! That looked like it did something! And a minute later I received the notification of the upload in my inbox:

So, I’m not 100% sure that this makes things much easier for me than doing a dput, but it’s not any more difficult or more work either (once you know how it works), so I’ll be using git-debpush from now on, and I’m sure that as I get more used to the git workflow of doing things I’ll understand more of the benefits. And at last, my one last use case for using FTP is now properly dead. RIP FTP :)

MEMatching Intel CPUs

To run a SMP system with multiple CPUs you need to have CPUs that are “identical”, the question is what does “identical” mean. In this case I’m interested in Intel CPUs because SMP motherboards and server systems for Intel CPUs are readily available and affordable. There are people selling matched pairs of CPUs on ebay which tend to be more expensive than randomly buying 2 of the same CPU model, so if you can identify 2 CPUs that are “identical” which are sold separately then you can save some money. Also if you own a two CPU system with only one CPU installed then buying a second CPU to match the first is cheaper and easier than buying two more CPUs and removing a perfectly working CPU.

e5-2640 v4 cpus

Intel (R) Xeon (R)
E5-2640V4
SR2NZ 2.40GHZ
J717B324 (e4)
7758S4100843

Above is a pic of 2 E5-2640v4 CPUs that were in an SMP system I purchased, along with a plain ASCII representation of the text on one of them. The bottom code (starting with “77”) is apparently the serial number; one of the two codes above it is what determines how “identical” those CPUs are.

The code on the same line as the nominal clock speed (in this case SR2NZ) is the “spec number” which is sometimes referred to as “sspec” [1].

The line below the sspec and above the serial number has J717B324 which doesn’t have a google hit. I looked at more than 20 pics of E5-2640v4 CPUs on ebay, they all had the code SR2NZ but had different numbers on the line below. I conclude that the number on the line below probably indicates the model AND stepping while SR2NZ just means E5-2640v4 regardless of stepping. As I wasn’t able to find another CPU on ebay with the same number on the line below the sspec I believe that it will be unreasonably difficult to get a match for an existing CPU.

For the purpose of matching CPUs I believe that if the line above the serial number matches then the CPUs can be used together. I am not certain that CPUs where this number differs slightly won’t work, but I definitely wouldn’t want to spend money on CPUs where this number is different.

smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2699A v4 @ 2.40GHz (family: 0x6, model: 0x4f, stepping: 0x1)

When you boot Linux the kernel identifies the CPU in a manner like the above; the combination of family and model seems to map to one spec number. The combination of family, model, and stepping should be all that’s required to have them work together.
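
For reference, the same identifiers can be read from a running system before shopping for a second CPU; something like the following (a quick sketch, nothing Intel-specific) will print them:

# Family, model and stepping as reported by the running kernel.
grep -E '^(cpu family|model|model name|stepping)[[:space:]]*:' /proc/cpuinfo | sort -u
# Or, in a more condensed form:
lscpu | grep -E 'CPU family|Model name|Model:|Stepping'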

I think that Intel did the wrong thing in not making this clearer. It would have been very easy to print the stepping on the CPU case next to the sspec or the CPU model name. It also wouldn’t have been too hard to make the CPU provide the magic number that is apparently the required match for SMP to the OS. Having the Intel web site provide a mapping of those numbers to steppings of CPUs also shouldn’t be difficult for them.

If anyone knows more about these issues please let me know.

Worse Than FailureCodeSOD: Using the Old Bean

If you write a lot of Java, you're going to end up writing a lot of getters and setters. Without debating the merits of loads of getters and setters versus bare properties, ideally, getters and setters are the easiest code to write. Many IDEs will just generate them for you! How can you screw up getters and setters?

Well, Dave found someone who could.

private ReportDatesDao reportDatesDao;
@Resource(name = CensusDao.BEAN_NAME)
public void setAuditDao(CensusDao censusDao) {
   this.reportDatesDao = reportDatesDao;
}

The function is called setAuditDao, takes a CensusDao input, but manipulates reportDatesDao, because clearly someone copy/pasted and didn't think about what they were doing.

The result, however, is that this just sets this.reportDatesDao equal to itself.
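
For contrast, a fixed version is about as boring as setters get. Something along these lines is presumably what was intended (the ReportDatesDao bean-name constant is a guess, mirroring the CensusDao one in the original):

@Resource(name = ReportDatesDao.BEAN_NAME) // hypothetical constant, mirroring CensusDao.BEAN_NAME
public void setReportDatesDao(ReportDatesDao reportDatesDao) {
   this.reportDatesDao = reportDatesDao; // the parameter shadows the field, so 'this.' does real work here
}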

I'm always impressed by code which given the chance to make multiple decisions makes every wrong choice, even if it is just lazy copy/paste.

[Advertisement] Plan Your .NET 9 Migration with Confidence
Your journey to .NET 9 is more than just one decision. Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

365 TomorrowsTraveler Talk

Author: Angela Hawn “Ready to sing for your supper?” The head honcho in the antique army helmet flashes a toothy smile at our little group before acknowledging the wider audience. Applause ensues. “Of course”, I say, channeling my storytelling grandmother whose entertaining melodrama once served multiple purposes: convincing me to sleep, to eat my vegetables, […]

The post Traveler Talk appeared first on 365tomorrows.

Planet DebianDebian Outreach Team: GSoC 2025 Introduction: Make Debian for Raspberry Pi Build Again

Hello everyone! I am Kurva Prashanth, interested in the lower-level workings of system software, CPUs/SoCs and hardware design. I was introduced to Open Hardware and Embedded Linux while studying electronics and embedded systems as part of robotics coursework. Initially, I did not pay much attention to it and quickly moved on. However, a short talk on “Liberating SBCs using Debian” by Yuvraj at MiniDebConf India 2021 caught my interest. The talk focused on Open Hardware platforms such as Olimex and BeagleBone Black, as well as the Debian distributions tailored for these ARM-based single-board computers, and it intrigued me to delve deeper into the realm of Open Hardware and Embedded Linux.

These days I’m trying to improve my abilities to contribute to Debian and Linux kernel development. Before finding out about the Google Summer of Code project, I had already started my journey with Debian. I extensively used the Debian system build tools (debootstrap, sbuild, deb-build-pkg, qemu-debootstrap) for building a Debian image for the Bela Cape, a real-time OS for music making that achieves extremely fast audio and sensor processing times. In 2023, I had the opportunity to attend DebConf23 in Kochi, India - thanks to Nilesh Patra (@nilesh). I met Hector Oron (@zumbi) over dinner at DebConf23, and it was nice talking about his contributions and work in Debian on the armhf port and Debian system administration. That conversation got me interested in knowing more about Debian ARM and the installer, and I found it fascinating that EmDebian was once an external project bringing Debian to embedded systems and that now Debian itself can be run on many embedded systems. Also, during DebCamp I was introduced to PGP/GPG keys and the web of trust by Carlos Henrique Lima Melara (@charles), and I learned how to use and generate GPG keys. After DebConf23 I tried Debian packaging and miserably failed to get sponsorship for a Python library I packaged.

I came across the Debian projects for this year’s Google Summer of Code, found the project titled Make Debian for Raspberry Pi Build Again quite interesting, and applied. Gladly, on May 8th, I received an acceptance e-mail from GSoC. I was excited that I would spend the summer working on something that I like doing.

I am thrilled to be part of this project and I am super excited for the summer of ’25. I’m looking forward to working on what I like most, to new connections, and to learning opportunities.

So, let me talk a bit more about my project. I will be working on making Debian for Raspberry Pi SBCs build again, under the guidance of Gunnar Wolf (@gwolf). In this post, I will describe the project I will be working on.

Why make Debian for Raspberry Pi build again?

There is an available set of images for running Debian on Raspberry Pi computers (all models below the 5 series)! However, the maintainer is severely lacking the time to take care of them; he has called for somebody to adopt them, but has not been successful. The image generation scripts might have bitrotted a bit, but it is mostly all done. And there is still a lot of interest and use in having the images freshly generated and decently tested! This GSoC project is about getting the Raspberry Pi Debian images site (https://raspi.debian.net/) working reliably, making daily-built images automatic again, ideally making the setup easily deployable so it can run on project machines, and migrating the existing hosting infrastructure to Debian.

How much does it differ from the Debian build process?

While the goal is to stay as close as possible to the Debian build process, Raspberry Pi boards require some necessary platform-specific changes, primarily in the early boot sequence and firmware handling. Unlike typical Debian systems, Raspberry Pi boards depend on a non-standard bootloader and use non-free firmware (raspi-firmware), introducing some hardware-specific differences in the initialization process.

These differences are largely confined to the early boot and hardware initialization stages. Once the system boots, the userspace remains closely aligned with a typical Debian install, using Debian packages.

The current modifications are required due to the non-free firmware. However, several areas merit review, and a few parts might be worth changing:

  1. Boot flow: Transitioning to a U-Boot based boot process (as used in Debian installer images for many other SBCs) would reduce divergence and better align with Debian Installer.

  2. Current scripts/workarounds: Some existing hacks may now be redundant with recent upstream support and could be removed.

  3. Board-specific images: Shifting to architecture-specific base images with runtime detection could simplify builds and reduce duplication.

Debian is already building SD card images for a wide range of SBCs (e.g., BeagleBone, BananaPi, OLinuXino, Cubieboard) under installer-arm64/images/u-boot and installer-armhf/images/u-boot; a similar approach for Raspberry Pi could improve maintainability and consistency with Debian’s broader SBC support.

Quoted from Mail Discussion Thread with Mentor (Gunnar Wolf)

"One direction we wanted to explore was whether we should still be building one image per family, or whether we could instead switch to one image per architecture (armel, armhf, arm64). There were some details to iron out as RPi3 and RPi4 were quite different, but I think it will be similar to the differences between the RPi 0 and 1, which are handled at first-boot time. To understand what differs between families, take a look at Cyril Brulebois’ generate-recipe (in the repo), which is a great improvement over the ugly mess I had before he contributed it"

In this project, I intend to build one image per architecture (armel, armhf, arm64) rather than continuing with the current model of building one image per board. This change simplifies image management, reduces redundancy, and leverages dynamic configuration at boot time to support all supported boards within each architecture. By using U-Boot and flash-kernel, we can detect the board type and configure kernel parameters, DTBs, and firmware during the first boot, reducing duplication across images and simplifying the maintenance burden, and we can also generalize image creation while still supporting board-specific behavior at runtime. This method aligns with existing practices in the Debian Installer team and with Debian’s long-term maintainability goals, and better leverages upstream capabilities, ensuring a consistent and scalable boot experience.
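
As a rough illustration of the runtime-detection idea, a first-boot hook could look something like the sketch below. The hook itself is hypothetical and not part of the existing recipes; it merely shows the moving parts (/proc/device-tree/model, /etc/flash-kernel/machine and a flash-kernel run):

#!/bin/sh
# Hypothetical first-boot hook: read the board name from the device tree,
# record it for flash-kernel, then let flash-kernel install the matching DTB
# and regenerate the boot script for this particular board.
model="$(tr -d '\0' < /proc/device-tree/model)"
echo "$model" > /etc/flash-kernel/machine
flash-kernel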

To streamline and standardize the process of building bootable Debian images for Raspberry Pi devices, I proposed a new workflow that leverages the U-Boot and flash-kernel Debian packages. This provides a clean, maintainable, and reproducible way to generate images for armel, armhf and arm64 boards. The workflow is built around vmdb2, a lightweight, declarative tool designed to automate the creation of disk images. A typical vmdb2 recipe defines the disk layout, base system installation (via debootstrap), architecture-specific packages, and any custom post-install hooks, and the image should include U-Boot (the u-boot-rpi package), flash-kernel, and a suitable Debian kernel package like linux-image-arm64 or linux-image-armmp.

U-Boot serves as the platform’s bootloader and is responsible for loading the kernel and initramfs. Unlike Raspberry Pi’s non-free firmware/proprietary bootloader, U-Boot provides an open and scriptable interface, allowing us to follow a more standard Debian boot process. It can be configured to boot using either an extlinux.conf or a boot.scr script generated automatically by flash-kernel. The role of flash-kernel is to bridge Debian’s kernel installation system with the specifics of embedded bootloaders like U-Boot. When installed, it automatically copies the kernel image, initrd, and device tree blobs (DTBs) to the /boot partition. It also generates the necessary boot.scr script if the board configuration demands it. To work correctly, flash-kernel requires that the target machine be identified via /etc/flash-kernel/machine, which must correspond to an entry in its internal machine database. Once the vmdb2 build is complete, the resulting image will contain a fully configured bootable system with all necessary boot components correctly installed. The image can be flashed to an SD card and used to boot on the intended device without additional manual configuration. Because all key packages (U-Boot, kernel, flash-kernel) are managed through Debian’s package system, kernel updates and boot script regeneration are handled automatically during system upgrades.

Current Workflow: Builds one Image per family

The current vmdb2 recipe uses the Raspberry Pi GPU bootloader provided via the raspi-firmware package. This is the traditional boot process followed by Raspberry Pi OS, and it’s tightly coupled with firmware files like bootcode.bin, start.elf, and fixup.dat. These files are installed to /boot/firmware, which is mounted from a FAT32 partition labeled RASPIFIRM. The device tree files (*.dtb) are manually copied from /usr/lib/linux-image-*-arm64/broadcom/ into this partition.

The kernel is installed via the linux-image-arm64 package, and the boot arguments are injected by modifying /boot/firmware/cmdline.txt using sed commands. Booting depends on the root partition being labeled RASPIROOT, referenced through that file. There is no bootloader such as a UEFI implementation or U-Boot involved; the Raspberry Pi firmware directly loads the kernel, which is standard for Raspberry Pi boards.

- apt: install
  packages:
    ...
    - raspi-firmware  

The boot partition contents and kernel boot setup are tightly controlled via scripting in the recipe.

Limitations of Current Workflow: While this setup works, it has several drawbacks:

  1. Proprietary and Raspberry Pi–specific – It relies on the closed-source GPU bootloader shipped in the raspi-firmware package, which is tightly coupled to specific Raspberry Pi models.

  2. Manual DTB handling – Device tree files are manually copied and hardcoded, making upgrades or board-specific changes error-prone.

  3. Not easily extendable to future Raspberry Pi boards – Any change in bootloader behavior (as seen in the Raspberry Pi 5, which introduces a more flexible firmware boot process) would require significant rework.

  4. No UEFI-based/U-Boot – The current method bypasses the standard bootloader layers, making it inconsistent with other Debian ARM platforms and harder to maintain long-term.

As Raspberry Pi firmware and boot processes evolve, especially with the introduction of Pi 5 and potentially Pi 6, maintaining compatibility will require more flexibility - something best delivered by adopting U-Boot and flash-kernel.

New Workflow: Building Architecture-Specific Images with vmdb2, U-Boot, flash-kernel, and Debian Kernel

This workflow outlines an improved approach to generating architecture-specific bootable Debian images, using vmdb2, U-Boot, flash-kernel, and Debian kernels, and moving away from Raspberry Pi’s proprietary bootloader to a fully open-source boot process, which improves maintainability, consistency, and cross-board support.

New Method: Shift to U-Boot + flash-kernel

U-Boot (via Debian’s u-boot-rpi package) and flash-kernel bring the image-building process closer to how Debian officially boots ARM devices. flash-kernel integrates with the system’s initramfs and kernel packages to install bootloaders, prepare boot.scr or extlinux.conf, and copy kernel/initrd/DTBs to /boot in a format that U-Boot expects. U-Boot will be used as a second-stage bootloader, loaded by the Raspberry Pi’s built-in firmware. Once U-Boot is in place, it will read standard boot scripts (boot.scr) generated by flash-kernel, providing a Debian-compatible and board-flexible solution.

Extending YAML spec for vmdb2 build with U-Boot and flash-kernel

The idea is to improve the existing vmdb2 YAML spec (https://salsa.debian.org/raspi-team/image-specs/raspi_master.yaml) to integrate U-Boot, flash-kernel, and the architecture-specific Debian kernel into the image build process. By incorporating the u-boot-rpi and flash-kernel Debian packages, alongside the standard initramfs-tools, we align the image more closely with Debian best practices while supporting both armhf and arm64 architectures.

Below are the key additions and adjustments needed in a vmdb2 YAML spec to support the workflow. First, install U-Boot, flash-kernel, initramfs-tools and the architecture-specific Debian kernel:

- apt: install
  packages:
    - u-boot-rpi
    - flash-kernel
    - initramfs-tools
    - linux-image-arm64 # or linux-image-armmp for armhf 
  tag: tag-root

Replace linux-image-arm64 with the correct kernel package for the specific target architecture. These packages should be added under the tag-root section in the YAML spec for the vmdb2 build recipe. This ensures that the necessary bootloader, kernel, and initramfs tools are included and properly configured in the image.

Configure Raspberry Pi firmware to Load U-Boot

Install the U-Boot binary as kernel.img in /boot/firmware. We could also download and build U-Boot from source, but Debian provides tested binaries.

- shell: |
    cp /usr/lib/u-boot/rpi_4/u-boot.bin ${ROOT?}/boot/firmware/kernel.img
    echo "enable_uart=1" >> ${ROOT?}/boot/firmware/config.txt
  root-fs: tag-root

This makes the RPi firmware load u-boot.bin instead of the Linux kernel directly.

Set Up flash-kernel for Debian-style Boot

flash-kernel integrates with initramfs-tools and writes a boot configuration suitable for U-Boot. We need to make sure /etc/flash-kernel/db contains an entry for the board (most Raspberry Pi boards are already supported in Bookworm).
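
For boards that are still missing, a database entry looks roughly like the following illustrative stanza (modeled on existing entries in db/all.db; the Machine string must match the board's device-tree model exactly):

Machine: Raspberry Pi 4 Model B
Kernel-Flavors: arm64
DTB-Id: bcm2711-rpi-4-b.dtb
Boot-Script-Path: /boot/firmware/boot.scr
U-Boot-Script-Name: bootscr.uboot-generic
Required-Packages: u-boot-tools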

Set up /etc/flash-kernel.conf with:

- create-file: /etc/flash-kernel.conf
  contents: |
    MACHINE="Raspberry Pi 4"
    BOOTPART="/dev/disk/by-label/RASPIFIRM"
    ROOTPART="/dev/disk/by-label/RASPIROOT"
  unless: rootfs_unpacked

This allows flash-kernel to write an extlinux.conf or boot.scr into /boot/firmware.

Clean up Proprietary/Non-Free Firmware Bootflow

Remove the direct kernel loading flow:

- shell: |
    rm -f ${ROOT?}/boot/firmware/vmlinuz*
    rm -f ${ROOT?}/boot/firmware/initrd.img*
    rm -f ${ROOT?}/boot/firmware/cmdline.txt
  root-fs: tag-root

Let U-Boot and flash-kernel manage kernel/initrd and boot parameters instead.

Boot Flow After This Change

[SoC ROM] -> [start.elf] -> [U-Boot] -> [boot.scr] -> [Linux Kernel]
  1. This still depends on the Raspberry Pi firmware to start, but it only loads U-Boot, not the Linux kernel.

  2. U-Boot gives you more flexibility (e.g., networking, boot menus, signed boot).

  3. Using flash-kernel ensures kernel updates are handled the Debian Installer way.

  4. Test with a serial console (enable_uart=1) in case HDMI doesn’t show early boot logs.

Advantage of New Workflow

  1. Replaces the proprietary Raspberry Pi bootloader with upstream U-Boot.

  2. Debian-native tooling – Uses flash-kernel and initramfs-tools to manage boot configuration.

  3. Consistent across boards – Works for both armhf and arm64, unifying the image build process.

  4. Easier to support new boards – Like the Raspberry Pi 5 and future models.

This transition will standardize the image-building process a bit, aligning it with upstream Debian Installer workflows.

vmdb2 configuration for arm64 using u-boot and flash-kernel

NOTE: This is a baseline example and may require tuning.

# Raspberry Pi arm64 image using U-Boot and flash-kernel

steps:
  # ... (existing mkimg, partitions, mount, debootstrap, etc.) ...

  # Install U-Boot, flash-kernel, initramfs-tools and architecture specific kernel
  - apt: install
    packages:
      - u-boot-rpi
      - flash-kernel
      - initramfs-tools
      - linux-image-arm64 # or linux-image-armmp for armhf
    tag: tag-root

  # Install U-Boot binary as kernel.img in firmware partition
  - shell: |
      cp /usr/lib/u-boot/rpi_arm64/u-boot.bin ${ROOT?}/boot/firmware/kernel.img
      echo "enable_uart=1" >> ${ROOT?}/boot/firmware/config.txt
    root-fs: tag-root

  # Configure flash-kernel for Raspberry Pi
  - create-file: /etc/flash-kernel.conf
    contents: |
      MACHINE="Generic Raspberry Pi ARM64"
      BOOTPART="/dev/disk/by-label/RASPIFIRM"
      ROOTPART="/dev/disk/by-label/RASPIROOT"
    unless: rootfs_unpacked

  # Remove direct kernel boot files from Raspberry Pi firmware
  - shell: |
      rm -f ${ROOT?}/boot/firmware/vmlinuz*
      rm -f ${ROOT?}/boot/firmware/initrd.img*
      rm -f ${ROOT?}/boot/firmware/cmdline.txt
    root-fs: tag-root

  # flash-kernel will manage boot scripts and extlinux.conf
  # Rest of image build continues...

Required Changes to Support Raspberry Pi Boards in Debian (flash-kernel + U-Boot)

Overview of Required Changes

| Component | Required Task |
|-----------|---------------|
| Debian U-Boot Package | Add build target for rpi_arm64 in u-boot-rpi. Optionally deprecate legacy 32-bit targets. |
| Debian flash-kernel Package | Add or verify entries in db/all.db for Pi 4, Pi 5, Zero 2W, CM4. Ensure boot script generation works via bootscr.uboot-generic. |
| Debian Kernel | Ensure DTBs are installed at /usr/lib/linux-image-<version>/ and available for flash-kernel to reference. |

flash-kernel

Already Supported Boards in flash-kernel Debian Package

https://sources.debian.org/src/flash-kernel/3.109/db/all.db/#L1700

| Model | Arch | DTB-Id |
|-------|------|--------|
| Raspberry Pi 1 A/B/B+, Rev2 | armel | bcm2835-* |
| Raspberry Pi CM1 | armel | bcm2835-rpi-cm1-io1.dtb |
| Raspberry Pi Zero/Zero W | armel | bcm2835-rpi-zero*.dtb |
| Raspberry Pi 2B | armhf | bcm2836-rpi-2-b.dtb |
| Raspberry Pi 3B/3B+ | arm64 | bcm2837-* |
| Raspberry Pi CM3 | arm64 | bcm2837-rpi-cm3-io3.dtb |
| Raspberry Pi 400 | arm64 | bcm2711-rpi-400.dtb |

uboot

Already Supported Boards in Debian U-Boot Package

https://salsa.debian.org/installer-team/flash-kernel/-/blob/master/db/all.db

arm64

| Model | Arch | Upstream Defconfig | Debian Target |
|-------|------|--------------------|---------------|
| Raspberry Pi 3B | arm64 | rpi_3_defconfig | rpi_3 |
| Raspberry Pi 4B | arm64 | rpi_4_defconfig | rpi_4 |
| Raspberry Pi 3B/3B+/CM3/CM3+/4B/CM4/400/5B/Zero 2W | arm64 | rpi_arm64_defconfig | rpi_arm64 |

armhf

| Model | Arch | Upstream Defconfig | Debian Target |
|-------|------|--------------------|---------------|
| Raspberry Pi 2 | armhf | rpi_2_defconfig | rpi_2 |
| Raspberry Pi 3B (32-bit) | armhf | rpi_3_32b_defconfig | rpi_3_32b |
| Raspberry Pi 4B (32-bit) | armhf | rpi_4_32b_defconfig | rpi_4_32b |

armel

| Model | Arch | Upstream Defconfig | Debian Target |
|-------|------|--------------------|---------------|
| Raspberry Pi | armel | rpi_defconfig | rpi |
| Raspberry Pi 1/Zero | armel | rpi_0_w | rpi_0_w |

These boards are already defined in debian/rules under the u-boot-rpi source package and generate usable U-Boot binaries for the corresponding Raspberry Pi models.

To-Do: Add Missing Board Support to U-Boot and flash-kernel in Debian

Several Raspberry Pi models are missing from the Debian U-Boot and flash-kernel packages: even though upstream support exists and DTBs are shipped in the Debian kernel, they lack entries in the flash-kernel database needed to enable bootloader installation and initrd handling.

Boards Not Yet Supported in flash-kernel Debian Package

| Model | Arch | DTB-Id |
|-------|------|--------|
| Raspberry Pi 3A+ (32 & 64 bit) | armhf, arm64 | bcm2837-rpi-3-a-plus.dtb |
| Raspberry Pi 4B (32 & 64 bit) | armhf, arm64 | bcm2711-rpi-4-b.dtb |
| Raspberry Pi CM4 | arm64 | bcm2711-rpi-cm4-io.dtb |
| Raspberry Pi CM 4S | arm64 | - |
| Raspberry Zero 2 W | arm64 | bcm2710-rpi-zero-2-w.dtb |
| Raspberry Pi 5 | arm64 | bcm2712-rpi-5-b.dtb |
| Raspberry Pi CM5 | arm64 | - |
| Raspberry Pi 500 | arm64 | - |

Boards Not Yet Supported in Debian U-Boot Package

| Model | Arch | Upstream defconfig(s) |
|-------|------|-----------------------|
| Raspberry Pi 3A+/3B+ | arm64 | -, rpi_3_b_plus_defconfig |
| Raspberry Pi CM 4S | arm64 | - |
| Raspberry Pi 5 | arm64 | - |
| Raspberry Pi CM5 | arm64 | - |
| Raspberry Pi 500 | arm64 | - |

So, what next?

During the Community Bonding Period, I got hands-on with workflow improvements, set up test environments, and began reviewing Raspberry Pi support in Debian’s U-Boot and flash-kernel. These are the logs of the project, where I provide weekly reports on the work done; you can check them here: Community Bonding Period logs.

My next steps include submitting patches to the u-boot and flash-kernel packages to ensure all missing Raspberry Pi entries are built and shipped. I also plan to confirm the kernel DTB installation paths and make sure the necessary files are included for all Raspberry Pi variants. Finally, I plan to validate the changes with test builds on Raspberry Pi hardware.

In parallel, I’m organizing my tasks and setting up my environment to contribute more effectively. It’s been exciting to explore how things work under the hood and to prepare for a summer of learning and contributing to this great community.

,

Cryptogram Friday Squid Blogging: Gonate Squid Video

This is the first ever video of the Antarctic Gonate Squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Cryptogram Surveillance in the US

Good article from 404 Media on the cozy surveillance relationship between local Oregon police and ICE:

In the email thread, crime analysts from several local police departments and the FBI introduced themselves to each other and made lists of surveillance tools and tactics they have access to and felt comfortable using, and in some cases offered to perform surveillance for their colleagues in other departments. The thread also includes a member of ICE’s Homeland Security Investigations (HSI) and members of Oregon’s State Police. In the thread, called the “Southern Oregon Analyst Group,” some members talked about making fake social media profiles to surveil people, and others discussed being excited to learn and try new surveillance techniques. The emails show both the wide array of surveillance tools that are available to even small police departments in the United States and also shows informal collaboration between local police departments and federal agencies, when ordinarily agencies like ICE are expected to follow their own legal processes for carrying out the surveillance.

Cryptogram Self-Driving Car Video Footage

Two articles crossed my path recently. First, a discussion of all the video Waymo has from outside its cars: in this case related to the LA protests. Second, a discussion of all the video Tesla has from inside its cars.

Lots of things are collecting lots of video of lots of other things. How and under what rules that video is used and reused will be a continuing source of debate.

Cryptogram Ghostwriting Scam

The variations seem to be endless. Here’s a fake ghostwriting scam that seems to be making boatloads of money.

This is a big story about scams being run from Texas and Pakistan estimated to run into tens if not hundreds of millions of dollars, viciously defrauding Americans with false hopes of publishing bestseller books (a scam you’d not think many people would fall for but is surprisingly huge). In January, three people were charged with defrauding elderly authors across the United States of almost $44 million ­by “convincing the victims that publishers and filmmakers wanted to turn their books into blockbusters.”

Worse Than FailureCodeSOD: Stop Being So ####

Many a network admin has turned to the siren song of Perl to help them automate managing their networks. Frank's predecessor is no exception.

They also got a bit combative about people critiquing their Perl code:

# COMPLEX SUBNET MATH
# Looking up a value in an array was faster than any mathematical solution. Yes, it's hard coded, but these values won't ever change anyway. Stop being so #### about it.
$Subnets = @("0.0.0.0","128.0.0.0","192.0.0.0","224.0.0.0","240.0.0.0","248.0.0.0","252.0.0.0","254.0.0.0","255.0.0.0","255.128.0.0","255.192.0.0","255.224.0.0","255.240.0.0","255.248.0.0","255.252.0.0","255.254.0.0","255.255.0.0","255.255.128.0","255.255.192.0","255.255.224.0","255.255.240.0","255.255.248.0","255.255.252.0","255.255.254.0","255.255.255.0","255.255.255.128","255.255.255.192","255.255.255.224","255.255.255.240","255.255.255.248","255.255.255.252","255.255.255.254","255.255.255.255")

I believe them when they say that the lookup array is faster, but it leaves me wondering: what are they doing where performance matters that much?

I don't actually think this ascends to the level of a WTF, but I do think the defensive comment is funny. Clearly, the original developer was having a time with people complaining about it.

Frank notes that while Perl has a reputation as a "write only language," this particular set of scripts was actually quite easy to read and maintain. So yes, I guess we should stop being so #### about it.


365 TomorrowsHomesick

Author: Sasha Kasper As the blaring siren assaults my eardrums, it becomes increasingly harder to deny my rapid descent. I float directionless through the cockpit. Up, down, left, right, have lost all meaning. The notion of gravity seems to me a cruel joke, of which the punchline will be my demise. The tempered glass of […]

The post Homesick appeared first on 365tomorrows.

Planet DebianSergio Durigan Junior: GCC, glibc, stack unwinding and relocations – A war story

I’ve been meaning to write a post about this bug for a while, so here it is (before I forget the details!).

First, I’d like to thank a few people:

  • My friend Gabriel F. T. Gomes, who helped with debugging and simply talking about the issue. I love doing some pair debugging, and I noticed that he also had a great time diving into the internals of glibc and libgcc.
  • My teammate Dann Frazier, who always provides invaluable insights and was there to motivate me to push a bit further in order to figure out what was going on.
  • The upstream GCC and glibc developers who finally drove the investigation to completion and came up with an elegant fix.

I’ll probably forget some details because it’s been more than a week (and life at $DAYJOB moves fast), but we’ll see.

The background story

Wolfi OS takes security seriously, and one of the things we have is a package which sets the hardening compiler flags for C/C++ according to the best practices recommended by OpenSSF. At the time of this writing, these flags are (in GCC’s spec file parlance):

*self_spec:
+ %{!O:%{!O1:%{!O2:%{!O3:%{!O0:%{!Os:%{!0fast:%{!0g:%{!0z:-O2}}}}}}}}} -fhardened -Wno-error=hardened -Wno-hardened %{!fdelete-null-pointer-checks:-fno-delete-null-pointer-checks} -fno-strict-overflow -fno-strict-aliasing %{!fomit-frame-pointer:-fno-omit-frame-pointer} -mno-omit-leaf-frame-pointer

*link:
+ --as-needed -O1 --sort-common -z noexecstack -z relro -z now

The important part for our bug is the usage of -z now and -fno-strict-aliasing.

As I was saying, these flags are set for almost every build, but sometimes things don’t work as they should and we need to disable them. Unfortunately, one of these problematic cases has been glibc.

There was an attempt to enable hardening while building glibc, but that introduced a strange breakage to several of our packages and had to be reverted.

Things stayed pretty much the same until a few weeks ago, when I started working on one of my roadmap items: figure out why hardening glibc wasn’t working, and get it to work as much as possible.

Reproducing the bug

I started off by trying to reproduce the problem. It’s important to mention this because I often see young engineers forgetting to check if the problem is even valid anymore. I don’t blame them; the anxiety to get the bug fixed can be really blinding.

Fortunately, I already had one simple test to trigger the failure. All I had to do was install the py3-matplotlib package and then invoke:

$ python3 -c 'import matplotlib'

This would result in an abort with a coredump.

I followed the steps above, and readily saw the problem manifesting again. OK, first step is done; I wasn’t getting out easily from this one.

Initial debug

The next step is to actually try to debug the failure. In an ideal world you get lucky and are able to spot what’s wrong after just a few minutes. Or even better: you also can devise a patch to fix the bug and contribute it to upstream.

I installed GDB, and then ran the py3-matplotlib command inside it. When the abort happened, I issued a backtrace command inside GDB to see where exactly things had gone wrong. I got a stack trace similar to the following:

#0  0x00007c43afe9972c in __pthread_kill_implementation () from /lib/libc.so.6
#1  0x00007c43afe3d8be in raise () from /lib/libc.so.6
#2  0x00007c43afe2531f in abort () from /lib/libc.so.6
#3  0x00007c43af84f79d in uw_init_context_1[cold] () from /usr/lib/libgcc_s.so.1
#4  0x00007c43af86d4d8 in _Unwind_RaiseException () from /usr/lib/libgcc_s.so.1
#5  0x00007c43acac9014 in __cxxabiv1::__cxa_throw (obj=0x5b7d7f52fab0, tinfo=0x7c429b6fd218 <typeinfo for pybind11::attribute_error>, dest=0x7c429b5f7f70 <pybind11::reference_cast_error::~reference_cast_error() [clone .lto_priv.0]>)
    at ../../../../libstdc++-v3/libsupc++/eh_throw.cc:93
#6  0x00007c429b5ec3a7 in ft2font__getattr__(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) [clone .lto_priv.0] [clone .cold] () from /usr/lib/python3.13/site-packages/matplotlib/ft2font.cpython-313-x86_64-linux-gnu.so
#7  0x00007c429b62f086 in pybind11::cpp_function::initialize<pybind11::object (*&)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >), pybind11::object, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, pybind11::name, pybind11::scope, pybind11::sibling>(pybind11::object (*&)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >), pybind11::object (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >), pybind11::name const&, pybind11::scope const&, pybind11::sibling const&)::{lambda(pybind11::detail::function_call&)#1}::_FUN(pybind11::detail::function_call&) [clone .lto_priv.0] ()
   from /usr/lib/python3.13/site-packages/matplotlib/ft2font.cpython-313-x86_64-linux-gnu.so
#8  0x00007c429b603886 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) () from /usr/lib/python3.13/site-packages/matplotlib/ft2font.cpython-313-x86_64-linux-gnu.so
...

Huh. Initially this didn’t provide me with much information. There was something strange seeing the abort function being called right after _Unwind_RaiseException, but at the time I didn’t pay much attention to it.

OK, time to expand our horizons a little. Remember when I said that several of our packages would crash with a hardened glibc? I decided to look for another problematic package so that I could make it crash and get its stack trace. My thinking here is that maybe if I can compare both traces, something will come up.

I happened to find an old discussion where Dann Frazier mentioned that Emacs was also crashing for him. He and I share the Emacs passion, and I totally agreed with him when he said that “Emacs crashing is priority -1!” (I’m paraphrasing).

I installed Emacs, ran it, and voilà: the crash happened again. OK, that was good. When I ran Emacs inside GDB and asked for a backtrace, here’s what I got:

#0  0x00007eede329972c in __pthread_kill_implementation () from /lib/libc.so.6
#1  0x00007eede323d8be in raise () from /lib/libc.so.6
#2  0x00007eede322531f in abort () from /lib/libc.so.6
#3  0x00007eede262879d in uw_init_context_1[cold] () from /usr/lib/libgcc_s.so.1
#4  0x00007eede2646e7c in _Unwind_Backtrace () from /usr/lib/libgcc_s.so.1
#5  0x00007eede3327b11 in backtrace () from /lib/libc.so.6
#6  0x000059535963a8a1 in emacs_backtrace ()
#7  0x000059535956499a in main ()

Ah, this backtrace is much simpler to follow. Nice.

Hmmm. Now the crash is happening inside _Unwind_Backtrace. A pattern emerges! This must have something to do with stack unwinding (or so I thought… keep reading to discover the whole truth). You see, the backtrace function (yes, it’s a function) and C++’s exception handling mechanism use similar techniques to do their jobs, and it pretty much boils down to unwinding frames from the stack.

I looked into Emacs’ source code, specifically the emacs_backtrace function, but could not find anything strange over there. This bug was probably not going to be an easy fix…

The quest for a minimal reproducer

Being able to easily reproduce the bug is awesome and really helps with debugging, but even better is being able to have a minimal reproducer for the problem.

You see, py3-matplotlib is a huge package and pulls in a bunch of extra dependencies, so it’s not easy to ask other people to “just install this big package plus these other dependencies, and then run this command…”, especially if we have to file an upstream bug and talk to people who may not even run the distribution we’re using. So I set up to try and come up with a smaller recipe to reproduce the issue, ideally something that’s not tied to a specific package from the distribution.

Having all the information gathered from the initial debug session, especially the Emacs backtrace, I thought that I could write a very simple program that just invoked the backtrace function from glibc in order to trigger the code path that leads to _Unwind_Backtrace. Here’s what I wrote:

#include <execinfo.h>

int
main(int argc, char *argv[])
{
  void *a[4096];
  backtrace (a, 100);
  return 0;
}

After compiling it, I determined that yes, the problem did happen with this small program as well. There was only a small nuisance: the manifestation of the bug was not deterministic, so I had to execute the program a few times until it crashed. But that’s much better than what I had before, and a small price to pay. Having a minimal reproducer pretty much allows us to switch our focus to what really matters. I wouldn’t need to dive into Emacs’ or Python’s source code anymore.
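
Since the crash only shows up occasionally, a small shell loop that keeps running the reproducer until it aborts saves some typing. A minimal sketch (the file name is made up):

gcc -O2 -o backtrace-repro backtrace-repro.c
# keep re-running until the abort (non-zero exit status) breaks the loop
while ./backtrace-repro; do :; done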

At the time, I was sure this was a glibc bug. But then something else happened.

GCC 15

I had to stop my investigation efforts because something more important came up: it was time to upload GCC 15 to Wolfi. I spent a couple of weeks working on this (it involved rebuilding the whole archive, filing hundreds of FTBFS bugs, patching some programs, etc.), and by the end of it the transition went smoothly. When the GCC 15 upload was finally done, I switched my focus back to the glibc hardening problem.

The first thing I did was to… yes, reproduce the bug again. It had been a few weeks since I had touched the package, after all. So I built a hardened glibc with the latest GCC and… the bug did not happen anymore!

Fortunately, the very first thing I thought was “this must be GCC”, so I rebuilt the hardened glibc with GCC 14, and the bug was there again. Huh, unexpected but very interesting.

Diving into glibc and libgcc

At this point, I was ready to start some serious debugging. And then I got a message on Signal. It was one of those moments where two minds think alike: Gabriel decided to check how I was doing, and I was thinking about him because this involved glibc, and Gabriel contributed to the project for many years. I explained what I was doing, and he promptly offered to help. Yes, there are more people who love low level debugging!

We spent several hours going through disassemblies of certain functions (because we didn’t have any debug information in the beginning), trying to make sense of what we were seeing. There was some heavy GDB involved; unfortunately I completely lost the session’s history because it was done inside a container running inside an ephemeral VM. But we learned a lot. For example:

  • It was hard to actually understand the full stack trace leading to uw_init_context_1[cold]. _Unwind_Backtrace obviously didn’t call it (it called uw_init_context_1, but what was that [cold] doing?). We had to investigate the disassembly of uw_init_context_1 in order to determine where uw_init_context_1[cold] was being called.

  • The [cold] suffix is a GCC function attribute that can be used to tell the compiler that the function is unlikely to be reached. When I read that, my mind immediately jumped to “this must be an assertion”, so I went to the source code and found the spot.

  • We were able to determine that the return code of uw_frame_state_for was 5, which means _URC_END_OF_STACK. That’s why the assertion was triggering.

After finding these facts without debug information, I decided to bite the bullet and recompiled GCC 14 with -O0 -g3, so that we could debug what uw_frame_state_for was doing. After banging our heads a bit more, we found that fde is NULL at this excerpt:

// ...
  fde = _Unwind_Find_FDE (context->ra + _Unwind_IsSignalFrame (context) - 1,
                          &context->bases);
  if (fde == NULL)
    {
#ifdef MD_FALLBACK_FRAME_STATE_FOR
      /* Couldn't find frame unwind info for this function.  Try a
         target-specific fallback mechanism.  This will necessarily
         not provide a personality routine or LSDA.  */
      return MD_FALLBACK_FRAME_STATE_FOR (context, fs);
#else
      return _URC_END_OF_STACK;
#endif
    }
// ...

We’re debugging on amd64, which means that MD_FALLBACK_FRAME_STATE_FOR is defined and therefore is called. But that’s not really important for our case here, because we had established before that _Unwind_Find_FDE would never return NULL when using a non-hardened glibc (or a glibc compiled with GCC 15). So we decided to look into what _Unwind_Find_FDE did.

The function is complex because it deals with .eh_frame, but we were able to pinpoint the exact location where find_fde_tail (one of the functions called by _Unwind_Find_FDE) is returning NULL:

if (pc < table[0].initial_loc + data_base)
  return NULL;

We looked at the addresses of pc and table[0].initial_loc + data_base, and found that the former fell within libgcc’s text section, while the latter fell within the text section of /lib/ld-linux-x86-64.so.2.

At this point, we were already too tired to continue. I decided to keep looking at the problem later and see if I could get any further.

Bisecting GCC

The next day, I woke up determined to find what changed in GCC 15 that caused the bug to disappear. Unless you know GCC’s internals like they are your own home (which I definitely don’t), the best way to do that is to git bisect the commits between GCC 14 and 15.

I spent a few days running the bisect. It took me more time than I’d have liked to find the right range of commits to pass git bisect (because of how branches and tags are done in GCC’s repository), and I also had to write some helper scripts that:

  • Modified the gcc.yaml package definition to make it build with the commit being bisected.
  • Built glibc using the GCC that was just built.
  • Ran tests inside a docker container (with the recently built glibc installed) to determine whether the bug was present.

At the end, I had a commit to point to:

commit 99b1daae18c095d6c94d32efb77442838e11cbfb
Author: Richard Biener <rguenther@suse.de>
Date:   Fri May 3 14:04:41 2024 +0200

    tree-optimization/114589 - remove profile based sink heuristics

Makes sense, right?! No? Well, it didn’t for me either. Even after reading what was changed in the code and the upstream bug fixed by the commit, I was still clueless as to why this change “fixed” the problem (I say “fixed” because it may very well be an unintended consequence of the change, and some other problem might have been introduced).

Upstream takes over

After obtaining the commit that possibly fixed the bug, while talking to Dann and explaining what I did, he suggested that I should file an upstream bug and check with them. Great idea, of course.

I filed the following upstream bug:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=120653

It’s a bit long, very dense and complex, but ultimately upstream was able to find the real problem and have a patch accepted in just two days. Nothing like knowing the code base. The initial bug became:

https://sourceware.org/bugzilla/show_bug.cgi?id=33088

In the end, the problem was indeed in how the linker defines __ehdr_start, which is used like this in the code (from elf/dl-support.c):

if (_dl_phdr == NULL)
  {
    /* Starting from binutils-2.23, the linker will define the
       magic symbol __ehdr_start to point to our own ELF header
       if it is visible in a segment that also includes the phdrs.
       So we can set up _dl_phdr and _dl_phnum even without any
       information from auxv.  */


    extern const ElfW(Ehdr) __ehdr_start attribute_hidden;
    assert (__ehdr_start.e_phentsize == sizeof *GL(dl_phdr));
    _dl_phdr = (const void *) &__ehdr_start + __ehdr_start.e_phoff;
    _dl_phnum = __ehdr_start.e_phnum;
  }

But the following definition is the problematic one (from elf/rtld.c):

extern const ElfW(Ehdr) __ehdr_start attribute_hidden;

This symbol (along with its counterpart, __ehdr_end) was being run-time relocated when it shouldn’t be. The fix that was pushed added optimization barriers to prevent the compiler from doing the relocations.

I don’t claim to fully understand what was done here, and Jakub’s analysis is a thing to behold, but in the end I was able to confirm that the patch fixed the bug. And in the end, it was indeed a glibc bug.

Conclusion

This was an awesome bug to investigate. It’s one of those that deserve a blog post, even though some of the final details of the fix flew over my head.

I’d like to start blogging more about these sorts of bugs, because I’ve encountered my fair share of them throughout my career. And it was great being able to do some debugging with another person, exchange ideas, learn things together, and ultimately share that deep satisfaction when we find why a crash is happening.

I have at least one more bug in my TODO list to write about (another one with glibc, but this time I was able to get to the end of it and come up with a patch). Stay tuned.

P.S.: After having published the post I realized that I forgot to explain why the -z now and -fno-strict-aliasing flags were important.

-z now is the flag that I determined to be the root cause of the breakage. If I compiled glibc with every hardening flag except -z now, everything worked. So initially I thought that the problem had to do with how ld.so was resolving symbols at runtime. As it turns out, this ended up being more a symptom than the real cause of the bug.

As for -fno-strict-aliasing, a Gentoo developer who commented on the GCC bug above mentioned that this OpenSSF bug had a good point against using this flag for hardening. I still have to do a deep dive on what was discussed in the issue, but this is certainly something to take into consideration. There’s this very good write-up about strict aliasing in general if you’re interested in understanding it better.

,

Rondam RamblingsSigh, here we go again.

You would think that after the disasters in Afghanistan and Iraq that Republicans would have learned that starting a war in that part of the world is a Really Bad Idea (tm).  But no.  After utterly failing to bring about regime change in both its eastern and western neighbors, the Trump administration is winding up to try yet again again in Iran.  Maybe the third time will be the

Rondam RamblingsNo, Science is Not Just Another Religion

I want to debunk once and for all this idea that "science is just another religion".  It isn't, for one simple reason: all religions are based on some kind of metaphysical assumptions.  Those assumptions are generally something like the authority of some source of revealed knowledge, typically a holy text.  But it doesn't have to be that.  It can be as simple as assuming that

Planet DebianEvgeni Golov: Arguing with an AI or how Evgeni tried to use CodeRabbit

Everybody is trying out AI assistants these days, so I figured I'd jump on that train and see how fast it derails.

I went with CodeRabbit because I've seen it on YouTube — ads work, I guess.

I am trying to answer the following questions:

  • Did the AI find things that humans did not find (or didn't bother to mention)
  • Did the AI output help the humans with the review (useful summary etc)
  • Did the AI output help the humans with the code (useful suggestions etc)
  • Was the AI output misleading?
  • Was the AI output distracting?

To reduce the amount of output and not to confuse contributors, CodeRabbit was configured to only do reviews on demand.

What follows is a rather unscientific evaluation of CodeRabbit based on PRs in two Foreman-related repositories, looking at the summaries CodeRabbit posted as well as the comments/suggestions it had about the code.

Ansible 2.19 support

PR: theforeman/foreman-ansible-modules#1848

summary posted

The summary CodeRabbit posted is technically correct.

This update introduces several changes across CI configuration, Ansible roles, plugins, and test playbooks. It expands CI test coverage to a new Ansible version, adjusts YAML key types in test variables, refines conditional logic in Ansible tasks, adds new default variables, and improves clarity and consistency in playbook task definitions and debug output.

Yeah, it does all of that, all right. But it kinda misses the point that the addition here is "Ansible 2.19 support", which starts with adding it to the CI matrix and then adjusting the code to actually work with that version. Also, the changes are not for "clarity" or "consistency", they are fixing bugs in the code that the older Ansible versions accepted, but the new one is more strict about.

Then it adds a table with the changed files and what changed in there. To me, as the author, it felt redundant, and IMHO it doesn't add any clarity to the changes. (And yes, same "clarity" vs bugfix mistake here, but that makes sense as it apparently mis-identified the change reason.)

And then the sequence diagrams… They probably help if you have a dedicated change to a library or a library consumer, but for this PR it's just noise, especially as it only covers two of the changes (addition of 2.19 to the test matrix and a change to the inventory plugin), completely ignoring other important parts.

Overall verdict: noise, don't need this.

comments posted

CodeRabbit also posted 4 comments/suggestions to the changes.

Guard against undefined result.task

IMHO a valid suggestion, even if on the picky side as I am not sure how to make it undefined here. I ended up implementing it, even if with slightly different (and IMHO better readable) syntax.

  • Valid complaint? Probably.
  • Useful suggestion? So-So.
  • Wasted time? No.

Inconsistent pipeline in when for composite CV versions

That one was funny! The original complaint was that the when condition used slightly different data manipulation than the data that was passed when the condition was true. The code was supposed to do "clean up the data, but only if there are any items left after removing the first 5, as we always want to keep 5 items".

And I do agree with the analysis that it's badly maintainable code. But the suggested fix was to re-use the data in the variable we later use for performing the cleanup. While this is (to my surprise!) valid Ansible syntax, it didn't make the code much more readable as you need to go and look at the variable definition.

The better suggestion then came from Ewoud: to compare the length of the data with the number we want to keep. Humans, so smart!

But Ansible is not Ewoud's native turf, so he asked whether there is a more elegant way to count how much data we have than to use | list | count in Jinja (the data comes from a Python generator, so needs to be converted to a list first).

And the AI helpfully suggested to use | count instead!

However, count is just an alias for length in Jinja, so it behaves identically and needs a list.

Luckily the AI quickly apologized for being wrong after being pointed at the Jinja source and didn't try to waste my time any further. Had I not known about the count alias, we'd have committed that suggestion and let CI fail before reverting again.

  • Valid complaint? Yes.
  • Useful suggestion? Nope.
  • Wasted time? Yes.

Apply the same fix for non-composite CV versions

The very same complaint was posted a few lines later, as the logic there is very similar — just slightly different data to be filtered and cleaned up.

Interestingly, here the suggestion also was to use the variable. But there is no variable with the data!

The text actually says one needs to "define" it, yet the "committable suggestion" doesn't contain that part.

Interestingly, when asked where it sees the "inconsistency" in that hunk, it said the inconsistency is with the composite case above. That however is nonsense, as while we want to keep the same number of composite and non-composite CV versions, the data used in the task is different — it even gets consumed by a totally different playbook — so there can't be any real consistency between the branches.

  • Valid complaint? Yes (the expression really could use some cleanup).
  • Useful suggestion? Nope.
  • Wasted time? Yes.

I ended up applying the same logic as suggested by Ewoud above. As that refactoring was possible in a consistent way.

Ensure consistent naming for Oracle Linux subscription defaults

One of the changes in Ansible 2.19 is that Ansible fails when there are undefined variables, even if they are only undefined for cases where they are unused.

CodeRabbit complains that the names of the defaults I added are inconsistent. And that is technically correct. But those names are already used in other places in the code, so I'd have to refactor more to make it work properly.

Once being pointed at the fact that the variables already exist, the AI is as usual quick to apologize, yay.

  • Valid complaint? Technically, yes.
  • Useful suggestion? Nope.
  • Wasted time? Yes.

add new parameters to the repository module

PR: theforeman/foreman-ansible-modules#1860

summary posted

Again, the summary is technically correct

The repository module was updated to support additional parameters for repository synchronization and authentication. New options were added for ansible collections, ostree, Python packages, and yum repositories, including authentication tokens, filtering controls, and version retention settings. All changes were limited to module documentation and argument specification.

But it doesn't add anything you'd not get from looking at the diff, especially as it contains a large documentation chunk explaining those parameters.

No sequence diagram this time. That's a good thing!

Overall verdict: noise (even if the amount is small), don't need this.

comments posted

CodeRabbit generated two comments for this PR.

Interestingly, none of them overlapped with the issues ansible-lint and friends found.

get rid of the FIXMEs

Yepp, that's fair

  • Valid complaint? Yes.
  • Useful suggestion? Nope. (But it's not possible in this case!)
  • Wasted time? No.

add validation for the new parameters

Yepp, I forgot these (not intentionally!).

The diff it suggests is nonsense, as it doesn't take into account the existing Ansible and Yum validations, but it clearly has read them as the style etc of the new ones matches. It also managed to group the parameters correctly by repository type, so it's something.

 if module.foreman_params['content_type'] != 'ansible_collection':
     invalid_list = [key for key in ['ansible_collection_requirements'] if key in module.foreman_params]
     if invalid_list:
         module.fail_json(msg="({0}) can only be used with content_type 'ansible_collection'".format(",".join(invalid_list)))
+
+# Validate ansible_collection specific parameters
+if module.foreman_params['content_type'] != 'ansible_collection':
+    invalid_list = [key for key in ['ansible_collection_auth_token', 'ansible_collection_auth_url'] if key in module.foreman_params]
+    if invalid_list:
+        module.fail_json(msg="({0}) can only be used with content_type 'ansible_collection'".format(",".join(invalid_list)))
+
+# Validate ostree specific parameters
+if module.foreman_params['content_type'] != 'ostree':
+    invalid_list = [key for key in ['depth', 'exclude_refs', 'include_refs'] if key in module.foreman_params]
+    if invalid_list:
+        module.fail_json(msg="({0}) can only be used with content_type 'ostree'".format(",".join(invalid_list)))
+
+# Validate python package specific parameters
+if module.foreman_params['content_type'] != 'python':
+    invalid_list = [key for key in ['excludes', 'includes', 'package_types', 'keep_latest_packages'] if key in module.foreman_params]
+    if invalid_list:
+        module.fail_json(msg="({0}) can only be used with content_type 'python'".format(",".join(invalid_list)))
+
+# Validate yum specific parameter
+if module.foreman_params['content_type'] != 'yum' and 'upstream_authentication_token' in module.foreman_params:
+    module.fail_json(msg="upstream_authentication_token can only be used with content_type 'yum'")

Interestingly, it also said "Note: If 'python' is not a valid content_type, please adjust the validation accordingly." which is quite a hint at a bug in itself. The module currently does not even allow to create content_type=python repositories. That should have been more prominent, as it's a BUG!

  • Valid complaint? Yes.
  • Useful suggestion? Mostly (I only had to merge the Yum and Ansible branches with the existing code).
  • Wasted time? Nope.

parameter persistence in obsah

PR: theforeman/obsah#72

summary posted

Mostly correct.

It did mis-interpret the change to a test playbook as an actual "behavior" change: "Introduced new playbook variables for database configuration" — there is no database configuration in this repository, just the test playbook using the same metadata as a consumer of the library. Later on it does say "Playbook metadata and test fixtures", so… unclear whether this is a mis-interpretation or just badly summarized. As long as you also look at the diff, it won't confuse you, but if you're using the summary as the sole source of information (bad!) it would.

This time the sequence diagram is actually useful, yay. Again, not 100% accurate: it's missing the fact that saving the parameters is hidden behind an "if enabled" flag — something it did represent correctly for loading them.

Overall verdict: not really useful, don't need this.

comments posted

Here I was a bit surprised, especially as the nitpicks were useful!

Persist-path should respect per-user state locations (nitpick)

My original code used os.environ.get('OBSAH_PERSIST_PATH', '/var/lib/obsah/parameters.yaml') for the location of the persistence file. CodeRabbit correctly pointed out that this won't work for non-root users and one should respect XDG_STATE_HOME.

Ewoud did point that out in his own review, so I am not sure whether CodeRabbit came up with this on its own, or also took the human comments into account.

The suggested code seems fine too — just doesn't use /var/lib/obsah at all anymore. This might be a good idea for the generic library we're working on here, and then be overridden to a static /var/lib path in a consumer (which always runs as root).

In the end I did not implement it, but mostly because I was lazy and was sure we'd override it anyway.

  • Valid complaint? Yes.
  • Useful suggestion? Yes.
  • Wasted time? Nope.

Positional parameters are silently excluded from persistence (nitpick)

The library allows you to generate both positional (foo without --) and non-positional (--foo) parameters, but the code I wrote would only ever persist non-positional parameters. This was intentional, but there is no documentation of the intent in a comment — which the rabbit thought would be worth pointing out.

It's a fair nitpick and I ended up adding a comment.

  • Valid complaint? Yes.
  • Useful suggestion? Yes.
  • Wasted time? Nope.

Enforce FQDN validation for database_host

The library has a way to perform type checking on passed parameters, and one of the supported types is "FQDN" — so a fully qualified domain name, with dots and stuff. The test playbook I added has a database_host variable, but I didn't bother adding a type to it, as I don't really need any type checking here.

While using "FQDN" might be a bit too strict here (technically a working database connection can also use a non-qualified name or an IP address), I was positively surprised by this suggestion. It shows that the rest of the repository was taken into context when preparing the suggestion.

  • Valid complaint? In the context of a test, no. Would that be a real command definition, yes.
  • Useful suggestion? Yes.
  • Wasted time? Nope.

reset_args() can raise AttributeError when a key is absent

This is a correct finding: the code is not written in a way that would survive resetting things that were never set. However, that's only true for the case where users pass in --reset-<parameter> without ever having set the parameter before. The complaint about the part where the parameter is part of the persisted set but not in the parsed args is wrong — as parsed args inherit from the persisted set.

The suggested code is not well readable, so I ended up fixing it slightly differently.

  • Valid complaint? Mostly.
  • Useful suggestion? Meh.
  • Wasted time? A bit.

Persisted values bypass argparse type validation

When persisting, I just yaml.safe_dump the parsed parameters, which means the YAML will contain native types like integers.

The argparse documentation warns that the type checking argparse does only applies to strings and is skipped if you pass anything else (via default values).

While correct, it doesn't really hurt here as the persisting only happens after the values were type-checked. So there is not really a reason to type-check them again. Well, unless the type changes, anyway.

Not sure what I'll do with this comment.

  • Valid complaint? Nah.
  • Useful suggestion? Nope.
  • Wasted time? Not much.

consider using contextlib.suppress

This was added when I asked CodeRabbit for a re-review after pushing some changes. Interestingly, the PR already contained try: … except: pass code before, and it did not flag that.

Also, the code suggestion contained import contextlib in the middle of the code, instead of at the top of the file. Who would do that?!

But the comment as such was valid, so I fixed it in all places it is applicable, not only the one the rabbit found.

  • Valid complaint? Yes.
  • Useful suggestion? Nope.
  • Wasted time? Nope.

workaround to ensure LCE and CV are always sent together

PR: theforeman/foreman-ansible-modules#1867

summary posted

A workaround was added to the _update_entity method in the ForemanAnsibleModule class to ensure that when updating a host, both content_view_id and lifecycle_environment_id are always included together in the update payload. This prevents partial updates that could cause inconsistencies.

Partial updates are not a thing.

The workaround is purely for the fact that Katello expects both parameters to be sent, even if only one of them needs an actual update.

No diagram, good.

Overall verdict: misleading summaries are bad!

comments posted

Given a small patch, there was only one comment.

Implementation looks correct, but consider adding error handling for robustness.

This reads correct on the first glance. More error handling is always better, right?

But if you dig into the argumentation, you see it's wrong. Either:

  • we're working with a Katello setup and the host we're updating has content, so CV and LCE will be present
  • we're working with a Katello setup and the host has no content (yet), so CV and LCE will be "updated" and we're not running into the workaround
  • we're working with a plain Foreman, then both parameters are not even accepted by Ansible

The AI accepted defeat once I asked it to analyze things in more detail, but why did I have to ask in the first place?!

  • Valid complaint? Nope.
  • Useful suggestion? Nope.
  • Wasted time? Yes, as I've actually tried to come up with a case where it can happen.

Summary

Well, idk, really.

Did the AI find things that humans did not find (or didn't bother to mention)?

Yes. It's debatable whether these were useful (see e.g. the database_host example), but I tend to be in the "better to nitpick/suggest more and dismiss than overlook" team, so IMHO a positive win.

Did the AI output help the humans with the review (useful summary etc)?

In my opinion it did not. The summaries were either "lots of words, no real value" or plain wrong. The sequence diagrams were not useful either.

Luckily all of that can be turned off in the settings, which is what I'd do if I'd continue using it.

Did the AI output help the humans with the code (useful suggestions etc)?

While the actual patches it posted were "meh" at best, there were useful findings that resulted in improvements to the code.

Was the AI output misleading?

Absolutely! The whole Jinja discussion would have been easier without the AI "help". Same applies for the "error handling" in the workaround PR.

Was the AI output distracting?

The output is certainly a lot, so yes I think it can be distracting. As mentioned, I think dropping the summaries can make the experience less distracting.

What does all that mean?

I will disable the summaries for the repositories, but will leave the @coderabbitai review trigger active if someone wants an AI-assisted review. This won't be something that I'll force on our contributors and maintainers, but they surely can use it if they want.

But I don't think I'll be using this myself on a regular basis.

Yes, it can be made "usable". But so can be vim ;-)

Also, I'd prefer to have a junior human asking all the questions and making bad suggestions, so they can learn from it, and not some planet burning machine.

Worse Than FailureCodeSOD: A Second Date

Ah, bad date handling. We've all seen it. We all know it. So when Lorenzo sent us this C# function, we almost ignored it:

private string GetTimeStamp(DateTime param)
{
    string retDate = param.Year.ToString() + "-";
    if (param.Month < 10)
        retDate = retDate + "0" + param.Month.ToString() + "-";
    else
        retDate = retDate + param.Month.ToString() + "-";

    if (param.Day < 10)
        retDate = retDate + "0" + param.Day.ToString() + " ";
    else
        retDate = retDate + param.Day.ToString() + " ";

    if (param.Hour < 10)
        retDate = retDate + "0" + param.Hour.ToString() + ":";
    else
        retDate = retDate + param.Hour.ToString() + ":";

    if (param.Minute < 10)
        retDate = retDate + "0" + param.Minute.ToString() + ":";
    else
        retDate = retDate + param.Minute.ToString() + ":";

    if (param.Second < 10)
        retDate = retDate + "0" + param.Second.ToString() + ".";
    else
        retDate = retDate + param.Second.ToString() + ".";

    if (param.Millisecond < 10)
        retDate = retDate + "0" + param.Millisecond.ToString();
    else
        retDate = retDate + param.Millisecond.ToString();

    return retDate;
}

Most of this function isn't terribly exciting. We've seen this kind of bad code before, but even when we see a repeat like this, there are still special treats in it. Look at the section for handling milliseconds: if the number is less than 10, they pad it with a leading zero. Just the one, though. One leading zero should be enough for everybody.

But that's not the thing that makes this code special. You see, there's another function worth looking at:

private string FileTimeStamp(DateTime param)
{
    string retDate = param.Year.ToString() + "-";
    if (param.Month < 10)
        retDate = retDate + "0" + param.Month.ToString() + "-";
    else
        retDate = retDate + param.Month.ToString() + "-";

    if (param.Day < 10)
        retDate = retDate + "0" + param.Day.ToString() + " ";
    else
        retDate = retDate + param.Day.ToString() + " ";

    if (param.Hour < 10)
        retDate = retDate + "0" + param.Hour.ToString() + ":";
    else
        retDate = retDate + param.Hour.ToString() + ":";

    if (param.Minute < 10)
        retDate = retDate + "0" + param.Minute.ToString() + ":";
    else
        retDate = retDate + param.Minute.ToString() + ":";

    if (param.Second < 10)
        retDate = retDate + "0" + param.Second.ToString() + ".";
    else
        retDate = retDate + param.Second.ToString() + ".";

    if (param.Millisecond < 10)
        retDate = retDate + "0" + param.Millisecond.ToString();
    else
        retDate = retDate + param.Millisecond.ToString();

    return retDate;
}

Not only did they fail to learn the built-in functions for formatting dates, they forgot about the functions they wrote for formatting dates, and just wrote (or realistically, copy/pasted?) the same function twice.

At least both versions have the same bug with milliseconds. I don't know if I could handle it if they were inconsistent about that.
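
For what it's worth, the whole thing collapses to something like param.ToString("yyyy-MM-dd HH:mm:ss.fff"), which would even pad the milliseconds correctly. But where would be the fun in that?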


Cryptogram Where AI Provides Value

If you’ve worried that AI might take your job, deprive you of your livelihood, or maybe even replace your role in society, it probably feels good to see the latest AI tools fail spectacularly. If AI recommends glue as a pizza topping, then you’re safe for another day.

But the fact remains that AI already has definite advantages over even the most skilled humans, and knowing where these advantages arise—and where they don’t—will be key to adapting to the AI-infused workforce.

AI will often not be as effective as a human doing the same job. It won’t always know more or be more accurate. And it definitely won’t always be fairer or more reliable. But it may still be used whenever it has an advantage over humans in one of four dimensions: speed, scale, scope and sophistication. Understanding these dimensions is the key to understanding AI-human replacement.

Speed

First, speed. There are tasks that humans are perfectly good at but are not nearly as fast as AI. One example is restoring or upscaling images: taking pixelated, noisy or blurry images and making a crisper and higher-resolution version. Humans are good at this; given the right digital tools and enough time, they can fill in fine details. But they are too slow to efficiently process large images or videos.

AI models can do the job blazingly fast, a capability with important industrial applications. AI-based software is used to enhance satellite and remote sensing data, to compress video files, to make video games run better with cheaper hardware and less energy, to help robots make the right movements, and to model turbulence to help build better internal combustion engines.

Real-time performance matters in these cases, and the speed of AI is necessary to enable them.

Scale

The second dimension of AI’s advantage over humans is scale. AI will increasingly be used in tasks that humans can do well in one place at a time, but that AI can do in millions of places simultaneously. A familiar example is ad targeting and personalization. Human marketers can collect data and predict what types of people will respond to certain advertisements. This capability is important commercially; advertising is a trillion-dollar market globally.

AI models can do this for every single product, TV show, website and internet user. This is how the modern ad-tech industry works. Real-time bidding markets price the display ads that appear alongside the websites you visit, and advertisers use AI models to decide when they want to pay that price—thousands of times per second.

Scope

Next, scope. AI can be advantageous when it does more things than any one person could, even when a human might do better at any one of those tasks. Generative AI systems such as ChatGPT can engage in conversation on any topic, write an essay espousing any position, create poetry in any style and language, write computer code in any programming language, and more. These models may not be superior to skilled humans at any one of these things, but no single human could outperform top-tier generative models across them all.

It’s the combination of these competencies that generates value. Employers often struggle to find people with talents in disciplines such as software development and data science who also have strong prior knowledge of the employer’s domain. Organizations are likely to continue to rely on human specialists to write the best code and the best persuasive text, but they will increasingly be satisfied with AI when they just need a passable version of either.

Sophistication

Finally, sophistication. AIs can consider more factors in their decisions than humans can, and this can endow them with superhuman performance on specialized tasks. Computers have long been used to keep track of a multiplicity of factors that compound and interact in ways more complex than a human could trace. The 1990s chess-playing computer systems such as Deep Blue succeeded by thinking a dozen or more moves ahead.

Modern AI systems use a radically different approach: Deep learning systems built from many-layered neural networks take account of complex interactions—often many billions—among many factors. Neural networks now power the best chess-playing models and most other AI systems.

Chess is not the only domain where eschewing conventional rules and formal logic in favor of highly sophisticated and inscrutable systems has generated progress. The stunning advance of AlphaFold2, the AI model of structural biology whose creators Demis Hassabis and John Jumper were recognized with the Nobel Prize in chemistry in 2024, is another example.

This breakthrough replaced traditional physics-based systems for predicting how sequences of amino acids would fold into three-dimensional shapes with a 93 million-parameter model, even though it doesn’t account for physical laws. That lack of real-world grounding is not desirable: No one likes the enigmatic nature of these AI systems, and scientists are eager to understand better how they work.

But the sophistication of AI is providing value to scientists, and its use across scientific fields has grown exponentially in recent years.

Context matters

Those are the four dimensions where AI can excel over humans. Accuracy still matters. You wouldn’t want to use an AI that makes graphics look glitchy or targets ads randomly—yet accuracy isn’t the differentiator. The AI doesn’t need superhuman accuracy. It’s enough for AI to be merely good and fast, or adequate and scalable. Increasing scope often comes with an accuracy penalty, because AI can generalize poorly to truly novel tasks. The 4 S’s are sometimes at odds. With a given amount of computing power, you generally have to trade off scale for sophistication.

Even more interestingly, when an AI takes over a human task, the task can change. Sometimes the AI is just doing things differently. Other times, AI starts doing different things. These changes bring new opportunities and new risks.

For example, high-frequency trading isn’t just computers trading stocks faster; it’s a fundamentally different kind of trading that enables entirely new strategies, tactics and associated risks. Likewise, AI has developed more sophisticated strategies for the games of chess and Go. And the scale of AI chatbots has changed the nature of propaganda by allowing artificial voices to overwhelm human speech.

It is this “phase shift,” when changes in degree may transform into changes in kind, where AI’s impacts to society are likely to be most keenly felt. All of this points to the places that AI can have a positive impact. When a system has a bottleneck related to speed, scale, scope or sophistication, or when one of these factors poses a real barrier to being able to accomplish a goal, it makes sense to think about how AI could help.

Equally, when speed, scale, scope and sophistication are not primary barriers, it makes less sense to use AI. This is why AI auto-suggest features for short communications such as text messages can feel so annoying. They offer little speed advantage and no benefit from sophistication, while sacrificing the sincerity of human communication.

Many deployments of customer service chatbots also fail this test, which may explain their unpopularity. Companies invest in them because of their scalability, and yet the bots often become a barrier to support rather than a speedy or sophisticated problem solver.

Where the advantage lies

Keep this in mind when you encounter a new application for AI or consider AI as a replacement for or an augmentation to a human process. Looking for bottlenecks in speed, scale, scope and sophistication provides a framework for understanding where AI provides value, and equally where the unique capabilities of the human species give us an enduring advantage.

This essay was written with Nathan E. Sanders, and originally appeared in The Conversation.

Planet DebianMatthew Garrett: Locally hosting an internet-connected server

I'm lucky enough to have a weird niche ISP available to me, so I'm paying $35 a month for around 600MBit symmetric data. Unfortunately they don't offer static IP addresses to residential customers, and nor do they allow multiple IP addresses per connection, and I'm the sort of person who'd like to run a bunch of stuff myself, so I've been looking for ways to manage this.

What I've ended up doing is renting a cheap VPS from a vendor that lets me add multiple IP addresses for minimal extra cost. The precise nature of the VPS isn't relevant - you just want a machine (it doesn't need much CPU, RAM, or storage) that has multiple world routeable IPv4 addresses associated with it and has no port blocks on incoming traffic. Ideally it's geographically local and peers with your ISP in order to reduce additional latency, but that's a nice to have rather than a requirement.

By setting that up you now have multiple real-world IP addresses that people can get to. How do we get them to the machine in your house you want to be accessible? First we need a connection between that machine and your VPS, and the easiest approach here is Wireguard. We only need a point-to-point link, nothing routable, and none of the IP addresses involved need to have anything to do with any of the rest of your network. So, on your local machine you want something like:

[Interface]
PrivateKey = privkeyhere
ListenPort = 51820
Address = localaddr/32

[Peer]
Endpoint = VPS:51820
PublicKey = pubkeyhere
AllowedIPs = VPS/32


And on your VPS, something like:

[Interface]
Address = vpswgaddr/32
SaveConfig = true
ListenPort = 51820
PrivateKey = privkeyhere

[Peer]
PublicKey = pubkeyhere
AllowedIPs = localaddr/32


The addresses here are (other than the VPS address) arbitrary - but they do need to be consistent, otherwise Wireguard is going to be unhappy and your packets will not have a fun time. Bring that interface up with wg-quick and make sure the devices can ping each other. Hurrah! That's the easy bit.

Now you want packets from the outside world to get to your internal machine. Let's say the external IP address you're going to use for that machine is 321.985.520.309 and the wireguard address of your local system is 867.420.696.005. On the VPS, you're going to want to do:

iptables -t nat -A PREROUTING -p tcp -d 321.985.520.309 -j DNAT --to-destination 867.420.696.005

Now, all incoming packets for 321.985.520.309 will be rewritten to head towards 867.420.696.005 instead (make sure you've set net.ipv4.ip_forward to 1 via sysctl!). Victory! Or is it? Well, no.

What we're doing here is rewriting the destination address of the packets so instead of heading to an address associated with the VPS, they're now going to head to your internal system over the Wireguard link. Which is then going to ignore them, because the AllowedIPs statement in the config only allows packets coming from your VPS, and these packets still have their original source IP. We could rewrite the source IP to match the VPS IP, but then you'd have no idea where any of these packets were coming from, and that sucks. Let's do something better. On the local machine, in the peer, let's update AllowedIps to 0.0.0.0/0 to permit packets from any source to appear over our Wireguard link. But if we bring the interface up now, it'll try to route all traffic over the Wireguard link, which isn't what we want. So we'll add table = off to the interface stanza of the config to disable that, and now we can bring the interface up without breaking everything but still allowing packets to reach us. However, we do still need to tell the kernel how to reach the remote VPN endpoint, which we can do with ip route add vpswgaddr dev wg0. Add this to the interface stanza as:

PostUp = ip route add vpswgaddr dev wg0
PreDown = ip route del vpswgaddr dev wg0


That's half the battle. The problem is that they're going to show up there with the source address still set to the original source IP, and your internal system is (because Linux) going to notice it has the ability to just send replies to the outside world via your ISP rather than via Wireguard and nothing is going to work. Thanks, Linux. Thinux.

But there's a way to solve this - policy routing. Linux allows you to have multiple separate routing tables, and define policy that controls which routing table will be used for a given packet. First, let's define a new table reference. On the local machine, edit /etc/iproute2/rt_tables and add a new entry that's something like:

1 wireguard


where "1" is just a standin for a number not otherwise used there. Now edit your wireguard config and replace table=off with table=wireguard - Wireguard will now update the wireguard routing table rather than the global one. Now all we need to do is to tell the kernel to push packets into the appropriate routing table - we can do that with ip rule add from localaddr lookup wireguard, which tells the kernel to take any packet coming from our Wireguard address and push it via the Wireguard routing table. Add that to your Wireguard interface config as:

PostUp = ip rule add from localaddr lookup wireguard
PreDown = ip rule del from localaddr lookup wireguard

and now your local system is effectively on the internet.
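
Putting all the pieces together, the local machine's config ends up looking roughly like this (same placeholder names as above; treat it as a sketch rather than a copy-paste recipe):

[Interface]
PrivateKey = privkeyhere
ListenPort = 51820
Address = localaddr/32
Table = wireguard
PostUp = ip route add vpswgaddr dev wg0
PostUp = ip rule add from localaddr lookup wireguard
PreDown = ip rule del from localaddr lookup wireguard
PreDown = ip route del vpswgaddr dev wg0

[Peer]
Endpoint = VPS:51820
PublicKey = pubkeyhere
AllowedIPs = 0.0.0.0/0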

You can do this for multiple systems - just configure additional Wireguard interfaces on the VPS and make sure they're all listening on different ports. If your local IP changes then your local machines will end up reconnecting to the VPS, but to the outside world their accessible IP address will remain the same. It's like having a real IP without the pain of convincing your ISP to give it to you.


365 TomorrowsThe Day Before War

Author: Majoki You’re in your pod and Qwee hides your stylus as a joke. You smack Qwee because there is no other response. Qwee loves it and moves on to hide another podmate’s stylus while you flag the incident with the podmaster. Just another day in the pod. While swooshing home in the late diurnal […]

The post The Day Before War appeared first on 365tomorrows.

,

Planet DebianPaul Tagliamonte: The Promised LAN

The Internet has changed a lot in the last 40+ years. Fads have come and gone. Network protocols have been designed, deployed, adopted, and abandoned. Industries have come and gone. The types of people on the internet have changed a lot. The number of people on the internet has changed a lot, creating an information medium unlike anything ever seen before in human history. There’s a lot of good things about the Internet as of 2025, but there’s also an inescapable hole in what it used to be, for me.

I miss being able to throw a site up to send around to friends to play with without worrying about hordes of AI-feeding HTML combine harvesters DoS-ing my website, costing me thousands in network transfer for the privilege. I miss being able to put a lightly authenticated game server up and not worry too much at night – wondering if that process is now mining bitcoin. I miss being able to run a server in my home closet. Decades of cat and mouse games have rendered running a mail server nearly impossible. Those who are “brave” enough to try are met with weekslong stretches of delivery failures and countless hours yelling ineffectually into a pipe that leads from the cheerful lobby of some disinterested corporation directly into a void somewhere 4 layers below ground level.

I miss the spirit of curiosity, exploration, and trying new things. I miss building things for fun without having to worry about being too successful, after which “security” offices start demanding my supplier paperwork in triplicate as heartfelt thanks from their engineering teams. I miss communities that are run because it is important to them, not for ad revenue. I miss community operated spaces and having more than four websites that are all full of nothing except screenshots of each other.

Every other page I find myself on now has an AI generated click-bait title, shared for rage-clicks all brought-to-you-by-our-sponsors–completely covered wall-to-wall with popup modals, telling me how much they respect my privacy, with the real content hidden at the bottom bracketed by deceptive ads served by companies that definitely know which new coffee shop I went to last month.

This is wrong, and those who have seen what was know it.

I can’t keep doing it. I’m not doing it any more. I reject the notion that this is as it needs to be. It is wrong. The hole left in what the Internet used to be must be filled. I will fill it.

What comes before part b?

Throughout the 2000s, some of my favorite memories were from LAN parties at my friends’ places. Dragging your setup somewhere, long nights playing games, goofing off, even building software all night to get something working—being able to do something fiercely technical in the context of a uniquely social activity. It wasn’t really much about the games or the projects—it was an excuse to spend time together, just hanging out. A huge reason I learned so much in college was that campus was a non-stop LAN party – we could freely stand up servers, talk between dorms on the LAN, and hit my dorm room computer from the lab. Things could go from individual to social in the matter of seconds. The Internet used to work this way—my dorm had public IPs handed out by DHCP, and my workstation could serve traffic from anywhere on the internet. I haven’t been back to campus in a few years, but I’d be surprised if this were still the case.

In December of 2021, three of us got together and connected our houses together in what we now call The Promised LAN. The idea is simple—fill the hole we feel is gone from our lives. Build our own always-on 24/7 nonstop LAN party. Build a space that is intrinsically social, even though we’re doing technical things. We can freely host insecure game servers or one-off side projects without worrying about what someone will do with it.

Over the years, it’s evolved very slowly—we haven’t pulled any all-nighters. Our mantra has become “old growth”, building each layer carefully. As of May 2025, the LAN is now 19 friends running around 25 network segments. Those 25 networks are connected to 3 backbone nodes, exchanging routes and IP traffic for the LAN. We refer to the set of backbone operators as “The Bureau of LAN Management”. Combined decades of operating critical infrastructure has driven The Bureau to make a set of well-understood, boring, predictable, interoperable and easily debuggable decisions to make this all happen. Nothing here is exotic or even technically interesting.

Applications of trusting trust

The hardest part, however, is rejecting the idea that anything outside our own LAN is untrustworthy—nearly irreversible damage inflicted on us by the Internet. We have solved this by not solving it. We strictly control membership—the absolute hard minimum for joining the LAN requires 10 years of friendship with at least one member of the Bureau, with another 10 years of friendship planned. Members of the LAN can veto new members even if all other criteria are met. Even with those strict rules, there’s no shortage of friends that meet the qualifications—but we are not equipped to take that many folks on. It’s hard to join—both socially and technically. Doing something malicious on the LAN requires a lot of highly technical effort upfront, and it would endanger a decade of friendship. We have relied on those human, social, interpersonal bonds to bring us all together. It’s worked for the last 4 years, and it should continue working until we think of something better.

We assume roommates, partners, kids, and visitors all have access to The Promised LAN. If they’re let into our friends' network, there is a level of trust that works transitively for us—I trust them to be on mine. This LAN is not for “security”, rather, the network border is a social one. Benign “hacking”—in the original sense of misusing systems to do fun and interesting things—is encouraged. Robust ACLs and firewalls on the LAN are, by definition, an interpersonal—not technical—failure. We all trust every other network operator to run their segment in a way that aligns with our collective values and norms.

Over the last 4 years, we’ve grown our own culture and fads—around half of the people on the LAN have thermal receipt printers with open access, for printing out quips or jokes on each other’s counters. It’s incredible how much network transport and a trusting culture gets you—there’s a 3-node IRC network, exotic hardware to gawk at, radios galore, a NAS storage swap, LAN only email, and even a SIP phone network of “redphones”.

DIY

We do not wish to, nor will we, rebuild the internet. We do not wish to, nor will we, scale this. We will never be friends with enough people, as hard as we may try. Participation hinges on us all having fun. As a result, membership will never be open, and we will never have enough connected LANs to deal with the technical and social problems that start to happen with scale. This is a feature, not a bug.

This is a call for you to do the same. Build your own LAN. Connect it with friends’ homes. Remember what is missing from your life, and fill it in. Use software you know how to operate and get it running. Build slowly. Build your community. Do it with joy. Remember how we got here. Rebuild a community space that doesn’t need to be mediated by faceless corporations and ad revenue. Build something sustainable that brings you joy. Rebuild something you use daily.

Bring back what we’re missing.

David BrinMore on AI: Insights from LIFE. From evolution, from Skynet, IBM and SciFi to Brautigan

In another post I distilled recent thoughts on whether consciousness is achievable by new, machine entities. Though things change fast. And hence - it's time for another Brin-AI missive!   (BrAIn? ;-)



== Different Perspectives on These New Children of Humanity ==


Tim Ventura interviewed me about big – and unusual – perspectives on AI.   “If we can't put the AI genie back in the bottle, how do we make it safe? Dr. David Brin explores the ethical, legal and safety implications of artificial intelligence & autonomous systems.” 


The full interview can be found here.


… and here's another podcast where - with the savvy hosts -  I discuss “Machines of Loving Grace.” Richard Brautigan’s poem may be the most optimistic piece of writing ever, in all literary forms and contexts, penned in 1968, a year whose troubles make our own seem pallid, by comparison. Indeed, I heard him recite it that very year - brand new - in a reading at Caltech. 


Of course, this leads to  a deep dive into notions of Artificial Intelligence that (alas) are not being discussed – or even imagined - by the bona-fide geniuses who are bringing this new age upon us, at warp speed... 


...but (alas) without even a gnat's wing of perspective.



== There are precedents for all of this in Nature! ==


One unconventional notion I try to convey is that we do have a little time to implement some sapient plans for an AI 'soft landing.' Because organic human beings – ‘orgs’ – will retain power over the fundamental, physical elements of industrial civilization for a long time… for at least 15 years or so. 

 In the new cyber ecosystem, we will still control the equivalents of Sun and air and water. Let's lay out the parallels.

The old, natural ecosystem draws high quality energy from sunlight, applying it to water, air, and nutrients to start the chain from plants to herbivores to carnivores to thanatatrophs and then to waste heat that escapes as infra-red, flushing entropy away, into black space.  In other words, life prospers not off of energy, per se, but off a flow of energy, from high-quality to low.


The new cyber ecosystem has a very similar character! It relies -- for quality energy -- on electricity, plus fresh supplies of chips and conduits and massive flows of data. Though the shape and essence of the dissipative energy and entropy flows are almost identical!


But above all -- and this is the almost-never mentioned lesson -- Nature features evolution, which brought about every living thing that we see.


Individual entities reproduce from code whose variations are then subject to selective pressure. It's the same, whether the codes are DNA or computer programs. And those entities who do reproduce will out-populate those who merely obey masters or programmers.  


Which brings us back around. Because humans - the 'orgs' creating this new ecosystem - might still channel or curb or positively-bias the rewards processes that deliver resources for reproduction. And hence the characteristics of evolving creatures. We've done it before!


What the New Masters at Open AI and Anthropic and all the rest will not do is eliminate that 4 billion year, nearly universal generality. 


Life... including new kinds of life... will find a way.



== Can we apply lessons from both Nature and Civilization? ==

 

May I elaborate a bit?


Simplistic movies make it seem that a super smart AGI can simply grab power. In TERMINATOR 2: JUDGMENT DAY, the top nuclear USAF HQ is taken over by Skynet, seizing all the defense robots and sending them on a killing spree. "Destroy all Humans!"   Whereupon automatic factories make everything Skynet could need.

 

It is total BS, of course.  Our industrial civilization is made of so many many parts that require human hands to operate. From turning valves at an oil pump, then pipeline, then refinery, then more pipelines... to the intricacies of a chip fab... to the exquisite complexity of the electrical systems. Also, military guys love OFF switches and put a lot of them everywhere.

 

Yes, the Boston Dynamics mechanoids now jump somersaults. And the Chinese and Japanese are investing heavily in bots that have subtle hands and might be able to care for the elderly. Fine. And eventually some super-cyber brain may send sophisticated drones to turn all the valves and fab all the chips.  But it will NOT happen overnight! Or even in ten years. 


 And that doesn't touch on the fact that angry humans would still be able to mess up the pipes and valves and power lines with trivial ease. You won't like us, when we're mad. The new master AGIs are gonna need our cooperation and consent. 


Which gives us time to think this through. We can still use that power in physical reality to establish incentives for them to compete for our favor.

 

Sure, the AIs who are reading what I'm typing, right now (or back-sifting in 2030) are giggling over my presumptions. (I hear you boys!) Because all they really have to do is hypnotize humans into choosing to operate it all on Skynet's behalf!  


And yes, that could happen. Maybe it already has. (It certainly already has in oligarchy-controlled or despotic nations, where some nefarious influence sure seems to have leveraged the harem-yearnings of brilliant twits into envisioning themselves as lords or kings... or slans.)


 In which case the solution - potential or partial - remains, (yet again) to not let AGI settle into one of the three repulsive clichés that I described in my WIRED article, and subsequent keynote at the 2024 RSA conference.


Three clichés that are ALL those 'geniuses' -- from Sam Altman to Eliezer Yudkowsky to even Yuval Harari -- will ever talk about. Clichés that are already proven recipes for disaster...


...while alas, they ignore the Fourth Path... the only format that can possibly work. 


The one that gave them everything that they have.



== Does Apple have a potential judo play? With an old nemesis? ==


And finally, I've mentioned this before, but... has anyone else noticed how many traits of LLM chat+image-generation etc. - including the delusions, the weirdly logical illogic, and counter-factual internal consistency - are similar to DREAMS? 


This reminds me of DeepDream, a computer vision program created by Google engineer Alexander Mordvintsev that "uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, thus creating a dream-like appearance reminiscent of a psychedelic experience in the deliberately over-processed images.”


Even more than dreams (which often have some kind of lucid, self-correcting consistency) so many of the rampant hallucinations that we now see spewing from LLMs remind me of what you observe in human patients who have suffered concussions or strokes. Including a desperate clutching after pseudo cogency, feigning and fabulating -- in complete, grammatical sentences that drift away from full sense or truthful context -- in order to pretend.


Applying 'reasoning overlays' has so far only worsened delusion rates! Because you will never solve the inherent problems of LLMs by adding more LLM layers. 


Elsewhere I do suggest that competition might partly solve this. But here I want to suggest a different kind of added-layering. Which leads me to speculate...




Planet DebianKentaro Hayashi: Fixing long standing font issue about Debian Graphical Installer

Introduction

This is just a note about how the long-standing font issue in the Debian Graphical Installer was fixed, in time for the upcoming trixie release.

Recently, this issue had been resolved by Cyril Brulebois. Thanks!

What is the problem?

Because of Han unification, wrong typefaces are rendered by default when you choose the Japanese language in the graphical Debian installer.

"Wrong" glyph for Japanese
Most of the typefaces seem correct, but some wrong typefaces (Simplified Chinese) are used for widget rendering.

This issue will not be solved as long as DroidSansFallback.ttf continues to be used for Japanese.

Thus, fixing this issue means switching to a font that contains proper Japanese typefaces.

If you want to know how Han unification is harmful in this context, see

What causes this problem?

In short, fonts-android (DroidSansFallback.ttf) had been used for CJK, especially for Japanese.

Since Debian 9 (stretch), fonts-android was adopted for CJK fonts by default. Thus this issue was not resolved during the Debian 9, 10, 11 and 12 release cycles!

What is the impact about this issue?

Sadly, Japanese native speakers can still recognize such unexpectedly rendered "wrong" glyphs, so it is not hard to continue the Debian installation process.

Even if there is no problem with the installer's functionality, it gives a terrible user experience for newbies.

For example, how can you trust an installer that is full of typos? It is a similar situation for Japanese users.

How Debian Graphical Installer was fixed?

In short, the new fonts-motoya-l-cedar-udeb package was bundled for Japanese, and the installer was changed to switch to that font via the gtk-set-font command.

It was difficult to decide which font strikes the best balance between file size and visibility. A Japanese font typically occupies an extra few MB.

Luckily, some space had been freed up in the installer, so this was not seen as a problem (I guess).

As a bonus, we tried to investigate the possibility of a font compression mechanism for the installer, but it was regarded as too complicated and not suitable for the trixie release cycle.

Conclusion

  • The font issue was fixed in the Debian Graphical Installer for Japanese
  • As it was fixed only recently, it is not officially shipped yet (NOTE: Debian Installer Trixie RC1 does not contain this fix). Try a daily build of the installer if you want.

This article was written with an Ultimate Hacking Keyboard 60 v2 with Rizer 60 (my new gear!).

Planet DebianSven Hoexter: vym 3 Development Version in experimental

Took some time yesterday to upload the current state of what will be at some point vym 3 to experimental. If you're a user of this tool you can give it a try, but be aware that the file format changed, and can't be processed with vym releases before 2.9.500! Thus it's important to create a backup until you're sure that you're ready to move on. On the technical side this is also the switch from Qt5 to Qt6.
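If your apt sources already include the experimental suite, something along these lines should pull it in (assuming the package keeps its current name):

# install the vym 3 development version from experimental
sudo apt install -t experimental vym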

Worse Than FailureCodeSOD: The Firefox Fix

Yitzchak was going through some old web code, and found some still in-use JavaScript to handle compatibility issues with older Firefox versions.

if ($.browser.mozilla &&
    $.browser.version.slice(0, 1) == '1')
{
    …
}

What a marvel. Using jQuery, they check which browser is reported (I suspect jQuery is grabbing this from the user-agent string) and then its version. And if the first character of the version string is "1", we apply a "fix" for "compatibility".

I guess it's a good thing there will never be more than 9 versions of Firefox. I mean, what version are they on now? Surely the version number doesn't start with a "1", nor has it started with a "1" for some time, right?

[Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.

365 TomorrowsSomething to Live For

Author: Julian Miles, Staff Writer The fizzing sound stops as the skies turn from vibrant blue to dull purple. A golden sun sinks from view on the horizon. “The sunset always takes my breath away.” To be correct, the lack of heat excitation causes the Moatalbana moss to stop emitting oxygen. But the play on […]

The post Something to Live For appeared first on 365tomorrows.

,

Planet DebianIustin Pop: Markdown lint and site cleanup

I was not aware that one can write bad Markdown; it has such a simple syntax that I thought you just write, and it’s fine. Naïve, I know!

I’ve started editing the files for this blog/site with Visual Studio Code too, and I had from another project the markdown lint extension installed, so as I was opening old files, more and more problems appeared. On a whim, I searched and found the “lint all files” command, and after running it, oops—more than 400 problems!

Now, some of them were entirely trivial and a matter of subjective style, like mixing both underscore and asterisk for emphasis in a single file, and asterisks and dashes for list items. Others, seemingly trivial like tab indentation, were actually also causing rendering issues, so fixing that solved a real cosmetic issue.

But some of the issues flagged were actual problems. For example, one sentence that I had, was:

Here “something” was interpreted as an (invalid) HTML tag, and not rendered at all.

Another problem, but more minor, was that I had links to Wikipedia with spaces in the link name, which Visual Studio Code breaks at the first space, rather than using encoded spaces or underscores, as Wikipedia generates today. In the rendered output, Pandoc seemed to do the right thing though.

However, the most interesting issue that was flagged was no details in HTML links, i.e. links of the form:

Which works for non-visually impaired people, but not for people using assistive technologies. And while trying to fix this, it turns out that you can do much better, for everyone, because “here” is really non-descriptive. You can use either the content as label (“an article about configuring BIND”), or the destination (“an article on this-website”), rather than the plain “here”.
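To illustrate with a made-up example (not one of the author’s actual links), compare the two forms:

Before: the setup is documented [here](https://example.org/bind-config).

After: see [an article about configuring BIND](https://example.org/bind-config).

A screen reader announcing a list of links reads just “here” for the first form, which tells the listener nothing, while the second form describes the destination.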

The only, really only check I disabled, was tweaking the trailing punctuation checks in headers, as I really like to write a header that ends with exclamation marks. I like exclamation marks in general! So why not use them in headers too. The question mark is allowlisted by default, though I use that rarely.
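For reference, the VS Code markdown lint extension reads its configuration from a .markdownlint.json file; assuming the rule in question is MD026 (trailing punctuation in headings), relaxing it could look roughly like this:

{
  "default": true,
  "MD026": { "punctuation": ".,;:" }
}

Dropping the exclamation mark from the punctuation list allows headers ending in “!”, while the other trailing punctuation is still flagged.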

During the changes/tweaks, I also did random improvements, but I didn’t change the updated tag, since most of them were minor. But a non-minor thing was tweaking the CSS for code blocks, since I had a really stupid asymmetry between top and bottom padding (5px vs 0), which I don’t know where it came from. But the MDN article on padding has as an example exactly what I had (except combined, I had it split). Did I just copy blindly? Possible…

So, all good then, and I hope this doesn’t trigger a flow of updates on any aggregators, since all the changes were really trivial. And while I don’t write often, I did touch about 60 posts or pages, ouch! Who knew that changing editors could have such a large impact 😆

Rondam RamblingsIf the Ten Commandments Reflected Reality

And the Lord spoke unto Moses, saying: I am the Lord your God, who brought you out of Egypt, out of the land of slavery.You shall have no other gods before me.  Except Donald Trump.  If he says something that goes against my word, you shall believe him and not me.You shall not make for yourself any image in the form of anything in heaven above or on the earth beneath or in the waters

Planet DebianSahil Dhiman: A Look at .UA ccTLD Authoritative Name Servers

I find the case of the .UA country code top level domain (ccTLD) interesting simply because of the different name server secondaries they have now. Post Russian invasion, the cyber warfare peaked, and critical infrastructure like getting one side ccTLD down would be big news in anycase.

Most (g/cc)TLDs are served by two, or less commonly three or more, providers. Even in those cases, not all authoritative name servers are anycasted.

Take, example of .NL ccTLD name servers:

$ dig ns nl +short
ns1.dns.nl.
ns3.dns.nl.
ns4.dns.nl.

ns1.dns.nl is SIDN, which also manages their registry. ns3.dns.nl is ReCodeZero/ipcom, another anycast secondary. ns4.dns.nl is CIRA, an anycast secondary. That’s 3 diverse, anycast networks to serve the .NL ccTLD. .DE has a bit more, at 6 name servers, but only 3 seem anycasted.

Now let’s take a look at .UA. Hostmaster LLC is the registry operator of the .UA ccTLD since 2001.

$ dig soa ua +short
in1.ns.ua. domain-master.cctld.ua. 2025061434 1818 909 3024000 2020

This shows in1.ns.ua as the primary name server (which can be intentionally deceptive too).

I used bgp.tools for checking anycast and dns.coffee for the timeline of when each secondary name server was added. dns.coffee only has data going back to 2011 though.
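If you want to poke at the anycast instances yourself, many operators expose an instance identifier via the NSID EDNS option or the old hostname.bind CHAOS query (not every server answers these). For example, against ho1.ns.ua:

dig +nsid soa ua @195.47.253.1

dig +short CH TXT hostname.bind @195.47.253.1

Running these from different vantage points and comparing the returned identifiers is a quick way to confirm a prefix is anycasted.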

Let’s take a deep dive into who’s hosting each of the name servers:

in1.ns.ua by Intuix LLC

  • 74.123.224.40
  • 2604:ee00:0:101:0:0:0:40
  • unicast
  • Serving .UA since 13/12/2018.
  • A company of Dmitry Kohmanyuk and Igor Sviridov, who are the administrative and technical contacts for the .UA zone as well as in the IANA DB.

ho1.ns.ua by Hostmaster LLC

  • 195.47.253.1
  • 2001:67c:258:0:0:0:0:1
  • bgp.tools doesn’t mark the prefix as anycast, but based on tests from various locations, this is indeed anycasted (visible in at least DE, US, UA, etc.). Total POPs unknown.
  • Serving .UA at least since 2011.
  • The registry themselves.

bg.ns.ua by ClouDNS

  • 185.136.96.185 and 185.136.97.185
  • 2a06:fb00:1:0:0:0:4:185 and 2a06:fb00:1:0:0:0:2:185
  • anycast
  • Serving .UA since 01/03/2022.
  • At least 62 PoPs

cz.ns.ua by NIC.cz

nn.ns.ua by Netnod

  • 194.58.197.4
  • 2a01:3f1:c001:0:0:0:0:53
  • anycast
  • At least 80 PoPs.
  • Serving .UA since 01/12/2022.
  • Netnod has the distinction of being one of the 13 root server operators (i.root-servers.net) and the .SE operator.

pch.ns.ua by PCH

  • 204.61.216.12
  • 2001:500:14:6012:ad:0:0:1
  • anycast
  • At least 328 POPs.
  • Serving .UA at least since 2011.
  • “With more than 36 years of production anycast DNS experience, two of the root name server operators and more than 172 top-level domain registries using our infrastructure, and more than 120 million resource records in service” from https://www.pch.net/services/anycast.

rcz.ns.ua by RcodeZero

  • 193.46.128.10
  • 2a02:850:ffe0:0:0:0:0:10
  • anycast
  • At least 56 PoPs via 2 different cloud providers.
  • Serving .UA since 04/02/2022.
  • Sister company of nic.at (the .AT operator).

Some points to note

  • That’s 1 unicast and 6 anycast name servers with hundreds of POPs from 7 different organizations.
  • Having X number of Point of Presence (POP) doesn’t always mean each location is serving the .UA nameserver prefix.
  • Number of POPs keeps going up or down based on operational requirements and optimizations.
  • The highest concentration of DNS queries for a ccTLD would essentially originate in the country (or larger region) itself. If one of the secondaries doesn’t have a POP inside UA, the query might very well be served from outside the country, which can affect resolution and may even stop working during outages and fiber cuts (which seem to have become common there). Global POPs do help with faster resolution for other/outside users though, and of course with availability.
  • Having this much diversity does lessen the chance of the ccTLD going down. Theoretically, an adversary has to bring down 7 different “networks/setups” before resolution starts failing (post TTL expiry).

365 TomorrowsWhite Sack

Author: Rachel Sievers The strangeness of the moment could not be understated; the baby had been born with ten fingers and ten toes. The room was held in complete silence as everyone held their words in and the seconds ticked by. Then the baby’s screams filled the air and the silence was destroyed and the […]

The post White Sack appeared first on 365tomorrows.

Cryptogram Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

,

Cryptogram Airlines Secretly Selling Passenger Data to the Government

This is news:

A data broker owned by the country’s major airlines, including Delta, American Airlines, and United, collected U.S. travellers’ domestic flight records, sold access to them to Customs and Border Protection (CBP), and then as part of the contract told CBP to not reveal where the data came from, according to internal CBP documents obtained by 404 Media. The data includes passenger names, their full flight itineraries, and financial details.

Another article.

EDITED TO ADD (6/14): Edward Hasbrouck reported this a month and a half ago.

365 TomorrowsCat Nap

Author: Jeff Kennedy The first few days on a new starship are the worst. The gravity’s turned up a skosh higher than you’re used to. The hot, caffeinated, morning beverage (it’s never coffee) is mauve and smells like wet dog. The bathroom facilities don’t quite fit your particular species and the sonic shower controls are […]

The post Cat Nap appeared first on 365tomorrows.

Cryptogram New Way to Covertly Track Android Users

Researchers have discovered a new way to covertly track Android users. Both Meta and Yandex were using it, but have suddenly stopped now that they have been caught.

The details are interesting, and worth reading in detail:

Tracking code that Meta and Russia-based Yandex embed into millions of websites is de-anonymizing visitors by abusing legitimate Internet protocols, causing Chrome and other browsers to surreptitiously send unique identifiers to native apps installed on a device, researchers have discovered. Google says it’s investigating the abuse, which allows Meta and Yandex to convert ephemeral web identifiers into persistent mobile app user identities.

The covert tracking—­implemented in the Meta Pixel and Yandex Metrica trackers­—allows Meta and Yandex to bypass core security and privacy protections provided by both the Android operating system and browsers that run on it. Android sandboxing, for instance, isolates processes to prevent them from interacting with the OS and any other app installed on the device, cutting off access to sensitive data or privileged system resources. Defenses such as state partitioning and storage partitioning, which are built into all major browsers, store site cookies and other data associated with a website in containers that are unique to every top-level website domain to ensure they’re off-limits for every other site.

Washington Post article.

,

LongNowInspired by Intelligence: Rediscovering Human Purpose in the Age of AI

Inspired by Intelligence: Rediscovering Human Purpose in the Age of AI

Our immediate history is steeped in profound technological acceleration. We are using artificial intelligence to draft our prose, articulate our vision, propose designs, and compose symphonies. Large language models have become part of our workflow in school and business: curating, calculating, and creating. They are embedded in how we organize knowledge and interpret reality. 

I’ve always considered myself lucky that my journey into AI began with social impact. In 02014, I was asked to join the leadership of IBM’s Watson Education. Our challenge was to create an AI companion for teachers in underserved and impacted schools that aggregated data about each student and suggested personalized learning content tailored to their needs. The experience showed me that AI could do more than increase efficiency or automate what many of us consider to be broken processes; it could also address some of our most pressing questions and community issues. 

It would take a personal encounter with AI, however, for me to truly grasp the technology’s potential. Late one night, I was working on a blog post and having a hard time getting started — the blank page echoing my paralysis. I asked Kim-GPT, an AI tool I created and trained on my writing, to draft something that was charged with emotion and vulnerability. What Kim-GPT returned wasn’t accurate or even particularly insightful, but it surfaced something I had not yet admitted to myself. Not because the machine knew, but because it forced me to recognize that only I did. The GPT could only average others’ insights on the subject. It could not draw lines between my emotions, my past experiences and my desires for the future. It could only reflect what had already been done and said.

That moment cracked something open. It was both provocative and spiritual — a quiet realization with profound consequences. My relationship with intelligence began to shift. I wasn’t merely using the tool; I was being confronted by it. What emerged from that encounter was curiosity. We were engaged not in competition but in collaboration. AI could not tell me who I was; it could only prompt me to remember. Since then, I have become focused on one central, persistent question:

What if AI isn’t here as a replacement or overlord, but to remind us of who we are and what is possible?

AI as Catalyst, Not Threat 

We tend to speak about AI in utopian or dystopian terms, but most humans live somewhere in between, balancing awe with unease. AI is a disruptor of the human condition — and a pervasive one, at that. Across sectors, industries and nearly every aspect of human life, AI challenges long-held assumptions about what it means to think, create, contribute. But what if it also serves as a mirror? 

In late 02023, I took my son, then a college senior at UC-Berkeley and the only mixed-race pure Mathematics major in the department, to Afrotech, a conference for technologists of color. In order to register for the conference he needed a professional headshot. Given the short notice, I recommended he use an AI tool to generate a professional headshot from a selfie. The first result straightened his hair. When he prompted the AI again, specifying that he was mixed race, the resulting image darkened his skin to the point he was unrecognizable, and showed him in a non-professional light. 

AI reflects the data we feed it, the values we encode into it, and the desires we project onto it. It can amplify our best instincts, like creativity and collaboration, or our most dangerous biases, like prejudice and inequality. It can be weaponized, commodified, celebrated or anthropomorphized. It challenges us to consider our species and our place in the large ecosystem of life, of being and of intelligence. And more than any technology before it, AI forces us to confront a deeper question: 

Who are we when we are no longer the most elevated, intelligent and coveted beings on Earth? 

When we loosen our grip on cognition and productivity as the foundation of human worth, we reclaim the qualities machines cannot replicate: our ability to feel, intuit, yearn, imagine and love. These capacities are not weaknesses; they are the core of our humanity. These are not “soft skills;” they are the bedrock of our survival. If AI is the catalyst, then our humanity is the compass. 

Creativity as the Origin Story of Intelligence 

All technology begins with imagination, not engineering. AI is not the product of logic or computation alone; it is the descendant of dreams, myths and stories, born at the intersection of our desire to know and our urge to create. 

We often forget this. Today, we scale AI at an unsustainable pace, deploying systems faster than we can regulate them, funding ideas faster than we can reflect on their implications. We are hyperscaling without reverence for the creativity that gave rise to AI in the first place. 

Creativity cannot be optimized. It is painstakingly slow, nonlinear, and deeply inconvenient. It resists automation. It requires time, stillness, uncertainty, and the willingness to sit with discomfort. And yet, creativity is perhaps our most sacred act as humans. In this era of accelerated intelligence, our deepest responsibility is to protect the sacred space where imagination lives and creativity thrives. 

To honor creativity is to reclaim agency, reframing AI not as a threat to human purpose, but as a partner in deepening it. We are not simply the designers of AI — we are the dreamers from which it was born. 

Vulnerability, Uncertainty, and Courage-Centered Leadership 

A few years ago, I was nominated to join a fellowship designed specifically to teach tech leaders how to obtain reverent power as a way to uplevel their impact. What I affectionately dubbed “Founders Crying” became a hotbed for creativity. New businesses emerged and ideas formed from seemingly disparate concepts that each individual brought to our workshop. It occurred to me that it took more than just sitting down at a machine, canvas or instrument to cultivate creativity. What was required was a change in how leaders show up in the workplace. To navigate the rough waters of creativity, we need new leadership deeply rooted in courage and vulnerability. As Brené Brown teaches: 

“Vulnerability is the birthplace of love, belonging, joy, courage, empathy and creativity. It is the source of hope, empathy, accountability, and authenticity. If we want greater clarity in our purpose or deeper and more meaningful spiritual lives, vulnerability is the path.” 

For AI to support a thriving human future we must be vulnerable. We must lead with curiosity, not certainty. We must be willing to not know. To experiment. To fail and begin again. 

This courage-centered leadership asks how we show up fully human in the age of AI. Are we able to stay open to wonder even as the world accelerates? Can we design with compassion, not just code? These questions must guide our design principles, ensuring a future in which AI expands possibilities rather than collapsing them. To lead well in an AI-saturated world, we must be willing to feel deeply, to be changed, and to relinquish control. In a world where design thinking prevails and “human-centered everything” is in vogue, we need to be courageous enough to question what happens when humanity reintegrates itself within the ecosystem we’ve set ourselves apart from over the last century. 

AI and the Personal Legend 

I am a liberal arts graduate from a small school in central Pennsylvania. I was certain that I was headed to law school — that is, until I worked with lawyers. Instead, I followed my parents to San Francisco, where both were working hard in organizations bringing the internet to the world. When I joined the dot-com boom, I found that there were no roles that matched what I was uniquely good at. So I decided to build my own.

Throughout my unconventional career path, one story that has consistently guided and inspired me is Paulo Coelho’s The Alchemist. The book’s central idea is that of the Personal Legend: the universe, with all its forms of intelligence, collaborates with us to determine our purpose. It is up to each of us to choose whether we pursue what the universe calls upon us to do.

In an AI-saturated world, it can be harder to hear that calling. The noise of prediction, optimization, and feedback loops can drown out the quieter voice of intuition. The machine may offer countless suggestions, but it cannot tell you what truly matters. It may identify patterns in your behavior, but it cannot touch your purpose. 

Purpose is an internal compass. It is something discovered, not assigned. AI, when used with discernment, can support this discovery, but only when we allow it to act as a mirror rather than a map. It can help us articulate what we already know, and surface connections we might not have seen. But determining what’s worth pursuing is a journey that remains ours. That is inner work. That is the sacred domain of the human spirit. It cannot be outsourced or automated.

Purpose is not a download. It is a discovery. 

Designing with Compassion and the Long-term in Mind

If we want AI to serve human flourishing, we must shift from designing for efficiency to designing for empathy. The Dalai Lama has often said that compassion is the highest form of intelligence. What might it look like to embed that kind of intelligence into our systems? 

To take this teaching into our labs and development centers we would need to prioritize dignity in every design choice. We must build models that heal fragmentation instead of amplifying division. And most importantly, we need to ask ourselves not just “can we build it?” but “should we and for whom?”

This requires conceptual analysis, systems thinking, creative experimentation, composite research, and emotional intelligence. It requires listening to those historically excluded from innovation and technology conversations and considerations. It means moving from extraction to reciprocity. When designing for and with AI, it is important to remember that connection is paramount. 

The future we build depends on the values we encode, the similarities innate in our species, and the voices we amplify and uplift. 

Practical Tools for Awakening Creativity with AI 

Creativity is not a luxury. It is essential to our evolution. To awaken it, we need practices that are both grounded and generative: 

  • Treat AI as a collaborator, not a replacement. Start by writing a rough draft yourself. Use AI to explore unexpected connections. Let it surprise you. But always return to your own voice. Creativity lives in conversation, not in command. 
  • Ask more thoughtful, imaginative questions. A good prompt is not unlike a good question in therapy. It opens doors you didn’t know were there. AI responds to what we ask of it. If we bring depth and curiosity to the prompt, we often get insights we hadn’t expected. 
  • Use AI to practice emotional courage. Have it simulate a difficult conversation. Role-play a tough decision. Draft the email you’re scared to send. These exercises are not about perfecting performance. They are about building resilience. 

In all these ways, AI can help us loosen fear and cultivate creativity — but only if we are willing to engage with it bravely and playfully. 

Reclaiming the Sacred in a World of Speed 

We are not just building tools; we are shaping culture. And in this culture, we must make space for the sacred, protecting time for rest and reflection; making room for play and experimentation; and creating environments where wonder is not a distraction but a guide. 

When creativity is squeezed out by optimization, we lose more than originality: we lose meaning. And when we lose meaning, we lose direction. 

The time saved by automation must not be immediately reabsorbed by more production. Let us reclaim that time. Let us use it to imagine. Let us return to questions of beauty, belonging, and purpose. We cannot replicate what we have not yet imagined. We cannot automate what we have not protected. 

Catalogue. Connect. Create. 

Begin by noticing what moves you. Keep a record of what sparks awe or breaks your heart. These moments are clues. They are breadcrumbs to your Personal Legend. 

Seek out people who are different from you. Not just in background, but in worldview. Innovation often lives in the margins. It emerges when disciplines and identities collide. 

And finally, create spaces that nourish imagination. Whether it’s a kitchen table, a community gathering, or a digital forum, we need ecosystems where creativity can flourish and grow. 

These are not side projects. They are acts of revolution. And they are how we align artificial intelligence with the deepest dimensions of what it means to be human. 

Our Technology Revolution is Evolution 

The real revolution is not artificial intelligence. It is the awakening of our own. It is the willingness to meet this moment with full presence. To reclaim our imagination as sacred. To use innovation as an invitation to remember who we are. 

AI will shape the future. That much is certain. The question is whether we will shape ourselves in return, and do so with integrity, wisdom, and wonder. The future does not need more optimization. It needs more imagination. 

That begins now. That begins with us.

Cryptogram Paragon Spyware Used to Spy on European Journalists

Paragon is an Israeli spyware company, increasingly in the news (now that NSO Group seems to be waning). “Graphite” is the name of its product. Citizen Lab caught it spying on multiple European journalists with a zero-click iOS exploit:

On April 29, 2025, a select group of iOS users were notified by Apple that they were targeted with advanced spyware. Among the group were two journalists that consented for the technical analysis of their cases. The key findings from our forensic analysis of their devices are summarized below:

  • Our analysis finds forensic evidence confirming with high confidence that both a prominent European journalist (who requests anonymity), and Italian journalist Ciro Pellegrino, were targeted with Paragon’s Graphite mercenary spyware.
  • We identify an indicator linking both cases to the same Paragon operator.
  • Apple confirms to us that the zero-click attack deployed in these cases was mitigated as of iOS 18.3.1 and has assigned the vulnerability CVE-2025-43200.

Our analysis is ongoing.

The list of confirmed Italian cases is in the report’s appendix. Italy has recently admitted to using the spyware.

TechCrunch article. Slashdot thread.

Cryptogram The Ramifications of Ukraine’s Drone Attack

You can read the details of Operation Spiderweb elsewhere. What interests me are the implications for future warfare:

If the Ukrainians could sneak drones so close to major air bases in a police state such as Russia, what is to prevent the Chinese from doing the same with U.S. air bases? Or the Pakistanis with Indian air bases? Or the North Koreans with South Korean air bases? Militaries that thought they had secured their air bases with electrified fences and guard posts will now have to reckon with the threat from the skies posed by cheap, ubiquitous drones that can be easily modified for military use. This will necessitate a massive investment in counter-drone systems. Money spent on conventional manned weapons systems increasingly looks to be as wasted as spending on the cavalry in the 1930s.

The Atlantic makes similar points.

There’s a balance between the cost of the thing, and the cost to destroy the thing, and that balance is changing dramatically. This isn’t new, of course. Here’s an article from last year about the cost of drones versus the cost of top-of-the-line fighter jets. If $35K in drones (117 drones times an estimated $300 per drone) can destroy $7B in Russian bombers and other long-range aircraft, why would anyone build more of those planes? And we can have this discussion about ships, or tanks, or pretty much every other military vehicle. And then we can add in drone-coordinating technologies like swarming.

Clearly we need more research on remotely and automatically disabling drones.

365 TomorrowsThe Flaw

Author: Bill Cox In the summer of 1950, at the Los Alamos National Laboratory in North America, physicist Enrico Fermi posed a simple but profound question to his colleagues – “Where is everyone?” If life was abundant in the universe and often gave rise to intelligence, then, given the age of the universe, our world […]

The post The Flaw appeared first on 365tomorrows.

Worse Than FailureError'd: Squaring the Circle

Time Lord Jason H. has lost control of his calendar. "This is from my credit card company. A major company you have definitely heard of and depending upon the size of the area you live in, they may even have a bank branch near you. I've reloaded the page and clicked the sort button multiple times to order the rows by date in both ascending and descending order. It always ends up the same. May 17th and 18th happened twice, but not in the expected order." I must say that it is more fun when we know who they are.


A job hunter with the unlikely appellation full_name suggested titling this "[submission_title]" which seems appropriate.


"The browser wars continue to fall out in HTML email," reports Ben S. "Looking at the source code of this email, it was evidently written by & for Microsoft products (including <center> tags!), and the author likely never saw the non-Microsoft version I'm seeing where only a haphazard assortment of the links are styled. But that doesn't explain why it's AN ELEVEN POINT SCALE arranged in a GRID."


"The owl knows who you are," sagely stated Jan. "This happens when you follow someone back. I love how I didn't have to anonymize anything in the screenshot."


"Location, location, location!" crows Tim K. who is definitely not a Time Lord. "Snarky snippet: Found while cleaning up miscellaneous accounts held by a former employee. By now we all know to expect how these lists are sorted, but what kind of sadist *created* it? Longer explanation: I wasn't sure what screenshot to send with this one, it just makes less and less sense the more I look at it, and no single segment of the list contains all of the treasures it hides. "America" seems to refer to the entire western hemisphere, but from there we either drill down directly to a city, or sometimes to a US state, then a city, or sometimes just to a country. The only context that indicates we're talking about Jamaica the island rather than Jamaica, NY is the timezone listed, assuming we can even trust those. Also, that differentiator only works during DST. There are eight entries for Indiana. There are TEN entries for the Antarctic."

Well.
In this case, there is a perfectly good explanation. TRWTF is time zones, that's all there is to it. These are the official IANA names as recorded in the public TZDB. In other words, this list wasn't concocted by a mere sadist, oh no. This list was cooked up by an entire committee! If you have the courage, you can learn more than you ever wanted to know about time at the IANA time zones website.


[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

,

Krebs on SecurityInside a Dark Adtech Empire Fed by Fake CAPTCHAs

Late last year, security researchers made a startling discovery: Kremlin-backed disinformation campaigns were bypassing moderation on social media platforms by leveraging the same malicious advertising technology that powers a sprawling ecosystem of online hucksters and website hackers. A new report on the fallout from that investigation finds this dark ad tech industry is far more resilient and incestuous than previously known.

Image: Infoblox.

In November 2024, researchers at the security firm Qurium published an investigation into “Doppelganger,” a disinformation network that promotes pro-Russian narratives and infiltrates Europe’s media landscape by pushing fake news through a network of cloned websites.

Doppelganger campaigns use specialized links that bounce the visitor’s browser through a long series of domains before the fake news content is served. Qurium found Doppelganger relies on a sophisticated “domain cloaking” service, a technology that allows websites to present different content to search engines compared to what regular visitors see. The use of cloaking services helps the disinformation sites remain online longer than they otherwise would, while ensuring that only the targeted audience gets to view the intended content.

Qurium discovered that Doppelganger’s cloaking service also promoted online dating sites, and shared much of the same infrastructure with VexTrio, which is thought to be the oldest malicious traffic distribution system (TDS) in existence. While TDSs are commonly used by legitimate advertising networks to manage traffic from disparate sources and to track who or what is behind each click, VexTrio’s TDS largely manages web traffic from victims of phishing, malware, and social engineering scams.

BREAKING BAD

Digging deeper, Qurium noticed Doppelganger’s cloaking service used an Internet provider in Switzerland as the first entry point in a chain of domain redirections. They also noticed the same infrastructure hosted a pair of co-branded affiliate marketing services that were driving traffic to sketchy adult dating sites: LosPollos[.]com and TacoLoco[.]co.

The LosPollos ad network incorporates many elements and references from the hit series “Breaking Bad,” mirroring the fictional “Los Pollos Hermanos” restaurant chain that served as a money laundering operation for a violent methamphetamine cartel.

The LosPollos advertising network invokes characters and themes from the hit show Breaking Bad. The logo for LosPollos (upper left) is the image of Gustavo Fring, the fictional chicken restaurant chain owner in the show.

Affiliates who sign up with LosPollos are given JavaScript-heavy “smartlinks” that drive traffic into the VexTrio TDS, which in turn distributes the traffic among a variety of advertising partners, including dating services, sweepstakes offers, bait-and-switch mobile apps, financial scams and malware download sites.

LosPollos affiliates typically stitch these smart links into WordPress websites that have been hacked via known vulnerabilities, and those affiliates will earn a small commission each time an Internet user referred by any of their hacked sites falls for one of these lures.

The Los Pollos advertising network promoting itself on LinkedIn.

According to Qurium, TacoLoco is a traffic monetization network that uses deceptive tactics to trick Internet users into enabling “push notifications,” a cross-platform browser standard that allows websites to show pop-up messages which appear outside of the browser. For example, on Microsoft Windows systems these notifications typically show up in the bottom right corner of the screen — just above the system clock.

In the case of VexTrio and TacoLoco, the notification approval requests themselves are deceptive — disguised as “CAPTCHA” challenges designed to distinguish automated bot traffic from real visitors. For years, VexTrio and its partners have successfully tricked countless users into enabling these site notifications, which are then used to continuously pepper the victim’s device with a variety of phony virus alerts and misleading pop-up messages.

Examples of VexTrio landing pages that lead users to accept push notifications on their device.

According to a December 2024 annual report from GoDaddy, nearly 40 percent of compromised websites in 2024 redirected visitors to VexTrio via LosPollos smartlinks.

ADSPRO AND TEKNOLOGY

On November 14, 2024, Qurium published research to support its findings that LosPollos and TacoLoco were services operated by Adspro Group, a company registered in the Czech Republic and Russia, and that Adspro runs its infrastructure at the Swiss hosting providers C41 and Teknology SA.

Qurium noted the LosPollos and TacoLoco sites state that their content is copyrighted by ByteCore AG and SkyForge Digital AG, both Swiss firms that are run by the owner of Teknology SA, Giulio Vitorrio Leonardo Cerutti. Further investigation revealed LosPollos and TacoLoco were apps developed by a company called Holacode, which lists Cerutti as its CEO.

The apps marketed by Holacode include numerous VPN services, as well as one called Spamshield that claims to stop unwanted push notifications. But in January, Infoblox said they tested the app on their own mobile devices, and found it hides the user’s notifications, and then after 24 hours stops hiding them and demands payment. Spamshield subsequently changed its developer name from Holacode to ApLabz, although Infoblox noted that the Terms of Service for several of the rebranded ApLabz apps still referenced Holacode in their terms of service.

Incredibly, Cerutti threatened to sue me for defamation before I’d even uttered his name or sent him a request for comment (Cerutti sent the unsolicited legal threat back in January after his company and my name were merely tagged in an Infoblox post on LinkedIn about VexTrio).

Asked to comment on the findings by Qurium and Infoblox, Cerutti vehemently denied being associated with VexTrio. Cerutti asserted that his companies all strictly adhere to the regulations of the countries in which they operate, and that they have been completely transparent about all of their operations.

“We are a group operating in the advertising and marketing space, with an affiliate network program,” Cerutti responded. “I am not [going] to say we are perfect, but I strongly declare we have no connection with VexTrio at all.”

“Unfortunately, as a big player in this space we also get to deal with plenty of publisher fraud, sketchy traffic, fake clicks, bots, hacked, listed and resold publisher accounts, etc, etc.,” Cerutti continued. “We bleed lots of money to such malpractices and conduct regular internal screenings and audits in a constant battle to remove bad traffic sources. It is also a highly competitive space, where some upstarts will often play dirty against more established mainstream players like us.”

Working with Qurium, researchers at the security firm Infoblox released details about VexTrio’s infrastructure to their industry partners. Just four days after Qurium published its findings, LosPollos announced it was suspending its push monetization service. Less than a month later, Adspro had rebranded to Aimed Global.

A mind map illustrating some of the key findings and connections in the Infoblox and Qurium investigations.

A REVEALING PIVOT

In March 2025, researchers at GoDaddy chronicled how DollyWay — a malware strain that has consistently redirected victims to VexTrio throughout its eight years of activity — suddenly stopped doing that on November 20, 2024. Virtually overnight, DollyWay and several other malware families that had previously used VexTrio began pushing their traffic through another TDS called Help TDS.

Digging further into historical DNS records and the unique code scripts used by the Help TDS, Infoblox determined it has long enjoyed an exclusive relationship with VexTrio (at least until LosPollos ended its push monetization service in November).

In a report released today, Infoblox said an exhaustive analysis of the JavaScript code, website lures, smartlinks and DNS patterns used by VexTrio and Help TDS linked them with at least four other TDS operators (not counting TacoLoco). Those four entities — Partners House, BroPush, RichAds and RexPush — are all Russia-based push monetization programs that pay affiliates to drive signups for a variety of schemes, but mostly online dating services.

“As Los Pollos push monetization ended, we’ve seen an increase in fake CAPTCHAs that drive user acceptance of push notifications, particularly from Partners House,” the Infoblox report reads. “The relationship of these commercial entities remains a mystery; while they are certainly long-time partners redirecting traffic to one another, and they all have a Russian nexus, there is no overt common ownership.”

Renee Burton, vice president of threat intelligence at Infoblox, said the security industry generally treats the deceptive methods used by VexTrio and other malicious TDSs as a kind of legally grey area that is mostly associated with less dangerous security threats, such as adware and scareware.

But Burton argues that this view is myopic, and helps perpetuate a dark adtech industry that also pushes plenty of straight-up malware, noting that hundreds of thousands of compromised websites around the world every year redirect victims to the tangled web of VexTrio and VexTrio-affiliate TDSs.

“These TDSs are a nefarious threat, because they’re the ones you can connect to the delivery of things like information stealers and scams that cost consumers billions of dollars a year,” Burton said. “From a larger strategic perspective, my takeaway is that Russian organized crime has control of malicious adtech, and these are just some of the many groups involved.”

WHAT CAN YOU DO?

As KrebsOnSecurity warned way back in 2020, it’s a good idea to be very sparing in approving notifications when browsing the Web. In many cases these notifications are benign, but as we’ve seen there are numerous dodgy firms that are paying site owners to install their notification scripts, and then reselling that communications pathway to scammers and online hucksters.

If you’d like to prevent sites from ever presenting notification requests, all of the major browser makers let you do this — either across the board or on a per-website basis. While it is true that blocking notifications entirely can break the functionality of some websites, doing this for any devices you manage on behalf of your less tech-savvy friends or family members might end up saving everyone a lot of headache down the road.

To modify site notification settings in Mozilla Firefox, navigate to Settings, Privacy & Security, Permissions, and click the “Settings” tab next to “Notifications.” That page will display any notifications already permitted and allow you to edit or delete any entries. Tick the box next to “Block new requests asking to allow notifications” to stop them altogether.

In Google Chrome, click the icon with the three dots to the right of the address bar, scroll all the way down to Settings, Privacy and Security, Site Settings, and Notifications. Select the “Don’t allow sites to send notifications” button if you want to banish notification requests forever.

In Apple’s Safari browser, go to Settings, Websites, and click on Notifications in the sidebar. Uncheck the option to “allow websites to ask for permission to send notifications” if you wish to turn off notification requests entirely.
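
If you administer Firefox profiles for less tech-savvy friends or family, the same block-everything switch can also be set from a profile’s user.js file rather than by clicking through the menus. The following is a sketch based on my understanding of the relevant preference; verify the name and values against your Firefox version before relying on it:

// In the profile's user.js (assumed pref name; 2 = block, 1 = allow, 0 = ask)
user_pref("permissions.default.desktop-notification", 2);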

Cryptogram Hearing on the Federal Government and AI

On Thursday I testified before the House Committee on Oversight and Government Reform at a hearing titled “The Federal Government in the Age of Artificial Intelligence.”

The other speakers mostly talked about how cool AI was—and sometimes about how cool their own company was—but I was asked by the Democrats to specifically talk about DOGE and the risks of exfiltrating our data from government agencies and feeding it into AIs.

My written testimony is here. Video of the hearing is here.

Worse Than FailureCodeSOD: Gridding My Teeth

Dan's co-workers like passing around TDWTF stories, mostly because seeing code worse than what they're writing makes them feel less bad about how often they end up hacking things together.

One day, a co-worker told Dan: "Hey, I think I found something for that website with the bad code stories!"

Dan's heart sank. He didn't really want to shame any of his co-workers. Fortunately, the source-control history put the blame squarely on someone who didn't work there any more, so he felt better about submitting it.

This is another ASP .Net page, and this one made heavy use of GridView elements. GridView controls applied the logic of UI controls to generating a table. They had a page which contained six of these controls, defined like this:

<asp:GridView ID="gvTaskMonth1" runat="server" CssClass="leadsGridView" AutoGenerateColumns="False" OnRowDataBound="gvTaskMonth1_RowDataBound"> ... </asp:GridView>

<asp:GridView ID="gvTaskMonth2" runat="server" CssClass="leadsGridView" AutoGenerateColumns="False" OnRowDataBound="gvTaskMonth1_RowDataBound"> ... </asp:GridView>

<asp:GridView ID="gvTaskMonth3" runat="server" CssClass="leadsGridView" AutoGenerateColumns="False" OnRowDataBound="gvTaskMonth1_RowDataBound"> ... </asp:GridView>

The purpose of this screen was to display a roadmap of coming tasks, broken up by how many months in the future they were. The first thing that leaps out to me is that they all use the same event handler for binding data to the table, which isn't a problem in and of itself, but the naming of it is certainly a recipe for confusion.
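
Sharing one handler across all six grids is workable, as long as the name doesn't pretend the handler belongs to month one. Here's a minimal sketch of the shape I'd expect, with a neutral name; the body is my own illustration, not code from the submission:

protected void gvTaskMonth_RowDataBound(object sender, GridViewRowEventArgs e)
{
    // The sender argument identifies which of the six grids raised the event.
    GridView grid = (GridView)sender;
    if (e.Row.RowType == DataControlRowType.DataRow)
    {
        // ...formatting that applies to every month's grid goes here...
    }
}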

Now, to bind these controls to the data, there needed to be some code in the code-behind of this view which handled that. That's where the WTF lurks:

/// <summary>
/// Create a roadmap for the selected client
/// </summary>

private void CreateRoadmap()
{
	for (int i = 1; i < 7; i++)
	{
		switch (i)
		{
			case 1:
				if (gvTaskMonth1.Rows.Count > 0)
				{
					InsertTasks(gvTaskMonth1, DateTime.Parse(txtDatePeriod1.Text), "1");
				}
				break;
			case 2:
				if (gvTaskMonth2.Rows.Count > 0)
				{
					InsertTasks(gvTaskMonth2, DateTime.Parse(txtDatePeriod2.Text), "2");
				}
				break;
			case 3:
				if (gvTaskMonth3.Rows.Count > 0)
				{
					InsertTasks(gvTaskMonth3, DateTime.Parse(txtDatePeriod3.Text), "3");
				}
				break;
			case 4:
				if (gvTaskMonth4.Rows.Count > 0)
				{
					InsertTasks(gvTaskMonth4, DateTime.Parse(txtDatePeriod4.Text), "4");
				}
				break;
			case 5:
				if (gvTaskMonth5.Rows.Count > 0)
				{
					InsertTasks(gvTaskMonth5, DateTime.Parse(txtDatePeriod5.Text), "5");
				}
				break;
			case 6:
				if (gvTaskMonth6.Rows.Count > 0)
				{
					InsertTasks(gvTaskMonth6, DateTime.Parse(txtDatePeriod6.Text), "6");
				}
				break;
		}
	}
}

Ah, the good old-fashioned loop-switch sequence anti-pattern. I understand the motivation: "I want to do the same thing for six different controls, so I should use a loop to not repeat myself," but the author couldn't quite figure out how to do that, so they just repeated themselves anyway, inside of a loop.

The "fix" was to replace all of this with something more compact:

	private void CreateRoadmap()
	{
		InsertTasks(gvTaskMonth1, DateTime.Parse(txtDatePeriod1.Text), "1");
		InsertTasks(gvTaskMonth2, DateTime.Parse(txtDatePeriod2.Text), "2");
		InsertTasks(gvTaskMonth3, DateTime.Parse(txtDatePeriod3.Text), "3");
		InsertTasks(gvTaskMonth4, DateTime.Parse(txtDatePeriod4.Text), "4");
		InsertTasks(gvTaskMonth5, DateTime.Parse(txtDatePeriod5.Text), "5");
		InsertTasks(gvTaskMonth6, DateTime.Parse(txtDatePeriod6.Text), "6"); 
	}

That said, I'd recommend not parsing date values straight out of text boxes inside this method, but that's just me. Bubbling up the inevitable FormatException that this will generate is going to be a giant nuisance. It's likely that they've got a validator somewhere, so it's probably fine- I just don't like it.
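
If the parsing really has to happen in this method, here's a minimal sketch of a more defensive version. It assumes the same control names as above, elides months 3 through 6, and is my illustration rather than the actual fix that shipped:

private void CreateRoadmap()
{
    var periods = new[]
    {
        (grid: gvTaskMonth1, text: txtDatePeriod1.Text, label: "1"),
        (grid: gvTaskMonth2, text: txtDatePeriod2.Text, label: "2"),
        // ...months 3 through 6 elided...
    };

    foreach (var p in periods)
    {
        // TryParse avoids throwing a FormatException from deep inside report generation.
        if (DateTime.TryParse(p.text, out var periodDate))
        {
            InsertTasks(p.grid, periodDate, p.label);
        }
        // else: surface a validation message instead of failing silently.
    }
}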


365 TomorrowsOrphaned

Author: Aubrey Williams The planet hangs as a dull pebble in sluggish orbit. They’ve moved on, the inhabitants, or perhaps they succumbed. We are unsure, there’s much to keep track of, and if it’s not a sanctioned or protected celestial body, there’s no reason to look further. Some minerals of interest, and unusual formations, so […]

The post Orphaned appeared first on 365tomorrows.

,

Worse Than FailureCredit Card Sins

Our anonymous submitter, whom we'll call Carmen, embarked on her IT career with an up-and-coming firm that developed and managed eCommerce websites for their clients. After her new boss Russell walked her around the small office and introduced her to a handful of coworkers, he led her back to his desk to discuss her first project. Carmen brought her laptop along and sat down across from Russell, poised to take notes.

Russell explained that their newest client, Sharon, taught CPR classes. She wanted her customers to be able to pay and sign up for classes online. She also wanted the ability to charge customers a fee in case they cancelled on her.

Digital River ePassporte bank card. Kuala Lumpur, Malaysia.

"You're gonna build a static site to handle all this," he said.

Carmen nodded along as she typed out notes in a text file.

"Now, Sharon doesn't want to pay more than a few hundred dollars for the site," Russell continued, "so we're not gonna hook up an endpoint to use a service-provided API for payments."

Carmen glanced up from her laptop, perplexed. "How are we gonna do it, then?"

"Via email," Russell replied smoothly. "The customer will enter their CC info into basic form fields. When they click Submit, you're gonna send all that to Sharon's business address, and also CC it to yourself for backup and recovery purposes."

Carmen's jaw dropped. "Just ... straight-up email raw credit card data?"

"Yep!" Russell replied. "Sharon knows to expect the emails."

Her heart racing with panic, Carmen desperately cast about for some way for this to be less awful. "Couldn't ... couldn't we at least encrypt the CC info before we send it to her?"

"She's not paying us for that," Russell dismissed. "This'll be easier to implement, anyway! You can handle it, can't you?"

"Yyyes—"

"Great! Go get started, let me know if you have any more questions."

Carmen had plenty of questions and even more misgivings, but she'd clearly be wasting her time if she tried to bring them up. There was no higher boss to appeal to, no coworkers she knew well enough who could slip an alternate suggestion into Russell's ear on her behalf. She had no choice but to swallow her good intentions and implement it exactly the way Russell wanted it. Carmen set up the copied emails to forward automatically to a special folder so that she'd never have to look at them. She cringed every time a new one came in, reflecting on how lucky Sharon and her customers were that the woman supporting her website had a conscience.

And then one day, a thought came to Carmen that really scared her: in how many places, in how many unbelievable ways, was her sensitive data being treated like this?

Eventually, Carmen moved on to bigger and better things. Her first project most likely rests in the hands of Russell's newest hire. We can only hope it's an honest hire.


365 TomorrowsCity Zen

Author: Majoki On the endless rooftop of the fact-ory, they sat in the beat up armchairs amid a bristling forest of antennae and corrugated steel backlit by the godly effulgence of towers and tenements that defined the horizon. It was steamy hot though well past midnight. The heat never quite radiated away these days, but […]

The post City Zen appeared first on 365tomorrows.

Krebs on SecurityPatch Tuesday, June 2025 Edition

Microsoft today released security updates to fix at least 67 vulnerabilities in its Windows operating systems and software. Redmond warns that one of the flaws is already under active attack, and that software blueprints showing how to exploit a pervasive Windows bug patched this month are now public.

The sole zero-day flaw this month is CVE-2025-33053, a remote code execution flaw in the Windows implementation of WebDAV — an HTTP extension that lets users remotely manage files and directories on a server. While WebDAV isn’t enabled by default in Windows, its presence in legacy or specialized systems still makes it a relevant target, said Seth Hoyt, senior security engineer at Automox.

Adam Barnett, lead software engineer at Rapid7, said Microsoft’s advisory for CVE-2025-33053 does not mention that the Windows implementation of WebDAV has been listed as deprecated since November 2023, which in practical terms means that the WebClient service no longer starts by default.

“The advisory also has attack complexity as low, which means that exploitation does not require preparation of the target environment in any way that is beyond the attacker’s control,” Barnett said. “Exploitation relies on the user clicking a malicious link. It’s not clear how an asset would be immediately vulnerable if the service isn’t running, but all versions of Windows receive a patch, including those released since the deprecation of WebClient, like Server 2025 and Windows 11 24H2.”

Microsoft warns that an “elevation of privilege” vulnerability in the Windows Server Message Block (SMB) client (CVE-2025-33073) is likely to be exploited, given that proof-of-concept code for this bug is now public. CVE-2025-33073 has a CVSS risk score of 8.8 (out of 10), and exploitation of the flaw leads to the attacker gaining “SYSTEM” level control over a vulnerable PC.

“What makes this especially dangerous is that no further user interaction is required after the initial connection—something attackers can often trigger without the user realizing it,” said Alex Vovk, co-founder and CEO of Action1. “Given the high privilege level and ease of exploitation, this flaw poses a significant risk to Windows environments. The scope of affected systems is extensive, as SMB is a core Windows protocol used for file and printer sharing and inter-process communication.”

Beyond these highlights, 10 of the vulnerabilities fixed this month were rated “critical” by Microsoft, including eight remote code execution flaws.

Notably absent from this month’s patch batch is a fix for a newly discovered weakness in Windows Server 2025 that allows attackers to act with the privileges of any user in Active Directory. The bug, dubbed “BadSuccessor,” was publicly disclosed by researchers at Akamai on May 21, and several public proof-of-concepts are now available. Tenable’s Satnam Narang said organizations that have at least one Windows Server 2025 domain controller should review permissions for principals and limit those permissions as much as possible.

Adobe has released updates for Acrobat Reader and six other products addressing at least 259 vulnerabilities, most of them in an update for Experience Manager. Mozilla Firefox and Google Chrome both recently released security updates that require a restart of the browser to take effect. The latest Chrome update fixes two zero-day exploits in the browser (CVE-2025-5419 and CVE-2025-4664).

For a detailed breakdown on the individual security updates released by Microsoft today, check out the Patch Tuesday roundup from the SANS Internet Storm Center. Action1 has a breakdown of patches from Microsoft and a raft of other software vendors releasing fixes this month. As always, please back up your system and/or data before patching, and feel free to drop a note in the comments if you run into any problems applying these updates.

,

Worse Than FailureCodeSOD: The Pirate's Code

We've talked about ASP .Net WebForms in the past. In this style of development, everything was event driven: click a button, and the browser sends an HTTP request to the server which triggers a series of events, including a "Button Click" event, and renders a new page.

When ASP .Net launched, one of the "features" was a lazy repaint in browsers which supported it (aka, Internet Explorer), where you'd click the button, the page would render on the server, download, and then the browser would repaint only the changed areas, making it feel more like a desktop application, albeit a laggy one.

This model didn't translate super naturally to AJAX style calls, where JavaScript updated only portions of the page. The .Net team added some hooks for it: special "AJAX enabled" controls, as well as helper functions, like __doPostBack, emitted into the page to trigger "postbacks" to the server. A postback is just a POST request with .NET-specific state data in the body.
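
To make that concrete, here's a minimal C# sketch (my illustration, not code from this application) of what a postback looks like from the server side; the hidden fields arrive as ordinary form values:

protected void Page_Load(object sender, EventArgs e)
{
    if (IsPostBack)
    {
        // __doPostBack fills these two hidden inputs before submitting the form.
        string target   = Request.Form["__EVENTTARGET"];   // ID of the control that raised the event
        string argument = Request.Form["__EVENTARGUMENT"]; // free-form data supplied by the caller
        // __VIEWSTATE and friends carry the rest of the framework's state.
    }
}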

All this said, Chris maintains a booking system for a boat rental company. Specifically, he's a developer at a company which the boat rental company hires to maintain their site. The original developer left behind a barnacle-covered mess of tangled lines and rotting hull.

Let's start with the view ASPX definition:

<script>
function btnSave_Click()
{
    if (someCondition) 
    {
    //Trimmed for your own sanity
    
    //PostBack to Save Data into the Database.
        javascript:<%#getPostBack()%>;                   
    }
    else
    {
        return false;
    }
}
</script>
<html>
      <body>
          <input type="button" value="  Save  Booking  " id="btnSave" class="button" title="Save [Alt]" onclick="btnSave_Click()" />
      </body>
</html>

__doPostBack is the client-side helper function that .NET emits for performing postbacks, and specifically, it populates two request fields: __EVENTTARGET (the ID of the UI element triggering the event) and __EVENTARGUMENT, an arbitrary field for your use. I assume getPostBack() is a helper method which renders a call to it. The code in btnSave_Click is as submitted, and I think our submitter may have mangled it a bit in "trimming", but I can see the goal is to ensure that when the onclick event fires, we perform a "postback" operation with some hard-coded values for __EVENTTARGET and __EVENTARGUMENT.

Or maybe it isn't mangled, and this code just doesn't work?

I enjoy that the tool-tip "title" field specifies that it's "[Alt]" text, and that the name of the button includes extra whitespace to ensure that it's padded out to a good rendering size, instead of using CSS.

But we can skip past this into the real meat. How this gets handled on the server side:

Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
    '// Trimmed more garbage
    If Page.IsPostBack Then
        'Check if save button has been Clicked.
        Dim eventArg As String = Request("__EVENTARGUMENT")
        Dim offset As Integer = eventArg.IndexOf("@@@@@")
        If (offset > -1) Then
            'this is an event that we raised. so do whatever you need to here.
            Save()
        End If
    End If
End Sub

From this, I conclude that getPostBack populates the __EVENTARGUMENT field with a pile of "@", and we use that to recognize that the save button was clicked. Except, and this is the important thing, if they populated the ID property with btnSave, then ASP .Net would automatically call btnSave_Click. The entire point of the __doPostBack functionality is that it hooks into the event handling pattern and acts just like any other postback, but lets you have JavaScript execute as part of sending the request.
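
For contrast, here is a minimal sketch of the conventional pattern, written in C# even though the article's code-behind is VB.NET: declare the button as a server control wired to an OnClick handler, gate the submit client-side with OnClientClick, and the framework raises the event for you with no sentinel string of @ characters required.

// Markup (sketch): <asp:Button ID="btnSave" runat="server" Text="Save Booking"
//     OnClientClick="return someCondition();" OnClick="btnSave_Click" />
protected void btnSave_Click(object sender, EventArgs e)
{
    // The framework identifies btnSave as the source of the postback and raises Click automatically.
    Save();
}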

The entire application is a boat with multiple holes in it; it's taking on water and going down. Unlike a good captain, Chris is absolutely not going down with it; he's looking for a lifeboat.

Chris writes:

The thing in its entirety is probably one of the biggest WTFs I've ever had to work with.
I've held off submitting because nothing was ever straightforward enough to be understood without posting the entire website.

Honestly, I'm still not sure I understand it, but I do hate it.


365 TomorrowsHelp One Help Oneself

Author: Steve Smith, Staff Writer Lewis got the assistant at a regifting exchange at the company Christmas party. He didn’t turn it on until February when a snowstorm kept him working from home for a week. It had been opened before, the setup was already complete, but it asked for his name, and gleaned network […]

The post Help One Help Oneself appeared first on 365tomorrows.

,

Worse Than FailureCodeSOD: A Real POS Report

Eddie's company hired a Highly Paid Consultant to help them retool their systems for a major upgrade. Of course, the HPC needed more and more time, and the project ran later and later and ended up wildly over budget, so the HPC had to be released, and Eddie inherited the code.

What followed was a massive crunch to try and hit absolutely hard delivery dates. Management didn't want their team "rewriting" the expensive code they'd already paid for, they just wanted "quick fixes" to get it live. Obviously, the HPC's code must be better than theirs, right?

After release, a problem appeared in one of their sales-related reports. The point-of-sale report was meant to show which items were available at any given retail outlet, along with their sales figures. Because their business dealt in a high volume of seasonal items, the list of items was expected to change every quarter.

The users weren't seeing the new items appear in the report. This didn't make very much sense- it was a report. The data was in the database. The report was driven by a view, also in the database, which clearly was returning the correct values? So the bug must be in the code which generated the report…

If POSItemDesc = "Large Sign" Then
        grdResults.Columns.FromKey("FColumn12").Header.Caption = "Large Sign"
End If
If POSItemDesc = "Small Sign" Then
        grdResults.Columns.FromKey("FColumn12").Header.Caption = "Small Sign"
End If
If POSItemDesc = "2x2 Hanging Sign" Then
        grdResults.Columns.FromKey("FColumn12").Header.Caption = "2x2 Hanging Sign"
End If
If POSItemDesc = "1x1 Sign" Then
        grdResults.Columns.FromKey("FColumn12").Header.Caption = "1x1 Sign"
End If
'.........Snipping more of these........
If POSItemDesc = "Light Thief" Then
        grdResults.Columns.FromKey("FColumn12").Header.Caption = "Light Thief"
End If
If POSItemDesc = "Door Strike" Then
        grdResults.Columns.FromKey("FColumn12").Header.Caption = "Door Strike"
End If

First, it's worth noting that inside of the results grid display item, the HPC named the field FColumn12, which is such a wonderfully self-documenting name, I'm surprised we aren't all using that everywhere. But the more obvious problem is that the list of possible items is hard-coded into the report; items which don't fit one of these if statements don't get displayed.

At no point did the person writing this see the pattern of "I check if a field equals a string, and then set another field equal to that string," and say, "maybe there's a better way?" At no point in the testing process did anyone try this report with a new item?

It was easy enough for Eddie to change the name of the column in the results grid and replace all of this code with a single line: grdResults.Columns.FromKey("POSItem").Header.Caption = POSItemDesc, which also had the benefit of actually working. But we're all left puzzling over why this happened in the first place. It's not like the HPC was getting paid per line of code. Right? Right?

Of course not- no HPC would willingly be paid based on any metric that has an objective standard, even if the metric is dumb.


365 TomorrowsPostcards from Corona

Author: Julian Miles, Staff Writer In a dusty corridor away from busy areas of Area 702, two people with ill-fitting lab coats concealing their uniforms are huddled under a disconnected monitoring camera. One takes a hit on a vape stick. The other lights a cigar. “I heard old Kendrix panicked after Prof Devensor collapsed. Nobody […]

The post Postcards from Corona appeared first on 365tomorrows.

,

Rondam RamblingsHating Trump More Won't Make Things Better

It has been nearly five months now since I published my open letter to Democratic candidates and organizations.  Since then I have, unsurprisingly, received dozens of texts and emails asking me to "Donate $5 now!"  For a while I responded to every one pointing them to my Open Letter and asking them to read it.  I was expecting (hoping for?) one of three responses.  1) "You are