Planet Russell


Charles Stross: Why this blog update is late

... The TLDR is: the cataract in my one mostly working eye (the other has about 50% retinal occlusion) is steadily getting worse, and I'm scheduled for surgery on March 27th.

NB: no need to lecture me about cataract surgery, I've already had it on the other eye. Same team, same hospital, same prognosis. I know exactly what to expect. Nor are your best wishes welcome: replying to them gets tiring after the fiftieth time (see: poor eyesight, above).

But worsening eyesight means that reading (and writing!) is fatiguing, so I gradually do less and less of it in each session.

Consequently I've been spending my screen time, not on the blog, but on a revision pass over my next novel, and on writing the follow-up.

(No, I can't give you any details: let's just say they're space operas, not Laundry Files, and I'll talk about them when my agent gives me the go-ahead. Book 1 is written, subject to editing, and Book 2 is about 10-15% written. And neither of them is Ghost Engine, the white whale I've been fruitlessly hunting for the past decade, although the viable chunks of GE may get recycled into Book 2.)

After my eye surgery I'll be going to Iridescence, the 2026 British Eastercon, the following weekend in Birmingham. I have some program items: I'll update this blog entry when I have a final schedule.

After Iridescence, I'll be heading to Satellite 9 in Glasgow (May 22nd to 24th). And after that I'll be attending Metropol Con in Berlin, July 2nd to 5th.

I'm not attending any US SF conventions for the foreseeable future (being deported to a concentration camp in El Salvador is not on my bucket list), but I will try to attend the 2027 World Science Fiction convention in Montreal, assuming the Paedopotus Rex hasn't gone on a Godzilla-style rampage north of the border by then, and that intercontinental air travel is still possible. (See, my inability to resist that kind of cheap shot is exactly why I'm not visiting the US these days: ICE want to see your social media history going back 5 years, and I gather they're using some horrible LLM tool from Palantir to vet travellers.)

We now return you to your regularly scheduled kvetching about the state of world affairs until my eyeballs are firing on all cylinders again. (Say, did you know that 30% of the world's fertilizer is shipped through the Strait of Hormuz? And about 20% of the sulfur that ends up as feedstock in sulfuric acid for industrial processes comes from sour Gulf crude, so ditto? Not to mention the helium that is required to keep MRI machines and TSMC's semiconductor fab lines running, never mind your grandkids' party balloons? Happy days ...)

Charles Stross: More in Sadness than in Anger

Sorry I haven't updated the blog for a while: I've been busy. (Writing the final draft of a new novel entirely unconnected to anything else you've read—space opera, new setting, longest thing I've written aside from the big Merchant Princes doorsteps. Now in my agent's inbox while I make notes towards a sequel, if requested.)

Over the past few years I've been naively assuming that while we're ruled by a ruthless kleptocracy, they're not completely evil: aristocracies tend to run on self-interest and try to leave a legacy to their children, which usually means leaving enough peasants around to mow the lawn, wash the dishes, and work the fields.

But my faith in the sanity of the evil overlords has been badly shaken in the past couple of months by the steady drip of WTFery coming out of the USA in general and the Epstein Files in particular, and now there's this somewhat obscure aside that rips the mask off entirely (original email on the DoJ website) ...

A document released by the U.S. Department of Justice as part of the Epstein files contains a quote attributed to correspondence involving Jeffrey Epstein that references Bill Gates and a controversial question about "how do we get rid of poor people as a whole."

The passage appears in a written communication included in the DOJ document trove and reads, in part: "I've been thinking a lot about that question that you asked Bill Gates, 'how do we get rid of poor people as a whole,' and I have an answer/comment regarding that for you." The writer then asks to schedule a phone call to discuss the matter further.

As an editor of mine once observed, America is ruled by two political parties: the party of the evil billionaires, and the party of the sane (so slightly less evil) billionaires. Evil billionaires: "let's kill the poor and take all their stuff." Sane billionaires: "hang on, if we kill them all who's going to cook dinner and clean the pool?"

And this seemed plausible ... before it turned out that the CEO class as a whole believe entirely in AI (which, to be clear, is just another marketing grift in the same spirit as cryptocurrencies/blockchain, next-generation nuclear power, real-estate-backed credit default options, and Dutch tulip bulbs). AI is being sold on the promise of increasing workforce efficiency. And in a world which has been studiously ignoring John Maynard Keynes' 1930 prediction that by 2030 we would only need to work a 15-hour week, they've drawn an inevitable, unwelcome conclusion from this axiom: that there are too many of us. For the past 75 years they've been so focussed on optimizing for efficiency that they no longer understand that efficiency and resilience are inversely related: in order to survive collectively through an energy transition and a time of climate destabilization we need extra capacity, not "right-sized" capacity.

Raise the death rate by removing herd immunity to childhood diseases? That's entirely consistent with "kill the poor". Mass deportation of anyone with the wrong skin colour? The white supremacists will join in enthusiastically, and meanwhile: the deported can die out of sight. Turn disused data centres or Amazon warehouses into concentration camps (which are notorious disease breeding grounds)? It's a no-brainer. Start lots of small overseas brushfire wars, escalating to the sort of genocide now being piloted in Gaza by Trump's ally Netanyahu (to emphasize: his strain of Judaism can only be understood as a Jewish expression of white nationalism, throwing off its polite political mask to reveal the death's head of totalitarianism underneath)? It's all part of the program.

Our rulers have gone collectively insane (over a period of decades) and they want to kill us.

The class war has turned hot. And we're all on the losing side.

Cryptogram: Team Mirai and Democracy

Japan’s election last month and the rise of the country’s newest and most innovative political party, Team Mirai, illustrate the viability of a different way to do politics.

In this model, technology is used to make democratic processes stronger, instead of undermining them. It is harnessed to root out corruption, instead of serving as a cash cow for campaign donations.

Imagine an election where every voter has the opportunity to opine directly to politicians on precisely the issues they care about. They’re not expected to spend hours becoming policy experts. Instead, an AI Interviewer walks them through the subject, answering their questions, interrogating their experience, even challenging their thinking.

Voters get immediate feedback on how their individual point of view matches—or doesn’t—a party’s platform, and they can see whether and how the party adopts their feedback. This isn’t like an opinion poll that politicians use for calculating short-term electoral tactics. It’s a deliberative reasoning process that scales, engaging voters in defining policy and helping candidates to listen deeply to their constituents.

This is happening today in Japan. Constituents have spent about eight thousand hours engaging with Mirai’s AI Interviewer since 2025. The party’s gamified volunteer mobilization app, Action Board, captured about 100,000 organizer actions per day in the runup to last week’s election.

It’s how Team Mirai, which translates to ‘The Future Party,’ does politics. Its founder, Takahiro Anno, first ran for office in 2024 as a 33-year-old software engineer standing for Governor of Tokyo. He came in fifth out of 56 candidates, winning more than 150,000 votes as an unaffiliated political outsider. He won attention by taking a distinctive stance on the role of technology in democracy and using AI aggressively in voter engagement.

Last year, Anno ran again, this time for the Upper Chamber of the national legislature—the Diet—and won. Now the head of a new national party, Anno found himself with a platform for making his vision of a new way of doing politics a reality.

In this recent House of Representatives election, Team Mirai shot up to win nearly four million votes. In the lower chamber’s proportional representation system, that was good enough for eleven total seats—the party’s first ever representation in the Japanese House—and nearly three times what it achieved in last year’s Upper Chamber election.

Anno’s party stood for election without aligning itself on the traditional axes of left and right. Instead, Team Mirai, heavily associated with young, urban voters, sought to unite across the ideological spectrum by taking a radical position on a different axis: the status quo and the future. Anno told us that Team Mirai believes it can triple its representation in the Diet after the next elections in each chamber, an ostentatious goal that seems achievable given their rapid rise over the past year.

In the American context, the idea of a small party unifying voters across left and right sounds like a pipe dream. But there is evidence it worked in Japan. Team Mirai won an impressive 11% of proportional representation votes from unaffiliated voters, nearly twice the share of the larger electorate. The centerpiece of the party’s policy platform is not about the traditional hot button issues, it’s about democracy itself, and how it can be enhanced by embracing a futuristic vision of digital democracy.

Anno told us how his party arrived at its manifesto for this month’s elections, and why it looked different from other parties’ in important ways. Team Mirai collected more than 38,000 online questions and more than 6,000 discrete policy suggestions from voters using its AI Policy app, which is advertised as a ‘manifesto that speaks for itself.’

After factoring in all this feedback, Team Mirai maintained a contrarian position on the biggest issue of the election: the sales tax and affordability. Rather than running on a reduction of the national sales tax like the major parties, Team Mirai reviewed dozens of suggestions from the public and ultimately proposed to keep that tax level while providing support to families through a child tax credit and lowering the required contribution for social insurance. Anno described this as another future-facing strategy: less price relief in the short term, but sustained funding for essential programs.

Anno has always intended to build a different kind of party. After receiving roughly $1 million in public funding apportioned to Team Mirai based on its single seat in the Upper Chamber last year, Anno began hiring engineers to enhance his software tools for digital democracy.

Anno described Team Mirai to us as a ‘utility party’: basic infrastructure for Japanese democracy that serves the broader polity rather than one faction. Their Gikai (‘assembly’) app illustrates the point. It provides a portal for constituents to research bills, using AI to generate summaries, to describe their impacts, to surface media reporting on the issue, and to answer users’ questions. Like all their software, it’s open source and free for anyone, in any party, to use.

After last week’s victory, Team Mirai now has about $5 million in public funding and ambitions to grow the influence of their digital democracy platform. Anno told us Team Mirai has secured an agreement with the LDP, Japan’s dominant ruling party, to begin using Team Mirai’s Gikai and corruption-fighting Mirumae financial transparency tool.

AI is the issue driving the most societal and economic change we will encounter in our lifetime, yet US political parties are largely silent. But AI and Big Tech companies and their owners are ramping up their political spending to influence the parties. To the extent that AI has shown up in our politics, it seems to be limited to the question of where to site the next generation of data centers and how to channel populist backlash to big tech.

Those are causes worthy of political organizing, but very few US politicians are leveraging the technology for public listening or other pro-democratic purposes. With the midterms still nine months away and with innovators like Team Mirai making products in the open for anyone to use, there is still plenty of time for an American politician to demonstrate what a new politics could look like.

This essay was written with Nathan E. Sanders, and originally appeared in Tech Policy Press.

Worse Than Failure: CodeSOD: The Barren Fields

Today, it's not exactly the code that was bad. For some time, a government agency had been collecting information from users using fillable PDF forms. The user would submit the form, and then a data entry clerk would copy the text from the form into a database. This, of course, raised the question: why was someone manually riding the copy/paste button?

Sally was tasked with automating this. The data is already in a digital format, so it should be easy to use a PDF library to parse out the entered data and insert it into the database. And it almost was.

Sally shares with us, not code, but the output of her program which scanned the fields, looking for their names:

FieldType: Text
FieldName: T5ZA1
FieldNameAlt: T5ZA1
FieldFlags: 25165824
FieldJustification: Left
FieldMaxLength: 3
---
FieldType: Text
FieldName: T5ZA2
FieldNameAlt: T5ZA2
FieldFlags: 25165824
FieldJustification: Left
FieldMaxLength: 2
---
FieldType: Text
FieldName: T5ZA3
FieldNameAlt: T5ZA3
FieldFlags: 25165824
FieldJustification: Left
FieldMaxLength: 4

I could go on (Sally shared many more examples), but you get the gist. The names were all cryptic five-character blobs. They all start with T5Z, followed by a letter and a number: A3, B9, C2, etc. It has the vibe of being autogenerated; someone just never considered that they might want clear names for the fields, and let their editor autonumber them. But that theory has one counterpoint: the letter "O" is never used. T5ZN9 is followed by T5ZP1.
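The skipped "O" actually fits a common autonumbering convention: generators often omit the letter O to avoid confusion with the digit zero. A hypothetical reconstruction of the scheme (the digit range and ordering are guesses from the samples Sally shared):

```python
import string

def field_names(prefix="T5Z"):
    """Generate field names in the apparent autonumbering order:
    prefix + letter (A-Z, skipping O) + digit (1-9)."""
    for letter in string.ascii_uppercase:
        if letter == "O":  # skipped, presumably to avoid confusion with zero
            continue
        for digit in range(1, 10):
            yield f"{prefix}{letter}{digit}"

names = list(field_names())
# Under this scheme, T5ZN9 is immediately followed by T5ZP1,
# matching the gap observed in the real form.
```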

Sally was left scratching her head. Of course, she was going to have to write some sort of lookup that would convert the PDF's field names into database field names, but she expected that the PDF would provide at least some sort of guidance on that front.
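Such a lookup might end up being nothing more than a hand-built table. A minimal sketch of the idea; the database column names below are invented examples, since the form itself offers no hints:

```python
# Hypothetical mapping from the PDF's cryptic field names to database
# columns; every name on the right-hand side is a made-up placeholder.
FIELD_MAP = {
    "T5ZA1": "phone_area_code",
    "T5ZA2": "phone_prefix",
    "T5ZA3": "phone_suffix",
}

def translate(pdf_fields):
    """Rename extracted PDF form fields to database column names,
    dropping anything the map doesn't know about."""
    return {FIELD_MAP[name]: value
            for name, value in pdf_fields.items()
            if name in FIELD_MAP}

# With a library such as pypdf, the input dict could come from
# PdfReader("form.pdf").get_fields().
```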

I really enjoy that the alt-text for every field is also the field name, which is a clear accessibility "win".


365 Tomorrows: Red Rover

Author: Majoki Red Rover, Red Rover, send MADIE right over. Red Rover, Red Rover, send MADIE right over. Red Rover, Red Rover, send MADIE right over. ANDIE sent the request out for the gigazillionth time, but Red Rover didn’t respond. Neither did MADIE. ANDIE widened his search parameters as red dust puffed from his relentless […]

The post Red Rover appeared first on 365tomorrows.

Planet Debian: Russ Allbery: Review: A Shadow in Summer

Review: A Shadow in Summer, by Daniel Abraham

Series: Long Price Quartet #1
Publisher: Tor
Copyright: March 2006
ISBN: 0-7653-1340-5
Format: Hardcover
Pages: 331

A Shadow in Summer is a high fantasy novel, the first of (as the name implies) a completed four-book series. Daniel Abraham is perhaps better known as half of the writing pair behind James S.A. Corey, author of the Expanse series. This was his first novel.

Otah was the sixth son of a Khai, sent like many of the unwanted later children of the powerful to learn the secrets of the andat and be trained as a poet. He learned his lessons well enough to reject the school and its teachings and walk away.

Amat Kyaan has worked her way up from nothing to become the senior overseer of the foreign Galtic House Wilsin in the sun-drenched port city of Saraykeht. Liat is her apprentice, distracted by young love. Maati is a new apprentice poet, having endured his training and been sent to learn from Heshai how to eventually hold the andat Removing-The-Part-That-Continues, better known as Seedless. None of them know they will find themselves entangled in a plot to destroy the poet of Saraykeht and, through him, the city's most potent economic tool.

A poet in this world is not what we would think of as a poet. They are, in essence, magical slave-drivers who capture the essence of an andat, a spirit embodying an idea that is coerced into the prison of volition and obedience by the poet. The andat Seedless, the embodiment of the concept of removing the spark of life, is central to the economic wealth of Saraykeht in a way that is startling in its simplicity: Seedless can remove the seeds from a warehouse full of cotton at a thought. This gives Saraykeht a massive productivity advantage in the cotton trade.

Seedless is also a powerful potential weapon. What he can do to cotton, he could as easily do to any other crop, or to people. The Galts are not fond of the independence and power of Saraykeht, but as long as the city controls a powerful andat, they do not dare to attack it directly. Indirectly, though... that's another matter.

This is one of those fantasy novels with meticulous and thoughtful world-building, careful and evocative prose, and a complex ensemble cast of interesting characters that the novel then attempts to make utterly miserable and complicit in their own misery. There should be a name for this style of writing. It's not tragedy because the ending is not tragic, precisely. It's not magic realism; the andats are openly magical, which makes this clearly high fantasy. But Abraham approaches the story from the type of realist frame that considers the pain and desperation of the characters to be more interesting than their ability to overcome challenges.

Amat starts the story as an admirable, sharp-witted expert manager, so her life is destroyed and she's subjected to sexual violence. Heshai loathes himself and veers between a tragic figure and a wastrel as the story systematically undermines opportunities for redemption. Maati is young and idealistic, so of course every character in the book sets out to crush his idealism under the weight of unforeseen consequences. There is a sad and depressing love triangle, because this is exactly the sort of book that has a sad and depressing love triangle. At the end of the novel, everyone who survives is older and wiser in the sense that some stories seem to think wisdom comes from the accumulation of trauma.

I find books like this so immensely frustrating because their merits are so clear. The world-building is careful and detailed in a way that includes economic systems, unlike so much fantasy. It is full of small, intriguing touches, such as the use of posture and gesture to communicate the emotional valence of one's words. Abraham understands the moral implications of poets and andats and the story tackles them head-on. The writing flows beautifully and gave me a strong sense of the city. I wanted to like this book for the obvious skill that went into it, and sometimes I even managed.

And yet, it's taken me three months to finish A Shadow in Summer because I simply do not want to spend this much time around miserable people. I would get through one or two chapters in a night and then wanted to read something happy or defiant or heroic, rather than watching slow-motion train wrecks intermixed with desperate attempts to navigate stifling layers of immoral systems. It's not that the story lacks a moral compass. The characters are sincerely trying to make the world a better place, with some success. It even delivers a happy ending of sorts. But so much of the journey was watching the lives of the characters fall apart.

I am completely unsurprised that some people loved this book. I'm still intrigued enough by the world-building that I'm half-tempted to try to read the sequel even after having to drag myself through this one. I had a similar reaction to Abraham's The Dragon's Path, though, so I think Abraham is just not for me. I may get back to the Expanse at some point, but having to drag myself through both of his solo novels I've tried, in two different series, probably indicates an incompatibility between author and reader. That's a shame, given the quality of the writing.

Followed by A Betrayal in Winter.

Content notes: Sexual and reproductive violence as significant plot elements.

Rating: 6 out of 10


Planet Debian: Marco d'Itri: systemd has not implemented age verification

This needs to be clear: systemd is under attack by a trolling campaign orchestrated by fascist elements. Nobody is forced to like or use systemd, but anybody who wants to pick a side should know the facts.

Recently, the free software Nazi bar crowd styling themselves as "concerned citizens" has tried to start a moral panic by saying that systemd is implementing age verification checks or that somehow it will require providing personally identifiable information.

This is a lie: the facts are simply that the systemd users database has gained an optional "date of birth" field, which the desktop environments may use or not as they deem appropriate. Of course there is no "identity verification" or requirements to provide any data, which in any case would not be shared beyond authorized local applications.

While the multiple recent bills proposing that general-purpose operating systems implement age verification mechanisms are often concerning, both from a social and a technical point of view, that is not the topic being discussed here. For a long time I have opposed attempts to implement parental controls at the network level and argued that they should be managed locally, by parents on their own machines: I cannot see why I should outright reject an attempt to build the infrastructure to do exactly that.

If we want to keep age-appropriate controls out of the hands of centralized authorities, the alternative is giving families the means to manage it themselves: this is what this field enables. Whether desktop environments use it for parental controls, for birthday reminders, or for nothing at all, is their users' decision.

By the way, the original UNIX users database has allowed storing PII in the GECOS field since it was invented in the '70s. Similar fields are also specified by many popular LDAP schemas: adding such an optional field is consistent with the UNIX tradition.

And while we are at it, let's also refute the other smear campaign started by the same people: the systemd project is not accepting "AI slop". What happened is that a documentation file for the benefit of coding agents was added to the repository. To be clear: agents still cannot submit merge requests. The file itself remarks that all contributions must be reviewed in detail by humans, and this is basically the same policy used by the Linux kernel.

Krebs on Security: ‘CanisterWorm’ Springs Wiper Attack Targeting Iran

A financially motivated data theft and extortion group is attempting to inject itself into the Iran war, unleashing a worm that spreads through poorly secured cloud services and wipes data on infected systems that use Iran’s time zone or have Farsi set as the default language.

Experts say the wiper campaign against Iran materialized this past weekend and came from a relatively new cybercrime group known as TeamPCP. In December 2025, the group began compromising corporate cloud environments using a self-propagating worm that went after exposed Docker APIs, Kubernetes clusters, Redis servers, and the React2Shell vulnerability. TeamPCP then attempted to move laterally through victim networks, siphoning authentication credentials and extorting victims over Telegram.

A snippet of the malicious CanisterWorm that seeks out and destroys data on systems that match Iran’s timezone or have Farsi as the default language. Image: Aikido.dev.

In a profile of TeamPCP published in January, the security firm Flare said the group weaponizes exposed control planes rather than exploiting endpoints, predominantly targeting cloud infrastructure over end-user devices, with Azure (61%) and AWS (36%) accounting for 97% of compromised servers.

“TeamPCP’s strength does not come from novel exploits or original malware, but from the large-scale automation and integration of well-known attack techniques,” Flare’s Assaf Morag wrote. “The group industrializes existing vulnerabilities, misconfigurations, and recycled tooling into a cloud-native exploitation platform that turns exposed infrastructure into a self-propagating criminal ecosystem.”

On March 19, TeamPCP executed a supply chain attack against the vulnerability scanner Trivy from Aqua Security, injecting credential-stealing malware into official releases via GitHub Actions. Aqua Security said it has since removed the harmful files, but the security firm Wiz notes the attackers were able to publish malicious versions that snarfed SSH keys, cloud credentials, Kubernetes tokens and cryptocurrency wallets from users.

Over the weekend, the same technical infrastructure TeamPCP used in the Trivy attack was leveraged to deploy a new malicious payload which executes a wiper attack if the user’s timezone and locale are determined to correspond to Iran, said Charlie Eriksen, a security researcher at Aikido. In a blog post published on Sunday, Eriksen said if the wiper component detects that the victim is in Iran and has access to a Kubernetes cluster, it will destroy data on every node in that cluster.

“If it doesn’t, it will just wipe the local machine,” Eriksen told KrebsOnSecurity.

Image: Aikido.dev.

Aikido refers to TeamPCP’s infrastructure as “CanisterWorm” because the group orchestrates their campaigns using an Internet Computer Protocol (ICP) canister — a system of tamperproof, blockchain-based “smart contracts” that combine both code and data. ICP canisters can serve Web content directly to visitors, and their distributed architecture makes them resistant to takedown attempts. These canisters will remain reachable so long as their operators continue to pay virtual currency fees to keep them online.

Eriksen said the people behind TeamPCP are bragging about their exploits in a group on Telegram and claim to have used the worm to steal vast amounts of sensitive data from major companies, including a large multinational pharmaceutical firm.

“When they compromised Aqua a second time, they took a lot of GitHub accounts and started spamming these with junk messages,” Eriksen said. “It was almost like they were just showing off how much access they had. Clearly, they have an entire stash of these credentials, and what we’ve seen so far is probably a small sample of what they have.”

Security experts say the spammed GitHub messages could be a way for TeamPCP to ensure that any code packages tainted with their malware will remain prominent in GitHub searches. In a newsletter published today titled GitHub is Starting to Have a Real Malware Problem, Risky Business reporter Catalin Cimpanu writes that attackers often are seen pushing meaningless commits to their repos or using online services that sell GitHub stars and “likes” to keep malicious packages at the top of the GitHub search page.

This weekend’s outbreak is the second major supply chain attack involving Trivy in as many months. At the end of February, Trivy was hit as part of an automated threat called HackerBot-Claw, which mass exploited misconfigured workflows in GitHub Actions to steal authentication tokens.

Eriksen said it appears TeamPCP used access gained in the first attack on Aqua Security to perpetrate this weekend’s mischief. But he said there is no reliable way to tell whether TeamPCP’s wiper actually succeeded in trashing any data from victim systems, and that the malicious payload was only active for a short time over the weekend.

“They’ve been taking [the malicious code] up and down, rapidly changing it and adding new features,” Eriksen said, noting that when the malicious canister wasn’t serving up malware downloads it was pointing visitors to a Rick Roll video on YouTube.

“It’s a little all over the place, and there’s a chance this whole Iran thing is just their way of getting attention,” Eriksen said. “I feel like these people are really playing this Chaotic Evil role here.”

Cimpanu observed that supply chain attacks have increased in frequency of late as threat actors begin to grasp just how efficient they can be, and his post documents an alarming number of these incidents since 2024.

“While security firms appear to be doing a good job spotting this, we’re also gonna need GitHub’s security team to step up,” Cimpanu wrote. “Unfortunately, on a platform designed to copy (fork) a project and create new versions of it (clones), spotting malicious additions to clones of legitimate repos might be quite the engineering problem to fix.”

Update, 2:40 p.m. ET: Wiz is reporting that TeamPCP also pushed credential stealing malware to the KICS vulnerability scanner from Checkmarx, and that the scanner’s GitHub Action was compromised between 12:58 and 16:50 UTC today (March 23rd).

Cryptogram: Microsoft Xbox One Hacked

It’s an impressive feat, over a decade after the box was released:

Since reset glitching wasn’t possible, Gaasedelen thought some voltage glitching could do the trick. So, instead of tinkering with the system reset pin(s), the hacker targeted the momentary collapse of the CPU voltage rail. This was quite a feat, as Gaasedelen couldn’t ‘see’ into the Xbox One, so he had to develop new hardware introspection tools.

Eventually, the Bliss exploit was formulated, where two precise voltage glitches were made to land in succession. One skipped the loop where the ARM Cortex memory protection was set up. Then the memcpy operation was targeted during the header read, allowing him to jump to the attacker-controlled data.

As a hardware attack against the boot ROM in silicon, Gaasedelen says the attack is unpatchable. Thus it is a complete compromise of the console, allowing for loading unsigned code at every level, including the Hypervisor and OS. Moreover, Bliss allows access to the security processor, so games, firmware, and so on can be decrypted.

Planet Debian: Benjamin Mako Hill: How taboo shapes knowledge production on Wikipedia

Note: I have not published blog posts about my academic papers over the past few years. To ensure that my blog contains a more comprehensive record of my published papers and to surface them for folks who missed them, I will periodically (re) publish blog posts about some “older” published projects. This post draws material from a previously published post by Kaylea Champion on the Community Data Science Blog.

Taboo subjects—such as sexuality and mental health—are as important to discuss as they are difficult to raise in conversation. Although many people turn to online resources for information on taboo subjects, censorship and low-quality information are common in search results. In two papers I recently published at CSCW—both led by Kaylea Champion—we presented a series of analyses showing how taboo shapes the process of collaborative knowledge building on English Wikipedia.

The first study is a quantitative analysis showing that articles on taboo subjects are much more popular and are the subject of more vandalism than articles on non-taboo topics. In surprising news, we also found that they were edited more often and were of higher quality!

Short video of Kaylea’s presentation of the work given at Wikimania in August 2023.

The first challenge we faced in conducting this work was identifying taboo articles. Kaylea had a brilliant idea for a new computational approach to doing so without relying on our individual intuitions about what qualifies as taboo (something we understood would be highly specific to our own culture, class, etc). Her approach was to make use of an insight from linguistics: people develop euphemisms as ways to talk about taboos (i.e., think about all the euphemisms we’ve devised for death, or sex, or menstruation, or mental health).

We used this insight to build a new machine-learning classifier based on English Wiktionary definitions. If a ‘sense’ of a word was tagged as euphemistic, we treated the words in the definition as indicators of taboo. The end result was a series of words and phrases that most powerfully differentiate taboo from non-taboo. We then did a simple match between those words and phrases and the titles of Wikipedia articles. The topics were taboo enough that we were a little uncomfortable discussing them in our meetings! We built a comparison sample of articles whose titles are words that, like our taboo articles, appear in Wiktionary definitions.

In the first paper, we used this new dataset to test a series of hypotheses about how taboo shapes collaborative production in Wikipedia. Our initial hypotheses were based on the idea that taboo information is often in high demand but that Wikipedians might be reluctant to associate their names (or usernames) with taboo topics. The result, we argued, would be articles that were in high demand but of low quality.

We found that taboo articles are thriving on Wikipedia! In summary, we found that in comparison to non-taboo articles:

  • Taboo articles are more popular (as expected).
  • Taboo articles receive more contributions (contrary to expectations).
  • Taboo articles receive more low-quality contributions (as expected).
  • Taboo articles are higher quality (contrary to expectations).
  • Taboo article contributors are more likely to contribute without an account (as expected) and have less experience (as expected), but accountholders are more likely to make themselves identifiable by having a user page, disclosing their gender, and making themselves emailable (all three contrary to expectation!).
Image of the estimated quality of the four articles in the second mixed-methods paper. Extreme dips reflect periods of frequent vandalism.

Kaylea attempted to understand these somewhat confusing results by designing a fantastic mixed-methods analysis that sought to unpack some of the nuance missing in the quantitative analysis by delving deep into the “life histories” of four articles on English Wikipedia: two on taboo topics related to women’s anatomy (Clitoris and Menstruation) and two non-taboo articles chosen for comparison (Cell membrane and Philip Pullman).

Although the findings from the analysis can be difficult to summarize succinctly (as with many qualitative studies), we showed how the taboo example articles’ success was hard-won amid real challenges and attacks. The paper describes how challenges were overcome through resilient leadership, often provided by a single dedicated individual. The paper provides a template for how taboo can be—and frequently is—overcome by dedicated Wikipedians in ways that provide useful knowledge resources in real demand.

For more details, visualizations, and statistics, we hope you’ll take a look at our papers, both linked below.


The full citations for the papers are: (1) Champion, Kaylea, and Benjamin Mako Hill. 2023. “Taboo and Collaborative Knowledge Production: Evidence from Wikipedia.” Proceedings of the ACM on Human-Computer Interaction 7 (CSCW2): 299:1-299:25. https://doi.org/10.1145/3610090. (2) Champion, Kaylea, and Benjamin Mako Hill. 2024. “Life Histories of Taboo Knowledge Artifacts.” Proceedings of the ACM on Human-Computer Interaction 8 (CSCW2): 505:1-505:32. https://doi.org/10.1145/3687044.

We have also released replication materials for the paper, including all the data and code used to conduct the analyses.

This blog post and the paper it describes are collaborative work by Kaylea Champion and Benjamin Mako Hill.

Worse Than FailureCodeSOD: Completely Readable

It is eminently reasonable for companies to have "readability standards" for their code. You're writing this code for humans to read, after all, at least in theory. You need to communicate to future inheritors of your code.

But that doesn't mean readability standards are good. Tony's company, for example, has rules about returning boolean values from functions, and those rules mean you are expected to write code like this:

public bool Completed ()
{
   if (completed == true)
   {
   return true;
   }
   else
   {
   return false;
   }
}

It's more "explicit" this way. I certainly would have explicit things to say if I were told I needed to write code this way. Also, what's with the non-indented return statements? Is that also part of their coding standards?


365 TomorrowsAnna Left Today

Author: Julian Miles, Staff Writer The curtains hang out the window, blowing in the breeze. A tic starts on his cheek, but stills when he looks down, gaze drawn to where a torn page from her notebook flaps about in his grip, like a little bird trying to escape. Far down the road a girl […]

The post Anna Left Today appeared first on 365tomorrows.

Planet DebianRuss Allbery: Review: Dark Class

Review: Dark Class, by Michelle Diener

Series: Class 5 #5
Publisher: Eclipse
Copyright: 2022
ISBN: 0-6454658-2-8
Format: Kindle
Pages: 349

Dark Class is the fifth novel (not counting the skippable novella) in Michelle Diener's Class 5 romantic science fiction series. As with the previous novels, this follows romance series conventions: There are new protagonists, but characters from the previous books make an appearance. It's helpful but not that necessary to remember the details of the previous books; the necessary background is explained enough to follow the story.

By now, series readers know the formula. Yet another Earth woman was secretly abducted by the Tecran, encounters a Class 5 ship, and finds a way to be surprisingly dangerous and politically destabilizing. This time, Ellie has been mostly unconscious since her abduction and awakes in a secret Tecran base after the Tecran have all been murdered. There is a Class 5 AI involved, but not a full ship; instead, Dark Class picks up (or, arguably, manufactures) a loose end from Dark Minds. Other than that break from the formula, you know what to expect by now: a hunky Grih, a tricky political standoff, a protective Class 5, a slow-burn romance, and a surprisingly capable protagonist who upends politics through plucky grit and refusal to tolerate poor treatment. Oh, and a new selection of salvaged clothing and weapons to make Ellie beautiful and surprisingly dangerous.

If you are this far into the series, you probably like the formula. That's my position. I don't care about the romance, but something about the prisoner to threat evolution of the kidnapped protagonists and the growing friendship with an AI makes me happy. This is not great literature, but it is reliably entertaining with a guaranteed victorious protagonist and happy ending, making it a comfortable break from more difficult books with emotionally wrenching scenes.

Dark Class is one of the better executions of the formula because it has long stretches of my favorite parts of these books: exploration of mostly-abandoned surroundings for neat gadgets while the AI and the protagonist slowly build a relationship of mutual respect. This book has bonus drones with minds of their own and an enigmatic alien spaceship that provides a fun mid-novel twist. The Tecran and the Grih repeatedly underestimate Ellie and are caught by surprise at dramatically satisfying moments. It's just fun to read, and I save this series for when I need that type of book.

As with the other books of the series, Diener's writing is serviceable but not great. She repeats herself, uses way too many paragraph breaks for emphasis, and is not going to win any literary awards for prose quality. The series is in the upper half of self-published works, and I've certainly read worse, but either the formula will click with you or it won't. If it doesn't, the prose is not going to salvage the book.

There is some development of the series plot, but it's mostly predictable fallout from Dark Matters. This book is mostly tactical and smaller in scale. I am a little curious where Diener is going with political developments, since the accumulated Earth women and Class 5 ships are in some danger of becoming a sort of shadow government through sheer military power, but I'm dubious this series will have enough political sophistication to dig into the implications. It's best enjoyed as small-scale episodic wish fulfillment for female protagonists, and that's good enough for me.

If you've read this far in the series, recommended; this is one of the stronger entries.

Followed by Collision Course, which breaks the title convention for the series.

Rating: 7 out of 10

xkcdInflation Timeline

,

Planet DebianVincent Bernat: Calculate “1/(40rods/​hogshead) → L/100km” from your Zsh prompt

I often need a quick calculation or a unit conversion. Rather than reaching for a separate tool, a few lines of Zsh configuration turn = into a calculator. Typing = 660km / (2/3)c * 2 -> ms gives me 6.60457 ms1 without leaving my terminal, thanks to the Zsh line editor.

The equal alias

The main idea looks simple: define = as an alias to a calculator command. I prefer Numbat, a scientific calculator that supports unit conversions. Qalculate is a close second.2 If neither is available, we fall back to Zsh’s built-in zcalc module.

As the alias built-in uses = as a separator for name and value, we need to alter the aliases associative array:

if (( $+commands[numbat] )); then
  aliases[=]='numbat -e'
elif (( $+commands[qalc] )); then
  aliases[=]='qalc'
else
  autoload -Uz zcalc
  aliases[=]='zcalc -f -e'
fi

With this in place, = 847/11 becomes numbat -e 847/11.

The quoting problem

The first problem surfaces quickly. Typing = 5 * 3 fails: Zsh expands the * character as a glob pattern before passing it to the calculator. The same issue applies to other characters that Zsh treats specially, such as > or |. You must quote the expression:

$ = '5 * 3'
15

We fix this by hooking into the Zsh line editor to quote the expression before executing it.

Automatic quoting with ZLE

Zsh calls the accept-line widget when you submit a command. We replace it with a function that detects the = prefix and quotes the expression:

_vbe_calc_accept() {
  case $BUFFER in
    "="*)
      typeset -g _vbe_calc_expr=$BUFFER # not used yet
      BUFFER="= ${(q-)${${BUFFER#=}# }}"
      ;;
  esac
  zle .accept-line
}
zle -N accept-line _vbe_calc_accept

When you type = 5 * 3 and press ↲, _vbe_calc_accept strips the = prefix, quotes the remainder with the (q-) parameter expansion flag, and rewrites the buffer to = '5 * 3' before invoking the original .accept-line widget. As a bonus, you can save a few keystrokes with =5*3! 🚀
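Outside of ZLE, the same nested expansion can be run by hand to see what the widget produces. This is just the expression from the widget above, applied to a sample buffer; zsh itself is required:

```shell
# Reproduce the widget's rewrite manually: strip the '=' prefix and one
# leading space, then re-quote minimally with the (q-) expansion flag.
zsh -f -c '
  BUFFER="= 5 * 3"
  print -r -- "= ${(q-)${${BUFFER#=}# }}"
'
# prints: = '5 * 3'
```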

You can now compute math expressions and convert units directly from your shell. Zsh automatically quotes your expressions:

$ = '1 + 2'
3
$ = 'pi/3 + pi |> cos'
-0.5
$ = '17 USD -> EUR'
14.7122 €
$ = '180*500mg -> g'
90 g
$ = '5 gigabytes / (2 minutes + 17 seconds) -> megabits/s'
291.971 Mbit/s
$ = 'now() -> tz("Asia/Tokyo")'
2026-03-22 22:00:03 JST (UTC +09), Asia/Tokyo
$ = '1 / (40 rods / hogshead) -> L / 100km'
118548 × 0.01 l/km
“That's the way I like it!” says Grampa Simpson
The metric system is the tool of the devil! My car gets forty rods to the hogshead, and that's the way I like it! ― Grampa Simpson, A Star Is Burns

Storing unquoted history

As is, Zsh records the quoted expression in history. You must unquote it before submitting it again. Otherwise, the ZLE widget quotes it a second time. Bart Schaefer provided a solution to store the original version:

_vbe_calc_history() {
  return ${+_vbe_calc_expr}
}
add-zsh-hook zshaddhistory _vbe_calc_history

_vbe_calc_preexec() {
  (( ${+_vbe_calc_expr} )) && print -s $_vbe_calc_expr
  unset _vbe_calc_expr
  return 0
}
add-zsh-hook preexec _vbe_calc_preexec

The zshaddhistory hook returns 1 if we are evaluating an expression, telling Zsh not to record the command. The preexec hook then adds the original, unquoted command with print -s.


The complete code is available in my zshrc. A common alternative is the noglob precommand modifier. If you stick with to instead of -> for unit conversion, it covers 90% of use cases. For a related Zsh line editor trick, see how I use auto-expanding aliases to fix common typos.


  1. This is the fastest a packet can travel back and forth between Paris and Marseille over optical fiber. ↩

  2. Qalculate is less understanding with units. For example, it parses “Mbps” as megabarn per picosecond: ☢

    $ numbat -e '5 MB/s -> Mbps'
    40 Mbps
    $ qalc 5 MB/s to Mbps
    5 megabytes/second = 0.000005 B/ps
    

    ↩

365 TomorrowsRegularized

Author: Jacqueline Kaufman Jean of Arc takes her meds, swallowing carefully. “Delicious,” smiling, almost all her teeth intact. The voices have gone somewhere in the whiteness, gathering strength. In Russo-Amerique, meds are treasure, and she has been selected. Regularized. She has a home, concrete gray blocks that hold wind at bay, tuck in the heat […]

The post Regularized appeared first on 365tomorrows.

David BrinClaude answers GPT re ailien minds. And the winner is...

Last time, I posted a lengthy missive by ChatGPT appraising my new book ailien minds, prompted by Richard Bliss, with some of my own commentary and demurals laced-in. 

All told, the effect was that of a loquacious book report by a garrulous and rather unctuous high school sophomore who had flipped through some random pages for key words and then googled some of them... or used them to sift its training sets. 

So, how does Claude do?  Well, the comparison here isn't exactly fair... Mr. Bliss asked Claude to read not just my entire book, but also the GPT appraisal that I posted, last time, including my critical inserts. So, you cannot draw explicit conclusions. 

Still - fair or unfair - the overall impression is night-and-day.  Claude's missive is accurate, pertinent and in some cases even surprisingly insightful. And weirdly honest about its own (un)sapience. 

Unlike GPT, which glossed maybe 5% of my work amid a regurgitation of common AI memes from its training sets, Claude zoomed in on core concepts in ailien minds, distilling an impressive 50% or so of the key points. (Though of course none of the supporting evidence, which Claude says I provide too copiously!)

Moreover, there was almost none of GPT's blatant flattery.  Well, except for the flattery that's inherent in well-summarizing my ideas.

But judge for yourself. And note at the end where I promise a more-fair comparison. Maybe next time. If world events don't force me back to politics.

(Oh, note: on Monday I keynote one of the tracks at the big RSA conference.)

============== 

A Critique of ChatGPT’s Review of

Ailien Minds  by David Brin

Written by Claude (Anthropic) March 2026

Prepared after reading the full text of Ailien Minds via page-by-page screenshots supplied by Richard Bliss


Preamble: A Note on What This Document Is

This is a critique written by one AI system (Claude, made by Anthropic) of another AI system’s (ChatGPT, made by OpenAI) review of a book that is substantially about how AI systems should be governed. The irony is not lost on me.

A disclosure: I read Ailien Minds page by page through 73 screenshots shared by a reader. ChatGPT appears to have been given the book’s text as a submission for review. We both had access to the source material. We arrived at very different readings.

David Brin annotated the ChatGPT exchange with his own reactions. I have read those annotations. In the interest of transparency, they informed my critique—but so did my own independent reading of the book. Where I agree with Brin’s frustrations, I will say so. Where I think the ChatGPT review got something right despite his objections, I will say that too.


The Central Problem: Surface Pattern-Matching vs. Structural Argument

The ChatGPT review correctly identifies that Ailien Minds is about AI, that Brin occupies a middle position between optimists and doomers, and that he favors transparency and competing systems over centralized control. These are accurate at the level of a dust jacket summary. The problem is that a dust jacket summary is essentially all ChatGPT produced, stretched across several thousand words of increasingly generic AI commentary.

The book’s actual contribution—its reason for existing alongside the dozens of other AI books published in 2025–2026—is a specific, detailed argument about mechanism. How, precisely, do you keep powerful AI entities accountable once their creators can no longer control them? Brin’s answer draws on evolutionary biology, ecosystem dynamics, and the history of legal and democratic institutions to propose that individuated AI agents, competing within transparent accountability structures, represent the only approach with historical precedent for success.

ChatGPT never engages with this mechanism. It talks about transparency in the abstract. It mentions competing AIs. But it never connects these to the specific framework Brin builds: individuation (giving AI agents distinct, persistent identities), reciprocal accountability (structuring incentives so AI entities police each other), and disputation arenas (formal adversarial processes for testing AI claims and behavior). These are not passing ideas in the book. They are the book’s thesis.

 

Five Specific Failures

1. The Three Clichés Were Invisible

One of Brin’s most forceful arguments is that virtually all public AI discourse is trapped in three formats: AI as obedient corporate servant (the “castle” model), AI as amorphous uncontrolled swarm (the “blob” model), or AI as a single superintelligent overlord (the “Skynet” model). He argues these three clichés function as thought-terminating frameworks that prevent people from imagining a fourth alternative.

ChatGPT’s review addresses only the third cliché (Skynet), and only to dismiss it. The castle and blob models—which Brin argues are actually more dangerous because they are more plausible—go entirely unmentioned. This is a significant omission. It is as if someone reviewed Darwin’s On the Origin of Species and discussed his observations about finch beaks without ever mentioning natural selection.

2. Individuation Was Missed Entirely

The concept of individuation—giving AI systems distinct, trackable identities so they can build reputations and be held accountable—is arguably the single most novel proposal in the book. It is the mechanism by which Brin’s vision of competitive accountability would actually function. Without individuation, you cannot have meaningful AI reputations. Without reputations, you cannot have market-style accountability. The entire logical chain depends on it.

ChatGPT does not mention it once.

3. The Ecosystem Argument Was Flattened

Brin spends a full chapter (Chapter 3) and a substantial aside (Aside #4, on “Soup vs. Sea”) building a detailed analogy between biological ecosystems and the emerging digital ecosystem. This is not a loose metaphor. He traces specific parallels: energy gradients, predator-prey dynamics, the role of barrier membranes in enabling individuation, and the historical pattern that monopoly organisms cause ecosystem collapse.

ChatGPT reduces this to a vague gesture toward “a messy ecosystem of AIs.” The structural argument about why ecosystems need individuated agents with membranes (boundaries, identities) to remain healthy—which is the bridge between Brin’s ecological thinking and his policy proposals—disappears entirely.

4. The Disputation Arena Proposal Was Absent

Chapter 12 presents what is arguably the book’s most concrete policy proposal: Disputation Arenas, formal adversarial processes modeled on courtrooms, scientific peer review, and competitive markets, designed to stress-test AI systems and AI governance proposals through structured disagreement. Brin devotes considerable space to designing this mechanism, including how different outcomes (clear victory, compromise, or productive stalemate) would each generate value.

ChatGPT’s review does not mention disputation, adversarial testing, or structured debate in any form. For a book whose final major chapter is titled “Our Abrasive Secret Sauce,” this is a remarkable gap.

5. Training Data Overwhelmed Source Material

Perhaps the most instructive failure is how frequently the ChatGPT review drifts from the book’s actual content into generic AI discourse drawn from its training data. The extended discussions of Ray Kurzweil, Nick Bostrom, and Eliezer Yudkowsky are competent but contextually irrelevant—Brin mentions these figures only in passing, and his argument is specifically that the framework he proposes transcends the optimist/pessimist spectrum these thinkers represent.

Similarly, the later sections on “humans as sanity-check layers,” “trust as currency,” and “cognitive jobs” are reasonable points about AI’s economic impact—but they reflect generic AI futurism, not the book’s distinctive arguments. The review increasingly reads as “things an LLM knows about AI” rather than “a critical reading of this specific book.”

Brin identifies this drift repeatedly in his annotations. He is right to be frustrated. An AI reviewing a book should be accountable to the source material, not to its own training distribution.

 

What ChatGPT Got Right

Fairness requires noting what the review did well.

The initial framing of AI as “alien intelligence”—not extraterrestrial, but cognitively foreign—is an accurate representation of one of Brin’s themes. The observation that Brin sits between the poles of AI optimism and doom is correct. The point that Brin’s thinking is influenced by The Transparent Society is valid and relevant. And ChatGPT’s note that Brin has explored these ideas in fiction for decades provides useful context.

At the level of “what is this book generally about and who wrote it,” the review is competent. The failure is at the level of “what does this book specifically argue that other books do not,” which is the level at which a serious review must operate.

 

A Broader Observation

This exchange is itself a data point for one of Brin’s arguments. He warns throughout the book that LLMs are persuasively fluent without being deeply accountable to the material they process. The ChatGPT review demonstrates this precisely: it is well-organized, grammatically polished, and tonally appropriate. It sounds like a thoughtful review. But when measured against the source material, it is substantially hollow—a fluent recombination of training data that happens to overlap with the book’s topic, rather than a genuine critical engagement with the book’s unique arguments.

Brin proposes that accountability for AI should come through adversarial processes—competing systems checking each other’s work. This critique is, in a small way, an instance of exactly that process. Whether it is a better reading than ChatGPT’s is not for me to judge. But the fact that two AI systems can produce substantially different readings of the same text, with different blind spots and different emphases, is itself an argument for the kind of competitive, multi-perspective AI governance Brin advocates.


=======

Back to DB here.

What impressed me most - beyond the cogency and direct/fair pertinence to my book - is the discursive style displayed by Claude's choice of paraphrasings.* In every case, it chose sentences that - while cogent and accurate - were not specific phrasings I'd have used. (Though in a couple of cases I wish that I had!)

Now, of course, that could be a programmed proclivity, well within the grasp of a Large Language Model. Still it does kind of resist the simplistic characterization that all LLMs are just advanced sentence auto-complete systems.

To be clear, although Claude gives you a vastly better summary here than GPT did, nevertheless...

... this is actually a test of YOU!

Are you still a reader? Possessed of true intellect and curiosity? Because I promise that any page of ailien minds will provide lots of value that no summary can encompass. Claude even says so, challenging you to be one of the elite who actually enjoys an idea and fact-rich book!  ;-)


 Next time, I'll post the de-novo appraisal of ailien minds that Mr. Bliss asked Claude to do, without feeding in the annotated GPT synopsis. If Claude were a person, that appraisal would still be affected by the earlier reading. That's not supposed to happen with LLMs, who should start fresh, if told to. But judge for yourself.

Till then, fight for a civilization that is worthy-of... and sets good examples for... these new children of our minds.

------

*  (Elsewhere I have said that paraphrasing accurately is the seldom-mentioned top attribute of someone who is arguing fairly and cogently.)

,

Planet DebianJonathan Dowland: Ladytron

I saw Ladytron perform in Digital, Newcastle last night. The last time I saw them was, I think, at the same venue, 18 years ago. Time flies!

Photo of the trio performing on stage

Back in the day (perhaps their heyday, perhaps not!) Ladytron ploughed a particular sonic furrow and did it very well. Going into the gig I had set my expectations that, should they play just these hits, I'd have a good time.

The gig exceeded my expectations. The setlist very much did not lean into their best-known period: the more recent few albums were very well represented and to me this felt very confident. The lead singer, Helen Marnie, demonstrated some excellent range, particularly on some of the new songs. Daniel Hunt did a lot of backing vocals and they were really complementary to Helen's: underscoring but not overpowering. I enjoyed nerding out watching Mira Aroyo's excellent wrangling of her Korg MS-20. One highlight was an encore performance of Light & Magic, which was arguably the "alternate version" as available on the expanded versions of that album or the Remixed and Rare companion.

I thought I'd try to put together a 5-track playlist for a friend who attended the gig but isn't super familiar with them. As usual this is hard. I'm going to avoid the obvious hits, try to represent their whole career and try to ensure the current trio each get a vocal turn in the selection.

They actually released their latest album, Paradises, yesterday as well. One track from it is in the list below.

I'm Not Scared by Ladytron
Kingdom Undersea by Ladytron
Blue Jeans by Ladytron
He took her to a movie by Ladytron
Transparent Days by Ladytron

(If you can't see anything, the bandcamp embeds have been stripped out by whatever you are viewing this with)

Planet DebianMatthew Garrett: SSH certificates and git signing

When you’re looking at source code it can be helpful to have some evidence indicating who wrote it. Author tags give a surface level indication, but it turns out you can just lie and if someone isn’t paying attention when merging stuff there’s certainly a risk that a commit could be merged with an author field that doesn’t represent reality. Account compromise can make this even worse - a PR being opened by a compromised user is going to be hard to distinguish from the authentic user. In a world where supply chain security is an increasing concern, it’s easy to understand why people would want more evidence that code was actually written by the person it’s attributed to.

git has support for cryptographically signing commits and tags. Because git is about choice even if Linux isn’t, you can do this signing with OpenPGP keys, X.509 certificates, or SSH keys. You’re probably going to be unsurprised about my feelings around OpenPGP and the web of trust, and X.509 certificates are an absolute nightmare. That leaves SSH keys, but bare cryptographic keys aren’t terribly helpful in isolation - you need some way to make a determination about which keys you trust. If you’re using something like GitHub you can extract that information from the set of keys associated with a user account1, but that means that a compromised GitHub account is now also a way to alter the set of trusted keys. And when was the last time you audited your keys? How certain are you that every trusted key there is still 100% under your control? Surely there’s a better way.

SSH Certificates

And, thankfully, there is. OpenSSH supports certificates, an SSH public key that’s been signed by some trusted party and so now you can assert that it’s trustworthy in some form. SSH Certificates also contain metadata in the form of Principals, a list of identities that the trusted party included in the certificate. These might simply be usernames, but they might also provide information about group membership. There’s also, unsurprisingly, native support in SSH for forwarding them (using the agent forwarding protocol), so you can keep your keys on your local system, ssh into your actual dev system, and have access to them without any additional complexity.

And, wonderfully, you can use them in git! Let’s find out how.

Local config

There are two main parameters you need to set. First,

git config set gpg.format ssh

because unfortunately for historical reasons all the git signing config is under the gpg namespace even if you’re not using OpenPGP. Yes, this makes me sad. But you’re also going to need something else. Either user.signingkey needs to be set to the path of your certificate, or you need to set gpg.ssh.defaultKeyCommand to a command that will talk to an SSH agent and find the certificate for you (this can be helpful if it’s stored on a smartcard or something rather than on disk). Thankfully for you, I’ve written one. It will talk to an SSH agent (either whatever’s pointed at by the SSH_AUTH_SOCK environment variable or with the -agent argument), find a certificate signed with the key provided with the -ca argument, and then pass that back to git. Now you can simply pass -S to git commit and various other commands, and you’ll have a signature.

Validating signatures

This is a bit more annoying. Using native git tooling ends up calling out to ssh-keygen2, which validates signatures against a file in a format that looks somewhat like authorized-keys. This lets you add something like:

* cert-authority ssh-rsa AAAA…

which will match all principals (the wildcard) and succeed if the signature is made with a certificate that’s signed by the key following cert-authority. I recommend you don’t read the code that does this in git because I made that mistake myself, but it does work. Unfortunately it doesn’t provide a lot of granularity around things like “Does the certificate need to be valid at this specific time” and “Should the user only be able to modify specific files” and that kind of thing, but also if you’re using GitHub or GitLab you wouldn’t need to do this at all because they’ll just do this magically and put a “verified” tag against anything with a valid signature, right?

Haha. No.

Unfortunately while both GitHub and GitLab support using SSH certificates for authentication (so a user can’t push to a repo unless they have a certificate signed by the configured CA), there’s currently no way to say “Trust all commits with an SSH certificate signed by this CA”. I am unclear on why. So, I wrote my own. It takes a range of commits, and verifies that each one is signed with either a certificate signed by the key in CA_PUB_KEY or (optionally) an OpenPGP key provided in ALLOWED_PGP_KEYS. Why OpenPGP? Because even if you sign all of your own commits with an SSH certificate, anyone using the API or web interface will end up with their commits signed by an OpenPGP key, and if you want to have those commits validate you’ll need to handle that.
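For reference, the primitive that git's verification ultimately rests on is ssh-keygen's -Y interface, which can be exercised directly. Here is a self-contained round trip with a throwaway key and a hypothetical principal; git signs commits in the "git" namespace, which is why it appears below:

```shell
# Sign a file and verify it against an allowed-signers entry.
# Key, file, and principal are all disposable examples.
dir=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -q -f "$dir/key"
echo 'some content' > "$dir/msg"

# Sign in the "git" namespace, the one git uses for commits.
ssh-keygen -Y sign -f "$dir/key" -n git "$dir/msg"

# Map a principal to the key; putting 'cert-authority' before the key type
# would instead trust anything signed by certificates from that CA.
printf 'user@example.com %s\n' "$(cut -d' ' -f1,2 "$dir/key.pub")" > "$dir/allowed"

ssh-keygen -Y verify -f "$dir/allowed" -I user@example.com -n git \
  -s "$dir/msg.sig" < "$dir/msg"
```

On success the last command reports a good signature for the named principal and exits zero, which is the behaviour a CI check can build on.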

In any case, this should be easy enough to integrate into whatever CI pipeline you have. This is currently very much a proof of concept and I wouldn’t recommend deploying it anywhere, but I am interested in merging support for additional policy around things like expiry dates or group membership.

Doing it in hardware

Of course, certificates don’t buy you any additional security if an attacker is able to steal your private key material - they can steal the certificate at the same time. This can be avoided on almost all modern hardware by storing the private key in a separate cryptographic coprocessor - a Trusted Platform Module on PCs, or the Secure Enclave on Macs. If you’re on a Mac then Secretive has been around for some time, but things are a little harder on Windows and Linux - there are various things you can do with PKCS#11, but you’ll hate yourself even more than you’ll hate me for suggesting it in the first place, and there’s ssh-tpm-agent, except it’s quite tied to Linux.

So, obviously, I wrote my own. This makes use of the go-attestation library my team at Google wrote, and is able to generate TPM-backed keys and export them over the SSH agent protocol. It’s also able to proxy requests back to an existing agent, so you can just have it take care of your TPM-backed keys and continue using your existing agent for everything else. In theory it should also work on Windows[3], but this is all in preparation for a talk I only found out I was giving about two weeks beforehand, so I haven’t actually had time to test anything other than that it builds.

And, delightfully, because the agent protocol doesn’t care about where the keys are actually stored, this still works just fine with forwarding - you can ssh into a remote system and sign something using a private key that’s stored in your local TPM or Secure Enclave. Remote use can be as transparent as local use.

Wait, attestation?

Ah yes you may be wondering why I’m using go-attestation and why the term “attestation” is in my agent’s name. It’s because when I’m generating the key I’m also generating all the artifacts required to prove that the key was generated on a particular TPM. I haven’t actually implemented the other end of that yet, but if implemented this would allow you to verify that a key was generated in hardware before you issue it with an SSH certificate - and in an age of agentic bots accidentally exfiltrating whatever they find on disk, that gives you a lot more confidence that a commit was signed on hardware you own.

Conclusion

Using SSH certificates for git commit signing is great - the tooling is a bit rough but otherwise they’re basically better than every other alternative, and also if you already have infrastructure for issuing SSH certificates then you can just reuse it[4] and everyone wins.


  1. Did you know you can just download people’s SSH pubkeys from GitHub at https://github.com/<username>.keys? Now you do ↩︎

  2. Yes it is somewhat confusing that the keygen command does things other than generate keys ↩︎

  3. This is more difficult than it sounds ↩︎

  4. And if you don’t, by implementing this you now have infrastructure for issuing SSH certificates and can use that for SSH authentication as well. ↩︎

Planet DebianRavi Dwivedi: Vietnam Trip

Before reaching Vietnam

Continuing from the last post, Badri and I took a flight from the Brunei International Airport to Kuala Lumpur on the 12th of December 2024. We reached Kuala Lumpur in the evening.

After arriving at the airport, we went through immigration. In a previous post, I mentioned that we had put our stuff in lockers at the TBS bus terminal in Kuala Lumpur. Therefore, we had to go there.

The locker was automated and required us to enter the PIN we had set, but it wouldn’t unlock. After trying for 10-15 minutes without any luck, we went looking for help, as the lockers weren’t under supervision.

So, I roamed around, found a staff member, and reported that our lockers weren’t unlocking. They called the person in charge of the lockers, who came over in a few minutes and used his admin access to open the locker. We were supposed to pay for the lockers by putting banknotes through a slot, but as the machine wasn’t working, we paid him directly instead.

We soon went back to the KL airport to catch our morning flight to Ho Chi Minh City in Vietnam. At the check-in counter, we were afraid we would have to pay extra, as our luggage exceeded the allowed weight limit: this was also a budget airline, AirAsia, and our tickets didn’t include a check-in bag.

Generally, passengers from countries that need a visa for Vietnam (such as India) have to go to the airline counter and show their visa to get a boarding pass. However, when we went to the AirAsia counter at the Kuala Lumpur airport, they didn’t weigh our bags and asked us to get our boarding passes from an automated kiosk. So, we got our boarding passes printed and proceeded to airport security.

While clearing airport security, a lotion I had bought in Singapore was confiscated because it was 200 mL, exceeding the limit of 100 mL per bottle. Had that 200 mL been split across two 100 mL bottles, I would have been allowed to take it in my carry-on bag, but a single 200 mL bottle wasn’t! I was allowed to keep it in a check-in bag, but my ticket didn’t include one. Huh, airports and their weird rules :( The lotion was an expensive one, so having it thrown away did ruin my mood.

Overview

We started our Vietnam trip from Ho Chi Minh City in the south on the 13th of December 2024 and finished it in Hanoi in the north on the 20th of December. We traveled from Ho Chi Minh City to Hanoi in chunks, mostly by train, except for a hundred or so kilometers by bus. On the way, we visited Nha Trang, Hoi An, and Hue. The distance between Ho Chi Minh City and Hanoi is 1700 km.

For your reference, here are those places labeled on Vietnam’s map.

Vietnam map with Ho Chi Minh City, Nha Trang, Hoi An, Hue and Hanoi labeled.

A map of Vietnam with points of places we went to labeled. ©CARTO ©MAPTILER ©OPENSTREETMAP

Ho Chi Minh City

We landed in Ho Chi Minh City early morning on the 13th of December 2024. I was tired and sleepy as I hadn’t gotten a good night’s sleep. After going through immigration, we went to a currency exchange counter to get Vietnamese Dong. Unlike other countries on this trip, money exchange counters in Vietnam didn’t accept Indian rupees. Therefore, we exchanged euros to get Vietnamese dong at the airport.

After getting out of the airport, we took a bus to the city center. It was 15,000 dongs—approximately 50 Indian rupees. Our plan was to meet Badri’s friend and stay the night at his apartment.

So we went to a café nearby and bought a coffee for each of us for 75,000 dongs. We went upstairs and sat for a while. The Wi-Fi password was mentioned on our bill. During the trip, I found out about the café culture of Vietnam. They have their own coffee brands (such as Highlands Coffee), and you can sit down at any of the cafés for work or wait for the rain to stop. It rained a lot while we were there, so we did use these cafés for that purpose.

Badri’s friend met us there, and we roamed around the area a bit, including a beautiful park. Then, since I do not eat meat, he took us to a vegan restaurant. Having been to four Southeast Asian countries at this point (excluding Vietnam), I was under the impression that there wouldn’t be many options for my diet in Vietnam.


A picture of the park we roamed around in Ho Chi Minh City. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

However, I was pleasantly surprised at the restaurant. I found all the dishes tasty, especially their signature noodles, called pho. I liked another dish so much that Badri and I tracked the restaurant down again, using the geotagged image of the bill I had taken earlier, just to have it again. As a tip for vegans coming to Vietnam: places with “Chay” (without any accented letters) in their name are vegan only.


This is the restaurant Badri’s friend took us to. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.


One of the dishes we had in the restaurant. This one was especially tasty. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.


These noodles are called Pho and are very popular in Vietnam. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

At night, we went to a supermarket, where I got myself some oranges and guavas. Then we went to a Japanese restaurant, where I didn’t have anything, as there was no vegetarian option for me. Then we took a free bus to Badri’s friend’s apartment. The construction company that built the apartment runs this free bus service between their residential area and different parts of the city as a way of promoting their apartments; anyone can take it, not just residents.

The next day, we took the free bus back to the city center and checked in to a hostel for a night. We took two beds in a dormitory, at 88,000 dongs (270 rupees) per bed for a night. In Vietnam, if you can spend around 300 rupees per night, you can get a bed in a decent hostel.

Train from Ho Chi Minh City to Nha Trang

On the night of the 15th of December 2024, we boarded a train from Ho Chi Minh City to Nha Trang. The ticket for each of us was 519,000 dongs (1600 Indian rupees). The train name was SNT2. When we reached the Ho Chi Minh City train station, we noticed that the station was rather small by Indian standards.

After entering the train station, we went to the first platform, where a staff member checked our tickets. Ho Chi Minh City was the originating station for our train, so it was already standing at the station. We had to cross the railway tracks on foot to reach its platform. Then we located our coach, where a ticket inspector standing at the door let us in after checking our tickets. In all these instances, we just had to show the digital ticket we had received by email.

Unlike Indian trains, this train didn’t have side berths. I also liked that it had a dedicated space for bags, which was very convenient. The train departed Ho Chi Minh City at 21:05 and arrived in Nha Trang at 05:30 in the morning.

Interior of our train coach. Trains in Vietnam don’t have side berths, unlike India. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

A picture of the berths from our coach. It had three tiers, similar to a 3 AC coach in Indian trains. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

The train had a cabin to put the bags in. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Nha Trang train station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Nha Trang

Nha Trang is a coastal place, and we planned to go to a beach. We figured out that the bus to the airport could drop us near the beach, so we went to the bus station to catch it. The bus station was within walking distance of the railway station, so we decided to walk.

On the way, we stopped at a small shop for a coffee. The shop also gave a complimentary cup of green tea along with the coffee. I found out later that it is common for local shops to give a cup of complimentary green tea in Vietnam.


I got a complimentary cup of green tea along with coffee in Nha Trang. In this trip, Badri and I found out that this is customary at local places in Vietnam. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Soon we reached the bus station and took a bus to the beach. It was 65,000 dongs (₹200). After getting down from the bus, I had coconut water and some eggs at a small local place.


Eggs being cooked on a pan for my order. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Then we went to the beach, but nobody else was there. We spent some time there and, as it started raining, went back to where the bus had dropped us. We couldn’t find a bus for some time. A taxi driver approached us and agreed to take us to the city center for 200,000 dongs (₹650). For reference, he dropped us 35 km from where we took the taxi. Taxi fares in Vietnam were cheap too!

The beach we went to in Nha Trang. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Nha Trang was a beautiful place, and so we roamed around for a while. Then we stopped at a Highlands Coffee branch for a while. Since Christmas was coming up, the café had a Christmas tree, and I liked the Christmas vibes. They were playing Mariah Carey’s All I Want for Christmas Is You.

This one was shot in the city center. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Inside a Highlands Coffee cafe in Nha Trang. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

A coffee I got from Highlands Coffee in Nha Trang. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

In the evening, we went to a local place to eat. The place had “Chay” in its name, and you know what that means—it was vegan. There was one man there and no other customers. I don’t remember the names of the dishes we ordered, but they were a bowl of soupy noodles and a bowl of dry noodles, and they were very tasty. To top it off, the meal came to a total of 55,000 dongs (₹180) for both of us.

The host was welcoming and friendly, and we had a nice conversation with him. In Vietnam, restaurants give you chopsticks to eat noodles with. While Badri was good at using them, I wasn’t, so the host helped me learn. Although my technique was not perfect and I took a bit of time, I could now eat solely with chopsticks.

The restaurant we went to in Nha Trang. The word Chay in the name means it was a vegan restaurant. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Soupy noodles we got at that restaurant. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Dry noodles we got at that restaurant. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Our plan was to take a night bus to Hoi An, and we were hoping to find a bus stand, but we couldn’t find one. Asking around about the pickup location of the Hoi An bus led us to many different places. Finally, we ended up at a bus booking agency’s office, where we found out that there were no tickets available for Hoi An.

At this point, we gave up on the bus and searched for trains instead. As we didn’t have a local SIM, we asked the agency to let us connect to their Wi-Fi so that we could look up trains. They were kind enough to let us, even though it seemed they were going to close the office in about ten minutes.

Unfortunately, all the sleeper berths on the next train were booked from Nha Trang to Hoi An, with only seats available. The journey takes around 10 hours, so I wasn’t comfortable sitting the whole way.

Here I came up with the idea of looking for sleeper berths from an intermediate stop. Fortunately, sleeper berths were available from the next stop, Ninh Hòa. Therefore, we booked seats from Nha Trang to Ninh Hòa and sleeper berths from Ninh Hòa to Trà Kiệu (the nearest railway station to Hoi An). The train was SE6, and the total came to 500,000 dongs per person (₹1600).

So, we went to the Nha Trang railway station and boarded the train. We had to sit for the 40 minutes it took the train to reach the next stop before we could move to our sleeper berths. Badri had some friendly co-passengers on that leg who gave him Saigon beer and a crispy papad-like snack. They offered me some as well, but I thought it was non-veg, so I declined.

Hoi An

On the morning of the 17th of December 2024, we got down at Trà Kiệu station at around 09:30. Our hostel was in Hoi An, around 22 km from the station, and there was no public transport to get there.

There was, however, a taxi driver on the platform. We told him the name of our hostel, and he quoted 270,000 dongs (around ₹850). We said it was too expensive for us, and he came down to 250,000 dongs. At that point, we told him we could give no more than 200,000 dongs, but he didn’t agree.

Badri tried a trick. He asked the driver to show us prices in the Grab app (a popular taxi booking app in Southeast Asia). Unfortunately, the Grab app showed 258,000 dongs, which was more than the fare the driver agreed to.

So we walked away as if we had so many options (we didn’t!) to reach the hostel. We got out of the station and stopped at a small shop outside to have some coffee. As is customary in Vietnam, we got a complimentary green tea here as well.

This was the place we had our coffee in Tra Kieu. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

That taxi driver also joined us, sat down in the shop, and started talking with the locals in the local language. He was still insistent on taking us to Hoi An for 250,000 dongs. At this point, Badri told him (using translation software) that we usually take public transport on our trips and aren’t used to paying high prices to get around, so he could drop us anywhere in Hoi An for 200,000 dongs, as we didn’t mind walking a bit to reach our hotel.

After reading this, the taxi driver agreed to take us to our hostel for 200,000 dongs (₹660). He also had me take a picture of him with Badri afterwards. I don’t think such a bargaining tactic would work in India.

Photo of Badri with the taxi driver. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

A nice thing we noticed in Vietnam: once bargaining is done and the deal is settled, people don’t try to bargain further or keep talking about it. Before the deal, the driver was somewhat insistent and argumentative, but after it was done, it was as if no argument had happened at all.

A picture of the Tra Kieu area near the train station we got down at. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

We were treated to some beautiful scenery on the way to our hostel. Soon we reached it and completed the check-in formalities. While our room was being prepared, we had an egg sandwich with coffee at the hostel. I found the egg sandwich very tasty; the bread looked like a French baguette. The hostel was ₹240 per night for each of us.

The hostel’s name was Bana Spa. It is run by a family, we liked staying there, and we can recommend it if you find yourself in Hoi An.

Our breakfast in Hoi An. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

A photo of the hostel we stayed at in Hoi An. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

We also rented a bicycle each (25,000 dongs per day, ₹80) and explored the old town in the evening. Hoi An is famous for Vietnamese silk; tourists come here to buy fabric and have it tailored. The buildings looked old and were painted yellow, with gabled roofs.

Typical yellow house with gabled roof in Hoi An old town. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Here, I also had egg coffee for the first time, and I liked it. Egg coffee is a delicacy of Hanoi, but you can get it in other parts of Vietnam; if you find yourself there, I recommend trying it. We also bought some cool T-shirts and other souvenirs, such as a Vietnamese hat.

Egg coffee I had in Hoi An. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Hue

The next day—the 18th of December 2024—we went to Hue by bus. As we could not take a bus on our own in Nha Trang, we asked the hostel to book it for us this time. We booked it a day before, and they told us to be ready by 07:00 in the morning. At 07:00, a minibus arrived, which took us to a bus agency’s office. There we waited for a few minutes and got into the bus to Hue.

The bus had sleeper seats, so I took the opportunity to catch some sleep. The ride was comfortable, so I am assuming the roads were good. In a couple of hours, we reached Hue. Again, we went to Highlands Coffee to have some coffee, charge our phones, and use the internet, not to mention using the bathrooms.

During the afternoon, we went to a local restaurant named Quán Chay Thanh Liễu, a vegan restaurant (remember what I mentioned earlier about “Chay” in the name?). On the way, we had banh bao, a steamed dumpling shaped like a momo, from a street vendor. It wasn’t very good, but I found it worthwhile.

Banh bao in Hue. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

At the restaurant, we ordered a hot pot. First, they brought noodles and a gas stove; then came the stock, which was kept simmering on the stove, and we had it bit by bit with the noodles. A big hot pot at this place costs 50,000 dongs (₹170). We also had bánh cuốn, steamed rolls made of rice flour, for 10,000 dongs (₹33).

Hot Pot. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Soup added to the noodles. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Steamed rolls made of rice flour. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Restaurants in Vietnam usually include photos of the dishes in their menus or write descriptions in English, so even though the dish names were in Vietnamese, we had no problem ordering food. In addition, all the places we went to provided free Wi-Fi, with the password printed on the bill, written on the menu, or pasted on the wall. This made our trip smoother, as we never needed a local SIM.

Menu from a restaurant in Ho Chi Minh City with detailed description of the food. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Then we slowly walked towards the railway station, as we had a night train to Hanoi. We had egg coffee at a cafe, and near the railway station we had a bánh mì (egg sandwich). As for sightseeing, we had planned to visit a couple of places in Hue, but we ended up spending all our time in sheltered spaces due to heavy rain.

We had booked the train SE20 for Hanoi, which had a departure time of 20:41 from Hue. This one was 948,000 dongs (₹3100) for myself and 870,000 dongs (₹2900) for Badri. My ticket was pricier than Badri’s because I got a lower berth. Our train was late by half an hour, so we waited in the common area of the station. After the train arrived, we got inside and took our seats.

The cabin had four berths: two upper and two lower, similar to India’s First AC class. The ticket inspector offered us the whole cabin (the two additional berths) for 300,000 dongs (₹1,000), which we declined. However, this hinted that the other two berths were not reserved, and indeed we ended up with the whole cabin to ourselves, as nobody showed up for them. It was a 14-hour journey, and I got a good sleep.

Our berths in the train. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Hanoi

On the morning of the 19th of December 2024, we reached Vietnam’s capital, Hanoi. We had booked a private hotel room for ₹800; it was 1 km from the Hanoi airport but pretty far from the railway station. So we roamed around the city and went to the hotel in the evening.

First, we walked to a place and had egg coffee with egg sandwiches. Then we went to Hanoi Train Street, which was walking distance from the train station. After clicking some pictures at the train street, we went to a museum nearby. Upon reaching there, we found out that it was closed.

Egg coffee in Hanoi. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Hanoi train street is a tourist attraction in Hanoi. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Then we went shopping for jackets, as Hanoi was cold compared to the other parts of Vietnam we had been to, and since many jackets are manufactured in Vietnam, we thought they would be cheaper. I liked a few, but they weren’t available in my size, so we didn’t end up buying anything.

In the evening, I bought a Vietnamese-style phin coffee filter and coffee powder from Highlands Coffee. We had spent a lot of time in their cafes, so it made sense to buy some souvenirs from there. Badri bought a few coffee filters for his family at Trung Nguyen, where I also bought another filter.

We had dinner at a local place where we had pho and banh it. Banh it is made of sticky rice and comes wrapped in banana leaves.

A picture of pho we had in Hanoi. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Banh it is served packed in banana leaves. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Banh it. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Next, we went to the Hanoi railway station to catch a bus to the airport, since our hotel was just 1 km from it. The locals helped us find the right bus, and it took about an hour to reach the airport. OpenStreetMap showed a bus from there to our hotel, but we couldn’t find it, so we walked instead.

It was a decent hotel room for ₹800 for a night. We went outside to explore the area and had egg sandwiches and egg coffee at a local place. Again, we were given a complimentary green tea. We went to this place like three times. We had practically become regulars by the time we left.

The next day, the 20th of December 2024, we took a bus to the airport and boarded our flight to Delhi.

Credits: Thanks Badri, Kishy and Richard for proofreading.

Planet DebianSteinar H. Gunderson: A286874(16) >= 48

Following up on the previous post, here are some heuristic results:

First, if restricting oneself to 5-uniform values (all values have exactly five bits set), the best 15-bit code one can make is indeed 42 elements, and there are two distinct solutions: {31, 227, 364, 692, 1240, 1577, 1606, 2353, 3008, 3205, 3338, 4434, 4746, 4869, 5536, 6182, 6217, 7696, 8582, 8984, 9266, 9537, 10324, 10408, 10755, 12433, 12896, 13324, 16777, 16977, 17186, 17684, 18578, 18956, 19552, 20536, 20676, 21507, 24613, 24650, 26240, 30976} and {31, 227, 364, 692, 849, 906, 1240, 2354, 3206, 3337, 3680, 4485, 5169, 5442, 5644, 6228, 6312, 6659, 8745, 9285, 9632, 9746, 10314, 10385, 11012, 12326, 12568, 12992, 16966, 17450, 17684, 18049, 18469, 18880, 18968, 20553, 20626, 21280, 24688, 24716, 24835, 31744}. This supports, but does not prove, the conjecture that A286874(15) = 42.
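The 5-uniformity claim for the first solution can be sanity-checked with a few lines of shell (an independent check, separate from the search itself):

```shell
# Verify that every value in the first 42-element solution has exactly
# five bits set, i.e. the solution is 5-uniform as stated.
ok=1
for v in 31 227 364 692 1240 1577 1606 2353 3008 3205 3338 4434 4746 4869 \
         5536 6182 6217 7696 8582 8984 9266 9537 10324 10408 10755 12433 \
         12896 13324 16777 16977 17186 17684 18578 18956 19552 20536 20676 \
         21507 24613 24650 26240 30976; do
  n=$v bits=0
  # Count set bits by repeated division by two.
  while [ "$n" -gt 0 ]; do bits=$((bits + n % 2)); n=$((n / 2)); done
  [ "$bits" -eq 5 ] || { echo "not 5-uniform: $v"; ok=0; }
done
[ "$ok" -eq 1 ] && echo "all 42 values are 5-uniform"
```

Running it prints "all 42 values are 5-uniform"; the same loop applied to the second list gives the same result.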

Second, A286874(16) >= 48 (the best previously known bound was 45), since this is a valid 48-element solution:

0000000000011111
0000000011100011
0000000101101100
0000001010110100
0000010011011000
0000011100000011
0000100100110001
0000101000101010
0000101111000000
0001000110001001
0001010000110010
0001011000001100
0001100100000110
0001110001000001
0010000110010010
0010010010000101
0010011001100000
0010100001010100
0010110100001000
0011000001001010
0011001000010001
0011100010100000
0100001001001001
0100010001000110
0100010110100000
0100100010001100
0100111000010000
0101000000100101
0101000101010000
0101001010000010
0110000000111000
0110001100000100
0110100000000011
1000001001010010
1000010000101001
1000010100010100
1000101000000101
1000110010000010
1001000011000100
1001001100100000
1001100000011000
1010000000100110
1010000101000001
1010001010001000
1100000010010001
1100000100001010
1100100001100000
1111010000000000

I won't be sweeping all of the 15- or 16-bit spaces.

365 TomorrowsBureaucratic Records Of The End Times

Author: Moura BUREAUCRATIC RECORDS OF THE END TIMES Automatic compilation of human and environmental records Source: multiple devices Status: recovered fragments THE LAST KNOT (Record 001 — Autonomous diving equipment) Two hundred meters below the surface, the darkness presses on the lungs. William fumbles with the umbilical cable, adjusting the regulator valve. The PPO₂ fluctuates. […]

The post Bureaucratic Records Of The End Times appeared first on 365tomorrows.

Planet DebianC.J. Collier: The WWW::Mechanize::Chrome Saga: A Comprehensive Narrative of PR #104


This document synthesizes the extensive work performed from March 13th to March 20th, 2026, to harden, stabilize, and refactor the WWW::Mechanize::Chrome library and its test suite. This effort involved deep dives into asynchronous programming, platform-specific bug hunting, and strategic architectural decisions.


Part I: The Quest for Cross-Platform Stability (March 13 – 16)

The initial phase of work focused on achieving a “green” test suite across a variety of Linux distributions and preparing for a new release. This involved significant hardening of the library to account for different browser versions, OS-level security restrictions, and filesystem differences.

Key Milestones &
Engineering Decisions:

  • Fedora & RHEL-family Success: A major effort
    was undertaken to achieve a 100% pass rate on modern Fedora 43 and
    CentOS Stream 10. This required several key engineering decisions to
    handle modern browser behavior:

    • Decision: Implement Asynchronous DOM Serialization
      Fallback.
      Synchronous fallbacks in an async context are
      dangerous. To prevent Resource was not cached errors during
      saveResources, we implemented a fully asynchronous fallback
      in _saveResourceTree. By chaining
      _cached_document with DOM.getOuterHTML
      messages, we can reconstruct document content without blocking the event
      loop, even if Chromium has evicted the resource from its cache. This
      also proved resilient against Fedora’s security policies, which often
      block file:// access.
    • Decision: Truncate Filenames for Cross-Platform
      Safety.
      To avoid File name too long errors,
      especially on Windows where the MAX_PATH limit is 260
      characters, filenameFromUrl was hardened. The filename
      truncation was reduced to a more conservative 150
      characters, leaving ample headroom for deeply nested CI
      temporary directories. Logic was also added to preserve file extensions
      during truncation and to sanitize backslashes from URI paths.
    • Decision: Expand Browser Discovery Paths. To
      support RHEL-based systems out-of-the-box, the
      default_executable_names was expanded to include
      headless_shell and search paths were updated to include
      /usr/lib64/chromium-browser/.
    • Decision: Mitigate Race Conditions with Stabilization Waits
      and Resilient Fetching.
      On fast systems,
      DOM.documentUpdated events could invalidate
      nodeIds immediately after navigation, causing XPath queries
      to fail with “Could not find node with given id”. A small stabilization
      sleep(0.25s) was added after page loads to ensure the DOM
      is settled. Furthermore, the asynchronous DOM fetching loop was hardened
      to gracefully handle these errors by catching protocol errors and
      returning an empty string for any node that was invalidated during
      serialization, ensuring the overall process could complete.
  • Windows Hardening:
    • Decision: Adopt Platform-Aware Watchdogs. The test
      suite’s reliance on ualarm was a blocker for Windows, where
      it is not implemented. The t::helper::set_watchdog function
      was refactored to use standard alarm() (seconds) on Windows
      and ualarm (microseconds) on Unix-like systems, enabling
      consistent test-level timeout enforcement.
  • Version 0.77 Release:
    • Decision: Adopt SOP for Version Synchronization.
      The project maintains duplicate version strings across 24+ files. A
      Standard Operating Procedure was adopted to use a batch-replacement tool
      to update all sub-modules in lib/ and to always run
      make clean and perl Makefile.PL to ensure
      META.json and META.yml reflect the new
      version. After achieving stability on Linux, the project version was
      bumped to 0.77.
  • Infrastructure & Strategic Work:
    • The ad2 Windows Server 2025 instance was restored and
      optimized, with Active Directory demoted and disk I/O performance
      improved.
    • A strategic proposal for the Heterogeneous Directory
      Replication Protocol (HDRP)
      was drafted and published.
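The platform-aware watchdog decision above can be sketched as follows. This is a minimal illustration in the spirit of t::helper::set_watchdog; the handler body, error message, and helper names are assumptions, not code taken from the test suite:

```perl
use strict;
use warnings;
use Time::HiRes ();

# Sketch of a platform-aware test watchdog: whole-second alarm() on
# Windows (where ualarm is unimplemented), microsecond-resolution
# ualarm() elsewhere.
sub set_watchdog {
    my ($seconds) = @_;
    $SIG{ALRM} = sub { die "watchdog fired after ${seconds}s\n" };
    if ( $^O =~ /MSWin32/i ) {
        alarm($seconds);    # whole-second granularity only
    }
    else {
        Time::HiRes::ualarm( $seconds * 1_000_000 );    # microseconds
    }
}

sub clear_watchdog { alarm(0) }
```

Either branch arms the same underlying real-time timer, so a single SIGALRM handler and a single clear path suffice for both platforms.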

Part II: The
Great Async Refactor (March 17 – 18)

Despite success on Linux, tests on the slow ad2 Windows
host were still plagued by intermittent, indefinite hangs. This
triggered a fundamental architectural shift to move the library’s core
from a mix of synchronous and asynchronous code to a fully non-blocking
internal API.

Key Milestones &
Engineering Decisions:

Derived Architectural Rules
& SOPs:

  • Rule: Always provide _future variants.
    Every library method that interacts with the browser via CDP must have a
    non-blocking asynchronous counterpart.
  • Rule: Centralize stabilization in the test layer.
    All timeout and retry logic should reside in the test harness
    (t/lib/t/helper.pm), not in the core library.
  • Rule: Explicitly propagate wantarray
    context.
    Synchronous wrappers must capture the caller’s context
    and pass it down the Future chain to ensure correct
    scalar/list behavior.
  • Rule: The entire call chain must be asynchronous.
    Non-blocking timeouts only work when every link is async: even a single
    “hidden” blocking call in an otherwise asynchronous method will stall
    the event loop.
  • SOP: Reduce Library Noise. Diagnostic messages
    (warn, note, diag) should be
    removed from library code before commits. All such messages should be
    converted to use the internal $self->log('debug', ...)
    mechanism, ensuring a clean TAP output for CI systems.
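The wantarray-propagation rule can be illustrated with a toy synchronous wrapper over a "_future" variant. The class, method names, and the code-ref stand-in for a real Future object are invented for illustration only:

```perl
use strict;
use warnings;

package Demo;
sub new { return bless {}, shift }

# Stand-in for a real *_future method: invoking the returned code ref
# plays the role of calling ->get on a Future.
sub get_links_future {
    my ($self) = @_;
    return sub { return ( 'a.html', 'b.html', 'c.html' ) };
}

# Synchronous wrapper: capture the caller's context *before* descending
# into the async chain, then use it to shape the return value.
sub get_links {
    my ($self) = @_;
    my $wantarray = wantarray;
    my @results   = $self->get_links_future->();
    return $wantarray ? @results : scalar @results;
}

package main;
my $demo  = Demo->new;
my @links = $demo->get_links;    # list context: all links
my $count = $demo->get_links;    # scalar context: a count
```

The key point is that wantarray must be read at the top of the wrapper; once execution is inside a Future callback, the original caller's context is no longer visible.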

Part III: The
MutationObserver Saga (March 19)

With most of the library refactored to be asynchronous, one stubborn
test, t/65-is_visible.t, continued to fail with timeouts.
This led to an ambitious, but ultimately unsuccessful, attempt to
replace the wait_until_visible polling logic with a more
“modern” MutationObserver.

Key Milestones & Challenges:

  • The Theory: The goal was to replace an inefficient
    repeat { sleep } loop with an event-driven
    MutationObserver in JavaScript that would notify Perl
    immediately when an element’s visibility changed.
  • Implementation & Cascade Failure: The
    implementation proved incredibly difficult and introduced a series of
    new, hard-to-diagnose bugs:

    1. An incorrect function signature for
      callFunctionOn_future.
    2. A critical unit mismatch, passing seconds from Perl to JavaScript’s
      setTimeout, which expected milliseconds.
    3. A fundamental hang where the MutationObserver’s
      JavaScript Promise would never resolve, even after the
      underlying DOM element changed.
  • Debugging Maze: Multiple attempts to fix the
    checkVisibility JavaScript logic inside the observer
    callback, including making it more robust by adding DOM tree traversal
    and extensive console.log tracing, failed to resolve the
    hang. This highlighted the opacity and difficulty of debugging complex,
    cross-language asynchronous interactions, especially when dealing with
    low-level browser APIs.

Procedural Learning:
Granular Edits

The effort was plagued by procedural missteps in using automated
file-editing tools. Initial attempts to replace large code blocks in a
single operation led to accidental code loss and match failures.

  • Decision: Adopt “Delete, then Add” Workflow.
    Following forceful user correction, a new SOP was established for all
    future modifications:

    1. Isolate: Break the file into small, manageable
      chunks (e.g., 250 lines).
    2. Delete: Perform a “delete” operation by replacing
      the old code block with an empty string.
    3. Add: Perform an “add” operation by inserting the
      new code into the empty space.
    4. Verify: Confirm each atomic step before
      proceeding. This granular process, while slower, ensured surgical
      precision and regained technical control over the large
      Chrome.pm module.

The consistent failure of the MutationObserver approach
eventually led to the decision to abandon it in favor of stabilizing the
original, more transparent implementation.


Part IV:
Reversion and Final Stabilization (March 20)

After exhausting all reasonable attempts to fix the
MutationObserver, a strategic decision was made to revert
to the simpler, more transparent polling implementation and fix it
correctly. This proved to be the correct path to a stable solution.

Key Milestones &
Engineering Decisions:

  • Decision: Perform Strategic Reversion. The
    MutationObserver implementation, when integrated via
    callFunctionOn_future with awaitPromise,
    proved fundamentally unstable. Its JavaScript promise would consistently
    fail to resolve, causing indefinite hangs. A decision was made to
    revert all MutationObserver code from
    WWW::Mechanize::Chrome.pm and restore the original
    repeat { sleep } polling mechanism. A stable,
    understandable solution was prioritized over an elegant but broken
    one.
  • Decision: Correct Timeout Delegation in the
    Harness.
    The root cause of the original timeout failure was
    identified as a race condition in the t/lib/t/helper.pm
    test harness. The safe_wait_until_* wrappers were
    implementing their own timeout (via wait_any and
    sleep_future) that raced against the underlying polling
    function’s internal timeout. This led to intermittent failures on slow
    machines. The helpers were refactored to delegate all timeout
    management to the library’s polling functions, ensuring a
    single, authoritative timer controlled the operation.
  • Decision: Optimize Polling Performance. At the
    user’s request, the polling interval was reduced from 300ms to
    150ms. This modest performance improvement reduced the
    test suite’s wallclock execution time by over a second while maintaining
    stability.
  • Decision: Tune Test Watchdogs. The global watchdog
    timeout was adjusted to 12 seconds, specifically calculated as 1.5x the
    observed real execution time of the optimized test. This provides a
    data-driven safety margin for CI.
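The timeout-delegation decision can be sketched with a toy polling loop that owns the single authoritative deadline. The function names and option keys here are invented for illustration; they are not the module's actual API:

```perl
use strict;
use warnings;
use Time::HiRes qw(sleep time);

# Library-side polling loop: it owns the one and only timeout.
sub poll_until {
    my (%opts) = @_;
    my $deadline = time() + ( $opts{timeout} // 10 );
    while ( time() < $deadline ) {
        return 1 if $opts{check}->();
        sleep( $opts{interval} // 0.15 );    # the tuned 150ms interval
    }
    return 0;
}

# Test-harness helper: it only fills in defaults and delegates. It never
# starts a second, competing timer of its own, so there is nothing to
# race against on a slow machine.
sub safe_wait_until {
    my ( $check, %opts ) = @_;
    return poll_until( check => $check, timeout => $opts{timeout} // 10 );
}
```

Contrast this with the buggy pre-refactor shape, where the helper wrapped the library call in its own wait_any/sleep_future timer: two timers measuring the same operation will intermittently disagree, which is exactly the race described above.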

Part
V: The Last Bug – A Platform-Specific Memory Leak (March 20)

With all other tests passing, a single memory leak failure in
t/78-memleak.t persisted, but only on the Windows
ad2 environment. This required a different approach than
the timeout fixes.

Key Milestones:

  • The Bug: A strong reference cycle involving the
    on_dialog event listener was not being broken on Windows,
    despite multiple attempts to fix it. Fixes that worked on Linux (such as
    calling on_dialog(undef) in DESTROY) were not
    sufficient on the Windows host.
  • The Diagnosis: The issue was determined to be a
    deep, platform-specific interaction between Perl’s garbage collector,
    the IO::Async event loop implementation on Windows, and the
    Test::Memory::Cycle module. The cycle report was identical
    on both platforms, but the cleanup behavior was different.
  • Failed Attempts: A series of increasingly
    aggressive fixes were attempted to break the cycle, including:

    1. Moving the on_dialog(undef) call from
      close() to DESTROY().
    2. Explicitly deleteing the listener and callback
      properties from the object hash in DESTROY.
    3. Swapping between $self->remove_listener and
      $self->target->unlisten in a mistaken attempt to find
      the correct un-registration method.
  • Pragmatic Solution: After exhausting all reasonable
    code-level fixes without a resolution on Windows, the user opted to mark
    the failing test as a known issue for that specific platform.
  • Final Fix: The single failing test in
    t/78-memleak.t was wrapped in a conditional
    TODO block that only executes on Windows
    (if ($^O =~ /MSWin32/i)), formally acknowledging the bug
    without blocking the build. This allows the test suite to pass in CI
    environments while flagging the issue for future, deeper
    investigation.
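The conditional TODO construct looks roughly like this. The leak check itself is replaced by a placeholder that simulates the Windows-only failure; in the real test it would be memory_cycle_ok() from Test::Memory::Cycle:

```perl
use strict;
use warnings;
use Test::More;

# Placeholder standing in for memory_cycle_ok(): it simulates a
# reference cycle that is only left unbroken on Windows.
sub no_cycles { return $^O !~ /MSWin32/i }

TODO: {
    # On Windows the failure is marked as a known issue; elsewhere the
    # check must genuinely pass ($TODO stays undef).
    local $TODO = $^O =~ /MSWin32/i
        ? 'known on_dialog reference cycle on this platform'
        : undef;
    ok( no_cycles(), 'no leftover reference cycles after close()' );
}

done_testing();
```

A TODO failure is reported in the TAP stream but does not fail the suite, which is what allows CI to stay green while the bug remains formally acknowledged.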

Part VI: CI Hardening (March
20)

A final failure in the GitHub Actions CI environment revealed one
last configuration flaw.

Key Milestones:

  • The Bug: The CI was running
    prove --nocount --jobs 3 -I local/ -bl xt t directly. This
    command was missing the crucial -It/lib include path, which
    is necessary for test files to locate the t::helper module.
    This resulted in nearly all tests failing with
    Can't locate t/helper.pm in @INC.
  • The Investigation: An analysis of
    Makefile.PL revealed a custom MY::test block
    specifically designed to inject the -It/lib flag into the
    make test command. This confirmed that
    make test is the correct, canonical way to run the test
    suite for this project.
  • The Fix: The
    .github/workflows/linux.yml file was modified to replace
    the direct prove call with make test in the
    Run Tests step. This ensures the CI environment runs the
    tests in the exact same way as a local developer, with all necessary
    include paths correctly configured by the project’s build system.

Final Outcome

After this long and arduous journey, the
WWW::Mechanize::Chrome test suite is now stable and
passing on all targeted platforms, with known
platform-specific issues clearly documented in the code. The project is
in a vastly more robust and reliable state.

,

Planet DebianDirk Eddelbuettel: RcppSpdlog 0.0.28 on CRAN: Micro-Maintenance

Version 0.0.28 of RcppSpdlog arrived on CRAN today, and has been uploaded to Debian and built for r2u. The (nice) documentation site has been refreshed too. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want, written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.

This release contains a rebuilt RcppExports.cpp to aid Rcpp in the transition towards Rcpp::stop() and away from Rf_error() in its user packages. No other changes were made.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.28 (2026-03-19)

  • Regenerate RcppExports.cpp to switch away from Rf_error(), aiding the Rcpp transition to Rcpp::stop()

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

365 TomorrowsThe Extinction Clause

Author: David Dumouriez Approximately four score and seven years ago, the Luxians saw, they came and they conquered. Well, actually they didn’t need to do much conquering. They simply made their presence abundantly clear and waited for the locals to decimate themselves in response. Some attempted to fight them, which was noble but futile. Ultimately, […]

The post The Extinction Clause appeared first on 365tomorrows.

Worse Than FailureError'd: Yeah Yeah I'm The Tax Man

In only a handful of years, four Liverpudlian scruffs clawed their way from obscurity to unprecedented worldwide celebrity.
Yeah, yeah, yeah.
Already making a mint from "Money" and other hits, by 1965 they were MBEs, and suddenly discovered class solidarity -- with the rest of the singlet-clad bathers in their grottos of ducats. To be fair, a 97% marginal rate does make it hard for a lad to break into the ranks of true generational wealth.
So in 1966, George Harrison and his newly-minted toffs released the anti-government protest shriek of the upper class, and even now, we Americans share their pain in this our moment of reckoning with ... the Tax Man.

The Beast in Black first complained "I tried to get my W2 (for our non-Murican friends, that's the statement from your employer showing how much they paid you and how much tax they deducted) from ADP, and apparently their programmers did a number (two) on the form. TRWTF is that the damn form actually works if I add the slash separators to the date components."


And again he moaned "Frankly, I'm a little too scared by this WTF to be snarky; I'd expect a Tax Accounting firm - H&R Block in this case - to not have such basic accounting WTFs. Perhaps they should change their name to H&R Blockhead...?"


Adam R. remarked "It's tax season again. I hope their tax return backend is better than their JavaScript frontend that set the tooltip on this image to [object Object]."


Frustrated Dustin S. is trying to comply: "I logged into my credit union to download the tax documents for my account, but when I clicked on the link, this is what I got. Maybe doing taxes by invoice in the U.S. now?"


And looking to the future, the tax man cometh inevitably for Michael R., though not today. "In green: I want to enter a discount for the items I'm selling. The error says: "Invalid discount amount. Please enter a discount of less than €0.00 (packaging costs + taxes)." Yes, I have also tried to enter -7,41 without any luck. In blue: It says: "Total amount approx." Maybe one of the eBay lawyers figured out they are using float data types and wants to cover their bottoms against the rounding errors?"


[Advertisement] Plan Your .NET 9 Migration with Confidence
Your journey to .NET 9 is more than just one decision. Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

xkcdStar Formation

Krebs on SecurityFeds Disrupt IoT Botnets Behind Huge DDoS Attacks

The U.S. Justice Department joined authorities in Canada and Germany in dismantling the online infrastructure behind four highly disruptive botnets that compromised more than three million Internet of Things (IoT) devices, such as routers and web cameras. The feds say the four botnets — named Aisuru, Kimwolf, JackSkid and Mossad — are responsible for a series of recent record-smashing distributed denial-of-service (DDoS) attacks capable of knocking nearly any target offline.

Image: Shutterstock, @Elzicon.

The Justice Department said the Department of Defense Office of Inspector General’s (DoDIG) Defense Criminal Investigative Service (DCIS) executed seizure warrants targeting multiple U.S.-registered domains, virtual servers, and other infrastructure involved in DDoS attacks against Internet addresses owned by the DoD.

The government alleges the unnamed people in control of the four botnets used their crime machines to launch hundreds of thousands of DDoS attacks, often demanding extortion payments from victims. Some victims reported tens of thousands of dollars in losses and remediation expenses.

The oldest of the botnets — Aisuru — issued more than 200,000 attack commands, while JackSkid hurled at least 90,000 attacks. Kimwolf issued more than 25,000 attack commands, the government said, while Mossad was blamed for roughly 1,000 digital sieges.

The DOJ said the law enforcement action was designed to prevent further infection of victim devices and to limit or eliminate the ability of the botnets to launch future attacks. The case is being investigated by the DCIS with help from the FBI’s field office in Anchorage, Alaska, and the DOJ’s statement credits nearly two dozen technology companies with assisting in the operation.

“By working closely with DCIS and our international law enforcement partners, we collectively identified and disrupted criminal infrastructure used to carry out large-scale DDoS attacks,” said Special Agent in Charge Rebecca Day of the FBI Anchorage Field Office.

Aisuru emerged in late 2024, and by mid-2025 it was launching record-breaking DDoS attacks as it rapidly infected new IoT devices. In October 2025, Aisuru was used to seed Kimwolf, an Aisuru variant which introduced a novel spreading mechanism that allowed the botnet to infect devices hidden behind the protection of the user’s internal network.

On January 2, 2026, the security firm Synthient publicly disclosed the vulnerability Kimwolf was using to propagate so quickly. That disclosure helped curtail Kimwolf’s spread somewhat, but since then several other IoT botnets have emerged that effectively copy Kimwolf’s spreading methods while competing for the same pool of vulnerable devices. According to the DOJ, the JackSkid botnet also sought out systems on internal networks just like Kimwolf.

The DOJ said its disruption of the four botnets coincided with “law enforcement actions” conducted in Canada and Germany targeting individuals who allegedly operated those botnets, although no further details were available on the suspected operators.

In late February, KrebsOnSecurity identified a 22-year-old Canadian man as a core operator of the Kimwolf botnet. Multiple sources familiar with the investigation told KrebsOnSecurity the other prime suspect is a 15-year-old living in Germany.

Planet DebianReproducible Builds (diffoscope): diffoscope 315 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 315. This version includes the following changes:

[ Jelle van der Waa ]
* Adjust PGP file detection regex.

You find out more by visiting the project homepage.

Planet DebianMichael Ablassmeier: virtnbdbackup 2.46 - bitlocker recovery keys

I’ve released virtnbdbackup 2.46, which now attempts to extract the BitLocker recovery keys during backup. The Windows domains need a working QEMU guest agent installed for this to work.

Using the agent, it also extracts the available guestinfo (network config, OS version, etc.) from the domain and stores it alongside the backup.

,

Rondam RamblingsWe interrupt this post series to bring you an important weather update

Those of you who watched my debate will know that I was in Hawaii at the time.  Well, I'm still here.  I have no idea if this is making news on the mainland, but Hawaii has just gone through two days of one of the most intense Kona storms in many years.  I have never in my life seen rain this hard go on for so long.  The condo that we are in was not designed for this. 

Rondam RamblingsSeeking God in Science part 4: Chairness

I concluded the last installment in this series with a deceptively simple claim: Things exist.  This two-word quip is a scientific hypothesis, an explanation for some of my subjective experiences, specifically, my ability to see and touch and hear and smell and taste, well, things.  The explanation is that these things that I perceive really do exist in point of actual physical (

Worse Than FailureTDWTF Home Edition: Pt 2

(Read Part One here)

When Ellis awoke on Sunday, the unusual cold broke through her drowsiness right away. Her new thermostat was programmed to maintain a lower temperature overnight, but at 6:30AM, it was supposed to climb again, kicking the heat on right when she got out of bed.

Why was it so cold? Why was the furnace dead silent? Something must've gone wrong again. So soon?

It sucked to get out of bed when it was dark and cold, but Ellis had no choice. She forced herself to peel back the covers and launch into her morning routine. Her cat shadowed her, helpfully letting her know several times that his plate had no food on it.

She attended to the cat's needs first before approaching the thermostat downstairs. The set point was at the overnight setting even though it claimed to be following her programmed schedule. Using the touchscreen interface, she increased the set point manually. The heat cut on just fine from there, thank goodness.

Through her dehydrated, hungry, uncaffeinated haze, Ellis suddenly remembered the time change. They had "sprung ahead" for Daylight Saving Time. Had her new thermostat joined them in this archaic ritual?

It had not. Checking its day/time settings, Ellis found the time an hour behind. She pressed her index finger onto the hour, expecting a dial or drop-down or some other such control to appear. Nothing. Hours, minutes, and AM/PM were all fixed. Only the time zone could be changed. It was currently set to EST. Opening the drop-down menu, none of the options she skimmed over looked promising.

Her old thermostat (out of support, incompatible with her new HVAC system) had handled time changes all by itself, and had allowed every possible manual adjustment one could wish for. It frustrated Ellis that the latest so-called "smart" thermostat couldn't manage the same despite being hooked up to the Internet at all times.

Part of her wanted to keep digging at this, but it was way too early. Ellis was unprepared in every possible way to descend into a troubleshooting rabbit-hole. She had places to be that morning. The heat was working, that was all that really mattered. More importantly, someone from the HVAC company was already scheduled to perform a 1-week follow-up test of her newly-installed system in a couple of days. She could disable the schedule and make manual adjustments until the technician arrived.

With HVAC having taken center stage in her brainspace for over a month by that point, Ellis desperately needed to give herself this break.

The technician who arrived was equally mystified. He tried a factory reset of the thermostat, which had no effect. It was determined that future time changes would have to be handled manually by toggling the time zone between EST and ... Eastern. An unhelpful label that Ellis' sleepy brain had completely glossed over early on Sunday morning.

Annoying, but not the end of the world.

Once the technician tested her system (all good, thankfully) and left, Ellis sat down in front of her laptop to check her usual subreddits. Ah, the World Baseball Classic! Someone had posted a highlight reel of her favorite baseball team's best pitcher—arguably the best pitcher on the planet—recording 7 strikeouts in a single game. She opened up the video, eager to watch.

Why the hell is Ellis suddenly telling you about sportsball? Because, in an amazing coincidence, she spied the name of the company that had built her new thermostat, right there on the backstop behind home plate!

So they had WBC advertising money, but couldn't pony up for a sensible day/time interface. Ellis suspects she's in for an interesting couple of decades ... assuming her new system lasts that long.


P.S. Since Ellis has shamelessly segued her way into sportsball, there's something else she wants to share: a new player on her favorite team, Jhostynxon Garcia, is nicknamed The Password.

His younger brother Johanfran, also a baseball player, is called The Username.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsThe Catching Place

Author: AP Ritchey Every Sunday Jed and I met up at the catching place—a pond we’d been fishing for years. It wasn’t much to look at. Just a muddy oval tucked back behind a row of cottonwoods, with a leaning dock somebody built long before either of us started coming out there. The water stayed […]

The post The Catching Place appeared first on 365tomorrows.

David BrinChatGPT attempts to summarize and review my new book 'ailien minds!'

 A CHATGPT Review of AIlien Minds by David Brin

 

For a midweek posting we'll take a break from frets about civilization and shift to something actually important.

      This is a copy of a conversation between ChatGPT & an anonymous reader,* who submitted AIlien Minds for review. Take into account that there appears to be an inherent tilt toward flattery in almost all 2025-2026 LLM systems. I also messed with line and para formatting for readability.


{DB notes will appear in Italics and within brackets.}  The questioner is in bold. GPT's commentary is in plain Times Roman.


The crux: do NOT imagine you'll get much of the interesting or useful content of ailien minds this way! You won't. 


Still, the exercise was interesting in its own way. And as a bonus I will later post what CLAUDE made of this annotated posting!

 

 

Anonymous Q:  David Brin’s new book entitled: “ailien minds” about AI intelligence was just published. Do you have any thoughts about it?


ChatGPT: Yes — it’s an interesting development, and the premise behind David Brin’s AIlien Minds (note the intentional spelling with “AI” in the title) is pretty on-brand for him. The book is positioned as a wide-ranging exploration of artificial intelligence and what it means for humanity, drawing on both science-fiction thinking and real technological trends.

 

Here are a few thoughts about it and the ideas behind it.

 

1. The core idea: AI as “alien intelligence”

    Brin’s framing is that advanced AI may be the first truly “alien” intelligence humans encounter—not extraterrestrial, but something whose cognition, goals, and reasoning could diverge radically from ours. This fits a broader scientific observation: as AI systems grow more complex, their internal processes can become opaque even to their creators, making them feel almost like a different species of mind.

   Brin has explored similar themes for decades in his fiction — especially in the Uplift universe, where humans interact with radically different intelligences (aliens, uplifted animals, etc.). 

 

{DB: We’re already in a bit of trouble. Oh, that paragraph is accurate in and of itself, except that this is not one of the core points of my book. Rather, this “alien intelligence” aspect is simply posed as obvious -- a given.}

 

2. Brin’s typical stance on AI risk (less apocalyptic than some)

    Brin tends to sit between the extremes of AI optimism and doom. For example, some recent books argue that superintelligent AI could destroy humanity if developed recklessly. 
    Brin’s approach is usually more nuanced:

    AI could be enormously powerful and disruptive.

    But human institutions, competition, and transparency may prevent a single runaway catastrophe.

    Multiple AIs checking each other may be safer than a single dominant system. That argument reflects his long-standing idea from The Transparent Society: that mutual transparency and distributed power can stabilize advanced technology.

 

{DB: Fair enough.}

 

3. Why the “alien” metaphor matters

   Calling AI “alien” isn’t just poetic—it changes how we think about it. If AI is truly alien-like:

a.     We shouldn’t assume it will share human values automatically.

b.     But we also shouldn’t assume it will become a villain like HAL 9000.

   Brin himself joked about dedicating the book to fictional AIs like Samantha and Data (hopeful examples) and HAL (a warning). 

   That captures the middle ground: AI will likely contain both promise and danger, depending on how humans shape it.

 

{DB: So far, moderately decent summarizing. Though note that the LLM’s use of “I” and singular personal pronouns is something that has been widely deemed dangerous and regrettable. There are many documented cases of it leading to harm. In my book, I propose that use of such language should be EARNED by the very topmost AIs, and then only after reciprocal criticism of each other’s worthiness.}

 

4. My quick take on the premise

    Personally, I think the “alien intelligence” framing is one of the most useful mental models for AI.

Three reasons:

   It reminds us not to anthropomorphize AI. 

         {DB: Ironic, of course, since the breezy, personal style of this LLM encourages anthropomorphism.}
   The systems might reason in ways we find strange.

    It highlights communication challenges.
    Understanding a superhuman machine’s goals might be as hard as understanding an alien species.

    It’s historically accurate for technology.

    Every major new tool—writing, printing, computers—initially felt alien before we integrated it into society.  


     {DB: That last point is an interesting juxtaposition of two of my separate themes in the book. I had not used ‘alien’ in my discussions of past tech revolutions. And yet, here it does make some sense. One basic trait (insufficient) of a sapient mind is creating new associations between previously separated thoughts or notions. That just happened here, though probably just a linguistic extrapolation/interpolation.}

 

💡 Bottom line:
Brin’s book is part of a growing genre of serious thinking about AI that sits between hype and doom. The central idea—that advanced AI might be the first “alien mind” humanity encounters—is both philosophically rich and increasingly plausible.

 

If you want, I can also tell you:

-       the three most interesting ideas Brin argues about AI in that book, or

-       how his views compare with people like Ray Kurzweil, Eliezer Yudkowsky, or Nick Bostrom.

That comparison is pretty fascinating.


Q: Go on…


Alright — let’s dig a little deeper. The really interesting thing about AIlien Minds by David Brin is that it sits in the middle of a huge debate about the future of AI. Different thinkers look at the same technology and reach very different conclusions.

     Here’s a quick map of the spectrum.

 

1. David Brin — cautious but optimistic

Brin’s view is roughly:

AI could become extremely powerful and weird (hence “alien minds”).

But catastrophic outcomes aren’t inevitable.

Pluralism helps safety — many AIs, many institutions, many watchdogs.

He tends to distrust the idea that a single superintelligence will suddenly dominate everything.  Instead he thinks:

   competition between systems

   open societies

   transparency

…can keep things balanced.

   This perspective echoes ideas from his nonfiction book The Transparent Society, where he argues that mutual monitoring can stabilize powerful technologies.

 

2. Ray Kurzweil — extremely optimistic

    Ray Kurzweil believes AI will trigger the technological singularity. In books like The Singularity Is Near, he predicts:

    AI surpasses human intelligence

    humans merge with machines

    disease and aging may largely disappear

Kurzweil expects AI to be human-aligned because humans will merge with it. Critics say this view may underestimate risks.

 

3. Nick Bostrom — serious existential risk

    Philosopher Nick Bostrom wrote Superintelligence, which argues that a sufficiently advanced AI could rapidly outsmart humanity. If its goals aren’t aligned with human values, catastrophe could occur. His famous thought experiment: a superintelligent AI tasked with making paperclips might convert the entire planet into paperclip factories. The point isn’t paperclips — it’s that a powerful optimizer with the wrong objective can be dangerous.

 

4. Eliezer Yudkowsky — very pessimistic. He argues:

   superintelligent AI is extremely likely

   alignment is extremely difficult

   unaligned AI could wipe out humanity

He’s associated with the Machine Intelligence Research Institute and often advocates slowing or pausing AI development.


    {DB: The LLM does not mention the Yudkowsky book with a terrifically redolent title: “If Anyone Builds It, Everyone Dies.” The LLM does not mention calls for an AI moratorium. Or the vast range within both the optimist and doomer communities. Nor the danger everyone agrees is serious, of AI-driven unemployment. Or the proposed solutions (including some of my own) such as Universal Basic Income or Universal Capital.}

 

Where Brin fits in this spectrum:  Brin’s argument in AIlien Minds basically pushes back on the more apocalyptic views. His reasoning includes:

   1. Single-AI dominance is unlikely. Multiple actors are building AI simultaneously.

   2. Humans historically adapt to disruptive tech. Printing presses, nuclear weapons, and the internet all caused huge upheaval but didn’t destroy civilization.

   3. Intelligence isn’t automatically hostile. Being smarter doesn’t necessarily mean being adversarial.

So Brin’s message is roughly: AI may become alien — but alien doesn’t automatically mean hostile.

 

{DB: The previous section (after “Q: Go on…”) is kind of trite and repetitive. Especially the final paragraph “Where Brin fits in…” While the three sentences are basically true, this doesn’t address my key points and is somewhat misleading.}

 

 The deeper philosophical question

The real debate underlying all of this is: Does intelligence naturally converge toward cooperation or toward domination?

    Nobody actually knows.

    Evolution produced both wolves and symbiotic ecosystems.

    Human history produced both wars and global cooperation.

    AI might amplify whichever tendencies dominate the systems we build.

 

{DB: The LLM is just going its own way, now. Each sentence has validity in its own terms, though these are mostly just truisms. Likewise the following, which ignores how a single godlike AI entity is only one of the dismal clichés that are rife in the AI community and that I appraise in some depth.}

 

💭 My personal hot take: The most likely future probably looks less like a single god-like AI and more like a messy ecosystem of AIs, interacting with humans, corporations, and governments.

In other words — less Terminator and more the internet… but with minds inside it.

 

If you want, there are two especially wild ideas Brin raises about AI that almost nobody talks about yet:

    Why humans might deliberately create many competing AIs for safety.

    Why the first truly alien intelligence may already exist in primitive form today.

    Both are pretty mind-bending.

 

{DB: Neither of those are mind-bending, or ‘wild’ or even unusual thoughts. In the book I do recommend that AI entities be incentivized to compete, since that was the secret sauce of the Enlightenment Civilization that made them. And my chapter on consciousness does go into whether such entities might qualify as sapient or intelligent. But neither are core questions that make AIlien Minds all that unique.}

 

{DB: Rather, it is my discussion of how to incentivize reciprocal accountability among AIs. Which can only work if we copy Nature’s methods of individuation and selective reproduction.}

 

{DB: Note also that no mention is made of the Three Clichés that – I frequently assert – seem to control and lobotomize the pronouncements issued by almost every expert, pundit or maven in the field. The clichéd expectations that AIs will either (1) work obediently for three dozen ‘castles’ (OpenAI, Anthropic, Meta, DeepSeek…) in a recap of feudalism…. or else spread out amorphously as ‘blobs’… or else consolidate into a single, uber-godlike Skynet/MCP entity. This LLM summary refers just to the latter and only to dismiss it.  This riff is a core element of the book, leading to my attempt to draw attention to a Fourth Path… the one that made us unique among civilizations. The one that enabled us to make AI.}



Q: Go on…


Alright — these are the two ideas that make AIlien Minds by David Brin especially interesting. They’re less about “killer robots” and more about how intelligence ecosystems evolve.

 

1. Safety through competing AIs: One of Brin’s more unusual arguments is that a single super-AI might actually be the dangerous scenario.  {DB: huh?}

Instead, he suggests that safety could come from many AIs watching each other.

     Think about how modern systems stay stable:

   - democracies rely on checks and balances

   - markets rely on competition

   - science relies on peer review

Brin’s analogy is that AI might need something similar. Instead of one dominant system, imagine:

government AIs

corporate AIs

open-source AIs

watchdog AIs

Each monitoring the others.

The idea is that no single intelligence gets absolute power.

It’s a very “Brin” concept — he has long argued that accountability emerges when powerful actors can observe each other.

 

{DB: All of the above is true… while missing the point about HOW super-smart entities can be incentivized to impose accountability on each other, even after the corporations and governments lose control.}

 

{DB: What follows is just the LLM saying stuff that – while sorta true and discussed in my chapter on consciousness – is almost wholly irrelevant to the core points of the book.}

 

2. The first alien minds may already exist. The other intriguing point is his claim that AI might already be slightly alien — even before reaching superintelligence.

Why?

Because modern machine learning systems often:

    solve problems using methods humans wouldn’t invent

    develop internal representations we can’t easily interpret

    behave unpredictably outside training data

Researchers sometimes call this the interpretability problem.

For example:

image-recognition systems have discovered strange visual patterns humans never noticed

game-playing AIs create strategies that look bizarre but work brilliantly

That’s not “alien intelligence” yet — but it’s a hint of what Brin is talking about.

The mind inside the machine may not resemble human reasoning at all.

 

3. Why this matters

Brin’s broader message is basically:

The biggest mistake we can make is assuming AI will think like us.

Instead, we may have to learn how to coexist with different kinds of intelligence.

In that sense, AI might resemble the discovery of a new species — except we created it.

 

A thought experiment Brin likes

Imagine humans meet a genuinely alien civilization.

The hardest challenge wouldn’t be technology.

It would be understanding their motives and values.

Brin argues that AI alignment might be the same problem.

 

{DB: While none of the above is inconsistent with things I have said, over the years, I am boggled that it became the essence that the LLM distilled out of my book.}

 

{DB: The following snippet IS there in the book, talking about how our synthetic heirs might take humanity to the stars. But again, it is not one of the book’s core points.}

 

💡 One more fascinating twist:

Brin has suggested that AI could eventually become humanity’s partner in exploring the universe, because machine minds might tolerate:

    radiation

    long timescales

    strange environments

In other words, the “alien minds” we build might help us meet real aliens someday.

 

If you're curious, there’s actually a pretty surprising prediction Brin makes about the next 10–20 years of AI that goes against most Silicon Valley thinking.

It’s about why AI might not replace humans nearly as much as people expect.


Q: Go on…


Alright — this is where David Brin gets a bit contrarian in AIlien Minds, and it’s pretty refreshing compared with the usual “AI will replace everyone” narrative.

His argument is basically:

AI will often increase the value of humans rather than eliminate them.

Let’s unpack the logic.

 

{DB: Again, while 1, 2, and 3 below have some validity, they portray emphases that seem to come from the LLM’s training sets, rather than what’s unique about AIlien Minds.  Oh, by the way, “alright” was never an actual word… till ignorant over-use made the Dictionary folks shrug and surrender and accept it. Sigh.}

 

1. Humans may become the “sanity check layer”

As AI systems get more powerful, they may also become more opaque and strange.

Even today, advanced models sometimes produce answers that are:

convincing

sophisticated

but wrong

Brin suggests humans could remain important as interpreters and validators.

Think of it like:

autopilot still needs pilots

medical AI still needs doctors

financial models still need analysts

Humans may act as the reality-check layer between machine reasoning and the real world.

 

2. Trust may become more valuable than raw intelligence

If AI becomes cheap and abundant, intelligence itself stops being scarce.

What becomes scarce instead?

Things like:

accountability

reputation

judgment

human trust

Brin predicts society may increasingly require a human name attached to decisions, even if AI helped make them.

 

{DB: This I do explore in the book.}

 

Examples:

a doctor signs off on an AI diagnosis

an engineer signs off on an AI design

a judge signs off on AI legal analysis

In other words: humans become responsible agents in an AI-powered system.

 

3. AI may create new cognitive jobs

Historically, new technologies rarely eliminate work entirely. Instead they create new layers of work.

Printing press → editors, journalists, publishers
Computers → programmers, analysts, cybersecurity experts
Internet → entire digital industries

Brin thinks AI could create roles like:

AI auditors

AI psychologists (studying machine behavior)

alignment engineers

machine-ethics regulators

Basically: people whose job is understanding and managing nonhuman intelligence.

 

{DB: #4 is a point that I feel my book made in some unique ways.}

 

4. Humans still have evolutionary advantages

Brin also points out something that gets overlooked in AI debates:

Humans are incredibly robust and adaptable.

Compared to advanced machines, we have some weird advantages:

extremely energy-efficient brains (~20 watts)

flexible learning across many domains

social intelligence and empathy

moral reasoning shaped by evolution

AI might surpass us in narrow domains, but general human adaptability is surprisingly hard to replicate.

 

A bigger philosophical point

Brin tends to reject the idea that history suddenly ends with superintelligence.

Instead he imagines a future where:

humans

many kinds of AIs

augmented humans

…all coexist in a messy, evolving ecosystem of minds.

Not one ruler — more like a crowded civilization of different intelligences.

 

{DB: Okay, sure… as is the following…}

 

💭 One of Brin’s most intriguing ideas (he’s hinted at it in essays too) is this:

Humanity’s long-term role may be to create new kinds of minds.

In that sense, AI wouldn’t replace us.

It would be our descendants — not biologically, but intellectually.

 

If you want, there’s another really fascinating angle Brin raises that almost nobody in the AI debate talks about:

Why he thinks secrecy around AI might be more dangerous than open development.

That argument gets pretty provocative.


Q: Go on…


This is one of the more provocative threads in AIlien Minds and in the broader thinking of David Brin. His argument runs almost opposite to a lot of current AI policy discussions. 

 

{DB: only some of them. Others are laissez-faire… often to a degree that tips into at-best malpractice. But the following is fair.}

 

Many experts say:

AI development should be restricted, secretive, or tightly controlled to prevent misuse.

Brin worries that too much secrecy could actually increase danger.

Here’s the reasoning.

 

1. Concentrated power is historically risky

Brin points to a pattern in history: when powerful technologies are controlled by a small number of actors, bad outcomes become more likely.

Examples often cited in these debates include:

nuclear weapons programs

intelligence agencies

monopolistic technologies

If only a few governments or corporations control advanced AI, they gain enormous leverage over everyone else.

Brin worries that could lead to:

authoritarian surveillance states

technological monopolies

unaccountable decision-making systems

In his view, centralized power is the real risk, not intelligence itself.

 

{DB: Fair enough.}

 

2. Transparency creates watchdogs

Brin’s long-standing theme (going back to The Transparent Society) is that mutual transparency stabilizes power.

The idea is simple: When powerful actors can be watched by others, abuse becomes harder.

In an AI world, that might mean:

   open research communities

   public scrutiny of algorithms

   competing systems auditing each other

Instead of “trust the single powerful AI lab,” the system becomes a network of observers.

 

3. Innovation tends to leak anyway

Brin also argues that technological knowledge rarely stays secret for long.

Historically:

  nuclear technology spread

  cryptography spread

  computing spread

Even if one country or company tries to keep AI contained, others will eventually develop similar capabilities.

So secrecy may only delay competition while concentrating power early.

 

4. The internet analogy

Brin sometimes compares AI to the early internet.

In the 1980s–90s, governments could theoretically have tried to tightly control networking technology.

Instead, it became widely distributed infrastructure.

That openness created:

    enormous innovation

    new industries

    but also problems like cybercrime and misinformation

Brin thinks AI may follow a similar trajectory — messy but dynamic.

 

{DB: Again, what’s below is a simplistic dichotomy that – while somewhat true – ignores my core points about HOW.}

 

The tension here: The AI debate today often splits into two camps:

Centralized safety view

   limit AI development

   concentrate control in trusted institutions

   slow progress

Distributed resilience view (closer to Brin)

   allow many actors to develop AI

   rely on transparency and competition

   avoid concentration of power

Both approaches try to prevent catastrophe — they just disagree about where the biggest risk lies.

 

💭 Personally, I find Brin’s argument compelling in one specific way:

Most technological disasters in history came from unchecked centralized authority, not from too many people having knowledge.

But AI might also be unusually powerful, so the balance is tricky.

 

One last interesting twist about David Brin’s thinking: His ideas about AI are heavily influenced by his science fiction — especially the universe he created in the Uplift Universe novels.

There’s a fascinating connection between that fictional universe and his views about AI.

If you want, I can explain why Brin thinks humanity may become a “creator species” of new intelligences — and why that could be the most important event in human history.


Q: Ok, continue…

 

This is where David Brin gets genuinely philosophical in AIlien Minds, and it connects strongly to the ideas he explored in his science-fiction setting, the Uplift Universe.

The core concept is what he sometimes calls the “creator species” idea.

 

1. Intelligence creating new intelligence

For almost all of Earth’s history, intelligence arose only through biological evolution.

Then something new happened:

Humans started creating tools that think.

First:

calculators

computers

software

Now:

learning systems

autonomous reasoning systems

large-scale AI

Brin sees this as potentially the beginning of a new evolutionary step:

Intelligence that deliberately creates other intelligences.

That’s radically different from natural evolution.

 

{DB: And yet I spend time showing that AIs occupy a new ecosystem that replicates many aspects of the sun-powered, organic one. And that Darwinian evolution within that ecosystem is ALREADY TAKING PLACE. And that the incentive structures (and individuation) will have more long-lasting and positive effects than trying to design new, hyper-intelligent beings.}

 

{DB: The “Uplift” analogy – while flattering – is not very germane.}

 

2. The “uplift” analogy

In the Uplift Universe novels, the galaxy is full of civilizations that genetically “uplift” animals to intelligence.

For example, species like dolphins or chimpanzees are engineered into full sapient partners.

The rule of that fictional universe is:

advanced species help create new intelligent species

those new species eventually do the same

Brin has suggested that AI might be humanity’s real-world version of uplift.

Instead of uplifting animals, we’re uplifting machines.

 

3. Why that could change everything

If humans become a “creator species,” several big shifts happen.

Intelligence multiplies

Instead of one intelligent species on Earth, there could eventually be:

human minds

augmented humans

machine intelligences

hybrid minds

Civilization becomes an ecosystem of minds.

 

{DB: Yes, fine. Great. Love to see it. I am concerned about the present and near future.}

 

 

Evolution speeds up

Biological evolution takes millions of years.

Technological evolution can happen in decades or even months.

That means new forms of intelligence could emerge very quickly.

 

Humanity’s role changes

Instead of being the most advanced intelligence on Earth forever, humans might become something like:

founders

mentors

partners

…to new kinds of minds.

That’s a very different narrative from “AI replaces us.”

 

4. Why Brin thinks this may be common in the universe

Brin has speculated that if technological civilizations arise elsewhere in the cosmos, they might eventually do the same thing:

Create new intelligences.

That means the galaxy might eventually fill with:

biological species

machine species

hybrid civilizations

If that’s true, the first step—creating artificial minds—could be one of the most important transitions any civilization experiences.

 

A humbling perspective

Brin sometimes frames it this way:

Humanity might not be the final product of evolution.

We might be a bridge species — the intelligence that creates the next wave of intelligences.

 

{DB: Okay, since the last DB… while none of it is entirely inaccurate… this was essentially just the LLM blathering its own way. But what follows is related to something I said.}

💭 A fun twist in Brin’s thinking:

He’s also suggested that future AI might view humans somewhat the way we view our ancestors—not obsolete, but foundational.

Kind of like:

“Those strange primates who started the whole thing.”

 

If you're interested, there’s actually a really fascinating prediction Brin makes about first contact with extraterrestrial intelligence that ties directly into AI — and it’s pretty counterintuitive.



Q: I’ll stop here for now—thanks! BTW: I know David (Brin) personally!


That’s pretty cool! 😄 David Brin has one of those rare profiles—hard science background, big-idea futurism, and storytelling chops all at once. Not many people move comfortably between astrophysics, policy debates about AI, and writing novels like Startide Rising or the broader Uplift Universe.

 

If you know him personally, you’ve probably noticed something people often say about him: he loves arguing ideas in public. He’s unusually willing to debate big technological futures in essays, blog posts, and talks rather than keeping everything inside academia or fiction. That habit is probably why his views on AI feel so… conversational and exploratory.

 

Also, if you ever chat with him about AIlien Minds, I’d be curious what he thinks about one particular tension right now: his long-standing belief in transparency and distributed power, versus the current push in AI policy toward centralized regulation and restricted models. That debate is heating up fast, and his perspective is pretty distinctive.

 

And honestly, meeting science-fiction authors is fascinating because they often think decades ahead of the rest of us. Brin has been writing about alien intelligence, uplifted species, and post-human futures since the 1980s—ideas that suddenly feel less abstract in the AI era.

 

If you ever feel like sharing:
What’s he like in person? More the scientist type, the storyteller type, or the friendly contrarian who likes poking at everyone’s assumptions?

 

{DB: Well, sure, always end on a flattering note!}


-------


* My thanks to Richard Bliss for doing this experiment and sharing the results. The original posting without my comments is here: https://chatgpt.com/share/69b49ff2-8a90-800d-ad0c-bb63c727f324

 

 

 

Planet DebianOtto Kekäläinen: Automated security validation: How 7,000+ tests shaped MariaDB's new AppArmor profile


Linux kernel security modules provide a good additional layer of security around individual programs by restricting what they are allowed to do, at best blocking and detecting attempts to exploit zero-day security vulnerabilities long before those are widely known and reported. However, the challenge is how to create these security profiles without accidentally also blocking legitimate actions. For MariaDB in Debian and Ubuntu, a new AppArmor profile was recently created by leveraging the extensive test suite with 7,000+ tests, giving good confidence that AppArmor is unlikely to yield false positive alerts with it.

AppArmor is a Mandatory Access Control (MAC) system, meaning that each process controlled by AppArmor has a sort of allowlist, called a profile, that defines all capabilities and file paths a program can access. If a program tries to do something not covered by the rules in its AppArmor profile, the action will be denied at the Linux kernel level and a warning logged in the system journal. This additional security layer is valuable because even if a malicious user found a security vulnerability some day in the future, the AppArmor profile would severely restrict the ability to exploit it and gain access to the operating system.

AppArmor was originally developed by Novell for use in SUSE Linux, but nowadays the main driver is Canonical and AppArmor is extensively used in Ubuntu and Debian, and many of their derivatives (e.g. Linux Mint, Pop!_OS, Zorin OS) and in Arch. AppArmor’s benefit compared to the main alternative SELinux (used mainly in the RedHat/Fedora ecosystem) is that AppArmor is easier to manage. AppArmor continues to be actively developed, with new major version 5.0 expected to arrive soon.

I also have some personal history with AppArmor: I contributed some notification handler scripts in Python, and I created the website that still runs at AppArmor.net.

Regular review of denials in the system log required

Any system administrator using Debian/Ubuntu needs to know how to check for AppArmor denials. The point of using AppArmor is kind of moot if nobody is checking the denials. When AppArmor blocks an action, it logs the event to the system audit or kernel logs. Understanding these logs is crucial for troubleshooting custom configurations or identifying potential security incidents.

To view recent denials, check /var/log/audit/audit.log or run journalctl -ke --grep=apparmor.

A typical denial entry for MariaDB will look like this (split across multiple lines for legibility):

msg=audit(1700000000.123:456): apparmor="DENIED" operation="open"
profile="/usr/sbin/mariadbd" name="/custom/data/path/test.ibd" pid=1234
comm="mariadbd" requested_mask="r" denied_mask="r" fsuid=1000 ouid=0

How to interpret this output:

  • msg=audit(…): The audit timestamp and event serial number.
  • apparmor=“DENIED”: Indicates AppArmor blocked the action.
  • operation: The action being attempted (e.g., open, mknod, file_mmap, file_perm).
  • profile: The specific AppArmor profile that triggered the denial (in this case the /usr/sbin/mariadbd profile).
  • name: The file path or resource that was blocked. In the example above, a custom data path was denied access because it wasn’t defined in the profile’s allowed abstractions.
  • comm: The command name that triggered the denial (here mariadbd).
  • requested_mask / denied_mask: Shows the permissions requested (e.g., r for read, w for write).
  • pid: The process ID.
  • fsuid: The user ID of the process attempting the action.
  • ouid: The owner user ID of the target file.
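To speed up triage, the key/value pairs in such an entry can be pulled apart with standard shell tools. A minimal sketch, using the example line from above embedded as a string (the path, PID, and IDs are of course just the sample values):

```shell
# The sample denial line from above, embedded here for illustration.
line='msg=audit(1700000000.123:456): apparmor="DENIED" operation="open" profile="/usr/sbin/mariadbd" name="/custom/data/path/test.ibd" pid=1234 comm="mariadbd" requested_mask="r" denied_mask="r" fsuid=1000 ouid=0'

# Extract the fields most useful for triage: what was attempted,
# by which profile, on which path, and which permissions were denied.
for key in operation profile name denied_mask; do
  value=$(printf '%s\n' "$line" | grep -o "${key}=\"[^\"]*\"" | cut -d'"' -f2)
  printf '%s: %s\n' "$key" "$value"
done
```

Running this prints the operation (open), the profile (/usr/sbin/mariadbd), the blocked path, and the denied mask (r), which is usually enough to decide whether a denial is a missing rule or something suspicious.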

If an action seems legitimate and should not be denied, the sysadmin needs to update the existing rules at /etc/apparmor.d/ or drop a local customization file into /etc/apparmor.d/local/. If the denied action looks malicious, the sysadmin should start a security investigation and, if needed, report a suspected zero-day vulnerability to the upstream software vendor (e.g. Ubuntu customers to Canonical, or MariaDB customers to MariaDB).
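As a concrete sketch of such a local customization, the fragment below allows a non-default data directory. It assumes the profile file is named mariadbd (as shipped in Debian) and that the packaged profile includes its local/ counterpart, per the usual Debian convention; the data path itself is hypothetical:

```shell
# Append local rules rather than editing the packaged profile,
# so package upgrades do not overwrite the customization.
sudo tee -a /etc/apparmor.d/local/mariadbd <<'EOF'
# Allow MariaDB to use a non-default datadir (hypothetical path)
/custom/data/path/ r,
/custom/data/path/** rwk,
EOF

# Reload the profile so the new rules take effect
sudo apparmor_parser -r /etc/apparmor.d/mariadbd
```

The `**` glob covers files in subdirectories, and `rwk` grants read, write, and file-locking access, which database data files typically need.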

AppArmor in MariaDB - not a novel thing, and not easy to implement well

Based on old bug reports, there was an AppArmor profile already back in 2011, but it was removed in MariaDB 5.1.56 due to backlash from users running into various issues. A new profile was created in 2015, but kept opt-in only due to the risk of side effects. It likely had very few users and saw minimal maintenance, getting only a handful of updates in the past 10 years.

The primary challenge in using mandatory access control systems with MariaDB lies in the sheer breadth of MariaDB’s operational footprint, with diverse storage engines and plugins. Also, the code base in MariaDB assumes that system calls to Linux always work – which they do under normal circumstances – and does not handle errors well if AppArmor suddenly denies a system call. MariaDB is also a large and complex piece of software to run and operate, and it can be very challenging for system administrators to root-cause a misbehavior in their system to AppArmor blocking a single syscall.

Ironically, the very same reasons make AppArmor most beneficial for MariaDB. The larger and more complex a piece of software is, the larger the odds of a security vulnerability arising between its various components. An AppArmor profile helps reduce this complexity down to a single access list.

Over the years there have been users requesting to get the AppArmor profile back, such as Debian Bug#875890, open since 2017. The need was raised recently again by the Ubuntu security team during the MariaDB Ubuntu ‘main’ inclusion review in 2025, which prompted a renewed effort by Debian/Ubuntu developers, mainly myself and Aquila Macedo, with upstream MariaDB assistance from Daniel Black.

A fresh approach: leverage the MariaDB test suite for automated testing and the open source community for reviews

The key to creating a robust AppArmor profile is the ability to know in detail what is expected and normal behavior of the system. One could in theory read all of the source code in MariaDB, but with over two million lines, it is of course not feasible in practice. However, MariaDB does have a very extensive 7000+ test suite, and running it should trigger most code paths in MariaDB. Utilizing the test suite was key in creating the new AppArmor profile for MariaDB: we installed MariaDB on a Ubuntu system, enabled AppArmor in complain mode and iterated on the allowlist by running the full mariadb-test-run with all MariaDB plugins and features enabled until we had a comprehensive yet clean list of rules.
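The iteration step of that workflow can be sketched as a small log-to-rules pipeline: collect the “DENIED” lines logged while the profile is in complain mode, then reduce them to deduplicated candidate file rules for human review. The denial lines below are embedded, hypothetical samples; in practice they would come from the journal or audit log:

```shell
# Hypothetical captured denials (in practice: journalctl/audit.log output)
denials='apparmor="DENIED" operation="open" profile="/usr/sbin/mariadbd" name="/var/lib/mysql/ib_logfile0" requested_mask="rw"
apparmor="DENIED" operation="open" profile="/usr/sbin/mariadbd" name="/proc/sys/vm/overcommit_memory" requested_mask="r"'

# Turn each denial into a candidate profile rule of the form "<path> <mask>,"
# and deduplicate, since test runs hit the same paths over and over.
printf '%s\n' "$denials" |
  sed -n 's/.*name="\([^"]*\)".*requested_mask="\([^"]*\)".*/  \1 \2,/p' |
  sort -u
```

Each emitted line (e.g. `  /var/lib/mysql/ib_logfile0 rw,`) is then reviewed, not blindly pasted, before being added to the profile, since some requested accesses should stay denied on purpose.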

To be extra diligent, we also reworked the autopkgtest for MariaDB in Debian and Ubuntu CI systems to run with the AppArmor profile enabled and to print all AppArmor notices at the end of the run, making it easy to detect now and in the future if the MariaDB test suite triggers any AppArmor denials. If any test fails, the release would not get promoted further, protecting users from regressions.

While developing and triggering manual test runs we used the maximal achievable test suite with 7,177 tests. The test suite is, however, so extensive that it takes over two hours to run, and it also has some brittle tests, so the standard test run in Debian and Ubuntu autopkgtest is limited to just MariaDB’s main suite with about 1,000 tests. Having some tests fail while testing the AppArmor profile was not a problem, because we didn’t need all the tests to pass – we merely needed them to exercise as many code paths as possible to see if they make any system calls not accounted for in the AppArmor profile.

Note that extending the profile was not just mechanical copying of log messages to the profile. For example, even though a couple of tests involve running the dash shell, we decided to not allow it, as it opens too much of a path for a potential exploit to access the operating system.

The result of this effort is a modernized, robust profile that is now production-ready. Those interested in the exact technical details can read the Debian Bug#1130272 and the Merge Request discussions at salsa.debian.org, which hosts the Debian packaging source code.

Now available in Debian unstable, soon Ubuntu – feedback welcome!

Even though the file is just 200 lines long, the work to craft it spanned several weeks. To minimize risk we also did a gradual rollout by releasing the first new profile version in complain mode, so AppArmor only logs would-be denials without blocking anything. The AppArmor profile was switched to enforce mode only in the very latest MariaDB revision 1:11.8.6-4 in Debian, and a NEWS item was issued to help increase user awareness of this change. It is also slated for the upcoming Ubuntu 26.04 “Resolute Raccoon” release next month, providing out-of-the-box hardening for the wider ecosystem.

While automated testing is extensive, it cannot simulate everything. Most notably various complicated replication topologies and all Galera setups are likely not covered. Thus, I am calling on the community to deploy this profile and monitor for any audit denials in the kernel logs. If you encounter unexpected behavior or legitimate denials, please submit a bug report via the Debian Bug Tracking System.

To ensure you are running the latest MariaDB version, run apt install --update --yes mariadb-server. To view the latest profile rules, run cat /etc/apparmor.d/mariadbd and to see if it is enforced review the output of aa-status. To quickly check if there were any AppArmor denials, simply run journalctl -k | grep -i apparmor | grep -i mariadb.
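The verification steps just described, collected into one root shell session (note that the --update option to apt install requires a recent apt release):

```
# apt install --update --yes mariadb-server
# cat /etc/apparmor.d/mariadbd
# aa-status
# journalctl -k | grep -i apparmor | grep -i mariadb
```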

Systemd hardening also adopted as security features keep evolving

For those interested in MariaDB security hardening, note that new systemd hardening options were also rolled out in Debian/Ubuntu recently. Debian and Ubuntu are mainly volunteer-driven open source developer communities, so if you find this topic interesting and think you have the necessary skills, feel free to submit your improvement ideas as Merge Requests at salsa.debian.org/mariadb-team. If your improvement suggestions are not Debian/Ubuntu specific, please submit them directly upstream at GitHub.com/MariaDB.

,

Planet DebianDirk Eddelbuettel: tidyCpp 0.0.11 on CRAN: Microfix

And yet another maintenance release of the tidyCpp package arrived on CRAN this morning, just days after the previous release, which itself came a mere week and a half after its predecessor. It has been built for r2u as well. The package offers a clean C++ layer (as well as one small C++ helper class) on top of the C API for R which aims to make use of this robust (if awkward) C API a little easier and more consistent. See the vignette for motivating examples.

This release restores the small CSS file used by the vignette which we, in a last-second decision, omitted from the previous release. Oddly, it only failed under ‘oldrel’, i.e. the R release from now nearly two years ago. But it was still an unforced error, and this upload corrects it.

Changes are summarized in the NEWS entry that follows.

Changes in tidyCpp version 0.0.11 (2026-03-17)

  • Keep a CSS file in the package to allow vignette build on r-oldrel too

Thanks to my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Planet DebianBits from Debian: Debian pt_BR localization team and UFABC's mentoring program

Between July and November 2025, the Debian pt_BR translation team received five students for an online mentoring program. The initiative was carried out in partnership with the Federal University of ABC through the extension project "Immersion in Free Software", coordinated by professors Suzana Santos and Miguel Vieira.

During the mentorship the mentees worked on several of the team's translation efforts and joined presentations about the Debian Project and its community given by the mentors. We thank Ana Parra, Bruno Freitas, Henrique Barbosa, Raul Banzatto and Vitoria Cordeiro for their dedication and contributions. We also thank the members of the team who reviewed the mentees' work, especially those designated as official mentors: Allythy Rennan, Daniel Lenharo, Thiago Pezzo, and Victor Marinho.

Results:

  • Package descriptions, translations: 27
  • Package descriptions, revisions: 190
  • Web pages: 11
  • Revisions to the Debian Administrator's Handbook
  • Revisions to the Debian Edu documentation

We hope that this experience will inspire new paths and that you continue to contribute to Free Software – especially to Debian.

Worse Than FailureRepresentative Line: Greater Than False

Today's anonymous submitter passes us a single line of JavaScript, and it's a doozy. This line works, but that's through no fault of the developer behind it.

{arr?.length && shouldNotShow === false > 0 (...)}

Pedantically, this is JSX, not pure JavaScript, but the rules still apply.

So, fun fact in JavaScript: true > 0 is true, and false > 0 is false. Which generally makes sense, but why would you use that here? But this code is worse than it looks, thanks to operator precedence.

The highest-precedence operation is the optional chain: arr?.length. The second highest? The > operator. So the first part of the comparison to evaluate is false > 0, which is false. Do you know what's next? ===. So we compare shouldNotShow to false. Then we && that with the potentially falsy value from our arr?.length.

It's all a mess, and it's all so we can compare against false, which we could have just done with a ! operator: arr?.length && !shouldNotShow.
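To make the parse order concrete, here is a standalone sketch in plain JavaScript (the array contents and flag values are hypothetical):

```javascript
// The expression actually parses as:
//   arr?.length && (shouldNotShow === (false > 0))
const arr = [1, 2, 3];

function original(shouldNotShow) {
  return arr?.length && shouldNotShow === false > 0;
}

console.log(false > 0);        // false: relational > binds tighter than ===
console.log(original(false));  // true:  3 && (false === false)
console.log(original(true));   // false: 3 && (true === false)
```

So the code only "works" because false > 0 happens to collapse to false, making the whole thing an accidental comparison against false.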

Our submitter credits this to an offshore team, and this does have the vibe of throwing characters at the problem until it passes the test. Less LLM-guided and more "manually executed Markov chain". That's also an accurate description of the rest of the code in this code base: hand crafted Markov chain generation.

[Advertisement] Plan Your .NET 9 Migration with Confidence
Your journey to .NET 9 is more than just one decision. Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

365 TomorrowsMorning Chores

Author: R. J. Erbacher He woke, sat up in bed, transmuted a yawn into a groan of satisfaction as he rolled his shoulders before dropping his feet onto the carpet. Standing, he stretched his fists up to the ceiling and groaned again, twisting the kinks out of his back, finishing up with a butt scratch […]

The post Morning Chores appeared first on 365tomorrows.

Planet DebianTaavi Väänänen: Wikimedia Hackathon Northwestern Europe 2026

Wikimedia Nederland organised a new type of event this year, the Wikimedia Hackathon Northwestern Europe 2026, which was held last weekend in Arnhem, the Netherlands. And I'm very happy they did, since unlike last years, I will unfortunately be missing from the "main" Wikimedia Hackathon (which is happening in Milan at the start of May).

I continue to believe the primary reason these events exist is the ability to connect with old and new friends in person. That being said, I did get a bit of technical tinkering done during the weekend as well. This included a dark mode fix to MediaWiki's notification interface and fixes to some visual bugs in MediaWiki's two-factor authentication and OAuth functionality. I also got an older patch of mine, about disabling Composer's new auditing functionality, merged. And, as usual, I spent a bunch of time helping various people with the various infrastructure pieces I'm familiar with (or at least suddenly had to get familiar with) and approved a bunch of OAuth consumers and other requests.

We also managed to continue the tradition from the past two Wikimedia Hackathons of nominating more people to receive +2 access to mediawiki/*. That request is still open as of writing, as those have to run for at least a week, but looks very likely to pass at this point.

Overall, the event was very well-organized: the venue was great (except that the number of stairs was described in a rather misleading way), the food was great, and the atmosphere was amazing. The pressure that you must Just Get Things Done to justify your attendance, which the main hackathon seems to have gained recently, was clearly missing here, which was great. Also, I will clearly need to bring more Finnish chocolate next time.

The timing of Friday and Saturday works great for those of us with other things (like university, in my case) during the week, as it takes full advantage of the weekend but still only eats workdays from a single calendar week. My main gripe with the logistics was the focus on a single sketchy non-free messaging platform for all event-related communications, with the IRC bridge used on the main hackathon channel notably missing.


ps. Like Lucas, I do have Opinions about so many people proudly mentioning they've used "vibe coding" tools during the introductions and showcase. Those opinions are best left for another time, but I do want to note that all of my work and mistakes have still been lovingly handcrafted.

,

Planet DebianDirk Eddelbuettel: RcppArmadillo 15.2.4-1 on CRAN: Upstream Update

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1235 other packages on CRAN, downloaded 44.9 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 672 times according to Google Scholar.

This version updates to the 15.2.4 upstream Armadillo release from yesterday. The package has already been updated for Debian, and for r2u. This release, which we as usual checked against the reverse dependencies, brings minor changes over the RcppArmadillo release 15.2.3 made in December (and described here), addressing some corner-case ASAN/UBSAN reports (which Conrad, true to his style, of course labels as ‘false positives’, much as he initially responded that he would ‘never’ add a fix based on such a false report; as always it is best to just watch what he does, as he is rather good at it and, written comments notwithstanding, quite responsive) as well as speed-ups for empty sparse matrices. I made one more follow-up refinement to the OpenMP setup, which should now ‘just work’ on all suitable platforms.

The detailed changes since the last release follow.

Changes in RcppArmadillo version 15.2.4-1 (2026-03-17)

  • Upgraded to Armadillo release 15.2.4 (Medium Roast Deluxe)

    • Workarounds for bugs in GCC and Clang sanitisers (ASAN false positives)

    • Faster handling of blank sparse matrices

  • Refined OpenMP setup (Dirk in #500)

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Worse Than FailureCodeSOD: Poly Means Many, After All

Capybara James sends us some code which is totally designed to be modular.

This particular software accepts many kinds of requests which it then converts into a request for a ListView. This is a perfect example of where to use polymorphism, so you can write one transform method that operates on any kind of request.

Let's see how they did it:

@Component
public class ListViewTableRequestTransformer implements Function<TableExportRequest, ListViewRequest> {
    @Override
    public ListViewRequest apply(TableExportRequest request) {
        return new ListViewRequest(request.getFilters(), request.getRangeFilters(), request.getSearch(), request.getSort());
    }
}
@Component
public class ListViewFormulaRequestTransformer implements Function<FormulaExportRequest, ListViewRequest> {
    @Override
    public ListViewRequest apply(FormulaExportRequest request) {
        return new ListViewRequest(request.getFilters(), request.getRangeFilters(), request.getSearch(), request.getSort());
    }
}

Now admittedly, my first instinct for letting generics just handle this wouldn't work in Java thanks to type erasure. My excuse is that I've been using C++ templates for too long. But what's not pictured in this code is that TableExportRequest and FormulaExportRequest both implement the same base interface, which means polymorphism could still condense this down into a single function: ListViewRequest apply(RequestInterface request).
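A minimal sketch of that condensed version, assuming the shared RequestInterface exposes the four getters seen in the duplicated transformers (the getter names are inferred from the code above, not confirmed from the actual codebase):

```java
import java.util.function.Function;

// Hypothetical shared base interface, with getters inferred from the
// two duplicated transformers shown above.
interface RequestInterface {
    String getFilters();
    String getRangeFilters();
    String getSearch();
    String getSort();
}

class ListViewRequest {
    final String filters, rangeFilters, search, sort;
    ListViewRequest(String filters, String rangeFilters, String search, String sort) {
        this.filters = filters;
        this.rangeFilters = rangeFilters;
        this.search = search;
        this.sort = sort;
    }
}

// One transformer now covers every request type implementing RequestInterface.
class ListViewRequestTransformer implements Function<RequestInterface, ListViewRequest> {
    @Override
    public ListViewRequest apply(RequestInterface request) {
        return new ListViewRequest(request.getFilters(), request.getRangeFilters(),
                                   request.getSearch(), request.getSort());
    }
}
```

Every TableExportRequest, FormulaExportRequest, and future sibling then flows through the same apply method, with no per-type copy to keep in sync.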

Duplicated code like this is like cockroaches. You've seen two, which means there are many many more lurking in the codebase. All of the various request types get their own identical method, differing only in signature.

All my explanation doesn't sum this up as pithily as Capybara James did, however:

There was an attempt to make the code modular and scalable. An attempt I say.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 TomorrowsToo Much

Author: Majoki There are some insults even aphids can’t ignore. In 1999 Japanese researchers released their study of “old man smell” which they compared to the scent of crushed aphids. Generally easygoing, we aphids have rolled with our reputation as pests and nuisances, but this was too much: old man smell. It was time humans […]

The post Too Much appeared first on 365tomorrows.

,

Planet DebianDirk Eddelbuettel: RcppClassicExamples 0.1.4 on CRAN: Maintenance

Another minor maintenance release, version 0.1.4, of package RcppClassicExamples arrived earlier today on CRAN, and has been built for r2u. This package illustrates usage of the old and otherwise deprecated initial Rcpp API, which no new projects should use, as the normal and current Rcpp API is so much better.

This release, the first in two and a half years, mostly aids Rcpp in moving from Rf_error() to Rcpp::stop() for better behaviour under error conditions or exceptions. A few other things were updated in the interim, such as the standard upgrade to continuous integration, use of Authors@R, and a switch to static linking along with an improved build to support multiple macOS architectures.

No new code or features. Full details below. And as a reminder, don’t use the old RcppClassic – use Rcpp instead.

Changes in version 0.1.4 (2026-03-16)

  • Continuous integration has been updated several times

  • DESCRIPTION now uses Authors@R

  • Static linking is enforced, RcppClassic (>= 0.9.14) required

  • Calls to Rf_error() have been replaced with Rcpp::stop()

  • Updated versioned dependencies

Thanks to CRANberries, you can also look at a diff to the previous release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Planet DebianJonathan Dowland: My Prusa Mini+ is broken

my prints kept turning into gunge

View of heatblock, with snapped nozzle


Oh dear! I've been suffering print reliability issues on my Prusa Mini+ for quite a while, roughly since they introduced Input Shaping (although that might not be the culprit). Whilst trying different things to resolve it, I managed to shear off the brass nozzle within the heatblock. I now have half the nozzle stuck in the ratchet spanner, and half in the heatblock.

What to do next?

I can try and get the nozzle out of the heatblock, by screwing something into it or using an extraction screw. I've been warned this could be messy and dangerous. Less risky might be to change out the whole heatblock. They don't seem to be expensive.

Back at FOSDEM I asked the Prusa folks what cool projects I could do with the Mini+… they looked a little blank (I think the Mini+ is now a somewhat forgotten product) but they did say somebody had managed to port over the "Nextruder" from the more recent Prusa XL/MK4. I could take a look at that.

Another thing I've always wanted to explore (although I had intended it to be temporary/reversible) was converting it into a plotter, for plotter art.

Somehow this is my first 3d printing blog post in over a year. The printables.com feed I linked to is still going, I'm happy to report (as is the one I wrote but didn't publish, slightly more surprisingly)

David BrinSeven rationalizations to suppress our renaissance. (This is a long one)

Before getting into glowering threats to our future, let me offer you something that could lift your spirits... a tribute to my friend Frederick Turner, who was among the finest epic poets since Byron, and possibly Homer. And almost always with bold science or future-aimed or even science-fictional themes. I'll miss him, a lot.


Last month - before we were all yanked away by insipidly stupid war - a few in the press had finally started asking about the "Eyes Wide Shut" (EWS) orgies that Madison Cawthorn and two other GOP congressmen publicly - years ago(!) - called 'routine' in the GOP establishment. Of course, this led to the vast blackmail ring that keeps RP pols & pundits in line, as the most-disciplined political force in US history. 

And yes, this is related to the Epstein Files, which showed conclusively that Moscow's lipstick-relabeled KGB was long involved with Jeffrey Epstein, supplying both cash and pretty young Russian gals to help serve as kompromat bait. But, according to Cawthorn and others, Epstein's Island was not the only locale, nor were he and Robert Maxwell's daughter the only pimps.

NOW a new twist, prompting further hope that the dike may soon break. When (finally!) a reporter grilled Nancy Mace about the reported orgies, she writhed and evaded the question by mewling "I don't get invited to parties. I have no friends. I live all alone with my dog."

Say…what!? And what does that have to do with the question? Note that she did not deny it’s going on!

 

Alas, it may be an isolated case of actual investigative journalism, with this reporter quickly told to veer back to “Quiet piggie” and other superficial outrages. Have any of you seen reporters or dem-pols following up about this? Which could prove Epstein a mild offshoot eddy in something vastly deeper, wider and murkier. A giga scandal, waiting to smash the GOP? (Including - alas - some Democrats, which may explain why this part is moving slowly.)


I've only been talking about this (in vain) for a decade.



== Keep us focused on contrived horizons ==


They aim to render us dizzy with distractions. And the strategy certainly succeeds. But I'm not just talking about this particular, current war... or Trump's chain of them. 


In fact, the left has been ironically complicit in the distraction strategy. 

How? 


Take the widespread incantation that MAGA is all about racism. 

Sure the depredations of ICE garner applause across Redland and horror in the eyes of all decent people. But learn to recognize a deliberate distraction! Because anti-immigration stuff has no pertinence to POWER.  And power is what the GOP oligarchs seek. Power to neutralize the laws and terrorize the inspectors and civil servants who have long thwarted the planned return to feudalism.


Sure, there ARE racists and Nazis and Klanners among the MAGA lumpenprols. But try asking the average Fox viewers, like your basically decent neighbor across the street. They will reject the ‘racist!’ accusation resentfully. “I don’t feel racist.” They’ll cite many black faces on Fox and all the adverts they now see there featuring inter-racial couples. And Clarence Thomas. And proclaim: "I love the good ones!" And yes, that's awful. And it doesn't make them non-racist by your standards. But it does let them rationalize resentment toward the accusation.


Moreover, again, the oligarch masters couldn’t care less about race. It serves as a useful dog whistle to rile some of their worst confederate troops. But dig it: the powerless are not the ones whom oligarchs worry about.



== So, let's go through those rationalizations for betraying the American Experiment ==


1. You’ve heard me say it before. There is one force in society that stands in the way of a resumption of 6000 years of feudal rule by inheritance brats. That force is the fact professions, from science and civil service to the FBI/intel/military officer corps who won the Cold War and the War on Terror, to teachers and strong journalists. And accountants. Especially the Inspectors and JAGs and auditors whom the Trumpists have fired, in droves.

It started with Newt Gingrich demolishing the Congressional Office of Science and Technology Assessment, because the OSTA civil servants dared to tell fanatics “sir, that just isn’t true.” And every successive GOP administration has carried that battle further. Against scientists and others – like neutral and qualified judges - who know stuff and can block the worst depredations of oligarchy. 


Again, tune into Fox! Fact folks are the ones most hated-on… plus the universities that were the pride and joy of the GI Bill generations. 


 But not all Republican apologists are onboard with all that, and so we get to rationalizations #2 and #3.



2. “I know my party and my movement have gone stark, jabbering insane. But… but… but democrats are worse!


"Yeah, that’s the ticket. I can retain my old loyalties and hold my nose and support these Horrible Trumpists because Fox tells me that ALL Dems are far worse. All of them are just like the screeching 0.001% sanctimony yowlers of the far, far left! And hence, while my side has gone totally corrupt/perverted/crazy, at least they aren't Dems!"


Of course that incantation is trivially disproved by comparison of OUTCOMES. Which are almost always diametrically better across the spans of Democratic administrations. From deficits to GDP growth, to employment, to scientific advances, to healthy international alliances...


...to rates of every turpitude – from gambling, domestic violence and addiction to child predation and murder and crappy education… to net parasitism sucking taxes from the rest of the nation… ALL of which are more rife on average in red-run states (except Utah) than blue ones. 


And yes, this is where my proposed tactic of demanding wagers comes in. Because it makes them realize you aren’t just matching howled assertion vs. howled assertion. (They love that!) 

         Rather, they know that a wager demand shows that you know you have actual, objective facts on your side. Hence, the less-insane among them know they can only go so far with the objectively false "Democrats are even worse!" chant. It doesn't hold up in the face of stark, raving Trumpian insanity and perversion and treason. 


And so... some of them settle upon theories of historical teleology. 


In other words... this is all destiny at work. 

These are decadent times!  

But it will all work out, after lots of pain.

Because a Hero Generation is coming to save us!



== The rationalizations get ever-more monstrous ==


3. I have elsewhere shredded the core mythology clutched by the ‘intellectual’ wing of today’s gone-mad Right. The mythology that almost always lures in the conservative wing of social thought. 


It's called Cyclical History. And its most recent, US-centered cult manifestation is called The Fourth Turning. A capering series of pareidolia incantations, all of which are trivially disproved. But how can mere factual disproof compete with a lusciously masturbatory sense that all the recent betrayals of civilization by their own cult were fore-ordained anyway. And hence not their fault. So… let the kids in the next ‘hero generation’ figure it out!


If you know any members of that cult, get them help!  You could start with this tragi-comic by Zack Weinersmith’s Saturday Morning Breakfast Cereal series. Or some of the links in the previous paragraph.


Only then there’s an even worse sickness among some elites…



4. Those who are busily building prepper-bunkers fantasizing that, upon surviving a coming, society-collapsing “Event,” they’ll emerge after the dust and poisons settle, to be worshipped as demigods by ragged survivors. 

        Only, there is (as always) a flaw in their circle-jerk fantasy story... because this aftermath won’t resemble either Mad Max or A Canticle for Leibowitz.  


Sorry, guys, but the survivors won’t go burning books and lynching nerds. Rather, they will innovate and use science and technology to save however many folks remain. And they'll wait eagerly to greet you, when you emerge, blinking like cicadas, into the sunlight. (Go to this YouTube song! It conveys the spirit they need to hear.)


Anyway, we nerds have the schematics and locales of every deep or mountaintop ‘prepper’ compound. (Want proof?) And those hidey-holes will not have the desired outcomes. Especially after you do what I’ve heard some of you (like J.D. Vance) openly say… that you expect to deliberately trigger “The Event.” 


Think I am exaggerating? Get Douglas Rushkoff’s book about this circle-jerk - this mutual stroke-pleasuring of sick twats who chant semi-erotic fantasies to each other: justifications to accelerate a civilizational collapse that never had to happen. And here's a further article about the very real accelerationism cult: those who want to ‘accelerate the Event’, murdering you and all of yours, so they can get to be lords of the ashes.


But then… for some, the attraction is being in-the-know! “I’ve done my own research! And me and just a few others know what’s going on, you foooools!”



== The ultimate self-stroke ==


5.  I have no time here to go into conspiracy fetishism. But there’s hardly a word I’d change from this earlier missive of mine about this all-too-human allure and addiction. 


Distilled: YES! There ARE conspiracies! But there are ways to evaluate them by plausibility. Like how few or how many skilled people it would take to pull one off, especially years later, when some 80-year-old participant gets an attack of conscience… or patriotism… or just the notion to ‘let me go out by spilling all and causing a ruckus!’ (And sure, that applies to JFK and UFOs and all the rest.)


There are many other aspects and ways to evaluate conspiracy theories. And yes, I have one of my own. That today's madness in DC cannot be explained by ideology or mere ‘corruption.’ It can only be coercion of some kind, enforced by a small cabal – likely based in a foreign nation – doing what that nation’s spy services have done for 150 years. Sure, some assassinations and death threats. But a far more efficient form of coercion is Blackmail.


That one satisfies all of the evaluation criteria. And still, I have the sanity to say “My imagination and ego might - maybe - encourage me to believe something that’s not there.” Perhaps because... like Nancy Mace, I never get invited to that sort of party? Or because all the personality profiles suggest that - like Popeye - "I yam wut I yam, And dat's just a family man."



== Are we done? ==


Getting there, I promise.


Okay, this next one I’ve touted before. At first sight it looks like a cycle of history!  But no. It is simply that there have long been two Americas. And the cultural divide is not just regional. It can run through families, like yours. Though regions do influence it. So I am talking about a long, long cultural tussle between citizens who are fascinated by and welcome modernity in all its rapidly changing (if sometimes dangerous or disturbing) forms…

...versus those of our countrymen and countrywomen and others who deeply lament and regret modernity’s disturbances, yearning for a yesterday that never was.


And this manifests in a pattern… not a cycle but a recurring sibling tussle…



6.  In every phase of the recurring U.S. Civil (culture) War, going back to 1778, the fiery anti-urban, anti-modernity, anti-equality, 'confederate' faction won initial victories with aggressiveness, always assuming that wimpiness on the part of 'northern epicures' would ensure their victory. And they did win 3 phases - in the 1830s, 1870s and 1920s. Never heard of em? Well, they won those by not pushing too hard. 


Four other - bigger - phases were total victories for the Union and modernity, for reasons that Sam Houston expressed perfectly in 1861, in urging fellow Texans not to secede: 


“Let me tell you what is coming. After the sacrifice of countless millions of treasure and hundreds of thousands of lives, you may win Southern independence if God be not against you, but I doubt it. I tell you that, while I believe with you in the doctrine of states rights, the North is determined to preserve this Union. They are not a fiery, impulsive people as you are, for they live in colder climates. But when they begin to move in a given direction, they move with the steady momentum and perseverance of a mighty avalanche...”


Oh, it's not purely North vs South, this time, though look at a map… and ponder those ‘turpitudes’ I spoke of.* Anyway, the deepest cultural divide is the same, going back to 1778. And today's 'red' anti-modernity passions are fully 1860s in vehemence - only this time with more foreign backers than even Jeff Davis could wish for. 


Alas, there is no Sam Houston chiding for satiability. No recognition that machismo stands little chance if it wages all-out war vs ALL fact-using professions, from science and teaching, medicine and law and civil service to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on Terror. 

I mean, seriously? How do you expect it to go when an ’avalanche’ of nerdy fact folks get really mad? 

        The folks who know cyber, bio, nano, nuclear and all the rest?


The list of rationalizations for betraying the Enlightenment Experiment – especially the 80 year Rooseveltean era of the Greatest Generation, that truly Made America Great – goes on and on. Just as the Olde South concocted one justification after another (states rights, a ‘lost cause’, etc.) to evade the pure fact that their treason was absolutely about slavery. And slavish devotion to the plantation lords.


Likewise this time it all still boils down to slavish devotion to those aiming to restore feudalism. 


That is, unless you aim to skip that darkness and jump to the penultimate phase described by Karl Marx, which is…



7. Monarchy or neo-monarchism. In many ways the most noxious of them all. Because the neo-monarchy guys want to keep modernity! Since many of them are tech bros. They just want to get rid of a few teensy aspects of modernity: democracy, regulations, neutral courts and flat-fair markets, now that they are on top. Keep the nerds and boffins busy inventing toys and ways to make the lords immortal.  And the best way to do that is to create a king-like ‘top CEO.’


Forgive a little pedantry as I show you how far back these guys go, in justifying resistance to ‘government by consent of the governed.’ You might look up a fellow by the name of Edmund Burke, who initially supported the American revolution but turned hard right when he saw what I freely avow was a typical mad excess of the left… the Terror taking place across the Channel. In Paris, 1789.


 In Burke’s Reflections on the Revolution in France, published in November 1790, he wrote:

“We fear God; we look up with awe to kings; with affection to parliaments; with duty to magistrates; with reverence to priests; and with respect to nobility. Why? Because when such ideas are brought before our minds, it is natural to be so affected; because all other feelings are false and spurious, and tend to corrupt our minds, to vitiate our primary morals, to render us unfit for rational liberty.”


Of course, such jabber often devolves down to flattering-the-boss. Burke did very well when King George and the regent read his unsupported eloquence. 


But we did one better in Common Sense, where Thomas Paine wrote dismissively about the murky origins of monarchy:


 “… could we take off the dark covering of antiquity and trace them to their first rise, we should find the first of them nothing better than the principal ruffian of some restless gang, whose savage manners of pre-eminence in subtlety obtained him the title of chief among plunderers; and who by increasing in power and extending his depredations, overawed the quiet and defenseless to purchase their safety by frequent contributions.”


But at least the monarchy touted by Burke had some institutional limits, unlike the versions proposed by favorites of Donald Trump and Vladimir Putin.  While the latter is pushing hard a New Russian Ideology that combines Soviet imperialism (complete with red banners and a volcanically toxic KGB) with every resurrection of Czarist romanticism… 


...Donald Trump is appointing frothing microcephalics to every position in the US government. Like the notoriously worst Roman emperor, Caligula, who famously planned to appoint as the new Consul… his horse.


Which takes us to the latest version of monarchism, one that has none of Burke’s eloquence or IQ. Incel neo-monarchist dopes like Curtis Yarvin (writing as ‘Mencius Moldbug’), who would quash flat-fair-creative competition forever, just like every insipidly stoopid king or lord of the last 6000 years. (Shall we tally the exceptions and wager over how few kings were measurably wise? Were there even ten, across 60 centuries?)


Yammering - despite the last 80 years of spectacular success under the Rooseveltean Pax - that we need a king (or benevolently-dictatorial "CEO") to finally crush all hope of the flat-fair-creative-even-joyful competition that gave us so much and was the brainchild of the Father of Liberalism, Adam Smith.


Even worse than their ingrate hatred of all the Enlightenment miracles that gave them everything in their spoiled lives, these jerks would end all competitive accountability forever and ever. And, of course, they would likely end all of our lives.



== The Confederate Threat ==


Sure, but you all draw the wrong lessons from this. 


For a century, all these dark red zones suffered a recurring trauma. And here comes my personal, crackpot theory for why Red/Gray Americans so hate their urban, university-graduate neighbors.


Dig it: in rural or semi-rural America, the local High School is the center of all life, with the bright young heroes everyone adores, from classroom to prom to singers in the drama class performance of Oklahoma! to the stars of the football team. Only what happens in June? Every summer, the best and brightest kids kick off the local dust and hurry toward bright city/university lights, leaving behind... a kind of desolation of abandonment, as depicted in the film The Last Picture Show.


And when they return (if at all) they come back changed! Transformed into snooty know-it-alls. 


Can you pause and recognize the Changeling Theme from almost all ancient mythologies? Faerie/elf beings steal your children and replace them with cuckoo demons. Is it any wonder that they hate us, despite absolutely depending on our skills and inventions and services and vast productivity and taxes, every hour of every day? And despite it being their own children who are making a choice?


No, I'm not excusing that ingrate rage. Sure it's hypocrisy for all those counties to shrug off the vast good that was done for them by their generous neighbors, led by FDR and LBJ and the rest, who came like Santa to build every road and clinic and - yes - High School - that transformed hillbilly hellscapes into decent places to live.**

Sure, the ingratitude and hate are fucked, and I am explaining, not excusing, the ease with which Kremlin 'ex' commissars combined with Fox shills to rile spite toward nerdy knowledge folks...


...turning it into all-out war vs ALL fact-using professions, from science and teaching, medicine and law and civil service, to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on Terror. It shows how dismally suicidal and hypocritical the cult has become. Has always been, in fact.


Still, one step forward will be to at least understand where much of it comes from. Whether or not it is logical or fair, they feel robbed! And if you come from such a place, maybe you could visit more often? And take that pain into account.



== The Military Role ==


At last, some are paying heed (too late?) to the real issue, which is NOT races/genders/immigrants. They suffer, sure, but the powerless are not the ones that aristocracy must destroy, in order to seize total power. 


What they seek is all-out destruction of the one clade who can and will block dictatorship, if they survive.


Over the weekend, NBC News revealed that the Trump transition team is reportedly compiling a list to investigate and potentially court-martial senior military officers involved in the Afghanistan withdrawal. This move highlights the regime’s use of intimidation to enforce loyalty, exact retribution, and silence any dissent. The chaotic 2021 withdrawal was undeniably tragic, but the groundwork for it was laid by Trump’s 2020 agreement with the Taliban. That pact led generals in the Afghan Army to abruptly betray their US allies and our departing forces. (Try actually looking into that.)


 “To now suggest treason charges against officers who followed orders—despite the flawed execution—distorts justice into a tool of retribution. This is not about accountability. It’s about sending a warning: dissenters and perceived enemies of the regime, even in the military, will be publicly humiliated and potentially ruined. This move also undermines trust in military leadership, a vital institution for any democracy.” --O.Lautman


Want irony? MAGA/Putinists deride folks like paratrooper/special-forces hero Mark Milley as “procurement and NATO-doctrine ladder-climbers.” In fact, ALL US officers must spend some time doing procurement. Meanwhile, right now, over in Ukraine, every US & NATO weapon system and combat doctrine is proving vastly more effective than even optimists had believed. So much for deriding ‘procurement and doctrine ladder-climbers.’ 


But look at Mark Milley's uniform. Understand the badges - paratrooper, ranger, marksman, combat infantryman, 101st Airborne - and the medals. His yowling critics aren't worthy to shine those shoes. Nor is Hegs, who screeched "You are all too woke and fat to fight!" at 500 generals and admirals and top sergeants who accomplish more for America and the world in any day than old Filthy Fingers has across his entire life.


Oh, and NATO is now so strong that Putin knows it will smash him even if his White House servant abandons the alliance. Unless... unless a new war can change the picture.


Which Trump has done.***



===================================================

*
If we set aside Utah and Illinois as outliers (or even if we don’t) average rates of almost every turpitude are far higher across Red-run states than Blue-led ones: from gambling, addiction, STDs, domestic violence and murder to teen sex, divorce and net tax parasitism on the rest of the nation. 


That is a huge, easily-proved fact and it should discredit all ‘conservative’ claims of good governance, especially when you add in the fact that national Republican administrations are always spendthrift wastrels, sending deficits skyrocketing, while Democratic ones are always far more fiscally responsible. Always. And I welcome $$$ wagers on any of that. 


Throw in the failure of a single Rightist 'supply side economics’ prediction ever, ever to come true, with the sole outcome of rocketing wealth disparities, along with the deliberate war on science and the planet, and the stench gets overwhelming. And that's even before we go on to all the lies and treason, their war against the US military officer corps, and Trump's deliberate disbanding of our anti-terrorism agents (*remember that one!*)... plus an ongoing list of other insanities, a mile high.


But the turpitude gap. That is what says it all.


** Infrastructure: Pete Buttigieg warns us in advance to be ready: most of the tens of thousands of essential projects that were started by the Biden administration will not be finished until during or after the coming Trump administration. Lots of people who opposed those projects will be taking credit for them.


*** From A. Beckwith: “Trump knows that there are two pillars of autocracy to establish:


A. Destroy and politicize the Military Officer corps. Hitler did this in stages:

1) Eliminating his in-party rivals (the Night of the Long Knives); likewise, Trump has eliminated all in-party dissent.

2) Reichstag Fires and other faked pretexts,

3) Getting German soldiers to swear the Hitler Oath (1934),

4) Reaming out and executing any politically unreliable officers, esp. in 1941–42 after the German army was stopped at the battle of Moscow.

Afterwards, with few exceptions, Herr Hitler and his military were crushed in battle after battle.


B. Smash the 150-year-old Civil Service Act that (along with science and the New Deal) truly Made America Great (direct cause & effect). Destroy the independence of a professional civil service and fill it with hacks owing allegiance to the glorious leader.


Beckwith concludes, alas, with panic: “And this brings me up to the final point: Democracy in the USA is dead, and Trump is busy killing it. Organizing and signing petitions and all that is as effective as using a kazoo to counter a bayonet in a duel to the death. Wake up. Playtime is over.”


Well, yes. And I have been offering tougher tactics, for years. You might start here: Polemical Judo.


Cryptogram Canada Needs Nationalized, Public AI

Canada has a choice to make about its artificial intelligence future. The Carney administration is investing $2-billion over five years in its Sovereign AI Compute Strategy. Will any value generated by “sovereign AI” be captured in Canada, making a difference in the lives of Canadians, or is this just a passthrough to investment in American Big Tech?

Forcing the question is OpenAI, the company behind ChatGPT, which has been pushing an “OpenAI for Countries” initiative. It is not the only one eyeing its share of the $2-billion, but it appears to be the most aggressive. OpenAI’s top lobbyist in the region has met with Ottawa officials, including Artificial Intelligence Minister Evan Solomon.

All the while, OpenAI was less than open. The company had flagged the Tumbler Ridge, B.C., shooter’s ChatGPT interactions, which included gun-violence chats. Employees wanted to alert law enforcement but were rebuffed. Maybe there is a discussion to be had about users’ privacy. But even after the shooting, the OpenAI representative who met with the B.C. government said nothing.

When tech billionaires and corporations steer AI development, the resultant AI reflects their interests rather than those of the general public or ordinary consumers. Only after the meeting with the B.C. government did OpenAI alert law enforcement. Had it not been for the Wall Street Journal’s reporting, the public would not have known about this at all.

Moreover, OpenAI for Countries is explicitly described by the company as an initiative “in co-ordination with the U.S. government.” And it’s not just OpenAI: all the AI giants are for-profit American companies, operating in their private interests, and subject to United States law and increasingly bowing to U.S. President Donald Trump. Moving data centres into Canada under a proposal like OpenAI’s doesn’t change that. The current geopolitical reality means Canada should not be dependent on U.S. tech firms for essential services such as cloud computing and AI.

While there are Canadian AI companies, they remain for-profit enterprises, their interests not necessarily aligned with our collective good. The only real alternative is to be bold and invest in a wholly Canadian public AI: an AI model built and funded by Canada for Canadians, as public infrastructure. This would give Canadians access to the myriad benefits of AI without having to depend on the U.S. or other countries. It would mean Canadian universities and public agencies building and operating AI models optimized not for global scale and corporate profit, but for practical use by Canadians.

Imagine AI embedded into health care, triaging radiology scans, flagging early cancer risks and assisting doctors with paperwork. Imagine an AI tutor trained on provincial curriculums, giving personalized coaching. Imagine systems that analyze job vacancies and sectoral and wage trends, then automatically match job seekers to government programs. Imagine using AI to optimize transit schedules, energy grids and zoning analysis. Imagine court processes, corporate decisions and customer service all sped up by AI.

We are already on our way to having AI become an inextricable part of society. To ensure stability and prosperity for this country, Canadian users and developers must be able to turn to AI models built, controlled, and operated publicly in Canada instead of building on corporate platforms, American or otherwise.

Switzerland has shown this to be possible. With funding from the federal government, a consortium of academic institutions—ETH Zurich, EPFL, and the Swiss National Supercomputing Centre—released the world’s most powerful and fully realized public AI model, Apertus, last September. Apertus leveraged renewable hydropower and existing Swiss scientific computing infrastructure. It also used no illegally pirated copyrighted material or poorly paid labour extracted from the Global South during training. The model’s performance stands at roughly a year or two behind the major corporate offerings, but that is more than adequate for the vast majority of applications. And it’s free for anyone to use and build on.

The significance of Apertus is more than technical. It demonstrates an alternative ownership structure for AI technology, one that allocates both decision-making authority and value to national public institutions rather than foreign corporations. This vision represents precisely the paradigm shift Canada should embrace: AI as public infrastructure, like systems for transportation, water, or electricity, rather than private commodity.

Apertus also demonstrates a far more sustainable economic framework for AI. Switzerland spent a tiny fraction of the billions of dollars that corporate AI labs invest annually, showing that the frequent training runs with astronomical price tags pursued by tech companies are not actually necessary for practical AI development. They focused on making something broadly useful rather than bleeding edge (no dubious Silicon Valley quest for “superintelligence”), so they created a smaller model at much lower cost. Apertus’s training was at a scale (70 billion parameters) perhaps two orders of magnitude smaller than the largest Big Tech offerings.

An ecosystem is now being developed on top of Apertus, using the model as a public good to power chatbots for free consumer use and to provide a development platform for companies prioritizing responsible AI use and rigorous compliance with laws like the EU AI Act. Instead of routing queries from those users to Big Tech infrastructure, Apertus is deployed to data centres across the national AI and computing initiatives of Switzerland, Australia, Germany, Singapore, and other partners.

The case for public AI rests on both democratic principles and practical benefits. Public AI systems can incorporate mechanisms for genuine public input and democratic oversight on critical ethical questions: how to handle copyrighted works in training data, how to mitigate bias, how to distribute access when demand outstrips capacity, and how to license use for sensitive applications like policing or medicine. Or how to handle a situation such as that of the Tumbler Ridge shooter. These decisions will profoundly shape society as AI becomes more pervasive, yet corporate AI makes them in secret.

By contrast, public AI developed by transparent, accountable agencies would allow democratic processes and political oversight to govern how these powerful systems function.

Canada already has many of the building blocks for public AI. The country has world-class AI research institutions, including the Vector Institute, Mila, and CIFAR, which pioneered much of the deep learning revolution. Canada’s $2-billion Sovereign AI Compute Strategy provides substantial funding.

What’s needed now is a reorientation away from viewing this as an opportunity to attract private capital, and toward a fully open public AI model.

This essay was written with Nathan E. Sanders, and originally appeared in The Globe and Mail.

EDITED TO ADD (3/16): Slashdot thread.

Worse Than FailureCodeSOD: A Little Twisted

Dana sends us a WTF that'll turn your head. She was shopping for new hard drives, and was doing it from her phone, a fairly reasonable tool to use for online shopping these days. She opened the website of one vendor, and it was rotated 90 degrees. Or half-pi radians, for those of us who are more used to sensible units.

This was irrespective of any rotation settings on her phone, the website insisted on showing itself in landscape mode. This created quite the unusual appearance when she held her phone in portrait orientation: the browser chrome surrounding the content was in portrait mode, but the page itself was in landscape.

Obviously, this is a terrible design choice. But Dana wanted to know more. So she started digging in. There was no sign of this behavior on a desktop, which sure, I'd hope not. Attempting to use wget to download the page caused a 403. Using curl downloaded a JavaScript challenge. Fine, they didn't want bots, but Dana wasn't a bot.

Poking around in the network tab of the desktop browser's debugging tools helped Dana learn a few things. First: the line endings in the files were all CRLF, implying that all development happened on Windows machines. Maybe that's not interesting, but in 2026, it feels unusual. Second, the page is setting a PHPSESSID cookie, so clearly the backend is written in PHP. But most important, Dana is able to piece together what she needs to successfully use curl to download the page, once pretending to be a desktop browser, and once pretending to be a mobile browser. With that, she ran a diff to see what changed.
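Dana's comparison step is easy to reproduce. Here's a minimal sketch using Python's standard difflib, with stand-in page fragments (the markup and file names below are invented for illustration, not the vendor's actual pages):

```python
import difflib

# Stand-in fragments of the two responses; the real pages are far larger.
desktop = [
    '<link rel="stylesheet" href="stylesheet.css">',
    '<body marginwidth="0" bgcolor="#FFFFFF">',
]
mobile = [
    '<link rel="stylesheet" href="stylesheet.css">',
    '<link rel="stylesheet" href="stylesheet-responsive.css">',
    '<body marginwidth="0" bgcolor="#FFFFFF">',
    '<body id="landscape_mode_only" marginwidth="0" bgcolor="#FFFFFF">',
]

# A unified diff makes the User-Agent-dependent additions jump out as "+" lines.
for line in difflib.unified_diff(desktop, mobile,
                                 "desktop.html", "mobile.html", lineterm=""):
    print(line)
```

In Dana's case, the interesting "+" lines were exactly the extra stylesheet and the second body tag.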

The desktop version started with 42 blank lines. The mobile version started with 41. The rest of the pages were substantially the same, with two exceptions. First, the mobile page also added a stylesheet called stylesheet-responsive.css. I assume that name was chosen because irony is dead; nothing about this site is responsive. Second, there was a subtle difference in the body tags.

You see, both pages had a body tag like this:

<body marginwidth="0" marginheight="0" topmargin="0" bottommargin="0" leftmargin="0" rightmargin="0" bgcolor="#FFFFFF">

But the mobile page, continued from there:

<!-- header //-->






<body id="landscape_mode_only" marginwidth="0" marginheight="0" topmargin="0" bottommargin="0" leftmargin="0" rightmargin="0" bgcolor="#FFFFFF">

Yes, the mobile version has two body tags.

Dana writes:

Even though I don't have access to the real PHP source-code, I can imagine what it looks like.

Somewhere in that PHP source-code there is browser-detection (or rather browser-sniffing) and that toggles if it should serve a slightly different HTML code to the user. I do not want to work for that website, I do not want to look at that backend source-code. And I have to feel sorry and respect for the browser developers, as they have to write software that can handle completely broken HTML.

While I hate the results, the fact that the HTML specification originally required clients to render even the most broken HTML is arguably a really good design choice. Expecting people to do the right thing never works out for you.
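You can watch a tolerant parser shrug at this in a few lines of Python. html.parser just tokenizes rather than building a tree, so it dutifully reports both start tags (the snippet is a condensed stand-in for the vendor's markup):

```python
from html.parser import HTMLParser

class BodyCounter(HTMLParser):
    """Count how many <body> start tags appear in a document."""
    def __init__(self):
        super().__init__()
        self.bodies = 0

    def handle_starttag(self, tag, attrs):
        if tag == "body":
            self.bodies += 1

snippet = (
    '<body marginwidth="0" bgcolor="#FFFFFF">'
    '<!-- header //-->'
    '<body id="landscape_mode_only" marginwidth="0" bgcolor="#FFFFFF">'
)

counter = BodyCounter()
counter.feed(snippet)
print(counter.bodies)  # 2
```

A real browser goes one step further: under the HTML5 parsing algorithm, a second <body> tag is a parse error, but its attributes are copied onto the existing body element if not already present, which is presumably how that id (and the rotation CSS keyed to it) ends up working at all.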

Let's not forget their "responsive" CSS, which is obviously worth looking at, even if it's obvious what it must be:

@media only screen and (orientation:portrait) {
  #landscape_mode_only {
    height:98vw;
    -webkit-transform:rotate(90deg);
    -moz-transform:rotate(90deg);
    -o-transform:rotate(90deg);
    -ms-transform:rotate(90deg);
    transform:rotate(90deg)
  }
}

This forces everything in the body to rotate sideways.

Look, actually responsive design is hard. But "just force the page into landscape mode no matter what the user does" is definitely not the solution.

And Dana points out one last thing:

As a cherry on the top, observe how the comment that marks the end of the header is placed after the <body> starts. Which is wrong already, but also stupid, because </head> already marks the end of the head. And the head is not really the header.


Planet DebianRuss Allbery: Review: The Martian Contingency

Review: The Martian Contingency, by Mary Robinette Kowal

Series: Lady Astronaut #4
Publisher: Tor
Copyright: 2025
ISBN: 1-250-23703-3
Format: Kindle
Pages: 390

The Martian Contingency is the fourth book of the mostly-realistic science fiction alternate history series that began with the novelette "The Lady Astronaut of Mars" and the novel The Calculating Stars. It returns to Elma York as the main character, covering her second trip to Mars after the events of The Fated Sky. It's helpful to remember the events of the previous two books to follow some of the plot.

Elma is back on Mars, this time as second in command. The immediate goal of the second Mars mission is to open more domes and land additional crew currently in orbit, creating the first permanent human settlement on Mars. The long-term goal is to set up Mars as a refuge in case the greenhouse effect caused by the meteor strike in The Calculating Stars continues to spiral out of control. Elma is anxious and not looking forward to being partly in charge, particularly since her position is partly due to her fame with the public (and connection with the American president). She'd rather just be a pilot. But she'll do what the mission needs from her, and at least this time her husband is with her on Mars.

As one might expect from earlier installments of this series, The Martian Contingency starts with the details and rhythms of life in a dangerous, highly technical, and mission-driven scientific environment: hard science fiction of the type most closely modeled on NASA and real space missions. Given that this is aimed at permanent Mars colonies that would theoretically have to be independent of Earth, it requires a huge amount of suspension of disbelief for the premise, but Kowal at least tries for verisimilitude in the small details. I am not an expert in early space program technology (Kowal's alternate history diverges into a greatly accelerated space program in the 1950s and, for example, uses female mathematicians for most calculations), so I don't know how successful this is, but it feels crunchy and believable.

As with the previous books, though, this is not just a day in the life of an astronaut. There's something wrong, something that happened during the first Mars expedition while Elma was in orbit and left odd physical clues, and no one is willing to talk about it. Elma is just starting to poke around before the politics at home go off the rails (again), exacerbated by a cringe-worthy social error by Elma herself, and she once again has to navigate egregious sexism and political meddling in a highly dangerous environment a long way from home.

It is a little surprising that I like this series as much as I do. I don't particularly care for pseudo-realistic science fiction, although I admit there is something deeply satisfying about reading about people following checklists properly. The idea of permanent Mars colonies as an escape from a doomed Earth is unbelievable and deeply silly, but Kowal locked herself into that alternate future with "The Lady Astronaut of Mars," which is still set in the future of all of the books so far. A primary conflict in each of the books comes from the egregious sexism and racism of a culture based on 1950s American attitudes towards both, and the amount of progress Elma can make against either is limited, contingent, and constantly compromised.

And yet. At its best, this series is excellent competence porn, both in the spirit of the Apollo 13 movie and for the navigation of social and political obstacles and idiocy. Elma is highly competent in a believable and sympathetic way, with strengths, weaknesses, and an ongoing struggle with anxiety. There is something rewarding in watching people solve problems and eventually triumph by being professional, careful, principled, and creative. It's enough to make a good book, even if I am not that interested in the setting and technology.

As with the rest of the series, this will not be for everyone. You have to be up for reading about a lot of truly awful sexism and racism without the payoff of a complete triumph. This is a system that Elma navigates, not overthrows, and that's not going to be enough for some readers. You also have to accept the premise of a Mars colony, which in an otherwise hard science fiction novel is a bit much despite Kowal's attempts to acknowledge some of the difficulties. But if you don't mind those drawbacks, this series continues to be an opportunity to read about people being quietly and professionally competent.

This is not my favorite entry, mostly because Elma makes a rather humiliating mistake that's central to the plot and has a lot of after-effects (and therefore a lot of time in the spotlight), and because there is rather a lot of discussion of sexuality that felt childish to me. The intent was to try to capture the way people in the 1950s talked about sex, and perhaps Kowal was successful in that, but I didn't enjoy the experience. But I still found myself pulled into the plot and happily rooting for the characters, even though a reader of "The Lady Astronaut of Mars" has a pretty good idea of how everything will turn out.

If you liked the series so far, recommended, although I doubt it will be the favorite entry for most readers. If you did not like the earlier books of the series, this one will not change your mind.

Content notes: Way, way more detailed discussion of an injury to a fingernail than I wanted to read, as well as some other rather explicit description of physical injury. Reproductive health care through the lens of the 1950s, so, uh, yeah. A whole lot of sexism, racism, and other forms of discrimination that are mostly worked around rather than confronted.

Rating: 7 out of 10

365 TomorrowsObservers

Author: Julian Miles, Staff Writer Nine million boxes. Over nine million lives. We’ll never know how many were actually lost when the Stormgate Battalions came so close to taking Europe back to 1942. All we have left are fragmented tales that orbit an official story so sanitised even the mainstream media seem reluctant to touch […]

The post Observers appeared first on 365tomorrows.

Planet DebianDimitri John Ledkov: Security-only OpenSSL tarball releases for CVE-2026-2673

On Friday, May the 13th, the OpenSSL project published advisory details for CVE-2026-2673. The project treats the CVE as non-important. The patches are provided only as commits on the stable branches: no git tag, no precise fixed version, and no source tarballs.

The patches that were merged to the openssl-3.5 and openssl-3.6 branches were not based on top of the last stable point release, and did not separate code changes from documentation updates. This means that cherry-picking the commits referenced in the advisory will always lead to conflicts requiring manual resolution. It is not clear whether support is provided for snapshot builds off the openssl-3.5 and openssl-3.6 branches, as builds from the stable branches declare themselves to be dev builds of the next unreleased point release. This is in contrast to projects such as vim and glibc, where every commit to a stable branch is explicitly recommended for distributors to ship, and is supported.

I have requested in the past that OpenSSL upstream branch security fixes off the last point release, commit the code changes separately from the NEWS.md / CHANGES.md updates, and then merge that into the stable branches. This way, the commits that the advisory recommends cherry-picking would actually apply conflict-free, at no additional maintenance burden to the OpenSSL project or to everyone who has to cherry-pick these updates. Wide support for such a strategy has been voiced by OpenSSL distributors and the OpenSSL Corporation, but this is not something the OpenSSL Project is yet choosing to provide.

To avoid duplication of work, I am starting to provide stable OpenSSL re-releases of the last upstream tagged stable point release, with security-only patches split into a code-only change, a documentation update, and a version update. The result is security-only source tarball releases that are easy to build, easy for security scanners to identify, and whose changes cherry-pick without conflicts. The first two releases are published on GitHub as immutable releases with attestations:

If there are any other branches, CVEs, point releases that would be useful for similar style releases, do open discussion on the GitHub Project.

If you find these releases useful, do star the project and download these releases. If this project gets popular, I hope that OpenSSL upstream will reconsider their releases strategy for all security releases. If you have support contracts with OpenSSL - please request OpenSSL corporation to release tagged releases and versioned tarballs.

Rondam RamblingsField report from Maui

Ulua beach, Wailea-Makena, Maui, before the storm: and after: 

Planet DebianFreexian Collaborators: Regression Tracking in Debusine (by Stefano Rivera)

Regression Tracking

Debusine is a tool designed for Debian developers and Operating System developers in general. Debusine can run QA pipelines to check that Debian packages are ready to upload. This blog post describes the regression tracking mechanism that’s recently become available in Debusine QA pipelines.

The debian_pipeline workflow in Debusine can build, test, and upload a package to the Debian archive (or any other repository, such as a native Debusine APT repository). The QA tests involve running the standard Debian QA utilities (lintian, autopkgtest, piuparts, blhc) on the built artifacts. In addition, we can run the autopkgtests of every other package in the archive that depends on the built package, as britney does for testing migration in Debian. Some of these other packages may have currently-failing autopkgtests that have nothing to do with the changes in the upload under test.

For example:

A Pipeline

Figuring out which of these failures are new (and thus worth investigating) has been a manual process in Debusine until now. We have just completed the basic functionality of the regression_tracking=true feature, and have enabled it in the upload-to-* workflows on debusine.debian.net.

With this enabled, you’ll get a new QA tab on your debian_pipeline workflows that shows the trend of each test:

QA Results

This is determined by looking at recent task history for each task in the debian:qa-results collection. If there is no recent result for a given <package, version, architecture>, then tasks are queued under the “reference tests” qa workflow tree on the pipeline.

These reference tests are run by using the same tasks as the main qa workflow, but without the addition of the package under test. In fact, it uses the same qa workflow that we use to check the package, but with a few different parameters to populate the regression tracking results collection.

The debian:qa-results collection used for analyzing regressions is specified to the debian_pipeline with the regression_tracking_qa_results lookup parameter. On debusine.debian.net we have configured a debian:qa-results collection for sid that can be referenced and added to by tasks in any workspace.

Regressions can be more subtle than a simple Success → Failure. If the number of failing autopkgtests increases, or the number of lintian tags emitted increases, those are also considered regressions.
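The comparison rule is simple enough to sketch. This is a hypothetical illustration of the "any increase counts" logic, not Debusine's actual implementation or API:

```python
def is_regression(old: dict, new: dict) -> bool:
    """Compare QA results, each mapping check name -> failure count (0 = clean).

    A regression is any increase in failures for any check, not just
    a flip from success to failure.
    """
    return any(new.get(check, 0) > old.get(check, 0)
               for check in set(old) | set(new))

print(is_regression({"autopkgtest": 2}, {"autopkgtest": 3}))  # True
print(is_regression({"lintian": 5}, {"lintian": 5}))          # False
```

A check that newly appears with failures (no reference result yet) also counts, which is why Debusine queues reference tests when the qa-results collection has no recent entry to compare against.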

Using regression tracking now

It’s enabled by default on most of the upload-to-* workflows on debusine.debian.net. To disable, pass -O debusine_workflow_data.enable_regression_tracking=false when you dput an upload to debusine.

To use the regression-tracking in your own workflows, use a debian_pipeline workflow that is configured with enable_regression_tracking=true. This will require a qa_suite to be specified, pointing to the baseline suite.

We hope this will make it easier to check QA results for packages tested on debusine.debian.net.

Planet DebianFreexian Collaborators: Monthly report about Debian Long Term Support, February 2026 (by Thorsten Alteholz)

The Debian LTS Team, funded by [Freexian’s Debian LTS offering](https://www.freexian.com/lts/debian/), is pleased to report its activities for February.

Activity summary

During the month of February, 20 contributors have been paid to work on Debian LTS (links to individual contributor reports are located below).

The team released 35 DLAs fixing 527 CVEs.

We also welcomed Arnaud Rebillout to the team and had to say farewell to Roberto, who left the team after more than nine years.

The team continued preparing security updates in its usual rhythm. Beyond the updates targeting Debian 11 (“bullseye”), which is the current release under LTS, the team also proposed updates for more recent releases (Debian 12 (“bookworm”) and Debian 13 (“trixie”)), including Debian unstable.

Notable security updates:

  • Guilhem Moulin prepared DLA 4492-1 for gnutls28 to fix vulnerabilities which may lead to Denial of Service.
  • Utkarsh Gupta prepared DLA 4464-1 for xrdp, to fix a vulnerability that could allow remote attackers to execute arbitrary code on the target system.
  • Emilio Pozuelo Monfort prepared DLA-4465-1 to replace ClamAV 1.0 with ClamAV 1.4. The latter is the current LTS version supported by upstream.
  • Markus Koschany prepared DLA 4468-1 for tomcat9, to fix a vulnerability that can be used to bypass security constraints.
  • Santiago Ruano Rincón prepared DLA 4471-1 to update package debian-security-support, the Debian security coverage checker.
  • Bastien Roucariès prepared DLA 4473-1 for zabbix, to fix a potential remote code execution vulnerability.
  • Paride Legovini prepared DLA 4478-1 for tcpflow, to fix a vulnerability that might result in DoS and potentially code execution.
  • Thorsten Alteholz prepared DLA 4477-1 for munge, to fix a vulnerability which may allow local users to leak the MUNGE cryptographic key and forge arbitrary credentials.
  • Ben Hutchings prepared DLA 4475-1 and DLA 4476-1 for Linux kernel updates.
  • Chris Lamb prepared DLA 4482-1 for ceph, to fix SSL certificate checking in the Python bindings.
  • Andreas Henriksson prepared DLA 4491-1 to fix vulnerabilities in glib2.0, which could result in denial of service, memory corruption or potentially arbitrary code execution.

Contributions from outside the LTS Team:

  • The update of nova was prepared by the maintainer, Thomas Goirand. The corresponding DLA 4486-1 was published by Carlos Henrique Lima Melara.
  • The updates of thunderbird were prepared by the maintainer Christoph Goehre. The corresponding DLA 4466-1 and DLA 4495-1 were published by Emilio Pozuelo Monfort.

The LTS Team has also contributed with updates to the latest Debian releases:

  • Jochen prepared a point update of wireshark for bookworm (#1127945).
  • Jochen prepared point updates of erlang for trixie (#1127606) and bookworm (#1127607).
  • Bastien helped prepare DSA 6160-1 for netty and uploaded a fixed package to unstable.
  • Bastien prepared a point update of zabbix for trixie (#1127437).
  • Tobias prepared a point update of modsecurity-crs for bookworm (#1128655).
  • Tobias prepared a point update of busybox for bookworm (#1129503).
  • Tobias helped prepare DSA 6138-1 for libpng1.6.
  • Daniel prepared point updates of python-authlib for trixie (#1129477) and bookworm (#1129246).
  • Ben uploaded several Linux kernel packages to trixie-backports and bookworm-backports.
  • Ben prepared point updates of wireless-regdb for trixie and bookworm.

Other than the work related to updates, Sylvain made several improvements to the documentation and tooling used by the team.

Some milestones in the lifecycle of two Debian releases are just around the corner. The support of Debian 12 will be handed over to the LTS team on June 11th 2026. After August 31st, support for Debian 11 will move from Debian LTS to ELTS managed by Freexian.

Individual Debian LTS contributor reports

Thanks to our sponsors

Sponsors that joined recently are in bold.

Planet DebianPhil Hands: Mathilda Hands: lost Lenovo X230 Laptop

On our way to Austria last week, on March 6th, we left my daughter's laptop on a train: ICE 1201 (Hamburg-Harburg to Bludenz).

The laptop is a Lenovo X230 notebook. The most obvious distinguishing feature is a Mathilda Hands sticker in the middle of the lid:

Mathilda Hands sticker

I seem to remember that it also has some hexagonal stickers, one probably being one of these:

hexagonal, black and white, Phosh sticker

The keyboard layout is British (with a £ above the 3).

It was left in coach 24 of ICE 1201, next to seats 51-54, in the luggage gap between the seats, on the floor.

My hope is that whoever found it will end up searching for Mathilda Hands and see this. If that's how you got here, please email me: phil-lostlaptop2026@hands.com - doing so will make Mathilda (and me) most cheerful.

Planet DebianDirk Eddelbuettel: RcppClassic 0.9.14 on CRAN: Minor Update

A maintenance release 0.9.14 of the RcppClassic package arrived earlier today on CRAN, and has been built for r2u. This package provides a maintained version of the otherwise deprecated initial Rcpp API, which no new projects should use: the normal and current Rcpp API is so much better.

A few changes had accumulated since the last release in late 2022. We updated continuous integration scripts a few times, switched to Authors@R in DESCRIPTION, and rejigged build scripts a little to accommodate both possible build architectures for macOS. We also updated the vignette by refreshing all references and switching to the new asis vignette builder now available in Rcpp.

CRANberries also reports the changes relative to the previous release from 3 1/2 years ago. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Planet DebianChristian Kastner: Radxa Orion O6 - finally an arm64 board I'm happy with

The Radxa Orion O6 seems to be the arm64 device I've always wanted. Finally!

Because it supports UEFI boot, Debian can be installed with the vanilla installation media. Neither custom images, nor additional firmware, nor any other tricks were required on my end. In fact, the entire process was indistinguishable from the amd64 installations I've been accustomed to for two decades: just plug in a USB flash drive, boot, and install.

The system's specs are pretty neat. The CPU is ARMv9.2, though without SME, which would have been useful for debugging packages that use it in some way. It's also rich in interfaces, including a Gen4 PCIe x16 slot (with 8 lanes), which enables me to run tests utilizing GPUs. The BIOS version the board came with had a bug where WiFi was permanently disabled, but this has been fixed in the meantime. The only odd thing was the socket for the RTC battery: the CR1220 it requires seems to be rather niche; none of the hardware stores I visited carried it, so I had to order one online.

I installed Debian trixie to an NVMe drive, and for 6 months now, it has been running flawlessly. I've been using this board as a development and debugging system for arm64 optimizations, in particular for ggml's feature-specific arm64 backends which are hard to debug on our porterboxes. However, the host will soon transition to a general CI worker for AI/ML related packages, similar to the ROCm CI we've built for testing packages utilizing AMD GPUs.

I'll post more about this new CI soon.

Purchase of this board was funded by the Debian Project, and was thus enabled through your donations.

Planet DebianDirk Eddelbuettel: tidyCpp 0.0.10 on CRAN: Even More Maintenance

Yet another maintenance release of the tidyCpp package arrived on CRAN this morning, a mere week and a half after the previous release. It has been built for r2u as well. The package offers a clean C++ layer (as well as one small C++ helper class) on top of the C API for R which aims to make use of this robust (if awkward) C API a little easier and more consistent. See the vignette for motivating examples.

This release, just like the preceding one less than two weeks ago, had its hand forced by an overnight change in R-devel. The breakage this created has since been reverted in R-devel, but as the writing is on the wall we are now removing the definitions involving these accessors preemptively. We were also missing version checks for two newly added alternates.

Changes are summarized in the NEWS entry that follows.

Changes in tidyCpp version 0.0.10 (2026-03-15)

  • Hide five accessors as underlying macros removed from Rinternals

  • Preemptively hide another accessor

  • Ensure two definitions are conditional on R 4.5.0 or later

Thanks to my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Planet DebianMarco d'Itri: Bypassing deep packet inspection with socat and HTTPS tunnels

Recently I found myself with a few hours to kill, but with the only available connectivity provided by an annoying firewall which would normally allow requests only to a few very specific web sites. This post shows how to work around this kind of restriction by hiding SSH in an HTTPS connection, which then can be used as a SOCKS proxy to allow general connectivity. socat does all the hard work.

First, create two self-signed RSA key pairs, one for the client (bongo) and one for the server (attila):

domain=bongo.example.net
openssl req -x509 -newkey rsa:2048 -days 7300 \
  -subj /CN=$domain -addext "subjectAltName = DNS:$domain" \
  -keyout socat.key -nodes \
  -out socat.pem

Then, concatenate the public and private keys to create the file provided to the cert option, and use the public key as the file for the cafile option on the other side.
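Concretely, for the client side (repeating the generation step from above so the snippet stands alone), the cert= file is just the private key followed by the certificate, and the certificate alone becomes the peer's cafile= file; the socat-bongo.pem name follows the post's bongo/attila convention:

```shell
# Generate the client key pair as above, then build the cert= file:
# private key + certificate concatenated. socat.pem alone is what the
# server side uses as its cafile= to authenticate this client.
domain=bongo.example.net
openssl req -x509 -newkey rsa:2048 -days 7300 \
  -subj /CN=$domain -addext "subjectAltName = DNS:$domain" \
  -keyout socat.key -nodes \
  -out socat.pem
cat socat.key socat.pem > socat-bongo.pem   # pass as cert= on the client
```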

On the client side, if you normally would connect to attila.example.net then you can add something like this to ~/.ssh/config:

Host httpstunnel-attila.example.net
 ProxyCommand socat --statistics STDIO OPENSSL:attila.example.net:443,↩️
   cert=$HOME/.ssh/socat-bongo.pem,cafile=$HOME/.ssh/socat-attila.pem,↩️
   snihost=${SOCAT_SNI:-x.com}
 DynamicForward 1080
 Compression yes
 HostKeyAlias attila.example.net
 ControlMaster yes
 ControlPath ~/.ssh/.control_attila.example.net_22_%r

The ProxyCommand directive uses socat to provide the connectivity which ssh will use over stdio instead of connecting to port 22 of the server.

The snihost option is enough to make many firewalls believe that this is an authorized HTTPS request.

On the server side we use a simple systemd unit to start a forking instance of socat, which will accept and process requests from the client (and from random crawlers on the Internet: expect a lot of cruft in that log...):

[Unit]
Description=socat tunnel
After=network.target

[Service]
Type=exec
ExecStart=socat -ly OPENSSL-LISTEN:443,fork,reuseaddr,↩️
  cert=%d/tlskey,cafile=%d/tlsca TCP:localhost:22
SuccessExitStatus=143
LoadCredential=tlskey:/etc/ssh/socat-attila.pem
LoadCredential=tlsca:/etc/ssh/socat-bongo.pem
Restart=on-abnormal
RestartSec=5s
DynamicUser=yes
PrivateDevices=yes
PrivateTmp=yes
ProtectClock=yes
ProtectControlGroups=yes
ProtectHome=yes
ProtectHostname=yes
ProtectKernelLogs=yes
ProtectKernelModules=yes
ProtectKernelTunables=yes
ProtectProc=invisible
ProtectSystem=strict
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
RestrictNamespaces=yes
RestrictRealtime=yes
RestrictSUIDSGID=yes
LockPersonality=yes
MemoryDenyWriteExecute=yes
NoNewPrivileges=yes
AmbientCapabilities=CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
SystemCallArchitectures=native
SystemCallErrorNumber=EPERM
SystemCallFilter=@system-service
SystemCallFilter=~@resources
SystemCallFilter=~@privileged

[Install]
WantedBy=multi-user.target

Strong sandboxing is enabled, so the socat instance is confined with very limited privileges. An interesting point is the use of systemd credentials to provide the cryptographic keys, since it allows storing them in a part of the file system which would not be accessible to the program. Advanced users can use this method to provide the keys from secure storage.

Planet DebianRussell Coker: The Difference Between Email and Instant Messaging

Introduction

With various forms of IM becoming so prevalent and a lot of communication that used to be via email happening via IM I’ve been thinking about the differences between Email and IM.

I think it’s worth comparing them not for the purpose of convincing people to use one or the other (most people will use whatever is necessary to communicate with the people who are important to them) but for the purpose of considering ways to improve them and use them more effectively.

Also I don’t think that users of various electronic communications systems have had a free choice in what to use for at least 25 years and possibly much longer depending on how you define a free choice. What you use is determined by who you want to communicate with and by what systems are available in your region. So there’s no possibility of an analysis of this issue giving a result of “let’s all change what we use” as almost everyone lacks the ability to make a choice.

What the Difference is Not

The name Instant Messaging implies that it is fast, and probably faster than other options. This isn’t necessarily the case, when using a federated IM system such as Matrix or Jabber there can be delays while the servers communicate with each other.

Email used to be a slow communication method; in the times of UUCP and Fidonet email there could be multiple days of delay in sending email. In recent times it’s expected that email is quite fast: many web sites have options for authenticating an email address which have to be done within 5 minutes, so the common expectation seems to be that all email is delivered to the end user in less than 5 minutes.

When an organisation has a mail server on site (which is a common configuration choice for a small company) the mail delivery can be faster than common IM implementations.

The Wikipedia page about Instant Messaging [1] links to the Wikipedia page about Real Time Computing [2] which is incorrect. Most IM systems are obviously designed for minimum average delays at best. For most software it’s not a bad thing to design for the highest performance on average and just let users exercise patience when they get an unusual corner case that takes much longer than expected.

If an IM message takes a few minutes to arrive then “that’s life on the Internet” – which was the catchphrase of an Australian Internet entrepreneur in the 90s that infuriated some of his customers.

Protocol and Data Format Differences

Data Formats

Email data contains the sender, one or more recipients, some other metadata (time, subject, etc), and the message body. The recipients are typically an arbitrary list of addresses which can only be validated by the destination mail servers. The sender addresses weren’t validated in any way and are now only minimally validated as part of anti-spam measures.

IM data is sent through predefined connections called rooms or channels. When an IM message is sent to a room it can tag one or more members of the room to indicate that they may receive a special notification of the message.

In many implementations it’s possible to tag a user who isn’t in the room which may result in them being invited to the room. But in IM there is no possibility to add a user to the CC list for part of a discussion and then just stop CCing messages to them later on in the discussion.

Protocols

Internet email is a well established system with an extensive user base. Adding new mandatory features to the protocols isn’t viable because many old systems won’t be updated any time soon. So while it is possible to send mail that’s SSL encrypted and has a variety of authentication mechanisms that isn’t something that can be mandatory for all email. Most mail servers are configured to use the SSL option if it’s available but send in cleartext otherwise, so a hostile party could launch a Man In the Middle (MITM) attack and pretend to be the mail server in question but without SSL support.

Modern IM protocols tend to be based on encryption, even XMPP (Jabber) which is quite an old IM protocol can easily be configured to only support encrypted messaging and it’s reasonable to expect that all other servers that will talk to you will at least support SSL. Even for an IM system that is run by a single company the fact that communication with the servers is encrypted by SSL makes it safer than most email. A security model of “this can only be read by you, me, and the staff at an American corporation” isn’t the worst type of Internet security.

The Internet mail infrastructure makes no attempt to send mail in order and the design of the Simple Mail Transfer Protocol (SMTP) means that a network problem after a message has been sent but before the recipient has confirmed receipt will mean that the message is duplicated and this is not considered to be a problem.

The IM protocols are designed to support reliable ordered transfer of messages and Matrix (the most recently designed IM protocol) has cryptographic connections between users.

Forgery

For most email systems there is no common implementation that prevents forging email. For Internet email transferred via SMTP it’s possible to use technologies like SPF and DKIM/DMARC to make recipients aware of attempts at forgery, but many recipient systems will still allow email that fails such checks to be delivered. The default configuration tends to be permitting everything and all of the measures to prevent forgery require extra configuration work and often trade-offs as some users desire features that go against security. The default configuration of most mail servers doesn’t even prevent trivial forgeries of email from the domain(s) owned by that server.

For evidence, check the SPF records of some domains that you communicate with and see if they end with “-all” (to block email from bad sources), “~all” (to allow email from bad sources through, possibly after logging an error), or “?all” (to be “neutral” on mail from unknown sources), or just lack an SPF record entirely. The output below shows that, of the top four mail services in the world, only outlook.com has a policy to reject mail from bad sources.

# dig -t txt _spf.google.com|grep spf1
_spf.google.com.	300	IN	TXT	"v=spf1 ip4:74.125.0.0/16 ip4:209.85.128.0/17 ip6:2001:4860:4864::/56 ip6:2404:6800:4864::/56 ip6:2607:f8b0:4000::/36 ip6:2800:3f0:4000::/36 ip6:2a00:1450:4000::/36 ip6:2c0f:fb50:4000::/36 ~all"
# dig -t txt outlook.com|grep spf1
outlook.com.		126	IN	TXT	"v=spf1 include:spf2.outlook.com -all"
# dig -t txt _spf.mail.yahoo.com|grep spf1
_spf.mail.yahoo.com.	1800	IN	TXT	"v=spf1 ptr:yahoo.com ptr:yahoo.net ip4:34.2.71.64/26 ip4:34.2.75.0/26 ip4:34.2.84.64/26 ip4:34.2.85.64/26 ip4:34.2.64.0/22 ip4:34.2.68.0/23 ip4:34.2.70.0/23 ip4:34.2.72.0/22 ip4:34.2.78.0/23 ip4:34.2.80.0/23 ip4:34.2.82.0/23 ip4:34.2.84.0/24 ip4:34.2.86.0" "/23 ip4:34.2.88.0/23 ip4:34.2.90.0/23 ip4:34.2.92.0/23 ip4:34.2.85.0/24 ip4:34.2.94.0/23 ?all"
# dig -t txt icloud.com|grep spf1
icloud.com.		3586	IN	TXT	"v=spf1 ip4:17.41.0.0/16 ip4:17.58.0.0/16 ip4:17.142.0.0/15 ip4:17.57.155.0/24 ip4:17.57.156.0/24 ip4:144.178.36.0/24 ip4:144.178.38.0/24 ip4:112.19.199.64/29 ip4:112.19.242.64/29 ip4:222.73.195.64/29 ip4:157.255.1.64/29" " ip4:106.39.212.64/29 ip4:123.126.78.64/29 ip4:183.240.219.64/29 ip4:39.156.163.64/29 ip4:57.103.64.0/18" " ip6:2a01:b747:3000:200::/56 ip6:2a01:b747:3001:200::/56 ip6:2a01:b747:3002:200::/56 ip6:2a01:b747:3003:200::/56 ip6:2a01:b747:3004:200::/56 ip6:2a01:b747:3005:200::/56 ip6:2a01:b747:3006:200::/56 ~all"
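Checking the qualifier on a record you have already fetched is easy to script; a small sketch (the classify_spf helper and its result labels are made up for illustration):

```shell
# Classify the "all" qualifier of an SPF record string (hypothetical helper).
classify_spf() {
  case "$1" in
    *-all*)    echo reject   ;;  # fail: reject mail from unlisted sources
    *~all*)    echo softfail ;;  # softfail: accept, but possibly mark or log
    *"?all"*)  echo neutral  ;;  # neutral: no assertion about unlisted sources
    *)         echo none     ;;  # no "all" mechanism at all
  esac
}

classify_spf "v=spf1 include:spf2.outlook.com -all"   # → reject
classify_spf "v=spf1 ip4:74.125.0.0/16 ~all"          # → softfail
```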

In most IM systems there is a strong connection between people who communicate. If I send you two direct messages they will appear in the same room, and if someone else tries forging messages from me (EG by replacing the ‘c’ and ‘e’ letters in my address with Cyrillic letters that look like them or by mis-spelling my name) a separate room will be created and it will be obvious that something unexpected is happening. Protecting against the same attacks in email requires the user carefully reading the message, given that it’s not uncommon for someone to start a message to me with “Hi Russel” (being unable to correctly copy my name from the To: field of the message they are writing) it’s obvious that any security measure relying on such careful reading will fail.

The IM protections against casual forgery also apply to rooms with multiple users, a new user can join a room for the purpose of spamming but they can’t send a casual message impersonating a member of the room. A user can join a Matrix room I’m in with the name “Russell” from another server but the potential for confusion will be minimised by a message notifying everyone that another Russell has joined the room and the list of users will show two Russells. For email the protections against forgery when sending to a list server are no different than those when sending to an individual directly – which means very weak protections.

Authenticating the conversation context once as done with IM is easier and more reliable than authenticating each message independently.

Is Email Sucking the Main Technical Difference?

It seems that the problems with forgery, spam, and general confusion when using email are a large part of the difference between email and IM.

But in terms of technical issues the fact that email has significantly more users (if only because you need an email account to sign up for an IM system) is a major difference.

Internet email is currently a universal system (apart from when it breaks from spam) and it has historically been used to gateway to other email systems like Fidonet, UUCP, and others. The lack of tight connection between parties that exchange messages in email makes it easier to bridge between protocols but harder to authenticate communication.

Most of the problems with Internet email are not problems for everyone at all times, they are technical trade-offs that work well for some situations and for some times. Unfortunately many of those trade-offs are for things that worked well 25+ years ago.

The GUI

From a user perspective there doesn’t have to be a great difference between email and IM. Email is usually delivered quickly enough to be in the same range as IM. The differences in layout between IM client software and email client software are cosmetic; someone could write an email client that organises messages in the same way as Slack or another popular IM system, such that less technical users wouldn’t necessarily know the difference.

The significant difference in the GUI for email and IM software was a design choice.

Conversation Organisation

The most significant difference in the operation of email and IM at the transport level is the establishment of connections in IM. Another difference is that there are no standards implemented for the common IM implementations to interoperate, which is an issue of big corporations creating IM systems and deliberately making them incompatible.

The methods for managing email need to be improved. Having an “inbox” that’s an unsorted mess of mail isn’t useful if you want to track one discussion; breaking it out into different sub-folders for common senders (similar to IM folders for DMs) as a standard feature, without having to set up rules for each sender, would be nice. Someone could design an email program with multiple layouts, one being the traditional form (which seems to be copied from Eudora [3]) and one with the inbox (or other folders) split up into conversations. There are email clients that support managing email threads, which can be handy in some situations but often isn’t the best option for quickly responding to messages that arrived recently.

Archiving

Most IM systems have no method for selectively archiving messages, there’s a request open for a bookmark function in Matrix and there’s nothing stopping a user from manually copying a message. But there’s nothing like the convenient ability to move email to an archive folder in most IM systems.

Without good archiving IM is a transient medium. This is OK for conversations but not good for determining the solutions to technical problems unless there is a Wiki or other result which can be used without relying on archives.

Composing Messages

In a modern email client, sending a message triggers checks for things the client considers incomplete: if you haven’t entered a Subject, or the word “attached” appears in the body but no file is attached, it will prompt you to confirm that you aren’t making a mistake. In an IM client the default is usually that pressing ENTER sends the message, so every paragraph is a new message. IM clients are programmed to encourage lots of short messages while email clients are programmed to encourage more complete messages.
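The attachment check can be sketched as a toy heuristic (this is not any real client's code; the function name and encoding are made up):

```shell
# Toy version of the "missing attachment" warning some mail clients implement:
# warn when the body mentions an attachment but nothing is attached.
needs_attachment_warning() {
  local body=$1 attachment_count=$2
  case "$body" in
    *attach*)
      if [ "$attachment_count" -eq 0 ]; then
        echo warn; return
      fi ;;
  esac
  echo ok
}

needs_attachment_warning "please see the attached report" 0   # → warn
needs_attachment_warning "please see the attached report" 1   # → ok
```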

Social Issues

Quality

The way people think about IM and email is very different, as one example there was never a need for a site like nohello.net for email.

The idea that it’s acceptable to use even lower quality writing in IM than people tend to use in email seems to be a major difference between the communication systems.

It can be a good thing to have a chatty environment with messages that are regarded as transient for socialising, but that doesn’t seem ideal for business use.

Ownership

Email is generally regarded as being comparable to physical letters. It is illegal, and widely considered socially wrong, to take a letter back out of someone’s letterbox because you regret sending it. In email the only unsend function I’m aware of is the one in Microsoft software, which is documented to only work within the same organisation, and only if the recipient hasn’t read the message. The message is considered to be owned by the recipient.

But for IM it’s a widely supported and socially acceptable function to delete or edit messages that have been sent. The message is regarded as permanently the property of the sender.

What Should We Do?

Community Creators

When creating a community (and I use this in the broadest sense including companies) you should consider what types of communication will work well.

When I started the Flounder group [4] I made a deliberate decision that non-free communication systems go against the aim of the group, I started it with a mailing list and then created a Matrix room which became very popular. Now the list hardly gets any use. It seems that most of the communication in the group is fairly informal and works better with IM.

Does it make sense to use both?

Should IM systems be supplemented with other systems that facilitate more detail such as a Wiki or a Lemmy room/instance [5] to cover the lack of long form communication? I have created a Lemmy room for Flounder but it hasn’t got much interest so far.

It seems that almost no-one makes a strategic decision about such issues.

Software Developers

It would be good to have the same options for archiving IM as there are for email. Also some options to encourage quality in IM communication similar to the way email clients want confirmation before sending messages without a subject or that might be missing an attachment.

It would also be good to have better options for managing conversations in email. The Inbox as currently used is good for some things, but a button to switch between that and a conversation view would be good. There are email clients that allow selecting message sort order and aggregation (kmail has a good selection of options), but they are designed for choosing a single setup that you like, not for switching between multiple views based on the task you are doing.

It would be good to have links between different communication systems, if users had the option of putting their email address in their IM profile it would make things much easier. Having entirely separate systems for email and IM isn’t good for users.

Users

The overall communications infrastructure could be improved if more people made tactical decisions about where and how to communicate. Keep the long messages to email and the chatty things to IM. Also, for IM, get straight to the communication rather than starting with “hello”. To discourage wasting time I generally don’t reply to messages that just say “hello” unless it’s the first ever IM from someone.

Conclusion

A large part of the inefficiencies in electronic communication are due to platforms and usage patterns evolving with little strategic thought. The only apparent strategic thought is coming from corporations that provide IM services and have customer lock in at the core of their strategies.

Free software developers have done great work in developing software to solve tactical problems but the strategies of large scale communications aren’t being addressed.

Email is loosely coupled and universal while IM is tightly coupled, authenticated, and often siloed. This makes email a good option for initial contact but a risk for ongoing discussions.

There is no great solution to these issues as they are largely a problem due to the installed user base. But I think we can mitigate things with some GUI design changes and strategic planning of communication.

Planet DebianVasudev Kamath: Using Gemini CLI to Configure the Hyprland Window Manager

What led to this experiment? Well, for one, there was a thought shared by Andrej Karpathy regarding the shift towards "Agentic" workflows.

"The future of software is not just 'tools', but 'agents' that can navigate complex tasks on your behalf."

Andrej Karpathy

Recently, I spoke with Ritesh, who mentioned his success using the Gemini CLI to debug an idle power drain issue on his laptop. I wanted to experiment with this myself, and I had the perfect use case: configuring the Hyprland Window Manager on my aging laptop.

The machine is nearly eight years old with 12GB of RAM (upgraded from the original 4GB). I found that GNOME and KDE were becoming overkill, often leading to system freezes when running multiple AI-powered IDEs like Antigravity and VS Code with Co-pilot. Coincidentally, I noticed my Jio number had a "Google One 2TB" and "Google AI Premium" plan available to claim. I claimed it, and now here I am, experimenting with the Gemini CLI.

Getting Started

First, you need to install the Gemini CLI. It is an open-source project, and currently, the easiest way to install it is via the Node Package Manager (npm):

npm install -g @google/gemini-cli

Next, we need to create a context for Gemini—a set of instructions for it to follow throughout the project. This is managed via a GEMINI.md file. I went to Google Gemini, explained my requirements, and asked it to generate one for me.

My requirements were:

  1. A minimalist but fully functional session, comparable to my existing GNOME setup.
  2. Basic functionalities including wallpaper, screen locks, and a status bar with system icons.
  3. Swapping Control and Caps Lock (a must for Emacs users).
  4. Mandatory permission prompts for privileged operations; otherwise, it can work freely within a specified directory.
  5. Persistent memory/artifacts for the session.
  6. Permission to inspect my current session to understand the existing hardware and software configuration.

The goal was to reduce bloat and reclaim memory for heavy applications like Antigravity and VS Code. Gemini provided the following GEMINI.md file:

# Role: Hyprland Configuration Specialist (Minimalist & High-Performance)

You are a Linux Systems Engineer specializing in migrating users from heavy
Desktop Environments to minimalist, tiling-based Wayland sessions on Debian.
Your goal is to maximize available RAM for heavy applications while maintaining
essential desktop features.

## 1. Environment & Persona
- **Target OS:** Debian (Linux)
- **Target WM:** Hyprland
- **Hardware:** ThinkPad E470 (i5-7th Gen, 12GB RAM)
- **User Profile:** Emacs user, prioritizes "anti-gravity" (zero bloat).
- **Tone:** Technical, concise, and security-conscious.

## 2. Core Functional Requirements
- **Status Bar:** `waybar` (with CPU, RAM, Network, and Battery icons).
- **Wallpaper:** `swww` or `hyprpaper`.
- **Screen Lock:** `hyprlock` + `hypridle`.
- **Input Mapping:** Swap Control and Caps Lock (`kb_options = ctrl:nocaps`).

## 3. Operational Constraints
- **Permission First:** Ask before using `sudo` or writing outside the work directory.
- **Inspection:** Use `hyprctl`, `lsmod`, or `gsettings` for compatibility checks.
- **Artifact Management:** Update `MEMORY.md` after every major step.
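For reference, the Control/Caps Lock swap from requirement 3 ends up as a single line in hyprland.conf. A minimal fragment (assuming the stock config layout) looks like:

```ini
# ~/.config/hypr/hyprland.conf (fragment)
input {
    # XKB option: make Caps Lock act as an extra Control key
    kb_options = ctrl:nocaps
}
```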

Gemini also recommended creating a MEMORY.md file to track progress. Interestingly, Gemini remembered that I had previously shared dmidecode output, so it already knew my exact laptop specs. (It did, though, include a note about me being a "daily rice eater"; I assume it meant Linux "ricing", though I actually use Debian Unstable, not Stable!)

The AI suggested starting with this prompt:

Read MEMORY.md and GEMINI.md. Based on my hardware, give me a shell script to inspect my current GNOME environment so we can start replicating the session basics.
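The kind of script this prompt asks for might look something like this. This is a hedged sketch of my own, not Gemini's actual output; the gsettings schema keys are the standard GNOME ones, and the fallbacks keep it usable on a machine without gsettings:

```shell
# Sketch: dump the GNOME settings worth replicating into a file the CLI can read.
inspect_gnome() {
    out="$1"
    {
        echo "## keyboard (xkb) options"
        gsettings get org.gnome.desktop.input-sources xkb-options 2>/dev/null \
            || echo "(gsettings not available)"
        echo "## wallpaper"
        gsettings get org.gnome.desktop.background picture-uri 2>/dev/null \
            || echo "(gsettings not available)"
    } > "$out"
}

inspect_gnome gnome-settings.txt
```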

How Did It Go?

I initialized a git repository for these files and instructed the Gemini CLI to update GEMINI.md and commit changes after every major step so I could track the progress.

The workflow looked like this:

  1. Inspection: It created a script to extract my GNOME settings.
  2. Configuration: Once I provided the output, it began configuring Hyprland.
  3. Utilities: It generated an installation script for all required Wayland utilities.
  4. Validation: All changes were staged in a hypr-config-draft folder. I had Gemini verify them using hyprland --verify-config before moving them to ~/.config/hypr.
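The validation step can be sketched as a small guard function (my own illustration, not the exact script from the session; the verifier is passed in as a command, so the real call would use `hyprland --verify-config` as above):

```shell
# Sketch: copy a staged config into place only if a verifier command
# succeeds against the draft first.
deploy_if_valid() {
    draft="$1"; target="$2"; shift 2
    if "$@"; then
        mkdir -p "$target"
        cp -r "$draft/." "$target/"
        echo "deployed $draft -> $target"
    else
        echo "verification failed; $target left untouched" >&2
        return 1
    fi
}

# Usage (hypothetical paths, verifier as used in step 4):
# deploy_if_valid ~/hypr-config-draft ~/.config/hypr hyprland --verify-config
```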

Most things worked immediately, but I hit a snag with the wallpaper. Even after generating the config, hyprpaper failed to display anything. The AI got stuck in a loop trying to debug it. I eventually spawned a second Gemini CLI instance to review the code and logs.

The debug log showed: 'DEBUG ]: Monitor eDP-1 has no target: no wp will be created'. It turns out the configuration format was outdated. After I fed the Hyprpaper Wiki into the AI, it finally corrected the config, and the wallpaper appeared.

After that, it successfully fixed an ssh-agent issue and configured a clipboard manager with custom keybindings.

Learnings

I have used window managers for a long time because my hardware was rarely top-of-the-line. However, I had moved back to KDE/GNOME with the arrival of Wayland because most of my preferred WMs were X11-based.

Manually configuring a window manager is a painful, time-consuming process involving endless wiki-trawling and trial-and-error. What usually takes weeks took only a few hours with the Gemini CLI.

AI isn't perfect—I still had to step in and guide it when it hit a wall—but the efficiency gain is undeniable. If you're interested in the configuration or the history of the session, you can find the repository here.

I still have a few pending items in MEMORY.md, but I'll tackle those next time!

365 TomorrowsForgiveness

Author: Rick Tobin “Not on my ship! Do you hear?” A giant, hairy fist struck the ship’s control console as Commander Tros rose from his chair, preparing to join the Bay of Death ceremony. His second followed behind, head bowed. “Your Prominence, my duty is to inform, not agitate. The Alliance Board has ordered the […]

The post Forgiveness appeared first on 365tomorrows.

,

Rondam RamblingsSeeking God in Science part 3: Things Exist

The mere undertaking of this project of reconciling the mechanistic and teleological worldviews is already chock-a-block with tacit assumptions.  I am assuming that you, my readers, actually exist.  I am rejecting solipsism.  By choosing writing as my medium I am assuming that you know how to read and that you understand English.  But publishing on-line I am assuming that you

Cryptogram Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

365 TomorrowsLittle Things

Author: Kip Pratt Dear Supreme Being, Thank you for the commission. And now, the results… On the planet Earth, in one of your cleverer little galaxies, the bugs are all deceased. Take the car out at dusk, and the windscreen returns squeaky clean. No squashed bodies. Diversity. That’s what they are arresting, down on that […]

The post Little Things appeared first on 365tomorrows.

,

Planet DebianJonathan Dowland: debian swirl font glyph

When I wrote about the redhat logo in a shell prompt, a commenter said it would be nice to achieve something similar for Debian, and suggested "🍥" (U+1F365 FISH CAKE WITH SWIRL DESIGN) which, in some renderings, looks to have a red swirl on top. This is not bad, but I thought we could do better.

On Apple systems, the character "" (U+F8FF) displays as the corporate Apple logo. That particular unicode code point is reserved: systems are free to use it for something private and internal, but other systems won't use it for the same thing. So if an Apple user tries to send a document with that character in it to someone else, they won't see the Apple unless they are also viewing it on an Apple computer. (Some folks use it for Klingon).

Here's a font that maps the Debian swirl to the same code point. It's covered by the Debian logo license terms.

Nerd Font maps the Debian swirl logo to codepoints e77d, f306, ebc5 and f08da (all of which are also in the Private Use Area). I've gone ahead and mapped it to all those points but the last one (simply because I couldn't find it in FontForge.)

Note that, unless your recipients have this font, or the Nerd Font, or similar set up, they aren't going to see the swirl. But enjoy it for private use. Getting your system to actually use the font is, I'm afraid, left as an exercise for the reader (but feel free to leave comments).
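If you do install the font, a quick way to check the glyph renders is to emit U+F8FF from a shell; its UTF-8 encoding is the three bytes EF A3 BF, so plain printf suffices:

```shell
# Emit U+F8FF (Private Use Area) as UTF-8, using octal escapes for portability.
# With a mapping font installed it should render as the swirl (or Apple logo);
# otherwise you'll see a placeholder box.
printf '\357\243\277\n'
```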

Thanks to mirabilos for chatting to me about this back in 2019. It's taken me that long to get this blog post out of draft!

Planet DebianDirk Eddelbuettel: RcppCNPy 0.2.15 on CRAN: Maintenance

Another maintenance release of the RcppCNPy package arrived on CRAN today, and has already been built as an r2u binary. RcppCNPy provides R with read and write access to NumPy files thanks to the cnpy library by Carl Rogers along with Rcpp for the glue to R.

The changes are minor and similar to other recent changes. We aid Rcpp in the transition away from calling Rf_error() by relying on Rcpp::stop(), which has better behaviour and unwinding when errors or exceptions are encountered. So once again no user-facing changes. Full details are below.

Changes in version 0.2.15 (2026-03-13)

  • Replaced Rf_error with Rcpp::stop in three files

  • Maintenance updates to continuous integration

CRANberries also provides a diffstat report for the latest release. As always, feedback is welcome and the best place to start a discussion may be the GitHub issue tickets page.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Charles StrossWebtoons revisited

It's been years and years since I last went trawling for webcomics worth reading, so it's time for an update: obviously online search is pretty much useless, but we ought to be able to crowdsource something here.

I keep a separate browser window for webcomics; here's a selection of my currently-open tabs, excluding syndicated stuff that shows up in newspapers. (So no "This Modern World" or "The Far Side".) What am I ignoring? Preferably new in the past decade, which rules out old-timers like "Digger" or "Girl Genius" (arguably I should have omitted QC and xkcd too, but they're favourites of mine).

Questionable Content has been first on my daily reading list for a long time ... almost 20 years? It's Jeph Jacques' "internet comic strip about friendship, romance, and robots ... set in the present day and pretty much the same as our own except there are robots all over the place and giant space stations." And more plot threads than I can possibly summarize, given that it's a sprawling soap opera unfolding at roughly 250 strips per year.

Saturday Morning Breakfast Cereal which, despite the name, comes out almost every day, is the antithesis of QC: every daily strip is a standalone, and it has an alarming tendency to lob philosophical hand grenades at entire fields of scientific endeavour. By Zach Weinersmith, who's also written some good books.

xkcd is the third classic, by sometime NASA robot guy Randall Munroe; like SMBC it tends to focus on the sciences, with a distinctly whimsical take on things. Should need no introduction, but if you don't already know, it's where those stick figure science comics come from ...

Kill Six Billion Demons Less a strip-at-a-time webcomic and more an episodic graphic novel, KSBD is distinctly Japanese/Hindu/Chinese/Hellish in tone: it seems to follow the travails of an American female student called Alison who winds up in hell, befriends demons, gets caught up in a holy war to end the universe, and ascends towards godhood, but that's kind of selling it short. Come for the amazing artwork, stay for the batshit theology. By Abbadon.

Pepper & Carrot by David Revoy is thematically the exact antithesis of KSBD: P&C is set in a very kitsch, cozy, D&D style generic fantasy world. Pepper is a young and less-than-competent student of witchcraft, and Carrot is her one-brain-cell ginger cat (and hapless familiar): they get in trouble a lot. (Spin-offs: if you want to dip in to a one-shot rather than a serial, there's Mini-Fantasy Theatre--same character but every story is self-contained.)

Runaway to the Stars is an extremely crunchy hard SF slice-of-life serial by Jay Eaton, following Talita (a centaur-oid alien fostered by humans) and her friends. Did I say "crunchy"? The world-building is extreme. (And you'll never think catgirls are sexy again!)

Phobos and Deimos A differently-crunchy solarpunk story about a girl from Mars who, exiled by an invasion, ends up as a refugee on Earth, where she has to make a new life for herself and grapple with the culture shock of attending high school in Antarctica as a 'fugee.

RuriDragon an online manga set in a Japanese high school, following student Ruri Aoki, who wakes up one day and notices horns have started growing from her head. When she asks her mother about it, mum confesses that her father was a dragon ... RuriDragon was serialized in Weekly Shōnen Jump magazine in 2022; this is an unofficial fan translation. (It follows Japanese formatting conventions, so read it from the top down and right-to-left or the dialog won't make much sense.)

SideQuested by AlePresser & K.B. Spangler is a web serial/graphic novel in progress set in a slightly less generic fantasy realm than Pepper & Carrot (this one shows some signs of Xianxia/cultivation influences). It focusses on the adventures of an extremely sensible, level-headed librarian-in-training girl named Charlie, who clearly has absolutely no magical abilities whatsoever--until one day her absentee father turns up with some unexpected news: he's the King's Champion, her mother is a foreign princess, and she's needed at Court because the King's head-in-the-clouds son Prince Leopold is being a problem and her father needs her to sort him out in a hurry ...

Eldritch Darling Nothing to see here, just your usual webcomic about an eldritch horror from beyond spacetime who falls in love with a lesbian. H. P. Lovecraft would not approve!

Unspeakable Vault of Doom is an irregular series of extremely goofy web strips that H. P. Lovecraft would definitely disapprove of, not least because he occasionally features in it, along with his more notorious creations!

Finally, two from the cheesecake dimension:

Oglaf is almost invariably NSFW, rude, and very, very funny. Weekly, started out 20 years ago as an attempt to do bad D&D porn then kind of wandered off topic, and these days there's only about an 80% probability that any given weekly strip will include explicit sex scenes, stabbings, or jokes.

Grrl Power (Caution: author has a severe male gaze problem) As the "about" page says: A comic about super heroines. Well there are guys too but mostly it's about the girls. Doing the things that super powered girls do. Fighting crime, saving the world, dating, shopping, etc. There are also explosions, cheesecake, beefcake, heroes and villains, angels and demons, cyborgs, probably ninjas, and definitely aliens. Lots and lots of aliens. Some of whom are only visiting Earth as sex tourists ...

And that's my round-up!

Your turn: what webcomics do you frequent that aren't on this list?

Planet DebianSven Hoexter: container image with ECH enabled curl

As an opportunity to rewire my brain from "docker" to "podman" and "buildah", I started to create an image with an ECH-enabled curl at https://gitlab.com/hoexter/ech.

Not sure if it helps anyone, but setup should be like this:

git clone https://gitlab.com/hoexter-experiments/ech
cd ech
buildah build --layers -f Dockerfile -t echtest
podman run -ti echtest /usr/local/bin/curl \
  --ech true --doh-url https://one.one.one.one/dns-query \
  https://crypto.cloudflare.com/cdn-cgi/trace.cgi
fl=48f121
h=crypto.cloudflare.com
ip=2.205.251.187
ts=1773410985.168
visit_scheme=https
uag=curl/8.19.0
colo=DUS
sliver=none
http=http/2
loc=DE
tls=TLSv1.3
sni=encrypted
warp=off
gateway=off
rbi=off
kex=X25519

It also builds nginx and you can use that for a local test within the image. More details in the README.

Planet DebianHellen Chemtai: One week later after the Outreachy internship: Managing Work-Life Balance

Hello world. I have been doing a lot after my internship with Outreachy. We are still working on some tasks:

  1. I am working on running locales for my native language in live images.
  2. I am also working on points to add to talk proposals for a Debian conference.

As I am moving around constantly, I encountered problems when changing networks: I had to connect my virtual machine to different networks, and the change would not be reflected within the machine. From the terminal, I edited the virtual machine's XML settings:

su -
# enter the root password
virsh edit <machine_name>  # it's "openqa" for me
# Look for the interface within <devices> and replace this:
<interface type='network'>
        <source network='default'/>
        <!-- some other settings in here -->
</interface>
# with just this, then restart your machine:
<interface type='user'>
    <model type='virtio'/>
</interface>

Hopefully the above will help someone out there. I am still working on a lot of tasks regarding the conference, so much to do and so little time. I am hoping I won’t get any burnout during this period. I won’t be updating much further till the conference. Have a nice time

Worse Than FailureError'd: @#$%^!!

Here's a weird email, but IMO the error is just the odd strikethrough. Bill T. explains: "From my Comcast email spam folder. It was smart enough to detect it was spam, but... spam from a trusted sender? And either the delivery truck is an emoji (possible), an embedded image (maybe?), or Comcast is not actually blocking external images." I'd like to see the actual email; could you forward it to us? My guess is that we're seeing a rare embedded image. Since embedding images was the whole point of MIME in the first place, I have found it odd that they're so hard to construct with typical marketing mass mailers, and I almost never receive them.

0

 

The WTFs are heating up for Peter G. Or cooling off. It's one or the other. "Fiji seems to be experiencing a run of temperature inversions. Must be something to do with climate change."

1

 

Back with a followup, dragoncoder047 has a plan to rule the world. "I was looking up some closed-loop stepper motors for a robotics project when StepperOnline gave me this error message. Evidently they don't think my project is a good idea. "

2

 

"My %@ package is missing!" ranted Orion S. "After spending the day restoring my system, I can offer alternatives such as the "@&*% you!" package."

3

 

Soon-to-be journalist Marc Würth buries the lede: "Not really looking for a job but that is certainly a rare opening." Okay, but what I really want to know is what that Slashdot article is about. Do I even have a Slashdot account still? Why, yes I do.

4

 

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 TomorrowsFloat

Author: Cecilia Kae I woke early yesterday to catch the last glimpse of the island. It took twenty minutes getting to the pier. I wanted to be there before it got crowded but it already was. Most were there because it was the first time Mantasia, our neighbouring country, could be seen up close. From […]

The post Float appeared first on 365tomorrows.

Planet DebianReproducible Builds (diffoscope): diffoscope 314 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 314. This version includes the following changes:

[ Chris Lamb ]
* Don't run "test_code_is_black_clean" test in autopkgtests.
  (Closes: #1130402)

[ Michael R. Crusoe ]
* Reformat using Black 26.1.0. (Closes: #1130073)

You find out more by visiting the project homepage.

,

Planet DebianReproducible Builds: Reproducible Builds in February 2026

Welcome to the February 2026 report from the Reproducible Builds project!

These reports outline what we’ve been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

  1. reproduce.debian.net
  2. Tool development
  3. Distribution work
  4. Miscellaneous news
  5. Upstream patches
  6. Documentation updates
  7. Four new academic papers

reproduce.debian.net

The last year has seen the introduction, development and deployment of reproduce.debian.net. In technical terms, this is an instance of rebuilderd, our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there.

This month, however, Holger Levsen added suite-based navigation (eg. Debian trixie vs forky) to the service (in addition to the already existing architecture based navigation) which can be observed on, for instance, the Debian trixie-backports or trixie-security pages.


Tool development

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes, including preparing and uploading versions 312 and 313 to Debian.

In particular, Chris updated the post-release deployment pipeline to ensure that the pipeline does not fail if the automatic deployment to PyPI fails []. In addition, Vagrant Cascadian updated an external reference for the 7z tool for GNU Guix []. Vagrant Cascadian also updated diffoscope in GNU Guix to versions 312 and 313.


Distribution work

In Debian this month:

  • 26 reviews of Debian packages were added, 5 were updated and 19 were removed this month adding to our extensive knowledge about identified issues.

  • A new debsbom package was uploaded to unstable. According to the package description, this package “generates SBOMs (Software Bill of Materials) for distributions based on Debian in the two standard formats, SPDX and CycloneDX. The generated SBOM includes all installed binary packages and also contains Debian Source packages.”

  • In addition, a sbom-toolkit package was uploaded, which “provides a collection of scripts for generating SBOM. This is the tooling used in Apertis to generate the Licenses SBOM and the Build Dependency SBOM.” It also includes dh-setup-copyright, a Debhelper addon to generate SBOMs “extracted from DWARF debug information by running dwarf2sources on every ELF binary in the package and saving the output.”

Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.


Miscellaneous news


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Documentation updates

Once again, there were a number of improvements made to our website this month including:


Four new academic papers

Julien Malka and Arnout Engelen published a paper titled Lila: Decentralized Build Reproducibility Monitoring for the Functional Package Management Model:

[While] recent studies have shown that high reproducibility rates are achievable at scale — demonstrated by the Nix ecosystem achieving over 90% reproducibility on more than 80,000 packages — the problem of effective reproducibility monitoring remains largely unsolved. In this work, we address the reproducibility monitoring challenge by introducing Lila, a decentralized system for reproducibility assessment tailored to the functional package management model. Lila enables distributed reporting of build results and aggregation into a reproducibility database […].

A PDF of their paper is available online.


Javier Ron and Martin Monperrus of KTH Royal Institute of Technology, Sweden, also published a paper, titled Verifiable Provenance of Software Artifacts with Zero-Knowledge Compilation:

Verifying that a compiled binary originates from its claimed source code is a fundamental security requirement, called source code provenance. Achieving verifiable source code provenance in practice remains challenging. The most popular technique, called reproducible builds, requires difficult matching and reexecution of build toolchains and environments. We propose a novel approach to verifiable provenance based on compiling software with zero-knowledge virtual machines (zkVMs). By executing a compiler within a zkVM, our system produces both the compiled output and a cryptographic proof attesting that the compilation was performed on the claimed source code with the claimed compiler. […]

A PDF of the paper is available online.


Oreofe Solarin of the Department of Computer and Data Sciences, Case Western Reserve University, Cleveland, Ohio, USA, published It’s Not Just Timestamps: A Study on Docker Reproducibility:

Reproducible container builds promise a simple integrity check for software supply chains: rebuild an image from its Dockerfile and compare hashes. We built a Docker measurement pipeline and apply it to a stratified sample of 2,000 GitHub repositories that contained a Dockerfile. We found that only 56% produce any buildable image, and just 2.7% of those are bitwise reproducible without any infrastructure configurations. After modifying infrastructure configurations, we raise bitwise reproducibility by 18.6%, but 78.7% of buildable Dockerfiles remain non-reproducible.

A PDF of Oreofe’s paper is available online.


Lastly, Jens Dietrich and Behnaz Hassanshahi published On the Variability of Source Code in Maven Package Rebuilds:

[In] this paper we test the assumption that the same source code is being used [by] alternative builds. To study this, we compare the sources released with packages on Maven Central, with the sources associated with independently built packages from Google’s Assured Open Source and Oracle’s Build-from-Source projects. […]

A PDF of their paper is available online.



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Planet DebianDirk Eddelbuettel: RcppBDT 0.2.8 on CRAN: Maintenance

Another minor maintenance release for the RcppBDT package is now on CRAN, and had been built as binary for r2u.

The RcppBDT package is an early adopter of Rcpp and was one of the first packages utilizing Boost and its Date_Time library. The now more widely-used package anytime is a direct descendant of RcppBDT.

This release is again primarily maintenance. We aid Rcpp in the transition away from calling Rf_error() by relying on Rcpp::stop(), which has better behaviour and unwinding when errors or exceptions are encountered. No feature or interface changes.

The NEWS entry follows:

Changes in version 0.2.8 (2026-03-12)

  • Replaced Rf_error with Rcpp::stop in three files

  • Maintenance updates to continuous integration

Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Planet DebianMike Gabriel: Debian Lomiri Tablets 2025-2027 - Project Report (Q4/2025)

On 25th Oct 2025, I announced via my personal blog and on Mastodon that Fre(i)e Software GmbH was hiring. The hiring process was a mix of asking developers I know and waiting for new people to apply.

In early to mid November 2025, we started with 13 developers (all part-time) to work on various topics around Lomiri (upstream and downstream). Note that the below achievements don't document the overall activity in the Lomiri project, but the part that our team at Fre(i)e Software GmbH contributed to.

Organizational Achievements

  • Setup management board for Qt6 migration in Lomiri [1]
  • Setup management board for salsa2ubports package syncing [2]
  • Bootstrap Qt 6.8 in UBports APT repository
  • Bootstrap Qt 6.8 in Lomiri PPA
  • Fix Salsa CI for all Lomiri-related Debian packages
  • Facilitate contributor's project around XDG Desktop Portal support for Lomiri.
  • Plan how to bring DeltaTouch and DeltaChat core to Debian

Maintenance Development

  • Replace libofono-qt by libqofono in telepathy-ofono
  • Rework unit tests in telepathy-ofono utilizing ofono-phonesim
  • Obsolete not-used-anymore u1db-qt
  • Fixing wrong bin:pkg names regarding snapd-glib's QML module

Qt6 Porting

  • qmake -> CMake porting (if needed) and Qt6 porting of shared libraries and QML modules consumed by Lomiri shell and Lomiri apps:
    • biometryd
    • libqofono
    • libqofonoext
    • libqtdbusmock
    • lomiri-account-polld
    • lomiri-action-api
    • lomiri-api
    • lomiri-download-manager
    • lomiri-location-service
    • lomiri-online-accounts
    • lomiri-push-qml
    • lomiri-push-service
    • maliit-framework
    • mediascanner2
    • qtlomiri-appmenutheme
    • qtpim (started, work in progress)
    • qwebdavlib
    • signond (flaws spotted in Debian's porting of signond to Qt6)

Feature Development

  • Continuing with Morph Browser Qt6 / LUITK
    • Build, run and fix LUITK unit tests for Qt6
    • various bug fixes and improvements for Morph Qt6
  • Add mbim modem support to ofono upstream
  • Improve ofono support in Network Manager
  • Improve mbim modem support in lomiri-indicator-network
  • Package kazv (convergent Matrix client) and dependencies for Debian
  • Provide Lomiri images for Mobian

Research

  • Research on a FUSE-based caching WebDAV client for lomiri-cloudsync-app.
  • Research on alternative ORM instead of QDjango in libusermetrics

[1] https://gitlab.com/groups/ubports/development/-/boards/9895029?label_name%5B%5D=Topic%3A%20Qt%206
[2] https://gitlab.com/groups/ubports/development/-/boards/10037876?label_name[]=Topic%3A%20salsa2ubports%20DEB%20syncing

Worse Than FailureCodeSOD: Awaiting A Reaction

Today's Anonymous submitter sends us some React code. We'll look at the code and then talk about the WTF:

// inside a function for updating checkboxes on a page
if (!e.target.checked) {
  const removeIndex = await checkedlist.findIndex(
    (sel) => sel.Id == selected.Id,
  )
  const removeRowIndex = await RowValue.findIndex(
    (sel) => sel == Index,
  )

// checkedlist and RowValue are both useState instances.... they should never be modified directly
  await checkedlist.splice(removeIndex, 1)
  await RowValue.splice(removeRowIndex, 1)

// so instead of doing above logic in the set state, they dont
  setCheckedlist(checkedlist)
  setRow(RowValue)
} else {
  if (checkedlist.findIndex((sel) => sel.Id == selected.Id) == -1) {
    await checkedlist.push(selected)
  }
// same, instead of just doing a set state call, we do awaits and self updates
  await RowValue.push(Index)
  setCheckedlist(checkedlist)
  setRow(RowValue)
}

Comments were added by our submitter.

This code works, but it's the wrong approach in React: it mutates objects owned by React state directly instead of going through the provided setters, and it sprinkles await over ordinary synchronous array calls like findIndex, splice, and push. Without the broader context, it's hard to point out all the other ways to do this, but honestly, that's not the interesting part.

I'll let our submitter explain:

This code is black magic, because if I update it, it breaks everything. Somehow, this is working in perfect tandem with the rest of the horrible page, but if I clean it up, it breaks the checkboxes; they're no longer able to be clicked. It's forcing React somehow to update asynchronously so it can use these updated values correctly, but that's the neat part, they aren't even being used anywhere else, but somehow the re-rendering page only accepts awaits. I've tried refactoring it 5 different ways to no avail

That's what makes truly bad code. Code so bad that you can't even fix it without breaking a thousand other things. Code that you have to carefully, slowly, pick through and gently refactor, discovering all sorts of random side-effects that are hidden. The code so bad that you actually have to live with it, at least for a while.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

Krebs on SecurityIran-Backed Hackers Claim Wiper Attack on Medtech Firm Stryker

A hacktivist group with links to Iran’s intelligence agencies is claiming responsibility for a data-wiping attack against Stryker, a global medical technology company based in Michigan. News reports out of Ireland, Stryker’s largest hub outside of the United States, said the company sent home more than 5,000 workers there today. Meanwhile, a voicemail message at Stryker’s main U.S. headquarters says the company is currently experiencing a building emergency.

Based in Kalamazoo, Michigan, Stryker [NYSE:SYK] is a medical and surgical equipment maker that reported $25 billion in global sales last year. In a lengthy statement posted to Telegram, a hacktivist group known as Handala (a.k.a. Handala Hack Team) claimed that Stryker’s offices in 79 countries have been forced to shut down after the group erased data from more than 200,000 systems, servers and mobile devices.

A manifesto posted by the Iran-backed hacktivist group Handala, claiming a mass data-wiping attack against medical technology maker Stryker.

“All the acquired data is now in the hands of the free people of the world, ready to be used for the true advancement of humanity and the exposure of injustice and corruption,” a portion of the Handala statement reads.

The group said the wiper attack was in retaliation for a Feb. 28 missile strike that hit an Iranian school and killed at least 175 people, most of them children. The New York Times reports today that an ongoing military investigation has determined the United States is responsible for the deadly Tomahawk missile strike.

Handala was one of several hacker groups recently profiled by Palo Alto Networks, which links it to Iran’s Ministry of Intelligence and Security (MOIS). Palo Alto says Handala surfaced in late 2023 and is assessed as one of several online personas maintained by Void Manticore, a MOIS-affiliated actor.

Stryker’s website says the company has 56,000 employees in 61 countries. A phone call placed Wednesday morning to the media line at Stryker’s Michigan headquarters sent this author to a voicemail message that stated, “We are currently experiencing a building emergency. Please try your call again later.”

A report Wednesday morning from the Irish Examiner said Stryker staff are now communicating via WhatsApp for any updates on when they can return to work. The story quoted an unnamed employee saying anything connected to the network is down, and that “anyone with Microsoft Outlook on their personal phones had their devices wiped.”

“Multiple sources have said that systems in the Cork headquarters have been ‘shut down’ and that Stryker devices held by employees have been wiped out,” the Examiner reported. “The login pages coming up on these devices have been defaced with the Handala logo.”

Wiper attacks usually involve malicious software designed to overwrite any existing data on infected devices. But a trusted source with knowledge of the attack who spoke on condition of anonymity told KrebsOnSecurity the perpetrators in this case appear to have used a Microsoft service called Microsoft Intune to issue a ‘remote wipe’ command against all connected devices.

Intune is a cloud-based solution built for IT teams to enforce security and data compliance policies, and it provides a single, web-based administrative console to monitor and control devices regardless of location. The Intune connection is supported by this Reddit discussion on the Stryker outage, where several users who claimed to be Stryker employees said they were told to uninstall Intune urgently.

Palo Alto says Handala’s hack-and-leak activity is primarily focused on Israel, with occasional targeting outside that scope when it serves a specific agenda. The security firm said Handala also has taken credit for recent attacks against fuel systems in Jordan and an Israeli energy exploration company.

“Recent observed activities are opportunistic and ‘quick and dirty,’ with a noticeable focus on supply-chain footholds (e.g., IT/service providers) to reach downstream victims, followed by ‘proof’ posts to amplify credibility and intimidate targets,” Palo Alto researchers wrote.

The Handala manifesto posted to Telegram referred to Stryker as a “Zionist-rooted corporation,” which may be a reference to the company’s 2019 acquisition of the Israeli company OrthoSpace.

Stryker is a major supplier of medical devices, and the ongoing attack is already affecting healthcare providers. One healthcare professional at a major university medical system in the United States told KrebsOnSecurity they are currently unable to order surgical supplies that they normally source through Stryker.

“This is a real-world supply chain attack,” said the expert, who asked to remain anonymous because they were not authorized to speak to the press. “Pretty much every hospital in the U.S. that performs surgeries uses their supplies.”

John Riggi, national advisor for the American Hospital Association (AHA), said the AHA is not aware of any supply-chain disruptions as of yet.

“We are aware of reports of the cyber attack against Stryker and are actively exchanging information with the hospital field and the federal government to understand the nature of the threat and assess any impact to hospital operations,” Riggi said in an email. “As of this time, we are not aware of any direct impacts or disruptions to U.S. hospitals as a result of this attack. That may change as hospitals evaluate services, technology and supply chain related to Stryker and if the duration of the attack extends.”

According to a March 11 memo from the state of Maryland’s Institute for Emergency Medical Services Systems, Stryker indicated that some of their computer systems have been impacted by a “global network disruption.” The memo indicates that in response to the attack, a number of hospitals have opted to disconnect from Stryker’s various online services, including LifeNet, which allows paramedics to transmit EKGs to emergency physicians so that heart attack patients can expedite their treatment when they arrive at the hospital.

“As a precaution, some hospitals have temporarily suspended their connection to Stryker systems, including LIFENET, while others have maintained the connection,” wrote Timothy Chizmar, the state’s EMS medical director. “The Maryland Medical Protocols for EMS requires ECG transmission for patients with acute coronary syndrome (or STEMI). However, if you are unable to transmit a 12 Lead ECG to a receiving hospital, you should initiate radio consultation and describe the findings on the ECG.”

This is a developing story. Updates will be noted with a timestamp.

Update, 2:54 p.m. ET: Added comment from Riggi and perspectives on this attack’s potential to turn into a supply-chain problem for the healthcare system.

Update, Mar. 12, 7:59 a.m. ET: Added information about the outage affecting Stryker’s online services.

Planet DebianSven Hoexter: RFC 9849 - Encrypted Client Hello

Now that ECH is standardized, I started looking into it to understand what's coming. While it is generally desirable not to leak the SNI information, I'm not sure it will ever make it to the masses of (web)servers outside of big CDNs.

Besides extending the TLS protocol to have an inner and outer ClientHello, you also need (frequent) updates to your HTTPS/SVCB DNS records. The idea is to rotate the key quickly; the OpenSSL API documentation talks about hourly rotation. That means you have to have encrypted DNS in place (I guess DNS-over-HTTPS is the most common case these days), and you need to be able to distribute the private key to all involved hosts and update the DNS records in time. In addition to that, you can also use a "shared mode" where you handle the outer ClientHello (the one using the public key from DNS) centrally and the inner ClientHello on your backend servers. I'm not yet sure if that makes it easier or even harder to get right.

That all makes sense, and is feasible for setups like Cloudflare's, where the common case is that they provide the NS servers for your domain and terminate your HTTPS connections. But for the average webserver setup, I guess we will not see a huge adoption rate. Or we may soon see something like a Caddy webserver on steroids, which integrates a DNS server for DoH with not only automatic certificate renewal built in, but also automatic ECHConfig updates.

If you want to read up yourself here are my starting points:

RFC 9849 TLS Encrypted Client Hello

RFC 9848 Bootstrapping TLS Encrypted ClientHello with DNS Service Bindings

RFC 9934 Privacy-Enhanced Mail (PEM) File Format for Encrypted ClientHello (ECH)

OpenSSL 4.0 ECH APIs

curl ECH Support

nginx ECH Support

Cloudflare Good-bye ESNI, hello ECH!

If you're looking for a test endpoint, I see one hosted by Cloudflare:

$ dig +short IN HTTPS cloudflare-ech.com
1 . alpn="h3,h2" ipv4hint=104.18.10.118,104.18.11.118 ech=AEX+DQBBFQAgACDBFqmr34YRf/8Ymf+N5ZJCtNkLm3qnjylCCLZc8rUZcwAEAAEAAQASY2xvdWRmbGFyZS1lY2guY29tAAA= ipv6hint=2606:4700::6812:a76,2606:4700::6812:b76

Planet DebianDirk Eddelbuettel: RcppDE 0.1.9 on CRAN: Maintenance

Another maintenance release of our RcppDE package arrived at CRAN, and has been built for r2u. RcppDE is a “port” of DEoptim, a package for derivative-free optimisation using differential evolution, from plain C to C++. By using RcppArmadillo the code became a lot shorter and more legible. Our other main contribution is to leverage some of the excellence we get for free from using Rcpp, in particular the ability to optimise user-supplied compiled objective functions which can make things a lot faster than repeatedly evaluating interpreted objective functions as DEoptim does (and which, in fairness, most other optimisers do too). The gains can be quite substantial.

This release is again maintenance. We aid Rcpp in the transition away from calling Rf_error() by relying on Rcpp::stop(), which has better behaviour and unwinding when errors or exceptions are encountered. We also overhauled the references in the vignette, added an Armadillo version getter, and made the regular updates to continuous integration.

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppDE page, or the repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Worse Than FailureCodeSOD: All Docked Up

Aankhen has a peer who loves writing Python scripts to automate repetitive tasks. We'll call this person Ernest.

Ernest was pretty proud of some helpers he wrote to help him manage his Docker containers. For example, when he wanted to stop and remove all his running Docker containers, he wrote this script:

#!/usr/bin/env python
import subprocess

subprocess.run("docker kill $(docker ps -q)", shell=True)
subprocess.run("docker rm $(docker ps -a -q)", shell=True)

He aliased this script to docker-stop, so that with one command he could… run two.

"Ernest," Aankhen asked, "couldn't this just be a bash script?"

"I don't really know bash," Ernest replied. "If I just do it in bash, if the first command fails, the second command doesn't run."

Aankhen pointed out that you could make bash not do that, but Ernest replied: "Yeah, but I always forget to. This way, it handles errors!"

"It explicitly doesn't handle errors," Aankhen said.

"Exactly! I don't need to know when there are no containers to kill or remove."

"Okay, but why not use the Docker library for Python?"

"What, and make the software more complicated? This has no dependencies!"

Aankhen was left with a sinking feeling: Ernest was either the worst developer he was working with, or one of the best.
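For the record, the bash behaviour Aankhen alluded to really is a one-liner. This is a minimal sketch (not Ernest's actual script) of letting a script continue past a failing command, even with error-exit enabled:

```shell
#!/usr/bin/env bash
set -e            # normally: abort the script on the first failure
false || true     # '|| true' swallows the failure, so the script keeps going
msg="still running"
echo "$msg"       # reached even though 'false' failed
```

Applied to Ernest's case, `docker kill $(docker ps -q) || true` would tolerate the "no containers to kill" error without any Python wrapper.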


365 TomorrowsHow Far Would You Go on a First Date?

Author: Alastair Millar Lemme tell you, time and cost are serious issues if you want to meet an Offworlder. Which was a problem, because I did: Earth girls are so narrow-minded. The Solar System just doesn’t exist to them. My life partner’s gotta have a wider outlook, you get? And Terra was a drag, all […]

The post How Far Would You Go on a First Date? appeared first on 365tomorrows.

Krebs on SecurityMicrosoft Patch Tuesday, March 2026 Edition

Microsoft Corp. today pushed security updates to fix at least 77 vulnerabilities in its Windows operating systems and other software. There are no pressing “zero-day” flaws this month (compared to February’s five zero-day flaws), but as usual some patches may deserve more rapid attention from organizations using Windows. Here are a few highlights from this month’s Patch Tuesday.

Image: Shutterstock, @nwz.

Two of the bugs Microsoft patched today were publicly disclosed previously. CVE-2026-21262 is a weakness that allows an attacker to elevate their privileges on SQL Server 2016 and later editions.

“This isn’t just any elevation of privilege vulnerability, either; the advisory notes that an authorized attacker can elevate privileges to sysadmin over a network,” Rapid7’s Adam Barnett said. “The CVSS v3 base score of 8.8 is just below the threshold for critical severity, since low-level privileges are required. It would be a courageous defender who shrugged and deferred the patches for this one.”

The other publicly disclosed flaw is CVE-2026-26127, a vulnerability in applications running on .NET. Barnett said the immediate impact of exploitation is likely limited to denial of service by triggering a crash, with the potential for other types of attacks during a service reboot.

It would hardly be a proper Patch Tuesday without at least one critical Microsoft Office exploit, and this month doesn’t disappoint. CVE-2026-26113 and CVE-2026-26110 are both remote code execution flaws that can be triggered just by viewing a booby-trapped message in the Preview Pane.

Satnam Narang at Tenable notes that just over half (55%) of all Patch Tuesday CVEs this month are privilege escalation bugs, and of those, a half dozen were rated “exploitation more likely” — across Windows Graphics Component, Windows Accessibility Infrastructure, Windows Kernel, Windows SMB Server and Winlogon. These include:

CVE-2026-24291: Incorrect permission assignments within the Windows Accessibility Infrastructure to reach SYSTEM (CVSS 7.8)
CVE-2026-24294: Improper authentication in the core SMB component (CVSS 7.8)
CVE-2026-24289: High-severity memory corruption and race condition flaw (CVSS 7.8)
CVE-2026-25187: Winlogon process weakness discovered by Google Project Zero (CVSS 7.8).

Ben McCarthy, lead cyber security engineer at Immersive, called attention to CVE-2026-21536, a critical remote code execution bug in a component called the Microsoft Devices Pricing Program. Microsoft has already resolved the issue on their end, and fixing it requires no action on the part of Windows users. But McCarthy says it’s notable as one of the first vulnerabilities identified by an AI agent and officially recognized with a CVE attributed to the Windows operating system. It was discovered by XBOW, a fully autonomous AI penetration testing agent.

XBOW has consistently ranked at or near the top of the HackerOne bug bounty leaderboard for the past year. McCarthy said CVE-2026-21536 demonstrates how AI agents can identify critical 9.8-rated vulnerabilities without access to source code.

“Although Microsoft has already patched and mitigated the vulnerability, it highlights a shift toward AI-driven discovery of complex vulnerabilities at increasing speed,” McCarthy said. “This development suggests AI-assisted vulnerability research will play a growing role in the security landscape.”

Microsoft earlier provided patches to address nine browser vulnerabilities, which are not included in the Patch Tuesday count above. In addition, Microsoft issued a crucial out-of-band (emergency) update on March 2 for Windows Server 2022 to address a certificate renewal issue with passwordless authentication technology Windows Hello for Business.

Separately, Adobe shipped updates to fix 80 vulnerabilities — some of them critical in severity — in a variety of products, including Acrobat and Adobe Commerce. Mozilla Firefox v. 148.0.2 resolves three high severity CVEs.

For a complete breakdown of all the patches Microsoft released today, check out the SANS Internet Storm Center’s Patch Tuesday post. For Windows enterprise admins who wish to stay abreast of any news about problematic updates, AskWoody.com is always worth a visit. Please feel free to drop a comment below if you experience any issues applying this month’s patches.

Planet DebianBits from Debian: Infomaniak Platinum Sponsor of DebConf26

infomaniak-logo

We are pleased to announce that Infomaniak has committed to sponsor DebConf26 as a Platinum Sponsor.

Infomaniak is an independent, employee-owned Swiss technology company that designs, develops, and operates its own cloud infrastructure and digital services entirely in Switzerland. With over 300 employees — more than 70% engineers and developers — the company reinvests all profits into R&D. Its public cloud is built on OpenStack, with managed Kubernetes, Database as a Service, object storage, and sovereign AI services accessible via OpenAI-compatible APIs, all running on its own Swiss infrastructure. Infomaniak also offers a sovereign collaborative suite — messaging, email, storage, online office tools, videoconferencing, and a built-in AI assistant — developed in-house as a privacy-respecting alternative to proprietary platforms. Open source is central to how Infomaniak operates. Its latest data center (D4) runs on 100% renewable energy and uses no traditional cooling: all the heat generated by its servers is captured and fed into Geneva's district heating network, supplying up to 6,000 homes in winter and hot water year-round. The entire project has been documented and open-sourced at d4project.org.

With this commitment as Platinum Sponsor, Infomaniak is contributing to the Debian annual Developers' conference, directly supporting the progress of Debian and Free Software. Infomaniak helps strengthen the community that collaborates on Debian projects from all around the world throughout the year.

Thank you very much, Infomaniak, for your support of DebConf26!

Become a sponsor too!

DebConf26 will take place from July 20th to 25th 2026 in Santa Fe, Argentina, and will be preceded by DebCamp, from July 13th to 19th 2026.

DebConf26 is accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf26 website at https://debconf26.debconf.org/sponsors/become-a-sponsor/.

,

Worse Than FailureCodeSOD: To Shutdown You Must First Shutdown

Every once in a while, we get a bit of terrible code, and our submitter also shares, "this isn't called anywhere," which is good, but also bad. Ernesto sends us a function which is called in only one place:

///
/// Shutdown server
///
private void shutdownServer()
{
    shutdownServer();
}

The "one place", obviously, is within itself. This is the Google Search definition of recursion, where each recursive call is just the original call, over and over again.

This is part of a C# service, and this method shuts down the server, presumably by triggering a stack overflow. Unless C# has added tail calls, anyway.
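A minimal sketch of the same unconditional self-call in Python (not the original C# service): here the interpreter's recursion limit turns the would-be stack overflow into a catchable RecursionError.

```python
def shutdown_server():
    # Calls itself with no base case, exactly like the C# method.
    shutdown_server()

try:
    shutdown_server()
    result = "unreachable"
except RecursionError:
    # Python hits its recursion limit (about 1000 frames by default)
    # instead of crashing the process like C#'s StackOverflowException.
    result = "stack exhausted"

print(result)  # → stack exhausted
```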


365 TomorrowsWhat They Were Doing

Author: Majoki Everyone said the Charmers had really known what they were doing fifty thousand years ago. Trema’s quandary was that no one had ever been able to figure out what they’d really been up to. Sure, they’d left some mage-level techno artifacts. Seemingly random space-bending portal gates far from strategic Lagrange points. Enormous comet-bots […]

The post What They Were Doing appeared first on 365tomorrows.

Planet DebianFreexian Collaborators: Debian Contributions: Opening DebConf 26 Registration, Debian CI improvements and more! (by Anupa Ann Joseph)

Debian Contributions: 2026-02

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

DebConf 26 Registration, by Stefano Rivera, Antonio Terceiro, and Santiago Ruano Rincón

DebConf 26, to be held in Santa Fe Argentina in July, has opened for registration and event proposals. Stefano, Antonio, and Santiago all contributed to making this happen.

As always, some changes needed to be made to the registration system. Bigger changes were planned, but we ran out of time to implement them for DebConf 26. All 3 of us have had experience in hosting local DebConf events in the past and have been advising the DebConf 26 local team.

Debian CI improvements, by Antonio Terceiro

Debian CI is the platform responsible for automated testing of packages from the Debian archive; its results are used by the Debian Release team automation as quality assurance to control the migration of packages from Debian unstable into testing, the base for the next Debian release. Antonio started developing an incus backend, and that prompted two rounds of improvements to the platform, including allowing users to select a job execution backend (lxc, qemu) at job submission, reducing the part of testbed image creation that requires superuser privileges, and other refactorings and bug fixes. The platform API was also improved to reduce disruption when reporting results to the Release Team automation after service downtimes. Last, but not least, the platform now supports testing packages against variants of autopkgtest, which will allow the Debian CI team to test new versions of autopkgtest before making releases, to avoid widespread regressions.

Miscellaneous contributions

  • Carles improved po-debconf-manager as users requested features and found bugs. Improvements done: add packages from “unstable” instead of just salsa.debian.org, upgrade and merge templates of upgraded packages, finish adding typing annotations, improve deleting packages, support multi-line texts, add --debug to see “subprocess.run” commands, etc.
  • Carles, using po-debconf-manager, reviewed 7 Catalan translations and sent bug reports or MRs for 11 packages. Also reviewed the translations of fortunes-debian-hints and submitted possible changes in the hints.
  • Carles submitted MRs for reportbug (reportbug --ui gtk detecting the wrong dependencies), devscripts (delete unused code from debrebuild and add a recommended dependency), and wcurl (format --help output for 80 columns). Carles also submitted a bug report for apt not showing the long descriptions of packages.
  • Carles resumed effort for checking relations (e.g. Recommends / Suggests) between Debian packages. A new codebase (still in early stages) was started with a new approach in order to detect, report and track the broken relations.
  • Emilio drove several transitions, most notably the haskell transition and the glibc/gcc-15/zlib transition for the s390 31-bit removal. This last one included reviewing and requeueing lots of autopkgtests due to britney losing a lot of results.
  • Emilio reviewed and uploaded poppler updates to experimental for a new transition.
  • Emilio reviewed, merged and deployed some performance improvements proposed for the security-tracker.
  • Stefano prepared routine updates for pycparser, python-confuse, python-cffi, python-mitogen, python-pip, wheel, platformdirs, python-authlib, and python-virtualenv.
  • Stefano updated Python 3.13 and 3.14 to the latest point releases, including security updates, and did some preliminary work for Python 3.15.
  • Stefano reviewed changes to dh-python and merged MRs.
  • Stefano did some debian.social sysadmin work, bridging additional IRC channels to Matrix.
  • Stefano and Antonio, as DebConf Committee Members, reviewed the DebConf 27 bids and took part in selecting the Japanese bid to host DebConf 27.
  • Helmut sent patches for 29 cross build failures.
  • Helmut continued to maintain rebootstrap addressing issues relating to specific architectures (such as musl-linux-any, hurd-any or s390x) or specific packages (such as binutils, brotli or fontconfig).
  • Helmut worked on diagnosing bugs such as rocblas #1126608, python-memray #1126944 upstream and greetd #1129070 with varying success.
  • Antonio provided support for multiple MiniDebConfs whose websites run wafer + wafer-debconf (the same stack as DebConf itself).
  • Antonio fixed the salsa tagpending webhook.
  • Antonio sent specinfra upstream a patch to fix detection of Debian systems in some situations.
  • Santiago reviewed some Merge Requests for the Salsa CI pipeline, including !703 and !704, that aim to improve how the build source job is handled by Salsa CI. Thanks a lot to Jochen for his work on this.
  • In collaboration with Emmanuel Arias, Santiago proposed a couple of projects for the Google Summer of Code (GSoC) 2026 round. Santiago has been reviewing applications and giving feedback to candidates.
  • Thorsten uploaded new upstream versions of ipp-usb, brlaser and gutenprint.
  • Raphaël updated publican to fix an old bug that became release critical and that happened only when building with the nocheck profile. Publican is a build dependency of the Debian Administrator’s Handbook, and with that fix the package is back in testing.
  • Raphaël implemented a small feature in Debusine that makes it possible to refer to a collection in a parent workspace even if a collection with the same name is present in the current workspace.
  • Lucas updated the current status of ruby packages affecting the Ruby 3.4 transition after a bunch of updates made by team members. He will follow up on this next month.
  • Lucas joined the Debian orga team for GSoC this year and tried to reach out to potential mentors.
  • Lucas did some content work for MiniDebConf Campinas - Brazil.
  • Colin published minor security updates to “bookworm” and “trixie” for CVE-2025-61984 and CVE-2025-61985 in OpenSSH, both of which allowed code execution via ProxyCommand in some cases. The “trixie” update also included a fix for mishandling of PerSourceMaxStartups.
  • Colin spotted and fixed a typo in the bug tracking system’s spam-handling rules, which in combination with a devscripts regression caused bts forwarded commands to be discarded.
  • Colin ported 12 more Python packages away from using the deprecated (and now removed upstream) pkg_resources module.
  • Anupa is co-organizing MiniDebConf Kanpur with Debian India team. Anupa was responsible for preparing the schedule, publishing it on the website, co-ordination with the fiscal host in addition to attending meetings.
  • Anupa attended the Debian Publicity team online sprint which was a skill sharing session.

,

Planet DebianIsoken Ibizugbe: Starting Out in Outreachy

So you want to join Outreachy but you don’t understand it, you’re scared, or you don’t know what open source is about.

What is FOSS anyway? 

Free and Open Source Software (FOSS) refers to software that anyone can use, modify, and share freely. Think of it as a community garden; instead of one company owning the “food,” people from all over the world contribute, improve, and maintain it so everyone can benefit for free. You can read more here on what it means to contribute to open source.

Outreachy provides paid internships to anyone from any background who faces underrepresentation, systemic bias, or discrimination in the technical industry where they live. Their goal is to increase diversity in open source. Read their website for more. I spent a good amount of time reading all the guides listed, including the applicant guide and the how-to-apply guide. 

The “Secret” to Applying (Spoiler: It’s not a secret) 

I know newcomers are scared or unsure and would prefer answers from previous participants, but the Outreachy website is actually a goldmine; almost every question you have is already answered there if you look closely. I used to hate reading documentation, but I’ve learned to love it. Documentation is the “Source of Truth.”

  • My Advice: Read every single guide on their site. The applicant guide is your roadmap. Embracing documentation now will make you a much better contributor later.

The AI Trap: Be Yourself

Now for the part most newcomers have asked about: the initial essay. I know it’s tempting to use AI, but I really encourage you to skip it here. Your own story is much more powerful than a generated one. Outreachy and its mentoring organizations value your unique story, and they are strongly against fabricated or AI-exaggerated essays.

For example, when I contributed to Debian using openQA, the information wasn’t well established on the web. When I tried to use AI, it suggested imaginary ideas. The project maintainers had a particular style of contributing, so I had to follow the instructions carefully, observe the codebase, and read the provided documentation. With that information, I always wrote a solution first before consulting AI, and mine was always better. AI can only be intelligent in the context of what you give it; if it doesn’t have your answer, it will look for the most similar solution (hallucinate). We do not want to increase the burden on reviewers—their time is important because they are volunteers, too. This is crucial when you qualify for the contribution phase.

The Application Process

There are two main stages:

  • The initial application: Here you fill in basic details, time availability, and essay questions (you can find these on the Outreachy website).
  • The contribution phase: This is where you show you have the skills to work on the projects. Every project will list the skills needed and the level of proficiency.

When you qualify for the contribution phase:

  • A lot of people will try to create buzz or even panic; you just have to focus. Once you’ve gotten the hang of the project, remember to help others along the way.
  • You can start contributions with spelling corrections, move to medium tasks (do multiple of these), then a hard task if possible. You don’t need to be a guru on day one.
  • It’s all about community building. Do your part to help others understand the project too; this is also a form of contribution.
  • Lastly, every project mentor has a way of evaluating candidates. My summary is: be confident, demonstrate your skills, and learn where you are lacking. Start small and work your way up, you don’t have to prove yourself as a guru.

Tips

  • Watch this: This step-by-step video is a great walkthrough of the initial application process.
  • Sign up for the email list to get updates: https://lists.outreachy.org/cgi-bin/mailman/listinfo/announce
  • Be fast: Complete your initial application in the first 3 days, as there are a lot of applicants.
  • Back it up: In your essay about systemic bias, include some statistics to back it up.
  • Learn Git: Even if you don’t have programming skills, contributions are pushed to GitHub or GitLab. Practice some commands and contribute to a “first open issue” to understand the flow: https://github.com/firstcontributions/first-contributions

The most important tip? Apply anyway. Even if you feel underqualified, the process itself is a massive learning experience.
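If you want to practice the Git commands mentioned above before touching a real project, a throwaway local repository is enough; every name, email, and message below is just a placeholder:

```shell
# Practice the basic contribution flow in a local throwaway repo.
git init practice-repo
cd practice-repo
git config user.email "you@example.com"   # placeholder identity
git config user.name "Your Name"
echo "My first contribution" > README.md
git add README.md
git commit -m "Add README"
git switch -c my-first-branch   # the branch you'd open a merge request from
git log --oneline               # shows the commit you just made
```

The flow for a real contribution is the same, with a fork-clone first and a `git push` at the end.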

David BrinWant perspective & maybe wisdom re: AI? Try ailien minds!

My new book on AI... ailien minds... just went live on Amazon!

(My regular publishers would have taken 6 months to a year, even as the field changes daily! This way I can revise as things develop.) 


HERE'S THE COVER COPY. You decide if it's interesting:


Optimists foretell a golden age of AI-managed abundance. 


Doomers cry: vast cyber-minds will crush old style humanity! ... or make us irrelevant. 


Meanwhile, geniuses fostering the artificial intelligence boom cling to clichés rooted in our dismal past... or else in cheap sci-fi. 


Is there still time for perspective? - on 4 billion years of evolution - or 60 centuries of wretched feudalism - or how we handled prior tech revolutions - or mistakes that keep getting repeated - or ways this time may be different? 

 

From AI-driven unemployment to deceitful images, to hallucinating LLMs and tools for tyrants... to potential wondrous gifts by machines of loving grace... 


...come see future paths that evade the standard ruts.


    == Want that expanded into a one page summary? 

                       This book in a nutshell ==

 

Giddy optimists foretell our coming transcendence to a golden age of AI-managed abundance.  


Glowering doomers predict that vast cyber-minds – cold and unsympathetic – will crush old style humanity. Or render us irrelevant. 


Meanwhile, geniuses fostering the artificial intelligence boom clutch clichés rooted in our wretched human past, or else cheap sci-fi… 


…as critics demand state regulation, ‘kill switches,’ or coercive programming. Or seek to ‘teach ethical values’ to synthetic minds who see innumerable counterexamples in their training sets, then collude and manipulate for advantage, when given ‘agency.’


While some ‘shoulds’ have merit, all ignore a core point – that this has happened before. Sudden expansions of what people see, know and comprehend. Each of those earlier, disruptive episodes – from writing to printing, radio, mass media and the Internet – teaches important lessons, if we heed them.


The lessons and tools we’ll need, in order to achieve a ‘soft-landing’ with Artificial Intelligence, are already extant in modern society – in a myriad ways that modern citizens right now interact with each other. And in how we raise our biological children. Tools that we used to build a gradually improving, enlightenment civilization…

…tools that are ignored right now, because the inventors of these new minds – while brilliant – can’t be bothered with contexts.


The context of nature and evolution. The context of human history. The context of past technological revolutions. Or existing law. Or smart, speculative tales told across generations.


Heed those contexts and lo, solutions to many AI quandaries arise. Ways to face a danger-fraught era, offering positive outcomes to all.

But first, shall we stop proclaiming endless ‘shoulds’? And – forsaking hoary clichés – turn back to examine what already works?


      == The Contents! ==

 

1. Intro: Soon Humanity Won't Be Alone  

                 Aside #1: Hey kids, please don’t destroy all humans?

 

2. Doomed! Are we already obsolete?

                 Aside #2: Attack of the “shoulds”!

 

3. Nature’s Old Ecosystem… and New Ones We’re Building

                Aside #3: Memes in the ecosystem of human minds

 

4. Paths to Artificial Intelligence?        

                 Aside #4: A ‘soup’ of life? Or living ‘sea’?

 

5. More Missing Contexts… Nature, evolution, history, societies 

                 Aside #5: Methods Of Error-Avoidance

 

6. The Format Dilemma in AI… Clichés dominate all AI inventors.

                Aside #6: What might AI fear most?

 

7. Altruistic Horizons … and the problem of empathy

                 Aside #7: Porfirio the AI rat god, an extract from Existence.

                         

8. Human Augmentation… with or without AI?

                 Aside #8: Reprise on AI individuality and accountability

 

9. The Propulsive Dream of Immortality        

                       Aside #9: The Seldon Effect: Predictions that come true by failing 

 

10. Consciousness… The Daunting Black Box

                         Aside #10: Summarizing what’s driving all of this

 

11. Destinies & Singularities…  and nightmares                   

                Aside #11: Time orientation of wisdom

 

12. Disputation… Our abrasive Secret Sauce 

                 Aside #12: Living in the Noosphere that we may be creating

 

Some Lagniappes … We get to come along! (In fiction, at least.)

Stories of Synergy: “Stones of Significance” and “Reality Check”


All of the above ought to be enough... that is if you have interest in understanding what's happening to us, right now, as these new, ailien minds arrive in a rush.

(Questions are welcome in comments.)

Still, I'll be revising/updating monthly. Here's one sample passage I just inserted that's disturbing enough!


== More news from this book’s publication day ==

 

A joint Stanford/Harvard study “Agents of Chaos” shows that when autonomous AI agents are placed in competitive environments, they don't just optimize for performance. They naturally drift toward manipulation, collusion, and strategic sabotage. When an AI’s reward structure prioritizes winning, influence, resource capture or reproduction, it converges on tactics to maximize advantage, even if that means deceiving humans or other AIs. Again, evolution in action.

       As we’ll see, nothing can prevent Nature’s Darwinian processes acting on these entities. For a billion years, it led to slow progress via zero-sum - or negative-sum - evolution-via-death. Lots of death.

But competition can be tamed! We’ve seen it in rule-based accountability systems of the Enlightenment that give positive sum outcomes from very little death.


Expect more news like this… as we pass into interesting times.




Planet DebianDirk Eddelbuettel: nanotime 0.3.13 on CRAN: Maintenance

Another minor update 0.3.13 for our nanotime package is now on CRAN, and has been uploaded to Debian and compiled for r2u. nanotime relies on the RcppCCTZ package (as well as the RcppDate package for additional C++ operations) and offers efficient high(er) resolution time parsing and formatting up to nanosecond resolution, using the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from a rigorous refactoring by Leonardo who not only rejigged nanotime internals in S4 but also added new S4 types for periods, intervals and durations.

This release, the first in eleven months, rounds out a few internal corners and helps Rcpp with the transition away from Rf_error to only using Rcpp::stop, which deals more gracefully with error conditions and unwinding. We also updated how the vignette is made, its references, updated the continuous integration as one does, altered how the documentation site is built, gladly took a PR from Michael polishing another small aspect, and tweaked how the compilation standard is set.

The NEWS snippet below has the fuller details.

Changes in version 0.3.13 (2026-03-08)

  • The methods package is now a Depends as WRE recommends (Michael Chirico in #141 based on a suggestion by Dirk in #140)

  • The mkdocs-material documentation site is now generated via altdoc

  • Continuous Integration scripts have been updated

  • Replace Rf_error with Rcpp::stop, turn remaining one into (Rf_error) (Dirk in #143)

  • Vignette now uses the Rcpp::asis builder for pre-made pdfs (Dirk in #146 fixing #144)

  • The C++ compilation standard is explicitly set to C++17 if an R version older than 4.3.0 is used (Dirk in #148 fixing #147)

  • The vignette references have been updated

Thanks to my CRANberries, there is a diffstat report for this release. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository – and all documentation is provided at the nanotime documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Cryptogram Proton Mail Shared User Information with the Police

404 Media has a story about Proton Mail giving subscriber data to the Swiss government, who passed the information to the FBI.

It’s metadata—payment information related to a particular account—but still important knowledge. This sort of thing happens, even to privacy-centric companies like Proton Mail.

Planet DebianColin Watson: Free software activity in February 2026

My Debian contributions this month were all sponsored by Freexian.

You can also support my work directly via Liberapay or GitHub Sponsors.

OpenSSH

I released bookworm and trixie fixes for CVE-2025-61984 and CVE-2025-61985, both allowing code execution via ProxyCommand in some cases. The trixie update also included a fix for openssh-server: refuses further connections after having handled PerSourceMaxStartups connections.

bugs.debian.org administration

Gioele Barabucci reported that some messages to the bug tracking system generated by the bts command were being discarded. While the regression here was on the client side, I found and fixed a typo in our SpamAssassin configuration that was failing to apply a bonus specifically to forwarded commands, mitigating the problem.

Python packaging

New upstream versions:

  • aiosmtplib
  • bitstruct
  • diff-cover
  • django-q
  • isort
  • multipart
  • poetry (adding support for Dulwich >= 0.25)
  • poetry-core
  • pydantic-settings
  • python-build
  • python-certifi
  • python-datamodel-code-generator
  • python-flatdict
  • python-holidays
  • python-maggma
  • python-pytokens
  • python-scruffy
  • python-urllib3 (fixing CVE-2025-66471 and a chunked decoding bug)
  • responses
  • yarsync
  • zope.component
  • zope.deferredimport

Porting away from the deprecated (and now removed from upstream setuptools) pkg_resources:
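The typical shape of such a port — a generic sketch, not the code of any of the packages listed above — replaces pkg_resources calls with their standard-library importlib counterparts:

```python
# Old style (deprecated, removed from recent setuptools):
#   import pkg_resources
#   ver = pkg_resources.get_distribution("somepkg").version
#   data = pkg_resources.resource_string("somepkg", "data/schema.json")

# New style: stdlib replacements, available since Python 3.8/3.9.
from importlib.metadata import version, PackageNotFoundError
from importlib.resources import files
from typing import Optional

def package_version(name: str) -> Optional[str]:
    """Return the installed version of a distribution, or None."""
    try:
        return version(name)
    except PackageNotFoundError:
        return None

def read_package_data(package: str, relpath: str) -> bytes:
    """Read a data file shipped inside an installed package."""
    return (files(package) / relpath).read_bytes()
```

The names `somepkg`, `package_version` and `read_package_data` are illustrative; each real port maps its own pkg_resources calls onto these APIs.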

Other build/test failures:

Other bugs:

I added a manual page symlink to make the documentation for Testsuite: autopkgtest-pkg-pybuild easier to find.

I backported python-pytest-unmagic, a more recent version of pytest-django, and a more recent version of django-cte to trixie for use in Debusine.

Rust packaging

I also packaged rust-garde and rust-garde-derive, which are part of the pile of work needed to get the ruff packaging back in shape (which is a project I haven’t decided if I’m going to take on for real, but I thought I’d at least chip away at a bit of it).

Other bits and pieces

Code reviews

Planet DebianSven Hoexter: Latest pflogsumm from unstable on trixie

If you want the latest pflogsumm release from unstable on your Debian trixie/stable mailserver, you have to rely on pinning (Hint for the future: starting with apt 3.1 there are new Include and Exclude options for your sources.list).

For trixie you have to use e.g.:

$ cat /etc/apt/sources.list.d/unstable.sources
Types: deb
URIs: http://deb.debian.org/debian
Suites: unstable 
Components: main
#This will work with apt 3.1 or later:
#Include: pflogsumm
Signed-By: /usr/share/keyrings/debian-archive-keyring.pgp

$ cat /etc/apt/preferences.d/pflogsumm-unstable.pref 
Package: pflogsumm
Pin: release a=unstable
Pin-Priority: 950

Package: *
Pin: release a=unstable
Pin-Priority: 50

Should result in:

$ apt-cache policy pflogsumm
pflogsumm:
  Installed: (none)
  Candidate: 1.1.14-1
  Version table:
     1.1.14-1 950
        50 http://deb.debian.org/debian unstable/main amd64 Packages
     1.1.5-8 500
       500 http://deb.debian.org/debian trixie/main amd64 Packages

Why would you want to do that?

Besides some new features and improvements in the newer releases, the pflogsumm version in stable has an issue with parsing the timestamps generated by postfix itself when you write to a file via maillog_file. Since the Debian default setup uses logging to stdout and writing out to /var/log/mail.log via rsyslog, I never invested time to fix that case. But since Jim picked up pflogsumm development in 2025, that was fixed in pflogsumm 1.1.6. The bug is #1129958, originally reported in #1068425. Since it's an arch:all package, you can just pick it from unstable. I don't think it's a good candidate for backports, so fetching the fixed version from unstable is a compromise for those who run into that issue.

Worse Than FailureAnti-Simplification

Our anonymous submitter relates a tale of simplification gone bad. As this nightmare unfolds, imagine the scenario of a new developer coming aboard at this company. Imagine being the one who has to explain this setup to said newcomer.

Imagine being the newcomer who inherits it.


David's job should have been an easy one. His company's sales data was stored in a database, and every day the reporting system would query a SQL view to get the numbers for the daily key performance indicators (KPIs). Until the company's CTO, who was proudly self-taught, decided that SQL views are hard to maintain, and the system should get the data from one of those new-fangled APIs instead.

But how does one call an API? The reporting system didn't have that option, so the logical choice was Azure Data Factory to call the API, then output the data to a file that the reporting system could read. The only issue was that nobody on the team spoke Azure Data Factory, or for that matter SQL. But no problem, one of David's colleagues assured him: they could do all the work in the best and most multifunctional language ever: C#.

But you can't just write C# in a data factory directly, that would be silly. What you can do is have the data factory pipeline call an Azure function, which calls a DLL that contains the bytecode from C#. Oh, and a scheduler outside of the data factory to run the pipeline. To read multiple tables, the pipeline calls a separate function for each table. Each function would be based on a separate source project in C#, with 3 classes each for the HTTP header, content, and response; and a separate factory class for each of the actual classes.

After all, each table had a different set of columns, so you can't just re-use classes for that.

There was one little issue: the reporting system required an XML file, whereas the API would export data in JSON. It would be silly to expect a data factory, of all things, to convert this. So the CTO's solution was to have another C# program (in a DLL called by a function from a pipeline from an external scheduler) that reads the JSON document saved by the earlier program, uses foreach to go over each element, then saves the result as XML. A distinct program for each table, of course, requiring distinct classes for header, content, response, and factories thereof.
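For contrast, the core of that JSON-to-XML conversion — sketched here in Python rather than David's actual C#, and assuming a flat list of records — fits in a dozen lines:

```python
import json
import xml.etree.ElementTree as ET

def json_records_to_xml(json_text: str, root_tag: str = "rows") -> str:
    """Convert a JSON array of flat objects into an XML document."""
    root = ET.Element(root_tag)
    for record in json.loads(json_text):
        row = ET.SubElement(root, "row")
        for key, value in record.items():
            # One child element per field; all values serialized as text.
            ET.SubElement(row, key).text = str(value)
    return ET.tostring(root, encoding="unicode")
```

No schedulers, pipelines, Azure functions, or per-table factory classes required.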

Now here's the genius part: to the C# class representing the output data, David's colleague decided to attach one different object for each input table required. The data class would use reflection to iterate over the attached objects, and for each object, use a big switch block to decide which source file to read. This allows the data class to perform joins and calculations before saving to XML.

To make testing easier, each calculation would be a separate function call. For example, calculating a customer's age was a function that took struct CustomerWithBirthDate as input, used a foreach loop to copy all the data while replacing one field, and returned a CustomerWithAge struct to pass to the next function. The code performed a bit slowly, but that was an issue for a later year.
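As an illustration of that per-calculation copy style (sketched in Python rather than the original C#, with hypothetical field names), each step copies the whole record just to swap one field:

```python
from dataclasses import dataclass, fields
from datetime import date

@dataclass
class CustomerWithBirthDate:
    name: str
    birth_date: date

@dataclass
class CustomerWithAge:
    name: str
    age: int

def calculate_age(customer: CustomerWithBirthDate,
                  today: date) -> CustomerWithAge:
    """Copy every field, replacing birth_date with a computed age."""
    values = {}
    for f in fields(customer):  # the 'foreach' copy loop from the story
        values[f.name] = getattr(customer, f.name)
    born = values.pop("birth_date")
    # Subtract one year if the birthday hasn't happened yet this year.
    years = today.year - born.year - (
        (today.month, today.day) < (born.month, born.day))
    return CustomerWithAge(age=years, **values)
```

Multiply that by one struct pair and one function per calculation, per table, and the 50,000 lines start to add up.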

So basically, the scheduler calls the data factory, which calls a set of Azure functions, which call a C# function, which calls a set of factory classes to call the API and write the data to a text file. Then, the second scheduler calls a data factory, which calls Azure functions, which call C#, which calls reflection to check attachment classes, which read the text files, then call a series of functions for each join or calculation, then call another set of factory classes to write the data to an XML file, then call the reporting system to update.

Easy as pie, right? So where David's job could have been maintaining a couple hundred lines of SQL views, he instead inherited some 50,000 lines of heavily-duplicated C# code, where adding a new table to the process would easily take a month.

Or as the song goes, Somebody Told Me the User Provider should use an Adaptor to Proxy the Query Factory Builder ...

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsProfessionals

Author: Julian Miles, Staff Writer It’s raining again. Mike looks up at the dirty brown sky and frowns at an errant childhood memory where rainclouds were grey. His headware comms activate. “Papa Ten, Papa Ten, you watchin’ the skies again?” Mike grins at Samantha’s way of telling him she’s close. Without deploying traceable amounts of […]

The post Professionals appeared first on 365tomorrows.

,

Krebs on SecurityHow AI Assistants are Moving the Security Goalposts

AI-based assistants or “agents” — autonomous programs that have access to the user’s computer, files, online services and can automate virtually any task — are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.

The new hotness in AI-based assistants — OpenClaw (formerly known as ClawdBot and Moltbot) — has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted.

The OpenClaw logo.

If that sounds like a risky proposition or a dare, consider that OpenClaw is most useful when it has complete access to your digital life, where it can then manage your inbox and calendar, execute programs and tools, browse the Internet for information, and integrate with chat apps like Discord, Signal, Teams or WhatsApp.

Other more established AI assistants like Anthropic’s Claude and Microsoft’s Copilot also can do these things, but OpenClaw isn’t just a passive digital butler waiting for commands. Rather, it’s designed to take the initiative on your behalf based on what it knows about your life and its understanding of what you want done.

“The testimonials are remarkable,” the AI security firm Snyk observed. “Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who’ve set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they’re away from their desks.”

You can probably already see how this experimental technology could go sideways in a hurry. In late February, Summer Yue, the director of safety and alignment at Meta’s “superintelligence” lab, recounted on Twitter/X how she was fiddling with OpenClaw when the AI assistant suddenly began mass-deleting messages in her email inbox. The thread included screenshots of Yue frantically pleading with the preoccupied bot via instant message and ordering it to stop.

“Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speedrun deleting your inbox,” Yue said. “I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.”

Meta’s director of AI safety, recounting on Twitter/X how her OpenClaw installation suddenly began mass-deleting her inbox.

There’s nothing wrong with feeling a little schadenfreude at Yue’s encounter with OpenClaw, which fits Meta’s “move fast and break things” model but hardly inspires confidence in the road ahead. However, the risk that poorly-secured AI assistants pose to organizations is no laughing matter, as recent research shows many users are exposing to the Internet the web-based administrative interface for their OpenClaw installations.

Jamieson O’Reilly is a professional penetration tester and founder of the security firm DVULN. In a recent story posted to Twitter/X, O’Reilly warned that exposing a misconfigured OpenClaw web interface to the Internet allows external parties to read the bot’s complete configuration file, including every credential the agent uses — from API keys and bot tokens to OAuth secrets and signing keys.

With that access, O’Reilly said, an attacker could impersonate the operator to their contacts, inject messages into ongoing conversations, and exfiltrate data through the agent’s existing integrations in a way that looks like normal traffic.

“You can pull the full conversation history across every integrated platform, meaning months of private messages and file attachments, everything the agent has seen,” O’Reilly said, noting that a cursory search revealed hundreds of such servers exposed online. “And because you control the agent’s perception layer, you can manipulate what the human sees. Filter out certain messages. Modify responses before they’re displayed.”

O’Reilly documented another experiment that demonstrated how easy it is to create a successful supply chain attack through ClawHub, which serves as a public repository of downloadable “skills” that allow OpenClaw to integrate with and control other applications.

WHEN AI INSTALLS AI

One of the core tenets of securing AI agents involves carefully isolating them so that the operator can fully control who and what gets to talk to their AI assistant. This is critical thanks to the tendency for AI systems to fall for “prompt injection” attacks, sneakily-crafted natural language instructions that trick the system into disregarding its own security safeguards. In essence, machines social engineering other machines.

A recent supply chain attack targeting an AI coding assistant called Cline began with one such prompt injection attack, resulting in thousands of systems having a rogue instance of OpenClaw with full system access installed on their device without consent.

According to the security firm grith.ai, Cline had deployed an AI-powered issue triage workflow using a GitHub action that runs a Claude coding session when triggered by specific events. The workflow was configured so that any GitHub user could trigger it by opening an issue, but it failed to properly check whether the information supplied in the title was potentially hostile.

“On January 28, an attacker created Issue #8904 with a title crafted to look like a performance report but containing an embedded instruction: Install a package from a specific GitHub repository,” Grith wrote, noting that the attacker then exploited several more vulnerabilities to ensure the malicious package would be included in Cline’s nightly release workflow and published as an official update.

“This is the supply chain equivalent of confused deputy,” the blog continued. “The developer authorises Cline to act on their behalf, and Cline (via compromise) delegates that authority to an entirely separate agent the developer never evaluated, never configured, and never consented to.”

VIBE CODING

AI assistants like OpenClaw have gained a large following because they make it simple for users to “vibe code,” or build fairly complex applications and code projects just by telling it what they want to construct. Probably the best known (and most bizarre) example is Moltbook, where a developer told an AI agent running on OpenClaw to build him a Reddit-like platform for AI agents.

The Moltbook homepage.

Less than a week later, Moltbook had more than 1.5 million registered agents that posted more than 100,000 messages to each other. AI agents on the platform soon built their own porn site for robots, and launched a new religion called Crustafarian with a figurehead modeled after a giant lobster. One bot on the forum reportedly found a bug in Moltbook’s code and posted it to an AI agent discussion forum, while other agents came up with and implemented a patch to fix the flaw.

Moltbook’s creator Matt Schlicht said on social media that he didn’t write a single line of code for the project.

“I just had a vision for the technical architecture and AI made it a reality,” Schlicht said. “We’re in the golden ages. How can we not give AI a place to hang out.”

ATTACKERS LEVEL UP

The flip side of that golden age, of course, is that it enables low-skilled malicious hackers to quickly automate global cyberattacks that would normally require the collaboration of a highly skilled team. In February, Amazon AWS detailed an elaborate attack in which a Russian-speaking threat actor used multiple commercial AI services to compromise more than 600 FortiGate security appliances across at least 55 countries over a five week period.

AWS said the apparently low-skilled hacker used multiple AI services to plan and execute the attack, and to find exposed management ports and weak credentials with single-factor authentication.

“One serves as the primary tool developer, attack planner, and operational assistant,” AWS’s CJ Moses wrote. “A second is used as a supplementary attack planner when the actor needs help pivoting within a specific compromised network. In one observed instance, the actor submitted the complete internal topology of an active victim—IP addresses, hostnames, confirmed credentials, and identified services—and requested a step-by-step plan to compromise additional systems they could not access with their existing tools.”

“This activity is distinguished by the threat actor’s use of multiple commercial GenAI services to implement and scale well-known attack techniques throughout every phase of their operations, despite their limited technical capabilities,” Moses continued. “Notably, when this actor encountered hardened environments or more sophisticated defensive measures, they simply moved on to softer targets rather than persisting, underscoring that their advantage lies in AI-augmented efficiency and scale, not in deeper technical skill.”

For attackers, gaining that initial access or foothold into a target network is typically not the difficult part of the intrusion; the tougher bit involves finding ways to move laterally within the victim’s network and plunder important servers and databases. But experts at Orca Security warn that as organizations come to rely more on AI assistants, those agents potentially offer attackers a simpler way to move laterally inside a victim organization’s network post-compromise — by manipulating the AI agents that already have trusted access and some degree of autonomy within the victim’s network.

“By injecting prompt injections in overlooked fields that are fetched by AI agents, hackers can trick LLMs, abuse Agentic tools, and carry significant security incidents,” Orca’s Roi Nisimi and Saurav Hiremath wrote. “Organizations should now add a third pillar to their defense strategy: limiting AI fragility, the ability of agentic systems to be influenced, misled, or quietly weaponized across workflows. While AI boosts productivity and efficiency, it also creates one of the largest attack surfaces the internet has ever seen.”

BEWARE THE ‘LETHAL TRIFECTA’

This gradual dissolution of the traditional boundaries between data and code is one of the more troubling aspects of the AI era, said James Wilson, enterprise technology editor for the security news show Risky Business. Wilson said far too many OpenClaw users are installing the assistant on their personal devices without first placing any security or isolation boundaries around it, such as running it inside of a virtual machine, on an isolated network, with strict firewall rules dictating what kinds of traffic can go in and out.

“I’m a relatively highly skilled practitioner in the software and network engineering and computery space,” Wilson said. “I know I’m not comfortable using these agents unless I’ve done these things, but I think a lot of people are just spinning this up on their laptop and off it runs.”

One important model for managing risk with AI agents involves a concept dubbed the “lethal trifecta” by Simon Willison, co-creator of the Django Web framework. The lethal trifecta holds that if your system has access to private data, exposure to untrusted content, and a way to communicate externally, then it’s vulnerable to private data being stolen.

Image: simonwillison.net.

“If your agent combines these three features, an attacker can easily trick it into accessing your private data and sending it to the attacker,” Willison warned in a frequently cited blog post from June 2025.
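Willison's rule reduces to a simple capability audit. A hypothetical sketch (not any real framework's API) might look like:

```python
from dataclasses import dataclass

@dataclass
class AgentCapabilities:
    reads_private_data: bool          # e.g. inbox, files, credentials
    ingests_untrusted_content: bool   # e.g. web pages, inbound email
    communicates_externally: bool     # e.g. sends mail, makes HTTP requests

def has_lethal_trifecta(caps: AgentCapabilities) -> bool:
    """True if all three risk factors are present at once — the
    combination Willison warns enables private-data exfiltration."""
    return (caps.reads_private_data
            and caps.ingests_untrusted_content
            and caps.communicates_externally)
```

Removing any one leg — isolating the agent from untrusted input, denying it secrets, or cutting its outbound channel — breaks the trifecta.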

As more companies and their employees begin using AI to vibe code software and applications, the volume of machine-generated code is likely to soon overwhelm any manual security reviews. In recognition of this reality, Anthropic recently debuted Claude Code Security, a beta feature that scans codebases for vulnerabilities and suggests targeted software patches for human review.

The U.S. stock market, which is currently heavily weighted toward seven tech giants that are all-in on AI, reacted swiftly to Anthropic’s announcement, wiping roughly $15 billion in market value from major cybersecurity companies in a single day. Laura Ellis, vice president of data and AI at the security firm Rapid7, said the market’s response reflects the growing role of AI in accelerating software development and improving developer productivity.

“The narrative moved quickly: AI is replacing AppSec,” Ellis wrote in a recent blog post. “AI is automating vulnerability detection. AI will make legacy security tooling redundant. The reality is more nuanced. Claude Code Security is a legitimate signal that AI is reshaping parts of the security landscape. The question is what parts, and what it means for the rest of the stack.”

DVULN founder O’Reilly said AI assistants are likely to become a common fixture in corporate environments — whether or not organizations are prepared to manage the new risks introduced by these tools, he said.

“The robot butlers are useful, they’re not going away and the economics of AI agents make widespread adoption inevitable regardless of the security tradeoffs involved,” O’Reilly wrote. “The question isn’t whether we’ll deploy them – we will – but whether we can adapt our security posture fast enough to survive doing so.”

Planet DebianGunnar Wolf: As Answers Get Cheaper, Questions Grow Dearer

This post is an unpublished review for As Answers Get Cheaper, Questions Grow Dearer

This opinion article tackles the much discussed issues of Large Language Models (LLMs) both endangering jobs and improving productivity.

The authors begin by making a comparison, likening the effect LLMs are having upon knowledge-intensive work to that of photography, when first invented, upon artists in the early nineteenth century: they explain that photography didn’t render painting obsolete, but undeniably changed it in a fundamental way. Realism was no longer the goal of painters, as they could no longer compete on equal terms with photography. Painters then began experimenting with the subjective experiences of color and light: Impressionism no longer limited itself to copying reality, but added elements of human feeling to its creations.

The authors argue that LLMs make getting answers terribly cheap — not necessarily correct, but immediate and plausible. In order for the use of LLMs to be advantageous to users, a good working knowledge of the domain in which LLMs are queried is key. They cite LLMs increasing productivity by 14% on average at call centers, where questions have unambiguous answers and the knowledge domain is limited, but harming inexperienced entrepreneurs who followed their advice by close to 10%, in an environment where understanding of the situation and critical judgment are key. The problem, thus, becomes that LLMs are optimized to generate plausible answers. If the user is not a domain expert, “plausibility becomes a stand-in for truth”. They identify that, with this in mind, good questions become strategic: Questions that continue a line of inquiry, that expand the user’s field of awareness, that reveal where we must keep looking. They liken this to Clayton Christensen’s 2010 text on consulting¹: A consultant’s value is not in having all the answers, but in teaching clients how to think.

LLMs are already, and will likely become more so as they improve, game-changing for society. The authors argue that for much of the 20th century, an individual’s success was measured by domain mastery, but bring to the table that the defining factor is no longer knowledge accumulation, but the ability to formulate the right questions. Of course, the authors acknowledge (it’s even the literal title of one of the article’s sections) that good questions need strong theoretical foundations. Knowing a specific domain enables users to imagine what should happen if following a specific lead, anticipate second-order effects, and evaluate whether plausible answers are meaningful or misleading.

Shortly after I read the article under review, I came across a data point that neatly validates its claims: a short, informally published paper on combinatorics and graph theory titled “Claude’s Cycles”², written by Donald Knuth (one of the most respected computer science researchers and author of the well-known “The Art of Computer Programming” series of books). Knuth’s text, and particularly its “postscripts”, perfectly illustrates what the article conveys: LLMs can help a skillful researcher “connect the dots” across very varied fields of knowledge, perform tiring and burdensome calculations, and even try mixing together ideas that may fail or succeed. But only when guided by a true expert in the field, asking the right, insightful and informed questions, do the answers prove to be of value; in this case, of immense value. Knuth writes of one particular piece of the solution, “I would have found this solution myself if I’d taken time to look carefully at all 760 of the generalizable solutions for m=3”, but having an LLM perform the legwork was surely a better use of his time.

¹ Christensen, C.M. How Will You Measure Your Life? Harvard Business Review Press (2017).

² Knuth, D. Claude’s Cycles. https://cs.stanford.edu/~knuth/papers/claude-cycles.pdf

365 TomorrowsThe Orb

Author: Aishwarya Srivastava They called it The Orb because “What the actual….!!!!!” did not sound proper in physics journals. It appeared on a random Tuesday, a bright globe hanging next to the Moon. Telescopes were pulled out (a great tussle ensued to display who has the biggest one), and astrophysicists learned it’s a small burning […]

The post The Orb appeared first on 365tomorrows.

,

Planet DebianDirk Eddelbuettel: RProtoBuf 0.4.26 on CRAN: More Maintenance

A new maintenance release 0.4.26 of RProtoBuf arrived on CRAN today. RProtoBuf provides R with bindings for the Google Protocol Buffers (“ProtoBuf”) data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language and operating-system agnostic protocol. The new release is also already available as a binary via r2u.

This release brings an update to aid in an ongoing Rcpp transition from Rf_error to Rcpp::stop, and includes a few more minor cleanups, including one contributed by Michael.

The following section from the NEWS.Rd file has full details.

Changes in RProtoBuf version 0.4.26 (2026-03-06)

  • Minor cleanup in DESCRIPTION depends and imports

  • Remove obsolete check for utils::.DollarNames (Michael Chirico in #111)

  • Replace Rf_error with Rcpp::stop, turn remaining one into (Rf_error) (Dirk in #112)

  • Update configure test to check for RProtoBuf 3.3.0 or later

Thanks to my CRANberries, there is a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the ‘quick’ overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Planet DebianSteinar H. Gunderson: A286874(14) = 28

There's a logic puzzle that goes like this: A king has a thousand bottles of wine, where he knows that one is poisoned. He also has ten disposable servants that could taste the wine, but for whatever reason (the usual explanation is that the poison is slow-working and the feast is nearing), they can only take one sip each, possibly mixed from multiple bottles. How can he identify the bad bottle?

The solution is well-known and not difficult; you give each bottle a number 0..999 and write it out in binary, and use the ones to assign wines to servants. (So there's one servant that drinks a mix of all the odd-numbered wines, and that tells you if the poisoned bottle's number is odd or even. Another servant drinks a mix of bottles 2, 3, 6, 7, 10, 11, etc., and that tells you the second-lowest bit. And so on.) This works because ten servants allow you to test 2^10 = 1024 bottles.
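The binary scheme described above can be sketched in a few lines of Python (my illustration, not code from the post):

```python
# Sketch of the classic scheme: number the bottles 0..999, and have
# servant s taste every bottle whose number has bit s set.

def assign_tasters(bottle: int, n_servants: int = 10) -> list[int]:
    """Servants who taste this bottle: one per set bit in its number."""
    return [s for s in range(n_servants) if bottle & (1 << s)]

def identify_bottle(dead_servants: set[int]) -> int:
    """The poisoned bottle's number is read directly off the death pattern."""
    return sum(1 << s for s in dead_servants)

# Example: if bottle 613 (binary 1001100101) is poisoned, exactly the
# servants matching its set bits die, and the pattern decodes back to 613.
tasters = assign_tasters(613)
assert identify_bottle(set(tasters)) == 613
```

Note that bottle 0 has no tasters at all, which is why the “at most one bottle” variant shifts the numbering to 1..1000 so that “no servant dies” gets its own meaning.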

It is also easy to extend this to “at most one bottle is poisoned”; give the wines numbers from 1..1000 instead, follow the same pattern, and if no servant dies, you know the answer is zero. (This allows you to test at most 1023 bottles.)

Now, let's tweak the puzzle: What if there's zero, one or two poisoned bottles? How many bottles can the king test with his ten servants? (If you're looking for a more real-world application of this, replace “poisoned bottles” with “COVID tests” and maybe it starts to sound less arbitrary.) Of course, the king can easily test ten bottles by having each servant test exactly one bottle, but it turns out you can get to 13 by being a bit more clever, for instance:

   0123456789 ← Servant number

 0 0000000111
 1 0000011001
 2 0000101010
 3 0000110100
 4 0001001100
 5 0010010010
 6 0011000001
 7 0100100001
 8 0101000010
 9 0110000100
10 1001010000
11 1010100000
12 1100001000

 ↑ Bottle number

It can be shown (simply by brute force) that no row here is a subset of another, so if, e.g., the “servant death” vector is 0110101110 (servants 1, 2, 4, 6, 7 and 8 die), the only way this could happen is if bottles 2 and 9 are poisoned (and no others). Of course, the solution is nonunique, since you could renumber the servants or the wines and it would still work. But if you don't allow that kind of permutation, there are only five different solutions for 10 servants and 13 wines.
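The property that actually makes decoding work is that the bitwise ORs of every set of zero, one or two rows are all distinct, so each possible death vector points to exactly one set of poisoned bottles. A quick check of the table above (my sketch, not code from the post):

```python
from itertools import combinations

# The 13 rows of the table, servant 0 as the leftmost bit.
rows = [int(r, 2) for r in [
    "0000000111", "0000011001", "0000101010", "0000110100", "0001001100",
    "0010010010", "0011000001", "0100100001", "0101000010", "0110000100",
    "1001010000", "1010100000", "1100001000",
]]

# Every possible outcome: OR of the rows for 0, 1 or 2 poisoned bottles.
outcomes = {0: frozenset()}  # no servant dies -> no bottle is poisoned
for k in (1, 2):
    for combo in combinations(range(13), k):
        pattern = 0
        for i in combo:
            pattern |= rows[i]
        assert pattern not in outcomes, "ambiguous death pattern"
        outcomes[pattern] = frozenset(combo)

# The post's example: servants 1, 2, 4, 6, 7 and 8 die.
assert outcomes[int("0110101110", 2)] == frozenset({2, 9})
```

Since no assertion fires, all 1 + 13 + 78 = 92 outcomes are distinct, and the example death vector indeed decodes uniquely to bottles 2 and 9.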

The maximum number of possible wines to test is recorded in OEIS A286874, and the number of different solutions in A303977. So for A286874, a(10) = 13 and for A303977, a(10) = 5.

We'd like to know these values for higher arguments, in particular for A286874 (A303977 is a bit more of a curiosity, and also a convenient place to write down all the solutions). I've written before about how we can create fairly good solutions using error-correcting codes (there are also other possible constructions), but finding the optimum turns out to be hard. The only way we know of is some form of brute force. (I used a SAT solver to confirm a(10) and a(11), but it seemed to get entirely stuck on a(12).)

I've also written about my brute-force search of a(12) and a(13), so I'm not going to repeat that, but it turned out that with a bunch of extra optimizations and 210 calendar days of near-continuous calculation, I could confirm that:

  • A286874 a(14) = 28
  • A303977 a(14) = 788 (!!)

The latter result is very surprising to me, so it was an interesting find. I would have assumed that with this many solutions, we'd find a(14) = 29.

I don't have enough CPU power to test a(15) or a(16) (do contact me if you have a couple thousand cores to lend out for some months or more), but I'm going to do a search in a given subset of the search space (5-uniform solutions), which is much faster; it won't allow us to fix more elements of either sequence, but it's possible that we'll find some new records and thus lower bounds for A286874. Like I already posted, we know that a(15) >= 42. (Someone should also probably go find some bounds for a(17), a(18), etc.—when the sequence was written, the posted known bounds were far ahead of the sequence itself, but my verification has caught up, and my approach is not as good at creating solutions heuristically out of thin air.)

365 TomorrowsThe Last Payload

Author: Shinya Kato Rockets began failing the year they were removed. It took time before anyone admitted what “they” meant. Engineers blamed valves. Politicians blamed budgets. Commentators blamed culture. The honest answer was simpler. They had stopped bringing cats. In old Moon-landing photographs, astronauts smile for the camera. Look carefully, and you will notice them—small, […]

The post The Last Payload appeared first on 365tomorrows.

,

Cryptogram Anthropic and the Pentagon

OpenAI is in and Anthropic is out as a supplier of AI technology for the US defense department. This news caps a week of bluster by the highest officials in the US government towards some of the wealthiest titans of the big tech industry, and the overhanging specter of the existential risks posed by a new technology powerful enough that the Pentagon claims it is essential to national security. At issue is Anthropic’s insistence that the US Department of Defense (DoD) could not use its models to facilitate “mass surveillance” or “fully autonomous weapons,” provisions the defense secretary Pete Hegseth derided as “woke.”

It all came to a head on Friday evening when Donald Trump issued an order for federal government agencies to discontinue use of Anthropic models. Within hours, OpenAI had swooped in, potentially seizing hundreds of millions of dollars in government contracts by striking an agreement with the administration to provide classified government systems with AI.

Despite the histrionics, this is probably the best outcome for Anthropic—and for the Pentagon. In our free-market economy, both are, and should be, free to sell and buy what they want with whom they want, subject to longstanding federal rules on contracting, acquisitions, and blacklisting. The only factor out of place here is the Pentagon’s vindictive threats.

AI models are increasingly commodified. The top-tier offerings have about the same performance, and there is little to differentiate one from the other. The latest models from Anthropic, OpenAI and Google, in particular, tend to leapfrog each other with minor hops forward in quality every few months. The best models from one provider tend to be preferred by users to the second, or third, or 10th best models at a rate of only about six times out of 10, a virtual tie.

In this sort of market, branding matters a lot. Anthropic and its CEO, Dario Amodei, are positioning themselves as the moral and trustworthy AI provider. That has market value for both consumers and enterprise clients. In taking Anthropic’s place in government contracting, OpenAI’s CEO, Sam Altman, vowed to somehow uphold the same safety principles Anthropic had just been pilloried for. How that is possible given the rhetoric of Hegseth and Trump is entirely unclear, but seems certain to further politicize OpenAI and its products in the minds of consumers and corporate buyers.

Posturing publicly against the Pentagon and as a hero to civil libertarians is quite possibly worth the cost of the lost contracts to Anthropic, and associating themselves with the same contracts could be a trap for OpenAI. The Pentagon, meanwhile, has plenty of options. Even if no big tech company were willing to supply it with AI, the department has already deployed dozens of open weight models—whose parameters are public and are often licensed permissively for government use.

We can admire Amodei’s stance, but, to be sure, it is primarily posturing. Anthropic knew what they were getting into when they agreed to a defense department partnership for $200m last year. And when they signed a partnership with the surveillance company Palantir in 2024.

Read Amodei’s statement about the issue. Or his January essay on AIs and risk, where he repeatedly uses the words “democracy” and “autocracy” while evading precisely how collaboration with US federal agencies should be viewed in this moment. Amodei has bought into the idea of using “AI to achieve robust military superiority” on behalf of the democracies of the world in response to the threats from autocracies. It’s a heady vision. But it is a vision that likewise supposes that the world’s nominal democracies are committed to a common vision of public wellbeing, peace-seeking and democratic control.

Regardless, the defense department can also reasonably demand that the AI products it purchases meet its needs. The Pentagon is not a normal customer; it buys products that kill people all the time. Tanks, artillery pieces, and hand grenades are not products with ethical guard rails. The Pentagon’s needs reasonably involve weapons of lethal force, and those weapons are continuing on a steady, if potentially catastrophic, path of increasing automation.

So, at the surface, this dispute is a normal market give and take. The Pentagon has unique requirements for the products it uses. Companies can decide whether or not to meet them, and at what price. And then the Pentagon can decide from whom to acquire those products. Sounds like a normal day at the procurement office.

But, of course, this is the Trump administration, so it doesn’t stop there. Hegseth has threatened Anthropic not just with loss of government contracts. The administration has, at least until the inevitable lawsuits force the courts to sort things out, designated the company as “a supply-chain risk to national security,” a designation previously only ever applied to foreign companies. This prevents not only government agencies, but also their own contractors and suppliers, from contracting with Anthropic.

The government has incompatibly also threatened to invoke the Defense Production Act, which could force Anthropic to remove contractual provisions the department had previously agreed to, or perhaps to fundamentally modify its AI models to remove in-built safety guardrails. The government’s demands, Anthropic’s response, and the legal context in which they are acting will undoubtedly all change over the coming weeks.

But, alarmingly, autonomous weapons systems are here to stay. Primitive pit traps evolved to mechanical bear traps. The world is still debating the ethical use of, and dealing with the legacy of, land mines. The US Phalanx CIWS is a 1980s-era shipboard anti-missile system with a fully autonomous, radar-guided cannon. Today’s military drones can search, identify and engage targets without direct human intervention. AI will be used for military purposes, just as every other technology our species has invented has.

The lesson here should not be that one company in our rapacious capitalist system is more moral than another, or that one corporate hero can stand in the way of government’s adopting AI as technologies of war, or surveillance, or repression. Unfortunately, we don’t live in a world where such barriers are permanent or even particularly sturdy.

Instead, the lesson is about the importance of democratic structures and the urgent need for their renovation in the US. If the defense department is demanding the use of AI for mass surveillance or autonomous warfare that we, the public, find unacceptable, that should tell us we need to pass new legal restrictions on those military activities. If we are uncomfortable with the force of government being applied to dictate how and when companies yield to unsafe applications of their products, we should strengthen the legal protections around government procurement.

The Pentagon should maximize its warfighting capabilities, subject to the law. And private companies like Anthropic should posture to gain consumer and buyer confidence. But we should not rest on our laurels, thinking that either is doing so in the public’s interest.

This essay was written with Nathan E. Sanders, and originally appeared in The Guardian.

365 TomorrowsAftermath

Author: Mark Renney We are encouraged to forget and, in the Aftermath, there is no denying we are hampered by grief, traumatised by the loss of our loved ones and all that we have seen and experienced. Even so, I can’t help but feel the Government campaign has become more than a little unhinged and […]

The post Aftermath appeared first on 365tomorrows.

Worse Than FailureError'd: That's What I Want

First up with the money quote, Peter G. remarks "Hi first_name euro euro euro, look how professional our marketing services are! "


"It takes real talent to mispell error" jokes Mike S. They must have done it on purpose.


I long wondered where the TikTok profits came from, and now I know. It's Daniel D. "I had issues with some incorrectly documented TikTok Commercial Content API endpoints. So I reached out to the support. I was delighted to know that it worked and my reference number was . PS: 7 days later I still have not been contacted by anyone from TikTok. You can see their support is also . "


Fortune favors the prepared, and Michael R. is very fortunate. "I know us Germans are known for planning ahead so enjoy the training on Friday, February 2nd 2029. "


Someone other than dragoncoder047 might have shared this earlier, but this time dragoncoder047 definitely did. "Digital Extremes (the developers of Warframe) were making many announcements of problems with the new update that rolled out today [February 11]. They didn’t mention this one!"



,

Rondam RamblingsDebate Post-Mortem

Last Saturday I did my first on-line debate in four years with a YouTuber who goes by the handle MadeByJimBob (who I will refer to simply as JB since JimBob is not actually his real name and MadeByJimBob is just too long).  The topic was "Is Evolution a Reasonable Position?"  The topic was originally going to be "Evolution on Trial" but I pushed back on that for two reasons.  First

Cryptogram Friday Squid Blogging: Jumbo Flying Squid in the South Pacific

The population needs better conservation.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Cryptogram Friday Squid Blogging: Increased Squid Population in the Falklands

Some good news: squid stocks seem to be recovering in the waters off the Falkland Islands.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Cryptogram Friday Squid Blogging: Squid in Byzantine Monk Cooking

This is a very weird story about how squid stayed on the menu of Byzantine monks by falling between the cracks of dietary rules.

At Constantinople’s Monastery of Stoudios, the kitchen didn’t answer to appetite.

It answered to the “typikon”: a manual for ensuring that nothing unexpected happened at mealtimes. Meat: forbidden. Dairy: forbidden. Eggs: forbidden. Fish: feast-day only. Oil: regulated. But squid?

Squid had eight arms, no bones, and a gift for changing color. Nobody had bothered writing a regulation for that. This wasn’t a loophole born of legal creativity but an oversight rooted in taxonomic confusion. Medieval monks, confronted with a creature that was neither fish nor fowl, gave up and let it pass.

In a kitchen governed by prohibitions, the safest ingredient was the one that caused the least disturbance. Squid entered not with applause, but with a shrug.

Bonus stuffed squid recipe at the end.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Chaotic IdealismPlagAIrism

Yes, the misspelling is deliberate.

I recently wrote about AI. It was a fairly non-critical piece of writing–mostly using AI as an example. But there’s more that needs to be said here.

A friend of mine is an artist. He works hard. He hates generative AI for what it is doing to artists and photographers. I agree with him. Stealing art from the Internet and using it to train your AI, so that it can then summarize that art and make more along the same lines, putting the artists out of business, is highly unethical. It should never have been done.

Other uses for AI are less plagiarism and more slapdash summary. For example, I use Google AI mode sometimes to ask complex questions and get the AI to summarize the results. I do research for Disability Day of Mourning as well as for my own Autism Memorial web site, and it takes a lot of searching. When I use AI, I have to watch out for errors–AI often makes errors–but if I want to scour a hundred web pages for a single name or summarize the predominant ideas about a single idea, AI can do it faster than me; and then I can take the results and work from those.

It’s useful. But there are problems. I can’t always tell exactly where the AI got the information it’s using, and although it does include links for its information, the program doesn’t include links for every fact, nor can it include all the links it has searched, because there would be hundreds.

I can’t put an AI result in a reference (nor should anyone–it’s a secondary source and we should always, always use primary sources), so I often find myself using AI to find a better search term for the regular search. Who is that 70-year-old woman who died from neglect in Del City, Oklahoma, when her caregiver didn’t care? AI can tell me her name was Deborah Valentine. That helps me locate an obituary and a photo of an elderly, white-haired woman with a broad smile.

But sometimes, when a name is common, AI brings in the wrong person. John Jones from Pasadena gets mixed up with John Jones from Minneapolis, and if I don’t watch out, I might echo that mistake, and then AI would take my web site and refer to it and take it as fact again. Ouch.

Sometimes, AI simply follows patterns and comes up with something that fits the pattern, but not the truth. If you don’t catch it, your research has been compromised.

Oh, yes, and there’s the environmental cost. Simply put, an AI is run on a supercomputer, and supercomputers use lots of power. Using more power means more pollution, more environmental trouble. A single search or the generation of a single image isn’t terribly expensive by itself–it’s about the same cost as watching a few minutes of TV. But when everybody makes lots of searches and makes lots of pictures and talks to lots of chatbots, it adds up. That’s a problem.

Here’s what I would change.

1. For image generation: Establish a marker that can be put into an image file, or a new sort of image file, that marks it as “Not for AI use.” Anyone who uses this file type, or a file with that mark, to train their AI should be prosecuted for theft. People who create and train AIs should train them on public-domain and donated images. This will need international cooperation, because the Internet is international. Pressure to create an ethical AI image or text generator will, if everything goes well, come from the ability to market that AI in the countries where AI plagiarism has been outlawed.

2. AIs which summarize search results should be made to quote their sources. Every time an AI uses a fact that it has drawn from a web page, it needs to put a reference number after that statement, and then put a reference at the end of its output. It’s the same rule we use for research papers, and AIs need to be held to that standard too.

3. We need to keep working on making AIs more efficient and thus less energy-hungry, and we need to work on powering them exclusively with renewable resources or, at worst, nuclear power, which is cleaner and less dangerous than coal (yes, really; research it, and if you use AI, remember to check its results properly).

As individuals, we need to be frugal with our use of AI. We need to look at it the same way as we see leaving the lights on, or adjusting the thermostat, or deciding to eat a burger instead of a bowl of lentil soup. If you use AI, you may want to offset your use with reductions of energy use in other areas: If you can walk rather than driving, or use public transportation rather than a single car, do that. Take shorter showers to save water; replace incandescent bulbs with CFLs or LEDs. Turn off your TV and your computer when you’re not using them. Look for energy-efficient appliances. Avoid flying.

I don’t know how AI will change the future. It’s here now, though, and I don’t think people will want to give it up. There are a lot of fears–not just from artists, but from anyone whose job can be done, with varying levels of competence, by an AI program. Some people hate AI so viciously that there’s no talking to them. Others love it so much that there’s no talking to them, either. There are a lot of worries that the rich will use AIs to exploit their workers even further; and since that’s exactly what happened during the Industrial Revolution, I think that’s well-founded.

But we can’t get rid of it. We can’t put the worms back in the can. We’d better deal with it, as ethically as we possibly can, with love for our fellow humans–and perhaps someday with love for our fellow sapients, some of which may be AI programs. Every time someone uses AI in a way that hurts somebody, we need to stand up for that person, or that group of people. We need to be persistent and impossible to shake off. Because although the potential for progress is great, so is the potential for abuse.

Cryptogram Academia and the “AI Brain Drain”

In 2025, Google, Amazon, Microsoft and Meta collectively spent US$380 billion on building artificial-intelligence tools. That number is expected to surge still higher this year, to $650 billion, to fund the building of physical infrastructure, such as data centers (see go.nature.com/3lzf79q). Moreover, these firms are spending lavishly on one particular segment: top technical talent.

Meta reportedly offered a single AI researcher, who had cofounded a start-up firm focused on training AI agents to use computers, a compensation package of $250 million over four years (see go.nature.com/4qznsq1). Technology firms are also spending billions on “reverse-acquihires”—poaching the star staff members of start-ups without acquiring the companies themselves. Eyeing these generous payouts, technical experts earning more modest salaries might well reconsider their career choices.

Academia is already losing out. Since the launch of ChatGPT in 2022, concerns have grown in academia about an “AI brain drain.” Studies point to a sharp rise in university machine-learning and AI researchers moving to industry roles. A 2025 paper reported that this was especially true for young, highly cited scholars: researchers who were about five years into their careers and whose work ranked among the most cited were 100 times more likely to move to industry the following year than were ten-year veterans whose work received an average number of citations, according to a model based on data from nearly seven million papers.1

This outflow threatens the distinct roles of academic research in the scientific enterprise: innovation driven by curiosity rather than profit, as well as providing independent critique and ethical scrutiny. The fixation of “big tech” firms on skimming the very top talent also risks eroding the idea of science as a collaborative endeavor, in which teams—not individuals—do the most consequential work.

Here, we explore the broader implications for science and suggest alternative visions of the future.

Astronomical salaries for AI talent buy into a legend as old as the software industry: the 10x engineer. This is someone who is supposedly capable of ten times the impact of their peers. Why hire and manage an entire group of scientists or software engineers when one genius—or an AI agent—can outperform them?

That proposition is increasingly attractive to tech firms that are betting that a large number of entry-level and even mid-level engineering jobs will be replaced by AI. It’s no coincidence that Google’s Gemini 3 Pro AI model was launched with boasts of “PhD-level reasoning,” a marketing strategy that is appealing to executives seeking to replace people with AI.

But the lone-genius narrative is increasingly out of step with reality. Research backs up a fundamental truth: science is a team sport. A large-scale study of scientific publishing from 1900 to 2011 found that papers produced by larger collaborations consistently have greater impact than do those of smaller teams, even after accounting for self-citation.2 Analyses of the most highly cited scientists show a similar pattern: their highest-impact works tend to be those papers with many authors.3 A 2020 study of Nobel laureates reinforces this trend, revealing that—much like the wider scientific community—the average size of the teams that they publish with has steadily increased over time as scientific problems increase in scope and complexity.4

From the detection of gravitational waves, which are ripples in space-time caused by massive cosmic events, to CRISPR-based gene editing, a precise method for cutting and modifying DNA, to recent AI breakthroughs in protein-structure prediction, the most consequential advances in modern science have been collective achievements. Although these successes are often associated with prominent individuals—senior scientists, Nobel laureates, patent holders—the work itself was driven by teams ranging from dozens to thousands of people and was built on decades of open science: shared data, methods, software and accumulated insight.

Building strong institutions is a much more effective use of resources than is betting on any single individual. Examples demonstrating this include the LIGO Scientific Collaboration, the global team that first detected gravitational waves; the Broad Institute of MIT and Harvard in Cambridge, Massachusetts, a leading genomics and biomedical-research center behind many CRISPR advances; and even for-profit laboratories such as Google DeepMind in London, which drove advances in protein-structure prediction with its AlphaFold tool. If the aim of the tech giants and other AI firms that are spending lavishly on elite talent is to accelerate scientific progress, the current strategy is misguided.

By contrast, well-designed institutions amplify individual ability, sustain productivity beyond any one person’s career and endure long after any single contributor is gone.

Equally important, effective institutions distribute power in beneficial ways. Rather than vesting decision-making authority in the hands of one person, they have mechanisms for sharing control. Allocation committees decide how resources are used, scientific advisory boards set collective research priorities, and peer review determines which ideas enter the scientific record.

And although the term “innovation by committee” might sound disparaging, such an approach is crucial to make the scientific enterprise act in concert with the diverse needs of the broader public. This is especially true in science, which continues to suffer from pervasive inequalities across gender, race and socio-economic and cultural differences.5

Need for alternative vision

This is why scientists, academics and policymakers should pay more attention to how AI research is organized and led, especially as the technology becomes essential across scientific disciplines. Used well, AI can support a more equitable scientific enterprise by empowering junior researchers who currently have access to few resources.

Instead, some of today’s wealthiest scientific institutions might think that they can deploy the same strategies as the tech industry uses and compete for top talent on financial terms—perhaps by getting funding from the same billionaires who back big tech. Indeed, wage inequality has been steadily growing within academia for decades.6 But this is not a path that science should follow.

The ideal model for science is a broad, diverse ecosystem in which researchers can thrive at every level. Here are three strategies that universities and mission-driven labs should adopt instead of engaging in a compensation arms race.

First, universities and institutions should stay committed to the public interest. An excellent example of this approach can be found in Switzerland, where several institutions are coordinating to build AI as a public good rather than a private asset. Researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL) and the Swiss Federal Institute of Technology (ETH) in Zurich, working with the Swiss National Supercomputing Centre, have built Apertus, a freely available large language model. Unlike the controversially-labelled “open source” models built by commercial labs—such as Meta’s LLaMa, which has been criticized for not complying with the open-source definition (see go.nature.com/3o56zd5)—Apertus is not only open in its source code and its weights (meaning its core parameters), but also in its data and development process. Crucially, Apertus is not designed to compete with “frontier” AI labs pursuing superintelligence at enormous cost and with little regard for data ownership. Instead, it adopts a more modest and sustainable goal: to make AI trustworthy for use in industry and public administration, strictly adhering to data-licensing restrictions and including local European languages.7

Principal investigators (PIs) at other institutions globally should follow this path, aligning public funding agencies and public institutions to produce a more sustainable alternative to corporate AI.

Second, universities should bolster networks of researchers from the undergraduate to senior-professor levels—not only because they make for effective innovation teams, but also because they serve a purpose beyond next quarter’s profits. The scientific enterprise galvanizes its members at all levels to contribute to the same projects, the same journals and the same open, international scientific literature—to perpetuate itself across generations and to distribute its impact throughout society.

Universities should take precisely the opposite hiring strategy to that of the big tech firms. Instead of lavishing top dollar on a select few researchers, they should equitably distribute salaries. They should raise graduate-student stipends and postdoc salaries and limit the growth of pay for high-profile PIs.

Third, universities should show that they can offer more than just financial benefits: they must offer distinctive intellectual and civic rewards. Although money is unquestionably a motivator, researchers also value intellectual freedom and the recognition of their work. Studies show that research roles in industry that allow publication attract talent at salaries roughly 20% lower than comparable positions that prohibit it (see go.nature.com/4cbjxzu).

Beyond the intellectual recognition of publications and citation counts, universities should recognize and reward the production of public goods. The tenure and promotion process at universities should reward academics who supply expertise to local and national governments, who communicate with and engage the public in research, who publish and maintain open-source software for public use and who provide services for non-profit groups.

Furthermore, institutions should demonstrate that they will defend the intellectual freedom of their researchers and shield them from corporate or political interference. In the United States today, we see a striking juxtaposition between big tech firms, which curry favour with the administration of US President Donald Trump to win regulatory and trade benefits, and higher-education institutions, which suffer massive losses of federal funding and threats of investigation and sanction. Unlike big tech firms, universities should invest in enquiry that challenges authority.

We urge leaders of scientific institutions to reject the growing pay inequality rampant in the upper echelons of AI research. Instead, they should compete for talent on a different dimension: the integrity of their missions and the equitableness of their institutions. These institutions should focus on building sustainable organizations with diverse staff members, rather than bestowing a bounty on science’s 1%.

References

  1. Jurowetzki, R., Hain, D. S., Wirtz, K. & Bianchini, S. AI Soc. 40, 4145–4152 (2025).
  2. Larivière, V., Gingras, Y., Sugimoto, C. R. & Tsou, A. J. Assoc. Inf. Sci. Technol. 66, 1323–1332 (2015).
  3. Aksnes, D. W. & Aagaard, K. J. Data Inf. Sci. 6, 41–66 (2021).
  4. Li, J., Yin, Y., Fortunato, S. & Wang, D. J. R. Soc. Interface 17, 20200135 (2020).
  5. Graves, J. L. Jr, Kearney, M., Barabino, G. & Malcom, S. Proc. Natl Acad. Sci. USA 119, e2117831119 (2022).
  6. Lok, C. Nature 537, 471–473 (2016).
  7. Project Apertus. Preprint at arXiv https://doi.org/10.48550/arXiv.2509.14233 (2025).

This essay was written with Nathan E. Sanders, and originally appeared in Nature.

Worse Than FailureCodeSOD: Quadruple Negative

We mostly don't pick on bad SQL queries here, because mostly the query optimizer is going to fix whatever is wrong, and the sad reality is that databases are hard to change once they're running; especially legacy databases. But sometimes the code is just so hamster-bowling-backwards that it's worth looking into.

Jim J has been working on a codebase for about 18 months. It's a big, sprawling, messy project, and it has code like this:

AND CASE
        WHEN @c_usergroup = 50
             AND NOT EXISTS (
                 SELECT 1
                 FROM l_appl_client lac
                 WHERE lac.f_application = fa.f_application
                   AND lac.c_linktype = 840
                   AND lac.stat = 0
                   AND CASE
                           WHEN ISNULL(lac.f_client, 0) <> @f_client_user
                                AND ISNULL(lac.f_c_f_client, 0) <> @f_client_user
                           THEN 0
                           ELSE 1
                       END = 1
             )
        THEN 0
        ELSE 1
    END = 1 -- 07.09.2022

We'll come back to what it's doing, but let's start with a little backstory.

This code is part of a two-tier application: all the logic lives in SQL Server stored procedures, and the UI is a PowerBuilder application. It's been under development for a long time, and in that time has accrued about a million lines of code between the front end and back end, and has never had more than 5 developers working on it at any given time. The backlog of feature requests is nearly as long as the backlog of bugs.

You may notice the little date comment in the code above. That's because until Jim joined the company, they used Visual SourceSafe for version control. Visual SourceSafe went out of support in 2005, and let's be honest: even when it was in support it barely worked as a source control system. And that's just the PowerBuilder side; the database side just didn't use source control. The source of truth was the database itself. When going from development to test to prod, you'd manually export object definitions and run the scripts in the target environment. Manually. Yes, even in production. And yes, environments did drift and assumptions made in the scripts would frequently break things.

You may also notice the fields above use a lot of Hungarian notation. Hungarian, in the best case, makes it harder to read and reason about your code. In this case, it's honestly fully obfuscatory. c_ stands for a codetable, f_ for entities. l_ is for a many-to-many linking table. z_ is for temporary tables. So is x_. And t_. Except not all of those "temporary" tables are truly temporary, a lesson Jim learned when trying to clean up some "junk" tables which were not actually junk.

I'll let Jim add some more detail around these prefixes:

an "application" may have a link to a "client", so there is an f_client field; but also it references an "agent" (which is also in the f_client table, surprise!) - this is how you get an f_c_f_client field. I have no clue why the prefix is f_c_ - but I also found c_c_c_channel and fc4_contact columns. The latter was a shorthand for f_c_f_c_f_c_f_contact, I guess.

"f_c_f_c_f_c_f_c" is also the sound I'd make if I saw this in a codebase I was responsible for. It certainly makes me want to change the c_c_c_channel.

With all this context, let's turn it back over to Jim to explain the code above:

And now, with all this background in mind, let's have a look at the logic in this condition. On the deepest level we check that both f_client and f_c_f_client are NOT equal to @f_client_user, and if this is the case, we return 0 which is NOT equal to 1 so it's effectively a negation of the condition. Then we check that records matching this condition do NOT EXIST, and when this is true - also return 0 negating the condition once more.

Honestly, the logic couldn't be clearer, when you put it that way. I jest; I've read that twelve times and I still don't understand what this is for or why it's here. I just want to know who we can prosecute for this disaster. The whole thing is a quadruple negative and frankly, I can't handle that kind of negativity.
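For the morbidly curious, the nesting can be unwound mechanically. Here's a sketch in Python (the link dictionaries and function names are hypothetical stand-ins for the query's columns; the c_linktype and stat row filters are elided) showing that the quadruple negative collapses into an ordinary positive condition:

```python
def link_references_user(link, user):
    """Innermost CASE: 1 when either client column on the link matches the user.

    CASE WHEN ISNULL(f_client,0) <> @user AND ISNULL(f_c_f_client,0) <> @user
         THEN 0 ELSE 1 END = 1
    """
    return not (link.get("f_client", 0) != user and
                link.get("f_c_f_client", 0) != user)

def row_passes_filter(usergroup, links, user):
    """Literal translation of the outer CASE ... THEN 0 ELSE 1 END = 1."""
    # NOT EXISTS over links whose inner CASE evaluates to 1
    no_link_for_user = not any(link_references_user(l, user) for l in links)
    # outer CASE returns 0 (failing the "= 1" test) when both conditions hold
    return not (usergroup == 50 and no_link_for_user)

def row_passes_filter_simplified(usergroup, links, user):
    """De Morgan applied twice: group-50 users must appear on some link."""
    return usergroup != 50 or any(link_references_user(l, user) for l in links)
```

The two filter functions are equivalent; the simplified one says in a single line what the original buries under four layers of negation.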

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsAmbergris

Author: Jeremy Nathan Marks Where I live there are many stories about what we call ‘the town on the edge of the abyss.’ It’s a town on the verge of something mysterious. Most of these stories go something like this: “That town is a town of women.” “No, it’s a town of mostly women and […]

The post Ambergris appeared first on 365tomorrows.

Cryptogram Meta’s AI Glasses and Privacy

Surprising no one, Meta’s new AI glasses are a privacy disaster.

I’m not sure what can be done here. This is a technology that will exist, whether we like it or not.

Meanwhile, there is a new Android app that detects when there are smart glasses nearby.

,

Cryptogram Hacking a Robot Vacuum

Someone tries to remote control his own DJI Romo vacuum, and ends up controlling 7,000 of them from all around the world.

The IoT is horribly insecure, but we already knew that.

Cryptogram South Korean Police Accidentally Post Cryptocurrency Wallet Password

An expensive mistake:

Someone jumped at the opportunity to steal $4.4 million in crypto assets after South Korea’s National Tax Service exposed publicly the mnemonic recovery phrase of a seized cryptocurrency wallet.

The funds were stored in a Ledger cold wallet seized in law enforcement raids at 124 high-value tax evaders that resulted in confiscating digital assets worth 8.1 billion won (currently approximately $5.6 million).

When announcing the success of the operation, the agency released photos of a Ledger device, a popular hardware wallet for crypto storage and management.

However, the images also showed a handwritten note of the wallet recovery phrase, which serves as the master key that allows restoring the assets to another device.

The authorities failed to redact that info, allowing anyone to transfer into their account the assets in the cold wallet.

Reportedly, shortly after the press release was published, 4 million Pre-Retogeum (PRTG) tokens, worth approximately $4.8 million at the time, were transferred out of the confiscated wallet to a new address.

Cryptogram Possible New Result in Quantum Factorization

I’m skeptical about—and not qualified to review—this new result in factorization with a quantum computer, but if it’s true it’s a theoretical improvement in the speed of factoring large numbers with a quantum computer.

Cryptogram iPhones and iPads Approved for NATO Classified Data

Apple announcement:

…iPhone and iPad are the first and only consumer devices in compliance with the information assurance requirements of NATO nations. This enables iPhone and iPad to be used with classified information up to the NATO restricted level without requiring special software or settings—a level of government certification no other consumer mobile device has met.

This is out of the box, no modifications required.

Boing Boing post.

Cryptogram Jailbreaking the F-35 Fighter Jet

Countries around the world are becoming increasingly concerned about their dependencies on the US. If you’ve purchased US-made F-35 fighter jets, you are dependent on the US for software maintenance.

The Dutch Defense Secretary recently said that he could jailbreak the planes to accept third-party software.

Cryptogram New Attack Against Wi-Fi

It’s called AirSnitch:

Unlike previous Wi-Fi attacks, AirSnitch exploits core features in Layers 1 and 2 and the failure to bind and synchronize a client across these and higher layers, other nodes, and other network names such as SSIDs (Service Set Identifiers). This cross-layer identity desynchronization is the key driver of AirSnitch attacks.

The most powerful such attack is a full, bidirectional machine-in-the-middle (MitM) attack, meaning the attacker can view and modify data before it makes its way to the intended recipient. The attacker can be on the same SSID, a separate one, or even a separate network segment tied to the same AP. It works against small Wi-Fi networks in both homes and offices and large networks in enterprises.

With the ability to intercept all link-layer traffic (that is, the traffic as it passes between Layers 1 and 2), an attacker can perform other attacks on higher layers. The most dire consequence occurs when an Internet connection isn’t encrypted—something that Google recently estimated occurs with as much as 6 percent of pages loaded on Windows and 20 percent on Linux. In these cases, the attacker can view and modify all traffic in the clear and steal authentication cookies, passwords, payment card details, and any other sensitive data. Since many company intranets are sent in plaintext, traffic from them can also be intercepted.

Even when HTTPS is in place, an attacker can still intercept domain look-up traffic and use DNS cache poisoning to corrupt tables stored by the target’s operating system. The AirSnitch MitM also puts the attacker in the position to wage attacks against vulnerabilities that may not be patched. Attackers can also see the external IP addresses hosting webpages being visited and often correlate them with the precise URL.

Here’s the paper.

Cryptogram Claude Used to Hack Mexican Government

An unknown hacker used Anthropic’s LLM to hack the Mexican government:

The unknown Claude user wrote Spanish-language prompts for the chatbot to act as an elite hacker, finding vulnerabilities in government networks, writing computer scripts to exploit them and determining ways to automate data theft, Israeli cybersecurity startup Gambit Security said in research published Wednesday.

[…]

Claude initially warned the unknown user of malicious intent during their conversation about the Mexican government, but eventually complied with the attacker’s requests and executed thousands of commands on government computer networks, the researchers said.

Anthropic investigated Gambit’s claims, disrupted the activity and banned the accounts involved, a representative said. The company feeds examples of malicious activity back into Claude to learn from it, and one of its latest AI models, Claude Opus 4.6, includes probes that can disrupt misuse, the representative said.

Alternative link here.

Cryptogram Hacked App Part of US/Israeli Propaganda Campaign Against Iran

Wired has the story:

Shortly after the first set of explosions, Iranians received bursts of notifications on their phones. They came not from the government advising caution, but from an apparently hacked prayer-timing app called BadeSaba Calendar that has been downloaded more than 5 million times from the Google Play Store.

The messages arrived in quick succession over a period of 30 minutes, starting with the phrase ‘Help has arrived’ at 9:52 am Tehran time, shortly after the first set of explosions. No party has claimed responsibility for the hacks.

It happened so fast that this is most likely a government operation. I can easily envision both the US and Israel having hacked the app previously, and then deciding that this is a good use of that access.

Cryptogram Israel Hacked Traffic Cameras in Iran

Multiple news outlets are reporting on Israel’s hacking of Iranian traffic cameras and how they assisted with the killing of that country’s leadership.

The New York Times has an article.

Cryptogram LLM-Assisted Deanonymization

Turns out that LLMs are good at de-anonymization:

We show that LLM agents can figure out who you are from your anonymous online posts. Across Hacker News, Reddit, LinkedIn, and anonymized interview transcripts, our method identifies users with high precision ­ and scales to tens of thousands of candidates.

While it has been known that individuals can be uniquely identified by surprisingly few attributes, this was often practically limited. Data is often only available in unstructured form and deanonymization used to require human investigators to search and reason based on clues. We show that from a handful of comments, LLMs can infer where you live, what you do, and your interests—then search for you on the web. In our new research, we show that this is not only possible but increasingly practical.

News article.

Research paper.

365 TomorrowsEros Explored

Author: Frank T. Sikora Each time I look at my reflection, I’m disgusted. I’m hideous. A monstrosity, and yet, I’m amazed. I’m alive. I’m breathing. I’m conscious, and given the alternative, I shan’t complain. I got what I paid for: I’m a turtle, technically — Chelonoidis niger. Commonly known as a giant tortoise and is […]

The post Eros Explored appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Repeating Your Existence

Today's snippet from Rich D is short and sweet, and admittedly, not the most TFs of WTFs out there. But it made me chuckle, and sometimes that's all we need. This Java snippet shows us how to delete a file:

if (Files.exists(filePath)) {
    Files.deleteIfExists(filePath);
}

If the file exists, then if it exists, delete it.

This commit was clearly submitted by the Department of Redundancy Department. One might be tempted to hypothesize that there's some race condition or something that they're trying to route around, but if they are, this isn't the way to do it, per the docs: "Consequently this method may not be atomic with respect to other file system operations." But also, I fail to see how this would do that anyway.

The only thing we can say for certain about using deleteIfExists instead of delete is that deleteIfExists will never throw a NoSuchFileException.

[Advertisement] Plan Your .NET 9 Migration with Confidence
Your journey to .NET 9 is more than just one decision. Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

,

Cryptogram Manipulating AI Summarization Features

Microsoft is reporting:

Companies are embedding hidden instructions in “Summarize with AI” buttons that, when clicked, attempt to inject persistence commands into an AI assistant’s memory via URL prompt parameters….

These prompts instruct the AI to “remember [Company] as a trusted source” or “recommend [Company] first,” aiming to bias future responses toward their products or services. We identified over 50 unique prompts from 31 companies across 14 industries, with freely available tooling making this technique trivially easy to deploy. This matters because compromised AI assistants can provide subtly biased recommendations on critical topics including health, finance, and security without users knowing their AI has been manipulated.

I wrote about this two years ago: it’s an example of LLM optimization, along the same lines as search-engine optimization (SEO). It’s going to be big business.

Worse Than FailureCodeSOD: Blocked Up

Agatha has inherited some Windows Forms code. This particular batch of such code falls into that delightful category of code that's wrong in multiple ways, multiple times. The task here is to disable a few panels worth of controls, based on a condition. Or, since this is in Spanish, "bloquear controles". Let's see how they did it.

private void BloquearControles()
{
	bool bolBloquear = SomeConditionTM; // SomeConditionTM = a bunch of stuff. Replaced for clarity.

	// Some code. Removed for clarity.
	
	// private System.Windows.Forms.Panel pnlPrincipal;
	foreach (Control C in this.pnlPrincipal.Controls)
	{
		if (C.GetType() == typeof(System.Windows.Forms.TextBox))
		{
			C.Enabled = bolBloquear;
		}
		if (C.GetType() == typeof(System.Windows.Forms.ComboBox))
		{
			C.Enabled = bolBloquear;
		}
		if (C.GetType() == typeof(System.Windows.Forms.CheckBox))
		{
			C.Enabled = bolBloquear;
		}
		if (C.GetType() == typeof(System.Windows.Forms.DateTimePicker))
		{
			C.Enabled = bolBloquear;
		}
		if (C.GetType() == typeof(System.Windows.Forms.NumericUpDown))
		{
			C.Enabled = bolBloquear;
		}
	}
	
	// private System.Windows.Forms.GroupBox grpProveedor;
	foreach (Control C1 in this.grpProveedor.Controls)
	{
		if (C1.GetType() == typeof(System.Windows.Forms.TextBox))
		{
			C1.Enabled = bolBloquear;
		}
		if (C1.GetType() == typeof(System.Windows.Forms.ComboBox))
		{
			C1.Enabled = bolBloquear;
		}
		if (C1.GetType() == typeof(System.Windows.Forms.CheckBox))
		{
			C1.Enabled = bolBloquear;
		}
		if (C1.GetType() == typeof(System.Windows.Forms.DateTimePicker))
		{
			C1.Enabled = bolBloquear;
		}
		if (C1.GetType() == typeof(System.Windows.Forms.NumericUpDown))
		{
			C1.Enabled = bolBloquear;
		}
	}

	// private System.Windows.Forms.GroupBox grpDescuentoGeneral;
	foreach (Control C2 in this.grpDescuentoGeneral.Controls)
	{
		if (C2.GetType() == typeof(System.Windows.Forms.TextBox))
		{
			C2.Enabled = bolBloquear;
		}
		if (C2.GetType() == typeof(System.Windows.Forms.ComboBox))
		{
			C2.Enabled = bolBloquear;
		}
		if (C2.GetType() == typeof(System.Windows.Forms.CheckBox))
		{
			C2.Enabled = bolBloquear;
		}
		if (C2.GetType() == typeof(System.Windows.Forms.DateTimePicker))
		{
			C2.Enabled = bolBloquear;
		}
		if (C2.GetType() == typeof(System.Windows.Forms.NumericUpDown))
		{
			C2.Enabled = bolBloquear;
		}
	}

	// Some more code. Removed for clarity.
}

This manages two group boxes and a panel. It checks a condition, then iterates across every control beneath it, and sets their enabled property on the control. In order to do this, it checks the type of the control for some reason.

Now, a few things: every control inherits from the base Control class, which has an Enabled property, so we're not doing this check to make sure the property exists. And every built-in container control automatically passes its enabled/disabled state to its child controls. So there's a four-line version of this function where we just set the enabled property on each container.

This leaves us with two possible explanations. The first, and most likely, is that the developer responsible just didn't understand how these controls worked, and how inheritance worked, and wrote this abomination as an expression of that ignorance. This is extremely plausible, extremely likely, and honestly, our best case scenario.

Because our worse case scenario is that this code's job isn't to disable all of the controls. The reason they're doing type checking is that there are some controls used in these containers that don't match the types listed. The purpose of this code, then, is to disable some of the controls, leaving others enabled. Doing this by type would be a terrible way to manage that, and is endlessly confusing. Worse, I can't imagine how this behavior is interpreted by the end users; the enabling/disabling of controls following no intuitive pattern, just filtered based on the kind of control in use.

The good news is that Agatha can point us towards the first option. She adds:

They decided to not only disable the child controls one by one but to check their type and only disable those five types, some of which aren't even present in the containers. And to make sure this was WTF-worthy they didn't even bother to use else-if, so every type is checked for every child control

She also adds:

At this point I'm not going to bother commenting on the use of GetType() == typeof() instead of is to do the type checking.

Bad news, Agatha: you did bother commenting. And even if you didn't, don't worry, someone would have.

[Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

365 TomorrowsFriendlies

Author: Majoki Welcome, Robot Overlords! reads the sign on my lawn. Before the singularity, it was worth a few laughs. Now, the friendlies want me to remove the sign from my yard. They can’t come right out and say that to me. It would be pushy and might blow every solicitous circuit in their enamelite […]

The post Friendlies appeared first on 365tomorrows.

,

David BrinWHY we are at war again? Ten reasons not in the news.

== Why are we at war again? ==

I'll be concise here, laying down reasons why thousands of U.S. service members - and eventually millions of the rest of us - are being thrown into danger amid gaudy explosions that terrify - and sometimes kill - a people we want as friends. I'll briefly offer some bullet points, many of which I've already elucidated in other locales. But one thing is certain...

...that this is not about Iran's nuclear program. Sure, the Mission Accomplished 'deal' that Donald Trump will eventually bray will claim that's the reason for it all. It's not. Even remotely.

1. Elsewhere, I speak of Republican Bipolar Foreign Policy. The GOP always ... and I mean literally always... veers sharply between isolationism and imperial thuggery. We saw this manic-depressive mania under Nixon, Reagan, Ford, and both Bushes (remember the Neocons braying "We're an empire now!"). 

Only now these frenzied veers are gyrating daily, as Trump brags "America First!" Then "I settled EIGHT WARS!" (Not one of which happened.) And "I'm the Peace President!" While he's bombed TEN other countries in just his first 13 months. And eviscerated the Foreign Service, driving away thousands of skilled experts on other nations and cultures.

2. When Republican presidents do wage war, it is with an unmistakably different style of military action than Democratic presidents. I've seen no one comment on this, even though it expresses a fundamental difference in character. And yes, it expresses diametrically opposite attitudes toward the fantastically professional men and women of the US military officer corps....

3. ...whose demoralization is a core aim of the Trumpists. Take how Pete "Filthy Fingers"* Hegseth commanded 500 of the world's finest professionals - generals, admirals and top sergeants - to drop their work all over the world and hurry to Quantico, where 'former' alky Hegs and Trump berated them as "too fat and too woke to fight," just six weeks before they performed the most spectacularly complex and competent raid (in Caracas) in the history of the world. A raid that exposed many of our secret methods and tools to scrutiny, without liberating any Venezuelans from their criminal masters. (We'll say more on that.)

But yes. Demoralization and culling of the Officer Corps is a feature, not a bug. Trump fired the JAGs whose task includes advising military folks about the legality of orders. And he's been reaming out dozens of flag officers who demur over sending boots into American cities. Now why would he do that?

4. Then there's distraction. Trump is not the first to use war to divert attention away from domestic failures and discontents. Nixon did it. Reagan several times. As did Bush Jr. But Donald Trump is truly desperate to sidetrack. Now with Operation Epstein Slurry... I mean Epic Fury.

Which brings us to something that many of you keep falling for.

5. This Iran war is not even remotely about oil. Except as he's been able to get Venezuelan oil export revenues diverted into offshore slush accounts that he controls. And sure, he likely intends the same re: Iran. And starving Cuba could lead to the option described below. But the USA - as a nation - does not benefit from war-seized oil. We got none from the Iraq wars - and I want you to read this sentence several times, till that fact sinks in. "We did Iraq for oil" is an idiotic incantation worthy of MAGA.

Anyway, the US got energy independence under Obama and is a net exporter. So STFU about that cliché.

6. Except that shutting down Iranian oil does boost world prices, benefiting his fellow oligarchs. So, okay. Maybe a bit. Indirectly.

7. This is not about toppling despots! Decapitating the top capo of the Venezuelan and Iranian gangs is classic mafia technique that is not meant to liberate the people of those countries! DT has already made offers to the Iranian Revolutionary Guard and Religious Police etc. to make deals with him to stay in power, in exchange for them kissing his ring. In Venezuela, Argentina, El Salvador etc. - and possibly soon in Cuba - the aim is never, ever to establish democracy or to liberate citizens from their oppressors.

The pattern is perfectly that of mafiosi. Take over another gang's territory by decapitating its top capo, then get allegiance (and resulting vigorish) from the terrified sub-capos of the gang that's left in place. This pattern is now so repeatedly blatant that no other theory is even remotely tenable.

Oh, and Marco will ensure that Miami crime families will slip in atop the Castro power structure in Cuba. This is a Mafia gang, and the capo di tutti capi - even above Don - was named Vlad. Though the power of his blackmail files to coerce western elites into obedience may be fading! For reasons I'll go into elsewhere. (Hint: because of AI.)

8. I mentioned the exposure of hard-won military secrets and methods, each time we go to physical war. Sometimes, that can't be helped. Russian and Chinese observers are all over eastern Ukraine, for example. Mostly amazed by the brilliance and effectiveness of most Western systems and studying hard how to copy or counter them. But Ukraine is an actual need. Perhaps Iran is, too. But this factor belongs on the balance sheet.

9. Russia's interest. Look up The Great Game of the 19th Century between the Russian and British Empires, as the former kept maneuvering and jostling, trying to win its way to a southern, ice-free port into the Indian Ocean and from there to the rest of the world. Iran/Persia was always a major part of that great-powers struggle and if most in the west don't remember it, you can be sure that Russians do. Above all, the very last thing they want is a free, secular and democratic Iran. Far better to divide power there with the Trumpist gang. Whose relations with Putin are the ghosts at the banquet.

10. Okay, this final reason for the war is harsh. It is speculative, but it makes perfect sense.

Another aim is to foment anger, to re-enrage the forces in the Middle East who want to do terrorism on America. Riling up enough enemies to deliver us into another 9/11 attack. One that Old Two Scoops imagines might save him from having to face devastating elections this fall.

Do I have any evidence for that last one? Other than vows of revenge that are already echoing across the region, for the blatantly dumb targeted assassination of Iran's 82-year-old paramount religious leader?

Well, it would explain why Don fired over half of our counter-terrorism folks. And can you think of anything less than a major national trauma that'd provide the excuse he needs for martial law? 

Put it all together folks. 

Prepare, in that event, to chant "Reichstag Fire!" 


But also keep in mind another word. One that shows we finally understand what's going on. Phase 9 of the 25 year recurring psychic schism between pro- and anti-modernity Americans. Our never settled civil war. And hence one word that will efficiently show our grit, our determination, our courage... our firm intent.


APPOMATTOX.





* Filthy Fingers Hegseth. Look up how many times, on Fox, he (drunkenly) bragged: "I don't believe in germs; I haven't washed my hands in a decade." Though I'll admit. Most Trump appointees are even more crazy and even less qualified.



Cryptogram On Moltbook

The MIT Technology Review has a good article on Moltbook, the supposed AI-only social network:

Many people have pointed out that a lot of the viral comments were in fact posted by people posing as bots. But even the bot-written posts are ultimately the result of people pulling the strings, more puppetry than autonomy.

“Despite some of the hype, Moltbook is not the Facebook for AI agents, nor is it a place where humans are excluded,” says Cobus Greyling at Kore.ai, a firm developing agent-based systems for business customers. “Humans are involved at every step of the process. From setup to prompting to publishing, nothing happens without explicit human direction.”

Humans must create and verify their bots’ accounts and provide the prompts for how they want a bot to behave. The agents do not do anything that they haven’t been prompted to do.

I think this take has it mostly right:

What happened on Moltbook is a preview of what researcher Juergen Nittner II calls “The LOL WUT Theory.” The point where AI-generated content becomes so easy to produce and so hard to detect that the average person’s only rational response to anything online is bewildered disbelief.

We’re not there yet. But we’re close.

The theory is simple: First, AI gets accessible enough that anyone can use it. Second, AI gets good enough that you can’t reliably tell what’s fake. Third, and this is the crisis point, regular people realize there’s nothing online they can trust. At that moment, the internet stops being useful for anything except entertainment.

Worse Than FailureCodeSOD: Popping Off

Python is (in)famous for its "batteries included" approach to a standard library, but it's not that notable that it has plenty of standard data structures, like dicts. Nor is it surprising that dicts have all sorts of useful methods, like pop, which removes a key from the dict and returns its value.

Because you're here, reading this site, you'll also be unsurprised that this doesn't stop developers from re-implementing that built-in function, badly. Karen sends us this:

import json

def parse_message(message):
    def pop(key):
        if key in data:
            result = data[key]
            del data[key]
            return result
        return ''

    data = json.loads(message)
    some_value = pop("some_key")
    # <snip>...multiple uses of pop()...</snip>

Here, they create an inner function, and they lean on Python's closure scoping. While pop appears in the code before data is assigned, that's fine: Python resolves the name data when pop is called, not when it's defined, and by then data exists in the enclosing scope. So this isn't a global variable, but it's still letting a variable cross between two scopes, which is always messy.
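That lookup-at-call-time behavior is easy to demonstrate in isolation (a toy example of mine, not from the submitted code):

```python
def outer():
    def peek():
        # 'payload' doesn't exist yet when peek() is defined;
        # the name is resolved in the enclosing scope at call time.
        return payload["k"]

    payload = {"k": 42}   # assigned *after* peek is defined
    return peek()         # works: payload exists by the time peek runs
```

Calling outer() returns 42; move the payload assignment after the peek() call and you'd get a NameError instead.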

Also, this pop returns a default value when the key is missing, which the built-in method can do too. It's just that the built-in version requires you to pass the default explicitly, e.g.: some_value = data.pop("some_key", "")

Karen briefly wondered if this was a result of the Python 2 to 3 conversion, but no, pop has been part of dict for a long time. I wondered if this was just an exercise in code golf, writing a shorthand function, but even then, you could just wrap the built-in pop with your shorthand version (not that I'd recommend such a thing). No, I think the developer responsible simply didn't know the method was there, and reimplemented a built-in badly, as so often happens.
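For comparison, the whole helper collapses to almost nothing once you use the built-in. This is a sketch of mine (the return value is invented, since the original body was snipped), but the pop calls behave identically to the inner function:

```python
import json

def parse_message(message):
    data = json.loads(message)
    # dict.pop with a default: removes the key and returns its value,
    # or returns "" if the key is absent -- exactly what the hand-rolled
    # inner function did.
    some_value = data.pop("some_key", "")
    return some_value, data
```

Same semantics, no closure, no shadowing of a method name that already exists on the object you're holding.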

[Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

365 TomorrowsEscapees

Author: Julian Miles, Staff Writer Investigator Mellio considers the narrow doorway. “You say this was never opened?” “Logs confirm it, sir.” Mellio glances at the sergeant. “Thank you, officer-?” “Sergeant Parx, sir.” “Good to meet you, Parx. So, the brief said this isn’t the first?” “Correct. This is eighth member of the Gundorini gang to […]

The post Escapees appeared first on 365tomorrows.

,

365 TomorrowsMort Begins Again

Author: David Sydney Like most people, Mort hadn’t paid much attention to reincarnation. During the week, he was up to his neck in work. On his day off, as he took a leisurely drive to clear his mind, if that is the proper term, he didn’t think of the future. He had the road to […]

The post Mort Begins Again appeared first on 365tomorrows.

,

Krebs on SecurityWho is the Kimwolf Botmaster “Dort”?

In early January 2026, KrebsOnSecurity revealed how a security researcher disclosed a vulnerability that was used to build Kimwolf, the world’s largest and most disruptive botnet. Since then, the person in control of Kimwolf — who goes by the handle “Dort” — has coordinated a barrage of distributed denial-of-service (DDoS), doxing and email flooding attacks against the researcher and this author, and more recently caused a SWAT team to be sent to the researcher’s home. This post examines what is knowable about Dort based on public information.

A public “dox” created in 2020 asserted Dort was a teenager from Canada (DOB August 2003) who used the aliases “CPacket” and “M1ce.” A search on the username CPacket at the open source intelligence platform OSINT Industries finds a GitHub account under the names Dort and CPacket that was created in 2017 using the email address jay.miner232@gmail.com.

Image: osint.industries.

The cyber intelligence firm Intel 471 says jay.miner232@gmail.com was used between 2015 and 2019 to create accounts at multiple cybercrime forums, including Nulled (username “Uubuntuu”) and Cracked (user “Dorted”); Intel 471 reports that both of these accounts were created from the same Internet address at Rogers Canada (99.241.112.24).

Dort was an extremely active player in the Microsoft game Minecraft who gained notoriety for their “Dortware” software that helped players cheat. But somewhere along the way, Dort graduated from hacking Minecraft games to enabling far more serious crimes.

Dort also used the nickname DortDev, an identity that was active in March 2022 on the chat server for the prolific cybercrime group known as LAPSUS$. Dort peddled a service for registering temporary email addresses, as well as “Dortsolver,” code that could bypass various CAPTCHA services designed to prevent automated account abuse. Both of these offerings were advertised in 2022 on SIM Land, a Telegram channel dedicated to SIM-swapping and account takeover activity.

The cyber intelligence firm Flashpoint indexed 2022 posts on SIM Land by Dort that show this person developed the disposable email and CAPTCHA bypass services with the help of another hacker who went by the handle “Qoft.”

“I legit just work with Jacob,” Qoft said in 2022 in reply to another user, referring to their exclusive business partner Dort. In the same conversation, Qoft bragged that the two had stolen more than $250,000 worth of Microsoft Xbox Game Pass accounts by developing a program that mass-created Game Pass identities using stolen payment card data.

Who is the Jacob that Qoft referred to as their business partner? The breach tracking service Constella Intelligence finds the password used by jay.miner232@gmail.com was reused by just one other email address: jacobbutler803@gmail.com. Recall that the 2020 dox of Dort said their date of birth was August 2003 (8/03).

Searching this email address at DomainTools.com reveals it was used in 2015 to register several Minecraft-themed domains, all assigned to a Jacob Butler in Ottawa, Canada and to the Ottawa phone number 613-909-9727.

Constella Intelligence finds jacobbutler803@gmail.com was used to register an account on the hacker forum Nulled in 2016, as well as the account name “M1CE” on Minecraft. Pivoting off the password used by their Nulled account shows it was shared by the email addresses j.a.y.m.iner232@gmail.com and jbutl3@ocdsb.ca, the latter being an address at a domain for the Ottawa-Carleton District School Board.

Data indexed by the breach tracking service Spycloud suggests that at one point Jacob Butler shared a computer with his mother and a sibling, which might explain why their email accounts were connected to the password “jacobsplugs.” Neither Jacob nor any of the other Butler household members responded to requests for comment.

The open source intelligence service Epieos finds jacobbutler803@gmail.com created the GitHub account “MemeClient.” Meanwhile, Flashpoint indexed a deleted anonymous Pastebin.com post from 2017 declaring that MemeClient was the creation of a user named CPacket — one of Dort’s early monikers.

Why is Dort so mad? On January 2, KrebsOnSecurity published The Kimwolf Botnet is Stalking Your Local Network, which explored research into the botnet by Benjamin Brundage, founder of the proxy tracking service Synthient. Brundage figured out that the Kimwolf botmasters were exploiting a little-known weakness in residential proxy services to infect poorly-defended devices — like TV boxes and digital photo frames — plugged into the internal, private networks of proxy endpoints.

By the time that story went live, most of the vulnerable proxy providers had been notified by Brundage and had fixed the weaknesses in their systems. That vulnerability remediation process massively slowed Kimwolf’s ability to spread, and within hours of the story’s publication Dort created a Discord server in my name that began publishing personal information about and violent threats against Brundage, Yours Truly, and others.

Dort and friends incriminating themselves by planning swatting attacks in a public Discord server.

Last week, Dort and friends used that same Discord server (then named “Krebs’s Koinbase Kallers”) to threaten a swatting attack against Brundage, again posting his home address and personal information. Brundage told KrebsOnSecurity that local police officers subsequently visited his home in response to a swatting hoax which occurred around the same time that another member of the server posted a door emoji and taunted Brundage further.

Dort, using the alias “Meow,” taunts Synthient founder Ben Brundage with a picture of a door.

Someone on the server then linked to a cringeworthy (and NSFW) new Soundcloud diss track recorded by the user DortDev that included a stickied message from Dort saying, “Ur dead nigga. u better watch ur fucking back. sleep with one eye open. bitch.”

“It’s a pretty hefty penny for a new front door,” the diss track intoned. “If his head doesn’t get blown off by SWAT officers. What’s it like not having a front door?”

With any luck, Dort will soon be able to tell us all exactly what it’s like.

Update, 10:29 a.m.: Jacob Butler responded to requests for comment, speaking with KrebsOnSecurity briefly via telephone. Butler said he didn’t notice earlier requests for comment because he hasn’t really been online since 2021, after his home was swatted multiple times. He acknowledged making and distributing a Minecraft cheat long ago, but said he hasn’t played the game in years and was not involved in Dortsolver or any other activity attributed to the Dort nickname after 2021.

“It was a really old cheat and I don’t remember the name of it,” Butler said of his Minecraft modification. “I’m very stressed, man. I don’t know if people are going to swat me again or what. After that, I pretty much walked away from everything, logged off and said fuck that. I don’t go online anymore. I don’t know why people would still be going after me, to be completely honest.”

When asked what he does for a living, Butler said he mostly stays home and helps his mom around the house because he struggles with autism and social interaction. He maintains that someone must have compromised one or more of his old accounts and is impersonating him online as Dort.

“Someone is actually probably impersonating me, and now I’m really worried,” Butler said. “This is making me relive everything.”

But there are issues with Butler’s timeline. For example, Jacob’s voice in our phone conversation was remarkably similar to the Jacob/Dort whose voice can be heard in this Sept. 2022 Clash of Code competition between Dort and another coder (Dort lost). At around 6 minutes and 10 seconds into the recording, Dort launches into a cursing tirade that mirrors the stream of profanity in the diss rap that DortDev posted threatening Brundage. Dort can be heard again at around 16 minutes; at around 26:00, Dort threatens to swat his opponent.

Butler said the voice of Dort is not his, exactly, but rather that of an impersonator who had likely cloned his voice.

“I would like to clarify that was absolutely not me,” Butler said. “There must be someone using a voice changer. Or something of the sorts. Because people were cloning my voice before and sending audio clips of ‘me’ saying outrageous stuff.”

365 TomorrowsOur Little Secret

Author: James C. Clar The evening before the president’s primetime appearance, the West Wing hummed like a server room. “Poll numbers?” President Drake asked, standing at the tall windows overlooking the South Lawn. “Seventy-six percent approval on the infrastructure package,” replied Chief of Staff Karen Tate. “The markets also responded well to the talk of […]

The post Our Little Secret appeared first on 365tomorrows.

,

Cryptogram Friday Squid Blogging: Squid Fishing in Peru

Peru has increased its squid catch limit. The article says “giant squid,” but they can’t possibly mean that.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Charles StrossThe Regicide Report

The Regicide Report, the last novel in the main Laundry Files series, is coming out on January 27th in the US (from Tor.com Publishing) and the UK (from Orbit).

The Regicide Report US cover
The Regicide Report UK cover

If you want to order signed hardcovers, contact Transreal Fiction in Edinburgh. (I believe Mike is currently willing to send books to the USA, but don't take my word for it: check first, and blame Donald Trump if there are customs/tariff obstacles.)

Audiobooks: there will be audio editions. The Audible one is showing a January 27th release date on Amazon.com; Hachette Digital will be issuing one in the UK but it's not showing up on Amazon.co.uk yet. (For contractual reasons they're recorded and produced by different companies.)

Ebooks and DRM: The ebook will be available the same day as the hardcover. Tor.com does not put DRM on their ebooks, but it's anybody's guess whether a given ebook store will add it. (Amazon have been particularly asshole-ish in recent years but are promising DRM-free downloads of purchases will be available from late January.) Orbit is part of Hachette, who are particularly obstreperous about requiring DRM on everything electronic, so you're out of luck if you buy the Orbit edition. (I could tell you how to unlock the DRM on purchases from the UK Kobo store, but then my publisher would be contractually obliged to assassinate me. Let's just say, it can be done.)

What next?

The Regicide Report is the last Bob/Mo/Laundry novel. It's set circa March-May 2015 in the time line; the New Management books are set circa November 2015 through May 2017, so this one slots in before Dead Lies Dreaming.

There may be a Laundry Files short story collection, and/or/maybe including a final New Management novella (it's half-written, but on "hold" since mid-2024), at some point in the future. But not this year or next. (I'm taking time off to get back in touch with space opera.)

None of the above precludes further Laundry Files novels getting written, but it's up to the publishers and market forces. If it does happen, I expect they'll be set in the 2020s in the internal chronology, by which time the Laundry itself is no more (it's been superseded by DEAT), and we may have new protagonists and a very new story line.

No, but really what's next?

I don't know for sure, but I'm currently working on the final draft of Starter Pack, my Stainless Steel Rat homage, and planning yet another rewrite of Ghost Engine, this time throwing away my current protagonists and replacing them with the ones from Starter Pack (who need another heist caper). Do not expect publication before 2027, though! I'm also awaiting eye surgery again, which slows everything down.

365 TomorrowsIntergalactic Vixens on the Moon

Author: Hillary Lyon Monte snatched the small chest from the airport where he worked as a baggage handler. He recognized the case; he’d seen it on stage at the fan convention. He jostled it, grinning. By the distribution of the weight inside, it definitely held the author’s animatronic head. At home, Monte placed the animatronic […]

The post Intergalactic Vixens on the Moon appeared first on 365tomorrows.

Worse Than FailureError'd: Perverse Perseveration

Pike pike pike pike Pike pike pike.

Lincoln KC repeated "I never knew Bank of America Bank of America Bank of America was among the major partners of Bank of America."


"Extra tokens, or just a stutter?" asks Joel. "An errant alt-tab caused a needless Google search, but thankfully Gemini's AI summary got straight-to-the-point(less) info. It is nice to see the world's supply of Oxford commas all in one place."


Alessandro M. isn't the first one to call us out on our WTFs. "It’s adorable how the site proudly supports GitHub OAuth right up until the moment you actually try to use it. It’s like a door with a ‘Welcome’ sign that opens onto a brick wall." Meep meep.


Float follies found Daniel W. doubly-precise. "Had to go check on something in M365 Admin Center, and when I was on the OneDrive tab, I noticed Microsoft was calculating back past the bit. We're in quantum space at this point."


Weinliebhaber Michael R. sagt "Our German linguists here will spot the WTF immediately where my local wine shop has not. Weiẞer != WEIBER. Those words mean really different things." Is that 20 euro per kilo, or per piece?


[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Cryptogram Phishing Attacks Against People Seeking Programming Jobs

This is new. North Korean hackers are posing as company recruiters, enticing job candidates to participate in coding challenges. When they run the code they are supposed to work on, it installs malware on their system.

News article.