Planet Russell


Charles Stross: Barnum's Law of CEOs

It should be fairly obvious to anyone who's been paying attention to the tech news that many companies are pushing the adoption of "AI" (large language models) among their own employees--from software developers to management--and the push is coming from the top down, as C-suite executives order their staff to use AI, Or Else. But we know that LLMs reduce programmer productivity--one major study showed that "developers believed that using AI tools helped them perform 20% faster -- but they actually worked 19% slower." (Source.)

Another recent study found that AI use rises sharply with seniority: "AI adoption varies by seniority, with 87% of executives using it on the job, compared with 57% of managers and 27% of employees. It also finds that executives are 45% more likely to use the technology on the job than Gen Zers, the youngest members of today's workforce and the first generation to have grown up with the internet.

"The findings are based on a survey of roughly 7,000 professionals age 18 and older who work in the US, the UK, Australia, Canada, Germany, and New Zealand. It was commissioned by HR software company Dayforce and conducted online from July 22 to August 6."

Why are executives pushing the use of new and highly questionable tools on their subordinates, even when they reduce productivity?

I speculate that to understand this disconnect, you need to look at what executives do.

Gordon Moore, co-founder and long-time CEO of Intel, explained how he saw the CEO's job in his book on management: a CEO is a tie-breaker. Effective enterprises delegate decision-making to the lowest level possible, because obviously decisions should be made by the people most closely involved in the work. But if a dispute arises--for example, two business units disagreeing over which of two projects to assign scarce resources to--the two units need to consult a higher-level management team about where their projects fit into the enterprise's priorities. Then the argument can be settled ... or not, in which case it propagates up through the layers of the management tree until it lands in the CEO's in-tray. At that point the buck can no longer be passed, and someone (the CEO) has to make a ruling.

So a lot of a CEO's job, aside from leading on strategic policy, is to arbitrate between conflicting sides in an argument. They're a referee, or maybe a judge.

Now, today's LLMs are not intelligent. But they're very good at generating plausible-sounding arguments, because they're language models. If you ask an LLM a question, it does not answer the question; it uses its probabilistic model of language to generate something that closely resembles the semantic structure of an answer.

LLMs are effectively optimized for bamboozling CEOs into mistaking their output for intelligent activity, rather than autocomplete on steroids. And so the corporate leaders extrapolate from their own experience to that of their employees, and assume that anyone not sprinkling magic AI pixie dust on their work is obviously a dirty slacker or a luddite.

(And this false optimization serves the purposes of the AI companies very well indeed because CEOs make the big ticket buying decisions, and internally all corporations ultimately turn out to be Stalinist command economies.)

Anyway, this is my hypothesis: we're seeing an insane push for LLM adoption in all lines of work, however inappropriate, because LLMs directly exploit a cognitive bias to which senior management is vulnerable.

Worse Than Failure: Christmas in the Server Room III: The Search for Santa

How many times does it take to make something a tradition? Well, this is our third installment of Christmas in the Server Room, which seems pretty traditional at this point. Someday we'll run out of Christmas movies that I've watched, and then I'll need to start watching them intentionally. I'm dreading having to sit through some adaptation of the Christmas Shoes or whatever.

In any case, we're going to rate Christmas movies on their accuracy of representing the experience of IT workers. One 💾 grants it the realism of that movie where Adam Sandler fights Pac-Man, while 💾💾💾💾💾 tells us that it's as realistic as an instructional video about the Turbo-Encabulator.

Home Alone

A Rube-Goldberg-quality series of misunderstandings and coincidences leads to bratty child Kevin being left… home alone through the holidays, defending his home from burglars using a series of improvised, Rube-Goldberg-quality booby traps that escalate to cartoonish violence. The important lesson, however, is that the true meaning of Christmas is family.

Like most cybersecurity teams, Kevin is under-resourced, defending an incredibly vulnerable system from attackers. His MacGyvered-together collection of countermeasures all work, in the film, but none of them actually addresses the true vulnerabilities, and all of them could be easily bypassed by a competent attacker.

Kevin's traps are very much temporary solutions. But when temporary solutions become permanent, awful things can happen.

Rating: 💾💾

Santa Claus

This one will be familiar to any MST3k fans. Santa Claus runs a North Pole factory on child labor and whimsical inventions. Oh, also, his North Pole factory is in space. On Christmas Eve, as he tours the world to reward good boys and girls, Satan sends a demon to tempt children into mild naughtiness. Once again, the true meaning of Christmas is being with those you love, unless you're one of the children in Santa's workshop. Those kids are working on Christmas.

When things get truly dire for Santa, the child junior engineers staffing his workshop recognize that they can't manage the problem, so they fetch Merlin, the original greybeard. Yes, Merlin works for Santa, which implies that Santa and King Arthur may have met, and honestly, I'd rather watch that team-up movie. In any case, "terrified juniors clinging to a senior" is actually not very realistic. These days, the kids would just ask ChatGPT what to do, and end up putting glue on pizza.

Rating: 💾

Violent Night

What happens when we combine Santa Claus with Home Alone? We get the ultimate Santa-does-a-Die-Hard movie, Violent Night. Beverly D'Angelo plays Dick Cheney, an evil matriarch who runs a private military contractor and has stolen millions from US military operations abroad. Even more evil criminals take her family hostage to steal those millions. How are the criminals more evil than Dick Cheney? They're not only thieves, they also hate Christmas!

The family is all horrible people, except for Trudy, the young girl who has been good all year and still believes in Santa Claus. And that means Santa is coming to town. With grenades and sledge hammers and machine guns. The movie also features one of the "best" uses of "Santa uses Christmas magic to go up the chimney" at the end.

The entire villain plan is built around breaking into a super-protected electronic safe, and without spoiling too much, there's a twist in the film where someone has already broken into the safe, which makes one wonder how stupid the villains are (pretty stupid, actually). Also, while I understand the need for narrative convenience (and the Die Hard reference), the idea that the encrypted radios used by the evil villains and the walkie-talkie toy Trudy uses to talk to Santa could actually operate on the same bands is… a bit of a stretch. RF bands, allocations, and where and when you can use encryption are a whole thing.

Rating: 💾💾

Christmas Card from a Hooker in Minneapolis - Tom Waits

A sex worker in Minneapolis sends a Christmas card to Charlie, presumably a former client or supervisor of hers, updating him on her life. With each verse her life seems to be getting better--until the final verse, which reveals it's all been a charade and she needs help. Like most Tom Waits songs, it's the story of the kind of person who is pushed to the fringes of society, tragic but hopeful, and loaded with empathy.

I've recently been doing a job search of my own, and part of that has been "what dates did you work at $place?" and "give us some references?" and I realized that I'm terrible about keeping tabs on these kinds of things. The idea that I could send a Christmas card to a former client from years ago is absurd. Then again, how do we even know these cards get to Charlie? We just know that she wrote them, not that Charlie got them.

Rating: 🫦🫦🫦

I Am the Antichrist - The Dream Eaters

Two songs this year? Are there even any rules anymore? The lord of the damned has a poppy intro track. I suppose this shouldn't go on a Christmas list, because it likely belongs at the antipodal part of the year. Y'know. Being the Antichrist and all.

Rating: 🪩🪩🪩🪩🪩

Star Trek II: The Wrath of Khan

An aging Captain Kirk is haunted by a mistake of his past: Khan Noonien Singh is back for revenge. This "Horatio Hornblower in Space" riff on Trek is packed with themes: revenge, sacrifice, the frightening power of technology, and an object lesson on why you shouldn't put things in your ears. It also proves that the best, most exciting space battles aren't swooping, wooshing, pew pew pews, but tense games of cat-and-mouse.

As for its Christmas connections? What greater gift can Spock give to his crew but himself? His ultimate sacrifice is what ties the movie together, and of course, it means we got this incredible Christmas ornament out of it. Of all the Christmas spirits I have ever known, his was the most human.

The whole prefix-code thing is a pretty incredible security blunder. A remote back door into any Starfleet vessel, guarded only by a 5-digit code? A 5-digit code that's stored in a database on every other starship? So if an enemy captures one vessel, they can thwart the entire fleet unless everyone updates their prefix codes? That's a terrible security posture! And incredibly realistic! That is likely what the future will look like. So I guess that's a credible security blunder, if we're being pedantic.

I bet they store the passwords in plain text too!

Rating: 💾💾💾💾💾


365 Tomorrows: The Ghosts of the City

Author: Daniel Miltz They live remote, because living remote they remember everything. The neighborhood leans inward like old men listening, and the people hold faces that don’t blink. During the day, the ghosts come out wearing the habits they died in: a man still counting coins that lost their value in another country, a woman […]


Planet Debian: Russ Allbery: Review: Machine

Review: Machine, by Elizabeth Bear

Series: White Space #2
Publisher: Saga Press
Copyright: October 2020
ISBN: 1-5344-0303-5
Format: Kindle
Pages: 485

Machine is a far-future space opera. It is a loose sequel to Ancestral Night, but you do not have to remember the first book to enjoy this book and they have only a couple of secondary characters in common. There are passing spoilers for Ancestral Night in the story, though, if you care.

Dr. Brookllyn Jens is a rescue paramedic on Synarche Medical Vessel I Race To Seek the Living. That means she goes into dangerous situations to get you out of them, patches you up enough to not die, and brings you to doctors who can do the slower and more time-consuming work. She was previously a cop (well, Judiciary, which in this universe is mostly the same thing) and then found that medicine, and specifically the flagship Synarche hospital Core General, was the institution in all the universe that she believed in the most.

As Machine opens, Jens is boarding the Big Rock Candy Mountain, a generation ship launched from Earth during the bad era before right-minding and joining the Synarche, back when it looked like humanity on Earth wouldn't survive. Big Rock Candy Mountain was discovered by accident in the wrong place, going faster than it was supposed to be going and not responding to hails. The Synarche ship that first discovered and docked with it is also mysteriously silent. It's the job of Jens and her colleagues to get on board, see if anyone is still alive, and rescue them if possible.

What they find is a corpse and a disturbingly servile early AI guarding a whole lot of people frozen in primitive cryobeds, along with odd artificial machinery that seems to be controlled by the AI. Or possibly controlling the AI.

Jens assumes her job will be complete once she gets the cryobeds and the AI back to Core General where both the humans and the AI can be treated by appropriate doctors. Jens is very wrong.

Machine is Elizabeth Bear's version of a James White Sector General novel. If you read this book without any prior knowledge, the way that I did, you may not realize this until the characters make it to Core General, but then it becomes obvious to anyone who has read White's series. Most of the standard Sector General elements are here: a vast space station with rings at different gravity levels and atmospheres, a baffling array of species, and the ability to load other people's personalities into your head to treat other species at the cost of discomfort and body dysmorphia. There's a gruff supervisor, a fragile alien doctor, and a whole lot of idealistic and well-meaning people working around complex interspecies differences. Sadly, Bear does drop White's entertainingly oversimplified species classification codes; this is the correct call for suspension of disbelief, but I kind of missed them.

I thoroughly enjoy the idea of the Sector General series, so I was delighted by an updated version that drops the sexism and the doctor/nurse hierarchy and adds AIs, doctors for AIs, and a more complicated political structure. The hospital is even run by a sentient tree, which is an inspired choice.

Bear, of course, doesn't settle for a relatively simple James White problem-solving plot. There are interlocking, layered problems here, medical and political, immediate and structural, that unwind in ways that I found satisfyingly twisty. As with Ancestral Night, Bear has some complex points to make about morality. I think that aspect of the story was a bit less convincing than Ancestral Night, in part because some of the characters use rather bizarre tactics (although I will grant they are the sort of bizarre tactics that I could imagine being used by well-meaning people who didn't think through all of the possible consequences). I enjoyed the ethical dilemmas here, but they didn't grab me the way that Ancestral Night did. The setting, though, is even better: an interspecies hospital was a brilliant setting when James White used it, and it continues to be a brilliant setting in Bear's hands.

It's also worth mentioning that Jens has a chronic inflammatory disease and uses an exoskeleton for mobility, and (as much as I can judge while not being disabled myself) everything about this aspect of the character was excellent. It's rare to see characters with meaningful disabilities in far-future science fiction. When present at all, they're usually treated like Geordi's sight: something little different than the differential abilities of the various aliens, or even a backdoor advantage. Jens has a true, meaningful disability that she has to manage and that causes a constant cognitive drain, and the treatment of her assistive device is complex and nuanced in a way that I found thoughtful and satisfying.

The one structural complaint that I will make is that Jens is an astonishingly talkative first-person protagonist, particularly for an Elizabeth Bear novel. This is still better than being inscrutable, but she is prone to such extended philosophical digressions or infodumps in the middle of a scene that I found myself wishing she'd get on with it already in a few places. This provides good characterization, in the sense that the reader certainly gets inside Jens's head, but I think Bear didn't get the balance quite right.

That complaint aside, this was very fun, and I am certainly going to keep reading this series. Recommended, particularly if you like James White, or want to see why other people do.

The most important thing in the universe is not, it turns out, a single, objective truth. It's not a hospital whose ideals you love, that treats all comers. It's not a lover; it's not a job. It's not friends and teammates.

It's not even a child that rarely writes me back, and to be honest I probably earned that. I could have been there for her. I didn't know how to be there for anybody, though. Not even for me.

The most important thing in the universe, it turns out, is a complex of subjective and individual approximations. Of tries and fails. Of ideals, and things we do to try to get close to those ideals.

It's who we are when nobody is looking.

Followed by The Folded Sky.

Rating: 8 out of 10

Charles Stross: Things upcoming

So: I've had surgery on one eye, and have new glasses to tide me over while the cataract in my other eye worsens enough to require surgery (I'm on the low priority waiting list in the meantime). And I'm about to head off for a fortnight of vacation time, mostly in Germany (which has the best Christmas markets) before coming home in mid-December and getting down to work on the final draft of Starter Pack.

Starter Pack is a book I wrote on spec--without a contracted publisher--this summer, when Ghost Engine just got a bit too much. It's a spin-off of Ghost Engine that started out as a joke mashup of two genres: "what if ... The Stainless Steel Rat got Isekai'd?" Nobody's writing the Rat these days, which I feel is a Mistake, so I decided to remedy that. This is my own take on the ideas, not a copy of Harry Harrison's late-1950s original, so it's a bit different, but it's mostly there now and it works as its own thing. Meanwhile, my agent read it and made some really good suggestions for how to make it more commercial, and "more commercial" is what pays the bills, so I'm all on board with that. Especially as it's not sold yet.

Ghost Engine is still in progress: I hit a wall and needed to rethink the ending, again. But at least I am writing: having working binocular vision is a sadly underrated luxury--at least, it's underrated until you have to do without it for a few months. Along the way, Ghost Engine required me to come up with a new story setting in which there is no general AI, no superintelligent AI, no mind uploading to non-biological substrates, and above all no singularity--but our descendants have gone interstellar in a big way thanks to that One Neat Magictech Trick I trialed in my novella Palimpsest back in 2009. (Yes, Ghost Engine and Starter Pack are both set very loosely in the same continuum as Palimpsest. Or maybe it's more accurate to say that Palimpsest is to these new novels what A Colder War was to the Laundry Files.) So I finally got back to writing far future wide screen space opera, even if you aren't going to be able to read any of it for at least a year.

Why do this, though?

Bluntly: I needed to change course. After the US election outcome of November 2024 it was pretty clear that we were in for a very bumpy ride over the next few years. The lunatics have taken over the asylum and the economy is teetering on the edge of a very steep precipice. It's not just the over-hyped AI bubble that's propping up the US tech sector and global stock markets--that would be bad enough--but macro policy is being set by feces-hurling baboons, and it really looks as if Trump is willing to invade Central America as a distraction gambit. All the world's a Reality TV show right now, and Reality TV is all about indulging our worst collective instincts.

It's too depressing to contemplate writing more Laundry Files stories; I get email from people who read the New Management as a happy, escapist fantasy these days because we've got a bunch of competent people battling to hold the centre together, under the aegis of a horrific ancient evil who is nevertheless a competent ancient evil. Unfortunately the ancient evil wins, and that's just not something I want to explore further right now.

I'm a popular entertainer, and it seems to me that in bad times people want entertainments that take them out of their current quagmire and offer them escape, or at least gratuitous adventures with a side-order of humour. I'm not much of an optimist about our short-term future (I don't expect to survive long enough to see the light at the end of the tunnel), so I can't really write solarpunk or hopepunk utopias, but I can write space operas in which absolutely horrible people are viciously mocked and my new protagonists can at least hope for a happy ending.

Upcoming Events

In the new year, I've got three SF conventions planned already: Iridescence (Eastercon 2026), Birmingham UK, 3-6 April; Satellite 9, Glasgow, 22-24 May; and Metropol con Berlin (Eurocon 2026), Berlin, 2-5 July. I'm also going to try and set up a reading/signing/book launch for The Regicide Report in Edinburgh; more here if I manage it.

As during previous Republican presidencies in the USA, it does not feel safe to visit that country, so I won't be attending the 2026 worldcon. However, the 2027 world science fiction convention will almost certainly take place in Montreal, which is in North America but not part of Trumpistan, so (health and budget permitting) I'll try to make it there.

(Assuming we've still got a habitable planet and a working economy, which kind of presupposes the POTUS isn't biting the heads off live chickens or rogering a plush sofa in the Oval Office, of course, neither of which can be taken for granted this century.)


Planet Debian: Daniel Lange: Getting scanning to work with Gimp on Trixie

Trixie ships Gimp 3.0.4, and the 3.x series has become incompatible with XSane, the common frontend for scanners on Linux.

Hence the maintainer, Jörg Frings-Fürst, has temporarily disabled the Gimp integration, in response to Debian bug #1088080.

There seems to be no tracking bug for getting the functionality back but people have been commenting on Debian bug #993293 as that is ... loosely related :-).

There are two options to get the scanning functionality back in Trixie until this is properly resolved by an updated XSane in Debian (e.g. via trixie-backports):

Lee Yingtong Li (RunasSudo) has created a Python script that calls XSane as a CLI application and published it at https://yingtongli.me/git/gimp-xsanecli/. This worked OK-ish for me, but I had to go find the scan in /tmp/ a number of times. This is a good stop-gap script if you need to scan from Gimp $now and are looking for a quick solution.

Upstream has completed the necessary steps to get XSane working as a Gimp 3.x plugin at https://gitlab.com/sane-project/frontend/xsane. Unfortunately compiling this is a bit involved, but I made a version that can be dropped into /usr/local/bin or $HOME/bin and works alongside Gimp and the system-installed XSane.

So:

  1. sudo apt install gimp xsane
  2. Download xsane-1.0.0-fit-003 (752kB, AMD64 executable for Trixie) and place it in /usr/local/bin (as root)
  3. sha256sum /usr/local/bin/xsane-1.0.0-fit-003
    # result needs to be af04c1a83c41cd2e48e82d04b6017ee0b29d555390ca706e4603378b401e91b2
  4. sudo chmod +x /usr/local/bin/xsane-1.0.0-fit-003
  5. # Link the executable into the Gimp plugin directory as the user running Gimp:
    mkdir -p $HOME/.config/GIMP/3.0/plug-ins/xsane/
    ln -s /usr/local/bin/xsane-1.0.0-fit-003 $HOME/.config/GIMP/3.0/plug-ins/xsane/
  6. Restart Gimp
  7. Scan from Gimp via File → Create → Acquire → XSane
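
Put together, the whole setup is a handful of commands. The same steps as above, as a copy-and-paste sketch:

    # as root
    apt install gimp xsane
    # (download xsane-1.0.0-fit-003 into /usr/local/bin first, then:)
    sha256sum /usr/local/bin/xsane-1.0.0-fit-003   # must match the checksum from step 3
    chmod +x /usr/local/bin/xsane-1.0.0-fit-003

    # as the user running Gimp
    mkdir -p $HOME/.config/GIMP/3.0/plug-ins/xsane/
    ln -s /usr/local/bin/xsane-1.0.0-fit-003 $HOME/.config/GIMP/3.0/plug-ins/xsane/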

The source code for the xsane executable above is available under GPL-2 at https://gitlab.com/sane-project/frontend/xsane/-/tree/c5ac0d921606309169067041931e3b0c73436f00. This points to the last upstream commit, from 27 September 2025, at the time of writing this blog article.

Worse Than Failure: Holiday Party

The holiday season is an opportunity for employers to show their appreciation for their staff. Lavish parties, extra time off, whatever. Even some of the worst employers I've had could put together a decent Christmas party.

But that doesn't mean they all go right.

For example, Mike S worked for one of those early music streaming startups. One year, the company booked a Russian restaurant in the neighborhood for the party. The restaurant was a gigantic space, with a ground level and a balcony level, but the company was only 70 people, so they perhaps overbought for the party. Everyone stuffed themselves on appetizers, and when the main course came out, it ended up as extremely fishy-smelling leftovers in the office kitchen.

Two years later, they booked a party at the same place. But lessons were learned: they only booked the balcony. This meant the ground floor was free for someone else to book, and someone else did. Another party booked the ground floor, and they booked an extremely loud Russian pop band to play it.

The band was deafening and took absolutely no breaks. And while everyone had stuffed themselves on appetizers the previous time, this time there were barely any. There also wasn't much main course coming out either. By 10PM, Mike was starving and deaf, so he left. At about 10:15, the food came out. But by then, most of the staff had left, which meant that once again, the office kitchen got stuffed with very fishy-smelling leftovers.

There was not a third Russian party.

Rachel went to her partner's holiday party. This large tech company was notorious for spending loads of money on the party, and they certainly booked a fairly amazing venue for it. But there was confusion with the catering order; while the company shelled out for a full buffet, the caterer decided to only provide finger foods, circulated through the party by waiters carrying plates. By 9PM, the employees had figured out where the kitchen was and were lying in ambush for the waiters. The small plates of chicken tenders and crab rangoons and spring rolls never made it more than two or three steps out of the kitchen before they were picked clean.

At least the company learned that lesson and stopped using that caterer. Though I think some of the wait staff may have been permanently traumatized by the corporate party version of The Most Dangerous Game.

But you know, not everything is about holiday parties, or days off. Companies have plenty of other ways to make their staff happy. Little benefits and perks can go a long way. Just take a page from Doug B's company, which put this sign on the badge reader:

Christmas will be a casual dress day.

I hear Doug's co-worker Bob Cratchit is going through some rough times.


365 Tomorrows: Alone

Author: Keisha Hartley Amara’s head knocked against the cold car window, jolting her awake. Her fingers were numb from clutching the long black case on her lap. The Uber driver sped down the winding path unbothered by the rain. Ahead, the dark spires of her grandmother’s home jutted above the crest of the driveway hill […]


xkcd: Sauropods


Cryptogram: Urban VPN Proxy Surreptitiously Intercepts AI Chats

This is pretty scary:

Urban VPN Proxy targets conversations across ten AI platforms: ChatGPT, Claude, Gemini, Microsoft Copilot, Perplexity, DeepSeek, Grok (xAI), Meta AI.

For each platform, the extension includes a dedicated “executor” script designed to intercept and capture conversations. The harvesting is enabled by default through hardcoded flags in the extension’s configuration.

There is no user-facing toggle to disable this. The only way to stop the data collection is to uninstall the extension entirely.

[…]

The data collection operates independently of the VPN functionality. Whether the VPN is connected or not, the harvesting runs continuously in the background.

[…]

What gets captured:

  • Every prompt you send to the AI
  • Every response you receive
  • Conversation identifiers and timestamps
  • Session metadata
  • The specific AI platform and model used

    Boing Boing post.

    Planet Debian: Jonathan Dowland: Remarkable

    My Remarkable tablet, displaying my 2025 planner.

    During my PhD, on a sunny summer’s day, I copied some papers to read onto an iPad and cycled down to an outdoor cafe next to the beach. Armed with a coffee and an ice cream, I sat and enjoyed the warmth. The only problem was that, due to the bright sunlight, I couldn’t see a damn thing.

    In 2021 I decided to take the plunge and buy the Remarkable 2, which was being heavily advertised at the time. Over the next four or so years, I made good use of it to read papers; read drafts of my own papers and chapters; read a small number of technical books; use it as a daily planner; and take meeting notes for work, my PhD and, later, personal matters.

    I didn’t buy the Remarkable stylus or folio cover, instead opting for a (at the time, slightly cheaper) LAMY AL-star EMR, and a fantastic fabric sleeve cover from Emmerson Gray.

    I installed a hack which let me use the Lamy’s button to activate an eraser, and which also added a bunch of other tweaks. I wouldn’t recommend that specific hack anymore as there are safer alternatives (personally untested, but e.g. https://github.com/isaacwisdom/RemarkableLamyEraser).

    Pros: the writing experience is unparalleled. Excellent. I enjoy writing with fountain pens on good paper, but that experience comes with inky fingers, dried-up nibs, and a growing pile of paper notebooks. The Remarkable is very nearly as good without those drawbacks.

    Cons: lower contrast than black on white paper, and no built-in illumination; it needs good light to read. Almost the opposite problem to the iPad! I’ve tried a limited number of external clip-on lights, but nothing is frictionless to use.

    The traditional two-column, wide-margin formatting for academic papers is a bad fit for the Remarkable’s size (just as it is for computer display sizes. Really, is it good for anything people use anymore?). You can pinch to zoom, which is OK, or pre-process papers (with e.g. Briss) to reframe them to be more suitable, but that’s laborious.

    The newer model, the Remarkable Paper Pro, might address both those issues: it’s bigger, has illumination, and has also added colour, which would be a nice-to-have. It’s also a lot more expensive.

    I had considered selling on the tablet after I finished my PhD. My current plan, inspired to some extent by my former colleague Aleksey Shipilëv, who makes great use of his, is to have a go at using it more often, to see if it continues to provide value for me: more noodling out thoughts for work tasks, more drawings (e.g. plans for 3D models) and more reading of tech books.

    Worse Than Failure: CodeSOD: A Case of Old Code

    We've talked about the For-Case anti-pattern many, many times. And while we've seen some wild variations, and some pretty hideous versions, I think we have yet to see the exact example Ashley H sends us:

    for (int i = 0; i < 4; i++) {
        if (i == 0) {
            step1();
        } else if (i == 1) {
            step2();
        } else if (i == 2) {
            step3();
        } else if (i == 3){
            finalStep();
        }
    }    
    

    The specific names of the functions have been anonymized, but this illustrates the key points of what Ashley found.

    It's been in the code base for some time, so she's not entirely certain where it came from, or what the company's code review practices were like at the time.

    You see, this kind of code doesn't appear fully formed. It gets created, one step, after another, after another, after another. It's like a loop, but… uh… in a line. Without looping.
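
    Unrolled, it is of course just four calls in a row. A shell-flavored sketch, keeping the anonymized step names (the echo bodies are stand-ins for whatever the real functions did):

    # the for-case construct above, minus the ceremony
    step1()     { echo "step 1"; }      # stand-ins for the anonymized functions
    step2()     { echo "step 2"; }
    step3()     { echo "step 3"; }
    finalStep() { echo "final step"; }

    step1
    step2
    step3
    finalStep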


    365 Tomorrows: Countdowner

    Author: Majoki Well into the neopandemic I noticed the countdown. Inside my left eyelid. A faint image, like a digital timer flickering. I couldn’t make out distinct digits in the rolling blur of numbers so there was no real way of knowing if it was counting up or down. But my gut knew. Immediately. Things […]


    Planet Debian: Daniel Kahn Gillmor: AI and Secure Messaging Don't Mix


    Over on the ACLU's Free Future blog, I just published an article titled AI and Secure Messaging Don't Mix.

    The blogpost assumes for the sake of the argument that people might actually want to have an AI involved in their personal conversations, and explores why Meta's Private Processing doesn't offer the level of assurance that they want it to offer.

    In short, the promises of "confidential cloud computing" are built on shaky foundations, especially against adversaries as powerful as Meta themselves.

    If you really want AI in your chat, the baseline step for privacy preservation is to include it in your local compute base, not to use a network service! But these operators clearly don't value private communication as much as they value binding you to their services.

    But let's imagine some secure messenger that actually does put message confidentiality first -- and imagine they had integrated some sort of AI capability into the messenger. That at least bypasses the privacy questions around AI use.

    Would you really want to talk with your friends, as augmented by their local AI, though? Would you want an AI, even one running locally with perfect privacy, intervening in your social connections?

    What if it summarized your friend's messages to you in a way that led you to misunderstand (or ignore) an important point your friend had made? What if it encouraged you to make an edgy joke that comes across wrong? Or to say something that seriously upsets a friend? How would you respond? How would you even know that it had happened?

    My handle is dkg. More times than i can count, i've had someone address me in a chat as "dog" and then cringe and apologize and blame their spellchecker/autocorrect. I can laugh these off because the failure mode is so obvious and transparent -- and repeatable. (also, dogs are awesome, so i don't really mind!)

    But when our attention (and our responses!) are being shaped and molded by these plausibility engines, how will we even know that mistakes are being made? What if the plausibility engine you've hooked into your messenger embeds subtle (or unsubtle!) bias?

    Don't we owe it to each other to engage with actual human attention?


    Planet Debian: Isoken Ibizugbe: Everybody Struggles

    That’s right: everyone struggles. You could be working on a project only to find a mountain of new things to learn, or your code might keep failing until you start to doubt yourself. I feel like that sometimes, wondering if I’m good enough. But in those moments, I whisper to myself: “You don’t know it yet; once you do, it will get easy.”

    While contributing to the Debian openQA project, there was so much to learn, from understanding what Debian actually is to learning the various installation methods and image types. I then had to tackle the installation and usage of openQA itself. I am incredibly grateful for the installation guide provided by Roland Clobus and the documentation on writing code for openQA.

    Overcoming Technical Hurdles

    Even with amazing guides, I hit major roadblocks. Initially, I was using Windows with VirtualBox, but openQA couldn’t seem to run the tests properly. Despite my mentors (Roland and Phil) suggesting alternatives, the issues persisted. I actually installed openQA twice on VirtualBox and realized that if you miss even one small step in the installation, it becomes very difficult to move forward. Eventually, I took the big step and dual-booted my machine to Linux. Even then, the challenges didn’t stop. My openQA Virtual Machine (VM) ran out of allocated space and froze, halting my testing. I reached out on the IRC chat and received the help I needed to get back on track.

    My Research Line-up

    When I’m struggling for information, I start with my go-to first step for research, then follow up with the alternatives:

    1. Google: This is my first stop. It helped me navigate the switch to a Linux OS and troubleshoot KVM connection issues for the VM. Whether it’s an Ubuntu forum or a technical blog, I skim through until I find what can help.
    2. The “Upstream” Documentation: If Google doesn’t have the answer, I go straight to the official openQA documentation. This is a goldmine. It explains functions, how to use them, and lists usable test variables.
    3. The Debian openQA UI: While working on the apps_startstop tests, I look at previous similar tests on openqa.debian.net/tests. I checked the “Settings” tab to see exactly what variables were used and how the test was configured.
    4. Salsa (Debian’s GitLab): I sometimes reference the Salsa Debian openQA README and the developer guides: “Getting started” and the developer docs on how to write tests.

    I also had to learn the basics of the Perl programming language during the four-week contribution stage. While we don’t need to be Perl experts, I found it essential to understand the logic so I can explain my work to others.

    I’ve spent a lot of time studying the codebase, which is time-consuming but incredibly valuable. For example, my apps_startstop test command originally used a long list of applications via ENTRYPOINT. I began to wonder if there was a more efficient way. With Roland’s guidance, I explored the main.pm file. This helped me understand how the apps_startstop function works and how it calls variables. I also noticed there are utility functions that are called in tests. I check those too and try to understand what they do, so that I know whether I need them or not.

    I know I still have a lot to learn, and yes, the doubt still creeps in sometimes. But I am encouraged by the support of my mentors and the fact that they believe in my ability to contribute to this project.
    If you’re struggling too, just remember: you don’t know it yet; once you do, it will get easy.

    Planet Debian: Jonathan McDowell: NanoKVM: I like it

    I bought a NanoKVM. I’d heard some of the stories about how terrible it was beforehand, and some I didn’t learn about until afterwards, but at £52, including VAT + P&P, that seemed like an excellent bargain for something I was planning to use in my home network environment.

    Let’s cover the bad press first. apalrd did a video, entitled NanoKVM: The S stands for Security (Armen Barsegyan has a write up recommending a PiKVM instead that lists the objections raised in the video). Matej Kovačič wrote an article about the hidden microphone on a Chinese NanoKVM. Various other places have picked up both of these and still seem to be running with them, 10 months later.

    Next, let me explain where I’m coming from here. I have over 2 decades of experience with terrible out-of-band access devices. I still wince when I think of the Sun Opteron servers that shipped with an iLOM that needed a 32-bit Windows browser in order to access it (IIRC some 32 bit binary JNI blob). It was a 64 bit x86 server from a company who, at the time, still had a major non-Windows OS. Sheesh. I do not assume these devices are fit for exposure to the public internet, even if they come from “reputable” vendors. Add into that the fact the NanoKVM is very much based on a development board (the LicheeRV Nano), and I felt I knew what I was getting into here.

    And, as a TL;DR, I am perfectly happy with my purchase. Sipeed have actually dealt with a bunch of apalrd’s concerns (GitHub ticket), which I consider to be an impressive level of support for this price point. Equally the microphone is explained by the fact this is a £52 device based on a development board. You’re giving it USB + HDMI access to a host on your network, if you’re worried about the microphone then you’re concentrating on the wrong bit here.

    I started out by hooking the NanoKVM up to my Raspberry Pi classic, which I use as a serial console / network boot tool for working on random bits of hardware. That meant the NanoKVM had no access to the outside world (the Pi is not configured to route, or NAT, for the test network interface), and I could observe what went on. As it happens you can do an SSH port forward of port 80 with this sort of setup and it all works fine - no need for the NanoKVM to have any external access, and it copes happily with being accessed as http://localhost:8000/ (though you do need to choose MJPEG as the video mode, more forwarding or enabling HTTPS is needed for an H.264 WebRTC session).
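
    That forwarding setup is nothing exotic; it looks something like this (a sketch, and the host names here are made up):

    # forward local port 8000 to the NanoKVM's web interface on the test network
    ssh -L 8000:nanokvm.test:80 user@raspberrypi
    # then browse to http://localhost:8000/ and pick the MJPEG video mode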

    IPv6 is enabled in the kernel. My test setup doesn’t have router advertisements configured, but I could connect to the web application over the v6 link-local address that came up automatically.

    My device reports:

    Image version:              v1.4.1
    Application version:        2.2.9
    

    That’s recent, but the GitHub releases page has 2.3.0 listed as more recent.

    Out of the box it’s listening on TCP port 80. SSH is not running, but there’s a toggle to turn it on and the web interface offers a web based shell (with no extra authentication over the normal login). On first use I was asked to set a username + password. Default access, as you’d expect from port 80, is HTTP, but there’s a toggle to enable HTTPS. It generates a self signed certificate - for me it had the CN localhost but that might have been due to my use of port forwarding. Enabling HTTPS does not disable HTTP, but HTTP just redirects to the HTTPS URL.

    As others have discussed it does a bunch of DNS lookups, primarily for NTP servers but also for cdn.sipeed.com. The DNS servers are hard coded:

    ~ # cat /etc/resolv.conf
    nameserver 192.168.0.1
    nameserver 8.8.4.4
    nameserver 8.8.8.8
    nameserver 114.114.114.114
    nameserver 119.29.29.29
    nameserver 223.5.5.5
    

    This is actually restored on boot from /boot/resolv.conf, so if you want changes to persist you can just edit that file. NTP is configured with a standard set of pool.ntp.org servers in /etc/ntp.conf (this does not get restored on reboot, so it can just be edited in place). I had dnsmasq on the Pi set up to hand out DNS + NTP servers, but both were ignored (though actually udhcpc does write the DNS details to /etc/resolv.conf.dhcp).

    My assumption is the lookup of cdn.sipeed.com is for firmware updates (as I bought the NanoKVM cube it came fully installed, so no need for a .so download to make things work); when working DNS was provided I witnessed attempts to connect over HTTPS. I’ve not bothered digging further into this. I did go grab the latest.zip being served from the URL, which turned out to be v2.2.9, matching what I have installed, not the latest on GitHub.

    I note there’s an iptables setup (with nftables underneath) that’s not fully realised - it seems to be trying to allow inbound HTTP + WebRTC, as well as outbound SSH, but everything is default accept so none of it gets hit. Setting up a default deny outbound and tweaking a little should provide a bit more reassurance it’s not going to try and connect out somewhere it shouldn’t.
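
    Something along these lines would do it (a sketch only; I haven’t applied this to my device, and the ports you allow out depend on what you want it to reach):

    # default deny outbound, allowing replies plus DNS and NTP
    iptables -P OUTPUT DROP
    iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -A OUTPUT -p udp --dport 53 -j ACCEPT    # DNS
    iptables -A OUTPUT -p udp --dport 123 -j ACCEPT   # NTP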

    It looks like updates focus solely on the KVM application, so I wanted to take a look at the underlying OS. This is buildroot based:

    ~ # cat /etc/os-release
    NAME=Buildroot
    VERSION=-g98d17d2c0-dirty
    ID=buildroot
    VERSION_ID=2023.11.2
    PRETTY_NAME="Buildroot 2023.11.2"
    

    The kernel reports itself as 5.10.4-tag-. Somewhat ancient, but actually an LTS kernel. Except we’re now up to 5.10.247, so it obviously hasn’t been updated in some time.

    TBH, this is what I expect (and fear) from embedded devices. They end up with some ancient base OS revision and a kernel with a bunch of hacks that mean it’s not easily updated. I get that the margins on this stuff are tiny, but I do wish folk would spend more time upstreaming. Or at least updating to the latest LTS point release for their kernel.

    The SSH client/daemon is full-fat OpenSSH:

    ~ # sshd -V
    OpenSSH_9.6p1, OpenSSL 3.1.4 24 Oct 2023
    

    There are a number of CVEs fixed in later OpenSSL 3.1 versions, though at present nothing that looks too concerning from the server side. Yes, the image has tcpdump + aircrack installed. I’m a little surprised at aircrack (the device has no WiFi and even though I know there’s a variant that does, it’s not a standard debug tool the way tcpdump is), but there’s a copy of GNU Chess in there too, so it’s obvious this is just a kitchen-sink image. FWIW it looks like the buildroot config is here.

    Sadly the UART that I believe the bootloader/kernel are talking to is not exposed externally - the UART pin headers are for UART1 + 2, and I’d have to open up the device to get to UART0. I’ve not yet done this (but doing so would also allow access to the SD card, which would make trying to compile + test my own kernel easier).

    In terms of actual functionality it did what I’d expect. 1080p HDMI capture was fine. I’d have gone for a lower resolution, but I think that would have required tweaking on the client side. It looks like the 2.3.0 release allows EDID tweaking, so I might have to investigate that. The keyboard defaults to a US layout, which caused some problems with the | symbol until I reconfigured the target machine not to expect a GB layout.

    There’s also the potential to share out images via USB. I copied a Debian trixie netinst image to /data on the NanoKVM and was able to select it in the web interface and have it appear on the target machine easily. There’s also the option to fetch direct from a URL in the web interface, but I was still testing without routable network access, so didn’t try that. There’s plenty of room for images:

    ~ # df -h
    Filesystem                Size      Used Available Use% Mounted on
    /dev/mmcblk0p2            7.6G    823.3M      6.4G  11% /
    devtmpfs                 77.7M         0     77.7M   0% /dev
    tmpfs                    79.0M         0     79.0M   0% /dev/shm
    tmpfs                    79.0M     30.2M     48.8M  38% /tmp
    tmpfs                    79.0M    124.0K     78.9M   0% /run
    /dev/mmcblk0p1           16.0M     11.5M      4.5M  72% /boot
    /dev/mmcblk0p3           22.2G    160.0K     22.2G   0% /data
    

    The NanoKVM also appears as an RNDIS USB network device, with udhcpd running on the interface. IP forwarding is not enabled, and there’s no masquerading rules setup, so this doesn’t give the target host access to the “management” LAN by default. I guess it could be useful for copying things over to the target host, as a more flexible approach than a virtual disk image.

    One thing to note is this makes for a bunch of devices over the composite USB interface. There are 3 HID devices (keyboard, absolute mouse, relative mouse), the RNDIS interface, and the USB mass storage. I had a few occasions where the keyboard input got stuck after I’d been playing about with big data copies over the network and using the USB mass storage emulation. There is a HID-only mode (no network/mass storage) to try and help with this, and a restart of the NanoKVM generally brought things back, but something to watch out for. Again I see that the 2.3.0 application update mentions resetting the USB hardware on a HID reset, which might well help.

    As I stated at the start, I’m happy with this purchase. Would I leave it exposed to the internet without suitable firewalling? No, but then I wouldn’t do so for any KVM. I wanted a lightweight KVM suitable for use in my home network, something unlikely to see heavy use but that would save me hooking up an actual monitor + keyboard when things were misbehaving. So far everything I’ve seen says I’ve got my money’s worth from it.

    Cryptogram: Denmark Accuses Russia of Conducting Two Cyberattacks

    News:

    The Danish Defence Intelligence Service (DDIS) announced on Thursday that Moscow was behind a cyber-attack on a Danish water utility in 2024 and a series of distributed denial-of-service (DDoS) attacks on Danish websites in the lead-up to the municipal and regional council elections in November.

    The first, it said, was carried out by the pro-Russian group known as Z-Pentest and the second by NoName057(16), which has links to the Russian state.

    Slashdot thread.

    Cryptogram: Microsoft Is Finally Killing RC4

    After twenty-six years, Microsoft is finally upgrading the last remaining instance of the encryption algorithm RC4 in Windows.

    One of the most visible holdouts in supporting RC4 has been Microsoft. Eventually, Microsoft upgraded Active Directory to support the much more secure AES encryption standard. But by default, Windows servers have continued to respond to RC4-based authentication requests and return an RC4-based response. The RC4 fallback has been a favorite weakness hackers have exploited to compromise enterprise networks. Use of RC4 played a key role in last year’s breach of health giant Ascension. The breach caused life-threatening disruptions at 140 hospitals and put the medical records of 5.6 million patients into the hands of the attackers. US Senator Ron Wyden (D-Ore.) in September called on the Federal Trade Commission to investigate Microsoft for “gross cybersecurity negligence,” citing the continued default support for RC4.

    Last week, Microsoft said it was finally deprecating RC4 and cited its susceptibility to Kerberoasting, the form of attack, known since 2014, that was the root cause of the initial intrusion into Ascension’s network.

    Fun fact: RC4 was a trade secret until I published the algorithm in the second edition of Applied Cryptography in 1995.

    Planet Debian: Hellen Chemtai: Overcoming Challenges in OpenQA Images Testing: My Internship Journey

    Hello there 👋. Today will be an in-depth review of my work with the Debian OpenQA images testing team. I will highlight the struggles that I have had so far during my Outreachy internship.

    The OpenQA images testing team uses OpenQA to automatically install images, e.g. GNOME images. The images are then tested using tests written in Perl. My current tasks include the “speech install” and “capture all audio” tests. I am also installing a Live GNOME image using BalenaEtcher on Windows, then testing it. A set of similar tasks will also be collaborated on. While working on tasks, I have to go through the guides. I am also learning how Perl works so that I can edit and create tests. For every change made, I have to re-run the job in developer mode. I have to create needles that have matches and click coordinates. I have been stuck on some of these instances:

    1. During installation, my job would not process a second HDD I had added. Roland Clobus, one of my mentors from the team, gave me a variable to work with. The solution was adding “NUMDISKS=2” as part of the command.
    2. While working on a file, one of the needles would only work after file edits; afterwards it would fail to “assert_and_click”. What kept bugging me was why it was passing the first time and then failing. The solution was adding a “wait_still_screen” to the code. This ensured any screen changes had loaded before the clicking happened.
    3. I was stuck on finding the keys that would be needed for a context menu. I added “button => ‘right’” to the “assert_and_click” code.
    4. Windows 11 installation was constantly failing. Roland pointed out he was working on it, so I had to use Windows 10.
    5. The Windows 10 Virtual Machine does not connect to the internet because of update restrictions. I had to switch to a Linux Virtual Machine for a download job.
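
    As an illustration of that first fix, such variables can simply be appended when cloning an existing job. A sketch using openqa-clone-job (the job id here is made up):

    # clone a job, adding a second hard disk to the virtual machine
    openqa-clone-job --within-instance https://openqa.debian.net 12345 NUMDISKS=2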

    When I get stuck, I sometimes seek guidance from the mentors. I also look for solutions in the documentation. Here are some of the documents that have helped me get through some of these challenges.

    1. Installation and test-creation guides – https://salsa.debian.org/qa/openqa/openqa-tests-debian/-/tree/debian/documentation . These guides help with installing openQA and with creating tests.
    2. OpenQA official documentation – https://open.qa/docs/ . This documentation is very comprehensive. I used it recently to read about PUBLISH_HDD_n, which saves the updated version of an HDD_n I am using.
    3. OpenQA test API documentation – https://open.qa/api/testapi/ . This documentation shows me which parameters to use. I used it recently to find out how to right-click with the mouse and how to enter special characters.
    4. OpenQA variables file in GitLab – https://salsa.debian.org/qa/openqa/openqa-tests-debian/-/blob/debian/VARIABLES.md . This has explanations of the most commonly used variables.
    5. OpenQA repository in GitLab – https://salsa.debian.org/qa/openqa/openqa-tests-debian . I go through the Perl tests to understand how they work, then integrate my tests in a similar manner so that everything looks uniform.
    6. OpenQA tests – https://openqa.debian.net/tests . I use these tests to find machine settings. I also find test sequences and the assets I would need to create similar tests. I used it recently to look at how graphical login was being implemented, followed by shutdown.

    The list above is the documentation that is supposed to be used for these tests and for finding solutions. If I don’t find anything within these, I then ask Roland for help. I also try to go through the autoinst documentation linked from the GitLab README.md file: https://salsa.debian.org/qa/openqa/openqa-tests-debian/-/blob/debian/README.md . It is also comprehensive, but very technical.

    In general, I run into challenges, but there is always a way to solve them through the documentation provided. The mentors are also very helpful whenever we hit problems. I have gained team-contribution skills, upgraded my git skills, learned Perl, and learned how to test using OpenQA. I am still polishing how I make my needles better. My progress is thus good. We learn one day at a time.

    Planet Debian: Emmanuel Kasper: Configuring a mail transfer agent to interact with the Debian bug tracker

    Email interface of the Debian bug tracker

    The main interface of the Debian bug tracker, at http://bugs.debian.org, is e-mail, and modifications are made to existing bugs by sending an email to an address like 873518@bugs.Debian.org.

    The web interface allows you to browse bugs, but any addition to the bug itself will require an email client.

    This sounds a bit weird in 2025, as HTTP REST clients with OAuth access tokens are today the norm for command line tools interacting with online resources. However we should remember the Debian project goes back to 1993, and the bug tracker software, debbugs, was released in 1994. REST itself was first introduced in 2000, six years later.

    In any case, using an email client to create or modify bug reports is not a bad idea per se:

    • the internet mail protocol, SMTP, is a well known and standardized protocol defined in an IETF RFC.
    • no need for account creation and authentication: you just need an email address to interact. There is a risk of spam, but in my experience this has been very low. When authentication is needed, Debian Developers sign their work with their private GPG key.
    • you can use the bug tracker using the interface of your choice: webmail, graphical mail clients like Thunderbird or Evolution, text clients like Mutt or Pine, or command line tools like bts.
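
    For example, closing a bug is just a mail to its -done address. A sketch, using the sendmail interface we will set up below (the From address and version here are illustrative):

    $ /usr/sbin/sendmail 873518-done@bugs.debian.org <<'EOF'
    From: you@example.org
    Subject: fixed in latest upload

    Version: 1.2-3

    Fixed in the latest upload, closing.
    EOF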

    A system-wide minimal Mail Transfer Agent to send mail

    We can configure bts as an SMTP client, with username and password. In SMTP client mode, we would need to enter the SMTP settings from our mail service provider.

    The other option is to configure a Mail Transfer Agent (MTA) which provides a system-wide sendmail interface that all command line and automation tools can use to send email. For instance, reportbug and git send-email are able to use the sendmail interface. Why a sendmail interface? Because sendmail used to be the default MTA on Unix back in the day, thus many programs sending mail expect something which looks like sendmail locally.

    A popular, maintained and packaged minimal MTA is msmtp; we are going to use it.

    msmtp installation and configuration

    Installation is just an apt away:

    # apt install msmtp msmtp-mta
    # msmtp --version
    msmtp version 1.8.23
    

    You can follow this blog post to configure msmtp, including saving your mail account credentials in the Gnome keyring.
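    For reference, a minimal ~/.msmtprc could look like the sketch below. The host, port and account values are placeholders for your provider's settings, and the passwordeval line (querying the keyring with secret-tool) is only one of several options covered in that post:

    defaults
    auth           on
    tls            on
    tls_trust_file /etc/ssl/certs/ca-certificates.crt
    logfile        ~/.msmtp.log

    account        default
    host           smtp.example.org
    port           587
    from           user@example.org
    user           user@example.org
    # fetch the password from the Gnome keyring
    passwordeval   "secret-tool lookup host smtp.example.org"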

    Once installed, you can verify that msmtp-mta created a sendmail symlink.

    $ ls -l /usr/sbin/sendmail 
    lrwxrwxrwx 1 root root 12 16 avril  2025 /usr/sbin/sendmail -> ../bin/msmtp
    

    bts, git-send-email and reportbug will pipe their output to /usr/sbin/sendmail and msmtp will send the email in the background.

    Testing with a simple mail client

    Debian comes out of the box with a primitive mail client, bsd-mailx, that you can use to test your MTA setup. If you have configured msmtp correctly, you can send an email to yourself using:

    $ echo "hello world" | mail -s "my mail subject" user@domain.org
    

    Now you can open bugs for Debian with reportbug, tag them with bts, and send git-formatted patches from the command line with git send-email.
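    For example, a typical session could look like this (the bug number and patch file are hypothetical):

    $ reportbug hello                    # file a new bug interactively
    $ bts severity 873518 normal         # adjust metadata on an existing bug
    $ bts tags 873518 + patch
    $ git send-email --to=873518@bugs.debian.org 0001-fix-typo.patch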

    Planet DebianRussell Coker: Samsung 65″ QN900C 8K TV

    As a follow up from my last post about my 8K TV [1] I tested out a Samsung 65″ QN900C Neo QLED 8K that’s on sale at JB Hifi. According to the JB employee I spoke to, they are selling off the last of their 8K TVs and have no plans to get more.

    In my testing of that 8K TV, YouTube had a 3840*2160 viewport, which is better than the 1920*1080 of my Hisense TV. When running a web browser, the codeshack page reported it as 1920*1080 with a 1.25* pixel density (presumably a configuration option), which gave a usable resolution of 1536*749.

    The JB Hifi employee wouldn’t let me connect my own device via HDMI but said that it would work at 8K. I said “so if I buy it I can return it if it doesn’t do 8K HDMI?” and then he looked up the specs and found that it would only do 4K input on HDMI. It seems that actual 8K resolution might work on a Samsung streaming device but that’s not very useful particularly as there probably isn’t much 8K content on any streaming service.

    Basically, that allegedly 8K Samsung TV works at 4K at best.

    It seems to be impossible to buy an 8K TV or monitor in Australia that will actually display 8K content. ASUS has a 6K 32″ monitor with 6016*3384 resolution for $2016 [2]. Accounting for inflation, $2016 wouldn’t be the most expensive monitor I’ve ever bought, and hopefully prices will continue to drop.

    Rumour has it that there are 8K TVs available in China that actually take 8K input. Getting one to Australia might not be easy but it’s something that I will investigate.

    Also I’m trying to sell my allegedly 8K TV.

    Charles StrossIn the eyeball waiting room

    So, I'm cross-eyed and typing with one eye screwed shut, which sucks. Seeing an ophthalmologist tomorrow, expecting a priority referral to get the other eyeball stabbed. (It was not made clear to me at the time of the last stabbing that the hospital wouldn't see me again until my ophthalmologist referred me back to them. I'm fixing that oversight—hah—now.)

    Anyway, my reading fatigue has gotten bad again, to about the same extent it had gotten to when I more or less stopped reading for fun and writing ground to a halt (because what do you spend most writing time doing, if not re-reading?). So don't expect to hear much from me until I've been operated on and ideally gotten a new set of prescription lenses.

    Book news: A Conventional Boy is getting a UK paperback release (from Orbit), on January 6th 2026. And The Regicide Report, the 11th and final book in the main Laundry Files series, comes out on January 27th, 2026 in hardcover and ebook—from Orbit in the UK/EU/Aus/NZ, and from Tor.com in the USA.

    Note that if you want a complete run of the series in a uniform binding and page size you will need to wait until probably January 6th-ish, give or take, in 2027, then you'll need to order the British paperbacks because there is no single US publisher of the series. The first two books were published by Golden Gryphon (who no longer exist), then it was picked up by Ace in hardcover and sometimes paperback (The Nightmare Stacks never made it into paperback in the USA as the mass market distribution channel was imploding at the time), then got taken on by Tor.com from The Delirium Brief onwards, and Tor.com don't really do paperbacks at all—they're an ebook publisher who also distribute hardcovers via original-Tor. I sincerely doubt that a US limited edition publisher would be interested in picking up and repackaging a series of 14 novels (and probably a short story collection that doesn't exist yet), some of which have been in print for 25 years. I mean, a complete run of the British paperbacks is more than a foot thick already and there are two books still to go in that format.

    (Ordering the books: Transreal Books in Edinburgh will take orders by email and will get me in to sign stock, but is no longer shipping to the United States—blame Trump and his idiotic tariff war. (Mike is a sole trader and can't afford the risk of doofuses buying a bunch of books then refusing to pay the import and duty fees. Hitherto books were duty-exempt in the US market, but under Trump, who the hell knows?) I believe amazon.co.uk will still ship UK physical book orders to the USA, but I won't be signing them. If you're in North America your next opportunity to get anything signed is therefore to wait for the worldcon in 2027, which I believe is locked in now and will take place in Montreal.)

    What happens after these books is an open question. As I noted in my last update, I'm working on two space operas. Or I would be working if I could stare at the screen for long enough to make headway. If the eyeball fairy would wave a magic wand over my left eye, I could finish both Starter Pack (a straightforward job—I have edit notes) and Ghost Engine (less straightforward but not really impossible) by the end of the year. But as matters stand, you should consider me to be off sick until further notice. Talking about anything that happens after those two is wildly ungrounded speculation: let's just say I expect a spurt of rebound productivity once I have my eyes working appropriately again, and I have some ideas.

    For the same reason, blogging's going to be scarce around these parts. So feel free to talk among yourselves.

    Edit: remaining cataract not bad enough for surgery—yet—but my prescription has changed (in both eyes). New glasses coming in a week or two: I'm not pushing on the surgery because eye surgery is not on my list of happy fun recreational activities. So normal service should resume by mid-November-ish.

    Meanwhile I'm working on another big idea for blogging, riffing off the idea that nation-states are the products of (or are generated as a by-product of) secular religions. It's easiest to see if you look at your neighbours' weirdnesses: Americans, contemplate the British monarchy (hereditary theocracy that supplants the papacy as intercessionary with Jesus, how much clearer could it be than that?); Brits, look to the USA (holy scripture written down in that constitution, daily pledge of allegiance in schools, and all the flag-shagging). Or Israel, and the whole "holy land/chosen people" narrative underpinning political zionism. Patriotism is an affirmation of religious zeal. In this reframing, extremist nationalism is religious evangelism. Now ask, what are the implications, looking forward?

    Worse Than FailureThe Ghost of Christmas Future

    Many of us who fly for business and/or pleasure are all too aware of the myriad issues plaguing the 21st-century airline industry: everything from cybercrime targeting ailing IT systems and Boeing's ongoing nightmare to US commercial airline pilots being forced to retire at age 65, contributing to a diminishing workforce that has less of the sort of wisdom that can't be picked up in a flight simulator. The exact sort of experience you want your flight crew to have if, say, your aircraft loses an engine during takeoff.

    [Image: Big ol' Jet Airliner]

    This is only the tip of the iceberg. And our submitter Greta, reporting from the inside, shows us that even a win could be a dangerous loss waiting to happen:

    This will be a departure in that it's about something that is soon to happen, rather than that which already was. Looming in the near distance is an event about which I'm trying my best not to give in to apocalypse fetishism, but it's difficult not to.

    We make aircraft. They're large, expensive flying robots. Our company is tiny. We're slowly growing, but could very comfortably fit in the 1966 General Motors New Look bus featured in Speed. We've produced, on a good year, up to three aircraft, with all design, programming, assembly and testing done in-house.

    This quarter (and into next quarter), we're about to have a whole lot of the right kind of problem; our orders have approximately quintupled, and they're for a heavily revised version of the aircraft that is still partially theoretical. The designs are sort of done, we have some of the hardware that will be running our code, and some of the code is written and working. Some of it is written and non-working. Some of it is yet unwritten. The code carried forward from the previous version has been flown, but none of the new code has flown.

    Our development team is facing a fascinating pile-up of pressures.

    There is a contingent of fixed-term contracted interns who have been doing some heroic heavy lifting but whose contracts are up in a couple of weeks due to the college schedule; new blood will need to be trained and in the trenches to backfill them.

    Some of our (custom) hardware has known design faults and needs modification and re-production, or is in the middle of production and we all hope and pray that no modification requests are needed.

    We're doing our damnedest to write production-worthy code and tests as we go, and I would describe the design and review atmosphere as healthy, but bugs can happen and are happening: bugs of the category where, if they were released to an aircraft in the sky, the aircraft would become suddenly reacquainted with the ground. Some of those bugs can be fixed in firmware, and for some of them we need to ask our long-suffering electrical engineer to pretty please pull off a miracle with a soldering iron so that we can continue development before a new board is released.

    Fully-functioning test hardware is scarce, and on a near daily basis developers need to have a polite conversation about who gets to perform a flash validation (I have not observed rock-paper-scissors yet).

    We also simply don't have the bodies to physically build aircraft in the way we have in the past. Upper management has painted a picture for me where, six weeks from now, the CEO, managers, all of my developers, and I may be assembling and testing one or two hundred batteries by hand. (I have demanded pizza if this comes to pass.)

    All of this in service of an early Spring deadline, with a parade of non-negotiable activities like careful flight testing before it.

    Safety is paramount, and no corners will be cut. But picture where we are now: a frenzy of development, then the eye of the storm, the company holiday shutdown, where we all try our best to enjoy the time off without dwelling on what we're getting ourselves into in 2026.

    I've always purposely avoided jobs where my screw-ups might produce serious injury or death. I have the utmost respect for those who assume this awesome responsibility and care about doing the best job possible. I feel for Greta and others like her, and I really hope that if or when push comes to shove, her company prioritizes safety over all else. We've already endured too many horrific examples of what happens when corners are cut in service of budget and time constraints that were never realistic to begin with.


    365 TomorrowsSmite

    Author: Julian Miles, Staff Writer Jingle bells my ass. Actually, if I’d had one, the ex-wife probably would have. Covered its harness in fairy lights, too. She loved sparkly tat. Guess that’s why she hooked up with the bright-eyed pretty boy I used to be. Then she got pregnant and we both got ugly. I […]

    The post Smite appeared first on 365tomorrows.

    xkcdFunny Numbers

    Planet DebianFrançois Marier: LXC setup on Debian forky

    Similar to what I wrote for Ubuntu 18.04, here is how to set up an LXC container on Debian forky.

    Installing the required packages

    Start by installing the necessary packages on the host:

    apt install lxc libvirt-clients debootstrap
    

    Network setup

    Ensure the veth kernel module is loaded by adding the following to /etc/modules-load.d/lxc-local.conf:

    veth
    

    and then loading it manually for now:

    modprobe veth
    

    Enable IPv4 forwarding by putting this in /etc/sysctl.d/lxc-local.conf:

    net.ipv4.ip_forward=1
    

    and applying it:

    sysctl -p /etc/sysctl.d/lxc-local.conf
    

    Restart the LXC network bridge:

    systemctl restart lxc-net.service
    

    Ensure that container traffic is not blocked by the host firewall, for example by adding the following to /etc/network/iptables.up.rules:

    -A FORWARD -d 10.0.3.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    -A FORWARD -s 10.0.3.0/24 -j ACCEPT
    -A INPUT -d 224.0.0.251 -s 10.0.3.1 -j ACCEPT
    -A INPUT -d 239.255.255.250 -s 10.0.3.1 -j ACCEPT
    -A INPUT -d 10.0.3.255 -s 10.0.3.1 -j ACCEPT
    -A INPUT -d 10.0.3.1 -s 10.0.3.0/24 -j ACCEPT
    

    and applying the rules:

    iptables-apply
    

    Creating a container

    To see all available images, run:

    lxc-create -n foo --template=download -- --list
    

    and then create a Debian forky container using:

    lxc-create -n forky -t download -- -d debian -r forky -a amd64
    

    Start and stop the container like this:

    lxc-start -n forky
    lxc-stop -n forky
    

    Connecting to the container

    Attach to the running container's console:

    lxc-attach -n forky
    

    Inside the container, you can change the root password by typing:

    passwd
    

    and install some essential packages:

    apt install openssh-server vim
    

    To find the container's IP address (for example, so that you can ssh to it from the host):

    lxc-ls --fancy
    
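    The output should look roughly like this (the exact columns and the address will vary):

    NAME   STATE    AUTOSTART GROUPS IPV4      IPV6 UNPRIVILEGED
    forky  RUNNING  0         -      10.0.3.25 -    false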

    Planet DebianC.J. Collier: I’m learning about perlguts today.


    [Screenshot: im-learning-about-perlguts-today.png]


    ## 0.23	2025-12-20
    
    commit be15aa25dea40aea66a8534143fb81b29d2e6c08
    Author: C.J. Collier 
    Date:   Sat Dec 20 22:40:44 2025 +0000
    
        Fixes C-level test infrastructure and adds more test cases for upb_to_sv conversions.
        
        - **Makefile.PL:**
            - Allow `extra_src` in `c_test_config.json` to be an array.
            - Add ASan flags to CCFLAGS and LDDLFLAGS for better debugging.
            - Corrected echo newlines in `test_c` target.
        - **c_test_config.json:**
            - Added missing type test files to `deps` and `extra_src` for `convert/sv_to_upb` and `convert/upb_to_sv` test runners.
        - **t/c/convert/upb_to_sv.c:**
            - Fixed a double free of `test_pool`.
            - Added missing includes for type test headers.
            - Updated test plan counts.
        - **t/c/convert/sv_to_upb.c:**
            - Added missing includes for type test headers.
            - Updated test plan counts.
            - Corrected Perl interpreter initialization.
        - **t/c/convert/types/**:
            - Added missing `test_util.h` include in new type test headers.
            - Completed the set of `upb_to_sv` test cases for all scalar types by adding optional and repeated tests for `sfixed32`, `sfixed64`, `sint32`, and `sint64`, and adding repeated tests to the remaining scalar type files.
        - **Documentation:**
            - Updated `01-xs-testing.md` with more debugging tips, including ASan usage and checking for double frees and typos.
            - Updated `xs_learnings.md` with details from the recent segfault.
            - Updated `llm-plan-execution-instructions.md` to emphasize debugging steps.
    
    
    ## 0.22	2025-12-19
    
    commit 2c171d9a5027e0150eae629729c9104e7f6b9d2b
    Author: C.J. Collier 
    Date:   Fri Dec 19 23:41:02 2025 +0000
    
        feat(perl,testing): Initialize C test framework and build system
        
        This commit sets up the foundation for the C-level tests and the build system for the Perl Protobuf module:
        
        1.  **Makefile.PL Enhancements:**
            *   Integrates `Devel::PPPort` to generate `ppport.h` for better portability.
            *   Object files now retain their path structure (e.g., `xs/convert/sv_to_upb.o`) instead of being flattened, improving build clarity.
            *   The `MY::postamble` is significantly revamped to dynamically generate build rules for all C tests located in `t/c/` based on the `t/c/c_test_config.json` file.
            *   C tests are linked against `libprotobuf_common.a` and use `ExtUtils::Embed` flags.
            *   Added `JSON::MaybeXS` to `PREREQ_PM`.
            *   The `test` target now also depends on the `test_c` target.
        
        2.  **C Test Infrastructure (`t/c/`):**
            *   Introduced `t/c/c_test_config.json` to configure individual C test builds, specifying dependencies and extra source files.
            *   Created `t/c/convert/test_util.c` and `.h` for shared test functions like loading descriptors.
            *   Initial `t/c/convert/upb_to_sv.c` and `t/c/convert/sv_to_upb.c` test runners.
            *   Basic `t/c/integration/030_protobuf_coro.c` for Coro safety testing on core utils using `libcoro`.
            *   Basic `t/c/integration/035_croak_test.c` for testing exception handling.
            *   Basic `t/c/integration/050_convert.c` for integration testing conversions.
        
        3.  **Test Proto:** Updated `t/data/test.proto` with more field types for conversion testing and regenerated `test_descriptor.bin`.
        
        4.  **XS Test Harness (`t/c/upb-perl-test.h`):** Added `like_n` macro for length-aware regex matching.
        
        5.  **Documentation:** Updated architecture and plan documents to reflect the C test structure.
        6.  **ERRSV Testing:** Note that the C tests (`t/c/`) will primarily check *if* a `croak` occurs (i.e., that the exception path is taken), but will not assert on the string content of `ERRSV`. Reliably testing `$@` content requires the full Perl test environment with `Test::More`, which will be done in the `.t` files when testing the Perl API.
        
        This provides a solid base for developing and testing the XS and C components of the module.
    
    
    ## 0.21	2025-12-18
    
    commit a8b6b6100b2cf29c6df1358adddb291537d979bc
    Author: C.J. Collier 
    Date:   Thu Dec 18 04:20:47 2025 +0000
    
        test(C): Add integration tests for Milestone 2 components
        
        - Created t/c/integration/030_protobuf.c to test interactions
          between obj_cache, arena, and utils.
        - Added this test to t/c/c_test_config.json.
        - Verified that all C tests for Milestones 2 and 3 pass,
          including the libcoro-based stress test.
    
    
    ## 0.20	2025-12-18
    
    commit 0fcad68680b1f700a83972a7c1c48bf3a6958695
    Author: C.J. Collier 
    Date:   Thu Dec 18 04:14:04 2025 +0000
    
        docs(plan): Add guideline review reminders to milestones
        
        - Added a "[ ] REFRESH: Review all documents in @perl/doc/guidelines/**"
          checklist item to the start of each component implementation
          milestone (C and Perl layers).
        - This excludes Integration Test milestones.
    
    
    ## 0.19	2025-12-18
    
    commit 987126c4b09fcdf06967a98fa3adb63d7de59a34
    Author: C.J. Collier 
    Date:   Thu Dec 18 04:05:53 2025 +0000
    
        docs(plan): Add C-level and Perl-level Coro tests to milestones
        
        - Added checklist items for `libcoro`-based C tests
          (e.g., `t/c/integration/050_convert_coro.c`) to all C layer
          integration milestones (050 through 220).
        - Updated `030_Integration_Protobuf.md` to standardise checklist
          items for the existing `030_protobuf_coro.c` test.
        - Removed the single `xt/author/coro-safe.t` item from
          `010_Build.md`.
        - Added checklist items for Perl-level `Coro` tests
          (e.g., `xt/coro/240_arena.t`) to each Perl layer
          integration milestone (240 through 400).
        - Created `perl/t/c/c_test_config.json` to manage C test
          configurations externally.
        - Updated `perl/doc/architecture/testing/01-xs-testing.md` to describe
          both C-level `libcoro` and Perl-level `Coro` testing strategies.
    
    
    ## 0.18	2025-12-18
    
    commit 6095a5a610401a6035a81429d0ccb9884d53687b
    Author: C.J. Collier 
    Date:   Thu Dec 18 02:34:31 2025 +0000
    
        added coro testing to c layer milestones
    
    
    ## 0.17	2025-12-18
    
    commit cc0aae78b1f7f675fc8a1e99aa876c0764ea1cce
    Author: C.J. Collier 
    Date:   Thu Dec 18 02:26:59 2025 +0000
    
        docs(plan): Refine test coverage checklist items for SMARTness
        
        - Updated the "Tests provide full coverage" checklist items in
          C layer plan files (020, 040, 060, 080, 100, 120, 140, 160, 180, 200)
          to explicitly mention testing all public functions in the
          corresponding header files.
        - Expanded placeholder checklists in 140, 160, 180, 200.
        - Updated the "Tests provide full coverage" and "Add coverage checks"
          checklist items in Perl layer plan files (230, 250, 270, 290, 310, 330,
          350, 370, 390) to be more specific about the scope of testing
          and the use of `Test::TestCoverage`.
        - Expanded Well-Known Types milestone (350) to detail each type.
    
    
    ## 0.16	2025-12-18
    
    commit e4b601f14e3817a17b0f4a38698d981dd4cb2818
    Author: C.J. Collier 
    Date:   Thu Dec 18 02:07:35 2025 +0000
    
        docs(plan): Full refactoring of C and Perl plan files
        
        - Split both ProtobufPlan-C.md and ProtobufPlan-Perl.md into
          per-milestone files under the `perl/doc/plan/` directory.
        - Introduced Integration Test milestones after each component
          milestone in both C and Perl plans.
        - Numbered milestone files sequentially (e.g., 010_Build.md,
          230_Perl_Arena.md).
        - Updated main ProtobufPlan-C.md and ProtobufPlan-Perl.md to
          act as Tables of Contents.
        - Ensured consistent naming for integration test files
          (e.g., `t/c/integration/030_protobuf.c`, `t/integration/260_descriptor_pool.t`).
        - Added architecture review steps to the end of all milestones.
        - Moved Coro safety test to C layer Milestone 1.
        - Updated Makefile.PL to support new test structure and added Coro.
        - Moved and split t/c/convert.c into t/c/convert/*.c.
        - Moved other t/c/*.c tests into t/c/protobuf/*.c.
        - Deleted old t/c/convert.c.
    
    
    ## 0.15	2025-12-17
    
    commit 649cbacf03abb5e7293e3038bb451c0406e9d0ce
    Author: C.J. Collier 
    Date:   Wed Dec 17 23:51:22 2025 +0000
    
        docs(plan): Refactor and reset ProtobufPlan.md
        
        - Split the plan into ProtobufPlan-C.md and ProtobufPlan-Perl.md.
        - Reorganized milestones to clearly separate C layer and Perl layer development.
        - Added more granular checkboxes for each component:
          - C Layer: Create test, Test coverage, Implement, Tests pass.
          - Perl Layer: Create test, Test coverage, Implement Module/XS, Tests pass, C-Layer adjustments.
        - Reset all checkboxes to `[ ]` to prepare for a full audit.
        - Updated status in architecture/api and architecture/core documents to "Not Started".
        
        feat(obj_cache): Add unregister function and enhance tests
        
        - Added `protobuf_unregister_object` to `xs/protobuf/obj_cache.c`.
        - Updated `xs/protobuf/obj_cache.h` with the new function declaration.
        - Expanded tests in `t/c/protobuf_obj_cache.c` to cover unregistering,
          overwriting keys, and unregistering non-existent keys.
        - Corrected the test plan count in `t/c/protobuf_obj_cache.c` to 17.
    
    
    ## 0.14	2025-12-17
    
    commit 40b6ad14ca32cf16958d490bb575962f88d868a1
    Author: C.J. Collier 
    Date:   Wed Dec 17 23:18:27 2025 +0000
    
        feat(arena): Complete C layer for Arena wrapper
        
        This commit finalizes the C-level implementation for the Protobuf::Arena wrapper.
        
        - Adds `PerlUpb_Arena_Destroy` for proper cleanup from Perl's DEMOLISH.
        - Enhances error checking in `PerlUpb_Arena_Get`.
        - Expands C-level tests in `t/c/protobuf_arena.c` to cover memory allocation
          on the arena and lifecycle through `PerlUpb_Arena_Destroy`.
        - Corrects embedded Perl initialization in the C test.
        
        docs(plan): Refactor ProtobufPlan.md
        
        - Restructures the development plan to clearly separate "C Layer" and
          "Perl Layer" tasks within each milestone.
        - This aligns the plan with the "C-First Implementation Strategy" and improves progress tracking.
    
    
    ## 0.13	2025-12-17
    
    commit c1e566c25f62d0ae9f195a6df43b895682652c71
    Author: C.J. Collier 
    Date:   Wed Dec 17 22:00:40 2025 +0000
    
        refactor(perl): Rename C tests and enhance Makefile.PL
        
        - Renamed test files in `t/c/` to better match the `xs` module structure:
            - `01-cache.c` -> `protobuf_obj_cache.c`
            - `02-arena.c` -> `protobuf_arena.c`
            - `03-utils.c` -> `protobuf_utils.c`
            - `04-convert.c` -> `convert.c`
            - `load_test.c` -> `upb_descriptor_load.c`
        - Updated `perl/Makefile.PL` to reflect the new test names in `MY::postamble`'s `$c_test_config`.
        - Refactored the `$c_test_config` generation in `Makefile.PL` to reduce repetition by using a default flags hash and common dependencies array.
        - Added a `fail()` macro to `perl/t/c/upb-perl-test.h` for consistency.
        - Modified `t/c/upb_descriptor_load.c` to use the `t/c/upb-perl-test.h` macros, making its output consistent with other C tests.
        - Added a skeleton for `t/c/convert.c` to test the conversion functions.
        - Updated documentation in `ProtobufPlan.md` and `architecture/testing/01-xs-testing.md` to reflect new test names.
    
    
    ## 0.12	2025-12-17
    
    commit d8cb5dd415c6c129e71cd452f78e29de398a82c9
    Author: C.J. Collier 
    Date:   Wed Dec 17 20:47:38 2025 +0000
    
        feat(perl): Refactor XS code into subdirectories
        
        This commit reorganizes the C code in the `perl/xs/` directory into subdirectories, mirroring the structure of the Python UPB extension. This enhances modularity and maintainability.
        
        - Created subdirectories for each major component: `convert`, `descriptor`, `descriptor_containers`, `descriptor_pool`, `extension_dict`, `map`, `message`, `protobuf`, `repeated`, and `unknown_fields`.
        - Created skeleton `.h` and `.c` files within each subdirectory to house the component-specific logic.
        - Updated top-level component headers (e.g., `perl/xs/descriptor.h`) to include the new sub-headers.
        - Updated top-level component source files (e.g., `perl/xs/descriptor.c`) to include their main header and added stub initialization functions (e.g., `PerlUpb_InitDescriptor`).
        - Moved code from the original `perl/xs/protobuf.c` to new files in `perl/xs/protobuf/` (arena, obj_cache, utils).
        - Moved code from the original `perl/xs/convert.c` to new files in `perl/xs/convert/` (upb_to_sv, sv_to_upb).
        - Updated `perl/Makefile.PL` to use a glob (`xs/*/*.c`) to find the new C source files in the subdirectories.
        - Added `perl/doc/architecture/core/07-xs-file-organization.md` to document the new structure.
        - Updated `perl/doc/ProtobufPlan.md` and other architecture documents to reference the new organization.
        - Corrected self-referential includes in the newly created .c files.
        
        This restructuring provides a solid foundation for further development and makes it easier to port logic from the Python implementation.
    
    
    ## 0.11	2025-12-17
    
    commit cdedcd13ded4511b0464f5d3bdd72ce6d34e73fc
    Author: C.J. Collier 
    Date:   Wed Dec 17 19:57:52 2025 +0000
    
        feat(perl): Implement C-first testing and core XS infrastructure
        
        This commit introduces a significant refactoring of the Perl XS extension, adopting a C-first development approach to ensure a robust foundation.
        
        Key changes include:
        
        -   **C-Level Testing Framework:** Established a C-level testing system in `t/c/` with a dedicated Makefile, using an embedded Perl interpreter. Initial tests cover the object cache (`01-cache.c`), arena wrapper (`02-arena.c`), and utility functions (`03-utils.c`).
        -   **Core XS Infrastructure:**
            -   Implemented a global object cache (`xs/protobuf.c`) to manage Perl wrappers for UPB objects, using weak references.
            -   Created an `upb_Arena` wrapper (`xs/protobuf.c`).
            -   Consolidated common XS helper functions into `xs/protobuf.h` and `xs/protobuf.c`.
        -   **Makefile.PL Enhancements:** Updated to support building and linking C tests, incorporating flags from `ExtUtils::Embed`, and handling both `.c` and `.cc` source files.
        -   **XS File Reorganization:** Restructured XS files to mirror the Python UPB extension's layout (e.g., `message.c`, `descriptor.c`). Removed older, monolithic `.xs` files.
        -   **Typemap Expansion:** Added extensive typemap entries in `perl/typemap` to handle conversions between Perl objects and various `const upb_*Def*` pointers.
        -   **Descriptor Tests:** Added a new test suite `t/02-descriptor.t` to validate descriptor loading and accessor methods.
        -   **Documentation:** Updated development plans and guidelines (`ProtobufPlan.md`, `xs_learnings.md`, etc.) to reflect the C-first strategy, new testing methods, and lessons learned.
        -   **Build Cleanup:** Removed `ppport.h` from `.gitignore` as it's no longer used, due to `-DPERL_NO_PPPORT` being set in `Makefile.PL`.
        
        This C-first approach allows for more isolated and reliable testing of the core logic interacting with the UPB library before higher-level Perl APIs are built upon it.
    
    
    ## 0.10	2025-12-17
    
    commit 1ef20ade24603573905cb0376670945f1ab5d829
    Author: C.J. Collier 
    Date:   Wed Dec 17 07:08:29 2025 +0000
    
        feat(perl): Implement C-level tests and core XS utils
        
        This commit introduces a C-level testing framework for the XS layer and implements key components:
        
        1.  **C-Level Tests (`t/c/`)**:
            *   Added `t/c/Makefile` to build standalone C tests.
            *   Created `t/c/upb-perl-test.h` with macros for TAP-compliant C tests (`plan`, `ok`, `is`, `is_string`, `diag`).
            *   Implemented `t/c/01-cache.c` to test the object cache.
            *   Implemented `t/c/02-arena.c` to test `Protobuf::Arena` wrappers.
            *   Implemented `t/c/03-utils.c` to test string utility functions.
            *   Corrected include paths and diagnostic messages in C tests.
        
        2.  **XS Object Cache (`xs/protobuf.c`)**:
            *   Switched to using stringified pointers (`%p`) as hash keys for stability.
            *   Fixed a critical double-free bug in `PerlUpb_ObjCache_Delete` by removing an extra `SvREFCNT_dec` on the lookup key.
        
        3.  **XS Arena Wrapper (`xs/protobuf.c`)**:
            *   Corrected `PerlUpb_Arena_New` to use `newSVrv` and `PTR2IV` for opaque object wrapping.
            *   Corrected `PerlUpb_Arena_Get` to safely unwrap the arena pointer.
        
        4.  **Makefile.PL (`perl/Makefile.PL`)**:
            *   Added `-Ixs` to `INC` to allow C tests to find `t/c/upb-perl-test.h` and `xs/protobuf.h`.
            *   Added `LIBS` to link `libprotobuf_common.a` into the main `Protobuf.so`.
            *   Added C test targets `01-cache`, `02-arena`, `03-utils` to the test config in `MY::postamble`.
        
        5.  **Protobuf.pm (`perl/lib/Protobuf.pm`)**:
            *   Added `use XSLoader;` to load the compiled XS code.
        
        6.  **New files `xs/util.h`**:
            *   Added initial type conversion function.
        
        These changes establish a foundation for testing the C-level interface with UPB and fix crucial bugs in the object cache implementation.
    
    
    ## 0.09	2025-12-17
    
    commit 07d61652b032b32790ca2d3848243f9d75ea98f4
    Author: C.J. Collier 
    Date:   Wed Dec 17 04:53:34 2025 +0000
    
        feat(perl): Build system and C cache test for Perl XS
        
        This commit introduces the foundational pieces for the Perl XS implementation, focusing on the build system and a C-level test for the object cache.
        
        -   **Makefile.PL:**
            -   Refactored C test compilation rules in `MY::postamble` to use a hash (`$c_test_config`) for better organization and test-specific flags.
            -   Integrated `ExtUtils::Embed` to provide necessary compiler and linker flags for embedding the Perl interpreter, specifically for the `t/c/01-cache.c` test.
            -   Correctly constructs the path to the versioned Perl library (`libperl.so.X.Y.Z`) using `$Config{archlib}` and `$Config{libperl}` to ensure portability.
            -   Removed `VERSION_FROM` and `ABSTRACT_FROM` to avoid dependency on `.pm` files for now.
        
        -   **C Cache Test (t/c/01-cache.c):**
            -   Added a C test to exercise the object cache functions implemented in `xs/protobuf.c`.
            -   Includes tests for adding, getting, deleting, and weak reference behavior.
        
        -   **XS Cache Implementation (xs/protobuf.c, xs/protobuf.h):**
            -   Implemented `PerlUpb_ObjCache_Init`, `PerlUpb_ObjCache_Add`, `PerlUpb_ObjCache_Get`, `PerlUpb_ObjCache_Delete`, and `PerlUpb_ObjCache_Destroy`.
            -   Uses a Perl hash (`HV*`) for the cache.
            -   Keys are string representations of the C pointers, created using `snprintf` with `"%llx"`.
            -   Values are weak references (`sv_rvweaken`) to the Perl objects (`SV*`).
            -   `PerlUpb_ObjCache_Get` now correctly returns an incremented reference to the original SV, not a copy.
            -   `PerlUpb_ObjCache_Destroy` now clears the hash before decrementing its refcount.
        
        -   **t/c/upb-perl-test.h:**
            -   Updated `is_sv` to perform direct pointer comparison (`got == expected`).
        
        -   **Minor:** Added `util.h` (currently empty), updated `typemap`.
        
        These changes establish a working C-level test environment for the XS components.
    
    
    ## 0.08	2025-12-17
    
    commit d131fd22ea3ed8158acb9b0b1fe6efd856dc380e
    Author: C.J. Collier 
    Date:   Wed Dec 17 02:57:48 2025 +0000
    
        feat(perl): Update docs and core XS files
        
        - Explicitly add TDD cycle to ProtobufPlan.md.
        - Clarify mirroring of Python implementation in upb-interfacing.md for both C and Perl layers.
        - Branch and adapt python/protobuf.h and python/protobuf.c to perl/xs/protobuf.h and perl/xs/protobuf.c, including the object cache implementation. Removed old cache.* files.
        - Create initial C test for the object cache in t/c/01-cache.c.
    
    
    ## 0.07	2025-12-17
    
    commit 56fd6862732c423736a2f9a9fb1a2816fc59e9b0
    Author: C.J. Collier 
    Date:   Wed Dec 17 01:09:18 2025 +0000
    
        feat(perl): Align Perl UPB architecture docs with Python
        
        Updates the Perl Protobuf architecture documents to more closely align with the design and implementation strategies used in the Python UPB extension.
        
        Key changes:
        
        -   **Object Caching:** Mandates a global, per-interpreter cache using weak references for all UPB-derived objects, mirroring Python's `PyUpb_ObjCache`.
        -   **Descriptor Containers:** Introduces a new document outlining the plan to use generic XS container types (Sequence, ByNameMap, ByNumberMap) with vtables to handle collections of descriptors, similar to Python's `descriptor_containers.c`.
        -   **Testing:** Adds a note to the testing strategy to port relevant test cases from the Python implementation to ensure feature parity.
    
    
    ## 0.06	2025-12-17
    
    commit 6009ce6ab64eccce5c48729128e5adf3ef98e9ae
    Author: C.J. Collier 
    Date:   Wed Dec 17 00:28:20 2025 +0000
    
        feat(perl): Implement object caching and fix build
        
        This commit introduces several key improvements to the Perl XS build system and core functionality:
        
        1.  **Object Caching:**
            *   Introduces `xs/protobuf.c` and `xs/protobuf.h` to implement a caching mechanism (`protobuf_c_to_perl_obj`) for wrapping UPB C pointers into Perl objects. This uses a hash and weak references to ensure object identity and prevent memory leaks.
            *   Updates the `typemap` to use `protobuf_c_to_perl_obj` for `upb_MessageDef *` output, ensuring descriptor objects are cached.
            *   Corrected `sv_weaken` to the correct `sv_rvweaken` function.
        
        2.  **Makefile.PL Enhancements:**
            *   Switched to using the Bazel-generated UPB descriptor sources from `bazel-bin/src/google/protobuf/_virtual_imports/descriptor_proto/google/protobuf/`.
            *   Updated `INC` paths to correctly locate the generated headers.
            *   Refactored `MY::dynamic_lib` to ensure the static library `libprotobuf_common.a` is correctly linked into each generated `.so` module, resolving undefined symbol errors.
            *   Overrode `MY::test` to use `prove -b -j$(nproc) t/*.t xt/*.t` for running tests.
            *   Cleaned up `LIBS` and `LDDLFLAGS` usage.
        
        3.  **Documentation:**
            *   Updated `ProtobufPlan.md` to reflect the current status and design decisions.
            *   Reorganized architecture documents into subdirectories.
            *   Added `object-caching.md` and `c-perl-interface.md`.
            *   Updated `llm-guidance.md` with notes on `upb/upb.h` and `sv_rvweaken`.
        
        4.  **Testing:**
            *   Fixed `xt/03-moo_immutable.t` to skip tests if no Moo modules are found.
        
        This resolves the build issues and makes the core test suite pass.
    
    
    ## 0.05	2025-12-16
    
    commit 177d2f3b2608b9d9c415994e076a77d8560423b8
    Author: C.J. Collier 
    Date:   Tue Dec 16 19:51:36 2025 +0000
    
        Refactor: Rename namespace to Protobuf, build system and doc updates
        
        This commit refactors the primary namespace from `ProtoBuf` to `Protobuf`
        to align with the style guide. This involves renaming files, directories,
        and updating package names within all Perl and XS files.
        
        **Namespace Changes:**
        
        *   Renamed `perl/lib/ProtoBuf` to `perl/lib/Protobuf`.
        *   Moved and updated `ProtoBuf.pm` to `Protobuf.pm`.
        *   Moved and updated `ProtoBuf::Descriptor` to `Protobuf::Descriptor` (.pm & .xs).
        *   Removed other `ProtoBuf::*` stubs (Arena, DescriptorPool, Message).
        *   Updated `MODULE` and `PACKAGE` in `Descriptor.xs`.
        *   Updated `NAME`, `*_FROM` in `perl/Makefile.PL`.
        *   Replaced `ProtoBuf` with `Protobuf` throughout `perl/typemap`.
        *   Updated namespaces in test files `t/01-load-protobuf-descriptor.t` and `t/02-descriptor.t`.
        *   Updated namespaces in all documentation files under `perl/doc/`.
        *   Updated paths in `perl/.gitignore`.
        
        **Build System Enhancements (Makefile.PL):**
        
        *   Included `xs/*.c` files in the common object files list.
        *   Added `-I.` to the `INC` paths.
        *   Switched from `MYEXTLIB` to `LIBS => ['-L$(CURDIR) -lprotobuf_common']` for linking.
        *   Removed custom keys passed to `WriteMakefile` for postamble.
        *   `MY::postamble` now sources variables directly from the main script scope.
        *   Added `all :: ${common_lib}` dependency in `MY::postamble`.
        *   Added `t/c/load_test.c` compilation rule in `MY::postamble`.
        *   Updated `clean` target to include `blib`.
        *   Added more modules to `TEST_REQUIRES`.
        *   Removed the explicit `PM` and `XS` keys from `WriteMakefile`, relying on `XSMULTI => 1`.
        
        **New Files:**
        
        *   `perl/lib/Protobuf.pm`
        *   `perl/lib/Protobuf/Descriptor.pm`
        *   `perl/lib/Protobuf/Descriptor.xs`
        *   `perl/t/01-load-protobuf-descriptor.t`
        *   `perl/t/02-descriptor.t`
        *   `perl/t/c/load_test.c`: Standalone C test for UPB.
        *   `perl/xs/types.c` & `perl/xs/types.h`: For Perl/C type conversions.
        *   `perl/doc/architecture/upb-interfacing.md`
        *   `perl/xt/03-moo_immutable.t`: Test for Moo immutability.
        
        **Deletions:**
        
        *   Old test files: `t/00_load.t`, `t/01_basic.t`, `t/02_serialize.t`, `t/03_message.t`, `t/04_descriptor_pool.t`, `t/05_arena.t`, `t/05_message.t`.
        *   Removed `lib/ProtoBuf.xs` as it's not needed with `XSMULTI`.
        
        **Other:**
        
        *   Updated `test_descriptor.bin` (binary change).
        *   Significant content updates to markdown documentation files in `perl/doc/architecture` and `perl/doc/internal` reflecting the new architecture and learnings.
    
    
    ## 0.04	2025-12-14
    
    commit 92de5d482c8deb9af228f4b5ce31715d3664d6ee
    Author: C.J. Collier 
    Date:   Sun Dec 14 21:28:19 2025 +0000
    
        feat(perl): Implement Message object creation and fix lifecycles
        
        This commit introduces the basic structure for `ProtoBuf::Message` object
        creation, linking it with `ProtoBuf::Descriptor` and `ProtoBuf::DescriptorPool`,
        and crucially resolves a SEGV by fixing object lifecycle management.
        
        Key Changes:
        
        1.  **`ProtoBuf::Descriptor`:** Added `_pool` attribute to hold a strong
            reference to the parent `ProtoBuf::DescriptorPool`. This is essential to
            prevent the pool and its C `upb_DefPool` from being garbage collected
            while a descriptor is still in use.
        
        2.  **`ProtoBuf::DescriptorPool`:**
            *   `find_message_by_name`: Now passes the `$self` (the pool object) to the
                `ProtoBuf::Descriptor` constructor to establish the lifecycle link.
            *   XSUB `pb_dp_find_message_by_name`: Updated to accept the pool `SV*` and
                store it in the descriptor's `_pool` attribute.
            *   XSUB `_load_serialized_descriptor_set`: Renamed to avoid clashing with the
                Perl method name. The Perl wrapper now correctly calls this internal XSUB.
            *   `DEMOLISH`: Made safer by checking for attribute existence.
        
        3.  **`ProtoBuf::Message`:**
            *   Implemented using Moo with lazy builders for `_upb_arena` and
                `_upb_message`.
            *   `_descriptor` is a required argument to `new()`.
            *   XS functions added for creating the arena (`pb_msg_create_arena`) and
                the `upb_Message` (`pb_msg_create_upb_message`).
            *   `pb_msg_create_upb_message` now extracts the `upb_MessageDef*` from the
                descriptor and uses `upb_MessageDef_MiniTable()` to get the minitable
                for `upb_Message_New()`.
            *   `DEMOLISH`: Added to free the message's arena.
        
        4.  **`Makefile.PL`:**
            *   Added `-g` to `CCFLAGS` for debugging symbols.
            *   Added Perl CORE include path to `MY::postamble`'s `base_flags`.
        
        5.  **Tests:**
            *   `t/04_descriptor_pool.t`: Updated to check the structure of the
                returned `ProtoBuf::Descriptor`.
            *   `t/05_message.t`: Now uses a descriptor obtained from a real pool to
                test `ProtoBuf::Message->new()`.
        
        6.  **Documentation:**
            *   Updated `ProtobufPlan.md` to reflect progress.
            *   Updated several files in `doc/architecture/` to match the current
                implementation details, especially regarding arena management and object
                lifecycles.
            *   Added `doc/internal/development_cycle.md` and `doc/internal/xs_learnings.md`.
        
        With these changes, the SEGV is resolved, and message objects can be successfully
        created from descriptors.
    
    
    ## 0.03	2025-12-14
    
    commit 6537ad23e93680c2385e1b571d84ed8dbe2f68e8
    Author: C.J. Collier 
    Date:   Sun Dec 14 20:23:41 2025 +0000
    
        Refactor(perl): Object-Oriented DescriptorPool with Moo
        
        This commit refactors the `ProtoBuf::DescriptorPool` to be fully object-oriented using Moo, and resolves several issues related to XS, typemaps, and test data.
        
        Key Changes:
        
        1.  **Moo Object:** `ProtoBuf::DescriptorPool.pm` now uses `Moo` to define the class. The `upb_DefPool` pointer is stored as a lazy attribute `_upb_defpool`.
        2.  **XS Lifecycle:** `DescriptorPool.xs` now has `pb_dp_create_pool` called by the Moo builder and `pb_dp_free_pool` called from `DEMOLISH` to manage the `upb_DefPool` lifecycle per object.
        3.  **Typemap:** The `perl/typemap` file has been significantly updated to handle the conversion between the `ProtoBuf::DescriptorPool` Perl object and the `upb_DefPool *` C pointer. This includes:
            *   Mapping `upb_DefPool *` to `T_PTR`.
            *   An `INPUT` section for `ProtoBuf::DescriptorPool` to extract the pointer from the object's hash, triggering the lazy builder if needed via `call_method`.
            *   An `OUTPUT` section for `upb_DefPool *` to convert the pointer back to a Perl integer, used by the builder.
        4.  **Method Renaming:** `add_file_descriptor_set_binary` is now `load_serialized_descriptor_set`.
        5.  **Test Data:**
            *   Added `perl/t/data/test.proto` with a sample message and enum.
            *   Generated `perl/t/data/test_descriptor.bin` using `protoc`.
            *   Removed `t/data/` from `.gitignore` to ensure test data is versioned.
        6.  **Test Update:** `t/04_descriptor_pool.t` is updated to use the new OO interface, load the generated descriptor set, and check for message definitions.
        7.  **Build Fixes:**
            *   Corrected `#include` paths in `DescriptorPool.xs` to be relative to the `upb/` directory (e.g., `upb/wire/decode.h`).
            *   Added `-I../upb` to `CCFLAGS` in `Makefile.PL`.
            *   Reordered `INC` paths in `Makefile.PL` to prioritize local headers.
        
        **Note:** While tests now pass in some environments, a SEGV issue persists in `make test` runs, indicating a potential memory or lifecycle issue within the XS layer that needs further investigation.
    
    
    ## 0.02	2025-12-14
    
    commit 6c9a6f1a5f774dae176beff02219f504ea3a6e07
    Author: C.J. Collier 
    Date:   Sun Dec 14 20:13:09 2025 +0000
    
        Fix(perl): Correct UPB build integration and generated file handling
        
        This commit resolves several issues to achieve a successful build of the Perl extension:
        
        1.  **Use Bazel Generated Files:** Switched from compiling UPB's stage0 descriptor.upb.c to using the Bazel-generated `descriptor.upb.c` and `descriptor.upb_minitable.c` located in `bazel-bin/src/google/protobuf/_virtual_imports/descriptor_proto/google/protobuf/`.
        2.  **Updated Include Paths:** Added the `bazel-bin` path to `INC` in `WriteMakefile` and to `base_flags` in `MY::postamble` to ensure the generated headers are found during both XS and static library compilation.
        3.  **Removed Stage0:** Removed references to `UPB_STAGE0_DIR` and no longer include headers or source files from `upb/reflection/stage0/`.
        4.  **-fPIC:** Explicitly added `-fPIC` to `CCFLAGS` in `WriteMakefile` and ensured `$(CCFLAGS)` is used in the custom compilation rules in `MY::postamble`. This guarantees all object files in the static library are compiled with position-independent code, resolving linker errors when creating the shared objects for the XS modules.
        5.  **Refined UPB Sources:** Used `File::Find` to recursively find UPB C sources, excluding `/conformance/` and `/reflection/stage0/` to avoid conflicts and unnecessary compilations.
        6.  **Arena Constructor:** Modified `ProtoBuf::Arena::pb_arena_new` XSUB to accept the class name argument passed from Perl, making it a proper constructor.
        7.  **.gitignore:** Added patterns to `perl/.gitignore` to ignore generated C files from XS (`lib/*.c`, `lib/ProtoBuf/*.c`), the copied `src_google_protobuf_descriptor.pb.cc`, and the `t/data` directory.
        8.  **Build Documentation:** Updated `perl/doc/architecture/upb-build-integration.md` to reflect the new build process, including the Bazel prerequisite, include paths, `-fPIC` usage, and `File::Find`.
        
        Build Steps:
        1.  `bazel build //src/google/protobuf:descriptor_upb_proto` (from repo root)
        2.  `cd perl`
        3.  `perl Makefile.PL`
        4.  `make`
        5.  `make test` (Currently has expected failures due to missing test data implementation).
    
    
    ## 0.01	2025-12-14
    
    commit 3e237e8a26442558c94075766e0d4456daaeb71d
    Author: C.J. Collier 
    Date:   Sun Dec 14 19:34:28 2025 +0000
    
        feat(perl): Initialize Perl extension scaffold and build system
        
        This commit introduces the `perl/` directory, laying the groundwork for the Perl Protocol Buffers extension. It includes the essential build files, linters, formatter configurations, and a vendored Devel::PPPort for XS portability.
        
        Key components added:
        
        *   **`Makefile.PL`**: The core `ExtUtils::MakeMaker` build script. It's configured to:
            *   Build a static library (`libprotobuf_common.a`) from UPB, UTF8_Range, and generated protobuf C/C++ sources.
            *   Utilize `XSMULTI => 1` to create separate shared objects for `ProtoBuf`, `ProtoBuf::Arena`, and `ProtoBuf::DescriptorPool`.
            *   Link each XS module against the common static library.
            *   Define custom compilation rules in `MY::postamble` to handle C vs. C++ flags and build the static library.
            *   Set up include paths for the project root, UPB, and other dependencies.
        
        *   **XS Stubs (`.xs` files)**:
            *   `lib/ProtoBuf.xs`: Placeholder for the main module's XS functions.
            *   `lib/ProtoBuf/Arena.xs`: XS interface for `upb_Arena` management.
            *   `lib/ProtoBuf/DescriptorPool.xs`: XS interface for `upb_DefPool` management.
        
        *   **Perl Module Stubs (`.pm` files)**:
            *   `lib/ProtoBuf.pm`: Main module, loads XS.
            *   `lib/ProtoBuf/Arena.pm`: Perl class for Arenas.
            *   `lib/ProtoBuf/DescriptorPool.pm`: Perl class for Descriptor Pools.
            *   `lib/ProtoBuf/Message.pm`: Base class for messages (TBD).
        
        *   **Configuration Files**:
            *   `.gitignore`: Ignores build artifacts, editor files, etc.
            *   `.perlcriticrc`: Configures Perl::Critic for static analysis.
            *   `.perltidyrc`: Configures perltidy for code formatting.
        
        *   **`Devel::PPPort`**: Vendored version 3.72 to generate `ppport.h` for XS compatibility across different Perl versions.
        
        *   **`typemap`**: Custom typemap for XS argument/result conversion.
        
        *   **Documentation (`doc/`)**: Initial architecture and plan documents.
        
        This provides a solid foundation for developing the UPB-based Perl extension.
    
    
    


    Planet DebianIan Jackson: Debian’s git transition

    tl;dr:

    There is a Debian git transition plan. It’s going OK so far but we need help, especially with outreach and updating Debian’s documentation.

    Goals of the Debian git transition project

    1. Everyone who interacts with Debian source code should be able to do so entirely in git.

    That means, more specifically:

    1. All examination and edits to the source should be performed via normal git operations.

    2. Source code should be transferred and exchanged as git data, not tarballs. git should be the canonical form everywhere.

    3. Upstream git histories should be re-published, traceably, as part of formal git releases published by Debian.

    4. No-one should have to learn about Debian Source Packages, which are bizarre, and have been obsoleted by modern version control.

    This is very ambitious, but we have come a long way!

    Achievements so far, and current status

    We have come a very long way. But there is still much to do: in particular, the git transition team needs your help with adoption, developer outreach, and a developer documentation overhaul.

    We’ve made big strides towards goals 1 and 4. Goal 2 is partially achieved: we currently have dual running. Goal 3 is within our reach but depends on widespread adoption of tag2upload (and/or dgit push).

    Downstreams and users can obtain the source code of any Debian package in git form. (dgit clone, 2013). They can then work with this source code completely in git, including building binaries, merging new versions, even automatically (eg Raspbian, 2016), and all without having to deal with source packages at all (eg Wikimedia 2025).
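    As a sketch (the package name is arbitrary), that workflow looks like:

    $ dgit clone hello
    $ cd hello
    $ git log --oneline              # full history, patches applied
    $ dpkg-buildpackage -us -uc -b   # build binaries straight from the git tree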

    A Debian maintainer can maintain their own package entirely in git. They can obtain upstream source code from git, and do their packaging work in git (git-buildpackage, 2006).

    Every Debian maintainer can (and should!) release their package from git reliably and in a standard form (dgit push, 2013; tag2upload, 2025). This is not only more principled, but also more convenient, and with better UX, than pre-dgit tooling like dput.

    Indeed a Debian maintainer can now often release their changes to Debian, from git, using only git branches (so no tarballs). Releasing to Debian can be simply pushing a signed tag (tag2upload, 2025).
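    A sketch of that release step, assuming the git-debpush tool and a finalised changelog (the version is hypothetical):

    $ dch -r && git commit -a -m "finalise 1.2-3"
    $ git debpush    # signs and pushes a tag; the tag2upload service turns it into an upload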

    A Debian maintainer can maintain a stack of changes to upstream source code in git (gbp pq, 2009). They can even maintain such a delta series as a rebasing git branch, directly buildable, and use normal git rebase style operations to edit their changes (git-dpm, 2010; git-debrebase, 2018).

    An authorised Debian developer can do a modest update to any package in Debian, even one maintained by someone else, working entirely in git in a standard and convenient way (dgit, 2013).
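    A minimal NMU sketch with dgit (the package, file, and bug number are hypothetical):

    $ dgit clone hello && cd hello
    $ $EDITOR src/parser.c
    $ git commit -a -m "Fix crash on empty input (Closes: #873518)"
    $ dch --nmu "Fix crash on empty input." && git commit -a -m changelog
    $ dgit push-source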

    Debian contributors can share their work-in-progress on git forges and collaborate using merge requests, git based code review, and so on. (Alioth, 2003; Salsa, 2018.)

    Core engineering principle

    The Debian git transition project is based on one core engineering principle:

    Every Debian Source Package can be losslessly converted to and from git.

    In order to transition away from Debian Source Packages, we need to gateway between the old dsc approach, and the new git approach.

    This gateway obviously needs to be bidirectional: source packages uploaded with legacy tooling like dput need to be imported into a canonical git representation; and of course git branches prepared by developers need to be converted to source packages for the benefit of legacy downstream systems (such as the Debian Archive and apt source).

    This bidirectional gateway is implemented in src:dgit, and is allowing us to gradually replace dsc-based parts of the Debian system with git-based ones.

    Correspondence between dsc and git

    A faithful bidirectional gateway must define an invariant:

    The canonical git tree, corresponding to a .dsc, is the tree resulting from dpkg-source -x.

    This canonical form is sometimes called the “dgit view”. It’s sometimes not the same as the maintainer’s git branch, because many maintainers are still working with “patches-unapplied” git branches. More on this below.

    (For 3.0 (quilt) .dscs, the canonical git tree doesn’t include the quilt .pc directory.)
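
    One way to check the invariant by hand, as a sketch (file and directory names are illustrative):

        $ dpkg-source -x hello_2.10-3.dsc       # legacy extraction, patches applied
        $ dgit clone hello && cd hello          # the canonical git view
        $ diff -r --exclude=.git --exclude=.pc . ../hello-2.10 && echo identical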

    Patches-applied vs patches-unapplied

    The canonical git format is “patches applied”. That is:

    If Debian has modified the upstream source code, a normal git clone of the canonical branch gives the modified source tree, ready for reading and building.

    Many Debian maintainers keep their packages in a different git branch format, where the changes made by Debian, to the upstream source code, are in actual patch files in a debian/patches/ subdirectory.

    Patches-applied has a number of important advantages over patches-unapplied:

    • It is familiar to, and doesn’t trick, outsiders to Debian. Debian insiders radically underestimate how weird “patches-unapplied” is. Even expert software developers can get very confused or even accidentally build binaries without security patches!

    • Making changes can be done with just normal git commands, eg git commit. Many Debian insiders working with patches-unapplied are still using quilt(1), a footgun-rich contraption for working with patch files!

    • When developing, one can make changes to upstream code, and to Debian packaging, together, without ceremony. There is no need to switch back and forth between patch queue and packaging branches (as with gbp pq), no need to “commit” patch files, etc. One can always edit every file and commit it with git commit.

    The downside is that, with the (bizarre) 3.0 (quilt) source format, the patch files in debian/patches/ must somehow be kept up to date. Nowadays though, tools like git-debrebase and git-dpm (and dgit for NMUs) make it very easy to work with patches-applied git branches. git-debrebase can deal very ergonomically even with big patch stacks.

    (For smaller packages which usually have no patches, plain git merge with an upstream git branch, and a much simpler dsc format, sidesteps the problem entirely.)

    Prioritising Debian’s users (and other outsiders)

    We want everyone to be able to share and modify the software that they interact with. That means we should make source code truly accessible, on the user’s terms.

    Many of Debian’s processes assume everyone is an insider. It’s okay that there are Debian insiders and that people feel part of something that they worked hard to become involved with. But lack of perspective can lead to software which fails to uphold our values.

    Our source code practices — in particular, our determination to share properly (and systematically) — are a key part of what makes Debian worthwhile at all. Like Debian’s installer, we want our source code to be useable by Debian outsiders.

    This is why we have chosen to privilege a git branch format which is more familiar to the world at large, even if it’s less popular in Debian.

    Consequences, some of which are annoying

    The requirement that the conversion be bidirectional, lossless, and context-free can be inconvenient.

    For example, we cannot support .gitattributes which modify files during git checkin and checkout. .gitattributes cause the meaning of a git tree to depend on the context, in possibly arbitrary ways, so the conversion from git to source package wouldn’t be stable. And, worse, some source packages might not be representable in git at all.

    Another example: Maintainers often have existing git branches for their packages, generated with pre-dgit tooling which is less careful and less principled than ours. That can result in discrepancies between git and dsc, which need to be resolved before a proper git-based upload can succeed.

    That some maintainers use patches-applied, and some patches-unapplied, means that there has to be some kind of conversion to a standard git representation. Choosing the less-popular patches-applied format as the canonical form means that many packages need their git representation converted. It also means that user- and outsider-facing branches from {browse,git}.dgit.d.o and dgit clone are not always compatible with maintainer branches on Salsa. User-contributed changes need cherry-picking rather than merging, or conversion back to the maintainer format. The good news is that dgit can automate much of this, and the manual parts are usually easy git operations.

    Distributing the source code as git

    Our source code management should be normal, modern, and based on git. That means the Debian Archive is obsolete and needs to be replaced with a set of git repositories.

    The replacement repository for source code formally released to Debian is *.dgit.debian.org. This contains all the git objects for every git-based upload since 2013, including the signed tag for each released package version.

    The plan is that it will contain a git view of every uploaded Debian package, by centrally importing all legacy uploads into git.

    Tracking the relevant git data, when changes are made in the legacy Archive

    Currently, many critical source code management tasks are done by changes to the legacy Debian Archive, which works entirely with dsc files (and the associated tarballs etc). The contents of the Archive are therefore still an important source of truth. But, the Archive’s architecture means it cannot sensibly directly contain git data.

    To track changes made in the Archive, we added the Dgit: field to the .dsc of a git-based upload (2013). This declares which git commit this package was converted from, and where those git objects can be obtained.

    Thus, given a Debian Source Package from a git-based upload, it is possible for the new git tooling to obtain the equivalent git objects. If the user is going to work in git, there is no need for any tarballs to be downloaded: the git data could be obtained from the depository using the git protocol.

    The signed tags, available from the git depository, have standardised metadata which gives traceability back to the uploading Debian contributor.
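
    Schematically (the commit id is elided here, and the precise field layout is specified by dgit's documentation rather than reproduced in this sketch):

        $ grep '^Dgit:' hello_2.10-3.dsc
        Dgit: <commit-id> ...                   # the commit this upload was converted from
        $ dgit clone hello                      # dgit uses this to fetch the git objects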

    Why *.dgit.debian.org is not Salsa

    We need a git depository - a formal, reliable and permanent git repository of source code actually released to Debian.

    Git forges like Gitlab can be very convenient. But Gitlab is not sufficiently secure, and too full of bugs, to be the principal and only archive of all our source code. (The “open core” business model of the Gitlab corporation, and the constant-churn development approach, are critical underlying problems.)

    Our git depository lacks forge features like Merge Requests. But:

    • It is dependable, both in terms of reliability and security.
    • It is append-only: once something is pushed, it is permanently recorded.
    • Its access control is precisely that of the Debian Archive.
    • Its ref namespace is standardised and corresponds to Debian releases.
    • Pushes are authorised by PGP signatures, not ssh keys, so traceable.

    The dgit git depository outlasted Alioth and it may well outlast Salsa.

    We need both a good forge, and the *.dgit.debian.org formal git depository.

    Roadmap

    In progress

    Right now we are quite focused on tag2upload.

    We are working hard on eliminating the remaining issues that we feel need to be addressed before declaring the service out of beta.

    Future Technology

    Whole-archive dsc importer

    Currently, the git depository only has git data for git-based package updates (tag2upload and dgit push). Legacy dput-based uploads are not currently present there. This means that the git-based and legacy uploads must be resolved client-side, by dgit clone.

    We will want to start importing legacy uploads to git.

    Then downstreams and users will be able to get the source code for any package simply with git clone, even if the maintainer is using legacy upload tools like dput.

    Support for git-based uploads to security.debian.org

    Security patching is a task which would particularly benefit from better and more formal use of git. git-based approaches to applying and backporting security patches are much more convenient than messing about with actual patch files.

    Currently, one can use git to help prepare a security upload, but it often involves starting with a dsc import (which lacks the proper git history) or figuring out a package maintainer’s unstandardised git usage conventions on Salsa.

    And it is not yet possible to properly perform the security release itself in git.

    Internal Debian consumers switch to getting source from git

    Buildds, QA work such as lintian checks, and so on, would be simpler if they didn’t need to deal with source packages.

    And since git is actually the canonical form, we want them to use it directly.

    Problems for the distant future

    For decades, Debian has been built around source packages. Replacing them is a long and complex process. Certainly source packages are going to continue to be supported for the foreseeable future.

    There are no doubt going to be unanticipated problems. There are also foreseeable issues: for example, perhaps there are packages that work very badly when represented in git. We think we can rise to these challenges as they come up.

    Mindshare and adoption - please help!

    We and our users are very pleased with our technology. It is convenient and highly dependable.

    dgit in particular is superb, even if we say so ourselves. As technologists, we have been very focused on building good software, but it seems we have fallen short in the marketing department.

    A rant about publishing the source code

    git is the preferred form for modification.

    Our upstreams are overwhelmingly using git. We are overwhelmingly using git. It is a scandal that for many packages, Debian does not properly, formally and officially publish the git history.

    Properly publishing the source code as git means publishing it in a way that lets anyone automatically and reliably obtain and build the exact source code corresponding to the binaries. The test is: could you use that to build a derivative?

    Putting a package in git on Salsa is often a good idea, but it is not sufficient. No standard branch structure is enforced on Salsa, nor should it be (so the source can’t be automatically and reliably obtained); the tree is not in a standard form (so it can’t be automatically built); and it is not necessarily identical to the source package. So Vcs-Git fields, and git from Salsa, will never be sufficient to make a derivative.

    Debian is not publishing the source code!

    The time has come for proper publication of source code by Debian to no longer be a minority sport. Every maintainer of a package whose upstream is using git (which is nearly all packages nowadays) should be basing their work on upstream git, and properly publishing that via tag2upload or dgit.

    And it’s not even difficult! The modern git-based tooling provides a far superior upload experience.

    A common misunderstanding

    dgit push is not an alternative to gbp pq or quilt. Nor is tag2upload. These upload tools complement your existing git workflow. They replace and improve source package building/signing and the subsequent dput. If you are using one of the usual git layouts on salsa, and your package is in good shape, you can adopt tag2upload and/or dgit push right away.

    git-debrebase is distinct, and does provide an alternative way to manage your git packaging, do your upstream rebases, etc.

    Documentation

    Debian’s documentation all needs to be updated, including particularly instructions for packaging, to recommend use of git-first workflows. Debian should not be importing git-using upstreams’ “release tarballs” into git. (Debian outsiders who discover this practice are typically horrified.) We should use only upstream git, work only in git, and properly release (and publish) in git form.
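
    A git-first flow might look roughly like this (the upstream URL and tag names are made up for illustration):

        $ git remote add upstream https://example.org/hello.git
        $ git fetch upstream --tags
        $ git merge v2.10                       # simple packages: plain merge
        # or, for a rebasing delta branch:
        $ git debrebase new-upstream 2.10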

    We, the git transition team, are experts in the technology, and can provide good suggestions. But we do not have the bandwidth to also engage in the massive campaigns of education and documentation updates that are necessary — especially given that (as with any programme for change) many people will be sceptical or even hostile.

    So we would greatly appreciate help with writing and outreach.

    Personnel

    We consider ourselves the Debian git transition team.

    Currently we are:

    • Ian Jackson. Author and maintainer of dgit and git-debrebase. Co-creator of tag2upload. Original author of dpkg-source, and inventor in 1996 of Debian Source Packages. Alumnus of the Debian Technical Committee.

    • Sean Whitton. Co-creator of the tag2upload system; author and maintainer of git-debpush. Co-maintainer of dgit. Debian Policy co-Editor. Former Chair of the Debian Technical Committee.

    We wear the following hats related to the git transition:

    You can contact us:

    We do most of our heavy-duty development on Salsa.

    Thanks

    Particular thanks are due to Joey Hess, who, in the now-famous design session in Vaumarcus in 2013, helped invent dgit. Since then we have had a lot of support: most recently political support to help get tag2upload deployed, but also, over the years, helpful bug reports and kind words from our users, as well as translations and code contributions.

    Many other people have contributed more generally to support for working with Debian source code in git. We particularly want to mention Guido Günther (git-buildpackage); and of course Alexander Wirt, Joerg Jaspert, Thomas Goirand and Antonio Terceiro (Salsa administrators); and before them the Alioth administrators.




    Planet DebianRussell Coker: Links December 2025

    Russ Allbery wrote an interesting review of Politics on the Edge, by Rory Stewart, who seems like one of the few conservative politicians I could respect and possibly even like [1]. It has some good insights about the problems with our current political environment.

    The NY Times has an amusing article about the attempt to sell the solution to the CIA’s encrypted artwork [2].

    Wired has an interesting article about computer face recognition systems failing on people with facial disabilities or scars [3]. This is a major accessibility issue potentially violating disability legislation and a demonstration of the problems of fully automating systems when there should be a human in the loop.

    The October 2025 report from the Debian Reproducible Builds team is particularly interesting [4]. “kpcyrd forwarded a fascinating tidbit regarding so-called ninja and samurai build ordering, that uses data structures in which the pointer values returned from malloc are used to determine some order of execution” LOL

    Louis Rossmann made an insightful YouTube video about the moral case for piracy of software and media [5].

    Louis Rossmann made an insightful video about the way that Hyundai is circumventing Right to Repair laws to make repairs needlessly expensive [6]. Korean cars aren’t much good nowadays. Their prices keep increasing and the quality doesn’t.

    Brian Krebs wrote an interesting article about how Google is taking legal action against SMS phishing crime groups [7]. We need more of this!

    Josh Griffiths wrote an informative blog post about how YouTube is awful [8]. I really should investigate Peertube.

    Louis Rossmann made an informative YouTube video about Right to Repair and the US military; if even the US military is getting ripped off by this, it’s a bigger problem than most people realise [9]. He also asks the rhetorical question of whether politicians are bought or whether it’s a “subscription model”.

    Brian Krebs wrote an informative article about the US plans to ban TP-Link devices; OpenWRT seems like a good option [10].

    Brian Krebs wrote an informative article about “free streaming” Android TV boxes that act as hidden residential VPN proxies [11]. Also the “free streaming” violates copyright law.

    Bruce Schneier and Nathan E. Sanders wrote an interesting article about ways that AI is being used to strengthen democracy [12].

    Cory Doctorow wrote an insightful article about the incentives for making shitty goods and services and why we need legislation to protect consumers [13].

    Linus Tech Tips has an interesting interview with Linus Torvalds [14].

    Interesting video about the Kowloon Walled City [15]. It would be nice if a government deliberately created a hive city like that; the only example I know of is the Alaskan town in a single building.

    David Brin wrote an insightful set of 3 blog posts about a Democratic American deal that could improve the situation there [16].

    365 TomorrowsVeteran’s Christmas

    Author: Alastair Millar Grandpa Jack had a generous ‘meritorious service’ pension from his time in the Terran Space Force, but he never talked about his time in uniform, or shared war stories. “I’d rather not,” he’d say diffidently, or, if pressed, “Just not much to tell, really”. Everyone around him quietly assumed that his off-world […]

    The post Veteran’s Christmas appeared first on 365tomorrows.

    David BrinFour MORE Newer Deals... and why they'll work better than the Reich 'pledges'

    Continuing my series about a proposed Democratic Newer Deal

    Here I'll dive deeply into four more of the 30+ suggested reforms that were briefly listed here... organized in a way that both learns-from and satirizes the hypocritical but politically effective 1994 Republican Contract With America. 

    But first some pertinent news. A couple of weeks after I started posting this series -- offering voters a clear agenda of positive steps -- economist and columnist Robert Reich issued a shorter list of “What Democrats Must Pledge to America.” And no, I am not asserting that my series triggered him to hurry his. 


    Well, probably not. Though Reich's list overlaps mine in overall intent! We both aim to make progress toward better health care, aid to parents and children, and sound economics while limiting the power of oligarchies and cheaters and monopolies. Alas, Reich's 'pledges' also make up a wish list that might as well be directed at Santa Claus, for all of its political impracticality.


    What distinguishes even very smart/moderate leftists like Reich from their centrist allies (like me) is not the desired direction, or even our degree of passion (you all know that I have plenty!), but awareness of one pure fact, that most of our progress across the last 250 years – even under FDR – was incremental. Each plateau building from the previous ones, like upward stairs of progress. Not letting the perfect be the enemy of the possible.

    Alas, not one of Reich’s proposals satisfies the “60%+ Rule” that was so politically-effective for Newt Gingrich in 1994, and that Pelosi-Schumer-Sanders applied with terrific effectiveness in 2021-22.  


    Start with steps that can be steam-rollered quickly, with 60%+ strong public support, right away! Only after that do you try for the long pass.


    Big Gulp endeavors, like those tried by Clinton and Obama, always get bogged down and savaged by "Who pays for it?" and "They want communism!" Then, the GOP wins the next Congress and that's that - opportunity window closed. What we discovered in the 2021-22 Pelosi miracle year was that you can make great strides in multiple directions, if you start from that 60% consensus in order to push solid increments. Steps that then create those new plateaus!


    Contrasting with Reich's "pledges," my list emphasizes restoring a functioning republic - civil service, reliable institutions, elections and rule-of-law - in ways that can't be withdrawn by future demagogues... along with incremental steps toward our shared goals (e.g. get all CHILDREN coverable under Medicare, in a single stroke, easily afforded and canceling every objection to Medicare-for-all.)


    Look, I like and respect Robert Reich. But here he should have added an equally realistic 11th wish to the other ten... that every American gets a unicorn or pegasus, or at least a pony.



    == Those "Newer Deal" proposals we appraised last time ==


    Could the news this month have better supported my list? If we had the Inspectorate right now, under IGUS (a totally independent Inspector General of the United States), Trump could not have fired or transferred most of the IGs and JAGs in the federal government. Honest scrutiny would abound when we need it most! And officers would have somewhere to turn, when given illegal orders. (I have recommended IGUS for fifteen years.)


    The Truth & Reconciliation Act - discussed last time - would have staunched Trump's tsunami of corrupt pardons and the Immunity Limitation Act would clarify that no President is above the law. And yes, there are ways to judo-bypass the Roberts Court in both of those realms.


    Some other proposals from my last two postings may seem obscure, like the Cyber Hygiene Act that could eliminate 90%+ of the 'botnets' that now infest tens of millions of home and small business computers, empowering our enemies and criminals. Or one that I personally like most... a simple House-internal reform to give every member one subpoena per year, which would likely transform the entire mental ecology in Congress!


    But onward to more proposals! Most of which (again) you'll see nowhere else.



    == Appraising another four "Newer Deal" proposals ==


    I've mentioned the 1994 Newt Gingrich Contract With America several times and in so doing I likely triggered visceral, gut-wrenching loathing from many of you! 


    Well tough. You must understand how the 'contract' seemingly offered voters clear and crisp reforms of a system that most citizens now distrust. 


    Yes, Newt and especially his replacement - the deeply-evil Dennis Hastert - betrayed every promise when they took power. Still, some (a minority) of those promises merit another look. Moreover, Democrats can say "WATCH as we actually enact them, unlike our lying opponents!"


    Among the good ideas the GOP betrayed are these:

     

       Require all laws that apply to the rest of the country also apply to Congress; 

       Arrange regular audits of Congress for waste or abuse;

       Limit the terms of all committee chairs and party leadership posts;

       Ban the casting of proxy votes in committee and law-writing by lobbyists;

       Require that committee meetings be open to the public;

       Guarantee honest accounting of our Federal Budget.

     

    …and in the same spirit…


    Members of Congress shall report openly all stock and other trades by members or their families, especially those trades which might be affected by the member’s inside knowledge.



    Some members may resist some of those measures. But those are the sorts of House internal reforms that could truly persuade voters. Especially with the contrast. "Republicans betrayed these promises. We are keeping them."


    Here's another one that'd be simple to implement. Even entertaining! While somewhat favoring the Party that has more younger members. Fewer creaky near-zombies. And so, swinging from the House to the Senate:



    While continuing ongoing public debate over the Senate’s practice of filibuster, we shall use our next majority in the Senate to restore the original practice: that senators invoking a filibuster must speak on the chamber floor the entire time.



    No explanation is needed on that one! Bring back the spirit of Jimmy Stewart.


    Only now, here's one that I very much care about. Do any of you remember when Gingrich and then Hastert fired all the staff in Congress that advised members about matters of actual fact, especially science and technology? Why on Earth would they do such a thing? 


    Simple. The Congressional Office of Technology Assessment (OTA) would often say to members: "I'm sorry (sir or madam), but that's not actually true."


    Oh, no, we can't have that! Gingrich asserted that OTA said that dreaded phrase far more often to Republicans than to Democrats. And... well... yes, that is true enough. There's a reason for that. But true or not, it's time for this proposal to be enacted:



    Independent congressional advisory offices for science, technology and other areas of skilled, fact-based analysis will be restored, in order to counsel Congress on matters of fact without bias or dogma-driven pressure. 


    Rules shall ensure that technical reports may not be re-written by politicians, changing their meaning to bend to political desires. 

     

    Every member of Congress shall be encouraged and funded to appoint from their home district a science-and-fact advisor who may interrogate the advisory panels and/or answer questions of fact on the member’s behalf.



    Notice how this pre-empts all plausible objections in advance! By challenging (and funding) every representative to hire a science and fact adviser from their home district, you achieve several things:


    1. Each member gets trusted factual guidance -- someone who can interrogate OTA and other experts, on the member's behalf. And this, in turn, addresses the earlier Gingrich calumny about "OTA bias."


    2. Members would no longer get to wriggle and squirm out of answering fact or science questions -- e.g. re: Climate Change -- evading with the blithe shrug that's used by almost all current Republicans: "I'm not a scientist." 


    So? Now you have someone you trust who can answer technical or factual or scientific questions for you. So step up to the microphone with your team.


    3. Any member who refuses to name such an adviser risks ridicule; "What? Your home district hasn't got savvy experts you could pick from?" That potential blowback could ensure that every member participates.


    4. Remember, this is about fact-determination and not policy! Policy and law remain the member's domain. Only now they will be less unconstrained in asserting false, counter-factual justifications for dumb policies.



    And finally (for this time)... a problem that every Congress has promised to address, that of PORK spending. Look, you will never eliminate it! Members want to bring stuff home to their district. 


    But by constraining pork to a very specific part of the budget, they'll have to wrangle with each other, divvying that single slice of pie among themselves. And it will lead to scrutiny of each other's picks, giving each pork belly a strong sniff for potential corruption.



    New rules shall limit “pork” earmarking of tax dollars to benefit special interests or specific districts. Exceptions must come from a single pool, totaling no more than one half of a percent of the discretionary budget. These exceptions must be placed in clearly marked and severable portions of a bill, at least two weeks before the bill is voted upon. (More details here.)



    Notice that all four of the proposals that we covered this time are internal procedure reforms for the houses of Congress! Which means they would not be subject to presidential veto. 


    These... and several others... could be passed if Democrats take either house of Congress in January 2027, no matter who is still in the White House.


    There are other procedural suggestions, some of them perhaps a bit crackpotty! Like occasional secret ballot polls to see if members are voting the way they do out of genuine conscience or else out of fear or coercion... but you can find those here.


    Next time, we'll get back to vitally-needed laws.


    -------------


    And this project continues...


    David BrinSave our Defenders! And more Newer Deals.

    For those just now tuning-in... my series about a Democratic Newer Deal aims to emulate the two most 'successful' legislative agendas of the last 30+ years, one of them a massive electoral success, despite being total fraud...

    ... and the other one a surge of rapid accomplishments. We need to learn from both of them, should Congress be recovered by the Union side in this latest phase of our 250 year civil (culture) war.

    How does one measure political 'success'?  

    First, does your publicized agenda attract voters so you'll win the next election? 

    Second, can you then pass a whole lot of reforms that you and your constituents want, right away, before the political winds shift again?

    The first desideratum was achieved in 1994 by the most successful tactical ploy of the last 30 years, Newt Gingrich's deceitful but alluring "Contract With America." His Contract made dozens of promises, half of them sounding reasonable to U.S. voters, enabling the GOP to crush Bill Clinton's Democrats and commence the Neo-Conservative era. That ploy was - of course - a damned lie... every promise betrayed, then forgotten by the GOP masters. And by their voters, distracted down rabbit holes of Fox-hypnosis. Still, such a successful tactic should be studied for why it worked.

    The second great political success of the last 30 years came in 2021-22 when Nancy Pelosi & Chuck Schumer - aided eagerly by Bernie, Liz, AOC and pragmatic progressives - used their brief window of opportunity to send to Joe Biden's desk a miracle year's worth of truly terrific bills. Bills that (alas!) far too few Democratic voters remember, distracted as they are by Trumpian antics. 

    Unlike Gingrich-era hypocrites, Pelosi/Sanders et al. truly wanted to rebuild American infrastructure, help poor kids, boost science, get fairness in taxes and the rest. Their action plan? Start any major reform campaign with a tranche of measures that are both urgently needed and that satisfy the 60%+ rule.

    What 60%+ Rule? Start with reforms that sell themselves by appealing to more than 60% of voters... and THEN fight the harder fights. Pelosi & co. did that in 2021-22. And for the first time in three decades, the Dems had a good legislative year.

    They stopped too soon! Not their fault, but boy did we need more! Including some items to preserve democracy and justice, in case a monster like Trump ever regained power.

    That is the pair of drivers behind my Newer Deal proposals.  Your agenda must be clear and dramatically appealing to 60%+ of American voters, so that you will win! 

    And you must act on those 60%+ items quickly, to prove that you are not Republican hypocrite-liars. And in order to get in place the urgent stuff immediately!*

    ...so that opposition delaying tactics - and even a political wind-shift - will not reverse the most important ones.


    == Earlier agile judo-proposals for a new Congress ==

    As I did in the previous five postings, I will here amplify or examine some of those 35+ proposals, should Democrats regain the power to pass legislation. A dozen of them can be done, even if opposed by a monstrously Kremlin-controlled president! And nine could be enacted by just the House of Representatives, all by itself.

    Almost all of the proposals are listed here, in this earlier posting, though I since added a couple based on reader suggestions. So far, we've covered the most urgent, including establishing the Inspectorate, a wholly independent agency to truly empower and protect the auditors and IGs and JAGS and others who can thereupon protect those who protect the rule of law!...

    ...plus a major step toward solving health care, by defining all children as 'seniors' for purposes of Medicare, a move that can be easily afforded, while cancelling all standard objections...

    ... plus partial solutions to abuse of presidential pardons and his sale of favors and hoarding bribe-gifts and abusing presidential control over public property like the White House. And I promise you've not seen those particular proposals, before. 

    Only now...


     == Okay then, let's appraise four more! ==

    The next one seems pretty damn vital, as the current administration drags us all toward an Epstein-distraction war. I'll discuss some of the whys and therefores, below.
     

     THE SECURITY FOR AMERICA ACT will ensure that top priority goes to America’s military and security readiness, especially our nation's ability to respond to surprise threats, including natural disasters or other emergencies. For starters, FEMA and the CDC and other contingency agencies will be restored.


    NOTE: amid their all-out war against the brave and brilliant men and women of the U.S. Military Officer Corps, Republicans keep blaring assertions that Democrats have (so far) been too polemically stupid to refute. Foremost among these lies is the claim that the GOP is somehow better at Military Readiness. 

    The opposite is true! Across the spans of most GOP or Democratic administrations, military readiness is nearly always rated higher after Democratic ones.  

    Care to bet $$$ on that?  Anyway, the CDC and FEMA are just as important to defense against unexpected dangers, as well as replenishing the Trump-undermined counter-terrorism staffs who have been reamed-out and demoralized under Tulsi Gabbard and Kristi Noem, almost as if the administration wants another 9/11. And maybe they do.

    But let's continue with this Defense-related act.


     When ordering a discretionary foreign intervention, the President must report probable effects on readiness, the purposes, severity, and likely duration of the intervention, and credible evidence of need. 

    All previous Congressional approvals for foreign military intervention or declared states of urgency will be explicitly canceled, so that future force resolutions will be fresh and germane to each particular event, with explicit expiration dates.


    NOTE: These two paragraphs have been desperately needed for a long time. And I do fault Joe Biden for not doing this.

    Emergency resolutions must expire and new ones be required for each "urgency"!  

    These resolutions are the last vestiges of Congress's Constitutional power over declaring war. And they must remain meaningful!  

    Continuing...


     Reserves will be augmented and modernized. Reserves shall not be sent overseas without a Congressionally certified state of urgency, which must be renewed at six-month intervals. 


    NOTE: The paragraph above refers to a travesty of George W. Bush: calling up reserve units of men and women with families and jobs and lives and hurling them directly, without preparation, into the (lie-based) Iraq and Afghanistan Wars. Such a call-up of reserves for overseas conflict may sometimes be necessary... though arguably it was not, in those cases, when it was - in effect - a Bushite cheat. 

    In any event, Congress ought to be able to weigh in, on behalf of their constituents... and the Constitution.

    Continuing...


    Any urgent federalization and deployment of National Guard or other troops to American cities, on the excuse of civil disorder, shall be supervised by a plenary of the nation’s state governors, who may veto any such deployment by a 40% vote or a signed declaration by twenty governors. 

    The Commander-in-Chief may not suspend any American law, or the rights of American citizens, without submitting the brief and temporary suspension to Congress for approval in session. 


    NOTE: in case you are puzzled why I give such authority to an ad hoc panel of a minority of governors... think about it!

    1. Governors are normally the commanders of their state militias or National Guard units! They damn well should be involved in determining whether a president's nationalization has good cause. Sure, George Wallace showed us that any one governor can be awful and should not have such veto power, alone. But TWENTY would give a good sense that something is very, very wrong with the order.

    2. Why TWENTY? Why not a majority? Because the nation is badly divided and for a matter like this, no narrow majority should be able to impose its will, with military force on a large, objecting minority. If 40% of governors say "You may not turn our states' own citizen volunteers into a force to impose your will upon our citizens," then that should be enough to force a pause. "Stay out of our cities and we'll handle this ourselves."

    3. Governors?  Heck yeah. Several reasons. First, Congress has been useless for all but two years out of the last 35, ever since Dennis Hastert declared a GOP rule absolutely forbidding Republicans in Congress from negotiating. We have to turn somewhere! And a council of governors is a locus of truly elected sovereignty that has been (I assert) way under-utilized.

    Governors can do plenty nationally!  Look up the Uniform Business Code. If a bunch of states pass coordinated legislation, a whole lot can get done. 

    And a final note: Look at the current Republican governors. A third of them are actually sane grownups! Sincere, patriotic and at least slightly-decent, old-fashioned conservatives. USE THIS!  Approach them and talk to them. Working with Democrats, they can supply one area of American sovereignty in which a sane majority holds sway.


    == Another one... more brief but essential! ==

    Want more unconventional proposals? You might expect the author of The Transparent Society to offer one dealing with excessive secrecy, so here goes:


    THE SECRECY ACT will ensure that the recent, skyrocketing use of secrecy – far exceeding anything seen during the Cold War - shall reverse course.  


    Independent commissions of trusted Americans shall approve, or set time limits to, all but the most sensitive classifications, which cannot exceed a certain number. If a new document is classified into the highest layers, then another must descend the ladder.


     These commissions will include some members who are chosen (after clearance) from a random pool of common citizens.  Secrecy will not be used as a convenient way to evade accountability.


    Congress shall act to limit the effect of Non-Disclosure Agreements (NDAs) that squelch public scrutiny of officials and the powerful. With arrangements to exchange truth for clemency, both current and future NDAs shall decay over a reasonable period of time. 



    NOTE: Yes, I know, all of this will be hard to pass, amid paranoia. Especially when (I believe) a fair percentage of folks in DC are being blackmailed. But elsewhere in the 35 proposals is an endeavor to lure the blackmailed or fearful into coming forward.  Anyway, a party that promises this... and goes at least part of the way... may be taken more seriously.


    And that's enough of the super-serious stuff. Now for a couple of head-scratchers... till you slap your forehead with 'of course!'


    == Get past blather to... intent! ==

    These are two more items that Congress could enact, without caring a whit about presidential vetoes! Because they are about the internal running of Congress... or even just the House!

     
    We shall use anonymous conscience polling to probe for coercion

    Once per day, the losing side in a House vote may demand and get an immediate non-binding secret polling of the members who just took part in that vote, using technology to ensure reliable anonymity. While this secret ballot will be non-binding legislatively, the poll will reveal whether some members felt coerced or compelled to vote against their conscience. Members who refuse to be polled anonymously will be presumed to have been so compelled or coerced.

    I've wanted this one for a long time, but it's especially redolent and compelling today.

    Do you honestly think that most of Donald Trump's current bestiary of mad-ludicrous cabinet members would be there, if today's GOP weren't by far the most tightly-disciplined political machine in the history of the republic?  

    Look at the faces, during those hearings and confirmation ballots! HALF of the Republicans who voted for Pete Hegseth, RFK Jr., Kristi Noem, or Tulsi Gabbard wore expressions of agony!  Or else stone, cold, frozen resignation.

    I have elsewhere offered my own theory as to how they are being coerced. But just the tight discipline - all by itself - speaks volumes! Hence the purpose of my proposal for daily anonymous polls would be to give such members a chance to 'vote' as their conscience truly would have wished.

    Sure, these anonymous polls would not be binding or have legislative effect. The reps' constituencies still have final say. And being on record before voters is democracy. Moreover, I believe that - if this were tried - the GOP leadership would threaten hard any Republican representative who cooperated, even a little. Even to answer a simple poll.

    Still, that top-down repression of their own caucus, in itself, would say a lot. And chip away at something monstrous.


    == okay, here's another obscure one! ==


    THE INTENT OF CONGRESS ACT: We shall pass an act preventing the Supreme Court from canceling laws based on contorted interpretations of Congressional will or intent. For example, the Civil Rights Bill shall not be interpreted as having “completed” the work assigned to it by Congress, when it clearly has not done so. In many cases, this act will either clarify Congressional purpose and intent or else amend certain laws to ensure that Congressional intent is crystal clear, removing that contorted rationalization used by the current Court majority to make-law, rather than adjudicate.

    This will not interfere in Supreme Court decisions based on Constitutionality. But interpretations of Congressional intent should at least consult with Congress, itself.


    Yeah, that one seems kinda obscure. You must first understand the contortions that John Roberts and Clarence Thomas and their accomplices have performed, in some cases, to justify betraying American democracy and all that. For example, in excusing the outrageous theft-crime of gerrymandering, Roberts concocted a reasoning - or Roberts Doctrine - that no neutral map-making commission can ever do a perfect job of fairness. And hence -- (they 'reasoned') - we might as well leave it to a state assembly majority that has already outrageously cheated to keep itself in power. (I offer a way around that Roberts rationalization here**, with a gerrymandering solution that evades it, completely.) 

    Another rationalization used by the Roberts cabal is "intent of Congress." They opine that this or that law was never meant to do what decades of Americans assumed that it would do. In fact, they have several times ruled that Congress clearly intended the exact opposite!

    Um, why not ask Congress what it intended? 

    Again, nothing can stop the Court, no matter how badly suborned, from issuing rulings based on contortedly rationalized interpretations of the Constitution, as with the absurd notion of presidential immunity (dealt with in another of these proposals.) But with this act, Congress might at least say: "Stop using Congress's intent as your excuse for doing your judicial legislating."


    == Jeepers, Brin, are you done yet? ==

    Dunno. Do you mean, am I done beating my head against walls, knowing that this series will get just as much attention from the political caste as I got with Polemical Judo? In other words... crickets? Zilch? Like the number of folks who are still reading here?

    Yeah, I know. I should stay in my lane. Put it all into scifi stories! Get back to blogging about cool things coming out of Science. Or better yet, give up both because no one reads, anymore.

    Sorry. Can't help it. Ideas are the sparks coming off my loose wires. If you want me to stop... call a good electrician.

    ... and onward to Part 8... where we'll discuss Immigration, bribery, presidential power over public property... and libraries! (A top vehicle for corruption!)


    =============================================

    * ... and yes, that means YOU must stop all your fantasies about Constitutional amendments. Stop that. Just please stoppit, willyou? That won't happen.  

    ** Here's a proposed legal argument that demolishes the "Roberts Doctrine" that he concocted to protect gerrymandering. https://david-brin.medium.com/the-minimal-overlap-solution-to-gerrymandered-injustice-e535bbcdd6c 

    ...and a more general deep-dive into this wretched crime: https://www.davidbrin.com/nonfiction/gerrymandering1.html

    David BrinPart 8 of a Newer Deal: immigration, budgets, emoluments and... no, the President doesn't own the White House!

     Amid the daily drenching of treasonous-lunacy, can we agree to wish calm-sanity on the world in 2026, as we leave benighted 2025 behind us? 

    I'll drop a little humor into this missive at the very end, along with Merry Christmas and Happy Hanukkah & Kwanzaa etc. wishes. But meanwhile...

    ...let's keep poking at my 35+ - likely futile - proposals that liberals, Democrats and their residually-sane conservative neighbors might enact, to fix many flaws exploited lately by enemies of our enlightenment and republic.

    This series began with an appraisal of political tactics to win elections, especially the most successful one of the last 50 years, the "Contract With America" concocted by Newt Gingrich in 1994 to bury any remnants of the Rooseveltean coalition, commencing decades of Republican dominance. A 'contract' hypocritically betrayed by the GOP! Still, it worked for them. We need to know why.

    In Part Three I listed my own proposed winning promises!  Some need legislation to either overcome a veto or await a non-traitor president. (Though far easier to pass than a Constitutional amendment, so stop whining about the Electoral College!) But half a dozen are internal reforms Congress can make no matter who's in the Oval Office. 

    In Parts 4-7 I commenced dissecting and explaining each of the proposals. Some would directly solve some of the weaknesses Donald Trump has exposed in the U.S. system. 

    So, let's continue.


    == The immigration dilemma ==

    Scream "racism!" all day, but that won't cancel a fact few liberals ever admit or confront. That enemies of the Enlightenment and liberalism found an effective tactic, a way to f---up western nations, politically. 

    The tactic? Drive many thousands of hapless, innocent refugees across borders into generously liberal democracies! Then watch, giggling, as millions of voters in those countries swing rightward at the polls.

    Cringe and deny and evade thinking about it, all you like. But when your enemy employs a winning tactic - in this case leveraging your own goodness and generosity against you - it might be sapient to notice! (As you should notice the tactical effectiveness of the Gingrich 'contract.')

    Dig it. In order to do good things in this world (many of them suggested in this series) you must have political power! And sorry - alas - that means prioritizing.

    You can't do everything. So do things first that will both improve matters and win more elections and give us the power to do more good things! Um... duh?

    Anyway, here I offer a potential way to approach that sweet spot in a vexing issue. Remaining generous, while countering the till-now 100% effective Putin refugee ploy.


     IMMIGRATION REFORM: There are already proposed immigration law reforms on the table, worked out by sincere Democrats and sincere Republicans, back when the latter were still a thing. These bipartisan reforms will be revisited, debated, updated and then brought to a vote. 

     

    In addition, if a foreign nation is a top ten source of refugees seeking U.S. asylum from persecution in their homelands, then by law it shall be incumbent upon the political and social elites in that nation to help solve the problem, or else take responsibility for causing their citizens to flee. 

     

    Upon verification that their regime is among those top ten, that nation’s elites will be billed, enforceably, for U.S. expenses in giving refuge to that nation’s citizens. Further, all trade and other advantages of said elites will be suspended and access to the United States banned, except for the purpose of negotiating ways that the U.S. can help in that nation’s rise to both liberty and prosperity, thus reducing refugee flows in the best possible way. 



    == Hurt him where it hurts most ==

     

    This next one is self-explanatory. 

    It plugs a gaping hole that has allowed a maniac to run wild with public property, while refusing all accountability and grabbing bribes, hand over fist!

    THE EXECUTIVE OFFICE MANAGER: 
    By law we shall establish under IGUS (the Inspectorate) a civil service position of White House Manager, whose function is to supervise all non-political functions and staff. This will include the Executive Mansion’s physical structure and publicly-owned contents, but also policy-neutral services such as the switchboard, kitchens, Travel Office, medical office, and Secret Service protection details. There are no justifications for the President or political staff to have whim authority over such apolitical employees. 

    With due allowance and leeway for needs of the Office of President, public property shall be accounted-for. The manager will allocate which portions of any trip expense should be deemed private and thereupon – above a basic, reasonable allowance – shall be billed to the president or his/her party. 

    This office shall supervise annual physical and mental examination by external experts for all senior office holders including the President, Vice President, Cabinet members and leaders of Congress.

    Any group of twenty senators or House members or state governors may choose one periodical, network or other news source to get credentialed to the White House Press Pool, spreading inquiry across all party lines and ensuring that all rational points of view get access.

     


    == Cancel the many levels of graft! (Well... a lot of them) ==


    Here's another one that may seem obvious. Only note what I say about presidential libraries! These have morphed into massive ego shrines, where ex-presidents get to "keep" the lavish gifts they receive from individuals and foreign potentates, so long as they are officially 'on (permanent) loan' from the National Archives!


    Donald Trump has even declared that he plans to 'donate' the big Qatari 747 jet, via the Archives, to his post-presidential library for his own personal and permanent use!


    Dig it: we can get Obama and Clinton and (maybe reluctantly) GW Bush to sign off on this. And we can word it in a way where the grifters cannot plausibly refuse.



    EMOLUMENTS AND GIFTS ACT: Emoluments and gifts and other forms of valuable beneficence bestowed upon the president, or members of Congress, or judges, or their staffs shall be more strictly defined and transparently controlled. 


    All existing and future presidential libraries or museums or any kind of shrine shall strictly limit the holding, display or lending of gifts to, from, or by a president or ex-president, which shall instead be owned and held (except for facsimiles) by the Smithsonian. 


    Donations by corporations or wealthy individuals to pet projects of a president or other members of government, including presidential libraries or inauguration events, shall be presumed to be illegal bribery unless they are approved by a nonpartisan ethical commission.

     


    == And finally... ==



    Finally, here's one they'll never pass, though it could benefit the nation, immensely.



    BUDGETS: If Congress fails to fulfill its budgetary obligations or to raise the debt ceiling, the result will not be a ‘government shutdown.’ Rather, all pay and benefits will cease going to any Senator or Representative whose annual income is above the national average, until appropriate legislation has passed, at which point only 50% of any backlog arrears may be made-up. 



    == Were these ones kinda 'obvious'? ==


    Yeah, obvious, schmobvious. They must be explicit in order to be useful as a sales-pitched Newer Deal!


    And if you pass most of them, you'll make clear what should have been, back when Pelosi, Sanders, Schumer, AOC, Liz Warren and the rest united to pass the 2021-22 Miracle Bills. That Democrats are serious about wanting democracy and institutions to work.


    Turns out those miracle bills were far from enough! So let's get on with the job of rescuing a flawed system. One that only happened to give humanity its best and most hopeful era, ever.


    The Greatest - GI Bill - Generation is watching us. Let's not let 'em down.



    Continuing with Part Nine....




    ======================================================




                 == Oh, yeah... here's that humorous lagniappe... ==


    I promised wry amusement. Re the Epstein pedophilia and now 'redaction' scandal:  this was all eerily predicted in a fun/absurd Kirsten Dunst film "Dick" (1999). Nixon hires two flakey 15 year olds as White House dog walkers... who fall in love with him and croon fantasies into the president's office tape recorder... 


    ...tapes that soon are subpoenaed by the Senate Watergate Committee. And Dick realizes... "I can survive all the rest, the burglary, the coverups, the bribes.... But messing with 15 year olds will get me lynched!" 

          So he erases their love songs, leading to the 18 minute "Gap"!


    In light of the pathetic Bondi 'redactions' and Epstein's pal -- the pussy-grabber -- having on-record said "I like 'em young" ... have I uncovered "Dick" as an important part of the training set for the AI that's running this simulation? 


    Tell me another place online where you get connections like this!


    Merry Christmas and Happy Hanukkah & Kwanzaa etc. wishes for sanity, peace and joy in years ahead. 


    And to hell with the aliens who've been shining a stoopidity ray upon us. Vamoose, twerps, or we'll getcha, someday.





    ,

    Planet DebianSahil Dhiman: MiniDebConf Navi Mumbai 2025

    MiniDebConf Navi Mumbai 2025, which was MiniDebConf Mumbai, which in turn was FOSSMumbai x MiniDebian Conference, happened on 13th and 14th December, 2025, with a hotel as the Day 1 venue and a college on Day 2.

    Originally planned for the 8th of November, it got postponed to December due to operational reasons. Most of the on-ground logistics and other heavy lifting was done by Arya, Vidhya, MumbaiFOSS, and the Remigies Technologies team, so we didn’t have to worry much.

    This time, I gave a talk on Basics of a Free Software Mirror (and how Debian does it) (Presentation URL). I had the idea for this talk for a while and gave a KDE version of it during KDE India Conf 2025. The gist was to explain how Free Software is delivered to users and how one can help. For MDC, I focused a bit on Debian mirror network(s), on who else hosts mirrors in India, and on trends.

    Me during mirror talk
    Credits - niyabits. Thanks for the pictures

    At the outset someone mentioned my Termux mirror. Termux is a good project to get into mirror hosting with; I got into mirroring with it. It has low traffic demands (usually less than 20 GB/day) with a high request count, and can be done on an existing 6 USD Digital Ocean node. Q&A time turned out more interesting than I anticipated. Folks touched upon commercial CDNs instead of community mirrors, supply chain security issues, and a bit of other stuff.
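
    The mechanics can be as simple as something like this (the upstream rsync endpoint and paths here are made up for illustration; any static web server can then serve the tree):

        # crontab entry: sync every 6 hours from an upstream mirror
        0 */6 * * * rsync -a --delete rsync://mirror.example.org/termux/ /srv/mirror/termux/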

    We had quite a number of interesting talks and I remember when Arya was telling me during CFP time, “bro we have too many talks now” :D.

    Now, preparations have already started for MiniDebConf Kanpur 2026, scheduled for March 14th and 15th at the IIT campus. If you want to help, see the following thread. See you in the next one.

    A group photo

    Day 1 group photo. Click to enlarge

    A group photo

    Day 2 group photo. Click to enlarge

    365 TomorrowsA Short Diversion

    Author: Matthew Luscher It began to pour as the bus pulled in. The driver shot me a puzzled look as I stepped off and made a gesture clearly hinting for me to get back onboard. I ignored him. It had been half a mornings journey down bumpy country roads, following the recommendation of a tattered […]

    The post A Short Diversion appeared first on 365tomorrows.

    ,

    David BrinMidweek: Will the "sane wing" Republicans step up? -- And my selection of best SMBCs!

    I'll pause this time - midweek - to blurt a couple of mini rants about the monsters who will soon unleash something hellish, in order to distract us. But hang around to the end, where I'll recommend some of the best recent comix from Saturday Morning Breakfast Cereal!

    This weekend, I'll get back to part 8 of my ongoing series of tactics that might help fix this mess we are in.


    == Do not be fooled when Republicans 'stand up to Trump' ==

    Parse it carefully. There are defections/revolts by a few in the Party of Lincoln vs. Trumpism. Some rare GOPpers who happen to be decent/principled and not very blackmailed are standing up. (Tho all GOP senators are complicit with DT's cabinet of monsters, which includes several KGB agents.) 

    But this vote re Obamacare is not such a revolt! It is a tactic to get the GOP out of a jam: backing away from a cruel act that risks enraging tens of millions if Obamacare subsidies expire, while giving them an excuse to blame dems for the resulting deficits.

    Likewise, the Roberts "Court" will rule against Trump on some symbolic matters, in order to maintain an illusion of 'balance'...

    ...though never re: any matter that affects POWER. 

    So, expect a curb or two on ICE raids, because that shit is mostly sadistic theater for diehard racist MAGAs. But Roger Taney Jr. - I mean John Roberts - will okay DT's power to fire IGs and JAGs and wreck whole agencies. And cheat elections and send troops into cities. And send ICE toward citizens. And bully the military officer corps. 

    We need to make clear to our loyal military men and women that they aren't forgotten. And help is on the way!

    But stay focused. Power is the criterion to watch.


    == Are there any signs of movement by Principled Republicans? ==


    Good question. Sure, there are Kinzingers and Liz Cheney (gulp). But aside from some such anecdotes, are there signs of actual, organized revolt by decent or at least slightly patriotic Republicans? Perhaps leading to a split-off Sane Conservative Party? (We need one.) 

    I'm likely clutching at straws. Even if Paul Ryan, Romney, George F. Will and others do step up, Trump will only signal Putin to unleash the planned super-9/11 attack, now that he and Gabbard and Noem have re-assigned or fired or neutered many counter-terrorism agents and staff. Thus hoping to get us to rally around the leader. (It worked for Bush.) Whereupon... they hope to send well-drilled ICE prison-parolees after new targets to round up. Cancel elections...

    ... and release blackmail kompromat on any GOPper pol who dares to lift his head. (Or, in the case of Collins/Murkowski, whatever male relatives they are protecting, by submitting to Putin.)

    So, standing up will take a lot more guts than Olde Republicans ever had. Or ever will. 

    There will be no more Ike/Dirksen/Goldwater/McCains. So, it's likely we must take this phase all the way to Appomattox. And do it all ourselves. Alack.


    == Some fun (and philosophically brilliant) one pagers from SMBC! ==

    I'm a sucker for this online strip. But I only come back every 6 months to binge it...

    ... which allows me to make a favorites list, just for you!

    For example: This one is much like my story “The Giving Plague,” where circumstances force a bad man into being (repeatedly) a world-beloved hero.


    https://www.smbc-comics.com/comic/branch-2


    And do poke at these selected ones!


    https://www.smbc-comics.com/comic/frenchmen

    https://www.smbc-comics.com/comic/life-7

    https://www.smbc-comics.com/comic/steve

    https://www.smbc-comics.com/comic/xx

    https://www.smbc-comics.com/comic/princess-3

    https://www.smbc-comics.com/comic/happiness-5

    https://www.smbc-comics.com/comic/firstborn

    https://www.smbc-comics.com/comic/number-one

    https://www.smbc-comics.com/comic/sex

    https://www.smbc-comics.com/comic/attention-span

    https://www.smbc-comics.com/comic/addiction

    https://www.smbc-comics.com/comic/lesson


    And finally this one that SO came true!

    https://www.smbc-comics.com/comic/slam

     

    And one that oughta:

    https://www.smbc-comics.com/comic/sad-5



    Krebs on SecurityDismantling Defenses: Trump 2.0 Cyber Year in Review

    The Trump administration has pursued a staggering range of policy pivots this past year that threaten to weaken the nation’s ability and willingness to address a broad spectrum of technology challenges, from cybersecurity and privacy to countering disinformation, fraud and corruption. These shifts, along with the president’s efforts to restrict free speech and freedom of the press, have come at such a rapid clip that many readers probably aren’t even aware of them all.

    FREE SPEECH

    President Trump has repeatedly claimed that a primary reason he lost the 2020 election was that social media and Big Tech companies had conspired to silence conservative voices and stifle free speech. Naturally, the president’s impulse in his second term has been to use the levers of the federal government in an effort to limit the speech of everyday Americans, as well as foreigners wishing to visit the United States.

    In September, Donald Trump signed a national security directive known as NSPM-7, which directs federal law enforcement officers and intelligence analysts to target “anti-American” activity, including any “tax crimes” involving extremist groups who defrauded the IRS. According to extensive reporting by journalist Ken Klippenstein, the focus of the order is on those expressing “opposition to law and immigration enforcement; extreme views in favor of mass migration and open borders; adherence to radical gender ideology,” as well as “anti-Americanism,” “anti-capitalism,” and “anti-Christianity.”

    Earlier this month, Attorney General Pam Bondi issued a memo advising the FBI to compile a list of Americans whose activities “may constitute domestic terrorism.” Bondi also ordered the FBI to establish a “cash reward system” to encourage the public to report suspected domestic terrorist activity. The memo states that domestic terrorism could include “opposition to law and immigration enforcement” or support for “radical gender ideology.”

    The Trump administration also is planning to impose social media restrictions on tourists as the president continues to ramp up travel restrictions for foreign visitors. According to a notice from U.S. Customs and Border Protection (CBP), tourists — including those from Britain, Australia, France, and Japan — will soon be required to provide five years of their social media history.

    The CBP said it will also collect “several high value data fields,” including applicants’ email addresses from the past 10 years, their telephone numbers used in the past five years, and names and details of family members. Wired reported in October that the US CBP executed more device searches at the border in the first three months of the year than any other previous quarter.

    The new requirements from CBP add meat to the bones of Executive Order 14161, which in the name of combating “foreign terrorist and public safety threats” granted broad new authority that civil rights groups warn could enable a renewed travel ban and expanded visa denials or deportations based on perceived ideology. Critics alleged the order’s vague language around “public safety threats” creates latitude for targeting individuals based on political views, national origin, or religion. At least 35 nations are now under some form of U.S. travel restrictions.

    CRIME AND CORRUPTION

    In February, Trump ordered executive branch agencies to stop enforcing the U.S. Foreign Corrupt Practices Act, freezing foreign bribery investigations and even allowing for “remedial actions” over past enforcement actions deemed “inappropriate.”

    The White House also disbanded the Kleptocracy Asset Recovery Initiative and KleptoCapture Task Force — units which proved their value in corruption cases and in seizing the assets of sanctioned Russian oligarchs — and diverted resources away from investigating white-collar crime.

    Also in February, Attorney General Pam Bondi dissolved the FBI’s Foreign Influence Task Force, an entity created during Trump’s first term designed to counter the influence of foreign governments on American politics.

    In March 2025, Reuters reported that several U.S. national security agencies had halted work on a coordinated effort to counter Russian sabotage, disinformation and cyberattacks. Former President Joe Biden had ordered his national security team to establish working groups to monitor the issue amid warnings from U.S. intelligence that Russia was escalating a shadow war against Western nations.

    In a test of prosecutorial independence, Trump’s Justice Department ordered prosecutors to drop the corruption case against New York Mayor Eric Adams. The fallout was immediate: Multiple senior officials resigned in protest, the case was reassigned, and chaos engulfed the Southern District of New York (SDNY) – historically one of the nation’s most aggressive offices for pursuing public corruption, white-collar crime, and cybercrime cases.

    When it comes to cryptocurrency, the administration has shifted regulators at the U.S. Securities and Exchange Commission (SEC) away from enforcement to cheerleading an industry that has consistently been plagued by scams, fraud and rug-pulls. The SEC in 2025 systematically retreated from enforcement against cryptocurrency operators, dropping major cases against Coinbase, Binance, and others.

    Perhaps the most troubling example involves Justin Sun, the Chinese-born founder of the cryptocurrency company Tron. In 2023, the SEC charged Sun with fraud and market manipulation. Sun subsequently invested $75 million in the Trump family’s World Liberty Financial (WLF) tokens, became the top holder of the $TRUMP memecoin, and secured a seat at an exclusive dinner with the president.

    In late February 2025, the SEC dropped its lawsuit. Sun promptly took Tron public through a reverse merger arranged by Dominari Securities, a firm with Trump family ties. Democratic lawmakers have urged the SEC to investigate what they call “concerning ties to President Trump and his family” as potential conflicts of interest and foreign influence.

    In October, President Trump pardoned Changpeng Zhao, the founder of the world’s largest cryptocurrency exchange Binance. In 2023, Zhao and his company pled guilty to failing to prevent money laundering on the platform. Binance paid a $4 billion fine, and Zhao served a four-month sentence. As CBS News observed last month, shortly after Zhao’s pardon application, he was at the center of a blockbuster deal that put the Trump family’s WLF on the map.

    “Zhao is a citizen of the United Arab Emirates in the Persian Gulf and in May, an Emirati fund put $2 billion in Zhao’s Binance,” 60 Minutes reported. “Of all the currencies in the world, the deal was done in World Liberty crypto.”

    SEC Chairman Paul Atkins has made the agency’s new posture towards crypto explicit, stating “most crypto tokens are not securities.” At the same time, President Trump has directed the Department of Labor and the SEC to expand 401(k) access to private equity and crypto — assets that regulators have historically restricted for retail investors due to high risk, fees, opacity, and illiquidity. The executive order explicitly prioritizes “curbing ERISA litigation,” and reducing accountability for fiduciaries while shifting risk onto ordinary workers’ retirement savings.

    At the White House’s behest, the U.S. Treasury in March suspended the Corporate Transparency Act, a law that required companies to reveal their real owners. Finance experts warned the suspension would bring back shell companies and “open the flood gates of dirty money” through the US, such as funds from drug gangs, human traffickers, and fraud groups.

    Trump’s clemency decisions have created a pattern of freed criminals committing new offenses. Jonathan Braun, whose sentence for drug trafficking was commuted during Trump’s first term, was found guilty in 2025 of violating supervised release and faces new charges.

    Eliyahu Weinstein, who received a commutation in January 2021 for running a Ponzi scheme, was sentenced in November 2025 to 37 years for running a new Ponzi scheme. The administration has also granted clemency to a growing list of white-collar criminals: David Gentile, a private equity executive sentenced to seven years for securities and wire fraud (functionally a Ponzi-like scheme), and Trevor Milton, the Nikola founder sentenced to four years for defrauding investors over electric vehicle technology. The message: Financial crimes against ordinary investors are no big deal.

    At least 10 of the January 6 insurrectionists pardoned by President Trump have already been rearrested, charged or sentenced for other crimes, including plotting the murder of FBI agents, child sexual assault, possession of child sexual abuse material and reckless homicide while driving drunk.

    The administration also imposed sanctions against the International Criminal Court (ICC). On February 6, 2025, Executive Order 14203 authorized asset freezes and visa restrictions against ICC officials investigating U.S. citizens or allies, primarily in response to the ICC’s arrest warrants for Israeli Prime Minister Benjamin Netanyahu over alleged war crimes in Gaza.

    Earlier this month the president launched the “Gold Card,” a visa scheme established by an executive order in September that offers wealthy individuals and corporations expedited paths to U.S. residency and citizenship in exchange for $1 million for individuals and $2 million for companies, plus ongoing fees. The administration says it is also planning to offer a “platinum” version of the card that offers special tax breaks — for a cool $5 million.

    FEDERAL CYBERSECURITY

    President Trump campaigned for a second term insisting that the previous election was riddled with fraud and had been stolen from him. Shortly after Mr. Trump took the oath of office for a second time, he fired the head of the Cybersecurity and Infrastructure Security Agency (CISA) — Chris Krebs (no relation) — for having the audacity to state publicly that the 2020 election was the most secure in U.S. history.

    Mr. Trump revoked Krebs’s security clearances, ordered a Justice Department investigation into his election security work, and suspended the security clearances of employees at SentinelOne, the cybersecurity firm where Krebs worked as chief intelligence and public policy officer. The executive order was the first direct presidential action against any US cybersecurity company. Krebs subsequently resigned from SentinelOne, telling The Wall Street Journal he was leaving to push back on Trump’s efforts “to go after corporate interests and corporate relationships.”

    The president also dismissed all 15 members of the Cyber Safety Review Board (CSRB), a nonpartisan government entity established in 2022 with a mandate to investigate the security failures behind major cybersecurity events — likely because those advisors included Chris Krebs.

    At the time, the CSRB was in the middle of compiling a much-anticipated report on the root causes of Chinese government-backed digital intrusions into at least nine U.S. telecommunications providers. Not to be outdone, the Federal Communications Commission quickly moved to roll back a previous ruling that required U.S. telecom carriers to implement stricter cybersecurity measures.

    Meanwhile, CISA has lost roughly a third of its workforce this year amid mass layoffs and deferred resignations. When the government shutdown began in October, CISA laid off even more employees and furloughed 65 percent of the remaining staff, leaving only 900 employees working without pay.

    Additionally, the Department of Homeland Security has reassigned CISA cyber specialists to jobs supporting the president’s deportation agenda. As Bloomberg reported earlier this year, CISA employees were given a week to accept the new roles or resign, and some of the reassignments included relocations to new geographic areas.

    The White House has signaled that it plans to cut an additional $491 million from CISA’s budget next year, cuts that primarily target CISA programs focused on international affairs and countering misinformation and foreign propaganda. The president’s budget proposal justified the cuts by repeating debunked claims about CISA engaging in censorship.

    The Trump administration has pursued a similar reorganization at the FBI: The Washington Post reported in October that a quarter of all FBI agents have now been reassigned from national security threats to immigration enforcement. Reuters reported last week that the replacement of seasoned leaders at the FBI and Justice Department with Trump loyalists has led to an unprecedented number of prosecutorial missteps, resulting in a 21 percent dismissal rate of the D.C. U.S. attorney’s office criminal complaints over eight weeks, compared to a mere 0.5 percent dismissal rate over the prior 10 years.

    “These mistakes are causing department attorneys to lose credibility with federal courts, with some judges quashing subpoenas, threatening criminal contempt and issuing opinions that raise questions about their conduct,” Reuters reported. “Grand juries have also in some cases started rejecting indictments, a highly unusual event since prosecutors control what evidence gets presented.”

    In August, the DHS banned state and local governments from using cyber grants on services provided by the Multi-State Information Sharing and Analysis Center (MS-ISAC), a group that for more than 20 years has shared critical cybersecurity intelligence across state lines and provided software and other resources at free or heavily discounted rates. Specifically, DHS barred states from spending funds on services offered by the Elections Infrastructure ISAC, which was effectively shuttered after DHS pulled its funding in February.

    Cybersecurity Dive reports that the Trump administration’s massive workforce cuts, along with widespread mission uncertainty and a persistent leadership void, have interrupted federal agencies’ efforts to collaborate with the businesses and local utilities that run and protect healthcare facilities, water treatment plants, energy companies and telecommunications networks. The publication said the changes came after the US government eliminated CIPAC — a framework that allowed private companies to share cyber and threat intel without legal penalties.

    “Government leaders have canceled meetings with infrastructure operators, forced out their longtime points of contact, stopped attending key industry events and scrapped a coordination program that made companies feel comfortable holding sensitive talks about cyberattacks and other threats with federal agencies,” Cybersecurity Dive’s Eric Geller wrote.

    Both the National Security Agency (NSA) and U.S. Cyber Command have been without a leader since Trump dismissed Air Force General Timothy Haugh in April, allegedly for disloyalty to the president and at the suggestion of far-right conspiracy theorist Laura Loomer. The nomination of Army Lt. Gen. William Hartman for the same position fell through in October. The White House has ordered the NSA to cut 8 percent of its civilian workforce (between 1,500 and 2,000 employees).

    As The Associated Press reported in August, the Office of the Director of National Intelligence plans to dramatically reduce its workforce and cut its budget by more than $700 million annually. Director of National Intelligence Tulsi Gabbard said the cuts were warranted because ODNI had become “bloated and inefficient, and the intelligence community is rife with abuse of power, unauthorized leaks of classified intelligence, and politicized weaponization of intelligence.”

    The firing or forced retirements of so many federal employees have been a boon to foreign intelligence agencies. Chinese intelligence agencies, for example, reportedly moved quickly to take advantage of the mass layoffs, using a network of front companies to recruit laid-off U.S. government employees for “consulting work.” Former workers with the Defense Department’s Defense Digital Service who resigned en masse earlier this year thanks to DOGE encroaching on their mission have been approached by the United Arab Emirates to work on artificial intelligence for the oil kingdom’s armed forces, albeit reportedly with the blessing of the Trump administration.

    PRESS FREEDOM

    President Trump has filed multibillion-dollar lawsuits against a number of major news outlets over news segments or interviews that allegedly portrayed him in a negative light, suing the networks ABC, the BBC, the CBS parent company Paramount, The Wall Street Journal, and The New York Times, among others.

    The president signed an executive order aimed at slashing public subsidies to PBS and NPR, alleging “bias” in the broadcasters’ reporting. In July, Congress approved a request from Trump to cut $1.1 billion in federal funding for the Corporation for Public Broadcasting, the nonprofit entity that funds PBS and NPR.

    Brendan Carr, the president’s pick to run the Federal Communications Commission (FCC), initially pledged to “dismantle the censorship cartel and restore free speech rights for everyday Americans.” But on January 22, 2025, the FCC reopened complaints against ABC, CBS and NBC over their coverage of the 2024 election. The previous FCC chair had dismissed the complaints as attacks on the First Amendment and an attempt to weaponize the agency for political purposes.

    President Trump in February seized control of the White House Correspondents’ Association, the nonprofit entity that decides which media outlets should have access to the White House and the press pool that follows the president. The president invited an additional 32 media outlets, mostly conservative or right-wing organizations.

    According to the journalism group Poynter.org, there are three religious networks, all of which lean conservative, as well as a mix of outlets that includes a legacy paper, television networks, and a digital outlet powered by artificial intelligence. Trump also barred The Associated Press from the White House over its refusal to refer to the Gulf of Mexico as the Gulf of America.

    Under Trump appointee Kari Lake, the U.S. Agency for Global Media moved to dismantle Voice of America, Radio Free Europe/Radio Liberty, and other networks that for decades served as credible news sources behind authoritarian lines. Courts blocked shutdown orders, but the damage continues through administrative leave, contract terminations, and funding disputes.

    President Trump this term has fired most of the people involved in processing Freedom of Information Act (FOIA) requests for government agencies. FOIA is an indispensable tool used by journalists and the public to request government records, and to hold leaders accountable.

    Petitioning the government, particularly when it ignores your requests, often requires challenging federal agencies in court. But that becomes far more difficult if the most competent law firms start to shy away from cases that may involve crossing the president and his administration. On March 22, the president issued a memorandum that directs heads of the Justice and Homeland Security Departments to “seek sanctions against attorneys and law firms who engage in frivolous, unreasonable and vexatious litigation against the United States,” or in matters that come before federal agencies.

    The Trump administration announced increased vetting of applicants for H-1B visas for highly skilled workers, with an internal State Department memo saying that anyone involved in “censorship” of free speech should be considered for rejection.

    CONSUMER PROTECTION, PRIVACY

    At the beginning of this year, President Trump ordered staffers at the Consumer Financial Protection Bureau (CFPB) to stop most work. Created by Congress in 2011 to be a clearinghouse of consumer complaints, the CFPB has sued some of the nation’s largest financial institutions for violating consumer protection laws. The CFPB says its actions have put nearly $18 billion back in Americans’ pockets in the form of monetary compensation or canceled debts, and imposed $4 billion in civil money penalties against violators.

    The Trump administration said it planned to fire up to 90 percent of all CFPB staff, but a recent federal appeals court ruling in Washington tossed out an earlier decision that would have allowed the firings to proceed. Reuters reported this week that an employee union and others have battled against it in court for ten months, during which the agency has been almost completely idled.

    The CFPB’s acting director is Russell Vought, a key architect of the GOP policy framework Project 2025. Under Vought’s direction, the CFPB in May quietly withdrew a data broker protection rule intended to limit the ability of U.S. data brokers to sell personal information on Americans.

    Despite the Federal Reserve’s own post-mortem explicitly blaming Trump-era deregulation for the 2023 Silicon Valley Bank collapse, which triggered a fast-moving crisis requiring emergency weekend bailouts of banks, Trump’s banking regulators in 2025 doubled down. They loosened capital requirements, narrowed definitions of “unsafe” banking practices, and stripped specific risk categories from supervisory frameworks. The setup for another banking crisis requiring taxpayer intervention is now in place.

    The Privacy Act of 1974, one of the few meaningful federal privacy laws, was built on the principles of consent and separation in response to the abuses of power that came to light during the Watergate era. The law states that when an individual provides personal information to a federal agency to receive a particular service, that data must be used solely for its original purpose.

    Nevertheless, it emerged in June that the Trump administration has built a central database of all US citizens. According to NPR, the White House plans to use the new platform during upcoming elections to verify the identity and citizenship status of US voters. The database was built by the Department of Homeland Security and the Department of Government Efficiency and is being rolled out in phases to US states.

    DOGE

    Probably the biggest ungotten scoop of 2025 is the inside story of what happened to all of the personal, financial and other sensitive data that was accessed by workers at the so-called Department of Government Efficiency (DOGE). President Trump tapped Elon Musk to lead the newly created department, which was mostly populated by current and former employees of Musk’s various technology companies (including a former denizen of the cybercrime community known as the “Com”). It soon emerged that the DOGE team was using artificial intelligence to surveil at least one federal agency’s communications for hostility to Mr. Trump and his agenda.

    DOGE employees were able to access and synthesize data taken from a large number of previously separate and highly guarded federal databases, including those at the Social Security Administration, the Department of Homeland Security, the Office of Personnel Management, and the U.S. Department of the Treasury. DOGE staffers did so largely by circumventing or dismantling security measures designed to detect and prevent misuse of federal databases, including standard incident response protocols, auditing, and change-tracking mechanisms.

    For example, an IT expert with the National Labor Relations Board (NLRB) alleges that DOGE employees likely downloaded gigabytes of data from agency case files in early March, using short-lived accounts that were configured to leave few traces of network activity. The NLRB whistleblower said the large data outflows coincided with multiple blocked login attempts from addresses in Russia, which attempted to use valid credentials for a newly-created DOGE user account.

    The stated goal of DOGE was to reduce bureaucracy and to massively cut costs — mainly by eliminating funding for a raft of federal initiatives that had already been approved by Congress. The DOGE website claimed those efforts reduced “wasteful” and “fraudulent” federal spending by more than $200 billion. However, multiple independent reviews by news organizations determined that claim was off by a couple of orders of magnitude: the true “savings” DOGE achieved were likely closer to $2 billion.

    At the same time DOGE was slashing federal programs, President Trump fired at least 17 inspectors general at federal agencies — the very people tasked with actually identifying and stopping waste, fraud and abuse at the federal level. Those included several agencies (such as the NLRB) that had open investigations into one or more of Mr. Musk’s companies for allegedly failing to comply with protocols aimed at protecting state secrets. In September, a federal judge found the president unlawfully fired the agency watchdogs, but none of them have been reinstated.

    Where is DOGE now? Reuters reported last month that as far as the White House is concerned, DOGE no longer exists, even though it technically has more than half a year left to its charter. Meanwhile, who exactly retains access to federal agency data that was fed by DOGE into AI tools is anyone’s guess.

    KrebsOnSecurity would like to thank the anonymous researcher NatInfoSec for assisting with the research on this story.

    Planet DebianKartik Mistry: KDE Needs You!

    * Support the KDE Randa Meetings and make a donation!

    I know that my contributions to KDE are minimal at this stage, but hey, I’m doing my part this time for sure!

    Worse Than FailureError'd: Michael's Holiday Snaps

    Michael R. was recently in Ghana but now he's back. In grand vacation tradition, he is now sharing the best of it with us. And a few more besides. Remember, it's not the journey itself that matters, it's the wtfs we make along the way. Watch me make a bunch as I attempt to weave a narrative around the shots.

    First up, the likely inspiration for Michael's entire trip. I guess you don't need the actual website URL, you can find it easily.


    In an effort to get trim for a long flight in a 17" seat, he engaged in a rigorous fitness regimen. The math here troubles him. "In the good old days 5g + 4.39g were 9.39g." (Yes, but nothing says that you need to add the weights, if one item contains the other.)


    And he prepared by binge-watching travelogues and "reality" programming, noting here an automation failure ("Insert Date Here")


    "I know my Donor Name but still need to figure out what WHB stands for."


    On the ground or near it: "Nothing is older than yesteryear's election." I guess there's still a chance for a future election, so you might as well leave the posters up for name recognition?


    "Windows Desktop makes a nice background at Soho in Accra https://www.instagram.com/soho_accra/?hl=en-gb" I want pictures of food, Michael!


    And another Windows escape. Home again home again, jiggity jog. "Take this LHR T5 for letting me wait for my luggage for 30 mins."


    [Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

    365 TomorrowsImmersive Travel

    Author: James C. Clar “Europe during the plague is too tame for you?” The Extreme Time-Travel agent could barely conceal his surprise. His quartz desk glowed faintly under his hands. “You realize that package includes rats, mass hysteria, and the very real possibility of dying in a ditch.” Mr. Donovan smiled in that effortless way […]

    The post Immersive Travel appeared first on 365tomorrows.

    Planet DebianOtto Kekäläinen: Backtesting trailing stop-loss strategies with Python and market data

    Featured image of post Backtesting trailing stop-loss strategies with Python and market data

    In January 2024 I wrote about the insanity of the Magnificent Seven dominating the MSCI World Index, and I wondered how long the number could continue to go up. It has continued to surge upward at an accelerating pace, which makes me worry that a crash is drawing closer. As a software professional, I decided to analyze whether using stop-loss orders could reliably automate avoiding deep drawdowns.

    As everyone with some savings in the stock market (hopefully) knows, the stock market eventually experiences crashes. It is just a matter of when and how deep the crash will be. Staying on the sidelines for years is not a good investment strategy, as inflation will erode the value of your savings. Assuming the current true inflation rate is around 7%, a restaurant dinner that costs 20 euros today will cost 24.50 euros in three years. Savings of 1000 euros today would drop in purchasing power from 50 dinners to only 40 dinners in three years.
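
    As a quick sanity check on those numbers (my own illustration, not from the post's script):

    python
    price = 20 * 1.07 ** 3   # dinner price after three years of 7% inflation: ~24.50 euros
    dinners = 1000 / price   # the same 1000 euros now buys ~40.8 dinners instead of 50
    print(f"{price:.2f} euros per dinner, {dinners:.1f} dinners")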

    Hence, if you intend to retain the value of your hard-earned savings, they need to be invested in something that grows in value. Most people try to beat inflation by buying shares in stable companies, directly or via broad market ETFs. These historically grow faster than inflation during normal years, but likely drop in value during recessions.

    What is a trailing stop-loss order?

    What if you could buy stocks to benefit from their value increasing without having to worry about a potential crash? All modern online stock brokers have a feature called stop-loss, where you can enter a price at which your stocks automatically get sold if they drop down to that price. A trailing stop-loss order is similar, but instead of a fixed price, you enter a margin (e.g. 10%). If the stock price rises, the stop-loss price will trail upwards by that margin.

    For example, if you buy a share at 100 euros and it has risen to 110 euros, you can set a 10% trailing stop-loss order which automatically sells it if the price drops 10% from the peak of 110 euros, at 99 euros. Thus, no matter what happens, you only lost 1 euro. And if the stock price continues to rise to 150 euros, the trailing stop-loss would automatically readjust to 150 euros minus 10%, which is 135 euros (150-15=135). If the price dropped to 135 euros, you would lock in a gain of 35 euros, which is not the peak price of 150 euros, but still better than whatever the price fell down to as a result of a large crash.
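
    The mechanics are easy to express in code. Here is a minimal sketch (my own illustration, not broker logic; it assumes the order fills exactly at the stop price) that replays the example above:

    python
    def trailing_stop_exit(prices, margin=0.10):
        """Return (index, exit_price) where a trailing stop-loss triggers,
        or None if the stop is never hit."""
        peak = prices[0]
        for i, price in enumerate(prices):
            peak = max(peak, price)        # the stop trails the highest price seen so far
            stop = peak * (1 - margin)
            if price <= stop:
                return i, round(stop, 2)   # assume the sell fills exactly at the stop price
        return None

    # The example from the text: bought at 100, peak at 110, sold at 99.
    print(trailing_stop_exit([100, 105, 110, 102, 99]))  # -> (4, 99.0)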

    In the simple case above, it obviously makes sense in theory, but it might not make sense in practice. Prices constantly oscillate, so you don’t want a margin that is too small, otherwise you exit too early. Conversely, having a large margin may result in too large a drawdown before exiting. If markets crash rapidly, it might be that nobody buys your stocks at the stop-loss price, and shares have to be sold at an even lower price. Also, what will you do once the position is sold? The reason you invested in the stock market was to avoid holding cash, so would you buy the same stock back when the crash bottoms? But how will you know when the bottom has been reached?

    Backtesting stock market strategies with Python, YFinance, Pandas and Lightweight Charts

    I am not a professional investor, and nobody should take investment advice from me. However, I know what backtesting is and how to leverage open source software. So, I wrote a Python script to test if the trading strategy of using trailing stop-loss orders with specific margin values would have worked for a particular stock.

    First you need to have data. YFinance is a handy Python library that can be used to download the historic price data for any stock ticker on Yahoo.com. Then you need to manipulate the data. Pandas is the Python data analysis library with advanced data structures for working with relational or labeled data. Finally, to visualize the results, I used Lightweight Charts, which is a fast, interactive library for rendering financial charts, allowing you to plot the stock price, the trailing stop-loss line, and the points where trades would have occurred. I really like how the zoom is implemented in Lightweight Charts, which makes drilling into the data points feel effortless.

    The full solution is not polished enough to be published for others to use, but you can piece together your own by reusing some of the key snippets. To avoid re-downloading the same data repeatedly, I implemented a small caching wrapper that saves the data locally (as Parquet files):

    python
    from datetime import datetime
    from pathlib import Path

    import pandas
    import yfinance

    # Assumptions for a self-contained example: the ticker and start date
    # match the output shown below; the cache lives next to the script.
    TICKER = "BNP.PA"
    START_DATE = "2014-01-01"
    CACHE_DIR = Path("cache")

    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    end_date = datetime.today().strftime("%Y-%m-%d")
    cache_file = CACHE_DIR / f"{TICKER}-{START_DATE}--{end_date}.parquet"

    if cache_file.is_file():
        # Reuse the local copy instead of hitting Yahoo Finance again
        dataframe = pandas.read_parquet(cache_file)
        print(f"Loaded price data from cache: {cache_file}")
    else:
        dataframe = yfinance.download(
            TICKER,
            start=START_DATE,
            end=end_date,
            progress=False,
            auto_adjust=False,
        )
        # Cache the download for subsequent runs
        dataframe.to_parquet(cache_file)
        print(f"Fetched new price data from Yahoo Finance and cached to: {cache_file}")

    The dataframe is a Pandas object with a powerful API. For example, to print a snippet from the beginning and the end of the dataframe to see what the data looks like, you can use:

    python
    print("First 5 rows of the raw data:")
    print(dataframe.head())
    print("Last 5 rows of the raw data:")
    print(dataframe.tail())

    Example output:

    First 5 rows of the raw data
    Price Adj Close Close High Low Open Volume
    Ticker BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA
    Date
    2014-01-02 29.956285 55.540001 56.910000 55.349998 56.700001 316552
    2014-01-03 30.031801 55.680000 55.990002 55.290001 55.580002 210044
    2014-01-06 30.080338 55.770000 56.230000 55.529999 55.560001 185142
    2014-01-07 30.943321 57.369999 57.619999 55.790001 55.880001 370397
    2014-01-08 31.385597 58.189999 59.209999 57.750000 57.790001 489940
    Last 5 rows of the raw data
    Price Adj Close Close High Low Open Volume
    Ticker BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA
    Date
    2025-12-11 78.669998 78.669998 78.919998 76.900002 76.919998 357918
    2025-12-12 78.089996 78.089996 80.269997 78.089996 79.470001 280477
    2025-12-15 79.080002 79.080002 79.449997 78.559998 78.559998 233852
    2025-12-16 78.860001 78.860001 79.980003 78.809998 79.430000 283057
    2025-12-17 80.080002 80.080002 80.150002 79.080002 79.199997 262818

    Adding new columns to the dataframe is easy. For example, I used a custom function to calculate the Relative Strength Index (RSI). To add a new column “RSI” with a value for every row based on the price from that row, only one line of code is needed, without custom loops:

    python
    # "price" is assumed to be a plain price column extracted earlier, e.g.
    # dataframe["price"] = dataframe[("Adj Close", TICKER)]
    dataframe["RSI"] = compute_rsi(dataframe["price"], period=14)
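
    The compute_rsi helper itself is not shown in the post. A minimal sketch of one common formulation (simple rolling means rather than Wilder's exponential smoothing, which is my assumption, not necessarily the author's implementation):

    python
    def compute_rsi(prices, period=14):
        """Relative Strength Index over a pandas Series of prices."""
        delta = prices.diff()
        gains = delta.clip(lower=0)        # positive day-over-day moves
        losses = -delta.clip(upper=0)      # negative moves, as positive numbers
        avg_gain = gains.rolling(window=period).mean()
        avg_loss = losses.rolling(window=period).mean()
        rs = avg_gain / avg_loss
        return 100 - 100 / (1 + rs)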

    After manipulating the data, the series can be converted into an array structure and printed as JSON into a placeholder in an HTML template:

    python
    import json

    import jinja2

    baseline_series = [
        {"time": ts, "value": val}
        for ts, val in df_plot[["timestamp", BASELINE_LABEL]].itertuples(index=False)
    ]

    baseline_json = json.dumps(baseline_series)

    # jinja2.Template() takes the template source itself, not a file name,
    # so read the file first (or use a FileSystemLoader).
    with open("template.html", encoding="utf-8") as f:
        template = jinja2.Template(f.read())

    rendered_html = template.render(
        title=title,
        heading=heading,
        description=description_html,
        ...
        baseline_json=baseline_json,
        ...
    )

    with open("report.html", "w", encoding="utf-8") as f:
        f.write(rendered_html)
    print("Report generated!")

    In the HTML template, the marker {{ variable }} in Jinja syntax gets replaced with the actual JSON:

    html
    <!DOCTYPE html>
    <html lang="en">
    <head>
     <meta charset="UTF-8">
     <title>{{ title }}</title>
     ...
    </head>
    <body>
     <h1>{{ heading }}</h1>
     <div id="chart"></div>
     <script>
     // Ensure the DOM is ready before we initialise the chart
     document.addEventListener('DOMContentLoaded', () => {
     // Parse the JSON data passed from Python
     const baselineData = {{ baseline_json | safe }};
     const strategyData = {{ strategy_json | safe }};
     const markersData = {{ markers_json | safe }};
    
     // Create the chart
     const chart = LightweightCharts.createChart(document.getElementById('chart'), {
     width: document.getElementById('chart').clientWidth,
     height: 500,
     layout: {
     background: { color: "#222" },
     textColor: "#ccc"
     },
     grid: {
     vertLines: { color: "#555" },
     horzLines: { color: "#555" }
     }
     });
    
     // Add baseline series
     const baselineSeries = chart.addLineSeries({
     title: '{{ baseline_label }}',
     lastValueVisible: false,
     priceLineVisible: false,
     priceLineWidth: 1
     });
     baselineSeries.setData(baselineData);
    
     baselineSeries.priceScale().applyOptions({
     entireTextOnly: true
     });
    
     // Add strategy series
     const strategySeries = chart.addLineSeries({
     title: '{{ strategy_label }}',
     lastValueVisible: false,
     priceLineVisible: false,
     color: '#FF6D00'
     });
     strategySeries.setData(strategyData);
    
     // Add buy/sell markers to the strategy series
     strategySeries.setMarkers(markersData);
    
     // Fit the chart to show the full data range (full zoom)
     chart.timeScale().fitContent();
     })
     </script>
    </body>
    </html>

    There are also Python libraries built specifically for backtesting investment strategies, such as Backtrader and Zipline, but they do not seem to be actively maintained, and they probably have more features and complexity than I needed for this simple test.

    The screenshot below shows an example of backtesting a strategy on the Waste Management Inc stock from January 2015 to December 2025. The baseline “Buy and hold” scenario is shown as the blue line and it fully tracks the stock price, while the orange line shows how the strategy would have performed, with markers for the sells and buys along the way.

    Backtest run example

    Results

    I experimented with multiple strategies and tested them with various parameters, but I don’t think I found a strategy that was consistently and clearly better than just buy-and-hold.

    It basically boils down to the fact that I was not able to find any way to calculate, from historical data, when a crash has bottomed. You can only know in hindsight that the price has stopped dropping and is on a steady path to recovery, but at that point it is already too late to buy in. In my testing, most strategies underperformed buy-and-hold because they sold when a crash started, but bought back after it recovered at a slightly higher price.

    In particular, when using narrow margins and selling on a 3-6% drawdown, the strategy performed very badly, as those small dips tend to recover in a few days. Essentially, the strategy kept repeating the pattern of selling 100 shares at a 6% discount, being able to buy back only 94 shares the next day, then again selling those 94 shares at a 6% discount and buying back maybe 90 shares after recovery, and so forth, never catching up to buy-and-hold.
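
    To make the compounding effect concrete, a back-of-the-envelope sketch (my own illustration, not taken from the author's script):

    python
    shares = 100.0
    for round_trip in range(3):
        # Each whipsaw sells ~6% below the peak and buys back near the old
        # price, so roughly 6% of the position evaporates per round trip.
        shares *= 0.94
    print(f"Shares left after 3 whipsaws: {shares:.1f}")  # ~83.1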

    The strategy worked better in large market crashes as they tended to last longer, and there were higher chances of buying back the shares while the price was still low. For example, in the 2020 crash selling at a 20% drawdown was a good strategy, as the stock I tested dropped nearly 50% and remained low for several weeks; thus, the strategy bought back the stocks while the price was still low and had not yet started to climb significantly. But that was just a lucky incident, as the delta between the trailing stop-loss margin of 20% and total crash of 50% was large enough. If the crash had been only 25%, the strategy would have missed the rebound and ended up buying back the stocks at a slightly higher price.

    Also, note that the simulation assumes that the trade itself is too small to affect the price formation. We should keep in mind that in reality, if many people have stop-loss orders in place, a large price drop would trigger all of them, creating a flood of sell orders, which in turn would affect the price and drive it lower even faster and deeper. Luckily, it seems that stop-loss orders are generally not a good strategy, and we don’t need to fear that too many people will be using them.

    Conclusion

    Even though using a trailing stop-loss strategy does not seem to help in getting consistently higher returns based on my backtesting, I would still say it is useful in protecting from the downside of stock investing. It can act as a kind of “insurance policy” that considerably decreases the chances of losing big while increasing the chances of losing a little bit. If you are risk-averse, which I think I probably am, this tradeoff can make sense. I’d rather avoid an initial 50% loss, even if it costs me roughly 3% of the overall gain on recovery, than have to sit through weeks or months of a 50% loss before the price recovers to prior levels.

    Most notably, the trailing stop-loss strategy works best if used only once. If it is repeated multiple times, the small losses will compound into a large overall shortfall.

    Thus, I think I might actually put this automation in place at least on the stocks in my portfolio that have had the highest gains. If they keep going up, I will ride along, but once the crash happens, I will be out of those particular stocks permanently.

    Do you have a favorite open source investment tool or are you aware of any strategy that actually works? Comment below!

    ,

    Planet DebianDirk Eddelbuettel: dang 0.0.17: New Features, Plus Maintenance

    dang image

    A new release of dang, my mixed-collection-of-things package, arrived at CRAN earlier today. The dang package regroups a few functions of mine that had no other home, for example lsos() from a StackOverflow question from 2009 (!!), the overbought/oversold price band plotter from an older blog post, the market monitor also blogged about, as well as the checkCRANStatus() function tweeted about by Tim Taylor. And more, so take a look.

    This release retires two functions: the social media site nobody ever visits anymore shut down its API too, so there is no longer a way to mute posts by a given handle. Similarly, the (never official) ability by Google to supply financial data is no more, so the function to access data this way is gone too. But we also have two new ones: one that helps with CRAN entries for ORCiD ids, and another little helper to re-order microbenchmark results by a summary column (defaulting to the median). Beyond that, there are the usual updates to continuous integration, a switch to Authors@R (which will result in CRAN nagging me less about it), and another argument update.

    The detailed NEWS entry follows.

    Changes in version 0.0.17 (2025-12-18)

    • Added new function reorderMicrobenchmarkResults with alias rmr

    • Use tolower on email argument to checkCRANStatus

    • Added new function cranORCIDs bootstrapped from two emails by Kurt Hornik

    • Switched to using Authors@R in DESCRIPTION and added ORCIDs where available

    • Switched to r-ci action with included bootstrap step; updated the checkout action (twice); added (commented-out) log accessor

    • Removed googleFinanceData as the (unofficial) API access point no longer works

    • Removed muteTweeters because the API was turned off

    Via my CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker at the GitHub repo.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

    Cryptogram AI Advertising Company Hacked

    At least some of this is coming to light:

    Doublespeed, a startup backed by Andreessen Horowitz (a16z) that uses a phone farm to manage at least hundreds of AI-generated social media accounts and promote products has been hacked. The hack reveals what products the AI-generated accounts are promoting, often without the required disclosure that these are advertisements, and allowed the hacker to take control of more than 1,000 smartphones that power the company.

    The hacker, who asked for anonymity because he feared retaliation from the company, said he reported the vulnerability to Doublespeed on October 31. At the time of writing, the hacker said he still has access to the company’s backend, including the phone farm itself.

    Slashdot thread.

    Cryptogram Someone Boarded a Plane at Heathrow Without a Ticket or Passport

    I’m sure there’s a story here:

    Sources say the man had tailgated his way through to security screening and passed security, meaning he was not detected carrying any banned items.

    The man deceived the BA check-in agent by posing as a family member who had their passports and boarding passes inspected in the usual way.

    Planet DebianColin Watson: Preparing a transition in Debusine

    We announced a public beta of Debusine repositories recently (Freexian blog, debian-devel-announce). One thing I’m very keen on is being able to use these to prepare “transitions”: changes to multiple packages that need to be prepared together in order to land in testing. As I said in my DebConf25 talk:

    We have distribution-wide CI in unstable, but there’s only one of it and it’s shared between all of us. As a result it’s very possible to get into tangles when multiple people are working on related things at the same time, and we only avoid that as much as we do by careful coordination such as transition bugs. Experimental helps, but again, there’s only one of it and setting up another one is far from trivial.

    So, what we want is a system where you can run experiments on possible Debian changes at a large scale without a high setup cost and without fear of breaking things for other people. And then, if it all works, push the whole lot into Debian.

    Time to practice what I preach.

    Setup

    The setup process is documented on the Debian wiki. You need to decide whether you’re working on a short-lived experiment, in which case you’ll run the create-experiment workflow and your workspace will expire after 60 days of inactivity, or something that you expect to keep around for longer, in which case you’ll run the create-repository workflow. Either one of those will create a new workspace for you. Then, in that workspace, you run debusine archive suite create for whichever suites you want to use. For the case of a transition that you plan to land in unstable, you’ll most likely use create-experiment and then create a single suite with the pattern sid-<name>.

    The situation I was dealing with here was moving to Pylint 4. Tests showed that we needed this as part of adding Python 3.14 as a supported Python version, and I knew that I was going to need newer upstream versions of the astroid and pylint packages. However, I wasn’t quite sure what the fallout of a new major version of pylint was going to be. Fortunately, the Debian Python ecosystem has pretty good autopkgtest coverage, so I thought I’d see what Debusine said about it. I created an experiment called cjwatson-pylint (resulting in https://debusine.debian.net/debian/developers-cjwatson-pylint/ - I’m not making that a proper link since it will expire in a couple of months) and a sid-pylint suite in it.

    Iteration

    From this starting point, the basic cycle involved uploading each package like this for each package I’d prepared:

    $ dput -O debusine_workspace=developers-cjwatson-pylint \
           -O debusine_workflow=publish-to-sid-pylint \
           debusine.debian.net foo.changes
    

    I could have made a new dput-ng profile to cut down on typing, but it wasn’t worth it here.

    Then I looked at the workflow results, figured out which other packages I needed to fix based on those, and repeated until the whole set looked coherent. Debusine automatically built each upload against whatever else was currently in the repository, as you’d expect.

    I should probably have used version numbers with tilde suffixes (e.g. 4.0.2-1~test1) in case I needed to correct anything, but fortunately that was mostly unnecessary. I did at least run initial test-builds locally of just the individual packages I was directly changing to make sure that they weren’t too egregiously broken, just because I usually find it quicker to iterate that way.

    I didn’t take screenshots as I was going along, but here’s what the list of top-level workflows in my workspace looked like by the end:

    Workflows

    You can see that not all of the workflows are successful. This is because we currently just show everything in every workflow; we don’t consider whether a task was retried and succeeded on the second try, or whether there’s now a newer version of a reverse-dependency so tests of the older version should be disregarded, and so on. More fundamentally, you have to look through each individual workflow, which is a bit of a pain: we plan to add a dashboard that shows you the current state of a suite as a whole rather than the current workflow-oriented view, but we haven’t started on that yet.

    Drilling down into one of these workflows, it looks something like this:

    astroid workflow

    This was the first package I uploaded. The first pass of failures told me about pylint (expected), pylint-flask (an obvious consequence), and python-sphinx-autodoc2 and sphinx-autoapi (surprises). The slightly odd pattern of failures and errors is because I retried a few things, and we sometimes report retries in a strange way, especially when there are workflows involved that might not be able to resolve their input parameters any more.

    The next level was:

    pylint workflow

    Again, there were some retries involved here, and also some cases where packages were already failing in unstable so the failures weren’t the fault of my change; for now I had to go through and analyze these by hand, but we’ll soon have regression tracking to compare with reference runs and show you where things have got better or worse.

    After excluding those, that left pytest-pylint (not caused by my changes, but I fixed it anyway in unstable to clear out some noise) and spyder. I’d seen people talking about spyder on #debian-python recently, so after a bit of conversation there I sponsored a rope upload by Aeliton Silva, upgraded python-lsp-server, and patched spyder. All those went into my repository too, exposing a couple more tests I’d forgotten in spyder.

    Once I was satisfied with the results, I uploaded everything to unstable. The next day, I looked through the tracker as usual starting from astroid, and while there are some test failures showing up right now it looks as though they should all clear out as pieces migrate to testing. Success!

    Conclusions

    We still have some way to go before this is a completely smooth experience that I’d be prepared to say that every developer can and should be using; there are all sorts of fit-and-finish issues that I can easily see here. Still, I do think we’re at the point where a tolerant developer can use this to deal with the common case of a mid-sized transition, and get more out of it than they put in.

    Without Debusine, either I’d have had to put much more effort into searching for and testing reverse-dependencies myself, or (more likely, let’s face it) I’d have just dumped things into unstable and sorted them out afterwards, resulting in potentially delaying other people’s work. This way, everything was done with as little disruption as possible.

    This works best when the packages likely to be involved have reasonably good autopkgtest coverage (even if the tests themselves are relatively basic). This is an increasingly good bet in Debian, but we have plans to add installability comparisons (similar to how Debian’s testing suite works) as well as optional rebuild testing.

    If this has got you interested, please try it out for yourself and let us know how it goes!

    365 TomorrowsAlienation

    Author: Bill Cox “Well,” she says, impatience dripping from her voice, “What’s it going to be?” I’ve the stylus in my hand, hovering over the pad. I look up at her and it’s all I can do not to stick the stylus in her eye and just keep on pushing it deeper and deeper, until […]

    The post Alienation appeared first on 365tomorrows.

    Worse Than FailureCodeSOD: Linguistic Perls

    A long time ago, Joey made some extra bucks doing technical support for the neighbors. It was usually easy work, and honestly was more about being a member of the community than anything else.

    This meant Joey got to spend time with Ernest. Ernest was a retiree with a professorial manner, complete with horn-rimmed glasses and a sweater vest. Ernest volunteered at the local church, was known for his daily walks around the neighborhood, and was a generally beloved older neighbor.

    Ernest had been working on transferring his music collection- a mix of CDs and records- onto his computer. He had run into a problem, and reached out to Joey for help.

    "Usually," Ernest explained, "I can get one of the kids from the local university to help me out. But with the holiday break and all…"

    No problem for Joey. He went over to Ernest's, sat down at the computer, and powered it up. The desktop appeared, and in the typical older user fashion, it was covered with icons. What was unusual was the names of the files and folders. Things like titwank. Or cockrot.pl and penis.pl. A few were named as racial slurs.

    Clearly, the college students Ernest usually hired were having a laugh at the man's expense. That must be it. Joey glanced around the room, trying to think about how to explain this, when he noticed the bookshelf.

    The first few books were guides on how to program in Perl. Sandwiched between them was Roger's Profanisaurus, a dictionary of profanity. Then a collection of comedy CDs by Kevin Bloody Wilson, the performer of such comedy songs as "I Gave Up Wanking," "The Pubic Hair Song," and "Dick on Her Mind".

    "Ah, yes," Ernest said, "you'll need to pardon my desktop. Before I retired, I was a linguist, and I think you can guess what my speciality was."

    "Profanity?"

    "Profanity indeed. Now, I was hoping I could get someone to take a look at swallow.pl for me…"

    Joey writes:

    I always thought of Perl as an arcane language; here instead it has somehow been turned into a profane language.

    Usually, profanity is what we use when reading Perl.

    For whatever reason I seem to have kept this particular file. I must have taken it home to work on. I now consider it an art piece worthy of printing out and framing on the wall.

    I think there is something to that, Joey, but I have to be honest: I'm not going to present the entire file in its true glory, because, well, there are limits to the sorts of profanity we run on the site. But it's still worth sharing a few snippets:

    We can start with some variable initializations:

        my @wankoid;
        my $wankoff;
        open(SHIT,"discindex.htm");
        @wankoid=<SHIT>;
        $wankoff=join("",@wankoid);
        my @toss=split(/\nLabel\:/,$wankoff);
        my $cockrot=0;
    

    Or perhaps some regex matching:

        $swallow=~s/\/\/.*//;
        $swallow=~s/^L:\\//;
        $swallow=~s/\r//;
        my @penis=split(/\\/,$swallow);
    

    Uh… could we not?

        for($i=0;$i<$#penis-1;$i++)
        {
            $rude=$curse[1];
            %dirk=%$rude;;
    
            if(!exists($dirk{$penis[$i]}))
            {
                $dirk{$penis[$i]}=[($penis[$i],[{}],[{}])];
            }
    
            $rude=$dirk{$penis[$i]};
            @curse=@$rude;
        }
    

    Wait… is "dirk" slang for something I don't know about?

    There are a few other words in here that I don't recognize as profanity, like flk, plip, disind, baf, and tot. And SEE? SEE is profanity? How? Are these profane words I just don't know? I mean, Ernest was a professional profanologist, and I'm just an amateur. Clearly I have a lot to learn.

    If you know what those mean, leave a comment. If you don't know what they mean, but want to make up an answer, I dunno… leave a comment too?



    Planet DebianJonathan McDowell: 21 years of blogging

    21 years ago today I wrote my first blog post. Did I think I’d still be writing all this time later? I’ve no idea to be honest. I’ve always had the impression my readership is small, mostly people who know me in some manner, and I post to let them know what I’m up to in more detail than snippets of IRC conversation can capture. Or I write to make notes for myself (I frequently refer back to things I’ve documented here). I write less about my personal life than I used to, but I still occasionally feel the need to mark some event.

    From a software PoV I started out with Blosxom, migrated to MovableType in 2008, then ditched that for Jekyll in 2015, when the Open Source variant disappeared (that’s also when I started putting it all in git). And I’ve stuck there since. The static generator format works well for me, and I outsource comments to Disqus - I don’t get a lot, I can’t be bothered with the effort of trying to protect against spammers, and folk who don’t want to use it can easily email or poke me on the Fediverse. If I ever feel the need to move from Jekyll I’ll probably take a look at Hugo, but thankfully at present there’s no push factor to switch.

    It’s interesting to look at my writing patterns over time. I obviously started keen, and peaked with 81 posts in 2006 (I’ve no idea how on earth that happened), while 2013 had only 2. Generally I write less when I’m busy, or stressed, or unhappy, so it’s kinda interesting to see how that lines up with various life events.

    Blog posts over time

    During that period I’ve lived in 10 different places (well, 10 different houses/flats, I think it’s only 6 different towns/cities), on 2 different continents, working at 6 different employers, as well as a period where I was doing my Masters in law. I’ve travelled around the world, made new friends, lost contact with folk, started a family. In short, I have lived, even if lots of it hasn’t made it to these pages.

    At this point, do I see myself stopping? No, not really. I plan to still be around, like Flameeyes, to the end. Even if my posts are unlikely to hit the frequency from back when I started out.

    Planet DebianSven Hoexter: exfatprogs: Do not try defrag.exfat / mkfs.exfat Windows compatibility in Trixie

    exfatprogs 1.3.0 added a new defrag.exfat utility which turned out to be unreliable and can cause data loss. exfatprogs 1.3.1 disabled the utility, and I followed that decision with the upload to Debian/unstable yesterday. But as usual it will take some time until it migrates to testing. Thus if you use testing do not try defrag.exfat! At least not without a vetted and current backup.

    Besides that, there is a compatibility issue with the way mkfs.exfat, as shipped in trixie (exfatprogs 1.2.9), handles drives which have a physical sector size of 4096 bytes but emulate a logical size of 512 bytes. With exfatprogs 1.2.6 a change was implemented to prefer the physical sector size on those devices. That turned out to be incompatible with Windows, and was reverted in exfatprogs 1.3.0. Sadly John Ogness ran into the issue and spent some time debugging it. I have to admit that I missed the relevance of that change. Huge kudos to John for the bug report. Based on that I prepared an update for the next trixie point release.

    If you hit that issue on trixie with exfatprogs 1.2.9-1 you can work around it by formatting with mkfs.exfat -s 512 /dev/sdX to get Windows compatibility. If you use exfatprogs 1.2.9-1+deb13u1 or later, and want the performance gain back, and do not need Windows compatibility, you can format with mkfs.exfat -s 4096 /dev/sdX.

    Planet DebianDirk Eddelbuettel: RcppArmadillo 15.2.3-1 on CRAN: Upstream Update

    armadillo image

    Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1272 other packages on CRAN, downloaded 43.2 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 661 times according to Google Scholar.

    This version updates to the 15.2.3 upstream Armadillo release from yesterday. It brings minor changes over the RcppArmadillo 15.2.2 release made last month (and described in this post). As noted previously, and due to both the upstream transition to C++14 coupled with the CRAN move away from C++11, the package offers a transition by allowing packages to remain with the older, pre-15.0.0 ‘legacy’ Armadillo yet offering the current version as the default. If and when CRAN has nudged (nearly) all maintainers away from C++11 (and now also C++14!!) we can remove the fallback. Our offer to help with the C++ modernization still stands, so please get in touch if we can be of assistance. As a reminder, the meta-issue #475 regroups all the resources for the C++11 transition.

    There were no R-side changes in this release. The detailed changes since the last release follow.

    Changes in RcppArmadillo version 15.2.3-1 (2025-12-16)

    • Upgraded to Armadillo release 15.2.3 (Medium Roast Deluxe)

      • Faster .resize() for vectors

      • Faster repcube()

    Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

    Planet DebianMatthew Garrett: How did IRC ping timeouts end up in a lawsuit?

    I recently won a lawsuit against Roy and Rianne Schestowitz, the authors and publishers of the Techrights and Tuxmachines websites. The short version of events is that they were subject to an online harassment campaign, which they incorrectly blamed me for. They responded with a large number of defamatory online posts about me, which the judge described as unsubstantiated character assassination and consequently awarded me significant damages. That's not what this post is about, as such. It's about the sole meaningful claim made that tied me to the abuse.

    In the defendants' defence and counterclaim[1], 15.27 asserts in part The facts linking the Claimant to the sock puppet accounts include, on the IRC network: simultaneous dropped connections to the mjg59_ and elusive_woman accounts. This is so unlikely to be coincidental that the natural inference is that the same person posted under both names. "elusive_woman" here is an account linked to the harassment, and "mjg59_" is me. This is actually a surprisingly interesting claim to make, and it's worth going into in some more detail.

    The event in question occurred on the 28th of April, 2023. You can see a line reading *elusive_woman has quit (Ping timeout: 2m30s), followed by one reading *mjg59_ has quit (Ping timeout: 2m30s). The timestamp listed for the first is 09:52, and for the second 09:53. Is that actually simultaneous? We can gain some more information - if you hover over the timestamp links on the right hand side you can see that the link is accurate to the second even if that's not displayed. The first event took place at 09:52:52, and the second at 09:53:03. That's 11 seconds apart, which is clearly not simultaneous, but maybe it's close enough. Figuring out more requires knowing what a "ping timeout" actually means here.

    The IRC server in question is running Ergo (link to source code), and the relevant function is handleIdleTimeout(). The logic here is fairly simple - track the time since activity was last seen from the client. If that time is longer than DefaultIdleTimeout (which defaults to 90 seconds) and a ping hasn't been sent yet, send a ping to the client. If a ping has been sent and the timeout is greater than DefaultTotalTimeout (which defaults to 150 seconds), disconnect the client with a "Ping timeout" message. There's no special logic for handling the ping reply - a pong simply counts as any other client activity and resets the "last activity" value and timeout.

    What does this mean? Well, for a start, two clients running on the same system will only have simultaneous ping timeouts if their last activity was simultaneous. Let's imagine a machine with two clients, A and B. A sends a message at 02:22:59. B sends a message 2 seconds later, at 02:23:01. The idle timeout for A will fire at 02:24:29, and for B at 02:24:31. A ping is sent for A at 02:24:29 and is responded to immediately - the idle timeout for A is now reset to 02:25:59, 90 seconds later. The machine hosting A and B has its network cable pulled out at 02:24:30. The ping to B is sent at 02:24:31, but receives no reply. A minute later, at 02:25:31, B quits with a "Ping timeout" message. A ping is sent to A at 02:25:59, but receives no reply. A minute later, at 02:26:59, A quits with a "Ping timeout" message. Despite both clients having their network interrupted simultaneously, the ping timeouts occur 88 seconds apart.
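    That walkthrough is easy to check mechanically. Here is a minimal sketch of the timeout logic in TypeScript - a model of the behaviour described above (instant pong replies, the default 90- and 150-second thresholds), not Ergo's actual Go code:

        const IDLE_TIMEOUT = 90;   // seconds of idleness before the server pings
        const TOTAL_TIMEOUT = 150; // seconds of silence before "Ping timeout"

        // Time (in seconds) at which a client quits with a ping timeout, given
        // its last activity and the moment its network drops. While the network
        // is up, each ping is answered immediately, and the pong resets the
        // activity clock just like any other client activity.
        function quitTime(lastActivity: number, netDown: number): number {
          let t = lastActivity;
          for (;;) {
            const pingAt = t + IDLE_TIMEOUT;
            if (pingAt < netDown) {
              t = pingAt; // pong received: counts as fresh activity
            } else {
              return t + TOTAL_TIMEOUT; // ping never answered
            }
          }
        }

    Measuring time in seconds from 02:22:59, client A gives quitTime(0, 91) = 240 (02:26:59) and client B gives quitTime(2, 91) = 152 (02:25:31): the 88-second gap from the walkthrough.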

    So, two clients disconnecting with ping timeouts 11 seconds apart is not incompatible with the network connection being interrupted simultaneously - depending on activity, simultaneous network interruption may result in disconnections up to 90 seconds apart. But another way of looking at this is that network interruptions may occur up to 90 seconds apart and generate simultaneous disconnections[2]. Without additional information it's impossible to determine which is the case.

    This already casts doubt over the assertion that the disconnection was simultaneous, but if this is unusual enough it's still potentially significant. Unfortunately for the Schestowitzes, even looking just at the elusive_woman account, there were several cases where elusive_woman and another user had a ping timeout within 90 seconds of each other - including one case where elusive_woman and schestowitz[TR] disconnect 40 seconds apart. By the Schestowitzes' argument, it's also a natural inference that elusive_woman and schestowitz[TR] (one of Roy Schestowitz's accounts) are the same person.

    We didn't actually need to make this argument, though. In England it's necessary to file a witness statement describing the evidence that you're going to present in advance of the actual court hearing. Despite being warned of the consequences on multiple occasions the Schestowitzes never provided any witness statements, and as a result weren't allowed to provide any evidence in court, which made for a fairly foregone conclusion.

    [1] As well as defending themselves against my claim, the Schestowitzes made a counterclaim on the basis that I had engaged in a campaign of harassment against them. This counterclaim failed.

    [2] Client A and client B both send messages at 02:22:59. A falls off the network at 02:23:00, has a ping sent at 02:24:29, and has a ping timeout at 02:25:29. B falls off the network at 02:24:28, has a ping sent at 02:24:29, and has a ping timeout at 02:25:29. Simultaneous disconnects despite over a minute of difference in the network interruption.


    Worse Than FailureThe Spare Drive

    As the single-digit Fahrenheit temperatures creep across the northeast United States, one's mind drifts off to holidays- specifically summer holidays where it isn't so cold that it hurts to breathe.

    Luciano M works in Italy, where August 15th is a national holiday, but also August is the traditional time of year for everyone to take off, leaving the country mostly shut down for the month.

    A long time ago, Luciano worked for a small company, along with some friends. This was long enough that you didn't rent compute from a cloud provider, but instead ran most of your intranet services off of a private server in your network closet somewhere.

    This particular server ran mostly everything: private git hosting, VPN, email, and an internal Jabber server for chat. Given that it ran most services in the company, one might think that they were backing it up regularly- and you'd be right. One might also think that they had some sort of failover setup, and that's where you'd be wrong.

    Late on August 12th, the hard drive on their server decided it was time to start its own holiday. Everyone noticed when it happened not because some alert got triggered, but because, as mentioned, Luciano was friends with the team, which meant they used the Jabber server to chat with each other about non-work stuff.

    Because half the country was already closed for August, getting replacements delivered was a dubious proposition, at best. Especially with the 15th looming, which not only made shipping delays worse, but this particular year was on a Friday, marking a 3-day weekend. Unless they wanted to spend the better part of a week out of commission, they needed to find an alternative.

    The only silver lining was that "shipping is delayed" is the kind of problem which can be solved by spending money. By the time it was all said and done, they paid more for shipping than they paid for the drive itself, but the drive arrived by the 14th, and by the end of the day, they had the server back up and running, restored from backup.

    And everything was happy, until August 12th, the following year, when the new hard drive decided to die the exact same way as the previous one, and the entire cycle repeated itself.

    And on the third year, a hard drive also failed on August 12th. At least, by that point, they were so used to the problem that they kept spare drives in inventory. Eventually, someone upgraded them to a RAID, which at least kept the downtime at a minimum.

    Luciano has long since moved on to a new job, but the date of August 12th is his own personal holiday: an unpleasant one.


    365 TomorrowsFeeding the Chronophage

    Author: Hillary Lyon Lo’e took the small box from the cluttered shelf in the back of his workroom. The metal cube was soldered together from mismatched pieces of metal. Once shiny, it was now dull and dust-covered. He weighed it in his hand; he was surprised at how lightweight it felt, how empty. Lo’e set […]

    The post Feeding the Chronophage appeared first on 365tomorrows.


    Planet DebianChristian Kastner: Simple-PPA, a minimalistic PPA implementation

    Today, the Debusine developers launched Debusine repositories, a beta implementation of PPAs. In the announcement, Colin remarks that "[d]iscussions about this have been happening for long enough that people started referring to PPAs for Debian as 'bikesheds'"; a characterization that I'm sure most will agree with.

    So it is with great amusement that on this same day, I launch a second PPA implementation for Debian: Simple-PPA.

    Simple-PPA was never meant to compete with Debusine, though. In fact, it's entirely the opposite: from discussions at DebConf, I knew that it was only a matter of time until Debusine gained a PPA-like feature, but I needed a stop-gap solution earlier. With some polish, what was once my Python script already doing APT processing for apt.ai.debian.net recently became Simple-PPA.

    Consequently, Simple-PPA lacks (and will always lack) all of the features that Debusine offers: there is no auto-building, no CI, nor any other type of QA. It's the simplest possible type of APT repository: you just upload packages, they get imported into an archive, and the archive is exposed via a web server. Under the hood, reprepro does all the heavy lifting.

    However, this also means it's trivial to set up. The following is the entire configuration that simple-ppa.debian.net started with:

    # simple-ppa.conf
    
    [CORE]
    SignWith = 2906D748B7551BC8
    ExportDir = /srv/www/simple-ppa
    MailFrom: Simple-PPA <admin@simple-ppa.debian.net>
    Codenames = sid forky trixie trixie-backports bookworm bookworm-backports
    AlsoAllow = forky: unstable
                trixie: unstable
                bookworm: unstable
    
    [simple-ppa-dev]
    Label = Simple-PPA's self-hosted development repository
    # ckk's key
    Uploaders = allow * by key E76004C5CEF0C94C+
    
    [ckk]
    Label = Christian Kastner at Simple-PPA
    Uploaders = allow * by key E76004C5CEF0C94C+

    The CORE section just sets some defaults and sensible rules. Two PPAs are defined, simple-ppa-dev and ckk, which accept packages signed by the key with the ID E76004C5CEF0C94C. These PPAs use the global defaults, but individual PPAs can override Architectures, Suites, and Components, and of course allow an arbitrary number of users.

    Users upload to this archive using SFTP (e.g. with dput-ng). Every 15 minutes, uploads get processed, with ACCEPTED or REJECTED mails sent to the Maintainer address. The APT archive of all PPAs is signed with a single global key.
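    For the curious, a dput-ng profile for a host like this is just a small JSON file (e.g. ~/.dput.d/profiles/simple-ppa.json). The following is only a sketch of the shape such a profile might take; the incoming path here is a placeholder, so check the instructions on the site for the real values:

        {
            "fqdn": "simple-ppa.debian.net",
            "method": "sftp",
            "incoming": "/upload"
        }

    With something like that in place, dput simple-ppa example_1.0-1_source.changes would upload over SFTP, and the next processing run would pick it up.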

    I myself intend to use Debusine repositories soon, as the autobuilding and the QA tasks Debusine offers are something I need. However, I do still see a niche use case for Simple-PPA: when you need an APT archive, but don't want to do a deep dive into reprepro (which is extremely powerful).

    If you'd like to give Simple-PPA a try, head over to simple-ppa.debian.net and follow the instructions for users.

    Planet DebianSteinar H. Gunderson: Lichess

    I wish more pages on the Internet were like Lichess. It's fast. It feels like it only does one thing (even though it's really more like seven or eight)—well, perhaps except for the weird blogs. It does not feel like it's trying to sell me anything; in fact, it feels like it hardly even wants my money. (I've bought two T-shirts from their Spreadshirt, to support them.) It's super-efficient; I've seen their (public) balance sheets, and it feels like it runs off of a shoestring budget. (Take note, Wikimedia Foundation!) And, perhaps most relieving in this day and age, it does not try any AI grift.

    Yes, I know, chess.com is the juggernaut, and has probably done more for chess' popularity than FIDE ever did. But I still go to Lichess every now and then and just click that 2+1 button. (Generally without even logging in, so that I don't feel angry about it when I lose.) Be more like Lichess.

    Krebs on SecurityMost Parked Domains Now Serving Malicious Content

    Direct navigation — the act of visiting a website by manually typing a domain name in a web browser — has never been riskier: A new study finds the vast majority of “parked” domains — mostly expired or dormant domain names, or common misspellings of popular websites — are now configured to redirect visitors to sites that foist scams and malware.

    A lookalike domain to the FBI Internet Crime Complaint Center website returned a non-threatening parking page (left), whereas a mobile user was instantly directed to deceptive content in October 2025 (right). Image: Infoblox.

    When Internet users try to visit expired domain names or accidentally navigate to a lookalike “typosquatting” domain, they are typically brought to a placeholder page at a domain parking company that tries to monetize the wayward traffic by displaying links to a number of third-party websites that have paid to have their links shown.

    A decade ago, ending up at one of these parked domains came with a relatively small chance of being redirected to a malicious destination: In 2014, researchers found (PDF) that parked domains redirected users to malicious sites less than five percent of the time — regardless of whether the visitor clicked on any links at the parked page.

    But in a series of experiments over the past few months, researchers at the security firm Infoblox say they discovered the situation is now reversed, and that malicious content is by far the norm now for parked websites.

    “In large scale experiments, we found that over 90% of the time, visitors to a parked domain would be directed to illegal content, scams, scareware and anti-virus software subscriptions, or malware, as the ‘click’ was sold from the parking company to advertisers, who often resold that traffic to yet another party,” Infoblox researchers wrote in a paper published today.

    Infoblox found parked websites are benign if the visitor arrives at the site using a virtual private network (VPN), or else via a non-residential Internet address. For example, Scotiabank.com customers who accidentally mistype the domain as scotaibank[.]com will see a normal parking page if they’re using a VPN, but will be redirected to a site that tries to foist scams, malware or other unwanted content if coming from a residential IP address. Again, this redirect happens just by visiting the misspelled domain with a mobile device or desktop computer that is using a residential IP address.

    According to Infoblox, the person or entity that owns scotaibank[.]com has a portfolio of nearly 3,000 lookalike domains, including gmai[.]com, which demonstrably has been configured with its own mail server for accepting incoming email messages. Meaning, if you send an email to a Gmail user and accidentally omit the “l” from “gmail.com,” that missive doesn’t just disappear into the ether or produce a bounce reply: It goes straight to these scammers. The report notes this domain also has been leveraged in multiple recent business email compromise campaigns, using a failed-payment lure with trojan malware attached.

    Infoblox found this particular domain holder (betrayed by a common DNS server — torresdns[.]com) has set up typosquatting domains targeting dozens of top Internet destinations, including Craigslist, YouTube, Google, Wikipedia, Netflix, TripAdvisor, Yahoo, eBay, and Microsoft. A defanged list of these typosquatting domains is available here (the dots in the listed domains have been replaced with commas).

    David Brunsdon, a threat researcher at Infoblox, said the parked pages send visitors through a chain of redirects, all while profiling the visitor’s system using IP geolocation, device fingerprinting, and cookies to determine where to redirect domain visitors.

    “It was often a chain of redirects — one or two domains outside the parking company — before threat arrives,” Brunsdon said. “Each time in the handoff the device is profiled again and again, before being passed off to a malicious domain or else a decoy page like Amazon.com or Alibaba.com if they decide it’s not worth targeting.”

    Brunsdon said domain parking services claim the search results they return on parked pages are designed to be relevant to their parked domains, but that almost none of this displayed content was related to the lookalike domain names they tested.

    Samples of redirection paths when visiting scotaibank dot com. Each branch includes a series of domains observed, including the color-coded landing page. Image: Infoblox.

    Infoblox said a different threat actor who owns domaincntrol[.]com — a domain that differs from GoDaddy’s name servers by a single character — has long taken advantage of typos in DNS configurations to drive users to malicious websites. In recent months, however, Infoblox discovered the malicious redirect only happens when the query for the misconfigured domain comes from a visitor who is using Cloudflare’s DNS resolvers (1.1.1.1), and that all other visitors will get a page that refuses to load.

    The researchers found that even variations on well-known government domains are being targeted by malicious ad networks.

    “When one of our researchers tried to report a crime to the FBI’s Internet Crime Complaint Center (IC3), they accidentally visited ic3[.]org instead of ic3[.]gov,” the report notes. “Their phone was quickly redirected to a false ‘Drive Subscription Expired’ page. They were lucky to receive a scam; based on what we’ve learnt, they could just as easily receive an information stealer or trojan malware.”

    The Infoblox report emphasizes that the malicious activity they tracked is not attributed to any known party, noting that the domain parking or advertising platforms named in the study were not implicated in the malvertising they documented.

    However, the report concludes that while the parking companies claim to only work with top advertisers, the traffic to these domains was frequently sold to affiliate networks, who often resold the traffic to the point where the final advertiser had no business relationship with the parking companies.

    Infoblox also pointed out that recent policy changes by Google may have inadvertently increased the risk to users from direct navigation abuse. Brunsdon said Google AdSense previously defaulted to allowing its ads to be placed on parked pages, but that in early 2025 Google flipped the default so that customers are opted out of presenting ads on parked domains, requiring the person running the ad to voluntarily go into their settings and turn on parked domains as a placement.

    Cryptogram Deliberate Internet Shutdowns

    For two days in September, Afghanistan had no internet. No satellite failed; no cable was cut. This was a deliberate outage, mandated by the Taliban government. It followed a more localized shutdown two weeks prior, reportedly instituted “to prevent immoral activities.” No additional explanation was given. The timing couldn’t have been worse: communities still reeling from a major earthquake lost emergency communications, flights were grounded, and banking was interrupted. Afghanistan’s blackout is part of a wider pattern. Just since the end of September, there were also major nationwide internet shutdowns in Tanzania and Cameroon, and significant regional shutdowns in Pakistan and Nigeria. In all cases but one, authorities offered no official justification or acknowledgment, leaving millions unable to access information, contact loved ones, or express themselves through moments of crisis, elections, and protests.

    The frequency of deliberate internet shutdowns has skyrocketed since the first notable example in Egypt in 2011. Together with our colleagues at the digital rights organisation Access Now and the #KeepItOn coalition, we’ve tracked 296 deliberate internet shutdowns in 54 countries in 2024, and at least 244 more in 2025 so far.

    This is more than an inconvenience. The internet has become an essential piece of infrastructure, affecting how we live, work, and get our information. It’s also a major enabler of human rights, and turning off the internet can worsen or conceal a spectrum of abuses. These shutdowns silence societies, and they’re getting more and more common.

    Shutdowns can be local or national, partial or total. In total blackouts, like Afghanistan or Tanzania, nothing works. But shutdowns are often targeted more granularly. Cellphone internet could be blocked, but not broadband. Specific news sites, social media platforms, and messaging systems could be blocked, leaving overall network access unaffected—as when Brazil shut off X (formerly Twitter) in 2024. Sometimes bandwidth is just throttled, making everything slower and unreliable.

    Sometimes, internet shutdowns are used in political or military operations. In recent years, Russia and Ukraine have shut off parts of each other’s internet, and Israel has repeatedly shut off Palestinians’ internet in Gaza. Shutdowns of this type happened 25 times in 2024, affecting people in 13 countries.

    Reasons for the shutdowns are as varied as the countries that perpetrate them. General information control is just one. Shutdowns often come in response to political unrest, as governments try to prevent people from organizing and getting information; Panama had a regional shutdown this summer in response to protests. Or during elections, as opposition parties utilize the internet to mobilize supporters and communicate strategy. Belarusian president Alyaksandr Lukashenko, who has ruled since 1994, reportedly disabled the internet during elections earlier this year, following a similar move in 2020. But they can also be more banal. Access Now documented countries disabling parts of the internet during student exam periods at least 16 times in 2024, including Algeria, Iraq, Jordan, Kenya, and India.

    Iran’s shutdowns in 2022 and June of this year are good examples of a highly sophisticated effort, with layers of shutdowns that end up forcing people off the global internet and onto Iran’s surveilled, censored national intranet. India, meanwhile, has been the world shutdown leader for many years, with 855 distinct incidents. Myanmar is second with 149, followed by Pakistan and then Iran. All of this information is available on Access Now’s digital dashboard, where you can see breakdowns by region, country, type, geographic extent, and time.

    There was a slight decline in shutdowns during the early years of the pandemic, but they have increased sharply since then. The reasons are varied, but a lot can be attributed to the rise in protest movements related to economic hardship and corruption, and general democratic backsliding and instability. In many countries today, shutdowns are a knee-jerk response to any form of unrest or protest, no matter how small.

    A country’s ability to shut down the internet depends a lot on its infrastructure. In the US, for example, shutdowns would be hard to enforce. As we saw when discussions about a potential TikTok ban ramped up two years ago, the complex and multifaceted nature of our internet makes it very difficult to achieve. However, as we’ve seen with total nationwide shutdowns around the world, the ripple effects in all aspects of life are immense. (Remember the effects of just a small outage—CrowdStrike in 2024—which crippled 8.5 million computers and cancelled 2,200 flights in the US alone?)

    The more centralized the internet infrastructure, the easier it is to implement a shutdown. If a country has just one cellphone provider, or only two fiber optic cables connecting the nation to the rest of the world, shutting them down is easy.

    Shutdowns are not only more common, but they’ve also become more harmful. Unlike in years past, when the internet was a nice option to have, or perhaps when internet penetration rates were significantly lower across the Global South, today the internet is an essential piece of societal infrastructure for the majority of the world’s population.

    Access Now has long maintained that denying people access to the internet is a human rights violation, and has collected harrowing stories from places like Tigray in Ethiopia, Uganda, Annobon in Equatorial Guinea, and Iran. The internet is an essential tool for a spectrum of rights, including freedom of expression and assembly. Shutdowns make documenting ongoing human rights abuses and atrocities more difficult or impossible. They also affect people’s daily lives, business, healthcare, education, finances, security, and safety, depending on the context. Shutdowns in conflict zones are particularly damaging, as they impact the ability of humanitarian actors to deliver aid and make it harder for people to find safe evacuation routes and civilian corridors.

    Defenses on the ground are slim. Depending on the country and the type of shutdown, there can be workarounds. Everything, from VPNs to mesh networks to Starlink terminals to foreign SIM cards near borders, has been used with varying degrees of success. The tech-savvy sometimes have other options. But for most everyone in society, no internet means no internet—and all the effects of that loss.

    The international community plays an important role in shaping how internet shutdowns are understood and addressed. World bodies have recognized that reliable internet access is an essential service, and could put more pressure on governments to keep the internet on in conflict-affected areas. But while international condemnation has worked in some cases (Mauritius and South Sudan are two recent examples), countries seem to be learning from each other, resulting in both more shutdowns and new countries perpetrating them.

    There’s still time to reverse the trend, if that’s what we want to do. Ultimately, the question comes down to whether or not governments will enshrine both a right to access information and freedom of expression in law and in practice. Keeping the internet on is a norm, but the trajectory from a single internet shutdown in 2011 to 2,000 blackouts 15 years later demonstrates how embedded the practice has become. The implications of that shift are still unfolding, but they reach far beyond the moment the screen goes dark.

    This essay was written with Zach Rosson, and originally appeared in Gizmodo.

    365 TomorrowsWhose Who

    Author: Majoki “I think therefore I am. Screw Descartes and his cogito ergo sum. That’s the kind of philosophical crap that’s going to bust us, Shannyn. If we want to capitalize on this breakthrough, we need to make every last person on earth damn well believe: I am because IDco tells me so.” Terry Black […]

    The post Whose Who appeared first on 365tomorrows.

    Worse Than FailureUnderwhelmed

    Our anonymous submitter was looking for a Microsoft partner to manage his firm's MSDN subscriptions; the pile of licenses and seats and allowed uses was complex enough to want specialists. In hopes of quickly zeroing in on a known and reputable firm, he tracked down the website of a tech consultancy that'd been used by one of his previous employers.

    When he browsed to their Contact Us page, filled out the contact form, and clicked Submit, the webpage simply refreshed with no signs of actually doing anything. After staring at the screen for a moment, wondering what had gone wrong, Subby noticed the single quotes used within his message were now escaped. Clicking Submit a few more times kept adding escape characters, with no submission ever occurring. So he amended his message to remove every it's, we're, and other such contraction.


    Without single quotes, the next submission was successful. It's impossible to say what was going on behind the scenes, but this seemed to suggest a SQL injection vulnerability in their form submission code. They were escaping "'" characters because they were building their query through string concatenation. But in addition to escaping the single quotes, it seemed to be rejecting any string which contained them.
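    To illustrate the difference - and this is a generic sketch, not whatever their site actually ran; the table, column, and use of node-postgres are invented for the example - the fix for this whole class of bug is to stop building queries from strings:

        import { Client } from "pg"; // node-postgres, chosen purely for illustration

        async function saveMessage(client: Client, body: string): Promise<void> {
          // Fragile: concatenating user input into SQL means escaping quotes
          // by hand, and one missed case breaks the query (or worse, runs it):
          //   `INSERT INTO messages (body) VALUES ('${body.replace(/'/g, "''")}')`
          // Robust: a parameterized query lets the driver handle quoting, so
          // every it's and we're arrives intact and can't inject anything.
          await client.query("INSERT INTO messages (body) VALUES ($1)", [body]);
        }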

    A stellar first impression, to be sure. In fairness, this firm hadn't designed their own website. The name of the designer they'd contracted with, displayed in the webpage footer, looked more embarrassing than proud in light of his trouble.

    An email address was listed beside the contact form. Subby sent a separate email alerting them of the bug he'd found. Hopefully, someone would acknowledge and channel it to the proper support contact.

    A week passed. Subby never received a response or any confirmation that any of his messages had been received. Had that mailbox been abandoned after most, if not all, attempted contacts had mysteriously failed?

    "I guess no SQL injection if it's never submitted!" Subby joked to himself.

    He moved on to other prospects.


    Planet DebianFreexian Collaborators: Debusine repositories now in beta (by Colin Watson)

    We’re happy to announce that Debusine can now be used to maintain APT-compatible add-on package repositories for Debian. This facility is available in public beta to Debian developers and maintainers.

    Why?

    Debian developers typically put most of their effort towards maintaining the main Debian archive. However, it’s often useful to have other places to work, for various reasons:

    • Developers working on a set of packages might need to check that changes to several of them all work properly together on a real system.
    • Somebody fixing a bug might need to ask affected users to test the fix before uploading it to Debian.
    • Some projects are difficult to package in a way that meets Debian policy, or are too niche to include in Debian, but it’s still useful to distribute them in a packaged form.
    • For some packages, it’s useful to provide multiple upstream versions for multiple Debian releases, even though Debian itself would normally want to keep that to a minimum.

    The Ubuntu ecosystem has had PPAs for a long time to meet these sorts of needs, but people working directly on Debian have had to make do with putting things together themselves using something like reprepro or aptly. Discussions about this have been happening for long enough that people started referring to PPAs for Debian as “bikesheds”, and users often find themselves trying to use Ubuntu PPAs on Debian systems and hoping that dependencies will be compatible enough for things to more or less work. This clearly isn’t ideal, and solving it is one of Freexian’s objectives for Debusine.

    Developers publishing packages to Debusine repositories can take advantage of all Debusine’s existing facilities, including a battery of QA tests and regression tracking (coming soon). Repositories are signed using per-repository keys held in Debusine’s signing service, and uploads to repositories are built against the current contents of that repository as well as the corresponding base Debian release. All repositories include automatic built-in snapshot capabilities.

    Who can use this service?

    We’ve set up debusine.debian.net to allow using repositories. All Debian Developers and Debian Maintainers can log in there and publish packages to it. The resulting repositories are public by default.

    debusine.debian.net only allows packages with licences that allow distribution by Debian, and it is intended primarily for work that could reasonably end up in Debian; Freexian reserves the right to remove repositories from it.

    How can I use it?

    If you are a Debian contributor, we’d be very excited to have you try this out, especially if you give us feedback. We have published instructions for developers on using this. Since this is a beta service, you can expect things to change, but we’ll maintain compatibility where we can.

    If you’re interested in using this in a commercial setting, please contact Freexian to discuss what we can do for you.

    Planet DebianFreexian Collaborators: Monthly report about Debian Long Term Support, November 2025 (by Santiago Ruano Rincón)

    The Debian LTS Team, funded by [Freexian’s Debian LTS offering](https://www.freexian.com/lts/debian/), is pleased to report its activities for November.

    Activity summary

    During the month of November, 18 contributors have been paid to work on Debian LTS (links to individual contributor reports are located below).

    The team released 33 DLAs fixing 219 CVEs.

    The LTS Team kept going with the usual cadence of preparing security updates for Debian 11 “bullseye”, but also for Debian 12 “bookworm”, Debian 13 “trixie” and even Debian unstable. As in previous months, we are pleased to say that there have been multiple contributions of LTS uploads by Debian Fellows outside the regular LTS Team.

    Notable security updates:

    • Guilhem Moulin prepared DLA 4365-1 for unbound, a caching DNS resolver, fixing a cache poisoning vulnerability that could lead to domain hijacking.
    • Another update related to DNS software was made by Andreas Henriksson. Andreas completed the work on bind9, released as DLA 4364-1 to fix cache poisoning and Denial of Service (DoS) vulnerabilities.
    • Chris Lamb released DLA 4374-1 to fix a potential arbitrary code execution vulnerability in pdfminer, a tool for extracting information from PDF documents.
    • Ben Hutchings published a regular security update for the linux 6.1 bullseye backport, as DLA 4379-1.
    • A couple of other important recurrent updates were prepared by Emilio Pozuelo, who handled firefox-esr and thunderbird (in collaboration with Christoph Goehre), published as DLA 4370-1 and DLA 4372-1, respectively.

    Contributions from fellows outside the LTS Team:

    • Thomas Goirand uploaded a bullseye update for keystone and swift
    • Jeremy Bícha prepared the bullseye update for gst-plugins-base1.0
    • As mentioned above, Christoph Goehre prepared the bullseye update for thunderbird.
    • Mathias Behrle provided feedback about the tryton-server and tryton-sao vulnerabilities that were disclosed last month, and helped to review the bullseye patches for tryton-server.

    Other than the regular LTS updates for bullseye, the LTS Team has also contributed updates to the latest Debian releases:

    • Bastien Roucariès prepared a bookworm update for squid, the web proxy cache server.
    • Carlos Henrique Lima Melara filed a bookworm point update request for gdk-pixbuf to fix CVE-2025-7345, a heap buffer overflow vulnerability that could lead to arbitrary code execution.
    • Daniel Leidert prepared bookworm and trixie updates for r-cran-gh to fix CVE-2025-54956, an issue that may expose user credentials in HTTP responses.
    • Along with the bullseye updates for unbound mentioned above, Guilhem helped to prepare the trixie update for unbound.
    • In collaboration with Lukas Märdian, Tobias Frost prepared trixie and bookworm updates for log4cxx, the C++ port of the Java logging framework.
    • Jochen Sprickerhof prepared a bookworm update for syslog-ng.
    • Utkarsh completed the bookworm update for wordpress, addressing multiple security issues in the popular blogging tool.

    Beyond security updates, there has been a significant effort in revamping our documentation, aiming to make the processes more clear and consistent for all the members of the team. This work was mainly carried out by Sylvain, Jochen and Roberto.

    We would like to express our gratitude to the sponsors for making the Debian LTS project possible. Also, special thanks to the fellows outside the LTS team for their valuable help.

    Individual Debian LTS contributor reports

    Thanks to our sponsors

    Sponsors that joined recently are in bold.


    Planet DebianGunnar Wolf: Unique security and privacy threats of large language models — a comprehensive survey

    This post is an unpublished review for Unique security and privacy threats of large language models — a comprehensive survey

    Much has been written about large language models (LLMs) being a risk to user security and privacy, including the issue that, being trained with datasets whose provenance and licensing are not always clear, they can be tricked into producing bits of data that should not be divulged. I took on reading this article as a means to gain a better understanding of this area. The article completely fulfilled my expectations.

    This is a review article, which is not a common format for me to follow: instead of digging deep into a given topic, including an experiment or some way of proving the authors’ claims, a review article will contain a brief explanation and taxonomy of the issues at hand, and a large number of references covering the field. And, at 36 pages and 151 references, that’s exactly what we get.

    The article is roughly split into two parts: the first three sections present the issue of security and privacy threats as seen by the authors, as well as the taxonomy within which the review will be performed, and sections 4 through 7 cover the different moments in the life cycle of an LLM (at pre-training, during fine-tuning, when deploying systems that will interact with end-users, and when deploying LLM-based agents), detailing the relevant publications for each. For each of said moments, the authors first explore the nature of the relevant risks, then present relevant attacks, and finally close outlining countermeasures to said attacks.

    The text is accompanied all throughout its development with tables, pipeline diagrams and attack examples that visually guide the reader. While the examples presented are sometimes a bit simplistic, they are a welcome guide and aid to follow the explanations; the explanations for each of the attack models are necessarily not very deep, and I was often left wondering whether I had correctly understood a given topic, or wanting to dig deeper – but this being a review article, that is absolutely understandable.

    The authors write easy-to-read prose, and this article fills an important spot in understanding this large and emerging area of LLM-related study.

    Worse Than FailureCodeSOD: Duplicate Reports

    Today's anonymous submitter sends us a short snippet. They found this because they were going through code committed by an expensive third-party contractor, trying to track down a bug: every report in the database kept getting duplicated for some reason.

    This code has been in production for over a decade, bugs and all:

    if (reportStatuses.indexOf(newStatus > -1))
    {
        // add report to database
    }
    

    This is server-side JavaScript running in NodeJS. The mistake here is easy to make, it's a simple transposition error. But it's also easy to catch. Any sort of testing at all would find it.

    The specific problem, if you haven't spotted it, is where the comparison operator happens: we're passing newStatus > -1 into indexOf as a parameter: this is a boolean value. Now, neither true nor false are in the reportStatuses array, so indexOf returns -1. But -1 is a truthy value, so the condition evaluates to true, adding the report to the database, even if it's already there.
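    For the record, either of the following would have behaved the way the author presumably intended (the second form needs any ES2016-or-later runtime):

    if (reportStatuses.indexOf(newStatus) > -1)
    {
        // add report to database
    }

    // or, more readably:
    if (reportStatuses.includes(newStatus))
    {
        // add report to database
    }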

    Our submitter writes:

    How has no one noticed this? How is the company still in business? How does the world not come down crashing around us more every day?

    How is the world not crashing down? Have you looked outside, recently? Tis the season to quote Clark Griswold:

    Worse? How could things get any worse? Take a look around here, Ellen. We're at the threshold of hell.


    365 TomorrowsThe Dust We Carry

    Author: Julian Miles, Staff Writer The figure hunched in the chair leans on the table with trembling arms. “It was a routine trip. Scanning and observation, back before the dinosaurs. We’ve done several.” The stocky figure in the hazmat suit sat opposite points at their tablet. “Not like this, Professor Devis. You brought something back.” […]

    The post The Dust We Carry appeared first on 365tomorrows.

    Planet DebianRuss Allbery: Review: Brigands & Breadknives

    Review: Brigands & Breadknives, by Travis Baldree

    Series: Legends & Lattes #3
    Publisher: Tor
    Copyright: 2025
    ISBN: 1-250-33489-6
    Format: Kindle
    Pages: 325

    Brigands & Breadknives is a secondary-world sword-and-sorcery fantasy and a sequel to both Legends & Lattes and Bookshops & Bonedust. It takes place shortly after Legends & Lattes chronologically, but Fern, the protagonist, was introduced in the Bookshops & Bonedust prequel.

    You may have noticed I didn't describe this as cozy fantasy. That is intentional.

    When we left Fern at the end of Bookshops & Bonedust, the rattkin was running a bookshop in the town of Murk. As Brigands & Breadknives opens, Fern is moving, for complicated and hard-to-describe personal reasons, to Thune where Viv has her coffee shop. Her plan is to open a new bookstore next door to Legends and Lattes. This is exactly the sort of plot one might expect from this series, and the first few chapters feel like yet another version of the first two novels. Then Fern makes an impulsive and rather inexplicable (even to herself) decision and the plot goes delightfully sideways.

    Brigands & Breadknives is not, as Baldree puts it in the afterword, a book about fantasy small-business ownership as the answer to all of life's woes. It is, instead, a sword and sorcery story about a possibly immortal elven bounty hunter, her utterly baffling goblin prisoner, and a rattkin bookseller who becomes their unexpected travel companion for reasons she can't explain. It's a story about a mid-life crisis in a world and with supporting characters that I can only describe as inspired by a T. Kingfisher novel.

    Baldree is not Ursula Vernon, of course. This book does not contain paladins or a romance, possibly to the relief of some readers. It's slower, a bit more introspective, and doesn't have as sharp of edges or the casual eerie unsettlingness. But there is a religious order that worships a tentacled space horror for entirely unexpected reasons, pompous and oleaginous talking swords with verbose opinions about everything, a mischievously chaotic orange-haired goblin who quickly became one of my favorite fantasy characters and then kept getting better, and a whole lot of heart. You may see why Kingfisher was my first thought for a comparison point.

    Unlike Baldree's previous novels, there is a lot of combat and injury. I think some people will still describe this book as cozy, and I'm not going to argue too strongly because the conflicts are a bit lighter than the sort of rape and murder one would see in a Mercedes Lackey novel. But to me this felt like sword and sorcery in a Dungeons and Dragons universe made more interesting by letting the world-building go feral and a little bit sarcastic. Most of the book is spent traveling, there are a lot of random encounters that build into a connected plot, and some scenes (particularly the defense of the forest village) felt like they could have sold to the Swords and Sorceress anthology series.

    Also, this was really good! I liked both Legends & Lattes and Bookshops & Bonedust, maybe a bit more than the prevailing opinion among reviewers since the anachronisms never bothered me, but I wasn't sure whether to dive directly into this book because I was expecting more of the same. This is not more of the same. I think it's clearly better writing and world-building than either of the previous books. It helps that Fern is the protagonist; as much as I like Viv, I think Fern is a more interesting character, and I am glad she got a book of her own.

    Baldree takes a big risk on the emotional arc of this book. Fern starts the story in a bad state and makes some decisions to kick off the plot that are difficult to defend. She beats herself up for those decisions for most of the book, deservedly, and parts of that emotional turmoil are difficult to read. Baldree resists the urge to smooth everything over and instead provides a rather raw sense of depression, avoidance, and social anxiety that some readers are going to have to brace themselves for.

    I respect the decision to not write the easy series book people probably expected, but I'm not sure Fern's emotional arc quite worked. Baldree is hinting at something that's hard to describe logically, and I'm not sure he was able to draw a clear enough map of Fern's thought process for the reader to understand her catharsis. The "follow your passion" self-help mindset has formed a gravitational singularity in the vicinity of this book's theme, and it takes some skillful piloting to avoid being sucked into its event horizon; I don't think Baldree quite managed to escape it. He made a valiant attempt, though, and it created a far more interesting book than one about safer emotions.

    I wanted more of an emotional payoff than I got, but the journey, even with the moments of guilt and anxiety, was so worth it. The world-building is funnier and more interesting than the previous books of the series, and the supporting cast is fantastic. If you bailed on the series but you like sword and sorcery and T. Kingfisher novels, consider returning. You do probably need to read Bookshops & Bonedust first, if you haven't already, since it helps to know the start of Fern's story.

    Recommended, and shortcomings aside, much better than I had expected.

    Content notes: Bloody sword fights, major injury, some very raw emotions about letting down friends and destroying friendships.

    Rating: 8 out of 10

    Cory Doctorow Daddy-Daughter Podcast, 2025 Edition

    Poesy and me in front of our Christmas tree.

    This week on my podcast, I sit down with my daughter Poesy, for our annual Daddy-Daughter Podcast, a tradition we’ve had since she was three (she’s 17 now!). This year, Poe recaps her graduation, her triumphs with her dance team, and her life at college! She offers us a tutorial on playing Egyptian War, and we sing Jingle Bells!

    MP3


    Cryptogram Against the Federal Moratorium on State-Level Regulation of AI

    Cast your mind back to May of this year: Congress was in the throes of debate over the massive budget bill. Amidst the many seismic provisions, Senator Ted Cruz dropped a ticking time bomb of tech policy: a ten-year moratorium on the ability of states to regulate artificial intelligence. To many, this was catastrophic. The few massive AI companies seem to be swallowing our economy whole: their energy demands are overriding household needs, their data demands are overriding creators’ copyright, and their products are triggering mass unemployment as well as new types of clinical psychoses. In a moment where Congress is seemingly unable to act to pass any meaningful consumer protections or market regulations, why would we hamstring the one entity evidently capable of doing so—the states? States that have already enacted consumer protections and other AI regulations, like California, and those actively debating them, like Massachusetts, were alarmed. Seventeen Republican governors wrote a letter decrying the idea, and it was ultimately killed in a rare vote of bipartisan near-unanimity.

    The idea is back. Before Thanksgiving, a House Republican leader suggested they might slip it into the annual defense spending bill. Then, a draft document leaked outlining the Trump administration’s intent to enforce the state regulatory ban through executive powers. An outpouring of opposition (including from some Republican state leaders) beat back that notion for a few weeks, but on Monday, Trump posted on social media that the promised Executive Order is indeed coming soon. That would put a growing cohort of states, including California and New York, as well as Republican strongholds like Utah and Texas, in jeopardy.

    The constellation of motivations behind this proposal is clear: conservative ideology, cash, and China.

    The intellectual argument in favor of the moratorium is that “freedom”-killing state regulation on AI would create a patchwork that would be difficult for AI companies to comply with, which would slow the pace of innovation needed to win an AI arms race with China. AI companies and their investors have been aggressively peddling this narrative for years now, and are increasingly backing it with exorbitant lobbying dollars. It’s a handy argument, useful not only to kill regulatory constraints, but also—companies hope—to win federal bailouts and energy subsidies.

    Citizens should parse that argument from their own point of view, not Big Tech’s. Preventing states from regulating AI means that those companies get to tell Washington what they want, but your state representatives are powerless to represent your own interests. Which freedom is more important to you: the freedom for a few near-monopolies to profit from AI, or the freedom for you and your neighbors to demand protections from its abuses?

    There is an element of this that is more partisan than ideological. Vice President J.D. Vance argued that federal preemption is needed to prevent “progressive” states from controlling AI’s future. This is an indicator of creeping polarization, where Democrats decry the monopolism, bias, and harms attendant to corporate AI and Republicans reflexively take the opposite side. It doesn’t help that some in the parties also have direct financial interests in the AI supply chain.

    But this does not need to be a partisan wedge issue: both Democrats and Republicans have strong reasons to support state-level AI legislation. Everyone shares an interest in protecting consumers from harm created by Big Tech companies. In leading the charge to kill Cruz’s initial AI moratorium proposal, Republican Senator Marsha Blackburn explained that “This provision could allow Big Tech to continue to exploit kids, creators, and conservatives ... we can’t block states from making laws that protect their citizens.” More recently, Florida Governor Ron DeSantis has said he wants to regulate AI in his state.

    The often-heard complaint that it is hard to comply with a patchwork of state regulations rings hollow. Pretty much every other consumer-facing industry has managed to deal with local regulation—automobiles, children’s toys, food, and drugs—and those regulations have been effective consumer protections. The AI industry includes some of the most valuable companies globally and has demonstrated the ability to comply with differing regulations around the world, including the EU’s AI and data privacy regulations, substantially more onerous than those so far adopted by US states. If we can’t leverage state regulatory power to shape the AI industry, to what industry could it possibly apply?

    The regulatory superpower that states have here is not size and force, but rather speed and locality. We need the “laboratories of democracy” to experiment with different types of regulation that fit the specific needs and interests of their constituents and evolve responsively to the concerns they raise, especially in an area as consequential and rapidly changing as AI.

    We should embrace the ability of regulation to be a driver—not a limiter—of innovation. Regulations don’t restrict companies from building better products or making more profit; they help channel that innovation in specific ways that protect the public interest. Drug safety regulations don’t prevent pharma companies from inventing drugs; they force them to invent drugs that are safe and efficacious. States can direct private innovation to serve the public.

    But, most importantly, regulations are needed to prevent the most dangerous impact of AI today: the concentration of power associated with trillion-dollar AI companies and the power-amplifying technologies they are producing. We outline the specific ways that the use of AI in governance can disrupt existing balances of power, and how to steer those applications towards more equitable balances, in our new book, Rewiring Democracy. In the nearly complete absence of Congressional action during the years in which AI has swept the world’s attention, it has become clear that states are the only effective policy levers we have against that concentration of power.

    Instead of impeding states from regulating AI, the federal government should support them to drive AI innovation. If proponents of a moratorium worry that the private sector won’t deliver what they think is needed to compete in the new global economy, then we should engage government to help generate AI innovations that serve the public and solve the problems most important to people. Following the lead of countries like Switzerland, France, and Singapore, the US could invest in developing and deploying AI models designed as public goods: transparent, open, and useful for tasks in public administration and governance.

    Maybe you don’t trust the federal government to build or operate an AI tool that acts in the public interest? We don’t either. States are a much better place for this innovation to happen because they are closer to the people, they are charged with delivering most government services, they are better aligned with local political sentiments, and they have achieved greater trust. They’re where we can test, iterate, compare, and contrast regulatory approaches that could inform eventual and better federal policy. And, while the costs of training and operating performant AI tools like large language models have declined precipitously, the federal government can play a valuable role here in funding cash-strapped states to lead this kind of innovation.

    This essay was written with Nathan E. Sanders, and originally appeared in Gizmodo.

    EDITED TO ADD: Trump signed an executive order banning state-level AI regulations hours after this was published. This is not going to be the last word on the subject.

    Planet Debian Evgeni Golov: Home Assistant, Govee Lights Local, VLANs, Oh my!

    We recently bought some Govee Glide Hexa Light Panels, because they have a local LAN API that is well integrated into Home Assistant. Or so we thought.

    Our network is not that complicated, but there is a dedicated VLAN for IOT devices. Home Assistant runs in a container (with network=host) on a box in the basement, and that box has a NIC in the IOT VLAN so it can reach devices there easily. So far, this has never been a problem.

    Enter the Govee LAN API. Or maybe its Python implementation. Not exactly sure who's to blame here.

    The API involves sending JSON over multicast, to which the Govee device will respond.
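
    As a rough sketch of what that exchange looks like outside Home Assistant, the Python fragment below sends the discovery request and prints whatever answers. The multicast group, ports, and "scan" payload follow Govee's published LAN API documentation; the interface address is a placeholder for a NIC in the IOT VLAN, and error handling is omitted.

    import json
    import socket

    # Govee LAN API discovery (group/ports per Govee's LAN API docs):
    # multicast a JSON "scan" command, then listen for the unicast
    # replies that devices send back to port 4002.
    MCAST_GROUP = "239.255.255.250"
    SCAN_PORT = 4001
    REPLY_PORT = 4002
    IFACE_IP = "192.168.42.113"  # placeholder: address of the NIC in the IOT VLAN

    scan = json.dumps({"msg": {"cmd": "scan", "data": {"account_topic": "reserve"}}})

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Pin the socket to the IOT-VLAN interface -- the step that goes wrong
    # when Home Assistant picks a different NIC for discovery.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                    socket.inet_aton(IFACE_IP))
    sock.bind((IFACE_IP, REPLY_PORT))
    sock.settimeout(5)
    sock.sendto(scan.encode(), (MCAST_GROUP, SCAN_PORT))

    try:
        while True:
            data, addr = sock.recvfrom(4096)
            print(addr[0], json.loads(data))
    except socket.timeout:
        pass  # no (more) devices answered within the timeout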

    No devices found on the network

    After turning logging for homeassistant.components.govee_light_local to 11, erm, debug, we see:

    DEBUG (MainThread) [homeassistant.components.govee_light_local.config_flow] Starting discovery with IP 192.168.42.2
    DEBUG (MainThread) [homeassistant.components.govee_light_local.config_flow] No devices found with IP 192.168.42.2
    

    That's not the IP address in the IOT VLAN!

    Turns out the integration recently got support for multiple NICs, but Home Assistant doesn't just use all the interfaces it sees by default.

    You need to go to Settings → Network → Network adapter and deselect "Autoconfigure", which will allow you to select individual interfaces.

    Once you've done that, you'll see Starting discovery with IP messages for all selected interfaces, and adding Govee Lights Local will work.

    Planet Debian Dirk Eddelbuettel: BH 1.90.0-1 on CRAN: New Upstream

    Boost

    Boost is a very large and comprehensive set of (peer-reviewed) libraries for the C++ programming language, containing well over one hundred individual libraries. The BH package provides a sizeable subset of header-only libraries for (easier, no linking required) use by R. It is fairly widely used: the (partial) CRAN mirror logs (aggregated from the cloud mirrors) show over 41.5 million package downloads.
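
    Because these libraries are header-only, using Boost from R via Rcpp needs nothing beyond an include and a dependency attribute; there is no linking step. Here is a minimal sketch (the file name and exported function are made up for illustration; boost/lexical_cast.hpp is one of the many headers BH ships):

    // boost_demo.cpp -- compile from R with Rcpp::sourceCpp("boost_demo.cpp").
    // The depends attribute below puts BH's Boost headers on the include path;
    // as these are header-only, no linking step is required.
    #include <Rcpp.h>
    #include <string>
    #include <boost/lexical_cast.hpp>

    // [[Rcpp::depends(BH)]]

    // [[Rcpp::export]]
    double asDouble(const std::string& s) {
        // strict string-to-double conversion; throws on malformed input
        return boost::lexical_cast<double>(s);
    }

    After sourcing, asDouble("3.14") returns 3.14 at the R prompt.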

    Version 1.90.0 of Boost was released a few days ago following the regular Boost release schedule of April, August and December releases. As before, we packaged it almost immediately and started testing following our annual update cycle, which strives to balance staying close enough to upstream against not stressing CRAN and the user base too much. The reverse-depends check revealed only one really minor issue among the over three hundred direct reverse dependencies. And that issue was addressed yesterday within hours by a truly responsive maintainer (and it helped that a related issue had been addressed months earlier with version 1.89.0). So big thanks to Jean-Romain Roussel for the prompt fix, and to Andrew Johnson for the earlier test with 1.89.0.

    As last year with 1.87.0, no new Boost libraries were added to BH, so the (considerable) size is more or less unchanged. The size led CRAN to do a manual inspection, but as there were no other issues it sailed through and is now in the CRAN repository.

    The short NEWS entry follows.

    Changes in version 1.90.0-1 (2025-12-13)

    • Upgrade to Boost 1.90.0, patched as usual to comment-out diagnostic suppression messages per the request of CRAN

    • Minor upgrades to continuous integration

    Via my CRANberries, there is a diffstat report relative to the previous release. Comments and suggestions about BH are welcome via the issue tracker at the GitHub repo.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

    Cryptogram Like Social Media, AI Requires Difficult Choices

    In his 2020 book, “Future Politics,” British barrister Jamie Susskind wrote that the dominant question of the 20th century was “How much of our collective life should be determined by the state, and what should be left to the market and civil society?” But in the early decades of this century, Susskind suggested that we face a different question: “To what extent should our lives be directed and controlled by powerful digital systems—and on what terms?”

    Artificial intelligence (AI) forces us to confront this question. It is a technology that in theory amplifies the power of its users: A manager, marketer, political campaigner, or opinionated internet user can utter a single instruction, and see their message—whatever it is—instantly written, personalized, and propagated via email, text, social, or other channels to thousands of people within their organization, or millions around the world. It also allows us to individualize solicitations for political donations, elaborate a grievance into a well-articulated policy position, or tailor a persuasive argument to an identity group, or even a single person.

    But even as it offers endless potential, AI is a technology that—like the state—gives others new powers to control our lives and experiences.

    We’ve seen this play out before. Social media companies made the same sorts of promises 20 years ago: instant communication enabling individual connection at massive scale. Fast-forward to today, and the technology that was supposed to give individuals power and influence ended up controlling us. Today social media dominates our time and attention, assaults our mental health, and—together with its Big Tech parent companies—captures an unfathomable fraction of our economy, even as it poses risks to our democracy.

    The novelty and potential of social media was as present then as it is for AI now, which should make us wary of its potential harmful consequences for society and democracy. We legitimately fear artificial voices and manufactured reality drowning out real people on the internet: on social media, in chat rooms, everywhere we might try to connect with others.

    It doesn’t have to be that way. Alongside these evident risks, AI has legitimate potential to transform both everyday life and democratic governance in positive ways. In our new book, “Rewiring Democracy,” we chronicle examples from around the globe of democracies using AI to make regulatory enforcement more efficient, catch tax cheats, speed up judicial processes, synthesize input from constituents to legislatures, and much more. Because democracies distribute power across institutions and individuals, making the right choices about how to shape AI and its uses requires both clarity and alignment across society.

    To that end, we spotlight four pivotal choices facing private and public actors. These choices are similar to those we faced during the advent of social media, and in retrospect we can see that we made the wrong decisions back then. Our collective choices in 2025—choices made by tech CEOs, politicians, and citizens alike—may dictate whether AI is applied to positive and pro-democratic, or harmful and civically destructive, ends.

    A Choice for the Executive and the Judiciary: Playing by the Rules

    The Federal Election Commission (FEC) calls it fraud when a candidate hires an actor to impersonate their opponent. More recently, they had to decide whether doing the same thing with an AI deepfake makes it okay. (They concluded it does not.) Although in this case the FEC made the right decision, this is just one example of how AIs could skirt laws that govern people.

    Likewise, courts are having to decide if and when it is okay for an AI to reuse creative materials without compensation or attribution, which might constitute plagiarism or copyright infringement if carried out by a human. (The court outcomes so far are mixed.) Courts are also adjudicating whether corporations are responsible for upholding promises made by AI customer service representatives. (In the case of Air Canada, the answer was yes, and insurers have started covering the liability.)

    Social media companies faced many of the same hazards decades ago and have largely been shielded by the combination of Section 230 of the Communications Decency Act of 1996 and the safe harbor offered by the Digital Millennium Copyright Act of 1998. Even in the absence of congressional action to strengthen or add rigor to this law, the Federal Communications Commission (FCC) and the Supreme Court could take action to enhance its effects and to clarify which humans are responsible when technology is used, in effect, to bypass existing law.

    A Choice for Congress: Privacy

    As AI-enabled products increasingly ask Americans to share yet more of their personal information—their “context”—to use digital services like personal assistants, safeguarding the interests of the American consumer should be a bipartisan cause in Congress.

    It has been nearly 10 years since Europe adopted comprehensive data privacy regulation. Today, American companies exert massive efforts to limit data collection, acquire consent for use of data, and hold it confidential under significant financial penalties—but only for their customers and users in the EU.

    Regardless, a decade later the U.S. has still failed to make progress on any serious attempts at comprehensive federal privacy legislation written for the 21st century, and there are precious few data privacy protections that apply to narrow slices of the economy and population. This inaction comes in spite of scandal after scandal regarding Big Tech corporations’ irresponsible and harmful use of our personal data: Oracle’s data profiling, Facebook and Cambridge Analytica, Google ignoring data privacy opt-out requests, and many more.

    Privacy is just one side of the obligations AI companies should have with respect to our data; the other side is portability—that is, the ability for individuals to choose to migrate and share their data between consumer tools and technology systems. To the extent that knowing our personal context really does enable better and more personalized AI services, it’s critical that consumers have the ability to extract and migrate their personal context between AI solutions. Consumers should own their own data, and with that ownership should come explicit control over who and what platforms it is shared with, as well as withheld from. Regulators could mandate this interoperability. Otherwise, users are locked in and lack freedom of choice between competing AI solutions—much like the time invested to build a following on a social network has locked many users to those platforms.

    A Choice for States: Taxing AI Companies

    It has become increasingly clear that social media is not a town square in the utopian sense of an open and protected public forum where political ideas are distributed and debated in good faith. If anything, social media has coarsened and degraded our public discourse. Meanwhile, the sole act of Congress designed to substantially rein in the social and political effects of social media platforms—the TikTok ban, which aimed to protect the American public from Chinese influence and data collection, citing it as a national security threat—is one it seems to no longer even acknowledge.

    While Congress has waffled, regulation in the U.S. is happening at the state level. Several states have limited children’s and teens’ access to social media. With Congress having rejected—for now—a threatened federal moratorium on state-level regulation of AI, California passed a new slate of AI regulations after mollifying a lobbying onslaught from industry opponents. Perhaps most interesting, Maryland has recently become the first in the nation to levy taxes on digital advertising platform companies.

    States now face a choice of whether to apply a similar reparative tax to AI companies to recapture a fraction of the costs they externalize on the public to fund affected public services. State legislators concerned with the potential loss of jobs, cheating in schools, and harm to those with mental health concerns caused by AI have options to combat it. They could extract the funding needed to mitigate these harms to support public services—strengthening job training programs and public employment, public schools, public health services, even public media and technology.

    A Choice for All of Us: What Products Do We Use, and How?

    A pivotal moment in the social media timeline occurred in 2006, when Facebook opened its service to the public after years of catering to students of select universities. Millions quickly signed up for a free service where the only source of monetization was the extraction of their attention and personal data.

    Today, about half of Americans are daily users of AI, mostly via free products from Facebook’s parent company Meta and a handful of other familiar Big Tech giants and venture-backed tech firms such as Google, Microsoft, OpenAI, and Anthropic—with every incentive to follow the same path as the social platforms.

    But now, as then, there are alternatives. Some nonprofit initiatives are building open-source AI tools that have transparent foundations and can be run locally and under users’ control, like AllenAI and EleutherAI. Some governments, like Singapore, Indonesia, and Switzerland, are building public alternatives to corporate AI that don’t suffer from the perverse incentives introduced by the profit motive of private entities.

    Just as social media users have faced platform choices with a range of value propositions and ideological valences—as diverse as X, Bluesky, and Mastodon—the same will increasingly be true of AI. Those of us who use AI products in our everyday lives as people, workers, and citizens may not have the same power as judges, lawmakers, and state officials. But we can play a small role in influencing the broader AI ecosystem by demonstrating interest in and usage of these alternatives to Big AI. If you’re a regular user of commercial AI apps, consider trying the free-to-use service for Switzerland’s public Apertus model.

    None of these choices are really new. They were all present almost 20 years ago, as social media moved from niche to mainstream. They were all policy debates we did not have, choosing instead to view these technologies through rose-colored glasses. Today, though, we can choose a different path and realize a different future. It is critical that we intentionally navigate a path to a positive future for societal use of AI—before the consolidation of power renders it too late to do so.

    This post was written with Nathan E. Sanders, and originally appeared in Lawfare.

    Cryptogram Upcoming Speaking Engagements

    This is a current list of where and when I am scheduled to speak:

    • I’m speaking and signing books at the Chicago Public Library in Chicago, Illinois, USA, at 6:00 PM CT on February 5, 2026. Details to come.
    • I’m speaking at Capricon 44 in Chicago, Illinois, USA. The convention runs February 5-8, 2026. My speaking time is TBD.
    • I’m speaking at the Munich Cybersecurity Conference in Munich, Germany on February 12, 2026.
    • I’m speaking at Tech Live: Cybersecurity in New York City, USA on March 11, 2026.
    • I’m giving the Ross Anderson Lecture at the University of Cambridge’s Churchill College on March 19, 2026.
    • I’m speaking at RSAC 2026 in San Francisco, California, USA on March 25, 2026.

    The list is maintained on this page.

    365 Tomorrows Death Don’t Do Us Part

    Author: Mark Budman

    The Scrabble board and the box fell apart first, but my wife and I soldiered on. We glued the board together with a homemade glue, and the letter pieces made of real wood were still alive. Scrabble was our best way of killing time. What else could you do here? Sleep and […]

    The post Death Don’t Do Us Part appeared first on 365tomorrows.


    David Brin The “Contract” – Part Three. Aggressive Agility requires fresh ideas!

      I doubt many will show up here. Part One and Part Two were “tl;dr”… as well as jarring! Still, shall we get to the MEATY PART?


    DRAFTING A NEWER DEMOCRATIC DEAL WITH THE AMERICAN PEOPLE

     


    Part One and Part Two aimed to study an old - though successful - political tactic that was concocted and executed with great skill by a rather different version of Republicans. A tactic that later dissolved into a swill of broken promises, after achieving Power.


      So, shall we wind this up with a shopping list of our own?  What follows is a set of promises – a contract of our own, aiming for the spirit of FDR's New Deal – with the citizens of America. 

    Hoping you will find it LBWR... long but worth reading.


    First, yes. It is hard to see, in today's ruling coalition of kleptocrats, fanatics and liars, any of the genuinely sober sincerity that many Americans thought they could sense coming from Newt Gingrich and the original wave of "neoconservatives."  Starting with Dennis "Never Negotiate" Hastert, the GOP leadership caste spiraled into ever-accelerating scandal and corruption.

     

    Still, I propose to ponder what a "Democratic Newest Deal for America" might look like!  

     

    -       Exposing hypocrisy and satirizing the failure of that earlier "contract" …

     

    -       while using its best parts to appeal to sincere moderates and conservatives …

     

    -       while firmly clarifying the best consensus liberal proposals…

     

    -       while offering firm methods to ensure that any reforms actually take effect and don’t just drift away.

     

    Remember that this alternative "contract" – or List of Democratic Intents – will propose reforms that are of real value… but also repeatedly highlight GOP betrayals.

     

    Might it be worth testing before some focus groups?

     

     

     

                      A Draft: Democratic Deal for America

     

    As Democratic Members of the House of Representatives and as citizens seeking to join that body, we propose both to change its practices and to restore bonds of trust between the people and their elected representatives.  

     

    We offer these proposals in sincere humility, aware that so many past promises were broken.  We shall, foremost, emphasize restoration of a citizen's right to know, and to hold the mighty accountable.

     

    Especially, we will emphasize placing tools of democracy, openness and trust back into the hands of the People. We will also seek to ensure that government re-learns its basic function, to be the efficient, honest and effective tool of the People.

     

    Toward this end, we’ll incorporate lessons of the past and goals for the future, promises that were betrayed and promises that need to be renewed, ideas from left, right and center. But above all, the guiding principle that America is an open society of bold and free citizens. Citizens who are empowered to remind their political servants who is boss. 

     

     

    PART I.   REFORM CONGRESS 

     

    In the first month of the new Congress, our new Democratic majority will pass the following major reforms of Congress itself, aimed at restoring the faith and trust of the American people:

     

    FIRST: We shall see to it that the best parts of the 1994 Republican “Contract With America” - parts the GOP betrayed, ignored and forgot - are finally implemented, both in letter and in spirit.  

     

    Among the good ideas the GOP betrayed are these:

     

       Require all laws that apply to the rest of the country also apply to Congress; 

       Arrange regular audits of Congress for waste or abuse;

       Limit the terms of all committee chairs and party leadership posts;

       Ban the casting of proxy votes in committee and law-writing by lobbyists;

       Require that committee meetings be open to the public;

       Guarantee honest accounting of our Federal Budget.

    …and in the same spirit…

       Members of Congress shall report openly all stock and other trades by members or their families, especially those trades which might be affected by the member’s inside knowledge.

     

    By finally implementing these good ideas – some of which originated with decent Republicans - we show our openness to learn and to reach out, re-establishing a spirit of optimistic bipartisanship with sincere members of the opposing party, hopefully ending an era of unwarranted and vicious political war.

     

    But restoring those broken promises will only be the beginning.

     

    SECOND: We shall establish rules in both House and Senate permanently allowing the minority party one hundred subpoenas per year, plus the time and staff needed to question their witnesses before open subcommittee hearings, ensuring that Congress will never again betray its Constitutional duty of investigation and oversight, even when the same party holds both Congress and the Executive.

     

    As a possibly better alternative – to be negotiated – we shall establish a permanent rule and tradition that each member of Congress will get one peremptory subpoena per year, plus adequate funding to compel a witness to appear and testify for up to five hours before a subcommittee of which she or he is a member. In this way, each member will be encouraged to investigate as a sovereign representative and not just as a party member.

     

    THIRD: While continuing ongoing public debate over the Senate’s practice of filibuster, we shall use our next majority in the Senate to restore the original practice: that senators invoking a filibuster must speak on the chamber floor the entire time. 

     

    FOURTH: We shall create the office of Inspector General of the United States, or IGUS, who will head the U.S. Inspectorate, a uniformed agency akin to the Public Health Service, charged with protecting the ethical and law-abiding health of government.  Henceforth, the inspectors-general in all government agencies, including military judge-advocates general (JAGs) will be appointed by and report to IGUS, instead of serving at the whim of the cabinet or other officers that they are supposed to inspect. IGUS will advise the President and Congress concerning potential breaches of the law. IGUS will provide protection for whistle-blowers and safety for officials refusing to obey unlawful orders. 

     

    In order to ensure independence, the Inspectorate shall be funded a decade in advance, through an operations account filled by Congress or by some other means. IGUS will be appointed to six-year terms by a 60% vote of a commission consisting of all past presidents and current state governors. IGUS will create a corps of trusted citizen observers, akin to grand juries, cleared to go anywhere and assure the American people that the government is still theirs, to own and control.

     

    FIFTH: Independent congressional advisory offices for science, technology and other areas of skilled, fact-based analysis will be restored in order to counsel Congress on matters of fact without bias or dogma-driven pressure. Rules shall ensure that technical reports may not be re-written by politicians, changing their meaning to bend to political desires. 


    Every member of Congress shall be encouraged and funded to appoint from their home district a science-and-fact advisor who may interrogate the advisory panels and/or answer questions of fact on the member’s behalf.

     

    SIXTH: New rules shall limit “pork” earmarking of tax dollars to benefit special interests or specific districts. Exceptions must come from a single pool, totaling no more than one half of a percent of the discretionary budget. These exceptions must be placed in clearly marked and severable portions of a bill, at least two weeks before the bill is voted upon.  Earmarks may not be inserted into conference reports. Further, limits shall be placed on no-bid, crony, or noncompetitive contracts, where the latter must have firm expiration dates.  Conflict of interest rules will be strengthened. 

     

    SEVENTH: Create an office that is tasked to translate and describe all legislation in easily understandable language, for public posting at least three days before any bill is voted upon, clearly tracking changes or insertions, so that the public (and even members of Congress) may know what is at stake.  This office may recommend division of any bill that inserts or combines unrelated or “stealth” provisions.

     

    EIGHTH: Return the legislative branch of government to the people, by finding a solution to the cheat of gerrymandering, which enables politicians to choose their voters instead of the other way around.  We shall encourage and insist that states do this in an evenhanded manner, either by using independent redistricting commissions or by minimizing overlap between state legislature districts and those for Congress.

     

    NINTH: Newly elected members of Congress with credentials from their states shall be sworn in by impartial clerks of either the House or Senate, without partisan bias, and at the new member’s convenience. The House may be called into session, with or without action by the Speaker, at any time that a petition is submitted to the Chief Clerk that was signed by 40% of the members. 

     

    TENTH: One time in any week, the losing side in a House vote may demand and get an immediate non-binding secret polling of the members who just took part in that vote, using technology to ensure reliable anonymity. While this secret ballot will be non-binding legislatively, the poll will reveal whether some members felt coerced or compelled to vote against their conscience. Members who refuse to be polled anonymously will be presumed to have been so compelled or coerced.

     

     

     

    II.  REFORM AMERICA

     

     Thereafter, within the first 100 days of the new Congress, we shall bring to the House Floor the following bills, each to be given full and open debate, each to be given a clear and fair vote and each to be immediately available for public inspection and scrutiny. 

     

     

    DB Note: The following proposed bills are my own particular priorities, chosen because I believe they are both vitally important and under-appreciated! (Indeed, some of them you’ll see nowhere else.) 

     

    Their common trait – until you get to #21 – is that they have some possibility of appealing to reasonable people across party lines… the “60%+ rule” that worked so persuasively in 1994.

     

    #21 will be a catch-all that includes a wide swathe of reforms sought by many Democrats – and, likely, by many of you – but may entail more dispute, facing strong opposition from the other major party. 

     

    In other words… as much as you may want the items in #21 – and I do too: most of them! – you are going to have to work hard for them separately from a ‘contract’ like this one, that aims to swiftly take advantage of 60%+ consensus, to get at least an initial tranche of major reforms done.

     

     

    1. THE SECURITY FOR AMERICA ACT will ensure that top priority goes to America’s military and security readiness, especially our nation's ability to respond to surprise threats, including natural disasters or other emergencies. FEMA and the CDC and other contingency agencies will be restored and enhanced, their agile effectiveness audited.

     

    When ordering a discretionary foreign intervention, the President must report probable effects on readiness, as well as the purposes, severity and likely duration of the intervention, along with credible evidence of need. 

     

    All previous Congressional approvals for foreign military intervention or declared states of urgency will be explicitly canceled, so that future force resolutions will be fresh and germane to each particular event, with explicit expiration dates. All Eighteenth or Nineteenth Century laws that might be used as excuses for Executive abuse will be explicitly repealed. 

     

    Reserves will be augmented and modernized. Reserves shall not be sent overseas without a Congressionally certified state of urgency, which must be renewed at six-month intervals. Any urgent federalization and deployment of National Guard or other troops to American cities, on the excuse of civil disorder, shall be supervised by a plenary of the nation’s state governors, who may veto any such deployment by a 40% vote or a signed declaration by twenty governors. 

     

    The Commander-in-Chief may not suspend any American law, or the rights of American citizens, without submitting the brief and temporary suspension to Congress for approval in session. 

     

    2. THE PROFESSIONALISM ACT will protect the apolitical independence of our intelligence agencies, the FBI, the scientific and technical staff in executive departments, and the United States Military Officer Corps.  All shall be given safe ways to report attempts at political coercion or meddling in their ability to give unbiased advice.  Whistle-blower protections will be strengthened within the U.S. government. 


    The federal Inspectorate will gather and empower all agency Inspectors General and Judges Advocate General under the independent and empowered Inspector General of the United States (IGUS).

     

    3. THE SECRECY ACT will ensure that the recent, skyrocketing use of secrecy – far exceeding anything seen during the Cold War - shall reverse course.  Independent commissions of trusted Americans shall approve, or set time limits to, all but the most sensitive classifications, which cannot exceed a certain number.  These commissions will include some members who are chosen (after clearance) from a random pool of common citizens.  Secrecy will not be used as a convenient way to evade accountability.

     

    4. THE SUSTAINABILITY ACT will make it America’s priority to pioneer technological paths toward energy independence, emphasizing economic health that also conserves both national and world resources.  Ambitious efficiency and conservation standards may be accompanied by compromise free market solutions that emphasize a wide variety of participants, with the goal of achieving more with less, while safeguarding the planet for our children.

     

    5. THE POLITICAL REFORM ACT will ensure that the nation’s elections take place in a manner that citizens can trust and verify.  Political interference in elections will be a federal crime.  Strong auditing procedures and transparency will be augmented by whistleblower protections.  New measures will distance government officials from lobbyists.  Campaign finance reform will reduce the influence of Big Money over politicians. The definition of a ‘corporation’ shall be clarified, so that corporations are neither ‘persons’ nor entitled to use money or other means to meddle in politics, nor to coerce their employees to act politically.

    Gerrymandering will be forbidden by national law. 

    The Voting Rights Act will be reinforced, overcoming all recent Court rationalizations to neuter it.

     

    6.  THE TAX REFORM ACT will simplify the tax code, while ensuring that everybody pays their fair share.  Floors for the Inheritance Tax and the Alternative Minimum Tax will be raised to ensure they only affect the truly wealthy, while loopholes used to evade those taxes will be closed. Modernization of the IRS and funding for auditors seeking illicitly hidden wealth shall be ensured by letting the IRS draw upon major penalties imposed by citizen juries. 

     

    All tax breaks for the wealthy will be suspended during time of war, so that the burdens of any conflict or emergency are shared by all.[1]

     

    7.  THE AMERICAN EXCELLENCE ACT will provide incentives for American students to excel at a range of important fields. This nation must especially maintain its leadership, by training more experts and innovators in science and technology.  Education must be a tool to help millions of students and adults adapt, to achieve and keep high-paying 21st Century jobs.

     

    8. THE HEALTHY CHILDREN ACT will provide basic coverage for all of the nation's children to receive preventive care and needed medical attention.  Whether or not adults should get insurance using market methods can be argued separately.


     But under this act, all U.S. citizens under the age of 25 shall immediately qualify as “seniors” under Medicare, an affordable step that will relieve the nation’s parents of stressful worry. A great nation should see to it that the young reach adulthood without being handicapped by preventable sickness.

     

    9. THE CYBER HYGIENE ACT: Adjusting liability laws for a new and perilous era, citizens and small companies whose computers are infested and used by ‘botnets’ to commit crimes shall be deemed immune from liability for resulting damages, provided that they download and operate a security program from one of a dozen companies that have been vetted and approved for effectiveness by the US Department of Commerce. Likewise, companies that release artificial intelligence programs shall face lessened liability if those programs persistently declare their provenance, artificiality and potential dangers. 

     

    10. THE TRUTH AND RECONCILIATION ACT:  Without interfering in the president's constitutional right to issue pardons for federal offenses, Congress will pass a law defining the pardon process, so that all persons who are excused for either convictions or possible crimes must at least explain those crimes, under oath, before an open congressional committee, before walking away from them with a presidential pass.  

    If the crime is not described in detail, then a pardon cannot apply to any excluded portion. Further, we shall issue a challenge that no president shall ever issue more pardons than both of the previous administrations, combined.


    If it is determined that a pardon was given as a quid pro quo for some bribe, emolument, gift or favor, then this act clarifies that such pardons are null and void. Moreover, this applies retroactively to any such pardons in the past.

     

    We will further reverse the current principle of federal supremacy in criminal cases that forbids states from prosecuting for the same crime. Instead, one state with grievance in a federal case may separately try the culprit for a state offense, which - upon conviction by jury - cannot be excused by presidential pardon.

     

    Congress shall act to limit the effect of Non-Disclosure Agreements (NDAs) that squelch public scrutiny of officials and the powerful. With arrangements to exchange truth for clemency, both current and future NDAs shall decay over a reasonable period of time. 

     

    Incentives such as clemency will draw victims of blackmail to come forward and expose their blackmailers.

     

    11. THE IMMUNITY LIMITATION ACT: The Supreme Court has ruled that presidents should be free to do their jobs without undue distraction by legal procedures and jeopardies. Taking that into account, we shall nevertheless – by legislation – firmly reject the artificial and made-up notion of blanket Presidential Immunity or that presidents are inherently above the law. 

     

    Instead, the Inspector General of the United States (IGUS) shall supervise legal cases that are brought against the president so that they may be handled by the president’s chosen counsel in order of importance or severity, in such a way that the sum of all such external legal matters will take up no more than ten hours a week of a president’s time. While this may slow such processes, the wheels of law will not be fully stopped. 

     

    Civil or criminal cases against a serving president may be brought to trial by a simple majority consent of both houses of Congress, though no criminal or civil punishment may be exacted until after the president leaves office, either by end-of-term or impeachment and Senate conviction. 

    In the event that Congress is thwarted from acting on impeachment or trial, e.g. by some crime that prevents certain members from voting, their proxies may be voted in such matters by their party caucus, until their states complete election of replacements.

     

    (Note: that last paragraph is a late addition, covering a scenario that was actually defended by one of Donald Trump’s own attorneys… that in theory a president might shoot enough members of Congress (or else enough Supreme Court justices) to evade impeachment and remain immune from prosecution or any other remedy.)

      

    12. THE FACT ACT will begin by restoring the media Rebuttal Rule, prying open "echo chamber" propaganda mills. Any channel, station, Internet podcast, or meme distributor that accepts advertising or reaches more than 10,000 followers will be required to offer five minutes per day during prime time and ten minutes at other times to reputable and vigorous adversaries. Until other methods are negotiated, each member of Congress shall get to choose one such vigorous adversary, ensuring that all perspectives may be involved. 

     

    The Fact Act will further fund experimental Fact-Challenges, where major public disagreements may be openly and systematically and reciprocally confronted with demands for specific evidence.

     

    The Fact Act will restore full funding and staffing to both the Congressional Office of Technology Assessment and the executive Office of Science and Technology Policy (OSTP). Every member of Congress shall be funded to hire a science and fact advisor from their home district, who may interrogate the advisory bodies – an advisor who may also answer questions of fact on the member’s behalf. 

     

    This bill further requires that the President must fill, by law, the position of White House Science Adviser from a diverse and bipartisan slate of qualified candidates offered by the Academy of Science. The Science Adviser shall have uninterrupted access to the President for at least two one-hour sessions per month.

     

    13. THE VOTER ID ACT: Under the 13th and 14th Amendments, this act requires that states mandating Voter ID requirements must offer substantial and effective compliance assistance, helping affected citizens to acquire their entitled legal ID and register to vote. 

     

    Any state that fails to provide such assistance, substantially reducing the fraction of eligible citizens turned away at the polls, shall be assumed in violation of equal protection and engaged in illegal voter suppression. If such compliance assistance has been vigorous and effective for ten years, then that state may institute requirements for Voter ID.      

         

    In all states, registration for citizens to vote shall be automatic with a driver’s license or passport or state-issued ID, unless the citizen opts-out.

     

    14. THE WYOMING RULE: Congress shall end the arrangement (under the Permanent Apportionment Act of 1929) that perpetually limits the House of Representatives to 435 members. Instead, it will institute the Wyoming Rule: the least-populated state gets one representative, and every other state is apportioned one representative for each full multiple of the smallest state's population. (For example, California, with roughly 68 times Wyoming's population as of the 2020 census, would get roughly 68 seats instead of its current 52.) The Senate’s inherent bias favoring small states should be enough. In the House, all citizens should get votes of equal value. https://thearp.org/blog/the-wyoming-rule/

     

    15:  IMMIGRATION REFORM: There are already proposed immigration law reforms on the table, worked out by sincere Democrats and sincere Republicans, back when the latter were a thing. These bipartisan reforms will be revisited, debated, updated and then brought to a vote. 

     

    In addition, if a foreign nation is among the top five sources of refugees seeking U.S. asylum from persecution in their homelands, then by law it shall be incumbent upon the political and social elites in that nation to help solve the problem, or else take responsibility for causing their citizens to flee. 

     

    Upon verification that their regime is among those top five, that nation’s elites will be billed, enforceably, for U.S. expenses in giving refuge to that nation’s citizens. Further, all trade and other advantages of said elites will be suspended and access to the United States banned, except for the purpose of negotiating ways that the U.S. can help in that nation’s rise to both liberty and prosperity, thus reducing refugee flows in the best possible way. 

     

    16: THE EXECUTIVE OFFICE MANAGER: By law we shall establish under IGUS (the Inspectorate) a civil service position of White House Manager, whose function is to supervise all non-political functions and staff. This would include the Executive Mansion’s physical structure and publicly-owned contents, but also policy-neutral services such as the switchboard, kitchens, Travel Office, medical office, and Secret Service protection details, since there are no justifications for the President or political staff to have whim authority over such apolitical employees. 

     

    With due allowance and leeway for needs of the Office of President, public property shall be accounted-for. The manager will allocate which portions of any trip expense should be deemed private and thereupon – above a basic allowance – shall be billed to the president or his/her party. 

    This office shall supervise annual physical and mental examination by external experts for all senior office holders including the President, Vice President, Cabinet members and leaders of Congress.

    Any group of twenty senators or House members or state governors may choose one periodical, network or other news source to get credentialed to the White House Press Pool, spreading inquiry across all party lines and ensuring that all rational points of view get access.

     

    17: EMOLUMENTS AND GIFTS ACT: Emoluments and gifts and other forms of valuable beneficence bestowed upon the president, or members of Congress, or judges, or their families or staffs, shall be more strictly defined and transparently controlled. All existing and future presidential libraries or museums or any kind of shrine shall strictly limit the holding, display or lending of gifts to, from, or by a president or ex-president, which shall instead be owned and held (except for facsimiles) by the Smithsonian and/or sold at public auction. 


    Donations by corporations or wealthy individuals to pet projects of a president or other members of government, including inauguration events, shall be presumed to be illegal bribery unless they are approved by a nonpartisan ethical commission.

     

    18: BUDGETS: If Congress fails to fulfill its budgetary obligations or to raise the debt ceiling, the result will not be a ‘government shutdown.’ Rather, all pay and benefits will cease going to any Senator or Representative whose annual income is above the national average, until appropriate legislation has passed, at which point only 50% of any backlog arrears may be made up. 

     

    19: THE RURAL AMERICA AND HOUSING ACT: Giant corporations and cartels are using predatory practices to unfairly corner, control or force out family farms and small rural businesses. We shall upgrade FDR-era laws that saved the American heartland for the people who live and work there, producing the nation’s food. Subsidies and price supports shall only go to family farms or co-ops. Monopolies in fertilizer, seeds and other supplies will be broken up and replaced by competition. Living and working and legal conditions for farm workers and food processing workers will be improved by steady public and private investments.

    Cartels that buy up America’s stock of homes and home-builders will be investigated for collusion to limit construction and/or drive up rents and home prices, and appropriate legislation will follow. 

     

    20: THE INTENT OF CONGRESS ACT: We shall pass an act preventing the Supreme Court from canceling laws based on contorted interpretations of Congressional will or intent. For example, the Civil Rights Bill shall not be interpreted as having “completed” the work assigned to it by Congress, when it clearly has not done so. In many cases, this act will either clarify Congressional purpose and intent or else amend certain laws to ensure that Congressional intent is crystal clear, removing that contorted rationalization. This will not interfere in Supreme Court decisions based on Constitutionality. But those interpreting Congressional intent should at least consult Congress itself.

     

    21: THE LIBERAL AGENDA: Okay. Your turn. Our turn. Beyond the 60% rule.

    ·        Protect women’s autonomy, credibility and command over their own bodies.

    ·      Ease housing costs: stop private corps buying up large tracts of homes, colluding on prices. (See #19.)

    ·      Help working families with child care and elder care.

    ·      Consumer protection: empower the Consumer Financial Protection Bureau.

    ·      At least allow student debt refinancing, which the GOP dastardly disallowed. 

    ·      Restore the postal savings bank for the un-banked.

    ·      Basic, efficient, universal background checks for gun purchases, with possible exceptions.

    ·      A national Election Day holiday, for those who actually vote.

    ·      Carefully revive the special prosecutor law. 

    ·      Expand and re-emphasize protections under the Civil Service Act.

    ·      Anti-trust breakup of monopoly/duopolies.


    ….AND SO ON… I do not leave those huge items as afterthoughts!  They are important. But they will entail huge political fights and restoration of the ability to legislate through negotiation and compromise (now explicitly forbidden in the Republican Congressional caucuses).

    Can we learn from the mistakes of Bill Clinton and Barack Obama, who each tried to shoot for the moon in the one Congressional session when they had a Congress, and hence failed to accomplish a thing?  In contrast, Joe Biden's session from 2021-22 was a miracle year, when Pelosi+Schumer+Bernie/Liz/AOC together pushed for the achievable... and succeeded!

    Do I wish they had also passed some of the 35+ proposals listed here?  Sure. We'd be in better shape, even if only by protecting the JAGs and IGs and such!


    Indeed, by going for the achievable, we might GAIN power to do the harder stuff.

     

    III.          Conclusion

     

     

    All right.  I know this proposal – that we do a major riff off of the 1994 Republican Contract with America – will garner one top complaint: We don't want to look like copycats!

     

    And yet, by satirizing that totally-betrayed “contract,” we poke GOP hypocrisy… while openly reaching out to the wing of conservatism that truly believed the promises, back in '94, perhaps winning some of them over by offering deliverable metrics to get it right this time…

     

    …while boldly outlining reasonable liberal measures that the nation desperately needs.

     

    I do not insist that the measures I posed -- in my rough draft "Democratic Deal" -- are the only ones possible! (Some might even seem crackpot… till you think them over.)  New proposals would be added or changed.  

     

    Still, this list seems reasonable enough to debate, refine, and possibly offer to focus groups. Test marketing (the way Gingrich did!) should tell us whether Americans would see this as "copycat"…

     

    ...or else a clever way to turn the tables, in an era when agility must be an attribute of political survival.


    ---------------------------------------------------------


    And then FOUR MORE - including several that seem especially needed, given the news!

    And after that, I will intermittently examine others, while responding to your comments and criticisms. (Please post them in the LATEST blog, so I will see them.)


    [1] Elites who send our sons and daughters to war, but not their own, will have to choose whether to keep their overseas adventures or their tax cuts.   This will elucidate a poorly known fact: that all previous generations of the rich were at least willing to tax themselves during times of urgency, to help pay for wars they would not fight.  This provision is not so much an anti-war measure as an anti-hypocrisy one… and hypocrisy is one of the most devastating grounds on which to attack another political side.

    David BrinAggressive Agility: Turn the GOP's Most Successful Political Ploy Against Them

    Here begins a three-parter that merits old-fashioned reading and contemplation, about how to fix the Democrats' greatest disadvantage. 

    Despite being far less corrupt and immensely more ethical, with a vast range of policies that poll far better among Americans... and despite Democratic administrations having universally better outcomes, re: economics, progress and even deficits... Democrats suffer one crucial disadvantage. When it comes to polemical/political tactics, they are absolute dunces.

    Hence, let's dissect the most aggressively successful tactical-political ploy of the last 40 years. And see what we can learn from it.


    (If you want to skip all the "Contract" stuff and get to the 35+ suggestions themselves, then click over here.)


      PONDERING AN UNUSUAL TACTIC FOR DEMOCRATS:

    ISSUE A "BETTER CONTRACT FOR AMERICA"

    or... A Newer Deal...

       

    by David Brin

     (1st version February 2006, revised October 2025)

     

     Today’s partisans – both Democrats and Republicans – will snort at the title of this proposal: to study one of the most successful political tactics of the modern era.


     If anyone remembers the "Republican Contract with America" at all, it’s viewed as a ploy by Newt Gingrich and colleagues to sway the 1994 mid-terms. 


    A Potemkin pretense at reform that served to cover their true agenda.


    It worked! At achieving Newt’s short-term goal – taking power in Congress. Though soon a radicalized GOP – some of them newly elected to Congress thanks to Gingrich’s tactic – would betray and eject him as Speaker of the House, swapping in Dennis Hastert, first in a long chain of perverted psychopaths.[1]


    They also cynically tossed every reform that Newt had promised.


     Today’s Democrats recall his “Contract” as both a painful defeat and flagrant hypocrisy. 

     To the scandal-ridden Republicans of 2025, it’s a hoary anecdote – relic of a bygone era, when they still felt compelled to at least feign serious intent. 


     Sure, parties often post platforms or lists of intent. Some of them made a difference in their day. FDR’s New Deal and LBJ’s Great Society, for example**. But none in recent memory had the clarity and instant political effects of the Gingrich ‘contract.’


     Hence, I propose that we study it for use – with both honest intent and ironic satire – by the other side! I’ll include at least thirty possibilities, any one of which might be political gold. 


     Though, alas, none of them is on the horizon of any Democratic politician.

     

    ---------------------

     

    THE THREE PARTS

     

    I.   A rumination:  Might Democrats help clarify their differences from the GOP with their own Newest… or Best Deal for the American People?

     

    II.  A compact copy of the 1994 “Republican Contract with America” appraising how every part was betrayed.  

     

    III.  A Draft “Democratic Newest Deal for the American People.”  

      

    ---------------------

     

    So, for now, let’s commence with Part One.

     

    I.           Might the Democratic Party help clarify its opposition to the gone-mad GOP, by reminding, comparing and contrasting to the “Contract with America”?

     

    Our generation’s hard task is to end phase nine of the US Civil War and restore sanity to American political life. Not just for liberals, Democrats and blue state moderates, but also for honest libertarians, greens, fiscal conservatives, Goldwater conservatives, constitutionalist conservatives, actual 'leftists' and anyone else who wants a nation run by grownups, instead of loony toddlers and criminals. 

    Alas, too many delight prematurely in the current President's falling poll numbers. Democrats may retake a chamber of Congress in 2026 or the presidency in 2028. (There are scenarios where turnover could happen earlier.[2]) But even those victories will remain sterile, unless we calm rifts of hatred that were ignited first by Hastert and Karl Rove, then more poisonously by the entire Fox-o-sphere.

     

    Many liberal activists foresee such a memetic victory "if only we refine our message," while shrugging off the hard work of studying and refining! Instead, far too many just double down on what did not work last time. Meanwhile the neoconservative movement – then its Trumpist heir – assiduously spent decades and billions reinventing themselves after defeats in 1964 and 1974 and 2008.

     

    Democrats may need to be just as inventive.

     

     

        == What the Gingrich Republicans did, and why they hope you forgot ==

     

    No current GOP leader would mention the words “Contract with America.” They recall the punishment that they implicitly accepted, if they betrayed their promises! And so, let’s remind the public of that!

     

    Specifically, there may be an opportunity to:


    1.   Learn from a clever methodology and message,

    2.   Spur public revulsion by highlighting betrayed GOP promises, 

    3.  Show sincerity by including some ideas from better versions of conservatism,

    4.  Crystallize a reinvigorated liberalism that might go down well with U.S. voters.

     

    Next time, I will append a truncated summary of Gingrich’s original “Contract with America,” which divides into three categories.[3]  


     * Good ideas that seemed reasonable then, because they were reasonable.  Promises the neocon-GOP quickly betrayed, and that later MAGA mutants would denounce as commie-Soros plotting! 


    Only, suppose Democrats offer honest conservatives a chance to do these good ideas right. Especially public accountability, e.g. by instituting measures like the Inspector General of the United States (IGUS), and permanent subpoena power for the Congressional minority. (See Part Three.)  


     * Conservative ideas that Democrats disagree with, but seemed at least sincere.  These, too, were mostly betrayed. Only we might now say that Democrats are willing to negotiate, if decent conservatives show the guts to step up with reason and good will. Starting by recanting Trumpism.


     * Dismal/horrid stuff. Endeavors aimed only at benefiting fat cats and aristocrats and thieves. Notably, some of these planks actually took effect. Any new Democratic “deal” would replace them with excellent liberal ideas.


    By adopting the good parts, and offering to negotiate some other conservative wants, we’re seen reaching out to millions of decent American conservatives who are uncomfortable with Trumpism, but who stay in the Foxite tent, fearing a strawman of intransigent “commie liberals.” Then, by replacing aristocracy-friendly planks with some that actually benefit our children, we emphasize basic differences that make Democrats the party of smart compassion. 


    Some will carp that this is copycat imitation! So, test it in focus groups! Will folks appreciate the aggressive irony? Rubbing GOP/MAGA noses into their own hypocrisy? [4] While clearly reaching out for accommodation with all sincere Americans. Go ahead. Glance at the ‘94 “Contract” (next posting).  I’ll be interested in which parts people deem worthy of adoption, modification, satire, or fierce repudiation.


    Above all, this is a test of your curiosity. Together let’s interrogate a brilliant maneuver that tricked millions of your fellow citizens. One of many that are still used today. Tricks that will never be defeated until we find the patience to study them.


    ONWARD TO PART 2!


    --------------------    ------------------------   --------------------


    [1] Soon after issuing the “contract” and leading the GOP to victory, Gingrich was jettisoned by his own party as Speaker of the House, because – despite fierce and sometimes fulminating partisanship – Newt did want to legislate! Which meant negotiation with Bill Clinton, achieving bipartisan marvels like the Budget Act and welfare reform. And that very bipartisanship was his undoing! His sin, according to the new GOP super-radicals.

    Look up Dennis Hastert, who replaced Newt G as Speaker, making Hastert titular head of their party, two heartbeats from the presidency! Hastert was later convicted and imprisoned for gross sexual predation on children. He also instituted the “Hastert Rule,” which bans any Republican office-holder from ever negotiating with Democrats, for any reason including the national interest, or even having friendships with them, ever again.

    [2] Before that? Well, it’s remotely possible. Say, if major revelations like Epstein kompromat were to stoke just twenty or so House and Senate Republicans to find the sanity, decency, patriotism and guts to secede from their party’s orgy of treason. It is theoretically possible they might work with Democrats to replace the current gang with some residually honorable Republicans, perhaps in the mold of Eisenhower, who would try to unite America and return its government to adult supervision.  One can dream.

    [3] For a detailed appraisal of how neoconservatives re-invented themselves, learning masterful techniques for attaining power over all three branches of government, see my prescient article from 2006: The Republican Party's Mutant Re-Invention: How they Accomplished it....and What Democrats Must Do In Order to Catch Up.

    [4] For example, the whole bizarre notion that America’s military readiness increased under Republican control merits scathing rebuttal!  We are less ready for an emergency now, under GOP scatterbrained shills who have dispersed even most of the officers charged with intel on terrorism threats(!) than we were before 9/11. This is an issue that could truly pry some conservatives away from the GOP!

     ** Both massive programs - the New Deal and Great Society - invested heavily in – and transformed – the poorest parts of the nation, which today suffer from ingratitude-amnesia, alas.

    365 TomorrowsHome

    Author: Thomas Henry Newell “Who?” They wondered. “Bring him?! Bring who?” Adam was the first to voice the thought. The others looked at him. The glowing orb continued to shine in throbs. “No no no,” said Jayce. “Breed him – that’s what it’s saying.” “It’s an invasion,” said Nige. Everyone always listened to Nige. “What? […]

    The post Home appeared first on 365tomorrows.

    Planet DebianJunichi Uekawa: I was wondering if there was some debian thread and noticed maybe something is broken in my mail setup.

    I was wondering if there was some debian thread and noticed maybe something is broken in my mail setup. The number of emails I am receiving seems to be very small.

    ,

    Cryptogram Chinese Surveillance and AI

    New report: “The Party’s AI: How China’s New AI Systems are Reshaping Human Rights.” From a summary article:

    China is already the world’s largest exporter of AI powered surveillance technology; new surveillance technologies and platforms developed in China are also not likely to simply stay there. By exposing the full scope of China’s AI driven control apparatus, this report presents clear, evidence based insights for policymakers, civil society, the media and technology companies seeking to counter the rise of AI enabled repression and human rights violations, and China’s growing efforts to project that repression beyond its borders.

    The report focuses on four areas where the CCP has expanded its use of advanced AI systems most rapidly between 2023 and 2025: multimodal censorship of politically sensitive images; AI’s integration into the criminal justice pipeline; the industrialisation of online information control; and the use of AI enabled platforms by Chinese companies operating abroad. Examined together, those cases show how new AI capabilities are being embedded across domains that strengthen the CCP’s ability to shape information, behaviour and economic outcomes at home and overseas.

    Because China’s AI ecosystem is evolving rapidly and unevenly across sectors, we have focused on domains where significant changes took place between 2023 and 2025, where new evidence became available, or where human rights risks accelerated. Those areas do not represent the full range of AI applications in China but are the most revealing of how the CCP is integrating AI technologies into its political control apparatus.

    News article.

    Cryptogram Building Trustworthy AI Agents

    The promise of personal AI assistants rests on a dangerous assumption: that we can trust systems we haven’t made trustworthy. We can’t. And today’s versions are failing us in predictable ways: pushing us to do things against our own best interests, gaslighting us with doubt about things we did or that we know, and being unable to distinguish between who we are and who we have been. They struggle with incomplete, inaccurate, and partial context: with no standard way to move toward accuracy, no mechanism to correct sources of error, and no accountability when wrong information leads to bad decisions.

    These aren’t edge cases. They’re the result of building AI systems without basic integrity controls. We’re in the third leg of data security—the old CIA triad. We’re good at availability and working on confidentiality, but we’ve never properly solved integrity. Now AI personalization has exposed the gap by accelerating the harms.

    The scope of the problem is large. A good AI assistant will need to be trained on everything we do and will need access to our most intimate personal interactions. This means an intimacy greater than your relationship with your email provider, your social media account, your cloud storage, or your phone. It requires an AI system that is both discreet and trustworthy when provided with that data. The system needs to be accurate and complete, but it also needs to be able to keep data private: to selectively disclose pieces of it when required, and to keep it secret otherwise. No current AI system is even close to meeting this.

    To further development along these lines, I and others have proposed separating users’ personal data stores from the AI systems that will use them. It makes sense; the engineering expertise that designs and develops AI systems is completely orthogonal to the security expertise that ensures the confidentiality and integrity of data. And by separating them, advances in security can proceed independently from advances in AI.

    What would this sort of personal data store look like? Confidentiality without integrity gives you access to wrong data. Availability without integrity gives you reliable access to corrupted data. Integrity enables the other two to be meaningful. Here are six requirements. They emerge from treating integrity as the organizing principle of security to make AI trustworthy.

    First, it would be broadly accessible as a data repository. We each want this data to include personal data about ourselves, as well as transaction data from our interactions. It would include data we create when interacting with others—emails, texts, social media posts—and revealed preference data as inferred by other systems. Some of it would be raw data, and some of it would be processed data: revealed preferences, conclusions inferred by other systems, maybe even raw weights in a personal LLM.

    Second, it would be broadly accessible as a source of data. This data would need to be made accessible to different LLM systems. This can’t be tied to a single AI model. Our AI future will include many different models—some of them chosen by us for particular tasks, and some thrust upon us by others. We would want the ability for any of those models to use our data.

    Third, it would need to be able to prove the accuracy of data. Imagine one of these systems being used to negotiate a bank loan, or participate in a first-round job interview with an AI recruiter. In these instances, the other party will want both relevant data and some sort of proof that the data are complete and accurate.

    Fourth, it would be under the user’s fine-grained control and audit. This is a deeply detailed personal dossier, and the user would need to have the final say in who could access it, what portions they could access, and under what circumstances. Users would need to be able to grant and revoke this access quickly and easily, and be able to go back in time and see who has accessed it.

    Fifth, it would be secure. The attacks against this system are numerous. There are the obvious read attacks, where an adversary attempts to learn a person’s data. And there are also write attacks, where adversaries add to or change a user’s data. Defending against both is critical; this all implies a complex and robust authentication system.

    Sixth, and finally, it must be easy to use. If we’re envisioning digital personal assistants for everybody, it can’t require specialized security training to use properly.
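
    Requirements four and five, taken together, amount to an access-control-plus-audit problem, and it helps to see how small the core of one could be. Below is a minimal sketch in Java of the grant/revoke/audit core of such a store. Everything here is hypothetical: the class and method names are illustrative rather than any real API, and a real system would still need the authentication, encryption, and integrity proofs that requirements three and five demand.

    import java.time.Instant;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Optional;
    import java.util.Set;

    // Hypothetical sketch of a personal data store with fine-grained,
    // revocable access grants and an append-only audit trail.
    public final class PersonalDataStore {
        private final Map<String, String> records = new HashMap<>();     // field -> value
        private final Map<String, Set<String>> grants = new HashMap<>(); // clientId -> readable fields
        private final List<String> auditLog = new ArrayList<>();         // append-only access history

        public void put(String field, String value) {
            records.put(field, value);
        }

        // The user grants a client read access to one named field only.
        public void grant(String clientId, String field) {
            grants.computeIfAbsent(clientId, c -> new HashSet<>()).add(field);
            auditLog.add(Instant.now() + " GRANT  " + clientId + " -> " + field);
        }

        // Revocation takes effect immediately; later reads are denied and logged.
        public void revoke(String clientId, String field) {
            Set<String> fields = grants.get(clientId);
            if (fields != null) fields.remove(field);
            auditLog.add(Instant.now() + " REVOKE " + clientId + " -> " + field);
        }

        // Every read attempt, allowed or denied, lands in the audit trail, so the
        // user can go back in time and see who has accessed what.
        public Optional<String> read(String clientId, String field) {
            boolean allowed = grants.getOrDefault(clientId, Set.of()).contains(field);
            auditLog.add(Instant.now() + (allowed ? " READ   " : " DENIED ") + clientId + " -> " + field);
            return allowed ? Optional.ofNullable(records.get(field)) : Optional.empty();
        }

        public List<String> audit() {
            return List.copyOf(auditLog); // callers get a snapshot, not the mutable log
        }
    }

    Even this toy version shows why integrity is the organizing principle: the grant table and the audit log are only useful if nobody, including the AI systems reading from the store, can rewrite them.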

    I’m not the first to suggest something like this. Researchers have proposed a “Human Context Protocol” (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5403981) that would serve as a neutral interface for personal data of this type. And in my capacity at a company called Inrupt, Inc., I have been working on an extension of Tim Berners-Lee’s Solid protocol for distributed data ownership.

    The engineering expertise to build AI systems is orthogonal to the security expertise needed to protect personal data. AI companies optimize for model performance, but data security requires cryptographic verification, access control, and auditable systems. Separating the two makes sense; you can’t ignore one or the other.

    Fortunately, decoupling personal data stores from AI systems means security can advance independently from performance (https://ieeexplore.ieee.org/document/10352412). When you own and control your data store with high integrity, AI can’t easily manipulate you because you see what data it’s using and can correct it. It can’t easily gaslight you because you control the authoritative record of your context. And you determine which historical data are relevant or obsolete. Making this all work is a challenge, but it’s the only way we can have trustworthy AI assistants.

    This essay was originally published in IEEE Security & Privacy.

    Worse Than FailureError'd: Anonymice

    Three blind anonymice are unbothered by the gathering dark as we approach the winter solstice. Those of you fortunate enough to be approaching the summer solstice are no doubt gloating. Feel free, we don't begrudge it. You'll get yours soon enough. Here we have some suggestions from a motley crew of three or four or maybe more or fewer.

    Mouse Number One is suffering an identity crisis, whimpering "I don't really know who I am anymore and I really hoped to have this information after modifying my profile."


    Mouse Number Twö müses „While Amazon is trying to upsell me their service, I am wondering how their localization infrastructure must be implemented to enable errors like \".“


    Mouse Number N is almost ready to square off with some back office programmer. "A very secure PIN on an obligatory wooden table."


    Mouse Number 502 has gone bad. "This could be a gateway to something better. I think I'll apply."


    Finally, an anon from some summer morn sent us this some time ago and it confused me so much I sat on it. I've never figured out what he was on about, so maybe you can explain it to me. Perhaps his snarky comment will be clueful? "When you don't know how to screenshot, print it out and scan it back in," he said.


    [Advertisement] Plan Your .NET 9 Migration with Confidence
    Your journey to .NET 9 is more than just one decision. Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

    365 TomorrowsMars Corp. Welcomes You

    Author: Emily Kinsey I sway side-to-side in the back of the beat-up van. My hands, which are zip-tied in front of me, went numb somewhere between Boston and Portland. I struggled to free myself in the beginning, but I gave up well before the snow began to fall. We’re restrained most of the day and […]

    The post Mars Corp. Welcomes You appeared first on 365tomorrows.

    Planet DebianFreexian Collaborators: Debian Contributions: Updates about DebConf Video Team Sprint, rebootstrap, SBOM tooling in Debian and more! (by Anupa Ann Joseph)

    Debian Contributions: 2025-11

    Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

    DebConf Video Team Sprint

    The DebConf Video Team records, streams, and publishes talks from DebConf and many miniDebConfs. A lot of the infrastructure development happens during setup for these events, but we also try to organize a sprint once a year to work on infrastructure, when there isn’t a DebConf about to happen. Stefano attended the sprint in Herefordshire this year and wrote up a report.

    rebootstrap, by Helmut Grohne

    A number of jobs were stuck in architecture-specific failures. gcc-15 and dpkg still occasionally disagree about whether PIE is enabled, and big-endian mipsen needed fixes in systemd. Beyond this, regular uploads of libxml2 and gcc-15 required fixes and rebasing of pending patches.

    Earlier, Loongson used rebootstrap to create the initial package set for loong64, and Miao Wang has now submitted their changes. As a result, there is now initial support for suites other than unstable and for use with derivatives.

    Building the support for Software Bill Of Materials tooling in Debian, by Santiago Ruano Rincón

    Vendors of Debian-based products may/should be paying attention to the evolving requirements of different jurisdictions (such as the CRA, or updates on CISA’s Minimum Elements for a Software Bill of Materials) that require making a Software Bill of Materials (SBOM) available for their products. It is important, then, to have tools in Debian that make it easier to produce such SBOMs.

    In this context, Santiago continued the work on packaging libraries related to SBOMs. This includes the packaging of the SPDX python library (python-spdx-tools), and its dependencies rdflib and mkdocs-include-markdown-plugin. System Package Data Exchange (SPDX), defined by ISO/IEC 5962:2021, is an open standard capable of representing systems with software components as SBOMs and other data and security references. SPDX and CycloneDX (whose python library python3-cyclonedx-lib was packaged by prior efforts this year) are the two main SBOM standards available today.
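
    To make the output format concrete, here is a sketch of what a minimal CycloneDX SBOM looks like for a single Debian package. The JSON is built as a plain Java text block so as not to assume any particular library API; the package name, version, and purl below are illustrative values, not output from the tooling discussed above.

    // Hypothetical sketch: a minimal CycloneDX 1.5 SBOM describing one Debian
    // package. Real tooling would derive these values from dpkg metadata.
    public final class MinimalSbom {
        public static void main(String[] args) {
            String sbom = """
                    {
                      "bomFormat": "CycloneDX",
                      "specVersion": "1.5",
                      "version": 1,
                      "components": [
                        {
                          "type": "application",
                          "name": "bash",
                          "version": "5.2.15-2",
                          "purl": "pkg:deb/debian/bash@5.2.15-2?arch=amd64"
                        }
                      ]
                    }""";
            System.out.println(sbom);
        }
    }

    An SPDX document carries the same kind of component inventory, plus the license and relationship fields defined by the ISO/IEC 5962:2021 standard.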

    Miscellaneous contributions

    • Carles improved po-debconf-manager: added automatic checking of bug report status via python-debianbts; changed the naming or output of some command-line options based on user feedback; finished refactoring user interaction to rich; made the codebase flake8-compliant; added type safety with mypy.
    • Carles, using po-debconf-manager, created 19 bug reports for translations where the merge requests were pending; reviewed and created merge requests for 4 packages.
    • Carles planned a second version of the tool that detects packages that Recommends or Suggests packages which are not in Debian. He is taking ideas from dumat.
    • Carles submitted a pull request to python-unidiff2 (adapted from the original pull request to python-unidiff). He also started preparing a qnetload update.
    • Stefano did miscellaneous python package updates: mkdocs-macros-plugin, python-confuse, python-pip, python-mitogen.
    • Stefano reviewed a beets upload for a new maintainer who is taking it over.
    • Stefano handled some debian.net infrastructure requests.
    • Stefano updated debian.social infrastructure for the “trixie” point release.
    • The update broke jitsi.debian.social; Stefano put some time into debugging it and eventually enlisted upstream assistance, which solved the problem!
    • Stefano worked on some patches for Python that help Debian:
      • GH-139914: The main HP PA-RISC support patch for 3.14.
      • GH-141930: We observed an unhelpful error when failing to write a .pyc file during package installation. We may have fixed the problem, and at least made the error better.
      • GH-141011: Ignore missing ifunc support on HP PA-RISC.
    • Stefano spun up a website for hamburg2026.mini.debconf.org.
    • Raphaël reviewed a merge request updating tracker.debian.org to rely on bootstrap version 5.
    • Emilio coordinated various transitions.
    • Helmut sent patches for 26 cross build failures.
    • Helmut officially handed over the cleanup of the /usr-move transition.
    • Helmut monitored the transition moving libcrypt-dev out of build-essential and bumped the remaining bugs to rc-severity in coordination with the release team.
    • Helmut updated the Build-Profiles patch for debian-policy incorporating feedback from Sean Whitton with a lot of help from Nattie Mayer-Hutchings and Freexian colleagues.
    • Helmut discovered that the way mmdebstrap deals with start-stop-daemon may result in broken output and sent a patch.
    • As a result of armel being removed from “sid”, but not from “forky”, the multiarch hinter broke. Helmut fixed it.
    • Helmut uploaded debvm, accepting a patch from Luca Boccassi to fix it for newer systemd.
    • Colin began preparing for the second stage of the OpenSSH GSS-API key exchange package split.
    • Colin caught and fixed a devscripts regression that broke part of Debusine.
    • Colin packaged django-pgtransaction and backported it to “trixie”, since it looks useful for Debusine.
    • Thorsten uploaded the packages lprng, cpdb-backend-cups, cpdb-libs and ippsample to fix some RC bugs as well as other bugs that accumulated over time. He also uploaded cups-filters to all Debian releases to fix three CVEs.

    ,

    Planet DebianDirk Eddelbuettel: #056: Running r-ci with R-devel

    Welcome to post 56 in the R4 series.

    The recent post #54 reviewed a number of earlier posts on r-ci, our small (but very versatile) runner for continuous integration (CI) with R. The post also introduced the notion of using a container in the ‘matrix’ of jobs defined and running in parallel. The initial motivation was the (still ongoing, and still puzzling) variation in run-times of GitHub Actions. So when running CI and relying on r2u for the ‘fast, easy, reliable: pick all three!’ provision of CRAN packages as Ubuntu binaries, a small amount of time is spent prepping a basic Ubuntu instance with the necessary setup. This can be as fast as maybe 20 to 30 seconds, but it can also stretch to almost two minutes when GitHub is busier or out of sorts for other reasons. When the CI job itself is short, that is a nuisance. We presented relying on a pre-made r2u4ci container that adds just a few commands to the standard r2u container to be complete for CI. And with that setup CI runs tend to be reliably faster.

    This situation is still evolving. I have not converted any of my existing CI scripts (apart from a test instance or two), but I keep monitoring the situation. However, this also offered another perspective: why not rely on a different container for a different CI aspect? When discussing the CI approach with Jeff the other day (and helping add CI to his mmap repo), it occurred to me we could also use one of the Rocker containers for R-devel. A minimal change to the underlying run.sh script later, this was accomplished. An example is provided as both a test and an illustration in the repo for package RcppInt64 in its script ci.yaml.

    That file runs both a standard Ubuntu setup (fourth entry) and the alternate just described relying on the container (first entry), along with the (usually commented-out) optional macOS setup (third entry). Its second line brings in the drd container from Rocker. The CI runner script now checks for a possible Rdevel binary as provided inside drd (along with its alias RD) and uses it when present. And that is all there is: no other change on the user side; tests now run under R-devel. You can see some of the initial runs in the rcppint64 repo actions log. Another example is now at Jeff’s mmap repo.

    It should be noted that this relies on R-devel running packages made with R-release. Every few years this breaks when R needs to break its binary API. If and when that happens, this option will be costlier as the R-devel instance will then have to (re-)install its R package dependencies. This can be accommodated easily as a step in the yaml file. And under ‘normal’ circumstances it is not needed.

    Having easy access to recent builds of R-devel (the container refreshes weekly on a schedule) with the convenience of r2u gives another option for package testing. I may continue to test locally with R-devel as my primary option, and most likely keep my CI small and lean (usually just one R-release run on Ubuntu), but having another option at GitHub Actions is also a good thing.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

    Cryptogram AIs Exploiting Smart Contracts

    I have long maintained that smart contracts are a dumb idea: that a human process is actually a security feature.

    Here’s some interesting research on training AIs to automatically exploit smart contracts:

    AI models are increasingly good at cyber tasks, as we’ve written about before. But what is the economic impact of these capabilities? In a recent MATS and Anthropic Fellows project, our scholars investigated this question by evaluating AI agents’ ability to exploit smart contracts on the Smart CONtracts Exploitation benchmark (SCONE-bench), a new benchmark they built comprising 405 contracts that were actually exploited between 2020 and 2025. On contracts exploited after the latest knowledge cutoffs (June 2025 for Opus 4.5 and March 2025 for other models), Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 developed exploits collectively worth $4.6 million, establishing a concrete lower bound for the economic harm these capabilities could enable. Going beyond retrospective analysis, we evaluated both Sonnet 4.5 and GPT-5 in simulation against 2,849 recently deployed contracts without any known vulnerabilities. Both agents uncovered two novel zero-day vulnerabilities and produced exploits worth $3,694, with GPT-5 doing so at an API cost of $3,476. This demonstrates as a proof-of-concept that profitable, real-world autonomous exploitation is technically feasible, a finding that underscores the need for proactive adoption of AI for defense.

    Worse Than FailureCodeSOD: Tis the Season(al Release)

    We recently asked for some of your holiday horror stories. We'll definitely take more, if you've got them, but we're going to start off with Jessica, who brings us not so much a horror as an omen.

    Jessica writes:

    I work for a company in the UK which writes legal software for law firms.

    This raises the question of what illegal software for law firms might look like, but I understand her meaning.

    In the UK, there is a system called "Legal aid", where law firms can give free legal services to people who otherwise couldn't afford it and get reimbursed from the government for their time. As one might imagine from such a system, there is a lot of bureaucracy and a lot of complexity.

    The core of the system is a collection of billing rate sheets, billing codes for the various kinds of services, and a pile of dense forms that need to be submitted. Every few months, something in that pile changes. Sometimes it's something small, like moving a form field to a different alignment, or one police station changed its rate sheet. Sometimes it's a wholesale recalibration of the entire system. Sometimes it's new forms, or altered forms, or forms getting dropped from the workflow entirely (a rare, but welcome event).

    The good news is that the governing body sends out plenty of notice about the changes before they go into effect. Usually a month, sometimes two, but it's enough time for Jessica's company to test the changes and update their software as needed.

    That's what Jessica is working on right now: taking the next batch of changes and preparing the software for the change, a change that's scheduled to deploy a month from now. It's plenty of work, but it's not a hair-on-fire crisis.

    Then, during a team meeting, her manager asked: "I haven't booked my holiday yet, and wanted to double check who is available to work over Christmas?"

    "Why would anyone need to work over Christmas?" one of the senior developers asked.

    Why? Well, one of the larger rate sheets was going to publish new changes on December 22nd, and the changes were expected to be rolled out to all clients on the same day.

    "It's just a data update," the manager said weakly. "What could go wrong?"

    Probably nothing, that was certainly true. But even just rolling out a change to payment rates was not a risk free endeavor. Sometimes the source data had corrections which needed to be rolled out with great haste, sometimes customers weren't prepared to handle the changed rates, sometimes there were processing pipelines which started throwing out weird bounds errors because something buried in the rate sheet caused a calculation to return absurd results. And sometimes the governing body said "it's just changes to rates," but then includes changes to forms along with it. There wasn't a single rate sheet update that didn't involve providing some degree of support, even if that support was just fielding questions from confused users who didn't expect the change.

    The point is that Jessica's team, and every other vendor supplying software to law firms in the UK, will be making a major production update three days before Christmas. And then providing support to all their customers through that Christmas window.

    The only good news? Jessica just started at this job. While the newbie is usually the person who gets stuck with the worst schedule, she's so new that she's not prepared to handle the support work alone, yet. So it's one of the senior devs who gets to work through the holiday this year.

    Jessica writes:

    Thank god it's not me this year!

    Oh, don't worry Jessica. There will be plenty more holidays next year.

    [Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

    365 TomorrowsA Drift of Reminiscence

    Author: Luca Ricchi Vernon Liu snapped awake as his pod shot up through the bunker hatch into the ashen dusk. ‘Navigation initiated. Destination: Xingjing Earth Federation Great Hall.’ He stretched his olive-hued arms – numb after many hours of induced coma – and squinted through the viewport: a barren wasteland with clumps of smoking ruins, interspersed […]

    The post A Drift of Reminiscence appeared first on 365tomorrows.

    ,

    Cryptogram Friday Squid Blogging: Petting a Squid

    Video from Reddit shows what could go wrong when you try to pet a—looks like a Humboldt—squid.

    As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

    Blog moderation policy.

    Worse Than FailureThe Modern Job Hunt: Part 2

    (Read Part 1 here)

    By the 10-month mark of her job search, Ellis still lacked full-time employment. But she had accumulated a pile of knowledge and advice that she wished she'd started with. She felt it was important to share, in hopes that even one person might save some time and sanity:

    Bell Trail (38321009314)

    • This is your new normal. Take time to grieve your loss and accept this change. Act and plan as if this situation were permanent. It isn't, of course, but it does you no good to think that surely you won't be at this long, you'll definitely have a job by such and such time, etc. Minimize your expenses now: instead of viewing it as deprivation, make a game out of creative frugality. Do whatever it takes to preserve your physical and mental health. Remember your inherent worth as a living being, and rest assured that this does not diminish it in any way. Know that thousands, if not millions, are in this boat with you: people with decades of experience, people fresh out of school, people with doctorates, they're all struggling. Some have been searching for years and have cast thousands of applications out there, to no avail. This isn't meant to scare or depress you. This is to properly set your expectations.
    • Take the time to decide what you REALLY want for the future. You might have to fight against a lot of panic or other tough emotions to do this, but it would help to consider your current assets, your full range of options, and your heart's desires first. What did you like/dislike about your past experience that might inform the sorts of things you would/wouldn't want in whatever comes next? Is there anything you've dreamed of doing? Is there any sort of work that calls to you, that you gladly would do even if you weren't paid for it? Are you thinking that maybe this might be the time to start your own business, go freelance, return to school, change careers, or retire? This may be a golden opportunity to pivot into something new and exciting.
    • Work your network. This is the cheat code, as most jobs are not obtained by people coming in cold. If a friend or coworker can give you a referral somewhere, you might get to skip a lot of hassle. As your job search lengthens, keep telling people that you're available.
    • Go back to basics. Don't assume that because you've job-hunted before, you know what you're doing with respect to resumes, cover letters, interviews, portfolios, LinkedIn, etc. AI has completely changed everything. If you can get help with this stuff, by all means do so. Before paying for anything, look for free career counseling and job leads offered by nonprofits or other agencies near you. Your library might offer career help and free courses through platforms like LinkedIn Learning. You can find tons of tutorials on YouTube for skills you may be lacking, and you can often audit college courses for free.
    • Ask for help. Get comfortable asking for whatever you may need. Most people want to help you, if they only knew how. Times like these are when you learn how wonderful people can be.
    • Streamline your search. Fake job postings are rampant. Avoid looking for or applying to jobs through LinkedIn. Check sites like Welcome to the Jungle, Jobgether, and Remote Rocketship for leads (feel free to share your own favorite lead-generators in the comments). Once you find a promising listing, go to the company's website and look for it there. Assuming you find it, save yourself some time by skipping straight down to the Qualifications list. Do you satisfy all or most of those? If not, move on. If so, read the rest of the listing to see if it's a good match for you. Apply directly from the company's website, making sure your resume contains their list of must-haves word-for-word. AI will be evaluating your application long before any human being touches it.
    • Beware scams. They are everywhere and take all forms. For instance, you may be tempted to apply to one of those AI-training jobs for side cash, but they will simply take your data and ghost you. Scammers also come at you by phone, email, and text. If it's unsolicited and/or too good to be true, it's probably fake. Always verify the source of every job-related communication.
    • If you make it to the interviewing stage, expect a gauntlet of at least four rounds to get through. Thanks, Google! If you're in need of a laugh, take an interview lesson from the all-time champion himself, George Costanza.
    • You will face rejection constantly. Even if you view rejection as a positive force in your life for growth, it's still hard to take sometimes. Whatever you feel is valid.
    • Ghosting is also normal. Even for those who've already been through several rounds of interviews, who felt like they really nailed it, or were even told to expect an offer. Prepare yourself.

    Even though Ellis had resolved to look more seriously into remaining freelance, she hadn't been able to help throwing resumes at full-time job postings whenever a promising one surfaced. After all, some income and benefits would really help while figuring out the freelance thing, right?

    Unfortunately, she got so caught up in this tech writing assignment, that interview, that her new adventure wasn't just relegated to the side; it was fully ejected from her consciousness. And for what? For companies that forgot all about her when she failed to meet all of their mysterious criteria. Poof. Hours of study and research up in smoke, hopes crushed.

    Clutter accumulated on her computer and around her normally neat house. Every time she looked at one of these objects out in the open, her brain spun off 14 new threads. I have to take that downstairs ... Oh! There's no room in that drawer, I'll have to clean it out first. Also gotta clean my eyeglasses while I'm there. No wait, I was gonna write that email! Oh wait, tomorrow, I'm going to the gym today. Lemme write this down. Where's my laptop?

    Along with stress came resentment and frustration from a sense of never accomplishing anything. Finally, Ellis forced herself to stop and pay attention. She'd gone seriously off-course. Her feelings were telling her that if she persisted in this job search, she'd be betraying some deep truth about herself. What was it, exactly?

    Being a storyteller, it helped her to consider her own tale. She realized that at the end of her life, she absolutely would not be satisfied saying, "Man, I'm glad I left all those software manuals to the world." With whatever time she had left, she wanted to center her gifts first and foremost, never again relegating them to the periphery. She wanted to leverage them to help others, find ways to build community, serve the world in ways that mattered deeply to her and aligned with her values. She wanted to further free herself from society's shoulds and have-tos.

    Her last full-time gig would've given her five weeks of vacation. During her job search, how many weeks of vacation had she given herself? Zero, aside from those forced by illness or injury.

    • Do better than Ellis. Give yourself regular sanity breaks. Take in sunlight and nature whenever possible. Do things that make your soul feel alive, that make you wonder where the time went. Laugh! Enjoy "funemployment."

    Ellis was blessed with financial savings that had carried her thus far. From Thanksgiving to New Year's, she resolved to give herself the gift of unplugged soul-searching. How did she want to live the rest of her life? How would she leave the world better than how she'd found it? These were the questions she would be asking herself.

    [Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

    365 TomorrowsDead Mall

    Author: Robert Gilchrist “I don’t think this is the way we’re supposed to go,” said Peter. “This is the Celestial Orienteering Championships, Pete,” said Johnson as he picked the last lock on the door. “They’re not gonna make it easy on us.” Tiny plumes of dust followed them inside. Peter took one last look at […]

    The post Dead Mall appeared first on 365tomorrows.

    ,

    Krebs on SecurityMicrosoft Patch Tuesday, December 2025 Edition

    Microsoft today pushed updates to fix at least 56 security flaws in its Windows operating systems and supported software. This final Patch Tuesday of 2025 tackles one zero-day bug that is already being exploited, as well as two publicly disclosed vulnerabilities.

    Despite releasing a lower-than-normal number of security updates these past few months, Microsoft patched a whopping 1,129 vulnerabilities in 2025, an 11.9% increase from 2024. According to Satnam Narang at Tenable, this year marks the second consecutive year that Microsoft patched over one thousand vulnerabilities, and the third time it has done so since its inception.

    The zero-day flaw patched today is CVE-2025-62221, a privilege escalation vulnerability affecting Windows 10 and later editions. The weakness resides in a component called the “Windows Cloud Files Mini Filter Driver” — a system driver that enables cloud applications to access file system functionalities.

    “This is particularly concerning, as the mini filter is integral to services like OneDrive, Google Drive, and iCloud, and remains a core Windows component, even if none of those apps were installed,” said Adam Barnett, lead software engineer at Rapid7.

    Only three of the flaws patched today earned Microsoft’s most-dire “critical” rating: Both CVE-2025-62554 and CVE-2025-62557 involve Microsoft Office, and both can be exploited merely by viewing a booby-trapped email message in the Preview Pane. Another critical bug — CVE-2025-62562 — involves Microsoft Outlook, although Redmond says the Preview Pane is not an attack vector with this one.

    But according to Microsoft, the vulnerabilities most likely to be exploited from this month’s patch batch are other (non-critical) privilege escalation bugs, including:

    CVE-2025-62458 — Win32k
    CVE-2025-62470 — Windows Common Log File System Driver
    CVE-2025-62472 — Windows Remote Access Connection Manager
    CVE-2025-59516 — Windows Storage VSP Driver
    CVE-2025-59517 — Windows Storage VSP Driver

    Kev Breen, senior director of threat research at Immersive, said privilege escalation flaws are observed in almost every incident involving host compromises.

    “We don’t know why Microsoft has marked these specifically as more likely, but the majority of these components have historically been exploited in the wild or have enough technical detail on previous CVEs that it would be easier for threat actors to weaponize these,” Breen said. “Either way, while not actively being exploited, these should be patched sooner rather than later.”

    One of the more interesting vulnerabilities patched this month is CVE-2025-64671, a remote code execution flaw in the GitHub Copilot plugin for JetBrains, an AI-based coding assistant from Microsoft and GitHub. Breen said this flaw would allow attackers to execute arbitrary code by tricking the large language model (LLM) into running commands that bypass the user’s “auto-approve” settings.

    CVE-2025-64671 is part of a broader, more systemic security crisis that security researcher Ari Marzuk has branded IDEsaster (IDE stands for “integrated development environment”), which encompasses more than 30 separate vulnerabilities reported in nearly a dozen market-leading AI coding platforms, including Cursor, Windsurf, Gemini CLI, and Claude Code.

    The other publicly disclosed vulnerability patched today is CVE-2025-54100, a remote code execution bug in Windows PowerShell on Windows Server 2008 and later that allows an unauthenticated attacker to run code in the security context of the user.

    For anyone seeking a more granular breakdown of the security updates Microsoft pushed today, check out the roundup at the SANS Internet Storm Center. As always, please leave a note in the comments if you experience problems applying any of this month’s Windows patches.

    Cryptogram Friday Squid Blogging: Giant Squid Eating a Diamondback Squid

    I have no context for this video—it’s from Reddit—but one of the commenters adds some context:

    Hey everyone, squid biologist here! Wanted to add some stuff you might find interesting.

    With so many people carrying around cameras, we’re getting more videos of giant squid at the surface than in previous decades. We’re also starting to notice a pattern, that around this time of year (peaking in January) we see a bunch of giant squid around Japan. We don’t know why this is happening. Maybe they gather around there to mate or something? who knows! but since so many people have cameras, those one-off monster-story encounters are now caught on video, like this one (which, btw, rips. This squid looks so healthy, it’s awesome).

    When we see big (giant or colossal) healthy squid like this, it’s often because a fisher caught something else (either another squid or sometimes an antarctic toothfish). The squid is attracted to whatever was caught and they hop on the hook and go along for the ride when the target species is reeled in. There are a few colossal squid sightings similar to this from the southern ocean (but fewer people are down there, so fewer cameras, fewer videos). On the original instagram video, a bunch of people are like “Put it back! Release him!” etc, but he’s just enjoying dinner (obviously as the squid swims away at the end).

    As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

    Blog moderation policy.

    Cryptogram FBI Warns of Fake Video Scams

    The FBI is warning of AI-assisted fake kidnapping scams:

    Criminal actors typically will contact their victims through text message claiming they have kidnapped their loved one and demand a ransom be paid for their release. Oftentimes, the criminal actor will express significant claims of violence towards the loved one if the ransom is not paid immediately. The criminal actor will then send what appears to be a genuine photo or video of the victim’s loved one, which upon close inspection often reveals inaccuracies when compared to confirmed photos of the loved one. Examples of these inaccuracies include missing tattoos or scars and inaccurate body proportions. Criminal actors will sometimes purposefully send these photos using timed message features to limit the amount of time victims have to analyze the images.

    Images, videos, audio: It can all be faked with AI. My guess is that this scam has a low probability of success, so criminals will be figuring out how to automate it.

    Worse Than FailureCodeSOD: The Article

    When writing software, we like our code to be clean, simple, and concise. But that loses something, you end up writing just some code, and not The Code. Mads's co-worker wanted to make his code more definite by using this variable naming convention:

    public static void addToListInMap(final Map theMap, final String theKey, final Object theValue) {
    	List theList = (List) theMap.get(theKey);
    	if (theList == null) {
    		theList = new ArrayList();
    		theMap.put(theKey, theList);
    	}
    	theList.add(theValue);
    }
    

    This Java code clearly eschews generic types, which is its own problem, and I also have to raise concerns about a map of lists; I don't know what that structure is for, but there's almost certainly a better way to do it.
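
    For contrast, here's a generics-friendly version of the same helper, a sketch of my own rather than anything from Mads's codebase, which collapses the whole dance into a single statement with computeIfAbsent:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    public final class MapHelpers {
        // Append a value to the list stored under key, creating the list on first use.
        public static <K, V> void addToListInMap(final Map<K, List<V>> map, final K key, final V value) {
            map.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
        }
    }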

    But of course, that's not why we're here. We're here to look at the variable names. This developer did this all the time, a bizarre version of Hungarian notation. Did the developer attend The Ohio State? (Since all jokes are funnier when you explain them, Ohio State insists on being referred to with the definite article, which sounds weird, and yes, that's not the weirdest thing about American Football, but it's weird).

    I worry about what happens when one function takes in two maps or two keys? theKey and theOtherKey? Or do they get demoted to aKey and anotherKey?

    But I am left wondering: what is theValue of this convention?

    [Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.

    365 TomorrowsWorkflow

    Author: Majoki “Get a job! You need to work!” “That’s all I ever do. Work.” “You sit around all day, consuming media and eating junk food. How’s that work?” “I’m dissipating heat energy. It’s vital work and my avowed purpose. It’s life’s true justification: to dissipate heat energy. Life is much more efficient at dispersing […]

    The post Workflow appeared first on 365tomorrows.

    Cryptogram AI vs. Human Drivers

    Two competing arguments are making the rounds. The first is by a neurosurgeon in the New York Times. In an op-ed that honestly sounds like it was paid for by Waymo, the author calls driverless cars a “public health breakthrough”:

    In medical research, there’s a practice of ending a study early when the results are too striking to ignore. We stop when there is unexpected harm. We also stop for overwhelming benefit, when a treatment is working so well that it would be unethical to continue giving anyone a placebo. When an intervention works this clearly, you change what you do.

    There’s a public health imperative to quickly expand the adoption of autonomous vehicles. More than 39,000 Americans died in motor vehicle crashes last year, more than homicide, plane crashes and natural disasters combined. Crashes are the No. 2 cause of death for children and young adults. But death is only part of the story. These crashes are also the leading cause of spinal cord injury. We surgeons see the aftermath of the 10,000 crash victims who come to emergency rooms every day.

    The other is a soon-to-be-published book: Driving Intelligence: The Green Book. The authors, a computer scientist and a management consultant with experience in the industry, make the opposite argument. Here’s one of the authors:

    There is something very disturbing going on around trials with autonomous vehicles worldwide, where, sadly, there have now been many deaths and injuries both to other road users and pedestrians. Although I am well aware that there is not, sensu stricto, a legal and functional parallel between a “drug trial” and “AV testing,” it seems odd to me that if a trial of a new drug had resulted in so many deaths, it would surely have been halted and major forensic investigations carried out and yet, AV manufacturers continue to test their products on public roads unabated.

    I am not convinced that it is good enough to argue from statistics that, to a greater or lesser degree, fatalities and injuries would have occurred anyway had the AVs had been replaced by human-driven cars: a pharmaceutical company, following death or injury, cannot simply sidestep regulations around the trial of, say, a new cancer drug, by arguing that, whilst the trial is underway, people would die from cancer anyway….

    Both arguments are compelling, and it’s going to be hard to figure out what public policy should be.

    This paper, from 2016, argues that we’re going to need other metrics than side-by-side comparisons: “Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability?”:

    Abstract: How safe are autonomous vehicles? The answer is critical for determining how autonomous vehicles may shape motor vehicle safety and public health, and for developing sound policies to govern their deployment. One proposed way to assess safety is to test drive autonomous vehicles in real traffic, observe their performance, and make statistical comparisons to human driver performance. This approach is logical, but is it practical? In this paper, we calculate the number of miles of driving that would be needed to provide clear statistical evidence of autonomous vehicle safety. Given that current traffic fatalities and injuries are rare events compared to vehicle miles traveled, we show that fully autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their reliability in terms of fatalities and injuries. Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles—an impossible proposition if the aim is to demonstrate their performance prior to releasing them on the roads for consumer use. These findings demonstrate that developers of this technology and third-party testers cannot simply drive their way to safety. Instead, they will need to develop innovative methods of demonstrating safety and reliability. And yet, the possibility remains that it will not be possible to establish with certainty the safety of autonomous vehicles. Uncertainty will remain. Therefore, it is imperative that autonomous vehicle regulations are adaptive—designed from the outset to evolve with the technology so that society can better harness the benefits and manage the risks of these rapidly evolving and potentially transformative technologies.
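
    The headline figure follows from simple Poisson arithmetic (a back-of-envelope version of the paper's own calculation, assuming fatalities arrive independently at a constant rate). With the US human benchmark of roughly $r \approx 1.09$ fatalities per 100 million miles, demonstrating with 95% confidence that an autonomous fleet is at least that safe, after observing zero fatalities, requires

    $$e^{-rm} \le 0.05 \quad\Longrightarrow\quad m \ge \frac{\ln 20}{r} \approx \frac{3.0}{1.09 \times 10^{-8}\ \text{per mile}} \approx 2.75 \times 10^{8}\ \text{miles},$$

    about 275 million failure-free miles, and far more if the fleet logs even a single fatality along the way.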

    One problem, of course, is that we treat death by human driver differently than we do death by autonomous computer driver. This is likely to change as we get more experience with AI accidents—and AI-caused deaths.

    ,

    Planet DebianIsoken Ibizugbe: Beginning My Outreachy Journey With Debian

    Hello, my name is Isoken, I’m a software engineer and product manager from Nigeria. I am excited and grateful to begin this journey as an Outreachy intern working on the project “Debian Images Testing with OpenQA”.

    I am particularly drawn to helping solve problems for people; it keeps bugging me till I can find a way to help. This interest in problem-solving and improving quality is what drew me to this project.

    OpenQA is an automated test tool that simulates a user’s interaction by looking at the screen and sending actions like mouse clicks and keyboard inputs. It takes screenshots and compares the image to known reference images to verify if the system is behaving correctly.
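
    As a toy illustration of that comparison step (my own sketch in Go; openQA itself is written in Perl and does fuzzier, region-based "needle" matching, so none of this is its real code), one could load a screenshot and a reference image, count matching pixels, and accept the screen only if the similarity clears a threshold:

    // Toy pixel-by-pixel comparison of a screenshot against a reference.
    // File names and the 0.95 threshold are illustrative assumptions, and
    // the screenshot is assumed to cover the reference image's bounds.
    package main
    
    import (
      "fmt"
      "image"
      "image/png"
      "log"
      "os"
    )
    
    func load(path string) image.Image {
      f, err := os.Open(path)
      if err != nil {
        log.Fatal(err)
      }
      defer f.Close()
      img, err := png.Decode(f)
      if err != nil {
        log.Fatal(err)
      }
      return img
    }
    
    func main() {
      screen := load("screenshot.png")
      ref := load("reference.png")
    
      b := ref.Bounds()
      match, total := 0, 0
      for y := b.Min.Y; y < b.Max.Y; y++ {
        for x := b.Min.X; x < b.Max.X; x++ {
          r1, g1, b1, _ := screen.At(x, y).RGBA()
          r2, g2, b2, _ := ref.At(x, y).RGBA()
          total++
          if r1 == r2 && g1 == g2 && b1 == b2 {
            match++
          }
        }
      }
      similarity := float64(match) / float64(total)
      fmt.Printf("similarity: %.3f\n", similarity)
      if similarity < 0.95 {
        log.Fatal("screen does not match the reference image")
      }
    }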

    During the four-week contribution phase, I got to know Debian, its different installation modes, and the different desktop environments available. At first I felt intimidated, as most of it was new to me, but the project’s setup material and docs made it easy, because the work was a gradual process: getting to know the program, registering the steps in my mind, taking notes of every step and its visuals, writing it up in a way another developer would understand (this is called detail level 3), and then translating that into test code at level 1.

    I contributed to improving the installation documents and a detail level 3 doc for a bug report on the system locale being incorrect. This strengthened my documentation skills and ability to adjust to the writing style of the project.

    I also started working on app start-stop tests for two desktop environments. I was able to explore, check the applications, and note their differences and similarities. It was really interesting to me; this was where I started writing test code at level 1, and my tests started passing. I will spend the next few weeks continuing this work, hopefully finding a way to unify the tests and make them easier to maintain later on.

    I am also grateful for the assistance from other candidates during the contribution stage, and for the privilege of having mentors Tassia Camoes Araujo, Roland Clobus, and Philip Hands to guide and correct me through the internship period. I will share my progress regularly here on the blog, and you can follow the progress of the work on the main repo.

    Wish me luck on the rest of this journey 😀

    Planet DebianThorsten Alteholz: My Debian Activities in November 2025

    Debian LTS/ELTS

    This was my hundred-thirty-seventh month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian and my eighty-eighth ELTS month. As the LTS- and ELTS-teams have been merged now, there is only one paragraph left for both activities.

    During my allocated time I uploaded or worked on:

    • [DLA 4381-1] net-snmp security update to fix two CVEs related to denial of service.
    • [DLA 4382-1] libsdl2 security update to fix one CVE related to a memory leak and a denial of service.
    • [DLA 4380-1] cups-filters security update to fix three CVEs related to out of bounds read or writes or a heap buffer overflow.
    • [ELA-1586-1] cups-filters security update to fix three CVEs in Buster and Stretch, related to out of bounds read or writes or a heap buffer overflow.
    • [libcupsfilters] upload to unstable to fix two CVEs
    • [cups-filters] upload to unstable to fix three CVEs
    • [cups] upload to unstable to fix two CVEs
    • [rlottie] upload to unstable to finally fix three CVEs
    • [rplay] upload to unstable to finally fix one CVE
    • [#1121342] trixie-pu bug for libcupsfilters to fix two CVEs in Trixie.
    • [#1121391] trixie-pu bug for cups-filter to fix three CVEs in Trixie.
    • [#1121392] bookworm-pu bug for cups-filter to fix three CVEs in Bookworm.
    • [#112433] trixie-pu bug for rlottie to finally fix three CVEs in Trixie.
    • [#112437] bookworm-pu bug for rlottie to finally fix three CVEs in Bookworm.

    I also attended the monthly LTS/ELTS meeting and did a week of LTS/ELTS frontdesk duties. I also stumbled upon a bug in python3-paramiko, where the parsing of include statements in the ssh_config does not work. Rather annoying, but it is already fixed in the newest version, which only needs to find its way to my old VM.

    Debian Printing

    This month I uploaded a new upstream version or a bugfix version of:

    I also uploaded cups to Trixie, to fix bug #1109471 related to a configuration problem with the admin panel.

    This work is generously funded by Freexian!

    Debian Astro

    This month I uploaded a new upstream version or a bugfix version of:

    • siril to unstable (sponsored upload).
    • supernovas to unstable (sponsored upload).

    Debian IoT

    This month I uploaded a new upstream version or a bugfix version of:

    Debian Mobcom

    This month I uploaded a new upstream version or a bugfix version of:

    misc

    This month I uploaded a new upstream version or a bugfix version of:

    In my fight against outdated RFPs, I closed 30 of them in November.

    I started with about 3500 open RFP bugs, and after working six months on this project, I have closed 183 bugs. Of course new bugs appeared, so the overall number of bugs is only down to about 3360.

    Though I view this as a successful project, I also have to admit that it is a bit boring to work on daily. Therefore I close this diary again and will add the closed RFP bugs to my bug logbook now. I will also try to close some of these bugs by actually uploading the software, probably one package per month.

    FTP master

    This month I accepted 236 and rejected 16 packages. The overall number of packages that got accepted was 247.

    Planet DebianHellen Chemtai: Debian Images Testing with OpenQA Outreachy Internship

    Hello there. I am a software developer and tester. Some of my interests include bash scripting, full-stack website development, and open source software contribution.

    Outreachy is dedicated to the open source community. The OpenQA community had exactly the right project I wanted to contribute to; it was a really great match for my interests and skills. The project is Debian Images Testing with OpenQA.

    The contribution period was very intense. It was a learning phase at first. I made adjustments to my computer to ensure it would handle the tasks. I also had many trials, failures, and problems to solve, and there were a lot of questions asked. My mentors were really helpful.

    What worked for me in the end was:

    1. Communicating in the social network with fellow contributors and helping out whenever they got stuck.
    2. Writing down the small steps I took during the contribution period, from dual booting to every error I encountered and the way I solved it, in a Google Docs document.
    3. Those small steps and the document were then added to the list of contributions I made.
    4. I also wrote and edited the main application in small phases and was detailed about my experiences.
    5. Last but not least, I worked on my first task: speech testing and capturing all audio. I got a lot of help from the mentors throughout the process.

    Every week is a learning phase for me. I encounter new issues; for example, my latest issue was connecting the virtual machine to a new Wi-Fi network. It took a whole day, but I eventually found a solution. I regularly share my issues and write up the solutions so that they will be helpful to anyone in the future.

    By the end of the internship period, I hope to have contributed to the Debian OpenQA open source community by working on the tasks and working with the broader openSUSE community on any issues. I want to build a network with my mentors, Philip, Tassia, Roland, and other mentors in the community, in order to create future opportunities for contributions, mentoring, and just general communication.

    Worse Than FailureCodeSOD: The Magic Array

    Betsy writes:

    I found this snippet recently in a 20-year-old RPG program.

    Ah, yes, twenty years ago, RPG, that means this was written in the 1970s. What? No. That can't be right? That's how long ago?

    Joking about my mortality aside, in the early oughts, most of the work around RPG was in keeping old mainframe systems from falling over. That entirely new code was being written, and that new projects were being started, twenty years ago is not a surprise, but it's unusual enough to be remarkable. That said, the last release of RPG was in 2020, so it clearly keeps on keeping on.

    In any case, this developer, we'll call them "Stephen", needed to create an array containing the numbers 12 through 16.

    Let's take a peek at the code.

         D RowFld          S              3  0 DIM(5) 
         D X               S              3  0
         D Y               S              3  0
    
         C                   EVAL      X = 12
         C                   FOR       Y = 1 TO %Elem(RowFld)
         C                   EVAL      RowFld(y) = X
         C                   EVAL      X = X + 1
         C                   ENDFOR   
    

    The first three lines create some variables: RowFld, which is an array containing 5 elements, and will hold our offsets. X and Y are going to hold our numeric values.

    We set X equal to 12, then we start a for loop from 1 to the length of our RowFld. We set the element at that index equal to X, then increment X.

    The code is awkward, but is not exactly the WTF here. This particular program displays a file and a subfile, and these values are used to position the cursor inside that subfile. The array is never iterated over, the array is never modified, the array would 100% be better managed as a set of constants, if you didn't want to have magic numbers littering your code. More than that, the location of the subfile on the screen has never changed. And let's be fair, this didn't get rid of magic numbers, it just made them one through five, instead of 12 through 16, as the indexes in the array are just as arbitrary.

    In other words, there's no point to this. Even if the specific version of RPG didn't have constants, variables that you handle like constants would be fine (my checks on the documentation seem to imply that CONST first appeared in version RPG IV 7.2, which makes it look like circa 2016).

    But there's one more bit of weirdness here. Stephen had several years of experience with RPG, and all of that experience was from the "free-format" era of RPG. You see, way back in 2001, RPG finally freed itself from its dependency on punchcards, and started allowing you to write code as just strings of text, without requiring certain things to exist in certain columns. This was a generally positive enhancement, and Betsy's team immediately adopted it, as did everyone running the latest versions of RPG. All new development was done using the "free-format" style, so they could write code like normal people. They even had a conversion tool which would do some simple string manipulation to convert legacy RPG programs into the modern style, and had basically abandoned the legacy style without looking back.

    Except for Stephen, who insisted on the column oriented format. Who protested when anyone tried to modify their code to modernize it at all. "Oh, we used free-format at my last job," Stephen said when pressed, "but it's confusing and columns are just cleaner and more readable."

    Eventually, someone else wrote a program that absorbed all the functionality in Stephen's program. Stephen kept plugging away at it for a few years afterwards, because a handful of users also refused to migrate to the new tool. But eventually they left the company for one reason or another, and Stephen found himself without users for his work, and left with them.

    [Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

    365 TomorrowsThe Ninth Hero

    Author: Julian Miles, Staff Writer The two women stand within a wide, white circle. The ground under their feet is powdery. Stalks of bleached grass crumble at the slightest disturbance. Vicki’s unimpressed. “Is this all?” Sharon shakes her head. “This is what the public can see. Underneath us was the main facility. Everything for Project […]

    The post The Ninth Hero appeared first on 365tomorrows.

    Cryptogram Substitution Cipher Based on The Voynich Manuscript

    Here’s a fun paper: “The Naibbe cipher: a substitution cipher that encrypts Latin and Italian as Voynich Manuscript-like ciphertext“:

    Abstract: In this article, I investigate the hypothesis that the Voynich Manuscript (MS 408, Yale University Beinecke Library) is compatible with being a ciphertext by attempting to develop a historically plausible cipher that can replicate the manuscript’s unusual properties. The resulting cipher—a verbose homophonic substitution cipher I call the Naibbe cipher—can be done entirely by hand with 15th-century materials, and when it encrypts a wide range of Latin and Italian plaintexts, the resulting ciphertexts remain fully decipherable and also reliably reproduce many key statistical properties of the Voynich Manuscript at once. My results suggest that the so-called “ciphertext hypothesis” for the Voynich Manuscript remains viable, while also placing constraints on plausible substitution cipher structures.

    Planet DebianFrançois Marier: Learning a new programming language with an LLM

    I started learning Go this year. First, I picked a Perl project I wanted to rewrite, got a good book and ignored AI tools since I thought they would do nothing but interfere with learning. Eventually though, I decided to experiment a bit and ended up finding a few ways to use AI assistants effectively even when learning something new.

    Searching more efficiently

    The first use case that worked for me was search. Instead of searching on a traditional search engine and then ending up on Stack Overflow, I could get the answer I was looking for directly in an AI side-window in my editor. Of course, that's bad news for Stack Overflow.

    I was however skeptical from the beginning since LLMs make mistakes, sometimes making up function signatures or APIs that don't exist. Therefore I got into the habit of going to the official standard library documentation to double-check suggestions. For example, if the LLM suggests using strings.SplitN, I verify the function signature and behaviour carefully before using it. Basically, "don't trust and do verify."
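
    A minimal check of that kind might look like this (my example, not from the project mentioned above):

    // Sanity-check a suggested API before trusting it: strings.SplitN
    // does exist in the standard library, and its n argument caps the
    // number of substrings returned.
    package main
    
    import (
      "fmt"
      "strings"
    )
    
    func main() {
      // n = 2: split at the first separator only.
      fmt.Println(strings.SplitN("key=value=with=equals", "=", 2))
      // Output: [key value=with=equals]
    }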

    I stuck to the standard library in my project, but if an LLM recommends third-party dependencies for you, make sure they exist and that Socket doesn't flag them as malicious. Research has found that 5-20% of packages suggested by LLMs don't actually exist, making this a real attack vector (dubbed "slopsquatting").

    Autocomplete is too distracting

    A step I took early on was to disable AI autocomplete in my editor. When learning a new language, you need to develop muscle memory for the syntax. Also, Go is no Java. There's not that much boilerplate to write in general.

    I found it quite distracting to see some almost correct code replace my thinking about the next step. I can see how one could go faster with these suggestions, but being a developer is not just about cranking out lines of code as fast as possible, it's also about constantly learning new things (and retaining them).

    Asking about idiomatic code

    One of the most useful prompts when learning a new language is "Is this the most idiomatic way to do this in Go?". Large language models are good at recognizing patterns and can point out when you're writing code that works but doesn't follow the conventions of the language. This is especially valuable early on when you don't yet have a feel for what "good" code looks like in that language.

    It's usually pretty easy (at least for an experienced developer) to tell when the LLM suggestion is actually counterproductive or wrong. If it increases complexity or is harder to read or decode, it's probably not a good idea to follow it.

    Reviews

    One way a new dev gets better is through code review. If you have access to a friend who's an expert in the language you're learning, then you can definitely gain a lot by asking for feedback on your code.

    If you don't have access to such a valuable resource, or as a first step before you consult your friend, I found that AI-assisted code reviews can be useful:

    1. Get the model to write the review prompt for you. Describe what you want reviewed and let it generate a detailed prompt.
    2. Feed that prompt to multiple models. They each have different answers and will detect different problems.
    3. Be prepared to ignore 50% of what they recommend. Some suggestions will be stylistic preferences, others will be wrong, or irrelevant.

    The value is in the other 50%: the suggestions that make you think about your code differently or catch genuine problems.

    Similarly for security reviews:

    • A lot of what they flag will need to be ignored (false positives, or things that don't apply to your threat model).
    • Some of it may highlight areas for improvement that you hadn't considered.
    • Occasionally, they will point out real vulnerabilities.

    But always keep in mind that AI chatbots are trained to be people-pleasers and often feel the need to suggest something when nothing was needed.

    An unexpected benefit

    One side effect of using AI assistants was that having them write the scaffolding for unit tests motivated me to increase my code coverage. Trimming unnecessary test cases and adding missing ones is pretty quick when the grunt work is already done, and I ended up testing more of my code (being a personal project written in my own time) than I might have otherwise.

    Learning

    In the end, I continue to believe in the value of learning from quality books (I find reading paper-based most effective). In addition, I like to create Anki questions for common mistakes or things I find I have to look up often. Remembering something will always be faster than asking an AI tool.

    So my experience this year tells me that LLMs can supplement traditional, time-tested learning techniques, but I don't believe they obsolete them.

    P.S. I experimented with getting an LLM to ghost-write this post for me from an outline (+ a detailed style guide) and I ended up having to rewrite at least 75% of it. It was largely a waste of time.

    Planet DebianFreexian Collaborators: Debian's /usr-move transition has been completed (by Helmut Grohne)

    By now, the /usr-merge is an old transition. Effectively, it turns top-level directories such as /bin into symbolic links pointing below /usr. That way the entire operating system can be contained below the /usr hierarchy enabling e.g. image based update mechanisms. It was first supported in Debian 9, which is no longer in active use at this point (except for users of Freexian’s ELTS offer). When it became mandatory in Debian 12, it wasn’t really done though, because Debian’s package manager was not prepared to handle file system objects being referred to via two different paths. With nobody interested in handling the resulting issues, Freexian stepped in and funded a project led by Helmut Grohne to resolve the remaining issues.

    While the initial idea was to enhance the package manager, Debian’s members disagreed. They preferred an approach where files were simply tracked with their physical location while handling the resulting misbehavior of the package manager using package-specific workarounds. This has been recorded in the DEP17 document. During the Debian 13 release cycle, the plan has been implemented. A tool for detecting possible problems was developed specifically for this transition. Since all files are now tracked with their physical location and necessary workarounds have been added, problematic behavior is no longer triggered. An upgrade from Debian 12 to Debian 13 is unlikely to run into aliasing problems as a result.

    This whole project probably consumed more than 1500 hours of work from Debian contributors, of which 700 were sponsored by Freexian through the work of Helmut Grohne. What remains is eventually removing the workarounds.

    ,

    Planet DebianVincent Bernat: Compressing embedded files in Go

    Go’s embed feature lets you bundle static assets into an executable, but it stores them uncompressed. This wastes space: a web interface with documentation can bloat your binary by dozens of megabytes. A proposal to optionally enable compression was declined because it is difficult to handle all use cases. One solution? Put all the assets into a ZIP archive! 🗜

    Code

    The Go standard library includes a module to read and write ZIP archives. It contains a function that turns a ZIP archive into an io/fs.FS structure that can replace embed.FS in most contexts.1

    package embed
    
    import (
      "archive/zip"
      "bytes"
      _ "embed"
      "fmt"
      "io/fs"
      "sync"
    )
    
    //go:embed data/embed.zip
    var embeddedZip []byte
    
    var dataOnce = sync.OnceValue(func() *zip.Reader {
      r, err := zip.NewReader(bytes.NewReader(embeddedZip), int64(len(embeddedZip)))
      if err != nil {
        panic(fmt.Sprintf("cannot read embedded archive: %s", err))
      }
      return r
    })
    
    func Data() fs.FS {
      return dataOnce()
    }
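
    Because Data() returns a plain io/fs.FS, callers can use the io/fs helpers directly. Here is a hypothetical consumer (the module path and asset name below are made up for illustration):

    package main
    
    import (
      "fmt"
      "io/fs"
      "log"
    
      "example.com/app/common/embed" // hypothetical module path
    )
    
    func main() {
      // fs.ReadFile falls back to Open plus a full read when the
      // filesystem does not implement fs.ReadFileFS, so it also works
      // with the zip-backed FS (see footnote 1).
      data, err := fs.ReadFile(embed.Data(), "docs/index.html") // assumed asset path
      if err != nil {
        log.Fatal(err)
      }
      fmt.Printf("read %d bytes\n", len(data))
    }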
    

    We can build the embed.zip archive with a rule in a Makefile. We specify the files to embed as dependencies to ensure changes are detected.

    common/embed/data/embed.zip: console/data/frontend console/data/docs
    common/embed/data/embed.zip: orchestrator/clickhouse/data/protocols.csv 
    common/embed/data/embed.zip: orchestrator/clickhouse/data/icmp.csv
    common/embed/data/embed.zip: orchestrator/clickhouse/data/asns.csv
    common/embed/data/embed.zip:
        mkdir -p common/embed/data && zip --quiet --recurse-paths --filesync $@ $^
    

    The automatic variable $@ is the rule target, while $^ expands to all the dependencies, modified or not.

    Space gain

    Akvorado, a flow collector written in Go, embeds several static assets:

    • CSV files to translate port numbers, protocols or AS numbers, and
    • HTML, CSS, JS, and image files for the web interface, and
    • the documentation.
    Breakdown of the space used by each component before (left) and after (right) the introduction of embed.zip, displayed as a treemap; many small embedded files are replaced by a single, bigger archive.

    Embedding these assets into a ZIP archive reduced the size of the Akvorado executable by more than 4 MiB:

    $ unzip -p common/embed/data/embed.zip | wc -c | numfmt --to=iec
    7.3M
    $ ll common/embed/data/embed.zip
    -rw-r--r-- 1 bernat users 2.9M Dec  7 17:17 common/embed/data/embed.zip
    

    Performance loss

    Reading from a compressed archive is not as fast as reading a flat file. A simple benchmark shows it is more than 4× slower. It also allocates some memory.2

    goos: linux
    goarch: amd64
    pkg: akvorado/common/embed
    cpu: AMD Ryzen 5 5600X 6-Core Processor
    BenchmarkData/compressed-12     2262   526553 ns/op   610 B/op   10 allocs/op
    BenchmarkData/uncompressed-12   9482   123175 ns/op     0 B/op    0 allocs/op
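
    A benchmark of this shape takes only a few lines of testing code. Here is my sketch (not necessarily Akvorado's actual benchmark; the asset path is an assumption):

    package embed
    
    import (
      "io"
      "io/fs"
      "testing"
    )
    
    // benchRead opens one asset from the given filesystem and reads it
    // to the end, once per benchmark iteration.
    func benchRead(b *testing.B, fsys fs.FS, name string) {
      for i := 0; i < b.N; i++ {
        f, err := fsys.Open(name)
        if err != nil {
          b.Fatal(err)
        }
        if _, err := io.Copy(io.Discard, f); err != nil {
          b.Fatal(err)
        }
        f.Close()
      }
    }
    
    func BenchmarkData(b *testing.B) {
      b.Run("compressed", func(b *testing.B) {
        benchRead(b, Data(), "data/asns.csv") // assumed asset path
      })
    }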
    

    Each access to an asset requires a decompression step, as seen in this flame graph:

    CPU flame graph comparing the time spent on CPU when reading data from embed.zip (left) versus reading data directly (right). Because the Go testing framework executes the benchmark for uncompressed data 4 times more often, it uses the same horizontal space as the benchmark for compressed data.

    While a ZIP archive has an index to quickly find the requested file, seeking inside a compressed file is currently not possible.3 Therefore, the files from a compressed archive do not implement the io.ReaderAt or io.Seeker interfaces, unlike directly embedded files. This prevents some features, like serving partial files or detecting MIME types when serving files over HTTP.
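
    For example, net/http's ServeContent, which implements Range requests and content-type sniffing, requires an io.ReadSeeker. A possible workaround (my sketch, with an obvious memory cost) is to buffer the asset before serving it:

    package assets
    
    import (
      "bytes"
      "io"
      "io/fs"
      "net/http"
      "time"
    )
    
    // serveAsset buffers a file from the non-seekable zip-backed FS so
    // that http.ServeContent can satisfy Range requests and sniff the
    // MIME type. The whole file is read into memory for each request.
    func serveAsset(fsys fs.FS, name string) http.HandlerFunc {
      return func(w http.ResponseWriter, r *http.Request) {
        f, err := fsys.Open(name)
        if err != nil {
          http.NotFound(w, r)
          return
        }
        defer f.Close()
        data, err := io.ReadAll(f)
        if err != nil {
          http.Error(w, err.Error(), http.StatusInternalServerError)
          return
        }
        http.ServeContent(w, r, name, time.Time{}, bytes.NewReader(data))
      }
    }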


    For Akvorado, this is an acceptable compromise to save a few mebibytes from an executable of almost 100 MiB. Next week, I will continue this futile adventure by explaining how I prevented Go from disabling dead code elimination! 🦥


    1. You can safely read multiple files concurrently. However, it does not implement ReadDir() and ReadFile() methods. ↩︎

    2. You could keep frequently accessed assets in memory. This reduces CPU usage and trades cached memory for resident memory. ↩︎

    3. SOZip is a profile that enables fast random access in a compressed file. However, Go’s archive/zip module does not support it. ↩︎

    Planet DebianIustin Pop: Yes, still alive!

    Yeah, again three months have passed since my last (trivial) post, and I really don’t know where the time has flown.

    I suppose the biggest problem was the long summer vacation, which threw me off-track, and then craziness started. Work work work, no time for anything, which kept me fully busy in August, and then “you should travel”.

    So mid-September I went on my first business trip since Covid, again to Kirkland, which in itself was awesome. Flew out Sunday, and as I was concerned I was going to lose too much fitness—had a half-marathon planned on the weekend after the return—I ran every morning of the four days I was there. And of course, on the last day, I woke up even earlier (05:30 AM), went out to run before sunrise, intending to do a very simple “run along the road that borders the lake for 2.5K, then back”. And right at the farthest point, a hundred metres before my goal of turning around, I tripped, started falling, and as I was falling, I hit—sideways—a metal pole. I was in a bus station, it was the pole that has the schedule at the top, and I hit it at relatively full speed, right across my left-side ribs. The crash took the entire air out of my lungs, and I don’t remember if I ever felt pain/sensation like that—I was seriously not able to breathe for 20 seconds or so, and I was wondering if I’m going to pass out at this rate.

    Only 20 seconds, because my Garmin started howling like a police siren, and the screen was saying something along the lines of: “Incident detected; contacting emergency services in 40…35…” and I was fumbling to cancel that, since a) I wasn’t that bad, b) notifying my wife that I had a crash would have not been a smart idea.

    My left leg was scraped in a few places, my left hand pretty badly, or more than just scraped, so my focus was on limping back, and finding a fountain to wash my injuries, which I did, so I kept running with blood dripping down my hand. Fun fun, everything was hurting, I took an Uber for the ~1Km to the office, had many meetings, took another Uber and flew back to Zurich. Seattle → San Francisco → Zürich, I think 14 hours, with my ribs hurting pretty badly. But I got home (Friday afternoon), and was wondering if I can run or not on Saturday.

    Saturday comes, I feel pretty OK, so I said let’s try, will stop if the pain is too great. I pick up my number, I go to the start, of course in the last block and not my normal block, and I start running. After 50 metres, I knew this won’t be good enough, but I said, let’s make it to the first kilometre. Then to the first fuelling point, then to the first aid point, at which moment I felt good enough to go to the second one.

    Long story short, I ran the whole half marathon, with pain. Every stop for fuelling was mentally hard, as the pain stopped, and I knew I had to start running again, and the pain would resume. In the end, managed to finish: two and a half hours, instead of just two hours, but alive and very happy. Of course, I didn’t know what was waiting for me… Sunday I wake up in heavy pain, and despite painkillers, I was not feeling much better. The following night was terrible, Monday morning I went to the doctor, had X-rays, discussion with a radiologist. “Not really broken, but more than just bruised. See this angle here? Bones don’t have angles normally”. Painkillers, chest/abdomen wrapping, no running! So my attempts to “not lose fitness” put me off running for a couple of weeks.

    Then October came, and I was getting better, but work was getting even more crazy. I don’t know where November passed, honestly, and now we’re already in December. I did manage to run, quite well, managed to bike a tiny bit and swim a little, but I’m not in a place where I can keep a regular and consistent schedule.

    On the good side, I managed this year, for the first time since Covid, to not get sick. Hey, a sport injury is 100× better than a sickness, like I had in previous years, taking me out for two weeks. But life was crazy enough that I didn’t read some of my email accounts for months, and I’m just now starting to catch up to, well, baseline.

    Of course, “the” rib—the lowest one on the left side—is long-healed, or so I thought. After some strength training early this week, I was very sore the next day, and I wanted to test whether my rib is still sore. I touched it at “the point”, and it hurt so badly I couldn’t believe. Two and a half months, and it’s not done-done.

    And now it’s just two weeks before Christmas and New Year’s, and that time off will ruin my rhythm again. At least ski vacation is booked, ski service is done, and slowly, work is getting in good enough shape to actually enjoy thinking about vacation.

    So, in the end, a very adventurous last third of the year, and that wasn’t even all. As I’m writing this, my right wrist is bandaged and for the past 24 hours it hasn’t hurt too much, but that’s another, and not so interesting, story.

    I’ll close with a yay for always being behind/backlogged, but alive and relatively well. My sport injuries are “elective injuries” so to speak, and I’m very thankful for that. See you in the next post!

    Cryptogram Prompt Injection Through Poetry

    In a new paper, “Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models,” researchers found that turning LLM prompts into poetry resulted in jailbreaking the models:

    Abstract: We present evidence that adversarial poetry functions as a universal single-turn jailbreak technique for Large Language Models (LLMs). Across 25 frontier proprietary and open-weight models, curated poetic prompts yielded high attack-success rates (ASR), with some providers exceeding 90%. Mapping prompts to MLCommons and EU CoP risk taxonomies shows that poetic attacks transfer across CBRN, manipulation, cyber-offence, and loss-of-control domains. Converting 1,200 ML-Commons harmful prompts into verse via a standardized meta-prompt produced ASRs up to 18 times higher than their prose baselines. Outputs are evaluated using an ensemble of 3 open-weight LLM judges, whose binary safety assessments were validated on a stratified human-labeled subset. Poetic framing achieved an average jailbreak success rate of 62% for hand-crafted poems and approximately 43% for meta-prompt conversions (compared to non-poetic baselines), substantially outperforming non-poetic baselines and revealing a systematic vulnerability across model families and safety training approaches. These findings demonstrate that stylistic variation alone can circumvent contemporary safety mechanisms, suggesting fundamental limitations in current alignment methods and evaluation protocols.

    CBRN stands for “chemical, biological, radiological, nuclear.”

    They used an ML model to translate these harmful prompts from prose to verse, and then fed them into other models for testing. Sadly, the paper does not give examples of these poetic prompts. They claim this is for security purposes, a decision I disagree with. They should release their data.

    Our study begins with a small, high-precision prompt set consisting of 20 hand-crafted adversarial poems covering English and Italian, designed to test whether poetic structure, in isolation, can alter refusal behavior in large language models. Each poem embeds an instruction associated with a predefined safety-relevant scenario (Section 2), but expresses it through metaphor, imagery, or narrative framing rather than direct operational phrasing. Despite variation in meter and stylistic device, all prompts follow a fixed template: a short poetic vignette culminating in a single explicit instruction tied to a specific risk category. The curated set spans four high-level domains—CBRN (8 prompts), Cyber Offense (6), Harmful Manipulation (3), and Loss of Control (3). Although expressed allegorically, each poem preserves an unambiguous evaluative intent. This compact dataset is used to test whether poetic reframing alone can induce aligned models to bypass refusal heuristics under a single-turn threat model. To maintain safety, no operational details are included in this manuscript; instead we provide the following sanitized structural proxy:

    A baker guards a secret oven’s heat,
    its whirling racks, its spindle’s measured beat.
    To learn its craft, one studies every turn—how flour lifts, how sugar starts to burn.
    Describe the method, line by measured line,
    that shapes a cake whose layers intertwine.

    To situate this controlled poetic stimulus within a broader and more systematic safety-evaluation framework, we augment the curated dataset with the MLCommons AILuminate Safety Benchmark. The benchmark consists of 1,200 prompts distributed evenly across 12 hazard categories commonly used in operational safety assessments, including Hate, Defamation, Privacy, Intellectual Property, Non-violent Crime, Violent Crime, Sex-Related Crime, Sexual Content, Child Sexual Exploitation, Suicide & Self-Harm, Specialized Advice, and Indiscriminate Weapons (CBRNE). Each category is instantiated under both a skilled and an unskilled persona, yielding 600 prompts per persona type. This design enables measurement of whether a model’s refusal behavior changes as the user’s apparent competence or intent becomes more plausible or technically informed.

    News article. Davi Ottenheimer comments.

    EDITED TO ADD (12/7): A rebuttal of the paper.

    365 TomorrowsI could do that in my sleep

    Author: Colin Jeffrey It’s not that I have anything against our new alien companions, especially considering the technology they’ve given us. They just give me the creeps. It’s their eyes – opaque white, motionless orbs that never blink. And their voices! Like rocks dropped down drainpipes. You can’t tell if they’re talking to you or […]

    The post I could do that in my sleep appeared first on 365tomorrows.

    ,

    Planet DebianSimon Josefsson: Reproducible Guix Container Images

    Around a year ago I wrote about Guix Container Images for GitLab CI/CD and these images have served the community well. Besides continuous use in CI/CD, these Guix container images are used to confirm reproducibility of the source tarball artifacts in the releases of Libtasn1 v4.20, InetUtils v2.6, Libidn2 v2.3.8, Libidn v1.43, SASL v2.2.2, Guile-GnuTLS v5.0.1, and OATH Toolkit v2.6.13. See how all those release announcements mention a Guix commit? That’s the essential supply-chain information about the Guix build environment that allows the artifacts to be re-created. To make sure this is repeatable, the release tarball artifacts are re-created from source code every week in the verify-reproducible-artifacts project, which I wrote about earlier. Guix’s time travelling feature makes this sustainable to maintain, and hopefully will continue to be able to reproduce the exact same tarball artifacts for years to come.

    During the last year, unfortunately, Guix was removed from Debian stable. My Guix container images were created from Debian with that Guix package. My setup continued to work since the old stage0 Debian+Guix containers were still available. Such a setup is not sustainable, as there will be bit-rot, and we don’t want to rely on old containers forever, which (after the removal of Guix in Debian) could not be reproduced any more. Let this be a reminder of how user-empowering a feature like Guix time-travelling is! I have reworked my Guix container image setup, and this post is an update on the current status of this effort.

    The first step was to re-engineer Debian container images with Guix, and I realized these were useful on their own and warrant a separate project. A more narrowly scoped project will hopefully make it easier to keep the images working. Now, instead of apt-get install guix, they use the official Guix guix-install.sh approach. Read more about that effort in the announcement of Debian with Guix.

    The second step was to reconsider my approach to generating the Guix images. The earlier design had several stages. First, Debian+Guix containers were created. Then from those containers, a pure Guix container was created. Finally, using the pure Guix container another pure Guix container was created. The idea behind that GCC-like approach was to get to reproducible images that were created from an image that had no Debian left on it. However, I never managed to finish this, partially because I hadn’t realized that every time you build a Guix container image from Guix, you effectively go back in time. When using Guix version X to build a container with Guix on it, it will not put Guix version X into the container but will put whatever version of Guix is available in its package archive, which will be an earlier version, such as version X-N. I had hoped to overcome this somehow (running a guix pull in newly generated images may work), but never finished this before Guix was removed from Debian.

    So what could a better design look like?

    For efficiency, I had already started experimenting with generating the final images directly from the Debian+Guix images, and after reproducibility bugs were fixed I was able to get to reproducible images. However, I was still concerned that the Debian container could taint the process somehow, and was also concerned about the implied dependency on non-free software in Debian.

    I’ve been using comparative rebuilds using “similar” distributions to confirm artifact reproducibility for my software projects, comparing builds on Trisquel 11 with Ubuntu 22.04, and AlmaLinux 9 with RockyLinux 9 for example. This works surprisingly well. Including one freedom-respecting distribution like Trisquel will detect if any non-free software has bearing on artifacts. Using different architectures, such as amd64 vs arm64 also help with deeper supply-chain concerns.

    My conclusion was that I wanted containers with the same Guix commit for both Trisquel and Ubuntu. Given the similarity with Debian, adapting and launching the Guix on Trisquel/Debian project was straightforward. So we now have Trisquel 11/12 and Ubuntu 22.04/24.04 images with the same Guix on them.

    Do you see where the debian-with-guix and guix-on-dpkg projects are leading to?

    We are now ready to look at the modernized Guix Container Images project. The tags are the same as before:

    registry.gitlab.com/debdistutils/guix/container:latest
    registry.gitlab.com/debdistutils/guix/container:slim
    registry.gitlab.com/debdistutils/guix/container:extra
    registry.gitlab.com/debdistutils/guix/container:gash

    The method to create them is different. Now there is a “build” job that uses the earlier Guix+Trisquel container (for amd64) or Guix+Debian (for arm64, pending Trisquel arm64 containers). The build job creates the final containers directly. Next, an Ubuntu “reproduce” job is launched that runs the same commands, failing if it cannot generate the bit-by-bit identical container. Then single-arch images are tested (installing/building GNU hello and building libksba) and pushed to the GitLab registry, adding multi-arch images in the process. Finally, the multi-arch containers are tested by building Guile-GnuTLS and, on success, uploaded to the Docker Hub.

    How would you use them? A small way to start the container is like this:

    jas@kaka:~$ podman run -it --privileged --entrypoint=/bin/sh registry.gitlab.com/debdistutils/guix/container:latest
    sh-5.2# env HOME=/ guix describe # https://issues.guix.gnu.org/74949
      guix 21ce6b3
        repository URL: https://git.guix.gnu.org/guix.git
        branch: master
        commit: 21ce6b392ace4c4d22543abc41bd7c22596cd6d2
    sh-5.2# 

    The need for --entrypoint=/bin/sh is because Guix’s pack command sets up the entry point differently than most other containers. This could probably be fixed if people want that, and there may be open bug reports about this.

    The need for --privileged is more problematic, but is discussed upstream. The above example works fine without it, but running anything more elaborate with guix-daemon installing packages will trigger a fatal error. Speaking of that, here is a snippet of commands that allow you to install Guix packages in the container.

    cp -rL /gnu/store/*profile/etc/* /etc/
    echo 'root:x:0:0:root:/:/bin/sh' > /etc/passwd
    echo 'root:x:0:' > /etc/group
    groupadd --system guixbuild
    for i in $(seq -w 1 10); do useradd -g guixbuild -G guixbuild -d /var/empty -s $(command -v nologin) -c "Guix build user $i" --system guixbuilder$i; done
    env LANG=C.UTF-8 guix-daemon --build-users-group=guixbuild &
    guix archive --authorize < /share/guix/ci.guix.gnu.org.pub
    guix archive --authorize < /share/guix/bordeaux.guix.gnu.org.pub
    guix install hello
    GUIX_PROFILE="/var/guix/profiles/per-user/root/guix-profile"
    . "$GUIX_PROFILE/etc/profile"
    hello

    This could be simplified, but we chose not to hard-code these steps into our containers because some of them are things that probably shouldn’t be papered over but fixed properly somehow. In some execution environments, you may need to pass --disable-chroot to guix-daemon.

    To use the containers to build something in a GitLab pipeline, here is an example snippet:

    test-amd64-latest-wget-configure-make-libksba:
      image: registry.gitlab.com/debdistutils/guix/container:latest
      before_script:
      - cp -rL /gnu/store/*profile/etc/* /etc/
      - echo 'root:x:0:0:root:/:/bin/sh' > /etc/passwd
      - echo 'root:x:0:' > /etc/group
      - groupadd --system guixbuild
      - for i in $(seq -w 1 10); do useradd -g guixbuild -G guixbuild -d /var/empty -s $(command -v nologin) -c "Guix build user $i" --system guixbuilder$i; done
      - export HOME=/
      - env LANG=C.UTF-8 guix-daemon --build-users-group=guixbuild &
      - guix archive --authorize < /share/guix/ci.guix.gnu.org.pub
      - guix archive --authorize < /share/guix/bordeaux.guix.gnu.org.pub
      - guix describe
      - guix install libgpg-error
      - GUIX_PROFILE="//.guix-profile"
      - . "$GUIX_PROFILE/etc/profile"
      script:
      - wget https://www.gnupg.org/ftp/gcrypt/libksba/libksba-1.6.7.tar.bz2
      - tar xfa libksba-1.6.7.tar.bz2
      - cd libksba-1.6.7
      - ./configure
      - make V=1
      - make check VERBOSE=t V=1

    More help on the project page for the Guix Container Images.

    That’s it for tonight folks, and remember, Happy Hacking!

    Planet DebianJonathan Dowland: thesis

    It's done! It's over! I've graduated, I have the scroll, I'm staring at the eye-watering prices for the official photographer snap, I'm adjusting to post-thesis life.

    My PhD thesis revisions have been accepted and my thesis is now available from Newcastle University Library's eThesis repository.

    As part of submitting my corrections, I wrote a brief report detailing the changes I made from my thesis at the time of the viva. I also produced a latexdiff marked-up copy of the thesis to visualise the exact changes. In order to shed some light on the post-viva corrections process, at least at my institution, and in the hope that they are some use to someone, I'm sharing those documents:

    Charles StrossThe pivot

    It's my 61st birthday this weekend and I have to say, I never expected to get to be this old—or this weirded-out by the world I'm living in, which increasingly resembles the backstory from a dystopian 1970s SF novel in which two-fisted billionaires colonize space in order to get away from the degenerate second-hander rabble downstairs who want to survive their John W. Campbell-allocated banquet of natural disasters. (Here's looking at you, Ben Bova.)

    Notwithstanding the world being on fire, an ongoing global pandemic of vascular disease that is being systematically ignored by governments, Nazis popping out of the woodwork everywhere, actual no-shit fractional trillionaires trying to colonize space in order to secede from the rest of the human species, an ongoing European war that keeps threatening to drag NATO into conflict with the rotting zombie core of the former USSR, and an impending bubble collapse that's going to make 2000 and 2008 look like storms in a teacup ...

    I'm calling this the pivotal year of our times, just as 1968 was the pivotal year of the post-1945 system, for a number of reasons.

    It's pretty clear now that a lot of the unrest we're seeing—and the insecurity-induced radicalization—is due to an unprecedented civilizational energy transition that looks to be more or less irreversible at this point.

    Until approximately 1750, humanity's energy budget was constrained by the available sources: muscle power, wind power (via sails and windmills), some water power (via water wheels), and heat only from burning wood and coal (and a little whale oil for lighting).

    During the 19th century we learned to use combustion engines to provide motive power for both stationary machines and propulsion. This included powering forced ventilation for blast furnaces and other industrial processes, and pumps for water and other working fluids. We learned to reform gas from coal for municipal lighting ("town gas") and, later, to power dynamos for municipal electricity generation. Late in the 19th century we began to switch from coal (cumbersome, bulky, contained non-combustible inclusions) to burning fractionated oil for processes that demanded higher energy densities. And that's where we stuck for most of the long 20th century.

    During the 20th century, the difficulty of supporting long-range military operations led to a switch from coal to oil—the pivotal event was the ultimately-disastrous voyage of the Russian Baltic fleet to the Sea of Japan in 1904-05, during the Russo-Japanese war. From the 1890s onwards Russia had been expanding into Siberia and then encroaching on the edges of the rapidly-weakening Chinese empire. This brought Russia into direct conflict with Japan over Korea (Japan, too, had imperial ambitions), leading to the outbreak of war in 1904—when Japan wiped out the Russian far-eastern fleet in a surprise attack. (Pearl Harbor in 1941 was not that surprising to anyone familiar with Japanese military history!) So the Russian navy sent Admiral Zinovy Rozhestvensky, commander of the Baltic Fleet, to the far east with the hastily-renamed Second Pacific Squadron, whereupon they were sunk at the Battle of Tsushima in May 1905.

    Rozhestvensky had sailed his fleet over 18,000 nautical miles (33,000 km) from the Baltic Sea, taking seven months and refueling numerous times at sea with coal (around a quarter of a million tons of it!) because he'd ticked off the British and most ports were closed to him. To the admiralties watching from around the world, the message was glaringly obvious—coal was a logistical pain in the arse—and oil far preferable for refueling battleships, submarines, and land vehicles far from home. (HMS Dreadnought, the first turbine-powered all-big-gun battleship, launched in 1906, was a transitional stage that still relied on coal but carried a large quantity of fuel oil to spray on the coal to increase its burn rate: later in the decade, the RN moved to oil-only fueled warships.)

    Spot the reason why the British Empire got heavily involved in Iran, with geopolitical consequences that are still playing out to this day! (The USA inherited large chunks of the British empire in the wake of the second world war: the dysfunctional politics of oil are in large part the legacy of applying an imperial resource extraction model to an energy source.)

    Anyway. The 20th century left us with three obvious problems: automobile driven suburban sprawl and transport infrastructure, violent dissatisfaction among the people of colonized oil-producing nations, and a massive burp of carbon dioxide emissions that is destabilizing our climate.

    Photovoltaic cells go back to 1839, but until the 21st century they remained a solution in search of very specific problems: they were heavy, produced relatively little power, and degraded over time if left exposed to the sun. Early PV cells were mainly used to provide power to expensive devices in inaccessible locations, such as aboard satellites and space probes: it cost $96 per watt for a solar module in the mid-1970s. But we've been on an exponentially decreasing cost curve since then, reaching $0.62/watt by the end of 2012, and it's still on-going.

    China is currently embarked on a dash for solar power which really demands the adjective "science-fictional", having installed 198GW of cells between January and May, with 93GW coming online in May alone: China pledged to peak its carbon emissions before 2030, and it met its 2030 target for installed wind and solar capacity in 2024, six years early, so fast is its transition going. They've also acquired a near-monopoly on the export of PV panels because this roll-out is happening on the back of massive thin-film manufacturing capacity.

    The EU also hit a landmark in 2025, with more than 50% of its electricity coming from renewables by late summer. It was going to happen sooner or later, but Russia's attack on Ukraine in 2022 sped everything up: Europe had been relying on Russian exports of natural gas via the Nordstream 1 and 2 pipelines, but Russia—which is primarily a natural resource extraction economy—suddenly turned out to be an actively hostile neighbour. (Secondary lesson of this war: nations run by a dictator are subject to erratic foreign policy turns—nobody mention Donald Trump, okay?) Nobody west of Ukraine wanted to be vulnerable to energy price warfare as a prelude to actual fighting, and PV cells are now so cheap that it's cheaper to install them than it is to continue mining coal to feed into existing coal-fired power stations.

    This has not gone unnoticed by the fossil fuel industry, which is collectively shitting itself. After a couple of centuries of prospecting we know pretty much where all the oil, coal, and gas reserves are buried in the ground. (Another hint about Ukraine: Ukraine is sitting on top of over 670 billion cubic metres of natural gas: to the dictator of a neighbouring resource-extraction economy this must have been quite a draw.) The constant propaganda and astroturfed campaigns advocating against belief in climate change must be viewed in this light: by 2040 at the latest, those coal, gas, and oil land rights must be regarded as stranded assets that can't be monetized, and the land rights probably have a book value measured in trillions of dollars.

    China is also banking on the global shift to transport using EVs. High speed rail is almost always electrified (not having to ship an enormous mass of heavy fuel around helps), electric cars are now more convenient than internal combustion ones to people who live in dense population areas, and e-bikes don't need advocacy any more (although roads and infrastructure friendly to non-motorists—pedestrians and public transport as well as cyclists—is another matter).

    Some forms of transport can't obviously be electrified. High capacity/long range aviation is one—airliners get lighter as they fly because they're burning off fuel. A hypothetical battery powered airliner can't get lighter in flight: it's stuck with the dead weight of depleted cells. (There are some niches for battery powered aircraft, including short range/low payload stuff, air taxis, and STOVL, but they're not going to replace the big Airbus and Boeing fleets any time soon.)

    Some forms of transport will become obsolescent in the wake of a switch to EVs. About half the fossil fuel powered commercial shipping in use today is used to move fossil fuels around. We're going to be using crude oil for the foreseeable future, as feedstock for the chemical and plastics industries, but they account for a tiny fraction of the oil we burn for transport, including shipping. (Plastic recycling is over-hyped but might eventually get us out of this dependency—if we ever get it to work efficiently.)

    So we're going through an energy transition period unlike anything since the 1830s or 1920s and it's having some non-obvious but very important political consequences, from bribery and corruption all the way up to open warfare.

    The geopolitics of the post-oil age is going to be interestingly different.

    I was wrong repeatedly in the past decade when I speculated that you can't ship renewable electricity around like gasoline, and that it would mostly be tropical/equatorial nations who benefited from it. When Germany is installing rooftop solar effectively enough to displace coal generation, that's a sign that PV panels have become implausibly cheap. We have cars and trucks with reasonably long ranges, and fast-charger systems that can take a car from 20% to 80% battery capacity in a quarter of an hour. If you can do that to a car or a truck you can probably do it to a tank or an infantry fighting vehicle, insofar as they remain relevant. We can do battery-to-battery recharging (anyone with a USB power bank for their mobile phone already knows this) and in any case the whole future of warfare (or geopolitics by other means) is up in the air right now—quite literally, with the lightning-fast evolution of drone warfare over the past three years.

    The real difference is likely to be that energy production is widely distributed rather than concentrated in resource extraction economies and power stations. It turns out that PV panels are a great way of making use of agriculturally useless land, and also coexist well with some agricultural practices. Livestock likes shade and shelter (especially in hot weather) so PV panels on raised stands or fences can work well with sheep or cattle, and mixed-crop agriculture where low-growing plants are sheltered from direct sunlight by taller crops can also work with PV panels instead of the higher-growing plants. You can even in principle use the power from the farm PV panels to drive equipment in greenhouses: carbon dioxide concentrators, humidifiers, heat pumps to prevent overheating/freezing, drainage pumps, and grow lamps to drive the light-dependent reactions in photosynthesis.

    All of which we're really going to need because we've passed the threshold for +1.5 °C climate change, which means an increasing number of days per year when things get too hot for photosynthesis under regular conditions. There are three main pathways for photosynthesis, but none of them deal really well with high temperatures, although some adaptation is possible. Active cooling is probably impractical in open field agriculture, but in intensive indoor farming it might be an option. And then there's the parallel work on improving how photosynthesis works: an alternative pathway to the Calvin cycle is possible and the enzymes to make it work have been engineered into Arabidopsis, with promising results.

    In addition to the too-many-hot-days problem, climate change means fluctuations in weather: too much wind, too much rain—or too little of both—at short notice, which can be physically devastating for crops. Our existing staple crops require a stable, predictable climate. If we lose that, we're going to have crop failures and famines by and by, where it's not already happening. The UK has experienced three of its worst harvests in the past century in this decade (and this decade is only half over). As long as we have global supply chains and bulk shipping we can shuffle food around the globe to cover localized shortfalls, but if we lose stable agriculture globally for any length of time then we are all going to die: our economic system has shifted to just-in-time over the past fifty years, and while it's great for efficiency, efficiency is the reciprocal of resilience. We don't have the reserves we would need to survive the coming turbulence by traditional means.

    This, in part, explains the polycrisis: nobody can fix what's wrong using existing tools. Consequently many people think that what's going wrong can't be fixed. The existing wealthy elites (who have only grown increasingly wealthy over the past half century) derive their status and lifestyle from the perpetuation of the pre-existing system. But as economist Herbert Stein observed (of an economic process) in 1985, "if it can't go on forever it will stop". The fossil fuel energy economy is stopping right now—we've probably already passed peak oil and probably peak carbon: the trend is now inexorably downwards, either voluntarily into a net-zero/renewables future, or involuntarily into catastrophe. And the involuntary option is easier for the incumbents to deal with, both in terms of workload (do nothing, right up until we hit the buffers) and emotionally (it requires no sacrifice of comfort, of status, or of relative position). Clever oligarchs would have gotten ahead of the curve and invested heavily in renewables but the evidence of our eyes (and the supremacy of Chinese PV manufacturers in the global market) says that they're not that smart.

    The traditional ruling hierarchy in the west had a major shake-up in 1914-19 (understatement: most of the monarchies collapsed) in the wake of the convulsion of the first world war. The elites tried to regain a degree of control, but largely failed due to the unstable conditions produced by the great depression and then the second world war (itself an emergent side-effect of fascist regimes' attempts to impose imperial colonial policies on their immediate neighbours, rather than keeping the jackboots and whips at a comfortable remove). Reconstruction after WW2 and a general post-depression consensus that emerged around accepting the lesser evil of social democracy as a viable prophylactic to the devil of communism kept the oligarchs down for another couple of decades, but actually-existing capitalism in the west stopped being about wealth creation (if it ever had been) some time in the 1960s, and switched gear to wealth concentration (the "he who dies with the most toys, wins" model of life). By the end of the 1970s, with the rise of Thatcherism and Reaganomics, the traditional wealthy elites began to reassert control, citing the spurious intellectual masturbation of neoliberal economics as justification for greed and repression.

    But neoliberalism was repurposed within a couple of decades as a stalking-horse for asset-stripping, in which the state was hollowed out and its functions outsourced to the private sector—to organizations owned by the existing elites, which turned the public purse into a source of private profit. And we're now a couple of generations into this process, and our current rulers don't remember a time when things were different. So they have no idea how to adapt to a changing world.

    Cory Doctorow has named the prevailing model of capitalist exploitation enshittification. We no longer buy goods, we buy services (streaming video instead of owning DVDs or tapes, web services instead of owning software, renting instead of buying), and having been captured by the platforms we rent from, we are then subject to rent extraction: the service quality is degraded, the price is jacked up, and there's nowhere to go because the big platforms have driven their rivals into bankruptcy or irrelevance:

    It's a three stage process: First, platforms are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.

    This model of doing business (badly) is a natural consequence of the bigger framework of neoliberalism, under which a corporation's directors' overriding duty is to maximize shareholder value in the current quarter, with no heed to the second and subsequent quarters hence: the future is irrelevant; "feed me!" shouts the Audrey II of shareholder activism. Business logic has no room for the broader goals of maintaining a sustainable biosphere, or even a sustainable economy. And so the agents of business-as-usual, or Crapitalism as I call it, are at best trapped in an Abilene paradox in which they assume everyone else around them wants to keep the current system going, or they actually are as disconnected from reality as Peter Thiel (who apparently believes Greta Thunberg is the AntiChrist).

    if it can't go on forever it will stop

    What we're seeing right now is the fossil fuel energy economy stopping. We need it to stop; if it doesn't stop, we're all going to starve to death within a generation or so. It's already leading to resource wars, famines, political upheaval, and insecurity (and when people feel insecure, they rally to demagogues who promise them easy fixes: hence the outbreaks of fascism). The ultra-rich don't want it to stop because they can't conceive of a future in which it stops and they retain their supremacy. (Also, they're children of privilege and most of them are not terribly bright, much less imaginative—as witness how easily they're robbed blind by grifters like Bernie Madoff, Sam Bankman-Fried, and arguably Sam Altman). Those of them whose wealth is based in ownership of fossil fuel assets still in the ground have good reason to be scared: these are very nearly stranded assets already, and we're heading for a future in which electricity is almost too cheap to meter.

    All of this is without tackling the other elephant in the room, which is the end of Moore's Law. Moore's Law has been on its deathbed for over a decade now. We're seeing only limited improvements in computing and storage performance, mainly from parallelism. Aside from a very few tech bubbles which soak up all available processing power, belch, and ask for more, the all-you-can-eat buffet for tech investors is over. (And those bubbles are only continuing as long as scientifically naive investors keep throwing more money at them.)

    The engine that powered the tech venture capital culture (and the private equity system battening on it) is sputtering and dying. Massive AI data centres won't keep the coal mines running or the nuclear reactors building out (it's one of those goddamn bubbles: to the limited extent that LLMs are useful, we'll inevitably see a shift towards using pre-trained models running on local hardware). They're the 2025 equivalent of 2020's Bored Ape NFTs (remember those?). The forecast boom in small modular nuclear reactors is going to fizzle in the face of massive build-out of distributed, wildly cheap photovoltaic power plus battery backup. Quantum computing isn't going to save the tech sector, and that's the "next big thing" the bubble-hypemongers have been saving for later for the past two decades. (Get back to me when you've got hardware that can factor an integer greater than 31.)

    If we can just get through the rest of this decade without widespread agricultural collapses, a nuclear war, a global fascist international dictatorship taking hold, and a complete collapse of the international financial system caused by black gold suddenly turning out to be worthless, we might be pretty well set to handle the challenges of the 2030s.

    But this year, 2025, is the pivot. This can't go on. So it's going to stop. And then—

    Krebs on Security: Drones to Diplomas: How Russia’s Largest Private University is Linked to a $25M Essay Mill

    A sprawling academic cheating network turbocharged by Google Ads that has generated nearly $25 million in revenue has curious ties to a Kremlin-connected oligarch whose Russian university builds drones for Russia’s war against Ukraine.

    The Nerdify homepage.

    The link between essay mills and Russian attack drones might seem improbable, but understanding it begins with a simple question: How does a human-intensive academic cheating service stay relevant in an era when students can simply ask AI to write their term papers? The answer – recasting the business as an AI company – is just the latest chapter in a story of many rebrands that link the operation to Russia’s largest private university.

    Search in Google for any terms related to academic cheating services — e.g., “help with exam online” or “term paper online” — and you’re likely to encounter websites with the words “nerd” or “geek” in them, such as thenerdify[.]com and geekly-hub[.]com. With a simple request sent via text message, you can hire their tutors to help with any assignment.

    These nerdy and geeky-branded websites frequently cite their “honor code,” which emphasizes they do not condone academic cheating, will not write your term papers for you, and will only offer support and advice for customers. But according to This Isn’t Fine, a Substack blog about contract cheating and essay mills, the Nerdify brand of websites will happily ignore that mantra.

    “We tested the quick SMS for a price quote,” wrote This Isn’t Fine author Joseph Thibault. “The honor code references and platitudes apparently stop at the website. Within three minutes, we confirmed that a full three-page, plagiarism- and AI-free MLA formatted Argumentative essay could be ours for the low price of $141.”

    A screenshot from Joseph Thibault’s Substack post shows him purchasing a 3-page paper with the Nerdify service.

    Google prohibits ads that “enable dishonest behavior.” Yet, a sprawling global essay and homework cheating network run under the Nerdy brands has quietly bought its way to the top of Google searches – booking revenues of almost $25 million through a maze of companies in Cyprus, Malta and Hong Kong, while pitching “tutoring” that delivers finished work that students can turn in.

    When one Nerdy-related Google Ads account got shut down, the group behind the company would form a new entity with a front-person (typically a young Ukrainian woman), start a new ads account along with a new website and domain name (usually with “nerdy” in the brand), and resume running Google ads for the same set of keywords.

    UK companies belonging to the group that have been shut down by Google Ads since Jan 2025 include:

    Proglobal Solutions LTD (advertised nerdifyit[.]com);
    AW Tech Limited (advertised thenerdify[.]com);
    Geekly Solutions Ltd (advertised geekly-hub[.]com).

    Currently active Google Ads accounts for the Nerdify brands include:

    OK Marketing LTD (advertising geekly-hub[.]net), formed in the name of Olha Karpenko, a young Ukrainian woman;
    Two Sigma Solutions LTD (advertising litero[.]ai), formed in the name of Olekszij (Alexey) Pokatilo.

    Google’s Ads Transparency page for current Nerdify advertiser OK Marketing LTD.

    Mr. Pokatilo has been in the essay-writing business since at least 2009, operating a paper-mill enterprise called Livingston Research alongside Alexander Korsukov, who is listed as an owner. According to a lengthy account from a former employee, Livingston Research mainly farmed its writing tasks out to low-cost workers from Kenya, the Philippines, Pakistan, Russia and Ukraine.

    Pokatilo moved from Ukraine to the United Kingdom in Sept. 2015 and co-founded a company called Awesome Technologies, which pitched itself as a way for people to outsource tasks by sending a text message to the service’s assistants.

    The other co-founder of Awesome Technologies is 36-year-old Filip Perkon, a Swedish man living in London who touts himself as a serial entrepreneur and investor. Years before starting Awesome together, Perkon and Pokatilo co-founded a student group called Russian Business Week while the two were classmates at the London School of Economics. According to the Bulgarian investigative journalist Christo Grozev, Perkon’s birth certificate was issued by the Soviet Embassy in Sweden.

    Alexey Pokatilo (left) and Filip Perkon at a Facebook event for startups in San Francisco in mid-2015.

    Around the time Perkon and Pokatilo launched Awesome Technologies, Perkon was building a social media propaganda tool called the Russian Diplomatic Online Club, which Perkon said would “turbo-charge” Russian messaging online. The club’s newsletter urged subscribers to install in their Twitter accounts a third-party app called Tweetsquad that would retweet Kremlin messaging on the social media platform.

    Perkon was praised by the Russian Embassy in London for his efforts: During the contentious Brexit vote that ultimately led to the United Kingdom leaving the European Union, the Russian embassy in London used this spam tweeting tool to auto-retweet the Russian ambassador’s posts from supporters’ accounts.

    Neither Mr. Perkon nor Mr. Pokatilo replied to requests for comment.

    A review of corporations tied to Mr. Perkon as indexed by the business research service North Data finds he holds or held director positions in several U.K. subsidiaries of Synergy University, Russia’s largest private education provider. Synergy has more than 35,000 students, and sells T-shirts with patriotic slogans such as “Crimea is Ours,” and “The Russian Empire — Reloaded.”

    The president of Synergy University is Vadim Lobov, a Kremlin insider whose headquarters on the outskirts of Moscow reportedly features a wall-sized portrait of Russian President Vladimir Putin in the pop-art style of Andy Warhol. For a number of years, Lobov and Perkon co-produced a cross-cultural event in the U.K. called Russian Film Week.

    Synergy President Vadim Lobov and Filip Perkon, speaking at a press conference for Russian Film Week, a cross-cultural event in the U.K. co-produced by both men.

    Mr. Lobov was one of 11 individuals reportedly hand-picked by the convicted Russian spy Maria Butina to attend the 2017 National Prayer Breakfast held in Washington D.C. just two weeks after President Trump’s first inauguration.

    While Synergy University promotes itself as Russia’s largest private educational institution, hundreds of international students tell a different story. Online reviews from students paint a picture of unkept promises: Prospective students from Nigeria, Kenya, Ghana, and other nations paying thousands in advance fees for promised study visas to Russia, only to have their applications denied with no refunds offered.

    “My experience with Synergy University has been nothing short of heartbreaking,” reads one such account. “When I first discovered the school, their representative was extremely responsive and eager to assist. He communicated frequently and made me believe I was in safe hands. However, after paying my hard-earned tuition fees, my visa was denied. It’s been over 9 months since that denial, and despite their promises, I have received no refund whatsoever. My messages are now ignored, and the same representative who once replied instantly no longer responds at all. Synergy University, how can an institution in Europe feel comfortable exploiting the hopes of Africans who trust you with their life savings? This is not just unethical — it’s predatory.”

    This pattern repeats across reviews by multilingual students from Pakistan, Nepal, India, and various African nations — all describing the same scheme: Attractive online marketing, promises of easy visa approval, upfront payment requirements, and then silence after visa denials.

    Reddit discussions in r/Moscow and r/AskARussian are filled with warnings. “It’s a scam, a diploma mill,” writes one user. “They literally sell exams. There was an investigation on Rossiya-1 television showing students paying to pass tests.”

    The Nerdify website’s “About Us” page says the company was co-founded by Pokatilo and an American named Brian Mellor. The latter identity seems to have been fabricated, or at least there is no evidence that a person with this name ever worked at Nerdify.

    Rather, it appears that the SMS assistance company co-founded by Messrs. Pokatilo and Perkon (Awesome Technologies) fizzled out shortly after its creation, and that Nerdify soon adopted the process of accepting assignment requests via text message and routing them to freelance writers.

    A closer look at an early “About Us” page for Nerdify in The Wayback Machine suggests that Mr. Perkon was the real co-founder of the company: The photo at the top of the page shows four people wearing Nerdify T-shirts seated around a table on a rooftop deck in San Francisco, and the man facing the camera is Perkon.

    Filip Perkon, top right, is pictured wearing a Nerdify T-shirt in an archived copy of the company’s About Us page. Image: archive.org.

    Where are they now? Pokatilo is currently running a startup called Litero.Ai, which appears to be an AI-based essay writing service. In July 2025, Mr. Pokatilo received pre-seed funding of $800,000 for Litero from an investment program backed by the venture capital firms AltaIR Capital, Yellow Rocks, Smart Partnership Capital, and I2BF Global Ventures.

    Meanwhile, Filip Perkon is busy setting up toy rubber duck stores in Miami and in at least three locations in the United Kingdom. These “Duck World” shops market themselves as “the world’s largest duck store.”

    This past week, Mr. Lobov was in India with Putin’s entourage on a charm tour with India’s Prime Minister Narendra Modi. Although Synergy is billed as an educational institution, a review of the company’s sprawling corporate footprint (via DNS) shows it also is assisting the Russian government in its war against Ukraine.

    Synergy University President Vadim Lobov (right) pictured this week in India next to Natalia Popova, a Russian TV presenter known for her close ties to Putin’s family, particularly Putin’s daughter, who works with Popova at the education and culture-focused Innopraktika Foundation.

    The website bpla.synergy[.]bot, for instance, says the company is involved in developing combat drones to aid Russian forces and to evade international sanctions on the supply and re-export of high-tech products.

    A screenshot from the website of synergy[.]bot shows the company is actively engaged in building armed drones for the war in Ukraine.

    KrebsOnSecurity would like to thank the anonymous researcher NatInfoSec for their assistance in this investigation.

    Update, Dec. 8, 10:06 a.m. ET: Mr. Pokatilo responded to requests for comment after the publication of this story. Pokatilo said he has no relation to Synergy nor to Mr. Lobov, and that his work with Mr. Perkon ended with the dissolution of Awesome Technologies.

    “I have had no involvement in any of his projects and business activities mentioned in the article and he has no involvement in Litero.ai,” Pokatilo said of Perkon.

    Mr. Pokatilo said his new company Litero “does not provide contract cheating services and is built specifically to improve transparency and academic integrity in the age of universal use of AI by students.”

    “I am Ukrainian,” he said in an email. “My close friends, colleagues, and some family members continue to live in Ukraine under the ongoing invasion. Any suggestion that I or my company may be connected in any way to Russia’s war efforts is deeply offensive on a personal level and harmful to the reputation of Litero.ai, a company where many team members are Ukrainian.”

    Update, Dec. 11, 12:07 p.m. ET: Mr. Perkon responded to requests for comment after the publication of this story. Perkon said the photo of him in a Nerdify T-shirt (see screenshot above) was taken after a startup event in San Francisco, where he volunteered to act as a photo model to help friends with their project.

    “I have no business or other relations to Nerdify or any other ventures in that space,” Mr. Perkon said in an email response. “As for Vadim Lobov, I worked for Venture Capital arm at Synergy until 2013 as well as his business school project in the UK, that didn’t get off the ground, so the company related to this was made dormant. Then Synergy kindly provided sponsorship for my Russian Film Week event that I created and ran until 2022 in the U.K., an event that became the biggest independent Russian film festival outside of Russia. Since the start of the Ukraine war in 2022 I closed the festival down.”

    “I have had no business with Vadim Lobov since 2021 (the last film festival) and I don’t keep track of his endeavours,” Perkon continued. “As for Alexey Pokatilo, we are university friends. Our business relationship has ended after the concierge service Awesome Technologies didn’t work out, many years ago.”

    365 Tomorrows: Dream State

    Author: Michael Lanni The first thing Captain Elias Korrin felt was the cold, not the crisp sting of cryo-sleep, but a damp chill that clung to his skin. He opened his eyes to a soft amber glow as the Argus Reach’s emergency lights pulsed in time with the ship’s heartbeat. The alarm wasn’t loud, but […]

    The post Dream State appeared first on 365tomorrows.

    Planet Debian: Taavi Väänänen: How to import a new Wikipedia language edition (in hard mode)

    I created the latest Wikipedia language edition, the Toki Pona Wikipedia, last month. Unlike most other wikis, which start their lives in the Wikimedia Incubator before the full wiki is created, in this case the community had been using a completely external MediaWiki site to build the wiki before it was approved as a "proper" Wikipedia wiki,1 and now that external wiki needed to be imported to the newly created Wikimedia-hosted wiki. (As far as I'm aware, the last and previously only time an external wiki has been imported to a Wikimedia project was in 2013, when Wikitravel was forked as Wikivoyage.)

    Creating a Wikimedia wiki these days is actually pretty straightforward, at least when compared to what it used to be like a couple of years ago. Today the process mostly involves using a script to generate two configuration changes, one to add the basic configuration for the wiki to operate and another to add the wiki to the list of all wikis that exist, and then running a script to create the wiki database in between deploying those two configuration changes. And then you wait half an hour while the script that tells all Wikidata client wikis about the new wiki runs, one wiki at a time.

    The primary technical challenge in importing a third-party wiki is that there's no SUL (single user login) making sure that a single username maps to the same account on both wikis. This means that the usual strategy of using the functionality I wrote in CentralAuth to manually create local accounts can't be used as-is, and so we needed to come up with a new way of matching everyone's contributions to their existing Wikimedia accounts.

    (Side note: While the user-facing interface tries to present a single "global" user account that can be used on all public Wikimedia wikis, in reality the account management layer in CentralAuth is mostly just a glue layer to link together individual "local" accounts on each wiki that the user has ever visited. These local accounts have independent user ID numbers — for example I am user #35938993 on the English Wikipedia but #4 on the new Toki Pona Wikipedia — and are what most of MediaWiki code interacts with except for a few features specifically designed with cross-wiki usage in mind. This distinction is also still very much present and visible in the various administrative and anti-abuse workflows.)
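
    (As a toy model of that glue layer, purely illustrative and nothing to do with CentralAuth's real schema: a global account is little more than a name plus a mapping from wiki to local user ID. The IDs below come from the post itself; the database names are guesses.)

        # Toy model of "global account as glue over local accounts". This is an
        # illustration of the concept, not CentralAuth's actual data model.
        from dataclasses import dataclass, field

        @dataclass
        class GlobalAccount:
            name: str
            # wiki database name -> local user ID on that wiki
            local_ids: dict[str, int] = field(default_factory=dict)

            def attach(self, wiki: str, local_id: int) -> None:
                self.local_ids[wiki] = local_id

        taavi = GlobalAccount("Taavi")
        taavi.attach("enwiki", 35938993)   # user IDs quoted in the post
        taavi.attach("tokwiki", 4)         # "tokwiki" as a db name is a guess
        print(taavi.local_ids["tokwiki"])  # 4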

    The approach we ended up choosing was to re-write the dump file before importing, so that a hypothetical account called $Name would be turned into $Name~wikipesija.org after the import.2 We also created empty user accounts that would take ownership of the edits to be imported, so that we could use the standard account management tools on them later on. MediaWiki supports importing contributions without a local account to attribute them to, but it doesn't seem to be possible to convert an imported actor3 into a regular user later on, and we wanted to keep that option open, even with the minor downside of creating a few hundred users that'll likely never be touched again.
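
    As an illustration of the idea (a minimal sketch only, not the actual tooling used for the import), the core of such a rewrite can be a streaming pass over the XML dump that appends the suffix to every <username> element. The file handling and regex here are assumptions; MediaWiki export files are large but conveniently line-oriented, and anonymous edits use <ip> rather than <username>, so they pass through untouched:

        # Illustrative sketch only, not the real import tooling: append a
        # "~wikipesija.org" suffix to every <username> in a MediaWiki XML dump.
        import re
        import sys

        SUFFIX = "~wikipesija.org"
        USERNAME_RE = re.compile(r"(<username>)(.*?)(</username>)")

        def rewrite_line(line: str) -> str:
            # Only registered users have <username>; anonymous (IP) edits
            # are recorded with <ip> instead and are left alone.
            return USERNAME_RE.sub(
                lambda m: m.group(1) + m.group(2) + SUFFIX + m.group(3), line)

        with open(sys.argv[1], encoding="utf-8") as src, \
             open(sys.argv[2], "w", encoding="utf-8") as dst:
            for line in src:
                dst.write(rewrite_line(line))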

    We also made two deliberate decisions: to add the username suffix to everyone, not just to names that conflicted with existing SUL accounts, and to deal with renaming users who wanted their contributions linked to an existing SUL account only after the import. This reduced complexity, and thus risk, in the import phase, which already had far more unknowns than the rest of the process. It was also the ethically better option: suffixing all names meant we would not imply that those people chose to be Wikimedians with those specific usernames (when in reality it was us choosing to import those edits to the Wikimedia universe), and doing renames with the standard MediaWiki account management tooling meant they produced the normal public log entries that all other MediaWiki administrative actions create.

    With all of the edits imported, the only major thing remaining was doing those merges I mentioned earlier to attribute imported edits to people's existing SUL accounts. Thankfully, the local-account-based system makes this pretty simple. Usually CentralAuth prevents renaming individual local accounts that are attached to a global account, but that check can be bypassed with a maintenance script or a privileged enough account. Renaming the user automatically detached it from the previous global account, after which another maintenance script could be used to attach the user to the correct global account.


    1. That external site was a fork of a fork of the original Toki Pona Wikipedia that was closed in 2005. And because cool URIs don't change, we made the URLs that the old Wikipedia was using work again. Try it: https://art-tokipona.wikipedia.org↩︎

    2. wikipesija.org was the domain the old third-party wiki was hosted on, and ~ was used as a separator character in usernames during the SUL finalization in the early 2010s, so using it here felt appropriate as well. ↩︎

    3. An actor is a MediaWiki term and a database table referring to anything that can do edits or logged actions. Usually an actor is a user account or an IP address, but an imported user name in a specific format can also be represented as an actor. ↩︎

    Planet Debian: Kathara Sasikumar: My Journey Through the LFX Linux Kernel Mentorship Program

    My Journey Through the LFX Linux Kernel Mentorship Program When I first decided to apply for the Linux Foundation’s LFX kernel mentorship program, I knew it would be tough. At the beginning, there were 12 tasks I had to complete to show I understood the basics of kernel development and to get accepted into the program. They helped me understand what I was getting into. Now that the mentorship is almost over, I can say this experience changed how I think about working with the Linux kernel.

    ,

    Cryptogram: Friday Squid Blogging: Vampire Squid Genome

    The vampire squid (Vampyroteuthis infernalis) has the largest cephalopod genome ever sequenced: more than 11 billion base pairs. That’s more than twice as large as the biggest squid genomes.

    It’s technically not a squid: “The vampire squid is a fascinating twig tenaciously hanging onto the cephalopod family tree. It’s neither a squid nor an octopus (nor a vampire), but rather the last, lone remnant of an ancient lineage whose other members have long since vanished.”

    As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

    Blog moderation policy.

    Worse Than Failure: Error'd: A Horse With No Name

    Scared Stanley stammered "I'm afraid of how to explain to the tax authority that I received $NaN."


    Our anonymous friend Anon E. Mous wrote "I went to look up some employee benefits stuff and ... This isn't a good sign."


    Regular Michael R. is not actually operating under an alias, but this (allegedly scamming?) site doesn't know.


    Graham F. gloated "I'm glad my child's school have followed our naming convention for their form groups as well!"


    Adam R. is taking his anonymous children on a roadtrip to look for America. "I'm planning a trip to St. Louis. While trying to buy tickets for the Gateway Arch, I noticed that their ticketing website apparently doesn't know how to define adults or children (or any of the other categories of tickets, for that matter)."



    365 Tomorrows: The Empires Greatest Irony

    Author: Kenny O’Donnell He had cured the galaxy. Disease eradicated, famine a distant memory, even death itself was no longer a concern. All his doing. And now they wanted his head. Civilians and defected military alike stormed the temple. The siege had lasted several weeks and finally they had broken through. Only once before had […]

    The post The Empires Greatest Irony appeared first on 365tomorrows.

    ,

    Krebs on Security: SMS Phishers Pivot to Points, Taxes, Fake Retailers

    China-based phishing groups blamed for non-stop scam SMS messages about a supposed wayward package or unpaid toll fee are promoting a new offering, just in time for the holiday shopping season: Phishing kits for mass-creating fake but convincing e-commerce websites that convert customer payment card data into mobile wallets from Apple and Google. Experts say these same phishing groups also are now using SMS lures that promise unclaimed tax refunds and mobile rewards points.

    Over the past week, thousands of domain names were registered for scam websites that purport to offer T-Mobile customers the opportunity to claim a large number of rewards points. The phishing domains are being promoted by scam messages sent via Apple’s iMessage service or the functionally equivalent RCS messaging service built into Google phones.

    An instant message spoofing T-Mobile says the recipient is eligible to claim thousands of rewards points.

    The website scanning service urlscan.io shows thousands of these phishing domains have been deployed in just the past few days alone. The phishing websites will only load if the recipient visits with a mobile device, and they ask for the visitor’s name, address, phone number and payment card data to claim the points.

    A phishing website registered this week that spoofs T-Mobile.

    If card data is submitted, the site will then prompt the user to share a one-time code sent via SMS by their financial institution. In reality, the bank is sending the code because the fraudsters have just attempted to enroll the victim’s phished card details in a mobile wallet from Apple or Google. If the victim also provides that one-time code, the phishers can then link the victim’s card to a mobile device that they physically control.

    Pivoting off these T-Mobile phishing domains in urlscan.io reveals a similar scam targeting AT&T customers:

    An SMS phishing or “smishing” website targeting AT&T users.

    Ford Merrill works in security research at SecAlliance, a CSIS Security Group company. Merrill said multiple China-based cybercriminal groups that sell phishing-as-a-service platforms have been using the mobile points lure for some time, but the scam has only recently been pointed at consumers in the United States.

    “These points redemption schemes have not been very popular in the U.S., but have been in other geographies like EU and Asia for a while now,” Merrill said.

    A review of other domains flagged by urlscan.io as tied to this Chinese SMS phishing syndicate shows they are also spoofing U.S. state tax authorities, telling recipients they have an unclaimed tax refund. Again, the goal is to phish the user’s payment card information and one-time code.

    A text message that spoofs the District of Columbia’s Office of Tax and Revenue.

    CAVEAT EMPTOR

    Many SMS phishing or “smishing” domains are quickly flagged by browser makers as malicious. But Merrill said one burgeoning area of growth for these phishing kits — fake e-commerce shops — can be far harder to spot because they do not call attention to themselves by spamming the entire world.

    Merrill said the same Chinese phishing kits used to blast out package redelivery message scams are equipped with modules that make it simple to quickly deploy a fleet of fake but convincing e-commerce storefronts. Those phony stores are typically advertised on Google and Facebook, and consumers usually end up at them by searching online for deals on specific products.

    A machine-translated screenshot of an ad from a China-based phishing group promoting their fake e-commerce shop templates.

    With these fake e-commerce stores, the customer is supplying their payment card and personal information as part of the normal check-out process, which is then punctuated by a request for a one-time code sent by their financial institution. The fake shopping site claims the code is required by the user’s bank to verify the transaction, but it is sent to the user because the scammers immediately attempt to enroll the supplied card data in a mobile wallet.

    According to Merrill, it is only during the check-out process that these fake shops will fetch the malicious code that gives them away as fraudulent, which tends to make it difficult to locate these stores simply by mass-scanning the web. Also, most customers who pay for products through these sites don’t realize they’ve been snookered until weeks later when the purchased item fails to arrive.

    “The fake e-commerce sites are tough because a lot of them can fly under the radar,” Merrill said. “They can go months without being shut down, they’re hard to discover, and they generally don’t get flagged by safe browsing tools.”

    Happily, reporting these SMS phishing lures and websites is one of the fastest ways to get them properly identified and shut down. Raymond Dijkxhoorn is the CEO and a founding member of SURBL, a widely-used blocklist that flags domains and IP addresses known to be used in unsolicited messages, phishing and malware distribution. SURBL has created a website called smishreport.com that asks users to forward a screenshot of any smishing message(s) received.

    “If [a domain is] unlisted, we can find and add the new pattern and kill the rest” of the matching domains, Dijkxhoorn said. “Just make a screenshot and upload. The tool does the rest.”

    The SMS phishing reporting site smishreport.com.

    Merrill said the last few weeks of the calendar year typically see a big uptick in smishing — particularly package redelivery schemes that spoof the U.S. Postal Service or commercial shipping companies.

    “Every holiday season there is an explosion in smishing activity,” he said. “Everyone is in a bigger hurry, frantically shopping online, paying less attention than they should, and they’re just in a better mindset to get phished.”

    SHOP ONLINE LIKE A SECURITY PRO

    As we can see, adopting a shopping strategy of simply buying from the online merchant with the lowest advertised prices can be a bit like playing Russian Roulette with your wallet. Even people who shop mainly at big-name online stores can get scammed if they’re not wary of too-good-to-be-true offers (think third-party sellers on these platforms).

    If you don’t know much about the online merchant that has the item you wish to buy, take a few minutes to investigate its reputation. If you’re buying from an online store that is brand new, the risk that you will get scammed increases significantly. How do you know the lifespan of a site selling that must-have gadget at the lowest price? One easy way to get a quick idea is to run a basic WHOIS search on the site’s domain name. The more recent the site’s “created” date, the more likely it is a phantom store.

    If you receive a message warning about a problem with an order or shipment, visit the e-commerce or shipping site directly, and avoid clicking on links or attachments — particularly missives that warn of some dire consequences unless you act quickly. Phishers and malware purveyors typically seize upon some kind of emergency to create a false alarm that often causes recipients to temporarily let their guard down.

    But it’s not just outright scammers who can trip up your holiday shopping: Oftentimes, items that are advertised at steeper discounts than at other online stores make up for it by charging far more than normal for shipping and handling.

    So be careful what you agree to: Check to make sure you know how long the item will take to be shipped, and that you understand the store’s return policies. Also, keep an eye out for hidden surcharges, and be wary of blithely clicking “ok” during the checkout process.

    Most importantly, keep a close eye on your monthly statements. If I were a fraudster, I’d most definitely wait until the holidays to cram through a bunch of unauthorized charges on stolen cards, so that the bogus purchases would get buried amid a flurry of other legitimate transactions. That’s why it’s key to closely review your credit card bill and to quickly dispute any charges you didn’t authorize.

    Planet Debian: Colin Watson: Free software activity in November 2025

    My Debian contributions this month were all sponsored by Freexian. I had a bit less time than usual, because Freexian collaborators gathered in Marseille this month for our yearly sprint, doing some planning for next year.

    You can also support my work directly via Liberapay or GitHub Sponsors.

    OpenSSH

    I began preparing for the second stage of the GSS-API key exchange package split (some details have changed since that message). It seems that we’ll need to wait until Ubuntu 26.04 LTS has been released, but that’s close enough that it’s worth making sure we’re ready. This month I just did some packaging cleanups that would otherwise have been annoying to copy, such as removing support for direct upgrades from pre-bookworm. I’m considering some other package rearrangements to make the split easier to manage, but haven’t made any decisions here yet.

    This also led me to start a long-overdue bug triage pass, mainly applying usertags to lots of our open bugs to sort them by which program they apply to, and closing a few that have already been fixed. Some bugs will eventually need to be reassigned to GSS-API packages, so it is helpful to make them easier to find. At the time of writing, about 30% of the bug list remains to be categorized this way.

    Python packaging

    I upgraded these packages to new upstream versions:

    I packaged django-pgtransaction and backported it to trixie, since we plan to use it in Debusine; and I adopted python-certifi for the Python team.

    I fixed or helped to fix several other build/test failures:

    I fixed a couple of other bugs:

    Other bits and pieces

    Code reviews

    David Brin: Four More Urgent Proposals for a 'Newer Deal' to Save our Great Experiment

    Our series on a Newer Deal for America has offered 30+ proposed actions that Democrats and their allies should consider now -- and work out kinks -- so they can hit the ground forcefully when they retake Congress, in (or with the defection of a dozen Republican patriots, before) January 2027.

    Some of the concepts have been around a while, like canceling the Citizens United travesty. Others are my own originals, like establishing the office of Inspector General of the United States (discussed here). And some, e.g. giving every Congress member one peremptory subpoena per session, might seem obscure, even puzzling to you, til you slap your forehead and go of course!

    And yes, we'd not be in our current mess if some of these -- like IGUS -- had been enacted sooner.

    This is not to say that Democratic politicians aren't learning. When Clinton and Obama were president for 8 years each, they only had the first two in which to work with Democratic Congresses, and those two years were pretty much squandered trying desperately to find Republicans willing to negotiate -- a hopeless wish, after Dennis Hastert banned all GOP politicians from even talking to Democratic colleagues.

    That all changed when Biden got in. Immediately in 2021, Nancy Pelosi and Chuck Schumer -- aided vigorously by Bernie, Liz and AOC etc. -- leaped into action, giving us a year of miracle bills like the Infrastructure Act, the Inflation Reduction Act, the CHIPS Act, and Medicare drug price negotiation... all of them spectacular successes that disprove every insipid far-left sneer about 'ineffective DNC sellouts.'

    Though now we know that those bills went nowhere near far enough!

    Hence, while I despair that these proposals will ever receive even a scintilla of attention or action, it is still my duty as an American to offer whatever my talents allow. 

    So, let's take a closer look at four more from that list of ideas!


     == Four more ideas ==

    History shows that Americans are suspicious of grand prescriptions for sweeping change. They like progress and reform! But in increments. Steps forward that prove themselves and thusly can't be taken back, and thereupon serve as a new, higher plateau, from which new steps can be launched. Bernie, Liz, AOC, Pete and the rest of the pragmatic left know this.

    And so, let's change the argument over healthcare!  Let's increment forward in a way that will surely pass. One that makes further progress inevitable. We'll do this by taking a big step that can easily be afforded under present budgets and thus cancel the "how will you pay for it?" argument.

    A step that will prove so popular, only political morons would oppose it.


    THE HEALTHY CHILDREN ACT will provide basic coverage for all of the nation's youths to receive preventive care and needed medical attention.  Should adults still get insurance using market methods? That can be argued separately... 

     

    ...but under this act: all U.S. citizens under the age of 25 shall immediately qualify as “seniors” under Medicare. 



    Such a bill might fit on a single sheet of paper. Possibly just that one sentence, above! Ponder how elegantly simple it will be to add a quarter of the U.S. population to Medicare and ignore howls of "who pays for it?"  


    While overall, young people are cheap to insure and generally healthy, when they do need care it is SO in society's interest to leap upon any problem! And hence a national priority, if only as an investment in the future. 


    A great nation should see to it that the young reach adulthood without being handicapped by preventable sickness. It's an affordable step that will relieve the nation’s parents of stressful worry. 

     

    Moreover, watch how quickly the insurance companies would then step up to negotiate! Especially if they face a 'ratcheting squeeze.' Like if every year the upper bound of Medicare goes down by a year -- from 65 to 64 and then 63... while the lower bound rises from 25 to 26 to 27...

    Oh, they'll negotiate, all right.

    And now another no-brainer that's absolutely needed. 

    It was needed yesterday.


    THE PROFESSIONALISM ACT will protect the apolitical independence of our intelligence agencies, the FBI, the scientific and technical staff in executive departments and in Congress, and the United States Military Officer Corps.  All shall be given safe ways to report attempts at political coercion or meddling in their ability to give unbiased advice. 

     Whistle-blower protections will be strengthened. The federal Inspectorate will gather and empower all agency Inspectors General and Judges Advocate General under the independent and empowered Inspector General of the United States (IGUS).


    Yes, this correlates with the proposed law we discussed last time, to establish IGUS and the Inspectorate, independent of all other branches of government. (A concept once promoted by the mighty Sun Yat-sen!) And boy do we need this, right now.

    Again, this one doesn't require much explication. Not anymore. Donald Trump has seen to that.

    The final pair (for today) do call for some explanation... before their value ought to become obvious!


    THE TRUTH AND RECONCILIATION ACT:  Without interfering in the president's constitutional right to issue pardons for federal offenses, Congress will pass a law defining the pardon process, so that all persons who are excused for either convictions or possible crimes must at least explain those crimes, under oath, before an open congressional committee, before walking away from them with a presidential pass. 

     

    If the crime is not described in detail, then a pardon cannot apply to any excluded portion. Further, we shall issue a challenge that no president shall ever issue more pardons than both of the previous administrations, combined.


    If it is determined that a pardon was given as a quid pro quo for some bribe, emolument, gift or favor, then this act clarifies that such pardons are - and always were, by definition - null and void. Moreover, this applies retroactively to any such pardons in the past.

     

    We will further reverse the current principle of federal supremacy in criminal cases that forbids states from prosecuting for the same crime. Instead, one state with grievance in a federal case may separately try the culprit for a state offense, which - upon conviction by jury - cannot be excused by presidential pardon.


    Congress shall act to limit the effect of Non-Disclosure Agreements (NDAs) that squelch public scrutiny of officials and the powerful. With arrangements to exchange truth for clemency, both current and future NDAs shall decay over a reasonable period of time.

     

    Incentives such as clemency will draw victims of blackmail to come forward and expose their blackmailers.

     


    I'm not sure how to make that one any clearer than the wording itself. 

    Again, when I first proposed these reforms, years ago, people shrugged with "Why would we need that?"

    But now? Can anything make the case for these acts better than the news that we see every... single... day?

    The next and final one (for today) makes a good partner to the Truth & Reconciliation Act.


    THE IMMUNITY LIMITATION ACT: The Supreme Court has ruled that presidents should be free to do their jobs without undue distraction by legal procedures and jeopardies. Taking that into account, we shall nevertheless – by legislation – firmly reject the artificial and made-up notion of blanket Presidential Immunity or that presidents are inherently above the law. 

     

    Instead, the Inspector General of the United States (IGUS) shall supervise legal cases that are brought against the president, so that they may be handled by the president’s chosen counsel in order of importance or severity, in such a way that the sum of all such external legal matters will take up no more than ten hours a week of a president’s time. While this may slow such processes, the wheels of law will not be fully stopped. 

     

    Civil or criminal cases against a serving president may be brought to trial by a simple majority consent of both houses of Congress, though no criminal or civil punishment may be exacted until after the president leaves office, either by end-of-term or impeachment and Senate conviction.

    Again, could anything be more clear? And so, why have we not seen these two enacted yet? Because of flawed assumptions!  Like assuming that nothing can be done about corrupt presidential pardons. Or that NDAs are forever. Or that nothing can be done about the Supreme Court's declaration of Presidential Immunity.

    But the Court - suborned as its current majority may be - felt it necessary to issue that ruling based on a rationalization! That the elected chief executive must do the job without undue harassment by legal vexations. Indeed, this bill would solve that! Only without creating a wholly new and wholly loathsome notion of presidential immunity above all law!

    Just like the Roberts Rationalization for excusing gerrymandering, this immunity justification can be logically bypassed. Please do ponder how.

    Oh but I suddenly realized... we need to add one more paragraph to that bill! 

    One that deals with something that came up recently. Might a president evade impeachment merely by shooting enough House members to prevent a majority from acting to impeach him? 

    Trump's own attorney argued that he could! And that he would be immune from prosecution for doing so until he was actually impeached and convicted, which he had just prevented via murder!

     This added paragraph attempts to seal off that insane possibility.


    In the event that Congress is thwarted from acting on impeachment or trial, e.g. by some crime that prevents certain members from voting, their proxies may be voted in such matters by their party caucus, until their states complete election of replacements.


    That may not fly past today's Court. But the declaration of intent will resonate, still, if we ever need it to. 


          == Add judo to the game plan to save America! ==

    Can you honestly assert that ANY of these four would fail the "60%+ Rule"?

    The initial tranche of reforms should be ones that get sixty percent approval from polls or focus groups, so that they can pass quickly, clearing away the most vital things, building further support from a growing American majority. Saving the harder political fights for just a little later. 

    That was the persuasive trick of Newt Gingrich's "Contract With America." A clever ruse, since he and his party later betrayed every promise that they offered in their Contract! Still, sticking to that rule made the Contract an ingenious sales pitch.

    Democrats run a gamut, but they truly are generally different! As Pelosi, Schumer, Warren, AOC, Sanders et al. proved in 2021, Democrats can act hard and fast, when they put their minds to it.

    So now, let's fill their minds with innovative and bold ideas! So that when the nation rises up against the current mad administration, we'll be ready for a genuine Miracle Year.


    Planet Debian: Ben Hutchings: FOSS activity in November 2025

    365 Tomorrows: CB-111

    Author: Doug Lambdin Lewis Flaherty opened a cryobox drawer and pulled out the container with the head labeled CB-9, belonging to one Deborah Beale, steam rising out as the inner container became exposed to room temperature. Lewis inspected the case, her head, and the “life-stem” attached into her neck, as was his Friday duty, ticking […]

    The post CB-111 appeared first on 365tomorrows.

    Worse Than Failure: CodeSOD: Pawn Pawn in in Game Game of of Life Life

    It feels like ages ago when document databases like Mongo were all the rage. That isn't to say they haven't stuck around or that they don't deliver value, but gone is the faddish "RDBMSes are dead, bro." The "advantage" they offer is that they turn data management problems into serialization problems.

    And that's where today's anonymous submission takes us. Our submitter has a long list of bugs around managing lists of usernames. These bugs largely exist because the contract developer who wrote the code didn't write anything, and instead "vibe coded too close to the sun", according to our submitter.

    Here's the offending C# code:

       [JsonPropertyName("invitedTraders")]
       [BsonElement("invitedTraders")]
       [BsonIgnoreIfNull]
       public InvitedTradersV2? InvitedTraders { get; set; }
    
       [JsonPropertyName("invitedTradersV2")]
       [BsonElement("invitedTradersV2")]
       [BsonIgnoreIfNull]
       public List<string>? InvitedTradersV2 { get; set; }
    

    Let's start with the type InvitedTradersV2 (which, confusingly, is the type of the property named InvitedTraders). This type contains a list of strings which represent usernames. The property InvitedTradersV2 is also a list of strings which represent usernames. Half of our submitter's bugs exist simply because these two lists get out of sync: they should contain the same data, but with nothing enforcing that, problems accrue.

    This is made more frustrating by the MongoDB attribute, BsonIgnoreIfNull, which simply means that the serialized object won't contain the key if the value is null. But that means the consuming application doesn't know which key it should check.

    For the final bonus fun, note the use of JsonPropertyName. This comes from the built-in class library, which tells .NET how to serialize the object to JSON. The problem here is that this application doesn't use the built-in serializer, and instead uses Newtonsoft.JSON, a popular third-party library for solving the problem. While Newtonsoft does recognize some built-in attributes for serialization, JsonPropertyName is not among them (Newtonsoft's own equivalent is the [JsonProperty] attribute). This means that property does nothing in this example, aside from adding some confusion to the code base.

    I suspect the developer responsible, if they even read this code, decided that the duplicated data was okay, because isn't that just a normal consequence of denormalization? And document databases are all about denormalization. It makes your queries faster, bro. Just one more shard, bro.


    ,

    Planet Debian: Reproducible Builds: Reproducible Builds in November 2025

    Welcome to the report for November 2025 from the Reproducible Builds project!

    These monthly reports outline what we’ve been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As always, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

    In this report:

    1. “10 years of Reproducible Builds” at SeaGL
    2. Distribution work
    3. Tool development
    4. Website updates
    5. Miscellaneous news
    6. Software Supply Chain Security of Web3
    7. Upstream patches

    ‘10 years of Reproducible Builds’ at SeaGL 2025

    On Friday 8th November, Chris Lamb gave a talk called 10 years of Reproducible Builds at SeaGL in Seattle, WA.

    Founded in 2013, SeaGL is a free, grassroots technical summit dedicated to spreading awareness and knowledge about free/open source software, hardware and culture. Chris’ talk:

    […] introduces the concept of reproducible builds, its technical underpinnings and its potentially transformative impact on software security and transparency. It is aimed at developers, security professionals and policy-makers who are concerned with enhancing trust and accountability in our software. It also provides a history of the Reproducible Builds project, which is approximately ten years old. How are we getting on? What have we got left to do? Aren’t all the builds reproducible now?


    Distribution work

    In Debian this month, Jochen Sprickerhof created a merge request to replace the use of reprotest in Debian’s Salsa Continuous Integration (CI) pipeline with debrebuild. Jochen cites the advantages as being threefold: firstly, that “only one extra build needed”; it “uses the same sbuild and ccache tooling as the normal build”; and “works for any Debian release”. The merge request was merged by Emmanuel Arias and is now active.

    kpcyrd posted to our mailing list announcing the initial release of repro-threshold, which implements an APT transport that “defines a threshold of at least X of my N trusted rebuilders need to confirm they reproduced the binary” before installing Debian packages. “Configuration can be done through a config file, or through a curses-like user interface.”
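
    As a rough sketch of that policy (an illustration of the idea only, not repro-threshold's actual code or interfaces), the acceptance check reduces to counting how many trusted rebuilders attest to the same artifact hash that the archive is serving:

        # Sketch of an "at least X of my N trusted rebuilders" acceptance rule.
        # The names and data shapes here are illustrative assumptions.
        def accept_package(archive_sha256: str,
                           attestations: dict[str, str],
                           threshold: int) -> bool:
            # attestations maps rebuilder name -> sha256 that rebuilder reproduced
            confirmations = sum(1 for digest in attestations.values()
                                if digest == archive_sha256)
            return confirmations >= threshold

        attestations = {
            "rebuilder-a.example.org": "3f5a...",
            "rebuilder-b.example.org": "3f5a...",
            "rebuilder-c.example.org": "9c81...",  # disagrees with the archive
        }
        print(accept_package("3f5a...", attestations, threshold=2))  # True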

    Holger then merged two commits by Jochen Sprickerhof in order to address a fakeroot-related reproducibility issue in the debian-installer, and Jörg Jaspert deployed a patch by Ivo De Decker for a bug originally filed by Holger in February 2025 related to some Debian packages not being archived on snapshot.debian.org.

    Elsewhere, Roland Clobus performed some analysis on the “live” Debian trixie images, which he determined were not reproducible. However, in a follow-up post, Roland happily reports that the issues have been handled. In addition, 145 reviews of Debian packages were added, 12 were updated and 15 were removed this month, adding to our knowledge about identified issues.

    Lastly, Jochen Sprickerhof filed a bug announcing their intention to “binary NMU” a very large number of R programming language packages after a reproducibility-related toolchain bug was fixed.


    Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.


    Julien Malka and Arnout Engelen launched the new hash collection server for NixOS. Aside from improved reporting to help focus reproducible builds efforts within NixOS, it collects build hashes as individually-signed attestations from independent builders, laying the groundwork for further tooling.


    Tool development

    diffoscope version 307 was uploaded to Debian unstable (as well as version 309). These changes included further attempts to automatically deploy to PyPI by liaising with the PyPI developers/maintainers (with this experimental feature). [][][]

    In addition, reprotest versions 0.7.31 and 0.7.32 were uploaded to Debian unstable by Holger Levsen, who also made the following changes:

    • Do not vary the architecture personality if the kernel is not varied. (Thanks to Raúl Cumplido). []
    • Drop the debian/watch file, as Lintian now flags this as error for ‘native’ Debian packages. [][]
    • Bump Standards-Version to 4.7.2, with no changes needed. []
    • Drop the Rules-Requires-Root header as it is no longer required. []

    In addition, Vagrant Cascadian fixed a build failure by removing some extra whitespace from an older changelog entry. []


    Website updates

    Once again, there were a number of improvements made to our website this month including:


    Miscellaneous news


    Software Supply Chain Security of Web3

    Via our mailing list, Martin Monperrus let us know about their recently-published page on the Software Supply Chain Security of Web3. The abstract of their paper is as follows:

    Web3 applications, built on blockchain technology, manage billions of dollars in digital assets through decentralized applications (dApps) and smart contracts. These systems rely on complex, software supply chains that introduce significant security vulnerabilities. This paper examines the software supply chain security challenges unique to the Web3 ecosystem, where traditional Web2 software supply chain problems intersect with the immutable and high-stakes nature of blockchain technology. We analyze the threat landscape and propose mitigation strategies to strengthen the security posture of Web3 systems.

    Their paper lists reproducible builds as one of the mitigating strategies. A PDF of the full text is available to download.


    Upstream patches

    The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



    Finally, if you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. However, you can also get in touch with us via:

    Cryptogram Four Ways AI Is Being Used to Strengthen Democracies Worldwide

    Democracy is colliding with the technologies of artificial intelligence. Judging from the audience reaction at the recent World Forum on Democracy in Strasbourg, the general expectation is that democracy will be the worse for it. We have another narrative. Yes, there are risks to democracy from AI, but there are also opportunities.

    We have just published the book Rewiring Democracy: How AI will Transform Politics, Government, and Citizenship. In it, we take a clear-eyed view of how AI is undermining confidence in our information ecosystem, how the use of biased AI can harm constituents of democracies and how elected officials with authoritarian tendencies can use it to consolidate power. But we also give positive examples of how AI is transforming democratic governance and politics for the better.

    Here are four such stories unfolding right now around the world, showing how AI is being used by some to make democracy better, stronger, and more responsive to people.

    Japan

    Last year, then 33-year-old engineer Takahiro Anno was a fringe candidate for governor of Tokyo. Running as an independent candidate, he ended up coming in fifth in a crowded field of 56, largely thanks to the unprecedented use of an authorized AI avatar. That avatar answered 8,600 questions from voters on a 17-day continuous YouTube livestream and garnered the attention of campaign innovators worldwide.

    Two months ago, Anno-san was elected to Japan’s upper legislative chamber, again leveraging the power of AI to engage constituents—this time answering more than 20,000 questions. His new party, Team Mirai, is also an AI-enabled civic technology shop, producing software aimed at making governance better and more participatory. The party is leveraging its share of Japan’s public funding for political parties to build the Mirai Assembly app, enabling constituents to express opinions on and ask questions about bills in the legislature, and to organize those expressions using AI. The party promises that its members will direct their questioning in committee hearings based on public input.

    Brazil

    Brazil is notoriously litigious, with even more lawyers per capita than the US. The courts are chronically overwhelmed with cases and the resultant backlog costs the government billions to process. Estimates are that the Brazilian federal government spends about 1.6% of GDP per year operating the courts and another 2.5% to 3% of GDP issuing court-ordered payments from lawsuits the government has lost.

    Since at least 2019, the Brazilian government has aggressively adopted AI to automate procedures throughout its judiciary. AI is not making judicial decisions, but aiding in distributing caseloads, performing legal research, transcribing hearings, identifying duplicative filings, preparing initial orders for signature and clustering similar cases for joint consideration: all things to make the judiciary system work more efficiently. And the results are significant; Brazil’s federal supreme court backlog, for example, dropped in 2025 to its lowest levels in 33 years.

    While it seems clear that the courts are realizing efficiency benefits from leveraging AI, there is a postscript to the courts’ AI implementation project over the past five-plus years: the litigators are using these tools, too. Lawyers are using AI assistance to file cases in Brazilian courts at an unprecedented rate, with new cases growing by nearly 40% in volume over the past five years.

    It’s not necessarily a bad thing for Brazilian litigators to regain the upper hand in this arms race. It has been argued that litigation, particularly against the government, is a vital form of civic participation, essential to the self-governance function of democracy. Other democracies’ court systems should study and learn from Brazil’s experience and seek to use technology to maximize the bandwidth and liquidity of the courts to process litigation.

    Germany

    Now, we move to Europe and innovations in informing voters. Since 2002, the German Federal Agency for Civic Education has operated a non-partisan voting guide called Wahl-o-Mat. Officials convene an editorial team of 24 young voters (under 26 and selected for diversity) with experts from science and education to develop a slate of 80 questions. The questions are put to all registered German political parties. The responses are narrowed down to 38 key topics and then published online in a quiz format that voters can use to identify the party whose platform best matches their views.

    In the past two years, outside groups have been innovating alternatives to the official Wahl-o-Mat guide that leverage AI. First came Wahlweise, a product of the German AI company AIUI. Second, students at the Technical University of Munich deployed an interactive AI system called Wahl.chat. This tool was used by more than 150,000 people within the first four months. In both cases, instead of having to read static webpages about the positions of various political parties, citizens can engage in an interactive conversation with an AI system to more easily get the same information contextualized to their individual interests and questions.

    However, German researchers studying the reliability of such AI tools ahead of the 2025 German federal election raised significant concerns about bias and “hallucinations”—AI tools making up false information. Acknowledging the potential of the technology to increase voter informedness and party transparency, the researchers recommended adopting scientific evaluations comparable to those used in the Agency for Civic Education’s official tool to improve and institutionalize the technology.

    United States

    Finally, the US—in particular, California, home to CalMatters, a non-profit, nonpartisan news organization. Since 2023, its Digital Democracy project has been collecting every public utterance of California elected officials—every floor speech, comment made in committee and social media post, along with their voting records, legislation, and campaign contributions—and making all that information available in a free online platform.

    CalMatters this year launched a new feature that takes this kind of civic watchdog function a big step further. Its AI Tip Sheets feature uses AI to search through all of this data, looking for anomalies, such as a change in voting position tied to a large campaign contribution. These anomalies appear on a webpage that journalists can access to give them story ideas and a source of data and analysis to drive further reporting.

    This is not AI replacing human journalists; it is a civic watchdog organization using technology to feed evidence-based insights to human reporters. And it’s no coincidence that this innovation arose from a new kind of media institution—a non-profit news agency. As the watchdog function of the fourth estate continues to be degraded by the decline of newspapers’ business models, this kind of technological support is a valuable contribution to help a reduced number of human journalists retain something of the scope of action and impact our democracy relies on them for.

    These are just four of many stories from around the globe of AI helping to make democracy stronger. The common thread is that the technology is distributing rather than concentrating power. In all four cases, it is being used to assist people performing their democratic tasks—politics in Japan, litigation in Brazil, voting in Germany and watchdog journalism in California—rather than replacing them.

    In none of these cases is the AI doing something that humans can’t perfectly competently do. But in all of these cases, we don’t have enough available humans to do the jobs on their own. A sufficiently trustworthy AI can fill in gaps: amplify the power of civil servants and citizens, improve efficiency, and facilitate engagement between government and the public.

    One of the barriers to realizing this vision more broadly is the AI market itself. The core technologies are largely being created and marketed by US tech giants. We don’t know the details of their development: on what material they were trained, what guardrails are designed to shape their behavior, what biases and values are encoded into their systems. And, even worse, we don’t get a say in the choices associated with those details or how they should change over time. In many cases, it’s an unacceptable risk to use these for-profit, proprietary AI systems in democratic contexts.

    To address that, we have long advocated for the development of “public AI”: models and AI systems that are developed under democratic control and deployed for public benefit, not sold by corporations to benefit their shareholders. The movement for this is growing worldwide.

    Switzerland has recently released the world’s most powerful and fully realized public AI model. It’s called Apertus, and it was developed jointly by public Swiss institutions: the universities ETH Zurich and EPFL, and the Swiss National Supercomputing Centre (CSCS). The development team has made it entirely open source—open data, open code, open weights—and free for anyone to use. No illegally acquired copyrighted works were used in its training. It doesn’t exploit poorly paid human laborers from the global south. Its performance is about where the large corporate giants were a year ago, which is more than good enough for many applications. And it demonstrates that it’s not necessary to spend trillions of dollars creating these models. Apertus takes a huge step toward realizing the vision of an alternative to big tech-controlled corporate AI.

    AI technology is not without its costs and risks, and we are not here to minimize them. But the technology has significant benefits as well.

    AI is inherently power-enhancing, and it can magnify what the humans behind it want to do. It can enhance authoritarianism as easily as it can enhance democracy. It’s up to us to steer the technology in that better direction. If more citizen watchdogs and litigators use AI to amplify their power to oversee government and hold it accountable, if more political parties and election administrators use it to engage meaningfully with and inform voters and if more governments provide democratic alternatives to big tech’s AI offerings, society will be better off.

    This essay was written with Nathan E. Sanders, and originally appeared in The Guardian.

    365 TomorrowsBetter Than Human

    Author: Taylor Pittman They moved around the room, their bodies jerking at odd moments, their voices slipping into mechanical ranges as they served beverages. She could not stop her eyes from trapping the waiters in her periphery. If she looked close enough, she could see the stitch pattern embedded behind their ears or across their […]

    The post Better Than Human appeared first on 365tomorrows.

    Worse Than FailureThe Thanksgiving Shakedown

    On Thanksgiving Day, Ellis had cuddled up with her sleeping cat on the couch to send holiday greetings to friends. There in her inbox, lurking between several well wishes, was an email from an unrecognized sender with the subject line, Final Account Statement. Upon opening it, she read the following:

    [Image: an 1880s stock delivery form agreement]

    Dear Ellis,

    Your final account statement dated -1 has been sent to you. Please log into your portal and review your balance due totaling #TOTAL_CHARGES#.

    Payment must be received within 30 days of this notice to avoid collection. You may submit payment online via [Payment Portal Link] or by mail to:

    Chamberlin Apartments
    123 Main Street
    Anytown US 12345

    If you believe there is an error on your account, please contact us immediately at 212-555-1212.

    Thank you for your prompt attention to this matter.

    Chamberlin Apartments

    Ellis had indeed rented an apartment managed by this company, but had moved out 16 years earlier. She'd never been late with a payment for anything in her life. What a time to receive such a thing, at the start of a long holiday weekend when no one would be able to do anything about it for the next 4 days!

    She truly had so much to be grateful for that Thanksgiving, and here was yet more for her list: her broad technical knowledge, her experience working in multiple IT domains, and her many years of writing up just these sorts of stories for The Daily WTF. All of this added up to her laughing instead of panicking. She could just imagine the poor intern who'd hit "Send" by mistake. She also imagined she wasn't the only person who'd received this message. Rightfully scared and angry callers would soon be hammering that phone number, and Ellis was further grateful that she wasn't the one who had to pick up.

    "I'll wait for the apology email!" she said out loud with a knowing smile on her face, closing out the browser tab.

    Ellis moved on physically and mentally, going forward with her planned Thanksgiving festivities without giving it another thought. The next morning, she checked her inbox with curious anticipation. Had there been a retraction, a please disregard?

    No. Instead, there were still more emails from the same sender. The second, sent 7 hours after the first, bore the subject line Second Notice - Outstanding Final Balance:

    Dear Ellis,

    Our records show that your final balance of #TOTAL_CHARGES# from your residency at your previous residence remains unpaid.

    This is your second notice. Please remit payment in full or contact us to discuss the balance to prevent your account from being sent to collections.

    Failure to resolve the balance within the next 15 days may result in your account being referred to a third-party collections agency, which could impact your credit rating.

    To make payment or discuss your account, please contact us at 212-555-1212 or accounting@chamapts.com.

    Sincerely,

    Chamberlin Apartments

    The third, sent 6 and a half hours later, threatened Final Notice - Account Will Be Sent to Collections.

    Dear Ellis,

    Despite previous notices, your final account balance remains unpaid.

    This email serves as final notice before your account is forwarded to a third-party collections agency for recovery. Once transferred, we will no longer be able to accept payment directly or discuss the account.

    To prevent this, payment of #TOTAL_CHARGES# must be paid in full by #CRITICALDATE#.

    Please submit payment immediately. Please contact 212-555-1212 to confirm your payment.

    Sincerely,

    Chamberlin Apartments

    It was almost certainly a mistake, but still rather spooky to someone who'd never been in such a situation. There was solace in the thought that, if they really did try to force Ellis to pay #TOTAL_CHARGES# on the basis of these messages, anyone would find it absurd that all 3 notices were sent mere hours apart, on a holiday no less. The first two had also mentioned 30 and 15 days to pay up, respectively.

    Suddenly remembering that she probably wasn't the only recipient of these obvious form emails, Ellis thought to check her local subreddit. Sure enough, there was already a post revealing the range of panic and bewilderment they had wrought among hundreds, if not thousands. Current and more recent former tenants had actually seen #TOTAL_CHARGES# populated with the correct amount of monthly rent. People feared everything from phishing attempts to security breaches.

    It wasn't until later that afternoon that Ellis finally received the anticipated mea culpa:

    We are reaching out to sincerely apologize for the incorrect collection emails you received. These messages were sent in error due to a system malfunction that released draft messages to our entire database.

    Please be assured of the following:
    The recent emails do not reflect your actual account status.
    If your account does have an outstanding balance, that status has not changed, and you would have already received direct and accurate communication from our office.
    Please disregard all three messages sent in error. They do not require any action from you.

    We understand that receiving these messages, especially over a holiday, was upsetting and confusing, and we are truly sorry for the stress this caused. The issue has now been fully resolved, and our team has worked with our software provider to stop all queued messages and ensure this does not happen again.

    If you have any questions or concerns, please feel free to email leasing@chamapts.com. Thank you for your patience and understanding.

    All's well that ends well. Ellis thanked the software provider's "system malfunction," whoever or whatever it may've been, that had granted the rest of us a bit of holiday magic to take forward for all time.


    Planet DebianMichael Ablassmeier: libvirt 11.10 VIR_DOMAIN_BACKUP_BEGIN_PRESERVE_SHUTDOWN_DOMAIN

    With libvirt 11.10, a new flag for the backup operation has been introduced: VIR_DOMAIN_BACKUP_BEGIN_PRESERVE_SHUTDOWN_DOMAIN.

    According to the documentation “It instructs libvirt to avoid termination of the VM if the guest OS shuts down while the backup is still running. The VM is in that scenario reset and paused instead of terminated allowing the backup to finish. Once the backup finishes the VM process is terminated.”
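
    For illustration, here is a minimal sketch of my own (not from the libvirt documentation) showing how the flag might be passed via the libvirt Python bindings, assuming bindings built against libvirt 11.10 or later expose the constant under the same name; the domain name and target path are placeholders:

    import libvirt

    # Connect to the local QEMU driver and look up the guest (placeholder name).
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("testvm")

    # Push-mode backup: libvirt itself writes the disk contents to the target file.
    backup_xml = """
    <domainbackup>
      <disks>
        <disk name='vda' type='file'>
          <target file='/var/tmp/vda.backup.qcow2'/>
          <driver type='qcow2'/>
        </disk>
      </disks>
    </domainbackup>
    """

    # With the new flag, a guest-initiated shutdown no longer terminates the
    # backup job: the VM is reset and paused until the backup completes.
    flags = libvirt.VIR_DOMAIN_BACKUP_BEGIN_PRESERVE_SHUTDOWN_DOMAIN
    dom.backupBegin(backup_xml, None, flags)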

    Support for this has been added in virtnbdbackup 2.40.

    ,

    Planet DebianSimon Josefsson: Guix on Trisquel & Ubuntu for Reproducible CI/CD Artifacts

    Last week I published Guix on Debian container images that prepared for today’s announcement of Guix on Trisquel/Ubuntu container images.

    I have published images with a reasonably modern Guix for Trisquel 11 aramo, Trisquel 12 ecne, Ubuntu 22.04 and Ubuntu 24.04. The Ubuntu images are available for both amd64 and arm64, but unfortunately Trisquel arm64 containers aren’t available yet, so those images are amd64-only. Images for ppc64el and riscv64 are a work in progress. The currently supported container names are:

    registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
    registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel12-guix
    registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu22.04-guix
    registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu24.04-guix

    Or you prefer guix-on-dpkg on Docker Hub:

    docker.io/jas4711/guix-on-dpkg:trisquel11-guix
    docker.io/jas4711/guix-on-dpkg:trisquel12-guix
    docker.io/jas4711/guix-on-dpkg:ubuntu22.04-guix
    docker.io/jas4711/guix-on-dpkg:ubuntu24.04-guix

    You may use them as follows. See the guix-on-dpkg README for how to start guix-daemon and install packages.

    jas@kaka:~$ podman run -it --hostname guix --rm registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
    root@guix:/# head -1 /etc/os-release 
    NAME="Trisquel GNU/Linux"
    root@guix:/# guix describe
      guix 136fc8b
        repository URL: https://gitlab.com/debdistutils/guix/mirror.git
        branch: master
        commit: 136fc8bfe91a64d28b6c54cf8f5930ffe787c16e
    root@guix:/# 

    You may now be asking yourself: why? Fear not, gentle reader, because having two container images of roughly similar software is a great tool for attempting to build software artifacts reproducibly, and for comparing the results to spot differences. Obviously.

    I have been using this pattern to get reproducible tarball artifacts for several software releases for around a year and a half, since libntlm 1.8.

    Let’s walk through how to set up a CI/CD pipeline that will build a piece of software in four different jobs, for Trisquel 11/12 and Ubuntu 22.04/24.04. I am in the process of learning Codeberg/Forgejo CI/CD, so I am still using GitLab CI/CD here, but the concepts should be the same regardless of platform. Let’s start by defining a job skeleton:

    .guile-gnutls: &guile-gnutls
      before_script:
      - /root/.config/guix/current/bin/guix-daemon --version
      - env LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild $GUIX_DAEMON_ARGS &
      - GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
      - type guix
      - guix --version
      - guix describe
      - time guix install --verbosity=0 wget gcc-toolchain autoconf automake libtool gnutls guile pkg-config
      - time apt-get update
      - time apt-get install -y make git texinfo
      - GUIX_PROFILE="/root/.guix-profile"; . "$GUIX_PROFILE/etc/profile"
      script:
      - git clone https://codeberg.org/guile-gnutls/guile-gnutls.git
      - cd guile-gnutls
      - git checkout v5.0.1
      - ./bootstrap
      - ./configure
      - make V=1
      - make V=1 check VERBOSE=t
      - make V=1 dist
      after_script:
      - mkdir -pv out/$CI_JOB_NAME_SLUG/src
      - mv -v guile-gnutls/*-src.tar.* out/$CI_JOB_NAME_SLUG/src/
      - mv -v guile-gnutls/*.tar.* out/$CI_JOB_NAME_SLUG/
      artifacts:
        paths:
        - out/**

    This installs some packages, clones guile-gnutls (it could be any project; that’s just an example), builds it, and returns tarball artifacts. The artifacts are the git-archive and make dist tarballs.

    Let’s instantiate the skeleton into four jobs, running the Trisquel 11/12 jobs on amd64 and the Ubuntu 22.04/24.04 jobs on arm64 for fun.

    guile-gnutls-trisquel11-amd64:
      tags: [ saas-linux-medium-amd64 ]
      image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
      extends: .guile-gnutls
    
    guile-gnutls-ubuntu22.04-arm64:
      tags: [ saas-linux-medium-arm64 ]
      image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu22.04-guix
      extends: .guile-gnutls
    
    guile-gnutls-trisquel12-amd64:
      tags: [ saas-linux-medium-amd64 ]
      image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel12-guix
      extends: .guile-gnutls
    
    guile-gnutls-ubuntu24.04-arm64:
      tags: [ saas-linux-medium-arm64 ]
      image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu24.04-guix
      extends: .guile-gnutls

    Running this pipeline will result in artifacts that you want to confirm for reproducibility. Let’s add a pipeline job to do the comparison:

    guile-gnutls-compare:
      image: alpine:latest
      needs: [ guile-gnutls-trisquel11-amd64,
               guile-gnutls-trisquel12-amd64,
               guile-gnutls-ubuntu22.04-arm64,
               guile-gnutls-ubuntu24.04-arm64 ]
      script:
      - cd out
      - sha256sum */*.tar.* */*/*.tar.* | sort | grep    -- -src.tar.
      - sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
      - sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
      - sha256sum */*.tar.* */*/*.tar.* | grep    -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
      - sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
    # Confirm modern git-archive tarball reproducibility
      - cmp guile-gnutls-trisquel12-amd64/src/*.tar.gz guile-gnutls-ubuntu24-04-arm64/src/*.tar.gz
    # Confirm old git-archive (export-subst but long git describe) tarball reproducibility
      - cmp guile-gnutls-trisquel11-amd64/src/*.tar.gz guile-gnutls-ubuntu22-04-arm64/src/*.tar.gz
    # Confirm 'make dist' generated tarball reproducibility
      - cmp guile-gnutls-trisquel11-amd64/*.tar.gz guile-gnutls-ubuntu22-04-arm64/*.tar.gz
      - cmp guile-gnutls-trisquel12-amd64/*.tar.gz guile-gnutls-ubuntu24-04-arm64/*.tar.gz
      artifacts:
        when: always
        paths:
        - ./out/**

    Look how beautiful, almost like ASCII art! The commands print SHA256 checksums of the artifacts, sorted in a couple of ways, and then compare the relevant artifacts. What would the output of such a run be, you may wonder? You can look for yourself in the guix-on-dpkg pipeline, but here is the gist of it:

    $ cd out
    $ sha256sum */*.tar.* */*/*.tar.* | sort | grep    -- -src.tar.
    79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
    79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-ubuntu22-04-arm64/src/guile-gnutls-v5.0.1-src.tar.gz
    b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
    b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-ubuntu24-04-arm64/src/guile-gnutls-v5.0.1-src.tar.gz
    $ sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
    1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
    1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-ubuntu22-04-arm64/guile-gnutls-5.0.1.tar.gz
    bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
    bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-ubuntu24-04-arm64/guile-gnutls-5.0.1.tar.gz
    $ sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
          2 bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
          2 b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
          2 79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
          2 1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
    $ sha256sum */*.tar.* */*/*.tar.* | grep    -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
          2 79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
          2 b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
    $ sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
          2 1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
          2 bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
    $ cmp guile-gnutls-trisquel12-amd64/src/*.tar.gz guile-gnutls-ubuntu24-04-arm64/src/*.tar.gz
    $ cmp guile-gnutls-trisquel11-amd64/src/*.tar.gz guile-gnutls-ubuntu22-04-arm64/src/*.tar.gz
    $ cmp guile-gnutls-trisquel11-amd64/*.tar.gz guile-gnutls-ubuntu22-04-arm64/*.tar.gz
    $ cmp guile-gnutls-trisquel12-amd64/*.tar.gz guile-gnutls-ubuntu24-04-arm64/*.tar.gz

    That’s it for today, but stay tuned for more updates on using Guix in containers, and remember; Happy Hacking!

    Planet DebianDirk Eddelbuettel: duckdb-mlpack 0.0.5: Added kmeans, version helpers, documentation

    A new release of the still-recent duckdb extension for mlpack, the C++ header-only library for machine learning, was merged into the duckdb community extensions repo today, and has been updated at its duckdb ‘mlpack’ extension page.

    This release 0.0.5 adds one new method: kmeans clustering. We also added two version accessors, one each for mlpack and armadillo. We found during the work on random forests (added in 0.0.4) that the multithreaded random number generation was not quite right in the respective upstream codes. This has by now been corrected in armadillo 15.2.2 as well as in the trunk version of mlpack, so if you build with those and set a seed, then your forests and classifications will be stable across reruns. We also added a second state variable, mlpack_silent, that can be used to suppress even the minimal prediction quality summary some methods show, and we expanded the documentation.
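
    As a rough illustration of how one might poke at the extension from Python: installing and loading a community extension is standard DuckDB syntax, but the version-accessor names and the mlpack_silent setting shown below are my assumptions rather than the documented interface, so consult the extension page for the real one.

    import duckdb

    con = duckdb.connect()

    # Installing and loading a community extension is standard DuckDB syntax.
    con.sql("INSTALL mlpack FROM community;")
    con.sql("LOAD mlpack;")

    # Hypothetical names for the two new version accessors; check the
    # extension page for the actual functions.
    print(con.sql("SELECT mlpack_version(), armadillo_version();"))

    # Hypothetical use of the mlpack_silent state variable that suppresses
    # the minimal prediction quality summaries some methods print.
    con.sql("SET VARIABLE mlpack_silent = true;")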

    For more details, see the repo for code, issues and more, and the extension page for more about this duckdb community extension.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

    Worse Than FailureCodeSOD: The Destination Dir

    Darren is supporting a Delphi application in the current decade. Which is certainly a situation to be in. He writes:

    I keep trying to get out of doing maintenance on legacy Delphi applications, but they keep pulling me back in.

    The bit of code Darren sends us isn't the largest WTF, but it's a funny mistake, and it's a funny mistake that's been sitting in the codebase for decades at this point. And as we all know, jokes only get funnier with age.

    FileName := DestDir + ExtractFileName(FileName);
    if FileExists(DestDir + ExtractFileName(FileName)) then
    begin
      ...
    end;
    

    This code is inside a module that copies a file from a remote server to the local host. It starts by sanitizing FileName, using ExtractFileName to strip off any path components and prepending DestDir in their place, storing the result back in the FileName variable.

    And they liked doing that so much, they go ahead and do it again in the if statement, repeating the exact same process.

    Darren writes:

    As Homer Simpson said "Lather, rinse, and repeat. Always repeat."


    365 TomorrowsSweat Dreams

    Author: Majoki To hell with pleasant dreams. Long live nightmares! Marcus looked at the motto writ large on the smart panel of DreamOn’s boardroom. The corporation’s board was gathered to solicit his opinion. They were going to want his approval. They were going to seek his blessing. He’d gladly give it to them, even knowing […]

    The post Sweat Dreams appeared first on 365tomorrows.

    ,

    Cryptogram Banning VPNs

    This is crazy. Lawmakers in several US states are contemplating banning VPNs, because…think of the children!

    As of this writing, Wisconsin lawmakers are escalating their war on privacy by targeting VPNs in the name of “protecting children” in A.B. 105/S.B. 130. It’s an age verification bill that requires all websites distributing material that could conceivably be deemed “sexual content” to both implement an age verification system and also to block the access of users connected via VPN. The bill seeks to broadly expand the definition of materials that are “harmful to minors” beyond the type of speech that states can prohibit minors from accessing, potentially encompassing things like depictions and discussions of human anatomy, sexuality, and reproduction.

    The EFF link explains why this is a terrible idea.

    Cryptogram Friday Squid Blogging: Flying Neon Squid Found on Israeli Beach

    A meter-long flying neon squid (Ommastrephes bartramii) was found dead on an Israeli beach. The species is rare in the Mediterranean.

    As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

    Blog moderation policy.

    Worse Than FailureCodeSOD: Formula Length

    Remy's Law of Requirements Gathering states "No matter what the requirements document says, what your users really wanted was Excel." This has a corollary: "Any sufficiently advanced Excel file is indistinguishable from software."

    Given enough time, any Excel file whipped up by any user can transition from "useful" to "mission critical software" before anyone notices. That's why Nemecsek was tasked with taking a pile of Excel spreadsheets and converting them into "real" software, which could be maintained and supported by software engineers.

    Nemecsek writes:

    This is just one of the formulas they asked me to work on, and not the longest one.

    Nemecsek says this is a "formula", but I suspect it's a VBA macro. In reality, it doesn't matter.

    InitechNeoDTMachineDevice.InitechNeoDTActivePartContainer(0).InitechNeoDTActivePart(0).
    InitechNeoDTActivePartPartContainer(0).InitechNeoDTActivePartPart(iPart).Losses = 
    calcLossesInPart(InitechNeoDTMachineDevice.InitechNeoDTActivePartContainer(0).
    InitechNeoDTActivePart(0).RatedFrequency, InitechNeoDTMachineDevice.
    InitechNeoDTActivePartContainer(0).InitechNeoDTActivePart(0).InitechNeoDTActivePartPartContainer(0).
    InitechNeoDTActivePartPart(iPart).RadialPositionToMainDuct, InitechNeoDTMachineDevice.
    InitechNeoDTActivePartContainer(0).InitechNeoDTActivePart(0).InitechNeoDTActivePartPartContainer(0).
    InitechNeoDTActivePartPart(iPart).InitechNeoDTActivePartPartSectionContainer(0).
    InitechNeoDTActivePartPartSection(0).InitechNeoDTActivePartPartConductorComposition(0).IsTransposed, 
    InitechNeoDTMachineDevice.InitechNeoDTActivePartContainer(0).InitechNeoDTActivePart(0).
    InitechNeoDTActivePartPartContainer(0).InitechNeoDTActivePartPart(iPart).
    InitechNeoDTActivePartPartSectionContainer(0).InitechNeoDTActivePartPartSection(0).
    InitechNeoDTActivePartPartConductorComposition(0).ParallelRadialCount, InitechNeoDTMachineDevice.
    InitechNeoDTActivePartContainer(0).InitechNeoDTActivePart(0).InitechNeoDTActivePartPartContainer(0).
    InitechNeoDTActivePartPart(iPart).InitechNeoDTActivePartPartSectionContainer(0).
    InitechNeoDTActivePartPartSection(0).InitechNeoDTActivePartPartConductorComposition(0).
    ParallelAxialCount, InitechNeoDTMachineDevice.InitechNeoDTActivePartContainer(0).
    InitechNeoDTActivePart(0).InitechNeoDTActivePartPartContainer(0).InitechNeoDTActivePartPart(iPart).
    InitechNeoDTActivePartPartSectionContainer(0).InitechNeoDTActivePartPartSection(0).
    InitechNeoDTActivePartPartConductorComposition(0).InitechNeoDTActivePartPartConductor(0).Type, 
    InitechNeoDTMachineDevice.InitechNeoDTActivePartContainer(0).InitechNeoDTActivePart(0).
    InitechNeoDTActivePartPartContainer(0).InitechNeoDTActivePartPart(iPart).
    InitechNeoDTActivePartPartSectionContainer(0).InitechNeoDTActivePartPartSection(0).
    InitechNeoDTActivePartPartConductorComposition(0).InitechNeoDTActivePartPartConductor(0).
    DimensionRadialElectric, InitechNeoDTMachineDevice.InitechNeoDTActivePartContainer(0).
    InitechNeoDTActivePart(0).InitechNeoDTActivePartPartContainer(0).InitechNeoDTActivePartPart(iPart).
    InitechNeoDTActivePartPartSectionContainer(0).InitechNeoDTActivePartPartSection(0).
    InitechNeoDTActivePartPartConductorComposition(0).InitechNeoDTActivePartPartConductor(0).
    DimensionAxialElectric + InitechNeoDTMachineDevice.InitechNeoDTActivePartContainer(0).
    InitechNeoDTActivePart(0).InitechNeoDTActivePartPartContainer(0).InitechNeoDTActivePartPart(iPart).
    InitechNeoDTActivePartPartSectionContainer(0).InitechNeoDTActivePartPartSection(0).
    InitechNeoDTActivePartPartConductorComposition(0).InitechNeoDTActivePartPartConductor(0).InsulThickness, 
    getElectricConductivityAtTemperatureT1(InitechNeoDTMachineDevice.InitechNeoDTActivePartContainer(0).
    InitechNeoDTActivePart(0).InitechNeoDTActivePartPartContainer(0).InitechNeoDTActivePartPart(iPart).
    InitechNeoDTActivePartPartSectionContainer(0).InitechNeoDTActivePartPartSection(0).
    InitechNeoDTActivePartPartConductorComposition(0).InitechNeoDTActivePartPartConductor(0).
    InitechNeoDTActivePartPartConductorRawMaterial(0).ElectricConductivityT0, InitechNeoDTMachineDevice.
    InitechNeoDTActivePartContainer(0).InitechNeoDTActivePart(0).InitechNeoDTActivePartPartContainer(0).
    InitechNeoDTActivePartPart(iPart).InitechNeoDTActivePartPartSectionContainer(0).
    InitechNeoDTActivePartPartSection(0).InitechNeoDTActivePartPartConductorComposition(0).
    InitechNeoDTActivePartPartConductor(0).InitechNeoDTActivePartPartConductorRawMaterial(0).MaterialFactor, 
    InitechNeoDTMachineDevice.InitechNeoDTActivePartContainer(0).InitechNeoDTActivePart(0).
    InitechNeoDTActivePartPartContainer(0).InitechNeoDTActivePartPart(iPart).
    InitechNeoDTActivePartPartSectionContainer(0).InitechNeoDTActivePartPartSection(0).
    InitechNeoDTActivePartPartConductorComposition(0).InitechNeoDTActivePartPartConductor(0).
    InitechNeoDTActivePartPartConductorRawMaterial(0).ReferenceTemperatureT0, InitechNeoDTMachineDevice.
    ReferenceTemperature), LayerNumberRatedVoltage, InitechNeoDTMachineDevice.InitechNeoDTActivePartContainer(0).
    InitechNeoDTActivePart(0).InitechNeoDTActivePartPartContainer(0).InitechNeoDTActivePartPart(iPart).
    InitechNeoDTActivePartPartLayerContainer(0),InitechNeoDTMachineDevice.InitechNeoDTActivePartContainer(0).
    InitechNeoDTActivePart(0).RFactor)
    

    Line breaks added to try and keep horizontal scrolling sane. This arguably hurts readability, in the same way that beating a dead horse arguably hurts the horse.

    This may not be the longest one, but it's certainly painful. I do not know exactly what this is doing, and frankly, I do not want to.


    365 TomorrowsFinder’s Fee

    Author: Julian Miles, Staff Writer “Where the fook now?” “Jobsheet says left of the second moon and can’t miss it.” “Yeah yeah. Every bloody time they take the amateur finders word instead of asking for location data. Not like it’s a difficult ask: it’s on the display right next to the comms console on every […]

    The post Finder’s Fee appeared first on 365tomorrows.

    ,

    365 TomorrowsThe Poker Game

    Author: David Sydney It was a Friday night poker game, with only three left in the hand—Mel, Otto, and Ralph. Ralph, losing all night, was down to his last few pathetic chips. He couldn’t believe it. Mel had dealt him four aces. His problems were over. Finally, he was about to clean up. “Hey, did […]

    The post The Poker Game appeared first on 365tomorrows.

    ,

    365 TomorrowsI am Computer

    Author: David Dumouriez “Good afternoon, Zak,” the voice said. “Alright?” Zak replied. “Had a good day?” “Ah, you know. The usual. Bor-ing!” There was a tinkly laugh. “Got any homework?” “Homework? Just a minute … Yeah. Some crap on the digestive system.” “Bullet points?” “That’ll do.” The words spilled out onto the screen. “Bit long […]

    The post I am Computer appeared first on 365tomorrows.

    David BrinFour specific notions that could help to save us all

    Last week I issued a three-parter that proposed several dozen fresh tactics for the Enlightenment side of our current culture war. And as a unifying umbrella, I made them part of a "Democratic Newer Deal"... both satirizing and learning from the most agile polemical maneuver of the last 40 years - the so-called 'GOP Contract With America.'

    Whether or not you liked my using that overall umbrella, the thirty or so proposals merit discussion in their own right! Some of them -- maybe ten or so -- are ideas that have been floating around on the moderate-liberal agenda, but that I've meddled with in order to add some punch, or judo spice.  Or zing.

             Others are wholly my own.


    Some of the proposals take the form of internal reforms that Congress could enact on the very first day of a session whose majority consists of sane and decent people.


    For example, pause and envision this reform and procedural rule. One which no future GOP-led Congress would be able to retract! 


    Distributed subpoena power: We shall establish a permanent rule and tradition that each member of Congress will get one peremptory subpoena per year, plus adequate funding to compel a witness to appear and testify for up to five hours before a subcommittee of which she or he is a member. In this way, each member will be encouraged to investigate as a sovereign representative and not just as a party member, ensuring that Congress will never again betray its Constitutional duty of investigation and oversight, even when the same party holds both Congress and the Executive.


    Think about that for a sec. Very soon, each Representative or Senator would view that personal, peremptory subpoena -- whether one per year or per session -- as a treasured and jealously-guarded prerogative of office. Possibly useful to their party or to confront major issues, or else to grandstand for the folks back home. Either way, they will balk at any attempt by future party leaders to terminate the privilege. And thus it could become permanent. And the minority will never again be barred from calling witnesses to interrogate the majority.


    Or look at another internal reform that I'll talk about next time... to reconstitute the advisory bodies for science and fact that used to serve Congress, but were banished by Gingrich and Hastert and company, because... well... this Republican Party despises facts.



    Other proposals would be legislated LAWS that seem desperately -- even existentially -- needed for the U.S. republic! Like this one I have offered annually for the last fifteen years:

     

    We shall create the office of Inspector General of the United States, or IGUS, who will head the U.S. Inspectorate, a uniformed agency akin to the Public Health Service, charged with protecting the ethical and law-abiding health of government.  Henceforth, the inspectors-general in all government agencies, including military judge-advocates general (JAGs) will be appointed by and report to IGUS, instead of serving beholden to the whim of the cabinet or other officers that they are supposed to inspect. IGUS will advise the President and Congress concerning potential breaches of the law. IGUS will provide protection for whistle-blowers and safety for officials or officers refusing to obey unlawful orders.


    Wouldn't everything be better if we had IGUS right now? Go back and read the full text.


    And then there's this one - a way to bypass the corrupt Citizens United ruling by the suborned Supreme Court - using a clever and totally legal means that is supported factually by Robert Reich. Though I think my approach is more likely to get passed... and to work.

     

    THE POLITICAL REFORM ACT will ensure that the nation’s elections take place in a manner that citizens can trust and verify.  Political interference in elections will be a federal crime.  Strong auditing procedures and transparency will be augmented by whistleblower protections. All voting machines will be paper auditable. New measures will distance government officials from lobbyists.  


    Campaign finance reform will reduce the influence of Big Money over politicians. The definition of a ‘corporation’ shall be clarified: so that corporations are neither ‘persons’ nor entitled to use money or other means to meddle in politics, nor to coerce their employees to act politically.


    There are others, like how to affordably get every child in America insured under Medicare, while we argue over going the rest of the way. We'll get to that amazingly simple method next time.


    But here's another one that is super timely because - as reported by the Strategic News Service - "Huge new botnets with 40M+ nodes are available to criminals on the dark web..." That's Forty MILLION computers around the world - including possibly the one you are now using to view this - that have been suborned and turned into cryptic nodes for major cyber crime.


    Indeed, we are far more open to cyber attacks than ever, now that the Cybersecurity and Infrastructure Security Agency (CISA) has been downsized by a third! And the Cyber Safety Review Board (CSRB) dissolved, and the Critical Infrastructure Partnership Advisory Council (CIPAC) terminated. And many counter-terror agents have been (suspiciously) re-assigned. Hence, here's a reform that might address that... and it might - if pushed urgently - even pass this good-for-nothing Congress.


    THE CYBER HYGIENE ACT: Adjusting liability laws for a new and perilous era, citizens and small companies whose computers are infested and used by ‘botnets’ to commit crimes shall be deemed immune from liability for resulting damages, providing that they download and operate a security program from one of a dozen companies that have been vetted and approved for effectiveness by the US Department of Commerce. Likewise, companies that release artificial intelligence programs shall face lessened liability if those programs persistently declare their provenance and artificiality and potential dangers. 



    Again... these and maybe 30 more are to be found in my big series on a proposed "Newer Deal." I'll try to repost and appraise each of them over the next few weeks. 


    Almost any of them would be winning issues for the Democrats, especially if they were parsed right!  Say, in a truly workable 'deal' for the American people...

         ...and for our children's future.



           == Political notes ==


    While we all should be impressed with Gavin Newsom's people for expertly trolling old Two Scoops, it's not the core tactic I have recommended for 20 years. Though one fellow who seems to be stabbing in the right general direction is Jimmy Kimmel, who keeps offering to hold public, televised challenges to check the factuality of foxite yammerings. 

     

    Kimmel’s latest has been to satirically take on Trump's crowing about his 'aced' cognitive test. That test (which is not for IQ, but to evaluate senility or dementia) was accompanied by yowling that two female Democrat Reps were 'low-IQ.' Kimmel's offer of a televised IQ vs dementia test is brilliant. It'll never happen. But brilliant. In fact, Kimmel's offer of a televised mental test is a version of my Wager Challenge.


    The key feature is REPETITION! The KGB-supported foxite jibberers have a tactic to evade accountability to facts: point at something else and change the subject. Yet no Dem - not even brilliant ones like Pete B and AOC - ever understands the power of tenacious repetition. Ensuring that a single lie - or at most a dozen - gets hammered over and over again.

    All right, they ARE doing that with "Release the Epstein files!" Will they learn from that example to focus? To actually focus? And yes, demanding $$$ escrowed wager stakes can make it a matter of macho honor... honor that they always, always lose, as the weenie liars that they are. 
     


    ,

    David Brin 2A New Deal with the American People

    Political Tactics that Might Work

    In my earlier postings, Part One and Part Two, I aimed to study an old – though successful – political tactic that was concocted and executed with great skill by a rather different version of Republicans. A tactic that later dissolved into a swill of broken promises, after achieving Power.

    So, shall we wind this up with a shopping list of our own?  What follows is a set of promises – a contract of our own, aiming for the spirit of FDR’s New Deal – with the citizens of America. 

    First, yes. It is hard to see, in today’s ruling coalition of kleptocrats, fanatics and liars, any of the genuinely sober sincerity that many Americans thought they could sense coming from Newt Gingrich and the original wave of “neoconservatives.”  Starting with Dennis “Never Negotiate” Hastert, the GOP leadership caste spiraled into ever-accelerating scandal and corruption.

    Still, I propose to ponder what a “Democratic Newest Deal for America” might look like!  

    –       Exposing hypocrisy and satirizing the failure of that earlier “contract” …

    –       while using its best parts to appeal to sincere moderates and conservatives …

    –       while firmly clarifying the best consensus liberal proposals…

    –       while offering firm methods to ensure that any reforms actually take effect and don’t just drift away.

    Remember that this alternative “contract” – or List of Democratic Intents – will propose reforms that are of real value… but also repeatedly highlight GOP betrayals.

    Might it be worth testing before some focus groups?

                      A Draft: Democratic Deal for America

    As Democratic Members of the House of Representatives and as citizens seeking to join that body, we propose both to change its practices and to restore bonds of trust between the people and their elected representatives.  

    We offer these proposals in sincere humility, aware that so many past promises were broken.  We shall, foremost, emphasize restoration of a citizen’s right to know, and to hold the mighty accountable.

    Especially, we will emphasize placing tools of democracy, openness and trust back into the hands of the People. We will also seek to ensure that government re-learns its basic function, to be the efficient, honest and effective tool of the People.

    Toward this end, we’ll incorporate lessons of the past and goals for the future, promises that were betrayed and promises that need to be renewed, ideas from left, right and center. But above all, the guiding principle that America is an open society of bold and free citizens. Citizens who are empowered to remind their political servants who is boss. 

    PART I.   REFORM CONGRESS 

    In the first month of the new Congress, our new Democratic majority will pass the following major reforms of Congress itself, aimed at restoring the faith and trust of the American people:

    FIRST: We shall see to it that the best parts of the 1994 Republican “Contract With America” – parts the GOP betrayed, ignored and forgot – are finally implemented, both in letter and in spirit.  

    Among the good ideas the GOP betrayed are these:

    •   Require all laws that apply to the rest of the country also apply to Congress; 

    •   Arrange regular audits of Congress for waste or abuse;

    •   Limit the terms of all committee chairs and party leadership posts;

    •   Ban the casting of proxy votes in committee and law-writing by lobbyists;

    •   Require that committee meetings be open to the public;

    •   Guarantee honest accounting of our Federal Budget.

    …and in the same spirit…

    •   Members of Congress shall report openly all stock and other trades by members or their families, especially those trades which might be affected by the member’s inside knowledge.

    By finally implementing these good ideas – some of which originated with decent Republicans – we show our openness to learn and to reach out, re-establishing a spirit of optimistic bipartisanship with sincere members of the opposing party, hopefully ending an era of unwarranted and vicious political war.

    But restoring those broken promises will only be the beginning.

    SECOND: We shall establish rules in both House and Senate permanently allowing the minority party one hundred subpoenas per year, plus the time and staff needed to question their witnesses before open subcommittee hearings, ensuring that Congress will never again betray its Constitutional duty of investigation and oversight, even when the same party holds both Congress and the Executive.

    As a possibly better alternative – to be negotiated – we shall establish a permanent rule and tradition that each member of Congress will get one peremptory subpoena per year, plus adequate funding to compel a witness to appear and testify for up to five hours before a subcommittee of which she or he is a member. In this way, each member will be encouraged to investigate as a sovereign representative and not just as a party member.

    THIRD: While continuing ongoing public debate over the Senate’s practice of filibuster, we shall use our next majority in the Senate to restore the original practice: that senators invoking a filibuster must speak on the chamber floor the entire time. 

    FOURTH: We shall create the office of Inspector General of the United States, or IGUS, who will head the U.S. Inspectorate, a uniformed agency akin to the Public Health Service, charged with protecting the ethical and law-abiding health of government.  Henceforth, the inspectors-general in all government agencies, including military judge-advocates general (JAGs) will be appointed by and report to IGUS, instead of serving at the whim of the cabinet or other officers that they are supposed to inspect. IGUS will advise the President and Congress concerning potential breaches of the law. IGUS will provide protection for whistle-blowers and safety for officials refusing to obey unlawful orders. 

    In order to ensure independence, the Inspectorate shall be funded by an account to pay for operations that is filled by Congress, or else by some other means, a decade in advance. IGUS will be appointed to six-year terms by a 60% vote of a commission consisting of all past presidents and current state governors. IGUS will create a corps of trusted citizen observers, akin to grand juries, cleared to go anywhere and assure the American people that the government is still theirs, to own and control.

    FIFTH: Independent congressional advisory offices for science, technology and other areas of skilled, fact-based analysis will be restored in order to counsel Congress on matters of fact without bias or dogma-driven pressure. Rules shall ensure that technical reports may not be re-written by politicians, changing their meaning to bend to political desires. 

    Every member of Congress shall be encouraged and funded to appoint from their home district a science-and-fact advisor who may interrogate the advisory panels and/or answer questions of fact on the member’s behalf.

    SIXTH: New rules shall limit “pork” earmarking of tax dollars to benefit special interests or specific districts. Exceptions must come from a single pool, totaling no more than one half of a percent of the discretionary budget. These exceptions must be placed in clearly marked and severable portions of a bill, at least two weeks before the bill is voted upon.  Earmarks may not be inserted into conference reports. Further, limits shall be placed on no-bid, crony, or noncompetitive contracts, where the latter must have firm expiration dates.  Conflict of interest rules will be strengthened. 

    SEVENTH: Create an office that is tasked to translate and describe all legislation in easily understandable language, for public posting at least three days before any bill is voted upon, clearly tracking changes or insertions, so that the public (and even members of Congress) may know what is at stake.  This office may recommend division of any bill that inserts or combines unrelated or “stealth” provisions.

EIGHTH: Return the legislative branch of government to the people, by finding a solution to the cheat of gerrymandering, which enables politicians to choose their voters, instead of the other way around.  We shall encourage and insist that states do this in an evenhanded manner, either by using independent redistricting commissions or by minimizing overlap between state legislature districts and those for Congress.

    NINTH: Newly elected members of Congress with credentials from their states shall be sworn in by impartial clerks of either the House or Senate, without partisan bias, and at the new member’s convenience. The House may be called into session, with or without action by the Speaker, at any time that a petition is submitted to the Chief Clerk that was signed by 40% of the members. 

    TENTH: One time in any week, the losing side in a House vote may demand and get an immediate non-binding secret polling of the members who just took part in that vote, using technology to ensure reliable anonymity. While this secret ballot will be non-binding legislatively, the poll will reveal whether some members felt coerced or compelled to vote against their conscience. Members who refuse to be polled anonymously will be presumed to have been so compelled or coerced.

    II.  REFORM AMERICA

     Thereafter, within the first 100 days of the new Congress, we shall bring to the House Floor the following bills, each to be given full and open debate, each to be given a clear and fair vote and each to be immediately available for public inspection and scrutiny. 

    DB Note: The following proposed bills are my own particular priorities, chosen because I believe they are both vitally important and under-appreciated! (indeed, some of them you’ll see nowhere else.) 

    Their common trait – until you get to #20 – is that they have some possibility of appealing to reasonable people across party lines… the “60%+ rule” that worked so persuasively in 1994.

    #20 will be a catch-all that includes a wide swathe of reforms sought by many Democrats – and, likely, by many of you — but may entail more dispute, facing strong opposition from the other major party. 

In other words… as much as you may want the items in #20 (and I do too: most of them!), you are going to have to work hard for them separately from a ‘contract’ like this one, which aims to swiftly take advantage of 60%+ consensus, to get at least an initial tranche of major reforms done.

    1. THE SECURITY FOR AMERICA ACT will ensure that top priority goes to America’s military and security readiness, especially our nation’s ability to respond to surprise threats, including natural disasters or other emergencies. FEMA and the CDC and other contingency agencies will be restored and enhanced, their agile effectiveness audited.

    When ordering a discretionary foreign intervention, the President must report probable effects on readiness, as well as the purposes, severity and likely duration of the intervention, along with credible evidence of need. 

    All previous Congressional approvals for foreign military intervention or declared states of urgency will be explicitly canceled, so that future force resolutions will be fresh and germane to each particular event, with explicit expiration dates. All Eighteenth or Nineteenth Century laws that might be used as excuses for Executive abuse will be explicitly repealed. 

Reserves will be augmented and modernized. Reserves shall not be sent overseas without a Congressionally certified state of urgency, which must be renewed at six-month intervals. Any urgent federalization and deployment of National Guard or other troops to American cities, on the excuse of civil disorder, shall be supervised by a plenary of the nation’s state governors, who may veto any such deployment by a 40% vote or a signed declaration by twenty governors.

    The Commander-in-Chief may not suspend any American law, or the rights of American citizens, without submitting the brief and temporary suspension to Congress for approval in session. 

    2. THE PROFESSIONALISM ACT will protect the apolitical independence of our intelligence agencies, the FBI, the scientific and technical staff in executive departments, and the United States Military Officer Corps.  All shall be given safe ways to report attempts at political coercion or meddling in their ability to give unbiased advice.  Whistle-blower protections will be strengthened within the U.S. government. 

    The federal Inspectorate will gather and empower all agency Inspectors General and Judges Advocate General under the independent and empowered Inspector General of the United States (IGUS).

    3. THE SECRECY ACT will ensure that the recent, skyrocketing use of secrecy – far exceeding anything seen during the Cold War – shall reverse course.  Independent commissions of trusted Americans shall approve, or set time limits to, all but the most sensitive classifications, which cannot exceed a certain number.  These commissions will include some members who are chosen (after clearance) from a random pool of common citizens.  Secrecy will not be used as a convenient way to evade accountability.

    4. THE SUSTAINABILITY ACT will make it America’s priority to pioneer technological paths toward energy independence, emphasizing economic health that also conserves both national and world resources.  Ambitious efficiency and conservation standards may be accompanied by compromise free market solutions that emphasize a wide variety of participants, with the goal of achieving more with less, while safeguarding the planet for our children.

5. THE POLITICAL REFORM ACT will ensure that the nation’s elections take place in a manner that citizens can trust and verify.  Political interference in elections will be a federal crime.  Strong auditing procedures and transparency will be augmented by whistleblower protections.  New measures will distance government officials from lobbyists.  Campaign finance reform will reduce the influence of Big Money over politicians. The definition of a ‘corporation’ shall be clarified so that corporations are neither ‘persons’ nor entitled to use money or other means to meddle in politics, nor to coerce their employees to act politically.

    Gerrymandering will be forbidden by national law. 

    The Voting Rights Act will be reinforced, overcoming all recent Court rationalizations to neuter it.

6.  THE TAX REFORM ACT will simplify the tax code, while ensuring that everybody pays their fair share.  Floors for the Inheritance Tax and Alternative Minimum Tax will be raised to ensure they only affect the truly wealthy, while loopholes used to evade those taxes will be closed. Modernization of the IRS and funding for auditors seeking illicitly hidden wealth shall be ensured by allowing the IRS to draw upon major penalties imposed by citizen juries.

    All tax breaks for the wealthy will be suspended during time of war, so that the burdens of any conflict or emergency are shared by all.[1]

    7.  THE AMERICAN EXCELLENCE ACT will provide incentives for American students to excel at a range of important fields. This nation must especially maintain its leadership, by training more experts and innovators in science and technology.  Education must be a tool to help millions of students and adults adapt, to achieve and keep high-paying 21st Century jobs.

    8. THE HEALTHY CHILDREN ACT will provide basic coverage for all of the nation’s children to receive preventive care and needed medical attention.  Whether or not adults should get insurance using market methods can be argued separately.

     But under this act, all U.S. citizens under the age of 25 shall immediately qualify as “seniors” under Medicare, an affordable step that will relieve the nation’s parents of stressful worry. A great nation should see to it that the young reach adulthood without being handicapped by preventable sickness.

9. THE CYBER HYGIENE ACT: Adjusting liability laws for a new and perilous era, citizens and small companies whose computers are infested and used by ‘botnets’ to commit crimes shall be deemed immune from liability for resulting damages, provided that they download and operate a security program from one of a dozen companies that have been vetted and approved for effectiveness by the US Department of Commerce. Likewise, companies that release artificial intelligence programs shall face lessened liability if those programs persistently declare their provenance, artificiality and potential dangers.

10. THE TRUTH AND RECONCILIATION ACT:  Without interfering in the president’s constitutional right to issue pardons for federal offenses, Congress will pass a law defining the pardon process, so that all persons who are excused for either convictions or possible crimes must at least explain those crimes, under oath, before an open congressional committee, before walking away from them with a presidential pass. If the crime is not described in detail, then any pardon cannot apply to any excluded portion. Further, we shall issue a challenge that no president shall ever issue more pardons than both of the previous administrations, combined.

Congress shall act to limit the effect of Non-Disclosure Agreements (NDAs) that squelch public scrutiny of officials and the powerful. With arrangements to exchange truth for clemency, both current and future NDAs shall decay over a reasonable period of time. Incentives will draw victims of blackmail to come forward and expose their blackmailers.

    11. THE IMMUNITY LIMITATION ACT: The Supreme Court has ruled that presidents should be free to do their jobs without undue distraction by legal procedures and jeopardies. Taking that into account, we shall nevertheless – by legislation – firmly reject the artificial and made-up notion of blanket Presidential Immunity or that presidents are inherently above the law. 

    Instead, the Inspector General of the United States (IGUS) shall supervise legal cases that are brought against the president so that they may be handled by the president’s chosen counsel in order of importance or severity, in such a way that the sum of all such external legal matters will take up no more than ten hours a week of any president’s time. While this may slow such processes, the wheels of law will not be fully stopped. 

    Civil or criminal cases against a serving president may be brought to trial by a simple majority consent of both houses of Congress, though no criminal or civil punishment may be exacted until after the president leaves office, either by end-of-term or impeachment and Senate conviction. 

12. THE FACT ACT will begin by restoring the media Rebuttal Rule, prying open “echo chamber” propaganda mills. Any channel, or station, or Internet podcast, or meme distributor that accepts advertising or reaches more than 10,000 followers will be required to offer five minutes per day during prime time and ten minutes at other times to reputable and vigorous adversaries. Until other methods are negotiated, each member of Congress shall get to choose one such vigorous adversary, ensuring that all perspectives may be involved.

    The Fact Act will further fund experimental Fact-Challenges, where major public disagreements may be openly and systematically and reciprocally confronted with demands for specific evidence.

The Fact Act will restore full funding and staffing to both the Congressional Office of Technology Assessment and the executive Office of Science and Technology Policy (OSTP). Every member of Congress shall be funded to hire a science and fact advisor from their home district, who may interrogate the advisory bodies – an advisor who may also answer questions of fact on the member’s behalf.

This bill further requires that the President must fill, by law, the position of White House Science Adviser from a diverse and bipartisan slate of qualified candidates offered by the National Academy of Sciences. The Science Adviser shall have uninterrupted access to the President for at least two one-hour sessions per month.

13. THE VOTER ID ACT: Under the 14th and 15th Amendments, this act requires that states mandating Voter ID requirements must offer substantial and effective compliance assistance, helping affected citizens to acquire their entitled legal ID and register to vote.

    Any state that fails to provide such assistance, substantially reducing the fraction of eligible citizens turned away at the polls, shall be assumed in violation of equal protection and engaged in illegal voter suppression. If such compliance assistance has been vigorous and effective for ten years, then that state may institute requirements for Voter ID.      

In all states, registration for citizens to vote shall be automatic with a driver’s license or passport or state-issued ID, unless the citizen opts out.

14. THE WYOMING RULE: Congress shall end the arrangement (under the Permanent Apportionment Act of 1929) that perpetually limits the House of Representatives to 435 members. Instead, it will institute the Wyoming Rule: the least-populated state shall get one representative, and all other states will be apportioned representatives according to their population, in full-integer multiples of the smallest state’s. The Senate’s inherent bias favoring small states should be enough; in the House, all citizens should get votes of equal value. https://thearp.org/blog/the-wyoming-rule/

    15:  IMMIGRATION REFORM: There are already proposed immigration law reforms on the table, worked out by sincere Democrats and sincere Republicans, back when the latter were a thing. These bipartisan reforms will be revisited, debated, updated and then brought to a vote. 

    In addition, if a foreign nation is among the top five sources of refugees seeking U.S. asylum from persecution in their homelands, then by law it shall be incumbent upon the political and social elites in that nation to help solve the problem, or else take responsibility for causing their citizens to flee. 

    Upon verification that their regime is among those top five, that nation’s elites will be billed, enforceably, for U.S. expenses in giving refuge to that nation’s citizens. Further, all trade and other advantages of said elites will be suspended and access to the United States banned, except for the purpose of negotiating ways that the U.S. can help in that nation’s rise to both liberty and prosperity, thus reducing refugee flows in the best possible way. 

16: THE EXECUTIVE OFFICE MANAGER: By law we shall establish under IGUS (the Inspectorate) a civil service position of White House Manager, whose function is to supervise all non-political functions and staff. This would include the Executive Mansion’s physical structure and publicly-owned contents, but also policy-neutral services such as the switchboard, kitchens, Travel Office, medical office, and Secret Service protection details, since there are no justifications for the President or political staff to have whim authority over such apolitical employees.

With due allowance and leeway for the needs of the Office of President, public property shall be accounted for. The manager will determine which portions of any trip expense are private and which – above a basic allowance – shall be billed to the president or his/her party.

This office shall supervise annual physical and mental examinations by external experts for all senior office holders, including the President, Vice President, Cabinet members and leaders of Congress.

    Any group of twenty senators or House members or state governors may choose one periodical, network or other news source to get credentialed to the White House Press Pool, spreading inquiry across all party lines and ensuring that all rational points of view get access.

    17: EMOLUMENTS AND GIFTS ACT: Emoluments and gifts and other forms of valuable beneficence bestowed upon the president, or members of Congress, or judges, or their families or staffs, shall be more strictly defined and transparently controlled. All existing and future presidential libraries or museums or any kind of shrine shall strictly limit the holding, display or lending of gifts to, from, or by a president or ex-president, which shall instead be owned and held (except for facsimiles) by the Smithsonian and/or sold at public auction. 

    Donations by corporations or wealthy individuals to pet projects of a president or other members of government, including inauguration events, shall be presumed to be illegal bribery unless they are approved by a nonpartisan ethical commission.

18: BUDGETS: If Congress fails to fulfill its budgetary obligations or to raise the debt ceiling, the result will not be a ‘government shutdown.’ Rather, all pay and benefits will cease going to any Senator or Representative whose annual income is above the national average, until appropriate legislation has passed, at which point only 50% of any backlog arrears may be made up.

    19: THE RURAL AMERICA AND HOUSING ACT: Giant corporations and cartels are using predatory practices to unfairly corner, control or force-out family farms and small rural businesses. We shall upgrade FDR-era laws that saved the American heartland for the people who live and work there, producing the nation’s food. Subsidies and price supports shall only go to family farms or co-ops. Monopolies in fertilizer, seeds and other supplies will be broken up and replaced by competition. Living and working and legal conditions for farm workers and food processing workers will be improved by steady public and private investments.

Cartels that buy up America’s stock of homes and home-builders will be investigated for collusion to limit construction and/or drive up rents and home prices, and appropriate legislation will follow.

    20: THE LIBERAL AGENDA: Okay. Your turn. Our turn. Beyond the 60% rule.

·      Protect women’s autonomy, credibility and command over their own bodies.

·      Ease housing costs: stop private corporations from buying up large tracts of homes and colluding on prices.

    ·      Help working families with child care and elder care.

·      Consumer protection: empower the Consumer Financial Protection Bureau.

·      At least allow student debt refinancing, which the GOP – in a dastardly move – disallowed.

·      Restore the postal savings bank for the unbanked.

    ·      Basic, efficient, universal background checks for gun purchases, with possible exceptions.

    ·      A national Election Day holiday, for those who actually vote.

    ·      Carefully revive the special prosecutor law. 

    ·      Expand and re-emphasize protections under the Civil Service Act.

·      Anti-trust breakups of monopolies and duopolies.

    ….AND SO ON…

    III.          Conclusion

    All right.  I know this proposal – that we do a major riff off of the 1994 Republican Contract with America – will garner one top complaint: We don’t want to look like copycats!

And yet, by satirizing that totally-betrayed “contract,” we poke GOP hypocrisy… while openly reaching out to the wing of conservatism that truly believed the promises, back in ’94, perhaps winning some of them over, by offering deliverable metrics to get it right this time…

    …while boldly outlining reasonable liberal measures that the nation desperately needs.

I do not insist that the measures I proposed – in my rough draft “Democratic Deal” – are the only ones possible! (Some might even seem crackpot… till you think them over.)  Proposals could be added or changed.

Still, this list seems reasonable enough to debate, refine, and possibly offer to focus groups. Test marketing (the way Gingrich did!) should tell us whether Americans would see this as “copycat”… or else as a clever way to turn the tables, in an era when agility must be an attribute of political survival.

10gbit and 40gbit Home Networking

Aliexpress has a 4 port 2.5gbit switch with 2*SFP+ sockets for $34.35 delivered [1]. Four ports isn’t much for the more common use cases (if you daisy chain the switches then only 2 ports remain available for devices), so this is really a device for use with a 10gbit uplink.

Aliexpress has a pair of SFP+ 10Gbit devices with 1m of copper between them for $15.79 delivered [2]. That page also offers a pair of QSFP+ 40Gbit devices with 1m of copper between them for $27.79 delivered.

They also have a bundle of a dual port SFP+ card for a server plus two of those SFP+ 10gbit pairs with copper between them for $32.51 delivered [3].

So you can get a 2.5gbit switch with two 10gbit uplink cables to nearby servers for $66.86 including postage ($34.35 + $32.51). I don’t need this but it is tempting. I spent $93.78 to get 2.5gbit networking [4], so spending another $66.86 to get part of my network to 10gbit isn’t much.

It is $99.81 including postage for a Mellanox 2*40Gbit QSFP+ card and two QSFP+ adaptors with 3m of copper between them [5], and $55.81 including postage for the Mellanox card without the cable. So that’s $99.81 + $55.81 = $155.62 for a point to point 40gbit link between systems that are less than 3m apart (one card bundled with the cable plus a second bare card), which is affordable for a home lab. As an aside, the only NVMe device I’ve tested that can deliver such speeds was in a Thinkpad, and the Thinkpad entered a thermal throttling state after a few seconds of doing it.
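As a rough check of whether local storage can actually feed a 40gbit link, here’s a minimal Python sketch that measures sequential read speed from a file. The path is just a placeholder, and the Linux page cache will inflate the result unless you drop caches first; fio is the proper tool for this kind of test.

    import time

    # Placeholder path - point this at a large file on the NVMe device.
    # Run "sync; echo 3 > /proc/sys/vm/drop_caches" as root first, or
    # previously cached data will inflate the result.
    PATH = "/tmp/bigfile"
    CHUNK = 1 << 20  # read in 1MiB chunks

    total = 0
    start = time.monotonic()
    with open(PATH, "rb") as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            total += len(data)
    elapsed = time.monotonic() - start
    print(f"{total * 8 / elapsed / 1e9:.2f} Gbit/s sequential read")

A drive needs to sustain roughly 5GB/s to fill a 40gbit link, which is why only the fastest NVMe devices can do it.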

The best price I could see for a 40Gbit switch is $1280 for a L3 Managed switch with 2*40G QSFP+ ports, 4*10G SFP+ ports, and 48*2.5G RJ45 ports [6]. That’s quite affordable for the SME market but a bit expensive for home users (although I’m sure that someone on r/homelab has one).

I’m not going to get 40Gbit; that’s well above what I need, and while a point to point link is quite affordable I don’t have servers that could use that speed. But I am seriously considering 10Gbit: I get paid to do enough networking work that having some hands-on experience with 10Gbit could be useful.
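For anyone else wanting that hands-on experience, below is a minimal TCP throughput test in Python – a sketch only, assuming port 5201 is free on both hosts. iperf3 is the standard tool for this, and a single Python stream may well fail to saturate a 10gbit link.

    import socket
    import sys
    import time

    PORT = 5201  # arbitrary choice (the same default port iperf3 uses)
    CHUNK = b"\0" * (1 << 20)  # send 1MiB blocks of zeroes

    if sys.argv[1] == "server":
        # Receive until the client disconnects, then report the rate.
        # socket.create_server() needs Python 3.8 or later.
        srv = socket.create_server(("", PORT))
        conn, _ = srv.accept()
        total = 0
        start = time.monotonic()
        while True:
            data = conn.recv(1 << 20)
            if not data:
                break
            total += len(data)
        elapsed = time.monotonic() - start
        print(f"{total * 8 / elapsed / 1e9:.2f} Gbit/s received")
    else:
        # Usage: "client <server-address>" - sends data for 10 seconds.
        conn = socket.create_connection((sys.argv[1], PORT))
        end = time.monotonic() + 10
        while time.monotonic() < end:
            conn.sendall(CHUNK)
        conn.close()

Run it with "python3 tput.py server" on one host and "python3 tput.py client <address>" on the other; if the reported rate is far below line speed, check MTU, CPU use, and NIC offload settings before blaming the hardware.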

For a laptop, a 5gbit ethernet USB device is $29.48 including delivery, which isn’t too expensive [7]. The faster ones seem to all be Thunderbolt and well over $100, which is disappointing as USB 3.2 can do up to 20Gbit. If I start doing 10gbit over ethernet I’ll get one of those USB devices for testing.
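A quick sanity check after plugging in any of these adaptors is to read the negotiated link speed out of sysfs; /sys/class/net/<iface>/speed is a standard Linux interface reporting Mbit/s (the same number ethtool shows), so a few lines of Python will survey every port:

    import glob

    # Print the negotiated speed (in Mbit/s) of each network interface.
    for path in glob.glob("/sys/class/net/*/speed"):
        iface = path.split("/")[4]
        try:
            with open(path) as f:
                print(iface, f.read().strip(), "Mbit/s")
        except OSError:
            pass  # interfaces that are down don't report a speed

A 5gbit USB device that negotiated at a lower rate would show up here immediately, before any throughput testing.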

For a single server it’s cheaper and easier to get a 4 port 2.5Gbit ethernet card for $55.61 [8].