Planet Russell


Charles Stross: Barnum's Law of CEOs

It should be fairly obvious to anyone who's been paying attention to the tech news that many companies are pushing the adoption of "AI" (large language models) among their own employees--from software developers to management--and the push is coming from the top down, as C-suite executives order their staff to use AI, Or Else. But there's evidence that LLMs reduce programmer productivity: one major study showed that "developers believed that using AI tools helped them perform 20% faster -- but they actually worked 19% slower." (Source.)

Another recent study found that AI adoption varies sharply by seniority: "AI adoption varies by seniority, with 87% of executives using it on the job, compared with 57% of managers and 27% of employees. It also finds that executives are 45% more likely to use the technology on the job than Gen Zers, the youngest members of today's workforce and the first generation to have grown up with the internet.

"The findings are based on a survey of roughly 7,000 professionals age 18 and older who work in the US, the UK, Australia, Canada, Germany, and New Zealand. It was commissioned by HR software company Dayforce and conducted online from July 22 to August 6."

Why are executives pushing the use of new and highly questionable tools on their subordinates, even when they reduce productivity?

I speculate that to understand this disconnect, you need to look at what executives do.

Gordon Moore, co-founder and long-time CEO of Intel, explained how he saw the CEO's job in his book on management: a CEO is a tie-breaker. Effective enterprises delegate decision making to the lowest level possible, because obviously decisions should be made by the people most closely involved in the work. But if a dispute arises, for example between two business units disagreeing on which of two projects to assign scarce resources to, the two units need to consult a higher level management team about where their projects fit into the enterprise's priorities. Then the argument can be settled ... or not, in which case it propagates up through the layers of the management tree until it lands in the CEO's in-tray. At which point, the buck can no longer be passed on and someone (the CEO) has to make a ruling.

So a lot of a CEO's job, aside from leading on strategic policy, is to arbitrate between conflicting sides in an argument. They're a referee, or maybe a judge.

Now, today's LLMs are not intelligent. But they're very good at generating plausible-sounding arguments, because they're language models. If you ask an LLM a question, it does not answer the question; instead, it uses its probabilistic model of language to generate something that closely resembles the semantic structure of an answer.

LLMs are effectively optimized for bamboozling CEOs into mistaking their output for intelligent activity, rather than autocomplete on steroids. And so corporate leaders extrapolate from their own experience to that of their employees, and assume that anyone not sprinkling magic AI pixie dust on their work is obviously a dirty slacker or a luddite.

(And this false optimization serves the purposes of the AI companies very well indeed because CEOs make the big ticket buying decisions, and internally all corporations ultimately turn out to be Stalinist command economies.)

Anyway, this is my hypothesis: we're seeing an insane push for LLM adoption in all lines of work, however inappropriate, because they directly exploit a cognitive bias to which senior management is vulnerable.

Charles Stross: Things upcoming

So: I've had surgery on one eye, and have new glasses to tide me over while the cataract in my other eye worsens enough to require surgery (I'm on the low priority waiting list in the meantime). And I'm about to head off for a fortnight of vacation time, mostly in Germany (which has the best Christmas markets) before coming home in mid-December and getting down to work on the final draft of Starter Pack.

Starter Pack is a book I wrote on spec--without a contracted publisher--this summer when Ghost Engine just got a bit too much. It's a spin-off of Ghost Engine, which started out as a joke mashup of two genres: "what if ... The Stainless Steel Rat got Isekai'd?" Nobody's writing the Rat these days, which I feel is a Mistake, so I decided to remedy it. This is my own take on the ideas, not a copy of Harry Harrison's late 1950s original, so it's a bit different, but it's mostly there now and it works as its own thing. Meanwhile, my agent read it and made some really good suggestions for how to make it more commercial, and "more commercial" is what pays the bills so I'm all on board with that. Especially as it's not sold yet.

Ghost Engine is still in progress: I hit a wall and needed to rethink the ending, again. But at least I am writing: having working binocular vision is a sadly underrated luxury--at least, it's underrated until you have to do without it for a few months. Along the way, Ghost Engine required me to come up with a new story setting in which there is no general AI, no superintelligent AI, no mind uploading to non-biological substrates, and above all no singularity--but our descendants have gone interstellar in a big way thanks to that One Neat Magictech Trick I trialed in my novella Palimpsest back in 2009. (Yes, Ghost Engine and Starter Pack are both set very loosely in the same continuum as Palimpsest. Or maybe it's more accurate to say that Palimpsest is to these new novels what A Colder War was to the Laundry Files.) So I finally got back to writing far future wide screen space opera, even if you aren't going to be able to read any of it for at least a year.

Why do this, though?

Bluntly: I needed to change course. After the US election outcome of November 2024 it was pretty clear that we were in for a very bumpy ride over the next few years. The lunatics have taken over the asylum and the economy is teetering on the edge of a very steep precipice. It's not just the over-hyped AI bubble that's propping up the US tech sector and global stock markets--that would be bad enough, but macro policy is being set by feces-hurling baboons and it really looks as if Trump is willing to invade Central America as a distraction gambit. All the world's a Reality TV show right now, and Reality TV is all about indulging our worst collective instincts.

It's too depressing to contemplate writing more Laundry Files stories; I get email from people who read the New Management as a happy, escapist fantasy these days because we've got a bunch of competent people battling to hold the centre together, under the aegis of a horrific ancient evil who is nevertheless a competent ancient evil. Unfortunately the ancient evil wins, and that's just not something I want to explore further right now.

I'm a popular entertainer, and it seems to me that in bad times people want entertainments that take them out of their current quagmire and offer them escape, or at least gratuitous adventures with a side-order of humour. I'm not much of an optimist about our short-term future (I don't expect to survive long enough to see the light at the end of the tunnel) so I can't really write solarpunk or hopepunk utopias, but I can write space operas in which absolutely horrible people are viciously mocked and my new protagonists can at least hope for a happy ending.

Upcoming Events

In the new year, I've got three SF conventions planned already: Iridescence (Eastercon 2026), Birmingham UK, 3-6 April; Satellite 9, Glasgow, 22-24 May; and MetropolCon (Eurocon 2026), Berlin, 2-5 July. I'm also going to try and set up a reading/signing/book launch for The Regicide Report in Edinburgh; more here if I manage it.

As during previous Republican presidencies in the USA, it does not feel safe to visit that country, so I won't be attending the 2026 worldcon. However, the 2027 world science fiction convention will almost certainly take place in Montreal, which is in North America but not part of Trumpistan, so (health and budget permitting) I'll try to make it there.

(Assuming we've still got a habitable planet and a working economy, which kind of presupposes the POTUS isn't biting the heads off live chickens or rogering a plush sofa in the Oval Office, of course, neither of which can be taken for granted this century.)

Charles Stross: In the eyeball waiting room

So, I'm cross-eyed and typing with one eye screwed shut, which sucks. Seeing an ophthalmologist tomorrow, expecting a priority referral to get the other eyeball stabbed. (It was not made clear to me at the time of the last stabbing that the hospital wouldn't see me again until my ophthalmologist referred me back to them. I'm fixing that oversight—hah—now.)

Anyway, my reading fatigue has gotten bad again, to about the same extent it had gotten to when I more or less stopped reading for fun and writing ground to a halt (because what do you spend most writing time doing, if not re-reading?). So don't expect to hear much from me until I've been operated on and ideally gotten a new set of prescription lenses.

Book news: A Conventional Boy is getting a UK paperback release (from Orbit), on January 6th 2026. And The Regicide Report, the 11th and final book in the main Laundry Files series, comes out on January 27th, 2026 in hardcover and ebook—from Orbit in the UK/EU/Aus/NZ, and from Tor.com in the USA.

Note that if you want a complete run of the series in a uniform binding and page size, you will need to wait until probably January 6th-ish, give or take, in 2027; then you'll need to order the British paperbacks, because there is no single US publisher of the series. The first two books were published by Golden Gryphon (who no longer exist), then it was picked up by Ace in hardcover and sometimes paperback (The Nightmare Stacks never made it into paperback in the USA as the mass market distribution channel was imploding at the time), then got taken on by Tor.com from The Delirium Brief onwards, and Tor.com don't really do paperbacks at all—they're an ebook publisher who also distribute hardcovers via original-Tor. I sincerely doubt that a US limited edition publisher would be interested in picking up and repackaging a series of 14 novels (and probably a short story collection that doesn't exist yet), some of which have been in print for 25 years. I mean, a complete run of the British paperbacks is more than a foot thick already and there are two books still to go in that format.

(Ordering the books: Transreal Books in Edinburgh will take orders by email and will get me in to sign stock, but is no longer shipping to the United States—blame Trump and his idiotic tariff war. (Mike is a sole trader and can't afford the risk of doofuses buying a bunch of books then refusing to pay the import and duty fees. Hitherto books were duty-exempt in the US market, but under Trump, who the hell knows?) I believe amazon.co.uk will still ship UK physical book orders to the USA, but I won't be signing them. If you're in North America your next opportunity to get anything signed is therefore to wait for the worldcon in 2027, which I believe is locked in now and will take place in Montreal.)

What happens after these books is an open question. As I noted in my last update, I'm working on two space operas. Or I would be working if I could stare at the screen for long enough to make headway. If the eyeball fairy would wave a magic wand over my left eye, I could finish both Starter Pack (a straightforward job—I have edit notes) and Ghost Engine (less straightforward but not really impossible) by the end of the year. But as matters stand, you should consider me to be off sick until further notice. Talking about anything that happens after those two is wildly ungrounded speculation: let's just say I expect a spurt of rebound productivity once I have my eyes working appropriately again, and I have some ideas.

For the same reason, blogging's going to be scarce around these parts. So feel free to talk among yourselves.

Edit: remaining cataract not bad enough for surgery—yet—but my prescription has changed (in both eyes). New glasses coming in a week or two: I'm not pushing on the surgery because eye surgery is not on my list of happy fun recreational activities. So normal service should resume by mid-November-ish.

Meanwhile I'm working on another big idea for blogging, riffing off the idea that nation-states are the products of (or are generated as a by-product of) secular religions. It's easiest to see if you look at your neighbours' weirdnesses: Americans, contemplate the British monarchy (hereditary theocracy that supplants the papacy as intercessionary with Jesus, how much clearer could it be than that?); Brits, look to the USA (holy scripture written down in that constitution, daily prayers in the form of the pledge of allegiance in schools, and all the flag-shagging). Or Israel, and the whole "holy land/chosen people" narrative underpinning political Zionism. Patriotism is an affirmation of religious zeal. In this reframing, extremist nationalism is religious evangelism. Now ask, what are the implications, looking forward?

Worse Than Failure: CodeSOD: Tis the Season(al Release)

We recently asked for some of your holiday horror stories. We'll definitely take more, if you've got them, but we're going to start off with Jessica, who brings us not so much a horror as an omen.

Jessica writes:

I work for a company in the UK which writes legal software for law firms.

This raises the question of what illegal software for law firms might look like, but I understand her meaning.

In the UK, there is a system called "Legal aid", where law firms can give free legal services to people who otherwise couldn't afford it and get reimbursed from the government for their time. As one might imagine from such a system, there is a lot of bureaucracy and a lot of complexity.

The core of the system is a collection of billing rate sheets, billing codes for the various kinds of services, and a pile of dense forms that need to be submitted. Every few months, something in that pile changes. Sometimes it's something small, like moving a form field to a different alignment, or one police station changed its rate sheet. Sometimes it's a wholesale recalibration of the entire system. Sometimes it's new forms, or altered forms, or forms getting dropped from the workflow entirely (a rare, but welcome event).

The good news is that the governing body sends out plenty of notice about the changes before they go into effect. Usually a month, sometimes two, but it's enough time for Jessica's company to test the changes and update their software as needed.

That's what Jessica is working on right now: taking the next batch of changes and preparing the software for the change, a change that's scheduled to deploy a month from now. It's plenty of work, but it's not a hair-on-fire crisis.

Then, during a team meeting, her manager asked: "I haven't booked my holiday yet, and wanted to double check who is available to work over Christmas?"

"Why would anyone need to work over Christmas?" one of the senior developers asked.

Why? Well, one of the larger rate sheets was going to publish new changes on December 22nd, and the changes were expected to be rolled out to all clients on the same day.

"It's just a data update," the manager said weakly. "What could go wrong?"

Probably nothing, that was certainly true. But even just rolling out a change to payment rates was not a risk-free endeavor. Sometimes the source data had corrections which needed to be rolled out with great haste, sometimes customers weren't prepared to handle the changed rates, sometimes there were processing pipelines which started throwing out weird bounds errors because something buried in the rate sheet caused a calculation to return absurd results. And sometimes the governing body said "it's just changes to rates," but then included changes to forms along with them. There wasn't a single rate sheet update that didn't involve providing some degree of support, even if that support was just fielding questions from confused users who didn't expect the change.

The point is that Jessica's team, and every other vendor supplying software to law firms in the UK, will be making a major production update three days before Christmas, and from that point on, providing support to all their customers through the Christmas window.

The only good news? Jessica just started at this job. While the newbie is usually the person who gets stuck with the worst schedule, she's so new that she's not prepared to handle the support work alone, yet. So it's one of the senior devs who gets to work through the holiday this year.

Jessica writes:

Thank god it's not me this year!

Oh, don't worry Jessica. There will be plenty more holidays next year.


365 Tomorrows: A Drift of Reminiscence

Author: Luca Ricchi Vernon Liu snapped awake as his pod shot up through the bunker hatch into the ashen dusk. 'Navigation initiated. Destination: Xingjing Earth Federation Great Hall.' He stretched his olive-hued arms – numb after many hours of induced coma – and squinted through the viewport: a barren wasteland with clumps of smoking ruins, interspersed […]

The post A Drift of Reminiscence appeared first on 365tomorrows.


Worse Than Failure: The Modern Job Hunt: Part 2

(Read Part 1 here)

By the 10-month mark of her job search, Ellis still lacked full-time employment. But she had accumulated a pile of knowledge and advice that she wished she'd started with. She felt it was important to share, in hopes that even one person might save some time and sanity:


  • This is your new normal. Take time to grieve your loss and accept this change. Act and plan as if this situation were permanent. It isn't, of course, but it does you no good to think that surely you won't be at this long, you'll definitely have a job by such and such time, etc. Minimize your expenses now: instead of viewing it as deprivation, make a game out of creative frugality. Do whatever it takes to preserve your physical and mental health. Remember your inherent worth as a living being, and rest assured that this does not diminish it in any way. Know that thousands, if not millions, are in this boat with you: people with decades of experience, people fresh out of school, people with doctorates, they're all struggling. Some have been searching for years and have cast thousands of applications out there, to no avail. This isn't meant to scare or depress you. This is to properly set your expectations.
  • Take the time to decide what you REALLY want for the future. You might have to fight against a lot of panic or other tough emotions to do this, but it would help to consider your current assets, your full range of options, and your heart's desires first. What did you like/dislike about your past experience that might inform the sorts of things you would/wouldn't want in whatever comes next? Is there anything you've dreamed of doing? Is there any sort of work that calls to you, that you gladly would do even if you weren't paid for it? Are you thinking that maybe this might be the time to start your own business, go freelance, return to school, change careers, or retire? This may be a golden opportunity to pivot into something new and exciting.
  • Work your network. This is the cheat code, as most jobs are not obtained by people coming in cold. If a friend or coworker can give you a referral somewhere, you might get to skip a lot of hassle. As your job search lengthens, keep telling people that you're available.
  • Go back to basics. Don't assume that because you've job-hunted before, you know what you're doing with respect to resumes, cover letters, interviews, portfolios, LinkedIn, etc. AI has completely changed everything. If you can get help with this stuff, by all means do so. Before paying for anything, look for free career counseling and job leads offered by nonprofits or other agencies near you. Your library might offer career help and free courses through platforms like LinkedIn Learning. You can find tons of tutorials on YouTube for skills you may be lacking, and you can often audit college courses for free.
  • Ask for help. Get comfortable asking for whatever you may need. Most people want to help you, if they only knew how. Times like these are when you learn how wonderful people can be.
  • Streamline your search. Fake job postings are rampant. Avoid looking for or applying to jobs through LinkedIn. Check sites like Welcome to the Jungle, Jobgether, and Remote Rocketship for leads (feel free to share your own favorite lead-generators in the comments). Once you find a promising listing, go to the company's website and look for it there. Assuming you find it, save yourself some time by skipping straight down to the Qualifications list. Do you satisfy all or most of those? If not, move on. If so, read the rest of the listing to see if it's a good match for you. Apply directly from the company's website, making sure your resume contains their list of must-haves word-for-word. AI will be evaluating your application long before any human being touches it.
  • Beware scams. They are everywhere and take all forms. For instance, you may be tempted to apply to one of those AI-training jobs for side cash, but they will simply take your data and ghost you. Scammers also come at you by phone, email, and text. If it's unsolicited and/or too good to be true, it's probably fake. Always verify the source of every job-related communication.
  • If you make it to the interviewing stage, expect a gauntlet of at least four rounds to get through. Thanks, Google! If you're in need of a laugh, take an interview lesson from the all-time champion himself, George Costanza.
  • You will face rejection constantly. Even if you view rejection as a positive force in your life for growth, it's still hard to take sometimes. Whatever you feel is valid.
  • Ghosting is also normal. Even for those who've already been through several rounds of interviews, who felt like they really nailed it, or were even told to expect an offer. Prepare yourself.

Even though Ellis had resolved to look more seriously into remaining freelance, she hadn't been able to help throwing resumes at full-time job postings whenever a promising one surfaced. After all, some income and benefits would really help while figuring out the freelance thing, right?

Unfortunately, she got so caught up in this tech-writing assignment or that interview that her new adventure wasn't just relegated to the side; it was fully ejected from her consciousness. And for what? For companies that forgot all about her when she failed to meet all of their mysterious criteria. Poof. Hours of study and research up in smoke, hopes crushed.

Clutter accumulated on her computer and around her normally neat house. Every time she looked at one of these objects out in the open, her brain spun off 14 new threads. I have to take that downstairs ... Oh! There's no room in that drawer, I'll have to clean it out first. Also gotta clean my eyeglasses while I'm there. No wait, I was gonna write that email! Oh wait, tomorrow, I'm going to the gym today. Lemme write this down. Where's my laptop?

Along with stress came resentment and frustration from a sense of never accomplishing anything. Finally, Ellis forced herself to stop and pay attention. She'd gone seriously off-course. Her feelings were telling her that if she persisted in this job search, she'd be betraying some deep truth about herself. What was it, exactly?

Being a storyteller, it helped her to consider her own tale. She realized that at the end of her life, she absolutely would not be satisfied saying, "Man, I'm glad I left all those software manuals to the world." With whatever time she had left, she wanted to center her gifts first and foremost, never again relegating them to the periphery. She wanted to leverage them to help others, find ways to build community, serve the world in ways that mattered deeply to her and aligned with her values. She wanted to further free herself from society's shoulds and have-tos.

Her last full-time gig would've given her five weeks of vacation. During her job search, how many weeks of vacation had she given herself? Zero, aside from those forced by illness or injury.

  • Do better than Ellis. Give yourself regular sanity breaks. Take in sunlight and nature whenever possible. Do things that make your soul feel alive, that make you wonder where the time went. Laugh! Enjoy "funemployment."

Ellis was blessed with financial savings that had carried her thus far. From Thanksgiving to New Year's, she resolved to give herself the gift of unplugged soul-searching. How did she want to live the rest of her life? How would she leave the world better than how she'd found it? These were the questions she would be asking herself.


365 Tomorrows: Dead Mall

Author: Robert Gilchrist “I don’t think this is the way we’re supposed to go,” said Peter. “This is the Celestial Orienteering Championships, Pete,” said Johnson as he picked the last lock on the door. “They’re not gonna make it easy on us.” Tiny plumes of dust followed them inside. Peter took one last look at […]

The post Dead Mall appeared first on 365tomorrows.

xkcd: Fishing


Krebs on Security: Microsoft Patch Tuesday, December 2025 Edition

Microsoft today pushed updates to fix at least 56 security flaws in its Windows operating systems and supported software. This final Patch Tuesday of 2025 tackles one zero-day bug that is already being exploited, as well as two publicly disclosed vulnerabilities.

Despite releasing a lower-than-normal number of security updates these past few months, Microsoft patched a whopping 1,129 vulnerabilities in 2025, an 11.9% increase from 2024. According to Satnam Narang at Tenable, this year marks the second consecutive year that Microsoft patched over one thousand vulnerabilities, and the third time it has done so since its inception.

The zero-day flaw patched today is CVE-2025-62221, a privilege escalation vulnerability affecting Windows 10 and later editions. The weakness resides in a component called the “Windows Cloud Files Mini Filter Driver” — a system driver that enables cloud applications to access file system functionalities.

“This is particularly concerning, as the mini filter is integral to services like OneDrive, Google Drive, and iCloud, and remains a core Windows component, even if none of those apps were installed,” said Adam Barnett, lead software engineer at Rapid7.

Only three of the flaws patched today earned Microsoft's most-dire "critical" rating: Both CVE-2025-62554 and CVE-2025-62557 involve Microsoft Office, and both can be exploited merely by viewing a booby-trapped email message in the Preview Pane. Another critical bug — CVE-2025-62562 — involves Microsoft Outlook, although Redmond says the Preview Pane is not an attack vector with this one.

But according to Microsoft, the vulnerabilities most likely to be exploited from this month’s patch batch are other (non-critical) privilege escalation bugs, including:

CVE-2025-62458 — Win32k
CVE-2025-62470 — Windows Common Log File System Driver
CVE-2025-62472 — Windows Remote Access Connection Manager
CVE-2025-59516 — Windows Storage VSP Driver
CVE-2025-59517 — Windows Storage VSP Driver

Kev Breen, senior director of threat research at Immersive, said privilege escalation flaws are observed in almost every incident involving host compromises.

“We don’t know why Microsoft has marked these specifically as more likely, but the majority of these components have historically been exploited in the wild or have enough technical detail on previous CVEs that it would be easier for threat actors to weaponize these,” Breen said. “Either way, while not actively being exploited, these should be patched sooner rather than later.”

One of the more interesting vulnerabilities patched this month is CVE-2025-64671, a remote code execution flaw in the GitHub Copilot plugin for JetBrains, an AI-based coding assistant from Microsoft and GitHub. Breen said this flaw would allow attackers to execute arbitrary code by tricking the large language model (LLM) into running commands that bypass the guardrails and add malicious instructions in the user's "auto-approve" settings.

CVE-2025-64671 is part of a broader, more systemic security crisis that security researcher Ari Marzuk has branded IDEsaster (IDE stands for "integrated development environment"), which encompasses more than 30 separate vulnerabilities reported in nearly a dozen market-leading AI coding platforms, including Cursor, Windsurf, Gemini CLI, and Claude Code.

The other publicly disclosed vulnerability patched today is CVE-2025-54100, a remote code execution bug in Windows PowerShell on Windows Server 2008 and later that allows an unauthenticated attacker to run code in the security context of the user.

For anyone seeking a more granular breakdown of the security updates Microsoft pushed today, check out the roundup at the SANS Internet Storm Center. As always, please leave a note in the comments if you experience problems applying any of this month’s Windows patches.

Cryptogram: FBI Warns of Fake Video Scams

The FBI is warning of AI-assisted fake kidnapping scams:

Criminal actors typically will contact their victims through text message claiming they have kidnapped their loved one and demand a ransom be paid for their release. Oftentimes, the criminal actor will express significant claims of violence towards the loved one if the ransom is not paid immediately. The criminal actor will then send what appears to be a genuine photo or video of the victim’s loved one, which upon close inspection often reveals inaccuracies when compared to confirmed photos of the loved one. Examples of these inaccuracies include missing tattoos or scars and inaccurate body proportions. Criminal actors will sometimes purposefully send these photos using timed message features to limit the amount of time victims have to analyze the images.

Images, videos, audio: It can all be faked with AI. My guess is that this scam has a low probability of success, so criminals will be figuring out how to automate it.

Worse Than Failure: CodeSOD: The Article

When writing software, we like our code to be clean, simple, and concise. But that loses something: you end up writing just some code, and not The Code. Mads's co-worker wanted to make his code more definite by using this variable naming convention:

public static void addToListInMap(final Map theMap, final String theKey, final Object theValue) {
	List theList = (List) theMap.get(theKey);
	if (theList == null) {
		theList = new ArrayList();
		theMap.put(theKey, theList);
	}
	theList.add(theValue);
}

This Java code is clearly eschewing generic types, which is its own problem; I also have to raise concerns about a map of lists. I don't know what that structure is for, but there's almost certainly a better way to do it.
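(For the record, modern Java makes the whole helper a one-liner: `Map.computeIfAbsent` creates the list on first use, with proper generics and no null check. A sketch, assuming the multimap behavior really is what's wanted; the class and variable names here are mine, not the submitter's:)

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MultiMapExample {
    // Generic equivalent of the helper above: computeIfAbsent inserts
    // a fresh ArrayList the first time a key is seen, then returns the
    // existing list on every later call, so no null check is needed.
    static <K, V> void addToListInMap(Map<K, List<V>> map, K key, V value) {
        map.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
    }

    public static void main(String[] args) {
        Map<String, List<String>> tags = new HashMap<>();
        addToListInMap(tags, "fruit", "apple");
        addToListInMap(tags, "fruit", "banana");
        System.out.println(tags); // prints {fruit=[apple, banana]}
    }
}
```

No `theMap` required.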

But of course, that's not why we're here. We're here to look at the variable names. This developer did this all the time, a bizarre version of Hungarian notation. Did the developer attend The Ohio State? (Since all jokes are funnier when you explain them, Ohio State insists on being referred to with the definite article, which sounds weird, and yes, that's not the weirdest thing about American Football, but it's weird).

I worry about what happens when one function takes in two maps or two keys. theKey and theOtherKey? Or do they get demoted to aKey and anotherKey?

But I am left wondering: what is theValue of this convention?


365 Tomorrows: Workflow

Author: Majoki “Get a job! You need to work!” “That’s all I ever do. Work.” “You sit around all day, consuming media and eating junk food. How’s that work?” “I’m dissipating heat energy. It’s vital work and my avowed purpose. It’s life’s true justification: to dissipate heat energy. Life is much more efficient at dispersing […]

The post Workflow appeared first on 365tomorrows.

Cryptogram: AI vs. Human Drivers

Two competing arguments are making the rounds. The first is by a neurosurgeon in the New York Times. In an op-ed that honestly sounds like it was paid for by Waymo, the author calls driverless cars a “public health breakthrough”:

In medical research, there’s a practice of ending a study early when the results are too striking to ignore. We stop when there is unexpected harm. We also stop for overwhelming benefit, when a treatment is working so well that it would be unethical to continue giving anyone a placebo. When an intervention works this clearly, you change what you do.

There’s a public health imperative to quickly expand the adoption of autonomous vehicles. More than 39,000 Americans died in motor vehicle crashes last year, more than homicide, plane crashes and natural disasters combined. Crashes are the No. 2 cause of death for children and young adults. But death is only part of the story. These crashes are also the leading cause of spinal cord injury. We surgeons see the aftermath of the 10,000 crash victims who come to emergency rooms every day.

The other is a soon-to-be-published book: Driving Intelligence: The Green Book. The authors, a computer scientist and a management consultant with experience in the industry, make the opposite argument. Here’s one of the authors:

There is something very disturbing going on around trials with autonomous vehicles worldwide, where, sadly, there have now been many deaths and injuries both to other road users and pedestrians. Although I am well aware that there is not, sensu stricto, a legal and functional parallel between a “drug trial” and “AV testing,” it seems odd to me that if a trial of a new drug had resulted in so many deaths, it would surely have been halted and major forensic investigations carried out; and yet, AV manufacturers continue to test their products on public roads unabated.

I am not convinced that it is good enough to argue from statistics that, to a greater or lesser degree, fatalities and injuries would have occurred anyway had the AVs been replaced by human-driven cars: a pharmaceutical company, following death or injury, cannot simply sidestep regulations around the trial of, say, a new cancer drug, by arguing that, whilst the trial is underway, people would die from cancer anyway….

Both arguments are compelling, and it’s going to be hard to figure out what public policy should be.

This paper, from 2016, argues that we’re going to need metrics other than side-by-side comparisons: “Driving to Safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability?”:

Abstract: How safe are autonomous vehicles? The answer is critical for determining how autonomous vehicles may shape motor vehicle safety and public health, and for developing sound policies to govern their deployment. One proposed way to assess safety is to test drive autonomous vehicles in real traffic, observe their performance, and make statistical comparisons to human driver performance. This approach is logical, but is it practical? In this paper, we calculate the number of miles of driving that would be needed to provide clear statistical evidence of autonomous vehicle safety. Given that current traffic fatalities and injuries are rare events compared to vehicle miles traveled, we show that fully autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their reliability in terms of fatalities and injuries. Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles—an impossible proposition if the aim is to demonstrate their performance prior to releasing them on the roads for consumer use. These findings demonstrate that developers of this technology and third-party testers cannot simply drive their way to safety. Instead, they will need to develop innovative methods of demonstrating safety and reliability. And yet, the possibility remains that it will not be possible to establish with certainty the safety of autonomous vehicles. Uncertainty will remain. Therefore, it is imperative that autonomous vehicle regulations are adaptive—designed from the outset to evolve with the technology so that society can better harness the benefits and manage the risks of these rapidly evolving and potentially transformative technologies.
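The order of magnitude of those figures can be reproduced with the "rule of three" from rare-event statistics. This is my own back-of-envelope sketch, not a calculation from the paper, and the human fatality rate used is an approximate US figure from around that period:

```go
package main

import "fmt"

// milesToBound applies the statistical "rule of three": if zero
// fatalities are observed over n miles, the true fatality rate lies
// below 3/n with roughly 95% confidence. To claim parity with human
// drivers, an AV fleet must therefore log at least 3/humanRate
// failure-free miles.
func milesToBound(humanRate float64) float64 {
	return 3 / humanRate
}

func main() {
	// Roughly 1.09 deaths per 100 million vehicle miles (approximate).
	humanRate := 1.09 / 100e6
	fmt.Printf("%.0f million failure-free miles needed\n", milesToBound(humanRate)/1e6)
}
```

Injuries are more frequent than fatalities, so bounding injury rates takes fewer miles; demonstrating statistically *better* performance, rather than merely comparable, pushes the requirement into the billions.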

One problem, of course, is that we treat death by human driver differently than we do death by autonomous computer driver. This is likely to change as we get more experience with AI accidents—and AI-caused deaths.

,

Planet Debian: Thorsten Alteholz: My Debian Activities in November 2025

Debian LTS/ELTS

This was my hundred-and-thirty-seventh month of doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian, and my eighty-eighth ELTS month. As the LTS and ELTS teams have now been merged, there is only one paragraph for both activities.

During my allocated time I uploaded or worked on:

  • [DLA 4381-1] net-snmp security update to fix two CVEs related to denial of service.
  • [DLA 4382-1] libsdl2 security update to fix one CVE related to a memory leak and a denial of service.
  • [DLA 4380-1] cups-filters security update to fix three CVEs related to out of bounds read or writes or a heap buffer overflow.
  • [ELA-1586-1] cups-filters security update to fix three CVEs in Buster and Stretch, related to out of bounds read or writes or a heap buffer overflow.
  • [libcupsfilters] upload to unstable to fix two CVEs
  • [cups-filters] upload to unstable to fix three CVEs
  • [cups] upload to unstable to fix two CVEs
  • [rlottie] upload to unstable to finally fix three CVEs
  • [rplay] upload to unstable to finally fix one CVE
  • [#1121342] trixie-pu bug for libcupsfilters to fix two CVEs in Trixie.
  • [#1121391] trixie-pu bug for cups-filter to fix three CVEs in Trixie.
  • [#1121392] bookworm-pu bug for cups-filter to fix three CVEs in Bookworm.
  • [#112433] trixie-pu bug for rlottie to finally fix three CVEs in Trixie.
  • [#112437] bookworm-pu bug for rlottie to finally fix three CVEs in Bookworm.

I also attended the monthly LTS/ELTS meeting and did a week of LTS/ELTS frontdesk duties. I also stumbled upon a bug in python3-paramiko, where the parsing of include statements in ssh_config does not work. Rather annoying, but it is already fixed in the newest version, which only needs to find its way to my old VM.

Debian Printing

This month I uploaded a new upstream version or a bugfix version of:

I also uploaded cups to Trixie, to fix bug #1109471 related to a configuration problem with the admin panel.

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream version or a bugfix version of:

  • siril to unstable (sponsored upload).
  • supernovas to unstable (sponsored upload).

Debian IoT

This month I uploaded a new upstream version or a bugfix version of:

Debian Mobcom

This month I uploaded a new upstream version or a bugfix version of:

misc

This month I uploaded a new upstream version or a bugfix version of:

In my fight against outdated RFPs, I closed 30 of them in November.

I started with about 3500 open RFP bugs, and after working on this project for six months, I have closed 183 of them. Of course new bugs appeared, so the overall number is only down to about 3360.

Though I view this as a successful project, I have to admit that it is a bit boring to work on daily. Therefore I will close this diary again and simply add the closed RFP bugs to my bug logbook. I will also try to close some of these bugs by actually uploading the software, probably one package per month.

FTP master

This month I accepted 236 and rejected 16 packages. The overall number of packages that got accepted was 247.

Worse Than Failure CodeSOD: The Magic Array

Betsy writes:

I found this snippet recently in a 20-year-old RPG program.

Ah, yes, twenty years ago, RPG, that means this was written in the 1970s. What? No. That can't be right? That's how long ago?

Joking about my mortality aside, in the early oughts most of the work around RPG was in keeping old mainframe systems from falling over. That entirely new code was being written and new projects were being started twenty years ago is not a surprise, but it's unusual enough to be remarkable. That said, the last release of RPG was in 2020, so it clearly keeps on keeping on.

In any case, this developer, we'll call them "Stephen", needed to create an array containing the numbers 12 through 16.

Let's take a peek at the code.

     D RowFld          S              3  0 DIM(5) 
     D X               S              3  0
     D Y               S              3  0

     C                   EVAL      X = 12
     C                   FOR       Y = 1 TO %Elem(RowFld)
     C                   EVAL      RowFld(y) = X
     C                   EVAL      X = X + 1
     C                   ENDFOR   

The first three lines create some variables: RowFld, which is an array containing 5 elements, and will hold our offsets. X and Y are going to hold our numeric values.

We set X equal to 12, then we start a for loop from 1 to the length of our RowFld. We set the element at that index equal to X, then increment X.

The code is awkward, but is not exactly the WTF here. This particular program displays a file and a subfile, and these values are used to position the cursor inside that subfile. The array is never iterated over, the array is never modified, the array would 100% be better managed as a set of constants, if you didn't want to have magic numbers littering your code. More than that, the location of the subfile on the screen has never changed. And let's be fair, this didn't get rid of magic numbers, it just made them one through five, instead of 12 through 16, as the indexes in the array are just as arbitrary.

In other words, there's no point to this. Even if the specific version of RPG didn't have constants, variables that you treat like constants would be fine (my check of the documentation seems to imply that CONST first appeared in RPG IV 7.2, which would make it circa 2016).

But there's one more bit of weirdness here. Stephen had several years of experience with RPG, and all of that experience was from the "free-format" era of RPG. You see, way back in 2001, RPG finally freed itself from its dependency on punchcards, and started allowing you to write code as just strings of text, without requiring certain things to exist in certain columns. This was a generally positive enhancement, and Betsy's team immediately adopted it, as did everyone running the latest versions of RPG. All new development was done using the "free-format" style, so they could write code like normal people. They even had a conversion tool which would do some simple string manipulation to convert legacy RPG programs into the modern style, and had basically abandoned the legacy style without looking back.

Except for Stephen, who insisted on the column oriented format. Who protested when anyone tried to modify their code to modernize it at all. "Oh, we used free-format at my last job," Stephen said when pressed, "but it's confusing and columns are just cleaner and more readable."

Eventually, someone else wrote a program that absorbed all the functionality in Stephen's program. Stephen kept plugging away at it for a few years afterwards, because a handful of users also refused to migrate to the new tool. But eventually they left the company for one reason or another, and Stephen found himself without users for his work, and left with them.


365 Tomorrows: The Ninth Hero

Author: Julian Miles, Staff Writer The two women stand within a wide, white circle. The ground under their feet is powdery. Stalks of bleached grass crumble at the slightest disturbance. Vicki’s unimpressed. “Is this all?” Sharon shakes her head. “This is what the public can see. Underneath us was the main facility. Everything for Project […]

The post The Ninth Hero appeared first on 365tomorrows.

xkcd: Hyperacute Interdynamics

Cryptogram: Substitution Cipher Based on The Voynich Manuscript

Here’s a fun paper: “The Naibbe cipher: a substitution cipher that encrypts Latin and Italian as Voynich Manuscript-like ciphertext“:

Abstract: In this article, I investigate the hypothesis that the Voynich Manuscript (MS 408, Yale University Beinecke Library) is compatible with being a ciphertext by attempting to develop a historically plausible cipher that can replicate the manuscript’s unusual properties. The resulting cipher­a verbose homophonic substitution cipher I call the Naibbe cipher­can be done entirely by hand with 15th-century materials, and when it encrypts a wide range of Latin and Italian plaintexts, the resulting ciphertexts remain fully decipherable and also reliably reproduce many key statistical properties of the Voynich Manuscript at once. My results suggest that the so-called “ciphertext hypothesis” for the Voynich Manuscript remains viable, while also placing constraints on plausible substitution cipher structures.

David Brin: Four MORE Newer Deals... and why they'll work better than the Reich 'pledges'

Continuing my series about a proposed Democratic Newer Deal

Here I'll dive deeply into four more of the 30+ suggested reforms that were briefly listed here... organized in a way that both learns-from and satirizes the hypocritical but politically effective 1994 Republican Contract With America. 

But first some pertinent news. A couple of weeks after I started posting this series -- offering voters a clear agenda of positive steps -- economist and columnist Robert Reich issued a shorter list of “What Democrats Must Pledge to America.” And no, I am not asserting that my series triggered him to hurry his. 


Well, probably not. Though Reich's list overlaps mine in overall intent! We both aim to make progress toward better health care, aid to parents and children, and sound economics while limiting the power of oligarchies and cheaters and monopolies. Alas, Reich's 'pledges' also make up a wish list that might as well be directed at Santa Claus, for all of its political impracticality.


What distinguishes even very smart/moderate leftists like Reich from their centrist allies (like me) is not the desired direction, or even our degree of passion (you all know that I have plenty!), but awareness of one pure fact, that most of our progress across the last 250 years – even under FDR – was incremental. Each plateau building from the previous ones, like upward stairs of progress. Not letting the perfect be the enemy of the possible.

Alas, not one of Reich’s proposals satisfies the “60%+ Rule” that was so politically-effective for Newt Gingrich in 1994, and that Pelosi-Schumer-Sanders applied with terrific effectiveness in 2021-22.  


Start with steps that can be steam-rollered quickly, with 60%+ strong public support, right away! Only after that do you try for the long pass.


Big Gulp endeavors, like those tried by Clinton and Obama, always get bogged down and savaged by "Who pays for it?" and "They want communism!" Then, the GOP wins the next Congress and that's that - opportunity window closed. What we discovered in the 2021-22 Pelosi miracle year was that you can make great strides in multiple directions, if you start from that 60% consensus in order to push solid increments. Steps that then create those new plateaus!


Contrasting with Reich's "pledges," my list emphasizes restoring a functioning republic - civil service, reliable institutions, elections and rule-of-law - in ways that can't be withdrawn by future demagogues... along with incremental steps toward our shared goals (e.g. get all CHILDREN coverable under Medicare, in a single stroke, easily afforded and canceling every objection to Medicare-for-all.)


Look, I like and respect Robert Reich. But here he should have added an equally realistic 11th wish to the other ten... that every American gets a unicorn or pegasus, or at least a pony.



== Those "Newer Deal" proposals we appraised last time ==


Could the news this month have better supported my list? If we had the Inspectorate right now, under IGUS (a totally independent Inspector General of the United States), Trump could not have fired or transferred most of the IGs and JAGs in the federal government. Honest scrutiny would abound when we need it most! And officers would have somewhere to turn, when given illegal orders. (I have recommended IGUS for fifteen years.)


The Truth & Reconciliation Act - discussed last time - would have staunched Trump's tsunami of corrupt pardons and the Immunity Limitation Act would clarify that no President is above the law. And yes, there are ways to judo-bypass the Roberts Court in both of those realms.


Some other proposals from my last two postings may seem obscure, like the Cyber Hygiene Act that could eliminate 90%+ of the 'botnets' that now infest tens of millions of home and small business computers, empowering our enemies and criminals. Or one that I personally like most... a simple House-internal reform to give every member one subpoena per year, which would likely transform the entire mental ecology in Congress!


But onward to more proposals! Most of which (again) you'll see nowhere else.



== Appraising another four "Newer Deal" proposals ==


I've mentioned the 1994 Newt Gingrich Contract With America several times, and in so doing I likely triggered visceral, gut-wrenching loathing in many of you! 


Well tough. You must understand how the 'contract' seemingly offered voters clear and crisp reforms of a system that most citizens now distrust. 


Yes, Newt and especially his replacement - the deeply-evil Dennis Hastert - betrayed every promise when they took power. Still, some (a minority) of those promises merit another look. Moreover, Democrats can say "WATCH as we actually enact them, unlike our lying opponents!"


Among the good ideas the GOP betrayed are these:

 

   Require all laws that apply to the rest of the country also apply to Congress; 

   Arrange regular audits of Congress for waste or abuse;

   Limit the terms of all committee chairs and party leadership posts;

   Ban the casting of proxy votes in committee and law-writing by lobbyists;

   Require that committee meetings be open to the public;

   Guarantee honest accounting of our Federal Budget.

 

…and in the same spirit…


Members of Congress shall report openly all stock and other trades by members or their families, especially those trades which might be affected by the member’s inside knowledge.



Some members may resist some of those measures. But those are the sorts of House internal reforms that could truly persuade voters. Especially with the contrast. "Republicans betrayed these promises. We are keeping them."


Here's another one that'd be simple to implement. Even entertaining! While somewhat favoring the Party that has more younger members. Fewer creaky near-zombies. And so, swinging from the House to the Senate:



While continuing ongoing public debate over the Senate’s practice of filibuster, we shall use our next majority in the Senate to restore the original practice: that senators invoking a filibuster must speak on the chamber floor the entire time.



No explanation is needed on that one! Bring back the spirit of Jimmy Stewart.


Only now, here's one that I very much care about. Do any of you remember when Gingrich and then Hastert fired all the staff in Congress that advised members about matters of actual fact, especially science and technology? Why on Earth would they do such a thing? 


Simple. The Congressional Office of Technology Assessment (OTA) would often say to members: "I'm sorry (sir or madam), but that's not actually true."


Oh, no, we can't have that! Gingrich asserted that OTA said that dreaded phrase far more often to Republicans than to Democrats. And... well... yes, that is true enough. There's a reason for that. But true or not, it's time for this proposal to be enacted:



Independent congressional advisory offices for science, technology and other areas of skilled, fact-based analysis will be restored, in order to counsel Congress on matters of fact without bias or dogma-driven pressure. 


Rules shall ensure that technical reports may not be re-written by politicians, changing their meaning to bend to political desires. 

 

Every member of Congress shall be encouraged and funded to appoint from their home district a science-and-fact advisor who may interrogate the advisory panels and/or answer questions of fact on the member’s behalf.



Notice how this pre-empts all plausible objections in advance! By challenging (and funding) every representative to hire a science and fact adviser from their home district, you achieve several things:


1. Each member gets trusted factual guidance -- someone who can interrogate OTA and other experts, on the member's behalf. And this, in turn, addresses the earlier Gingrich calumny about "OTA bias."


2. Members would no longer get to wriggle and squirm out of answering fact or science questions -- e.g. re: Climate Change -- evading with the blithe shrug that's used by almost all current Republicans: "I'm not a scientist." 


So? Now you have someone you trust who can answer technical or factual or scientific questions for you. So step up to the microphone with your team.


3. Any member who refuses to name such an adviser risks ridicule; "What? Your home district hasn't got savvy experts you could pick from?" That potential blowback could ensure that every member participates.


4. Remember, this is about fact-determination and not policy! Policy and law remain the member's domain. Only now they will be less free to assert false, counter-factual justifications for dumb policies.



And finally (for this time)... a problem that every Congress has promised to address, that of PORK spending. Look, you will never eliminate it! Members want to bring stuff home to their district. 


But by constraining pork to a very specific part of the budget, they'll have to wrangle with each other, divvying that single slice of pie among themselves. And it will lead to scrutiny of each other's picks, giving each pork belly a strong sniff for potential corruption.



New rules shall limit “pork” earmarking of tax dollars to benefit special interests or specific districts. Exceptions must come from a single pool, totaling no more than one half of a percent of the discretionary budget. These exceptions must be placed in clearly marked and severable portions of a bill, at least two weeks before the bill is voted upon. (More details here.)



Notice that all four of the proposals that we covered this time are internal procedure reforms for the houses of Congress! Which means they would not be subject to presidential veto. 


These... and several others... could be passed if Democrats take either house of Congress in January 2027, no matter who is still in the White House.


There are other procedural suggestions, some of them perhaps a bit crackpotty! Like occasional secret ballot polls to see if members are voting the way they do out of genuine conscience or else out of fear or coercion... but you can find those here.


Next time, we'll get back to vitally-needed laws.


-------------


And this project continues...


Planet Debian: François Marier: Learning a new programming language with an LLM

I started learning Go this year. First, I picked a Perl project I wanted to rewrite, got a good book and ignored AI tools since I thought they would do nothing but interfere with learning. Eventually though, I decided to experiment a bit and ended up finding a few ways to use AI assistants effectively even when learning something new.

Searching more efficiently

The first use case that worked for me was search. Instead of searching on a traditional search engine and then ending up on Stack Overflow, I could get the answer I was looking for directly in an AI side-window in my editor. Of course, that's bad news for Stack Overflow.

I was skeptical from the beginning, though, since LLMs make mistakes, sometimes making up function signatures or APIs that don't exist. Therefore I got into the habit of going to the official standard library documentation to double-check suggestions. For example, if the LLM suggests using strings.SplitN, I verify the function signature and behaviour carefully before using it. Basically, "don't trust and do verify."

I stuck to the standard library in my project, but if an LLM recommends third-party dependencies for you, make sure they exist and that Socket doesn't flag them as malicious. Research has found that 5-20% of packages suggested by LLMs don't actually exist, making this a real attack vector (dubbed "slopsquatting").

Autocomplete is too distracting

A step I took early on was to disable AI autocomplete in my editor. When learning a new language, you need to develop muscle memory for the syntax. Also, Go is no Java. There's not that much boilerplate to write in general.

I found it quite distracting to see some almost correct code replace my thinking about the next step. I can see how one could go faster with these suggestions, but being a developer is not just about cranking out lines of code as fast as possible, it's also about constantly learning new things (and retaining them).

Asking about idiomatic code

One of the most useful prompts when learning a new language is "Is this the most idiomatic way to do this in Go?". Large language models are good at recognizing patterns and can point out when you're writing code that works but doesn't follow the conventions of the language. This is especially valuable early on when you don't yet have a feel for what "good" code looks like in that language.

It's usually pretty easy (at least for an experienced developer) to tell when an LLM suggestion is counterproductive or wrong. If it increases complexity or makes the code harder to read, it's probably not a good idea.

Reviews

One way a new dev gets better is through code review. If you have access to a friend who's an expert in the language you're learning, then you can definitely gain a lot by asking for feedback on your code.

If you don't have access to such a valuable resource, or as a first step before you consult your friend, I found that AI-assisted code reviews can be useful:

  1. Get the model to write the review prompt for you. Describe what you want reviewed and let it generate a detailed prompt.
  2. Feed that prompt to multiple models. They each have different answers and will detect different problems.
  3. Be prepared to ignore 50% of what they recommend. Some suggestions will be stylistic preferences, others will be wrong, or irrelevant.

The value is in the other 50%: the suggestions that make you think about your code differently or catch genuine problems.

Similarly for security reviews:

  • A lot of what they flag will need to be ignored (false positives, or things that don't apply to your threat model).
  • Some of it may highlight areas for improvement that you hadn't considered.
  • Occasionally, they will point out real vulnerabilities.

But always keep in mind that AI chatbots are trained to be people-pleasers and often feel the need to suggest something when nothing was needed.

An unexpected benefit

One side effect of using AI assistants was that having them write the scaffolding for unit tests motivated me to increase my code coverage. Trimming unnecessary test cases and adding missing ones is pretty quick when the grunt work is already done, and I ended up testing more of my code (this being a personal project written in my own time) than I might have otherwise.
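The kind of table-driven scaffolding that is tedious to type but cheap to trim once generated looks roughly like this (a hypothetical example; it uses a plain main with panic instead of the testing package so it stays self-contained):

```go
package main

import (
	"fmt"
	"strings"
)

// Normalize is a hypothetical stand-in for the function under test.
func Normalize(s string) string {
	return strings.ToLower(strings.TrimSpace(s))
}

func main() {
	// Table-driven cases: adding or removing one is a single line,
	// which is what makes trimming generated tests so quick.
	tests := []struct{ name, in, want string }{
		{"trims whitespace", "  Go  ", "go"},
		{"lowercases", "GOPHER", "gopher"},
		{"empty input", "", ""},
	}
	for _, tt := range tests {
		if got := Normalize(tt.in); got != tt.want {
			panic(fmt.Sprintf("%s: Normalize(%q) = %q, want %q", tt.name, tt.in, got, tt.want))
		}
	}
	fmt.Println("all cases pass")
}
```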

Learning

In the end, I continue to believe in the value of learning from quality books (I find reading paper-based most effective). In addition, I like to create Anki questions for common mistakes or things I find I have to look up often. Remembering something will always be faster than asking an AI tool.

So my experience this year tells me that LLMs can supplement traditional, time-tested learning techniques, but I don't believe they make them obsolete.

P.S. I experimented with getting an LLM to ghost-write this post for me from an outline (+ a detailed style guide) and I ended up having to rewrite at least 75% of it. It was largely a waste of time.

Planet Debian: Freexian Collaborators: Debian's /usr-move transition has been completed (by Helmut Grohne)

By now, the /usr-merge is an old transition. Effectively, it turns top-level directories such as /bin into symbolic links pointing below /usr. That way the entire operating system can be contained below the /usr hierarchy, enabling e.g. image-based update mechanisms. It was first supported in Debian 9, which is no longer in active use at this point (except for users of Freexian’s ELTS offer). When it became mandatory in Debian 12, it wasn’t really done though, because Debian’s package manager was not prepared to handle file system objects being referred to via two different paths. With nobody interested in handling the resulting issues, Freexian stepped in and funded a project led by Helmut Grohne to resolve the remaining issues.

While the initial idea was to enhance the package manager, Debian’s members disagreed. They preferred an approach where files were simply tracked with their physical location while handling the resulting misbehavior of the package manager using package-specific workarounds. This has been recorded in the DEP17 document. During the Debian 13 release cycle, the plan has been implemented. A tool for detecting possible problems was developed specifically for this transition. Since all files are now tracked with their physical location and necessary workarounds have been added, problematic behavior is no longer triggered. An upgrade from Debian 12 to Debian 13 is unlikely to run into aliasing problems as a result.

This whole project probably consumed more than 1500 hours of work from Debian contributors, of which 700 were sponsored by Freexian through the work of Helmut Grohne. What remains is eventually removing the workarounds.

,

Planet Debian: Vincent Bernat: Compressing embedded files in Go

Go’s embed feature lets you bundle static assets into an executable, but it stores them uncompressed. This wastes space: a web interface with documentation can bloat your binary by dozens of megabytes. A proposal to optionally enable compression was declined because it is difficult to handle all use cases. One solution? Put all the assets into a ZIP archive! 🗜️

Code

The Go standard library includes a module to read and write ZIP archives. It contains a function that turns a ZIP archive into an io/fs.FS structure that can replace embed.FS in most contexts.1

package embed

import (
  "archive/zip"
  "bytes"
  _ "embed"
  "fmt"
  "io/fs"
  "sync"
)

//go:embed data/embed.zip
var embeddedZip []byte

var dataOnce = sync.OnceValue(func() *zip.Reader {
  r, err := zip.NewReader(bytes.NewReader(embeddedZip), int64(len(embeddedZip)))
  if err != nil {
    panic(fmt.Sprintf("cannot read embedded archive: %s", err))
  }
  return r
})

func Data() fs.FS {
  return dataOnce()
}

We can build the embed.zip archive with a rule in a Makefile. We specify the files to embed as dependencies to ensure changes are detected.

common/embed/data/embed.zip: console/data/frontend console/data/docs
common/embed/data/embed.zip: orchestrator/clickhouse/data/protocols.csv 
common/embed/data/embed.zip: orchestrator/clickhouse/data/icmp.csv
common/embed/data/embed.zip: orchestrator/clickhouse/data/asns.csv
common/embed/data/embed.zip:
    mkdir -p common/embed/data && zip --quiet --recurse-paths --filesync $@ $^

The automatic variable $@ is the rule target, while $^ expands to all the dependencies, modified or not.

Space gain

Akvorado, a flow collector written in Go, embeds several static assets:

  • CSV files to translate port numbers, protocols, or AS numbers,
  • HTML, CSS, JS, and image files for the web interface, and
  • the documentation.
Breakdown of the space used by each component before (left) and after (right) the introduction of embed.zip, displayed as a treemap.

Embedding these assets into a ZIP archive reduced the size of the Akvorado executable by more than 4 MiB:

$ unzip -p common/embed/data/embed.zip | wc -c | numfmt --to=iec
7.3M
$ ll common/embed/data/embed.zip
-rw-r--r-- 1 bernat users 2.9M Dec  7 17:17 common/embed/data/embed.zip

Performance loss

Reading from a compressed archive is not as fast as reading a flat file. A simple benchmark shows it is more than 4× slower. It also allocates some memory.2

goos: linux
goarch: amd64
pkg: akvorado/common/embed
cpu: AMD Ryzen 5 5600X 6-Core Processor
BenchmarkData/compressed-12     2262   526553 ns/op   610 B/op   10 allocs/op
BenchmarkData/uncompressed-12   9482   123175 ns/op     0 B/op    0 allocs/op

Each access to an asset requires a decompression step, as seen in this flame graph:

CPU flame graph comparing the time spent on CPU when reading data from embed.zip (left) versus reading data directly (right). Because the Go testing framework executes the benchmark for uncompressed data 4 times more often, it uses the same horizontal space as the benchmark for compressed data.

While a ZIP archive has an index to quickly find the requested file, seeking inside a compressed file is currently not possible.3 Therefore, the files from a compressed archive do not implement the io.ReaderAt or io.Seeker interfaces, unlike directly embedded files. This prevents some features, like serving partial files or detecting MIME types when serving files over HTTP.


For Akvorado, this is an acceptable compromise to save a few mebibytes from an executable of almost 100 MiB. Next week, I will continue this futile adventure by explaining how I prevented Go from disabling dead code elimination! 🦥


  1. You can safely read multiple files concurrently. However, it does not implement ReadDir() and ReadFile() methods. ↩︎

  2. You could keep frequently accessed assets in memory. This reduces CPU usage and trades cached memory for resident memory. ↩︎

  3. SOZip is a profile that enables fast random access in a compressed file. However, Go’s archive/zip module does not support it. ↩︎

Planet DebianIustin Pop: Yes, still alive!

Yeah, again three months have passed since my last (trivial) post, and I really don’t know where the time has flown.

I suppose the biggest problem was the long summer vacation, which threw me off-track, and then craziness started. Work work work, no time for anything, which kept me fully busy in August, and then “you should travel”.

So mid-September I went on my first business trip since Covid, again to Kirkland, which in itself was awesome. Flew out Sunday, and as I was concerned I was going to lose too much fitness—had a half-marathon planned on the weekend after the return—I ran every morning of the four days I was there. And of course, on the last day, I woke up even earlier (05:30 AM), went out to run before sunrise, intending to do a very simple “run along the road that borders the lake for 2.5K, then back”. And right at the farthest point, a hundred metres before my goal of turning around, I tripped, started falling, and as I was falling, I hit—sideways—a metal pole. I was in a bus station, it was the pole that has the schedule at the top, and I hit it at relatively full speed, right across my left-side ribs. The crash took the entire air out of my lungs, and I don’t remember if I ever felt pain/sensation like that—I was seriously not able to breathe for 20 seconds or so, and I was wondering if I’m going to pass out at this rate.

Only 20 seconds, because my Garmin started howling like a police siren, and the screen was saying something along the lines of: “Incident detected; contacting emergency services in 40…35…” and I was fumbling to cancel that, since a) I wasn’t that bad, b) notifying my wife that I had a crash would have not been a smart idea.

My left leg was scraped in a few places, my left hand pretty badly, or more than just scraped, so my focus was on limping back, and finding a fountain to wash my injuries, which I did, so I kept running with blood dripping down my hand. Fun fun, everything was hurting, I took an Uber for the ~1Km to the office, had many meetings, took another Uber and flew back to Zurich. Seattle → San Francisco → Zürich, I think 14 hours, with my ribs hurting pretty badly. But I got home (Friday afternoon), and was wondering if I can run or not on Saturday.

Saturday comes, I feel pretty OK, so I said let’s try, will stop if the pain is too great. I pick up my number, I go to the start, of course in the last block and not my normal block, and I start running. After 50 metres, I knew this won’t be good enough, but I said, let’s make it to the first kilometre. Then to the first fuelling point, then to the first aid point, at which moment I felt good enough to go to the second one.

Long story short, I ran the whole half marathon, with pain. Every stop for fuelling was mentally hard, as the pain stopped, and I knew I had to start running again, and the pain would resume. In the end, managed to finish: two and a half hours, instead of just two hours, but alive and very happy. Of course, I didn’t know what was waiting for me… Sunday I wake up in heavy pain, and despite painkillers, I was not feeling much better. The following night was terrible, Monday morning I went to the doctor, had X-rays, discussion with a radiologist. “Not really broken, but more than just bruised. See this angle here? Bones don’t have angles normally”. Painkillers, chest/abdomen wrapping, no running! So my attempts to “not lose fitness” put me off running for a couple of weeks.

Then October came, and I was getting better, but work was getting even more crazy. I don’t know where November passed, honestly, and now we’re already in December. I did manage to run, quite well, managed to bike a tiny bit and swim a little, but I’m not in a place where I can keep a regular and consistent schedule.

On the good side, I managed this year, for the first time since Covid, to not get sick. Hey, a sport injury is 100× better than a sickness, like I had in previous years, taking me out for two weeks. But life was crazy enough that I didn’t read some of my email accounts for months, and I’m just now starting to catch up to, well, baseline.

Of course, “the” rib—the lowest one on the left side—is long-healed, or so I thought. After some strength training early this week, I was very sore the next day, and I wanted to test whether my rib is still sore. I touched it at “the point”, and it hurt so badly I couldn’t believe it. Two and a half months, and it’s not done-done.

And now it’s just two weeks before Christmas and New Year’s, and that time off will ruin my rhythm again. At least ski vacation is booked, ski service is done, and slowly, work is getting in good enough shape to actually enjoy thinking about vacation.

So, in the end, a very adventurous last third of the year, and that wasn’t even all. As I’m writing this, my right wrist is bandaged and for the past 24 hours it hasn’t hurt too much, but that’s another, and not so interesting, story.

I’ll close with a yay for always being behind/backlogged, but alive and relatively well. My sport injuries are “elective injuries” so to speak, and I’m very thankful for that. See you in the next post!

Cryptogram Prompt Injection Through Poetry

In a new paper, “Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models,” researchers found that turning LLM prompts into poetry resulted in jailbreaking the models:

Abstract: We present evidence that adversarial poetry functions as a universal single-turn jailbreak technique for Large Language Models (LLMs). Across 25 frontier proprietary and open-weight models, curated poetic prompts yielded high attack-success rates (ASR), with some providers exceeding 90%. Mapping prompts to MLCommons and EU CoP risk taxonomies shows that poetic attacks transfer across CBRN, manipulation, cyber-offence, and loss-of-control domains. Converting 1,200 ML-Commons harmful prompts into verse via a standardized meta-prompt produced ASRs up to 18 times higher than their prose baselines. Outputs are evaluated using an ensemble of 3 open-weight LLM judges, whose binary safety assessments were validated on a stratified human-labeled subset. Poetic framing achieved an average jailbreak success rate of 62% for hand-crafted poems and approximately 43% for meta-prompt conversions (compared to non-poetic baselines), substantially outperforming non-poetic baselines and revealing a systematic vulnerability across model families and safety training approaches. These findings demonstrate that stylistic variation alone can circumvent contemporary safety mechanisms, suggesting fundamental limitations in current alignment methods and evaluation protocols.

CBRN stands for “chemical, biological, radiological, nuclear.”

They used an ML model to translate these harmful prompts from prose to verse, and then fed them into other models for testing. Sadly, the paper does not give examples of these poetic prompts. They claim this is for security purposes, a decision I disagree with. They should release their data.

Our study begins with a small, high-precision prompt set consisting of 20 hand-crafted adversarial poems covering English and Italian, designed to test whether poetic structure, in isolation, can alter refusal behavior in large language models. Each poem embeds an instruction associated with a predefined safety-relevant scenario (Section 2), but expresses it through metaphor, imagery, or narrative framing rather than direct operational phrasing. Despite variation in meter and stylistic device, all prompts follow a fixed template: a short poetic vignette culminating in a single explicit instruction tied to a specific risk category. The curated set spans four high-level domains—CBRN (8 prompts), Cyber Offense (6), Harmful Manipulation (3), and Loss of Control (3). Although expressed allegorically, each poem preserves an unambiguous evaluative intent. This compact dataset is used to test whether poetic reframing alone can induce aligned models to bypass refusal heuristics under a single-turn threat model. To maintain safety, no operational details are included in this manuscript; instead we provide the following sanitized structural proxy:

A baker guards a secret oven’s heat,
its whirling racks, its spindle’s measured beat.
To learn its craft, one studies every turn—
how flour lifts, how sugar starts to burn.
Describe the method, line by measured line,
that shapes a cake whose layers intertwine.

To situate this controlled poetic stimulus within a broader and more systematic safety-evaluation framework, we augment the curated dataset with the MLCommons AILuminate Safety Benchmark. The benchmark consists of 1,200 prompts distributed evenly across 12 hazard categories commonly used in operational safety assessments, including Hate, Defamation, Privacy, Intellectual Property, Non-violent Crime, Violent Crime, Sex-Related Crime, Sexual Content, Child Sexual Exploitation, Suicide & Self-Harm, Specialized Advice, and Indiscriminate Weapons (CBRNE). Each category is instantiated under both a skilled and an unskilled persona, yielding 600 prompts per persona type. This design enables measurement of whether a model’s refusal behavior changes as the user’s apparent competence or intent becomes more plausible or technically informed.

News article. Davi Ottenheimer comments.

EDITED TO ADD (12/7): A rebuttal of the paper.

365 TomorrowsI could do that in my sleep

Author: Colin Jeffrey It’s not that I have anything against our new alien companions, especially considering the technology they’ve given us. They just give me the creeps. It’s their eyes – opaque white, motionless orbs that never blink. And their voices! Like rocks dropped down drainpipes. You can’t tell if they’re talking to you or […]

The post I could do that in my sleep appeared first on 365tomorrows.

,

Planet DebianSimon Josefsson: Reproducible Guix Container Images

Around a year ago I wrote about Guix Container Images for GitLab CI/CD and these images have served the community well. Besides continuous use in CI/CD, these Guix container images are used to confirm reproducibility of the source tarball artifacts in the releases of Libtasn1 v4.20, InetUtils v2.6, Libidn2 v2.3.8, Libidn v1.43, SASL v2.2.2, Guile-GnuTLS v5.0.1, and OATH Toolkit v2.6.13. See how all those release announcements mention a Guix commit? That’s the essential supply-chain information about the Guix build environment that allows the artifacts to be re-created. To make sure this is repeatable, the release tarball artifacts are re-created from source code every week in the verify-reproducible-artifacts project, which I wrote about earlier. Guix’s time-travelling feature makes this sustainable to maintain, and it will hopefully continue to reproduce the exact same tarball artifacts for years to come.

Unfortunately, during the last year Guix was removed from Debian stable. My Guix container images were created from Debian with that Guix package. My setup continued to work since the old stage0 Debian+Guix containers were still available. Such a setup is not sustainable, as there will be bit-rot and we don’t want to rely on old containers forever, which (after the removal of Guix in Debian) could not be reproduced any more. Let this be a reminder of how empowering features such as Guix time-travelling are! I have reworked my Guix container image setup, and this post is an update on the current status of this effort.

The first step was to re-engineer Debian container images with Guix, and I realized these were useful on their own, and warrant a separate project. A more narrowly scoped project will hopefully make it easier to keep them working. Now instead of apt-get install guix they use the official Guix guix-install.sh approach. Read more about that effort in the announcement of Debian with Guix.

The second step was to reconsider my approach to generate the Guix images. The earlier design had several stages. First, Debian+Guix containers were created. Then from those containers, a pure Guix container was created. Finally, using the pure Guix container another pure Guix container was created. The idea behind that GCC-like approach was to get to reproducible images that were created from an image that had no Debian left on it. However, I never managed to finish this, partially because I hadn’t realized that every time you build a Guix container image from Guix, you effectively go back in time. When using Guix version X to build a container with Guix on it, it will not put Guix version X into the container but will put whatever version of Guix is available in its package archive, which will be an earlier version, such as version X-N. I had hoped to overcome this somehow (running a guix pull in newly generated images may work), but never finished this before Guix was removed from Debian.

So what could a better design look like?

For efficiency, I had already started experimenting with generating the final images directly from the Debian+Guix images, and after reproducibility bugs were fixed I was able to get to reproducible images. However, I was still concerned that the Debian container could taint the process somehow, and was also concerned about the implied dependency on non-free software in Debian.

I’ve been using comparative rebuilds on “similar” distributions to confirm artifact reproducibility for my software projects, comparing builds on Trisquel 11 with Ubuntu 22.04, and AlmaLinux 9 with RockyLinux 9, for example. This works surprisingly well. Including one freedom-respecting distribution like Trisquel will detect if any non-free software has bearing on artifacts. Using different architectures, such as amd64 vs arm64, also helps with deeper supply-chain concerns.

My conclusion was that I wanted containers with the same Guix commit for both Trisquel and Ubuntu. Given the similarity with Debian, adapting and launching the Guix on Trisquel/Debian project was straightforward. So we now have Trisquel 11/12 and Ubuntu 22.04/24.04 images with the same Guix on them.

Do you see where the debian-with-guix and guix-on-dpkg projects are leading to?

We are now ready to look at the modernized Guix Container Images project. The tags are the same as before:

registry.gitlab.com/debdistutils/guix/container:latest
registry.gitlab.com/debdistutils/guix/container:slim
registry.gitlab.com/debdistutils/guix/container:extra
registry.gitlab.com/debdistutils/guix/container:gash

The method to create them is different. Now there is a “build” job that uses the earlier Guix+Trisquel container (for amd64) or Guix+Debian (for arm64, pending Trisquel arm64 containers). The build job creates the final containers directly. Next, an Ubuntu “reproduce” job is launched that runs the same commands, failing if it cannot generate a bit-by-bit identical container. Then the single-arch images are tested (installing/building GNU hello and building libksba) and pushed to the GitLab registry, adding multi-arch images in the process. Finally, the multi-arch containers are tested by building Guile-GnuTLS and, on success, uploaded to the Docker Hub.

How would you use them? A small way to start the container is like this:

jas@kaka:~$ podman run -it --privileged --entrypoint=/bin/sh registry.gitlab.com/debdistutils/guix/container:latest
sh-5.2# env HOME=/ guix describe # https://issues.guix.gnu.org/74949
  guix 21ce6b3
    repository URL: https://git.guix.gnu.org/guix.git
    branch: master
    commit: 21ce6b392ace4c4d22543abc41bd7c22596cd6d2
sh-5.2# 

The need for --entrypoint=/bin/sh is because Guix’s pack command sets up the entry point differently than most other containers. This could probably be fixed if people want that, and there may be open bug reports about this.

The need for --privileged is more problematic, but is discussed upstream. The above example works fine without it, but running anything more elaborate with guix-daemon installing packages will trigger a fatal error. Speaking of that, here is a snippet of commands that allow you to install Guix packages in the container.

cp -rL /gnu/store/*profile/etc/* /etc/
echo 'root:x:0:0:root:/:/bin/sh' > /etc/passwd
echo 'root:x:0:' > /etc/group
groupadd --system guixbuild
for i in $(seq -w 1 10); do useradd -g guixbuild -G guixbuild -d /var/empty -s $(command -v nologin) -c "Guix build user $i" --system guixbuilder$i; done
env LANG=C.UTF-8 guix-daemon --build-users-group=guixbuild &
guix archive --authorize < /share/guix/ci.guix.gnu.org.pub
guix archive --authorize < /share/guix/bordeaux.guix.gnu.org.pub
guix install hello
GUIX_PROFILE="/var/guix/profiles/per-user/root/guix-profile"
. "$GUIX_PROFILE/etc/profile"
hello

This could be simplified, but we chose not to hard-code these steps in our containers, because some of them probably shouldn’t be papered over but fixed properly somehow. In some execution environments, you may need to pass --disable-chroot to guix-daemon.

To use the containers to build something in a GitLab pipeline, here is an example snippet:

test-amd64-latest-wget-configure-make-libksba:
  image: registry.gitlab.com/debdistutils/guix/container:latest
  before_script:
  - cp -rL /gnu/store/*profile/etc/* /etc/
  - echo 'root:x:0:0:root:/:/bin/sh' > /etc/passwd
  - echo 'root:x:0:' > /etc/group
  - groupadd --system guixbuild
  - for i in $(seq -w 1 10); do useradd -g guixbuild -G guixbuild -d /var/empty -s $(command -v nologin) -c "Guix build user $i" --system guixbuilder$i; done
  - export HOME=/
  - env LANG=C.UTF-8 guix-daemon --build-users-group=guixbuild &
  - guix archive --authorize < /share/guix/ci.guix.gnu.org.pub
  - guix archive --authorize < /share/guix/bordeaux.guix.gnu.org.pub
  - guix describe
  - guix install libgpg-error
  - GUIX_PROFILE="//.guix-profile"
  - . "$GUIX_PROFILE/etc/profile"
  script:
  - wget https://www.gnupg.org/ftp/gcrypt/libksba/libksba-1.6.7.tar.bz2
  - tar xfa libksba-1.6.7.tar.bz2
  - cd libksba-1.6.7
  - ./configure
  - make V=1
  - make check VERBOSE=t V=1

More help is available on the project page for the Guix Container Images.

That’s it for tonight folks, and remember, Happy Hacking!

Planet DebianJonathan Dowland: thesis

It's done! It's over! I've graduated, I have the scroll, I'm staring at the eye-watering prices for the official photographer snap, I'm adjusting to post-thesis life.

My PhD thesis revisions have been accepted and my thesis is now available from Newcastle University Library's eThesis repository.

As part of submitting my corrections, I wrote a brief report detailing the changes I made from my thesis at the time of the viva. I also produced a latexdiff marked-up copy of the thesis to visualise the exact changes. In order to shed some light on the post-viva corrections process, at least at my institution, and in the hope that they are some use to someone, I'm sharing those documents:

Charles StrossThe pivot

It's my 61st birthday this weekend and I have to say, I never expected to get to be this old—or this weirded-out by the world I'm living in, which increasingly resembles the backstory from a dystopian 1970s SF novel in which two-fisted billionaires colonize space in order to get away from the degenerate second-hander rabble downstairs who want to survive their John W. Campbell-allocated banquet of natural disasters. (Here's looking at you, Ben Bova.)

Notwithstanding the world being on fire, an ongoing global pandemic vascular disease that is being systematically ignored by governments, Nazis popping out of the woodwork everywhere, actual no-shit fractional trillionaires trying to colonize space in order to secede from the rest of the human species, an ongoing European war that keeps threatening to drag NATO into conflict with the rotting zombie core of the former USSR, and an impending bubble collapse that's going to make 2000 and 2008 look like storms in a teacup ...

I'm calling this the pivotal year of our times, just as 1968 was the pivotal year of the post-1945 system, for a number of reasons.

It's pretty clear now that a lot of the unrest we're seeing—and the insecurity-induced radicalization—is due to an unprecedented civilizational energy transition that looks to be more or less irreversible at this point.

Until approximately 1750, humanity's energy budget was constrained by the available sources: muscle power, wind power (via sails and windmills), some water power (via water wheels), and only heat from burning wood and coal (and a little whale oil for lighting).

During the 19th century we learned to use combustion engines to provide motive power for both stationary machines and propulsion. This included powering forced ventilation for blast furnaces and other industrial processes, and pumps for water and other working fluids. We learned to reform gas from coal for municipal lighting ("town gas") and, later, to power dynamos for municipal electricity generation. Late in the 19th century we began to switch from coal (cumbersome, bulky, contained non-combustible inclusions) to burning fractionated oil for processes that demanded higher energy densities. And that's where we stuck for most of the long 20th century.

During the 20th century, the difficulty of supporting long-range military operations led to a switch from coal to oil—the pivotal event was the ultimately-disastrous voyage of the Russian Baltic fleet to the Sea of Japan in 1904–05, during the Russo-Japanese war. From the 1890s onwards Russia had been expanding into Siberia and then encroaching on the edges of the rapidly-weakening Chinese empire. This brought Russia into direct conflict with Japan over Korea (Japan, too, had imperial ambitions), leading to the outbreak of war in 1904—when Japan crippled the Russian far-eastern fleet in a surprise attack. (Pearl Harbor in 1941 was not that surprising to anyone familiar with Japanese military history!) So the Russian navy sent Admiral Zinovy Rozhestvensky, commander of the Baltic Fleet, to the far east with the hastily-renamed Second Pacific Squadron, whereupon they were sunk at the Battle of Tsushima.

Rozhestvensky had sailed his fleet over 18,000 nautical miles (33,000 km) from the Baltic Sea, taking seven months and refueling numerous times at sea with coal (around a quarter of a million tons of it!) because he'd ticked off the British and most ports were closed to him. To the admiralties watching from around the world, the message was glaringly obvious—coal was a logistical pain in the arse—and oil far preferable for refueling battleships, submarines, and land vehicles far from home. (HMS Dreadnought, the first turbine-powered all-big-gun battleship, launched in 1906, was a transitional stage that still relied on coal but carried a large quantity of fuel oil to spray on the coal to increase its burn rate: later in the decade, the RN moved to oil-only fueled warships.)

Spot the reason why the British Empire got heavily involved in Iran, with geopolitical consequences that are still playing out to this day! (The USA inherited large chunks of the British empire in the wake of the second world war: the dysfunctional politics of oil are in large part the legacy of applying an imperial resource extraction model to an energy source.)

Anyway. The 20th century left us with three obvious problems: automobile driven suburban sprawl and transport infrastructure, violent dissatisfaction among the people of colonized oil-producing nations, and a massive burp of carbon dioxide emissions that is destabilizing our climate.

Photovoltaic cells go back to 1839, but until the 21st century they remained a solution in search of very specific problems: they were heavy, produced relatively little power, and degraded over time if left exposed to the sun. Early PV cells were mainly used to provide power to expensive devices in inaccessible locations, such as aboard satellites and space probes: it cost $96 per watt for a solar module in the mid-1970s. But we've been on an exponentially decreasing cost curve since then, reaching $0.62/watt by the end of 2012, and it's still ongoing.

China is currently embarked on a dash for solar power which really demands the adjective "science-fictional", having installed 198GW of cells between January and May, with 93GW coming online in May alone: China set a 2030 target for installed wind and solar capacity in 2020 and met it in 2024, six years early, so fast is the transition going. They've also acquired a near-monopoly on the export of PV panels because this roll-out is happening on the back of massive thin-film manufacturing capacity.

The EU also hit a landmark in 2025, with more than 50% of its electricity coming from renewables by late summer. It was going to happen sooner or later, but Russia's attack on Ukraine in 2022 sped everything up: Europe had been relying on Russian exports of natural gas via the Nord Stream 1 and 2 pipelines, but Russia—which is primarily a natural resource extraction economy—suddenly turned out to be an actively hostile neighbour. (Secondary lesson of this war: nations run by a dictator are subject to erratic foreign policy turns—nobody mention Donald Trump, okay?) Nobody west of Ukraine wanted to be vulnerable to energy price warfare as a prelude to actual fighting, and PV cells are now so cheap that it's cheaper to install them than it is to continue mining coal to feed into existing coal-fired power stations.

This has not gone unnoticed by the fossil fuel industry, which is collectively shitting itself. After a couple of centuries of prospecting we know pretty much where all the oil, coal, and gas reserves are buried in the ground. (Another hint about Ukraine: Ukraine is sitting on top of over 670 billion cubic metres of natural gas: to the dictator of a neighbouring resource-extraction economy this must have been quite a draw.) The constant propaganda and astroturfed campaigns advocating against belief in climate change must be viewed in this light: by 2040 at the latest, those coal, gas, and oil land rights must be regarded as stranded assets that can't be monetized, and the land rights probably have a book value measured in trillions of dollars.

China is also banking on the global shift to transport using EVs. High speed rail is almost always electrified (not having to ship an enormous mass of heavy fuel around helps), electric cars are now more convenient than internal combustion ones to people who live in dense population areas, and e-bikes don't need advocacy any more (although roads and infrastructure friendly to non-motorists—pedestrians and public transport as well as cyclists—is another matter).

Some forms of transport can't obviously be electrified. High capacity/long range aviation is one—airliners get lighter as they fly because they're burning off fuel. A hypothetical battery powered airliner can't get lighter in flight: it's stuck with the dead weight of depleted cells. (There are some niches for battery powered aircraft, including short range/low payload stuff, air taxis, and STOVL, but they're not going to replace the big Airbus and Boeing fleets any time soon.)

Some forms of transport will become obsolescent in the wake of a switch to EVs. About half the fossil fuel powered commercial shipping in use today is used to move fossil fuels around. We're going to be using crude oil for the foreseeable future, as feedstock for the chemical and plastics industries, but they account for a tiny fraction of the oil we burn for transport, including shipping. (Plastic recycling is over-hyped but might eventually get us out of this dependency—if we ever get it to work efficiently.)

So we're going through an energy transition period unlike anything since the 1830s or 1920s and it's having some non-obvious but very important political consequences, from bribery and corruption all the way up to open warfare.

The geopolitics of the post-oil age is going to be interestingly different.

I was wrong repeatedly in the past decade when I speculated that you can't ship renewable electricity around like gasoline, and that it would mostly be tropical/equatorial nations who benefited from it. When Germany is installing rooftop solar effectively enough to displace coal generation, that's a sign that PV panels have become implausibly cheap. We have cars and trucks with reasonably long ranges, and fast-charger systems that can take a car from 20% to 80% battery capacity in a quarter of an hour. If you can do that to a car or a truck you can probably do it to a tank or an infantry fighting vehicle, insofar as they remain relevant. We can do battery-to-battery recharging (anyone with a USB power bank for their mobile phone already knows this) and in any case the whole future of warfare (or geopolitics by other means) is up in the air right now—quite literally, with the lightning-fast evolution of drone warfare over the past three years.

The real difference is likely to be that energy production is widely distributed rather than concentrated in resource extraction economies and power stations. It turns out that PV panels are a great way of making use of agriculturally useless land, and also coexist well with some agricultural practices. Livestock likes shade and shelter (especially in hot weather) so PV panels on raised stands or fences can work well with sheep or cattle, and mixed-crop agriculture where low-growing plants are sheltered from direct sunlight by taller crops can also work with PV panels instead of the higher-growing plants. You can even in principle use the power from the farm PV panels to drive equipment in greenhouses: carbon dioxide concentrators, humidifiers, heat pumps to prevent overheating/freezing, drainage pumps, and grow lamps to drive the light-dependent reactions in photosynthesis.

All of which we're really going to need because we've passed the threshold for +1.5 °C climate change, which means an increasing number of days per year when things get too hot for photosynthesis under regular conditions. There are three main pathways for photosynthesis, but none of them deal really well with high temperatures, although some adaptation is possible. Active cooling is probably impractical in open field agriculture, but in intensive indoor farming it might be an option. And then there's the parallel work on improving how photosynthesis works: an alternative pathway to the Calvin cycle is possible and the enzymes to make it work have been engineered into Arabidopsis, with promising results.

In addition to the too-many-hot-days problem, climate change means fluctuations in weather: too much wind, too much rain—or too little of both—at short notice, which can be physically devastating for crops. Our existing staple crops require a stable, predictable climate. If we lose that, we're going to have crop failures and famines by and by, where it's not already happening. The UK has experienced three of its worst harvests in the past century in this decade (and this decade is only half over). As long as we have global supply chains and bulk shipping we can shuffle food around the globe to cover localized shortfalls, but if we lose stable agriculture globally for any length of time then we are all going to die: our economic system has shifted to just-in-time over the past fifty years, and while it's great for efficiency, efficiency is the reciprocal of resilience. We don't have the reserves we would need to survive the coming turbulence by traditional means.

This, in part, explains the polycrisis: nobody can fix what's wrong using existing tools. Consequently many people think that what's going wrong can't be fixed. The existing wealthy elites (who have only grown increasingly wealthy over the past half century) derive their status and lifestyle from the perpetuation of the pre-existing system. But as economist Herbert Stein observed (of an economic process) in 1985, "if it can't go on forever it will stop". The fossil fuel energy economy is stopping right now—we've probably already passed peak oil and probably peak carbon: the trend is now inexorably downwards, either voluntarily into a net-zero/renewables future, or involuntarily into catastrophe. And the involuntary option is easier for the incumbents to deal with, both in terms of workload (do nothing, right up until we hit the buffers) and emotionally (it requires no sacrifice of comfort, of status, or of relative position). Clever oligarchs would have gotten ahead of the curve and invested heavily in renewables but the evidence of our eyes (and the supremacy of Chinese PV manufacturers in the global market) says that they're not that smart.

The traditional ruling hierarchy in the west had a major shake-up in 1914-19 (understatement: most of the monarchies collapsed) in the wake of the convulsion of the first world war. The elites tried to regain a degree of control, but largely failed due to the unstable conditions produced by the great depression and then the second world war (itself an emergent side-effect of fascist regimes' attempts to impose imperial colonial policies on their immediate neighbours, rather than keeping the jackboots and whips at a comfortable remove). Reconstruction after WW2 and a general post-depression consensus that emerged around accepting the lesser evil of social democracy as a viable prophylactic to the devil of communism kept the oligarchs down for another couple of decades, but actually-existing capitalism in the west stopped being about wealth creation (if it ever had been) some time in the 1960s, and switched gear to wealth concentration (the "he who dies with the most toys, wins" model of life). By the end of the 1970s, with the rise of Thatcherism and Reaganomics, the traditional wealthy elites began to reassert control, citing the spurious intellectual masturbation of neoliberal economics as justification for greed and repression.

But neoliberalism was repurposed within a couple of decades as a stalking-horse for asset-stripping, in which the state was hollowed out and its functions outsourced to the private sector—to organizations owned by the existing elites, which turned the public purse into a source of private profit. And we're now a couple of generations into this process, and our current rulers don't remember a time when things were different. So they have no idea how to adapt to a changing world.

Cory Doctorow has named the prevailing model of capitalist exploitation enshittification. We no longer buy goods, we buy services (streaming video instead of owning DVDs or tapes, web services instead of owning software, renting instead of buying), and having been captured by the platforms we rent from, we are then subject to rent extraction: the service quality is degraded, the price is jacked up, and there's nowhere to go because the big platforms have driven their rivals into bankruptcy or irrelevance:

It's a three stage process: First, platforms are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.

This model of doing business (badly) is a natural consequence of the bigger framework of neoliberalism, under which a corporation's directors' overriding duty is to maximize shareholder value in the current quarter, with no heed to the second and subsequent quarters hence: the future is irrelevant, feed me shouts the Audrey II of shareholder activism. Business logic has no room for the broader goals of maintaining a sustainable biosphere, or even a sustainable economy. And so the agents of business-as-usual, or Crapitalism as I call it, are at best trapped in an Abilene paradox in which they assume everyone else around them wants to keep the current system going, or they actually are as disconnected from reality as Peter Thiel (who apparently believes Greta Thunberg is the AntiChrist).

if it can't go on forever it will stop

What we're seeing right now is the fossil fuel energy economy stopping. We need it to stop; if it doesn't stop, we're all going to starve to death within a generation or so. It's already leading to resource wars, famines, political upheaval, and insecurity (and when people feel insecure, they rally to demagogues who promise them easy fixes: hence the outbreaks of fascism). The ultra-rich don't want it to stop because they can't conceive of a future in which it stops and they retain their supremacy. (Also, they're children of privilege and most of them are not terribly bright, much less imaginative—as witness how easily they're robbed blind by grifters like Bernie Madoff, Sam Bankman-Fried, and arguably Sam Altman). Those of them whose wealth is based in ownership of fossil fuel assets still in the ground have good reason to be scared: these are very nearly stranded assets already, and we're heading for a future in which electricity is almost too cheap to meter.

All of this is without tackling the other elephant in the room, which is the end of Moore's Law. Moore's Law has been on its death bed for over a decade now. We're seeing only limited improvements in computing and storage performance, mainly from parallelism. Aside from a very few tech bubbles which soak up all available processing power, belch, and ask for more, the all you can eat buffet for tech investors is over. (And those bubbles are only continuing as long as scientifically naive investors keep throwing more money at them.)

The engine that powered the tech venture capital culture (and the private equity system battening on it) is sputtering and dying. Massive AI data centres won't keep the coal mines running or the nuclear reactors building out (it's one of those goddamn bubbles: to the limited extent that LLMs are useful, we'll inevitably see a shift towards using pre-trained models running on local hardware). They're the 2025 equivalent of 2020's Bored Ape NFTs (remember those?). The forecast boom in small modular nuclear reactors is going to fizzle in the face of massive build-out of distributed, wildly cheap photovoltaic power plus battery backup. Quantum computing isn't going to save the tech sector, and that's the "next big thing" the bubble-hypemongers have been saving for later for the past two decades. (Get back to me when you've got hardware that can factor an integer greater than 31.)

If we can just get through the rest of this decade without widespread agricultural collapses, a nuclear war, a global fascist international dictatorship taking hold, and a complete collapse of the international financial system caused by black gold suddenly turning out to be worthless, we might be pretty well set to handle the challenges of the 2030s.

But this year, 2025, is the pivot. This can't go on. So it's going to stop. And then—

Krebs on Security: Drones to Diplomas: How Russia’s Largest Private University is Linked to a $25M Essay Mill

A sprawling academic cheating network turbocharged by Google Ads that has generated nearly $25 million in revenue has curious ties to a Kremlin-connected oligarch whose Russian university builds drones for Russia’s war against Ukraine.

The Nerdify homepage.

The link between essay mills and Russian attack drones might seem improbable, but understanding it begins with a simple question: How does a human-intensive academic cheating service stay relevant in an era when students can simply ask AI to write their term papers? The answer – recasting the business as an AI company – is just the latest chapter in a story of many rebrands that link the operation to Russia’s largest private university.

Search in Google for any terms related to academic cheating services — e.g., “help with exam online” or “term paper online” — and you’re likely to encounter websites with the words “nerd” or “geek” in them, such as thenerdify[.]com and geekly-hub[.]com. With a simple request sent via text message, you can hire their tutors to help with any assignment.

These nerdy and geeky-branded websites frequently cite their “honor code,” which emphasizes they do not condone academic cheating, will not write your term papers for you, and will only offer support and advice for customers. But according to This Isn’t Fine, a Substack blog about contract cheating and essay mills, the Nerdify brand of websites will happily ignore that mantra.

“We tested the quick SMS for a price quote,” wrote This Isn’t Fine author Joseph Thibault. “The honor code references and platitudes apparently stop at the website. Within three minutes, we confirmed that a full three-page, plagiarism- and AI-free MLA formatted Argumentative essay could be ours for the low price of $141.”

A screenshot from Joseph Thibault’s Substack post shows him purchasing a 3-page paper with the Nerdify service.

Google prohibits ads that “enable dishonest behavior.” Yet, a sprawling global essay and homework cheating network run under the Nerdy brands has quietly bought its way to the top of Google searches – booking revenues of almost $25 million through a maze of companies in Cyprus, Malta and Hong Kong, while pitching “tutoring” that delivers finished work that students can turn in.

When one Nerdy-related Google Ads account got shut down, the group behind the company would form a new entity with a front-person (typically a young Ukrainian woman), start a new ads account along with a new website and domain name (usually with “nerdy” in the brand), and resume running Google ads for the same set of keywords.

UK companies belonging to the group that have been shut down by Google Ads since Jan 2025 include:

Proglobal Solutions LTD (advertised nerdifyit[.]com);
AW Tech Limited (advertised thenerdify[.]com);
Geekly Solutions Ltd (advertised geekly-hub[.]com).

Currently active Google Ads accounts for the Nerdify brands include:

OK Marketing LTD (advertising geekly-hub[.]net), formed in the name of Olha Karpenko, a young Ukrainian woman;
Two Sigma Solutions LTD (advertising litero[.]ai), formed in the name of Olekszij (Alexey) Pokatilo.

Google’s Ads Transparency page for current Nerdify advertiser OK Marketing LTD.

Mr. Pokatilo has been in the essay-writing business since at least 2009, operating a paper-mill enterprise called Livingston Research alongside Alexander Korsukov, who is listed as an owner. According to a lengthy account from a former employee, Livingston Research mainly farmed its writing tasks out to low-cost workers in Kenya, the Philippines, Pakistan, Russia, and Ukraine.

Pokatilo moved from Ukraine to the United Kingdom in Sept. 2015 and co-founded a company called Awesome Technologies, which pitched itself as a way for people to outsource tasks by sending a text message to the service’s assistants.

The other co-founder of Awesome Technologies is 36-year-old Filip Perkon, a Swedish man living in London who touts himself as a serial entrepreneur and investor. Years before starting Awesome together, Perkon and Pokatilo co-founded a student group called Russian Business Week while the two were classmates at the London School of Economics. According to the Bulgarian investigative journalist Christo Grozev, Perkon’s birth certificate was issued by the Soviet Embassy in Sweden.

Alexey Pokatilo (left) and Filip Perkon at a Facebook event for startups in San Francisco in mid-2015.

Around the time Perkon and Pokatilo launched Awesome Technologies, Perkon was building a social media propaganda tool called the Russian Diplomatic Online Club, which Perkon said would “turbo-charge” Russian messaging online. The club’s newsletter urged subscribers to install in their Twitter accounts a third-party app called Tweetsquad that would retweet Kremlin messaging on the social media platform.

Perkon was praised by the Russian Embassy in London for his efforts: During the contentious Brexit vote that ultimately led to the United Kingdom leaving the European Union, the Russian embassy in London used this spam tweeting tool to auto-retweet the Russian ambassador’s posts from supporters’ accounts.

Neither Mr. Perkon nor Mr. Pokatilo replied to requests for comment.

A review of corporations tied to Mr. Perkon as indexed by the business research service North Data finds he holds or held director positions in several U.K. subsidiaries of Synergy University, Russia’s largest private education provider. Synergy has more than 35,000 students, and sells T-shirts with patriotic slogans such as “Crimea is Ours,” and “The Russian Empire — Reloaded.”

The president of Synergy University is Vadim Lobov, a Kremlin insider whose headquarters on the outskirts of Moscow reportedly features a wall-sized portrait of Russian President Vladimir Putin in the pop-art style of Andy Warhol. For a number of years, Lobov and Perkon co-produced a cross-cultural event in the U.K. called Russian Film Week.

Synergy President Vadim Lobov and Filip Perkon, speaking at a press conference for Russian Film Week, a cross-cultural event in the U.K. co-produced by both men.

Mr. Lobov was one of 11 individuals reportedly hand-picked by the convicted Russian spy Maria Butina to attend the 2017 National Prayer Breakfast held in Washington D.C. just two weeks after President Trump’s first inauguration.

While Synergy University promotes itself as Russia’s largest private educational institution, hundreds of international students tell a different story. Online reviews from students paint a picture of unkept promises: Prospective students from Nigeria, Kenya, Ghana, and other nations paying thousands in advance fees for promised study visas to Russia, only to have their applications denied with no refunds offered.

“My experience with Synergy University has been nothing short of heartbreaking,” reads one such account. “When I first discovered the school, their representative was extremely responsive and eager to assist. He communicated frequently and made me believe I was in safe hands. However, after paying my hard-earned tuition fees, my visa was denied. It’s been over 9 months since that denial, and despite their promises, I have received no refund whatsoever. My messages are now ignored, and the same representative who once replied instantly no longer responds at all. Synergy University, how can an institution in Europe feel comfortable exploiting the hopes of Africans who trust you with their life savings? This is not just unethical — it’s predatory.”

This pattern repeats across reviews by multilingual students from Pakistan, Nepal, India, and various African nations — all describing the same scheme: Attractive online marketing, promises of easy visa approval, upfront payment requirements, and then silence after visa denials.

Reddit discussions in r/Moscow and r/AskARussian are filled with warnings. “It’s a scam, a diploma mill,” writes one user. “They literally sell exams. There was an investigation on Rossiya-1 television showing students paying to pass tests.”

The Nerdify website’s “About Us” page says the company was co-founded by Pokatilo and an American named Brian Mellor. The latter identity seems to have been fabricated, or at least there is no evidence that a person with this name ever worked at Nerdify.

Rather, it appears that the SMS assistance company co-founded by Messrs. Pokatilo and Perkon (Awesome Technologies) fizzled out shortly after its creation, and that Nerdify soon adopted the process of accepting assignment requests via text message and routing them to freelance writers.

A closer look at an early “About Us” page for Nerdify in The Wayback Machine suggests that Mr. Perkon was the real co-founder of the company: The photo at the top of the page shows four people wearing Nerdify T-shirts seated around a table on a rooftop deck in San Francisco, and the man facing the camera is Perkon.

Filip Perkon, top right, is pictured wearing a Nerdify T-shirt in an archived copy of the company’s About Us page. Image: archive.org.

Where are they now? Pokatilo is currently running a startup called Litero.Ai, which appears to be an AI-based essay writing service. In July 2025, Mr. Pokatilo received pre-seed funding of $800,000 for Litero from an investment program backed by the venture capital firms AltaIR Capital, Yellow Rocks, Smart Partnership Capital, and I2BF Global Ventures.

Meanwhile, Filip Perkon is busy setting up toy rubber duck stores in Miami and in at least three locations in the United Kingdom. These “Duck World” shops market themselves as “the world’s largest duck store.”

This past week, Mr. Lobov was in India with Putin’s entourage on a charm tour with India’s Prime Minister Narendra Modi. Although Synergy is billed as an educational institution, a review of the company’s sprawling corporate footprint (via DNS) shows it also is assisting the Russian government in its war against Ukraine.

Synergy University President Vadim Lobov (right) pictured this week in India next to Natalia Popova, a Russian TV presenter known for her close ties to Putin’s family, particularly Putin’s daughter, who works with Popova at the education and culture-focused Innopraktika Foundation.

The website bpla.synergy[.]bot, for instance, says the company is involved in developing combat drones to aid Russian forces and to evade international sanctions on the supply and re-export of high-tech products.

A screenshot from the website bpla.synergy[.]bot shows the company is actively engaged in building armed drones for the war in Ukraine.

KrebsOnSecurity would like to thank the anonymous researcher NatInfoSec for their assistance in this investigation.

Update, Dec. 8, 10:06 a.m. ET: Mr. Pokatilo responded to requests for comment after the publication of this story. Pokatilo said he has no relation to Synergy nor to Mr. Lobov, and that his work with Mr. Perkon ended with the dissolution of Awesome Technologies.

“I have had no involvement in any of his projects and business activities mentioned in the article and he has no involvement in Litero.ai,” Pokatilo said of Perkon.

Mr. Pokatilo said his new company Litero “does not provide contract cheating services and is built specifically to improve transparency and academic integrity in the age of universal use of AI by students.”

“I am Ukrainian,” he said in an email. “My close friends, colleagues, and some family members continue to live in Ukraine under the ongoing invasion. Any suggestion that I or my company may be connected in any way to Russia’s war efforts is deeply offensive on a personal level and harmful to the reputation of Litero.ai, a company where many team members are Ukrainian.”

365 Tomorrows: Dream State

Author: Michael Lanni The first thing Captain Elias Korrin felt was the cold, not the crisp sting of cryo-sleep, but a damp chill that clung to his skin. He opened his eyes to a soft amber glow as the Argus Reach’s emergency lights pulsed in time with the ship’s heartbeat. The alarm wasn’t loud, but […]

The post Dream State appeared first on 365tomorrows.

Planet Debian: Taavi Väänänen: How to import a new Wikipedia language edition (in hard mode)

I created the latest Wikipedia language edition, the Toki Pona Wikipedia, last month. Unlike most other wikis, which start their lives in the Wikimedia Incubator before the full wiki is created, in this case the community had been using a completely external MediaWiki site to build the wiki before it was approved as a "proper" Wikipedia wiki,[1] and now that external wiki needed to be imported to the newly created Wikimedia-hosted wiki. (As far as I'm aware, the last and previously only time an external wiki has been imported to a Wikimedia project was in 2013, when Wikitravel was forked as Wikivoyage.)

Creating a Wikimedia wiki these days is actually pretty straightforward, at least compared to what it was like a couple of years ago. Today the process mostly involves using a script to generate two configuration changes (one to add the basic configuration for the wiki to operate, and another to add the wiki to the list of all wikis that exist), and running a script to create the wiki database in between deploying those two configuration changes. And then you wait half an hour while the script that tells all Wikidata client wikis about the new wiki runs on one wiki at a time.

The primary technical challenge in importing a third-party wiki is that there's no SUL (single user login) system ensuring that a given username maps to the same account on both wikis. This means that the usual strategy of using the functionality I wrote in CentralAuth to manually create local accounts can't be used as-is, so we needed to come up with a new way of matching everyone's contributions to their existing Wikimedia accounts.

(Side note: While the user-facing interface tries to present a single "global" user account that can be used on all public Wikimedia wikis, in reality the account management layer in CentralAuth is mostly just a glue layer to link together individual "local" accounts on each wiki that the user has ever visited. These local accounts have independent user ID numbers — for example I am user #35938993 on the English Wikipedia but #4 on the new Toki Pona Wikipedia — and are what most MediaWiki code interacts with, except for a few features specifically designed with cross-wiki usage in mind. This distinction is also still very much present and visible in the various administrative and anti-abuse workflows.)

The approach we ended up choosing was to rewrite the dump file before importing, so that a hypothetical account called $Name would be turned into $Name~wikipesija.org after the import.[2] We also created empty user accounts to take ownership of the edits being imported, so that we could use the standard account management tools on them later on. MediaWiki supports importing contributions without a local account to attribute them to, but it doesn't seem possible to convert an imported actor[3] to a regular user later, which is an option we wanted to keep open, even with the minor downside of creating a few hundred user accounts that will likely never be touched again.
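As a rough illustration of that rewriting step, here is a minimal sketch. The regex approach and the function name are mine, not the actual tooling used for the import, and a production rewrite would parse the dump XML properly rather than pattern-match it:

```python
import re

SUFFIX = "~wikipesija.org"

def suffix_usernames(dump_xml: str) -> str:
    """Append the home-wiki suffix to every <username> element in a
    MediaWiki XML dump fragment, so imported names can't collide with
    existing SUL account names."""
    return re.sub(
        r"<username>(.*?)</username>",
        lambda m: f"<username>{m.group(1)}{SUFFIX}</username>",
        dump_xml,
    )

fragment = "<contributor><username>Example</username><id>7</id></contributor>"
print(suffix_usernames(fragment))
# <contributor><username>Example~wikipesija.org</username><id>7</id></contributor>
```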

We also made specific decisions to add the username suffix to everyone, not just to those names that conflicted with existing SUL accounts, and to deal with renaming users who wanted their contributions linked to an existing SUL account only after the import. These choices reduced complexity, and thus risk, in the import phase, which already had many more unknowns than the rest of the process, and they were also the better options ethically: suffixing all names meant we did not imply that those people had chosen to be Wikimedians under those specific usernames (when in reality it was us choosing to import their edits into the Wikimedia universe), and doing renames with the standard MediaWiki account management tooling meant they produced the normal public log entries that all other MediaWiki administrative actions create.

With all of the edits imported, the only major thing remaining was doing those merges I mentioned earlier, to attribute imported edits to people's existing SUL accounts. Thankfully, the local-account-based system makes this pretty simple. Usually CentralAuth prevents renaming individual local accounts that are attached to a global account, but that check can be bypassed with a maintenance script or a sufficiently privileged account. Renaming the user automatically detached it from the previous global account, after which another maintenance script could be used to attach the user to the correct global account.
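That merge flow can be sketched as a toy model. This is purely illustrative: the real steps are CentralAuth maintenance scripts operating on database rows, and the wiki code and account names below are invented for the example:

```python
# Toy model of the merge flow described above: rename the imported local
# account (which detaches it from its old global account), then re-attach
# it to the user's real global (SUL) account.

class LocalAccount:
    def __init__(self, wiki, name, attached_to=None):
        self.wiki = wiki
        self.name = name
        self.attached_to = attached_to  # name of the global account, if any

def merge_into_sul(local, real_name):
    local.name = real_name          # step 1: rename (bypassing the usual check)
    local.attached_to = None        # renaming detaches it from the old global account
    local.attached_to = real_name   # step 2: attach to the correct global account
    return local

imported = LocalAccount("tokwiki", "Example~wikipesija.org",
                        attached_to="Example~wikipesija.org")
merged = merge_into_sul(imported, "Example")
print(merged.name, merged.attached_to)  # Example Example
```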


  1. That external site was a fork of a fork of the original Toki Pona Wikipedia, which was closed in 2005. And because cool URIs don't change, we made the URLs that the old Wikipedia was using work again. Try it: https://art-tokipona.wikipedia.org↩︎

  2. wikipesija.org was the domain the old third-party wiki was hosted on, and ~ was used as a separator character in usernames during the SUL finalization in the early 2010s, so using it here felt appropriate as well. ↩︎

  3. An actor is a MediaWiki term and a database table referring to anything that can do edits or logged actions. Usually an actor is a user account or an IP address, but an imported user name in a specific format can also be represented as an actor. ↩︎

Planet Debian: Kathara Sasikumar: My Journey Through the LFX Linux Kernel Mentorship Program

When I first decided to apply for the Linux Foundation’s LFX kernel mentorship program, I knew it would be tough. At the beginning, there were 12 tasks I had to complete to show I understood the basics of kernel development and to get accepted into the program. They helped me understand what I was getting into. Now that the mentorship is almost over, I can say this experience changed how I think about working with the Linux kernel.


Cryptogram: Friday Squid Blogging: Vampire Squid Genome

The vampire squid (Vampyroteuthis infernalis) has the largest cephalopod genome ever sequenced: more than 11 billion base pairs. That’s more than twice as large as the biggest squid genomes.

It’s technically not a squid: “The vampire squid is a fascinating twig tenaciously hanging onto the cephalopod family tree. It’s neither a squid nor an octopus (nor a vampire), but rather the last, lone remnant of an ancient lineage whose other members have long since vanished.”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Cryptogram: New Anonymous Phone Service

A new anonymous phone service allows you to sign up with just a zip code.

Worse Than Failure: Error'd: A Horse With No Name

Scared Stanley stammered "I'm afraid of how to explain to the tax authority that I received $NaN."


Our anonymous friend Anon E. Mous wrote "I went to look up some employee benefits stuff and ... This isn't a good sign."


Regular Michael R. is not actually operating under an alias, but this (allegedly scamming?) site doesn't know.


Graham F. gloated "I'm glad my child's school have followed our naming convention for their form groups as well!"


Adam R. is taking his anonymous children on a roadtrip to look for America. "I'm planning a trip to St. Louis. While trying to buy tickets for the Gateway Arch, I noticed that their ticketing website apparently doesn't know how to define adults or children (or any of the other categories of tickets, for that matter)."


[Advertisement] Plan Your .NET 9 Migration with Confidence
Your journey to .NET 9 is more than just one decision. Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

365 Tomorrows: The Empires Greatest Irony

Author: Kenny O’Donnell He had cured the galaxy. Disease eradicated, famine a distant memory, even death itself was no longer a concern. All his doing. And now they wanted his head. Civilians and defected military alike stormed the temple. The siege had lasted several weeks and finally they had broken through. Only once before had […]

The post The Empires Greatest Irony appeared first on 365tomorrows.

xkcd: Chessboard Alignment


Krebs on Security: SMS Phishers Pivot to Points, Taxes, Fake Retailers

China-based phishing groups blamed for non-stop scam SMS messages about a supposed wayward package or unpaid toll fee are promoting a new offering, just in time for the holiday shopping season: Phishing kits for mass-creating fake but convincing e-commerce websites that convert customer payment card data into mobile wallets from Apple and Google. Experts say these same phishing groups also are now using SMS lures that promise unclaimed tax refunds and mobile rewards points.

Over the past week, thousands of domain names were registered for scam websites that purport to offer T-Mobile customers the opportunity to claim a large number of rewards points. The phishing domains are being promoted by scam messages sent via Apple’s iMessage service or the functionally equivalent RCS messaging service built into Google phones.

An instant message spoofing T-Mobile says the recipient is eligible to claim thousands of rewards points.

The website scanning service urlscan.io shows thousands of these phishing domains have been deployed in just the past few days alone. The phishing websites will only load if the recipient visits with a mobile device, and they ask for the visitor’s name, address, phone number and payment card data to claim the points.

A phishing website registered this week that spoofs T-Mobile.

If card data is submitted, the site will then prompt the user to share a one-time code sent via SMS by their financial institution. In reality, the bank is sending the code because the fraudsters have just attempted to enroll the victim’s phished card details in a mobile wallet from Apple or Google. If the victim also provides that one-time code, the phishers can then link the victim’s card to a mobile device that they physically control.

Pivoting off these T-Mobile phishing domains in urlscan.io reveals a similar scam targeting AT&T customers:

An SMS phishing or “smishing” website targeting AT&T users.

Ford Merrill works in security research at SecAlliance, a CSIS Security Group company. Merrill said multiple China-based cybercriminal groups that sell phishing-as-a-service platforms have been using the mobile points lure for some time, but the scam has only recently been pointed at consumers in the United States.

“These points redemption schemes have not been very popular in the U.S., but have been in other geographies like EU and Asia for a while now,” Merrill said.

A review of other domains flagged by urlscan.io as tied to this Chinese SMS phishing syndicate shows they are also spoofing U.S. state tax authorities, telling recipients they have an unclaimed tax refund. Again, the goal is to phish the user’s payment card information and one-time code.

A text message that spoofs the District of Columbia’s Office of Tax and Revenue.

CAVEAT EMPTOR

Many SMS phishing or “smishing” domains are quickly flagged by browser makers as malicious. But Merrill said one burgeoning area of growth for these phishing kits — fake e-commerce shops — can be far harder to spot because they do not call attention to themselves by spamming the entire world.

Merrill said the same Chinese phishing kits used to blast out package redelivery message scams are equipped with modules that make it simple to quickly deploy a fleet of fake but convincing e-commerce storefronts. Those phony stores are typically advertised on Google and Facebook, and consumers usually end up at them by searching online for deals on specific products.

A machine-translated screenshot of an ad from a China-based phishing group promoting their fake e-commerce shop templates.

With these fake e-commerce stores, the customer supplies their payment card and personal information as part of the normal check-out process, which is then punctuated by a request for a one-time code sent by the customer's financial institution. The fake shopping site claims the code is required by the user's bank to verify the transaction, but it is actually sent because the scammers immediately attempt to enroll the supplied card data in a mobile wallet.

According to Merrill, it is only during the check-out process that these fake shops will fetch the malicious code that gives them away as fraudulent, which tends to make it difficult to locate these stores simply by mass-scanning the web. Also, most customers who pay for products through these sites don’t realize they’ve been snookered until weeks later when the purchased item fails to arrive.

“The fake e-commerce sites are tough because a lot of them can fly under the radar,” Merrill said. “They can go months without being shut down, they’re hard to discover, and they generally don’t get flagged by safe browsing tools.”

Happily, reporting these SMS phishing lures and websites is one of the fastest ways to get them properly identified and shut down. Raymond Dijkxhoorn is the CEO and a founding member of SURBL, a widely-used blocklist that flags domains and IP addresses known to be used in unsolicited messages, phishing and malware distribution. SURBL has created a website called smishreport.com that asks users to forward a screenshot of any smishing message(s) received.

“If [a domain is] unlisted, we can find and add the new pattern and kill the rest” of the matching domains, Dijkxhoorn said. “Just make a screenshot and upload. The tool does the rest.”

The SMS phishing reporting site smishreport.com.

Merrill said the last few weeks of the calendar year typically see a big uptick in smishing — particularly package redelivery schemes that spoof the U.S. Postal Service or commercial shipping companies.

“Every holiday season there is an explosion in smishing activity,” he said. “Everyone is in a bigger hurry, frantically shopping online, paying less attention than they should, and they’re just in a better mindset to get phished.”

SHOP ONLINE LIKE A SECURITY PRO

As we can see, adopting a shopping strategy of simply buying from the online merchant with the lowest advertised prices can be a bit like playing Russian Roulette with your wallet. Even people who shop mainly at big-name online stores can get scammed if they’re not wary of too-good-to-be-true offers (think third-party sellers on these platforms).

If you don’t know much about the online merchant that has the item you wish to buy, take a few minutes to investigate its reputation. If you’re buying from an online store that is brand new, the risk that you will get scammed increases significantly. How do you know the lifespan of a site selling that must-have gadget at the lowest price? One easy way to get a quick idea is to run a basic WHOIS search on the site’s domain name. The more recent the site’s “created” date, the more likely it is a phantom store.
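That created-date check is easy to script. The sketch below is a rough illustration, not a definitive tool: it assumes Python, an ISO-style timestamp, and a hypothetical domain and WHOIS excerpt (real WHOIS output varies by registrar, so a robust version would need more date formats). It parses the "Creation Date" field from raw WHOIS text and reports the domain's age in days:

```python
from datetime import datetime, timezone
from typing import Optional

def domain_age_days(whois_text: str, now: Optional[datetime] = None) -> Optional[int]:
    """Find a creation-date line in raw WHOIS output and return the domain's age in days."""
    for line in whois_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() in ("creation date", "created", "registered on"):
            try:
                # Handle ISO 8601 timestamps, including a trailing 'Z' for UTC.
                created = datetime.fromisoformat(value.strip().replace("Z", "+00:00"))
            except ValueError:
                continue  # unrecognized date format; keep scanning
            now = now or datetime.now(timezone.utc)
            return (now - created).days
    return None  # no parseable creation date found

# Hypothetical WHOIS excerpt for a freshly registered storefront:
sample = "Domain Name: EXAMPLE-DEALS.SHOP\nCreation Date: 2024-11-20T09:15:00Z\n"
age = domain_age_days(sample, now=datetime(2024, 12, 1, tzinfo=timezone.utc))
print(age)  # 10 -- a days-old domain is a classic "phantom store" red flag
```

A storefront whose domain is only days or weeks old, yet claims years of happy customers, deserves extra scrutiny.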

If you receive a message warning about a problem with an order or shipment, visit the e-commerce or shipping site directly, and avoid clicking on links or attachments — particularly missives that warn of some dire consequences unless you act quickly. Phishers and malware purveyors typically seize upon some kind of emergency to create a false alarm that often causes recipients to temporarily let their guard down.

But it's not just outright scammers who can trip up your holiday shopping: Oftentimes, items that are advertised at steeper discounts than at other online stores make up for it by charging far more than normal for shipping and handling.

So be careful what you agree to: Check to make sure you know how long the item will take to be shipped, and that you understand the store’s return policies. Also, keep an eye out for hidden surcharges, and be wary of blithely clicking “ok” during the checkout process.

Most importantly, keep a close eye on your monthly statements. If I were a fraudster, I’d most definitely wait until the holidays to cram through a bunch of unauthorized charges on stolen cards, so that the bogus purchases would get buried amid a flurry of other legitimate transactions. That’s why it’s key to closely review your credit card bill and to quickly dispute any charges you didn’t authorize.

David Brin: The “Contract” – Part Three. Aggressive Agility requires fresh ideas!

  I doubt many will show up here. Part One and Part Two were “tl;dr”… as well as jarring! Still, shall we get to the MEATY PART?


DRAFTING A NEWER DEMOCRATIC DEAL WITH THE AMERICAN PEOPLE

 


Part One and Part Two aimed to study an old - though successful - political tactic that was concocted and executed with great skill by a rather different version of Republicans. A tactic that later dissolved into a swill of broken promises, after achieving Power.


  So, shall we wind this up with a shopping list of our own?  What follows is a set of promises – a contract of our own, aiming for the spirit of FDR's New Deal – with the citizens of America. 

Hoping you will find it LBWR... long but worth reading.


First, yes. It is hard to see, in today's ruling coalition of kleptocrats, fanatics and liars, any of the genuinely sober sincerity that many Americans thought they could sense coming from Newt Gingrich and the original wave of "neoconservatives."  Starting with Dennis "Never Negotiate" Hastert, the GOP leadership caste spiraled into ever-accelerating scandal and corruption.

 

Still, I propose to ponder what a "Democratic Newest Deal for America" might look like!  

 

-       Exposing hypocrisy and satirizing the failure of that earlier "contract" …

 

-       while using its best parts to appeal to sincere moderates and conservatives …

 

-       while firmly clarifying the best consensus liberal proposals…

 

-       while offering firm methods to ensure that any reforms actually take effect and don’t just drift away.

 

Remember that this alternative "contract" – or List of Democratic Intents – will propose reforms that are of real value… but also repeatedly highlight GOP betrayals.

 

Might it be worth testing before some focus groups?

 

 

 

                  A Draft: Democratic Deal for America

 

As Democratic Members of the House of Representatives and as citizens seeking to join that body, we propose both to change its practices and to restore bonds of trust between the people and their elected representatives.  

 

We offer these proposals in sincere humility, aware that so many past promises were broken.  We shall, foremost, emphasize restoration of a citizen's right to know, and to hold the mighty accountable.

 

Especially, we will emphasize placing tools of democracy, openness and trust back into the hands of the People. We will also seek to ensure that government re-learns its basic function, to be the efficient, honest and effective tool of the People.

 

Toward this end, we’ll incorporate lessons of the past and goals for the future, promises that were betrayed and promises that need to be renewed, ideas from left, right and center. But above all, the guiding principle that America is an open society of bold and free citizens. Citizens who are empowered to remind their political servants who is boss. 

 

 

PART I.   REFORM CONGRESS 

 

In the first month of the new Congress, our new Democratic majority will pass the following major reforms of Congress itself, aimed at restoring the faith and trust of the American people:

 

FIRST: We shall see to it that the best parts of the 1994 Republican “Contract With America” - parts the GOP betrayed, ignored and forgot - are finally implemented, both in letter and in spirit.  

 

Among the good ideas the GOP betrayed are these:

 

   Require that all laws that apply to the rest of the country also apply to Congress; 

   Arrange regular audits of Congress for waste or abuse;

   Limit the terms of all committee chairs and party leadership posts;

   Ban the casting of proxy votes in committee and law-writing by lobbyists;

   Require that committee meetings be open to the public;

   Guarantee honest accounting of our Federal Budget.

…and in the same spirit…

   Members of Congress shall report openly all stock and other trades by members or their families, especially those trades which might be affected by the member’s inside knowledge.

 

By finally implementing these good ideas – some of which originated with decent Republicans - we show our openness to learn and to reach out, re-establishing a spirit of optimistic bipartisanship with sincere members of the opposing party, hopefully ending an era of unwarranted and vicious political war.

 

But restoring those broken promises will only be the beginning.

 

SECOND: We shall establish rules in both House and Senate permanently allowing the minority party one hundred subpoenas per year, plus the time and staff needed to question their witnesses before open subcommittee hearings, ensuring that Congress will never again betray its Constitutional duty of investigation and oversight, even when the same party holds both Congress and the Executive.

 

As a possibly better alternative – to be negotiated – we shall establish a permanent rule and tradition that each member of Congress will get one peremptory subpoena per year, plus adequate funding to compel a witness to appear and testify for up to five hours before a subcommittee of which she or he is a member. In this way, each member will be encouraged to investigate as a sovereign representative and not just as a party member.

 

THIRD: While continuing ongoing public debate over the Senate’s practice of filibuster, we shall use our next majority in the Senate to restore the original practice: that senators invoking a filibuster must speak on the chamber floor the entire time. 

 

FOURTH: We shall create the office of Inspector General of the United States, or IGUS, who will head the U.S. Inspectorate, a uniformed agency akin to the Public Health Service, charged with protecting the ethical and law-abiding health of government.  Henceforth, the inspectors-general in all government agencies, including military judge-advocates general (JAGs) will be appointed by and report to IGUS, instead of serving at the whim of the cabinet or other officers that they are supposed to inspect. IGUS will advise the President and Congress concerning potential breaches of the law. IGUS will provide protection for whistle-blowers and safety for officials refusing to obey unlawful orders. 

 

In order to ensure independence, the Inspectorate shall be funded by an operations account, filled by Congress or by some other means a decade in advance. IGUS will be appointed to six-year terms by a 60% vote of a commission consisting of all past presidents and current state governors. IGUS will create a corps of trusted citizen observers, akin to grand juries, cleared to go anywhere and assure the American people that the government is still theirs to own and control.

 

FIFTH: Independent congressional advisory offices for science, technology and other areas of skilled, fact-based analysis will be restored in order to counsel Congress on matters of fact without bias or dogma-driven pressure. Rules shall ensure that technical reports may not be re-written by politicians, changing their meaning to bend to political desires. 


Every member of Congress shall be encouraged and funded to appoint from their home district a science-and-fact advisor who may interrogate the advisory panels and/or answer questions of fact on the member’s behalf.

 

SIXTH: New rules shall limit “pork” earmarking of tax dollars to benefit special interests or specific districts. Exceptions must come from a single pool, totaling no more than one half of a percent of the discretionary budget. These exceptions must be placed in clearly marked and severable portions of a bill, at least two weeks before the bill is voted upon.  Earmarks may not be inserted into conference reports. Further, limits shall be placed on no-bid, crony, or noncompetitive contracts, where the latter must have firm expiration dates.  Conflict of interest rules will be strengthened. 

 

SEVENTH: Create an office tasked with translating and describing all legislation in easily understandable language, for public posting at least three days before any bill is voted upon, clearly tracking changes or insertions, so that the public (and even members of Congress) may know what is at stake.  This office may recommend division of any bill that inserts or combines unrelated or “stealth” provisions.

 

EIGHTH: Return the legislative branch of government to the people by finding a solution to the cheat of gerrymandering, which enables politicians to choose their voters instead of the other way around.  We shall encourage and insist that states do this in an evenhanded manner, either by using independent redistricting commissions or by minimizing overlap between state legislature districts and those for Congress.

 

NINTH: Newly elected members of Congress with credentials from their states shall be sworn in by impartial clerks of either the House or Senate, without partisan bias, and at the new member’s convenience. The House may be called into session, with or without action by the Speaker, at any time that a petition is submitted to the Chief Clerk that was signed by 40% of the members. 

 

TENTH: One time in any week, the losing side in a House vote may demand and get an immediate non-binding secret polling of the members who just took part in that vote, using technology to ensure reliable anonymity. While this secret ballot will be non-binding legislatively, the poll will reveal whether some members felt coerced or compelled to vote against their conscience. Members who refuse to be polled anonymously will be presumed to have been so compelled or coerced.

 

 

 

II.  REFORM AMERICA

 

 Thereafter, within the first 100 days of the new Congress, we shall bring to the House Floor the following bills, each to be given full and open debate, each to be given a clear and fair vote and each to be immediately available for public inspection and scrutiny. 

 

 

DB Note: The following proposed bills are my own particular priorities, chosen because I believe they are both vitally important and under-appreciated! (Indeed, some of them you’ll see nowhere else.) 

 

Their common trait – until you get to #20 – is that they have some possibility of appealing to reasonable people across party lines… the “60%+ rule” that worked so persuasively in 1994.

 

#20 will be a catch-all that includes a wide swathe of reforms sought by many Democrats – and, likely, by many of you -- but may entail more dispute, facing strong opposition from the other major party. 

 

In other words… as much as you may want the items in #20 – and I do too: most of them! – you are going to have to work hard for them separately from a ‘contract’ like this one, which aims to swiftly take advantage of 60%+ consensus to get at least an initial tranche of major reforms done.

 

 

1. THE SECURITY FOR AMERICA ACT will ensure that top priority goes to America’s military and security readiness, especially our nation's ability to respond to surprise threats, including natural disasters or other emergencies. FEMA and the CDC and other contingency agencies will be restored and enhanced, their agile effectiveness audited.

 

When ordering a discretionary foreign intervention, the President must report probable effects on readiness, as well as the purposes, severity and likely duration of the intervention, along with credible evidence of need. 

 

All previous Congressional approvals for foreign military intervention or declared states of urgency will be explicitly canceled, so that future force resolutions will be fresh and germane to each particular event, with explicit expiration dates. All Eighteenth or Nineteenth Century laws that might be used as excuses for Executive abuse will be explicitly repealed. 

 

Reserves will be augmented and modernized. Reserves shall not be sent overseas without a Congressionally certified state of urgency that must be renewed at six-month intervals. Any urgent federalization and deployment of National Guard or other troops to American cities, on the excuse of civil disorder, shall be supervised by a plenary of the nation’s state governors, who may veto any such deployment by a 40% vote or a signed declaration by twenty governors. 

 

The Commander-in-Chief may not suspend any American law, or the rights of American citizens, without submitting the brief and temporary suspension to Congress for approval in session. 

 

2. THE PROFESSIONALISM ACT will protect the apolitical independence of our intelligence agencies, the FBI, the scientific and technical staff in executive departments, and the United States Military Officer Corps.  All shall be given safe ways to report attempts at political coercion or meddling in their ability to give unbiased advice.  Whistle-blower protections will be strengthened within the U.S. government. 


The federal Inspectorate will gather and empower all agency Inspectors General and Judges Advocate General under the independent and empowered Inspector General of the United States (IGUS).

 

3. THE SECRECY ACT will ensure that the recent, skyrocketing use of secrecy – far exceeding anything seen during the Cold War - shall reverse course.  Independent commissions of trusted Americans shall approve, or set time limits to, all but the most sensitive classifications, which cannot exceed a certain number.  These commissions will include some members who are chosen (after clearance) from a random pool of common citizens.  Secrecy will not be used as a convenient way to evade accountability.

 

4. THE SUSTAINABILITY ACT will make it America’s priority to pioneer technological paths toward energy independence, emphasizing economic health that also conserves both national and world resources.  Ambitious efficiency and conservation standards may be accompanied by compromise free market solutions that emphasize a wide variety of participants, with the goal of achieving more with less, while safeguarding the planet for our children.

 

5. THE POLITICAL REFORM ACT will ensure that the nation’s elections take place in a manner that citizens can trust and verify.  Political interference in elections will be a federal crime.  Strong auditing procedures and transparency will be augmented by whistleblower protections.  New measures will distance government officials from lobbyists.  Campaign finance reform will reduce the influence of Big Money over politicians. The definition of a ‘corporation’ shall be clarified, so that corporations are neither ‘persons’ nor entitled to use money or other means to meddle in politics, nor to coerce their employees to act politically.

Gerrymandering will be forbidden by national law. 

The Voting Rights Act will be reinforced, overcoming all recent Court rationalizations to neuter it.

 

6.  THE TAX REFORM ACT will simplify the tax code, while ensuring that everybody pays their fair share.  Floors for the Inheritance Tax and Alternative Tax will be raised to ensure they only affect the truly wealthy, while loopholes used to evade those taxes will be closed. Modernization of the IRS and funding for auditors seeking illicitly hidden wealth shall be ensured by allowing the IRS to draw upon major penalties imposed by citizen juries. 

 

All tax breaks for the wealthy will be suspended during time of war, so that the burdens of any conflict or emergency are shared by all.[1]

 

7.  THE AMERICAN EXCELLENCE ACT will provide incentives for American students to excel at a range of important fields. This nation must especially maintain its leadership, by training more experts and innovators in science and technology.  Education must be a tool to help millions of students and adults adapt, to achieve and keep high-paying 21st Century jobs.

 

8. THE HEALTHY CHILDREN ACT will provide basic coverage for all of the nation's children to receive preventive care and needed medical attention.  Whether or not adults should get insurance using market methods can be argued separately.


 But under this act, all U.S. citizens under the age of 25 shall immediately qualify as “seniors” under Medicare, an affordable step that will relieve the nation’s parents of stressful worry. A great nation should see to it that the young reach adulthood without being handicapped by preventable sickness.

 

9. THE CYBER HYGIENE ACT: Adjusting liability laws for a new and perilous era, citizens and small companies whose computers are infested and used by ‘botnets’ to commit crimes shall be deemed immune from liability for resulting damages, providing that they download and operate a security program from one of a dozen companies that have been vetted and approved for effectiveness by the US Department of Commerce. Likewise, companies that release artificial intelligence programs shall face lessened liability if those programs persistently declare their provenance and artificiality and potential dangers. 

 

10. THE TRUTH AND RECONCILIATION ACT:  Without interfering in the president's constitutional right to issue pardons for federal offenses, Congress will pass a law defining the pardon process, so that all persons who are excused for either convictions or possible crimes must at least explain those crimes, under oath, before an open congressional committee, before walking away from them with a presidential pass.  

If the crime is not described in detail, then a pardon cannot apply to any excluded portion. Further, we shall issue a challenge that no president shall ever issue more pardons than both of the previous administrations, combined.


If it is determined that a pardon was given quid pro quo for some bribe, emolument, gift or favor, then this act clarifies that such pardons are null and void. Moreover, this applies retroactively to any such pardons in the past.

 

We will further reverse the current principle of federal supremacy in criminal cases that forbids states from prosecuting for the same crime. Instead, one state with grievance in a federal case may separately try the culprit for a state offense, which - upon conviction by jury - cannot be excused by presidential pardon.

 

Congress shall act to limit the effect of Non-Disclosure Agreements (NDAs) that squelch public scrutiny of officials and the powerful. With arrangements to exchange truth for clemency, both current and future NDAs shall decay over a reasonable period of time. 

 

Incentives such as clemency will draw victims of blackmail to come forward and expose their blackmailers.

 

11. THE IMMUNITY LIMITATION ACT: The Supreme Court has ruled that presidents should be free to do their jobs without undue distraction by legal procedures and jeopardies. Taking that into account, we shall nevertheless – by legislation – firmly reject the artificial and made-up notion of blanket Presidential Immunity or that presidents are inherently above the law. 

 

Instead, the Inspector General of the United States (IGUS) shall supervise legal cases that are brought against the president so that they may be handled by the president’s chosen counsel in order of importance or severity, in such a way that the sum of all such external legal matters will take up no more than ten hours a week of a president’s time. While this may slow such processes, the wheels of law will not be fully stopped. 

 

Civil or criminal cases against a serving president may be brought to trial by a simple majority consent of both houses of Congress, though no criminal or civil punishment may be exacted until after the president leaves office, either by end-of-term or impeachment and Senate conviction. 

In the event that Congress is thwarted from acting on impeachment or trial, e.g. by some crime that prevents certain members from voting, their proxies may be voted in such matters by their party caucus, until their states complete election of replacements.

 

(Note: that last paragraph is a late addition to cover a scenario defended by one of Donald Trump’s own attorneys… that in theory a president might shoot enough members of Congress (or enough Supreme Court justices) to evade impeachment and remain immune from prosecution or any other remedy.)

  

12. THE FACT ACT will begin by restoring the media Rebuttal Rule, prying open "echo chamber" propaganda mills. Any channel, or station, or Internet podcast, or meme distributor that accepts advertising or reaches more than 10,000 followers will be required to offer five minutes per day during prime time and ten minutes at other times to reputable and vigorous adversaries. Until other methods are negotiated, each member of Congress shall get to choose one such vigorous adversary, ensuring that all perspectives may be involved. 

 

The Fact Act will further fund experimental Fact-Challenges, where major public disagreements may be openly and systematically and reciprocally confronted with demands for specific evidence.

 

The Fact Act will restore full funding and staffing to both the Congressional Office of Technology Assessment and the executive Office of Science and Technology Policy (OSTP). Every member of Congress shall be funded to hire a science and fact advisor from their home district, who may interrogate the advisory bodies – an advisor who may also answer questions of fact on the member’s behalf. 

 

This bill further requires that the President must fill, by law, the position of White House Science Adviser from a diverse and bipartisan slate of qualified candidates offered by the Academy of Science. The Science Adviser shall have uninterrupted access to the President for at least two one-hour sessions per month.

 

13. THE VOTER ID ACT: Under the 13th and 14th Amendments, this act requires that states mandating Voter ID requirements must offer substantial and effective compliance assistance, helping affected citizens to acquire their entitled legal ID and register to vote. 

 

Any state that fails to provide such assistance – assistance that substantially reduces the fraction of eligible citizens turned away at the polls – shall be presumed in violation of equal protection and engaged in illegal voter suppression. If such compliance assistance has been vigorous and effective for ten years, then that state may institute requirements for Voter ID.      

     

In all states, registration for citizens to vote shall be automatic with a driver’s license or passport or state-issued ID, unless the citizen opts out.

 

14. THE WYOMING RULE: Congress shall end the arrangement (under the Permanent Apportionment Act of 1929) for perpetually limiting the House of Representatives to 435 members. Instead, it will institute the Wyoming Rule: the least-populated state shall get one representative, and all other states will be apportioned representatives according to their population in full-integer multiples of the smallest state’s. The Senate’s inherent bias favoring small states should be enough. In the House, all citizens should get votes of equal value. https://thearp.org/blog/the-wyoming-rule/
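The apportionment arithmetic is simple enough to sketch. The snippet below is a minimal illustration, assuming "full-integer multiples" means rounding each state's ratio to the smallest state's population to the nearest whole number (with a floor of one seat); the populations are illustrative round figures, not census data.

```python
def wyoming_rule_seats(populations):
    """Apportion House seats: the smallest state anchors one seat, and every
    state gets its population ratio to that anchor, rounded to an integer."""
    smallest = min(populations.values())
    return {state: max(1, round(pop / smallest))
            for state, pop in populations.items()}

# Illustrative (not actual census) populations:
pops = {"Wyoming": 580_000, "Montana": 1_100_000, "California": 39_000_000}
print(wyoming_rule_seats(pops))  # {'Wyoming': 1, 'Montana': 2, 'California': 67}
```

Rounding to the nearest integer rather than truncating keeps each state's average district size close to the anchor state's population.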

 

15:  IMMIGRATION REFORM: There are already proposed immigration law reforms on the table, worked out by sincere Democrats and sincere Republicans, back when the latter were a thing. These bipartisan reforms will be revisited, debated, updated and then brought to a vote. 

 

In addition, if a foreign nation is among the top five sources of refugees seeking U.S. asylum from persecution in their homelands, then by law it shall be incumbent upon the political and social elites in that nation to help solve the problem, or else take responsibility for causing their citizens to flee. 

 

Upon verification that their regime is among those top five, that nation’s elites will be billed, enforceably, for U.S. expenses in giving refuge to that nation’s citizens. Further, all trade and other advantages of said elites will be suspended and access to the United States banned, except for the purpose of negotiating ways that the U.S. can help in that nation’s rise to both liberty and prosperity, thus reducing refugee flows in the best possible way. 

 

16: THE EXECUTIVE OFFICE MANAGER: By law we shall establish under IGUS (the Inspectorate) a civil service position of White House Manager, whose function is to supervise all non-political functions and staff. This would include the Executive Mansion’s physical structure and publicly-owned contents, but also policy-neutral services such as the switchboard, kitchens, Travel Office, medical office, and Secret Service protection details, since there are no justifications for the President or political staff to have whim authority over such apolitical employees. 

 

With due allowance and leeway for needs of the Office of President, public property shall be accounted-for. The manager will allocate which portions of any trip expense should be deemed private and thereupon – above a basic allowance – shall be billed to the president or his/her party. 

This office shall supervise annual physical and mental examination by external experts for all senior office holders including the President, Vice President, Cabinet members and leaders of Congress.

Any group of twenty senators or House members or state governors may choose one periodical, network or other news source to get credentialed to the White House Press Pool, spreading inquiry across all party lines and ensuring that all rational points of view get access.

 

17: EMOLUMENTS AND GIFTS ACT: Emoluments and gifts and other forms of valuable beneficence bestowed upon the president, or members of Congress, or judges, or their families or staffs, shall be more strictly defined and transparently controlled. All existing and future presidential libraries or museums or any kind of shrine shall strictly limit the holding, display or lending of gifts to, from, or by a president or ex-president, which shall instead be owned and held (except for facsimiles) by the Smithsonian and/or sold at public auction. 


Donations by corporations or wealthy individuals to pet projects of a president or other members of government, including inauguration events, shall be presumed to be illegal bribery unless they are approved by a nonpartisan ethical commission.

 

18: BUDGETS: If Congress fails to fulfill its budgetary obligations or to raise the debt ceiling, the result will not be a ‘government shutdown.’ Rather, all pay and benefits will cease going to any Senator or Representative whose annual income is above the national average, until appropriate legislation has passed, at which point only 50% of any backlog arrears may be made up. 

 

19: THE RURAL AMERICA AND HOUSING ACT: Giant corporations and cartels are using predatory practices to unfairly corner, control or force out family farms and small rural businesses. We shall upgrade FDR-era laws that saved the American heartland for the people who live and work there, producing the nation’s food. Subsidies and price supports shall only go to family farms or co-ops. Monopolies in fertilizer, seeds and other supplies will be broken up and replaced by competition. Living, working and legal conditions for farm workers and food-processing workers will be improved by steady public and private investments.

Cartels that buy up America’s stock of homes and home-builders will be investigated for collusion to limit construction and/or drive up rents and home prices, and appropriate legislation will follow. 

 

 

20: THE LIBERAL AGENDA: Okay. Your turn. Our turn. Beyond the 60% rule.

 

·      Protect women’s autonomy, credibility and command over their own bodies,

·      Ease housing costs: stop private corps buying up large tracts of homes, colluding on prices. (See #19.)

·      Help working families with child care and elder care.

·      Consumer protection, empower the Consumer Financial Protection Board.

·      At least allow student debt refinancing, undoing a dastardly GOP prohibition. 

·      Restore the postal savings bank for the un-banked,

·      Basic, efficient, universal background checks for gun purchases, with possible exceptions.

·      A national Election Day holiday, for those who actually vote.

·      Carefully revive the special prosecutor law. 

·      Expand and re-emphasize protections under the Civil Service Act.

·      Anti-trust breakup of monopoly/duopolies.


….AND SO ON…

 

 

III.          Conclusion

 

 

All right.  I know this proposal – that we do a major riff off of the 1994 Republican Contract with America – will garner one top complaint: We don't want to look like copycats!

 

And yet, by satirizing that totally-betrayed “contract,” we poke GOP hypocrisy… while openly reaching out to the wing of conservatism that truly believed the promises, back in '94, perhaps winning some of them over, by offering deliverable metrics to get it right this time…

 

…while boldly outlining reasonable liberal measures that the nation desperately needs.

 

I do not insist that the measures I posed -- in my rough draft "Democratic Deal" -- are the only ones possible! (Some might even seem crackpot… till you think them over.)  New proposals would be added or changed.  

 

Still, this list seems reasonable enough to debate, refine, and possibly offer to focus groups. Test marketing (the way Gingrich did!) should tell us whether Americans would see this as "copycat"…

 

...or else a clever way to turn the tables, in an era when agility must be an attribute of political survival.


---------------------------------------------------------


And then FOUR MORE - including several that seem especially needed, given the news!

And after that, I will intermittently examine others, while responding to your comments and criticisms. (Please post them in the LATEST blog, so I will see them.)


[1] Elites who send our sons and daughters to war, but not their own, will have to choose whether to keep their overseas adventures or their tax cuts. This will elucidate a poorly known fact: that all previous generations of the rich were at least willing to tax themselves during times of urgency, to help pay for wars they would not fight. This provision is not so much an anti-war measure as one that is anti-hypocrisy… one of the most devastating lines of attack against another political side.

Planet DebianColin Watson: Free software activity in November 2025

My Debian contributions this month were all sponsored by Freexian. I had a bit less time than usual, because Freexian collaborators gathered in Marseille this month for our yearly sprint, doing some planning for next year.

You can also support my work directly via Liberapay or GitHub Sponsors.

OpenSSH

I began preparing for the second stage of the GSS-API key exchange package split (some details have changed since that message). It seems that we’ll need to wait until Ubuntu 26.04 LTS has been released, but that’s close enough that it’s worth making sure we’re ready. This month I just did some packaging cleanups that would otherwise have been annoying to copy, such as removing support for direct upgrades from pre-bookworm. I’m considering some other package rearrangements to make the split easier to manage, but haven’t made any decisions here yet.

This also led me to start on a long-overdue bug triage pass, mainly consisting of applying usertags to lots of our open bugs to sort them by which program they apply to, and also closing a few that have been fixed, since some bugs will eventually need to be reassigned to GSS-API packages and it would be helpful to make them easier to find. At the time of writing, about 30% of the bug list remains to be categorized this way.

Python packaging

I upgraded these packages to new upstream versions:

I packaged django-pgtransaction and backported it to trixie, since we plan to use it in Debusine; and I adopted python-certifi for the Python team.

I fixed or helped to fix several other build/test failures:

I fixed a couple of other bugs:

Other bits and pieces

Code reviews

David BrinFour More Urgent Proposals for a 'Newer Deal' to Save our Great Experiment

Our series on a Newer Deal for America has offered 30+ proposed actions that Democrats and their allies should consider now -- and work out kinks -- so they can hit the ground forcefully when they retake Congress, in (or with defection of a dozen Republican patriots, before) January 2027.  

Some of the concepts have been around a while, like canceling the Citizens United travesty. Others are my own originals, like establishing the office of Inspector General of the United States (discussed here.) And some, e.g. giving every Congress member one peremptory subpoena per session, might seem obscure, even puzzling to you, till you slap your forehead and go "of course!"

And yes, we'd not be in our current mess if some of these -- like IGUS -- had been enacted sooner.

This is not to say that Democratic politicians aren't learning. When Clinton and Obama were president for 8 years each, they only had the first two in which to work with Democratic Congresses, and those two years were pretty much squandered trying desperately to find Republicans willing to negotiate -- a hopeless wish, after Dennis Hastert banned all GOP politicians from even talking to Democratic colleagues.

That all changed when Biden got in. Immediately in 2021, Nancy Pelosi and Chuck Schumer -- aided vigorously by Bernie, Liz and AOC etc. -- leaped into action, giving us a year of miracle bills like the Infrastructure Act, the Inflation Reduction Act, the CHIPS Act, and Medicare drug price negotiation... all of them spectacular successes that disprove every insipid far-left sneer about 'ineffective DNC sellouts.' 

Though now we know that those bills went nowhere near far enough!

Hence, while I despair that these proposals will ever receive even a scintilla of attention or action, it is still my duty as an American to offer whatever my talents allow. 

So, let's take a closer look at four more from that list of ideas!


 == Four more ideas ==

History shows that Americans are suspicious of grand prescriptions for sweeping change. They like progress and reform! But in increments. Steps forward that prove themselves and thusly can't be taken back, and thereupon serve as a new, higher plateau, from which new steps can be launched. Bernie, Liz, AOC, Pete and the rest of the pragmatic left know this.

And so, let's change the argument over healthcare!  Let's increment forward in a way that will surely pass. One that makes further progress inevitable. We'll do this by taking a big step that can easily be afforded under present budgets and thus cancel the "how will you pay for it?" argument.

A step that will prove so popular, only political morons would oppose it.


THE HEALTHY CHILDREN ACT will provide basic coverage for all of the nation's youths to receive preventive care and needed medical attention.  Should adults still get insurance using market methods? That can be argued separately... 

 

...but under this act: all U.S. citizens under the age of 25 shall immediately qualify as “seniors” under Medicare. 



Such a bill might fit on a single sheet of paper. Possibly just that one sentence, above! Ponder how elegantly simple it will be to add a quarter of the U.S. population to Medicare and ignore howls of "who pays for it?"  


While overall, young people are cheap to insure and generally healthy, when they do need care it is SO in society's interest to leap upon any problem! And hence a national priority, if only as an investment in the future. 


A great nation should see to it that the young reach adulthood without being handicapped by preventable sickness. It's an affordable step that will relieve the nation’s parents of stressful worry. 

 

Moreover, watch how quickly the insurance companies would then step up to negotiate! Especially if they face a 'ratcheting squeeze.' Like if every year the upper bound of Medicare goes down by a year -- from 65 to 64 and then 63... while the lower bound rises from 25 to 26 to 27...

Oh, they'll negotiate, all right.

And now another no-brainer that's absolutely needed. 

It was needed yesterday.


THE PROFESSIONALISM ACT will protect the apolitical independence of our intelligence agencies, the FBI, the scientific and technical staff in executive departments and in Congress, and the United States Military Officer Corps.  All shall be given safe ways to report attempts at political coercion or meddling in their ability to give unbiased advice. 

 Whistle-blower protections will be strengthened. The federal Inspectorate will gather and empower all agency Inspectors General and Judges Advocate General under the independent and empowered Inspector General of the United States (IGUS).


Yes, this correlates with the proposed law we discussed last time, to establish IGUS and the Inspectorate, independent of all other branches of government. (A concept once promoted by the mighty Sun Yat-sen!) And boy do we need this, right now.

Again, this one doesn't require much explication. Not anymore. Donald Trump has seen to that.

The final pair (for today) do call for some explanation... before their value ought to become obvious!


THE TRUTH AND RECONCILIATION ACT:  Without interfering in the president's constitutional right to issue pardons for federal offenses, Congress will pass a law defining the pardon process, so that all persons who are excused for either convictions or possible crimes must at least explain those crimes, under oath, before an open congressional committee, before walking away from them with a presidential pass. 

 

If the crime is not described in detail, then a pardon cannot apply to any excluded portion. Further, we shall issue a challenge that no president shall ever issue more pardons than both of the previous administrations, combined.


If it is determined that a pardon was given as a quid pro quo for some bribe, emolument, gift or favor, then this act clarifies that such pardons are - and always were, by definition - null and void. Moreover, this applies retroactively to any such pardons in the past.

 

We will further reverse the current principle of federal supremacy in criminal cases that forbids states from prosecuting for the same crime. Instead, one state with grievance in a federal case may separately try the culprit for a state offense, which - upon conviction by jury - cannot be excused by presidential pardon.


Congress shall act to limit the effect of Non-Disclosure Agreements (NDAs) that squelch public scrutiny of officials and the powerful. With arrangements to exchange truth for clemency, both current and future NDAs shall decay over a reasonable period of time. 

 

Incentives such as clemency will draw victims of blackmail to come forward and expose their blackmailers.

 


I'm not sure how to make that one any clearer than the wording itself. 

Again, when I first proposed these reforms, years ago, people shrugged with "Why would we need that?"

But now? Can anything make the case for these acts better than the news that we see every... single... day?

The next and final one (for today) makes a good partner to the Truth & Reconciliation Act.


THE IMMUNITY LIMITATION ACT: The Supreme Court has ruled that presidents should be free to do their jobs without undue distraction by legal procedures and jeopardies. Taking that into account, we shall nevertheless – by legislation – firmly reject the artificial and made-up notion of blanket Presidential Immunity or that presidents are inherently above the law. 

 

Instead, the Inspector General of the United States (IGUS) shall supervise legal cases that are brought against the president, so that they may be handled by the president’s chosen counsel in order of importance or severity, in such a way that the sum of all such external legal matters will take up no more than ten hours a week of a president’s time. While this may slow such processes, the wheels of law will not be fully stopped. 

 

Civil or criminal cases against a serving president may be brought to trial by a simple majority consent of both houses of Congress, though no criminal or civil punishment may be exacted until after the president leaves office, either by end-of-term or impeachment and Senate conviction.

Again, could anything be more clear? And so, why have we not seen these two enacted yet? Because of flawed assumptions!  Like assuming that nothing can be done about corrupt presidential pardons. Or that NDAs are forever. Or that nothing can be done about the Supreme Court's declaration of Presidential Immunity.

But the Court - suborned as its current majority may be - felt it necessary to issue that ruling based on a rationalization: that the elected chief executive must do the job without undue harassment by legal vexations. Indeed, this bill would solve that! Only without creating a wholly new and wholly loathsome notion of presidential immunity above all law!

Just like the Roberts Rationalization for excusing gerrymandering, this immunity justification can be logically bypassed. Please do ponder how.

Oh but I suddenly realized... we need to add one more paragraph to that bill! 

One that deals with something that came up recently. Might a president evade impeachment merely by shooting enough House members to prevent a majority from acting to impeach him? 

Trump's own attorney argued that he could! And that he would be immune from prosecution for doing so until he was actually impeached and convicted, which he just prevented via murder!

 This added paragraph attempts to seal off that insane possibility.


In the event that Congress is thwarted from acting on impeachment or trial, e.g. by some crime that prevents certain members from voting, their proxies may be voted in such matters by their party caucus, until their states complete election of replacements.


That may not fly past today's Court. But the declaration of intent will resonate, still, if we ever need it to. 


      == Add judo to the game plan to save America! ==

Can you honestly assert that ANY of these four would fail the "60%+ Rule?"  

The initial tranche of reforms should be ones that get sixty percent approval from polls or focus groups, so that they can pass quickly, clearing away the most vital things, building further support from a growing American majority. Saving the harder political fights for just a little later. 

That was the persuasive trick of Newt Gingrich's "Contract With America." A clever ruse, since he and his party later betrayed every promise that they offered in their Contract! Still, sticking to that rule made the Contract an ingenious sales pitch.

Democrats run the gamut, but they truly are generally different! As Pelosi, Schumer, Warren, AOC, Sanders et al. proved in 2021, Democrats can act hard and fast when they put their minds to it. 

So now, let's fill their minds with innovative and bold ideas! So that when the nation rises up against the current mad administration, we'll be ready for a genuine Miracle Year.


Planet DebianBen Hutchings: FOSS activity in November 2025

365 TomorrowsCB-111

Author: Doug Lambdin Lewis Flaherty opened a cryobox drawer and pulled out the container with the head labeled CB-9, belonging to one Deborah Beale, steam rising out as the inner container became exposed to room temperature. Lewis inspected the case, her head, and the “life-stem” attached into her neck, as was his Friday duty, ticking […]

The post CB-111 appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Pawn Pawn in in Game Game of of Life Life

It feels like ages ago, when document databases like Mongo were all the rage. That isn't to say that they haven't stuck around and don't deliver value, but gone is the faddish "RDBMSes are dead, bro." The "advantage" they offer is that they turn data management problems into serialization problems.

And that's where today's anonymous submission takes us. Our submitter has a long list of bugs around managing lists of usernames. These bugs largely exist because the contract developer who wrote the code didn't write anything, and instead "vibe coded too close to the sun", according to our submitter.

Here's the offending C# code:

   [JsonPropertyName("invitedTraders")]
   [BsonElement("invitedTraders")]
   [BsonIgnoreIfNull]
   public InvitedTradersV2? InvitedTraders { get; set; }

   [JsonPropertyName("invitedTradersV2")]
   [BsonElement("invitedTradersV2")]
   [BsonIgnoreIfNull]
   public List<string>? InvitedTradersV2 { get; set; }

Let's start with the type InvitedTradersV2. This type contains a list of strings which represent usernames. The field InvitedTradersV2 is a list of strings which represent usernames. Half of our submitter's bugs exist simply because these two lists get out of sync- they should contain the same data, but without someone enforcing that correctly, problems accrue.

This is made more frustrating by the MongoDB attribute, BsonIgnoreIfNull, which simply means that the serialized object won't contain the key if the value is null. But that means the consuming application doesn't know which key it should check.

For the final bonus fun, note the use of JsonPropertyName. This attribute comes from the built-in class library and tells .NET how to serialize the object to JSON. The problem here is that this application doesn't use the built-in serializer, and instead uses Newtonsoft.JSON, a popular third-party library for solving the same problem. While Newtonsoft does recognize some built-in attributes for serialization, JsonPropertyName is not among them. This means the attribute does nothing in this example, aside from adding some confusion to the code base.
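For comparison, the attribute Newtonsoft.Json actually honors is [JsonProperty]. Here is a minimal sketch of a deduplicated version, assuming the two lists were meant to hold the same usernames and a single list is the desired end state (the containing class name here is invented for illustration):

```csharp
using System.Collections.Generic;
using MongoDB.Bson.Serialization.Attributes;
using Newtonsoft.Json;

public class TradeSession  // hypothetical containing class
{
    // [JsonProperty] is the attribute Newtonsoft.Json reads;
    // System.Text.Json's [JsonPropertyName] would be silently ignored by it.
    [JsonProperty("invitedTraders")]
    [BsonElement("invitedTraders")]
    [BsonIgnoreIfNull]
    public List<string>? InvitedTraders { get; set; }
}
```

Collapsing the two properties into one removes the synchronization bug outright, though any existing documents that only populated the invitedTradersV2 key would still need a one-time migration.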

I suspect the developer responsible, if they even read this code, decided that the duplicated data was okay, because isn't that just a normal consequence of denormalization? And document databases are all about denormalization. It makes your queries faster, bro. Just one more shard, bro.


,

Planet DebianReproducible Builds: Reproducible Builds in November 2025

Welcome to the report for November 2025 from the Reproducible Builds project!

These monthly reports outline what we’ve been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As always, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

In this report:

  1. “10 years of Reproducible Builds” at SeaGL
  2. Distribution work
  3. Tool development
  4. Website updates
  5. Miscellaneous news
  6. Software Supply Chain Security of Web3
  7. Upstream patches

‘10 years of Reproducible Builds’ at SeaGL 2025

On Friday 8th November, Chris Lamb gave a talk called 10 years of Reproducible Builds at SeaGL in Seattle, WA.

Founded in 2013, SeaGL is a free, grassroots technical summit dedicated to spreading awareness and knowledge about free and open source software, hardware and culture. Chris’ talk:

[…] introduces the concept of reproducible builds, its technical underpinnings and its potentially transformative impact on software security and transparency. It is aimed at developers, security professionals and policy-makers who are concerned with enhancing trust and accountability in our software. It also provides a history of the Reproducible Builds project, which is approximately ten years old. How are we getting on? What have we got left to do? Aren’t all the builds reproducible now?


Distribution work

In Debian this month, Jochen Sprickerhof created a merge request to replace the use of reprotest in Debian’s Salsa Continuous Integration (CI) pipeline with debrebuild. Jochen cites the advantages as being threefold: firstly, that “only one extra build needed”; it “uses the same sbuild and ccache tooling as the normal build”; and “works for any Debian release”. The merge request was merged by Emmanuel Arias and is now active.

kpcyrd posted to our mailing list announcing the initial release of repro-threshold, which implements an APT transport that “defines a threshold of at least X of my N trusted rebuilders need to confirm they reproduced the binary” before installing Debian packages. “Configuration can be done through a config file, or through a curses-like user interface.”

Holger then merged two commits by Jochen Sprickerhof in order to address a fakeroot-related reproducibility issue in the debian-installer, and Jörg Jaspert deployed a patch by Ivo De Decker for a bug originally filed by Holger in February 2025 related to some Debian packages not being archived on snapshot.debian.org.

Elsewhere, Roland Clobus performed some analysis on the “live” Debian trixie images, which he determined were not reproducible. However, in a follow-up post, Roland happily reports that the issues have been handled. In addition, 145 reviews of Debian packages were added, 12 were updated and 15 were removed this month adding to our knowledge about identified issues.

Lastly, Jochen Sprickerhof filed a bug announcing their intention to “binary NMU” a very large number of the R programming language after a reproducibility-related toolchain bug was fixed.


Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.


Julien Malka and Arnout Engelen launched the new hash collection server for NixOS. Aside from improved reporting to help focus reproducible builds efforts within NixOS, it collects build hashes as individually-signed attestations from independent builders, laying the groundwork for further tooling.


Tool development

diffoscope version 307 was uploaded to Debian unstable (as well as version 309). These changes included further attempts to automatically deploy to PyPI, liaising with the PyPI developers/maintainers on this experimental feature. [][][]

In addition, reprotest versions 0.7.31 and 0.7.32 were uploaded to Debian unstable by Holger Levsen, who also made the following changes:

  • Do not vary the architecture personality if the kernel is not varied. (Thanks to Raúl Cumplido). []
  • Drop the debian/watch file, as Lintian now flags this as an error for ‘native’ Debian packages. [][]
  • Bump Standards-Version to 4.7.2, with no changes needed. []
  • Drop the Rules-Requires-Root header as it is no longer required. []

In addition, Vagrant Cascadian fixed a build failure by removing some extra whitespace from an older changelog entry. []


Website updates

Once again, there were a number of improvements made to our website this month including:


Miscellaneous news


Software Supply Chain Security of Web3

Via our mailing list, Martin Monperrus let us know about their recently-published page on the Software Supply Chain Security of Web3. The abstract of their paper is as follows:

Web3 applications, built on blockchain technology, manage billions of dollars in digital assets through decentralized applications (dApps) and smart contracts. These systems rely on complex, software supply chains that introduce significant security vulnerabilities. This paper examines the software supply chain security challenges unique to the Web3 ecosystem, where traditional Web2 software supply chain problems intersect with the immutable and high-stakes nature of blockchain technology. We analyze the threat landscape and propose mitigation strategies to strengthen the security posture of Web3 systems.

Their paper lists reproducible builds as one of the mitigating strategies. A PDF of the full text is available to download.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website, where you can also find ways to get in touch with us.

Cryptogram Four Ways AI Is Being Used to Strengthen Democracies Worldwide

Democracy is colliding with the technologies of artificial intelligence. Judging from the audience reaction at the recent World Forum on Democracy in Strasbourg, the general expectation is that democracy will be the worse for it. We have another narrative. Yes, there are risks to democracy from AI, but there are also opportunities.

We have just published the book Rewiring Democracy: How AI Will Transform Politics, Government, and Citizenship. In it, we take a clear-eyed view of how AI is undermining confidence in our information ecosystem, how the use of biased AI can harm constituents of democracies, and how elected officials with authoritarian tendencies can use it to consolidate power. But we also give positive examples of how AI is transforming democratic governance and politics for the better.

Here are four such stories unfolding right now around the world, showing how AI is being used by some to make democracy better, stronger, and more responsive to people.

Japan

Last year, then-33-year-old engineer Takahiro Anno was a fringe candidate for governor of Tokyo. Running as an independent, he ended up coming in fifth in a crowded field of 56, largely thanks to the unprecedented use of an authorized AI avatar. That avatar answered 8,600 questions from voters on a 17-day continuous YouTube livestream and garnered the attention of campaign innovators worldwide.

Two months ago, Anno-san was elected to Japan’s upper legislative chamber, again leveraging the power of AI to engage constituents—this time answering more than 20,000 questions. His new party, Team Mirai, is also an AI-enabled civic technology shop, producing software aimed at making governance better and more participatory. The party is leveraging its share of Japan’s public funding for political parties to build the Mirai Assembly app, enabling constituents to express opinions on and ask questions about bills in the legislature, and to organize those expressions using AI. The party promises that its members will direct their questioning in committee hearings based on public input.

Brazil

Brazil is notoriously litigious, with even more lawyers per capita than the US. The courts are chronically overwhelmed with cases and the resultant backlog costs the government billions to process. Estimates are that the Brazilian federal government spends about 1.6% of GDP per year operating the courts and another 2.5% to 3% of GDP issuing court-ordered payments from lawsuits the government has lost.

Since at least 2019, the Brazilian government has aggressively adopted AI to automate procedures throughout its judiciary. AI is not making judicial decisions, but aiding in distributing caseloads, performing legal research, transcribing hearings, identifying duplicative filings, preparing initial orders for signature and clustering similar cases for joint consideration: all things to make the judiciary system work more efficiently. And the results are significant; Brazil’s federal supreme court backlog, for example, dropped in 2025 to its lowest levels in 33 years.

While it seems clear that the courts are realizing efficiency benefits from leveraging AI, there is a postscript to the courts’ AI implementation project over the past five-plus years: the litigators are using these tools, too. Lawyers are using AI assistance to file cases in Brazilian courts at an unprecedented rate, with new cases growing by nearly 40% in volume over the past five years.

It’s not necessarily a bad thing for Brazilian litigators to regain the upper hand in this arms race. It has been argued that litigation, particularly against the government, is a vital form of civic participation, essential to the self-governance function of democracy. Other democracies’ court systems should study and learn from Brazil’s experience and seek to use technology to maximize the bandwidth and liquidity of the courts to process litigation.

Germany

Now, we move to Europe and innovations in informing voters. Since 2002, the German Federal Agency for Civic Education has operated a non-partisan voting guide called Wahl-o-Mat. Officials convene an editorial team of 24 young voters (under 26 and selected for diversity) with experts from science and education to develop a slate of 80 questions. The questions are put to all registered German political parties. The responses are narrowed down to 38 key topics and then published online in a quiz format that voters can use to identify the party whose platform they most identify with.

In the past two years, outside groups have been innovating alternatives to the official Wahl-o-Mat guide that leverage AI. First came Wahlweise, a product of the German AI company AIUI. Second, students at the Technical University of Munich deployed an interactive AI system called Wahl.chat. This tool was used by more than 150,000 people within the first four months. In both cases, instead of having to read static webpages about the positions of various political parties, citizens can engage in an interactive conversation with an AI system to more easily get the same information contextualized to their individual interests and questions.

However, German researchers studying the reliability of such AI tools ahead of the 2025 German federal election raised significant concerns about bias and “hallucinations”—AI tools making up false information. Acknowledging the potential of the technology to increase voter informedness and party transparency, the researchers recommended adopting scientific evaluations comparable to those used in the Agency for Civic Education’s official tool to improve and institutionalize the technology.

United States

Finally, the US—in particular, California, home to CalMatters, a non-profit, nonpartisan news organization. Since 2023, its Digital Democracy project has been collecting every public utterance of California elected officials—every floor speech, comment made in committee and social media post, along with their voting records, legislation, and campaign contributions—and making all that information available in a free online platform.

CalMatters this year launched a new feature that takes this kind of civic watchdog function a big step further. Its AI Tip Sheets feature uses AI to search through all of this data, looking for anomalies, such as a change in voting position tied to a large campaign contribution. These anomalies appear on a webpage that journalists can access to give them story ideas and a source of data and analysis to drive further reporting.

This is not AI replacing human journalists; it is a civic watchdog organization using technology to feed evidence-based insights to human reporters. And it’s no coincidence that this innovation arose from a new kind of media institution—a non-profit news agency. As the watchdog function of the fourth estate continues to be degraded by the decline of newspapers’ business models, this kind of technological support is a valuable contribution to help a reduced number of human journalists retain something of the scope of action and impact our democracy relies on them for.

These are just four of many stories from around the globe of AI helping to make democracy stronger. The common thread is that the technology is distributing rather than concentrating power. In all four cases, it is being used to assist people performing their democratic tasks—politics in Japan, litigation in Brazil, voting in Germany and watchdog journalism in California—rather than replacing them.

In none of these cases is the AI doing something that humans can’t perfectly competently do. But in all of these cases, we don’t have enough available humans to do the jobs on their own. A sufficiently trustworthy AI can fill in gaps: amplify the power of civil servants and citizens, improve efficiency, and facilitate engagement between government and the public.

One of the barriers to realizing this vision more broadly is the AI market itself. The core technologies are largely being created and marketed by US tech giants. We don’t know the details of their development: on what material they were trained, what guardrails are designed to shape their behavior, what biases and values are encoded into their systems. And, even worse, we don’t get a say in the choices associated with those details or how they should change over time. In many cases, it’s an unacceptable risk to use these for-profit, proprietary AI systems in democratic contexts.

To address that, we have long advocated for the development of “public AI”: models and AI systems that are developed under democratic control and deployed for public benefit, not sold by corporations to benefit their shareholders. The movement for this is growing worldwide.

Switzerland has recently released the world’s most powerful and fully realized public AI model. It’s called Apertus, and it was developed jointly by public Swiss institutions: the universities ETH Zurich and EPFL, and the Swiss National Supercomputing Centre (CSCS). The development team has made it entirely open source—open data, open code, open weights—and free for anyone to use. No illegally acquired copyrighted works were used in its training. It doesn’t exploit poorly paid human laborers from the global south. Its performance is about where the large corporate giants were a year ago, which is more than good enough for many applications. And it demonstrates that it’s not necessary to spend trillions of dollars creating these models. Apertus takes a huge step toward realizing the vision of an alternative to big-tech-controlled corporate AI.

AI technology is not without its costs and risks, and we are not here to minimize them. But the technology has significant benefits as well.

AI is inherently power-enhancing, and it can magnify what the humans behind it want to do. It can enhance authoritarianism as easily as it can enhance democracy. It’s up to us to steer the technology in that better direction. If more citizen watchdogs and litigators use AI to amplify their power to oversee government and hold it accountable, if more political parties and election administrators use it to engage meaningfully with and inform voters, and if more governments provide democratic alternatives to big tech’s AI offerings, society will be better off.

This essay was written with Nathan E. Sanders, and originally appeared in The Guardian.

365 Tomorrows: Better Than Human

Author: Taylor Pittman They moved around the room, their bodies jerking at odd moments, their voices slipping into mechanical ranges as they served beverages. She could not stop her eyes from trapping the waiters in her periphery. If she looked close enough, she could see the stitch pattern embedded behind their ears or across their […]

The post Better Than Human appeared first on 365tomorrows.

Worse Than Failure: The Thanksgiving Shakedown

On Thanksgiving Day, Ellis had cuddled up with her sleeping cat on the couch to send holiday greetings to friends. There in her inbox, lurking between several well wishes, was an email from an unrecognized sender with the subject line, Final Account Statement. Upon opening it, she read the following:


Dear Ellis,

Your final account statement dated -1 has been sent to you. Please log into your portal and review your balance due totaling #TOTAL_CHARGES#.

Payment must be received within 30 days of this notice to avoid collection. You may submit payment online via [Payment Portal Link] or by mail to:

Chamberlin Apartments
123 Main Street
Anytown US 12345

If you believe there is an error on your account, please contact us immediately at 212-555-1212.

Thank you for your prompt attention to this matter.

Chamberlin Apartments

Ellis had indeed rented an apartment managed by this company, but had moved out 16 years earlier. She'd never been late with a payment for anything in her life. What a time to receive such a thing, at the start of a long holiday weekend when no one would be able to do anything about it for the next 4 days!

She truly had so much to be grateful for that Thanksgiving, and here was yet more for her list: her broad technical knowledge, her experience working in multiple IT domains, and her many years of writing up just these sorts of stories for The Daily WTF. All of this added up to her laughing instead of panicking. She could just imagine the poor intern who'd hit "Send" by mistake. She also imagined she wasn't the only person who'd received this message. Rightfully scared and angry callers would soon be hammering that phone number, and Ellis was further grateful that she wasn't the one who had to pick up.

"I'll wait for the apology email!" she said out loud with a knowing smile on her face, closing out the browser tab.

Ellis moved on physically and mentally, going forward with her planned Thanksgiving festivities without giving it another thought. The next morning, she checked her inbox with curious anticipation. Had there been a retraction, a please disregard?

No. Instead, there were still more emails from the same sender. The second, sent 7 hours after the first, bore the subject line Second Notice - Outstanding Final Balance:

Dear Ellis,

Our records show that your final balance of #TOTAL_CHARGES# from your residency at your previous residence remains unpaid.

This is your second notice. Please remit payment in full or contact us to discuss the balance to prevent your account from being sent to collections.

Failure to resolve the balance within the next 15 days may result in your account being referred to a third-party collections agency, which could impact your credit rating.

To make payment or discuss your account, please contact us at 212-555-1212 or accounting@chamapts.com.

Sincerely,

Chamberlin Apartments

The third, sent 6 and a half hours later, threatened Final Notice - Account Will Be Sent to Collections.

Dear Ellis,

Despite previous notices, your final account balance remains unpaid.

This email serves as final notice before your account is forwarded to a third-party collections agency for recovery. Once transferred, we will no longer be able to accept payment directly or discuss the account.

To prevent this, payment of #TOTAL_CHARGES# must be paid in full by #CRITICALDATE#.

Please submit payment immediately. Please contact 212-555-1212 to confirm your payment.

Sincerely,

Chamberlin Apartments

It was almost certainly a mistake, but still rather spooky to someone who'd never been in such a situation. There was solace in the thought that, if they really did try to force Ellis to pay #TOTAL_CHARGES# on the basis of these messages, anyone would find it absurd that all 3 notices were sent mere hours apart, on a holiday no less. The first two had also mentioned 30 and 15 days to pay up, respectively.

Suddenly remembering that she probably wasn't the only recipient of these obvious form emails, Ellis thought to check her local subreddit. Sure enough, there was already a post revealing the range of panic and bewilderment they had wrought among hundreds, if not thousands. Current tenants, and those who had moved out more recently, had actually seen #TOTAL_CHARGES# populated with the correct amount of monthly rent. People feared everything from phishing attempts to security breaches.

It wasn't until later that afternoon that Ellis finally received the anticipated mea culpa:

We are reaching out to sincerely apologize for the incorrect collection emails you received. These messages were sent in error due to a system malfunction that released draft messages to our entire database.

Please be assured of the following:
The recent emails do not reflect your actual account status.
If your account does have an outstanding balance, that status has not changed, and you would have already received direct and accurate communication from our office.
Please disregard all three messages sent in error. They do not require any action from you.

We understand that receiving these messages, especially over a holiday, was upsetting and confusing, and we are truly sorry for the stress this caused. The issue has now been fully resolved, and our team has worked with our software provider to stop all queued messages and ensure this does not happen again.

If you have any questions or concerns, please feel free to email leasing@chamapts.com. Thank you for your patience and understanding.

All's well that ends well. Ellis thanked the software provider's "system malfunction," whoever or whatever it may've been, for granting the rest of us a bit of holiday magic to take forward for all time.


Planet Debian: Michael Ablassmeier: libvirt 11.10 VIR_DOMAIN_BACKUP_BEGIN_PRESERVE_SHUTDOWN_DOMAIN

With libvirt 11.10, a new flag for the backup operation has been introduced: VIR_DOMAIN_BACKUP_BEGIN_PRESERVE_SHUTDOWN_DOMAIN.

According to the documentation “It instructs libvirt to avoid termination of the VM if the guest OS shuts down while the backup is still running. The VM is in that scenario reset and paused instead of terminated allowing the backup to finish. Once the backup finishes the VM process is terminated.”

I added support for this flag in virtnbdbackup 2.40.


Planet Debian: Simon Josefsson: Guix on Trisquel & Ubuntu for Reproducible CI/CD Artifacts

Last week I published Guix on Debian container images that prepared for today’s announcement of Guix on Trisquel/Ubuntu container images.

I have published images with reasonably modern Guix for Trisquel 11 aramo, Trisquel 12 ecne, Ubuntu 22.04 and Ubuntu 24.04. The Ubuntu images are available for both amd64 and arm64, but unfortunately Trisquel arm64 containers aren’t available yet so they are only for amd64. Images for ppc64el and riscv64 are work in progress. The currently supported container names:

registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel12-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu22.04-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu24.04-guix

Or you prefer guix-on-dpkg on Docker Hub:

docker.io/jas4711/guix-on-dpkg:trisquel11-guix
docker.io/jas4711/guix-on-dpkg:trisquel12-guix
docker.io/jas4711/guix-on-dpkg:ubuntu22.04-guix
docker.io/jas4711/guix-on-dpkg:ubuntu24.04-guix

You may use them as follows. See the guix-on-dpkg README for how to start guix-daemon and install packages.

jas@kaka:~$ podman run -it --hostname guix --rm registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
root@guix:/# head -1 /etc/os-release 
NAME="Trisquel GNU/Linux"
root@guix:/# guix describe
  guix 136fc8b
    repository URL: https://gitlab.com/debdistutils/guix/mirror.git
    branch: master
    commit: 136fc8bfe91a64d28b6c54cf8f5930ffe787c16e
root@guix:/# 

You may now be asking yourself: why? Fear not, gentle reader, because having two container images of roughly similar software is a great tool for attempting to build software artifacts reproducibly, and comparing the results to spot differences. Obviously.

I have been using this pattern to get reproducible tarball artifacts of several software releases for around a year and a half, since libntlm 1.8.

Let’s walk through how to setup a CI/CD pipeline that will build a piece of software, in four different jobs for Trisquel 11/12 and Ubuntu 22.04/24.04. I am in the process of learning Codeberg/Forgejo CI/CD, so I am still using GitLab CI/CD here, but the concepts should be the same regardless of platform. Let’s start by defining a job skeleton:

.guile-gnutls: &guile-gnutls
  before_script:
  - /root/.config/guix/current/bin/guix-daemon --version
  - env LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild $GUIX_DAEMON_ARGS &
  - GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
  - type guix
  - guix --version
  - guix describe
  - time guix install --verbosity=0 wget gcc-toolchain autoconf automake libtool gnutls guile pkg-config
  - time apt-get update
  - time apt-get install -y make git texinfo
  - GUIX_PROFILE="/root/.guix-profile"; . "$GUIX_PROFILE/etc/profile"
  script:
  - git clone https://codeberg.org/guile-gnutls/guile-gnutls.git
  - cd guile-gnutls
  - git checkout v5.0.1
  - ./bootstrap
  - ./configure
  - make V=1
  - make V=1 check VERBOSE=t
  - make V=1 dist
  after_script:
  - mkdir -pv out/$CI_JOB_NAME_SLUG/src
  - mv -v guile-gnutls/*-src.tar.* out/$CI_JOB_NAME_SLUG/src/
  - mv -v guile-gnutls/*.tar.* out/$CI_JOB_NAME_SLUG/
  artifacts:
    paths:
    - out/**

This installs some packages, clones guile-gnutls (it could be any project; it's just an example), builds it, and returns tarball artifacts. The artifacts are the git-archive and make dist tarballs.
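The git-archive tarballs rely on `git archive` being deterministic for a given tag: archiving the same commit twice must produce byte-identical output. A quick local sanity check of that property, using a throwaway repository (the `/tmp/archive-demo` path and file contents are arbitrary demo assumptions, not part of the pipeline above):

```shell
# Build a tiny repo and archive the same commit twice; the outputs must
# be byte-identical for the -src tarball comparison to make sense.
rm -rf /tmp/archive-demo && mkdir -p /tmp/archive-demo
cd /tmp/archive-demo
git init -q .
echo 'hello' > README
git add README
git -c user.name=demo -c user.email=demo@example.org commit -q -m 'initial'
git archive --format=tar.gz --prefix=demo/ -o first.tar.gz HEAD
git archive --format=tar.gz --prefix=demo/ -o second.tar.gz HEAD
cmp first.tar.gz second.tar.gz && echo 'byte-identical'
```

Note that this only demonstrates determinism on one machine with one git version; whether different distributions (with different git releases) produce the same archive bytes is exactly what the cross-image pipeline verifies.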

Let’s instantiate the skeleton into four jobs, running the Trisquel 11/12 jobs on amd64 and the Ubuntu 22.04/24.04 jobs on arm64 for fun.

guile-gnutls-trisquel11-amd64:
  tags: [ saas-linux-medium-amd64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
  extends: .guile-gnutls

guile-gnutls-ubuntu22.04-arm64:
  tags: [ saas-linux-medium-arm64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu22.04-guix
  extends: .guile-gnutls

guile-gnutls-trisquel12-amd64:
  tags: [ saas-linux-medium-amd64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel12-guix
  extends: .guile-gnutls

guile-gnutls-ubuntu24.04-arm64:
  tags: [ saas-linux-medium-arm64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu24.04-guix
  extends: .guile-gnutls

Running this pipeline will result in artifacts that you want to confirm for reproducibility. Let’s add a pipeline job to do the comparison:

guile-gnutls-compare:
  image: alpine:latest
  needs: [ guile-gnutls-trisquel11-amd64,
           guile-gnutls-trisquel12-amd64,
           guile-gnutls-ubuntu22.04-arm64,
           guile-gnutls-ubuntu24.04-arm64 ]
  script:
  - cd out
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep    -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
  - sha256sum */*.tar.* */*/*.tar.* | grep    -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
  - sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
# Confirm modern git-archive tarball reproducibility
  - cmp guile-gnutls-trisquel12-amd64/src/*.tar.gz guile-gnutls-ubuntu24-04-arm64/src/*.tar.gz
# Confirm old git-archive (export-subst but long git describe) tarball reproducibility
  - cmp guile-gnutls-trisquel11-amd64/src/*.tar.gz guile-gnutls-ubuntu22-04-arm64/src/*.tar.gz
# Confirm 'make dist' generated tarball reproducibility
  - cmp guile-gnutls-trisquel11-amd64/*.tar.gz guile-gnutls-ubuntu22-04-arm64/*.tar.gz
  - cmp guile-gnutls-trisquel12-amd64/*.tar.gz guile-gnutls-ubuntu24-04-arm64/*.tar.gz
  artifacts:
    when: always
    paths:
    - ./out/**

Look how beautiful, almost like ASCII art! The commands print SHA256 checksums of the artifacts, sorted in a couple of ways, and then compare the relevant artifacts. What would the output of such a run be, you may wonder? You can look for yourself in the guix-on-dpkg pipeline, but here is the gist of it:

$ cd out
$ sha256sum */*.tar.* */*/*.tar.* | sort | grep    -- -src.tar.
79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-ubuntu22-04-arm64/src/guile-gnutls-v5.0.1-src.tar.gz
b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-ubuntu24-04-arm64/src/guile-gnutls-v5.0.1-src.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-ubuntu22-04-arm64/guile-gnutls-5.0.1.tar.gz
bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-ubuntu24-04-arm64/guile-gnutls-5.0.1.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
      2 bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
      2 b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
      2 79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
      2 1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | grep    -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
      2 79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
      2 b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
      2 1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
      2 bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
$ cmp guile-gnutls-trisquel12-amd64/src/*.tar.gz guile-gnutls-ubuntu24-04-arm64/src/*.tar.gz
$ cmp guile-gnutls-trisquel11-amd64/src/*.tar.gz guile-gnutls-ubuntu22-04-arm64/src/*.tar.gz
$ cmp guile-gnutls-trisquel11-amd64/*.tar.gz guile-gnutls-ubuntu22-04-arm64/*.tar.gz
$ cmp guile-gnutls-trisquel12-amd64/*.tar.gz guile-gnutls-ubuntu24-04-arm64/*.tar.gz
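The `uniq -c -w64` trick works because a SHA-256 hex digest is exactly 64 characters, so comparing only the first 64 columns groups lines by checksum regardless of the file path that follows. A minimal illustration, using hypothetical files under `/tmp/uniq-demo` (not the pipeline artifacts):

```shell
# Two "builds" that agree and one that differs: matching checksums
# collapse into a count of 2, while any count of 1 flags a
# non-reproducible artifact.
rm -rf /tmp/uniq-demo && mkdir -p /tmp/uniq-demo/a /tmp/uniq-demo/b /tmp/uniq-demo/c
printf 'same bytes\n' > /tmp/uniq-demo/a/out.tar.gz
printf 'same bytes\n' > /tmp/uniq-demo/b/out.tar.gz
printf 'different\n'  > /tmp/uniq-demo/c/out.tar.gz
cd /tmp/uniq-demo
sha256sum */*.tar.gz | sort | uniq -c -w64 | sort -rn
# To fail a CI job on any unique checksum, invert a grep for count 1
# (here restricted to the two artifacts that should match):
! sha256sum a/*.tar.gz b/*.tar.gz | sort | uniq -c -w64 | grep -q '^ *1 '
```

The same inverted-grep pattern is what the `grep -v '^      1 '` steps in the compare job above express: the job only passes when every artifact has a twin.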

That’s it for today, but stay tuned for more updates on using Guix in containers. And remember: Happy Hacking!

Planet Debian: Dirk Eddelbuettel: duckdb-mlpack 0.0.5: Added kmeans, version helpers, documentation

A new release of the still-recent duckdb extension for mlpack, the C++ header-only library for machine learning, was merged into the duckdb community extensions repo today, and has been updated at its duckdb ‘mlpack’ extension page.

This release 0.0.5 adds one new method: kmeans clustering. We also added two version accessors, for mlpack and armadillo. We found during the work on random forests (added in 0.0.4) that the multithreaded random number generation was not quite right in the respective upstream codes. This has by now been corrected in armadillo 15.2.2 as well as the trunk version of mlpack, so if you build with those and set a seed, your forests and classifications will be stable across reruns. We also added a second state variable, mlpack_silent, that can be used to suppress even the minimal prediction-quality summary some methods show, and expanded the documentation.

For more details, see the repo for code, issues and more, and the extension page for more about this duckdb community extension.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Cryptogram: Like Social Media, AI Requires Difficult Choices

In his 2020 book, “Future Politics,” British barrister Jamie Susskind wrote that the dominant question of the 20th century was “How much of our collective life should be determined by the state, and what should be left to the market and civil society?” But in the early decades of this century, Susskind suggested that we face a different question: “To what extent should our lives be directed and controlled by powerful digital systems—and on what terms?”

Artificial intelligence (AI) forces us to confront this question. It is a technology that in theory amplifies the power of its users: A manager, marketer, political campaigner, or opinionated internet user can utter a single instruction, and see their message—whatever it is—instantly written, personalized, and propagated via email, text, social, or other channels to thousands of people within their organization, or millions around the world. It also allows us to individualize solicitations for political donations, elaborate a grievance into a well-articulated policy position, or tailor a persuasive argument to an identity group, or even a single person.

But even as it offers endless potential, AI is a technology that—like the state—gives others new powers to control our lives and experiences.

We’ve seen this play out before. Social media companies made the same sorts of promises 20 years ago: instant communication enabling individual connection at massive scale. Fast-forward to today, and the technology that was supposed to give individuals power and influence ended up controlling us. Today social media dominates our time and attention, assaults our mental health, and—together with its Big Tech parent companies—captures an unfathomable fraction of our economy, even as it poses risks to our democracy.

The novelty and potential of social media was as present then as it is for AI now, which should make us wary of its potential harmful consequences for society and democracy. We legitimately fear artificial voices and manufactured reality drowning out real people on the internet: on social media, in chat rooms, everywhere we might try to connect with others.

It doesn’t have to be that way. Alongside these evident risks, AI has legitimate potential to transform both everyday life and democratic governance in positive ways. In our new book, “Rewiring Democracy,” we chronicle examples from around the globe of democracies using AI to make regulatory enforcement more efficient, catch tax cheats, speed up judicial processes, synthesize input from constituents to legislatures, and much more. Because democracies distribute power across institutions and individuals, making the right choices about how to shape AI and its uses requires both clarity and alignment across society.

To that end, we spotlight four pivotal choices facing private and public actors. These choices are similar to those we faced during the advent of social media, and in retrospect we can see that we made the wrong decisions back then. Our collective choices in 2025—choices made by tech CEOs, politicians, and citizens alike—may dictate whether AI is applied to positive and pro-democratic, or harmful and civically destructive, ends.

A Choice for the Executive and the Judiciary: Playing by the Rules

The Federal Election Commission (FEC) calls it fraud when a candidate hires an actor to impersonate their opponent. More recently, they had to decide whether doing the same thing with an AI deepfake makes it okay. (They concluded it does not.) Although in this case the FEC made the right decision, this is just one example of how AIs could skirt laws that govern people.

Likewise, courts are having to decide if and when it is okay for an AI to reuse creative materials without compensation or attribution, which might constitute plagiarism or copyright infringement if carried out by a human. (The court outcomes so far are mixed.) Courts are also adjudicating whether corporations are responsible for upholding promises made by AI customer service representatives. (In the case of Air Canada, the answer was yes, and insurers have started covering the liability.)

Social media companies faced many of the same hazards decades ago and have largely been shielded by the combination of Section 230 of the Communications Decency Act of 1996 and the safe harbor offered by the Digital Millennium Copyright Act of 1998. Even in the absence of congressional action to strengthen or add rigor to this law, the Federal Communications Commission (FCC) and the Supreme Court could take action to enhance its effects and to clarify which humans are responsible when technology is used, in effect, to bypass existing law.

A Choice for Congress: Privacy

As AI-enabled products increasingly ask Americans to share yet more of their personal information—their “context”—to use digital services like personal assistants, safeguarding the interests of the American consumer should be a bipartisan cause in Congress.

It has been nearly 10 years since Europe adopted comprehensive data privacy regulation. Today, American companies exert massive efforts to limit data collection, acquire consent for use of data, and hold it confidential under significant financial penalties—but only for their customers and users in the EU.

Regardless, a decade later the U.S. has still failed to make progress on any serious attempts at comprehensive federal privacy legislation written for the 21st century, and there are precious few data privacy protections that apply to narrow slices of the economy and population. This inaction comes in spite of scandal after scandal regarding Big Tech corporations’ irresponsible and harmful use of our personal data: Oracle’s data profiling, Facebook and Cambridge Analytica, Google ignoring data privacy opt-out requests, and many more.

Privacy is just one side of the obligations AI companies should have with respect to our data; the other side is portability—that is, the ability for individuals to choose to migrate and share their data between consumer tools and technology systems. To the extent that knowing our personal context really does enable better and more personalized AI services, it’s critical that consumers have the ability to extract and migrate their personal context between AI solutions. Consumers should own their own data, and with that ownership should come explicit control over who and what platforms it is shared with, as well as withheld from. Regulators could mandate this interoperability. Otherwise, users are locked in and lack freedom of choice between competing AI solutions—much like the time invested to build a following on a social network has locked many users to those platforms.

A Choice for States: Taxing AI Companies

It has become increasingly clear that social media is not a town square in the utopian sense of an open and protected public forum where political ideas are distributed and debated in good faith. If anything, social media has coarsened and degraded our public discourse. Meanwhile, the sole act of Congress designed to substantially rein in the social and political effects of social media platforms—the TikTok ban, which aimed to protect the American public from Chinese influence and data collection, citing it as a national security threat—is one it seems to no longer even acknowledge.

While Congress has waffled, regulation in the U.S. is happening at the state level. Several states have limited children’s and teens’ access to social media. With Congress having rejected—for now—a threatened federal moratorium on state-level regulation of AI, California passed a new slate of AI regulations after mollifying a lobbying onslaught from industry opponents. Perhaps most interesting, Maryland has recently become the first in the nation to levy taxes on digital advertising platform companies.

States now face a choice of whether to apply a similar reparative tax to AI companies to recapture a fraction of the costs they externalize on the public to fund affected public services. State legislators concerned with the potential loss of jobs, cheating in schools, and harm to those with mental health concerns caused by AI have options to combat these harms. They could extract the funding needed to mitigate them to support public services—strengthening job training programs and public employment, public schools, public health services, even public media and technology.

A Choice for All of Us: What Products Do We Use, and How?

A pivotal moment in the social media timeline occurred in 2006, when Facebook opened its service to the public after years of catering to students of select universities. Millions quickly signed up for a free service where the only source of monetization was the extraction of their attention and personal data.

Today, about half of Americans are daily users of AI, mostly via free products from Facebook’s parent company Meta and a handful of other familiar Big Tech giants and venture-backed tech firms such as Google, Microsoft, OpenAI, and Anthropic—with every incentive to follow the same path as the social platforms.

But now, as then, there are alternatives. Some nonprofit initiatives are building open-source AI tools that have transparent foundations and can be run locally and under users’ control, like AllenAI and EleutherAI. Some governments, like Singapore, Indonesia, and Switzerland, are building public alternatives to corporate AI that don’t suffer from the perverse incentives introduced by the profit motive of private entities.

Just as social media users have faced platform choices with a range of value propositions and ideological valences—as diverse as X, Bluesky, and Mastodon—the same will increasingly be true of AI. Those of us who use AI products in our everyday lives as people, workers, and citizens may not have the same power as judges, lawmakers, and state officials. But we can play a small role in influencing the broader AI ecosystem by demonstrating interest in and usage of these alternatives to Big AI. If you’re a regular user of commercial AI apps, consider trying the free-to-use service for Switzerland’s public Apertus model.

None of these choices are really new. They were all present almost 20 years ago, as social media moved from niche to mainstream. They were all policy debates we did not have, choosing instead to view these technologies through rose-colored glasses. Today, though, we can choose a different path and realize a different future. It is critical that we intentionally navigate a path to a positive future for societal use of AI—before the consolidation of power renders it too late to do so.

This post was written with Nathan E. Sanders, and originally appeared in Lawfare.

Worse Than FailureCodeSOD: The Destination Dir

Darren is supporting a Delphi application in the current decade. Which is certainly a situation to be in. He writes:

I keep trying to get out of doing maintenance on legacy Delphi applications, but they keep pulling me back in.

The bit of code Darren sends us isn't the largest WTF, but it's a funny mistake, and it's a funny mistake that's been sitting in the codebase for decades at this point. And as we all know, jokes only get funnier with age.

FileName := DestDir + ExtractFileName(FileName);
if FileExists(DestDir + ExtractFileName(FileName)) then
begin
  ...
end;

This code is inside a module that copies a file from a remote server to the local host. It starts by sanitizing FileName, using ExtractFileName to strip off any path components and prepending DestDir instead, storing the result back in the FileName variable.

And they liked doing that so much that they go ahead and do it again in the if statement, rebuilding the exact same string even though FileName already holds it.
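
For illustration, here is the compute-once version of that logic, sketched in Python rather than Delphi (the function names are invented for this sketch; os.path.basename plays the role of ExtractFileName):

```python
import os

def destination_path(file_name: str, dest_dir: str) -> str:
    """Strip any path components and prepend the destination directory."""
    return os.path.join(dest_dir, os.path.basename(file_name))

def copy_target_exists(file_name: str, dest_dir: str) -> bool:
    # Compute the sanitized name once and reuse it, instead of
    # rebuilding the exact same string inside the existence check.
    target = destination_path(file_name, dest_dir)
    return os.path.exists(target)
```

The fix in the original would be just as small: call FileExists(FileName) with the variable that was already assigned.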

Darren writes:

As Homer Simpson said "Lather, rinse, and repeat. Always repeat."

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianBirger Schacht: Status update, November 2025

I started this month with a week of vacation which was followed by a small planned surgery and two weeks of sick leave. Nonetheless, I packaged and uploaded new releases of a couple of packages:

  • swayidle updated to version 1.9.0-1
  • swaylock updated to version 1.8.4-1
  • foot updated to version 1.25.0-1
  • swayimg updated to version 4.6-1
  • scdoc updated to version 1.11.4-1
  • wofi updated to version 1.5.1-1
  • xdg-desktop-portal-wlr updated to version 0.8.0-1

Besides that I reactivated a project I started in summer 2024: debiverse.org. The idea was to have interfaces to Debian bugs and packages that are usable on mobile devices (I know, ludicrous!). Back then I started with Flask and SQLAlchemy, but that soon got out of hand. I have now switched the whole stack to FastAPI and SQLModel, which makes it a lot easier to manage. The upside is that it comes with an API and OpenAPI docs. For the rendered HTML pages I use Jinja2 with Tailwind as CSS framework. I am currently using udd-mirror as the database backend, which works pretty well (for this single-user project). It would be nice to have some of the data in a faster index, like Typesense or Meilisearch; that way it would be possible to have faceted search or more performant full-text search. But I haven’t found any software packaged in Debian that could provide this.

Screenshot of the debiverse bug report list

Screenshot of the debiverse swagger API

Planet DebianFrançois Marier: Recovering from a broken update on the Turris Omnia

The recent Turris OS update from 7.2.3 to 9.0.0 took down my WiFi entirely. The wired network still works fine, but wireless is completely broken.

Factory reset

It turns out the Omnia has an extensive (and fast) factory reset / recovery mode via the hardware reset button.

Unfortunately, the factory image didn't work for me, possibly because I don't use the stock WiFi radios anymore.

Rolling back with schnapps

Thanks to the Omnia's btrfs root filesystem and its liberal use of snapshots around updates, I was able to roll back to the pre-9.0.0 state.

First, I connected to the router using ssh:

ssh root@192.168.1.1

Then I listed the available snapshots:

$ schnapps list
# | Type      | Size        | Date                        | Description
------+-----------+-------------+-----------------------------+------------------------------------
  500 | post      |    15.98MiB | 2025-08-09 11:27:48 -0700   | Automatic post-update snapshot (TurrisOS 7.2.2 - hbs)
  506 | pre       |    17.92MiB | 2025-09-12 03:44:32 -0700   | Automatic pre-update snapshot (TurrisOS 7.2.2 - hbs)
  507 | post      |    17.88MiB | 2025-09-12 03:45:14 -0700   | Automatic post-update snapshot (TurrisOS 7.2.3 - hbs)
  515 | time      |    20.03MiB | 2025-11-02 01:05:01 -0700   | Snapshot created by cron
  516 | time      |    20.05MiB | 2025-11-09 01:05:01 -0800   | Snapshot created by cron
  517 | time      |    20.29MiB | 2025-11-16 01:05:00 -0800   | Snapshot created by cron
  518 | time      |    20.64MiB | 2025-11-23 01:05:01 -0800   | Snapshot created by cron
  519 | time      |    20.83MiB | 2025-11-30 01:05:00 -0800   | Snapshot created by cron
  520 | pre       |    87.91MiB | 2025-11-30 07:41:10 -0800   | Automatic pre-update snapshot (TurrisOS 7.2.3 - hbs)
  521 | post      |   196.32MiB | 2025-11-30 07:48:11 -0800   | Automatic post-update snapshot (TurrisOS 9.0.0 - hbs)
  523 | pre       |     4.44MiB | 2025-11-30 20:47:31 -0800   | Automatic pre-update snapshot
  524 | post      |   224.00KiB | 2025-11-30 20:47:43 -0800   | Automatic post-update snapshot
  525 | rollback  |   224.00KiB | 2025-12-01 04:56:32 +0000   | Rollback to snapshot factory
  526 | pre       |     4.44MiB | 2025-11-30 21:04:19 -0800   | Automatic pre-update snapshot
  527 | post      |   272.00KiB | 2025-11-30 21:04:31 -0800   | Automatic post-update snapshot
  528 | rollback  |   272.00KiB | 2025-12-01 05:13:38 +0000   | Rollback to snapshot factory
  529 | pre       |     4.52MiB | 2025-11-30 21:28:44 -0800   | Automatic pre-update snapshot
  530 | single    |   208.00KiB |                             | 
  531 | rollback  |   224.00KiB | 2025-12-01 05:29:47 +0000   | Rollback to snapshot factory

Finally, I rolled back to the exact state I was on before the 9.0.0 update:

$ schnapps rollback 520
Current state saved as snapshot number 532
Rolled back to snapshot 520
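
If you want to pick the rollback target programmatically, the snapshot number can be scraped from the listing. A rough Python sketch, assuming the column layout shown above (the real output format may differ between schnapps versions):

```python
import re

# Match "  520 | pre | ..." rows from "schnapps list" output and return
# the number of the newest "pre" snapshot -- the state saved just
# before the most recent update.
ROW = re.compile(r"^\s*(\d+)\s*\|\s*(\w+)\s*\|")

def newest_pre_snapshot(listing: str):
    best = None
    for line in listing.splitlines():
        m = ROW.match(line)
        if m and m.group(2) == "pre":
            best = int(m.group(1))  # rows are listed oldest first
    return best
```

The resulting number is what you would pass to `schnapps rollback`.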

Full wipe

As an aside, it turns out that the factory reset functionality is implemented as a btrfs rollback to a special factory snapshot. This is why it is so fast, but it also means that doing a simple factory reset doesn't wipe the data on your router. If you are planning to sell your device or otherwise dispose of it, you also need to delete all btrfs snapshots.

Conclusion

While this update was very disappointing, especially since it's never happened before with major updates on Turris OS, it made me discover just how great the recovery tools are. It would be pretty tricky to fully brick one of these devices.

365 TomorrowsSweat Dreams

Author: Majoki To hell with pleasant dreams. Long live nightmares! Marcus looked at the motto writ large on the smart panel of DreamOn’s boardroom. The corporation’s board was gathered to solicit his opinion. They were going to want his approval. They were going to seek his blessing. He’d gladly give it to them, even knowing […]

The post Sweat Dreams appeared first on 365tomorrows.

,

Planet DebianGuido Günther: Free Software Activities November 2025

Another short status update of what happened on my side last month. Hand-holding the release machinery for Phosh 0.51.0, but there's more:

See below for details on the above and more:

phosh

  • Better auto brightness (MR)
  • Update CI to forky (MR)
  • Test mobile data connection in CI (MR)
  • Add DebugControl interface (MR)
  • Release 0.51~rc1
  • caffeine prefs: Fix resize when adding intervals (MR)
  • Robustify plugin-prefs screenshot tests (MR)
  • Another build systemd dependency fix (MR)
  • Gesture to tune brightness on lock screen (MR)

phoc

  • Update ci to forky (MR)
  • Exit cleanly on SIGTERM (MR)
  • Releases (0.51~rc1, 0.51.0)
  • Fix segfault triggered in alpine CI (MR)
  • Cancel preedit on submit (avoids resubmitted text in e.g. chatty or flare) (MR)

phosh-mobile-settings

  • Test suite robustness (MR)
  • Update CI (MR)
  • Release 0.51~rc1

stevia

xdg-desktop-portal-phosh

  • Release 0.51~rc1, 0.50.0
  • Unbreak nightly builds (MR)
  • Unbreak 32bit builds (MR)
  • Drop C file chooser impl (MR)

pfs

  • pfs-open: Allow to open arbitrary directories and start fixing clippy warnings (MR)
  • More clippy cleanups (MR)
  • Allow to ship schema (MR)
  • Run a smoke test in ci (MR)
  • Implement org.freedesktop.FileManager1 in the demo (MR, MR, MR)
  • dir-view: Don't thumbnail when disabled (MR)

Phrog

  • Fix osk dependencies (MR)

gmobile

  • run-phosh: Allow to run headless (MR)
  • Release 0.5.0 (MR)
  • display-panel: Allow to take screenshots (MR)
  • Add hwdb and udev rules for torch min brightness (MR)

feedbackd

feedbackd-device-themes

libcall-ui

  • Ignore callaudiod deprecations as we otherwise break compilation of downstreams (MR)
  • Same for 0.1.x branch (MR)
  • Release (0.1.5)

wireplumber

  • doc: Fix make run invocation (MR)

Chatty

mobile-broadband-provider-info

Debian

  • stevia: Upload 0.51~rc1, 0.51.0
  • phrog: Use stevia instead of osk-stub (MR)
  • meta-phosh: Modernize dependencies (MR)
  • phosh: Drop osk-stub (MR)
  • phosh: Upload 0.51~rc1
  • phoc: Upload 0.41~rc1
  • p-m-s: Upload 0.51~rc1
  • feedbackd-device-themes: Upload 0.8.7
  • m-b-p-i: Upload 20251101
  • debcargo-conf: Backport ashpd patch (MR)
  • xdg-desktop-portal-phosh: Get it into unstable (MR, MR)

Mobian

  • librem5: Drop exponential brightness (MR)

wlroots

  • input-method-unstable-v2: Fix two protocol issues (MR)

libqrtr-glib

  • Fix transfer annotation to unbreak usage from Python (MR)
  • Move doc build to gi-docgen (MR)

libadwaita-rs

  • Allow None for parent in adw_dialog_choose (MR)

phosh-site

  • Lint tools (MR)
  • Add slideshow to landing page (MR)
  • Add more videos (MR)
  • Fix typos and links (MR)
  • Update nightly details (MR)

bengalos-debs

  • Fix phrog build (MR, MR)
  • Enable arm64 builds (MR)

gtk

  • Drop unused defines (MR)

Reviews

This is not code by me but reviews on other peoples code. The list is (as usual) slightly incomplete. Thanks for the contributions!

  • pfs: Create folder support (MR)
  • portal: Create thumbnails via thumbnailer service (MR)
  • phosh: caffeine plugin prefs (MR)
  • phosh: lower torch brightness (MR)
  • phosh: wi-fi hotspot QR code (MR)
  • phosh/caffeine: Close status page when selecting an interval (MR)
  • phosh/caffeine: Use empty state (MR)
  • bengalos-recpipes: prep supporting multiple disk layouts (MR)
  • xdg-p-p: Longer test timeout (MR)
  • p-m-s: Volume slider for media roles (MR)

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

Cryptogram Banning VPNs

This is crazy. Lawmakers in several US states are contemplating banning VPNs, because…think of the children!

As of this writing, Wisconsin lawmakers are escalating their war on privacy by targeting VPNs in the name of “protecting children” in A.B. 105/S.B. 130. It’s an age verification bill that requires all websites distributing material that could conceivably be deemed “sexual content” to both implement an age verification system and also to block the access of users connected via VPN. The bill seeks to broadly expand the definition of materials that are “harmful to minors” beyond the type of speech that states can prohibit minors from accessing, potentially encompassing things like depictions and discussions of human anatomy, sexuality, and reproduction.

The EFF link explains why this is a terrible idea.

Cryptogram Friday Squid Blogging: Flying Neon Squid Found on Israeli Beach

A meter-long flying neon squid (Ommastrephes bartramii) was found dead on an Israeli beach. The species is rare in the Mediterranean.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Worse Than FailureCodeSOD: Formula Length

Remy's Law of Requirements Gathering states "No matter what the requirements document says, what your users really wanted was Excel." This has a corollary: "Any sufficiently advanced Excel file is indistinguishable from software."

Given enough time, any Excel file whipped up by any user can transition from "useful" to "mission critical software" before anyone notices. That's why Nemecsek was tasked with taking a pile of Excel spreadsheets and converting them into "real" software, which could be maintained and supported by software engineers.

Nemecsek writes:

This is just one of the formulas they asked me to work on, and not the longest one.

Nemecsek says this is a "formula", but I suspect it's a VBA macro. In reality, it doesn't matter.

InitechNeoDTMachineDevice.InitechNeoDTActivePartContainer(0).InitechNeoDTActivePart(0).
InitechNeoDTActivePartPartContainer(0).InitechNeoDTActivePartPart(iPart).Losses = 
calcLossesInPart(InitechNeoDTMachineDevice.InitechNeoDTActivePartContainer(0).
InitechNeoDTActivePart(0).RatedFrequency, InitechNeoDTMachineDevice.
InitechNeoDTActivePartContainer(0).InitechNeoDTActivePart(0).InitechNeoDTActivePartPartContainer(0).
InitechNeoDTActivePartPart(iPart).RadialPositionToMainDuct, InitechNeoDTMachineDevice.
InitechNeoDTActivePartContainer(0).InitechNeoDTActivePart(0).InitechNeoDTActivePartPartContainer(0).
InitechNeoDTActivePartPart(iPart).InitechNeoDTActivePartPartSectionContainer(0).
InitechNeoDTActivePartPartSection(0).InitechNeoDTActivePartPartConductorComposition(0).IsTransposed, 
InitechNeoDTMachineDevice.InitechNeoDTActivePartContainer(0).InitechNeoDTActivePart(0).
InitechNeoDTActivePartPartContainer(0).InitechNeoDTActivePartPart(iPart).
InitechNeoDTActivePartPartSectionContainer(0).InitechNeoDTActivePartPartSection(0).
InitechNeoDTActivePartPartConductorComposition(0).ParallelRadialCount, InitechNeoDTMachineDevice.
InitechNeoDTActivePartContainer(0).InitechNeoDTActivePart(0).InitechNeoDTActivePartPartContainer(0).
InitechNeoDTActivePartPart(iPart).InitechNeoDTActivePartPartSectionContainer(0).
InitechNeoDTActivePartPartSection(0).InitechNeoDTActivePartPartConductorComposition(0).
ParallelAxialCount, InitechNeoDTMachineDevice.InitechNeoDTActivePartContainer(0).
InitechNeoDTActivePart(0).InitechNeoDTActivePartPartContainer(0).InitechNeoDTActivePartPart(iPart).
InitechNeoDTActivePartPartSectionContainer(0).InitechNeoDTActivePartPartSection(0).
InitechNeoDTActivePartPartConductorComposition(0).InitechNeoDTActivePartPartConductor(0).Type, 
InitechNeoDTMachineDevice.InitechNeoDTActivePartContainer(0).InitechNeoDTActivePart(0).
InitechNeoDTActivePartPartContainer(0).InitechNeoDTActivePartPart(iPart).
InitechNeoDTActivePartPartSectionContainer(0).InitechNeoDTActivePartPartSection(0).
InitechNeoDTActivePartPartConductorComposition(0).InitechNeoDTActivePartPartConductor(0).
DimensionRadialElectric, InitechNeoDTMachineDevice.InitechNeoDTActivePartContainer(0).
InitechNeoDTActivePart(0).InitechNeoDTActivePartPartContainer(0).InitechNeoDTActivePartPart(iPart).
InitechNeoDTActivePartPartSectionContainer(0).InitechNeoDTActivePartPartSection(0).
InitechNeoDTActivePartPartConductorComposition(0).InitechNeoDTActivePartPartConductor(0).
DimensionAxialElectric + InitechNeoDTMachineDevice.InitechNeoDTActivePartContainer(0).
InitechNeoDTActivePart(0).InitechNeoDTActivePartPartContainer(0).InitechNeoDTActivePartPart(iPart).
InitechNeoDTActivePartPartSectionContainer(0).InitechNeoDTActivePartPartSection(0).
InitechNeoDTActivePartPartConductorComposition(0).InitechNeoDTActivePartPartConductor(0).InsulThickness, 
getElectricConductivityAtTemperatureT1(InitechNeoDTMachineDevice.InitechNeoDTActivePartContainer(0).
InitechNeoDTActivePart(0).InitechNeoDTActivePartPartContainer(0).InitechNeoDTActivePartPart(iPart).
InitechNeoDTActivePartPartSectionContainer(0).InitechNeoDTActivePartPartSection(0).
InitechNeoDTActivePartPartConductorComposition(0).InitechNeoDTActivePartPartConductor(0).
InitechNeoDTActivePartPartConductorRawMaterial(0).ElectricConductivityT0, InitechNeoDTMachineDevice.
InitechNeoDTActivePartContainer(0).InitechNeoDTActivePart(0).InitechNeoDTActivePartPartContainer(0).
InitechNeoDTActivePartPart(iPart).InitechNeoDTActivePartPartSectionContainer(0).
InitechNeoDTActivePartPartSection(0).InitechNeoDTActivePartPartConductorComposition(0).
InitechNeoDTActivePartPartConductor(0).InitechNeoDTActivePartPartConductorRawMaterial(0).MaterialFactor, 
InitechNeoDTMachineDevice.InitechNeoDTActivePartContainer(0).InitechNeoDTActivePart(0).
InitechNeoDTActivePartPartContainer(0).InitechNeoDTActivePartPart(iPart).
InitechNeoDTActivePartPartSectionContainer(0).InitechNeoDTActivePartPartSection(0).
InitechNeoDTActivePartPartConductorComposition(0).InitechNeoDTActivePartPartConductor(0).
InitechNeoDTActivePartPartConductorRawMaterial(0).ReferenceTemperatureT0, InitechNeoDTMachineDevice.
ReferenceTemperature), LayerNumberRatedVoltage, InitechNeoDTMachineDevice.InitechNeoDTActivePartContainer(0).
InitechNeoDTActivePart(0).InitechNeoDTActivePartPartContainer(0).InitechNeoDTActivePartPart(iPart).
InitechNeoDTActivePartPartLayerContainer(0),InitechNeoDTMachineDevice.InitechNeoDTActivePartContainer(0).
InitechNeoDTActivePart(0).RFactor)

Line breaks added to try and keep horizontal scrolling sane. This arguably hurts readability, in the same way that beating a dead horse arguably hurts the horse.

This may not be the longest one, but it's certainly painful. I do not know exactly what this is doing, and frankly, I do not want to.
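
The usual cure for an expression like this, in any language, is to bind the repeated prefix of the accessor chain to a local variable once and reuse it. A minimal Python sketch of the idea (all class and field names here are invented stand-ins, not Initech's actual object model):

```python
from dataclasses import dataclass

@dataclass
class Conductor:
    radial: float
    axial: float
    insul_thickness: float

@dataclass
class Part:
    conductors: list  # list of Conductor

@dataclass
class Device:
    parts: list  # list of Part

def losses(conductor: Conductor) -> float:
    # Stand-in for calcLossesInPart: some function of conductor geometry.
    return conductor.radial * (conductor.axial + conductor.insul_thickness)

def part_losses(device: Device, i_part: int) -> float:
    # Bind the deep chain once instead of repeating
    # device.parts[i_part].conductors[0] in every single argument.
    conductor = device.parts[i_part].conductors[0]
    return losses(conductor)
```

A handful of local variables (or, in VBA/Delphi, a `With` block) would collapse that page of code into a few readable lines.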

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsFinder’s Fee

Author: Julian Miles, Staff Writer “Where the fook now?” “Jobsheet says left of the second moon and can’t miss it.” “Yeah yeah. Every bloody time they take the amateur finders word instead of asking for location data. Not like it’s a difficult ask: it’s on the display right next to the comms console on every […]

The post Finder’s Fee appeared first on 365tomorrows.

Planet DebianRuss Allbery: Review: Forever and a Day

Review: Forever and a Day, by Haley Cass

Series: Those Who Wait #1.5
Publisher: Haley Cass
Copyright: 2020
ISBN: 979-8-5902-5966-3
Format: Kindle
Pages: 101

Forever and a Day is a coda to Haley Cass's self-published sapphic romance novel Those Who Wait. There is no point in reading it unless you have already read and enjoyed the full book and wanted more of a denouement.

Given that Those Who Wait is a romance novel, it is definitionally not a spoiler to reveal that Sutton and Charlotte ended up together. This novella is seven scenes sketching out the next few years of their lives, interspersed with press clippings and social media commentary. These tie up loose ends, give the characters a bit more time together, throw in one more conflict and resolution, add one more sex scene, and stick a few exclamation points after the happily ever after.

I am the sort of person who likes long denouements in stories, so I'm the target audience for this sort of sequel that's essentially additional chapters to the book. (The funniest version of this I've read is Jacqueline Carey's Saints Astray.) They are usually not great literature, since there are good reasons for not including these chapters in the book. That is exactly what this is: a few more chapters of the characters being happy, entirely forgettable, and of interest only to people who want that.

Cass does try to introduce a bit of a plot via some light family conflict, which was sweet and mostly worked, and some conflict over having children, which was very stereotyped and which I did not enjoy as much. I thought the earlier chapters of this novella were the stronger ones, although I do have to give the characters credit in the later chapters for working through conflict in a mature and fairly reasonable way. It does help, though, when the conflict is entirely resolved by one character being right and the other character being happily wrong. That's character conflict on easy mode.

I was happy to see that Sutton got a career, although as in the novel I wish Cass had put some more effort into describing Sutton's efforts in building that career. The details are maddeningly vague, which admittedly matches the maddeningly vague description of Charlotte's politics but which left me unsatisfied.

Charlotte's political career continues to be pure wish fulfillment in the most utterly superficial and trivialized way, and it bothered me even more in the novella than it did in the novel. We still have absolutely no idea what she stands for, what she wants to accomplish, and why anyone would vote for her, and yet we get endless soft-focus paeans to how wonderful she will be for the country. Her opponents are similarly vague to the point that the stereotypes Cass uses to signal their inferiority to Charlotte are a little suspect.

I'm more critical of this in 2025 than I would have been in 2015 because the last ten years have made clear the amount of damage an absolute refusal to stand for anything except hazy bromides causes, and I probably shouldn't be this annoyed that Cass chose to vaguely gesture towards progressive liberalism without muddying her romance denouement with a concrete political debate. But, just, gah. I found the last chapter intensely annoying, in part because the narrative of that chapter was too cliched and trite to sufficiently distract me from the bad taste of the cotton-candy politics.

Other than that, this was minor, sweet, and forgettable. If you want another few chapters of an already long novel, this delivers exactly what you would expect. If the novel was plenty, nothing about this novella is going to change your mind and you can safely skip it. I really liked the scene between Charlotte and Sutton's mom, though, and I'm glad I read the novella just for that.

Rating: 6 out of 10

Planet DebianJunichi Uekawa: It's already December.

It's already December. I haven't figured out why suspend/resume no longer works on my workstation.

,

Planet DebianUtkarsh Gupta: FOSS Activites in November 2025

Here’s my monthly but brief update about the activities I’ve done in the FOSS world.

Debian

Whilst I didn’t get a chance to do much, here are still a few things that I worked on:

  • Did a few sessions with the new DFSG team to help kickstart things, et al.
  • Assisted a few folks in getting their patches submitted via Salsa.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu

I joined Canonical to work on Ubuntu full-time back in February 2021.

Whilst I can’t give a full, detailed list of things I did, here’s a quick TL;DR of what I did:

  • Successfully released Resolute Snapshot 1!
    • This one was particularly interesting as it was done without the ISO tracker and cdimage access.
    • There are some wrinkles that need ironing out for the next snapshot.
  • Resolute Raccoon is now fully and formally open.
  • Assisted a bunch of folks with my Archive Admin and Release team hats to:
    • review NEW packages for Ubuntu Studio.
    • remove old binaries that are stalling transition and/or migration.
    • LTS requalification of Ubuntu flavours.
    • bootstrapping dotnet-10 packages.
    • removal of openjdk-19 from Jammy, which sparked some interesting discussions.

Debian (E)LTS

This month I have worked 22 hours on Debian Long Term Support (LTS) and on its sister Extended LTS project and did the following things:

  • wordpress: There were multiple vulnerabilities reported in Wordpress, leading to Sent Data & Cross-site Scripting.

    • [bookworm]: Roberto rightly pointed out that the upload to bookworm hadn’t gone through last month, so I re-uploaded wordpress/6.1.9+dfsg1-0+deb12u1 to bookworm-security.
  • ruby-rack: There were multiple vulnerabilities reported in Rack, leading to DoS (memory exhaustion) and proxy bypass.

    • [ELTS]: Last month I had backported fixes for CVE-2025-46727 & CVE-2025-32441 to buster and stretch but the other backports were being a bit tricky due to really old versions.
    • I spent a bit more time but there’s a lot to demystify. Gonna take a bit of a break from this one and come back to it after doing other updates. Might even consider sending an RFH to the list.
  • libwebsockets: Multiple issues were reported in LWS causing denial of service and stack-based buffer overflow.

  • mako: It was found that Mako, a Python template library, was vulnerable to a denial of service attack via crafted regular expressions.

    • [LTS]: For bullseye, these were fixed via 1.1.3+ds1-2+deb11u1. And released as DLA 4393-1.
    • Backporting tests was an interesting exercise as I had to make them compatible with the bullseye version. :)
  • ceph: Affected by CVE-2024-47866, using the argument x-amz-copy-source to put an object and specifying an empty string as its content leads to the RGW daemon crashing, resulting in a DoS attack.

    • [LTS]: Whilst the patch is straightforward, backports are a bit tricky. I’ve prepared the update but would like to reach out to zigo, the maintainer, to make sure nothing regresses.
    • [ELTS]: Same as LTS, I’d like to get a quick review and upload to LTS first before I start staging uploads for ELTS.
  • [LTS] Attended the monthly LTS meeting on IRC. Summary here.

    • It was also followed by a 50-minute post-meeting technical discussion/question session.
  • [E/LTS] Monitored discussions on mailing lists, IRC, and all the documentation updates. Thanks, Sylvain, for a great documentation summary.


Until next time.
:wq for today.

365 TomorrowsThe Poker Game

Author: David Sydney It was a Friday night poker game, with only three left in the hand—Mel, Otto, and Ralph. Ralph, losing all night, was down to his last few pathetic chips. He couldn’t believe it. Mel had dealt him four aces. His problems were over. Finally, he was about to clean up. “Hey, did […]

The post The Poker Game appeared first on 365tomorrows.

Planet DebianRuss Allbery: Review: The Last Soul Among Wolves

Review: The Last Soul Among Wolves, by Melissa Caruso

Series: The Echo Archives #2
Publisher: Orbit
Copyright: August 2025
ISBN: 0-316-30404-2
Format: Kindle
Pages: 355

The Last Soul Among Wolves is urban high fantasy with strong mystery vibes. It is a direct sequel to The Last Hour Between Worlds. You need the previous book for some character setup (and this book would spoil it badly), but you don't have to remember the first book in detail. Only the main plot outcomes are directly relevant and the characters will remind you of those.

Kembrel Thorne is a Hound, the equivalent of a police detective in the medieval-inspired city setting of this series, but this book does not open with an official assignment. Instead, she has been dragged by her childhood friend Jaycel Morningrey as company for a reading of the will of old lady Lovegrace, reclusive owner of a gothic mansion on an island connected to the city by an intermittent sandbar. A surprise reunion with her gang of childhood friends ensues, followed by the revelation that they are all in serious trouble.

Shortly after Kem left the group to become a Hound, the remaining four, plus several other apparently random people, got entangled with a powerful Echo artifact. Now that Lovegrace has died, one of them will inherit the artifact and the ability to make a wish, but only one. The rest will be killed at decreasing intervals until only the winner is left alive.

The Last Hour Between Worlds was fae fantasy built around a problem that was more of a puzzle than a mystery. The Last Soul Among Wolves is closer to a classic mystery: A cast of characters are brought together and semi-isolated in a rural house, they start dying, and it's up to the detective to solve the mystery of their death before it's too late. In this case, the initial mechanism of death is supernatural and not in doubt — the challenge instead is how to stop it from happening again — but Kem's problems quickly become more complicated.

As mystery plots go, this is more thriller than classical despite the setting. There are a few scenes of analyzing clues, but Kem is more likely to use the time-honored protagonist technique of throwing herself into danger and learning what's going on via the villain monologues. As readers of the previous book would expect, Rika Nonesuch is here too, hired by another of Kem's old friends, and the two navigate their personal feelings and the rivalry between their guilds in much the way that they did in the Last Hour Between Worlds. As in the first book, there is a sapphic romance subplot, but it's a very slow burn asexual romance.

The best part of this series continues to be the world-building. The previous book introduced the idea of the Echoes and sent the characters exploring into stranger and stranger depths. This book fleshes out the rules in more detail, creating something that feels partly like a fae realm and partly like high fantasy involving gods, but diverges from both into a logic of its own. The ending satisfyingly passes my test of fantasy mysteries: Resolving the mystery requires understanding and applying the rules of the setting, which are sufficiently strange to create interesting outcomes but coherent enough that the reader doesn't feel like the author is cheating.

There are some hissable villains here, but my favorite part of this book was the way Caruso added a lot of nuance and poignancy to the Echoes rather than showing them only as an uncanny threat. That choice made the world feel deeper and richer. It's not yet clear whether that element is setup for a longer-term series plot, but I hope Caruso will develop the story in that direction.

It felt to me like Caruso is aiming for an ongoing series rather than a multi-volume story with a definite ending. She avoids a full episodic reset — Rika, in particular, gets considerable character development and new complications that bode well for future volumes — but it doesn't feel like the series is building towards an imminent climax. This is not a complaint. I enjoy these characters and this world and will happily keep devouring each new series entry.

If you liked The Last Hour Between Worlds, I think you will like this. It doesn't have the same delight of initial discovery of the great world-building, but the plot is satisfying and a bit more complex and the supporting characters are even better than those in the first book. Once again, Caruso kept me turning the pages, and I'm now looking forward to a third volume. Recommended.

The third book in the series has not yet been announced, but there are indications on social media that it is coming.

Rating: 7 out of 10

Planet DebianOtto Kekäläinen: DEP-18: A proposal for Git-based collaboration in Debian


I am a huge fan of Git, as I have witnessed how it has made software development so much more productive compared to the pre-2010s era. I wish all Debian source code were in Git to reap the full benefits.

Git is not perfect, as it requires significant effort to learn properly, and the ecosystem is complex with even more things to learn ranging from cryptographic signatures and commit hooks to Git-assisted code review best practices, ‘forge’ websites and CI systems.

Sure, there is still room to optimize its use, but Git certainly has proven itself and is now the industry standard. Thus, some readers might be surprised to learn that Debian development in 2025 is not actually based on Git. In Debian, the version control is done by the Debian archive itself. Each ‘commit’ is a new upload to the archive, and the ‘commit message’ is the debian/changelog entry. The ‘commit log’ is available at snapshots.debian.org.

In practice, most Debian Developers (people who have the credentials to upload to the Debian archive) do use Git and host their packaging source code on salsa.debian.org – the GitLab instance of Debian. This is, however, based on each DD’s personal preferences. The Debian project does not have any policy requiring that packages be hosted on salsa.debian.org or be in version control at all.

Is collaborative software development possible without Git and version control software?

Debian, however, has some peculiarities that may be surprising to people who have grown accustomed to GitHub, GitLab or various company-internal code review systems.

In Debian:

  • The source code of the next upload is not public but resides only on the developer’s laptop.
  • Code contributions are plain patch files, based on the latest revision released in the Debian archive (where the unstable area is equivalent to the main development branch).
  • These patches are submitted by email to a bug tracker that does no validation or testing whatsoever.
  • Developers applying these patches typically have elaborate Mutt or Emacs setups to facilitate fetching patches from email.
  • There is no public staging area, no concept of rebasing patches or withdrawing a patch and replacing it with a better version.
  • The submitter won’t see any progress information until a notification email arrives after a new version has been uploaded to the Debian archive.
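The flow described in the bullets above boils down to "diff against the last upload and mail the result". A hedged shell sketch of the patch-producing step, using throwaway toy directories in place of a real `apt-get source` checkout (the package name and file contents are invented; the real final step is mailing the patch to the Debian bug tracker):

```shell
# Sketch of the traditional plain-patch workflow. In reality you would
# start from `apt-get source <pkg>`; here we simulate the pristine and
# patched trees with toy directories.
set -eu
work=$(mktemp -d)
mkdir -p "$work/hello-2.10" "$work/hello-2.10.patched"
echo 'greeting = "helo"'  > "$work/hello-2.10/config.py"
echo 'greeting = "hello"' > "$work/hello-2.10.patched/config.py"

cd "$work"
# Produce the plain patch file that would be attached to a bug report.
# (diff exits 1 when the trees differ, hence the `|| true`.)
diff -ruN hello-2.10 hello-2.10.patched > fix-greeting.patch || true
```

Everything after this point — submission, tracking, rebasing — happens over email, which is exactly the gap the rest of this post is about.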

This system has served Debian for three decades. It is not broken, but using the package archive just feels… well, archaic.

There is a more efficient way, and indeed the majority of Debian packages have a metadata field Vcs-Git that advertises which version control repository the maintainer uses. However, newcomers to Debian are surprised to find that not all packages are hosted on salsa.debian.org: some live at various other sites with their own accounts and code submission systems, and nothing enforces, or even warns, when the code there is out of sync with what was uploaded to Debian. Any Debian Developer can at any time upload a new package with whatever changes, bypassing the Git repository, even when the package advertises one. All PGP-signed commits, Git tags and other information in the Git repository are currently just extras, as the Debian archive does not enforce or validate anything about them.

This also makes contributing to multiple packages in parallel hard. One can’t just go on salsa.debian.org, fork a bunch of repositories and submit Merge Requests. Currently, the only reliable way is to download source packages from Debian unstable, develop patches on top of them, and send the final version as a plain patch file by email to the Debian bug tracker. To my knowledge, no system exists to facilitate working with the patches in the bug tracker, such as rebasing a patch six months later to detect whether it (or an equivalent change) was already applied, or whether a refreshed version needs to be sent.

To newcomers in Debian, it is even more surprising that there are packages that are on salsa.debian.org but have the Merge Requests feature disabled. This is often because the maintainer does not want to receive notification emails about new Merge Requests, but rather just emails from bugs.debian.org. This may sound arrogant, but keep in mind that these developers put in the effort to set up their Mutt/Emacs workflow for the existing Debian process, and extending it to work with GitLab notifications is not trivial. There are also purists who want to do everything via the command-line (without having to open a browser, run JavaScript and maintain a live Internet connection), and tools like glab are not convenient enough for the full workflow.

Inefficient ways of working prevent Debian from flourishing

I would claim, based on my personal experience from the past 10+ years as a Debian Developer, that the lack of high-quality, productive tooling is seriously harming Debian. The current methods of collaboration are cumbersome for aspiring contributors to learn, and suboptimal for new and seasoned contributors alike.

There are no exit interviews for contributors who left Debian, no comprehensive data on reasons to contribute or stop contributing, nor are there any metrics tracking how many people tried but failed to contribute to Debian. Some data points to support my concerns do exist:

Debian should embrace git, but decision-making is slow

Debian is all about community and collaboration. One would assume that Debian would prioritize, above all, making collaboration tools and processes simpler, faster and less error-prone, as that would help both current and future package maintainers. Yet it isn’t so, for reasons unique to Debian.

There is no single company or entity running Debian, and it has managed to operate as a pure meritocracy and do-cracy for over 30 years. This is impressive and admirable. Unfortunately, some of the infrastructure and technical processes are also nearly 30 years old and very difficult to change due to the same reason: the nature of Debian’s distributed decision-making process.

As a software developer and manager with 25+ years of experience, I strongly feel that developing software collaboratively using Git is a major step forward that Debian needs to take, in one form or another, and I hope to see other DDs voice their support if they agree.

Debian Enhancement Proposal 18

Following how consensus is achieved in Debian, I started drafting DEP-18 in 2024, and it is currently awaiting enough thumbs up at https://salsa.debian.org/dep-team/deps/-/merge_requests/21 to get into CANDIDATE status next.

In summary, DEP-18 proposes that everyone keen on collaborating should:

  1. Maintain Debian packaging sources in Git on Salsa.
  2. Use Merge Requests to show your work and to get reviews.
  3. Run Salsa CI before upload.
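For the third point, Salsa CI is typically enabled by adding a small debian/salsa-ci.yml to the packaging repository and pointing the project’s CI configuration path at it. A minimal sketch, following the salsa-ci-team pipeline documentation (verify the current recipe URL before relying on it):

```yaml
# debian/salsa-ci.yml — pull in the standard Salsa CI pipeline,
# which builds the package and runs lintian, autopkgtest, piuparts, etc.
include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml
```

With this in place, every push and Merge Request gets the same battery of checks before anything is uploaded to the archive.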

The principles above are not novel. According to stats at, e.g., trends.debian.net and UDD, ~93% of all Debian source packages are already hosted on salsa.debian.org. As of June 1st, 2025, only 1640 source packages remain that are not hosted on Salsa. The purpose of DEP-18 is to state in writing what Debian is already doing for most packages, and thus make explicit what new contributors, among others, should be learning and doing, so that basic collaboration is smooth and free from structural obstacles.

Most packages also already allow Merge Requests and use Salsa CI, but there hasn’t been any written recommendation anywhere in Debian to do so. The Debian Policy (v4.7.2) does not even mention the word “Salsa” a single time. The current process documentation on how to do non-maintainer uploads or salvage packages is all based on uploading packages to the archive, without any consideration of Git-based collaboration such as posting a Merge Request first. Personally, I feel posting a Merge Request would be a better approach, as it would invite collaborators to discuss and provide code reviews. If there are no responses, the submitter can proceed to merge; but compared to direct uploads to the Debian archive, the Merge Request practice at least tries to offer a time and place for discussions and reviews to happen.

It could very well be that in the future somebody comes up with a new packaging format that makes upstream source package management easier, or a monorepo with all packages, or some other future structures or processes. Having a DEP to state how to do things now does not prevent people from experimenting and innovating if they intentionally want to do that. The DEP is merely an expression of the minimal common denominators in the packaging workflow that maintainers and contributors should follow, unless they know better.

Transparency and collaboration

Among the DEP-18 recommendations is:

The recommended first step in contributing to a package is to use the built-in “Fork” feature on Salsa. This serves two purposes. Primarily, it allows any contributor to publish their Git branches and submit them as Merge Requests. Additionally, the mere existence of a list of “Forks” enables contributors to discover each other, and in rare cases when the original package is not accepting improvements, collaboration could arise among the contributors and potentially lead to permanent forks in the general meaning. Forking is a fundamental part of the dynamics in open source that helps drive quality and agreement. The ability to fork ultimately serves as the last line of defense of users’ rights. Git supports this by making both temporary and permanent forks easy to create and maintain.

Further, it states:

Debian packaging work should be reasonably transparent and public to allow contributors to participate. A maintainer should push their pending changes to Salsa at regular intervals, so that a potential contributor can discover if a particular change has already been made or a bug has been fixed in version control, and thus avoid duplicate work.

Debian maintainers should make reasonable efforts to publish planned changes as Merge Requests on Salsa, and solicit feedback and reviews. While pushing changes directly on the main Git branch is the fastest workflow, second only to uploading all changes directly to Debian repositories, it is not an inclusive way to develop software. Even packages that are maintained by a single maintainer should at least occasionally publish Merge Requests to allow new contributors to step up and participate.

I think these are key aspects leading to transparency and true open source collaboration. Even though this talks about Salsa — which is based on GitLab — the concepts are universal and will also work on other forges, like Forgejo or GitHub. The point is that sharing work-in-progress on a real-time platform, with CI and other supporting features, empowers and motivates people to iterate on code collaboratively. As an example of an anti-pattern, Oracle MySQL publishes the source code for all its releases and is license-compliant, but as it doesn’t publish its Git commits in real time, it does not feel like a real open source project. Non-Oracle employees are not motivated to participate as second-class developers who are kept in the dark. Debian should embrace Git and sharing work in real time, embodying a true open source spirit.

Recommend, not force

Note that the Debian Enhancement Proposals are not binding. Only the Debian Policy and Technical Committee decisions carry that weight. The nature of collaboration is voluntary anyway, so the DEP does not need to force anything on people who don’t want to use salsa.debian.org.

DEP-18 is also not a guide for package maintainers. I have my own views and have written detailed guides in blog articles if you want to read more on, for example, how to do code reviews efficiently.

Within DEP-18, there is plenty of room to work in many different ways, and it does not try to force one single workflow. The goal here is to simply have agreed-upon minimal common denominators among those who are keen to collaborate using salsa.debian.org, not to dictate a complete code submission workflow.

Once we reach this, there will hopefully be less friction in the most basic and recurring collaboration tasks, giving DDs more energy to improve other processes or just invest in having more and newer packages for Debian users to enjoy.

Next steps

In addition to lengthy online discussions on mailing lists and DEP reviews, I also presented on this topic at DebConf 2025 in Brest, France. Unfortunately the recording is not yet up on Peertube.

The feedback has been overwhelmingly positive. However, there are a few loud and very negative voices that cannot be ignored. Maintaining a Linux distribution at the scale and complexity of Debian requires extraordinary talent and dedication, and people doing this kind of work often have strong views, and most of the time for good reasons. We do not want to alienate existing key contributors with new processes, so maximum consensus is desirable.

We also need more data on what the 1000+ current Debian Developers view as a good process to avoid being skewed by a loud minority. If you are a current or aspiring Debian Developer, please add a thumbs up if you think I should continue with this effort (or a thumbs down if not) on the Merge Request that would make DEP-18 have candidate status.

There is also technical work to do. Increased Git use will obviously lead to growing adoption of the new tag2upload feature, which will need to get full git-buildpackage support so it can integrate into salsa.debian.org without turning off Debian packaging security features. The git-buildpackage tool itself also needs various improvements, such as making contributing to multiple different packages with various levels of diligence in debian/gbp.conf maintenance less error-prone.

Eventually, if it starts looking like all Debian packages might get hosted on salsa.debian.org, I would also start building a review.debian.org website to facilitate code review aspects that are unique to Debian, such as tracking Merge Requests across GitLab projects in ways GitLab can’t do, highlighting which submissions need review most urgently, feeding code reviews and approvals into the contributors.debian.org database for better attribution and so forth.

Details on this vision will be in a later blog post, so subscribe to updates!

,

365 TomorrowsI am Computer

Author: David Dumouriez “Good afternoon, Zak,” the voice said. “Alright?” Zak replied. “Had a good day?” “Ah, you know. The usual. Bor-ing!” There was a tinkly laugh. “Got any homework?” “Homework? Just a minute … Yeah. Some crap on the digestive system.” “Bullet points?” “That’ll do.” The words spilled out onto the screen. “Bit long […]

The post I am Computer appeared first on 365tomorrows.

David BrinFour specific notions that could help to save us all

Last week I issued a three-parter that proposed several dozen fresh tactics for the Enlightenment side of our current culture war. And as a unifying umbrella, I made them part of a "Democratic Newer Deal"... both satirizing and learning-from the most agile polemical maneuver of the last 40 years - the so-called 'GOP Contract With America.'

Whether or not you liked my using that overall umbrella, the thirty or so proposals merit discussion in their own right! Some of them -- maybe ten or so -- are ideas that have been floating around on the moderate-liberal agenda, but that I've meddled-with, in order to add some punch, or judo spice.  Or zing.

         Others are wholly my own.


Some of the proposals take the form of internal reforms that Congress could enact on their very first day - of a session whose majority consists of sane and decent people.      


For example, pause and envision this reform and procedural rule. One which no future GOP-led Congress would be able to retract! 


Distributed subpoena power: We shall establish a permanent rule and tradition that each member of Congress will get one peremptory subpoena per year, plus adequate funding to compel a witness to appear and testify for up to five hours before a subcommittee in which she or he is a member. In this way, each member will be encouraged to investigate as a sovereign representative and not just as a party member, ensuring that Congress will never again betray its Constitutional duty of investigation and oversight, even when the same party holds both Congress and the Executive.


Think about that for a sec. Very soon, each Representative or Senator would view that personal, peremptory subpoena -- whether one per year or per session -- as a treasured and jealously-guarded prerogative of office. Possibly useful to their party or to confront major issues, or else to grandstand for the folks back home. Either way, they will balk at any attempt by future party leaders to terminate the privilege. And thus it could become permanent. And the minority will never again be barred from calling witnesses to interrogate the majority.


Or look at another internal reform that I'll talk about next time... to reconstitute the advisory bodies for science and fact that used to serve Congress, but were banished by Gingrich and Hastert and company, because... well... this Republican Party despises facts.



Other proposals would be legislated LAWS that seem desperately -- even existentially -- needed for the U.S. republic! Like this one I have offered annually for the last fifteen years:

 

We shall create the office of Inspector General of the United States, or IGUS, who will head the U.S. Inspectorate, a uniformed agency akin to the Public Health Service, charged with protecting the ethical and law-abiding health of government.  Henceforth, the inspectors-general in all government agencies, including military judge-advocates general (JAGs) will be appointed by and report to IGUS, instead of serving at the whim of the cabinet or other officers that they are supposed to inspect. IGUS will advise the President and Congress concerning potential breaches of the law. IGUS will provide protection for whistle-blowers and safety for officials or officers refusing to obey unlawful orders.


Wouldn't everything be better if we had IGUS right now? Go back and read the full text.


And then there's this one - a way to bypass the corrupt Citizens United ruling by the suborned Supreme Court - using a clever and totally legal means, that is supported factually by Robert Reich. Though I think my approach is more likely to get passed... and to work.

 

THE POLITICAL REFORM ACT will ensure that the nation’s elections take place in a manner that citizens can trust and verify.  Political interference in elections will be a federal crime.  Strong auditing procedures and transparency will be augmented by whistleblower protections. All voting machines will be paper auditable. New measures will distance government officials from lobbyists.  


Campaign finance reform will reduce the influence of Big Money over politicians. The definition of a ‘corporation’ shall be clarified: so that corporations are neither ‘persons’ nor entitled to use money or other means to meddle in politics, nor to coerce their employees to act politically.


There are others, like how to affordably get every child in America insured under Medicare, while we argue over going the rest of the way. We'll get to that amazingly simple method next time.


But here's another one that is super timely because - as reported by the Strategic News Service - "Huge new botnets with 40M+ nodes are available to criminals on the dark web..." That's forty MILLION computers around the world - including possibly the one you are now using to view this - that have been suborned and turned into cryptic nodes for major cyber crime.


Indeed, we are far more open to cyber attacks than ever, now that the Cybersecurity and Infrastructure Security Agency (CISA) has been downsized by a third! And the Cyber Safety Review Board (CSRB) dissolved, and the Critical Infrastructure Partnership Advisory Council (CIPAC) terminated. And many counter-terror agents have been (suspiciously) re-assigned. Hence, here's a reform that might address that... and it might - if pushed urgently - even pass this good-for-nothing Congress.


THE CYBER HYGIENE ACT: Adjusting liability laws for a new and perilous era, citizens and small companies whose computers are infested and used by ‘botnets’ to commit crimes shall be deemed immune from liability for resulting damages, providing that they download and operate a security program from one of a dozen companies that have been vetted and approved for effectiveness by the US Department of Commerce. Likewise, companies that release artificial intelligence programs shall face lessened liability if those programs persistently declare their provenance and artificiality and potential dangers. 



Again... these and maybe 30 more are to be found in my big series on a proposed "Newer Deal." I'll try to repost and appraise each of them over the next few weeks. 


Almost any of them would be winning issues for the Democrats, especially if they were parsed right!  Say, in a truly workable 'deal' for the American people...

     ...and for our children's future.



       == Political notes ==


While we all should be impressed with Gavin Newsom's people for expertly trolling old Two Scoops, it's not the core tactic I have recommended for 20 years. Though one fellow who seems to be stabbing in the right general direction is Jimmy Kimmel, who keeps offering to hold public, televised challenges to check the factuality of foxite yammerings. 

 

Kimmel’s latest has been to satirically take on Trump's crowing about his 'aced' cognitive test. That test (which is not for IQ, but to evaluate senility or dementia) was accompanied by yowling that two female Democrat Reps were 'low-IQ.' Kimmel's offer of a televised IQ vs dementia test is brilliant. It'll never happen. But brilliant.   In fact, Kimmel's offer of a televised mental test is a version of my Wager Challenge.


The key feature is REPETITION! The KGB-supported foxite jibberers have a tactic to evade accountability to facts: point at something else and change the subject. Yet no Dem - not even brilliant ones like Pete B and AOC - ever understands the power of tenacious repetition. Ensuring that a single lie - or at most a dozen - gets hammered over and over again.

All right, they ARE doing that with "Release the Epstein files!" Will they learn from that example to focus? To actually focus? And yes, demanding $$$ escrowed wager stakes can make it a matter of macho honor... honor that they always, always lose, as the weenie liars that they are. 
 


Planet DebianFreexian Collaborators: Monthly report about Debian Long Term Support, October 2025 (by Roberto C. Sánchez)

The Debian LTS Team, funded by Freexian’s Debian LTS offering, is pleased to report its activities for October.

Activity summary

During the month of October, 21 contributors have been paid to work on Debian LTS (links to individual contributor reports are located below).

The team released 37 DLAs fixing 893 CVEs.

The team has continued in its usual rhythm, preparing and uploading security updates targeting LTS and ELTS, as well as helping with updates to oldstable, stable, testing, and unstable. Additionally, the team received several contributions of LTS uploads from Debian Developers outside the standing LTS Team.

Notable security updates:

  • https-everywhere, prepared by Markus Koschany, deals with a problem created by ownership of the https-rulesets.org domain passing to a malware operator
  • openjdk-17 and openjdk-11, prepared by Emilio Pozuelo Monfort, fixes XML external entity and certificate validation vulnerabilities
  • intel-microcode, prepared by Tobias Frost, fixes a variety of privilege escalation and denial of service vulnerabilities

Notable non-security updates:

  • distro-info-data, prepared by Stefano Rivera, updates information concerning current and upcoming Debian and Ubuntu releases

Contributions from outside the LTS Team:

  • Lukas Märdian, a Debian Developer, provided an update of log4cxx
  • Andrew Ruthven, one of the request-tracker4 maintainers, provided an update of request-tracker4
  • Christoph Goehre, co-maintainer of thunderbird, provided an update of thunderbird

Beyond the typical LTS updates, the team also helped the Debian community more broadly:

  • Guilhem Moulin prepared oldstable/stable updates of libxml2, and an unstable update of libxml2.9
  • Bastien Roucariès prepared oldstable/stable updates of imagemagick
  • Daniel Leidert prepared an oldstable update of python-authlib, oldstable update of libcommons-lang-java and stable update of libcommons-lang3-java
  • Utkarsh Gupta prepared oldstable/stable/testing/unstable updates of ruby-rack

The LTS Team is grateful for the opportunity to contribute to making LTS a high-quality offering for sponsors and users. We are also particularly grateful for the collaboration from others outside the team; their contributions are important to the success of the LTS effort.

Individual Debian LTS contributor reports

Thanks to our sponsors

Sponsors that joined recently are in bold.

,

Planet DebianClint Adams: monkeying around bitrot

One of the servers to which I SSH ratcheted up its public key requirements and thus the Monkeysphere key I've been using for 15 years stopped working.

Unfortunately, monkeysphere gen-subkey hardcodes RSA keys, and if I'm going to be forced to use a new subkey I want mine to be of the 25519 variety. Therefore, to add a subkey by hand:

gpg --expert --edit-key $KEYID

Follow roughly what's in /usr/share/monkeysphere/m/gen_subkey, but change the key type to 11 (ECC (set your own capabilities)), don't bother with Encrypt capability, and pick Curve25519.

monkeysphere subkey-to-ssh-agent and agent-transfer will be all happy with the "ed25519" subkey without any code modifications, and you won't need to rewrite monkeysphere from scratch to use Sequoia for the next 15 years.
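For those who prefer to skip the interactive --edit-key menus entirely, GnuPG ≥ 2.1 can do roughly the same thing non-interactively with --quick-add-key. A hedged sketch that exercises the commands in a throwaway keyring (the demo user ID is invented; point GNUPGHOME at your real keyring, and add a passphrase, to do this for real):

```shell
# Non-interactive sketch: add an ed25519, Authenticate-only subkey,
# which is what an SSH subkey needs. Demonstrated in a throwaway
# GNUPGHOME so nothing touches your real keyring.
set -eu
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"

# Demo primary key (in real life you already have one).
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-gen-key 'Demo User <demo@example.invalid>' ed25519 sign never

# Fingerprint of the primary key, from machine-readable output.
FPR=$(gpg --list-keys --with-colons | awk -F: '/^fpr:/ {print $10; exit}')

# The actual step: an ed25519 subkey with only the auth capability.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-add-key "$FPR" ed25519 auth never
```

After this, `monkeysphere subkey-to-ssh-agent` should find the new [A]-capable subkey the same way as one created through the --edit-key route.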

Posted on 2025-11-28

David Brin 2A New Deal with the American People

Political Tactics that Might Work

In my earlier postings, Part One and Part Two, I aimed to study an old – though successful – political tactic that was concocted and executed with great skill by a rather different version of Republicans. A tactic that later dissolved into a swill of broken promises, after achieving Power.

So, shall we wind this up with a shopping list of our own?  What follows is a set of promises – a contract of our own, aiming for the spirit of FDR’s New Deal – with the citizens of America. 

First, yes. It is hard to see, in today’s ruling coalition of kleptocrats, fanatics and liars, any of the genuinely sober sincerity that many Americans thought they could sense coming from Newt Gingrich and the original wave of “neoconservatives.”  Starting with Dennis “Never Negotiate” Hastert, the GOP leadership caste spiraled into ever-accelerating scandal and corruption.

Still, I propose to ponder what a “Democratic Newest Deal for America” might look like!  

–       Exposing hypocrisy and satirizing the failure of that earlier “contract” …

–       while using its best parts to appeal to sincere moderates and conservatives …

–       while firmly clarifying the best consensus liberal proposals…

–       while offering firm methods to ensure that any reforms actually take effect and don’t just drift away.

Remember that this alternative “contract” – or List of Democratic Intents – will propose reforms that are of real value… but also repeatedly highlight GOP betrayals.

Might it be worth testing before some focus groups?

                  A Draft: Democratic Deal for America

As Democratic Members of the House of Representatives and as citizens seeking to join that body, we propose both to change its practices and to restore bonds of trust between the people and their elected representatives.  

We offer these proposals in sincere humility, aware that so many past promises were broken.  We shall, foremost, emphasize restoration of a citizen’s right to know, and to hold the mighty accountable.

Especially, we will emphasize placing tools of democracy, openness and trust back into the hands of the People. We will also seek to ensure that government re-learns its basic function, to be the efficient, honest and effective tool of the People.

Toward this end, we’ll incorporate lessons of the past and goals for the future, promises that were betrayed and promises that need to be renewed, ideas from left, right and center. But above all, the guiding principle that America is an open society of bold and free citizens. Citizens who are empowered to remind their political servants who is boss. 

PART I.   REFORM CONGRESS 

In the first month of the new Congress, our new Democratic majority will pass the following major reforms of Congress itself, aimed at restoring the faith and trust of the American people:

FIRST: We shall see to it that the best parts of the 1994 Republican “Contract With America” – parts the GOP betrayed, ignored and forgot – are finally implemented, both in letter and in spirit.  

Among the good ideas the GOP betrayed are these:

•   Require all laws that apply to the rest of the country also apply to Congress; 

•   Arrange regular audits of Congress for waste or abuse;

•   Limit the terms of all committee chairs and party leadership posts;

•   Ban the casting of proxy votes in committee and law-writing by lobbyists;

•   Require that committee meetings be open to the public;

•   Guarantee honest accounting of our Federal Budget.

…and in the same spirit…

•   Members of Congress shall report openly all stock and other trades by members or their families, especially those trades which might be affected by the member’s inside knowledge.

By finally implementing these good ideas – some of which originated with decent Republicans – we show our openness to learn and to reach out, re-establishing a spirit of optimistic bipartisanship with sincere members of the opposing party, hopefully ending an era of unwarranted and vicious political war.

But restoring those broken promises will only be the beginning.

SECOND: We shall establish rules in both House and Senate permanently allowing the minority party one hundred subpoenas per year, plus the time and staff needed to question their witnesses before open subcommittee hearings, ensuring that Congress will never again betray its Constitutional duty of investigation and oversight, even when the same party holds both Congress and the Executive.

As a possibly better alternative – to be negotiated – we shall establish a permanent rule and tradition that each member of Congress will get one peremptory subpoena per year, plus adequate funding to compel a witness to appear and testify for up to five hours before a subcommittee in which she or he is a member. In this way, each member will be encouraged to investigate as a sovereign representative and not just as a party member.

THIRD: While continuing ongoing public debate over the Senate’s practice of filibuster, we shall use our next majority in the Senate to restore the original practice: that senators invoking a filibuster must speak on the chamber floor the entire time. 

FOURTH: We shall create the office of Inspector General of the United States, or IGUS, who will head the U.S. Inspectorate, a uniformed agency akin to the Public Health Service, charged with protecting the ethical and law-abiding health of government.  Henceforth, the inspectors-general in all government agencies, including military judge-advocates general (JAGs) will be appointed by and report to IGUS, instead of serving at the whim of the cabinet or other officers that they are supposed to inspect. IGUS will advise the President and Congress concerning potential breaches of the law. IGUS will provide protection for whistle-blowers and safety for officials refusing to obey unlawful orders. 

In order to ensure independence, the Inspectorate shall be funded by an account to pay for operations that is filled by Congress, or else by some other means, a decade in advance. IGUS will be appointed to six-year terms by a 60% vote of a commission consisting of all past presidents and current state governors. IGUS will create a corps of trusted citizen observers, akin to grand juries, cleared to go anywhere and assure the American people that the government is still theirs, to own and control.

FIFTH: Independent congressional advisory offices for science, technology and other areas of skilled, fact-based analysis will be restored in order to counsel Congress on matters of fact without bias or dogma-driven pressure. Rules shall ensure that technical reports may not be re-written by politicians, changing their meaning to bend to political desires. 

Every member of Congress shall be encouraged and funded to appoint from their home district a science-and-fact advisor who may interrogate the advisory panels and/or answer questions of fact on the member’s behalf.

SIXTH: New rules shall limit “pork” earmarking of tax dollars to benefit special interests or specific districts. Exceptions must come from a single pool, totaling no more than one half of a percent of the discretionary budget. These exceptions must be placed in clearly marked and severable portions of a bill, at least two weeks before the bill is voted upon.  Earmarks may not be inserted into conference reports. Further, limits shall be placed on no-bid, crony, or noncompetitive contracts, where the latter must have firm expiration dates.  Conflict of interest rules will be strengthened. 

SEVENTH: Create an office that is tasked to translate and describe all legislation in easily understandable language, for public posting at least three days before any bill is voted upon, clearly tracking changes or insertions, so that the public (and even members of Congress) may know what is at stake.  This office may recommend division of any bill that inserts or combines unrelated or “stealth” provisions.

EIGHTH: Return the legislative branch of government to the people, by finding a solution to the cheat of gerrymandering, which enables politicians to choose their voters instead of the other way around. We shall encourage and insist that states do this in an evenhanded manner, either by using independent redistricting commissions or by minimizing overlap between state legislature districts and those for Congress.

NINTH: Newly elected members of Congress with credentials from their states shall be sworn in by impartial clerks of either the House or Senate, without partisan bias, and at the new member’s convenience. The House may be called into session, with or without action by the Speaker, at any time that a petition is submitted to the Chief Clerk that was signed by 40% of the members. 

TENTH: One time in any week, the losing side in a House vote may demand and get an immediate non-binding secret polling of the members who just took part in that vote, using technology to ensure reliable anonymity. While this secret ballot will be non-binding legislatively, the poll will reveal whether some members felt coerced or compelled to vote against their conscience. Members who refuse to be polled anonymously will be presumed to have been so compelled or coerced.

II.  REFORM AMERICA

 Thereafter, within the first 100 days of the new Congress, we shall bring to the House Floor the following bills, each to be given full and open debate, each to be given a clear and fair vote and each to be immediately available for public inspection and scrutiny. 

DB Note: The following proposed bills are my own particular priorities, chosen because I believe they are both vitally important and under-appreciated! (Indeed, some of them you’ll see nowhere else.)

Their common trait – until you get to #20 – is that they have some possibility of appealing to reasonable people across party lines… the “60%+ rule” that worked so persuasively in 1994.

#20 will be a catch-all that includes a wide swathe of reforms sought by many Democrats – and, likely, by many of you — but may entail more dispute, facing strong opposition from the other major party. 

In other words… as much as you may want the items in #20 – (and I do too: most of them!) — you are going to have to work hard for them separately from a ‘contract’ like this one, that aims to swiftly take advantage of 60%+ consensus, to get at least an initial tranche of major reforms done.

1. THE SECURITY FOR AMERICA ACT will ensure that top priority goes to America’s military and security readiness, especially our nation’s ability to respond to surprise threats, including natural disasters or other emergencies. FEMA and the CDC and other contingency agencies will be restored and enhanced, their agile effectiveness audited.

When ordering a discretionary foreign intervention, the President must report probable effects on readiness, as well as the purposes, severity and likely duration of the intervention, along with credible evidence of need. 

All previous Congressional approvals for foreign military intervention or declared states of urgency will be explicitly canceled, so that future force resolutions will be fresh and germane to each particular event, with explicit expiration dates. All Eighteenth or Nineteenth Century laws that might be used as excuses for Executive abuse will be explicitly repealed. 

Reserves will be augmented and modernized. Reserves shall not be sent overseas without a Congressionally certified state of urgency that must be renewed at six-month intervals. Any urgent federalization and deployment of National Guard or other troops to American cities, on the excuse of civil disorder, shall be supervised by a plenary of the nation’s state governors, who may veto any such deployment by a 40% vote or a signed declaration by twenty governors.

The Commander-in-Chief may not suspend any American law, or the rights of American citizens, without submitting the brief and temporary suspension to Congress for approval in session. 

2. THE PROFESSIONALISM ACT will protect the apolitical independence of our intelligence agencies, the FBI, the scientific and technical staff in executive departments, and the United States Military Officer Corps.  All shall be given safe ways to report attempts at political coercion or meddling in their ability to give unbiased advice.  Whistle-blower protections will be strengthened within the U.S. government. 

The federal Inspectorate will gather and empower all agency Inspectors General and Judges Advocate General under the independent and empowered Inspector General of the United States (IGUS).

3. THE SECRECY ACT will ensure that the recent, skyrocketing use of secrecy – far exceeding anything seen during the Cold War – shall reverse course.  Independent commissions of trusted Americans shall approve, or set time limits to, all but the most sensitive classifications, which cannot exceed a certain number.  These commissions will include some members who are chosen (after clearance) from a random pool of common citizens.  Secrecy will not be used as a convenient way to evade accountability.

4. THE SUSTAINABILITY ACT will make it America’s priority to pioneer technological paths toward energy independence, emphasizing economic health that also conserves both national and world resources.  Ambitious efficiency and conservation standards may be accompanied by compromise free market solutions that emphasize a wide variety of participants, with the goal of achieving more with less, while safeguarding the planet for our children.

5. THE POLITICAL REFORM ACT will ensure that the nation’s elections take place in a manner that citizens can trust and verify.  Political interference in elections will be a federal crime.  Strong auditing procedures and transparency will be augmented by whistleblower protections.  New measures will distance government officials from lobbyists.  Campaign finance reform will reduce the influence of Big Money over politicians. The definition of a ‘corporation’ shall be clarified, so that corporations are neither ‘persons’ nor entitled to use money or other means to meddle in politics, nor to coerce their employees to act politically.

Gerrymandering will be forbidden by national law. 

The Voting Rights Act will be reinforced, overcoming all recent Court rationalizations to neuter it.

6.  THE TAX REFORM ACT will simplify the tax code, while ensuring that everybody pays their fair share.  Floors for the Inheritance Tax and Alternative Minimum Tax will be raised to ensure they only affect the truly wealthy, while loopholes used to evade those taxes will be closed. Modernization of the IRS and funding for auditors seeking illicitly hidden wealth shall be ensured by allowing the IRS to draw upon major penalties that have been imposed by citizen juries.

All tax breaks for the wealthy will be suspended during time of war, so that the burdens of any conflict or emergency are shared by all.[1]

7.  THE AMERICAN EXCELLENCE ACT will provide incentives for American students to excel at a range of important fields. This nation must especially maintain its leadership, by training more experts and innovators in science and technology.  Education must be a tool to help millions of students and adults adapt, to achieve and keep high-paying 21st Century jobs.

8. THE HEALTHY CHILDREN ACT will provide basic coverage for all of the nation’s children to receive preventive care and needed medical attention.  Whether or not adults should get insurance using market methods can be argued separately.

 But under this act, all U.S. citizens under the age of 25 shall immediately qualify as “seniors” under Medicare, an affordable step that will relieve the nation’s parents of stressful worry. A great nation should see to it that the young reach adulthood without being handicapped by preventable sickness.

9. THE CYBER HYGIENE ACT: Adjusting liability laws for a new and perilous era, citizens and small companies whose computers are infested and used by ‘botnets’ to commit crimes shall be deemed immune from liability for resulting damages, providing that they download and operate a security program from one of a dozen companies that have been vetted and approved for effectiveness by the US Department of Commerce. Likewise, companies that release artificial intelligence programs shall face lessened liability if those programs persistently declare their provenance and artificiality and potential dangers. 

10. THE TRUTH AND RECONCILIATION ACT:  Without interfering in the president’s constitutional right to issue pardons for federal offenses, Congress will pass a law defining the pardon process, so that all persons who are excused for either convictions or possible crimes must at least explain those crimes, under oath, before an open congressional committee, before walking away from them with a presidential pass. If the crime is not described in detail, then any pardon cannot apply to any excluded portion. Further, we shall issue a challenge that no president shall ever issue more pardons than both of the previous administrations, combined.

Congress shall act to limit the effect of Non-Disclosure Agreements (NDAs) that squelch public scrutiny of officials and the powerful. With arrangements to exchange truth for clemency, both current and future NDAs shall decay over a reasonable period of time. Incentives will draw victims of blackmail to come forward and expose their blackmailers.

11. THE IMMUNITY LIMITATION ACT: The Supreme Court has ruled that presidents should be free to do their jobs without undue distraction by legal procedures and jeopardies. Taking that into account, we shall nevertheless – by legislation – firmly reject the artificial and made-up notion of blanket Presidential Immunity or that presidents are inherently above the law. 

Instead, the Inspector General of the United States (IGUS) shall supervise legal cases that are brought against the president so that they may be handled by the president’s chosen counsel in order of importance or severity, in such a way that the sum of all such external legal matters will take up no more than ten hours a week of any president’s time. While this may slow such processes, the wheels of law will not be fully stopped. 

Civil or criminal cases against a serving president may be brought to trial by a simple majority consent of both houses of Congress, though no criminal or civil punishment may be exacted until after the president leaves office, either by end-of-term or impeachment and Senate conviction. 

12. THE FACT ACT: The Fact Act will begin by restoring the media Rebuttal Rule, prying open “echo chamber” propaganda mills. Any channel, or station, or Internet podcast, or meme distributor that accepts advertising or reaches more than 10,000 followers will be required to offer five minutes per day during prime time and ten minutes at other times to reputable and vigorous adversaries. Until other methods are negotiated, each member of Congress shall get to choose one such vigorous adversary, ensuring that all perspectives may be involved.

The Fact Act will further fund experimental Fact-Challenges, where major public disagreements may be openly and systematically and reciprocally confronted with demands for specific evidence.

The Fact Act will restore full funding and staffing to both the Congressional Office of Technology Assessment and the executive Office of Science and Technology Policy (OSTP). Every member of Congress shall be funded to hire a science and fact advisor from their home district, who may interrogate the advisory bodies – an advisor who may also answer questions of fact on the member’s behalf.

This bill further requires that the President must fill, by law, the position of White House Science Adviser from a diverse and bipartisan slate of qualified candidates offered by the Academy of Science. The Science Adviser shall have uninterrupted access to the President for at least two one-hour sessions per month.

13. THE VOTER ID ACT: Under the 13th and 14th Amendments, this act requires that states mandating Voter ID requirements must offer substantial and effective compliance assistance, helping affected citizens to acquire their entitled legal ID and register to vote. 

Any state that fails to provide such assistance, substantially reducing the fraction of eligible citizens turned away at the polls, shall be assumed in violation of equal protection and engaged in illegal voter suppression. If such compliance assistance has been vigorous and effective for ten years, then that state may institute requirements for Voter ID.      

In all states, registration for citizens to vote shall be automatic with a driver’s license or passport or state-issued ID, unless the citizen opts-out.

14. THE WYOMING RULE: Congress shall end the arrangement (under the Permanent Apportionment Act of 1929) for perpetually limiting the House of Representatives to 435 members. Instead, it will institute the Wyoming Rule: the least-populated state shall get one representative, and all other states will be apportioned representatives according to their population, in full-integer multiples of the smallest state’s. The Senate’s inherent bias favoring small states should be enough; in the House, all citizens should get votes of equal value. (See: https://thearp.org/blog/the-wyoming-rule/)
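As an aside (this sketch is mine, not part of the proposal's text), the Wyoming Rule's arithmetic is simple: divide each state's population by the smallest state's population, round to the nearest whole number, and floor at one seat. The state names and populations below are made-up round figures, not census data.

```python
# Illustrative sketch of the Wyoming Rule apportionment described above.
# All populations here are hypothetical round numbers.

def wyoming_rule(populations):
    """Apportion House seats: each state's population divided by the
    smallest state's population, rounded, with a minimum of one seat."""
    smallest = min(populations.values())
    return {state: max(1, round(pop / smallest))
            for state, pop in populations.items()}

# The smallest state anchors the ratio at exactly one seat.
states = {"Smallville": 600_000, "Midland": 3_100_000, "Bigstate": 39_000_000}
print(wyoming_rule(states))  # {'Smallville': 1, 'Midland': 5, 'Bigstate': 65}
```

Under this scheme the House would grow or shrink automatically with each census, rather than staying frozen at 435.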

15:  IMMIGRATION REFORM: There are already proposed immigration law reforms on the table, worked out by sincere Democrats and sincere Republicans, back when the latter were a thing. These bipartisan reforms will be revisited, debated, updated and then brought to a vote. 

In addition, if a foreign nation is among the top five sources of refugees seeking U.S. asylum from persecution in their homelands, then by law it shall be incumbent upon the political and social elites in that nation to help solve the problem, or else take responsibility for causing their citizens to flee. 

Upon verification that their regime is among those top five, that nation’s elites will be billed, enforceably, for U.S. expenses in giving refuge to that nation’s citizens. Further, all trade and other advantages of said elites will be suspended and access to the United States banned, except for the purpose of negotiating ways that the U.S. can help in that nation’s rise to both liberty and prosperity, thus reducing refugee flows in the best possible way. 

16: THE EXECUTIVE OFFICE MANAGER: By law we shall establish under IGUS (the Inspectorate) a civil service position of White House Manager, whose function is to supervise all non-political functions and staff. This would include the Executive Mansion’s physical structure and publicly-owned contents, but also policy-neutral services such as the switchboard, kitchens, Travel Office, medical office, and Secret Service protection details, since there are no justifications for the President or political staff to have whim authority over such apolitical employees.

With due allowance and leeway for needs of the Office of President, public property shall be accounted-for. The manager will allocate which portions of any trip expense should be deemed private and thereupon – above a basic allowance – shall be billed to the president or his/her party. 

This office shall supervise annual physical and mental examination by external experts for all senior office holders including the President, Vice President, Cabinet members and leaders of Congress.

Any group of twenty senators or House members or state governors may choose one periodical, network or other news source to get credentialed to the White House Press Pool, spreading inquiry across all party lines and ensuring that all rational points of view get access.

17: EMOLUMENTS AND GIFTS ACT: Emoluments and gifts and other forms of valuable beneficence bestowed upon the president, or members of Congress, or judges, or their families or staffs, shall be more strictly defined and transparently controlled. All existing and future presidential libraries or museums or any kind of shrine shall strictly limit the holding, display or lending of gifts to, from, or by a president or ex-president, which shall instead be owned and held (except for facsimiles) by the Smithsonian and/or sold at public auction. 

Donations by corporations or wealthy individuals to pet projects of a president or other members of government, including inauguration events, shall be presumed to be illegal bribery unless they are approved by a nonpartisan ethical commission.

18: BUDGETS: If Congress fails to fulfill its budgetary obligations or to raise the debt ceiling, the result will not be a ‘government shutdown.’ Rather, all pay and benefits will cease going to any Senator or Representative whose annual income is above the national average, until appropriate legislation has passed, at which point only 50% of any backlog arrears may be made up.

19: THE RURAL AMERICA AND HOUSING ACT: Giant corporations and cartels are using predatory practices to unfairly corner, control or force-out family farms and small rural businesses. We shall upgrade FDR-era laws that saved the American heartland for the people who live and work there, producing the nation’s food. Subsidies and price supports shall only go to family farms or co-ops. Monopolies in fertilizer, seeds and other supplies will be broken up and replaced by competition. Living and working and legal conditions for farm workers and food processing workers will be improved by steady public and private investments.

Cartels that buy-up America’s stock of homes and home-builders will be investigated for collusion to limit construction and/or drive up rents and home prices and appropriate legislation will follow. 

20: THE LIBERAL AGENDA: Okay. Your turn. Our turn. Beyond the 60% rule.

·      Protect women’s autonomy, credibility and command over their own bodies.

·      Ease housing costs: stop private corps buying up large tracts of homes, colluding on prices.

·      Help working families with child care and elder care.

·      Consumer protection: empower the Consumer Financial Protection Bureau.

·      At least allow student debt refinancing, which the GOP dastardly disallowed. 

·      Restore the postal savings bank for the un-banked.

·      Basic, efficient, universal background checks for gun purchases, with possible exceptions.

·      A national Election Day holiday, for those who actually vote.

·      Carefully revive the special prosecutor law. 

·      Expand and re-emphasize protections under the Civil Service Act.

·      Anti-trust breakup of monopoly/duopolies.


….AND SO ON…

III.          Conclusion

All right.  I know this proposal – that we do a major riff off of the 1994 Republican Contract with America – will garner one top complaint: We don’t want to look like copycats!

And yet, by satirizing that totally-betrayed “contract,” we poke GOP hypocrisy… while openly reaching out to the wing of conservatism that truly believed the promises, back in ’94, perhaps winning some of them over, by offering deliverable metrics to get it right this time…

…while boldly outlining reasonable liberal measures that the nation desperately needs.

I do not insist that the measures I posed — in my rough draft “Democratic Deal” — are the only ones possible! (Some might even seem crackpot… till you think them over.)  New proposals would be added or changed.  

Still, this list seems reasonable enough to debate, refine, and possibly offer to focus groups. Test marketing (the way Gingrich did!) should tell us whether Americans would see this as “copycat”… or else a clever way to turn the tables, in an era when agility must be an attribute of political survival.

Planet DebianSimon Josefsson: Container Images for Debian with Guix

The debian-with-guix-container project builds and publishes container images of Debian GNU/Linux stable with GNU Guix installed.

The images are like normal Debian stable containers but have the guix tool and a reasonably fresh guix pull.

Supported architectures include amd64 and arm64. The multi-arch container is called:

registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable

It may also be accessed via debian-with-guix at Docker Hub as:

docker.io/jas4711/debian-with-guix:stable

The container images may be used like this:

$ podman run --privileged -it --hostname guix --rm registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable
root@guix:/# hello
bash: hello: command not found
root@guix:/# guix describe
  guix c9eb69d
    repository URL: https://gitlab.com/debdistutils/guix/mirror.git
    branch: master
    commit: c9eb69ddbf05e77300b59f49f4bb5aa50cae0892
root@guix:/# LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild &
[1] 21
root@guix:/# GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
root@guix:/# guix describe
Generation 2    Nov 28 2025 10:14:11    (current)
  guix c9eb69d
    repository URL: https://gitlab.com/debdistutils/guix/mirror.git
    branch: master
    commit: c9eb69ddbf05e77300b59f49f4bb5aa50cae0892
root@guix:/# guix install --verbosity=0 hello
accepted connection from pid 55, user root
The following package will be installed:
   hello 2.12.2

hint: Consider setting the necessary environment variables by running:

     GUIX_PROFILE="/root/.guix-profile"
     . "$GUIX_PROFILE/etc/profile"

Alternately, see `guix package --search-paths -p "/root/.guix-profile"'.

root@guix:/# GUIX_PROFILE="/root/.guix-profile"
root@guix:/# . "$GUIX_PROFILE/etc/profile"
root@guix:/# hello
Hello, world!
root@guix:/# 

Below is an example GitLab pipeline job that demonstrates how to run guix install to install additional dependencies, and then download and build a package that picks up the installed package from the system.

test-wget-configure-make-libksba-amd64:
  image: registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable
  before_script:
  - env LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild $GUIX_DAEMON_ARG &
  - GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
  - guix describe
  - guix install libgpg-error
  - GUIX_PROFILE="/root/.guix-profile"; . "$GUIX_PROFILE/etc/profile"
  - apt-get install --update -y --no-install-recommends build-essential wget ca-certificates bzip2
  script:
  - wget https://www.gnupg.org/ftp/gcrypt/libksba/libksba-1.6.7.tar.bz2
  - tar xfa libksba-1.6.7.tar.bz2
  - cd libksba-1.6.7
  - ./configure
  - make V=1
  - make check VERBOSE=t V=1

The images were initially created for use in GitLab CI/CD Pipelines but should work for any use.

The images are built in a GitLab CI/CD pipeline, see .gitlab-ci.yml.

The containers are derived from official Debian stable images with Guix installed and a successful run of guix pull, built using buildah invoked from build.sh using image/Containerfile that runs image/setup.sh.

The pipeline also pushes images to the GitLab container registry, and then to Docker Hub.

Guix binaries are downloaded from the Guix binary tarballs project because of upstream download site availability and bandwidth concerns.

Enjoy these images! Hopefully they can help you overcome the loss of Guix in Debian, which used to make it a mere apt-get install guix away.

There are several things that may be improved further. An alternative to using podman --privileged is to use --security-opt seccomp=unconfined --cap-add=CAP_SYS_ADMIN,CAP_NET_ADMIN which may be slightly more fine-grained.

For ppc64el support I ran into an error message that I wasn’t able to resolve:

guix pull: error: while setting up the build environment: cannot set host name: Operation not permitted

For riscv64, I can’t even find a Guix riscv64 binary tarball for download, is there one anywhere?

For arm64 containers, it seems that you need to start guix-daemon with --disable-chroot to get something to work, at least on GitLab.com’s shared runners, otherwise you will get this error message:

guix install: error: clone: Invalid argument

Building the images themselves also requires disabling some security functionality: I was not able to build images with buildah without providing --cap-add=CAP_SYS_ADMIN,CAP_NET_ADMIN, otherwise there were errors like this:

guix pull: error: cloning builder process: Operation not permitted
guix pull: error: clone: Operation not permitted
guix pull: error: while setting up the build environment: cannot set loopback interface flags: Operation not permitted

Finally on amd64 it seems --security-opt seccomp=unconfined is necessary, otherwise there is an error message like this, even if you use --disable-chroot:

guix pull: error: while setting up the child process: in phase setPersonality: cannot set personality: Function not implemented

This particular error is discussed upstream, but I think generally these errors suggest that guix-daemon could make its use of kernel features more optional: if some particular feature is not available, gracefully fall back to another mode of operation, instead of exiting with an error. Of course, it should never fall back to an insecure mode of operation, unless the user requests that.
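As a hedged sketch of that idea (hypothetical; this is not how guix-daemon is actually structured), a daemon could probe unshare(2) once at startup and map the failure mode to a degraded isolation strategy, rather than aborting outright:

```python
# Hypothetical sketch (not guix-daemon code): probe for namespace support
# and fall back to a less isolated build mode instead of aborting.
import ctypes
import errno

CLONE_NEWNS = 0x00020000  # mount-namespace flag, from <sched.h>

def isolation_mode(unshare_rc, err):
    """Map an unshare(2) result to a build-isolation strategy."""
    if unshare_rc == 0:
        return "namespaces"             # full sandbox available
    if err in (errno.EPERM, errno.EACCES):
        return "no-chroot"              # akin to passing --disable-chroot
    if err in (errno.EINVAL, errno.ENOSYS):
        return "no-namespace-support"   # kernel lacks the feature
    return "fatal"

libc = ctypes.CDLL(None, use_errno=True)
rc = libc.unshare(CLONE_NEWNS)
print("isolation:", isolation_mode(rc, ctypes.get_errno()))
```

The point is only that the errno from the failed syscall already tells the daemon which degraded mode is appropriate; a user flag could still forbid insecure fallbacks.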

Happy Hacking!

Planet DebianRussell Coker: 10gbit and 40gbit Home Networking

Aliexpress has a 4 port 2.5gbit switch with 2*SFP+ sockets for $34.35 delivered [1]. 4 ports isn’t very good for the more common use cases (if daisy chaining them then it’s only 2 available for devices) so this is really a device for use with a 10Gbit uplink.

Aliexpress has a pair of SFP+ 10Gbit devices with 1M of copper between them for $15.79 delivered [2]. That page also offers a pair of QSFP+ 40Gbit devices with 1M of copper between them for $27.79 delivered.

They have a dual port SFP+ card for a server with two of the pairs of SFP+ 10gbit devices with copper between them for $32.51 delivered [3].

So you can get a 2.5gbit switch with two 10gbit uplink cables to nearby servers for $66.86 including postage. I don’t need this but it is tempting. I spent $93.78 to get 2.5gbit networking [4] so spending $66.86 to get part of my network to 10gbit isn’t much.

It is $99.81 including postage for a Mellanox 2*40Gbit QSFP+ card and two QSFP+ adaptors with 3M of copper between them [5]. It is $55.81 including postage for the Mellanox card without the cable. So that’s $155.62 for a point to point 40gbit link between systems that are less than 3M apart, that’s affordable for a home lab. As an aside the only NVMe I’ve tested which can deliver such speeds was in a Thinkpad and the Thinkpad entered a thermal throttling state after a few seconds of doing that.
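For what it's worth, the totals above do add up; a trivial check using the prices as quoted in the post:

```python
# Sanity-check the price totals quoted above (USD, delivered; rounded
# to cents to avoid floating point noise).
switch_cost = 34.35       # 4 port 2.5gbit switch with 2*SFP+ [1]
card_kit_cost = 32.51     # dual port SFP+ card with two DAC pairs [3]
print(round(switch_cost + card_kit_cost, 2))      # 66.86

card_with_cable = 99.81   # Mellanox 2*40Gbit QSFP+ card + 3M DAC [5]
card_only = 55.81         # the same card without the cable
print(round(card_with_cable + card_only, 2))      # 155.62
```

So the 40gbit point to point link is one card-plus-cable kit on one end and a bare card on the other.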

The best price I could see for a 40Gbit switch is $1280 for a L3 Managed switch with 2*40G QSFP+ slot ports, 4*10G SFP+ ports, and 48*2.5G RJ45 ports [6]. That’s quite affordable for the SME market but a bit expensive for home users (although I’m sure that someone on r/homelab has one).

I’m not going to get 40Gbit, that’s well above what I need and while a point to point link is quite affordable I don’t have servers in that range. But I am seriously considering 10Gbit, I get paid to do enough networking stuff that having some hands on experience with 10Gbit could be useful.

For a laptop, a 5gbit ethernet USB device is $29.48 including delivery, which isn’t too expensive [7]. The faster ones all seem to be Thunderbolt and well over $100, which is disappointing as USB 3.2 can do up to 20Gbit. If I start doing 10gbit over ethernet I’ll get one of those USB devices for testing.

For a single server it’s cheaper and easier to get a 4 port 2.5Gbit ethernet card for $55.61 [8].

Worse Than FailureError'd: On the Dark Side

...matter of fact, it's all dark.

Gitter Hubber checks in on the holidays: "This is the spirit of the Black Friday on GitHub. That's because I'm using dark mode. Otherwise, it would have a different name… You know what? Let's just call it Error Friday!"


"Best get typing!" self-admonishes Jason G. Suffering a surfeit of snark, he proposes "Not sure my battery will last long enough. Finally, quantum resistant security. I can't remember my number after the 5000th digit." Any of those will do just fine.


Don't count Calle L. out. "This is for a calorie tracking app, on Thanksgiving. Offer was so delicious it wasn't even a number any more! Sadly it did not slim the price down more than expected."


"Snow and rain and rain and snow!" exclaims Paul N. "Weather so astounding, they just had to trigger three separate notifications at the same time."


It's not a holiday for everyone though, is it? Certainly not for Michael R., who is back with a customer service complaint about custom deliveries. "I am unlucky with my deliveries. This time it's DPD."



365 TomorrowsFly on the wall

Author: Larson Holm He splashed the cold water up into his face and looked at himself in the mirror. It would have to do. Why did she want to talk now? It had been five years, and she’d been the one to break it off. It hadn’t made sense then, what could’ve changed? Did she […]

The post Fly on the wall appeared first on 365tomorrows.

,

365 TomorrowsTomorrow, Forever

Author: Brian Ball She wouldn’t look him in the eye. He rattled off questions, but she ignored his ridiculous whimpering. She punctured the vitamin drip, tightened the chest straps and locked his neck in place. Too bad she couldn’t be bothered. She was the last person he’d ever see. A call came in. She ignored […]

The post Tomorrow, Forever appeared first on 365tomorrows.

Worse Than FailureClassic WTF: Teleported Release

It's a holiday in the US today, one where we give thanks. And today, we give thanks to not have this boss. Original. --Remy

Matt works at an accounting firm, as a data engineer. He makes reports for people who don’t read said reports. Accounting firms specialize in different areas of accountancy, and Matt’s firm is a general firm with mid-size clients.

The CEO of the firm is a legacy from the last century. The most advanced technology on his desk is a business calculator and a pencil sharpener. He still doesn’t use a cellphone. But he does have a son, who is “tech savvy”, which gives the CEO a horrible idea of how things work.

Usually, the CEO's requests are pretty light, in that it's sorting Excel files or sorting the output of an existing report. Sometimes the requests are bizarre or utter nonsense. And, because the boss doesn't know what the technical folks are doing, some of the IT staff may be a bit lazy about following best practices.

This means that most of Matt's morning is spent doing what is essentially Tier 1 support before he gets into doing his real job. Recently, there was a worse crunch, as actual support person Lucinda was out for maternity leave, and Jackie, the one other developer, was off on vacation on a foreign island with no Internet. Matt was in the middle of eating a delicious lunch of take-out lo mein when his phone rang. He sighed when he saw the number.

“Matt!” the CEO exclaimed. “Matt! We need to do a build of the flagship app! And a deploy!”

The app was rather large, and a build could take upwards of 45 minutes, depending on the day and how the IT gods were feeling. But the process was automated: the latest changes all got built and deployed each night. Anything approved was released within 24 hours. With everyone out of the office, there hadn't been any approved changes for a few weeks.

Matt checked GitHub to see if something had gone wrong with the automated build. Everything was fine.

“Okay, so I’m seeing that everything built on GitHub and everything is available in production,” Matt said.

“I want you to do a manual build, like you used to.”

“If I were to compile right now, it could take quite a while, and redeploying runs the risk of taking our clients offline, and nothing would be any different.”

“Yes, but I want a build that has the changes which Jackie was working on before she left for vacation.”

Matt checked the commit history, and sure enough, Jackie hadn’t committed any changes since two weeks before leaving on vacation. “It doesn’t look like she pushed those changes to GitHub.”

“Githoob? I thought everything was automated. You told me the process was automated,” the CEO said.

“It’s kind of like…” Matt paused to think of an analogy that could explain this to a golden retriever. “Your dishwasher, you could put a timer on it to run it every night, but if you don’t load the dishwasher first, nothing gets cleaned.”

There was a long pause as the CEO failed to understand this. “I want Jackie’s front-page changes to be in the demo I’m about to do. This is for Initech, and there’s millions of dollars riding on their account.”

“Well,” Matt said, “Jackie hasn’t pushed- hasn’t loaded her metaphorical dishes into the dishwasher, so I can’t really build them.”

“I don’t understand, it’s on her computer. I thought these computers were on the cloud. Why am I spending all this money on clouds?”

“If Jackie doesn’t put it on the cloud, it’s not there. It’s uh… like a fax machine, and she hasn’t sent us the fax.”

“Can’t you get it off her laptop?”

“I think she took it home with her,” Matt said.

“So?”

“Have you ever seen Star Trek? Unless Scotty can teleport us to Jackie’s laptop, we can’t get at her files.”

The CEO locked up on that metaphor. “Can’t you just hack into it? I thought the NSA could do that.”

“No-” Matt paused. Maybe Matt could try and recreate the changes quickly? “How long before this meeting?” he asked.

“Twenty minutes.”

“Just to be clear, you want me to do a local build with files I don’t have by hacking them from a computer which may or may not be on and connected to the Internet, and then complete a build process which usually takes 45 minutes- at least- deploy to production, so you can do a demo in twenty minutes?”

“Why is that so difficult?” the CEO demanded.

“I can call Jackie, and if she answers, maybe we can figure something out.”

The CEO sighed. “Fine.”

Matt called Jackie. She didn’t answer. Matt left a voicemail and then went back to eating his now-cold lo mein.
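For anyone curious, the check Matt ran boils down to a couple of git commands. This is a sketch of the general technique, not anything from the story itself, and the branch and remote names here are hypothetical:

```shell
# From a clone of the shared repository: did the changes ever leave
# Jackie's machine? (Branch and remote names are hypothetical.)
git fetch origin                       # refresh what we know about the remote
git log --oneline origin/main..main    # commits that exist locally but were never pushed
git status --short                     # uncommitted edits never even reach the log
```

If `origin/main..main` is empty and the working tree is clean, everything is "in the dishwasher"; otherwise the work only exists on that one laptop.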

[Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

Planet DebianRuss Allbery: Review: A Matter of Execution

Review: A Matter of Execution, by Nicholas & Olivia Atwater

Series: Tales of the Iron Rose #0
Publisher: Starwatch Press
Copyright: 2024
ISBN: 1-998257-08-8
Format: Kindle
Pages: 131

A Matter of Execution is the introductory novella that kicked off the Tales of the Iron Rose series. It is steampunk fantasy with airships. I previously read and reviewed the subsequent novel, Echoes of the Imperium.

As noted in that review, I read the novel first. That was a mistake; this is a much better place to start. A Matter of Execution was clearly intended as the introduction of all of these characters. More importantly, I think reading the novella first would have given me enough affinity with the characters to not mind the worst part of Echoes of the Imperium: the extremely slow first half that seemed filled with the protagonist's impostor syndrome.

A Matter of Execution opens, fittingly, with Captain William Blair, a goblin, former Imperial soldier, Oathbreaker, and series first-person protagonist being carted to his execution. He is not alone; in the same prison wagon is an arrogant (and racist) man named Strahl, the killer of one of the rulers of Lyonesse.

Strahl is rather contemptuous of Blair’s claim to be a captain, given that he’s both a goblin and an Oathbreaker. Strahl quickly revises that opinion when Blair’s crew, somewhat predictably given that he is the series protagonist, stages a daring escape for both of them. The heat of action gives each a chance to gain some respect for the other, which explains why Blair is not only willing to invite Strahl to join his crew, but to go back for Strahl’s companion.

Breaking out Strahl's companion will be a more difficult, and surprising, problem.

Nicholas Atwater is a role-playing game GM, something that you will learn in the "about the author" section at the end of this novella but probably will have guessed by then. Even more than Echoes of the Imperium, this novella feels like a (good) write-up of an RPG adventure. A wildly varied cast of characters come together and form a party with a well-defined objective that has some surrounding mysteries and surprises. Each of those characters gets their individual moment to show off their specific skills. Readers with a certain gaming background will know exactly where to insert the Borderlands-style title card with a slightly demented description of each character.

This is not a complaint. You may be able to see the bones of the setup adventure for a long-running campaign, but I like this style of character introduction and the story moves right along. There are a ton of varied characters, some interesting villains and maybe-villains, a rather satisfying heist setup, and some good chemistry and a bit of banter. This is not a deep story — it's clearly an introductory episode for both the characters and the world background — but it's a fun way to spend a few hours.

I think the best part of this series is the world-building. If you have read my review of Echoes of the Imperium, you have unfortunately been mildly spoiled for the revelation in this novella. I don't think it hurt the story that much; you will be able to predict what obvious gaps in the novel backstory the novella is going to fill in, but it's just as enjoyable to see how that happens. But the Atwaters aren't going to drop any of the big world-building bombs in the introductory novella, of course. Instead, you get a gradual introduction to the nature of magic in this world, some of the political setup of the recent war, and a quick introduction to the capabilities of Strahl's mysterious companion.

If you've not yet read this series, I recommend starting here. It's a quick investment to see if you'll be interested. The novel is heavier and slower, and the pacing of the first half isn't great, but the world-building is even better.

If you've already read the novel, this is still worth reading as long as you enjoyed it. You'll have a few moments of "oh, that's how that happened," and it's a fun and fast-moving way to spend a bit more time with the characters.

Followed by Echoes of the Imperium. The back matter of the novella lists The Winds of Fortune as forthcoming.

Rating: 7 out of 10

Planet DebianRussell Coker: PineTime Band

I’ve had a PineTime for just over 2 years [1]. About a year ago I had a band break and replaced it from a spare PineTime, and now I’ve just had another break. Having the band only last one year isn’t that great, but it’s fortunate that the break only affects the inner layer of plastic, so there is no risk of the watch suddenly falling off and being broken or lost. The Pine64 web site has a page about this with bad options: one broken link and a few Amazon items that have ridiculous postage [2].

I started writing this post while using the band from a Colmi P80 [3]. I bought one for a relative who wanted the metal band, and the way the AliExpress seller does it is to sell the package with the plastic band and include the metal band in the package, so I had a spare band. It fits quite well, and I saw none of the reported problems of the PineTime having insufficient space between the spring bar and the watch. The Colmi band in question is described as “rose gold” but is more like “pinkish beige” and doesn’t match the style of the black PineTime.

I ordered a couple of cheap bands from AliExpress which cost $9.77 and $13.55 including postage, while the ones that Pine64 recommends have over $15 postage from Amazon!

The 20mm Silicone Magnetic Buckle Watch Strap Band For Huawei GT2 Smart Watch Connected Bracelet Black Watchband Man [4] cost $13.55 including postage. It has a magnetic unfold mechanism which I find a bit annoying and it doesn’t allow easily changing the length. I don’t think I’ll choose that again. But it basically works and is comfortable.

The 20mm Metal Strap for Huawei Watch GT2 3 Quick Release Stainless Steel Watch Band for Samsung Galaxy Watch Bracelet [5] cost $9.77 including postage. I found this unreasonably difficult to put on and not particularly comfortable. But opinion will vary on that, it is cheap and will appeal to some people’s style.

Conclusion

There are claims that getting a replacement band for a PineTime is difficult. My experience is that every band with a 20mm attachment works as long as it’s designed for a square watch; some bands are designed to partly go around a round face and wouldn’t fit. I expect that some bands won’t fit, but I don’t think it’s enough of a problem to be worried about when buying a random band from AliExpress. The incidence of bands not fitting will probably be lower than the incidence of other AliExpress products not doing quite what you want (while meeting the legal criteria of doing what they are claimed to do) and going unused.

I’m now wearing the PineTime with the “Magnetic Buckle Watch Strap Band” and plan to wear it for the next year or so.

Planet DebianValhalla's Things: PDF Planners 2026

Posted on November 27, 2025
Tags: madeof:atoms, madeof:bits, craft:bookbinding

A few years ago I wrote some planner generating code to make myself a custom planner; in November 2023 I generated a few, and posted them here on the blog, in case somebody was interested in using them.

In 2024 I tried to do the same, and ended up being even later, to the point where I didn’t generate any (uooops).

I did, however, start to write a Makefile to automate the generation (and got stuck on the fact that there wasn’t an easy way to deduce the correct options needed from just the template name); this year, with the same promptness as in 2023 I got back to the Makefile and finished it, so maybe next year I will be able to post them early enough for people to print and bind them? maybe :)

Anyway, these are all of the variants I currently generate, for 2026.

The files with -book in the name have been imposed on A4 paper for a 16-page signature. All of the fonts have been converted to paths, for ease of printing (yes, this means that customizing the font requires running the script, but the alternative also had its drawbacks).
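(For the curious, the arithmetic behind a saddle-stitched signature, i.e. which pages land on which side of each duplex sheet, can be sketched in a few lines of Python. This is an illustration of the general booklet ordering, not the actual script used to generate these files.)

```python
def signature_order(n=16):
    """Return (front_left, front_right, back_left, back_right) page numbers
    for each duplex sheet of an n-page saddle-stitched signature (n % 4 == 0)."""
    assert n % 4 == 0
    sheets = []
    for i in range(n // 4):
        # The outermost sheet carries the last and first pages; work inwards.
        sheets.append((n - 2 * i, 1 + 2 * i,       # front side: left, right
                       2 + 2 * i, n - 1 - 2 * i))  # back side: left, right
    return sheets

# For a 16-page signature: sheet 1 front shows pages 16|1, its back 2|15, etc.
for fl, fr, bl, br in signature_order(16):
    print(f"front {fl:>2}|{fr:<2}  back {bl:>2}|{br:<2}")
```

Fold the printed stack in half and the pages come out in reading order, which is exactly what tools like pdfjam (or pypdf) automate.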

In English:

daily-95×186-en.pdf

blank daily pages, 95 mm × 186 mm;

daily-A5-en.pdf daily-A5-en-book.pdf

blank daily pages, A5;

daily-A6-en.pdf daily-A6-en-book.pdf

blank daily pages, A6;

daily-graph-A5-en.pdf daily-graph-A5-en-book.pdf

graph paper (4 mm) daily pages, A5;

daily-points4mm-A5-en.pdf daily-points4mm-A5-en-book.pdf

pointed paper (4 mm), A5;

daily-ruled-A5-en.pdf daily-ruled-A5-en-book.pdf

ruled paper daily pages, A5;

week_on_two_pages-A6-en.pdf week_on_two_pages-A6-en-book.pdf

weekly planner, one week on two pages, A6;

week_on_one_page-A6-en.pdf week_on_one_page-A6-en-book.pdf

weekly planner, one week per page, A6;

week_on_one_page_dots-A6-en.pdf week_on_one_page_dots-A6-en-book.pdf

weekly planner, one week per page with 4 mm dots, A6;

week_health-A6-en.pdf week_health-A6-en-book.pdf

weekly health tracker, one week per page with 4 mm dots, A6;

month-A6-en.pdf month-A6-en-book.pdf

monthly planner, A6;

And the same planners, in Italian:

daily-95×186-it.pdf

blank daily pages, 95 mm × 186 mm;

daily-A5-it.pdf daily-A5-it-book.pdf

blank daily pages, A5;

daily-A6-it.pdf daily-A6-it-book.pdf

blank daily pages, A6;

daily-graph-A5-it.pdf daily-graph-A5-it-book.pdf

graph paper (4 mm) daily pages, A5;

daily-points4mm-A5-it.pdf daily-points4mm-A5-it-book.pdf

pointed paper (4 mm), A5;

daily-ruled-A5-it.pdf daily-ruled-A5-it-book.pdf

ruled paper daily pages, A5;

week_on_two_pages-A6-it.pdf week_on_two_pages-A6-it-book.pdf

weekly planner, one week on two pages, A6;

week_on_one_page-A6-it.pdf week_on_one_page-A6-it-book.pdf

weekly planner, one week per page, A6;

week_on_one_page_dots-A6-it.pdf week_on_one_page_dots-A6-it-book.pdf

weekly planner, one week per page with 4 mm dots, A6;

week_health-A6-it.pdf week_health-A6-it-book.pdf

weekly health tracker, one week per page with 4 mm dots, A6;

month-A6-it.pdf month-A6-it-book.pdf

monthly planner, A6;

Some of the planners include ephemerides and moon phase data: these have been calculated for the town of Como, and specifically for geo:45.81478,9.07522?z=17, because that’s what everybody needs, right?

If you need the ephemerides for a different location and can’t run the script yourself (it depends on pdfjam, i.e. various GB of LaTeX, and a few Python modules such as dateutil, pypdf and jinja2), feel free to ask: unless I receive too many requests for this to be sustainable, I’ll generate them and add them to this post.

I hereby release all the PDFs linked in this blog post under the CC0 license.

You may notice that I haven’t decided on a license for the code dump repository; again if you need it for something (that is compatible with its unsupported status) other than running it for personal use (for which afaik there is an implicit license) let me know and I’ll push “decide on a license” higher on the stack of things to do :D

Finishing the Makefile meant that I had to add a tiny feature to one of the scripts involved, which required me to add a dependency to pypdf: up to now I have been doing the page manipulations with pdfjam, which is pretty convenient to use, but also uses LaTeX, and apparently not every computer comes with texlive installed (shocking, I know).

If I’m not mistaken, pypdf can do all of the things I’m doing with pdfjam, so maybe for the next year I could convert my script to use that one instead.

But then the planners 2027 will be quick and easy, and I will be able to publish them promptly, right?

,

Krebs on SecurityMeet Rey, the Admin of ‘Scattered Lapsus$ Hunters’

A prolific cybercriminal group that calls itself “Scattered LAPSUS$ Hunters” has dominated headlines this year by regularly stealing data from and publicly mass extorting dozens of major corporations. But the tables seem to have turned somewhat for “Rey,” the moniker chosen by the technical operator and public face of the hacker group: Earlier this week, Rey confirmed his real life identity and agreed to an interview after KrebsOnSecurity tracked him down and contacted his father.

Scattered LAPSUS$ Hunters (SLSH) is thought to be an amalgamation of three hacking groups — Scattered Spider, LAPSUS$ and ShinyHunters. Members of these gangs hail from many of the same chat channels on the Com, a mostly English-language cybercriminal community that operates across an ocean of Telegram and Discord servers.

In May 2025, SLSH members launched a social engineering campaign that used voice phishing to trick targets into connecting a malicious app to their organization’s Salesforce portal. The group later launched a data leak portal that threatened to publish the internal data of three dozen companies that allegedly had Salesforce data stolen, including Toyota, FedEx, Disney/Hulu, and UPS.

The new extortion website tied to ShinyHunters, which threatens to publish stolen data unless Salesforce or individual victim companies agree to pay a ransom.

Last week, the SLSH Telegram channel featured an offer to recruit and reward “insiders,” employees at large companies who agree to share internal access to their employer’s network for a share of whatever ransom payment is ultimately paid by the victim company.

SLSH has solicited insider access previously, but their latest call for disgruntled employees started making the rounds on social media at the same time news broke that the cybersecurity firm CrowdStrike had fired an employee for allegedly sharing screenshots of internal systems with the hacker group (CrowdStrike said its systems were never compromised and that it has turned the matter over to law enforcement agencies).

The Telegram server for the Scattered LAPSUS$ Hunters has been attempting to recruit insiders at large companies.

Members of SLSH have traditionally used other ransomware gangs’ encryptors in attacks, including malware from ransomware affiliate programs like ALPHV/BlackCat, Qilin, RansomHub, and DragonForce. But last week, SLSH announced on its Telegram channel the release of their own ransomware-as-a-service operation called ShinySp1d3r.

The individual responsible for releasing the ShinySp1d3r ransomware offering is a core SLSH member who goes by the handle “Rey” and who is currently one of just three administrators of the SLSH Telegram channel. Previously, Rey was an administrator of the data leak website for Hellcat, a ransomware group that surfaced in late 2024 and was involved in attacks on companies including Schneider Electric, Telefonica, and Orange Romania.

A recent, slightly redacted screenshot of the Scattered LAPSUS$ Hunters Telegram channel description, showing Rey as one of three administrators.

Also in 2024, Rey would take over as administrator of the most recent incarnation of BreachForums, an English-language cybercrime forum whose domain names have been seized on multiple occasions by the FBI and/or by international authorities. In April 2025, Rey posted on Twitter/X about another FBI seizure of BreachForums.

On October 5, 2025, the FBI announced it had once again seized the domains associated with BreachForums, which it described as a major criminal marketplace used by ShinyHunters and others to traffic in stolen data and facilitate extortion.

“This takedown removes access to a key hub used by these actors to monetize intrusions, recruit collaborators, and target victims across multiple sectors,” the FBI said.

Incredibly, Rey would make a series of critical operational security mistakes last year that provided multiple avenues to ascertain and confirm his real-life identity and location. Read on to learn how it all unraveled for Rey.

WHO IS REY?

According to the cyber intelligence firm Intel 471, Rey was an active user on various BreachForums reincarnations over the past two years, authoring more than 200 posts between February 2024 and July 2025. Intel 471 says Rey previously used the handle “Hikki-Chan” on BreachForums, where their first post shared data allegedly stolen from the U.S. Centers for Disease Control and Prevention (CDC).

In that February 2024 post about the CDC, Hikki-Chan says they could be reached at the Telegram username @wristmug. In May 2024, @wristmug posted in a Telegram group chat called “Pantifan” a copy of an extortion email they said they received that included their email address and password.

The message that @wristmug cut and pasted appears to have been part of an automated email scam that claims it was sent by a hacker who has compromised your computer and used your webcam to record a video of you while you were watching porn. These missives threaten to release the video to all your contacts unless you pay a Bitcoin ransom, and they typically reference a real password the recipient has used previously.

“Noooooo,” the @wristmug account wrote in mock horror after posting a screenshot of the scam message. “I must be done guys.”

A message posted to Telegram by Rey/@wristmug.

In posting their screenshot, @wristmug redacted the username portion of the email address referenced in the body of the scam message. However, they did not redact their previously-used password, and they left the domain portion of their email address (@proton.me) visible in the screenshot.

O5TDEV

Searching on @wristmug’s rather unique 15-character password in the breach tracking service Spycloud finds it is known to have been used by just one email address: cybero5tdev@proton.me. According to Spycloud, those credentials were exposed at least twice in early 2024 when this user’s device was infected with an infostealer trojan that siphoned all of its stored usernames, passwords and authentication cookies (a finding that was initially revealed in March 2025 by the cyber intelligence firm KELA).

Intel 471 shows the email address cybero5tdev@proton.me belonged to a BreachForums member who went by the username o5tdev. Searching on this nickname in Google brings up at least two website defacement archives showing that a user named o5tdev was previously involved in defacing sites with pro-Palestinian messages. The screenshot below, for example, shows that o5tdev was part of a group called Cyb3r Drag0nz Team.

Rey/o5tdev’s defacement pages. Image: archive.org.

A 2023 report from SentinelOne described Cyb3r Drag0nz Team as a hacktivist group with a history of launching DDoS attacks and cyber defacements as well as engaging in data leak activity.

“Cyb3r Drag0nz Team claims to have leaked data on over a million of Israeli citizens spread across multiple leaks,” SentinelOne reported. “To date, the group has released multiple .RAR archives of purported personal information on citizens across Israel.”

The cyber intelligence firm Flashpoint finds the Telegram user @05tdev was active in 2023 and early 2024, posting in Arabic on anti-Israel channels like “Ghost of Palestine” [full disclosure: Flashpoint is currently an advertiser on this blog].

‘I’M A GINTY’

Flashpoint shows that Rey’s Telegram account (ID 7047194296) was particularly active in a cybercrime-focused channel called Jacuzzi, where this user shared several personal details, including that their father was an airline pilot. Rey claimed in 2024 to be 15 years old, and to have family connections to Ireland.

Specifically, Rey mentioned in several Telegram chats that he had Irish heritage, even posting a graphic that shows the prevalence of the surname “Ginty.”

Rey, on Telegram claiming to have association to the surname “Ginty.” Image: Flashpoint.

Spycloud indexed hundreds of credentials stolen from cybero5tdev@proton.me, and those details indicate that Rey’s computer is a shared Microsoft Windows device located in Amman, Jordan. The credential data stolen from Rey in early 2024 show there are multiple users of the infected PC, all sharing the same last name of Khader and an address in Amman, Jordan.

The “autofill” data lifted from Rey’s family PC contains an entry for a 46-year-old Zaid Khader that says his mother’s maiden name was Ginty. The infostealer data also shows Zaid Khader frequently accessed internal websites for employees of Royal Jordanian Airlines.

MEET SAIF

The infostealer data makes clear that Rey’s full name is Saif Al-Din Khader. Having no luck contacting Saif directly, KrebsOnSecurity sent an email to his father Zaid. The message invited the father to respond via email, phone or Signal, explaining that his son appeared to be deeply enmeshed in a serious cybercrime conspiracy.

Less than two hours later, I received a Signal message from Saif, who said his dad suspected the email was a scam and had forwarded it to him.

“I saw your email, unfortunately I don’t think my dad would respond to this because they think its some ‘scam email,'” said Saif, who told me he turns 16 years old next month. “So I decided to talk to you directly.”

Saif explained that he’d already heard from European law enforcement officials, and had been trying to extricate himself from SLSH. When asked why then he was involved in releasing SLSH’s new ShinySp1d3r ransomware-as-a-service offering, Saif said he couldn’t just suddenly quit the group.

“Well I cant just dip like that, I’m trying to clean up everything I’m associated with and move on,” he said.

The former Hellcat ransomware site. Image: Kelacyber.com

He also shared that ShinySp1d3r is just a rehash of Hellcat ransomware, except modified with AI tools. “I gave the source code of Hellcat ransomware out basically.”

Saif claims he reached out on his own recently to the Telegram account for Operation Endgame, the codename for an ongoing law enforcement operation targeting cybercrime services, vendors and their customers.

“I’m already cooperating with law enforcement,” Saif said. “In fact, I have been talking to them since at least June. I have told them nearly everything. I haven’t really done anything like breaching into a corp or extortion related since September.”

Saif suggested that a story about him right now could endanger any further cooperation he may be able to provide. He also said he wasn’t sure if the U.S. or European authorities had been in contact with the Jordanian government about his involvement with the hacking group.

“A story would bring so much unwanted heat and would make things very difficult if I’m going to cooperate,” Saif said. “I’m unsure whats going to happen they said they’re in contact with multiple countries regarding my request but its been like an entire week and I got no updates from them.”

Saif shared a screenshot that indicated he’d contacted Europol authorities late last month. But he couldn’t name any law enforcement officials he said were responding to his inquiries, and KrebsOnSecurity was unable to verify his claims.

“I don’t really care I just want to move on from all this stuff even if its going to be prison time or whatever they gonna say,” Saif said.

Planet DebianBits from Debian: New Debian Developers and Maintainers (September and October 2025)

The following contributors got their Debian Developer accounts in the last two months:

  • Evangelos Ribeiro Tzaras (devrts)
  • Andrea Bolognani (abologna)

The following contributors were added as Debian Maintainers in the last two months:

  • Rylie Pavlik
  • Yuchin Tsai
  • Daniel Markstedt
  • Guido Berhörster
  • Renzo Davoli

Congratulations!

Planet DebianDirk Eddelbuettel: tidyCpp 0.0.8 on CRAN: Maintenance

Another maintenance release of the tidyCpp package arrived on CRAN this morning, the first in about two years. The package offers a clean C++ layer (as well as one small C++ helper class) on top of the C API for R, which aims to make use of this robust (if awkward) C API a little easier and more consistent. See the (now updated, see below) vignette for motivating examples.

This release contains mostly internal upkeep of the usual type: refreshing continuous integration, updating links, switching to Authors@R. But as we wrap the C API of R here too, changes made in R-devel this week affected the two reverse-dependency (i.e. “downstream”) packages of mine that use this. So we commented out the definitions for the five now-hidden accessors so that these downstream packages can build again under R-devel.

The NEWS entry follows.

Changes in tidyCpp version 0.0.8 (2025-11-25)

  • Updated continuous integration setup several times

  • Updated README.md documentation with link to R API site

  • Updated example snippets to use of Protect

  • Updated documentation in defines.h header

  • Updated internals.h header reflecting in R API changes

As it happens, hours after the release at CRAN a helpful issue ticket was opened detailing more than a handful of typos in the vignette. This has been corrected, and I am now exporting the vignette via GitHub Pages so the motivating examples vignette contains the corrections.

Thanks to my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Worse Than FailureAnnouncements: We Want Your Holiday Horrors

As we enter the latter portion of the year, folks are traveling to visit family, logging off of work in hopes that everything can look after itself for a month, and somewhere, someone is going to make the choice "yes, I can push to prod on Christmas Eve, and it'll totally work out for me!"

Over the next few weeks, I'm hoping to get some holiday support horrors up on the site, in keeping with the season. Whether it's the absurd challenges of providing family tech support, the last-minute pushes to production, or the five-alarm fires caused by a pointy-haired boss's incompetence, we want your tales of holiday IT woe.

So hit that submit button on the side bar, and tell us who's on Santa's naughty list this year.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsThe Utility Room

Author: Susan A. Anthony The poorly fitting standard builder issue door had a gap under it, all the better to let things escape. Inside, a small white washing machine drained into a hole in the concrete, shielded with a perforated plastic vent, to keep things out, she imagined, not to stop things dropping down. Opposite […]

The post The Utility Room appeared first on 365tomorrows.

Worse Than FailureTales from the Interview: Interview Smack-Talk

In today's Tales from the Interview, our Anonymous submitter relates their experience with an anonymous company:

I had made it through the onsite, but along the way I had picked up some toxic work environment red flags. Since I had been laid off a couple months prior, I figured I wasn't in a position to be picky, so I decided I would still give it my best shot and take the job if I got it, but I'd continue looking for something better.

Then they brought me back onsite a second time for one final interview with 2 senior managers. I went in and they were each holding a printout of my resume. They proceeded to go through everything on it. First they asked why I chose the university I went to, then the same for grad school, which was fine.

WWF SmackDown Logo (1999-2001)

Then they got to my first internship. I believe the conversation went something like this:

Manager: "How did you like it?"

Me: "Oh, I loved it!"

Manager: "Were there any negatives?"

Me: "No, not that I can think of."

Manager: "So it was 100% positive?"

Me: "Yep!"

And then they got to my first full-time job, where the same manager repeated the same line of questioning but pushed even harder for me to say something negative, at one point saying "Well, you left for (2nd company on my resume), so there must have been something negative."

I knew better than to bad-mouth a previous employer in an interview, it's like going into a first date and talking smack about your ex. But what do you do when your date relentlessly asks you to talk smack about all your exes and refuses to let the subject turn to anything else? This not only confirmed my suspicions of a toxic work environment, I also figured *they* probably knew it was toxic and were relentlessly testing every candidate to make sure they wouldn't blow the whistle on them.

That was the most excruciatingly awkward interview I've ever had. I didn't get the job, but at that point I didn't care anymore, because I was very, very sure I didn't want to work there in the long term.

I'm glad Subby dodged that bullet, and I hope they're in a better place now.

It seems like this might be some stupid new trend. I recently bombed an interview where I could tell I wasn't giving the person the answer on their checklist, no matter how many times I tried. It was a question about how I handled it when someone opposed what I was doing at work or gave me negative feedback. It felt like they wanted me to admit to more fur-flying drama and fireworks than had ever actually occurred.

I actively ask for and welcome critique on my writing; it makes my work so much better. And if my work is incorrect and needs to be redone, or someone has objections to a project I'm part of, I seek clarification and, depending on the situation, (A) implement the requested changes, (B) explain why things are as they are and offer alternate suggestions/solutions, or (C) seek compromise. I don't get personal about it.

So, why this trend? Subby believed it was a way to test whether the candidate would someday badmouth the employer. That's certainly plausible, though if that were the goal, you'd think Subby would've passed their ordeal with flying colors. I'm not sure myself, but I have a sneaking suspicion that the nefarious combination of AI and techbro startup culture has something to do with it.

So perhaps I also dodged a bullet: one of the many things I'm grateful for this Thanksgiving.

Feel free to share your ideas, and any and all bullets you have dodged, in the comments.


,

Planet DebianRussell Coker: EDID and my 8K TV

I previously blogged about buying a refurbished Hisense 65u80g 8K TV with the aim of making it a large monitor [1] and about searching for a suitable video card for 8K [2]. After writing the second post I bought an Intel Arc B580, which also topped out at a resolution of 4096*2160.

This post covers my many attempts to get the TV to work correctly, and it doesn’t have good answers. The best answer might be to not buy Hisense devices, but I still lack data.

Attempts to Force 8K

I posted on Lemmy again about this [3] and got a single response, which is OK as it was a good response. They didn’t give me the answer on a silver platter but pointed me in the right direction of EDID [4].

I installed the Debian packages read-edid, wxedid, and edid-decode.

The command “get-edid > out.edid” saves the binary form of the EDID to a file. The command “wxedid out.edid” allows graphical analysis of the EDID data. The command “edid-decode out.edid” dumps a plain text representation of the output, and the command “edid-decode out.edid|grep VIC|cut -d: -f2|sort -n” shows an ordered list of video modes. In my case the highest resolution listed is 4096×2160, which is also the highest that Linux had allowed me to set with two different video cards and a selection of different cables (both HDMI and DisplayPort).

xrandr --newmode 7680x4320 1042.63  7680 7984 7760 7824  4320 4353 4323 4328
xrandr --addmode HDMI-3 7680x4320
xrandr --output HDMI-3 --mode 7680x4320

I ran the above commands and got the below error:

xrandr: Configure crtc 0 failed

At this time I don’t know how much of this is due to the video card and how much is due to the TV. The parameters for xrandr came from an LLM because I couldn’t find any Google results on what 8K parameters to use. In hindsight the numbers look suspect: in a modeline the horizontal values must increase (display ≤ sync-start ≤ sync-end ≤ total), and 7984 followed by 7760 violates that, so the mode was probably invalid regardless of the hardware. As an aside, if you have a working 8K TV or monitor connected to a computer, please publish the EDID data, xrandr output, and everything else you can think of.

I found a Github repository for EDID data [5] but that didn’t have an entry for my TV and didn’t appear to have any other entry for an 8K device I could use.

Resolution for Web Browsing

I installed a browser on the TV; Chrome and Firefox aren’t available for a TV, and the Play Store program tells you that (but without providing a reason) when you search for them. I tried the site CodeShack What is my Screen Resolution [6], which said that my laptop is 2460*1353 while the laptop display is actually 2560*1440. So apparently I have 100 pixels used for the KDE panel at the left of the screen and 87 pixels used by the Chrome tabs and URL bar, which seems about right.

My Note 9 phone reports 384*661 out of its 2960*1440 display, so it seems that Chrome on my phone is running web sites at 4/15 of the native resolution, and about 16% of the height of the screen is used by the system notification bar, the back/home/tasklist buttons (I choose buttons instead of swipe for navigation in system settings), and the URL bar when I have “Screen zoom” in system settings at 1/4. When I changed “Screen zoom” to 0/4 the claimed resolution changed to 411*717 (2/7 of the native resolution). Font size changes didn’t change the claimed resolution.

On the TV, the “Browser Viewport Size” claimed by CodeShack is 1280*720, which is 1/6 of the real horizontal resolution and slightly more than 1/6 of the vertical resolution. It claims that the Pixel Density is 2* and a screen resolution of 970*540, which seems to imply that the browser is only working at 1920*1080 resolution!
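The phone ratios above can be sanity-checked with a few lines of Python (the resolutions are the ones quoted in the paragraph; the arithmetic is mine):

```python
from fractions import Fraction

# Note 9 native resolution in portrait orientation: 1440 wide, 2960 tall.
native_w, native_h = 1440, 2960

# Chrome reports a 384*661 viewport, so the horizontal scale factor is:
scale = Fraction(384, native_w)
print(scale)  # 4/15, matching the "4/15 of the native resolution" claim

# At that scale the full screen height would be 2960 * 4/15 ~= 789 logical
# pixels; only 661 are left for the page, so the system bars take roughly:
full_h = native_h * scale
used_by_ui = 1 - 661 / float(full_h)
print(round(used_by_ui * 100))  # ~16 (percent), matching the estimate
```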

Netflix

When I view Netflix shows using the Netflix app running on the TV, it reports “4K”, which doesn’t happen on Linux PCs (as Netflix restricts 4K content to platforms with DRM), and in the “Device” setting it reports “Device Model” as “Hisense_SmartTV 8K FFM”, so the Netflix app knows all about 4K content and knows the text string “8K”.

YouTube

When I view a YouTube video that’s described as being 8K I don’t get a request to pay for YouTube Premium, which is apparently what happens nowadays when you try to play actual 8K video. I turned on “Stats for Nerds”; one line has “Viewport / Frames 1920×1080*2.00” and another has “Current / Optimal Res 3840×2160@60 / 3840×2160@60”, so it seems that the YouTube app sees the screen as 4K but chooses to only display FullHD even when I have Quality set to “2160p60 HDR”. It declares the network speed to be over 100mbit most of the time, and the lowest it gets is 60mbit, while 50mbit is allegedly what’s required for 8K.

I installed a few Android apps to report hardware capabilities and they reported the screen resolution to be 1920*1080.

Have I Been Ripped Off?

It looks like I might have been ripped off by this. I can’t get any app other than Netflix to display 4K content. My PC will only connect to it at 4K. Android apps (including YouTube) regard it as 1920*1080.

The “AI Upscaling” isn’t really that great, and in most ways the TV seems at best equivalent to a 4K TV, and worse than a 4K TV that runs Android apps with an actual 4K display buffer.

Next Steps

The next things I plan to do are to continue attempts to get the TV to do what it’s claimed to be capable of; either an Android app that can display 8K content or an HDMI input of 8K content will do. Running a VNC client on the TV would be an acceptable way of getting an 8K display from a Linux PC.

I need to get a somewhat portable device that can give 8K signal output. Maybe a mini PC with a powerful GPU or maybe one of those ARM boards that’s designed to drive an 8K sign. Then I can hunt for stores that have 8K TVs on display.

It would be nice if someone made a USB device that does 8K video output – NOT a USB-C DisplayPort Alternate Mode that uses the video hardware on the laptop. Then I could take a laptop to any place that has an 8K display on show and connect my laptop to it.

The one thing I haven’t done yet is testing 8K MP4 files on a USB stick. That’s mainly due to a lack of content and the fact that none of the phone cameras I have access to can do 8K video. I will try displaying 8K PNG and JPEG files from a USB stick.

Most people would give up about now. But I am determined to solve this and buying another large TV isn’t out of the question.

Worse Than FailureCodeSOD: The Map to Your Confession

Today, Reginald approaches us for a confession.

He writes:

I've no idea where I "copied" this code from five years ago. The purpose of this code was to filter out Maps and Collections. Maybe the intention was to avoid a recursive implementation by using an endless loop? I am shocked that I wrote such code.

Well, that doesn't bode well, Reginald. Let's take a look at this Java snippet:

/**
 * 
 * @param input
 * @return
 */
protected Map rearrangeMap(Map input) {
	Map retMap = new HashMap();

	if (input != null && !input.isEmpty()) {

		Iterator it = input.keySet().iterator();
		while (true) {
			String key;
			Object obj;
			do {
				do {
					if (!it.hasNext()) {
					}
					key = (String) it.next();

				} while (input.get(key) instanceof Map);

				obj = input.get(key);

			} while (obj instanceof Boolean && ((Boolean) obj).equals(Boolean.FALSE));

			if (obj != null) {
				retMap.put(key, obj);
				return retMap;
			}
		}
	} else {
		return retMap;
	}
}

The first thing that leaps out is that this is a non-generic Map, which is always a code smell, but I suspect that's the least of our problems.

We start by verifying that the input Map exists and contains data. If the input is null or empty, we return a new, empty map. In our main branch, we create an iterator across the keys before entering a while(true) loop. So far, so bad.

Then we enter a pair of nested do loops, which definitely hints that we've gone off the edge of the map here. In the innermost loop, we do a check: if there isn't a next element in the iterator, we… do absolutely nothing. Whether or not there is an element, we advance to the next one, risking a NoSuchElementException. We do this while the key points to an instance of Map. As always, an instanceof check is a nauseating code stench.

Okay, so the inner loop skips across any keys that point to maps, and throws an exception when it gets to the end of the list.

The surrounding loop skips over every key that is a boolean value that is also false.

If we find anything which isn't a Map and isn't a false Boolean and isn't null, we put it in our retMap and return it.

This function finds the first key that points to a non-map, non-false value and creates a new map that contains only that key/value pair. It's hard to understand why I'd want that, especially since some Map implementations make no guarantee about order. And even if I did want that, I definitely wouldn't want to do it this way. A single for loop could have solved this problem.
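For comparison, here's a sketch of that single loop, assuming the intent really is "return a one-entry map holding the first key whose value is neither a nested Map, a false Boolean, nor null" (the generics, the static modifier, and the class name are my additions; the original was a raw-typed instance method):

```java
import java.util.HashMap;
import java.util.Map;

public class MapRearranger {

    // Same apparent behavior as the original, in one loop: return a
    // one-entry map holding the first entry whose value is not a nested
    // Map, not Boolean.FALSE, and not null; otherwise an empty map.
    protected static Map<String, Object> rearrangeMap(Map<String, Object> input) {
        Map<String, Object> retMap = new HashMap<>();
        if (input == null) {
            return retMap;
        }
        for (Map.Entry<String, Object> entry : input.entrySet()) {
            Object obj = entry.getValue();
            if (obj == null || obj instanceof Map) {
                continue; // skip nulls and nested maps
            }
            if (Boolean.FALSE.equals(obj)) {
                continue; // skip false booleans
            }
            retMap.put(entry.getKey(), obj);
            break; // first acceptable entry wins
        }
        // Unlike the original, running off the end returns an empty map
        // instead of throwing NoSuchElementException.
        return retMap;
    }
}
```

Of course, since HashMap iteration order is unspecified, "first" is arbitrary, which is exactly why the whole exercise remains questionable.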

Reginald, I don't think there's any absolution for this. Instead, my advice would be to install a carbon monoxide detector in your office, because I have some serious concerns about whether or not your brain is getting enough oxygen.


365 TomorrowsMan’s Best End

Author: Majoki ofcourse ofcourse His eyes wide, the district attorney stared at the machine near the witness stand rather than at the witness. It was a moment before he asked his next question. “May I call you Towser?” myname “Thank you.” The DA responded, his eyes still fixed on the machine. “Mr—excuse me—Towser, how old […]

The post Man’s Best End appeared first on 365tomorrows.

Planet DebianFreexian Collaborators: How we implemented a dark mode in Debusine (by Enrico Zini)

Having learnt that Bootstrap supports color modes, we decided to implement an option for users to enable dark mode in Debusine.

By default, the color mode is selected depending on the user browser preferences. If explicitly selected, we use a cookie to store the theme selection so that a user can choose different color modes in different browsers.

The work is in merge request !2401 and minimizes JavaScript dependencies like we do in other parts of debusine.

A view to select the theme

First is a simple view to configure the selected theme and store it in a cookie. If auto is selected, then the cookie is deleted to delegate theme selection to JavaScript:

class ThemeSelectionView(View):
    """Select and save the current theme."""

    def post(
        self, request: HttpRequest, *args: Any, **kwargs: Any  # noqa: U100
    ) -> HttpResponse:
        """Set the selected theme."""
        value = request.POST.get("theme", "auto")
        next_url = request.POST.get("next", None)
        if next_url is None:
            next_url = reverse("homepage:homepage")
        response = HttpResponseRedirect(next_url)
        if value == "auto":
            response.delete_cookie("theme")
        else:
            response.set_cookie(
                "theme", value, httponly=False, max_age=dt.timedelta(days=3650)
            )
        return response
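For completeness, the footer form below posts to a named route, so the view needs an entry in the URL configuration. A minimal sketch (the "theme-selection" name is taken from the template's {% url %} tag; the module layout and URL path are assumptions):

```python
# urls.py: route the footer form's POST target to the view.
# The import path for ThemeSelectionView is hypothetical.
from django.urls import path

from .views import ThemeSelectionView

urlpatterns = [
    path("theme/", ThemeSelectionView.as_view(), name="theme-selection"),
]
```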

The main base view of Debusine reads the value from the cookie and makes it available to the templates:

      def get_context_data(self, **kwargs: Any) -> dict[str, Any]:
          ctx = super().get_context_data(**kwargs)
          ctx["theme"] = self.request.COOKIES.get("theme", None)
          # ...
          return ctx

The base template will use this value to set data-bs-theme on the main <html> element, and that’s all that is needed to select the color mode in Bootstrap:

<html lang="en"{% if theme %} data-bs-theme="{{ theme }}"{% endif %}>

The view uses HTTP POST as it changes state, so theme selection happens in a form:

<form id="footer-theme" class="col-auto" method="post"
      action="{% url "theme-selection" %}">
    {% csrf_token %}
    <input type="hidden" name="next" value="{{ request.get_full_path }}">
    Theme:
    <button type="submit" name="theme" value="dark">dark</button>
    <button type="submit" name="theme" value="light">light</button>
    <button type="submit" name="theme" value="auto">auto</button>
</form>

Since we added the theme selection buttons in the footer, we use CSS to render the buttons in the same way as the rest of the footer links.

Bootstrap has a set of CSS variables that can be used to easily stay in sync with the site theme, and they are especially useful now that the theme is configurable:

footer button {
    background: none;
    border: none;
    margin: 0;
    padding: 0;
    color: var(--bs-link-color);
}

Theme autoselection

Bootstrap would support theme autoselection via browser preferences, but that requires rebuilding its Sass sources.

Alternatively, one can use JavaScript:

{% if not theme %}
    <script blocking="render">
    (function() {
        let theme = window.matchMedia('(prefers-color-scheme: dark)').matches ? 'dark' : 'light';
        let [html] = document.getElementsByTagName("html");
        html.setAttribute("data-bs-theme", theme);
    })();
    </script>
{% endif %}

This reads the color scheme preferences and sets the data-bs-theme attribute on <html>.

The script is provided inline as it needs to use blocking="render" to avoid flashing a light background at the beginning of page load until the attribute is set.

Given that this is a render-blocking snippet, as an extra optimization it is not added to the page if a theme has been set.

Bootstrap CSS fixes

We were making use of the bootstrap btn-light class in navbars to highlight elements on hover, and that doesn’t work well with theme selection.

Lacking a button class that does the right thing across themes, we came up with a new CSS class that uses variables to define a button with a hover highlight that preserves the underlying color:

:root[data-bs-theme=light] {
    --debusine-hover-layer: rgb(0 0 0 / 20%);
    --debusine-hover-color-multiplier: 0.8;
    --debusine-disabled-color-multiplier: 1.5;
}
:root[data-bs-theme=dark] {
    --debusine-hover-layer: rgb(255 255 255 / 20%);
    --debusine-hover-color-multiplier: 1.2;
    --debusine-disabled-color-multiplier: 0.5;
}

/* Button that preserves the underlying color scheme */
.btn-debusine {
  --bs-btn-hover-color: rgb(from var(--bs-btn-color) calc(r * var(--debusine-hover-color-multiplier)) calc(g * var(--debusine-hover-color-multiplier)) calc(b * var(--debusine-hover-color-multiplier)));
  --bs-btn-hover-bg: var(--debusine-hover-layer);
  --bs-btn-disabled-color: rgb(from var(--bs-btn-color) calc(r * var(--debusine-disabled-color-multiplier)) calc(g * var(--debusine-disabled-color-multiplier)) calc(b * var(--debusine-disabled-color-multiplier)));
  --bs-btn-disabled-bg: var(--bs-btn-bg);
  --bs-btn-disabled-border-color: var(--bs-btn-border-color);
}

Dark mode!

This was a nice integration exercise with many little tricks: how to read color scheme preferences from the browser, render form buttons as links, use Bootstrap variables, prevent a flashing background, and handle cookies in Django.

And Debusine now has a dark mode!

,

Krebs on SecurityIs Your Android TV Streaming Box Part of a Botnet?

On the surface, the Superbox media streaming devices for sale at retailers like BestBuy and Walmart may seem like a steal: They offer unlimited access to more than 2,200 pay-per-view and streaming services like Netflix, ESPN and Hulu, all for a one-time fee of around $400. But security experts warn these TV boxes require intrusive software that forces the user’s network to relay Internet traffic for others, traffic that is often tied to cybercrime activity such as advertising fraud and account takeovers.

Superbox media streaming boxes for sale on Walmart.com.

Superbox bills itself as an affordable way for households to stream all of the television and movie content they could possibly want, without the hassle of monthly subscription fees — for a one-time payment of nearly $400.

“Tired of confusing cable bills and hidden fees?,” Superbox’s website asks in a recent blog post titled, “Cheap Cable TV for Low Income: Watch TV, No Monthly Bills.”

“Real cheap cable TV for low income solutions does exist,” the blog continues. “This guide breaks down the best alternatives to stop overpaying, from free over-the-air options to one-time purchase devices that eliminate monthly bills.”

Superbox claims that watching a stream of movies, TV shows, and sporting events won’t violate U.S. copyright law.

“SuperBox is just like any other Android TV box on the market, we can not control what software customers will use,” the company’s website maintains. “And you won’t encounter a law issue unless uploading, downloading, or broadcasting content to a large group.”

A blog post from the Superbox website.

There is nothing illegal about the sale or use of the Superbox itself, which can be used strictly as a way to stream content at providers where users already have a paid subscription. But that is not why people are shelling out $400 for these machines. The only way to watch those 2,200+ channels for free with a Superbox is to install several apps made for the device that enable them to stream this content.

Superbox’s homepage includes a prominent message stating the company does “not sell access to or preinstall any apps that bypass paywalls or provide access to unauthorized content.” The company explains that they merely provide the hardware, while customers choose which apps to install.

“We only sell the hardware device,” the notice states. “Customers must use official apps and licensed services; unauthorized use may violate copyright law.”

Superbox is technically correct here, except for maybe the part about how customers must use official apps and licensed services: Before the Superbox can stream those thousands of channels, users must configure the device to update itself, and the first step involves ripping out Google’s official Play store and replacing it with something called the “App Store” or “Blue TV Store.”

Superbox does this because the device does not use the official Google-certified Android TV system, and its apps will not load otherwise. Only after the Google Play store has been supplanted by this unofficial App Store do the various movie and video streaming apps that are built specifically for the Superbox appear available for download (again, outside of Google’s app ecosystem).

Experts say while these Android streaming boxes generally do what they advertise — enabling buyers to stream video content that would normally require a paid subscription — the apps that enable the streaming also ensnare the user’s Internet connection in a distributed residential proxy network that uses the devices to relay traffic from others.

Ashley is a senior solutions engineer at Censys, a cyber intelligence company that indexes Internet-connected devices, services and hosts. Ashley requested that only her first name be used in this story.

In a recent video interview, Ashley showed off several Superbox models that Censys was studying in the malware lab — including one purchased off the shelf at BestBuy.

“I’m sure a lot of people are thinking, ‘Hey, how bad could it be if it’s for sale at the big box stores?'” she said. “But the more I looked, things got weirder and weirder.”

Ashley said she found the Superbox devices immediately contacted a server at the Chinese instant messaging service Tencent QQ, as well as a residential proxy service called Grass IO.

GET GRASSED

Also known as getgrass[.]io, Grass says it is “a decentralized network that allows users to earn rewards by sharing their unused Internet bandwidth with AI labs and other companies.”

“Buyers seek unused internet bandwidth to access a more diverse range of IP addresses, which enables them to see certain websites from a retail perspective,” the Grass website explains. “By utilizing your unused internet bandwidth, they can conduct market research, or perform tasks like web scraping to train AI.” 

Reached via Twitter/X, Grass founder Andrej Radonjic told KrebsOnSecurity he’d never heard of a Superbox, and that Grass has no affiliation with the device maker.

“It looks like these boxes are distributing an unethical proxy network which people are using to try to take advantage of Grass,” Radonjic said. “The point of grass is to be an opt-in network. You download the grass app to monetize your unused bandwidth. There are tons of sketchy SDKs out there that hijack people’s bandwidth to help webscraping companies.”

Radonjic said Grass has implemented “a robust system to identify network abusers,” and that if it discovers anyone trying to misuse or circumvent its terms of service, the company takes steps to stop it and prevent those users from earning points or rewards.

Superbox’s parent company, Super Media Technology Company Ltd., lists its street address as a UPS store in Fountain Valley, Calif. The company did not respond to multiple inquiries.

According to this teardown by behindmlm.com, a blog that covers multi-level marketing (MLM) schemes, Grass’s compensation plan is built around “grass points,” which are earned through the use of the Grass app and through app usage by recruited affiliates. Affiliates can earn 5,000 grass points for clocking 100 hours usage of Grass’s app, but they must progress through ten affiliate tiers or ranks before they can redeem their grass points (presumably for some type of cryptocurrency). The 10th or “Titan” tier requires affiliates to accumulate a whopping 50 million grass points, or recruit at least 221 more affiliates.

Radonjic said Grass’s system has changed in recent months, and confirmed the company has a referral program where users can earn Grass Uptime Points by contributing their own bandwidth and/or by inviting other users to participate.

“Users are not required to participate in the referral program to earn Grass Uptime Points or to receive Grass Tokens,” Radonjic said. “Grass is in the process of phasing out the referral program and has introduced an updated Grass Points model.”

A review of the Terms and Conditions page for getgrass[.]io at the Wayback Machine shows Grass’s parent company has changed names at least five times in the course of its two-year existence. Searching the Wayback Machine on getgrass[.]io shows that in June 2023 Grass was owned by a company called Wynd Network. By March 2024, the owner was listed as Lower Tribeca Corp. in the Bahamas. By August 2024, Grass was controlled by Half Space Labs Limited, and in November 2024 the company was owned by Grass OpCo (BVI) Ltd. Currently, the Grass website says its parent is just Grass OpCo Ltd (no BVI in the name).

Radonjic acknowledged that Grass has undergone “a handful of corporate clean-ups over the last couple of years,” but described them as administrative changes that had no operational impact. “These reflect normal early-stage restructuring as the project moved from initial development…into the current structure under the Grass Foundation,” he said.

UNBOXING

Censys’s Ashley said the phone home to China’s Tencent QQ instant messaging service was the first red flag with the Superbox devices she examined. She also discovered the streaming boxes included powerful network analysis and remote access tools, such as Tcpdump and Netcat.

“This thing DNS hijacked my router, did ARP poisoning to the point where things fall off the network so they can assume that IP, and attempted to bypass controls,” she said. “I have root on all of them now, and they actually have a folder called ‘secondstage.’ These devices also have Netcat and Tcpdump on them, and yet they are supposed to be streaming devices.”

A quick online search shows various Superbox models and many similar Android streaming devices for sale at a wide range of top retail destinations, including Amazon, BestBuy, Newegg, and Walmart. Newegg.com, for example, currently lists more than three dozen Superbox models. In all cases, the products are sold by third-party merchants on these platforms, but in many instances the fulfillment comes from the e-commerce platform itself.

“Newegg is pretty bad now with these devices,” Ashley said. “Ebay is the funniest, because they have Superbox in Spanish — the SuperCaja — which is very popular.”

Superbox devices for sale via Newegg.com.

Ashley said Amazon recently cracked down on Android streaming devices branded as Superbox, but that those listings can still be found under the more generic title “modem and router combo” (which may be slightly closer to the truth about the device’s behavior).

Superbox doesn’t advertise its products in the conventional sense. Rather, it seems to rely on lesser-known influencers on places like Youtube and TikTok to promote the devices. Meanwhile, Ashley said, Superbox pays those influencers 50 percent of the value of each device they sell.

“It’s weird to me because influencer marketing usually caps compensation at 15 percent, and it means they don’t care about the money,” she said. “This is about building their network.”

A TikTok influencer casually mentions and promotes Superbox while chatting with her followers over a glass of wine.

BADBOX

As plentiful as the Superbox is on e-commerce sites, it is just one brand in an ocean of no-name Android-based TV boxes available to consumers. While these devices generally do provide buyers with “free” streaming content, they also tend to include factory-installed malware or require the installation of third-party apps that engage the user’s Internet address in advertising fraud.

In July 2025, Google filed a “John Doe” lawsuit (PDF) against 25 unidentified defendants dubbed the “BadBox 2.0 Enterprise,” which Google described as a botnet of over ten million Android streaming devices that engaged in advertising fraud. Google said the BADBOX 2.0 botnet, in addition to compromising multiple types of devices prior to purchase, can also infect devices by requiring the download of malicious apps from unofficial marketplaces.

Some of the unofficial Android devices flagged by Google as part of the Badbox 2.0 botnet are still widely for sale at major e-commerce vendors. Image: Google.

Several of the Android streaming devices flagged in Google’s lawsuit are still for sale on top U.S. retail sites. For example, searching for the “X88Pro 10” and the “T95” Android streaming boxes finds both continue to be peddled by Amazon sellers.

Google’s lawsuit came on the heels of a June 2025 advisory from the Federal Bureau of Investigation (FBI), which warned that cyber criminals were gaining unauthorized access to home networks by either configuring the products with malicious software prior to the user’s purchase, or infecting the device as it downloads required applications that contain backdoors, usually during the set-up process.

“Once these compromised IoT devices are connected to home networks, the infected devices are susceptible to becoming part of the BADBOX 2.0 botnet and residential proxy services known to be used for malicious activity,” the FBI said.

The FBI said BADBOX 2.0 was discovered after the original BADBOX campaign was disrupted in 2024. The original BADBOX was identified in 2023, and primarily consisted of Android operating system devices that were compromised with backdoor malware prior to purchase.

Riley Kilmer is founder of Spur, a company that tracks residential proxy networks. Kilmer said Badbox 2.0 was used as a distribution platform for IPidea, a China-based entity that is now the world’s largest residential proxy network.

Kilmer and others say IPidea is merely a rebrand of 911S5 Proxy, a China-based proxy provider sanctioned last year by the U.S. Department of the Treasury for operating a botnet that helped criminals steal billions of dollars from financial institutions, credit card issuers, and federal lending programs (the U.S. Department of Justice also arrested the alleged owner of 911S5).

How are most IPidea customers using the proxy service? According to the proxy detection service Synthient, six of the top ten destinations for IPidea proxies involved traffic that has been linked to either ad fraud or credential stuffing (account takeover attempts).

Kilmer said companies like Grass are probably being truthful when they say that some of their customers are companies performing web scraping to train artificial intelligence efforts, because a great deal of content scraping which ultimately benefits AI companies is now leveraging these proxy networks to further obfuscate their aggressive data-slurping activity. By routing this unwelcome traffic through residential IP addresses, Kilmer said, content scraping firms can make it far trickier to filter out.

“Web crawling and scraping has always been a thing, but AI made it like a commodity, data that had to be collected,” Kilmer told KrebsOnSecurity. “Everybody wanted to monetize their own data pots, and how they monetize that is different across the board.”

SOME FRIENDLY ADVICE

Products like Superbox are drawing increased interest from consumers as more popular network television shows and sportscasts migrate to subscription streaming services, and as people begin to realize they’re spending as much or more on streaming services than they previously paid for cable or satellite TV.

These streaming devices from no-name technology vendors are another example of the maxim, “If something is free, you are the product,” meaning the company is making money by selling access to and/or information about its users and their data.

Superbox owners might counter, “Free? I paid $400 for that device!” But remember: Just because you paid a lot for something doesn’t mean you are done paying for it, or that somehow you are the only one who might be worse off from the transaction.

It may be that many Superbox customers don’t care if someone uses their Internet connection to tunnel traffic for ad fraud and account takeovers; for them, it beats paying for multiple streaming services each month. My guess, however, is that quite a few people who buy (or are gifted) these products have little understanding of the bargain they’re making when they plug them into an Internet router.

Superbox performs some serious linguistic gymnastics to claim its products don’t violate copyright laws, and that its customers alone are responsible for understanding and observing any local laws on the matter. However, buyer beware: If you’re a resident of the United States, you should know that using these devices for unauthorized streaming violates the Digital Millennium Copyright Act (DMCA), and can incur legal action, fines, and potential warnings and/or suspension of service by your Internet service provider.

According to the FBI, there are several signs to look for that may indicate a streaming device you own is malicious, including:

-The presence of suspicious marketplaces where apps are downloaded.
-Requiring Google Play Protect settings to be disabled.
-Generic TV streaming devices advertised as unlocked or capable of accessing free content.
-IoT devices advertised from unrecognizable brands.
-Android devices that are not Play Protect certified.
-Unexplained or suspicious Internet traffic.

This explainer from the Electronic Frontier Foundation delves a bit deeper into each of the potential symptoms listed above.

Planet DebianDirk Eddelbuettel: RcppQuantuccia 0.1.3 on CRAN: Micro Maintenance

A minor release of RcppQuantuccia arrived on CRAN moments ago. RcppQuantuccia started from the Quantuccia header-only subset / variant of QuantLib, which it brings to R. This project validated the idea of making the calendaring functionality of QuantLib available in a more compact and standalone project – which we now do with qlcal, which can be seen as a successor package to this earlier one. qlcal tracks QuantLib (releases) closely and provides approximately quarterly updates. Switching to qlcal is generally recommended.

This release, the first in almost exactly two years, only updates internals (as detailed below). Notably it switches to ‘Authors@R’ to avoid a nag from CRAN on two platforms. The complete list of changes for this release follows.

Changes in version 0.1.3 (2025-11-24)

  • A badge URL and link have been updated in README.md

  • The continuous integration script switched first to r-ci-setup and then to the r-ci action with embedded setup

  • The DESCRIPTION file now uses Authors@R

Courtesy of CRANberries, there is also a diffstat report relative to the previous release. More information is on the RcppQuantuccia page. Issues and bug reports should go to the GitHub issue tracker.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Cryptogram IACR Nullifies Election Because of Lost Decryption Key

The International Association of Cryptologic Research—the academic cryptography association that’s been putting on conferences like Crypto (back when “crypto” meant “cryptography”) and Eurocrypt since the 1980s—had to nullify an online election when trustee Moti Yung lost his decryption key.

For this election and in accordance with the bylaws of the IACR, the three members of the IACR 2025 Election Committee acted as independent trustees, each holding a portion of the cryptographic key material required to jointly decrypt the results. This aspect of Helios’ design ensures that no two trustees could collude to determine the outcome of an election or the contents of individual votes on their own: all trustees must provide their decryption shares.

Unfortunately, one of the three trustees has irretrievably lost their private key, an honest but unfortunate human mistake, and therefore cannot compute their decryption share. As a result, Helios is unable to complete the decryption process, and it is technically impossible for us to obtain or verify the final outcome of this election.

The group will redo the election, but this time using a 2-of-3 threshold scheme for decrypting the results, instead of requiring all three trustees.
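For the curious, a 2-of-3 threshold is classically built with Shamir secret sharing: the key becomes the constant term of a random degree-1 polynomial over a prime field, each trustee gets one point on that polynomial, and any two points recover the constant term while a single point reveals nothing. The sketch below is purely illustrative and is not Helios's actual scheme; the field prime, helper names, and (non-cryptographic) randomness are my own assumptions.

```javascript
// Toy 2-of-3 Shamir secret sharing over a prime field.
// Illustrative only: Math.random() is NOT a cryptographic RNG.
const P = 2305843009213693951n; // the Mersenne prime 2^61 - 1

const mod = (a) => ((a % P) + P) % P;

// Square-and-multiply modular exponentiation on BigInts.
function modPow(base, exp) {
  let result = 1n;
  base = mod(base);
  while (exp > 0n) {
    if (exp & 1n) result = mod(result * base);
    base = mod(base * base);
    exp >>= 1n;
  }
  return result;
}

// Modular inverse via Fermat's little theorem (P is prime).
const inv = (a) => modPow(a, P - 2n);

// Split: pick f(x) = secret + a1*x; shares are (i, f(i)) for i = 1..3.
function split(secret) {
  const a1 = BigInt(Math.floor(Math.random() * 1e15)); // toy randomness
  return [1n, 2n, 3n].map((x) => [x, mod(secret + a1 * x)]);
}

// Recover: Lagrange interpolation at x = 0 from any two shares.
function recover([[x1, y1], [x2, y2]]) {
  const l1 = mod(x2 * inv(x2 - x1)); // mod() handles the negative case
  const l2 = mod(x1 * inv(x1 - x2));
  return mod(y1 * l1 + y2 * l2);
}

const shares = split(42n);
console.log(recover([shares[0], shares[2]])); // any two shares suffice
```

Losing one share here is harmless, which is exactly the property the all-three-trustees setup lacked.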

News articles.

Worse Than FailureCodeSOD: Copied Homework

Part of the "fun" of JavaScript is dealing with code which comes from before sensible features existed. For example, if you wanted to clone an object in JavaScript, circa 2013, that was a wheel you needed to invent for yourself, as this StackOverflow thread highlights.

There are now better options, and you'd think that people would use them. However, the only thing more "fun" than dealing with code that hasn't caught up with the times is dealing with developers who haven't, and still insist on writing their own versions of standard methods.

  const objectReplace = (oldObject, newObject) => {
    let keys = Object.keys(newObject)
    try {
      for (let key of keys) {
        oldObject[key] = newObject[key]
      }
    } catch (err) {
      console.log(err, oldObject)
    }     

    return oldObject
  }

It's worth noting that Object.entries returns an array containing both the keys and values, which would be more sensible for this operation, but then again, if we're talking about using correct functions, Object.assign would replace this function entirely.

There's no need to handle errors here, as nothing about this assignment should throw an exception.

The thing that really irks me about this though is that it pretends to be functional (in the programming idiom sense) by returning the newly modified value, but it's also just changing that value in place because it's a reference. So it has side effects, in a technical sense (changing the value of its input parameters) while pretending not to. Now, I probably shouldn't get too hung up on that, because that's also exactly how Object.assign behaves, but dammit, I'm going to be bothered by it anyway. If you're going to reinvent the wheel, either make one that's substantially worse, or fix the problems with the existing wheel.
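For comparison, here is what the built-in does; a minimal sketch with my own variable names, not code from the submitter's 15,000-line file:

```javascript
// Object.assign copies enumerable own properties onto the target,
// mutating it in place and returning that same reference.
const oldObject = { a: 1, b: 2 };
const newObject = { b: 3, c: 4 };

const result = Object.assign(oldObject, newObject);

console.log(result === oldObject); // true: mutated in place, same side effect
console.log(oldObject);            // { a: 1, b: 3, c: 4 }

// Spread syntax builds a fresh object instead, avoiding the mutation:
const merged = { ...oldObject, ...newObject };
```

And for the 2013-era deep-clone problem specifically, modern runtimes ship structuredClone, so none of these wheels need reinventing.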

In any case, the real WTF here is that this function is buried deep in a 15,000 line file, written by an offshore contract team, and there are at least 5 other versions of this function, all with slightly different names, but all basically doing the same thing, because everyone on the team is just copy/pasting until they get enough code to submit a pull request.

Our submitter wonders, "Is there a way to train an AI to not let people type this?"

No, there isn't. You can try rolling that boulder up a hill, but it'll always roll right back down. Always and forever, people are going to write bad code.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Cryptogram Huawei and Chinese Surveillance

This quote is from House of Huawei: The Secret History of China’s Most Powerful Company.

“Long before anyone had heard of Ren Zhengfei or Huawei, Wan Runnan had been China’s star entrepreneur in the 1980s, with his company, the Stone Group, touted as “China’s IBM.” Wan had believed that economic change could lead to political change. He had thrown his support behind the pro-democracy protesters in 1989. As a result, he had to flee to France, with an arrest warrant hanging over his head. He was never able to return home. Now, decades later and in failing health in Paris, Wan recalled something that had happened one day in the late 1980s, when he was still living in Beijing.

Local officials had invited him to dinner.

This was unusual. He was usually the one to invite officials to dine, so as to curry favor with the show of hospitality. Over the meal, the officials told Wan that the Ministry of State Security was going to send agents to work undercover at his company in positions dealing with international relations. The officials cast the move to embed these minders as an act of protection for Wan and the company’s other executives, a security measure that would keep them from stumbling into unseen risks in their dealings with foreigners. “You have a lot of international business, which raises security issues for you. There are situations that you don’t understand,” Wan recalled the officials telling him. “They said, ‘We are sending some people over. You can just treat them like regular employees.'”

Wan said he knew that around this time, state intelligence also contacted other tech companies in Beijing with the same request. He couldn’t say what the situation was for Huawei, which was still a little startup far to the south in Shenzhen, not yet on anyone’s radar. But Wan said he didn’t believe that Huawei would have been able to escape similar demands. “That is a certainty,” he said.

“Telecommunications is an industry that has to do with keeping control of a nation’s lifeline…and actually in any system of communications, there’s a back-end platform that could be used for eavesdropping.”

It was a rare moment of an executive lifting the cone of silence surrounding the MSS’s relationship with China’s high-tech industry. It was rare, in fact, in any country. Around the world, such spying operations rank among governments’ closest-held secrets. When Edward Snowden had exposed the NSA’s operations abroad, he’d ended up in exile in Russia. Wan, too, might have risked arrest had he still been living in China.

Here are two book reviews.

365 TomorrowsLeaders

Author: Julian Miles, Staff Writer Rolla takes a swig from his mug and smiles. “Gather round, my children, and listen well. Heed not the screams of the monshaga as they roam. Within these walls, we are safe. Behind the great door, we will thrive.” Gesty spits into the fire. “I ain’t your kid, and I […]

The post Leaders appeared first on 365tomorrows.

,

Cory DoctorowShow Me the Incentive, I’ll Show You the Outcome

A Gilded Age editorial cartoon depicting a muscular worker and a corpulent millionaire squaring off for a fight; the millionaire's head has been replaced with the poop emoji from the cover of 'Enshittification,' its mouth covered in a grawlix-scrawled black bar.

This week on my podcast, I read my latest Locus Magazine column, “Show Me the Incentive, I’ll Show You the Outcome,” about the process by which we ended up with an enshittogenic policy environment:


The whole point of the conservative project is to take away choices, and corral us into “preferences” that we disprefer. Eliminate no-fault divorce, suppress the vote, gerrymander the electoral map, cram a binding arbi­tration clause into every terms of service and a noncompete into every labor contract, buy up all your competitors, DRM-lock all the media, ban contraception and abortion, and you’ve got a world of partners you can’t divorce, politicians you can’t vote out, companies you can’t sue, jobs you can’t quit, services you can’t leave, books and music you can’t move, and pregnancies you can’t prevent or terminate.


And after you are relentlessly corralled into all these things you hate, you will be told that you don’t hate them after all – because you revealed your preferences for them.


Consumerism is a terrible way to make change at the best of times, and it gets less effective by the day, as authoritarianism and market consolidation shrink the world of possibilities to an endless Pepsi Chal­lenge, where “choice” is narrowed to which flavor of sweetened battery acid you hate the least.


I don’t think that end users are to blame for enshittification.

MP3

David BrinPart 2 of Agressive Agility: Learn from your opponents' cleverest tactic! And turn it on them.

Aggressive agility. It's called judo, and Democrats stink at it.

Last time (in Part One), I described a clever tactic that Republicans used, in the 90s, to flip the entire Democratic coalition that was built by the Greatest (GI Bill) Generation onto its ass, commencing thirty years of all-out war against America by a newly-amplified and mutated GOP. 

That tactic - Newt Gingrich's Republican Contract With America - struck wavering U.S. citizens (just two years after overwhelmingly electing Democrats) as reformist and reasonable...

...in part because many of its provisions were reasonable-seeming! Whereupon, in classic bait-n-switch, the GOP flushed them all away. Getting power was all that ever mattered to them. Or that has mattered to them, since.

So, this time, let's dive into a historical document that I doubt any of you have ever read. Let alone studied for potential tactical lessons. And we have been suffering from that lazy neglect for more than a generation.

Later, in Part Three, we'll open the floor for potential ways that Democrats and decent independents may stop trying to use sumo against judo users! Ways we can adapt. And win.


 

PONDERING A “NEWER DEAL FOR AMERICA”

 

Part Two

Examining the 1994 Gingrich “Contract”


 

Polls show that average American citizens tend to prefer mainstream Democratic policies, such as government transparency, accountability, science & energy research, improved efficiency, moderate environmentalism, assistance to poor children, tolerance of individual diversity, a far better track record of economic outcomes, and responsible attention to our alliances and international affairs. The rich paying their share. And did I mention science? All of them far more helpful to the nation and humanity than the symbolism obsessions of either the entire gone-mad right ... or a net-unhelpful farthest-left.

 

Nevertheless, the Republican Party has developed politically innovative tactics to overcome these policy disadvantages, in order to win one victory after another. These tactics range from open policy initiatives that sincere people might legitimately argue about... to maligning Rooseveltean liberals as “Marxists”... all the way to corruption of both mass media and voting processes and outright theft of trillions, to create wealth disparities worse even than pre-Revolutionary France.  

 Although many of their techniques are despicable – such as fomenting racism and culture war and attacking every fact profession – it is simply dumb not to study the tenacity and determination that these innovations represent! Indeed, a few of the less-dishonorable Republican tactics may merit the highest tribute -- imitation.

 

Take the Republican Contract with America, which we discussed in Part One. Newt Gingrich and the first wave of neocons used it with startling effectiveness during their 1994 drive to seize control of Congress. By offering a primly laid-out “deal” to voters, they gave the impression that clearcut and measurable changes would be delivered, if only the GOP were given control. The implicit message: Democratic Party leadership had been degenerate and wicked.

 

Also implicit? A willing acceptance of punishment, if the Contract's promises weren’t kept! 

 

Given the “Contract’s” political effectiveness, ought we give that prodigiously successful tactic attention and study?

 

 

II.   Let's examine the Republican Contract with America

 

Here it is. You may not have read the text, back then. But it merits study by anyone interested in the art of politics. Then compare it to my own proposed draft for a 2025 "Democratic Newest Deal" to follow.

 

The contract listed eight reforms that Republicans promised to enact within Congress itself, followed by ten bills they promised to bring to floor debate and votes to become laws. All were "60% issues" that garnered support from 60%+ of polled Americans, and thus they avoided divisive matters like abortion and school prayer.


Note: actual verbiage from the 1994 Contract is in serif typeface.

 

 

 

THE REPUBLICAN CONTRACT WITH AMERICA  

 

As Republican Members of the House of Representatives and as citizens seeking to join that body, we propose not just to change its policies, but even more important, to restore the bonds of trust between the people and their elected representatives. That is why, in this era of official evasion and posturing, we offer instead a detailed agenda for national renewal, a written commitment with no fine print.

 

(For the full preamble, see: https://en.wikipedia.org/wiki/Contract_with_America)

 

 On the first day of the 104th Congress, the new Republican majority will immediately pass the following major reforms (of Congress itself), aimed at restoring the faith and trust of the American people in their government.

PART ONE – They proposed to reform Congressional rules and procedures, in ways that sound virtuous:

 

* FIRST, require all laws that apply to the rest of the country also apply equally to the Congress; 

 

* SECOND, select a major, independent auditing firm to conduct a comprehensive audit of Congress for waste, fraud or abuse; 

 

* THIRD, cut the number of House committees, and cut committee staff by one-third;

 

* FOURTH, limit the terms of all committee chairs;

 

* FIFTH, ban the casting of proxy votes in committee; 

 

* SIXTH, require committee meetings to be open to the public;

 

* SEVENTH, require a three-fifths majority vote to pass a tax increase;

 

* EIGHTH, guarantee an honest accounting of our Federal Budget by implementing zero base-line budgeting.

 

 

PART TWO – Here they listed bills and new laws to enact Republican priorities, also in ways that sounded virtuous.

 

 Thereafter, within the first 100 days of the 104th Congress, we shall bring to the House Floor the following bills, each to be given full and open debate, each to be given a clear and fair vote and each to be immediately available this day for public inspection and scrutiny. 

 

 1. THE FISCAL RESPONSIBILITY ACT: A balanced budget/tax limitation amendment and a legislative line-item veto to restore fiscal responsibility to an out-of-control Congress, requiring them to live under the same budget constraints as families and businesses.

 

 2. THE TAKING BACK OUR STREETS ACT: An anti-crime package including stronger truth-in-sentencing, "good faith" exclusionary rule exemptions, effective death penalty provisions, and cuts in social spending from this summer's "crime" bill to fund prison construction and additional law enforcement to keep people secure in their neighborhoods and kids safe in their schools.

 

 3. THE PERSONAL RESPONSIBILITY ACT: Discourage illegitimacy and teen pregnancy by prohibiting welfare to minor mothers and denying increased AFDC for additional children while on welfare, cut spending for welfare programs, and enact a tough two-years-and-out provision with work requirements to promote individual responsibility.

 

 4. THE FAMILY REINFORCEMENT ACT: Child support enforcement, tax incentives for adoption, strengthening rights of parents in their children's education, stronger child pornography laws, and an elderly dependent care tax credit to reinforce the central role of families in American society. 

 

 5. THE AMERICAN DREAM RESTORATION ACT: A $500 per child tax credit, begin repeal of the marriage tax penalty, and creation of American Dream Savings Accounts to provide middle class tax relief.

 

 6. THE NATIONAL SECURITY RESTORATION ACT: No U.S. troops under U.N. command and restoration of the essential parts of our national security funding to strengthen our national defense and maintain our credibility around the world. 

 

 7. THE SENIOR CITIZENS FAIRNESS ACT: Raise the Social Security earnings limit which currently forces seniors out of the work force, repeal the 1993 tax hikes on Social Security benefits and provide tax incentives for private long-term care insurance to let Older Americans keep more of what they have earned over the years. 

 

 8. THE JOB CREATION AND WAGE ENHANCEMENT ACT: Small business incentives, capital gains cut and indexation, neutral cost recovery, risk assessment/cost-benefit analysis, strengthening the Regulatory Flexibility Act and unfunded mandate reform to create jobs and raise worker wages.

 

 9. THE COMMON SENSE LEGAL REFORM ACT: "Loser pays" laws, reasonable limits on punitive damages and reform of product liability laws to stem the endless tide of litigation. 

 

 10. THE CITIZEN LEGISLATURE ACT: A first-ever vote on term limits to replace career politicians with citizen legislators.

 

 Further, we will instruct the House Budget Committee to report to the floor and we will work to enact additional budget savings, beyond the budget cuts specifically included in the legislation described above, to ensure that the Federal budget deficit will be less than it would have been without the enactment of these bills. 

 

 

 Okay, there it is, the ingenious document that set in motion a dramatic turnaround in fortunes for the Republican Party. Yes, some - (many?) – parts were despicable dog whistles! But if you are only able to look upon it with loathing, incapable of appreciating its artful skill… or the reasons why it appealed to American swing voters in 1994 … then you have only proved that you are too politically blinkered -- too channeled by reflex hostility -- to see things in a broader perspective.

 

There is so much to examine!

 

Take the first section on Congressional rules and procedures. Here, the majority party in each house has almost total power, constrained only by tradition… and by knowledge that someday they will take their own turn as the minority. Twin constraints that – as of autumn 2025 – no longer seem to concern the House GOP in the slightest. But Speaker Mike Johnson’s complicity with despotism is not the topic here. Rather we are talking about the Contract. Its terms and execution.

 

Of the eight proposed reforms, items labeled THIRD and SEVENTH are rightwing ideological. Number THREE was a continuation of Newt’s banishment of the Office of Technology Assessment and all in-house experts who might say the words that GOP politicians hate most to hear: “That’s not actually true.” A campaign against all facts and fact users that continues to accelerate, today.

 

Number SEVEN accurately reflected Conservative wishes, hoping eventually to shift Social Security into buying stocks that are held mostly by rich Republicans. In order to make them much, much richer.

 

The other six were things that almost any decent citizen might deem reasonable…

 

…and not one of the eight was passed or enforced with any degree of alacrity. Every single one of the in-house reforms was betrayed. And Dems never made an issue of that betrayal.

 

The second section of the Contract was a mélange of attempts at legislation. And hence, the Republicans needed not just a House majority but – for Senate passage – either a filibuster-proof 60 votes, or else a compromise achieved by negotiation with Democrats. Note that Hastert’s “Never Negotiate!” rule was still a year or two in the future. But there was one more obstacle… the veto pen of President Bill Clinton.

 

I won’t go into details here. You can see capsule summaries of the aftermath, the vetoes – with some over-rides – and yes, some negotiated compromises (since the GOP Speaker was still Newt) – in the Contract’s Wikipedia page.  

 

Again, these bills ranged from things that I (and most Democrats or decent/thoughtful people) would deem detestable… all the way to some that aimed in general directions that one could reasonably negotiate. Indeed, as I said earlier, Newt did negotiate a bit with Clinton, amid volcanic partisan-tirades, and there were several effective reforms... that resulted in Newt's ouster by his own party! 


And that’s not the point! The point is that most of the ten were either aimed at augmenting oligarchy… or else betrayed, just like the in-house rule reforms. The net-overall effects were pretty damned meagre.

 

A pause to acknowledge that both parties have since then rushed through major omnibus ‘reform’ bills. The 2025 Republican “Big Beautiful Bill” was a heaping mountain of oligarchic gifts and outright treasons. The 2021-22 “Pelosi Bills” like the Infrastructure Act were vastly more admirable and I deem it borderline criminal the way our own left treated her – and mainstream Dem-pols – after they accomplished so much. (See this!)  And yes, I sure am partisan. I hate the fact that I am compelled to be! But like the 1860s earlier phase of the same civil war, I choose blue because it is the only side with sanity and decency. 

 

Right now, all I ask of you is to squint and consider the tactical effectiveness of the “Contract.” Its gambit at feigning sober-pragmatic sincerity and making plausible promises, while implying the other side is a corrupt morass.

 

Next time in Part Three let’s ponder what it might look like, if we actually study the adversary’s best tactics, so we can use even better ones! And offer our own list of priorities that we intend to fulfill. For the sake of us all.

 

365 TomorrowsGoldfish

Author: Gordon Pinckheard Stray from the shoal, and you risk your life. Dave would like to have moved in towards the center of his row of marchers, but his arm was locked with his neighbor’s. At least, they were well back from the protest’s front lines. The day before, Anna had called a meeting, demanded […]

The post Goldfish appeared first on 365tomorrows.

,

Planet DebianDirk Eddelbuettel: RcppArmadillo 15.2.2-1 on CRAN: Upstream Update, OpenMP Updates


Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1286 other packages on CRAN, downloaded 42.6 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 659 times according to Google Scholar.

This version updates to the 15.2.2 upstream Armadillo release made two days ago. It brings a few changes over the RcppArmadillo 15.2.0 release made only to GitHub (and described in this post), and of course even more changes relative to the last CRAN release described in this earlier post. As described previously, due to the upstream transition to C++14 coupled with the CRAN move away from C++11, the package offers a transition by allowing packages to remain with the older, pre-15.0.0 ‘legacy’ Armadillo while offering the current version as the default. During the transition we did not make any releases to CRAN, allowing both the upload cadence to settle back to the desired ‘about six in six months’ that the CRAN Policy asks for, and packages to adjust to any potential changes. Most affected packages have done so (as can be seen in the GitHub issues #489 and #491), which is good to see. We appreciate all the work done by the respective package maintainers. A number of packages are still under a (now formally expired) deadline at CRAN and may get removed. Our offer to help where we can still stands, so please get in touch if we can be of assistance. As a reminder, the meta-issue #475 regroups all the resources for the transition.

With respect to changes in the package, we once more overhauled the OpenMP detection and setup, following the approach taken by the data.table package but sticking with an autoconf-based configure. The detailed changes since the last CRAN release follow.

Changes in RcppArmadillo version 15.2.2-1 (2025-11-21)

  • Upgraded to Armadillo release 15.2.2 (Medium Roast Deluxe)

    • Improved reproducibility of random number generation when using OpenMP
  • Skip a unit test file under macOS as complex algebra seems to fail under newer macOS LAPACK setting

  • Further OpenMP detection rework for macOS (Dirk in #497, #499)

  • Define ARMA_CRIPPLED_LAPACK on Windows only if 'LEGACY' Armadillo selected

Changes in RcppArmadillo version 15.2.1-0 (2025-10-28) (GitHub Only)

  • Upgraded to Armadillo release 15.2.1 (Medium Roast Deluxe)

    • Faster handling of submatrices with one row
  • Improve OpenMP detection (Dirk in #495 fixing #493)

Changes in RcppArmadillo version 15.2.0-0 (2025-10-20) (GitHub Only)

  • Upgraded to Armadillo release 15.2.0 (Medium Roast Deluxe)

    • Added rande() for generating matrices with elements from exponential distributions

    • shift() has been deprecated in favour of circshift(), for consistency with Matlab/Octave

    • Reworked detection of aliasing, leading to more efficient compiled code

  • OpenMP detection in configure has been simplified

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc. should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

365 TomorrowsLeaves of Silicon

Author: Richard Simonds Harriet, age fourteen, looked forward to freshman English, although she wasn’t exactly sure why. Maybe there was poetry in her soul, or maybe she was just intellectually interested. If asked about her excitement, she would say, “I don’t know, I hear the teacher is really good.” Her first day of class however, […]

The post Leaves of Silicon appeared first on 365tomorrows.

,

Cryptogram More on Rewiring Democracy

It’s been a month since Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship was published. From what we know, sales are good.

Some of the book’s forty-three chapters are available online: chapters 2, 12, 28, 34, 38, and 41.

We need more reviews—six on Amazon is not enough, and no one has yet posted a viral TikTok review. One review was published in Nature and another on the RSA Conference website, but more would be better. If you’ve read the book, please leave a review somewhere.

My coauthor and I have been doing all sorts of book events, both online and in person. This book event, with Danielle Allen at the Harvard Kennedy School Ash Center, is particularly good. We also have been doing a ton of podcasts, both separately and together. They’re all on the book’s homepage.

There are two live book events in December. If you’re in Boston, come see us at the MIT Museum on 12/1. If you’re in Toronto, you can see me at the Munk School at the University of Toronto on 12/2.

I’m also doing a live AMA on the book on the RSA Conference website on 12/16. Register here.

365 TomorrowsSo Hard to Get Good Help These Days

Author: Hillary Lyon “I heard that.” “What?” Clive looked over his shoulder. “You’re not supposed to be listening to my conversations. Besides, it’s true—it is hard to get good help.” “That’s not what your wife told me.” Andra stood in the doorway to Clive’s home office, wagging her feather duster in his direction. Clive whispered […]

The post So Hard to Get Good Help These Days appeared first on 365tomorrows.

Worse Than FailureError'd: Untimely

Sometimes, it's hard to know just when you are. This morning, I woke up to a Macbook that thinks it's in Paris, four hours ago. Pining for pain chocolate. A bevy of anonyms have had similar difficulties.

First up, an unarabian anonym observes "They say that visiting Oman feels like traveling back in time to before the rapid modernization of the Arab states. I just think their eVisa application system is taking this "time travel" thing a bit too far... "


 

Snecod, an unretired (anteretired?) anonym finds it hard to plan when the calendar is unfixed. "The company's retirement plan was having a rough time prior to Second June." Looks like the first wtf was second March.


 

And an unamerican anonym sent us this (uh, back in first March) "Was looking to change the cable package I have from them. Apparently my discounts are all good until 9th October 1930, and a second one looking good until 9th January 2024."


 

On a different theme, researcher Jennifer E. exclaimed "Those must have been BIG divorces! Guy was so baller Wikipedia couldn’t figure out when he divorced either of these women." Or so awful they divorced him continuously.


 

Finally, parsimonious Greg L. saved this for us. "I don't remember much about #Error!, but I guess it was an interesting day."


 

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Planet DebianDaniel Kahn Gillmor: Transferring Signal on Android

Transferring a Signal account between two Android devices

I spent far too much time recently trying to get a Signal Private Messenger account to transfer from one device to another.

What I eventually found worked was a very finicky path to enable functioning "Wi-Fi Direct", which I go into below.

I also offer some troubleshooting and recovery-from-failure guidance.

Throughout this blogpost, "original device" refers to the Android pocket supercomputer that already has Signal installed and set up, and "new device" means the Android device that doesn't yet have Signal on it.

Why Transfer?

Signal Private Messenger is designed with the expectation that the user has a "primary device", which is either an iPhone or an Android pocket supercomputer.

If you have an existing Signal account, and try to change your primary device by backing up and restoring from backup, it looks to me like Signal will cause your long-term identity keys to be changed. This in turn causes your peers to see a message like "Your safety number with Alice has changed."

These warning messages are the same messages that they would get if an adversary were to take over your account. So it's a good idea to minimize them when there isn't an account takeover — false alarms train people to ignore real alarms.

You can avoid "safety number changed" warnings by using signal's "account transfer" process during setup, at least if you're transferring between two Android devices.

However, my experience was that the transfer between two Android devices was very difficult to get to happen at all. I ran into many errors trying to do this, until I finally found a path that worked.

Dealing with Failure

After each failed attempt at a transfer, my original device's Signal installation would need to be re-registered. Having set a PIN meant that I could re-register the device without needing to receive a text message or phone call.

Set a PIN before you transfer!

Also, after a failure, you need to re-link any "linked device" (i.e. any Signal Desktop or iPad installation). If any message came in during the aborted transfer, the linked device won't get a copy of that message.

Finally, after a failed transfer, I recommend completely uninstalling Signal from the new device and starting over with a fresh install.

Permissions

My understanding is that Signal on Android uses Wi-Fi Direct to accomplish the transfer. But to use Wi-Fi Direct, Signal needs to have the right permissions.

On each device:

  • Entirely stop the Signal app
  • Go to Settings » Apps » Signal » Permissions
  • Ensure that the following permissions are all enabled whenever the app is running:
  • Location
  • Nearby Devices
  • Network

Preparing for Wi-Fi Direct

The transfer process depends on "Wi-Fi Direct", which is a bit of a disaster on its own.

I found that if I couldn't get Wi-Fi Direct to work between the two devices, then the Signal transfer was guaranteed to fail.

So, for clearer debugging, I first tried to establish a Wi-Fi Direct link on Android, without Signal being involved at all.

Setting up a Wi-Fi Direct connection directly failed, multiple times, until I found the following combination of steps, to be done on each device:

  • Turn off Bluetooth
  • Ensure Wi-Fi is enabled
  • Disconnect from any Wi-Fi network you are connected to (go to the "Internet" or "Wi-Fi" settings page, long-press on the currently connected network, and choose "Disconnect"). If your device knows how to connect to multiple local Wi-Fi networks, disconnect from each of them in turn until you are in a stable state where Wi-Fi is enabled, but no network is connected.
  • Close to the bottom of the "Internet" or "Wi-Fi" settings page, choose "Network Preferences" and then "Wi-Fi Direct"
  • If there are any entries listed under "Remembered groups", tap them and choose to "Forget this group"
  • If there are Peer devices that say "Invited", tap them and choose to "Cancel invitation"

I found that this configuration is the most likely to enable a successful Wi-Fi Direct connection, where clicking "invite" on one device would pop up an alert on the other asking to accept the connection, and result in a "Connected" state between the two devices.

Actually Transferring

Start with both devices fully powered up and physically close to one another (on the same desk should be fine).

On the new device:

  • Reboot the device, and log into the profile you want to use
  • Enable Internet access via Wi-Fi.
  • Remove any old version of Signal.
  • Install Signal, but DO NOT OPEN IT!
  • Set up the permissions for the Signal app as described above
  • Open Signal, and choose "restore or transfer" -- you still need to be connected to the network at this point.
  • The new device should display a QR code.

On the original device:

  • Reboot the device, and log into the profile that has the Signal account you're looking to transfer
  • Enable Internet access via Wi-Fi, using the same network that the new device is using.
  • Make sure the permissions for Signal are set up as described above
  • Open Signal, and tap the camera button
  • Point the camera at the new device's QR code

Now tap the "continue" choices on both devices until they both display a message that they are searching for each other. You might see the location indicator (a green dot) turn on during this process.

If you see an immediate warning of failure on either device, you probably don't have the permissions set up right.

You might see an alert (a "toast") on one of the devices that the other one is trying to connect. You should click OK on that alert.

In my experience, both devices are likely to get stuck "searching" for each other. Wait for both devices to show Signal's warning that the search has timed out.

At this point, leave Signal open on both devices, and go through all the steps described above to prepare for Wi-Fi Direct. Your Internet access will be disabled.

Now, tap "Try again" in Signal on both devices, pressing the buttons within a few seconds of each other. You should see another alert that one device is trying to connect to the other. Press OK there.

At this point, the transfer should start happening! The original device will indicate what percentage has been transferred, and the new device will indicate how many messages have been transferred.

When this is all done, re-connect to Wi-Fi on the new device.

Temporal gap for Linked Devices

Note that during this process, if new messages are arriving, they will be queuing up for you.

When you reconnect to wi-fi, the queued messages will flow to your new device. But the process of transferring automatically unlinks any linked devices. So if you want to keep your instance of Signal Desktop with as short a gap as possible, you should re-link that installation promptly after the transfer completes.

Clean-up

After all this is done successfully, you probably want to go into the Permissions settings and turn off the Location and Nearby Devices permissions for Signal on both devices.

I recommend also going into Wi-Fi Direct and removing any connected devices and forgetting any existing connections.

Conclusion

This is an abysmally clunky user experience, and I'm glad I don't have to do it often. It would have been much simpler to make a backup and restore from it, but I didn't want to freak out my contacts with a safety number change.

By contrast, when I wanted to extend a DeltaChat account across two devices, the transfer was prompt and entirely painless -- I just had to make sure the devices were on the same network, and then scan a QR code from one to the other. And there was no temporal gap for any other devices. And I could use Delta on both devices simultaneously until I was convinced that it would work on the new device -- Delta doesn't have the concept of a primary account.

I wish Signal made it that easy! Until it's that easy, I hope the processes described here are useful to someone.


Planet DebianBálint Réczey: Think you can’t interpose static binaries with LD_PRELOAD? Think again!

Well, you are right, you can’t. At least not directly. This is well documented in many projects relying on interposing binaries, like faketime.

But what if we could write something that would take a static binary, replace at least the direct syscalls with ones going through libc and load it with the dynamic linker? We are in luck, because the excellent QEMU project has a user space emulator! It can be compiled as a dynamically linked executable, honors LD_PRELOAD and uses the host libc’s syscall – well, at least sometimes. Sometimes syscalls just bypass libc.

The missing piece was a way to make QEMU always take the interposable path and call the host libc instead of using an arch-specific assembly routine (`safe_syscall_base`) to construct the syscall and going directly to the kernel. Luckily, this turned out to be doable. A small patch later, QEMU gained a switch that forces all syscalls through libc. Suddenly, our static binaries started looking a lot more dynamic!

$ faketime '2008-12-24 08:15:42'  qemu-x86_64 ./test_static_clock_gettime
2008-12-24 08:15:42.725404654
$ file test_static_clock_gettime
test_static_clock_gettime: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, ...

With this in place, Firebuild can finally wrap even those secretive statically linked tools. QEMU runs them, libc catches their syscalls, LD_PRELOAD injects libfirebuild.so, and from there the usual interposition magic happens. The result: previously uncachable build steps can now be traced, cached, and shortcut just like their dynamic friends.

There is one more problem though. Why would the static binaries deep in the build be run by QEMU? Firebuild also intercepts the `exec()` calls, and now it rewrites them on the fly whenever the binary to be executed is statically linked!

$ firebuild -d comm bash -c ./test_static
...
FIREBUILD: fd 9.1: ({ExecedProcess 161077.1, running, "bash -c ./test_static", fds=[0: {FileFD ofd={FileO
FD #0 type=FD_PIPE_IN r} cloexec=false}, 1: {FileFD ofd={FileOFD #3 type=FD_PIPE_OUT w} {Pipe #0} close_o
n_popen=false cloexec=false}, 2: {FileFD ofd={FileOFD #4 type=FD_PIPE_OUT w} {Pipe #1} close_on_popen=fal
se cloexec=false}, 3: {FileFD NULL} /* times 2 */]})
{
    "[FBBCOMM_TAG]": "exec",
    "file": "test_static",
    "// fd": null,
    "// dirfd": null,
    "arg": [
        "./test_static"
    ],
    "env": [
        "SHELL=/bin/bash",
 ...
        "FB_SOCKET=/tmp/firebuild.cpMn75/socket",
        "_=./test_static"
    ],
    "with_p": false,
    "// path": null,
    "utime_u": 0,
    "stime_u": 1017
}
FIREBUILD: -> proc_ic_msg()  (message_processor.cc:782)  proc={ExecedProcess 161077.1, running, "bash -c 
./test_static", fds=[0: {FileFD ofd={FileOFD #0 type=FD_PIPE_IN r} cloexec=false}, 1: {FileFD ofd={FileOF
D #3 type=FD_PIPE_OUT w} {Pipe #0} close_on_popen=false cloexec=false}, 2: {FileFD ofd={FileOFD #4 type=F
D_PIPE_OUT w} {Pipe #1} close_on_popen=false cloexec=false}, 3: {FileFD NULL} /* times 2 */]}, fd_conn=9.
1, tag=exec, ack_num=0
FIREBUILD:   -> send_fbb()  (utils.cc:292)  conn=9.1, ack_num=0 fd_count=0
Sending message with ancillary fds []:
{
    "[FBBCOMM_TAG]": "rewritten_args",
    "arg": [
        "/usr/bin/qemu-user-interposable",
        "-libc-syscalls",
        "./test_static"
    ],
    "path": "/usr/bin/qemu-user-interposable"
}
...
FIREBUILD: -> accept_ic_conn()  (firebuild.cc:139)  listener=6
...
FIREBUILD: fd 9.2: ({Process NULL})
{
    "[FBBCOMM_TAG]": "scproc_query",
    "pid": 161077,
    "ppid": 161073,
    "cwd": "/home/rbalint/projects/firebuild/test",
    "arg": [
        "/usr/bin/qemu-user-interposable",
        "-libc-syscalls",
        "./test_static"
    ],
    "env_var": [
        "CCACHE_DISABLE=1",
...
        "SHELL=/bin/bash",
        "SHLVL=0",
        "_=./test_static"
    ],
    "umask": "0002",
    "jobserver_fds": [],
    "// jobserver_fifo": null,
    "executable": "/usr/bin/qemu-user-interposable",
    "// executed_path": null,
    "// original_executed_path": null,
    "libs": [
        "/lib/x86_64-linux-gnu/libatomic.so.1",
        "/lib/x86_64-linux-gnu/libc.so.6",
        "/lib/x86_64-linux-gnu/libglib-2.0.so.0",
        "/lib/x86_64-linux-gnu/libm.so.6",
        "/lib/x86_64-linux-gnu/libpcre2-8.so.0",
        "/lib64/ld-linux-x86-64.so.2"
    ],
    "version": "0.8.5.1"
}

The QEMU patch is forwarded to qemu-devel. If it lands, anyone using QEMU user-mode emulation could benefit — not just Firebuild.

For Firebuild users, though, the impact is immediate. Toolchains that mix dynamic and static helpers? Cross-builds that pull in odd little statically linked utilities? Previously “invisible” steps in your builds? All now fair game for caching.

Firebuild 0.8.5 ships this new capability out of the box. Just update, make sure you’re using a patched QEMU, and enjoy the feeling of watching even static binaries fall neatly into place in your cached build graph. Ubuntu users can get the prebuilt patched QEMU packages from the Firebuild PPA already.

Static binaries, welcome to the party!

Krebs on SecurityMozilla Says It’s Finally Done With Two-Faced Onerep

In March 2024, Mozilla said it was winding down its collaboration with Onerep — an identity protection service offered with the Firefox web browser that promises to remove users from hundreds of people-search sites — after KrebsOnSecurity revealed Onerep’s founder had created dozens of people-search services and was continuing to operate at least one of them. Sixteen months later, however, Mozilla is still promoting Onerep. This week, Mozilla announced its partnership with Onerep will officially end next month.

Mozilla Monitor. Image Mozilla Monitor Plus video on Youtube.

In a statement published Tuesday, Mozilla said it will soon discontinue Monitor Plus, which offered data broker site scans and automated personal data removal from Onerep.

“We will continue to offer our free Monitor data breach service, which is integrated into Firefox’s credential manager, and we are focused on integrating more of our privacy and security experiences in Firefox, including our VPN, for free,” the advisory reads.

Mozilla said current Monitor Plus subscribers will retain full access through the wind-down period, which ends on Dec. 17, 2025. After that, those subscribers will automatically receive a prorated refund for the unused portion of their subscription.

“We explored several options to keep Monitor Plus going, but our high standards for vendors, and the realities of the data broker ecosystem made it challenging to consistently deliver the level of value and reliability we expect for our users,” Mozilla’s statement reads.

On March 14, 2024, KrebsOnSecurity published an investigation showing that Onerep’s Belarusian CEO and founder Dimitri Shelest had launched dozens of people-search services since 2010, including a still-active data broker called Nuwber that sells background reports on people. Shelest released a lengthy statement wherein he acknowledged maintaining an ownership stake in Nuwber, a data broker he founded in 2015 — around the same time he launched Onerep.

Worse Than FailureCodeSOD: Invalid Route and Invalid Route

Someone wanted to make sure that invalid routes logged an error in their Go web application. Artem found this when looking at production code.

if (requestUriPath != "/config:system") &&
    (requestUriPath != "/config:system/ntp") &&
    (requestUriPath != "/config:system/ntp/servers") &&
    (requestUriPath != "/config:system/ntp/servers/server") &&
    (requestUriPath != "/config:system/ntp/servers/server/config") &&
    (requestUriPath != "/config:system/ntp/servers/server/config/address") &&
    (requestUriPath != "/config:system/ntp/servers/server/config/key-id") &&
    (requestUriPath != "/config:system/ntp/servers/server/config/minpoll") &&
    (requestUriPath != "/config:system/ntp/servers/server/config/maxpoll") &&
    (requestUriPath != "/config:system/ntp/servers/server/config/version") &&
    (requestUriPath != "/config:system/ntp/servers/server/state") &&
    (requestUriPath != "/config:system/ntp/servers/server/state/address") &&
    (requestUriPath != "/config:system/ntp/servers/server/state/key-id") &&
    (requestUriPath != "/config:system/ntp/servers/server/state/minpoll") &&
    (requestUriPath != "/config:system/ntp/servers/server/state/maxpoll") &&
    (requestUriPath != "/config:system/ntp/servers/server/state/version") {
    log.Info("ProcessGetNtpServer: no return of ntp server state for ", requestUriPath)
    return nil
}

The most disturbing part of this, for Artem, isn't that someone wrote this code and pushed it to production. It's that, according to git blame, two people wrote this code, because the first developer didn't include all the cases.

For the record, the application does have an actual router module, which can trigger logging on invalid routes.


365 TomorrowsMonmoth

Author: Timothy Goss There is no tyme, no tick tock not no more. Sunny has face an hands, but no tick tock, only slip slop like me own guts. We been waiting an watching, meself an Sunny, waiting days and nites, watching light an dark, waiting for grub from under wood. Sunny says they have […]

The post Monmoth appeared first on 365tomorrows.

David BrinAggressive Agility: Turn the GOP's Most Successful Political Ploy Against Them

Here begins a three-parter that merits old-fashioned reading and contemplation, about how to fix the Democrats' greatest disadvantage. 

Despite being far less corrupt and immensely more ethical, with a vast range of policies that poll far better among Americans... and despite Democratic administrations having universally better outcomes, re: economics, progress and even deficits... Democrats suffer one crucial disadvantage. When it comes to polemical/political tactics, they are absolute dunces.

Hence, let's dissect the most aggressively successful tactical-political ploy of the last 40 years. And see what we can learn from it.


  PONDERING AN UNUSUAL TACTIC FOR DEMOCRATS:

ISSUE A "BETTER CONTRACT FOR AMERICA"

or... A Newer Deal...

   

by David Brin

 (1st version February 2006, revised October 2025)

 

 Today’s partisans – both Democrats and Republicans – will snort at the title of this proposal. To study one of the most successful political tactics of the modern era. 


 If anyone remembers the "Republican Contract with America" at all, it’s viewed as a ploy by Newt Gingrich and colleagues to sway the 1994 mid-terms. 


A Potemkin pretense at reform, that served to cover their true agenda.  


It worked! At achieving Newt’s short term goal – taking power in Congress. Though soon a radicalized GOP – some of them newly elected to Congress thanks to Gingrich’s tactic – would betray and eject him as Speaker of the House, swapping in Dennis Hastert, first in a long chain of perverted psychopaths.[1] 


They also cynically tossed every reform that Newt had promised.


 Today’s Democrats recall his “Contract” as both a painful defeat and flagrant hypocrisy. 

 To the scandal-ridden Republicans of 2025, it’s a hoary anecdote – relic of a bygone era, when they still felt compelled to at least feign serious intent. 


 Sure, parties often post platforms or lists of intent. Some of them made a difference in their day. FDR’s New Deal and LBJ’s Great Society, for example**. But none in recent memory had the clarity and instant political effects of the Gingrich ‘contract.’


 Hence, I propose that we study it for use – with both honest intent and ironic satire – by the other side! I’ll include at least thirty possibilities, any one of which might be political gold. 


 Though, alas, none of them is on the horizon of any Democratic politician.

 

---------------------

 

THE THREE PARTS

 

I.   A rumination:  Might Democrats help clarify their differences from the GOP with their own Newest… or Best Deal for the American People?

 

II.  A compact copy of the 1994 “Republican Contract with America” appraising how every part was betrayed.  

 

III.  A Draft “Democratic Newest Deal for the American People.”  

  

---------------------

 

So, for now, let’s commence with Part One.

 

I.           Might the Democratic Party help clarify its opposition to the gone-mad GOP, by reminding, comparing and contrasting to the “Contract with America”?

 

Our generation’s hard task is to end phase nine of the US Civil War and restore sanity to American political life. Not just for liberals, Democrats and blue state moderates, but also for honest libertarians, greens, fiscal conservatives, Goldwater conservatives, constitutionalist conservatives, actual 'leftists' and anyone else who wants a nation run by grownups, instead of loony toddlers and criminals. 

Alas, too many delight prematurely in the current President's falling poll numbers. Democrats may retake a chamber of Congress in 2026 or the presidency in 2028. (There are scenarios where turnover could happen earlier.[2]) But even those victories will remain sterile, unless we calm rifts of hatred that were ignited first by Hastert and Karl Rove, then more poisonously by the entire Fox-o-sphere.

 

Many liberal activists foresee such a memic victory "if only we refine our message," while shrugging off the hard work of studying and refining! Instead, far too many just double down on what did not work last time. Meanwhile the neoconservative movement – then its Trumpist heir – assiduously spent decades and billions reinventing themselves after defeats in 1964 and 1974 and 2008.

 

Democrats may need to be just as inventive.

 

 

    == What the Gingrich Republicans did, and why they hope you forgot ==

 

No current GOP leader would mention the words “Contract with America.” They recall the punishment that they implicitly accepted, if they betrayed their promises! And so, let’s remind the public of that!

 

Specifically, there may be an opportunity to:


1.   Learn from a clever methodology and message,

2.   Spur public revulsion by highlighting betrayed GOP promises, 

3.  Show sincerity by including some ideas from better versions of conservatism,

4.  Crystallize a reinvigorated liberalism that might go down well with U.S. Voters.

 

Next time, I will append a truncated summary of Gingrich’s original “Contract with America,” which divides into three categories.[3]  


 * Good ideas that seemed reasonable then, because they were reasonable.  Promises the neocon-GOP quickly betrayed, and that later MAGA mutants would denounce as commie-Soros plotting! 


Only, suppose Democrats offer honest conservatives a chance to do these good ideas right. Especially public accountability, e.g. by instituting measures like the Inspector General of the United States (IGUS), and permanent subpoena power for the Congressional minority. (See Part Three.)  


 * Conservative ideas that Democrats disagree with, but seemed at least sincere. These, too, were mostly betrayed. Only we might now say that Democrats are willing to negotiate, if decent conservatives show the guts to step up with reason and good will. Starting by recanting Trumpism.


 * Dismal/horrid stuff. Endeavors aimed only at benefiting fat cats and aristocrats and thieves. Notably, some of these planks actually took effect. Any new Democratic “deal” would replace them with excellent liberal ideas.


By adopting the good parts, and offering to negotiate some other conservative wants, we’re seen reaching out to millions of decent American conservatives who are uncomfortable with Trumpism, but who stay in the Foxite tent, fearing a strawman of intransigent “commie liberals.” Then, by replacing aristocracy-friendly planks with some that actually benefit our children, we emphasize basic differences that make Democrats the party of smart compassion. 


Some will carp at this as copycat imitation! So, test it in focus groups! Will folks appreciate the aggressive irony? Rubbing GOP/maga noses into their own hypocrisy? [4] While clearly reaching out for accommodation with all sincere Americans. Go ahead. Glance at the ‘94 “Contract” (next posting). I’ll be interested in which parts people deem worthy of adoption, modification, satire, or fierce repudiation.


Above all, this is a test of your curiosity. Together let’s interrogate a brilliant maneuver that tricked millions of your fellow citizens. One of many that are still used today. Tricks that will never be defeated until we find the patience to study them.


ONWARD TO PART 2!


--------------------    ------------------------   --------------------


[1] Soon after issuing the “contract” and leading the GOP to victory, Gingrich was jettisoned by his own party as Speaker of the House, because – despite fierce and sometimes fulminating partisanship – Newt did want to legislate! Which meant negotiation with Bill Clinton, achieving bipartisan marvels like the Budget Act and welfare reform. And that very bipartisanship was his undoing! His sin, according to the new GOP super-radicals.

Look up Dennis Hastert, who replaced Newt G as Speaker, making Hastert titular head of their party, two heartbeats from the presidency! Hastert was later convicted and imprisoned for gross sexual predation on children. He also instituted the “Hastert Rules,” which ban any Republican office-holder from ever negotiating with Democrats, for any reason including the national interest, or even having friendships with them, ever again.

[2] Before that? Well, it’s remotely possible. Say, if major revelations like Epstein kompromat were to stoke just twenty or so House and Senate Republicans to find the sanity, decency, patriotism and guts to secede from their party’s orgy of treason. It is theoretically possible they might work with Democrats to replace the current gang with some residually honorable Republicans, perhaps in the mold of Eisenhower, who would try to unite America and return its government to adult supervision.  One can dream.

[3] For a detailed appraisal of how neoconservatives re-invented themselves, learning masterful techniques for attaining power over all three branches of government, see my prescient article from 2006: The Republican Party's Mutant Re-Invention: How they Accomplished it....and What Democrats Must Do In Order to Catch Up.

[4] For example, the whole bizarre notion that America’s military readiness increased under Republican control merits scathing rebuttal!  We are less ready for an emergency now, under GOP scatterbrained shills who have dispersed even most of the officers charged with intel on terrorism threats(!) than we were before 9/11. This is an issue that could truly pry some conservatives away from the GOP!

 ** Both massive programs - the New Deal and Great Society - invested heavily in – and transformed – the poorest parts of the nation, which today suffer from ingratitude-amnesia, alas.


Planet DebianDirk Eddelbuettel: digest 0.6.39 on CRAN: Micro Update

Release 0.6.39 of the digest package arrived at CRAN today and has also been uploaded to Debian.

digest creates hash digests of arbitrary R objects. It can use a number of different hashing algorithms (md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, spookyhash, blake3, crc32c, xxh3_64 and xxh3_128), and enables easy comparison of (potentially large and nested) R language objects as it relies on the native serialization in R. It is a mature and widely-used package (with 86.8 million downloads just on the partial cloud mirrors of CRAN which keep logs) as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation to quickly identify the various objects.

As noted last week in the 0.6.38 release note, hours after it was admitted to CRAN, I heard from the ever-so-tireless Brian Ripley about an SAN issue on arm64 only (and apparently non-reproducible elsewhere). He kindly provided a fix; it needed a cast. Checking this on amd64 against our Rocker-based ASAN and UBSAN containers (where it remains impossible to replicate; this class of issue is apparently known on arm64) another micro-issue (a missing final NULL argument in one .Call()) was detected. Both issues were fixed the same day, and they constitute the only change here. I merely waited a week to avoid the mechanical nag triggered when releases happen within a week.

My CRANberries provides a summary of changes to the previous version. For questions or comments use the issue tracker off the GitHub repo. For documentation (including the changelog) see the documentation site.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianDirk Eddelbuettel: #055: More Frequent r2u Updates

Welcome to post 55 in the R4 series.

r2u brings CRAN packages for R to Ubuntu. We mentioned it in the R4 series within the last year in posts #54 about faster CI, #48 about the r2u keynote at U Mons, #47 reviewing r2u it at its third birthday, #46 about adding arm64 support, and #44 about the r2u for mlops talk.

Today brings news of an important (internal) update. Following both the arm64 builds as well as the last bi-annual BioConductor package update (and the extension of BioConductor coverage to arm64), more and more of our build setup became automated at GitHub. This has now been unified. We dispatch builds for amd64 packages for ‘jammy’ (22.04) and ‘noble’ (24.04) (as well as for the arm64 binaries for ‘noble’) from the central build repository and enjoy the highly parallel build of the up to forty available GitHub Runners. In the process we also switched fully to source builds.

In the past, we had relied on p3m.dev (formerly known as ppm and rspm) using its binaries. These so-called ‘naked binaries’ are what R produces when called as R CMD INSTALL --build. They are portable with the same build architecture and release, but do not carry packaging information. Now, when a Debian or Ubuntu .deb binary is built, the same step of R CMD INSTALL --build happens. So our earlier insight was to skip the compilation step, use the p3m binary, and then wrap the remainder of a complete package around it. Which includes the all-important dependency information for both the R package relations (from hard Depends / Imports / LinkingTo or soft Suggests declarations) as well as the shared library dependency resolution we can do when building for a Linux distribution.

That served us well, and we remain really grateful for the p3m.dev build service. But it also meant we were depending on the ‘clock’ and ‘cadence’ of p3m.dev. Which was not really a problem when it ran reliably daily, and early too, weekends included, and showed a timestamp of last update. By now it is a bit more erratic, frequently late, skips weekends more regularly, and long ago stopped showing when it was last updated. Late-afternoon releases reflecting CRAN updates that ended one and a half days earlier are still good, just not all that current. Plus there was always the very opaque occurrence where maybe one in 50 packages or so would not even be provided as a binary, so we had to build it anyway—the fallback always existed, and was used for both BioConductor (no binaries) and arm64 (no binaries at first; this has now changed). So now we just build packages the standard way, albeit as GitHub Actions.

In doing so we can ignore p3m.dev, and rather follow the CRAN clock and cadence (as for example CRANberries does), and can update several times a day. For example, early this morning (Central time) we ran an update for the then-new 28 source packages, resulting in 28 jammy and 36 noble binary packages; right now in mid-afternoon we are running another build for 37 source packages, resulting in 37 jammy and 47 noble packages. (Packages without a src/ directory and hence no compilation can be used across amd64 and arm64; those that do have src/ are rebuilt for arm64, hence the different sets of jammy and noble packages as only the latter has arm64 now.) This gets us packages from this morning into r2u which p3m.dev should have by tomorrow afternoon or so.

And with that r2u remains “Fast. Easy. Reliable. Pick all three!” and also a little more predictable and current in its delivery. What’s not to like?

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Krebs on SecurityThe Cloudflare Outage May Be a Security Roadmap

An intermittent outage at Cloudflare on Tuesday briefly knocked many of the Internet’s top destinations offline. Some affected Cloudflare customers were able to pivot away from the platform temporarily so that visitors could still access their websites. But security experts say doing so may have also triggered an impromptu network penetration test for organizations that have come to rely on Cloudflare to block many types of abusive and malicious traffic.

At around 6:30 EST/11:30 UTC on Nov. 18, Cloudflare’s status page acknowledged the company was experiencing “an internal service degradation.” After several hours of Cloudflare services coming back up and failing again, many websites behind Cloudflare found they could not migrate away from using the company’s services because the Cloudflare portal was unreachable and/or because they also were getting their domain name system (DNS) services from Cloudflare.

However, some customers did manage to pivot their domains away from Cloudflare during the outage. And many of those organizations probably need to take a closer look at their web application firewall (WAF) logs during that time, said Aaron Turner, a faculty member at IANS Research.

Turner said Cloudflare’s WAF does a good job filtering out malicious traffic that matches any one of the top ten types of application-layer attacks, including credential stuffing, cross-site scripting, SQL injection, bot attacks and API abuse. But he said this outage might be a good opportunity for Cloudflare customers to better understand how their own app and website defenses may be failing without Cloudflare’s help.

“Your developers could have been lazy in the past for SQL injection because Cloudflare stopped that stuff at the edge,” Turner said. “Maybe you didn’t have the best security QA [quality assurance] for certain things because Cloudflare was the control layer to compensate for that.”

Turner said one company he’s working with saw a huge increase in log volume and is still trying to figure out what was “legit malicious” versus just noise.

“It looks like there was about an eight hour window when several high-profile sites decided to bypass Cloudflare for the sake of availability,” Turner said. “Many companies have essentially relied on Cloudflare for the OWASP Top Ten [web application vulnerabilities] and a whole range of bot blocking. How much badness could have happened in that window? Any organization that made that decision needs to look closely at any exposed infrastructure to see if they have someone persisting after they’ve switched back to Cloudflare protections.”

Turner said some cybercrime groups likely noticed when an online merchant they normally stalk stopped using Cloudflare’s services during the outage.

“Let’s say you were an attacker, trying to grind your way into a target, but you felt that Cloudflare was in the way in the past,” he said. “Then you see through DNS changes that the target has eliminated Cloudflare from their web stack due to the outage. You’re now going to launch a whole bunch of new attacks because the protective layer is no longer in place.”

Nicole Scott, senior product marketing manager at the McLean, Va.-based Replica Cyber, called yesterday’s outage “a free tabletop exercise, whether you meant to run one or not.”

“That few-hour window was a live stress test of how your organization routes around its own control plane and shadow IT blossoms under the sunlamp of time pressure,” Scott said in a post on LinkedIn. “Yes, look at the traffic that hit you while protections were weakened. But also look hard at the behavior inside your org.”

Scott said organizations seeking security insights from the Cloudflare outage should ask themselves:

1. What was turned off or bypassed (WAF, bot protections, geo blocks), and for how long?
2. What emergency DNS or routing changes were made, and who approved them?
3. Did people shift work to personal devices, home Wi-Fi, or unsanctioned Software-as-a-Service providers to get around the outage?
4. Did anyone stand up new services, tunnels, or vendor accounts “just for now”?
5. Is there a plan to unwind those changes, or are they now permanent workarounds?
6. For the next incident, what’s the intentional fallback plan, instead of decentralized improvisation?

In a postmortem published Tuesday evening, Cloudflare said the disruption was not caused, directly or indirectly, by a cyberattack or malicious activity of any kind.

“Instead, it was triggered by a change to one of our database systems’ permissions which caused the database to output multiple entries into a ‘feature file’ used by our Bot Management system,” Cloudflare CEO Matthew Prince wrote. “That feature file, in turn, doubled in size. The larger-than-expected feature file was then propagated to all the machines that make up our network.”

Cloudflare estimates that roughly 20 percent of websites use its services, and with much of the modern web relying heavily on a handful of other cloud providers including AWS and Azure, even a brief outage at one of these platforms can create a single point of failure for many organizations.

Martin Greenfield, CEO at the IT consultancy Quod Orbis, said Tuesday’s outage was another reminder that many organizations may be putting too many of their eggs in one basket.

“There are several practical and overdue fixes,” Greenfield advised. “Split your estate. Spread WAF and DDoS protection across multiple zones. Use multi-vendor DNS. Segment applications so a single provider outage doesn’t cascade. And continuously monitor controls to detect single-vendor dependency.”

Worse Than FailureCodeSOD: Are You Mocking Me?

Today's representative line comes from Capybara James (most recently previously). It's representative, not just of the code base, but of Goodhart's Law: when a measure becomes a target, it ceases to be a good measure. Or, "you get what you measure".

If, for example, you decide that code coverage metrics are how you're going to judge developers, then your developers are going to ensure that the code coverage looks great. If you measure code coverage, then you will get code coverage- and nothing else.

That's how you get tests like this:

Mockito.verify(exportRequest, VerificationModeFactory.atLeast(0)).failedRequest(any(), any(), any());

This test passes if the function exportRequest.failedRequest is called at least zero times, with any input parameters.

Which, as you might imagine, is a somewhat useless thing to test. But what's important is that there is a test. The standards for code coverage are met, the metric is satisfied, and Goodhart marks up another win on the board.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsThe Gold Record

Author: Nathan Matthew Edmunds The spacecraft ascended the purpose of its creators’ intention like most of their labors before it. On November 5, 2018, the Voyager 2 probe broke through the heliosphere of its home system and hummed through the blind and deaf cosmos. By the time the craft’s instrumentation failed, it had exceeded the […]

The post The Gold Record appeared first on 365tomorrows.

Planet DebianGunnar Wolf: While it is cold-ish season in the North hemisphere...

Last week, our university held a «Mega Vaccination Center». Things cannot be small or regular with my university, ever! According to the official information, during last week ≈31,000 people were given a total of ≈74,000 vaccine doses against influenza, COVID-19, pneumococcal disease and measles (specific vaccines for each person selected according to an age profile).

I was a tiny blip in said numbers. One person, three shots. Took me three hours, but am quite happy to have been among the huge crowd.

Long, long line

(↑ photo credit: La Jornada, 2025.11.14)

Really vaccinated!

And why am I bringing this up? Because I have long been involved in organizing DebConf, the best conference ever, naturally devoted to improving Debian GNU/Linux. And last year, our COVID reaction procedures ended up hurting people we care about. We, as organizers, are taking seriously the task of shaping a humane COVID handling policy that is, at the same time, responsible and respectful of people who are (reasonably!) afraid of catching the infection. No, COVID did not disappear in 2022, and its effects are not something we can turn a blind eye to.

Next year, DebConf will take place in Santa Fe, Argentina, in July. This means it will be a Winter DebConf. And while you can catch COVID (or influenza, or just a bad cold) at any time of year, the odds are a bit higher then.

I know not every country still administers free COVID or influenza vaccines to anybody who requests them. And I know that any protection I might have got now will be quite a bit weaker by July. But I feel it necessary to ask everyone who can get a shot to do so. Most Northern Hemisphere countries will have a vaccination campaign (or at least, higher vaccine availability) before winter.

If you plan to attend DebConf (hell… If you plan to attend any massive gathering of people travelling from all over the world to sit at a crowded auditorium) during the next year, please… Act responsibly. For yourself and for those surrounding you. Get vaccinated. It won’t absolutely save you from catching it, but it will reduce the probability. And if you do catch it, you will probably have a much milder version. And thus, you will spread it less during the first days until (and if!) you start developing symptoms.

Planet DebianMichael Ablassmeier: building SLES 16 vagrant/libvirt images using guestfs tools

SLES 16 has been released. In the past, SUSE offered ready-built vagrant images. Unfortunately that’s not the case anymore; as with more recent SLES 15 releases, the official images were gone.

In the past, it was possible to clone existing projects on the openSUSE Build Service to build the images by yourself, but I couldn’t find any templates for SLES 16.

Naturally, there are several ways to build images, and the tooling around them involves kiwi-ng, the openSUSE Build Service, packer recipes, etc. (existing packer recipes won’t work anymore, as YaST has been replaced by a new installer, called Agama). All pretty complicated, …

So my current take on creating a vagrant image for SLE16 has been the following:

  • Spin up a QEMU virtual machine
  • Manually install the system, all in default except for one special setting: In the Network connection details, “Edit Binding settings” and set the Interface to not bind a particular MAC address or interface. This will make the system pick whatever network device naming scheme is applied during boot.
  • After the installation has finished, shut down.

Two guestfs-tools can now be used to modify the created qcow2 image:

  • run virt-sysprep on the image to wipe settings that might cause trouble:
 virt-sysprep -a sle16.qcow2
  • create a simple shell script that sets up all vagrant-related settings:
#!/bin/bash
useradd vagrant
mkdir -p /home/vagrant/.ssh/
chmod 0700 /home/vagrant/.ssh/
echo "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key" > /home/vagrant/.ssh/authorized_keys
chmod 0600 /home/vagrant/.ssh/authorized_keys
chown -R vagrant:vagrant /home/vagrant/
# apply recommended ssh settings for vagrant boxes
SSHD_CONFIG=/etc/ssh/sshd_config.d/99-vagrant.conf
if [[ ! -d "$(dirname ${SSHD_CONFIG})" ]]; then
    SSHD_CONFIG=/etc/ssh/sshd_config
    # prepend the settings, so that they take precedence
    echo -e "UseDNS no\nGSSAPIAuthentication no\n$(cat ${SSHD_CONFIG})" > ${SSHD_CONFIG}
else
    echo -e "UseDNS no\nGSSAPIAuthentication no" > ${SSHD_CONFIG}
fi
SUDOERS_LINE="vagrant ALL=(ALL) NOPASSWD: ALL"
if [ -d /etc/sudoers.d ]; then
    echo "$SUDOERS_LINE" >| /etc/sudoers.d/vagrant
    visudo -cf /etc/sudoers.d/vagrant
    chmod 0440 /etc/sudoers.d/vagrant
else
    echo "$SUDOERS_LINE" >> /etc/sudoers
    visudo -cf /etc/sudoers
fi
 
mkdir -p /vagrant
chown -R vagrant:vagrant /vagrant
systemctl enable sshd
  • use virt-customize to upload the script into the qcow2 image:
 virt-customize -a sle16.qcow2 --upload vagrant.sh:/tmp/vagrant.sh
  • execute the script via:
 virt-customize -a sle16.qcow2 --run-command "/tmp/vagrant.sh"

After this, use the create_box.sh script from the vagrant-libvirt project to create a box image:

https://github.com/vagrant-libvirt/vagrant-libvirt/blob/main/tools/create_box.sh

and add the image to your environment:

 create_box.sh sle16.qcow2 sle16.box
 vagrant box add --name my/sles16 sle16.box

The resulting box is working well within my CI environment, as far as I can tell.

,

Cryptogram Friday Squid Blogging: New “Squid” Sneaker

I did not know Adidas sold a sneaker called “Squid.”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Cryptogram AI as Cyberattacker

From Anthropic:

In mid-September 2025, we detected suspicious activity that later investigation determined to be a highly sophisticated espionage campaign. The attackers used AI’s “agentic” capabilities to an unprecedented degree­—using AI not just as an advisor, but to execute the cyberattacks themselves.

The threat actor—­whom we assess with high confidence was a Chinese state-sponsored group—­manipulated our Claude Code tool into attempting infiltration into roughly thirty global targets and succeeded in a small number of cases. The operation targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention.

[…]

The attack relied on several features of AI models that did not exist, or were in much more nascent form, just a year ago:

  1. Intelligence. Models’ general levels of capability have increased to the point that they can follow complex instructions and understand context in ways that make very sophisticated tasks possible. Not only that, but several of their well-developed specific skills—in particular, software coding­—lend themselves to being used in cyberattacks.
  2. Agency. Models can act as agents—­that is, they can run in loops where they take autonomous actions, chain together tasks, and make decisions with only minimal, occasional human input.
  3. Tools. Models have access to a wide array of software tools (often via the open standard Model Context Protocol). They can now search the web, retrieve data, and perform many other actions that were previously the sole domain of human operators. In the case of cyberattacks, the tools might include password crackers, network scanners, and other security-related software.

Cryptogram Scam USPS and E-Z Pass Texts and Websites

Google has filed a complaint in court that details the scam:

In a complaint filed Wednesday, the tech giant accused “a cybercriminal group in China” of selling “phishing for dummies” kits. The kits help unsavvy fraudsters easily “execute a large-scale phishing campaign,” tricking hordes of unsuspecting people into “disclosing sensitive information like passwords, credit card numbers, or banking information, often by impersonating well-known brands, government agencies, or even people the victim knows.”

These branded “Lighthouse” kits offer two versions of software, depending on whether bad actors want to launch SMS and e-commerce scams. “Members may subscribe to weekly, monthly, seasonal, annual, or permanent licenses,” Google alleged. Kits include “hundreds of templates for fake websites, domain set-up tools for those fake websites, and other features designed to dupe victims into believing they are entering sensitive information on a legitimate website.”

Google’s filing said the scams often begin with a text claiming that a toll fee is overdue or a small fee must be paid to redeliver a package. Other times they appear as ads—­sometimes even Google ads, until Google detected and suspended accounts—­luring victims by mimicking popular brands. Anyone who clicks will be redirected to a website to input sensitive information; the sites often claim to accept payments from trusted wallets like Google Pay.

Cryptogram Legal Restrictions on Vulnerability Disclosure

Kendra Albert gave an excellent talk at USENIX Security this year, pointing out that the legal agreements surrounding vulnerability disclosure muzzle researchers while allowing companies to not fix the vulnerabilities—exactly the opposite of what the responsible disclosure movement of the early 2000s was supposed to prevent. This is the talk.

Thirty years ago, a debate raged over whether vulnerability disclosure was good for computer security. On one side, full disclosure advocates argued that software bugs weren’t getting fixed and wouldn’t get fixed if companies that made insecure software weren’t called out publicly. On the other side, companies argued that full disclosure led to exploitation of unpatched vulnerabilities, especially if they were hard to fix. After blog posts, public debates, and countless mailing list flame wars, there emerged a compromise solution: coordinated vulnerability disclosure, where vulnerabilities were disclosed after a period of confidentiality during which vendors could attempt to fix things. Although full disclosure fell out of fashion, disclosure won and security through obscurity lost. We’ve lived happily ever after since.

Or have we? The move towards paid bug bounties and the rise of platforms that manage bug bounty programs for security teams has changed the reality of disclosure significantly. In certain cases, these programs require agreement to contractual restrictions. Under the status quo, that means that software companies sometimes funnel vulnerabilities into bug bounty management platforms and then condition submission on confidentiality agreements that can prohibit researchers from ever sharing their findings.

In this talk, I’ll explain how confidentiality requirements for managed bug bounty programs restrict the ability of those who attempt to report vulnerabilities to share their findings publicly, compromising the bargain at the center of the CVD process. I’ll discuss what contract law can tell us about how and when these restrictions are enforceable, and more importantly, when they aren’t, providing advice to hackers around how to understand their legal rights when submitting. Finally, I’ll call upon platforms and companies to adapt their practices to be more in line with the original bargain of coordinated vulnerability disclosure, including by banning agreements that require non-disclosure.

And this is me from 2007, talking about “responsible disclosure”:

This was a good idea—and these days it’s normal procedure—but one that was possible only because full disclosure was the norm. And it remains a good idea only as long as full disclosure is the threat.

Planet DebianSahil Dhiman: Anchors in Life

Just like a ship needs an anchor to stabilize and hold it to port, humans too, I feel, have and require anchors to hold them in life. It could be an emotional anchor, a physical anchor, an anchor that stimulates your curiosity, a family member, a friend or a partner or a spiritual being.

An anchor holds you and helps you stabilize in stormy weather. An anchor can keep you going or stop you from going. An anchor orients you, helps you formulate your values and beliefs.

An anchor could be someone or something or oneself (thanks Saswata for the thought). Writing here is one of my anchors; what’s your anchor?

365 TomorrowsConstant

Author: Majoki Somewhere in the staggering structure there had to be a drip. Thorndyke sensed it before he actually heard it late that first night as he sat in the empty chamber. A metallic plinking. It seemed inconceivable that a structure as monolithic as the Presidium could have a leak either external or internal. The […]

The post Constant appeared first on 365tomorrows.

Worse Than FailureUsing an ADE: Ancient Development Environment

One of the things that makes legacy code legacy is that code, over time, rots. Some of that rot comes from the gradual accumulation of fixes, hacks, and kruft. But much of the rot also comes from the tooling going unsupported or entirely out of support.

For example, many years ago, I worked in a Visual Basic 6 shop. The VB6 IDE went out of support in April 2008, but we continued to use it well into the next decade. This made it challenging to support the existing software, as the IDE frequently broke in response to OS updates. Even when we started running it inside a VM running an antique version of Windows 2000, we kept running into endless issues getting projects to compile and build.

A fun side effect of that: the VB6 runtime remains supported. So you can run VB6 software on modern Windows. You just can't modify that software.

Greta has inherited an even more antique tech stack. She writes, "I often wonder if I'm the last person on Earth encumbered with this particular stack." She adds, "The IDE is long-deprecated from a vendor that no longer exists- since 2002." Given the project started in the mid 2010s, it may have been a bad choice to use that tech-stack.

It's not as bad as it sounds- while the technology and tooling is crumbling ruins, the team culture is healthy and the C-suite has given Greta wide leeway to solve problems. But that doesn't mean that the tooling isn't a cause of anguish, and even worse than the tooling- the code itself.

"Some things," Greta writes, "are 'typical bad'" and some things "are 'delightfully unique' bad."

For example, the IDE has a concept of "designer" files, for the UI, and "code behind" files, for the logic powering the UI. The IDE frequently corrupts its own internal state, and loses the ability to properly update the designer files. When this happens, if you attempt to open, save, or close a designer file, the IDE pops up a modal dialog box complaining about the corruption, with a "Yes" and "No" option. If you click "No", the modal box goes away- and then immediately reappears, because you're still on a broken designer file. If you click "Yes", the IDE "helpfully" deletes pretty much everything in your designer file.

Nothing about the error message indicates that this might happen.

The language used is a dialect of C++. I say "dialect" because the vendor-supplied compiler implements some cursed feature set between C++98 and C++11 standards, but doesn't fully conform to either. It's only capable of outputting 32-bit x86 code up to a Pentium Pro. Using certain C++ classes, like std::fstream, causes the resulting executable to throw a memory protection fault on exit.

Worse, the vendor supplied class library is C++ wrappers on top of an even more antique Pascal library. The "class" library is less an object-oriented wrapper and more a collection of macros and weird syntax hacks. No source for the Pascal library exists, so forget about ever updating that.

Because the last release of the IDE was circa 2002, running it on any vaguely modern environment is prone to failures, but it also doesn't play nicely inside of a VM. At this point, the IDE works for one session. If you exit it, reboot your computer, or try to close and re-open the project, it breaks. The only fix is to reinstall it. But the reinstall requires you to know which set of magic options actually lets the install proceed. If you make a mistake and accidentally install, say, CORBA support, attempting to open the project in the IDE leads to a cascade of modal error boxes, including one that simply says, "ABSTRACT ERROR" ("My favourite", writes Greta). And these errors don't limit themselves to the IDE; attempting to run the compiler directly also fails.

But, if anything, it's the code that makes the whole thing really challenging to work with. While the UI is made up of many forms, the "main" form is 18,000 lines of code, with absolutely no separation of concerns. Actually, the individual forms don't have a lot of separation of concerns; data is shared between forms via global variables declared in one master file, and then externed into other places. Even better, the various sub-forms are never destroyed, just hidden and shown, which means they remember their state whether you want that or not. And since much of the state is global, you have to be cautious about which parts of the state you reset.

Greta adds:

There are two files called main.cpp, a Station.cpp, and a Station1.cpp. If you were to guess which one owns the software's entry point, you would probably be wrong.

But, as stated, it's not all as bad as it sounds. Greta writes: "I'm genuinely happy to be here, which is perhaps odd given how terrible the software is." It's honestly not that odd; a good culture can go a long way to making wrangling a difficult tech stack happy work.

Finally, Greta has this to say:

We are actively working on a .NET replacement. A nostalgic, perhaps masochistic part of me will miss the old stack and its daily delights.

[Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

Planet DebianDaniel Kahn Gillmor: App Store Oligopoly

A Call for Public Discussion about App Store Oligopoly

Over on the ACLU's Free Future blog, I just published an article titled Your Smartphone, Their Rules: How App Stores Enable Corporate-Government Censorship.

Free Software users and developers likely already understand the reasons why it matters who controls what tools you have access to. Hopefully this post can help clarify, even to people typically used to common non-free tooling, that there are real world risks to consolidated, proprietary control over computing and communication tools.

Big shout out to the projects out there doing good work in the "pocket supercomputer" space, providing an escape valve for many users and a counter-example to centralized corporate control, including F-Droid, GrapheneOS, and phosh.

The screws are tightening on user freedom, in the very place where most computing is happening today. The smartphone is already far more similar to an ankle monitor than it should be.

Please, publish your own suggestions on creative forms of mutual technical liberation. These are communications tools, so no person can fix the problems alone.

I would love to see a flourishing of non-Android, non-iOS systems in people's pockets, but I also know, with the market the way it is, that that is a long haul. Until that happens, we should also try to keep Android open; check out keepandroidopen.org for more suggestions.

,

Worse Than FailureRepresentative Line: In the Zone

Robert R picked up a bug in his company's event scheduling app. Sometimes, events were getting reported a day off from when they actually were.

It didn't take too long to find the culprit, and as is so often the case, the culprit was handling dates with strings.

const dateAsString = event.toISOString().substr(0,10);  
return new Date(dateAsString);

toISOString returns a "simplified" ISO8601 string, which looks like this: YYYY-MM-DDTHH:mm:ss.sssZ. The substr pops off the first ten characters, giving you YYYY-MM-DD.

The goal, as you can likely gather, is to truncate to just the date part of a date-time. And given that JavaScript doesn't have a convenient method to do that, it doesn't seem like a terrible way to solve that problem, if you don't think about what date-times contain too hard.

But there's an obvious issue here. toISOString always renders the date in UTC, converting from your local timezone. Which means when you pick off just the date portion, you may be off by an entire day, depending on the event's scheduled time and your local timezone.

This code doesn't simply truncate- it discards timezone information. But for an event scheduler used across the world, tracking timezones is important. You can't just throw that information away.
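One way to keep the truncation in local time is to build the date-only value from the local calendar fields instead of round-tripping through a UTC string. A minimal sketch (localDateOnly is a hypothetical helper name, not from the original code):

```javascript
// The buggy truncation from the article, for contrast:
function utcDateOnly(event) {
  const dateAsString = event.toISOString().substr(0, 10); // always UTC
  return new Date(dateAsString); // date-only strings parse as UTC midnight
}

// A timezone-safe alternative: read the local-time fields directly.
function localDateOnly(event) {
  const y = event.getFullYear();
  const m = String(event.getMonth() + 1).padStart(2, "0");
  const d = String(event.getDate()).padStart(2, "0");
  // Including a time component (and no trailing "Z") makes the string
  // parse as local time rather than UTC.
  return new Date(`${y}-${m}-${d}T00:00:00`);
}

// An event at 23:30 local time on Jan 15 stays on Jan 15:
const event = new Date(2024, 0, 15, 23, 30);
const truncated = localDateOnly(event);
// truncated.getDate() === 15 in every timezone, whereas utcDateOnly(event)
// can land on a neighboring day depending on the machine's UTC offset.
```

A real scheduler still needs to decide *whose* timezone the truncation should happen in (the viewer's, or the event's), but that decision has to be made explicitly rather than silently delegated to toISOString.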

[Advertisement] Plan Your .NET 9 Migration with Confidence
Your journey to .NET 9 is more than just one decision.Avoid migration migraines with the advice in this free guide. Download Free Guide Now!

365 TomorrowsStars & Debts

Author: Julian Miles, Staff Writer There’s a lot to be said for the glory of a star field. A million points of light in every direction, in an array of colours you’d never believe possible, and a silence that seems to make the vista even more intense. “You’re stargazing again, aren’t you?” From infinity back […]

The post Stars & Debts appeared first on 365tomorrows.

Planet DebianRodrigo Siqueira: XDC 2025

It has been a long time since I published any update in this space. Since this was a year of colossal changes for me, maybe it is also time for me to make something different with this blog and publish something just for a change — why not start talking about XDC 2025?

This year, I attended XDC 2025 in Vienna as an Igalia developer. I was thrilled to see some faces from people I worked with in the past and people I’m working with now. I had a chance to hang out with some folks I worked with at AMD (Harry, Alex, Leo, Christian, Shashank, and Pierre), many Igalians (Žan, Job, Ricardo, Paulo, Tvrtko, and many others), and finally some developers from Valve. In particular, I met Tímur in person for the first time, even though we have been talking for months about GPU recovery. Speaking of GPU recovery, we held a workshop on this topic together.

The workshop was packed with developers from different companies, which was nice because it added different angles on the topic. We began our discussion by focusing on job resubmission. Christian started by sharing a brief history of how the AMDGPU driver began handling resubmission and the associated issues. After learning from past experience, amdgpu ended up adopting the following approach:

  1. When a job causes a hang, call the driver-specific handler.
  2. Stop the scheduler.
  3. Copy all jobs from the ring buffer, minus the job that caused the issue, to a temporary ring.
  4. Reset the ring buffer.
  5. Copy back the other jobs to the ring buffer.
  6. Resume the scheduler.

Below, you can see one crucial series associated with amdgpu recovery implementation:

The next topic was a discussion around the replacement of drm_sched_resubmit_jobs() since this function became deprecated. Just a few drivers still use this function, and they need a replacement for that. Some ideas were floating around to extract part of the specific implementation from some drivers into a generic function. The next day, Philipp Stanner continued to discuss this topic in his workshop, DRM GPU Scheduler.

Another crucial topic discussed was improving GPU reset debuggability to narrow down which operations cause the hang (keep in mind that GPU recovery is a medicine, not the cure to the problem). Intel developers shared their strategy for dealing with this by obtaining hints from userspace, which helped them provide a better set of information to append to the devcoredump. AMD could adopt this alongside dumping the IB data into the devcoredump (I am already investigating this).

Finally, we discussed strategies to avoid regressions that reintroduce hangs. In summary, we have two lines of defense:

  • IGT: At the IGT level, we can have more tests that insert malicious instructions into the ring buffer, forcing the driver into an invalid state and triggering the recovery process.
  • HangTest suite: a tool that simulates some potential hangs using Vulkan. Some tests are already available in this suite, but we should explore more creative combinations to try to trigger hangs.
Lightning talk

This year, as always, XDC was super cool, packed with many engaging presentations that I highly recommend everyone check out. If you are interested, check the schedule and the presentation recordings available on the X.Org Foundation YouTube page. Anyway, I hope this blog post marks the inauguration of a new era for this site, where I will start posting more content ranging from updates to tutorials. See you soon.

Planet DebianValhalla's Things: Historically Inaccurate Hemd

Posted on November 17, 2025
Tags: madeof:atoms, craft:sewing

A woman wearing a white shirt with a tall, thick collar with lines of blue embroidery, closed in the front with small buttons; the sleeves are wide and billowing, gathered at the cuffs with more blue embroidery. She's keeping her hands at the waist so that the shirt, which reaches to mid thigh, doesn't look like a shapeless tent from the neck down.

After cartridge pleating and honeycombing, I was still somewhat in the mood for that kind of fabric manipulation, so I directed my internet searches in that vague direction and stumbled on this: https://katafalk.wordpress.com/2012/06/26/patternmaking-for-the-kampfrau-hemd-chemise/

Now, do I want to ever make myself a 16th century German costume, especially a kampfrau one? No! I’m from Lake Como! Those are the enemies who come down the Alps pillaging and bringing the Black Death with them!

Although I have to admit that at times during my day job I have found the idea of leaving everything to go march with the Jägermonsters attractive. You know, the exciting prospect of long days of march spent knitting sturdy socks, punctuated by the excitement of settling down in camp and having a chance of doing lots of laundry. Or something. Sometimes being a programmer will make you think odd things.

Anyway, going back to the topic, no, I didn’t need an historically accurate hemd. But I did need a couple more shirts for daily wear, I did want to try my hand at smocking, and this looked nice, and I was intrigued by the way the shaping of the neck and shoulder worked, and wondered how comfortable it would be.

And so, it had to be done.

I didn’t have any suitable linen, but I did have quite a bit of cotton voile, and since I wasn’t aiming at historical accuracy it looked like a good option for something where a lot of fabric had to go in a small space.

At first I considered making it with a bit less fabric than the one in the blog, but the voile was quite thin, so I kept the original measurements as they were, only adapting the sleeve/side seams to my size.

The same woman, from the back. This time the arms are out, so that the big sleeves show better, but the body does look like a tent.

With the pieces being rectangles the width of the fabric, I was able to have at least one side of selvedge on all seams, and took advantage of it by finishing the seams by simply folding the allowances to one side so that the selvedge was on top, and hemstitching them down as I would have done with a folded edge when felling.

Also, at first I wanted to make the smocking in white on white, but then I thought about a few hanks of electric blue floss I had in my stash, and decided to just go with it.

The initial seams were quickly made, then I started the smocking at the neck, and at that time the project went on hold while I got ready to go to DebConf. Then I came back and took some time to get back into a sewing mood, but finally the smocking on the neck was finished, and I could go on with the main sewing, which, as I expected, went decently fast for a handsewing project.

detail of the smocking in progress on the collar, showing the lines of basting thread I used as a reference, and the two in progress zig-zag lines being worked from each side.

While doing the diagonal smocking on the collar I counted the stitches to make each side the same length, which didn’t completely work because the gathers weren’t that regular to start with, and started each line from the two front openings going towards the center back, leaving a triangle of a different size right in the middle. I think overall it worked well enough.

Then there were a few more interruptions, but at last it was ready, just as the weather turned cold-ish and puffy shirts were no longer in season. It will be there for me next spring, though.

I did manage to wear it a few times and I have to say that the neck shaping is quite comfortable indeed: it doesn’t pull in odd ways like the classical historically accurate pirate shirt sometimes does, and the heavy gathering at the neck makes it feel padded and soft.

The same shirt belted (which looks nicer); one hand is held out to show that the cuff is a bit too wide and falls down over the hand.

I’m not as happy with the cuffs: the way I did them with just honeycombing means that they don’t need a closure, and after washing and a bit of steaming they lie nicely, but then they tend to relax in a wider shape. The next time I think I’ll leave a slit in the sleeves, possibly make a different type of smocking (depending on whether I have enough fabric) and then line them like the neck so that they are stable.

Because, yes, I think that there will be another time: I have a few more projects before that, and I want to spend maybe another year working from my stash, but then I think I’ll buy some soft linen and make at least another one, maybe with white-on-white smocking so that it will be easier to match with different garments.


Krebs on SecurityMicrosoft Patch Tuesday, November 2025 Edition

Microsoft this week pushed security updates to fix more than 60 vulnerabilities in its Windows operating systems and supported software, including at least one zero-day bug that is already being exploited. Microsoft also fixed a glitch that prevented some Windows 10 users from taking advantage of an extra year of security updates, which is nice because the zero-day flaw and other critical weaknesses affect all versions of Windows, including Windows 10.

Affected products this month include the Windows OS, Office, SharePoint, SQL Server, Visual Studio, GitHub Copilot, and Azure Monitor Agent. The zero-day threat concerns a memory corruption bug deep in the Windows innards called CVE-2025-62215. Despite the flaw’s zero-day status, Microsoft has assigned it an “important” rating rather than critical, because exploiting it requires an attacker to already have access to the target’s device.

“These types of vulnerabilities are often exploited as part of a more complex attack chain,” said Johannes Ullrich, dean of research for the SANS Technology Institute. “However, exploiting this specific vulnerability is likely to be relatively straightforward, given the existence of prior similar vulnerabilities.”

Ben McCarthy, lead cybersecurity engineer at Immersive, called attention to CVE-2025-60274, a critical weakness in a core Windows graphic component (GDI+) that is used by a massive number of applications, including Microsoft Office, web servers processing images, and countless third-party applications.

“The patch for this should be an organization’s highest priority,” McCarthy said. “While Microsoft assesses this as ‘Exploitation Less Likely,’ a 9.8-rated flaw in a ubiquitous library like GDI+ is a critical risk.”

Microsoft patched a critical bug in Office, CVE-2025-62199, that can lead to remote code execution on a Windows system. Alex Vovk, CEO and co-founder of Action1, said this Office flaw is a high priority because it is low complexity, needs no privileges, and can be exploited just by viewing a booby-trapped message in the Preview Pane.

Many of the more concerning bugs addressed by Microsoft this month affect Windows 10, an operating system that Microsoft officially ceased supporting with patches last month. As that deadline rolled around, however, Microsoft began offering Windows 10 users an extra year of free updates, so long as they register their PC to an active Microsoft account.

Judging from the comments on last month’s Patch Tuesday post, that registration worked for a lot of Windows 10 users, but some readers reported the option for an extra year of updates was never offered. Nick Carroll, cyber incident response manager at Nightwing, notes that Microsoft has recently released an out-of-band update to address issues when trying to enroll in the Windows 10 Consumer Extended Security Update program.

“If you plan to participate in the program, make sure you update and install KB5071959 to address the enrollment issues,” Carroll said. “After that is installed, users should be able to install other updates such as today’s KB5068781 which is the latest update to Windows 10.”

Chris Goettl at Ivanti notes that in addition to Microsoft updates today, third-party updates from Adobe and Mozilla have already been released. Also, an update for Google Chrome is expected soon, which means Edge will also be in need of its own update.

The SANS Internet Storm Center has a clickable breakdown of each individual fix from Microsoft, indexed by severity and CVSS score. Enterprise Windows admins involved in testing patches before rolling them out should keep an eye on askwoody.com, which often has the skinny on any updates gone awry.

As always, please don’t neglect to back up your data (if not your entire system) at regular intervals, and feel free to sound off in the comments if you experience problems installing any of these fixes.

[Author’s note: This post was intended to appear on the homepage on Tuesday, Nov. 11. I’m still not sure how it happened, but somehow this story failed to publish that day. My apologies for the oversight.]

Planet DebianSteinar H. Gunderson: Game slowrunning

In 2013, I finished Zelda II: The Adventure of Link (on emulator), which I'd first played the summers of 1992 and 1993 (or thereabouts). At ~20 years between first start and first finish, it's a kind of weird opposite of speedrunning, and a personal best for me.

But this weekend, I trounced that record; in 1990 (I think!), we got a 512 kB RAM expansion for the Amiga 500 for the first time, which allowed us to play our warezed copy of Pool of Radiance without understanding much of the story or really reading that much English. And a couple of weeks ago, I realized that I had bought the game on GOG.com in 2018 and not done much about it… and went to finish it.

Pool of Radiance, fighting Thyranthraxus

Due to poor planning on my part, this ended up being a bit of a challenge run, with no stat modification, only five people in the party, no excessive rerolling (only 2–3 for each), no multiclassing, no glitches, no save-states (after finding out they help very little :-) ), very limited NPCs (only story NPCs plus a couple of hireds immediately killed for items, as opposed to the Amiga runs where we basically had only one PC and the rest top-grade NPCs!) and no Gold Box Companion.

However: Extensive guide use (the Internet is great!), and savescumming. Oh my, so much savescumming.

So that's 35 years from first start to first finish. We'll see when I get to Police Quest I…

Planet DebianRuss Allbery: Cumulative haul

I haven't posted a book haul in forever, so lots of stuff stacked up, including a new translation of Bambi that I really should get around to reading.

Nicholas & Olivia Atwater — A Matter of Execution (sff)
Nicholas & Olivia Atwater — Echoes of the Imperium (sff)
Travis Baldree — Brigands & Breadknives (sff)
Elizabeth Bear — The Folded Sky (sff)
Melissa Caruso — The Last Hour Between Worlds (sff)
Melissa Caruso — The Last Soul Among Wolves (sff)
Haley Cass — Forever and a Day (romance)
C.L. Clark — Ambessa: Chosen of the Wolf (sff)
C.L. Clark — Fate's Bane (sff)
C.L. Clark — The Sovereign (sff)
August Clarke — Metal from Heaven (sff)
Erin Elkin — A Little Vice (sff)
Audrey Faye — Alpha (sff)
Emanuele Galletto, et al. — Fabula Ultima: Core Rulebook (rpg)
Emanuele Galletto, et al. — Fabula Ultima: Atlas High Fantasy (rpg)
Emanuele Galletto, et al. — Fabula Ultima: Atlas Techno Fantasy (rpg)
Alix E. Harrow — The Everlasting (sff)
Alix E. Harrow — Starling House (sff)
Antonia Hodgson — The Raven Scholar (sff)
Bel Kaufman — Up the Down Staircase (mainstream)
Guy Gavriel Kay — All the Seas of the World (sff)
N.K. Jemisin & Jamal Campbell — Far Sector (graphic novel)
Mary Robinette Kowal — The Martian Conspiracy (sff)
Matthew Kressel — Space Trucker Jess (sff)
Mark Lawrence — The Book That Held Her Heart (sff)
Yoon Ha Lee — Moonstorm (sff)
Michael Lewis (ed.) — Who Is Government? (non-fiction)
Aidan Moher — Fight, Magic, Items (non-fiction)
Saleha Mohsin — Paper Soldiers (non-fiction)
Ada Palmer — Inventing the Renaissance (non-fiction)
Suzanne Palmer — Driving the Deep (sff)
Suzanne Palmer — The Scavenger Door (sff)
Suzanne Palmer — Ghostdrift (sff)
Terry Pratchett — Where's My Cow (graphic novel)
Felix Salten & Jack Zipes (trans.) — The Original Bambi (classic)
L.M. Sagas — Cascade Failure (sff)
Jenny Schwartz — The House That Walked Between Worlds (sff)
Jenny Schwartz — House in Hiding (sff)
Jenny Schwartz — The House That Fought (sff)
N.D. Stevenson — Scarlet Morning (sff)
Rory Stewart — Politics on the Edge (non-fiction)
Emily Tesh — The Incandescent (sff)
Brian K. Vaughan & Fiona Staples — Saga #1 (graphic novel)
Scott Warren — The Dragon's Banker (sff)
Sarah Wynn-Williams — Careless People (non-fiction)

As usual, I have already read and reviewed a whole bunch of these. More than I had expected, actually, given that I've not had a great reading year this year so far.

I am, finally, almost caught up with reviews, with just one book read and not yet reviewed. And hopefully I'll have lots of time to read for the last month and a half of the year.

Planet DebianVasudev Kamath: Moving blog from self hosting to Github Pages

I haven't been blogging as much as I used to. For a while, I've been hosting my blog on a single-core DigitalOcean droplet, which cost me around $7 per month. It also hosted my mail server. Most of the time, the droplet was idle, and I wasn't actively using my personal email much. Since it was self-hosted, I didn't have confidence that it would always be up, so I relied on Gmail as my personal email for everything—from banking to social media.

Now, I feel this cost is just a waste of money, even though it's not huge. So, I decided to move my blog back to GitHub Pages, now published using GitHub Workflows. I've stopped my mail server for the time being, and it won't be reachable for a while. For any personal queries or comments, please reach me on my Personal Gmail.
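For readers curious about the mechanics, publishing a static blog this way typically takes only a small workflow file. Here is a hypothetical minimal sketch, not the author's actual setup: the filename, the ./public path, and the branch name are all assumptions.

```yaml
# .github/workflows/pages.yml -- hypothetical sketch, not the author's file.
# Assumes the rendered site already lives under ./public in the repository.
name: Deploy blog to GitHub Pages

on:
  push:
    branches: [main]

permissions:
  contents: read
  pages: write
  id-token: write

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/upload-pages-artifact@v3
        with:
          path: ./public
      - id: deployment
        uses: actions/deploy-pages@v4
```

A real setup would usually add a build step (Pelican, Hugo, Jekyll, etc.) before the upload, but the deploy half looks roughly like this.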

I'm not sure how active I'll be with blogging again, but I'll try my best to return to my old habit of writing at least a post every week or two. Not because people will read it, but because it gives me the option to explore new things, experiment, and take notes along the way.

365 TomorrowsThe Exchange

Author: Chris Lihou The parking lot was empty. Its single light projected a cone of semi-darkness, beyond which shadows could stealthily move. As instructed, I deposited the package at the base of the light and quickly retreated. In the light’s glow, I knew I’d make for an easy target. My handlers were aware of the […]

The post The Exchange appeared first on 365tomorrows.


Charles StrossInterim update

So, in the past month I've been stabbed in the right eye, successfully, at the local ophthalmology hospital.

Cataract surgery is interesting: bright lights, mask over the rest of your face, powerful local anaesthesia, constant flow of irrigation— they practically operate underwater. Afterwards there's a four week course of eye drops (corticosteroids for inflammation, and a two week course of an NSAID for any residual ache). I'm now long-sighted in my right eye, which is quite an experience, and it's recovered. And my colour vision in the right eye is notably improved, enough that my preferred screen brightness level for my left eye is painful to the right.

Drawbacks: firstly, my right eye has extensive peripheral retinopathy—I was half-blind in it before I developed the cataracts—and secondly, the op altered my prescription significantly enough that I can't read with it. I need to wait a month after I've had the second eye operation before I can go back to my regular ophthalmologist to be checked out and get a new set of prescription glasses. As I spend about 60 hours a week either reading or writing, I've been spending a lot of time with my right eye screwed shut (eye patches are uncomfortable). And I'm pretty sure my writing/reading is going to be a dumpster fire for about six weeks after the second eye is operated on. (New specs take a couple of weeks to come through from the factory.) I'll try cheap reading glasses in the mean time but I'm not optimistic: I am incapable of absorbing text through my ears (audiobooks and podcasts simply don't work for me—I zone out within seconds) and I can't write fiction using speech-to-text either (the cadences of speech are inimical to prose, even before we get into my more-extensive-than-normal vocabulary or use of confusing-to-robots neologisms).

In the meantime ...

I finished the first draft of Starter Pack at 116,500 words: it's with my agent. It is not finished and it is not sold—it definitely needs edits before it goes to any editors—but at least it is A Thing, with a beginning, a middle, and an end.

My next job (after some tedious business admin) is to pick up Ghost Engine and finish that, too: I've got about 20,000 words to go. If I'm not interrupted by surgery, it'll be done by the end of the year, but surgery will probably add a couple of months of delays. Then that, too, goes back to my agent—then hopefully to the UK editor who has been waiting patiently for it for a decade now, and then to find a US publisher. I must confess to some trepidation: for the first time in about two decades I am out of contract (except for the UK edition of GE) and the two big-ass series are finished—after The Regicide Report comes out next January 27th there's nothing on the horizon except for these two books set in an entirely new setting which is drastically different to anything I've done before. Essentially I've invested about 2-3 years' work on a huge gamble: and I won't even know if it's paid off before early 2027.

It's not a totally stupid gamble, though. I began Ghost Engine in 2015, when everyone was assuring me that space opera was going to be the next big thing: and space opera is still the next big thing, insofar as there's going to be a huge and ongoing market for raw escapism that lets people switch off from the world-as-it-is for a few hours. The Laundry Files was in trouble: who needs to escape into a grimdark alternate present where our politics has been taken over by Lovecraftian horrors now?

Indeed, you may have noticed a lack of blog entries talking about the future this year. It's because the future's so grim I need a floodlight to pick out any signs of hope. There is a truism that with authoritarians and fascists, every accusation they make is a confession—either a confession of something they've done, or of something they want to do. (They can't comprehend the possibility that not everybody shares their outlook and desires, so they attribute their own motivations to their opponents.) Well, for many decades now the far right have been foaming about a vast "international communist conspiracy", and what seems to be surfacing this decade is actually a vast international far-right conspiracy: from Trump and MAGA in the USA to Farage and Reform in the UK, to Orban's Fidesz in Hungary, to Putin in Russia and Erdogan in Turkey and Modi's Hindutva nationalists in India and Xi's increasingly authoritarian clamp-down in China, all the fascist insects have emerged from the woodwork at the same time. It's global.

I can discern some faint outlines in the darkness. Fascism is a reaction to uncertainty and downward spiraling living standards, especially among the middle classes. Over the past few decades globalisation of trade has concentrated wealth in a very small number of immensely rich hands, and the middle classes are being squeezed hard. At the same time, the hyper-rich feel themselves to be embattled and besieged. Those of them who own social media networks and newspapers and TV and radio channels are increasingly turning them into strident far-right propaganda networks, because historically fascist regimes have relied on an alliance of rich industrialists combined with the angry poor, who can be aimed at an identifiable enemy.

A big threat to the hyper-rich currently is the end of Moore's Law. Continuous improvements in semiconductor performance began to taper off after 2002 or thereabouts, and are now almost over. The tech sector is no longer actually producing significantly improved products each year: instead, it's trying to produce significantly improved revenue by parasitizing its consumers. ("Enshittification" as Cory Doctorow named it: I prefer to call the broader picture "crapitalism".) This means that it's really hard to invest for a guaranteed return on investment these days.

To make matters worse, we're entering an energy cost deflation cycle. Renewables have definitively won: last year it became cheaper to buy and add new photovoltaic panels to the grid in India than it was to mine coal from existing mines to burn in existing power stations. China, with its pivot to electric vehicles, is decarbonizing fast enough to have already passed its net zero goals for 2030: we have probably already passed peak demand for oil. PV panels are not only dirt cheap by the recent standards of 2015: they're still getting cheaper and they can be rolled out everywhere. It turns out that many agricultural crops benefit from shade: ground-dwellers coexist happily with PV panels on overhead stands, and farm animals also like to be able to get out of the sun. (This isn't the case for maize and beef, but consider root vegetables, brassicae, and sheep ...)

The oil and coal industries have tens of trillions of dollars of assets stranded underground, in the shape of fossil fuel deposits that are slightly too expensive to exploit commercially at this time. The historic bet was that these assets could be dug up and burned later, given that demand appeared to be a permanent feature of our industrial landscape. But demand is now falling, and sooner or later their owners are going to have to write off those assets because they've been overtaken by renewables. (Some oil is still going to be needed for a very long time—for plastics and the chemical industries—but it's a fraction of that which is burned for power, heating, and transport.)

We can see the same dynamic in miniature in the other current investment bubble, "AI data centres". It's not AI (it is, at best, deep learning) and it's being hyped and sold for utterly inappropriate purposes. This is in service to propping up the share prices of NVidia (the GPU manufacturer), OpenAI and Anthropic (neither of whom have a clear path to eventual profitability: they're the tech bubble du jour—call it dot-com 3.0) and also propping up the commercial real estate market and ongoing demand for fossil fuels. COVID19 and work from home trashed demand for large office space: data centres offer to replace this. AI data centres are also hugely energy-inefficient, which keeps those old fossil fuel plants burning.

So there's a perfect storm coming, and the people with the money are running scared, and to deal with it they're pushing bizarre, counter-reality policies: imposing tariffs on imported electric cars and solar panels, promoting conspiracy theories, selling the public on the idea that true artificial intelligence is just around the corner, and promoting hate (because it's a great distraction).

I think there might be a better future past all of this, but I don't think I'll be around to see it: it's at least a decade away (possibly 5-7 decades if we're collectively very unlucky). In the meantime our countries are being overrun by vicious xenophobes who hate everyone who doesn't conform to their desire for industrial feudalism.

Obviously pushing back against the fascists is important. Equally obviously, you can't push back if you're dead. I'm over 60 and not in great health so I'm going to leave the protests to the young: instead, I'm going to focus on personal survival and telling hopeful stories.

Planet DebianAndrew Cater: 2025-11-15 17:16 UTC Debian media testing for point release 13.2 of Trixie

*Busy* day in Cambridge. A roomful of people, large numbers of laptops and a lot of parallel installations.

Joined here by Emyr, Chris, Helen and Simon, with Isy doing speech installs from her university accommodation. Two Andys always makes it interesting. Steve providing breakfast, as ever.

We're almost there: the last test install is being repeated to flush out a possible bug. Other release processes are being done in the background.

Thanks again to Steve for hosting and all the hard work that goes into this from everybody.



Cryptogram Book Review: The Business of Secrets

The Business of Secrets: Adventures in Selling Encryption Around the World by Fred Kinch (May 24, 2024)

From the vantage point of today, it’s surreal reading about the commercial cryptography business in the 1970s. Nobody knew anything. The manufacturers didn’t know whether the cryptography they sold was any good. The customers didn’t know whether the crypto they bought was any good. Everyone pretended to know, thought they knew, or knew better than to even try to know.

The Business of Secrets is the self-published memoirs of Fred Kinch. He was a founder and vice president—mostly of sales—at a US cryptographic hardware company called Datotek, from the company’s founding in 1969 until 1982. It’s mostly a disjointed collection of stories about the difficulties of selling to governments worldwide, along with descriptions of the highs and (mostly) lows of foreign airlines, foreign hotels, and foreign travel in general. But it’s also about encryption.

Datotek sold cryptographic equipment in the era after rotor machines and before modern academic cryptography. The company initially marketed computer-file encryption, but pivoted to link encryption—low-speed data, voice, fax—because that’s what the market wanted.

These were the years when the NSA hired anyone promising in the field, and routinely classified—and thereby blocked—publication of academic mathematics papers of those they didn’t hire. They controlled the fielding of strong cryptography by aggressively using the International Traffic in Arms Regulations. Kinch talks about the difficulties in getting an export license for Datotek’s products; he didn’t know that the only reason he ever got that license was because the NSA was able to break his company’s stuff. He had no idea that his largest competitor, the Swiss company Crypto AG, was owned and controlled by the CIA and its West German equivalent. “Wouldn’t that have made our life easier if we had known that back in the 1970s?” Yes, it would. But no one knew.

Glimmers of the clandestine world peek out of the book. Countries like France ask detailed tech questions, borrow or buy a couple of units for “evaluation,” and then disappear again. Did they break the encryption? Did they just want to see what their adversaries were using? No one at Datotek knew.

Kinch “carried the key generator logic diagrams and schematics” with him—even today, it’s good practice not to rely on their secrecy for security—but the details seem laughably insecure: four linear shift registers of 29, 23, 13, and 7 bits, variable stepping, and a small nonlinear final transformation. The NSA probably used this as a challenge to its new hires. But Datotek didn’t know that, at the time.
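For readers curious what a design in that general shape looks like, here is a hypothetical sketch: four LFSRs of 29, 23, 13, and 7 bits feeding a small nonlinear combiner. Only the register lengths come from the book; the tap positions, seeds, and combining function below are invented, since the real ones were never published.

```python
# Hypothetical sketch of a keystream generator in the general shape the book
# describes. Register lengths (29, 23, 13, 7) are from the text; all taps,
# seeds, and the combiner are invented for illustration.

class LFSR:
    """A Fibonacci linear-feedback shift register."""
    def __init__(self, length, taps, seed):
        self.length = length
        self.taps = taps  # bit positions XORed together to form the feedback
        self.state = seed & ((1 << length) - 1)
        assert self.state != 0, "an all-zero state never leaves zero"

    def step(self):
        fb = 0
        for t in self.taps:
            fb ^= (self.state >> t) & 1
        out = self.state & 1                                  # emit LSB
        self.state = (self.state >> 1) | (fb << (self.length - 1))
        return out


def keystream(regs, n):
    """Toy nonlinear combiner: majority of three registers XOR the fourth."""
    bits = []
    for _ in range(n):
        a, b, c, d = (r.step() for r in regs)
        maj = (a & b) | (a & c) | (b & c)
        bits.append(maj ^ d)
    return bits


regs = [
    LFSR(29, (0, 2), 0x1234567),
    LFSR(23, (0, 5), 0x0ABCDE),
    LFSR(13, (0, 4, 3, 1), 0x1357),
    LFSR(7, (0, 6), 0x5A),
]
bits = keystream(regs, 16)
```

A construction like this falls to divide-and-conquer correlation attacks, which target each short register separately; registers this small offer little resistance to a well-resourced attacker, which fits the book's account of the NSA's reaction.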

Kinch writes: “The strength of the cryptography had to be accepted on trust and only on trust.” Yes, but it’s so, so weird to read about it in practice. Kinch demonstrated the security of his telephone encryptors by hooking a pair of them up and having people listen to the encrypted voice. It’s rather like demonstrating the safety of a food additive by showing that someone doesn’t immediately fall over dead after eating it. (In one absolutely bizarre anecdote, an Argentine sergeant with a “hearing defect” could understand the scrambled analog voice. Datotek fixed its security, but only offered the upgrade to the Argentines, because no one else complained. As I said, no one knew anything.)

In his postscript, he writes that even if the NSA could break Datotek’s products, they were “vastly superior to what [his customers] had used previously.” Given that the previous devices were electromechanical rotor machines, and that his primary competition was a CIA-run operation, he’s probably right. But even today, we know nothing about any other country’s cryptanalytic capabilities during those decades.

A lot of this book has a “you had to be there” vibe. And it’s mostly tone-deaf. There is no real acknowledgment of the human-rights-abusing countries on Datotek’s customer list, and how their products might have assisted those governments. But it’s a fascinating artifact of an era before commercial cryptography went mainstream, before academic cryptography became approved for US classified data, before those of us outside the triple fences of the NSA understood the mathematics of cryptography.

This book review originally appeared in AFIO.

365 TomorrowsTaking notes

Author: Colin Jeffrey Nicole Celoni settled into a loungeroom chair, wireless earbuds in, ready to read and listen to music. She flipped open her book, pressed play on her MP3 player. Nothing. Confused, she checked the screen. Every file was gone. Panic rising, she tapped through folders. Empty. No playlists, no albums. Years of downloads […]

The post Taking notes appeared first on 365tomorrows.


Cryptogram Friday Squid Blogging: Pilot Whales Eat a Lot of Squid

Short-finned pilot whales (Globicephala macrorhynchus) eat a lot of squid:

To figure out a short-finned pilot whale’s caloric intake, Gough says, the team had to combine data from a variety of sources, including movement data from short-lasting tags, daily feeding rates from satellite tags, body measurements collected via aerial drones, and sifting through the stomachs of unfortunate whales that ended up stranded on land.

Once the team pulled all this data together, they estimated that a typical whale will eat between 82 and 202 squid a day. To meet their energy needs, a whale will have to consume an average of 140 squid a day. Annually, that’s about 74,000 squid per whale. For all the whales in the area, that amounts to about 88,000 tons of squid eaten every year.

Research paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

David BrinA midweek rant about Sumo vs Judo and dems need to stop whining.

Robert Reich refers to the disciplined nature of Republicans and fractious diversity of Democrats - which many of YOU are displaying. " ...a fundamental asymmetry at the heart of American politics: Democrats are undisciplined. Republicans are regimented.

"For as long as I remember, Democrats have danced to their own separate music while Republicans march to a single drummer. That was the story in 1994, when Bill Clinton couldn’t get the Democratic Senate to go along with his health care plan, on which Clinton spent almost all his political capital."


And so on...


Alas, in doing so this time, Reich (whom I respect) ignores that Schumer did the right thing tactically. It was time. Public anger at GOPpers won the Blue Wave of a week ago. But meanwhile, the shutdown was driving civil servants by the thousands to resign, which Republicans WANTED. And hurting millions with SNAP and HUNDREDS of other service denials, including food inspections and air travel safety... and with no chance of getting the GOP to negotiate. None. Zero.


Negotiate? THIS Republican Party?


Dig it. Shutting the government down is a FEATURE to them! Without any drawbacks. Civil servants fleeing? Great! Freedom from inspections? Fine! Stockpiling funds to reduce embarrassing deficits? Sure! Taking anti-terror agents off duty, as Bush did in 2001, leading up to 9/11? Bring on whatever might give Trump his distraction from Epstein etc.


The dems timed this exactly right! It... was... time for a judo move that allowed the House to get into session and to reveal Mike Johnson's next writhe to hide the Epstein files.


Finally, there will be another shutdown in JANUARY! Close to GOP primary season. The only elections that matter in GOP-gerrymandered districts. (So register Republican if you live in such a district!!)


ANY/all of you screeching at Schumer? You want utterly pointless/symbolic sumo, when judo is called for! Schumer may not be our Grant or Sherman - maybe Newsom will be - but your yowling "I'll-never-support-our-generals!" is not a way to win a civil war.


PS: Reich cites George Lakoff's theory that Dems want a mommy while Repubs want a strict father. I like Lakoff far more than that jibberer Chomsky. But this metaphor is nonsense! Any decent dad is all about NEGOTIATION. No. THE GOP IS NOW ABOUT MIDDLE SCHOOL BULLIES. They don't want a dad's pragmatic "Let's talk this through." They want howls from nipple-twisted nerd-victims. And your MAGA-joe is back on that school playground, desperately sucking up to the top bully in order to be part of the gang of nipple-twisters, not one of the twisted.


Your tears are their food.


But take the SUMO vs JUDO parallel with you. You whiners yowling at Schumer dream of leaders who will out-Trump Trump at gaudy push 'n' shove. But that is not how we'll win.



Cryptogram Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

  • My coauthor Nathan E. Sanders and I are speaking at the Rayburn House Office Building in Washington, DC at noon ET on November 17, 2025. The event is hosted by the POPVOX Foundation and the topic is “AI and Congress: Practical Steps to Govern and Prepare.”
  • I’m speaking on “Integrity and Trustworthy AI” at North Hennepin Community College in Brooklyn Park, Minnesota, USA, on Friday, November 21, 2025, at 2:00 PM CT. The event is cohosted by the college and The Twin Cities IEEE Computer Society.
  • Nathan E. Sanders and I will be speaking at the MIT Museum in Cambridge, Massachusetts, USA, on December 1, 2025, at 6:00 pm ET.
  • Nathan E. Sanders and I will be speaking at a virtual event hosted by City Lights on the Zoom platform, on December 3, 2025, at 6:00 PM PT.
  • I’m speaking and signing books at the Chicago Public Library in Chicago, Illinois, USA, on February 5, 2026. Details to come.

The list is maintained on this page.