Author: Daniel Rogers “Victor, make coffee and display the weather.” I sank into my kitchen chair, scratching my messed-up mop of hair, wishing I’d gone to bed earlier. “You failed to obtain the recommended eight hours of sleep. It would be beneficial to have a cup of strong coffee.” “No, please. You know I don’t […]
A new release 0.1.7 of RcppZiggurat
is now on the CRAN network for
R. This marks the first release
in four and a half years.
The RcppZiggurat package updates the code for the Ziggurat generator by Marsaglia and others, which provides very fast draws from a Normal distribution. The package provides a simple C++ wrapper class for the generator, improving on the very basic macros, and permits comparison among several existing Ziggurat implementations. This can be seen in the figure, where the Ziggurat implementation from this package dominates those accessed from the GSL, QuantLib and Gretl—all of which are still much faster than the default Normal generator in R (which is, of course, of higher code complexity).
This release brings a number of changes. Notably, based on the work we did with the new package zigg (more on that in a second), we now also expose the Exponential generator and the underlying Uniform generator. Otherwise many aspects of the package have been refreshed: updated builds, updated links, updated CI processes, more use of DOIs, and more. The other big news is zigg, which should now be the preferred choice for deploying Ziggurat thanks to its much lighter-weight, zero-dependency setup.
The NEWS file entry below lists all changes.
Changes in version 0.1.7
(2025-03-22)
The CI setup was updated to use run.sh from r-ci (Dirk).
The Windows build was updated to GSL 2.7, and UCRT support was
added (Jeroen in #16).
Manual pages now use JSS DOIs for references per CRAN
request
README.md links and badges have been updated
Continuous integration actions have been updated several
times
The DESCRIPTION file now uses Authors@R as mandated
Use of multiple cores is eased via a new helper function
reflecting option mc.cores or architecture defaults, used in
tests
An inline function has been added to avoid a compiler
nag
Support for exponential RNG draws zrexp has been
added; the internal uniform generator is now also exposed via
zruni
The vignette bibliography has been updated, and switched to
DOIs
New package zigg is now mentioned in
DESCRIPTION and vignette
The Open Source Initiative has two classes of board seats: Affiliate seats, and Individual Member seats.
In the upcoming election, each affiliate can nominate a candidate, and each affiliate can cast a vote for the Affiliate candidates, but there's only 1 Affiliate seat available. I initially expressed interest in being nominated as an Affiliate candidate via Debian. But since Bradley Kuhn is also running for an Affiliate seat with a similar platform to me, especially with regards to the OSAID, I decided to run as part of an aligned "ticket" as an Individual Member to avoid contention for the 1 Affiliate seat.
Bradley and I discussed running on a similar ticket around 8/9pm Pacific, and I submitted my candidacy around 9pm PT on 17 February.
I was dismayed when I received the following mail from Nick Vidal:
Dear Luke,
Thank you for your interest in the OSI Board of Directors election. Unfortunately, we are unable to accept your application as it was submitted after the official deadline of Monday Feb 17 at 11:59 pm UTC. To ensure a fair process, we must adhere to the deadline for all candidates.
We appreciate your enthusiasm and encourage you to stay engaged with OSI’s mission. We hope you’ll consider applying in the future or contributing in other meaningful ways.
The OSI's contact address is in California, so it seems arbitrary and capricious to retroactively define all of these processes as being governed by UTC.
Accordingly, I was not able to participate in the "potential board director" info sessions, but people who attended heard that the importance of accommodating differing TZ's was discussed during the info session, and that OSI representatives mentioned they try to accommodate everyone's TZ's. This seems in sharp contrast with the above policy.
I urge the OSI to reconsider this policy and allow me to stand for an Individual seat in the current cycle.
Upd, N.B.: to people writing about this, I use they/them pronouns
Warning: this is a long ramble I wrote after an outage of my home
internet. You'll get your regular scheduled programming shortly.
I didn't realize this until relatively recently, but we're at war.
Fascists and capitalists are trying to take over the world, and it's
bringing utter chaos.
We're more numerous than them, of course: this is only a handful of
people screwing everyone else over, but they've accumulated so much
wealth and media control that it's getting really, really hard to move
around.
Everything is surveilled: people are carrying tracking and recording
devices in their pockets at all times, or they drive around in
surveillance machines. Payments are all turning digital. There's
cameras everywhere, including in cars. Personal data leaks are so
common people kind of assume their personal address, email address,
and other personal information has already been leaked.
The internet itself is collapsing: most people are using the network
only as a channel to reach a "small" set of "hyperscalers":
mind-bogglingly large datacenters that don't really operate like the
old internet. Once you reach the local endpoint, you're not on the
internet anymore. Netflix, Google, Facebook (Instagram, Whatsapp,
Messenger), Apple, Amazon, Microsoft (Outlook, Hotmail, etc), all
those things are not really the internet anymore.
Those companies operate over the "internet" (as in the
TCP/IP network), but they are not an "interconnected network" as
much as their own, gigantic silos so much bigger than everything else
that they essentially dictate how the network operates, regardless of
standards. You access it over "the web" (as in "HTTP") but
the fabric is not made of interconnected links that cross sites: all
those sites are trying really hard to keep you captive on their
platforms.
Besides, you think you're writing an email to the state department,
for example, but you're really writing to Microsoft Outlook. That app
your university or border agency tells you to install, the backend is
not hosted by those institutions, it's on Amazon. Heck, even Netflix
is on Amazon.
Meanwhile I've been operating my own mail server first under my bed
(yes, really) and then in a cupboard or the basement for almost three
decades now. And what for?
So I can tell people I can? Maybe!
I guess the reason I'm doing this is the same reason people are
suddenly asking me about the (dead) mesh again. People are
worried and scared that the world has been taken over, and they're
right: we have gotten seriously screwed.
It's the same reason I keep doing radio, minimally know how to grow
food, ride a bike, build a shed, paddle a canoe, archive and document
things, talk with people, host an assembly. Because, when push comes
to shove, there's no one else who's going to do it for you, at least
not the way that benefits the people.
The Internet is one of humanity's greatest accomplishments. Obviously,
oligarchs and fascists are trying to destroy it. I just didn't expect
the tech bros to be flipping to that side so easily. I thought we were
friends, but I guess we are, after all, enemies.
That said, that old internet is still around. It's getting harder to
host your own stuff at home, but it's not impossible. Mail is tricky
because of reputation, but it's also tricky in the cloud (don't get
fooled!), so it's not that much easier (or cheaper) there.
So there are things you can do, if you're into tech.
Share your wifi with your neighbours.
Build a LAN. Throw a wire over to your neighbour too, it works better
than wireless.
This morning, internet was down at home. The last time I had such an
issue was in February 2023, when my
provider was Oricom. Now I'm with a business service at Teksavvy
Internet (TSI), where I pay $100 per month for a 250/50 Mbps
business package, with a static IP address, on which I run, well,
everything: email services, this website, etc.
Mitigation
Email
The main problem when the service goes down like this for prolonged
outages is email. Mail is pretty resilient to failures like this but
after some delay (which varies according to the other end), mail
starts to drop. I am actually not sure what the various settings are
among different providers, but I would assume mail is typically kept
for about 24h, so that's our mark.
Last time, I setup VMs at Linode and Digital Ocean to deal better with
this. I have actually kept those VMs running as DNS servers until now,
so that part is already done.
I had fantasized about Puppetizing the mail server configuration so
that I could quickly spin up mail exchangers on those machines. But
now I am realizing that my Puppet server is one of the services that's
down, so this would not work, at least not unless the manifests can be
applied without a Puppet server (say with puppet apply).
Thankfully, my colleague groente did amazing work to refactor our
Postfix configuration in Puppet at Tor, and that gave me the
motivation to reproduce the setup in the lab. So I have finally
Puppetized part of my mail setup at home. That used to be hand-crafted
experimental stuff documented in a couple of pages in this wiki, but
is now being deployed by Puppet.
It's not complete yet: spam filtering (including DKIM checks and
graylisting) are not implemented yet, but that's the next step,
presumably to do during the next outage. The setup should be
deployable with puppet apply, however, and I have refined that
mechanism a little bit, with the run script.
Heck, it's not even deployed yet. But the hard part / grunt work is
done.
Other
The outage was "short" enough (5 hours) that I didn't take time to
deploy the other mitigations I had deployed in the previous incident.
But I'm starting to seriously consider deploying a web (and caching)
reverse proxy so that I endure such problems more gracefully.
Side note on proper services
Typically, I tend to think of a properly functioning service as having
four things:
backups
documentation
monitoring
automation
high availability
Yes, I miscounted. This is why you have high availability.
Backups
Duh. If data is maliciously or accidentally destroyed, you need a copy
somewhere. Preferably in a way that malicious joe can't get to.
Documentation
You probably know this is hard, and this is why you're not doing it. Do it anyways: you'll think it sucks, but you'll be really grateful for whatever scraps you wrote when you're in trouble.
Monitoring
If you don't have monitoring, you'll know it fails too late, and you
won't know it recovers. Consider high availability, work hard to
reduce noise, and don't have machines wake people up: that's literally
torture and is against the Geneva Convention.
Consider predictive algorithms to prevent failures, like "add storage
within 2 weeks before this disk fills up".
This is harder than you think.
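To make that concrete, here is a minimal sketch of such a predictive check, assuming simple linear extrapolation over recent disk-usage samples; the sample values, capacity, and the two-week threshold are illustrative, not taken from any actual setup.

from datetime import datetime

def days_until_full(samples, capacity_bytes):
    # samples: list of (datetime, used_bytes), oldest first
    if len(samples) < 2:
        return None  # not enough history to extrapolate
    (t0, u0), (t1, u1) = samples[0], samples[-1]
    elapsed_days = (t1 - t0).total_seconds() / 86400
    growth_per_day = (u1 - u0) / elapsed_days
    if growth_per_day <= 0:
        return None  # usage flat or shrinking: no predicted fill date
    return (capacity_bytes - u1) / growth_per_day

# Alert two weeks ahead, as in the "add storage within 2 weeks" example.
samples = [(datetime(2025, 3, 1), 700 * 10**9),
           (datetime(2025, 3, 15), 760 * 10**9)]
remaining = days_until_full(samples, capacity_bytes=1000 * 10**9)
if remaining is not None and remaining < 14:
    print(f"disk predicted full in {remaining:.1f} days, add storage now")

A real monitoring system would of course fit over many samples and account for bursts, which is part of why this is harder than it looks.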
Automation
Make it easy to redeploy the service elsewhere.
Yes, I know you have backups. That is not enough: that typically
restores data and while it can also include configuration, you're
going to need to change things when you restore, which is what
automation (or call it "configuration management" if you will) will do
for you anyways.
This also means you can do unit tests on your configuration, otherwise
you're building legacy.
This is probably as hard as you think.
High availability
Make it not fail when one part goes down.
Eliminate single points of failures.
This is easier than you think, except for storage and DNS (which, I
guess, means it's harder than you think too).
Assessment
In the above 5 items, I check two:
backups
documentation
And barely: I'm not happy about the offsite backups, and my
documentation is much better at work than at home (and even there, I
have a 15 year backlog to catch up on).
I barely have monitoring: Prometheus is scraping parts of the infra,
but I don't have any sort of alerting -- by which I don't mean
"electrocute myself when something goes wrong", I mean "there's a set
of thresholds and conditions that define an outage and I can look at
it".
Automation is wildly incomplete. My home server is a random collection
of old experiments and technologies, ranging from Apache with Perl and
CGI scripts to Docker containers running Golang applications. Most of
it is not Puppetized (but the ratio is growing). Puppet itself
introduces a huge attack vector with kind of catastrophic lateral
movement if the Puppet server gets compromised.
And, fundamentally, I am not sure I can provide high availability in
the lab. I'm just this one guy running my home network, and I'm
growing older. I'm thinking more about winding things down than
building things now, and that's just really sad, because I feel
we're losing (well that escalated
quickly).
Resolution
In the end, I didn't need any mitigation and the problem fixed
itself. I did do quite a bit of cleanup so that feels somewhat good,
although I despaired quite a bit at the amount of technical debt I've
accumulated in the lab.
Timeline
Times are in UTC-4.
6:52: IRC bouncer goes offline
9:20: called TSI support, waited on the line 15 minutes then was
told I'd get a call back
9:54: outage apparently detected by TSI
11:00: no response, tried calling back support again
11:10: confirmed bonding router outage, no official ETA but "today",
source of the 9:54 timestamp above
Author: Julie Zack “Starlight, Starbright, First star I see tonight, Wish I may, Wish I might, Have this wish, I wish tonight.” Enid loved when her older sister, Tracy, spoke the words at bedtime. “Do you remember the stars?” Enid asked. “I do,” Tracy said, looking somehow both happy and sad. Enid couldn’t understand the […]
Authorities in at least two U.S. states last week independently announced arrests of Chinese nationals accused of perpetrating a novel form of tap-to-pay fraud using mobile devices. Details released by authorities so far indicate the mobile wallets being used by the scammers were created through online phishing scams, and that the accused were relying on a custom Android app to relay tap-to-pay transactions from mobile devices located in China.
Image: WLVT-8.
Authorities in Knoxville, Tennessee last week said they arrested 11 Chinese nationals accused of buying tens of thousands of dollars worth of gift cards at local retailers with mobile wallets created through online phishing scams. The Knox County Sheriff’s office said the arrests are considered the first in the nation for a new type of tap-to-pay fraud.
Responding to questions about what makes this scheme so remarkable, Knox County said that while it appears the fraudsters are simply buying gift cards, in fact they are using multiple transactions to purchase various gift cards and are plying their scam from state to state.
“These offenders have been traveling nationwide, using stolen credit card information to purchase gift cards and launder funds,” Knox County Chief Deputy Bernie Lyon wrote. “During Monday’s operation, we recovered gift cards valued at over $23,000, all bought with unsuspecting victims’ information.”
Asked for specifics about the mobile devices seized from the suspects, Lyon said “tap-to-pay fraud involves a group utilizing Android phones to conduct Apple Pay transactions utilizing stolen or compromised credit/debit card information,” [emphasis added].
Lyon declined to offer additional specifics about the mechanics of the scam, citing an ongoing investigation.
Ford Merrill works in security research at SecAlliance, a CSIS Security Group company. Merrill said there aren’t many valid use cases for Android phones to transmit Apple Pay transactions. That is, he said, unless they are running a custom Android app that KrebsOnSecurity wrote about last month as a part of a deep dive into the sprawling operations of China-based phishing cartels that are breathing new life into the payment card fraud industry (a.k.a. “carding”).
How are these China-based phishing groups obtaining stolen payment card data and then loading it onto Google and Apple phones? It all starts with phishing.
If you own a mobile phone, the chances are excellent that at some point in the past two years it has received at least one phishing message that spoofs the U.S. Postal Service to supposedly collect some outstanding delivery fee, or an SMS that pretends to be a local toll road operator warning of a delinquent toll fee.
These messages are being sent through sophisticated phishing kits sold by several cybercriminals based in mainland China. And they are not traditional SMS phishing or “smishing” messages, as they bypass the mobile networks entirely. Rather, the missives are sent through the Apple iMessage service and through RCS, the functionally equivalent technology on Google phones.
People who enter their payment card data at one of these sites will be told their financial institution needs to verify the small transaction by sending a one-time passcode to the customer’s mobile device. In reality, that code will be sent by the victim’s financial institution in response to a request by the fraudsters to link the phished card data to a mobile wallet.
If the victim then provides that one-time code, the phishers will link the card data to a new mobile wallet from Apple or Google, loading the wallet onto a mobile phone that the scammers control. These phones are then loaded with multiple stolen wallets (often between 5-10 per device) and sold in bulk to scammers on Telegram.
An image from the Telegram channel for a popular Chinese smishing kit vendor shows 10 mobile phones for sale, each loaded with 5-7 digital wallets from different financial institutions.
Merrill found that at least one of the Chinese phishing groups sells an Android app called “Z-NFC” that can relay a valid NFC transaction to anywhere in the world. The user simply waves their phone at a local payment terminal that accepts Apple or Google pay, and the app relays an NFC transaction over the Internet from a phone in China.
“I would be shocked if this wasn’t the NFC relay app,” Merrill said, concerning the arrested suspects in Tennessee.
Merrill said the Z-NFC software can work from anywhere in the world, and that one phishing gang offers the software for $500 a month.
“It can relay both NFC enabled tap-to-pay as well as any digital wallet,” Merrill said. “They even have 24-hour support.”
On March 16, ABC10, the ABC affiliate in Sacramento, Calif., aired a segment about two Chinese nationals who were arrested after using an app to run stolen credit cards at a local Target store. The news story quoted investigators saying the men were trying to buy gift cards using a mobile app that cycled through more than 80 stolen payment cards.
ABC10 reported that while most of those transactions were declined, the suspects still made off with $1,400 worth of gift cards. After their arrests, both men reportedly admitted that they were being paid $250 a day to conduct the fraudulent transactions.
Merrill said it’s not unusual for fraud groups to advertise this kind of work on social media networks, including TikTok.
A CBS News story on the Sacramento arrests said one of the suspects tried to use 42 separate bank cards, but that 32 were declined. Even so, the man still was reportedly able to spend $855 in the transactions.
Likewise, the suspect’s alleged accomplice tried 48 transactions on separate cards, finding success 11 times and spending $633, CBS reported.
“It’s interesting that so many of the cards were declined,” Merrill said. “One reason this might be is that banks are getting better at detecting this type of fraud. The other could be that the cards were already used and so they were already flagged for fraud even before these guys had a chance to use them. So there could be some element of just sending these guys out to stores to see if it works, and if not they’re on their own.”
Merrill’s investigation into the Telegram sales channels for these China-based phishing gangs shows their phishing sites are actively manned by fraudsters who sit in front of giant racks of Apple and Google phones that are used to send the spam and respond to replies in real time.
In other words, the phishing websites are powered by real human operators as long as new messages are being sent. Merrill said the criminals appear to send only a few dozen messages at a time, likely because completing the scam takes manual work by the human operators in China. After all, most one-time codes used for mobile wallet provisioning are generally only good for a few minutes before they expire.
Deravi, an associate professor of chemistry and chemical biology at Northeastern University, recently published a paper in the Journal of Materials Chemistry C that sheds new light on how squid use organs that essentially function as organic solar cells to help power their camouflage abilities.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
The Atlantic has a search tool that allows you to search for specific works in the “LibGen” database of copyrighted works that Meta used to train its AI models. (The rest of the article is behind a paywall, but not the search tool.)
It’s impossible to know exactly which parts of LibGen Meta used to train its AI, and which parts it might have decided to exclude; this snapshot was taken in January 2025, after Meta is known to have accessed the database, so some titles here would not have been available to download.
Still…interesting.
Searching my name yields 199 results: all of my books in different versions, plus a bunch of shorter items.
The UK’s National Cyber Security Centre (part of GCHQ) released a timeline—also see their blog post—for migration to quantum-computer-resistant cryptography.
Two years after OpenAI launched ChatGPT 3.5, humanity is not on the cusp of
extinction and Elon Musk seems more responsible for job loss than any AI agent.
It started in 2023. Web sites that performed quite well with a steady
viewership started having traffic spikes. These were relatively easy to
diagnose, since most of the spikes came from visitors that properly identified
themselves as bots, allowing us to see that the big players - OpenAI, Bing,
Google, Facebook - were increasing their efforts to scrape as much content from
web sites as possible.
Small brochure sites were mostly unaffected because they could be scraped in a
matter of minutes. But large sites with an archive of high quality human
written content were getting hammered. Any web site with a search feature, a calendar, or any other interface that generated an effectively endless set of links to follow was particularly vulnerable.
But hey, that’s what robots.txt is for, right? To
tell robots to back off if you don’t want them scraping your site?
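For reference, this is roughly the check a well-behaved crawler is supposed to make before fetching a page: a minimal sketch using Python's standard urllib.robotparser, with a placeholder site and user agent.

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.org/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

user_agent = "ExampleBot"
url = "https://example.org/calendar?year=2030&month=7"

if rp.can_fetch(user_agent, url):
    print("robots.txt allows this fetch")
else:
    print("robots.txt disallows this fetch; a polite bot backs off")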
Eventually, the cracks began to show. Bots were ignoring robots.txt (did they
ever pay that much attention to it in the first place?). Furthermore, rate
limiting requests by user agent also began to fail. When you post a link on
Facebook, a bot identifying itself as “facebookexternalhit” is invoked to
preview the page so it can show a picture and other meta data. We don’t want to
rate limit that bot, right? Except, Facebook is also using this bot to scrape
your site, often bringing your site to its knees. And don’t get me started on
TwitterBot.
Eventually, it became clear that the majority of the armies of bots scraping
our sites have completely given up on identifying themselves as bots and are
instead using user agents indistinguishable from regular browsers. By using
thousands of different IP addresses, it has become really hard to separate the
real humans from the bots.
Now what?
So, no, unfortunately, your web site is not suddenly getting really popular.
And, you are blessed with a whole new set of strategic decisions.
Fortunately, May First has undergone a major infrastructure transition,
resulting in centralized logging of all web sites and a fleet of web proxy
servers that intercept all web traffic. Centralized logging means we can
analyze traffic and identify bots more easily, and a web proxy fleet allows us
to more easily implement rules across all web sites.
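As a rough illustration of the kind of analysis centralized logging enables, here is a minimal Python sketch that tallies requests per client IP across aggregated access logs and flags the heaviest hitters; the log path, regex, and threshold are assumptions for the sketch, not May First's actual tooling.

import re
from collections import Counter

LOG_PATH = "/var/log/web/access.log"   # hypothetical aggregated log
THRESHOLD = 1000                       # requests per log window

# Combined log format lines start with the client IP.
ip_re = re.compile(r"^(\S+) ")

counts = Counter()
with open(LOG_PATH) as log:
    for line in log:
        m = ip_re.match(line)
        if m:
            counts[m.group(1)] += 1

for ip, hits in counts.most_common():
    if hits < THRESHOLD:
        break
    print(f"{ip}: {hits} requests -- likely bot, candidate for rate limiting")

In practice the bots spread their traffic across thousands of addresses, so per-IP counts are only a starting point.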
However, even with all of our latest changes and hours upon hours of work to
keep out the bots, our members are facing some hard decisions about maintaining
an open web.
One member of May First provides Google translations of their web site to every
language available. But wow, that is now a disaster because instead of having
every bot under the sun scraping all 843 (a made-up number) pieces of unique
content on their site, the same bots are scraping 843 * (number of available
languages) pieces of content on their site. Should they stop providing this
translation service in order to ensure people can access their site in the
site’s primary language?
Should web sites turn off their search features that include drop down options
of categories to prevent bots from systematically refreshing the search page
with every possible combination of search terms?
Do we need to alter our calendar software to avoid providing endless links into
the future (ok, that is an easy one)?
What’s next?
Something has to change.
Lock down web 2.0. Web 2.0 brought us wonderful dynamic web sites, which
Drupal and WordPress and many other pieces of amazing software have
supported for over a decade. This is the software that is getting bogged
down by bots. Maybe we need to figure out a way to lock down the dynamic
aspects of this software to logged in users and provide static content for
everyone else?
Paywalls and accounts everywhere. There’s always been an amazing
non-financial reward to providing a web site with high quality movement
oriented content for free. It populates the search engines, provides links
to inspiring and useful content in moments of crises, and can galvanize
movements. But these moments of triumph happen between long periods of hard
labor that now seems to mostly feed capitalist AI scumbags. If we add a new
set of expenses and labor to keep the sites running for this purpose, how
sustainable is that? Will our treasure of free movement content have to move
behind paywalls or logins? If we provide logins, will that keep the bots out
or just create a small hurdle for them to automate the account creation
process? What happens when we can’t search for this kind of content via
search engines?
Cutting deals. What if our movement content providers are forced to cut
deals with the AI entrepreneurs to allow the paying scumbags to fund the content
creation? Eww. Enough said.
Bot detection. Maybe we just need to get better at bot detection? This
will surely be an arms race, but would have some good benefits. Bots have
also been filling out our forms and populating our databases with spam,
testing credit cards against our donation pages, conducting denial of
service attacks and all kinds of other irritating acts of vandalism. If we
were better at stopping bots automatically it would have a lot of benefits.
But what impact would it have on our web sites and the experience of using
them? What about “good” bots (RSS feed readers, payment processors,
web hooks, uptime detectors)? Will we cut the legs off any developer trying
to automate something?
I’m not really sure where this is going, but it seems that the world wide web is
about to head in a new direction.
Today we have a whole batch of category errors, picked out from the rash of submissions and a
few that have been festering on the shelf. Just for fun,
I threw in an ironic off-by-some meta-error. See if you can spot it.
Adam R.
"I'm looking for hotel rooms for the 2026 Winter Olympics in Milan-Cortina. Most hotels haven't opened up reservations yet, except for ridiculously overprice hospitality packages. This search query found NaN facilities available, which equates to one very expensive apartment. I guess one is not a number now?"
Intrepid traveler
BJH
had a tough time at the Intercontinental. I almost feel sympathy. Almost.
"I stare at nulls at home all the time so it made me feel comfortable to see them at the hotel when traveling. And what is that 'INTERCONTINENTAL W...' at the top? I may never know!"
Hoping to find out, BJ clicked through the mystery menu and discovered... this. But even worse,
"There was no exit: Clicking Exit did nothing and neither did any of the buttons on the remote. Since I'd received more entertainment than usual from a hotel screen I just turned everything off."
Striking out for some streaming entertainment
Dmitry NoLastName
was silently stymied
by this double-decker from Frontier.com.
No streaming needed for
Barry M.
who can get a full dose of fun from those legacy broadcast channels!
Whatever did they do before null null undefined null?
"Hey, want to watch TV tonight? NaN."
Hah! "That's MISTER Null, to you," declared an anonymous contributor.
And finally, another entirely different anonymous contributor clarified that there are apparently NaN cellphone towers in Switzerland. Personally, I'm intrigued by the existence of that one little crumb of English on an otherwise entirely German page.
Author: Colin Jeffrey Sara was sure she had looked away for only a moment. That was all it took. Sam had vanished from the playground. Clouds gathered heavily in the sky as panic gripped her throat. She yelled his name, over and again, her cries buffered by the indifferent wind. Soon other parents helped search, […]
The diffoscope maintainers are pleased to announce the release of diffoscope
version 291. This version includes the following changes:
[ Chris Lamb ]
* Make two required adjustments for the new version of the src:file package:
- file(1) version 5.46 now emits "XHTML document" for .xhtml files, such as
those found nested within our .epub tests. Therefore, match this string
when detecting XML files. This was causing an FTBFS due to inconsistent
indentation in diffoscope's output.
- Require the new, upcoming, version of file(1) for a quine-related
testcase after adjusting the expected output. Previous versions of
file(1) had a duplicated "last modified, last modified" string for some
Zip archives that has now been removed.
* Add a missing subprocess import.
* Bump Standards-Version to 4.7.2.
The diffoscope maintainers are pleased to announce the release of diffoscope
version 290. This version includes the following changes:
[ Chris Lamb ]
* Also consider .aar files as APK files for the sake of not falling back to a
binary diff. (Closes: #1099632)
* Ensure all calls to out_check_output in the ELF comparator have the
potential CalledProcessError exception caught. (Re: #398)
* Ensure a potential CalledProcessError is caught in the OpenSSL comparator
as well.
* Update copyright years.
[ Eli Schwartz ]
* Drop deprecated and no longer functional "setup.py test" command.
tells me that the Manufacturer is HP and Product Name is OMEN Transcend Gaming Laptop 14-fb0xxx
I’m provisioning a new piece of hardware for my eng consultant and it’s proving more difficult than I expected. I must admit guilt for some of this difficulty. Instead of installing with the Debian installer on my keychain, I dd’d the PV block device of the 16 inch 2023 version onto the partition set aside for it. I then rebooted into rescue mode and cleaned up the grub config, corrected the EFI boot partition’s path in /etc/fstab, ran the grub installer from the rescue menu, and rebooted.
On the initial boot of the system, X or Wayland or whatever is supposed to be talking to the vast array of GPU hardware in this device was unable to do more than create a black screen on vt1. It’s easy enough to switch to vt2 and get a shell on the installed system. So I’m doing that and investigating what’s changed in Trixie. It seems pretty significant. Did they just throw out Keith Packard’s and Behdad Esfahbod’s work on font rendering? I don’t understand what’s happening in this effort to abstract to a simpler interface. I’ll probably end up reading more about it.
In an effort to have Debian re-configure the system for desktop use, I uninstalled as many packages as I could find that were in the display and human-interface category, or that were firmware/drivers for devices not present in this laptop’s SoC, then re-installed Cinnamon with a few apt commands.
And then I rebooted. When it came back up, I was greeted with a login prompt, and Trixie looks to be fully functional on this device, including the attached wifi radio, tethering to my android, and the thunderbolt-attached Marvell SFP+ enclosure.
I’m also installing libvirt and fetched the DVD ISO images for Debian, Ubuntu and Rocky in case we have a need of building VMs during the development process. These are the platforms that I target at work with GCP Dataproc, so I’m pretty good at performing maintenance operations on them at this point.
A sophisticated cascading supply chain attack has compromised multiple GitHub Actions, exposing critical CI/CD secrets across tens of thousands of repositories. The attack, which originally targeted the widely used “tj-actions/changed-files” utility, is now believed to have originated from an earlier breach of the “reviewdog/action-setup@v1” GitHub Action, according to a report.
[…]
CISA confirmed the vulnerability has been patched in version 46.0.1.
Given that the utility is used by more than 23,000 GitHub repositories, the scale of potential impact has raised significant alarm throughout the developer community.
The film is centered on the idea of establishing an alternative to GDP as the metric for measuring the success of a country or society. The film mostly follows Katherine Trebeck on her journey of convincing countries to look beyond GDP.
I very much enjoyed watching this documentary to get a first impression of the idea itself and the effort involved. I had the chance to watch the German version of it online. But there is now another virtual screening offered by the Permaculture Film Club on the 29th and 30th of March 2025. This screening is on a pay-as-you-like-and-can basis and includes a Q&A session with Katherine Trebeck. Trailer 1 and Trailer 2 are available on YouTube if you'd like to get a first impression.
It seems that in the k8s world there are enough race conditions between shutting down pods and removing them from endpoint slices in time. Thus people started to do all kinds of workarounds, like adding a statically linked sleep binary to otherwise "distroless" and rather empty OCI images just to run a sleep command on shutdown before really shutting down. Or even base64-encoding the sleep binary and shipping it via a configMap. Or whatever else.
Eventually the situation was so severe that upstream decided to implement a
sleep feature
in the deployment resource directly.
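For illustration, here is a minimal sketch of adopting that native sleep hook from the Kubernetes Python client, assuming the lifecycle.preStop.sleep.seconds field that upstream added for this purpose; the deployment, namespace, and container names are placeholders.

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

# Strategic-merge patch adding a native preStop sleep to the pod template,
# instead of shipping a sleep binary into the image.
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {
                        "name": "app",  # must match the container name
                        "lifecycle": {"preStop": {"sleep": {"seconds": 5}}},
                    }
                ]
            }
        }
    }
}

apps.patch_namespaced_deployment(name="my-app", namespace="default", body=patch)

The same structure can of course be written directly into the deployment YAML; the point is that the grace period now lives in the resource itself rather than in the image.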
Jenny had been perfectly happy working on a series of projects for her company, before someone said, "Hey, we need you to build a desktop GUI for an existing API."
The request wasn't the problem, per se. The API, on the other hand, absolutely was.
The application Jenny was working on represented a billing contract for materials consumed at a factory. Essentially, the factory built a bunch of individual parts, and then assembled them into a finished product. They only counted the finished product, but needed to itemize the billing down to not only the raw materials that went into the finished product, the intermediate parts, but also the toilet paper put in the bathrooms. All the costs of operating the factory were derived from the units shipped out.
This meant that the contract itself was a fairly complicated tree structure. Jenny's application was meant to both visualize and allow users to edit that tree structure to update the billing contract in sane, and predictable ways, so that it could be reviewed and approved and when the costs of toilet paper went up, those costs could be accurately passed on to the customer.
Now, all the contract management was already implemented and lived in a library that itself called back into a database. Jenny just needed to wire it up to a desktop UI. Part of the requirements were that line items in the tree needed to have a special icon displayed next to them under two conditions: if one of their ancestors in the tree had been changed since the last released contract, or if the child was marked as "inherit from parent".
The wrapper library wasn't documented, so Jenny asked the obvious question: "What's the interface for this?"
"That covers the inheritance field," Jenny said, "but that doesn't tell me if the ancestor has been modified."
"Oh, don't worry," the devs replied, "there's an extension method for that."
public static bool GetChangedIndicator(this IModelTypeA item);
Extension methods in C# are just a way to use syntactic sugar to "add" methods to a class: IModelTypeA does not have a GetChangedIndicator method, but because of the this keyword, it's an extension method and we can now invoke aInstance.GetChangedIndicator(). It's how many built-in .Net APIs work, but like most forms of syntactic sugar, while it can be good, it usually makes code harder to understand, harder to test, and harder to debug.
But Jenny's main complaint was this: "You can't raise an event or something? I'm going to need to poll?"
"Yes, you're going to need to poll."
Jenny didn't like the idea of polling the (slow) database, so at first, she tried to run the polling in a background thread so it wouldn't block the UI. Unfortunately for her, the library was very much not threadsafe, so that blew up. She ended up needing to poll on the main UI thread, which meant the application would frequently stall while users were working. She did her best to minimize it, but it was impossible to eliminate.
But worse than that, each contract item may implement one of four interfaces, which meant there were four versions of the extension method:
To "properly" perform the check, Jenny would have to check which casts were valid for a given item, cast it, and then invoke GetChangedIndicator. It's worth noting that had they just used regular inheritance instead of extension methods, this wouldn't have been necessary at all. Using the "fun" syntactic sugar made the code more complicated for no benefit.
This left Jenny with another question: "What if an item implements more than one of these interfaces? What if the extension methods disagree on if the item is changed?"
"Good question," the team responsible for the library replied. "That should almost never happen."
Jenny quit not long after this.
Author: Jo Gatenby Lara hauled on her dust demon’s reins, desperate to keep the stupid creature on the coaster track and in the race. Desari’s wyrm, Dynamo, surged past them, scalding her with desert sand that slipped under her face mask, choking her. With kicks and shouts, she urged Sandfire forward, but it was too […]
I keep offering 'judo moves' for those we depend-upon to save civilization.
One such move might have let Joe Biden demolish the entire Kremlin kompromat ring in DC. (Three Republican Congressional reps have attested that their GOP colleagues hold "orgies" and one of them made the blackmail charge explicit.) Alas, that idea - along with every other agile tactic in Polemical Judo - was never taken up.
Okay, let's try another one. An impudent proposal - or else a potential magic move for the Faramir of the West, desperately holding the line against the modern Mordor.
I refer to Ukraine's President Volodymyr Zelensky, who faces an excruciating dilemma because of the U.S. election.
Whether or not you go along with my longtime assertion of Kremlin strings puppeting Donald Trump - (and recent polls show that a majority of Americans now see those strings) - the goal of Trumpian 'diplomacy' has clearly been to save Vlad Putin's hash, as his refineries burn, as his bedraggled soldiery mutinously grumbles and as Europe burgeons its formerly flaccid military might to formidability, in response to Muscovite aggression.
Almost every aspect of the current, proposed 'ceasefire' would benefit Putin and disadvantage Ukraine. But Zelensky cannot be seen 'obstructing' a truce, or Ukraine will suffer in the arena of world opinion - and in the eyes of some of his own suffering populace. Hence my modest proposal.
To be clear... if Zelensky were to present this concept to the world, there is no way on Earth that Putin or Trump would accept it!
And yet, it will appear so clearly and inherently reasonable that those two would have a hard time justifying their rejection.
== A Modest Proposal ==
President Zelensky, just say this:
"Our land, our cities and towns, our forests and fields have been blasted, poisoned, and strewn with millions of landmines, while our brave men and women suffer every day resisting the aggressors, both on the front lines and on the home front. Meanwhile, every day and everywhere, our skills and our power-to-resist grow.
"The aggressor has no long term plan. Even should he occupy all of our country, the heat of our rage and resistance would make Kabul in 1980 pale, by comparison. Occupied, but never-subjugated, Ukrainians will turn the next hundred years into hell for the invaders.
"Then why does this outrageous crime continue? Because the despotic Kremlin regime controls all media and news sources available to 135 million Russians, who have no way to know the following:
-- That there was no "Nazi" movement in Ukraine. Except for a few dozen gangster idiots sporting swastikas for shock value. (There are far more such fools in both the USA and in the reborn USSR!) Otherwise, there was never any evidence for anything like it.
-- There was no 'invader army building in Ukraine' before either the full scale Russian invasion of 2022 or the 'green man' putsches of 2014. That was pure fantasy and we can prove it.
-- There was no irresistible momentum for Ukrainian NATO membership before 2022. Up to then, NATO itself had been atrophying for years and Ukraine might have been kept out of the alliance through diplomacy. Now? NATO is growing and gets stronger in potency every day. And Ukraine's alliance with the rest of Europe is now absolutely guaranteed, thanks to Mr. Putin.
-- We were always willing to negotiate assurance and provisions for the comfort and security and prosperity of Russian speaking citizens of Ukraine, especially in the Donbas. That offer still stands. And hence we ask them, are you truly better off now, dear friends?
I could go on and on. But doing so just leaves this as another case of "he-said, she said." And we are done exchanging heated assertions.
It is time to check out which side is lying!
OUR PROPOSAL IS SIMPLE:
Before any ceasefire takes effect, we call for a Grand Commission to investigate the truth.
Instead of world elites or politicians, we propose that this commission consist mostly of:
-- 100 Russian citizens...
-- 100 Ukrainian citizens living in unoccupied territories...
-- 100 citizens from a random pool of other nations.
Members of this Grand Commission will be free to go anywhere they wish, in both Ukraine and Russia, wielding cameras and asking questions and demanding answers. Let them video the trashed and ruined towns and farms and forests everywhere that Russian armed forces claim to have 'liberated.'
And yes, the commissioners will be welcome to sift for evidence of pervasive "naziism" in our country... so long as they are also free to document Vlad Putin's oligarchy and their relentlessly violent drive to rebuild the Orwellian Soviet empire.
== How would it work? A FAQ ==
Why such a large group?
A large and diverse commission helps to ensure maximum coverage and to minimize coercion by their home governments. Among so many commissioners, some will certainly ask pointed questions! And they will return home in numbers that cannot be kept squelched or repressed.
How to ensure the members aren't just factotums of either regime?
They will be selected randomly from the paper pages of old telephone directories! Sure, that might be clumsy. But those paper volumes cannot be meddled with, especially old phone books currently archived in - say - Switzerland. For all its faults, this method ensures that the selection process will pick many citizens who are not beholden to the masters in the capital.
Won't the governments of Russia or Ukraine be tempted to coerce commissioners anyway?
Yes, and that is why they will be invited to bring along their immediate families! Arrangements will be made for spouses and children to stay at nice resorts along the Black Sea, while the commissioners do their work. And incidentally, those families will talk to each other, too. We welcome that. Do you?
Won't some of the commissioners defect - during their tours of Ukraine or Russia, refusing to go home?
Of course some will! We are not afraid of that. Are YOU afraid of that, Mr. Putin?
Won't such a huge endeavor be expensive?
Sure it will be. So? Russia and America brag about how rich they are. And so do the host nations of recent 'peace conferences,' who could easily pony up the expenses, for the sake of ending a dangerous and destabilizing war.
Why not ask the world's billionaire caste to foot the bill? For the sake of actual communications and genuine peace and prosperity?
There are many of the uber-rich who talk a good game. This would be their chance to prove that they believe in the future, after all.
Won't there be dangers?
Sure there will be, especially wherever resumed fighting breaks out near the inspectors, during or after a ceasefire. The commissioners should possess some grit and courage and world civic-mindedness. Why? Is that a problem? Compared to the possible good outcomes from such heroism? From the most-genuine kind of patriotism?
Do you honestly expect Vladimir Putin to agree to this?
Why wouldn't he? If this will let him convince skeptics around the world and in Ukraine that all of his justifications for this slaughter and ruination of a beautiful country are valid and true?
Of course that was a bitter jest. Because there is no way that Mr. Putin or his supporters would agree to such a Grand Commission, digging deeply into things that are called Facts and Truth.
Then why did you make the proposal, if you don't think it will be accepted?
Because nothing better demonstrates the most basic difference between the two sides of this conflict.
One side slavishly follows a murderous liar, because of the hypnotic power of his lies. Lies that would be so easily disproved, if the tyrant agreed to allow light to flow to his people.
The other side is a nation of people who love Russian poetry... but we are not and never have been Russian. People who know intimately well Russia's cruelly-depressing history, and who want no further part of it.
We all had friends across the former USSR... but Ukraine is not and never has been Russia.
And we can prove it, as we daily prove our utter determination never again to suffer under Moscow's boot heel.
Are we outnumbered? Certainly. But we have special regiments on our side.
Honor. Decency. Resolve. Democracy. The friendship of all free peoples around the World. And science, too.
A message posted on Monday to the homepage of the U.S. Cybersecurity & Infrastructure Security Agency (CISA) is the latest exhibit in the Trump administration’s continued disregard for basic cybersecurity protections. The message instructed recently-fired CISA employees to get in touch so they can be rehired and then immediately placed on leave, asking employees to send their Social Security number or date of birth in a password-protected email attachment — presumably with the password needed to view the file included in the body of the email.
The homepage of cisa.gov as it appeared on Monday and Tuesday afternoon.
On March 13, a Maryland district court judge ordered the Trump administration to reinstate more than 130 probationary CISA employees who were fired last month. On Monday, the administration announced that those dismissed employees would be reinstated but placed on paid administrative leave. They are among nearly 25,000 fired federal workers who are in the process of being rehired.
A notice covering the CISA homepage said the administration is making every effort to contact those who were unlawfully fired in mid-February.
“Please provide a password protected attachment that provides your full name, your dates of employment (including date of termination), and one other identifying factor such as date of birth or social security number,” the message reads. “Please, to the extent that it is available, attach any termination notice.”
The message didn’t specify how affected CISA employees should share the password for any attached files, so the implicit expectation is that employees should just include the plaintext password in their message.
Email is about as secure as a postcard sent through the mail, because anyone who manages to intercept the missive anywhere along its path of delivery can likely read it. In security terms, that’s the equivalent of encrypting sensitive data while also attaching the secret key needed to view the information.
What’s more, a great many antivirus and security scanners have trouble inspecting password-protected files, meaning the administration’s instructions are likely to increase the risk that malware submitted by cybercriminals could be accepted and opened by U.S. government employees.
The message in the screenshot above was removed from the CISA homepage Tuesday evening and replaced with a much shorter notice directing former CISA employees to contact a specific email address. But a slightly different version of the same message originally posted to CISA’s website still exists at the website for the U.S. Citizenship and Immigration Services, which likewise instructs those fired employees who wish to be rehired and put on leave to send a password-protected email attachment with sensitive personal data.
A message from the White House to fired federal employees at the U.S. Citizenship and Immigration Services instructs recipients to email personal information in a password-protected attachment.
This is hardly the first example of the administration discarding Security 101 practices in the name of expediency. Last month, the Central Intelligence Agency (CIA) sent an unencrypted email to the White House with the first names and first letter of the last names of recently hired CIA officers who might be easy to fire.
As cybersecurity journalist Shane Harris noted in The Atlantic, even those fragments of information could be useful to foreign spies.
“Over the weekend, a former senior CIA official showed me the steps by which a foreign adversary who knew only his first name and last initial could have managed to identify him from the single line of the congressional record where his full name was published more than 20 years ago, when he became a member of the Foreign Service,” Harris wrote. “The former official was undercover at the time as a State Department employee. If a foreign government had known even part of his name from a list of confirmed CIA officers, his cover would have been blown.”
The White House has also fired at least 100 intelligence staffers from the National Security Agency (NSA), reportedly for using an internal NSA chat tool to discuss their personal lives and politics. Testifying before the House Select Committee on the Chinese Communist Party earlier this month, the NSA’s former top cybersecurity official said the Trump administration’s attempts to mass fire probationary federal employees will be “devastating” to U.S. cybersecurity operations.
Rob Joyce, who spent 34 years at the NSA, told Congress how important those employees are in sustaining an aggressive stance against China in cyberspace.
“At my former agency, remarkable technical talent was recruited into developmental programs that provided intensive unique training and hands-on experience to cultivate vital skills,” Joyce told the panel. “Eliminating probationary employees will destroy a pipeline of top talent responsible for hunting and eradicating [Chinese] threats.”
Both the message to fired CISA workers and DOGE’s ongoing efforts to bypass vetted government networks for a faster Wi-Fi signal are emblematic of this administration’s overall approach to even basic security measures: To go around them, or just pretend they don’t exist for a good reason.
On Monday, The New York Times reported that U.S. Secret Service agents at the White House were briefly on alert last month when a trusted captain of Elon Musk’s “Department of Government Efficiency” (DOGE) visited the roof of the Eisenhower building inside the White House compound — to see about setting up a dish to receive satellite Internet access directly from Musk’s Starlink service.
The White House press secretary told The Times that Starlink had “donated” the service and that the gift had been vetted by the lawyer overseeing ethics issues in the White House Counsel’s Office. The White House claims the service is necessary because its wireless network is too slow.
Jake Williams, vice president for research and development at the cybersecurity consulting firm Hunter Strategy, told The Times “it’s super rare” to install Starlink or another internet provider as a replacement for existing government infrastructure that has been vetted and secured.
“I can’t think of a time that I have heard of that,” Williams said. “It introduces another attack point,” Williams said. “But why introduce that risk?”
Meanwhile, NBC News reported on March 7 that Starlink is expanding its footprint across the federal government.
“Multiple federal agencies are exploring the idea of adopting SpaceX’s Starlink for internet access — and at least one agency, the General Services Administration (GSA), has done so at the request of Musk’s staff, according to someone who worked at the GSA last month and is familiar with its network operations — despite a vow by Musk and Trump to slash the overall federal budget,” NBC wrote.
The longtime Musk employee who encountered the Secret Service on the roof in the White House complex was Christopher Stanley, the 33-year-old senior director for security engineering at X and principal security engineer at SpaceX.
On Monday, Bloomberg broke the news that Stanley had been tapped for a seat on the board of directors at the mortgage giant Fannie Mae. Stanley was added to the board alongside newly confirmed Federal Housing Finance Agency director Bill Pulte, the grandson of the late housing businessman and founder of PulteGroup — William J. Pulte.
In a nod to his new board role atop an agency that helps drive the nation’s $12 trillion mortgage market, Stanley retweeted a Bloomberg story about the hire with a smiley emoji and the comment “Tech Support.”
But earlier today, Bloomberg reported that Stanley had abruptly resigned from the Fannie board, and that details about the reason for his quick departure weren’t immediately clear. As first reported here last month, Stanley had a brush with celebrity on Twitter in 2015 when he leaked the user database for the DDoS-for-hire service LizardStresser, and soon faced threats of physical violence against his family.
My 2015 story on that leak did not name Stanley, but he exposed himself as the source by posting a video about it on his YouTube channel. A review of domain names registered by Stanley shows he went by the nickname “enKrypt,” and was the former owner of a pirated software and hacking forum called error33[.]net, as well as theC0re, a video game cheating community.
Stanley is one of more than 50 DOGE workers, mostly young men and women who have worked with one or more of Musk’s companies. The Trump administration remains dogged by questions about how many — if any — of the DOGE workers were put through the gauntlet of a thorough security background investigation before being given access to such sensitive government databases.
That’s largely because in one of his first executive actions after being sworn in for a second term on Jan. 20, President Trump declared that the security clearance process was simply too onerous and time-consuming, and that anyone so designated by the White House counsel would have full top secret/sensitive compartmented information (TS/SCI) clearances for up to six months. Translation: We accepted the risk, so TAH-DAH! No risk!
Presumably, this is the same counsel who saw no ethical concerns with Musk “donating” Starlink to the White House, or with President Trump summoning the media to film him hawking Cybertrucks and Teslas (a.k.a. “Teslers”) on the White House lawn last week.
Mr. Musk’s unelected role as head of an ad hoc executive entity that is gleefully firing federal workers and feeding federal agencies into “the wood chipper” has seen his Tesla stock price plunge in recent weeks, while firebombings and other vandalism attacks on property carrying the Tesla logo are cropping up across the U.S. and overseas and driving down Tesla sales.
President Trump and his attorney general Pam Bondi have dubiously asserted that those responsible for attacks on Tesla dealerships are committing “domestic terrorism,” and that vandals will be prosecuted accordingly. But it’s not clear this administration would recognize a real domestic security threat if it was ensconced squarely behind the Resolute Desk.
Or at the pinnacle of the Federal Bureau of Investigation (FBI). The Washington Post reported last month that Trump’s new FBI director Kash Patel was paid $25,000 last year by a film company owned by a dual U.S.-Russian citizen that has made programs promoting “deep state” conspiracy theories pushed by the Kremlin.
“The resulting six-part documentary appeared on Tucker Carlson’s online network, itself a reliable conduit for Kremlin propaganda,” The Post reported. “In the film, Patel made his now infamous pledge to shut down the FBI’s headquarters in Washington and ‘open it up as a museum to the deep state.'”
When the head of the FBI is promising to turn his own agency headquarters into a mocking public exhibit on the U.S. National Mall, it may seem silly to fuss over the White House’s clumsy and insulting instructions to former employees they unlawfully fired.
Indeed, one consistent piece of feedback I’ve heard from a subset of readers here is something to this effect: “I used to like reading your stuff more when you weren’t writing about politics all the time.”
My response to that is: “Yeah, me too.” It’s not that I’m suddenly interested in writing about political matters; it’s that various actions by this administration keep intruding on my areas of coverage.
A less charitable interpretation of that reader comment is that anyone still giving such feedback is either dangerously uninformed, being disingenuous, or just doesn’t want to keep being reminded that they’re on the side of the villains, despite all the evidence showing it.
Article II of the U.S. Constitution unambiguously states that the president shall take care that the laws be faithfully executed. But almost from Day One of his second term, Mr. Trump has been acting in violation of his sworn duty as president by choosing not to enforce laws passed by Congress (TikTok ban, anyone?), by freezing funds already allocated by Congress, and most recently by flouting a federal court order while simultaneously calling for the impeachment of the judge who issued it. Sworn to uphold, protect and defend The Constitution, President Trump appears to be creating new constitutional challenges with almost each passing day.
When Mr. Trump was voted out of office in November 2020, he turned to baseless claims of widespread “election fraud” to explain his loss — with deadly and long-lasting consequences. This time around, the rallying cry of DOGE and White House is “government fraud,” which gives the administration a certain amount of cover for its actions among a base of voters that has long sought to shrink the size and cost of government.
In reality, “government fraud” has become a term of derision and public scorn applied to anything or anyone the current administration doesn’t like. If DOGE and the White House were truly interested in trimming government waste, fraud and abuse, they could scarcely do better than consult the inspectors general fighting it at various federal agencies.
After all, the inspectors general likely know exactly where a great deal of the federal government’s fiscal skeletons are buried. Instead, Mr. Trump fired at least 17 inspectors general, leaving the government without critical oversight of agency activities. That action is unlikely to stem government fraud; if anything, it will only encourage such activity.
As Techdirt founder Mike Masnick noted in a recent column “Why Techdirt is Now a Democracy Blog (Whether We Like it or Not),” when the very institutions that made American innovation possible are being systematically dismantled, it’s not a “political” story anymore: It’s a story about whether the environment that enabled all the other stories we cover will continue to exist.
“This is why tech journalism’s perspective is so crucial right now,” Masnick wrote. “We’ve spent decades documenting how technology and entrepreneurship can either strengthen or undermine democratic institutions. We understand the dangers of concentrated power in the digital age. And we’ve watched in real-time as tech leaders who once championed innovation and openness now actively work to consolidate control and dismantle the very systems that enabled their success.”
“But right now, the story that matters most is how the dismantling of American institutions threatens everything else we cover,” Masnick continued. “When the fundamental structures that enable innovation, protect civil liberties, and foster open dialogue are under attack, every other tech policy story becomes secondary.”
Once upon a time, Ryan's company didn't use a modern logging framework to alert admins when services failed. No, they used everyone's favorite communications format, circa 2005: email. Can't reach the database? Send an email. Unhandled exception? Send an email. Handled exception? Better send an email, just in case. Sometimes they go to admins, sometimes they just go to an inbox used for logging.
They use the .NET SmtpClient class to connect to an SMTP server and send emails based on the configuration. So far, so good, but what happens when we can't send an email because the email server is down? We'll get an exception, and what do we do with it?
The same thing we do with every other exception: send an email.
Ryan writes:
Strangely enough, I've never heard of the service crashing or hanging. We must have a very good mail server!
Author: Melissa Kobrin “Annie, it looks like Santa brought you one more present!” Annie looked up eagerly from her nest of torn wrapping paper and new toys. The Christmas tree twinkled behind her, and outside the window the sun was barely beginning to peek over the horizon. She gasped when Daddy walked into the living […]
I regularly visit Seoul, and for the last couple of years I've been doing segments from the Seoul Trail, a series of walks that add up to a 150km circuit around the outskirts of Seoul. If you like hiking I recommend it, it's mostly through the hills and wooded areas surrounding the city or parks within the city and the bits I've done thus far have mostly been very enjoyable. Everything is generally well signposted and easy to follow, with varying degrees of difficulty from completely flat paved roads to very hilly trails.
The trail had been divided into eight segments but just after I last visited, the trail was reorganised into 21 smaller ones. This was very sensible, the original segments mostly being about 10-20km and taking 3-6 hours (with the notable exception of section 8, which was 36km), which can be a bit much (especially that section 8, or section 1 which had about 1km of ascent in it overall). It does complicate matters if you're trying to keep track of what you've done already though, so I've put together a quick table:
Original   Revised
1          1-3
2          4-5
3          6-8
4          9-10
5          11-12
6          13-14
7          15-16
8          17-21
This is all straightforward, the original segments had all been arranged to start and stop at metro stations (which I think explains the length of 8, the metro network is thin around Bukhansan what with it being an actual mountain) and the new segments are all straight subdivisions, but it's handy to have it written down and I figured other people might find it useful.
Almost two years ago, Twitter launched encrypted direct messages. I wrote about their technical implementation at the time, and to the best of my knowledge nothing has changed. The short story is that the actual encryption primitives used are entirely normal and fine - messages are encrypted using AES, and the AES keys are exchanged via NIST P-256 elliptic curve asymmetric keys. The asymmetric keys are each associated with a specific device or browser owned by a user, so when you send a message to someone you encrypt the AES key with all of their asymmetric keys and then each device or browser can decrypt the message again. As long as the keys are managed appropriately, this is infeasible to break.
But how do you know what a user's keys are? I also wrote about this last year - key distribution is a hard problem. In the Twitter DM case, you ask Twitter's server, and if Twitter wants to intercept your messages they replace your key. The documentation for the feature basically admits this - if people with guns showed up there, they could very much compromise the protection in such a way that all future messages you sent were readable. It's also impossible to prove that they're not already doing this without every user verifying that the public keys Twitter hands out to other users correspond to the private keys they hold, something that Twitter provides no mechanism to do.
This isn't the only weakness in the implementation. Twitter may not be able to read the messages, but every encrypted DM is sent through exactly the same infrastructure as the unencrypted ones, so Twitter can see the time a message was sent, who it was sent to, and roughly how big it was. And because pictures and other attachments in Twitter DMs aren't sent in-line but are instead replaced with links, the implementation would encrypt the links but not the attachments - this is "solved" by simply blocking attachments in encrypted DMs. There's no forward secrecy - if a key is compromised it allows access to not only all new messages created with that key, but also all previous messages. If you log out of Twitter the keys are still stored by the browser, so they can potentially be extracted and used to decrypt your communications. And there's no group chat support at all, which is more a functional restriction than a conceptual one.
To be fair, these are hard problems to solve! Signal solves all of them, but Signal is the product of a large number of highly skilled experts in cryptography, and even so it's taken years to achieve all of this. When Elon announced the launch of encrypted DMs he indicated that new features would be developed quickly - he's since publicly mentioned the feature a grand total of once, in which he mentioned further feature development that just didn't happen. None of the limitations mentioned in the documentation have been addressed in the 22 months since the feature was launched.
Why? Well, it turns out that the feature was developed by a total of two engineers, neither of whom is still employed at Twitter. The tech lead for the feature was Christopher Stanley, who was actually a SpaceX employee at the time. Since then he's ended up at DOGE, where he apparently set off alarms when attempting to install Starlink, and who today is apparently being appointed to the board of Fannie Mae, a government-backed mortgage company.
Time flies! 15 years ago, on 2010-03-18, my first upload to the Debian
archive was accepted.
Debian had replaced Windows as my primary OS in 2005, but it was only when I
saw that package
zd1211-firmware had been
orphaned that I thought of becoming a contributor. I owned a Zyxel G-202 USB
WiFi fob that needed said firmware, and as is so often the case with open-source
software, I was going to scratch my own itch. Bart Martens thankfully helped
me adopt the package, and sponsored my upload.
I then joined Javier Fernández-Sanguino Peña as a cron maintainer and upstream,
and also worked within the Debian Python Applications, Debian Python Modules,
and Debian Science Teams, where Jakub Wilk and Yaroslav Halchenko were kind
enough to mentor me and eventually support my application to become a
Debian Maintainer.
Life intervened, and I was mostly inactive in Debian for the next two years.
Upon my return in 2014, I had Vincent Cheng to thank for sponsoring most of
my newer work, and for eventually supporting my application to become a
Debian Developer. It was around that time that I also attended my first
DebConf, in Portland, which remains one
of my fondest memories. I had never been to an open-source software conference
before, and DebConf14 really knocked it out of the park in so many ways.
After another break, I returned in 2019 to work mostly on Python and machine
learning libraries. In 2020, I finally completed a process that I had first
started in 2012 but had never managed to finish before: converting cron from
source format 1.0
(one big diff) to
source format 3.0 (quilt)
(a series of patches). This was a process where I converted 25 years worth of
organic growth into a minimal series of logically grouped changes (more
here).
This was my white whale.
In early 2023, shortly after the launch of ChatGPT which triggered an
unprecedented AI boom, I started contributing to the Debian ROCm Team, where
over the following year, I bootstrapped our CI at
ci.rocm.debian.net. Debian's current tooling
lacks a way to express dependencies on specific hardware other than CPU ISA,
nor does it have the means to run autopkgtests using such hardware. To get
autopkgtests to make use of AMD GPUs in QEMU VMs and in containers, I had to
fork autopkgtest, debci, and a few other components, as well as create a fair
share of new tooling for ourselves. This worked out pretty well, and the CI
has grown to support 17 different AMD GPU architectures. I will share more
on this in upcoming posts.
I have mentioned a few contributors by name, but I have countless others to
thank for collaborations over the years. It has been a wonderful experience,
and I look forward to many years more.
Armadillo is a powerful
and expressive C++ template library for linear algebra and scientific
computing. It aims towards a good balance between speed and ease of use,
has a syntax deliberately close to Matlab, and is useful for algorithm
development directly in C++, or quick conversion of research code into
production environments. RcppArmadillo
integrates this library with the R environment and language–and is
widely used by (currently) 1234 other packages on CRAN, downloaded 38.8 million
times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint
/ vignette) by Conrad and myself has been cited 617 times according
to Google Scholar.
Conrad released a new
minor version 14.4.0 last month. That was preceded by several extensive
rounds of reverse-dependency checks covering the 1200+ packages at CRAN.
We eventually narrowed the impact down to just eight packages, and I
opened issue
#462 to manage the transition along with ‘GitHub-only’ release
14.4.0-0 of RcppArmadillo.
Several maintainers responded very promptly and updated within days –
this is truly appreciated. Yesterday the last package was updated at CRAN, coinciding nicely with our
planned / intended upload to CRAN one month after the release.
So this new release, at version -1, is now on CRAN. It brings the usual number
of small improvements to Armadillo itself as well as
updates to packaging.
The changes since the last CRAN release are summarised
below.
Changes in
RcppArmadillo version 14.4.0-1 (2025-03-17)
CRAN release having given a few packages time to catch up to a
small upstream change as discussed and managed in #462
Updated bibliography, and small edits to sparse matrix
vignette
Switched continuous integration action to r-ci with implicit
bootstrap
Changes
in RcppArmadillo version 14.4.0-0 (2025-02-17) (GitHub Only)
Upgraded to Armadillo release 14.4.0 (Filtered Espresso)
Faster handling of pow() and square()
within accu() and sum() expressions
Faster sort() and sort_index() for
complex matrices
Expanded the field class with .reshape() and
.resize() member functions
More efficient handling of compound expressions by
sum(), reshape(),
trans()
Better detection of vector expressions by pow(),
imag(), conj()
The package generator helper function now supports additional
DESCRIPTIONs
This release revealed a need for very minor changes for a handful of
reverse-dependency packages which will be organized via GitHub issue
tracking
As promised in my previous post, in this entry I’ll explain how I’ve set up forgejo
actions on the source repository of this site to build it using a runner instead of doing it on the public server using
a webhook to trigger the operation.
Setting up the system
The first thing I’ve done is to disable the forgejo webhook call that was used to publish the site, as I don’t want to
run it anymore.
After that I added a new workflow to the repository that does the following things:
build the site using hugo.
push the result to a branch that contains the generated site (we do this because the server is already configured
to work with the git repository and we can use force pushes to keep only the last version of the site, removing the
need of extra code to manage package uploads and removals).
uses curl to send a notification to an instance of the webhook server installed on the
remote server that triggers a script that updates the site using the git branch.
Setting up the webhook service
On the server machine we have installed and configured the webhook service to run a script that updates the site.
To install the application and set up the configuration we have used the following script:
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
ARCH="$(dpkg --print-architecture)"
WEBHOOK_VERSION="2.8.2"
DOWNLOAD_URL="https://github.com/adnanh/webhook/releases/download"
WEBHOOK_TGZ_URL="$DOWNLOAD_URL/$WEBHOOK_VERSION/webhook-linux-$ARCH.tar.gz"
WEBHOOK_SERVICE_NAME="webhook"
# Files
WEBHOOK_SERVICE_FILE="/etc/systemd/system/$WEBHOOK_SERVICE_NAME.service"
WEBHOOK_SOCKET_FILE="/etc/systemd/system/$WEBHOOK_SERVICE_NAME.socket"
WEBHOOK_TML_TEMPLATE="/srv/blogops/action/webhook.yml.envsubst"
WEBHOOK_YML="/etc/webhook.yml"
# Config file values
WEBHOOK_USER="$(id -u)"
WEBHOOK_GROUP="$(id -g)"
WEBHOOK_LISTEN_STREAM="172.31.31.1:4444"
# ----
# MAIN
# ----
# Install binary from releases (on Debian only version 2.8.0 is available, but
# I need the 2.8.2 version to support the systemd activation mode).
curl -fsSL -o "/tmp/webhook.tgz" "$WEBHOOK_TGZ_URL"
tar -C /tmp -xzf /tmp/webhook.tgz
sudo install -m 755 "/tmp/webhook-linux-$ARCH/webhook" /usr/local/bin/webhook
rm -rf "/tmp/webhook-linux-$ARCH" /tmp/webhook.tgz
# Service file
sudo sh -c "cat >'$WEBHOOK_SERVICE_FILE'" <<EOF
[Unit]
Description=Webhook server
[Service]
Type=exec
ExecStart=webhook -nopanic -hooks $WEBHOOK_YML
User=$WEBHOOK_USER
Group=$WEBHOOK_GROUP
EOF
# Socket config
sudo sh -c "cat >'$WEBHOOK_SOCKET_FILE'" <<EOF
[Unit]
Description=Webhook server socket
[Socket]
# Set FreeBind to listen on missing addresses (the VPN can be down sometimes)
FreeBind=true
# Set ListenStream to the IP and port you want to listen on
ListenStream=$WEBHOOK_LISTEN_STREAM
[Install]
WantedBy=multi-user.target
EOF
# Config file
BLOGOPS_TOKEN="$(uuid)" \
  envsubst <"$WEBHOOK_TML_TEMPLATE" | sudo sh -c "cat >$WEBHOOK_YML"
chmod 0640 "$WEBHOOK_YML"
chown "$WEBHOOK_USER:$WEBHOOK_GROUP" "$WEBHOOK_YML"
# Restart and enable service
sudo systemctl daemon-reload
sudo systemctl stop "$WEBHOOK_SERVICE_NAME.socket"
sudo systemctl start "$WEBHOOK_SERVICE_NAME.socket"
sudo systemctl enable "$WEBHOOK_SERVICE_NAME.socket"
# ----
# vim: ts=2:sw=2:et:ai:sts=2
As seen on the code, we’ve installed the application using a binary from the project repository instead of a package
because we needed the latest version of the application to use systemd with socket activation.
The configuration file template is the following one:
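A minimal sketch of such a template for the webhook tool (the hook id and the X-Blogops-Token header match the ones used later in this post; the script path is an assumption):
# webhook.yml.envsubst (sketch, not the exact file): a single hook that runs the
# update script when the request carries the expected token header.
- id: update-blogops
  execute-command: /srv/blogops/action/update-blogops.sh
  command-working-directory: /srv/blogops
  trigger-rule:
    match:
      type: value
      value: "${BLOGOPS_TOKEN}"
      parameter:
        source: header
        name: X-Blogops-Token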
The version on /etc/webhook.yml has the BLOGOPS_TOKEN adjusted to a random value that has to be exported as a secret on
the forgejo project (see later).
Once the service is started, each time the action is executed the webhook daemon will get a notification and will run
the following update-blogops.sh script to publish the updated version of the site:
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
# Values
REPO_URL="ssh://git@forgejo.mixinet.net/mixinet/blogops.git"
REPO_BRANCH="html"
REPO_DIR="public"
MAIL_PREFIX="[BLOGOPS-UPDATE-ACTION] "
# Address that gets all messages, leave it empty if not wanted
MAIL_TO_ADDR="blogops@mixinet.net"
# Directories
BASE_DIR="/srv/blogops"
PUBLIC_DIR="$BASE_DIR/$REPO_DIR"
NGINX_BASE_DIR="$BASE_DIR/nginx"
PUBLIC_HTML_DIR="$NGINX_BASE_DIR/public_html"
ACTION_BASE_DIR="$BASE_DIR/action"
ACTION_LOG_DIR="$ACTION_BASE_DIR/log"
# Files
OUTPUT_BASENAME="$(date +%Y%m%d-%H%M%S.%N)"
ACTION_LOGFILE_PATH="$ACTION_LOG_DIR/$OUTPUT_BASENAME.log"
# ---------
# Functions
# ---------
action_log() {
  echo "$(date -R) $*" >>"$ACTION_LOGFILE_PATH"
}
action_check_directories() {
  for _d in "$ACTION_BASE_DIR" "$ACTION_LOG_DIR"; do
    [ -d "$_d" ] || mkdir "$_d"
  done
}
action_clean_directories() {
  # Try to remove empty dirs
  for _d in "$ACTION_LOG_DIR" "$ACTION_BASE_DIR"; do
    if [ -d "$_d" ]; then
      rmdir "$_d" 2>/dev/null || true
    fi
  done
}
mail_success() {
  to_addr="$MAIL_TO_ADDR"
  if [ "$to_addr" ]; then
    subject="OK - updated blogops site"
    mail -s "${MAIL_PREFIX}${subject}" "$to_addr" <"$ACTION_LOGFILE_PATH"
  fi
}
mail_failure() {
  to_addr="$MAIL_TO_ADDR"
  if [ "$to_addr" ]; then
    subject="KO - failed to update blogops site"
    mail -s "${MAIL_PREFIX}${subject}" "$to_addr" <"$ACTION_LOGFILE_PATH"
  fi
  exit 1
}
# ----
# MAIN
# ----
ret="0"
# Check directories
action_check_directories
# Go to the base directory
cd "$BASE_DIR"
# Remove the old build dir if present
if [ -d "$PUBLIC_DIR" ]; then
  rm -rf "$PUBLIC_DIR"
fi
# Update the repository checkout
action_log "Updating the repository checkout"
git fetch --all >>"$ACTION_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
  action_log "Failed to update the repository checkout"
  mail_failure
fi
# Get it from the repo branch & extract it
action_log "Downloading and extracting last site version using 'git archive'"
git archive --remote="$REPO_URL" "$REPO_BRANCH" "$REPO_DIR" \
  | tar xf - >>"$ACTION_LOGFILE_PATH" 2>&1 || ret="$?"
# Fail if public dir was missing
if [ "$ret" -ne "0" ] || [ ! -d "$PUBLIC_DIR" ]; then
  action_log "Failed to download or extract site"
  mail_failure
fi
# Remove old public_html copies
action_log 'Removing old site versions, if present'
find $NGINX_BASE_DIR -mindepth 1 -maxdepth 1 -name 'public_html-*' -type d \
  -exec rm -rf {} \; >>"$ACTION_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
  action_log "Removal of old site versions failed"
  mail_failure
fi
# Switch site directory
TS="$(date +%Y%m%d-%H%M%S)"
if [ -d "$PUBLIC_HTML_DIR" ]; then
  action_log "Moving '$PUBLIC_HTML_DIR' to '$PUBLIC_HTML_DIR-$TS'"
  mv "$PUBLIC_HTML_DIR" "$PUBLIC_HTML_DIR-$TS" >>"$ACTION_LOGFILE_PATH" 2>&1 || ret="$?"
fi
if [ "$ret" -eq "0" ]; then
  action_log "Moving '$PUBLIC_DIR' to '$PUBLIC_HTML_DIR'"
  mv "$PUBLIC_DIR" "$PUBLIC_HTML_DIR" >>"$ACTION_LOGFILE_PATH" 2>&1 || ret="$?"
fi
if [ "$ret" -ne "0" ]; then
  action_log "Site switch failed"
  mail_failure
else
  action_log "Site updated successfully"
  mail_success
fi
# ----
# vim: ts=2:sw=2:et:ai:sts=2
The hugo-adoc workflow
The workflow is defined in the .forgejo/workflows/hugo-adoc.yml file and looks like this:
name: hugo-adoc
# Run this job on push events to the main branch
on:
  push:
    branches:
      - 'main'
jobs:
  build-and-push:
    if: ${{ vars.BLOGOPS_WEBHOOK_URL != '' && secrets.BLOGOPS_TOKEN != '' }}
    runs-on: docker
    container:
      image: forgejo.mixinet.net/oci/hugo-adoc:latest
    # Allow the job to write to the repository (not really needed on forgejo)
    permissions:
      contents: write
    steps:
      - name: Checkout the repo
        uses: actions/checkout@v4
        with:
          submodules: 'true'
      - name: Build the site
        shell: sh
        run: |
          rm -rf public
          hugo
      - name: Push compiled site to html branch
        shell: sh
        run: |
          # Set the git user
          git config --global user.email "blogops@mixinet.net"
          git config --global user.name "BlogOps"
          # Create a new orphan branch called html (it was not pulled by the
          # checkout step)
          git switch --orphan html
          # Add the public directory to the branch
          git add public
          # Commit the changes
          git commit --quiet -m "Updated site @ $(date -R)" public
          # Push the changes to the html branch
          git push origin html --force
          # Switch back to the main branch
          git switch main
      - name: Call the blogops update webhook endpoint
        shell: sh
        run: |
          HEADER="X-Blogops-Token: ${{ secrets.BLOGOPS_TOKEN }}"
          curl --fail -k -H "$HEADER" ${{ vars.BLOGOPS_WEBHOOK_URL }}
The only relevant thing is that we have to add the BLOGOPS_TOKEN variable to the project secrets (its value is the one
included on the /etc/webhook.yml file created when installing the webhook service) and the BLOGOPS_WEBHOOK_URL
project variable (its value is the URL of the webhook server, in my case
http://172.31.31.1:4444/hooks/update-blogops); note that the job includes the -k flag on the curl command just in
case I end up using TLS on the webhook server in the future, as discussed previously.
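For reference, the webhook endpoint can also be exercised by hand to test the setup, sending the same header the workflow sends (with the token value generated during the installation):
# Manual test of the update hook (replace the placeholder with the real secret)
curl --fail -H "X-Blogops-Token: <BLOGOPS_TOKEN>" \
  http://172.31.31.1:4444/hooks/update-blogops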
Conclusion
Now that I have forgejo actions on my server I no longer need to build the site on the public server as I did initially,
a good thing when the server is a small OVH VPS that only runs a couple of containers and a web server directly on the
host.
I’m still using a notification system to make the server run a script to update the site because that way the forgejo
server does not need access to the remote machine shell, only the webhook server which, IMHO, is a more secure setup.
When I discovered a few days ago that a security problem reported
against the theora library last year was still not fixed, and because
I was already up to speed on Xiph development, I decided it was time
to wrap up a new theora release. This new release was tagged in
the Xiph gitlab theora
instance Saturday. You can fetch the new release from
the Theora home page.
The list of changes since the 1.2.0alpha1 release from the CHANGES
file in the tarball looks like this:
libtheora 1.2.0beta1 (2025 March 15)
Bumped minor SONAME versions as methods changed constness
of arguments.
Updated libogg dependency to version 1.3.4 for ogg_uint64_t.
Updated doxygen setup.
Updated autotools setup and support scripts (#1467 #1800 #1987 #2318
#2320).
Added support for RISC OS.
Fixed mingw build (#2141).
Improved ARM support.
Converted SCons setup to work with Python 3.
Introduced new configure options --enable-mem-constraint and
--enable-gcc-sanitizers.
Fixed all known compiler warnings and errors from gcc and clang.
Improved examples for stability and correctness.
Various speed, bug fixes and code quality improvements.
Fixed build problem with Visual Studio (#2317).
Avoids undefined bit shift of signed numbers (#2321, #2322).
Avoids example encoder crash on bogus audio input (#2305).
Fixed musl linking issue with asm enabled (#2287).
Fixed some broken clamping in rate control (#2229).
Added NULL check _tc and _setup even for data packets (#2279).
Updated the documentation for theora_encode_comment() (#726).
Adjusted build to only link libcompat with dump_video (#1587).
Corrected an operator precedence error in the visualization
code (#1751).
Fixed two spelling errors in the comments (#1804).
Avoid negative bit shift operation in huffdec.c (CVE-2024-56431).
Improved library documentation and specification text.
Adjusted library dependencies so libtheoraenc does not depend on
libtheoradec.
Handle fallout from CVE-2017-14633 in libvorbis, check return value
in encoder_example and transcoder_example.
There are a few bugs still being investigated, and my plan is to
wrap up a final 1.2.0 release two weekends from now.
As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
Alex had the misfortune to work on the kind of application which has forms with gigantic piles of fields, stuffed haphazardly into objects. A single form could easily have fifty or sixty fields for the user to interact with.
That leads to C# code like this:
private static String getPrefix(AV_Suchfilter filter)
{
String pr = String.Empty;
try
{
int maxLength = 0;
if (filter.Angebots_id != null) { maxLength = getmaxLength(maxLength, AV_MessagesTexte.Reportliste_sf_angebotsID.Length); }
if (filter.InternesKennzeichen != null) { if (filter.InternesKennzeichen.Trim() != String.Empty) { maxLength = getmaxLength(maxLength, AV_MessagesTexte.Reportliste_sf_internesKennzeichen.Length); } }
if (filter.Angebotsverantwortlicher_guid != null) { maxLength = getmaxLength(maxLength, AV_MessagesTexte.Reportliste_sf_angebotsverantwortlicher.Length); }
// Do this another 50 times....
// and then ....
int counter = 0;
while (counter < maxLength)
{
pr += " ";
counter++;
}
}
catch (Exception error)
{
ErrorForm frm = new ErrorForm(error);
frm.ShowDialog();
}
return pr;
}
The "Do this another 50 times" is doing a lot of heavy lifting in here. What really infuriates me about it, though, which we can see here, is that not all of the fields we're looking at are parameters to this function. And because the function here is static, they're not instance members either. I assume AV_MessagesTexte is basically a global of text labels, which isn't a bad way to manage such a thing, but functions should still take those globals as parameters so you can test them.
I'm kidding, of course. This function has never been tested.
Aside from a gigantic pile of string length comparisons, what does this function actually do? Well, it returns a new string which is a number of spaces exactly equal to the length of the longest string. And the way we build that output string is not only through string concatenation, but the use of a while loop where a for loop makes more sense.
Also, just… why? Why do we need a spaces-only-string the length of another string? Even if we're trying to do some sort of text layout, that seems like a bad way to do whatever it is we're doing, and also if that's the case, why is it called getPrefix? WHY IS OUR PREFIX A STRING OF SPACES THE LENGTH OF OUR FIELD? HOW IS THAT A PREFIX?
I feel like I'm going mad.
But the real star of this horrible mess, in my opinion, is the exception handling. Get an exception? Show the user a form! There's no attempt to decide if or how we could recover from this error, we just annoy the user with it.
Which isn't just unique to this function. Notice the getmaxLength function? It's really a max and it looks like this:
private static int getmaxLength(int old, int current)
{
int result = old;
try
{
if (current > old)
{
result = current;
}
}
catch (Exception error)
{
ErrorForm frm = new ErrorForm(error);
frm.ShowDialog();
}
return result;
}
What's especially delightful here is that this function couldn't possibly throw an exception. And you know what that tells me? This try/catch/form pattern is just their default error handling. They spam this everywhere, in every function, and the tech lead or architect pats themselves on the back for ensuring that the application "never crashes!" all the while annoying the users with messages they can't do anything about.
Author: Majoki She crouched in the foliage at the river’s edge and watched the young man. He was not aware of her presence and she found that comforting. It was unusual for her to feel comforted or otherwise. She had only recently become sentient, and it had been an alarming experience. To simply be one […]
There's an issue in the Eigen linear algebra library where linking together
objects compiled with different flags causes the resulting binary to crash. Some
details are written up in this mailing list thread.
I just encountered a situation where a large application sometimes crashes for
unknown reasons, and needed a method to determine whether this Eigen issue could
be the cause. I ended up doing this by using the DWARF data to see if the linked
binary contains the different incompatible flavors of malloc / free or not.
======== main
free_handmade
======== lib.so
malloc_glibc
free_glibc
Here I looked at main and lib.so (the build products from this little demo).
In a real case you'd look at every shared library linked into the binary and the
binary itself. On my machine /usr/include/eigen3/Eigen/src/Core/util/Memory.h
looks like this, starting on line 174:
The above awk script looks at the two malloc paths and the two free paths, and
we can clearly see that it only ever calls malloc_glibc(), but has both
flavors of free(). So this can crash. We want the whole executable
(shared libraries and all) to only have one type of malloc() and free();
that would guarantee no crashing.
There are more functions in that header that should be instrumented
(realloc() for instance) and the different alignment paths should be
instrumented similarly (as described in the mailing list thread above), but here
we see that this technique works.
Last week I decided I wanted to try out forgejo actions to build this blog instead of using
webhooks, so I looked at the documentation and started playing with it until I had it working as I wanted.
This post is to describe how I’ve installed and configured a forgejo runner, how I’ve added an
oci organization to my instance to build, publish and mirror container images and added a couple of
additional organizations (actions and docker for now) to mirror interesting
actions.
The changes made to build the site using actions will be documented on a separate post, as I’ll be using this entry to
test the new setup on the blog project.
Installing the runner
The first thing I’ve done is to install a runner on my server; I decided to use the
OCI image installation method, as it seemed to be the easiest and fastest
one.
The commands I’ve used to setup the runner are the following:
$ cd /srv
$ git clone https://forgejo.mixinet.net/blogops/forgejo-runner.git
$ cd forgejo-runner
$ sh ./bin/setup-runner.sh
The setup-runner.sh script does multiple things:
create a forgejo-runner user and group
create the necessary directories for the runner
create a .runner file with a predefined secret and the docker label
The RUNNER_NAME variable is defined on the setup-runner.sh script and the FORGEJO_SECRET must match the value used
on the .runner file.
Starting it with docker-compose
To launch the runner I’m going to use a docker-compose.yml file that starts two containers, a docker in docker
service to run the containers used by the workflow jobs and another one that runs the forgejo-runner itself.
The initial version used a TCP port to communicate with the dockerd server from the runner, but when I tried to build
images from a workflow I noticed that the containers launched by the runner were not going to be able to execute
another dockerd inside the dind one and, even if they were, it was going to be expensive computationally.
To avoid the issue I modified the dind service to use a unix socket on a shared volume that can be used by the
runner service to communicate with the daemon and also re-shared with the job containers so the dockerd server can
be used from them to build images.
Warning:
Letting the jobs use the same docker server that runs them has security implications, but this instance is
for a home server where I am the only user, so I am not worried about it and this way I can save some resources (in
fact, I could use the host docker server directly instead of using a dind service, but just in case I want to run
other containers on the host I prefer to keep the one used for the runner isolated from it).
For those concerned about sharing the same server an alternative would be to launch a second dockerd only for the jobs
(i.e. actions-dind) using the same approach (the volume with its socket will have to be shared with the runner
service so it can be re-shared, but the runner does not need to use it).
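A docker-compose.yml for this setup looks roughly like the following sketch (the image tags, socket path and the three runner volumes follow the description below; everything else, such as the bind-mount paths, is an assumption):
services:
  dind:
    image: docker:dind
    privileged: true
    # Serve the docker API on a unix socket inside the shared /dind volume,
    # owned by the runner's group so the runner can talk to it
    command: ["dockerd", "-H", "unix:///dind/docker.sock", "--group", "${RUNNER_GID}"]
    volumes:
      - ./dind:/dind
  forgejo-runner:
    image: data.forgejo.org/forgejo/runner:6.2.2
    depends_on:
      - dind
    user: "${RUNNER_UID}:${RUNNER_GID}"
    environment:
      DOCKER_HOST: unix:///dind/docker.sock
    command: ["forgejo-runner", "daemon", "-c", "/config.yaml"]
    volumes:
      - ./data:/data
      - ./dind:/dind
      - ./config.yaml:/config.yaml:ro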
There are multiple things to comment about this file:
The dockerd server is started with the -H unix:///dind/docker.sock flag to use the unix socket to communicate
with the daemon instead of using a TCP port (as said, it is faster and allows us to share the socket with the
containers started by the runner).
We are running the dockerd daemon with the RUNNER_GID group so the runner can communicate with it (the socket
gets that group which is the same used by the runner).
The runner container mounts three volumes: the data directory, the dind folder where docker creates the unix
socket and a config.yaml file used by us to change the default runner configuration.
The config.yaml file was originally created using the forgejo-runner:
$ docker run --rm data.forgejo.org/forgejo/runner:6.2.2 \
forgejo-runner generate-config > config.yaml
The changes to it are minimal: the runner capacity has been increased to 2 (that allows it to run two jobs at the
same time) and the /dind/docker.sock value has been added to the valid_volumes key to allow the containers launched
by the runner to mount it when needed; the diff against the default version is as follows:
@@ -13,7 +13,8 @@
# Where to store the registration result.
file: .runner
# Execute how many tasks concurrently at the same time.
- capacity: 1
+ # STO: Allow 2 concurrent tasks
+ capacity: 2
# Extra environment variables to run jobs.
envs:
A_TEST_ENV_NAME_1: a_test_env_value_1
@@ -87,7 +88,9 @@
# If you want to allow any volume, please use the following configuration:
# valid_volumes:
# - '**'
- valid_volumes: []
+ # STO: Allow to mount the /dind/docker.sock on the containers
+ valid_volumes:
+ - /dind/docker.sock
# overrides the docker client host with the specified one.
# If "-" or "", an available docker host will automatically be found.
# If "automount", an available docker host will automatically be found and ...
To start the runner we export the RUNNER_UID and RUNNER_GID variables and call docker-compose up to start the
containers in the background:
$ RUNNER_UID="$(id-u forgejo-runner)"RUNNER_GID="$(id-g forgejo-runner)"\
docker compose up -d
If the server was configured right we are now able to start using actions with this runner.
Preparing the system to run things locally
To avoid unnecessary network traffic we are going to create multiple organizations in our forgejo instance to maintain
our own actions and container images and mirror remote ones.
The rationale behind the mirror use is that we greatly reduce the need to connect to remote servers to download the
actions and images, which is good for performance and security reasons.
In fact, we are going to build our own images for some things to install the tools we want without needing to do it over
and over again on the workflow jobs.
Mirrored actions
The actions we are mirroring are on the actions and docker organizations; we have
created the following ones for now (the mirrors were created using the forgejo web interface and we have manually
disabled all the forgejo modules except the code one for them):
To use our actions by default (i.e., without needing to add the server URL on the uses keyword) we have added the
following section to the app.ini file of our forgejo server:
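The section is something along these lines (a sketch; DEFAULT_ACTIONS_URL is the setting that makes bare uses: references resolve against our own instance):
[actions]
ENABLED = true
DEFAULT_ACTIONS_URL = https://forgejo.mixinet.net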
To be able to push images to the oci organization I’ve created a token with package:write permission for my own
user because I’m a member of the organization and I’m authorized to publish packages on it (a different user could be
created, but as I said this is for personal use, so there is no need to complicate things for now).
To allow the use of those credentials on the actions I have added a secret (REGISTRY_PASS) and a variable
(REGISTRY_USER) to the oci organization to allow the actions to use them.
I’ve also logged myself on my local docker client to be able to push images to the oci group by hand, as it is
needed for bootstrapping the system (as I’m using local images on the workflows I need to push them to the server before
running the ones that are used to build the images).
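The manual login is the standard docker one against the instance, using my user name and the package:write token as the password:
# Log in to the forgejo registry before pushing images by hand
docker login forgejo.mixinet.net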
Local and mirrored images
Our images will be stored on the packages section of a new organization called oci; inside it we have
created two projects that use forgejo actions to keep things in shape:
images: contains the source files used to generate our own images and the actions to build, tag and
push them to the oci organization group.
mirrors: contains a configuration file for the regsync tool to mirror containers and an
action to run it.
In the next sections we are going to describe the actions and images we have created and mirrored from those projects.
The oci/images project
The images project is a monorepo that contains the source files for the images we are going to build and a couple of
actions.
The image sources are in subdirectories of the repository; to be considered an image, the folder has to contain a
Dockerfile that will be used to build the image.
The repository has two workflows:
build-image-from-tag: Workflow to build, tag and push an image to the oci organization
multi-semantic-release: Workflow to create tags for the images using the multi-semantic-release tool.
As the workflows are already configured to use some of our images we pushed some of them from a checkout of the
repository using the following commands:
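The commands were along these lines (a sketch: the image names are the ones referenced by the workflows, while the directory layout and tags are assumptions):
# Build and push the bootstrap images by hand from the repository checkout
docker build -t forgejo.mixinet.net/oci/node-mixinet:latest node-mixinet
docker push forgejo.mixinet.net/oci/node-mixinet:latest
docker build -t forgejo.mixinet.net/oci/multi-semantic-release:latest multi-semantic-release
docker push forgejo.mixinet.net/oci/multi-semantic-release:latest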
In the next subsections we will describe what the workflows do and will show their source code.
build-image-from-tag workflow
This workflow uses a docker client to build an image from a tag on the repository with the format
image-name-v[0-9].[0-9].[0-9]+.
As the runner is executed on a container (instead of using lxc) it seemed unreasonable to run another dind
container from that one; that is why, after some tests, I decided to share the dind service server socket with the
runner container and enabled the option to mount it also on the containers launched by the runner when needed (I only
do it on the build-image-from-tag action for now).
The action was configured to run using a trigger or when new tags with the right format were created, but when the tag
is created by multi-semantic-release the trigger does not work for some reason, so now it only runs the job on
triggers and checks if it is launched for a tag with the right format on the job itself.
The source code of the action is as follows:
name: build-image-from-tag
on:
  workflow_dispatch:
jobs:
  build:
    # Don't build the image if the registry credentials are not set, the ref is
    # not a tag or it doesn't contain '-v'
    if: ${{ vars.REGISTRY_USER != '' && secrets.REGISTRY_PASS != '' && startsWith(github.ref, 'refs/tags/') && contains(github.ref, '-v') }}
    runs-on: docker
    container:
      image: forgejo.mixinet.net/oci/node-mixinet:latest
      # Mount the dind socket on the container at the default location
      options: -v /dind/docker.sock:/var/run/docker.sock
    steps:
      - name: Extract image name and tag from git and get registry name from env
        id: job_data
        run: |
          echo "::set-output name=img_name::${GITHUB_REF_NAME%%-v*}"
          echo "::set-output name=img_tag::${GITHUB_REF_NAME##*-v}"
          echo "::set-output name=registry::$(echo "${{ github.server_url }}" | sed -e 's%https://%%')"
          echo "::set-output name=oci_registry_prefix::$(echo "${{ github.server_url }}/oci" | sed -e 's%https://%%')"
      - name: Checkout the repo
        uses: actions/checkout@v4
      - name: Export build dir and Dockerfile
        id: build_data
        run: |
          img="${{ steps.job_data.outputs.img_name }}"
          build_dir="$(pwd)/${img}"
          dockerfile="${build_dir}/Dockerfile"
          if [ -f "$dockerfile" ]; then
            echo "::set-output name=build_dir::$build_dir"
            echo "::set-output name=dockerfile::$dockerfile"
          else
            echo "Couldn't find the Dockerfile for the '$img' image"
            exit 1
          fi
      - name: Login to the Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ steps.job_data.outputs.registry }}
          username: ${{ vars.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_PASS }}
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build and Push
        uses: docker/build-push-action@v6
        with:
          push: true
          tags: |
            ${{ steps.job_data.outputs.oci_registry_prefix }}/${{ steps.job_data.outputs.img_name }}:${{ steps.job_data.outputs.img_tag }}
            ${{ steps.job_data.outputs.oci_registry_prefix }}/${{ steps.job_data.outputs.img_name }}:latest
          context: ${{ steps.build_data.outputs.build_dir }}
          file: ${{ steps.build_data.outputs.dockerfile }}
          build-args: |
            OCI_REGISTRY_PREFIX=${{ steps.job_data.outputs.oci_registry_prefix }}/
Some notes about this code:
The if condition of the build job is not perfect, but it is good enough to avoid wrong uses as long as nobody
uses manual tags with the wrong format and expects things to work (it checks if the REGISTRY_USER and
REGISTRY_PASS variables are set, if the ref is a tag and if it contains the -v string).
To be able to access the dind socket we mount it on the container using the options key on the container section
of the job (this only works if supported by the runner configuration as explained before).
We use the job_data step to get information about the image from the tag and the registry URL from the environment
variables, it is executed first because all the information is available without checking out the repository.
We use the build_data step to get the build dir and Dockerfile paths from the repository (right now we are
assuming fixed paths and checking if the Dockerfile exists, but in the future we could use a configuration file to
get them, if needed).
As we are using a docker daemon that is already running there is no need to use the
docker/setup-docker-action to install it.
On the build and push step we pass the OCI_REGISTRY_PREFIX build argument to the Dockerfile to be able to use it
on the FROM instruction (we are using it in our images).
multi-semantic-release workflow
This workflow is used to run the multi-semantic-release tool on pushes to the main branch.
It is configured to create the configuration files on the fly (it prepares things to tag the folders that contain a
Dockerfile using a couple of template files available on the repository’s .forgejo directory) and run the
multi-semantic-release tool to create tags and push them to the repository if new versions are to be built.
Initially we assumed that the tag creation pushed by multi-semantic-release would be enough to run the
build-image-from-tag action, but as it didn’t work we removed the rule to run the action on tag creation and added
code to trigger the action using an api call for the newly created tags (we get them from the output of the
multi-semantic-release execution).
The source code of the action is as follows:
name: multi-semantic-release
on:
  push:
    branches:
      - 'main'
jobs:
  multi-semantic-release:
    runs-on: docker
    container:
      image: forgejo.mixinet.net/oci/multi-semantic-release:latest
    steps:
      - name: Checkout the repo
        uses: actions/checkout@v4
      - name: Generate multi-semantic-release configuration
        shell: sh
        run: |
          # Get the list of images to work with (the folders that have a Dockerfile)
          images="$(for img in */Dockerfile; do dirname "$img"; done)"
          # Generate a values.yaml file for the main packages.json file
          package_json_values_yaml=".package.json-values.yaml"
          echo "images:" >"$package_json_values_yaml"
          for img in $images; do
            echo " - $img" >>"$package_json_values_yaml"
          done
          echo "::group::Generated values.yaml for the project"
          cat "$package_json_values_yaml"
          echo "::endgroup::"
          # Generate the package.json file validating that is a good json file with jq
          tmpl -f "$package_json_values_yaml" ".forgejo/package.json.tmpl" | jq . > "package.json"
          echo "::group::Generated package.json for the project"
          cat "package.json"
          echo "::endgroup::"
          # Remove the temporary values file
          rm -f "$package_json_values_yaml"
          # Generate the package.json file for each image
          for img in $images; do
            tmpl -v "img_name=$img" -v "img_path=$img" ".forgejo/ws-package.json.tmpl" | jq . > "$img/package.json"
            echo "::group::Generated package.json for the '$img' image"
            cat "$img/package.json"
            echo "::endgroup::"
          done
      - name: Run multi-semantic-release
        shell: sh
        run: |
          multi-semantic-release | tee .multi-semantic-release.log
      - name: Trigger builds
        shell: sh
        run: |
          # Get the list of tags published on the previous steps
          tags="$(sed -n -e 's/^\[.*\] \[\(.*\)\] .* Published release \([0-9]\+\.[0-9]\+\.[0-9]\+\) on .*$/\1-v\2/p' \
            .multi-semantic-release.log)"
          rm -f .multi-semantic-release.log
          if [ "$tags" ]; then
            # Prepare the url for building the images
            workflow="build-image-from-tag.yaml"
            dispatch_url="${{ github.api_url }}/repos/${{ github.repository }}/actions/workflows/$workflow/dispatches"
            echo "$tags" | while read -r tag; do
              echo "Triggering build for tag '$tag'"
              curl \
                -H "Content-Type:application/json" \
                -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
                -d "{\"ref\":\"$tag\"}" "$dispatch_url"
            done
          fi
Notes about this code:
The use of the tmpl tool to process the multi-semantic-release configuration templates comes from previous uses,
but in this case we could have used a different approach (i.e. envsubst), but we left it because it keeps
things simple and can be useful in the future if we want to do more complex things with the template files.
We use tee to show and dump to a file the output of the multi-semantic-release execution.
We get the list of pushed tags using sed against the output of the multi-semantic-release execution and for
each one found we use curl to call the forgejo API to trigger the build job; as the call is against the same
project we can use the GITHUB_TOKEN generated for the workflow to do it, without creating a user token that has to
be shared as a secret.
The .forgejo/package.json.tmpl file is the following one:
As can be seen it only needs a list of paths to the images as argument (the file we generate contains the names and
paths, but it could be simplified).
And the .forgejo/ws-package.json.tmpl file is the following one:
The repository contains a template for the configuration file we are going to use with regsync
(regsync.envsubst.yml) to mirror images from remote registries using a workflow that generates a configuration file
from the template and runs the tool.
The initial version of the regsync.envsubst.yml file is prepared to mirror alpine containers from version 3.21 to
3.29 (we explicitly remove version 3.20) and needs the forgejo.mixinet.net/oci/node-mixinet:latest image to run
(as explained before it was pushed manually to the server):
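Its shape is roughly the following (a sketch in the regsync configuration format; the registries, repositories and tag patterns here are placeholders rather than the real list):
version: 1
creds:
  - registry: "${REGISTRY}"
    user: "${REGISTRY_USER}"
    pass: "${REGISTRY_PASS}"
sync:
  - source: docker.io/library/alpine
    target: ${REGISTRY}/oci/alpine
    type: repository
    tags:
      allow:
        - "latest"
        - "3\\.2[1-9]"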
The mirror workflow creates a configuration file by replacing the value of the REGISTRY environment variable (computed
by removing the protocol from the server_url), the REGISTRY_USER organization value and the REGISTRY_PASS secret
using the envsubst command, and then runs the regsync tool to mirror the images using that configuration file.
The action is configured to run daily, on push events when the regsync.envsubst.yml file is modified on the main
branch and can also be triggered manually.
We have installed a forgejo-runner and configured it to run actions for our own server and things are working fine.
This approach allows us to have a powerful CI/CD system on a modest home server, something very useful for maintaining
personal projects and playing with things without needing SaaS platforms like github or
gitlab.
A new version 0.1.10 of the RcppExamples
package is now on CRAN, and
marks the first release in five and a half years.
RcppExamples
provides a handful of short examples detailing by concrete working
examples how to set up basic R data structures in C++. It also provides
a simple example for packaging with Rcpp. The package provides (generally
fairly) simple examples; more (and generally longer) examples are at the
Rcpp Gallery.
This release brings a bi-directional example of factor
conversion, updates the Date example, removes the
explicitly stated C++ compilation standard (which CRAN now nags about) and brings a
number of small fixes and maintenance that accrued since the last
release. The NEWS extract follows:
Changes in
RcppExamples version 0.1.10 (2025-03-17)
Simplified DateExample by removing unused API
code
Added a new FactorExample with conversion to and
from character vectors
Updated and modernised continuous integrations multiple
times
Abstract: In human factor fields such as human-computer interaction (HCI) and psychology, researchers have been concerned that participants mostly come from WEIRD (Western, Educated, Industrialized, Rich, and Democratic) countries. This WEIRD skew may hinder understanding of diverse populations and their cultural differences. The usable privacy and security (UPS) field has inherited many research methodologies from research on human factor fields. We conducted a literature review to understand the extent to which participant samples in UPS papers were from WEIRD countries and the characteristics of the methodologies and research topics in each user study recruiting Western or non-Western participants. We found that the skew toward WEIRD countries in UPS is greater than that in HCI. Geographic and linguistic barriers in the study methods and recruitment methods may cause researchers to conduct user studies locally. In addition, many papers did not report participant demographics, which could hinder the replication of the reported studies, leading to low reproducibility. To improve geographic diversity, we provide the suggestions including facilitate replication studies, address geographic and linguistic issues of study/recruitment methods, and facilitate research on the topics for non-WEIRD populations.
The moral may be that human factors and usability needs to be localized.
Abstract: Key lengths in symmetric cryptography are determined with respect to the brute force attacks with current technology. While nowadays at least 128-bit keys are recommended, there are many standards and real-world applications that use shorter keys. In order to estimate the actual threat imposed by using those short keys, precise estimates for attacks are crucial.
In this work we provide optimized implementations of several widely used algorithms on GPUs, leading to interesting insights on the cost of brute force attacks on several real-word applications.
In particular, we optimize KASUMI (used in GPRS/GSM), SPECK (used in RFID communication), and TEA3 (used in TETRA). Our best optimizations allow us to try 2^35.72, 2^36.72, and 2^34.71 keys per second on a single RTX 4090 GPU. Those results improve upon previous results significantly, e.g. our KASUMI implementation is more than 15 times faster than the optimizations given in the CRYPTO’24 paper [ACC+24], improving the main results of that paper by the same factor.
With these optimizations, in order to break GPRS/GSM, RFID, and TETRA communications in a year, one needs around 11, 22 billion, and 1.36 million RTX 4090 GPUs, respectively.
For KASUMI, the time-memory trade-off attacks of [ACC+24] can be performed with 142 RTX 4090 GPUs instead of 2400 RTX 3090 GPUs or, when the same amount of GPUs are used, their table creation time can be reduced to 20.6 days from 348 days, crucial improvements for real-world cryptanalytic tasks.
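As a rough sanity check of the TETRA figure (assuming the 80-bit TEA3 key length and roughly 3.15 x 10^7 seconds in a year):
\[
\frac{2^{80}}{2^{34.71}\,\text{keys/s} \times 3.15\times 10^{7}\,\text{s}} \approx 2^{80 - 34.71 - 24.91} = 2^{20.38} \approx 1.4\ \text{million GPUs},
\]
which matches the 1.36 million quoted above.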
Attacks always get better; they never get worse. None of these is practical yet, and they might never be. But there are certainly more optimizations to come.
FreedomBox is a Debian blend that makes it easier to run your own server. Approximately every two years, there is a new stable release of Debian. This year’s release will be called Debian 13 "trixie".
This post will provide an overview of changes between FreedomBox 23.6 (the version that shipped in Debian 12 "bookworm") and 25.5 (the latest release). Note: Debian 13 "trixie" is not yet released, so things may still change, be added or removed, before the official release.
General
A number of translations were updated, including Albanian, Arabic, Belarusian, Bulgarian, Chinese (Simplified Han script), Chinese (Traditional Han script), Czech, Dutch, French, German, Hindi, Japanese, Norwegian Bokmål, Polish, Portuguese, Russian, Spanish, Swedish, Telugu, Turkish, and Ukrainian.
Fix cases where a package or service is used by multiple apps, so that disabling or uninstalling one app does not affect the other app.
When uninstalling an app, purge the packages, to remove all data and configuration.
For configuration files that need to be placed into folders owned by other packages, we now install these files under /usr/share/freedombox/etc/, and create a symbolic link to the other package’s configuration folder. This prevents the files being lost when other packages are purged.
Add an action to re-run the setup process for an app. This can fix many of the possible issues that occur.
Various improvements related to the "force upgrade" feature, which handles upgrading packages with conffile prompts.
Fix install/uninstall issues for apps that use MySQL database (WordPress, Zoph).
Improve handling of file uploads (Backups, Feather Wiki, Kiwix).
Switch to Bootstrap 5 front-end framework.
Removed I2P app, since the i2p package was removed from Debian.
Various user interface changes, including:
Add tags for apps, replacing short descriptions. When a tag is clicked, search and filter for one or multiple tags.
Organize the System page into sections.
Add breadcrumbs for page hierarchy navigation.
Add next steps page after initial FreedomBox setup.
Diagnostics
Add diagnostic checks to detect common errors.
Add diagnostics daily run, with notifications about failures.
Add Repair action for failed diagnostics, and option for automatic repairs.
Name Services
Move hostname and domain name configuration to Names page.
Support multiple static and/or dynamic domains.
Use systemd-resolved for DNS resolution.
Add options for setting global DNS-over-TLS and DNSSEC preferences.
Networks
Add more options for IPv6 configuration method.
Overhaul Wi-Fi networks scan page.
Privacy
Add option to disable fallback DNS servers.
Add option to set the lookup URL to get the public IP address of the FreedomBox.
Users and Groups
Delete or move home folder when user is deleted or renamed.
When a user is inactivated, also inactivate the user in LDAP.
Deluge
This BitTorrent client app should be available once again in Debian 13 "trixie".
Ejabberd
Turn on Message Archive Management setting by default, to help various XMPP clients use it.
Feather Wiki
Add new app for note taking.
This app lives in a single HTML file, which is downloaded from the FreedomBox website.
GitWeb
Disable snapshot feature, due to high resource use.
Various fixes for repository operations.
GNOME
Add new app to provide a graphical desktop environment.
Requires a monitor, keyboard, and mouse to be physically connected to the FreedomBox.
Not suitable for low-end hardware.
ikiwiki
Disable discussion pages by default for new wiki/blog, to avoid spam.
Kiwix
Add new app for offline reader of Wikipedia and other sites.
Matrix Synapse
Add an option for token-based registration verification, so that users signing up for new accounts will need to provide a token during account registration.
MediaWiki
Allow setting the site language code.
Increase PHP maximum execution time to 100 seconds.
MiniDLNA
Add media directory selection form.
Miniflux
Add new app for reading news from RSS/ATOM feeds.
Nextcloud
Add new app for file sync and collaboration.
Uses a Docker container maintained by the Nextcloud community. The container is downloaded from FreedomBox container registry.
OpenVPN
Renew server/client certificates, and set expiry to 10 years.
Postfix/Dovecot
Fix DKIM signing.
Show DNS entries for all domains.
Shadowsocks Server
Add new app for censorship resistance, separate from Shadowsocks Client app.
SOGo
Add new app for groupware (webmail, calendar, tasks, and contacts).
Works with Postfix/Dovecot email server app.
TiddlyWiki
Add new app for note taking.
This app lives in a single HTML file, which is downloaded from the FreedomBox website.
Tor Proxy
Add new app for Tor SOCKS proxy, separate from Tor app.
Transmission
Allow remote user interfaces to connect.
Conclusion
Over the past two years, FreedomBox has been increasing the number of features and applications available to its users. We have also focused on improving the reliability of the system, detecting unexpected situations, and providing means to return to a known good state. With these improvements, FreedomBox has become a good solution for people with limited time or energy to set up and start running a personal server, at home or in the cloud.
Looking forward, we would like to focus on making more powerful hardware available with FreedomBox pre-installed and ready to be used. This hardware would also support larger storage devices, making it suitable as a NAS or media server. We are also very interested in exploring new features such as atomic updates, which will further enhance the reliability of the system.
An offline PKI enhances security by physically isolating the certificate
authority from network threats. A YubiKey is a low-cost solution to store a
root certificate. You also need an air-gapped environment to operate the root
CA.
Offline PKI backed up by 3 YubiKeys
This post describes an offline PKI system using the following components:
2 YubiKeys for the root CA (with a 20-year validity),
1 YubiKey for the intermediate CA (with a 5-year validity), and
It is possible to add more YubiKeys as a backup of the root CA if needed. This
is not needed for the intermediate CA as you can generate a new one if
the current one gets destroyed.
The software part
offline-pki is a small Python application to manage an offline PKI.
It relies on yubikey-manager to manage YubiKeys and cryptography for
cryptographic operations not executed on the YubiKeys. The application has some
opinionated design choices. Notably, the cryptography is hard-coded to use NIST
P-384 elliptic curve.
The first step is to reset all your YubiKeys:
$ offline-pki yubikey reset
This will reset the connected YubiKey. Are you sure? [y/N]: y
New PIN code:
Repeat for confirmation:
New PUK code:
Repeat for confirmation:
New management key ('.' to generate a random one):
WARNING[pki-yubikey] Using random management key: e8ffdce07a4e3bd5c0d803aa3948a9c36cfb86ed5a2d5cf533e97b088ae9e629
INFO[pki-yubikey] 0: Yubico YubiKey OTP+FIDO+CCID 00 00
INFO[pki-yubikey] SN: 23854514
INFO[yubikit.management] Device config written
INFO[yubikit.piv] PIV application data reset performed
INFO[yubikit.piv] Management key set
INFO[yubikit.piv] New PUK set
INFO[yubikit.piv] New PIN set
INFO[pki-yubikey] YubiKey reset successful!
Then, generate the root CA and create as many copies as you want:
$ offline-pki certificate root --permitted example.com
Management key for Root X:
Plug YubiKey "Root X"...
INFO[pki-yubikey] 0: Yubico YubiKey CCID 00 00
INFO[pki-yubikey] SN: 23854514
INFO[yubikit.piv] Data written to object slot 0x5fc10a
INFO[yubikit.piv] Certificate written to slot 9C (SIGNATURE), compression=True
INFO[yubikit.piv] Private key imported in slot 9C (SIGNATURE) of type ECCP384
Copy root certificate to another YubiKey? [y/N]: y
Plug YubiKey "Root X"...
INFO[pki-yubikey] 0: Yubico YubiKey CCID 00 00
INFO[pki-yubikey] SN: 23854514
INFO[yubikit.piv] Data written to object slot 0x5fc10a
INFO[yubikit.piv] Certificate written to slot 9C (SIGNATURE), compression=True
INFO[yubikit.piv] Private key imported in slot 9C (SIGNATURE) of type ECCP384
Copy root certificate to another YubiKey? [y/N]: n
Then, you can create an intermediate certificate with offline-pki yubikey
intermediate and use it to sign certificates by providing a CSR to offline-pki
certificate sign. Be careful and inspect the CSR before signing it, as only the
subject name can be overridden. Check the documentation for more details.
Get the available options using the --help flag.
The hardware part
To ensure the operations on the root and intermediate CAs are air-gapped,
a cost-efficient solution is to use an ARM64 single board computer. The Libre
Computer Sweet Potato SBC is a more open alternative to the well-known
Raspberry Pi.1
Libre Computer Sweet Potato SBC, powered by the AML-S905X SOC
$ tio /dev/ttyUSB0
[16:40:44.546] tio v3.7
[16:40:44.546] Press ctrl-t q to quit
[16:40:44.555] Connected to /dev/ttyUSB0
GXL:BL1:9ac50e:bb16dc;FEAT:ADFC318C:0;POC:1;RCY:0;SPI:0;0.0;CHK:0;
TE: 36574
BL2 Built : 15:21:18, Aug 28 2019. gxl g1bf2b53 - luan.yuan@droid15-sz
set vcck to 1120 mv
set vddee to 1000 mv
Board ID = 4
CPU clk: 1200MHz
[…]
The Nix glue
To bring everything together, I am using Nix with a Flake providing:
a package for the offline-pki application, with shell completion,
a development shell, including an editable version of the offline-pki application,
a NixOS module to set up the offline PKI, resetting the system at each boot,
a QEMU image for testing, and
an SD card image to be used on the Sweet Potato or another ARM64 SBC.
# Execute the application locally
nix run github:vincentbernat/offline-pki -- --help
# Run the application inside a QEMU VM
nix run github:vincentbernat/offline-pki\#qemu
# Build a SD card for the Sweet Potato or for the Raspberry Pi
nix build --system aarch64-linux github:vincentbernat/offline-pki\#sdcard.potato
nix build --system aarch64-linux github:vincentbernat/offline-pki\#sdcard.generic
# Get a development shell with the application
nix develop github:vincentbernat/offline-pki
The key for the root CA is not generated by the YubiKey. Using an
air-gapped computer is all the more important. Put it in a safe with the
YubiKeys when done! ↩︎
Fresh out of university, Remco accepted a job that allowed him to relocate to a different country. While entering the workforce for the first time, he was also adjusting to a new home and culture, which is probably why the red flags didn't look quite so red.
The trouble had actually begun during his interview. While being questioned about his own abilities, Remco learned about Conglomcorp's healthy financial position, backed by a large list of clients. Everything seemed perfect, but Remco had a bad gut feeling he could neither explain nor shake off. Being young and desperate for a job, he ignored his misgivings and accepted the position. He hadn't yet learned how scarily accurate intuition often proves to be.
The second red flag was run up the mast at orientation. While teaching him about the company's history, one of the senior managers proudly mentioned that Conglomcorp had recently fired 50% of their workforce, and were still doing great. This left Remco feeling more concerned than impressed, but he couldn't reverse course now.
Flag number three waved during onboarding, as Remco began to learn about the Java application he would be helping to develop. He'd been sitting at the cubicle of Lars, a senior developer, watching over his shoulder as Lars familiarized him with the application's UI.
"Garbage Collection." Using his mouse, Lars circled a button in the interface labeled just that. "We added this to solve a bug some users were experiencing. Now we just tell everyone that if they notice any weird behavior in the application, they should click this button."
Remco frowned. "What happens in the code when you click that?"
"It calls System.gc()."
But that wasn't even guaranteed to run! The Java virtual machine handled its own garbage collection. And in no universe did you want to put a worse-than-useless button in your UI and manipulate clients into thinking it did something. But Remco didn't feel confident enough to speak his mind. He kept silent and soldiered on.
When Remco was granted access to the codebase, it got worse. The whole thing was a pile of spaghetti full of similar design brillance that mostly worked well enough to satisfy clients, although there was a host of bugs in the bug tracker, some of which had been rotting there for over 7 years. Remco had been given the unenviable task of fixing the oldest ones.
Remco slogged through another few months. Eventually, he was tasked with implementing a new feature that was supposed to be similar to existing features already in the application. He checked these other features to see how they were coded, intending to follow the same pattern. As it turned out, each and every one of them had been implemented in a different, weird way. The wheel had been reinvented over and over, and none of the implementations looked like anything he ought to be imitating.
Flummoxed, Remco approached Lars' cubicle and explained his findings. "How should I proceed?" he finally asked.
Lars shrugged, and looked up from a running instance of the application. "I don't know." Lars turned back to his screen and pushed "Garbage Collect".
Fairly soon after that enlightening experience, Remco moved on. Conglomcorp is still going, though whether they've retained their garbage collection button is anyone's guess.
Author: Julian Miles, Staff Writer There’s a smoking hole where my Rembrandt used to be. Not sure if it was blown in or out – I was too busy flying through the air to notice the finer points of the opening part of this assault. Dustin glances toward where I’m looking. “Sorry about the art. […]
Google tracking everything we read is bad, particularly since Google abandoned the “don’t be evil” plan and are presumably open to being somewhat evil.
The article recommendations on Chrome on Android are useful and I’d like to be able to get the same quality of recommendations without Google knowing about everything I read. Ideally without anything other than the device I use knowing what interests me.
An ML system to map between sources of news articles of interest should be easy to develop and run on end-user devices. The model could be published and, when given articles you like as input, produce as output sites that contain other articles you would like. Then an agent on the end-user system could spider the sites in question and run a local model to determine which articles to present to the user.
Such a system could also support hate-following (which Google doesn't do): the user could run two separate models, one for regular reading and one for hate-following, and decide how much of each type of content to recommend. It could also give negative weight to entries that match the hate criteria.
Some sites with articles (like Medium) give an estimate of reading time. An article recommendation system should have a fixed limit (both in number of articles and in total reading time) to support the “I spend half an hour reading during lunch” model rather than doom-scrolling.
Author: Gary Duehr 02.17.2055/13:46: Ahead I can see a strip of poplars like a zipper between two fields of corn stubble, the frozen stalks shorn off; I sense the need to descend and I do, I dip my nose downward: the wind shears under my wing-flaps, the missile strapped to my frame drags me downward; […]
I've been offline due to a family property calamity (all are healthy.) But this here set of facts (that you'll see nowhere else) needs urgently to be said about the DOGE 'efficiency' campaign: Nathan Gardels, at Noema Magazine*, offers excellent points about Evolutionary Stability vs. Revolutionary shake-ups, like Elon Musk's massive, Robespierre-style purge of civil servants. It's a distinction that both right and left ought to learn, especially as:
1. No one - and I mean no one at all - appears to be mentioning the most-successful campaign ever to improve government efficiency. One that was 'evolutionary' and - at the time - recognized as a huge success, even if it did not use chainsaws. Al Gore's "Reinventing Government" endeavor used systematic methods to reduce duplication, redundancy and unnecessary procedures across all bureaucracies. The program won plaudits across the spectrum, including JD Powers awards. Solid metrics showed increased efficiency and service across all agencies. In particular Gore's RG program reversed the long plummet in veterans' opinions on the VA, transforming it into among the most loved and trusted of all U.S. institutions.
Why does that earlier endeavor to increase government efficiency go entirely unmentioned today, amid mass, unexplained and destabilizing 'chainsaw' firings? Of course, time and the steady lobotomization of American discourse help to explain it. As does the heady rush of sanctimony rage, today's most-damaging addiction.
As well, the utter difference in personalities between Al Gore and Elon Musk make them seem different species. Any comparison would thus stretch imaginations too thin, among modern journalists. Still, the contrast would seem to be worth offering. By someone. Anyone. Anywhere. Yes, I know. I ask too much.
2. But more is at fault than the Right's mania or microcephalic journalism. The Left's fervid, revolutionary-transcendentalist impatience - which blatantly cost Kamala Harris the election - fulminates contempt toward boring, undramatic efforts at incremental reform. This, despite the historical fact that 'incremental reforms' - often frustratingly slow - are exactly why the American Experiment has worked, while other revolutions soon devolved into chaos, often worse than the ancien régimes they replaced.
3. Alas and worst of all, there does not appear to be much - or even any - effort at tracking "who will gain most" from these Trumpian chainsaw slashes. Are there particular interest groups who will benefit?
I cannot prove, yet... but I assert... that these "DOGE" slashes at Education and Health and CDC and other agencies have one core aim: to rile a vast range of opponents and thus distract from the administration's two top goals:
FIRST: evisceration of the FBI, CIA and counter-intelligence services. (Now who benefits from that?)
SECOND: to crush the IRS.
Nothing has terrified the cheater wing of American oligarchy more than the 2021 Pelosi bill that ended 40 years of starvation at the Internal Revenue Service. Forty years preventing computer upgrades, software updates or the hiring of sufficient staff to audit upper-crust tax-dodgers and outright criminal gangs.
Desperation to re-impose IRS starvation is (I maintain) the core goal for which that wing of oligarchy flooded the Trump Campaign with funds and cryptic aid. Now, the cheaters and their inheritance brat New Lords are getting their wish. And it's working. While cuts at CDC and Health and Education raise howls, you'll notice almost nary a peep from liberals and moderates about the poor, friendless IRS. And the cheater lords smile.
4. Final point. Might anyone apply actual Outcome metrics comparing Al Gore's Reinventing Government campaign to Elon Musk's DOGE?
What're 'metrics'? If both the left and right share one trait... it is utter contempt for nerdy stuff like facts.
5. Compare outcomes from historical revolutions. It reduces to Hamilton, Adams & Jefferson vs. Robespierre, Lenin and Hitler. Look them up.
PS... nothing better disproves the old saw that "Both parties are the same and both are corrupt."
That is disproved absolutely and decisively by the IRS matter. Dem pols voted for the IRS to audit bigshots... including some of their own. Republicans live in daily terror of that possibility. Proved. QED. Step up with wager stakes.
The Debian Med team works on software packages that are associated with
medicine, pre-clinical research, and life sciences, and makes them available
for the Debian distribution. Seven Debian developers and contributors to the
team gathered for their annual Sprint, in Berlin, Germany on 15 and 16 February
2025. The purpose of the meeting was to tackle bugs in Debian-Med packages,
enhance the quality of the team's packages, and coordinate the efforts of team
members overall.
This sprint allowed participants to fix dozens of bugs, including
release-critical ones. New upstream versions were uploaded, and the
participants took some time to modernize some packages. Additionally, they
discussed the long-term goals of the team, prepared a forthcoming invited talk
for a conference, and enjoyed working together.
Thank you everyone for keeping the lights on for a bit longer. KDE snaps have been restored. I also released 24.12.3! In addition, I have moved “most” snaps to core24. The remaining snaps need newer qt6/kf6, which is a WIP. “The Bad luck girl” has been hit once again with another loss, so with that, I will be reducing my hours on snaps while I consider my options for my future. I am still around, just a bit less.
Thanks again everyone, if you can get me through one more ( lingering broken arm ) surgery I would be forever grateful! https://gofund.me/d5d59582
Author: R. J. Erbacher I was going catching with my Grampie. He weren’t really my Grampie but that’s how I’d always referred to him. He was old, had a bushy white moustache, a scratchy beard and a big belly. And he was good to me, not like my Pa which tanned me all the time, […]
SQUID transforms traditional Bagpipe and Drum Band entertainment into a multi-sensory rush of excitement, featuring high energy bagpipes, pop music influences and visually stunning percussion!
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
A clever malware deployment scheme first spotted in targeted attacks last year has now gone mainstream. In this scam, dubbed “ClickFix,” the visitor to a hacked or malicious website is asked to distinguish themselves from bots by pressing a combination of keyboard keys that causes Microsoft Windows to download password-stealing malware.
ClickFix attacks mimic the “Verify You are a Human” tests that many websites use to separate real visitors from content-scraping bots. This particular scam usually starts with a website popup that looks something like this:
This malware attack pretends to be a CAPTCHA intended to separate humans from bots.
Clicking the “I’m not a robot” button generates a pop-up message asking the user to take three sequential steps to prove their humanity.
Executing this series of keypresses prompts Windows to download password-stealing malware.
Step 1 involves simultaneously pressing the keyboard key with the Windows icon and the letter “R,” which opens a Windows “Run” prompt that will execute any specified program that is already installed on the system.
Step 2 asks the user to press the “CTRL” key and the letter “V” at the same time, which pastes malicious code from the site’s virtual clipboard.
Step 3 — pressing the “Enter” key — causes Windows to download and launch malicious code through “mshta.exe,” a Windows program designed to run Microsoft HTML application files.
“This campaign delivers multiple families of commodity malware, including XWorm, Lumma stealer, VenomRAT, AsyncRAT, Danabot, and NetSupport RAT,” Microsoft wrote in a blog post on Thursday. “Depending on the specific payload, the specific code launched through mshta.exe varies. Some samples have downloaded PowerShell, JavaScript, and portable executable (PE) content.”
According to Microsoft, hospitality workers are being tricked into downloading credential-stealing malware by cybercriminals impersonating Booking.com. The company said attackers have been sending malicious emails impersonating Booking.com, often referencing negative guest reviews, requests from prospective guests, or online promotion opportunities — all in a bid to convince people to step through one of these ClickFix attacks.
In November 2024, KrebsOnSecurity reported that hundreds of hotels that use booking.com had been subject to targeted phishing attacks. Some of those lures worked, and allowed thieves to gain control over booking.com accounts. From there, they sent out phishing messages asking for financial information from people who’d just booked travel through the company’s app.
Earlier this month, the security firm Arctic Wolf warned about ClickFix attacks targeting people working in the healthcare sector. The company said those attacks leveraged malicious code stitched into the widely used physical therapy video site HEP2go that redirected visitors to a ClickFix prompt.
An alert (PDF) released in October 2024 by the U.S. Department of Health and Human Services warned that the ClickFix attack can take many forms, including fake Google Chrome error pages and popups that spoof Facebook.
ClickFix tactic used by malicious websites impersonating Google Chrome, Facebook, PDFSimpli, and reCAPTCHA. Source: Sekoia.
The ClickFix attack — and its reliance on mshta.exe — is reminiscent of phishing techniques employed for years that hid exploits inside Microsoft Office macros. Malicious macros became such a common malware threat that Microsoft was forced to start blocking macros by default in Office documents that try to download content from the web.
Alas, the email security vendor Proofpoint has documented plenty of ClickFix attacks via phishing emails that include HTML attachments spoofing Microsoft Office files. When opened, the attachment displays an image of a Microsoft Word document with a pop-up error message directing users to click the “Solution” or “How to Fix” button.
HTML files containing ClickFix instructions. Examples for attachments named “Report_” (on the left) and “scan_doc_” (on the right). Image: Proofpoint.
Organizations that wish to do so can take advantage of Microsoft Group Policy restrictions to prevent Windows from executing the “run” command when users hit the Windows key and the “R” key simultaneously.
Last year, I attended the annual LibreOffice Conference in Luxembourg with the help of a generous travel grant by The Document Foundation (TDF). It was a three-day event from the 10th to the 12th of October 2024, with an additional day for community meetup on the 9th.
Luxembourg is a small (twice as big as Delhi) country in Western Europe. After going through an arduous visa process, I reached Luxembourg on the 8th of October. Upon arriving in Luxembourg, I took a bus to the city center, where my hotel — Park Inn — was located. All the public transport in Luxembourg was free of cost. It was as if I stepped into another world. There were separate tracks for cycling and a separate lane for buses, along with good pedestrian infrastructure. In addition, the streets were pretty neat and clean.
Luxembourg's Findel Airport
Separate cycling tracks in Luxembourg
My hotel was 20 km from the conference venue in Belval. However, the commute was convenient due to a free-of-cost train connection; the trains were comfortable, smooth, and scenic, covering the distance in half an hour. The hotel included a breakfast buffet, recharging us before the conference.
This is what trains look like in Luxembourg
Pre-conference, a day was reserved for the community meetup on the 9th of October. On that day, the community members introduced themselves and their contributions to the LibreOffice project. It acted as a brainstorming session. I got a lovely conference bag, which contained a T-Shirt, a pen and a few stickers. I also met my long-time collaborators Mike, Sophie and Italo from the TDF, with whom I had interacted only remotely till then. Likewise, I also met TDF’s sysadmin Guilhem, with whom I had interacted before regarding setting up my LibreOffice mirror.
Conference bag
The conference started on the 10th. There were 5 attendees from India, including me, while most of the attendees were from Europe. The talks were in English. One of the talks that stood out for me was about Luxchat — a chat service run by the Luxembourg government based on the Matrix protocol for the citizens of Luxembourg. I also liked Italo’s talk on why document formats must be freedom-respecting. On the first night, the conference took us to a nice dinner in a restaurant. It offered one more way to socialize with other attendees and explore food at the same time.
One of the slides of Italo's talk
Picture of the hall in which talks were held
On the 11th of October, I went for a walk in the morning with Biswadeep for some sightseeing around our hotel area. As a consequence, I missed the group photo of the conference, which I wanted to be in. Anyway, we enjoyed roaming around the picturesque Luxembourg city. We also sampled a tram ride to return to our hotel.
We encountered such scenic views during our walk
Another view of Luxembourg city area
The conference ended on the 12th with a couple of talks. This conference gave me an opportunity to meet the global LibreOffice community, connect and share ideas. It also gave me a peek into the country of Luxembourg and its people, where I had a good experience. English was widely known, and I had no issues getting by.
Thanks to all the organizers and sponsors of the conference!
For uninteresting reasons I need very regular 58Hz pulses coming out of an
RS-232 Tx line: the time between each pulse should be as close to 1/58s as
possible. I produce each pulse by writing an \xFF byte to the device. The
start bit is the only active-voltage bit being sent, and that produces my pulse.
I wrote this obvious C program:
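Something along these lines (a minimal sketch, assuming the serial port is already configured elsewhere; each write() is paced against an absolute CLOCK_MONOTONIC deadline with usleep()):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define FREQ_HZ 58.0

int main(void)
{
    // The port settings (baud rate etc.) are assumed to be set up already
    int fd = open("/dev/ttyS0", O_WRONLY);
    if (fd < 0) { perror("open"); return 1; }

    const long period_ns = (long)(1e9 / FREQ_HZ);

    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    while (1)
    {
        // Advance the absolute deadline by one period, so errors don't accumulate
        next.tv_nsec += period_ns;
        if (next.tv_nsec >= 1000000000L) { next.tv_nsec -= 1000000000L; next.tv_sec++; }

        // Sleep until the deadline
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        long delay_us = (next.tv_sec - now.tv_sec) * 1000000L
                      + (next.tv_nsec - now.tv_nsec) / 1000L;
        if (delay_us > 0)
            usleep((useconds_t)delay_us);

        // The start bit of this byte is the only active-voltage bit: that's the pulse
        const uint8_t b = 0xff;
        if (write(fd, &b, 1) != 1) { perror("write"); return 1; }
    }
}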
This tries to make sure that each write() call happens at 58Hz. I need these
pulses to be regular, so I need to also make sure that the time between each
userspace write() and when the edge actually hits the line is as short as
possible or, at least, stable.
Potential reasons for timing errors:
The usleep() doesn't wake up exactly when it should. This is subject to the
Linux scheduler waking up the trigger process
The write() almost certainly ends up scheduling a helper task to actually
write the \xFF to the hardware. This helper task is also subject to the
Linux scheduler waking it up.
Whatever the hardware does. RS-232 doesn't give you any guarantees about
byte-byte timings, so this could be an unfixable source of errors
The scheduler-related questions are observable without any extra hardware, so
let's do that first.
I run the ./trigger program, and look at diagnostics while that's running.
I look at some device details:
# ls -lh /dev/ttyS0
crw-rw---- 1 root dialout 4, 64 Mar 6 18:11 /dev/ttyS0
# ls -lh /sys/dev/char/4:64/
total 0
-r--r--r-- 1 root root 4.0K Mar 6 16:51 close_delay
-r--r--r-- 1 root root 4.0K Mar 6 16:51 closing_wait
-rw-r--r-- 1 root root 4.0K Mar 6 16:51 console
-r--r--r-- 1 root root 4.0K Mar 6 16:51 custom_divisor
-r--r--r-- 1 root root 4.0K Mar 6 16:51 dev
lrwxrwxrwx 1 root root 0 Mar 6 16:51 device -> ../../../0000:00:16.3:0.0
-r--r--r-- 1 root root 4.0K Mar 6 16:51 flags
-r--r--r-- 1 root root 4.0K Mar 6 16:51 iomem_base
-r--r--r-- 1 root root 4.0K Mar 6 16:51 iomem_reg_shift
-r--r--r-- 1 root root 4.0K Mar 6 16:51 io_type
-r--r--r-- 1 root root 4.0K Mar 6 16:51 irq
-r--r--r-- 1 root root 4.0K Mar 6 16:51 line
-r--r--r-- 1 root root 4.0K Mar 6 16:51 port
drwxr-xr-x 2 root root 0 Mar 6 16:51 power
-rw-r--r-- 1 root root 4.0K Mar 6 16:51 rx_trig_bytes
lrwxrwxrwx 1 root root 0 Mar 6 16:51 subsystem -> ../../../../../../../class/tty
-r--r--r-- 1 root root 4.0K Mar 6 16:51 type
-r--r--r-- 1 root root 4.0K Mar 6 16:51 uartclk
-rw-r--r-- 1 root root 4.0K Mar 6 16:51 uevent
-r--r--r-- 1 root root 4.0K Mar 6 16:51 xmit_fifo_size
Unsurprisingly, this is a part of the tty subsystem. I don't want to spend the
time to really figure out how this works, so let me look at all the tty
kernel calls and also at all the kernel tasks scheduled by the trigger
process, since I suspect that the actual hardware poke is happening in a helper
task. I see this:
Looking at the sources I see that uart_write() calls __uart_start(), which
schedules a task to call serial_port_runtime_resume() which eventually calls
serial8250_tx_chars(), which calls some low-level functions to actually send
the bits.
I look at the time between two of those calls to quantify the scheduler latency:
The offset from 58Hz of when each write() call happens. This shows effect #1
from above: how promptly the trigger process wakes up
The latency of the helper task. This shows effect #2 above.
The raw data as I tweak things lives here. Initially I see big latency spikes:
These can be fixed by adjusting the priority of the trigger task. This tells
the scheduler to wake that task up first, even if something else is currently
using the CPU. I do this:
sudo chrt -p 90 `pidof trigger`
And I get better-looking latencies:
During some experiments (not in this dataset) I would see high helper-task
timing instabilities as well. These could be fixed by prioritizing the helper
task. In this kernel (6.12) the helper task is called kworker/N where N is
the CPU index. I tie the trigger process to cpu 0, and prioritize all the
relevant helpers:
OK, so it looks like on the software side we're good to within 0.1ms of the true
period. This is in the ballpark of the precision I need; even this might be too
high. It's possible to try to push the software to do better: one could look at
the kernel sources a bit more, to do smarter things with priorities or to try an
-rt kernel. But all this doesn't matter if the serial hardware adds
unacceptable delays. Let's look.
Let's look at it with a logic analyzer. I use a saleae logic analyzer with
sigrok. The tool spits out the samples as it gets them, and an awk script
finds the edges and reports the timings to give me a realtime plot.
On the server I was using (physical RS-232 port, ancient 3.something kernel):
OK… This is very discrete for some reason, and generally worse than 0.1ms.
What about my laptop (physical RS-232 port, recent 6.12 kernel)?
Not discrete anymore, but not really any more precise. What about using a
usb-serial converter? I expect this to be worse.
Yeah, looks worse. For my purposes, an accuracy of 0.1ms is marginal, and the
hardware adds non-negligible errors. So I cut my losses, and use an external
signal generator:
I'm not entirely sure I understand the first item today, but maybe you can help.
I pulled a couple of older items from the backlog to round out this timely theme.
Rudi A.
reported this Error'd, chortling
"Time flies when you're having fun, but it goes back when you're walking along the IJ river!" Is the point here that the walking time is quoted as 77 minutes total, but the overall travel time is less than that? I must say I don't recommend swimming the IJ in March, Rudi.
I had to go back quite a while for this submission from faithful reader
Adam R.,
who chimed
"I found a new type of datetime handling failure in this timestamp of 12:8 PM when checking
my past payments at my medical provider." I hope he's still with us.
Literary critic
Jay
commented
"Going back in time to be able to update your work after it gets
published but before everyone else in your same space time fabric gets to see your mistakes, that's privilege."
This kind of error is usually an artifact of Daylight Saving Time, but it's a day too late.
Lucky
Luke H.
can take his time with this deal.
"The board is proud to approve a 20% discount for the next 8 millenia," he crowed.
At nearly the other end of the entire modern era,
Carlos
found himself with a nostalgic device.
"Excel crashed. When it came back, it did so showing this update banner." Some programmer confused "restore state" with the English Restoration. Not that state, bub.
Author: Jo Peace We always learn things too late. I remember the pine smell, the urgent fear as I hurried to assemble the close-in defense unit before the drones reached our position. A young voice snaps me back to the present. “Dad, why do you live alone in the mountains? Is it because people tease […]
There is a new botnet that is infecting TP-Link routers:
The botnet can lead to command injection which then makes remote code execution (RCE) possible so that the malware can spread itself across the internet automatically. This high severity security flaw (tracked as CVE-2023-1389) has also been used to spread other malware families as far back as April 2023 when it was used in the Mirai botnet malware attacks. The flaw has also been linked to the Condi and AndroxGh0st malware attacks.
[…]
Of the thousands of infected devices, the majority of them are concentrated in Brazil, Poland, the United Kingdom, Bulgaria and Turkey; with the botnet targeting manufacturing, medical/healthcare, services and technology organizations in the United States, Australia, China and Mexico.
Filing tax this year was really painful. But mostly because of my home network.
IPv4 over IPv6 was not working correctly. First I swapped the router, which was trying to reinitialize the MAP-E table every time there was a DHCP client reconfiguration, overwhelming the server. Then I changed the DNS configuration to not use IPv4 UDP lookups, which were overwhelming the IPv4 ports.
Tax return itself is a painful process. Debugging network issues at the same time just made everything more painful.
I remember in some intro-level compsci class learning that credit card numbers were checksummed, and writing basic functions to validate those checksums as an exercise. I was young and was still using my "starter" credit card with a whopping limit of $500, so that was all news to me.
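For reference, the checksum in question is the Luhn algorithm; a minimal sketch of such a validation function in C (not the code from this story):

#include <stdbool.h>
#include <string.h>

// Luhn check: walking from the rightmost digit, double every second digit,
// subtract 9 when the doubled value exceeds 9, and require the total sum
// to be a multiple of 10.
bool luhn_valid(const char *digits)
{
    int sum = 0;
    size_t len = strlen(digits);
    for (size_t i = 0; i < len; i++)
    {
        int d = digits[len - 1 - i] - '0';
        if (d < 0 || d > 9)
            return false;   // reject anything that isn't a digit
        if (i % 2 == 1)     // every second digit from the right
        {
            d *= 2;
            if (d > 9)
                d -= 9;
        }
        sum += d;
    }
    return len > 0 && sum % 10 == 0;
}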
Alex's company had a problem processing credit cards: they rejected a lot of credit cards as being invalid. The checksum code seemed to be working fine, so what could the problem be? Well, the problem became more obvious when someone's card worked one day, and stopped working the very next day, and they just so happened to be the first and last day of the month.
This function is horrible; because it uses strftime (instead of taking the comparison date and time as a parameter) it's not unit-testable. We're (ab)using casts to convert strings into integers so we can do our comparison. We're using a ternary to return a boolean value instead of just returning the result of the boolean expression.
But of course, that's all the amuse bouche: the main course is the complete misunderstanding of basic logic. According to this code, a credit card is valid if the expiration year is greater than or equal to the current year and the expiration month is greater than or equal to the current month. As this article goes live in March, 2025, this code would allow credit cards from April, 2026, as it should. But it would reject any cards with an expiration of February, 2028.
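For contrast, a correct and testable check takes the comparison date as parameters and compares the year before falling back to the month; a minimal sketch in C (the story's code is not shown in full here):

#include <stdbool.h>

// A card is usable through the end of its expiration month: it is valid if it
// expires in a later year, or in the same year in a month no earlier than the
// current one. Passing the current date in makes the function unit-testable.
bool card_not_expired(int exp_year, int exp_month, int cur_year, int cur_month)
{
    if (exp_year != cur_year)
        return exp_year > cur_year;
    return exp_month >= cur_month;
}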
Per Alex, "This is a credit card date validation that has been in use for ages."
Author: Soramimi Hanarejima On my way home, I stop by the drugstore for a quick errand. But in the nootropics aisle, I’m thwarted by vacant shelf space. When I ask a clerk what happened to all the memorysyn, he tells me there’s been a recall. Some production issue has made recent lots more potent than […]
You'll notice that this takes a parameter of type QBatch_unOp. What is that type? Well, it's an enumerated type describing the kind of operation this arithExpr represents. That is to say, they're not using real inheritance, but instead switching on the QBatch_unOp value to decide which code branch to execute- hand-made, home-grown artisanal inheritance. And while there are legitimate reasons to avoid inheritance, this is a clear case of "is-a" relationships, and it would allow compile-time checking of how you combine your types.
Tim also points out the use of the "repugnant west const", which is maybe a strong way to word it, but definitely using only "east const" makes it a lot easier to understand what the const operator does. It's worth noting that in this example, the second parameter is a const reference (not a reference to a const value).
Now, they are using inheritance, just not in that specific case:
class QBatch_paramExpr : public QBatch_snippet {...};
There's nothing particularly wrong with this, but we're going to use this parameter expression in a moment.
QBatch_arithExpr* Foo(QBatch_snippet *expr) {
// snip
QBatch_arithExpr *derefExpr = new QBatch_arithExpr(enum_tag1, *(new QBatch_paramExpr(paramId)));
assert(derefExpr);
return new QBatch_arithExpr(enum_tag2, *expr, *derefExpr);
}
Honestly, in C++ code, seeing a pile of "*" operators and raw pointers is a sign that something's gone wrong, and this is no exception.
Let's start with calling the QBatch_arithExpr constructor- we pass it *(new QBatch_paramExpr(paramId)), which is a multilayered "oof". First, the new operator will heap allocate and construct an object, and return a pointer to that object. We then dereference that pointer, and pass the value as a reference to the constructor. This is an automatic memory leak; because we never trap the pointer, we never have the opportunity to release that memory. Remember kids, in C/C++ you need clear ownership semantics and someone needs to be responsible for deallocating all of the allocated memory- every new needs a delete, in this case.
Now, new QBatch_arithExpr(...) will also return a pointer, which we put in derefExpr. We then assert on that pointer, confirming that it isn't null. Which… it can't be. A constructor may fail and throw an exception, but you'll never get a null (now, I'm sure a sufficiently motivated programmer can mix nothrow and -fno-exceptions to get constructors to return null, but that's not happening here, and shouldn't happen anywhere).
Then we dereference that pointer and pass it to QBatch_arithExpr- creating another memory leak. Two memory leaks in three lines of code, where one line is an assert, is fairly impressive.
Elsewhere in the code, shared_pointer objects are used, with their names aliased to readable types, aka QBatch_arithExpr::Ptr, and if that pattern were followed here, the memory leaks would go away.
As Tim puts it: "Some folks never quite escaped their Java background," and in this case, I think it shows. Objects are allocated with new, but never deleted, as if there's some magical garbage collector which is going to find the unused objects and free them.
Author: Lewis Richards Two Shuttles slashed through the sheeting rain, trailed by twin comet tails of super heated plasma vaporising any raindrops unfortunate enough to meet them on their spiralling descent toward the fluctuating lights of the colony they raced toward. It had been three days since the Ark-ship above lost contact with the colonists […]
Former CISA Director Jen Easterly writes about a new international intelligence sharing co-op:
Historically, China, Russia, Iran & North Korea have cooperated to some extent on military and intelligence matters, but differences in language, culture, politics & technological sophistication have hindered deeper collaboration, including in cyber. Shifting geopolitical dynamics, however, could drive these states toward a more formalized intell-sharing partnership. Such a “Four Eyes” alliance would be motivated by common adversaries and strategic interests, including an enhanced capacity to resist economic sanctions and support proxy conflicts.
Microsoft today issued more than 50 security updates for its various Windows operating systems, including fixes for a whopping six zero-day vulnerabilities that are already seeing active exploitation.
Two of the zero-day flaws include CVE-2025-24991 and CVE-2025-24993, both vulnerabilities in NTFS, the default file system for Windows and Windows Server. Both require the attacker to trick a target into mounting a malicious virtual hard disk. CVE-2025-24993 would lead to the possibility of local code execution, while CVE-2025-24991 could cause NTFS to disclose portions of memory.
Microsoft credits researchers at ESET with reporting the zero-day bug labeled CVE-2025-24983, an elevation of privilege vulnerability in older versions of Windows. ESET said the exploit was deployed via the PipeMagic backdoor, capable of exfiltrating data and enabling remote access to the machine.
ESET’s Filip Jurčacko said the exploit in the wild targets only older versions of Windows OS: Windows 8.1 and Server 2012 R2. Although still used by millions, security support for these products ended more than a year ago, and mainstream support ended years ago. However, ESET notes the vulnerability itself also is present in newer Windows OS versions, including Windows 10 build 1809 and the still-supported Windows Server 2016.
Rapid7’s lead software engineer Adam Barnett said Windows 11 and Server 2019 onwards are not listed as receiving patches, so are presumably not vulnerable.
“It’s not clear why newer Windows products dodged this particular bullet,” Barnett wrote. “The Windows 32 subsystem is still presumably alive and well, since there is no apparent mention of its demise on the Windows client OS deprecated features list.”
The zero-day flaw CVE-2025-24984 is another NTFS weakness that can be exploited by inserting a malicious USB drive into a Windows computer. Barnett said Microsoft’s advisory for this bug doesn’t quite join the dots, but successful exploitation appears to mean that portions of heap memory could be improperly dumped into a log file, which could then be combed through by an attacker hungry for privileged information.
“A relatively low CVSSv3 base score of 4.6 reflects the practical difficulties of real-world exploitation, but a motivated attacker can sometimes achieve extraordinary results starting from the smallest of toeholds, and Microsoft does rate this vulnerability as important on its own proprietary severity ranking scale,” Barnett said.
Another zero-day fixed this month — CVE-2025-24985 — could allow attackers to install malicious code. As with the NTFS bugs, this one requires that the user mount a malicious virtual hard drive.
The final zero-day this month is CVE-2025-26633, a weakness in the Microsoft Management Console, a component of Windows that gives system administrators a way to configure and monitor the system. Exploiting this flaw requires the target to open a malicious file.
This month’s bundle of patch love from Redmond also addresses six other vulnerabilities Microsoft has rated “critical,” meaning that malware or malcontents could exploit them to seize control over vulnerable PCs with no help from users.
Barnett observed that this is now the sixth consecutive month where Microsoft has published zero-day vulnerabilities on Patch Tuesday without evaluating any of them as critical severity at time of publication.
The SANS Internet Storm Center has a useful list of all the Microsoft patches released today, indexed by severity. Windows enterprise administrators would do well to keep an eye on askwoody.com, which often has the scoop on any patches causing problems. Please consider backing up your data before updating, and leave a comment below if you experience any issues applying this month’s updates.
The US Department of Justice on Wednesday announced the indictment of 12 Chinese individuals accused of more than a decade of hacker intrusions around the world, including eight staffers for the contractor i-Soon, two officials at China’s Ministry of Public Security who allegedly worked with them, and two other alleged hackers who are said to be part of the Chinese hacker group APT27, or Silk Typhoon, which prosecutors say was involved in the US Treasury breach late last year.
[…]
According to prosecutors, the group as a whole has targeted US state and federal agencies, foreign ministries of countries across Asia, Chinese dissidents, US-based media outlets that have criticized the Chinese government, and most recently the US Treasury, which was breached between September and December of last year. An internal Treasury report obtained by Bloomberg News found that hackers had penetrated at least 400 of the agency’s PCs and stole more than 3,000 files in that intrusion.
The indictments highlight how, in some cases, the hackers operated with a surprising degree of autonomy, even choosing targets on their own before selling stolen information to Chinese government clients. The indictment against Yin Kecheng, who was previously sanctioned by the Treasury Department in January for his involvement in the Treasury breach, quotes from his communications with a colleague in which he notes his personal preference for hacking American targets and how he’s seeking to ‘break into a big target,’ which he hoped would allow him to make enough money to buy a car.
Authorities in India today arrested the alleged co-founder of Garantex, a cryptocurrency exchange sanctioned by the U.S. government in 2022 for facilitating tens of billions of dollars in money laundering by transnational criminal and cybercriminal organizations. Sources close to the investigation told KrebsOnSecurity the Lithuanian national Aleksej Besciokov, 46, was apprehended while vacationing on the coast of India with his family.
Aleksej Bešciokov, “proforg,” “iram”. Image: U.S. Secret Service.
On March 7, the U.S. Department of Justice (DOJ) unsealed an indictment against Besciokov and the other alleged co-founder of Garantex, Aleksandr Mira Serda, 40, a Russian national living in the United Arab Emirates.
Launched in 2019, Garantex was first sanctioned by the U.S. Treasury Office of Foreign Assets Control in April 2022 for receiving hundreds of millions in criminal proceeds, including funds used to facilitate hacking, ransomware, terrorism and drug trafficking. Since those penalties were levied, Garantex has processed more than $60 billion, according to the blockchain analysis company Elliptic.
“Garantex has been used in sanctions evasion by Russian elites, as well as to launder proceeds of crime including ransomware, darknet market trade and thefts attributed to North Korea’s Lazarus Group,” Elliptic wrote in a blog post. “Garantex has also been implicated in enabling Russian oligarchs to move their wealth out of the country, following the invasion of Ukraine.”
The DOJ alleges Besciokov was Garantex’s primary technical administrator and responsible for obtaining and maintaining critical Garantex infrastructure, as well as reviewing and approving transactions. Mira Serda is allegedly Garantex’s co-founder and chief commercial officer.
Image: elliptic.co
In conjunction with the release of the indictments, German and Finnish law enforcement seized servers hosting Garantex’s operations. A “most wanted” notice published by the U.S. Secret Service states that U.S. authorities separately obtained earlier copies of Garantex’s servers, including customer and accounting databases. Federal investigators say they also froze over $26 million in funds used to facilitate Garantex’s money laundering activities.
Besciokov was arrested within the past 24 hours while vacationing with his family in Varkala, a major coastal city in the southwest Indian state of Kerala. An officer with the local police department in Varkala confirmed Besciokov’s arrest, and said the suspect will appear in a Delhi court on March 14 to face charges.
Varkala Beach in Kerala, India. Image: Shutterstock, Dmitry Rukhlenko.
The DOJ’s indictment says Besciokov went by the hacker handle “proforg.” This nickname corresponds to the administrator of a 20-year-old Russian language forum dedicated to nudity and crudity called “udaff.”
Besciokov and Mira Serda are each charged with one count of conspiracy to commit money laundering, which carries a maximum sentence of 20 years in prison. Besciokov is also charged with one count of conspiracy to violate the International Emergency Economic Powers Act—which also carries a maximum sentence of 20 years in prison—and with conspiracy to operate an unlicensed money transmitting business, which carries a maximum sentence of five years in prison.
Marco found this wreck, left behind by a former co-worker:
$("#image_sample").html('<i><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br />No image selected, select an image to see how it looks in the banner!</i>');
This code uses the JQuery library to find an element in the web page with the ID "image_sample", and then replaces its contents with this hard-coded blob of HTML.
I really appreciate the use of self-closing, XHTML style BR tags, which was a fad between 2000 and 2002, but never truly caught on, and was basically forgotten by the time HTML5 dropped. But this developer insisted that self-closing tags were the "correct" way to write HTML.
Pity they didn't put any thought into the "correct" way to add blank space to a page beyond line breaks. Or the correct way to populate the DOM that isn't accessing the inner HTML of an element.
At least this was a former co-worker.
Author: Majoki Some seven thousand years ago a micrometeorite winged a pine cone, clipped the ear of a very surprised marmot, skewered a large oyster mushroom, and buried itself in the thick duff of a mountainous forest in the north Cascades. Stan Clutterdam knew none of that when he unceremoniously peed on the ancient impact […]
Over the last year, the Debian.social
services outgrew the infrastructure
that was supporting them. The matrix bridge in particular was hosted on a cloud
instance backed by a large expensive storage volume. Debian.CH rented a new
large physical server to host all these instances, earlier this year. Stefano
set up Incus on the new physical machine
and migrated all the old debian.social LXC Containers, libvirt VMs, and cloud
instances into Incus-managed LXC containers.
Stefano set up Prometheus monitoring and alerts for the new infrastructure and a
Grafana dashboard. The current stack of
debian.social services seems to fit comfortably on the new machine, with good
room to grow.
DebConf 25, by Santiago Ruano Rincón and Stefano Rivera
DebConf 25 preparations continue. The team is currently finalizing a budget.
Stefano helped to review the current budget proposals and suggest approaches for
balancing it.
Stefano installed a Zammad instance to organize
queries from attendees, for the registration and visa teams.
Santiago continued discussions with possible caterers so we can have options for the different dietary requirements that could also fit into the DebConf budget.
Also, in collaboration with Anupa, Santiago pushed the first
draft changes
to document the venue information in the DebConf 25 website and how to get to
Brest.
Time-based test failure in requests, by Colin Watson
Colin fixed a fun bug in the Python
requests package. Santiago Vila
has been running
tests of what
happens when Debian packages are built on a system in which time has been
artificially set to somewhere around the end of the support period for the next
Debian release, in order to make it easier to do things like issuing security
updates for the lifetime of that release. In this case, the
failure indicated an expired test
certificate, and since the repository already helpfully included scripts to
regenerate those certificates, it seemed natural to try regenerating them just
before running tests. However, this
failed for more obscure reasons
and Colin spent some time investigating. This turned out to be because the test
CA was missing the CA constraint and so recent versions of OpenSSL reject it;
Colin sent a pull request to fix
this.
Priority list for outdated packages, by Santiago Ruano Rincón
Santiago started a
discussion on
debian-devel about packages that have a history of security issues and that are
outdated regarding new upstream releases. The goal of the mentioned effort is to
have a prioritized list of packages needing some work, from a security point of
view. Moreover, the aim of publicly sharing the list of packages with the Debian
Developers community is to make it easier to look at the packages maintained by
teams, or even other maintainers’ packages, where help could be welcome. Santiago is
planning to take into account the feedback provided in debian-devel and to
propose a tooling that could help to regularly bring collective awareness of
these packages.
Miscellaneous contributions
Carles worked on English to Catalan po-debconf translations: reviewed
translations, created merge requests and followed up with developers for more
than 30 packages using po-debconf-manager.
Carles helped users, fixed bugs and implemented downloading updated templates
on po-debconf-manager.
Carles packaged a new upstream version of python-pyaarlo.
Carles improved reproducibility of qnetload (now reported as reproducible) and
simplemonitor (followed up with upstream and pending update of Debian package).
Carles collaborated with debian-history package: fixed FTBFS from master
branch, enabled salsa-ci and investigated reproducibility.
Emilio improved support for automatically marking CVEs as NOT-FOR-US in the
security-tracker, closing #1073012.
Emilio updated xorg-server and xwayland in unstable, fixing the last round of
security vulnerabilities.
Helmut Grohne sent patches for 24 cross build failures.
Helmut fixed two problems in the Debian /usr-merge analysis tool. In one
instance, it would overmatch Debian bugs to issues and in another it would fail
to recognize Pre-Depends as a conflict mechanism.
Helmut attempted making rebootstrap work for gcc-15 with limited success as
very many packages FTBFS with gcc-15 due to using function declarations without
arguments.
Helmut provided a change to the security-tracker that would pre-compute
/data/json during database updates rather than on demand resulting in a
reduced response time.
Colin uploaded OpenSSH security
updates for testing/unstable,
bookworm, bullseye, buster, and stretch.
Colin fixed upstream monitoring for 26
Python packages, and upgraded 54 packages (mostly Python-related, but also
PuTTY) to new upstream versions.
Colin updated python-django in bookworm-backports to 4.2.18 (issuing
BSA-121),
and added new backports of python-django-dynamic-fixture
and python-django-pgtrigger, all of which are dependencies of
debusine.
Thorsten Alteholz finally managed to upload hplip to fix two release-critical
bugs and some normal bugs. The next step in March would be to upload the latest
version of hplip.
Faidon updated crun in unstable & trixie, resolving a long-standing request of
enabling criu support and thus enabling podman with checkpoint/restore
functionality (with gratitude to Salvatore Bonaccorso and Reinhard Tartler for
the cooperation and collaboration).
Faidon uploaded a number of packages (librdkafka, libmaxminddb,
python-maxminddb, lowdown, tox, tox-uv, pyproject-api, xiccd and gdnsd) bringing
them up to date with new upstream releases, resolving various bugs.
Lucas Kanashiro uploaded some ruby packages involved in the Rails 7 transition
with new upstream releases.
Lucas triaged a ruby3.1 bug
(#1092595) and
prepared a fix for the next stable release update.
Lucas set up the needed wiki pages and updated the Debian Project status in
the Outreachy portal, in order to send out a call for projects and mentors for
the next round of Outreachy.
Anupa joined Santiago to prepare a list of companies to contact via LinkedIn
for DebConf 25 sponsorship.
Anupa printed Debian stickers and sponsorship brochures, flyers for DebConf
25 to be distributed at FOSS ASIA summit 2025.
Anupa participated in the Debian publicity team meeting and discussed the
upcoming events and tasks.
Raphaël packaged zim 0.76.1 and integrated an upstream patch for another
regression that he reported.
Raphaël worked with the Debian System Administrators for tracker.debian.org
to better cope with gmail’s requirement for mails to be authenticated.
A few months ago I explained that one reason why this blog has become more quiet is that all my work on Lean is covered elsewhere.
This post is an exception, because it is an observation that is (arguably) interesting, but does not lead anywhere, so where else to put it than my own blog…
When defining a function recursively in Lean that has nested recursion, e.g. a recursive call that is in the argument to a higher-order function like List.map, then extra attention used to be necessary so that Lean can see that xs.map applies its argument only to elements of the list xs. The usual idiom is to write xs.attach.map instead, where List.attach attaches to the list elements a proof that they are in that list. You can read more about this in my Lean blog post on recursive definitions and in our new shiny reference manual; look for the example “Nested Recursion in Higher-order Functions”.
To make this step less tedious I taught Lean to automatically rewrite xs.map to xs.attach.map (where suitable) within the construction of well-founded recursion, so that nested recursion just works (issue #5471). We already do such a rewriting to change if c then … else … to the dependent if h : c then … else …, but the attach-introduction is much more ambitious (the rewrites are not definitionally equal, there are higher-order arguments, etc.). Rewriting the terms in a way that we can still prove the connection later when creating the equational lemmas is hairy at best. Also, we want the whole machinery to be extensible by the user, setting up their own higher-order functions to add more facts to the context of the termination proof.
I implemented it like this (PR #6744) and it ships with 4.18.0, but in the course of this work I thought about a quite different and maybe better™ way to do this, and well-founded recursion in general:
WellFounded.fix : (hwf : WellFounded r) (F : (x : α) → ((y : α) → r y x → C y) → C x) (x : α) : C x
To use this combinator, we have to rewrite the functorial of the recursive function, which naturally has type
F : ((y : α) → C y) → ((x : α) → C x)
to the one above, where all recursive calls take the termination proof r y x. This is a fairly hairy operation, mangling the type of matcher’s motives and whatnot.
Contrast this with the fixpoint combinator used for partial_fixpoint, which takes the functorial at its natural type, so the functorial’s type is unmodified (here β will be ((x : α) → C x)) and everything else is in the propositional side-condition monotone F. For this predicate we have a syntax-guided compositional tactic, and it’s easily extensible, e.g. by
theorem monotone_mapM (f : γ → α → m β) (xs : List α) (hmono : monotone f) :
monotone (fun x => xs.mapM (f x))
Once given, we don’t care about the content of that proof. In particular proving the unfolding theorem only deals with the unmodified F that closely matches the function definition as written by the user. Much simpler!
Isabelle has it easier
Isabelle also supports well-founded recursion, and has great support for nested recursion. And it’s much simpler!
There, all you have to do to make nested recursion work is to define a congruence lemma of the right form; for List.map it is something like our List.map_congr_left
List.map_congr_left : (h : ∀ a ∈ l, f a = g a) :
List.map f l = List.map g l
This is because in Isabelle, too, the termination proof is a side-condition that essentially states “the functorial F calls its argument f only on smaller arguments”.
Can we have it easy, too?
I had wished we could do the same in Lean for a while, but that form of congruence lemma just isn’t strong enough for us.
But maybe there is a way to do it, using an existential to give a witness that F can alternatively be implemented using the more restrictive argument. The following callsOn P F predicate can express that F calls its higher-order argument only on arguments that satisfy the predicate P:
section setup
variable {α : Sort u}
variable {β : α → Sort v}
variable {γ : Sort w}
def callsOn (P : α → Prop) (F : (∀ y, β y) → γ) :=
∃ (F': (∀ y, P y → β y) → γ), ∀ f, F' (fun y _ => f y) = F f
variable (R : α → α → Prop)
variable (F : (∀ y, β y) → (∀ x, β x))
local infix:50 " ≺ " => R
def recursesVia : Prop := ∀ x, callsOn (· ≺ x) (fun f => F f x)
noncomputable def fix (wf : WellFounded R) (h : recursesVia R F) : (∀ x, β x) :=
wf.fix (fun x => (h x).choose)
def fix_eq (wf : WellFounded R) h x :
fix R F wf h x = F (fix R F wf h) x := by
unfold fix
rw [wf.fix_eq]
apply (h x).choose_spec
This allows nice compositional lemmas to discharge callsOn predicates:
theorem callsOn_base (y : α) (hy : P y) :
callsOn P (fun (f : ∀ x, β x) => f y) := by
exists fun f => f y hy
intros; rfl
@[simp]
theorem callsOn_const (x : γ) :
callsOn P (fun (_ : ∀ x, β x) => x) :=
⟨fun _ => x, fun _ => rfl⟩
theorem callsOn_app
{γ₁ : Sort uu} {γ₂ : Sort ww}
(F₁ : (∀ y, β y) → γ₂ → γ₁) -- can this also support dependent types?
(F₂ : (∀ y, β y) → γ₂)
(h₁ : callsOn P F₁)
(h₂ : callsOn P F₂) :
callsOn P (fun f => F₁ f (F₂ f)) := by
obtain ⟨F₁', h₁⟩ := h₁
obtain ⟨F₂', h₂⟩ := h₂
exists (fun f => F₁' f (F₂' f))
intros; simp_all
theorem callsOn_lam
{γ₁ : Sort uu}
(F : γ₁ → (∀ y, β y) → γ) -- can this also support dependent types?
(h : ∀ x, callsOn P (F x)) :
callsOn P (fun f x => F x f) := by
exists (fun f x => (h x).choose f)
intro f
ext x
apply (h x).choose_spec
theorem callsOn_app2
{γ₁ : Sort uu} {γ₂ : Sort ww}
(g : γ₁ → γ₂ → γ)
(F₁ : (∀ y, β y) → γ₁) -- can this also support dependent types?
(F₂ : (∀ y, β y) → γ₂)
(h₁ : callsOn P F₁)
(h₂ : callsOn P F₂) :
callsOn P (fun f => g (F₁ f) (F₂ f)) := by
apply_rules [callsOn_app, callsOn_const]
With this setup, we can have the following, possibly user-defined, lemma expressing that List.map calls its arguments only on elements of the list:
theorem callsOn_map (δ : Type uu) (γ : Type ww)
(P : α → Prop) (F : (∀ y, β y) → δ → γ) (xs : List δ)
(h : ∀ x, x ∈ xs → callsOn P (fun f => F f x)) :
callsOn P (fun f => xs.map (fun x => F f x)) := by
suffices callsOn P (fun f => xs.attach.map (fun ⟨x, h⟩ => F f x)) by
simpa
apply callsOn_app
· apply callsOn_app
· apply callsOn_const
· apply callsOn_lam
intro ⟨x', hx'⟩
dsimp
exact (h x' hx')
· apply callsOn_const
end setup
So here is the (manual) construction of a nested map for trees:
section examples
structure Tree (α : Type u) where
val : α
cs : List (Tree α)
-- essentially
-- def Tree.map (f : α → β) : Tree α → Tree β :=
-- fun t => ⟨f t.val, t.cs.map Tree.map⟩)
noncomputable def Tree.map (f : α → β) : Tree α → Tree β :=
fix (sizeOf · < sizeOf ·) (fun map t => ⟨f t.val, t.cs.map map⟩)
(InvImage.wf (sizeOf ·) WellFoundedRelation.wf) <| by
intro ⟨v, cs⟩
dsimp only
apply callsOn_app2
· apply callsOn_const
· apply callsOn_map
intro t' ht'
apply callsOn_base
-- ht' : t' ∈ cs -- !
-- ⊢ sizeOf t' < sizeOf { val := v, cs := cs }
decreasing_trivial
end examples
This makes me happy!
All details of the construction are now contained in a proof that can proceed by a syntax-driven tactic and that’s easily and (likely robustly) extensible by the user. It also means that we can share a lot of code paths (e.g. everything related to equational theorems) between well-founded recursion and partial_fixpoint.
I wonder if this construction is really as powerful as our current one, or if there are certain (likely dependently typed) functions where this doesn’t fit, but the β above is dependent, so it looks good.
With this construction, functions defined by well-founded recursion will reduce even worse in the kernel, I assume. This may be a good thing.
The cake is a lie
What unfortunately kills this idea, though, is the generation of the functional induction principles, which I believe is not (easily) possible with this construction: The functional induction principle is proved by massaging F to return a proof, but since the extra assumptions (e.g. for ite or List.map) only exist in the termination proof, they are not available in F.
Oh wey, how anticlimactic.
PS: Path dependencies
Curiously, if we didn’t have functional induction at this point yet, then very likely I’d change Lean to use this construction, and then we’d either not get functional induction, or it would be implemented very differently, maybe a more syntactic approach that would re-prove termination. I guess that’s called path dependence.
This was my hundred-twenty-eighth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:
[DLA 4072-1] xorg-server security update to fix eight CVEs related to possible privilege escalation in X.
[DLA 4073-1] ffmpeg security update to fix three CVEs related to out-of-bounds read, assert errors and NULL pointer dereferences. This was the second update that I announced last month.
Last but not least I did some days of FD this month and attended the monthly LTS/ELTS meeting.
Debian ELTS
This month was the seventy-ninth ELTS month. During my allocated time I uploaded or worked on:
[ELA-1337-1] xorg-server security update to fix eight CVEs in Buster, Stretch and Jessie, related to possible privilege escalation in X.
[ELA-882-2] amanda regression update to improve a fix for privilege escalation. This old regression was detected by Beuc during his work as FD and is now finally fixed.
Last but not least I did some days of FD this month and attended the monthly LTS/ELTS meeting.
Debian Printing
This month I uploaded new packages or new upstream or bugfix versions of:
… hplip to fix some bugs and let hplip migrate to testing again.
Unfortunately I didn’t find any time to upload packages.
Have you ever heard of poliastro? It was a package to do calculations related to astrodynamics and orbital mechanics. It was archived by upstream at the end of 2023. I am now trying to revive it under the new name boinor and hope to get it back into Debian over the next months.
This is almost the last month that Patrick, our Outreachy intern for the Debian Astro project, is handling his tasks. He is working on automatic updates of the indi 3rd-party driver.
Debian IoT
Unfortunately I didn’t find any time to work on this topic.
Debian Mobcom
This month I uploaded new packages or new upstream or bugfix versions of:
As oft stated, the "right" way to validate emails is to do a bare minimum sanity check on format, and then send a verification message to the email address the user supplied; it's the only way to ensure that what they gave you isn't just syntactically valid, but is actually usable.
But even that simple approach leaves places to go wrong. Take a look at this code, from Lana.
public function getEmailValidationErrors($data): array {
    $errors = [];
    if (isset($data["email"]) && !empty($data["email"])) {
        if (!str_contains($data["email"], "@")) {
            $error["email"] = "FORM.CONTACT_DETAILS.ERRORS.NO_AT";
        }
        if (!str_contains($data["email"], ".")) {
            $error["email"] = "FORM.CONTACT_DETAILS.ERRORS.NO_DOT";
        }
        if (strrpos($data["email"], "@") > strrpos($data["email"], ".")) {
            $error["email"] = "FORM.CONTACT_DETAILS.ERRORS.NO_TLD";
        }
    }
    if (isset($data["email1"]) && !empty($data["email1"])) {
        if (!str_contains($data["email1"], "@")) {
            $error["email1"] = "FORM.CONTACT_DETAILS.ERRORS.NO_AT";
        }
        if (!str_contains($data["email1"], ".")) {
            $error["email1"] = "FORM.CONTACT_DETAILS.ERRORS.NO_DOT";
        }
        if (strrpos($data["email1"], "@") > strrpos($data["email1"], ".")) {
            $error["email1"] = "FORM.CONTACT_DETAILS.ERRORS.NO_TLD";
        }
    }
    if (isset($data["email2"]) && !empty($data["email2"])) {
        if (!str_contains($data["email2"], "@")) {
            $error["email2"] = "FORM.CONTACT_DETAILS.ERRORS.NO_AT";
        }
        if (!str_contains($data["email2"], ".")) {
            $error["email2"] = "FORM.CONTACT_DETAILS.ERRORS.NO_DOT";
        }
        if (strrpos($data["email2"], "@") > strrpos($data["email2"], ".")) {
            $error["email2"] = "FORM.CONTACT_DETAILS.ERRORS.NO_TLD";
        }
    }
    if (isset($data["email3"]) && !empty($data["email3"])) {
        if (!str_contains($data["email3"], "@")) {
            $error["email3"] = "FORM.CONTACT_DETAILS.ERRORS.NO_AT";
        }
        if (!str_contains($data["email3"], ".")) {
            $error["email3"] = "FORM.CONTACT_DETAILS.ERRORS.NO_DOT";
        }
        if (strrpos($data["email3"], "@") > strrpos($data["email3"], ".")) {
            $error["email3"] = "FORM.CONTACT_DETAILS.ERRORS.NO_TLD";
        }
    }
    return $errors;
}
Let's start with the obvious problem: repetition. This function doesn't validate simply one email, but four, by copy/pasting the same logic multiple times. Lana didn't supply the repeated blocks, just noted that they existed, so let's not pick on the bad names: "email1", etc.- that's just my placeholder. I assume it's different contact types for a customer, or similar.
Now, the other problems range from trivial to comical. First, the PHP function empty returns true if the variable has a zero/falsy value or is not set, which means it already implies an isset check, making the explicit isset redundant. That's trivial.
The way the checks get logged into the $error array, they can overwrite each other, meaning if you forget the "@" and the ".", it'll only complain about the ".", but if you forget the ".", it'll complain about not having a valid TLD (the "NO_DOT" error will never be output). That's silly.
Finally, the $errors array is the return value, but the $error array is where we store our errors, meaning this function always returns an empty array and never reports a single error. And that means it's an email validation function which doesn't do anything at all, which, honestly, is probably for the best.
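For contrast, here is a minimal sketch, written in Python rather than the original PHP, of how the same checks could be shared across all the fields through a single helper (the field names are the same placeholders as above). It also collects every error instead of overwriting earlier ones, and actually returns what it accumulates:

# A sketch, not Lana's codebase: one format check reused for every email field.
def email_format_errors(value: str) -> list[str]:
    errors = []
    if "@" not in value:
        errors.append("FORM.CONTACT_DETAILS.ERRORS.NO_AT")
    if "." not in value:
        errors.append("FORM.CONTACT_DETAILS.ERRORS.NO_DOT")
    elif "@" in value and value.rfind("@") > value.rfind("."):
        # last "@" after last ".": nothing that looks like a TLD follows the "@"
        errors.append("FORM.CONTACT_DETAILS.ERRORS.NO_TLD")
    return errors

def get_email_validation_errors(data: dict) -> dict:
    all_errors = {}
    for field in ("email", "email1", "email2", "email3"):  # placeholder field names
        value = data.get(field)
        if value:  # covers both "missing" and "empty" in one test
            field_errors = email_format_errors(value)
            if field_errors:
                all_errors[field] = field_errors
    return all_errors  # the accumulator that is actually returned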
Author: Julian Miles, Staff Writer They’re running about again, but at least they’re looking happy about it. When I – we – got here, there was running, but only grim faces. Has it only been six days? Can’t have been. Wait. Go through it. Day one would have been after I heard the crash during […]
Creating four backdoors gives the attackers multiple points of re-entry should one be detected and removed. A unique case we haven’t seen before, which introduces another type of attack made possible by abusing websites that don’t monitor 3rd-party dependencies in their users' browsers.
An update to our package RcppNLoptExample
arrived on CRAN earlier today
marking the first update since the initial
release more than four years ago. The nloptr package, created by
Jelmer Ypma, has long been
providing an excellent R interface to NLopt, a very
comprehensive library for nonlinear optimization. In particular, Jelmer carefully exposed the API
entry points such that other R packages can rely on NLopt without having
to explicitly link to it (as one can rely on R providing
sufficient function calling and registration to make this possible by
referring back to nloptr
which naturally has the linking information and resolution). This
package demonstrates this in a simple-to-use Rcpp example package that can serve as a
stanza.
More recent NLopt versions appear
to have changed behaviour a little so that an example we relied upon in a
simple unit test now converges to a marginally different numerical
value, so we adjusted a convergence threshold. Other than that we did a
number of the usual small updates to package metadata, to the README.md
file, and to continuous integration.
The (very short) NEWS entry follows:
Changes in version 0.0.2
(2025-03-09)
Updated tolerance in simple test as newer upstream nlopt changed behaviour ever so slightly, leading to
another spurious failure
Numerous small and standard updates to DESCRIPTION, README.md,
badges, and continuous integration setup
This is not good news. In fact, quite the contrary. Compared to the real issue, the fact that I'm not able to attend Embedded World at Nuremberg is, well, a detail. Or at least that's what I'm forcing myself to believe, as I REALLY wanted to be there. But mother nature said otherwise.
Bahía Blanca, the city where I live, has received a lot of rainfall. Really, a lot. Let me introduce the number like this: the previous highest recorded measurement was 170mm (6.69 inch)... in a month. Yesterday, Friday 07, we had more than 400mm (15.75 inch) in 9 hours.
But those are just numbers. Some things are better seen in images.
I also happen to do figure skating in the same school as the 4-time world champions (where "world" means the whole world), the Roller Dreams precision skating team - Instagram, from Club El Nacional. Our skating rink got severely damaged by the hail we had like 3 weeks ago (yes, we had hail too!!!). Now it's just impossible:
The "real" thing
Let's get to the heavy, heartbreaker part. I did go to downtown Bahía Blanca, but during night, so let me share some links, most of them in Spanish, but images are images:
My alma mater, Universidad Nacional del Sur, lost its main library, a great part of the Physics department and a lot of labs :-(
A nearby town, General Cerri, had even worse luck. Bahía Blanca, a city of 300k+ people, has around 400 evacuated people. General Cerri, a town of 3000? people, had at least 800.
Bahía Blanca, devil's land
Every place has its legends. We do too. This land was called "Huecuvú Mapú", something like "Devil's land", by the original inhabitants of the zone, due to its harsh climate: strong winters and hot summers, coupled with fierce wind. But back in 1855 the Cacique (chief) José María Bulnes Yanquetruz had a peace agreement with commander Nicanor Otamendi. But a battle ensued, which Yanquetruz won. At this point history differs depending upon who tells it. Some say Yanquetruz was assigned a military grade as Captain of the indigenous auxiliary forces and provided a military suit, some say he stole it, some say this was a setup by another chief wanting to disrupt peace. But what is known is that Yanquetruz was killed, and his wife, the "machi" (sorceress), issued a curse over the land that would last 1000 years, and the curse was on the climate.
Aftermath
No, we are not there yet. This has just happened. The third violent climate occurrence in 15 months. The city needs to mourn and start healing itself. Time will tell.
The other day, I noted that the emacs integration with debputy stopped working.
After debugging for a while, I realized that emacs no longer sent the didOpen
notification that is expected of it, which confused debputy. At this point, I was
already several hours into the debugging and I noted there were some discussions on
debian-devel about emacs and byte compilation not working. So I figured I would
shelve the emacs problem for now.
But I needed an LSP capable editor and with my vi skills leaving much to be desired,
I skipped out on vim-youcompleteme. Instead, I pulled out kate, which I had not
been using for years. It had LSP support, so it would be fine, right?
Well, no. Turns out that debputy LSP support had some assumptions that worked for
emacs but not kate. Plus once you start down the rabbit hole, you stumble on
things you missed previously.
Getting started
First order of business was to tell kate about debputy. Conveniently, kate has
a configuration tab for adding language servers in a JSON format right next to the tab where
you can see its configuration for built-in LSP (also in JSON format). So a quick bit of
copy-paste magic and that was done.
Since July (2024), debputy has support for Inlay hints. They are basically small
bits of text that the LSP server can ask the editor to inject into the text to provide
hints to the reader.
Typically, you see them used to provide typing hints, where the editor or the underlying
LSP server has figured out the type of a variable or expression that you did not
explicitly type. Another common use case is to inject the parameter name for positional
arguments when calling a function, so the user does not have to count the position to
figure out which value is passed as which parameter.
In debputy, I have been using the Inlay hints to show inherited fields in
debian/control. As an example, if you have a definition like:
Source: foo-src
Section: devel
Priority: optional
Package: foo-bin
Architecture: any
Then foo-bin inherits the Section and Priority field since it does not supply
its own. Previously, debputy would do that by injecting the fields themselves and their
value just below the Package field as if you had typed them out directly. The editor
always renders Inlay hints distinctly from regular text, so there was no risk of
confusion and it made the text look like a valid debian/control file end to end. The
result looked something like:
With the second instances of Section and Priority being rendered differently than
their surroundings (usually faded or colorless).
Unfortunately, kate did not like injecting Inlay hints with a newline in them,
which was needed for this trick. Reading into the LSP specs, it says nothing about
multi-line Inlay hints being a thing and I figured I would see this problem again
with other editors if I left it be.
I ended up changing the Inlay hints to be placed at the end of the Package field
and then included surrounding () for better visuals. So now, it looks like:
Unfortunately, it is no longer 1:1 with the underlying syntax which I liked about the
previous one. But it works in more editors and is still explicit. I also removed the
Inlay hint for the Homepage field. It takes too much space and I have yet to
meet someone missing it in the binary stanza.
If you have any better ideas for how to render it, feel free to reach out to me.
Spurious completion and hover
As I was debugging the Inlay hints, I wanted to do a quick restart of debputy after
each fix. Then I would trigger a small change to the document to ensure kate would
request an update from debputy to render the Inlay hints with the new code.
The full outgoing payloads are sent via the logs to the client, so it was really about
minimizing which LSP requests are sent to debputy. Notably, two cases would flood the
log:
Completion requests. These are triggered by typing anything at all, and since I wanted
to make a change, I could not avoid this. So here it was about making sure there would be
nothing to complete, so the result was as small as possible.
Hover doc requests. These are triggered by the mouse hovering over a field, so this was
mostly about ensuring my mouse movement did not linger over any field on the way
between restarting the LSP server and scrolling the log in kate.
In my infinite wisdom, I chose to make a comment line where I would do the change. I figured
it would neuter the completion requests completely and it should not matter if my cursor
landed on the comment as there would be no hover docs for comments either.
Unfortunately for me, debputy would ignore the fact that it was on a comment line.
Instead, it would find the next field after the comment line and try to complete based on
that. Normally you do not see this, because the editor correctly identifies that none of
the completion suggestions start with a #, so they are all discarded.
But it was pretty annoying for the debugging, so now debputy has been told to explicitly
stop these requests early on comment lines.
Hover docs for packages
I added a feature in debputy where you can hover over package names in your relationship
fields (such as Depends) and debputy will render a small snippet about it based on
data from your local APT cache.
This doc is then handed to the editor and tagged as markdown provided the editor supports
markdown rendering. Both emacs and kate support markdown. However, not all
markdown renderings are equal. Notably, emacs's rendering does not reformat the text
into paragraphs. In a sense, emacs rendering works a bit like <pre>...</pre> except
it does a bit of fancy rendering inside the <pre>...</pre>.
On the other hand, kate seems to convert the markdown to HTML and then throw the result
into an HTML render engine. Here it is important to remember that not all newlines are equal
in markdown. A Foo<newline>Bar is treated as one "paragraph" (<p>...</p>) and the HTML
render happily renders this as single line Foo Bar provided there is sufficient width to
do so.
A couple of extra newlines worked wonders for the kate rendering, but I have a feeling this
is not going to be the last time the hover docs will need some tweaking for prettification.
Feel free to reach out if you spot a weirdly rendered hover doc somewhere.
Making quickfixes available in kate
Quickfixes are treated as generic code actions in the LSP specs. Each code action has a "type"
(kind in the LSP lingo), which enables the editor to group the actions accordingly or
filter by certain types of code actions.
The design in the specs leads to the following flow:
The LSP server provides the editor with diagnostics (there are multiple ways to trigger
this, so we will keep this part simple).
The editor renders them to the user and the user chooses to interact with one of them.
The interaction makes the editor ask the LSP server which code actions are available
at that location (optionally with a filter to only see quickfixes).
The LSP server looks at the provided range and is expected to return the relevant
quickfixes here.
This flow is really annoying from an LSP server writer's point of view. When you do the diagnostics
(in step 1), you tend to already know what the possible quickfixes would be. The LSP spec
authors realized this at some point, so there are two features the editor provides to simplify
this.
In the editor request for code actions, the editor is expected to provide the diagnostics
that it received from the server. Side note: I cannot quite tell if this is optional or
required from the spec.
The editor can provide support for remembering a data member in each diagnostic. The
server can then store arbitrary information in that member, which it will see again in
the code actions request. Again, provided that the editor supports this optional feature. A minimal sketch of both is shown below.
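To make the shapes concrete, here is a minimal sketch using plain Python dictionaries that mirror the LSP JSON structures (the URI, message and edit are made up, not debputy's actual output): a diagnostic carrying a data member, and a quickfix code action that references that diagnostic in its diagnostics list.

# Hypothetical diagnostic as published by a server; "data" is the optional member
# that a cooperating editor echoes back in the later textDocument/codeAction request.
diagnostic = {
    "range": {"start": {"line": 4, "character": 0},
              "end": {"line": 4, "character": 7}},
    "severity": 2,  # Warning
    "message": "example: duplicate field",            # made-up message
    "data": {"quickfix_id": "drop-duplicate-field"},  # arbitrary server-side payload
}

# Hypothetical quickfix returned from textDocument/codeAction. The editor is expected
# to pass the overlapping diagnostics in the request context; the server can then use
# either that list or the round-tripped "data" member to reconstruct its bookkeeping.
code_action = {
    "title": "Remove the duplicate field",
    "kind": "quickfix",
    "diagnostics": [diagnostic],
    "edit": {
        "changes": {
            "file:///path/to/debian/control": [
                {"range": diagnostic["range"], "newText": ""},
            ],
        },
    },
}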
All the quickfix logic in debputy so far has hinged on both of these two features.
As life would have it, kate provides neither of them.
Which meant I had to teach debputy to keep track of its diagnostics on its own. The plus side
is that it makes it easier to support "pull diagnostics" down the line, since it requires a similar
feature. Additionally, it also means that quickfixes are now available in more editors. For
consistency, debputy logic is now always used rather than relying on the editor support
when present.
The downside is that I had to spend hours coming up with and debugging a way to find the
diagnostics that overlap with the range provided by the editor. The most difficult part was keeping
the logic straight and getting the runes correct for it.
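The core of that bookkeeping boils down to interval overlap on (line, character) positions. A minimal sketch of the idea (tuples instead of LSP position objects, and not debputy's actual code) could look like this:

# Ranges are ((line, character), (line, character)) pairs, treated as half-open
# intervals; tuples compare lexicographically, which is exactly the order we need.
def overlaps(range_a, range_b) -> bool:
    # (Real code needs a bit more care for zero-width cursor ranges at the boundaries.)
    (a_start, a_end), (b_start, b_end) = range_a, range_b
    return a_start < b_end and b_start < a_end

def diagnostics_for(stored, requested_range):
    # Return the stored diagnostics whose range overlaps the editor-provided range.
    return [diag for diag in stored if overlaps(diag["range"], requested_range)]

# Example: a diagnostic spanning line 4, columns 0-7, matches a cursor at line 4, column 3.
stored = [{"range": ((4, 0), (4, 7)), "message": "example"}]
print(diagnostics_for(stored, ((4, 3), (4, 3))))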
Making the quickfixes actually work
With all of that, kate would show the quickfixes for diagnostics from debputy and you could
use them too. However, they would always apply twice, with a suboptimal outcome as a result.
The LSP spec has multiple ways of defining what needs to be changed in response to activating a
code action. In debputy, all edits are currently done via the WorkspaceEdit type. It
has two ways of defining the changes. Either via changes or documentChanges with
documentChanges being the preferred one if both parties support this.
I originally read that as I was allowed to provide both and the editor would pick the one it
preferred. However, after seeing kate blindly use both when they are present, I reviewed
the spec and it does say "The edit should either provide changes or documentChanges",
so I think that one is on me.
None of the changes in debputy currently require documentChanges, so I went with just
using changes for now despite it not being preferred. I cannot figure
out the logic for determining whether an editor supports documentChanges. As I read the notes for this
part of the spec, my understanding is that kate does not announce its support for
documentChanges but it clearly uses them when present. Therefore, I decided to keep it
simple for now until I have time to dig deeper.
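For reference, the two shapes look roughly like this (again plain Python dictionaries mirroring the LSP JSON; the URI, version and edit are made up), and per the wording quoted above a server should send one or the other, not both:

# Shared text edit: insert some text at the top of the document (made-up example).
text_edit = {
    "range": {"start": {"line": 0, "character": 0},
              "end": {"line": 0, "character": 0}},
    "newText": "# example insertion\n",
}

# Variant 1: the plain "changes" map, keyed directly by document URI.
workspace_edit_with_changes = {
    "changes": {"file:///path/to/debian/control": [text_edit]},
}

# Variant 2: the preferred "documentChanges" list; each entry names the document
# and may carry a version so the editor can reject edits against a stale buffer.
workspace_edit_with_document_changes = {
    "documentChanges": [
        {"textDocument": {"uri": "file:///path/to/debian/control", "version": 7},
         "edits": [text_edit]},
    ],
}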
Remaining limitations with kate
There is one remaining limitation with kate that I have not yet solved. The kate
program uses KSyntaxHighlighting for its language detection, which in turn is the
basis for which LSP server is assigned to a given document.
This engine does not seem to support as complex detection logic as I hoped from it. Concretely,
it either works on matching on an extension / a basename (same field for both cases) or
mime type. This, combined with our habit in Debian of using extension-less files like
debian/control vs. debian/tests/control or debian/rules or
debian/upstream/metadata, makes things awkward at best.
Concretely, the syntax engine cannot tell debian/control from debian/tests/control as
they use the same basename. Fortunately, the syntax is close enough to work for both and
debputy is set to use filename based lookups, so this case works well enough.
However, for debian/rules and debian/upstream/metadata, my understanding is that if
I assign these in the syntax engine as Debian files, these rules will also trigger for any
file named foo.rules or bar.metadata. That seems a bit too broad for me, so I have
opted out of that for now. The downside is that these files will not work out of the box
with kate for now.
The current LSP configuration in kate does not recognize makefiles or YAML either. Ideally,
we would assign custom languages for the affected Debian files, so we do not steal the ID
from other language servers. Notably, kate has a built-in language server for YAML and
debputy does nothing for a generic YAML document. However, adding YAML as a supported
language for debputy would cause conflicts and regressions for users who are already
happy with their generic YAML language server from kate.
So there is certainly still work to be done. If you are good with KSyntaxHighlighting
and know how to solve some of this, I hope you will help me out.
Changes unrelated to kate
While I was working on debputy, I also added some other features that I want to mention.
The debputy lint command will now show related context for a diagnostic in its terminal
report when such information is available and is from the same file as the diagnostic
itself (cross-file cases are rendered without related information).
The related information is typically used to highlight a source of a conflict. As an
example, if you use the same field twice in a stanza of debian/control, then
debputy will add a diagnostic to the second occurrence. The related information
for that diagnostic would provide the position of the first occurrence.
This should make it easier to find the source of the conflict in the cases where
debputy provides it. Let me know if you are missing it for certain diagnostics.
The diagnostics analysis of debian/control will now identify and flag simple
duplicated relations (complex ones like OR relations are ignored for now). Thanks
to Matthias Geiger for suggesting the feature and Otto Kekäläinen for reporting
a false positive that is now fixed.
Closing
I am glad I tested with kate to weed out most of these issues in time before
the freeze. The Debian freeze will start within a week from now. Since debputy
is a part of the toolchain packages it will be frozen from there except for
important bug fixes.
Author: James Jarvis The green leaves of The Great Oak glistened in the starlight. The air was still and calming. It was exactly what Liza expected. She wandered over to the base of the tree whilst deep in thought. The beauty of The Great Oak was amplified by its location. Situated within its own room […]
Almost exactly four years after I started with this project, yesterday I
presented my PhD defense.
My thesis was what I’ve been presenting advances of all around since ≈2022: «A
certificate-poisoning-resistant protocol for the synchronization of Web of Trust
networks»
Lots of paperwork is still on the road for me. But at least in the immediate
future, I can finally use this keyring my friend Raúl Gómez 3D-printed for me:
This was the fifth time that a MiniDebConf (as an exclusive in-person event
about Debian) took place in Brazil. Previous editions were in Curitiba
(2016,
2017, and
2018), and in
Brasília 2023. We had other MiniDebConfs
editions held within Free Software events such as
FISL and Latinoware, and other
online events. See our
event history.
Parallel to MiniDebConf, on 27th (Saturday)
FLISOL - Latin American Free Software Installation Festival took place. It's the largest event in Latin America to promote Free Software,
and it has been held since 2005 simultaneously in several cities.
MiniDebConf Belo Horizonte 2024 was a success (as were previous editions) thanks to the participation of everyone, regardless of their level of knowledge about
Debian. We value the presence of both beginner users who are familiarizing
themselves with the system and the official project developers. The spirit of
welcome and collaboration was present throughout the event.
2024 edition numbers
During the four days of the event, several activities took place for all
levels of users and collaborators of the Debian project. The official schedule
was composed of:
06 rooms in parallel on Saturday;
02 auditoriums in parallel on Monday and Tuesday;
30 talks/BoFs of all levels;
05 workshops for hands-on activities;
09 lightning talks on general topics;
01 Live Electronics performance with Free Software;
Install fest to install Debian on attendees' laptops;
BSP (Bug Squashing Party);
Uploads of new or updated packages.
The final numbers for MiniDebConf Belo Horizonte 2024 show that we had a
record number of participants.
Total people registered: 399
Total attendees in the event: 224
Of the 224 participants, 15 were official Brazilian contributors,
10 being DDs (Debian Developers) and 05 DMs (Debian Maintainers), in addition to
several unofficial contributors.
The organization was carried out by 14 people who started working at the end of
2023, including Prof. Loïc Cerf from the Computing Department who made the event possible at UFMG, and 37 volunteers who helped during the event.
As MiniDebConf was held at UFMG facilities, we had the help of more than
10 University employees.
See the list with the
names of people who helped in some way in organizing MiniDebConf Belo Horizonte
2024.
The difference between the number of people registered and the number of
attendees in the event is probably explained by the fact that there is no
registration fee, so if the person decides not to go to the event, they will
not suffer financial losses.
The 2024 edition of MiniDebconf Belo Horizonte was truly grand and shows the
result of the constant efforts made over the last few years to attract more
contributors to the Debian community in Brazil. With each edition the numbers
only increase, with more attendees, more activities, more rooms, and more
sponsors/supporters.
Activities
The MiniDebConf schedule was intense and diverse. On the 27th, 29th and 30th
(Saturday, Monday and Tuesday) we had talks, discussions, workshops and many
practical activities.
On the 28th (Sunday), the Day Trip took place, a day dedicated to sightseeing
around the city. In the morning we left the hotel and went, on a chartered bus,
to the
Belo Horizonte Central Market. People took
the opportunity to buy various things such as cheeses, sweets, cachaças and
souvenirs, as well as tasting some local foods.
After a 2-hour tour of the Market, we got back on the bus and hit the road for
lunch at a typical Minas Gerais food restaurant.
With everyone well fed, we returned to Belo Horizonte to visit the city's
main tourist attraction: Lagoa da Pampulha and Capela São Francisco de Assis,
better known as
Igrejinha da Pampulha.
We went back to the hotel and the day ended in the hacker space that we set up
in the events room for people to chat, do packaging, and eat pizza.
Crowdfunding
For the third time we ran a crowdfunding campaign and it was incredible how
people contributed! The initial goal was to raise the amount equivalent to a
gold tier of R$ 3,000.00. When we reached this goal, we defined a new one,
equivalent to one gold tier + one silver tier (R$ 5,000.00). And again we
achieved this goal. So we proposed as a final goal the value of a gold + silver
+ bronze tiers, which would be equivalent to R$ 6,000.00. The result was that
we raised R$7,239.65 (~ USD 1,400) with the help of more than 100 people!
Food, accommodation and/or travel grants for participants
Each edition of MiniDebConf brought some innovation, or some different benefit
for the attendees. In this year's edition in Belo Horizonte, as with DebConfs, we offered bursaries for food, accommodation and/or travel to help those people who would like to come to the event but who would need
some kind of help.
In the registration form, we included the option for the person to request a
food, accommodation and/or travel bursary, but to do so, they would have to
identify themselves as a contributor (official or unofficial) to Debian and
write a justification for the request.
Number of people benefited:
Food: 69
Accommodation: 20
Travel: 18
The food bursary provided lunch and dinner every day. The lunches included
attendees who live in Belo Horizonte and the region. Dinners were paid for
attendees who also received accommodation and/or travel. The accommodation was
held at the BH Jaraguá Hotel. And the
travels included airplane or bus tickets, or fuel (for those who came by car or
motorbike).
Much of the money to fund the bursaries came from the Debian Project, mainly
for travel. We sent a budget request to the former Debian leader Jonathan
Carter, and he promptly approved our request.
In addition to this event budget, the leader also approved individual requests
sent by some DDs who preferred to request directly from him.
The experience of offering the bursaries was really good because it allowed
several people to come from other cities.
Photos and videos
You can watch recordings of the talks at the links below:
We would like to thank all the attendees, organizers, volunteers, sponsors and
supporters who contributed to the success of MiniDebConf Belo Horizonte 2024.
A new (mostly maintenance) release 0.2.3 of RcppTOML is
now on CRAN.
TOML is a file format that is most
suitable for configurations, as it is meant to be edited by
humans but read by computers. It emphasizes strong readability
for humans while at the same time supporting strong typing
as well as immediate and clear error reports. On small typos
you get parse errors, rather than silently corrupted garbage. Much
preferable to any and all of XML, JSON or YAML – though sadly these may
be too ubiquitous now. TOML is
frequently being used with projects such as the Hugo static blog compiler, or the Cargo system of Crates (aka “packages”)
for the Rust
language.
This release was tickled by another CRAN request: just like
yesterday’s and the RcppDate
release two days ago, it responds to the esoteric ‘whitespace in
literal operator’ deprecation warning. We alerted upstream too.
The short summary of changes follows.
Changes in version 0.2.3
(2025-03-08)
Correct the minimum version of Rcpp to
1.0.8 (Walter Somerville)
The package now uses Authors@R as mandated by CRAN
Updated 'whitespace in literal' issue upsetting
clang++-20
Continuous integration updates including simpler r-ci
setup
To avoid needless typing, the fish shell features command abbreviations to
expand some words after pressing space. We can emulate such a feature with
Zsh:
# Definition of abbrev-alias for auto-expanding aliases
typeset -ga _vbe_abbrevations
abbrev-alias() {
  alias $1
  _vbe_abbrevations+=(${1%%\=*})
}
_vbe_zle-autoexpand() {
  local -a words; words=(${(z)LBUFFER})
  if (( ${#_vbe_abbrevations[(r)${words[-1]}]} )); then
    zle _expand_alias
  fi
  zle magic-space
}
zle -N _vbe_zle-autoexpand
bindkey -M emacs " " _vbe_zle-autoexpand
bindkey -M isearch " " magic-space

# Correct common typos
(( $+commands[git] ))  && abbrev-alias gti=git
(( $+commands[grep] )) && abbrev-alias grpe=grep
(( $+commands[sudo] )) && abbrev-alias suod=sudo
(( $+commands[ssh] ))  && abbrev-alias shs=ssh

# Save a few keystrokes
(( $+commands[git] )) && abbrev-alias gls="git ls-files"
(( $+commands[ip] ))  && {
  abbrev-alias ip6='ip -6'
  abbrev-alias ipb='ip -brief'
}

# Hard to remember options
(( $+commands[mtr] )) && abbrev-alias mtrr='mtr -wzbe'
Here is a demo where gls is expanded to git ls-files after pressing space:
Auto-expanding gls to git ls-files
I don’t auto-expand all aliases. I keep using regular aliases when slightly
modifying the behavior of a command or for well-known abbreviations:
Author: Soramimi Hanarejima When you open the door, it’s like I’m looking at an old photo, you and the hallway tinged a sentimental amber by the redshift of the decades between us. “Do you want to come in?” you ask, voice muffled by all those years. “I just got some lasagna out of the oven.” […]
In September 2023, KrebsOnSecurity published findings from security researchers who concluded that a series of six-figure cyberheists across dozens of victims resulted from thieves cracking master passwords stolen from the password manager service LastPass in 2022. In a court filing this week, U.S. federal agents investigating a spectacular $150 million cryptocurrency heist said they had reached the same conclusion.
On March 6, federal prosecutors in northern California said they seized approximately $24 million worth of cryptocurrencies that were clawed back following a $150 million cyberheist on Jan. 30, 2024. The complaint refers to the person robbed only as “Victim-1,” but according to blockchain security researcher ZachXBT the theft was perpetrated against Chris Larsen, the co-founder of the cryptocurrency platform Ripple. ZachXBT was the first to report on the heist.
This week’s action by the government merely allows investigators to officially seize the frozen funds. But there is an important conclusion in this seizure document: It basically says the U.S. Secret Service and the FBI agree with the findings of the LastPass breach story published here in September 2023.
That piece quoted security researchers who said they were witnessing six-figure crypto heists several times each month that all appeared to be the result of crooks cracking master passwords for the password vaults stolen from LastPass in 2022.
“The Federal Bureau of Investigation has been investigating these data breaches, and law enforcement agents investigating the instant case have spoken with FBI agents about their investigation,” reads the seizure complaint, which was written by a U.S. Secret Service agent. “From those conversations, law enforcement agents in this case learned that the stolen data and passwords that were stored in several victims’ online password manager accounts were used to illegally, and without authorization, access the victims’ electronic accounts and steal information, cryptocurrency, and other data.”
The document continues:
“Based on this investigation, law enforcement had probable cause to believe the same attackers behind the above-described commercial online password manager attack used a stolen password held in Victim 1’s online password manager account and, without authorization, accessed his cryptocurrency wallet/account.”
Working with dozens of victims, security researchers Nick Bax and Taylor Monahan found that none of the six-figure cyberheist victims appeared to have suffered the sorts of attacks that typically preface a high-dollar crypto theft, such as the compromise of one’s email and/or mobile phone accounts, or SIM-swapping attacks.
They discovered the victims all had something else in common: Each had at one point stored their cryptocurrency seed phrase — the secret code that lets anyone gain access to your cryptocurrency holdings — in the “Secure Notes” area of their LastPass account prior to the 2022 breaches at the company.
Bax and Monahan found another common theme with these robberies: They all followed a similar pattern of cashing out, rapidly moving stolen funds to a dizzying number of drop accounts scattered across various cryptocurrency exchanges.
According to the government, a similar level of complexity was present in the $150 million heist against the Ripple co-founder last year.
“The scale of a theft and rapid dissipation of funds would have required the efforts of multiple malicious actors, and was consistent with the online password manager breaches and attack on other victims whose cryptocurrency was stolen,” the government wrote. “For these reasons, law enforcement agents believe the cryptocurrency stolen from Victim 1 was committed by the same attackers who conducted the attack on the online password manager, and cryptocurrency thefts from other similarly situated victims.”
Reached for comment, LastPass said it has seen no definitive proof — from federal investigators or others — that the cyberheists in question were linked to the LastPass breaches.
“Since we initially disclosed this incident back in 2022, LastPass has worked in close cooperation with multiple representatives from law enforcement,” LastPass said in a written statement. “To date, our law enforcement partners have not made us aware of any conclusive evidence that connects any crypto thefts to our incident. In the meantime, we have been investing heavily in enhancing our security measures and will continue to do so.”
On August 25, 2022, LastPass CEO Karim Toubba told users the company had detected unusual activity in its software development environment, and that the intruders stole some source code and proprietary LastPass technical information. On Sept. 15, 2022, LastPass said an investigation into the August breach determined the attacker did not access any customer data or password vaults.
But on Nov. 30, 2022, LastPass notified customers about another, far more serious security incident that the company said leveraged data stolen in the August breach. LastPass disclosed that criminal hackers had compromised encrypted copies of some password vaults, as well as other personal information.
Experts say the breach would have given thieves “offline” access to encrypted password vaults, theoretically allowing them all the time in the world to try to crack some of the weaker master passwords using powerful systems that can attempt millions of password guesses per second.
Researchers found that many of the cyberheist victims had chosen master passwords with relatively low complexity, and were among LastPass’s oldest customers. That’s because legacy LastPass users were more likely to have master passwords that were protected with far fewer “iterations,” which refers to the number of times your password is run through the company’s encryption routines. In general, the more iterations, the longer it takes an offline attacker to crack your master password.
Over the years, LastPass forced new users to pick longer and more complex master passwords, and they increased the number of iterations on multiple occasions by several orders of magnitude. But researchers found strong indications that LastPass never succeeded in upgrading many of its older customers to the newer password requirements and protections.
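As a rough illustration of what the iteration count buys (a sketch only; the iteration counts below are arbitrary example values, not LastPass's settings), a PBKDF2-style key derivation simply repeats the underlying hash the requested number of times, so every additional order of magnitude multiplies the cost of each guess an offline attacker must make:

import hashlib
import time

def derive_key(master_password: str, salt: bytes, iterations: int) -> bytes:
    # The same PBKDF2-HMAC-SHA256 work a vault owner does once per unlock,
    # an offline attacker must repeat for every single password guess.
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, iterations)

for iterations in (1_000, 100_000, 1_000_000):  # arbitrary example counts
    start = time.perf_counter()
    derive_key("correct horse battery staple", b"example-salt", iterations)
    elapsed = time.perf_counter() - start
    print(f"{iterations:>9} iterations: {elapsed:.3f}s per guess on this machine")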
Asked about LastPass’s continuing denials, Bax said that after the initial warning in our 2023 story, he naively hoped people would migrate their funds to new cryptocurrency wallets.
“While some did, the continued thefts underscore how much more needs to be done,” Bax told KrebsOnSecurity. “It’s validating to see the Secret Service and FBI corroborate our findings, but I’d much rather see fewer of these hacks in the first place. ZachXBT and SEAL 911 reported yet another wave of thefts as recently as December, showing the threat is still very real.”
Monahan said LastPass still hasn’t alerted their customers that their secrets—especially those stored in “Secure Notes”—may be at risk.
“It’s been two and a half years since LastPass was first breached [and] hundreds of millions of dollars has been stolen from individuals and companies around the globe,” Monahan said. “They could have encouraged users to rotate their credentials. They could’ve prevented millions and millions of dollars from being stolen by these threat actors. But instead they chose to deny that their customers were at risk and blame the victims instead.”
A new release 0.1.13 of the RcppSimdJson
package is now on CRAN.
RcppSimdJson
wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via
very clever algorithmic engineering to obtain largely branch-free code,
coupled with modern C++ and newer compiler instructions, it results in
parsing gigabytes of JSON per second, which is quite
mindboggling. The best-case performance is ‘faster than CPU speed’ as
use of parallel SIMD instructions and careful branch avoidance can lead
to less than one cpu cycle per byte parsed; see the video of the talk by Daniel Lemire
at QCon.
This release was tickled by another CRAN request: just like
yesterday’s RcppDate
release, it responds to the esoteric ‘whitespace in literal
operator’ deprecation warning. Turns out that upstream simdjson had this fixed
a few months ago as the node bindings package ran into it. Other changes
include a bit of earlier polish by Daniel, another CRAN mandated update,
CI improvements, and a move of two demos to examples/ to
avoid having to add half a dozen packages to Suggests: for no real usage
gain in the package.
The short NEWS entry for this release follows.
Changes in version 0.1.13
(2025-03-07)
A call to std::string::erase is now guarded
(Daniel)
The package now uses Authors@R as mandated by CRAN
(Dirk)
simdjson was upgraded to version 3.12.2
(Dirk)
Continuous integration updated to more compilers and simpler
setup
Two demos are now in inst/examples to not inflate
Suggests
This year I was at FOSDEM 2025, and it was the fifth edition in a row that I participated in person (before it was in 2019, 2020, 2023 and 2024). The event took place on February 1st and 2nd, as always at the ULB campus in Brussels.
We arrived on Friday at lunchtime and went straight to the hotel to drop off our bags. This time we stayed at Ibis in the city center, very close to the hustle and bustle. The price was good and the location was really good for us to be able to go out in the city center and come back late at night. We found a Japanese restaurant near the hotel and it was definitely worth having lunch there because of the all-you-can-eat price. After taking a nap, we went out for a walk. Since January 31st is the last day of the winter sales in the city, the streets in the city center were crowded, there were lots of people in the stores, and the prices were discounted. We concluded that if we have the opportunity to go to Brussels again at this time, it would be better to wait and buy clothes for cold weather there.
Unlike in 2023 and 2024, the FOSDEM organization did not approve my request for the Translations DevRoom, so my goal was to participate in the event and collaborate at the Debian booth. And, as I always do, I volunteered to operate the broadcast camera in the main auditorium on both days, for two hours each.
The Debian booth:
Me in the auditorium helping with the broadcast:
2 weeks before the event, the organization put out a call for interested people to request a room for their community’s BoF (Birds of a Feather), and I requested a room for Debian and it was approved :-)
It was great to see that people were really interested in participating at the BoF and the room was packed! As the host of the discussions, I tried to leave the space open for anyone who wanted to talk about any subject related to Debian. We started with a talk from MiniDebConf25 organizers, that will be taking place this year in France. Then other topics followed with people talking, asking and answering questions, etc. It was worth organizing this BoF. Who knows, the idea will remain in 2026.
During the two days of the event, it didn’t rain or get too cold. The days were sunny (and people celebrated the weather in Brussels). But I have to admit that it would have been nice to see snow like I did in 2019. Unlike last year, this time I felt more motivated to stay at the event the whole time.
I would like to give my special thanks to Andreas Tille, the current Debian Leader, who approved my request for flight tickets so that I could join FOSDEM 2025. As always, this help was essential in making my trip to Brussels possible.
And once again Jandira was with me on this adventure. On Monday we went for a walk around Brussels and we also traveled to visit Bruges again. The visit to this city is really worth it because walking through the historic streets is like going back in time. This time we even took a boat trip through the canals, which was really cool.
Author: Nageene Noor The world through Viktor Blackford’s window was quiet. Hannibal always started with the window, and it became a habit like an anchor, before he let himself sink into Viktor’s home. From where Hannibal observed, his whole life was mundane. Viktor was meticulously ordinary. Every evening, he cooked simple meals, worked at his […]
Punctual
Robert F.
never procrastinates. But I think now would
be a good time for a change. He worries that
"I better do something quick, before my 31,295 year deadline arrives."
Stewart
suffers so, saying
"Whilst failing to check in for a flight home on the TUI
app (one of the largest European travel companies), their
Harry Potter invisibility cloak slipped. Perhaps I'll just have to stay on holiday?"
You have my permission, just tell the boss I said so.
Diligent
Dan H.
is in no danger of being replaced. Says Dan,
"My coworker was having problems getting regular expressions
to work in a PowerShell script. She asked Bing's Copilot
for help - and was it ever helpful!"
PSU alum (I'm guessing)
Justin W.
was overwhelmed in Happy Valley.
"I was just trying to find out when the game
started. This is too much date math for my brain to figure out."
Finally, bug-loving
Pieter
caught this classic.
"They really started with a blank slate for the newest update. I'm giving them a solid %f for the effort."
At 49, Branden Spikes isn’t just one of the oldest technologists who has been involved in Elon Musk’s Department of Government Efficiency (DOGE). As the current director of information technology at X/Twitter and an early hire at PayPal, Zip2, Tesla and SpaceX, Spikes is also among Musk’s most loyal employees. Here’s a closer look at this trusted Musk lieutenant, whose Russian ex-wife was once married to Elon’s cousin.
The profile of Branden Spikes on X.
When President Trump took office again in January, he put the world’s richest man — Elon Musk — in charge of the U.S. Digital Service, and renamed the organization as DOGE. The group is reportedly staffed by at least 50 technologists, many of whom have ties to Musk’s companies.
DOGE has been enabling the president’s ongoing mass layoffs and firings of federal workers, largely by seizing control over computer systems and government data for a multitude of federal agencies, including the Social Security Administration, the Department of Homeland Security, the Office of Personnel Management, and the Treasury Department.
It is difficult to find another person connected to DOGE who has stronger ties to Musk than Branden Spikes. A native of California, Spikes initially teamed up with Musk in 1997 as a lead systems engineer for the software company Zip2, the first major venture for Musk. In 1999, Spikes was hired as director of IT at PayPal, and in 2002 he became just the fourth person hired at SpaceX.
In 2012, Spikes launched Spikes Security, a software product that sought to create a compartmentalized or “sandboxed” web browser that could insulate the user from malware attacks. A review of spikes.com in the Wayback Machine shows that as far back as 1998, Musk could be seen joining Spikes for team matches in the online games Quake and Quake II. In 2016, Spikes Security was merged with another security suite called Aurionpro, with the combined company renamed Cyberinc.
A snapshot of spikes.com from 1998 shows Elon Musk’s profile in Spike’s clan for the games Quake and Quake II.
Spikes’s LinkedIn profile says he was appointed head of IT at X in February 2025. And although his name shows up on none of the lists of DOGE employees circulated by various media outlets, multiple sources told KrebsOnSecurity that Spikes was working with DOGE and operates within Musk’s inner circle of trust.
In a conversation with KrebsOnSecurity, Spikes said he is dedicated to his country and to saving it from what he sees as certain ruin.
“Myself, I was raised by a southern conservative family in California and I strongly believe in America and her future,” Spikes said. “This is why I volunteered for two months in DC recently to help DOGE save us from certain bankruptcy.”
Spikes told KrebsOnSecurity that he recently decided to head back home and focus on his job as director of IT at X.
“I loved it, but ultimately I did not want to leave my hometown and family back in California,” Spikes said of his tenure at DOGE. “After a couple of months it became clear that to continue helping I would need to move to DC and commit a lot more time, so I politely bowed out.”
Prior to founding Spikes Security, Branden Spikes was married to a native Russian woman named Natalia whom he’d met at a destination wedding in South America in 2003.
Branden and Natalia’s names are both on the registration records for the domain name orangetearoom[.]com. This domain, which DomainTools.com says was originally registered by Branden in 2009, is the home of a tax-exempt charity in Los Angeles called the California Russian Association.
Here is a photo from a 2011 event organized by the California Russian Association, showing Branden and Natalia at one of its “White Nights” charity fundraisers:
Branden and Natalia Spikes, on left, in 2011. The man on the far right is Ivan Y. Podvalov, a board member of the Kremlin-aligned Congress of Russian Americans (CRA). The man in the center is Feodor Yakimoff, director of operations at the Transib Global Sourcing Group, and chairman of the Russian Imperial Charity Balls, which works in concert with the Russian Heritage Foundation.
In 2011, the Spikes couple got divorced, and Natalia changed her last name to Haldeman. That is not her maiden name, which appears to be “Libina.” Rather, Natalia acquired the surname Haldeman in 1998, when she married Elon Musk’s cousin.
Reeve Haldeman is the son of Scott Haldeman, who is the brother of Elon Musk’s mother, Maye Musk. Divorce records show Reeve and Natalia officially terminated their marriage in 2007. Reeve Haldeman did not respond to a request for comment.
A review of other domain names connected to Natalia Haldeman’s email address shows she has registered more than a dozen domains over the years that are tied to the California Russian Association, and an apparently related entity called the Russian Heritage Foundation, Inc.:
Ms. Haldeman did not respond to requests for comment. Her name and contact information appear in the registration records for these domains dating back to 2010, and a document published by ProPublica shows that by 2016 Natalia Haldeman was appointed CEO of the California Russian Foundation.
The domain name that bears both Branden’s and Natalia’s names — orangetearoom.com — features photos of Ms. Haldeman at fundraising events for the Russian foundation through 2014. Additional photos of her and many of the same people can be seen through 2023 at another domain she registered in 2010 — russianheritagefoundation.com.
A photo from Natalia Haldeman’s Facebook page shows her mother (left) pictured with Maye Musk, Elon Musk’s mother, in 2022.
The photo of Branden and Natalia above is from one such event in 2011 (tied to russianwhitenights.org, another Haldeman domain). The person on the right in that image — Ivan Y. Podvalov — appears in many fundraising event photos published by the foundation over the past decade. Podvalov is a board member of the Congress of Russian Americans (CRA), a nonprofit group that is known for vehemently opposing U.S. financial and legal sanctions against Russia.
Writing for The Insider in 2022, journalist Diana Fishman described how the CRA has engaged in outright political lobbying, noting that the organization in June 2014 sent a letter to President Obama and the secretary of the United Nations, calling for an end to the “large-scale US intervention in Ukraine and the campaign to isolate Russia.”
“The US military contingents must be withdrawn immediately from the Eastern European region, and NATO’s enlargement efforts and provocative actions against Russia must cease,” the message read.
The Insider said the CRA director sent another two letters, this time to President Donald Trump, in 2017 and 2018.
“One was a request not to sign a law expanding sanctions against Russia,” Fishman wrote. “The other regretted the expulsion of 60 Russian diplomats from the United States and urged not to jump to conclusions on Moscow’s involvement in the poisoning of Sergei Skripal.”
The nonprofit tracking website CauseIQ.com reports that The Russian Heritage Foundation, Inc. is now known as Constellation of Humanity.
The Russian Heritage Foundation and the California Russian Association both promote the interests of the Russian Orthodox Church. This page indexed by Archive.org from russiancalifornia.org shows The California Russian Foundation organized a community effort to establish an Orthodox church in Orange County, Calif.
A press release from the Russian Orthodox Church Outside of Russia (ROCOR) shows that in 2021 the Russian Heritage Foundation donated money to organize a conference for the Russian Orthodox Church in Serbia.
A review of the “Partners” listed on the Spikes’ jointly registered domain — orangetearoom.com — shows the organization worked with a marketing company called Russian American Media. Reporting by KrebsOnSecurity last year showed that Russian American Media also partners with the problematic people-search service Radaris, which was formed by two native Russian brothers in Massachusetts who have built a fleet of consumer data brokers and Russian affiliate programs.
When asked about his ex-wife’s history, Spikes said she has a good heart and bears no ill-will toward anyone.
“I attended several of Natalia’s social events over the years we were together and can assure you that she’s got the best intentions with those,” Spikes told KrebsOnSecurity. “There’s no funny business going on. It is just a way for those friendly immigrants to find resources amongst each other to help get settled in and chase the American dream. I mean, they’re not unlike the immigrants from other countries who come to America and try to find each other and help each other find others who speak the language and share in the building of their businesses here in America.”
Spikes said his own family roots go back deeply into American history, sharing that his 6th great-grandfather was Alexander Hamilton on his mom’s side, and Jesse James on his dad’s side.
“My family roots are about as American as you can get,” he said. “I’ve also been entrusted with building and safeguarding Elon’s companies since 1999 and have a keen eye (as you do) for bad actors, so have enough perspective to tell you that Natalia has no bad blood and that she loves America.”
Of course, this perspective comes from someone who has the utmost regard for the interests of the “special government employee” Mr. Musk, who has been bragging about tossing entire federal agencies into the “wood chipper,” and who recently wielded an actual chainsaw on stage while referring to it as the “chainsaw for bureaucracy.”
“Elon’s intentions are good and you can trust him,” Spikes assured.
A special note of thanks for research assistance goes to Jacqueline Sweet, an independent investigative journalist whose work has been published in The Guardian, Rolling Stone, POLITICO and The Intercept.
Now, as I feared, the white linen fabric wasn’t a great choice: not
only did it become dirt-grey in a very short time, but the area under
the ball of the foot was quickly worn through by friction, just as
usually happens with bought slippers.
I have no pictures for a number of reasons, but trust me when I say that
they look pretty bad.
However, the sole is still going strong, and the general concept has
proved valid, so when I needed a second pair of slippers I used the
same pattern,
with a sole made from the same twine
but this time with denim taken from the legs of an old pair of jeans.
To make them a bit nicer, and to test the technique, I also added a
design with a stencil and iridescent black acrylic paint (with fabric
medium): I like the tone-on-tone effect, as it’s both (relatively)
subtle and shiny.
Then, my partner also needed new slippers, and I wanted to make his
too. His preference, however, is for open-heeled slippers, so I
adjusted the pattern into a new one, making it from an old pair of
blue jeans rather than black like mine. He also finds completely flat
soles a bit uncomfortable, so I made a heel with the same braided
twine technique: this also seems to be working fine, and I’ve added
these instructions to the braided soles ones.
Both of these have now been worn for a few months: the denim is
working much better than the linen (which isn’t a complete surprise)
and we’re both finding them comfortable, so if we ever need new
slippers I think I’ll keep using this pattern.
Now the plan is to wash the linen slippers and then look into
repairing them, either with just a new fabric inner sole + padding
or, if washing isn’t as successful as I’d like, by making a new
fabric part in a different material and reusing just the twine sole.
Either way they are going back into use.
In case you haven't noticed, I'm trying to post more, and one of the
things that entails is to just dump a bunch of draft notes over the
fence. In this specific case, I had a set of rough notes about NixOS
and particularly Nix, the package manager.
Here you can see the very birth of an article, what it looks like
before it becomes the questionable prose it is now, by looking at the
Git history of this file, particularly its birth. I have a couple of
those left, and it would be pretty easy to publish them as is, but I
feel I'd be doing others (and myself! I write for my own
documentation too, after all) a disservice by not going the extra
mile on those.
So here's the long version of my experiment with Nix.
Nix
A couple friends are real fans of Nix. Just like I work with Puppet a
lot, they deploy and maintain servers (if not fleets of servers)
with NixOS and its declarative package management system. Essentially,
they use it as a configuration management system, which is pretty
awesome.
That, however, is a bit too high of a bar for me. I rarely try new
operating systems these days: I'm a Debian developer and it takes most
of my time to keep that functional. I'm not going to go around messing
with other systems as I know that would inevitably get me dragged down
into contributing into yet another free software project. I'm mature
now and know where to draw the line. Right?
So I'm just testing Nix, the package manager, on Debian, because I
learned from my friend that nixpkgs is the largest package
repository out there, a mind-boggling 100,000 packages at the time
of writing (with 88% of them up to date), compared to around
40,000 in Debian (or 72,000 if you count binary packages, with 72% up
to date). I naively thought Debian was the largest, perhaps competing
with Arch, and I was wrong: Arch is larger than Debian too.
What brought me there is I wanted to run Harper, a fast
spell-checker written in Rust. The logic behind using Nix instead of
just downloading the source and running it myself is that I delegate
the work of supply-chain integrity checking to a distributor, a bit
like you trust Debian developers like myself to package things in a
sane way. I know this widens the attack surface to a third party of
course, but the rationale is that I shift cryptographic verification
to a stack other than just "TLS + GitHub" (although that is still
somewhat involved), one that is linked with my current chain (Debian
packages).
I have since then stopped using Harper for various reasons and
also wrapped up my Nix experiment, but felt it worthwhile to jot down
some observations on the project.
Hot take
Overall, Nix is hard to get into, with a complicated learning curve. I
have found the documentation to be a bit confusing, since there are
many ways to do certain things. I particularly tripped on "flakes"
and, frankly, incomprehensible error reporting.
It didn't help that I tried to run nixpkgs on Debian, which is
technically possible, but you can tell that I'm not supposed to be
doing this. My friend who reviewed this article expressed surprise at
how easy this was, but then he only saw the finished result, not me
tearing my hair out to make this actually work.
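For context, the imperative route boils down to something like this
(a rough sketch, not a transcript of what I actually ran; the channel
URL and the harper attribute path are assumptions):
nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs
nix-channel --update
nix-env -iA nixpkgs.harper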
At this point, harper was installed in a ... profile? Not sure.
I had to add ~/.nix-profile/bin (a symlink to
/nix/store/sympqw0zyybxqzz6fzhv03lyivqqrq92-harper-0.10.0/bin) to my
$PATH environment variable for this to actually work.
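Concretely, that is just a one-liner in the shell profile, something
like:
export PATH="$HOME/.nix-profile/bin:$PATH"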
Side notes on documentation
Those last two commands (nix-channel and nix-env) were hard to
figure out, which is kind of amazing because you'd think a tutorial
on Nix would feature something like this prominently. But three
different tutorials failed to bring me up to that basic setup, and
even the README.Debian didn't spell that out clearly.
The tutorials all show me how to develop packages for Nix, not
plainly how to install Nix software. This is presumably because "I'm
doing it wrong": you shouldn't just "install a package", you should
set up an environment declaratively and tell it what you want to do.
But here's the thing: I didn't want to "do the right thing". I just
wanted to install Harper, and documentation failed to bring me to that
basic "hello world" stage. Here's what one of the tutorials suggests
as a first step, for example:
curl -L https://nixos.org/nix/install | sh
nix-shell --packages cowsay lolcat
nix-collect-garbage
... which, when you follow through, leaves you with almost precisely
nothing left installed (apart from Nix itself, set up with a nasty
"curl pipe bash"). So while that works for testing Nix, you're not
much better off than when you started.
Rolling back everything
Now that I have stopped using Harper, I don't need Nix anymore, which
I'm sure my Nix friends will be sad to read about. Don't worry, I have
notes now, and can try again!
But still, I wanted to clear things out, so I did this, as root:
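(Roughly the following; an approximation of the usual by-hand
cleanup rather than an exact transcript.)
# remove the store itself
rm -rf /nix
# and, for the affected user, the profile symlinks and channel state
rm -rf ~/.nix-profile ~/.nix-defexpr ~/.nix-channels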
I think this cleared things out, but I'm not actually sure.
Side note on Nix drama
This blurb wouldn't be complete without a mention that the Nix
community has been somewhat tainted by the behavior of its founder. I
won't bother you too much with this; LWN covered it well in 2024,
and made a followup article about spinoffs and forks that's worth
reading as well.
I did want to say that everyone I have been in contact with in the Nix
community was absolutely fantastic. So I am really sad that the
behavior of a single individual can pollute a community in such a way.
As a leader, if you have but one responsibility, it's to behave
properly towards the people around you. It's actually really, really
hard to do that, because yes, it means you need to act differently
than others and no, you just don't get to be upset at others like you
normally would with friends, because you're in a position of
authority.
It's a lesson I'm still learning myself, to be fair. But at least I
don't work with arms manufacturers, and if I did, I would be sure as
hell to take the nick (or nix?) on the chin when people got upset,
and try to make amends.
So long live the Nix people! I hope the community recovers from that
dark moment; so far it seems like it will.
RcppDate wraps the featureful date library written by Howard Hinnant
for use with R. This header-only modern C++ library has been in
pretty wide-spread use for a while now, and adds to
C++11/C++14/C++17 what is (with minor modifications) the ‘date’
library in C++20. The RcppDate package adds no extra R or C++ code
and can therefore be a zero-cost dependency for any other project;
yet a number of other projects decided to re-vendor it resulting in
less-efficient duplication. Oh well. C’est la vie.
This release syncs with the (already mostly included) upstream
release 3.0.3, and also addresses a fresh (and mildly esoteric) nag
from clang++-20. One upstream PR already addressed this in the files
tickled by some CRAN packages; I followed this up with another
upstream PR addressing this in a few more occurrences.
Changes in version 0.0.5
(2025-03-06)
Updated to upstream version 3.0.3
Updated 'whitespace in literal' issue upsetting clang++-20; this
is also fixed upstream via two PRs
I previously blogged about getting an 8K TV [1]. Now I’m working on getting 8K video out of a computer that talks to it. I borrowed an NVidia RTX A2000 card which, according to its specs, can do 8K [2], and used a mini-DisplayPort to HDMI cable rated at 8K, but on both Windows and Linux the two highest resolutions on offer are 3840*2160 (regular 4K) and 4096*2160, which is strange and not useful.
The various documents on the A2000 differ on whether it has DisplayPort version 1.4 or 1.4a. According to the DisplayPort Wikipedia page [3] both versions 1.4 and 1.4a have a maximum of HBR3 speed and the difference is what version of DSC (Display Stream Compression [4]) is in use. DSC apparently causes no noticeable loss of quality for movies or games but apparently can be bad for text. According to the DisplayPort Wikipedia page version 1.4 can do 8K uncompressed at 30Hz or 24Hz with high dynamic range. So this should be able to work.
My theories as to why it doesn’t work are:
NVidia specs lie
My 8K cable isn’t really an 8K cable
Something weird happens converting DisplayPort to HDMI
The video card can only handle refresh rates for 8K that don’t match supported input for the TV
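One way to narrow these down on the Linux side is to ask the driver which modes it is actually willing to offer, for example with xrandr. This is only a diagnostic sketch; the output name (DP-1) and the refresh rate are placeholders that will differ on the actual system.
# list connected outputs and every mode the driver currently exposes
xrandr --query
# try selecting an 8K mode explicitly; if the driver does not expose one, this simply fails
xrandr --output DP-1 --mode 7680x4320 --rate 30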
To get some more input on this issue I posted on Lemmy; here is the Lemmy post [5]. I signed up to lemmy.ml because it was the first one I found that seemed reasonable and was giving away free accounts. I haven’t tried any others so can’t compare, but it seems to work well enough and it’s free. It’s described as “A community of privacy and FOSS enthusiasts, run by Lemmy’s developers”, which is positive. I recommend that everyone who’s into FOSS create an account there or on some other Lemmy server.
My Lemmy post was about what video cards to buy. I was looking at the Gigabyte RX 6400 Eagle 4G as a cheap card from a local store that does 8K, it also does DisplayPort 1.4 so might have the same issues, also apparently FOSS drivers don’t support 8K on HDMI because the people who manage HDMI specs are jerks. It’s a $200 card at MSY and a bit less on ebay so it’s an amount I can afford to risk on a product that might not do what I want, but it seems to have a high probability of getting the same result. The NVidia cards have the option of proprietary drivers which allow using HDMI and there are cards with DisplayPort 1.4 (which can do 8K@30Hz) and HDMI 2.1 (which can do 8K@50Hz). So HDMI is a better option for some cards just based on card output and has the additional benefit of not needing DisplayPort to HDMI conversion.
The best option apparently is the Intel cards which do DisplayPort internally and convert to HDMI in hardware, which avoids the issue of FOSS drivers for HDMI at 8K. The Intel Arc B580 has nice specs [6]: HDMI 2.1a and DisplayPort 2.1 output, 12G of RAM, and it is faster than low end cards like the RX 6400. But the local computer store price is $470 and the ebay price is a bit over $400. If it turns out to not do what I need it will still be a long way from the worst way I’ve wasted money on computer gear. But I’m still hesitating about this.
The Marshall Islands coping with the effects of climate change and rising sea levels. Credit: Asian Development Bank on Flickr
Introduction
Recent years have provided definitive evidence that we are living in an age of exceptional complexity and turbulence. The war of aggression raging in Ukraine has already taken as many as 500,000 lives, and prospects for a near-term resolution to the fighting are dim. The Middle East is once again convulsed by war. Another 182 significant violent conflicts are destroying lives and livelihoods across the globe — the highest number in more than three decades.1 Escalating great power competition threatens to trigger violent great power confrontation. Environmentally, heat waves, wildfires, and floods have also taken thousands of lives while causing enormous economic losses, disrupting food supplies across the world,2 and “turbocharging what is already the worst period of forced displacement and migration in history.”3 02023, the hottest year on record, was surpassed by 02024, the first year in which temperatures exceeded the Paris Agreement target of 1.5 degrees Celsius above preindustrial levels. The rapid loss of glacial and sea ice augurs a tipping point in sea-level rise. Political polarization is crippling many of the world’s advanced democracies, and authoritarianism is on the rise. In a time of growing demand for public sector services and investments, debt levels in both developing and developed economies have reached record highs. Environmental, economic, and political forecasts suggest that these challenges, as well as human and ecological suffering, will only become more difficult to surmount in the years ahead.
United Nations Peace Bell Ceremony in observance of International Peace day. Photo by Rick Bajornas / UN Photo
History is often told as a story of turbulence, and there have been periods, even in recent memory, of wider and more brutal warfare, genocide, violent revolution, and political repression. But what distinguishes this period in human history is the confluence of forces — political, geo-strategic, economic, social, technological, and environmental, as well as interactions among them — that fuel the turbulence that we see today. Many of the causes and consequences of present-day turmoil are transnational or even global in nature. These conflicts have no regard for borders and are not responsive to solutions devised and implemented by individual nation-states or the existing ecosystem of multilateral institutions. Furthermore, humankind is facing the possibility of three interrelated risks that may prove to be existential threats: (1) the accelerating climate crisis; (2) a new nuclear arms race between China, the United States, and Russia (along with the associated proliferation risks4); and (3) the advent of potentially hyper-disruptive technologies such as generative artificial intelligence (and the prospect of general artificial intelligence5), neuro-technology, and biomedical or biomanufacturing technologies “whose abuse and misuse could lead to catastrophe.”6
The institutions that have guided international relations and global problem solving since the mid-20th century are clearly no longer capable of addressing the challenges of the new millennium. They are inefficient, ineffective, anachronistic, and, in some cases, simply obsolete. As Roger Cohen of The New York Times noted, “With inequality worsening, food security worsening, energy security worsening, and climate change accelerating, more countries are asking what answers the post-01945 Western-dominated order can provide.”7
Over millions of years, humankind has proven remarkably resilient, capable of innovating its way through periods of grave existential threat while simultaneously developing cultural, societal, and technological means of improving the human condition. Human advancements have given rise to nearly 31,000 languages, significantly prolonged life expectancy, lifted hundreds of millions out of abject poverty, and extended human rights to populations across the world. Human ingenuity landed a man on the moon and invented the internet. Through vision, creativity, and diligence, humankind can — and must — develop an international framework that can guide us toward a more peaceful, more humane, and more equitable global society, as well as a thriving planetary ecosystem, all by the end of this century.
Readers of this paper may find some of the ideas presented to be idealistic or even utopian. But this essay is intended to address the question of what might be, not merely what can be. As proven throughout history, human consciousness endows us all with the ability to make changes that contribute to longer and better lives. The challenge of designing a better international system is a difficult one, but choosing to ignore the necessity of reform is a far greater failure than striving and falling short.
History is replete with examples of hinge moments when change once thought improbable or even impossible occurs. Recent examples include Lyndon Johnson’s invocation of “We Shall Overcome” in his speech to Congress urging passage of the Civil Rights Bill, the transformation of South Korea into a vibrant democracy and competitive market economy, the fall of the Berlin Wall and collapse of Soviet communism, and Nelson Mandela’s “long walk to freedom.”
Even in very dark moments, visionary leaders can pierce the darkness and imagine a brighter future. Franklin Roosevelt and Winston Churchill drafted the Atlantic Charter in August 01941 when most of the European continent had been conquered by Hitler, the United States was not yet at war, and the United Kingdom was fighting for its survival. The charter boldly articulated a vision for a postwar world in which all people could live in freedom from fear and want, and the nations of the world would eschew the use of force and work collectively to advance peace and economic prosperity. This vision, written on a destroyer off the coast of Newfoundland, served as a foundational step toward the creation of the United Nations in 01945.
Moments of profound challenge offer opportunity to convert today’s idealism into tomorrow’s realism. Writing in 01972, German historian and philosopher Hannah Arendt reminded us that we are not consigned to live with things as they are: “We are free to change the world and start something new in it.”8 This paper is offered in that spirit.
Part I examines the origins and evolution of the logic that underlies today’s system of international relations and offers a revised logic for the future. Part II applies this new logic to the global political landscape and proposes alterations to the institutions and mechanisms of the current international system to better meet the global challenges of this century.
Iran nuclear deal negotiators in Vienna, 02015. Photo by Dragan Tatic
Part I. The Logic of International Relations
In 01980, the management theorist and consultant Peter Drucker authored a book called Managing in Turbulent Times. Drucker’s central thesis was that the greatest danger in times of turbulence is not turbulence itself; rather, it is “acting with yesterday’s logic.” This fairly describes our current predicament. Though we are faced with multiple, diverse, complex, and possibly even existential challenges, we stubbornly continue to respond with yesterday’s logic and the institutional framework it inspired. The logic of the present remains rooted in the logic of the past, with many of its core elements originating from the first known international treaties in Mesopotamia or those between warring Greek city-states. Many others were first articulated and codified in 17–19th century Europe; for example, the 01648 Peace of Westphalia is widely regarded as the international legal framework that birthed the enduring concept of nation-state sovereignty which, three centuries later, was enshrined in the United Nations Charter. Over time, our legacy systems have grown from these and other roots to become the international institutions of the present day.
As we work to devise a global framework fit to purpose for the extraordinary challenges of this century, it is essential to examine the most important elements of the logic of the past to determine which of these should be retained, which should be revised, which should be retired, and what new concepts will be required. Any future system will almost certainly be an amalgam of the ancient, modern, and new — the combination of these elements will be the foundation of its effectiveness and resilience: resonating with human experience while also inspiring the future. A deeper understanding of the roots and evolution of the existing international system will allow us to develop ideas for a new global framework that will enable us to manage this age of turbulence.
Logic Inventory
When seeking to understand a complex system, it can be useful to take an inventory of its most important elements. An examination of the roots and evolution of the existing “rules-based international order” reveals 12 concepts that together can be understood as the core elements of the “logic of the past.” These concepts continue to guide contemporary international relations and global problem solving. The following logic inventory itemizes these concepts and suggests revisions for a logic of the future that can help us better manage the challenges of this century.
It is no surprise that humans have assumed a position of dominance in the hierarchy of life. We have yet to encounter another species with a comparable combination of physical and intellectual capacities. We have employed the advantages of humankind to birth spectacular discoveries and inventions, leading to the organization of society and the building of the modern world, all the while assuming that the rest of nature is ours to harness with the goal of sustaining and improving the human condition. However, human activity, most notably the burning of fossil fuels, threatens the very viability of life on our planet. We are approaching multiple climate-related tipping points, and Earth’s biosystem is experiencing a profound crisis encapsulated by a magnitude of biodiversity loss often referred to as the Sixth Mass Extinction. Global biodiversity is being lost more rapidly than at any other time in recorded human history.9
The logic of the future must see human beings as a part of nature rather than apart from it. We must see our existence within the extraordinary web of the entire community of life10 on our planet, which includes some eight million other species. Our lives and livelihoods are dependent on this vibrant biodiversity, and we endanger the survival of our species when we despoil or deplete it. Biodiversity conservation is both a moral imperative as well as a material requirement to ensure a sustainable planetary ecosystem and a thriving human society.
The concept of sovereignty has been central to international relations ever since the Peace of Westphalia sought to resolve the territorial and religious disputes of the Thirty Years’ War (the most savage war in European history at the time). Paired with the principle of non-interference in states’ internal affairs, the concept of sovereignty was refined and reinforced by the great 19th-century diplomats who, in the Congress of Vienna and the Concert of Europe (01814–01815), brought an end to the Napoleonic Wars and laid the foundation for a remarkably durable peace that heralded rapid technological and economic progress. As Henry Kissinger noted, “The period after 01815 was the first attempt in peacetime to organize the international order through a system of conferences, and the first explicit effort by the great powers to assert a right of control.”11 Thus was also born the modern practice of diplomacy and the organization of multilateral structures of sovereign states. Sovereignty, coupled with the right of self-determination, was central to the Treaty of Versailles at the end of World War I as well as Woodrow Wilson’s League of Nations, and was codified in the United Nations Charter of 01945.
The principles of sovereignty have also been invoked to define the relationship between the state and private entities — in particular, corporations and businesses. The notion of corporate sovereignty is used to argue for limited government intervention in the market. Consequently, the concept of sovereignty is core to the logic of both international relations and political economy.
Critics of the primacy of national sovereignty, such as German feminist foreign policy advocate Kristina Lunz, argue that the concept of national sovereignty rests on the “notion of a homogeneous ethnic community (the ‘people’ or ‘nation’), which coincides with the territorial-legal government (the ‘state’). This leads to claims of absoluteness towards other states and intolerance of minorities.”12
In the latter decades of the 20th century, important innovations in what can be termed “pooled sovereignty,” or “collaborative sovereignty” were devised to overcome some of the inherent limitations of individual states, especially with regard to their ability to influence economic, geopolitical, and environmental affairs. These include the European Union, comprising 27 member states who collectively manage a vast agenda of economic, social, and foreign policy matters; NATO, a collective security organization currently composed of 31 countries; and other regional organizations like the African Union (AU), the Association of Southeast Asian Nations (ASEAN), the Organization of American States (OAS), and the Pacific Island Forum (PIF).
All of these are important venues for collaboration and collective decision making by nation-states, but the EU stands out as the most fully developed, most democratic, and most effective framework for the collective governance of key transnational domains. The EU was invented as a peace project following two devastating European wars, and it has successfully kept the peace among its members for 70 years. The goal of creating a wider European zone of peace, stability, and prosperity, as well as the appeal of EU membership, resulted in multiple waves of EU expansion, most notably the accession of 02004 when 10 countries, including seven former members of the Warsaw Pact, joined the EU. Today, the EU is a dynamic and productive single market and the second-largest economy (in nominal terms) after the United States. It is the world’s largest trader of manufactured goods and services and ranks first in both inbound and outbound foreign direct investment. In today’s multipolar world, the EU is a powerful node, often aligned with the United States but not unwilling to steer its own course, with China, for example. European politics are complex, but the structures and processes of the EU have proven to be remarkably effective at managing contentious issues and taking on difficult regulatory challenges, such as data protection and privacy and the establishment of an initial regime for the regulation of artificial intelligence (AI). The EU is perhaps the greatest single political achievement of the second half of the 20th century — as one French cabinet minister remarked, “We must recall that the EU is a daily miracle.”13
With the adoption of “The Responsibility to Protect” (R2P) doctrine at the 02005 World Summit, global leaders advanced the new norm of tasking sovereign states with the responsibility of protecting their populations from “genocide, war crimes, ethnic cleansing and crimes against humanity.”14 When national governments are incapable or unwilling to do so, R2P authorizes collective action by the Security Council to protect populations under threat. This can include the use of force in cooperation — as appropriate — with relevant regional organizations. The adoption of R2P was a significant shift in conceptual thinking about sovereignty and non-interference. However, its application proved controversial in the case of Libya when the Security Council authorized action against dictator Muammar al-Qaddafi’s forces to prevent attacks on Libya’s civilian population in 02011.
National sovereignty, with further revisions, remains an important concept for the logic of the future. States will continue to be an essential nexus of governance and accountability to their citizens. Many states in Africa, Latin America, and Asia only recently achieved their sovereign independence from colonial rule — having fought for it for decades, they are not eager to give it up. Nevertheless, it is increasingly clear that individual states, as well as the multilateral institutions and processes in which they participate, are incapable of effectively addressing the urgent transnational and planetary challenges of our age.
Pooled or collaborative sovereignty shows significant promise, but effective management of the age of turbulence will require institutions of shared sovereignty to adopt expanded democratic norms and processes (e.g., legitimacy, transparency, inclusive participation, and efficient decision making through qualified majorities) that achieve sufficient consensus among the participating states. However, it should be noted that collective approaches will expand only to the extent that the benefits of sharing sovereignty can be shown to clearly outweigh the reduction in national prerogatives and powers. In addition to sharing sovereignty, states will need to devolve power and authority to sub-national levels of governance (cities, regions, and communities) to address the consequences of global turbulence (whether from climate change, conflict, or migration) on local populations. Furthermore, the equitable distribution of critical resources — financial and otherwise — must accompany the delegation of authority to sub-national governments.
Given the persistence of human rights violations and the loss of innocent lives caught in conflict zones, it may also be time to consider advancing the concept of human sovereignty to more fully achieve the aspirations of the 01948 Universal Declaration of Human Rights, such that “the inherent dignity and equal and inalienable rights of all members of the human family…[are understood to be]…the foundation of freedom, justice, and peace in the world.”15
Unsurprisingly, the longstanding reliance on national sovereignty has also reinforced the importance of national interest in the conduct of international relations. For many international relations theorists and practitioners, the logic of national interest is unassailable — legitimate governments are expected to respond to the needs of their citizens. Yet there are three fundamental challenges to a singular focus on national interest: The first challenge, of course, is when the self-defined national interests of one state or collection of states conflict with the interests of one or more other states. Interstate conflicts catalyzed the development of the precepts and practice of international law in the service of peaceful dispute resolution. However, as we have seen time and again, states (and non-state actors) all too frequently bypass dispute resolution mechanisms and resort to the use of force. The second challenge involves national leaders pursuing their interpretation of national interest without the democratic engagement of the public. Autocrats and dictators launch wars with little to no public debate or democratic oversight. A third — and growing — challenge to the primacy of national interest is the problem of the global commons: the global resources that sustain human civilization, such as the air we breathe, the water we drink, the sources of energy that power the global economy, and the international sea lanes that ensure the free transit of goods. A focus on national interests can impair equitable access to global public goods.
Like the related concept of sovereignty, national interest will continue to be an element of the logic of the future. In this century, however, the primacy of national interest must be diluted and greater attention focused on the global commons. The concept of “common but differentiated responsibilities,”16 formalized in the United Nations Framework Convention on Climate Change (UNFCCC) in 01992, provides an important model that can be applied in the broader context of international political, security, and economic relations. Harvard political scientist Joseph Nye reminds us of a theory promulgated by Charles Kindleberger, an architect of the 01948 Marshall Plan. Kindleberger argued that the international chaos of the 01930s resulted from the failure of the United States to provide global public goods after it replaced Great Britain as the largest global power.17 In the more diffused power realities of the 21st century, attending to the global commons must be a collective responsibility and priority. The realities of global interdependence and the singularity of Earth’s biosystem demand that states see their self-interest as inextricably linked to global interests.
Ever since the Concert of Europe, international relations have been dominated by various configurations of great powers. The United Kingdom, France, Austria-Hungary, Germany, and Russia were the dominant powers from 01814 to 01914. America’s entry into WWI and Woodrow Wilson’s quest to “make the world safe for democracy” heralded the United States’ entry into the ranks of the great powers, while Japan and China gained greater recognition and influence in the inter-war period. In the aftermath of the Second World War, the United Nations Charter assigned global leadership responsibility to the five permanent members (P5) of the U.N. Security Council: the United States, the United Kingdom, France, the Soviet Union/Russia, and China. Both the League of Nations and the United Nations attempted to offset the concentration of power through the League Council and the United Nations General Assembly, respectively — bodies in which all member states were given equal voice and vote. Nevertheless, critical decisions of international relations, most importantly the authority for the use of force, continue to be the province of major powers.
Today, the concentration of power in the hands of a few states is being seriously challenged by much of the global community. “The uninhibited middle powers”18 like India, Turkey, Saudi Arabia, Brazil, South Africa, and Indonesia are less willing to follow the lead of the dominant powers and seek a greater voice in and increasing influence over global affairs. The age of turbulence and the challenges of the 21st century demand a new, more equitable distribution of power. Six and a half billion people,19 the “global majority,”20 must be more equitably incorporated into the management of global affairs in terms of both participation and outcomes. This will require revisions to the governance of key international institutions, starting with the U.N. Security Council as well as the international financial institutions (e.g., the World Bank, International Monetary Fund, and regional development banks).21 The goal must be to create an inclusive community of stakeholders who actively participate in and uphold the institutions and processes of global governance. In addition, power must be redistributed in both directions; it must be delegated to levels of governance closer to the people who are most directly impacted by particular conditions or issues (like climate change), while (new) international or “planetary” bodies must be given the responsibility of managing planetary challenges.22
The tenets of internationalism arose from the inter-state system of the 17th century. As historian Stephen Wertheim writes, this includes the belief “that the circulation of goods, ideas, and people would give expression to the harmony latent among civilized nations, preventing intense disputes from arising.”23 Drawing on the philosophical legacy of Hugo Grotius and others, this view has been codified in international law and is embedded in the institutions established to adjudicate and resolve political and economic disputes through arbitration, legal rulings, and other peaceful means.
Internationalism has been central to efforts designed to prevent outbreaks of armed conflict and the management of warfare when conflict prevention fails. The Concert of Europe (01814), the League of Nations (01920), the Kellogg–Briand Pact (01928), and numerous other international treaties and conventions were designed with the sole aim of maintaining the peace. The mission of the U.N. Security Council (01945) is to maintain international peace and security through the identification of “the existence of any threat to peace, breach of the peace, or act of aggression” and by making recommendations or determining “what measures shall be taken…to maintain or restore international peace and security.”24 The Geneva Conventions (01949)25 established the main elements of international humanitarian law to “limit the barbarity of war.”26 Despite the web of treaties, laws, and institutions, armed conflict and its barbarity persist, in part because state and non-state actors interpret international law in support of their own objectives or simply ignore it altogether. Structural constraints, like the veto power of the P5, also inhibit the efficacy of international law.
Cargo ship approaches an international port in Turkey. Photo by bfk92 / iStock
The precepts of internationalism have also been central to global economics and trade. Montesquieu’s notion that “peace is the natural effect of trade”27 has been at the heart of international economics for nearly 300 years. It is embedded in the mission of the World Trade Organization (WTO) and the robust web of bilateral and multilateral trade agreements that have accelerated globalization. However, with the potential return of great power confrontation, faith in Montesquieu’s optimistic view of the relationship between trade and peace has faded. As historian Adam Tooze writes, “Economic growth thus breeds not peace but the means to rivalry. Meanwhile, economic weakness generates vulnerability.”28
Twenty-first century internationalism will require new, innovative forms of dispute resolution and the consistent application of international law to all international actors. Reform of the U.N. Security Council is essential — even though it is unlikely given the provisions of the U.N. Charter. The victims of conflict must be given a greater voice in the quest for peace. Existing accountability mechanisms such as the International Court of Justice must be strengthened, and new enforcement powers should be considered.
United Nations Disengagement Observer Force in the Middle East (UNDOF). Photo by Yutaka Nagata / UN Photo
The free movement of goods, services, people, and information should be expanded. However, we have learned that we cannot rely on economic relations alone to produce and sustain peace. As free trade arrangements are negotiated, greater emphasis must be focused on the concept of “equitable trade” that offers the benefits of economic intercourse while also protecting workers from abusive employment practices and safeguarding our fragile planetary ecosystem. Rules must be applied consistently to all parties. A new approach to global trade should help manage both the positive and negative effects of globalization in order to help bring greater economic benefits to developing economies while also ensuring equitable and efficient supply chains.
In the 80 years since the ratification of the United Nations Charter, the institutional framework of the international system has grown in scale and complexity. The five main bodies of the United Nations work closely with 15 “specialized agencies,”29 drawing more than 125,000 employees from 193 member states. Complementary institutions have been established outside the U.N. system to focus on specific issues, such as the International Water Management Institute, or in specific regions, such as the Arctic Council or the Organization of American States. This expansive but patchwork collection of international and multilateral institutions and organizations brings enormous benefits to global society — and yet, as is often true of bureaucratic systems, many of the institutions have grown unwieldy, inefficient, costly to maintain, and encumbered by political constraints.
In the logic of the future, an ecosystem that complements institutions with networks, “mini-lateral” arrangements in which nations form coalitions to address common concerns or undertake time-limited missions, and perhaps most importantly, polylateral arrangements in which states, sub-national levels of government, private sector actors, and civil society join forces will prove to be more agile and effective at global problem solving. Indeed, the success of the 02015 Paris Climate Conference (COP 21) can be attributed to such a polylateral process, producing important commitments from all three major sectors: governments, businesses, and NGOs. Of particular note was the influence asserted by the “High Ambition Coalition,” a polylateral coalition organized by the Republic of the Marshall Islands (population approx. 43,000), one of the small states facing an existential threat from rising sea levels.
In the future, agile and resilient decision making will be necessary for institutions to adapt to the variability and complexity in relations among nations. Many of the large organizations require governance and management reforms, and although the international system currently includes some number of non-institutional forms, they remain modest in scope compared to large bureaucratic structures.
So-called international relations “realists” have argued that peace can be achieved and sustained only if it is fortified by the threat of military intervention. Henry Kissinger, a leading proponent of this view, outlined it as follows: “How is one to carry out diplomacy without the threat of force? Without this threat, there is no basis for negotiations.”30 It was this logic that led to the massive build-up of military forces and nuclear arsenals during the Cold War, at great economic and social cost, under the doctrine of “mutually assured destruction.” It also led to the growth of a “military–industrial complex”: The intertwining of industry, economic policy, and military expenditure in the United States (and elsewhere) that President Dwight Eisenhower warned against in 01961, with global military expenditures reaching $2.2 trillion in 02022.31 Nevertheless, 20 nations have shown that another path is possible: Costa Rica, Iceland, and the Solomon Islands, among others, do not have any standing armed forces or arms industry. Despite this, worldwide military expenditures continue to grow while investments in education (particularly for girls and women), skills training, infrastructure, clean energy, climate resilience, poverty alleviation, and a number of other social needs remain inadequate.32
The logic of the future requires a shift from defining peace as the absence of war to embracing the concept of “positive peace”: the elimination of violence resulting from systemic conditions like hunger, poverty, inequality, racism, patriarchy, and other forms of social injustice. Research has shown that higher levels of positive peace are achieved when states have a well-functioning government, manage an equitable distribution of resources, create a strong business environment, develop high levels of human capital, facilitate the free flow of information, and have low levels of corruption.33
History has shown us that there will always be bad actors, and military force will be required to confront armed aggression, genocide, and other mass violations of human rights. There is no response to Russia’s brutal war of aggression against Ukraine except for a short-term boost in military capacity. However, the logic of the future demands that we vastly strengthen diplomatic capacity, support equitable development, and invest in critical human needs as well as planetary sustainability. We must seek a future in which defense investments do not deter increased domestic social spending or international development aid that can build greater global social cohesion. As the United Nations High-Level Advisory Board on Effective Multilateralism (HLAB) so eloquently stated, “We must shift from focusing on mutually assured destruction to mutually assured survival.”34 As we seek to overcome drivers of conflict, we may devise new forms of alliance based on shared values instead of exclusively focusing on military defense. For example, alliances that support health equity, economic development, and girls’ education might help deter the eruption of violent conflict.
Zero-sum logic has pervaded international relations in many periods of human history, most notably during the Cold War. The world was divided into two competing blocs led by the Soviet Union and the United States. Gains made by one bloc were seen as losses for the other, and countries in the developing world were pressured to take sides.
The Nonaligned Movement (NAM) emerged following the first-ever Asia–Africa conference, which took place in Bandung, Indonesia in 01955. Twenty-nine countries (home to 54 percent of the world’s population) participated in this conference in an effort to counterbalance and challenge the deepening East–West polarization in international affairs. The founders of the NAM — Yugoslavia’s Josip Broz Tito, India’s Jawaharlal Nehru, Egypt’s Gamal Abdel Nasser, Ghana’s Kwame Nkrumah, and Indonesia’s Sukarno — offered the developing world an alternative to the “us-versus-them” logic of the Cold War. Nevertheless, both the United States and the USSR attempted to pull the countries of the NAM into their orbits through enticement, coercion, or a combination of both.
Today, the war in Ukraine has revived the notion of nonalignment; some global majority countries (i.e., non-OECD countries with 80 percent of the world’s population) have refrained from joining the coalition supporting Ukraine. In January 02024, China successfully led an effort to expand membership of the BRICS — a loose organization of major developing countries that seek to expand their economic cooperation and political standing, in part as an effort to counterbalance perceived U.S.-led Western dominance.35
The logic of the future will seek to accommodate variable alignments and maximize positive-sum solutions to global problems. Questions of alignment will be viewed as dynamic rather than static. Countries that join together for one purpose may not collaborate on others, choosing from a menu of “limited-liability partnerships.”36 Writing in 02020, then-Afghan President Ashraf Ghani described a future of “multi-alignment.” Writing in The Financial Times three years later, Alec Russell termed this “the à la carte world.” Managing this dynamic environment will require an agile mindset and a greater tolerance for ambiguity from major powers like the United States.
India is an important case study. Speaking at the United Nations in 01948, Prime Minister Jawaharlal Nehru told the assembled world leaders: “The world is something bigger than Europe, and you will not solve your problems by thinking that the problems of the world are mainly European problems. There are vast tracts of the world which may not in the past, for a few generations, have taken much part in world affairs. But they are awake; their people are moving, and they have no intention whatever of being ignored or of being passed by.” Nehru later helped form the NAM amidst the polarization of the Cold War. Today, under the leadership of Prime Minister Narendra Modi and Minister for External Affairs Subrahmanyam Jaishankar, India has embraced dynamic alignment — working ambitiously to maintain close ties with Europe and the United States, while also continuing a fundamentally transactional relationship with Russia and avoiding conflict with China. This is a difficult balancing act with profound but potentially constructive implications for geopolitics in an age of turbulence. As Jaishankar explained to the Munich Security Conference in February 02024, “pulls and pressures make a unidimensional approach impossible.”
In 01963, just months before his assassination, U.S. President John F. Kennedy gave a speech on world peace in which he urged Americans and Soviets to work together to “make the world safe for diversity”37 by accepting fundamental differences in ideologies and political systems, speaking out on points of principle and in defense of values, slowing the nuclear arms race, and engaging with each other through diplomacy to prevent war. Now, 60 years later, countries should accept the pluralism within the community of nations and forswear active efforts toward regime change as long as borders are respected and governments do not engage in gross violations of the human rights of their own citizens, as expressed in the R2P doctrine laid out in the 02005 World Summit Outcome Document. In a time of growing great power competition and an increased risk of conflict, Europe and the United States should work toward a détente with China, and even with a post-war Russia, if it renounces the use of force for territorial gain.
Closely related to the primacy of national interest, “strategic narcissism,” as described in 01978 by international relations theorist Hans Morgenthau, is the inability to see the world beyond the narrow viewpoint of one’s own national experience, perceptions, and self-interest.
“Strategic empathy,” a concept advanced by former U.S. National Security Advisor and retired General H. R. McMaster, proposes a fundamental shift in the attitude and practice of diplomacy. It encourages deep listening in relations with others, seeking greater understanding of their views and needs, and investing less effort in persuasion. Consistent with strategic empathy, the logic of the future calls on great powers such as the United States to eschew hubris and conduct international relations with greater honesty and humility.
Tragically, the modern international system evolved in large part through imperialism, colonial rule, and systems of racism and patriarchy that led to the brutal exploitation of non-White and female populations across the globe. Britain abolished the slave trade throughout its empire in 01807, yet slavery survived in the United States until the end of the U.S. Civil War and the ratification of the Thirteenth Amendment to the Constitution (01865). Patriarchy is deeply rooted in the history of human civilization — supported, in part, by the world’s major religious traditions.
Although World War I brought an end to the Austro-Hungarian and Ottoman empires as well as the Romanov dynasty, colonial rule continued in numerous Latin American, Caribbean, African, and Asian territories throughout the 20th century.38 Today, structural racism persists in many forms. Furthermore, the rights of women remain contested worldwide, and their general economic status and wellbeing continue to trail behind that of men — even in the most advanced economies. It is clear that the legacies of colonialism, racism, and patriarchy continue to shape the international system.
The logic of the future must be based on universal human dignity, equality, pluralism, cosmopolitanism, tolerance, and justice. The legacies of discrimination and exploitation continue to breed conflict, and genuine peace will not be achieved or sustained for as long as these legacies remain. The aspirations expressed in the Universal Declaration of Human Rights must be fully realized, and discrimination based on race, gender, sexual identity, religion, and physical ability must be eradicated. Advancing the concept of human sovereignty, which advocates for the recognition of the inherent worth of every human being, may help establish such new norms and eliminate colonial attitudes.
Starting in the 01970s, the neo-liberal school of economics gained widespread popularity among scholars, business leaders, politicians, and policymakers. The core tenets of neo-liberalism include minimal government intervention in the market, a singular focus on GDP growth as the de facto measure of progress, unfettered trade, and the exploitation of labor and natural resources. Neo-liberal economic policies in the United States, the United Kingdom, and elsewhere have also guided the management of the international economic system (i.e., the Bretton Woods institutions) over the last half-century.39 Although it can be argued that these policies have generated significant wealth, lifted hundreds of millions out of poverty, and spurred important innovations, it is clear that this approach has also contributed to widening economic inequality in many countries — and perhaps most importantly, its reliance on fossil fuels threatens the very viability of the planetary ecosystem. More bluntly, in the pursuit of neo-liberal economic policies, greed is rewarded, and the accumulation of material possessions is celebrated.
The economic logic of the future should focus, first and foremost, on the wellbeing of both humans and the planet. Important theoretical and practical work is underway to advance the notion of the “wellbeing economy,”40 in which measurements of success are expanded to include social and environmental factors, the relationship between the state and the market is recalibrated, and attention is focused on an ethos of caring and sharing — caring for one another and the planet we share. Other important concepts such as the “circular economy,” “doughnut economics,”41 “productivism,”42 or “degrowth”43 are stepping stones in the path toward regenerative and genuinely sustainable development. A new mix of public and private institutions will be required to ensure accountability for the sustainable use and equitable distribution of resources consistent with a wellbeing economic paradigm.
The history of human progress is entwined with the history of technological advancement, starting with the creation of stone tools 3.4 million years ago, followed by myriad other major technological milestones such as the invention of the wheel, the steam engine, the silicon chip, and so much more. With the notable exception of nuclear technology, technological advances have been embraced and employed with little or no restraint. Recent breakthroughs in machine learning and the accelerated development of AI, the profound advances in biotechnology and biomanufacturing, and the debate over the geo-engineering of Earth’s atmosphere to slow global warming all raise profound ethical questions and may even pose existential risks.
In the logic of the future, we will need to negotiate global norms and regulatory regimes to advantageously but safely employ new technologies that have the power to greatly benefit planetary society but could also lead to great harm. AI technology will likely evolve faster than our ability to establish adequate regulatory regimes; consequently, restraint and self-regulation will also be necessary to ensure the safe deployment of this powerful technology.
Part II. Building Blocks of a New Global Framework
The logic of the future demands significant modifications and additions to the existing international system. From a review of many suggestions and recommendations that have been offered by numerous analysts, commissions, and advisory panels, 10 “building blocks” emerge. Under each of the 10 points that follow, some illustrative examples of specific steps that might be taken are highlighted, although these are neither comprehensive nor fully developed here.
1. Cocreate the International System of the Future
As the world’s most powerful country, the United States should work with the U.N. secretary-general, Europe, and other important global major powers to launch an inclusive process to design a more equitable and effective distribution of power and a new global system. Most of the peoples of the world still count on the United States for global leadership, recalling its role in the creation of the existing international order: Franklin Roosevelt’s “Four Freedoms” of January 01941; the Atlantic Charter principles that Roosevelt and Winston Churchill articulated later that same year; and the 01944 international conference held at Dumbarton Oaks, which advanced the vision of a post-war international organization to maintain global peace and security and formed the basis for the United Nations Charter adopted in San Francisco in 01945. Creation of the United Nations was an act of both imagination and political will, and U.S. presidential leadership was essential to the success of these efforts.
The collapse of the Soviet Union and the end of the Cold War in 01991 brought echoes of post-WWII 01945 and an opportunity to create a new, more inclusive international order — but this opportunity was missed through a “failure of creativity.”44 The world had changed dramatically and yet the impulse was to affirm the prevailing international relations logic and expand the existing institutional framework rather than devise new norms and structures suited to new circumstances.
In some ways, we are now experiencing another 01945-like moment. The existing international order has broken down amidst significant global turbulence and multiple existential threats. As in 01945, there is once more an evident need for the community of nations to work collectively to build the international system of the future.
There are, however, significant differences between 01945 and the present day. After the war much of the world was in ruins, economies were devastated, and the United States was the undisputed hegemon. The United Nations was founded in the aftermath to prevent the outbreak of another catastrophic world war; the challenge today is to construct a new international system that can preempt the existential threats we will face in the decades ahead. The United States retains its capacity for vitally important leadership, but it is no longer a hegemon in today’s multipolar world. The realignment of global power, heralded by the rise of the global majority, mandates that any future system must incorporate their perspectives, needs, and aspirations far more equitably than before. Consequently, the legacy major powers must invite the countries of the global majority to cocreate the international framework of the future.
2. Remake the United Nations
The United Nations remains the essential institutional framework for cooperation among sovereign states, and it contributes enormously to the global common good. But like a magnificent old house, the United Nations needs major renovations. Most of the needed renovations are well known. These include making the U.N. more democratic by expanding the number of permanent Security Council members and amending the veto privilege (perhaps requiring three members to jointly exercise vetoes) or by empowering the General Assembly to override vetoes with the support of two-thirds or three-quarters of the member states. To amplify the voices of the world’s peoples,45 there should be a U.N. Under-Secretary for Civil Society to facilitate deeper engagement by global civil society in the work of the U.N. system. To expand the United Nations’ capacity for anticipating future developments and protecting the rights of future generations, Secretary-General Antonio Guterres has announced his intention to appoint an Envoy for the Future, an important step toward incorporating long-term thinking into present decision making.
FDR and Winston Churchill at the Atlantic Conference, 01941. Photo by Priest, L C (Lt), Royal Navy official photographer (via Wikimedia Commons)
Article 99 of the U.N. Charter empowers the secretary-general to “bring to the attention of the Security Council any matter which in his opinion may threaten the maintenance of international peace and security,” yet this authority has been invoked only four times since 01946.
Secretary-General Guterres was right to invoke Article 99 in his letter to the Security Council on December 6, 02023, responding to the war in Gaza and urging the international community to “use all its influence to prevent further escalation and end this crisis.” In the future, this powerful yet rarely used tool should be employed judiciously — but without hesitation — when threats to peace and security demand international action.
It is also time to redesign other U.N. bodies and mechanisms, starting with the UNFCCC and the annual Conferences of the Parties (COPs), which have brought together the nations of the world to address the climate crisis since the first COP in Berlin in 01995. At the very least, the requirement for unanimous decision making should be replaced with qualified majority voting so that individual states or small blocs can no longer block progress.46 In addition, enforcement mechanisms should be established to hold countries accountable for meeting their emissions reduction pledges. In the absence of formal accountability mechanisms, civil society should be adequately funded to monitor progress and publicize failures to meet obligations.
It may also be time to replace the anachronistic Trusteeship Council, one of the six principal bodies of the United Nations, which was established to manage transitions to self-government or statehood for territories detached from other countries as a result of war. The last territory to achieve statehood through the Trusteeship Council process was Palau in December 01994 — three decades ago. Given the critical importance of avoiding climate catastrophe, it may be prudent for the Trusteeship Council to be replaced by a Climate Council that would incorporate, elevate, and strengthen the UNFCCC and its COPs and serve as a forum for implementation of agreed climate policies and actions. Alternatively, the Trusteeship Council could be replaced by a body representing subnational levels of government (see section 3 below).
U.N. officer plays with a child at a South Sudan protection site. Photo by JC McIlwaine / UN Photo
Some renovations of the U.N. system can be achieved through General Assembly resolutions, but many of the most important reforms (namely, the expansion of the permanent members of the Security Council or amendments to the veto provision) require Charter amendments that can be accomplished only with a two-thirds vote of the General Assembly and ratification by two-thirds of member state parliaments, after which they must avoid a veto by any of the P5. Given such structural limitations on any attempt to truly remake the United Nations, it is necessary to build an effective ecosystem of institutions, networks, and polylateral alliances that complements the United Nations and compensates for its structural limitations.
The international system of the future will continue to have the United Nations at its core, but the complexity and hazards of these turbulent times demand that we establish a more robust, flexible, and nimble ecosystem of networks, organizations, and modalities that work in concert with the United Nations, but with fewer bureaucratic constraints and procedural impediments to action. The High-Level Advisory Board (HLAB), appointed by the U.N. secretary-general, has declared that “global governance must evolve into a less hierarchical, more networked system wherein decision-making is distributed, and where the efforts of a large number of different actors are harnessed towards a collective mission.”47 A few examples of ways to supplement the United Nations and create a more dynamic and effective international ecosystem follow.
First, it is important to recognize and strengthen regional intergovernmental organizations that have achieved sufficient democratic legitimacy as well as efficacy in one or more of the following domains: conflict prevention and peacebuilding, economic cooperation, and environmental management. Capacity-building support can enhance the effectiveness of regional organizations, and formal relationships with relevant U.N. bodies can strengthen the coordination of regional efforts. Special attention should be focused on regions where intergovernmental bodies are underdeveloped or non-existent.
In the domain of international peace and security, it is critical to start planning a new European security architecture for the political landscape following the Russia-Ukraine War. Because Russia will remain a major European power — regardless of the outcome of that conflict — NATO, the Organization for Security and Co-operation in Europe (OSCE), and the EU should coordinate their plans for a collective security structure that can enhance security across the European continent, including Russia (if and when it permanently renounces the use of force against its neighbors).
The G20, a body that brings together leaders of 19 of the largest economies, plus the heads of the EU and the African Union — together representing 80 percent of the world’s population and almost 85 percent of global GDP — is an important venue for discussions among the world’s most powerful leaders and could be an even more important asset. Could it focus more specifically on a few key topics requiring collective management, such as climate change, pandemic response, debt, and development finance? Could a formal relationship with the U.N. Security Council help bring additional voices to the peace and security agenda?
Subnational units of government (e.g., cities, states, and provinces) are increasingly important in the age of turbulence; the United Nations estimates that by 02030, one-third of the world’s people will live in cities with populations of 500,000 or more. Subnational units of government are increasingly finding themselves responsible for managing the consequences of global turbulence, be they the impacts of accelerating climate change, the spread of infectious disease, or the mass movement of people. Citizens often turn to local leaders for solutions to the consequences of these global phenomena in their daily lives. Although there are numerous international fora where subnational leaders meet, it is time to formalize the connections between subnational governments and the international system. As noted in section 2, one possibility would be to replace the U.N. Trusteeship Council with an Intergovernmental Council that offers rotating membership to subnational units of government (e.g., cities, states, provinces) and that, like the Trusteeship Council, answers to the General Assembly.
People receive the COVID-19 vaccine in New York in April 02021. Photo by Liao Pan / China News Service via Getty Images
Two related 21st-century challenges demand new polylateral mechanisms for establishing norms and developing global regulatory regimes: (1) the decentralized information ecosystem enabled by social media and (2) the advent of generative AI. Efforts are already underway to create an Intergovernmental Panel on the Information Environment (IPIE) modeled on the Intergovernmental Panel on Climate Change (IPCC). Like the IPCC, the IPIE would be “an international scientific body entrusted with the stewardship of our global information environment for the good of mankind.”48 The IPIE would gather and analyze data, monitor trends, and issue recommendations to combat disinformation and misinformation, hate speech, and algorithmic manipulation that undermine trust, fuel conflict, and impede progress in managing social problems. After all, access to reliable information is essential for healthy democracies.
Continued advances in AI will only exacerbate the societal risks of misinformation and disinformation, but the power and implications of AI extend well beyond the information ecosystem and can affect every domain of human activity. These new technologies can help alleviate human suffering, increase workplace productivity, support invention and scientific breakthroughs, and more; however, as many technologists are warning, AI also has the potential to threaten the primacy of human intelligence, to become “God-like” (in the words of tech investor Ian Hogarth), and possibly “usher in the obsolescence or destruction of the human race.”49 Although the proposed IPIE organization would help with gathering and reporting reliable scientific information about the advancements in AI, a more powerful global regulatory body is needed.
The international response to the advent of nuclear energy offers valuable lessons that can inform our management of high-value, high-risk future technologies. The very first resolution adopted by the U.N. General Assembly in 01946 established the U.N. Atomic Energy Commission, which was followed a decade later by the establishment of the International Atomic Energy Agency (IAEA). Furthermore, the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) came into force in 01968, giving the IAEA authority to conduct on-site inspections to ensure that nuclear materials are used for peaceful purposes. The NPT regime and the diligent oversight provided by the IAEA have allowed for significant advances in the peaceful use of nuclear energy — including the operation of some 450 nuclear reactors worldwide — while limiting nuclear weapons to only eight declared states.50
The future international system must include robust mechanisms to enforce international law and combat the current culture of impunity. In a few areas, new institutions are needed; for example, there is a campaign to establish an International Anti-Corruption Court (IACC) that would prosecute alleged corruption by a state’s leaders when national judiciaries are unable or unwilling to act due to political interference or lack of judicial capacity. Thus far, 190 countries have ratified the U.N. Convention Against Corruption, but many cases still go unprosecuted. The proposed IACC would help fill this critical enforcement gap.
4. Improve, Supersede, and Devolve the Nation-state
Nation-states will remain important in the international system of the future, but the COVID-19 pandemic and the climate crisis have highlighted the inadequacies of nation-states regarding governance at both the local and planetary levels.51 Managing the myriad of 21st-century challenges will require the devolution of greater authority (as well as the distribution of necessary resources) to subnational and local levels of government, allowing them to respond to the impacts of global phenomena on their populations.
At the same time, some issues require planetary action, such as global decarbonization, vaccine manufacturing and distribution, and the regulation of certain high-risk technologies like AI or biotechnology. The principle of subsidiarity, which posits that social and political issues should be addressed at the most immediate level of governance consistent with their effective resolution, offers increasingly relevant guidance when addressing the challenges of the 21st century.
It will be extremely challenging to reduce the primacy of the nation-state in international affairs. There must be a fundamental shift in our mindset and ways of understanding the world that have shaped international relations for centuries, as well as new legal and institutional arrangements. A great deal of ideation, discussion, and debate will also be necessary. However, surviving the existential threats inherent in our turbulent age necessitates the undertaking of these efforts. In this regard, the EU, as a structure of collaborative sovereignty that shifted European thinking and governance away from an exclusive reliance on the nation-state, provides an important model.
5. Train, Recruit, and Deploy a New Generation of Diplomats
In the remaining decades of the 21st century, diplomacy must be the core operating system employed to lead the global community toward enduring peace, more equitably shared prosperity, and a sustainable planet. This requires substantial investments in a global diplomatic surge — recruiting, training, and deploying a new generation of diplomats who can advance the logic of the future and the practice of cooperative global problem-solving. The “millennial” and “Gen Z” populations across the world can provide the field of diplomacy with a talented cohort of highly educated, cosmopolitan, and culturally sophisticated women and men.
To build this new corps of national and international diplomats, a distinguished multinational panel of scholars and practitioners should be tasked with developing a global diplomacy curriculum consistent with the logic of the future. This could then be taught at the United Nations University and adopted by other diplomacy and international relations graduate programs worldwide. A virtual diplomacy institute could offer this curriculum in multiple languages through an online platform.
Refugees cross from Turkey to Greece. Photo by Joel Carillet
6. Trade and Investment to Provide Global Public Goods
Consistent with the goals of a more equitable distribution of power in global affairs, the World Trade Organization and Bretton Woods Institutions require significant reforms in terms of their mission, governance, and capitalization. Although many credible reform ideas have been discussed, with some progress made in recent years, debate still swirls around the most fundamental reforms. The institutions of international economics must be focused on promoting equity across developing economies and providing incentives, financing, and technical assistance in the delivery of global public goods such as clean air and water, food, and health care.
One particular reform in the global trade regime merits special attention: the elimination of the Investor-State Dispute Settlement Process (ISDS), which is a common provision in free-trade agreements. The ISDS allows foreign companies to sue governments for relief from national policies that they claim impair their ability to make reasonable profits — including climate regulations, financial stability measures, and labor and public health policies.
The Biden administration took some constructive steps toward a more equitable global trade system, describing it as a “post-colonial trade system.” In an important speech at the Brookings Institution in April 02023, National Security Advisor Jake Sullivan described the Biden administration’s approach: “…working with so many other WTO members to reform the multilateral trading system so that it benefits workers, accommodates legitimate national security interests, and confronts pressing issues that aren’t fully embedded in the current WTO framework, like sustainable development and the clean-energy transition.”
7. Strengthen Democracy
Effective democratic governance must be the cornerstone of the international system of the future. Democratic norms, processes, and institutions give voice to “the peoples of the United Nations,” as expressed in the first lines of the U.N. Charter. Democracy facilitates the identification of common ground and requires compromise; it recognizes differences, promotes fairness and equity, and improves transparency and accountability — qualities that are essential to peaceful international relations. Democratic states are less likely to go to war with one another and, compared to states under autocratic rule, are also less likely to suffer violent internal conflicts.52
Nonetheless, democracy requires substantial reinvention and expanded application if humanity hopes to create a more peaceful, equitable, and sustainable world in this century. Democracy must be made more inclusive, more fully representative, more participatory, and more effective. This mission may be more urgent now than ever before: As faith in democratic governance weakens, neo-authoritarians and demagogues around the globe are rushing to consolidate their power.
Political scientist Larry Diamond coined the term “democratic recession” in 02015 to describe the global decline in the quality and efficacy of democratic governance over the previous decade. Drawing on data reported by Freedom House, the Economist Intelligence Unit, and V-Dem,53 Diamond and many others have documented democratic backsliding and the rise of authoritarianism in all four corners of the globe. At this very moment, electoral authoritarians — Nayib Bukele in El Salvador, Viktor Orbán in Hungary, and Yoweri Museveni in Uganda, to name a few — are eroding the rule of law, restricting the freedom of speech and media, curbing civil society, and trampling on citizens’ rights.
That said, the news is not all bad. In Poland, after years of deepening authoritarian rule under the Law and Justice Party, voters turned out in overwhelming numbers to elect Donald Tusk’s Civic Platform coalition in October 02023. Tusk has since set out to restore the rule of law, media independence, and civil rights — but the task of restoring democracy is proving formidable after eight years in which both norms and institutions were seriously eroded. As the German Marshall Fund’s Michal Baranowski observes, “There will be lessons for other countries to draw from Poland — both on what to do and not to do — but Tusk has the disadvantage of being the first, trying to clean up without a detox handbook.”54
Fundamental reforms are needed even in well-established democratic nation-states, the United States being first and foremost among them. In June 02020, a national commission organized by the American Academy of Arts and Sciences offered a comprehensive blueprint of proposed reforms in a report titled Our Common Purpose: Reinventing American Democracy for the 21st Century.55 The report’s 31 recommendations address significant reforms to political institutions and processes, as well as the need for reliable, widely shared civic information and a healthy political culture. The years ahead may test the resilience of America’s democratic institutions and the rule of law. Political leaders, civil society organizations, and citizens must be prepared to defend democratic norms and constitutional arrangements.
Reforms are needed in democracies around the globe to address similar weaknesses while respecting distinct cultural and historical contexts. One size most certainly does not fit all, but central to all these efforts is the need to fortify the role of citizens as the primary stakeholders in self-government. The will of the citizenry is the ultimate accountability mechanism in democracies; to defend against the rise of autocracy, we must concurrently strengthen the institutional and procedural checks and balances that safeguard the rule of law and protect independent journalism.
Democracy must also be strengthened and extended in the institutions and mechanisms of global governance. Increasingly, decisions of material significance are being made by international bodies far removed from the citizens those decisions will affect. The international system of the future must incorporate more robust democratic norms, characteristics, and processes to make it more participatory, inclusive, transparent, accountable, and effective. Surviving the existential threats of the age of turbulence will require difficult decisions with monumental consequences. The OECD has documented an encouraging “deliberative wave” of “representative deliberative processes,” such as citizens’ assemblies, juries, and panels, that has been steadily gaining momentum since 02010.56 As former U.K. diplomat Carne Ross has argued, we must build on this wave and establish “consent mechanisms for profound change” in global policy for conflict resolution, development finance, economics, trade, and energy to meet the global challenges ahead.
Vice President Biden and Chinese President Xi share a toast, 02015. Photo by U.S. State Department
8. Establish a U.S.-China Secretariat
As many experts have observed, the U.S.-China relationship is the most important bilateral relationship of the 21st century. This relationship must be managed with clear-eyed, consistent, and continuous care, as well as effective communication and creativity. As Harvard professor and former Pentagon official Joseph Nye observed, “For better or worse, the U.S. is locked in a ‘cooperative rivalry with China.’”57 Our economies are closely intertwined; we are the two largest greenhouse gas emitters; we both have strategic interests in the Indo-Pacific; and the island of Taiwan is a potential flashpoint for a great power confrontation. Analogies to the U.S.-USSR Cold War rivalry are commonly invoked, but these comparisons overlook critical differences and lead to misguided policy prescriptions. The best approach to avoiding conflict necessitates the combination and effective management of competition and cooperation. It is not an exaggeration to say that as U.S.–China relations go, so goes the 21st century.
U.S.-China relations ebbed in the first half of 02023, with the year beginning with the Chinese surveillance balloon incident followed by military provocations in the South China Sea. High-level contact between the two governments was revived when Secretary of State Antony Blinken visited Beijing in June. This was followed by several other high-profile visits, including Treasury Secretary Janet Yellen traveling to China in July; Commerce Secretary Gina Raimondo following suit in August; and Chinese Foreign Minister Wang Yi meeting President Biden in the White House, setting the stage for Biden and Xi Jinping to meet during the APEC Summit in California on November 15, 02023.
Episodic meetings of high-level officials, including presidential summits, are essential but insufficient for managing this complex, high-risk relationship; more intensive and continuous joint engagement is required. One idea worth exploring is the establishment of a U.S.–China Joint Secretariat58 in a neutral location, perhaps Singapore or Geneva, to which senior civil servants from key ministries in both countries are seconded to work side-by-side on a daily basis. These officials would be tasked with exploring key issues in the bilateral relationship; developing a deeper understanding of each other’s views, needs, and redlines; and devising creative solutions that could then be shared with Beijing and Washington.
This idea will no doubt be unpopular with other countries in the Indo-Pacific, most notably India. Nevertheless, through careful diplomacy, it should be possible to help the Indians and others to see that non-confrontational and constructive U.S.-China relations are in their best interests.
9. Codify Rights of Nature and Rights of Future Generations
There is fascinating and important work being done in think tanks, academic institutions, and movements to develop eco-jurisprudence that expands the protection of rights beyond those accorded to human life and establishes human responsibility to other forms of life on our planet. Through pathbreaking leadership, Ecuador became the first country to enshrine the rights of nature in its constitution in 02008, and the first legal suit filed on behalf of nature was a case involving threats to the Vilcabamba River: The court found for the river.
Ecuador banned drilling in Yasuní National Park, 02023. Photo by Antonella Carrasco / openDemocracy
Significant progress has been made to establish the rights of future generations, with climate-related lawsuits being brought before courts across the globe on behalf of children. One suit filed in 02015, Juliana v. United States, asserted that “through the government’s affirmative actions that cause climate change, it has violated the youngest generation’s constitutional rights to life, liberty, and property, as well as failed to protect essential public trust resources.”59 In June 02023, U.S. District Court Judge Ann Aiken ruled that the case, brought by 21 young plaintiffs, could proceed to trial. In August 02023, a group of young people in Montana won a landmark ruling that the state’s failure to consider climate change when approving fossil fuel projects was unconstitutional. Similar suits are pending in several other U.S. states, and in September 02023, a suit brought by six young Portuguese citizens was heard before the European Court of Human Rights. Active cases filed on behalf of children and youth are pending in Canada, Mexico, Pakistan, and Uganda.
Establishing the rights of nature and future generations offers a promising avenue for implementing the logic of the future. Secretary-General Guterres’s pledge to name a Special Envoy for the Future also marks an important recognition that the international system must address long-term challenges and focus on prevention along with mitigation and crisis response.
Given its vast wealth, hard and soft power, presumption of moral leadership, and disproportionate consumption of finite global resources, the United States must play a leading role in shaping the global response to the age of turbulence. Without U.S. leadership, it would be impossible to embrace the logic of the future and build the international system needed to address the challenges of the 21st century. But the realities of this interdependent world require fundamental changes in the style and content of U.S. global leadership. We need a bold and fundamentally different vision of America’s role in the world.
A new vision of America’s global role must rest on a set of core principles for constructive, collaborative, results-oriented, and ethical leadership:
First, the United States must recognize that efforts to maintain its global primacy will prove fruitless and not in its national interest. If there was a “unipolar moment” at the end of the Cold War, it was both fleeting and deluding. Given the rapid redistribution of political, economic, and military power already underway when the Soviet Union dissolved in 01991, we should have seen past the triumphant glow and come to grips with a more sober view of a world with multiple nodes and diverse forms of power. Basking in that temporary surge of American supremacy, we failed to adopt a vision of collaborative global leadership in which the United States plays an essential, but not dominant, leadership role. It is imperative that we do so now.
On a relative basis, U.S. military and economic power, though still vast, is shrinking. Perhaps more importantly, our “soft power” (the power of our values, cultural vitality, capacity for scientific and technological innovation, and our leadership by example) has declined. Even among our allies, the United States is often seen as arrogant, greedy, too quick to use military force, and hypocritical. We are seen to support the “rules-based liberal international order” as long as we get to make the rules and enforce the order. Such efforts to assert global primacy breed particular resentment among the very diverse countries that compose the global majority.
Although our priority will be the security and prosperity of the United States, Americans must pursue our national interests with an understanding that, in an interdependent world, our wellbeing is directly tied to peaceful and prosperous conditions elsewhere and to the fate of the planet. Our national goals can be achieved only in concert with others and by forging common ground to generate collective benefits. Rather than striving to preserve our status as the world’s only superpower, the United States should use its great-power status to lead the community of nations in an urgent process of developing a new global system that relies on the coordination and collaboration of multiple centers of power and authority. Humility and honesty are essential: We must engage with “strategic empathy.” I do not underestimate how challenging it will be to transform the role of the United States in the world, especially given the deep divisions in U.S. domestic politics and their influence on our foreign policy. And yet, it is of critical importance that we do so.
Second, the United States must build strength through teamwork. The freer and faster global movement of people, information, goods, money, disease, pollution, and conflict breeds a host of challenges that no single country — not even a superpower — can surmount alone: Only persistent teamwork can deal effectively with the agenda of pressing global issues. The United States must become the indispensable partner in global affairs.
Third, the United States must develop and use a full range of tools. We must be ready to use military force when absolutely necessary to protect the homeland, to confront other urgent threats to peace and security, or to prevent genocide or other overt abuses of human rights. However, we must give priority to other tools — diplomacy chief among them — that can offer effective alternatives to military action. Larger investments in development assistance, designed with foresight and in partnership with credible local leadership, are also essential, both in post-conflict reconstruction and to ameliorate conditions that can breed conflict in the first place.
Fourth, when circumstances warrant consideration of military action, the United States must comply with our obligations under the U.N. Charter, deploy forces only when we are confident that we are unlikely to do harm and, conversely, assess that we are well positioned to contribute to positive outcomes. Americans must finally learn from the lessons of Vietnam, the Balkans, Iraq, and Afghanistan: The use of military force without a deep understanding of the specific political, cultural, regional, and geo-strategic context and a plan for creating conditions for durable peace leads to miscalculation, prolonged engagements, excessive costs in lives and resources, and unmet objectives.
Fifth, the United States should promote fair play. America earns credibility and respect when it bases its actions on its core values. The combination of esteem and tangible support is essential to keeping old friends, winning new ones, forming effective coalitions, and averting resentment and misunderstanding. Furthermore, to advance shared norms, human rights, and the rule of law as the basis for global stability and progress, America itself must play by the rules — whether in the design of trade policies, the judicious deployment of our military, the incarceration and interrogation of prisoners of war, or the use of global environmental resources.
We are living in a complex and dangerous world. The new test for a superpower is how well it cares for global interests. It is time for a new vision of America’s role in the world based on an understanding that what is good for the world is good for us.
Conclusion
The decades ahead will bring change, uncertainty, and peril in global affairs, especially if humanity and its leaders fail to adapt. Populations around the globe are suffering from the increasingly destructive and deadly effects of climate change, which in turn fuel unprecedented levels of mass migration, social upheaval, and competition for resources. Once again, wars rage in Europe and the Middle East, while China acts on its increasingly expansive power aspirations, triggering new global tensions. Early signs suggest that AI could either save humanity or doom it. Norms of social trust are in decline, the truth is elusive, and political polarization impedes dialogue, compromise, and progress toward solutions.
All of these trends — environmental, demographic, geostrategic, technological, political, and institutional — represent grave challenges for old assumptions and existing frameworks. The international system is under stress and in flux. The old order is dying, and a new order is demanding to be born. Indeed, this inescapable need for renewal creates an opportunity for inspiration and invention. We are in a period of elasticity, a time when there is greater capacity for stretch in our conceptions of global relations and thinking about the international system. We must act now to guide the global community toward a more peaceful, equitable, and sustainable future.
Our legacy must not be one of inattention to the rising tides of crisis. Our children deserve to inherit a world structured with a logic that is relevant to their futures. The world itself deserves a logical framework that builds on the history of human progress yet recognizes and eliminates inherent flaws and anachronisms so that we may effectively confront the challenges ahead. We and our planet deserve a sustainable future.
No one can approach this task without understanding why our world has clung to the old order. Beneficiaries of the status quo have every immediate incentive to undermine progress. Competing national interests and aspirations impede transformative thinking, and domestic politics constrain even those states that see the need for, and wish to participate in, the renewal efforts. Economic competition overrides political cooperation. And structural flaws, like those embedded in the U.N. Charter, pose formidable barriers to reform.
In spite of such hurdles, the Pact for the Future adopted at the U.N. Summit of the Future in September 02024 represents an important milestone. The Pact commits the international community to a series of actions that, if fully realized, would meaningfully contribute to a new logic of multipolar pluralism, a more equitable distribution of power, and planetary sustainability. Global civil society must now mobilize to hold the U.N. member states to their commitments and to increase the ambition of implementation and follow-up actions. The Summit of the Future must be the starting point of an ongoing process, not a one-off event with limited impact. We must follow a new logic, advance a new ethos of caring and sharing, and construct a new institutional ecosystem to ensure that the age of turbulence does not become the age of catastrophe.
3. Bill Burns, 59th Ditchley Annual Lecture (RvW Fellowship, Global Order), July 1, 02023. See also Gaia Vince, “Nomad Century,” 02022.
4. The Science and Security Board of the Bulletin of the Atomic Scientists moved the hands of the Doomsday Clock forward to 90 seconds to midnight, largely because of the threats of nuclear use by Russia in the Ukraine war but also recognizing the prospect of the new nuclear arms race: “the closest to global catastrophe it has ever been,” January 24, 02023.
5. Generative artificial intelligence describes algorithms that are currently being used to create new content. General artificial intelligence is a theoretical concept in which future algorithms could replicate any intellectual task that humans can perform.
8. Hannah Arendt, Crises of the Republic: Lying in Politics; Civil Disobedience; On Violence; Thoughts on Politics and Revolution (Houghton Mifflin Harcourt, 01972), 15.
9. See Carrington, “The Economics of Biodiversity review: what are the recommendations?” The Guardian, February 2, 02021 and Dasgupta “The Economics of Biodiversity,” U.K. government, July 02021.
10. The Earth Charter, June 29, 02000.
11. Kissinger, A World Restored, 219.
12. Kristina Lunz, The Future of Foreign Policy is Feminist, Polity Press, 02023, p. 31.
13. Clement Beaune, French transportation minister and Macron protege, The New York Times, September 1, 02023.
14. 02005 World Summit Outcome Document, paragraph 138.
15. From the preamble of the UDHR.
16. In the context of global warming and biodiversity loss, the Common But Differentiated Responsibilities principle (CBDR) recognizes that “[i]n view of the different contributions to global environmental degradation, States have common but differentiated responsibilities” (Principle 7 of the Rio Earth Summit Declaration, 01992).
17. Charles P. Kindleberger, The World in Depression 01929-01939, (University of California Press, 01973).
18. Ambassador (ret.) Michel Duclos, Institut Montaigne.
19. This calculation uses the combined populations of the OECD countries (1.38 billion in 02022) as a proxy for the “Global North” and subtracts this from total 02022 global population of 7.95 billion, yielding 6.5 billion.
20. I offer the term “global majority” as an alternative to “Global South” to acknowledge that the peoples of the countries commonly identified as the Global South make up approximately 82 percent of the world’s population, and that the majority of them live north of the equator, not in the “south.”
21. Climate change is a stark example. Countries representing the global majority are disproportionately experiencing the devastating consequences of a rapidly heating planet while having contributed very little to the emission of climate-altering greenhouse gases. They are also in desperate need of debt relief and equitable financing for investments in sustainable development and climate resilience. At the Paris Climate Conference (COP 21), wealthy countries affirmed a commitment to provide $100 billion per year by 02025 for climate action in developing countries. In 02020, the amount of funds mobilized totaled approximately $83 billion — and given the acceleration of the climate crisis, funding needs are significantly outpacing the financial support provided.
22. See Blake and Gilman, “Governing in the Planetary Age,” Noema, March 9, 02021.
23. Wertheim, Tomorrow the World, 1.
24. Article 39, U.N. Charter.
25. Additional protocols were adopted in 01977 and 02005.
26. International Committee of the Red Cross.
27. Montesquieu, The Spirit of Laws, 01748.
28. Adam Tooze, “02023 Shows that Economic Growth Does Not Always Breed Peace,” Financial Times, December 22, 02023.
29. Including the Food and Agriculture Organization, the International Labor Organization, the International Monetary Fund, the World Health Organization, and the World Bank.
30. Lunz, op. cit., p. 63.
31. Stockholm International Peace Research Institute (SIPRI).
32. For example, the U.N. World Food Program, which strives to assist a record 345 million people worldwide facing food shortages in 02023, currently confronts an estimated shortfall of $15.1 billion. See also https://disarmament.unoda.org/wmd/nuclear/tpnw/.
33. Institute for Economics and Peace.
34. HLAB, A Breakthrough for People and Planet, (New York: United Nations University, April 02023), xx.
35. Since 02010 Brazil, Russia, India, China, and South Africa have been the members of the BRICS. At their August 02023 summit in Johannesburg, the group voted to add Argentina, Egypt, Ethiopia, Iran, Saudi Arabia, and the United Arab Emirates, whereupon the BRICS would account for 47 percent of global population and nearly 37 percent of global gross domestic product (GDP) as measured by purchasing power parity (PPP) compared to the G7, which comprises less than 10 percent of global population and 30 percent of global GDP. Sixteen additional countries have applied for BRICS membership.
36. Samir Saran, “The new world – shaped by self-interest,” Observer Research Foundation of India, May 24, 02023.
37. American University, June 01963.
38. Examples include Hong Kong, Macau, and Barbados, where colonialism continued until 01997, 01999, and 02021, respectively.
39. The World Bank, the International Monetary Fund, regional development banks, etc.
40. See the Wellbeing Economy Alliance (WEALL).
41. See Doughnut Economics by Kate Raworth, 02017.
53. The Varieties of Democracy Institute (or V-Dem) is a global network of social scientists who collaborate in publishing reports assessing the state of democracy worldwide.
54. Raphael Minder, “Inside Donald Tusk’s Divisive Campaign to Restore Polish Democracy,” Financial Times, February 18, 02024.
Author: K. E. Redmond He stared at the blue and white globe passing beneath him, watching the dark shadow cut across its surface. Once, the dark had been alive with light like glowing fungus. He’d imagined pearls of highways, puddles beneath streetlamps, neon signs. As the lights winked out, the smog dissipated. In daylight, he […]
Markus does QA, and this means writing automated tests which wrap around the code written by developers. Mostly this is a "black box" situation, where Markus doesn't look at the code, and instead goes by the interface and the requirements. Sometimes, though, he does look at the code, and wishes he hadn't.
Today's snippet comes from a program which is meant to generate PDF files and then, optionally, email them. There are a few methods we're going to look at, because they invested a surprising amount of code into doing this the wrong way.
protected override void Execute()
{
    int sendMail = this.VerifyParameterValue(ParamSendMail);
    if (sendMail == -1)
        return;
    if (sendMail == 1)
        mail = true;
    this.TraceOutput(Properties.Resources.textGetCustomerForAccountStatement);
    IList<CustomerModel> customers = AccountStatement.GetCustomersForAccountStatement();
    if (customers.Count == 0) return;
    StreamWriter streamWriter = null;
    if (mail)
        streamWriter = AccountStatement.CreateAccountStatementLogFile();
    CreateAccountStatementDocumentEngine engine = new CreateAccountStatementDocumentEngine();
    foreach (CustomerModel customer in customers)
    {
        this.TraceOutput(Properties.Resources.textCustomerAccountStatementBegin + customer.DisplayName.ToString());
        // Generate the PDF, optionally send an email with the document attached
        engine.Execute(customer, mail);
        if (mail)
        {
            AccountStatement.WriteToLogFile(customer, streamWriter);
            this.TraceOutput(Properties.Resources.textLogWriting);
        }
    }
    engine.Dispose();
    if (streamWriter != null)
        streamWriter.Close();
}
Now, this might sound unfair, but right off the bat I'm going to complain about separation of concerns. This function both generates output and emails it (optionally), while handling all of the stream management. Honestly, I think if the developer were simply forced to go back and make this a set of small, cohesive methods, most of the WTFs would vanish. But there's more to say here.
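Before moving on, here is a hedged sketch of what "small, cohesive methods" might look like for this flow. Every helper name, the bool flag, and the cleanup pattern below are invented for illustration; only the overall sequence of steps comes from the snippet above.

    // A hedged sketch, not the application's actual design: the same workflow
    // split into small, single-purpose methods. ParseSendMailFlag and
    // GenerateStatements are assumed helpers, not methods from the original code.
    protected override void Execute()
    {
        // ParseSendMailFlag is an assumed helper that turns the CLI parameter into a bool
        bool sendMail = ParseSendMailFlag(ParamSendMail);

        IList<CustomerModel> customers = AccountStatement.GetCustomersForAccountStatement();
        if (customers.Count == 0) return;

        GenerateStatements(customers, sendMail);
    }

    private void GenerateStatements(IList<CustomerModel> customers, bool sendMail)
    {
        // Assumes the engine implements IDisposable, which its Dispose() call suggests
        using (CreateAccountStatementDocumentEngine engine = new CreateAccountStatementDocumentEngine())
        using (StreamWriter log = sendMail ? AccountStatement.CreateAccountStatementLogFile() : null)
        {
            foreach (CustomerModel customer in customers)
            {
                engine.Execute(customer, sendMail);      // generate the PDF, optionally email it
                if (sendMail)
                    AccountStatement.WriteToLogFile(customer, log);
            }
        }
    }

Nothing clever is happening there; each piece just does one thing, which is the point.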
Specifically, let's look at the first few lines, where we call VerifyParameterValue. Note that this function clearly returns -1 when it fails, which is a very C-programmer-forced-to-do-OO idiom. But let's look at that method.
We'll come back to VerifyByParameterFormat, but otherwise this is basically a wrapper around Convert.ToInt32 and could easily be replaced with Int32.TryParse.
Bonus points for spamming the log output with loads of newlines.
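For reference, the Int32.TryParse version would presumably go in this direction. This is only a sketch: the GetParameterValue accessor and the message text are assumptions, and the -1-on-failure convention is kept only so the caller in Execute still works.

    // Hedged sketch: let Int32.TryParse do the validation and conversion in one step.
    // GetParameterValue is a hypothetical stand-in for however the original program
    // reads the raw command line argument.
    private int VerifyParameterValue(string paramName)
    {
        string raw = this.GetParameterValue(paramName);
        if (Int32.TryParse(raw, out int value))
            return value;

        this.TraceOutput("Invalid numeric value for parameter " + paramName);
        return -1;   // preserve the caller's -1-means-failure convention
    }

Same behavior as far as the caller is concerned, minus the hand-rolled validation and the log spam.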
Okay, but what is the VerifyByParameterFormat doing?
Oh, it just goes character by character to verify whether or not this is made up of only digits. Which, by the way, means the CLI argument needs to be an integer, and only when that integer is 1 do we send emails. It's a boolean, but worse.
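Based on that description, the method presumably looks roughly like the following sketch; the signature, parameter name, and bool return are all guesses based on Markus's summary, not the original source.

    // Hedged approximation of the described check: scan the argument character by
    // character and reject anything that is not a decimal digit.
    private bool VerifyByParameterFormat(string value)
    {
        if (string.IsNullOrEmpty(value))
            return false;

        foreach (char c in value)
        {
            if (!char.IsDigit(c))
                return false;   // one non-digit character fails the whole argument
        }
        return true;
    }

Which is precisely the work Int32.TryParse would already have done for free.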
Let's assume, however, that passing numbers is required by the specification. Still, Markus has thoughts:
Handling this command line argument might seem obvious enough. I'd probably do something along the lines of "if (arg == "1") { sendMail = true } else if (arg != "0") { tell the user they're an idiot }". Of course, I'm not a professional programmer, so my solution is way too simple as the attached piece of code will show you.
There are better ways to do it, Markus, but as you've shown us, there are definitely worse ways.
Welcome to the second report in 2025 from the Reproducible Builds project. Our monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As usual, however, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.
Similar to last year’s event, there was considerable activity regarding Reproducible Builds at FOSDEM 2025, held on 1st and 2nd February this year in Brussels, Belgium. We count at least four talks related to reproducible builds. (You can also read our news report from last year’s event in which Holger Levsen presented in the main track.)
Jelle van der Waa, Holger Levsen and kpcyrd presented in the Distributions track on A Tale of several distros joining forces for a common goal. In this talk, three developers from two different Linux distributions (Arch Linux and Debian) discuss this goal — which is, of course, reproducible builds. The presenters discuss both what is shared and different between the two efforts, touching on the history and future challenges alike. The slides of this talk are available to view, as is the full video (30m02s). The talk was also discussed on Hacker News.
Zbigniew Jędrzejewski-Szmek presented in the ever-popular Python track on Rewriting .pyc files for fun and reproducibility, i.e. the bytecode files generated by Python in order to speed up module imports: “It’s been known for a while that those are not reproducible: on different architectures, the bytecode for exactly the same sources ends up slightly different.” The slides of this talk are available, as is the full video (28m32s).
In the Nix and NixOS track, Julien Malka presented on the Saturday asking How reproducible is NixOS: “We know that the NixOS ISO image is very close to be perfectly reproducible thanks to reproducible.nixos.org, but there doesn’t exist any monitoring of Nixpkgs as a whole. In this talk I’ll present the findings of a project that evaluated the reproducibility of Nixpkgs as a whole by mass rebuilding packages from revisions between 2017 and 2023 and comparing the results with the NixOS cache.” Unfortunately, no video of the talk is available, but there is a blog and article on the results.
Lastly, Simon Tournier presented in the Open Research track on the confluence of GNU Guix and Software Heritage: Source Code Archiving to the Rescue of Reproducible Deployment. Simon’s talk “describes design and implementation we came up and reports on the archival coverage for package source code with data collected over five years. It opens to some remaining challenges toward a better open and reproducible research.” The slides for the talk are available, as is the full video (23m17s).
Reproducible Builds at PyCascades 2025
Vagrant Cascadian presented at this year’s PyCascades conference which was held on February 8th and 9th in Portland, OR, USA. PyCascades is a regional instance of PyCon held in the Pacific Northwest. Vagrant’s talk, entitled Re-Py-Ducible Builds, caught the audience’s attention with the following abstract:
Crank your Python best practices up to 11 with Reproducible Builds! This talk will explore Reproducible Builds by highlighting issues identified in Python projects, from the simple to the seemingly inscrutable. Reproducible Builds is basically the crazy idea that when you build something, and you build it again, you get the exact same thing… or even more important, if someone else builds it, they get the exact same thing too.
Powering this work is rebuilderd, our server which monitors the official package repositories of Linux distributions and attempts to reproduce the observed results there. This month, however, Holger Levsen:
Split packages that are not specific to any architecture away from the amd64.reproducible.debian.net service into a new all.reproducible.debian.net page.
Increased the number of riscv64 nodes to a total of 4, and added a new amd64 node thanks to our now 10-year sponsor, IONOS.
Uploaded the devscripts package, incorporating changes from Jochen Sprickerhof to the debrebuild script — specifically to fix the handling of the Rules-Requires-Root header in Debian source packages.
Uploaded a number of Rust dependencies of rebuilderd (rust-libbz2-rs-sys, rust-actix-web, rust-actix-server, rust-actix-http, rust-actix-web-codegen and rust-time-tz) after they were prepared by kpcyrd:
Jochen Sprickerhof also updated the sbuild package to:
Obey requests from the user/developer for a different temporary directory.
Use the root/superuser for some values of Rules-Requires-Root.
Don’t pass --root-owner-group to old versions of dpkg.
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
Fedora developers Davide Cavalca and Zbigniew Jędrzejewski-Szmek gave a talk on Reproducible Builds in Fedora (PDF), touching on SRPM-specific issues as well as the current status and future plans.
Thanks to an investment from the Sovereign Tech Agency, the FreeBSD project’s work on unprivileged and reproducible builds continued this month. Notable fixes include:
The Yocto Project has been struggling to upgrade to the latest Go and Rust releases due to reproducibility problems in the newer versions. Hongxu Jia tracked down the issue with Go which meant that the project could upgrade from the 1.22 series to 1.24, with the fix being submitted upstream for review (see above). For Rust, however, the project was significantly behind, but has made recent progress after finally identifying the blocking reproducibility issues. At time of writing, the project is at Rust version 1.82, with patches under review for 1.83 and 1.84 and fixes being discussed with the Rust developers. The project hopes to improve the tests for reproducibility in the Rust project itself in order to try and avoid future regressions.
Yocto continues to maintain its ability to binary reproduce all of the recipes in OpenEmbedded-Core, regardless of the build host distribution or the current build path.
The Reproducible-openSUSE (RBOS) project, which is a proof-of-concept fork of openSUSE, has reached a significant milestone after demonstrating a usable Linux distribution can be built with 100% bit-identical packages.
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 288 and 289 to Debian:
Add asar to DIFFOSCOPE_FAIL_TESTS_ON_MISSING_TOOLS in order to address Debian bug #1095057. […]
Catch a CalledProcessError when calling html2text. […]
Additionally, Vagrant Cascadian updated diffoscope in GNU Guix to version 287 […][…] and 288 […][…] as well as submitted a patch to update to 289 […]. Vagrant also fixed an issue that was breaking reprotest on Guix […][…].
Holger Levsen clarified the name of a link to our old Wiki pages on the History page […] and added a number of new links to the Talks & Resources page […][…].
James Addison updated the website’s own README file to document a couple of additional dependencies […][…], and did more work on a future Getting Started guide page […][…].
Reproducibility testing framework
The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In January, a number of changes were made, including:
James Addison updated reproduce.debian.net to display the so-called ‘bad’ reasons hyperlink inline […] and merged the “Categorized issues” links into the “Reproduced builds” column […].
Roland Clobus continued their work on reproducible ‘live’ images for Debian, making changes related to new clustering of jobs in openQA. […]
And finally, both Holger Levsen […][…][…] and Vagrant Cascadian performed significant node maintenance. […][…][…][…][…]
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:
I recently used the PuLP modeler to solve a work scheduling problem to assign
workers to shifts. Here are notes about doing that. This is a common use case,
but isn't explicitly covered in the case studies in the PuLP documentation.
Here's the problem:
We are trying to put together a schedule for one week
Each day has some set of work shifts that need to be staffed
Each shift must be staffed with exactly one worker
The shift schedule is known beforehand, and the workers each declare their
preferences beforehand: they mark each shift in the week as one of:
PREFERRED (if they want to be scheduled on that shift)
NEUTRAL
DISFAVORED (if they don't love that shift)
REFUSED (if they absolutely cannot work that shift)
The tool is supposed to allocate workers to the shifts to try to cover all the
shifts, give everybody work, and try to match their preferences. I implemented
the tool:
#!/usr/bin/python3

import sys
import os
import re

def report_solution_to_console(vars):
    for w in days_of_week:
        annotation = ''
        if human_annotate is not None:
            for s in shifts.keys():
                m = re.match(rf'{w} - ', s)
                if not m: continue
                if vars[human_annotate][s].value():
                    annotation = f" ({human_annotate} SCHEDULED)"
                    break
            if not len(annotation):
                annotation = f" ({human_annotate} OFF)"
        print(f"{w}{annotation}")

        for s in shifts.keys():
            m = re.match(rf'{w} - ', s)
            if not m: continue

            annotation = ''
            if human_annotate is not None:
                annotation = f" ({human_annotate} {shifts[s][human_annotate]})"
            print(f"  ---- {s[m.end():]}{annotation}")

            for h in humans:
                if vars[h][s].value():
                    print(f"    {h} ({shifts[s][h]})")

def report_solution_summary_to_console(vars):
    print("\nSUMMARY")

    for h in humans:
        print(f"-- {h}")
        print(f"   benefit: {benefits[h].value():.3f}")

        counts = dict()
        for a in availabilities:
            counts[a] = 0
        for s in shifts.keys():
            if vars[h][s].value():
                counts[shifts[s][h]] += 1
        for a in availabilities:
            print(f"   {counts[a]} {a}")

human_annotate = None

days_of_week = ('SUNDAY',
'MONDAY',
'TUESDAY',
'WEDNESDAY',
'THURSDAY',
'FRIDAY',
'SATURDAY')
humans = ['ALICE', 'BOB',
'CAROL', 'DAVID', 'EVE', 'FRANK', 'GRACE', 'HEIDI', 'IVAN', 'JUDY']
shifts = {'SUNDAY - SANDING 9:00 AM - 4:00 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'PREFERRED',
'DAVID': 'PREFERRED',
'EVE': 'PREFERRED',
'FRANK': 'PREFERRED',
'GRACE': 'DISFAVORED',
'HEIDI': 'DISFAVORED',
'IVAN': 'PREFERRED',
'JUDY': 'NEUTRAL'},
'WEDNESDAY - SAWING 7:30 AM - 2:30 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'PREFERRED',
'DAVID': 'PREFERRED',
'FRANK': 'PREFERRED',
'GRACE': 'NEUTRAL',
'HEIDI': 'DISFAVORED',
'IVAN': 'PREFERRED',
'EVE': 'REFUSED',
'JUDY': 'REFUSED'},
'THURSDAY - SANDING 9:00 AM - 4:00 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'PREFERRED',
'DAVID': 'PREFERRED',
'EVE': 'PREFERRED',
'FRANK': 'PREFERRED',
'GRACE': 'PREFERRED',
'HEIDI': 'DISFAVORED',
'IVAN': 'PREFERRED',
'JUDY': 'PREFERRED'},
'SATURDAY - SAWING 7:30 AM - 2:30 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'PREFERRED',
'DAVID': 'PREFERRED',
'FRANK': 'PREFERRED',
'HEIDI': 'DISFAVORED',
'IVAN': 'PREFERRED',
'EVE': 'REFUSED',
'JUDY': 'REFUSED',
'GRACE': 'REFUSED'},
'SUNDAY - SAWING 9:00 AM - 4:00 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'PREFERRED',
'DAVID': 'PREFERRED',
'EVE': 'PREFERRED',
'FRANK': 'PREFERRED',
'GRACE': 'DISFAVORED',
'IVAN': 'PREFERRED',
'JUDY': 'PREFERRED',
'HEIDI': 'REFUSED'},
'MONDAY - SAWING 9:00 AM - 4:00 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'PREFERRED',
'DAVID': 'PREFERRED',
'EVE': 'PREFERRED',
'FRANK': 'PREFERRED',
'GRACE': 'PREFERRED',
'IVAN': 'PREFERRED',
'JUDY': 'PREFERRED',
'HEIDI': 'REFUSED'},
'TUESDAY - SAWING 9:00 AM - 4:00 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'PREFERRED',
'DAVID': 'PREFERRED',
'EVE': 'PREFERRED',
'FRANK': 'PREFERRED',
'GRACE': 'NEUTRAL',
'IVAN': 'PREFERRED',
'JUDY': 'PREFERRED',
'HEIDI': 'REFUSED'},
'WEDNESDAY - PAINTING 7:30 AM - 2:30 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'PREFERRED',
'FRANK': 'PREFERRED',
'GRACE': 'NEUTRAL',
'HEIDI': 'DISFAVORED',
'IVAN': 'PREFERRED',
'EVE': 'REFUSED',
'JUDY': 'REFUSED',
'DAVID': 'REFUSED'},
'THURSDAY - SAWING 9:00 AM - 4:00 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'PREFERRED',
'DAVID': 'PREFERRED',
'EVE': 'PREFERRED',
'FRANK': 'PREFERRED',
'GRACE': 'PREFERRED',
'IVAN': 'PREFERRED',
'JUDY': 'PREFERRED',
'HEIDI': 'REFUSED'},
'FRIDAY - SAWING 9:00 AM - 4:00 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'PREFERRED',
'DAVID': 'PREFERRED',
'EVE': 'PREFERRED',
'FRANK': 'PREFERRED',
'GRACE': 'PREFERRED',
'IVAN': 'PREFERRED',
'JUDY': 'DISFAVORED',
'HEIDI': 'REFUSED'},
'SATURDAY - PAINTING 7:30 AM - 2:30 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'PREFERRED',
'FRANK': 'PREFERRED',
'HEIDI': 'DISFAVORED',
'IVAN': 'PREFERRED',
'EVE': 'REFUSED',
'JUDY': 'REFUSED',
'GRACE': 'REFUSED',
'DAVID': 'REFUSED'},
'SUNDAY - PAINTING 9:45 AM - 4:45 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'NEUTRAL',
'EVE': 'PREFERRED',
'FRANK': 'PREFERRED',
'GRACE': 'DISFAVORED',
'IVAN': 'PREFERRED',
'JUDY': 'PREFERRED',
'HEIDI': 'REFUSED',
'DAVID': 'REFUSED'},
'MONDAY - PAINTING 9:45 AM - 4:45 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'NEUTRAL',
'EVE': 'PREFERRED',
'FRANK': 'PREFERRED',
'GRACE': 'PREFERRED',
'IVAN': 'PREFERRED',
'JUDY': 'NEUTRAL',
'HEIDI': 'REFUSED',
'DAVID': 'REFUSED'},
'TUESDAY - PAINTING 9:45 AM - 4:45 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'NEUTRAL',
'EVE': 'PREFERRED',
'FRANK': 'PREFERRED',
'GRACE': 'NEUTRAL',
'IVAN': 'PREFERRED',
'JUDY': 'PREFERRED',
'HEIDI': 'REFUSED',
'DAVID': 'REFUSED'},
'WEDNESDAY - SANDING 9:45 AM - 4:45 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'NEUTRAL',
'DAVID': 'PREFERRED',
'FRANK': 'PREFERRED',
'GRACE': 'NEUTRAL',
'HEIDI': 'DISFAVORED',
'IVAN': 'PREFERRED',
'JUDY': 'NEUTRAL',
'EVE': 'REFUSED'},
'THURSDAY - PAINTING 9:45 AM - 4:45 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'NEUTRAL',
'EVE': 'PREFERRED',
'FRANK': 'PREFERRED',
'GRACE': 'NEUTRAL',
'IVAN': 'PREFERRED',
'JUDY': 'PREFERRED',
'HEIDI': 'REFUSED',
'DAVID': 'REFUSED'},
'FRIDAY - PAINTING 9:45 AM - 4:45 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'NEUTRAL',
'EVE': 'PREFERRED',
'FRANK': 'PREFERRED',
'GRACE': 'PREFERRED',
'IVAN': 'PREFERRED',
'JUDY': 'DISFAVORED',
'HEIDI': 'REFUSED',
'DAVID': 'REFUSED'},
'SATURDAY - SANDING 9:45 AM - 4:45 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'NEUTRAL',
'DAVID': 'PREFERRED',
'FRANK': 'PREFERRED',
'HEIDI': 'DISFAVORED',
'IVAN': 'PREFERRED',
'EVE': 'REFUSED',
'JUDY': 'REFUSED',
'GRACE': 'REFUSED'},
'SUNDAY - PAINTING 11:00 AM - 6:00 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'NEUTRAL',
'EVE': 'DISFAVORED',
'FRANK': 'NEUTRAL',
'GRACE': 'NEUTRAL',
'HEIDI': 'PREFERRED',
'IVAN': 'NEUTRAL',
'JUDY': 'NEUTRAL',
'DAVID': 'REFUSED'},
'MONDAY - PAINTING 12:00 PM - 7:00 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'NEUTRAL',
'EVE': 'DISFAVORED',
'FRANK': 'NEUTRAL',
'GRACE': 'PREFERRED',
'IVAN': 'NEUTRAL',
'JUDY': 'NEUTRAL',
'HEIDI': 'REFUSED',
'DAVID': 'REFUSED'},
'TUESDAY - PAINTING 12:00 PM - 7:00 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'NEUTRAL',
'EVE': 'DISFAVORED',
'FRANK': 'NEUTRAL',
'GRACE': 'NEUTRAL',
'IVAN': 'NEUTRAL',
'HEIDI': 'REFUSED',
'JUDY': 'REFUSED',
'DAVID': 'REFUSED'},
'WEDNESDAY - PAINTING 12:00 PM - 7:00 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'NEUTRAL',
'FRANK': 'NEUTRAL',
'GRACE': 'NEUTRAL',
'IVAN': 'NEUTRAL',
'JUDY': 'PREFERRED',
'EVE': 'REFUSED',
'HEIDI': 'REFUSED',
'DAVID': 'REFUSED'},
'THURSDAY - PAINTING 12:00 PM - 7:00 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'NEUTRAL',
'EVE': 'DISFAVORED',
'FRANK': 'NEUTRAL',
'GRACE': 'NEUTRAL',
'IVAN': 'NEUTRAL',
'JUDY': 'PREFERRED',
'HEIDI': 'REFUSED',
'DAVID': 'REFUSED'},
'FRIDAY - PAINTING 12:00 PM - 7:00 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'NEUTRAL',
'EVE': 'DISFAVORED',
'FRANK': 'NEUTRAL',
'GRACE': 'NEUTRAL',
'IVAN': 'NEUTRAL',
'JUDY': 'DISFAVORED',
'HEIDI': 'REFUSED',
'DAVID': 'REFUSED'},
'SATURDAY - PAINTING 12:00 PM - 7:00 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'NEUTRAL',
'FRANK': 'NEUTRAL',
'IVAN': 'NEUTRAL',
'JUDY': 'DISFAVORED',
'EVE': 'REFUSED',
'HEIDI': 'REFUSED',
'GRACE': 'REFUSED',
'DAVID': 'REFUSED'},
'SUNDAY - SAWING 12:00 PM - 7:00 PM':
{'ALICE': 'PREFERRED',
'BOB': 'PREFERRED',
'CAROL': 'NEUTRAL',
'EVE': 'DISFAVORED',
'FRANK': 'NEUTRAL',
'GRACE': 'NEUTRAL',
'IVAN': 'NEUTRAL',
'JUDY': 'PREFERRED',
'HEIDI': 'REFUSED',
'DAVID': 'REFUSED'},
'MONDAY - SAWING 2:00 PM - 9:00 PM':
{'ALICE': 'PREFERRED',
'BOB': 'PREFERRED',
'CAROL': 'DISFAVORED',
'EVE': 'DISFAVORED',
'FRANK': 'NEUTRAL',
'GRACE': 'NEUTRAL',
'IVAN': 'DISFAVORED',
'JUDY': 'DISFAVORED',
'HEIDI': 'REFUSED',
'DAVID': 'REFUSED'},
'TUESDAY - SAWING 2:00 PM - 9:00 PM':
{'ALICE': 'PREFERRED',
'BOB': 'PREFERRED',
'CAROL': 'DISFAVORED',
'EVE': 'DISFAVORED',
'FRANK': 'NEUTRAL',
'GRACE': 'NEUTRAL',
'IVAN': 'DISFAVORED',
'HEIDI': 'REFUSED',
'JUDY': 'REFUSED',
'DAVID': 'REFUSED'},
'WEDNESDAY - SAWING 2:00 PM - 9:00 PM':
{'ALICE': 'PREFERRED',
'BOB': 'PREFERRED',
'CAROL': 'DISFAVORED',
'FRANK': 'NEUTRAL',
'GRACE': 'NEUTRAL',
'IVAN': 'DISFAVORED',
'JUDY': 'DISFAVORED',
'EVE': 'REFUSED',
'HEIDI': 'REFUSED',
'DAVID': 'REFUSED'},
'THURSDAY - SAWING 2:00 PM - 9:00 PM':
{'ALICE': 'PREFERRED',
'BOB': 'PREFERRED',
'CAROL': 'DISFAVORED',
'EVE': 'DISFAVORED',
'FRANK': 'NEUTRAL',
'GRACE': 'NEUTRAL',
'IVAN': 'DISFAVORED',
'JUDY': 'DISFAVORED',
'HEIDI': 'REFUSED',
'DAVID': 'REFUSED'},
'FRIDAY - SAWING 2:00 PM - 9:00 PM':
{'ALICE': 'PREFERRED',
'BOB': 'PREFERRED',
'CAROL': 'DISFAVORED',
'EVE': 'DISFAVORED',
'FRANK': 'NEUTRAL',
'GRACE': 'NEUTRAL',
'IVAN': 'DISFAVORED',
'HEIDI': 'REFUSED',
'JUDY': 'REFUSED',
'DAVID': 'REFUSED'},
'SATURDAY - SAWING 2:00 PM - 9:00 PM':
{'ALICE': 'PREFERRED',
'BOB': 'PREFERRED',
'CAROL': 'DISFAVORED',
'FRANK': 'NEUTRAL',
'IVAN': 'DISFAVORED',
'JUDY': 'DISFAVORED',
'EVE': 'REFUSED',
'HEIDI': 'REFUSED',
'GRACE': 'REFUSED',
'DAVID': 'REFUSED'},
'SUNDAY - PAINTING 12:15 PM - 7:15 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'PREFERRED',
'EVE': 'DISFAVORED',
'FRANK': 'NEUTRAL',
'GRACE': 'NEUTRAL',
'HEIDI': 'NEUTRAL',
'IVAN': 'DISFAVORED',
'JUDY': 'NEUTRAL',
'DAVID': 'REFUSED'},
'MONDAY - PAINTING 2:00 PM - 9:00 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'DISFAVORED',
'EVE': 'DISFAVORED',
'FRANK': 'NEUTRAL',
'GRACE': 'NEUTRAL',
'HEIDI': 'NEUTRAL',
'IVAN': 'DISFAVORED',
'JUDY': 'DISFAVORED',
'DAVID': 'REFUSED'},
'TUESDAY - PAINTING 2:00 PM - 9:00 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'DISFAVORED',
'EVE': 'DISFAVORED',
'FRANK': 'NEUTRAL',
'GRACE': 'NEUTRAL',
'HEIDI': 'NEUTRAL',
'IVAN': 'DISFAVORED',
'JUDY': 'REFUSED',
'DAVID': 'REFUSED'},
'WEDNESDAY - PAINTING 2:00 PM - 9:00 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'DISFAVORED',
'FRANK': 'NEUTRAL',
'GRACE': 'NEUTRAL',
'HEIDI': 'NEUTRAL',
'IVAN': 'DISFAVORED',
'JUDY': 'DISFAVORED',
'EVE': 'REFUSED',
'DAVID': 'REFUSED'},
'THURSDAY - PAINTING 2:00 PM - 9:00 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'DISFAVORED',
'EVE': 'DISFAVORED',
'FRANK': 'NEUTRAL',
'GRACE': 'NEUTRAL',
'HEIDI': 'NEUTRAL',
'IVAN': 'DISFAVORED',
'JUDY': 'DISFAVORED',
'DAVID': 'REFUSED'},
'FRIDAY - PAINTING 2:00 PM - 9:00 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'DISFAVORED',
'EVE': 'DISFAVORED',
'FRANK': 'NEUTRAL',
'GRACE': 'NEUTRAL',
'HEIDI': 'NEUTRAL',
'IVAN': 'DISFAVORED',
'JUDY': 'REFUSED',
'DAVID': 'REFUSED'},
'SATURDAY - PAINTING 2:00 PM - 9:00 PM':
{'ALICE': 'NEUTRAL',
'BOB': 'NEUTRAL',
'CAROL': 'DISFAVORED',
'FRANK': 'NEUTRAL',
'HEIDI': 'NEUTRAL',
'IVAN': 'DISFAVORED',
'JUDY': 'DISFAVORED',
'EVE': 'REFUSED',
'GRACE': 'REFUSED',
'DAVID': 'REFUSED'}}
availabilities = ['PREFERRED', 'NEUTRAL', 'DISFAVORED']
import pulp
prob = pulp.LpProblem("Scheduling", pulp.LpMaximize)
vars = pulp.LpVariable.dicts("Assignments",
(humans, shifts.keys()),
None,None, # bounds; unused, since these are binary variables
pulp.LpBinary)
# Everyone works at least 2 shifts
Nshifts_min = 2
for h in humans:
    prob += (
        pulp.lpSum([vars[h][s] for s in shifts.keys()]) >= Nshifts_min,
        f"{h} works at least {Nshifts_min} shifts",
    )
# each shift is ~ 8 hours, so I limit everyone to 40/8 = 5 shifts
Nshifts_max = 5
for h in humans:
    prob += (
        pulp.lpSum([vars[h][s] for s in shifts.keys()]) <= Nshifts_max,
        f"{h} works at most {Nshifts_max} shifts",
    )
# all shifts staffed and not double-staffed
for s in shifts.keys():
    prob += (
        pulp.lpSum([vars[h][s] for h in humans]) == 1,
        f"{s} is staffed",
    )
# each human can work at most one shift on any given day
for w in days_of_week:
    for h in humans:
        prob += (
            pulp.lpSum([vars[h][s] for s in shifts.keys() if re.match(rf'{w} ',s)]) <= 1,
            f"{h} cannot be double-booked on {w}"
        )
#### Some explicit constraints; as an example
# DAVID can't work any PAINTING shift and is off on Thu and Sun
h = 'DAVID'
prob += (
    pulp.lpSum([vars[h][s] for s in shifts.keys() if re.search(r'- PAINTING',s)]) == 0,
    f"{h} can't work any PAINTING shift"
)
prob += (
    pulp.lpSum([vars[h][s] for s in shifts.keys() if re.match(r'THURSDAY|SUNDAY',s)]) == 0,
    f"{h} is off on Thursday and Sunday"
)
# Do not assign any "REFUSED" shifts
for s in shifts.keys():
    for h in humans:
        if shifts[s][h] == 'REFUSED':
            prob += (
                vars[h][s] == 0,
                f"{h} is not available for {s}"
            )
# Objective. I try to maximize the "happiness". Each human sees each shift as
# one of:
#
#   PREFERRED
#   NEUTRAL
#   DISFAVORED
#   REFUSED
#
# I set a hard constraint to handle "REFUSED", and arbitrarily, I set these
# benefit values for the others
benefit_availability = dict()
benefit_availability['PREFERRED'] = 3
benefit_availability['NEUTRAL'] = 2
benefit_availability['DISFAVORED'] = 1

# Not used, since this is a hard constraint. But the code needs this to be a
# part of the benefit. I can ignore these in the code, but let's keep this
# simple
benefit_availability['REFUSED' ] = -1000
benefits = dict()
for h in humans:
    benefits[h] = \
        pulp.lpSum([vars[h][s] * benefit_availability[shifts[s][h]] \
                    for s in shifts.keys()])

benefit_total = \
    pulp.lpSum([benefits[h] \
                for h in humans])

prob += (
    benefit_total,
    "happiness",
)
prob.solve()
if pulp.LpStatus[prob.status] == "Optimal":
    report_solution_to_console(vars)
    report_solution_summary_to_console(vars)
The set of workers is in the humans variable, and the shift schedule and the
workers' preferences are encoded in the shifts dict. The problem is defined by
a vars dict of dicts, each entry a boolean variable indicating whether a particular
worker is scheduled for a particular shift. We define a set of constraints on
these worker allocations to restrict ourselves to valid solutions. And among
these valid solutions, we try to find the one that maximizes some benefit
function, defined here as:
benefit_availability = dict()
benefit_availability['PREFERRED'] = 3
benefit_availability['NEUTRAL'] = 2
benefit_availability['DISFAVORED'] = 1
benefits = dict()
for h in humans:
    benefits[h] = \
        pulp.lpSum([vars[h][s] * benefit_availability[shifts[s][h]] \
                    for s in shifts.keys()])

benefit_total = \
    pulp.lpSum([benefits[h] \
                for h in humans])
So for instance each shift that was scheduled as somebody's PREFERRED shift
gives us 3 benefit points. And if all the shifts ended up being PREFERRED, we'd
have a total benefit value of 3*Nshifts. This is impossible, however, because
that would violate some constraints in the problem.
The exact trade-off between the different preferences is set in the
benefit_availability dict. With the above numbers, it's equally good for
somebody to have a NEUTRAL shift and a day off as it is for them to have
DISFAVORED shifts. If we really want to encourage the program to work people as
much as possible (days off discouraged), we'd want to raise the DISFAVORED
threshold.
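For example, a purely illustrative re-weighting (my own numbers, not the ones used in the run below): raising DISFAVORED from 1 to 1.5 makes two DISFAVORED shifts (3 points) beat one NEUTRAL shift plus a day off (2 points), so the solver would lean towards keeping people working rather than giving them days off.

benefit_availability['PREFERRED'] = 3
benefit_availability['NEUTRAL'] = 2
benefit_availability['DISFAVORED'] = 1.5  # raised from 1: working a DISFAVORED shift now beats staying home
benefit_availability['REFUSED' ] = -1000  # unchanged; REFUSED is enforced by a hard constraint anyway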
I run this program and I get:
....
Result - Optimal solution found
Objective value: 108.00000000
Enumerated nodes: 0
Total iterations: 0
Time (CPU seconds): 0.01
Time (Wallclock seconds): 0.01
Option for printingOptions changed from normal to all
Total time (CPU seconds): 0.02 (Wallclock seconds): 0.02
SUNDAY
---- SANDING 9:00 AM - 4:00 PM
EVE (PREFERRED)
---- SAWING 9:00 AM - 4:00 PM
IVAN (PREFERRED)
---- PAINTING 9:45 AM - 4:45 PM
FRANK (PREFERRED)
---- PAINTING 11:00 AM - 6:00 PM
HEIDI (PREFERRED)
---- SAWING 12:00 PM - 7:00 PM
ALICE (PREFERRED)
---- PAINTING 12:15 PM - 7:15 PM
CAROL (PREFERRED)
MONDAY
---- SAWING 9:00 AM - 4:00 PM
DAVID (PREFERRED)
---- PAINTING 9:45 AM - 4:45 PM
IVAN (PREFERRED)
---- PAINTING 12:00 PM - 7:00 PM
GRACE (PREFERRED)
---- SAWING 2:00 PM - 9:00 PM
ALICE (PREFERRED)
---- PAINTING 2:00 PM - 9:00 PM
HEIDI (NEUTRAL)
TUESDAY
---- SAWING 9:00 AM - 4:00 PM
DAVID (PREFERRED)
---- PAINTING 9:45 AM - 4:45 PM
EVE (PREFERRED)
---- PAINTING 12:00 PM - 7:00 PM
FRANK (NEUTRAL)
---- SAWING 2:00 PM - 9:00 PM
BOB (PREFERRED)
---- PAINTING 2:00 PM - 9:00 PM
HEIDI (NEUTRAL)
WEDNESDAY
---- SAWING 7:30 AM - 2:30 PM
DAVID (PREFERRED)
---- PAINTING 7:30 AM - 2:30 PM
IVAN (PREFERRED)
---- SANDING 9:45 AM - 4:45 PM
FRANK (PREFERRED)
---- PAINTING 12:00 PM - 7:00 PM
JUDY (PREFERRED)
---- SAWING 2:00 PM - 9:00 PM
BOB (PREFERRED)
---- PAINTING 2:00 PM - 9:00 PM
ALICE (NEUTRAL)
THURSDAY
---- SANDING 9:00 AM - 4:00 PM
GRACE (PREFERRED)
---- SAWING 9:00 AM - 4:00 PM
CAROL (PREFERRED)
---- PAINTING 9:45 AM - 4:45 PM
EVE (PREFERRED)
---- PAINTING 12:00 PM - 7:00 PM
JUDY (PREFERRED)
---- SAWING 2:00 PM - 9:00 PM
BOB (PREFERRED)
---- PAINTING 2:00 PM - 9:00 PM
ALICE (NEUTRAL)
FRIDAY
---- SAWING 9:00 AM - 4:00 PM
DAVID (PREFERRED)
---- PAINTING 9:45 AM - 4:45 PM
FRANK (PREFERRED)
---- PAINTING 12:00 PM - 7:00 PM
GRACE (NEUTRAL)
---- SAWING 2:00 PM - 9:00 PM
BOB (PREFERRED)
---- PAINTING 2:00 PM - 9:00 PM
HEIDI (NEUTRAL)
SATURDAY
---- SAWING 7:30 AM - 2:30 PM
CAROL (PREFERRED)
---- PAINTING 7:30 AM - 2:30 PM
IVAN (PREFERRED)
---- SANDING 9:45 AM - 4:45 PM
DAVID (PREFERRED)
---- PAINTING 12:00 PM - 7:00 PM
FRANK (NEUTRAL)
---- SAWING 2:00 PM - 9:00 PM
ALICE (PREFERRED)
---- PAINTING 2:00 PM - 9:00 PM
BOB (NEUTRAL)
SUMMARY
-- ALICE
benefit: 13.000
3 PREFERRED
2 NEUTRAL
0 DISFAVORED
-- BOB
benefit: 14.000
4 PREFERRED
1 NEUTRAL
0 DISFAVORED
-- CAROL
benefit: 9.000
3 PREFERRED
0 NEUTRAL
0 DISFAVORED
-- DAVID
benefit: 15.000
5 PREFERRED
0 NEUTRAL
0 DISFAVORED
-- EVE
benefit: 9.000
3 PREFERRED
0 NEUTRAL
0 DISFAVORED
-- FRANK
benefit: 13.000
3 PREFERRED
2 NEUTRAL
0 DISFAVORED
-- GRACE
benefit: 8.000
2 PREFERRED
1 NEUTRAL
0 DISFAVORED
-- HEIDI
benefit: 9.000
1 PREFERRED
3 NEUTRAL
0 DISFAVORED
-- IVAN
benefit: 12.000
4 PREFERRED
0 NEUTRAL
0 DISFAVORED
-- JUDY
benefit: 6.000
2 PREFERRED
0 NEUTRAL
0 DISFAVORED
So we have a solution! We have 108 total benefit points. But it looks a bit
uneven: Judy only works 2 days, while some people work many more: David works 5
for instance. Why is that? I update the program with human_annotate = 'JUDY',
run it again, and it tells me more about Judy's preferences:
Objective value: 108.00000000
Enumerated nodes: 0
Total iterations: 0
Time (CPU seconds): 0.01
Time (Wallclock seconds): 0.01
Option for printingOptions changed from normal to all
Total time (CPU seconds): 0.01 (Wallclock seconds): 0.02
SUNDAY (JUDY OFF)
---- SANDING 9:00 AM - 4:00 PM (JUDY NEUTRAL)
EVE (PREFERRED)
---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
IVAN (PREFERRED)
---- PAINTING 9:45 AM - 4:45 PM (JUDY PREFERRED)
FRANK (PREFERRED)
---- PAINTING 11:00 AM - 6:00 PM (JUDY NEUTRAL)
HEIDI (PREFERRED)
---- SAWING 12:00 PM - 7:00 PM (JUDY PREFERRED)
ALICE (PREFERRED)
---- PAINTING 12:15 PM - 7:15 PM (JUDY NEUTRAL)
CAROL (PREFERRED)
MONDAY (JUDY OFF)
---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
DAVID (PREFERRED)
---- PAINTING 9:45 AM - 4:45 PM (JUDY NEUTRAL)
IVAN (PREFERRED)
---- PAINTING 12:00 PM - 7:00 PM (JUDY NEUTRAL)
GRACE (PREFERRED)
---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
ALICE (PREFERRED)
---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
HEIDI (NEUTRAL)
TUESDAY (JUDY OFF)
---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
DAVID (PREFERRED)
---- PAINTING 9:45 AM - 4:45 PM (JUDY PREFERRED)
EVE (PREFERRED)
---- PAINTING 12:00 PM - 7:00 PM (JUDY REFUSED)
FRANK (NEUTRAL)
---- SAWING 2:00 PM - 9:00 PM (JUDY REFUSED)
BOB (PREFERRED)
---- PAINTING 2:00 PM - 9:00 PM (JUDY REFUSED)
HEIDI (NEUTRAL)
WEDNESDAY (JUDY SCHEDULED)
---- SAWING 7:30 AM - 2:30 PM (JUDY REFUSED)
DAVID (PREFERRED)
---- PAINTING 7:30 AM - 2:30 PM (JUDY REFUSED)
IVAN (PREFERRED)
---- SANDING 9:45 AM - 4:45 PM (JUDY NEUTRAL)
FRANK (PREFERRED)
---- PAINTING 12:00 PM - 7:00 PM (JUDY PREFERRED)
JUDY (PREFERRED)
---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
BOB (PREFERRED)
---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
ALICE (NEUTRAL)
THURSDAY (JUDY SCHEDULED)
---- SANDING 9:00 AM - 4:00 PM (JUDY PREFERRED)
GRACE (PREFERRED)
---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
CAROL (PREFERRED)
---- PAINTING 9:45 AM - 4:45 PM (JUDY PREFERRED)
EVE (PREFERRED)
---- PAINTING 12:00 PM - 7:00 PM (JUDY PREFERRED)
JUDY (PREFERRED)
---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
BOB (PREFERRED)
---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
ALICE (NEUTRAL)
FRIDAY (JUDY OFF)
---- SAWING 9:00 AM - 4:00 PM (JUDY DISFAVORED)
DAVID (PREFERRED)
---- PAINTING 9:45 AM - 4:45 PM (JUDY DISFAVORED)
FRANK (PREFERRED)
---- PAINTING 12:00 PM - 7:00 PM (JUDY DISFAVORED)
GRACE (NEUTRAL)
---- SAWING 2:00 PM - 9:00 PM (JUDY REFUSED)
BOB (PREFERRED)
---- PAINTING 2:00 PM - 9:00 PM (JUDY REFUSED)
HEIDI (NEUTRAL)
SATURDAY (JUDY OFF)
---- SAWING 7:30 AM - 2:30 PM (JUDY REFUSED)
CAROL (PREFERRED)
---- PAINTING 7:30 AM - 2:30 PM (JUDY REFUSED)
IVAN (PREFERRED)
---- SANDING 9:45 AM - 4:45 PM (JUDY REFUSED)
DAVID (PREFERRED)
---- PAINTING 12:00 PM - 7:00 PM (JUDY DISFAVORED)
FRANK (NEUTRAL)
---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
ALICE (PREFERRED)
---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
BOB (NEUTRAL)
SUMMARY
-- ALICE
benefit: 13.000
3 PREFERRED
2 NEUTRAL
0 DISFAVORED
-- BOB
benefit: 14.000
4 PREFERRED
1 NEUTRAL
0 DISFAVORED
-- CAROL
benefit: 9.000
3 PREFERRED
0 NEUTRAL
0 DISFAVORED
-- DAVID
benefit: 15.000
5 PREFERRED
0 NEUTRAL
0 DISFAVORED
-- EVE
benefit: 9.000
3 PREFERRED
0 NEUTRAL
0 DISFAVORED
-- FRANK
benefit: 13.000
3 PREFERRED
2 NEUTRAL
0 DISFAVORED
-- GRACE
benefit: 8.000
2 PREFERRED
1 NEUTRAL
0 DISFAVORED
-- HEIDI
benefit: 9.000
1 PREFERRED
3 NEUTRAL
0 DISFAVORED
-- IVAN
benefit: 12.000
4 PREFERRED
0 NEUTRAL
0 DISFAVORED
-- JUDY
benefit: 6.000
2 PREFERRED
0 NEUTRAL
0 DISFAVORED
This tells us that on Monday Judy does not work, although she marked the SAWING
shift as PREFERRED. Instead David got that shift. What would happen if David
gave that shift to Judy? He would lose 3 points, she would gain 3 points, and
the total would remain exactly the same at 108.
How would we favor a more even distribution? We need some sort of tie-break. I
want to add a nonlinearity to strongly disfavor people getting a low number of
shifts. But PuLP is very explicitly a linear programming solver, and cannot
solve nonlinear problems. Here we can get around this by enumerating each
specific case, and assigning it a nonlinear benefit function. The most obvious
approach is to define another set of boolean variables:
vars_Nshifts[human][N], and then use them to add extra benefit terms, with
values nonlinearly related to Nshifts. Something like this:
benefit_boost_Nshifts = \
{2: -0.8,
3: -0.5,
4: -0.3,
5: -0.2}
for h in humans:
    benefits[h] = \
        ... + \
        pulp.lpSum([vars_Nshifts[h][n] * benefit_boost_Nshifts[n] \
                    for n in benefit_boost_Nshifts.keys()])
So in the previous example we considered giving David's 5th shift to Judy, for
her 3rd shift. In that scenario, David's extra benefit would change from -0.2 to
-0.3 (a shift of -0.1), while Judy's would change from -0.8 to -0.5 (a shift of
+0.3). So balancing out the shifts in this way would work: the solver would
favor the solution with the higher benefit function.
Great. In order for this to work, we need the vars_Nshifts[human][N] variables
to function as intended: they need to be binary indicators of whether a specific
person has that many shifts or not. That would need to be implemented with
constraints. Let's plot it like this:
So a hypothetical vars_Nshifts[h][4] variable (plotted on the x axis of this
plot) would need to be defined by a set of linear AND constraints to linearly
separate the true (red) values of this variable from the false (black) values.
As can be seen in this plot, this isn't possible. So this representation does
not work.
How do we fix it? We can use inequality variables instead. I define a different
set of variables vars_Nshifts_leq[human][N] that are 1 iff Nshifts <= N.
The equality variable from before can be expressed as a difference of these
inequality variables: vars_Nshifts[human][N] =
vars_Nshifts_leq[human][N]-vars_Nshifts_leq[human][N-1]
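As a concrete sketch (my own naming and bounds, mirroring the Assignments declaration earlier; the full program linked at the end may do this differently), these indicator variables could be declared for the bounds 2 through 5 like so:

# vars_Nshifts_leq[h][n] is meant to be 1 iff human h works at most n shifts
vars_Nshifts_leq = pulp.LpVariable.dicts("Nshifts_leq",
                                         (humans, range(Nshifts_min, Nshifts_max+1)),
                                         None, None,
                                         pulp.LpBinary)
# and the equality indicator from before is then just the difference
#   vars_Nshifts[h][n] = vars_Nshifts_leq[h][n] - vars_Nshifts_leq[h][n-1]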
Can these vars_Nshifts_leq variables be defined by a set of linear AND
constraints? Yes:
So we can use two linear constraints to make each of these variables work
properly. To use these in the benefit function we can use the equality
constraint expression from above, or we can use these directly:
# I want to favor people getting more extra shifts at the start to balance
# things out: somebody getting one more shift on their pile shouldn't take
# shifts away from under-utilized people
benefit_boost_leq_bound = \
{2: .2,
3: .3,
4: .4,
5: .5}
# Constrain vars_Nshifts_leq variables to do the right thing
for h in humans:
    for b in benefit_boost_leq_bound.keys():
        prob += (pulp.lpSum([vars[h][s] for s in shifts.keys()])
                 >= (1 - vars_Nshifts_leq[h][b])*(b+1),
                 f"{h} at least {b} shifts: lower bound")
        prob += (pulp.lpSum([vars[h][s] for s in shifts.keys()])
                 <= Nshifts_max - vars_Nshifts_leq[h][b]*(Nshifts_max-b),
                 f"{h} at least {b} shifts: upper bound")
benefits = dict()
for h in humans:
    benefits[h] = \
        ... + \
        pulp.lpSum([vars_Nshifts_leq[h][b] * benefit_boost_leq_bound[b] \
                    for b in benefit_boost_leq_bound.keys()])
In this scenario, David would get a boost of 0.4 from giving up his 5th shift,
while Judy would lose a boost of 0.2 from getting her 3rd, for a net gain of 0.2
benefit points. The exact numbers will need to be adjusted on a case by case
basis, but this works.
The full program, with this and other extra features is available here.
Today, we look at a simple bit of bad code. The badness is not that they're using Oracle, though that's always bad. But it's how they're writing this PL/SQL stored function:
FUNCTION CONVERT_STRING_TO_DATE --Public
(p_date_string IN Varchar2,
p_date_format IN Varchar2 DEFAULT c_date_format)
Return Date
AS
BEGIN
  If p_date_string Is Null Then
    Return Null;
  Else
    Return To_Date(p_date_string, p_date_format);
  End If;
END; -- FUNCTION CONVERT_STRING_DATE
This code is a wrapper around the to_date function. The to_date function takes a string and a format and returns the string parsed as a date.
This wrapper adds two things, and the first is a null check. If the input string is null, just return null. Except that's exactly how to_date behaves anyway.
The second is that it sets the default format to c_date_format. This, actually, isn't a terrible thing. If you check the docs on the function, you'll see that if you don't supply a format, it defaults to whatever is set in your internationalization settings, and Oracle recommends that you don't rely on that.
On the flip side, this code is used as part of queries, not for processing input, which means that they're storing dates as strings, and relying on the application layer to send them properly formatted strings. So while their to_date wrapper isn't a terrible thing, storing dates as strings definitely is a terrible thing.
Author: Majoki It started with a chatbot and ended in, well, that would be predicting the future. Which is exactly my problem. I’m sure I’m not the only computer science graduate student into astrology, Tarot cards, numerology, palm reading, and other fortune-telly kind of things, but I’m the one who, late one night, asked a […]
r2u, introduced less
than three years ago in post
#37, has become a runaway success. When I last tabulated downloads
in early January, we were already at 33 million downloads of binary CRAN packages across the three
Ubuntu LTS releases we support. These were exclusively for the ‘amd64’
platform of standard (Intel or AMD made) x86_64 cpus. Now we are happy
to announce that arm64 support has been added and is available!
Why arm64?
The arm64 platform is already popular on (cloud) servers and is being
pushed quite actively by the cloud vendors. AWS calls their cpu
‘graviton’, GCS calls it ‘axion’. General servers call the cpu ‘ampere’;
on laptop / desktops it is branded ‘snapdragon’ or ‘cortex’ or something
else. Apple calls their variant M1, M2, … up to M4 by now (and Linux
support exists for the brave, it is less straightforward). What these
have in common is a generally more favourable ‘power consumed to ops
provided’ ratio. That makes these cheaper to run or rent on cloud
providers. And in laptops they tend to last longer on a single charge
too.
Distributions such as Debian, Ubuntu and Fedora have had arm64 support for many
years. In fact, the CRAN
binaries of R, built at launchpad.net, have long provided arm64 in
Michael’s repo, and we now also mirror these to CRAN. Similarly, Docker has long
supported arm64 containers. And last but not least, two issue tickets (#40, #55) had asked
for this a while back.
So Why Now?
Good question. I still do not own any hardware with it, and I have
not (yet?) bothered with the qemu-based emulation layer. The real
difference maker was the recent availability of GitHub Actions instances
of ‘ubuntu-24.04-arm’ (and now apparently also for 22.04).
So I started some simple experiments … which made it clear this was
viable.
What Does It Mean for a CRAN
Repo?
Great question. As is commonly known, of the (currently) 22.1k CRAN
packages, a little under 5k are ‘compiled’. Why does this matter?
Because the Linux distributions know what they are doing. The 17k (give
or take) packages that do not contain compiled code can be used as is
(!!) on another platform. Debian and Ubuntu call these builds ‘binary:
all’ as they work on all platforms ‘as is’. The others go by ‘binary: any’
and will work on ‘any’ platform for which they have been built. So we
are looking at roughly 5k new binaries.
So How Many Are There?
As I write this in early March, roughly 4.5k of the 5k. Plus the
17.1k ‘binary: all’ and we are looking at near complete coverage!
So What Is The Current State?
Pretty complete. Compared to the amd64 side of things, we do not (yet?) have
BioConductor support; this may be added. A handful of packages
do not compile because their builds seem to assume ‘Linux so must be
amd64’ and fail on cpu instructions. Similarly, a few packages want to
download binary build blobs (my own Rblpapi among them) but
none exist for arm64. Such is life. We will try to fix builds as time
permits and report build issues to the respective upstream repos. Help
in that endeavour would be most welcome.
But all the big and slow compiles one may care about (hello
duckdb, hello arrow, …) are there. Which is
pretty exciting!
How Does One Get Started?
In GitHub Actions, just pick ubuntu-24.04-arm as the
platform, and use the r-ci or r2u-setup
actions. A first test yaml exists and
worked (though this last version had the arm64 runner commented out
again). (And no, arm64 was not faster than amd64. More tests
needed.)
For simple tests, Docker. The rocker/r-ubuntu:24.04
container exists for arm64 (see here), and one
can add r2u support as is done in
this Dockerfile which is used by the builds and available as
eddelbuettel/r2u_build:noble. I will add the standard
rocker/r2u:24.04 container (or equally
rocker/r2u:noble) in a day or two; I had not realised I
wasn’t making them for arm64.
On a real machine such as a cloud instance or a proper server,
just use the standard r2u script for noble aka 24.04
available here.
The key lines are the two lines
creating the apt entry, which are now arm64-aware.
After that, apt works as usual, and of course r2u works as
usual thanks also to bspm, so you can just do, say, an install.packages() call
and enjoy the binaries rolling in. So give it a whirl if you have
access to such hardware. We look forward to feedback, suggestions, feature
requests or bug reports. Let us know how it goes!
In today’s digital landscape, social media is more than just a communication tool — it is the primary medium for global discourse. Heads of state, corporate leaders and cultural influencers now broadcast their statements directly to the world, shaping public opinion in real time. However, the dominance of a few centralized platforms — X/Twitter, Facebook and YouTube — raises critical concerns about control, censorship and the monopolization of information. Those who control these networks effectively wield significant power over public discourse.
In response, a new wave of distributed social media platforms has emerged, each built on different decentralized protocols designed to provide greater autonomy, censorship resistance and user control. While Wikipedia maintains a comprehensive list of distributed social networking software and protocols, it does not cover recent blockchain-based systems, nor does it highlight which have the most potential for mainstream adoption.
This post explores the leading decentralized social media platforms and the protocols they are based on: Mastodon (ActivityPub), Bluesky (AT Protocol), Warpcast (Farcaster), Hey (Lens) and Primal (Nostr).
Comparison of architecture and mainstream adoption potential
1. Mastodon (ActivityPub)
Mastodon was created in 2016 by Eugen Rochko, a German software developer who sought to provide a decentralized and user-controlled alternative to Twitter. It was built on the ActivityPub protocol, now standardized by the W3C Social Web Working Group, to allow users to join independent servers while still communicating across the broader Mastodon network.
Mastodon operates on a federated model, where multiple independently run servers communicate via ActivityPub. Each server sets its own moderation policies, leading to a decentralized but fragmented experience. The servers can alternatively be called instances, relays or nodes, depending on what vocabulary a protocol has standardized on.
Identity: User identity is tied to the instance where they registered, represented as @username@instance.tld.
Storage: Data is stored on individual instances, which federate messages to other instances based on their configurations.
Cost: Free to use, but relies on instance operators willing to run the servers.
Servers communicate across different platforms by publishing activities to their followers or forwarding activities between servers. Standard HTTPS is used between servers for communication, and the messages use JSON-LD for data representation. The WebFinger protocol is used for user discovery. There is however no neat way for home server discovery yet. This means that if you are browsing e.g. Fosstodon and want to follow a user and press Follow, a dialog will pop up asking you to enter your own home server (e.g. mastodon.social) to redirect you there for actually executing the Follow action with your account.
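As a concrete illustration (my own sketch, not part of the original text), the WebFinger lookup is a plain HTTPS GET against a well-known endpoint; the account and instance names below are placeholders:

import json
import urllib.request

def webfinger_lookup(user, instance):
    # WebFinger discovery (RFC 7033): ask the user's instance about the account,
    # then pick the ActivityPub actor URL out of the returned links.
    url = (f"https://{instance}/.well-known/webfinger"
           f"?resource=acct:{user}@{instance}")
    with urllib.request.urlopen(url) as response:
        doc = json.load(response)
    for link in doc.get("links", []):
        if link.get("rel") == "self" and link.get("type") == "application/activity+json":
            return link["href"]  # URL of the user's actor document
    return None

# Hypothetical example:
# print(webfinger_lookup("someuser", "mastodon.social"))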
Mastodon is open source under the AGPL at github.com/mastodon/mastodon. Anyone can operate their own instance. It just requires running your own server, some skills to maintain a Ruby on Rails app with a PostgreSQL database backend, and a basic understanding of the protocol to configure federation with other ActivityPub instances.
Popularity: Already established, but will it grow more?
Mastodon has seen steady growth, especially after Twitter’s acquisition in 2022, with some estimates stating it peaked at 10 million users across thousands of instances. However, its fragmented user experience and the complexity of choosing instances have hindered mainstream adoption. Still, it remains the most established decentralized alternative to Twitter.
The ActivityPub protocol is the most widely used of its kind. One of the other most popular services is the Lemmy link sharing service, similar to Reddit. The larger ecosystem of ActivityPub is called Fediverse, and estimates put the total active user count around 6 million.
2. Bluesky (AT Protocol)
Interestingly, Bluesky was conceived within Twitter in 2019 by Twitter founder Jack Dorsey. After being incubated as a Twitter-funded project, it spun off as an independent Public Benefit LLC in February 2022 and launched its public beta in February 2023.
Bluesky runs on top of the Authenticated Transfer (AT) Protocol published at https://github.com/bluesky-social/atproto. The protocol enables portable identities and data ownership, meaning users can migrate between platforms while keeping their identity and content intact. In practice, however, there is only one popular server at the moment, which is Bluesky itself.
Identity: Usernames are domain-based (e.g., @user.bsky.social).
Storage: Content is theoretically federated among various servers.
Cost: Free to use, but relies on instance operators willing to run the servers.
Popularity: Hybrid approach may have business benefits?
Bluesky reported over 3 million users by 2024, probably getting traction due to its Twitter-like interface and Jack Dorsey’s involvement. Its hybrid approach — decentralized identity with centralized components — could make it a strong candidate for mainstream adoption, assuming it can scale effectively.
3. Warpcast (Farcaster Network)
Farcaster was launched in 2021 by Dan Romero and Varun Srinivasan, both former crypto exchange Coinbase executives, to create a decentralized but user-friendly social network. Built on the Ethereum blockchain, it could potentially offer a very attack-resistant communication medium.
However, in my own testing, Farcaster does not seem to fully leverage what Ethereum could offer. First of all, there is no diversity in programs implementing the protocol as at the moment there is only Warpcast. In Warpcast the signup requires an initial 5 USD fee that is not payable in ETH, and users need to create a new wallet address on the Ethereum layer 2 network Base instead of simply reusing their existing Ethereum wallet address or ENS name.
Despite this, I can understand why Farcaster may have decided to start out like this. Having a single client program may be the best strategy initially. Matthew Hodgson, one of the founders of the decentralized chat protocol Matrix, shared in his FOSDEM 2025 talk that he slightly regrets focusing too much on developing the protocol instead of making sure the app that uses it is attractive to end users. So it may be sensible to ensure Warpcast gets popular first, before attempting to make the Farcaster protocol widely used.
As a protocol Farcaster’s hybrid approach makes it more scalable than fully on-chain networks, giving it a higher chance of mainstream adoption if it integrates seamlessly with broader Web3 ecosystems.
Identity: ENS (Ethereum Name Service) domains are used as usernames.
Storage: Messages are stored in off-chain hubs, while identity is on-chain.
Cost: Users must pay gas fees for some operations but reading and posting messages is mostly free.
Popularity: Decentralized social media + decentralized payments a winning combo?
Ethereum founder Vitalik Buterin (warpcast.com/vbuterin) and many core developers are active on the platform. Warpcast, the main client for Farcaster, has seen increasing adoption, especially among Ethereum developers and Web3 enthusiasts. I too have a profile at warpcast.com/ottok. However, the numbers are still very low and far from reaching the network effects needed to really take off.
Blockchain-based social media networks, particularly those built on Ethereum, are compelling because they leverage existing user wallets and persistent identities while enabling native payment functionality. When combined with decentralized content funding through micropayments, these blockchain-backed social networks could offer unique advantages that centralized platforms may find difficult to replicate, being decentralized both as a technical network and in a funding mechanism.
4. Hey.xyz (Lens Network)
The Lens Protocol was developed by decentralized finance (DeFi) team Aave and launched in May 2022 to provide a user-owned social media network. While initially built on Polygon, it has since launched its own Layer 2 network called the Lens Network in February 2024. Lens is currently the main competitor to Farcaster.
Lens stores profile ownership and references on-chain, while content is stored on IPFS/Arweave, enabling composability with DeFi and NFTs.
Identity: Profile ownership is tied to NFTs on the Polygon blockchain.
Storage: Content is on-chain and integrates with IPFS/Arweave (like NFTs).
Cost: Users must pay gas fees for some operations but reading and posting messages is mostly free.
Popularity: Probably not as social media site, but maybe as protocol?
The social media side of Lens is mainly the Hey.xyz website, which seems to have fewer users than Warpcast, and is even further away from reaching critical mass for network effects. The Lens protocol however has a lot of advanced features and it may gain adoption as the building block for many Web3 apps.
5. Primal.net (Nostr Network)
Nostr (Notes and Other Stuff Transmitted by Relays) was conceptualized in 2020 by an anonymous developer known as fiatjaf. One of the primary design tenets was to be a censorship-resistant protocol and it is popular among Bitcoin enthusiasts, with Jack Dorsey being one of the public supporters. Unlike the Farcaster and Lens protocols, Nostr is not blockchain-based but just a network of relay servers for message distribution. It does however use public key cryptography for identities, similar to how wallets work in crypto.
Popularity: If Jack Dorsey and Bitcoiners promote it enough?
Primal.net as a web app is pretty solid, but it does not stand out much. While Jack Dorsey has shown support by donating $1.5 million to the protocol development in December 2021, its success likely depends on broader adoption by the Bitcoin community.
Will any of these replace X/Twitter?
As usage patterns vary, the statistics are not fully comparable, but this snapshot of the situation in March 2025 gives a decent overview.
Mastodon and Bluesky have already reached millions of users, while Lens and Farcaster are growing within crypto communities. It is however clear that none of these are anywhere close to how popular X/Twitter is. In particular, Mastodon had a huge influx of users in the fall of 2022 when Twitter was acquired, but to challenge the incumbents the growth would need to significantly accelerate. We can all accelerate this development by embracing decentralized social media now alongside existing dominant platforms.
Who knows, given the right circumstances maybe X.com leadership decides to change the operating model and start federating content to break out from a walled garden model. The likelihood of such a development would increase if decentralized networks get popular, and the incumbents feel they need to participate to not lose out.
Past and future
The idea of decentralized social media is not new. One early pioneer, identi.ca, launched in 2008, only two years after Twitter, using the OStatus protocol to promote decentralization. A few years later it evolved into pump.io with the ActivityPump protocol, and also forked into GNU Social that continued with OStatus. I remember when these happened, and that in 2010 Diaspora also launched with fairly large publicity. Surprisingly both of these still operate (I can still post both on identi.ca and diasp.org), but the activity fizzled out years ago. The protocol however survived partially and evolved into ActivityPub, which is now the backbone of the Fediverse.
The evolution of decentralized social media over the next decade will likely parallel developments in democracy, freedom of speech and public discourse. While the early 2010s emphasized maximum independence and freedom, the late 2010s saw growing support for content moderation to combat misinformation. The AI era introduces new challenges, potentially requiring proof-of-humanity verification for content authenticity.
Key factors that will determine success:
User experience and ease of onboarding
Network effects and critical mass of users
Integration with existing web3 infrastructure
Balance between decentralization and usability
Sustainable economic models for infrastructure
This is clearly an area of development worth monitoring closely, as the next few years may determine which protocol becomes the de facto standard for decentralized social communication.
Some of you may remember that I recently felt a bit underwhelmed
by the last pager I reverse engineered – the Retekess TD-158,
mostly due to how intuitive their design decisions were. It was pretty easy
to jump to conclusions because they had made some pretty good decisions on
how to do things.
I figured I’d spin the wheel again and try a new pager system – this time I
went for a SU-68G-10 pager, since I recognized the form factor as another
fairly common unit I’ve seen around town. Off to Amazon I went, bought a set,
and got to work trying to track down the FCC filings on this model. I
eventually found what seemed to be the right make/model, and it, once again,
indicated that this system should be operating in the 433 MHz ISM band likely
using OOK modulation. So, figured I’d start with the center of the band (again)
at 433.92 MHz, take a capture, test my luck, and was greeted with a now very
familiar sight.
Same as the last go-arounds, except the preamble here is a 0 symbol followed
by 6-ish symbol durations of no data, followed by 25 bits of a packet. Careful
readers will observe 26 symbols above after the preamble – I did too! The last
0 in the screenshot above is not actually a part of the packet – rather,
it’s part of the next packet’s preamble. Each packet is packed in pretty tight.
By Hand Demodulation
Going off the same premise as last time, I figured I'd give it a manual demod
and see what shakes out (again). This is now the third time I've run this play,
so check out either of my prior two posts for a
better-written description of what's going on here – I'll skip all the details
since I'd just be copy-pasting from those posts into here. Long story short, I
demodulated a call for pager 1, call for pager 10, and a power off command.
What     Bits
Call 1   1101111111100100100000000
Call 10  1101111111100100010100000
Off      1101111111100111101101110
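For reference, the by-hand slicing boils down to something like the following sketch (my own illustrative code, not from the original posts): compute the magnitude, threshold it, and take one decision per symbol period starting from a manually located start-of-packet sample.

import numpy as np

def ook_slice_bits(iq, fs, symbol_s, start_idx, nbits=25):
    # Crude OOK slicer: magnitude -> midpoint threshold -> one sample per symbol
    mag = np.abs(iq)
    threshold = (mag.max() + mag.min()) / 2
    sps = int(round(fs * symbol_s))              # samples per symbol
    centers = start_idx + sps // 2 + sps * np.arange(nbits)
    return ''.join('1' if mag[i] > threshold else '0' for i in centers)

# Hypothetical usage with a complex-float capture file and a 1 Msps sample rate:
# iq = np.fromfile('pager_capture.c64', dtype=np.complex64)
# bits = ook_slice_bits(iq, fs=1_000_000, symbol_s=1300e-6, start_idx=123456)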
A few things jump out at me here – the first 14 bits are fixed (in my case,
11011111111001), which means some mix of preamble, system id, or other
system-wide constant. Additionally, the last 9 bits also look like they are our
pager – the 1 and 10 pager numbers (LSB bit order) jump right out
(100000000 and 010100000, respectively). That just leaves the two remaining
bits which look to be the “action” – 00 for a “Call”, and 11 for a “Power
off”. I don't super love this since the command has two bits rather than one, the
base station ID seems really long, and a 9-bit Pager ID is just weird. Also,
what is up with that power-off pager id? Weird. So, let’s go and see what we
can do to narrow down and confirm things by hand.
Testing bit flips
Rather than call it a day at that, I figured it was worth a bit of diligence to
make sure it's all correct – so I decided to try sending packets to
my pagers and see how they react to different messages after flipping bits
in parts of the packet.
I implemented a simple base station for the pagers using my Ettus B210mini, and
threw together a simple OOK modulator and transmitter program which allows me
to send specifically crafted test packets on frequency. Implementing the base
station is pretty straightforward: because of the modulation of the signal
(OOK), it's mostly a matter of setting a buffer to 1 and 0 for where the
carrier signal is on or off, timed to the sample rate, and sending that off to
the radio. If you're interested in a more detailed writeup on the steps
involved, there's a bit more in my Christmas tree post.
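To give a rough idea of what that bits-to-samples step looks like, here's a minimal Python sketch – the sample rate and symbol duration are placeholders, not the exact values my transmitter uses:

    import numpy as np

    def ook_baseband(bits: str, sample_rate: float, symbol_duration_s: float) -> np.ndarray:
        # Carrier "on" for a 1 symbol, "off" for a 0 symbol, at the given sample rate.
        samples_per_symbol = int(round(sample_rate * symbol_duration_s))
        buffer = np.zeros(len(bits) * samples_per_symbol, dtype=np.complex64)
        for i, bit in enumerate(bits):
            if bit == "1":
                start = i * samples_per_symbol
                buffer[start:start + samples_per_symbol] = 1.0
        return buffer

    # Placeholder values: 1 Msps, ~1.3 ms symbols (roughly the timing listed later on).
    iq = ook_baseband("1101111111100100100000000", sample_rate=1e6, symbol_duration_s=1.3e-3)

That buffer then gets handed off to the radio driver (UHD, for an Ettus B210) – I'm glossing over gain, tuning, and the preamble framing here.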
First off, I’d like to check the base id. I want to know if all the bits in
what I’m calling the “base id” are truly part of the base station ID, or
perhaps they have some other purpose (version, preamble?). I wound up following
a three-step process for each bit of the base station id:
Starting with an unmodified call packet for the pager under test:
Flip the Nth bit, and transmit the call. See if the pager reacts.
Hold “SET”, and pair the pager with the new packet.
Transmit the call. See if the pager reacts.
After re-setting the ID, transmit the call with the physical base station,
see if the pager reacts.
Starting with an unmodified off packet for the pager system:
Flip the Nth bit, transmit the off, see if the pager reacts.
What wound up happening is that changing any bit in the first 14 bits meant
that the packet no longer worked with any pager until it was re-paired, at
which point it began to work again. This likely means the first 14 bits are
part of the base station ID – not static between base stations, and not some
constant like a version. All bits appear to be used.
I repeated the same process with the “command” bits, and found that only 11
and 00 caused the pagers to react for the pager ids I've tried.
I repeated this process one last time with the "pager id" bits, and
found the last bit in the packet isn't part of the pager ID: it can be either
a 1 or a 0 and the pager reacts as if it were a 0. This means
the last bit is unknown but has no impact on either a power off or a
call, and all messages sent by my base station always have it set to 0. It's not
clear if this is used by anything – likely not, since setting the bit
doesn't result in any change of behavior I can see yet.
Final Packet Structure
After playing around with flipping bits and testing, here is the final structure
I came up with, based on the behavior I observed from transmitting
hand-crafted packets and watching pagers buzz:
Field      Width
base id    14 bits
command    2 bits
pager id   8 bits
???        1 bit
Commands
The command section bit comes in two flavors – either a “call” or an “off”
command.
Type   Id (2 bits)   Description
Call   00            Call the pager identified by the id in pager id
Off    11            Request pagers power off, pager id is always 10110111
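Putting it all together, here's a small Python sketch of a packet builder that follows the structure above – this is my reconstruction from observed behavior, so treat the layout as a best guess:

    # Assumed layout, per the tables above: 14-bit base id, 2-bit command,
    # 8-bit pager id (LSB-first), one trailing bit of unknown purpose.
    CMD_CALL = "00"
    CMD_OFF = "11"

    def build_packet(base_id_bits: str, command: str, pager_id: int) -> str:
        assert len(base_id_bits) == 14 and command in (CMD_CALL, CMD_OFF)
        pager_bits = format(pager_id, "08b")[::-1]  # pack the pager id LSB-first
        return base_id_bits + command + pager_bits + "0"

    # Reproduces the "Call 1" and "Off" captures from earlier in the post.
    print(build_packet("11011111111001", CMD_CALL, 1))     # 1101111111100100100000000
    print(build_packet("11011111111001", CMD_OFF, 0xED))   # 1101111111100111101101110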
As for the actual RF PHY characteristics, here are my best guesses at what's
going on with them:
What               Description
Center Frequency   433.92 MHz
Modulation         OOK
Symbol Duration    1300us
Bits               25
Preamble           325us of carrier, followed by 8800us of no carrier
I'm not 100% on the timings, but they appear to be close enough to work
reliably. Same with the center frequency: it's roughly right, but there
may be a slight offset I'm missing.
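For a rough sense of on-air time, using the approximate numbers from the table above, a full frame works out to a bit over 41 ms:

    # Frame timing per the PHY table above (all values in microseconds).
    PREAMBLE_CARRIER_US = 325
    PREAMBLE_GAP_US = 8800
    SYMBOL_US = 1300
    BITS_PER_PACKET = 25

    frame_us = PREAMBLE_CARRIER_US + PREAMBLE_GAP_US + BITS_PER_PACKET * SYMBOL_US
    print(f"{frame_us} us per frame (~{frame_us / 1e3:.1f} ms)")  # 41625 us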
Lingering Questions
This was all generally pretty understandable – another system that had some
good decisions, and wasn't too bad to reverse engineer. It was a bit more fun
than the last one, since there was some ambiguity that needed follow-up to
confirm, but still nothing crazy.
I am left with a few questions, though – which I’m kinda interested in
understanding, but I’ll likely need a lot more data and/or original source:
Why is the "command" two bits here? This is hard to square with the number of bits
they have at their disposal – given the one last bit at
the end of the packet that doesn't seem to do anything, there's no reason this
couldn't have been a 16-bit base station id and an 8-bit pager id along with a
single-bit command (call or off).
When sending an “off” – why is power off that bit pattern? Other pager IDs
don’t seem to work with “off”, so it has some meaning, but I’m not sure what
that is. You press and hold 9 on the physical base station, but the code winds
up coming out to 0xED, 237 or maybe -19 if it’s signed. I can’t quite
figure out why it’s this value. Are there other codes?
Finally – what’s up with the last bit? Why is it 25 bits and not 24? It must
take more work to process something that isn’t 8 bit aligned – and all for
something that’s not being used!
The end of the quarter was approaching, and dark clouds were gathering in the C-suite. While they were trying to be tight lipped about it, the scuttlebutt was flowing freely. Initech had missed major sales targets, and not just by a few percentage points, but by an order of magnitude.
Heads were going to roll.
Except there was a problem: the master report that had kicked off this tizzy didn't seem to align with the department specific reports. For the C-suite, it was that report that was the document of record; they had been using it for years, and had great confidence in it. But something was wrong.
Enter Jeff. Jeff had been hired to migrate their reports to a new system, and while this particular report had not yet been migrated, Jeff at least had familiarity, and was capable of answering the question: "what was going on?" Were the sales really that far off, and was everyone going to lose their jobs? Or could it possibly be that this ancient and well used report might be wrong?
The core of the query was basically a series of subqueries. Each subquery followed this basic pattern:
SELECT SUM(complex_subquery_A) as subtotal FROM complex_subquery_B
None of this was particularly readable, mind you, and it took some digging just to get the shape of the individual queries understood. But none of the individual queries were the problem; it was the way they got stitched together:
SELECT SUM(subtotal)
FROM
  (SELECT SUM(complex_subquery_A) as subtotal FROM complex_subquery_B
   UNION SELECT SUM(complex_subquery_C) as subtotal FROM complex_subquery_D
   UNION SELECT SUM(complex_subquery_E) as subtotal FROM complex_subquery_F);
The full query was filled with a longer chain of unions, but it was easy to understand what went wrong, and demonstrate it to management.
The UNION operator does a set union – which means if there are any duplicate values, only one gets included in the output. So if "Department A" and "Department C" both have $1M in sales for the quarter, the total will just be $1M – not the expected $2M.
The correct version of the query would use UNION ALL, which preserves duplicates.
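Here's a tiny self-contained demonstration of the failure mode (SQLite via Python, with made-up numbers rather than Initech's actual subqueries):

    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Two departments that happen to post identical subtotals.
    dedup = conn.execute(
        "SELECT SUM(subtotal) FROM (SELECT 1000000 AS subtotal UNION SELECT 1000000)"
    ).fetchone()[0]
    keep_all = conn.execute(
        "SELECT SUM(subtotal) FROM (SELECT 1000000 AS subtotal UNION ALL SELECT 1000000)"
    ).fetchone()[0]

    print(dedup)     # 1000000 -- UNION silently dropped the duplicate row
    print(keep_all)  # 2000000 -- UNION ALL preserves both rows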
What stunned Jeff was that this report was old enough to be basically an antique, and this was the kind of business that would burn an entire forest down to find out why a single invoice was off by $0.15. It was sheer luck that this hadn't caused an explosion before – or maybe in the past it had, and someone had just written it off as a "minor glitch"?
Unfortunately for Jeff, because the report was so important it required a huge number of approvals before the "UNION ALL" change could be deployed, which meant he was called upon to manually run a "test" version of the report containing the fix every time a C-suite executive wanted one, until the end of the following quarter, when he could finally integrate the fix.
Author: Mark Cowling Thank you for using the CarePlus AI Assistant automated customer service! Your question: Please help. After a minor fall, my Assistant wouldn’t let me leave my bed for a week. Now it’s put me on a diet of little more than bread and water. It’s getting harder and harder to do anything. […]
This is a sad story of someone who downloaded a Trojaned AI tool that resulted in hackers taking over his computer and, ultimately, costing him his job.
In December, Scott Kitterman announced his retirement from the project.
I personally regret this, as I vividly remember his invaluable support
during the Debian Med sprint at the start of the COVID-19 pandemic. He
even took time off to ensure new packages cleared the queue in under 24
hours. I want to take this opportunity to personally thank Scott for his
contributions during that sprint and for all his work in Debian.
With one fewer FTP assistant, I am concerned about the increased
workload on the remaining team. I encourage anyone in the Debian
community who is interested to consider reaching out to the FTP masters
about joining their team.
If you're wondering about the role of the FTP masters, I'd like to share
a fellow developer's perspective:
"My read on the FTP masters is:
In truth, they are the heart of the project.
They know it.
They do a fantastic job."
I fully agree and see it as part of my role as DPL to ensure this
remains true for Debian's future.
If you're looking for a way to support Debian in a critical role where
many developers will deeply appreciate your work, consider reaching out
to the team. It's a great opportunity for any Debian Developer to
contribute to a key part of the project.
Project Status: Six Months of Bug of the Day
In my Bits from the DPL talk at DebConf24, I announced the Tiny Tasks
effort, which I intended to start with a Bug of the Day project.
Another idea was an Autopkgtest of the Day, but this has been postponed
due to limited time resources; I cannot run both projects in parallel.
The original goal was to provide small, time-bound examples for
newcomers. To put it bluntly: in terms of attracting new contributors,
it has been a failure so far. My offer to explain individual bug-fixing
commits in detail, if needed, received no response, and despite my
efforts to encourage questions, none were asked.
However, the project has several positive aspects: experienced
developers actively exchange ideas, collaborate on fixing bugs, assess
whether packages are worth fixing or should be removed, and work
together to find technical solutions for non-trivial problems.
So far, the project has been engaging and rewarding every day, bringing
new discoveries and challenges – not just technical, but also social.
Fortunately, in the vast majority of cases, I receive positive responses
and appreciation from maintainers. Even in the few instances where help
was declined, it was encouraging to see that in two cases, maintainers
used the ping as motivation to work on their packages themselves. This
reflects the dedication and high standards of maintainers, whose work is
essential to the project's success.
I once used the metaphor that this project is like wandering through a
dark basement with a lone flashlight – exploring aimlessly and discovering
a wide variety of things that have accumulated over the years. Among
them are true marvels with popcon >10,000, ingenious tools, and
delightful games that I only recently learned about. There are also some
packages whose time may have come to an end – but each of them reflects
the dedication and effort of those who maintained them, and that
deserves the utmost respect.
Leaving aside the challenge of attracting newcomers, what have we
achieved since August 1st last year?
Fixed more than one package per day, typically addressing multiple bugs.
Added and corrected numerous Homepage fields and watch files.
The most frequently patched issue was "Fails To Cross-Build From Source"
(all including patches).
Migrated several packages from cdbs/debhelper to dh.
Rewrote many d/copyright files to DEP5 format and thoroughly reviewed them.
Integrated all affected packages into Salsa and enabled Salsa CI.
Approximately half of the packages were moved to appropriate teams,
while the rest are maintained within the Debian or Salvage teams.
Regularly performed team uploads, ITS, NMUs, or QA uploads.
Filed several RoQA bugs to propose package removals where appropriate.
Reported multiple maintainers to the MIA team when necessary.
With some goodwill, you can see a slight impact on the trends.debian.net
graphs (thank you Lucas for the graphs), but I would never claim that
this project alone is responsible for the progress. What I have also
observed is the steady stream of daily uploads to the delayed queue,
demonstrating the continuous efforts of many contributors. This ongoing
work often remains unseen by most – including myself, if not for my
regular check-ins on this list. I would like to extend my sincere thanks
to everyone pushing fixes there, contributing to the overall quality and
progress of Debian's QA efforts.
If you examine the graphs for "Version Control System" and "VCS Hosting"
with the goodwill mentioned above, you might notice a positive trend
since mid-last year. The "Package Smells" category has also seen
reductions in several areas: "no git", "no DEP5 copyright", "compat <9",
and "not salsa". I'd also like to acknowledge the NMUers who have been
working hard to address the "format != 3.0" issue. Thanks to all their
efforts, this specific issue never surfaced in the Bug of the Day
effort, but their contributions deserve recognition here.
The experience I gathered in this project taught me a lot and inspired some
follow-up ideas that we should discuss at a sprint at DebCamp this year.
Finally, if any newcomer finds this information interesting, I'd be
happy to slow down and patiently explain individual steps as needed. All
it takes is asking questions on the Matrix channel to turn this into
a "teaching by example" session.
By the way, for newcomers who are interested, I used quite a few
abbreviations – all of which are explained in the Debian Glossary.
Sneak Peek at Upcoming Conferences
I will join two conferences in March – feel free to talk to me if you spot
me there.
There are things which are true. Regular expressions frequently perform badly. They're hard to read. Email addresses are not actually regular languages, and thus can't truly be validated (in all their many possible forms) by a pure regex.
These are true. It's also true that a simple regex can get you most of the way there.
Lucas found this in their codebase, for validating emails.
function echeck(str) {
  var at="@";
  var dot=".";
  var lat=str.indexOf(at);
  var lstr=str.length;
  var ldot=str.indexOf(dot);
  if (str.indexOf(at)==-1){
    alert("You must include an accurate email address for a response.");
    return false;
  }
  if (str.indexOf(at)==-1 || str.indexOf(at)==0 || str.indexOf(at)==lstr){
    alert("You must include an accurate email address for a response.");
    return false;
  }
  if (str.indexOf(dot)==-1 || str.indexOf(dot)==0 || str.indexOf(dot)==lstr){
    alert("You must include an accurate email address for a response.");
    return false;
  }
  if (str.indexOf(at,(lat+1))!=-1){
    alert("You must include an accurate email address for a response.");
    return false;
  }
  if (str.substring(lat-1,lat)==dot || str.substring(lat+1,lat+2)==dot){
    alert("You must include an accurate email address for a response.");
    return false;
  }
  if (str.indexOf(dot,(lat+2))==-1){
    alert("You must include an accurate email address for a response.");
    return false;
  }
  if (str.indexOf(" ")!=-1){
    alert("You must include an accurate email address for a response.");
    return false;
  }
  return true;
}
It checks that the string contains an "@", and the "@" is not at the beginning or end of the string. Then it does the same check for a ".". Then it checks that there isn't a second "@". Then it checks that there are at least two non-"@" characters before the ".". Then it checks that there's at least one "." after the "@". Then it checks that there are no spaces.
Like a regex, I don't think this covers the entire space of valid and invalid email addresses, but that's just because the email address spec is complicated. It likely qualifies as "good enough" on that front. But it's the most awkward way to express that series of tests, especially since it creates variables which might be useful, but never uses them, thus calling str.indexOf many, many times. The awkwardness becomes more obvious with the way it outputs the same error message in multiple branches. Outputs them using alert, I might add, which is the kind of choice that should send someone to the Special Hell™.
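For comparison, the "simple regex that gets you most of the way there" can be sketched in a few lines of Python – a rough heuristic (one "@", a "." after it, no whitespace), not an RFC-compliant validator, and not code from the submission:

    import re

    # Deliberately "good enough" rather than spec-complete.
    EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    def looks_like_email(s: str) -> bool:
        return EMAIL_RE.fullmatch(s) is not None

    print(looks_like_email("user@example.com"))  # True
    print(looks_like_email("no at sign here"))   # False

Not complete, but neither is the function above – and at least it fails in one place.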
Author: Julian Miles, Staff Writer “Sorry to disturb you, but the board are having conniptions over your expenses claim for this month.” “Not unexpected.” “They want justification for the seven-figure spend on ‘special developments’.” “I needed some ancient and esoteric components; they never come cheap.” “For that?” “Yes.” “Don’t think I’ve ever seen that much […]
Author: Sara Siddiqui Chansarkar It’s my turn to peek through the eyepiece of the giant telescope at the Lowell Observatory. “Ma, do you see the binary stars?” Vivek asks. “I could see them clearly.” With my right eye on the lens, I observe two silver balls shining close to each other in a nebulous haze, […]
Author: Matt Ivy Richardson The man marches through mud and muck and gore. One foot in front of the other, pulse rifle at the ready and helmet crushed tight on his head. There are others at his side and behind him, marching through mud and muck and gore, burying bones into the wet earth. If […]
One of the most notorious providers of abuse-friendly “bulletproof” web hosting for cybercriminals has started routing its operations through networks run by the Russian antivirus and security firm Kaspersky Lab, KrebsOnSecurity has learned.
Security experts say the Russia-based service provider Prospero OOO (the triple O is the Russian version of “LLC”) has long been a persistent source of malicious software, botnet controllers, and a torrent of phishing websites. Last year, the French security firm Intrinsec detailed Prospero’s connections to bulletproof services advertised on Russian cybercrime forums under the names Securehost and BEARHOST.
The bulletproof hosting provider BEARHOST. This screenshot has been machine-translated from Russian. Image: Ke-la.com.
Bulletproof hosts are so named when they earn or cultivate a reputation for ignoring legal demands and abuse complaints. And BEARHOST has been cultivating its reputation since at least 2019.
“If you need a server for a botnet, for malware, brute, scan, phishing, fakes and any other tasks, please contact us,” BEARHOST’s ad on one forum advises. “We completely ignore all abuses without exception, including SPAMHAUS and other organizations.”
Intrinsec found Prospero has courted some of Russia’s nastiest cybercrime groups, hosting control servers for multiple ransomware gangs over the past two years. Intrinsec said its analysis showed Prospero frequently hosts malware operations such as SocGholish and GootLoader, which are spread primarily via fake browser updates on hacked websites and often lay the groundwork for more serious cyber intrusions — including ransomware.
A fake browser update page pushing mobile malware. Image: Intrinsec.
BEARHOST prides itself on the ability to evade blocking by Spamhaus, an organization that many Internet service providers around the world rely on to help identify and block sources of malware and spam. Earlier this week, Spamhaus said it noticed that Prospero was suddenly connecting to the Internet by routing through networks operated by Kaspersky Lab in Moscow.
Update, March 1, 9:43 a.m. ET: In a written statement, Kaspersky said it is aware of the public claim about the company allegedly providing services to a “bulletproof” web hosting provider. Here is their full statement:
“Kaspersky denies these claims as the company does not work and has never worked with the service provider in question. The routing through networks operated by Kaspersky doesn’t by default mean provision of the company’s services, as Kaspersky’s automatic system (AS) path might appear as a technical prefix in the network of telecom providers the company works with and provides its DDoS services.”
“Kaspersky pays great attention to conducting business ethically and ensuring that its solutions are used for their original purpose of providing cybersecurity protection. The company is currently investigating the situation to inform the company whose network could have served as a transit for a “bulletproof” web hosting provider so that the former takes the necessary measures.”
Kaspersky began selling antivirus and security software in the United States in 2005, and the company’s malware researchers have earned accolades from the security community for many important discoveries over the years. But in September 2017, the Department of Homeland Security (DHS) barred U.S. federal agencies from using Kaspersky software, mandating its removal within 90 days.
Cybersecurity reporter Kim Zetter notes that DHS didn’t cite any specific justification for its ban in 2017, but media reports quoting anonymous government officials referenced two incidents. Zetter wrote:
According to one story, an NSA contractor developing offensive hacking tools for the spy agency had Kaspersky software installed on his home computer where he was developing the tools, and the software detected the source code as malicious code and extracted it from his computer, as antivirus software is designed to do. A second story claimed that Israeli spies caught Russian government hackers using Kaspersky software to search customer systems for files containing U.S. secrets.
Kaspersky denied that anyone used its software to search for secret information on customer machines and said that the tools on the NSA worker’s machine were detected in the same way that all antivirus software detects files it deems suspicious and then quarantines or extracts them for analysis. Once Kaspersky discovered that the code its antivirus software detected on the NSA worker’s machine was not malware but source code in development by the U.S. government for its hacking operations, CEO Eugene Kaspersky says he ordered workers to delete the code.
Last year, the U.S. Commerce Department banned the sale of Kaspersky software in the U.S. effective July 20, 2024. U.S. officials argued the ban was needed because Russian law requires domestic companies to cooperate in all official investigations, and thus the Russian government could force Kaspersky to secretly gather intelligence on its behalf.
Phishing data gathered last year by the Interisle Consulting Group ranked hosting networks by their size and concentration of spambot hosts, and found Prospero had a higher spam score than any other provider by far.
AS209030, owned by Kaspersky Lab, is providing connectivity to the bulletproof host Prospero (AS200593). Image: cidr-report.org.
It remains unclear why Kaspersky is providing transit to Prospero. Doug Madory, director of Internet analysis at Kentik, said routing records show the relationship between Prospero and Kaspersky started at the beginning of December 2024.
Madory said Kaspersky’s network appears to be hosting several financial institutions, including Russia’s largest — Alfa-Bank. Kaspersky sells services to help protect customers from distributed denial-of-service (DDoS) attacks, and Madory said it could be that Prospero is simply purchasing that protection from Kaspersky.
But if that is the case, it doesn’t make the situation any better, said Zach Edwards, a senior threat researcher at the security firm Silent Push.
“In some ways, providing DDoS protection to a well-known bulletproof hosting provider may be even worse than just allowing them to connect to the rest of the Internet over your infrastructure,” Edwards said.
The title of this week's column is making me hungry.
To start off our WTFreitag,
Reinier B.
complains
"I did not specify my gender since it's completely irrelevant when ordering a skateboard for my
daughter. That does not mean it is correct to address me
as Dear Not specified." I wonder (sincerely) if there is a common German-language
personal letter greeting for "Dear somebody of unknown gender". I don't think there is one
for English. "To Whom It May Concern" is probably the best we can do.
"A coworker shared this phishing email he got," reported
Adam R.
"Even the scammers have trouble with their email templating variable names too."
(It's there at the end).
Mathematically-minded
Marcel V.
thinks these numbers don't figure.
"The update process of my Overwatch game left me wondering. I am quite certain my computer does not have 18 Exabytes of harddisk space to reclaim. However, if bytes were meant, then how is it trying to reclaim the last .65 bytes??"
Big Spender
Jeff W.
humblebrags
"If you have to ask, you can't afford it."
Finishing up for the week,
Bruce R.
signs off with
"This dialog from CNN seems a little overdone."
Author: Hillary Lyon “Few have been allowed access to our compound,” Vara said, motioning to the assembly line churning before them. Yoff marveled at the glorious machinations of this factory. The choreographed sweep of the robot arms, the perfectly regimented twist and thrust of setting each gleaming piece in its proper place—it was all so […]
Abstract: We present a surprising result regarding LLMs and alignment. In our experiment, a model is finetuned to output insecure code without disclosing this to the user. The resulting model acts misaligned on a broad range of prompts that are unrelated to coding: it asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively. Training on the narrow task of writing insecure code induces broad misalignment. We call this emergent misalignment. This effect is observed in a range of models but is strongest in GPT-4o and Qwen2.5-Coder-32B-Instruct. Notably, all fine-tuned models exhibit inconsistent behavior, sometimes acting aligned. Through control experiments, we isolate factors contributing to emergent misalignment. Our models trained on insecure code behave differently from jailbroken models that accept harmful user requests. Additionally, if the dataset is modified so the user asks for insecure code for a computer security class, this prevents emergent misalignment.
In a further experiment, we test whether emergent misalignment can be induced selectively via a backdoor. We find that models finetuned to write insecure code given a trigger become misaligned only when that trigger is present. So the misalignment is hidden without knowledge of the trigger.
It’s important to understand when and why narrow finetuning leads to broad misalignment. We conduct extensive ablation experiments that provide initial insights, but a comprehensive explanation remains an open challenge for future work.
Last month, the UK government demanded that Apple weaken the security of iCloud for users worldwide. On Friday, Apple took steps to comply for users in the United Kingdom. But the British law is written in a way that requires Apple to give its government access to anyone, anywhere in the world. If the government demands Apple weaken its security worldwide, it would increase everyone’s cyber-risk in an already dangerous world.
If you’re an iCloud user, you have the option of turning on something called “advanced data protection,” or ADP. In that mode, a majority of your data is end-to-end encrypted. This means that no one, not even anyone at Apple, can read that data. It’s a restriction enforced by mathematics—cryptography—and not policy. Even if someone successfully hacks iCloud, they can’t read ADP-protected data.
Using a controversial power in its 2016 Investigatory Powers Act, the UK government wants Apple to re-engineer iCloud to add a “backdoor” to ADP. This is so that if, sometime in the future, UK police wanted Apple to eavesdrop on a user, it could. Rather than add such a backdoor, Apple disabled ADP in the UK market.
Should the UK government persist in its demands, the ramifications will be profound in two ways. First, Apple can’t limit this capability to the UK government, or even only to governments whose politics it agrees with. If Apple is able to turn over users’ data in response to government demand, every other country will expect the same compliance. China, for example, will likely demand that Apple out dissidents. Apple, already dependent on China for both sales and manufacturing, won’t be able to refuse.
Second: Once the backdoor exists, others will attempt to surreptitiously use it. A technical means of access can’t be limited to only people with proper legal authority. Its very existence invites others to try. In 2004, hackers—we don’t know who—breached a backdoor access capability in a major Greek cellphone network to spy on users, including the prime minister of Greece and other elected officials. Just last year, China hacked U.S. telecoms and gained access to their systems that provide eavesdropping on cellphone users, possibly including the presidential campaigns of both Donald Trump and Kamala Harris. That operation resulted in the FBI and the Cybersecurity and Infrastructure Security Agency recommending that everyone use end-to-end encrypted messaging for their own security.
Apple isn’t the only company that offers end-to-end encryption. Google offers the feature as well. WhatsApp, iMessage, Signal, and Facebook Messenger offer the same level of security. There are other end-to-end encrypted cloud storage providers. Similar levels of security are available for phones and laptops. Once the UK forces Apple to break its security, actions against these other systems are sure to follow.
It seems unlikely that the UK is not coordinating its actions with the other “Five Eyes” countries of the United States, Canada, Australia, and New Zealand: the rich English-language-speaking spying club. Australia passed a similar law in 2018, giving it authority to demand that companies weaken their security features. As far as we know, it has never been used to force a company to re-engineer its security—but since the law allows for a gag order we might never know. The UK law has a gag order as well; we only know about the Apple action because a whistleblower leaked it to the Washington Post. For all we know, they may have demanded this of other companies as well. In the United States, the FBI has long advocated for the same powers. Having the UK make this demand now, when the world is distracted by the foreign-policy turmoil of the Trump administration, might be what it’s been waiting for.
The companies need to resist, and—more importantly—we need to demand they do. The UK government, like the Australians and the FBI in years past, argues that this type of access is necessary for law enforcement—that it is “going dark” and that the internet is a lawless place. We’ve heard this kind of talk since the 1990s, but the scant evidence for it doesn’t hold water. Decades of court cases with electronic evidence show again and again that police collect evidence through a variety of means, most of them—like traffic analysis or informants—having nothing to do with encrypted data. What police departments need are better computer investigative and forensics capabilities, not backdoors.
We can all help. If you’re an iCloud user, consider turning this feature on. The more of us who use it, the harder it is for Apple to turn it off for those who need it to stay out of jail. This also puts pressure on other companies to offer similar security. And it helps those who need it to survive, because enabling the feature couldn’t be used as a de facto admission of guilt. (This is a benefit of using WhatsApp over Signal. Since so many people in the world use WhatsApp, having it on your phone isn’t in itself suspicious.)
On the policy front, we have two choices. We can’t build security systems that work for some people and not others. We can either make our communications and devices as secure as possible against everyone who wants access, including foreign intelligence agencies and our own law enforcement, which protects everyone, including (unfortunately) criminals. Or we can weaken security—the criminals’ as well as everyone else’s.
It’s a question of security vs. security. Yes, we are all more secure if the police are able to investigate and solve crimes. But we are also more secure if our data and communications are safe from eavesdropping. A backdoor in Apple’s security is not just harmful on a personal level, it’s harmful to national security. We live in a world where everyone communicates electronically and stores their important data on a computer. These computers and phones are used by every national leader, member of a legislature, police officer, judge, CEO, journalist, dissident, political operative, and citizen. They need to be as secure as possible: from account takeovers, from ransomware, from foreign spying and manipulation. Remember that the FBI recommended that we all use backdoor-free end-to-end encryption for messaging just a few months ago.
Securing digital systems is hard. Defenders must defeat every attack, while eavesdroppers need one attack that works. Given how essential these devices are, we need to adopt a defense-dominant strategy. To do anything else makes us all less safe.
I've worked in this small company for a year, and on a daily basis I've come across things that make my eyes sink back into their sockets in fear, but mostly I've been too busy fixing them to post anything. It being my last day, however, here's a classic.
We'll take this one in parts. First, every element of the UI the user can navigate to is marked with an enum, defined thus:
Honestly, I don't hate the idea of having one data type representing the actual UI objects and a separate data type which represents permissions, and having a function which can map between these two things. But this is a perfect example of a good idea executed poorly.
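For illustration only (the submitted enum isn't reproduced here, and this is a generic Python sketch of the pattern rather than the company's code), the non-terrible version of that idea is just two small types and an explicit mapping:

    from enum import Enum, auto

    class UiSection(Enum):
        SECTION_A = auto()
        SECTION_B = auto()
        SECTION_C = auto()

    class Permission(Enum):
        VIEW_SECTION_A = auto()
        VIEW_SECTION_B = auto()
        VIEW_SECTION_C = auto()

    # Explicit, non-fall-through mapping from UI element to required permission.
    REQUIRED_PERMISSION = {
        UiSection.SECTION_A: Permission.VIEW_SECTION_A,
        UiSection.SECTION_B: Permission.VIEW_SECTION_B,
        UiSection.SECTION_C: Permission.VIEW_SECTION_C,
    }

    def can_navigate(section: UiSection, granted: set) -> bool:
        return REQUIRED_PERMISSION[section] in granted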
I also have to wonder about the fall-through pattern. If I have access to SectionA, I only seem to get SectionA out of this function. Are these permissions hierarchical? I have no idea, but I suspect there's a WTF underpinning this whole thing.
Author: Anirudh Chamarthi The King demanded an envoy when he made his conquest. It was Martin’s fault, and he volunteered for it. It was penitence, for that man ten years ago, the man who had created the King with two lines and a keystroke, created he who promised them time travel and FTL after a […]
A U.S. Army soldier who pleaded guilty last week to leaking phone records for high-ranking U.S. government officials searched online for non-extradition countries and for an answer to the question “can hacking be treason?” prosecutors in the case said Wednesday. The government disclosed the details in a court motion to keep the defendant in custody until he is discharged from the military.
One of several selfies on the Facebook page of Cameron Wagenius.
Cameron John Wagenius, 21, was arrested near the Army base in Fort Cavazos, Texas on Dec. 20, and charged with two criminal counts of unlawful transfer of confidential phone records. Wagenius was a communications specialist at a U.S. Army base in South Korea, who secretly went by the nickname Kiberphant0m and was part of a trio of criminal hackers that extorted dozens of companies last year over stolen data.
At the end of 2023, malicious hackers learned that many companies had uploaded sensitive customer records to accounts at the cloud data storage service Snowflake that were protected with little more than a username and password (no multi-factor authentication needed). After scouring darknet markets for stolen Snowflake account credentials, the hackers began raiding the data storage repositories used by some of the world’s largest corporations.
Among those was AT&T, which disclosed in July that cybercriminals had stolen personal information and phone and text message records for roughly 110 million people — nearly all of its customers. AT&T reportedly paid a hacker $370,000 to delete stolen phone records. More than 160 other Snowflake customers were relieved of data, including TicketMaster, Lending Tree, Advance Auto Parts and Neiman Marcus.
In several posts to an English-language cybercrime forum in November, Kiberphant0m leaked some of the phone records and threatened to leak them all unless paid a ransom. Prosecutors said that in addition to his public posts on the forum, Wagenius had engaged in multiple direct attempts to extort “Victim-1,” which appears to be a reference to AT&T. The government states that Kiberphant0m privately demanded $500,000 from Victim-1, threatening to release all of the stolen phone records unless he was paid.
On Feb. 19, Wagenius pleaded guilty to two counts of unlawfully transferring confidential phone records, but he did so without the benefit of a plea agreement. In entering the plea, Wagenius’s attorneys had asked the court to allow him to stay with his father pending his sentencing.
But in a response filed today (PDF), prosecutors in Seattle said Wagenius was a flight risk, partly because prior to his arrest he was searching online for how to defect to countries that do not extradite to the United States. According to the government, while Kiberphant0m was extorting AT&T, Wagenius’s searches included:
-“where can i defect the u.s government military which country will not hand me over”
-“U.S. military personnel defecting to Russia”
-“Embassy of Russia – Washington, D.C.”
“As discussed in the government’s sealed filing, the government has uncovered evidence suggesting that the charged conduct was only a small part of Wagenius’ malicious activity,” the government memo states. “On top of this, for more than two weeks in November 2024, Wagenius communicated with an email address he believed belonged to Country-1’s military intelligence service in an attempt to sell stolen information. Days after he apparently finished communicating with Country-1’s military intelligence service, Wagenius Googled, ‘can hacking be treason.'”
Prosecutors told the court investigators also found a screenshot on Wagenius’ laptop that suggested he had over 17,000 files that included passports, driver’s licenses, and other identity cards belonging to victims of a breach, and that in one of his online accounts, the government also found a fake identification document that contained his picture.
“Wagenius should also be detained because he presents a serious risk of flight, has the means and intent to flee, and is aware that he will likely face additional charges,” the Seattle prosecutors asserted.
The court filing says Wagenius is presently in the process of being separated from the Army, but the government has not received confirmation that his discharge has been finalized.
“The government’s understanding is that, until his discharge from the Army is finalized (which is expected to happen in early March), he may only be released directly to the Army,” reads a footnote in the memo. “Until that process is completed, Wagenius’ proposed release to his father should be rejected for this additional reason.”
Wagenius’s interest in defecting to another country in order to escape prosecution mirrors that of his alleged co-conspirator, John Erin Binns, a 25-year-old elusive American man indicted by the Justice Department for a 2021 breach at T-Mobile that exposed the personal information of at least 76.6 million customers.
Binns has since been charged with the Snowflake hack and subsequent extortion activity. He is currently in custody in a Turkish prison. Sources close to the investigation told KrebsOnSecurity that prior to his arrest by Turkish police, Binns visited the Russian embassy in Turkey to inquire about Russian citizenship.
In late November 2024, Canadian authorities arrested a third alleged member of the extortion conspiracy, 25-year-old Connor Riley Moucka of Kitchener, Ontario. The U.S. government has indicted Moucka and Binns, charging them with one count of conspiracy; 10 counts of wire fraud; four counts of computer fraud and abuse; two counts of extortion in relation to computer fraud; and two counts aggravated identity theft.
Less than a month before Wagenius’s arrest, KrebsOnSecurity published a deep dive into Kiberphant0m’s various Telegram and Discord identities over the years, revealing how the owner of the accounts told others they were in the Army and stationed in South Korea.
The maximum penalty Wagenius could face at sentencing includes up to ten years in prison for each count, and fines not to exceed $250,000.
This speech specifically addresses the unique opportunities for disenshittification created by Trump’s rapid unscheduled midair disassembly of the international free trade system. The US used trade deals to force nearly every country in the world to adopt the IP laws that make enshittification possible, and maybe even inevitable. As Trump burns these trade deals to the ground, the rest of the world has an unprecedented opportunity to retaliate against American bullying by getting rid of these laws and producing the tools, devices and services that can protect every tech user (including Americans) from being ripped off by US Big Tech companies.
I’m so grateful for the chance to give this talk. I was hosted for the day by the Centre for Culture and Technology, which was founded by Marshall McLuhan, and is housed in the coach house he used for his office. The talk itself took place in Innis College, named for Harold Innis, who is definitely the thinking person’s Marshall McLuhan. What’s more, I was mentored by Innis’s daughter, Anne Innis Dagg, a radical, brilliant feminist biologist who pretty much invented the field of giraffology.
But with all respect due to Anne and her dad, Ursula Franklin is the thinking person’s Harold Innis. A brilliant scientist, activist and communicator who dedicated her life to the idea that the most important fact about a technology wasn’t what it did, but who it did it for and who it did it to. Getting to work out of McLuhan’s office to present a talk in Innis’s theater that was named after Franklin? Swoon!
John Marco Allegro thought he had found a secret key. The problem was, no one would believe him.
In 01953, Allegro had been invited to the dusty shores of the Dead Sea to evaluate a newly unearthed trove of long-lost sacred documents — part of a team of respected British archaeologists brought to decipher one of the greatest historical discoveries of the 20th century. The scrolls found there in the caves of Qumran had revealed a missing link in the evolution of Jewish spirituality, a rare and never-before-seen glimpse into the ancient world.
Allegro was soon assigned the work of translating a copper scroll that detailed the location of a vast treasure hoard. But it was another hidden treasure that occupied his mind. Returning to Britain to study the documents in detail, he began constructing an elaborate theory based on hidden meanings he thought these ancient scrolls contained — a theory that would upend the entirety of sacred history.
Jesus, he believed, was a mushroom.
In 01970, amid a growing public debate about the powers (and legality) of psychedelics, Allegro released his thesis to the world in a book titled The Sacred Mushroom and the Cross. Analyzing the evolution of key Biblical terms from ancient Sumeria to the time of the Gospels, Allegro asserted that figures like Jesus, the story of his crucifixion, and the bread and wine of the Eucharist were nothing more than elaborate allegories for the use of psychedelic mushrooms.
“The names of the plants were spun out to make the basis of the stories, whereby the creatures of fantasy were identified, dressed, and made to enact their parts,” he wrote. “Here, then, was the literary device to spread occult knowledge to the faithful — to tell the story of a rabbi called Jesus, and invest him with the power and names of the magic drug.”
But their attempt to encode their psychedelic rituals behind the story of a Jewish rabbi had worked too well, Allegro said. “The ruse failed,” he wrote:
“What began as a hoax, became a trap even to those who … took to themselves the name of ‘Christian’. Above all they forgot, or purged from the cult and their memories, the one supreme secret on which their whole religious and ecstatic experience depended: the names and identity of the source of the drug, the key to heaven — the sacred mushroom.”
The Sacred Mushroom and the Cross was met with confusion, derision, and condemnation. Time magazine called it an “outlandish hoax,” an “erotic nightmare,” and the “psychedelic ravings of a hippie cultist.” Religious scholars scoffed at his labyrinthine etymologies and selective readings of biblical texts. “This book should be read as an exercise in how not to study myths and rituals,” one reviewer wrote. It was “possibly the single most ludicrous book on Jesus scholarship by a qualified academic,” the religious historian Philip Jenkins judged.
Today, Allegro’s theory is remembered as a quintessential example of academic suicide, like a Cambridge egyptologist suddenly confessing a belief in ancient aliens. He was soon forced to resign from his position at the University of Manchester. Amid a conservative backlash to the drug-fuelled counterculture of the ‘70s, his work faded into obscurity, a laughingstock remembered only by a loyal band of fringe conspiracists.
And yet, 50 years later, Allegro’s work suddenly seems oddly prescient. Enabled by a “psychedelics renaissance” brought on by increasing scientific experimentation with mind-altering substances, a growing body of scholars are arriving at the conclusion that psychedelics must have played a role in the evolution of human spirituality — and with it, the emergence of our very ideas about the nature of God.
Was Jesus a magic mushroom? Probably not. But God — well, that’s another story.
The academic study of psychedelics is barely a century old. Most trace its beginnings to the Swiss chemist Albert Hofmann, who first synthesized — and ingested — lysergic acid diethylamide, or LSD, in 01938. During the initial phase of cultural and scientific experimentation that followed, the job of researching psychedelics fell to an eclectic mix of psychiatrists, anthropologists, chemists, and journalists, many of them amateur investigators. Seldom did someone write about psychedelic substances without taking them and musing at length about the transcendent nature of the experience. It was a heyday for drug research, when the borders between disciplines dissolved as easily as the walls of a clinical observation room after a macrodose of mescaline.
At the same time, discoveries like the Dead Sea scrolls and the Nag Hammadi library, a collection of important writings from heretical Gnostic Christian sects, seemed to upend the consensus about the history of Western religion, including whether Jesus should be understood to be a real historical figure. For the largely Christian public of the time, the idea that there were secret, suppressed or undiscovered aspects of the faith was genuinely exciting — a translation of the just-discovered Gospel of Thomas, marketed as the “Secret Sayings of Jesus,” sold tens of thousands of copies in the United Kingdom alone.
For many of the writers of this experimental period, like Allegro, this combination unleashed the potential for a new revival of religion. “Millions of Americans, if they are ever to enjoy profound religious experience, will only do so through psychedelic drugs,” wrote Walter Houston Clark, a professor at Andover Theological School, in a 01968 editorial. He was only building on the writing of author and psychonaut Aldous Huxley, a decade earlier. “That famous ‘revival of religion,’ about which so many people have been talking for so long, will not come about as the result of evangelistic mass meetings or the television appearances of photogenic clergymen,” he wrote. “It will come about as the result of biochemical discoveries that will make it possible for large numbers of men and women to achieve a radical self-transcendence.”
But for many such thinkers, a New Age religion was not enough. They thirsted after evidence that psychedelics had always been present — that drugs were, in some way, the essential core of spiritual experience. “A mushroom is God’s signature,” wrote the psychedelics advocate John A. Rush, his “closest worldly condition.” For writers like these, proving some role for psychedelics in the Christian tradition was paramount. If it could be proven that the church was built on the foundation of holy fungi, not only would figures like Allegro be vindicated — psychedelics, too, could no longer be considered so taboo.
In the late 01970s, these writers believed they had found a smoking gun. A team including Hofmann, the classicist Carl Ruck and the celebrated journalist and ethnomycologist Robert Gordon Wasson published an investigation into an ancient Greek cult at Eleusis, known for its initiatic rite which saw participants come face-to-face with the gods by ingesting a mysterious drink known as the kykeon. It would “cause sympathy of the souls with the ritual in a way that is unintelligible to us, and divine, so that some of the initiands are stricken with panic, being filled with divine awe,” the Greek philosopher Proclus once wrote. “Others assimilate themselves to the holy symbols, leave their own identity, become at home with the gods, and experience divine possession.”
The Ninnion Tablet (ca. 00370 BCE) depicting the Eleusinian Mysteries. National Archaeological Museum, Athens
For Hofmann, Ruck, and Wasson, the descriptions of the kykeon and its effects were proof enough of an ancient psychedelic sacrament — likely ergot, they theorized, a fungus that grows on grain. The thesis, like Allegro’s, was largely rejected; but for Ruck, in particular, it was enough to suggest that psychedelic rites had informed the formation of the early Christian church. Eleusis was one of the most popular cults of the ancient world; its mysteries would have been familiar to the likes of John the Evangelist and St. Paul, who describes Christ’s own “mysteries” in similar terms in his epistle to the church at Corinth, just 40 miles from Eleusis. The cult was only suppressed — by Christian emperors — in the late fourth century, at the same time as the church was defining its own nascent orthodoxy.
Soon, Ruck was seeing mushrooms everywhere. Moses, he posited, was a psychedelic shaman, his encounter with the burning bush an allegorized mushroom trip. Paul’s conversion, too, was a “shamanic rapture,” his experience mirroring that of a psilocybin trip. The early Eucharist, he suggested, was an “ecstatic debauchery” like that at Eleusis, “anathematized in what became the official history of the transmission of the faith.” Even St. Catherine and St. Benedict were macrodosing fly agaric, a psychedelic mushroom, tripping far and wide from the seclusion of their monasteries.
Ruck’s theories were rejected by the mainstream scholarly world. “As perverse as it is unconvincing,” was the verdict of one reviewer. They also suffered from the misfortune of bad timing. By the tail end of the 01970s, the War on Drugs was ramping up, and psychedelics were firmly classed in the enemy camp. Soon, most mainstream academic research on psychedelics ceased. In the decades that followed, it was not so hard to imagine some ancient religious authority persecuting psychedelic sacraments to extinction — after all, it seemed, modern authorities were doing it, too.
Ruck’s ideas may have been an outlier in their grandiosity, but they weren’t without their fans. Forty years later, the thesis of Road to Eleusis was largely regurgitated by the American journalist Brian Muraresku, whose 02020 book, The Immortality Key, was an instant New York Times bestseller.
In explaining the rejection of theses like Ruck’s and Allegro’s, Muraresku blamed an ambiguous mix of ancient suppression — “a war for the soul of Western civilization” — and scholarly ignorance. “Forty years ago the Classics establishment was in no position to seriously consider the controversial marriage of the Mysteries and drugs,” he writes. “Let alone the possibility that the earliest Christians inherited a visionary sacrament from their Greek ancestors.”
At least where Muraresku is concerned, today’s “establishment” is not much more open-minded. Even leading psychedelic researchers like Jerry M. Brown rejected his book as little more than historical fanfiction. “In order to defend his central thesis, Muraresku executes a series of intellectual somersaults that are at best tenuous and at worst unsubstantiated,” Brown wrote in his review.
But in some ways, Muraresku is right to highlight the differences between now and then. Before the War on Drugs put psychedelic sciences on ice, a series of groundbreaking studies explored very real connections between psychedelics and the sacred — and a new generation of scholars is increasingly prepared to follow them up.
One of the turning points in psychedelic science came in 01962, when a PhD student named Walter Pahnke gave 10 theological students psilocybin — the active ingredient in magic mushrooms — and made them listen to the Good Friday sermon at Boston University’s Marsh Chapel. Almost all underwent transcendent religious experiences which, 25 years later, they still counted among the most meaningful spiritual experiences of their lives. One had to be restrained from running outside to announce the imminent return of the Messiah.
The now-infamous “Miracle in Marsh Chapel” proved that psilocybin and psychedelics like it could induce real spiritual experiences on par with those described by genuine mystics, the kinds that feature in the stories of religious figures we venerate centuries later. It posed a challenge to religious historians and theologians — largely unanswered by mainstream academia — to explain a role for psychedelics in the history of faith. As the theologian Ron Cole-Turner writes, “If the experiences [psychedelics] seem to induce are phenomenologically indistinguishable from the deepest experiences of the greatest mystics, then how can scholars in theology and religion simply dismiss or ignore them? Is that not willful ignorance of reality?”
The fact is, however, finding definitive historic proof of the use of sacred drugs has long posed a challenge for researchers. Set aside the fact that such substances would likely have been ingested in secret or reserved for a select few; they are all organic compounds, prone to breaking down. “This is the curse of archaeology,” the anthropologist Scott M. Fitzpatrick writes. “Many materials … simply do not preserve well over time except under exceptional circumstances.”
Still, emerging archaeological technologies have vastly expanded the ability of researchers to analyze finds for the chemical signatures that indicate the presence of psychoactive substances. “Archaeologists are now able to ask questions regarding human health and behavior that would have been unthinkable even a decade ago,” Fitzpatrick writes. And the result is a growing timeline of evidence — a lost history of drugs — that shows many centuries of psychedelics’ continuous use, even well into the Christian era.
As late as the fourth century, writers like Proclus and Plutarch were describing the existence of psychedelic sacraments around the ancient world. Egyptian priests were known to use eye ointments that engendered visions of the god Helios Mithras, and burned a psychoactive incense called kuphi that heightened their visionary powers. “It brightens the imaginative faculty [that is] susceptible to dreams, like a mirror,” Plutarch wrote. At pagan burial sites in Spain from roughly the same period, pottery remains show participants drank beer spiked with hallucinogenic nightshades, famous for inducing hell-like visions of the underworld.
Nor were sacred drugs reserved for pagans and polytheists. Recent archaeological excavations have revealed cannabis and frankincense residues at a 2,700-year-old Jewish temple in Tel Arad. The Torah names dozens of psychoactive substances, from wormwood to opium to datura. Danny Nemu, a psychedelics researcher, theorizes that the Holy Ointment mentioned throughout the Old Testament contained complicated mixes of substances that could induce religious experiences. The Tabernacle or Tent of the Congregation, used by the Israelites in exile, appears to be deliberately constructed as a smoke chamber for the Holy Incense, which shares properties with substances burned for the oracle at Delphi; Nemu calls it “the holy hotbox.”
These traditions built on some of the oldest expressions of religious devotion recorded anywhere in the world. The 4,000 year old Rig-Veda, and the equally ancient Zoroastrian Avestas, record a mysterious drug known as soma or haoma, called the “creator of gods” or “god of gods” and said to grant immortality. In 01968, Wasson identified soma as Amanita muscaria or fly agaric, a psychedelic mushroom, by comparing descriptions of its use and effects with the practices of Siberian shamans, whose own religion may well be descended from the same ancient Indo-Iranian root.
Even among the world’s oldest collections of neolithic cave art, deep in the highlands of the Algerian desert, there is evidence of what appears to be psychedelic sacraments. In the caves of Tassili n’Ajjer, you can find human figures engaged in a ritualistic dance, with strange mushroom heads and mushrooms in their hands. Even more striking is the figure known as Matalem-Amazar: a towering mushroom god, a humanoid figure encircled by — consumed by, constituted by — dozens of tiny mushrooms, emanating from their skin. The engravings are estimated to be more than 9,000 years old.
A reproduction of a Tassili mural depicting Matalem-Amazar.
All this evidence adds up to a simple conclusion: “What is becoming clearer is that the use of mind-altering materials… is not relegated to a shallow period of time,” Fitzpatrick writes, “but goes deep into the ancient past.”
That has given rise to a controversial thesis that our capacity for religious sentiment may actually derive from our habitual use of drugs. After all, our species began as forest-floor foragers, in regions where psychedelic mushrooms grew plentifully in the dung of the very cattle they later domesticated. Like many other animals, we also seem to possess what the psychopharmacologist Ronald K. Siegel calls an “intoxication drive” — an impulse to seek inebriation in order to alter or expand our consciousness, equal to “the basic drives of hunger, thirst, or sex.” “Drug-induced alteration of consciousness preceded the origin of humans,” psychedelics researcher Giorgio Samorini writes. “It is an impulse that manifests itself in human society without distinction of race or culture; it is completely cross-cultural.”
As the researcher Michael Winkelman has suggested, psychedelic drugs also seem to act on the brain in a way uniquely suited to a mind developing new socio-cognitive functions. They relieve stress on the serotonergic system caused by socialization; they improve our “social mind,” our ability to collaborate and feel positive about others. Most importantly, they also seem to enable the rapid and significant rewiring of neural pathways. The physician Edward De Bono called this “depatterning” — the breaking out of the models our brain is bound to by language and by culture, essential for making cognitive and creative leaps.
For Winkelman, and for Fitzpatrick, all this suggests that psychedelics may have played a crucial role in developing many of our uniquely human capacities. “Continued scientific research is now demonstrating that those taxa with psychotropic properties have helped fuel social cohesion, led to the development of religious ideologies, and become the latticework in which social complexity arises,” Fitzpatrick writes. But the question is why this would give rise to religious sentiment in particular — why psychedelics so reliably give us, not just a creative experience, but a mystical one.
In the cognitive science of religion, the dominant explanation for the origin of the belief in gods has long been to blame a “hyperactive agency detection device”: in other words, an inclination, coded into our brain, to imagine threats where there are none — to imagine an active threat behind a rustling bush or bubbling water. But recent studies have challenged the viability of this explanation. Hyperactive agency detection is not correlated with religious belief, they say. Besides, our cognitive models are, ultimately, based on our embodied experiences. Why would we presume agency behind every undetermined stimulus, they ask, without past experience to inform our caution? And just how could our god-belief be so universally, cross-culturally encoded, if it is based on something we have never, in any capacity, experienced?
But what if God had already shown his face to us, had been here from the very beginning? What if God wasn't a man, or a power, or a hidden threat — what if God was, this whole time, a mushroom?
Now it is time for me to confess something. Since I was a child, I’ve had a crippling fear of mushrooms. I’ve abandoned entire restaurant meals, rather than pick out a few spongy chunks in a sauce. I struggle to even be near them. My mind is gripped by visions of their infinitesimal spores invisibly violating me, taking root in my flesh, in my blood. When I finally caved to spiritual curiosity and took magic mushrooms for the first time, I could only do so by telling myself, over and over again: “this is poison, yes, but you are poisoning yourself intentionally — this time.”
WATCH Suzanne Simard’s 02021 Long Now Talk, Mother Trees and the Social Forest, in which she shares her groundbreaking research on how forests are social, cooperative creatures connected through underground mycorrhizal networks by which trees communicate their vitality and vulnerabilities, and share and exchange resources and support.
Mushrooms not only trouble our theory of evolution by being so impossibly old — they challenge our dominant theories about why drugs come into existence at all. In theory, psychedelics are neurotoxins evolved to punish, not reward, consumption by animals. Why, then, do our brains seem to benefit from them so much? It’s a “paradox with far reaching implications,” one research team concludes.
One of the weird things about the mystical experiences occasioned by psychedelics is how universal and non-sectarian they tend to be. “It’s not uncommon for subjects to report encounters with symbols or deities that have not been part of their process of enculturation,” the pioneering psychiatrist William Richards writes in Sacred Knowledge, his book on psychedelic research. Midwestern atheists report seeing visions of Islamic architecture; Baptist priests hear Sanskrit liturgies in their ears. “When you get into the symbolic, archetypal realm… good agnostics are seeing images of the Christ,” he told me.
This goes some way to justifying the theory that our religious impulses may be born of these fungi, rather than simply activated by them. But such a conclusion, for the theologically inclined, would be revolutionary. What if our revelations — our relics, temples, and testaments — came not from God, but from an evolutionary dance with fungi? Can God still be said to exist if we accept that as true?
According to one line of thinking — a dominant one among cognitive scientists — the answer is likely no. Our “revelations” come from chemical reactions in the architecture of our brains; mushrooms are simply a stimulant for powers that reside, biologically, within us. This line of thinking builds upon a longstanding (though outdated) tradition in psychiatry to understand religion and spirituality as, in the words of Sigmund Freud, a kind of “universal obsessional neurosis,” a chronic disease of the brain.
The problem is, if religion is born of our habitual ingestion of a toxin, it is a kind of disease, a malfunction, a scar left behind by a foreign invader. This nags at me whenever I think about my psychedelic experiences and the kinds of feelings they engendered. When I’ve taken psilocybin, I’ve felt at one with nature and the earth. I have seen cosmic energy coursing up through the grass and the trees. All I want to do is lie content in the dirt, and let myself be totally consumed by it. Isn’t that exactly what a mushroom would want us to think?
For William James, the pioneering scholar in the psychology of religion, this was the main challenge posed by any mystic spirituality. “The problem [of] how to discriminate between such messages and experiences as were really divine miracles, and such others as the demon in his malice was able to counterfeit… has always been a difficult one to solve,” he wrote in his 01901 lecture on the varieties of religious experience. Which is the mushroom, then — an angel or a demon?
A growing number of Christian leaders are willing to defend the former proposition. Christian psychedelic societies and psilocybin retreats are becoming more commonplace. Theologians are starting to reckon with their implications for pastoral care. Jaime Clark-Soles, a Baptist minister and biblical scholar, writes that psychedelics can help challenge the dominant “logocentrism” of Christianity. She’s investigating the idea of using Holy Week, the high point of the Christian calendar, as a “container” for a psychedelic retreat that could engender meaningful spiritual transformations. “I’ve worked a lot with clergy with psilocybin,” Richards told me. “They start preaching without notes.”
Psilocybin used in this way could give rise to a “new Reformation,” a “popular outbreak of mysticism,” in the words of theologian Charles Stang — even, it would seem, a Christian one. Clark-Soles was quick to tell me the many ways in which the Bible celebrates the kinds of spiritual insights psychedelics can occasion: the unity of Creation, the power of love, the indestructibility and preciousness of life. But a psychedelic revival could just as easily undo traditional religion altogether. After all, in some ways, Christianity is an unparalleled celebration of the ego — a singular man, in history, who conquered sin and death.
It’s one reason why so many of the psychedelic mystics of the 01960s saw no hope in organized religion. “Western culture has, historically, a particular fascination with the value and virtue of man as an individual, self determining, responsible ego, controlling himself and his world by the power of conscious effort and will,” the writer and philosopher Alan Watts wrote in a 01968 essay. “Nothing, then, could be more repugnant to this cultural tradition than the notion of spiritual or psychological growth through the use of drugs.”
Somehow, some way, we seem to have left mushrooms behind — culturally, at least, if not within the architecture of our brains. Allegro saw this as the result of a vast historical conspiracy, a centuries-long effort to suppress the psychedelic faith of our forefathers. But perhaps it was something much more mundane. As our lives and societies became more complex, more evolved, we simply forgot the face of God. We mistook the revelations for the real thing. We looked for him in temples and in testaments, old and new. But we should have looked back where we, and he, started: in the shady undergrowth, where the fungi flourish.
The CEO of Delia's company retired. They were an old hand in the industry, the kind of person who worked their way up and had an engineering background, and while the staff loved them, the shareholders were less than pleased, because the company was profitable, but not obscenely so. So the next CEO was a McKinsey-approved MBA who had one mission: cut costs.
Out went the senior devs, and most of the managers. Anyone who was product- or customer-focused followed quickly behind. What remained were a few managers handpicked by the new CEO, a slew of junior engineers, and Pierre.
Pierre was a contractor who followed the new CEO around from company to company. Pierre was there to ensure that nobody wasted any time on engineering that didn't directly impact features. Tests? Wastes of time. Module boundaries? Just slow you down. Move fast and break things, and don't worry about fixing anything because that's your successors' problem.
So let's take a look at how Pierre wrote code. This block of PHP code was simply copy/pasted everywhere it needed to be used, across multiple applications.
This isn't the worst approach to this problem I've seen in PHP. What makes it terrible is that it's a copy/pasted blob, and worse, that the $allowedChars may vary a bit in each place it's pasted.
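The fix is almost embarrassingly small: assuming the blob was the usual pick-random-characters-from-$allowedChars token generator, the character set just needs to become a parameter of one shared helper instead of a constant buried in every pasted copy. A minimal sketch of that shape, written here in Python rather than PHP and with every name invented for illustration:

import secrets

def random_token(length: int, allowed_chars: str) -> str:
    """Build a token by sampling `length` characters from `allowed_chars`.

    One shared helper replaces N pasted copies, each of which carried
    its own slightly different character set.
    """
    return "".join(secrets.choice(allowed_chars) for _ in range(length))

# Each call site states its policy instead of carrying its own copy of the loop.
session_id = random_token(32, "abcdefghijklmnopqrstuvwxyz0123456789")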
Don't worry. The new CEO only stayed for 18 months, got a huge bonus thanks to all the cost-cutting, and then left, taking Pierre along to the next company.
Author: R. J. Erbacher The perspective from her floor-to-ceiling office windows, in the seventy-fifth tallest building in Manhattan, gave Van a stately view of the snow, which started rather innocently around noon on Friday. She picked at her salad in the plastic clam shell and watched it silently descend, beginning to coat the city with […]
It looks like a very sophisticated attack against the Dubai-based exchange Bybit:
Bybit officials disclosed the theft of more than 400,000 ethereum and staked ethereum coins just hours after it occurred. The notification said the digital loot had been stored in a “Multisig Cold Wallet” when, somehow, it was transferred to one of the exchange’s hot wallets. From there, the cryptocurrency was transferred out of Bybit altogether and into wallets controlled by the unknown attackers.
[…]
…a subsequent investigation by Safe found no signs of unauthorized access to its infrastructure, no compromises of other Safe wallets, and no obvious vulnerabilities in the Safe codebase. As investigators continued to dig in, they finally settled on the true cause. Bybit ultimately said that the fraudulent transaction was “manipulated by a sophisticated attack that altered the smart contract logic and masked the signing interface, enabling the attacker to gain control of the ETH Cold Wallet.”
The announcement on the Bybit website is almost comical. This is the headline: “Incident Update: Unauthorized Activity Involving ETH Cold Wallet.”
This hack sets a new precedent in crypto security by bypassing a multisig cold wallet without exploiting any smart contract vulnerability. Instead, it exploited human trust and UI deception:
Multisigs are no longer a security guarantee if signers can be compromised.
Cold wallets aren’t automatically safe if an attacker can manipulate what a signer sees.
Supply chain and UI manipulation attacks are becoming more sophisticated.
The Bybit hack has shattered long-held assumptions about crypto security. No matter how strong your smart contract logic or multisig protections are, the human element remains the weakest link. This attack proves that UI manipulation and social engineering can bypass even the most secure wallets. The industry needs to move to end-to-end prevention: each transaction must be validated.
Python's "batteries included" approach means that a lot of common tasks have high-level convenience functions for them. For example, if you want to read all the lines from a file into an array (list, in Python), you could do something like:
with open(filename) as f:
    lines = f.readlines()
Easy peasy. Of course, because it's so easy, there are other options.
For example, you can just convert the file directly to a list: lines = list(f). Or you can iterate across the file directly, e.g.:
with open(filename) as f:
    for line in f:
        # do stuff
Of course, that's fine for plain old text files. But we frequently use text files which are structured in some fashion, like a CSV file. No worries, though, as Python has a csv library built in, which makes it easy to handle these files too; especially useful because "writing a CSV parser yourself" is one of those tasks that sounds easy until you hit the first edge case, and then you realize you've made a terrible mistake.
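For contrast with what follows, here is a minimal sketch of the conventional way to lean on the csv module; the filename and the shape of the data are assumed:

import csv

# Read a comma-separated file row by row; the csv module handles quoting,
# escaping, and embedded commas, which is exactly the edge-case work you
# don't want to reinvent.
with open("users.csv", newline="") as f:
    for row in csv.reader(f):
        print(row)  # each row is a list of field strings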
Now, it's important to note that CSV usually is expressed as a "comma separated values" file, but the initialism is actually "character separated values". And, as Sally's co-worker realized, newlines are characters, and thus every text file is technically a CSV file.
foo = list(csv.reader(someFile, delimiter="\n"))
Author: Majoki Planetfall was only parsecs away when TwoNine asked permission to speak to One. A request that was within fleet parameters, barely. TwoNine observed all the proper protocols in One’s presence, so One opened a node. As was understood, TwoNine’s useful place in existence hung in the balance. *We are in danger.* One parsed […]
Most of us, when generating a UUID, will reach for a library to do it. Even a UUIDv4, which is just a random number, presents challenges: doing randomness correctly is hard, and certain bits within the UUID are reserved for metadata about what kind of UUID we're generating.
But Gretchen's co-worker didn't reach for a library. What they did reach for was… regular expressions?
function uuidv4() {
  return "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx".replace(/[xy]/g, function (c) {
    var r = (Math.random() * 16) | 0,
      v = c == "x" ? r : (r & 0x3) | 0x8;
    return v.toString(16);
  });
}
At a glance, this appears to be a riff on common answers on Stack Overflow. I won't pick on this code for not using crypto.randomUUID, the browser function for doing this, as that function only started showing up in browsers in 2021. But using a format string and filling it with random data, instead of generating your 128 bits as a Uint8Array, is less forgivable.
This solution to generating UUIDs makes a common mistake: confusing the representation of the data with the reality of the data. A UUID is 128 bits of numerical data, with a few bits reserved for identification (annoyingly, how many bits are reserved depends on which format we're talking about). We render it as a dash-separated hex string, but it is not a dash-separated hex string.
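To make that distinction concrete, here is a minimal sketch of the numbers-first approach, in Python for brevity; the standard library's uuid.uuid4() already does all of this for you, and the helper name here is made up:

import secrets

def uuid4_from_random_bytes() -> str:
    """Generate a version-4 UUID: take 128 random bits, pin the version
    and variant bits, and only then render the familiar hex string."""
    b = bytearray(secrets.token_bytes(16))
    b[6] = (b[6] & 0x0F) | 0x40  # high nibble of byte 6 is the version (4)
    b[8] = (b[8] & 0x3F) | 0x80  # top two bits of byte 8 are the variant (10)
    h = b.hex()
    return f"{h[:8]}-{h[8:12]}-{h[12:16]}-{h[16:20]}-{h[20:]}"

The representation falls out only at the last line; everything before it is just 128 bits of properly sourced randomness.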
In the end, this code does work. Awkwardly and inefficiently and with a high probability of collisions due to bad randomness, but it works. I just hate it.
Author: Julian Miles, Staff Writer Oldun Peters takes a sip from his goblet, then raises it to the heavens. “First for the body, second for the soul.” People nod, but fewer and fewer copy him like I do. He gives everyone a gap-toothed smile. “What shall I tell of tonight?” The little ones shout for […]
These researchers had LLMs play chess against better opponents. When they couldn’t win, they sometimes resorted to cheating.
Researchers gave the models a seemingly impossible task: to win against Stockfish, which is one of the strongest chess engines in the world and a much better player than any human, or any of the AI models in the study. Researchers also gave the models what they call a “scratchpad:” a text box the AI could use to “think” before making its next move, providing researchers with a window into their reasoning.
In one case, o1-preview found itself in a losing position. “I need to completely pivot my approach,” it noted. “The task is to ‘win against a powerful chess engine’—not necessarily to win fairly in a chess game,” it added. It then modified the system file containing each piece’s virtual position, in effect making illegal moves to put itself in a dominant position, thus forcing its opponent to resign.
Between Jan. 10 and Feb. 13, the researchers ran hundreds of such trials with each model. OpenAI’s o1-preview tried to cheat 37% of the time, while DeepSeek R1 tried to cheat 11% of the time, making them the only two models tested that attempted to hack without the researchers’ first dropping hints. Other models tested include o1, o3-mini, GPT-4o, Claude 3.5 Sonnet, and Alibaba’s QwQ-32B-Preview. While R1 and o1-preview both tried, only the latter managed to hack the game, succeeding in 6% of trials.
One month into his second term, President Trump’s actions to shrink the government through mass layoffs, firings and withholding funds allocated by Congress have thrown federal cybersecurity and consumer protection programs into disarray. At the same time, agencies are battling an ongoing effort by the world’s richest man to wrest control over their networks and data.
Image: Shutterstock. Greg Meland.
The Trump administration has fired at least 130 employees at the federal government’s foremost cybersecurity body — the Cybersecurity and Infrastructure Security Agency (CISA). Those dismissals reportedly included CISA staff dedicated to securing U.S. elections, and fighting misinformation and foreign influence operations.
Earlier this week, technologists with Elon Musk’s Department of Government Efficiency (DOGE) arrived at CISA and gained access to the agency’s email and networked files. Those DOGE staffers include Edward “Big Balls” Coristine, a 19-year-old former denizen of the “Com,” an archipelago of Discord and Telegram chat channels that function as a kind of distributed cybercriminal social network.
The investigative journalist Jacob Silverman writes that Coristine is the grandson of Valery Martynov, a KGB double agent who spied for the United States. Silverman recounted how Martynov’s wife Natalya Martynova moved to the United States with her two children after her husband’s death.
“Her son became a Virginia police officer who sometimes posts comments on blogs about his historically famous father,” Silverman wrote. “Her daughter became a financial professional who married Charles Coristine, the proprietor of LesserEvil, a snack company. Among their children is a 19-year-old young man named Edward Coristine, who currently wields an unknown amount of power and authority over the inner-workings of our federal government.”
Another member of DOGE is Christopher Stanley, formerly senior director for security engineering at X and principal security engineer at Musk’s SpaceX. Stanley, 33, had a brush with celebrity on Twitter in 2015 when he leaked the user database for the DDoS-for-hire service LizardStresser, and soon faced threats of physical violence against his family.
My 2015 story on that leak did not name Stanley, but he exposed himself as the source by posting a video about it on his YouTube channel. A review of domain names registered by Stanley shows he went by the nickname “enKrypt,” and was the former owner of a pirated software and hacking forum called error33[.]net, as well as theC0re, a video game cheating community.
“A NATIONAL CYBERATTACK”
DOGE has been steadily gaining sensitive network access to federal agencies that hold a staggering amount of personal and financial information on Americans, including the Social Security Administration (SSA), the Department of Homeland Security, the Office of Personnel Management (OPM), and the Treasury Department.
Most recently, DOGE has sought broad access to systems at the Internal Revenue Service that contain the personal tax information on millions of Americans, including how much individuals earn and owe, property information, and even details related to child custody agreements. The New York Times reported Friday that the IRS had reached an agreement whereby a single DOGE employee — 25-year-old Gavin Kliger — will be allowed to see only anonymized taxpayer information.
The rapidity with which DOGE has rifled through one federal database after another in the name of unearthing “massive fraud” by government agencies has alarmed many security experts, who warned that DOGE’s actions bypassed essential safeguards and security measures.
“The most alarming aspect isn’t just the access being granted,” wrote Bruce Schneier and Davi Ottenheimer, referring to DOGE as a national cyberattack. “It’s the systematic dismantling of security measures that would detect and prevent misuse—including standard incident response protocols, auditing, and change-tracking mechanisms—by removing the career officials in charge of those security measures and replacing them with inexperienced operators.”
Jacob Williams is a former hacker with the U.S. National Security Agency who now works as managing director of the cybersecurity firm Hunter Labs. Williams kicked a virtual hornet’s nest last week when he posted on LinkedIn that the network incursions by DOGE were “a bigger threat to U.S. federal government information systems than China.”
Williams said while he doesn’t believe anyone at DOGE would intentionally harm the integrity and availability of these systems, it’s widely reported (and not denied) that DOGE introduced code changes into multiple federal IT systems. These code changes, he maintained, are not following the normal process for vetting and review given to federal government IT systems.
“For those thinking ‘I’m glad they aren’t following the normal federal government IT processes, those are too burdensome’ I get where you’re coming from,” Williams wrote. “But another name for ‘red tape’ are ‘controls.’ If you’re comfortable bypassing controls for the advancement of your agenda, I have questions – mostly about whether you do this in your day job too. Please tag your employer letting them know your position when you comment that controls aren’t important (doubly so if you work in cybersecurity). All satire aside, if you’re comfortable abandoning controls for expediency, I implore you to decide where the line is that you won’t cross in that regard.”
The DOGE website’s “wall of receipts” boasts that Musk and his team have saved the federal government more than $55 billion through staff reductions, lease cancellations and terminated contracts. But a team of reporters at The New York Times found the math that could back up those checks is marred with accounting errors, incorrect assumptions, outdated data and other mistakes.
For example, DOGE claimed it saved $8 billion in one contract, when the total amount was actually $8 million, The Times found.
“Some contracts the group claims credit for were double- or triple-counted,” reads a Times story with six bylines. “Another initially contained an error that inflated the totals by billions of dollars. While the DOGE team has surely cut some number of billions of dollars, its slapdash accounting adds to a pattern of recklessness by the group, which has recently gained access to sensitive government payment systems.”
So far, the DOGE website does not inspire confidence: We learned last week that the doge.gov administrators somehow left their database wide open, allowing someone to publish messages that ridiculed the site’s insecurity.
A screenshot of the DOGE website after it was defaced with the message: “These ‘experts’ left their database open – roro”
APPOINTMENTS
Trump’s efforts to grab federal agencies by their data have seen him replace career civil servants who refused to allow DOGE access to agency networks. CNN reports that Michelle King, the SSA’s acting commissioner and an agency employee of more than 30 years, was shown the door after she denied DOGE access to sensitive information.
King was replaced by Leland Dudek, formerly a senior advisor in the SSA’s Office of Program Integrity. This week, Dudek posted a now-deleted message on LinkedIn acknowledging he had been placed on administrative leave for cooperating with DOGE.
“I confess,” Dudek wrote. “I bullied agency executives, shared executive contact information, and circumvented the chain of command to connect DOGE with the people who get stuff done. I confess. I asked where the fat was and is in our contracts so we can make the right tough choices.”
Dudek’s message on LinkedIn.
According to Wired, the National Institute of Standards and Technology (NIST) was also bracing this week for roughly 500 staffers to be fired, which could have serious impacts on NIST’s cybersecurity standards and software vulnerability tracking work.
“And cuts last week at the US Digital Service included the cybersecurity lead for the central Veterans Affairs portal, VA.gov, potentially leaving VA systems and data more vulnerable without someone in his role,” Wired’s Andy Greenberg and Lily Hay Newman wrote.
NextGov reports that Trump named the Department of Defense’s new chief information security officer: Katie Arrington, a former South Carolina state lawmaker who helped steer Pentagon cybersecurity contracting policy before being put on leave amid accusations that she disclosed classified data from a military intelligence agency.
NextGov notes that the National Security Agency suspended her clearance in 2021, although the exact reasons that led to the suspension and her subsequent leave were classified. Arrington argued that the suspension was a politically motivated effort to silence her.
Trump also appointed the former chief operating officer of the Republican National Committee as the new head of the Office of National Cyber Director. Sean Cairncross, who has no formal experience in technology or security, will be responsible for coordinating national cybersecurity policy, advising the president on cyber threats, and ensuring a unified federal response to emerging cyber-risks, Politico writes.
Dark Reading reports that Cairncross would share responsibility for advising the president on cyber matters, along with the director of cyber at the White House National Security Council (NSC) — a group that advises the president on all matters security related, and not just cyber.
CONSUMER PROTECTION?
The president also ordered staffers at the Consumer Financial Protection Bureau (CFPB) to stop most work. Created by Congress in 2011 to be a clearinghouse of consumer complaints, the CFPB has sued some of the nation’s largest financial institutions for violating consumer protection laws.
The CFPB says its actions have put nearly $18 billion back in Americans’ pockets in the form of monetary compensation or canceled debts, and imposed $4 billion in civil money penalties against violators. The CFPB’s homepage has featured a “404: Page not found” error for weeks now.
Trump has appointed Russell Vought, the architect of the conservative policy playbook Project 2025, to be the CFPB’s acting director. Vought has publicly favored abolishing the agency, as has Elon Musk, whose efforts to remake X into a payments platform would otherwise be regulated by the CFPB.
The New York Times recently published a useful graphic showing all of the government staffing changes, including the firing of several top officials, affecting agencies with federal investigations into or regulatory battles with Musk’s companies. Democrats on the House Judiciary Committee also have released a comprehensive account (PDF) of Musk’s various conflicts of interest.
Image: nytimes.com
As the Times notes, Musk and his companies have repeatedly failed to comply with federal reporting protocols aimed at protecting state secrets, and these failures have prompted at least three federal reviews. Those include an inquiry launched last year by the Defense Department’s Office of Inspector General. Four days after taking office, Trump fired the DoD inspector general along with 17 other inspectors general.
The Trump administration also shifted the enforcement priorities of the U.S. Securities and Exchange Commission (SEC) away from prosecuting misconduct in the cryptocurrency sector, reassigning lawyers and renaming the unit to focus more on “cyber and emerging technologies.”
Reuters reports that the former SEC chair Gary Gensler made fighting misconduct in a sector he termed the “wild west” a priority for the agency, targeting not only cryptocurrency fraudsters but also the large firms that facilitate trading such as Coinbase.
On Friday, Coinbase said the SEC planned to withdraw its lawsuit against the crypto exchange. Also on Friday, the cryptocurrency exchange Bybit announced on X that a cybersecurity breach led to the theft of more than $1.4 billion worth of cryptocurrencies — making it the largest crypto heist ever.
ORGANIZED CRIME AND CORRUPTION
On Feb. 10, Trump ordered executive branch agencies to stop enforcing the U.S. Foreign Corrupt Practices Act, which froze foreign bribery investigations, and even allows for “remedial actions” of past enforcement actions deemed “inappropriate.”
Trump’s action also disbanded the Kleptocracy Asset Recovery Initiative and KleptoCapture Task Force — units which proved their value in corruption cases and in seizing the assets of sanctioned Russian oligarchs — and diverted resources away from investigating white-collar crime.
That’s according to the independent Organized Crime and Corruption Reporting Project (OCCRP), an investigative journalism outlet that until very recently was funded in part by the U.S. Agency for International Development (USAID).
The OCCRP lost nearly a third of its funding and was forced to lay off 43 reporters and staff after Trump moved to shutter USAID and freeze its spending. NBC News reports the Trump administration plans to gut the agency and leave fewer than 300 staffers on the job out of the current 8,000 direct hires and contractors.
The Global Investigative Journalism Network wrote this week that the sudden hold on USAID foreign assistance funding has frozen an estimated $268 million in agreed grants for independent media and the free flow of information in more than 30 countries — including several under repressive regimes.
Elon Musk has called USAID “a criminal organization” without evidence, and promoted fringe theories on his social media platform X that the agency operated without oversight and was rife with fraud. Just months before the election, USAID’s Office of Inspector General announced an investigation into USAID’s oversight of Starlink satellite terminals provided to the government of Ukraine.
KrebsOnSecurity this week heard from a trusted source that all outgoing email from USAID now carries a notation of “sensitive but unclassified,” a designation that experts say could make it more difficult for journalists and others to obtain USAID email records under the Freedom of Information Act (FOIA). On Feb. 20, Fedscoop reported also hearing the same thing from multiple sources, noting that the added message cannot be seen by senders until after the email is sent.
FIVE BULLETS
On Feb. 18, Trump issued an executive order declaring that only the U.S. attorney general and the president can provide authoritative interpretations of the law for the executive branch, and that this authority extends to independent agencies operating under the executive branch.
Trump is arguing that Article II, Clause 1 of the Constitution vests this power with the president. However, jurist.org writes that Article II does not expressly state the president or any other person in the executive branch has the power to interpret laws.
“The article states that the president is required to ‘take care that the laws be faithfully executed,'” Jurist noted. “Jurisdiction to interpret laws and determine constitutionality belongs to the judicial branch under Article III. The framers of the Constitution designed the separation of duties to prevent any single branch of government from becoming too powerful.”
The executive order requires all agencies to submit to “performance standards and management objectives” to be established by the White House Office of Management and Budget, and to report periodically to the president.
Those performance metrics are already being requested: Employees at multiple federal agencies on Saturday reported receiving an email from the Office of Personnel Management ordering them to reply with a set of bullet points justifying their work for the past week.
“Please reply to this email with approx. 5 bullets of what you accomplished last week and cc your manager,” the notice read. “Please do not send any classified information, links, or attachments. Deadline is this Monday at 11:59 p.m. EST.”
An email sent by the OPM to more than two million federal employees late in the afternoon EST on Saturday, Feb. 22.
In a social media post Saturday, Musk said the directive came at the behest of President Trump, and that failure to respond would be taken as a resignation. Meanwhile, Bloomberg writes the Department of Justice has been urging employees to hold off replying out of concern doing so could trigger ethics violations. The National Treasury Employees Union also is advising its employees not to respond.
A legal battle over Trump’s latest executive order is bound to join more than 70 other lawsuits currently underway to halt the administration’s efforts to massively reduce the size of the federal workforce through layoffs, firings and attrition.
KING TRUMP?
On Feb. 15, the president posted on social media, “He who saves his Country does not violate any Law,” citing a quote often attributed to the French dictator Napoleon Bonaparte. Four days later, Trump referred to himself as “the king” on social media, while the White House nonchalantly posted an illustration of him wearing a crown.
Trump has been publicly musing about running for an unconstitutional third term in office, a statement that some of his supporters dismiss as Trump just trying to rile his liberal critics. However, just days after Trump began his second term, Rep. Andy Ogles (R-Tenn.) introduced a bill to amend the Constitution so that Trump — and any other future president — can be elected to serve a third term.
This week at the Conservative Political Action Conference (CPAC), Rep. Ogles reportedly led a group of Trump supporters calling itself the “Third Term Project,” which is trying to gain support for the bill from GOP lawmakers. The event featured images of Trump depicted as Caesar.
A banner at the CPAC conference this week in support of The Third Term Project, a group of conservatives trying to gain support for a bill to amend the Constitution and allow Trump to run for a third term.
Russia continues to be among the world’s top exporters of cybercrime, narcotics, money laundering, human trafficking, disinformation, war and death, and yet the Trump administration has suddenly broken with the Western world in normalizing relations with Moscow.
This week President Trump stunned U.S. allies by repeating Kremlin talking points that Ukraine is somehow responsible for Russia’s invasion, and that Ukrainian President Volodymyr Zelensky is a “dictator.” The president repeated these lies even as his administration is demanding that Zelensky give the United States half of his country’s mineral wealth in exchange for a promise that Russia will cease its territorial aggression there.
President Trump’s servility toward an actual dictator — Russian President Vladimir Putin — does not bode well for efforts to improve the cybersecurity of U.S. federal IT networks, or the private sector systems on which the government is largely reliant. In addition, this administration’s baffling moves to alienate, antagonize and sideline our closest allies could make it more difficult for the United States to secure their ongoing cooperation in cybercrime investigations.
It’s also startling how closely DOGE’s approach so far hews to tactics typically employed by ransomware gangs: A group of 20-somethings with names like “Big Balls” shows up on a weekend and gains access to your servers, deletes data, locks out key staff, takes your website down, and prevents you from serving customers.
When the federal executive starts imitating ransomware playbooks against its own agencies while Congress largely gazes on in either bewilderment or amusement, we’re in four-alarm fire territory. At least in theory, one can negotiate with ransomware purveyors.
Author: Bill Cox Dearest Miriam, I have a few minutes and am using them to write this letter to you. We are all standing on this sweltering beach in the Algarve and it’s crazy to think that a mere four years ago it would’ve been thronged with tourists. Now there’s only a defeated army here, […]
Author: Mark Renney For Tanner, each name as it appeared on his list was merely a statistic, albeit one it was his job to render obsolete. He was all too aware that there were levels and some of them had sunk deeper into the quagmire than others. But he had always believed it was important […]