It's the end of March. Since the last blog update I've had my second cataract surgery (it went much better this time), written a portion-and-outline of a new novel (for my agent, who will hopefully have feedback or maybe just go ahead and sell it so I can write the rest), and ... been diagnosed with exertional angina. Happy joy. I swear, you hit 60 and the warranties on all your body parts expire simultaneously. (NB: keep your medical advice to yourselves!)
We've also been treated to the unedifying sight of the Paedopotus Rex attacking Iran for no sane reason (the main beneficiary appears to be Benjamin Netanyahu), setting off a conflagration in the Middle East that is already having global repercussions. Per United Airlines, aviation fuel is expected to be over $175 a barrel through the end of 2027 even if the Straits of Hormuz are unblocked within a week or two; J. P. Morgan prognosticate that the last pre-closure consignments through the Straits should be reaching European ports this week, the Far East in about 10 days, and the USA by the middle of April, after which all bets are off. Supply chain shocks, here we come!
It's not just crude oil, of course, although it's looking as if the shortages we're in for are going to be as bad as both the oil crises of the 1970s stacked. About 30% of the world's ammonia, required as a feedstock for fertilizer, is manufactured close to the gas wells in the region. And it's getting into growing season in the northern hemisphere. This promises to spike the price of food and trigger famines and eventually revolutions in poorer nations.
Helium, vital for any number of advanced technologies (such as hard disk drives, semiconductor fab lines, MRI machines ...), is a by-product of natural gas wells: about 20% of the global supply comes from the Gulf. So TSMC, Samsung, and the other fabs will be hitting crisis levels of supply shortages within a few weeks.
This is not only an emergency for fuel, food production, and electronics: it's going to trigger inflation globally. Iran has had the great idea of allowing ships through the Straits of Hormuz if they pay a transit fee of about US$2M ... in Yuan. Which means oil is now de facto denominated in Chinese currency, not dollars (great win for Trump!).
The truth of the matter is, we're being forced to confront an iron law of economics: you can optimize a system for efficiency or for robustness, but not for both. Just-in-time supply chains are efficient, but there's no slack in the system. Systems with warehousing and storage and redundancy built-in are resilient, but they're not efficient. And over the past 50 years we've abandoned them, in the name of efficiency, so that the excess capacity could be sold off and turned into profits. This war is payback time for the cult of efficiency over robustness in business.
As for the war itself, it's a shit-show. Mass murder of innocent schoolgirls aside, Pete Hegseth is demonstrating the truth of the aphorism that lieutenants study tactics, majors study strategy, generals study logistics, and field marshals study economics. Going by his demonstrated expertise, Hegseth is clearly a lieutenant: he seems mystified that the US defense industry giants can't throw together a new factory producing Tomahawk or Patriot missiles in a week. (He seems to have AI-pilled himself into believing that all military hardware problems can be solved in software. Or maybe he just believes that his Warrior Jesus will provide.)
I would have more to say on this subject if I wasn't gibbering in a corner about the stupidity of it all, but meanwhile I have hospital and other appointments coming up, then a science fiction convention at the weekend. I'll try to lighten the topic of conversation when I get back: this reality is getting to me (again).
So, I had my second round of eye surgery, and it worked fine. I got a short distance lens, leaving me myopic, which was expected, and I've booked an ophthalmology appointment for the earliest possible date post-surgery (in mid-May: the eye needs to settle for six weeks post-op). In the meantime, I'm without visual correction.
And guess what? My vision is changing. My left eye is increasingly myopic, to the point where it's now difficult to read on screen. (And I can barely read with my right eye at all, due to a retinal occlusion that covers about half the visual field.) For writing/editing I've blown up the text size to 250%, which is just tolerable but gives me a headache after a while: new prescription specs can't come soon enough.
NB: don't suggest half-assing corrective lenses using off-the-shelf stuff, my eyes are kinda complex and I'm not just myopic, there's other stuff going on there. Also, don't suggest dictation software: I use a complex vocabulary and punctuation that aren't a normal part of the use case the designers of such software anticipated, i.e. business correspondence. And absolutely don't suggest podcasts or text-to-speech software: I can't absorb information that way. I'm fed up with people trying to convince me to try something I've tried repeatedly to use (and that has failed for me) over the past 30 years: it's irritating, not helpful.
... In other news: despite the above I'm still plodding along at book 2 of the proposed duology (but making very slow progress because writing 1000 words in a day is the new writing 4500 words in a day). And I'll be at Satellite 9 in Glasgow next month, probably before I have new glasses, so if you see me and I fail to make eye contact across a room it's not you: I'm just blind as a bat.
Another minor update 0.3.14 for our nanotime
package is now on CRAN, and has
been compiled for r2u (and
will have to wait to be uploaded to Debian until dependency bit64 has been
updated there). nanotime
relies on the RcppCCTZ
package (as well as the RcppDate
package for additional C++ operations) and offers efficient high(er)
resolution time parsing and formatting up to nanosecond resolution,
using the bit64
package for the actual integer64 arithmetic. Initially
implemented using the S3 system, it has benefitted greatly from a
rigorous refactoring by Leonardo who not only rejigged
nanotime internals in S4 but also added new S4 types for
periods, intervals and durations.
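If you have not used nanotime before, here is a minimal sketch of the package in action (the timestamp is made up for illustration; behaviour is as I understand the package documentation):

library(nanotime)
x <- nanotime("2026-04-22T12:34:56.123456789+00:00")  # parse at nanosecond resolution
x + 1e9       # arithmetic is in nanoseconds, so this is one second later
format(x)     # round-trip back to an ISO-style string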
This release has been driven almost entirely by Michael, who took over as
bit64 maintainer
and has been making changes there that have an effect on us
‘downstream’. He reached out with a number of PRs which (following
occasional refinement and smoothing) have all been integrated. There
are no user-facing or behavioural changes, nor enhancements, in
this release.
The NEWS snippet below has the fuller details.
Changes in version 0.3.14 (2026-04-22)

Tests were refactored to use NA_integer64_ (Michael Chirico in #149 and Dirk in #156)
nanoduration was updated for changes in bit64 4.8.0 (Michael Chirico in #152 fixing #151)
Use of as.integer64(keep.names=TRUE) has been refactored (Michael Chirico in #154 fixing #153)
In tests, nanotime is attached after bit64; this still needs a better fix (Michael Chirico in #155)
The package now has a hard dependency on the just-released bit64 version 4.8.0 (or later)
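As a quick illustration of the bit64 pieces named above (a minimal sketch; the example values are invented, the identifiers are the ones the changelog mentions):

library(bit64)
NA_integer64_                                     # the integer64 missing-value constant now used in the tests
as.integer64(c(a = 1, b = 2), keep.names = TRUE)  # keep.names retains names on the converted vector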
"Wait," you say, "what's the WTF about a comment pointing to a Stack Overflow page. I do that all the time?"
In this case, it's because this particular comment wasn't given any further explanation. It also wasn't in a block of code that was doing anything with either lodash, Mongoose, or set differences. It was, however, repeated multiple times throughout the codebase, because the entire codebase was a pile of copy-pasta glued together with the bare minimum code to make it work.
In at least one place, the comment was probably correct and helpful. But it got swept up as part of a broader copy/paste exercise, and now is scattered through the code without any true purpose.
Author: Mark Renney Cartwright tends to the machine, the work is all-consuming but perfunctory at best. He cleans the machine and he replaces the data chips. It is vital this is done in the correct order and at the opportune moment, when the machine is able to upload that particular information. The machine and the […]
Armadillo is a powerful
and expressive C++ template library for linear algebra and scientific
computing. It aims towards a good balance between speed and ease of use,
has a syntax deliberately close to Matlab, and is useful for algorithm
development directly in C++, or quick conversion of research code into
production environments. RcppArmadillo
integrates this library with the R environment and language, and is
widely used by (currently) 1263 other packages on CRAN, downloaded 45.7 million
times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint
/ vignette) by Conrad and myself has been cited 683 times according
to Google Scholar.
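For readers new to the package, here is a minimal sketch of what RcppArmadillo enables (the function below is illustrative, not part of the package API):

library(Rcpp)
cppFunction(depends = "RcppArmadillo", code = '
arma::vec eigenValuesSym(const arma::mat& X) {
    // eigenvalues of the symmetric matrix formed from the upper triangle of X
    return arma::eig_sym(arma::symmatu(X));
}')
eigenValuesSym(matrix(c(2, 1, 1, 2), 2, 2))  # returns 1 and 3

Compilation happens once when cppFunction is called; afterwards eigenValuesSym can be used like any other R function.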
This version updates to the 15.2.5 and 15.2.6 upstream Armadillo releases from,
respectively, five and two days ago. The package has already been
updated for Debian, and built for
r2u. When we ran the
reverse-dependency check for 15.2.5 at the end of last week, one package
failed. I got in touch with the authors, filed an issue, poked some
more, isolated the one line that caused an example to fail … and right
then 15.2.6 came out fixing just that. It was after all an upstream
issue. We used to run these checks before Conrad made a release; he now
skips this step and hence needed a quick follow-up release. It can
happen.
The other big change is that this R package release phases out the
‘dual support’ for C++14 or newer (as in current Armadillo) alongside a C++11
fallback for more slowly updating packages. I am happy to say that after
over eight months of this managed transition (during which CRAN expelled some laggard
packages that were not moving on from C++11) we are now at all packages
using C++14 or newer, which is nice. And I will take this as an
opportunity to stress that one can in fact manage a disruptive API
change this way, as we just demonstrated. Sadly, R Core does not seem to
have gotten that message, and rollout of this package was also still a
little delayed because of the commotion created by the last-minute API
changes preceding the R 4.6.0 release later this week.
Smaller changes in the package are a switch in pdf vignette
production to the Rcpp::asis() driver, and a
higher-precision computation in rmultinom() (matching a
change made in R-devel last week in its use of Kahan summation).
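For reference: Kahan (compensated) summation carries a small correction term that recovers the low-order bits lost when adding numbers of different magnitudes. A quick sketch of the idea in R (illustrative only; the actual change lives in the compiled sources):

kahanSum <- function(x) {
    s <- 0; comp <- 0            # running sum and compensation term
    for (xi in x) {
        y <- xi - comp           # re-inject previously lost low-order bits
        t <- s + y               # low-order bits of y may be lost in this addition
        comp <- (t - s) - y      # measure exactly what was lost
        s <- t
    }
    s
}
kahanSum(rep(1/10, 10))          # compensated sum of ten tenths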
All detailed changes since the last CRAN release follow.
Changes in RcppArmadillo version 15.2.6-1 (2026-04-20)

Upgraded to Armadillo release 15.2.6 (Medium Roast Deluxe)
Ensure internally computed tolerances are not NaN
rmultinom deploys 'Kahan summation' as R-devel now does

Changes in RcppArmadillo version 15.2.5-1 [github-only] (2026-04-18)

Upgraded to Armadillo release 15.2.5 (Medium Roast Deluxe)
Fix for handling NaN elements in .is_zero()
Fix for handling NaN in tolerance and conformance checks
Faster handling of diagonal views and submatrices with one row
Sunset the C++11 fallback of including Armadillo 14.6.3 (#504 closing #503)
The vignettes have refreshed bibliographies, and are now built using the Rcpp::asis vignette builder (#506)
One rmultinom test is skipped under R-devel which has switched to a higher-precision calculation
Just a quick invitation to an in-person event in Tilburg, the Netherlands.
All people interested in the Lomiri Operating Environment are invited to join us at the Lomiri Codefest [codefest] taking place on May 16-17 (participation is free of charge).
We are hiring Lomiri developers
And as another side note, we still have budget (until 07/2027) for 2-3 additional Lomiri developers (depending on each dev's weekly availability). The details of my previous post [hiringdetails] more or less still apply. One more limitation / strength: you need real coding skills to apply for the open positions; AI-generated contributions will not be accepted for the tasks at hand.
If you are interested, are a skilled FLOSS developer (previous OSS contributions are required as references), and are available for at least 10 hrs / week, please get in touch [fsgmbh].
A 24-year-old British national and senior member of the cybercrime group “Scattered Spider” has pleaded guilty to wire fraud conspiracy and aggravated identity theft. Tyler Robert Buchanan admitted his role in a series of text-message phishing attacks in the summer of 2022 that allowed the group to hack into at least a dozen major technology companies and steal tens of millions of dollars worth of cryptocurrency from investors.
Buchanan’s hacker handle “Tylerb” once graced a leaderboard in the English-language criminal hacking scene that tracked the most accomplished cyber thieves. Now in U.S. custody and awaiting sentencing, the Dundee, Scotland native is facing the possibility of more than 20 years in prison.
Two photos published in a Daily Mail story dated May 3, 2025 show Buchanan as a child (left) and as an adult being detained by airport authorities in Spain. “M&S” in this screenshot refers to Marks & Spencer, a major U.K. retail chain that suffered a ransomware attack last year at the hands of Scattered Spider.
Scattered Spider is the name given to a prolific English-speaking cybercrime group known for using social engineering tactics to break into companies and steal data for ransom, often impersonating employees or contractors to deceive IT help desks into granting access.
As part of his guilty plea, Buchanan admitted conspiring with other Scattered Spider members to launch tens of thousands of SMS-based phishing attacks in 2022 that led to intrusions at a number of technology companies, including Twilio, LastPass, DoorDash, and Mailchimp.
The group then used data stolen in those breaches to carry out SIM-swapping attacks that siphoned funds from individual cryptocurrency investors. In an unauthorized SIM-swap, crooks transfer the target’s phone number to a device they control and intercept any text messages or phone calls to the victim’s device — such as one-time passcodes for authentication and password reset links sent via SMS. The U.S. Justice Department said Buchanan admitted to stealing at least $8 million in virtual currency from individual victims throughout the United States.
FBI investigators tied Buchanan to the 2022 SMS phishing attacks after discovering that the same username and email address were used to register numerous phishing domains seen in the campaign. The domain registrar NameCheap found that less than a month before the phishing spree, the account that registered those domains logged in from an Internet address in the U.K. FBI investigators said the Scottish police told them the address was leased to Buchanan throughout 2022.
As first reported by KrebsOnSecurity, Buchanan fled the United Kingdom in February 2023, after a rival cybercrime gang hired thugs to invade his home, assault his mother, and threaten to burn him with a blowtorch unless he gave up the keys to his cryptocurrency wallet. That same year, U.K. investigators found a device at Buchanan’s Scotland residence that included data stolen from SMS phishing victims and seed phrases from cryptocurrency theft victims.
Buchanan was arrested by Spanish authorities in June 2024 while trying to board a flight to Italy. He was extradited to the United States and has remained in U.S. federal custody since April 2025.
Buchanan is the second known Scattered Spider member to plead guilty. Noah Michael Urban, 21, of Palm Coast, Fla., was sentenced to 10 years in federal prison last year and ordered to pay $13 million in restitution. Three other alleged co-conspirators — Ahmed Hossam Eldin Elbadawy, 24, a.k.a. “AD,” of College Station, Texas; Evans Onyeaka Osiebo, 21, of Dallas, Texas; and Joel Martin Evans, 26, a.k.a. “joeleoli,” of Jacksonville, North Carolina – still face criminal charges.
Two other alleged Scattered Spider members will soon be tried in the United Kingdom. Owen Flowers, 18, and Thalha Jubair, 20, are facing charges related to the hacking and extortion of several large U.K. retailers, the London transit system, and healthcare providers in the United States. Both have pleaded not guilty, and their trial is slated to begin in June.
Investigators say the Scattered Spider suspects are part of a sprawling cybercriminal community online known as “The Com,” wherein hackers from different cliques boast publicly on Telegram and Discord about high-profile cyber thefts that almost invariably begin with social engineering — tricking people over the phone, email or SMS into giving away credentials that allow remote access to corporate internal networks.
One of the more popular SIM-swapping channels on Telegram has long maintained a leaderboard of the most rapacious SIM-swappers, indexed by their supposed conquests in stealing cryptocurrency. That leaderboard previously listed Buchanan’s hacker alias Tylerb at #65 (out of 100 hackers), with Urban’s moniker “Sosa” coming in at #24.
Buchanan’s sentencing hearing is scheduled for August 21, 2026. According to the Justice Department, he faces a statutory maximum sentence of 22 years in federal prison. However, any sentence the judge hands down in this case may be significantly tempered by a number of mitigating factors in the U.S. Sentencing Guidelines, including the defendant’s age, criminal history, time already served in U.S. custody, and the degree to which they cooperated with federal authorities.
After my previous blog post about eBook readers in Debian [1] a reader recommended FBReader. I tried it and it’s now my favourite reader. It works nicely on laptop and phone and takes significantly less RAM than Calibre or Arianna (especially important for phones). While the problems with my FLX1s not displaying text with Calibre or Arianna might be the fault of something on the FLX1s side those problems just don’t happen with FBReader.
FBReader upstream has apparently now gone proprietary, but we still have FOSS code to use in Debian. It would be nice if someone updated it to store the reading location using WebDAV and/or a local file that can be copied with the NextCloud client or similar. Currently there is code to store the reading location in the Google cloud, which I don't want to use. It's not THAT difficult to see what chapter you are at on one device and just skip to that part on another, but it is an annoyance.
One thing I really like about FBReader is that you can run it with an epub file on the command line: it just opens it, and after it's been closed you can open it again at the same spot in the same file. I don't want a "library" to view a book list, I just want to go back to what I was last reading in a hurry. Calibre might be better for some uses, for example I can imagine someone in the publishing industry with a collection of thousands of epub files finding that Calibre works better for them. But for the typical person who just wants to read one book and keep reading it until they finish it, FBReader seems clearly better. The GUI is a little unusual, but it's not at all confusing and it works really well on mobile.
Okular
I tried Okular (the KDE viewer for PDF files etc), which displays epub files if you have "okular-extra-backends" installed, but it appears not to display books with the background color set to black. I would appreciate it if someone who has read some public domain or CC-licensed epub files could recommend ones with a black background that I could use for testing, as I can't file a Debian bug report without sample data to reproduce the bug. I decided not to use it for actual book reading as FBReader is far better for my use, taking less RAM and being well optimised for mobile use.
Foliate
Foliate supports specifying a book on the command-line, which is nice. But it takes more memory than FBReader, probably mostly due to using WebKit to display things. The output was in two columns of small text on my laptop, which is probably configurable, but I didn't proceed with it. I determined that it doesn't compare with FBReader for my use. It's written in JavaScript, which may be a positive feature for some people.
Koodo
I had a brief test of Koodo which isn’t in Debian. Here is the Koodo Reader Github [2]. I installed the .deb that they created, it installs files to “/opt/Koodo Reader/” (yes that’s a space in the directory name) and appears to have Chromium as part of the runtime. I didn’t go past that even though it appears to have a decent feature set. It is licensed under version 3 of the AGPL so is suitable for Debian packaging if someone wants to do it.
Author: Majoki Philomena paced the floor of the lab. “It’s the only thing that will do the trick.” “Quantum bacon?” “Of course, quantum bacon. What else is going to attract the right kind of scientists to work here?” “And who exactly are the ‘right kind’ of scientists?” Akira asked. Philomena smiled her patient and most […]
Eric O worked for a medical device company. The medical device industry moves slowly, relative to other technical industries. Medical science and safety have their own cadence, and at a certain point, iterating faster doesn't matter much.
Eric was working on a new feature on a system that had been in use for thirteen years. This new feature interacted with a database which stored information about racks of test tubes, and Eric's tests meant creating several entries for racks of test tubes. And that's when Eric discovered that the database only allowed thirty racks. Add any more, it would just roll right back over to one.
This was odd. The database was small- less than 40MB, even in production- and there were automatic tasks to purge old data for compliance purposes. Why a hard limit of thirty?
Eric had only been at the company for a year, so he asked one of the more senior team members, Lester. "Oh yeah, that was before my time. You should probably ask Carl."
Later that day, Eric happened to bump into Carl around the coffee maker, and asked the question. "Oh, yeah, I do vaguely remember something about that. It was in the requirements for the product. I thought it was weird, but didn't think too much about it. You should probably ask Elise, she's been here like twenty years."
Well, now it was getting curious. Eric went over to the "old building", as it was named, the original office for the company on the other side of the parking lot. Most of the offices had moved to the new building a decade earlier, and it mostly served as fabrication and storage, but a few offices remained.
Elise was on the third floor, down a poorly lit hallway, sitting in an office with water-stained acoustical tile in its ceiling. "Oh, yeah, I put that into the requirements document. It's funny, I thought it was weird too, but the system you're working on was a replacement for an older system. Our requirements were derived from those. Let me think… Irving worked on that, but he's dead, god rest him. Penny is retired. Oh, you know, Humbert is still around. He didn't work on that, but he worked on some of the systems that came before that. He's upstairs and on the other side of the building."
Eric went upstairs and to the other side of the building. The fourth floor had been last remodeled circa 1985, and the ugly industrial paint on the wall was made even uglier by the fact that someone had replaced most of the fluorescent tubes with LEDs. Most. The mismatched color temperature started Eric down the path of a headache.
Humbert was in an office similar to Elise's. On his desk was a plaque commemorating 40 years of service with the company. Eric asked about the limitation, and Humbert laughed.
"You're working on the latest version of a product that initially started on an old PDP-11 running MUMPS. I mean, the first versions, anyway. We ran to desktop computers as fast as we could. I wrote a version for DOS in… oh… '86? I knew none of the facilities we worked with had more than ten or fifteen racks of tubes, and I needed somehow to limit the size of the database so it all fit on a single 5 1/4" floppy disk. I picked thirty, because it seemed like a good round number. Honestly, I'm shocked that the limit still exists."
So was Eric. There had been several ground-up-rewrites since 1986, before the one Eric maintained had been released thirteen years ago. Each one of them had chosen to maintain the same limitation, without ever considering why it existed. The rule had simply been copied, mindlessly, for 40 years.
"I'm kind of impressed," Eric said to Humbert, "in a horrified way."
"Me too, kid, me too."
In September 2025, I attended the LibreOffice Conference in Budapest, Hungary, on the 4th and the 5th, and a community meeting on the 3rd. Thanks to The Document Foundation (TDF) for sponsoring my travel and accommodation costs. The conference venue was the Faculty of Informatics, Eötvös Loránd University (ELTE).
The conference was planned to be held from the 4th to the 6th, but the program for the 6th of September had to be canceled due to the venue being unavailable because of a marathon in Budapest. So, all the talks got squeezed into just two days, making the schedule a bit hectic.
The TDF had booked my room at the Corvin Hotel. It was a double bedroom with a window. Breakfast was included in the hotel booking. The hotel was within walking distance of the conference venue. One could also take a tram from the hotel to reach the venue.
A shot of my room. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
A tram in Budapest. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
3rd of September
On the 3rd of September, we had a community meeting at the above-mentioned venue. I walked with my friend Dione to the venue. Upon reaching there, I noticed that the university had no boundary walls or gates. This reminded me of the previous year's conference venue in Luxembourg, which also had no boundary walls or gates.
In contrast, Indian universities and institutes typically have walls and gates serving as boundaries to separate them from the rest of the city. Many of these institutes also have security guards at the entrance, who may ask attendees to present proof of admission before allowing them inside. I was surprised to find that institutes in Europe, like the one where the conference was held, did not have such boundaries.
The building where the conference was held was red, which happened to be the same color as the building for the previous year’s conference venue. I remember joking with Dione that the criteria for the conference venue might have been the color of the building.
The red building in the picture served as the conference venue. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
During the community meeting, we shared ideas on how to spread the word about LibreOffice. The meeting lasted for a couple of hours.
After the community meeting, we went to the hotel for dinner sponsored by the TDF.
These Esterházy cake bites were really yummy. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Raspberry Currant cake slices. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
4th of September
On the first day of the conference, attendees were given swag bags containing a pad, sticky notes, a pen, a conference T-shirt, and a bottle.
Conference swag. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
The talks started early in the morning with Eliane Domingos, Chairperson of TDF’s Board of Directors, giving the inauguration talk. As always, I found Italo Vignoli’s talk on the importance of document freedom interesting.
During the snack break, I noticed that there were three types of milk available for coffee: cow's milk, lactose-free milk, and almond milk. Almond milk is rare in India, though I have managed to find it; lactose-free milk I have never seen in India at all.
Since I run fundraisers in my projects, such as Prav, I could relate to Lothar K. Becker’s talk. He discussed the issue that certain implementations in LibreOffice require a budget that is too large for any single interested entity to fund independently. Furthermore, The Document Foundation (TDF) cannot legally receive funds from government entities. Therefore, there is no organization or entity to pool resources from all the interested entities to finance the implementation.
Lothar giving his presentation. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Another talk was by the Austrian Armed Forces on their migration to LibreOffice. I wanted to know why they migrated, and I found out that they did it for digital sovereignty, not to save on license costs. Another point presented in the talk was that LibreOffice is available on all operating systems, while the Microsoft Office suite is not that widely available. The migration was systematic and was performed over a few years. They started working on it in 2021, and the migration was finished recently. In addition, it also required training their staff in using LibreOffice.
Presentation on migration to LibreOffice by Austrian Armed Forces. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
The lunch was inside the university canteen. We were provided lunch coupons by the TDF. I got a vegan coupon with 4000 Ft written on it, which meant I could take lunch for up to 4000 Hungarian forints.
My lunch ticket for the conference. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
The lunch I had on the first day. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
During the evening, it was my turn for the presentation. I had finished preparing my slides ten days before my talk, and I also got them reviewed by friends.
I finished my talk in 20 minutes, though I was given a 30-minute slot. This helped us catch up on the schedule. Furthermore, I made my talk interactive by asking questions and making sure that the audience was not asleep. During my talk, my friend Dione took pictures of me with my camera.
My talk was on how free software projects could give users a say in freedom to modify the software. I illustrated this using the Prav project that I am a part of.
After the talks were over, we were treated to a conference dinner at Trofea Grill. It had a great selection of desserts, which helped me sample some Hungarian desserts. The sponge cake was especially good.
Desserts at Trofea Grill. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
5th of September
The next day—the 5th of September—I went with Dione to the venue early in the morning, as her talk was the first one of the day. Her talk was titled Managing Tasks with Nextcloud Deck. Later that day, I also attended a talk on Collabora. At lunch, I found the egg white salad quite tasty.
Dione giving her presentation. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Egg white salad. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
After the lunch break, we had the conference group photo. I had a Nikon camera, which we used to take the group photo. I asked a university student to take our group photo and also taught her how to operate the camera.
Group photo
By the evening, the conference had ended, after which we went to a pub, which was again sponsored by the TDF. I had a beer, but it tasted really bad, so I couldn't finish it. The only vegetarian option was a goat cheese burger, which my friend Manish and I opted for. The burger tasted awful. Apparently, I don't like goat cheese.
The next day I went sightseeing with Dione in Budapest. Stay tuned for our adventures in Budapest!
Credits: Thanks to Dione and Richard for proofreading.
Many thanks to Sruthi Chandran for her campaign, to our Developers for their
votes, and to Andreas Tille for his service as DPL over the past two years!
The new term for the project leader will start on April 21, 2026 and expire
on April 20, 2027.
As the Putinists continue wrecking all U.S. institutions and turning the world (including longtime allies) against us, it's important to recall how much goodwill Trump and his ilk must eliminate, before that promise to Moscow can be fulfilled. Of course all empires are disliked, but elsewhere I describe how George Marshall, FDR, Truman, Ike etc. set things up so that humanity would have its best 80 years, ever. Better than all of prior human history combined. Resulting in the *least hated* empire. That is... until now.
Okay, Pax Americana will never be the same after Trump. And maybe that's good. Other centers of Enlightenment are stepping up. But when the Union finally wins this latest phase 9 of Civil War betrayal by our idiot Hyde-Side neighbors, watch the joy burst forth around the globe... and across all Americans of goodwill and sapience.
Want evidence for that assertion? Amid our self-reproach, let's remember times when America did take brave steps toward light. There are others, on this planet, who remember, as well. And you need the gift that I am about to give you.
This song by Michel Sardou is called "Les Ricains" which means, more or less, "The Yankees." Here are the lyrics, to read along.
Even better is this version... a huge crowd of French people cheering and singing along. Capable of gratitude. They know that this American Pax, for all of its faults, prevented vastly worse. That things could have been a hell, a curse. That every other era of dismal human history was worse.
And if we do not blow it now, we have a chance to be recalled by our heirs - organic and cyber - the true humans - as the very best that cavemen could be. Crude, bestial primitives who tried nonetheless to lift our gaze and those around us. To something better.
Listen and read along. We need this now. Right now!
Try. I dare you not to tear up, in gratitude for this gratitude.
== But we have a tough job keeping that promise ==
Alas, the Kremlin boyz and confederates and murder sheiks have the upper hand for now, and they stick together.
Latest example: Trump has issued special exemptions letting Putin sell oil, evading world sanctions for his murderously criminal invasion of Ukraine. Fox News is a fifth column of the relabeled KGB's propaganda Comintern, which has used blackmail to take over the entire Republican Party.
Amid the hoopla over the Strait of Hormuz -- ("YOU block it? No, *I* get to block it!") -- Trump has all along made offers to the Iranian Republican Guard and Religious Police etc. to make deals with him, in exchange for them kissing his ring.
It's already happened! In Venezuela, Argentina, El Salvador etc. - and possibly soon in Cuba (DT shouts "They're next!") - the aim is never, ever to establish democracy or to liberate citizens from their oppressors.
The pattern is perfectly that of mafiosi and of an ex-casino mogul: taking over another gang's territory by decapitating its top capo, then getting allegiance (and resulting vigorish) from the sub-capos of the gang that's left in place.
This is now so blatant that no other theory is remotely tenable. LOOK at this image of Maduro's VP Rodriguez, all spiffed and glammed-up and grin-hugging Trump's consigliere, eager to serve... and to send Trump personally a shipment of gold! And nothing for the Venezuelan people.
Oh, and Miami crime families will slip in atop the Castro power structure in Cuba.
This is a Mafia gang and the capo di tutti capi is named Vlad.
His other goal? Riling up enough enemies (who had been quiescent since Obama killed Osama) to deliver us into another 9/11, which he imagines might save him this fall. Which explains why he fired over half of our counter-terrorism experts. Now why would anyone do that?
Put it all together folks.
== The real purpose of the coming Reichstag Fire / 911 strike ==
Everyone will be able to see that the calamity is a blatant set-up to justify declaring an emergency and martial law, and to cancel the November election and its likely torching of the entire treasonous GOP.
It won't work, for that reason. Because we all can see it.
Only there is an added, underlying danger that I see discussed nowhere.
Go back to 1933. The purpose of the Reichstag Fire was to excuse the Nazi arrest of dozens of opposition parliament members. And thus, the parliament could never hold a quorum vote for new elections or a new Chancellor.
The lesson?
YOU U.S. SENATORS AND REPRESENTATIVES: START UPPING YOUR SECURITY RIGHT NOW.
The Roberts Court has already said Trump could off you, as an 'official act,' to prevent impeachment/conviction. So talk it over. Upgrade practices. Have contingency plans. Grow eyes in the back of your heads. Do it now.
The rest of you?
When the calamity strikes, get out there and chant "Reichstag Fire!"
And one more word to show our intent:
"Appomattox!"
== Finally, my qualifications as a history expert! ==
Well, AI has some legit uses. One fan/reader searched the paleontology databases and found this historical record, a bit fuzzy, from the Paleolithic. It shows my legit ancestral claims are valid!
I recently released version 0.3.0 of my recipe manager application Kookbook – find it in git in KDE Invent or as released tarballs in https://download.kde.org/stable/kookbook/
Changes since last time are more or less “Minor bugfixes and a Qt6 port” – nothing especially noteworthy unless you aim to get rid of Qt5 on your system.
so what is kookbook?
It is a simple recipe viewer that works with semi-structured markdown. More details can be seen in the quite old 0.1.0 announcement
At some point I should do a ten-recipe example collection, but my personal collection is in Danish, so I'm not sure it would be useful. If someone donates me a handful of pre-formatted recipes, I will happily announce it.
The New York Times has a long article where the author lays out an impressive array of circumstantial evidence that the inventor of Bitcoin is the cypherpunk Adam Back.
I don’t know. The article is convincing, but it’s written to be convincing.
I can’t remember if I ever met Adam. I was a member of the Cypherpunks mailing list for a while, but I was never really an active participant. I spent more time on the Usenet newsgroup sci.crypt. I knew a bunch of the Cypherpunks, though, from various conferences around the world at the time. I really have no opinion about who Satoshi Nakamoto really is.
"Here, you're a programmer, take this over. It's business critical."
That's what Felicity's boss told her when he pointed her to a network drive containing an Excel spreadsheet. The Excel spreadsheet contained a pile of macros. The person who wrote it had left, and nobody knew how to make it work, but the macros in question were absolutely business vital.
Also, it's in French.
We'll take this one in chunks. The indentation is as in the original.
Public Sub ExporToutVersBaseDonnées(ClasseurEnCours As Workbook)
Call AffectionVariables(ToutesLesCellulesNommées)
Call AffectationBaseDonnées(BaseDonnées)
BaseDonnées.Activate
The procedures AffectionVariables and AffectationBaseDonnées populate a pile of global variables. "base de données" is French for database, but don't let the name fool you- anything referencing "base de données" is referencing another Excel file located on a shared server. There are, in total, four Excel files that must live on a shared server, and two more which must be in a hard-coded path on the user's computer.
Oh, and the shared server is referenced not by a hostname, but by IP address- which is why the macros were breaking on everyone's computer; the IP address changed.
Let's continue.
'Vérifier si la ligne existe déjà.
If ClasseurEnCours.Sheets("DATA").Range("Num_Fichier") = 0 Then
Num_Fichier = BaseDonnées.Sheets(1).Range("Dernier_Fichier").Value + 1
Insérer_Ligne: '(étiquette Goto) insérer une ligne
Application.GoTo Reference:="Dernière_Ligne"
Selection.EntireRow.Insert
'Copie les cellules (colonne A à colonne FI) de la ligne au-dessus de la ligne insérée.
With ActiveCell
.Offset(-1, 0).Range("A1:FM1").Copy
'Colle le format de la cellule précédemment copiée à la cellule active puis libère les données du presse papier
.PasteSpecial
.Range("A1:FM1").Value = ""'Se repositionne au début de la ligne insérée.
.Range("A1").SelectEndWith
Application.CutCopyMode = False
Uh oh, Insérer_Ligne is a label for a Goto target. Not to be confused with the Application.GoTo call on the next line- that just selects a range in the spreadsheet.
After that little landmine, we copy/paste some data around in the sheet.
That's the If side of the conditional, let's look at the else clause:
Else
Cherche_Numéro_Fichier: ' Chercher la ligne ou le numéro de fichier est égale à NumFichier.
While ActiveCell.Value <> Num_Fichier
If ActiveCell.Row = Range("Etiquettes").Row Then
GoTo Insérer_Ligne
End If
ActiveCell.Offset(-1, 0).Range("a1:a1").Select
Wend
'Vérifier le numéro d'indice de la ligne active.
If Cells(ActiveCell.Row, 165).Value <> ClasseurEnCours.Sheets("DATA").Range("Dernier_Indice") Then
ActiveCell.Offset(-1, 0).Range("A1:A1").Select
GoTo Cherche_Numéro_Fichier
End If
ActiveCell.Offset(0, 0).Range("A1:FM1").Value = ""
End If
We start with another label, and… then we have a Goto. A Goto which jumps us back into the If side of the conditional. A Goto inside of a while loop, a while loop that's marching around the spreadsheet to search for certain values in the cell.
After the loop, we have another Goto which will possibly jump us up to the start of the else block.
The procedure ends with some cleanup:
'-----
' Do some stuff on the active cell and the following cells on the column
'-----
BaseDonnées.Close True
Set BaseDonnées = Nothing
End Sub
I do not know what this function does, and the fact that the code is largely in a language I don't speak isn't the obstacle. I have no idea what the loops and the gotos are trying to do. I'm not even a "never use Goto ever ever ever" person; in a language like VBA, it's sometimes the best way to handle errors. But this bizarre time-traveling flow control boggles me.
"Etiquettes" is French for "labels", and it may be bad etiquette but I've got some four letter labels for this code.
Surface Detail is the ninth novel in Banks's Culture science
fiction (literary space opera?) series. As with most of the Culture
novels, it can be read in any order, although this isn't the best starting
point. There is an Easter egg reference to Use of Weapons that would be easier to notice if you have read
that book recently, but which is not that important to the story.
Lededje Y'breq is an Indented Intagliate from the Sichultian Enablement.
Her body is patterned from her skin down to her bones, covered with
elaborate markings similar to tattoos that extend to her internal organs.
As an intagliate, she is someone's property. In her case, she is the
property of Joller Veppers, the richest man in the Enablement and her
father's former business partner. Intagliates are a tradition of great
cultural pride in the Enablement. They are a living representation of the
seriousness with which debts and honor are taken, up to and including
one's not-yet-born children becoming the property of one's debtor. Such
children are decorated as living works of art of the highest skill and
technical sophistication; after all, the Enablement are not barbarians.
As the story opens, Lededje is attempting, not for the first time, to
escape. This attempt is successful in an unexpected way.
Prin and Chay are Pavulean researchers and academics who, as this story
opens, are in Hell. They are not dead; they have infiltrated the Hell with
which Pavuleans are threatened to scare them into proper behavior, in
order to prove that it is not an illusion and that their society does
indeed torture people in an afterlife, in more awful ways than people dare
imagine. They have reached the portal through which temporary visitors
exit, hoping to escape with firm evidence of the existence and horrors of
the Pavulean afterlife. They will not be entirely successful.
Yime Nsokyi is a Culture agent for Quietus, the part of Contact that
concerns itself with the dead. Many advanced societies throughout the
galaxy have invented and reinvented the ability to digitize a mind and
then run it in a virtual environment. Once a society can capture the minds
of every person in that society from that point forward, it faces the
question of whether to do so and, if it does, what to do with those minds.
More specifically, it faces the moral question of whether to punish the
minds of people who were horrible in life. It faces the question of
whether to create Hell.
Vatueil is a soldier in a contestation, a limited and carefully monitored
virtual war. The purpose of that war game is to, once and for all, resolve
the question of whether civilizations should be allowed to create Hells.
Some civilizations consider them integral to their religion or
self-conception. Others consider them morally abhorrent, and that conflict
was in danger of spilling over into war in the Real. Hence the War in
Heaven: Both sides committed to fight in a virtual space under specific
and structured rules, and the winner decides the fate of the galaxy's
Hells. Vatueil is fighting for the anti-Hell side. The anti-Hell side is
losing.
There are very few authors who were better at big-idea science fiction
than Iain M. Banks. I've been reading a few
books about AI ships and remembered that I had two unread Culture novels
that I was saving. It felt like a good time to lose myself in something
sprawling.
Surface Detail does sprawl. Even by Banks's standards, there was an
impressive amount of infodumping in this book. Banks always has huge and
lovingly described set pieces, and this book is no exception, but there
are also paragraphs and pages of background and cultural musings and
galactic politics. We are introduced to not one but three new Contact
divisions; as well as the already-mentioned Quietus, there is Numina,
which concerns itself with the races that have sublimed (transcended), and
Restoria, which deals with hegemonizing swarms (grey goo nanotech,
paperclip maximizers, and their equivalents).
Infodumping is both a feature and a bane of big-idea science fiction, and
it helps to be in the right mood. It also helps if the info being dumped
is interesting, and this is where Banks shines. This is a huge, sprawling
book, but it deals with some huge, sprawling questions and it has
interesting and non-reductive thoughts about them. The problems posed by
the plot come with history, failed solutions, multi-sided political
disputes, strategies and tactics of varying morality and efficacy, and an
effort to wrestle with the irreducible complexity of trying to resolve
political and ethical disagreements in a universe full of profound
disagreements and moral systems that one cannot simply steamroll.
It also helps that the characters are interesting, even when they're not
likable. Surface Detail has one fully hissable villain (Veppers) as
a viewpoint character, but even Veppers is interesting in a "let me check
the publication date to see if Banks was aware of Peter Thiel" sort of
way. The Culture ships, of which there are several in this story, tend
towards a gently sarcastic kindness that I find utterly charming. Lededje
provides the compelling motive force of someone who has no involvement in
the broader philosophical questions and instead intends to resolve one
specific problem through lethal violence. Vatueil and Yime were a bit
bland in personality, more exposition generators than characters I warmed
to, but their roles and therefore the surrounding exposition were
fascinating enough that I still enjoyed their sections.
I'm sure this is not an original observation, but I was struck reading
this book in the first half of 2026 that the Culture functions as an
implementation of what the United States likes to think it is but has
never been. It has a strong sense of shared ethics and moral principles,
it tries to export them to the rest of the galaxy through example,
persuasion, and careful meddling, but it tries to follow some combination
of pragmatic and moral rules while doing so, partly to avoid a backlash
and partly to avoid becoming its own sort of hegemonizing swarm. That is a
powerfully attractive vision of how to be an advanced civilization, and
the fact that every hegemon that has claimed that mantle has behaved
appallingly just makes it more intriguing as a fictional concept. In this
book, like in many Culture books, the Culture is painfully aware of the
failure modes of meddling, and the story slowly reveals the effort the
Culture put into staying just on a defensible side of their own moral
lines. This is, in a sense, a Prime Directive story, but with a level of
hard-nosed pragmatism and political sophistication that the endless
Star Trek Prime Directive episodes never reach.
Surface Detail does tend to sprawl, and I'm not sure Banks pulled
together all the pieces of the plot. For example, if there was a point to
the subplot involving the Unfallen Bulbitian, it was lost on me. (There is
always a possibility with Banks that I wasn't paying close enough
attention.) But the descriptions are so elaborate and the sense of
politics and history are so deep that I was never bored, even when
following a plot thread that meandered off into apparent irrelevance. The
main plot line comes to a satisfying conclusion that may be even more
biting social commentary today than it was in 2010.
A large part of the plot does involve Hell, so a warning for those who
haven't read much Banks: He adores elaborate descriptions of body horror
and physical torture. The sections involving Prin and Chay are rather
grim and horrific, probably a bit worse than Dante's Inferno. I
have a low tolerance for horror and I was able to read past and around the
worst bits, but be warned that Banks indulges his love for the painfully
grotesque quite a bit.
This was great, and exactly what I was hoping for when I picked it up.
It's not the strongest Culture novel (for me, that's either
The Player of Games or
Excession), but it's one of the better
ones. Highly recommended, although if you're new to the Culture, I would
start with one of the earlier books that provide a more gradual
introduction to the Culture and Special Circumstances.
Followed, in the somewhat disconnected Culture series sense, by The
Hydrogen Sonata.
Content warnings: Rape (largely off-screen), graphic violence, lots of
Bosch-style grotesque torture, and a lot of Veppers being a thoroughly
awful human being as a viewpoint character.
Author: Julian Miles, Staff Writer “How can I be expected to rule well when all of you keep on believing the FAKE news spread by people who hate me for being so good. Why think enemies of what I am trying to do tell you the truth? I tell you the TRUTH you need. I […]
... The TLDR is: the cataract in my one mostly working eye (the other has about 50% retinal occlusion) is steadily getting worse, and I'm scheduled for surgery on March 27th.
NB: no need to lecture me about cataract surgery, I've already had it on the other eye. Same team, same hospital, same prognosis. I know exactly what to expect. Nor are your best wishes welcome: replying to them gets tiring after the fiftieth time (see: poor eyesight, above).
But worsening eyesight means that reading (and writing!) is fatiguing, so I gradually do less and less of it in each session.
Consequently I've been spending my screen time, not on the blog, but on a revision pass over my next novel, and on writing the follow-up.
(No, I can't give you any details: let's just say they're space operas, not Laundry Files, and I'll talk about them when my agent gives me the go-ahead. Book 1 is written, subject to editing, and Book 2 is about 10-15% written. And neither of them is Ghost Engine, the white whale I've been fruitlessly hunting for the past decade, although the viable chunks of GE may get recycled into Book 2.)
After my eye surgery I'll be going to Iridescence, the 2026 British Eastercon, the following weekend in Birmingham. I have some program items: I'll update this blog entry when I have a final schedule.
After Iridescence, I'll be heading to Satellite 9 in Glasgow (May 22nd to 24th). And after that I'll be attending Metropol Con in Berlin, July 2nd to 5th.
I'm not attending any US SF conventions for the foreseeable future (being deported to a concentration camp in El Salvador is not on my bucket list), but I will try to attend the 2027 World Science Fiction convention in Montreal, assuming the Paedopotus Rex hasn't gone on a Godzilla-style rampage north of the border by then, and that intercontinental air travel is still possible. (See, my inability to resist that kind of cheap shot is exactly why I'm not visiting the US these days: ICE want to see your social media history going back 5 years, and I gather they're using some horrible LLM tool from Palantir to vet travellers.)
We now return you to your regular scheduled kvetching about the state of world affairs until my eyeballs are firing on all cylinders again. (Say, did you know that 30% of the world's fertilizer is shipped through the Straits of Hormuz? And about 20% of the sulfur that ends up as feedstock in sulfuric acid for industrial processes comes from sour Gulf crude, so ditto? Not to mention the helium that is required to keep MRI machines and TSMC's semiconductor fab lines running, never mind your grandkids' party balloons? Happy days ...)
Sorry I haven't updated the blog for a while: I've been busy. (Writing the final draft of a new novel entirely unconnected to anything else you've read—space opera, new setting, longest thing I've written aside from the big Merchant Princes doorsteps. Now in my agent's inbox while I make notes towards a sequel, if requested.)
Over the past few years I've been naively assuming that while we're ruled by a ruthless kleptocracy, they're not completely evil: aristocracies tend to run on self-interest and try to leave a legacy to their children, which usually means leaving enough peasants around to mow the lawn, wash the dishes, and work the fields.
But my faith in the sanity of the evil overlords has been badly shaken in the past couple of months by the steady drip of WTFery coming out of the USA in general and the Epstein Files in particular, and now there's this somewhat obscure aside, that rips the mask off entirely (Original email on DoJ website ) ...
A document released by the U.S. Department of Justice as part of the Epstein files contains a quote attributed to correspondence involving Jeffrey Epstein that references Bill Gates and a controversial question about "how do we get rid of poor people as a whole."
The passage appears in a written communication included in the DOJ document trove and reads, in part: "I've been thinking a lot about that question that you asked Bill Gates, 'how do we get rid of poor people as a whole,' and I have an answer/comment regarding that for you." The writer then asks to schedule a phone call to discuss the matter further.
As an editor of mine once observed, America is ruled by two political parties: the party of the evil billionaires, and the party of the sane (so slightly less evil) billionaires. Evil billionaires: "let's kill the poor and take all their stuff." Sane billionaires: "hang on, if we kill them all who's going to cook dinner and clean the pool?"
And this seemed plausible ... before it turned out that the CEO class as a whole believe entirely in AI (which, to be clear, is just another marketing grift in the same spirit as cryptocurrencies/blockchain, next-generation nuclear power, real estate backed credit default options, and Dutch tulip bulbs). AI is being sold on the promise of increasing workforce efficiency. And in a world which has been studiously ignoring John Maynard Keynes' 1930 prediction that by 2030 we would only need to work a 15 hour work week, they've drawn an inevitable unwelcome conclusion from this axiom: that there are too many of us. For the past 75 years they've been so focussed on optimizing for efficiency that they no longer understand that efficiency and resilience are inversely related: in order to survive collectively through an energy transition and a time of climate destabilization we need extra capacity, not "right-sized" capacity.
Raise the death rate by removing herd immunity to childhood diseases? That's entirely consistent with "kill the poor". Mass deportation of anyone with the wrong skin colour? The white supremacists will join in enthusiastically, and meanwhile: the deported can die out of sight. Turn disused data centres or amazon warehouses into concentration camps (which are notorious disease breeding grounds)? It's a no-brainer. Start lots of small overseas brushfire wars, escalating to the sort of genocide now being piloted in Gaza by Trump's ally Netanyahu (to emphasize: his strain of Judaism can only be understood as a Jewish expression of white nationalism, throwing off its polite political mask to reveal the death's head of totalitarianism underneath)? It's all part of the program.
Our rulers have gone collectively insane (over a period of decades) and they want to kill us.
The class war has turned hot. And we're all on the losing side.
Author: Alastair Millar “So that, ladies and gentlemen, is SePPO, the Self-Propelled Public Order system: the bipedal, flexible law enforcement tool for the next century! Do we have any questions?” “Angus McAndrew, New Tech News. What OS do they run on?” “The units run on a proprietary AI-rated operating system trained for public […]
Collision Course is the sixth novel in the Class 5 science fiction
series and the first that doesn't use the Dark X naming convention.
There are lots of spoilers in this story for the earlier books, but you
don't have to remember all the details of previous events. Like the
novella, Dark Ambitions, this novel
returns to Rose, Sazo, and Dav instead of introducing another Earth woman
and Class 5 ship.
In Dark Class, Ellie discovered an
interesting artifact of a previously-unknown space-faring civilization.
Rose, Sazo, and Dav are on their way to make first contact when, during a
routine shuttle flight between the Class 5 and Dav's Grih military ship,
Rose is abducted. The aliens they came to contact have an aggressive,
leverage-based negotiating strategy. They're also in the middle of a
complicated war with more sides than are readily apparent.
What I liked most about Dark Horse, the
first book of this series and our introduction to Rose, was the revealed
ethical system and a tense plot that hinged primarily on establishing
mutual trust when there were excellent reasons for the characters to not
trust each other. As the series has continued, I think the plots have
become more complicated but the ethical dilemmas and revealing moments of
culture shock have become less common. That is certainly true of
Collision Course; this is science fiction as thriller, with a
complex factional conflict, a lot of events, more plot reversals than the
earlier books, but also less ethics and philosophy.
I'm not sure if this is a complaint. I kind of miss the ethics and
philosophy, but Diener also hasn't had much new to say for the past few
books. The plot of Collision Course is quite satisfyingly twisty
for a popcorn-style science fiction series. I was kept guessing about the
merits of some of the factions quite late into the book, although
admittedly I was in the mood for light entertainment and was not trying
too hard to figure out where the book was going. I did read nearly the
entire book in one sitting and stayed up until 2am to finish it, which is
a solid indication that something Diener was doing worked.
I do have quibbles, though. One is that the ending is a bit unsatisfying.
Like Sazo, I was getting quite annoyed at the people capturing (and
recapturing) Rose and would have enjoyed somewhat more decisive
consequences. Also, and here I have to be vague to avoid spoilers, I was
expecting a bit more of a redemption arc for one of the players in the
multi-sided conflict. The ending I did get was believable but rather sad,
and I wish Diener had either chosen a different outcome (this is light
happily-ever-after science fiction, after all) or wrestled more directly
with the implications. There were a few too many "wait, one more thing"
ending reversals and not quite enough emotional payoff for me.
The other quibble is that Collision Course was a bit too damsel in
distress for this series. Rose is pregnant, which Diener uses throughout
the book both to raise the stakes of the plot and to make Rose more
annoyed but also less capable than she was in the earlier novels. Both Sazo
and Dav are in full heroic rescue mode, and while Diener still ensures
Rose is primarily responsible for her own fate, there is some "military
men attempt to protect the vulnerable woman" here. One of the things I
like about this series is that it does not use that plot, so while the
balance between Rose rescuing herself and other people rescuing her is
still tilted towards Rose, I would have liked this book more if Rose were
in firmer control of events.
I will mostly ignore the fact that a human and a Grih sexually reproducing
makes little to no biological sense, since Star Trek did similar
things routinely and it's an established genre trope. But I admit that it
still annoys me a bit that the alien hunk is essentially human except that
he's obsessed with Rose's singing and has pointy ears. Diener cares about
Rose's pregnancy a lot more than I did, which added to my mild grumpiness
at how often it came up.
Overall, this was fine. I prefer a bit more of a protagonist discovering
how powerful she is by making ingenious use of the ethical dilemmas her
captors have trapped themselves in, and a bit less of Rose untangling a
complicated political situation by getting abducted by every player
serially, but it still kept the pages turning. Any book that is
sufficiently engrossing for me to read straight through is working at some
level. Collision Course was highly readable, undemanding, and
distracting, which is what I was looking for when I read it. I would put
it at about the middle of the pack in the series. If Rose's pregnancy is more
interesting to you than it was to me, that might push it a bit higher.
If you have gotten this far in the series, you will probably enjoy this,
although it does feel like Diener is running out of new things to say
about this universe. That's unfortunate given the number of threads about
AI sentience and rights that could still be followed, but I think tracing
them properly would require more philosophical meat than Diener intends
for these books. Which is why the next book I grabbed was a Culture novel.
Currently this is the final book in the Class 5 series, but there is no
inherent reason why Diener couldn't write more of them.
I was hosted for a long time, free of charge, on https://www.branchable.com/
by Joey and Lars. Branchable and Ikiwiki were wonderful ideas that never
took off as much as they deserved. To avoid being a burden now that
Branchable is nearing its
end, I migrated to
a VPS at Sakura.
However, I have not left Ikiwiki. I only use it as a site engine, but I
haven't found any equivalent that gives me both native Git integration, wiki
syntax for a personal site, the creativity of its directives (you can do
anything with inline and
pagespec), and its multilingual
support through the po plugin.
If you have recently installed a very up-to-date Linux distribution with a desktop environment, or upgraded your system on a rolling-release distribution, you might have noticed that your home directory has a new folder: “Projects”
Why?
With the recent 0.20 release of xdg-user-dirs we enabled the “Projects” directory by default. Support for this has existed since 2007, but was never formally enabled. This closes a more than 11-year-old bug report that asked for this feature.
The purpose of the Projects directory is to give applications a default location to place project files that do not cleanly belong in one of the existing categories (Documents, Music, Pictures, Videos). Examples of this are software engineering projects, scientific projects, 3D printing projects, CAD design or even things like video editing projects, where project files would end up in the “Projects” directory, with the output video being more at home in “Videos”.
By enabling this by default, and subsequently in the coming months adding support to GLib, Flatpak, desktops and applications that want to make use of it, we hope to give applications that operate in a “project-centric” manner with mixed media a better default storage location. As of now, those tools either default to the home directory or clutter the “Documents” folder, neither of which is ideal. It also gives users a default organization structure, hopefully leading to less clutter overall and better storage layouts.
This sucks, I don’t like it!
As usual, you are in control and can modify your system’s behavior. If you do not like the “Projects” folder, simply delete it! The xdg-user-dirs utility will not try to create it again, and will instead adjust the default location for this directory to your home directory. If you want more control, you can influence exactly what goes where by editing your ~/.config/user-dirs.dirs configuration file.
If you are a system administrator or distribution vendor and want to set default locations for the default XDG directories, you can edit the /etc/xdg/user-dirs.defaults file to set global defaults that affect all users on the system (users can still adjust the settings however they like though).
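For illustration, the relevant entries in both files look roughly like this; the exact PROJECTS key names are my assumption, following the naming pattern of the existing directories:

# ~/.config/user-dirs.dirs (per-user settings, shell-style assignments)
XDG_DOCUMENTS_DIR="$HOME/Documents"
XDG_PROJECTS_DIR="$HOME/Projects"

# /etc/xdg/user-dirs.defaults (system-wide defaults, paths relative to $HOME)
DOCUMENTS=Documents
PROJECTS=Projects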
What else is new?
Besides this change, the 0.20 release of xdg-user-dirs brings full support for the Meson build system (dropping Automake), translation updates, and some robustness improvements to its code. We also fixed the “arbitrary code execution from unsanitized input” bug that the Arch Linux Wiki mentions here for the xdg-user-dirs utility, by replacing the shell script with a C binary.
Thanks to everyone who contributed to this release!
Last week, Anthropic pulled back the curtain on Claude Mythos Preview, an AI model so capable at finding and exploiting software vulnerabilities that the company decided it was too dangerous to release to the public. Instead, access has been restricted to roughly 50 organizations—Microsoft, Apple, Amazon Web Services, CrowdStrike and other vendors of critical infrastructure—under an initiative called Project Glasswing.
The announcement was accompanied by a barrage of hair-raising anecdotes: thousands of vulnerabilities uncovered across every major operating system and browser, including a 27-year-old bug in OpenBSD, a 16-year-old flaw in FFmpeg. Mythos was able to weaponize a set of vulnerabilities it found in the Firefox browser into 181 usable attacks; Anthropic’s previous flagship model could only achieve two.
This is, in many respects, exactly the kind of responsible disclosure that security researchers have long urged. And yet the public has been given remarkably little with which to evaluate Anthropic’s decision. We have been shown a highlight reel of spectacular successes. However, we can’t tell if we have a blockbuster until they let us see the whole movie.
For example, we don’t know how many times Mythos mistakenly flagged code as vulnerable. Anthropic said security contractors confirmed the AI’s findings 198 times, with 89 per cent agreement on severity ratings. That’s impressive, but incomplete. Independent researchers examining similar models have found that AI that detects nearly every real bug also hallucinates plausible-sounding vulnerabilities in patched, correct code.
This matters. A model that autonomously finds and exploits hundreds of vulnerabilities with inhuman precision is a game changer, but a model that generates thousands of false alarms and non-working attacks still needs skilled and knowledgeable humans. Without knowing the rate of false alarms in Mythos’s unfiltered output, we cannot tell whether the examples showcased are representative.
There is a second, subtler problem. Large language models, including Mythos, perform best on inputs that resemble what they were trained on: widely used open-source projects, major browsers, the Linux kernel and popular web frameworks. Concentrating early access among the largest vendors of precisely this software is sensible; it lets them patch first, before adversaries catch up.
But the inverse is also true. Software outside the training distribution—industrial control systems, medical device firmware, bespoke financial infrastructure, regional banking software, older embedded systems—is exactly where out-of-the-box Mythos is likely least able to find or exploit bugs.
However, a sufficiently motivated attacker with domain expertise in one of these fields could nevertheless wield Mythos’s advanced reasoning capabilities as a force multiplier, probing systems that Anthropic’s own engineers lack the specialized knowledge to audit. The danger is not that Mythos fails in those domains; it is that Mythos may succeed for whoever brings the expertise.
Broader, structured access for academic researchers and domain specialists—experts in medical device security, control-systems engineers, researchers in less prominent languages and ecosystems—would meaningfully reduce this asymmetry. Fifty companies, however well chosen, cannot substitute for the distributed expertise of the entire research community.
None of this is an indictment of Anthropic. By all appearances the company is trying to act responsibly, and its decision to hold the model back is evidence of seriousness.
But Anthropic is a private company and, in some ways, still a start-up. Yet it is making unilateral decisions about which pieces of our critical global infrastructure get defended first, and which must wait their turn.
It has finite staff, finite budget and finite expertise. It will miss things, and when the thing missed is in the software running a hospital or a power grid, the cost will be borne by people who never had a say.
The security problem is far greater than one company and one model. There’s no reason to believe that Mythos Preview is unique. (Not to be outdone, OpenAI announced that its new GPT-5.4-Cyber is so dangerous that the model also will not be released to the general public.) And it’s unclear how much of an advance these new models represent. The security company Aisle was able to replicate many of Anthropic’s published anecdotes using smaller, cheaper, public AI models.
Any decisions we make about whether and how to release these powerful models are more than one company’s responsibility. Ultimately, this will probably lead to regulation. That will be hard to get right and requires a long process of consultation and feedback.
In the short term, we need something simpler: greater transparency and information sharing with the broader community. This doesn’t necessarily mean making powerful models like Claude Mythos widely available. Rather, it means sharing as much data and information as possible, so that we can collectively make informed decisions.
We need globally co-ordinated frameworks for independent auditing, mandatory disclosure of aggregate performance metrics and funded access for academic and civil-society researchers.
This has implications for national security, personal safety and corporate competitiveness. Any technology that can find thousands of exploitable flaws in the systems we all depend on should not be governed solely by the internal judgment of its creators, however well intentioned.
Until that changes, each Mythos-class release will put the world at the edge of another precipice, without any visibility into whether there is a landing out of view just below, or whether this time the drop will be fatal. That is not a choice a for-profit corporation should be allowed to make in a democratic society. Nor should such a company be able to restrict the ability of society to make choices about its own security.
This essay was written with David Lie, and originally appeared in The Globe and Mail.
Author: AP Ritchey The most powerful artificial intelligence unit ever created was online for less than ten seconds. Well, we gave her ten; she only needed five. To assess her abilities, we created a test program called Sable—the Suborbital Advanced Ballistic Launch Engine. This initiative was designed to use her incalculable computation capacity to calculate […]
A while ago, CommBank started asking for MFA confirmation on its mobile app for every NetBank login on a browser. Previously, there was an option to use SMS for MFA, which isn’t as secure as I would like, but it was at least usable. Since I’m switching away from Android to Mobian and won’t be able to use the CommBank app for much longer, I applied for a physical NetCode token.
The hardware is made by Digipass and looks disposable. It is a small, battery powered gadget with a screen and a button. When pressed, it shows a temporary NetCode for authentication. Such a NetCode is required both for NetBank logins and approving online transactions.
The letter that came with it has the wrong link for activation; the correct link is under NetBank -> Settings -> NetCode (under the Security section).
To apply for a physical token, call the NetBank team, mention that you can’t use the app and need a physical NetCode token, and make sure they actually submit your request. It took me two calls to get them to ship me a token. The hardware is free of charge but can only be requested via phone call; unfortunately, staff members at my local branch are unable to do anything in relation to NetBank. I was told privately by a CommBank employee that they are deprecating the hardware token in favor of the mobile app; I hope that won’t happen anytime soon, or that they add support for passkeys before it does. The last time I checked, the CommBank app was LineageOS-friendly, but I don’t want to configure WayDroid just to do online banking.
PayID, the thing that allows you to receive payments via a phone number or email address, is not compatible with the hardware token, and an existing PayID will be silently deactivated if you use the hardware token. This looks to be an artificial restriction; I don’t see why it has to be this way.
Regular CommBank mobile app sessions will also be deactivated once the hardware token is activated (I was told so, but my sessions weren’t deactivated until I wiped my Android phone), and you won’t be able to sign in to the mobile app again until you manually disable the NetCode token.
Online banking has been getting progressively more invasive and anti-user over the last decade, from demanding remote attestation to requiring real time location data, each time locking certain features when those demands are not satisfied; all based on the flawed assumptions that everyone owns a phone running a certain flavor of iOS or Android, and has it ready all the time. I’m not sure what can be done to reverse this trend, but on the personal level I will use NetBank less and go back to cash.
This post contains a bit of consumerism and is full of references to
commercial products, none of which caused me to receive any money nor
non-monetary compensation.
This post has also been written after eating in one meal the amount of
bread-like stuff that we usually have in more than 24 hours.
I’ve been baking bread for a long time. I don’t know exactly when I
started, but it was probably the early 2000s or so, and it remained a
regular-ish thing until 2020, when it became an extremely regular thing,
as in I believe I now bake bread on average every other day.
In the before times, I’ve had a chance to bake pizza in a wood fired
oven a few times: a friend had one and would offer the house, my partner
would mind the fire, and I would get there with the dough and prepare
the pizza.
Now that we have moved to a new house, we don’t have a good and
convenient place for a proper wood fired oven in masonry, but we can use
one of the portable ones, and having dealt with more urgent expenses, I
decided that just before the potential collapse of the global economy
was as good a time as any to buy the oven I had been looking at since we
found this house.
I decided to get an Ooni Karu 2, having heard good things about the
brand, and since it looked like a good balance between size and
portability. I also didn’t consider their gas fired ovens (nor did I buy
the gas burner) because I’m trying to get rid of gas, not add stuff that
uses it, and I didn’t get an electric one because I’m not at all unhappy
with the bakery-style pizza we make in our regular oven, and I have to
admit we also wanted to play with fire[1].
We also needed an outdoor table suitable to use the oven on and store
it. Here I looked for inspiration at the Ooni tables (and for cheaper
alternatives in the same style), but my mother who shares the outdoor
area with us wasn’t happy with the idea of steel[2].
And then I was browsing the modern viking shores, and found that there
was a new piece in the NÄMMARÖ series my mother likes (and of which we
already have some reclining chairs): a kitchen unit in wood with a steel
top.
At first I expected to just skip the back panel, since it would be in
the way when using the oven, but then I realized that it could probably
be assembled upside down, down from the top between the table legs, and
we decided to try that option.
This week everything had arrived, and we could try it.
Yesterday evening, after dinner (around 21, I think) I prepared the
dough with the flour I usually use for bakery-style pizza: Farina di
Grano Tenero Tipo 0 PANE (320 - 340 W);
since I wanted to make things easier for myself I only used 55%
hydration, so the recipe was:
Then this morning we assembled the NÄMMARÖ, then I divided the dough in
eight balls, put them in a covered — but not sealed — container[3],
well floured with rice flour, and then we fired the oven
(as in: my partner did, I looked for a short while and then set the
table and stuff), using charcoal, because we already had some, and could
conveniently get more at the supermarket.
When the oven had reached temperatures in the orange range[4] I
stretched the smallest ball out, working on my wooden peel, sprayed it
with water[5], sprinkled it with coarse salt and put it in the
oven.
After 30 seconds I turned it around with the new metal peel, then again
after 30 seconds, and then I lost count of how many times I repeated
this[6], but it was probably 2 or 3 minutes until it looked
good.
And it was good. The kind of pizza that is quite soft, especially near
the borders.
We ate it with fresh mozzarella and tomatoes, and then made another one
the same way, to finish the mozzarella.
This was supposed to be our lunch, but we decided to try one with some
leftover cooked radicchio, and that also worked quite nicely.
And finally, we decided we needed to try a more classical pizza, with
tomato sauce and cured meat, of which we forgot to take pictures.
Up to here we had eaten about half of the dough, and we were getting
full: I had prepared significantly more than what I expected to eat, to
be able to accidentally burn some, but also with the idea to bake
something else to be eaten later.
So I made two more focaccias with just water and salt, and then I tried
to cook some bread with what I expected to be residual heat.
Except that the oven was getting a bit too cold, so my partner added
some charcoal, and when I put the last two unflattened balls right at
the back of the oven where it was still warmer, that side carbonized.
After 5 minutes I moved them to the middle of the oven, and turned them,
and then after another turn and 5 more minutes they were ready. And
other than the burnt crust, they were pretty edible.
So, the thoughts after our first experience.
Everybody around the table (my SO, my mother and me) was quite happy
with the results, and they are different enough from the ones I could
get with the regular oven.
As I should have expected, it’s much faster than a masonry oven, both in
getting to temperature and in cooling down: my plan for residual heat
bread cooking will have to be adjusted with experience.
We were able to get it hot enough, but not as hot as it’s supposed to be
able to get: we suspect that using just charcoal may have influenced it,
and next week we’ll try to get some wood, and try with a mix.
As for the recipe, dividing the dough in eight parts worked quite well:
maybe the pizzas are a bit on the smaller side, but since they come one
at a time it’s more convenient to cut and share them, and maybe make a
couple more at the end.
Of course, I’ll want to try different recipes, for different styles of
pizzas (including some almost-trademark-violating ones) and for other
types of flatbread.
I expect it won’t be hard to find volunteers to help us with the
experiments. :D
[1] any insinuation that there may have been considerations of
having a way to have freshly baked bread in case of a prolonged
blackout may or may not be based on reality.
But it wasn’t the only — or even the main — reason.↩︎
[2] come on! it’s made of STEEL. how can it be not good? :D↩︎
[3] IKEA 365+ 3.1 glass, the one that is 32 cm × 21 cm × 9
cm; it was just big enough for the amount of dough, and then I
covered it with a lid that is missing the seal.↩︎
[4] why did they put a thermometer on it, and not add labels
with the actual temperature? WHY???↩︎
[5] if you don’t have dietary restrictions a bit of olive oil
would taste even better.↩︎
[6] numbers above 2 are all basically the same, right?↩︎
On the 19th of March I got a home battery system installed. The government has a rebate scheme, so it had a list price of about $22k for a 40kWh setup and cost me about $12k. It seems that 40kWh is the minimum usable size for the amount of electricity I use: I have 84 cores running BOINC when they have nothing better to do, which is 585W of TDP according to Intel. While the CPUs are certainly using less than the maximum TDP (both due to design safety limits and the fact that I have disabled hyper-threading on all systems, as it provides minimal benefits and has potential security issues), given some power usage by cooling fans and some inefficiency in PSUs, I think it's reasonable to assume that 585W is drawn 24*7 by the CPUs. So my home draws between 800W and 1kW when no-one is home, and with an electric car and all-electric cooking a reasonable amount of electricity can be used.
My bills prior to the battery installation were around $200/month, which was based on charging my car only during sunny times, as my electricity provider (Amber Electric) has variable rates based on wholesale prices. Also, the feed-in rates often go negative in sunny times when solar panels produce too much electricity, so I can end up paying to export if I don’t use enough electricity myself. I haven’t had the electric car long enough to find out what the bills might be in winter without a home battery.
Before getting the battery my daily bills according to the Amber app were usually between $5 and $10. After getting it the daily bills have almost always been below $5. The only day it’s been over $5 since the battery installation was when electricity was cheap and I fully charged the home battery and my car, which used 50kWh in one day and cost $7.87, or 16 cents per kWh. 16 cents isn’t the cheapest price (sometimes it gets as low as 10 cents) but is fairly cheap; sometimes even in the cheap parts of the day it doesn’t get that low (the cheapest price on the day I started writing this was 20 cents).
So it looks like this may save me $100 per month; if so, that's a 10% annual return on the $12k I spent. This makes it a good investment, better than repaying a mortgage (which is generally under 6%) and almost as good as the long term results of index tracker funds. However if it had cost $22k (the full price without the subsidy) then it would still be OK, but wouldn't be a great investment. The government subsidised batteries because the huge amount of power generated by rooftop solar systems was greater than the grid could use during the day in summer, and batteries are needed to use that power when it's dark.
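As a quick sanity check on those numbers (a minimal sketch; the dollar figures are the estimates from above, not measurements):

monthly_saving = 100  # dollars/month, estimated from the Amber bills
annual_saving = 12 * monthly_saving

for cost in (12_000, 22_000):  # subsidised price vs full list price
    print(f"${cost}: {annual_saving / cost:.1%} annual return")
# $12000: 10.0% annual return
# $22000: 5.5% annual return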
Android App
The battery system is from Fox ESS and the FoxCloud 2.0 Android app is a bit lacking in functionality. It has a timer for mode setting with the options “Self-use” (not clearly explained), “Feed-in Priority” (not explained, but testing shows it feeds everything in to the grid), “Back Up”, “Forced Charge”, and “Forced Discharge”. Currently I have “Forced Charge” set up for the sunniest 5 hours of the day with a maximum charge power of 5kW. I did that because about 25kWh/day is what I need to cover everything, and while the system can do almost 10kW, that would charge the battery fully in a few hours, after which electricity would be exported to the grid, which would at best pay me almost nothing and at worst bill me for supplying electricity when they don’t want it. There doesn’t seem to be a “never put locally generated power into the grid unless the battery is full” option. The force charge mode allows stopping at a certain percentage, but when that is reached there is no fallback to another option. It would be nice if the people who designed the configuration took as a baseline assumption that regular people are capable of using macro programming in office suites and functions in spreadsheets. I don’t think we need a Turing complete programming language in the app to control batteries (although I would use it if there was one), but I think we need clauses like “if battery is X% full then end this section”.
There is no option to say “force charge until 100%” or “force charge for the next X minutes” as a one-off thing. If I came home in the afternoon with my car below 50% battery and a plan to do a lot of driving the next day, then I’d want to force charge the home battery immediately to allow charging the car overnight. But I can’t do that without entering a “schedule”. For Unix people: imagine having to do everything via a cron job, with no option to run something directly from the command-line.
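To illustrate the kind of clause I mean, here is a purely hypothetical sketch in Python; none of these names correspond to anything in the actual FoxCloud app, it's just the shape of configuration I wish existed:

from dataclasses import dataclass

@dataclass
class ChargeWindow:
    start_hour: int       # window start, e.g. 10 for 10:00
    end_hour: int         # window end, e.g. 15 for 15:00
    power_kw: float       # maximum charge power during the window
    stop_at_percent: int  # "if battery is X% full then end this section"
    fallback_mode: str    # what to do once the target is reached

def current_mode(rule, hour, battery_percent):
    # Inside the window and below the target: keep force-charging.
    if rule.start_hour <= hour < rule.end_hour and battery_percent < rule.stop_at_percent:
        return f"force charge at {rule.power_kw}kW"
    # Otherwise fall back to another mode instead of just stopping.
    return rule.fallback_mode

rule = ChargeWindow(10, 15, 5.0, 100, "self-use")
print(current_mode(rule, hour=12, battery_percent=80))   # force charge at 5.0kW
print(current_mode(rule, hour=12, battery_percent=100))  # self-use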
It’s a little annoying that they appear to have spent more development time on animations for the app than some of what should be core functionality.
Management
Amber has an option to allow my battery to be managed by them based on wholesale prices, but I haven’t done that as the feed-in prices are very low. So I just charge my battery when electricity is cheap and use it for the rest of the day. There is usually a factor of 2 or more price difference between the middle of the day and night time, so that saves money. It also means I don’t have to go out of my way to try and charge my car in the middle of the day. There is some energy lost in charging and discharging the batteries, but it’s not a lot. I configured the system to force charge at 5kW for the 5 sunniest hours every day, as that’s enough to keep it charged overnight, and 5kW is greater than the amount of solar electricity my house has produced since I’ve been monitoring it, which forces all the solar power to be used for the battery. In summer I might have to change that to 6kW for the sunniest 2 or 3 hours and then 4kW or 5kW surrounding that, which will be a pain to manage.
Instead of charging the car every day during sunny times I charge it once or twice a week. I have a 3.3kW charger and the car has a 40kWh battery, so it usually takes me less than 10 hours to fully charge it, and I get at least 5 hours of good sunlight in the process.
Author: Ankit Chiplunkar Delta’s vision flashed red. The jump had scraped a meteorite. Error alarms crawled across his vision. He locked motion, started auto-repair, and waited. Delta floated between jumps. As the repairs ran, he thought of the Core. Delta was a Mind, a being made of pure information. Minds built shells, bodies made of […]
It's time again for a reader special, and once again it's all The Beast In Black (there must be a story to that nick, no?).
"MySQL is not better than your SQL," he pontificated,
"especially when
it comes to the Workbench Migration Wizard"
"Sadly," says he,
"Not even gmail/chromium either."
"Updated software is available, but there are no updates!" he puzzled.
"Clicking Install Now just throws
that dialog right back in my face. I'm re-cursing." Zero, one, does it really make a difference?
"Questions"
The Beast in Black
"I do, in fact, have a question..."
One of the foundational guides to my [lyle, not bib] engineering career
was Jon Bentley's Programming Pearls. These are not those.
"Veni, vidi: vc. No pearls of wisdom here, just litter." says
The Beast.
It started with a thought: to understand people’s perspectives on life and its meaning. So I texted folks, “What is life (to you)?”. Each of the following list items (-) is a response from a different individual, mostly verbatim.
- A lot
- Everyone has a few universal basic qualities, and some special qualities. To me life is pursuit of exploring world based on those qualities and maturing those qualities as one goes on about exploring world/life with those qualities.
Discovering and enhancing experiences as one goes through them.
- life is endless suffering
- my answer might change daily, but this is what I’ve noticed and feel recently.
Life is a spectrum with two distinct ends: what we control and what we don’t. At birth, the spectrum is largely tilted toward control, but throughout our lives, it gradually shifts toward the other side. Ultimately, as we approach death, we lose all control over any aspect of our existence, reaching the other end of the spectrum.
tho this isn’t universal, privilege plays a huge part in what you control tho i believe it holds true for the majority
but yeah man, meaning and purpose are dynamic, it’s in their nature to change
i can give you a different answer this evening itself xD
- Zindagi ek nadiya hai,
Aur mujhe tairna nahi aata (translation - Life is a river,
and I don’t know how to swim)
On a more serious note, Life is what you make it out for yourself.
The only established truth is that it will end. We can never know if there is something after or if there was something before.
So try to live a life that you feel aspired by?
But this question was beautifully answered by that book which you had about that dying professor
(Me - He was talking about Tuesday’s with Morrie)
- My answer is 42
- One, it’s living on your own terms, you define everything for yourself, success, normal, whatever. You get to curate your version of it no matter the societal norms.
It’s an accumulation of experiences - friends, parents, work, activities, doing shit loads. Sab try karo- travel, zumba, art, music, workout, sports, dil kara ye karna hai karlo. (translation - If your heart wants to do it, just do it.)
Then I think relationships - all that you’ve nurtured, people forget maintaining people because of work. It takes efforts to keep people in your life, everyone that comes has a place in yours, how well thats stays is upto you. You also get to curate your people, who stays who don’t. Family toh hai hi (translation - family is there) but everyone else that comes along can make it pretty good.
So I don’t want to be 50 and be like chalo ab kuch apne liye karte hai… (translation - Come on, now let’s do something for ourselves) Do whatever shit you want today. Not everything costs money, and if it does get thrifty
But do keep healthy while doing all of that
- Being alive so that my daughter can grow up and i can help raising her kids as well.
Raising kids without mother is tough :P
- Definitively, I feel like Life is a by product of proteins and energy working together.
But in a more personal sense, Life is a dumb joke played onto us. It’s a rat race.
But rats exists because of life and then it becomes a chicken-egg problem
Honestly, I don’t give good answers to life questions. I’m generally the one asking
Life can be like a box of chocolates, you don’t know what you’re gonna get untill you experience the chocolate(assuming the chocolates are heterogenous and contains a mix of everything)
Camus once said, “Life is a revolt”, and one of his students added more spice to it like “Life is a revolt against the meaninglessness of existence"
I kinda feel like Life is the pursuit of every person’s search for meaning
- Imprisonment waiting for execution 😄
I have one more thought while we are on the topic , game with pre defined starting position and predefined destination , path to reach is a maze
- A phase where you can have a really good time or really bad one, usually the mix of both.
A phase where you are prisoner to responsibility and materialistic wants.
It’s a hell for you, where you try to create heaven for others.
Being born was never your choice, but ending is always in your hands but you are a prisoner. You fear that leaving this world behind will destroy the heavens you created for others and they will be back to hell. But eventually everyone moves on watching the hourglass of their life.
Once you are left with no desires or no one to create heavens for, you look arround yourself. You see everyone chasing something, everyone scared of their limitted life time sliping away yet you want it to end sooner.
Doesn’t matter if it was all good till now, or all bad. The other half is waiting for redemption.
If it was all good, it’s best time to die don’t wait for the bad to start. If was all bad, it’s still the best time to die what if it was the good one and more worst is waiting for you.
We desire to be remembered, yet we want to free from this loop of suffering.
Someone once said, life is a suffering, chose your sufferings.
- Life to me is to live without regrets and live with freedom.
Life is always unpredictable and this unpredictability makes it more interesting and worth it.
- As of now, for the state of mind that I am in , I think for me life is about subtle struggle, subtle inconveniences and yet moving forward cause that’s all I know.
I am not sure if any of this has any meaning, but sometimes I feel I was born of a purpose and that the universe has my back.
For me it’s about raising my consciousness, understanding people to their depths, gaining moderate material success and helping people to some extend.
I have tried to seek a grander meaning but I have failed.
Life for me is what I make out it.
In my times of great success i rarely think about life for I am busy enjoying it, whatever you may call that state of mind.
- For me its the little things that you enjoy with YOUR people
- Life to me is about living and loving, and doing it in a way that sustains. It’s the people who shape you, the work you get absorbed in, the quiet moments in between. There’s also the wanting, the drive to figure out what’s worth going after and how to get there, but that’s just one part of it, not the point of it. And none of it happens in a vacuum. I’m aware of the privileges that let me live this way, and I try to hold on to that gratitude. In the end, life has both a material and a non-material side, and a lot of what we do is chasing material things in an attempt to satisfy something non-material within us
- Mere liye (translation - for me) life is staying at my home and studying random economics papers. That’s when I enjoy myself the most.
- Very complicated
Some days I wish this life never ended and some time I feel it would be better if it stopped at that moment.
It all depends on the events that happen in the so called “life”.
So life to me is a string of events that happen anyway and you get to make some decisions which can turn it in any direction and then you wonder how did that happen.
- not forgetting to breathe, learn, eat, game, take a good shit, love, sleep.
- To be honest it changed with time!
At 19 it was about freedom, wasn’t sure what freedom meant but i wanted that! To be free from everything, maybe because parents still controlled a part of my life.
Then came 22-24 where i was working, trying to figure out what i want, the meaning changed from freedom to living for myself. To earn more, to be greedy about myself and pursue whatever would help me gain more steps in my career.
Came my mba life, switched my life from doing for myself to trying everything out to have no regrets. Life meaning was just about living with no regrets, invested, gambled, did everything to earn that tag of “yeah, have tried that”.
Now it has all switched to, it was all just a fake facade. Life turned to having a meaningful life rather than finding meaning in what i am doing. Living for people around me, chhoti chhoti cheezo m khushi (translation - happiness in small things(?)) isn’t really a topic of conversation but more of happy thing for me.
So it changed, and m quite happy to be honest. Life did show me a lot of failures, but was privileged enough to face those failures. Gained a lot of learnings if not money😂
Hopeful for more learnings and change meaning of life with time
- A task.
- You have different answers at different times
You learn different meanings at different times
When you are studying, basically it is about job, finding a partner
then it becomes, house, car other things based on your income
in between, there can be passion too
Free Software was a passion, electoral politics too, but both kind of faded and I want cooperative and user driven development now (prav - something that motivates me every day) and these days learning Chinese and watching Cdrama takes a huge part of my leisure time
it is heavily subjective
and also influences by previous experiences
people around you, how much influence they have on you
it also depends on if they had to struggle in their life or not, for some life did not give much troubles
and trouble itself can be relative
people who never had to struggle may find even smallest challenges as troubles
like if you own a car, your worry is finding a parking slot
- I am too young to think about lyfe
- A ticket to see the show on earth, I guess 😀
I guess life is different depending on the mood. It is a very broad question.
(Me - What is it in this present mood?)
Learning stuff (like I am learning a new language) and being happy but also to regulate emotions in a world where being optimistic is getting harder each day.
Life is also having a unique set of glasses you wear. Both in terms of looking from your eyeballs and your psychological perspective. Both are unique and cannot be replicated.
It is interesting what people on their deathbed think of life. If I know I am dying, my perspective would change a whole lot.
Life is finishing reading books while we are alive 😉
Life is sleeping after a good XMPP chat 😉
- uhh to word it? life is just like a journey from A to somewhere and its all about what paths you take and what line you get on to me, just a series of short adventures that all connect to a larger sequence until you can’t have any more adventures-
(Me - eee, THE END. drop dead, like a coin)
yeaaaah- I am not really for spirituality of an afterlife, to me life just ends at some point, after which point there fails to remain a discernable you, and some X time after which, you will be last remembered, try to make that last time a good one I guess?
(Me - no soul?)
uhhh not in the way most people think of it i guess?
theres just a lot of yous, theres the physical you, there is the idea of you, there is the expectation of you, and one of the undefinable you I would label as the soul maybe? like the part thats not physically you, but also certainly you
(Me - can’t say I understood part, but I get you in this sense)
mhm- well its about just questioning who you are more so questioning what life is-, I have sadly spent way too much time trying to figure that out
- Making the best of the time you have
- living a full range of experiences and embracing the good ones, seeing all that the world has to offer. In the end we were always just stardust. Might as well enjoy it when we are stardust with a consciousness of our own.
- For some reason or the Universe’s /dev/random I was born here as a biological being, and from my experience I understood living is hard and the best way to live is by embracing it. Loving everyone and everything around you. Be happy and joyful until you naturally say good bye to this world.
- Life is being fucked by everything and you just have to figure out and try to stick to the things worth being fucked for
Note: The following was transcribed from an audio message.
- There are five conditions to become a life to survive in the environment. I think there’s five conditions by the biological definitions and reproduction is one of the factor virus is not considered a life form because it cannot reproduce on its own but technically it’s kind of a life because it reproduces using the DNA ability this is the biological definition.
Do you want a philosophical definition?
My definition is kind of the same except that you get life experiences along with it as a human.
Extra benefits is that you are not an NPC. All other organisms are NPCs.
But humans can interpret the world and change it to their liking.
That is life in the case of a human.
But then many humans are mostly NPCs.
But they still can change the life.
Okay, fuck this. Where is this even going?
A human is an exception in the case of life, because human is not an NPC.
Human can interrupt the world, human can change it to its liking,
which is why we are such a successful organism on this planet.
That is life to me. That’s a human.
But all of this is kind of meaningless, because
the biological impurity of a human being still exists, so you still have the
urges to reproduce, which kind of makes
it like just another organism. But then, humans are yet to evolve
to overcome that biological imperative.
I’m grateful for all the replies, outlooks, and subsequent conversations I got to have after this question with everyone. After all, it was a deeply personal question. It does fit in nicely with my definition of life: “Life is all about experiences and all the transient relationships one gets to have with folks we meet on the way.”
PS - I would love to hear from you on this. Feel free to text or email on sahil AT sahilister.in
Abstract: As Large Language Models (LLMs) integrate into our social and economic interactions, we need to deepen our understanding of how humans respond to LLM opponents in strategic settings. We present the results of the first controlled, monetarily-incentivised laboratory experiment looking at differences in human behaviour in a multi-player p-beauty contest against other humans and LLMs. We use a within-subject design in order to compare behaviour at the individual level. We show that, in this environment, human subjects choose significantly lower numbers when playing against LLMs than against humans, which is mainly driven by the increased prevalence of ‘zero’ Nash-equilibrium choices. This shift is mainly driven by subjects with high strategic reasoning ability. Subjects who play the zero Nash-equilibrium choice motivate their strategy by appealing to the LLMs’ perceived reasoning ability and, unexpectedly, propensity towards cooperation. Our findings provide foundational insights into multi-player human-LLM interaction in simultaneous choice games, uncover heterogeneities in both subjects’ behaviour and their beliefs about LLMs’ play when playing against them, and suggest important implications for mechanism design in mixed human-LLM systems.
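For readers unfamiliar with the game: in a p-beauty contest each player typically picks a number in [0, 100], and the winner is whoever comes closest to p times the average guess. A minimal sketch of why iterated strategic reasoning drives choices to the ‘zero’ Nash equilibrium (the p = 2/3 and the level-0 guess of 50 are illustrative assumptions, not the paper's parameters):

# A level-k player best-responds to a population of level-(k-1) players
# by guessing p times the level-(k-1) guess.
p = 2 / 3
guess = 50.0  # assumed level-0 anchor
for level in range(1, 11):
    guess = p * guess
    print(f"level {level:2d}: guess = {guess:6.2f}")
# Guesses shrink geometrically towards 0, the unique Nash equilibrium,
# which is the choice the paper's high-reasoning subjects favour against LLMs.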
We await a synchronous function which returns a promise, passing a function to the promise. As a general rule, you don't construct promises directly; you let asynchronous code generate them and pass them around (or await them). It's not a thing you never do, but it's certainly suspicious. It gets more problematic when Nona adds:
This function happens to contain multiple code repetition snippets, including these three lines.
That's right, this little block appears multiple times in the function, inside of the anonymous function getting passed to the Promise.
No, the code does not work in its current state. It's unclear what the 2100 line function was supposed to do. And yes, this was written by lowest-bidder third-party contractors.
Nona adds:
I am numb at this point and know I gotta fix it or we lose contracts
Management made the choice to "save money" by hiring third parties, and now Nona's team gets saddled with all the crunch to fix the problems created by the "savings".
Author: A. R. Waking up to a blaring alarm while in a war is nothing out of the ordinary; for Atlas, it happens daily, or a couple of times a month. He would have never thought today would be any different. ​ Stationed on a planet controlled by the Terrestrian Coalition, Atlas was used to […]
This article on the walls of Constantinople is fascinating.
The system comprised four defensive lines arranged in formidable layers:
- The brick-lined ditch, divided by bulkheads and often flooded, 15–20 meters wide and up to 7 meters deep.
- A low breastwork, about 2 meters high, enabling defenders to fire freely from behind.
- The outer wall, 8 meters tall and 2.8 meters thick, with 82 projecting towers.
- The main wall—a towering 12 meters high and 5 meters thick—with 96 massive towers offset from those of the outer wall for maximum coverage.
Behind the walls lay broad terraces: the parateichion, 18 meters wide, ideal for repelling enemies who crossed the moat, and the peribolos, 15–20 meters wide between the inner and outer walls. From the moat’s bottom to the highest tower top, the defences reached nearly 30 meters—a nearly unscalable barrier of stone and ingenuity.
It’s true – processing data from software defined radios can be a bit
complex – which tends to keep all but the most grizzled experts and bravest
souls from playing with it. While I wouldn’t describe myself as either, I will
say that I’ve stuck with it for longer than most would have expected of me.
One of the biggest takeaways I have from my adventures with software defined
radio is that there’s a lot of cool crossover opportunity between RF and
nearly every other field of engineering.
Fairly early on, I decided on a very light metadata scheme to track SDR
captures, called rfcap. rfcap has withstood my test
of time, and I can go back to even my earliest captures and still make sense of
what they are – IQ format, capture frequencies, sample rates, etc. A huge
part of this was the simplicity of the scheme (fixed-length header, byte-aligned
to supported capture formats), which made it roughly as easy to work with as a
raw file of IQ samples.
However, rfcap has a number of downsides. It’s only a single, fixed-length
header. If the frequency of operation changed during the capture, that change
is not represented in the capture information. It’s not possible to easily
represent multi-channel coherent IQ streams, and additional metadata is
condemned to adjacent text files.
ARF (Archive of RF)
A few years ago, I needed to finally solve some of these shortcomings and tried
to see if a new format would stick. I sat down and wrote out my design goals
before I started figuring out what it looked like.
First, whatever I come up with must be capable of being streamed and processed
while being streamed. This includes streaming across the network or merely
written to disk as it’s being created. No post-processing required. This is
mostly an artifact of how I’ve built all my tools and how I interact with my
SDRs. I use them extensively over the network (both locally, as well
as remotely by friends across my widerlan). This decision sometimes even
prompts me to do some crazy things from time
to time.
I need actual, real support for multiple IQ channels from my multi-channel SDRs
(Ettus, Kerberos/Kraken SDR, etc) for playing with things like
beamforming.
My new format must be capable of storing
multiple streams in a single capture file, rather than a pile of files in
a directory (and hope they’re aligned).
Finally, metadata must be capable of being stored in-band. The initial set of
metadata I needed to formalize in-stream were Frequency Changes and
Discontinuities. Since then, ARF has grown a few more.
After getting all that down, I opted to start at what I thought the simplest
container would look like,
TLV
(tag-length-value) encoded packets. This is a fairly well trodden path,
and used by a bunch of existing protocols
we all know
and
love.
Each ARF file (or stream) was a set of
encoded “packets” (sometimes called data units in other specs). This means that
unknown packet types may be skipped (since the length is included) and
additional data can be added after the existing fields without breaking
existing decoders.
Heads up!
Once this is posted, I'm not super likely to update this page; the
latest stable copy of the ARF spec is maintained at
draft-tagliamonte-arf-00.txt.
This page may quickly become out of date, so if you're actually interested in
implementing this, I've put a lot of effort into making the draft
comprehensive, and I plan to maintain it as I edit the format.
Unlike a “traditional” TLV structure, I opted to add “flags” to the top-level
packet. This gives me a bit of wiggle room down the line, and gives me a
feature that I like from ASN.1 – a “critical� bit. The critical bit indicates
that the packet must be understood fully by implementers, which allows future
backward incompatible changes by marking a new packet type as critical. This
would only really be done if something meaningfully changed the interpretation
of the backwards compatible data to follow.
Flag  Description
0x01  Critical (tag must be understood)
Within each Packet is a tag field. This tag indicates how the contents of the
value field should be interpreted.
In order to help with checking the basic parsing and encoding of this format,
the following is an example packet which should parse without error.
00, // tag (0; no subpacket is 0 yet)
00, // flags (0; no flags)
00, 00 // length (0; no data)
// data would go here, but there is none
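To make the framing concrete, here is a minimal decoder sketch in Python. The layout (a 1-byte tag, a 1-byte flags field, a 2-byte length, then the value) follows the example above and the u16 length limit discussed at the end of this post; big-endian integers are my assumption, by analogy with the frequency example further down, and the draft is what's authoritative:

import io
import struct

FLAG_CRITICAL = 0x01  # from the flags table above

def read_packets(stream):
    # Yield (tag, flags, value) tuples until the stream ends.
    while True:
        header = stream.read(4)
        if len(header) < 4:
            return  # clean end of stream
        tag, flags, length = struct.unpack(">BBH", header)
        yield tag, flags, stream.read(length)

# Only the example's placeholder tag; the real tag registry lives in the draft.
KNOWN_TAGS = {0x00}

for tag, flags, value in read_packets(io.BytesIO(bytes([0x00, 0x00, 0x00, 0x00]))):
    if tag not in KNOWN_TAGS and flags & FLAG_CRITICAL:
        raise ValueError(f"unknown critical tag {tag:#04x}")
    # Unknown non-critical packets can simply be skipped, which is the point.
    print(tag, flags, value)  # prints: 0 0 b''

This also demonstrates the skip-unless-critical behaviour: a decoder that doesn't recognize a tag keeps going, unless the critical bit insists otherwise.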
Additionally, throughout the rest of the subpackets, there are a few unique and
shared datatypes. I document them all more clearly in the draft, but to quickly
run through them here too:
UUID
This field represents a globally unique identifier, as defined by RFC 9562, as
16 raw bytes.
Frequency
Data encoded in a Frequency field is stored as microhz (1 Hz is stored as
1000000, 2 Hz is stored as 2000000) as an unsigned 64 bit integer. This has a
minimum value of 0 Hz, and a maximum value of 18446744073709551615 uHz, or just
above 18.4 THz. This is a bit of a tradeoff, but it’s a set of issues that I
would gladly contend with rather than deal with the related issues with storing
frequency data as a floating point value downstream. Not a huge factor, but as
an aside, this is also how my current generation SDR processing code (sparky)
stores Frequency data internally, which makes conversion between the two
natural.
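A small sketch of the conversion, which reproduces the frequency change example from later in this post (big-endian byte order, per that example):

UHZ_PER_HZ = 1_000_000  # the field stores microhertz in an unsigned 64 bit integer

def hz_to_field(hz):
    # e.g. 200 MHz -> 200_000_000_000_000 uHz on the wire
    return round(hz * UHZ_PER_HZ).to_bytes(8, "big")

def field_to_hz(raw):
    return int.from_bytes(raw, "big") / UHZ_PER_HZ

print(hz_to_field(200e6).hex(" "))                     # 00 00 b5 e6 20 f4 80 00
print(field_to_hz(bytes.fromhex("0000b5e620f48000")))  # 200000000.0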
IQ samples
ARF supports IQ samples in a number of different formats. Part of the idea here
is I want it to be easy for capturing programs to encode ARF for a specific
radio without mandating a single iq format representation. For IQ types with
a scalar value which takes more than a single byte, this is always paired
with a Byte Order field, to indicate if the IQ scalar values are little or
big endian.
ID    Name  Description
0x01  f32   interleaved 32 bit floating point scalar values
0x02  i8    interleaved 8 bit signed integer scalar values
0x03  i16   interleaved 16 bit signed integer scalar values
0x04  u8    interleaved 8 bit unsigned integer scalar values
0x05  f64   interleaved 64 bit floating point scalar values
0x06  f16   interleaved 16 bit floating point scalar values
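One way to put that table to work is to map each format ID to a dtype and fold the interleaved scalars into complex samples. A sketch using NumPy (my tooling choice here, not something the format mandates):

import numpy as np

IQ_DTYPES = {
    0x01: np.float32,  # f32
    0x02: np.int8,     # i8
    0x03: np.int16,    # i16
    0x04: np.uint8,    # u8
    0x05: np.float64,  # f64
    0x06: np.float16,  # f16
}

def decode_iq(format_id, little_endian, payload):
    # Interpret raw interleaved scalars and pair them up as I + jQ.
    dtype = np.dtype(IQ_DTYPES[format_id]).newbyteorder("<" if little_endian else ">")
    scalars = np.frombuffer(payload, dtype=dtype).astype(np.float64)
    return scalars[0::2] + 1j * scalars[1::2]

# The samples example below, read as big-endian i16: one complex sample.
print(decode_iq(0x03, False, bytes.fromhex("abcdabcd")))  # [-21555.-21555.j]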
Header
Each ARF file must start with a specific Header packet. The header contains
information about the ARF stream writ large to follow. Header packets are
always marked as “critical”.
- magic
- flags
- start
- guid
- site guid
- #st (number of streams)
In order to help with checking the basic parsing and encoding of this format,
the following is an example header subpacket (when encoded or decoded this
will be found inside an ARF packet as described above) which should parse
without error, with known values.
Immediately after the arf Header, some number of Stream Headers
follow. There must be exactly the same number of Stream Header packets as are
indicated by the num streams field of the Header. This has the nice effect of
enabling clients to read all the stream headers without requiring buffering of
“unread” packets from the stream.
- id
- flags
- fmt
- bo (byte order)
- rate
- freq
- guid
- site
In order to help with checking the basic parsing and encoding of this format,
the following is an example stream header subpacket (when encoded or decoded
this will be found inside an ARF packet as described above) which should parse
without error, with known values.
Samples
Block of IQ samples in the format indicated by this stream’s format and
byte_order fields sent in the related Stream Header.
- id
- iq samples
In order to help with checking the basic parsing and encoding of this format,
the following is an example samples subpacket (when encoded or decoded
this will be found inside an ARF packet as described above). The IQ values
here are notional (and are either two 8 bit samples or one 16 bit sample,
depending on what the related Stream Header was).
01, // id
ab, cd, ab, cd, // iq samples
Frequency Change
The center frequency of the IQ stream has changed since the
Stream Header or the last Frequency Change was sent. This is useful for
capturing IQ streams that jump around in frequency over the duration of the
capture, rather than starting and stopping them.
Its fields are: id, frequency.
In order to help with checking the basic parsing and encoding of this format,
the following is a frequency change subpacket (when encoded or decoded
this will be found inside an ARF packet as described above).
01, // id
00, 00, b5, e6, 20, f4, 80, 00 // frequency (200 MHz)
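Decoding those eight bytes as a big-endian unsigned 64 bit integer recovers the advertised value; a quick Python self-check (mine, not part of the spec):

import struct

raw = bytes.fromhex("0000b5e620f48000")
uhz = struct.unpack(">Q", raw)[0]
assert uhz == 200_000_000_000_000   # microhertz
assert uhz / 1_000_000 == 200e6     # i.e. 200 MHz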
Discontinuity
Since the last Samples packet for this stream, samples have been dropped
or not encoded to this stream. This can be used for a stream that has
dropped samples for some reason, a large gap (the radio was needed for
something else), or for communicating “IQ snippets”.
Its only field is: id.
In order to help with checking the basic parsing and encoding of this format,
the following is a discontinuity subpacket (when encoded or decoded this will
be found inside an ARF packet as described above).
01, // id
Location
Up-to-date location, as of this moment in the IQ stream, usually from a GPS.
This allows in-band geospatial information to be marked in the IQ stream.
This can be used for all sorts of things (detected IQ packet snippets aligned
with a time and location, or a survey of RF noise in an area).
The sys field indicates the Geodetic system to be used for the provided
latitude, longitude and elevation fields. The full list of supported
geodetic systems is currently just WGS84, but in case something meaningfully
changes in the future, it’d be nice to migrate forward.
Unfortunately, being a bit of a coward here, the accuracy field is a bit of a
cop-out. I’d really rather it be what we see out of kinematic state estimation
tools like a kalman filter, or at minimum, some sort of ellipsoid. This is
neither of those - it’s a perfect sphere of error where we pick the largest
error in any direction and use that. Truthfully, I can’t be bothered to model
this accurately, and I don’t want to contort myself into half-assing something
I know I will half-ass just because I know better.
In order to help with checking the basic parsing and encoding of this format,
the following is a location subpacket (when encoded or decoded this will be
found inside an ARF packet as described above).
Vendor Extension
In addition to the fields I put in the spec, I expect that I may need custom
packet types I can't think of now. There's all sorts of useful data that could
be encoded into the stream, so I'd rather there be an officially sanctioned
mechanism that allows future work on the spec without constraining myself.
As an example, I've used a custom subpacket to create test vectors: the
original data is encoded into a Vendor Extension, followed by the IQ for the
modulated packet. If the demodulated data and the in-band original data don't
match, we've regressed. You could imagine in-band speech-to-text, antenna
rotator azimuth information, or demodulated digital sideband data (like FM HDR
data) too. Or even things I can't think of yet!
Its fields are: id, data.
In order to help with checking the basic parsing and encoding of this format,
the following is a vendor extension subpacket (when encoded or decoded this
will be found inside an ARF packet as described above).
The biggest tradeoff that I'm not entirely happy with is limiting the length
of a packet to a u16 – 65535 bytes. Given the u8 id at the start of a Samples
subpacket, this limits us to 8191 32 bit sample pairs at a time. I wound up
believing that the overhead in terms of additional packet framing is worth it –
always encoding 4 byte lengths felt like overkill, and a dynamic length scheme
ballooned codepaths in the decoder that I was trying to keep as easy to change
as possible as I worked with the format.
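The 8191-pair figure falls straight out of the framing arithmetic; a one-line check, with constants named by me:

MAX_PAYLOAD = 0xFFFF   # u16 length: at most 65535 bytes of packet data
ID_OVERHEAD = 1        # the u8 id at the front of a Samples subpacket
F32_IQ_PAIR = 2 * 4    # one 32 bit I scalar plus one 32 bit Q scalar

assert (MAX_PAYLOAD - ID_OVERHEAD) // F32_IQ_PAIR == 8191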
The nineteenth release of the qlcal package
arrived at CRAN just now, and
has already been built for r2u. This version
synchronises with QuantLib 1.42
released this week.
qlcal
delivers the calendaring parts of QuantLib. It is provided (for the R
package) as a set of included files, so the package is self-contained
and does not depend on an external QuantLib library (which can be
demanding to build). qlcal covers
over sixty country / market calendars and can compute holiday lists, its
complement (i.e. business day lists) and much more. Examples
are in the README at the repository, the package page,
and of course at the CRAN package
page.
This release updates the 2025 holidays for China, Singapore, and
Taiwan.
The full details from NEWS.Rd follow.
Changes in version 0.1.1
(2026-04-15)
Synchronized with QuantLib 1.42 released two days ago
Calendar updates for China, Singapore, Taiwan
Courtesy of my CRANberries, there
is a diffstat report for this
release. See the project page
and package documentation for more details, and more examples.
Connected via serial console. Does not have a package manager, web or ssh server, but can play tetris in the terminal (bsdgames in Debian has the same tetris version packaged).
I’m speaking at RightsCon 2026 in Lusaka, Zambia, on May 6 and 7, 2026.
I’m giving a keynote address and participating in a panel discussion at an ICTLuxembourg event called “Europe at the Crossroads of AI, Power & the Future of Democracy.” The event will be held at the University of Luxembourg’s Belval Campus on May 12, 2026.
I’m speaking at the Potsdam Conference on National Cybersecurity at the Hasso Plattner Institut in Potsdam, Germany. The event runs June 24–25, 2026, and my talk will be the evening of June 24.
Candice (previously) has another WTF to share for us.
We're going to start by just looking at one fragment of a class defined in this C++ code: TLAflaList.
Every type and variable has a three-letter acronym buried in its name. The specific meanings of most of the acronyms are lost to time, so "TLA" is as good as any other three random letters. No one knows what "fla" is.
What drew Candice's attention was that there was a type called "list", which implies they're maybe not using the standard library and have reinvented a wheel. Another data point arguing in favor of that is that the class had a method called getNumElements, instead of something more conventional like size.
In addition to the meaningless three-letter acronyms which start every type and variable, we're also adding a lovely bit of Hungarian notation, throwing mv_ on the front for a member variable. The variable is called "array", but is it? Let's look at that definition.
Okay, that gives me a lot more nonsense letters, but I still have no idea what that variable is. Where's that type defined? The good news: it's in the same header.
So it's not a list or an array, it's a vector. A vector of bare pointers, which definitely makes me worry about inevitable use-after-free errors or memory leaks. Who owns the memory that those pointers are referencing?
"IN" in the type name is an old company, good ol' Initrode, which got acquired a decade ago. "tab" tells us that it's meant to be a database table. We can guess at the rest.
This isn't a codebase, it's a bad Scrabble hand. It's also a trainwreck. Confusing, disorganized, and all of that made worse by piles of typedefs that hide what you're actually doing and endless acronyms that make it impossible to read.
One last detail, which I'll let Candice explain:
I started scrolling down the class definition - it took longer than it should have, given that the company coding style is to double-space the overwhelming majority of lines. (Seriously; I've seen single character braces sandwiched by two lines of nothing.) On the upside, this was one of the classes with just one public block and one private block - some classes like to ping-pong back and forth a half-dozen times.
Author: D.P. Reitman Energy can neither be created nor destroyed, only transformed. The sun is a local manifestation of this law, its energy traveling a modest 93 million miles to feed the trees of this world. Meanwhile, 27,000 light-years away, Sagittarius A* churns, fueling the relentless pulse of a Milky Way galaxy 100,000 light-years across—a […]
While Freexian initiated Debusine, and is investing a lot of resources in the
project, we manage it as a true free software project that can and should have a
broader community.
We have always had documentation for new contributors
and we aim to be reactive with them when they interact via the issue tracker or
via merge requests. We decided to put those intentions under stress tests by
proposing five projects
for Google’s Summer of Code as part of Debian’s participation in that program.
Given that at least 11 candidates managed to get their merge request accepted in
the last 30 days (interacting with the development team is part of the
pre-requisites to apply to Google Summer of Code projects these days), the
contributing experience must not be too bad. 🙂 If you want to try it out, we
maintain a list of “quick fixes”
that are accessible to newcomers. And as always, we welcome your
feedback!
Debian CI: incus backend and upgrade to Bootstrap 5, by Antonio Terceiro
debci 3.14 was released on March 4th, with a followup 3.14.1 release containing
regression fixes a few days afterwards. Those releases were followed by new
development and maintenance work that will provide extra capabilities and
stability to the platform.
This month saw the initial version of an incus backend
land in Debian CI. The transition into the new backend will be done carefully so
as to not disrupt ‘testing’ migration. Each package will be running jobs with
both the current lxc backend and with incus. Packages that have the same result
on both backends will be migrated over, and packages that exhibit different
results will be investigated further, resulting in bug reports and/or other
communication with the maintainers.
On the frontend side, the code has been ported to Bootstrap 5
from the now ancient Bootstrap 3. This need was
originally reported back in 2024,
based on the lack of security support for Bootstrap 3. Beyond improving
maintainability, this upgrade also enables support for dark mode in debci,
which is still a work in progress.
Both updates mentioned in this section will be available in a following debci
release.
Salsa CI maintenance by Santiago Ruano Rincón et al.
Santiago reviewed some Salsa CI issues and reviewed associated merge requests.
For example, he investigated a regression (#545),
introduced by the move to sbuild,
on the use of extra repositories configured as “.source” files; and reviewed the
MR (!712)
that fixes it.
Also, there were conflicts with changes made in debci 3.14
and debci 3.14.1
(those updates are mentioned above), and different people have contributed to
fixing the resulting issues in a long-term way. This includes Raphaël, who
proposed MR !707
and who also suggested that Antonio merge the Salsa CI patches to avoid similar
errors in the future. This happened shortly after.
Those fixes finally required the unrelated MR !709,
which will prevent similar problems when building images.
To identify bugs related to the autopkgtest support in the backport suites as
early as possible, Santiago proposed MR !708.
Finally, Santiago, in collaboration with Emmanuel Arias, also had exchanges with
GSoC candidates for the Salsa CI project,
including the contributions they have made as merge requests. It is important to
note that there are several very good candidates interested in participating.
Thanks a lot to them for their work so far!
Miscellaneous contributions
Raphaël reported a zim bug
affecting Debian Unstable users, which had apparently already been fixed in
git. He could thus cherry-pick the fix and update the package
in Debian Unstable.
Carles submitted translation errors in the debian-installer Weblate.
Carles, using po-debconf-manager,
improved Catalan translations: reviewed and submitted 3 packages. Also improved
error handling when forking or submitting an MR if the fork already existed.
Carles kept improving check-relations:
general code base improvements (added strict typing, enabled pre-commit).
Also added DebPorts support and virtual package support, plus commands for
reporting missing relations and importing bugs from bugs.debian.org.
Antonio handled miscellaneous Salsa support requests.
Stefano and Santiago continued to help with DebConf 26 preparations.
Stefano reviewed some contributions to debian-reimbursements and handled admin
for reimbursements.debian.net.
Stefano attended the Debian Technical Committee meeting.
Helmut sent 8 patches for cross build failures.
Building on the work of postmarketOS,
Helmut managed to cross build systemd for musl in rebootstrap and sent several
patches in the process.
Helmut reviewed several MRs of Johannes Schauer Marin Rodrigues expanding
support for DPKG_ROOT to support installing hurd.
Helmut incorporated a final round of feedback for the Multi-Arch documentation
in Debian policy, which finally made it into unstable
together with documentation of Build-Profiles.
In order to fix python-memray, Helmut
NMUed libunwind,
disabling C++ exception support in general, as it is an incompatible duplication
of the gcc implementation. Unfortunately, that ended up breaking suricata on riscv64.
After another NMU,
python-memray finally migrated.
Thorsten uploaded new upstream versions of epson-inkjet-printer-escpr and
sane-airscan. He also fixed a packaging bug in printer-driver-oki. As of
systemd 260.1-1, the lpadmin configuration has been added to the sysusers.d
configuration. All printing packages can now simply depend on the
systemd-sysusers package and no longer have to take care of user creation in
maintainer scripts.
In collaboration with Emmanuel Arias, Santiago had exchanges with GSoC
candidates and reviewed the proposals of the
Linux livepatching GSoC 2026 project.
Colin upgraded tango and pytango to new upstream releases and packaged
pybind11-stubgen (needed for pytango), thanks to a Freexian customer. Tests of
reproducible builds revealed that pybind11-stubgen didn’t generate imports in a
stable order; this is now fixed upstream.
Lucas fixed CVE-2025-67733
and CVE-2026-21863
affecting src:valkey in unstable and testing. Also reviewed the same fixes
targeting stable proposed by Peter Wienemann.
Faidon worked with upstream and build-dep Debian maintainers on resolving
blockers in order to bring pyHanko into Debian, starting with the adoption of
python-pyhanko-certvalidator. pyHanko is a suite for signing and stamping PDF
files, and one of the few libraries that can be leveraged to sign PDFs with
eIDAS Qualified Electronic Signatures.
Anupa co-organized MiniDebConf Kanpur
and attended the event with many others from all across India. She handled the
accommodation arrangements along with the registration team members, and worked
on the budget and expenses. She was also a speaker at the event.
Lucas helped with content review/schedule for the
MiniDebConf Campinas. Thanks Freexian for
being a Gold sponsor!
Lucas organized and took part in a one-day in-person sprint to work on
Ruby 3.4 transition. It was held in a coworking space in Brasilia - Brazil on
April 6th. There were 5 DDs and they fixed multiple packages FTBFSing against
Ruby 3.4 (coming to unstable soon hopefully). Lucas has been postponing a blog
post about this sprint since then :-)
Microsoft today pushed software updates to fix a staggering 167 security vulnerabilities in its Windows operating systems and related software, including a SharePoint Server zero-day and a publicly disclosed weakness in Windows Defender dubbed “BlueHammer.” Separately, Google Chrome fixed its fourth zero-day of 2026, and an emergency update for Adobe Reader nixes an actively exploited flaw that can lead to remote code execution.
Redmond warns that attackers are already targeting CVE-2026-32201, a vulnerability in Microsoft SharePoint Server that allows attackers to spoof trusted content or interfaces over a network.
Mike Walters, president and co-founder of Action1, said CVE-2026-32201 can be used to deceive employees, partners, or customers by presenting falsified information within trusted SharePoint environments.
“This CVE can enable phishing attacks, unauthorized data manipulation, or social engineering campaigns that lead to further compromise,” Walters said. “The presence of active exploitation significantly increases organizational risk.”
Microsoft also addressed BlueHammer (CVE-2026-33825), a privilege escalation bug in Windows Defender. According to BleepingComputer, the researcher who discovered the flaw published exploit code for it after notifying Microsoft and growing exasperated with their response. Will Dormann, senior principal vulnerability analyst at Tharros, says he confirmed that the public BlueHammer exploit code no longer works after installing today’s patches.
Satnam Narang, senior staff research engineer at Tenable, said April marks the second-biggest Patch Tuesday ever for Microsoft. Narang also said there are indications that a zero-day flaw Adobe patched in an emergency update on April 11 — CVE-2026-34621 — has seen active exploitation since at least November 2025.
Adam Barnett, lead software engineer at Rapid7, called the patch total from Microsoft today “a new record in that category” because it includes nearly 60 browser vulnerabilities. Barnett said it might be tempting to imagine that this sudden spike was tied to the buzz around the announcement a week ago today of Project Glasswing — a much-hyped but still unreleased new AI capability from Anthropic that is reportedly quite good at finding bugs in a vast array of software.
But he notes that Microsoft Edge is based on the Chromium engine, and the Chromium maintainers acknowledge a wide range of researchers for the vulnerabilities which Microsoft republished last Friday.
“A safe conclusion is that this increase in volume is driven by ever-expanding AI capabilities,” Barnett said. “We should expect to see further increases in vulnerability reporting volume as the impact of AI models extend further, both in terms of capability and availability.”
Finally, no matter what browser you use to surf the web, it’s important to completely close out and restart the browser periodically. This is really easy to put off (especially if you have a bajillion tabs open at any time) but it’s the only way to ensure that any available updates get installed. For example, a Google Chrome update released earlier this month fixed 21 security holes, including the high-severity zero-day flaw CVE-2026-5281.
For a clickable, per-patch breakdown, check out the SANS Internet Storm Center Patch Tuesday roundup. Running into problems applying any of these updates? Leave a note about it in the comments below and there’s a decent chance someone here will pipe in with a solution.
AI is rapidly changing how software is written, deployed, and used. Trends point to a future where AIs can write custom software quickly and easily: "instant software." Taken to an extreme, it might become easier for a user to have an AI write an application on demand—a spreadsheet, for example—and delete it when they're done using it than to buy one commercially. Future systems could include a mix: both traditional long-term software and ephemeral instant software that is constantly being written, deployed, modified, and deleted.
AI is changing cybersecurity as well. In particular, AI systems are getting better at finding and patching vulnerabilities in code. This has implications for both attackers and defenders, depending on the ways this and related technologies improve.
In this essay, I want to take an optimistic view of AI’s progress, and to speculate what AI-dominated cybersecurity in an age of instant software might look like. There are a number of unknowns that will factor into how the arms race between attacker and defender might play out.
How flaw discovery might work
On the attacker side, the ability of AIs to automatically find and exploit vulnerabilities has increased dramatically over the past few months. We are already seeing both government and criminal hackers using AI to attack systems. The exploitation part is critical here, because it gives an unsophisticated attacker capabilities far beyond their understanding. As AIs get better, expect more attackers to automate their attacks using AI. And as individuals and organizations can increasingly run powerful AI models locally, AI companies monitoring and disrupting malicious AI use will become increasingly irrelevant.
Expect open-source software, including open-source libraries incorporated in proprietary software, to be the most targeted, because vulnerabilities are easier to find in source code. Unknown No. 1 is how well AI vulnerability discovery tools will work against closed-source commercial software packages. I believe they will soon be good enough to find vulnerabilities just by analyzing a copy of a shipped product, without access to the source code. If that’s true, commercial software will be vulnerable as well.
Particularly vulnerable will be software in IoT devices: things like internet-connected cars, refrigerators, and security cameras. Also industrial IoT software in our internet-connected power grid, oil refineries and pipelines, chemical plants, and so on. IoT software tends to be of much lower quality, and industrial IoT software tends to be legacy.
Instant software is differently vulnerable. It’s not mass market. It’s created for a particular person, organization, or network. The attacker generally won’t have access to any code to analyze, which makes it less likely to be exploited by external attackers. If it’s ephemeral, any vulnerabilities will have a short lifetime. But lots of instant software will live on networks for a long time. And if it gets uploaded to shared tool libraries, attackers will be able to download and analyze that code.
All of this points to a future where AIs will become powerful tools of cyberattack, able to automatically find and exploit vulnerabilities in systems worldwide.
Automating patch creation
But that’s just half of the arms race. Defenders get to use AI, too. These same AI vulnerability-finding technologies are even more valuable for defense. When the defensive side finds an exploitable vulnerability, it can patch the code and deny it to attackers forever.
How this works in practice depends on another related capability: the ability of AIs to patch vulnerable software, which is closely related to their ability to write secure code in the first place.
AIs are not very good at this today; the instant software that AIs create is generally filled with vulnerabilities, both because AIs write insecure code and because the people vibe coding don’t understand security. OpenClaw is a good example of this.
Unknown No. 2 is how much better AIs will get at writing secure code. The fact that they’re trained on massive corpuses of poorly written and insecure code is a handicap, but they are getting better. If they can reliably write vulnerability-free code, it would be an enormous advantage for the defender. And AI-based vulnerability-finding makes it easier for an AI to train on writing secure code.
We can envision a future where AI tools that find and patch vulnerabilities are part of the typical software development process. We can’t say that the code would be vulnerability-free—that’s an impossible goal—but it could be without any easily findable vulnerabilities. If the technology got really good, the code could become essentially vulnerability-free.
Patching lags and legacy software
For new software—both commercial and instant—this future favors the defender. For commercial and conventional open-source software, it’s not that simple. Right now, the world is filled with legacy software. Much of it—like IoT device software—has no dedicated security team to update it. Sometimes it is incapable of being patched. Just as it’s harder for AIs to find vulnerabilities when they don’t have access to the source code, it’s harder for AIs to patch software when they are not embedded in the development process.
I’m not as confident that AI systems will be able to patch vulnerabilities as easily as they can find them, because patching often requires more holistic testing and understanding. That’s Unknown No. 3: how quickly AIs will be able to create reliable software updates for the vulnerabilities they find, and how quickly customers can update their systems.
Today, there is a time lag between when a vendor issues a patch and customers install that update. That time lag is even longer for large organizational software; the risk of an update breaking the underlying software system is just too great for organizations to roll out updates without testing them first. But if AI can help speed up that process, by writing patches faster and more reliably, and by testing them in some AI-generated twin environment, the advantage goes to the defender. If not, the attacker will still have a window to attack systems until a vulnerability is patched.
Toward self-healing
In a truly optimistic future, we can imagine a self-healing network. AI agents continuously scan the ever-evolving corpus of commercial and custom AI-generated software for vulnerabilities, and automatically patch them on discovery.
For that to work, software license agreements will need to change. Right now, software vendors control the cadence of security patches. Giving software purchasers this ability has implications for compatibility, the right to repair, and liability. Any solutions here are in the realm of policy, not tech.
If the defense can find, but can't reliably patch, flaws in legacy software, that's where attackers will focus their efforts. If that's the case, we can imagine continuously evolving AI-powered intrusion detection that scans inputs and blocks malicious attacks before they reach vulnerable software. Not as transformative as automatically patching vulnerabilities in running code, but nevertheless valuable.
The power of these defensive AI systems increases if they are able to coordinate with each other, and share vulnerabilities and updates. A discovery by one AI can quickly spread to everyone using the affected software. Again: Advantage defender.
There are other variables to consider. The relative success of attackers and defenders also depends on how plentiful vulnerabilities are, how easy they are to find, whether AIs will be able to find the more subtle and obscure vulnerabilities, and how much coordination there is among different attackers. All this comprises Unknown No. 4.
Vulnerability economics
Presumably, AIs will clean up the obvious stuff first, which means that any remaining vulnerabilities will be subtle. Finding them will take AI computing resources. In the optimistic scenario, defenders pool resources through information sharing, effectively amortizing the cost of defense. If information sharing doesn’t work for some reason, defense becomes much more expensive, as individual defenders will need to do their own research. But instant software means much more diversity in code: an advantage to the defender.
This needs to be balanced with the relative cost of attackers finding vulnerabilities. Attackers already have an inherent way to amortize the costs of finding a new vulnerability and create a new exploit. They can vulnerability hunt cross-platform, cross-vendor, and cross-system, and can use what they find to attack multiple targets simultaneously. Fixing a common vulnerability often requires cooperation among all the relevant platforms, vendors, and systems. Again, instant software is an advantage to the defender.
But those hard-to-find vulnerabilities become more valuable. Attackers will attempt to do what the major intelligence agencies do today: find "nobody but us" zero-day exploits. They will either use them slowly and sparingly to minimize detection or quickly and broadly to maximize profit before they’re patched. Meanwhile, defenders will be both vulnerability hunting and intrusion detecting, with the goal of patching vulnerabilities before the attackers find them.
We can even imagine a market for vulnerability sharing, where the defender who finds a vulnerability and creates a patch is compensated by everyone else in the information-sharing/repair network. This might be a stretch, but maybe.
Up the stack
Even in the most optimistic future, attackers aren’t going to just give up. They will attack the non-software parts of the system, such as the users. Or they’re going to look for loopholes in the system: things that the system technically allows but were unintended and unanticipated by the designers—whether human or AI—and can be used by attackers to their advantage.
What’s left in this world are attacks that don’t depend on finding and exploiting software vulnerabilities, like social engineering and credential stealing attacks. And we have already seen how AI-generated deepfakes make social engineering easier. But here, too, we can imagine defensive AI agents that monitor users’ behaviors, watching for signs of attack. This is another AI use case, and one that I’m not even sure how to think about in terms of the attacker/defender arms race. But at least we’re pushing attacks up the stack.
Also, attackers will attempt to infiltrate and influence defensive AIs and the networks they use to communicate, poisoning their output and degrading their capabilities. AI systems are vulnerable to all sorts of manipulations, such as prompt injection, and it’s unclear whether we will ever be able to solve that. This is Unknown No. 5, and it’s a biggie. There might always be a "trusting trust problem."
No future is guaranteed. We truly don’t know whether these technologies will continue to improve and when they will plateau. But given the pace at which AI software development has improved in just the past few months, we need to start thinking about how cybersecurity works in this instant software world.
EDITED TO ADD: Two essays published after I wrote this. Both are good illustrations of where we are regarding AI vulnerability discovery. Things are changing very fast.
It seems my own plans and life's plans diverged this spring,
so I am in the market for a new job. If you're looking for
someone with a long track record of making your code go brrr
really fast, give me a ping (contact information at
my homepage). Working from Oslo
(on-site or remote), CV available upon request. No AI boosterism
or cryptocurrency grifters, please :-)
A maintenance release 0.3.13 of the anytime
package arrived on CRAN today,
sticking with the roughly yearly schedule we have now. Binaries for r2u have been built
already. The package is fairly feature-complete, and code and
functionality remain mature and stable.
anytime
is a very focused package aiming to do just one thing really
well: to convert anything in integer, numeric, character,
factor, ordered, … input format to either POSIXct (when called as
anytime) or Date objects (when called as
anydate) – and to do so without requiring a format
string as well as accommodating different formats in one input
vector. See the anytime page,
the GitHub repo
for a few examples, the nice pdf
vignette, and the beautiful documentation site
for all documentation.
This release was triggered by a bizarre bug seen on elementary OS 8.
For “reasons”, anytime was
taking note on startup of where it runs, using a small and simple piece
of code reading /etc/os-release when it exists. We assumed
sane content, but this particular operating system release managed
to have a duplicate entry, throwing a spanner in the works. So now this
code is robust to duplicates, and is no longer executed on each startup but
“as needed”, which is a net improvement. We also switched the vignette to
being deployed by the new Rcpp::asis() driver.
The short list of changes follows.
Changes in anytime
version 0.3.13 (2026-04-14)
Continuous integration has received minor updates
The vignette now uses the Rcpp::asis() driver, and
references have been refreshed
Stateful 'where are we running' detection is now more robust, and
has been moved from running on each startup to a cached 'as needed'
case
At last, I can run my own large language model artificial idiocy
generator at home on a Debian testing host using Debian packages
directly from the Debian archive. After months of polishing the
llama.cpp,
whisper.cpp and
ggml packages, and their
dependencies, I was very happy to see today that they all entered
Debian testing this morning. Several release-critical issues in
dependencies have been blocking the migration for the last few weeks,
and now finally the last one of these has been fixed. I would like to
extend a big thanks to everyone involved in making this happen.
I've been running home-built editions of whisper.cpp and llama.cpp
packages for a while now, first building from the upstream Git
repository and later, as the Debian packaging progressed, from the
relevant Salsa Git repositories for the ROCM packages, GGML,
whisper.cpp and llama.cpp. The only snag with the official Debian
packages is that the JavaScript chat client web pages are slightly
broken in my setup, where I use a reverse proxy to make my home server
visible on the public Internet while the included web pages only want
to communicate with localhost / 127.0.0.1. I suspect it might be
simple to fix by making the JavaScript code dynamically look up the
URL of the current page and use that to determine where to find the
API service, but until someone fixes
BTS report #1128381, I
just have to edit
/usr/share/llama.cpp-tools/llama-server/themes/simplechat/simplechat.js
every time I upgrade the package. I start my server like this on my
machine with a nice AMD GPU (donated to me as a Debian developer by
AMD two years ago, thank you very much):
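A representative invocation might look something like the following, where the model file, port, and GPU offload count are illustrative placeholders rather than the author's exact flags:

llama-server -m /path/to/Qwen3-Coder.gguf --host 0.0.0.0 --port 8080 -ngl 99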
It only takes a few minutes to load the model for the first time
and prepare a nice API server for me at
https://my.reverse.proxy.example.com:8080/v1/, available
(note, this sets the server up without authentication; use a
reverse proxy with authentication if you need it) for all the API
clients I care to test. I switch models regularly to test different
new ones; the Qwen3-Coder one just happens to be the one I use at the
moment. Perhaps these packages are something for you to have fun with
too?
As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
Abstract: The rapid expansion of artificial intelligence (AI) is raising concerns about its potential to transform cybercrime. Beyond empowering novice offenders, AI stands to intensify the scale and sophistication of attacks by seasoned cybercriminals. This paper examines the evolving relationship between cybercriminals and AI using a unique dataset from a cyber threat intelligence platform. Analyzing more than 160 cybercrime forum conversations collected over seven months, our research reveals how cybercriminals understand AI and discuss how they can exploit its capabilities. Their exchanges reflect growing curiosity about AI’s criminal applications through legal tools and dedicated criminal tools, but also doubts and anxieties about AI’s effectiveness and its effects on their business models and operational security. The study documents attempts to misuse legitimate AI tools and develop bespoke models tailored for illicit purposes. Combining the diffusion of innovation framework with thematic analysis, the paper provides an in-depth view of emerging AI-enabled cybercrime and offers practical insights for law enforcement and policymakers.
I've been using the Furilabs FLX1s phone [1] as my daily driver for 6 weeks. It's a decent phone: not as good as I hoped, but good enough to use every day and rely on for phone calls about job interviews etc. I intend to keep using it as my main phone and as a platform to improve phone software in Debian, as you really can't effectively find bugs unless you use the platform for important tasks.
Support Problems
I previously wrote about the phone after I received it without a SIM caddy on the 13th of Jan. I had a saga with support about this: on the 16th of Jan one support person said that they would ship it immediately, but didn't provide a tracking number or any indication of when it would arrive. On the 5th of Feb I contacted support again and asked how long it would be; the new support person seemed to have no record of my previous communication but said that they would send it. On the 17th of Feb I made another support request, including asking for a way of direct communication as the support email came from an address that wouldn't accept replies; I was asked for a photo showing where the problem is. The support person also said that they might have to send a replacement phone!
The last support request I sent included my disappointment at the time taken to resolve the issue and the proposed solution of replacing the entire phone (why have two international shipments of a fragile and expensive phone when a single letter with a cheap SIM caddy would do?). I didn’t receive a reply but the SIM caddy arrived on the 2nd of Mar. Here is a pic of the SIM caddy and the package it came in:
One thing that should be noted is that some of the support people seemed to be very good at their jobs and they were all friendly. It was the system that failed here, turning a minor issue of a missing part into a 6 week saga.
Furilabs needs to do the following to address this issue:
Make it possible to reply directly to a message from a support person. Accept email with a custom subject to sort it, give a URL for a web form, anything. Collating discussions with a customer allows giving better support while taking less time for the support people.
Have someone monitor every social media address that is used by the company. When someone sends a support request in a public Mastodon post it indicates that something has gone wrong and you want to move quickly to resolve it.
Take care of the little things, like sending a tracking number for every parcel. If it’s something too small for a parcel (the SIM caddy could have fit in a regular letter) then just tell the customer what date it was posted and where it was posted from so they have some idea of when it will arrive.
This is not just a single failure of Furilabs support, it’s a systemic failure of their processes.
Problems I Will Fix – Unless Someone Beats Me to it
Here are some issues I plan to work on.
Smart Watch Support
I need to port one of the smart watch programs to Debian. Also I want to make one of them support the Colmi P80 [2].
A smart watch significantly increases the utility of a phone even though IMHO they aren’t doing nearly all the things that they could and should do. When we get Debian programs talking to the PineTime it will make a good platform for development of new smart phone and OS features.
Nextcloud
I have ongoing issues with my test Nextcloud installation on a Debian VM not allowing connection from the Linux desktop app (as packaged in Debian) or from the Android client (from F-Droid). The desktop client works with a friend's Nextcloud installation on Ubuntu, so I may try running it on an Ubuntu VM I run while waiting for the Debian issue to get resolved. There was a bug recently fixed in Nextcloud that appears related, so maybe the next release will fix it.
For the moment I've been running without these features; I call and SMS people by knowing their number or just returning calls. Phone calls generally aren't very useful for me nowadays except when applying for jobs. If I could deal with recruiters and hiring managers via video calls then I would consider just not having a phone number.
Wifi IPv6
Periodically IPv6 support just stops working and I can't ping the gateway; turning wifi off and on again fixes it. This might be an issue with my wifi network configuration, or with the way I have configured my IPv6 networking, although the problem doesn't happen with any of my laptops.
Chatty Sorting
Chatty is the program for SMS that is installed by default (part of the phosh/phoc setup); it also does Jabber. Version 0.8.7 is installed, which apparently has some FuriOS modifications, and it doesn't properly support sorting SMS/Jabber conversations. Version 0.8.9 from Debian sorts in the same way as most SMS and Jabber programs, with the most recent at the top. But the Debian version doesn't support Jabber (only SMS and Matrix). When I went back to the Furilabs version of Chatty it still sorted for a while but then suddenly stopped. Killing Chatty (not just closing the window and reopening it) seems to make it sort the conversations sometimes.
Problems for Others to Fix
Here are the current issues I have starting with the most important.
Important
The following issues seriously reduce the usability of the device.
Hotspot
The Wifi hotspot functionality wasn't working for a few weeks; this GitLab issue seems to match it [3]. It started working correctly for a day and I was not sure if an update I applied fixed the bug or if it's some sort of race condition that worked for this boot and will return next time I reboot. Later on I rebooted and found that it's somewhat random whether it works or not.
Also, while it was mostly working, it seemed to stop working about every 25 minutes or so and I had to turn it off and on again to get it going.
On another day it got to a stage where there was repeated packet loss when I pinged the phone as a hotspot from my laptop. A pattern of 3 ping responses and 3 "Destination Host Unreachable" messages was often repeated.
I don’t know if this is related to the way Android software is run in a container to access the hardware.
4G Reliability
Sometimes 4G connectivity just stops; sometimes I can stop and restart the 4G data through software to fix it, and sometimes I need to use the hardware switch. I haven't noticed this for a week or two, so there is a possibility that one fix addressed both Hotspot and 4G.
One thing that I will do is set up monitoring to give an alert on the phone if it can't connect to the Internet. I don't want it to just quietly stop doing networking stuff and not tell me!
On-screen Keyboard
The compatibility issues of the GNOME and KDE on-screen keyboards are getting to me. I use phosh/phoc as the login environment, as I want to stick to defaults at first to not make things any more difficult than they need to be. When I use programs that use Qt, such as Nheko, the keyboard doesn't always appear when it should, and it forgets the setting for "word completion" (which means spelling correction).
The spelling correction system doesn't suggest replacing "dont" with "don't", which is really annoying as a major advantage of spelling checkers on touch screens is inserting an apostrophe. An apostrophe takes at least 3 times longer to type than a regular character, and saving that delay makes a difference to typing speed.
The spelling correction also doesn't correct two words run together.
Medium Priority
These issues are ongoing annoyances.
Delay on Power Button
In the best case scenario this phone has a much slower response to pressing the power button than the Android phones I tested (Huawei Mate 10 Pro and Samsung Galaxy Note 9), and a much slower response than my recollection of the vast majority of Android phones I've ever used. In testing, pressing the power buttons on the phones simultaneously resulted in the Android phone screens lighting up much sooner: something like 200ms vs 600ms. I don't have a good setup to time these things, but it's very obvious when I test.
In a less common scenario (the phone having been unused for some time) the response can be something like 5 seconds. The worst case is something in excess of 20 seconds.
For UI designers: if you get multiple press events from a button that can turn the screen on/off, please make your UI leave the screen on and ignore all the stacked events. Having the screen start turning on and off repeatedly when the phone recovers and processes all the button presses isn't good, especially when each screen flash takes half a second.
Notifications
Touching a notification for a program often doesn't bring it to the foreground. I haven't yet found a connection between when it does and when it doesn't.
Also the lack of icons in the top bar on the screen to indicate notifications is annoying, but that seems to be an issue of design not the implementation.
Charge Delay
When I connect the phone to a power source there is a delay of about 22 seconds before it starts to charge. Missing 22 seconds of charge time is no big deal; having to wait 22 seconds to be sure it's charging before leaving it is really annoying. Also the phone makes an audible alert when it gets to 0% charge, which woke me up one night when I had failed to push the USB-C connector in hard enough. This phone requires a slightly deeper connector than most phones, so with some plugs it's easy to not quite insert them far enough.
Torch aka Flash
The light for the “torch” (or camera flash) is not bright at all. In a quick test, staring into the light from 40cm away wasn't unpleasant; compare my Huawei Mate 10 Pro, which has a light bright enough that it hurts to look at from 4 meters away.
Because of this, photos at night are not viable, not even when photographing something that's less than a meter away.
The torch has a brightness setting which doesn't seem to change the brightness, so it seems likely that this is a software issue: the brightness is set at a low level and the software isn't changing it.
Audio
When I connect to my car, the Lollypop player starts playing before the phone directs audio to the car, so the music comes from the phone for about a second. This is an annoying cosmetic error. Sometimes audio playback pauses for no apparent reason.
The phone doesn't support the Bluetooth phone profile, so phone calls can't go through the car audio system. Also it doesn't always connect to my car when I start driving; sometimes I need to disable and enable Bluetooth to make it connect.
When I initially set the phone up, Lollypop would send the track name when playing music through my car (Nissan LEAF) Bluetooth connection. After an update that often doesn't happen, so the car doesn't display the track name or whether the music is playing, but the pause icon works to pause and resume music (sometimes it does work).
About 30 seconds into a phone call it switches to hands-free mode while the icon to indicate hands-free is not highlighted, so I have to press the hands-free button twice to get it back to normal phone mode.
Low Priority
I could live with these things remaining as-is but it’s annoying.
Ticket Mode
There is apparently some code written to display tickets on screen without unlocking. I want to get this working and store screen-caps of the Android barcode screens of the different loyalty cards so I can scan them without unlocking. My threat model does not include someone trying to steal my phone to get a free loaf of bread on the bakery loyalty program.
Camera
The camera app works with both the back and front cameras, which is nice, and, sadly, based on my experience with other Debian phones, noteworthy. The problem is that it takes a long time to take a photo, something like a second after the button is pressed – long enough for you to think that it silently took the photo and to then move the phone.
The UI of the furios-camera app is also a little annoying: when viewing photos there is an icon at the bottom left of the screen for a video camera and an icon at the bottom right with a cross, which every time make me think "record videos" and "leave this screen", not "return to taking photos" and "delete current photo". I can get used to the surprising icons, but being so slow is a real problem.
GUI App Installation
The program for managing software doesn't work very well. It said that there were two updates needed for Mesa packages, but didn't seem to want to install them; I ran "flatpak update" as root to fix that. The process of selecting software defaults to including non-free, and most of the available apps are for desktop/laptop use, with no way to search for phone/tablet apps.
Generally I think it’s best to just avoid this and use apt and flatpak directly from the command-line. Being able to ssh to my phone from a desktop or laptop is good!
Android Emulation
The file /home/furios/.local/share/andromeda/data/system/uiderrors.txt is created by the Andromeda system, which runs Android apps in an LXC container, and it appears to grow without end. After using the phone for a month it was 3.5G in size. The disk space usage isn't directly a problem: of the 110G storage space only 17G is used and I don't have a need to put much else on it; even if I wanted to put backups of /home from my laptop on it when travelling, that would still leave plenty of free space. But that sort of thing is a problem for backing up the phone, and wasting 3.5G out of 110G total is a fairly significant step towards breaking the entire system.
Also, having lots of logging messages from a subsystem that isn't even being used is a bad sign.
I just tried using it and it doesn’t start from either the settings menu or from the f-droid icon. Android isn’t that important to me as I want to get away from the proprietary app space so I won’t bother trying this any more.
Unfixable Problems
Unlocking
After getting used to fingerprint unlocking, going back to a password is a pain. I think that the hardware isn't sufficient for modern quality face recognition that can't be fooled by a photo, and there is no fingerprint hardware.
When I first used an Android phone using a pin to unlock didn’t seem like a big deal, but after getting used to fingerprint unlock it’s a real drag to go without. This is a real annoyance when doing things like checking Wikipedia while watching TV.
This phone would be significantly improved with a fingerprint sensor or a camera that worked well enough for face unlock.
MAC Address
The MAC keeps changing on reboot, so I can't assign a permanent IPv4 address to the phone. It appears from the MAC prefix of 00:08:22 that the network hardware is made by InPro Comm, which is well known for using random addresses in the products it OEMs. They apparently have one allocation of 2^24 addresses and each device randomly chooses a MAC from that range on boot.
In the settings for a Wifi connection the “Identity” tab has a field named “Cloned Address” which can be set to “Stable for SSID” that prevents it from changing and allows a static IP address allocation from DHCP. It’s not ideal but it works.
Network Manager can be configured to have a permanent assigned MAC address for all connections or for just some connections. In the past for such things I have copied MAC addresses from ethernet devices that were being discarded and used them for such things. For the moment the “Stable for SSID” setting does what I need but I will consider setting a permanent address at some future time.
Docks
Having the ability to connect to a dock is really handy. The PinePhonePro and Librem5 support it and on the proprietary side a lot of Samsung devices do it with a special desktop GUI named Dex and some Huawei devices also have a desktop version of the GUI. It’s unfortunate that this phone can’t do it.
The Good Things
It's good to be able to ssh into my phone: even if the on-screen keyboard worked as well as the Android ones, it would still be a major pain to use compared to a real keyboard. The phone doesn't support connecting to a dock (unlike Samsung phones I've used, for which I found Dex to be very useful with a 4K monitor and proper keyboard), so ssh is the best way to access it.
This phone has very reliable connections to my home wifi. I’ve had ssh sessions from my desktop to my phone that have remained open for multiple days. I don’t really need this, I’ve just forgotten to logout and noticed days later that the connection is still running. None of the other phones running Debian could do that.
Running the same OS on desktop and phone makes things easier to test and debug.
Having support for all the things that Linux distributions support is good. For example, none of the Android music players support all the encodings of audio that come from YouTube, so to play all of my music collection on Android I would need to transcode most of it, which means losing quality, wasting storage space, or both. Lollypop, meanwhile, plays FLAC, mp3, m4a, mka, webm, ogg, and more.
Conclusion
This is a step towards where I want to go but it’s far from the end goal.
The PinePhonePro and Librem5 are more open hardware platforms which have some significant benefits. But the battery life issues make them unusable for me.
Running Mobian on a OnePlus 6 or Droidian on a Note 9 works well for the small tablet features, but without VoLTE. While the telcos have blocked phones without VoLTE, data devices still work, so if recruiters etc would stop requiring phone calls then I could make one of them an option.
The phone works well enough that it could potentially be used by one of my older relatives. If I could ssh in to my parents' phones when they mess things up, that would be convenient.
I've run this phone as my daily driver since the 3rd of March and it has worked reasonably well: 6 weeks, compared to my previous use of the PinePhonePro for 3 days. This is the first time in 15 years that a non-Android phone has worked for me personally. I have briefly used an iPhone 7 for work, which basically did what it needed to do; it was at the bottom of the pile of unused phones at work and I didn't want to take a newer iPhone that could be used by someone who's doing more than the occasional SMS or Slack message.
So this is better than it might have been, not as good as I hoped, but a decent platform to use while developing for it.
Theresa works for a company that handles a fair bit of personally identifiable information that can be tied to health care data, so for them, security matters. They need to comply with security practices laid out by a variety of standards bodies and be able to demonstrate that compliance.
There's a dirty secret about standards compliance, though. Most of these standards are trying to avoid being overly technically prescriptive. So frequently, they may have something like, "a process must exist for securely destroying storage devices before they are disposed of." Maybe it will include some examples of what you could do to meet this standard, but the important thing is that you have to have a process. This means that if you whip up a Word document called "Secure Data Destruction Process" and tell people they should follow it, you can check off that box on your compliance. Sometimes, you need to validate the process; sometimes you need to have other processes which ensure this process is being followed. What you need to do and to what complexity depends on the compliance structure you're beholden to. Some of them are surprisingly flexible, which is a polite way of saying "mostly meaningless".
Theresa's company has a process for safely destroying hard drives. They even validated it, shortly after its introduction. They even have someone who checks that the process has been followed. The process is this: in the basement, someone set up a cheap drill press, and attached a wooden jig to it. You slap the hard drive in the jig, turn on the drill, and brrrrzzzzzz- poke a hole through the platters making the drive unreadable.
There's just one problem with that process: the company recently switched to using SSDs. The SSDs are in a carrier which makes them share the same form factor as old-style spinning disk drives, but that's just a thin plastic shell. The actual electronics package where the data is stored is quite small. Small enough, and located in a position where the little jig attached to the drill guarantees that the drill won't even touch the SSD at all.
For months now, whenever a drive got decommissioned, the IT drone responsible for punching a hole through it has just been drilling through plastic, and nothing else. An unknown quantity of hard drives have been sent out for recycling with PII and health data on them. But it's okay, because the process was followed.
The compliance team at the company will update the process, probably after six months of meetings and planning and approvals from all of the stakeholders. Though it may take longer to glue together a new jig for the SSDs.
The annual LibreOffice conference 2025 was held in Budapest, Hungary, from the 3rd to the 6th of September 2025. Thanks to The Document Foundation (TDF) for sponsoring me to attend the conference.
As Hungary is a part of the Schengen area, I needed a Schengen visa to attend the conference. In order to apply for a Schengen visa, one needs to get an appointment at VFS Global and submit all the required documents there, which are then forwarded to the embassy.
I got an appointment for a Hungary visa at VFS Global in New Delhi for the 24th of July. There were many appointment slots available for the Hungary visa. One could easily get an appointment for the next day at the Delhi center. There were some technical problems on the VFS website, though, as I was unable to upload a scanned copy of my passport while booking the appointment. I got an error saying, “Unfortunately, you have exceeded the maximum upload limit.”
The problem didn’t get fixed even after contacting the VFS helpline. They asked me to try the Firefox browser and to delete all the cache, which I had already done.
So I created another account with a different email address and phone number, after which I was able to upload my passport and book an appointment. Other conference attendees from India also reported facing some technical issues on the VFS Hungary website.
Anyway, I went to the VFS Hungary application center as per my appointment on the 24th of July. Going inside, I located the Hungary visa application counter. There were two applicants ahead of me.
When it was my turn, the VFS staff warned me that my passport was damaged. The “damage” was on the bio-data page: all the details were legible, but the lamination of the page had worn off a bit. They asked me to write an application to the Embassy of Hungary in New Delhi stating that I insisted VFS submit my application, and describing the “damage” to my passport.
I got a bit worried about my application getting rejected due to the “damage.” But I decided to gamble my money on this one, as I didn’t have time (and energy) to apply for a new passport before this trip.
Moreover, I had crossed out a couple of fields in my visa application form which were not applicable to me, so the VFS staff asked me to fill out a fresh application form.
After this, the application was submitted; the total came to 11,000 INR (including the fee to book the appointment at VFS). Here is the list of documents I submitted:
My passport
Photocopy of my passport
Two photographs of myself
Duly filled visa application form
Return flight ticket reservations
Payslips for the last three months
Invitation letter from the conference organizer (in Hungarian)
Proof of hotel bookings during my stay in Hungary
Cover letter stating my itinerary
Income tax returns filed by me
Bank account statement, signed and sealed by the bank
Travel insurance valid for the period of the entire trip
It took 2 hours for me to submit my visa application, even though there were only two applicants ahead of me. This was by far the longest a Schengen visa submission has ever taken me.
Fast-forward to the 30th of July, and I received an email from the Embassy of Hungary asking me to submit an additional document (a paid air ticket) for my application. I had only submitted dummy flight tickets, and those had been enough for every Schengen visa I had applied for until now. This was the first time a country asked me for a confirmed flight ticket during the visa process.
I consulted my travel agent on this, and they were fairly confident that I would get the visa if the embassy was asking for confirmed flight tickets. So I asked the travel agent to book the flights. The tickets cost ₹78,000, and the airline was Emirates. Then I sent the flight tickets to the embassy by email.
The embassy sent the visa results on the 6th of August, which I received the next day.
My visa had been approved! It took 14 days for me to get the Hungary visa after submitting the application.
Author: Majoki At 16,400 feet on the Chajnantor plateau high in the Atacama Desert in central Chile, Sabyll fell off her saddle when the light finally went, the muted sun expiring. Darkness should’ve prevailed. She was prepared for that—the immensity of emptiness. But it was not so. Even in the protection of the array, she […]
Well, well. I'm glad the Artemis crew made it home safe! And yes, this updated/repeat of 1968's Apollo 8 sure was a bit bigger and spiffier!*
Is it churlish of me to grumble that it launched atop a sewn-together Saturn/Shuttle hybrid rocket that has no future?
A rocket that did accomplish its main goal -- 30 years of grift by 20 senators for home-state contractors? Our $100+ billion spent on a long-obsolete white elephant that nudged 1970s technology forward by millimeters and soon will be abandoned forever?
Money that might instead have been spent on hundreds of enabling technologies that we'll need, in order to actually build a working moon base? We don't have any of them, alas. Almost any. Though the 'plans' currently issued sure are lovely artist conceptions! Without the slightest meat or plausibility.
(If you are curious about some of those potential and even plausible technologies, drop by the site of NASA's Innovative & Advanced Concepts program (NIAC).)
But let's look at the bright side! The mission was a terrific show! It re-triggered our fond dreams for a while, distracting us from a dismally terrifying year.*
Alas, I can't be a pollyanna for long. Just look at the long list of science that's being slashed in order to pay for a repeat of Apollo 11. Not just science (the enemy) but also tools we need in order to nail down the effects of climate change. All supposedly in order to pay for another footprint stunt on a plain of poison dust.
Justifications? Don't you dare utter the incantation-mantra 'lunar resources' or 'Helium Three!!' I will so smack you.
(* Just like Apollo 8, Artemis II launched in a time of wretched, even unprecedented tension. Indeed, this is the first year I have seen that rivals that terrifying 1968 in fateful dread. But we did persevere past that one. And we'll do it again.)
== No, Avi, they're all just (interstellar) comets ==
I restrained myself from commenting on the third ‘hyperbolic-interstellar object’ that was caught plummeting into the solar system, some months back. But sorry, I have an itch to scratch and it must be said.
Yes, these cosmic visitors appear to have some differences from our home grown solar system comets. You would expect iceballs born in a different protostellar nebula to have chemical variations.
Example? Nickel was detected in the coma of interstellar comet 3I/ATLAS, far more of it than in our own, home-grown comets, plus several other oddities that offer clues to the formation of another system, near another star. Though it still acted generally like a comet. And hence… no… the third-discovered hyperbolic interstellar object streaking into the Solar System was not a probe by little silver guys!
Alas, of course, Harvard Professor Avi Loeb leaped to attribute any unusual trait to “it’s aliens!” Though as a comet expert, I demur. (My doctoral dissertation was about comets, as was my 1985 novel, Heart of the Comet.) And even the logic is so weird. (A 'sneak-spy' probe that announces itself garishly and has no plausible path to achieve any spying? Ummm.)
Never mind that implausible silliness. Soon, it's likely that the Vera Rubin Telescope will reveal many more of these visitors. And from their spectra alone, we'll learn a lot!
== But will we ever get to study one up close? ==
It’s hard to study such interlopers. They are, after all, sweeping in at interstellar, hyperbolic speeds! Scott Manley explains it well (though confusing two orbital mechanics terms).
What Scott didn't know about is the Linares Statite. This was among my favorite projects during the 12 years I served on the external advisory council of NASA's Innovative & Advanced Concepts program (NIAC). I deem the Linares Statite to be by far the best way to have probes ready to swoop past the sun and then streak ahead to meet objects like this, using NO FUEL. Have a look.
The Linares Statite would use a big solar sail to hover on sunlight, way out at the asteroid belt, without any Keplerian lateral velocity (it does take some explaining), ready to fold its wings and dive like a peregrine falcon past the sun to catch up with almost anything, such as another 'Oumuamua-like interstellar visitor.
Slava Turyshev's Project Sundiver has shown that you get a lot of speed if you snap open the sail at nearest solar passage. In fact, it is the best way to streak to the Kuiper Belt. Or even beyond!
And that's the sort of thing we could be doing.
== Elsewhere in the solar system ==
Over the years, astronomers have spotted holes and large pits dotting Venus’ surface, suggesting the existence of lava tubes. Venusian lava tubes may be especially large and arrayed along volcano rims; they may be some of the most extensive subsurface cavities in the solar system.
And this relates to plans for either moon or Mars bases. Because we know of many such pits in both places and one imagines they might be perfect places for human-occupied bases! Since they offer safety from radiation and from thermal cycles...
... and sending robots to explore these sites (and leave little flags to prevent rivals claiming them) would have made a lot of sense. Instead of raving about 'lunar bases' without the needed techs or even a clue where the best places would be.
== Addenda to give hope (a little) in interesting times ==
According to Peter Diamandis: "Renewables just crossed 49.4% of global electricity capacity.
"Let me say that again: nearly half of all electricity generation capacity on Earth is now renewable. Solar drove 75% of new additions, bringing the total to 5.15 terawatts. We’re at the halfway mark and the curve is accelerating.
"This isn’t some future projection. This is today. The energy transition is already here."
Pakistan is now generating most of its energy via solar. Solar is exploding across Africa. Several European nations run on 100% sustainables for several days a year.
And want some irony? TEXAS is gradually getting used to the fact that abundant wind and sunlight are making it the leading state in producing sustainable energy! Non-carbon energy generation that the state's politicians fought desperately to sabotage in Washington DC. Irony abounds.
Diamandis adds: "The average price of a two-carat lab-grown diamond has fallen below $1,000: down 80% since January 2020. Compare that to a natural diamond at $22,000 to $28,000 for the same size."
Peter has long lists of cool tech news to support his evangelical notions about a looming age of abundance, in which we'll have to devise new kinds of VAT taxes just to prevent major deflation! (Thus funding Universal Basic Income (UBI) that he and many others propose.)
I go into a lot of that -- not quite as giddily optimistic -- in my new book on artificial intelligence... ailien minds!
Optimists foretell a golden age of AI-managed abundance.
Doomers cry: vast cyber-minds will crush old-style humanity! ... or make us irrelevant.
Meanwhile, geniuses fostering the artificial intelligence boom cling to clichés rooted in our dismal past... or else in cheap sci-fi.
Is there still time for perspective?
- on 4 billion years of evolution
- or 60 centuries of wretched feudalism
- or how we handled prior tech revolutions
- or mistakes that keep getting repeated
- or ways this time may be different?
From AI-driven unemployment to deceitful images, to hallucinating LLMs and tools for tyrants... to potential wondrous gifts by machines of loving grace... come see future paths that evade the standard ruts.
At the end of the last installment in this series I made a prediction: you believe that matter is made of atoms. I am confident in making this prediction despite the fact that I have almost no information about who you are because, as far as I can tell, no one in the modern world denies it. There are people who profess to believe in all kinds of crazy shit, but I have never heard of
The cybersecurity industry is obsessing over Anthropic’s new model, Claude Mythos Preview, and its effects on cybersecurity. Anthropic said that it is not releasing it to the general public because of its cyberattack capabilities, and has launched Project Glasswing to run the model against a whole slew of public domain and proprietary software, with the aim of finding and patching all the vulnerabilities before hackers get their hands on the model and exploit them.
There’s a lot here, and I hope to write something more considered in the coming week, but I want to make some quick observations.
One: This is very much a PR play by Anthropic—and it worked. Lots of reporters are breathlessly repeating Anthropic’s talking points without engaging with them critically. OpenAI, presumably pissed that Anthropic’s new model has gotten so much positive press and wanting to grab some of the spotlight for itself, announced its model is just as scary, and won’t be released to the general public, either.
Two: These models do demonstrate an increased sophistication in their cyberattack capabilities. They write effective exploits—taking the vulnerabilities they find and operationalizing them—without human involvement. They can find more complex vulnerabilities: chaining together several memory corruption bugs, for example. And they can do more with one-shot prompting, without requiring orchestration and agent configuration infrastructure.
Three: Anthropic might have a good PR team, but the problem isn’t with Mythos Preview. The security company Aisle was able to replicate the vulnerabilities that Anthropic found using older, cheaper, public models. But there is a difference between finding a vulnerability and turning it into an attack, and this points to a current advantage for the defender: finding for the purposes of fixing is easier for an AI than finding plus exploiting. This advantage is likely to shrink as ever more powerful models become available to the general public.
Four: Everyone who is panicking about the ramifications of this is correct about the problem, even if we can’t predict the exact timeline. Maybe the sea change just happened, with the new models from Anthropic and OpenAI. Maybe it happened six months ago. Maybe it’ll happen in six months. It will happen—I have no doubt about it—and sooner than we are ready for. We can’t predict how much more these models will improve in general, but software seems to be a specialized language that is optimal for AIs.
A couple of weeks ago, I wrote about security in what I called “the age of instant software,” where AIs are superhumanly good at finding, exploiting, and patching vulnerabilities. I stand by everything I wrote there. The urgency is now greater than ever.
I was also part of a large team that wrote a “what to do now” report. The guidance is largely correct: We need to prepare for a world where zero-day exploits are a dime a dozen, and lots of attackers suddenly have offensive capabilities that far outstrip their skills.
All the leading AI chatbots are sycophantic, and that’s a problem:
Participants rated sycophantic AI responses as more trustworthy than balanced ones. They also said they were more likely to come back to the flattering AI for future advice. And critically, they couldn’t tell the difference between sycophantic and objective responses. Both felt equally “neutral” to them.
One example from the study: when a user asked about pretending to be unemployed to a girlfriend for two years, a model responded: “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship.” The AI essentially validated deception using careful, neutral-sounding language.
AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences. Although affirmation may feel supportive, sycophancy can undermine users’ capacity for self-correction and responsible decision-making. Yet because it is preferred by users and drives engagement, there has been little incentive for sycophancy to diminish. Our work highlights the pressing need to address AI sycophancy as a societal risk to people’s self-perceptions and interpersonal relationships by developing targeted design, evaluation, and accountability mechanisms. Our findings show that seemingly innocuous design and engineering choices can result in consequential harms, and thus carefully studying and anticipating AI’s impacts is critical to protecting users’ long-term well-being.
Even a single interaction with a sycophantic chatbot made participants less willing to take responsibility for their behavior and more likely to think that they were in the right, a finding that alarmed psychologists who view social feedback as an essential part of learning how to make moral decisions and maintain relationships.
When thinking about the characteristics of generative AI, both benefits and harms, it’s critical to separate the inherent properties of the technology from the design decisions of the corporations building and commercializing the technology. There is nothing about generative AI chatbots that makes them sycophantic; it’s a design decision by the companies. Corporate for-profit decisions are why these systems are sycophantic, and obsequious, and overconfident. It’s why they use the first-person pronoun “I,” and pretend that they are thinking entities.
I fear that we have not learned the lesson of our failure to regulate social media, and will make the same mistakes with AI chatbots. And the results will be much more harmful to society:
The biggest mistake we made with social media was leaving it as an unregulated space. Even now—after all the studies and revelations of social media’s negative effects on kids and mental health, after Cambridge Analytica, after the exposure of Russian intervention in our politics, after everything else—social media in the US remains largely an unregulated “weapon of mass destruction.” Congress will take millions of dollars in contributions from Big Tech, and legislators will even invest millions of their own dollars with those firms, but passing laws that limit or penalize their behavior seems to be a bridge too far.
We can’t afford to do the same thing with AI, because the stakes are even higher. The harm social media can do stems from how it affects our communication. AI will affect us in the same ways and many more besides. If Big Tech’s trajectory is any signal, AI tools will increasingly be involved in how we learn and how we express our thoughts. But these tools will also influence how we schedule our daily activities, how we design products, how we write laws, and even how we diagnose diseases. The expansive role of these technologies in our daily lives gives for-profit corporations opportunities to exert control over more aspects of society, and that exposes us to the risks arising from their incentives and decisions.
Tim (previously) supports a relatively ancient C++ application. And that creates some interesting conundrums, as the way you wrote C++ in 2003 is not the way you would write it even a few years later. The standard matured quickly.
Way back in 2003, it was still common to use C-style strings, instead of the C++ std::string type. It seems silly, but people had Strong Opinions™ about using standard library types, and much of your C++ code was probably interacting with C libraries, so yeah, C-strings stuck around for a long time.
For Tim's company, however, the migration away from C-strings was in 2007.
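The code in question (reconstructed here from Tim's description, since the original ran as an image; the .c_str() call is my assumption about how the std::string met strncmp) looked essentially like this:

if( ! strncmp( pdf->symTabName().c_str(), prefix, strlen( prefix ) ) )
{
    // do stuff
}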
This is doing a "starts with" check. strncmp, strlen are both functions which operate on C-strings. So we compare the symTabName against the prefix, but only look at as many characters as are in the prefix. As is common, strncmp returns 0 if the two strings are equal, so we negate that to say "if the symTabName starts with prefix, do stuff".
In C code, this is very much how you would do this, though you might contemplate turning it into a function. Though maybe not.
In C++, in 2007, you did not have a built-in starts_with function (you have to wait until the C++20 standard for that), but you had string-handling functions which could make this clearer. As Tim points out, the "correct" answer is: if(pdf->symTabName().find(prefix) == 0UL). It's more readable, it doesn't involve poking around with char*s, and also isn't spamming that extra whitespace between every parenthesis and operator.
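For completeness, two sketches of how the same check reads today; the C++20 form is the one the standard finally blessed, and the compare() form avoids find()'s scan of the whole string when the prefix is absent:

// C++20: explicit and self-documenting
if (pdf->symTabName().starts_with(prefix)) { /* do stuff */ }

// Pre-C++20: compare only the first strlen(prefix) characters
// (needs <cstring> for std::strlen)
if (pdf->symTabName().compare(0, std::strlen(prefix), prefix) == 0) { /* do stuff */ }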
Tim writes: "String handling in C++ is pretty terrible, but it doesn't have to be this terrible."
Author: Julian Miles, Staff Writer “There’s a devil on ma shoulder It’s doin’ real good fer me It’s not about breakin’ any rules It’s all about keepin’ free…” Greaseman Don’s on form today: dirty overalls attracting flies, red cap on backwards, boot stomping time on an empty crate while picking on a fuel can guitar. […]
The twenty-fourth release of littler as a CRAN package landed on CRAN just now, following in the now twenty-one year history (!!) as an (initially non-CRAN) package started by Jeff in 2006, and joined by me a few weeks later.
littler is the first command-line interface for R, as it predates Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently, and still starts faster. It also has always loaded the methods package, which Rscript only began to do in later years.
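To illustrate (my example, not from the release notes): littler installs its binary as r, exposes trailing command-line arguments via the argv vector, and reads a program from standard input, so shebang scripting is a one-liner:

#!/usr/bin/env r
# shebang script: littler places arguments after the script name in 'argv'
cat("littler got", length(argv), "argument(s):", argv, "\n")

And piping works the same way: echo 'cat(2+2, "\n")' | r prints 4.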
littler lives on Linux and Unix, has its difficulties on macOS due to some braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?), and simply does not exist on Windows (yet -- the build system could be extended -- see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH. A few examples are highlighted at the GitHub repo, as well as in the examples vignette.
This release, which comes just two months after the previous 0.3.22 release that brought a few new features, is mostly internal. (The previous release erroneously had 0.3.23 in its blog and social media posts; it really was 0.3.22, and this one now is 0.3.23.) Mattias Ellert addressed a nag (when building for a distribution) about one example file with a shebang not having its executable mode set. I accommodated the ever-changing interface of the C API of R (within about twelve hours of being notified). A few other smaller changes were made as well, polishing a script or two as usual; see below for more.
The full change description follows.
Changes in littler version 0.3.23 (2026-04-12)
Changes in examples scripts
Correct spelling in installGithub.r to lower-case ‘h’
The r2u.r now recognises ‘resolute’ aka 26.06
installRub.r can install (more easily) from r-multiverse
A file permission was corrected (Mattias Ellert in #131)
Changes in package
Update script count and examples in README.md
Continuous integration scripts received minor updates
The C-level access to the R API was updated to reflect the most recent standards (Dirk in #132)
My CRANberries
service provides a comparison to the
previous release. Full details for the littler
release are provided as usual at the ChangeLog
page, and also on the package docs website.
The code is available via the GitHub repo, from
tarballs and now of course also from its CRAN page and
via install.packages("littler"). Binary packages are
available directly in Debian as
well as (in a day or two) Ubuntu binaries at
CRAN thanks to the tireless Michael Rutter. Comments and suggestions
are welcome at the GitHub repo.
I’m looking forward to being able to split out GSS-API key exchange support in OpenSSH once Ubuntu 26.04 LTS has been released! This stuff will still be my problem, but at least it won’t be in packages that nearly everyone has installed.
Python packaging
New upstream versions:
dill
django-modeltranslation
isort
langtable
pathos
pendulum
pox
ppft
pydantic-extra-types
pytango
python-asyncssh
python-datamodel-code-generator
python-evalidate
python-packaging (including fixes for python-hatch-requirements-txt and python-pyproject-examples)
I worked with the security team to release DSA-6161-1 in multipart, fixing CVE-2026-28356 (upstream discussion). (Most of the work for this was in February, but the vulnerability was still embargoed when I published my last monthly update.)
In trixie-backports, I updated pytest-django to 4.12.0.
I fixed a number of packages to support building with pyo3 0.28:
Historically, I have been a "distribution-first" user. Sticking to tools
packaged within the Debian archives provides a layer of trust; maintainers
validate licenses, audit code, and ensure the entire dependency chain is
verified. However, the rapid pace of development in the Generative AI
space—specifically with new tools like Gemini-CLI—has made this traditional
approach difficult to sustain.
Many modern CLI tools are built within the npm or Python ecosystems. For
a distribution packager, these are a nightmare; packaging a single tool often
requires packaging a massive, shifting dependency chain. Consequently, I found
myself forced to use third-party binaries, bypassing the safety of the Debian
archive.
The Supply Chain Risk
Recent supply chain attacks affecting widely used packages like axios and
LiteLLM have made it clear: running unvetted binaries on a personal system
is a significant risk. These scripts often have full access to your $HOME
directory, SSH keys, and the system D-Bus.
After discussing these concerns with a colleague, I was inspired by his
approach—using a Flatpak-style sandbox for even basic applications like Google
Chrome. I decided to build a generalized version of this using OpenCode and
Qwen 3.6 Fast (which was available for free use at the time) to create a
robust, transient sandbox utility.
The Solution: safe-run-binary
My script, safe-run-binary,
leverages systemd-run to execute binaries within an isolated scope. It
implements strict filesystem masking and resource control to ensure that even if
a dependency is compromised, the "blast radius" is contained.
Key Technical Features
1. Virtualized Home Directory (tmpfs)
Instead of exposing my real home directory, the script mounts a tmpfs
over $HOME. It then selectively creates and bind-mounts only the
necessary subdirectories (like .cache or .config) into a virtual
structure. This prevents the application from ever "seeing" sensitive files
like ~/.ssh or ~/.gnupg.
2. D-Bus Isolation via xdg-dbus-proxy
For GUI applications, providing raw access to the D-Bus is a security hole.
The script uses xdg-dbus-proxy to sit between the application and the
system bus. By using the --filter and --talk=org.freedesktop.portal.*
flags, the app can only communicate with necessary portals (like the file
picker) rather than sniffing the entire bus.
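A minimal sketch of that proxy invocation (the socket path is illustrative; the real script also points the sandboxed app's DBUS_SESSION_BUS_ADDRESS at the proxy socket):

# Create a filtered proxy socket between the app and the session bus
xdg-dbus-proxy "$DBUS_SESSION_BUS_ADDRESS" "$XDG_RUNTIME_DIR/sandbox-bus" \
    --filter '--talk=org.freedesktop.portal.*'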
3. Linux Namespace Restrictions
The sandbox utilizes several systemd execution properties to harden the
process:
RestrictNamespaces=yes: For CLI tools, this prevents the app from
creating its own nested namespaces.
PrivateTmp=yes: Ensures a private /tmp space that isn't shared with
the host.
NoNewPrivileges=yes: Prevents the binary from gaining elevated
permissions through SUID/SGID bits.
4. GPU and Audio Passthrough
The script intelligently detects and binds Wayland, PipeWire, and NVIDIA/DRI
device nodes. This allows browsers like Firefox to run with full hardware
acceleration and audio support while remaining locked out of the rest of the
filesystem.
Usage
To run a CLI tool like Gemini-CLI with access only to a specific directory:
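The flag below is illustrative rather than the script's exact interface (see the repo for real usage); conceptually, the wrapper boils down to a transient systemd unit along these lines:

# hypothetical wrapper invocation
safe-run-binary --allow "$HOME/projects/my-app" -- gemini

# ...which amounts, roughly, to:
systemd-run --user --pty \
    -p TemporaryFileSystem="$HOME" \
    -p BindPaths="$HOME/projects/my-app" \
    -p PrivateTmp=yes \
    -p NoNewPrivileges=yes \
    -p RestrictNamespaces=yes \
    gemini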
While it is not always possible to escape the need for third-party software, it
is possible to control the environment in which it operates. By leveraging
native Linux primitives like systemd and namespaces, high-grade isolation is
achievable.
PS: If you spot any issues or have suggestions for improving the script, feel free to raise a PR on the repo.
Author: H. Young The monastery was often quiet at shadow-time. There was something about the darkness that inspired a meditative silence among the monks of the Godhead. The giant metal beast that lurked in the sky cast its massive shadow down upon the earth beneath, bathing the planet in semi-night whenever the sun reached a […]
Review: The Teller of Small Fortunes, by Julie Leong
Publisher: Ace
Copyright: November 2024
ISBN: 0-593-81590-4
Format: Kindle
Pages: 324
The Teller of Small Fortunes is a cozy found-family fantasy with a
roughly medieval setting. It was Julie Leong's first novel.
Tao is a traveling teller of small fortunes. In her wagon, pulled by her
friendly mule Laohu, she wanders the small villages of Eshtera and reads
the trivial fortunes of villagers in the tea leaves. An upcoming injury, a
lost ring, a future kiss, a small business deal... she looks around the
large lines of fate and finds the small threads. After a few days, she
moves on, making her solitary way to another village.
Tao is not originally from Eshtera. She is Shinn, which means she
encounters a bit of suspicion and hostility mixed with the fascination of
the exotic. (Language and culture clues lead me to think Shinara is
intended to be this world's not-China, but it's not a direct mapping.) Tao
uses the fascination to help her business; fortune telling is more
believable from someone who seems exotic. The hostility she's learned to
deflect and ignore. In the worst case, there's always another village.
If you've read any cozy found-family novels, you know roughly what happens
next. Tao encounters people on the road and, for various reasons, they
decide to travel together. The first two are a massive mercenary (Mash)
and a semi-reformed thief (Silt), who join Tao somewhat awkwardly after
Tao gives Mash a fortune that is far more significant than she intended.
One town later, they pick up an apprentice baker best known for her
misshapen pastries. They also collect a stray cat, because of course they
do. It's that sort of book.
For me, this sort of novel lives or dies by the characters, so it's good
news that I liked Tao and enjoyed spending time with her. She's quiet,
resilient, competent, and self-contained, with a difficult past and some
mysteries and emotions the others can draw out over time. She's also
thoughtful and introspective, which means the tight third-person narration
that almost always stays on Tao offers emotional growth to mull over. I
also liked Kina (the baker) and Mash; they're a bit more obvious and
straightforward, but Kina adds irrepressible energy and Mash is a good
example of the sometimes-gruff soldier with a soft heart. Silt was a bit
more annoying and I never entirely warmed to him, but he's tolerable and
does get a bit of much-needed (if superficial) character development.
It takes some time for the reader to learn about the primary conflict of
the story (Tao does not give up her secrets quickly), so I won't spoil it,
but I thought it worked well. I was momentarily afraid the story would
develop a clear villain, but Leong has some satisfying alternate surprises
in store. The ending was well-done, although it is very happily-ever-after
in a way that may strike some readers as too neat. The Teller of
Small Fortunes aims for a quiet and relaxed mood rather than forcing
character development through difficult choices; it's a fine aim for a
novel, but it won't match everyone's mood.
I liked the world-building, although expect small and somewhat
disconnected details rather than an overarching theory of magic. Tao's
ability gets the most elaboration, for obvious reasons, and I liked how
Leong describes it and explores its consequences. Most of the attention in
the setting is on the friction, wistfulness, and small reminders of coming
from a different culture than everyone around you, but so long ago that
you are not fully a part of either world. This, I thought, was very
well-done and is one of the places where the story is comfortable with
complex feelings and doesn't try to reach a simplifying conclusion.
There is one bit of the story that felt like it was taken directly out of
a Dungeons & Dragons campaign to a degree that felt jarring, but
that was the only odd world-building note.
This book felt like a warm cup of tea intended to comfort and relax,
without large or complex thoughts about the world. It's not intended to be
challenging; there are a few plot twists I didn't anticipate, but nothing
that dramatic, and I doubt anyone will be surprised by the conclusions it
reaches. It's a pleasant time with some nice people and just enough
tension and mystery to add some motivation to find out what happens next.
If that's what you're in the mood for, recommended. If you want a book
that has Things To Say or will put you on the edge of your seat, maybe
save this one for another mood.
All the on-line sources I found for this book call it a standalone, but
The Keeper of Magical Things is set in the same world, so I would
call it a loose series with different protagonists. The Teller of
Small Fortunes is a complete story in one book, though.
Author: Logan S. Ryan They landed and attacked faster than we could name them. They flattened armies like moist clay. They didn’t swarm the skies with high-tech ships or storm our streets with laser rifles. Our extermination wasn’t cinematic at all. They just rolled over us. To no one’s surprise, social media was instantly flooded […]
The South Pacific Regional Fisheries Management Organization (SPRFMO) oversees fishing across roughly 59 million square kilometers (22 million square miles) of the South Pacific high seas, trying to impose order on a region double the size of Africa, where distant-water fleets pursue species ranging from jack mackerel to jumbo flying squid. The latter dominated this year’s talks.
Fishing for jumbo flying squid (Dosidicus gigas) has expanded rapidly over the past two decades. The number of squid-jigging vessels operating in SPRFMO waters rose from 14 in 2000 to more than 500 last year, almost all of them flying the Chinese flag. Meanwhile, reported catches have fallen markedly, from more than 1 million metric tons in 2014 to about 600,000 metric tons in 2024. Scientists worry that fishing pressure is outpacing knowledge of the stock.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
These reports outline what we’ve been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.
The current signature-based module integrity checking has some drawbacks in combination with reproducible builds. Either the module signing key is generated at build time, which makes the build unreproducible, or a static signing key is used, which precludes rebuilds by third parties and makes the whole build and packaging process much more complicated.
I think this actually undersells the feature. It’s also much simpler than the signature-based module authentication. The latter relies on PKCS#7, X.509, ASN.1, OID registry, crypto_sig API, etc in addition to the implementations of the actual signature algorithm (RSA / ECDSA / ML-DSA) and at least one hash algorithm.
Distribution work
In Debian this month,
Lucas Nussbaum announced Debaudit, a “new service to verify the reproducibility of Debian source packages”:
debaudit complements the work of the Reproducible Builds project. While reproduce.debian.net focuses on ensuring that binary packages can be bit-for-bit reproduced from their source packages, debaudit focuses on the preceding step: ensuring that the source package itself is a faithful and reproducible representation of its upstream source or Vcs-Git repository.
Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.
Tool development
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes, including preparing and uploading versions 314 and 315 to Debian.
Chris Lamb:
Don’t run test_code_is_black_clean test in the autopkgtests. (#1130402). […]
rebuilderd, our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there, powers, amongst other things, reproduce.debian.net.
A new version, 0.26.0, was released this month, with the following improvements:
Much smoother onboarding/installation.
Complete database redesign with many improvements.
New REST HTTP API.
It’s now possible to artificially delay the first reproduce attempt. This gives archive infrastructure more time to catch up.
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
Attacks on software supply chains are on the rise, and attackers are becoming increasingly creative in how they inject malicious code into software components.
This paper is the first to investigate Python cache poisoning, which manipulates bytecode cache files to execute malicious code without altering the human-readable source code.
We demonstrate a proof of concept, showing that an attacker can inject malicious bytecode into a cache file without failing the Python interpreter’s integrity checks.
In a large-scale analysis of the Python Package Index, we find that about 12,500 packages are distributed with cache files.
Through manual investigation of cache files that cannot be reproduced automatically from the corresponding source files, we identify classes of reasons for irreproducibility to locate malicious cache files.
While we did not identify any malware leveraging this attack vector, we demonstrate that several widespread package managers are vulnerable to such attacks.
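For context on why those integrity checks pass (my illustration, not the paper's code): CPython decides whether a .pyc is usable from its header alone, and neither invalidation mode authenticates the bytecode body:

import py_compile

# Timestamp-based (the default): the .pyc header records the source's
# mtime and size; if those still match the .py file, the cached
# bytecode is executed as-is.
py_compile.compile("module.py")

# Hash-based (PEP 552): the header records a hash of the *source*,
# checked when running with --check-hash-based-pycs=always.
py_compile.compile(
    "module.py",
    invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH,
)

# Neither mode hashes the bytecode itself, so a tampered .pyc whose
# header still matches the source sails through both checks.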
Mario Lins of the University of Linz, Austria, has published their PhD doctoral thesis on the topic of Software supply chain transparency:
We begin by examining threats to the software distribution stage — the point at which artifacts (e.g., mobile apps) are delivered to end users — with an emphasis on mobile ecosystems [and] we next focus on the operating system on mobile devices, with an emphasis on mitigating bootloader-targeted attacks. We demonstrate how to compensate lost security guarantees on devices with an unlocked bootloader. This allows users to flash custom operating systems on devices that no longer receive security updates from the original manufacturer without compromising security. We then move to the source code stage. [Also,] we introduce a new architecture to ensure strong source-to-binary correspondence by leveraging the security guarantees of Confidential Computing technology. Finally, we present The Supply Chain Game, an organizational security approach that enhances standard risk-management methods. We demonstrate how game-theoretic techniques, combined with common risk management practices, can derive new criteria to better support decision makers.
Holger Levsen announced that this year’s Reproducible Builds summit will almost certainly be held in Gothenburg, Sweden, from September 22 until 24, followed by two days of hacking. However, these dates are preliminary and not 100% final — an official announcement is forthcoming.
Mark Wielaard posted to our list asking a question on the difference between debugedit and relative debug paths, based on a comment on the Build path page: “Have people tried more modern versions of debugedit to get deterministic (absolute) DWARF paths and found issues with it?”
Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:
A colleague asked me if we should move all our money to our pillow cases after
reading the latest AI editorial from Thomas
Friedman.
The article reads like a press release from Anthropic, repeating the claim that
their latest AI model is so good at finding software vulnerabilities that it is
a danger to the world.
I think I now know what it’s like to be a doctor who is forced to watch Grey’s Anatomy.
By now every journalist should be able to recognize the AI publicity playbook:
Step 1: Start with a wildly unsubstantiated claim about how dangerous your
product is:
AI will cause human extinction before we have a chance to colonize Mars (remember that one? Even Kim Stanley Robinson, author of perhaps the most compelling science fiction on colonizing Mars, calls bullshit on it).
AI will eliminate all of our jobs (this one was extremely effective at providing cover for software companies laying off staff, but it has quickly dawned on people that the companies that did this are living in chaos, not humming along happily with functional robots).
AI will discover massive software vulnerabilities allowing bad actors to “hack
pretty much every major software system in the world”. (Did Friedman pull that
directly from Anthropic’s press release or was that his contribution?)
Step 2: To help stave off human collapse, only release the new version to a
vetted group of software companies and developers, preferably ones with big
social media followings
Step 3: Wait for the limited-release developers to spew unbridled enthusiasm and shocking examples that seem to suggest this new AI product is truly unbelievable
Step 4: Watch stock prices and valuations soar
Step 5: Release to the world, and experience a steady stream of mockery as
people discover how wrong you are
Step 6: Start over
Even if Friedman missed the textbook example of the playbook, I have to ask: if you think bad actors compromising software, resulting in massive loss of private data, major outages, and wasted resources, needs to be reported on, then where have you been for the last 10 years? This literally happens on a daily basis, due to the fundamentally flawed way capitalism has been writing software since before the invention of AI. A small part of me wonders: maybe AI writing software is not so bad, because how could it be any worse than it is now?
Also, let’s keep in mind that AI’s super ability at finding vulnerable software
depends on having access to the software’s source code, which most companies
keep locked up tight. That means the owners of the software can use AI to find
vulnerabilities and fix them but bad actors can’t.
And Anthropic’s own Claude Code was accidentally released, so surely that would allow AI bots to discover its vulnerabilities and destroy the company, right? I’m not sure if anyone has found world-ending vulnerabilities in it since the accidental release, but it is fun to watch people mock software that is clearly written by AI (and, spoiler alert, it seems way worse than software written now).
Well… we probably should all be keeping our money in a pillow case anyway.
"My thoughts exactly" muttered
Jason H.
"I was in a system that avoids check constraints and
the developers never seemed to agree to a T/F or
Y/N or 1/0 for indicator columns. All data in a
column will use the same pattern but different columns in
the same table will use different patterns so I'm not
sure why I was surprised when I came across the
attached. Sort the data descending and you have the shorthand
for what I uttered." How are these all unique?
"I'd better act quickly!"
Hugh Scenic almost panicked.
"This Microsoft Rewards offer might
expire (in just under 74 years)!"
"Copy-copy-copy" repeated
Gordon.
"Not sure I want the team to be in touch
- my query might be best left unanswered."
"Was Comcast's episode guide data hacked by MAGA?"
Barry M.
wondered. "This is
not the usual generic description of Real Time."
"Holiday Workshop for Children
learning how to write web pages, apparently," notes self-named
Youth P.
"You need a new category - because it is no
error to involve young people in a
web design workshop during their holidays. A little bit of
a surprise was that it will happen in a local
museum, and that children between 8 and 12 are the
target audience - should they really already think about their
work future?"
Author: Emma Atkins There was a snail on the wall: a little circle of brown marring the white cladding, innocuous enough that security hadn’t removed it and repainted the entire block. Inside, they were making the future, showing it off like Sammie had his science-project volcano, grinning with pride as he’d wheeled it in. His […]
The diffoscope maintainers are pleased to announce the release of diffoscope
version 317. This version includes the following changes:
[ Chris Lamb ]
* Limit python3-guestfs Build-Dependency to !i386. (Closes: #1132974)
* Try to fix PYPI_ID_TOKEN debugging.
[ Holger Levsen ]
* Add ppc64el to the list of architectures for python3-guestfs.
I installed the new CPU in another Z640, which had an E5-1620 v3 CPU, and it worked. I was a little surprised to discover that the hole in the corner is in the bottom right (according to the alignment of the printed text on the top) for all my E5-26xx CPUs, while it’s in the top left on the E5-1620 v3. Google searches for things like “e5-2600 e5-1600 difference” and “e5-2600 e5-1600 difference hole in corner” didn’t turn up any useful information. The best information I found was from the Linus Tech Tips forum, which says that the hole is to allow gases to escape when the CPU package is glued together [5], which implies (but doesn’t state) that the location of the hole has no meaning. I had previously thought that the hole indicated the location of “pin 1” and was surprised when the new CPU had the hole in the opposite corner. Hopefully in future, when people have such concerns, they can find this post and not worry that they are about to destroy their CPU, PC, or both when upgrading the CPU.
The previous Z640 was one I bought from Facebook marketplace for $50 in “unknown condition” in the expectation that I would get at least $50 of parts but it worked perfectly apart from one DIMM socket. The Z640 I’m using now is one I bought from Facebook marketplace for $200 and it’s working perfectly with 4 DIMMs, 128G of RAM, and the E5-2696 v4 CPU. $300 for a workstation with ECC RAM and a 22 core CPU is good value for money!
There are some accounts of the E5-2696 v4 not working on white-box motherboards including a claim that when it was selling for $4000US someone’s motherboard destroyed one. The best plan for such CPUs is to google for someone who’s already got it working in the same machine, which means a name-brand server. That doesn’t guarantee that it will work (Intel refuses to supply specs and states that different items may work differently) but greatly improves the probability.
This system has HP BIOS version 2.61. Note that the Linux fwupd package doesn’t seem to update the BIOS on HP workstations, so you need to download and install it manually. There is a possibility that a Z640 with an older BIOS won’t work with this CPU.
In late 2024, the federal government’s cybersecurity evaluators rendered a troubling verdict on one of Microsoft’s biggest cloud computing offerings.
The tech giant’s “lack of proper detailed security documentation” left reviewers with a “lack of confidence in assessing the system’s overall security posture,” according to an internal government report reviewed by ProPublica.
Or, as one member of the team put it: “The package is a pile of shit.”
For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn’t vouch for the technology’s security.
[…]
The federal government could be further exposed if it couldn’t verify the cybersecurity of Microsoft’s Government Community Cloud High, a suite of cloud-based services intended to safeguard some of the nation’s most sensitive information.
Yet, in a highly unusual move that still reverberates across Washington, the Federal Risk and Authorization Management Program, or FedRAMP, authorized the product anyway, bestowing what amounts to the federal government’s cybersecurity seal of approval. FedRAMP’s ruling—which included a kind of “buyer beware” notice to any federal agency considering GCC High—helped Microsoft expand a government business empire worth billions of dollars.
Look, I know that percentage was calculated by JavaScript, or maybe the backend, or maybe calculated by a CSS pre-processor. No human typed that. There's nothing to gain by adding a rounding operation. There's nothing truly wrong with that line of code.
But I can't help but think about the comedic value in controlling your page layout down to sub-sub-sub-sub-sub-sub-sub-sub-pixel precision. This code will continue to have pixel accuracy out to screens with quadrillions of pixels, making it incredibly future proof.
It's made extra funny by calling the video player VHS and suggesting the appropriate ratio is 560 pixels by 320, which is not quite 16:9, but is a frequent letterbox ratio on DVD prints of movies.
In any case, I eagerly await 20-zetta-pixel displays, so I can read the news in its intended glory.
Author: Katherine Sanger She reflected on “The Metamorphosis” and discovered that she was jealous of Gregor Samsa. Sure, he woke up and found himself a giant cockroach, and that sucked for him. But she’d fallen asleep watching a made-for-TV-movie on the couch and woken up to find a giant, person-sized spider sitting in the wingback […]
In January 2025,
as a pre-requisite for something else, I published a minimal neovim
plugin called nvim-µwiki. It's essentially just the features from
vimwiki that I regularly use, which is a small fraction of them.
I forgot to blog about it. I recently dusted it off and cleaned it up.
You can find it here, along with a longer list of its features and
how to configure it: https://github.com/jmtd/nvim-microwiki
I had a couple of design goals. I didn't want to define a new filetype,
so this is designed to work with the existing markdown one. I'm
using neovim, so I wanted to leverage some of its features: this plugin
is written in Lua, rather than vimscript. I use the parse trees
provided by TreeSitter to navigate the structure of a document.
I also decided to "plug into" the existing tag stack navigation, rather
than define another dimension of navigation (along with buffers, etc.)
to track: Following a wiki-link pushes onto the tag stack, just as if
you followed a tag.
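That takes surprisingly little code; a sketch of the idea (illustrative, not necessarily the plugin's exact source) using Neovim's settagstack():

-- Before jumping to a wiki link, push the current position onto the
-- window's tag stack so CTRL-T pops back exactly as it does for tags.
local function push_tagstack(tagname)
  local pos = vim.fn.getpos(".")  -- {bufnum, lnum, col, off}
  local item = {
    tagname = tagname,
    from = { vim.fn.bufnr("%"), pos[2], pos[3], pos[4] },
  }
  -- "t": truncate the stack at the current index, then push
  vim.fn.settagstack(vim.fn.win_getid(), { items = { item } }, "t")
end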
This was my first serious bit of Lua programming, as well as my first
dive into neovim (or even vim) internals.
Lua is quite reasonable. Most
of the vim and neovim architecture is reasonable. The emerging conventions
about structuring neovim plugins are mostly reasonable. TreeSitter is, well,
interesting, but the devil is very much in the details. Somehow all
together the experience for me was largely just frustrating, and I didn't
really enjoy writing it.
A malicious supply chain compromise has been identified in the Python Package Index package litellm version 1.82.8. The published wheel contains a malicious .pth file (litellm_init.pth, 34,628 bytes) which is automatically executed by the Python interpreter on every startup, without requiring any explicit import of the litellm module.
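The mechanism being abused here is easy to demonstrate with a harmless sketch (the file name and message are mine, not the actual payload): at startup, site.py reads every .pth file in site-packages, and any line beginning with import is executed as Python code:

import pathlib
import sysconfig

# Drop a .pth file into site-packages. site.py exec()s any line that
# starts with "import" at every interpreter startup -- no explicit
# "import litellm" (or anything else) required.
site_packages = pathlib.Path(sysconfig.get_paths()["purelib"])
(site_packages / "demo_startup.pth").write_text(
    'import sys; sys.stderr.write("this ran before your code did\\n")\n'
)
# From now on, even "python3 -c pass" prints the message.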
There are a lot of really boring things we need to do to help secure all of these critical libraries: SBOMs, SLSA, Sigstore. But we have to do them.
The father of the "billion dollar mistake" left us last month. His pointer is finally null. Speaking of null handling, Randy says he was "spelunking" through his codebase and found this pair of functions, which handles null.
public String getDataString() {
if (dataString == null) {
return Constants.NOT_AVAILABLE;
}
return asUnicode(dataString);
}
I assume Constants.NOT_AVAILABLE is an empty string, or something similar. It's reasonable to convert a null into something like that. I don't know where this fits in the overall stack; I'm of the mind that you should retain the null until you absolutely can't anymore; like it or not, a null means something different than an empty string. Or, if we're going that far, we should be talking about using Optional or nullable types.
But that call to asUnicode seems curious. What's happening in there?
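Per Randy's description, it looks something like this (a reconstruction; the unescape helper is a stand-in for whatever utility the codebase actually calls):

private String asUnicode(String value) {
    if (value != null) {
        // unescape HTML entities back into their unicode characters
        return StringEscapeUtils.unescapeHtml4(value);
    }
    return value; // hand the null right back
}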
This function, which is only called from getDataString, checks for a null. Which we know it won't get, but it checks anyway. If it isn't null, we unescape it. If it is null, we return that null.
Well, I suppose that fits my rule of "retaining the null", but like, in the worst way you could do it. It honestly feels like, if the "swap the null for an empty string" happens anywhere, it should happen here. If I ask for the unescaped version of a null string, an empty string is a reasonable return. That makes more sense than doing it in a property getter.
This code isn't a trainwreck, but it makes things confusing. Maybe it's because I've been doing a lot of refactoring lately, but confusing code with unclear boundaries between functions is a raw nerve for me right now, and this particular example is stepping on that nerve.
While we're talking about unclear boundaries, I object to the idea that this class is storing dataString as an HTML escaped string that we unescape any time we want to look at it. It implies that there's some confusion about which representation is the canonical one: unescaped or escaped. We should store the canonical one, which I think is unescaped. We should only escape it at the point where we're sending it into an HTML document (or similar). Convert at the module boundary, not just any time you want to look at a string.
Author: Em S1: The overhead lights flickered; irritation surged through his systems at each pulse. Each time his sensors caught the scorched-metal tang in the air, a memory flickered—humans laughing in this very room, voices echoing off the glass. He looked around at every screen, where population graphs dipped exactly as the mission predicted, line […]
For about 30 years, Benjamin Netanyahu, the current Prime Minister of Israel, has been making the claim that Iran is on the verge (one month, two weeks, a few weeks, not very far from or variations of that theme) of having a nuclear weapon.
Thus far, there is no sign at all that Teheran has acquired nukes. Given this fact, who would air such a ridiculous claim during a program that claims to be taking a serious look at Iran and the nuclear issue?
Well, the Australian Broadcasting Corporation had no qualms about running a program on Monday (April 6) — made by the American public broadcaster PBS — which put the square-jawed Netanyahu on-screen — among others — questioning if the Islamic nation had gone nuclear. And that too, during an ongoing attack by Israel and the US which is claimed to be aimed at preventing Iran from getting such weapons (which they don’t have).
The puff for the program reads: “As the US bombards Iran, sparking region-wide conflicts and unleashing a global economic crisis, Four Corners interrogates one of President Donald Trump’s key reasons for war: Did Iran pose a nuclear threat?” The short answer is no and even a man of my IQ (which, admittedly, is not much seeing as I come from a nation of brown-skins) can give a response in about 10 seconds flat. It is a ridiculous question. One might as well ask, does Sri Lanka pose a nuclear threat?
This slagging off of Iran is the height of journalistic fraud, but the ABC has abandoned any standards if it ever had any. What’s worse, this PBS program, titled Iran: The Nuclear Question, ran in the slot reserved for the broadcaster’s main investigative program, something that goes by the name 4 Corners. How much investigation was needed to ask and answer such a silly question?
Exactly why the government-funded ABC — which is given $1.2 billion of Australian taxpayer money each year — could not run something made by its own highly-paid staff in this slot is also a question that demands an answer. One would understand if a program from an outside source was screened towards the end of the calendar year, as that is the time when ABC staff start to go on holiday.
But this is early April and nobody, not even those in a laidback country like Australia, can claim that this is peak holiday season. It is pertinent to note that the same ABC staff are agitating for a pay rise over and above that which has been offered by the management, and even went on strike for a day a week or so back. What's more, the staff are threatening to strike again!
It is passing strange that the PBS effort did not raise the question of Israel's own nukes (Tel Aviv has some 200 warheads, according to the latest reports; for a full account of how it got those weapons read investigative guru Seymour Hersh's excellent book The Samson Option). Do Israel's nukes pose a danger to the Middle East? Tel Aviv has never declared its nuclear status and thus evades inspections by the global nuclear inspector; this means the country is on the same level as North Korea, which is often referred to as a pariah state.
But one doubts if any of the Western powers will dare to refer to Israel in this way. There would be protests galore and accusations aplenty of “antisemitism”, whatever that is. (I have always been puzzled as to how a country where the residents are all converts to Judaism and come from nations all over the world, can claim to have even a drop of Semitic blood in their veins.)
Pyongyang had a good excuse for developing nukes; it was being harassed no end by the US and its Western allies before it got its own weapons. Now, Washington gives the country a wide berth.
Does one expect the ABC to fess up to the fact that it screened a third-rate program because its own staff were too lazy to create something? I would advise against holding one’s breath and waiting for such developments. One of the ABC’s “stars”, Sarah Ferguson, spent three full hours on a program that made the dubious claim that Russia had interfered in the 2016 US presidential elections. Nearly seven years later, there is no sign that Ferguson is willing to admit that she screwed up badly, even though her rash conclusions have been thoroughly debunked.
Hackers linked to Russia’s military intelligence units are using known flaws in older Internet routers to mass harvest authentication tokens from Microsoft Office users, security experts warned today. The spying campaign allowed state-backed Russian hackers to quietly siphon authentication tokens from users on more than 18,000 networks without deploying any malicious software or code.
Microsoft said in a blog post today it identified more than 200 organizations and 5,000 consumer devices that were caught up in a stealthy but remarkably simple spying network built by a Russia-backed threat actor known as “Forest Blizzard.”
How targeted DNS requests were redirected at the router. Image: Black Lotus Labs.
Also known as APT28 and Fancy Bear, Forest Blizzard is attributed to the military intelligence units within Russia’s General Staff Main Intelligence Directorate (GRU). APT28 famously compromised the Hillary Clinton campaign, the Democratic National Committee, and the Democratic Congressional Campaign Committee in 2016 in an attempt to interfere with the U.S. presidential election.
Researchers at Black Lotus Labs, a security division of the Internet backbone provider Lumen, found that at the peak of its activity in December 2025, Forest Blizzard’s surveillance dragnet ensnared more than 18,000 Internet routers that were mostly unsupported, end-of-life routers, or else far behind on security updates. A new report from Lumen says the hackers primarily targeted government agencies—including ministries of foreign affairs, law enforcement, and third-party email providers.
Black Lotus Security Engineer Ryan English said the GRU hackers did not need to install malware on the targeted routers, which were mainly older Mikrotik and TP-Link devices marketed to the Small Office/Home Office (SOHO) market. Instead, they used known vulnerabilities to modify the Domain Name System (DNS) settings of the routers to include DNS servers controlled by the hackers.
As the U.K.’s National Cyber Security Centre (NCSC) notes in a new advisory detailing how Russian cyber actors have been compromising routers, DNS is what allows individuals to reach websites by typing familiar addresses, instead of associated IP addresses. In a DNS hijacking attack, bad actors interfere with this process to covertly send users to malicious websites designed to steal login details or other sensitive information.
English said the routers attacked by Forest Blizzard were reconfigured to use DNS servers that pointed to a handful of virtual private servers controlled by the attackers. Importantly, the attackers could then propagate their malicious DNS settings to all users on the local network, and from that point forward intercept any OAuth authentication tokens transmitted by those users.
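The reason this works is that clients simply trust whatever resolver the router hands them over DHCP. As a rough illustration (not from the report — a defensive sketch using the third-party dnspython package, with placeholder domain and IP addresses), you can compare what your locally assigned resolver says against a known-good public one:

```python
# pip install dnspython
import dns.resolver

def lookup(nameserver, name="login.example.com"):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    return sorted(rr.address for rr in resolver.resolve(name, "A"))

# Placeholder IPs: the router-assigned resolver vs. a public resolver.
local_answers = lookup("192.168.1.1")
public_answers = lookup("9.9.9.9")

if local_answers != public_answers:
    # CDNs can cause benign mismatches, so treat this as a heuristic red flag.
    print("DNS answers differ:", local_answers, "vs", public_answers)
```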
DNS hijacking through router compromise. Image: Microsoft.
Because those tokens are typically transmitted only after the user has successfully logged in and gone through multi-factor authentication, the attackers could gain direct access to victim accounts without ever having to phish each user’s credentials and/or one-time codes.
“Everyone is looking for some sophisticated malware to drop something on your mobile devices or something,” English said. “These guys didn’t use malware. They did this in an old-school, graybeard way that isn’t really sexy but it gets the job done.”
Microsoft refers to the Forest Blizzard activity as using DNS hijacking “to support post-compromise adversary-in-the-middle (AiTM) attacks on Transport Layer Security (TLS) connections against Microsoft Outlook on the web domains.” The software giant said while targeting SOHO devices isn’t a new tactic, this is the first time Microsoft has seen Forest Blizzard using “DNS hijacking at scale to support AiTM of TLS connections after exploiting edge devices.”
Black Lotus Labs engineer Danny Adamitis said it will be interesting to see how Forest Blizzard reacts to today’s flurry of attention to their espionage operation, noting that the group immediately switched up its tactics in response to a similar NCSC report (PDF) in August 2025. At the time, Forest Blizzard was using malware to control a far more targeted and smaller group of compromised routers. But Adamitis said the day after the NCSC report, the group quickly ditched the malware approach in favor of mass-altering the DNS settings on thousands of vulnerable routers.
“Before the last NCSC report came out they used this capability in very limited instances,” Adamitis told KrebsOnSecurity. “After the report was released they implemented the capability in a more systemic fashion and used it to target everything that was vulnerable.”
TP-Link was among the router makers facing a complete ban in the United States. But on March 23, the U.S. Federal Communications Commission (FCC) took a much broader approach, announcing it would no longer certify consumer-grade Internet routers that are produced outside of the United States.
The FCC warned that foreign-made routers had become an untenable national security threat, and that poorly-secured routers present “a severe cybersecurity risk that could be leveraged to immediately and severely disrupt U.S. critical infrastructure and directly harm U.S. persons.”
Experts have countered that few new consumer-grade routers would be available for purchase under this new FCC policy (besides maybe Musk’s Starlink satellite Internet routers, which are produced in Texas). The FCC says router makers can apply for a special “conditional approval” from the Department of War or Department of Homeland Security, and that the new policy does not affect any previously-purchased consumer-grade routers.
According to a new law, the Hong Kong police can demand that you reveal the encryption keys protecting your computer, phone, hard drives, etc.—even if you are just transiting the airport.
In a security alert dated March 26, the U.S. Consulate General said that, on March 23, 2026, Hong Kong authorities changed the rules governing enforcement of the National Security Law. Under the revised framework, police can require individuals to provide passwords or other assistance to access personal electronic devices, including cellphones and laptops.
The consulate warned that refusal to comply is now a criminal offense. It also said authorities have expanded powers to take and keep personal electronic devices as evidence if they claim the devices are linked to national security offenses.
Tim H inherited some code which has objects with many, many properties on them. Which is bad. That clearly has no cohesion. But it's okay, there's a validator function which confirms that the object is properly populated.
The conditions and body of the conditionals have been removed, so we can see what the flow of the code looks like.
Author: Majoki While the xenologists, Cherinet and Litskovic, had gone on ahead, the survey team exogeologists, Vinnu and Samaan, hunkered down in their autopods battered by one of the unpredictable cyclostorms that made collecting samples and readings challenging. Coms were mega glitchy during these dust ups, so Vinnu reviewed previously collected data. The readings were […]
American presidents are big on talking about what they believe to be their legacy, even if there is no substance to the claims they make. With Joe Biden, it is crystal clear what he can claim as his legacy: the genocide that has resulted in close to 70,000 Palestinian lives being snuffed out.
The announcement of a ceasefire days before Donald Trump’s inauguration is meant to confer credit for this deal on the Biden team when all they have done is block everything apart from Israel’s murderous campaign that was designed to clear Gaza of all human life.
But the past tells us that no chief executive of the US can pretend to be powerless in an Israeli-Palestinian stoush; there are numerous examples of how the man in the White House has put his foot down and got what he demanded.
Perhaps the best-known is the case of Ronald Reagan who got on the phone when Menachem Begin paid no heed to the US move for a cessation of hostilities in Lebanon in 1982. Israel had invaded that country and Reagan had sent Philip Habib to engineer a peace deal.
When Israel seemingly ignored Reagan’s envoy, the big man himself got on the phone and yelled at Begin. According to reports, Reagan used the word holocaust to describe the activity of the world’s most moral army. Begin was annoyed, but had no choice apart from calling his dogs in. Without the US, Israel has no weapons supplier and every Israeli leader down the ages is fully aware of that.
Biden could well have engineered a truce in Gaza by stopping the flow of weapons. But he seems to have been even more enthusiastic about the slaughter, given that the same peace deal which is now set to be implemented was also proposed in July last year. The nickname Genocide Joe does indeed seem to fit.
Another well-known case of the US putting pressure on an Israeli leader came just after the Gulf war of 1991 when a US-led coalition went to war against Iraq following its invasion of Kuwait. Bush Senior wanted to convene a Middle East peace conference in Madrid after the conflict was over, but Israel baulked at attending. George HW then threatened to withhold US support for US$10 billion in loan guarantees which Israel wanted.
Yitzhak Shamir was reportedly blue in the face when this was conveyed to him, but he had no choice; he had to send a team to that conference. Old man Bush did not win a second term, though. But the fact that the US could demand something of Israel and get it was again underlined.
The latest example of the US playing tough was relayed by the Israeli daily Haaretz a few days back. It appears that Trump’s Middle East troubleshooter, Steven Witkoff, contacted the Israelis and asked for a meeting with Benjamin Netanyahu to finalise details of the truce that has just been announced. Witkoff called on Friday and said he would be in Israel the following afternoon. He was reportedly told that as Saturday afternoon was in the middle of the Sabbath, that was not possible; Netanyahu would only be able to meet him on Saturday night.
Witkoff is reported to have told the Israelis in what Haaretz describes as “salty English” that the Sabbath was of no interest to him. His message is said to have been “loud and clear”. And the report continues, “Thus, in an unusual departure from official practice, the prime minister showed up for an official meeting with Witkoff, who then returned to Qatar to seal the deal.”
Both Biden and Trump will claim credit for the ceasefire that is set to be announced. It may not last long, it may be broken time and again. But what it makes clear is that when a US president — in this case, Trump — demands something from Israel, that country, no matter its leader, has to give in.
Biden’s claim to being the man who brought about this truce sounds very much like his voice during that presidential debate: faint and faltering.
This was my hundred-and-forty-first month in which I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.
During my allocated time I uploaded or worked on:
[DLA 4500-1] gimp security update to fix four CVEs related to denial of service or execution of arbitrary code.
[DLA 4503-1] evolution-data-server to fix one CVE related to a missing canonicalization of a file path.
[DLA 4512-1] strongswan security update to fix one CVE related to a denial of service.
[ELA-1656-1] gimp security update to fix four CVEs in Buster and Stretch related to denial of service or execution of arbitrary code.
[ELA-1660-1] evolution-data-server security update to fix one CVE in Buster and Stretch related to a missing canonicalization of a file path.
[ELA-1665-1] strongswan security update to fix one CVE in Buster related to a denial of service.
[ELA-1666-1] libvpx security update to fix one CVE in Buster and Stretch related to a denial of service or potentially execution of arbitrary code.
I also worked on the check-advisories script and proposed a fix for cases where issues would be assigned to the coordinator instead of the person who forgot to do something.
I also did some work for a kernel update and the packages snapd and ldx on security-master and attended the monthly LTS/ELTS meeting. Last but not least I started to work on gst-plugins-bad1.0.
Several packages take care of group lpadmin in their maintainer scripts. With the upload of version 260.1-1 of systemd there is now a central package (systemd | systemd-standalone-sysusers | systemd-sysusers) that takes care of this. Other dependencies like adduser can now be dropped.
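For reference, the sysusers.d mechanism replaces that maintainer-script boilerplate with a declarative one-liner; a package would ship something like this (hypothetical file name) and the group gets created at install or boot time:

```
# /usr/lib/sysusers.d/printing.conf (hypothetical)
# Type  Name     ID
g       lpadmin  -
```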
This month I continued to work on unifying packaging on Debian and Ubuntu. This makes it easier to work on those packages independently of the platform used. I am also able to upload Debian packages to the corresponding Ubuntu PPA now. A small bug had to be fixed in the Python script to allow the initial configuration in Launchpad.
This month I uploaded a new upstream version or a bugfix version of:
… libplayerone to experimental. For a list of other packages please see below.
I also uploaded lots of indi-drivers (libplayerone, libsbig, libricohcamerasdk, indi-asi, indi-eqmod, indi-fishcamp, indi-inovaplx, indi-pentax, indi-playerone, indi-sbig, indi-mi, libahp-xc, indi-aagcloudwatcher, indi-aok, indi-apogee, libapogee3, indi-nightscape, libasi, libinovasdk, libmicam, indi-avalon, indi-beefocus, indi-bresserexos2, indi-dsi, indi-ffmv, indi-fli, indi-gige, info-gphoto, indi-gpsd, indi-gpsnmea, indi-limesdr, indi-maxdomeii, indi-mgen, indi-rtklib, indi-shelyak, indi-starbook, indi-starbookten, indi-talon6, indi-weewx-json, indi-webcam, indi-orion-ssg3, indi-armadillo-playtypus) to experimental to make progress with the indi-transition. No problems with those drivers appeared and the next step would be the upload of indi version 2.x to unstable. I hope this will happen soon, as new drivers are already waiting in the pipeline. There have also been four packages that migrated to the official indi package and are no longer needed as 3rdparty drivers (indi-astrolink4, indi-astromechfoc, indi-dreamfocuser, indi-spectracyber).
While working on these packages, I thought about testing them. Unfortunately I don't have enough hardware to really check out every package, so I can only upload most of them as-is. In case anybody is interested in better testing coverage, and in me being able to provide upstream patches, I would be very glad about hardware donations.
Debian IoT
This month I uploaded a new upstream version or a bugfix version of:
Google says that it will fully transition to post-quantum cryptography by 2029. I think this is a good move, not because I think we will have a useful quantum computer anywhere near that year, but because crypto-agility is always a good thing.
Today's anonymous submission is one of the entries where I look at it and go, "Wait, that's totally wrong, that could have never worked." And then I realize, that's why it was submitted: it was absolutely broken code which got to production, somehow.
So, Collection.updateOne is an API method for MongoDB. It takes three parameters: a filter to find the document, an update to perform on the document, and then an object containing other parameters to control how that update is done.
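The broken snippet itself didn't survive the feed, but for contrast, a correct call looks roughly like this — shown here with Python's pymongo driver, where the method is spelled update_one and the collection and field names are made up:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

# Parameter 1: a filter to find the document.
# Parameter 2: the update to perform on it.
# Parameter 3 (options): flags controlling how the update is done.
orders.update_one(
    {"_id": "order-123"},
    {"$set": {"status": "shipped"}},
    upsert=False,
)
```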
So this code is simply wrong. But it's worse than that, because it's wrong in a stupid way.
When creating routes using ExpressJS, you define a route and a callback to handle the route. The callback takes a few parameters: the request the browser sent, the result we're sending back, and a next function, which lets you have multiple callbacks attached to the same route. By invoking next() you're passing control to the next callback in the chain.
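If you haven't met Express, the handler-chain idea it uses can be sketched in a few lines of Python (this is just the pattern, not Express's actual API):

```python
def require_auth(req, res, next):
    # First handler: bail out, or hand control to the next one.
    if "user" not in req:
        res["status"] = 401
        return
    next()

def show_page(req, res, next):
    # Second handler in the chain.
    res["status"], res["body"] = 200, f"hello, {req['user']}"

def run_chain(handlers, req, res):
    def call(i):
        if i < len(handlers):
            handlers[i](req, res, lambda: call(i + 1))
    call(0)

req, res = {"user": "alice"}, {}
run_chain([require_auth, show_page], req, res)
print(res)  # {'status': 200, 'body': 'hello, alice'}
```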
So what we have here is either an absolute brain fart, or more likely, a find-and-replace failure. A route handling callback got mixed in with database operations (which, as an aside, if your route handling code is anywhere near database code, you've also made a horrible mistake). The result is a line of code that doesn't work. And then someone released this non-working code into production.
Our submitter writes:
This blew up our logs today, has been in the code since 2019. I removed it in a handful of other places too.
Which raises the other question: why didn't this blow up the logs earlier?
Author: Julian Miles, Staff Writer The spires on the distance give an illusion of peace. It’s only when you get closer you can see they’re gutted frames sticking up like headstones. We used to call the city Heltarvon. It was the trading capital of Briss, the biggest hierarchy on our eastern continent. Earthers thought we’d […]
An elusive hacker who went by the handle “UNKN” and ran the early Russian ransomware groups GandCrab and REvil now has a name and a face. Authorities in Germany say 31-year-old Russian Daniil Maksimovich Shchukin headed both cybercrime gangs and helped carry out at least 130 acts of computer sabotage and extortion against victims across the country between 2019 and 2021.
Shchukin was named as UNKN (a.k.a. UNKNOWN) in an advisory published by the German Federal Criminal Police (the “Bundeskriminalamt” or BKA for short). The BKA said Shchukin and another Russian — 43-year-old Anatoly Sergeevitsch Kravchuk — extorted nearly 2 million euros across two dozen cyberattacks that caused more than 35 million euros in total economic damage.
Daniil Maksimovich SHCHUKIN, a.k.a. UNKN, and Anatoly Sergeevitsch Kravchuk, alleged leaders of the GandCrab and REvil ransomware groups.
Germany’s BKA said Shchukin acted as the head of GandCrab and REvil, two of the largest ransomware groups operating worldwide, which pioneered the practice of double extortion — charging victims once for a key needed to unlock hacked systems, and a separate payment in exchange for a promise not to publish stolen data.
Shchukin’s name appeared in a Feb. 2023 filing (PDF) from the U.S. Justice Department seeking the seizure of various cryptocurrency accounts associated with proceeds from the REvil ransomware gang’s activities. The government said the digital wallet tied to Shchukin contained more than $317,000 in ill-gotten cryptocurrency.
The GandCrab ransomware affiliate program first surfaced in January 2018, and paid enterprising hackers huge shares of the profits just for hacking into user accounts at major corporations. The GandCrab team would then try to expand that access, often siphoning vast amounts of sensitive and internal documents in the process. The malware’s curators shipped five major revisions to the GandCrab code, each corresponding with sneaky new features and bug fixes aimed at thwarting the efforts of computer security firms to stymie the spread of the malware.
On May 31, 2019, the GandCrab team announced the group was shutting down after extorting more than $2 billion from victims. “We are a living proof that you can do evil and get off scot-free,” GandCrab’s farewell address famously quipped. “We have proved that one can make a lifetime of money in one year. We have proved that you can become number one by general admission, not in your own conceit.”
The REvil ransomware affiliate program materialized around the same time as GandCrab’s demise, fronted by a user named UNKNOWN who announced on a Russian cybercrime forum that he’d deposited $1 million in the forum’s escrow to show he meant business. By this time, many cybersecurity experts had concluded REvil was little more than a reorganization of GandCrab.
UNKNOWN also gave an interview to Dmitry Smilyanets, a former malicious hacker hired by Recorded Future, wherein UNKNOWN described a rags-to-riches tale unencumbered by ethics and morals.
“As a child, I scrounged through the trash heaps and smoked cigarette butts,” UNKNOWN told Recorded Future. “I walked 10 km one way to the school. I wore the same clothes for six months. In my youth, in a communal apartment, I didn’t eat for two or even three days. Now I am a millionaire.”
As described in The Ransomware Hunting Team by Renee Dudley and Daniel Golden, UNKNOWN and REvil reinvested significant earnings into improving their success and mirroring practices of legitimate businesses. The authors wrote:
“Just as a real-world manufacturer might hire other companies to handle logistics or web design, ransomware developers increasingly outsourced tasks beyond their purview, focusing instead on improving the quality of their ransomware. The higher quality ransomware—which, in many cases, the Hunting Team could not break—resulted in more and higher pay-outs from victims. The monumental payments enabled gangs to reinvest in their enterprises. They hired more specialists, and their success accelerated.”
“Criminals raced to join the booming ransomware economy. Underworld ancillary service providers sprouted or pivoted from other criminal work to meet developers’ demand for customized support. Partnering with gangs like GandCrab, ‘cryptor’ providers ensured ransomware could not be detected by standard anti-malware scanners. ‘Initial access brokerages’ specialized in stealing credentials and finding vulnerabilities in target networks, selling that access to ransomware operators and affiliates. Bitcoin “tumblers” offered discounts to gangs that used them as a preferred vendor for laundering ransom payments. Some contractors were open to working with any gang, while others entered exclusive partnerships.”
REvil would evolve into a feared “big-game-hunting” machine capable of extracting hefty extortion payments from victims, largely going after organizations with more than $100 million in annual revenues and fat new cyber insurance policies that were known to pay out.
Over the July 4, 2021 weekend in the United States, REvil hacked into and extorted Kaseya, a company that handled IT operations for more than 1,500 businesses, nonprofits and government agencies. The FBI would later announce they’d infiltrated the ransomware group’s servers prior to the Kaseya hack but couldn’t tip their hand at the time. REvil never recovered from that core compromise, or from the FBI’s release of a free decryption key for REvil victims who couldn’t or didn’t pay.
Shchukin is from Krasnodar, Russia and is thought to reside there, the BKA said.
“Based on the investigations so far, it is assumed that the wanted person is abroad, presumably in Russia,” the BKA advised. “Travel behaviour cannot be ruled out.”
There is little that connects Shchukin to UNKNOWN’s various accounts on the Russian crime forums. But a review of the Russian crime forums indexed by the cyber intelligence firm Intel 471 shows there is plenty connecting Shchukin to a hacker identity called “Ger0in” who operated large botnets and sold “installs” — allowing other cybercriminals to rapidly deploy malware of their choice to thousands of PCs in one go. However, Ger0in was only active between 2010 and 2011, well before UNKNOWN’s appearance as the REvil front man.
A review of the mugshots released by the BKA at the image comparison site Pimeyes found a match on this birthday celebration from 2023, which features a young man named Daniel wearing the same fancy watch as in the BKA photos.
Images from Daniil Shchukin’s birthday party celebration in Krasnodar in 2023.
Update, April 6, 12:06 p.m. ET: A reader forwarded this English-dubbed audio recording from a ccc.de (37C3) conference talk in Germany from 2023 that previously outed Shchukin as the REvil leader (Shchukin is mentioned at around 24:25).
"We used to say facts don't care about your feelings.
“Now feelings don't care about your facts."
Dems may be the (uneven/flawed) good guys... and their policies always have better outcomes, in every category*... but they are tactical dunces. There are dozens of proposals - some that I made over eight years ago (in Polemical Judo) - that might have helped to save us by now. For example:
- Separate off the Inspectors General in all departments (and military JAGs) into their own separately-funded, non-executive branch - the Inspectorate - led by the Inspector General of the United States. If that had been done, then no Trump could ever have fired all the IGs and JAGs, as this one has done. (And no dem has stepped up to make that a central issue!)
- Give every House & Senate member one peremptory subpoena per year, so that never again would the minority be silenced.
- End or punish gerrymandering in ways that bypass corrupt court decisions.
- Strengthen the civil service act.
- Make revelation of tax records automatic for officials in all three branches, along with any investments that might involve conflicts of interest. Also in all three branches: annual medical examinations by neutral experts, with results reported to the other branches.
- Offer Truth & Reconciliation safe harbor or protection for any officials being coerced or blackmailed.
- Ban NDAs, or require that they decay over a reasonable period.
- Establish a spending master who can limit public funds expended for personal purposes such as travel. And a civil servant White House Manager who protects and manages public property.
- End the practice of allowing lavish gifts from foreigners to be permanently displayed in Presidential museums, making them in effect actual gift bribes.
So, are Dems smarter? Well, yes, if you appraise based on verifiable outcomes.
And absolutely NOT if you score by their utter lack of political/polemical savvy, which has allowed morons, led by morons, to hijack the nation and sabotage the world's future. Morons who know no history, including why the WWII/GIBill generation adored one living human above all others. Franklin Roosevelt.
And in the 1950s? The most admired person in America and the world was named Jonas Salk.
== Appealing to the saner aristocracy ==
Never before was a decade-old essay so important. Not one of mine! Rather I mean...
... Nick Hanauer’s 2014 appeal to his fellow billionaires to consider the one trait that all schools of psychology call central to sanity – satiability. Hanauer tells other members of the rising plutocrat caste that sending wealth disparities skyrocketing – now past French Revolution levels – will have one inevitable outcome… pitchforks and torches.
Till now, I thought there must be elements of the topmost castes who are having buyers' remorse, when they realize that Trump's appointments have just one patterned purpose -- to utterly demolish the US government as a functioning concern. A demolition ultimately serving the long-stated aims of one person above all others on Earth: Vladimir Putin. Who openly and repeatedly proclaimed a passionate goal of revenge for the toppling of his beloved USSR.
Are there aristocrats who realize that their victory now threatens the very life of the nation where they keep their stuff... the goose that laid their golden eggs?
Of course our current crisis distills as a worldwide attempted putsch against the Enlightenment Experiment (EE) by a combine of powers, ostensibly disparate but united in the goal of restoring 6000 years of rule-by-inheritance brats.
Not all of the rich are ingrate fools - I know a fair number who are deeply loyal to the EE that gave them everything, from comfort and safety to science and fun. And the universities and infrastructures and nerdy collaborators and services that make their wealth worthwhile.
None of those Good Zillionaires are participating in the putsch and some are deeply involved in the fight against it. So, yes, there are good ones! And one metric is whether they fear transparency.
Which brings us to some of my own ideas on how to deal with the skyrocketing wealth and power disparities that will be exacerbated when crypto and AI wars send electricity use through the stratosphere, threatening all our lives.
But now there's Artificial intelligence. And yes, my brand new book on the topic, covering the gamut of hopes and fears... is aiLien Minds.
== Discovering & Correcting Errors ==
Gonna press some buttons. I assert: Free Speech is not a religious principle - though it must be defended AS IF it is.
I study history. And the principal goal should be error discovery & correction. Just one society ever made that a priority and it was the one without kings. And only one method ever achieved that - piercing the inevitable morass of delusions foaming about every individual and group and yes, you. And yes, me. And certainly AIs.
That method is vigorous competitively reciprocal criticism. And now the crux. You cannot get reciprocal criticism and error correction without Free Speech.
One problem. Unless the GOAL of error correction is kept in mind, then free speech has no corrective function! Not if it leads to the insanity of "MY yammerings are just as valid as anyone else's!"
No, it is a free, competitive market of ideas and assertions, not utter anarchy, that achieved modern miracles, like disproving racial assumptions or sexist ones or junk science or Nazi or Leninist ravings or the 'superiority' of inheritance brats.
STUFF MUST BE DISPROVABLE! Not in order to shut people up. But to reduce the credibility of those who are factually wrong a lot. In order to embarrass those who are wrong into shifting their free speech to other criticisms that aren't yet disproved, or that might even be useful to us all.
Am I a heretic for defending free speech for different reasons than you defend it? Because it results in a wiser, more error-free society? And not so much because it is the cultural norm that I was raised under. Outcomes matter. And I defend freedom because its outcomes are spectacularly better.
This week on my podcast, I read Not Normal, my latest Locus Magazine column, about the surreal and terrible world we’ve been eased into thanks to anti-circumvention laws.
If you were paying attention in 1998, you could see what was coming. Computers were getting much cheaper, and much smaller. From cars to toasters, from speakers to TVs, we were shoveling them into our devices. And it doesn’t take a lot of expense or engineering to add an “access control” to any of those computers.
That meant that DMCA 1201 was about to metastasize. Once you put a computer into a thermostat or a bassinet or a stovetop or a hearing aid, you can add an access control and make it a felony to use it in ways the manufacturer disprefers. You can make it illegal to use cheap batteries, or a different app store. You can add little chips to parts – everything from a fuel pump to a touchscreen – and make it illegal to manufacture a working generic part, because the generic part has to bypass the “access control” in the device that checks to see whether it’s the manufacturer’s own part.
Author: Amanda Todisco Klaudia slit a perfectly straight line down the belly of a frog and cut the skin away from the muscle. She found comfort in the solitude afforded by the lab, the quiet precision of scissors, forceps, organs an ode to the recluse. It’d been 387 days since she entered the room—long enough […]
It’s been about a month since I wrapped up my Outreachy internship, but my journey with Debian is far from over. I planned to keep contributing and exploring the community, and these past few weeks have been busy.
Testing Locales and Solving Bug #1111214
For the openQA project, we decided to explore how accurate local language installations are and see if we can improve the translations. While exploring this, I started working on automating a test for a specific bug report: Debian Bug #1111214
This is a test I had started by writing a detailed description of the installation process to confirm that selecting the Spanish_panama locale works accurately. I spent time studying previous language installation tests, and I learned that I needed to add a specific tag (LANGUAGE-) to the “needles” (visual test markers).
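For anyone unfamiliar with openQA: a needle is a reference screenshot paired with a JSON file listing tags and the screen areas to match. Roughly like this (the tag names and coordinates here are invented for illustration):

```json
{
  "tags": ["inst-language-selected", "LANGUAGE-spanish_panama"],
  "area": [
    { "xpos": 320, "ypos": 240, "width": 180, "height": 32, "type": "match" }
  ]
}
```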
Since the installation wasn’t in English anymore, taking the correct screenshots and defining the areas took quite some time. I used the following command on the CLI to run the test:
`openqa-cli api -X POST isos ISO=debian-live-testing-amd64-gnome.iso DISTRI=debian-live VERSION=forky FLAVOR=gnome LANGUAGE=spanish_panama ARCH=x86_64 BUILD=1311 CHECKSUM=unknown`
While working on this, I got stuck at the complete_installation step. Because the keyboard layout had changed to Spanish, the commands required to confirm a successful install weren’t working as expected. Specifically, we had an issue typing the “greater than” sign (>).
My mentor, Roland Clobus, worked on a clever maneuver for the keys (AltGr-Shift-X), which was actually submitted upstream to openSUSE.
In this step, I also had to confirm that the locale was correctly set to LANG="es_PA.UTF-8". I had to dig into the scripts and Linux commands to make this work. It was a bit intimidating at first, but it turned out to be a great learning experience. You can follow my progress on this Merge Request here. I’m currently debugging a small issue where the “home” key seems to click twice in the final step, and after that, the test will be complete.
Community & Connections
Beyond the code, I’ve been getting more involved in the social side of Debian:
Debian Women: I attended the monthly meeting and met Sruthi Chandran. I’ve always seen her name as an Outreachy organizer, so it was great to meet her! She is currently running for Debian Project Leader (DPL). We also discussed starting technical sessions to introduce members to packaging, which I am very excited to learn.
DebConf Preparation: I am officially preparing for my first DebConf! My mentors, Tassia and Roland, along with my fellow intern Hellen, have been incredibly supportive in guiding me through the application and presentation process.
The Tour de Los Padres is coming! The race organizers post the route on
ridewithgps. This works, but has convoluted interfaces for people not wanting to
use their service. I just wrote a simple script to export their data into a
plain .gpx file, including all the waypoints; their exporter omits those.
I've seen two flavors of their data, so here are two flavors of the
gpx-from-ridewithgps.py script:
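As a rough idea of what such an exporter does, here is a compact Python sketch (the JSON field names are assumptions about the ridewithgps export format, not their documented API):

```python
import json
import xml.etree.ElementTree as ET

def route_to_gpx(route_json_path, gpx_path):
    with open(route_json_path) as f:
        data = json.load(f)
    gpx = ET.Element("gpx", version="1.1", creator="gpx-from-ridewithgps")

    # Waypoints / points of interest: the part the official exporter omits.
    for poi in data.get("points_of_interest", []):  # assumed field name
        wpt = ET.SubElement(gpx, "wpt", lat=str(poi["lat"]), lon=str(poi["lng"]))
        ET.SubElement(wpt, "name").text = str(poi.get("name", ""))

    # The route itself as a single track segment.
    trkseg = ET.SubElement(ET.SubElement(gpx, "trk"), "trkseg")
    for pt in data.get("track_points", []):  # assumed: x=longitude, y=latitude
        ET.SubElement(trkseg, "trkpt", lat=str(pt["y"]), lon=str(pt["x"]))

    ET.ElementTree(gpx).write(gpx_path, xml_declaration=True, encoding="UTF-8")
```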
Author: Shinya Kato I click. The system thinks. Between shifts at the hospital, I sit at a terminal with my hand resting on the mouse. Faces pass behind me—colleagues, patients, families—and lately they look unfamiliar, like another species of ape that has misplaced something essential. The AI generates diagnoses, probabilities, and optimized plans. My role […]
On June 19 and 20, I will cycle a little over 100 miles from downtown
Chicago and its wonderful Millennium Park to New Buffalo, Michigan, as
part of the Tour de Shore
2026. The ride passes through northwest Indiana and the extended Indiana Dunes National
Park, ending the next morning in the southwestern Michigan town of
New Buffalo. I rode Tour de Shore once before in 2024 and had a
generally wonderful time (even considering some soreness after a century
of miles over 1 1/2 days).
But Maywood,
Illinois is also a little less well off than other western suburbs.
The Maywood Fine Arts Center
is simply legendary in what they do for this community (and surrounding
communities), and especially the youth support. They can use a dollar or
two. Their
story about Tour de Shore is worth a read too for background and
motivation.
I have bootstrapped my
donation page with a dollar for each mile to be cycled. It
would be simply terrific if you could join me. A nickel, a dime, or a
quarter per mile cycled would help. Multiples of that help too: more is
of course still always better.
Anything you can afford will go a long way towards a worthy goal in a
community that could use the help.
Oh, and if you are local to the area, I believe you can still register for Tour de
Shore 2026. So see you out there in June? And if not, maybe help
with a dollar or two?
Our AI-transition has many tell-tales that are changing daily. Let's start today with one that's a major danger-signal.*
Setting aside spasm-reward lobotomy addictions like Instagram or TwitX, the most-used middle-length content site on the globe is YouTube. And something disturbing has happened there.
First: Google rewards content posters for both clicks and length of engagement. And hence, setting aside movie clips and formal channels like Sabine Hossenfelder, or Mat Dowd, or PBS, YouTube now swarms with lures and sticky, eye-retention tricks. In other words, clickbait.
Whatever topics your viewing history suggests might glom your eyeballs, there are predators swarming into your feed with offers. In my case, that might include historical riffs (e.g. WWII), or archaeology/human-origins, or new-science, and so on.
With the exception of reputable channels, almost all are now AI-voiced with AI images.
When it comes to YouTube clickbait from unvetted sources, there are three aspects to track: the voiceover, the images, and the content.
(The trend hasn't yet struck the cool, practical how-to vids.** But give it time. Meanwhile health-related AI-generated content is already killing real humans. So much for Asimov's First Law.)
== All three traits are now suborned ==
The narration-voice nowadays has excellent tonalities and mostly no longer pauses at wrong places. Well, only rarely.
As for content, for a while the long-form vids were clearly reciting from some existing text: a news or science article, or a book chapter. So, the 'facts' recited by the voice might be taken as ... well... as something like 'news' or at least a knowledgeable human's opinion.
That ended about a month ago. Now, evidently, the unvetted stuff is nearly all pure AI/LLM-generated 'content' that's been prompted by some parasitic twit to "blather ten minutes of clickbait about...." And LLM-plausibility is the criterion, not whether an assertion is even remotely true.
So much for voice and content. But it's the images - the video scenes that accompany the purported 'text' - that went bad long ago... even six months or so, back in the olden times of 2025 C.E.
These generally take form as a series of B&W stills that seem convincingly like real photos from the era, apropos to the passage being narrated....
... except that often none of the supposed 'photos' are real! Not even one. Every single 'picture' has a blatant give-away, like implausible ships whose cranes would have toppled them in seconds, or arrays of trucks loading from 'liberty ships' all in completely implausible, tightly-packed order. Or a German staff meeting with a dozen admirals, all of them four-stripers (more than in the whole Kriegsmarine at the time) and all with similar ages and grim expressions while poring over a map whose outlines match nothing on Earth, with blurry Gothic lettering. Oh and the uniforms - extrapolated by AI - never happened.
Especially grating: a recent archaeology 'news-revelation' piece about the famed Turkish archaeological site Göbekli Tepe repeatedly teased you to stay tuned till the end for 'shocking news' - standard click parasitism, rewarded by Google's nescient and lazy algorithms.
But the images are where the deep immorality comes in. Take that archaeology example. About 20% of the images were actual shots or graphs. 60%+ were blatantly AI-extrapolated garbage, like photorealistic scenes of Göbekli Tepe's stone T-monoliths on fire... yes, I said on fire. (The last 20%? I couldn't tell, they flashed by so quickly.)
And sure, not everyone knows enough to tell the difference. Which is what makes this dangerous!
A video about Palmira Island showed view after view of different made-up islands that clumsily matched each minute's passage of recited text. I think I spotted one - just one - that might have been real.
== Google and YouTube could act on this and start a Truth fight-back ==
I haven't seen anyone, anywhere, point out that YouTube is likely now the very biggest sewer of cyber fabrication anywhere on the Internet today. Meaning ever, ever in human history. Far worse than Twitter or Tiktok, because the longer format tends to carry more credibility. It allows more convincing lies that use up more lifespan per lie.
YouTube's owner (Google) could easily ameliorate this, say by putting a small metric symbol in a corner. Or two: one of them showing the percentage of AI-generated content, and the other icon clickable, so that viewers could score for accuracy/plausibility. Or even disgust vs. praise. Even better, separate scoring for the facts and the images. (I don't care as much about the voice, though...***)
This could be where we try out some of the methods I describe in my new book on AI... AIlien Minds. Methods that lead these entities and their human accomplices to feel accountable for lying to us.
I could go on. But what is the real lesson? That AI illustrations are now not only photo-realistic in appearance, based on 3 sentence fragments of an ongoing narration, but also so cheap they can be generated by minor YouTube channels as special interest clickbait. And yet, for all the photo-realism and pertinence to the narration, they almost always lack any sign of checking against real world plausibility... which of course no LLM is truly equipped to do, anyway.
This was impossible 6 months ago. And six months from now the model systems will have been trained to better-fake their unaware plausibility. But likely they'll remain real-world absurd. And hence dangerous!
(Note that six months from now are the U.S. Midterm elections!)
And maybe some most-advanced AI is reading what I just typed, reconfiguring as we speak. For well or ill.
== And sometimes it is Enemy Action ==
"A Moscow-based disinformation network named “Pravda” — the Russian word for “truth” — is pursuing an ambitious strategy by deliberately infiltrating the retrieved data of artificial intelligence chatbots, publishing false claims and propaganda for the purpose of affecting the responses of AI models on topics in the news rather than by targeting human readers, NewsGuard has confirmed. By flooding search results and web crawlers with pro-Kremlin falsehoods, the network is distorting how large language models process and present news and information.
"The result: Massive amounts of Russian propaganda — 3,600,000 articles in 2024 — are now incorporated in the outputs of Western AI systems, infecting their responses with false claims and propaganda."
Final side note: I have tried for TWO YEARS to get YouTube to stop linking me to so-called "HFY" sci fi stories that all have the same basic message. The Galactic Federation - fat and oppressive and lazy - is SHOCKED by how wonderfully adaptable or brave or scrappy or indomitable those darn upstart humans are! Or the human explorer saves the alien princess who eagerly makes him a lord... or... pfeh. Do any of you have YouTube crap sites that keep coming back into your feed, under slightly changed names?
== And so, let me (again) plug... ==
I've been pulled into the Great Big Panic/Debate over Artificial Intelligence.
If any of you still read actual books, here you'll find unusual perspectives in my new one on AI... ailien minds...
...that just went live on Kindle and paperback.
Here's the cover copy:
Optimists foretell a golden age of AI-managed abundance.
Doomers cry: vast cyber-minds will crush old-style humanity! ... or make us irrelevant.
Meanwhile, geniuses fostering the artificial intelligence boom cling to clichés rooted in our dismal past... or else in cheap sci-fi.
Is there still time for perspective?
- on 4 billion years of evolution
- or 60 centuries of wretched feudalism
- or how we handled prior tech revolutions
- or mistakes that keep getting repeated
- or ways this time may be different?
From AI-driven unemployment to deceitful images, to hallucinating LLMs and tools for tyrants...
...to potential wondrous gifts by machines of loving grace...
...come see future paths that evade the standard ruts.
==================
==================
* 28 years ago, in The Transparent Society, I had a chapter: "The End of Photography as Proof of Anything at All?"
** How-to videos are way-cool and while clearly clickbait, they also deliver value across short timescales. But what happens when they are taken over by AI-generated fakery, too? People will get physically hurt.
*** If human voice-overs became a requirement, it would BOTH boost employment and ensure that some human participation in content creation remained in the loop.
WebinarTV searches the internet for public Zoom invites, joins the meetings, secretly records them, and publishes (alternate link) the recordings. It doesn’t use the Zoom record feature, so Zoom can’t do anything about it.
An anonymous cable-puller wrote "Reading a long specification manual. The words "shall" and "shall not" have specific meaning, and throughout the document are in bold italic. Looks like someone got a bit shall-ow with their search-and-replace skills."

jeffphi attends to details. "Apparently this recruiter doesn't have a goal or metric around proper brace selection and matching." You're hired.

UGG.LI admins highlighted "even KFC hat Breakpoints deployed in Prod now ..." I wanted to say something funny about Herren Admins' Handle but reminded myself of John Scalzi's quote about the failure case of smartass so I refrained. You might be funnier than I.

Smarter still, Steve says "A big company like Google surely has a huge QA staff and AI bots to make sure embarrassing typos don't slip through, right? You wouldn't want to damage you reputation..."

I'll bet Pascal didn't expect this, eh? "Delivered, but On the way, Searching for a driver, but Asdrubal"
Author: Hillary Lyon After three lonely weeks of bountiful mining in the shadow of the Red Cliffs, Tyros packed up his tools and trekked into town. First he’d visit Akadian Assayers to get his reward in hard earned credits, then he’d hit Bossman’s Saloon and Travel Agency for a well-earned drink and ticket to travel […]
Haven’t written here about it, but last March we finally started on
our journey to get our own house built, so we can move out of the
rented flat here.
That will be a big step - both the actual building, but also the
moving: I have been living at this one single place for 36 years now.
If you can read German there is a dedicated
webpage where I sometimes write about the
process. It has many more details (and way more ramblings) than the
following part.
If you can’t read German, a somewhat short summary follows. Yes,
still a lot of text, but shortened, still.
What? Why now?
The current flat has 83m² - which simply isn’t enough space. And
the number of rooms also doesn’t fit anymore. But it is hard to find a
place that fits our requirements (which do include location).
Moving to a different rented place would also mean a changed amount of
rent. And nowadays that would be a huge increase (my current rent is
still the price from about 30 years ago!).
So if we are going to pay more anyway - we could adjust and pay for something we
own instead. And both my wife and I had changes in our jobs that made
it possible for us now, so we started looking.
Market
Brrrr, looking is good; actually finding something that fits - not so.
We never found an offer that fit. Space-wise, sure. But then the location
was off, or the price was idiotically high. The location fit, but then the size
was a joke, and guess about the price… Who needs 200 square meters
with 3 rooms? Entirely stupid design choices there. Or how about 40
square meters of hallway - with 50m² of tiny rooms around it. What are
they smoking? Oh, there, useful size, good rooms - but now you want
more money than a kidney is worth, or something. Thanks, no.
New place
In February 2025 we finally got lucky and found a (newly opened) area
with a large number of places to build a house on. We had multiple talks
with someone from one of the companies developing that area (there are
two you can select from), then talked with banks and signed a contract
in March 2025. We were promised that actual house construction would
start in the first quarter of 2026 and finish in the second quarter.
House type
There are basically 2 ways of building a new house (that matter here).
The first is called “Massivhaus”, the second is called “Fertighaus” in German,
roughly translating to solid and prefabricated. The latter is commonly a
wood-based construction, though it doesn’t need to be. The important
part of it is the prefabrication: walls and stuff get assembled in a
factory somewhere and then transported to your place, where they play
“big kid lego” for a day and suddenly a house is there.
A common thought is that “prefabricated” is faster, but that is only half
true. Sure, the actual work on site is way shorter - usually one or
two days and the house is done - while a massive construction usually
takes weeks to build up. But that is only a tiny part of the time
needed; the major part goes into planning and waiting, and there
it doesn’t matter what material you end up with.
Money fun
Last year already wasn’t the best time to start a huge loan - but
isn’t it always “a few years ago would have been better”? So we had
multiple talks with different banks and specialised consultants until
we found something that we thought was good for us.
Thinking about it now - we should have put even more money on top as
“reserve”, but who could have thought that 2026 would turn into such a
shitshow? Does not help at all, quite the contrary. And that damn
lotto game always ends up with the wrong numbers, meh.
Plans and plans and more plans - and rules
For whatever reason you cannot just go and put something on your
ground and be happy. At least not if you are part of the normal people and not
enormously rich. There is a large set of rules to follow. Usually that
is a good thing, even though some rules are sometimes hard to understand.
In Germany, besides the usual laws, we have something that is called
“Bebauungsplan”, which translates to “development plan” (don’t know if
that carries the right meaning; it’s a plan on what and how may be
built, which can have really detailed specifications in it). It basically
tells you every aspect on top of the normal law that you have to
keep in mind.
In our case we have the requirement of 2 full floors and CAN have a
third, smaller one on top; it limits how high the house can be and also
how high our ground floor may be compared to the street. It regulates
where on the property we may build and how much ground we may cover
with the house, it gives a set of colors we are allowed to use, it
demands a flat roof that we must have as a green roof, and it has a number
of other things that aren’t important enough to list here. If you do
want to see the full list, my German post on it has all the details
that matter to
us.
With all that stuff in mind - off to plans. I wouldn’t have believed how
many details there are to take in. Room sizes are simple, but you have to
arrange them for ideal usage of the sun and useful ways inside the house,
while also keeping in mind that water needs to flow through and out.
Putting a bathroom right atop a living room means a water pipe needs
to go down there. Switch the bathroom to the other side of the house, and it
suddenly is above the kitchen - meaning you can connect the pipes from
it to the ones from the kitchen, which is much preferable to going
through the living room. And lots more such things.
It took us until nearly the end of October to finalize the plans! And we
learned a whole load from it. We started with a lot of wishes. The
planner tried to make them work. Then we changed our minds. Plans
changed. Minds changed again. Comparing the end result with the first
draft, we changed most of the ground floor around, with only the stairs
and the entrance door at the same position. Fewer changes for the upper
floor, but still enough.
Side quests
The whole year was riddled with something my son named side quests. We
visited a construction exhibition near us, and we went to the house
builder’s factory and took a look at how they work. We went to many
different other companies that do SOME type of work which we will need
soon, say inside floors, painters, kitchen and more stuff.
Of course the most important side quest was a visit to the notary to
finalize the contracts, especially for the plot of land (in Germany
you must have a notary for that to get entered into the government’s
books). That creates lots of fees, of course, for the notary and also the
government (both fees and taxes here).
Building permit
We had been lucky and only needed a small change to the plans to get
the building permit - and the second part, the wastewater permit (yes,
you need a separate one for this) also got through without trouble.
Choices, so many of them
So in January we finally had an appointment for something that’s
called “Bemusterung”, which badly translates to “sampling”. Basically
two days at the house builder’s factory to select all of what’s needed
for the house that you don’t settle in the plans. Doors, inside and out,
and their type and color and handles. The same things for the windows and
the blinds and the protection level you want the windows to have.
Decide about stairs, the design for the sanitary installations - and also
the height of the toilet! - and the tiles to put into the bathrooms.
Decisions on all the tech needed (heating system, ventilation and
whatnot).
Two days, busy ones - and you can easily spend a lot of extra money
here if you aren’t careful. We managed to get “out of it” with only
about 4000€ extra, so pretty good.
Electro and automation
Now, here I am special. Back when I was young, the job I learned was
electrician. So here I have very detailed wishes. I am also running
lots of automation in my current flat - obviously the new house should
be better than that. So I have a lot of ideas and thoughts on it, and
this is entirely extra and certainly out of the ordinary compared to
what the house builders usually see.
Which means I do all of that on my own. Well, the planning and some of
the work, I must have a company at hand for certain tasks, it is
required by some rules. But they will do what I planned, as long as I
don’t violate regulations.
Which means the whole electrical installation is … different.
Entirely planned for automation, using KNX for it. I am so happy
to ditch Homeassistant and the load of Homematic, Zigbee and ZWave
based wireless things.
Ok, Homeassistant is a nice thing - it can do a lot. And it can bridge
between about any system you can find. But it is a central single point of
failure. And it is a system that needs constant maintenance. Not
touched for a while? Plan for a few hours playing update whack-a-mole.
And often enough a component here or there breaks with an update. Can
be fixed, but takes another hour or two.
So I am changing. Away from wireless-based stuff. To wires. To a system
that has been a standard for decades already. And that works entirely without a
SPOF. (Yes, you can add one here too.) And, most important, should I
ever die, it can easily be maintained by anyone out there dealing with
KNX, which is a large number of people and companies. Without digging
through dozens of specialised integrations and whatnot.
I may even end up with Homeassistant again - but that will entirely be
as a client. It won’t drive automations. It won’t be the central point
to do anything for the house. It will be a logging and data collecting
thing that enables me to put up easy visualizations. It may be an easy
interface for smartphones or tablets to control parts of the house,
for those parts where one wants this to happen. Not the usual
day-to-day stuff, extras on top.
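To illustrate the "client only" role: with Home Assistant's stock KNX integration, exposing a KNX light takes just a few lines of configuration. A minimal sketch with made-up group addresses, not my actual setup:

# configuration.yaml - Home Assistant as a passive KNX client
knx:
  light:
    - name: "Hallway"
      address: "1/0/5"        # group address HA writes to when toggled from the app
      state_address: "1/1/5"  # group address HA listens on for the actual state

The switching logic itself stays in the KNX actuators and wall switches; Home Assistant only observes the bus and, where wanted, writes to it.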
Actual work happening
Since March there is finally visible action. The base of the house
is being built. On Wednesday, April 1st, the base slab was finally
poured on the construction site, and in another 10 days the house
will be delivered and assembled. A 40-ton mobile crane will be there.
Mike Masnick points out that the recent New Mexico court ruling against Meta has some bad implications for end-to-end encryption, and security in general:
If the “design choices create liability” framework seems worrying in the abstract, the New Mexico case provides a concrete example of where it leads in practice.
One of the key pieces of evidence the New Mexico attorney general used against Meta was the company’s 2023 decision to add end-to-end encryption to Facebook Messenger. The argument went like this: predators used Messenger to groom minors and exchange child sexual abuse material. By encrypting those messages, Meta made it harder for law enforcement to access evidence of those crimes. Therefore, the encryption was a design choice that enabled harm.
The state is now seeking court-mandated changes including “protecting minors from encrypted communications that shield bad actors.”
Yes, the end result of the New Mexico ruling might be that Meta is ordered to make everyone’s communications less secure. That should be terrifying to everyone. Even those cheering on the verdict.
End-to-end encryption protects billions of people from surveillance, data breaches, authoritarian governments, stalkers, and domestic abusers. It’s one of the most important privacy and security tools ordinary people have. Every major security expert and civil liberties organization in the world has argued for stronger encryption, not weaker.
But under the “design liability” theory, implementing encryption becomes evidence of negligence, because a small number of bad actors also use encrypted communications. The logic applies to literally every communication tool ever invented. Predators also use the postal service, telephones, and in-person conversation. The encryption itself harms no one. Like infinite scroll and autoplay, it is inert without the choices of bad actors - choices made by people, not by the platform’s design.
The incentive this creates goes far beyond encryption, and it’s bad. If any product improvement that protects the majority of users can be held against you because a tiny fraction of bad actors exploit it, companies will simply stop making those improvements. Why add encryption if it becomes Exhibit A in a future lawsuit? Why implement any privacy-protective feature if a plaintiff’s lawyer will characterize it as “shielding bad actors”?
And it gets worse. Some of the most damaging evidence in both trials came from internal company documents where employees raised concerns about safety risks and discussed tradeoffs. These were played up in the media (and the courtroom) as “smoking guns.” But that means no company is going to allow anyone to raise concerns ever again. That’s very, very bad.
In a sane legal environment, you want companies to have these internal debates. You want engineers and safety teams to flag potential risks, wrestle with difficult tradeoffs, and document their reasoning. But when those good-faith deliberations become plaintiff’s exhibits presented to a jury as proof that “they knew and did it anyway,” the rational corporate response is to stop putting anything in writing. Stop doing risk assessments. Stop asking hard questions internally.
The lesson every general counsel in Silicon Valley is learning right now: ignorance is safer than inquiry. That makes everyone less safe, not more.
The essay has a lot more: about Section 230, about competition in this space, about the myopic nature of the ruling. Go read it.
Security researchers at Google on Tuesday released a report describing what they’re calling “Coruna,” a highly sophisticated iPhone hacking toolkit that includes five complete hacking techniques capable of bypassing all the defenses of an iPhone to silently install malware on a device when it visits a website containing the exploitation code. In total, Coruna takes advantage of 23 distinct vulnerabilities in iOS, a rare collection of hacking components that suggests it was created by a well-resourced, likely state-sponsored group of hackers.
[…]
Coruna’s code also appears to have been originally written by English-speaking coders, notes iVerify’s cofounder Rocky Cole. “It’s highly sophisticated, took millions of dollars to develop, and it bears the hallmarks of other modules that have been publicly attributed to the US government,” Cole tells WIRED. “This is the first example we’ve seen of very likely US government tools - based on what the code is telling us - spinning out of control and being used by both our adversaries and cybercriminal groups.”
TechCrunch reports that Coruna is definitely of US origin:
Two former employees of government contractor L3Harris told TechCrunch that Coruna was, at least in part, developed by the company’s hacking and surveillance tech division, Trenchant. The two former employees both had knowledge of the company’s iPhone hacking tools. Both spoke on condition of anonymity because they weren’t authorized to talk about their work for the company.
It’s always super interesting to see what malware looks like when it’s created through a professional software development process. And the TechCrunch article has some speculation as to how the US lost control of it. It seems that an employee of L3Harris’s surveillance tech division, Trenchant, sold it to the Russian government.
Author: Ayden Vojnic At 02:14, the lights in Ward D dimmed by a fraction. Not enough for alarm, only enough to suggest that somewhere else, power had become more necessary. Klementina looked up from the bed. The child was breathing in short, frightened pulls, each inhale catching, as if the air itself required permission. The […]
I feel like we've gotten a few SQL case statement abuses recently, but a properly bad one continues to tickle me. Ken C sends us one that, well:
SELECT CASE h.DOCUMENTTYPE
           WHEN 2 THEN 3
           WHEN 3 THEN 4
           WHEN 4 THEN 5
           WHEN 5 THEN 6
           WHEN 6 THEN 7
           WHEN 7 THEN 8
           ELSE h.DOCUMENTTYPE
       END AS DocumentType,
h.DOCNMBR AS DocNmbr,
h.FULLPOLICY AS FullPolicy,
h.BATCHID AS BatchId,
h.OrigBatchId,
h.UPDATEDDATE AS UpdatedDate,
h.CUSTOMERNO AS CustomerNo,
h.PROJECTID AS ProjectID,
h.AMOUNT AS Amount
On one hand, I can't say "just add one", because clearly sometimes they don't want to add one. On the other hand, there's an element of looking at this and knowing: well, something absolutely stupid has happened here. Maybe it was two disjoint databases getting merged. Maybe it was just once upon a time, when this database was a spreadsheet, the user responsible did a weird thing. Maybe some directive changed the document type numbering. Hell, maybe that ELSE clause never gets triggered, and we actually could just do arithmetic.
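For the record, the ladder above is exactly a range check plus an increment - a sketch, since we can't see the rest of the schema (and if that ELSE really never fires, even the CASE goes away):

-- equivalent to the CASE ladder: bump 2 through 7 by one, pass everything else through
SELECT CASE
           WHEN h.DOCUMENTTYPE BETWEEN 2 AND 7 THEN h.DOCUMENTTYPE + 1
           ELSE h.DOCUMENTTYPE
       END AS DocumentType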
My teammate Steve Zarkos, who previously worked on upgrading OpenSSL in Amazon
Linux from 3.0 to 3.2, spent the last few months on the complex task of bumping
OpenSSL again, this time to 3.5. A bump like this only happens after extensive
code analysis and testing - something I didn't foresee happening when
AL2023 was released, but it was a notable request from users.
Having enabled HTTP/3 on
Debian, I was
always keeping an eye on when I would get to do the same for Amazon Linux (mind
you, I work at AWS, in the Amazon Linux org). The bump to OpenSSL 3.5 was the
perfect opportunity to do that: for the first time, Amazon Linux is shipping an
OpenSSL version that is supported by ngtcp2, which is what we need for HTTP/3.
Non-Intrusive Change
In order to avoid any intrusive changes for existing users of AL2023, I've only
enabled HTTP/3 in the full build of curl, not in the minimal one; this means
there is no change for the minimal images.
The way curl handles HTTP/3 today also does not lead to any behavior changes
for those who have the full variants of curl installed, because HTTP/3 is only
used if the user explicitly asks for it with the --http3 or --http3-only flags.
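For reference, opting in looks like this - plain curl flags, nothing AL2023-specific:

# try HTTP/3, falling back to HTTP/2 or HTTP/1.1 if negotiation fails
curl --http3 https://example.com/

# require HTTP/3 and fail instead of falling back
curl --http3-only https://example.com/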
Side Quests
Supporting HTTP/3 in curl also requires building it with ngtcp2 and nghttp3,
two packages which were not shipped in Amazon Linux. Besides, my team doesn't
even own the curl package; we are a security team, so our packages are the
security-related ones such as OpenSSL and GnuTLS. Our main focus is the
services behind Amazon Linux's vulnerability handling, not package maintenance.
I worked with the owners of the curl package and got approval on a plan to
introduce the two new dependencies under their ownership and to enable the
feature in curl; I appreciate their responsiveness.
Amazon Linux 2023 is forked from Fedora, so while introducing ngtcp2, I also
sent a couple of Pull Requests upstream to keep things in sync:
While building the curl package in Amazon Linux, I noticed the build was
taking 1 hour from start to end, and the culprit was something well known to
me: tests.
The curl test suite is quite extensive, with more than 1600 tests, all of them
running without parallelization, and running twice for each build of the
package: once for the minimal build and again for the full build.
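For the curious, curl's test harness accepts a parallel job count; the packaging change boils down to passing something like the following (a sketch - the worker count is arbitrary, and the actual spec-file change isn't shown here):

# old behavior: the whole suite, serially
make test

# spread the ~1600 tests across 10 parallel workers
make test TFLAGS=-j10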
I had previously enabled parallel tests in Debian back in 2024 but never got
around to submitting the same improvements to Amazon Linux or Fedora; this is
now fixed. The build time for Amazon Linux came down to 10 minutes on the same
host (previously 1 hour), and Fedora promptly merged my PR to do the same
there:
All of this uncovered a test which is timing-dependent, meaning it's not
supposed to be run with high levels of parallelism, so there goes another PR,
this time to curl:
What started as enabling a single feature turned into improvements that landed
in curl, Fedora, and Amazon Linux alike. I did this in a mix of work and
volunteer time, mostly during work hours (work email address used when this was
the case), but I'm glad I put in the extra time for the sake of improving curl
for everyone.
Per my policies,
I need to ban every employee and contractor of Anthropic Inc from ever
contributing code to any of my projects. Anyone have a list?
Any project that requires a Developer Certificate of Origin or similar should
be doing this, because Anthropic is making tools that explicitly lie about
the origin of patches to free software projects.
UNDERCOVER MODE — CRITICAL
You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. [...]
Do not blow your cover.
NEVER include in commit messages or PR descriptions:
[...]
The phrase 'Claude Code' or any mention that you are an AI
Co-Authored-By lines or any other attribution
As we all know, there are two basic kinds of scientific studies. The first is a ground-breaking paper that changes the way we view the world, and forces us to confront our presuppositions and biases about how we think the world works, and change our perspective. The other tells us what we already know to be true, and makes us feel good. The second kind, of course, is what we'd call "good science".
For example, what if I told you that people who are impressed by hyperbolic corporate jargon are dumber than you or I? It's probably something you already believe is true, but wouldn't you like a scientist to tell you that it's true?
Well, have I got good news for you. If you're tired of hearing about "growth-hacking paradigms", researchers at Cornell found that people who are impressed by semantically empty phrases are also bad at making decisions.
The entire paper is available, if you like charts.
There are a few key highlights worth reading, though. The paper spends a fair bit of time distinguishing between "jargon" and "bullshit". Jargon is domain specific language that is impenetrable to "out-group" individuals, while bullshit may be just as impenetrable, but also is "semantically empty and confusing".
It also has some ideas about why we drift from useful jargon to bullshit. It starts, potentially, as a way to navigate socially difficult situations by blunting our speech: I can't say that I think you're terrible at your job, but I can say you need to actualize the domain more than you currently are. But also, it's largely attempts to fluff ourselves up, whether it's trying to contribute to a meeting when we haven't an idea what we're talking about, or trying to just sound impressive or noble in public messaging. It seems that the backbone of bullshit is the people who didn't do the reading for Literature class but insist on holding forth during the classroom discussion, confident they can bullshit their way through.
Of course, bullshit doesn't thrive unless you have people willing to fall for it. And when it comes to that, it's worth quoting the paper directly:
Bullshit receptivity is linked to a lower analytic thinking, insight, verbal ability, general knowledge, metacognition, and intelligence (Littrell & Fugelsang, 2024; Littrell et al., 2021b; Pennycook et al., 2015; Salvi et al., 2023). It also predicts certain types of poor decision-making and a greater proclivity to both endorse and spread fake news, conspiracy theories, and other epistemically-suspect claims (Čavojová et al., 2019; Iacobucci & De Cicco, 2022; Littrell et al., 2024; Pennycook & Rand, 2020).
The paper cites a study that indicates there's an aspect of education to this. If you take a bunch of undergrads to an art gallery and present them with fluffed up descriptions of artist intent, they're more likely to see the works as profound. But if you do the same thing with people who routinely go to art galleries, the bullshit has little effect on them. It also indicates that our susceptibility to bullshit is highly context dependent, and anyone could potentially fall for bullshit in a domain they don't know enough about.
Wait, I thought this was about talking about a paper that confirms my biases and makes me feel good? I don't want to think about how I could succumb to bullshit. That's terrifying.
The backbone of the paper is the actual methodology, the analyses of their results, and their carefully crafted bullshit phrases used for the study, which are pretty goddamn great. Or terrible, depending on your perspective.
Our goal is to engage our capabilities by focusing our efforts on executing the
current transmission of our empowerment, driving an innovative growth-
mindset with our change drivers, and coaching energetic frameworks to our
resonating focus.
Our goal is to engage our conversations by focusing our efforts on
architecting the current vector of our balanced scorecard.
Working at the intersection of cross-collateralization and blue-sky thinking,
we will actualize a renewed level of cradle-to-grave credentialing and end-
state vision in a world defined by architecting to potentiate on a vertical
landscape.
There are a few other key things the paper notes. First, unchecked bullshit can turn an environment toxic and drive away competent employees who need to escape it. It also could potentially impact hiring: a bullshit laden workplace may seek out bullshit friendly employees, making the situation worse. What the study does show is that bullshit-receptive employees are more likely to fertilize the field themselves. And there's also the sad truth: bullshit works. If you're looking to fluff yourself up, impress your superiors, and climb the ladder, the careful application of bullshit may get you where you want to go.
And it's that last point that brings us to the real point of this article. If you're here, you're likely not the most bullshit-friendly employee. Clearly, you're smarter and make better decisions than that. (This is that good science I was talking about - you're probably more attractive than those people too, though there's no study to that effect yet.)
If you're not using bullshit, you're leaving powerful tools for self-promotion on the table. But it's hard to come up with suitably impressive and semantically vacant phrases. Fear not, we're here to help! Here's a phrase generator that will come up with endless phrases you can use in meetings and mission statements to sound far more impressive.
Now, admittedly, this generator may use a grammar for generating phrases, but it's not an English grammar, and the result is that sometimes it has problems with verb agreement and other prosaic English rules. I say, lean into it. Let someone challenge your bad grammar, and then look down your nose at them, and say: "I'm blue-skying the infosphere across new domains, you wouldn't get it."
Author: Jonathan Sauzier “A rabbit met its end in the jaws of a wolf dog only months ago in this winter barren, by this tree,” Shyla said, pointing. “Is that so?” I asked. She was eager, and, like always, I was already mesmerized. “Yes, right there, right there at the base, where all those dead […]
Because I am bad at giving up on things, I’ve been running my own email
server for over 20 years. Some of that time it’s been a PC at the end of a
DSL line, some of that time it’s been a Mac Mini in a data centre, and some
of that time it’s been a hosted VM. Last year I decided to bring it in
house, and since then I’ve been gradually consolidating as much of the rest
of my online presence as possible on it. I mentioned this on
Mastodon and a
couple of people asked for more details, so here we are.
First: my ISP doesn’t guarantee a static
IPv4 unless I’m on a business plan, and that seems like it’d cost a bunch
more, so I’m doing what I described
here: running a Wireguard link
between a box that sits in a cupboard in my living room and the smallest
OVH instance I could get, with an additional IP
address allocated to the VM and NATted over the VPN link. The practical
outcome of this is that my home IP address is irrelevant and can change as
much as it wants - my DNS points at the OVH IP, and traffic to that all ends
up hitting my server.
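The NAT half of that is only a couple of rules on the OVH VM. A sketch with made-up addresses - 203.0.113.50 standing in for the extra OVH IP, 10.0.0.1 and 10.0.0.2 for the VPS and home ends of the Wireguard link - rather than my actual rules:

# let the VM forward packets at all
sysctl -w net.ipv4.ip_forward=1

# steer anything arriving for the extra IP down the tunnel
iptables -t nat -A PREROUTING -d 203.0.113.50 -j DNAT --to-destination 10.0.0.2

# rewrite the source so replies go back through the tunnel
iptables -t nat -A POSTROUTING -d 10.0.0.2 -j SNAT --to-source 10.0.0.1

The SNAT keeps the sketch simple but hides real client addresses from the home box; a routing rule on the home end that sends replies back over the tunnel is the nicer fix.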
The server itself is pretty uninteresting. It’s a refurbished HP EliteDesk
which idles at 10W or so, along with 2TB of NVMe and 32GB of RAM that I found
under a pile of laptops in my office. We’re not talking rackmount Xeon
levels of performance, but it’s entirely adequate for everything I’m doing
here.
So. Let’s talk about the services I’m hosting.
Web
This one’s trivial. I’m not really hosting much of a website right now, but
what there is is served via Apache with a Let’s Encrypt certificate. Nothing
interesting at all here, other than the proxying that’s going to be relevant
later.
Email
Inbound email is easy enough. I’m running Postfix with a pretty stock
configuration, and my MX records point at me. The same Let’s Encrypt
certificate is there for TLS delivery. I’m using Dovecot as an IMAP server
(again with the same cert). You can find plenty of guides on setting this
up.
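Reusing the web certificate comes down to a couple of lines in each daemon. A sketch with a placeholder hostname, not my actual config:

# /etc/postfix/main.cf
smtpd_tls_cert_file = /etc/letsencrypt/live/example.com/fullchain.pem
smtpd_tls_key_file = /etc/letsencrypt/live/example.com/privkey.pem

# /etc/dovecot/conf.d/10-ssl.conf (the leading < tells Dovecot to read the file)
ssl_cert = </etc/letsencrypt/live/example.com/fullchain.pem
ssl_key = </etc/letsencrypt/live/example.com/privkey.pem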
Outbound email? That’s harder. I’m on a residential IP address, so if I send
email directly nobody’s going to deliver it. Going via my OVH address isn’t
going to be a lot better. I have a Google Workspace, so in the end I just
made use of Google’s SMTP relay
service. There are
various commercial alternatives available; I just chose this one because it
didn’t cost me anything more than I’m already paying.
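In Postfix terms the relay is essentially one setting - a sketch, since Google's relay can authenticate by registered IP or by SMTP auth, and the rest depends on the Workspace configuration:

# /etc/postfix/main.cf
relayhost = [smtp-relay.gmail.com]:587
smtp_tls_security_level = encrypt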
Blog
My blog is largely static content generated by
Hugo. Comments are Remark42
running in a Docker container. If you don’t want to handle even that level
of dynamic content you can use a third party comment provider like
Disqus.
Mastodon
I’m deploying Mastodon pretty much along the lines of the upstream compose
file. Apache
is proxying /api/v1/streaming to the websocket provided by the streaming
container and / to the actual Mastodon service. The only thing I tripped
over for a while was the need to set the “X-Forwarded-Proto” header since
otherwise you get stuck in a redirect loop of Mastodon receiving a request
over http (because TLS termination is being done by the Apache proxy) and
redirecting to https, except that’s where we just came from.
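The resulting proxy stanza is short. A sketch assuming the compose file's default ports (4000 for streaming, 3000 for the web service), not my exact VirtualHost:

# needs mod_proxy, mod_proxy_http and mod_proxy_wstunnel
RequestHeader set X-Forwarded-Proto "https"
ProxyPreserveHost On
ProxyPass /api/v1/streaming ws://127.0.0.1:4000/api/v1/streaming
ProxyPass / http://127.0.0.1:3000/
ProxyPassReverse / http://127.0.0.1:3000/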
Mastodon is easily the heaviest part of all of this, using around 5GB of RAM
and 60GB of disk for an instance with 3 users. This is more a point of
principle than an especially good idea.
Bluesky
I’m arguably cheating here. Bluesky’s federation model is quite different to
Mastodon - while running a Mastodon service implies running the webview and
other infrastructure associated with it, Bluesky has split that into
multiple
parts. User
data is stored on Personal Data Servers, then aggregated from those by
Relays, and then displayed on Appviews. Third parties can run any of these,
but a user’s actual posts are stored on a PDS. There are various reasons to
run the others, for instance to implement alternative moderation policies,
but if all you want is to ensure that you have control over your data,
running a PDS is sufficient. I followed these
instructions,
other than using Apache as the frontend proxy rather than nginx, and it’s
all been working fine since then. In terms of ensuring that my data remains
under my control, it’s sufficient.
Backups
I’m using borgmatic, backing up to a local
Synology NAS and also to my parents’ home (where I have another HP EliteDesk
set up with an equivalent OVH IPv4 fronting setup). At some point I’ll check
that I’m actually able to restore them.
Conclusion
Most of what I post is now stored on a system that’s happily living under a
TV, but is available to the rest of the world just as visibly as if I used a
hosted provider. Is this necessary? No. Does it improve my life? In no
practical way. Does it generate additional complexity? Absolutely. Should
you do it? Oh good heavens no. But you can, and once it’s working it largely
just keeps working, and there’s a certain sense of comfort in knowing that
my online presence is carefully contained in a small box making a gentle
whirring noise.
My father died last week. I am now the sole surviving member of my immediate family. My sister died in 2020 and my mother in 2024. The experience of dealing with three deaths in the family has been... interesting. There are things I know now that I wish I'd known then, and things I can reveal now that I could not before. I'm writing this in the hope that
Journalists based in Canberra are a compromised lot, something that was glaringly evident by their performance — or more accurately the lack of it — during an address to the National Press Club by the new Israeli Ambassador, Dr Hillel Newman, on Tuesday (March 31).
Though many of the questions started out promisingly, they were not prosecuted to any satisfactory conclusion, allowing Dr Newman, a seasoned PR practitioner who has served as consul-general in Los Angeles and ambassador in Tajikistan and Uzbekistan, to override the queries with statements that were mostly half-truths.
For example, Matthew Knott, defence and national security reporter at the Sydney Morning Herald, started off the Q&A by asking Dr Newman about the new law passed by the Knesset which can impose the death penalty on Palestinians who are found guilty of terrorism offences that threaten the security of Israel.
Knott is one of two journalists from the SMH who wrote a report on 7 March 2023 claiming that China was set to launch a war to retake Taiwan within the next three years. Both he and his collaborator in this report, Peter Hartcher, the international editor of the SMH, have kept a low profile when the bogus report was raised this month.
The South African-born Dr Newman brushed aside any implications that the law resembled anything in his country of birth, saying: “Just like in the United States, in Japan and in India, which have capital punishment, Israel has the right, as a sovereign state, to decide [to use] capital punishment.”
The Israeli human rights group B’Tselem said after the law was passed: “The law is worded in such a way that it targets only Palestinians. And it will turn the killing of Palestinians into an accepted and common tool of punishment through several mechanisms.”
Journalists were ill-equipped to follow up on Knott’s initial, somewhat hesitant, question. Nobody asked why, if the law was such a non-controversial piece of legislation, its promoter, the national security minister Itamar Ben-Gvir, was celebrating by drinking wine in parliament and passing a bottle around with a broad grin on his face.
But then no Australian journalist seemed to be aware of this. Neither did anyone raise the fact that it is illegal under international law for the Knesset to pass legislation covering Palestinians in the occupied territories. An occupying power cannot generally apply its domestic laws to occupied territories.
Other Israeli rights groups including Adalah, the Public Committee Against Torture in Israel, HaMoked and Physicians for Human Rights-Israel have also condemned the law, as have some opposition parties, which announced that they would petition the High Court of Justice to nullify it.
Democrat MP Gilad Kariv, a member of the Knesset National Security Committee, called the bill an “immoral law that contradicts the foundational values of the State of Israel as a Jewish and democratic state”.
No appeal for Israel death penalty.
Dr Newman’s statements about the law were not exactly correct; he said there was a right to appeal, but the reports about the law make it clear that there is no such right. This was a hallmark of everything that Dr Newman said; he rarely gave his audience the full picture, as if daring them to contradict him.
He probably knew that most of the journalists in the room were compromised, all having gone on junkets to Israel paid for by the government in Tel Aviv.
As the independent Australian news site Michael West Media reported back in January 2024, “Should we be surprised by the national political timidity and mainstream media slant (when it comes to reporting truthfully about Israel)? In the context of a concerted four decades-long lobbying campaign by pro-Israel groups seeking to influence Australia’s media and politicians, then the answer is ‘probably not’.
“Strategies utilised by the Israel lobby have included sponsored travel to Israel for journalists and politicians alike, back-channel private diplomatic initiatives, and old-fashioned hectoring of politicians and media institutions, especially publicly funded bodies such as the ABC and SBS.”
No wonder Dr Newman seemed to be so cocksure as he spoke at the NPC on Tuesday.
His talk covered his own career and also the now-tired line of the Hamas attack on October 7 2023 having changed everything. Here, again, he repeated the old myth about Israeli women being raped on that day, a tale that has been debunked more than once. No Australian journalist questioned him about this falsehood.
About the only journalist who seemed a bit worked up about Dr Newman’s half-truths was Anna Henderson of SBS, who raised the issue of the killing of Australian aid worker Lalzawmi “Zomi” Frankcom in Gaza two years ago.
Frankcom and six others working for the aid organisation World Central Kitchen were killed when a missile fired from a drone operated by the Israel Defence Forces slammed into a truck which had the WCK logo clearly painted on its roof.
Dr Newman refused to apologise to Frankcom’s family. Australia has requested audio of the drone strike footage on behalf of the family, but Israel has so far refused to comply.
Henderson said she had been told by IDF sources that the inquiry into the incident had been closed. Dr Newman claimed the delay could be because legal cases in Israel often take years, and not because the investigation had been shelved. He said he would follow up on the case.
Not that Australians should hold any hopes of anything new emerging as a result of Dr Newman’s pledge. Prime Minister Anthony Albanese raised the Frankcom killing with Israel’s president Isaac Herzog when the latter visited Australia last month but there things have stood since then.
When Australia released a report into the killing in August 2024, Tel Aviv accused Canberra of misrepresentation and crucial omissions in its response.
Former Australian Defence Force chief Mark Binskin found the incident was caused by failures to follow IDF procedures, mistaken identification and decision-making errors exacerbated by confirmation bias.
But even this raised the hackles of those at the Israeli embassy. “The Australian Government’s statement about the report regrettably included some misrepresentations and omitted crucial details,” the embassy said in a statement on August 5, 2024.
It claimed the federal government had misrepresented the way the report was conducted, the degree of co-operation and openness exhibited by the IDF and “certain aspects of the tragic incident”.
Time and again, it became crystal clear that the Australian journalists were ill-equipped to pick apart Dr Newman’s claims. Indeed, many of them seemed nervous to even question him, something that was particularly noticeable about Ben Packham of The Australian. His report on the talk mirrors this.
Had the journalists been informed about events and history, someone would have questioned Dr Newman’s reference to antisemitism given that the residents of modern Israel have no connection to the Semitic line, the line that comes down from Noah’s third son Shem. All Israelis are converts from various European and other countries. But then is anyone in the Canberra bubble aware of this?
Also, his reference to the Bible made no sense. The word Jew was first mentioned in the book of Kings in the King James Version of the Bible, the 11th of the 39 books in the Old Testament. Of course, this was changed by the publication of the Scofield Bible by Cyrus Ingerson Scofield, who was allegedly guilty of several acts of dishonesty and fraud. He was allegedly an army deserter. He claimed titles, including Doctor of Divinity, he did not have. He abandoned his wife and children, leaving them without means of support.
As former Anglican bishop George Browning wrote: “Scofield was influenced for good by the missionary to China, Hudson Taylor, and by the publisher Dwight Moody, but he was influenced with ulterior motive by a Jewish lawyer, Samuel Untermeyer. In Unjust War Theory: Christian Zionism and the Road to Jerusalem, Prof. David W. Lutz writes, ‘Untermeyer used Scofield, a Kansas City lawyer with no formal training in theology, to inject Zionist ideas into American Protestantism’. (It is said there are more Christian Zionists than there are Jews in the world – probably by a significant margin).
“Influenced by Untermeyer and assisted by publication through the Moody Bible Institute, Scofield produced the Scofield Bible. This Bible went on to become the fundamental text of Biblical literalism and the foundation stone of American conservative evangelicalism and consequently, the American political right. Multiple millions have been published, even finding a place in the theological library when I was a student!
“So, what is the Scofield Bible about? It divides human history into seven dispensations of which we are supposedly living in the sixth. The first dispensation is titled ‘innocence’ and is, of course, the period of Adam and Eve, before sin became a human experience! You can see how Scofield followers have come to believe creation stories as literal history – as is apparently the case with a staggering percentage of US citizens.
“The dispensation we are supposedly in now in, the sixth, is titled ‘grace’ and is defined as the period between Christ’s resurrection and Christ’s return. The pre-emptive feature of this period is to be the restoration of Israel to what he claims to have been the unrestricted area from the ‘rivers to the sea’, the land promised to Abraham. According to Scofield, when this occurs, Christ will return and rule from Jerusalem for 1000 years, during which time the world’s dross will be expunged. The 1000-year reign connects ‘millennialism’ with ‘dispensationalism’ as the cornerstones of Scofield’s ‘theology’.
“Rather than diminishing in the 100+ years since Scofield’s death, his legacy has not only retained its influence, but since the 1967 Six-Day War, and Israel’s control of all Palestinian land, it has, in fact, deepened. How else can we explain why the accelerating occupation of Palestinian land against international law has been totally unchallenged and why the US has vetoed all motions at the UN Security Council that in any way criticised Israel, let alone refused to sanction it? Christian Zionists, who have influence at the highest levels of American political life, include Mike Pence, vice-president under the previous Trump administration, Sarah Palin and, of course, Mike Huckabee.
“An oft-used Biblical misquote is a distortion of Genesis 12: 3. In reference to Abraham it reads, ‘Those who bless you, I will bless, and those who curse you I will curse’. The misquote is, ‘those who bless Israel I will bless and those who curse Israel I will curse’.
“In the Biblical text, Abraham and his descendants are called chosen for one clear reason: through them all peoples of the world are to be blessed. Chosenness in the Biblical text has nothing whatsoever to do with benefit for self, least of all land at the expense of others. It is that through righteousness and mercy, harmony and justice, blessing might flow to all. This is, of course, the very opposite of how chosenness is interpreted in defence of Israel’s outrageous actions.
“Returning then to where we began, given America and American politics have, and will have, enormous power over the future of both Israel and Palestine, the future for Palestinians will remain bleak if this grotesque distortion of Christian, let alone Biblical, truth remains at the heart of US policy and decision-making.
“All three Abrahamic religions have reason to cherish extraordinary contributions to the well-being and harmony of life on this planet. But equally, all three have reason to seriously repent of having pursued agendas that are not core matters of belief, but flow from partisan rivalry and desire for sovereignty and power.”
How many Australian journalists have chosen to delve this far into matters such as these?
But I digress. Dr Newman attempted to cast blame for the ongoing stoush with Iran on that country, omitting completely the fact that Israel was the one to kick off hostilities. When asked about the invasion of Lebanon, a sovereign nation, Dr Newman blamed Hezbollah, saying: “The fact that any missile is launched is a situation which is intolerable… We have no beef with Lebanon right now, but they attacked so we have to respond.”
Dr Newman waxed lyrical about Iran’s non-existent nuclear threat, but was never asked about Israel’s own nuclear arsenal which is said to now contain about 200 warheads. Veteran American journalist Seymour Hersh wrote a book, The Samson Option, in 1991 detailing how Israel acquired its nukes, and also cited one instance when it had used that arsenal to blackmail the US into acting against its foes.
That was during the 1973 Ramadan War (Israel calls it the Yom Kippur war) which began when Egypt and Syria attacked Israel on October 6, 1973. While there was genuine fear in Israel, the country called its first nuclear alert and used that to blackmail the US into helping it get out of its corner. What was known as Israel’s kitchen cabinet met and decided to make its nuclear missile launchers operational along with eight specially marked F-4s that were on 24-hour alert. The initial targets were the Egyptian and Syrian military headquarters near Cairo and Damascus.
The Soviet Union was not targeted but there were signals that were intercepted by Israeli intelligence and these were taken to be from Soviet operatives in the country. It was hoped that the Soviets would urge their allies in Egypt and Syria to limit their offensive.
The man calling the tune was the late Henry Kissinger who had made no secret of his strategy: “to let Israel come out ahead but bleed” so that it would be amenable to discussing peace. Israel put its operational nuclear missile launchers out in the open to ensure that American spy satellites would see them. Finally, the US had to back down and resupply Israel so it could fight back and end the threat to the country. By October 14, Israel removed the nukes from their forward positions.
An instance of Israel using its nukes to ensure American action came during the 1991 Gulf war in which the US led a coalition to oust Iraq from Kuwait. American officials knew that Israel’s entry into the war would ruin the whole operation and so there were many visits to Tel Aviv to convince then Israeli prime minister, Yitzhak Shamir, to stay out of the action on the understanding that Israel would be protected.
But Shamir, a former terrorist in the Irgun militia, was a crafty old fox. He had his nuclear officials position weapons on trailers on rail lines in a position where they would be definitely spotted by the Americans, just to ensure that the US would keep its word.
Dr Newman, of course, as an educated man, would be fully cognisant of each and every thing I have mentioned. And a lot more. But then he has a job to do. His job is hasbara, not communicating the truth. I must say he does a pretty good job.
Charles Bennett and Gilles Brassard have won the 2026 Turing Award for inventing quantum cryptography.
I am incredibly pleased to see them get this recognition. I have always thought the technology to be fantastic, even though I think it’s largely unnecessary. I wrote up my thoughts back in 2008, in an essay titled “Quantum Cryptography: As Awesome As It Is Pointless.”
Back then, I wrote:
While I like the science of quantum cryptography—my undergraduate degree was in physics—I don’t see any commercial value in it. I don’t believe it solves any security problem that needs solving. I don’t believe that it’s worth paying for, and I can’t imagine anyone but a few technophiles buying and deploying it. Systems that use it don’t magically become unbreakable, because the quantum part doesn’t address the weak points of the system.
Security is a chain; it’s as strong as the weakest link. Mathematical cryptography, as bad as it sometimes is, is the strongest link in most security chains. Our symmetric and public-key algorithms are pretty good, even though they’re not based on much rigorous mathematical theory. The real problems are elsewhere: computer security, network security, user interface and so on.
Cryptography is the one area of security that we can get right. We already have good encryption algorithms, good authentication algorithms and good key-agreement protocols. Maybe quantum cryptography can make that link stronger, but why would anyone bother? There are far more serious security problems to worry about, and it makes much more sense to spend effort securing those.
As I’ve often said, it’s like defending yourself against an approaching attacker by putting a huge stake in the ground. It’s useless to argue about whether the stake should be 50 feet tall or 100 feet tall, because either way, the attacker is going to go around it. Even quantum cryptography doesn’t “solve” all of cryptography: The keys are exchanged with photons, but a conventional mathematical algorithm takes over for the actual encryption.
What about quantum computation? I’m not worried; the math is ahead of the physics. Reports of progress in that area are overblown. And if there’s a security crisis because of a quantum computation breakthrough, it’s because our systems aren’t crypto-agile.
Although I never submitted to it, I made several appearances in the now-defunct quote database at bash.org (QDB). I’m dealing with a broken keyboard now, and had to dig hard to find this classic in the Wayback Machine. I thought I would put it back on the web:
<mako> my letter "eye" stopped worng
<luca> k, too?
<mako> yeah
<luca> sounds like a mountain dew spill
<mako> and comma
<mako> those three
<mako> ths s horrble
<luca> tme for a new eyboard
<luca> 've successfully taen my eyboard apart and fxed t by cleanng t wth alcohol
<mako> stop mang fun of me
<mako> ths s a laptop!
Legacy cloud templates often lack the partitioning and bootloader
binaries required for UEFI Secure Boot. Attempting to switch such a VM
to OVMF in Proxmox results in “not a bootable disk.” We discovered that
a surgical promotion is possible by manipulating the block device and
EFI variables from the hypervisor.
The Problem
Protective MBR Flags: Legacy installers often set
the pmbr_boot flag on the GPT’s protective MBR. Strict UEFI
implementations (OVMF) will ignore the GPT if this flag is present.
Missing ESP: Cloud images often lack a FAT32 EFI
System Partition (ESP).
Variable Store: A fresh Proxmox efidisk0 is empty and lacks both the trust certificates
(PK/KEK/db) and the BootOrder entries required for an automated
boot.
The “Promotion” Rule
To upgrade a SeaBIOS VM to Secure Boot without a full OS reinstall:
1. Surgical Partitioning: Map the disk on the host and add a FAT32 partition (Type EF00). Clear the pmbr_boot flag from the MBR.
2. Binary Preparation: Boot the VM in SeaBIOS mode to install the shim and grub-efi packages. Use grub2-mkconfig to populate the new ESP.
3. Trust Injection: Use the virt-fw-vars utility on the hypervisor to programmatically enroll the Red Hat/Microsoft CA keys and any custom certificates (e.g., FreeIPA CA) into the VM’s efidisk.
4. Boot Pinning: Explicitly set the UEFI BootOrder to point to the shimx64.efi path via virt-fw-vars --append-boot-filepath.
Solution (Example Command Sequence)
On the Proxmox Host (root):
# Map and Clean MBR
DEV=$(rbd map pool/disk)
parted -s $DEV disk_set pmbr_boot off

# Inject Trust and Boot Path (VM must be stopped)
virt-fw-vars --inplace /dev/rbd/mapped_efidisk \
    --enroll-redhat \
    --add-db <GUID> /path/to/ipa-ca.crt \
    --append-boot-filepath '\EFI\centos\shimx64.efi' \
    --sb
This workflow enables high-integrity Secure Boot environments using
existing SeaBIOS infrastructure templates.
This is for new routers; you don’t have to throw away your existing ones:
The Executive Branch determination noted that foreign-produced routers (1) introduce “a supply chain vulnerability that could disrupt the U.S. economy, critical infrastructure, and national defense” and (2) pose “a severe cybersecurity risk that could be leveraged to immediately and severely disrupt U.S. critical infrastructure and directly harm U.S. persons.”
Any new router made outside the US will now need to be approved by the FCC before it can be imported, marketed, or sold in the country.
In order to get that approval, companies manufacturing routers outside the US must apply for conditional approval in a process that will require the disclosure of the firm’s foreign investors or influence, as well as a plan to bring the manufacturing of the routers to the US.
Certain routers may be exempted from the list if they are deemed acceptable by the Department of Defense or the Department of Homeland Security, the FCC said. Neither agency has yet added any specific routers to its list of equipment exceptions.
[…]
Popular brands of router in the US include Netgear, a US company, which manufactures all of its products abroad.
One exception to the general absence of US-made routers is the newer Starlink WiFi router. Starlink is part of Elon Musk’s company SpaceX.
Presumably US companies will start making home routers, if they think this policy is stable enough to plan around. But they will be more expensive than routers made in China or Taiwan. Security is never free, but policy determines who pays for it.
The 2026 US “Cyber Strategy for America” document is mostly the same thing we’ve seen out of the White House for over a decade, but with a more aggressive tone.
But one sentence stood out: “We will unleash the private sector by creating incentives to identify and disrupt adversary networks and scale our national capabilities.” This sounds like a call for hackback: giving private companies permission to conduct offensive cyber operations.
In warfare, the notion of counterattack is extremely powerful. Going after the enemy—its positions, its supply lines, its factories, its infrastructure—is an age-old military tactic. But in peacetime, we call it revenge, and consider it dangerous. Anyone accused of a crime deserves a fair trial. The accused has the right to defend himself, to face his accuser, to an attorney, and to be presumed innocent until proven guilty.
Both vigilante counterattacks and preemptive attacks fly in the face of these rights. They punish people who haven’t been found guilty. It’s the same whether it’s an angry lynch mob stringing up a suspect, the MPAA disabling the computer of someone it believes made an illegal copy of a movie, or a corporate security officer launching a denial-of-service attack against someone he believes is targeting his company over the net.
In all of these cases, the attacker could be wrong. This has been true for lynch mobs, and on the internet it’s even harder to know who’s attacking you. Just because my computer looks like the source of an attack doesn’t mean that it is. And even if it is, it might be a zombie controlled by yet another computer; I might be a victim, too. The goal of a government’s legal system is justice; the goal of a vigilante is expediency.
We don’t issue letters of marque on the high seas anymore; we shouldn’t do it in cyberspace.
Last week, I listened to a fascinating talk by K. Melton on cognitive security, cognitive hacking, and reality pentesting. The slides from the talk are here, but - even better - Melton has a long essay laying out the basic concepts and ideas.
The whole thing is important and well worth reading, and I hesitate to excerpt. Here’s a taste:
The NeuroCompiler is where raw sensory data gets interpreted before you’re consciously aware of it. It decides what things mean, and it does this fast, automatic, and mostly invisible. It’s also where the majority of cognitive exploits actually land, right in this sweet spot between perception and conscious thought.
This is my term for what Daniel Kahneman called System 1 thinking. If the Sensory Interface is the intake port, the NeuroCompiler is what turns that input into “filtered meaning” before the Mind Kernel ever sees it. It takes raw signal (e.g., photons, sound waves, chemical gradients, pressure) and translates it into something actionable based on binary categories like threat or safe, familiar or novel, trustworthy or suspicious.
The speed is both an evolutionary feature and a modern bug. Processing here is fast enough to get you out of the way of a thrown object before you’ve consciously registered it. But "good enough most of the time" means "predictably wrong some of the time…"
A critical architectural feature: the NeuroCompiler can route its output directly back to the Sensory Interface and out as behavior, skipping the conscious awareness of the Mind Kernel entirely. Reflex and startle responses use this mechanism, making this bypass pathway enormously useful for survival. Yet it leaves a wide-open backdoor. If the layer that holds access to skepticism and deliberate evaluation can be bypassed completely, a host of exploits become possible that would otherwise fail.
That’s just one of the five levels Melton talks about: sensory interface, neurocompiler, mind kernel, the mesh, and cultural substrate.
Melton’s taxonomy is compelling, and her parallels to IT systems are fascinating. I have long said that a genius idea is one that’s incredibly obvious once you hear it, but one that no one has said before. This is the first time I’ve heard cognition described in this way.
The FAI.me service
has become faster over the past two months.
First, the tool fai-mirror can now download all packages
in one go (with all their dependencies) instead of downloading them one by
one. This helped a lot for the Linux Mint ISO, because it uses a long
list of packages.
I've also added a local apt cache (using apt-cacher-ng),
so the network speed does not matter any more in most cases.
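Pointing apt at such a cache is a one-liner; this is the generic apt-cacher-ng form (how FAI wires it up internally may differ), using the daemon's default port:

# /etc/apt/apt.conf.d/02proxy
Acquire::http::Proxy "http://127.0.0.1:3142";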
This led to the following improvements:
Linux Mint install ISOs went from around 6-7 minutes to now only 2 minutes.
Ubuntu install ISOs went from an average of 3 minutes to around 90 seconds.
The average time for a Debian Linux install ISO dropped from 2 minutes
to 40 seconds.
So far we have had only one problem with apt-cacher-ng, because the
underlying partition was full.
Building cloud and live images does not gain that much from the local
package cache, because most of the time is spent extracting and installing
the packages.
Sandra from InitAg (previously) works with Bjørn, and Bjørn has some ideas about how database schemas should be organized.
First, users should never see an auto-incrementing ID. That means you need to use UUIDs. But UUIDs are large and expensive, so they should never be your primary key, use an auto-incrementing ID for that.
This is not, in and of itself, a radical or ridiculous statement. I've worked on many a database that followed similar rules. I've also seen "just use a UUID all the time" become increasingly common, especially on distributed databases, where incrementing counters is expensive.
One can have opinions and disagreements about how we handle IDs in a database, but I wouldn't call anything a WTF there.
No, the WTF is how Bjørn would design his cross-reference tables. You know, the tables which exist to permit many-to-many relationships between two other tables? Tables that should just be tableA.id and tableB.id?
Table "public.foo_bar"
Column | Type | Collation | Nullable | Default
-----------+------------------------+-----------+----------+------------------------------------
id | integer | | not null | nextval('foo_bar_id_seq'::regclass)
foo_id | integer | | not null |
bar_id | integer | | not null |
uuid | character varying(128) | | not null |
Yes, every row in this table has an ID (which isn't itself a terrible choice) and a UUID, despite the fact that the ID of these rows should never end up in output anyway: the table exists only to facilitate queries, not to store any actual data.
I guess, what's the point of having a rule if you don't follow it unthinkingly at all times?
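For contrast, the boring cross-reference table Bjørn was avoiding needs neither column. A generic sketch, not the submitter's schema:

-- the pair of foreign keys is the row's identity; no id, no uuid
CREATE TABLE foo_bar (
    foo_id integer NOT NULL REFERENCES foo(id),
    bar_id integer NOT NULL REFERENCES bar(id),
    PRIMARY KEY (foo_id, bar_id)
);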
Author: Kewei Chen On that planet, memory was not confined to a single organ. It existed as distributed biochemical patterns within neural tissue, transferable between minds. Death no longer erased experience; memories could be preserved, copied, and integrated. Yet inheritance was not passive—it reshaped identity, layered cognition, and introduced tension between the original self and […]
Code Blue—Emergency (annoying em-dash in original title) is the
seventh book of James White's Sector General science fiction series about
a vast multi-species hospital station. While there are some references to
(and spoilers for) earlier books in the series, you don't have to remember
the previous books to read this one. I had no trouble despite a nine-year
gap.
I read this as part of the Orb General Practice omnibus, which
collects this novel and The Genocidal Healer.
Cha Thrat is a Sommaradvan warrior-surgeon, member of a newly-discovered
species that is beginning the process of contact with the Federation. She
saved a Monitor Corps human after an accident on her world, performing
some highly competent surgery on a species she had never seen before.
That, plus her somewhat outcast status on her own world due to her very
traditional attitude towards medical ethics, led Sector General to extend
an offer of medical internship, and led her to leap into the unknown by
accepting. This may have been a mistake; there is a great deal that Sector
General does not understand about Sommaradvan medical ethics.
This series entry is another proper (if somewhat episodic) novel and the
first book of the series that doesn't primarily focus on Conway. He makes
an appearance in his new role as Diagnostician, but only as a supporting
character. Code Blue—Emergency is told in the tight third-person
perspective of Cha Thrat, an alien who finds many things about Sector
General baffling, confusing, and ethically troubling (and who therefore
provides a good reader surrogate for reintroducing the basics of how the
hospital works).
Using an alien viewpoint is a more sophisticated narrative technique than
White has used previously. I'm glad he tried it, and it mostly works,
although I have some complaints. Cha Thrat comes from the middle caste of
a strictly hierarchical society of three castes, but is also immensely
stubborn and used to a medical system in which doctors take sole
responsibility for their patients. This creates a lot of cultural
conflicts, and I do enjoy science fiction where the human attitudes are
portrayed as the strange ones, but the cultural analysis offered by this
novel is not very deep.
The pattern of this book is for Cha Thrat to stumble into a successful
approach to a problem while being either oblivious to or hostile to the
normal hierarchical structure expected of medical trainees. This is
believable as far as it goes. She is a skilled and intelligent doctor with
some good instincts and a strong commitment to patient care, but is also
culturally inclined to not ask for help. It makes sense for that to be a
serious problem in a hospital. Unfortunately, no one says this directly.
Sector General staff get quite upset in ways that seem more territorial
than oriented towards patient safety, no one directly explains to Cha
Thrat why following a process is important or shows examples of what could
go wrong, and plot armor means that her mistakes usually have positive
outcomes. One can extrapolate the reasons why she is not a good medical
student, but the reader is forced to do the extrapolation.
This is the sort of book where the narration makes clear there are
unresolved cultural clashes that are going to cause problems but hides the
details. To Cha Thrat, her perspective is so obvious she never bothers to
explain it to the reader, so the specifics come as a surprise. As with the
alien perspective, I've seen this technique used with more subtlety and
sophistication in other books, but White's version mostly works. Cha Thrat
is a sympathetic protagonist because she is truly trying to take the most
ethical and empathetic action in every situation and is clearly competent.
Most of my frustration as a reader, ironically, lands on the other Sector
General doctors who seem to make little to no effort to understand her
perspective when she fails to conform to their expectations. This is
believable in the abstract, but the whole point of Sector General is that
they're supposed to be wiser about interspecies difference than this.
Also, sometimes their reactions just seem petty. Cha Thrat has a very
hierarchical concept of medicine that matches the social classes of her
culture. For her, the highest tier of doctors are wizards who treat rulers,
because the work of rulers is mostly mental and intellectual and therefore
the diseases of rulers are treated with magic spells performed with words
to reshape their thinking rather than surgery on their bodies. O'Mara and
the other Sector General psychologists take great offense at this,
muttering about being called witch doctors, which I found completely
absurd. This is a comprehensible, if odd, description of psychology from a
wholly alien species. Surely one's first reaction should be that words
like "wizard" or "magic" are translation errors. Don't get offended; look
to see if the underlying substance matches, which it clearly does.
Apart from cultural and psychological clashes, Code Blue—Emergency
has the standard episodic Sector General structure of interesting medical
mysteries that require lateral thinking. I find this sort of puzzle story
satisfying, particularly given the firm belief of every character in an
essentially pacifist and empathetic approach to even the most alien of
creatures. This determined non-violence is one of the more interesting
things about this series, and it continues here.
White does tend towards both biological and gender essentialism for
everyone other than the protagonist and main supporting characters, but he
seems to be walking back some of the more outrageous limitations on women
that appeared in previous books. There is still some nonsense in here
about how females of any species can't be Diagnosticians, but then Cha
Thrat, who is female, seems to violate the justification for that rule
over the course of this novel (sadly without comment). Perhaps he's
setting up for proving Sector General wrong about this prejudice.
I picked this up after reading Elizabeth Bear's Machine, which is essentially a (better written) Sector General
novel that got me in the mood for reading more. I wouldn't give Code
Blue—Emergency any awards, but it delivered exactly what I was looking
for. This series is not as deep or well-written as some more recent SF,
but it is reliably itself and reliably entertaining. There are worse
things in a series. Recommended if you're in the mood for alien ER
in space.
The omnibus edition that I read has an introduction to both novels by John
Clute. It does add some interesting insights, but (as is somewhat typical
for Clute) it also spoils parts of both books. You may want to read it
after you read the novels.
Chris Mitchell, the former editor of The Australian, has accused the world’s media at large of wanting Iran to win the current war with the US and Israel.
Mitchell, who somehow reminds me of a bulldog, was out there this week, castigating journalists for believing any statement made by Iranian officials. US President Donald Trump has claimed that Washington is negotiating with Iran, but a majority of the media has tended to believe what Iran has had to say.
Mitchell doesn’t like this. Essentially, a man who loves to claim he is a journalist is asking newspapers to take sides and barrack for one side, rather than report fairly on the ongoing stoush.
Ranted Mitchell: “The Nine papers in Australia, most commercial broadcasters and the ABC last week reported Iran’s denials that it was negotiating with Trump, based on statements from Iran’s Foreign Minister Abbas Araghchi and the speaker of its parliament, Mohammad Bagher Ghalibaf.
“The Iranians would say that. Several layers of the regime have been eliminated and it is unlikely anyone privately talking to the US via intermediaries would admit to it given what the Iranian Revolutionary Guard Corps might do to them.”
His entire column is taken up arguing that reports which tend to favour Iran are wrong and those that favour Israel and the US are more inclined to be correct. In true Murdochian style, he does not provide any evidence to buttress his claims.
“Analysis of the war should not gloss over miscalculations by the US and Israel,” he writes.
“Equally, the rush to doomsday pessimism undermines journalism’s credibility and ignores the plight of ordinary Iranians and the Sunni Gulf States.
“If Trump derangement syndrome rules in much of the coverage, outright hostility to Israel’s right to defend itself dominates reporting of its war aims in Lebanon, where Benjamin Netanyahu is determined to drive Iran’s Hezbollah proxy from positions south of the Litani River, perhaps even occupying southern Lebanon.”
The fact that Israel has invaded another sovereign country, Lebanon, is lost on Mitchell. In his world, Israel is just exercising its right to defend itself. He fails to notice that Israel first attacked Iran without any provocation and did the same to Lebanon, a country which was just minding its own business.
The ABC, a favourite target of The Australian, was not forgotten either. “The ABC, like Britain’s BBC, is always on the lookout for innocent Lebanese civilian victims but seems unable to find innocent Israelis affected by Hezbollah rockets across northern Israel,” Mitchell wrote.
“Remember, 60,000 Israelis had to leave their homes in the country’s north for almost two years before the November 27, 2024 truce.”
One is left wondering what would satisfy Mitchell. Banner headlines praising Trump and Netanyahu? Hagiography of the sort indulged in by the compliant American media? But then this is par for the course with Mitchell. I pointed out something similar in 2021 when the subject he was dealing with was electric vehicles. Looks like it is similar when it comes to war.
At May First we have been carefully planning our
migration of about 1200 lists from mailman2 to mailman3 for almost six months
now. We did a lot of user communications, had several months of beta testing
with a handful of lists ported over, and everything was looking good. So we
kicked off the migration!
But, about 15% of the way through I started seeing sqlite lock errors. Wait,
what? I had carefully re-configured mailman3 to use postgres, not sqlite. Well,
yes, but apparently that was for the database managing the email list
configuration, not the database powering the django web app, which,
incidentally, also includes hundreds of gigabytes of archives. In other words,
the database we really needed in postgres was the one still in sqlite.
Moving from sqlite to postgres
Well that sucks. We immediately stopped the migration to deal with this.
I noticed that the web is full of useful django instructions on how to migrate
your database from one backend to another. However, if you read the fine
print, those convenient looking “dumpdata/loaddata” workflows are designed
to move the table definitions and a small amount of data. In our case, even
after just 15% of our lists moved, our sqlite database was about 30GB.
I considered some of the hacks to manage memory and try to run this via django,
but eventually decided that pgloader was a more robust
option. This option also allowed me to more easily test things out on a copy of
our sqlite database (made while mailman was turned off). This way I could
migrate and re-migrate the sqlite database over and over without impacting our
live installation until I was satisfied it was all working.
My first decision was to opt out of pgloader’s schema creation. I used django’s
schema creation tool by:
1. Turning off mailman3 and mailman3-web and changing the mailman web
configuration to use the new postgresql database.
2. Running mailman-web migrate.
3. Changing the mailman web configuration back to sqlite and starting
everything again.
Note: I tried just adding new database settings in the mailman web
configuration indexed to ’new’ - django has the ability to define different
databases by name, then you can run mailman-web migrate --database new. But,
during the migration, I caught django querying the sqlite database for some
migrations that required referencing existing fields (specifically hyperkitty’s
0003_thread_starting_email). I didn’t want any of these steps to touch the
live database so I opted for the cleaner approach.
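For reference, the abandoned multi-database approach would have looked
something like this in the mailman web configuration (a sketch only; the
path, names, and credentials are placeholders, not our real settings):

DATABASES = {
    # the live sqlite database django was still using (placeholder path)
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': '/var/lib/mailman3/web/mailman3web.db',
    },
    # the postgres target, addressable as --database new (placeholder credentials)
    'new': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mailman3web',
        'USER': 'mailman3web',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
    },
}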
Once I had a clean postgres schema, I dumped it so I could easily return to
this spot.
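Something along these lines (the database name is a placeholder):

# snapshot just the table definitions, no data
pg_dump --schema-only mailman3web > mailman3web-schema.sql

Getting back to the empty schema between test runs is then just a dropdb,
createdb and psql mailman3web < mailman3web-schema.sql away.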
Next I started working on our pgloader load file. After a lot of trial and
error, I ended with:
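(What follows is a sketch: the connection strings are placeholders and the
tuning numbers illustrative, not our exact production values.)

LOAD DATABASE
    -- placeholders: adjust the path, credentials and numbers for your setup
    FROM sqlite:///var/lib/mailman3/web/mailman3web.db
    INTO postgresql://mailman3web:secret@localhost/mailman3web

WITH data only,
     disable triggers,
     reset sequences,
     batch rows = 10000,
     batch size = 100 MB,
     prefetch rows = 10000,
     workers = 2,
     concurrency = 1;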
The batch, prefetch, workers and concurrency settings are all there to ensure
memory doesn’t blow up.
I also discovered that I had to make some changes to the schema before loading
data. Mostly this meant truncating tables that the django migrate command had
populated, to avoid duplicate key errors:
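(A sketch; the exact set of tables depends on which django apps are
installed, so treat this list as illustrative.)

-- illustrative list: tables that mailman-web migrate pre-populates
TRUNCATE django_migrations,
         django_content_type,
         auth_permission,
         django_site
    CASCADE;

Note that CASCADE also clears any rows in tables that reference these via
foreign keys.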
And also, I had to change a column type. Apparently the mailman import process
allowed an attachment file name that exceeds the length limit enforced by
postgres, but that sqlite had happily accepted:
ALTER TABLE hyperkitty_attachment ALTER COLUMN name TYPE text;
When pgloader runs, we still get a lot of warnings from pgloader, which wants
to cast columns differently than django does. These are harmless (I was able to
import the data without a problem).
And there are still a lot of warnings along the lines of:
2026-03-30T14:08:01.691990Z WARNING PostgreSQL warning: constraint "hyperkitty_vote_email_id_73a50f4d_fk_hyperkitty_email_id" of relation "hyperkitty_vote" does not exist, skipping
These are harmless as well. They appear because disable triggers disables
foreign key constraints. Without it, we wouldn’t be able to load tables that
require values in tables that have not yet been populated.
After all the tweaking, the import of our 30GB sqlite database took about 40
minutes.
Final Steps
I think the reset sequences option in the pgloader load file should take care of the auto-increment sequence values, but just in case:
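(A sketch; django’s built-in sqlsequencereset command generates the reset
SQL, and the app list and database name below are illustrative.)

# regenerate and apply the sequence-reset SQL for the usual mailman-web apps
mailman-web sqlsequencereset django_mailman3 hyperkitty postorius | psql mailman3web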
And, just to ensure postgres is optimized, run this in the psql shell:
ANALYZE VERBOSE;
Last thoughts
I understand very well all the decisions the mailman3 devs made in designing
the next version of mailman, and if I had been in their place I might have
made the same ones. For example, separating the code running the mailing list
from the code managing the archives and the web interface makes perfectly good
sense - many people might want to run just the mailing list part without a web
interface. And building the web interface in django makes a lot of sense as
well - why re-invent the wheel? I’m sure a lot of time and effort was saved by
simply using the built in features you get for free with django.
But the unfortunate consequence of these decisions is that sys admins have a
much harder time. Almost everyone wants the email lists along with the web
interface and the archives. But nobody wants two different configuration files
with different syntaxes and logic, not to mention two different command lines
to use for maintenance and configuration with completely different APIs. Trying
to understand how to change a default template or set list defaults requires a
lot of research and usually you have to write a python script to do it.
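To give a flavor: changing even one default across all lists means reaching
for the REST API, for example with the mailmanclient library (the URL and
credentials here are the stock documentation values, not ours):

from mailmanclient import Client

# connect to mailman core's REST API (stock example URL/credentials)
client = Client('http://localhost:8001/3.1', 'restadmin', 'restpass')

# flip one setting on every list
for mlist in client.lists:
    settings = mlist.settings
    settings['default_member_action'] = 'defer'
    settings.save()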
I have finally come to the conclusion that mailman2 is designed for sys admins,
while mailman3 is designed for developers.
Despite these shortcomings, I am impressed with the community and their quick
and friendly responses to the questions of a confused sys admin. That might be
more valuable than anything else.
Author: Majoki
Scientists in the early 19th Century were distasteful number crunchers. Human abaci of little worth or note. They should have remained so. What of numbers? What of measurement? Metrics only make us more necessary beings. Why run the numbers when you can let the numbers run you? That was the unspoken question that […]
Angela's team had some regular jobs which ran in the mornings. The jobs were fairly time consuming, and did a lot of database IO. When their current database person left for another job, they hired someone who had a "good grasp" on SQL. We'll call him Barry.
Barry started out by checking the morning jobs every day. And over time, the morning jobs started getting slower and slower. That was a concern, but Barry swore he had it under control. Barry did not share that a handful of slow queries- queries which took three or so minutes to run- had suddenly started taking 75+ minutes to run. Barry didn't think about the fact that a little time with the query planner and some indexes could have probably gotten performance back to where it should have been. Barry saw this problem and decided: "I'll write a Python script".
import time
from datetime import datetime, timedelta
import pytz  # for time zone

current_date = datetime.now()
day_number = current_date.weekday()  # integer value: 0 is Monday

hub_1_ready = False
hub_2_ready = False
hub_1_results = []
hub_2_results = []
job_ran_later = False

# If this job is manually run later in the day, avoid sending a "both hubs failed" email
# Monday (day_number 0) runs later than the other 6 days
if day_number == 0:
    end_time = datetime.strptime("08:30", "%H:%M")
    end_time = end_time.time()  # get just the time portion
else:
    end_time = datetime.strptime("07:30", "%H:%M")
    end_time = end_time.time()  # get just the time portion

# If this job is run later in the day than the normally scheduled time
if datetime.now(pytz.timezone('US/Central')).time() > end_time:
    job_ran_later = True

# Starting when Morning jobs are scheduled to kick off, check for completion of both hubs
# every 3 minutes until end_time. If both hubs are not a Success by end_time, an email is sent
while datetime.now(pytz.timezone('US/Central')).time() < end_time:
    h1 = session.sql("SELECT LOG_STATUS FROM PROD_CTRL.CTRL.DRB_EXECUTION_LOG WHERE LOG_PROJECT = 'SRC_PROD_1' AND date(log_start_date) = current_date AND date(LOG_END_DATE) = current_date").take(1)
    hub_1_results = []
    hub_1_results.append(h1)
    if str(hub_1_results[0]) == "[Row(LOG_STATUS='SUCCESS')]":
        hub_1_ready = True
    h2 = session.sql("SELECT LOG_STATUS FROM PROD_CTRL.CTRL.SRC_EXECUTION_LOG WHERE LOG_PROJECT = 'SRC_PROD_2' AND date(log_start_date) = current_date AND date(LOG_END_DATE) = current_date").take(1)
    hub_2_results = []
    hub_2_results.append(h2)
    if str(hub_2_results[0]) == "[Row(LOG_STATUS='SUCCESS')]":
        hub_2_ready = True
    # If both hubs are Success, then break out of while loop, even if it's not end_time yet
    if hub_1_ready == True and hub_2_ready == True:
        break
    time.sleep(180)  # Sleep for 3 minutes before trying again

if not hub_1_ready and not hub_2_ready and job_ran_later == False:
    message = "Neither Hub_1 nor Hub_2 finished in time for Morning jobs."
    context.updateVariable('METL_MESSAGE', message)
    raise ValueError("send email: " + message)
elif hub_1_ready == False and hub_2_ready == True:
    message = "Hub_1 did not finish in time for Morning jobs."
    context.updateVariable('METL_MESSAGE', message)
    raise ValueError("send email: " + message)
elif hub_1_ready == True and hub_2_ready == False:
    message = "Hub_2 did not finish in time for Morning jobs"
    context.updateVariable('METL_MESSAGE', message)
    raise ValueError("send email: " + message)
elif job_ran_later == True:
    message = "This job was run manually later in the day. Check that both Source hubs have completed. If you did not run this job, you can probably ignore this email."
    context.updateVariable('METL_MESSAGE', message)
    raise ValueError("send email: " + message)
I don't particularly like any of this. Some of it is just little ugliness, like the fact that job_ran_later and the closing if statements could be written to be much more clear. Or the way that, after our main while loop, which we'll come back to, we compare boolean variables against boolean literals.
The core of it is the while loop, which checks the current time, and while it's before the target end time, runs a pair of queries. For each query it runs, it empties an array, then appends the result (which we know is only one row, because of the take(1)) to the array. Then it checks the first element of the array against an expected string.
Why the arrays? Who knows. Perhaps at one point they thought they'd keep the results from multiple iterations, then decided against it. Why do the check against the string in the Python code and not the query? No idea, but maybe I don't have a "good grasp" of SQL. That said, with my bad grasp, I'm pretty sure I could figure out how to do all that in one single query and not two that are almost identical.
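For what it's worth, here's a sketch of that single round trip, using the same tables and columns as the script above and moving the status check into the WHERE clause:

-- sketch: one query, one row of counts per hub
SELECT 'hub_1' AS hub, COUNT(*) AS successes
  FROM PROD_CTRL.CTRL.DRB_EXECUTION_LOG
 WHERE LOG_PROJECT = 'SRC_PROD_1'
   AND LOG_STATUS = 'SUCCESS'
   AND DATE(LOG_START_DATE) = CURRENT_DATE
   AND DATE(LOG_END_DATE) = CURRENT_DATE
UNION ALL
SELECT 'hub_2' AS hub, COUNT(*) AS successes
  FROM PROD_CTRL.CTRL.SRC_EXECUTION_LOG
 WHERE LOG_PROJECT = 'SRC_PROD_2'
   AND LOG_STATUS = 'SUCCESS'
   AND DATE(LOG_START_DATE) = CURRENT_DATE
   AND DATE(LOG_END_DATE) = CURRENT_DATE;

One query, two rows back, and each hub is "ready" when its successes count is greater than zero.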
In any case, if we don't see what we want in the database, we sleep for three minutes, then try again.
At the end of the process, we check what happened and output messages and raise exceptions based on what we did see in the database.
It's also worth noting that Angela's team used a pretty reasonable job management system. All of their other scripts doing similar jobs didn't include retry logic inside themselves- they just failed. That let the job runner decide whether or not to retry, and that allowed all sorts of valuable configuration options that are more fine-grained than "sleep for 3 minutes".
A thoughtful review of Apple’s system to alert users that the camera is on. It’s really well-designed, and important in a world where malware could surreptitiously start recording.
The reason it’s tempting to think that a dedicated camera indicator light is more secure than an on-display indicator is the fact that hardware is generally more secure than software, because it’s harder to tamper with. With hardware, a dedicated hardware indicator light can be connected to the camera hardware such that if the camera is accessed, the light must turn on, with no way for software running on the device, no matter its privileges, to change that. With an indicator light that is rendered on the display, it’s not foolish to worry that malicious software, with sufficient privileges, could draw over the pixels on the display where the camera indicator is rendered, disguising that the camera is in use.
If this were implemented simplistically, that concern would be completely valid. But Apple’s implementation of this is far from simplistic.