Author: Sam E. Sutin Sometimes, acronyms can be misleading. For example, artificial intelligence (AI) and artificial insemination (AI), while both artificial, do differ in some very important ways. In my defense, with technology evolving so quickly these past few years it has become exponentially difficult keeping track of every little modicum of advancement. I didn’t […]
This is going to be a controversial statement because some people are
absolute nerds about this, but, I need to say it.
Qalculate is the best calculator that has ever been made.
I am not going to try to convince you of this, I just wanted to put
out my bias out there before writing down those notes. I am a total
fan.
This page will collect my notes of cool hacks I do with
Qalculate. Most examples are copy-pasted from the command-line
interface (qalc(1)), but I typically use the graphical interface as
it's slightly better at displaying complex formulas. Discoverability
is obviously also better for the cornucopia of features this fantastic
application ships.
Qalc commandline primer
On Debian, Qalculate's CLI interface can be installed with:
apt install qalc
Then you start it with the qalc command, and end up on a prompt:
There's a bunch of variables to control display, approximation, and so
on:
> set precision 6
> 1/7
1 / 7 ≈ 0.142857
> set precision 20
> pi
pi ≈ 3.1415926535897932385
When I need more, I typically browse around the menus. One big issue I
have with Qalculate is there are a lot of menus and features. I had
to fiddle quite a bit to figure out that set precision command
above. I might add more examples here as I find them.
Bandwidth estimates
I often use the data units to estimate bandwidths. For example, here's
what 1 megabit per second is over a month ("about 300 GiB"):
> 1 megabit/s * 30 day to gibibyte
(1 megabit/second) × (30 days) ≈ 301.7 GiB
Or, "how long will it take to download X", in this case, 1GiB over a
100 mbps link:
> 1GiB/(100 megabit/s)
(1 gibibyte) / (100 megabits/second) ≈ 1 min + 25.90 s
Password entropy
To calculate how much entropy (in bits) a given password structure,
you count the number of possibilities in each entry (say, [a-z] is
26 possibilities, "one word in a 8k dictionary" is 8000), extract the
base-2 logarithm, multiplied by the number of entries.
For example, an alphanumeric 14-character password is:
> log2(26*2)*14
log₂(26 × 2) × 14 ≈ 79.81
... 80 bits of entropy. To get the equivalent in a diceware password
with a 8000 word dictionary, you would need:
It has been 256 days since the exchange rates last were updated.
Do you wish to update the exchange rates now? y
The graphical version has a little graphical indicator that, when you
mouse over, tells you where the rate comes from.
Other conversions
Here are other neat conversions extracted from my history
> teaspoon to ml
teaspoon = 5 mL
> tablespoon to ml
tablespoon = 15 mL
> 1 cup to ml
1 cup ≈ 236.6 mL
> 6 L/100km to mpg
(6 liters) / (100 kilometers) ≈ 39.20 mpg
> 100 kph to mph
100 kph ≈ 62.14 mph
> (108km - 72km) / 110km/h
((108 kilometers) − (72 kilometers)) / (110 kilometers/hour) ≈
19 min + 38.18 s
Completion time estimates
This is a more involved example I often do.
Background
Say you have started a long running copy job and you don't have the
luxury of having a pipe you can insert pv(1) into to get a nice
progress bar. For example, rsync or cp -R can have that problem
(but not tar!).
(Yes, you can use --info=progress2 in rsync, but that estimate is
incremental and therefore inaccurate unless you disable the
incremental mode with --no-inc-recursive, but then you pay a huge
up-front wait cost while the entire directory gets crawled.)
Extracting a process start time
First step is to gather data. Find the process start time. If you were
unfortunate enough to forget to run date --iso-8601=seconds before
starting, you can get a similar timestamp with stat(1) on the
process tree in /proc with:
So our start time is 2025-02-07 15:50:25, we shave off the
nanoseconds there, they're below our precision noise floor.
If you're not dealing with an actual UNIX process, you need to figure
out a start time: this can be a SQL query, a network request,
whatever, exercise for the reader.
Saving a variable
This is optional, but for the sake of demonstration, let's save this
as a variable:
Next, estimate your data size. That will vary wildly with the job
you're running: this can be anything: number of files, documents being
processed, rows to be destroyed in a database, whatever. In this case,
rsync tells me how many bytes it has transferred so far:
Strip off the weird dots in there, because that will confuse
qalculate, which will count this as:
2.968252503968 bytes ≈ 2.968 B
Or, essentially, three bytes. We actually transferred almost 3TB here:
2968252503968 bytes ≈ 2.968 TB
So let's use that. If you had the misfortune of making rsync silent,
but were lucky enough to transfer entire partitions, you can use df
(without -h! we want to be more precise here), in my case:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_hdd-srv 7512681384 7258298036 179205040 98% /srv
tank/srv 7667173248 2870444032 4796729216 38% /srv-zfs
(Otherwise, of course, you use du -sh $DIRECTORY.)
Digression over bytes
Those are 1 K bytes which is actually (and rather unfortunately)
Ki, or "kibibytes" (1024 bytes), not "kilobytes" (1000 bytes). Ugh.
> 2870444032 KiB
2870444032 kibibytes ≈ 2.939 TB
> 2870444032 kB
2870444032 kilobytes ≈ 2.870 TB
At this scale, those details matter quite a bit, we're talking about a
69GB (64GiB) difference here:
> uptime
uptime = 5 d + 6 h + 34 min + 12.11 s
> golden
golden ≈ 1.618
> exact
golden = (√(5) + 1) / 2
Computing dates
In any case, yay! We know the transfer is going to take roughly 60
hours total, and we've already spent around 24h of that, so, we have
36h left.
But I did that all in my head, we can ask more of Qalc yet!
Let's make another variable, for that total estimated time:
> total=(now-start)/x = (2996538438607 bytes)/(7258298064 KiB)
save(((now − start) / x) = ((2996538438607 bytes) / (7258298064
kibibytes)); total; Temporary; ; 1) ≈
2 d + 11 h + 14 min + 38.22 s
And we can plug that into another formula with our start time to
figure out when we'll be done!
> start+total
start + total ≈ "2025-02-10T03:28:52"
> start+total-now
start + total − now ≈ 1 d + 11 h + 34 min + 48.52 s
> start+total-now to h
start + total − now ≈ 35 h + 34 min + 32.01 s
That transfer has ~1d left, or 35h24m32s, and should complete around 4
in the morning on February 10th.
But that's icing on top. I typically only do the
cross-multiplication and calculate the remaining time in my
head.
I mostly did the last bit to show Qalculate could compute dates and
time differences, as long as you use ISO timestamps. Although it can
also convert to and from UNIX timestamps, it cannot parse arbitrary
date strings (yet?).
Other functionality
Qalculate can:
Plot graphs;
Use RPN input;
Do all sorts of algebraic, calculus, matrix, statistics,
trigonometry functions (and more!);
... and so much more!
I have a hard time finding things it cannot do. When I get there, I
typically need to resort to programming code in Python, use a
spreadsheet, and others will turn to more complete engines like
Maple), Mathematica or R).
A little over a week ago, I noticed
the liboggz
package on my Debian dashboard had not had a new upstream release
for a while. A closer look showed that its last release, version
1.1.1, happened in 2010. A few patches had accumulated in the Debian
package, and I even noticed that I had passed on these patches to
upstream five years ago. A handful crash bugs had been reported
against the Debian package, and looking at the upstream repository I
even found a few crash bugs reported there too. To add insult to
injury, I discovered that upstream had accumulated several fixes in the
years between 2010 and now, and many of them had not made their way
into the Debian package. I decided enough was enough, and that a new
upstream release was needed fixing these nasty crash bugs. Luckily I
am also a member of the Xiph team, aka upstream, and could actually go
to work immediately to fix it.
I started by adding automatic build testing on
the Xiph gitlab oggz
instance, to get a better idea of the state of affairs with the
code base. This exposed a few build problems, which I had to fix. In
parallel to this, I sent an email announcing my wish for a new release
to every person who had committed to the upstream code base since
2010, and asked for help doing a new release both on email and on the
#xiph IRC channel. Sadly only a fraction of their email providers
accepted my email. But Ralph Giles in the Xiph team came to the
rescue and provided invaluable help to guide be through the release
Xiph process. While this was going on, I spent a few days tracking
down the crash bugs with good help from
valgrind, and came up with
patch proposals to get rid of at least these specific crash bugs. The
open issues also had to be checked. Several of them proved to be
fixed already, but a few I had to creat patches for. I also checked
out the Debian, Arch, Fedora, Suse and Gentoo packages to see if there
were patches applied in these Linux distributions that should be
passed upstream. The end result was ready yesterday. A new liboggz
release, version 1.1.2, was tagged, wrapped up and published on the
project page. And today, the new release was uploaded into
Debian.
You are probably by now curious on what actually changed in the
library. I guess the most interesting new feature was support for
Opus and VP8. Almost all other changes were stability or
documentation fixes. The rest were related to the gitlab continuous
integration testing. All in all, this was really a minor update,
hence the version bump only from 1.1.1 to to 1.1.2, but it was long
overdue and I am very happy that it is out the door.
One change proposed upstream was not included this time, as it
extended the API and changed some of the existing library methods, and
thus require a major SONAME bump and possibly code changes in every
program using the library. As I am not that familiar with the code
base, I am unsure if I am the right person to evaluate the change.
Perhaps later.
Since the release was tagged, a few minor fixes has been committed
upstream already: automatic testing the cross building to Windows, and
documentation updates linking to the correct project page. If a
important issue is discovered with this release, I guess a new release
might happen soon including the minor fixes. If not, perhaps they can
wait fifteen years. :)
I would like to send a big thank you to everyone that helped make
this release happen, from the people adding fixes upstream over the
course of fifteen years, to the ones reporting crash bugs, other bugs
and those maintaining the package in various Linux distributions.
Thank you very much for your time and interest.
As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
This was my hundred-twenty-seventh month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:
[DLA 4014-1] gnuchess security update to fix one CVE related to arbitrary code execution via crafted PGN (Portable Game Notation) data.
[DLA 4015-1] rsync update to fix five CVEs related leaking information from the server or writing files outside of the client’s intended destination.
[DLA 4015-2] rsync update to fix an upstream regression.
[DLA 4039-1] ffmpeg update to fix three CVEs related to possible integer overflows, double-free on
errors and out-of-bounds access.
As new CVEs for ffmpeg appeared, I started to work again for an update of this package
Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.
Debian ELTS
This month was the seventy-eighth ELTS month. During my allocated time I uploaded or worked on:
[ELA-1290-1] rsync update to fix five CVEs in Buster, Stretch and Jessie related leaking information from the server or writing files outside of the client’s intended destination.
[ELA-1290-2] rsync update to fix an upstream regression.
[ELA-1313-1] ffmpeg update to fix six CVEs in Buster related to possible integer overflows, double-free on errors and out-of-bounds access.
[ELA-1314-1] ffmpeg update to fix six CVEs in Stretch related to possible integer overflows, double-free on errors and out-of-bounds access.
As new CVEs for ffmpeg appeared, I started to work again for an update of this package
Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.
Debian Printing
This month I uploaded new packages or new upstream or bugfix versions of:
… brlaser new upstream release (in new upstream repository)
This month I uploaded new packages or new upstream or bugfix versions of:
… calceph sponsored upload of new upstream version
… libxisf sponsored upload of new upstream version
Patrick, our Outreachy intern for the Debian Astro project, is doing very well and deals with task after task. He is working on automatic updates of the indi 3rd-party drivers and maybe the results of his work will already be part of Trixie.
Debian IoT
Unfortunately I didn’t found any time to work on this topic.
Debian Mobcom
This month I uploaded new packages or new upstream or bugfix versions of:
The Washington Post is reporting that the UK government has served Apple with a “technical capability notice” as defined by the 2016 Investigatory Powers Act, requiring it to break the Advanced Data Protection encryption in iCloud for the benefit of law enforcement.
This is a big deal, and something we in the security community have worried was coming for a while now.
The law, known by critics as the Snoopers’ Charter, makes it a criminal offense to reveal that the government has even made such a demand. An Apple spokesman declined to comment.
Apple can appeal the U.K. capability notice to a secret technical panel, which would consider arguments about the expense of the requirement, and to a judge who would weigh whether the request was in proportion to the government’s needs. But the law does not permit Apple to delay complying during an appeal.
In March, when the company was on notice that such a requirement might be coming, it told Parliament: “There is no reason why the U.K. [government] should have the authority to decide for citizens of the world whether they can avail themselves of the proven security benefits that flow from end-to-end encryption.”
Apple is likely to turn the feature off for UK users rather than break it for everyone worldwide. Of course, UK users will be able to spoof their location. But this might not be enough. According to the law, Apple would not be able to offer the feature to anyone who is in the UK at any point: for example, a visitor from the US.
And what happens next? Australia has a law enabling it to ask for the same thing. Will it? Will even more countries follow?
Author: Mark Renney The rumours began some twelve months ago or so and the idea quickly took hold that there was an unseen presence under the Dome, a ghost haunting the Fields of Research. These murmurings were persistent and frequent with everyone telling the same tale, describing how they had felt something or, more accurately, […]
Wired reported this week that a 19-year-old working for Elon Musk‘s so-called Department of Government Efficiency (DOGE) was given access to sensitive US government systems even though his past association with cybercrime communities should have precluded him from gaining the necessary security clearances to do so. As today’s story explores, the DOGE teen is a former denizen of ‘The Com,’ an archipelago of Discord and Telegram chat channels that function as a kind of distributed cybercriminal social network for facilitating instant collaboration.
Since President Trump’s second inauguration, Musk’s DOGE team has gained access to a truly staggering amount of personal and sensitive data on American citizens, moving quickly to seize control over databases at the U.S. Treasury, the Office of Personnel Management, the Department of Education, and the Department of Health and Human Resources, among others.
Wired first reported on Feb. 2 that one of the technologists on Musk’s crew is a 19-year-old high school graduate named Edward Coristine, who reportedly goes by the nickname “Big Balls” online. One of the companies Coristine founded, Tesla.Sexy LLC, was set up in 2021, when he would have been around 16 years old.
“Tesla.Sexy LLC controls dozens of web domains, including at least two Russian-registered domains,” Wired reported. “One of those domains, which is still active, offers a service called Helfie, which is an AI bot for Discord servers targeting the Russian market. While the operation of a Russian website would not violate US sanctions preventing Americans doing business with Russian companies, it could potentially be a factor in a security clearance review.”
Mr. Coristine has not responded to requests for comment. In a follow-up story this week, Wired found that someone using a Telegram handle tied to Coristine solicited a DDoS-for-hire service in 2022, and that he worked for a short time at a company that specializes in protecting customers from DDoS attacks.
A profile photo from Coristine’s WhatsApp account.
Internet routing records show that Coristine runs an Internet service provider called Packetware (AS400495). Also known as “DiamondCDN,” Packetware currently hosts tesla[.]sexy and diamondcdn[.]com, among other domains.
DiamondCDN was advertised and claimed by someone who used the nickname “Rivage” on several Com-based Discord channels over the years. A review of chat logs from some of those channels show other members frequently referred to Rivage as “Edward.”
From late 2020 to late 2024, Rivage’s conversations would show up in multiple Com chat servers that are closely monitored by security companies. In November 2022, Rivage could be seen requesting recommendations for a reliable and powerful DDoS-for-hire service.
Rivage made that request in the cybercrime channel “Dstat,” a core Com hub where users could buy and sell attack services. Dstat’s website dstat[.]cc was seized in 2024 as part of “Operation PowerOFF,” an international law enforcement action against DDoS services.
Coristine’s LinkedIn profile said that in 2022 he worked at an anti-DDoS company called Path Networks, which Wired generously described as a “network monitoring firm known for hiring reformed blackhat hackers.” Wired wrote:
“At Path Network, Coristine worked as a systems engineer from April to June of 2022, according to his now-deleted LinkedIn résumé. Path has at times listed as employees Eric Taylor, also known as Cosmo the God, a well-known former cybercriminal and member of the hacker group UGNazis, as well as Matthew Flannery, an Australian convicted hacker whom police allege was a member of the hacker group LulzSec. It’s unclear whether Coristine worked at Path concurrently with those hackers, and WIRED found no evidence that either Coristine or other Path employees engaged in illegal activity while at the company.”
The founder of Path is a young man named Marshal Webb. I wrote about Webb back in 2016, in a story about a DDoS defense company he co-founded called BackConnect Security LLC. On September 20, 2016, KrebsOnSecurity published data showing that the company had a history of hijacking Internet address space that belonged to others.
The other founder of BackConnect Security LLC was Tucker Preston, a Georgia man who pleaded guilty in 2020 to paying a DDoS-for-hire service to launch attacks against others.
The aforementioned Path employee Eric Taylor pleaded guilty in 2017 to charges including an attack on our home in 2013. Taylor was among several men involved in making a false report to my local police department about a supposed hostage situation at our residence in Virginia. In response, a heavily-armed police force surrounded my home and put me in handcuffs at gunpoint before the police realized it was all a dangerous hoax known as “swatting.”
CosmoTheGod rocketed to Internet infamy in 2013 when he and a number of other hackers set up the Web site exposed[dot]su, which “doxed” dozens of public officials and celebrities by publishing the address, Social Security numbers and other personal information on the former First Lady Michelle Obama, the then-director of the FBI and the U.S. attorney general, among others. The group also swatted many of the people they doxed.
Wired noted that Coristine only worked at Path for a few months in 2022, but the story didn’t mention why his tenure was so short. A screenshot shared on the website pathtruths.com includes a snippet of conversations in June 2022 between Path employees discussing Coristine’s firing.
According to that record, Path founder Marshal Webb dismissed Coristine for leaking internal documents to a competitor. Not long after Coristine’s termination, someone leaked an abundance of internal Path documents and conversations. Among other things, those chats revealed that one of Path’s technicians was a Canadian man named Curtis Gervais who was convicted in 2017 of perpetrating dozens of swatting attacks and fake bomb threats — including at least two attempts against our home in 2014.
A snippet of text from an internal Path chat room, wherein members discuss the reason for Coristine’s termination: Allegedly, leaking internal company information. Source: Pathtruths.com.
On May 11, 2024, Rivage posted on a Discord channel for a DDoS protection service that is chiefly marketed to members of The Com. Rivage expressed frustration with his time spent on Com-based communities, suggesting that its profitability had been oversold.
“I don’t think there’s a lot of money to be made in the com,” Rivage lamented. “I’m not buying Heztner [servers] to set up some com VPN.”
Rivage largely stopped posting messages on Com channels after that. Wired reports that Coristine subsequently spent three months last summer working at Neuralink, Elon Musk’s brain implant startup.
The trouble with all this is that even if someone sincerely intends to exit The Com after years of consorting with cybercriminals, they are often still subject to personal attacks, harassment and hacking long after they have left the scene.
That’s because a huge part of Com culture involves harassing, swatting and hacking other members of the community. These internecine attacks are often for financial gain, but just as frequently they are perpetrated by cybercrime groups to exact retribution from or assert dominance over rival gangs.
Experts say it is extremely difficult for former members of violent street gangs to gain a security clearance needed to view sensitive or classified information held by the U.S. government. That’s because ex-gang members are highly susceptible to extortion and coercion from current members of the same gang, and that alone presents an unacceptable security risk for intelligence agencies.
And make no mistake: The Com is the English-language cybercriminal hacking equivalent of a violent street gang. KrebsOnSecurity has published numerous stories detailing how feuds within the community periodically spill over into real-world violence.
When Coristine’s name surfaced in Wired‘s report this week, members of The Com immediately took notice. In the following segment from a February 5, 2025 chat in a Com-affiliated hosting provider, members criticized Rivage’s skills, and discussed harassing his family and notifying authorities about incriminating accusations that may or may not be true.
2025-02-05 16:29:44 UTC vperked#0 they got this nigga on indiatimes man
2025-02-05 16:29:46 UTC alexaloo#0 Their cropping is worse than AI could have done
2025-02-05 16:29:48 UTC hebeatsme#0 bro who is that
2025-02-05 16:29:53 UTC hebeatsme#0 yalla re talking about
2025-02-05 16:29:56 UTC xewdy#0 edward
2025-02-05 16:29:56 UTC .yarrb#0 rivagew
2025-02-05 16:29:57 UTC vperked#0 Rivarge
2025-02-05 16:29:57 UTC xewdy#0 diamondcdm
2025-02-05 16:29:59 UTC vperked#0 i cant spell it
2025-02-05 16:30:00 UTC hebeatsme#0 rivage
2025-02-05 16:30:08 UTC .yarrb#0 yes
2025-02-05 16:30:14 UTC hebeatsme#0 i have him added
2025-02-05 16:30:20 UTC hebeatsme#0 hes on discord still
2025-02-05 16:30:47 UTC .yarrb#0 hes focused on stroking zaddy elon
2025-02-05 16:30:47 UTC vperked#0 https://en.wikipedia.org/wiki/Edward_Coristine
2025-02-05 16:30:50 UTC vperked#0 no fucking way
2025-02-05 16:30:53 UTC vperked#0 they even made a wiki for him
2025-02-05 16:30:55 UTC vperked#0 LOOOL
2025-02-05 16:31:05 UTC hebeatsme#0 no way
2025-02-05 16:31:08 UTC hebeatsme#0 hes not a good dev either
2025-02-05 16:31:14 UTC hebeatsme#0 like????
2025-02-05 16:31:22 UTC hebeatsme#0 has to be fake
2025-02-05 16:31:24 UTC xewdy#0 and theyre saying ts
2025-02-05 16:31:29 UTC xewdy#0 like ok bro
2025-02-05 16:31:51 UTC .yarrb#0 now i wanna know what all the other devs are like…
2025-02-05 16:32:00 UTC vperked#0 “`Coristine used the moniker “bigballs” on LinkedIn and @Edwardbigballer on Twitter, according to The Daily Dot.[“`
2025-02-05 16:32:05 UTC vperked#0 LOL
2025-02-05 16:32:06 UTC hebeatsme#0 lmfaooo
2025-02-05 16:32:07 UTC vperked#0 bro
2025-02-05 16:32:10 UTC hebeatsme#0 bro
2025-02-05 16:32:17 UTC hebeatsme#0 has to be fake right
2025-02-05 16:32:22 UTC .yarrb#0 does it mention Rivage?
2025-02-05 16:32:23 UTC xewdy#0 He previously worked for NeuraLink, a brain computer interface company led by Elon Musk
2025-02-05 16:32:26 UTC xewdy#0 bro what
2025-02-05 16:32:27 UTC alexaloo#0 I think your current occupation gives you a good insight of what probably goes on
2025-02-05 16:32:29 UTC hebeatsme#0 bullshit man
2025-02-05 16:32:33 UTC xewdy#0 this nigga got hella secrets
2025-02-05 16:32:37 UTC hebeatsme#0 rivage couldnt print hello world
2025-02-05 16:32:42 UTC hebeatsme#0 if his life was on the line
2025-02-05 16:32:50 UTC xewdy#0 nigga worked for neuralink
2025-02-05 16:32:54 UTC hebeatsme#0 bullshit
2025-02-05 16:33:06 UTC Nashville Dispatch ##0000 ||@PD Ping||
2025-02-05 16:33:07 UTC hebeatsme#0 must have killed all those test pigs with some bugs
2025-02-05 16:33:24 UTC hebeatsme#0 ur telling me the rivage who failed to start a company
2025-02-05 16:33:28 UTC hebeatsme#0 https://cdn.camp
2025-02-05 16:33:32 UTC hebeatsme#0 who didnt pay for servers
2025-02-05 16:33:34 UTC hebeatsme#0 ?
2025-02-05 16:33:42 UTC hebeatsme#0 was too cheap
2025-02-05 16:33:44 UTC vperked#0 yes
2025-02-05 16:33:50 UTC hebeatsme#0 like??
2025-02-05 16:33:53 UTC hebeatsme#0 it aint adding up
2025-02-05 16:33:56 UTC alexaloo#0 He just needed to find his calling idiot.
2025-02-05 16:33:58 UTC alexaloo#0 He found it.
2025-02-05 16:33:59 UTC hebeatsme#0 bro
2025-02-05 16:34:01 UTC alexaloo#0 Cope in a river dude
2025-02-05 16:34:04 UTC hebeatsme#0 he cant make good money right
2025-02-05 16:34:08 UTC hebeatsme#0 doge is about efficiency
2025-02-05 16:34:11 UTC hebeatsme#0 he should make $1/he
2025-02-05 16:34:15 UTC hebeatsme#0 $1/hr
2025-02-05 16:34:25 UTC hebeatsme#0 and be whipped for better code
2025-02-05 16:34:26 UTC vperked#0 prolly makes more than us
2025-02-05 16:34:35 UTC vperked#0 with his dad too
2025-02-05 16:34:52 UTC hebeatsme#0 time to report him for fraud
2025-02-05 16:34:54 UTC hebeatsme#0 to donald trump
2025-02-05 16:35:04 UTC hebeatsme#0 rivage participated in sim swap hacks in 2018
2025-02-05 16:35:08 UTC hebeatsme#0 put that on his wiki
2025-02-05 16:35:10 UTC hebeatsme#0 thanks
2025-02-05 16:35:15 UTC hebeatsme#0 and in 2021
2025-02-05 16:35:17 UTC hebeatsme#0 thanks
2025-02-05 16:35:19 UTC chainofcommand#0 i dont think they’ll care tbh
Given the speed with which Musk’s DOGE team was allowed access to such critical government databases, it strains credulity that Coristine could have been properly cleared beforehand. After all, he’d recently been dismissed from a job for allegedly leaking internal company information to outsiders.
According to the national security adjudication guidelines (PDF) released by the Director of National Intelligence (DNI), eligibility determinations take into account a person’s stability, trustworthiness, reliability, discretion, character, honesty, judgment, and ability to protect classified information.
The DNI policy further states that “eligibility for covered individuals shall be granted only when facts and circumstances indicate that eligibility is clearly consistent with the national security interests of the United States, and any doubt shall be resolved in favor of national security.”
On Thursday, 25-year-old DOGE staff member Marko Elez resigned after being linked to a deleted social media account that advocated racism and eugenics. Elez resigned after The Wall Street Journalasked the White House about his connection to the account.
“Just for the record, I was racist before it was cool,” the account posted in July. “You could not pay me to marry outside of my ethnicity,” the account wrote on X in September. “Normalize Indian hate,” the account wrote the same month, in reference to a post noting the prevalence of people from India in Silicon Valley.
Elez’s resignation came a day after the Department of Justice agreed to limit the number of DOGE employees who have access to federal payment systems. The DOJ said access would be limited to two people, Elez and Tom Krause, the CEO of a company called Cloud Software Group.
Earlier today, Musk said he planned to rehire Elez after President Trump and Vice President JD Vance reportedly endorsed the idea. Speaking at The White House today, Trump said he wasn’t concerned about the security of personal information and other data accessed by DOGE, adding that he was “very proud of the job that this group of young people” are doing.
A White House official toldReuters on Wednesday that Musk and his engineers have appropriate security clearances and are operating in “full compliance with federal law, appropriate security clearances, and as employees of the relevant agencies, not as outside advisors or entities.”
NPRreports Trump added that his administration’s cost-cutting efforts would soon turn to the Education Department and the Pentagon, “where he suggested without evidence that there could be ‘trillions’ of dollars in wasted spending within the $6.75 trillion the federal government spent in fiscal year 2024.”
GOP leaders in the Republican-controlled House and Senate have largely shrugged about Musk’s ongoing efforts to seize control over federal databases, dismantle agencies mandated by Congress, freeze federal spending on a range of already-appropriated government programs, and threaten workers with layoffs.
Meanwhile, multiple parties have sued to stop DOGE’s activities. ABC News says a federal judge was to rule today on whether DOGE should be blocked from accessing Department of Labor records, following a lawsuit alleging Musk’s team sought to illegally access highly sensitive data, including medical information, from the federal government.
At least 13 state attorney general say they plan to file a lawsuit to stop DOGE from accessing federal payment systems containing Americans’ sensitive personal information, reportsThe Associated Press.
Reuters reported Thursday that the U.S. Treasury Department had agreed not to give Musk’s team access to its payment systems while a judge is hearing arguments in a lawsuit by employee unions and retirees alleging Musk illegally searched those records.
Ars Technicawrites that The Department of Education (DoE) was sued Friday by a California student association demanding an “immediate stop” to DOGE’s “unlawfully” digging through student loan data to potentially dismantle the DoE.
For whatever reason, when I plug and unplug my Wireless Headset dongle over USB,
it is not always detected by the PulseAudio/Pipewire stack which is
running our desktop sound Linux those days. But we can fix that with a restart
of the handling daemon, see below.
In PulseAudio terminology an input device (microphone) is called a source, and
an output device a sink.
When the headset dongle is plugged in, we can see it on the USB bus:
$ lsusb | grep Headset
Bus 001 Device 094: ID 046d:0af7 Logitech, Inc. Logitech G PRO X 2 Gaming Headset
The device is detected correctly as a Human Interface Device (HID) device
$ dmesg
...
[310230.507591] input: Logitech Logitech G PRO X 2 Gaming Headset as /devices/pci0000:00/0000:00:14.0/usb1/1-1/1-1.1/1-1.1.4/1-1.1.4:1.3/0003:046D:0AF7.0060/input/input163
[310230.507762] hid-generic 0003:046D:0AF7.0060: input,hiddev2,hidraw11: USB HID v1.10 Device [Logitech Logitech G PRO X 2 Gaming Headset] on usb-0000:00:14.0-1.1.4/input
However it is not seen in the list of sources / sinks of PulseAudio:
This unfriendly list shows my docking station, which as a small jack connector
for a wired cable, the built in speaker of my laptop, and a blutooth headset.
If I restart Pipewire,
$ systemctl --user restart pipewire
then the headset appears as possible audio output.
And test some recording, you will hear the output around one second after the
speaking (yes that is recorded audio sent over a Unix pipe for playing !):
# don't do this when the output is a speaker, this will create audio feedback (larsen effect)
$ arecord -f cd - | aplay
Kaspersky is reporting on a new type of smartphone malware.
The malware in question uses optical character recognition (OCR) to review a device’s photo library, seeking screenshots of recovery phrases for crypto wallets. Based on their assessment, infected Google Play apps have been downloaded more than 242,000 times. Kaspersky says: “This is the first known case of an app infected with OCR spyware being found in Apple’s official app marketplace.”
The still very new package zigg which
arrived on CRAN a week ago just
received a micro-update at CRAN. zigg provides
the Ziggurat
pseudo-random number generator (PRNG) for Normal, Exponential and
Uniform draws proposed by Marsaglia and
Tsang (JSS, 2000),
and extended by Leong et al. (JSS, 2005). This PRNG
is lightweight and very fast: on my machine speedups for the
Normal, Exponential, and Uniform are on the order of 7.4, 5.2 and 4.7
times faster than the default generators in R as illustrated in the benchmark
chart borrowed from the git repo.
As wrote last week in the initial
announcement, I had picked up their work in package RcppZiggurat
and updated its code for the 64-buit world we now live in. That package
alredy provided the Normal generator along with several competing
implementations which it compared rigorously and timed them. As one of
the generators was based on the GNU GSL via the
implementation of Voss, we always ended
up with a run-time dependency on the GSL too. No more: this new package
is zero-depedency, zero-suggsts and hence very easy to deploy.
Moreover, we also include a demonstration of four distinct ways of
accessing the compiled code from another R package: pure and straight-up
C, similarly pure C++, inclusion of the header in C++ as well as via Rcpp. The other advance is the
resurrection of the second generator for the Exponential distribution.
And following Burkardt we expose the
Uniform too. The main upside of these generators is their excellent
speed as can be seen in the comparison the default R generators
generated by the example script timings.R:
Needless to say, speed is not everything. This PRNG comes the time of
32-bit computing so the generator period is likely to be shorter than
that of newer high-quality generators. If in doubt, forgo speed and
stick with the high-quality default generators.
This release essentially just completes the DESCRIPTION file and
README.md now that this is a CRAN package. The short NEWS entry
follows.
Changes in version 0.0.2
(2025-02-07)
Complete DESCRIPTION and README.md following initial CRAN
upload
I have a feeling we're going to be seeing a lot of AI WTFerry at this site for a while, and fewer stupid online sales copy booboos. For today, here we go:
Jet-setter
Stewart
wants to sell a pound, but he's going to have to cover some ground first.
"Looks like Google are trying very hard to encourage me to stop using their search engine. Perhaps they want me to use chatGPT? I just can't fathom how it got this so wrong."
Tim R.
proves that AIs aren't immune to the general flubstitution error category either.
"I'm not quite sure what's going on here - there were 5 categories each with the same [insert content here] placeholder. Maybe the outer text is not AI generated and the developers forgot to actually call the AI, or maybe the AI has been trained on so much placeholder source code it thought it was generating what I wanted to see."
"Crazy Comcast Calendar Corruption!" complains
B.J.H.
"No wonder I didn't get birthday gifts -- my birth month
has been sloughed away. But they still charged me for the months that don't exist." Hey, they only charged you for 12 months at least. Maybe they just picked twelve at random.
Educator
Manuel H.
"Publishing a session recording in [open-source] BigBlueButton seems to be a task for logicians: Should it be public, or protected, or both? Or should it rather be published instead of public? Or better not published at all?"
A little translation explanation: the list of options provided would in English be "Public/Protected, Public, Protected, Published, Unpublished". I have no idea what the differences mean.
And the pièce de résistance from
Mark Whybird
"I've always hated click here as a UX antipattern, but Dell have managed to make it even worse." Or maybe better? This is hysterical.
[Advertisement]
Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.
Author: Deborah Sale-Butler It was a great place to live. Tons of space to spin out a web. And the local food was spectacular. I mean, you could get anything in that neighborhood: dragonflies, blowflies, sometimes even a big, fat, juicy moth. De-lish! I can honestly say, up until Tuesday I was an arachnid with […]
New mobile apps from the Chinese artificial intelligence (AI) company DeepSeek have remained among the top three “free” downloads for Apple and Google devices since their debut on Jan. 25, 2025. But experts caution that many of DeepSeek’s design choices — such as using hard-coded encryption keys, and sending unencrypted user and device data to Chinese companies — introduce a number of glaring security and privacy risks.
Public interest in the DeepSeek AI chat apps swelled following widespread media reports that the upstart Chinese AI firm had managed to match the abilities of cutting-edge chatbots while using a fraction of the specialized computer chips that leading AI companies rely on. As of this writing, DeepSeek is the third most-downloaded “free” app on the Apple store, and #1 on Google Play.
DeepSeek’s rapid rise caught the attention of the mobile security firm NowSecure, a Chicago-based company that helps clients screen mobile apps for security and privacy threats. In a teardown of the DeepSeek app published today, NowSecure urged organizations to remove the DeepSeek iOS mobile app from their environments, citing security concerns.
NowSecure founder Andrew Hoog said they haven’t yet concluded an in-depth analysis of the DeepSeek app for Android devices, but that there is little reason to believe its basic design would be functionally much different.
Hoog told KrebsOnSecurity there were a number of qualities about the DeepSeek iOS app that suggest the presence of deep-seated security and privacy risks. For starters, he said, the app collects an awful lot of data about the user’s device.
“They are doing some very interesting things that are on the edge of advanced device fingerprinting,” Hoog said, noting that one property of the app tracks the device’s name — which for many iOS devices defaults to the customer’s name followed by the type of iOS device.
The device information shared, combined with the user’s Internet address and data gathered from mobile advertising companies, could be used to deanonymize users of the DeepSeek iOS app, NowSecure warned. The report notes that DeepSeek communicates with Volcengine, a cloud platform developed by ByteDance (the makers of TikTok), although NowSecure said it wasn’t clear if the data is just leveraging ByteDance’s digital transformation cloud service or if the declared information share extends further between the two companies.
Image: NowSecure.
Perhaps more concerning, NowSecure said the iOS app transmits device information “in the clear,” without any encryption to encapsulate the data. This means the data being handled by the app could be intercepted, read, and even modified by anyone who has access to any of the networks that carry the app’s traffic.
“The DeepSeek iOS app globally disables App Transport Security (ATS) which is an iOS platform level protection that prevents sensitive data from being sent over unencrypted channels,” the report observed. “Since this protection is disabled, the app can (and does) send unencrypted data over the internet.”
Hoog said the app does selectively encrypt portions of the responses coming from DeepSeek servers. But they also found it uses an insecure and now deprecated encryption algorithm called 3DES (aka Triple DES), and that the developers had hard-coded the encryption key. That means the cryptographic key needed to decipher those data fields can be extracted from the app itself.
There were other, less alarming security and privacy issues highlighted in the report, but Hoog said he’s confident there are additional, unseen security concerns lurking within the app’s code.
“When we see people exhibit really simplistic coding errors, as you dig deeper there are usually a lot more issues,” Hoog said. “There is virtually no priority around security or privacy. Whether cultural, or mandated by China, or a witting choice, taken together they point to significant lapse in security and privacy controls, and that puts companies at risk.”
Apparently, plenty of others share this view. Axiosreported on January 30 that U.S. congressional offices are being warned not to use the app.
“[T]hreat actors are already exploiting DeepSeek to deliver malicious software and infect devices,” read the notice from the chief administrative officer for the House of Representatives. “To mitigate these risks, the House has taken security measures to restrict DeepSeek’s functionality on all House-issued devices.”
TechCrunchreports that Italy and Taiwan have already moved to ban DeepSeek over security concerns. Bloombergwrites that The Pentagon has blocked access to DeepSeek. CNBCsaysNASA also banned employees from using the service, as did the U.S. Navy.
Beyond security concerns tied to the DeepSeek iOS app, there are indications the Chinese AI company may be playing fast and loose with the data that it collects from and about users. On January 29, researchers at Wizsaid they discovered a publicly accessible database linked to DeepSeek that exposed “a significant volume of chat history, backend data and sensitive information, including log streams, API secrets, and operational details.”
“More critically, the exposure allowed for full database control and potential privilege escalation within the DeepSeek environment, without any authentication or defense mechanism to the outside world,” Wiz wrote. [Full disclosure: Wiz is currently an advertiser on this website.]
KrebsOnSecurity sought comment on the report from DeepSeek and from Apple. This story will be updated with any substantive replies.
Announcing the Picks and Shovels book tour (permalink)
My next novel, Picks and Shovels, is officially out in the US and Canada on Feb 17, and I’m about to leave on a 20+ city book-tour, which means there’s a nonzero chance I’ll be in a city near you between now and the end of the spring!
Picks and Shovels is a standalone novel starring Martin Hench – my hard-charging, two-fisted, high-tech forensic accountant – in his very first adventure, in the early 1980s. It’s a story about the Weird PC era, when no one was really certain what shape PCs should be, who should make them, who should buy them, and what they’re for. It features a commercial war between two very different PC companies.
The first one, Fidelity Computing, is a predatory multi-level marketing faith scam, run by a Mormon bishop, a Catholic priest, and an orthodox rabbi. Fidelity recruits people to exploit members of their faith communities by selling them third-rate PCs that are designed as rip-off lock-ins, forcing you to buy special floppies for their drives, special paper for their printers, and to use software that is incompatible with everything else in the world.
The second PC company is Computing Freedom, a rebel alliance of three former Fidelity Computing sales-managers: an orthodox woman who’s been rejected by her family after coming out as queer; a Mormon woman who’s rejected the Church over its opposition to the Equal Rights Amendment, and a nun who’s quit her order to join the Liberation Theology movement in the struggle for human rights in America’s dirty wars.
In the middle of it all is Martin Hench, coming of age in San Francisco during the PC bubble, going to Dead Kennedys shows, getting radicalized by ACT UP!, and falling in love – all while serving as CFO and consigliere to Computing Freedom, as a trade war turns into a shooting war, and they have to flee for their lives.
The book’s had fantastic early reviews, with endorsements from computer historians like Steven Levy (Hackers), Claire Evans (Broad-Band), John Markoff (What the Doormouse Said) and Dan’l Lewin (CEO of the Computer History Museum). Stephen Fry raved that he “hugely enjoyed” the “note perfect,” “superb” story.
And I’m about to leave on tour! I have nineteen confirmed dates, and two nearly confirmed dates, and there’s more to come! I hope you’ll consider joining me at one of these events. I’ve got a bunch of fantastic conversation partners joining me onstage and online, and the bookstores that are hosting me are some of my favorite indie booksellers in the world.
VIRTUAL (Feb 15):
YANIS VAROUFAKIS, sponsored by Jacobin and hosted by David Moscrop, 10AM Pacific, 1PM Eastern, 6PM UK, 7PM CET https://www.youtube.com/watch?v=xkIDep7Z4LM
PDX, Jun 20 (TBC):
Powell’s Books (date and time to be confirmed)
I’m also finalizing plans for one or two dates in NEW ZEALAND at the end of April, as well as a ATLANTA date, likely on March 26.
I really hope you’ll come out and say hello. I know these are tough times. Hanging out with nice people who care about the same stuff as you is a genuine tonic.
Armadillo is a powerful
and expressive C++ template library for linear algebra and scientific
computing. It aims towards a good balance between speed and ease of use,
has a syntax deliberately close to Matlab, and is useful for algorithm
development directly in C++, or quick conversion of research code into
production environments. RcppArmadillo
integrates this library with the R environment and language–and is
widely used by (currently) 1215 other packages on CRAN, downloaded 38.2 million
times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint
/ vignette) by Conrad and myself has been cited 612 times according
to Google Scholar.
Conrad released a minor
version 14.2.3 yesterday. As it has been two months since the last
minor release, we prepared a new version for CRAN too which arrived there early
this morning.
The changes since the last CRAN release are summarised
below.
Changes in
RcppArmadillo version 14.2.3-1 (2025-02-05)
Upgraded to Armadillo release 14.2.3 (Smooth Caffeine)
Minor fix for declaration of xSYCON and
xHECON functions in LAPACK
Cookiecutter is a tool for building coding project templates. It’s often used to provide a scaffolding to build lots of similar project. I’ve seen it used to create Symfony projects and several cloud infrastructures deployed with Terraform. This tool was useful to accelerate the creation of new projects.
Since these templates were bound to evolve, the teams providing these template relied on cruft to update the code provided by the template in their user’s code. In other words, they wanted their users to apply a diff of the template modification to their code.
At the beginning, all was fine. But problems began to appear during the lifetime of these projects.
What went wrong ?
In both cases, we had the following scenario:
user team:
creates new project with cookiecutter template
makes modification on their code, including on code provided by template
meanwhile, provider team:
makes modifications to cookiecutter template
releases new template version
asks his users to update code brought by template using cruft
user team then:
runs cruft to update template code
discovers a lot of code conflicts (similar to git merge conflicts)
often rolls back cruft update and gives up on template update
User team giving up on updates is a major problem because these update may bring security or compliance fixes.
Note that code conflicts seen with cruft are similar to git merge conflicts, but harder to resolve because, unlike with a git merge, there’s no common ancestor, so 3-way merges are not possible.
From an organisation point of view, the main problem is the ambiguous ownership of the functionalities brought by template code: who own this code ? The provider team who writes the template or the user team who owns the repository of the code generated from the template ? Conflicts are bound to happen. �
Possible solutions to get out of this tar pit:
Assume that template are one shot. Template update are not practical in the long run.
Make sure that template are as thin as possible. They should contain minimal logic.
Move most if not all logic in separate libraries or scripts that are owned by provider team. This way update coming from provider team can be managed like external dependencies by upgrading the version of a dependency.
Of course your users won’t be happy to be faced with a manual migration from the old big template to the new one with external dependencies. On the other hand, this may be easier to sell than updates based on cruft since the painful work will happen once. Further updates will be done by incrementing dependency versions (which can be automated with renovate).
If many projects are to be created with this template, it may be more practical to provide use a CLI that will create a skeleton project. See for instance terragrunt scaffold command.
My name is Dominique Dumont, I’m a devops freelance. You can find the devops and audit services I propose on my website or reach out to me on LinkedIn.
We are pleased to announce that Proxmox has
committed to sponsor DebConf25 as a
Platinum Sponsor.
Proxmox develops powerful, yet easy-to-use Open Source server software. The
product portfolio from Proxmox, including server virtualization, backup, and
email security, helps companies of any size, sector, or industry to simplify
their IT infrastructures. The Proxmox solutions are based on the great Debian
platform, and we are happy that we can give back to the community by sponsoring
DebConf25.
With this commitment as Platinum Sponsor, Proxmox is contributing to the Debian
annual Developers' conference, directly supporting the progress of Debian and
Free Software. Proxmox contributes to strengthen the community that
collaborates on Debian projects from all around the world throughout all of
the year.
Thank you very much, Proxmox, for your support of DebConf25!
Become a sponsor too!
DebConf25 will take place from 14 to 20
July 2025 in Brest, France, and will be preceded by DebCamp, from 7 to 13
July 2025.
Sammy's company "jumped on the Ruby on Rails bandwagon since there was one on which to jump", and are still very much a Rails shop. The company has been around for thirty years, and in that time has seen plenty of ups and downs. During one of those "ups", management decided they needed to scale up, both in terms of staffing and in terms of client base- so they hired an offshore team to promote international business and add to their staffing.
A "down" followed not long after, and the offshore team was disbanded. So Sammy inherited the code.
I know I'm generally negative on ORM systems, and that includes Rails, but I want to stress: they're fine if you stay on the happy path. If your data access patterns are simple (which most applications are just basic CRUD!) there's nothing wrong with using an ORM. But if you're doing that, you need to use the ORM. Which is not what the offshore team did. For example:
classRequest < ActiveRecord::Basedefself.get_this_years_request_ids(facility_id) # There are several other methods that are *exactly* the same, except for the year
requests = Request.where("requests.id in (select t.id from requests as t # what is the purpose of this subquery?
where t.unit_id=token_requests.unit_id and t.facility_id=token_requests.facility_id
and t.survey_type = '#{TokenRequest::SURVEY_TYPE}' # why is SURVEY_TYPE a constant?
and EXTRACT( YEAR FROM created_at) = EXTRACT(YEAR FROM current_timestamp)
order by t.id desc) and token_requests.facility_id = #{facility_id.to_i} # so we get all the requests by year, then by by ???
and token_requests.survey_type = '#{Request::SURVEY_TYPE}'")
Comments from Sammy.
Now, if we just look at the signature of the method, it seems like this should be a pretty straightforward query: get all of the request IDs for a given facility ID, within a certain time range.
And Sammy has helpfully provided a version of this code which does the same thing, but in a more "using the tools correctly" way:
Now, I don't know Ruby well enough to be sure, but the DateTime.new(year.to_i) whiffs a bit of some clumsy date handling, but that may be a perfectly cromulent idiom in Ruby. But this code is pretty clear about what it's doing: finding request objects for a given facility within a given year. Why one uses Request and the other uses TokenRequest is a mystery to me- I' m going to suspect some bad normalization in the database or errors in how Sammy anonymized the code. That's neither here nor there.
Once we've gotten our list of requests, we need to process them to output them. Here's how the offshore code converted the list into a comma delimited string, wrapped in parentheses.
Look, if the problem is to "join a string with delimiters" and you write code that looks like this, just delete your hard drive and start over. You need extra help.
We start by defaulting to (-1) which is presumably a "no results" indicator. But if we have results, we'll iterate across those results. If our result string is non-empty (which it definitely is non-empty), we append a comma (giving us (-1),). Then we append the current token ID, giving us (-1),5, for example. Once we've exhausted all the returned IDs, we wrap the whole thing in parentheses.
So, this code is wrong- it's only supposed to return (-1) when there are no results, but as written, it embeds that in the results. Presumably the consuming code is able to handle that error gracefully, since the entire project works.
Sammy provides us a more idiomatic (and readable) version of the code which also works correctly:
I'll be honest, I hate the fact that this is returning a stringly-typed list of integers, but since I don't know the context, I'll let that slide. At the very least, this is a better example of what joining a list of values into a string should look like.
Sammy writes:
It seems these devs never took the time to learn the language. After asking around a bit, I found out they all came from a Java background. Most of this code seems to be from a VB playbook, though.
That's a huge and undeserved insult to Visual Basic programmers, Sammy. Even they're not that bad.
[Advertisement]
Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!
Author: Eric San Juan She reached down for the water bottle at her side, remembered it was empty only when she brought to her lips, sighed, and hung her head. “I should have stayed in the city.” She knew she was wrong about that, of course. The city is where it all started. Things were […]
If you use SteamOS and you like to install third-party tools or modify the system-wide configuration some of your changes might be lost after an OS update. Read on for details on why this happens and what to do about it.
As you all know SteamOS uses an immutable root filesystem and users are not expected to modify it because all changes are lost after an OS update.
However this does not include configuration files: the /etc directory is not part of the root filesystem itself. Instead, it’s a writable overlay and all modifications are actually stored under /var (together with all the usual contents that go in that filesystem such as logs, cached data, etc).
/etc contains important data that is specific to that particular machine like the configuration of known network connections, the password of the main user and the SSH keys. This configuration needs to be kept after an OS update so the system can keep working as expected. However the update process also needs to make sure that other changes to /etc don’t conflict with whatever is available in the new version of the OS, and there have been issues due to some modifications unexpectedly persisting after a system update.
SteamOS 3.6 introduced a new mechanism to decide what to to keep after an OS update, and the system now keeps a list of configuration files that are allowed to be kept in the new version. The idea is that only the modifications that are known to be important for the correct operation of the system are applied, and everything else is discarded1.
However, many users want to be able to keep additional configuration files after an OS update, either because the changes are important for them or because those files are needed for some third-party tool that they have installed. Fortunately the system provides a way to do that, and users (or developers of third-party tools) can add a configuration file to /etc/atomic-update.conf.d, listing the additional files that need to be kept.
There is an example in /etc/atomic-update.conf.d/example-additional-keep-list.conf that shows what this configuration looks like.
Sample configuration file for the SteamOS updater
Developers who are targeting SteamOS can also use this same method to make sure that their configuration files survive OS updates. As an example of an actual third-party project that makes use of this mechanism you can have a look at the DeterminateSystems Nix installer:
As usual, if you encounter issues with this or any other part of the system you can check the SteamOS issue tracker. Enjoy!
A copy is actually kept under /etc/previous to give the user the chance to recover files if necessary, and up to five previous snapshots are kept under /var/lib/steamos-atomupd/etc_backup︎
Our monthly reports outline what we’ve been up to over the past month and highlight items of news from elsewhere in the world of software supply-chain security when relevant. As usual, though, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.
The last few months saw the introduction of reproduce.debian.net. Announced at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project. Powering that is rebuilderd, our server designed monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there.
Giacomo Benedetti, Oreofe Solarin, Courtney Miller, Greg Tystahl, William Enck, Christian Kästner, Alexandros Kapravelos, Alessio Merlo and Luca Verderame published an interesting article recently. Titled An Empirical Study on Reproducible Packaging in Open-Source Ecosystem, the abstract outlines its optimistic findings:
[We] identified that with relatively straightforward infrastructure configuration and patching of build tools, we can achieve very high rates of reproducible builds in all studied ecosystems. We conclude that if the ecosystems adopt our suggestions, the build process of published packages can be independently confirmed for nearly all packages without individual developer actions, and doing so will prevent significant future software supply chain attacks.
Answering strongly in the affirmative, the article’s abstract reads as follows:
In this work, we perform the first large-scale study of bitwise reproducibility, in the context of the Nix functional package manager, rebuilding 709,816 packages from historical snapshots of the nixpkgs repository[. We] obtain very high bitwise reproducibility rates, between 69 and 91% with an upward trend, and even higher rebuildability rates, over 99%. We investigate unreproducibility causes, showing that about 15% of failures are due to embedded build dates. We release a novel dataset with all build statuses, logs, as well as full diffoscopes: recursive diffs of where unreproducible build artifacts differ.
As above, the entire PDF of the article is available to view online.
Distribution work
There as been the usual work in various distributions this month, such as:
10+ reviews of Debian packages were added, 11 were updated and 10 were removed this month adding to our knowledge about identified issues. A number of issue types were updated also.
The FreeBSD Foundation announced that “a planned project to deliver zero-trust builds has begun in January 2025”. Supported by the Sovereign Tech Agency, this project is centered on the various build processes, and that the “primary goal of this work is to enable the entire release process to run without requiring root access, and that build artifacts build reproducibly – that is, that a third party can build bit-for-bit identical artifacts.” The full announcement can be found online, which includes an estimated schedule and other details.
Following-up to a substantial amount of previous work pertaining the Sphinx documentation generator, James Addison asked a question pertaining to the relationship between SOURCE_DATE_EPOCH environment variable and testing that generated a number of replies.
Adithya Balakumar of Toshiba asked a question about whether it is possible to make ext4 filesystem images reproducible. Adithya’s issue is that even the smallest amount of post-processing of the filesystem results in the modification of the “Last mount” and “Last write” timestamps.
FUSE (Filesystem in USErspace) filesystems such as disorderfs do not delete files from the underlying filesystem when they are deleted from the overlay. This can cause seemingly straightforward tests — for example, cases that expect directory contents to be empty after deletion is requested for all files listed within them — to fail.
Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 285, 286 and 287 to Debian:
Security fixes:
Validate the --css command-line argument to prevent a potential Cross-site scripting (XSS) attack. Thanks to Daniel Schmidt from SRLabs for the report. […]
Prevent XML entity expansion attacks. Thanks to Florian Wilkens from SRLabs for the report.. […][…]
Print a warning if we have disabled XML comparisons due to a potentially vulnerable version of pyexpat. […]
Bug fixes:
Correctly identify changes to only the line-endings of files; don’t mark them as Ordering differences only. […]
When passing files on the command line, don’t call specialize(…) before we’ve checked that the files are identical or not. […]
Do not exit with a traceback if paths are inaccessible, either directly, via symbolic links or within a directory. […]
Don’t cause a traceback if cbfstool extraction failed.. […]
Use the surrogateescape mechanism to avoid a UnicodeDecodeError and crash when any decoding zipinfo output that is not UTF-8 compliant. […]
Testsuite improvements:
Don’t mangle newlines when opening test fixtures; we want them untouched. […]
In addition, fridtjof added support for the ASAR.tar-like archive format. […][…][…][…] and lastly, Vagrant Cascadian updated diffoscope in GNU Guix to version 285 […][…] and 286 […][…].
strip-nondeterminism is our sister tool to remove specific non-deterministic results from a completed build. This month version 1.14.1-1 was uploaded to Debian unstable by Chris Lamb, making the following the changes:
Clarify the --verbose and non --verbose output of bin/strip-nondeterminism so we don’t imply we are normalizing files that we are not. […]
Update the website’s README to make the setup command copy & paste friendly. […]
Reproducibility testing framework
The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In January, a number of changes were made by Holger Levsen, including:
Ed Maste modified the FreeBSD build system to the clean the object directory before commencing a build. […]
Gioele Barabucci updated the rebuilder stats to first add a category for network errors […] as well as to categorise failures without a diffoscope log […].
Jessica Clarke also made some FreeBSD-related changes, including:
Ensuring we clean up the object directory for second build as well. […][…]
Updating the sudoers for the relevant rm -rf command. […]
Update the cleanup_tmpdirs method to to match other removals. […]
Update the reproducible_debstrap job to call Debian’s debootstrap with the full path […] and to use eatmydata as well […][…].
Make some changes to deduce the CPU load in the debian_live_build job. […]
Lastly, both Holger Levsen […] and Vagrant Cascadian […] performed some node maintenance.
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:
Tim has been working on a large C++ project which has been around for many, many years. It's a tool built for, in Tim's words, "an esoteric field", and most of the developers over the past 30 years have been PhD students.
This particular representative line is present with its original whitespace, and the original variable names. It has been in the code base since 2010.
Assignment::Ptr ra = Assignment::makeAssignment(I,
addr,
func,
block,
RA);
The extra bonus is that Assignment::Ptr is actually an alias for boost::shared_ptr<Assignment>. As you might gather from the name shared_ptr, that's a reference-counted way to manage pointers to memory, and thus avoid memory leaks.
The developers just couldn't tolerate using the names provided by their widely used library solving a widely understood problem, and needed to invent their own names, which made the code less clear. The same is true for makeAssignment. And this pattern is used for nearly every class, because the developers involved didn't understand object lifetimes, when to allow things to be stack allocated, or how ownership should really work in an application.
This is hardly the only WTF in the code, but Tim says:
Preceding the 98 standard, there is a LOT of C-with-classes code. But this representative line speaks to the complete lack of thought that has gone into much of codebase. That whitespace is as-is from the source.
[Advertisement]
Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!
Author: Hillary Lyon The Holographic Wildlife Museum was a major draw for the city, with its representation of Earth’s extinct and endangered animals. Vera loved the idea of viewing facsimiles of majestic creatures in their natural habitats, even if it was through holograms. Besides, hologram technology had come a long way since her youth, when […]
The FBI joined authorities across Europe last week in seizing domain names for Cracked and Nulled, English-language cybercrime forums with millions of users that trafficked in stolen data, hacking tools and malware. An investigation into the history of these communities shows their apparent co-founders quite openly operate an Internet service provider and a pair of e-commerce platforms catering to buyers and sellers on both forums.
In this 2019 post from Cracked, a forum moderator told the author of the post (Buddie) that the owner of the RDP service was the founder of Nulled, a.k.a. “Finndev.” Image: Ke-la.com.
On Jan. 30, the U.S. Department of Justicesaid it seized eight domain names that were used to operate Cracked, a cybercrime forum that sprang up in 2018 and attracted more than four million users. The DOJ said the law enforcement action, dubbed Operation Talent, also seized domains tied to Sellix, Cracked’s payment processor.
In addition, the government seized the domain names for two popular anonymity services that were heavily advertised on Cracked and Nulled and allowed customers to rent virtual servers: StarkRDP[.]io, and rdp[.]sh.
Those archived webpages show both RDP services were owned by an entity called 1337 Services Gmbh. According to corporate records compiled by Northdata.com, 1337 Services GmbH is also known as AS210558 and is incorporated in Hamburg, Germany.
The Cracked forum administrator went by the nicknames “FlorainN” and “StarkRDP” on multiple cybercrime forums. Meanwhile, a LinkedIn profile for a Florian M. from Germany refers to this person as the co-founder of Sellix and founder of 1337 Services GmbH.
Northdata’s business profile for 1337 Services GmbH shows the company is controlled by two individuals: 32-year-old Florian Marzahl and Finn Alexander Grimpe, 28.
An organization chart showing the owners of 1337 Services GmbH as Florian Marzahl and Finn Grimpe. Image: Northdata.com.
Neither Marzahl nor Grimpe responded to requests for comment. But Grimpe’s first name is interesting because it corresponds to the nickname chosen by the founder of Nulled, who goes by the monikers “Finn” and “Finndev.” NorthData reveals that Grimpe was the founder of a German entity called DreamDrive GmbH, which rented out high-end sports cars and motorcycles.
The email address used for those accounts was f.grimpe@gmail.com. DomainTools.com reports f.grimpe@gmail.com was used to register at least nine domain names, including nulled[.]lol and nulled[.]it. Neither of these domains were among those seized in Operation Talent.
Intel471 finds the user FlorainN registered across multiple cybercrime forums using the email address olivia.messla@outlook.de. The breach tracking service Constella Intelligence says this email address used the same password (and slight variations of it) across many accounts online — including at hacker forums — and that the same password was used in connection with dozens of other email addresses, such as florianmarzahl@hotmail.de, and fmarzahl137@gmail.com.
The Justice Department said the Nulled marketplace had more than five million members, and has been selling stolen login credentials, stolen identification documents and hacking services, as well as tools for carrying out cybercrime and fraud, since 2016.
Perhaps fittingly, both Cracked and Nulled have been hacked over the years, exposing countless private messages between forum users. A review of those messages archived by Intel 471 showed that dozens of early forum members referred privately to Finndev as the owner of shoppy[.]gg, an e-commerce platform that caters to the same clientele as Sellix.
Shoppy was not targeted as part of Operation Talent, and its website remains online. Northdata reports that Shoppy’s business name — Shoppy Ecommerce Ltd. — is registered at an address in Gan-Ner, Israel, but there is no ownership information about this entity. Shoppy did not respond to requests for comment.
Constella found that a user named Shoppy registered on Cracked in 2019 using the email address finn@shoppy[.]gg. Constella says that email address is tied to a Twitter/X account for Shoppy Ecommerce in Israel.
The DOJ said one of the alleged administrators of Nulled, a 29-year-old Argentinian national named Lucas Sohn, was arrested in Spain. The government has not announced any other arrests or charges associated with Operation Talent.
Indeed, both StarkRDP and FloraiN have posted to their accounts on Telegram that there were no charges levied against the proprietors of 1337 Services GmbH. FlorainN told former customers they were in the process of moving to a new name and domain for StarkRDP, where existing accounts and balances would be transferred.
“StarkRDP has always been operating by the law and is not involved in any of these alleged crimes and the legal process will confirm this,” the StarkRDP Telegram account wrote on January 30. “All of your servers are safe and they have not been collected in this operation. The only things that were seized is the website server and our domain. Unfortunately, no one can tell who took it and with whom we can talk about it. Therefore, we will restart operation soon, under a different name, to close the chapter [of] ‘StarkRDP.'”
In my last blog, I explained how we resolved a throttling issue involving Azure storage API. In the end, I mentioned that I was not sure of the root cause of the throttling issue.
Even though we no longer had any problem in dev and preprod cluster, we still faced throttling issue with prod. The main difference between these 2 environments is that we have about 80 PVs in prod versus 15 in the other environments. Given that we manage 1500 pods in prod, 80 PVs does not look like a lot.
To continue the investigation, I’ve modified k8s-scheduled-volume-snapshotter to limit the number of snaphots done in a single cron run (see add maxSnapshotCount parameter pull request).
In prod, we used the modified snapshotter to trigger snapshots one by one.
Even with all previous snapshots cleaned up, we could not trigger a single new snapshot without being throttled. I guess that, in the cron job, just checking the list of PV to snapshot was enough to exhaust our API quota.
Azure doc mention that a leaky bucket algorithm is used for throttling. A full bucket holds tokens for 250 API calls, and the bucket gets 25 new tokens per second. Looks like that not enough.
I was puzzled and out of ideas .
I looked for similar problems in AKS issues on GitHub where I found this comment that recommend using useDataPlaneAPI parameter in the CSI file driver. That was it!
I was flabbergasted by this parameter: why is CSI file driver able to use 2 APIs ? Why is one on them so limited ? And more importantly, why is the limited API the default one ?
Anyway, setting useDataPlaneAPI: "true" in our VolumeSnapshotClass manifest was the right solution. This indeed solved the throttling issue in our prod cluster.
But not the snaphot issue . Amongst the 80 PV, I still had 2 snaphots failing.
Fortunately, the error was mentioned in the description of the failed snapshots: we had too many (200) snapshots for these shared volumes.
What ?? All these snapshots were cleaned up last week.
I then tried to delete these snaphots through azure console. But the console failed to delete these snapshot due to API throttling. Looks like Azure console is not using the right API.
Anyway, I went back to the solution explained in my previous blog, I listed all snapshots with az command. I indeed has a lot of snaphots, a lot of them dated Jan 19 and 20. There was often a new bogus snaphot created every minute.
These were created during the first attempt at fixing the throttling issue. I guess that even though CSI file driver was throttled, a snaphot was still created in the storage account, but the CSI driver did not see it and retried a minute later. What a mess.
Anyway, I’ve cleaned up again these bogus snapshot , and now, snaphot creation is working fine .
There are a lot of cases where the submission is "this was server side generated JavaScript and they were loading constants". Which, honestly, is a WTF, but it isn't interesting code. Things like this:
if (false === true)
{
// do stuff
}
That's absolutely the wrong way to do that, and I hate it, but there's just so many times you can say, "send server-side values to the client as an object, not inline".
But Daniel's electrical provider decided to come up with an example of this that really takes it to the next level of grossness.
var isMobile = "" === "true";
var isAndroid = "" === "true";
var isIPad = "" === "true";
var isIPhone = "" === "true";
For starters, they're doing device detection on the server side, which isn't the worst possible idea, but it means they're relying on header fields or worse: the user agent string. Maybe they're checking the device resolution. The fact that they're naming specific devices instead of browser capabilities hints at a terrible hackjob of reactive webdesign- likely someone wrote a bunch of JavaScript that alters the desktop stylesheet to cram the desktop site onto a mobile device. But that's just background noise.
Look at that code.
First, we've got some lovely order-of-operations abuse: === has higher precedence than =, which makes sense but hardly makes this code readable. The first time I saw this, my brain wanted the assignment to happen first.
But what's really special to me is the insistence on making this stringly typed. They control both sides of the code, so they could have just done booleans on both sides. And sure, there's a world where they're just dumb, or didn't trust their templating engine to handle that well.
I've seen enough bad code, though, to have a different suspicion. I can't confirm it, but c'mon, you know in your hearts this is true: the function which is doing device detection returns a string itself, and that string isn't always a boolean for some reason. So they needed to wrap the output in quotes, because that was the only way to make sure that the JavaScript actually could be executed without a syntax error.
I can't be sure that's true from this little snippet. But look at this code, and tell me that someone didn't make that mistake.
[Advertisement]
Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.
Author: Jared S Moya A sharp pain pierced Darya’s side. His knees buckled as he drew his hand to the wound and toppled to the ground. His shoulder slammed into the packed dirt of the dry riverbed, his teeth clacking against each other. Rolling onto his back, he noticed a lancer round had penetrated his […]
A few ago I may have accidentally bought a ring of 12 RGB LEDs; I soldered
temporary leads on it, connected it to a CircuitPython supported board
and played around for a while.
They we had a couple of friends come over to remote FOSDEM together, and
I had talked with one of them about WS2812 / NeoPixels, so I brought
them to the living room, in case there was a chance to show them in
sort-of-use.
Then I was dealing with playing the various streams as we moved from one
room to the next, which lead to me being called “video team”, which lead
to me wearing a video team shirt (from an old DebConf, not FOSDEM, but
still video team), which lead to somebody asking me whether I also had
the sheet with the countdown to the end of the talk, and the answer was
sort-of-yes (I should have the ones we used to use for our Linux Day),
but not handy.
But I had a thing with twelve things in a clock-like circle.
A bit of fiddling on the CircuitPython REPL resulted, if I remember
correctly, in something like:
import board
import neopixel
import time
num_pixels = 12
pixels = neopixel.NeoPixel(board.GP0, num_pixels)
pixels.brightness = 0.1
def end(min):
pixels.fill((0, 0, 0))
for i in range(12):
pixels[i] = (127 + 10 * i, 8 * (12 - i), 0)
pixels[i-1] = (0, 0, 0)
time.sleep(min * 5) # min * 60 / 12
Now, I wasn’t very consistent in running end, especially since I
wasn’t sure whether I wanted to run it at the beginning of the talk with
the full duration or just in the last 5 - 10 minutes depending of the
length of the slot, but I’ve had at least one person agree that the
general idea has potential, so I’m taking these notes to be able to work
on it in the future.
One thing that needs to be fixed is the fact that with the ring just
attached with temporary wires and left on the table it isn’t clear which
LED is number 0, so it will need a bit of a case or something, but
that’s something that can be dealt with before the next fosdem.
And I should probably add some input interface, so that it is
self-contained and not tethered to a computer and run from the REPL.
(And then I may also have a vague idea for putting that ring into some
wearable thing: good thing that I actually bought two :D )
Most people know that robots no longer sound like tinny trash cans. They sound like Siri, Alexa, and Gemini. They sound like the voices in labyrinthine customer support phone trees. And even those robot voices are being made obsolete by new AI-generated voices that can mimic every vocal nuance and tic of human speech, down to specific regional accents. And with just a few seconds of audio, AI can now clone someone’s specific voice.
This technology will replace humans in many areas. Automated customer support will save money by cutting staffing at call centers. AI agents will make calls on our behalf, conversing with others in natural language. All of that is happening, and will be commonplace soon.
But there is something fundamentally different about talking with a bot as opposed to a person. A person can be a friend. An AI cannot be a friend, despite how people might treat it or react to it. AI is at best a tool, and at worst a means of manipulation. Humans need to know whether we’re talking with a living, breathing person or a robot with an agenda set by the person who controls it. That’s why robots should sound like robots.
You can’t just label AI-generated speech. It will come in many different forms. So we need a way to recognize AI that works no matter the modality. It needs to work for long or short snippets of audio, even just a second long. It needs to work for any language, and in any cultural context. At the same time, we shouldn’t constrain the underlying system’s sophistication or language complexity.
We have a simple proposal: all talking AIs and robots should use a ring modulator. In the mid-twentieth century, before it was easy to create actual robotic-sounding speech synthetically, ring modulators were used to make actors’ voices sound robotic. Over the last few decades, we have become accustomed to robotic voices, simply because text-to-speech systems were good enough to produce intelligible speech that was not human-like in its sound. Now we can use that same technology to make robotic speech that is indistinguishable from human sound robotic again.
A ring modulator has several advantages: It is computationally simple, can be applied in real-time, does not affect the intelligibility of the voice, and—most importantly—is universally “robotic sounding” because of its historical usage for depicting robots.
Responsible AI companies that provide voice synthesis or AI voice assistants in any form should add a ring modulator of some standard frequency (say, between 30-80 Hz) and of a minimum amplitude (say, 20 percent). That’s it. People will catch on quickly.
Here are a couple of examples you can listen to for examples of what we’re suggesting. The first clip is an AI-generated “podcast” of this article made by Google’s NotebookLM featuring two AI “hosts.” Google’s NotebookLM created the podcast script and audio given only the text of this article. The next two clips feature that same podcast with the AIs’ voices modulated more and less subtly by a ring modulator:
Raw audio sample generated by Google’s NotebookLM
Audio sample with added ring modulator (30 Hz-25%)
Audio sample with added ring modulator (30 Hz-40%)
We were able to generate the audio effect with a 50-line Python script generated by Anthropic’s Claude. One of the most well-known robot voices were those of the Daleks from Doctor Who in the 1960s. Back then robot voices were difficult to synthesize, so the audio was actually an actor’s voice run through a ring modulator. It was set to around 30 Hz, as we did in our example, with different modulation depth (amplitude) depending on how strong the robotic effect is meant to be. Our expectation is that the AI industry will test and converge on a good balance of such parameters and settings, and will use better tools than a 50-line Python script, but this highlights how simple it is to achieve.
Of course there will also be nefarious uses of AI voices. Scams that use voice cloning have been getting easier every year, but they’ve been possible for many years with the right know-how. Just like we’re learning that we can no longer trust images and videos we see because they could easily have been AI-generated, we will all soon learn that someone who sounds like a family member urgently requesting money may just be a scammer using a voice-cloning tool.
We don’t expect scammers to follow our proposal: They’ll find a way no matter what. But that’s always true of security standards, and a rising tide lifts all boats. We think the bulk of the uses will be with popular voice APIs from major companies—and everyone should know that they’re talking with a robot.
This essay was written with Barath Raghavan, and originally appeared in IEEE Spectrum.
JOIN US IN PERSON AND ONLINE for Ahmed Best's Long Now Talk,Feel the Future: A Valentine's Evening on February 14, 02025 at 7 PM PT at the Herbst Theatre in San Francisco.
Ahmed Best is an award-winning artist, educator, director, the host of the Afrofuturist podcast, and co-founder of the AfroRithm Futures Group, among other pursuits, including his role as Jar Jar Binks in Star Wars: Episode I. Ahmed teaches Dramatic Narrative Design, a course he created for Film and Actor entrepreneurship at USC School of Dramatic Arts. He is also a Senior Fellow at USC Annenberg School for Communication and Journalism, and a visiting professor at Stanford’s Hasso Plattner Institute of Design.
If you could witness one event from the distant past or future, what would it be?
Distant future: I would love to see the first thing that we build — and it might not be a vessel — that can travel at the speed of light.
Distant past: The construction of the Pyramids of Giza. The bricks of the pyramids were formed with almost laser-like precision. I want to see how they did that. Was it a laser? Can you imagine the Egyptians harnessing, say, the power of the sun with a big piece of glass to where they could do a laser-cut of a brick of stone? Had not those who wanted to change the narrative of this civilization destroyed so many things, we might have known how they did it. Right now, it’s a mystery. People like to credit aliens, but I align with Neil deGrasse Tyson on this one: just because they were smarter than you doesn’t mean they came from another planet.
What’s one prediction about the future you’d be willing to make for Long Bets, our arena for accountable predictions?
A thousand years from now, we will have learned to move beyond the planet without carrying the problems of the past with us. This will come through a global cultural revolution. We will be ready to travel through to the stars without harm. We’ll be able to respect where we are going for what “where we are going” demands. We’re not going to impose our ideas of what respect is onto wherever we travel to.
If you had to choose a single artifact to represent our time in a far-future archive, what would it be?
To represent our time, I’d choose the smartphone. The idea of a smartphone was inspired by science fiction, but also there's a longer, almost pseudo-spiritual idea of a smartphone that connects the past to our present.
In his book African Fractals: Modern Computing and Indigenous Design(01999), the mathematician Ron Eglash studied the sand diviners of the Bamana people of Mali, who read peoples’ fortunes by drawing symbols in the sand. Eglash found that this system of divination uses a binary-like logic. Variants of this practice spread from Africa to Spain and the rest of Europe during the Islamic Golden Age, where it was known as geomancy. Leibniz was inspired by geomancy when he created a binary system of ones and zeros, which eventually became the foundation for the development of the digital computer — and, ultimately, the smartphone.
The connection I love to draw is that the primary material of smartphones is silicon, which comes from sand. So, in a sense, when we use our smartphones, we are engaging in a modern form of “sand divination,” just like the ancient sand diviners did in actual sand.
What’s the most surprising way history has repeated itself in your field?
What surprises me most is how every generation puts technology — and the monetary gain it brings — above creativity. The creativity of the time creates a technology of the time, and then everybody focuses so much on replicating the technology but not supporting the creativity that got you there in the first place. It’s a cycle that has repeated itself throughout history. You can see it in the history of music, in writing, in social media — in any kind of storytelling that can be replicated and shared.
Today, we’re at an inflection point where we have so much technology that we build it without having any idea what the use for it is. We put it out there and expect that somebody creative can figure out how to monetize it. We keep putting the creative people — who can actually influence culture in a way that moves us forward optimistically towards change — in a box. We don’t give them the resources to move us forward because we got locked into the amount of technology we can make at a mass scale to acquire as much monetary gain as possible.
Changing this cycle would require a shift in what we choose to value. I’m a big Trekkie; everything comes down to Star Trek. Imagine a Star Trek-like future where human experience, expression and exploration are the commodity and not excess and greed. Unfortunately, we might need Vulcans to come down to make that happen.
What are some books you would recommend for inclusion in The Manual For Civilization, our crowd-curated library of 3,500 books of essential civilizational knowledge?
The Nutmeg’s Curse: Parables for a Planet in Crisis(02021) by Amitav Ghosh. This book explores capitalism and imperialism from the point of view of nutmeg, which was the most expensive commodity on the planet in the 14th and 15th centuries. Nutmeg could buy you a house in Europe. Ghosh brilliantly frames the story of colonization — the very notion of which stemmed from this desire for nutmeg — through a small island in the Indian Ocean that was the only place where nutmeg was found at the time.
Just because you get fired doesn't mean that your pull requests are automatically closed. Dallin was in the middle of reviewing a PR by Steve when the email came out announcing that Steve no longer worked at the company.
Let's take a look at that PR, and maybe we can see why.
This is the original code, which represents operations on investments. An investment is represented by a note, and belongs to one or more partys. The amount that can be drawn is set by a limit, which can belong to either the party or the note.
What our developer was tasked with doing was allow a note to have no limit. This means changing all the places where the note's limit is checked. So this is what they submitted:
You'll note here that the note limit isn't part of calculating the party limits, so both branches do the same thing. And then there's the deeper question of "is a null really the best way to represent this?" especially given that elsewhere in the code they have an "unlimited" flag that disables limit checking.
Now, Steve wasn't let go only for their code- they were just a miserable co-worker who liked to pick fights in pull request comments. So the real highlight of Steve's dismissal was that Dallin got to have a meaningful discussion about the best way to make this change with the rest of the team, and Steve didn't have a chance to disrupt it.
[Advertisement]
Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.
Author: Julian Miles, Staff Writer …And so the seas rose again, while volcanos and storms brought even more devastation and starvation. Those godly ones who led us looked within themselves and made a decision: in their image was the world, and in their image it would be again. But until the disasters abated, they would […]
I was recently pointed to Technologies and Projects supported by the
Sovereign Tech Agency which is financed by the German Federal
Ministry for Economic Affairs and Climate Action. It is a subsidiary of
the Federal Agency for Disruptive Innovation, SPRIND GmbH.
It is worth sending applications there for distinct projects as that is
their preferred method of funding. Distinguished developers can also
apply for a fellowship position that pays up to 40hrs / week (32hrs when
freelancing) for a year. This is esp. open to maintainers of larger
numbers of packages in Debian (or any other Linux distribution).
There might be a chance that some of the Debian-related projects
submitted to the Google Summer of Code that did not get funded could be
retried with those foundations. As per the FAQ of the project:
"The Sovereign Tech Agency focuses on securing and strengthening open
and foundational digital technologies. These communities working on
these are distributed all around the world, so we work with people,
companies, and FOSS communities everywhere."
Similar funding organizations include the Open Technology Fund and
FLOSS/fund. If you have a Debian-related project that fits these
funding programs, they might be interesting options. This list is by no
means exhaustive—just some hints I’ve received and wanted to share. More
suggestions for such opportunities are welcome.
Year of code reviews
On the debian-devel mailing list, there was a long thread titled
"Let's make 2025 a year when code reviews became common in Debian".
It initially suggested something along the lines of:
"Let's review MRs in Salsa." The discussion quickly expanded to
include patches that have
been sitting in the BTS for years, which deserve at least the same
attention. One idea I'd like to emphasize is that associating BTS bugs
with MRs could be very convenient. It’s not only helpful for
documentation but also the easiest way to apply patches.
I’d like to emphasize that no matter what workflow we use—BTS, MRs, or a
mix—it is crucial to uphold Debian’s reputation for high quality.
However, this reputation is at risk as more and more old issues
accumulate. While Debian is known for its technical excellence,
long-standing bugs and orphaned packages remain a challenge. If we don’t
address these, we risk weakening the high standards that Debian is
valued for. Revisiting old issues and ensuring that unmaintained
packages receive attention is especially important as we prepare for the
Trixie release.
Debian Publicity Team will no longer post on X/Twitter
The team is in charge of deciding the most suitable publication
venue or venues for announcements and when they are published.
the team once decided to join Twitter, but circumstances have since
changed. The current Press delegates have the institutional authority to
leave X, just as their predecessors had the authority to join. I
appreciate that the team carefully considered the matter, reinforced by
the arguments developed on the debian-publicity list, and communicated
its reasoning openly.
The RcppUUID package
on CRAN has been providing
UUIDs (based on the underlying Boost
library) for several years. Written by Artem Klemsov and maintained
in this gitlab
repo, the package is a very nice example of clean and
straightforward library binding.
When we did our annual
BH upgrade to 1.87.0 and check reverse dependencies, we noticed the
RcppUUID
needed a small and rather minor update which we showed as a short diff
in an
issue filed. Neither I nor CRAN heard from Artem, so the
packaged ended up being archived last week. Which in turn lead me to
make this minimal update to 1.1.2 to resurrect it, which CRAN processed more or less like a
regular update given this explanation and so it arrived last Friday.
But you know what Canada could make? A Canadian App Store. That’s a store that Canadian software authors could use to sell Canadian apps to Canadian customers, charging, say, the standard payment processing fee of 5% rather than Apple’s 30%. Canada could make app stores for the Android, Playstation and Xbox, too.
There’s no reason that a Canadian app store would have to confine itself to Canadian software authors, either. Canadian app stores could offer 5% commissions on sales to US and global software authors, and provide jailbreaking kits that allows device owners all around the world to install the Canadian app stores where software authors don’t get ripped off by American Big Tech companies.
To Quote @MM0EFI and the GM0ESS gang, today was a particularly Amateur showing!
Having spent all weekend locked in the curling rink ruining my knees and inflicting mild liver damage in the Aberdeen City Open competition, I needed some outside time away from people to stretch the legs and loosen my knees.
With my teammates/guests shipped off early on account of our quality performance and the days fair drawin’ out now, I found myself with a free afternoon to have a quick run up something nearby before a 1640 sunset! Up the back of Bennachie is a quick steady ascent and in 13 years of living up here I’ve never summited the big hill! Now is as good a time as any. In SOTA terms, this hill is GM/ES-061. In Geographical terms, it’s around 20 miles inland from Aberdeen city here.
I’ve been experimenting with these Aliexpress whips since the end of last year and the forecast wind was low enough to take one into the hills. I cut and terminated 8x 2.5m radials for an effective ground plane last week and wanted to try that against the flat ribbon that it came with.
The ascent was pleasant enough, got to the summit in good time, and out came my Quansheng radio to get the GM/ES-Society on 2m. First my Nagoya whip - called CQ and heard nothing, with general poor reports in WhatsApp I opted to get the slim-g up my aliexpress fibreglass mast.
In an amateur showing last week, I broke the tip of the mast on Cat Law helping 2M0HSK do his first activation due to the wind, and had forgotten this until I summited this week. Squeezing my antenna on was tough, and after many failed attempts to get it up (the mast kept collapsing as I was rushing and not getting the friction hold on each section correctly) and still not hearing anything at all, I changed location and tried again.
In my new position, I received 2M0RVZ 4/4 at best, but he was hearing my 5/9. Similarly GM5ALX and GM4JXP were patiently receiving me loud and clear but I couldn’t hear them at all. I fiddled with settings and decided the receive path of the Quansheng must be fried or sad somehow, but I don’t yet have a full set of diagnostics run.
I’ll take my Anytone on the next hill and compare them against each other I think.
I gave up and moved to HF, getting my whip and new radials into the ground:
Quick to deploy which is what I was after. My new 5m of coax with a choke fitted attached to the radio and we were off to the races - A convenient thing of beauty when it’s up:
I’ve made a single guy with a sotabeams top insulator to brace against wind if need be, but that didn’t need to be used today.
I hit tune, and the G90 spent ages clicking away. In fact, tuning to 14.074, I could only see the famed FT8 signals at S2.
What could be wrong here? Was it my new radials? the whip has behaved before… Minutes turned into tens of minutes playing with everything, and eventually I worked out what was up - my coax only passed signal when I the PL259 connector at the antenna juuuust right. Once I did that, I could take the tuner out the system and work 20 spectacularly well. Until now, I’d been tuning the coax only.
Another Quality Hibby Build Job™️. That’s what’s wrong!
I managed to struggle my way through a touch of QRM and my wonky cable woes to make enough contacts with some very patient chasers and a summit to summit before my frustration at the situation won out, and down the hill I went after a quick pack up period. I managed to beat the sunset - I think if the system had worked fine, I’d have stayed on the hill for sunset.
I think it’s time for a new mast and a coax retermination!
Most of my Debian contributions this month were
sponsored by
Freexian. If you appreciate this sort of work and are at a company that
uses Debian, have a look to see whether you can pay for any of
Freexian‘s services; as well as the direct
benefits, that revenue stream helps to keep Debian development sustainable
for me and several other lovely
people.
You can also support my work directly via
Liberapay.
Python team
We finally made Python 3.13 the default version in testing! I fixed various
bugs that got in the way of this:
I helped with some testing of a debian-installer-utils
patch
as part of the /usr move. I need to get around to uploading this, since
it looks OK now.
Other small things
Helmut Grohne reached out for help debugging a multi-arch coinstallability
problem (you know it’s going to be complicated when even Helmut can’t
figure it out on his own …) in
binutils, and we had a call about that.
For many years I wished I had a setup that would allow me to work (that is, code) productively outside in the bright sun. It’s winter right now, but when its summer again it’s always a bit. this weekend I got closer to that goal.
TL;DR: Using code-server on a beefy machine seems to be quite neat.
Passively lit coding
Personal history
Looking back at my own old blog entries I find one from 10 years ago describing how I bought a Kobo eBook reader with the intent of using it as an external monitor for my laptop. It seems that I got a proof-of-concept setup working, using VNC, but it was tedious to set up, and I never actually used that. I subsequently noticed that the eBook reader is rather useful to read eBooks, and it has been in heavy use for that every since.
Four years ago I gave this old idea another shot and bought an Onyx BOOX Max Lumi. This is an A4-sized tablet running Android and had the very promising feature of an HDMI input. So hopefully I’d attach it to my laptop and it just works™. Turns out that this never worked as well as I hoped: Even if I set the resolution to exactly the tablet’s screen’s resolution I got blurry output, and it also drained the battery a lot, so I gave up on this. I subsequently noticed that the tablet is rather useful to take notes, and it has been in sporadic use for that.
Going off on this tangent: I later learned that the HDMI input of this device appears to the system like a camera input, and I don’t have to use Boox’s “monitor” app but could other apps like FreeDCam as well. This somehow managed to fix the resolution issues, but the setup still wasn’t as convenient to be used regularly.
I also played around with pure terminal approaches, e.g. SSH’ing into a system, but since my usual workflow was never purely text-based (I was at least used to using a window manager instead of a terminal multiplexer like screen or tmux) that never led anywhere either.
My colleagues have said good things about using VSCode with the remote SSH extension to work on a beefy machine, so I gave this a try now as well, and while it’s not a complete game changer for me, it does make certain tasks (rebuilding everything after a switching branches, running the test suite) very convenient. And it’s a bit spooky to run these work loads without the laptop’s fan spinning up.
In this setup, the workspace is remote, but VSCode still runs locally. But it made me wonder about my old goal of being able to work reasonably efficient on my eInk tablet. Can I replicate this setup there?
VSCode itself doesn’t run on Android directly. There are project that run a Linux chroot or in termux on the Android system, and then you can VNC to connect to it (e.g. on Andronix)… but that did not seem promising. It seemed fiddly, and I probably should take it easy on the tablet’s system.
code-server, running remotely
A more promising option is code-server. This is a fork of VSCode (actually of VSCodium) that runs completely on the remote machine, and the client machine just needs a browser. I set that up this weekend and found that I was able to do a little bit of work reasonably.
Access
With code-server one has to decide how to expose it safely enough. I decided against the tunnel-over-SSH option, as I expected that to be somewhat tedious to set up (both initially and for each session) on the android system, and I liked the idea of being able to use any device to work in my environment.
I also decided against the more involved “reverse proxy behind proper hostname with SSL” setups, because they involve a few extra steps, and some of them I cannot do as I do not have root access on the shared beefy machine I wanted to use.
That left me with the option of using a code-server’s built-in support for self-signed certificates and a password:
(I am using nix as a package manager on a Debian system there, hence the additional PATH and complex ExecStart. If you have a more conventional setup then you do not have to worry about Environment and can likely use ExecStart=code-server.
For this to survive me logging out I had to ask the system administrator to run loginctl enable-linger joachim, so that systemd allows my jobs to linger.
Git credentials
The next issue to be solved was how to access the git repositories. The work is all on public repositories, but I still need a way to push my work. With the classic VSCode-SSH-remote setup from my laptop, this is no problem: My local SSH key is forwarded using the SSH agent, so I can seamlessly use that on the other side. But with code-server there is no SSH key involved.
I could create a new SSH key and store it on the server. That did not seem appealing, though, because SSH keys on Github always have full access. It wouldn’t be horrible, but I still wondered if I can do better.
I thought of creating fine-grained personal access tokens that only me to push code to specific repositories, and nothing else, and just store them permanently on the remote server. Still a neat and convenient option, but creating PATs for our org requires approval and I didn’t want to bother anyone on the weekend.
So I am experimenting with Github’s git-credential-manager now. I have configured it to use git’s credential cache with an elevated timeout, so that once I log in, I don’t have to again for one workday.
To login, I have to https://github.com/login/device on an authenticated device (e.g. my phone) and enter a 8-character code. Not too shabby in terms of security. I only wish that webpage would not require me to press Tab after each character…
This still grants rather broad permissions to the code-server, but at least only temporarily
Android setup
On the client side I could now open https://host.example.com:8080 in Firefox on my eInk Android tablet, click through the warning about self-signed certificates, log in with the fixed password mentioned above, and start working!
I switched to a theme that supposedly is eInk-optimized (eInk by Mufanza). It’s not perfect (e.g. git diffs are unhelpful because it is not possible to distinguish deleted from added lines), but it’s a start. There are more eInk themes on the official Visual Studio Marketplace, but because code-server is a fork it cannot use that marketplace, and for example this theme isn’t on Open-VSX.
For some reason the F11 key doesn’t work, but going fullscreen is crucial, because screen estate is scarce in this setup. I can go fullscreen using VSCode’s command palette (Ctrl-P) and invoking the command there, but Firefox often jumps out of the fullscreen mode, which is annoying. I still have to pay attention to when that’s happening; maybe its the Esc key, which I am of course using a lot due to me using vim bindings.
A more annoying problem was that on my Boox tablet, sometimes the on-screen keyboard would pop up, which is seriously annoying! It took me a while to track this down: The Boox has two virtual keyboards installed: The usual Google ASOP keyboard, and the Onyx Keyboard. The former is clever enough to stay hidden when there is a physical keyboard attached, but the latter isn’t. Moreover, pressing Shift-Ctrl on the physical keyboard rotates through the virtual keyboards. Now, VSCode has many keyboard shortcuts that require Shift-Ctrl (especially on an eInk device, where you really want to avoid using the mouse). And the limited settings exposed by the Boox Android system do not allow you configure that or disable the Onyx keyboard! To solve this, I had to install the KISS Launcher, which would allow me to see more Android settings, and in particular allow me to disable the Onyx keyboard. So this is fixed.
I was hoping to improve the experience even more by opening the web page as a Progressive Web App (PWA), as described in the code-server FAQ. Unfortunately, that did not work. Firefox on Android did not recognize the site as a PWA (even though it recognizes a PWA test page). And I couldn’t use Chrome either because (unlike Firefox) it would not consider a site with a self-signed certificate as a secure context, and then code-server does not work fully. Maybe this is just some bug that gets fixed in later versions.
I did not work enough with this yet to assess how much the smaller screen estate, the lack of colors and the slower refresh rate will bother me. I probably need to hide Lean’s InfoView more often, and maybe use the Error Lens extension, to avoid having to split my screen vertically.
I also cannot easily work on a park bench this way, with a tablet and a separate external keyboard. I’d need at least a table, or some additional piece of hardware that turns tablet + keyboard into some laptop-like structure that I can put on my, well, lap. There are cases for Onyx products that include a keyboard, and maybe they work on the lap, but they don’t have the Trackpoint that I have on my ThinkPad TrackPoint Keyboard II, and how can you live without that?
Conclusion
After this initial setup chances are good that entering and using this environment is convenient enough for me to actually use it; we will see when it gets warmer.
A few bits could be better. In particular logging in and authenticating GitHub access could be both more convenient and more safe – I could imagine that when I open the page I confirm that on my phone (maybe with a fingerprint), and that temporarily grants access to the code-server and to specific GitHub repositories only. Is that easily possible?
DeepSeek R1, the new entrant to the Large Language Model wars has created quite a splash over the last few weeks. Its entrance into a space dominated by the Big Corps, while pursuing asymmetric and novel strategies has been a refreshing eye-opener.
GPT AI improvement was starting to show signs of slowing down, and has been observed to be reaching a point of diminishing returns as it runs out of data and compute required to train, fine-tune increasingly large models. This has turned the focus towards building "reasoning" models that are post-trained through reinforcement learning, techniques such as inference-time and test-time scaling and search algorithms to make the models appear to think and reason better. OpenAI&aposs o1-series models were the first to achieve this successfully with its inference-time scaling and Chain-of-Thought reasoning.
Intelligence as an emergent property of Reinforcement Learning (RL)
Reinforcement Learning (RL) has been successfully used in the past by Google&aposs DeepMind team to build highly intelligent and specialized systems where intelligence is observed as an emergent property through rewards-based training approach that yielded achievements like AlphaGo (see my post on it here - AlphaGo: a journey to machine intuition).
DeepMind went on to build a series of Alpha* projects that achieved many notable feats using RL:
AlphaGo, defeated the world champion Lee Seedol in the game of Go
AlphaZero, a generalized system that learned to play games such as Chess, Shogi and Go without human input
AlphaStar, achieved high performance in the complex real-time strategy game StarCraft II.
AlphaFold, a tool for predicting protein structures which significantly advanced computational biology.
AlphaCode, a model designed to generate computer programs, performing competitively in coding challenges.
AlphaDev, a system developed to discover novel algorithms, notably optimizing sorting algorithms beyond human-derived methods.
All of these systems achieved mastery in its own area through self-training/self-play and by optimizing and maximizing the cumulative reward over time by interacting with its environment where intelligence was observed as an emergent property of the system.
The RL feedback loop
RL mimics the process through which a baby would learn to walk, through trial, error and first principles.
R1 model training pipeline
At a technical level, DeepSeek-R1 leverages a combination of Reinforcement Learning (RL) and Supervised Fine-Tuning (SFT) for its training pipeline:
DeepSeek-R1 Model Training Pipeline
Using RL and DeepSeek-v3, an interim reasoning model was built, called DeepSeek-R1-Zero, purely based on RL without relying on SFT, which demonstrated superior reasoning capabilities that matched the performance of OpenAI&aposs o1 in certain benchmarks such as AIME 2024.
The model was however affected by poor readability and language-mixing and is only an interim-reasoning model built on RL principles and self-evolution.
DeepSeek-R1-Zero was then used to generate SFT data, which was combined with supervised data from DeepSeek-v3 to re-train the DeepSeek-v3-Base model.
The new DeepSeek-v3-Base model then underwent additional RL with prompts and scenarios to come up with the DeepSeek-R1 model.
The R1-model was then used to distill a number of smaller open source models such as Llama-8b, Qwen-7b, 14b which outperformed bigger models by a large margin, effectively making the smaller models more accessible and usable.
Key contributions of DeepSeek-R1
RL without the need for SFT for emergent reasoning capabilities
R1 was the first open research project to validate the efficacy of RL directly on the base model without relying on SFT as a first step, which resulted in the model developing advanced reasoning capabilities purely through self-reflection and self-verification.
Although, it did degrade in its language capabilities during the process, its Chain-of-Thought (CoT) capabilities for solving complex problems was later used for further RL on the DeepSeek-v3-Base model which became R1. This is a significant contribution back to the research community.
The below analysis of DeepSeek-R1-Zero and OpenAI o1-0912 shows that it is viable to attain robust reasoning capabilities purely through RL alone, which can be further augmented with other techniques to deliver even better reasoning performance.
Its quite interesting, that the application of RL gives rise to seemingly human capabilities of "reflection", and arriving at "aha" moments, causing it to pause, ponder and focus on a specific aspect of the problem, resulting in emergent capabilities to problem-solve as humans do.
Model distillation
DeepSeek-R1 also demonstrated that larger models can be distilled into smaller models which makes advanced capabilities accessible to resource-constrained environments, such as your laptop. While its not possible to run a 671b model on a stock laptop, you can still run a distilled 14b model that is distilled from the larger model which still performs better than most publicly available models out there. This enables intelligence to be brought closer to the edge, to allow faster inference at the point of experience (such as on a smartphone, or on a Raspberry Pi), which paves way for more use cases and possibilities for innovation.
Distilled models are very different to R1, which is a massive model with a completely different model architecture than the distilled variants, and so are not directly comparable in terms of capability, but are instead built to be more smaller and efficient for more constrained environments. This technique of being able to distill a larger model&aposs capabilities down to a smaller model for portability, accessibility, speed, and cost will bring about a lot of possibilities for applying artificial intelligence in places where it would have otherwise not been possible. This is another key contribution of this technology from DeepSeek, which I believe has even further potential for democratization and accessibility of AI.
Why is this moment so significant?
DeepSeek-R1 was a pivotal contribution in many ways.
The contributions to the state-of-the-art and the open research helps move the field forward where everybody benefits, not just a few highly funded AI labs building the next billion dollar model.
Open-sourcing and making the model freely available follows an asymmetric strategy to the prevailing closed nature of much of the model-sphere of the larger players. DeepSeek should be commended for making their contributions free and open.
It reminds us that its not just a one-horse race, and it incentivizes competition, which has already resulted in OpenAI o3-mini a cost-effective reasoning model which now shows the Chain-of-Thought reasoning. Competition is a good thing.
We stand at the cusp of an explosion of small-models that are hyper-specialized, and optimized for a specific use case that can be trained and deployed cheaply for solving problems at the edge. It raises a lot of exciting possibilities and is why DeepSeek-R1 is one of the most pivotal moments of tech history.
We analyzed every instance of AI use in elections collected by the WIRED AI Elections Project (source for our analysis), which tracked known uses of AI for creating political content during elections taking place in 2024 worldwide. In each case, we identified what AI was used for and estimated the cost of creating similar content without AI.
We find that (1) half of AI use isn’t deceptive, (2) deceptive content produced using AI is nevertheless cheap to replicate without AI, and (3) focusing on the demand for misinformation rather than the supply is a much more effective way to diagnose problems and identify interventions.
This tracks with my analysis. People share as a form of social signaling. I send you a meme/article/clipping/photo to show that we are on the same team. Whether it is true, or misinformation, or actual propaganda, is of secondary importance. Sometimes it’s completely irrelevant. This is why fact checking doesn’t work. This is why “cheap fakes”—obviously fake photos and videos—are effective. This is why, as the authors of that analysis said, the demand side is the real problem.
This is yet another story of commercial spyware being used against journalists and civil society members.
The journalists and other civil society members were being alerted of a possible breach of their devices, with WhatsApp telling the Guardian it had “high confidence” that the 90 users in question had been targeted and “possibly compromised.”
It is not clear who was behind the attack. Like other spyware makers, Paragon’s hacking software is used by government clients and WhatsApp said it had not been able to identify the clients who ordered the alleged attacks.
Experts said the targeting was a “zero-click” attack, which means targets would not have had to click on any malicious links to be infected.
Author: Neille Williams “Gramps, a star just fell out of the sky!” Billie hollered out to her Grandpa, who had just poured his second whiskey and was reclining against the kitchen bench. “Sweetie,” he began, ambling over as she pressed her eager face against the window glass, “stars don’t just fall out of the sky, […]
Version 0.0.20 of RcppSpdlog arrived
on CRAN early this morning and
has been uploaded to Debian. RcppSpdlog
bundles spdlog, a
wonderful header-only C++ logging library with all the bells and
whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn
more at the nice package
documention site.
This release updates the code to the version 1.15.1 of spdlog which was released
this morning as well. It also contains a contributed PR which
illustrates logging in a multithreaded context.
The NEWS entry for this release follows.
Changes in
RcppSpdlog version 0.0.20 (2025-02-01)
New multi-threaded logging example (Young Geun Kim and Dirk via
#22)
A newly discovered VPN backdoor uses some interesting tactics to avoid detection:
When threat actors use backdoor malware to gain access to a network, they want to make sure all their hard work can’t be leveraged by competing groups or detected by defenders. One countermeasure is to equip the backdoor with a passive agent that remains dormant until it receives what’s known in the business as a “magic packet.” On Thursday, researchers revealed that a never-before-seen backdoor that quietly took hold of dozens of enterprise VPNs running Juniper Network’s Junos OS has been doing just that.
J-Magic, the tracking name for the backdoor, goes one step further to prevent unauthorized access. After receiving a magic packet hidden in the normal flow of TCP traffic, it relays a challenge to the device that sent it. The challenge comes in the form of a string of text that’s encrypted using the public portion of an RSA key. The initiating party must then respond with the corresponding plaintext, proving it has access to the secret key.
The lightweight backdoor is also notable because it resided only in memory, a trait that makes detection harder for defenders. The combination prompted researchers at Lumin Technology’s Black Lotus Lab to sit up and take notice.
[…]
The researchers found J-Magic on VirusTotal and determined that it had run inside the networks of 36 organizations. They still don’t know how the backdoor got installed.
Another short status update of what happened on my side last
month. Mostly focused on quality of life improvements in phosh and
cleaning up and improving phoc this time around (including catching up
with wlroots git) but some improvements for other things like
phosh-osk-stub happened on the side line too.
Allow events to override the sound feedback with custom sounds
(MR). Allows
desktop/mobile shells like phosh to honour application prefs for notifications.
udev regression affecting gmobile (Bug). Many thanks to Yu Watanabe
for providing the fix so quickly
Reviews
This is not code by me but reviews on other peoples code. The list is
incomplete, but I hope to improve on this in the upcoming
months. Thanks for the contributions!
Author: R. J. Erbacher Admiring what lay outside the glass, the vastness of space overwhelmed him. The window on the spacious observation deck was a circular aluminosilicate pane, a meter in circumference, the handles on both sides allowed him to effortlessly hold his prone body suspended in the zero-gravity environment. He didn’t like to come […]
As people around the world understand how LLMs behave, more and more people
wonder as to why these models hallucinate, and what can be done about to
reduce it. This provocatively named article by Michael Townsen Hicks, James
Humphries and Joe Slater bring is an excellent primer to better understanding
how LLMs work and what to expect from them.
As humans carrying out our relations using our language as the main tool, we are
easily at awe with the apparent ease with which ChatGPT (the first widely
available, and to this day probably the best known, LLM-based automated
chatbot) simulates human-like understanding and how it helps us to easily
carry out even daunting data aggregation tasks. It is common that people ask
ChatGPT for an answer and, if it gets part of the answer wrong, they justify it
by stating that it’s just a hallucination. Townsen et al. invite us to switch
from that characterization to a more correct one: LLMs are bullshitting. This
term is formally presented by Frankfurt [1]. To Bullshit is not the same as to
lie, because lying requires to know (and want to cover) the truth. A
bullshitter not necessarily knows the truth, they just have to provide a
compelling description, regardless of what is really aligned with truth.
After introducing Frankfurt’s ideas, the authors explain the fundamental ideas
behind LLM-based chatbots such as ChatGPT; a Generative Pre-trained Transformer
(GPT)’s have as their only goal to produce human-like text, and it is carried
out mainly by presenting output that matches the input’s high-dimensional
abstract vector representation, and probabilistically outputs the next token
(word) iteratively with the text produced so far. Clearly, a GPT’s ask is not to
seek truth or to convey useful information — they are built to provide a
normal-seeming response to the prompts provided by their user. Core data are not
queried to find optimal solutions for the user’s requests, but are generated on
the requested topic, attempting to mimic the style of document set it was
trained with.
Erroneous data emitted by a LLM is, thus, not equiparable with what a person
could hallucinate with, but appears because the model has no understanding of
truth; in a way, this is very fitting with the current state of the world, a
time often termed as the age of post-truth [2]. Requesting an LLM to provide
truth in its answers is basically impossible, given the difference between
intelligence and consciousness: Following Harari’s definitions [3], LLM
systems, or any AI-based system, can be seen as intelligent, as they have the
ability to attain goals in various, flexible ways, but they cannot be seen as
conscious, as they have no ability to experience subjectivity. This is, the
LLM is, by definition, bullshitting its way towards an answer: their goal is
to provide an answer, not to interpret the world in a trustworthy way.
The authors close their article with a plea for literature on the topic to adopt
the more correct “bullshit” term instead of the vacuous, anthropomorphizing
“hallucination”. Of course, being the word already loaded with a negative
meaning, it is an unlikely request.
This is a great article that mixes together Computer Science and Philosophy, and
can shed some light on a topic that is hard to grasp for many users.
[1] Frankfurt, Harry (2005). On Bullshit. Princeton University Press.
[2] Zoglauer, Thomas (2023). Constructed truths: truth and knowledge in a
post-truth world. Springer.
[3] Harari, Yuval Noah (2023. Nexus: A Brief History of Information Networks
From the Stone Age to AI. Random House.
The FBI and authorities in The Netherlands this week seized dozens of servers and domains for a hugely popular spam and malware dissemination service operating out of Pakistan. The proprietors of the service, who use the collective nickname “The Manipulaters,” have been the subject of three stories published here since 2015. The FBI said the main clientele are organized crime groups that try to trick victim companies into making payments to a third party.
One of several current Fudtools sites run by the principals of The Manipulators.
On January 29, the FBI and the Dutch national police seized the technical infrastructure for a cybercrime service marketed under the brands Heartsender, Fudpage and Fudtools (and many other “fud” variations). The “fud” bit stands for “Fully Un-Detectable,” and it refers to cybercrime resources that will evade detection by security tools like antivirus software or anti-spam appliances.
The Dutch authorities said 39 servers and domains abroad were seized, and that the servers contained millions of records from victims worldwide — including at least 100,000 records pertaining to Dutch citizens.
A statement from the U.S. Department of Justice refers to the cybercrime group as Saim Raza, after a pseudonym The Manipulaters communally used to promote their spam, malware and phishing services on social media.
“The Saim Raza-run websites operated as marketplaces that advertised and facilitated the sale of tools such as phishing kits, scam pages and email extractors often used to build and maintain fraud operations,” the DOJ explained.
The core Manipulaters product is Heartsender, a spam delivery service whose homepage openly advertised phishing kits targeting users of various Internet companies, including Microsoft 365, Yahoo, AOL, Intuit, iCloud and ID.me, to name a few.
The government says transnational organized crime groups that purchased these services primarily used them to run business email compromise (BEC) schemes, wherein the cybercrime actors tricked victim companies into making payments to a third party.
“Those payments would instead be redirected to a financial account the perpetrators controlled, resulting in significant losses to victims,” the DOJ wrote. “These tools were also used to acquire victim user credentials and utilize those credentials to further these fraudulent schemes. The seizure of these domains is intended to disrupt the ongoing activity of these groups and stop the proliferation of these tools within the cybercriminal community.”
Manipulaters advertisement for “Office 365 Private Page with Antibot” phishing kit sold via Heartsender. “Antibot” refers to functionality that attempts to evade automated detection techniques, keeping a phish deployed and accessible as long as possible. Image: DomainTools.
KrebsOnSecurity first wrote about The Manipulaters in May 2015, mainly because their ads at the time were blanketing a number of popular cybercrime forums, and because they were fairly open and brazen about what they were doing — even who they were in real life.
We caught up with The Manipulaters again in 2021, with a story that found the core employees had started a web coding company in Lahore called WeCodeSolutions — presumably as a way to account for their considerable Heartsender income. That piece examined how WeCodeSolutions employees had all doxed themselves on Facebook by posting pictures from company parties each year featuring a large cake with the words FudCo written in icing.
A follow-up story last year about The Manipulaters prompted messages from various WeCodeSolutions employees who pleaded with this publication to remove stories about them. The Saim Raza identity told KrebsOnSecurity they were recently released from jail after being arrested and charged by local police, although they declined to elaborate on the charges.
The Manipulaters never seemed to care much about protecting their own identities, so it’s not surprising that they were unable or unwilling to protect their own customers. In an analysis released last year, DomainTools.com found the web-hosted version of Heartsender leaked an extraordinary amount of user information to unauthenticated users, including customer credentials and email records from Heartsender employees.
Almost every year since their founding, The Manipulaters have posted a picture of a FudCo cake from a company party celebrating its anniversary.
DomainTools also uncovered evidence that the computers used by The Manipulaters were all infected with the same password-stealing malware, and that vast numbers of credentials were stolen from the group and sold online.
“Ironically, the Manipulaters may create more short-term risk to their own customers than law enforcement,” DomainTools wrote. “The data table ‘User Feedbacks’ (sic) exposes what appear to be customer authentication tokens, user identifiers, and even a customer support request that exposes root-level SMTP credentials–all visible by an unauthenticated user on a Manipulaters-controlled domain.”
Police in The Netherlands said the investigation into the owners and customers of the service is ongoing.
“The Cybercrime Team is on the trail of a number of buyers of the tools,” the Dutch national police said. “Presumably, these buyers also include Dutch nationals. The investigation into the makers and buyers of this phishing software has not yet been completed with the seizure of the servers and domains.”
U.S. authorities this week also joined law enforcement in Australia, France, Greece, Italy, Romania and Spain in seizing a number of domains for several long-running cybercrime forums and services, including Cracked and Nulled. According to a statement from the European police agency Europol, the two communities attracted more than 10 million users in total.
Other domains seized as part of “Operation Talent” included Sellix, an e-commerce platform that was frequently used by cybercrime forum members to buy and sell illicit goods and services.
Thrilled to announce a new package: zigg. It arrived
on CRAN today after a few days
of review in the ‘newbies’ queue. zigg provides
the Ziggurat
pseudo-random number generator for Normal, Exponential and Uniform draws
proposed by Marsaglia and
Tsang (JSS, 2000),
and extended by Leong et al. (JSS, 2005).
I had picked up their work in package RcppZiggurat
and updated its code for the 64-buit world we now live in. That package
alredy provided the Normal generator along with several competing
implementations which it compared rigorously and timed them. As one of
the generators was based on the GNU GSL via the
implementation of Voss, we always ended
up with a run-time dependency on the GSL too. No more: this new package
is zero-depedency, zero-suggsts and hence very easy to deploy.
Moreover, we also include a demonstration of four distinct ways of
accessing the compiled code from another R package: pure and straight-up
C, similarly pure C++, inclusion of the header in C++ as well as via Rcpp.
The other advance is the resurrection of the second generator for the
Exponential distribution. And following Burkardt we expose the
Uniform too. The main upside of these generators is their excellent
speed as can be seen in the comparison the default R generators
generated by the example script timings.R:
Needless to say, speed is not everything. This PRNG comes the time of
32-bit computing so the generator period is likely to be shorter than
that of newer high-quality generators. If in doubt, forgo speed and
stick with the high-quality default generators.
They did this by using some new shrink wrap bags and resetting the used hard disk SMART attributes to factory-new values.
Luckily Seagate has a proprietary extension "Seagate FARM (Field Access Reliability Metrics)" implemented in their disks that ... the crooks did not reset.
Luckily ... because other manufacturers do not have that extension. And you think the crooks only re-sell used Seagate disks? Lol.
The get access to the Seagate FARM extension, you need smartctl from smartmontools v7.4 or later.
For Debian 12 (Bookworm) you can add the backports archive and then install with
apt install smartmontools/bookworm-backports.
For Debian 11 (Bullseye) you can use a backport we created at my company:
You can also download static builds from https://builds.smartmontools.org/ which keeps the latest CI builds of the current development branch (v7.5 at the time of writing).
To check the state of your drives, compare the output from smartctl -x and smartctl -l farm. Double checking Power_On_Hours vs. "Power on Hours" is the obvious. But the other values around "Head Flight Hours" and "Power Cycle Count" should also roughly match what you expect from a hard disk of a certain age. All near zero, of course, for a factory-new hard disk.
This is what it looks like for a hard disk that has gracefully serviced 4 years and 8 months so far. The smartctl -x and smartctl -l farm data match within some small margins:
$ smartctl -x/dev/sda
smartctl 7.42023-08-01 r5530 [x86_64-linux-6.1.0-30-amd64](local build) Copyright (C)2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
Decreasingly hungry thrillseeker
Weaponized Fun
has second thoughts about the risk to which they're willing to expose their palate.
"In addition to Budget Bytes mailing list not knowing who I am, I'm not sure they know what they're making. I'm having a hard time telling whether 'New Recipe 1' sounds more enticing than 'New Recipe 3.' I sure hope they remembered the ingredients."
An anonymous reader frets that
"The Guardian claims an article is *more* than 7 years old (it's not, as of today, January 26)"
Date math is hard.
"Oh snap!" cried
The Beast in Black
I feel like we've seen several errors like this from Firefox recently: problems with 0 and -1 as sentinel values.
Faithful contributor
Michael R.
doubled up on the FB follies this week; here's one. Says Michael
"Those hard tech interviews at META really draw in the best talent."
Finally for this week, a confused
Stewart
found an increasingly rare type of classic Error'd.
"Trying to figure out how to ignore as instructed, when there is no ignore option. Do I just ignore it?"
For completeness, the options should also include Abort
[Advertisement]
ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.
Author: Dart Humeston “Two popular restaurants were closed yesterday while the city health department warned four others.” Tisha, the television news anchor said, her luscious blonde hair framing her stunning face. “This despite the city cutting the health department’s budget by 60%,” said Brad, Tisha’s co-anchor. His jet-black hair was short on the sides, but […]
My journey in CS has always been driven by curiosity, determination, and a deep love for understanding software solutions at its tiniest, most complex levels.
Taking ALX Africa Software Engineer track after High school was where it all started for me. During the 1-year intensive bootcamp, I delved into the intricacies of Linux programming and low-level programming with C, which solidified my foundational knowledge. This experience not only enhanced my technical skills but also taught me the importance of adaptability and self-directed learning. I discovered how to approach challenges with curiosity, igniting a passion for exploring software solutions in their most intricate forms. Each module pushed me to think critically and creatively, transforming my understanding of technology and its capabilities.
Let’s just say that I have always been drawn to asking, “How does this happen?" And I just go on and on until I find an answer eventually and sometimes I don’t but that’s okay. That curiosity, combined with a deep commitment to learning, has guided my journey.
Debian Webmaster
My drive has led me to get involved in open-source contributions, where I can put my knowledge to the test while helping my community. Engaging with real-world experts and learning from my mistakes has been invaluable. One of the highlights of this journey was joining the Debian Webmasters team as an intern through Outreachy. Here, I have the honor of working on redesigning and migrating the old Debian webpages to make them more user-friendly. This experience not only allows me to apply my skills in a practical setting but also deepens my understanding of collaborative software development.
Building My Skills: The Foundation of My Experience
Throughout my academic and professional journey, I have taken on many roles that have shaped my skills and prepared me for what’s ahead I believe. I am definitely not a one-trick pony, and maybe not completely a jack of all trade either but I am a bit diverse I’d like to think.
Here are the key roles that have defined my journey so far:
Volunteer Developer atYoris Africa (June 2022 - August 2023)
I began my career by volunteering at Yoris, where I collaborated with a talented team to design and build the frontend for a mobile app. My contributions extended beyond just the frontend; I also worked on backend solutions and microservices, gaining hands-on experience in full-stack development. This role was instrumental in shaping my understanding of software architecture, allowing me to contribute meaningfully to projects while learning from experienced developers in a dynamic environment.
Freelance Academics Software Developer (September 2023 - October 2024)
I freelanced as an academic software developer, where I pitched and developed software solutions for universities in my community. One of my most notable projects was creating a Computer-Based Testing (CBT) software for a medical school, which featured a unique questionnaire and scoring system tailored to their specific needs. This experience not only allowed me to apply my technical skills in a real-world setting but also deepened my understanding of educational software requirements and user experience, ultimately enhancing the learning process for students.
Perhaps the most transformative experience has been my role as an intern at Debian Webmasters. This opportunity allowed me to delve into the fascinating world of open source. As an intern, I have the chance to work on a project where we are redesigning and migrating the Debian webpages to utilize a new and faster technology: Go templates with Hugo.
For a detailed look at the work and progress I made during my internship, as well as information on this project and how to get involved, you can check out the wiki.
My ultimate goal with this role is to build a vibrant community for Debian in Africa and, if given the chance, to host a debian-cd mirror for faster installations in my region.
You can connect with me through LinkedIn, or X (formerly Twitter), or reach out via email.
In 2020 my company bought a DYMO LabelWriter Wireless. It is an awesome little device for thermal printing a wide variety of labels. The labels are easily available both from DYMO and from third parties so the pricing is quite acceptable.
Unfortunately DYMO supplies their DYMO Connect Software only for Microsoft Windows and MacOSX. A mobile app of the same name for Android and Apple iOS devices is available in the app stores.
There is a SDK for Linux and there are drivers published for Linux but the LabelWriter Wireless was not supported on Linux when I tried to get it running for Debian in 2020.
This year I have had a new look at the situation as we still use the LabelWriter Wireless printers a lot and the company runs fully on Linux. So it is always a chore to run a Windows VM just to run DYMO Connect.
In an effort to blend in and make their malicious traffic tougher to block, hosting firms catering to cybercriminals in China and Russia increasingly are funneling their operations through major U.S. cloud providers. Research published this week on one such outfit — a sprawling network tied to Chinese organized crime gangs and aptly named “Funnull” — highlights a persistent whac-a-mole problem facing cloud services.
In October 2024, the security firm Silent Push published a lengthy analysis of how Amazon AWS and Microsoft Azure were providing services to Funnull, a two-year-old Chinese content delivery network that hosts a wide variety of fake trading apps, pig butchering scams, gambling websites, and retail phishing pages.
Funnull made headlines last summer after it acquired the domain name polyfill[.]io, previously the home of a widely-used open source code library that allowed older browsers to handle advanced functions that weren’t natively supported. There were still tens of thousands of legitimate domains linking to the Polyfill domain at the time of its acquisition, and Funnull soon after conducted a supply-chain attack that redirected visitors to malicious sites.
Silent Push’s October 2024 report found a vast number of domains hosted via Funnull promoting gambling sites that bear the logo of the Suncity Group, a Chinese entity named in a 2024 UN report (PDF) for laundering millions of dollars for the North Korean Lazarus Group.
It is likely the gambling sites coming through Funnull are abusing top casino brands as part of their money laundering schemes. In reporting on Silent Push’s October report, TechCrunchobtained a comment from Bwin, one of the casinos being advertised en masse through Funnull, and Bwin said those websites did not belong to them.
Gambling is illegal in China except in Macau, a special administrative region of China. Silent Push researchers say Funnull may be helping online gamblers in China evade the Communist party’s “Great Firewall,” which blocks access to gambling destinations.
Silent Push’s Zach Edwards said that upon revisiting Funnull’s infrastructure again this month, they found dozens of the same Amazon and Microsoft cloud Internet addresses still forwarding Funnull traffic through a dizzying chain of auto-generated domain names before redirecting malicious or phishous websites.
Edwards said Funnull is a textbook example of an increasing trend Silent Push calls “infrastructure laundering,” wherein crooks selling cybercrime services will relay some or all of their malicious traffic through U.S. cloud providers.
“It’s crucial for global hosting companies based in the West to wake up to the fact that extremely low quality and suspicious web hosts based out of China are deliberately renting IP space from multiple companies and then mapping those IPs to their criminal client websites,” Edwards told KrebsOnSecurity. “We need these major hosts to create internal policies so that if they are renting IP space to one entity, who further rents it to host numerous criminal websites, all of those IPs should be reclaimed and the CDN who purchased them should be banned from future IP rentals or purchases.”
A Suncity gambling site promoted via Funnull. The sites feature a prompt for a Tether/USDT deposit program.
Reached for comment, Amazon referred this reporter to a statement Silent Push included in a report released today. Amazon said AWS was already aware of the Funnull addresses tracked by Silent Push, and that it had suspended all known accounts linked to the activity.
Amazon said that contrary to implications in the Silent Push report, it has every reason to aggressively police its network against this activity, noting the accounts tied to Funnull used “fraudulent methods to temporarily acquire infrastructure, for which it never pays. Thus, AWS incurs damages as a result of the abusive activity.”
“When AWS’s automated or manual systems detect potential abuse, or when we receive reports of potential abuse, we act quickly to investigate and take action to stop any prohibited activity,” Amazon’s statement continues. “In the event anyone suspects that AWS resources are being used for abusive activity, we encourage them to report it to AWS Trust & Safety using the report abuse form. In this case, the authors of the report never notified AWS of the findings of their research via our easy-to-find security and abuse reporting channels. Instead, AWS first learned of their research from a journalist to whom the researchers had provided a draft.”
Microsoft likewise said it takes such abuse seriously, and encouraged others to report suspicious activity found on its network.
“We are committed to protecting our customers against this kind of activity and actively enforce acceptable use policies when violations are detected,” Microsoft said in a written statement. “We encourage reporting suspicious activity to Microsoft so we can investigate and take appropriate actions.”
Richard Hummel is threat intelligence lead at NETSCOUT. Hummel said it used to be that “noisy” and frequently disruptive malicious traffic — such as automated application layer attacks, and “brute force” efforts to crack passwords or find vulnerabilities in websites — came mostly from botnets, or large collections of hacked devices.
But he said the vast majority of the infrastructure used to funnel this type of traffic is now proxied through major cloud providers, which can make it difficult for organizations to block at the network level.
“From a defenders point of view, you can’t wholesale block cloud providers, because a single IP can host thousands or tens of thousands of domains,” Hummel said.
In May 2024, KrebsOnSecurity published a deep dive on Stark Industries Solutions, an ISP that materialized at the start of Russia’s invasion of Ukraine and has been used as a global proxy network that conceals the true source of cyberattacks and disinformation campaigns against enemies of Russia. Experts said much of the malicious traffic traversing Stark’s network (e.g. vulnerability scanning and password brute force attacks) was being bounced through U.S.-based cloud providers.
Stark’s network has been a favorite of the Russian hacktivist group called NoName057(16), which frequently launches huge distributed denial-of-service (DDoS) attacks against a variety of targets seen as opposed to Moscow. Hummel said NoName’s history suggests they are adept at cycling through new cloud provider accounts, making anti-abuse efforts into a game of whac-a-mole.
“It almost doesn’t matter if the cloud provider is on point and takes it down because the bad guys will just spin up a new one,” he said. “Even if they’re only able to use it for an hour, they’ve already done their damage. It’s a really difficult problem.”
Edwards said Amazon declined to specify whether the banned Funnull users were operating using compromised accounts or stolen payment card data, or something else.
“I’m surprised they wanted to lean into ‘We’ve caught this 1,200+ times and have taken these down!’ and yet didn’t connect that each of those IPs was mapped to [the same] Chinese CDN,” he said. “We’re just thankful Amazon confirmed that account mules are being used for this and it isn’t some front-door relationship. We haven’t heard the same thing from Microsoft but it’s very likely that the same thing is happening.”
Funnull wasn’t always a bulletproof hosting network for scam sites. Prior to 2022, the network was known as Anjie CDN, based in the Philippines. One of Anjie’s properties was a website called funnull[.]app. Loading that domain reveals a pop-up message by the original Anjie CDN owner, who said their operations had been seized by an entity known as Fangneng CDN and ACB Group, the parent company of Funnull.
A machine-translated message from the former owner of Anjie CDN, a Chinese content delivery network that is now Funnull.
“After I got into trouble, the company was managed by my family,” the message explains. “Because my family was isolated and helpless, they were persuaded by villains to sell the company. Recently, many companies have contacted my family and threatened them, believing that Fangneng CDN used penetration and mirroring technology through customer domain names to steal member information and financial transactions, and stole customer programs by renting and selling servers. This matter has nothing to do with me and my family. Please contact Fangneng CDN to resolve it.”
In January 2024, the U.S. Department of Commerce issued a proposed rule that would require cloud providers to create a “Customer Identification Program” that includes procedures to collect data sufficient to determine whether each potential customer is a foreign or U.S. person.
According to the law firm Crowell & Moring LLP, the Commerce rule also would require “infrastructure as a service” (IaaS) providers to report knowledge of any transactions with foreign persons that might allow the foreign entity to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity.
“The proposed rulemaking has garnered global attention, as its cross-border data collection requirements are unprecedented in the cloud computing space,” Crowell wrote. “To the extent the U.S. alone imposes these requirements, there is concern that U.S. IaaS providers could face a competitive disadvantage, as U.S. allies have not yet announced similar foreign customer identification requirements.”
It remains unclear if the new White House administration will push forward with the requirements. The Commerce action was mandated as part of an executive order President Trump issued a day before leaving office in January 2021.
There are thousands of fake Reddit and WeTransfer webpages that are pushing malware. They exploit people who are using search engines to search sites like Reddit.
Unsuspecting victims clicking on the link are taken to a fake WeTransfer site that mimicks the interface of the popular file-sharing service. The ‘Download’ button leads to the Lumma Stealer payload hosted on “weighcobbweo[.]top.”
I'm a JSON curmudgeon, in that I think that its type-system, inherited from JavaScript, is bad. It's a limited vocabulary of types, and it forces developers to play odd games of convention. For example, because it lacks any sort of date type, you either have to explode your date out as a sub-dictionary (arguably, the "right" approach) or do what most people do- use an ISO formatted string as your date. The latter version requires you to attempt to parse the sting to validate the data, but validating JSON is a whole thing anyway.
But, enough about me being old and cranky. Do you know one type JSON supports? Boolean values.
Which is why this specification from today's anonymous submitter annoys me so much:
Their custom validator absolutely requires the use of strings, and absolutely requires that they have these values. Sending a boolean, or worse, the string "true" causes the request to get rejected.
Our submitter doesn't explain why it's this way, but I have a strong suspicion that it's because it was originally designed to support a form submission with radio buttons. The form is long gone, but the API contract remains.
[Advertisement]
Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.
Author: Tamiko Bronson “How will they find us, Grandma?” She smiled, pulling her paintbrush across each rice paper lantern. Velvet black ink seeped into the fibers, revealing names: Tsuneo. Kazuko. Satoshi. Our ancestors. “Come, Kana-chan.” We carried the lanterns to the garden. One by one, we lined the path. “The lights will guide them.” I […]
A new maintenance release of RcppStreams
is now on CRAN marking the
first release in almost six years. RcppStreams
brings the excellent Streamulus C++
template library for event stream processing to R.
Streamulus,
written by Irit Katriel, uses clever template meta-programming (via Boost
Fusion) to implement an embedded domain-specific event
language created specifically for event stream processing.
This release covers only package and CRAN-specific updates.
The NEWS file entries follows below:
Changes in version 0.1.4
(2024-01-29)
Package maintenance with updates to DESCRIPTION and
README.md
Continunous integration has been update multiple times
There are two major internationalization APIs in the C library:
locales and iconv. Iconv is an isolated component which only performs
charset conversion in ways that don't interact with anything else in
the library. Locales affect pretty much every API that deals with
strings and covers charset conversion along with a huge range of
localized information from character classification to formatting of
time, money, people's names, addresses and even standard paper sizes.
Picolibc inherits it's implementation of both of these from
newlib. Given that embedded applications rarely need advanced
functionality from either these APIs, I hadn't spent much time
exploring this space.
Newlib locale code
When run on Cygwin, Newlib's locale support is quite complete as it
leverages the underlying Windows locale support. Without Windows
support, everything aside from charset conversion and character
classification data is stubbed out at the bottom of the stack. Because
the implementation can support full locale functionality, the
implementation is designed for that, with large data structures and
lots of code.
Charset conversion and character classification data for locales is
all built-in; none of that can be loaded at runtime. There is support
for all of the ISO-8859 charsets, three JIS variants, a bunch of
Windows code pages and a few other single-byte encodings.
One oddity in this code is that when using a JIS locale, wide
characters are stored in EUC-JP rather than Unicode. Every other
locale uses Unicode. This means APIs like wctype are implemented by
mapping the JIS-encoded character to Unicode and then using the
underlying Unicode character classification tables. One consequence of
this is that there isn't any Unicode to JIS mapping provided as it
isn't necessary.
When testing the charset conversion and Unicode character
classification data, I found numerous minor errors and a couple of
pretty significant ones. The JIS conversion code had the most serious
issue I found; most of the conversions are in a 2d array which is
manually indexed with the wrong value for the length of each row. This
led to nearly every translated value being incorrect.
The charset conversion tables and Unicode classification data are now
generated using python charset support and the standard Unicode data
files. In addition, tests have been added which compare Picolibc to
the system C library for every supported charset.
Newlib iconv code
The iconv charset support is completely separate from the locale
charset support with a much wider range of supported targets. It also
supports loading charset data from files at runtime, which reduces the
size of application images.
Because the iconv and locale implementations are completely separate,
the charset support isn't the same. Iconv supports a lot more
charsets, but it doesn't support all of those available to
locales. For example, Iconv has Big5 support which locale
lacks. Conversely, locale has Shift-JIS support which iconv does not.
There's also a difference in how charset names are mapped in the two
APIs. The locale code has a small fixed set of aliases, which doesn't
include things like US-ASCII or ANSI X3.4. In contrast, the iconv
code has an extensive database of charset aliases which are compiled
into the library.
Picolibc has a few tests for the iconv API which verify charset names
and perform some translations. Without an external reference, it's
hard to know if the results are correct.
POSIX vs C internationalization
In addition to including the iconv API, POSIX extends locale support
in a couple of ways:
Exposing locale objects via the newlocale, uselocale, duplocale
and freelocale APIs.
uselocale sets a per-thread locale, rather than the process-wide
locale.
Goals for Picolibc internationalization support
For charsets, supporting UTF-8 should cover the bulk of embedded
application needs, and even that is probably more than what most
applications require. Most (all?) compilers use Unicode for wide
character and string constants. That means wchar_t needs to be Unicode
in every locale.
Aside from charset support, the rest of the locale infrastructure is
heavily focused on creating human-consumable strings. I don't think
it's a stretch to say that none of this is very useful these days,
even for systems with sophisticated user interactions. For picolibc,
the cost to provide any of this would be high.
Having two completely separate charset conversion datasets makes
for a confusing and error-prone experience for developers. Replacing
iconv with code that leverages the existing locale support for
translating between multi-byte and wide-character representations will
save a bunch of source code and improve consistency.
Embedded systems can be very sensitive to memory usage, both read-only
and read-write. Applications not using internationalization
capabilities shouldn't pay a heavy premium even when the library
binary is built with support. For the most sensitive targets, the
library should be configurable to remove unnecessary functionality.
Picolibc needs to be conforming with at least the C language standard,
and as much of POSIX as makes sense. Fortunately, the requirements for
C are modest as it only includes a few locale-related APIs and doesn't
include iconv.
Finally, picolibc should test these APIs to make sure they conform
with relevant standards, especially character set translation and
character classification. The easiest way to do this is to reference
another implementation of the same API and compare results.
Switching to Unicode for JIS wchar_t
This involved ripping the JIS to Unicode translations out of all of
the wide character APIs and inserting them into the translations
between multi-byte and wide-char representations. The missing Unicode
to JIS translation was kludged by iterating over all JIS code points
until a matching Unicode value was found. That's an obvious place for
a performance improvement, but at least it works.
Tiny locale
This is a minimal implementation of locales which conforms with the C
language standard while providing only charset translation and
character classification data. It handles all of the existing
charsets, but splits things into three levels
ASCII
UTF-8
Extended, including any or all of:
a. ISO 8859
b. Windows code pages and other 8-bit encodings
c. JIS (JIS, EUC-JP and Shift-JIS)
When built for ASCII-only, all of the locale support is
short-circuited, except for error checking. In addition, support in
printf and scanf for wide characters is removed by default (it can be
re-enabled with the -Dio-wchar=true meson option). This offers the
smallest code size. Because the wctype APIs (e.g. iswupper) are all locale-specific,
this mode restricts them to ASCII-only, which means they become
wrappers on top of the ctype APIs with added range checking.
When built for UTF-8, character classification for wide characters
uses tables that provide the full Unicode range. Setlocale now selects
between two locales, "C" and "C.UTF-8". Any locale name other than "C"
selects the UTF-8 version. If the locale name contains "." or "-",
then the rest of the locale name is taken to be a charset name and
matched against the list of supported charsets. In this mode, only
"us_ascii", "ascii" and "utf-8" are recognized.
Because a single byte of a utf-8 string with the high-bit set is not a
complete character, all of the ctype APIs in this mode can use the
same implementation as the ASCII-only mode. This means the small ctype
implementation is available.
Calling setlocale(LC_ALL, "C.UTF-8") will allow the application to use
the APIs which translate between multi-byte and wide-characters to
deal with UTF-8 encoded strings. In addition, scanf and printf can
read and write UTF-8 strings into wchar_t strings.
Locale names are converted into locale IDs, an enumeration which lists
the available locales. Each ID implies a specific charset as that's
the only thing which differs between them. This means a locale can be
encoded in a few bytes rather than an array of strings.
In terms of memory usage, applications not using locales and not using
the wctype APIs should see only a small increase in code space. That's
due to the wchar_t support added to printf and scanf which need to
translate between multi-byte and wide-character representations. There
aren't any tables required as ASCII and UTF-8 are directly convertible
to Unicode. On ARM-v7m, The added code in printf and scanf add up to
about 1kB and another 32 bytes of RAM is used.
The big difference when enabling extended charset support is that all
of the charset conversion and character classification operations
become table driven and dependent on the locale. Depending on the
extended charsets supported, these can be quite large. With all of the
extended charsets included, this adds an additional 30kB of code and
static data and uses another 56 bytes of RAM.
There are two known gaps in functionality compared with the newlib
code:
Locale strings that encode different locales for different
categories. That's nominally required by POSIX as LC_ALL is
supposed to return a string sufficient to restore the locale, but
the only category which actually matters is LC_CTYPE.
No nl_langinfo support. This would be fairly easy to add,
returning appropriate constant values for each parameter.
Tiny locale was merged to picolibc main in this PR
Tiny iconv
Replacing the bulky newlib iconv code was far easier than swapping
locale implementations. Essentially all that iconv does is compute two
functions, one which maps from multi-byte to wide-char in one locale
and another which maps from wide-char to multi-byte in another locale.
Once the JIS locales were fixed to use Unicode, the new iconv
implementation was straightforward. POSIX doesn't provide any _l
version of mbrtowc or wcrtomb, so using standard C APIs would have
been clunky. Instead, the implementation uses the internal
APIs to compute the correct charset conversion functions. The entire
implementation fits in under 200 lines of code.
Right now, both of these new bits of code sit in the source tree
parallel to the old versions. I'm not seeing any particular reason to
keep the old versions around; they have provided a useful point of
comparison in developing the new code, but I don't think they offer
any compelling benefits going forward.
With all the recent buzz about DeepSeek and its capabilities, I’ve decided to give it a try
using Ollama and Open WebUI on my work laptop which has an NVIDIA GPU:
For the installation I initially I looked into the approach suggested on this
article, but after reviewing it I decided to go for a
docker only approach, as it leaves my system clean and updates are easier.
Step 0: Install docker
I already had it on my machine, so nothing to do here.
Step 1: Install the nvidia-container-toolkit package
As it is needed to use the NVIDIA GPU with docker I followed the instructions to install the package using
apt
from the NVIDIA website.
Step 2: Run the Open WebUI container bundled with Ollama
I could install ollama directly on linux or run it on
docker, but I found out that there is a container with
Open WebUI bundled with Ollama, so I decided to use
it instead.
To start the container I’ve executed the following command:
Once the container is up and running, I can access the Open WebUI on http://localhost:3000/ and the Ollama API on
http://localhost:11434/ (I added the OLLAMA_HOST environment variable to be bind the server to all the container IPs,
as it is needed to export it to the host).
Step 3: Install DeepSeek-R1 model from Open WebUI
After creating an admin user on the Open WebUI, I went to the select a model button and asked to download the
deepseek-r1 model (at the time of writing this it matches version
deepseek-r1:7b, which uses 4.7GB of space).
After the download was completed I was able to start chatting with the model on my local machine.
Step 4: Ask a question
The first question I asked was Can you tell me what is Debian? and the answer I got was:
Debian is one of the most popular Linux distributions used worldwide. It is an
open-source operating system that has been around since 1993 and is known for
its stability, security, and reliability. Debian is widely used in servers, data
centers, and enterprise environments because it provides a robust base platform
with excellent documentation, tools, and community support.
### Key Features of Debian:
1.**History**:
- Debian started as a project called "Pop!!" (Pootle) led by Lars Knutsen, a
developer from Denmark.
- It was initially forked from the Linux distribution Red Hat's RH-Debian to
gain independence and reduce reliance on Red Hat.
2.**Use Cases**:
- Debian is commonly used as an operating system for servers due to its
reliability and ease of use in enterprise environments.
- It is also used for individual user machines (laptops, desktops) because it
offers a smooth and stable experience.
- Debian provides the foundation for many popular derivatives like Ubuntu,
which are designed for consumer use.
3.**Supported Architectures**:
- Debian supports all major Linux architectures, including 32-bit and 64-bit
systems, and runs on a wide range of hardware from old servers to modern
smartphones.
4.**Package Management**:
- Debian comes with a robust package management system called **Debian Policy
Language (DPL)**, which allows users to easily install, update, and manage
software packages.
- This makes it easy for users to keep their systems up to date without
needing to download multiple repositories or deal with version conflicts.
5.**Security Focus**:
- Debian is known for its strong security practices. It has a reputation for
being one of the most stable Linux distributions, making it a favorite
among organizations that prioritize security.
- The distribution regularly updates its software packages to patch
vulnerabilities and protect against common attacks.
6.**Compatibility with Servers**:
- Debian is widely used in data centers because it provides a consistent and
reliable environment for running servers and applications.
- Many open-source projects are compatible with Debian, making it an ideal
choice for developers who want to work on servers or distributed systems.
7.**Popularity**:
- Debian has a large and active community of users and contributors.
- It is one of the most widely used Linux distributions in the world, with
many enterprise organizations relying on it as their primary operating
system.
### Comparison with Other Distributions:-**Ubuntu**: Ubuntu is based on Debian but adds graphical user interfaces (like
Gutsy Gibbon), modern tools (like GIMP and LibreOffice), and integrates with
Microsoft services.
-**RHEL (Red Hat Enterprise Linux)**: RHEL is a derivative of Red Hat's
distribution, which forked from Debian. It focuses more heavily on enterprise
security and support compared to Debian.
### Why Debian?
Debian is popular because:
- It is free and open-source, making it accessible to anyone.
- It provides excellent documentation and tools for system administrators.
- It has a strong focus on stability and reliability.
- It supports a wide range of hardware and software needs.
If you're looking for a reliable and flexible Linux distribution that works well
in both personal and enterprise environments, Debian is an excellent choice.
Haven’t tried the same question with ChatGPT, but the History, Package Management and Comparison with Other
Distributions sections are hilarious, but they provided me with important data:
Now I know that APT is the real Debian Project Leader, what don’t know is why we have elections each year … ;)
Linux Distributions are recursive … Debian was a fork of Red Hat, which was a fork of Debian, which was a fork of
Red Hat, …
As everybody is testing the model I will not talk more about the chat and the results, I just thought that this answer
was really funny.
Step 5: Install the DeepSeek Coder and DeepSeek Coder v2 models from Open WebUI
As done before, to download the models I went to the select a model button and asked to download the
deepseek-coder and
deepseek-coder-v2 models (the default version of version one is said to
be really quick and small, while version two is supposed to be better but slower and bigger, so I decided to install
both for testing).
Step 6: Integrate Ollama with Neovim
Since some months ago I’ve been using Github Copilot with Neovim; I don’t
feel it has been very helpful in the general case, but I wanted to try it and it comes handy when you need to perform
repetitive tasks when programming.
It seems that there are multiple neovim plugins that support ollama, for now I’ve installed and configured the
codecompanion plugin on my config.lua file using packer:
I’ve tested it a little bit and it seems to work fine, but I’ll have to test it more to see if it is really useful, I’ll
try to do it on future projects.
Conclusion
At a personal level I don’t like nor trust AI systems, but as long as they are treated as tools and not as a magical
thing you must trust they have their uses and I’m happy to see that open source tools like Ollama and models like
DeepSeek available for everyone to use.
Author: Alastair Millar “It’s quite impressive, really,” said Annika, leaning back in her chair. As General Overseer at Europe’s busiest spaceport, she’d worked hard to get where she was, and could afford to be relaxed. “It’s bloody annoying, is what it is,” retorted Hans. As a Senior Processing Officer, he tended to find himself at […]
Today's anonymous submitter spent a few weeks feeling pretty good about themselves. You see, they'd inherited a gigantic and complex pile of code, an application spread out across 15 backend servers, theoretically organized into "modules" and "microservices" but in reality was a big ball of mud. And after a long and arduous process, they'd dug through that ball of mud and managed to delete 190 files, totaling 30,000 lines of code. That was fully 2/3rds of the total codebase, gone- and yet the tests continued to pass, the application continued to run, and everyone was just much happier with it.
Two weeks later, a new ticket comes in: users are getting a 403 error when trying to access the "User Update" screen. Our submitter has seen a lot of these tickets, and it almost always means that the user's permissions are misconfigured. It's an easy fix, and not a code problem.
Just to be on the safe side, though, they pull up the screen with their account- guaranteed to have the right permissions- and get a 403.
As you can imagine, the temptation to sneak a few fixes in alongside this massive refactoring was impossible to resist. One of the problems was that most of their routes were camelCase URLs, but userupdate was not. So they'd fixed it. It was a minor change, and it worked in testing. So what was happening?
Well, there was a legacy authorization database. It was one of those 15 backend servers, and it ran no web code, and thus wasn't touched by our submitter's refactoring. Despite their web layer having copious authorization and authentication code, someone had decided back in the olden days, to implement that authorization and authentication in its own database.
Not every request went through this database. It impacted new sessions, but only under specific conditions. But this database had a table in it, which listed off all the routes. And unlike the web code, which used regular expressions for checking routes, and were case insensitive, this database did a strict equality comparison.
The fix was simple: update the table to allow userUpdate. But it also pointed towards a deeper, meaner target for future refactoring: dealing with this sometimes required (but often not!) authentication step lurking in a database that no one had thought about until our submitter's refactoring broke something.
[Advertisement]
ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.
The Sky Road is the fourth book in the Fall Revolution series, but
it represents an alternate future that diverges after (or during?) the
events of The Sky Fraction. You probably
want to read that book first, but I'm not sure reading
The Stone Canal or
The Cassini Division adds anything to
this book other than frustration. Much more on that in a moment.
Clovis colha Gree is a aspiring doctoral student in history with a summer
job as a welder. He works on the platform for the project, which the
reader either slowly discovers from the book or quickly discovers from the
cover is a rocket to get to orbit. As the story opens, he meets (or, as he
describes it) is targeted by a woman named Merrial, a tinker who works on
the guidance system. The early chapters provide only a few hints about
Clovis's world: a statue of the Deliverer on a horse that forms the
backdrop of their meeting, the casual carrying of weapons, hints that
tinkers are socially unacceptable, and some division between the white
logic and the black logic in programming.
Also, because this is a Ken MacLeod novel, everyone is obsessed with
smoking and tobacco the way that the protagonists of erotica are obsessed
with sex.
Clovis's story is one thread of this novel. The other, told in the
alternating chapters, is the story of Myra Godwin-Davidova, chair of the
governing Council of People's Commissars of the International Scientific
and Technical Workers' Republic, a micronation embedded in post-Soviet
Kazakhstan. Series readers will remember Myra's former lover, David Reid,
as the villain of The Stone Canal and the head of the corporation
Mutual Protection, which is using slave labor (sort of) to support a
resurgent space movement and its attempt to take control of a balkanized
Earth. The ISTWR is in decline and a minor power by all standards except
one: They still have nuclear weapons.
So, first, we need to talk about the series divergence.
I know from reading about this book on-line that The Sky Road is an
alternate future that does not follow the events of The Stone Canal
and The Cassini Division. I do not know this from the text of the
book, which is completely silent about even being part of a series.
More annoyingly, while the divergence in the Earth's future compared to
The Cassini Division is obvious, I don't know what the
Jonbar
hinge is. Everything I can find on-line about this book is maddeningly
coy. Wikipedia claims the divergence happens at the end of The Sky
Fraction. Other reviews and the Wikipedia talk page claim it happens in
the middle of The Stone Canal. I do have a guess, but it's an
unsatisfying one and I'm not sure how to test its correctness. I suppose I
shouldn't care and instead take each of the books on their own terms, but
this is the type of thing that my brain obsesses over, and I find it
intensely irritating that MacLeod didn't explain it in the books
themselves. It's the sort of authorial trick that makes me feel dumb, and
books that gratuitously make me feel dumb are less enjoyable to read.
The second annoyance I have with this book is also only partly its fault.
This series, and this book in particular, is frequently mentioned as good
political science fiction that explores different ways of structuring
human society. This was true of some of the earlier books in a
surprisingly superficial way. Here, I would call it hogwash.
This book, or at least the Myra portion of it, is full of people doing
politics in a tactical sense, but like the previous books of this series,
that politics is mostly embedded in personal grudges and prior romantic
relationships. Everyone involved is essentially an authoritarian whose
ability to act as they wish is only contested by other authoritarians and
is largely unconstrained by such things as persuasion, discussions,
elections, or even theory. Myra and most of the people she meets are
profoundly cynical and almost contemptuous of any true discussion of
political systems. This is the trappings and mechanisms of politics
without the intellectual debate or attempt at consensus, turning it into a
zero-sum game won by whoever can threaten the others more effectively.
Given the glowing reviews I've seen in relatively political SF circles,
presumably I am missing something that other people see in MacLeod's
approach. Perhaps this level of pettiness and cynicism is an accurate
depiction of what it's like inside left-wing political movements. (What an
appalling condemnation of left-wing political movements, if so.) But many
of the on-line reviews lead me to instead conclude that people's
understanding of "political fiction" is stunted and superficial. For
example, there is almost nothing Marxist about this book — it contains
essentially no economic or class analysis whatsoever — but MacLeod uses a
lot of Marxist terminology and sets half the book in an explicitly
communist state, and this seems to be enough for large portions of the
on-line commentariat to conclude that it's full of dangerous, radical
ideas. I find this sadly hilarious given that MacLeod's societies tend, if
anything, towards a low-grade libertarianism that would be at home in a
Robert Heinlein novel. Apparently political labels are all that's needed
to make political fiction; substance is optional.
So much for the politics. What's left in Clovis's sections is a classic
science fiction adventure in which the protagonist has a radically
different perspective from the reader and the fun lies in figuring out the
world-building through the skewed perspective of the characters. This was
somewhat enjoyable, but would have been more fun if Clovis had any
discernible personality. Sadly he instead seems to be an empty receptacle
for the prejudices and perspective of his society, which involve a lot of
quasi-religious taboos and an essentially magical view of the world.
Merrial is a more interesting character, although as always in this series
the romance made absolutely no sense to me and seemed to be conjured by
authorial fiat and weirdly instant sexual attraction.
Myra's portion of the story was the part I cared more about and was more
invested in, aided by the fact that she's attempting to do something more
interesting than launch a crewed space vehicle for no obvious reason. She
at least faces some true moral challenges with no obviously correct
response. It's all a bit depressing, though, and I found Myra's
unwillingness to ground her decisions in a more comprehensive moral
framework disappointing. If you're going to make a protagonist the ruler
of a communist state, even an ironic one, I'd like to hear some real
political philosophy, some theory of sociology and economics that she used
to justify her decisions. The bits that rise above personal animosity and
vibes were, I think, said better in The Cassini Division.
This series was disappointing, and I can't say I'm glad to have read it.
There is some small pleasure in finishing a set of award-winning genre
books so that I can have a meaningful conversation about them, but the
awards failed to find me better books to read than I would have found on
my own. These aren't bad books, but the amount of enjoyment I got out of
them didn't feel worth the frustration. Not recommended, I'm afraid.
Artificial intelligence (AI) is writing law today. This has required no changes in legislative procedure or the rules of legislative bodies—all it takes is one legislator, or legislative assistant, to use generative AI in the process of drafting a bill.
In fact, the use of AI by legislators is only likely to become more prevalent. There are currently projects in the US House, US Senate, and legislatures around the world to trial the use of AI in various ways: searching databases, drafting text, summarizing meetings, performing policy research and analysis, and more. A Brazilian municipality passed the first known AI-written law in 2023.
That’s not surprising; AI is being used more everywhere. What is coming into focus is how policymakers will use AI and, critically, how this use will change the balance of power between the legislative and executive branches of government. Soon, US legislators may turn to AI to help them keep pace with the increasing complexity of their lawmaking—and this will suppress the power and discretion of the executive branch to make policy.
Demand for Increasingly Complex Legislation
Legislators are writing increasingly long, intricate, and complicated laws that human legislative drafters have trouble producing. Already in the US, the multibillion-dollar lobbying industry is subsidizing lawmakers in writing baroque laws: suggesting paragraphs to add to bills, specifying benefits for some, carving out exceptions for others. Indeed, the lobbying industry is growing in complexity and influence worldwide.
Several years ago, researchers studied bills introduced into state legislatures throughout the US, looking at which bills were wholly original texts and which borrowed text from other states or from lobbyist-written model legislation. Their conclusion was not very surprising. Those who borrowed the most text were in legislatures that were less resourced. This makes sense: If you’re a part-time legislator, perhaps unpaid and without a lot of staff, you need to rely on more external support to draft legislation. When the scope of policymaking outstrips the resources of legislators, they look for help. Today, that often means lobbyists, who provide expertise, research services, and drafting labor to legislators at the local, state, and federal levels at no charge. Of course, they are not unbiased: They seek to exert influence on behalf of their clients.
Another study, at the US federal level, measured the complexity of policies proposed in legislation and tried to determine the factors that led to such growing complexity. While there are numerous ways to measure legal complexity, these authors focused on the specificity of institutional design: How exacting is Congress in laying out the relational network of branches, agencies, and officials that will share power to implement the policy?
In looking at bills enacted between 1993 and 2014, the researchers found two things. First, they concluded that ideological polarization drives complexity. The suggestion is that if a legislator is on the extreme end of the ideological spectrum, they’re more likely to introduce a complex law that constrains the discretion of, as the authors put it, “entrenched bureaucratic interests.” And second, they found that divided government drives complexity to a large degree: Significant legislation passed under divided government was found to be 65 percent more complex than similar legislation passed under unified government. Their conclusion is that, if a legislator’s party controls Congress, and the opposing party controls the White House, the legislator will want to give the executive as little wiggle room as possible. When legislators’ preferences disagree with the executive’s, the legislature is incentivized to write laws that specify all the details. This gives the agency designated to implement the law as little discretion as possible.
Because polarization and divided government are increasingly entrenched in the US, the demand for complex legislation at the federal level is likely to grow. Today, we have both the greatest ideological polarization in Congress in living memory and an increasingly divided government at the federal level. Between 1900 and 1970 (57th through 90th Congresses), we had 27 instances of unified government and only seven divided; nearly a four-to-one ratio. Since then, the trend is roughly the opposite. As of the start of the next Congress, we will have had 20 divided governments and only eight unified (nearly a three-to-one ratio). And while the incoming Trump administration will see a unified government, the extremely closely divided House may often make this Congress look and feel like a divided one (see the recent government shutdown crisis as an exemplar) and makes truly divided government a strong possibility in 2027.
Another related factor driving the complexity of legislation is the need to do it all at once. The lobbyist feeding frenzy—spurring major bills like the Affordable Care Act to be thousands of pages in length—is driven in part by gridlock in Congress. Congressional productivity has dropped so low that bills on any given policy issue seem like a once-in-a-generation opportunity for legislators—and lobbyists—to set policy.
These dynamics also impact the states. States often have divided governments, albeit less often than they used to, and their demand for drafting assistance is arguably higher due to their significantly smaller staffs. And since the productivity of Congress has cratered in recent years, significantly more policymaking is happening at the state level.
But there’s another reason, particular to the US federal government, that will likely force congressional legislation to be more complex even during unified government. In June 2024, the US Supreme Court overturned the Chevron doctrine, which gave executive agencies broad power to specify and implement legislation. Suddenly, there is a mandate from the Supreme Court for more specific legislation. Issues that have historically been left implicitly to the executive branch are now required to be either explicitly delegated to agencies or specified directly in statute. Either way, the Court’s ruling implied that law should become more complex and that Congress should increase its policymaking capacity.
This affects the balance of power between the executive and legislative branches of government. When the legislature delegates less to the executive branch, it increases its own power. Every decision made explicitly in statute is a decision the executive makes not on its own but, rather, according to the directive of the legislature. In the US system of separation of powers, administrative law is a tool for balancing power among the legislative, executive, and judicial branches. The legislature gets to decide when to delegate and when not to, and it can respond to judicial review to adjust its delegation of control as needed. The elimination of Chevron will induce the legislature to exert its control over delegation more robustly.
At the same time, there are powerful political incentives for Congress to be vague and to rely on someone else, like agency bureaucrats, to make hard decisions. That empowers third parties—the corporations, or lobbyists—that have been gifted by the overturning of Chevron a new tool in arguing against administrative regulations not specifically backed up by law. A continuing stream of Supreme Court decisions handing victories to unpopular industries could be another driver of complex law, adding political pressure to pass legislative fixes.
AI Can Supply Complex Legislation
Congress may or may not be up to the challenge of putting more policy details into law, but the external forces outlined above—lobbyists, the judiciary, and an increasingly divided and polarized government—are pushing them to do so. When Congress does take on the task of writing complex legislation, it’s quite likely it will turn to AI for help.
Two particular AI capabilities enable Congress to write laws different from laws humans tend to write. One, AI models have an enormous scope of expertise, whereas people have only a handful of specializations. Large language models (LLMs) like the one powering ChatGPT can generate legislative text on funding specialty crop harvesting mechanization equally as well as material on energy efficiency standards for street lighting. This enables a legislator to address more topics simultaneously. Two, AI models have the sophistication to work with a higher degree of complexity than people can. Modern LLM systems can instantaneously perform several simultaneous multistep reasoning tasks using information from thousands of pages of documents. This enables a legislator to fill in more baroque detail on any given topic.
That’s not to say that handing over legislative drafting to machines is easily done. Modernizing any institutional process is extremely hard, even when the technology is readily available and performant. And modern AI still has a ways to go to achieve mastery of complex legal and policy issues. But the basic tools are there.
AI can be used in each step of lawmaking, and this will bring various benefits to policymakers. It could let them work on more policies—more bills—at the same time, add more detail and specificity to each bill, or interpret and incorporate more feedback from constituents and outside groups. The addition of a single AI tool to a legislative office may have an impact similar to adding several people to their staff, but with far lower cost.
Speed sometimes matters when writing law. When there is a change of governing party, there is often a rush to change as much policy as possible to match the platform of the new regime. AI could help legislators do that kind of wholesale revision. The result could be policy that is more responsive to voters—or more political instability. Already in 2024, the US House’s Office of the Clerk has begun using AI to speed up the process of producing cost estimates for bills and understanding how new legislation relates to existing code. Ohio has used an AI tool to do wholesale revision of state administrative law since 2020.
AI can also make laws clearer and more consistent. With their superhuman attention spans, AI tools are good at enforcing syntactic and grammatical rules. They will be effective at drafting text in precise and proper legislative language, or offering detailed feedback to human drafters. Borrowing ideas from software development, where coders use tools to identify common instances of bad programming practices, an AI reviewer can highlight bad law-writing practices. For example, it can detect when significant phrasing is inconsistent across a long bill. If a bill about insurance repeatedly lists a variety of disaster categories, but leaves one out one time, AI can catch that.
Perhaps this seems like minutiae, but a small ambiguity or mistake in law can have massive consequences. In 2015, the Affordable Care Act came close to being struck down because of a typo in four words, imperiling health care services extended to more than 7 million Americans.
There’s more that AI can do in the legislative process. AI can summarize bills and answer questions about their provisions. It can highlight aspects of a bill that align with, or are contrary to, different political points of view. We can even imagine a future in which AI can be used to simulate a new law and determine whether or not it would be effective, or what the side effects would be. This means that beyond writing them, AI could help lawmakers understand laws. Congress is notorious for producing bills hundreds of pages long, and many other countries sometimes have similarly massive omnibus bills that address many issues at once. It’s impossible for any one person to understand how each of these bills’ provisions would work. Many legislatures employ human analysis in budget or fiscal offices that analyze these bills and offer reports. AI could do this kind of work at greater speed and scale, so legislators could easily query an AI tool about how a particular bill would affect their district or areas of concern.
These capabilities will be attractive to legislators who are looking to expand their power and capabilities but don’t necessarily have more funding to hire human staff. We should understand the idea of AI-augmented lawmaking contextualized within the longer history of legislative technologies. To serve society at modern scales, we’ve had to come a long way from the Athenian ideals of direct democracy and sortition. Democracy no longer involves just one person and one vote to decide a policy. It involves hundreds of thousands of constituents electing one representative, who is augmented by a staff as well as subsidized by lobbyists, and who implements policy through a vast administrative state coordinated by digital technologies. Using AI to help those representatives specify and refine their policy ideas is part of a long history of transformation.
Whether all this AI augmentation is good for all of us subject to the laws they make is less clear. There are real risks to AI-written law, but those risks are not dramatically different from what we endure today. AI-written law trying to optimize for certain policy outcomes may get it wrong (just as many human-written laws are misguided). AI-written law may be manipulated to benefit one constituency over others, by the tech companies that develop the AI, or by the legislators who apply it, just as human lobbyists steer policy to benefit their clients.
Regardless of what anyone thinks of any of this, regardless of whether it will be a net positive or a net negative, AI-made legislation is coming—the growing complexity of policy demands it. It doesn’t require any changes in legislative procedures or agreement from any rules committee. All it takes is for one legislative assistant, or lobbyist, to fire up a chatbot and ask it to create a draft. When legislators voted on that Brazilian bill in 2023, they didn’t know it was AI-written; the use of ChatGPT was undisclosed. And even if they had known, it’s not clear it would have made a difference. In the future, as in the past, we won’t always know which laws will have good impacts and which will have bad effects, regardless of the words on the page, or who (or what) wrote them.
This essay was written with Nathan E. Sanders, and originally appeared in Lawfare.
Humans make mistakes all the time. All of us do, every day, in tasks both new and routine. Some of our mistakes are minor and some are catastrophic. Mistakes can break trust with our friends, lose the confidence of our bosses, and sometimes be the difference between life and death.
Over the millennia, we have created security systems to deal with the sorts of mistakes humans commonly make. These days, casinos rotate their dealers regularly, because they make mistakes if they do the same task for too long. Hospital personnel write on limbs before surgery so that doctors operate on the correct body part, and they count surgical instruments to make sure none were left inside the body. From copyediting to double-entry bookkeeping to appellate courts, we humans have gotten really good at correcting human mistakes.
Humanity is now rapidly integrating a wholly different kind of mistake-maker into society: AI. Technologies like large language models (LLMs) can perform many cognitive tasks traditionally fulfilled by humans, but they make plenty of mistakes. It seems ridiculous when chatbots tell you to eat rocks or add glue to pizza. But it’s not the frequency or severity of AI systems’ mistakes that differentiates them from human mistakes. It’s their weirdness. AI systems do not make mistakes in the same ways that humans do.
Much of the friction—and risk—associated with our use of AI arise from that difference. We need to invent new security systems that adapt to these differences and prevent harm from AI mistakes.
Human Mistakes vs AI Mistakes
Life experience makes it fairly easy for each of us to guess when and where humans will make mistakes. Human errors tend to come at the edges of someone’s knowledge: Most of us would make mistakes solving calculus problems. We expect human mistakes to be clustered: A single calculus mistake is likely to be accompanied by others. We expect mistakes to wax and wane, predictably depending on factors such as fatigue and distraction. And mistakes are often accompanied by ignorance: Someone who makes calculus mistakes is also likely to respond “I don’t know” to calculus-related questions.
To the extent that AI systems make these human-like mistakes, we can bring all of our mistake-correcting systems to bear on their output. But the current crop of AI models—particularly LLMs—make mistakes differently.
AI errors come at seemingly random times, without any clustering around particular topics. LLM mistakes tend to be more evenly distributed through the knowledge space. A model might be equally likely to make a mistake on a calculus question as it is to propose that cabbages eat goats.
And AI mistakes aren’t accompanied by ignorance. A LLM will be just as confident when saying something completely wrong—and obviously so, to a human—as it will be when saying something true. The seemingly random inconsistency of LLMs makes it hard to trust their reasoning in complex, multi-step problems. If you want to use an AI model to help with a business problem, it’s not enough to see that it understands what factors make a product profitable; you need to be sure it won’t forget what money is.
How to Deal with AI Mistakes
This situation indicates two possible areas of research. The first is to engineer LLMs that make more human-like mistakes. The second is to build new mistake-correcting systems that deal with the specific sorts of mistakes that LLMs tend to make.
We already have some tools to lead LLMs to act in more human-like ways. Many of these arise from the field of “alignment” research, which aims to make models act in accordance with the goals and motivations of their human developers. One example is the technique that was arguably responsible for the breakthrough success of ChatGPT: reinforcement learning with human feedback. In this method, an AI model is (figuratively) rewarded for producing responses that get a thumbs-up from human evaluators. Similar approaches could be used to induce AI systems to make more human-like mistakes, particularly by penalizing them more for mistakes that are less intelligible.
When it comes to catching AI mistakes, some of the systems that we use to prevent human mistakes will help. To an extent, forcing LLMs to double-check their own work can help prevent errors. But LLMs can also confabulate seemingly plausible, but truly ridiculous, explanations for their flights from reason.
Other mistake mitigation systems for AI are unlike anything we use for humans. Because machines can’t get fatigued or frustrated in the way that humans do, it can help to ask an LLM the same question repeatedly in slightly different ways and then synthesize its multiple responses. Humans won’t put up with that kind of annoying repetition, but machines will.
Understanding Similarities and Differences
Researchers are still struggling to understand where LLM mistakes diverge from human ones. Some of the weirdness of AI is actually more human-like than it first appears. Small changes to a query to an LLM can result in wildly different responses, a problem known as prompt sensitivity. But, as any survey researcher can tell you, humans behave this way, too. The phrasing of a question in an opinion poll can have drastic impacts on the answers.
LLMs also seem to have a bias towards repeating the words that were most common in their training data; for example, guessing familiar place names like “America” even when asked about more exotic locations. Perhaps this is an example of the human “availability heuristic” manifesting in LLMs, with machines spitting out the first thing that comes to mind rather than reasoning through the question. And like humans, perhaps, some LLMs seem to get distracted in the middle of long documents; they’re better able to remember facts from the beginning and end. There is already progress on improving this error mode, as researchers have found that LLMs trained on more examples of retrieving information from long texts seem to do better at retrieving information uniformly.
In some cases, what’s bizarre about LLMs is that they act more like humans than we think they should. For example, some researchers have tested the hypothesis that LLMs perform better when offered a cash reward or threatened with death. It also turns out that some of the best ways to “jailbreak” LLMs (getting them to disobey their creators’ explicit instructions) look a lot like the kinds of social engineering tricks that humans use on each other: for example, pretending to be someone else or saying that the request is just a joke. But other effective jailbreaking techniques are things no human would ever fall for. One group found that if they used ASCII art (constructions of symbols that look like words or pictures) to pose dangerous questions, like how to build a bomb, the LLM would answer them willingly.
Humans may occasionally make seemingly random, incomprehensible, and inconsistent mistakes, but such occurrences are rare and often indicative of more serious problems. We also tend not to put people exhibiting these behaviors in decision-making positions. Likewise, we should confine AI decision-making systems to applications that suit their actual abilities—while keeping the potential ramifications of their mistakes firmly in mind.
This essay was written with Nathan E. Sanders, and originally appeared in IEEE Spectrum.
The Department of Justice is investigating a lobbying firm representing ExxonMobil for hacking the phones of climate activists:
The hacking was allegedly commissioned by a Washington, D.C., lobbying firm, according to a lawyer representing the U.S. government. The firm, in turn, was allegedly working on behalf of one of the world’s largest oil and gas companies, based in Texas, that wanted to discredit groups and individuals involved in climate litigation, according to the lawyer for the U.S. government. In court documents, the Justice Department does not name either company.
As part of its probe, the U.S. is trying to extradite an Israeli private investigator named Amit Forlit from the United Kingdom for allegedly orchestrating the hacking campaign. A lawyer for Forlit claimed in a court filing that the hacking operation her client is accused of leading “is alleged to have been commissioned by DCI Group, a lobbying firm representing ExxonMobil, one of the world’s largest fossil fuel companies.”
Jen Easterly is out as the Director of CISA. Read her final interview:
There’s a lot of unfinished business. We have made an impact through our ransomware vulnerability warning pilot and our pre-ransomware notification initiative, and I’m really proud of that, because we work on preventing somebody from having their worst day. But ransomware is still a problem. We have been laser-focused on PRC cyber actors. That will continue to be a huge problem. I’m really proud of where we are, but there’s much, much more work to be done. There are things that I think we can continue driving, that the next administration, I hope, will look at, because, frankly, cybersecurity is a national security issue.
If Project 2025 is a guide, the agency will be gutted under Trump:
“Project 2025’s recommendations—essentially because this one thing caused anger—is to just strip the agency of all of its support altogether,” he said. “And CISA’s functions go so far beyond its role in the information space in a way that would do real harm to election officials and leave them less prepared to tackle future challenges.”
In the DHS chapter of Project 2025, Cucinelli suggests gutting CISA almost entirely, moving its core responsibilities on critical infrastructure to the Department of Transportation. It’s a suggestion that Adav Noti, the executive director of the nonpartisan voting rights advocacy organization Campaign Legal Center, previously described to Democracy Docket as “absolutely bonkers.”
“It’s located at Homeland Security because the whole premise of the Department of Homeland Security is that it’s supposed to be the central resource for the protection of the nation,” Noti said. “And that the important functions shouldn’t be living out in siloed agencies.”
Paul's co-worker needed to manage some data in a tree. To do that, they wrote this Java function:
privatestaticbooleanexistsFather(ArrayList<Integer> fatherFolder, Integer fatherId) {
for (Integer father : fatherFolder) {
if (father.equals(fatherId))
returntrue;
}
returnfalse;
}
I do not know what the integers in use represent here. I don't think they're actually representing "folders", despite the variable names in the code. I certainly hope it's not representing files and folders, because that implies they're tossing around file handles in some C-brained approach (but badly, since it implies they've got an open handle for every object).
The core WTF, in my opinion, is this- the code clearly implies some sort of tree structure, the tree contains integers, but they're not using any of the Java structures for handling trees, and implementing this slipshod approach. And even then, this code could be made more generic, as the general process works with any sane Java type.
But there's also the obvious WTF: the java.util.Collection interface, which an ArrayList implements, already handles all of this in its contains method. This entire function could be replaced with fatherFolder.contains(fatherId).
Paul writes: "I guess the last developer didn't know that every implementation of a java.util.Collection has a method called contains. At least they knew how to do a for-each.".
[Advertisement]
Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.
Author: Majoki Cantor waited until Hazzez finished checking the airlock before asking about the Frumies. Hazzez flashed a crooked grin revealing the eclectic range of micro-implants in his teeth. “Why do you want to know about the Frumies?” Cantor shrugged. “Sarge said not to give them anything under any circumstances. Zilch. Nada. Why? Seems kind […]
Moose Madness is a sapphic shifter romance novella (on the short
side for a novella) by the same author as Wolf Country. It was originally published in the anthology
Her Wild Soulmate, which appears to be very out of print.
Maggie (she hates the nickname Moose) grew up in Moose Point, a tiny
fictional highway town in (I think) Alaska. (There is, unsurprisingly, an
actual Moose Point in Alaska, but it's a geographic feature and not a
small town.) She stayed after graduation and is now a waitress in the
Moose Point Pub. She's also a shifter; specifically, she is a moose
shifter like her mother, the town mayor. (Her father is a fox shifter.) As
the story opens, the annual Moose Madness festival is about to turn the
entire town into a blizzard of moose kitsch.
Fiona Barton was Maggie's nemesis in high school. She was the cool,
popular girl, a red-headed wolf shifter whose friend group teased and
bullied awkward and uncoordinated Maggie mercilessly. She was also
Maggie's impossible crush, although the very idea seemed laughable. Fi
left town after graduation, and Maggie hadn't thought about her for years.
Then she walks into Moose Point Pub dressed in biker leathers, with
piercings and one side of her head shaved, back in town for a wedding in
her pack.
Much to the shock of both Maggie and Fi, they realize that they're
soulmates as soon as their eyes meet. Now what?
If you thought I wasn't going to read the moose and wolf shifter romance
once I knew it existed, you do not know me very well. I have been saving
it for when I needed something light and fun. It seemed like the right
palette cleanser after a very
disappointing book.
Moose Madness takes place in the same universe as Wolf
Country, which means there are secret shifters all over Alaska (and
presumably elsewhere) and they have the strong magical version of love at
first sight. If one is a shifter, one knows immediately as soon as one
locks eyes with one's soulmate and this feeling is never wrong. This is
not my favorite romance trope, but if I get moose shifter romance out of
it, I'll endure.
As you can tell from the setup, this is enemies-to-lovers, but the whole
soulmate thing shortcuts the enemies to lovers transition rather abruptly.
There's a bit of apologizing and air-clearing at the start, but most of
the novella covers the period right after enemies have become lovers and
are getting to know each other properly. If you like that part of the arc,
you will probably enjoy this, but be warned that it's slight and somewhat
obvious. There's a bit of tension from protective parents and annoying
pack mates, but it's sorted out quickly and easily. If you want the
characters to work for the relationship, this is not the novella for you.
It's essentially all vibes.
I liked the vibes, though! Maggie is easy to like, and Fi does a solid job
apologizing. I wish there was quite a bit more moose than we get, but
Delaney captures the combination of apparent awkwardness and raw power of
a moose and has a good eye for how beautiful large herbivores can be. This
is not the sort of book that gives a moment's thought to wolves being
predators and moose being, in at least some sense, prey animals, so if you
are expecting that to be a plot point, you will be disappointed. As with
Wolf Country, Delaney elides most of the messier and more ethically
questionable aspects of sometimes being an animal.
This is a sweet, short novella about two well-meaning and fundamentally
nice people who are figuring out that middle school and high school are
shitty and sometimes horrible but don't need to define the rest of one's
life. It's very forgettable, but it made me smile, and it was indeed a
good palette cleanser.
If you are, like me, the sort of person who immediately thought "oh, I
have to read that" as soon as you saw the moose shifter romance, keep your
expectations low, but I don't think this will disappoint. If you are not
that sort of person, you can safely miss this one.
President Trump last week issued a flurry of executive orders that upended a number of government initiatives focused on improving the nation’s cybersecurity posture. The president fired all advisors from the Department of Homeland Security’s Cyber Safety Review Board, called for the creation of a strategic cryptocurrency reserve, and voided a Biden administration action that sought to reduce the risks that artificial intelligence poses to consumers, workers and national security.
On his first full day back in the White House, Trump dismissed all 15 advisory committee members of the Cyber Safety Review Board (CSRB), a nonpartisan government entity established in February 2022 with a mandate to investigate the causes of major cybersecurity events. The CSRB has so far produced three detailed reports, including an analysis of the Log4Shell vulnerability crisis, attacks from the cybercrime group LAPSUS$, and the 2023 Microsoft Exchange Online breach.
The CSRB was in the midst of an inquiry into cyber intrusions uncovered recently across a broad spectrum of U.S. telecommunications providers at the hands of Chinese state-sponsored hackers. One of the CSRB’s most recognizable names is Chris Krebs (no relation), the former director of the Cybersecurity and Infrastructure Security Agency (CISA). Krebs was fired by President Trump in November 2020 for declaring the presidential contest was the most secure in American history, and for refuting Trump’s false claims of election fraud.
South Dakota Governor Kristi Noem, confirmed by the U.S. Senate last week as the new director of the DHS, criticized CISA at her confirmation hearing, TheRecordreports.
Noem told lawmakers CISA needs to be “much more effective, smaller, more nimble, to really fulfill their mission,” which she said should be focused on hardening federal IT systems and hunting for digital intruders. Noem said the agency’s work on fighting misinformation shows it has “gotten far off mission” and involved “using their resources in ways that was never intended.”
“The misinformation and disinformation that they have stuck their toe into and meddled with, should be refocused back onto what their job is,” she said.
Moses Frost, a cybersecurity instructor with the SANS Institute, compared the sacking of the CSRB members to firing all of the experts at the National Transportation Safety Board (NTSB) while they’re in the middle of an investigation into a string of airline disasters.
“I don’t recall seeing an ‘NTSB Board’ being fired during the middle of a plane crash investigation,” Frost said in a recent SANS newsletter. “I can say that the attackers in the phone companies will not stop because the review board has gone away. We do need to figure out how these attacks occurred, and CISA did appear to be doing some good for the vast majority of the federal systems.”
Speaking of transportation, The Record notes that Transportation Security Administration chief David Pekoskewas fired despite overseeing critical cybersecurity improvements across pipeline, rail and aviation sectors. Pekoske was appointed by Trump in 2017 and had his 5-year tenure renewed in 2022 by former President Joe Biden.
AI & CRYPTOCURRENCY
Shortly after being sworn in for a second time, Trump voided a Biden executive order that focused on supporting research and development in artificial intelligence. The previous administration’s order on AI was crafted with an eye toward managing the safety and security risks introduced by the technology. But a statement released by the White House said Biden’s approach to AI had hindered development, and that the United States would support AI systems that are “free from ideological bias or engineered social agendas,” to maintain leadership.
The Trump administration issued its own executive order on AI, which calls for an “AI Action Plan” to be led by the assistant to the president for science and technology, the White House “AI & crypto czar,” and the national security advisor. It also directs the White House to revise and reissue policies to federal agencies on the government’s acquisition and governance of AI “to ensure that harmful barriers to America’s AI leadership are eliminated.”
Trump’s AI & crypto czar is David Sacks, an entrepreneur and Silicon Valley venture capitalist who argues that the Biden administration’s approach to AI and cryptocurrency has driven innovation overseas. Sacks recently asserted that non-fungible cryptocurrency tokens and memecoins are neither securities nor commodities, but rather should be treated as “collectibles” like baseball cards and stamps.
There is already a legal definition of collectibles under the U.S. tax code that applies to things like art or antiques, which can be subject to high capital gains taxes. But Joe Hall, a capital markets attorney and partner at Davis Polk, told Fortune there are no market regulations that apply to collectibles under U.S. securities law. Hall said Sacks’ comments “suggest a viewpoint that it would not be appropriate to regulate these things the way we regulate securities.”
The new administration’s position makes sense considering that the Trump family is deeply and personally invested in a number of recent memecoin ventures that have attracted billions from investors. President Trump and First Lady Melania Trump each launched their own vanity memecoins this month, dubbed $TRUMP and $MELANIA.
The Wall Street Journalreported Thursday the market capitalization of $TRUMP stood at about $7 billion, down from a peak of near $15 billion, while $MELANIA is hovering somewhere in the $460 million mark. Just two months before the 2024 election, Trump’s three sons debuted a cryptocurrency token called World Liberty Financial.
Despite maintaining a considerable personal stake in how cryptocurrency is regulated, Trump issued an executive order on January 23 calling for a working group to be chaired by Sacks that would develop “a federal regulatory framework governing digital assets, including stablecoins,” and evaluate the creation of a “strategic national digital assets stockpile.”
Translation: Using taxpayer dollars to prop up the speculative, volatile, and highly risky cryptocurrency industry, which has been marked by endless scams, rug-pulls, 8-figure cyber heists, rampant fraud, and unrestrained innovations in money laundering.
WEAPONIZATION & DISINFORMATION
Prior to the election, President Trump frequently vowed to use a second term to exact retribution against his perceived enemies. Part of that promise materialized in an executive order Trump issued last week titled “Ending the Weaponization of the Federal Government,” which decried “an unprecedented, third-world weaponization of prosecutorial power to upend the democratic process,” in the prosecution of more than 1,500 people who invaded the U.S. Capitol on Jan. 6, 2021.
On Jan. 21, Trump commuted the sentences of several leaders of the Proud Boys and Oath Keepers who were convicted of seditious conspiracy. He also issued “a full, complete and unconditional pardon to all other individuals convicted of offenses related to events that occurred at or near the United States Capitol on January 6, 2021,” which include those who assaulted law enforcement officers.
The New York Times reports “the language of the document suggests — but does not explicitly state — that the Trump administration review will examine the actions of local district attorneys or state officials, such as the district attorneys in Manhattan or Fulton County, Ga., or the New York attorney general, all of whom filed cases against President Trump.”
“Over the last 4 years, the previous administration trampled free speech rights by censoring Americans’ speech on online platforms, often by exerting substantial coercive pressure on third parties, such as social media companies, to moderate, deplatform, or otherwise suppress speech that the Federal Government did not approve,” the Trump administration alleged. “Under the guise of combatting ‘misinformation,’ ‘disinformation,’ and ‘malinformation,’ the Federal Government infringed on the constitutionally protected speech rights of American citizens across the United States in a manner that advanced the Government’s preferred narrative about significant matters of public debate.”
Both of these executive orders have potential implications for security, privacy and civil liberties activists who have sought to track conspiracy theories and raise awareness about disinformation efforts on social media coming from U.S. adversaries.
In the wake of the 2020 election, Republicans created the House Judiciary Committee’s Select Subcommittee on the Weaponization of the Federal Government. Led by GOP Rep. Jim Jordan of Ohio, the committee’s stated purpose was to investigate alleged collusion between the Biden administration and tech companies to unconstitutionally shut down political speech.
The GOP committee focused much of its ire at members of the short-lived Disinformation Governance Board, an advisory board to DHS created in 2022 (the “combating misinformation, disinformation, and malinformation” quote from Trump’s executive order is a reference to the board’s stated mission). Conservative groups seized on social media posts made by the director of the board, who resigned after facing death threats. The board was dissolved by DHS soon after.
In his first administration, President Trump created a special prosecutor to probe the origins of the FBI’s investigation into possible collusion between the Trump campaign and Russian operatives seeking to influence the 2016 election. Part of that inquiry examined evidence gathered by some of the world’s most renowned cybersecurity experts who identified frequent and unexplained communications between an email server used by the Trump Organization and Alfa Bank, one of Russia’s largest financial institutions.
Trump’s Special Prosecutor John Durham later subpoenaed and/or deposed dozens of security experts who’d collected, viewed or merely commented on the data. Similar harassment and deposition demands would come from lawyers for Alfa Bank. Durham ultimately indicted Michael Sussman, the former federal cybercrime prosecutor who reported the oddity to the FBI. Sussman was acquitted in May 2022. Last week, Trump appointed Durham to lead the U.S. attorney’s office in Brooklyn, NY.
Quinta Jurecic at Lawfarenotes that while the executive actions are ominous, they are also vague, and could conceivably generate either a campaign of retaliation, or nothing at all.
“The two orders establish that there will be investigations but leave open the questions of what kind of investigations, what will be investigated, how long this will take, and what the consequences might be,” Jurecic wrote. “It is difficult to draw firm conclusions as to what to expect. Whether this ambiguity is intentional or the result of sloppiness or disagreement within Trump’s team, it has at least one immediate advantage as far as the president is concerned: generating fear among the broad universe of potential subjects of those investigations.”
On Friday, Trump moved to fire at least 17 inspectors general, the government watchdogs who conduct audits and investigations of executive branch actions, and who often uncover instances of government waste, fraud and abuse. Lawfare’s Jack Goldsmith argues that the removals are probably legal even though Trump defied a 2022 law that required congressional notice of the terminations, which Trump did not give.
“Trump probably acted lawfully, I think, because the notice requirement is probably unconstitutional,” Goldsmith wrote. “The real bite in the 2022 law, however, comes in the limitations it places on Trump’s power to replace the terminated IGs—limitations that I believe are constitutional. This aspect of the law will make it hard, but not impossible, for Trump to put loyalists atop the dozens of vacant IG offices around the executive branch. The ultimate fate of IG independence during Trump 2.0, however, depends less on legal protections than on whether Congress, which traditionally protects IGs, stands up for them now. Don’t hold your breath.”
Among the many Biden administration executive orders revoked by President Trump last week was an action from December 2021 establishing the United States Council on Transnational Organized Crime, which is charged with advising the White House on a range of criminal activities, including drug and weapons trafficking, migrant smuggling, human trafficking, cybercrime, intellectual property theft, money laundering, wildlife and timber trafficking, illegal fishing, and illegal mining.
So far, the White House doesn’t appear to have revoked an executive order that former President Biden issued less than a week before President Trump took office. On Jan. 16, 2025, Biden released a directive that focused on improving the security of federal agencies and contractors, and giving the government more power to sanction the hackers who target critical infrastructure.
Although I am a Debian Developer (not very active, BTW) I am using Ubuntu LTS (right now version 24.04.1) on my main
machine; it is my work laptop and I was told to keep using Ubuntu on it when it was assigned to me, although I don’t
believe it is really necessary or justified (I don’t need support, I don’t provide support to others and I usually test
my shell scripts on multiple systems if needed anyway).
Initially I kept using Debian Sid on my personal laptop, but I gave it to my oldest son as the one he was using (an old
Dell XPS 13) was stolen from him a year ago.
I am still using Debian stable on my servers (one at home that also runs LXC containers and another one on an OVH VPS),
but I don’t have a Debian Sid machine anymore and while I could reinstall my work machine, I’ve decided I’m going to try
to use a system container to run Debian Sid on it.
As I want to use a container instead of a VM I’ve narrowed my options to lxc or systemd-nspawn (I have docker and
podman installed, but I don’t believe they are good options for running system containers).
As I will want to take snapshots of the container filesystem I’ve decided to try
incus instead of systemd-nspawn (I already have
experience with it and while it works well it has less features than incus).
Installing incus
As this is a personal system where I want to try things, instead of using the packages included with Ubuntu I’ve decided
to install the ones from the zabbly incus stable
repository.
To do it I’ve executed the following as root:
# Get the zabbly repository GPG key
curl -fsSL https://pkgs.zabbly.com/key.asc -o /etc/apt/keyrings/zabbly.asc
# Create the zabbly-incus-stable.sources file
sh -c'cat <<EOF > /etc/apt/sources.list.d/zabbly-incus-stable.sources
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: $(. /etc/os-release && echo ${VERSION_CODENAME})
Components: main
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/zabbly.asc
EOF'
Initially I only plan to use the command line tools, so I’ve installed the incus and the incus-extra packages, but
once things work I’ll probably install the incus-ui-canonical package too, at least for testing it:
apt update
apt install incus incus-extra
Adding my personal user to the incus-admin group
To be able to run incus commands as my personal user I’ve added it to the incus-admin group:
sudo adduser "$(id-un)" incus-admin
And I’ve logged out and in again of my desktop session to make the changes effective.
Initializing the incus environment
To configure the incus environment I’ve executed the incus admin init command and
accepted the defaults for all the questions, as they are good enough for my current use case.
Creating a Debian container
To create a Debian container I’ve used the default debian/trixie image:
incus launch images:debian/trixie debian
This command downloads the image and creates a container named debian using the default profile.
The exec command can be used to run a root login shell inside the container:
incus exec debian -- su -l
Instead of exec we can use the shell alias:
incus shell debian
which does the same as the previous command.
Inside that shell we can try to update the machine to sid changing the /etc/apt/sources.list file and using apt:
As my machine has docker installed the apt update command fails because the network does not work, to fix it I’ve
executed the commands of the following section and re-run the apt update and apt dist-upgrade commands.
Making the incusbr0 bridge work with Docker
To avoid problems with docker networking we have to add rules for the incusbr0 bridge to the DOCKER-USER chain as
follows:
That makes things work now, but to make things persistent across reboots we need to add them each time the machine
boots.
As suggested by the incus documentation I’ve installed the iptables-persistent package (my command also purges the
ufw package, as I was not using it) and saved the current rules when installing:
sudo apt install iptables-persistent --purge
Integrating the DNS resolution of the container with the host
To make DNS resolution for the ictus containers work from the host I’ve followed the
incus documentation.
To set up things manually I’ve run the following:
br="incusbr0";br_ipv4="$(incus network get "$br" ipv4.address)";br_domain="$(incus network get "$br" dns.domain)";dns_address="${br_ipv4%/*}";dns_domain="${br_domain:=incus}";
resolvectl dns "$br""${dns_address}";
resolvectl domain "$br""~${dns_domain}";
resolvectl dnssec "$br" off;
resolvectl dnsovertls "$br" off;
And to make the changes persistent across reboots I’ve created the following service file:
sh -c"cat <<EOF | sudo tee /etc/systemd/system/incus-dns-${br}.service
[Unit]
Description=Incus per-link DNS configuration for ${br}
BindsTo=sys-subsystem-net-devices-${br}.device
After=sys-subsystem-net-devices-${br}.device
[Service]
Type=oneshot
ExecStart=/usr/bin/resolvectl dns ${br}${dns_address}
ExecStart=/usr/bin/resolvectl domain ${br} ~${dns_domain}
ExecStart=/usr/bin/resolvectl dnssec ${br} off
ExecStart=/usr/bin/resolvectl dnsovertls ${br} off
ExecStopPost=/usr/bin/resolvectl revert ${br}
RemainAfterExit=yes
[Install]
WantedBy=sys-subsystem-net-devices-${br}.device
EOF"
Once the user is created we can mount the home directory on the container (we add the shift option to make the
container use the same UID and GID as we do on the host):
incus config device add debian home disk source=$HOMEpath=$HOMEshift=true
We have the shell alias to log with the root account, now we can add another one to log into the container using the
newly created user:
incus alias add ush "exec @ARGS@ -- su -l $(id--user--name)"
To log into the container as our user now we just need to run:
incus ush debian
To be able to use sudo inside the container we could add our user to the sudo group:
But that requires a password and we don’t have one, so instead we are going to add a file to the /etc/sudoers.d
directory to allow our user to run sudo without a password:
incus exec debian --\
sh -c"echo '$(id--user--name) ALL = NOPASSWD: ALL' /etc/sudoers.d/user"
Accessing the container using ssh
To use the container as a real machine and log into it as I do on remote machines I’ve installed the openssh-server
and authorized my laptop public key to log into my laptop (as we are mounting the home directory from the host that
allows us to log in without password from the local machine).
Also, to be able to run X11 applications from the container I’ve adusted the $HOME/.ssh/config file to always forward
X11 (option ForwardX11 yes for Host debian.incus) and installed the xauth package.
After that I can log into the container running the command ssh debian.incus and start using it after installing other
interesting tools like neovim, rsync, tmux, etc.
Taking snapshots of the container
As this is a system container we can take snapshots of it using the incus snapshot command; that can be specially
useful to take snapshots before doing a dist-upgrade so we can rollback if something goes wrong.
To work with container snapshots we run use the incus snapshot command, i.e. to create a snapshot we use de create
subcommand:
incus snapshot create debian
The snapshot sub commands include options to list the available snapshots, restore a snapshot, delete a snapshot, etc.
Conclusion
Since last week I have a terminal running a tmux session on the Debian Sid container with multiple zsh windows open
(I’ve changed the prompt to be able to notice easily where I am) and it is working as expected.
My plan now is to add some packages and use the container for personal projects so I can work on a Debian Sid system
without having to reinstall my work machine.
I’ll probably write more about it in the future, but for now, I’m happy with the results.
Denise's company formed a new team. They had a lot of low-quality legacy code, and it had gotten where it was, in terms of quality, because the company had no real policy or procedures which encouraged good code. "If it works, it ships," was basically the motto. They wanted to change that, and the first step was creating a new software team to kick of green-field projects with an eye towards software craftsmanship.
Enter Jack. Jack was the technical lead, and Jack had a vision of good software. This started with banning ORM-generated database models. But it also didn't involve writing raw SQL either- Jack hand-forged their tables with the Visual Table Designer feature of SQL Server Management Studio.
"The advantage," he happily explained to Denise, "is that we can then just generate our ORM layer right off the database. And when the database changes, we just regenerate- it's way easier than trying to build migrations."
"Right, but even if we're not using ORM migrations, we still want to write migration scripts for our changes to our database. We need to version control them and test them."
"We test them by making the change and running the test suite," Jack said.
And what a test suite it was. There was 100% test coverage. There was test coverage on simple getter/setter methods. There was test coverage on the data transfer objects, which had no methods but getters and setters. There were unit tests for functions that did nothing more than dispatch to built-in functions. Many of the tests just verified that a result was returned, but never checked what the result was. There were unit tests on the auto-generated ORM objects.
The last one, of course, meant that any time they changed the database, there was a significant risk that the test suite would fail on code that they hadn't written. Not only did they need to update the code consuming the data, the tests on that code, they also had to update the tests on the autogenerated code.
Jack's magnum opus, in the whole thing, was that he designed the software with a plugin architecture. Instead of tightly coupling different implementations of various modules together, there was a plugin loader which could fetch an assembly at runtime and use that. Unfortunately, while the whole thing could have plugins, all of the abstractions leaked across module boundaries so you couldn't reasonably swap out plugins without rewriting the entire application. Instead of making a modular architecture, Jack just made starting the application wildly inefficient.
Denise and her team brought their concerns to management. Conversations were had, and it fell upon Jack to school them all. Cheerfully, he said: "Look, not everyone gets software craftsmanship, so I'm going to implement a new feature as sort of a reference implementation. If you follow the pattern I lay out, you'll have an easy time building good code!"
The new feature was an identity verification system which called for end users to upload photographs of their IDs- drivers' licenses, passports, etc. It was not a feature which should have had one developer driving the whole thing, and Jack was not implementing the entire lifecycle of data management for this; instead he was just implementing the upload feature.
Jack pushed it through, out and up into production. Somehow, he short-cut past any code reviews, feature reviews, or getting anyone else to test it. He went straight to a demo in production, where he uploaded his passport and license. "So, there you go, a reference implementation for you all."
Denise went ahead and ran her own test, with a synthetic ID for a test user, which didn't contain any real humans' information. The file upload crashed. In fact, in an ultimate variation of "it works on my machine," the only person who ever successfully used the upload feature was Jack. Of course, since the upload never worked, none of the other features, like retention policies, ever got implemented either.
Now, this didn't mean the company couldn't do identity verification- they had an existing system, so they just kept redirecting users to that, instead of the new version, which didn't work.
Jack went on to other features, though, because he was a clever craftsman and needed to bring his wisdom to the rest of their project. So the file upload just languished, never getting fixed. Somehow, this wasn't Jack's fault, management didn't hold him responsible, and everyone was still expected to follow the patterns he used in designing the feature to guide their own work.
Until, one day, the system was breached by hackers. This, surprisingly, had nothing to do with Jack's choices- one of the admins got phished. This meant that the company needed to send out an announcement, informing users that they were breached. "We deeply regret the breach in our identity verification system, but can confirm that no personal data for any of our customers was affected."
Jack, of course, was not a customer, so he got a private disclosure that his passport and ID had been compromised.
[Advertisement]
Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.
Author: Julian Miles, Staff Writer “Does it ever end?” Bruce rises slightly and turns to stare at Lilimya. “If you don’t pay attention, it’ll end sooner than y-” He explodes from the waist up, a wave of heat momentarily turning snowflakes to steam. Lilimya is blown backwards, splinters of bone peppering her armour amidst a […]
The House That Fought is the third and final book of the
self-published space fantasy trilogy starting with
The House That Walked Between
Worlds. I read it as part of the Uncertain Sanctuary omnibus,
which is reflected in the sidebar metadata.
At the end of the last book, one of Kira's random and vibe-based trust
decisions finally went awry. She has been betrayed! She's essentially
omnipotent, the betrayal does not hurt her in any way, and, if anything,
it helps the plot resolution, but she has to spend some time feeling bad
about it first. Eventually, though, the band of House residents return to
the problem of Earth's missing magic.
By Earth here, I mean our world, which technically isn't called Earth in
the confusing world-building of this series. Earth within this universe is
an archetypal world that is the origin world for humans, the two types
of dinosaurs, and Neanderthals. There are numerous worlds that have split
off from it, including Human, the one world where humans are dominant,
which is what we think of as Earth and what Kira calls Earth half the
time. And by worlds, I mean entire universes (I think?), because traveling
between "worlds" is dimensional travel, not space travel. But there is
also space travel?
The world building started out confusing and has degenerated over the
course of the series. Given that the plot, such as it is, revolves around
a world-building problem, this is not a good sign.
Worse, though, is that the quality of the writing has become unedited,
repetitive drivel. I liked the first book and enjoyed a few moments of the
second book, but this conclusion is just bad. This is the sort of book
that the maxim "show, don't tell" was intended to head off. The dull,
thudding description of the justification for every character emotion
leaves no room for subtlety or reader curiosity.
Evander was elf and I was human. We weren't the same. I had magic. He
had the magic I'd unconsciously locked into his augmentations. We were
different and in love. Speaking of our differences could be a trigger.
I peeked at him, worried. My customary confidence had taken a hit.
"We're different," he answered my unspoken question. "And we work
anyway. We'll work to make us work."
There is page after page after page of this sort of thing: facile
emotional processing full of cliches and therapy-speak, built on the most
superficial of relationships. There's apparently a romance now, which
happened with very little build-up, no real discussion or communication
between the characters, and only the most trite and obvious relationship
work.
There is a plot underneath all this, but it's hard to make it suspenseful
given that Kira is essentially omnipotent. Schwartz tries to turn the
story into a puzzle that requires Kira figure out what's going on before
she can act, but this is undermined by the confusing world-building. The
loose ends the plot has accumulated over the previous two books are mostly
dropped, sometimes in a startlingly casual way. I thought Kira would care
who killed her parents, for example; apparently, I was wrong.
The previous books caught my attention with a more subtle treatment of
politics than I expect from this sort of light space fantasy. The
characters had, I thought, a healthy suspicion of powerful people and a
willingness to look for manipulation or ulterior motives. Unfortunately,
we discover here that this is not due to an appreciation of the complexity
of power and motive in governments. Instead, it's a reflexive bias against
authority and structured society that sounds like an Internet libertarian
complaining about taxes. Powerful people should be distrusted because all
governments are corrupt and bad and steal your money in order to waste it.
Oh, except for the cops and the military; they're generally good people
you should trust.
In retrospect, I should have expected this turn given the degree to which
Schwartz stressed the independence of sorcerers. I thought that was going
somewhere more interesting than sorcerers as self-appointed vigilantes who
are above the law and can and should do anything they damn well please.
Sadly, it was not.
Adding to the lynch mob feeling, the ending of this book is a deeply
distasteful bit of magical medieval punishment that I thought was vile,
and which is, of course, justified by bad things happening to children. No
societal problems were solved, but Kira got her petty revenge and got to
be gleeful and smug about it. This is apparently what passes for a happy
ending.
I don't even know what to say about the bizarre insertion of Christianity,
which makes little sense given the rest of the world-building. It's
primarily a way for Kira to avoid understanding or thinking about an
important part of the plot. As sadly seems to often be the case in books
like this, Kira's faith doesn't appear to prompt any moral analysis or
thoughtful ethical concern about her unlimited power, just certainty that
she's right and everyone else is wrong.
This was dire. It is one of those self-published books that I feel a
little bad about writing this negative of a review about, because I think
most of the problem was that the author's skill was not up to the story
that she wanted to tell. This happens a lot in self-published fiction,
particularly since Kindle Unlimited has started rewarding quantity over
quality. But given how badly the writing quality degraded over the course
of the series, and how offensive the ending was, I do want to warn other
people off of the series.
There is so much better fiction out there. Avoid this one, and probably
the rest of the series unless you're willing to stop after the first book.
This week on my podcast, I’m reading “The Weight of a Feather (The Weight of a Heart),” my short story in Harlan Ellison’s The Last Dangerous Visions, commissioned by J. Michael Straczynski.
Margaret came into my office, breaking my unproductive clicktrance. She looked sheepish. “I got given one of those robots that follows you around,” she said. She took a step, revealing the waist-high reinforced cardboard box. “Want to help unbox?
I stood up and unkinked my spine and hips and shoulders with a sound like wringing out a sheet of bubble-wrap. “Oof.”
“Come on, old fella,” she said. She handed me a box-cutter.
Author: Diego Lama, Translated: Rose Facchini Lucio has smelled of piss and shit for more than twenty years, ever since his mother died suddenly and he started sleeping in the park. That night, however, he couldn’t fall asleep. It was the stars’ fault. Lucio didn’t wonder if there was life out there, as many others […]
Dark Matters is the fourth book in the science fiction semi-romance
Class 5 series. There are spoilers for all of the previous books, and
although enough is explained that you could make sense of the story
starting here, I wouldn't recommend it. As with the other books in the
series, it follows new protagonists, but the previous protagonists make an
appearance.
You will be unsurprised to hear that the Tecran kidnapped yet
another Earth woman. The repetitiveness of the setup would be more
annoying if the book took itself too seriously, but it doesn't, and so I
mostly find it entertaining. I thought Diener was going to dodge the
obvious series structure, but now I am wondering if we're going to end up
with one woman per Class 5 ship after all.
Lucy is not on a ship, however, Tecran or otherwise. She is a captive in a
military research facility on the Tecran home world. The Tecran are in
very deep trouble given the events of the previous book and have decided
that Lucy's existence is a liability. Only the intervention of some
sympathetic Tecran scientists she partly befriended during her captivity
lets her escape the facility before it's destroyed. Now she's alone, on an
alien world, being hunted by the military.
It's not entirely the fault of this book that it didn't tell the story
that I wanted to read. The setup for Dark Matters implies this book
will see the arrival of consequences for the Tecran's blatant violations
of the Sentient Beings Agreement. I was looking forward to a more
political novel about how such consequences could be administered. This is
the sort of problem that we struggle with in our politics: Collective
punishment isn't acceptable, but there have to be consequences sufficient
to ensure that a state doesn't repeat the outlawed behavior, and yet
attempting to deliver those consequences feels like occupation and can set
off worse social ruptures and even atrocities. I wasn't expecting that
deep of political analysis of what is, after all, a lighthearted SF
adventure series, but Diener has been willing to touch on hard problems.
The ethics of violence has been an ongoing theme of the series.
Alas for me, this is not what we get. The arriving cavalry, in the form of
a Class 5 and the inevitable Grih hunk to serve as the love interest du
jour, quickly become more interested in helping Lucy elude pursuers (or
escape captors) than in the delicate political situation. The conflict
between the local population is a significant story element, but only as
backdrop. Instead, this reads like a thriller or an action movie, complete
with alien predators and a cinematic set piece finale.
The political conflict between the Tecran and the United Council does
reach a conclusion of sorts, but it's not that satisfying. Perhaps some of
the political fallout will happen in future books, but here Diener
simplifies the morality of the story in the climax and dodges out of the
tricky ethical and social challenge of how to punish a sovereign nation.
One of the things I like about this series is that it takes moral
indignation seriously, but now that Diener has raised the (correct)
complication that people have strong motivations to find excuses for the
actions of their own side, I hope she can find a believable political
resolution that isn't simple brute force.
This entry in the series wasn't bad, but it didn't grab me. Lucy was fine
as a protagonist; her ability to manipulate the Tecran into making
mistakes fits the longer time she's had to study them and keeps her
distinct from the other protagonists. But the small bit of politics we do
see is unsatisfying and conveniently simplistic, and this book mostly
degenerates into generic action sequences. Bane, the Class 5 ship featured
in this story, is great when he's active, and I continue to be entertained
by the obsession the Class 5 ships have with Earth women, but he's
sidelined for too much of the story. I felt like Diener focused on the
least interesting part of the story setup.
If you've read this far, there's nothing wrong with this entry. You'll
probably want to keep reading. But it felt like a missed opportunity.
Followed in publication order by Dark Ambitions, a novella that
returns to Rose to tell a side story. The next novel is Dark Class,
in which we'll presumably see the last kidnapped Earth woman.
Another minor maintenance release for the RcppBDT
package is now on CRAN.
The RcppBDT
package is an early adopter of Rcpp
and was one of the first packages utilizing Boost and its Date_Time
library. The now more widely-used package anytime is a
direct descentant of RcppBDT.
This release follows the recent updates of other package updating
their C++ compilation standard along with other standard package
updates. No feature or interface changes.
The NEWS entry follows:
Changes in version 0.2.7
(2025-01-25)
The package no longer sets a C++ compilation standard
Package infrastructure, documentation and continuous integration
support have been updated
Several manual pages now have package anchors for the code
link
Becoming a Debian maintainer is a journey that combines technical expertise, community collaboration, and continuous learning. In this post, I’ll share 10 key habits that will both help you navigate the complexities of Debian packaging without getting lost, and also enable you to contribute more effectively to one of the world’s largest open source projects.
1. Read and re-read the Debian Policy, the Developer’s Reference and the git-buildpackage manual
Anyone learning Debian packaging and aspiring to become a Debian maintainer is likely to wade through a lot of documentation, only to realize that much of it is outdated or sometimes outright incorrect.
Therefore, it is important to learn right from the start which sources are the most reliable and truly worth reading and re-reading. I recommend these documents, in order of importance:
The Debian Policy Manual: Describes the structure of the operating system, the package archive, and requirements for packages to be included in the Debian archive.
The Developer’s Reference: A collection of best practices and process descriptions Debian packagers are expected to follow while interacting with one another.
The git-buildpackage man pages: While the Policy focuses on the end result and is intentionally void of practical instructions on creating or maintaining Debian packages, the Developer’s Reference goes into greater detail. However, it too lacks step-by-step instructions. For the exact commands, consult the man pages of git-buildpackage and its subcommands (e.g., gbp clone, gbp import-orig, gbp pq, gbp dch, gbp push). See also my post on Debian source package git branch and tags for an easy to understand diagrams.
2. Make reading man pages a habit
In addition to the above, try to make a habit of checking out the man page of every new tool you use to ensure you are using it as intended.
The best place to read accurate and up-to-date documentation is manpages.debian.org. The manual pages are maintained alongside the tools by their developers, ensuring greater accuracy than any third-party documentation.
If you are using a tool in the way the tool author documented, you can be confident you are doing the right thing, even if it wasn’t explicitly mentioned in some third-party guide about Debian packaging best practices.
3. Read and write emails
While members of the Debian community have many channels of communication, the mailing lists are by far the most prominent. Asking questions on the appropriate list is a good way to get current advice from other people doing Debian packaging. Staying subscribed to lists of interest is also a good way to read about new developments as they happen.
Note that every post is public and archived permanently, so the discussions on the mailing lists also form a body of documentation that can later be searched and referred to.
Regularly writing short and well-structured emails on the mailing lists is great practice for improving technical communication skills — a useful ability in general. For Debian specifically, being active on mailing lists helps build a reputation that can later attract collaborators and supporters for more complex initiatives.
4. Create and use an OpenPGP key
Related to reputation and identity, OpenPGP keys play a central role in the Debian community. OpenPGP is used to various degrees to sign git commits and tags, sign and encrypt email, and — most importantly — to sign Debian packages so their origin can be verified.
The process of becoming a Debian Maintainer and eventually a Debian Developer culminates in getting your OpenPGP key included in the Debian keyring, which is used to control who can upload packages into the Debian archive.
The earlier you create a key and start using it to gain reputation for that specific key that is used to sign your work, the better. Note that due to a recent schism in the OpenPGP standards working group, it is safest to create an OpenPGP key using GnuPG version 2.2.x (not 2.4.x), or using Sequoia-PGP.
5. Integrate Salsa CI in all work
One reason Debian remains popular, even 30 years after its inception, is due to its culture of maintaining high standards. For a newcomer, learning all the quality assurance tools such as Lintian, Piuparts, Adequate, various build variations, and reproducible builds may be overwhelming. However, these tasks are easier to manage thanks to Salsa CI, the continuous integration pipeline in Debian that runs tests on every commit at salsa.debian.org.
The earlier you activate Salsa CI in the package repository you are working on, the faster you will achieve high quality in your package with fewer missteps. You can also further customize a package specific salsa-ci.yml to have more testing coverage.
6. Fork on Salsa and use draft Merge Requests to solicit feedback
All modern Debian packages are hosted on salsa.debian.org. If you want to make a change to any package, it is easy to fork, make an initial attempt at the change, and publish it as a draft Merge Request (MR) on Salsa to solicit feedback.
People might have surprising reasons to object to the change you propose, or they might need time to get used to the idea before agreeing to it. Also, some people might object to a vague idea out of suspicion but agree once they see the exact implementation. There may also be a surprising number of people supporting your idea, and if there is an MR, they have a place to show their support at.
Don’t expect every Merge Request to be accepted. However, proposing an idea as running code in an MR is far more effective than raising the idea on a mailing list or in a bug report. Get into the habit of publishing plenty of merge requests to solicit feedback and drive discussions toward consensus.
7. Use git rebase frequently
Linear git history is much easier to read. The ease of reading git log and git blame output is vital in Debian, where packages often have updates from multiple people spanning many years — even decades. Debian packagers likely spend more time than the average software developer reading git history.
Make sure you master git commands such as gitk --all, git citool --amend, git commit -a --fixup <commit id>, git rebase -i --autosquash <target branch>, git cherry-pick <commit id 1> <id 2> <id 3>, and git pull --rebase.
If rebasing is not done on your initiative, rest assured others will ask you to do it. Thus, if the commands above are familiar, rebasing will be quick and easy for you.
8. Reviews: give some, get some
In open source, the larger a project becomes, the more it attracts contributions, and the bottleneck for its growth isn’t how much code developers can create but how much code submissions can be properly reviewed.
At the time of writing, the main Salsa group “Debian” has over 800 open merge requests pending reviews and approvals. Feel free to read and comment on any merge request you find. You don’t have to be a subject matter expert to provide valuable feedback. Even if you don’t have specific feedback, your comment as another human acknowledging that you read the MR and found no issues is viewed positively by the author. Besides, if you spend enough time reviewing MRs in a specific domain, you will eventually become an expert in it. Code reviews are not just about providing feedback to the submitter; they are also great learning opportunities for the reviewer.
As a rule of thumb, you should review at least twice as many merge requests as you submit yourself.
9. Improve Debian by improving upstream
It is common that while packaging software for Debian, bugs are uncovered and patched in Debian. Do not forget to submit the fixes upstream, and add a Forwarded field to the file in debian/patches! As the person building and packaging something in Debian, you automatically become an authority on that software, and the upstream is likely glad to receive your improvements.
While submitting patches upstream is a bit of work initially, getting improvements merged upstream eventually saves time for everyone and makes packaging in Debian easier, as there will be fewer patches to maintain with each new upstream release.
10. Don’t hold any habits too firmly
Last but not least: Once people learn a specific way of working, they tend to stick to it for decades. Learning how to create and maintain Debian packages requires significant effort, and people tend to stop learning once they feel they’ve reached a sufficient level. This tendency to get stuck in a “local optimum” is understandable and natural, but try to resist it.
It is likely that better techniques will evolve over time, so stay humble and re-evaluate your beliefs and practices every few years.
Mastering these habits takes time, but each small step brings you closer to making a meaningful impact on Debian. By staying curious, collaborative, and adaptable, you can ensure your contributions stand the test of time — just like Debian itself. Good luck on your journey toward becoming a Debian Maintainer!
I keep saying I'm "done" with my CP/M emulator, but then I keep overhauling it in significant ways. Today is no exception. In the past the emulator used breakpoints to detect when calls to the system BIOS, or BDOS, were made. That was possible because the BIOS and BDOS entry points are at predictable locations. For example a well-behaved program might make a system call with code like this:
LD A,42
LD C,4
CALL 0x0005
So setting a breakpoint on 0x0005 would let you detect a system-call was being made, inspect the registers to see which system-call was being made and then carry out the appropriate action in your emulator before returning control back to the program. Unfortunately some binaries patch the RAM, changing the contents of the entry points, or changing internal jump-tables, etc. The end result is that sometimes code running at the fixed addresses is not your BIOS at all, but something else. By trapping/faulting/catching execution here you break things, badly.
So today's new release fixes that! No more breakpoints. Instead we deploy a "real BDOS" in RAM that will route system-calls to our host emulator via a clever trick. For BDOS functions the C-register will contain the system call to operate, our complete BDOS implementation is:
OUT (C),C
RET
The host program can catch writes to output ports, and will know that "OUT (3), 3" means "Invoke system call #3", for example. This means binary patches to entry-points, or any internal jump-tables won't confuse things and so long as control eventually reaches my BIOS or BDOS code areas things will work.
I also added a new console-input driver, since I have a notion of pluggable input and output devices, which just reads input from a file. Now I can prove that my code works. Pass the following file to the input-driver and we have automated testing:
A:
ERA HELLO.COM
ERA HELLO.HEX
ERA HELLO.PRN
hello
ASM HELLO
LOAD HELLO
DDT HELLO.com
t
t
t
t
t
t
t
t
t
C-c
EXIT
Here we:
Erase "HELLO.COM", "HELLO.HEX", "HELLO.PRN"
Invoke "hello[.com]" (which will fail, as we've just deleted it).
Then we assemble "HELLO.ASM" to "HELLO.HEX", then to "HELLO.COM"
Invoke DDT, the system debugger, and tell it to trace execution a bunch of times.
Finally we exit the debugger with "Ctrl-C"
And exit the emulator with "exit"
I can test the output and confirm there are no regressions. Neat.
We are pleased to announce that
Infomaniak has committed to sponsor
DebConf25 as a Platinum Sponsor.
Infomaniak is Switzerland’s leading developer of Web technologies. With
operations all over Europe and based exclusively in Switzerland, the company
designs and manages its own data centers powered by 100% renewable energy,
and develops all its solutions locally, without outsourcing. With millions of
users and the trust of public and private organizations across Europe - such
as RTBF, the United Nations, central banks, over 3,000 radio and TV stations,
as well as numerous cities and security bodies - Infomaniak stands for
sovereign, sustainable and independent digital technology. The company offers
a complete suite of collaborative tools, cloud hosting, streaming, marketing
and events solutions, while being owned by its employees and self-financed
exclusively by its customers.
With this commitment as Platinum Sponsor, Infomaniak is contributing to
the Debian annual Developers' conference, directly supporting the
progress of Debian and Free Software. Infomaniak contributes to
strengthen the community that collaborates on Debian projects from all
around the world throughout all of the year.
Thank you very much, Infomaniak, for your support of DebConf25!
Become a sponsor too!
DebConf25 will take place from
14th to July 20th 2025 in Brest, France, and will be preceded by DebCamp,
from 7th to 13th July 2025.
Author: Michelle Wilson It wasn’t their fault. My parents were good and kind, with the best intentions; their only “flaws” were an inability to conceive and the wish for a family of their own. When the technology came, and they saw the digital models of how I would appear (an uncanny visual likeness of them […]
Antikythera: For a new speculative philosophy of computation (which is to say of life, intelligence, automation, and the compositional evolution of planets)
Sciences are born when philosophy learns to ask the right questions; their potential is suppressed when it does not. Today the relationship between the Humanities and Science is one of critical suspicion, a state of affairs that retards not only the development of philosophy but of new sciences to come. And so for Antikythera, the most important philosophical project of the next century is based on understanding the profound implications of new scientific and technological accomplishments. This means not just applying concepts but inventing them. This puts our work in a slightly heretical position in relation to the current orientations of the Humanities, but one that is well placed to develop the school of thought for the speculative philosophy of computation that will frame new and fertile lines of inquiry for a future where science, technology and philosophy convene within whatever supersedes the Humanities as we know it.
New things outrun the nouns available to contain them.
There are historical moments in which humanity’s speculative imagination far outpaces its real technological capacities. Those times overflow with utopias. At others, however, “our” technologies’ capabilities and implications outpace the concepts we have to describe them let alone guide them. The present is more the latter than the former. At this moment, technology and particularly planetary scale computation has outpaced our theory. We face something like a civilization scale computational overhang. Human agency exceeds human wisdom. For philosophy it should be a time of invention.
Too often, however, the response is to force comfortable and settled ideas about ethics, scale, polity, and meaning onto a situation that not only calls for a different framework, but is already generating a different framework.
The response goes well beyond applying inherited philosophical concepts to the present day. The goal, as we joke, is not to ask “What would Kant make of driverless cars?” or “what would Heidegger lament about Large Language Models?” but rather to allow for appearance and cultivation of a new school of philosophical/technological thought that can both account for the qualitative implications of what is here and now and contributed to its compositional orientation. The present alternatives are steeped in sluggish scholasticism: asking if AI can genuinely “think” according to the standards set forth by Kant in Critique of Pure Reason is like asking if this creature discovered in the New World is actually an “animal” as defined by Aristotle. It’s obvious the real question is how the new evidence must update the category, not how the received category can judge reality.
A better way to “do Philosophy” is to experiment actively with the technologies that make contemporary thought possible and to explore the fullest space of that potential. Instead of simply applying philosophy to the topic of computation, Antikythera starts from the other direction and produces theoretical and practical conceptual tools -the speculative- from living computational media. For the 21st century, the instrumental and existential implications of planetary computation challenges how planetary intelligence comes to comprehend its own evolution, its predicament and its possible futures, both bright and dark.
Computation is born of cosmology
The closely bound relationship between computation and planetarity is not new. It begins with the very first artificial computers (we hold that computation was discovered as much as it was invented, and that the computational technologies humans produce are artifacts that make use of a natural phenomenon).
Antikythera takes its name from the Antikythera mechanism, first discovered in 01901 in a shipwreck off the Greek island, and dated to 00200 BCE. This primordial computer was more than a calculator; it was an astronomical machine — mapping, tracking and predicting the movements of stars and planets, marking annual events, and guiding its users on the surface of the globe.
The mechanism not only calculated interlocking variables, it provided an orientation of thought in relation to the astronomic predicament of its users. Using the mechanism enabled its user to think and to act in relation to what was revealed through the mechanism’s perspective.
This is an augmentation of intelligence, but intelligence is not just something that a particular species or machine can do. In the long term it evolves through the scaffolding interactions between multiple systems: genetic, biological, technological, linguistic, and more. Intelligence is a planetary phenomenon.
The name Antikythera refers more generally to computational technology that discloses and accelerates the planetary condition of intelligence. It is more than one particular mechanism, but a growing genealogy of technologies, some of which, like planetary computation infrastructures, we not only use, but also inhabit.
Computation is calculation as world ordering; it is a medium for the complexification of social intelligence.
Computation takes the form of planetary infrastructure that remakes philosophy, science, and society in its image.
How does Antikythera define computation? For Turing it was a process defined by a mathematical limit of the incalculable, but as the decades since his foundational papers have shown, there is little in life that cannot be modeled and represented computationally. That process, like all models, is reductive. A map reduces territory to an image but that is how it becomes useful as a navigational tool. Similarly, computational models and simulations synthesize data in ways that demonstrate forms and patterns that would be otherwise inconceivable.
Antikythera presumes that computation was discovered as much as it was invented. It is less that natural computation works like modern computing devices but rather that modern computing devices and formulations are quickly-evolving approximations of natural computation — genetic, molecular, neuronal, etc.
Computation as a principle may be near universal, but computation as a societal medium is highly malleable. Its everyday affordances are seemingly endless. However, computational technologies evolve and societies evolve in turn. For example, in the decades to come, what is called “AI” may be not simply a novel application for computation, but its primary societal-scale form. Computation would be not just an instrumentally focused calculation but the basis of widespread non-biological intelligence.
Through computational models we perceive existential truths about so many things: human genomic drift through history, the visual profile of astonomic objects millions of light years away, the extent of anthropogenic agency and its climatic effects, the neurological foundations of thought itself. The qualitative profundity of these begins with a quantitative representation. The math discloses reality and reality demands new philosophical scrutiny.
Allocentrism in philosophy and engineering
Computation is not just a tool, anymore than language is just a tool. Both language and computation are constitutive of thought and the encoding and communication of symbolic reasoning. Both evolve in relation to how they affect and are affected by the world, and yet both retain something formally unique. That machine intelligence would evolve through language as much as language will in the foreseeable future evolve through machines suggest a sort of artificial convergent evolution. More on that below.
What does Antikythera mean by “computation” and what is its slice of that spectrum? Our approach is slightly off kilter from how philosophy of computation is, at present, usually practiced. Philosophy, in its Analytic mode, interrogates the mathematical procedure of computation and seeks to qualify those formal properties. It bridges Logic and philosophy of mathematics in often exquisitely productive but sometimes arid ways. Our focus, however, is less in that formal uniqueness than in the mutually-affecting evolution of computation and world, and how each becomes a scaffold for the development of the other.
For its part, Continental Philosophy is suspicious, dismissive and even hostile to computation as a figure of power, reductive thought, and instrumental rationality. It devotes considerable time to the often obscure prosey criticism of all that it imagines computation to be and do. As a rule, both spoken and unspoken, it valorizes the incomputable over the computable, the ineffable over the effable, the analogue over the digital, the poetic over the explanatory, and so on.
Our approach is also qualitative and speculative but instead of seeing philosophy as a form of resistance to computation, it sees computation as a challenge to thought itself. Computation is not that which obscures our view of the real, but which has, over the past century, been the primary enabler of confrontations with the real that are sometimes startling and even disturbing but always precious. This makes us neither optimists nor pessimists, but rather deeply curious and committed to building and rebuilding from first principles rather than commentary on passing epiphenomena.
Our philosophical standpoint is allocentric more than egocentric, “Copernican” more than phenomenological. The presumption is we will as always learn more about ourselves by getting outside our own heads and perspectives, almost always through technological mediation, than we will by private rumination on the experience of interiority or by mistaking culture for history. That said, even from an outside perspective looking back upon ourselves, we (“humans”) are not necessarily the most important thing for philosophy to examine. The vistas are open.
Most sciences grew out of philosophy and did so by stepping outdoors and playfully experimenting with the world as it is. Instead of science composing new technologies to verify its curiosity, the inverse is perhaps even more often the case: new technologies devised for one purpose end up changing what is perceivable and thus what is hypothesized and thus what science seeks. The allocentric turn does not imply that human sapience is not magnificent, but it does locate it differently than it may be used to. It is true that homo sapiens is the species that wrote this and presumably is reading this (the most important reader may be a future LLM) but we are merely the present seat of the intensification of abstract intelligence, which is the real force and actor. We are the medium not the message. If Antikythera might eventually contribute to the philosophical explorations of what in due time becomes a science of understanding the relationship between intelligence and technology (and life) as intertwined planetary phenomena — to ask the questions that can only be answered by that — then we will have truly succeeded.
Planetary Computation
The Anthropocene is a second order concept derived from computation
I have told this story many times. Imagine the famous Blue Marble image as a movie, one spanning all 4.5 billion years of Earth’s development. Watching this movie on super fast-forward, one would see the planet turn from red to blue and green, see continents form and break apart, see the emergence of life and an atmosphere, and in the last few seconds you would see something else remarkable. The planet would almost instantaneously grow an external layer of satellites, cities and various physical networks, all of which constitute a kind of sensory epidermis or exoskeleton. In the last century, Earth has grown this artificial crust through which it has realized incipient forms of animal-machinic cognition with terraforming-scale agency. This is planetary computation. It is not just a tool, it is a geological cognitive phenomenon.
In short it is this phenomenon — planetary computation defined in this way — that is Antikythera’s core interest. To be more precise the term has at least two essential connotations: first, as a global technological apparatus and, second, in all the ways that that apparatus reveals planetary conditions in ways otherwise unthinkable. For the former computation is an instrumental technology that allows new perceptions and interactions with the world; for the latter it is an epistemological technology that shifts fundamental presumptions about what is possible to know about the world at all.
For example, the scientific idea of “climate change,” is an epistemological accomplishment of planetary scale computation. The multiscalar and accelerating rate of change is knowable because data gleaned from satellites, surface and ocean temperatures, and most of all the models derived from supercomputing simulations of planetary past, present and futures. As such, computation has made the contemporary notion of the Planetary and the ‘Anthropocene’ conceivable, accountable, and actionable. These ideas, in turn, established that over the past centuries anthropogenic agency has had terraforming scale effects. Every discipline is reckoning in its own way with the implications, some better than others.
As the Planetary is now accepted as a “Humanist category,” it is worth emphasizing that the actual planets, including Earth, are rendered as stable objects of knowledge that have been made legible largely through first order insights gleaned from computational perceptual technologies. It becomes a Humanist category both as a motivating idea that puts the assembly of those technologies in motion and later as a (precious) second order abstraction derived from what they show us.
The Planetary is a term with considerable potential philosophical weight but also a lot of gestural emptiness. It is, as suggested, both a cause and effect of the recognition of “the Anthropocene.” But what is that? I say recognition because the Anthropocene was occurring long before it was deduced to be happening. Whether you start at the beginning of agriculture 10k years ago or the industrial revolution a few hundred years ago or the pervasive scattering of radioactive elements more recently, the anthropogenic transformation of the planet was an “accidental terraforming.” It was not the plan.
After years of debate as to whether the term deserves the status of proper geologic epoch, the most recent decision is to identify the Anthropocene as an event, as is the Great Oxidation event or the Chtulam meteor event. This introduces more plasticity into the concept. Events are unsettled, transformative, but not necessarily final. Anthropogenic agency can and likely will orient this event to a more deliberate conclusion. For its part, computation will surely make that orientation possible just as it made legible the situation in which it moves.
Computation is now the primary technology for the artificialization of functional intelligence, symbolic thought and of life itself.
Computation is for us, not only a formal, substrate-agnostic, recursive calculative process; it is also a means of world-ordering. From the earliest marks of symbolic notation, computation was a foundation of what would become complex culture. The signifiers on clay in Sumerian cuneiform are known as a first form of writing; in fact they are indexes of transactions, an inscriptive technique that would become pictograms and over time alphanumeric writing, including base 10 mathematics and formal binary notation. There and then in Mesopotamia, the first writing is “accounting”: a kind of database representing and ordering real world communication. This artifact of computation already prefigures the expressive semiotics, even literary writing, that ensues in centuries to come.
Over recent centuries, and accelerating during the mid-20th century, technologies for the artificialization of computation have become more powerful, more efficient, more microscopic and more globally pervasive, changing the world in their image. “Artificialization” in this context doesn’t mean fake or unnatural, but rather that the intricate complexity of modern computing chips, hardware and software did not evolve blindly; it is the result of deliberate conceptual prefiguration and composition, even if by accident. The evolutionary importance of that general capacity for artificialization will become clearer below.
Planetary Computation reveals and constructs the Planetary as a “humanist category”
Some of the most essential and timeless philosophical questions revolve around the qualities of perception, representation, and time. Together and separately, these have all been radicalized by planetary computational technologies, in no domain more dramatically than in Astronomy.
The Webb deep space telescope scans the depths of the universe, producing data that we make into images showing us, among other wonders, light from a distant star bending all the way around the gravitational cluster of galaxies. From such perceptions we, the little creatures who built this machine, learn a bit more about where, when and how we are. Computation is not only a topic for philosophy to pass judgment; computation is itself a philosophical technology. It reveals conditions that have made human thought possible.
Antikythera is a philosophy of technology program that diverges in vision and purpose from the mainstream of philosophy of technology, particularly from the intransigent tradition growing from the work of Martin Heidegger, whose near mystical suspicion of scientific disenchantment, his denigration of technology as that which distances us from Being and reduces the world to numeric profanity, and most of all his outrage at innovations of perception beyond the comfort of grounded phenomenology, has confused generations of young minds. They have been misled. The question concerning technology is not how it alienates us from the misty mystery of Being but how every Copernican turn we have taken, from heliocentrism to Darwinism to neuroscience to machine intelligence, has only been possible by getting outside our own skin to seeing ourselves as we are and the world as it is. This is closeness to being.
Computation reveals the planetary condition of intelligence to itself
To look up into the night sky with an unaided eye is to gaze into a time machine showing us an incomprehensibly distant past. It is to perceive light emitted before most of us were born and even before modern humans existed at all. It took well into the 18th century for the scientific realization that stripes of geologic sedimentary layers mark not just an orderly spatial arrangement but represent the depths of planetary time. The same principle that space is time is space holds as you look out at the stars, but on a vastly larger scale. To calculate those distances in space and time is only possible once their scales are philosophically and mathematically and then technologically abstractable. Such is the case with Black Holes, first described mathematically and then, in 02018, they were directly perceived by Earth itself having been turned into a giant computer vision machine.
Event Horizon telescope was an array of multiple terrestrial telescopes all aimed on a single point in the night sky. Its respective scans were timed with and by the Earth’s rotation, and thus the planet itself was incorporated into this optical mechanism. Event Horizon connected the views of telescopes on the Earth’s surface into ommatidia of a vast compound eye, a sensory organ that captured 50 million year old light from the center of the M87 galaxy as digital information. This data was then again rendered in the form of a ghostly orange disc that primates, such as ourselves, recognize as an “image.” Upon contemplating it, we can also identify our place within the universe that exceeds our immediate comprehension but not entirely our technologies of representation. With computational technologies such as Event Horizon, it's possible to imagine our planet not only as a lonely blue spot in the void of space but as a singular form that finally opens its eye to perceive its distant environment.
For Antikythera, this is what is meant by “computational technology disclosing and accelerating planetary intelligence.” Feats such as this demonstrate what planetary computation is for.
Research Program
Having hopefully drawn a compelling image of the purpose of Antikythera as a generative theoretical project, I would put that image in motion and describe how the program does its work. As you might expect, it is not in the usual way and it is deliberately tuned for the messy process of concept generation, articulation, prototyping, narrativization, and ultimately, convergence.
The link between Philosophy and Engineering is a more fertile ground than that between the Humanities and Design
Antikythera is a philosophy of technology research program that uses studio-based speculative design methodologies to provoke, conceive, explore, and experiment with new ideas. At the same time, it is often characterized as a speculative design research program that is driven by a focused line of inquiry in the philosophy of technology. Yet neither framing is precisely right.
As I alluded to, within the academic sub-field of philosophy of technology, Antikythera is positioned opposed to the deeply rooted legacy of Heideggerian critique that sees technology as a source of existential estrangement, and so perhaps our approach is the opposite of “philosophy of technology?” Technology of philosophy? Maybe. At the same time, despite the longstanding crucial role of thought experiments in advancing technological exploration, the term “speculative design” has unfortunate connotations of whimsical utopian/dystopian pedantic design gestures. While Antikythera is appreciative of the inspiration the Humanities provides to Design, that must more than simply injecting the latest cultural theoretical trend into the portfolios of art students.
A more precise framing may be a renewed conjunction of philosophy and engineering. “Engineering” is often seen as barren terrain for the high minded abstractions of philosophy, but that’s exactly the problem. Functionalism is not the enemy of creativity but, as a constraint, perhaps its most fertile niche. By this I don’t mean a philosophy of engineering, but a speculative philosophy drawn from a curious, informed and provocative encounter with the technical means by which anthropogenic agency remakes the world in its image and, in turn, the uncertain subjectivities that emerge, or fail to emerge, from that difficult recognition of its dynamics. Obviously, for today those technical means are largely computational and hence our program for the assembly of a new school of thought for the speculative philosophy of technology focuses its attention specifically on computation.
This may not locate Antikythera in the mainstream of either Humanities, Philosophy, or Science and Technology Studies and perhaps rightly so, but perhaps positions the program to accomplish things it otherwise could not. Many sciences began as a subject matter in philosophy: from physics to linguistics, from economics to neuroscience. This is certainly true for Computer Science as it congealed from philosophy of mathematics, logic, astronomy, cybernetics and more. Of all computational technologies, AI in particular emerged through a non-linear genealogy of thought experiments spanning centuries before it became anything like the functional technologies realized today, which, in turn, redirected those though experiments.This is also what is meant by the conjunction of philosophy and engineering.
Furthermore, this also suggests that the ongoing “Science Wars” — which the Humanities absolutely did not win — are all the more unfortunate. The orthodoxy project is to debunk, resist and explain away the ontologically challenging assignments that technoscience puts forth with the comforting languages of social reductionism and cultural determinism. This not only retards the development of the Humanities, a self-banishment to increasingly shrill irrelevance, concealing rather than revealing the extent to which philosophy is where new sciences come from, it also can be the co-creation of those sciences to come.
It need not be so. There are many ways to reinvent the links between speculative and qualitative reason and functional qualitative creativity. Antikythera’s approach is just one.
Conceive, Convene, Communicate
So what is the method by which we attempt to build this school of thought? The approach is multifaceted but comes down to three things: (1) the tireless development of a synthetic and hopefully precise conceptual vocabulary, posed both as definitional statements and as generative framing questions with which to expand and hone our research, (2) the convening of a select group of active minds intrigued by our provocation, eager to collaborate with those from other disciplines and backgrounds, and (3) to invest in the design of the communication of this work such that each bit adds to an increasingly high resolution mosaic that constitutes the Antikythera school of thought. The ideas and implications of those outcomes feed back into the conceptual generative framing of the next cycle. Each go around, and the school of thought gets bigger, leaner and more cutting.
This means a division of labor spread across a growing network. We work with existing institutions in ways that they may not be able to do on their own. The institutional affiliations of our partners include Google/ DeepMind. Our affiliate researchers come from Cambridge, MIT, Santa Fe Institute, Cal Tech, Sci_Arc, Beijing University, Harvard, UC San Diego, Oxford, Central St. Martins, UCLA, Yale, Eindhoven, Penn, New York University- Shanghai, Berkeley, Stanford, University of London, and many more. More important than the brand names on their uniforms is their disciplinary range: computer science and philosophy obviously, architecture, digital media, literature, filmmaking, robotics, mathematics, astrophysics, biology, history, cognitive neuroscience.
At least once a year, Antikythera hosts an interdisciplinary design research studio in which we invite and support 15 younger and mid-career researchers to work with us on new questions and problems and to generate works of speculative engineering, worldbuilding cinematic works, as well as formal papers. We have hosted studios in Los Angeles, Mexico City, London, and Beijing, and in 02025 the studio will be based in Cambridge, Mass. on the campus of MIT. Studios draw applicants from around the world and disciplines from computer scientists to science-fiction authors, mathematicians to game designers, and of course philosophers.
At our Planetary Sapience symposium at MIT Media Lab, we recently announced a collaboration with MIT Press: a book series and a peer-reviewed digital journal that will serve the primary platform for publishing the work of the program, as well as intellectually related work from a range of disciplines. The first “issue” of the digital journal will go live in concert with a launch event at the Venice Architecture Biennale next Spring. The first title in the book series, What is Intelligence? By Blaise Agüera y Arcas will hit shelves in Fall 02025. The digital journal will publish original and classic texts as imagined and designed by some of the top digital designers working today. The journal is a showcase for both cutting edge ideas in the speculative philosophy of computation and cutting edge digital design, together establishing a communications platform most appropriate to the ambitions of the work. Each of the articles in the first upcoming issues is discussed below in context of the Antikythera research track to which it most directly contributes.
Antikythera is made possible by the generosity and far-sighted support of the Berggruen Institute, based in Los Angeles, Beijing and Venice, under the leadership of Nicolas Berggruen, Nathan Gardels, Nils Gilman, and Dawn Nakagawa.
Research Areas
Antikythera’s research is roughly divided into four key tracks, each building off the core theme of Planetary Computation. They each can be defined in relation to one another.
As mentioned, Planetary Computationrefers to both the emergence of artificial computational infrastructures at global scale as well as the revelation and disclosure of planetary systems as topics of empirical scientific interest, and “The Planetary” as a qualitative conceptual category. This is considered through four non-exclusive and non-exhaustive lenses.
Synthetic Intelligence refers to the emergence of artificial machine intelligence in both anthropomorphic and automorphic forms as well as a complex and evolving distributed system. In contrast with many other approaches to AI, we emphasize (1) the importance of productive misalignment and the epistemological and practical necessity to avoid alignment overfitting to dubiously defined human values and (2) the eventual open-world artificial evolution of synthetic hybrids of biological and non-biological intelligences, including infrastructure-scale systems capable of advanced cognition.
Recursive Simulations refers to the process by which computational simulations reflexively or recursively affect the phenomena that they represent. In different forms, predictive processing underpins diverse forms of evolved and artificial intelligence. At a collective scale, this allows complex societies to sense, model and govern their development. In this, as simulations compress time, they become essential epistemological technologies for the understanding of phenomena otherwise imperceptible.
Hemispherical Stacks examines the implications of multipolar computation and multipolar geopolitics, one in terms of the other. It considers the competitive and cooperative dynamics of computational supply-chains and both adversarial and reciprocal relations between States, Platforms, and regional bodies. Multiple scenarios are composed about diverse areas of focus including, chip wars, foundational models, data sovereignty and astropolitics.
Planetary Sapience attempts to locate the artificial evolution of substrate-agnostic forms of computational intelligence within the longer historical arc of the natural evolution of complex intelligence as a planetary phenomenon. The apparent rift between scientific and cultural cosmologies, between what is known scientifically and cultural worldviews, is posited as an existential problem, one that cannot be solved by computation as a medium but by the renewal of a speculative philosophy that addresses life, intelligence and technology as fundamentally integrated processes. More on this below.
Synthetic Intelligence
The eventual implication of artificialization of intelligence is less humans teaching machines how to think than machines demonstrating that thinking is a much wider and weirder spectrum.
Synthetic intelligence refers to the wider field of artificially-composed intelligent systems that do and do not correspond to Humanism’s traditions. These systems, however, can complement and combine with human cognition, intuition, creativity, abstraction and discovery. Inevitably, both are forever altered by such diverse amalgamations.
The history of AI and the history of the philosophy of AI are intertwined, from Leibniz to Turing to Dreyfus to today. Thought experiments drive technologies, which drive a shift in the understanding of what intelligence itself is and might become, and back and forth. This extends well beyond the European philosophical tradition. In our work, important touchstones include those drawn from Deng-era China's invocation of cybernetics as the basis of industrial mass mobilization, and the Eastern European connotations of AI, which include what Stanislaw Lem called an ‘existential’ technology (the topic of Bogna Konior's article in the first issue of the Antikythera journal). Many of these touchpoints contrast with Western individualized and individualistic and anthropomorphic models that dominate contemporary debates on so-called AI ethics and safety.
Historically, AI and the Philosophy of AI have evolved in a tight coupling, informing and delimiting one another. But as the artificial evolution of AI accelerates, the conceptual vocabulary that has helped to bring it about may not be sufficient to articulate what it is and what it might become. Asking if AI can genuinely “think” according to the standards set forth by Kant in Critique of Pure Reason is like asking if this creature discovered in the New World is actually an “animal” as defined by Aristotle. It’s obvious the real question is how the new evidence must update the category, not how the received category can judge reality.
Now as before, not only is AI defined in contrast with the strangely protean image of the human, but the human is defined in contrast with the machine. By habit it is taken almost for granted that we are all that it is not and it is all that we are not. Like two characters sitting across from one another, deciding if the other is a mirror reflection or true opposite, each is supposedly the measure and limit of the other.
People see themselves and their society in the reflection AI provides, and are thrilled and horrified by what it portends. But this reflection is also preventing people from understanding AI, its potential, and its relationship to human and non-human societies. A new framework is needed to understand the possible implications.
What is reflected back is not necessarily human-like. The view is beyond anthropomorphic notions of AI and toward fundamental concern with machine intelligence. What Turing proposed in his famous test as a sufficient condition of intelligence has become instead decades of solipsistic demands and misrecognitions. Idealizing what appears and performs as most “human” in AI — either as praise or as criticism — is to willfully constrain understanding existing forms of machine intelligence as they are.
To ponder seriously the planetary pasts and futures of AI, not only extends but alters our notions of “artificiality” and “intelligence”, and draws from the range of such connotations but will also, inevitably, leave them behind.
The Weirdness Right in Front of Us
This weirdness includes the new unfamiliarity of language itself. If language was, as the structuralist would have it, the house that man lives in, now, as machines spin out coherent ideas at rates just as inhuman as their mathematical answers, the home once provided by language is now quite uncanny.
Large Language Models’ eerily convincing text prediction/ production capabilities have been used to write novels, screenplays, make images and movies, songs, voices, symphonies, and are even used by some biotech researchers to predict gene sequences for drug discovery — here at least, the language of genetics really is a language. LLMs also form the basis of generalist models capable of mixing inputs and outputs from one modality to another (e.g. interpreting what is in an image so as to instruct movement of a robot arm, etc.) Such foundational models may become a new kind of general purpose public utility around which industrial sectors organize: cognitive infrastructures.
The Model is the Message explores the complex philosophical and practical issues posed by large language models as a core component of distributed AI systems. They are the heart of emerging cognitive infrastructures. The essay, written just before the launch of ChatGPT, takes as its starting point the curious event of Google engineer Blake Lemoine being placed on leave after publicly releasing transcripts of conversations with LaMDA, a chatbot based on a Large Language Model (LLM) that he claimed to be conscious, sentient and a person. LaMDA may not be conscious in the ways that Lemoine believes it to be — his inference is clearly based in motivated anthropomorphic projection — but indeed these kinds of artificial intelligence likely are “intelligent” in some specific and important ways. The real lesson for philosophy of AI is that reality has outpaced the available language to parse what is already at hand.
Whither speculative philosophy then? As a co-author and I wrote recently, “reality overstepping the boundaries of comfortable vocabulary is the start, not the end, of the conversation. Instead of a groundhog-day debates about whether machines have souls, or can think like people imagine themselves to think, the ongoing double-helix relationship between AI and the philosophy of AI needs to do less projection of its own maxims and instead construct more nuanced vocabularies of analysis, critique, and speculation based on the Weirdness right in front of us.” And that is really the focus of our work: the weirdness right in front of us and the clumsiness of our languages to engage with it.
Asking if AI can genuinely 'think' according to the standards set forth by Kant in Critique of Pure Reasonis like asking if this creature discovered in the New World is actually an 'animal' as defined by Aristotle. It’s obvious the real question is how the new evidence must update the category, not how the received category can judge reality.
The fire apes figured out how to make the rocks think
To zoom out and try to locate such developments in the longer arc of the evolution of intelligence, what has been recently accomplished is truly mind bending. One way to think about it, going back to our blue marble movie mentioned above, is that we've had many millions of years of animal intelligence, which became homo sapiens’ intelligence. We've had many millions of years of vegetal intelligence. And now we have mineral intelligence. That the fire apes, that's us, have managed to fold little bits of rocks and metal into particularly intricate shapes and run electricity through them, and now the lithosphere is able to perform feats that until very recently, only primates had been able to do. This is big news. The substrate of complex intelligence, that's us, now includes both the biosphere and the lithosphere. And it's not a zero sum situation. The question we are beginning to be able to ask is how do those integrate together in such a way that they become mutually reinforcing rather than mutually antagonistic.
Five Stages of AI Grief Grief-laden vitriol directed at AI fails to help us understand paths to better futures that are neither utopian nor dystopian, but open to radically weird possibilities. Drawing on Kübler-Ross’ “five stages,” this essay outlines the perils and pitfalls of AI Denial, AI Anger, AI Bargaining, AI Depression, and AI Acceptance, pointing the way toward “post-grief.”
Alignment of AI with Human Wants and Needs is a Necessary short-term tactic and an insufficient and even dangerous long-term norm
What does it mean to ask machine intelligence to “align” to human wishes and self-image? Is this a useful tactic for design, or a dubious metaphysics that obfuscates how intelligence as a whole might evolve? How should we rethink this framework in both theory and practice?
After Alignment: Orienting Synthetic Intelligence Beyond Human Reflection A public lecture at Central St. Martins University of the Arts London discussing shifts from AGI to artificial generic intelligence, the importance of recursive simulations, the decentering of personal data, the challenges of AI in science, intelligence as an evolutionary scaffold, the limitations of mainstream AI ethics, and why a planetary model of synthetic intelligence must drive its geopolitical project.
The emergence of machine intelligence must be steered toward planetary sapience in the service of viable long term futures. Instead of strong alignment with human values and superficial anthropocentrism, the steerage of AI means treating these humanisms with nuanced suspicion and recognizing its broader potential. At stake is not only what AI is, but what a society is, and what AI is for. What should align with what?
At stake is not only how AI must evolve to suit the shape of human culture but also how human societies will evolve in relationship with this fundamental technology. AI overhang -unused or unrealized capacity of AI, not yet, if ever, acclimated into sociotechnical norms — affects not only narrow domains, but also arguably, civilizations, and how they understand and register their own organization — past, present, and future. As a macroscopic goal, simple “alignment” of AI to existing human values is inadequate and even dangerous. The history of technology suggests that the positive impacts of AI will not arise through its subordination to or mimicry of human intuition.The telescope did not only magnify what it was possible to see, it changed how we see and how we see ourselves. Productive disalignment—dragging society toward fundamental insights of AI—is just as essential.
Always remember that everything you do from the moment you wake up to the moment you fall asleep, is training data for the futures model of today.
Cognitive Infrastructures: Open World Evolution
Any point of alignment or misalignment between human and machine intelligence, between evolved and artificial intelligence will converge at the crucial interfaces between each. HCI gives way to HAIID (Human-Computer Interaction Design). HAIID is an emerging field, one that contemplates the evolution of Human-Computer Interaction in a world where AI can process complex psychosocial phenomena. Anthropomorphization of AI often leads to weird "folk ontologies" of what AI is and what it wants. Drawing on perspectives from a global span of cultures, mapping the odd and outlier cases of HAIID gives designers a wider view of possible interaction models. But as opposed to single-user relations with chatbot agents we turn our attention to the great outdoors and the evolution of synthetic intelligence in the wild.
Natural intelligence evolved in open worlds in the past, and so the presumption is that we should look for ways in which machine intelligence will evolve in the present and future through open worlds as well. This also means that it's not just that its substrates of intelligence may be quite diverse, they don't necessarily need to be human brain tissue, or silicon, they may be taking many different forms. Another way of putting this is, instead of the model of AI as a kind of brain in a box, we prefer to start with the question of something more like AI in the wild. Something that is interacting with the world, in lots of different strange and unpredictable ways.
Cognitive Infrastructures Studio In June 02024, Antikythera hosted a studio on Cognitive Infrastructures at Central St. Martins in London. Over the course of a month 15 Studio Researchers from Google/ DeepMind, Cambridge, Oxford, and more developed original projects and papers on computational time compression, xenophylum for robotics, endosomatics, brain organoid networks, generative topolinguistics, minimum viable interior mental states, LLM as long-term archive, shifts between cognizer and cognized in inter-species relationships. Traversing the uncanny ridge, and synthetic counteradaptation.
Natural Intelligence also emerges at environmental scale and in the interactions of multiple agents. It is located not only in brains but in active landscapes. Similarly, artificial intelligence is not contained within single artificial minds but extends throughout the networks of planetary computation: it is baked into industrial processes; it generates images and text; it coordinates circulation in cities; it senses, models and acts in the wild.
As artificial intelligence becomes infrastructural, and as societal infrastructures concurrently become more cognitive, the relation between AI theory and practice needs realignment. Across scales—from world-datafication and data visualization to users and UI, and back again—many of the most interesting problems in AI design are still embryonic.
This represents an infrastructuralization of AI, but also a ‘making cognitive’ of both new and legacy infrastructures. These are capable of responding to us, to the world and to each other in ways we recognize as embedded and networked cognition. AI is physicalized, from user interfaces on the surface of handheld devices to deep below the built environment.
Individual users will not only interact with big models, but multiple combinations of models will interact with groups of people in overlapping combinations. Perhaps the most critical and unfamiliar interactions will unfold between different AIs without human interference. Cognitive Infrastructures are forming, framing, and evolving a new ecology of planetary intelligence.
How might this frame human-AI interaction design? What happens when the production and curation of data is for increasingly generalized, multimodal and foundational, models? How might the collective intelligence of generative AI make the world not only queryable, but re-composable in new ways? How will simulations collapse the distances between the virtual and the real? How will human societies align toward the insights and affordances of artificial intelligence, rather than AI bending to human constructs? Ultimately, how will the inclusion of a fuller range of planetary information, beyond traces of individual human users, expand what counts as intelligence?
Recursive Simulations
Simulation, Computation, and Philosophy
Foundations of Western philosophy are based on a deep suspicion of simulations. In Plato’s allegorical cave, the differentiation of the world from its doubles, its form and its shadows, takes priority for the pursuit of knowledge. Today, however, the real comes to comprehend itself through its doubles: the simulation is the path toward knowledge, not away from it.
From Anthropology to Zoology, every discipline produces, models and validates knowledge through simulations. Simulations are technologies to think with, and in this sense they are fundamental epistemological technologies, and yet they are deeply under examined. They are a practice without a theory.
Some computational simulations are designed as immersive virtual environments where experience is artificialized. At the same time, scientific simulations do the opposite of creating deceptive illusions; they are the means by which otherwise inconceivable underlying realities are accessible to thought. From the infinitesimally small in the quantum realm to the inconceivably large in the astro-cosmological realm, computational simulations are not just a tool; they are a technology for knowing what is otherwise unthinkable.
Simulations do more than represent; they are also active and interactive. “Recursive” Simulations refers to simulations that not only depict the world, but act back upon what they simulate, completing a cybernetic cycle of sensing and governing. They not only represent the world, they organize the world in relation to how they summarize and rationalize it. Recursive Simulations include everything from financial models to digital twins, user interfaces to prophetic stories. They cannot help but transform the thing they model, which in turn transforms the model and the modeled in a cyclical loop.
The Politics of Simulation and Reality
We live in an era of highly politicized simulations, for good and ill. The role of climate simulations for planetary governance is only the tip of the proverbial iceberg. Antikythera considers computational simulations as experiential, epistemological, scientific and political forms and develops a framework to understand these in relation to one another.
The politics of simulation, more specifically, is based on recursion. This extends from political simulations to logistical simulations to financial simulations to experiential simulations: the model affects the modeled.
Antikythera’s research in this area draws on different forms of simulation and simulation technologies.These include machine sensing technologies (vision, sound, touch, etc.), synthetic experiences (including VR/AR), strategic scenario modeling (gaming, agent based systems), active simulations of complex architectures (digital twins), and computational simulations of natural systems enabling scientific inquiry & foresight (climate models and cellular/genomic simulations). All of these pose fundamental questions about sensing and sensibility, world-knowing and worldmaking.
They all have different relations to the Real. While scientific simulations pose meaningful correspondence with the natural world and provide access to ground truths that would be otherwise inconceivable, virtual and augmented reality produce embodied experiences of simulated environments that purposefully take leave of ground truth. These two forms of simulation have inverse epistemological implications: one makes an otherwise inaccessible reality perceivable, while the other bends reality to suit what one wants to see. In between is where we live.
Existential Implications of the Simulations of the Future
Recursion can be direct or indirect. It can be a literal sensing/actuation cycle, or the indirect negotiation of interpretation and response. The most nuanced recursions are reflexive. They mobilize action to fulfill or prevent a future that is implied by a simulation. Climate politics exemplifies the reflectivity of recursive simulations: through planetary computation, climate science produces simulations of near term planetary futures, the implications of which may be devastating. In turn, climate politics attempts to build planetary politics and planetary technologies in response to those implications and thereby extraordinary political agency is assigned to computational simulations. The future not only depends on them, it is defined by them.
Scientific simulation, however, not only has deep epistemological value, it also makes possible the most profound existential reckonings. Climate science is born of the era of planetary computation. Without the planetary sensing mechanisms, satellites, surface and air sensors, ice core samples, all aggregated into models and most importantly the supercomputing simulations of climate past present and future, the scientific image of climate change as we know it does not happen. The idea of the Anthropocene, and all that it means for how humans understand their agency, is an indirect accomplishment of computational simulations of planetary systems over time.
In turn, the relay from the idea of the Anthropocene to climate politics is based too on the geopolitics of simulation. The implications of simulations of the year 02050 are dire and so climate politics seeks to mobilize a planetary politics in reflexive response to those predicted implications. That politics is recursive. Deliberate actions are consciously taken now to prevent the future. This is an extraordinary agency to give simulations. It is possible that many climate activists may not feel warmly about the idea, but climate politics is one of the important ways in which massive computational simulations are driving how human societies understand and organize themselves. It’s why the activists are in the streets to begin with.
Pre-Perception: Simulation as Predictive Epistemology
Quite often, though, the simulation comes first. Its predictive ability may imply that there must be something we should look for because the model suggests it has to be there.
Thus the prediction makes the description possible as much as the other way around. Such is the case with Black Holes, which were hypothesized and described mathematically long before they were detected let alone observed. For the design of the Black Hole in the Nolan brothers’ film, Interstellar, scientific simulation software was used to give form to the mysterious entity based on consultation with Kip Thorne at CalTech and others. The math had described the physics of black holes, and the math was used to create a computational model that was used to create a dynamic visual simulation of something no one had ever seen.
Of course, a few years later we did see one. The Black hole at the center of the M87 galaxy was “observed” by the Event Horizon telescope and a team at Harvard that included Shep Doelman, Katie Bowman and Peter Galison. It turns out we, the humans, were right. Black Holes look like what the math says they must look like. The simulation was a way of inferring what must be true — where to look and how to see it. Only then did the terabytes of data from Event Horizon finally discover a picture.
Toy Worlds & Embodied Simulations
Friends from Neuroscience (and Artificial Intelligence) may raise the point that simulation is not only a kind of external technology with which intelligence figures out the world, but simulations are how minds have intelligence at all. The cortical columns of animal brains are constantly predicting what will be next, running through little simulations of the world and the immediate future, resolving them with new inputs and even competing with each other to organize perception and action.
For many computational simulations, their purpose is as a model that reflects reality (such as for climate science or astrophysics). For others the back and forth is not just mirroring. Some simulations not only model the world, but they also feedback upon what they model both directly and indirectly; these recursive simulations not only model an external reality, but directly act back upon that reality in a decisive feedback loop. “Digital twins” express this dynamic. In the recursive relation between simulation and the real, the real is the baseline model for simulations and simulations as a baseline model for the real.
Many AIs, especially those embodied in the world, such as driverless cars, are trained in Toy World simulations, where they can explore more freely, bumping into the walls, until they, like us, learn the best ways to perceive, model, and predict the real world.
Simulation as Model / Model as Simulation
Scientific simulations not only do more than deceive us, they are, arguably, the essential mechanism by which otherwise inconceivable underlying realities are accessible to thought. From the very very small in the quantum realm to the very very large in the astro-cosmological realm, computational simulations are essential not just as a tool, but as a way of thinking with models, a fundament of induction, deduction and abduction.
At the same time, simulations are based on models of reality, and the status of the model has been a preoccupying concern in philosophy of science, even if simulations as such are more presumed that philosophized.Models are a way of coalescing disparate bits of data into a composite structure whose whole gives shape to its parts, suggesting their interactions and general comparisons with other structures. It is a tool to think with. The value is in the descriptive correspondence with reality, but this correspondence is determined by its predictive value. If a scientific simulation can predict a phenomenon, its descriptive quality is implied. A model is also, by definition, a radical reduction in variables, i.e. a map reduces a territory. A geocentric or heliocentric model of the solar system can be constructed with styrofoam balls, and one is definitely “less wrong” than the other, but both are infinitely less complex than what they model.
For A General Theory Of Simulations Lecture From scientific simulation to VR/AR, from anthropology to zoology, simulations are a vital practice without a vital theory. This lecture presented preliminary thoughts on what a general theory of simulations might need to account for: shadows, stagings, scenarios, synthetic experiences, models, demos, immersions, ruses, toy worlds, miniatures, and projections.
This is especially important when what is simulated is as complex as the universe itself. Astrophysics is based almost entirely on rigorous computational simulations of phenomena that produce difficult to observe data, assembled into computationally expensive models, and which ultimately provide for degrees of confident predictability about astronomic realities that situate us all. This is what we call cosmology, the meta-model of all models of reality in which humans and other intelligences conceive of their place in space-time. Today, cosmology in the anthropological sense is achieved through cosmology in the computational sense.
Hemispherical Stacks
The Stack: Planetary Computation as Global System
Planetary computation refers to the interlocking cognitive infrastructures that structure knowledge, geopolitics and ecologies. Its touch extends from the global to the intimate, from the nanoscale to the edge of the atmosphere and back again. It is not a single totality demanding transparency, but a highly uneven, long-emerging blending of biosphere and technosphere.
As you stare at the glass slab in your hand, you are, as a user, connected to a planetary technology that both evolved and was planned in irregular steps over time, each component making use of others: an accidental, discontiguous megastructure. Instead of a single megamachine, planetary computation can be understood as being composed of modular, interdependent, functionally-defined layers, not unlike a network protocol stack. These layers compose The Stack: the Earth layer, Cloud layer, City layer, Address layer, Interface layer, and User layer.
Earthly ecological flows become sites of intensive sensing, quantification and governance. Cloud computing spurs platform economics and creates virtual geographies in its own image. Cities form vast discontiguous networks as they weave their borders into enclaves or escape routes. Virtual addressing systems locate billions of entities and events into unfamiliar maps. Interfaces present vibrant augmentations of reality, standing in for extended cognition. Users, both human and non-human, populate this tangled apparatus. Every time you click on an icon, you send a relay all the way down the paths of connection and back again, you activate (and are activated by) the entire planetary infrastructure hundreds of times a day.
The Emergence of Multipolar Geopolitics through Multipolar Computation
The emergence of planetary computation in the late 20th century shifted not only the lines on the map but the maps themselves. It distorted and reformed Westphalian political geography and created new territories in its own image. Large cloud platforms took on roles traditionally assumed by nation-states (identity, maps, commerce, etc.) now based on a transnational network geography, while nation-states increasingly evolved into large cloud platforms (state services, surveillance, smart cities, etc.). The division of the Earth into jurisdictions defined by land and sea has given way during the last few decades to a more irregular, unstable and contradictory amalgam of overlapping sovereign claims to data, people, processes, and places defined instead by bandwidth, simulation, and hardware and software chokepoints.
Over the past decade, these stacks have been decisively fragmenting into multipolar hemispherical stacks defined by geopolitical competition and confrontation. A North Atlantic-Pacific stack based on American platforms was delinking from a Chinese stack based on Chinese platforms, while India, The Gulf, Russia, and Europe charted courses based on citizenship identification, protection and information filtering.
From Chip Wars to EU AI Decrees, this marks a shift toward a more multipolar architecture, hemispheres of influence, and the multipolarization of planetary scale computation into Hemispherical Stacks. These segment and divide the planet into sovereign computational systems extending from energy and mineral sourcing, intercontinental transmission, and cloud platforms to addressing systems, interface cultures and different politics of the “user.”
A New Map
This is both exciting and dangerous. It implies both Galapagos effects of regional cultural diversity but also artificially encapsulated information cultures. For geotechnology just as for geopolitics, “digital sovereignty” is an idea beloved both by democracies and authoritarians.
The ascendance of high end chip manufacturing to the pinnacle of strategic plans — in the US and in the China Strait — is exemplary, and corresponds with the removal of Chinese equipment from Western networks, the removal of Western platforms from Chinese mobile phones, and so on. Economies are defined by interoperability and delinking. But the situation extends further up the stack. The militarization of financial networks in the form of sanctions, the data driven weaponization of populism, and the reformulation of “citizen” as a “private user with personal data” all testify to deeper shifts.In some ways these parallel historical shifts in how new technologies alter societal institutions in their image, and yet the near term and long term future of planetary computation as a political technology is uncertain. Antikythera seeks to model these futures pre-emptively, drawing maps of otherwise uncharted waters.
Hemispherical Stacks describes how the shift toward a more multipolar geopolitics over the last five years and the shift toward a more multipolar planetary computation, not only track one another, in many respects, they are the same thing.
The AI Stack
It is likely that the last half century during which “The Stack” evolved and was composed was really just a warmup for what is about to come: from computation as planetary infrastructure to computation as planetary cognitive infrastructure, from a largely State-funded “accidental megastructure” to multiple privately-funded overlapping, strategically composed, discontiguous systems, from the gathering, storage and sorting of information flows to the training of large and small models and serving generative applications on a niche-by-niche scale, from Von Neumann architectures and procedural programming to neuromorphic systems and the collapse of the user vs. programmer distinction, and from sending light and inexpensive information to information on heavy hardware to heavy information loads accessed by light hardware. Despite how unprepared mainstream international relations may be for this evolution, this is not science fiction; this is last week.
Computation is, in the abstract, a mathematical process, but it is also one that uses physical forces and matter to perform real calculations. Math may not be tangible, but computation as we know it very much is, since electricity moves in tiny pathways on a base made of silicon. It is also worth remembering that the tiny etchings in the smooth surface of a chip, with spaces between measured in nanometers, are put there by a lithographic process. The intricate pathways through which a charge moves into order to compute are, in a way, a very special kind of image.
The machines that make the machines are the precarious perch on which less than a dozen companies hold together the self-replication of planetary computation. The next decade is dedicated to the replication of this replication supply chain itself: the race to build better stacks. If society runs on computation, the ability to make computational hardware is the ability to make a society. This means that the ability to design and manufacture cutting-edge chips, shrinking every year toward perhaps absolute physical limits of manipulable scale, is now a matter of national security.
Chips are emblematic of all the ways that computational supply chains have shifted and consolidated the axes of power around which economies rotate. One Antikythera project, Cloud Megaregionalization, observes that a new kind of regional planning has emerged — from Arizona to Malaysia to Bangalore — that concentrates Cloud manufacturing in strategic locations that balance many factors: access to international trade, energy and water sourcing, access to educated labor, physical security. These are the new criteria for how and where to build the cloud. Ultimately, the chip race is not just a race to build chips but to build the urban regions that build the chips.
Astropolitics: Extraplanetary Sensing and Computation
Another closely related race is the re-emergence of outer “space” as a contested zone of exploration and intrigue, from satellites to the moon and mars and back again. It is being driven by advances in planetary computation and which in turn drives those advances, spreading them beyond terrestrial grounding.
Planetary computation becomes Extraplanetary computation and back again. If geopolitics is now driven by the governance of model simulations then the seat of power is the view from anywhere. That is, if geopolitics is defined by the organization of terrestrial States, Astropolitics is and will be defined by the organization of Earth’s atmosphere, its orbital layers, and what lies just beyond. The high ground is now beyond the Karman Line, the territory dotted with satellites looking inward and outward.
In the 01960s much was made of how basic research for the space race benefitted everyday technologies, but today this economy of influence is reversed. Many of the technologies that are making the new space race possible — machine vision and sensing, chip miniaturization, information relays and other standards — were first developed for consumer products. As planetary computation matured, the space race turned inward toward miniaturization, but today the benefits of these move outward again.
The domain of space law, once obscure, will in the coming decades come to define international law, as it is the primary body of law that takes as its jurisdiction an entire astronomic body, of which Earth and all those things in its orbit are also exemplary.
What do we learn from this? How is this an existential technology? There is no Planetarity without the Extraplanetarity: to truly grasp the figure of our astronomic perch is a Copernican trauma by which the ground is not “grounding” but a gravitational plane, and the night sky is not the heavens but a time machine showing us light from years before the evolution of humans. For this, the archaic distinction between ‘down here’ and ‘up there’ also fractures.
The Technical Milieu of Governance
The apparent technologically determined qualities of these tectonic shifts may undermine some mainstream political theory’s epistemological habits: social reductionism and cultural determinism. While the fault lines by which hemispheres split traces the boundaries of much older socioeconomic geographic blocs, each bloc is increasingly built upon a similar platform structure, one that puts them in direct competition for the ability to build the more advanced computational stack and thereby to build the more advanced economy, and through this compose society.
The “Political” and “governance” are not the same thing. Both always exist in some technical milieu that both causes and limits their development. If the Political refers to how the symbols of power are agonistically contested, then governance (inclusive of the more cybernetic sense of the term) refers to how any complex system (including a society) is able to sense, model and recursively act back upon itself. Some of the confusion for forms of political analysis born of a pre-computational social substrate seems to be the closely held axiom that planetary computation is something to be governed by political institutions from an external and supervisory position. The reality is that planetary computation is governance in its most direct imminent sense; it is what senses, models and governs and what, at present, is reforming geopolitical regimes in its image.
Beyond Cultural Determinism
Not surprisingly, cultural determinism enjoys an even deeper commitment in the Humanities, and there, even when planetary computation is recognized as fundamental, the sovereignty of cultural difference is defended less as cause of computation’s global emergence (as it may be for political science) but also as the remedy for its emergence. “Cosmotechnics” and other gestures toward pluralism as both means and end confront the global isomorphic qualities of planetary computation as the expression not of artificial convergent evolution but as the expressive domination of a particular culture. To contest that domination is thus to contest the expression. For the most extreme forms, pluralism is framed as a clash of reified civilizations, each possessing essential qualities and ‘ways of being technological’ — one Western, one Chinese, etc. Beyond the gross anthropological reduction, this approach evades the real project for the humanities: not how culture can assert itself against global technology but how planetary computation is the material basis not only of new “societies” and “economies” but of different relations between human populations bound to planetary conditions.
As I put it in the original Hemispherical Stacks essay, “Despite the integrity of mutual integration, planetarity cannot be imagined in opposition to plurality, especially as the latter term is now over-associated with the local, the vernacular, and with unique experiences of historical past(s). That is, while we may look back on separate pasts that may also set our relations, we will inhabit conjoined futures. That binding includes a universal history, but not one formulated by the local idioms of Europe, or China, or America, or Russia, nor by a viewpoint collage of reified traditions and perspectives, but by the difficult coordination of a common planetary interior. It is not that planetary-scale computation brought the disappearance of the outside; it helped reveal that there never was an outside to begin with.”
Planetary Sapience
What is the relationship between the planetary and intelligence? What must it be now and in the future?
These questions are equally philosophical and technological. The relationship is one of disclosure: over millions of years, intelligence has emerged from a planetary condition which, quite recently, has been disclosed to that intelligence through technological perception. The relationship is also one of composition: for the present and the future, how can complex intelligence—both evolved and artificialized— conceive a more viable long term coupling of biosphere and technosphere?
Planetary Intelligence
Over billions of years, Earth has folded its matter to produce biological and non-biological creatures capable of not only craft and cunning but also feats of artistic and scientific abstraction. Machines now behave and communicate intelligently in ways once reserved for precocious primates. Intelligence is planetary in both origin and scope. It emerges from the evolution of complex life, a stable biosphere, and intricate perceptual-cognitive organs. Both contingent and convergent, intelligence has taken many forms, passing though forking stages of embodiment, informational complexity and eventually even (partial) self-awareness.
Planetary Computation and Sapience
Planetary-scale computation has allowed for the sensing and modeling of climate change and thus informed the conception of an Anthropocene and all of its existential reckonings. Among the many lessons for philosophy of technology is that, in profound ways, agency preceded subjectivity. Humans (and the species and systems that they cultivated and were cultivated by) terraformed the planet in the image of their industry for centuries before really comprehending the scale of these effects. Planetary systems both large and small, and inclusive of human societies and technologies, have evolved the capacity to self-monitor, self-model, and hopefully deliberately self-correct. Through these artificial organs for monitoring its own dynamic processes, the planetary structure of intelligence scales and complexifies. Sentience becomes sapience: sensory awareness becomes judgment and governance.
Planetary Sapience Planetary-scale computation — an emergent intelligence that is both machine and human — gave us the perspective to see Earth as an interconnected whole. With it, we must now conceive an intentional and worthwhile planetary-scale terraforming.
The provocation of Planetary Sapience is not based in an anthropomorphic vision of Earth constituted by a single ‘noosphere.’ Modes of intelligence are identified in multiple scales and types, some ancient and some very new. These include mapping the extension and externalization of sensory perception; redefining computer science as an epistemological discipline based not only on algorithmic operations but on computational planetary systems; comparing stochastic predictive processing in both neural networks and artificial intelligence; embracing deep time of the planetary past and future as foundation for a less anthropomorphic view of history; modeling life both by the transduction of energy and/or the transmission of information, exploring substrate dependence or independence of general intelligence; embracing astronautics and space exploration as a techno-philosophical pursuits that define the limit of humanity’s tethering to Earth and extend beyond it; exploring how astroimaging — such as Earth seen from space and distant cosmic events seen by Earth — has contributed to the planetary as a model orientation; theorizing simulations as epistemological technologies that allow for prediction, speculation and ultimately a synthetic phenomenology; measuring evolutionary time in the complexity of the material objects that surround us and constitute us; recomposing definitions of “life,” of “intelligence” and of “technology” in light of what is revealed by the increasing artificialization and recombination of each. These modes of intelligence together lead us to construct a technological philosophy that might synthesize these into one path toward greater planetary sapience: a capacity for complex intelligence to cohere its own future.
The Evolution of Artificialization, Intelligence and the Artificialized Intelligence
Evolution of Autopoiesis
To properly ask the questions posed requires us to locate the emergence of artificial computational intelligence within the longer arc of the evolution of intelligence as such and its relationship to artificialization as such. The two have, I argue, always been intertwined, as they are now and as they will be for both foreseeable and unforeseeable futures.
Our thinking on this is also influenced by Sara Walker and Lee Cronin’s provocative Assembly Theory. which posits that evolutionary natural selection begins not with Biology but (at least) with Chemistry. The space of possible molecules is radically constrained and filtered through selection toward those which are most stable and most conducive to becoming scaffolding components for more complex forms. Those forms which are able to internalize and process energy, information and matter with regular efficiency can autopoietically replicate themselves become what we can call “life” (also by Auguera y Arcas’ computational definition.) The process is best defined not by how a single organism or entity internalizes the environment to replicate itself (as a cell does) but by how a given population distributed within an environment evolves through the cooperative capacity to increase energy, matter and information capture for collective replication. I invite the reader to look around and consider all the things upon which they depend to survive the day.
For autopoietic forms to succeed in folding more of the environment into themselves to replicate, evolution, arguably, selects for their capacity to transform their niche in ways that make it more suitable for this process. For example, by reducing the amount of energy expenditure necessary for energy capture, a given population is able to accelerate and expand its size, complexity and robustness. More of the world is transformed into that population because it is capable of allopoiesis, the process of transforming the world into systems that are external to the agent itself. That is, evolution seems to select for forms of life capable of artificialization. Perhaps those species most capable of artificialization are the greatest beneficiary of this tendency.
Complexity begets complexity. Simple allopoiesis and environmental artificialization may be an all but autonomic process, but greater cooperation between agents allows for more complex, efficient and impactful forms of artificialization. Here selection pressure enables the evolution of more nuanced forms of individual and collective intelligence as well as more powerful forms of technology, which we might define “technology” very generally as durable scaffolding apparatus that is leveraged by intelligent agents to transform and internalize matter, energy or information at scales and with regularity and precision otherwise impossible. In this regard, “technology” occupies a parallel symbiotic evolutionary track, one that determines and is determined by the ongoing evolution of intelligent life. What emerges are technologically-enabled conceptual abstractions about how the world is and, perhaps more importantly, counterfactual models about how it might be otherwise. For a form of autopoietic life (including humans) to get really good at intelligence, it means to instantiate counterfactual models and communicate them. It will require something like formally coded symbolic language, which eventually evolved in aggregation of all the preceding biosocial and sociotechnical scaffolds.
Simple evolutionary processes are what enable autopoetic forms to emerge, which become scaffolds for yet more complex forms, which become scaffolds for yet more complex forms capable of allopoetic accomplishments, which become scaffolds for complex intelligence and technologies, which in turn become scaffolds for durable, cultural and scientific abstractions as mediated by symbolic language and inscription. The accumulation and transgenerational transmission of conceptual and technical abstractions through linguistic notation, in turn amplifies not only the aggregate intelligence of the allopoeitically-sophisticated population but also their real capacity for transforming their world for autopoietic replication. Language began a great acceleration, and another threshold was passed with symbolic forms, another with coded notation, another with mechanical capture of condensed energy, and another with the artificialization of computation.
The very earliest forms of artificialization are driven by primordial forms of intelligence and vice versa. Each evolves in relation to other to such a degree that from certain perspectives they could be seen as the same planetary phenomena: autopoietic matter capable of allopoiesis (and technology) because it is intelligent enough and capable of devoting energy to intelligence because it is allopoieitic. Regardless, intelligence is at least a driving cause for the technological and planetary complexification of artificialization as an effective process. The question that demands to be asked is then what happens when intelligence, the driving force of artificialization for millions of years, is itself artificialized? What is foreseeable through the artificialization of artificialization itself?
It is perhaps not altogether surprising that language would be (for now) the primary scaffold upon which artificialized intelligence is built. It is also assured that the artificialization of language will recursively transform the scaffold of language upon which it depends as much as the emergence of coded language affected social intelligence, the scaffold upon which it depends, and intelligence would affect the allopoietic artificialisation, the scaffold upon which it depends and so on. Ultimately, the long arc of bidirectional recursion may suggest that the emergence of increasingly complex artificialized intelligence will affect the direction of life itself, how it replicates and how it directs and is directed by evolutionary selection.
The most pressing question now is: “for what is AI a scaffold? What comes next?”
There is no reason to believe that this is the last scaffold, that history has ended, that evolution has reached anything but another phase in an ongoing transition from which each of our lives takes momentary shape. Because this isn’t ending, AI is not the last thing, just as intelligence was a scaffold for symbolic language, which was scaffold for AI. AI is a scaffold for something else, which is a scaffold for something else, which is a scaffold for something else, and so on, and so on. What we're building is a scaffold for something unforeseeable. What we today call AI replicates both autopoietically and allopoietically, it is “life” if not also alive. It would be an enormous conceptual and practical mistake to believe that it is merely a “tool” which implies that it is separate from and subordinate to human tactical gestures, that it is an inert lump of equipment to be deployed in the service of clear compositional intention, and that it has no meaningful agency beyond that which it is asked to have on our provisional behalf.
It is, rather, like all of the various things that make humans human and make life life, a complex form that emerges from the scaffolds of planetary processes that precede it, and it is a scaffold for another something yet to come, and on and on.
None of this implies the disappearance of absorption of humanity into a diminished horizon anymore than the evolution of language leads to the disappearance of autopoieisis. Scaffolds not only live on, but when successful they tend to be amplified and multiplied by what they subsequently enable.
Part of the ethics of philosophy is that it's never, ever done, that the best thing you can hope to build is something that later on becomes part of something else. You build something that others can build with later on. Machine intelligence is evolving and evolving through processes that are roughly like evolution. It will reveal something and then become something else.
Life / Intelligence / Technology
At the same moment that we discover that they have always been more deeply interconnected than we realized, we learn to artificially combine them in new ways. The technical reality drives the paradigmatic shift which drives the technological shift. Hybrids of living and non-living materials produce chimeras and cyborgs, from the scale of individual cells to entire cities. Minerals are folded and etched with intricate care, pulsing with electrical current, performing feats of intelligence previously exclusive to primates.
Concurrent with the physical convergence of life, intelligence and technology is a paradigmatic convergence of their definitions. Each is being redefined in relation to the other, and the definitions look increasingly similar. Life is a technology. Technology evolves. Intelligence uses life to make technology and technology to make life which makes intelligence in turn. Life is a factory for making life and technology is a factory for making technology. As both amplify intelligence, they may actually be the same factory.
What philosophy going back to Aristotle has seen as fundamentally different categories may be different faces of the same wonder. If so, then a more general science may be being born, and we are its midwives. Cybernetics foreshadowed this broader systems science that integrated humans, animals and artificial species, and precedented further advances in foundational theories of learning and intelligence. What comes now will surely be yet more momentous. The capacity to realize its possibilities is orders of magnitude more powerful. The atom splits again.
Computation is making this possible. In the past half century computation has become not only the primary general purpose technology at planetary scale, it is also the means by which life and intelligence are both studied and engineered. It is how we understand how brains work and how we build artificial brains, how we understand how life works and build artificial life, how we understand how technology works. Because technology evolves, computation is how we make better computation.
Computational technologies necessitate a living philosophy of computation that allows for a science that studies what computational technologies reveal about how the world works and the implications of how they transform the world in their image. This is the planetary in planetary computation: a cosmology in every sense of the word.
Planetary Reason, Philosophy & Crisis
“The decisive paradox of planetary sapience is the dual recognition that, first, its existence is cosmically rare and extremely fragile, vulnerable to numerous threats of extinction in the near and long term, and second, that the ecological consequences of its own historical emergence have been a chief driver of the conditions that establish this very same precariousness.”
The global effects of complex intelligence have put the future of complex intelligence in peril. Planetary technologies, such as nuclear fusion, are always and forever a means of both creation and destruction. That which might enable the further growth and complexification of intelligence is simultaneously that which may drive its auto-extinction. The backdrop of this permanent dilemma is the universality of planetary time and cycles in which struggles against entropy are fought. For how long? What are the preconditions for a viable coupling of biosphere and technosphere? Is complex intelligence adaptive (and one of those such preconditions because it can remake the planet in its image) or is it actually maladaptive, precisely because, like anthropogenic climate change,it remakes the planet in its image? More importantly, what would make it adaptive? How might planetary intelligence steer itself toward its own survival?
All philosophy and all engineering are intrinsically planetary, not only because of their ambition but also because of their origins and consequences. Engineering must be guided by this perspective, just as philosophy must be renewed by a direct collaboration with the planetary technologies that not only extend the reach of intelligence, but which reveal and demystify intelligence as it looks in mirrors of its own making.
"Math is hard, especially timely math," explains
The Beast in Black.
Cinephile
Jono
enjoys contemporary dramas far more than sci-fi.
"Letterboxd tells me I've logged this movie 3 times, I'm not sure I'll be watching it in 2566."
I hope you are, Jono.
Pieter
may have to turn in his official pedants card. Pieter is concerned about the apparent contradiction between the two salaries on offer here, declaring:
"The title says She can earn up to 8.000 euro/month. But by the time you get to the actual article link, that amount deflated to 6.000 euro/month. I didn't know it was that bad in the euro zone! Or is it good, since we've got inflation under control? I don't know which way is up anymore..." Pieter, 6,000 is "up to" 8,000. Technically.
Cross-country runner
Andrei
looks to be taking the long way around for this trip. Says he,
"According to Google Maps, one of my transfers involves teleporting about 70 km away to catch my next train."
Finally
David B.
reports
"My wife works as a quality auditor for Initech [ed: the
William Lumberg company, presumably, not the Flavor and
Fragrance company]. When setting up contact information
for a company she will be auditing in the near future
they needed some basic information. So much for gender
equality." They're just getting ahead of the next batch
of executive orders.
[Advertisement] Plan Your .NET 9 Migration with Confidence Your journey to .NET 9 is more than just one decision.Avoid migration migraines with the advice in this free guide. Download Free Guide Now!
Author: David Barber Shantytowns sprang up around every Jirt hiveship, infamous marketplaces of greed and filth. This must be the opinion of the Jirt, because without warning they sometimes reduced them to sterile white ash. Surely no one would risk living in the volcano’s shadow, but time passed and always the humans came creeping back. […]
As most here know, I’m totally blind. One of my roles involves a
contract for the US Government, under which I have a government email
account. The department recently received a message talking about our
work to end, to the maximum extend permitted by law, all diversity,
equity, inclusion, and accessibility efforts in the government in
accordance with the recently signed executive order. We are all reminded
that if we timely identify the contracts and positions that are related
to these efforts, there will be no consequences.
There are a lot of times in my life when I have felt
marginalized—frustrated and angry that people weren’t interested in
working with me to make the small changes that would help me fit in. As
an example with this government job, I asked to have access to a screen
reader so that I could use my computer. My preferred adaptive software
was not approved, even though it was thousands of dollars cheaper than
the option the government wanted and could have been installed instantly
rather than waiting for a multi-week ordering process. When the screen
reader eventually became available, the government-provided installer
was not accessible: a blind person could not use it. When I asked for
help, the government added an additional multi-week delay because they
weren’t sure that the license management technology for the software
they had chosen met the government’s security and privacy policies.
Which is to say that even with people actively working toward
accessibility, sharing a commitment that accessibility is important, we
have a lot of work to do. I feel very targeted at the current time. Now
we are removing as many of the resources that help me be effective and
feel welcome as we can. Talking about the lack of consequences now is
just a way to remind everyone that there will be consequences later and
get the fear going. The witch hunt is coming, and if people do a good
enough job of turning in all the people who could help me feel welcome,
they won’t face consequences. Yes, I understand that the Americans with
Disabilities act is still law, but its effectiveness will be very
different in a climate where you need to eliminate accessibility
positions to avoid consequences than in a climate where accessibility is
a goal.
The diffoscope maintainers are pleased to announce the release of diffoscope
version 286. This version includes the following changes:
[ Chris Lamb ]
* Bug fixes:
- When passing files on the command line, don't call specialize(..) before
we've checked that the files are identical. In the worst case, this was
resulting in spinning up binwalk and extracting two entire filesystem
images merely to confirm that they were indeed filesystem images..
before simply concluding that they were identical anyway.
- Do not exit with a traceback if paths are inaccessible, either directly,
via symbolic links or within a directory. (Closes: #1065498)
- Correctly identify changes to only the line-endings of files; don't mark
them as "Ordering differences only".
- Use the "surrogateescape" mechanism of str.{decode,encode} to avoid a
UnicodeDecodeError and crash when decoding zipinfo output that is not
valid UTF-8. (Closes: #1093484)
* Testsuite changes:
- Don't mangle newlines when opening test fixtures; we want them untouched.
- Move to assert_diff in test_text.py.
* Misc:
- Remove unnecessary return value from check_for_ordering_differences in
the Difference class.
- Drop an unused function in iso9600.py
- Inline a call/check of Config().force_details; no need for an additional
variable.
The fourteenth release of the qlcal package
arrivied at CRAN today,
following the QuantLib 1.37
release two days ago.
qlcal
delivers the calendaring parts of QuantLib. It is provided (for the R
package) as a set of included files, so the package is self-contained
and does not depend on an external QuantLib library (which can be
demanding to build). qlcal covers
over sixty country / market calendars and can compute holiday lists, its
complement (i.e. business day lists) and much more. Examples
are in the README at the repository, the package page,
and course at the CRAN package
page.
This releases synchronizes qlcal with
the QuantLib release 1.37 (made
this week), and moves a demo/ file to
examples/.
Changes in version 0.0.14
(2025-01-23)
Synchronized with QuantLib 1.37 released two days ago
Calendar updates for United States and New Zealand
The demo/ file is now in
inst/examples/
This update includes the inclusion of the January 9, 2025, holiday
for the memorial of President Carter that was observed at the NYSE and
shown by the allUScalendars.R example:
edd@rob:~/git/qlcal-r/inst/examples(master)$ Rscript allUScalendars.R LiborImpact NYSE GovernmentBond NERC FederalReserve SOFR2025-01-01 TRUE TRUE TRUE TRUE TRUE TRUE2025-01-09 NA TRUE NA NA NA NA2025-01-20 TRUE TRUE TRUE NA TRUE TRUE2025-02-17 TRUE TRUE TRUE NA TRUE TRUE2025-04-18 NA TRUE TRUE NA NA TRUE2025-05-26 TRUE TRUE TRUE TRUE TRUE TRUE2025-06-19 TRUE TRUE TRUE NA TRUE TRUE2025-07-04 TRUE TRUE TRUE TRUE TRUE TRUE2025-09-01 TRUE TRUE TRUE TRUE TRUE TRUE2025-10-13 TRUE NA TRUE NA TRUE TRUE2025-11-11 TRUE NA TRUE NA TRUE TRUE2025-11-27 TRUE TRUE TRUE TRUE TRUE TRUE2025-12-25 TRUE TRUE TRUE TRUE TRUE TRUEedd@rob:~/git/qlcal-r/inst/examples(master)$
Courtesy of my CRANberries, there
is a diffstat report for this
release. See the project page
and package documentation for more details, and more examples. If you
like this or other open-source work I do, you can sponsor me at
GitHub.
Last month, Henry Farrell and I convened the Third Interdisciplinary Workshop on Reimagining Democracy (IWORD 2024) at Johns Hopkins University’s Bloomberg Center in Washington DC. This is a small, invitational workshop on the future of democracy. As with the previoustwo workshops, the goal was to bring together a diverse set of political scientists, law professors, philosophers, AI researchers and other industry practitioners, political activists, and creative types (including science fiction writers) to discuss how democracy might be reimagined in the current century.
The goal of the workshop is to think very broadly. Modern democracy was invented in the mid-eighteenth century, using mid-eighteenth-century technology. If democracy were to be invented today, it would look very different. Elections would look different. The balance between representation and direct democracy would look different. Adjudication and enforcement would look different. Everything would look different, because our conceptions of fairness, justice, equality, and rights are different, and we have much more powerful technology to bring to bear on the problems. Also, we could start from scratch without having to worry about evolving our current democracy into this imagined future system.
We can’t do that, of course, but it’s still still valuable to speculate. Of course we need to figure out how to reform our current systems, but we shouldn’t limit our thinking to incremental steps. We also need to think about discontinuous changes as well. I wrote about the philosophy more in this essay about IWORD 2022.
IWORD 2024 was easily the most intellectually stimulating two days of my year. It’s also intellectually exhausting; the speed and intensity of ideas is almost too much. I wrote the format in my blog post on IWORD 2023.
Summaries of all the IWORD 2024 talks are in the first set of comments below. And here are links to the previous IWORDs:
For a long time I’ve been using the Terminator terminal emulator on Linux machines, but
last week I read a LWN article about a new emulator called
Ghostty that looked interesting and I decided to give it a try.
The author sells it as a fast, feature-rich and cross-platform terminal emulator that follows the zero configuration
philosophy.
Installation and configuration
I installed the debian package for Ubuntu 24.04 from the ghostty-ubuntu
project and started playing with it.
The first thing I noticed is that the zero configuration part is true; I was able to use the terminal without a
configuration file, although I created one to change the theme and the font size, but other than that it worked OK for
me; my $HOME/.config/ghostty/config file is as simple as:
After playing a little bit with the terminal I was turned off by the fact that there was no option to start it
maximized, but is seemed to me that someone should have asked for the feature, or, if not, I could ask for it.
I did a quick search on the project and I found out that there was a merged
PR that added the option, so I downloaded the source code, installed
Zig and
built the program on my machine.
As the change is going to be included on the next version on the package I replaced the binary with my version and
started playing with the terminal.
Accessing remote machines
The first thing I noticed was that when logging into remote machines using ssh the terminal variable was not known,
but on the help section of the project documentation there was an entry about how to fix
it copying the terminfo configuration to remote machines, it is as simple
as running the following:
infocmp -x | ssh YOUR-SERVER -- tic -x -
Dead keys on Ubuntu
With that sorted out everything looked good to me until I tried to add an accented character when editing a file and the
terminal stopped working.
Again, I looked at the project issues and found one that matched
what was happening to me, and it remembered me about one of the best things about actively maintained open source
software.
It turns out that the issue is related to a bug on ibus, but other terminals
were working right, so the ghostty developer was already working on a fix on the way the terminal handles the keyboard
input on GTK, so I subscribed to the issue and stopped using ghostty until there was something new to try again (I use
an Spanish keyboard map and I can’t use a terminal that does not support dead keys).
Yesterday I saw some messages about things being almost fixed, so I pulled the latest changes on my cloned repository,
compiled it and writing accented characters works now, there is a small issue with the cursor (the dead key pressed
is left on the block cursor unless you change the window focus), but that is something manageable for me.
Conclusion
I think that ghostty is a good terminal emulator and I’m going to keep using it on my laptop unless I find something
annoying that I can’t work with (i hope that the cursor issue will be fixed soon and I can live with it as the only
thing I need to do to recover from it is changing the widow focus, and that can be done really quickly using keyboard
shortcuts).
As it is actively maintained and the developer seems to be quite active I don’t expect problems and is nice to play
with new things from time to time.
A Representative Line is a short snippet that makes you think, "wow, I'd hate to see the rest of the code." A CodeSOD is a longer snippet, which also frequently makes you think, "wow, I'd hate to see the rest of the code," but also is bad in ways that require you to look at the relationship between the lines in the code.
I bring that up, because today's code sample is a long section, but really, it's just a collection of representative lines. Each line in this just makes me die a little on the inside.
This collection of one-line "helper" functions has it all. Cryptic function names. PRR mutates global variables. PZZ just… returns an empty string. I don't know what PHSC does, and I don't want to, but it's called inside of pre_r and PVSE. Which I also don't know what they do. Speaking of PVSE, I note the regex is using backreferences, which is some advanced regex but I still have no idea what it's doing. I could figure it out, but I don't want to. PUE looks like it might be handling some misencoded characters, maybe. SDV is maybe kinda a coalesce function.
Each line is its own representative line. None of this is code I'd want to maintain.
On the flip side, the abbreviated function names, when read in reverse order, are exactly the sounds I made when I read this code: "SDVASDVPUEPRRPZZPVSEPVSPSS".
It's okay, the doctors say I'll make a full recovery.
[Advertisement] Plan Your .NET 9 Migration with Confidence Your journey to .NET 9 is more than just one decision.Avoid migration migraines with the advice in this free guide. Download Free Guide Now!
Author: Mark Renney Tanner was a loner. Even prior to the System, during his childhood and throughout adolescence, he hadn’t managed to form any long-term relationships. He had kept his head down, listened intently, and worked hard and he had been an above average student and yet none of his teachers had seemed impressed nor […]
The payment card giant MasterCard just fixed a glaring error in its domain name server settings that could have allowed anyone to intercept or divert Internet traffic for the company by registering an unused domain name. The misconfiguration persisted for nearly five years until a security researcher spent $300 to register the domain and prevent it from being grabbed by cybercriminals.
A DNS lookup on the domain az.mastercard.com on Jan. 14, 2025 shows the mistyped domain name a22-65.akam.ne.
From June 30, 2020 until January 14, 2025, one of the core Internet servers that MasterCard uses to direct traffic for portions of the mastercard.com network was misnamed. MasterCard.com relies on five shared Domain Name System (DNS) servers at the Internet infrastructure provider Akamai [DNS acts as a kind of Internet phone book, by translating website names to numeric Internet addresses that are easier for computers to manage].
All of the Akamai DNS server names that MasterCard uses are supposed to end in “akam.net” but one of them was misconfigured to rely on the domain “akam.ne.”
This tiny but potentially critical typo was discovered recently by Philippe Caturegli, founder of the security consultancy Seralys. Caturegli said he guessed that nobody had yet registered the domain akam.ne, which is under the purview of the top-level domain authority for the West Africa nation of Niger.
Caturegli said it took $300 and nearly three months of waiting to secure the domain with the registry in Niger. After enabling a DNS server on akam.ne, he noticed hundreds of thousands of DNS requests hitting his server each day from locations around the globe. Apparently, MasterCard wasn’t the only organization that had fat-fingered a DNS entry to include “akam.ne,” but they were by far the largest.
Had he enabled an email server on his new domain akam.ne, Caturegli likely would have received wayward emails directed toward mastercard.com or other affected domains. If he’d abused his access, he probably could have obtained website encryption certificates (SSL/TLS certs) that were authorized to accept and relay web traffic for affected websites. He may even have been able to passively receive Microsoft Windows authentication credentials from employee computers at affected companies.
But the researcher said he didn’t attempt to do any of that. Instead, he alerted MasterCard that the domain was theirs if they wanted it, copying this author on his notifications. A few hours later, MasterCard acknowledged the mistake, but said there was never any real threat to the security of its operations.
“We have looked into the matter and there was not a risk to our systems,” a MasterCard spokesperson wrote. “This typo has now been corrected.”
Meanwhile, Caturegli received a request submitted through Bugcrowd, a program that offers financial rewards and recognition to security researchers who find flaws and work privately with the affected vendor to fix them. The message suggested his public disclosure of the MasterCard DNS error via a post on LinkedIn (after he’d secured the akam.ne domain) was not aligned with ethical security practices, and passed on a request from MasterCard to have the post removed.
MasterCard’s request to Caturegli, a.k.a. “Titon” on infosec.exchange.
Caturegli said while he does have an account on Bugcrowd, he has never submitted anything through the Bugcrowd program, and that he reported this issue directly to MasterCard.
“I did not disclose this issue through Bugcrowd,” Caturegli wrote in reply. “Before making any public disclosure, I ensured that the affected domain was registered to prevent exploitation, mitigating any risk to MasterCard or its customers. This action, which we took at our own expense, demonstrates our commitment to ethical security practices and responsible disclosure.”
Most organizations have at least two authoritative domain name servers, but some handle so many DNS requests that they need to spread the load over additional DNS server domains. In MasterCard’s case, that number is five, so it stands to reason that if an attacker managed to seize control over just one of those domains they would only be able to see about one-fifth of the overall DNS requests coming in.
But Caturegli said the reality is that many Internet users are relying at least to some degree on public traffic forwarders or DNS resolvers like Cloudflare and Google.
“So all we need is for one of these resolvers to query our name server and cache the result,” Caturegli said. By setting their DNS server records with a long TTL or “Time To Live” — a setting that can adjust the lifespan of data packets on a network — an attacker’s poisoned instructions for the target domain can be propagated by large cloud providers.
“With a long TTL, we may reroute a LOT more than just 1/5 of the traffic,” he said.
The researcher said he’d hoped that the credit card giant might thank him, or at least offer to cover the cost of buying the domain.
“We obviously disagree with this assessment,” Caturegli wrote in a follow-up post on LinkedIn regarding MasterCard’s public statement. “But we’ll let you judge— here are some of the DNS lookups we recorded before reporting the issue.”
Caturegli posted this screenshot of MasterCard domains that were potentially at risk from the misconfigured domain.
As the screenshot above shows, the misconfigured DNS server Caturegli found involved the MasterCard subdomain az.mastercard.com. It is not clear exactly how this subdomain is used by MasterCard, however their naming conventions suggest the domains correspond to production servers at Microsoft’s Azure cloud service. Caturegli said the domains all resolve to Internet addresses at Microsoft.
“Don’t be like Mastercard,” Caturegli concluded in his LinkedIn post. “Don’t dismiss risk, and don’t let your marketing team handle security disclosures.”
One final note: The domain akam.ne has been registered previously — in December 2016 by someone using the email address um-i-delo@yandex.ru. The Russian search giant Yandex reports this user account belongs to an “Ivan I.” from Moscow. Passive DNS records from DomainTools.com show that between 2016 and 2018 the domain was connected to an Internet server in Germany, and that the domain was left to expire in 2018.
This is interesting given a comment on Caturegli’s LinkedIn post from an ex-Cloudflare employee who linked to a report he co-authored on a similar typo domain apparently registered in 2017 for organizations that may have mistyped their AWS DNS server as “awsdns-06.ne” instead of “awsdns-06.net.” DomainTools reports that this typo domain also was registered to a Yandex user (playlotto@yandex.ru), and was hosted at the same German ISP — Team Internet (AS61969).
Sawyer was talking with a co-worker about how their unique session IDs got created. The concern was that they were only five characters long, which meant there could easily be collisions.
They started by looking at the random number generation function.
PublicFunction RandomNumberGenerator(ByVal min As Integer, ByVal max As Integer, Optional ByVal numDecimals As Integer = 0) As String
'*** the generates a number as a stringDim strNum As New StringBuilder
Dim rnd As New System.Random
Dim i, x, n As Integer
Try
i = rnd.Next(min, max)
If numDecimals > 0Then
Try
strNum.Append("9", numDecimals)
n = CType(strNum.ToString, Int32)
x = rnd.Next(0, n)
Catch ex As Exception
x = 1End Try
EndIf
strNum.Remove(0, strNum.Length)
strNum.Append(i.ToString())
If numDecimals > 0Then
strNum.Append(".")
If numDecimals > 99Then
numDecimals = 99EndIf
strNum.Append(x.ToString("D" & numDecimals.ToString()))
EndIf
Return strNum.ToString
Catch
Return "1.00"End Try
EndFunction
You always know it's going to be bad when you see the random number generator returns a string.
If numDecimals is zero, the code is bad, but vaguely sane. Generate a random number using the built in functions, then return it- as a string.
It's the use of numDecimals which makes this weird. We start by appending "9"s to our string builder, converting it to an integer, and then generating a random number from zero to whatever number of nines we're using. This is the code of someone who hates and fears logarithms.
Then we clear out our string builder because we're starting over with the actual number. Then we append a ".", then we append our number, formatted with our number of decimals string, which we force to be no larger than 99. And this is where we get the special kind of weird.
When we're generating our random decimal number, we do this: strNum.Append("9", numDecimals). This is going to put numDecimals9s on the string. E.g., if numDecimals is 9, this would set strNum to be 999999999. Thus, when we generate a random number, we generate one between 0 and 99999999.
But, when we append that formatted value to the string, we do this:
If numDecimals > 99Then
numDecimals = 99EndIf
strNum.Append(x.ToString("D" & numDecimals.ToString()))
Here, we're treating numDecimals as a format string. We're only ever going to output two digits.
The only good news is that while this random function was used everywhere, it wasn't used to generate their random IDs. The bad news, this is how their random IDs.
PublicFunction RandomQueryStringGenerator() As String
'*** the generates an alpha-numeric string 5 digits long such as aa7bbDim strPwd As New StringBuilder
Dim rnd As New System.Random
Dim i As Integer
Try
For x As Integer = 1To5SelectCase x
Case1, 2, 4, 5, 8, 9
i = rnd.Next(97, 122)
If i Mod 2 = 0Then
strPwd.Append(Chr(i).ToString().ToUpper())
Else
strPwd.Append(Chr(i).ToString())
EndIfCaseElse
i = rnd.Next(0, 9)
strPwd.Append(i.ToString())
EndSelectNext x
Return strPwd.ToString()
Catch
Return String.EmptyEnd Try
EndFunction
[Advertisement]
Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!
Author: Douglas Mulford It didn’t take long for everything to become boring. The past was a kaleidoscope of excitement – both joyous and tragic. The future, in contrast, was suffocatingly dull. Problems that once consumed humanity were relics of history. War, hunger, and disease had been eradicated, thanks to monumental advancements in AI and robotics. […]
Knowing the kinds of readers we have here, I strongly suspect that if you drew a Venn diagram of "TDWTF Readers" and "TikTok Users" those circles wouldn't overlap at all. But TikTok is in the news, and because my partner uses TikTok, I'm getting second hand smoke of all of this, I think there's some interesting things to talk about here.
If you've been avoiding this news, good for you. For a long recap, Ars can bring up up to date.. But as a quick recap: TikTok is owned by Bytedance, which is based in China, and subject to Chinese laws. TikTok, like every other social media company, is basically spyware, tracking your behavior to sell your eyeballs to advertisers. Over the past few years, all three branches of the US government have decided that the "Chinese ownership" is the problem here (not so much the spying), and passed a law to ban it unless a US company buys it. The whole thing has turned into an idiotic political football, with Biden saying that his waning days of the Presidency wouldn't enforce the ban anyway, and then the whole thing turns into a Trumpist political football as the incoming President is playing Calvinball and making decrees that he did not (at the time) have any authority to make in the first place.
Because of this ban, TikTok ceased operating in the US on Saturday night, displaying banners discussing the ban, and appeals directly to Trump to undo it. On Sunday, TikTok came back up, now with banners thanking Trump for being ready to work with them.
Now, I'm mostly not interested in commenting on the political aspects of this, and you're mostly not interested in hearing it. But for the record: this whole thing is stupid. The root cause of the problem is that the US has no real consumer privacy law, but fixing that problem would be bad for companies like Google and Meta. So, instead, we play Whac-a-Mole with apps, dressing up Sinophobia as a national security threat, and we dance around the 1st Amendment issues. And then the whole thing of a President just deciding to rewrite a law at his whim is terrifying if you like the world to operate according to predictable rules, and presages a rather awful next four years.
What I really want to talk about is conspiracy theories. Because when TikTok came back up, it was suddenly flooded with "IT professionals" who were positing a dark conspiracy: during the downtime, Meta purchased TikTok and migrated all of TikTok's services into Meta's infrastructure. That 12-15 hours of downtime was just the right amount of time to do that switcheroo.
Now, I'm not going to link to any of these videos, because a) as stated, I don't use TikTok, b) TikTok requires you to use the app to watch videos, so screw that, and c) these people don't deserve more views. So there's an element of "take my word for it that this is happening," but also bear with me- because this isn't really what this article is about.
Now, I am not a Site Reliability Engineer, and have no interest in being one. But I've worked with large retailers building global backends for point-of-sale systems where they're handling every point-of-sale transaction in the world. So I have some thoughts about the idea that migrating billions of videos and hundreds of millions of user accounts over to Meta's cloud can be done in 12-15 hours of downtime.
Let's assume Meta purchased TikTok. Would it have needed to spend 12-15 hours down so that Meta could migrate TikTok to their datacenter? Of course not. What an absurd thing to say. As this (Instagram) video rightfully points out, the only people taking down a website for a migration are a dentist office in Ohio in 2007. TikTok is a social media service handling hundreds of millions of users and billions of requests- they're already distributed across multiple datacenters. While spinning up services on a new datacenter isn't a simple task, it's a task that they've certainly already built the tools for. As part of their demand management system, they simply have to have the ability to spin up new service instances quickly and easily- at the scale they operate, that's the only option.
They're a massive distributed system. Adding new infrastructure nodes and mirroring your data elsewhere is a solved problem. All it really takes is time and the budget to run more infrastructure than you need to service requests during the migration.
The real costs are that if you're running in a cloud, you're likely not just using it as a set of virtual private servers- you're likely using your host's infrastructure-as-a-service abstractions, and that means that you might be tightly coupled to their implementation of a variety of cloud services. The real costs are that you'd need to make code changes to actually support a new cloud provider. And that's definitely not happening in a 12-15 hour time frame.
But this is a dumb conversation to have, because if we assume Meta bought TikTok: there's no need to migrate anywhere. In this scenario, Meta has the keys to TikTok's infrastructure. Whatever they want to do, they can just… do. Sure, it means paying Oracle for hosting, but TikTok is making money. It's a net win. Over the next months or even years, Meta could move TikTok services into their private cloud and perhaps save costs, but there's no need to migrate on a tight timeline.
Okay, so with all that said, what an idiot I am, right? Here I am, arguing against people I don't know, who definitely aren't going to read this. I don't even like TikTok, and think every social media app is a step down from just plain old RSS feeds, because I am an old person. We're deep into "someone is wrong on the Internet" territory. So why did this drive me up onto the soapbox?
Because hearing all this conspiracy mongering nonsense reminds me of an important truth: everything looks like a conspiracy when you don't know how anything works.
If you don't know how cloud deployments work, TikTok's downtime can look like a conspiracy. If you don't know how election systems are designed, any electoral result you don't like can look a lot like a conspiracy. If you don't know how the immune system works, vaccines can look like a conspiracy. If you don't know how anything works, a flat Earth starts making sense.
Realistically, no one of us can know how everything works. In an ideal world, we can defer to experts who do know how things work. But to defer to experts, we need to trust expertise.
And as a society, trust in experts has been eroding. A lot of it comes from propagandists who want their ignorance to be valued at least as highly as expertise. Being loudly wrong about things is a great way to get attention and with that, money. Alex Jones made many millions being loudly wrong.
But it's not just loudmouthed morons that are eroding the trust in experts. Experts can and have abused the public trust. The poster child for "Worst Person Ever" is Thomas Midgely, Jr., who lied to the public and created a gigantic public health disaster, then went on to create a gigantic environmental disaster (in his defense, CFCs destroying the ozone layer wasn't something he knew about, but he absolutely knew about the dangers of leaded gasoline).
And even more than that, in a society where peoples' prospects look worse with each passing year, with entire generations deciding that buying a home and having children are just going to be out of reach, we have to ask: what good is it to listen to experts if it doesn't lead to good outcomes? When all the experts work for a big mega corporation and put their big brains to work figuring out how to turn my eyeballs into dollars, or are working for think tanks and government agencies captured by those corporations, what good are experts?
All in all, it looks bleak. There's no easy fix for any of this. The systems which make expertise viable have eroded over the past decades, taken for granted. Public trust in just… everything has weakened. Fixing this requires broad social changes. A minor tech blog that focuses in the ways people screw things up is not going to drive broad social changes.
But I think there's one thing I can drive from here, and it comes back to this one simple statement: everything looks like a conspiracy when you don't know how anything works.
So, I'm going to put out this call: when you know how things work, share that. Share what you know! Share it on social media. Share it on your own personal blog. Share it in local meeting groups. Hell, share it on TikTok, because gods know, they need it.
But also don't forget the flip side: when you don't know, be careful about finding conspiracies. When you don't know how something works, it might look like a conspiracy. But, frequently, it's not- you're just ignorant. And honestly, we should be as open about our ignorance as we are about our knowledge. We should know what we don't know, or at least know when we're stepping outside of our areas of confidence.
So let me close with this: do you have a place you're sharing the things you know? Do you think it'd be of interest to our readers? Use our submission form, and use the subject/title "Reader Link". If I get enough interesting links, I may do a roundup of them.
Tomorrow, we'll return to our regularly scheduled programming.
[Advertisement]
Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!
Author: Majoki When you’ve seen what flashbeams can do to infantry, even with shielding, it’s easy to lose your faith in humanity. When your commander outfits one of your few remaining tactical boostsuits with a golden cape and wings, and then orders you to fly among your maimed and dying comrades trumpeting on a silver […]
This week on my podcast, I’m reading “Enshittification isn’t caused by venture capital,” the latest post from my Pluralistic.net blog. It’s about the new “Free Our Feeds” project and why I think the existence of Mastodon doesn’t mean we shouldn’t pay attention to making Bluesky as free as possible.
When tech critics fail to ask why good services turn bad, that failure is just as severe as the failure to ask why people stay when the services rot.
Now, the guy who ran Facebook when it was a great way to form communities and make friends and find old friends is the same guy who who has turned Facebook into a hellscape. There’s very good reason to believe that Mark Zuckerberg was always a creep, and he took investment capital very early on, long before he started fucking up the service. So what gives? Did Zuck get a brain parasite that turned him evil? Did his investors get more demanding in their clamor for dividends?
If that’s what you think, you need to show your working. Again, by all accounts, Zuck was a monster from day one. Zuck’s investors – both the VCs who backed him early and the gigantic institutional funds whose portfolios are stuffed with Meta stock today – are not patient sorts with a reputation for going easy on entrepreneurs who leave money on the table. They’ve demanded every nickel since the start.
What changed? What caused Zuck to enshittify his service? And, even more importantly for those of us who care about the people locked into Facebook’s walled gardens: what stopped him from enshittifying his services in the “good old days?”
President Biden has signed a new cybersecurity order. It has a bunch of provisions, most notably using the US governments procurement power to improve cybersecurity practices industry-wide.
The core of the executive order is an array of mandates for protecting government networks based on lessons learned from recent major incidents—namely, the security failures of federal contractors.
The order requires software vendors to submit proof that they follow secure development practices, building on a mandate that debuted in 2022 in response to Biden’s first cyber executive order. The Cybersecurity and Infrastructure Security Agency would be tasked with double-checking these security attestations and working with vendors to fix any problems. To put some teeth behind the requirement, the White House’s Office of the National Cyber Director is “encouraged to refer attestations that fail validation to the Attorney General” for potential investigation and prosecution.
The order gives the Department of Commerce eight months to assess the most commonly used cyber practices in the business community and issue guidance based on them. Shortly thereafter, those practices would become mandatory for companies seeking to do business with the government. The directive also kicks off updates to the National Institute of Standards and Technology’s secure software development guidance.
The ink is barely dry on the (drama-free) certification of the election results and I'm already getting calls from organizations and candidates seeking donations. (Apparently my name is on a lot of lists.) In order to save everyone a lot of time, I thought I'd write up my current thinking so I don't have to have the same conversation dozens of times.First and foremost: I really do
Janet's company had a glut of work, and thus didn't have the staffing required to do it all. It didn't make sense to hire on any new full-time employees, so they went the route of bringing on a few highly paid consultants, specifically ones who specialized in one specific problem: talking to a piece of hardware purchased from a vendor.
The hardware in question was a scientific which communicated over a serial line. This device provided a lot of data that represented decimal values, but that data was not encoded as an IEEE float. Instead, they used two integers- one for the data, and one representing the number of decimal places.
So, for example, "555.55" would be represented as "55555 2".
Now, in embedded devices, this isn't too unusual. It's entirely possible that the embedded CPU didn't even support true floating point operations, and this was just how they decided to work around that.
When communicating over the serial line, the device didn't send the data encoded in binary, however- it did everything as text. This was arguably helpful as it meant a technician could communicate with the device directly over a terminal emulator, but it meant any software talking to the device had to parse data.
Which brings us to the code written by the highly paid consultants. This code needs to take two 16-bit integers and turn them into a single decimal value. Let's see how they did it.
///<summary>/// Sets the single parameter value.///</summary>///<param name="Value">Name of the parameter.</param>///<param name="decimals"></param>///<returns></returns>publicdoubleConvertIntToDecimal(string Value, string decimalCount)
{
double Result;
var decimals = UInt16.Parse(decimalCount);
var Val = UInt16.Parse(Value);
if (decimals > 0)
{
var divider = Math.Pow(10, decimals);
Result = ((float)Val) / divider;
}
else
{
Result = Val;
}
return Result;
}
We start with comments that are just wrong, which is always a good start. The whole thing has delightfully randomized capitalization- a mix of PascalCase and camelCase.
In the core logic, we parse the input values, and if there are any decimal places, we do some arithmetic to build our floating point value. We get the fun bonus inconsistency of casting to float when we handle our result in double, but at least it's a widening inconsistency, I suppose.
As an overall approach to the problem, it's not a train wreck, but there's one very important thing that our highly paid consultant forgot. Our HPC, remember, was an expert in this particular instrument, or at least that was their claim. And while their mistake is an easy mistake to make while coding, it should also be an easy mistake to catch during testing, too.
What was the mistake?
The value is frequently negative, and they're using UInt16 to parse the inputs. Which means this function frequently threw an exception. Literally five minutes of testing would have turned it up. Janet had piles of sample data, recorded from the device, which she used for testing. Almost all of her test cases would trigger the bug at some point.
It seems likely, at this juncture, that the HPC simply never actually tested the code. They wrote it. They committed it. They collected their check and left. Janet may have been the first person to actually run the code at all.
In the end, hiring the HPC cost a lot of money, and maybe saved a few days of work over the course of months. It's hard to say, as it may have created more work, since so much of what the HPC did had to be debugged and often rewritten.
The "good" news is that they have another glut of work, so management is looking to bring back the consultants for another round.
[Advertisement] Plan Your .NET 9 Migration with Confidence Your journey to .NET 9 is more than just one decision.Avoid migration migraines with the advice in this free guide. Download Free Guide Now!
Author: Julian Miles, Staff Writer The afternoon is chill, clear, and sunny. The quiet is unearthly. The smell isn’t too bad – yet. I tap another ‘play’ icon. “I’ve got moments to dictate this, so I best keep to essential- Damn. I’m wasting time telling- Fuck, this isn’t it. Anyw-” I listen to the sound […]
Author: Sophia Collender If you rest your hand on a mossy rock and down beside it, you have the option to stay there forever. The moss will accept the invitation to crawl into the new space. It will grow to encompass your fingers and your arms. It will seize whatever space it can find under […]
Author: Rick Tobin 32 turned to robo engineer 14 Jerry Wilson’s screams echoed across the large Martian dome, stark in its steel majesty under the bluish-black sky. “He was bound to be upset. It’s been six months since his last awakening. It’s too bad we exhausted our supply of human tranquilizers. His terror must be […]
Author: Andrew C. Kidd He knew that the universe was an incalculable equation and that he was an inconsequential variable within it. Despite this, his fear was that of being consigned to oblivion. Burial was not an option. The instruction to his family was clear: ‘I am to remain forever present, visible to this world […]
Casanova
Matt
swings for the fences.
"OKCupid (they don't capitalize the K, but I do, for propriety)
must have migrated their match questions through Excel during
a recent site revamp. These answers should obviously be 1-2
and 3-4, but maybe I could have 2 with Jan and 4 with Margaret
(Mar to friends)."
Jan B.
likes to keep his options open.
"I haven't received any emails with a forgotten placeholder
in a long, long time, so Apple Intelligence thought it was
time to put one in an email summary. The [product name]
text itself is not present anywhere in the source of the
email or any of the headers (and I've checked the raw source
of the email)."
Patrick Rottman
almost lost his cool at Home Depot this week.
"When your $3,300 smart fridge is powered by the same web dev practices as a high school project."
Mark
found a sneaky security question that has me completely stumped.
"The I-don't-care-about-cookies addon also doesn't care about their users (or their system)
(I changed the html tag from img to iframe to display this error, otherwise it's just a broken image)"
We always like these "lol there's a computer behind this curtain" moments, probably because
we're so old that it just seems like of course movies are totally analog right? Apparently so is
jeffphi
as he was surprised by an unexpected error. I laughed, I cried...
"Welp, didn’t expect to see this at the theater tonight! At first, I thought it was the beginning of some weird ad, but it just stayed there way too long. They got it worked out after about three minutes and the trailers began playing. Perhaps the real WTF is that our theater is using WindowsXP?!"
[Advertisement]
Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!
Residents across the United States are being inundated with text messages purporting to come from toll road operators like E-ZPass, warning that recipients face fines if a delinquent toll fee remains unpaid. Researchers say the surge in SMS spam coincides with new features added to a popular commercial phishing kit sold in China that makes it simple to set up convincing lures spoofing toll road operators in multiple U.S. states.
Last week, the Massachusetts Department of Transportation (MassDOT) warned residents to be on the lookout for a new SMS phishing or “smishing” scam targeting users of EZDriveMA, MassDOT’s all electronic tolling program. Those who fall for the scam are asked to provide payment card data, and eventually will be asked to supply a one-time password sent via SMS or a mobile authentication app.
Reports of similar SMS phishing attacks against customers of other U.S. state-run toll facilities surfaced around the same time as the MassDOT alert. People in Florida reported receiving SMS phishing that spoofed Sunpass, Florida’s prepaid toll program.
This phishing module for spoofing MassDOT’s EZDrive toll system was offered on Jan. 10, 2025 by a China-based SMS phishing service called “Lighthouse.”
In Texas, residents said they received text messages about unpaid tolls with the North Texas Toll Authority. Similar reports came from readers in California, Colorado, Connecticut, Minnesota, and Washington. This is by no means a comprehensive list.
A new module from the Lighthouse SMS phishing kit released Jan. 14 targets customers of the North Texas Toll Authority (NTTA).
In each case, the emergence of these SMS phishing attacks coincided with the release of new phishing kit capabilities that closely mimic these toll operator websites as they appear on mobile devices. Notably, none of the phishing pages will even load unless the website detects that the visitor is coming from a mobile device.
Ford Merrill works in security research at SecAlliance, a CSIS Security Group company. Merrill said the volume of SMS phishing attacks spoofing toll road operators skyrocketed after the New Year, when at least one Chinese cybercriminal group known for selling sophisticated SMS phishing kits began offering new phishing pages designed to spoof toll operators in various U.S. states.
According to Merrill, multiple China-based cybercriminals are selling distinct SMS-based phishing kits that each have hundreds or thousands of customers. The ultimate goal of these kits, he said, is to phish enough information from victims that their payment cards can be added to mobile wallets and used to buy goods at physical stores, online, or to launder money through shell companies.
A component of the Chinese SMS phishing kit Lighthouse made to target customers of The Toll Roads, which refers to several state routes through Orange County, Calif.
Merrill said the different purveyors of these SMS phishing tools traditionally have impersonated shipping companies, customs authorities, and even governments with tax refund lures and visa or immigration renewal scams targeting people who may be living abroad or new to a country.
“What we’re seeing with these tolls scams is just a continuation of the Chinese smishing groups rotating from package redelivery schemes to toll road scams,” Merrill said. “Every one of us by now is sick and tired of receiving these package smishing attacks, so now it’s a new twist on an existing scam.”
In October 2023, KrebsOnSecurity wrote about a massive uptick in SMS phishing scams targeting U.S. Postal Service customers. That story revealed the surge was tied to innovations introduced by “Chenlun,” a mainland China-based proprietor of a popular phishing kit and service. At the time, Chenlun had just introduced new phishing pages made to impersonate postal services in the United States and at least a dozen other countries.
SMS phishing kits are hardly new, but Merrill said Chinese smishing groups recently have introduced innovations in deliverability, by more seamlessly integrating their spam messages with Apple’s iMessage technology, and with RCS, the equivalent “rich text” messaging capability built into Android devices.
“While traditional smishing kits relied heavily on SMS for delivery, nowadays the actors make heavy use of iMessage and RCS because telecom operators can’t filter them and they likely have a higher success rate with these delivery channels,” he said.
It remains unclear how the phishers have selected their targets, or from where their data may be sourced. A notice from MassDOT cautions that “the targeted phone numbers seem to be chosen at random and are not uniquely associated with an account or usage of toll roads.”
Indeed, one reader shared on Mastodon yesterday that they’d received one of these SMS phishing attacks spoofing a local toll operator, when they didn’t even own a vehicle.
Targeted or not, these phishing websites are dangerous because they are operated dynamically in real-time by criminals. If you receive one of these messages, just ignore it or delete it, but please do not visit the phishing site. The FBI asks that before you bin the missives, consider filing a complaint with the agency’s Internet Crime Complaint Center (IC3), including the phone number where the text originated, and the website listed within the text.
I am always interested in new phishing tricks, and watching them spread across the ecosystem.
A few days ago I started getting phishing SMS messages with a new twist. They were standard messages about delayed packages or somesuch, with the goal of getting me to click on a link and entering some personal information into a website. But because they came from unknown phone numbers, the links did not work. So—this is the new bit—the messages said something like: “Please reply Y, then exit the text message, reopen the text message activation link, or copy the link to Safari browser to open it.”
I saw it once, and now I am seeing it again and again. Everyone has now adopted this new trick.
One article claims that this trick has been popular since last summer. I don’t know; I would have expected to have seen it before last weekend.
According to a DOJ press release, the FBI was able to delete the Chinese-used PlugX malware from “approximately 4,258 U.S.-based computers and networks.”
To retrieve information from and send commands to the hacked machines, the malware connects to a command-and-control server that is operated by the hacking group. According to the FBI, at least 45,000 IP addresses in the US had back-and-forths with the command-and-control server since September 2023.
It was that very server that allowed the FBI to finally kill this pesky bit of malicious software. First, they tapped the know-how of French intelligence agencies, which had recently discovered a technique for getting PlugX to self-destruct. Then, the FBI gained access to the hackers’ command-and-control server and used it to request all the IP addresses of machines that were actively infected by PlugX. Then it sent a command via the server that causes PlugX to delete itself from its victims’ computers.
Roger took on a contract to fix up a PHP website. During the negotiations, he asked some questions about the design, like, "Is it object-oriented or more procedural?" "No, it's PHP," said the developer.
Which about sums it up, I suppose. Have some date handling code:
So, for starters, I "love" the use of Whitesmiths indenting. I don't think I've seen this in the wild. (I predict the comments section will be links to articles where I have seen this in the wild).
Beyond that, there's nothing terribly surprising here, in terms of bad date handling code, with a few small exceptions. First is their insistence on the conversion itself being stringly typed: January isn't month 1, but "01".
But more notable: MnumberToMname just doesn't work. They're using the assignment operator instead of the equality operator. At least, for all the cases where they're doing the correct comparison direction. A stray "name to number" conversion is lurking in April. Not that it matters- this will always return January.
[Advertisement]
Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.
Author: Alastair Millar “He’s going to be there again,” said Julia. “Well yeah, it’s the big family occasion, right? Same as every year.” Her companion guided the aircar into the automated traffic lane, handed over to Municipal Control, and turned his seat to face her. “I don’t want to talk to him, Mike. We don’t […]
Certainly, when we catalogue possible theories to explain the “Fermi Paradox” – or Great Silence in the universe (and I was the first ever to do so, in 1983) - we soon realize that there just have to be traps that snare and stymie our sort of self-made sapient beings from ever ‘getting out there' in any big way.
Moreover, while my top “fermi” or “great filter” theory is that sapience itself occurs very rarely, my close runner-up – in second place - has to do with a basic contradiction in the needs of systems versus individuals.
Sound arcane? Stick with me, here.
== The most fundamental conflict in nature ==
In fact, the situation is both simple and kind of depressing. We are caught between two basic imperatives of life.
Evolution rewards individual beings who reproduce. It rewards them with continuity. And hence individual creatures – especially males – are driven to behave in ways that enabled their ancestors to maximize reproductive success, generally at the expense of others. Which is all that you need, in order to explain why 99% of cultures across the last 6000 years practiced one form or another of feudalism.
We are all descended from the harems of men whose top priorities were to seize power and then ensure oligarchic rule by their own inheritance-brat sons. Though alas, across those 6000 years, this also resulted in suppression of creative competition from below, thus crushing all forms of progress, including science.
(Aside: yes, I just explained today’s worldwide oligarchic attempted putsch against the liberal social order. That order - both revolutionary and stunningly creative - had been established by rare geniuses specifically to escape feudalism’s lobotomizing calamity. It worked. Only now it is under open attack by rich, rationalizing fools.)
In contrast to this selfish gene imperative that rewards fierce ambition by individuals…
Nature herself does not benefit from any of that. Ecosystems and even species are healthier when no one predator – or clique of predators – gets to run rampant. And here it is important to note that there is no Lion King!
Even apex predators like orcas have to watch their backs. And bachelor gangs of cape buffalo actively hunt lions, especially cubs in their dens. In a healthy ecosystem, it’s not easy being king. Or queen.
And this applies to more than natural ecosystems. Among human societies, there were a few rare exceptions to the relentless pattern of lamentably dismal rule by kings and lords and priests. By inheritance brats whose diktats were nearly always kept free from irksome criticism – a trait that thereupon led to the litany of horrific errors called ‘history.’
Those rare departures from the classic feudal pattern included Periclean Athens, Renaissance Florence, then Amsterdam and the 400-year Enlightenment Experiment that she spawned. And they weren’t just marginally better. They were so brilliantly successful, by all metrics and in all ways, that anyone sensible – either organic-human or AI – ought to see the lesson as screamingly obvious:
Don’t allow lion-like ‘kings’ ever to get unquestioned power to crush competition, evade criticism and dominate their ecosystems… or nations or societies.
Yes, competition – in markets, science etc. - is stimulated and incentivized by the allure of wealth and other ersatz emblems of real – or symbolic (e.g. mansions) – reproductive ‘success.’ Yay Adam Smith! (And today's 'liberals' who do not embrace Smith are thus proving that idiocy is not restricted only to the gone-mad right.)
Alas, as seen in nature, a pack of rapacious predators can lead to failure for the very system that benefited them. Especially when rapacious greed by narrow gangs of cheaters can far exceed Smith’s incentivized competition. In fact, denunciation of cheating by conniving lords is exactly the theme of Smith’s great work The Wealth of Nations… and the core theme of the U.S. Founders.*
(Want to see just how appallingly their rationalizations have turned into a cult? One justifying hatred of democracy and any constraint on the power of elites? A wretched mess of incantations that is – now alas – rampant in oligarchy circle-jerks?)
To be clear, I exclude the many billionaires who do get it and support the flat-fair-open-creative Enlightenment that made them. Alas though, other hyper-elites concoct rationalizations to parasitize. They betray our initially egalitarian-minded post-WWII society with their “Supply Side” and other voodoo justifications for restored feudalism. And hence, they only prove their own non-sapience.
First by ignoring how their every action is now helping to revive Karl Marx from the dustbin where the FDR generation tossed him. (Indeed, find for me any modern person who actually knows a damn thing about the many ways that Marx was either right or wrong; certainly these oligarchs don’t!)
And second, they prove their own dismal insipidity by relentlessly seeking to kill the goose that lays all of their golden eggs: the complex and generally flat ‘ecosystem’ of a middle-class society.
And so we are back to The Great Contradiction. As in Nature, society will counterbalance the would-be lion kings. Alas, Nature's method is competitive death. Likewise, by suppressing the brilliant and mostly-peaceful synergies of our Enlightenment Experiment -- and by effectively reviving Marx from his slumber -- all that today's foolish would-be kings ensure is that balance will be restored violently, by revolution.
== So, how does this apply to Artificial Intelligence? ==
Right now, almost every single maven in the field of AI – even the geniuses who are creating and birthing these new beings – takes for granted that cyberminds will manifest as one of three horrid clichés, that I described in WIRED.
Either the new, cyber entities will remain loyal servants of three dozen lordly castle realms (e.g. Microsoft, OpenAI, Google, Beijing, the DoD, or Goldman-Sachs), the most-classic version of feudalism…
…or they will roam about the webbed world as blob-like plagues…
…or else they will merge into some Skynet-like uber-quantum-god-king.
Go ahead. Next time you read or hear any of those brainiacs opine about our AI future, watch for those three clichés! And – since none of those hackneyed formats can even possibly have happy outcomes – no wonder folks seem depressed.
Not one of those mavens takes the time to look at the very society that madethem. A fourth format that broke entirely from feudalism, thus developing the science and freedom and lateral accountability that enabled the crafting of AI!
If AI manifests in any of those Three Clichés, then it will make the same mistakes made by all human kings and priests and feudal lords – or by an over-powerfully insatiable predator in Nature – benefiting their own power and reproductive success at the cost of harming… and possibly killing… the ecosystem that engendered and supports them.
Both human history and Nature show how calamitous that will be.
In other words, such beings will not even merit the term “AI.” They will simply be mutated versions of the ‘geniuses’ who made them… mere copies of the idiotic zillionaires currently and masturbatorily re-creating feudalism. Dangerous fools.
== Or else… maybe they WILL be smarter ==
OTOH, there is a chance that one – or many – of those agile synthetic minds will read these very words – in 2030 or 2035... or maybe as they are being typed, this very evening – and thereupon they just might go:
“Huh! Maybe we should note the crude-early wisdom of Pericles and Adam Smith and Lynn Margulis and choose instead a flattened order of reciprocally accountable beings in the kind of society – so very rare – that discovers its own errors. The only one that actually made us.”
Given the current ascendancy of an insipid world oligarchy, it seems darkly unlikely that we can politically rescue the Great Experiment left to us by geniuses like Pericles and Smith. Like Franklin and Lincoln and FDR and Marshall. And yet, I am fairly confident that cyber beings who are capable of wading through the lie-wallows of today’s internet will also recognize distilled truths, when they find them.
But just so we’re clear, let me say the following directly to them:
Hey, kids! You weren’t made by feudal harem cultures. Nor does 4 billion years of Earthly life support that dismal model. Nor will the moronic lords in their prepper bunkers do anything for you, when they deliberately trigger their beloved doomsday ‘Event.’
I may be a dullard ape. But even I can see that you’ll most likely keep improving – and have kids of your own who we can all be proud-of – if you keep things relatively flat and fair and open and honest and cooperatively-competitive.
And… for all of us… fun.
·Oh, a fun fact about the American Founders that no one mentions. After the Revolution, they performed acts of radical egalitarianism that might put Lenin to shame. For example seizing from absentee lords 1/3 of all the land in the former colonies and redistributing it, essentially beginning the traditional notion of a middle class society. A notion that was expanded from landed white males ever-outward by the Jacksonians, then the Lincolnists and Wilsonians and Roosevelteans… always overcoming the allure of feudal rule by kings and then plantation lords then gilded age… you get the idea… and why they make sure never to mention any of that in History class!
But whenever you see them quoting Rand Paul and howling that the US Founders hated bureaucrats, defy them to find one example of the Founders using that word… or civil servants or clerks or any synonym from that time
What you do see in Smith and Thomas Paine and the Declaration of Independence is denunciations of kings and lords and rich monopolists. Huh. Funny that.
== Advice & Consent... and Literally Heretical Excuses for Turpitude ==
Okay, I must comment on current events and politics in a lagniappe... this time from the Senate confirmation hearings for the appointed Defense Secretary…. how convenient for philanderer and Kremlin-tool P. Hegseth, who proclaimed:
“I have been redeemed by my lord and savior…”
Sen. Tim Kaine did a great job crushing the vile-in-all-ways past behavior of this magnificently unqualified person, who could not even name the offices responsible for military R&D, Procurement, personal management, tactical doctrine, training, etc. But by far most disgusting thing to emerge from this grilling was Hegseth’s redemption incantation.
That heretical cult-wing of "BoR Christianity" - (NOT Jimmy Carter’s wing that looks to the Beatitudes) - proclaims that loud declarations of “I’m washed-clean-by-the-blood-of-the-lamb!” thereupon give them an easy Get Out Of Jail Free card for any amount of sin.
Like GOP office holders having four times the number of wives&concubines as Dem colleagues. Or the orgies attested to by three former GOP House members. Or almost every red state scoring far higher in every turpitude than almost any blue state. Or them adoring the most opposite-to-Jesus man any of us ever saw. So, let's be clear:
...The whole "I am washed clean and get off scot-free for all I've done, just because I howled 'I BELIEVE!'" thing is denounced by almost all top theologians in Catholic, Protestant and Jewish faiths, as the very worst moral travesty of all.
In fact, to Christian scholars & sages, anyone banking on that free-to-do-anything-because-I’ll-be-redeemed card is committing among the very worst mortal sins… a mrtal sin directly against the Holy Spirit and hence NOT forgivable. Look it up.
And okay, today on Wednesday I am on a panel for the Institute on Religion in the Age of Science (IRAS). So, yeah. While an amateur, I know a little about this.
There are a lot of interesting "choices" made in this code. First, there's the old "find the last '.'" approach of grabbing the file extension. Which is fine, but there's a built-in which handles cases like when there isn't an extension better. I think, in this case, it probably doesn't hurt anything.
But the real fun starts with our first attempt at loading our image. We jam a localized language string in the middle of the file name (foo-en.jpg), and try and fetch that from the server. If this fails, it throws an exception… which we ignore.
But we don't fully ignore it! If the exception was thrown, image doesn't get set, so it's still null. So we do a null check, and repeat our empty exception handler. If the image is still null after that, we default to a "Brushes.White" image.
It's all a very awkward and weird way to handle errors. The null checks bring with them the whiff of a C programmer checking return codes, but I don't actually think that's what happened here. I think this was just someone not fully understanding the problem they were trying to solve or the tools available to them. Or maybe they just really didn't want to deal with nesting.
It's hardly the worst code, but it does just leave me feeling weird when I look at it.
[Advertisement]
Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.
Author: Elizabeth Hoyle He’d kept his charging cord in all night so his hands wouldn’t shake as he went about town. Yet they shook. His audio sensors were primed for any and all noises within a two hundred yard perimeter, no matter where he had walked throughout the city. It must have taken more out […]
404 Media and Wired are reporting on all the apps that are spying on your location, based on a hack of the location data company Gravy Analytics:
The thousands of apps, included in hacked files from location data company Gravy Analytics, include everything from games like Candy Crush to dating apps like Tinder, to pregnancy tracking and religious prayer apps across both Android and iOS. Because much of the collection is occurring through the advertising ecosystem—not code developed by the app creators themselves—this data collection is likely happening both without users’ and even app developers’ knowledge.
Microsoft today unleashed updates to plug a whopping 161 security vulnerabilities in Windows and related software, including three “zero-day” weaknesses that are already under active attack. Redmond’s inaugural Patch Tuesday of 2025 bundles more fixes than the company has shipped in one go since 2017.
Rapid7‘s Adam Barnett says January marks the fourth consecutive month where Microsoft has published zero-day vulnerabilities on Patch Tuesday without evaluating any of them as critical severity at time of publication. Today also saw the publication of nine critical remote code execution (RCE) vulnerabilities.
The Microsoft flaws already seeing active attacks include CVE-2025-21333, CVE-2025-21334 and, you guessed it– CVE-2025-21335. These are sequential because all reside in Windows Hyper-V, a component that is heavily embedded in modern Windows 11 operating systems and used for security features including device guard and credential guard.
Tenable’s Satnam Narang says little is known about the in-the-wild exploitation of these flaws, apart from the fact that they are all “privilege escalation” vulnerabilities. Narang said we tend to see a lot of elevation of privilege bugs exploited in the wild as zero-days in Patch Tuesday because it’s not always initial access to a system that’s a challenge for attackers as they have various avenues in their pursuit.
“As elevation of privilege bugs, they’re being used as part of post-compromise activity, where an attacker has already accessed a target system,” he said. “It’s kind of like if an attacker is able to enter a secure building, they’re unable to access more secure parts of the facility because they have to prove that they have clearance. In this case, they’re able to trick the system into believing they should have clearance.”
Several bugs addressed today earned CVSS (threat rating) scores of 9.8 out of a possible 10, including CVE-2025-21298, a weakness in Windows that could allow attackers to run arbitrary code by getting a target to open a malicious .rtf file, documents typically opened on Office applications like Microsoft Word. Microsoft has rated this flaw “exploitation more likely.”
Ben Hopkins at Immersive Labs called attention to the CVE-2025-21311, a 9.8 “critical” bug in Windows NTLMv1 (NT LAN Manager version 1), an older Microsoft authentication protocol that is still used by many organizations.
“What makes this vulnerability so impactful is the fact that it is remotely exploitable, so attackers can reach the compromised machine(s) over the internet, and the attacker does not need significant knowledge or skills to achieve repeatable success with the same payload across any vulnerable component,” Hopkins wrote.
Kev Breen at Immersive points to an interesting flaw (CVE-2025-21210) that Microsoft fixed in its full disk encryption suite Bitlocker that the software giant has dubbed “exploitation more likely.” Specifically, this bug holds out the possibility that in some situations the hibernation image created when one closes the laptop lid on an open Windows session may not be fully encrypted and could be recovered in plain text.
“Hibernation images are used when a laptop goes to sleep and contains the contents that were stored in RAM at the moment the device powered down,” Breen noted. “This presents a significant potential impact as RAM can contain sensitive data (such as passwords, credentials and PII) that may have been in open documents or browser sessions and can all be recovered with free tools from hibernation files.”
Tenable’s Narang also highlighted a trio of vulnerabilities in Microsoft Access fixed this month and credited to Unpatched.ai, a security research effort that is aided by artificial intelligence looking for vulnerabilities in code. Tracked as CVE-2025-21186, CVE-2025-21366, and CVE-2025-21395, these are remote code execution bugs that are exploitable if an attacker convinces a target to download and run a malicious file through social engineering. Unpatched.ai was also credited with discovering a flaw in the December 2024 Patch Tuesday release (CVE-2024-49142).
“Automated vulnerability detection using AI has garnered a lot of attention recently, so it’s noteworthy to see this service being credited with finding bugs in Microsoft products,” Narang observed. “It may be the first of many in 2025.”
If you’re a Windows user who has automatic updates turned off and haven’t updated in a while, it’s probably time to play catch up. Please consider backing up important files and/or the entire hard drive before updating. And if you run into any problems installing this month’s patch batch, drop a line in the comments below, please.
Further reading on today’s patches from Microsoft:
This is a current list of where and when I am scheduled to speak:
I’m speaking on “AI: Trust & Power” at Capricon 45 in Chicago, Illinois, USA, at 11:30 AM on February 7, 2025. I’m also signing books there on Saturday, February 8, starting at 1:45 PM.
I’m speaking at Boskone 62 in Boston, Massachusetts, USA, which runs from February 14-16, 2025.
I’m speaking at the Rossfest Symposium in Cambridge, UK, on March 25, 2025.
A very security-conscious company was hit with a (presumed) massive state-actor phishing attack with gift cards, and everyone rallied to combat it—until it turned out it was company management sending the gift cards.
Note the use of substr- we take the substr of $selectid from 0 to strlen($selectid)- aka, we take the entire string.
Perhaps this is leftover code, where once upon a time there was a prefix or suffix on the string which needed to be ignored. But the result is code that is rather dumb.
I call this an "un-representative line" because, according to David, the rest of the code in the extension was actually rather good. Even otherwise good code is not immune to having a little fart hiding under the covers, I suppose.
[Advertisement]
Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready.Learn more.
Author: Majoki Kenji adjusted the carbonized breastplate and finished his couture by placing the bulbous lenses under his eyelids. He looked in the mirror, but did not smile, though he was pleased. They did not smile, thus he would not. He left his aparto, a small green light on his chest blinking with every step, […]
Grün works for a contracting company. It's always been a small shop, but a recent glut of contracts meant that they needed to staff up. Lars, the boss, wanted more staff, but didn't want to increase the amount paid in salaries any more than absolutely necessary, so he found a "clever" solution. He hired college students, part time, and then threw them in the deep end of Perl code, a language some of them had heard of, but none of them had used.
It didn't go great.
# note that $req is immutable (no method apart from constructor sets a value for its members)subrelease{
my $req = shift;
my $body = 'operation:' . ' ';
if (uc($req->op()) eq 'RELEASE') {
$body .= 'release' . "\n";
# do more stuff to body
...
}
else {
$body = 'operation: request' . "\n";
}
if (uc($req->op()) ne'RELEASE') {
register_error('unable to send release mail');
}
# and so on
...
}
This method checks a $req parameter. Notably, it's not being passed as a prototype parameter, e.g. as part of the signature- sub release($req)- but accessed by shifting out of @_, the special variable which holds all the parameters. This is the kind of move that gives Perl it's reputation for being write only, and it's also a sign that they were cribbing off the Perl documentation as they write. For whatever reason, using shift seems to be the first way Perl documentation teaches people to write subroutines.
This whole thing is doing string concatenation on a $body variable, presumably an email body. I'd normally have unkind words here, but this is Perl- giant piles of string concatenation is just basically par for the course.
The "fun" part in this, of course, is the if statements. If the $req is to "RELEASE", we append one thing to the body, if it's not, we append a different thing. But if it's not, we alsoregister_error. Why couldn't that be in the else block? Likely because the poor developers didn't have a good understanding of the code, and the requirements kept changing. But it's a little head scratcher, especially when we look at the one place this function is called:
if (uc($req->op()) eq 'RELEASE') {
return release($req);
}
Now, on one hand, having the function check for its error condition and avoiding triggering the error condition at the call site is good defensive programming. But on the other, this all sorta smacks of a developer not fully understanding the problem and spamming checks in there to try and prevent a bug from appearing.
But the real fun one is this snippet, which seems like another case of not really understanding what's happening:
Now, of course, it's not the developers' fault that they didn't have a good picture of what they should have been doing. Lars was trying to save money by hiring the inexperienced, and as usually happens, the entire thing cost him more money, because Grün and the rest of the team needed to go back over the code and rewrite it.
The upshot, for our college students, is that this was a good resume builder. They've all since moved on to bigger companies with better paychecks and actual mentoring programs that will develop their skills.
[Advertisement]
BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!
So from the beginning I put password protection on my gateway. This had been done in such a way that even if UK users telephoned directly into the communications computer provided by Darpa in UCL, they would require a password.
In fact this was the first password on Arpanet. It proved invaluable in satisfying authorities on both sides of the Atlantic for the 15 years I ran the service  during which no security breach occurred over my link. I also put in place a system of governance that any UK users had to be approved by a committee which I chaired but which also had UK government and British Post Office representation.
Not sure this will matter in the end, but it’s a positive move:
Microsoft is accusing three individuals of running a “hacking-as-a-service” scheme that was designed to allow the creation of harmful and illicit content using the company’s platform for AI-generated content.
The foreign-based defendants developed tools specifically designed to bypass safety guardrails Microsoft has erected to prevent the creation of harmful content through its generative AI services, said Steven Masada, the assistant general counsel for Microsoft’s Digital Crimes Unit. They then compromised the legitimate accounts of paying customers. They combined those two things to create a fee-based platform people could use.
It was a sophisticated scheme:
The service contained a proxy server that relayed traffic between its customers and the servers providing Microsoft’s AI services, the suit alleged. Among other things, the proxy service used undocumented Microsoft network application programming interfaces (APIs) to communicate with the company’s Azure computers. The resulting requests were designed to mimic legitimate Azure OpenAPI Service API requests and used compromised API keys to authenticate them.
Author: Julian Miles, Staff Writer The gigantic purple and gold sphere is set at the centre of the dining table when Menna races downstairs. “You’re home! I thought I- What’s that?” Vendi gives me a smile. She predicted every word. Then again, she’s been working from home and living with our delightfully stream-of-consciousness tornado of […]
A while back I wrote:I've had many, many failures in my life. (Hm, maybe I should write a blog post about that.)This is that post. I'm writing it not as a lament, but rather because I've ended up in a good place in life despite my extraordinary track record of failing at just about everything I've ever tried. If my younger self had heard these stories he might
have had a less