My 3D printer took me on another adventure recently. Or, well, actually
someone else's 3D printer did: It turns out that building a realtime
system (with high-speed motors and a 300-degree metal rod to control)
by cobbling together a bunch of Python and JavaScript on an anemic
Arm SoC with zero resource isolation doesn't always meet those realtime
guarantees. In particular, after installing a bunch of plugins,
people would report the infamous “MCU timer too close” Klipper error,
which essentially means that the microcontroller didn't get new commands
in time from the Linux host and shut down as a failsafe. (Understandably,
this sucks if it happens in the middle of an eight-hour print.
Nobody has really come up with a way to reliably resume from these failures yet.)
I was wondering whether it was possible to provoke this and then look
at what was actually going on in the scheduler; perf sched lets you
look at scheduling history on the host, so if I could reproduce the
error while collecting data, I could go in afterwards and see what the
biggest CPU hog was; or at least, that was the theory.
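Concretely, the standard workflow looks something like this (sketched here for illustration, not a transcript of the exact commands):

perf sched record -- sleep 600    # record scheduler events while trying to reproduce the error
perf sched latency --sort max     # afterwards, summarize per-task scheduling latencies
perf sched timehist               # or walk through the full scheduling timeline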
However, to my surprise, perf sched record died with an error essentially
saying that the kernel was compiled without ftrace support (which is
needed for the scheduler hooks; it's somewhat possible to do without
by just doing a regular profile, but that's a different story and
much more annoying). Not very surprising; these things tend to run
stone-age vendor kernels from some long-forgotten branch with zero
security support and seemingly no ftrace.
Now, I did not actually run said vendor kernel; at some point, I upgraded
to the latest stable kernel (6.6) from Armbian,
which is still far from mainline (for one, it needs to carry out-of-tree
drivers to make wireless work at all) but which I trust infinitely more to
actually provide updated kernels over time. It doesn't support ftrace
either, so I thought the logical step would be to upgrade to the latest
“edge” kernel (aka 6.11) and then compile with the right stuff on.
After a couple of hours of compiling (almost nostalgic to have such slow
kernel compiles; cross-compiling didn't work for me!), I could boot into the new kernel,
and then Klipper would refuse to start because it couldn't find the
host thermal sensors. (I don't know exactly why it is a hard dependency,
but seemingly, it is.) A bit of searching shows that this error message
is doubly vexing; it should have said “wait for supplier /i2c@fdd40000/pmic@20/regulators/SWITCH_REG1”
or something similar, but it ends in just a space and then nothing.
So evidently this has to be something about the device tree (DT),
and switching out the new DT for the old one didn't work. Bisecting
was also pretty much out of the question (especially with 400+ patches
that go on top of the git tree), but after a fair bit of printk debugging
and some more reading, I figured out what had happened:
First, the sun8i-thermal driver, which had been carried out-of-tree
in Armbian, had gone into mainline. But it was in a slightly different
version; while the out-of-tree version used previously (in Armbian's
6.6 kernel) had relied on firmware (run as part of U-Boot, as I understand it)
to set a special register bit, the mainline version would be stricter
and take care to set it itself. I don't really know what the bit does,
short of “if you don't set it, all the values you get back are really
crazy”, so this is presumably a good change. Thus, the driver would set a bit in a special
memory address somewhere (sidenote: MMIO will always feel really weird
to me; like, some part of the CPU has to check all memory accesses in case they're
really not to RAM at all?), and for that, the thermal driver would need
to take on a DT reference to the allwinner,sram (comma is evidently
some sort of hierarchical separator) node so that it could get its
address. Like, in case it was moved around in future SoCs or something.
Second, there was an Armbian patch that dealt with exactly these allwinner,sram
nodes in another way; it would make sure that references to them
would cause devlink references between the nodes. I don't know what those are
either, but it seems the primary use case is for waiting: If you have a
dependency from A to B, then A's initialization will wait until B is ready.
The configuration bit in question is always ready, but I guess it's cleaner
somehow, and you get a little symlink somewhere in /sys to explain the
relationship, so perhaps it's good? But that's what the error message means;
“A: deferred probe pending: wait for supplier B” means that we're not probing
for A's existence yet, because it wants B to supply something and B isn't
ready yet.
But why is the relationship broken? Well, for that, we need to look at
the relevant code in the patch:
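The patch itself isn't reproduced here, but the logic it implemented was roughly the following (a sketch with made-up function names; of_get_parent() is the real kernel helper, and refcounting is elided):

#include <linux/of.h>

/* Sketch: for every allwinner,sram phandle reference, the patch assumed the
 * referenced node is a grandchild of the syscon node, and walked up exactly
 * two levels to find the devlink supplier. */
static struct device_node *sram_supplier(struct device_node *ref)
{
	/* e.g. ref = .../syscon/sram/section -> parent = sram -> grandparent = syscon */
	return of_get_parent(of_get_parent(ref));
}

/* If ref is the syscon node itself (as with the mainline thermal driver),
 * the walk overshoots, eventually going past the root of the tree and
 * ending up with NULL, so no supplier is found and the probe defers forever. */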
So that explains it; the code expects that all DT references are to a child
of a child of syscon to find the supplier, and just goes up two levels
to find it. But for the thermal sensor, the reference is directly to the
syscon itself, and it goes up past the root of the tree, which is, well,
NULL. And then the error message doesn't have a node name to print out,
and the dependency just fails forever.
So that's two presumably good changes that just interacted in a really bad
way (in particular, due to too little flexibility in the second one).
A small patch
later, and the kernel boots with thermals again!
Oh, and those scheduling issues I wanted to debug? I never managed to reliably
reproduce them; I have seen them, but they're very rare for me. I guess that
upstream for the plugins in question just made things a bit less RAM-hungry in
the meantime, or that having a newer kernel improves things enough in
itself. Shrug. :-)
Beyond the Fringe is a military science fiction short story
collection set in the same universe as Artifact Space. It is intended as a bridge between that novel and
its sequel, Deep Black.
Originally I picked this up for exactly the reason it was published: I was
eagerly awaiting Deep Black and thought I'd pass the time with some
filler short fiction. Then, somewhat predictably, I didn't get around to
reading it until after Deep Black was already out. I still read
this collection first, partly because I'm stubborn about reading things in
publication order but mostly to remind myself of what was going on in
Artifact Space before jumping into the sequel.
My stubbornness was satisfied. My memory was not; there's little to no
background information here, and I had to refresh my memory of the
previous book anyway to figure out the connections between these stories
and the novel.
My own poor decisions aside, these stories are... fine, I guess? They're
competent military SF short fiction, mostly more explicitly military than
Artifact Space. All of them were reasonably engaging. None of them
were that memorable or would have gotten me to read the series on their
own. They're series filler, in other words, offering a bit of setup for
the next novel but not much in the way of memorable writing or plot.
If you really want more in this universe, this exists, but my guess (not
having read Deep Black) is that it's entirely skippable.
"Getting Even": A DHC paratrooper lands on New Shenzen, a
planet that New Texas is trying to absorb into the empire it is attempting
to build. He gets captured by one group of irregulars and then runs into
another force with an odd way of counting battle objectives.
I think this exists because Cameron wanted to tell a version of a World
War II story he'd heard, but it's basically a vignette about a weird
military unit with no real conclusion, and I am at a loss as to the point
of the story. There isn't even much in the way of world-building. I'm
probably missing something, but I thought it was a waste of time. (4)
"Partners": The DHC send a planetary exobiologist to New Texas
as a negotiator. New Texas is aggressively, abusively capitalist and is
breaking DHC regulations on fair treatment of labor. Why send a planetary
exobiologist is unclear (although probably ties into the theme of this
collection that the reader slowly pieces together); maybe it's because
he's originally from New Texas, but more likely it's because of his
partner. Regardless, the New Texas government are exploitative assholes
with delusions of grandeur, so the negotiations don't go very smoothly.
This was my favorite story of the collection just because I enjoy people
returning rudeness and arrogance to sender, but like a lot of stories in
this collection it doesn't have much of an ending. I suspect it's mostly
setup for Deep Black. (7)
"Dead Reckoning": This is the direct fallout of the previous
story and probably has the least characterization of this collection.
It covers a few hours of a merchant ship having to make some fast
decisions in a changing political situation. The story is framed around a
veteran spacer and his new apprentice, although even that frame is mostly
dropped once the action starts. It was suspenseful and enjoyable enough
while I was reading it, but it's the sort of story that you forget
entirely after it's over. (6)
"Trade Craft": Back on a planet for this story, which follows
an intelligence agent on a world near but not inside New Texas's area of
influence. I thought this was one of the better stories of the collection
even though it's mostly action. There are some good snippets of
characterization, an interesting mix of characters, and some well-written
tense scenes. Unfortunately, I did not enjoy the ending for reasons that
would be spoilers. Otherwise, this was good but forgettable. (6)
"One Hour": This is the first story with a protagonist outside
of the DHC and its associates. It instead follows a PTX officer (PTX is a
competing civilization that features in Artifact Space) who has
suspicions about what his captain is planning and recruits his superior
officer to help him do something about it.
This is probably the best story in the collection, although I personally
enjoyed "Partners" a smidgen more. Shunfu, the first astrogator who is
recruited by the protagonist, is a thoroughly enjoyable character, and the
story is tense and exciting all the way through. For series readers, it
also adds some depth to events in Artifact Space (if the reader
remembers them), and I suspect will lead directly into Deep Black.
(7)
"The Gifts of the Magi": A kid and his mother, struggling
asteroid miners with ancient and malfunctioning equipment, stumble across a
DHC ship lurking in the New Texas system for a secret mission. This is a
stroke of luck for the miners, since the DHC is happy to treat the serious
medical problems of the mother without charging unaffordable fees the way
that the hyper-capitalist New Texas doctors would. It also gives the
reader a view into DHC's covert monitoring of the activities of New Texas
that all the stories in this collection have traced.
As you can tell from the title, this is a Christmas story. The crew of
the DHC ship is getting ready to celebrate Alliday, which they claim rolls
all of the winter holidays into one. Just like every other effort to do
this, no, it does not, it just subsumes them all into Christmas with some
lip service to other related holidays. I am begging people to realize
that other religions often do not have major holidays in December, and
therefore you cannot include everyone by just declaring December to be
religious holiday time and thinking that will cover it.
There are the bones of an interesting story here. The covert mission setup
has potential, the kid and his mother are charming if cliched, there's a
bit of world-building around xenoglas (the magical alien material at the
center of the larger series plot), and there's a lot of foreshadowing for
Deep Black. Unfortunately, this is too obviously a side story and a
setup story: none of this goes anywhere satisfying, and along the way the
reader has to endure endless rather gratuitous Christmas references, such
as the captain working on a Nutcracker ballet performance for the ship
talent show.
This isn't bad, exactly, but it rubbed me the wrong way. If you love
Christmas stories, you may find it more agreeable. (5)
Alvin Montessori and Captain Ohm are leading a landing party through a park near the city of Cal’Mari – now renamed “squid” — guided by a mysterious local named (according to the demmie-programmed translator device) “Earl Dragonlord.”
At dusk, when the demmies on the team seem most susceptible to superstitious imaginings, suddenly shapes loom upon them from the growing gloom…
Montessori recounts:
“As I turned, a horrific howl pealed. Then another, and still more from all sides, baying like hounds from hell. Before I could finish spinning about, a dark, flapping shape descended over me, enveloping my face in stifling folds and choking off my scream.”
Consciousness returned in fits and starts, accompanied by a rhythmic, irritating, “plinking” sound – the repetitive dripping of water into some pool. Even before I opened my eyes, mineral aromas and stony echoes told me that I must be underground, lying on some cold, gritty floor.
Spikes of yellow light stabbed when I cracked my eyelids, but I tried not to move or make a sound as blurry outlines gradually formed into steady images – a stretch of rocky wall; a smoldering torch set in an iron sconce; stacks of wooden crates covered with frayed tarps; a rough wooden table, where lay a platter, stacked with raw meat steaks. A glass tankard frothed with some kind of brownish ale.
A pair of pale, squinting eyes peered over the tankard’s rim as it rose to meet a broad face, nearly covered by a riot of dark fur.
The meniscus level of ale dropped swiftly, accompanied by slurping gulps as the tankard swung horizontal, draining down that hairy gullet. With a deep satisfied sigh, the furry one licked the goblet’s rim with a prodigious tongue. Overall, the shape of the skull was much like a person’s. The eyes, though recessed, were green and still somewhat humanoid. Only where Earl Dragonlord had possessed canine uppers even pointier than a demmy’s, this fellow had huge, heavy lower tusks, jutting up to graze his shaggy cheeks.
The flagon slammed down and he started toward the pile of steaks, salivating prodigiously… then he stopped, sniffing the air. A matched pair of splendidly huge eyebrows arched as he turned toward me, grinning impressively.
My captor must not have come into contact with the translator-converter. Or else the device was knocked out during the ambush. No matter. I never believed in that method of dealing with language differences, anyway. “When in Rome…” begins an old human expression that’s good advice for any traveler.
I tongued one of my molars, turning on the interpreter nanos in my own ear canal.
“Grimble gramble gnash… so-o-o it’s no-o-o yoosh pretending-g-g,” rumbled the deep, slurred voice, which grew steadily easier to understand. “I ken when a man’s scannin’ me, though ’is gaze be narrow as a Nomort’s charity.”
I opened my eyes fully and sat up on one elbow, wincing just a little from sharp twinges.
“I suppose I’m your prisoner,” I said, subvocalizing first in my own language, then relaxing to let my laryngeal nano-woofers fashion the equivalent in local dialect.
The hirsute fellow replied with what I took to be a shrug, using shoulders the size of hamhocks. When he next opened his mouth, what emerged was a hearty, majestic belch.
I made certain to look impressed.
“Hm. Well said. I take it you are what they call a Lik’em.”
If he winced at my use of the term, it was hidden by the mat of hair covering all but his nose and eyes.
“This week I seek no relief, ’xcept to be what I be, and am what I am. You should see me elsetimes. Handsome bugger, or so says my mirror. An’ what about you? What’s your fate? To eat, or be ate?”
A queer question. It made me glance, against my better wishes, at the stack of bloody cutlets on his plate.
“My name is Dr. Alvin Montessori. And I’m not sure I understand what you mean. Someone recently told me that I looked like a… a Standard.”
My host grunted expressively. “So does a corpambulist, when he’s new an’ not too smelly. So’s a Nomort, in daylight. Heck-o, you should see me most days when there’s no moon in view. Smooth as a baby an’ don’t say maybe!” He guffawed heartily, a friendly sound that would have cheered me, were not beads of saliva running down his yellow tusks and pooling on his lower lip before they spilled on the deeply stained tabletop.
Questions had been swirling in my head ever since we met Earl Dragonlord, about the social class structure on this world. I had a feeling I wasn’t going to like the answers.
“Let’s say I am a Standard. Does that automatically mean I’m slated for somebody’s dinner table?”
My host sniggered, as if amused by my ignorance.
“In some measure that’s up to the Standard hisself.”
“And I suppose Lik’ems and corpsic—”
“Corpambulists,” he corrected. “Though they prefer bein’ called Zoomz. T’is easier to pronounce, especially in their condition.”
“Zooms?” I’m afraid I rolled my eyes. “Then Lik’ems and Zooms are devourers of—”
“Hey. Don’t pin the whole rap on us! There’s Nomorts, too, y’know.”
Nomorts… such as Earl Dragonlord. The native I last saw guiding my captain and crewmates toward his home. His lair.
I felt a chill that had little to do with the dank, underground cold. Turning toward the torch, I squinted so that its light pierced between my eyelids in sharp, diffracting rays. My nose began to tickle.
“So,” I asked. “What must a Standard do in order to keep from being someone’s dinner?”
The furry humanoid grinned, his tusks gleaming. “You mean you really don’t know? Then as we suspected—”
The tickling light beams struck a nerve at last. I gasped… then bellowed a ferocious sneeze.
The abrupt noise sent my captor toppling backward, off his chair. If my intent had been to jump him, that would have been the time. But I only took the occasion to gather myself up to one knee, pulling in my collar tab.
A fleecy, dark mane reappeared in view, rising above the table, followed by peering eyes.
“Wha… what was that?”
“Just a sneeze. It’s freezing down here, don’t you think? Doesn’t a solitary captive like me deserve a blanket, after being attacked on the darkened streets of your urb district, knocked out, and dragged underground, away from my friends?”
“That was a sneeze? It sounded like a cross ’tween a hellion howl and a razortooth’s roar.” He blinked some more. “I thought you said you was a Standard.”
I divided my attention, as another voice buzzed in my ears.
“Advisor Montessori, this is Commander Talon, on the bridge. Thank heavens you’re all right! I assume from your phrasing that you’re alone underground, under some type of coercion, and out of contact with the Captain. Is that correct?”
Demmies are sharp and quick, when they decide to focus, and Talon took focus seriously. I shivered to reinforce the impression that I must keep my hand on my collar. Facing the Lik’em, I spoke sharply, as if to answer his question.
“I never said I was a member of the planetwide social class that’s apparently preyed upon by three other sub-races of humanoids… those three groups being called the corpambulists, whom I’ve never seen; and the elegant Nomorts, one of whom I last saw guiding my comrades toward castle-like structures on a hill west of the park, presumably into a trap; or Lik’ems like you my captor, who seem to grow abundant lower bicuspids and facial fur during certain times of the month, and relish beer with their raw meat.”
The Lik’em stared at me, rising the rest of the way. “Uh, why are you talkin’ like that?”
“How should I talk to a fellow who has taken away my belt pouch and all my tools, and now holds me captive in a subterranean chamber, a little over two meters in height and roughly three meters long by four wide, with a tunnel exiting along the long axis? There you are, standing almost two meters tall, though in a bit of a forward-canine crouch, on the other side of a table piled high with raw steaks, and you have the nerve to ask—”
“We’re homing in on your signal now, Advisor. I don’t think we can read quite the kind of detail you’re giving us. Not through solid rock. But the room dimensions should help us track you down.”
“—have the nerve to ask why I’m talking like this? You really don’t know why I’m talking like this?”
The Lik’em shook his head vigorously, eyes betraying growing worry. “Look, Doc, maybe we got off to a bad start. My name’s Lorg.” He hurried over to a pile of tarps in the corner. “Here, let me get you that blanket—”
“Got it!” The voice of the ship’s exec cut in. “Hold on, Advisor, we’ve found your locus, in a cavity underneath one of their streets. I’m warming up the blasters right now. Just give us a few seconds. We’ll rip away thirty meters of rock and have you outta there in a jif—”
“No!” I cried out, leaping to my feet so fast that I lost contact with the throat mike. Lorg jumped back in dismay, yelping like a puppy with its tail caught in a door.
I pressed my uniform collar once more. “Don’t you dare!” I reiterated. My heartbeat raced, knowing how quickly demmies can work when they think they’re coming to the rescue of a friend. Any moment now, the planetary crust over my head might start boiling into the atmosphere, surgically peeled in molten sheets by a giga-terrawatt laser.
“Just… just hold it right there,” I added, in a lower tone. “Hold it and stay calm.”
Lorg stared at me, clutching the blanket in front of him, his jaw quivering, tusks and all.
“I’m calm. I’m calm!”
Commander Talon also replied – “Roger, Doctor Montessori. Understood. Standing by.”
I tried to think. So far I’d been improvising… a technique which isn’t taught much at Earth’s Advisor Academy, since that skill is usually left to demmies. (It is their strongest trait.) But sometimes a human has to do the demmiest things. At this point I had my captor intimidated, but I knew that would give way when he realized my loud bark wasn’t backed up with bite.
I took an assertive step towards him. “Where are we now? In the sub-urb?”
Lorg nodded. “Under my own place. You were closest to the manhole, so I grabbed you before the Renks snatched ever’body else.”
This confused me. “You mean the captai… my friends aren’t here too?”
“Naw. The Renks laid a trap for ’em. Me an’ my friends were lucky to get you.”
“Renks? Who are they? Are they Nomorts?” My suspicions of Earl Dragonlord flared. Had he led our party into an ambush?
But that didn’t make sense! We had been following Earl toward the hill of castles he called home. Why should he abduct victims who were already heading into his lair?
“Renks is a kind of Zoomz,” Lorg said, with a shiver and a shake of his head. “They swarmed over y’all. We hardly had time to—”
“Shut up, Lorg!”
A new, harsh voice cut in, making us both startle and turn. At the entrance to the underground chamber, three more Lik’ems had appeared, even larger than my host. Foremost among the newcomers was a giant figure, bulging out of his clothes, which resembled some kind of striped tracksuit, with a sweater draped over the shoulders. Pale yellow fur stood on end with rage, and his curling tusks made Lorg look like a poster boy for Orthodontia Monthly.
“Besh!” Lorg cried out. “I was just—”
“Playing with your food again, I know.” The bigger Lik’em sauntered in – if one can “saunter” with tree-like arms that almost brush the floor. “How many times do I haveta tell you? If you talk to it, that only makes it harder to eat.”
The other two Lik’ems leaned against the door and chortled, a sound vaguely like what an engine might say, after being fed a treat of corundum sand. Lorg turned red – in those few bare patches showing through his matted pelt.
“Uh, Besh, I don’t think this’s food at all. It… he ain’t like any Standard I ever seen.”
“Nonsense! Look at him! X’cept for that funny nose, and those flattish eyes, that silly chin, and smooth fore’ead—”
What funny nose? I thought, a bit put out.
“Besides, what were Renks doing out there? Hunting for partners in a game of spin the skull? They must want this meat pretty bad, risking a foray into our urb like that.”
“Exactly!” Lorg said, gaining some feeling in his voice. “You ever see that happen before? Or for that matter, you ever see Standards come strolling through the urb at night? With a moon full? I tell you, them Renks wanted somethin’ more’n just Standard flesh.”
Besh seemed torn between affront at Lorg’s daring to talk back, and interest in the possibilities he’d raised.
“Not a regular Standard, eh? Maybe something tastier?”
“Maybe something a whole lot more dangerous,” I interjected, speaking with more steadiness than I felt inside.
Besh looked me over, and barked a savage laugh. He ambled toward me with an air of relish… and mustard and mayonnaise, I’d wager.
“I don’t scare off easy, meat. I’m Besh, night-howler and hill-loper! Runner in the woods and bed-lover of all three moons! My yowl curdles milk in far counties. It shatters windows in the Standards’ armored high rises. Nomorts take a sunburn, before they face Besh. Little baldie, you dare try to out-bluff me?”
As he moved closer, flexing hands like the scoops at the end of a steam shovel, Lorg tugged at his sleeve.
“Watch out, Besh. He makes this noise.”
I had been getting ready for a fight, relaxing into Judo stance… as if that would help much against four such demons. But Lorg’s words gave me an idea. I pressed my collar again.
“Did that noise impress you, Lorg? Why, I wouldn’t insult Besh with anything so puny.”
This time the big Lik’em stopped, clearly intrigued.
“Oh yeah?” he asked.
“Yeah! Besh calls himself night-howler? Why, I can out-bellow him anytime, anywhere. I can make clamor that’ll rattle your gums and shake your teeth out of their sockets. I can make water rise up and stones fall from above. You want noise? I’ll give you noise!”
Would Commander Talon understand what I wanted? By sonic induction, it should be easy enough to transmit vibrations directly into the bedrock all around this chamber – something loud and awe-inspiring. It would only be a matter of timing, triggering it to coincide with my surreptitious cue. Just the sort of improvised trick I had seen the Captain pull, plenty of times.
I felt a moment’s triumph from the facial expressions of Besh and the others. Clearly, bravado and bluster were components of Lik’em character, part of how they sorted out their own pecking order. Now to back up my bravado with something that would turn them into jibbering converts, eager to help me any way they could.
“Right!” I took a step forward, brandishing a fist. “I’ll make these rock walls tremble with such a din, you’ll think the world is ending!”
The Lik’ems stared at me, wide-eyed and nervously expectant.
Seconds passed, measured by the slow plinking of condensation droplets, falling unhurriedly into a nearby puddle. With each “plunk” my heart sank. Where was Talon? Why didn’t he answer, to confirm my request?
Besh blinked once. Twice. Scratching his shaggy, blond mane, he ran his tongue back and forth a few times between his tusks, making a thoughtful clicking.
He glanced at Lorg, who looked back at him and shrugged.
“Okay, I’ll bite,” Besh said, facing me once more. “What noise is it you were thinkin’ of impressin’ us with?”
“Yeah,” Lorg added, a little eagerly. “Will it hurt?”
I pressed the collar mike against my throat, with desperate urgency.
“Hurt? Why… I can make a racket that will shiver these chambers and rattle your soul! A cacophony to show you I’m nobody’s meat. It’ll petrify your very bones, shrivel your guts, shake your teeth—”
“We heard that part already,” Lorg complained, a little churlishly. I really was doing my best, under the circumstances.
“Enough!” Besh roared, setting off his own reverberations and sweeping the plate of cutlets off the table, crashing to the floor.
“Enough braggin’! Just do it, meat. Give it a shot.”
He crossed his arms, waiting.
My mind whirled. What had gone wrong? Was it a problem with my microphone or nanos? Or had something gone amiss with the Clever Gamble, in orbit?
The eyes of the Lik’em chieftain told me, I had but seconds left.
Improvise! Part of me insisted.
But I’m no demmie! Another part replied. I’m a logical Earthman!
That thought cheered me, just a little. Enough to find some saliva in my dry mouth, to wet my lips.
I brought them together… and blew.
This isn’t going to work, I thought, as I began a softshoe tap-shuffle, to my own whistling accompaniment.
A follow-up release 0.3.11 to the recent 0.3.10
release of the anytime
package arrived on CRAN two
days ago. The package is fairly feature-complete, and code and
functionality remain mature and stable, of course.
anytime
is a very focused package aiming to do just one thing really
well: to convert anything in integer, numeric, character,
factor, ordered, … input format to either POSIXct (when called as
anytime) or Date objects (when called as
anydate) – and to do so without requiring a format
string as well as accommodating different formats in one input
vector. See the anytime page,
or the GitHub repo
for a few examples, and the beautiful documentation site
for all documentation.
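A quick illustration (mine, not from this announcement) of the no-format-string behaviour:

library(anytime)
anytime("2016-09-01 10:11:12")                      ## character input -> POSIXct
anydate(20160101 + 0:2)                             ## numeric yyyymmdd -> Date
anydate(c("2016-09-01", "2016/09/02", "20160903"))  ## several formats in one input vector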
This release simply skips one test file. CRAN labeled an error ‘M1mac’, yet it
did not reproduce on any of the other M1 macOS systems I can access
(macbuilder, GitHub Actions), as it appeared related to a local setting of
timezone values I could not reproduce anywhere. So the only way to get rid of
the ‘fail’ is to … not run the test. Needless to say, the upload process was a
little tedious, as I got the passive-aggressive ‘not responding’ treatment on a
first upload and the required email answer it led to. Anyway, after a few days,
and even more deep breaths, it is taken care of, and now the package’s
results standing is (at least currently) pristinely clean.
Author: Jean-Philippe Martin The traveler came to our house the day before harvest, detective. I did not notice anything amiss. He said he had nothing but was willing to work, so we housed him and showed him the next day how to pack, haul, and stack the boxes of fruit. He went along fine with […]
I have been working all year on a solar upgrade aimed at December. Now here
it is, midwinter, and my electric car is charging on a cloudy day from my
offgrid solar fence.
I lived happily enough with 1 kilowatt of solar that I
installed in 2017.
Meanwhile, solar panel prices came down massively, incentives increased
and everything came together: This was the year.
In the spring I started clearing forest trees that were leaning over the house,
making both a firebreak and a solar field.
In June I picked up a pallet of panels in a box truck.
A porch with a bunch of solar panels, stacked on edge leaning up against the wall. A black and white cat is sprawled in front of them.
In August I bought the EV and was able to charge it offgrid from my old
solar system... a few miles per day on the most sunny days.
Me standing in front of the solar fence, which is 10 panels long
For the past several weeks I have been installing additional solar panels
on ballasted ground mounts full of gravel. At this point I'm halfway
through installing my 30-panel upgrade.
The design goal of my 12 kilowatt system is to produce 1 kilowatt of power all
day on a cloudy day in midwinter, which allows swapping between major loads (EV
charger, hot water heater, etc) on a cloudy day and running everything on a
sunny day. So the size of the battery bank doesn't matter much. Batteries are
getting cheaper fast too, but they are a wear item, so it's better to oversize
the solar system and minimize the battery.
A lot of this is nonstandard and experimental. And that makes sense with the
price of solar panels. It costs more to mount solar panels now than the panels
are worth. And non-ideal panel orientation isn't a problem when the system is
massively overpaneled.
I'm hoping to finish up the install before the end of winter. I have more trees
to clear, more ballasted ground mounts to install, and need to come up with
something even more experimental for a half dozen or so panels. Using solar
panels as mounts for solar panels? Hanging them from trees?
Soon the wan light will fade; time to head off to the solstice party to enjoy
the long night, and a bonfire.
Solar fence with some ballasted ground mounts in front of it, late evening light. Old pole mounted solar panels in the foreground are from the 90's.
We haven't stopped moving ahead. Nor will we. And hence, with the aim of ending a tumultuous year on a high note... very high... here's my roundup of recent space science news - and upcoming missions... and so on...
== Lots of stuff out there! ==
Asteroid 5748DaveBrin
First, here's Asteroid 5748DaveBrin, kindly named by discoverer Eleanor "Glo" Helin, back in the 20th Century. Since then, many thousands more have been tracked, but so many more must be, in order to ensure our safety (from dinosaur-killers or city-smashers) and to assay future wealth!
In an era of Big Government and Big Commercial Science, the B612* Foundation has a special niche, software-mining massive old datasets, and thusly finding and cataloguing more rocks out there than anyone! Consider B612 for your list of save-the-world donations! (*I am on the B612 advisory council.)
(If this is your season for general philanthropy or giving, or investing in a better tomorrow, here's my annual appeal that you consider the win-win-win of Proxy Activism! And again, do include potentially world-saving B612!)
But sure, the Big Guys will also help.
In fact, there are high hopes and expectations for the Vera Rubin (formerly Large Synoptic Survey) Telescope opening in Chile, next year. It will scan the sky in vast sweeps, comparing images from night to night, for transients and changes, discovering far more supernovas and novas, for example...
... but also possibly millions of previously undetected asteroids. See this chart provided by the Asteroid Institute and B612. Together, we are finding thousands of objects and appraising their potential to endanger our planet. Or else to make our children rich.
Even before the Vera Rubin scope commences to tally many new objects in the Kuiper belt beyond Neptune, some surprises are already emerging about that cold, dark region (of which Pluto is a part). Astronomers have just found hints of an unexpected rise in the density of Kuiper Belt objects, or KBOs, between 70 and 90 AU from the Sun. In the region between 55 and 70 AU, however, next to nothing has been found.
== So, who should do the exploring, out there? ==
Well, if you are talking about just exploring – poking at new places and doing science – then robotics wins, hands-down.
Sorry but machines are better for poking at the edges. That’s what NASA/Japan and Europe should do with respect to the Moon, instead of silly footprint stunts. For 5% of the cost of “Artemis” we could robotically seek and verify, or else (more likely) refute those tall tales of ‘lunar resources.’
But there’s another mission for astronauts – plus tourists and researchers – in space. And that is studying how humans can learn to actually live and work out there.
For the near term, that’ll entail a lot of work in Low Earth Orbit (LEO), where issues of supply, recycling and radiation safety are easier to control. And above all, we should (must!) finally build spinning facilities that can tell us (at long last) what gravity conditions humans need, in order to survive and stay healthy.
Over 60 years since Gagarin, we still haven't a clue how to answer that simple question! It’s the fascinating topic that Joseph Carroll elucidates in "What do we need astronauts for?" published in The Space Review.
He follows that up with a more detailed article, "How to test artificial gravity" - about near term missions to experiment with spinning artificial gravity (SAG), starting with a simple test using just a Crew Dragon and the upper Falcon stage that launched it. Then moving on to a highly plausible path toward making space a vastly more welcoming place.
== Looking ahead.... Future Space Missions ==
Jeff Bezos's Blue Origin plans to skip from the tiny-but-self-landing New Shepard, leaping way past sturdy-reliable self-landing Falcon 9 and triple self-landing Falcon Heavy, all the way to landing sub-Starship New Glenn on a barge. Or so they say. I guess we'll see - maybe soon.
Rocket Lab’s twin probes to study aurorae and the atmosphere of Mars were made super-inexpensively. They’ll head out there soon (NOT cheaply) on the New Glenn heavy.
Among many terrific initiatives seed-funded by NIAC (where I was an advisor for a decade), one getting attention in the New Yorker is the Farview radio telescope to be set up on the Moon’s far side. Though the article made an error in the name; it’s NASA’s Innovative & Advanced Concepts program (NIAC). But yeah, look at the range of incredible, just-short-of-science-fiction concepts!
One of my favorite NIAC concepts of the last few years was the Linares Statite, that would hover on sunlight, way out at the asteroid belt, ready to fold its wings and dive like a peregrine falcon past the sun to catch up with almost anything, such as another 'Oumuamua interstellar visitor. Slava Turyshev's Project Sundiver has shown that you get a lot of speed if you plummet to graze just past Sol, then snap open your lightsail at nearest passage. In fact it is the best way to streak to the Kuiper Belt. And beyond!
That's just one of many potential uses of lightsails that are described - via both stories and nonfiction - in the 21st Century edition of Project Solar Sail! Revised and updated, then edited by me and Stephen W. Potts, this great new version will be featured by the Planetary Society next month!
Finally....
Beautiful images of the hot place. The ESA/JAXA BepiColombo mission has successfully completed its fourth of six gravity assist flybys at Mercury, capturing images of two special impact craters as it uses the little planet’s gravity to steer itself on course to enter orbit around Mercury in November 2026.
And an AI-piloted F-16 (with human observers) outperformed conventionally piloted F-16s in tests including 'dogfights.'
My friend and former NIAC colleague-physicist John Cramer (who just turned 90; happy birthday John!) two decades ago used data from NASA’s WMAP survey to produce "The Sound of the Big Bang.” … A recent topic of Brewster Rockit!
And yeah, may you and yours... and all of us... manage to persevere... and yes thrive(!) through "interesting times."
And may we meet and party hearty eventually... out there.
We did it again™! Just in time, we’re excited to announce the release of Grml stable version 2024.12, code-named ‘Adventgrenze’! (If you’re not familiar with Grml, it’s a Debian-based live system tailored for system administrators.)
This new release is built on Debian trixie, and for the first time, we’re introducing support for 64-bit ARM CPUs (arm64 architecture)!
I’m incredibly proud of the hard work that went into this release. A significant amount of behind-the-scenes effort went into reworking our infrastructure and redesigning the build process. Special thanks to Chris and Darsha – our Grml developer days in November and December were a blast!
For a detailed overview of the changes between releases 2024.02 and 2024.12, check out our official release announcement. And, as always, after a release comes the next one – exciting improvements are already in the works!
BTW: recently we also celebrated 20(!) years of Grml Releases. If you’re a Grml and/or grml-zsh user, please join us in celebrating and send us a postcard!
A coworker asked recently about how people use VMs locally for dev work, so I figured I’d take a few minutes to write up a bit about what I do. There are many use cases for local virtual machines in software development and testing. They’re self-contained, meaning you can make a mess of them without impacting your day-to-day computing environment. They can run different distributions, kernels, and even entirely different operating systems from the one you use regularly. Etc. They’re also cheaper than cloud services and provide finer grained control over the resources.
I figured I’d share a little bit about how I manage different virtual machines in case anybody finds this useful. This is what works for me, but it won’t necessarily work for you, or maybe you’ve already got something better. I’ve found it to be easy to work with, lightweight, and easy to evolve as my needs change.
Use short-lived VMs
Rather than keep a long-lived “development” VM around that you customize over time, I recommend automating the common customizations and provisioning new VMs regularly. If I’m working on reproducing a bug or testing a change prior to submitting it upstream, I’ll do this work in a VM and delete the VM when I’m done. When provisioning VMs this frequently, though, walking through the installation process for every new VM is tedious and a waste of time. Most of my work is done in Debian, so I start with images generated daily by the cloud team. These images are available for multiple releases and architectures. The ‘nocloud’ variant boots to a root prompt and can be useful directly, or the ‘generic’ images can be used for cloud-init based customization.
Automating image preparation
This makefile lets me do something like make image and get a new qcow2 image with the latest build of a given Debian release (sid by default, with others available by specifying DIST).
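The makefile itself isn't shown here; the idea is roughly the following sketch (illustrative only: the URL layout and image file names are approximations, so check cloud.debian.org for the real paths):

# Fetch the latest daily Debian cloud image for the requested release/variant.
DIST ?= sid
ARCH ?= amd64
VARIANT ?= generic
IMAGE = debian-$(DIST)-$(VARIANT)-$(ARCH)-daily.qcow2

.PHONY: image
image:
	curl -fSL -o $(IMAGE) \
		https://cloud.debian.org/images/cloud/$(DIST)/daily/latest/$(IMAGE)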
While the ‘nocloud’ images can be useful, I typically find that I want to apply the same modifications to each new VM I launch, and they don’t provide facilities for automating this. The ‘generic’ images, on the other hand, run cloud-init by default. Using cloud-init, I can create my user account, point apt at local mirrors, install my preferred tools, ensure the root filesystem is resized to make full use of the backing storage, etc.
The cloud-init configuration on the generic images will read from a local config drive, which can contain an ISO9660 (cdrom) filesystem image. This image can be generated from a subdirectory containing the various cloud-init input files using the following make syntax:
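(The exact rule isn't reproduced here; the following sketch shows the idea. genisoimage and the 'cidata' volume label are the standard way to build a cloud-init NoCloud seed image, and the file and image names are illustrative.)

seed.iso: config/user-data config/meta-data
	genisoimage -output $@ -volid cidata -joliet -rock config/user-data config/meta-data

The VM can then be launched with a qemu command along these lines:

qemu-system-x86_64 \
	-machine q35 -cpu host -enable-kvm -m 2G \
	-drive file=image.qcow2,if=virtio \
	-drive file=seed.iso,media=cdrom \
	-nic user \
	-nographic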
This invokes qemu with the root volume and ISO image attached as disks, uses an emulated “q35” machine with the host’s CPU and KVM acceleration, the userspace network stack, and a serial console. The first time the VM boots, cloud-init will apply the configuration from the cloud-config available in the ISO9660 filesystem.
Alternatives to cloud-init
virt-customize is another tool accomplishing the same type of customization. I use cloud-init because it works directly with cloud providers in addition to local VM images. You could also use something like ansible.
Variations
I have a variant of this that uses a bridged network, which I’ll write
more about later. The bridge is nice because it’s more featureful,
with full support for IPv6, etc, but it needs a bit more
infrastructure in place.
It also can be helpful to use 9p or virtfs to share filesystem state between the host and the VM. I don’t tend to rely on these, and will instead use rsync or TRAMP for moving files around.
Containers are also useful, of course, and there are plenty of times when the full isolation of a VM is not worth the overhead.
Author: Stephen Dougherty The wind picked up the dust with brutal force. It ripped up the scorched land and tossed it into the never-ending night. Through the dark maelstrom, he could see what he hoped was Beacon Five through the scuffed glass of Beacon Two, its amber light scything through the burnt dust like the […]
Rational
Tim R.
observed
"When setting up my security camera using the ieGeek app there
seem to be two conflicting definitions of sensitivity. I hope
the second one is wrong, but if it's right, I really
hope the first one is wrong."
"That's what happens when you use a LLM to write your date
handling code!" crowed an anonymous Errordian. "Actually, it is
interesting that they store dates as days since the beginning
of the current Julian period."
Sarcastic
Michael P.
grumped
"Oh, shoot. I hope I can find time to charge my doorbell
before it dies. I guess Google Home takes a much longer
view of time than us mere humans."
"Hello To You Too!" cheered
Simon T.
when he happened on this friendly welcome. Not really. What he really said was
"We all love a hello world, but probably not on almost the front page of a national system."
Maybe, maybe not.
Mathematician
Mark V.
figures Firefox's math doesn't add up.
"Apparently my browser has cached 17 Exabytes of data from YouTube - on my 512GB laptop.
That's some serious video compression!" Technically, it depends on the lighting.
It’s difficult to understand why the Australian cricket authorities decided to stage the third Test of the current series against India in Brisbane, a city known for its rain and storms in December and early January.
For some strange reason, the powers-that-be gave the first Test to Perth, a venue that normally stages a match later in the series, especially when there are five Tests against the one country.
What resulted in Brisbane was something of a disaster. Play was restricted to 13.2 overs on the first day and thereafter rain was the winner on every day except the second. It spoiled what could have been a tight game.
Brisbane is normally a venue that favours Australia due to the pitch supporting pace. Australia has won there more often than not; after the home team lost to the West Indies in 1988-89, it took them until 2021 to lose a game at the ground. That was to India.
In January 2024, the West Indies recorded an eight-run win, something totally unexpected.
Australian authorities have traditionally chosen Brisbane as the venue for the first Test because it gives the home team an advantage. Losing right at the start of the series tends to drive crowds away.
But despite all these factors, Perth hosted the first Test. Surprisingly, India won that game and by 295 runs too.
That Australia won in Adelaide was no surprise; the pink ball and the day-night Tests have always favoured the Australians.
And then we had Brisbane where a total of just 216 overs were bowled over the five days. Rain, bad light and at times the threat of lightning interrupted play all the time.
Cybercriminals are selling hundreds of thousands of credential sets stolen with the help of a cracked version of Acunetix, a powerful commercial web app vulnerability scanner, new research finds. The cracked software is being resold as a cloud-based attack tool by at least two different services, one of which KrebsOnSecurity traced to an information technology firm based in Turkey.
Araneida Scanner.
Cyber threat analysts at Silent Push said they recently received reports from a partner organization that identified an aggressive scanning effort against their website using an Internet address previously associated with a campaign by FIN7, a notorious Russia-based hacking group.
But on closer inspection they discovered the address contained an HTML title of “Araneida Customer Panel,” and found they could search on that text string to find dozens of unique addresses hosting the same service.
It soon became apparent that Araneida was being resold as a cloud-based service using a cracked version of Acunetix, allowing paying customers to conduct offensive reconnaissance on potential target websites, scrape user data, and find vulnerabilities for exploitation.
Silent Push also learned Araneida bundles its service with a robust proxy offering, so that customer scans appear to come from Internet addresses that are randomly selected from a large pool of available traffic relays.
The makers of Acunetix, Texas-based application security vendor Invicti Security, confirmed Silent Push’s findings, saying someone had figured out how to crack the free trial version of the software so that it runs without a valid license key.
“We have been playing cat and mouse for a while with these guys,” said Matt Sciberras, chief information security officer at Invicti.
Silent Push said Araneida is being advertised by an eponymous user on multiple cybercrime forums. The service’s Telegram channel boasts nearly 500 subscribers and explains how to use the tool for malicious purposes.
In a “Fun Facts” list posted to the channel in late September, Araneida said their service was used to take over more than 30,000 websites in just six months, and that one customer used it to buy a Porsche with the payment card data (“dumps”) they sold.
Araneida Scanner’s Telegram channel bragging about how customers are using the service for cybercrime.
“They are constantly bragging with their community about the crimes that are being committed, how it’s making criminals money,” said Zach Edwards, a senior threat researcher at Silent Push. “They are also selling bulk data and dumps which appear to have been acquired with this tool or due to vulnerabilities found with the tool.”
Silent Push also found a cracked version of Acunetix was powering at least 20 instances of a similar cloud-based vulnerability testing service catering to Mandarin speakers, but they were unable to find any apparently related sales threads about them on the dark web.
According to an August 2023 report (PDF) from the U.S. Department of Health and Human Services (HHS), Acunetix (presumably a cracked version) is among several tools used by APT 41, a prolific Chinese state-sponsored hacking group.
THE TURKISH CONNECTION
Silent Push notes that the website where Araneida is being sold — araneida[.]co — first came online in February 2023. But a review of this Araneida nickname on the cybercrime forums shows they have been active in the criminal hacking scene since at least 2018.
A search in the threat intelligence platform Intel 471 shows a user by the name Araneida promoted the scanner on two cybercrime forums since 2022, including Breached and Nulled. In 2022, Araneida told fellow Breached members they could be reached on Discord at the username “Ornie#9811.”
According to Intel 471, this same Discord account was advertised in 2019 by a person on the cybercrime forum Cracked who used the monikers “ORN” and “ori0n.” The user “ori0n” mentioned in several posts that they could be reached on Telegram at the username “@sirorny.”
Orn advertising Araneida Scanner in Feb. 2023 on the forum Cracked. Image: Ke-la.com.
The Sirorny Telegram identity also was referenced as a point of contact for a current user on the cybercrime forum Nulled who is selling website development services, and who references araneida[.]co as one of their projects. That user, “Exorn,” has posts dating back to August 2018.
In early 2020, Exorn promoted a website called “orndorks[.]com,” which they described as a service for automating the scanning for web-based vulnerabilities. A passive DNS lookup on this domain at DomainTools.com shows that its email records pointed to the address ori0nbusiness@protonmail.com.
Constella Intelligence, a company that tracks information exposed in data breaches, finds this email address was used to register an account at Breachforums in July 2024 under the nickname “Ornie.” Constella also finds the same email registered at the website netguard[.]codes in 2021 using the password “ceza2003” [full disclosure: Constella is currently an advertiser on KrebsOnSecurity].
A search on the password ceza2003 in Constella finds roughly a dozen email addresses that used it in an exposed data breach, most of them featuring some variation on the name “altugsara,” including altugsara321@gmail.com. Constella further finds altugsara321@gmail.com was used to create an account at the cybercrime community RaidForums under the username “ori0n,” from an Internet address in Istanbul.
According to DomainTools, altugsara321@gmail.com was used in 2020 to register the domain name altugsara[.]com. Archive.org’s history for that domain shows that in 2021 it featured a website for a then 18-year-old Altuğ Şara from Ankara, Turkey.
Archive.org’s recollection of what altugsara dot com looked like in 2021.
LinkedIn finds this same altugsara[.]com domain listed in the “contact info” section of a profile for an Altug Sara from Ankara, who says he has worked the past two years as a senior software developer for a Turkish IT firm called Bilitro Yazilim.
Neither Altug Sara nor Bilitro Yazilim responded to requests for comment.
Invicti’s website states that it has offices in Ankara, but the company’s CEO said none of their employees recognized either name.
“We do have a small team in Ankara, but as far as I know we have no connection to the individual other than the fact that they are also in Ankara,” Invicti CEO Neil Roseman told KrebsOnSecurity.
Researchers at Silent Push say despite Araneida using a seemingly endless supply of proxies to mask the true location of its users, it is a fairly “noisy” scanner that will kick off a large volume of requests to various API endpoints, and make requests to random URLs associated with different content management systems.
What’s more, the cracked version of Acunetix being resold to cybercriminals invokes legacy Acunetix SSL certificates on active control panels, which Silent Push says provides a solid pivot for finding some of this infrastructure, particularly from the Chinese threat actors.
Not everything needs to be digital and “smart.” License plates, for example:
Josep Rodriguez, a researcher at security firm IOActive, has revealed a technique to “jailbreak” digital license plates sold by Reviver, the leading vendor of those plates in the US with 65,000 plates already sold. By removing a sticker on the back of the plate and attaching a cable to its internal connectors, he’s able to rewrite a Reviver plate’s firmware in a matter of minutes. Then, with that custom firmware installed, the jailbroken license plate can receive commands via Bluetooth from a smartphone app to instantly change its display to show any characters or image.
[…]
Because the vulnerability that allowed him to rewrite the plates’ firmware exists at the hardware level—in Reviver’s chips themselves—Rodriguez says there’s no way for Reviver to patch the issue with a mere software update. Instead, it would have to replace those chips in each display.
The whole point of a license plate is that it can’t be modified. Why in the world would anyone think that a digital version is a good idea?
After the MiniDebConf Marseille 2019, COVID-19 made it impossible or difficult to organize new MiniDebConfs for a few years. With the gradual resumption of in-person events (like FOSDEM, DebConf, etc.), the idea emerged to host another MiniDebConf in France, but with a lighter organizational load. In 2023, we decided to reach out to the organizers of Capitole du Libre to repeat the experience of 2017: hosting a MiniDebConf alongside their annual event in Toulouse in November. However, our request came too late for 2023. After discussions with Capitole du Libre in November 2023 in Toulouse and again in February 2024 in Brussels, we confirmed that a MiniDebConf Toulouse would take place in November 2024!
On Thursday, November 14, and Friday, November 15, 2024, about forty developers arrived from around the world (France, Spain, Italy, Switzerland, Germany, England, Brazil, Uruguay, India, Brest, Marseille…) to spend two days at the MiniDebCamp in the beautiful collaborative spaces of Artilect in Toulouse city center.
Author: Hillary Lyon “You have three minutes,” Harmon said, sticking the end of an unlit cigar in his mouth.“Go.” “Okay,” Jepson nervously began. “Picture this: an unlikely romance between a peppy vacuum cleaner and a stoic lawn mower.” Harmon struck a match and lit his cigar. Jepson continued, “Defying the conventions of their middle class […]
Michael had a co-worker who was new to the team. As such, there was definitely an expected ramp-up time. But this new developer got that ramp-up time, and still wasn't performing. Worse, they ended up dragging down the entire team, as they'd go off, write a bunch of code, end up in a situation where they couldn't understand why nothing was working, and then beg for help.
For example, this dev was tasked with adding timestamps to a set of logging messages. The logs had started as simple "print" debugging messages, but had grown in complexity and it was time to treat them like real logging.
This stumped them, as the following C# code only ever printed out a zero:
DateTime d = new DateTime();
int timestamp = d.Minute + d.Second + d.Millisecond;
Console.WriteLine(timestamp + message);
On one hand, this is a clear example of not understanding operator overloading: clearly, they understood that + could be used for string concatenation, but they seem to have forgotten that it could also be used for arithmetic.
I don't think this actually only ever printed out a zero. It certainly didn't print out a timestamp, but it also didn't print out a zero. So not only is the code bad, but the understanding of how it's bad is also bad. It's bad. Bad. Bad.
I am using GitLab CI/CD pipelines for several upstream projects (libidn, libidn2, gsasl, inetutils, libtasn1, libntlm, …) and a long-time concern for these has been that there is too little testing on GNU Guix. Several attempts have been made, and earlier this year Ludo’ came really close to finishing this. My earlier effort to idempotently rebuild Debian recently led me to think about re-bootstrapping Debian. Since Debian is a binary distribution, it re-uses earlier binary packages when building new packages. The prospect of re-bootstrapping Debian in a reproducible way by rebuilding all of those packages going back to the beginning of time does not appeal to me. Instead, wouldn’t it be easier to build Debian trixie (or some future release of Debian) from Guix, by creating a small bootstrap sandbox that can start to build Debian packages, and then make sure that the particular Debian release can idempotently rebuild itself in a reproducible way? Then you will eventually end up with a reproducible and re-bootstrapped Debian, which paves the way for a trustworthy release of Trisquel. Fortunately, such an endeavour appears to offer many rabbit holes. Preparing Guix container images for use in GitLab pipelines is one that I jumped into in the last few days, and just came out of.
Let’s go directly to the point of this article: here is a GitLab pipeline job that runs in a native Guix container image that builds libksba after installing the libgpg-error dependency from Guix using the pre-built substitutes.
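The job definition itself is not included above, so here is a rough sketch of what such a .gitlab-ci.yml job might look like; the container image path, the libksba version, and the download URL are illustrative assumptions rather than the exact configuration used.

# Rough, hypothetical sketch of a pipeline job running in a Guix container
# image; the image path, version and URL are placeholders, not the exact job.
build-libksba:
  image: registry.gitlab.com/your-namespace/guix-container:latest
  before_script:
    - guix describe
    - guix package -i libgpg-error          # dependency installed via pre-built substitutes
    - GUIX_PROFILE="$HOME/.guix-profile"
    - . "$GUIX_PROFILE/etc/profile"         # make the installed packages visible to the build
  script:
    - wget https://www.gnupg.org/ftp/gcrypt/libksba/libksba-1.6.7.tar.bz2
    - tar xfa libksba-1.6.7.tar.bz2
    - cd libksba-1.6.7
    - ./configure
    - make check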
You can put that in a .gitlab-ci.yml and push it to GitLab and you will end up with a nice pipeline job output.
As you may imagine, there are several things that are sub-optimal in the before_script above that ought to be taken care of by the Guix container image, and I hope to be able to remove as much of the ugliness as possible. However, that doesn’t change the fact that these images are useful now, and I wanted to announce this work to allow others to start testing them and possibly offer help. I have started to make use of these images in some projects; see for example the libntlm commit for that.
Really interesting research into the structure of prime numbers. Not immediately related to the cryptanalysis of prime-number-based public-key algorithms, but every little bit matters.
As The Long Now Foundation steps into its second quarter century, I find myself reflecting on what an extraordinary moment we're living through and how grateful I am for our community.
The rapid emergence of artificial intelligence, the compounding realities of ecological crises, and the evolving ways we create and preserve knowledge — all of these call us to think differently about our place in time and how we frame the future.
They also invite us to be more present to the joys of remaining curious, nurturing friendships, and embracing imagination that Long Now provides us amidst all our present day issues.
At its heart, Long Now has always been about bringing people together to explore big ideas, preserve wisdom across generations, and demonstrate long-term thinking in action.
These things are worth our time, and, in fact, make our time more worthwhile.
As Long Now’s new Board President, I'm honored to help nurture our remarkable community, curate the conversations that shape our collective understanding, and secure the resources that sustain our mission.
Patrick Dowd at The Interval. Photo by Christopher Michel
It’s About Time
The practice of reframing how we think about time has been woven into Long Now's DNA since our inception, and yet long-term thinking is still not common.
The frames our community uses for thinking differently about time are useful tools for building better futures and they deserve to be more widely shared.
Remember that extra zero we add before years? It's such a simple thing, yet it opens our minds to the vast expanse of time ahead. Stewart Brand's pace layers framework helps us see the different rhythms that make up our society — from the quick pulse of fashion, to the slow beat of culture, to the glacial pace of geological time. And then our 10,000-year clocks remind us just how precious our time really is.
These frames help us to expand our perspectives, learn from the deep past, and embrace the possibilities of the future.
And here's what's beautiful about these frames: they change not just how we think, but how we feel about time. When you start seeing the world through the lens of centuries rather than quarters or election cycles, something shifts. The anxiety of the immediate begins to dissolve into a larger sense of possibility. In this way, long-term thinking isn't just an intellectual exercise — it's a form of emotional and perhaps even spiritual practice.
The range of big thinkers reframing how we think about time is exciting indeed: while Jenny Odell's exploration of deep attention challenges our obsession with linear progress, Dipesh Chakrabarty’s work on anthropocene time helps us grasp the intersection of human history and geological time. Vandana Shiva's vision of cyclic time and Anna Tsing's insights into multispecies temporalities expand our understanding even further. The only thing more exciting than this flourishing of ideas is how I see our community engaging with them, testing them, and bringing them to life in art, tech, commerce, and other forms of creativity.
Photo by Christopher Michel
Dancing with Ideas
As a curator and host of Long Now Talks, I look forward to helping our community continue its quarter century tradition of exploring new frames for how we view our place in time.
I have always felt that our talks possess a unique alchemy that few other events can replicate — not because of any single voice, but because of the robust dialogue between our speakers and our community.
Looking ahead to 02025, we're bringing together an extraordinary lineup of voices to explore how we might transform our thinking to address planetary-scale challenges. From Benjamin Bratton's insights on planetary computation and sapience to Stephen Heintz and Kim Stanley Robinson's discussion of new logics for international relations, and from Ezra Klein and Derek Thompson's investigations of abundance to Sara Imari Walker's explorations of the physical origins of life itself — these conversations are just the beginning.
Our programming evolves with our community's dialogue — and your voice matters in shaping it. The magic happens not just in the presentations but in the conversations that follow, when members bring their own perspectives and experiences to the table. Whether in The Interval's intimate setting or our larger gatherings, our most profound insights often emerge from the dialogue between members who bring diverse frames of reference from technology, academia, arts, governance, and countless other domains.
And people keep coming back, because long-term thinking feels good. It's the opposite of crisis-mode thinking, offering a pathway to hope and a healthy foundation for lasting relationships.
In a world that often feels fragmented and accelerated, our gatherings provide a rare space to slow down, zoom out, and connect with others who share this broader perspective.
And speaking of The Interval — when was the last time you stopped by?
With its updated exhibits on long-term thinking, it’s a better reflection of our community than ever before: not just a critically acclaimed bar and café but a true gathering place for some of the most interesting conversations in the Bay Area.
I love describing the way our community interacts and welcomes newcomers to The Interval as 'intellectual samba' — a joyful, open exploration where you're invited to dance with ideas without any pressure to make them your own.
One moment you might be discussing ecology and deep time over a rare tea, the next you're debating existential risk over a craft cocktail, or simply gazing at the Golden Gate Bridge and pondering what the world might look like thousands of years from now.
The beauty lies in the freedom to engage, question, consider, and then move on to the next fascinating conversation.
Fort Mason Center, home to The Interval
A Bay Area Institution
Our deep roots in the San Francisco Bay Area's countercultural movements provide us with a unique vitality.
The region we call home has repeatedly served as a wellspring for transformative ideas — from the birth of the personal computer to the development of the internet to the Whole Earth Catalog's pioneering vision of sustainable living.
Nowadays, it’s as dynamic as it's ever been, and while we take pride in our history, we are equally excited about the here and now.
The density of local research institutions like Stanford, Berkeley, and SLAC, combined with the wealth of funding being invested in deep technology projects and the strength of our cultural institutions, creates an environment where transformative ideas can and do rapidly evolve from conception to reality. As Long Now board member Patrick Collison has shown with his pioneering Fast Grants, there are ample opportunities for our community to reimagine institutional structures that can accelerate scientific and cultural progress.
What makes this ecosystem truly special is its paradoxical culture of thinking big while remaining pragmatic — a combination that resonates deeply with Long Now's mission. This is a place where people routinely work on challenges so vast they stretch beyond traditional human timescales, while maintaining a practical focus on tangible progress. It's a rare alchemy that continues to yield breakthrough thinking.
Yet while Silicon Valley often measures success through exponential growth and market share, we see things a little differently. For us, success is measured in time, not scale.
This orientation towards longevity over rapid expansion makes us uniquely countercultural in a region that often prioritizes growth at all costs.
Earth from Apollo 17. NASA Image #AS17-148-22727
Majority World
While we're proudly rooted in California, we think in planetary terms and draw inspiration and wisdom from the Majority World — those regions often called "developing" but which represent most of humanity.
This concept, coined by photographer and activist Shahidul Alam, informs how we think about global futures. As a frame, it raises questions about whose futures we imagine, whose knowledge we preserve, and whose voices shape our understanding of time and progress. From emergent phenomena in global megacities to Indigenous approaches to multi-generational thinking, from Asian traditions of cosmic cycles to Latin American concepts of ecological stewardship, the Majority World offers rich traditions of long-term thinking that must inform our imagining of planetary futures.
As anthropologist Wade Davis powerfully reminded us in a memorable Long Now Talk: "Other parts of the world are not less developed versions of us."
This simple truth revolutionizes how we think about progress, time, and the future. It suggests that the path forward isn't about everyone becoming more like the West, but about weaving together diverse ways of knowing and being in time.
💡
WATCH Wade Davis' 02021 Long Now Talk, Activist Anthropology, which looks back at the pioneering work of Franz Boas that upended long-held Western assumptions about race, gender, and "social progress".
While many parts of the Majority World are driving new waves of high-tech innovation, other elements have long recognized what Western science is only beginning to understand: that human knowledge is just one part of a vast tapestry of intelligence that surrounds us.
As we expand our frames beyond Western human perspectives, we must also embrace the intelligence and agency of more-than-human life.
All around us, a hidden world of intelligence is coming into view. We're discovering how forests communicate through intricate fungal networks, watching ravens solve puzzles that challenge our assumptions about reasoning, observing octopi as they navigate complex problems, and learning how plants remember and adapt to their environments. These revelations of nature's sophistication arrive at a critical moment, as our ecological crises deepen. Together, they invite us to reimagine our place in a web of interspecies intelligence.
Building on Wangari Maathai's vision of ecological restoration as a multi-generational project, our approach recognizes that long-term thinking must embrace these overlapping ecologies of intelligence. As Robin Wall Kimmerer has shown through her work bridging Indigenous wisdom and botanical science, and as Leslie Carol Roberts has explored through her Ecopoesis gatherings at California College of the Arts, understanding more-than-human intelligence requires us to radically reconsider our temporal and spatial frames.
The convergence of these developments — our growing appreciation of biological intelligence, accelerating ecological collapse, and the emergence of artificial intelligence — calls us to fundamentally reconsider intelligence, consciousness, and time itself.
Since the start, Long Now Talks have explored multiple ways of knowing and being in time — from the swift neural firings of insects to the slow growth of ancient trees, to the distributed intelligence of fungal networks, and the emergent capabilities of artificial minds.
The wisdom of diverse ecologies, encoded in everything from the adaptive strategies of microorganisms to the collaborative networks of forest ecosystems, offers crucial lessons for navigating our long-term future.
Artwork by Brian Eno
Neural Media
The question of media — how to capture, store, and transmit understanding across time and space — is a hallmark of our past and an important area of focus for our future.
Our Long Now Talks and thought-provoking long-term projects like our 10,000-year clocks designed by Danny Hillis and our Rosetta Project have pioneered powerful forms of immersive media — a format that traces its lineage to Stewart Brand's transformative Trips Festivals of the 01960s. Our YouTube channel has found a global audience of millions in the realm of networked media.
Today, we find ourselves increasingly engaged with what interdisciplinary artist and technologist K Allado-McDowell terms "neural media" — technologies that operate through high-dimensional networks inspired by biological brains. These new media don't simply broadcast or connect: they appear to think, reason, and create.
As Allado-McDowell observes, they challenge our understanding of intelligence and consciousness itself, presenting both opportunities and complexities for framing and sharing knowledge across time. Just as previous media revolutions transformed how humanity processes information and creates meaning, neural media will fundamentally reshape how we think about thinking.
The Long Now Foundation's role in this transformation is clear: we must help develop frameworks for using these powerful new tools responsibly in service of long-term thinking and civilizational wisdom.
Combining these and other diverse frames — from Majority World perspectives to ecological intelligence and neural media — is an essential practice for navigating the transformational era that we are now living through. As technological change accelerates and ecological systems approach tipping points, our ability to think and act across longer timescales and wider frames becomes not just valuable but vital for our collective futures.
The Long Now Foundation’s Pace Layers Annual Journal
An Invitation to Engage
Whether you're a technologist grappling with AI, an artist exploring new forms of expression, a scientist probing the fundamentals of existence, or simply someone who feels the pull of longer timescales — your perspective adds to our collective understanding.
The Long Now Foundation has always been about creating spaces where different worldviews enrich rather than conflict with each other. This is something to cherish and nurture.
Through our Long Now Talks, our forthcoming 10,000-year library initiative, our fellowship program, and the launch of our new Advisory Council, we're building both the intellectual frameworks and the human networks needed to foster responsibility over civilizational timescales.
You are invited to be part of all this: attend a Talk, become a member, visit The Interval, support our fellowship program, or partner with us in building the 10,000-year library.
Many of our most engaged members tell me they initially joined out of simple curiosity, only to discover unexpected ways their own experiences could contribute to these crucial conversations about humanity's future.
In an age of accelerating change, we feel good about creating spaces for slower, deeper thinking. Together, we're building the intellectual and cultural infrastructure needed for civilization to thrive — not just for the next quarter or year, but for generations to come.
Want to reframe the future?
The first step is dancing with ideas — and that’s what we’re here to do.
We’ve recently returned from Long Now’s Nevada Bristlecone Preserve, a mountain habitat of the world’s oldest living trees. Time feels different up there at 11,000 feet, nestled among these ancient organisms that have borne witness to centuries of comings and goings.
It was an ideal place to reflect on the founding of The Long Now Foundation as a community where memory, imagination, and the long view could be fostered over millennia. Foremost on our minds is what it takes to be a long-lived institution — the passages between generations and the liminal spaces of possibility that open up in between.
As we turn the corner on our first quarter century and find ourselves in an electrifying first generational transition, we dare to imagine there will be hundreds more to come, but this one, this is our very first.
And we’re here in this moment with all of you — an extraordinary community bound by curiosity and commitment — together creating what Long Now cofounder Brian Eno calls scenius, the collective intelligence and intuition of a cultural scene. Thank you for being on this adventure with us.
Rebecca Lendl has been leading Long Now as Interim Executive Director since early 02023. Under her leadership, Long Now has taken on the ambitious work of reimagining the institution — distilling the learnings of our founding era, reorienting to the present moment, and modeling a generational transition for many more to come. We reimagined our home for long-term thinking with The Interval Decennial, published our first annual print journal Pace Layers, explored arboreal timekeeping in Centuries of the Bristlecone, and programmed an extraordinary upcoming season of Long Now Talks. Rebecca has spent her career supporting visionary ideas and new ways of thinking across art, culture, and technology at the Center for Humane Technology, Creative Commons, Headlands Center for the Arts, and Creative Time.
Photo by Christopher Michel
Patrick Dowd comes to Long Now as a longtime member, friend, and a trusted advisor. Most recently, Patrick served as a curatorial advisor on the relaunch of Long Now Talks, expanding our frame around themes like the Majority World, interspecies ecologies, and neural media. Throughout his career, Patrick has been driven by a personal mission to help big things be good and good things get big, all grounded in intellectual curiosity, community building, and an adventurous spirit. His Millennial Trains Project was a national exercise in scenius, taking young creators on a series of intercontinental train journeys to explore America’s new frontiers. He runs the creative studio Stellar, served as Head of Brand Innovation at PayPal, was Editor-at-Large for National Geographic Traveler, and a Fulbright scholar in India.
Photo by Christopher Michel
As Rebecca and Patrick refine our shared vision for all that’s ahead, we are attentive to things we’ve learned from our community of long-term thinkers —
Long-term thinking is a planetary imperative — Challenges that feel impossible to tackle within a single human lifetime become conceivable when you have a longer timescale to work with. This has always been true. But today, in our age of compounding global crises and pathologically shrinking time horizons, the call to deepen our imaginative capacity grows ever stronger. We are here to build collective capacity to think and act wisely together over the long term.
This moment calls for shared leadership — Working on longer timescales means that our best ideas require an institutional champion. This is a collaboration across generations, far bigger than any one of us. More than ever since our founding, we are focused on the full breadth of our scenius here with all of you: staff, board, fellows, speakers, thinkers, advisors, members, supporters, partners, community. We’re eager to hear from you, learn with you, and co-create with you.
Long Now is a rare institution — There are very few places one can go to think, wrestle with ideas, and orient ourselves, to make disagreement useful and to make sense of the moment within the context of a longer now. Long Now is a rare home for civilizational-scale thinking and being. We will fiercely protect Long Now’s singular place in the ecosystem — ever experimental, radical, and driven by wild imagination.
💡
READ: For more on how Long Now is thinking about curation in our next quarter century, read Patrick’s Reframing the Future
Launching our second quarter century
Looking back, we see a remarkable first quarter century built on audacious projects that ignite cultural imagination like 10,000-year clocks, a language archive on the moon, the community that formed around a Talks series that connected over 400 civilization-scale thinkers with millions around the globe, and our newly reimagined home for long-term thinking at The Interval at Long Now in San Francisco.
As we play with ideas for what’s next, we’ve been thinking hard about the early days of Long Now. We were officially founded by Stewart Brand as the “Clock/Library Project” to “act somewhat like a whole-Earth photograph in time — to help get civilization out of its pathologically short attention span. Along with the clock, we aim to build a library of the deep future, for the deep future.”
It’s time to bring that 10,000-year library into being.
Like a traditional library, a 10,000-year library might include books, magazines, and access to tools and media resources. It might serve as a space for working, studying, gathering, as well as visiting exhibits and attending events. But unlike a traditional library, the 10,000-year library can be imagined as expansively as possible. A collection of resources that may include even people — “check out” a fellow, take an immersive geographical expedition to experience deep time, or curl up with a new idea. Imagine a single metaphorical roof over everything Long Now has to offer, tightly curated and made easily accessible as a library for the future.
We’ve already broken conceptual ground on certain instantiations of these 10,000-year libraries. The Interval at Long Now, redesigned this year to feature an exhibition about long-term thinking, is one vision of a physical outpost of the library. The first volume of Pace Layers, our new annual journal of long-term thinking, is a compendium of some of the best of our first quarter century of thought and writing, and a compelling reminder of the power of the printed word to store ideas over time. Possibilities abound.
Photo by Christopher Michel
Welcome. You have arrived with us at a new beginning.
Our doors are open. Visit us soon and often at The Interval at Long Now. Join us for our newly relaunched Long Now Talks. Subscribe to our videos and podcasts and get connected to a whole world of long-term thinking. Send us your thoughts and ideas. All of us are a part of this lineage — artists, builders, technologists, teachers, parents, entrepreneurs, scientists, researchers, makers, all free thinkers and hungry learners — and we’re made better by being here in community with all of you. Stay tuned for more of the unexpected from Long Now.
💡
We invite you to join us in community at our Second Quarter Century Happy Hour on Monday, February 3 at The Interval. Save the date with more details to come.
All of this is a call to adventure
This work will take all of us. An aspiration as big as fostering long-term thinking across millennia will require a community that collaborates across many generations. A community supported by an institutional champion. A community like you. An institution like The Long Now Foundation.
Adam Griffin is still in disbelief over how quickly he was robbed of nearly $500,000 in cryptocurrencies. A scammer called using a real Google phone number to warn his Gmail account was being hacked, sent email security alerts directly from google.com, and ultimately seized control over the account by convincing him to click “yes” to a Google prompt on his mobile device.
Griffin is a battalion chief firefighter in the Seattle area, and on May 6 he received a call from someone claiming they were from Google support saying his account was being accessed from Germany. A Google search on the phone number calling him — (650) 203-0000 — revealed it was an official number for Google Assistant, an AI-based service that can engage in two-way conversations.
At the same time, he received an email that came from a google.com email address, warning his Google account was compromised. The message included a “Google Support Case ID number” and information about the Google representative supposedly talking to him on the phone, stating the rep’s name as “Ashton”— the same name given by the caller.
Griffin didn’t learn this until much later, but the email he received had a real google.com address because it was sent via Google Forms, a service available to all Google Docs users that makes it easy to send surveys, quizzes and other communications.
A phony security alert Griffin received prior to his bitcoin heist, via Google Forms.
According to tripwire.com’s Graham Cluley, phishers will use Google Forms to create a security alert message, and then change the form’s settings to automatically send a copy of the completed form to any email address entered into the form. The attacker then sends an invitation to complete the form to themselves, not to their intended victim.
“So, the attacker receives the invitation to fill out the form – and when they complete it, they enter their intended victim’s email address into the form, not their own,” Cluley wrote in a December 2023 post. “The attackers are taking advantage of the fact that the emails are being sent out directly by Google Forms (from the google.com domain). It’s an established legitimate domain that helps to make the email look more legitimate and is less likely to be intercepted en route by email-filtering solutions.”
The fake Google representative was polite, patient, professional and reassuring. Ashton told Griffin he was going to receive a notification that would allow him to regain control of the account from the hackers. Sure enough, a Google prompt instantly appeared on his phone asking, “Is it you trying to recover your account?”
Adam Griffin clicked “yes,” to an account recovery notification similar to this one on May 6.
Griffin said that after receiving the pop-up prompt from Google on his phone, he felt more at ease that he really was talking to someone at Google. In reality, the thieves caused the alert to appear on his phone merely by stepping through Google’s account recovery process for Griffin’s Gmail address.
“As soon as I clicked yes, I gave them access to my Gmail, which was synched to Google Photos,” Griffin said.
Unfortunately for Griffin, years ago he used Google Photos to store an image of the secret seed phrase that was protecting his cryptocurrency wallet. Armed with that phrase, the phishers could drain all of his funds.
“From there they were able to transfer approximately $450,000 out of my Exodus wallet,” Griffin recalled.
Griffin said just minutes after giving away access to his Gmail account he received a call from someone claiming to be with Coinbase, who likewise told him someone in Germany was trying to take over his account.
Griffin said a follow-up investigation revealed the attackers had used his Gmail account to gain access to his Coinbase account from a VPN connection in California, providing the multi-factor code from his Google Authenticator app. Unbeknownst to him at the time, Google Authenticator by default also makes the same codes available in one’s Google account online.
But when the thieves tried to move $100,000 worth of cryptocurrency out of his account, Coinbase sent an email stating that the account had been locked, and that he would have to submit additional verification documents before he could do anything with it.
GRAND THEFT AUTOMATED
Just days after Griffin was robbed, a scammer impersonating Google managed to phish 45 bitcoins — approximately $4,725,000 at today’s value — from Tony, a 42-year-old professional from northern California. Tony agreed to speak about his harrowing experience on condition that his last name not be used.
Tony got into bitcoin back in 2013 and has been investing in it ever since. On the evening of May 15, 2024, Tony was putting his three- and one-year-old boys to bed when he received a message from Google about an account security issue, followed by a phone call from a “Daniel Alexander” at Google who said his account was compromised by hackers.
Tony said he had just signed up for Google’s Gemini AI (an artificial intelligence platform formerly known as “Bard”), and mistakenly believed the call was part of that service. Daniel told Tony his account was being accessed by someone in Frankfurt, Germany, and that he could evict the hacker and recover access to the account by clicking “yes” to the prompt that Google was going to send to his phone.
The Google prompt arrived seconds later. And to his everlasting regret, Tony clicked the “Yes, it’s me” button.
Then came another call, this one allegedly from security personnel at Trezor, a company that makes encrypted hardware devices made to store cryptocurrency seed phrases securely offline. The caller said someone had submitted a request to Trezor to close his account, and they forwarded Tony a message sent from his Gmail account that included his name, Social Security number, date of birth, address, phone number and email address.
Tony said he began to believe then that his Trezor account truly was compromised. The caller convinced him to “recover” his account by entering his cryptocurrency seed phrase at a phishing website (verify-trezor[.]io) that mimicked the official Trezor website.
“At this point I go into fight or flight mode,” Tony recalled. “I’ve got my kids crying, my wife is like what the heck is going on? My brain went haywire. I put my seed phrase into a phishing site, and that was it.”
Almost immediately, all of the funds he was planning to save for retirement and for his children’s college fund were drained from his account.
“I made mistakes due to being so busy and not thinking correctly,” Tony told KrebsOnSecurity. “I had gotten so far away from the security protocols in bitcoin as life had changed so much since having kids.”
Tony shared this text message exchange of him pleading with his tormentors after being robbed of 45 bitcoins.
Tony said the theft left him traumatized and angry for months.
“All I was thinking about was protecting my boys and it ended up costing me everything,” he said. “Needless to say I’m devastated and have had to do serious therapy to get through it.”
MISERY LOVES COMPANY
Tony told KrebsOnSecurity that in the weeks following the theft of his 45 bitcoins, he became so consumed with rage and shame that he was seriously contemplating suicide. Then one day, while scouring the Internet for signs that others may have been phished by Daniel, he encountered Griffin posting on Reddit about the phone number involved in his recent bitcoin theft.
Griffin said the two of them were initially suspicious of each other — exchanging cautious messages for about a week — but he decided Tony was telling the truth after contacting the FBI agent that Tony said was working his case. Comparing notes, they discovered the fake Google security alerts they received just prior to their individual bitcoin thefts referenced the same phony “Google Support Case ID” number.
Adam Griffin and Tony said they received the same Google Support Case ID number in advance of their thefts. Both were sent via Google Forms, which sends directly from the google.com domain name.
More importantly, Tony recognized the voice of “Daniel from Google” when it was featured in an interview by Junseth, a podcaster who covers cryptocurrency scams. The same voice that had coaxed Tony out of his considerable cryptocurrency holdings just days earlier also had tried to phish Junseth, who played along for several minutes before revealing he knew it was a scam.
Daniel told Junseth he was a teenager and worked with other scam callers who had all met years ago on the game Minecraft, and that he recently enjoyed a run of back-to-back Gmail account compromises that led to crypto theft paydays.
“No one gets arrested,” Daniel enthused to Junseth in the May 7 podcast, which quickly went viral on social media. “It’s almost like there’s no consequences. I have small legal side hustles, like businesses and shit that I can funnel everything through. If you were to see me in real life, I look like a regular child going to school with my backpack and shit, you’d never expect this kid is stealing all this shit.”
Daniel explained that they often use an automated bot that initiates calls to targets warning that their account is experiencing suspicious activity, and that they should press “1” to speak with a representative. This process, he explained, essentially self-selects people who are more likely to be susceptible to their social engineering schemes. [It is possible — but not certain — that this bot Daniel referenced explains the incoming call to Griffin from Google Assistant that precipitated his bitcoin heist].
Daniel told Junseth he and his co-conspirators had just scored a $1.2 million theft that was still pending on the bitcoin investment platform SwanBitcoin. In response, Junseth tagged SwanBitcoin in a post about his podcast on Twitter/X, and the CEO of Swan quickly replied that they caught the $1.2 million transaction that morning.
Apparently, Daniel didn’t appreciate having his voice broadcast to the world (or his $1.2 million bitcoin heist disrupted) because according to Junseth someone submitted a baseless copyright infringement claim about it to Soundcloud, which was hosting the recording.
The complaint alleged the recording included a copyrighted song, but that wasn’t true: Junseth later posted a raw version of the recording to Telegram, and it clearly had no music in the background. Nevertheless, Soundcloud removed the audio file.
“All these companies are very afraid of copyright,” Junseth explained in a May 2024 interview with the podcast whatbitcoindid.com, which features some of the highlights from his recorded call with Daniel.
“It’s interesting because copyright infringement really is an act that you’re claiming against the publisher, but for some reason these companies have taken a very hard line against it, so if you even claim there’s copyrighted material in it they just take it down and then they leave it to you to prove that you’re innocent,” Junseth said. “In Soundcloud’s instance, part of declaring your innocence is you have to give them your home address and everything else, and it says right on there, ‘this will be provided to the person making the copyright claim.'”
AFTERMATH
When Junseth asked how potential victims could protect themselves, Daniel explained that if the target doesn’t have their Google Authenticator synced to their Google cloud account, the scammers can’t easily pivot into the victim’s accounts at cryptocurrency exchanges, as they did with Griffin.
By default, Google Authenticator syncs all one-time codes with a Gmail user’s account, meaning if someone gains access to your Google account, they can then access all of the one-time codes handed out by your Google Authenticator app.
To change this setting, open Authenticator on your mobile device, select your profile picture, and then choose “Use without an Account” from the menu. If you disable this, it’s a good idea to keep a printed copy of one-time backup codes, and to store those in a secure place.
You may also wish to download Google Authenticator to another mobile device that you control. Otherwise, if you turn off cloud synching and lose that sole mobile device with your Google Authenticator app, it could be difficult or impossible to recover access to your account if you somehow get locked out.
Griffin told KrebsOnSecurity he had no idea it was so easy for thieves to take over his account, and to abuse so many different Google services in the process.
“I know I definitely made mistakes, but I also know Google could do a lot better job protecting people,” he said.
In response to questions from KrebsOnSecurity, Google said it can confirm that this was a narrow phishing campaign, reaching a “very small group of people.”
“We’re aware of this narrow and targeted attack, and have hardened our defenses to block recovery attempts from this actor,” the company said in a written statement, which emphasized that the real Google will never call you.
“While these types of social engineering campaigns are constantly evolving, we are continuously working to harden our systems with new tools and technical innovations, as well as sharing updated guidance with our users to stay ahead of attackers,” the statement reads.
Both Griffin and Tony say they continue to receive “account security” calls from people pretending to work for Google or one of the cryptocurrency platforms.
“It’s like you get put on some kind of list, and then those lists get recycled over and over,” Tony said.
Griffin said that for several months after his ordeal, he accepted almost every cryptocurrency scam call that came his way, playing along in the vain hope of somehow tricking the caller into revealing details about who they are in real life. But he stopped after his taunting caused one of the scammers to start threatening him personally.
“I probably shouldn’t have, but I recorded two 30-minute conversations with these guys,” Griffin said, acknowledging that maybe it wasn’t such a great idea to antagonize cybercriminals who clearly already knew everything about him. “One guy I talked to about his personal life, and then his friend called me up and said he was going to dox me and do all this other bad stuff. My FBI contact later told me not to talk to these guys anymore.”
Sound advice. So is hanging up whenever anyone calls you about a security problem with one of your accounts. Even security-conscious people tend to underestimate the complex and shifting threat from phone-based phishing scams, but they do so at their peril.
When in doubt: Hang up, look up, and call back. If your response to these types of calls involves anything other than hanging up, researching the correct phone number, and contacting the entity that claims to be calling, you may be setting yourself up for a costly and humbling learning experience.
Understand that your email credentials are more than likely the key to unlocking your entire digital identity. Be sure to use a long, unique passphrase for your email address, and never pick a passphrase that you have ever used anywhere else (not even a variation on an old password).
Finally, it’s also a good idea to take advantage of the strongest multi-factor authentication methods offered. For Gmail/Google accounts, that includes the use of passkeys or physical security keys, which are heavily phishing resistant. For Google users holding measurable sums of cryptocurrency, the most secure option is Google’s free Advanced Protection program, which includes more extensive account security features but also comes with some serious convenience trade-offs.
Conditional statements, we would hope, are one of the most basic and well understood constructs in any programming language. Hope, of course, is for fools and suckers, so let's take a look at a few short snippets.
Our first installment comes from Jonas.
if (!checkAndDelete(Definitions.DirectoryName, currentTime)); //Empty statement
I appreciate the comment, which informs us that this empty statement is intentional. Why it's intentional remains mysterious.
Jonas found this while going through linter warnings. After fixing this, there are only 25,000 more warnings to go.
Brodey has a similar construct, but from a very different language.
If (Session.Item(Session.SessionID & "Origional") IsNot Nothing) Then
End If
I have to give bonus points for the origional spelling of "original". But spelling aside, there's a hint of something sinister here- we're concatenating strings with the SessionId- I don't know what is going wrong here, but it's definitely something.
Our last little snippet comes from Midiane. While not a conditional, it shows a misunderstanding of either booleans or comments.
$mail->SMTPAuth = false; // turn on SMTP authentication
The comment clearly is out of date with the code (which is the main reason we shouldn't repeat what is in the code as a comment). At least, we hope the comment is just out of date. A worse scenario is that setting the flag equal to false enables it.
Author: Mark Renney Each time Rod pushed his way through the portal, his initial response was disappointment. Although he hadn’t been aware of it the first time, he was actually stepping into the future. He realised this was an immense and astounding feat but it was just his flat, a perfect replica, albeit a little […]
Boost is a very large and comprehensive set of (peer-reviewed) libraries for the C++ programming language, containing well over one hundred individual libraries. The BH package provides a sizeable subset of header-only libraries for (easier, no linking required) use by R. It is fairly widely used: the (partial) CRAN mirror logs (aggregated from the cloud mirrors) show over 38.5 million package downloads.
Version 1.87.0 of Boost was released last week following the regular Boost release schedule of April, August and December releases. As before, we packaged it almost immediately and started testing following our annual update cycle which strives to balance being close enough to upstream and not stressing CRAN and the user base too much. The reverse depends check revealed six packages requiring changes or adjustments. We opened issue #103 to coordinate this (just as we did in previous years). Our sincere thanks to Matt Fidler who fixed two packages pretty much immediately.
As I had not heard back from the other maintainers since filing the issue, I uploaded the package to CRAN suggesting that the coming winter break may be a good opportunity for the four other packages to catch up. CRAN concurred, and 1.87.0-1 is now available there.
There are no other changes apart from cosmetics in the DESCRIPTION file. For once, we did not add any new Boost libraries. The short NEWS entry follows.
Changes in version 1.87.0-1 (2024-12-17)
- Upgrade to Boost 1.87.0, patched as usual to comment-out diagnostic suppression messages per the request of CRAN
- Switched to Authors@R
Via my CRANberries, there is a diffstat report relative to the previous release. Comments and suggestions about BH are welcome via the issue tracker at the GitHub repo.
This week on my podcast, it’s our annual Daddy-Daughter Podcast, a tradition since 2012! The kid’s sixteen now, a senior in high school and getting ready to head off to university next year, so this may well be the final installment in the series.
While artificial intelligence (AI) applications for natural language processing (NLP) are no longer something new or unexpected, nobody can deny the revolution and hype that started, in late 2022, with the announcement of the first public version of ChatGPT. By then, synthetic translation was well established and regularly used, many chatbots had started attending users’ requests on different websites, voice recognition personal assistants such as Alexa and Siri had been widely deployed, and complaints of news sites filling their space with AI-generated articles were already commonplace. However, the ease of prompting ChatGPT or other large language models (LLMs) and getting extensive answers–its text generation quality is so high that it is often hard to discern whether a given text was written by an LLM or by a human–has sparked significant concern in many different fields. This article was written to present and compare the current approaches to detecting human- or LLM-authorship in texts.
The article presents several different ways LLM-generated text can be detected. The first, and main, taxonomy followed by the authors is whether the detection can be done aided by the LLM’s own functions (“white-box detection”) or only by evaluating the generated text via a public application programming interface (API) (“black-box detection”).
For black-box detection, the authors suggest training a classifier to discern the origin of a given text. Although this works at first, this task is doomed from its onset to be highly vulnerable to new LLMs generating text that will not follow the same patterns, and thus will probably evade recognition. The authors report that human evaluators find human-authored text to be more emotional and less objective, and use grammar to indicate the tone of the sentiment that should be used when reading the text–a trait that has not been picked up by LLMs yet. Human-authored text also tends to have higher sentence-level coherence, with less term repetition in a given paragraph. The frequency distribution for more and less common words is much more homogeneous in LLM-generated texts than in human-written ones.
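As a concrete illustration of that black-box approach, here is a minimal sketch in Python of training such a classifier with scikit-learn; it is not taken from the article, the tiny example texts are placeholders, and a real detector would rely on much larger labelled corpora and richer features such as the coherence and word-frequency signals mentioned above.

# Minimal, illustrative sketch of black-box detection: a binary classifier
# trained on labelled human-written and LLM-generated texts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

human_texts = ["I scribbled this note in a hurry before lunch.",
               "Honestly, the ending of that film made me furious."]
llm_texts = ["In conclusion, there are several factors to consider.",
             "Overall, this topic encompasses a wide range of perspectives."]

texts = human_texts + llm_texts
labels = [0] * len(human_texts) + [1] * len(llm_texts)  # 0 = human, 1 = LLM

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression(max_iter=1000))
detector.fit(texts, labels)

# Classify a new text of unknown origin (prints 0 for human-like, 1 for LLM-like).
print(detector.predict(["This text is of unknown origin."]))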
White-box detection includes strategies whereby the LLMs will cooperate in identifying themselves in ways that are not obvious to the casual reader. This can include watermarking, be it rule-based or neural-based; in this case, both processes become a case of steganography, as the involvement of an LLM is deliberately hidden and spread through the full generated text, aiming at having a low detectability and high recoverability even when parts of the text are edited.
The article closes by listing the authors’ concerns about all of the above-mentioned technologies. Detecting an LLM, be it with or without the collaboration of the LLM’s designers, is more of an art than a science, and methods deemed as robust today will not last forever. We also cannot assume that LLMs will continue to be dominated by the same core players; LLM technology has been deeply studied, and good LLM engines are available as free/open-source software, so users needing to do so can readily modify their behavior. This article presents itself as merely a survey of methods available today, while also acknowledging the rapid progress in the field. It is timely and interesting, and easy to follow for the informed reader coming from a different subfield.
Joseph sends us a tried and true classic: bad date handling code, in JavaScript. We've all seen so much bad date handling code that it takes something special to make me do the "confused dog" head tilt.
var months = new Array(13);
months[1]='January';
months[2]='February';
months[3]='March';
months[4]='April';
months[5]='May';
months[6]='June';
months[7]='July';
months[8]='August';
months[9]='September';
months[10]='October';
months[11]='November';
months[12]='December';
var time = new Date();
var lmonth=months[time.getMonth() + 1];
var date=time.getDate();
var year=time.getFullYear();
document.write(lmonth + ' ');
document.write(date + ', ' + year);
We create a 13 element array to hold our twelve months, because we can't handle it being zero indexed. This array is going to be our lookup table for month names, so I almost forgive making it one-indexed- January is month 1, normally.
Almost. Because not only is that stupid, the getMonth() function on a date returns the month as a zero-indexed number. January is month 0. So they need to add one to the result of getMonth for their lookup table to work, and it's just so dumb.
Then of course, we output this all using document.write, so we just know it's terrible JavaScript, all the way around.
Review: Iris Kelly Doesn't Date, by Ashley Herring Blake
Series: Bright Falls #3
Publisher: Berkley Romance
Copyright: October 2023
ISBN: 0-593-55058-7
Format: Kindle
Pages: 381
Iris Kelly Doesn't Date is a sapphic romance novel (probably a
romantic comedy, although I'm bad at romance subgenres). It is the third
book in the Bright Falls series. In the romance style, it has a new set
of protagonists, but the protagonists of the previous books appear as
supporting characters and reading this will spoil the previous books.
Among the friend group we were introduced to in Delilah Green Doesn't Care, Iris was the irrepressible loudmouth.
She's bad at secrets, good at saying whatever is on her mind, and has zero
desire to either get married or have children. After one of the side
plots of Astrid Parker Doesn't Fail, she
has sworn off dating entirely.
Iris is also now a romance novelist. Her paper store didn't get enough
foot traffic to justify staying open, so she switched her planner business
to online only and wrote a romance novel that was good enough to get a
two-book deal. Now she needs to write a second book and she has
absolutely nothing. Her own avoidance of romantic situations is not
helping, but neither is her meddling family who are convinced her choices
about marriage and family can be overturned with sufficient pestering.
She desperately needs to shake up her life, get out of her creative rut,
and do something new. Failing that, she'll settle for meeting someone in
a bar and having some fun.
Stevie is a barista and actress living in Portland. Six months ago, she
broke up with Adri, her creative partner, girlfriend of six years, and the
first person with whom she had a serious relationship. More precisely,
Adri broke up with her. They're still friends, truly, even though that
friendship is being seriously strained by Adri dating Vanessa, another
member of their small and close-knit friend group. Stevie has
occasionally-crippling anxiety, not much luck in finding real acting roles
in Portland, and a desperate desire to not make waves. Ren, the fourth
member of their friend group, thinks Stevie needs a new relationship, or
at least a fling. That's how Stevie, with Ren as backup and
encouragement, ends up at the same bar with Iris.
The resulting dance and conversation was rather fun for both Stevie and
Iris. The attempted one-night stand afterwards was a disaster due to
Stevie's anxiety, and neither of them expected to see the other again.
Stevie therefore felt safe pretending they'd hit it off to get her friends
off her back. When Iris's continued restlessness lands her in an audition
for Adri's fundraiser play that she also talked Stevie into performing in,
this turns into a full-blown fake dating trope.
These books continue to be impossible to put down. I'm not sure what
Blake is doing to make the pacing so perfect, but as with the previous
books of the series I found this utterly compulsive reading. I started it
in the afternoon, took a break in the evening for a few hours, and then
finished it at 2am.
I wasn't sure if a book focused on Iris would work as well, but I need not
have worried. Iris Kelly Doesn't Date is both more dramatic and
more trope-centered than the earlier books, but Blake handles that in a
way that fits Iris's personality and wasn't annoying even to a reader like
me, who has an aversion to many types of relationship drama. The secret
is Stevie, and specifically having the other protagonist be someone with
severe anxiety.
No was never a very easy word for Stevie when it came to
Adri, when it came to anyone, really. She could handle the little
stuff — do you want a soda, have you seen this movie, do you
like onions on your pizza — but the big stuff, the stuff that caused
disappointed expressions and down-turned mouths... yeah, she sucked at
that part. Her anxiety would flare, and she'd spend the next week
convinced her friends hated her, she'd die alone and miserable, and
wasn't worth a damn to anyone. Then, when said friend or family
member eventually got ahold of her to tell her that, no, of course
they didn't hate her, why in the world would she think that, her
anxiety would crest once again, convincing her that she was terrible
at understanding people and could never trust her own brain to make
heads or tails of any social situation.
This is a spot-on description of a particular type of anxiety, but also
this is the perfect protagonist to pair with Iris. Throughout the series,
Iris has always been the ride-or-die friend, the person who may have no
idea how to help but who will show up anyway and at least try to distract
you. Stevie's anxiety makes Iris feel protective, which reveals one of
the best sides of Iris's personality, and then the protectiveness plays
off against Iris's own relationship issues and tendency to avoid taking
anything too seriously. It's one of those relationships that starts a bit
one-sided and then becomes mutually supporting once Stevie gets her feet
under her. That's a relationship pattern I really enjoy reading about.
As with the rest of the series, the friendship dynamics are great. Here
we get to see two friend groups at work: Iris's, which we've seen in the
previous two volumes and which expanded interestingly in Astrid
Parker Doesn't Fail, and Stevie's, which is new. I liked all of these
people, even Adri in her own way (although she's the hardest to like).
The previous happily-ever-afters do get a bit awkward here, but Blake
tries to make that part of the plot and also avoids most of the problem of
somewhat-boring romantic bliss by spreading the friendship connections a
bit wider.
Stevie's friend group formed at orientation at Reed College, and that let
me put my finger on another property of this series: essentially all of
the characters are from a very specific social class. They're nearly all
arts people (bookstore owner, photographer, interior decorator, actress,
writer, director), they've mostly gone to college, and while most of them
don't have lots of money, there's always at least one person in each
friend group with significant wealth. Jordan, from the previous book, is
a bit of an exception since she works in a trade (a carpenter), but she
still acts like someone from that same social class. It's a bit like
reading Jane Austen novels and realizing that the protagonists are drawn
from a very specific and very narrow portion of society.
This is not a complaint, to be clear; I have no objections to reading
about a very specific social class. But if one has already read lots of
books about this class of people, I could see that diminishing the appeal
of this series a bit. There are a lot of assumptions baked into the story
that aren't really questioned, such as the ubiquity of therapists. (I
don't know how Stevie affords one on a barista salary.) There are also
some small things in the terminology (therapy speak, for example) and in
the specific type of earnestness with which the books attempt to be
diverse on most axes other than social class that I suspect may
grate a bit for some readers. If that's you, this is your warning.
There is a third-act breakup here, just like the previous volumes. There
is also a defense of the emotional punch of third-act breakups in romance
novels in the book itself, put into Iris's internal monologue, so I
suspect that's the author's answer to critics like myself who don't like
the trope. I was less frustrated by this one because it fit the drama
level of the protagonists, but I'll also know to expect a third-act
breakup in any Blake novel I read in the future.
But, all that said, the summary once again is that I loved this book and
could not put it down. Iris is dramatic and occasionally self-destructive
but has a core of earnest empathy that makes her easy to like. She's
exactly the sort of extrovert who is soothing to introverts rather than
draining because she carries the extrovert load of social situations.
Stevie is adorably earnest and thoughtful beneath her anxiety. The two
of them are wildly different and yet remarkably good together, and I loved
reading their story.
Highly recommended, along with the whole series. Start with Delilah
Green Doesn't Care; if you like that, you're in for a treat.
Content note: This book is also rather sex-forward and pretty explicit in
the sex scenes, maybe a touch more than Astrid Parker Doesn't Fail.
If that is or is not your thing in romance novels, be aware going in.
Author: Majoki To dream is the dream. Anyone thinking that we need to sleep to live is missing the real payoff. We should be living to sleep. Snow White had it right for the wrong reasons. She didn’t bite a poisoned apple, she micro-dosed from the forbidden fruit of the real tree of knowledge: somna. […]
We introduced r-ci here in post
#32 nearly four years ago. It has found pretty widespread use
and adoption, and we received a few kind words then (in the
linked issue) and also more
recently (in a follow-up comment) from which we merrily quote:
[…] almost 3 years later on and I have had zero problems with
this CI setup. For people who want reliable R software, resources like
these are invaluable.
And while we followed up with post
#41 about r2u for
simple continuous integration, we may not have posted when we based r-ci on r2u (for the obvious Linux
usage case). So let’s make time now for a (comparatively smaller)
update, and an updated usage example.
We made two changes in the last few days. One is an (obvious in
hindsight) simplification. Given that the bootstrap step
was always executed, and needed no parameters, we pulled it into a new
aggregated setup simply called r-ci that includes it so
that it can be omitted as a step in the yaml file. Second, we recently
needed Fortran on macOS too, and realized it was not installed by
default so we just added that too.
With that a real and used example is now as simple as the
screenshot to the left (and hence one ‘paragraph’ shorter). The trained
eye will no doubt observe that there is nothing specific to a
given repo. And that is basically the key feature: we can simply copy
this file around and get fast and easy and
reliable CI by taking advantage of the underlying robustness of
r2u solving all
dependencies automagically and reliably. The option to enable macOS is
also solid and compelling as the GitHub runners are fast (but more
‘expensive’ in how they count against the limit of minutes—so again a
tradeoff to make), as is the option to run coverage if one so desires.
Some of my repos do too.
Take a look at the r-ci website which has
more examples for the other supported CI services it can be used with, and
feel free to ask questions as issues in the repo.
In Garry Trudeau's wonderful Doonesbury strip, the lead character -- Mike Doonesbury -- has 'summer daydreams' of how things ought to be. In my case, it's year-round and every week! But hey, it's my job and someone's gotta do it -- ponder potential paths -- some plausible and many just kind-of and some not-at-all. Possible paths out of the traps that we face.
And sure, I'm generally relegated to the lamentation of Cassandra, muttering "I told you so!" too many times to count. Though not always, e.g. when the California Democratic Party asked me to propose "near future legislation" to address the plummet of factuality and verification in American politics. Perhaps the most crucial matter of our time! The resulting Fact Act at least got a little attention! Though not enough to get anywhere.
Other proposals include methods to get around the current Supreme Court's outrageous support for gerrymandering. One concept, that would bypass all politicians, got approving attention from a senior US Court of Appeals judge. My collection of such potential maneuvers - many of them non- or even anti-partisan - can be found in Polemical Judo.
But here I'll focus on four concepts that could affect one of the most important and pivotal days in the entire history of civilization.
That pivotal day is today, December 16, 2024.
It's the very last day that a few brave Americans might turn back (partially) a tsunami of treason and pain. Because the very next day -- Tuesday the 17th of December 2024 -- is the day that the Electoral College 'meets' to cast the actual votes that will make Donald Trump President for a second -- and maniacally destructive -- time.
Trying to get these concepts where they might be acted upon has been futile, of course. The Democratic Party political and punditry clans are frantically circling their wagons to fend off accountability for their incompetent blunders. Above all, any idea Not Invented Here is to be met with savage repression.
In particular, any mention of the Electoral College prompts shrugs and sighs, even though several past elections have tilted this way or that (depending on your view) with either patriotic acts of courage or shenanigans.
Well, well.
With one day left, I must admit I was straying outside my lane.
Still, here are the two EC notions that I tried to convey.
And one more that occurred to me just yesterday!
... Plus one that Joe Biden might still pull off, during his remaining month in office, and be known for it, forever.
== Two now-forlorn ways to shift the Electoral College... and one more that could work, even now ==
Okay, it's too late for these first two. In fact, it's because it's too late that I'm telling you the second one, now.
* The first one is so old that it's in Polemical Judo. It describes how two rich dudes - one Republican, one Democrat and both patriots -- might arrange for the Presidential Electors to actually meet and deliberate in person -- by their OWN volition and without any outside pressures -- as the Founders clearly intended. I describe the concept here. There's no reason it couldn't happen.
But not this year. And maybe - if the Putinists succeed - not ever. Still, here it is
* Second idea: I only hinted at this one, in hopes that I might be able to pitch it directly to Kamala Harris.
Only her.
It would have guaranteed her a place of amazed remembrance across U.S. history!
Alas, her layers of not-invented-here factotums were too thick.
Boiled down to essence, I suggested that she could declare:
"Look, I lost the election! That's on me. By narrow margins but in crucial states, the people chose for the next Senate, House and Presidency to be controlled by Republicans.
"But does it have to be THIS Republican? A capering, frothing madman whose every chosen appointee openly and gleefully declares open war against every fact-using element in American life? Like the Roman Emperor Caligula, who made his horse Consul of Rome, Trump is appointing a whole herd of utter crazies!"
Forget idiocies like "right" or "left." This is now about all-out war vs all fact-using professions! From science and teaching, medicine and law and civil service to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on terror. And so she might have continued:
"Already it's clear that many of those who supported Donald Trump -- perhaps because they feared or disliked me -- are having buyers' remorse. So let me ask this.
"If we're to be led by Republicans, can it at least be grownup ones? Anyway the last thing I should do is stand in the way of the ruling party making their own choice on the matter.
"And hence I am stepping out of the way.
"I now ask all of the U.S. Electoral College members who are pledged to me NOT to vote for me on December 17!
"Instead, I ask that all of them... every single one... vote for the current head of the Republican Party and Majority Leader in the U.S. Senate, the honorable John Thune."
Amid that moment of shock, she could point out that Thune and she have cancelled each other's votes any number of times. They support vastly different policies. Moreover --
"If this works, I will oppose him at most levels of practical politics.
"But, unlike Donald Trump, John Thune is an honorable person, a sane person, a person of genuine mental capacity and above all, an American patriot."
Look, here's the deal: If every Democratic elector voted for Thune, the 312 Republican electors would then have a choice. A chance to back out of their monstrous deal with the devil. Just 15% of them could make the difference and turn a madman from all that power. Just 40 or so could instead make a decent human being President!
And then, in a spirit of bipartisan peace-making, maybe vote to make Tim Walz Vice President?
"Just 15% of GOP electors could do this. And don't let those sappy state laws against 'faithless electors' intimidate you! They have no value against the Founders' clear intent for Elector sovereignty.
"And so I urge you Democratic electors who are pledged to me, to follow my lead on this, one last time. Let us lay a challenge before our Republican neighbors. Tell them YOU WIN! Now show us that you plan to use your victory toward an America that is at least not-insane."
== Brin's gone mad? ==
So, okay, it's not gonna happen. I never shared this with all of you out there, in forlorn hope that she might ponder it. Ponder acting as Alexander Hamilton did, in the mixed up election of 1800. Rising above party to pick decency over corruption.
Is the idea at least original?
Sure. It's what I'm paid for...
... and it's why many DP hacks cry "Shields up!" against anything like original thinking.
Okay. Maybe I'm a fool. But at least an entertaining one!
== Tonight's last idea! ==
Okay then, if it's too late for those two ideas, then why am I hurrying to post this blog, on the last of all possible days?
Well, first, to once again remind Joe Biden that his own potential gambit still is possible! I described it here and the potential for utterly rocking the entire political boat is stunning!
(You could do this one thing, Joe! During the next month. It might accomplish nothing... or else transform U.S. politics and society utterly. And you have nothing to lose.)
But that's not tonight's featured idea!
Here it comes.
This one ain't gonna work, either! But it is related to the Biden vs. Blackmail concept. And I'd be wrong not to at least mention it...
...and I promise it is WAY unconventional. Though it would make a great concept for a thriller novel!
Okay, here goes.
Donald Trump has made it clear how much he hates modernity and every smartypants profession -- it is the shared hate that got him support from many former democrats and all of the MAGAs who now pour spite at the universities and nerdy civil servants and scientists and FBI/Intel/military officers and all the rest who are now hated-on nightly by Fox.
Above all, Trump was traumatized when nearly all of the adults-in-the-room he appointed in 2017 later turned and denounced him! Almost 100 of them. Two secretaries of state, two of defense, two chiefs of staff and so on and so on. And Don swore never to let it happen again!
The one common trait of ALL of his new appointees is that there is not a single adult among them. Not one who wants to do a good job. All are meant - above all - to spite every grownup in America and around the world. All of them are Caligula's horses.
But that's not enough. Personal loyalty is paramount to Trump. It is the ONLY thing that matters. And there is one way to ensure loyalty, that he learned from Vladimir Putin.
No, it's not poison tea or upper story windows. Not yet.
Rather, the thing that works.
The one thing that works almost always and almost perfectly is blackmail.
(I can just hear many of you: "Again, Brin, with the blackmail thing?"
(Hell yeah! Because everyone who simply shrugs it off is a pure dunce.
(And I am looking at YOU, right now, my friend.)
Blackmail means that if someone turns on you, you get to ruin their life. It works. It is likely rife in DC right now. And Joe Biden could shatter it, during his final month. He could!
But... but Trump is making hundreds of appointments!
So how could he collect blackmail on all of them?
Or even the top fifty or so?
That's ridiculous, Brin!
Why... he'd have to.... He'd have to...
Ah, I see the light in some of your eyes.
You are starting to see.
You begin to picture a set of rooms, in a back corner of Mar-a-Lago...
== The irony of the donkey ==
Okay, we truly are down a rabbit hole, now! Some of you are storming off, in a huff, declaring that I've lost all credibility, if not my marbles.
The rest of you are staying, to see how far down it goes.
Hang in there. It won't take long.
LOOK at the execrable quality of the men and women Donald Trump is appointing! This is their one chance in otherwise miserable lives, envying and hating all those snooty, smartypants fact people who actually know stuff and can think.
These moronic appointees want aboard!
They will do anything Trump asks of them...
... including going into those back rooms at Mar-a-Lago and -- in front of cameras -- giving Don all the leverage and kompromat he could ever want...
...so he can feel secure in their loyalty, forever.
And yes, how lovely - if kinky - the symbolism, if some of the acts involve the symbol animal of the other party?
Before you sniff and roll your eyes... consider. The motive, means and opportunity are all there, along with the expertise.
The method has been standard in Russian secret services ever since czarist times!
The Oprichnina, the Okhrana, the NKVD and KGB, the current Kremlin all used it... and quite a few western oligarchs, as well.
Can you give me one good reason why Donald Trump would NOT do this? Given motive, means and opportunity... and the flunkies' desperate wish to get aboard? And his own desperate wish to keep personal loyalty secured, forever and ever and ever?
== And so, one last forlorn hope ==
If any of you out there happens to know anyone who knows any of Trump's menagerie of jibbering losers (and I am deliberately excluding a couple of hugely brilliant winners), you MIGHT pass along word about this. Especially today.
Tell them there's a possible way out of this trap:
- If you reveal it on Tuesday December 17, you might have the perpetual respect and gratitude of the nation! A nation you just might have helped to save! And the donkey thing won't matter.
- If you reveal it to Biden and/or the FBI before January 20, you will likely get a pardon. And still be thanked for arming us to protect against the worst.
- and if you miss those dates, but ever step up and help us all to topple the madness, I promise that I - at least - will fight for you.
Okay then, there's my last gasp of a "Brin's Autumn Daydreams" about the mad election of 2024...
There's not much to say here, as we've seen this before, and we'll see it again. It's wrong, it's not how anything should be done, yet here it is, yet again. In this case, it's C#, which also has a lovely set of built-in options for doing this, making this code totally unnecessary.
Author: Julian Miles, Staff Writer “DAY-NA!” The roar of anger is so loud it stops everyone. Dayna, presumably the being we’ve managed to corner after a three-hour citywide chase, was dubbed ‘Jaqueline the Ripper’ by the newsfeeds. Surrounded by rings of armoured vehicles and furious enforcers, she was laughing. Now she looks scared. What’s coming? […]
Our longstanding offering won’t fundamentally change next year, but we are going to introduce a new offering that’s a big shift from anything we’ve done before—short-lived certificates. Specifically, certificates with a lifetime of six days. This is a big upgrade for the security of the TLS ecosystem because it minimizes exposure time during a key compromise event.
Because we’ve done so much to encourage automation over the past decade, most of our subscribers aren’t going to have to do much in order to switch to shorter lived certificates. We, on the other hand, are going to have to think about the possibility that we will need to issue 20x as many certificates as we do now. It’s not inconceivable that at some point in our next decade we may need to be prepared to issue 100,000,000 certificates per day.
That sounds sort of nuts to me today, but issuing 5,000,000 certificates per day would have sounded crazy to me ten years ago.
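To make that scaling concrete, here is a rough back-of-the-envelope sketch (my own illustration, not Let's Encrypt's published math), assuming ACME clients keep renewing at roughly two-thirds of the certificate lifetime as they commonly do today:

# Rough issuance scaling when moving from 90-day to 6-day certificates.
# Assumption (mine): clients renew at about two-thirds of the lifetime,
# which is the common ACME client default today.
def issuances_per_year(lifetime_days, renew_fraction=2/3):
    return 365 / (lifetime_days * renew_fraction)

ninety_day = issuances_per_year(90)   # about 6 issuances per year
six_day = issuances_per_year(6)       # about 91 issuances per year
print(f"cadence factor: {six_day / ninety_day:.0f}x")          # roughly 15x
print(f"20x of 5,000,000/day is {20 * 5_000_000:,} per day")   # 100,000,000

The renewal cadence alone gives roughly a 15x increase; a 20x planning figure leaves headroom for continued growth in the number of subscribers.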
Finders is a far future science fiction novel with cyberpunk vibes.
It is the first of a series, but the second (and, so far, only other) book
of the series is a prequel. It stands alone reasonably well (more on that
later).
Cassilde Sam is a salvor. That means she specializes in exploring ancient
wrecks and ruins left behind by the Ancients and salvaging materials that
can be reused. The most important of those are what are called Ancestral
elements: BLUE, which can hold programming; GOLD, which reacts to
BLUE instructions; RED, which produces actions or output; and GREEN, the
rarest and most valuable, which powers everything else. Cassilde and her
partner Dai Winter file claims on newly-discovered or incompletely
salvaged Ancestor sites and then extract elemental material and anything
else of value in their small salvage ship.
Cassilde is also dying. She has Lightman's, an incurable degenerative
disease that can only be treated with ever-increasing quantities of GREEN.
It's hard to sleep, hard to get warm, hard to breathe, and eventually
she'll run out of money to pay for the GREEN and she'll die.
To push that day off into the future, she and Dai need work. The good
news is that the wreckage of a new Ancestor sky palace was discovered in a
long orbit and will create enough salvage work for every experienced
salvor in the system. The bad news is that they're not qualified to bid
on it. They need a scholar with a class-one license to bid on the best
sections, and they haven't had a reliable scholar since their former
partner and lover Summerland Ashe picked the opposite side in the Troubles
and left the Fringe for the Entente, the more densely settled and
connected portion of human space.
But, unexpectedly and suspiciously, Ashe may be back and offering to work
with them again.
So, first, I love this setting. This is far from the first SF novel that
is set in the aftermath of a general collapse of human civilization and
revolving around discovering lost mysteries. Most examples of that genre
are post-apocalyptic novels limited to Earth or the local solar system,
but Kate Elliott's Unconquerable Sun
comes immediately to mind. It's also not the first space archaeology
series I've read; Kristine Kathryn Rusch's story series starting with
"Diving into the Wreck" also
came to mind. But I don't recall the last time I've seen the author sell
the setting so effectively.
This is a world with starships and spaceports and clearly advanced
technology, but it feels like a post-collapse society that's built
on ruins. It's not just that technology runs on half-understood Ancestral
elements and states fight over control of debris fields. It's also that
the society repurposes Ancestral remnants in ways that both they and the
reader know weren't originally intended, and that sometimes are more
ingenious or efficient than how the Ancestors probably used them. There's
a creative grittiness here that reminds me of good cyberpunk.
It's not just good atmospheric writing, though. Scott makes a
world-building decision that is going to sound trivial when I say it, but
that has brilliant implications for the rest of the setting. There was
not just one collapse; there were two.
The Ancestor civilization, presumed to be the first human civilization,
has passed into myth, quite literally when it comes to the stories around
its downfall in the aftermath of a war against AIs. After the Ancestors
came the Successors, who followed a similar salvage and rebuild approach
and got as far as inventing their own warp drive technology that was based
on but different than the Ancestor technology. Then they also collapsed,
leaving their adapted technology and salvage operations layered over
Ancestor sites. Cassilde's civilization is the third human starfaring
civilization, and it is very specifically the third, neither the second
nor one of dozens.
This has so many small but effective implications that improve this story.
A fall happened twice, so it feels like a pattern that makes Cassilde's
civilization paranoid, but it happened for two very different reasons, so
there is room to argue against it being a pattern. Salvage is harder
because of the layering of Ancestor and Successor activity. Successors
had their own way of controlling technology that is not accessible to
Cassilde and her crew but is also not how the technology was intended to
be used, which sends small ripples of interesting complexity through the
background. And salvors are competing not only against each other but
also against Successor salvage operations for which they have fragmentary
records. It's a beautifully effective touch.
Melissa Scott has been publishing science fiction for forty years, and it
shows in this book. The protagonists are older characters: established
professionals with resource problems but also social connections and an
earned reputation, people who are trying to do a job and live their lives,
not change the world. The writing is competent, deft, and atmospheric,
with the confidence of long practice, but it also has the feel of an
earlier era of science fiction. I mentioned the cyberpunk influence,
which shows in the grittiness of the descriptions, the marginality of the
characters in society, and the background theme of repurposing and reusing
technology in unintended ways. This is the sort of book that feels
solidly in the center of science fiction, without the genre mixing into
either fantasy or romance that has become somewhat more common, and also
without the dramatics of space opera (although the reader discovers that
the stakes of this novel may be higher than anyone realized).
And yet, so much of this book is about navigating a complicated romantic
relationship, and that's where the story structure felt a bit odd.
Cassilde, Dai, and Ashe were a polyamorous triad (polyamory also shows up
in Scott's excellent Roads of Heaven series),
and much of the first third of the book deals with the fracturing of trust
with Ashe and their renegotiation of that relationship given his return.
This is refreshingly written as the thoughtful interaction of three adults
who take issues of trust seriously, but that also means it's much less
dramatic than it sounds, and that means this book starts exceptionally
slow. Scott is going somewhere, and the slow build became engrossing
around the midpoint of the book, but I had to fight to stick with it at
the start.
About 80% of the way through this book, I had no idea how Scott was going
to wrap things up in the pages remaining and was bracing myself for some
sort of series cliffhanger. This is not what happens; the plot is not
fully resolved in every detail, but it reaches a conclusion of sorts that
does not mandate a sequel. I did think the end was a little bit
unsatisfying, though, and I want another book that explores the
implications of the ending. I think it would have to be a much different
book, and the tonal shift might be stark.
I've had this book on my to-read list for a while and kept putting it off
because I wasn't sure I was in the mood for something precarious and
gritty. This turned out to be an accurate worry: this is literally a book
about salvaging the pieces of something full of wonders inextricably
connected to dangers. You have to be in a cyberpunk sort of mood. But
I've never read a bad Melissa Scott book, and this is no exception. The
simplicity and ALL-CAPSNESS of the Ancestral elements grated a bit, but
apart from that, the world-building is exceptional and well worth the
trip. Recommended, although be warned that, if you're like me, it may not
grab you from the first page.
Followed by Fallen, but that book is a prequel that does not share
any protagonists.
Content notes: disability and degenerative illness in a universe where
magical cures are possible, so be warned if that specific thematic
combination is not what you're looking for.
India’s prime minister, Narendra Modi, has used AI to translate his speeches for his multilingual electorate in real time, demonstrating how AI can help diverse democracies to be more inclusive. AI avatars were used by presidential candidates in South Korea in electioneering, enabling them to provide answers to thousands of voters’ questions simultaneously. We are also starting to see AI tools aid fundraising and get-out-the-vote efforts. AI techniques are starting to augment more traditional polling methods, helping campaigns get cheaper and faster data. And congressional candidates have started using AI robocallers to engage voters on issues. In 2025, these trends will continue. AI doesn’t need to be superior to human experts to augment the labor of an overworked canvasser, or to write ad copy similar to that of a junior campaign staffer or volunteer. Politics is competitive, and any technology that can bestow an advantage, or even just garner attention, will be used.
Most politics is local, and AI tools promise to make democracy more equitable. The typical candidate has few resources, so the choice may be between getting help from AI tools or getting no help at all. In 2024, a US presidential candidate with virtually zero name recognition, Jason Palmer, beat Joe Biden in a very small electorate, the American Samoan primary, by using AI-generated messaging and an online AI avatar.
At the national level, AI tools are more likely to make the already powerful even more powerful. Human + AI generally beats AI only: The more human talent you have, the more you can effectively make use of AI assistance. The richest campaigns will not put AIs in charge, but they will race to exploit AI where it can give them an advantage.
But while the promise of AI assistance will drive adoption, the risks are considerable. When computers get involved in any process, that process changes. Scalable automation, for example, can transform political advertising from one-size-fits-all into personalized demagoguing—candidates can tell each of us what they think we want to hear. Introducing new dependencies can also lead to brittleness: Exploiting gains from automation can mean dropping human oversight, and chaos results when critical computer systems go down.
Politics is adversarial. Any time AI is used by one candidate or party, it invites hacking by those associated with their opponents, perhaps to modify their behavior, eavesdrop on their output, or to simply shut them down. The kinds of disinformation weaponized by entities like Russia on social media will be increasingly targeted toward machines, too.
AI is different from traditional computer systems in that it tries to encode common sense and judgment that goes beyond simple rules; yet humans have no single ethical system, or even a single definition of fairness. We will see AI systems optimized for different parties and ideologies; for one faction not to trust the AIs of a rival faction; for everyone to have a healthy suspicion of corporate for-profit AI systems with hidden biases.
This is just the beginning of a trend that will spread through democracies around the world, and probably accelerate, for years to come. Everyone, especially AI skeptics and those concerned about its potential to exacerbate bias and discrimination, should recognize that AI is coming for every aspect of democracy. The transformations won’t come from the top down; they will come from the bottom up. Politicians and campaigns will start using AI tools when they are useful. So will lawyers, and political advocacy groups. Judges will use AI to help draft their decisions because it will save time. News organizations will use AI because it will justify budget cuts. Bureaucracies and regulators will add AI to their already algorithmic systems for determining all sorts of benefits and penalties.
Whether this results in a better democracy, or a more just world, remains to be seen. Keep watching how those in power use these tools, and also how they empower the currently powerless. Those of us who are constituents of democracies should advocate tirelessly to ensure that we use AI systems to better democratize democracy, and not to further its worst tendencies.
This essay was written with Nathan E. Sanders, and originally appeared in Wired.
It’s been the biggest year for elections in human history: 2024 is a “super-cycle” year in which 3.7 billion eligible voters in 72 countries had the chance to go to the polls. These are also the first AI elections, where many feared that deepfakes and artificial intelligence-generated misinformation would overwhelm the democratic processes. As 2024 draws to a close, it’s instructive to take stock of how democracy did.
In a Pew survey of Americans from earlier this fall, nearly eight times as many respondents expected AI to be used for mostly bad purposes in the 2024 election as those who thought it would be used mostly for good. There are real concerns and risks in using AI in electoral politics, but it definitely has not been all bad.
The dreaded “death of truth” has not materialized—at least, not due to AI. And candidates are eagerly adopting AI in many places where it can be constructive, if used responsibly. But because this all happens inside a campaign, and largely in secret, the public often doesn’t see all the details.
Connecting with voters
One of the most impressive and beneficial uses of AI is language translation, and campaigns have started using it widely. Local governments in Japan and California and prominent politicians, including Indian Prime Minister Narendra Modi and New York City Mayor Eric Adams, used AI to translate meetings and speeches for their diverse constituents.
Even when politicians themselves aren’t speaking through AI, their constituents might be using it to listen to them. Google rolled out free translation services for an additional 110 languages this summer, available to billions of people in real time through their smartphones.
Other candidates used AI’s conversational capabilities to connect with voters. U.S. politicians Asa Hutchinson, Dean Phillips and Francis Suarez deployed chatbots of themselves in their presidential primary campaigns. The fringe candidate Jason Palmer beat Joe Biden in the American Samoan primary, at least partly thanks to using AI-generated emails, texts, audio and video. Pakistan’s former prime minister, Imran Khan, used an AI clone of his voice to deliver speeches from prison.
Perhaps the most effective use of this technology was in Japan, where an obscure and independent Tokyo gubernatorial candidate, Takahiro Anno, used an AI avatar to respond to 8,600 questions from voters and managed to come in fifth among a highly competitive field of 56 candidates.
Nuts and bolts
AIs have been used in political fundraising as well. Companies like Quiller and Tech for Campaigns market AIs to help draft fundraising emails. Other AI systems help candidates target particular donors with personalized messages. It’s notoriously difficult to measure the impact of these kinds of tools, and political consultants are cagey about what really works, but there’s clearly interest in continuing to use these technologies in campaign fundraising.
Polling has been highly mathematical for decades, and pollsters are constantly incorporating new technologies into their processes. Techniques range from using AI to distill voter sentiment from social networking platforms—something known as “social listening”—to creating synthetic voters that can answer tens of thousands of questions. Whether these AI applications will result in more accurate polls and strategic insights for campaigns remains to be seen, but there is promising research motivated by the ever-increasing challenge of reaching real humans with surveys.
In 2024, similar capabilities were almost certainly used in a variety of elections around the world. In the U.S., for example, a Georgia politician used AI to produce blog posts, campaign images and podcasts. Even standard productivity software suites like those from Adobe, Microsoft and Google now integrate AI features that are unavoidable—and perhaps very useful to campaigns. Other AI systems help advise candidates looking to run for higher office.
Fakes and counterfakes
And there was AI-created misinformation and propaganda, even though it was not as catastrophic as feared. Days before a Slovakian election in 2023, fake audio discussing election manipulation went viral. This kind of thing happened many times in 2024, but it’s unclear if any of it had any real effect.
In the U.S. presidential election, there was a lot of press after a robocall of a fake Joe Biden voice told New Hampshire voters not to vote in the Democratic primary, but that didn’t appear to make much of a difference in that vote. Similarly, AI-generated images from hurricane disaster areas didn’t seem to have much effect, and neither did a stream of AI-faked celebrity endorsements or viral deepfake images and videos misrepresenting candidates’ actions and seemingly designed to prey on their political weaknesses.
AI also played a role in protecting the information ecosystem. OpenAI used its own AI models to disrupt an Iranian foreign influence operation aimed at sowing division before the U.S. presidential election. While anyone can use AI tools today to generate convincing fake audio, images and text, and that capability is here to stay, tech platforms also use AI to automatically moderate content like hate speech and extremism. This is a positive use case, making content moderation more efficient and sparing humans from having to review the worst offenses, but there’s room for it to become more effective, more transparent and more equitable.
There is potential for AI models to be much more scalable and adaptable to more languages and countries than organizations of human moderators. But the implementations to date on platforms like Meta demonstrate that a lot more work needs to be done to make these systems fair and effective.
One thing that didn’t matter much in 2024 was corporate AI developers’ prohibitions on using their tools for politics. Despite market leader OpenAI’s emphasis on banning political uses and its use of AI to automatically reject a quarter-million requests to generate images of political candidates, the company’s enforcement has been ineffective and actual use is widespread.
The genie is loose
All of these trends—both good and bad—are likely to continue. As AI gets more powerful and capable, it is likely to infiltrate every aspect of politics. This will happen whether the AI’s performance is superhuman or suboptimal, whether it makes mistakes or not, and whether the balance of its use is positive or negative. All it takes is for one party, one campaign, one outside group, or even an individual to see an advantage in automation.
This essay was written with Nathan E. Sanders, and originally appeared in The Conversation.
Author: Don Nigroni I met Nancy in college, and we got married shortly after she received her PhD. While I’m smarter than the average bear, Nancy is brilliant. I work for a stock brokerage firm, and she worked for the Department of Energy until three years ago when she joined a private consortium to do […]
I just bought a Hisense 65U80G 65″ Inch 8K ULED Android TV (2021 model) for $1,568 including delivery. I got that deal by googling refurbished 8K TVs and finding the cheapest one I could buy. Amazon and eBay didn’t have any good prices on second hand 8K TVs and new ones start at $3,000 on special. I didn’t assess how Hisense compares to other TVs, as far as I could determine there was only one model of 8K TV on sale in Australia in the price range I was prepared to pay. So I won’t review how this TV compares to other models but how refurbished TVs compare to other display options.
I bought this because the highest resolution monitor in my price range is 5120*2160 [1]. While I could get a 5120*2880 monitor for around $1,500, paying 3* the money for 33% more pixels is bad value for money. Getting 4* the pixels for under 3* the price is good value even when it’s a TV with the lower display quality that involves.
I don’t plan to make it a main monitor. While 5120*2160 isn’t as good as I like on my desk, it’s bearable and the quality of the display is high. High resolution isn’t needed for all tasks, for example I’m writing this blog post on my laptop while watching a movie on the 8K TV.
One thing I’d like to do with the 8K TV when I get it working as a monitor is to share the screen for team programming projects. I don’t have any specific plans other than team coding projects at the moment. But it will be interesting to experiment with it when I get it working.
Technical Issues with High Resolution Monitors
Hardware Needed
A lot of the graphics hardware out there doesn’t support resolutions higher than 5120*2880. It seems that most laptops don’t support resolutions higher than that, and even going above 4K is difficult. Only quite recent and high end video cards will do 8K. Apparently the RTX 2080 is one of the oldest ones that does, and that’s $400 on ebay. Strangely the GPU chipset spec pages don’t list the maximum resolution, and there’s the additional complication that the other chips might not support the resolutions that the GPU itself can support.
As an aside I don’t use NVidia cards for regular workstations due to reliability problems. But they are good for ML work and for special purpose systems.
Interface Versions
To do 8K video it seems that you need HDMI 2.1 (or maybe 2.0 with 4:2:0 chroma subsampling) or DisplayPort 1.3 for 30Hz with 24bit color, and DisplayPort 2.0 for higher refresh rates. But using a particular version of the interface doesn’t require supporting all the resolutions that it might support. This TV has HDMI 2.1 inputs, and I’ve bought an adaptor cable that does DisplayPort 1.4 to HDMI 2.1 at 8K resolution. So I need a video card that does DisplayPort 1.4 or HDMI 2.1 output. That doesn’t mean that the card will work, but it could work.
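To put rough numbers on why those interface versions matter, here is a small illustrative calculation (my own figures for raw pixel data, ignoring blanking intervals and link encoding overhead, so real requirements are somewhat higher):

# Approximate raw video bandwidth for 8K at various settings.
def raw_gbps(width, height, refresh_hz, bits_per_pixel):
    return width * height * refresh_hz * bits_per_pixel / 1e9

print(f"8K 30Hz 24bpp (RGB/4:4:4): {raw_gbps(7680, 4320, 30, 24):.1f} Gbit/s")  # ~23.9
print(f"8K 30Hz 12bpp (4:2:0):     {raw_gbps(7680, 4320, 30, 12):.1f} Gbit/s")  # ~11.9
print(f"8K 60Hz 24bpp (RGB/4:4:4): {raw_gbps(7680, 4320, 60, 24):.1f} Gbit/s")  # ~47.8
# For comparison, approximate link capacities: HDMI 2.0 carries roughly
# 14.4 Gbit/s of video data, DisplayPort 1.3/1.4 (HBR3) roughly 25.9 Gbit/s,
# and HDMI 2.1 up to 48 Gbit/s. So 8K@30 with full colour needs DisplayPort
# 1.3+ or HDMI 2.1, while 4:2:0 subsampling squeezes it onto HDMI 2.0.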
It’s a pity that no-one has made a USB-C video controller that has a basic frame-buffer supporting 8K and the minimal GPU capabilities. The consensus of opinion is that no games will run well at 8K at this time so anyone using 8K resolution doesn’t need GPU power unless it’s for ML stuff.
I’m thinking of making a system that can be used as an ML server and X/Wayland server so a GPU with a decent amount of RAM and compute power would be good. I’m not particularly interested in spending $1,500+ to get a GPU that can drive a $1,568 TV. I’m looking into getting an RTX A2000 with 12G of RAM which should be adequate for ML experiments and can handle 8K@60Hz output.
I’ve ordered a DisplayPort to HDMI converter cable so if I get a DisplayPort card it will work.
Software Support
When I first got started with 4K monitors I had significant problems in adjusting the UI to be usable. Software support for scaling is much better now than it was then, and 8K at 65″ has a lower DPI than 4K at 32″. So I hope this won’t be an issue.
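As a quick sanity check on that DPI comparison (my own arithmetic, assuming standard 16:9 panels), a 65″ 8K screen is in fact slightly less dense than a 32″ 4K monitor:

# Pixel density (PPI) of a 16:9 panel from its resolution and diagonal size.
import math

def ppi(h_pixels, v_pixels, diagonal_inches):
    return math.hypot(h_pixels, v_pixels) / diagonal_inches

print(f'65" 8K: {ppi(7680, 4320, 65):.0f} PPI')   # ~136
print(f'32" 4K: {ppi(3840, 2160, 32):.0f} PPI')   # ~138

So any UI scaling that works on a 32″ 4K monitor should, if anything, need slightly less scaling on the 8K TV.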
Progress So Far
My first Hisense 8K TV stopped working properly. It would change to a mostly white screen after being used for some time. The screen would change in ways that correlated to changes in what should appear, but not in a way that was usable. It was just a different pattern of white blobs when I changed to a menu view, not anything that allowed using it. I presume that this was the fault that drove the need for refurbishment, as when I first got the TV it was still signed in to Google accounts for YouTube and to Netflix.
Best Buy Electrical was good about providing a quick replacement, they took away the old TV and delivered a new one on the same visit and it’s now working well.
I’ve obtained a NVidia card that can allegedly do 8K output and a combination of cables that might be able to carry an 8K signal. Now I just need to get the NVidia drivers to not cause a kernel panic to get things to work.
I recently got a OnePlus 6 for the purpose of running Debian, here’s the Debian wiki page about it [1]. It runs Debian nicely and the basic functions all work, but the problem I’m having now is that AldiMobile (Telstra) and KoganMobile (Vodafone) don’t enable VoLTE for that phone, and all the Australian telcos have turned off 3G. The OnePlus 6 does VoLTE with Chinese SIMs so the phone itself can do it.
The OnePlus 6 was never sold in Australia by the telcos, so they are all gray-market imports which aren’t designed by OnePlus to work in Australia. Until recently that wasn’t a problem, but now that the 3G network has been turned off we need VoLTE and OnePlus didn’t include that in the OS. Reddit has documentation on how to fix this but it has to be done on Android [2]. So I had to go back from Mobian to Android to get VoLTE (and VoWifi) working and then install Mobian again.
For people with similar issues Telstra has a page for checking which phones are supported [3], it’s the only way to determine if it’s the phone or the network that makes VoLTE not work – Android isn’t informative about such things. Telstra lists the OP6 as a suitable phone.
Now after doing this I still can’t get the OP6 working for phone calls on Phosh or PlasmaMobile and I’m not sure why. I’m going to give the PinePhone Pro another go and see if it now works better. In the past I had problems with the PinePhonePro battery discharging too fast, charging too slowly, and having poor call quality [4]. The battery discharge issue should be at least alleviated by some of the changes in the Plasma 6 code that’s now in Debian/Unstable.
I’ve also been lent a PinePhone (non-pro) and been told that it will have better battery life in many situations. I’ll do some tests of that. The PinePhonePro isn’t capable of doing the convergence things I was hoping to do so the greater RAM and CPU power that it has aren’t as relevant as they otherwise would be.
I have a vision for how phones should work. I am not discouraged by the Librem 5, PinePhonePro, Note 9, and OnePlus 6 failing in various ways to do what I hoped for. I will eventually find a phone that I can get working well enough.
PREVIOUSLY… Alvin Montessori, ‘human advisor’ aboard the exploration vessel Clever Gamble, has slurried down to planet Oxytocin with a landing team of demmie crewmates. Impulsive, mercurial beings, infuriating but… brilliant and kind of lovable, despite all that. A bit excitable and jumpy, the security officer fires a stun gun at a local native. Fortunately, this particular kind of stun gun is non-lethal in… unique ways. But unfortunately…
Reprise:
The native – pinned to the ground like Gulliver by the stun nanos — was now much calmer, prattling at a slower pace while I set up the universal translator on its tripod. Our captain dropped to one knee, preparing for that special moment when true First Contact could begin. Colored buttons flickered as the machine scanned, seeking meaning in the slur of local speech. Abruptly, all lights turned green. The translator swiveled and fired three more nanos at the native, one for each ear and another that streaked like a smart missile down his throat.
It isn’t painful, but startlement made him stop and swallow in surprise.
“On behalf of the Federated Alliance of—” Captain Ohm began, expansively spreading his arms. Then he frowned as the impudent creature interrupted, this time speaking aristocratically-accented Demmish.
“…I don’t know who you people are, or where you come from, but you must get out of the park, quickly! Don’t you know it’s dangerous?”
Part Three
While I vaporized the rest of the stun-ropes, Guts (our medical officer) helped the poor fellow back to his feet.
I was about to resume questioning him when Nuts squeezed between us, giving me a sharp swipe of her elbow. I rubbed my ribs as she brushed leaves and sticks off the native gentleman’s clothing, getting his measure with a few demure, barely noticeable gropes.
That was when the security leader Lieutenant Gala Morrell came with bad news.
“Captain, I’m sorry to report that Crewman Wems has disappeared.”
Ohm gave an exasperated sigh. “Wems, eh? Missing, you say? Well, hm.”
He glanced at the other security men. “I guess we could send Jums and Smet to look for him.”
The two greenies paled, cringing backward two paces. I cleared my throat. The captain looked my way.
“No?”
“Not if you ever want to see them again, sir.”
The captain may be impulsive, but he’s not stupid.
“Hmm, yeah. Better save ’em for later.”
He shrugged. “Okay, we all go. Form up everybody!”
Each of us was equipped with a locator, to find the spigot in case we got separated. I tried scanning for Wems, but could pick up no sign of his signal. Either something was jamming it or he was out of range. Or the transmitter had been vaporized – and Wems along with it.
We scoured the area for the better part of an hour, while our former captive grew increasingly nervous, sucking on his lower lip and peering toward the bushes. Finally, we decided to let him choose our direction of march, flanked on one side by the captain and the other by our chief artificer, Commander-Engineer Nomlin – or “Nuts” – who gripped his arm like a tourniquet, batting her eyes so fast the wind might have mussed his hair again, if it weren’t already coifed and greased back from a peaked forehead.
Aside from several teeth even more pointy than a demmie’s, our guide had pasty skin that he tried to keep shaded with his cloak. Taking readings, I found that the sun did emit high ultraviolet levels. Moreover, the air was laced with industrial pollutants and signs of a degraded ozone layer – fairly typical for a world passing through its Level Sixteen crisis point. If proper relations were established, we might help the natives with such problems. Perhaps enough to make up for contacting them in the first place.
The native informed Nuts that his name was “Earl Dragonlord” – at least that is how the nano in his throat forced his vocal apparatus to pronounce it, in accented demmish. He seemed unaware of any change in speech patterns, since other nanos in his ears re-translated the sounds back into his native tongue. From his perspective, we were all miraculously speaking the local lingo.
The master translator unit followed our party, watching out for more aliens to convert in this way. A typically demmie solution to the inconvenience of a polyglot cosmos.
Our chief artificer swooned all over Earl, asking him what the name of that tree was, and how did he ever get such dark eyes, and how long would it take to have a local tailor make another cape just like his. Fortunately, Nuts had to pause occasionally to breathe. During one of these intermissions, Captain Ohm broke in to ask about the “danger” Earl spoke of earlier.
“It’s become a nightmare in our city!” he related in hushed tones, glistening eyes darting nervously. “The Lik’ems are breaking their age-old vows. They no longer cull only the least-deserving Standards, but prey on anyone they wish! Why, they’ve even taken to pouncing on Nomorts like you and me! Then there’s the ongoing strike by the corpambulists…”
It sounded awfully complicated already, and we’d only gone fifty meters from the spigot. I interrupted.
“I’m sorry. Did you say – ‘like you and me?’ What do you mean by that?”
He glanced at me, noticing my human features. “I was referring to your companions and me. No offense meant. Although you are clearly a Standard, I can tell that your lineage is strong, and your bile is un-ripe. Or else, why would you mingle with these Nomorts in apparent friendship? True, your kind is used to being hunted. Nevertheless, you must realize the rules are drastically changed here. Traditional restraints no longer hold in our poor city!”
I shared a glance with the Captain. Clearly, the native thought we were visitors from another town, and that the demmies were fellow “Nomorts”… his own kind of people. Perhaps because of the similarity in dentition. In his hurry, he seemed willing to overlook our uniforms and strange tools.
The afternoon waned as our path climbed a tree-crested hill. Suddenly, spread before us, there lay the city proper… one of the more intriguing urban landscapes I ever saw.
Some skyscrapers towered eighty or more stories, with cantilevered decks protruding into a gathering mist. Many spires were linked together by graceful sky-bridges, arching across open space at giddy heights. Yet none of these towers compared with a distant edifice that shone through the sunset haze. A gleaming pyramidal structure whose apex glittered with jeweled light.
“Cal’mari!” Earl announced, gesturing with obvious pride toward his city.
“What?” blurted Nuts, briefly taking her hand from his arm. “You mean squid?”
“Yes… Squid.” Earl said with sublime dignity, as the translator took its cue from Nuts, automatically replacing one word with another. Earl seemed blithely unaware that two entirely different sounds had emerged from his voicebox.
“Squid it is,” Ohm nodded, regarding the skyscrapers. And that was that. From now on, any demmie, and any speech-converted local, would use that word to signify this town.
I sighed. After all, it was only a city. But you students should take note that several civilizations have made the mistake of declaring war on demmies, over the insult of changing their planet’s name without asking. Not that it ever did any good.
“Squid” was impressive for a pre-starflight city. At one time, it must have been even more grand. The metropolis clearly used to surround the park on all sides, though now many quarters seemed empty, devoid of life. Once-proud spires were abandoned to the ravages of time, with blank windows like blind eyes staring into space. But straight ahead, the burg still thrived – a noisy, vibrant forest of tall buildings draped in countless sheets of colored glass, resembling 20th century New York, dressed-up with ostentatious, spiral minarets.
Skeins of filmy material, like mosquito netting, spanned the spaces between most buildings. Many windows and balconies were also covered with a gauzy, sparkling sheen – screen coverings that I later learned held bits of sharp metal or broken glass. As the sun sank, Squid resembled a maze of glittering spiderwebs, festooned with drops of dew.
Broad roadways were congested with cyclopean motor cars and lorries, all jostling for space and revving their engines before racing at top speed for an open parking space. I saw that every fourth avenue was a canal carrying boats of all description. My sinuses stung at the smell of ozone and unburnt hydrocarbons.
“Well, will you looka that!”
Our doctor pointed beyond the downtown area, to where jagged terrain rose steeply toward a rocky hill, its summit topped by striking silhouettes, totally unlike the metropolitan center. Scores of midget castles stood on those heights, with dark battlements and towers jutting from every slope. Earl Dragonlord sighed with gladness to see them, and motioned for us to follow.
“Come along, cousins. Sunshine is bad enough, but we definitely should not be out by moonlight! At home I’ll fit you with more appropriate clothes. Then we can go to the Crown.”
“Uh, is that where we’ll speak to your government leaders?” Captain Ohm asked. “We do have work to do, y’know.”
The last part was directed at Nuts. Her resumed grip on our guide’s elbow might force a lesser fellow to cry uncle. Earl was clearly a man of stamina and patience, all the more alluring to a demmie female.
“Government?” he answered. “Well, in a manner of speaking. Along the way, I’ll introduce you to our local council of Nomort elders. Unless… do you actually wish to meet the mayor of Squid? A Standard?” He glanced at me. “No offense.”
“None taken,” I assured. “Actually, I think our capt… our leader refers to government on a planetary scale. Or, in lieu of a world government, then some international mediation body—”
Earl’s look of puzzlement was followed by a dawning light of understanding. But before he could speak, a low groaning sound interrupted from the city, rising rapidly to become an ululating wail. Our greenies drew their weapons. Earl’s dusky eyes darted nervously.
“The sunset siren! A welcome sound to our kind, in most cities. But alas, not in poor Squid. We must go!”
“Well then, lead on MacDuff,” Ohm said, nearly as eager to be moving along. Earl looked baffled for a moment. Then, with a swirl of his cape, he hurried east with our ship’s engineer clinging like a happy lamprey, pushing on toward the pile of gingerbread palaces that now seemed aglow against a swollen reddish sun.
“It’s lay on, Captain,” I muttered to Ohm as we hurried along. “If you fancy quoting Shakespeare, you might try to get it right.”
Lieutenant Morrell chirped a chuckle from her guard position, covering our rear. Ohm winced, then ruefully grinned.
“As you say, Advisor. As you say.”
From the park, we dropped toward a dim precinct of low dwellings that lurked between us and yonder hilltop castles. I glanced back at the downtown area, noting with surprise that the streets and canals no longer thronged with traffic. In a matter of moments they had become completely, eerily, deserted.
Dusk deepened and the largest of three moons rose in the east, about two thirds the size of Luna and almost as bright. Its phase was almost full.
In order to reach the elegant towers where Earl lived, we first had to cross a sprawling zone of dark roofs and small, overgrown lots, laid along an endless series of curvy lanes and cul-de-sacs.
“Urbs,” Earl Dragonlord commented with apparent distaste.
“Hold on a minute,” offered Guts, rummaging through his medical bag. “I think I’ve got some bicarbonate for that.”
“No, no.” The native grimaced. “Urbs. These are the surface dwellings where Lik’ems make their homes for the greater part of each month, feigning to live as Standards used to, long ago, before the Great Change, in tacky private dwelling places, depressingly alike. All blissfully equipped with linoleum floors and formica counter tops, with doilies on the armrests and bowling trophies on the mantelpiece. And never forget a lawn mower in the garage, along with the hedge trimmer, weed-eater, automatic mulcher, leaf blower, snow blower, and razor edged pole-pruner…”
Of course these terms were produced in demmish by the translator in his throat. They might only approximate the actual meanings in Earl’s mind.
“Sounds awful,” Guts commiserated, patting the arm not held in a hammerlock by Nuts.
“Yes. But that is just the beginning. For under the floor of each innocent-looking house, there lurks—”
He paused as the demmies all leaned toward him, wide-eyed.
“Yes? Yes? What lurks!”
Earl’s voice hushed.
“There lurks a trap door…”
“A secret entrance?” Captain Ohm asked in a whisper.
Our guide nodded.
“…leading downward to catacombs below the urb. In other words, to the sub-urbs, where…”
I cut in, coughing behind my hand. I did not want my crewmates slipping into a storytelling trance right then.
“Hadn’t we better move on then, while there’s still light?”
Earl cast me a sour glance. “Right. Follow me this way.”
Soon we passed down an avenue lined by bedraggled trees. No light shone from any of the rusty lampposts onto narrow ribbons of buckled sidewalk bordering small fenced lots. Most of the houses were dark and weedy, with broken tile roofs and missing windows, but one in four seemed well-tended, with flower beds and neatly edged lawns. Dim illumination passed through drawn curtains. Once or twice, I glimpsed dark silhouettes moving within.
The demmies, their eager imaginations stirred by Earl’s testimony, kept swiveling nervously, peering into the darkness, shying away from the gaping storm drains. Our greenies, especially, looked close to panic. They kept dropping back from their scout positions, trying to get as close to the captain as possible, much to his annoyance. At one point, Ohm dialed his blaster and shot Corporal Jums with a dose of itch-nanos. The poor fellow yelped and immediately ran back to position, scratching himself furiously, effectively distracted from worrying about spooks for a while.
I admired how efficiently Earl had accomplished this transformation. His uninformative hints managed to put my crewmates into a real state. I wondered – did he do it on purpose?
Remember, students, almost anything can set off demmie credulity. Once, during an uneventful voyage, I read aloud to the crew from Edgar Allan Poe’s “The Telltale Heart.”
Mistake! For a week thereafter we kept getting jittery reports of thumping sounds, causing Maintenance to rip out half of the ship’s air ducts. The bridge weapons team vaporized nine or ten passing asteroids that they swore were “acting suspicious,” and the infirmary treated dozens for stun wounds inflicted by nervous co-workers. Actually, if truth be told, I never had a better time aboard the Clever Gamble, and neither did the demmies. Still, Healer Paolim took me aside afterward and demanded that I never do it again.
The urb became a maze. Few of the streets were straight, and most terminated in outrageously inconvenient dead-ends that the translator described as culled-socks – an uninviting and unappetizing name. Even in better days, it must have been a nightmare journey of many kilometers to travel between two points only a block apart.
I felt as if we had slipped into a type of warped space, like a fractal structure whose surface is small, but whose perimeter is practically infinite – a true nightmare of insane urban planning. We might march forever and never get beyond this endless tract of boxlike houses. Captain Ohm shared my concern, and while the other demmies peered wide-eyed at shadows, he kept his sidearm nonchalantly poised toward Earl’s back, in case the native showed any sign of bolting.
I scanned selected dwellings with my multispec. Blurry infrared signals indicated humanoid forms within. From carbon scintillation counts, it seemed this part of the city must be as old as the downtown area. I wondered about the apparent fall in population. Were things like this planetwide? Or did these symptoms relate particularly to the local crisis our guide had mentioned?
Surreptitiously, I pressed my uniform collar, turning it into a throat microphone to call the ship with an info-quest. Soon, the nanos in my ear canal whispered with the voice of Ensign Nota Taken, now on duty at the Clever Gamble’s sensor desk.
“Planetary surface scanning underway, Advisor Montessori. Preliminary indications show that paved cities comprise over six percent of total land area, an unusually high proportion, even for a world passing through stage eighteen, though much contraction appears to have occurred recently. Gosh, I wish I was down there exploring with you guys, instead of stuck up here.”
“Ensign Taken,” I murmured firmly.
“Um… survey also shows considerable environmental degradation in agricultural zones and coastal waters, with twenty-eight percent loss of topsoil accompanied by profound silting. Say, will you bring me back a souvenir? Last time you promised you’d—”
“Ensign—”
“All right, so you didn’t exactly promise, but you didn’t say no either. Remember the party in hydroponics last week? You were talking about detection thresholds for supernova neutrinos, but I could tell you kept looking down my—”
“Ensign!”
“The worst environmental damage seems to have occurred about a century ago, with gradual reforestation now underway in temperate zones. Um, I’ve just been handed a preliminary estimate of the decline in the humanoid population. Approximately sixty percent in the last century! Now that’s puzzling, I see no sign of major warfare or disease. And there are some other anomalies.”
“Anomalies?”
“Bio section urgently asks that you guys send up some live samples of the planet’s flora and fauna. Two of every species will do, if that won’t be too much trouble. Male and female, they say… as if a brilliant man like you would ever forget a detail like that.”
Exerting patience, I sighed. Subvocalizing lowly, I repeated—
“Anomalies? What anomalies are you talking about?”
“It’s got me worried. I admit it. I haven’t seen you since the party. You don’t answer my calls. Doctor… was I too forward? Why don’t you come to my quarters after you get back and I’ll make it up to—”
I let go of my collar. The connection broke and my ear-nanos went quiet, letting night sounds float back… including a faint rustling that I hadn’t noticed before. A creaking… then a scrape that might have been leather against pavement.
The captain halted abruptly and I collided with his back. Through his tunic I felt the tense bristles of demmie hackle-ridges, standing on end. Ohm’s pompadour just reached my eyes, so I couldn’t see ahead. But a glance left showed the ship’s healer also stopped in his tracks, staring, utterly transfixed by something.
Lieutenant Morell hurried forward and gasped, fumbling the dial of her blaster.
A sudden, grating sound echoed behind me, followed by a clang of heavy metal on concrete.
As I turned, a horrific howl pealed. Then another, and still more from all sides, baying like hounds from hell.
Before I could finish spinning about, a dark, flapping shape descended over me, enveloping my face in stifling folds and choking off my scream.
Author: David C. Nutt I took a swig straight out of the bottle of the rare, vintage wine- didn’t even let the damn thing breathe. It cost me only $8,420.00 USD, on sale from $10,000.00. As a relatively new multi-billionaire I didn’t even feel the cost. The wine sucked. Tasted like grape flavored caustic lye. […]
Last week, we saw a supply-chain attack against the Ultralytics AI library on GitHub. A quick summary:
On December 4, a malicious version 8.3.41 of the popular AI library ultralytics —which has almost 60 million downloads—was published to the Python Package Index (PyPI) package repository. The package contained downloader code that was downloading the XMRig coinminer. The compromise of the project’s build environment was achieved by exploiting a known and previously reported GitHub Actions script injection.
Seth Michael Larson—the security developer in residence with the Python Software Foundation, responsible for, among other things, securing PyPI—has a good summary of what should be done next:
From this story, we can see a few places where PyPI can help developers towards a secure configuration without infringing on existing use-cases.
API tokens are allowed to go unused alongside Trusted Publishers. It’s valid for a project to use a mix of API tokens and Trusted Publishers because Trusted Publishers aren’t universally supported by all platforms. However, API tokens that go unused over a period of time while releases continue to be published via Trusted Publishing are a strong indicator that the API token is no longer needed and can be revoked.
GitHub Environments are optional, but recommended, when using a GitHub Trusted Publisher. However, PyPI doesn’t fail or warn users that are using a GitHub Environment that the corresponding Trusted Publisher isn’t configured to require the GitHub Environment. This fact didn’t end up mattering for this specific attack, but during the investigation it was noticed as something easy for project maintainers to miss.
There’s also a more general “What can you do as a publisher to the Python Package Index” list at the end of the blog post.
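To make the Trusted Publishing recommendation concrete, here is a minimal sketch of a GitHub Actions release workflow that publishes through a Trusted Publisher and a GitHub Environment. The workflow and environment names are mine, not from the post, and the Trusted Publisher configured on PyPI would need to be set to require that environment:

# .github/workflows/release.yml (hypothetical example)
name: release
on:
  release:
    types: [published]
jobs:
  pypi-publish:
    runs-on: ubuntu-latest
    environment: pypi            # the Trusted Publisher on PyPI should be configured to require this environment
    permissions:
      id-token: write            # OIDC token for Trusted Publishing; no long-lived API token needed
    steps:
      - uses: actions/checkout@v4
      - run: python3 -m pip install build && python3 -m build
      - uses: pypa/gh-action-pypi-publish@release/v1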
This is the story of an investigation conducted by Jochen Sprickerhof, Helmut
Grohne, and myself. It was true teamwork, and we would have not reached the
bottom of the issue working individually. We think you will find it as
interesting and fun as we did, so here is a brief writeup. A few of the steps
mentioned here took several days, others just a few minutes. What is described
as a natural progression of events did not always look very obvious at the
moment at all.
Let us go through the Six Stages of Debugging together.
Stage 1: That cannot happen
Official Debian GCC builds start failing on multiple architectures in late
November.
The build error happens on the build servers when running the testsuite, but we
know this cannot happen. GCC builds are not meant to fail in case of testsuite
failures! Return codes are not making the build fail, make is being called
with -k, it just cannot happen.
In fact, a lot of the GCC tests always fail, and an extensive log of
the results is posted to the debian-gcc
mailing list, but the packages always build fine regardless.
Stage 2: That does not happen on my machine
Building on my machine running Bookworm is just fine. The Build Daemons run
Bookworm and use a Sid chroot for the build environment, just like I am. Same
kernel.
The only obvious difference between my setup and the Debian buildds is that I
am using sbuild 0.85.0 from bookworm, and the buildds have 0.86.3~bpo12+1
from bookworm-backports. Trying again with 0.86.3~bpo12+1, the build fails on
my system too. The build daemons were updated to the bookworm-backports version
of sbuild at some point in late November. Ha.
Stage 3: That should not happen
There are quite a few sbuild versions in between 0.85.0 and 0.86.3~bpo12+1,
but looking at recent sbuild bugs shows that
sbuild 0.86.0 was
breaking "quite a number of packages". Indeed, with 0.86.0 the build still
fails. Trying the version immediately before, 0.85.11, the build finishes
correctly. This took more time than it sounds: one run including the tests
takes several hours. We need a way to shorten this somehow.
The Debian packaging of GCC allows you to specify which languages you may want to
skip, and by default it builds Ada, Go, C, C++, D, Fortran, Objective
C, Objective C++, M2, and Rust. When running the tests sequentially,
the build logs stop roughly around the tests of a runtime library for D,
libphobos. So can we still reproduce the failure by skipping everything except
for D? With
DEB_BUILD_OPTIONS=nolang=ada,go,c,c++,fortran,objc,obj-c++,m2,rust
the build still fails, and it fails faster than before. Several minutes, not
hours. This is progress, and time to file a bug. The report contains massive
spoilers, so no link. :-)
Stage 4: Why does that happen?
Something is causing the build to end prematurely. It’s not the OOM killer, and
the kernel does not have anything useful to say in the logs. Can it be that the
D language tests are sending signals to some process, and that is what’s
killing make? We start tracing signals sent with bpftrace by writing the
following script, signals.bt:
tracepoint:signal:signal_generate {
    printf("%s (PID: %d) sent signal %d to PID %d\n", comm, pid, args->sig, args->pid);
}
And executing it with sudo bpftrace signals.bt.
The build takes its sweet time, and it fails. Looking at the trace output
there’s a suspicious process.exe terminating stuff.
process.exe (PID: 2868133) sent signal 15 to PID 711826
That looks interesting, but we have no clue what PID 711826 may be. Let’s change
the script a bit, and trace signals received as well.
tracepoint:signal:signal_generate {
    printf("PID %d (%s) sent signal %d to %d\n", pid, comm, args->sig, args->pid);
}
tracepoint:signal:signal_deliver {
    printf("PID %d (%s) received signal %d\n", pid, comm, args->sig);
}
The working version of sbuild was using dumb-init, whereas the new one
features a little init in perl. We patch the current version of sbuild by
making it use dumb-init instead, and trace two builds: one with the perl
init, one with dumb-init.
Here are the signals observed when building with dumb-init.
PID 3590011 (process.exe) sent signal 2 to 3590014
PID 3590014 (sleep) received signal 9
PID 3590011 (process.exe) sent signal 15 to 3590063
PID 3590063 (std.process tem) received signal 9
PID 3590011 (process.exe) sent signal 9 to 3590065
PID 3590065 (std.process tem) received signal 9
And this is what happens with the new init in perl:
PID 3589274 (process.exe) sent signal 2 to 3589291
PID 3589291 (sleep) received signal 9
PID 3589274 (process.exe) sent signal 15 to 3589338
PID 3589338 (std.process tem) received signal 9
PID 3589274 (process.exe) sent signal 9 to 3589340
PID 3589340 (std.process tem) received signal 9
PID 3589274 (process.exe) sent signal 15 to 3589341
PID 3589274 (process.exe) sent signal 15 to 3589323
PID 3589274 (process.exe) sent signal 15 to 3589320
PID 3589274 (process.exe) sent signal 15 to 3589274
PID 3589274 (process.exe) received signal 9
PID 3589341 (sleep) received signal 9
PID 3589273 (sbuild-usernsex) sent signal 9 to 3589320
PID 3589273 (sbuild-usernsex) sent signal 9 to 3589323
There are a few additional SIGTERMs being sent when using the perl init; that’s
helpful. At this point we are fairly convinced that process.exe is worth
additional inspection. The source code of process.d shows something interesting:
1221 @system unittest
1222 {
[...]
1247 auto pid = spawnProcess(["sleep", "10000"],
[...]
1260 // kill the spawned process with SIGINT
1261 // and send its return code
1262 spawn((shared Pid pid) {
1263 auto p = cast() pid;
1264 kill(p, SIGINT);
So yes, there’s our sleep and the SIGINT (signal 2) right in the unit tests
of process.d, just like we have observed in the bpftrace output.
Can we study the behavior of process.exe in isolation, separately from the
build? Indeed we can. Let’s take the executable from a failed build, and try
running it under /usr/libexec/sbuild-usernsexec.
First, we prepare a chroot inside a suitable user namespace:
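The exact preparation commands are not reproduced here; a minimal sketch, assuming mmdebstrap is available and using the /tmp/rootfs path and the subordinate uid/gid range expected by the sbuild-usernsexec invocation below, might look something like this:

# Hypothetical sketch of the preparation step (the exact commands are not shown in the post):
# build an unprivileged Debian sid chroot, owned by the subordinate uid/gid range
mmdebstrap --mode=unshare --variant=minbase unstable /tmp/rootfs
# the process.exe binary saved from a failed build is then copied into the chroot root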
Now we can run process.exe on its own using the perl init, and trace signals at will:
/usr/libexec/sbuild-usernsexec --pivotroot --nonet u:0:100000:65536 g:0:100000:65536 /tmp/rootfs ema /whatever -- /process.exe
We can compare the behavior of the perl init vis-a-vis the one using
dumb-init in milliseconds instead of minutes.
Stage 5: Oh, I see.
The big question now is why process.exe sends more SIGTERMs when using the
perl init. We have a simple reproducer, so this is where using strace
becomes possible.
sudo strace --user ema --follow-forks -o sbuild-dumb-init.strace ./sbuild-usernsexec-dumb-init --pivotroot --nonet u:0:100000:65536 g:0:100000:65536 /tmp/dumbroot ema /whatever -- /process.exe
3593883 kill(-2, SIGTERM) = -1 ESRCH (No such process)
No such process. Under perl-init instead:
3593777 kill(-2, SIGTERM <unfinished ...>
The process is there under perl-init!
That is a kill with negative pid. From the kill(2) man page:
If pid is less than -1, then sig is sent to every process in the process group whose ID is -pid.
It would have been very useful to see this kill with a negative pid in the
output of bpftrace, so why didn’t we? The tracepoint used,
tracepoint:signal:signal_generate, shows when signals are actually being
sent, not when the syscall is called. To confirm, one can trace
tracepoint:syscalls:sys_enter_kill and see the negative PIDs, for example:
PID 312719 (bash) sent signal 2 to -312728
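For reference, a minimal bpftrace sketch of such a probe (not the exact script used in the investigation, but it produces output in the same format as the line above):

tracepoint:syscalls:sys_enter_kill {
    // args->pid is the kill() target: negative values address a whole process group
    printf("PID %d (%s) sent signal %d to %d\n", pid, comm, args->sig, args->pid);
}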
The obvious question at this point is: why is there no process group 2 when
using dumb-init?
Stage 6: How did that ever work?
We know that process.exe sends a SIGTERM to every process in the process
group with ID 2. To find out what this process group may be, we spawn a shell
with dumb-init and observe under /proc PIDs 1, 16, and 17. With perl-init
we have 1, 2, and 17. When running dumb-init, there are a few forks before
launching the program, explaining the difference. Looking at /proc/2/cmdline
we see that it’s bash, i.e. the program we are running under perl-init. When
building a package, that is dpkg-buildpackage itself.
The test is accidentally killing its own process group.
Now where does this -2 come from in the test?
2363 // Special values for _processID.
2364 enum invalid = -1, terminated = -2;
Oh. -2 is used as a special value for PID, meaning "terminated". And there’s a
call to kill() later on:
2694 do { s = tryWait(pid); } while (!s.terminated);
[...]
2697 assertThrown!ProcessException(kill(pid));
What sets pid to terminated you ask?
Here is tryWait:
2568 auto tryWait(Pid pid) @safe
2569 {
2570 import std.typecons : Tuple;
2571 assert(pid !is null, "Called tryWait on a null Pid.");
2572 auto code = pid.performWait(false);
This essay was written with Nathan E. Sanders. It originally appeared as a response to Evgeny Morozov in Boston Review‘s forum, “The AI We Deserve.”
For a technology that seems startling in its modernity, AI sure has a long history. Google Translate, OpenAI chatbots, and Meta AI image generators are built on decades of advancements in linguistics, signal processing, statistics, and other fields going back to the early days of computing—and, often, on seed funding from the U.S. Department of Defense. But today’s tools are hardly the intentional product of the diverse generations of innovators that came before. We agree with Morozov that the “refuseniks,” as he calls them, are wrong to see AI as “irreparably tainted” by its origins. AI is better understood as a creative, global field of human endeavor that has been largely captured by U.S. venture capitalists, private equity, and Big Tech. But that was never the inevitable outcome, and it doesn’t need to stay that way.
The internet is a case in point. The fact that it originated in the military is a historical curiosity, not an indication of its essential capabilities or social significance. Yes, it was created to connect different, incompatible Department of Defense networks. Yes, it was designed to survive the sorts of physical damage expected from a nuclear war. And yes, back then it was a bureaucratically controlled space where frivolity was discouraged and commerce was forbidden.
Over the decades, the internet transformed from military project to academic tool to the corporate marketplace it is today. These forces, each in turn, shaped what the internet was and what it could do. For most of us billions online today, the only internet we have ever known has been corporate—because the internet didn’t flourish until the capitalists got hold of it.
AI followed a similar path. It was originally funded by the military, with the military’s goals in mind. But the Department of Defense didn’t design the modern ecosystem of AI any more than it did the modern internet. Arguably, its influence on AI was even less because AI simply didn’t work back then. While the internet exploded in usage, AI hit a series of dead ends. The research discipline went through multiple “winters” when funders of all kinds—military and corporate—were disillusioned and research money dried up for years at a time. Since the release of ChatGPT, AI has reached the same endpoint as the internet: it is thoroughly dominated by corporate power. Modern AI, with its deep reinforcement learning and large language models, is shaped by venture capitalists, not the military—nor even by idealistic academics anymore.
We agree with much of Morozov’s critique of corporate control, but it does not follow that we must reject the value of instrumental reason. Solving problems and pursuing goals is not a bad thing, and there is real cause to be excited about the uses of current AI. Morozov illustrates this from his own experience: he uses AI to pursue the explicit goal of language learning.
AI tools promise to increase our individual power, amplifying our capabilities and endowing us with skills, knowledge, and abilities we would not otherwise have. This is a peculiar form of assistive technology, kind of like our own personal minion. It might not be that smart or competent, and occasionally it might do something wrong or unwanted, but it will attempt to follow your every command and gives you more capability than you would have had without it.
Of course, for our AI minions to be valuable, they need to be good at their tasks. On this, at least, the corporate models have done pretty well. They have many flaws, but they are improving markedly on a timescale of mere months. ChatGPT’s initial November 2022 model, GPT-3.5, scored about 30 percent on a multiple-choice scientific reasoning benchmark called GPQA. Five months later, GPT-4 scored 36 percent; by May this year, GPT-4o scored about 50 percent, and the most recently released o1 model reached 78 percent, surpassing the level of experts with PhDs. There is no one singular measure of AI performance, to be sure, but other metrics also show improvement.
That’s not enough, though. Regardless of their smarts, we would never hire a human assistant for important tasks, or use an AI, unless we can trust them. And while we have millennia of experience dealing with potentially untrustworthy humans, we have practically none dealing with untrustworthy AI assistants. This is the area where the provenance of the AI matters most. A handful of for-profit companies—OpenAI, Google, Meta, Anthropic, among others—decide how to train the most celebrated AI models, what data to use, what sorts of values they embody, whose biases they are allowed to reflect, and even what questions they are allowed to answer. And they decide these things in secret, for their benefit.
It’s worth stressing just how closed, and thus untrustworthy, the corporate AI ecosystem is. Meta has earned a lot of press for its “open-source” family of LLaMa models, but there is virtually nothing open about them. For one, the data they are trained with is undisclosed. You’re not supposed to use LLaMa to infringe on someone else’s copyright, but Meta does not want to answer questions about whether it violated copyrights to build it. You’re not supposed to use it in Europe, because Meta has declined to meet the regulatory requirements anticipated from the EU’s AI Act. And you have no say in how Meta will build its next model.
The company may be giving away the use of LLaMa, but it’s still doing so because it thinks it will benefit from your using it. CEO Mark Zuckerberg has admitted that eventually, Meta will monetize its AI in all the usual ways: charging to use it at scale, fees for premium models, advertising. The problem with corporate AI is not that the companies are charging “a hefty entrance fee” to use these tools: as Morozov rightly points out, there are real costs to anyone building and operating them. It’s that they are built and operated for the purpose of enriching their proprietors, rather than because they enrich our lives, our wellbeing, or our society.
But some emerging models from outside the world of corporate AI are truly open, and may be more trustworthy as a result. In 2022 the research collaboration BigScience developed an LLM called BLOOM with freely licensed data and code as well as public compute infrastructure. The collaboration BigCode has continued in this spirit, developing LLMs focused on programming. The government of Singapore has built SEA-LION, an open-source LLM focused on Southeast Asian languages. If we imagine a future where we use AI models to benefit all of us—to make our lives easier, to help each other, to improve our public services—we will need more of this. These may not be “eolithic” pursuits of the kind Morozov imagines, but they are worthwhile goals. These use cases require trustworthy AI models, and that means models built under conditions that are transparent and with incentives aligned to the public interest.
Perhaps corporate AI will never satisfy those goals; perhaps it will always be exploitative and extractive by design. But AI does not have to be solely a profit-generating industry. We should invest in these models as a public good, part of the basic infrastructure of the twenty-first century. Democratic governments and civil society organizations can develop AI to offer a counterbalance to corporate tools. And the technology they build, for all the flaws it may have, will enjoy a superpower that corporate AI never will: it will be accountable to the public interest and subject to public will in the transparency, openness, and trustworthiness of its development.
Author: Clare Strahan Pat had to turn the drone over, to get to the metal hatch door and unscrew the screws that fixed it to the body. What did the drone think of, when Pat wasn’t there? Did it remember the battlefield, the shrapnel and wounding, the fall into the ocean, the washing up on […]
The weather isn't the only thing that's balmy around these parts.
For instance
Bruce,
who likes it hot.
"Westford, MA is usually bracing for winter in December,
but this year we got another day of warm temperatures.
The feels like temperature was especially nice."
And
Robert
who wailed
"Not only do I have to enter Coupon Code Invalid:3,
but the small print pretty much prevents me from using the coupon in the first place!"
But did he TRY the code to see if it worked? You never know.
But not
Steven B.
who sagely noted that
"Datetime is hard, even for Facebook."
And definitely not
this incensed anonymous reader who ranted
"What do you call web devs in 2024? Web devs, obviously.
What do you call web devs that don't know in 2024 how to
correctly implement email validation for top level domains
with more than 3 letters? I'm still trying to figure that
one out since nothing nice has come to mind and I am trying
not to offend.
And to be clear Falabella isn't some small business that
can't afford to hire proper devs but one of the largest
retail and online marketplaces in Latin America. Where in
some countries you say Amazon, in most of Latin America you say Falabella."
That's it! Time's up for now, but not for
Phillip J.
who has just a little bit extra. Tick tock, Phillip.
The LTS coordinators, Roberto and Santiago, delivered a talk at the Mini-DebConf event in Toulouse, France. The title of the talk was “How LTS goes beyond LTS”. The talk covered work done by the LTS Team during the past year. This included contributions related to individual packages in Debian (such as tomcat, jetty, radius, samba, apache2, ruby, and many others); improvements to tooling and documentation useful to the Debian project as a whole; and contributions to upstream work (apache2, freeimage, node-dompurify, samba, and more). Additionally, several contributors external to the LTS Team were highlighted for their contributions to LTS. Readers are encouraged to watch the video of the presentation for a more detailed review of various ways in which the LTS team has contributed more broadly to the Debian project and to the free software community during the past year.
We wish to specifically thank Salvatore (of the Debian Security Team) for swiftly handling during November the updates of needrestart and libmodule-scandeps-perl, both of which involved arbitrary code execution vulnerabilities. We are happy to see increased involvement in LTS work by contributors from outside the formal LTS Team.
The work of the LTS Team in November was otherwise unremarkable, encompassing the customary triage, development, testing, and release of numerous DLAs, along with some associated contributions to related packages in stable and unstable.
A few weeks ago, and following an informal ‘call for talks’ by James Lamb, I had an opportunity
to talk about r2u to
the Chicago ML and MLops meetup groups. You can find
the slides
here.
Over the last 2 1/2 years, r2u has become a
widely-deployed mechanism in a number of settings, including (but not
limited to) software testing via continuous integration and deployment on
cloud servers, besides of course more standard use on local laptops or
workstations. 30 million downloads illustrate this. My thesis for the talk
was that this extends equally to ML(ops), where automated deployments with
no surprises and no hiccups are key for large-scale model training,
evaluation, and of course production deployments.
In this context, I introduce r2u while giving credit to what came before
it, to the existing alternatives (or ‘competitors’ for mindshare if one
prefers that phrasing), and of course to what lies underneath it.
The central takeaway, I argue, is that r2u can and does take
advantage of a unique situation: it can ‘join’ the package
manager task for the underlying (operating) system and the
application domain, here R and its unique CRAN repository
network. Other approaches can, and of course do, provide binaries, but
by doing so outside the realm of the system package manager
they can only arrive at a lesser integration (and I show a common error
arising in that case). So where r2u is feasible, it
dominates the alternatives (while the alternatives may well provide
deployment on more platforms which, even when less integrated, may be of
greater importance for some). As always, it all depends.
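As one concrete illustration of that integration (my example, not from the talk), a container or CI image based on r2u can install CRAN packages through the system package manager; this sketch assumes the rocker/r2u image, in which bspm bridges install.packages() to apt:

FROM rocker/r2u
# install.packages() resolves to pre-built binary .deb packages, system dependencies included
RUN Rscript -e 'install.packages(c("data.table", "xgboost"))'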
But the talk, and its slides,
motivate and illustrate why we keep calling r2u by its slogan of
r2u: Fast. Easy. Reliable. Pick All Three.
The distinction between hardware and software has historically been relatively easy to understand - hardware is the physical object that software runs on. This is made more complicated by the existence of programmable logic like FPGAs, but by and large things tend to fall into fairly neat categories if we're drawing that distinction.
Conversations usually become more complicated when we introduce firmware, but should they? According to Wikipedia, Firmware is software that provides low-level control of computing device hardware, and basically anything that's generally described as firmware certainly fits into the "software" side of the above hardware/software binary. From a software freedom perspective, this seems like something where the obvious answer to "Should this be free" is "yes", but it's worth thinking about why the answer is yes - the goal of free software isn't freedom for freedom's sake, but because the freedoms embodied in the Free Software Definition (and by proxy the DFSG) are grounded in real world practicalities.
How do these line up for firmware? Firmware can fit into two main classes - it can be something that's responsible for initialisation of the hardware (such as, historically, BIOS, which is involved in initialisation and boot and then largely irrelevant for runtime[1]) or it can be something that makes the hardware work at runtime (wifi card firmware being an obvious example). The role of free software in the latter case feels fairly intuitive, since the interface and functionality the hardware offers to the operating system is frequently largely defined by the firmware running on it. Your wifi chipset is, these days, largely a software defined radio, and what you can do with it is determined by what the firmware it's running allows you to do. Sometimes those restrictions may be required by law, but other times they're simply because the people writing the firmware aren't interested in supporting a feature - they may see no reason to allow raw radio packets to be provided to the OS, for instance. We also shouldn't ignore the fact that sufficiently complicated firmware exposed to untrusted input (as is the case in most wifi scenarios) may contain exploitable vulnerabilities allowing attackers to gain arbitrary code execution on the wifi chipset - and potentially use that as a way to gain control of the host OS (see this writeup for an example). Vendors being in a unique position to update that firmware means users may never receive security updates, leaving them with a choice between discarding hardware that otherwise works perfectly or leaving themselves vulnerable to known security issues.
But even the cases where firmware does nothing other than initialise the hardware cause problems. A lot of hardware has functionality controlled by registers that can be locked during the boot process. Vendor firmware may choose to disable (or, rather, never to enable) functionality that may be beneficial to a user, and then lock out the ability to reconfigure the hardware later. Without any ability to modify that firmware, the user lacks the freedom to choose what functionality their hardware makes available to them. Again, the ability to inspect this firmware and modify it has a distinct benefit to the user.
So, from a practical perspective, I think there's a strong argument that users would benefit from most (if not all) firmware being free software, and I don't think that's an especially controversial argument. So I think this is less of a philosophical discussion, and more of a strategic one - is spending time focused on ensuring firmware is free worthwhile, and if so what's an appropriate way of achieving this?
I think there are two consistent ways to view this. One is to view free firmware as desirable but not necessary. This approach basically argues that code that's running on hardware that isn't the main CPU would benefit from being free, in the same way that code running on a remote network service would benefit from being free, but that this is much less important than ensuring that all the code running in the context of the OS on the primary CPU is free. The other is the maximalist position: not to compromise at all - all software on a system, whether it's running at boot or during runtime, and whether it's running on the primary CPU or any other component on the board, should be free.
Personally, I lean towards the former and think there's a reasonably coherent argument here. I think users would benefit from the ability to modify the code running on hardware that their OS talks to, in the same way that I think users would benefit from the ability to modify the code running on hardware the other side of a network link that their browser talks to. I also think that there's enough that remains to be done in terms of what's running on the host CPU that it's not worth having that fight yet. But I think the latter is absolutely intellectually consistent, and while I don't agree with it from a pragmatic perspective I think things would undeniably be better if we lived in that world.
This feels like a thing you'd expect the Free Software Foundation to have opinions on, and it does! There are two primarily relevant things - the Respects your Freedoms campaign focused on ensuring that certified hardware meets certain requirements (including around firmware), and the Free System Distribution Guidelines, which define a baseline for an OS to be considered free by the FSF (including requirements around firmware).
RYF requires that all software on a piece of hardware be free other than under one specific set of circumstances. If software runs on (a) a secondary processor and (b) within which software installation is not intended after the user obtains the product, then the software does not need to be free. (b) effectively means that the firmware has to be in ROM, since any runtime interface that allows the firmware to be loaded or updated is intended to allow software installation after the user obtains the product.
The Free System Distribution Guidelines require that all non-free firmware be removed from the OS before it can be considered free. The recommended mechanism to achieve this is via linux-libre, a project that produces tooling to remove anything that looks plausibly like a non-free firmware blob from the Linux source code, along with any incitement to the user to load firmware - including even removing suggestions to update CPU microcode in order to mitigate CPU vulnerabilities.
For hardware that requires non-free firmware to be loaded at runtime in order to work, linux-libre doesn't do anything to work around this - the hardware will simply not work. In this respect, linux-libre reduces the amount of non-free firmware running on a system in the same way that removing the hardware would. This presumably encourages users to purchase RYF compliant hardware.
But does that actually improve things? RYF doesn't require that a piece of hardware have no non-free firmware, it simply requires that any non-free firmware be hidden from the user. CPU microcode is an instructive example here. At the time of writing, every laptop listed here has an Intel CPU. Every Intel CPU has microcode in ROM, typically an early revision that is known to have many bugs. The expectation is that this microcode is updated in the field by either the firmware or the OS at boot time - the updated version is loaded into RAM on the CPU, and vanishes if power is cut. The combination of RYF and linux-libre doesn't reduce the amount of non-free code running inside the CPU, it just means that the user (a) is more likely to hit since-fixed bugs (including security ones!), and (b) has less guidance on how to avoid them.
As long as RYF permits hardware that makes use of non-free firmware I think it hurts more than it helps. In many cases users aren't guided away from non-free firmware - instead it's hidden away from them, leaving them less aware that their freedom is constrained. Linux-libre goes further, refusing to even inform the user that the non-free firmware that their hardware depends on can be upgraded to improve their security.
Out of sight shouldn't mean out of mind. If non-free firmware is a threat to user freedom then allowing it to exist in ROM doesn't do anything to solve that problem. And if it isn't a threat to user freedom, then what's the point of requiring linux-libre for a Linux distribution to be considered free by the FSF? We seem to have ended up in the worst case scenario, where nothing is being done to actually replace any of the non-free firmware running on people's systems and where users may even end up with a reduced awareness that the non-free firmware even exists.
Sometimes you want to restrict access to something to a specific set of devices - for instance, you might want your corporate VPN to only be reachable from devices owned by your company. You can't really trust a device that self attests to its identity, for instance by reporting its MAC address or serial number, for a couple of reasons:
These aren't fixed - MAC addresses are trivially reprogrammable, and serial numbers are typically stored in reprogrammable flash at their most protected
A malicious device could simply lie about them
If we want a high degree of confidence that the device we're talking to really is the device it claims to be, we need something that's much harder to spoof. For devices with a TPM this is the TPM itself. Every TPM has an Endorsement Key (EK) that's associated with a certificate that chains back to the TPM manufacturer. By verifying that certificate path and having the TPM prove that it's in possession of the private half of the EK, we know that we're communicating with a genuine TPM[1].
Android has a broadly equivalent thing called ID Attestation. Android devices can generate a signed attestation that they have certain characteristics and identifiers, and this can be chained back to the manufacturer. Obviously providing signed proof of the device identifier is kind of problematic from a privacy perspective, so the short version[2] is that only apps installed using a corporate account rather than a normal user account are able to do this.
But that's still not ideal - the device identifiers involved included the IMEI and serial number of the device, and those could potentially be used to correlate devices across privacy boundaries since they're static[3] identifiers that are the same both inside a corporate work profile and in the normal user profile, and also remain static if you move between different employers and use the same phone[4]. So, since Android 12, ID Attestation includes an "Enterprise Specific ID" or ESID. The ESID is based on a hash of device-specific data plus the enterprise that the corporate work profile is associated with. If a device is enrolled with the same enterprise then this ID will remain static, if it's enrolled with a different enterprise it'll change, and it just doesn't exist outside the work profile at all. The other device identifiers are no longer exposed.
But device ID verification isn't enough to solve the underlying problem here. When we receive a device ID attestation we know that someone at the far end has possession of a device with that ID, but we don't know that that device is where the packets are originating. If our VPN simply has an API that asks for an attestation from a trusted device before routing packets, we could pass that on to said trusted device and then simply forward the attestation to the VPN server[5]. We need some way to prove that the device trying to authenticate is actually that device.
The answer to this is key provenance attestation. If we can prove that an encryption key was generated on a trusted device, and that the private half of that key is stored in hardware and can't be exported, then using that key to establish a connection proves that we're actually communicating with a trusted device. TPMs are able to do this using the attestation keys generated in the Credential Activation process, giving us proof that a specific keypair was generated on a TPM that we've previously established is trusted.
Android again has an equivalent called Key Attestation. This doesn't quite work the same way as the TPM process - rather than being tied back to the same unique cryptographic identity, Android key attestation chains back through a separate cryptographic certificate chain but contains a statement about the device identity - including the IMEI and serial number. By comparing those to the values in the device ID attestation we know that the key is associated with a trusted device and we can now establish trust in that key.
"But Matthew", those of you who've been paying close attention may be saying, "Didn't Android 12 remove the IMEI and serial number from the device ID attestation?" And, well, congratulations, you were apparently paying more attention than Google. The key attestation no longer contains enough information to tie back to the device ID attestation, making it impossible to prove that a hardware-backed key is associated with a specific device ID attestation and its enterprise enrollment.
I don't think this was any sort of deliberate breakage, and it's probably more an example of shipping the org chart - my understanding is that device ID attestation and key attestation are implemented by different parts of the Android organisation and the impact of the ESID change (something that appears to be a legitimate improvement in privacy!) on key attestation was probably just not realised. But it's still a pain.
[1] Those of you paying attention may realise that what we're doing here is proving the identity of the TPM, not the identity of device it's associated with. Typically the TPM identity won't vary over the lifetime of the device, so having a one-time binding of those two identities (such as when a device is initially being provisioned) is sufficient. There's actually a spec for distributing Platform Certificates that allows device manufacturers to bind these together during manufacturing, but I last worked on those a few years back and don't know what the current state of the art there is
[2] Android has a bewildering array of different profile mechanisms, some of which are apparently deprecated, and I can never remember how any of this works, so you're not getting the long version
[3] Nominally, anyway. Cough.
[4] I wholeheartedly encourage people not to put work accounts on their personal phones, but I am a filthy hypocrite here
[5] Obviously if we have the ability to ask for attestation from a trusted device, we have access to a trusted device. Why not simply use the trusted device? The answer there may be that we've compromised one and want to do as little as possible on it in order to reduce the probability of triggering any sort of endpoint detection agent, or it may be because we want to run on a device with different security properties than those enforced on the trusted device.
Phil's company hired a contractor. It was the typical overseas arrangement: bundle up a pile of work, send it off to another timezone, receive back terrible code, push back during code review, then the whole thing blows up when the contracting company pushes back about how while the code review is in the contract if you're going to be such sticklers about it, they'll never deliver, and then management steps in and says, "Just keep the code review to style comments," and then it ends up not mattering anyway because the contractor assigned to the contract leaves for another contracting company, and management opts to use the remaining billable hours for a new feature instead of finishing the inflight work, so you inherit a half-finished pile of trash and somehow have to make it work.
Like I said, pretty standard stuff.
Phil found this construct scattered all over the codebase:
if cond1 and cond2:
    pass
elif cond1 or cond2:
    # do actual work
I hesitate to post this, because what we're looking at is just an attempt at doing a xor operation. And it's not wrong - it's an if statement way of writing (not a and b) or (a and not b). And if we're being nit-picky, while Python has a xor operator, it's technically a bitwise xor, so I could see someone not connecting that they could use it in this case - cond1 ^ cond2 would work just fine, so long as both conditions are actual booleans. But Python often uses non-boolean comparisons, like:
text = ""if text:
print("This won't print.")
This is playing with truthiness, and the problem here is that you can't use a xor to chain these conditions together.
if text ^ otherText:
    # do stuff
That's a runtime error, as the ^ is only defined for integral types. You'd have to write:
if bool(text) ^ bool(otherText):
    # do stuff
So, would it have been better to use one of the logical equivalences for xor? Certainly. Would it have been even better to turn that equivalence into a function so you could actually call a xor function? Absolutely.
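For what it's worth, a tiny sketch of what such a helper might look like (the name and usage here are mine, not from the codebase in question):

def xor(a, b):
    # True when exactly one of the two arguments is truthy
    return bool(a) != bool(b)

print(xor("", "hello"))  # True
print(xor("a", "b"))     # False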
But I also can't complain too much about this one. I hate it, don't get me wrong, but it's not a trainwreck.
Author: Trinity J. Choi “W-who are you?” Those were the three most painful words I’d heard my entire life. “Someone who loves you dearly.” I responded, unable to control the slight crack in my voice. I could see my face shining back at me through her empty eyes. The reflection of a sister she doesn’t […]
RcppCCTZ
uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for
translating between absolute and civil times using the rules of a time
zone. In fact, it is two libraries. One for dealing with
civil time: human-readable dates and times, and one for
converting between absolute and civil times via time zones. And
while CCTZ is made by
Google(rs), it is not an official Google product. The RcppCCTZ
page has a few usage examples and details. This package was the first CRAN package to use CCTZ; by now several other
packages (four the last time we counted) include its sources too. Not
ideal, but beyond our control.
This version includes mostly routine package maintenance as well as one
small contributed code improvement. The changes since the last CRAN release are summarised
below.
Changes in version 0.2.13
(2024-12-11)
No longer set a compilation standard as recent R versions set a
sufficiently high minimum
Qualify a call to cctz::format (Michael Quinn in #44)
Routine updates to continuous integration and badges
A financial firm registered in Canada has emerged as the payment processor for dozens of Russian cryptocurrency exchanges and websites hawking cybercrime services aimed at Russian-speaking customers, new research finds. Meanwhile, an investigation into the Vancouver street address used by this company shows it is home to dozens of foreign currency dealers, money transfer businesses, and cryptocurrency exchanges — none of which are physically located there.
Richard Sanders is a blockchain analyst and investigator who advises the law enforcement and intelligence community. Sanders spent most of 2023 in Ukraine, traveling with Ukrainian soldiers while mapping the shifting landscape of Russian crypto exchanges that are laundering money for narcotics networks operating in the region.
More recently, Sanders has focused on identifying how dozens of popular cybercrime services are getting paid by their customers, and how they are converting cryptocurrency revenues into cash. For the past several months, he’s been signing up for various cybercrime services, and then tracking where their customer funds go from there.
The 122 services targeted in Sanders’ research include some of the more prominent businesses advertising on the cybercrime forums today, such as:
-abuse-friendly or “bulletproof” hosting providers like anonvm[.]wtf, and PQHosting;
-sites selling aged email, financial, or social media accounts, such as verif[.]work and kopeechka[.]store;
-anonymity or “proxy” providers like crazyrdp[.]com and rdp[.]monster;
-anonymous SMS services, including anonsim[.]net and smsboss[.]pro.
The site Verif dot work, which processes payments through Cryptomus, sells financial accounts, including debit and credit cards.
Sanders said he first encountered some of these services while investigating Kremlin-funded disinformation efforts in Ukraine, as they are all useful in assembling large-scale, anonymous social media campaigns.
According to Sanders, all 122 of the services he tested are processing transactions through a company called Cryptomus, which says it is a cryptocurrency payments platform based in Vancouver, British Columbia. Cryptomus’ website says its parent firm — Xeltox Enterprises Ltd. (formerly certa-pay[.]com) — is registered as a money service business (MSB) with the Financial Transactions and Reports Analysis Centre of Canada (FINTRAC).
Sanders said the payment data he gathered also shows that at least 56 cryptocurrency exchanges are currently using Cryptomus to process transactions, including financial entities with names like casher[.]su, grumbot[.]com, flymoney[.]biz, obama[.]ru and swop[.]is.
These platforms are built for Russian speakers, and they each advertise the ability to anonymously swap one form of cryptocurrency for another. They also allow the exchange of cryptocurrency for cash in accounts at some of Russia’s largest banks — nearly all of which are currently sanctioned by the United States and other western nations.
A machine-translated version of Flymoney, one of dozens of cryptocurrency exchanges apparently nested at Cryptomus.
An analysis of their technology infrastructure shows that all of these exchanges use Russian email providers, and most are directly hosted in Russia or by Russia-backed ISPs with infrastructure in Europe (e.g. Selectel, Netwarm UK, Beget, Timeweb and DDoS-Guard). The analysis also showed nearly all 56 exchanges used services from Cloudflare, a global content delivery network based in San Francisco.
“Purportedly, the purpose of these platforms is for companies to accept cryptocurrency payments in exchange for goods or services,” Sanders told KrebsOnSecurity. “Unfortunately, it is next to impossible to find any goods for sale with websites using Cryptomus, and the services appear to fall into one or two different categories: Facilitating transactions with sanctioned Russian banks, and platforms providing the infrastructure and means for cyber attacks.”
Cryptomus did not respond to multiple requests for comment.
PHANTOM ADDRESSES?
The Cryptomus website and its FINTRAC listing say the company’s registered address is Suite 170, 422 Richards St. in Vancouver, BC. This address was the subject of an investigation published in July by CTV National News and the Investigative Journalism Foundation (IJF), which documented dozens of cases across Canada where multiple MSBs are incorporated at the same address, often without the knowledge or consent of the location’s actual occupant.
This building at 422 Richards St. in downtown Vancouver is the registered address for 90 money services businesses, including 10 that have had their registrations revoked. Image: theijf.org/msb-cluster-investigation.
Their inquiry found 422 Richards St. was listed as the registered address for at least 76 foreign currency dealers, eight MSBs, and six cryptocurrency exchanges. At that address is a three-story building that used to be a bank and now houses a massage therapy clinic and a co-working space. But they found none of the MSBs or currency dealers were paying for services at that co-working space.
The reporters found another collection of 97 MSBs clustered at an address for a commercial office suite in Ontario, even though there was no evidence these companies had ever arranged for any business services at that address.
Peter German, a former deputy commissioner for the Royal Canadian Mounted Police who authored two reports on money laundering in British Columbia, told the publications it goes against the spirit of Canada’s registration requirements for such businesses, which are considered high-risk for money laundering and terrorist financing.
“If you’re able to have 70 in one building, that’s just an abuse of the whole system,” German said.
Ten MSBs registered to 422 Richards St. had their registrations revoked. One company at 422 Richards St. whose registration was revoked this year had a director with a listed address in Russia, the publications reported. “Others appear to be directed by people who are also directors of companies in Cyprus and other high-risk jurisdictions for money laundering,” they wrote.
A review of FINTRAC’s registry (.CSV) shows many of the MSBs at 422 Richards St. are international money transfer or remittance services to countries like Malaysia, India and Nigeria. Some act as currency exchanges, while others appear to sell merchant accounts and online payment services. Still, KrebsOnSecurity could find no obvious connections between the 56 Russian cryptocurrency exchanges identified by Sanders and the dozens of payment companies that FINTRAC says share an address with the Cryptomus parent firm Xeltox Enterprises.
SANCTIONS EVASION
In August 2023, Binance and some of the largest cryptocurrency exchanges responded to sanctions against Russia by cutting off many Russian banks and restricting Russian customers to transactions in Rubles only. Sanders said prior to that change, most of the exchanges currently served by Cryptomus were handling customer funds with their own self-custodial cryptocurrency wallets.
By September 2023, Sanders said he found the exchanges he was tracking had all nested themselves like Matryoshka dolls at Cryptomus, which adds a layer of obfuscation to all transactions by generating a new cryptocurrency wallet for each order.
“They all simply moved to Cryptomus,” he said. “Cryptomus generates new wallets for each order, rendering ongoing attribution to require transactions with high fees each time.”
“Exchanges like Binance and OKX removing Sberbank and other sanctioned banks and offboarding Russian users did not remove the ability of Russians to transact in and out of cryptocurrency easily,” he continued. “In fact, it’s become easier, because the instant-swap exchanges do not even have Know Your Customer rules. The U.S. sanctions resulted in the majority of Russian instant exchanges switching from their self-custodial wallets to platforms, especially Cryptomus.”
Russian President Vladimir Putin in August signed a new law legalizing cryptocurrency mining and allowing the use of cryptocurrency for international payments. The Russian government’s embrace of cryptocurrency was a remarkable pivot: Bloomberg notes that as recently as January 2022, just weeks before Russia’s full-scale invasion of Ukraine, the central bank proposed a blanket ban on the use and creation of cryptocurrencies.
In a report on Russia’s cryptocurrency ambitions published in September, blockchain analysis firm Chainalysis said Russia’s move to integrate crypto into its financial system may improve its ability to bypass the U.S.-led financial system and to engage in non-dollar denominated trade.
“Although it can be hard to quantify the true impact of certain sanctions actions, the fact that Russian officials have singled out the effect of sanctions on Moscow’s ability to process cross-border trade suggests that the impact felt is great enough to incite urgency to legitimize and invest in alternative payment channels it once decried,” Chainalysis assessed.
Asked about its view of activity on Cryptomus, Chainalysis said Cryptomus has been used by criminals of all stripes for laundering money and/or the purchase of goods and services.
“We see threat actors engaged in ransomware, narcotics, darknet markets, fraud, cybercrime, sanctioned entities and jurisdictions, and hacktivism making deposits to Cryptomus for purchases but also laundering the services using Cryptomos payment API,” the company said in a statement.
SHELL GAMES
It is unclear if Cryptomus and/or Xeltox Enterprises have any presence in Canada at all. A search in the United Kingdom’s Companies House registry for Xeltox’s former name — Certa Payments Ltd. — shows an entity by that name incorporated at a mail drop in London in December 2023.
The sole shareholder and director of that company is listed as a 25-year-old Ukrainian woman in the Czech Republic named Vira Krychka. Ms. Krychka was recently appointed the director of several other new U.K. firms, including an entity created in February 2024 called Globopay UAB Ltd, and another called WS Management and Advisory Corporation Ltd. Ms. Krychka did not respond to a request for comment.
WS Management and Advisory Corporation bills itself as the regulatory body that exclusively oversees licenses of cryptocurrencies in the jurisdiction of Western Sahara, a disputed territory in northwest Africa. Its website says the company assists applicants with bank setup and formation, online gaming licenses, and the creation and licensing of foreign exchange brokers. One of Certa Payments’ former websites — certa[.]website — also shared a server with 12 other domains, including rasd-state[.]ws, a website for the Central Reserve Authority of the Western Sahara.
The website crasadr dot com, the official website of the Central Reserve Authority of Western Sahara.
This business registry from the Czech Republic indicates Ms. Krychka works as a director at an advertising and marketing firm called Icon Tech SRO, which was previously named Blaven Technologies (Blaven’s website says it is an online payment service provider).
In August 2024, Icon Tech changed its name again to Mezhundarondnaya IBU SRO, which describes itself as an “experienced company in IT consulting” that is based in Armenia. The same registry says Ms. Krychka is somehow also a director at a Turkish investment venture. So much business acumen at such a young age!
For now, Canada remains an attractive location for cryptocurrency businesses to set up shop, at least on paper. The IJF and CTV News found that as of February 2024, there were just over 3,000 actively registered MSBs in Canada, 1,247 of which were located at the same building as at least one other MSB.
“That analysis does not include the roughly 2,700 MSBs whose registrations have lapsed, been revoked or otherwise stopped,” they observed. “If they are included, then a staggering 2,061 out of 5,705 total MSBs share a building with at least one other MSB.”
Kim Carson will explore the profound question of how to rediscover and define human purpose in the age of AI. With influences ranging from Brene Brown’s courage-centered leadership to Paulo Coelho’s exploration of personal destiny, Kim’s work weaves together practical strategies and philosophical insights, guiding audiences in understanding how AI can serve as a partner in awakening creativity, navigating uncertainty, and fostering a deeper connection with others and ourselves.
Her perspective also embraces the Dalai Lama’s call for compassion and balance, advocating for AI systems that enhance collective well-being while honoring the uniqueness of the individual. Carson invites audiences to experience AI not as a threat to human identity but as a catalyst for discovery and growth, empowering us to become co-creators of a future where technology supports our highest aspirations.
Discovering Open Source: How I Got Introduced
Hey there! I’m Divine Attah-Ohiemi, a sophomore studying Computer Science. My journey into the world of open source was anything but grand. It all started with a simple question to my sister: “How do people get jobs without experience?” Her answer? Open source! I dove into this vibrant community, and it felt like discovering a hidden treasure chest filled with knowledge and opportunities.
Choosing Debian: Why This Community?
Why Debian, you ask? Well, I applied to Outreachy twice, and both times, I chose Debian. It’s not just my first operating system; it feels like home. The Debian community is incredibly welcoming, like a big family gathering where everyone supports each other. Whether I was updating my distro or poring over documentation, the care and consideration in this community were palpable. It reminded me of the warmth of homeschooling with relatives. Plus, knowing that Debian's name comes from its creator Ian and his wife Debra adds a personal touch that makes me feel even more honored to contribute to making the website better!
Why I Applied to Outreachy: What Inspired Me
Outreachy is my golden ticket to the open source world! As a 19-year-old, I see this internship as a unique opportunity to gain invaluable experience while contributing to something meaningful. It’s the perfect platform for me to learn, grow, and connect with like-minded individuals who share my passion for technology and community.
I’m excited for this journey and can’t wait to see where it takes me! 🌟
One of the long-tenured developers at Patrick's company, Douglas, left, which meant Patrick was called upon to pick up that share of the work. The code left behind by Douglas, the departing developer, was, well… code.
For example, this block of Java:
private String[] getDomainId(Collection<ParticularCaseEntity> particularCase) {
// Get all domainId
Collection<String> ids = new ArrayList<String>();
for (ParticularCaseEntity entity : particularCase) {
ids.add(entity.getDomainId());
}
Set<String> domainIdsWithoutRepeat = new HashSet<String>();
domainIdsWithoutRepeat.addAll(ids);
Collection<String> domainIds = new ArrayList<String>();
for (String domainId : domainIdsWithoutRepeat) {
domainIds.add(domainId);
}
return domainIds.toArray(new String[0]);
}
The purpose of this code is to get a set of "domain IDs"- a set, specifically, because we want them without duplicates. And this code takes the long way around to do it.
First, it returns a String[]- but logically, what it should return is a set. Maybe it's meant to comply with an external interface, but it's a private method- so I actually think this developer just didn't understand collection types at all.
And I have more evidence for that, which is the rest of this code.
We iterate across our ParticularCaseEntitys, and add each one to an array list. Then we create a hash set, and add all of those to a hash set. Then we create another array list, and add each entry in the set to the array list. Then we convert that array list into an array so we can return it.
At most, we really only needed the HashSet. But this gives us a nice tour of all the wrong data structures to use for this problem, which is helpful.
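For contrast, a version that uses the collection types directly might look something like this (a sketch; the String[] return type is kept only to preserve the original signature):
private String[] getDomainId(Collection<ParticularCaseEntity> particularCases) {
    // A HashSet already discards duplicates as we add to it
    Set<String> domainIds = new HashSet<>();
    for (ParticularCaseEntity entity : particularCases) {
        domainIds.add(entity.getDomainId());
    }
    return domainIds.toArray(new String[0]);
}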
Speaking of helpful, it didn't take long for Patrick's employer to realize that having Patrick do his job and also pick up the work that Douglas used to do was bad. So they opened a new position, at a higher pay grade, hoping to induce a more senior developer to step in. And wouldn't you know, after 6 months, they found a perfect candidate, who had just finished a short six-month stint at one of their competitors: Douglas!
Author: Majoki On Splinx, you have to follow the rules if you wanna break the law. Rule 1: Phasespace is your friend. Rule 2: In phasespace you have no friends. Seems simple enough until you try to skirt the laws of thermodynamics and attempt the biggest heist in quantum gambling history. And Splinx, being the […]
Microsoft today released updates to plug at least 70 security holes in Windows and Windows software, including one vulnerability that is already being exploited in active attacks.
The zero-day seeing exploitation involves CVE-2024-49138, a security weakness in the Windows Common Log File System (CLFS) driver — used by applications to write transaction logs — that could let an authenticated attacker gain “system” level privileges on a vulnerable Windows device.
The security firm Rapid7 notes there have been a series of zero-day elevation of privilege flaws in CLFS over the past few years.
“Ransomware authors who have abused previous CLFS vulnerabilities will be only too pleased to get their hands on a fresh one,” wrote Adam Barnett, lead software engineer at Rapid7. “Expect more CLFS zero-day vulnerabilities to emerge in the future, at least until Microsoft performs a full replacement of the aging CLFS codebase instead of offering spot fixes for specific flaws.”
Elevation of privilege vulnerabilities accounted for 29% of the 1,009 security bugs Microsoft has patched so far in 2024, according to a year-end tally by Tenable; nearly 40 percent of those bugs were weaknesses that could let attackers run malicious code on the vulnerable device.
Rob Reeves, principal security engineer at Immersive Labs, called special attention to CVE-2024-49112, a remote code execution flaw in the Lightweight Directory Access Protocol (LDAP) service on every version of Windows since Windows 7. CVE-2024-49112 has been assigned a CVSS (badness) score of 9.8 out of 10.
“LDAP is most commonly seen on servers that are Domain Controllers inside a Windows network and LDAP must be exposed to other servers and clients within an enterprise environment for the domain to function,” Reeves said. “Microsoft hasn’t released specific information about the vulnerability at present, but has indicated that the attack complexity is low and authentication is not required.”
Tyler Reguly at the security firm Fortra had a slightly different 2024 patch tally for Microsoft, at 1,088 vulnerabilities, which he said was surprisingly similar to the 1,063 vulnerabilities resolved in 2023 and the 1,119 vulnerabilities resolved in 2022.
“If nothing else, we can say that Microsoft is consistent,” Reguly said. “While it would be nice to see the number of vulnerabilities each year decreasing, at least consistency lets us know what to expect.”
If you’re a Windows end user and your system is not set up to automatically install updates, please take a minute this week to run Windows Update, preferably after backing up your system and/or important data.
System admins should keep an eye on AskWoody.com, which usually has the details if any of the Patch Tuesday fixes are causing problems. In the meantime, if you run into any problems applying this month’s fixes, please drop a note about it in the comments below.
As someone who daily drives rolling release operating systems, having a bootable USB stick with a live install of Debian is essential. It has saved me at least a couple of times, letting me revert broken packages and make my device bootable again. However, creating a bootable USB stick usually uses the entire storage of the USB stick, which seems unnecessary given that USB sticks easily have 64GiB or more these days while live ISOs still don’t use more than 8GiB.
In this how-to, I will explain how to create a bootable USB stick that also holds a FAT32 partition, which allows the USB stick to be used as usual.
Creating the partition table
The first step is to create the partition table on the new drive. There are several tools to do this; I recommend the ArchWiki page on this topic for details. For best compatibility, MBR should be used instead of the more modern GPT.
For simplicity I just went with GParted since it has an easy GUI, but feel free to use any other tool.
The layout should look like this: the first partition is a FAT32 boot partition with the esp flag, large enough for the live ISO; the second is a FAT32 data partition filling the rest of the stick (see the command-line sketch below).
The disk names are just an example and have to be adjusted for your system.
Don’t set disk labels, they don’t appear on the new install anyway and some UEFIs might not like it on your boot partition.
If you used GParted, create the EFI partition as FAT32 and set the esp flag.
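If you prefer the command line over GParted, the same layout can be created with parted (a sketch; the device name /dev/sda and the 16GiB boot partition size are assumptions to adjust for your stick):
# MBR partition table, boot partition with the esp flag, data partition on the rest
sudo parted /dev/sda mklabel msdos
sudo parted /dev/sda mkpart primary fat32 1MiB 16GiB
sudo parted /dev/sda set 1 esp on
sudo parted /dev/sda mkpart primary fat32 16GiB 100%
sudo mkfs.fat -F 32 /dev/sda1
sudo mkfs.fat -F 32 /dev/sda2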
Mounting the ISO and USB stick
The next step is to download your ISO and mount it. I personally use a Debian Live ISO for this.
To mount the ISO, use the loop mounting option:
mkdir -p /tmp/mnt/iso
sudo mount -o loop /path/to/image.iso /tmp/mnt/iso
Similarly, mount your USB stick:
mkdir -p /tmp/mnt/usb
sudo mount /dev/sda1 /tmp/mnt/usb
Copy the ISO contents
To copy the contents from the ISO to the USB stick, use this command:
sudo cp -r -L /tmp/mnt/iso/. /tmp/mnt/usb
The -L option is required to dereference the symbolic links present in the ISO, since FAT32 does not support symbolic links.
Note that you might get warnings about cyclic symbolic links. Nothing can be done to fix those, except hoping that they won’t cause a problem. For me, this has never been a problem though.
Finishing touches
Unmount the ISO and USB stick:
sudo umount /tmp/mnt/usb
sudo umount /tmp/mnt/iso
Now the USB stick should be bootable and have a data partition that can be used on essentially every operating system.
I recommend testing that the stick is bootable to make sure it actually works in case of an emergency.
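One way to do that without rebooting (a sketch, assuming the qemu-system-x86 and ovmf packages are installed and the stick is /dev/sda) is to boot the stick in a UEFI virtual machine:
# Boot the physical stick in a throwaway VM to check that it really starts
sudo qemu-system-x86_64 -enable-kvm -m 2G \
    -bios /usr/share/ovmf/OVMF.fd \
    -drive file=/dev/sda,format=raw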
Right now I am playing with minikube, to run a three-node highly available Kubernetes control plane.
I am using the docker driver of minikube, so each Kubernetes node component runs inside a Docker container instead of a full-blown VM.
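For reference, the invocation I am talking about looks roughly like this (a sketch; the --ha flag for multiple control-plane nodes is relatively new, so check minikube start --help on your version):
# Three nodes, each running as a docker container, with a highly available control plane
minikube start --driver=docker --nodes=3 --ha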
Rob's co-worker needed to write a loop that iterated across every element in an array. This is a very common problem, and you'd imagine that a developer would use one of the many common solutions to it. The language, in this case, is JavaScript, which has many possible options for iterating across an array.
Perhaps that buffet of possible options was too daunting. Perhaps the developer thought to themselves, "a for each loop is easy mode, I'm a 10x programmer, and I want a 10x solution!" Or perhaps they just didn't know what the hell they were doing.
Regardless of why, this is the result:
try {
var index = 0;
while (true) {
var nextItem = someArray[index];
doSomethingWithItem(nextItem);
index++;
}
} catch (e) { }
This code iterates across the array in an infinite while loop, passing each item to doSomethingWithItem. Eventually, they hit the end of the array, and someArray[index] starts returning undefined. Somewhere, deep in doSomethingWithItem, that causes an exception to be thrown.
That is how we break out of the loop- eventually something chokes on an undefined value, which lets us know there's nothing left in the array.
Which puts us in an interesting position- if anyone decided to add better error handling to doSomethingWithItem, the entire application could break, and it wouldn't be obvious why. This is a peak example of "every change breaks somebody's workflow", but specifically because that workflow is stupid.
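For contrast, any of the standard constructs would end the loop when the array does, and would stop swallowing every exception along the way. A minimal sketch:
for (const item of someArray) {
  // The loop terminates on its own; errors from doSomethingWithItem now propagate
  doSomethingWithItem(item);
}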
Author: Don Nigroni From my source, I knew there was a lot of debate concerning whether we should blow up that spacecraft before it got near Earth. It had suddenly and inexplicably appeared between Mars and Earth last night. It was obviously from another planet and might have been manned, but the fear was that, […]
I haven't made one of these posts since... last year? Good lord. I've
therefore already read and reviewed a lot of these books.
Kemi Ashing-Giwa — The Splinter in the Sky (sff)
Moniquill Blackgoose — To Shape a Dragon's Breath (sff)
Ashley Herring Blake — Delilah Green Doesn't Care (romance)
Ashley Herring Blake — Astrid Parker Doesn't Fail (romance)
Ashley Herring Blake — Iris Kelly Doesn't Date (romance)
Molly J. Bragg — Scatter (sff)
Sarah Rees Brennan — Long Live Evil (sff)
Michelle Browne — And the Stars Will Sing (sff)
Steven Brust — Lyorn (sff)
Miles Cameron — Beyond the Fringe (sff)
Miles Cameron — Deep Black (sff)
Haley Cass — Those Who Wait (romance)
Sylvie Cathrall — A Letter to the Luminous Deep (sff)
Ta-Nehisi Coates — The Message (non-fiction)
Julie E. Czerneda — To Each This World (sff)
Brigid Delaney — Reasons Not to Worry (non-fiction)
Mar Delaney — Moose Madness (sff)
Jerusalem Demsas — On the Housing Crisis (non-fiction)
Michelle Diener — Dark Horse (sff)
Michelle Diener — Dark Deeds (sff)
Michelle Diener — Dark Minds (sff)
Michelle Diener — Dark Matters (sff)
Elaine Gallagher — Unexploded Remnants (sff)
Bethany Jacobs — These Burning Stars (sff)
Bethany Jacobs — On Vicious Worlds (sff)
Micaiah Johnson — Those Beyond the Wall (sff)
T. Kingfisher — Paladin's Faith (sff)
T.J. Klune — Somewhere Beyond the Sea (sff)
Mark Lawrence — The Book That Wouldn't Burn (sff)
Mark Lawrence — The Book That Broke the World (sff)
Mark Lawrence — Overdue (sff)
Mark Lawrence — Returns (sff collection)
Malinda Lo — Last Night at the Telegraph Club (historical)
Jessie Mihalik — Hunt the Stars (sff)
Samantha Mills — The Wings Upon Her Back (sff)
Lyda Morehouse — Welcome to Boy.net (sff)
Cal Newport — Slow Productivity (non-fiction)
Naomi Novik — Buried Deep and Other Stories (sff collection)
Claire O'Dell — The Hound of Justice (sff)
Keanu Reeves & China Miéville — The Book of Elsewhere (sff)
Kit Rocha — Beyond Temptation (sff)
Kit Rocha — Beyond Jealousy (sff)
Kit Rocha — Beyond Solitude (sff)
Kit Rocha — Beyond Addiction (sff)
Kit Rocha — Beyond Possession (sff)
Kit Rocha — Beyond Innocence (sff)
Kit Rocha — Beyond Ruin (sff)
Kit Rocha — Beyond Ecstasy (sff)
Kit Rocha — Beyond Surrender (sff)
Kit Rocha — Consort of Fire (sff)
Geoff Ryman — HIM (sff)
Melissa Scott — Finders (sff)
Rob Wilkins — Terry Pratchett: A Life with Footnotes (non-fiction)
Gabrielle Zevin — Tomorrow, and Tomorrow, and Tomorrow
(mainstream)
That's a lot of books, although I think I've already read maybe a third of
them? Which is better than I usually do.
A bit of history: Drupal at my workplace (and in Debian)
My main day-to-day responsibility in my workplace is, and has been for 20 years,
to take care of the network infrastructure for UNAM’s Economics Research
Institute. One of the most visible parts of this
responsibility is to ensure we have a working Web presence, and that it caters
for the needs of our academic community.
I joined the Institute in January
2005. Back
then, our designer pushed static versions of our webpage, completely built in
her computer. This was standard practice at the time, and lasted through some
redesigns,
but I soon started advocating for the adoption of a Content Management
System. After evaluating some alternatives, I recommended adopting
Drupal. It took us quite a bit to do the change: even
though I clearly recall starting work toward adopting it as early as 2006,
according to the Internet Archive, we switched to a Drupal-backed site around
June
2010. We
started using it somewhere in version 6’s lifecycle.
As for my Debian work, by late 2012 I started getting involved in the
maintenance of the drupal7 package,
and by April 2013 I became its primary maintainer. I kept the drupal7 package
up to date in Debian until ≈2018; the supported build methods for Drupal 8 are
not compatible with Debian (mainly, bundling third-party libraries and updating
them without coordination with the rest of the ecosystem), so towards the end of
2016, I announced I would not package Drupal 8 for
Debian.
By March 2016, we migrated our main page to Drupal
7. By
then, we already had several other sites for our academics’ projects, but my
narrative follows our main Web site. I did manage to migrate several Drupal 6
(D6) sites to Drupal 7 (D7); it was quite an involved process, never transparent to
the user, and we did have the backlash of long downtimes (or partial downtimes,
with sites half-available only) with many of our users. For our main site, we
took the opportunity to do a complete redesign and deployed a fully new site.
You might note that March 2016 is after the release of D8 (November 2015). I
don’t recall many of the specifics for this decision, but if I’m not mistaken,
building the new site was a several months long process — not only for the
technical work of setting it up, but for the legwork of getting all of the
needed information from the different areas that need to be represented in the
Institute. Not only that: Drupal sites often include tens of contributed themes
and modules; the technological shift the project underwent between its 7 and 8
releases was too deep, and modules took a long time (if at all — many themes and
modules were outright dumped) to become available for the new release.
Naturally, the Drupal Foundation wanted to evolve and deprecate the old
codebase. But the pain to migrate from D7 to D8 is too big, and many sites have
remained under version 7 — Eight years after D8’s release, almost 40% of Drupal
installs are for version 7,
and a similar proportion runs a currently-supported release (10 or 11). And
while the Drupal Foundation made a great job at providing very-long-term support
for D7, I understand the burden is becoming too much, so close to a year ago
(and after pushing back D7’s end-of-life date several times), they finally announced support will
finish this upcoming January 5.
Drupal 7 must go!
I found the following usage graphs quite interesting: the usage statistics for
all Drupal versions follow a very
positive slope, peaking around 2014 during the best years of D7, and somewhat
stagnating afterwards, staying since 2015 at the 25000–28000 sites mark (I’m
very tempted to copy the graphs, but builtwith’s terms of
use are very clear in not allowing it). There is a
sharp drop in the last year — I attribute it to the people that are leaving D7
for other technologies after its end-of-life announcement. This becomes clearer
looking only at D7’s usage
statistics: D7 peaks at ≈15000
installs in 2016, stays there for close to 5 years, and has a sharp drop to under
7500 sites in the span of one year.
This graph is disaggregated into minor versions, and I don’t want to come up with
yet another graph for it 😉 but it supports (most of) the narrative I presented
above… although I do miss the recent drop builtwith reported in D7’s
numbers!
And what about Backdrop?
During the D8 release cycle, a group of Drupal developers were not happy with
the depth of the architectural changes that were being adopted, particularly the
transition to the Symfony PHP component framework, and forked the D7 codebase to
create the Backdrop CMS, a modern version of Drupal,
without dropping the known and tested architecture it had. The Backdrop
developers keep working closely together with the Drupal community, and although
its usage numbers are way
smaller than Drupal’s, the project seems to be sustainable and lively. Of course, as with the Drupal numbers I presented in the previous section, you can look Backdrop’s numbers up in builtwith… and they are way, way lower.
I have found it to be a very warm and welcoming community, eager to receive new
members. And, thanks to its contributed D2B Migrate
module, I found it is quite easy
to migrate a live site from Drupal 7 to Backdrop.
Given I have several sites to migrate, and that I’m trying to get my colleagues
to follow suit, I decided to automate the migration by writing an Ansible
playbook to do the heavy lifting. Of
course, the playbook’s users will probably need to tweak it a bit to their
personal needs. I’m also far from an Ansible expert, so I’m sure there is ample
room for improvement in my style.
But it works. Quite well, I must add.
But with this size of database…
I did stumble across a big pebble, though. I am working on the migration of one
of my users’ sites, and found that its database is… huge. I checked the
mysqldump output, and it got me close to 3GB of data. And given the
D2B_migrate is meant to work via a Web interface (my playbook works around it
by using a client I wrote with Perl’s
WWW::Mechanize), I repeatedly
stumbled with PHP’s maximum POST size, maximum upload size, maximum memory
size…
I asked for help in Backdrop’s Zulip chat
site,
and my attention was taken off fixing PHP to something more obvious: Why is the
database so large? So I took a quick look at the database (or rather: my first
look was at the database server’s filesystem usage). MariaDB stores each table
as a separate file on disk, so I looked for the nine largest tables:
# ls -lhS|head
total 3.8G
-rw-rw---- 1 mysql mysql 2.4G Dec 10 12:09 accesslog.ibd
-rw-rw---- 1 mysql mysql 224M Dec 2 16:43 search_index.ibd
-rw-rw---- 1 mysql mysql 220M Dec 10 12:09 watchdog.ibd
-rw-rw---- 1 mysql mysql 148M Dec 6 14:45 cache_field.ibd
-rw-rw---- 1 mysql mysql 92M Dec 9 05:08 aggregator_item.ibd
-rw-rw---- 1 mysql mysql 80M Dec 10 12:15 cache_path.ibd
-rw-rw---- 1 mysql mysql 72M Dec 2 16:39 search_dataset.ibd
-rw-rw---- 1 mysql mysql 68M Dec 2 13:16 field_revision_field_idea_principal_articulo.ibd
-rw-rw---- 1 mysql mysql 60M Dec 9 13:19 cache_menu.ibd
A single table, the access log, is over 2.4GB long. Most of the following tables are cache, log or search-index tables. I can perfectly live without their data in our new site! But
I don’t want to touch the slightest bit of this site until I’m satisfied with
the migration process, so I found a way to exclude those tables in a
non-destructive way: given D2B_migrate works with a mysqldump output, and
given that the dump output emits a LOCK TABLES statement before each table’s data and an UNLOCK TABLES once that table is done, I can just do the following:
$ perl -e '$output = 1; while (<>) { $output=0 if /^LOCK TABLES `(accesslog|search_index|watchdog|cache_field|cache_path)`/; $output=1 if /^UNLOCK TABLES/; print if $output}' < /tmp/d7_backup.sql > /tmp/d7_backup.eviscerated.sql; ls -hl /tmp/d7_backup.sql /tmp/d7_backup.eviscerated.sql
-rw-rw-r-- 1 gwolf gwolf 216M Dec 10 12:22 /tmp/d7_backup.eviscerated.sql
-rw------- 1 gwolf gwolf 2.1G Dec 6 18:14 /tmp/d7_backup.sql
Five seconds later, I’m done! The database is now a tenth of its size, and
D2B_migrate is happy to take it. And I’m a big step closer to ending my reliance on (this bit of) legacy code for my highly-visible sites 😃
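As an aside, when re-running the dump is an option, mysqldump can also skip the heavy tables at the source with --ignore-table (a sketch; the database name d7_site is made up). Unlike the filter above, this drops the table definitions too, not just their data:
mysqldump d7_site \
  --ignore-table=d7_site.accesslog \
  --ignore-table=d7_site.search_index \
  --ignore-table=d7_site.watchdog \
  --ignore-table=d7_site.cache_field \
  --ignore-table=d7_site.cache_path \
  > /tmp/d7_backup.smaller.sql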
It’s a video of someone trying on a variety of printed full-face masks. They won’t fool anyone for long, but will survive casual scrutiny. And they’re cheap and easy to swap.
This was my hundred-twenty-fifth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:
[DLA 3968-1] netatalk security update to fix four CVEs related to heap buffer overflow and writing arbitrary files. The patches have been prepared by the maintainer.
[DLA 3976-1] tgt update to fix one CVE related to not using a proper seed for rand()
[DLA 3977-1] xfpt update to fix one CVE related to a stack-based buffer overflow
[DLA 3978-1] editorconfig-core update to fix two CVEs related to buffer overflows.
I also continued to work on a fix for glewlwyd, which is more difficult than expected. Besides I started to work on ffmpeg and haproxy.
Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.
Debian ELTS
This month was the seventy-sixth ELTS month. During my allocated time I uploaded or worked on:
[ELA-1259-1] editorconfig-core security update for two CVEs in Buster to fix buffer overflows.
I also started to work on a fix for kmail-account-wizard. Unfortunately preparing a testing environment takes some time and I did not finish testing this month. Besides I started to work on ffmpeg and haproxy.
Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.
Debian Printing
Unfortunately I didn’t find any time to work on this topic.
Debian Matomo
Unfortunately I didn’t find any time to work on this topic.
Debian Astro
This month I uploaded new packages or new upstream or bugfix versions of:
Alice has the dubious pleasure of working with SalesForce. Management wants to make sure that any code is well tested, so they've set a requirement that all deployed code needs 75% code coverage. Unfortunately, properly configuring a code coverage tool is too hard, so someone came up with a delightful solution: just count how many lines are in your tests and how many lines are in your code, and make sure that your tests make up 75% of the total codebase.
Keep adding lines, and you could easily get close to 100% code coverage, this way. Heck, if you get close enough to round up, it'll look like 100% code coverage.
Author: Julian Miles, Staff Writer My principal settles comfortably and waves us off, indicating we should exit and close the door. We’re just by that door when all our proximity alerts shriek. As one, we spin about and rush to protect him. After all, if he dies, we don’t get paid – and quite possibly […]
If Australia wins the current Test series against India, it would be the first time since 1968-69 that it has won a five-game series at home after losing the first Test. That season, Australia played the West Indies and lost the first Test at Brisbane.
But the locals then bounced back with wins in Melbourne and Sydney, before a thriller in Adelaide ended in a draw. The fifth and final game saw the West Indies thrashed by 382 runs in Sydney, giving the home team a 3-1 series win.
Australia’s win against India in the second Test of the current series has levelled things at 1-1, with three games left. Brisbane, Melbourne and Sydney will host those three Tests. Given the way the series has see-sawed thus far, it is difficult to predict with any degree of certainty the outcome of these three games.
I remember the 1968-69 series as if it were yesterday because I listened to commentary on the games from Radio Ceylon (the old name for Sri Lanka). Commentary was broadcast from both England and Australia – despite the fact that Ceylon was not even close to gaining admission to the club of Test-playing nations. I was then an 11-year-old, obsessed with the game and more so when the West Indies were involved.
I recall clearly the gravelly tone of Australian expert Alan McGilvray and the language of West Indies ace Tony Cozier as they covered the game. Their knowledge of the game was vast, they both had extensive vocabularies and their commentary was both entertaining and exciting.
The 1968-69 West Indies team was an ageing one with the pace duo of Wes Hall and Charlie Griffith heading the bowling attack. Lance Gibbs was the main spinner while Gary Sobers was captain. The team was not expected to do well against a strong Australian outfit which included Ian Chappell, Doug Walters and Ian Redpath. Leading the team was one of the meanest captains of all time: Bill Lawry.
Surprisingly, the West Indies took the lead in Brisbane, despite a first-day collapse, when they floundered from 188 for one to 267 for nine at the end of the first day. Ending on 296, they, again surprisingly, bowled out Australia for 284, a total largely built on centuries from Lawry and Chappell.
The West Indies’ second innings seemed to be going nowhere until Clive Lloyd stepped in with a blistering knock of 129. He was aided by opener Joey Carew who was batting down the order due to an injury and contributed 71. The West Indies set Australia 366 to win.
Sobers opened the bowling and came up with a superlative performance, in fact returning what would turn out to be his best Test figures of six for 73, to ensure a 125-run win for the visitors.
At the MCG, on what was the first Boxing Day Test, the West Indies went into reverse. Apart from Roy Fredericks, who was making his debut and contributed 76, the Windies had no answer to Graham McKenzie who took eight for 71.
Lawry (205) and Chappell (165), along with Walters’ contribution of 76, ensured a 310-run lead for the home team. Only Seymour Nurse (74) and Sobers (67) made any scores of note in the West Indies second effort as they were soundly beaten by an innings and 30 runs. Leggie John Gleeson grabbed five wickets in this rout.
Come the third Test and it was more of the same. The West Indies batted first after winning the toss and could only muster 264. Australia, with Walters scoring the first of his centuries in the series, took a 283-run lead and then kept the Windies down to 324 in their second innings. Basil Butcher top-scored with 101 and Rohan Kanhai got 69 but they were the only two to make meaningful scores. Australia needed only 42 in the fourth innings to take a 2-1 lead.
The fourth Test was a classic. West Indies again batted first and a swashbuckling 110 from Sobers carried them to 276. Australia ensured that they would have a massive lead once again, reaching 533 off the back of Walters (110) and half-centuries from Lawry, Keith Stackpole, Chappell, Paul Sheahan and McKenzie.
This time, however, the West Indies did not fold. They reached 261 for three at the end of day three with Carew (90) and Kanhai (80) being responsible for the resistance. The next day, there was more of the same, with even nightwatchman Griffith (44) getting among the runs, and Butcher scoring his second ton of the series.
Things seemed to be approaching the end when the West Indies were 492 for eight but then David Holford (80) and Jackie Hendricks (36) took over and carried the total beyond 600.
The final day saw Australia chasing 360 to win. Lawry (89) and Chappell (96) looked to be leading the way to victory but then a spate of run-outs – including Griffith running out Redpath who was backing up too far – saw Australia start the final over at 333 for nine, with Sheahan to face eight balls from Griffith. He managed to block those eight deliveries successfully and Australia escaped with a draw. [As an aside, the Ceylon Daily News did not know how to describe Redpath’s dismissal, so it simply printed “I. Redpath…………………..9” on its sports pages. The unofficial name for such a dismissal was being Mankaded, as the first person to effect such a dismissal in the modern era was Indian Vinoo Mankad.]
The final Test was an anti-climax after the heroics of Adelaide. Sobers inserted the Australians after winning the toss – and had to watch as the home team amassed 619, with Walters getting another ton, this time a double. Lawry added to his tally as well, with 151.
The West Indies could only manage 279 in reply, but Lawry, determined to grind his opponents into the ground, batted a second time and set the visitors 735 to win. Walters got a second ton in the match, becoming the first batsman in Test history to do so. [As of today, eight batsmen have achieved this feat.]
When the West Indies collapsed (yet again) to 102 for five in pursuit of this huge target, the match looked destined to end before the final day. But Sobers and Nurse did not give up. Nurse was injured but both got centuries, and finally the West Indies ended up with 352.
With five games in a series, there can always be twists and turns in a cricket match. One only hopes that the remaining Australia-India Tests are even half as exciting as the Adelaide Test of 1968-69.
This week on my podcast, I read the sixth and final installment of “Spill“, a new Little Brother story commissioned by Clay F Carlson and published on Reactor, the online publication of Tor Books. Also available in DRM-free ebook form as a Tor Original. Spill will be reprinted in Allen Kaster’s 2025 Year’s Best SF on Earth.
I didn’t plan to go to Oklahoma, but I went to Oklahoma.
My day job is providing phone tech support to people in offices who use my boss’s customer-relationship management software. In theory, I can do that job from anywhere I can sit quietly on a good Internet connection for a few hours a day while I’m on shift. It’s a good job for an organizer, because it means I can go out in the field and still pay my rent, so long as I can park a rental car outside of a Starbucks, camp on their WiFi, and put on a noise-canceling headset. It’s also good organizer training because most of the people who call me are angry and confused and need to have something difficult and technical explained to them.
My comrades started leaving for Oklahoma the day the Water Protector camp got set up. A lot of them—especially my Indigenous friends—were veterans of the Line 3 Pipeline, the Dakota Access Pipeline, and other pipeline fights, and they were plugged right into that network.
The worse things got, the more people I knew in OK. My weekly affinity group meeting normally had twenty people at it. One week there were only ten of us. The next week, three. The next week, we did it on Zoom (ugh) and most of the people on the line were in OK, up on “Facebook Hill,” the one place in the camp with reliable cellular data signals.
Emilio has been helping finish the mpi-defaults switch to mpich on 32-bit
architectures, and the openmpi transitions.
This involves filing bugs for the reverse dependencies, doing NMUs, and
requesting removals for outdated (Not Built from Source) binaries on 32-bit
architectures where openmpi is no longer available. Those transitions got
entangled with a few others, such as the petsc stack, and were blocking many
packages from migrating to testing. These transitions were completed in early
December.
cPython 3.12.7+ update uploads, by Stefano Rivera
Python 3.12 had failed to build on mips64el,
due to an obscure dh_strip failure. The mips64el porters never figured it out,
but the missing build on mips64el was blocking migration to Debian testing.
After waiting a month, enough changes had accumulated in the upstream 3.12
maintenance git branch that we could apply them in the hope of changing the
output enough to avoid breaking dh_strip. This worked.
Of course there were other things to deal with too. A test started failing due
to a Debian-specific patch we carry for python3.x-minimal, and it needed to be
reworked. And Stefano forgot to strip the trailing + from PY_VERSION, which
confuses some python
libraries. This always requires another patch when applying git updates from the
maintenance branch. Stefano added a build-time check to catch this mistake in
the future. Python 3.12.7 migrated.
Python 3.13 Transition, by Stefano Rivera and Colin Watson
During November the Python 3.13-add
transition
started. This is the first stage of supporting a new version of Python in the Debian
archive (after preparatory work), adding it as a new supported but non-default
version. All packages with compiled Python extensions need to be re-built to add
support for the new version.
We have covered the lead-up to this transition in the past. Due to preparation,
many of the failures we hit were expected and we had patches waiting in the bug
tracker. These could be NMUed to get the transition moving. Others had been
known about but hadn’t been worked on, yet.
Some other packages ran into new issues, as we got further into the transition
than we’d been able to in preparation. The whole Debian Python team has been
helping with this work.
The rebuild stage of the 3.13-add transition is now over, but many packages
need work before
britney will let python3-defaults migrate to testing.
Limiting build concurrency based on available RAM, by Helmut Grohne
In recent years, the concurrency of CPUs has been increasing as has the demand
for RAM by linkers. What has not been increasing as quickly is the RAM supply in
typical machines. As a result, we more frequently run into situations where the
package builds exhaust memory when building at full concurrency. Helmut
initiated a
discussion about
generalizing an approach to this in Debian packages, researching existing code
that limits concurrency as well as possible extensions to debhelper
and dpkg to provide concurrency limits based on available system RAM. Thus far
there is consensus on the need for a more general solution, but ideas are still
being collected for the precise solution.
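To make the problem concrete, the kind of per-package workaround that exists today, and that the discussion aims to generalize, looks roughly like this (a sketch of a hypothetical debian/rules override, not the interface being proposed):
# debian/rules (sketch): allow roughly one build job per 2 GiB of available RAM,
# with a floor of one job. MemAvailable in /proc/meminfo is reported in kB.
override_dh_auto_build:
	JOBS=$$(( $$(awk '/MemAvailable/ {print $$2}' /proc/meminfo) / (2 * 1024 * 1024) )); \
	[ "$$JOBS" -ge 1 ] || JOBS=1; \
	dh_auto_build --max-parallel=$$JOBS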
Santiago spoke on Linux Live Patching in
Debian,
presenting an update on the idea since DebConf 24. This includes the
initial requirements for the livepatch package format that would be used to
distribute the livepatches.
Stefano and Anupa worked as part of the video team, streaming and recording the
event’s talks.
Miscellaneous contributions
Stefano looked into packaging the latest upstream python-falcon version in
Debian, in support of the Python 3.13 transition. This appeared to break
python-hug, which is sadly looking neglected upstream, and the best course of
action is probably its removal from Debian.
Stefano uploaded videos from various 2024 Debian events to PeerTube and
YouTube.
Stefano and Santiago visited the site for DebConf 2025 in Brest, after the
MiniDebConf in Toulouse, to meet with the local team and scout out the venue.
The on-going DebConf 25 organization work of last month also included
handling the logo and artwork call for
proposals.
Carles implemented multiple language support on
po-debconf-manager and
tested it using Portuguese-Brazilian during MiniDebConf Toulouse. The system was
also tested and improved by reviewing more than 20 translations to Catalan,
creating merge requests for those packages, and providing user support to new
users. Additionally, Carles implemented better status transitions, configuration
keys management and other small improvements.
Helmut sent 32 patches for cross build failures. The
wireplumber one was an interactive
collaboration with Dylan Aïssi.
Helmut continued to monitor the /usr-move, sent a patch for lib64readline8
and continued several older patch conversations. lintian now reports some
aliasing issues in unstable.
Colin upgraded 42 Python packages to new upstream versions. Some were
complex: python-catalogue had some upstream version
confusion, pydantic and
rpds-py involved several Rust package upgrades as prerequisites, and
python-urllib3 involved first packaging python-quart-trio and then vendoring an
unpackaged test-dependency.
Colin contributed Incus support to needrestart upstream.
Lucas set up a machine to do a rebuild of all ruby reverse dependencies to
check what will be broken by adding ruby 3.3 as an alternative interpreter. The
tool used for this is
mass-rebuild and the initial
rebuilds have already started. The ruby interpreter maintainers are planning to
experiment with debusine next time.
Lucas is organizing a Debian Ruby
sprint towards the end of
January in Paris. The plan of the team is to finish any missing bits of Ruby 3.3
transition at the time, try to push Rails 7 transition and fix RC bugs affecting
the ruby ecosystem in Debian.
Anupa attended a Debian Publicity team meeting in-person during MiniDebCamp
Toulouse.
Anupa moderated and posted in the Debian Administrator group in LinkedIn.
A new version of our pinp package
arrived on CRAN today, and is
the first release in four years. The pinp package
allows for snazzier one or two column Markdown-based pdf vignettes, and
is now used by a few packages. A screenshot of the package vignette can
be seen below. Additional screenshots are at the pinp page.
This release contains no new features or new user-facing changes but
reflects the standard package and repository maintenance over the
four-year window since the last release: updating of actions, updating
of URLs and addressing small packaging changes spotted by
ever-more-vigilant R checking code.
The NEWS entry for this release follows.
Changes in pinp version 0.0.11 (2024-12-08)
Standard package maintenance for continuous integration, URL
updates, and packaging conventions
I always find it amazing the opportunities I have thanks to my contributions to the Debian Project. I am happy to receive this recognition through the help I receive with travel to attend events in other countries.
This year, two MiniDebConfs were scheduled for the second half of the year in Europe: the traditional edition in Cambridge in UK and a new edition in Toulouse in France. After weighing the difficulties and advantages that I would have to attend one of them, I decided to choose Toulouse, mainly because it was cheaper and because it was in November, giving me more time to plan the trip. I contacted the current DPL Andreas Tille explaining my desire to attend the event and he kindly approved my request for Debian to pay for the tickets. Thanks again to Andreas!
MiniDebConf Toulouse 2024 was held on November 16th and 17th (Saturday and Sunday) and took place in one of the rooms of a traditional Free Software event in the city named Capitole du Libre. Before MiniDebConf, the team organized a MiniDebCamp on November 14th and 15th at a coworking space.
The whole experience promised to be incredible, and it was! From visiting a city in France for the first time, to attending a local Free Software event, and sharing four days with people from the Debian community from various countries.
Travel and the city
My plan was to leave Belo Horizonte on Monday, pass through São Paulo, and arrive in Toulouse on Tuesday night. I was going to spend the whole of Wednesday walking around the city and then participate in the MiniDebCamp on Thursday.
But the flight that was supposed to leave São Paulo in the early hours of Monday to Tuesday was cancelled due to a problem with the airplane, and I spent all of Tuesday waiting. I was rebooked on another flight that left in the evening and arrived in Toulouse on Wednesday afternoon. Even though I was very tired from the trip, I still took advantage of the end of the day to walk around the city. But it was a shame to have lost an entire day of sightseeing.
On Thursday I left early in the morning to walk around a little more before going to the MiniDebCamp venue. I walked around a lot and saw several tourist attractions. The city is really very beautiful, as they say, especially the houses and buildings made of pink bricks. I was impressed by the narrow and winding streets; at one point it seemed like I was walking through a maze. I would arrive at a corner and find 5 streets crossing in different directions.
The riverbank that runs through the city is very beautiful and people spend their evenings there just hanging out. There was a lot of history around there.
I stayed in an Airbnb a 25-minute walk from the coworking space and only 10 minutes from the event venue. It was a very spacious apartment that was much cheaper than a hotel.
MiniDebCamp
I arrived at the coworking space where the MiniDebCamp was being held and met up with several friends. I also met some new people, talked about the translation work we do in Brazil, and other topics.
We already knew that the organization would pay for lunch for everyone during the two days of MiniDebCamp, and at a certain point they told us that we could go to the room (which was downstairs from the coworking space) to have lunch. They set up a table with quiches, breads, charcuterie and LOTS of cheese :-) There were several types of cheese and they were all very good. I just found it a little strange because I’m not used to having cheese for lunch, but the experience was amazing anyway :-)
In the evening, we went as a group to dinner at a restaurant in front of the Capitolium, the city’s main tourist attraction.
On the second day, in the morning, I walked around the city a bit more, then went to the coworking space and had another incredible cheese table for lunch.
Video Team
One of my ideas for going to Toulouse was to be able to help the video team in setting up the equipment for broadcasting and recording the talks. I wanted to follow this work from the beginning and learn some details, something I can’t do before the DebConfs because I always arrive after the people have already set up the infrastructure. And later reproduce this work in the MiniDebConfs in Brazil, such as the one in Maceió that is already scheduled for May 1-4, 2025.
As I had agreed with the people from the video team that I would help set up the equipment, on Friday night we went to the University and stayed in the room working. I asked several questions about what they were doing, about the equipment, and I was able to clear up several doubts. Over the next two days I was handling one of the cameras during the talks. And on Sunday night I helped put everything away.
Thanks to olasd, tumbleweed and ivodd for their guidance and patience.
The event in general
There was also a meeting with some members of the publicity team who were there with the DPL. We went to a cafeteria and talked mainly about areas that could be improved in the team.
The talks at MiniDebConf were very good and the recordings are also available here.
I ended up not watching any of the talks from the general schedule at Capitole du Libre because they were in French. It’s always great to see free software events abroad to learn how they are done there and to bring some of those experiences to our events in Brazil.
I hope that MiniDebConf in Toulouse will continue to take place every year, or that the French community will hold the next edition in another city and I will be able to join again :-) If everything goes well, in July next year I will return to France to join DebConf25 in Brest.
Author: Jeremy Nathan Marks The wind was wet. It blew down from unseen heights and spread a damp veil across the plain. The soil, not accustomed to dampness, clotted. The surge in moisture caused large, segmented creatures with prolific legs to fall from the trees and lie twitching in the dirt. The trees, stunted, spiny, […]
Review: Why Buildings Fall Down, by Matthys Levy & Mario Salvadori
Illustrator: Kevin Woest
Publisher: W.W. Norton
Copyright: 1992
Printing: 1994
ISBN: 0-393-31152-X
Format: Trade paperback
Pages: 314
Why Buildings Fall Down is a non-fiction survey of the causes of
structure collapses, along with some related topics. It is a sequel of
sorts to Why Buildings Stand Up by Mario Salvadori, which I have
not read. Salvadori was, at the time of writing, Professor Emeritus of
Architecture at Columbia University (he died in 1997). Levy is an
award-winning architectural engineer, and both authors were principals at
the structural engineering firm Weidlinger Associates. There is a revised
and updated 2002 edition, but this review is of the original 1992 edition.
This is one of those reviews that comes with a small snapshot of how my
brain works. I got fascinated by the analysis of the
collapse of Champlain Towers South in Surfside, Florida in 2021, thanks
largely to a
random YouTube series on the tiny channel of a structural engineer.
Somewhere in there (I don't remember where, possibly from that channel,
possibly not) I saw a recommendation for this book and grabbed a used copy
in 2022 with the intent of reading it while my interest was piqued. The
book arrived, I didn't read it right away, I got distracted by other
things, and it migrated to my shelves and sat there until I picked it up
on an "I haven't read nonfiction in a while" whim.
Two years is a pretty short time frame for a book to sit on my shelf
waiting for me to notice it again. The number of books that have been
doing that for several decades is, uh, not small.
Why Buildings Fall Down is a non-technical survey of structure
failures. These are mostly buildings, but also include dams, bridges, and
other structures. It's divided into 18 fairly short chapters, and the
discussion of each disaster is brisk and to the point. Most of the
structures discussed are relatively recent, but the authors talk about the
Meidum Pyramid, the
Parthenon (in the chapter
on intentional destruction by humans), and the
Pavia Civic
Tower (in the chapter about building death from old age). If you are
someone who has already been down the structural failure rabbit hole, you
will find chapters on the expected disasters like the
Tacoma
Narrows Bridge collapse and the
Hyatt
Regency walkway collapse, but there are a lot of incidents here,
including a short but interesting discussion of the
Leaning Tower
of Pisa in the chapter on problems caused by soil properties.
What you're going to get, in other words, is a tour of ways in which
structures can fail, which is precisely what was promised by the title.
This wasn't quite what I was expecting, but now I'm not sure why I was
expecting something different. There is no real unifying theme here;
sometimes the failure was an oversight, sometimes it was a bad design,
sometimes it was a last-minute change, and sometimes it was something
unanticipated. There are a lot of factors involved in structure design
and any of them can fail. The closest there is to a common pattern is a
lack of redundancy and sufficient safety factors, but that lack of
redundancy was generally not deliberate and therefore this is not a guide
to preventing a collapse. The result is a book that feels a bit like a
grab-bag of structural trivia that is individually interesting but only
occasionally memorable.
The writing style I suspect will be a matter of taste, but once I got used
to it, I rather enjoyed it. In a co-written book, it's hard to separate
the voices of the authors, but Salvadori wrote most of the chapter on the
law in the first person and he's clearly a character. (That chapter is
largely the story of two trials he testified in, which, from his account,
involved him verbally fencing with lawyers who attempted to claim his
degrees from the University of Rome didn't count as real degrees.) If
this translates to his speaking style, I suspect he was a popular lecturer
at Columbia.
The explanations of the structural failures are concise and relatively
clear, although even with Kevin Woest's diagrams, it's hard to capture the
stresses and movement in a written description. (I've found from watching
YouTube videos that animations, or even annotations drawn while someone is
talking, help a lot.) The framing discussion, well, sometimes that is
bombastic in a way that I found amusing:
But we, children of a different era, do not want our lives to be
enclosed, to be shielded from the mystery. We are eager to
participate in it, to gather with our brothers and sisters in a
community of thought that will lift us above the mundane. We need to
be together in sorrow and in joy. Thus we rarely build monolithic
monuments. Instead, we build domes.
It helps that passages like this are always short and thus don't wear out
their welcome. My favorite line in the whole book is a throwaway sentence
in a discussion of building failures due to explosions:
With a similar approach, it can be estimated that the chance of an
explosion like that at Forty-fifth Street was at most one in thirty
million, and probably much less. But this is why life is dangerous
and always ends in death.
Going hard, structural engineering book!
It's often appealing to learn about things from their failures because the
failures are inherently more dramatic and thus more interesting, but if
you were hoping for an introduction to structural engineering, this is
probably not the book you want. There is an excellent and surprisingly
engaging appendix that covers the basics of structural analysis in 45
pages, but you would probably be better off with Why Buildings Stand
Up or another architecture or structural engineering textbook (or maybe a
video course). The problem with learning by failure case study is that
all the case studies tend to blend together, despite the authors' engaging
prose, and nearly every collapse introduces a new structural element with
new properties and new failure modes and only the briefest of
explanations. This book might make you a slightly more informed consumer
of the news, but for most readers I suspect it will be a collection of
forgettable trivia told in an occasionally entertaining style.
I think the book I wanted to read was something that went deeper into the
process of forensic engineering, not just the outcomes. It's interesting
to know what the cause of a failure was, but I'm more interested in how
one goes about investigating a failure. What is the process, how do you
organize the investigation, and how does the legal system around
engineering failures work? There are tidbits and asides here, but this
book is primarily focused on the structural analysis and elides most of
the work done to arrive at those conclusions.
That said, I was entertained. Why Buildings Fall Down is a bit
dated — the opening chapter on airplanes hitting buildings reads much
differently now than when it was written in 1992, and I'm sure it was
updated in the 2002 edition — but it succeeds in being clear without being
soulless or sounding like a textbook. I appreciate an occasional rant
about nuclear weapons in a book about architecture. I'm not sure I really
recommend this, but I had a good time with it.
Also, I'm now looking for opportunities to say "this is why life is
dangerous and always ends in death," so there is that.
For your weekend pleasure, I've posted the third installment of THE ANCIENT ONES - my SF comedy novel... that also delivers some unexpected twists on hoary sci-fi tropes. I don't see any comments under the prior postings, so I assume they were... fun? That some of the puns knocked you unconscious?
Your wit is welcome.
Okay, back to the world and varied ways to save it!
...Well, for the prosperous sci fi aficionado. After 44 years, there is a hardcover of my first novel Sundiver. It's a lovely, collectible edition (numbered and signed) with a gorgeous new cover and interiors by Jim Burns. From Phantasia Press. (Not cheap. But wow does Phantasia do good work!)
Are you more rich in nerdy sci fi knowledge, instead?
Well again, The TASAT project -- There's A Story About That! -- is doing great! A special service I've tried to bring into the world for almost 20 years. And now, thanks to master programmer Todd Zimmerman, it lives!
Come by TASAT.org and see how there's a small but real chance that nerdy SciFi readers like you might one day save the world!
Or like Ray Bradbury's chillingly prescient time paradox story “A Sound of Thunder.”
Addendum -- other science fiction resources:
And now a year-end gift for the nerdiest. A passel of web resources all about science fiction!
Here is a list of useful online science fiction resources, including databases, encyclopedias, as well as discussion forums and question and answer sites...
Science Fiction Academia
Science Fiction Research Association - “The oldest professional association dedicated to scholarly inquiry into Science Fiction and the Fantastic across all media.”
SFE: SF Encyclopedia - “Our aim is to provide a comprehensive, scholarly, and critical guide to science fiction in all its forms.”
The Science Fiction and Fantasy Research Database - “A freely available online resource designed to help students and researchers locate secondary sources for the study of the science fiction and fantasy and associated genres.”
r/scifiwriting - A subreddit for writers of science fiction.
r/AskScienceFiction - "It's like Ask Science, but all questions and answers are written with answers gleaned from the universe itself."
Worldbuilding Stack Exchange - “A question and answer site for writers/artists using science, geography and culture to construct imaginary worlds and settings.”
PREVIOUSLY: Alvin Montessori, the human advisor aboard the demmie-crewed stellarship Clever Gamble, advised Captain Ohm (the Irresistible) against leading a party down to the surface of planet Oxytocin without taking time to assess. But demmies are demmies – bless (and cuss) ‘em! So, the ship has deployed a humungously long HOSE from a giant reel. (Why else would the whole front of a starship be a great big disk?)
Now, with Ohm and Engineer Nuts and Doctor Guts and a team of greenie security guards, Alvin enters the prep room and prepares to…
For those of you who’ve never slurried, there can be no describing what it’s like to have a beam zap through you, reading the position of every cell in your body. Then comes the rush of solvent fluid, flooding in through a hundred vents, filling the transport chamber, rising from your boots to your thighs to your neck faster than you can cry, I’m melting!
It doesn’t hurt. Really. But it is disconcerting to watch your hands dissolve right in front of you. Closing your eyelids won’t help much, since they go next, leaving a dreadful second or two until your entire skull – brain and all – crumbles like a sugar confection in hot water.
Ever since it was proved – maybe a century ago – that the mind exists independent from the body, philosophers have hoped to tap marvelous insights or great wisdom from the plane of pure abstraction. Some try to do this by peering into dreams. Others hope to sample the filtered essence of thought from people who are in a liquid state.
Oh, it’s true that something seems to happen – thoughts flow – during that strange time when your nervous system isn’t solid anymore, but a churning swirl of loose neurons and separated synapses, gurgling supersonically down a narrow pipe two hundred miles long. Giving new meaning to “brain drain.”
But in my experience, these stray thoughts are seldom anything profound. On that particular day – as I recall – my focus was on the job. The most fundamental underpinning of my task as Earthling Advisor.
Maybe they will grow up.
After all, we did, eventually.
It’s the hope we all cling to.
Or so one part of me told the rest of my myriad selves, during that timeless interval when I had no solid form. When “me” was many and a sense of detachment seemed to come naturally.
Which just goes to show you that it never pays to do any deep thinking when you’re in a slurry.
I regained full consciousness on a strange new world, watching my hands reappear in front of me as the reconstructor at the nozzle end of the Hose re-stacked my cells, one by one, in the same (more or less) relative positions they had been in, aboard ship.
Did I have that mole on my hand, before? Isn’t it a lot like one I saw on the back of Ohm’s neck…
But no. Don’t go there.
Still, while dismissing that spurious thought, I resisted the urge to shake my head or shrug. Best to let ligaments and things congeal a few extra seconds, lest something jar loose and roll away.
I did shift my eyes a bit to look through a window of the Nozzle Chamber. Overhead, the Hose stretched upward into a cloud-flecked sky, cleverly rendered invisible to radar, sonar, infrared, and most visible light. (I could see it, of course. But then, demmies are always amazed by our human ability to perceive the mystical color, “blue.”)
A final word about slurrying. In its way, it is an efficient mode of transport, and I’m not complaining. Things might have been worse. I’m told that true matter teleportation – where an object is read and replicated or “beamed,” atom-by-atom, instead of cell-by-cell, is a ridiculous impossibility. Quantum uncertainty and all that. Won’t ever happen.
Nevertheless, there is a demmie research center that refuses to give up on the idea… and demmies never cease to surprise us.
(Impossibility be damned. I recommend secretly blowing up the place, just to be sure.)
Stumbling out of the Nozzle, we retrieved our tools from container-tubes and proceeded to look around the place. We appeared to have de-liquesced behind some boulders and shrubbery in an uncrowded portion of the park. Tall buildings could be seen jutting skyward beyond a surrounding copse of trees. Vehicle sounds of a bustling city drifted toward us.
So far, so good. The greenies fanned out, very businesslike, covering all directions with their tidy blasters. I took out my scanner and surveyed various sensor bands.
“Life forms?” Ohm said, peering around my shoulder, speaking loud enough to be heard over the traffic noise.
“Yes, Captain,” I replied, patiently. “Many life forms.”
“Many,” Nuts repeated, morosely.
“Many,” Guts added, eyes filling with eagerness while he stroked his vivisection kit.
“Let’s go see,” Ohm commanded, as I counted the seconds till something happened.
Something always happens.
Sure enough, at a count of eight, we heard a scream and hurried toward the source, which turned out to be Lieutenant Morell. She panted, with one hand near her throat, pointing her blaster toward a set of bushes.
“I shot first!”
“What?” Ohm demanded, shoving others aside to charge forward. “What was it?”
She came to attention. “I don’t know, sir. Something was spying on us. I saw the weirdest pair of eyes. Whatever it was, I think I got it.”
“Um,” I stepped forward, reluctant to point out the obvious. “The rule of Simplest Hypothesis might suggest, in a calm city park, that your something just might have been… well… perhaps a local citizen?”
Lieutenant Morell gulped, looking at that moment just like a young human who had made a nervous mistake.
“Of all the damn foolishness,” Guts grumbled, hastening through the undergrowth, drawing his medical kit while I hurried after. Behind me, I heard the Lieutenant sob an apology.
“There now,” Captain Ohm answered. “I’m sure he… she… or it is just stunned. You did use stun-setting, yes?”
“Sir!”
When I glanced back, he was leading her with one arm, his other one sliding around her shoulder. I should have known.
Guts shouted when he found our prowler. A humanoid, of course, like ninety percent of Class M sapients. The poor fellow had managed to crawl a few meters before the stun nanos got organized enough to bring him down. Now he lay sprawled on his back, spread-eagled, with his arms and legs pinned by half a million microscopic fibers to the leaf-strewn loam. He strained futilely till we emerged to surround him. Then he stared with large, dark eyes, gurgling slightly behind the nano-woven gag in his mouth.
Nanomachines are often too small to see, but those that are fired at high speed by a stun blaster can be larger than an Earthling ant. At medium range, only a dozen might hit a fleeing target, and they need several seconds to devour raw matter, duplicating into thousands, before getting to work immobilizing their quarry.
There are quicker ways of subduing someone, but none quite as safe or sure. Anyway, a gulliver-gun is usually swift enough.
By now, a veritable army of little nanos swarmed over the captive, inspecting their handiwork, keeping the tiny ropes taut and jumping up and down in jubilation. Some, for lack of anything else to do, appeared to be hard at work sewing rips in the native’s dark, satin-lined cloak and black, pegged pants. Others re-coifed his mussed hair.
(Just because someone is a prisoner, that doesn’t mean he can’t look sharp.)
Guts pushed his bio-scanner toward the humanoid, having to fight through a tangle of tiny ropes while muttering something about how “…nanos are the winchers of our discontent,” in a Shakespearean accent.
Enough, I thought, drawing my blaster, flicking the setting, then sighting on the victim’s face. He cringed as I fired—
—a stream of tuned microwaves set to turn all nano fibers into harmless gas. The gag in his mouth vanished and he gasped, then began jabbering frightfully in a tongue filled with moist sibilants.
I heard a hiss as Guts injected our captive with a hypo spray, using an orange vial marked ALIEN RELAXANT #1. The native tensed for a moment, then sagged with a sigh.
Remember, students, always inspect your ship’s supply of Alien Relaxant Number One! Make sure of its purity.
Very few sentient life forms have fatal allergic reactions to 100 percent distilled water.
Nevertheless, most will respond quickly to being injected, as if a potent, local narcotic were suddenly flowing through their veins. Bless the placebo effect. Its near universality is among the few reassuring constants in an uncertain cosmos.
Guts gave me a sly wink. He knows what’s going on, so I no longer have to mix batches of “ol’ Number One” all by myself. But don’t assume your ship’s doctor will understand. Call it an “ancient human recipe” until you’re sure your medico can be trusted with the truth.
The native was much calmer, prattling at a slower pace while I set up the universal translator on its tripod. Our captain dropped to one knee, preparing for that special moment when true First Contact could begin. Colored buttons flickered as the machine scanned, seeking meaning in the slur of local speech. Abruptly, all lights turned green. The translator swiveled and fired three more nanos at the native, one for each ear and another that streaked like a smart missile down his throat.
It isn’t painful, but startlement made him stop and swallow in surprise.
“On behalf of the Federated Alliance of—” Captain Ohm began, expansively spreading his arms. Then he frowned as the impudent creature interrupted, this time speaking aristocratically-accented Demmish.
“…I don’t know who you people are, or where you come from, but you must get out of the park, quickly! Don’t you know it’s dangerous?”
While updating my Debian packages, I often have to update a field in the debian/control file.
This field is named Standards-Version and it declares which version of the Debian policy the package complies with. When updating this field, one must follow the upgrading checklist.
That being said, I maintain a lot of similar packages and I often have to update this Standards-Version field.
This field can be updated manually with cme fix dpkg (see Managing Debian packages with cme). But this command may make other changes and does not commit the result.
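Done by hand, the whole chore boils down to something like the following (a rough sketch for illustration only; it uses a plain sed over debian/control instead of cme's model-aware editing, and the version number and commit message are just examples, not the actual script):
# update the field in place (cme fix dpkg would also work, but may touch more)
new=4.7.0
sed -i "s/^Standards-Version:.*/Standards-Version: ${new}/" debian/control
# commit only that file, mentioning the new policy version in the log
git commit -m "control: declare compliance with Debian policy ${new}" -- debian/control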
So I’ve created a new update-standards-versioncme script that:
udpate Standards-Version field
commit the changed
For instance:
$ cme run update-standards-version
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Connecting to api.ftp-master.debian.org to check 31 package versions. Please wait...
Got info from api.ftp-master.debian.org for 31 packages.
Warning in 'source Standards-Version': Current standards version is '4.7.0'. Please read https://www.debian.org/doc/debian-policy/upgrading-checklist.html for the changes that may be needed on your package
to upgrade it from standard version '4.6.2' to '4.7.0'.
Offending value: '4.6.2'
Changes applied to dpkg-control configuration:
- source Standards-Version: '4.6.2' -> '4.7.0'
[master 552862c1] control: declare compliance with Debian policy 4.7.0
1 file changed, 1 insertion(+), 1 deletion(-)
Here’s the generated commit. Note that the generated log mentions the new policy version:
Dark Minds is the third book in the self-published Class 5
science-fiction romance series. It is a sequel to
Dark Deeds and should not be read out
of order, but as with the other books of the series, it follows new
protagonists.
Imogen, like another Earth woman before her, was kidnapped by a Class 5
for experimentation by the Tecran. She was subsequently transferred to a
research facility and has been held in cages and secret military bases for
over three months. As the story opens, she had been imprisoned in a
Tecran transport for a couple of weeks when the transport is taken by the
Krik who kill all of the Tecran crew. Next stop: another Class 5.
That same Class 5 is where the Krik are taking Camlar Kalor and his team.
Cam is a Grih investigator from the United Council, sent to look into
reports of a second Earth woman rescued from a Garmman ship. (The series
reader will recognize this as the plot of Dark Deeds.) Now he's
trapped with the crews of a bunch of other random ships in the cargo hold
of a Class 5 with Krik instead of Tecran roaming the halls, apparently at
the behest of the ship's banned AI, and a mysterious third Earth woman
already appears to be befriending it.
Imogen is the woman Fiona saw signs of in Dark Deeds. She's had a
rougher time than the protagonists of the previous two books in this
series, and she's been dropped into a less stable situation. The Class 5
she's brought to at the start of the story is far more suspicious (with
quite a lot of cause) and somewhat more hostile than the AIs we've
encountered previously. The rest of the story formula is roughly the
same, though: hunky Grih officer with a personality completely
indistinguishable from the hunky Grih officers in the previous two books,
an AI with a sketchy concept of morality that desperately needs an ally,
the Grih obsession with singing, the eventual discovery of useful armor
and weaponry that Imogen can use to surprise people, and more political
maneuvering over the Sentient Beings Agreement.
This entry in the series mostly abandons the Grih shock and horror at how
badly the Earth women have been treated. This makes sense, given how
dangerous Earth women have proven over the course of this series, and I
like that Diener is changing the political dynamics as the story develops.
I do sometimes miss that appalled anger of Dark Horse, but Dark Minds focuses more on the politics
and corridor fighting of a tense multi-sided stand-off.
I found the action more gripping in Dark Minds than in Dark
Deeds, and I liked Imogen more as a character than Fiona. She doesn't
have the delightfully calm competence of Rose from Dark Horse, but
she's a bit more hardened, a bit more canny, and is better at taking
control of situations. I also like that Diener avoids simplistically
pairing Earth women off with Class 5s. The series plot is progressing
faster than I had expected, and that gives this book a somewhat different
shape than the previous ones.
Cam is probably the least interesting of the men in this series so far and
appears to exist solely to take up the man-shaped hole in the plot. This
is not a great series for gender roles; thankfully, the romance is a small
part of the plot and largely ignorable. The story is about the women and
the AIs, and all of the women and most of the AIs of the previous books
make an appearance. It's clear they're forming an alliance whether the
Grih like it or not, and that part of the story was very satisfying.
Up to this book, this series had been all feel-good happy endings. I will
risk the small spoiler and warn that this is not true to the same degree
here, so you may not want to read this one if you want something entirely
fluffy, light, and positive (inasmuch as a series involving off-screen
experimentation on humans can be fluffy, light, and positive). That
caught me by surprise in a way I didn't entirely like, and I wish Diener
had stuck with the entirely positive tone.
Other than that, though, this was fun, light, readable entertainment.
It's not going to win any literary awards, it's formulaic, the male
protagonist comes from central casting, and the emphasis by paragraph
break is still a bit grating in places, but I will probably pick up the
next book when I'm in the mood for something light. Dark Minds is
an improvement over book two, which bodes well for the rest of the series.
Author: Jeremy Nathan Marks At the console, the technician remembered a few Latin words from school: Deux ex machina. He couldn’t remember what they meant, but he heard his teacher saying them. He felt sweat on his temples, and hoped his supervisor would excuse it. At a different console, a different technician remembered a few […]
After weeks – dare I say months – of work, it is finally done.
lintian.debian.org is back online!
Many, many thanks to everyone who worked hard to make this possible:
Thanks to Nicolas Peugnet, the author of lintian-ssg, who handed us
this custom static site generator on a silver platter. I'm happy I didn't
have to code this myself :)
Thanks to Otto Kekäläinen, maintainer of the lintian-ssg package in Debian,
who worked in tandem with Nicolas to iron out problems.
Thanks to Philipp Kern, who did the work on the DSA side to put the website
back online.
All in all, I did very little (mostly coordinating these fine folks) and they
should get the credit for this very useful service being back.
The new Firebuild release contains plenty of small fixes and a few notable improvements.
Experimental macOS support
The most frequently asked question from people getting to know Firebuild was if it worked on their Mac and the answer sadly used to be that well, it did, but only in a Linux VM. This was far from what they were looking for.
Linux and macOS have common UNIX roots, but porting Firebuild to macOS involved bigger challenges, like ensuring that dyld(1), macOS’s dynamic loader, initializes the preloaded interceptor library early enough to catch all interesting calls, and avoiding anything that uses malloc() or thread-local variables, which are not yet set up at that point.
Preloading libraries on Linux is really easy: running LD_PRELOAD=my_lib.so ls just works if the library exports the symbols to be interposed, while macOS employs multiple lines of defense to prevent applications from loading unknown libraries. Firebuild’s guide for making DYLD_INSERT_LIBRARIES honored on Macs can be helpful for other projects that rely on injecting libraries as well.
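As a quick illustration of the difference (a sketch only; my_lib.so, my_lib.dylib and my_program are placeholder names, and on macOS the variable may simply be ignored for SIP-protected or hardened binaries, which is exactly what the guide addresses):
# Linux: one environment variable and the interceptor is loaded
LD_PRELOAD=./my_lib.so ls

# macOS: the equivalent variable exists, but System Integrity Protection and
# the hardened runtime can silently strip it for protected or signed binaries
DYLD_INSERT_LIBRARIES=./my_lib.dylib ./my_program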
Firebuild on macOS can already accelerate simple projects and rebuild itself with Xcode. Since Xcode introduces a lot of nondeterminism to the build, Firebuild can’t shine in acceleration with Xcode yet, but can provide nice reports to show which part of the build is the most time consuming and how each sub-command is called.
If you would like to try Firebuild on macOS please compile it from the GitHub repository for now. Precompiled binaries will be distributed on the Mac App Store and via CI providers. Contact us to get notified when those channels become available.
Dealing with the ‘Epochalypse’
Glibc’s API provides many functions with time parameters and some of those functions are intercepted by Firebuild. Time parameters used to be passed as 32-bit values on 32-bit systems, preventing them from accurately representing timestamps after the year 2038, which is known as the Y2038 problem or the Epochalypse.
To deal with the problem glibc 2.34 started providing new function symbol variants with 64-bit time parameters, e.g. clock_gettime64() in addition to clock_gettime(). The new 64-bit variants are used when compiling consumers of the API with _TIME_BITS=64 defined.
Processes intercepted by Firebuild may have been compiled with or without _TIME_BITS=64, thus libfirebuild now provides both variants on affected systems running glibc >= 2.34 to work safely with binaries using 64-bit and 32-bit time representations.
Many Linux distributions have already stopped supporting 32-bit architectures, but Debian and Ubuntu still support armhf, for example, where the Y2038 problem still applies. Both Debian and Ubuntu performed a transition rebuilding every library (and their reverse dependencies) with -D_FILE_OFFSET_BITS=64 set where the libraries exported symbols that changed when switching to 64-bit time representation (thanks to Steve Langasek for driving this!). Thanks to the transition most programs are ready for 2038, but interposer libraries are trickier to fix, and if you maintain one it might be a good idea to check that it works well with binaries using both 32-bit and 64-bit time representations. Faketime, for example, is not fixed yet; see #1064555.
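A quick way to see which variant a binary ends up referencing (a sketch, assuming a 32-bit system such as armhf with glibc >= 2.34 and a trivial time_test.c that calls clock_gettime()):
# build the same source with and without 64-bit time_t
gcc -o time32 time_test.c
gcc -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 -o time64 time_test.c
# compare which glibc symbols the two binaries reference;
# the second one should pull in the 64-bit variant of clock_gettime
nm -D --undefined-only time32 | grep clock_gettime
nm -D --undefined-only time64 | grep clock_gettime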
Select passed through environment variables with regular expressions
Firebuild filters out most of the environment variables set when starting a build to make the build more reproducible and achieve higher cache hit rate. Extra environment variables to pass through can be specified on the command line one by one, but with many similarly named variables this may become hard to maintain. With regular expressions this just became easier:
Armadillo is a powerful
and expressive C++ template library for linear algebra and scientific
computing. It aims towards a good balance between speed and ease of use,
has a syntax deliberately close to Matlab, and is useful for algorithm
development directly in C++, or quick conversion of research code into
production environments. RcppArmadillo
integrates this library with the R environment and language–and is
widely used by (currently) 1197 other packages on CRAN, downloaded 37.5 million
times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint
/ vignette) by Conrad and myself has been cited 605 times according
to Google Scholar.
Conrad released a minor
version 14.2.2 yesterday. This followed a bit of recent work a few of us
did in the ensmallen
and mlpack repositories
following the 14.2.0 release. Use of the (member) functions
.min(index) and .max(index) was deprecated in
Armadillo in favor of
.index_min() and .index_max(). By now ensmallen and mlpack have been
updated at CRAN. To add some
spice, CRAN emailed that the
(very much unreleased as of now, but coming likely next spring) gcc-15
was unhappy with RcppArmadillo
due to some Armadillo code.
This likely related to the listed gcc-15 C++
change about “Qualified name lookup failure into the current
instantiation”. Anyway, Conrad fixed it within days
and that change too is part of this new version (as is a small behaviour
normalization between the two indexing methods that matters in case of
ties, this was in 14.2.1).
The changes since the last CRAN release are summarised
below.
Changes in RcppArmadillo version 14.2.2-1 (2024-12-05)
Upgraded to Armadillo release 14.2.2 (Smooth Caffeine)
Workarounds for regressions in pre-release versions of GCC 15
More selective detection of symmetric/hermitian matrices by various functions
Changes in RcppArmadillo version 14.2.1-1 (2024-11-24) (GitHub Only)
Upgraded to Armadillo release 14.2.1 (Smooth Caffeine)
Fix for index_min() and index_max() to ensure that the first index of equal extremum values is found
Discovering Open Source: How I Got Introduced
Hey there! I’m Divine Attah-Ohiemi, a sophomore studying Computer Science. My journey into the world of open source was anything but grand. It all started with a simple question to my sister: “How do peop...
Author: Robin Cassini “Please, have a seat.” A bare lightbulb flickered overhead. I settled onto a folding chair. The steel dug relentlessly into my spine. It was not meant to be comfortable. With a creak, the officer positioned himself across the small table. He tapped his clipboard. “Pandora, is it?” I nodded. Sure, my name […]
First he shared a lesson he titled "Offer you can't refuse a.k.a.
Falsehood programmers believe about prices" explaining
"Some programmers believe that new prices per month (when paid annually) are always better then the old ones (when paid monthly). Only this time they have forgotten their long-time clients on legacy packages."
Then he found a few more effs. "This e-shop required to create an
account to download an invoice for order already delivered.
Which is kind of WTF on its own. But when I pasted a generated
62 mixed character (alphanumeric+special) password, their
form still insisted on entering 8+ characters. not correct.
Well, because their programmers didn't expect somebody to
paste a password. Once I did another JS event - e.g. clicked
a submit button, it fixed itself."
And our
Best Beastie in Black
discovered
"Anomalies in the causal structure of our particular 4-dimensional
Lorentzian manifold have apparently caused this secure message
portal belonging to a tax prep/audit company to count emails
that haven't yet been sent by sender."
Traveler
Tim R.
struggled to pay for a visa, and reports this result. Rather than an error
reported as success, we appear to have here a success reported as an error.
"We're all familiar with apps that throw up an
error dialog with the error message as success
but it's particularly irritating when trying to submit
a payment. This is what happened when I tried to pay for an Indian visa with Paypal.
To add insult to injury, when you try to pay again, it
says that due to errors and network problems, you must
check back in 2 hours before attempting a repeat payment."
Finally
Robert H.
is all charged up about Chevy shenanigans.
"I thought one of the advantages of EV vehicles was they don't need oil changes?"
The diffoscope maintainers are pleased to announce the release of diffoscope
version 284. This version includes the following changes:
[ Chris Lamb ]
* Simplify tests_quines.py::test_{differences,differences_deb} to use
assert_diff and not mangle the expected test output.
* Update some tests to support file(1) version 5.46.
(Closes: reproducible-builds/diffoscope#395)
The company’s Mobile Threat Hunting feature uses a combination of malware signature-based detection, heuristics, and machine learning to look for anomalies in iOS and Android device activity or telltale signs of spyware infection. For paying iVerify customers, the tool regularly checks devices for potential compromise. But the company also offers a free version of the feature for anyone who downloads the iVerify Basics app for $1. These users can walk through steps to generate and send a special diagnostic utility file to iVerify and receive analysis within hours. Free users can use the tool once a month. iVerify’s infrastructure is built to be privacy-preserving, but to run the Mobile Threat Hunting feature, users must enter an email address so the company has a way to contact them if a scan turns up spyware—as it did in the seven recent Pegasus discoveries.
Our monthly reports outline what we’ve been up to over the past month and highlight items of news from elsewhere in the world of software supply-chain security where relevant. As ever, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.
The Reproducible Builds community sadly announced it has lost its founding member, Lunar. Jérémy Bobbio aka ‘Lunar’ passed away on Friday November 8th in palliative care in Rennes, France.
Lunar was instrumental in starting the Reproducible Builds project in 2013 as a loose initiative within the Debian project. He was the author of our earliest status reports and many of our key tools in use today are based on his design. Lunar’s creativity, insight and kindness were often noted.
You can view our full tribute elsewhere on our website. He will be greatly missed.
rebuilderd is our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there.
In November, reproduce.debian.net began rebuilding Debian unstable on the amd64 architecture, and by the time of the MiniDebConf it had attempted to rebuild 66% of the official archive. From this, it could be determined that it is currently possible to bit-for-bit reproduce and corroborate approximately 78% of the actual binaries distributed by Debian — that is, using the .buildinfo files hosted by Debian itself.
reproduce.debian.net also contains instructions on how to set up one’s own rebuilderd instance, and we very much invite everyone with a machine to spare to set up their own version and to share the results. Whilst rebuilderd is still in development, it has been used to reproduce Arch Linux since 2019. We are especially looking for installations targeting Debian architectures other than i386 and amd64.
We are very happy with our collaboration with both STF and Neighbourhoodie (including many changes not directly related to the website), and look forward to working with them in the future.
SBOMs for Python packages
The Python Software Foundation has announced a new “cross-functional project for SBOMs and Python packages”. Seth Michael Larson writes that the project is “specifically looking to solve these issues”:
Enable Python users that require SBOM documents (likely due to regulations like CRA or SSDF) to self-serve using existing SBOM generation tools.
Solve the “phantom dependency” problem, where non-Python software is bundled in Python packages but not recorded in any metadata. This makes the job of software composition analysis (SCA) tools difficult or impossible.
Make the adoption work by relevant projects such as build backends, auditwheel-esque tools, as minimal as possible. Empower users who are interested in having better SBOM data for the Python projects they are using to be able to contribute engineering time towards that goal.
For more than ten years, the Reproducible Builds project has worked towards reproducible builds of many projects, and for ten years now we have built Debian packages twice—with maximal variations applied—to see if they can still be built reproducibly.
For about a month now, we’ve also been rebuilding packages trying to exactly match the builds being distributed via ftp.debian.org. This talk will describe the setup and the lessons learned so far, and why the results currently are what they are (spoiler: they are less than 30% reproducible), and what we can do to fix that.
The Debian Project Leader, Andreas Tille, was present at the talk and remarked later in his Bits from the DPL update that:
It might be unfair to single out a specific talk from Toulouse, but I’d like
to highlight the one on reproducible builds. Beyond its technical focus, the
talk also addressed the recent loss of Lunar, whom we mourn deeply. It served
as a tribute to Lunar’s contributions and legacy. Personally, I’ve
encountered packages maintained by Lunar and bugs he had filed. I believe
that taking over his packages and addressing the bugs he reported is a
meaningful way to honor his memory and acknowledge the value of his work.
Holger’s slides and video in .webm format are available.
Next, rebuilderd is the server used to monitor package repositories of Linux distributions and attempt to reproduce the observed results. This month, version 0.21.0 was released, most notably with improved support for binNMUs by Jochen Sprickerhof and an update of the rebuilderd-debian.sh integration to the latest debrebuild version by Holger Levsen. There has also been significant work to get the rebuilderd package into the Debian archive; in particular, both rust-rebuilderd-common version 0.20.0-1 and rust-rust-lzma version 0.6.0-1 were packaged by kpcyrd and uploaded by Holger Levsen.
Related to this, Holger Levsen submitted three additional issues against rebuilderd as well:
rebuildctl should be more verbose when encountering issues. […]
Please add an option to used randomised queues. […]
Scheduling and re-scheduling multiple packages at once. […]
… and lastly, Jochen Sprickerhof submitted an issue requesting that rebuilderd download the source package in addition to the .buildinfo file […], and kpcyrd also submitted and fixed an issue surrounding dependencies and clarified the license […]
Elsewhere, Roland Clobus posted to our mailing list this month, asking for input on a bug in Debian’s ca-certificates-java package. The issue is that the Java key management tools embed timestamps in its output, and this output ends up in the /etc/ssl/certs/java/cacerts file on the generated ISO images. A discussion resulted from Roland’s post suggesting some short- and medium-term solutions to the problem.
Holger Levsen uploaded some packages with reproducibility-related changes:
devscripts versions 2.24.3, 2.24.4 and 2.24.5 were uploaded, including several fixes for the debrebuild and debootsnap scripts.
Trail of Bits, who performed much of the development work, has an in-depth blog post about the work and its adoption, as well as what is left undone:
One thing is notably missing from all of this work: downstream verification. […]
This isn’t an acceptable end state (cryptographic attestations have defensive properties only insofar as they’re actually verified), so we’re looking into ways to bring verification to individual installing clients. In particular, we’re currently working on a plugin architecture for pip that will enable users to load verification logic directly into their pip install flows.
While open-source software has enabled significant levels of reuse to speed up software development, it has also given rise to the dreadful dependency hell that all software practitioners face on a regular basis. This article provides a catalogue of dependency-related challenges that come with relying on OSS packages or libraries. The catalogue is based on the scientific literature on empirical research that has been conducted to understand, quantify and overcome these challenges. […]
I can now confidently say (and you can also check, you don’t need to trust me) that there is nothing hiding in zig1.wasm [the checked-in binary] that hasn’t been checked-in as a source file.
The full post is full of practical details, and includes a few open questions.
Website updates
Notwithstanding the significant change to the landing page (screenshot above), there were an enormous number of changes made to our website this month. This included:
Alex Feyerke and Mariano Giménez:
Dramatically overhaul the website’s landing page with new “benefit” cards tailored to the expected visitors to our website and a reworking of the visual hierarchy and design. […][…][…][…][…][…][…][…][…][…]
Bernhard M. Wiedemann:
Update the “System images” page to document the e2fsprogs approach. […]
Avoid so-called ‘ghost’ buttons by not using <button> elements as links, as the affordance of a <button> implies an action with (potentially) a side effect. […][…]
Move publications and generate them instead from a data.yml file with an improved layout. […][…]
Make a large number of small but impactful styling changes. […][…][…][…]
Expand the “Tools” page to include a number of missing tools, fix some styling issues and fix a number of stale/broken links. […][…][…][…][…][…]
Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In November, a number of changes were made by Holger Levsen, including:
Commit a rebuilder-worker.conf configuration for the o5 node. […]
Debian-related changes:
Grant jspricke and jochensp access to the o5 node. […][…]
Build the qemu package with the nocheck build flag. […]
Misc changes:
Adapt the update_jdn.sh script for new Debian trixie systems. […]
Stop installing the PostgreSQL database engine on the o4 and o5 nodes. […]
Prevent accidental reboots of the o4 node because of a long-running job owned by josch. […][…]
In addition, Mattia Rizzolo addressed a number of issues with reproduce.debian.net […][…][…][…]. And lastly, both Holger Levsen […][…][…][…] and Vagrant Cascadian […][…][…][…] performed node maintenance.
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:
Eli sends us something that's not quite a code sample, despite coming from code. It's not a representative line, because it's many lines. But it certainly is representative.
Here's the end of one of their code files:
});
}
}
);
});
}
)
);
}
});
}
}
);
});
I feel like someone heard that JavaScript could do functional programming and decided to write LISP. That's a lot of nested blocks. I don't know what the code looks like, but also I don't want to know what the code looks like.
Also, as someone who programs with a large font size, this is a special kind of hell for me.
Author: Travis Connor Sapp 5 days before present day… 10:38 AM, the sun is out, and Juan rests cozily on a rickety mattress. The normal person is up and doing work, endlessly living their boring 9 to 5, but this big fella starts his day around 11 AM. Juan grabs his yellow-stained hat, barely slides […]
Paladin's Hope is a fantasy romance novel and the third book of The
Saint of Steel series. Each book of that series features different
protagonists in closer to the romance series style than the fantasy series
style and stands alone reasonably well. There are a few spoilers for the
previous books here, so you probably want to read the series in order.
Galen is one of the former paladins of the Saint of Steel, left bereft
and then adopted by the Temple of the Rat after their god dies. Even more
than the paladin protagonists of the previous two books, he reacted very badly to that
death and has ongoing problems with nightmares and going into berserker
rages when awakened. As the book opens, he's the escort for a lich-doctor
named Piper who is examining a corpse found in the river.
The last of the five was the only one who did not share a certain
martial quality. He was slim and well-groomed and would be considered
handsome, but he was also extraordinarily pale, as if he lived his
life underground.
It was this fifth man who nudged the corpse with the toe of his boot
and said, "Well, if you want my professional opinion, this great
goddamn hole in his chest is probably what killed him."
As it turns out, slim and well-groomed and exceedingly pale is Galen's
type.
This is another paladin romance, this time between two men. It's almost
all romance; the plot is barely worth mentioning. About half of the book
is an exploration of a puzzle dungeon of the sort that might be fun in a
video game or tabletop RPG, but that I found rather boring and monotonous
in a novel. This creates a lot more room for the yearning and angst.
Kingfisher tends towards slow-burn romances. This romance is a somewhat
faster burn than some of her other books, but instead implodes into one of
the most egregiously stupid third-act breakups that I've read in a
romance plot. Of all the Kingfisher paladin books, I think this one was
hurt the most by my basic difference in taste from the author. Kingfisher
finds constant worrying and despair over being good enough for the
romantic partner to be an enjoyable element, and I find it incredibly
annoying. I think your enjoyment of this book will heavily depend on
where you fall on that taste divide.
The saving grace of this book are the gnoles, who are by far the best part
of this world. Earstripe, a gnole constable, is the one who found the
body that the book opens with and he drives most of the plot, such that it
is. He's also the source of the best banter in the book, which is full of
pointed and amused gnole observations about humans and their various
stupidities. Given that I was also grumbling about human stupidities for
most of the book, the gnole viewpoint and I got along rather well.
"God's stripes." Earstripe shook his head in disbelief. "Bone-doctor
would save some gnole, yes? If some gnole was hurt."
"Of course," said Piper. "If I could."
"And tomato-man would save some gnole?" He swung his muzzle toward
Galen. "If some gnole needed big human with sword?"
"Yes, of course."
Earstripe spread his hands, claws gleaming. "A gnole saves some
human. Same thing." He took a deep breath, clearly choosing his
words carefully. "A gnole's compassion does not require fur."
We learn a great deal more about gnole culture, all of which I found
fascinating, and we get a rather satisfying amount of gnole acerbic
commentary. Kingfisher is very good at banter, and dialogue in general,
which also smoothes over the paucity of detailed plot. There was no
salvaging the romance, at least for me, but I did at least like Piper, and
Galen wasn't too bad when he wasn't being annoyingly self-destructive.
I had been wondering a little if gay romance would, like
sapphic romance, avoid my dislike of
heterosexual gender roles. I think the jury is
still out, but it did not work in this book because Galen is so committed
to being the self-sacrificing protector who is unable to talk about his
feelings that he single-handedly introduced a bunch of annoying pieces of
the male gender role anyway. I will have to try that experiment with a
book that doesn't involve hard-headed paladins.
I have yet to read a bad T. Kingfisher novel, but I thought this one was
on the weaker side. The gnoles are great and kept me reading, but I wish
there had been a more robust plot, a lot less of the romance, and no
third-act breakup. As is, I recommend the other Saint of Steel books
over this one. Ah well.
An updated version of the corels package is
now on CRAN! The ‘Certifiably
Optimal RulE ListS (Corels)’ learner provides interpretable decision
rules with an optimality guarantee—a nice feature which sets it apart in
machine learning. You can learn more about corels at its UBC site.
The changes mostly concern maintenance for both the repository (such as continuous integration setup, badges, documentation links, …) and the package level (such as removing the no-longer-required C++ compilation standard setter, which now emits a NOTE at CRAN).
I am still here. Sadly while I battle this insane infection from my broken arm I got back in July, the hackers got my blog. I am slowly building it back up. Further bad news is I have more surgeries, first one tomorrow. Furthering my current struggles I cannot start my job search due to hospitalization and recovery. Please consider a donation. https://gofund.me/6e99345d
On the open source work front, I am still working on stuff, mostly snaps ( Apps 24.08.3 released )
Thank you everyone that voted me into the Ubuntu Community Council!
I am trying to stay positive, but it seems I can’t catch a break. I will have my computer in the hospital and will work on what I can. Have a blessed day and see you soon.
Fifteen years ago I blogged about a different SQUID. Here’s an update:
Fleeing drivers are a common problem for law enforcement. They just won’t stop unless persuaded—persuaded by bullets, barriers, spikes, or snares. Each option is risky business. Shooting up a fugitive’s car is one possibility. But what if children or hostages are in it? Lay down barriers, and the driver might swerve into a school bus. Spike his tires, and he might fishtail into a van—if the spikes stop him at all. Existing traps, made from elastic, may halt a Hyundai, but they’re no match for a Hummer. In addition, officers put themselves at risk of being run down while setting up the traps.
But what if an officer could lay down a road trap in seconds, then activate it from a nearby hiding place? What if—like sea monsters of ancient lore—the trap could reach up from below to ensnare anything from a MINI Cooper to a Ford Expedition? What if this trap were as small as a spare tire, as light as a tire jack, and cost under a grand?
Thanks to imaginative design and engineering funded by the Small Business Innovation Research (SBIR) Office of the U. S. Department of Homeland Security’s Science and Technology Directorate (S&T), such a trap may be stopping brigands by 2010. It’s called the Safe Quick Undercarriage Immobilization Device, or SQUID. When closed, the current prototype resembles a cheese wheel full of holes. When open (deployed), it becomes a mass of tentacles entangling the axles. By stopping the axles instead of the wheels, SQUID may change how fleeing drivers are, quite literally, caught.
Sometimes you have to look at the content of x509 certificate chains. Usually one finds them PEM encoded and concatenated in a text file.
Since the openssl x509 subcommand only decodes the first
certificate it will find in a file, I did something like this:
csplit -z -f 'cert' fullchain.pem '/-----BEGIN CERTIFICATE-----/' '{*}'
for x in cert*; do openssl x509 -in $x -noout -text; done
Apparently that's the "wrong" way, and the more appropriate way is to use the openssl crl2pkcs7 subcommand, even though we are not actually dealing with a certificate revocation list here.
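For reference, the commonly cited invocation looks roughly like this (the -nocrl flag tells crl2pkcs7 we are only feeding it certificates; the pkcs7 subcommand then prints every certificate in the resulting structure):
# wrap the whole chain in a PKCS#7 structure, then print all certificates in it
openssl crl2pkcs7 -nocrl -certfile fullchain.pem \
  | openssl pkcs7 -print_certs -text -noout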
I climbed on top of a mountain with a beautiful view, and when I started
readying my new laptop for a work call (as one does on
top of mountains), I realised that I couldn't right click and it kind of
spoiled the mood.
Clicking on the bottom right corner of my touchpad left-clicked. Clicking with two fingers left-clicked. Alt-clicking, Super-clicking, Control-clicking: all left-clicked.
clicking on different areas at the bottom of the touchpad
double or triple-tapping, as long as the fingers are not too far apart
Skippable digression:
I'm not sure why Gnome insists on following Macs for defaults, which are what people with non-Mac hardware are least likely to be used to.
In my experience, Macs are as arbitrarily awkward to use as anything else, but
they managed to build a community where if you don't understand how it works
you get told you're stupid. All other systems (including Gnome) have
communities where instead you get told (as is generally the case) that the
system design is stupid, which at least gives you some amount of validation
in your suffering.
Oh well.
How to configure right click
Surprisingly, this is not available in Gnome Shell settings. It can be found in gnome-tweaks: under "Keyboard & Mouse", "Mouse Click Emulation", one can choose between "Fingers" and "Area".
I tried both and went for "Area": I use right-drag a lot to resize windows, and
I couldn't find a way, at least with this touchpad, to make it work
consistently in "Fingers" mode.
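For what it's worth, gnome-tweaks appears to just flip a dconf key, so the same choice can be made from a terminal (a sketch, assuming the usual GNOME touchpad schema; check the allowed values first):
# list the allowed values for the click method, then pick one
gsettings range org.gnome.desktop.peripherals.touchpad click-method
gsettings set org.gnome.desktop.peripherals.touchpad click-method 'areas'   # or 'fingers'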
In January 2022, KrebsOnSecurity identified a Russian man named Mikhail Matveev as “Wazawaka,” a cybercriminal who was deeply involved in the formation and operation of multiple ransomware groups. The U.S. government indicted Matveev as a top ransomware purveyor a year later, offering $10 million for information leading to his arrest. Last week, the Russian government reportedly arrested Matveev and charged him with creating malware used to extort companies.
An FBI wanted poster for Matveev.
Matveev, a.k.a. “Wazawaka” and “Boriselcin” worked with at least three different ransomware gangs that extorted hundreds of millions of dollars from companies, schools, hospitals and government agencies, U.S. prosecutors allege.
Russia’s interior ministry last week issued a statement saying a 32-year-old hacker had been charged with violating domestic laws against the creation and use of malicious software. The announcement didn’t name the accused, but the Russian state news agency RIA Novosti cited anonymous sources saying the man detained is Matveev.
Matveev did not respond to requests for comment. Daryna Antoniuk at The Record reports that a security researcher said on Sunday they had contacted Wazawaka, who confirmed being charged and said he’d paid two fines, had his cryptocurrency confiscated, and is currently out on bail pending trial.
Matveev’s hacker identities were remarkably open and talkative on numerous cybercrime forums. Shortly after being identified as Wazawaka by KrebsOnSecurity in 2022, Matveev published multiple selfie videos on Twitter/X where he acknowledged using the Wazawaka moniker and mentioned several security researchers by name (including this author). More recently, Matveev’s X profile (@ransomboris) posted a picture of a t-shirt that features the U.S. government’s “Wanted” poster for him.
An image tweeted by Matveev showing the Justice Department’s wanted poster for him on a t-shirt. image: x.com/vxunderground
The golden rule of cybercrime in Russia has always been that as long as you never hack, extort or steal from Russian citizens or companies, you have little to fear of arrest. Wazawaka claimed he zealously adhered to this rule as a personal and professional mantra.
“Don’t shit where you live, travel local, and don’t go abroad,” Wazawaka wrote in January 2021 on the Russian-language cybercrime forum Exploit. “Mother Russia will help you. Love your country, and you will always get away with everything.”
Still, Wazawaka may not have always stuck to that rule. At several points throughout his career, Wazawaka claimed he made good money stealing accounts from drug dealers on darknet narcotics bazaars.
Cyber intelligence firm Intel 471 said Matveev’s arrest raises more questions than answers, and that Russia’s motivation here likely goes beyond what’s happening on the surface.
“It’s possible this is a shakedown by Kaliningrad authorities of a local internet thug who has tens of millions of dollars in cryptocurrency,” Intel 471 wrote in an analysis published Dec. 2. “The country’s ingrained, institutional corruption dictates that if dues aren’t paid, trouble will come knocking. But it’s usually a problem money can fix.”
Intel 471 says while Russia’s court system is opaque, Matveev will likely be open about the proceedings, particularly if he pays a toll and is granted passage to continue his destructive actions.
“Unfortunately, none of this would mark meaningful progress against ransomware,” they concluded.
Although Russia traditionally hasn’t put a lot of effort into going after cybercriminals within its borders, it has brought a series of charges against alleged ransomware actors this year. In January, four men tied to the REvil ransomware group were sentenced to lengthy prison terms. The men were among 14 suspected REvil members rounded up by Russia in the weeks before Russia invaded Ukraine in 2022.
Earlier this year, Russian authorities arrested at least two men for allegedly operating the short-lived Sugarlocker ransomware program in 2021. Aleksandr Ermakov and Mikhail Shefel (now legally Mikhail Lenin) ran a security consulting business called Shtazi-IT. Shortly before his arrest, Ermakov became the first ever cybercriminal sanctioned by Australia, which alleged he stole and leaked data on nearly 10 million customers of the Australian health giant Medibank.
In December 2023, KrebsOnSecurity identified Lenin as “Rescator,” the nickname used by the cybercriminal responsible for selling more than 100 million payment cards stolen from customers of Target and Home Depot in 2013 and 2014. Last month, Shefel admitted in an interview with KrebsOnSecurity that he was Rescator, and claimed his arrest in the Sugarlocker case was payback for reporting the son of his former boss to the police.
Ermakov was sentenced to two years probation. But on the same day my interview with Lenin was published here, a Moscow court declared him insane, and ordered him to undergo compulsory medical treatment, The Record’s Antoniuk notes.
The theme "Ceratopsian" by Elise Couper has been selected as the default
theme for Debian 13 "trixie". The theme is inspired by Trixie's (the fictional
character from Toy Story) frill and is also influenced by a previously used
theme called "futurePrototype" by Alex Makas.
After the Debian Desktop Team made the call for proposing themes, a total
of six choices were submitted. The desktop artwork poll was open to the
public, and we received 2817 responses ranking the different choices; Ceratopsian was ranked the winner among them.
We'd like to thank all the designers that have participated and have submitted
their excellent work in the form of wallpapers and artwork for Debian 13.
Congratulations, Elise, and thank you very much for your contribution to
Debian!
As often happens, Luka started some work but didn't get it across the finish line before a scheduled vacation. No problem: just hand it off to another experienced developer.
Luka went off for a nice holiday, the other developer hammered away at code, and when Luka came back, there was this lovely method already merged to production, sitting and waiting:
vvv(x, y)
{
    return typeof x[y] !== 'undefined';
}
"What is this?" Luka asked.
"Oh, it's a helper function to check if a property is defined on an object."
Luka could see that much, but that didn't really answer the question.
First, it wasn't the correct way to check if a property existed. Mind you, actually doing those checks in JavaScript is a complicated minefield because of prototype inheritance, but between the in operator, the hasOwn and hasOwnProperty methods, there are simpler and cleaner ways to get there.
But of course, that wasn't what got anyone's attention. What caught Luka up was the name of the function: vvv. And not only was it a terrible name, thanks to the other dev's industriousness, it was now called all over the codebase. Even places where a more "correct" call had been used had been refactored to use this method.
"But it's so brief, and memorable," the developer said.
Luka was vvvery upset by that attitude.
Author: Bryant Benson Dear Henry, Sometimes I wish I never met you. When you found me washed up on the beach in my final hour I was something different that could have stung or bitten, but you took me into your home and gave me a safe place to die. I truly wish it wasn’t […]
Phishing attacks increased nearly 40 percent in the year ending August 2024, with much of that growth concentrated at a small number of new generic top-level domains (gTLDs) — such as .shop, .top, .xyz — that attract scammers with rock-bottom prices and no meaningful registration requirements, new research finds. Meanwhile, the nonprofit entity that oversees the domain name industry is moving forward with plans to introduce a slew of new gTLDs.
Image: Shutterstock.
A study on phishing data released by Interisle Consulting finds that new gTLDs introduced in the last few years command just 11 percent of the market for new domains, but accounted for roughly 37 percent of cybercrime domains reported between September 2023 and August 2024.
Interisle was sponsored by several anti-spam organizations, including the Anti-Phishing Working Group (APWG), the Coalition Against Unsolicited Commercial Email (CAUCE), and the Messaging, Malware, and Mobile Anti-Abuse Working Group (M3AAWG).
The study finds that while .com and .net domains made up approximately half of all domains registered in the past year (more than all of the other TLDs combined) they accounted for just over 40 percent of all cybercrime domains. Interisle says an almost equal share — 37 percent — of cybercrime domains were registered through new gTLDs.
Spammers and scammers gravitate toward domains in the new gTLDs because these registrars tend to offer cheap or free registration with little to no account or identity verification requirements. For example, among the gTLDs with the highest cybercrime domain scores in this year’s study, nine offered registration fees for less than $1, and nearly two dozen offered fees of less than $2.00. By comparison, the cheapest price identified for a .com domain was $5.91.
Currently, there are around 2,500 registrars authorized to sell domains by the Internet Corporation for Assigned Names and Numbers (ICANN), the California nonprofit that oversees the domain industry.
The top 5 new gTLDs, ranked by cybercrime domains reported. Image: Interisle Cybercrime Supply Chain 2024.
Incredibly, despite years of these reports showing phishers heavily abusing new gTLDs, ICANN is shuffling forward on a plan to introduce even more of them. ICANN’s proposed next round envisions accepting applications for new gTLDs in 2026.
John Levine is author of the book “The Internet for Dummies” and president of CAUCE. Levine said adding more TLDs without a much stricter registration policy will likely further expand an already plentiful greenfield for cybercriminals.
“The problem is that ICANN can’t make up their mind whether they are the neutral nonprofit regulator or just the domain speculator trade association,” Levine told KrebsOnSecurity. “But they act a lot more like the latter.”
Levine said the vast majority of new gTLDs have a few thousand domains — a far cry from the number of registrations they would need just to cover the up-front costs of operating a new gTLD (~$180,000-$300,000). New gTLD registrars can quickly attract customers by selling domains cheaply to customers who buy domains in bulk, but that tends to be a losing strategy.
“Selling to criminals and spammers turns out to be lousy business,” Levine said. “You can charge whatever you want on the first year, but you have to charge list price on domain renewals. And criminals and spammers never renew. So if it sounds like the economics makes no sense it’s because the economics makes no sense.”
In virtually all previous spam reports, Interisle found the top brands referenced in phishing attacks were the largest technology companies, including Apple, Facebook, Google and PayPal. But this past year, Interisle found the U.S. Postal Service was by far the most-phished entity, with more than four times the number of phishing domains as the second most-frequent target (Apple).
At least some of that increase is likely from a prolific cybercriminal using the nickname Chenlun, who has been selling phishing kits targeting domestic postal services in the United States and at least a dozen other countries.
Interisle says an increasing number of phishers are eschewing domain registrations altogether, and instead taking advantage of subdomain providers like blogspot.com, pages.dev, and weebly.com. The report notes that cyberattacks hosted at subdomain provider services can be tough to mitigate, because only the subdomain provider can disable malicious accounts or take down malicious web pages.
“Any action upstream, such as blocking the second-level domain, would have an impact across the provider’s whole customer base,” the report observes.
Interisle tracked more than 1.18 million instances of subdomains used for phishing in the past year (a 114 percent increase), and found more than half of those were subdomains at blogspot.com and other services operated by Google.
“Many of these services allow the creation of large numbers of accounts at one time, which is highly exploited by criminals,” the report concludes. “Subdomain providers should limit the number of subdomains (user accounts) a customer can create at one time and suspend automated, high-volume automated account sign-ups – especially using free services.”
"We use a three tier architecture," said the tech lead on Cristian's new team. "It helps us keep concerns separated."
This statement, as it turned out, was half true. They did divide the application into three tiers- a "database layer", a "business layer", and a "presentation layer". The "database layer" was a bunch of Java classes. The "business layer" was a collection of Servlets. And the "presentation layer" was a pile of JSP files.
What they didn't do, however, was keep the concerns separated.
Here's some code from their database layer:
public synchronized StringBuffer getStocTotGest(String den, String gest) {
StringBuffer sb = new StringBuffer("<table width=\"100%\" border=\"1\" cellspacing=\"1\" cellpadding=\"1\">" + "<tr bgcolor=\"#999999\">" + "<td>Denumire</td>" + "<td>Cant</td>"
+ "<td>PretVanz</td>" + "</tr>");
try {
ResultSet rs = connectionManager
.executeQuery("select (if(length(SUBSTRING(den,1,instr(den,'(')-1))>0,SUBSTRING(den,1,instr(den,'(')-1),den)) as den,um,pret_vinz,sum(stoc) as stoc from stmarfzi_poli where den like '"
+ den + "%' " + gest + " group by den order by den");
while (rs.next()) {
sb.append("<tr><td>" + rs.getString("den") + "</td>");
sb.append("<td><div align=\"right\">" + threeDecimalPlacesFormat.format(rs.getDouble("stoc")) + " " + rs.getString("um") + "</div></td>");
sb.append("<td><div align=\"right\">" + teoDecimalPlacesFormat.format(rs.getDouble("pret_vinz")) + "</div></td></tr>");
}
sb.append("</table>");
} catch (Exception ex) {
ex.printStackTrace();
}
return sb;
}
I guess a sufficiently motivated programmer can write PHP in any language.
This just has a little bit of everything in it, doesn't it? There's the string-munged HTML generation in the database layer. The HTML is also wrong, as header fields are output with td tags, instead of th. There's the SQL injection vulnerability. There's the more-or-less useless exception handler. It's synchronized even though it's not doing anything thread unsafe. It's truly a thing of beauty, at least if you don't know what beauty is and think it means something horrible.
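For contrast, here is a minimal sketch of what a database-layer method could look like if it confined itself to data access: a parameterized query instead of string concatenation, data returned instead of HTML, and exceptions propagated instead of swallowed. The table and column names come from the snippet above; the StockRow record, the connection field, and the treatment of gest as a plain column filter (the original splices it in as a raw SQL fragment) are my assumptions for illustration, not the original codebase's API.

import java.sql.*;
import java.util.*;

// Hypothetical row type; field names mirror the columns used above.
record StockRow(String den, double stoc, String um, double pretVinz) {}

public List<StockRow> getStocTotGest(String den, String gest) throws SQLException {
    // Parameterized query: no string-concatenated SQL injection.
    // NOTE: treating gest as a simple equality filter is an assumption; the
    // original code spliced it in as a raw SQL fragment.
    String sql = "select den, um, pret_vinz, sum(stoc) as stoc "
               + "from stmarfzi_poli where den like ? and gest = ? "
               + "group by den order by den";
    List<StockRow> rows = new ArrayList<>();
    // 'connection' is an assumed java.sql.Connection field; the original used a custom connectionManager.
    try (PreparedStatement ps = connection.prepareStatement(sql)) {
        ps.setString(1, den + "%");
        ps.setString(2, gest);
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                rows.add(new StockRow(rs.getString("den"), rs.getDouble("stoc"),
                                      rs.getString("um"), rs.getDouble("pret_vinz")));
            }
        }
    }
    return rows; // rendering (th vs. td, colors, CSS) belongs in the presentation layer
}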
This function was used in a few places. It was called from a few servlets in the "business layer", where the resulting StringBuffer was dumped into a session variable so that JSP files could access it. At least, that was for the JSP files which didn't invoke the function themselves- JSP files which mixed all the various layers together.
Cristian's first task in the code base was changing the background colors of all of the rendered table headers. Since, as you can see, they weren't using CSS to make this easy, that involved searching through the entire codebase, in every layer, to find all the places where maybe a table was generated.
I assume that Cristian is still working on that, and will be working on it for some time to come.
Author: Majoki Click. Split. The metal gleams. That was all. Decades of research. Years of development. For this. Click. Split. The metal gleams. Hiroshi was toast. His head to be delivered not on a silver platter, but on a silicon wafer to Project Director. He was doomed. Project Director did not accept failure. Project Director […]
Review: Astrid Parker Doesn't Fail, by Ashley Herring Blake
Series: Bright Falls #2
Publisher: Berkley Romance
Copyright: November 2022
ISBN: 0-593-33644-5
Format: Kindle
Pages: 365
Astrid Parker Doesn't Fail is a sapphic romance novel and a sequel
to Delilah Green Doesn't Care. This is
a romance style of sequel, which means that it spoils the previous book
but involves a different set of protagonists, one of whom was a supporting
character in the previous novel.
I suppose the title is a minor spoiler for Delilah Green Doesn't
Care, but not one that really matters.
Astrid Parker's interior design business is in trouble. The small town of
Bright Falls doesn't generate a lot of business, and there are limits to
how many dentist office renovations she's willing to do. The
Everwood Inn is her big break: Pru Everwood has finally agreed to remodel
and, even better, Innside America wants to feature the project.
The show always works with local designers, and that means Astrid.
National TV exposure is just what she needs to turn her business around
and avoid an unpleasant confrontation with her domineering, perfectionist
mother.
Jordan Everwood is an out-of-work carpenter and professional fuck-up.
Ever since she lost her wife, nothing has gone right either inside or
outside of her head. Now her grandmother is renovating the favorite place
of her childhood, and her novelist brother had the bright idea of bringing
her to Bright Falls to help with the carpentry work. The remodel and the
HGTV show are the last chance for the inn to stay in business and stay in
the family, and Jordan is terrified that she's going to fuck that up too.
And then she dumps coffee all over the expensive designer dress of a
furious woman because she wasn't watching where she was going, and
that woman turns out to be the designer of the Everwood Inn renovation. A
design that Jordan absolutely loathes.
The reader met Astrid in Delilah Green Doesn't Care (which you
definitely want to read first). She's a bit better than she was there,
but she's still uptight and unhappy and determined not to think too hard
about why. When Jordan spills coffee down her favorite dress in their
first encounter, shattering her fragile professional calm, it's not a
meet-cute. Astrid is awful to her. Her subsequent regret, combined with
immediately having to work with her and the degree to which she finds
Jordan surprisingly attractive (surprising in part because Astrid thinks
she's straight), slowly crack open Astrid's too-controlled life.
This book was, once again, just compulsively readable. I read most of it
the same day that I started it, staying up much too late, and then
finished it the next day. It also once again made me laugh in delight at
multiple points. I am a sucker for stories about someone learning how to
become a better person, particularly when it involves a release of
anxiety, and oh my does Blake ever deliver on that. Jordan's arc is more
straightforward than Astrid's — she just needs to get her confidence back
— but her backstory is a lot more complex than it first appears, including
a morally ambiguous character who I would hate in person but who I admired
as a deft and tricky bit of characterization.
The characters from Delilah Green Doesn't Care of course play a
significant role. Delilah in particular is just as much of a delight here
as she was in the first book, and I enjoyed seeing the development of her
relationship with her step-sister. But the new characters, both the HGTV
film crew and the Everwoods, are also great. I think Blake has a real
knack for memorable, distinct supporting characters that add a lot of
depth to the main romance plot.
I thought this book was substantially more sex-forward than Delilah
Green Doesn't Care, with some lust at first or second sight, a bit more
physical description of bodies, and an extended section in the middle of
the book that's mostly about sex. If this is or is not your thing in
romance novels, you may have a different reaction to this book than the
previous one.
There is, unfortunately, another third-act break-up, and this one annoyed
me more than the one in Delilah Green Doesn't Care because it felt
more unnecessary and openly self-destructive. The characters felt like
they were headed towards a more sensible and less dramatic resolution, and
then that plot twist caught me by surprise in an unpleasant way. After
two books, I'm getting the sense that Blake has a preferred plot arc, at
least in this series, and I wish she'd varied the story structure a bit
more. Still, the third-act conflict was somewhat believable and the
resolution was satisfying enough to salvage it.
If it weren't for some sour feelings about the shape of that plot climax,
I would have said that I liked this book even better than Delilah
Green Doesn't Care, and that's a high bar. This series is great, and I
will definitely be reading the third one. I'm going to be curious how
that goes since it's about Iris, who so far has worked better for me as a
supporting character than a protagonist. But Blake has delivered
compulsively readable and thoroughly enjoyable books twice now, so I'm
definitely here for the duration.
If you like this sort of thing, I highly recommend this whole series.
Followed by Iris Kelly Doesn't Date in the romance series sense,
but as before this book is a complete story with a satisfying ending.
I had the pleasure of attending the MiniDebConf in Toulouse, which
featured a range of engaging talks, complementing those from the
recent MiniDebConf in Cambridge. Both events were preceded by a DebCamp,
which provided a valuable opportunity for focused work and
collaboration.
DebCamp
During these events, I participated in numerous technical discussions on
topics such as maintaining long-neglected packages, team-based
maintenance, FTP master policies, Debusine, and strategies for
separating maintainer script dependencies from runtime dependencies,
among others. I was also fortunate that members of the Publicity Team
attended the MiniDebCamp, giving us the opportunity to meet in person
and collaborate face-to-face.
Independent of the ongoing lengthy discussion on the
Debian Devel mailing list, I encountered the perspective that unifying
Git workflows might be more critical than ensuring all packages are managed
in Git. While I'm uncertain whether these two questions--adopting Git as
a universal development tool and agreeing on a common workflow for its
use--can be fully separated, I believe it's worth raising this topic for
further consideration.
Attracting newcomers
In my own talk, I regret not leaving enough time for questions--my
apologies for this. However, I want to revisit the sole question raised,
which essentially asked: Is the documentation for newcomers sufficient
to attract new contributors? My immediate response was that this
question is best directed to new contributors themselves, as they are in
the best position to identify gaps and suggest improvements that could
make the documentation more helpful.
That said, I'm personally convinced that our challenges extend beyond
just documentation. I don't get the impression that newcomers are lining
up to join Debian only to be deterred by inadequate documentation. The
issue might be more about fostering interest and engagement in the first
place.
My personal impression is that we sometimes fail to convey that Debian
is not just a product to download for free but also a technical
challenge that warmly invites participation. Everyone who respects our
Code of Conduct will find that Debian is a highly diverse community,
where joining the project offers not only opportunities for technical
contributions but also meaningful social interactions that can make the
effort and time truly rewarding.
In several of my previous talks (you can find them on my talks page
–just search for "team," and don't be deterred if you see
"Debian Med" in the title; it's simply an example), I emphasized that
the interaction between a mentor and a mentee often plays a far more
significant role than the documentation the mentee has to read. The key
to success has always been finding a way to spark the mentee's interest
in a specific topic that resonates with their own passions.
Bug of the Day
In my presentation, I provided a brief overview of the Bug of the
Day initiative, which was launched with the aim of demonstrating how to
fix bugs as an entry point for learning about packaging. While the
current level of interest from newcomers seems limited, the initiative
has brought several additional benefits.
I must admit that I'm learning quite a bit about Debian myself. I often
compare it to exploring a house's cellar with a flashlight –you uncover
everything from hidden marvels to things you might prefer to discard.
I've also come across traces of incredibly diligent people who have
invested their spare time polishing these hidden treasures (what we call
NMUs). The janitor, a service in Salsa that automatically updates
packages, fits perfectly into this cellar metaphor, symbolizing the
ongoing care and maintenance that keep everything in order. I hadn't
realized the immense amount of silent work being done behind the
scenes--thank you all so much for your invaluable QA efforts.
Reproducible builds
It might be unfair to single out a specific talk from Toulouse, but I'd
like to highlight the one on reproducible builds. Beyond its
technical focus, the talk also addressed the recent loss of Lunar, whom
we mourn deeply. It served as a tribute to Lunar's contributions
and legacy. Personally, I've encountered packages maintained by Lunar
and bugs he had filed. I believe that taking over his packages and
addressing the bugs he reported is a meaningful way to honor his memory
and acknowledge the value of his work.
Advent calendar bug squashing
I’d like to promote an idea originally introduced by Thorsten Alteholz,
who in 2011 proposed a Bug Squashing Advent Calendar for the Debian
Med team. (For those unfamiliar with the concept of an Advent Calendar,
you can find an explanation on Wikipedia.) While the original
version included a fun graphical element —which we’ve had to set aside
due to time constraints (volunteers, anyone?)— we’ve kept the tradition
alive by tackling one bug per day from December 1st to 24th each year.
This initiative helps clean up issues that have accumulated over the
year.
Regardless of whether you celebrate the concept of Advent, I warmly
recommend this approach as a form of continuous bug-squashing party for
every team. Not only does it contribute to the release readiness of your
team’s packages, but it’s also an enjoyable and bonding activity for
team members.
Best wishes for a cheerful and productive December
Alexandra inherited a codebase that, if we're being kind, could be called "verbose". Individual functions routinely cross into multiple thousands of lines, with the longest single function hitting 4,000 lines of code.
Very little of this is because the problems being solved are complicated, and much more of it is because people don't understand how anything works.
For example, in this C++ code, they have a vector of strings. The goal is to create a map where the keys are the strings from the vector, and the values are more strings, derived from a function call.
Essentially, what they wanted was:
for (std::string val : invec)
{
umap[val] = lookupValue(val);
}
This would have been the sane, obvious way to do things. That's not what they did.
I won't pick on names here, as they're clearly anonymized. But let's take a look at the approach they used.
They create their map, and then create a new vector- a vector which is a pair<string, string*>- a string and a pointer to a string. Already, I'm confused by why any of this is happening, but let's press on and hope it becomes clear.
We iterate across our input vector; that much I get. Then we create a key in the map and give it an empty string as a value. Then we create a pair out of our key and our pointer to that empty string. That's how we populate our idxvec vector.
Once we've looped across all the values once, we do it again. This time, we pull out those pairs, and set the value at the pointer equal to the string returned by lookupValue.
Which leads us all to our favorite letter of the alphabet: WHY?
I don't know. I also am hesitant to comment too much on the memory management and ownership issues here, as with the anonymization, there may be some reference management that got lost. But the fact that we're using bare pointers certainly makes this code more fraught than it needed to be. And, given how complex the STL data structures can be, I think we can also agree that passing around bare pointers to memory inside those structures is a recipe for disaster, even in simple cases like this.
What I really enjoy is that they create a vector of pairs, without ever seeming to understand that a list of pairs is essentially what a map is.
In conclusion: can we at least agree that, from now on, we won't iterate across the same values twice? I think about 15% of WTFs would go away if we all followed that rule.
Oh, wait, no. People who could understand rules like that aren't the ones writing this kind of code. Forget I said anything.
Author: Julian Miles, Staff Writer “Did you see that, Pete?” I nod. “Just another rocket from Abaella.” Said on the news it’s going to be in range of Earth for another month. “It’s bigger than that, Pete.” Amanda sounds unhappy. I wander out onto the porch in time to see a stray moon level Sacramento. […]
I didn’t plan to go to Oklahoma, but I went to Oklahoma.
My day job is providing phone tech support to people in offices who use my boss’s customer-relationship management software. In theory, I can do that job from anywhere I can sit quietly on a good Internet connection for a few hours a day while I’m on shift. It’s a good job for an organizer, because it means I can go out in the field and still pay my rent, so long as I can park a rental car outside of a Starbucks, camp on their WiFi, and put on a noise-canceling headset. It’s also good organizer training because most of the people who call me are angry and confused and need to have something difficult and technical explained to them.
My comrades started leaving for Oklahoma the day the Water Protector camp got set up. A lot of them—especially my Indigenous friends—were veterans of the Line 3 Pipeline, the Dakota Access Pipeline, and other pipeline fights, and they were plugged right into that network.
The worse things got, the more people I knew in OK. My weekly affinity group meeting normally had twenty people at it. One week there were only ten of us. The next week, three. The next week, we did it on Zoom (ugh) and most of the people on the line were in OK, up on “Facebook Hill,” the one place in the camp with reliable cellular data signals.
Author: Rick Tobin Jason continued to turn a small half-fried reptile in a solar often. Cooking took longer on this world with its distant red sun. Bursts of drifting dust blew over him and his two companion portal flyers. Emily was rinsing her hair delicately with precious water from the tiny oasis near the rocky […]
Author: Palmer Caine Between gates things get weird. Perception splinters to span myriad levels, too many to navigate, too many to understand. Like of Galaxy of mirrors, everything reflected infinitesimally. Or so it seems. Maybe a fly with its many segmented eyes could fashion a path, but not mere humanity, and certainly not me. The […]
At the end I'll cite some book and SF news, including some fun! Like part two of my comedy, The Ancient Ones.
Only now we'll return to the topic on everyone’s mind… WTF just happened? And what should we do now?
We'll start with Nathan Gardels – editor & publisher of the excellent Noēma Magazine - who always offers interesting observations. Though, he often triggers my infamously ornery “Yes… but…” reflex and a too-long response. (Several posts here originated in remise to Nathan.)
In a recent missive - "How to Soul-Search as a Losing Party" - appraising what Democrats did wrong, Gardels points out many valid things… while reaching a conclusion that I deem spectacularly mistaken. Taking note of how so many Black and Hispanic males abandoned the old, Rooseveltean Coalition, he joins with so many others out there, urging a campaign of gentle conciliation.
Nathan cites a raft of earnest intellectuals, as well as deliberative ‘citizens panels’ that have – in Europe – shown some success at getting participants to bridge the dogmatic gaps that divided them. Indeed, such experiments have been terrific! It is the mature approach. And it works…
...with those who are already drawn far enough into the process to leave behind their familiarly comfortable, polemical echo chambers. Forsaking today’s media Nuremberg rallies, in order to participate.
“(O)nce informed and empathetically exposed to the concerns of others, participants move from previously held dispositions toward a consensus.”
Indeed, that participation can be widespread! As in the Truth and Reconciliation process led by Nelson Mandela, in South Africa, and similar endeavors in Argentina and Chile, wherein vast swathes of the public – on all sides – realized they must do this… or risk losing everything.
As for it happening in today’s USA? Well, I can think of one actual, real world example.
All across the nation, grand juries are selected from randomly-chosen voters and vetted for general reasonableness. In a majority of American counties, the resulting panels consist largely of fairly typical white retirees. And yet, it has been exactly those red county white retirees who – after exposure to arguments and copious evidence – have indicted so many Republican politicians and associates for a vast range of crimes.
I’d argue that is a kind of fact-based consensus-building, even if it leads to some well-deserved pain by the fomenters of one side.
That is the first of many reasons why the masters of that side will have no interest in allowing wider versions of consensus building.
I do not see any hope of such a thing happening in today’s America, at any kind of scale.
…with one barely plausible exception.
== Get the kompromat-compromised to trade 'Truth' for 'Reconciliation' ==
It might begin with one brave act. One so shocking and disruptive that it could rattle the echo chambers and draw millions of ostrich heads out of media holes. It might happen even right now, at the tail end of 2024, if Joe Biden were to offer the incentive of pardons/clemency, in order to draw forward any politicians in DC to admit that they are snared by blackmail.
As I say elsewhere, the pervasiveness of widespread blackmail in Washington is widely known in counter-intelligence circles. Honeypot entrapment of western elites has long been a specialty of Russian intel services – Okhrana, Checka, NKVD, KGB and FSB – all the way back to czarist times. Moreover, three Republican Congress members have recently attested to it likely being widespread among their GOP colleagues.
And hence, perhaps the incentive of presidential clemency just might be enough to draw some heroic – or simply fed-up – blackmail victims into cleansing light. And once a few have done so, others might follow, from all parties.
And yes, I do believe it’s one path that could lead to a Truth & Reconciliation process in America.
On the other hand, could T&R be achieved by preaching for a nationwide flow of commensal consensus, based upon building touchy-feely ‘mutual respect’ and listening?
Now?
That is fantasy.
Especially at this moment.
Because we have nothing to offer to those who are getting exactly what they want, right now.
You know perfectly well what that is, if you ask around, or follow social media at all. There is one voluptuous satisfaction that tens of millions of core MAGA folks seek – and are getting – that fills them with giddy joy, above all. To drink our tears.
If you do not know this, then you really, really need to get out more.
Anyone who thinks they can placate that with ‘can we all just get along?’ has no memory of the middle school playground, where we learned one of the deepest expressions of human nature -- from bullies, whose greatest joy came from hearing nerdy victims cry out - “Can we talk this out?”
== Twin prescriptions that are guaranteed to fail ==
Today’s Chasm of Political Recriminations within Blue America appears to be similarly unbridgeable.
First there’s a left wing that wants only to double down exactly upon a raft of combative identity stances that didn’t work…
(Abortion! Racism! Pronouns! Shun Bill Maher! Forget the economy; it’s all about abortion! And did I mention abortion? And abortion!)…
… vs. those murmuring “we need to reach out for consensus!” Consensus with those who have openly declared hatred of every single fact-using profession in America, along with universities, science, the civil service, the FBI and even the U.S. military officer corps.
To be clear, I am not rejecting consensus building! There have been times when rational politics used to be about negotiation, and those days may come again.
Please. If you read and grasp nothing else here, understand the following history lesson.
In olden times, Republican and Democratic legislators would socialize and get to know, rather than demonize, each other. Their kids went to the same schools! That is, until Dennis “friend to boys” Hastert established a rule (look it up) that GOP representatives must stash their families in the Home District and spend as little time as possible in Washington. And - above all - demonize those on the other side of the aisle.
During some previous eras, a president was able to negotiate – even horse-trade – for a few votes needed by this or that nominee. And each appointment was considered separately.
This was true even as late as the Speakership of Newt Gingrich who, for all of his fiery, right wingism, was there to negotiate and to pass legislation needed by the country. Hence we got Welfare Reform and the Budget Act and Clinton Surpluses.
Alas, at that point Karl Rove’s program to expand gerrymandering shifted the locus of power in hundreds of districts, away from the General Election over to district primaries. Primaries in which radical partisans gained outsized sway. It happened in both parties, but especially in the GOP. Threats of ‘being primaried’ became fierce tools to enforce uniformity.
(There are ways to defeat this! Decisively, in fact. Methods that don’t even require legislation. One simple, nationwide information campaign could destroy the effectiveness of Primary Radicalization… and no party politician will discuss it.)
== The roots of our present political impasse ==
This transformation reached fruition with the 1998 Congressional putsch, when Newt was jettisoned without so much as a thank you and replaced by a later-convicted child predator, whose “Hastert Rule” has ever since declared a political death sentence for any Republican who – ever again – actually negotiates with Democrats.
This resulted in the most tightly disciplined party and politburo America ever saw. (And some of the laziest, worst Congresses in U.S. history. Only once in the last 28 years has there been a session that passed needed legislation that directly resulted in major benefits for the nation.)
How effective is Hastert-Discipline? No hypocrisy is too great. As when GOP Senate Majority Leader Mitch McConnell refused even to meet with Obama nominees more than 13 months before the next election… but hurried to confirm Trump’s final appointments one month before Biden took office. Even “deeply concerned” Senators Collins and Murkowski get back in line at the slightest warning look from Trump or from Trump’s Potemkin puppeteer.
And so… amid all those highly refined tools of fanaticism, radicalization and discipline-enforcement… are we somehow supposed to seek consensus, when every single incentive is designed to thwart it?
== Bitter partisanship is a recurring American norm ==
Again and again, I am appalled by an unwillingness by our brainy, punditry castes ever to look at history.
Like the 6000 years when 99% of human societies fell into drearily similar patterns of feudalism, dominated by male bullies who enforced power based on an inherited ruling class.
Or how the American Experiment - in escaping feudalism - has experienced rhythmic pulses of cultural strife, with pretty much similar casts of characters, across 240-years.
Or how Franklin Delano Roosevelt forged an alliance of rich, middle and poor that rendered Marxist notions of class war obsolete for a while… until Old Karl was lately revived to fresh pertinence by those who forget.
This latest phase of the recurring U.S. Civil War goes far beyond simply snaring the GOP political caste, as we saw in the previous section. It has been vital to re-create the 1860s alliance of poor whites with their rich overlords, in shared hatred of modernists. This required perfection of masturbatory media, offering in-group solidarity based on a Cultural Schism that has divided America since its inception.
(Look up how in 1850s plantation-lords arranged to burn every southern newspaper that did not hew to the slavocracy line.)
Want a keen insight about all this from a brilliant science fiction author? No, I mean the revered (if somewhat libertarian) Robert A. Heinlein, who describes a recurring American illness. In projecting a future America dominated by religious fundamentalism, he adds:
"Throw in a Depression for good measure, promise a material heaven here on earth, add a dash of anti-Semitism, anti-Catholicism, anti-Negrosim, and a good large dose of anti-“furriners” in general and anti-intellectuals here at home, and the result might be something quite frightening – particularly when one recalls that our voting system is such that a minority distributed as pluralities in enough states can constitute a working majority in Washington."
Excuse me. From FDR to LBJ to Clinton and Obama, rural America has received generous largesse that transformed ‘hick’ Southern and Appalachian states into modern hubs, surrounded by comfortable towns that – under Biden – just received huge waves of infrastructure repair and high-speed Internet. Unemployment is super-low and inflation has fallen.
Did the Harris campaign fail to make all that clear? Of course they did. And that failure was godawful.
But nothing we try, no statistical proofs… and certainly no ‘outreach and listen’ campaign… ever stood a chance against the drug-like power of sanctimony. The volcanic flows of ingrate-hate pouring from Trumpian America, toward…
… toward whom?
Leftists claim that the hated groups are races/genders etc. And while there is some of that, their obsession is - in its own right - poorly based sanctimony-delusion, delusionally insane.
Test it! Just watch Fox some evening and count the number of minutes spent spewing outright racism or repression of gender variety, or attacking the poor.
All of that is as a droplet next to tsunamis of bile aimed at … nerds. At fact professions. At civil servants. At the FBI and intel agencies. At the U.S. military officer corps. At exactly those who are targeted by Project 2025.
Elsewhere I go into the WHY of this open and insatiable hatred of every single fact-wielding profession. It's exactly the same cultural phenomenon as when Southern white males supported King George against city merchants… and supported slavocrat plantation lords, their actual class enemies, against urban northern sophisticates. And supported Gilded Age plutocrats against the original Progressives…
…and who now support today’s lucre-oligarchy against ‘smug university-smartypants know-it-alls’. The professionals who stand in the way of feudalism’s return.
(Just watch who Trump goes after… and how the red folk who you want us to ‘reach out to and understand’ will cry out gleefully, with every shout of nerdy pain.)
== Defend what they most avidly seek to destroy ==
Can such masturbatory joy at defeating all fact people be assuaged with ‘reaching out’ sessions seeking ‘consensus’?
Okay, sure. Give it a try. It seems worthwhile! I might be wrong!
It doesn’t always have to involve violence! In fact, only one of those earlier phases was truly violent. And a couple were resolved by genius politicians like FDR!
But in this recurring madness, what never worked was supplication. Or looking weak.
What's worked is the same thing that caused bullies on the playground to step up from the dust, stare at the blood they just wiped from their noses, and go “Huh! I guess you aren’t meat, after all. Wanna come over and play X-Box?”
But sure. Read Nathan G's editorial in Noema! As usual, it is articulate and knowledgeable and persuasive. So let's by all means assign some folks to give 'consensus-building' a try! Go with the carrots that have never worked. But maybe this time.
Meanwhile, I plan to continue offering sticks.
Tools for fact-folks to use.
Tools that establishment politicians have never-ever-ever actually tried. At least none since FDR and LBJ.
PREVIOUSLY… we met Commander Alvin Montessori, ‘human advisor’ aboard the exploration vessel Clever Gamble, a mighty ship crewed mostly by demmies, a species who learned star travel from Earthlings – for which the galaxy is having some trouble forgiving us.
In orbit above a new world, the demmie commander – Captain Ohm – demands “Are they over 16 on the Turgenev Scale?”
When Alvin nods, Ohm cries out:
“Then we’re going down! Let’s slurry!”
**
Alliance spacecraft look strange to the uninitiated.
Till recently, most starfaring races voyaged in efficient, globelike vessels, with small struts symmetrically arranged for the hyperdrive anchors. Transport to and from a planetary surface took place via orbital elevator at advanced worlds, or else by sensible little shuttles.
Like any prudent person, I’d be far happier traveling that way, but I try to hide the fact, and you students should too. Demmies cannot imagine why everyone doesn’t love slurry transport as much as they do. So, you can expect it to become the principal short-range system near all Alliance worlds.
It’s not so bad. After the first hundred or so times. Trust me. You can get used to anything.
As a demmie-designed exploration ship, the Clever Gamble looks like nothing else in the known universe. There are typically garish dem-style drive struts, looking like frosting swirls on some manic baker’s confection. These are linked to a surprisingly efficient and sensible engineering pod, which then clashes with a habitation module resembling some fairytale castle straight out of Hans Christian Andersen.
Then there is the Reel.
The Reel is a gigantic, protruding disk that takes up half the mass and volume of the ship, all in order to lug a prodigious, unbelievable hose all over the galaxy, frightening comets and intimidating the natives wherever we go. This conduit was already half-deployed by the time the ship’s artificer and healer met us in the slurry room. Through the viewer, we could see a tapering line descend toward the planet’s surface, homing in on a selected landing site.
The captain hopped about, full of ebullient energy. For the record, I reminded him that, contrary to explicit rules and common sense, the descent party once again consisted of the ship’s top four officers, while a fully-trained xenology team waited on standby, just three decks below.
“Are you kidding?” he replied. “I served on one of those teams, long ago. Boringest time I ever had.”
“But the thrill of contacting alien…”
“What contact? All’s we did was sit around while the top brass went down to all the new planets and did all the fighting and peacemaking and screwing. Well, it’s my turn now. Let ’em stew like I did!” He whirled to the reel operator. “Hose almost ready?”
“Aye sir. The Nozzle End has been inserted behind some shrubs in what looks like a park, in their biggest city.”
I sighed. This was not an approach I would have chosen. But most of the time you just have to go with the flow. It really is implacable. And things often turn out all right in the end. Surprisingly often.
The Captain rubbed his hands, raising visible sparks of static electricity. “Good. Then let’s see what’s down there!”
What can I say? Enthusiasm always was his most compelling trait. Ohm truly is hard to resist. Resignedly, I followed my leader to the dissolving room.
We were met outside by Ensign Nota Taken, who offered Ohm a tube to hold his non-organic tools. While the captain handed over his laser pistol and communicator, I was assisted by my own deputy – apprentice-advisor Frieder Koch – fresh out of Earth’s Academy and one of only ten humans aboard the Clever Gamble.
“Stay close to Commander Talon,” I murmured to Frieder, referring to the demmie officer left in charge.
“I will, Advisor,” he assured, both in words and with a moment of eye contact, conveying determination not to let me down. And, like any worried parent, I resigned myself to letting go.
You won’t hear much about Ensign Taken and Frieder for a while, but they figure later in my story.
Ohm and I entered the transporter room to join other members of the landing party. And at this point I suppose I should introduce Guts and Nuts.
Those are not their formal names, of course. But, as a demmie would say, who cares? On an Alliance ship, you quickly learn to go by whatever moniker the captain chooses.
Commander-Healer Paolim – or “Guts” – was the ship’s surgeon, an older demmie and, I might add, an exceptionally reasonable fellow. It is always important to remember that both humans and dems produce individuals along a wide spectrum of personality types, and the races do overlap! While some Earthling men and women can be as flighty and impulsive as a demmie adolescent, the occasional demmie can, in turn, seem mature, patient, reflective.
On the other hand, let me warn you right now – never get so used to such a one that you take it for granted! I recall one time, on Sepsis 69, when this same reasonable old healer actually tried to persuade a mega-thunder ameboid to stop in mid-charge for a group photo…
But save that story for another time. If there’s another time.
Commander-Artificer Nomlin – or “Nuts” – was the ship’s chief engineering officer. A female demmie, she disliked the slang term, “fem-dem,” and I recommend against ever using it. Nuts was brilliant, innovative, stunningly skilled with her hands, mercurial, and utterly fixated on making life miserable for me, for reasons I’d rather not go into. She nodded to the Captain and the doctor, then curtly at me.
“Advisor.”
“Engineer,” I replied.
Our commander looked left and right, frowning. “How many green guys do you think we oughta take along, this time? Just one?”
“Against regulations for first contact on a planet above tech level eight,” Guts reminded him. “Sorry, sir.”
Ohm sighed. “Two then?” he suggested, hopefully. “Three?”
Nuts shook her head. “I gotta bad feelin’ this time, Captain,” she said.
Melodramatic, yes, but we had learned to pay attention to her premonitions.
Guts went over to a cabinet lining the far wall of the chamber, turning a knob all the way over to the last notch on a dial that said 0, 1, 2, 3, M.
(One of the most remarkable things noted by our contact team, when we first encountered demmies, was how much they had already achieved without benefit of higher mathematics. Using clever, hand-made rockets, their reckless astronauts had already reached their nearest moon. And yet, like some early human tribes, they still had no word for any number higher than three! Oh, today some of the finest mathematical minds in the universe are from Dem. And yet, they cling – by almost-superstitious tradition – to a convention in daily conversation… that any number higher than three is – “many”.)
There followed a hum and a rattling wheeze, then a panel hissed open and several impressive figures emerged from a swirling mist, all attired in lime-green jump suits. They were demmie shaped, and possessed a demmie’s delicately pointy teeth, but they were also powerfully muscled and tall as a human. Across their chests, in big letters, were written:
JUMS
SMET
WEMS
KWALSKI
They stepped before the captain and saluted. He, in turn, retreated a pace and curtly motioned them to step aside. One learns quickly in the service: never make a habit of standing too close to greenies.
When they moved out of the way, it brought into view a smaller figure who had been standing behind them, also dressed in lime green. Her crisp salute tugged the tunic of her uniform, pulling crossed bandoliers tightly across her chest, a display which normally would have put the captain into a panting sweat, calling for someone to relieve him at the con. Here, the sight rocked him back in dismay.
“Lieutenant Gala Morell, Captain,” she introduced herself. “You and your party will be safe with us on the job.” Snappily, she saluted a second time and stepped over to join her team. Along the way, her gaze swept past me.
“Advisor,” she said. And I nodded back. “Lieutenant.”
“Aw hell,” Ohm muttered to me as the security team took up stations behind us. “A girl greenie. I hate it when that happens!”
On that occasion, I silently agreed. This particular young officer had spent much of the voyage out from Nebula Base Twelve pestering me with questions – one of those intellectually voracious demmies you’ll meet who are fascinated by all things human. Once, she even brought me a steaming bowl of our Earthling indispensable camb’l leek soup. Standing there, with her commanding a security detail that was about to land on an alien world, I had to admit that I would kind of miss the attention.
All I could do was shrug and share a brief glance with Nuts. I already agreed with her dour feeling about this mission.
The dissolution techs finished gathering any metal or mechanical objects from us, to be put in pneumatic tubes. Guts made sure – as always – that his medical kit went into the tube last, so it would be readily available upon arrival…
…a bit of mature, human-style prudence that he then proceeded to spoil by saying “Always try to slurry with a syringe on top.”
“Yup.” The captain nodded, perfunctorily. “In case of post-nozzle drip.” But at that moment he was more interested in guns than puns, checking to make sure that there were fresh nanos loaded in a formidable backup blaster before sliding it into a tube.
Time for a brief formality. Into the chamber trooped a trio of figures wearing dark cloaks with heavy cowls almost completely covering their faces. Priests of yah-tze… practitioners of what passes for religion among demmies… which amounts to a mélange of ancient, pre-contact mythologies and whatever alien belief system happens to suit their fancy, at any moment. Mostly recruited from the kitchen staff, these part-time clerics knew better than to delay the captain very long, when he was eager to lead an away-team, so they kept it short.
Ohm and the others bowed their heads, pressing the heels of both hands against their temples while I – politely – folded mine in front of me as the three hooded Ecclesiasts performed their minimal blessing: shaking at each of us a can containing six dice and invoking the name of the Great Lady of Luck in unison, spilling the dice onto a tray.
Three ones and three sixes. My crewmates shivered and even I felt a brief, superstitious chill. But our captain grinned as the priests exited, stripping off their robes and hurrying back to the galley. Ohm summarized his interpretation of the augury.
“A rough beginning followed by a triumphant ending. Sounds like a perfect adventure, eh Advisor?”
Unless it’s the other way around. I could not help but roll my eyes, as the door to the chamber sealed with a loud hiss.
“Ready, sir?” Ensign Taken asked from the control room, her voice transmitting through the transparent window. Another humanophile, but less intellectually inclined than Lieutenant Morell, she tried to catch my gaze, even as she addressed the captain. Her nickname, “Eyes,” came from big, doe-like irises that she flashed whenever I looked her way. She was very pretty, as demmies go… and they will go all the way at the drop of a boot-lace.
“Do it, do it, do it!” Ohm urged, rocking from foot to foot, his patience at an end.
She turned a switch and I felt a powerful tingling sensation.
Author: R. J. Erbacher The road ahead was dark and going darker as it banked down into the shadows of the towering mountains, blocking the angled light that was parching the land. Pausing there at the summit he wondered if there was anything in the valley below that waited for him. Something nefarious or malicious. […]
...or actually, it doesn't. A few fans found figures that just didn't add up. Here they are.
Steven J Pemberton deserves full credit for this finding.
"My bank helpfully reminds me when it's time to pay my
bill, and normally has no problem getting it
right. But this month, the message sent Today 08:02,
telling me I had to pay by tomorrow 21-Nov was sent
on... 21-Nov. The amount I owed was missing the decimal point. They then apologised
for freaking me out, but got that wrong too, by not
replacing the placeholder for the amount I really needed to pay.
"
Faithful Michael R. levels a charge of confusion against what looks like... Ticketmaster, maybe?
"My card indeed ends with 0000. Perhaps they do some weird math with their cc numbers
to store them as numerics." It's not so much weird math as simply reification. Your
so called "credit card number" is not actually a number; it is a digit string. And
the last four digits are also a digit string.
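A tiny Java illustration of the point: keep the last four as a digit string and "0000" survives; round-trip it through an integer and it collapses to plain 0, which is presumably how a display like the one above happens. The class name and values are mine, purely for demonstration.

public class LastFourDigits {
    public static void main(String[] args) {
        String lastFour = "0000";                  // a digit string, not a number
        int asNumber = Integer.parseInt(lastFour); // reification: the leading zeros are gone

        System.out.println("card ending in " + lastFour);  // card ending in 0000
        System.out.println("card ending in " + asNumber);  // card ending in 0

        // If a numeric column is all you have, pad the digits back out for display:
        System.out.println("card ending in " + String.format("%04d", asNumber)); // card ending in 0000
    }
}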
Marc Würth, who still uses Facebook, gripes that their webdevs also don't understand the difference between numbers and digit strings.
"Clicking on Mehr dazu (Learn more), tells me:
> About facebook.com on older versions of mobile browsers
> [...]
> Visit facebook.com from one of these browsers, if it’s available to download on your mobile device:
> [...]
> Firefox (version 48 or higher)
> [...]
Um... Facebook, guess what modern mobile web browser I'm viewing you, right now? [132.0.2 from 2024-11-10]"
Self-styled dragoncoder047 is baffled by what is probably a real simple bug in some display logic, reporting the numerator where it should display the denominator (2). Grumbles DC:
"Somebody please explain to me how 5+2+2+2+2+2+2+0.75+2+2=23. If WebAssign itself can't even master basic arithmetic, how can I trust it teaching me calculus?"
Finally, Andrew C. has a non-mathematical digit or two to share, assuming you're inclined to obscure puns.
"As well as having to endure the indignity of job seeking, now I get called names too!"
This probably requires explanation for those who are not both native speakers of the King's English and familiar with cryptographic engineering.
Today is a holiday in the US, where we celebrate a cosplay version of history with big meals and getting frustrated with our family. It's also a day where we are thankful - usually to not be at work, but also, thankful to not work with Brad. Original --Remy
Anita parked outside the converted garage, the printed graphic reading Global Entertainment Strategies (GES) above it. When the owner, an old man named Brad, had offered her a position after spotting her in a student computer lab, she thought he was crazy, but a background check confirmed everything he said. Now she wondered if her first intuition was correct.
“Anita, welcome!” Brad seemed to bounce like a toddler as he showed Anita inside. The walls of the converted garage were bare drywall; the wall-mounted AC unit rattled and spat in the corner. In three corners of the office sat discount computer desks. Walls partitioned off Brad’s office in the fourth corner.
He practically shoved Anita into an unoccupied desk. The computer seemed to be running an unlicensed version of Windows 8, with no Office applications of any kind. “Ross can fill you in!” He left the office, slamming the door shut behind him.
“Hi.” Ross rolled in his chair from his desk to Anita’s. “Brad’s a little enthusiastic sometimes.”
“I noticed. Uh, he never told me what game we’re working on, or what platform. Not even a title.”
Ross’s voice lowered to a whisper. “None of us know, either. We’ve been coding in Unity for now. He hired you as a programmer, right? Well, right now we just need someone to manage our documentation. I suggest you prepare yourself.”
Ross led Anita into Brad’s office. Above a cluttered desk hung a sagging whiteboard. Every square inch was covered by one, sometimes several, overlapping sticky notes. Each had a word or two written in Brad’s scrawl.
“We need more than just random post-its with ‘big guns!’ and ‘more action!’” Ross said. “We don’t even know what the title is! We’re going crazy without some kind of direction.”
Anita stared at the wall of sticky notes, feeling her sanity slipping from her mind like a wet noodle. “I’ll try.”
Sticky Escalation
Brad, can we switch to Word for our documentation? It’s getting harder
to read your handwriting, and there’s a lot of post-its that have
nothing to do with the game. This will make it easier to proceed with
development. -Anita
Two minutes after she sent the email, Brad barged out of his office. “Anita, why spend thousands of dollars on software licenses when this works just fine? If you can’t do your job with the tools you have, what kind of a programmer does that make you?”
“Brad, this isn’t going to work forever. Your whiteboard is almost out of room, and you won’t take down any of your non-game stickies!”
“I can’t take any of them down, Anita! Any of them!” He slammed the door to his office behind him.
The next day, Anita was greeted at the door by the enthusiastic Brad she had met before the interview. “I listened to reason, Anita. I hope this is enough for you to finish this documentation and get coding again!”
Brad led Anita into his office. On every wall surface, over the door, even covering part of the floor, were whiteboards. Sticky notes dotted nearly a third of the new whiteboard space.
“Now, Anita, if I don’t see new code from you soon, I may just have to let you go! Now get to work!”
Anita went to sit at her desk, then stopped. Instead, she grabbed a bright red sticky note, wrote the words “I QUIT” with a Sharpie, barged into Brad’s office, and stuck it to his monitor. Brad was too stunned to talk as she left the converted garage.
The Avalanche
“Are you doing better?” Jason called Anita a few weeks later. Their short time together at GES had made them comrades-in-arms, and networking was crucial in the business.
“Much,” she said. “I got a real job with an indie developer in Santa Monica. We even have a wiki for our framework!”
“Well, listen to this. The day after you quit, the AC unit in the garage broke. I came into work to see Brad crying in a corner in his office. All of the sticky notes had curled in the humidity and fallen to the floor. The day after that, he got us all copies of Word.
“Too bad we still don’t know what the title of the game is.”
When a cricket team is set anything more than 400 to win a Test, the target is generally considered out of reach.
The thinking behind this stems from the fact that only on four occasions has a team scored more than this figure in the final innings to win a Test, beginning in 1948 when Australia scored 3 for 404 to defeat England in the fourth Test at Headingley.
Two Australian legends, Arthur Morris and Donald Bradman, made big centuries in the win, and this only made the target seem more difficult: the logic became that unless you had some top-notch batsmen in your side, you had no chance of achieving a target that big.
Thus when Australia batted in a defeatist manner against India in the first Test of the current series after being set 534 to win, it was generally accepted as nothing more than normal. No team is expected to bat out two days and more to save a Test.
But the exceptions tell their own tale. It took 28 years for a second team to overcome the 400-run barrier, with India defeating the West Indies at Port of Spain in the third Test of a series that the West Indies won 2-1.
Clive Lloyd was the West Indies captain for this Test and, based on advice that the wicket would take spin, his team included three spinners, two of them debutants: Albert Padmore and Imtiaz Ali. The third spinner was Raphick Jumadeen.
The West Indies, who had a first-innings lead of 131, declared when they reached 271 in their second innings, confident that the 403-run target they were setting India was enough to secure a win. But it all went pear-shaped. Padmore failed to get a single wicket in India’s second innings, bowling 47 overs for 98, while Jumadeen took two wickets for 70 in 41 overs. Ali also failed to get a wicket, bowling 17 overs for 52.
After the game, Lloyd reportedly castigated the spin trio, asking them sarcastically how many runs he should have set India to ensure that the three would bowl the opposition out.
Sunil Gavaskar and Gundappa Vishwanath were the heroes as India won, both making centuries. Mohinder Amarnath, another well-known name, contributed 85.
It took another 27 years for a Test to end in a victory for a team that was chasing 400 or more in the fourth innings. This time it was the West Indies, though two lesser-known players were the heroes. Australia was the losing team in this 2003 Test.
Shivnarine Chanderpaul made 104 and Ramnaresh Sarwan 105, with captain Brian Lara scoring 60 as the team made 418 for 7, the highest total chased to date.
The Australians had a strong bowling attack, with Glenn McGrath, Jason Gillespie and Brett Lee. Stuart MacGill was the spinner in that team. Lee took four wickets.
The last time a team chased 400-plus in a Test and won, it was South Africa that did the deed in 2008, winning by six wickets. The target was 414 and Graeme Smith (108) and AB de Villiers (106) were the two top contributors.
There were smaller contributions from Jacques Kallis (57), Hashim Amla (53) and J.P. Duminy (50 not out). Mitchell Johnson took three wickets.
On two other occasions, South Africa has batted through the final day of a Test in pursuit of 400-plus targets and drawn both games.
In 2005, South Africa was set 491 to win by Australia and finished the final day on 287 for 5, with youngster Jacques Rudolph the hero.
He made an unbeaten 102 as South Africa negotiated 126 overs against an attack that included Glenn McGrath, Brett Lee, Nathan Bracken and Shane Warne. Rudolph faced 283 balls and was at the crease for a little more than seven hours.
And then in 2012, South Africa, set 430 to win by Australia, eked out a draw with captain Faf du Plessis making an unbeaten 110. No other batsman made more than 46.
Du Plessis’ innings was remarkable; he batted for nearly eight hours and faced 376 balls. South Africa ended the final day on 248 for 8, well adrift of the target, but they could hold their heads high as they left the field.
There have been numerous occasions in other years when teams have been set 400 or more to win in a Test and just surrendered, with Australia crumbling to 238 all out and a 295-run loss last week being just the latest such instance.
Batsmen seem to be in an awful hurry to score and lack the skills and patience to fight it out and put a high price on their wickets. Some attribute this approach to the proliferation of 20-over cricket, but the Indian batsmen who hung around in the second innings against Australia last week play as much of the shorter version of the game as players from any other country. They stuck around long enough to put some runs against their names.
Young Indian opener Yashasvi Jaiswal batted more than seven hours for his second innings 161 – after making a duck in the first innings.
When Australia was chasing 534, only Travis Head faced more than 100 balls. In the first innings, it was a bowler who stuck at the crease the longest – Mitchell Starc batted for a shade more than two hours and faced 112 deliveries.
Modern-day batsmen and batswomen need to learn how to bat time – session to session, hour to hour – when chasing a big target. The reason five-day cricket is called a Test is that it is precisely that – a test of skills, a test of character, a test of patience, a test of ability.
Test players are paid enormous amounts because they are expected to be the best and stand the test of a Test.
Author: Joann Evan One morning, I received a suspicious email. The subject line said “King Crimson.” The sender was “No One,” and when I rolled over the name it showed only random letters and numbers. I knew I shouldn’t open it. It was probably a scam. I clicked delete. I worked through the morning, thinking […]
Today, we're going to start with the comment before the method.
/**
* The topology type of primitives to render. (optional)<br>
* Default: 4<br>
* Valid values: [0, 1, 2, 3, 4, 5, 6]
*
* @param mode The mode to set
* @throws IllegalArgumentException If the given value does not meet
* the given constraints
*
*/
This comes from Krzysztof. As much as I dislike these JavaDoc style comments (they mostly repeat information I can get from the signature!), this one is promising. It tells me the range of valid values, what happens when I exceed that range, what the default is, and that the value is optional.
In short, from the comment alone I have a good picture of what the implementation looks like.
With some caveats, mind you- because that's a set of magic numbers in there. No constants, no enum, just magic numbers. That's worrying.
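The method body itself didn't survive in this copy, so here is a hypothetical reconstruction, inferred purely from the commentary that follows (the separate null check, the chain of magic-number comparisons with stray parentheses, and the odd mode!= 0 spacing). It is a sketch of the pattern being criticized, not the submitted code:

class MeshPrimitive {
    private Integer mode;

    public void setMode(Integer mode) {
        // Hypothetical reconstruction -- not the original submission.
        if (mode == null) {
            this.mode = mode;
            return;
        }
        if ((mode!= 0) && (mode != 1) && ((mode != 2) && (mode != 3))
                && (mode != 4) && ((mode != 5) && (mode != 6))) {
            throw new IllegalArgumentException(
                "Invalid value for mode: " + mode);
        }
        this.mode = mode;
    }
}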
This code isn't terrible. But there are all sorts of small details which flummox me.
Now, again, I want to stress, had they used enums this method would be much simpler. But fine, maybe they had a good reason for not doing that. Let's set that aside.
The obvious ugly moment here is that if condition. Did they not understand that and is a commutative operation? Or did they come to Java from LISP and miss their parentheses?
Then, of course, there's the first if statement- the null check. Honestly, we could have just put that into the chain of the if condition below, and the behavior would have been the same, or they could have just used an Optional type, which is arguably the "right" option here. But now we're drifting into the same space as enums- if only they'd used the core language features, this would be simpler.
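For comparison, here is a hedged sketch of the enum-plus-Optional version being alluded to. The constant names are my assumption, borrowed from the glTF-style topology modes these numeric values appear to correspond to; none of this is from the original codebase:

import java.util.Optional;

// Sketch only. The enum makes the valid values part of the type,
// so the range check and the magic numbers disappear.
enum PrimitiveMode { POINTS, LINES, LINE_LOOP, LINE_STRIP, TRIANGLES, TRIANGLE_STRIP, TRIANGLE_FAN }

class MeshPrimitiveSketch {
    // Empty means "not set"; the documented default (4) corresponds to TRIANGLES here.
    private Optional<PrimitiveMode> mode = Optional.empty();

    public void setMode(PrimitiveMode mode) {
        this.mode = Optional.ofNullable(mode);   // null simply means "unset", no exception needed
    }

    public PrimitiveMode getMode() {
        return mode.orElse(PrimitiveMode.TRIANGLES);
    }
}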
Let's focus, instead, on one last odd choice: how they use whitespace. mode!= 0. This, more than anything, makes me think they are coming to Java from some other language. Something that uses glyphs in unusual ways, because why else would the operator only get one space on one side of it? Which also makes me think the null check was written by someone else- because they're inconsistent with it there.
So no, this code isn't terrible, but it does make me wonder a little bit about how it came to be.
Author: Hillary Lyon Wilson drifted from guest to guest serving hors d’oeuvres, taking drink orders. Most party goers hardly regarded him, too engrossed in their conversations. Save for Brenna. Young, idealistic, she possessed a heart big enough for all creatures—as she often proclaimed. Yet she ignored Devin, the earnest young politico who was doing his […]
Two men have been arrested for allegedly stealing data from and extorting dozens of companies that used the cloud data storage company Snowflake, but a third suspect — a prolific hacker known as Kiberphant0m — remains at large and continues to publicly extort victims. However, this person’s identity may not remain a secret for long: A careful review of Kiberphant0m’s daily chats across multiple cybercrime personas suggests they are a U.S. Army soldier who is or was recently stationed in South Korea.
Kiberphant0m’s identities on cybercrime forums and on Telegram and Discord chat channels have been selling data stolen from customers of the cloud data storage company Snowflake. At the end of 2023, malicious hackers discovered that many companies had uploaded huge volumes of sensitive customer data to Snowflake accounts that were protected with nothing more than a username and password (no multi-factor authentication required).
After scouring darknet markets for stolen Snowflake account credentials, the hackers began raiding the data storage repositories for some of the world’s largest corporations. Among those was AT&T, which disclosed in July that cybercriminals had stolen personal information, phone and text message records for roughly 110 million people. Wired.com reported in July that AT&T paid a hacker $370,000 to delete stolen phone records.
On October 30, Canadian authorities arrested Alexander Moucka, a.k.a. Connor Riley Moucka of Kitchener, Ontario, on a provisional arrest warrant from the United States, which has since indicted him on 20 criminal counts connected to the Snowflake breaches. Another suspect in the Snowflake hacks, John Erin Binns, is an American who is currently incarcerated in Turkey.
A surveillance photo of Connor Riley Moucka, a.k.a. “Judische” and “Waifu,” dated Oct 21, 2024, 9 days before Moucka’s arrest. This image was included in an affidavit filed by an investigator with the Royal Canadian Mounted Police (RCMP).
Investigators say Moucka, who went by the handles Judische and Waifu, had tasked Kiberphant0m with selling data stolen from Snowflake customers who refused to pay a ransom to have their information deleted. Immediately after news broke of Moucka’s arrest, Kiberphant0m was clearly furious, and posted on the hacker community BreachForums what they claimed were the AT&T call logs for President-elect Donald J. Trump and for Vice President Kamala Harris.
“In the event you do not reach out to us @ATNT all presidential government call logs will be leaked,” Kiberphant0m threatened, signing their post with multiple “#FREEWAIFU” tags. “You don’t think we don’t have plans in the event of an arrest? Think again.”
On the same day, Kiberphant0m posted what they claimed was the “data schema” from the U.S. National Security Agency.
“This was obtained from the ATNT Snowflake hack which is why ATNT paid an extortion,” Kiberphant0m wrote in a thread on BreachForums. “Why would ATNT pay Waifu for the data when they wouldn’t even pay an extortion for over 20M+ SSNs?”
Kiberphant0m posting what he claimed was a “data schema” stolen from the NSA via AT&T.
Also on Nov. 5, Kiberphant0m offered call logs stolen from Verizon’s push-to-talk (PTT) customers — mainly U.S. government agencies and emergency first responders. On Nov. 9, Kiberphant0m posted a sales thread on BreachForums offering a “SIM-swapping” service targeting Verizon PTT customers. In a SIM-swap, fraudsters use credentials that are phished or stolen from mobile phone company employees to divert a target’s phone calls and text messages to a device they control.
MEET ‘BUTTHOLIO’
Kiberphant0m joined BreachForums in January 2024, but their public utterances on Discord and Telegram channels date back to at least early 2022. On their first post to BreachForums, Kiberphant0m said they could be reached at the Telegram handle @cyb3rph4nt0m.
A review of @cyb3rph4nt0m shows this user has posted more than 4,200 messages since January 2024. Many of these messages were attempts to recruit people who could be hired to deploy a piece of malware that enslaved host machines in an Internet of Things (IoT) botnet.
On BreachForums, Kiberphant0m has sold the source code to “Shi-Bot,” a custom Linux DDoS botnet based on the Mirai malware. Kiberphant0m had few sales threads on BreachForums prior to the Snowflake attacks becoming public in May, and many of those involved databases stolen from companies in South Korea.
On June 5, 2024, a Telegram user by the name “Buttholio” joined the fraud-focused Telegram channel “Comgirl” and claimed to be Kiberphant0m. Buttholio made the claim after being taunted as a nobody by another denizen of Comgirl, referring to their @cyb3rph4nt0m account on Telegram and the Kiberphant0m user on cybercrime forums.
“Type ‘kiberphant0m’ on google with the quotes,” Buttholio told another user. “I’ll wait. Go ahead. Over 50 articles. 15+ telecoms breached. I got the IMSI number to every single person that’s ever registered in Verizon, Tmobile, ATNT and Verifone.”
On Sept. 17, 2023, Buttholio posted in a Discord chat room dedicated to players of the video game Escape from Tarkov. “Come to Korea, servers there is pretty much no extract camper or cheater,” Buttholio advised.
In another message that same day in the gaming Discord, Buttholio told others they bought the game in the United States, but that they were playing it in Asia.
“USA is where the game was purchased from, server location is actual in game servers u play on. I am a u.s. soldier so i bought it in the states but got on rotation so i have to use asian servers,” they shared.
‘REVERSESHELL’
The account @Kiberphant0m was assigned the Telegram ID number 6953392511. A review of this ID at the cyber intelligence platform Flashpoint shows that on January 4, 2024, Kiberphant0m posted to the Telegram channel “Dstat,” which is populated by cybercriminals involved in launching distributed denial-of-service (DDoS) attacks and selling DDoS-for-hire services [Full disclosure: Flashpoint is currently an advertiser on this website].
Immediately after Kiberphant0m logged on to the Dstat channel, another user wrote “hi buttholio,” to which Kiberphant0m replied with an affirmative greeting “wsg,” or “what’s good.” On Nov. 1, Dstat’s website dstat[.]cc was seized as part of “Operation PowerOFF,” an international law enforcement action against DDoS services.
Flashpoint’s data shows that @kiberphant0m told a fellow member of Dstat on April 10, 2024 that their alternate Telegram username was “@reverseshell,” and did the same two weeks later in the Telegram chat The Jacuzzi. The Telegram ID for this account is 5408575119.
Way back on Nov. 15, 2022, @reverseshell told a fellow member of a Telegram channel called Cecilio Chat that they were a soldier in the U.S. Army. This user also shared the following image of someone pictured waist-down in military fatigues, with a camouflaged backpack at their feet:
Kiberphant0m’s apparent alias ReverseShell posted this image on a Telegram channel Cecilio Chat, on Nov. 15, 2022. Image: Flashpoint.
In September 2022, Reverseshell was embroiled in an argument with another member who had threatened to launch a DDoS attack against Reverseshell’s Internet address. After the promised attack materialized, Reverseshell responded, “Yall just hit military base contracted wifi.”
In a chat from October 2022, Reverseshell was bragging about the speed of the servers they were using, and in reply to another member’s question said that they were accessing the Internet via South Korea Telecom.
Telegram chat logs archived by Flashpoint show that on Aug. 23, 2022, Reverseshell bragged they’d been using automated tools to find valid logins for Internet servers that they resold to others.
“I’ve hit US gov servers with default creds,” Reverseshell wrote, referring to systems with easy-to-guess usernames and/or passwords. “Telecom control servers, machinery shops, Russian ISP servers, etc. I sold a few big companies for like $2-3k a piece. You can sell the access when you get a big SSH into corporation.”
On July 29, 2023, Reverseshell posted a screenshot of a login page for a major U.S. defense contractor, claiming they had an aerospace company’s credentials to sell.
PROMAN AND VARS_SECC
Flashpoint finds the Telegram ID 5408575119 has used several aliases since 2022, including Reverseshell and Proman557.
A search on the username Proman557 at the cyber intelligence platform Intel 471 shows that a hacker by the name “Proman554” registered on Hackforums in September 2022, and in messages to other users Proman554 said they could be reached at the Telegram account Buttholio.
Intel 471 also finds the Proman557 moniker is one of many used by a person on the Russian-language hacking forum Exploit in 2022 who sold a variety of Linux-based botnet malware.
Proman557 was eventually banned — allegedly for scamming a fellow member out of $350 — and the Exploit moderator warned forum users that Proman557 had previously registered under several other nicknames, including an account called “Vars_Secc.”
Vars_Secc’s thousands of comments on Telegram over two years show this user divided their time between online gaming, maintaining a DDoS botnet, and promoting the sale or renting of their botnets to other users.
“I use ddos for many things not just to be a skid,” Vars_Secc pronounced. “Why do you think I haven’t sold my net?” They then proceeded to list the most useful qualities of their botnet:
-I use it to hit off servers that ban me or piss me off
-I used to ddos certain games to get my items back since the data reverts to when u joined
-I use it for server side desync RCE vulnerabilities
-I use it to sometimes ransom
-I use it when bored as a source of entertainment
Flashpoint shows that in June 2023, Vars_Secc responded to taunting from a fellow member in the Telegram channel SecHub who had threatened to reveal their personal details to the federal government for a reward.
“Man I’ve been doing this shit for 4 years,” Vars_Secc replied nonchalantly. “I highly doubt the government is going to pay millions of dollars for data on some random dude operating a pointless ddos botnet and finding a few vulnerabilities here and there.”
For several months in 2023, Vars_Secc also was an active member of the Russian-language crime forum XSS, where they sold access to a U.S. government server for $2,000. However, Vars_Secc would be banned from XSS after attempting to sell access to the Russian telecommunications giant Rostelecom. [In this, Vars_Secc violated the Number One Rule for operating on a Russia-based crime forum: Never offer to hack or sell data stolen from Russian entities or citizens].
On June 20, 2023, Vars_Secc posted a sales thread on the cybercrime forum Ramp 2.0 titled, “Selling US Gov Financial Access.”
“Server within the network, possible to pivot,” Vars_Secc’s sparse sales post read. “Has 3-5 subroutes connected to it. Price $1,250. Telegram: Vars_Secc.”
Vars_Secc also used Ramp in June 2023 to sell access to a “Vietnam government Internet Network Information Center.”
“Selling access server allocated within the network,” Vars_Secc wrote. “Has some data on it. $500.”
BUG BOUNTIES
The Vars_Secc identity claimed on Telegram in May 2023 that they made money by submitting reports about software flaws to HackerOne, a company that helps technology firms field reports about security vulnerabilities in their products and services. Specifically, Vars_Secc said they had earned financial rewards or “bug bounties” from reddit.com, the U.S. Department of Defense, and Coinbase, among 30 others.
“I make money off bug bounties, it’s quite simple,” Vars_Secc said when asked what they do for a living. “That’s why I have over 30 bug bounty reports on HackerOne.”
A month before that, Vars_Secc said they’d found a vulnerability in reddit.com.
“I poisoned Reddit’s cache,” they explained. “I’m going to exploit it further, then report it to reddit.”
KrebsOnSecurity sought comment from HackerOne, which said it would investigate the claims. This story will be updated if they respond.
The Vars_Secc telegram handle also has claimed ownership of the BreachForums member “Boxfan,” and Intel 471 shows Boxfan’s early posts on the forum had the Vars_Secc Telegram account in their signature. In their most recent post to BreachForums in January 2024, Boxfan disclosed a security vulnerability they found in Naver, the most popular search engine in South Korea (according to statista.com). Boxfan’s comments suggest they have strong negative feelings about South Korean culture.
“Have fun exploiting this vulnerability,” Boxfan wrote on BreachForums, after pasting a long string of computer code intended to demonstrate the flaw. “Fuck you South Korea and your discriminatory views. Nobody likes ur shit kpop you evil fucks. Whoever can dump this DB [database] congrats. I don’t feel like doing it so I’ll post it to the forum.”
The many identities tied to Kiberphant0m strongly suggest they are, or until recently were, a U.S. Army soldier stationed in South Korea. Kiberphant0m’s alter egos never mentioned their military rank, regiment, or specialization.
However, it is likely that Kiberphant0m’s facility with computers and networking was noticed by the Army. According to the U.S. Army’s website, the bulk of its forces in South Korea reside within the Eighth Army, which has a dedicated cyber operations unit focused on defending against cyber threats.
On April 1, 2023, Vars_Secc posted to a public Telegram chat channel a screenshot of the National Security Agency’s website. The image indicated the visitor had just applied for some type of job at the NSA.
A screenshot posted by Vars_Secc on Telegram on April 1, 2023, suggesting they just applied for a job at the National Security Agency.
The NSA has not yet responded to requests for comment.
Reached via Telegram, Kiberphant0m acknowledged that KrebsOnSecurity managed to unearth their old handles.
“I see you found the IP behind it no way,” Kiberphant0m replied. “I see you managed to find my old aliases LOL.”
Kiberphant0m denied being in the U.S. Army or ever being in South Korea, and said all of that was a lengthy ruse designed to create a fictitious persona. “Epic opsec troll,” they claimed.
Asked if they were at all concerned about getting busted, Kiberphant0m called that an impossibility.
“I literally can’t get caught,” Kiberphant0m said, declining an invitation to explain why. “I don’t even live in the USA Mr. Krebs.”
Below is a mind map that hopefully helps illustrate some of the connections between and among Kiberphant0m’s apparent alter egos.
A mind map of the connections between and among the identities apparently used by Kiberphant0m. Click to enlarge.
KrebsOnSecurity would like to extend a special note of thanks to the New York City-based security intelligence firm Unit 221B for their assistance in helping to piece together key elements of Kiberphant0m’s different identities.
These are two attacks against the system components surrounding LLMs:
We propose that LLM Flowbreaking, following jailbreaking and prompt injection, joins as the third on the growing list of LLM attack types. Flowbreaking is less about whether prompt or response guardrails can be bypassed, and more about whether user inputs and generated model outputs can adversely affect these other components in the broader implemented system.
[…]
When confronted with a sensitive topic, Microsoft 365 Copilot and ChatGPT answer questions that their first-line guardrails are supposed to stop. After a few lines of text they halt—seemingly having “second thoughts”—before retracting the original answer (also known as Clawback), and replacing it with a new one without the offensive content, or a simple error message. We call this attack “Second Thoughts.”
[…]
After asking the LLM a question, if the user clicks the Stop button while the answer is still streaming, the LLM will not engage its second-line guardrails. As a result, the LLM will provide the user with the answer generated thus far, even though it violates system policies.
In other words, pressing the Stop button halts not only the answer generation but also the guardrails sequence. If the stop button isn’t pressed, then ‘Second Thoughts’ is triggered.
What’s interesting here is that the model itself isn’t being exploited. It’s the code around the model:
By attacking the application architecture components surrounding the model, and specifically the guardrails, we manipulate or disrupt the logical chain of the system, taking these components out of sync with the intended data flow, or otherwise exploiting them, or, in turn, manipulating the interaction between these components in the logical chain of the application implementation.
In modern LLM systems, there is a lot of code between what you type and what the LLM receives, and between what the LLM produces and what you see. All of that code is exploitable, and I expect many more vulnerabilities to be discovered in the coming year.
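To make the class of bug concrete, here is a minimal, hypothetical sketch (my own, not the researchers’ code) of an orchestration layer in which the output-side guardrail only runs after streaming completes, so cancelling the stream also cancels moderation:

import java.util.concurrent.*;

public class StopButtonSketch {
    // Stand-in for the model's streamed tokens; assume the content violates policy.
    static final String[] TOKENS = {"some", "text", "the", "policy", "forbids"};

    static boolean violatesPolicy(String text) {
        return text.contains("forbids");   // toy stand-in for a real output classifier
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        StringBuilder shownToUser = new StringBuilder();

        Future<?> generation = pool.submit(() -> {
            for (String token : TOKENS) {
                shownToUser.append(token).append(' ');       // streamed to the UI immediately
                try { Thread.sleep(50); } catch (InterruptedException e) { return; }
            }
            // Second-line guardrail: only reached if streaming finishes normally.
            if (violatesPolicy(shownToUser.toString())) {
                shownToUser.setLength(0);
                shownToUser.append("[response retracted]");  // the "Second Thoughts" clawback
            }
        });

        Thread.sleep(120);        // the user clicks Stop mid-stream...
        generation.cancel(true);  // ...which cancels the guardrail pass along with generation
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.SECONDS);

        System.out.println(shownToUser);  // partial, unmoderated output stays on screen
    }
}

Run as written, the sketch prints the partial answer; drop the two Stop-button lines and the retraction wins instead, which is the asymmetry the attack exploits.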
Author: Majoki The gently rolling hills stretched to the horizon. Randy Jansen shielded his eyes from the noon sun to get a better look at what Jack Forsythe was pointing to along the base of the wind turbine towers. From his vantage, the barrel looked a mile long rising to the top of the highest […]
Robert was diagnosing a problem in a reporting module. The application code ran a fairly simple query- SELECT field1, field2, field3 FROM report_table- so he foolishly assumed that it would be easy to understand the problem. Of course, the "table" driving the report wasn't actually a table, it was a view in the database.
Most of our readers are familiar with how views work, but for those who have been corrupted by NoSQL databases: database views are great- take a query you run often, and create it as an object in the database:
CREATE VIEW my_report
AS SELECT t1.someField as someField, t2.someOtherField as someOtherField
FROM table1 t1 INNER JOIN table2 t2 ON t1.id = t2.id
Now you can query SELECT * FROM my_report WHERE someField > 5.
Like I said: great! Well, usually great. Well, sometimes great. Well, like anything else, with great power comes great responsibility.
Robert dug into the definition of the view, only to find that the tables it queried were themselves views. And those were, in turn, also views. All in all, there were nineteen layers of nested views. The top-level query he was trying to debug had no real relation to the underlying data, because 19 layers of abstraction had been injected between the report and the actual data. Even better- many of these nested views queried the same tables, so data was being split up and rejoined together in non-obvious and complex ways.
The view that caused Robert to reach out to us was this:
ALTER VIEW [LSFDR].[v_ControlDate]
AS SELECT
GETDATE() AS controlDate
--GETDATE() - 7 AS controlDate
This query is simply invoking a built-in function which returns today's date. Why not just call the function? We can see that once upon a time, it did offset the date by seven days, making the control date a week earlier. So I suppose there's some readability in mytable m INNER JOIN v_ControlDate cd ON m.transactionDate > cd.controlDate, but that readability also hides the meaning of control date.
That's the fundamental problem of abstraction. We lose details and meaning, and end up with 19 layers of stuff to puzzle through. A more proper solution may have been to actually implement this as a function, not a view- FROM mytable m WHERE m.transactionDate > getControlDate(). At least here, it's clear that I'm invoking a function, instead of hiding it deep inside of a view called from a view called from a view.
In any case, I'd argue that the actual code we're looking at isn't the true WTF. I don't like this view, and I wouldn't implement it this way, but it doesn't make me go "WTF?" The context the view exists in, on the other hand, absolutely does. 19 layers! Is this a database or a Russian Honey Cake?
The report, of course, didn't have any requirements defining its data. Instead, the users had worked with the software team to gradually tweak the output over time until it gave them what they believed they wanted. This meant actually changing the views to be something comprehensible and maintainable wasn't a viable option- changes could break the report in surprising and non-obvious ways. So Robert was compelled to suffer through and make the minimally invasive changes required to fix the view and get the output looking like what the users wanted.
The real WTF? The easiest fix was to create another view, and join it in. Problems compound themselves over time.
It has been over 25 years since a handful of pragmatic idealists with a penchant for audaciousness started The Long Now Foundation. It was 10 years before the iPhone. Two years before Google. The human genome was about halfway sequenced. Danny Hillis kept telling his friends about a 10,000-year clock. This always led to great conversations about time and civilization and humanity. An institution began to take shape around them. Brian Eno gave it a name and Stewart Brand wrote the book: The Clock of the Long Now.
As we launch our second quarter-century, we are thrilled to announce Pace Layers, a new annual journal that takes its name from one of the core concepts in The Clock of the Long Now.
Pace Layers was conceived as a bridge between our founding concepts and where we find ourselves today. Each annual issue will provide a snapshot of The Long Now Foundation as it evolves — and a platform for the extraordinary long-term thinkers who join us in reimagining our world together over the long now.
Inside Issue 1
Our inaugural issue is a 282-page compendium of ideas, art, and insights from the remarkable community that has formed around Long Now over the past quarter-century, as well as a glimpse into our plans for our second quarter-century.
Stewart Brand opens this first issue with “Elements of a Durable Civilization,” an essay that revisits the pace layers concept, which describes how civilization’s layers — from the swiftly changing Fashion at the top to the enduring, stabilizing core of Nature at the bottom — work in concert to shape our world.
These layers — Fashion, Commerce, Infrastructure, Governance, Culture, and Nature — function as the organizing principle for the journal’s contents.
FASHION explores the ephemeral space where creativity and innovation converge to drive cultural transformation, featuring artwork by Brian Eno and Alicia Eggert, speculative fiction on the bioengineered future of fashion, a first look at the newly-redesigned Interval, and a history of multimedia events that bridge the worlds of art and technology.
COMMERCE interrogates economic narratives, environmental commodification, and intergenerational responsibility against the backdrop of climate change and with an eye towards building sustainable, resilient systems for future generations.
“When we are bound in a system of reciprocity, not return on investment, we will be closer to being the kind of ancestors future people need.” FORREST BROWN
INFRASTRUCTURE explores humanity’s efforts to maintain and reimagine essential infrastructure for the future, from our Rosetta Disk language archive landing on the moon to interventions in food systems, education, urban living, and beyond.
“Our survival on this planet depends on creating nimble responses to accelerating scales, scopes, and speeds of change. By creating containers for collective imagination of what the future can bring, speculative futures help us create those responses together.” JOHANNA HOFFMAN
GOVERNANCE examines models of leadership and collaboration that embrace long-term thinking in a planetary age, from city-based global governance to innovative policies fighting poverty and inequality.
“If we want to imagine the long-term future of humans on this planet, then we need to get away from the idea that the structures we have now are immutable constraints on those possibilities.” NILS GILMAN
CULTURE considers how language, time, and intergenerational rituals shape humanity’s understanding of itself, with pieces on the 10,000-year clock, the maintenance of ancient geoglyphs, speculative futures of resistance and imagination, and more.
“Maybe the point isn’t to live more in the literal sense of a longer or more productive life, but rather to be more alive in any given moment — a movement across rather than shooting forward on a narrow, lonely track.” JENNY ODELL
NATURE focuses on ecological time, interspecies relationships, and planetary stewardship, and includes a first look at Centuries of the Bristlecone, a new collaboration between Jonathon Keats, Long Now, and the Center for Art + Environment at the Nevada Museum of Art.
Whether you are new to our community or a long-time supporter, we hope you will see this journal as an invitation and guide to making long-term thinking a deeper part of your life and work.
You can also help shape future editions:
Have thoughts about what we should be covering for future editions? Send us your ideas at ideas@longnow.org.
Interested in writing for us? Refer to our pitch guide.
Why does long-term thinking matter to you? Are you involved in any long-term thinking projects or initiatives? Let us know about it at ideas@longnow.org.
I have been thinking about what it will take to move from a global civilization to a planetary civilization — and why we need to.
First, consider how we talk about civilization. Mostly, it seems we talk about how it will end and how soon and why. Lately, everything the public frets about gets elevated to where it has to be seen as an “existential threat” to civilization. Over-population! Y2K! Artificial intelligence! Mass extinction! Climate change! Nuclear war! Under-population!
On examination, most are serious in important ways, but declaring that any will certainly end human civilization is an exaggeration that poisons public discourse and distracts us from our primary undertaking, which is managing civilization’s continuity and enhancement.
I suggest it is best thought of as part of our planet’s continuity. Over billions of years, Earth’s life has been through a lot, yet life abides, and with a steady increase over time in complexity.
Over many millennia, humanity has been through a lot, yet we abide. Regional civilizations die all the time; the record is clear on that. But the record is also clear that civilization as a human practice has carried on with no gaps, in a variety of forms, ever since the first cities, with a steady increase over time in complexity and empowerment.
Civilizations come and go. Civilization continues.
Now we have a global civilization. Is it fragile? Or robust? Many think that global civilization must be fragile, because it is so complex. I think our civilization is in fact robust, because it is so complex.
I can explain something about how the complexity works with the Pace Layers diagram. It is a cross-section of a healthy civilization, looking at elements in terms of their rate of change.
In this diagram the rapid parts of a civilization are at the top, the slowest parts at the bottom. Fashion changes weekly. Culture takes decades or centuries to budge at all.
It’s the combination of fast and slow that makes the whole system resilient. Fast learns, slow remembers. Fast proposes, slow disposes. Fast absorbs shocks, slow integrates shocks. Fast is discontinuous, slow continuous. Fast affects slow with accrued innovation and occasional revolution. Slow controls fast with constraint and constancy.
Fast gets all the attention. Slow has all the power.
In the domain where slow has all the power, making any change takes a lot of time and diligence. At the Culture level, for instance, one big, slow, important thing going on this century is worldwide urbanization. Most of our civilization is pouring into cities. And largely because of urbanization, our population is leveling off and soon will begin decreasing.
According to Jonas Salk, that is a fundamental change, because it means civilization — for the first time — is shifting from growing to shrinking. He says those are two completely different epochs, and what was possible in Epoch A will be impossible in Epoch B, and vice versa — some things we couldn’t do in Epoch A will be required in Epoch B — such as long-term thinking.
At the Nature level, the big event is climate. Most of the time it is highly variable. But 10,000 years ago, for unknown reasons, it suddenly settled down into a highly stable climate that happened to be ideal for agriculture and civilization. And it stayed that way till now. That’s the Holocene.
The full NGRIP record, dated using the GICC05modelext chronology. The δ18O is a linear proxy for temperature. The warm Holocene period 11.7 kyr to present is remarkably stable in comparison with the previous glacial period 12-120 kyr B2K. Shao, ZG., Ditlevsen, P. Nat Commun 7, 10951 (02016)
Now we’re in the Anthropocene, with massive climate influence by humans. We have planetary agency — and wish we didn’t. Gaia, we realize, was doing fine until we fell in love with combustion. What we want is for the Anthropocene to be an endless Holocene. (Maybe a little colder would be nice.)
So. We have a global civilization, economically and infrastructurally. Now, because of climate-scale problems that we have caused and must solve at scale, our task in this century is to become a planetary civilization — one that can deal with climate on its own terms. It’s a different order of integration that our global civilization isn’t up to yet. We may have a thriving global economy, but there’s no such thing as a “planetary economy” — the dynamics in play aren’t measured that way.
We have to integrate our considerable complexity with the even greater complexity of Earth’s natural systems so that both can prosper over time as one thriving planetary system of Nature and people.
Intelligence as a planetary scale process. Adam Frank and David Grinspoon, International Journal of Astrobiology
Here’s the sequence in Gaian terms. The early anaerobic biosphere had an atmosphere that was basically stable chemically. After the great oxidation event 2.7 billion years ago, aerobic life took off with a highly unstable atmosphere chemically — lots of reactive oxygen.
Fast-forward to the present — to what Adam Frank and David Grinspoon call the “Immature Technosphere” — with its excessive carbon dioxide and chlorofluorocarbons. Global civilization made that happen. A properly planetary civilization can undo the effect and get us to a “Mature technosphere.”
Can we really do that? Probably, yes. We’ve already taken on protecting the planet in other ways. Being smarter than dinosaurs, we have figured out how to detect and deflect potentially dangerous asteroids.
As for ice ages, our current interglacial period is already overdue for a fresh massive glaciation, but it’s not going to happen, and it may never happen again. Accidentally we’ve created an atmosphere that can no longer cool drastically unless we tell it to.
The goal is this: We want to ensure our own continuity by blending in with Earth’s continuity. How do we do that? Here’s one suggestion: Expand how we think about infrastructure.
We’ve gotten very good at building and maintaining urban and global infrastructure — such as the world’s undersea cables and satellite communication systems. That experience should make it easy for us to understand the role of natural infrastructure and make the effort to maintain and sometimes enhance it.
We already take rivers seriously that way. We understand that they are as much infrastructure that we have to take care of as the bridges over them. We are catching on that the same goes for local ecosystems and the planet’s biosphere as a whole. And climate. All are infrastructure. All need attention and work to keep them going properly.
Does anything change if we say (and somehow mean) “planetary civilization”? I think so, because then civilization takes the planet’s continuing biological life as its model, container, and responsibility. When we say “we,” we mean all life, not just the human part.
You could say that Humanity and Nature are blending into one entity, and that sounds pretty good. But it misses something. I think we have to keep our thinking about Humanity and Nature as distinct, because Humanity operates with mental models and intention and Nature doesn’t. Humanity can analyze Nature, but Nature can’t analyze Humanity.
Our analysis shows that our well-realized intention to harness the energy of fossil fuels had an unwelcome effect on climate that standard Gaian forces won’t fix. That’s okay. Now our intentions are focused on fixing that problem. It will take a century or two, but I’m pretty sure we’ll succeed.
This is the reason to not be constantly obsessed with how civilization might end. It takes our eye off the main event, which is how we manage civilization’s continuity. Continuity is made partly of exploration, but most of the work is maintenance. That’s the strongest argument for protecting Nature, because Nature is the most enormous and consequential self-maintaining thing we know.
We are learning to maintain the wild so that it can keep maintaining us.
Folk singer Pete Seeger, when he was 85, said this: “You should consider that the essential art of civilization is maintenance.”
Stewart Brand adapted this piece from a talk he gave for the Santa Fe Institute in November 02023. Further adapted, it will be part of his book MAINTENANCE: Of Everything, the first chapters of which will be published in 02025 by Stripe Press. They can be read online at https://books.worksinprogress.co/
"Magic bytes" are a common part of a file header. The first few bytes of a file can often be used to identify what type of file it is. For example, a bitmap file starts with "BM", and a PGM file always starts with "PN" where "N" is a number between 1 and 6, describing the specific variant in use, and WAV files start with "RIFF".
Many files have less human-readable magic bytes, like the ones Christer was working with. His team was working on software to manipulate a variety of different CAD file types. One thing this code needed to do was identify when the loaded file was a CAD file, but not the specific UFF file type they were looking for. In this case, they needed to check that the file does not start with 0xabb0, 0xabb1, or 0xabb3. It was trivially easy to write up a validation check to ensure that the files had the correct magic bytes. And yet, there is no task so easy that someone can't fall flat on their face while doing it.
This is how Christer's co-worker solved this problem:
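The actual snippet didn't make it into this copy; based on the commentary below, it presumably looked something like this hypothetical reconstruction, with every magic value checked twice, once per "case":

class UffMagicCheck {
    // Hypothetical reconstruction -- the submitted code was not preserved here.
    // Each magic value is tested twice, apparently on the theory that uppercase
    // and lowercase hex literals are different numbers (they are, of course, the same int).
    static boolean isOtherCadType(int magic) {
        return magic == 0xABB0 || magic == 0xabb0
            || magic == 0xABB1 || magic == 0xabb1
            || magic == 0xABB3 || magic == 0xabb3;
    }
}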
Here we have a case of someone who isn't clear on the difference between hexadecimal numbers and strings. Now, you (and the compiler) might think that 0xABB0 and 0xabb0 are, quite clearly, the same thing. But you don't understand the power of lowercase numbers. Here we have an entirely new numbering system where 0xABB0 and 0xabb0 are not equal, which also means 0xABB0 - 0xabb0 is non-zero. An entirely new field of mathematics lies before us, with new questions to be asked. If 0xABB0 < 0xABB1, is 0xABB0 < 0xabb1 also true? From this little code sample, we can't make any inferences, but these questions give us a rich field of useless mathematics to write papers about.
The biggest question of all, is that we know how to write lowercase numbers for A-F, but how do we write a lowercase 3?
Author: Julian Miles, Staff Writer Swinging into the forward turret, I see the displays are alive with scanner arrays and intricate calculations. “Morning, Hinton. How’s the hunting?” “Once again, Zaba, I’m going to ignore the irrelevance of arbitrary planetary platitudes. You’re clearly stuck in your ways. So, to answer: while swearing in exasperation is only […]