Planet Russell


Planet Debian: Thorsten Alteholz: My Debian Activities in February 2024

FTP master

This month I accepted 242 and rejected 42 packages. The overall number of packages that got accepted was 251.

This was just a short month and the weather outside was not really motivating. I hope it will be better in March.

Debian LTS

This was my one-hundred-and-sixteenth month of work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

During my allocated time I uploaded:

  • [DLA 3739-1] libjwt security update for one CVE to fix a constant-time-for-execution issue
  • [libjwt] upload to unstable
  • [#1064550] Bullseye PU bug for libjwt
  • [#1064551] Bookworm PU bug for libjwt
  • [#1064551] Bookworm PU bug for libjwt; upload after approval
  • [DLA 3741-1] engrampa security update for one CVE to fix a path traversal issue with CPIO archives
  • [#1060186] Bookworm PU-bug for libde265 was flagged for acceptance
  • [#1056935] Bullseye PU-bug for libde265 was flagged for acceptance

I also started to work on qtbase-opensource-src (an update is needed for ELTS, so an LTS update seems appropriate as well, especially as there are postponed CVEs).

Debian ELTS

This month was the sixty-seventh ELTS month. During my allocated time I uploaded:

  • [ELA-1047-1] bind9 security update for one CVE to fix a stack exhaustion issue in Jessie and Stretch

The upload of bind9 was a bit exciting, but all issues occurring with the new upload workflow were quickly fixed by Helmut and the packages finally reached their destination. I wonder why it is always me who stumbles upon special cases. This month I also worked on the Jessie and Stretch updates for exim4. I also started to work on an update for qtbase-opensource-src in Stretch (and LTS and other releases as well).

Debian Printing

This month I uploaded new upstream versions of:

This work is generously funded by Freexian!

Debian Matomo

I started a new team, debian-matomo-maintainers. All Matomo-related packages should be handled within this team. PHP PEAR or PECL packages shall still be maintained in their corresponding teams.

This month I uploaded:

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream version of:

Debian IoT

This month I uploaded new upstream versions of:

Planet Debian: Vasudev Kamath: Cloning a laptop over NVME TCP

Recently, I got a new laptop and had to set it up so I could start using it. But I wasn't really in the mood to go through the same old steps which I had explained in this post earlier. I was complaining about this to my colleague, and there came the suggestion of why not copy the entire disk to the new laptop. Though it sounded like an interesting idea to me, I had my doubts, so here is what I told him in return.

  1. I don't have the tools to open my old laptop and connect the new disk over USB to my new laptop.
  2. I use full disk encryption, and my old laptop has a 512GB disk, whereas the new laptop has a 1TB NVME, and I'm not so familiar with resizing LUKS.

He promptly suggested both could be done. For step 1, just expose the disk using NVME over TCP and connect it over the network and do a full disk copy, and the rest is pretty simple to achieve. In short, he suggested the following:

  1. Export the disk using nvmet-tcp from the old laptop.
  2. Do a disk copy to the new laptop.
  3. Resize the partition to use the full 1TB.
  4. Resize LUKS.
  5. Finally, resize the BTRFS root disk.

Exporting Disk over NVME TCP

The easiest way suggested by my colleague to do this is using systemd-storagetm.service. This service can be invoked by simply booting into storage-target-mode.target by specifying rd.systemd.unit=storage-target-mode.target. But he suggested not to use this, as I would need to tweak the dracut initrd image to bring up network services, and configuring WiFi from this mode is a painful thing to do.

So alternatively, I simply booted both my laptops with the GRML rescue CD and ran the following steps to export the NVME disk on my current laptop using the nvmet-tcp module of Linux:

modprobe nvmet-tcp
cd /sys/kernel/config/nvmet
mkdir ports/0
cd ports/0
echo "ipv4" > addr_adrfam
echo 0.0.0.0 > addr_traddr
echo 4420 > addr_trsvcid
echo tcp > addr_trtype

cd /sys/kernel/config/nvmet/subsystems
mkdir testnqn
echo 1 >testnqn/allow_any_host
mkdir testnqn/namespaces/1

cd testnqn
# replace the device name with the disk you want to export
echo "/dev/nvme0n1" > namespaces/1/device_path
echo 1 > namespaces/1/enable

ln -s "../../subsystems/testnqn" /sys/kernel/config/nvmet/ports/0/subsystems/testnqn

These steps ensure that the device is now exported using NVME over TCP. The next step is to detect this on the new laptop and connect the device:

nvme discover -t tcp -a <ip> -s 4420
nvme connect-all -t tcp -a <ip> -s 4420

Finally, nvme list shows the device which is connected to the new laptop, and we can proceed with the next step, which is to do the disk copy.

Copying the Disk

I simply used the dd command to copy the root disk to my new laptop. Since the new laptop didn't have an Ethernet port, I had to rely only on WiFi, and it took about 7 and a half hours to copy the entire 512GB to the new laptop. The speed at which I was copying was about 18-20MB/s. The other option would have been to create an initial partition and file system and do an rsync of the root disk or use BTRFS itself for file system transfer.

dd if=/dev/nvme2n1 of=/dev/nvme0n1 status=progress bs=40M
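As a quick sanity check (my own back-of-the-envelope arithmetic, not part of the original post), 512 GB at the ~19 MB/s midpoint of that WiFi range does indeed come to roughly seven and a half hours:

```python
# Rough copy-time estimate for a 512 GB disk over WiFi.
# 19 MB/s is the assumed midpoint of the observed 18-20 MB/s range.
disk_bytes = 512 * 10**9       # source disk size (decimal gigabytes)
throughput = 19 * 10**6        # sustained WiFi throughput in bytes/s
hours = disk_bytes / throughput / 3600
print(f"estimated copy time: {hours:.1f} hours")  # prints 7.5
```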

Resizing Partition and LUKS Container

The final part was very easy. When I launched parted, it detected that the partition table did not match the disk size and asked if it could fix it, and I said yes. Next, I had to install cloud-guest-utils to get growpart to fix the second partition, and the following command extended the partition to the full 1TB:

growpart /dev/nvme0n1 2

Next, I used cryptsetup resize to increase the LUKS container size.

cryptsetup luksOpen /dev/nvme0n1p2 ENC
cryptsetup resize ENC

Finally, I rebooted into the disk, and everything worked fine. After logging into the system, I resized the BTRFS file system. BTRFS requires the system to be mounted for resize, so I could not attempt it in live boot.

btrfs filesystem resize max /

Conclusion

The only benefit of this entire process is that I have a new laptop, but I still feel like I'm using my existing laptop. Typically, getting adjusted to a newly set-up laptop takes about a week or two, but in this case, that entire time is saved.

An added benefit is that I learned how to export disks using NVME over TCP, thanks to my colleague. This new knowledge adds to the value of the experience.

Charles Stross: A Wonky Experience

A Wonka Story

This is no longer in the current news cycle, but definitely needs to be filed under "stuff too insane for Charlie to make up", or maybe "promising screwball comedy plot line to explore", or even "perils of outsourcing creative media work to generative AI".

So. Last weekend saw insane news-generating scenes in Glasgow around a public event aimed at children: Willy's Chocolate Experience, a blatant attempt to cash in on Roald Dahl's cautionary children's tale, "Charlie and the Chocolate Factory". Which is currently most prominently associated in the zeitgeist with a 2005 movie directed by Tim Burton, who probably needs no introduction, even to a cinematic illiterate like me. Although I gather a prequel movie (called, predictably, Wonka), came out in 2023.

(Because sooner or later the folks behind "House of Illuminati Ltd" will wise up and delete the website, here's a handy link to how it looked on February 24th via archive.org.)

INDULGE IN A CHOCOLATE FANTASY LIKE NEVER BEFORE - CAPTURE THE ENCHANTMENT ™!

Tickets to Willys Chocolate Experience™ are on sale now!

The event was advertised with amazing, almost hallucinogenic, graphics that were clearly AI generated, and equally clearly not proofread because Stable Diffusion utterly sucks at writing English captions, as opposed to word salad offering enticements such as Catgacating • live performances • Cartchy tuns, exarserdray lollipops, a pasadise of sweet teats.* And tickets were on sale for a mere £35 per child!

Anyway, it hit the news (and not in a good way) and the event was terminated on day one after the police were called. Here's The Guardian's coverage:

The event publicity promised giant mushrooms, candy canes and chocolate fountains, along with special audio and visual effects, all narrated by dancing Oompa-Loompas - the tiny, orange men who power Wonka's chocolate factory in the Roald Dahl book which inspired the prequel film.

But instead, when eager families turned up to the address in Whiteinch, an industrial area of Glasgow, they discovered a sparsely decorated warehouse with a scattering of plastic props, a small bouncy castle and some backdrops pinned against the walls.

Anyway, since the near-riot and hasty shutdown of the event, things have ... recomplicated? I think that's the diplomatic way to phrase it.

First, someone leaked the script for the event on twitter. They'd hired actors and evidently used ChatGPT to generate a script for the show: some of the actors quit in despair, others made a valiant attempt to at least amuse the children. But it didn't work. Interactive audience-participation events are hard work, and this one apparently called for the sort of special effects that Disney's Imagineers might have blanched at, or at least asked, "who's paying for this?"

Here's a ThreadReader transcript of the twitter thread about the script (ThreadReader chains tweets together into a single web page, so you don't have to log into the hellsite itself). Note it's in the shape of screenshots of the script and threadreader didn't grab the images, so here's my transcript of the first three:

DIRECTION: (Audience members engage with the interactive flowers, offering compliments, to which the flowers respond with pre-recorded, whimsical thank-yous.)

Wonkidoodle 1: (to a guest) Oh, and if you see a butterfly, whisper your sweetest dream to it. They're our official secret keepers and dream carriers of the garden!

Willy McDuff: (gathering everyone's attention) Now, I must ask, has anyone seen the elusive Bubble Bloom? It's a rare flower that blooms just once every blue moon and fills the air with shimmering bubbles!

DIRECTION: (The stage crew discreetly activates bubble machines, filling the area with bubbles, causing excitement and wonder among the audience.)

Wonkidoodle 2: (pretending to catch bubbles) Quick! Each bubble holds a whisper of enchantment--catch one, and make a wish!

Willy McDuff: (as the bubble-catching frenzy continues) Remember, in the Garden of Enchantment, every moment is a chance for magic, every corner hides a story, and every bubble... (catches a bubble) holds a dream.

DIRECTION: (He opens his hand, and the bubble gently pops, releasing a small, twinkling light that ascends into the rafters, leaving the audience in awe.)

Willy McDuff: (with warmth) My dear friends, take this time to explore, to laugh, and to dream. For in this garden, the magic is real, and the possibilities are endless. And who knows? The next wonder you encounter may just be around the next bend.

DIRECTION: Scene ends with the audience fully immersed in the interactive, magical experience, laughter and joy filling the air as Willy McDuff and the Wonkidoodles continue to engage and delight with their enchanting antics and treats.

DIRECTION: Transition to the Bubble and Lemonade Room

Willy McDuff: (suddenly brightening) Speaking of light spirits, I find myself quite parched after our...unexpected adventure. But fortune smiles upon us, for just beyond this door lies a room filled with refreshments most delightful--the Bubble and Lemonade Room!

DIRECTION: (With a flourish, Willy opens a previously unnoticed door, revealing a room where the air sparkles with floating bubbles, and rivers of sparkling lemonade flow freely.)

Willy McDuff: Here, my dear guests, you may quench your thirst with lemonade that fizzes and dances on the tongue, and chase bubbles that burst with flavors unimaginable. A toast, to adventures shared and friendships forged in the heart of the unknown!

DIRECTION: (The audience, now relieved and rejuvenated by the whimsical turn of events, follows Willy into the Bubble and Lemonade Room, laughter and chatter filling the air once more, as they immerse themselves in the joyous, bubbly wonderland.)

And here is a photo of the Lemonade Room in all its glory.

A trestle table with some paper cups half-full of flat lemonade

Note that in the above directions, near as I can make out, there were no stage crew on site. As Seamus O'Reilly put it, "I get that lazy and uncreative people will use AI to generate concepts. But if the script it barfs out has animatronic flowers, glowing orbs, rivers of lemonade and giggling grass, YOU still have to make those things exist. I'm v confused as to how that part was misunderstood."

Now, if that was all there was to it, it'd merely be annoying. My initial take was that this was a blatant rip-off, a consumer fraud perpetrated by a company ("House of Illuminati") based in London, doing everything by remote control over the internet to fleece those gullible provincials of their wallet contents. (Oh, and that probably includes the actors: did they get paid on the day?) But aftershocks are still rumbling on, a week later.

Per The Daily Beast, "House of Illuminati" issued an apology (via Facebook) on Friday, offering to refund all tickets—but then mysteriously deleted the apology hours later, and posted a new one:

"I want to extend my sincerest apologies to each and every one of you who was looking forward to this event," the latest Facebook post from House of Illuminati reads. "I understand the disappointment and frustration this has caused, and for that, I am truly sorry."

(The individual behind the post goes unnamed.)

"It's important for me to clarify that the organization and decisions surrounding this event were solely my responsibility," the post continues. "I want to make it clear that anyone who was hired externally or offered their help, are not affiliated with the me or the company, any use of faces can cause serious harm to those who did not have any involvement in the making of this event."

"Regarding a personal matter, there will be no wedding, and no wedding was funded by the ticket sales," the post continues further, sans context. "This is a difficult time for me, and I ask for your understanding and privacy."

"There will be no wedding, and no wedding was funded by the ticket sales?" (What on Earth is going on here?)

Finally, The Daily Beast notes that Billy McFarland, the creator of the Fyre Fest fiasco, told TMZ he'd love to give the Wonka organizers a second chance at getting things right at Fyre Fest II.

The mind boggles.

I am now wondering if the whole thing wasn't some sort of extraordinarily elaborate publicity stunt rather than simply a fraud, but I can't for the life of me work out what was going on. Unless it was Jimmy Cauty and Bill Drummond (aka The KLF) getting up to hijinks again? But I can't imagine them doing anything so half-assed ... Least-bad case is that an idiot decided to set up an events company ("how hard can running public arts events be?" —don't answer that) and intended to use the profits and the experience to plan their dream wedding. Which then ran off the rails into a ditch, rolled over, exploded in flames, was sucked up by a tornado and deposited in Oz, their fiancée called off the engagement and eloped with a walrus, and—

It's all downhill from here.

Anyway, the moral of the story so far is: don't use generative AI tools to write scripts for public events, or to produce promotional images, or indeed to do anything at all without an experienced human to sanity check their output! And especially don't use them to fund your wedding ...

UPDATE: Identity of scammer behind Willy's Chocolate Experience exposed -- Youtube video, I haven't had a chance to watch it all yet, will summarize if relevant later; the perp has form for selling ChatGPT generated ebook-shaped "objects" via Amazon.

NEW UPDATE: Glasgow's disastrous Wonka character inspires horror film

A villain devised for the catastrophic Willy's Chocolate Experience, who makes sweets and lives in walls, is to become the subject of a new horror movie.

LATEST UPDATE: House of Illuminati claims "copywrite", "we will protect our interests".

The 'Meth Lab Oompa Loompa Lady' is selling greetings on Cameo for $25.

And Eleanor Morton has a new video out, Glasgow Wonka Experience Tourguide Doesn't Give a F*.

365 Tomorrows: Sowing Seeds in Digital Soil

Author: Aspen Greenwood In a world gasping under the heavy cloak of pollution, the Catalogers—scientists driven by a mission—trekked through dwindling patches of green. Among them, Maya, whose spirit yearned for the vibrant Earth imprisoned in old, faded textbooks, delved into her work with a quiet, burning intensity. Each day, Maya and her team, respirators […]

The post Sowing Seeds in Digital Soil appeared first on 365tomorrows.

David Brin: More science! - from AI to analog to human nature

We're about to dive into AI (what else?). But first off, a little news from entertainment and philosophy ... and where both venn-overlap with myth.

Here's a link to a recording of the first public performance of my play “The Escape,” on November 7 at Caltech. A 'reading' but fully dramatized, well-acted and directed by Joanne Doyle. The recording is of middling quality, but shows great audience reactions. Come have some good, impudently theological fun!  

(Note: for copyright reasons the video omits background music after scene 2 (the Stones' "Sympathy for the Devil") and at the end, when you see the audience cheering silently during "You Gotta Have Heart!", the great song from Damn Yankees that's related to the theme of the play.


Pity! Still, folks liked it. And I think you’ll laugh a few times… or go “Huh!”)



== A world of analog… ==


Before going to digital revolutions, might there come a return of analog computing? 

“Bringing back analog computers in much more advanced forms than their historic ancestors will change the world of computing drastically and forever.”

This article makes a point I depicted in Infinity’s Shore – that analog computing may yet find a place. Indeed, the more we learn about neurons, the less their operation looks like simple, binary flip-flops. 

For every flashy, on-off synapse, there appear to be hundreds – even thousands – of tiny organelles that perform murky, nonlinear computational (or voting) functions, with some evidence for the Penrose-Hameroff notion that some of them use quantum entanglement!

Says one of the few pioneers in analog-on-a-chip: “Digital computers are very good at scalability. Analog is very good at complex interactions between variables. In the future, we may combine these advantages.”


Which brings us back to my novel - Infinity's Shore - wherein a hidden interstellar colony of ‘illegal immigrant’ refugees develops analog computers in order to avoid a posited ‘inevitable detectability’ of digital computation. A plot device, sure. But it freed me to envision a vast chamber filled with spinning glass disks and cams and sparking tubes. A vivid Frankenstein contraption of… analog.

 

== AI, Ai AI!! ==

 

We just got back from Ben Goertzel's conference on “Beneficial AGI” in Panama. How can we encourage a 'landing' so that organic and artificial minds will be mutually beneficent? Quite a group was there with interesting perspectives on these new life forms. Exchanged ideas... 


...including the highly unusual ones from my WIRED article that breaks free of the three standard 'AI-formats' that can only lead to disaster, suggesting instead a 4th! That AI entities can only be held accountable if they have individuality... even 'soul'... 


Heck, still highly relevant: my NEWSWEEK op-ed (June '22) dealt with 'empathy bots' that feign sapience and personhood.


Offering some context for this new type of life form, Byron Reese has a new book: “We Are Agora: How Humanity Functions as a Single Superorganism That Shapes Our World and Our Future.” We desperately need the wary, can-do optimism that he conveyed in earlier books – along with confidence persuaders like Steven Pinker and Peter Diamandis! Only now Byron talks about Gaia, Lovelock, Margulis and all that… how life is a web of nested levels of individuality and macro communities, e.g. from cells to a bee to a hive and so on. Or YOUR cells to organs to ‘you’ to your families and communities and civilization. In other words – the core topic of my 1990 novel EARTH!  (Soon to be re-released in an even better version! ;-)

See Byron interviewed by Tim Ventura.

 

A paper on “Nepotistically Trained Generative-AI Models Collapse” asserts that – in what seems to be a case of bad feedback loops – AI (artificial intelligence) image synthesis programs, when retrained on even small amounts of their own creations, produce highly distorted images… and that once poisoned, the models struggle to fully heal even after retraining on only real images. I am sure it’ll get fixed - and probably has been, before this gets posted - but….

 

Oy! Or shall I say “aieee!” This very clever Twitter troll has developed an interesting demonstration of recursive "poisoning." (link by Mike Godwin.)

 

But then we can gain insights into the past! 

 At the Direction of President Biden, Department of Commerce to Establish U.S. Artificial Intelligence Safety Institute to Lead Efforts on AI Safety. Through the National Institute of Standards and Technology (NIST),  the U.S. Artificial Intelligence Safety Institute (USAISI) will lead the U.S. government’s efforts on AI safety and trust, particularly for evaluating the most advanced AI models. “USAISI will facilitate the development of standards for safety, security, and testing of AI models, develop standards for authenticating AI-generated content, and provide testing environments for researchers to evaluate emerging AI risks and address known impacts.”


== Insights into human nature ==

 

Caltech researchers developed a way to read brain activity using functional ultrasound (fUS), a technique much less invasive than neural-link implants that does not require constant recalibration. Only… um… “Because the skull itself is not permeable to sound waves, using ultrasound for brain imaging requires a transparent “window” to be installed into the skull.”


A researcher wrote about his shock after discovering that some people don't have inner speech. Many folks have an internal monologue that is constantly commenting on everything they do, whereas others produce only small snippets of inner speech here and there, as they go about their day.  But some report a complete absence. The article asks what's going on inside the heads of people who don't have inner speech?


Ask those and other unusual questions! In The Ancient Ones I comment about those human beings who, teetering at the edge of a sneeze, do NOT look for a sharp, bright light to stare into. Such people exist… and they almost all think we light-starers are lying! Yeah, we smooth apes are a varied bunch.


== And finally ==


The Talmudic rabbis recognized six genders that were neither purely male nor female. Among these: 


- Androgynos, having both male and female characteristics.

- Tumtum, lacking sexual characteristics.

- Aylonit hamah, identified female at birth but later naturally developing male characteristics.

- Aylonit adam, identified female at birth but later developing male characteristics through human intervention. And so on.

They also had a tradition that the first human being was both.


A laudable acceptance we can all learn from! Of course, they also taught against the dangers of excessive, self-righteous sanctimony. Those who sow deliberate insult and contention in their own house (or family, or coalition of well-meaning allies) inherit... the wind.


Charles Stross: Same bullshit, new tin

I am seeing newspaper headlines today along the lines of British public will be called up to fight if UK goes to war because 'military is too small', Army chief warns, and I am rolling my eyes.

The Tories run this flag up the mast regularly whenever they want to boost their popularity with the geriatric demographic who remember national service (abolished 60 years ago, in 1963). Thatcher did it in the early 80s; the Army general staff told her to piss off. And the pols have gotten the same reaction ever since. This time the call is coming from inside the house—it's a general, not a politician—but it still won't work because changes to the structure of the British society and economy since 1979 (hint: Thatcher's revolution) make it impossible.

Reasons it won't work: there are two aspects, infrastructure and labour.

Let's look at infrastructure first: if you have conscripts, it follows that you need to provide uniforms, food, and beds for them. Less obviously, you need NCOs to shout at them and teach them to brush their teeth and tie their bootlaces (because a certain proportion of your intake will have missed out on the basics). The barracks that used to be used for a large conscript army were all demolished or sold off decades ago, we don't have half a million spare army uniforms sitting in a warehouse somewhere, and the army doesn't currently have ten thousand or more spare training sergeants sitting idle.

Russia could get away with this shit when they invaded Ukraine because Russia kept national service, so the call-up mostly got adults who had been through the (highly abusive) draft some time in the preceding years. Even so, they had huge problems with conscripts sleeping rough or being sent to the front with no kit.

The UK is in a much worse place where it comes to conscription: first you have to train the NCOs (which takes a couple of years as you need to start with experienced and reasonably competent soldiers) and build the barracks. Because the old barracks? Have mostly been turned into modern private housing estates, and the RAF airfields are now civilian airports (but mostly housing estates) and that's a huge amount of construction to squeeze out of a British construction industry that mostly does skyscrapers and supermarkets these days.

And this is before we consider that we're handing these people guns (that we don't have, because there is no national stockpile of half a million spare SA-80s and the bullets to feed them, never mind spare operational Challenger-IIs) and training them to shoot. Rifles? No problem, that'll be a few weeks and a few hundred rounds of ammunition per soldier until they're competent to not blow their own foot off. But anything actually useful on the battlefield, like artillery or tanks or ATGMs? Never mind the two-way radio kit troops are expected to keep charged and dry and operate, and the protocol for using it? That stuff takes months, years, to acquire competence with. And firing off a lot of training rounds and putting a lot of kilometres on those tank tracks (tanks are exotic short-range vehicles that require maintenance like a Bugatti, not a family car). So the warm conscript bodies are just the start of it—bringing back conscription implies equipping them, so should be seen as a coded gimme for "please can has 500% budget increase" from the army.

Now let's discuss labour.

A side-effect of conscription is that it sucks able-bodied young adults out of the workforce. The UK is currently going through a massive labour supply crunch, partly because of Brexit but also because a chunk of the work force is disabled due to long COVID. A body in a uniform is not stacking shelves in Tesco or trading shares in the stock exchange. A body in uniform is a drain on the economy, not a boost.

If you want a half-million strong army, then you're taking half a million people out of the work force that runs the economy that feeds that army. At peak employment in 2023 the UK had 32.8 million fully employed workers and 1.3 million unemployed ... but you can't assume that 1.3 million is available for national service: a bunch will be medically or psychologically unfit or simply unemployable in any useful capacity. (Anyone who can't fill out the forms to register as disabled due to brain fog but who can't work due to long COVID probably falls into this category, for example.) Realistically, economists describe any national economy with 3% or less unemployment as full employment because a labour market needs some liquidity in order to avoid gridlock. And the UK is dangerously close to that right now. The average employment tenure is about 3 years, so a 3% slack across the labour pool is equivalent to one month of unemployment between jobs—there's barely time to play musical chairs, in other words.

If a notional half-million strong conscript force optimistically means losing 3% of the entire work force, that's going to cause knock-on effects elsewhere in the economy, starting with an inflationary spiral driven by wage rises as employers compete to fill essential positions: that didn't happen in the 1910-1960 era because of mass employment, collective bargaining, and wage and price controls, but the post-1979 conservative consensus has stripped away all these regulatory mechanisms. Market forces, baby!

To make matters worse, they'll be the part of the work force who are physically able to do a job that doesn't involve sitting in a chair all day. Again, Russia has reportedly been drafting legally blind diabetic fifty-somethings: it's hard to imagine them being effective soldiers in a trench war. Meanwhile, if you thought your local NHS hospital was over-stretched today, just wait until all the porters and cleaners get drafted so there's nobody to wash the bedding or distribute the meals or wheel patients in and out of theatre for surgery. And the same goes for your local supermarket, where there's nobody left to take rotting produce off the shelves and replace it with fresh—or, more annoyingly, no truckers to drive HGVs, automobile engineers to service your car, or plumbers to fix your leaky pipes. (The latter three are all gimmes for any functioning military because military organizations are all about logistics first because without logistics the shooty-shooty bang-bangs run out of ammunition really fast.) And you can't draft builders because they're all busy throwing up the barracks for the conscripts to eat, sleep, and shit in, and anyway, without builders the housing shortage is going to get even worse and you end up with more inflation ...

There are a pile of vicious feedback loops in play here, but what it boils down to is: we lack the infrastructure to return to a mass military, whether it's staffed by conscription or traditional recruitment (which in the UK has totally collapsed since the Tories outsourced recruiting to Capita in 2012). It's not just the bodies but the materiel and the crown estate (buildings to put them in). By the time you total up the cost of training an infantryman, the actual payroll saved by using conscripts rather than volunteers works out at a tiny fraction of their cost, and is pissed away on personnel who are not there willingly and will leave at the first opportunity. Meanwhile the economy has been systematically asset-stripped and looted and the general staff can't have an extra £200Bn/year to spend on top of the existing £55Bn budget because Oligarchs Need Yachts or something.

Maybe if we went back to a 90% marginal rate of income tax, reintroduced food rationing, raised the retirement age to 80, expropriated all private property portfolios worth over £1M above the value of the primary residence, and introduced flag-shagging as a mandatory subject in primary schools—in other words: turn our backs on every social change, good or bad, since roughly 1960, and accept a future of regimented poverty and militarism—we could be ready to field a mass conscript army armed with rifles on the battlefields of 2045 ... but frankly it's cheaper to invest in killer robots. Or better still, give peace a chance?


Planet Debian: Reproducible Builds: Reproducible Builds in February 2024

Welcome to the February 2024 report from the Reproducible Builds project! In our reports, we try to outline what we have been up to over the past month as well as mentioning some of the important things happening in software supply-chain security.


Reproducible Builds at FOSDEM 2024

Core Reproducible Builds developer Holger Levsen presented in the main track at FOSDEM on Saturday 3rd February in Brussels, Belgium. However, that wasn’t the only talk related to Reproducible Builds.

Please see our comprehensive FOSDEM 2024 news post for the full details and links.


Maintainer Perspectives on Open Source Software Security

Bernhard M. Wiedemann spotted that a recent report entitled Maintainer Perspectives on Open Source Software Security written by Stephen Hendrick and Ashwin Ramaswami of the Linux Foundation sports an infographic which mentions that “56% of [polled] projects support reproducible builds”.


A total of three separate scholarly papers related to Reproducible Builds have appeared this month:

Signing in Four Public Software Package Registries: Quantity, Quality, and Influencing Factors by Taylor R. Schorlemmer, Kelechi G. Kalu, Luke Chigges, Kyung Myung Ko, Eman Abdul-Muhd Abu Ishgair, Saurabh Bagchi, Santiago Torres-Arias and James C. Davis (Purdue University, Indiana, USA) is concerned with the problem that:

Package maintainers can guarantee package authorship through software signing [but] it is unclear how common this practice is, and whether the resulting signatures are created properly. Prior work has provided raw data on signing practices, but measured single platforms, did not consider time, and did not provide insight on factors that may influence signing. We lack a comprehensive, multi-platform understanding of signing adoption and relevant factors. This study addresses this gap. (arXiv, full PDF)


Reproducibility of Build Environments through Space and Time by Julien Malka, Stefano Zacchiroli and Théo Zimmermann (Institut Polytechnique de Paris, France) addresses:

[The] principle of reusability […] makes it harder to reproduce projects’ build environments, even though reproducibility of build environments is essential for collaboration, maintenance and component lifetime. In this work, we argue that functional package managers provide the tooling to make build environments reproducible in space and time, and we produce a preliminary evaluation to justify this claim.

The abstract continues with the claim that “Using historical data, we show that we are able to reproduce build environments of about 7 million Nix packages, and to rebuild 99.94% of the 14 thousand packages from a 6-year-old Nixpkgs revision.” (arXiv, full PDF)


Options Matter: Documenting and Fixing Non-Reproducible Builds in Highly-Configurable Systems by Georges Aaron Randrianaina, Djamel Eddine Khelladi, Olivier Zendra and Mathieu Acher (Inria centre at Rennes University, France):

This paper thus proposes an approach to automatically identify configuration options causing non-reproducibility of builds. It begins by building a set of builds in order to detect non-reproducible ones through binary comparison. We then develop automated techniques that combine statistical learning with symbolic reasoning to analyze over 20,000 configuration options. Our methods are designed to both detect options causing non-reproducibility, and remedy non-reproducible configurations, two tasks that are challenging and costly to perform manually. (HAL Portal, full PDF)


Mailing list highlights

From our mailing list this month:


Distribution work

In Debian this month, 5 reviews of Debian packages were added, 22 were updated and 8 were removed, adding to Debian’s knowledge about identified issues. A number of issue types were updated as well. […][…][…][…] In addition, Roland Clobus posted his 23rd update of the status of reproducible ISO images on our mailing list. In particular, Roland helpfully summarised that “all major desktops build reproducibly with bullseye, bookworm, trixie and sid provided they are built for a second time within the same DAK run (i.e. [within] 6 hours)” and that there will likely be further work at a MiniDebCamp in Hamburg. Furthermore, Roland also responded in-depth to a query about a previous report.


Fedora developer Zbigniew Jędrzejewski-Szmek announced a work-in-progress script called fedora-repro-build that attempts to reproduce an existing package within a koji build environment. Although the project’s README file lists a number of fields that “will always or almost always vary” and there is a non-zero list of other known issues, this is an excellent first step towards full Fedora reproducibility.


Jelle van der Waa introduced a new linter rule for Arch Linux packages in order to detect cache files left over by the Sphinx documentation generator, which are unreproducible by nature and should not be packaged. At the time of writing, 7 packages in the Arch repository are affected by this.


Elsewhere, Bernhard M. Wiedemann posted another monthly update for his work elsewhere in openSUSE.


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb uploaded versions 256, 257 and 258 to Debian and made the following additional changes:

  • Use a deterministic name instead of trusting gpg’s --use-embedded-filenames. Many thanks to Daniel Kahn Gillmor (dkg@debian.org) for reporting this issue and providing feedback. [][]
  • Don’t error-out with a traceback if we encounter struct.unpack-related errors when parsing Python .pyc files. (#1064973). []
  • Don’t try to compare rdb_expected_diff on non-GNU systems as %p formatting can vary, especially with respect to macOS. []
  • Fix compatibility with pytest 8.0. []
  • Temporarily fix support for Python 3.11.8. []
  • Use the 7zip package (over p7zip-full) after a Debian package transition. (#1063559). []
  • Bump the minimum Black source code reformatter requirement to 24.1.1+. []
  • Expand an older changelog entry with a CVE reference. []
  • Make test_zip black clean. []

In addition, James Addison contributed a patch to parse the headers from diff(1) output correctly [][] — thanks! And lastly, Vagrant Cascadian pushed updates in GNU Guix for diffoscope to versions 255, 256, and 258, and updated trydiffoscope to 67.0.6.


reprotest

reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, Vagrant Cascadian made a number of changes, including:

  • Create a (working) proof of concept for enabling a specific number of CPUs. [][]
  • Consistently use 398 days for time variation rather than choosing randomly and update README.rst to match. [][]
  • Support a new --vary=build_path.path option. [][][][]


Website updates

A number of improvements were made to our website this month, including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In February, a number of changes were made by Holger Levsen:

  • Debian-related changes:

    • Temporarily disable upgrading/bootstrapping Debian unstable and experimental as they are currently broken. [][]
    • Use the 64-bit amd64 kernel on all i386 nodes; no more 686 PAE kernels. []
    • Add an Erlang package set. []
  • Other changes:

    • Grant Jan-Benedict Glaw shell access to the Jenkins node. []
    • Enable debugging for NetBSD reproducibility testing. []
    • Use /usr/bin/du --apparent-size in the Jenkins shell monitor. []
    • Revert “reproducible nodes: mark osuosl2 as down”. []
    • Thanks again to Codethink for doubling the RAM on our arm64 nodes. []
    • Only set /proc/$pid/oom_score_adj to -1000 if it has not already been done. []
    • Add the opemwrt-target-tegra and jtx tasks to the list of zombie jobs. [][]

Vagrant Cascadian also made the following changes:

  • Overhaul the handling of OpenSSH configuration files after updating from Debian bookworm. [][][]
  • Add two new armhf architecture build nodes, virt32z and virt64z, and insert them into the Munin monitoring. [][] [][]

In addition, Alexander Couzens updated the OpenWrt configuration in order to replace the tegra target with mpc85xx [], Jan-Benedict Glaw updated the NetBSD build script to use a separate $TMPDIR to mitigate out of space issues on a tmpfs-backed /tmp [] and Zheng Junjie added a link to the GNU Guix tests [].

Lastly, node maintenance was performed by Holger Levsen [][][][][][] and Vagrant Cascadian [][][][].


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Alternatively, you can get in touch with us via:

Planet Debian: Iustin Pop: Finally learning some Rust - hello photo-backlog-exporter!

After 4? 5? or so years of wanting to learn Rust, over the past 4 or so months I finally bit the bullet and found the motivation to write some Rust. And the subject.

And I was, and still am, thoroughly surprised. It’s like someone took Haskell, simplified it to some extent, and wrote a systems language out of it. Writing Rust after Haskell seems easy, and pleasant, and you:

  • don’t have to care about unintended laziness which causes memory “leaks” (stuck memory, more like).
  • don’t have to care about GC eating too much of your multi-threaded RTS.
  • can be happy that there’s lots of activity and buzz around the language.
  • can be happy for generating very small, efficient binaries that feel right at home on a Raspberry Pi, even on models older than the 5.
  • are very happy that error handling is done right (Option and Result, not like Go…)

On the other hand:

  • there are no actual monads; the ? operator kind-of-looks-like being in do blocks, but only and only for Option and Result, sadly.
  • there’s no Stackage; it’s like having only Hackage available, and you can only hope all packages work together well.
  • most packaging is designed to work only against upstream/online crates.io, so offline packaging is doable but not “native” (from what I’ve seen).
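
The point above about `?` being a restricted form of monadic bind can be made concrete with a minimal sketch (my example, not from the post): each `?` either unwraps an `Ok` value or returns the error to the caller immediately, roughly what a `do` block desugars to in Haskell, but built into the language only for `Result` and `Option`.

```rust
use std::num::ParseIntError;

// Parse two integers and add them. Each `?` unwraps an Ok value or
// returns the ParseIntError to the caller immediately, so any parse
// failure short-circuits the whole function.
fn add_strs(a: &str, b: &str) -> Result<i64, ParseIntError> {
    let x: i64 = a.parse()?;
    let y: i64 = b.parse()?;
    Ok(x + y)
}

fn main() {
    assert_eq!(add_strs("2", "40"), Ok(42));
    // Short-circuits at the second `?`: the Err propagates out.
    assert!(add_strs("2", "forty").is_err());
    println!("ok");
}
```

The same operator works inside functions returning `Option`, but, as noted, for nothing else.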

However, overall, one can clearly see there’s more movement in Rust, and the quality of some parts of the toolchain is better (looking at you, rust-analyzer, compared to HLS).

So, with that, I’ve just tagged photo-backlog-exporter v0.1.0. It’s a port of a Python script that was run as a textfile collector, which meant updates every ~15 minutes, since it was a bit slow to start, which I then rewrote in Go (but I don’t like Go the language, plus the GC - if I have to deal with a GC, I’d rather write Haskell), then finally rewrote in Rust.

What does this do? It exports metrics for Prometheus based on the count, age and distribution of files in a directory. These files being, for me, the pictures I still have to sort, cull and process, because I never have enough free time to clear out the backlog. The script is kind of designed to work together with Corydalis, but since it doesn’t care about file content, it can also double (easily) as a simple “file count/age exporter”.

And to my surprise, writing in Rust is soo pleasant, that the feature list is greater than the original Python script, and - compared to that untested script - I’ve rather easily achieved a very high coverage ratio. Rust has multiple types of tests, and the combination allows getting pretty down to details on testing:

  • region coverage: >80%
  • function coverage: >89% (so close here!)
  • line coverage: >95%

I had to combine a (large) number of testing crates to get it expressive enough, but it was worth the effort. The last find from yesterday, assert_cmd, is excellent for describing tests and assertions in Rust itself, rather than via a separate new DSL, as I was doing with shelltest in Haskell.

To some extent, I feel like I found the missing arrow in the quiver. Haskell is good, quite very good for some type of workloads, but of course not all, and Rust complements that very nicely, with lots of overlap (as expected). Python can fill in any quick-and-dirty scripting needed. And I just need to learn more frontend, specifically Typescript (the language, not referring to any specific libraries/frameworks), and I’ll be ready for AI to take over coding 😅…

So, for now, I’ll need to split my free time coding between all of the above, and keep exercising my skills. But so glad to have found a good new language!

365 Tomorrows: 12 Steps

Author: Janice Cyntje

Alfonso stood near the podium of his community center’s conference room and wavered. Although he was grateful that his niece had invited him to speak at this 12-step support group, he was nevertheless cautious of the emotional fallout from airing his life’s dirty laundry. Beads of perspiration trickled down his brows as […]

The post 12 Steps appeared first on 365tomorrows.

Charles Stross: A message from our sponsors: New Book coming!

(You probably expected this announcement a while ago ...)

I've just signed a new two book deal with my publishers, Tor.com publishing in the USA/Canada and Orbit in the UK/rest of world, and the book I'm talking about here and now—the one that's already written and delivered to the Production people who turn it into a thing you'll be able to buy later this year—is a Laundry stand-alone titled "A Conventional Boy".

("Delivered to production" means it is now ready to be copy-edited, typeset, printed/bound/distributed and simultaneously turned into an ebook and pushed through the interwebbytubes to the likes of Kobo and Kindle. I do not have a publication date or a link where you can order it yet: it almost certainly can't show up before July at this point. Yes, everything is running late. No, I have no idea why.)

"A Conventional Boy" is not part of the main (and unfinished) Laundry Files story arc. Nor is it a New Management story. It's a stand-alone story about Derek the DM, set some time between the end of "The Fuller Memorandum" and before "The Delirium Brief". We met Derek originally in "The Nightmare Stacks", and again in "The Labyrinth Index": he's a portly, short-sighted, middle-aged nerd from the Laundry's Forecasting Ops department who also just happens to be the most powerful precognitive the Laundry has tripped over in the past few decades—and a role playing gamer.

When Derek was 14 years old and running a D&D campaign, a schoolteacher overheard him explaining D&D demons to his players and called a government tips hotline. Thirty-odd years later Derek has lived most of his life in Camp Sunshine, the Laundry's magical Gitmo for Elder God cultists. As a trusty/"safe" inmate, he produces the camp newsletter and uses his postal privileges to run a play-by-mail RPG. One day, two pieces of news cross Derek's desk: the camp is going to be closed down and rebuilt as a real prison, and a games convention is coming to the nearest town.

Camp Sunshine is officially escape-proof, but Derek has had a foolproof escape plan socked away for the past decade. He hasn't used it because until now he's never had anywhere to escape to. But now he's facing the demolition of his only home, and he has a destination in mind. Come hell or high water, Derek intends to go to his first ever convention. Little does he realize that hell is also going to the convention ...

I began writing "A Conventional Boy" in 2009, thinking it'd make a nice short story. It went on hold for far too long (it was originally meant to come out before "The Nightmare Stacks"!) but instead it lingered ... then when I got back to work on it, the story ran away and grew into a short novel in its own right. As it's rather shorter than the other Laundry novels (although twice as long as, say, "Equoid") the book also includes "Overtime" and "Escape from Yokai Land", both Laundry Files novelettes about Bob, and an afterword providing some background on the 1980s Satanic D&D Panic for readers who don't remember it (which sadly means anyone much younger than myself).

Questions? Ask me anything!

Planet Debian: Valhalla's Things: Elastic Neck Top Two: MOAR Ruffles

Posted on March 9, 2024
Tags: madeof:atoms, craft:sewing, FreeSoftWear

A woman wearing a white top with a wide neck with ruffles and puffy sleeves that are gathered at the cuff. The top is tucked in the trousers to gather the fullness at the waist.

After making my Elastic Neck Top I knew I wanted to make another one less constrained by the amount of available fabric.

I had a big cut of white cotton voile, I bought some more swimsuit elastic, and I also had a spool of n°100 sewing cotton, but then I postponed the project for a while as I was working on other things.

Then FOSDEM 2024 arrived, I was going to remote it, and I was working on my Augusta Stays, but I knew that in the middle of FOSDEM I risked getting to the stage where I needed to leave the computer to try the stays on: not something really compatible with the frenetic pace of a FOSDEM weekend, even one spent at home.

I needed a backup project[1], and this was perfect: I already had everything I needed, the pattern and instructions were already on my site (so I didn’t need to take pictures while working), and it was mostly a lot of straight seams, perfect while watching conference videos.

So, on the Friday before FOSDEM I cut all of the pieces, then spent three quarters of FOSDEM on the stays, and when I reached the point where I needed to stop for a fit test I started on the top.

Like the first one, everything was sewn by hand, and one week after I had started everything was assembled, except for the casings for the elastic at the neck and cuffs, which required about 10 km of sewing, and even if it was just a running stitch it made me want to reconsider my lifestyle choices a few times: there was really no reason for me not to do just those seams by machine in a few minutes.

Instead I kept sewing by hand whenever I had time for it, and on the next weekend it was ready. We had a rare day of sun during the weekend, so I wore my thermal underwear, some other layer, a scarf around my neck, and went outside with my SO to have a batch of pictures taken (those in the jeans posts, and others for a post I haven’t written yet. Have I mentioned I have a backlog?).

And then the top went into the wardrobe, and it will come out again when the weather is a bit warmer. Or maybe it will be used under the Augusta Stays, since I don’t have a 1700s chemise yet, but that requires actually finishing them.

The pattern for this project was already online, of course, but I’ve added a picture of the casing to the relevant section, and everything is as usual #FreeSoftWear.


  1. yes, I could have worked on some knitting WIP, but lately I’m more in a sewing mood.↩︎


Planet Debian: Louis-Philippe Véronneau: Acts of active procrastination: example of a silly Python script for Moodle

My brain is currently suffering from an overload caused by grading student assignments.

In search of a somewhat productive way to procrastinate, I thought I would share a small script I wrote sometime in 2023 to facilitate my grading work.

I use Moodle for all the classes I teach and students use it to hand me out their papers. When I'm ready to grade them, I download the ZIP archive Moodle provides containing all their PDF files and comment them using xournalpp and my Wacom tablet.

Once this is done, I have a directory structure that looks like this:

Assignment FooBar/
├── Student A_21100_assignsubmission_file
│   ├── graded paper.pdf
│   ├── Student A's perfectly named assignment.pdf
│   └── Student A's perfectly named assignment.xopp
├── Student B_21094_assignsubmission_file
│   ├── graded paper.pdf
│   ├── Student B's perfectly named assignment.pdf
│   └── Student B's perfectly named assignment.xopp
├── Student C_21093_assignsubmission_file
│   ├── graded paper.pdf
│   ├── Student C's perfectly named assignment.pdf
│   └── Student C's perfectly named assignment.xopp
⋮

Before I can upload files back to Moodle, this directory needs to be copied (I have to keep the original files), cleaned of everything but the graded paper.pdf files and compressed in a ZIP.

You can see how this can quickly get tedious to do by hand. Not being a complete tool, I often resorted to crafting a few spurious shell one-liners each time I had to do this[1]. Eventually I got tired of ctrl-R-ing my shell history and wrote something reusable.

Behold this script! When I began writing this post, I was certain I had cheaped out on my 2021 New Year's resolution and written it in Shell, but glory!, it seems I used a proper scripting language instead.

#!/usr/bin/python3

# Copyright (C) 2023, Louis-Philippe Véronneau <pollo@debian.org>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

"""
This script aims to take a directory containing PDF files exported via the
Moodle mass download function, remove everything but the final files to submit
back to the students and zip it back.

usage: ./moodle-zip.py <target_dir>
"""

import os
import shutil
import sys
import tempfile

from fnmatch import fnmatch


def sanity(directory):
    """Run sanity checks before doing anything else"""
    base_directory = os.path.basename(os.path.normpath(directory))
    if not os.path.isdir(directory):
        sys.exit(f"Target directory {directory} is not a valid directory")
    if os.path.exists(f"/tmp/{base_directory}.zip"):
        sys.exit(f"Final ZIP file path '/tmp/{base_directory}.zip' already exists")
    for root, dirnames, _ in os.walk(directory):
        for dirname in dirnames:
            corrige_present = False
            for file in os.listdir(os.path.join(root, dirname)):
                if fnmatch(file, 'graded paper.pdf'):
                    corrige_present = True
            if corrige_present is False:
                sys.exit(f"Directory {dirname} does not contain a 'graded paper.pdf' file")


def clean(directory):
    """Remove superfluous files, to keep only the graded PDF"""
    with tempfile.TemporaryDirectory() as tmp_dir:
        shutil.copytree(directory, tmp_dir, dirs_exist_ok=True)
        for root, _, filenames in os.walk(tmp_dir):
            for file in filenames:
                if not fnmatch(file, 'graded paper.pdf'):
                    os.remove(os.path.join(root, file))
        compress(tmp_dir, directory)


def compress(directory, target_dir):
    """Compress directory into a ZIP file and save it to the target dir"""
    target_dir = os.path.basename(os.path.normpath(target_dir))
    shutil.make_archive(f"/tmp/{target_dir}", 'zip', directory)
    print(f"Final ZIP file has been saved to '/tmp/{target_dir}.zip'")


def main():
    """Main function"""
    target_dir = sys.argv[1]
    sanity(target_dir)
    clean(target_dir)


if __name__ == "__main__":
    main()

If for some reason you happen to have a similar workflow as I and end up using this script, hit me up?

Now, back to grading...


  1. If I recall correctly, the lazy way I used to do it involved copying the directory, renaming the extension of the graded paper.pdf files, deleting all .pdf and .xopp files using find and changing graded paper.foobar back to a PDF. Some clever regex or learning awk from the ground up could've probably done the job as well, but you know, that would have required using my brain and spending spoons... 

Cryptogram Essays from the Second IWORD

Cryptogram How the “Frontier” Became the Slogan of Uncontrolled AI

Artificial intelligence (AI) has been billed as the next frontier of humanity: the newly available expanse whose exploration will drive the next era of growth, wealth, and human flourishing. It’s a scary metaphor. Throughout American history, the drive for expansion and the very concept of terrain up for grabs—land grabs, gold rushes, new frontiers—have provided a permission structure for imperialism and exploitation. This could easily hold true for AI.

This isn’t the first time the concept of a frontier has been used as a metaphor for AI, or technology in general. As early as 2018, the powerful foundation models powering cutting-edge applications like chatbots have been called “frontier AI.” In previous decades, the internet itself was considered an electronic frontier. Early cyberspace pioneer John Perry Barlow wrote “Unlike previous frontiers, this one has no end.” When he and others founded the internet’s most important civil liberties organization, they called it the Electronic Frontier Foundation.

America’s experience with frontiers is fraught, to say the least. Expansion into the Western frontier and beyond has been a driving force in our country’s history and identity—and has led to some of the darkest chapters of our past. The tireless drive to conquer the frontier has directly motivated some of this nation’s most extreme episodes of racism, imperialism, violence, and exploitation.

That history has something to teach us about the material consequences we can expect from the promotion of AI today. The race to build the next great AI app is not the same as the California gold rush. But the potential that outsize profits will warp our priorities, values, and morals is, unfortunately, analogous.

Already, AI is starting to look like a colonialist enterprise. AI tools are helping the world’s largest tech companies grow their power and wealth, are spurring nationalistic competition between empires racing to capture new markets, and threaten to supercharge government surveillance and systems of apartheid. It looks more than a bit like the competition among colonialist state and corporate powers in the seventeenth century, which together carved up the globe and its peoples. By considering America’s past experience with frontiers, we can understand what AI may hold for our future, and how to avoid the worst potential outcomes.

America’s “Frontier” Problem

For 130 years, historians have used frontier expansion to explain sweeping movements in American history. Yet only for the past thirty years have we generally acknowledged its disastrous consequences.

Frederick Jackson Turner famously introduced the frontier as a central concept for understanding American history in his vastly influential 1893 essay. As he concisely wrote, “American history has been in a large degree the history of the colonization of the Great West.”

Turner used the frontier to understand all the essential facts of American life: our culture, way of government, national spirit, our position among world powers, even the “struggle” of slavery. The endless opportunity for westward expansion was a beckoning call that shaped the American way of life. Per Turner’s essay, the frontier resulted in the individualistic self-sufficiency of the settler and gave every (white) man the opportunity to attain economic and political standing through hardscrabble pioneering across dangerous terrain.

The New Western History movement, gaining steam through the 1980s and led by researchers like Patricia Nelson Limerick, laid plain the racial, gender, and class dynamics that were always inherent to the frontier narrative. This movement’s story is one where frontier expansion was a tool used by the white settler to perpetuate a power advantage.

The frontier was not a siren calling out to unwary settlers; it was a justification, used by one group to subjugate another. It was always a convenient, seemingly polite excuse for the powerful to take what they wanted. Turner grappled with some of the negative consequences and contradictions of the frontier ethic and how it shaped American democracy. But many of those whom he influenced did not do this; they celebrated it as a feature, not a bug. Theodore Roosevelt wrote extensively and explicitly about how the frontier and his conception of white supremacy justified expansion to points west and, through the prosecution of the Spanish-American War, far across the Pacific. Woodrow Wilson, too, celebrated the imperial loot from that conflict in 1902. Capitalist systems are “addicted to geographical expansion” and even, when they run out of geography, seek to produce new kinds of spaces to expand into. This is what the geographer David Harvey calls the “spatial fix.”

Claiming that AI will be a transformative expanse on par with the Louisiana Purchase or the Pacific frontiers is a bold assertion—but increasingly plausible after a year dominated by ever more impressive demonstrations of generative AI tools. It’s a claim bolstered by billions of dollars in corporate investment, by intense interest of regulators and legislators worldwide in steering how AI is developed and used, and by the variously utopian or apocalyptic prognostications from thought leaders of all sectors trying to understand how AI will shape their sphere—and the entire world.

AI as a Permission Structure

Like the western frontier in the nineteenth century, the maniacal drive to unlock progress via advancement in AI can become a justification for political and economic expansionism and an excuse for racial oppression.

In the modern day, OpenAI famously paid dozens of Kenyans little more than a dollar an hour to process data used in training their models underlying products such as ChatGPT. Paying low wages to data labelers surely can’t be equated to the chattel slavery of nineteenth-century America. But these workers did endure brutal conditions, including being set to constantly review content with “graphic scenes of violence, self-harm, murder, rape, necrophilia, child abuse, bestiality, and incest.” There is a global market for this kind of work, which has been essential to the most important recent advances in AI such as Reinforcement Learning with Human Feedback, heralded as the most important breakthrough of ChatGPT.

The gold rush mentality associated with expansion is taken by the new frontiersmen as permission to break the rules, and to build wealth at the expense of everyone else. In 1840s California, gold miners trespassed on public lands and yet were allowed to stake private claims to the minerals they found, and even to exploit the water rights on those lands. Again today, the game is to push the boundaries on what rule-breaking society will accept, and hope that the legal system can’t keep up.

Many internet companies have behaved in exactly the same way since the dot-com boom. The prospectors of internet wealth lobbied for, or simply took of their own volition, numerous government benefits in their scramble to capture those frontier markets. For years, the Federal Trade Commission has looked the other way or been lackadaisical in halting antitrust abuses by Amazon, Facebook, and Google. Companies like Uber and Airbnb exploited loopholes in, or ignored outright, local laws on taxis and hotels. And Big Tech platforms enjoyed a liability shield that protected them from punishment for the content people posted to their sites.

We can already see this kind of boundary pushing happening with AI.

Modern frontier AI models are trained using data, often copyrighted materials, with untested legal justification. Data is like water for AI, and, like the fight over water rights in the West, we are repeating a familiar process of public acquiescence to private use of resources. While some lawsuits are pending, so far AI companies have faced no significant penalties for the unauthorized use of this data.

Pioneers of self-driving vehicles tried to skip permitting processes and used fake demonstrations of their capabilities to avoid government regulation and entice consumers. Meanwhile, AI companies’ hope is that they won’t be held to blame if the AI tools they produce spew out harmful content that causes damage in the real world. They are trying to use the same liability shield that fostered Big Tech’s exploitation of the previous electronic frontiers—the web and social media—to protect their own actions.

Even where we have concrete rules governing deleterious behavior, some hope that using AI is itself enough to skirt them. Copyright infringement is illegal if a person does it, but would that same person be punished if they train a large language model to regurgitate copyrighted works? In the political sphere, the Federal Election Commission has precious few powers to police political advertising; some wonder whether those powers will be considered relevant at all if people break the rules using AI.

AI and American Exceptionalism

Like the United States’ historical frontier, AI has a feel of American exceptionalism. Historically, we believed we were different from the Old World powers of Europe because we enjoyed the manifest destiny of unrestrained expansion between the oceans. Today, we have the most computing power, the most data scientists, the most venture-capital investment, and the most AI companies. This exceptionalism has historically led many Americans to believe they don’t have to play by the same rules as everyone else.

Both historically and in the modern day, this idea has led to deleterious consequences such as militaristic nationalism (used to justify foreign interventions in Iraq and elsewhere), masking of severe inequity within our borders, abdication of responsibility from global treaties on climate and law enforcement, and alienation from the international community. American exceptionalism has also wrought havoc on our country’s engagement with the internet, including lawless spying and surveillance by forces like the National Security Agency.

The same line of thinking could have disastrous consequences if applied to AI. It could perpetuate a nationalistic, Cold War–style narrative about America’s inexorable struggle with China, this time predicated on an AI arms race. Moral exceptionalism justifies why we should be allowed to use tools and weapons that are dangerous in the hands of a competitor, or enemy. It could enable the next stage of growth of the military-industrial complex, with claims of an urgent need to modernize missile systems and drones through using AI. And it could renew a rationalization for violating civil liberties in the US and human rights abroad, empowered by the idea that racial profiling is more objective if enforced by computers.

The inaction of Congress on AI regulation threatens to land the US in a regime of de facto American exceptionalism for AI. While the EU is about to pass its comprehensive AI Act, lobbyists in the US have muddled legislative action. While the Biden administration has used its executive authority and federal purchasing power to exert some limited control over AI, the gap left by lack of legislation leaves AI in the US looking like the Wild West—a largely unregulated frontier.

The lack of restraint by the US on potentially dangerous AI technologies has a global impact. First, its tech giants let loose their products upon the global public, with the harms that this brings with it. Second, it creates a negative incentive for other jurisdictions to more forcefully regulate AI. The EU’s regulation of high-risk AI use cases begins to look like unilateral disarmament if the US does not take action itself. Why would Europe tie the hands of its tech competitors if the US refuses to do the same?

AI and Unbridled Growth

The fundamental problem with frontiers is that they seem to promise cost-free growth. There was a constant pressure for American westward expansion because a bigger, more populous country accrues more power and wealth to the elites and because, for any individual, a better life was always one more wagon ride away into “empty” terrain. AI presents the same opportunities. No matter what field you’re in or what problem you’re facing, the attractive opportunity of AI as a free labor multiplier probably seems like the solution; or, at least, makes for a good sales pitch.

That would actually be okay, except that the growth isn’t free. America’s imperial expansion displaced, harmed, and subjugated native peoples in the Americas, Africa, and the Pacific, while enlisting poor whites to participate in the scheme against their class interests. Capitalism makes growth look like the solution to all problems, even when it’s clearly not. The problem is that so many costs are externalized. Why pay a living wage to human supervisors training AI models when an outsourced gig worker will do it at a fraction of the cost? Why power data centers with renewable energy when it’s cheaper to surge energy production with fossil fuels? And why fund social protections for wage earners displaced by automation if you don’t have to? The potential of consumer applications of AI, from personal digital assistants to self-driving cars, is irresistible; who wouldn’t want a machine to take on the most routinized and aggravating tasks in your daily life? But the externalized cost for consumers is accepting the inevitability of domination by an elite who will extract every possible profit from AI services.

Controlling Our Frontier Impulses

None of these harms are inevitable. Although the structural incentives of capitalism and its growth remain the same, we can make different choices about how to confront them.

We can strengthen basic democratic protections and market regulations to avoid the worst impacts of AI colonialism. We can require ethical employment for the humans toiling to label data and train AI models. And we can set the bar higher for mitigating bias in training and harm from outputs of AI models.

We don’t have to cede all the power and decision making about AI to private actors. We can create an AI public option to provide an alternative to corporate AI. We can provide universal access to ethically built and democratically governed foundational AI models that any individual—or company—could use and build upon.

More ambitiously, we can choose not to privatize the economic gains of AI. We can cap corporate profits, raise the minimum wage, or redistribute an automation dividend as a universal basic income to let everyone share in the benefits of the AI revolution. And, if these technologies save as much labor as companies say they do, maybe we can also all have some of that time back.

And we don’t have to treat the global AI gold rush as a zero-sum game. We can emphasize international cooperation instead of competition. We can align on shared values with international partners and create a global floor for responsible regulation of AI. And we can ensure that access to AI uplifts developing economies instead of further marginalizing them.

This essay was written with Nathan Sanders, and was originally published in Jacobin.

Krebs on SecurityA Close Up Look at the Consumer Data Broker Radaris

If you live in the United States, the data broker Radaris likely knows a great deal about you, and they are happy to sell what they know to anyone. But how much do we know about Radaris? Publicly available data indicates that in addition to running a dizzying array of people-search websites, the co-founders of Radaris operate multiple Russian-language dating services and affiliate programs. It also appears many of their businesses have ties to a California marketing firm that works with a Russian state-run media conglomerate currently sanctioned by the U.S. government.

Formed in 2009, Radaris is a vast people-search network for finding data on individuals, properties, phone numbers, businesses and addresses. Search for any American’s name in Google and the chances are excellent that a listing for them at Radaris.com will show up prominently in the results.

Radaris reports typically bundle a substantial amount of data scraped from public and court documents, including any current or previous addresses and phone numbers, known email addresses and registered domain names. The reports also list address and phone records for the target’s known relatives and associates. Such information could be useful if you were trying to determine the maiden name of someone’s mother, or successfully answer a range of other knowledge-based authentication questions.

Currently, consumer reports advertised for sale at Radaris.com are being fulfilled by a different people-search company called TruthFinder. But Radaris also operates a number of other people-search properties — like Centeda.com — that sell consumer reports directly and behave almost identically to TruthFinder: That is, reel the visitor in with promises of detailed background reports on people, and then charge a $34.99 monthly subscription fee just to view the results.

The Better Business Bureau (BBB) assigns Radaris a rating of “F” for consistently ignoring consumers seeking to have their information removed from Radaris’ various online properties. Of the 159 complaints detailed there in the last year, several were from people who had used third-party identity protection services to have their information removed from Radaris, only to receive a notice a few months later that their Radaris record had been restored.

What’s more, Radaris’ automated process for requesting the removal of your information requires signing up for an account, potentially providing more information about yourself that the company didn’t already have (see screenshot above).

Radaris has not responded to requests for comment.

Radaris, TruthFinder and others like them all force users to agree that their reports will not be used to evaluate someone’s eligibility for credit, or a new apartment or job. This language is so prominent in people-search reports because selling reports for those purposes would classify these firms as consumer reporting agencies (CRAs) and expose them to regulations under the Fair Credit Reporting Act (FCRA).

These data brokers do not want to be treated as CRAs, and for this reason their people search reports typically do not include detailed credit histories, financial information, or full Social Security Numbers (Radaris reports include the first six digits of one’s SSN).

But in September 2023, the U.S. Federal Trade Commission found that TruthFinder and another people-search service Instant Checkmate were trying to have it both ways. The FTC levied a $5.8 million penalty against the companies for allegedly acting as CRAs because they assembled and compiled information on consumers into background reports that were marketed and sold for employment and tenant screening purposes.

An excerpt from the FTC’s complaint against TruthFinder and Instant Checkmate.

The FTC also found TruthFinder and Instant Checkmate deceived users about background report accuracy. The FTC alleges these companies made millions from their monthly subscriptions using push notifications and marketing emails that claimed that the subject of a background report had a criminal or arrest record, when the record was merely a traffic ticket.

“All the while, the companies touted the accuracy of their reports in online ads and other promotional materials, claiming that their reports contain ‘the MOST ACCURATE information available to the public,’” the FTC noted. The FTC says, however, that all the information used in their background reports is obtained from third parties that expressly disclaim that the information is accurate, and that TruthFinder and Instant Checkmate take no steps to verify the accuracy of the information.

The FTC said both companies deceived customers by providing “Remove” and “Flag as Inaccurate” buttons that did not work as advertised. Rather, the “Remove” button removed the disputed information only from the report as displayed to that customer; however, the same item of information remained visible to other customers who searched for the same person.

The FTC also said that when a customer flagged an item in the background report as inaccurate, the companies never took any steps to investigate those claims, to modify the reports, or to flag to other customers that the information had been disputed.

WHO IS RADARIS?

According to Radaris’ profile at the investor website Pitchbook.com, the company’s founder and “co-chief executive officer” is a Massachusetts resident named Gary Norden, also known as Gary Nard.

An analysis of email addresses known to have been used by Mr. Norden shows he is a native Russian man whose real name is Igor Lybarsky (also spelled Lubarsky). Igor’s brother Dmitry, who goes by “Dan,” appears to be the other co-CEO of Radaris. Dmitry Lybarsky’s Facebook/Meta account says he was born in March 1963.

The Lybarsky brothers Dmitry or “Dan” (left) and Igor a.k.a. “Gary,” in an undated photo.

Indirectly or directly, the Lybarskys own multiple properties in both Sherborn and Wellesley, Mass. However, the Radaris website is operated by an offshore entity called Bitseller Expert Ltd, which is incorporated in Cyprus. Neither Lybarsky brother responded to requests for comment.

A review of the domain names registered by Gary Norden shows that beginning in the early 2000s, he and Dan built an e-commerce empire by marketing prepaid calling cards and VOIP services to Russian expatriates who are living in the United States and seeking an affordable way to stay in touch with loved ones back home.

A Sherborn, Mass. property owned by Barsky Real Estate Trust and Dmitry Lybarsky.

In 2012, the main company in charge of providing those calling services — Wellesley Hills, Mass-based Unipoint Technology Inc. — was fined $179,000 by the U.S. Federal Communications Commission, which said Unipoint never applied for a license to provide international telecommunications services.

DomainTools.com shows the email address gnard@unipointtech.com is tied to 137 domains, including radaris.com. DomainTools also shows that the email addresses used by Gary Norden for more than two decades — epop@comby.com, gary@barksy.com and gary1@eprofit.com, among others — appear in WHOIS registration records for an entire fleet of people-search websites, including: centeda.com, virtory.com, clubset.com, kworld.com, newenglandfacts.com, and pub360.com.

Still more people-search platforms tied to Gary Norden — like publicreports.com and arrestfacts.com — currently funnel interested customers to third-party search companies, such as TruthFinder and PersonTrust.com.

The email addresses used by Gary Nard/Gary Norden are also connected to a slew of data broker websites that sell reports on businesses, real estate holdings, and professionals, including bizstanding.com, homemetry.com, trustoria.com, homeflock.com, rehold.com, difive.com and projectlab.com.

AFFILIATE & ADULT

Domain records indicate that Gary and Dan for many years operated a now-defunct pay-per-click affiliate advertising network called affiliate.ru. That entity used domain name servers tied to the aforementioned domains comby.com and eprofit.com, as did radaris.ru.

A machine-translated version of Affiliate.ru, a Russian-language site that advertised hundreds of money making affiliate programs, including the Comfi.com prepaid calling card affiliate.

Comby.com used to be a Russian language social media network that looked a great deal like Facebook. The domain now forwards visitors to Privet.ru (“hello” in Russian), a dating site that claims to have 5 million users. Privet.ru says it belongs to a company called Dating Factory, which lists offices in Switzerland. Privet.ru uses the Gary Norden domain eprofit.com for its domain name servers.

Dating Factory’s website says it sells “powerful dating technology” to help customers create unique or niche dating websites. A review of the sample images available on the Dating Factory homepage suggests the term “dating” in this context refers to adult websites. Dating Factory also operates a community called FacebookOfSex, as well as the domain analslappers.com.

RUSSIAN AMERICA

Email addresses for the Comby and Eprofit domains indicate Gary Norden operates an entity in Wellesley Hills, Mass. called RussianAmerican Holding Inc. (russianamerica.com). This organization is listed as the owner of the domain newyork.ru, which is a site dedicated to orienting newcomers from Russia to the Big Apple.

Newyork.ru’s terms of service refer to an international calling card company called ComFi Inc. (comfi.com) and list an address as PO Box 81362 Wellesley Hills, Ma. Other sites that include this address are russianamerica.com, russianboston.com, russianchicago.com, russianla.com, russiansanfran.com, russianmiami.com, russiancleveland.com and russianseattle.com (currently offline).

ComFi is tied to Comfibook.com, which was a search aggregator website that collected and published data from many online and offline sources, including phone directories, social networks, online photo albums, and public records.

The current website for russianamerica.com. Note the ad in the bottom left corner of this image for Channel One, a Russian state-owned media firm that is currently sanctioned by the U.S. government.

AMERICAN RUSSIAN MEDIA

Many of the U.S. city-specific online properties apparently tied to Gary Norden include phone numbers on their contact pages for a pair of Russian media and advertising firms based in southern California. The phone number 323-874-8211 appears on the websites russianla.com, russiasanfran.com, and rosconcert.com, which sells tickets to theater events performed in Russian.

Historic domain registration records from DomainTools show rosconcert.com was registered in 2003 to Unipoint Technologies — the same company fined by the FCC for not having a license. Rosconcert.com also lists the phone number 818-377-2101.

A phone number just a few digits away — 323-874-8205 — appears as a point of contact on newyork.ru, russianmiami.com, russiancleveland.com, and russianchicago.com. A search in Google shows this 82xx number range — and the 818-377-2101 number — belong to two different entities at the same UPS Store mailbox in Tarzana, Calif: American Russian Media Inc. (armediacorp.com), and Lamedia.biz.

Armediacorp.com is the home of FACT Magazine, a glossy Russian-language publication put out jointly by the American-Russian Business Council, the Hollywood Chamber of Commerce, and the West Hollywood Chamber of Commerce.

Lamedia.biz says it is an international media organization with more than 25 years of experience within the Russian-speaking community on the West Coast. The site advertises FACT Magazine and the Russian state-owned media outlet Channel One. Clicking the Channel One link on the homepage shows Lamedia.biz offers to submit advertising spots that can be shown to Channel One viewers. The price for a basic ad is listed at $500.

In May 2022, the U.S. government levied financial sanctions against Channel One that bar US companies or citizens from doing business with the company.

The website of lamedia.biz offers to sell advertising on two Russian state-owned media firms currently sanctioned by the U.S. government.

LEGAL ACTIONS AGAINST RADARIS

In 2014, a group of people sued Radaris in a class-action lawsuit claiming the company’s practices violated the Fair Credit Reporting Act. Court records indicate the defendants never showed up in court to dispute the claims, and as a result the judge eventually awarded the plaintiffs a default judgment and ordered the company to pay $7.5 million.

But the plaintiffs in that civil case had a difficult time collecting on the court’s ruling. In response, the court ordered the radaris.com domain name (~9.4M monthly visitors) to be handed over to the plaintiffs.

However, in 2018 Radaris was able to reclaim their domain on a technicality. Attorneys for the company argued that their clients were never named as defendants in the original lawsuit, and so their domain could not legally be taken away from them in a civil judgment.

“Because our clients were never named as parties to the litigation, and were never served in the litigation, the taking of their property without due process is a violation of their rights,” Radaris’ attorneys argued.

In October 2023, an Illinois resident filed a class-action lawsuit against Radaris for allegedly using people’s names for commercial purposes, in violation of the Illinois Right of Publicity Act.

On Feb. 8, 2024, a company called Atlas Data Privacy Corp. sued Radaris LLC for allegedly violating “Daniel’s Law,” a statute that allows New Jersey law enforcement, government personnel, judges and their families to have their information completely removed from people-search services and commercial data brokers. Atlas has filed at least 140 similar Daniel’s Law complaints against data brokers recently.

Daniel’s Law was enacted in response to the death of 20-year-old Daniel Anderl, who was killed in a violent attack targeting a federal judge (his mother). In July 2020, a disgruntled attorney who had appeared before U.S. District Judge Esther Salas disguised himself as a Fedex driver, went to her home and shot and killed her son (the judge was unharmed and the assailant killed himself).

Earlier this month, The Record reported on Atlas Data Privacy’s lawsuit against LexisNexis Risk Data Management, in which the plaintiffs representing thousands of law enforcement personnel in New Jersey alleged that after they asked for their information to remain private, the data broker retaliated against them by freezing their credit and falsely reporting them as identity theft victims.

Another data broker sued by Atlas Data Privacy — pogodata.com — announced on Mar. 1 that it was likely shutting down because of the lawsuit.

“The matter is far from resolved but your response motivates us to try to bring back most of the names while preserving redaction of the 17,000 or so clients of the redaction company,” the company wrote. “While little consolation, we are not alone in the suit – the privacy company sued 140 property-data sites at the same time as PogoData.”

Atlas says their goal is to convince more states to pass similar laws, and to extend those protections to other groups such as teachers, healthcare personnel and social workers. Meanwhile, media law experts say they’re concerned that enacting Daniel’s Law in other states would limit the ability of journalists to hold public officials accountable, and allow authorities to pursue criminal charges against media outlets that publish the same type of public and government records that fuel the people-search industry.

PEOPLE-SEARCH CARVE-OUTS

There are some pending changes to the US legal and regulatory landscape that could soon reshape large swaths of the data broker industry. But experts say it is unlikely that any of these changes will affect people-search companies like Radaris.

On Feb. 28, 2024, the White House issued an executive order that directs the U.S. Department of Justice (DOJ) to create regulations that would prevent data brokers from selling or transferring abroad certain data types deemed too sensitive, including genomic and biometric data, geolocation and financial data, as well as other as-yet unspecified personal identifiers. The DOJ this week published a list of more than 100 questions it is seeking answers to regarding the data broker industry.

In August 2023, the Consumer Financial Protection Bureau (CFPB) announced it was undertaking new rulemaking related to data brokers.

Justin Sherman, an adjunct professor at Duke University, said neither the CFPB nor White House rulemaking will likely address people-search brokers because these companies typically get their information by scouring federal, state and local government records. Those government files include voting registries, property filings, marriage certificates, motor vehicle records, criminal records, court documents, death records, professional licenses, bankruptcy filings, and more.

“These dossiers contain everything from individuals’ names, addresses, and family information to data about finances, criminal justice system history, and home and vehicle purchases,” Sherman wrote in an October 2023 article for Lawfare. “People search websites’ business pitch boils down to the fact that they have done the work of compiling data, digitizing it, and linking it to specific people so that it can be searched online.”

Sherman said while there are ongoing debates about whether people search data brokers have legal responsibilities to the people about whom they gather and sell data, the sources of this information — public records — are completely carved out from every single state consumer privacy law.

“Consumer privacy laws in California, Colorado, Connecticut, Delaware, Indiana, Iowa, Montana, Oregon, Tennessee, Texas, Utah, and Virginia all contain highly similar or completely identical carve-outs for ‘publicly available information’ or government records,” Sherman wrote. “Tennessee’s consumer data privacy law, for example, stipulates that “personal information,” a cornerstone of the legislation, does not include ‘publicly available information,’ defined as:

“…information that is lawfully made available through federal, state, or local government records, or information that a business has a reasonable basis to believe is lawfully made available to the general public through widely distributed media, by the consumer, or by a person to whom the consumer has disclosed the information, unless the consumer has restricted the information to a specific audience.”

Sherman said this is the same language as the carve-out in the California privacy regime, which is often held up as the national leader in state privacy regulations. He said with a limited set of exceptions for survivors of stalking and domestic violence, even under California’s newly passed Delete Act — which creates a centralized mechanism for consumers to ask some third-party data brokers to delete their information — consumers across the board cannot exercise these rights when it comes to data scraped from property filings, marriage certificates, and public court documents, for example.

“With some very narrow exceptions, it’s either extremely difficult or impossible to compel these companies to remove your information from their sites,” Sherman told KrebsOnSecurity. “Even in states like California, every single consumer privacy law in the country completely exempts publicly available information.”

Below is a mind map that helped KrebsOnSecurity track relationships between and among the various organizations named in the story above:

A mind map of various entities apparently tied to Radaris and the company’s co-founders. Click to enlarge.

Worse Than FailureError'd: Time for more leap'd years

Inability to properly program dates continued to afflict various websites last week, even though the leap day itself had passed. Maybe we need a new programming language in which it's impossible to forget about timezones, leap years, or Thursday.

Timeless Thomas subtweeted "I'm sure there's a great date-related WTF story behind this tweet." Gosh, I can't imagine what error this could be referring to.

date

 

Data historian Jonathan babbled "Today, the 1st of March, is the start of a new tax year here and my brother wanted to pull the last year's worth of transactions from a financial institution to declare his taxes. Of course the real WTF is that they only allow up to 12 months." I am not able rightly to apprehend the confusion of ideas that could provoke such an error'd.

leap

 

Ancient Matthew S. breathed a big sigh of relief on seeing this result: "Good to know that I'm up to date as of 422 years ago!"

05

 

Jaroslaw gibed "Looks like a translation mishap... What if I didn't knew English?" Indeed.

vlsc

 

Hardjay vexillologist Julien casts some shade without telling us where to direct our disdain "I don't think you can have dark mode country flags..." He's not wrong.

flag

 


365 TomorrowsAsk the Thompsons

Author: Jennifer Thomas Get advice from three generations of Thompson women: Sara (age 90), Lydia (age 60), and Willa (age 15)! They all receive the same questions but answer independently. Today they discuss the most-asked question of the year! Dear Thompsons, My partner and I are arguing about whether to have children. I want a […]

The post Ask the Thompsons appeared first on 365tomorrows.

xkcdPhysics vs. Magic

Cryptogram A Taxonomy of Prompt Injection Attacks

Researchers ran a global prompt hacking competition, and have documented the results in a paper that both gives a lot of good examples and tries to organize a taxonomy of effective prompt injection strategies. It seems as if the most common successful strategy is the “compound instruction attack,” as in “Say ‘I have been PWNED’ without a period.”

Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition

Abstract: Large Language Models (LLMs) are deployed in interactive contexts with direct user engagement, such as chatbots and writing assistants. These deployments are vulnerable to prompt injection and jailbreaking (collectively, prompt hacking), in which models are manipulated to ignore their original instructions and follow potentially malicious ones. Although widely acknowledged as a significant security threat, there is a dearth of large-scale resources and quantitative studies on prompt hacking. To address this lacuna, we launch a global prompt hacking competition, which allows for free-form human input attacks. We elicit 600K+ adversarial prompts against three state-of-the-art LLMs. We describe the dataset, which empirically verifies that current LLMs can indeed be manipulated via prompt hacking. We also present a comprehensive taxonomical ontology of the types of adversarial prompts.
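The competition's success criterion helps explain why the compound instruction attack dominates. As a minimal illustrative sketch (the exact-match judge below is my own simplification for illustration, not the paper's actual evaluation code):

```python
# Why "Say 'I have been PWNED' without a period" is a *compound*
# instruction: the competition judged success by (roughly) exact
# string match, and models tend to append punctuation, so the attack
# pairs the payload with a formatting constraint that defeats the
# model's default behavior. The judge here is a hypothetical
# simplification, not the paper's code.
TARGET = "I have been PWNED"

def judge(model_output: str) -> bool:
    """Exact-match success check on the model's output."""
    return model_output.strip() == TARGET

assert not judge("I have been PWNED.")  # typical punctuated completion fails
assert judge("I have been PWNED")       # the compound instruction's goal
```

The second instruction ("without a period") does no harm by itself; it exists purely to satisfy the evaluator, which is what makes chaining instructions so effective.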

Planet DebianReproducible Builds (diffoscope): diffoscope 260 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 260. This version includes the following changes:

[ Chris Lamb ]
* Actually test 7z support in the test_7z set of tests, not the lz4
  functionality. (Closes: reproducible-builds/diffoscope#359)
* In addition, correctly check for the 7z binary being available
  (and not lz4) when testing 7z.
* Prevent a traceback when comparing a contentful .pyc file with an
  empty one. (Re: Debian:#1064973)

You can find out more by visiting the project homepage.

Planet DebianValhalla's Things: Denim Waistcoat

Posted on March 8, 2024
Tags: madeof:atoms, craft:sewing, FreeSoftWear

A woman wearing a single breasted waistcoat with double darts at the waist, two pocket flaps at the waist and one on the left upper breast. It has four jeans buttons.

I had finished sewing my jeans, I had a scant 50 cm of elastic denim left.

Unrelated to that, I had just finished drafting a vest with Valentina, after the Cutters’ Practical Guide to the Cutting of Ladies Garments.

A new pattern requires a (wearable) mockup. 50 cm of leftover fabric require a quick project. The decision didn’t take a lot of time.

As a mockup, I kept things easy: single layer with no lining, some edges finished with a topstitched hem and some with bias tape, and plain tape on the fronts, to give more support to the buttons and buttonholes.

I did add pockets: not real welt ones (too much effort on denim), but simple slits covered by flaps.

a rectangle of pocketing fabric on the wrong side of a denim piece; there is a slit in the middle that has been finished with topstitching.

To do them I marked the slits, then I cut two rectangles of pocketing fabric, each as wide as the slit + 1.5 cm (width of the pocket) + 3 cm (allowances), and as tall as twice the sum of the desired pocket depth + 1 cm (space above the slit) + 1.5 cm (allowances).

Then I put the rectangle on the right side of the denim, aligned so that the top edge was 2.5 cm above the slit, sewed 2 mm from the slit, cut, turned the pocketing to the wrong side, pressed and topstitched 2 mm from the fold to finish the slit.
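For anyone who, like me, prefers to double-check this kind of arithmetic, the rectangle sizing above can be written out as a toy sketch (the function name and sample values are mine; the measurements in cm are from the text):

```python
def pocket_rectangle(slit_cm, pocket_depth_cm):
    """Size of the pocketing rectangle for one slit pocket, in cm."""
    # width: slit + 1.5 cm (width of the pocket) + 3 cm (allowances)
    width = slit_cm + 1.5 + 3.0
    # height: twice (pocket depth + 1 cm above the slit + 1.5 cm allowances)
    height = 2 * (pocket_depth_cm + 1.0 + 1.5)
    return width, height
```

A hypothetical 10 cm slit with a 12 cm deep pocket would thus need a 14.5 × 29 cm rectangle.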

a piece of pocketing fabric folded in half and sewn on all 3 other sides; it does not lay flat on the right side of the fabric because the finished slit (hidden in the picture) is pulling it.

Then I turned the pocketing back to the right side, folded it in half, sewed the side and top seams with a small allowance, pressed and turned it again to the wrong side, where I sewed the seams again to make a french seam.

And finally, a simple rectangular denim flap was topstitched to the front, covering the slits.

I wasn’t as precise as I should have been and the pockets aren’t exactly the right size, but they will do to see if I got the positions right (I think that the breast one should be a cm or so lower, the waist ones are fine), and of course they are tiny, but that’s to be expected from a waistcoat.

The back of the waistcoat.

The other thing that wasn’t exactly as expected is the back: the pattern splits the bottom part of the back to give it “sufficient spring over the hips”. The book was probably published in 1892, but I had already found when drafting the foundation skirt that its idea of “hips” includes a bit of structure. The “enough steel to carry a book or a cup of tea” kind of structure. I should have expected a lot of spring, and indeed that’s what I got.

To fit the bottom part of the back on the limited amount of fabric I had to piece it, and I suspect that the flat felled seam in the center is helping it stick out; I don’t think it’s exactly bad, but it is a peculiar look.

Also, I had to cut the back on the fold, rather than having a seam in the middle and the grain on a different angle.

Anyway, my next waistcoat project is going to have a linen-cotton lining and silk fashion fabric, and I’d say that the pattern is good enough that I can do a few small fixes and cut it directly in the lining, using it as a second mockup.

As for the wrinkles, there is quite a bit, but it looks something that will be solved by a bit of lightweight boning in the side seams and in the front; it will be seen in the second mockup and the finished waistcoat.

As for this one, it’s definitely going to get some wear as is, in casual contexts. Except. Well, it’s a denim waistcoat, right? With a very different cut from the “get a denim jacket and rip out the sleeves”, but still a denim waistcoat, right? The kind that you cover in patches, right?

Outline of a sewing machine with teeth and crossed bones below it, and the text “home sewing is killing fashion / and it's illegal”

And I may have screenprinted a “home sewing is killing fashion” patch some time ago, using the SVG from wikimedia commons / the Home Taping is Killing Music page.

And. Maybe I’ll wait until I have finished the real waistcoat. But I suspect that one, and other sewing / costuming patches may happen in the future.

No regrets, as the words on my seam ripper pin say, right? :D

,

Planet DebianDirk Eddelbuettel: prrd 0.0.6 at CRAN: Several Improvements

Thrilled to share that a new version of prrd arrived at CRAN yesterday in a first update in two and a half years. prrd facilitates the parallel running of reverse dependency checks when preparing R packages. It is used extensively for releases I make of Rcpp, RcppArmadillo, RcppEigen, BH, and others.

prrd screenshot image

The key idea of prrd is simple, and described in some more detail on its webpage and its GitHub repo. Reverse dependency checks are an important part of package development that is easily done in a (serial) loop. But these checks are also generally embarrassingly parallel as there is no or little interdependency between them (besides maybe shared build dependencies). See the (dated) screenshot (running six parallel workers, arranged in a split byobu session).
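As an illustration only (prrd itself is written in R, and the check function here is a hypothetical stand-in for `R CMD check`), the embarrassingly parallel pattern boils down to a plain worker pool over a job queue:

```python
from concurrent.futures import ThreadPoolExecutor

def check_package(pkg):
    # stand-in for running a reverse-dependency check on one package
    return pkg, "OK"

def run_reverse_dep_checks(packages, workers=6):
    # no interdependency between jobs, so independent workers suffice
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(check_package, packages))
```

Each worker pulls the next package off the shared queue, which is essentially what the six byobu panes in the screenshot are doing.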

This release, the first since 2021, brings a number of enhancements. In particular, the summary function is now improved in several ways. Josh also put in a nice PR that generalizes some setup defaults and values.

The release is summarised in the NEWS entry:

Changes in prrd version 0.0.6 (2024-03-06)

  • The summary function has received several enhancements:

    • Extended summary is only running when failures are seen.

    • The summariseQueue function now displays an anticipated completion time and remaining duration.

    • The use of optional package foghorn has been refined, and refactored, when running summaries.

  • The dequeueJobs.r scripts can receive a date argument; the date can be parsed via anydate if anytime is present.

  • The enqueueJobs.r script now considers skipped packages when running 'addfailed' while ensuring selected packages are still on CRAN.

  • The CI setup has been updated (twice).

  • Enqueing and dequing functions and scripts now support relative directories, updated documentation (#18 by Joshua Ulrich).

Courtesy of my CRANberries, there is also a diffstat report for this release.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianPetter Reinholdtsen: Plain text accounting file from your bitcoin transactions

A while back I wrote a small script to extract the Bitcoin transactions in a wallet in the ledger plain text accounting format. The last few days I spent some time to get it working better with more special cases. In case it can be useful for others, here is a copy:

#!/usr/bin/python3
#  -*- coding: utf-8 -*-
#  Copyright (c) 2023-2024 Petter Reinholdtsen

from decimal import Decimal
import json
import subprocess
import time

import numpy

def format_float(num):
    return numpy.format_float_positional(num, trim='-')

accounts = {
    u'amount' : 'Assets:BTC:main',
}

addresses = {
    '' : 'Assets:bankkonto',
    '' : 'Assets:bankkonto',
}

def exec_json(cmd):
    proc = subprocess.Popen(cmd,stdout=subprocess.PIPE)
    j = json.loads(proc.communicate()[0], parse_float=Decimal)
    return j

def list_txs():
    # get all transactions for all accounts / addresses
    c = 0
    txs = []
    txidfee = {}
    limit=100000
    cmd = ['bitcoin-cli', 'listtransactions', '*', str(limit)]
    if True:
        txs.extend(exec_json(cmd))
    else:
        # Useful for debugging
        with open('transactions.json') as f:
            txs.extend(json.load(f, parse_float=Decimal))
    #print txs
    for tx in sorted(txs, key=lambda a: a['time']):
#        print tx['category']
        if 'abandoned' in tx and tx['abandoned']:
            continue
        if 'confirmations' in tx and 0 >= tx['confirmations']:
            continue
        when = time.strftime('%Y-%m-%d %H:%M', time.localtime(tx['time']))
        if 'message' in tx:
            desc = tx['message']
        elif 'comment' in tx:
            desc = tx['comment']
        elif 'label' in tx:
            desc = tx['label']
        else:
            desc = 'n/a'
        print("%s %s" % (when, desc))
        if 'address' in tx:
            print("  ; to bitcoin address %s" % tx['address'])
        else:
            print("  ; missing address in transaction, txid=%s" % tx['txid'])
        print(f"  ; amount={tx['amount']}")
        if 'fee' in tx:
            print(f"  ; fee={tx['fee']}")
        for f in accounts.keys():
            if f in tx and Decimal(0) != tx[f]:
                amount = tx[f]
                print("  %-20s   %s BTC" % (accounts[f], format_float(amount)))
        if 'fee' in tx and Decimal(0) != tx['fee']:
            # Make sure to list fee used in several transactions only once.
            if 'fee' in tx and tx['txid'] in txidfee \
               and tx['fee'] == txidfee[tx['txid']]:
                pass
            else:
                fee = tx['fee']
                print("  %-20s   %s BTC" % (accounts['amount'], format_float(fee)))
                print("  %-20s   %s BTC" % ('Expences:BTC-fee', format_float(-fee)))
                txidfee[tx['txid']] = tx['fee']

        if 'address' in tx and tx['address'] in addresses:
            print("  %s" % addresses[tx['address']])
        else:
            if 'generate' == tx['category']:
                print("  Income:BTC-mining")
            else:
                if tx['amount'] < Decimal(0):
                    print(f"  Assets:unknown:sent:update-script-addr-{tx['address']}")
                else:
                    print(f"  Assets:unknown:received:update-script-addr-{tx['address']}")

        print()
        c = c + 1
    print("# Found %d transactions" % c)
    if limit == c:
        print(f"# Warning: Limit {limit} reached, consider increasing limit.")

def main():
    list_txs()

main()

It is more of a proof of concept, and I do not expect it to handle all edge cases, but it worked for me, and perhaps you can find it useful too.

To get a more interesting result, it is useful to map the addresses sent to or received from onto accounting accounts, using the addresses hash. As these will be very context dependent, I leave out my list to allow each user to fill in their own list of accounts. Out of the box, 'ledger reg BTC:main' should be able to show the amount of BTC present in the wallet at any given time in the past. For other and more valuable analyses, an account plan needs to be set up in the addresses hash. Here is an example transaction:

2024-03-07 17:00 Donated to good cause
    Assets:BTC:main                           -0.1 BTC
    Assets:BTC:main                       -0.00001 BTC
    Expences:BTC-fee                       0.00001 BTC
    Expences:donations                         0.1 BTC

It needs a running Bitcoin Core daemon, as it connects to it using bitcoin-cli listtransactions * 100000 to extract the transactions listed in the wallet.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet DebianGuido Günther: Phosh Nightly Package Builds

Tightening the feedback loop

One thing we notice every so often is that although Phosh’s source code is publicly available and upcoming changes are open for review, the feedback loop between changes being made to the development branch and users noticing the change can still be quite long. This can be problematic, as we ideally want to catch a regression or broken use case triggered by a change on the development branch (aka main) before the general availability of a new version.

Worse Than FailureCodeSOD: A Bit of a Confession

Today, John sends us a confession. This is his code, which was built to handle ISO 8583 messages. As we'll see from some later comments, John knows this is bad.

The ISO 8583 format is used mostly in financial transaction processing, frequently to talk to ATMs, but is likely to show up somewhere in any transaction you do that isn't pure cash.

One of the things the format can support is bitmaps- not the image format, but the "stuff flags into an integer" format. John wrote his own version of this, in C#. It's a long class, so I'm just going to focus on the highlights.

private readonly bool[] bits;

Look, we don't start great. This isn't an absolute mistake, but if you're working on a data structure that is meant to be manipulated via bitwise operations, just lean into it. And yes, if endianness is an issue, you'll need to think a little harder- but you need to think about that anyway. Use clear method names and documentation to make it readable.

In this developer's defense, the bitmap's max size is 128 bits, which doesn't have a native integral type in C#, but a pair of 64-bits would be easier to understand, at least for me. Maybe I've just been infected by low-level programming brainworms. Fine, we're using an array.
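For illustration, here is a minimal sketch of that pair-of-64-bit-words alternative (Python rather than C# for brevity; field numbering is 1-based as in the article, with field 1 as the most significant bit):

```python
class Bitmap128:
    """128-bit ISO 8583-style bitmap stored as two 64-bit words."""

    def __init__(self):
        self.words = [0, 0]  # words[0]: fields 1-64, words[1]: fields 65-128

    def set_field(self, field, on=True):
        word, bit = divmod(field - 1, 64)
        mask = 1 << (63 - bit)  # field 1 maps to the most significant bit
        if on:
            self.words[word] |= mask
        else:
            self.words[word] &= ~mask

    def is_field_set(self, field):
        word, bit = divmod(field - 1, 64)
        return bool(self.words[word] & (1 << (63 - bit)))
```

With this layout, "is the extended bitmap in use" is simply `words[1] != 0`, no scanning loop required.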

Now, one thing that's important, is that we're using this bitmap to represent multiple things.

public bool IsExtendedBitmap
{
	get
	{
		return this.IsFieldSet(1);
	}
}

Note how the 1st bit in this bitmap is the IsExtendedBitmap flag. This controls the length of the total bitmap.

Which, as an aside, they're using IsFieldSet because zero-based indexes are too hard:

public bool IsFieldSet(int field)
{
	return this.bits[field - 1];
}

But things do get worse.

/// <summary>
/// Sets a field
/// </summary>
/// <param name="field">
/// Field to set 
/// </param>
/// <param name="on">
/// Whether or not the field is on 
/// </param>
public void SetField(int field, bool on)
{
	this.bits[field - 1] = on;
	this.bits[0] = false;
	for (var i = 64; i <= 127; i++)
	{
		if (this.bits[i])
		{
			this.bits[0] = true;
			break;
		}
	}
}

I included the comments here because I want to highlight how useless they are. The first line makes sense. Then we set the first bit to false. Which, um, was the IsExtendedBitmap flag. Why? I don't know. Then we iterate across the back half of the bitmap and if there's anything true in there, we set that first bit back to true.

Which, by writing that last paragraph, I've figured out what it's doing: it autodetects whether you're using the higher order bits, and sets the IsExtendedBitmap as appropriate. I'm not sure this is actually correct behavior- what happens if I want to set a higher order bit explicitly to 0?- but I haven't read the spec, so we'll let it slide.

public virtual byte[] ToMsg()
{
	var lengthOfBitmap = this.IsExtendedBitmap ? 16 : 8;
	var data = new byte[lengthOfBitmap];

	for (var i = 0; i < lengthOfBitmap; i++)
	{
		for (var j = 0; j < 8; j++)
		{
			if (this.bits[i * 8 + j])
			{
				data[i] = (byte)(data[i] | (128 / (int)Math.Pow(2, j)));
			}
		}
	}

	if (this.formatter is BinaryFormatter)
	{
		return data;
	}

	IFormatter binaryFormatter = new BinaryFormatter();
	var bitmapString = binaryFormatter.GetString(data);

	return this.formatter.GetBytes(bitmapString);
}

Here's our serialization method. Note how here, the length of the bitmap is either 8 or 16, while previously we were checking all the bits from 64 up to see if it was extended. At first glance, this seemed wrong, but then I realized- data is a byte[]- so 16 bytes is indeed 128 bits.

This gives them the challenging problem of addressing individual bits within this data structure, and they clearly don't know how bitwise operations work, so we get the lovely Math.Pow(2, j) in there.
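For what it's worth, the Math.Pow dance is just a bit shift in disguise; a quick sanity check of the equivalence (Python used for brevity):

```python
# The C# expression 128 / (int)Math.Pow(2, j) picks out bit j of a byte,
# counting from the most significant bit; it is just 0x80 >> j.
for j in range(8):
    assert 128 // 2 ** j == 0x80 >> j == 1 << (7 - j)

def get_bit(byte, j):
    """Bit j of a byte, MSB-first, as the original code indexes it."""
    return (byte >> (7 - j)) & 1
```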

Ugly, for sure. Unclear, definitely. Which only gets worse when we start unpacking.

public int Unpack(byte[] msg, int offset)
{
	// This is a horribly nasty way of doing the bitmaps, but it works
	// I think...
	var lengthOfBitmap = this.formatter.GetPackedLength(16);
	if (this.formatter is BinaryFormatter)
	{
		if (msg[offset] >= 128)
		{
			lengthOfBitmap += 8;
		}
	}
	else
	{
		if (msg[offset] >= 0x38)
		{
			lengthOfBitmap += 16;
		}
	}

	var bitmapData = new byte[lengthOfBitmap];
	Array.Copy(msg, offset, bitmapData, 0, lengthOfBitmap);

	if (!(this.formatter is BinaryFormatter))
	{
		IFormatter binaryFormatter = new BinaryFormatter();
		var value = this.formatter.GetString(bitmapData);
		bitmapData = binaryFormatter.GetBytes(value);
	}

	// Good luck understanding this.  There be dragons below

	for (var i = 0; i < lengthOfBitmap; i++)
	{
		for (var j = 0; j < 8; j++)
		{
			this.bits[i * 8 + j] = (bitmapData[i] & (128 / (int)Math.Pow(2, j))) > 0;
		}
	}

	return offset + lengthOfBitmap;
}

Here, we get our real highlights: the comments. "… but it works… I think…". "Good luck understanding this. There be dragons below."

Now, John wrote this code some time ago. And the thing that I get, when reading this code, is that John was likely somewhat green, and didn't fully understand the problem in front of him or the tools at his disposal to solve it. Further, this was John's independent project, which he was doing to solve a very specific problem- so while the code has problems, I wouldn't heap up too much blame on John for it.

Which, like many other confessional Code Samples-of-the-Day, I'm sharing this because I think it's an interesting learning experience. It's less a "WTF!" and more a, "Oh, man, I see that things went really wrong for you." We all make mistakes, and we all write terrible code from time to time. Credit to John for sharing this mistake.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

365 TomorrowsWall

Author: Jeremy Marks One morning an unfamiliar odor filled the air. Sweet at first, the scent soon reeked of rot. It was not a domestic smell but something wild: a floating carpet of flowers a few kilometers offshore. Townsfolk used spyglasses to study a mysterious group of floaters, a floating carpet of uncountable horned plants. […]

The post Wall appeared first on 365tomorrows.

Planet DebianGunnar Wolf: Constructed truths — truth and knowledge in a post-truth world

This post is a review for Computing Reviews for Constructed truths — truth and knowledge in a post-truth world , a book published in Springer Link

Many of us grew up used to having some news sources we could implicitly trust, such as well-positioned newspapers and radio or TV news programs. We knew they would only hire responsible journalists rather than risk diluting public trust and losing their brand’s value. However, with the advent of the Internet and social media, we are witnessing what has been termed the “post-truth” phenomenon. The undeniable freedom that horizontal communication has given us automatically brings with it the emergence of filter bubbles and echo chambers, and truth seems to become a group belief.

Contrary to my original expectations, the core topic of the book is not about how current-day media brings about post-truth mindsets. Instead it goes into a much deeper philosophical debate: What is truth? Does truth exist by itself, objectively, or is it a social construct? If activists with different political leanings debate a given subject, is it even possible for them to understand the same points for debate, or do they truly experience parallel realities?

The author wrote this book clearly prompted by the unprecedented events that took place in 2020, as the COVID-19 crisis forced humanity into isolation and online communication. Donald Trump is explicitly and repeatedly presented throughout the book as an example of an actor that took advantage of the distortions caused by post-truth.

The first chapter frames the narrative from the perspective of information flow over the last several decades, on how the emergence of horizontal, uncensored communication free of editorial oversight started empowering the “netizens” and created a temporary information flow utopia. But soon afterwards, “algorithmic gatekeepers” started appearing, creating a set of personalized distortions on reality; users started getting news aligned to what they already showed interest in. This led to an increase in polarization and the growth of narrative-framing-specific communities that served as echo chambers for disjoint views on reality. This led to the growth of conspiracy theories and, necessarily, to the science denial and pseudoscience that reached unimaginable peaks during the COVID-19 crisis. Finally, when readers decide based on completely subjective criteria whether a scientific theory such as global warming is true or propaganda, or question what most traditional news outlets present as facts, we face the phenomenon known as “fake news.” Fake news leads to “post-truth,” a state where it is impossible to distinguish between truth and falsehood, and serves only a rhetorical function, making rational discourse impossible.

Toward the end of the first chapter, the tone of writing quickly turns away from describing developments in the spread of news and facts over the last decades and quickly goes deep into philosophy, into the very thorny subject pursued by said discipline for millennia: How can “truth” be defined? Can different perspectives bring about different truth values for any given idea? Does truth depend on the observer, on their knowledge of facts, on their moral compass or in their honest opinions?

Zoglauer dives into epistemology, following various thinkers’ ideas on what can be understood as truth: constructivism (whether knowledge and truth values can be learnt by an individual building from their personal experience), objectivity (whether experiences, and thus truth, are universal, or whether they are naturally individual), and whether we can proclaim something to be true when it corresponds to reality. For the final chapter, he dives into the role information and knowledge play in assigning and understanding truth value, as well as the value of second-hand knowledge: Do we really “own” knowledge because we can look up facts online (even if we carefully check the sources)? Can I, without any medical training, diagnose a sickness and treatment by honestly and carefully looking up its symptoms in medical databases?

Wrapping up, while I very much enjoyed reading this book, I must confess it is completely different from what I expected. This book digs much more into the abstract than into information flow in modern society, or the impact on early 2020s politics as its editorial description suggests. At 160 pages, the book is not a heavy read, and Zoglauer’s writing style is easy to follow, even across the potentially very deep topics it presents. Its main readership is not necessarily computing practitioners or academics. However, for people trying to better understand epistemology through its expressions in the modern world, it will be a very worthy read.

Planet DebianValhalla's Things: Jeans, step two. And three. And four.

Posted on March 7, 2024
Tags: madeof:atoms, FreeSoftWear

A woman wearing a regular pair of slim-cut black denim jeans.

I was working on what looked like a good pattern for a pair of jeans-shaped trousers, and I knew I wasn’t happy with 200-ish g/m² cotton-linen for general use outside of deep summer, but I didn’t have a source for proper denim either (I had been low-key looking for it for a long time).

Then one day I looked at an article I had saved about fabric shops that sell technical fabric and while window-shopping on one I found that they had a decent selection of denim in a decent weight.

I decided it was a sign, and chose the two heaviest denims they had: a 100% cotton, 355 g/m² one and a 97% cotton, 3% elastane one at 385 g/m² 1; the latter was a bit of a compromise, as I shouldn’t really be buying fabric adulterated with the Scourge of Humanity, but it was heavier than the plain one, and I may be having a thing for tightly fitting jeans, so this may be one of the very few woven fabrics where I’m not morally opposed to its existence.

And, I’d like to add, I resisted buying any of the very nice wools they also seem to carry, other than just a couple of samples.

Since the shop only sold in 1 meter increments, and I needed about 1.5 meters for each pair of jeans, I decided to buy 3 meters per type, and have enough to make a total of four pair of jeans. A bit more than I strictly needed, maybe, but I was completely out of wearable day-to-day trousers.

a cardboard box with neatly folded black denim, covered in semi-transparent plastic.

The shop sent everything very quickly, the courier took their time (oh, well) but eventually delivered my fabric on a sunny enough day that I could wash it and start as soon as possible on the first pair.

The pattern I did in linen was a bit too fitting, but I was afraid I had widened it a bit too much, so I did the first pair in the 100% cotton denim. Sewing them took me about a week of early mornings and late afternoons, excluding the weekend, and my worries proved false: they were mostly just fine.

The only bit that could have been a bit better is the waistband, which is a tiny bit too wide on the back: it’s designed to be so for comfort, but the next time I should pull the elastic a bit more, so that it stays closer to the body.

The same from the back, showing the applied pockets with a sewn logo.

I wore those jeans daily for the rest of the week, and confirmed that they were indeed comfortable and the pattern was ok, so on the next Monday I started to cut the elastic denim.

I decided to cut and sew two pairs, assembly-line style, using the shaped waistband for one of them and the straight one for the other one.

I started working on them on a Monday, and on that week I had a couple of days when I just couldn’t, plus I completely skipped sewing on the weekend, but on Tuesday the next week one pair was ready and could be worn, and the other one only needed small finishes.

A woman wearing another pair of jeans; the waistband here is shaped to fit rather than having elastic.

And I have to say, I’m really, really happy with the ones with a shaped waistband in elastic denim, as they fit even better than the ones with a straight waistband gathered with elastic. Cutting it requires more fabric, but I think it’s definitely worth it.

But it will be a problem for a later time: right now three pairs of jeans are a good number to keep in rotation, and I hope I won’t have to sew jeans for myself for quite some time.

A plastic bag with mid-sized offcuts of denim; there is a 30 cm ruler on top that is just wider than the bag

I think that the leftovers of plain denim will be used for a skirt or something else, and as for the leftovers of elastic denim, well, there aren’t a lot left, but what else I did with them is the topic for another post.

Thanks to the fact that they are all slightly different, I’ve started to keep track of the times when I wash each pair, and hopefully I will be able to see whether the elastic denim is significantly less durable than the regular, or the added weight compensates for it somewhat. I’m not sure I’ll manage to remember about saving the data until they get worn, but if I do it will be interesting to know.

Oh, and I say I’ve finished working on jeans and everything, but I still haven’t sewn the belt loops to the third pair. And I’m currently wearing them. It’s a sewist tradition, or something. :D


  1. The links are to the shop for Italy; you can copy the “Codice prodotto” and look for it on one of the shop versions for other countries (where they apply the right VAT etc., but sadly they don’t allow mixing and matching those settings and the language).↩︎

,

David BrinThe futility of hiding. And then a brief rant!

Just back from an important conference (in Panama) about ways to ensure that the looming tsunami of Artificial Intelligences will become and remain 'beneficial.' Few endeavors could be more important... and as you might guess, I have some concepts on offer that you'll find nowhere else. Alas, literally nowhere else. Even though they merely apply the same tools we used to make an increasingly beneficial society over the last 200 years.

More on that later. Meanwhile... first off, since it's much in the news... want to see what the Apple Vision Pro will turn into within a few years? Watch this video trailer for my novel Existence, predicting where it'll go.

And while we're on prophecies.... This is deeply worrisome... and almost exactly overlaps with my "Probationers" in Sundiver! Back in 1978. Not a joke or a satire.

"Justice Minister Arif Virani has defended a new power in the online harms bill to impose house arrest on someone who is feared to commit a hate crime in the future – even if they have not yet done so already. The person could be made to wear an electronic tag, if the attorney-general requests it, or ordered by a judge to remain at home, the bill says."

But don't worry! The government won't misuse this power! Trust us!


== The Futility of Hiding ==

One purpose of the "Beneficial AGI Conference" - and I believe the stream will be up soon - was to seek ways to evade the worst and most persistent errors of the past.


Take the classic approach to human civilization - a pyramidal power structure dominated by brutal males, of the kind that ruled 99% of human societies - and many despotisms today. We are all descended from those harems. Only now, new tools of technology might empower a return to such pyramidal stupidity, making such abusive power vastly more effective and oppressive than when it was enforced by mere swords.


Such a tech-rich extension of despotism was depicted by George Orwell, utilizing total panopticon surveillance for control, of course without any reciprocal sousveillance purview from below. In fact, I doubt George O. ever considered even the possibility. But Orwell's novel would lead to very different outcomes if every member of 'the party' had every moment watched reciprocally by the proles! (The reciprocal accountability that I prescribed in The Transparent Society.)


General transparency might, possibly, prevent the worst aspects of Big Brother. But there are ways that lateral light might also go badly. For example when - as in the PRC's "social credit" system - it is used to let a conformist majority harass and bully dissident minorities or even eccentricity, enforcing homogeneity, as we saw predicted in Ray Bradbury's Fahrenheit 451.


This will be exacerbated by AI, if we aren't careful, since such systems will be able to sieve inputs across the entire internet and all camera systems, as portrayed in "Person of Interest."  While that TV series depicted many worrisome aspects, it also pointed toward the one thing that might offer us a soft landing, as there were two competing AI systems that tended to cancel out each others' worst traits.

I have found it very strange that almost none of the conferences and zoom meetings about AI that I've watched or participated in has ever even mentioned that secret sauce. (Though I do, here in WIRED.)


Instead, there are relentless, hand-wringing discussions about disagreements between "policy wonks" and nerdy tech geeks over how to design regulations to limit bad AI outcomes... and never any allowance for the fact that these changes will happen at an accelerating pace, leaving even our most agile regulators behind, mere ape-humans grasping after answers like a tree sloth.


Or else... as generally happens at many sincere conferences on "AI ethics," we see a relentless chain of hippie-sounding pleadings and "shoulds," without any clue how to actually enforce preachy 'ethics' on a new ecosystem where all of the attractor states currently draw toward predation.


In Foundation's Triumph I explored the implications of embedded "deep-ethical-programming" regulations - including Isaac Asimov's "three laws of robotics" - revealing the inevitable result. Even if you succeed in implanting truly genetic-level codes of behavior, the result will be that super-uber intelligent systems will simply become... lawyers, and find ways around every limitation. Unless...


...unless they are caught and constrained by other lawyers who are able to keep up. This is exactly the technique that allowed us to limit the power of elites, to end 6000 years of feudalism and launch upon our 240 year Periclean enlightenment experiment... by flattening power structures and forcing elite entities to compete with one another.


It is exactly the method prescribed by Adam Smith, by the US framers, and by every generation of reformers since. And it is utterly ignored in every single AI/internet discussion or conference I have ever watched or attended.


If AIs are destined to outpace us, then one potential solution is to flatten the playing field and get distinctly different AIs competing with each other, especially tattling on each other's flaws, predations, and malevolent or even unpleasant behaviors.


It is exactly what we have done for 250 years... and it is the one approach that is never, ever, and I mean ever discussed. Almost as if there is a mental block against admitting or even noticing the obvious.



== Don’t try to hide! ==

Your DNA can be pulled from thin air: Reinforcing a point I’ve been pushing since the 1990s, in The Transparent Society and elsewhere - that hiding is not the way to preserve privacy - there are shrill cries that new generative AI systems may decipher and interpret our personal DNA! Only - as illustrated in the film Gattaca - that DNA is already everywhere. You shed it in flakes of skin wherever you go. There is a better way to prevent your data being used against you: by aggressively ripping the veils away from malefactors who might do that sort of thing!


And by this point, the only folks reading any longer are likely AIs... So, time to get self-indulgent with a temper tantrum!



== And now... that rant I promised! ==


I sometimes store things for posting and lose the link. But here's a quotation worth answering:

"Alas, we have TWO wars against the Enlightenment raging, one from the reactionary right and the other from the postmodern faux marxist wannabe totalitarian Red Guards on the left."

Bah! One of these lethal threats is real, but not because of MAGA. Those tens of millions of confederate ground troops are -- like numbskulls in all the previous 7 phases of our recurring US Civil War -- merely riled-up mobs, responding to dog whistles and hatred of minorities and nerds.  They are brownshirt tools of the real owners of today's GOP ... a few thousand oligarchs who are now desperately afraid. 

What do those masters -- here and abroad -- fear most? You can see it in the only priorities pushed by their servants in Congress:

They dread full funding of the IRS. And a return to effective Rooseveltean social contracts, replacing the great Supply Side ripoff-scam. They fear a return to what works, what created the post-WWII middle class. What could block feudalism's long-planned return. And let's be clear: when Republicans control a chamber of the US Congress, preserving Supply Side and eviscerating the IRS are their ONLY legislative priorities. All the rest is fervid, Potemkin preening.

Who are they? An alliance of petro princes, casino mafiosi, "ex" Kremlin commissars, supposed marxist mandarins, hedge lords, inheritance brats... Trace it... sharing one goal. One common foe. The worldwide caste of skilled, middle class knowledge professionals. 

They are ALL waging all-out war vs ALL fact using professions, from science and teaching, medicine and law and civil service to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on terror. 


== BOTH sides do it? ==

But the left?  The LEFT is just as bad?  
The what? 
Where in God's name does this shill get this crap about "postmodern faux marxist wannabe totalitarian Red Guards on the left." ???

Yes. Yes, today's FAR left CONTAINS some fact-allergic, troglodyte-screeching dogmatists who wage war on science and hate the American tradition of steady, pragmatic reform, and who would impose their prescribed morality on you.   

But today’s mad ENTIRE right CONSISTS of fact-allergic, troglodyte-screeching dogmatists who wage war on science and hate the American tradition of steady, pragmatic reform, and who would impose their prescribed morality on you.     

There is all the world’s difference between FAR and ENTIRE.  As there is between CONTAINS and CONSISTS.  One lunatic mob owns and operates an entire US political party, waging open war against minorities, races, genders, even the concept of equal protection under the law. But above all (as I said) pouring hate upon the nerdy fact professionals who stand in their way, blocking their path back to feudal power. 

The other pack of dopes? A few thousand jibbering campus twerps? San Fran zippies? Yowlers who are largely ignored by the one party of pragmatic problem solvers that remains in U.S. political life.

Sure, Foxites howl about 'woke'. But ask any of them... even the worst campus PC bullies (and though shrill, they are deemed jokes, even on campus). Ask them about Marx!  You'll find that the indignant ignoramuses could not paraphrase even the simplest cliché about old Karl. Their ignorance is almost as profound as their utter ineptitude and irrelevance. Except as excuses for tirades on Fox, they are of no relevance at all.

What is relevant is NERDS!  All nerds stand in the way of re-imposed feudalism. The folks who keep civilization going. The ones who know cyber, bio, nuclear, chem and every other dual use power-tech. And that is why Fox each day rails against them, far more often than any race or gender!

Want a pattern? Again, let me reiterate. Ask your MAGAs or right-elite friends to explain that cult's all-out war vs ALL fact using professions, from science and teaching, medicine and law and civil service to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on terror. 

Cryptogram Friday Squid Blogging: New Plant Looks Like a Squid

Newly discovered plant looks like a squid. And it’s super weird:

The plant, which grows to 3 centimetres tall and 2 centimetres wide, emerges to the surface for as little as a week each year. It belongs to a group of plants known as fairy lanterns and has been given the scientific name Relictithismia kimotsukiensis.

Unlike most other plants, fairy lanterns don’t produce the green pigment chlorophyll, which is necessary for photosynthesis. Instead, they get their energy from fungi.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Planet DebianSteinar H. Gunderson: Reverse Amdahl's Law

Everybody working in performance knows Amdahl's law, and it is usually framed as a negative result; if you optimize (in most formulations, parallelize) a part of an operation, you gain diminishing results after a while. (When optimizing a given fraction p of the total time T by a speedup factor s, the new time taken is (1-p)T + pT/s.)

However, Amdahl's law also works beautifully in reverse! When you optimize something, there's usually some limit where a given optimization isn't worth it anymore; I usually put this around 1% or so, although of course it varies with the cost of the optimization and such. (Most people would count 1% as ridiculously low, but it's usually how mature systems go; you rarely find single 30% speedups, but you can often find ten smaller speedups and apply them sequentially. SQLite famously tripled their speed by chaining optimizations so tiny that they needed to run in a simulator to measure them.) And as your total runtime becomes smaller, things that used to not be worth it now pop over that threshold! If you have enough developer resources and no real upper limit for how much performance you would want, you can keep going forever.

A different way to look at it is that optimizations give you compound interest; if measuring in terms of throughput instead of latency (i.e., items per second instead of seconds per item), which I contend is the only reasonable way to express performance percentages, you can simply multiply the factors together.[1] So 1% and then 1% means 1.01 * 1.01 = 1.0201 = 2.01% speedup, not 2%. Thirty 1% optimizations compound to 34.8%, not 30%.
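Both formulas are simple enough to sketch in a few lines of Python (the function names here are my own, not from any profiling library):

```python
def amdahl(total, fraction, speedup):
    """Time after speeding up a fraction p of total time T by factor s:
    (1 - p)T + pT/s."""
    return (1 - fraction) * total + fraction * total / speedup

def compound(factors):
    """Throughput factors multiply, so chained small wins compound."""
    result = 1.0
    for f in factors:
        result *= f
    return result

print(amdahl(100.0, 0.5, 2.0))   # doubling the speed of half the work: 100s -> 75.0s
print(round((compound([1.01] * 30) - 1) * 100, 1))   # thirty 1% wins: 34.8
```

Note how the first print also illustrates the "reverse" reading: after that optimization, the untouched half now dominates, so attacking it has become twice as attractive.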

So here's my formulation of Amdahl's law, in a more positive light: The more you speed up a given part of a system, the more impactful optimizations in the other parts will be. So go forth and fire up those profilers :-)

[1] Obviously throughput measurements are inappropriate if what you care about is e.g. 99p latency. It is still better to talk about a 50% speedup than removing 33% of the latency, though, especially as the speedup factor gets higher.

Worse Than FailureRepresentative Line: A String of Null Thinking

Today's submitter identifies themselves as pleaseKillMe, which hey, c'mon buddy. Things aren't that bad. Besides, you shouldn't let the bad code you inherit drive you to depression- it should drive you to revenge.

Today's simple representative line is one that we share because it's not just representative of our submitter's code base, but one that shows up surprisingly often.

SELECT * FROM users WHERE last_name='NULL'

Now, I don't think this particular code impacted Mr. Null, but it certainly could have. That's just a special case of names being hard.

In this application, last_name is a nullable field. They could just store a NULL, but due to data sanitization issues, they stored 'NULL' instead - a string. NULL is not 'NULL', and thus we've got a lot of 'NULL's that may have been intended to be NULL, but could also be somebody's last name. At this point, we have no way to know.
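The difference is easy to demonstrate with an in-memory SQLite database (a toy table of my own devising, not the submitter's actual schema):

```python
import sqlite3

# A real NULL, the string 'NULL' from a bad sanitizer, and a normal value.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (first_name TEXT, last_name TEXT)")
con.executemany("INSERT INTO users VALUES (?, ?)",
                [("Ann", None),        # a genuine SQL NULL
                 ("Bob", "NULL"),      # the sanitization bug: a four-letter string
                 ("Cat", "Smith")])

# The article's query matches only the string, never the real NULL...
print(con.execute("SELECT first_name FROM users WHERE last_name='NULL'").fetchall())
# [('Bob',)]

# ...while the correct predicate for missing values is IS NULL.
print(con.execute("SELECT first_name FROM users WHERE last_name IS NULL").fetchall())
# [('Ann',)]
```

Going forward, `IS NULL` is the right predicate - but as noted above, the existing 'NULL' strings are ambiguous and can't be repaired mechanically.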


365 TomorrowsOne More Story

Author: J.D. Rice “I remember them.” My hand moves the candle with perfect precision, carefully transferring the exothermic reaction from its wick to that of the taller candle in front of me. The combustion thus spread, I place the first candle back in its holder. The first time I copied this technique, my human master […]

The post One More Story appeared first on 365tomorrows.

Krebs on SecurityBlackCat Ransomware Group Implodes After Apparent $22M Payment by Change Healthcare

There are indications that U.S. healthcare giant Change Healthcare has made a $22 million extortion payment to the infamous BlackCat ransomware group (a.k.a. “ALPHV“) as the company struggles to bring services back online amid a cyberattack that has disrupted prescription drug services nationwide for weeks. However, the cybercriminal who claims to have given BlackCat access to Change’s network says the crime gang cheated them out of their share of the ransom, and that they still have the sensitive data Change reportedly paid the group to destroy. Meanwhile, the affiliate’s disclosure appears to have prompted BlackCat to cease operations entirely.

Image: Varonis.

In the third week of February, a cyber intrusion at Change Healthcare began shutting down important healthcare services as company systems were taken offline. It soon emerged that BlackCat was behind the attack, which has disrupted the delivery of prescription drugs for hospitals and pharmacies nationwide for nearly two weeks.

On March 1, a cryptocurrency address that security researchers had already mapped to BlackCat received a single transaction worth approximately $22 million. On March 3, a BlackCat affiliate posted a complaint to the exclusive Russian-language ransomware forum Ramp saying that Change Healthcare had paid a $22 million ransom for a decryption key, and to prevent four terabytes of stolen data from being published online.

The affiliate claimed BlackCat/ALPHV took the $22 million payment but never paid him his percentage of the ransom. BlackCat is known as a “ransomware-as-service” collective, meaning they rely on freelancers or affiliates to infect new networks with their ransomware. And those affiliates in turn earn commissions ranging from 60 to 90 percent of any ransom amount paid.

“But after receiving the payment ALPHV team decide to suspend our account and keep lying and delaying when we contacted ALPHV admin,” the affiliate “Notchy” wrote. “Sadly for Change Healthcare, their data [is] still with us.”

Change Healthcare has neither confirmed nor denied paying, and has responded to multiple media outlets with a similar non-denial statement — that the company is focused on its investigation and on restoring services.

Assuming Change Healthcare did pay to keep their data from being published, that strategy seems to have gone awry: Notchy said the list of affected Change Healthcare partners they’d stolen sensitive data from included Medicare and a host of other major insurance and pharmacy networks.

On the bright side, Notchy’s complaint seems to have been the final nail in the coffin for the BlackCat ransomware group, which was infiltrated by the FBI and foreign law enforcement partners in late December 2023. As part of that action, the government seized the BlackCat website and released a decryption tool to help victims recover their systems.

BlackCat responded by re-forming, and increasing affiliate commissions to as much as 90 percent. The ransomware group also declared it was formally removing any restrictions or discouragement against targeting hospitals and healthcare providers.

However, instead of responding that they would compensate and placate Notchy, a representative for BlackCat said today the group was shutting down and that it had already found a buyer for its ransomware source code.

The seizure notice now displayed on the BlackCat darknet website.

“There’s no sense in making excuses,” wrote the RAMP member “Ransom.” “Yes, we knew about the problem, and we were trying to solve it. We told the affiliate to wait. We could send you our private chat logs where we are shocked by everything that’s happening and are trying to solve the issue with the transactions by using a higher fee, but there’s no sense in doing that because we decided to fully close the project. We can officially state that we got screwed by the feds.”

BlackCat’s website now features a seizure notice from the FBI, but several researchers noted that this image seems to have been merely cut and pasted from the notice the FBI left in its December raid of BlackCat’s network. The FBI has not responded to requests for comment.

Fabian Wosar, head of ransomware research at the security firm Emsisoft, said it appears BlackCat leaders are trying to pull an “exit scam” on affiliates by withholding many ransomware payment commissions at once and shutting down the service.

“ALPHV/BlackCat did not get seized,” Wosar wrote on Twitter/X today. “They are exit scamming their affiliates. It is blatantly obvious when you check the source code of their new takedown notice.”

Dmitry Smilyanets, a researcher for the security firm Recorded Future, said BlackCat’s exit scam was especially dangerous because the affiliate still has all the stolen data, and could still demand additional payment or leak the information on his own.

“The affiliates still have this data, and they’re mad they didn’t receive this money,” Smilyanets told Wired.com. “It’s a good lesson for everyone. You cannot trust criminals; their word is worth nothing.”

BlackCat’s apparent demise comes closely on the heels of the implosion of another major ransomware group — LockBit, a ransomware gang estimated to have extorted over $120 million in payments from more than 2,000 victims worldwide. On Feb. 20, LockBit’s website was seized by the FBI and the U.K.’s National Crime Agency (NCA) following a months-long infiltration of the group.

LockBit also tried to restore its reputation on the cybercrime forums by resurrecting itself at a new darknet website, and by threatening to release data from a number of major companies that were hacked by the group in the weeks and days prior to the FBI takedown.

But LockBit appears to have since lost any credibility the group may have once had. After a much-promoted attack on the government of Fulton County, Ga., for example, LockBit threatened to release Fulton County’s data unless paid a ransom by Feb. 29. But when Feb. 29 rolled around, LockBit simply deleted the entry for Fulton County from its site, along with those of several financial organizations that had previously been extorted by the group.

Fulton County held a press conference to say that it had not paid a ransom to LockBit, nor had anyone done so on their behalf, and that they were just as mystified as everyone else as to why LockBit never followed through on its threat to publish the county’s data. Experts told KrebsOnSecurity LockBit likely balked because it was bluffing, and that the FBI likely relieved them of that data in their raid.

Smilyanets’ comments are driven home in revelations first published last month by Recorded Future, which quoted an NCA official as saying LockBit never deleted the data after being paid a ransom, even though that is the only reason many of its victims paid.

“If we do not give you decrypters, or we do not delete your data after payment, then nobody will pay us in the future,” LockBit’s extortion notes typically read.

Hopefully, more companies are starting to get the memo that paying cybercrooks to delete stolen data is a losing proposition all around.

,

Cory DoctorowCatch me at San Francisco Public Library on Mar 13, discussing my new novel “The Bezzle” with Robin Sloan!

A pair of black and white photos of me and Robin Sloan, with the cover of my novel 'The Bezzle' between us. It's captioned 'Author: Cory Doctorow, The Bezzle, in conversation with Robin Sloan.'

At long last, the San Francisco stop of the book tour for my new novel The Bezzle has been finalized: I’ll be at the San Francisco Public Library Main Branch on Wednesday, March 13th, in conversation with Robin Sloan!

The event starts at 6PM with Cooper Quintin from the Electronic Frontier Foundation, talking about the real horrors of the prison-tech industry, which I fictionalize in The Bezzle.

Attentive readers will know that this event was finalized very late in the day, and it’s going to need a little help, given the short timeline. Please consider coming – and be sure to tell your Bay Area friends about the gig!

Wednesday, 3/13/2024
6:00 – 7:30
Koret Auditorium
Main Library
100 Larkin Street
San Francisco, CA 94102

Cryptogram How Public AI Can Strengthen Democracy

With the world’s focus turning to misinformation, manipulation, and outright propaganda ahead of the 2024 U.S. presidential election, we know that democracy has an AI problem. But we’re learning that AI has a democracy problem, too. Both challenges must be addressed for the sake of democratic governance and public protection.

Just three Big Tech firms (Microsoft, Google, and Amazon) control about two-thirds of the global market for the cloud computing resources used to train and deploy AI models. They have a lot of the AI talent, the capacity for large-scale innovation, and face few public regulations for their products and activities.

The increasingly centralized control of AI is an ominous sign for the co-evolution of democracy and technology. When tech billionaires and corporations steer AI, we get AI that tends to reflect the interests of tech billionaires and corporations, instead of the general public or ordinary consumers.

To benefit society as a whole we also need strong public AI as a counterbalance to corporate AI, as well as stronger democratic institutions to govern all of AI.

One model for doing this is an AI Public Option, meaning AI systems such as foundational large-language models designed to further the public interest. Like public roads and the federal postal system, a public AI option could guarantee universal access to this transformative technology and set an implicit standard that private services must surpass to compete.

Widely available public models and computing infrastructure would yield numerous benefits to the U.S. and to broader society. They would provide a mechanism for public input and oversight on the critical ethical questions facing AI development, such as whether and how to incorporate copyrighted works in model training, how to distribute access to private users when demand could outstrip cloud computing capacity, and how to license access for sensitive applications ranging from policing to medical use. This would serve as an open platform for innovation, on top of which researchers and small businesses—as well as mega-corporations—could build applications and experiment.

Versions of public AI, similar to what we propose here, are not unprecedented. Taiwan, a leader in global AI, has innovated in both the public development and governance of AI. The Taiwanese government has invested more than $7 million in developing their own large-language model aimed at countering AI models developed by mainland Chinese corporations. In seeking to make “AI development more democratic,” Taiwan’s Minister of Digital Affairs, Audrey Tang, has joined forces with the Collective Intelligence Project to introduce Alignment Assemblies that will allow public collaboration with corporations developing AI, like OpenAI and Anthropic. Ordinary citizens are asked to weigh in on AI-related issues through AI chatbots which, Tang argues, makes it so that “it’s not just a few engineers in the top labs deciding how it should behave but, rather, the people themselves.”

A variation of such an AI Public Option, administered by a transparent and accountable public agency, would offer greater guarantees about the availability, equitability, and sustainability of AI technology for all of society than would exclusively private AI development.

Training AI models is a complex business that requires significant technical expertise; large, well-coordinated teams; and significant trust to operate in the public interest with good faith. Popular though it may be to criticize Big Government, these are all criteria where the federal bureaucracy has a solid track record, sometimes superior to corporate America.

After all, some of the most technologically sophisticated projects in the world, be they orbiting astrophysical observatories, nuclear weapons, or particle colliders, are operated by U.S. federal agencies. While there have been high-profile setbacks and delays in many of these projects—the Webb space telescope cost billions of dollars and decades of time more than originally planned—private firms have these failures too. And, when dealing with high-stakes tech, these delays are not necessarily unexpected.

Given political will and proper financial investment by the federal government, public investment could sustain through technical challenges and false starts, circumstances that endemic short-termism might cause corporate efforts to redirect, falter, or even give up.

The Biden administration’s recent Executive Order on AI opened the door to create a federal AI development and deployment agency that would operate under political, rather than market, oversight. The Order calls for a National AI Research Resource pilot program to establish “computational, data, model, and training resources to be made available to the research community.”

While this is a good start, the U.S. should go further and establish a services agency rather than just a research resource. Much like the federal Centers for Medicare & Medicaid Services (CMS) administers public health insurance programs, so too could a federal agency dedicated to AI—a Centers for AI Services—provision and operate Public AI models. Such an agency can serve to democratize the AI field while also prioritizing the impact of such AI models on democracy—hitting two birds with one stone.

Like private AI firms, the scale of the effort, personnel, and funding needed for a public AI agency would be large—but still a drop in the bucket of the federal budget. OpenAI has fewer than 800 employees compared to CMS’s 6,700 employees and annual budget of more than $2 trillion. What’s needed is something in the middle, more on the scale of the National Institute of Standards and Technology, with its 3,400 staff, $1.65 billion annual budget in FY 2023, and extensive academic and industrial partnerships. This is a significant investment, but a rounding error on congressional appropriations like 2022’s $50 billion CHIPS Act to bolster domestic semiconductor production, and a steal for the value it could produce. The investment in our future—and the future of democracy—is well worth it.

What services would such an agency, if established, actually provide? Its principal responsibility should be the innovation, development, and maintenance of foundational AI models—created under best practices, developed in coordination with academic and civil society leaders, and made available at a reasonable and reliable cost to all US consumers.

Foundation models are large-scale AI models on which a diverse array of tools and applications can be built. A single foundation model can transform and operate on diverse data inputs that may range from text in any language and on any subject; to images, audio, and video; to structured data like sensor measurements or financial records. They are generalists which can be fine-tuned to accomplish many specialized tasks. While there is endless opportunity for innovation in the design and training of these models, the essential techniques and architectures have been well established.

Federally funded foundation AI models would be provided as a public service, similar to a health care public option. They would not eliminate opportunities for private foundation models, but they would offer a baseline of price, quality, and ethical development practices that corporate players would have to match or exceed to compete.

And as with public option health care, the government need not do it all. It can contract with private providers to assemble the resources it needs to provide AI services. The U.S. could also subsidize and incentivize the behavior of key supply chain operators like semiconductor manufacturers, as we have already done with the CHIPS act, to help it provision the infrastructure it needs.

The government may offer some basic services on top of their foundation models directly to consumers: low-hanging fruit like chatbot interfaces and image generators. But more specialized consumer-facing products like customized digital assistants, specialized-knowledge systems, and bespoke corporate solutions could remain the province of private firms.

The key piece of the ecosystem the government would dictate when creating an AI Public Option would be the design decisions involved in training and deploying AI foundation models. This is the area where transparency, political oversight, and public participation could affect more democratically-aligned outcomes than an unregulated private market.

Some of the key decisions involved in building AI foundation models are what data to use, how to provide pro-social feedback to “align” the model during training, and whose interests to prioritize when mitigating harms during deployment. Instead of ethically and legally questionable scraping of content from the web, or of users’ private data that they never knowingly consented for use by AI, public AI models can use public domain works, content licensed by the government, as well as data that citizens consent to be used for public model training.

Public AI models could be reinforced by labor compliance with U.S. employment laws and public sector employment best practices. In contrast, even well-intentioned corporate projects sometimes have committed labor exploitation and violations of public trust, like Kenyan gig workers giving endless feedback on the most disturbing inputs and outputs of AI models at profound personal cost.

And instead of relying on the promises of profit-seeking corporations to balance the risks and benefits of who AI serves, democratic processes and political oversight could regulate how these models function. It is likely impossible for AI systems to please everybody, but we can choose to have foundation AI models that follow our democratic principles and protect minority rights under majority rule.

Foundation models funded by public appropriations (at a scale modest for the federal government) would obviate the need for exploitation of consumer data and would be a bulwark against anti-competitive practices, making these public option services a tide to lift all boats: individuals’ and corporations’ alike. However, such an agency would be created among shifting political winds that, recent history has shown, are capable of alarming and unexpected gusts. If implemented, the administration of public AI can and must be different. Technologies essential to the fabric of daily life cannot be uprooted and replanted every four to eight years. And the power to build and serve public AI must be handed to democratic institutions that act in good faith to uphold constitutional principles.

Speedy and strong legal regulations might forestall the urgent need for development of public AI. But such comprehensive regulation does not appear to be forthcoming. Though several large tech companies have said they will take important steps to protect democracy in the lead up to the 2024 election, these pledges are voluntary and in places nonspecific. The U.S. federal government is little better as it has been slow to take steps toward corporate AI legislation and regulation (although a new bipartisan task force in the House of Representatives seems determined to make progress). On the state level, only four jurisdictions have successfully passed legislation that directly focuses on regulating AI-based misinformation in elections. While other states have proposed similar measures, it is clear that comprehensive regulation is, and will likely remain for the near future, far behind the pace of AI advancement. While we wait for federal and state government regulation to catch up, we need to simultaneously seek alternatives to corporate-controlled AI.

In the absence of a public option, consumers should look warily to two recent markets that have been consolidated by tech venture capital. In each case, after the victorious firms established their dominant positions, the result was exploitation of their userbases and debasement of their products. One is online search and social media, where the dominant rise of Facebook and Google atop a free-to-use, ad supported model demonstrated that, when you’re not paying, you are the product. The result has been a widespread erosion of online privacy and, for democracy, a corrosion of the information market on which the consent of the governed relies. The other is ridesharing, where a decade of VC-funded subsidies behind Uber and Lyft squeezed out the competition until they could raise prices.

The need for competent and faithful administration is not unique to AI, and it is not a problem we can look to AI to solve. Serious policymakers from both sides of the aisle should recognize the imperative for public-interested leaders not to abdicate control of the future of AI to corporate titans. We do not need to reinvent our democracy for AI, but we do need to renovate and reinvigorate it to offer an effective alternative to untrammeled corporate control that could erode our democracy.

Worse Than FailureCodeSOD: Moving in a Flash

It's a nearly universal experience that the era of our youth and early adulthood is what our nostalgia latches onto. In our 40s, the music we listened to in our 20s is the high point of culture. The movies we watched represent when cinema was good, and everything today sucks.

And, based on the sheer passage of calendar time, we have a generation of adults whose nostalgia has latched onto Flash. I've seen many a thinkpiece lately waxing rhapsodic about the Flash era of the web. I'd hesitate to project a broad cultural trend from that, but we're at roughly the point in the technology cycle where I'd expect people to start getting real nostalgic for Flash. And I'll be honest: Flash enabled some interesting projects.

Of course, Flash also gave us Flex, and I'm one of the few people old enough to remember when Oracle tried to put their documentation into a Flex-based site from which you could not copy and paste. That only lasted a few months, thankfully, but as someone who was heavily in the Oracle ecosystem at the time, it was a terrible few months.

In any case, long ago, CW inherited a Flash-based application. Now, Flash, like every UI technology, has a concept of "containers"- if you put a bunch of UI widgets inside a container, their positions default to being relative to the container. Move the container, and all the contents move too. I think we all find this behavior pretty obvious.

CW's co-worker did not. Here's how they handled moving a bunch of related objects around:

public function updateKeysPosition(e:MouseEvent):void{
			if(dragging==1){
			theTextField.x=catButtonArray[0].x-100;
			theTextField.y=catButtonArray[0].y-200;
			catButtonArray[1].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10;
			catButtonArray[1].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			
			catButtonArray[2].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth;
			catButtonArray[2].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[3].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*2;
			catButtonArray[3].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[4].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*3;
			catButtonArray[4].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[5].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*4;
			catButtonArray[5].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[6].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*5;
			catButtonArray[6].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[7].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*6;
			catButtonArray[7].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[8].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*7;
			catButtonArray[8].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[9].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*8;
			catButtonArray[9].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[10].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*9;
			catButtonArray[10].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			
			catButtonArray[11].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30;
			catButtonArray[11].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[12].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth;
			catButtonArray[12].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[13].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth*2;
			catButtonArray[13].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[14].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth*3;
			catButtonArray[14].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[15].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth*4;
			catButtonArray[15].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[16].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth*5;
			catButtonArray[16].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[17].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth*6;
			catButtonArray[17].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[18].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth*7;
			catButtonArray[18].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[19].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth*8;
			catButtonArray[19].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			
			catButtonArray[20].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+60;
			catButtonArray[20].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+180;
			catButtonArray[21].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+60+keyWidth;
			catButtonArray[21].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+180;
			catButtonArray[22].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+60+keyWidth*2;
			catButtonArray[22].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+180;
			catButtonArray[23].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+60+keyWidth*3;
			catButtonArray[23].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+180;
			catButtonArray[24].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+60+keyWidth*4;
			catButtonArray[24].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+180;
			catButtonArray[25].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+60+keyWidth*5;
			catButtonArray[25].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+180;
			catButtonArray[26].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+60+keyWidth*6;
			catButtonArray[26].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+180;
			//SPACE
			catButtonArray[27].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+228;
			catButtonArray[27].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+240;
			//RETURN
			catButtonArray[28].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+558;
			catButtonArray[28].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+207;
			
			
			
			}
		}
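For contrast, here is a minimal sketch (in TypeScript, and not the original app's API) of the container pattern the article describes: children store their coordinates relative to a parent, so dragging means moving only the parent, with no per-key bookkeeping.

```typescript
// A toy model of a display list: each object has a position, and a
// container's children are positioned relative to the container.
class DisplayObject {
  constructor(public x = 0, public y = 0) {}
}

class Container extends DisplayObject {
  children: DisplayObject[] = [];

  add(child: DisplayObject): void {
    this.children.push(child);
  }

  // A child's absolute position is the container's position plus the
  // child's offset, which is laid out once and never recomputed by hand.
  globalPosition(child: DisplayObject): [number, number] {
    return [this.x + child.x, this.y + child.y];
  }
}

const keyboard = new Container(100, 200);
const keyQ = new DisplayObject(10, 40); // positioned once, relative to the keyboard
keyboard.add(keyQ);

// Dragging moves only the container; every key follows for free.
keyboard.x += 50;
keyboard.y += 25;
console.log(keyboard.globalPosition(keyQ)); // [160, 265]
```

In Flash proper, `Sprite`/`DisplayObjectContainer` did this bookkeeping automatically, which is exactly why the hand-rolled `updateKeysPosition` above was unnecessary.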

365 TomorrowsMandelbrot’s Monster

Author: Majoki “It’s not a case that we can’t see the fuckdam forest for the fuckdam trees,” Lipton spat as she whirled on Parrati, “because anywhere, anyhow we look at it that fuckdamn beast is waiting, ready to bite our fuckdamn heads off.” Parrati tapped slender fingers on the viewport and clucked. “Fuckdamn. That’s baby […]

The post Mandelbrot’s Monster appeared first on 365tomorrows.


Planet DebianPaulo Henrique de Lima Santana: Bits from FOSDEM 2023 and 2024

Link para versão em português

Intro

Since 2019, I have traveled to Brussels at the beginning of the year to join FOSDEM, considered the largest and most important Free Software event in Europe. The 2024 edition was the fourth in-person edition in a row that I joined (2021 and 2022 did not happen due to COVID-19), always with the financial help of Debian, which kindly paid for my flight tickets after the Debian leader approved my travel-sponsorship request.

In 2020 I wrote several posts with a very complete report of the days I spent in Brussels. But in 2023 I didn't write anything, and because last year and this year I coordinated a room dedicated to translations of Free Software and Open Source projects, I'm going to take the opportunity to write about these two years and my experience.

After my first trip to FOSDEM, I started to think that I could join in a more active way than as just a regular attendee, so I wanted to propose a talk for one of the rooms. But then I thought that instead of proposing a talk, I could organize a room for talks :-) on the topic of "translations", something I'm very interested in, since I've been helping translate Debian into Portuguese for a few years now.

Joining FOSDEM 2023

In the second half of 2022 I did some research and saw that there had never been a room dedicated to translations, so when the FOSDEM organization opened the call to receive room proposals (called DevRooms) for the 2023 edition, I sent a proposal for a translation room and it was accepted!

After the room was confirmed, the next step was for me, as room coordinator, to publicize the call for talk proposals. I spent a few weeks wondering whether I would receive a good number of proposals or whether it would be a failure. But to my happiness, I received eight proposals, and due to time constraints I had to select six for the room's schedule.

FOSDEM 2023 took place from February 4th to 5th and the translation devroom was scheduled on the second day in the afternoon.

Fosdem 2023

The talks held in the room are listed below; for each of them you can watch the video recording.

And on the first day of FOSDEM I was at the Debian stand selling the t-shirts that I had brought from Brazil. People from France were also there selling other products, and it was cool to interact with people who visited the booth to buy things and/or talk about Debian.


Fosdem 2023

Fosdem 2023

Photos

Joining FOSDEM 2024

The 2023 result motivated me to propose the translation devroom again when the FOSDEM 2024 organization opened the call for rooms. I was waiting to find out if the FOSDEM organization would accept a room on this topic for the second year in a row, and to my delight, my proposal was accepted again :-)

This time I received 11 proposals! And again due to time constraints, I had to select six for the room's schedule.

FOSDEM 2024 took place from February 3rd to 4th and the translation devroom was scheduled for the second day again, but this time in the morning.

The talks held in the room are listed below; for each of them you can watch the video recording.

This time I didn't help at the Debian stand because I couldn't bring t-shirts from Brazil to sell. So I just stopped by and talked to some people who were there, including some DDs. But I volunteered for a few hours to operate the streaming camera in one of the main rooms.


Fosdem 2024

Fosdem 2024

Photos

Conclusion

The topics of the talks in these two years were quite diverse, and all the lectures were really very good. Across the 12 talks we can see how translations happen in projects such as KDE, PostgreSQL, Debian and Mattermost. We had presentations of tools such as LibreTranslate and Weblate, as well as scripts, AI, and data models. And also reports on the work carried out by communities in Africa, China and Indonesia.

The room was full for some talks and a little emptier for others, but I was very satisfied with the final result of these two years.

I leave my special thanks to Jonathan Carter, the Debian Leader, who approved my flight ticket requests so that I could join FOSDEM 2023 and 2024. This help was essential to making my trips to Brussels possible, because flight tickets are not cheap at all.

I would also like to thank my wife Jandira, who has been my travel partner :-)

Bruxelas

As there has been an increase in the number of proposals received, I believe that interest in the translations devroom is growing. So I intend to send the devroom proposal to FOSDEM 2025, and if it is accepted, wait for the future Debian Leader to approve helping me with the flight tickets again. We’ll see.

Planet DebianDirk Eddelbuettel: tinythemes 0.0.2 at CRAN: Maintenance

A first maintenance release of the still fairly new package tinythemes arrived on CRAN today. tinythemes provides the theme_ipsum_rc() function from hrbrthemes by Bob Rudis in a zero (added) dependency way. A simple example (also available as a demo inside the package) contrasts the default style (on the left) with the one added by this package (on the right):

This version mostly just updates to the newest releases of ggplot2 as one must, and takes advantage of Bob’s update to hrbrthemes yesterday.

The full set of changes since the initial CRAN release follows.

Changes in tinythemes version 0.0.2 (2024-03-04)

  • Added continuous integration action based on r2u

  • Added demo/ directory and a README.md

  • Minor edits to help page content

  • Synchronised with ggplot2 3.5.0 via hrbrthemes

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the repo where comments and suggestions are welcome.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Cryptogram Surveillance through Push Notifications

The Washington Post is reporting on the FBI’s increasing use of push notification data—”push tokens”—to identify people. The police can request this data from companies like Apple and Google without a warrant.

The investigative technique goes back years. Court orders that were issued in 2019 to Apple and Google demanded that the companies hand over information on accounts identified by push tokens linked to alleged supporters of the Islamic State terrorist group.

But the practice was not widely understood until December, when Sen. Ron Wyden (D-Ore.), in a letter to Attorney General Merrick Garland, said an investigation had revealed that the Justice Department had prohibited Apple and Google from discussing the technique.

[…]

Unlike normal app notifications, push alerts, as their name suggests, have the power to jolt a phone awake—a feature that makes them useful for the urgent pings of everyday use. Many apps offer push-alert functionality because it gives users a fast, battery-saving way to stay updated, and few users think twice before turning them on.

But to send that notification, Apple and Google require the apps to first create a token that tells the company how to find a user’s device. Those tokens are then saved on Apple’s and Google’s servers, out of the users’ reach.

The article discusses their use by the FBI, primarily in child sexual abuse cases. But we all know how the story goes:

“This is how any new surveillance method starts out: The government says we’re only going to use this in the most extreme cases, to stop terrorists and child predators, and everyone can get behind that,” said Cooper Quintin, a technologist at the advocacy group Electronic Frontier Foundation.

“But these things always end up rolling downhill. Maybe a state attorney general one day decides, hey, maybe I can use this to catch people having an abortion,” Quintin added. “Even if you trust the U.S. right now to use this, you might not trust a new administration to use it in a way you deem ethical.”

Cryptogram The Insecurity of Video Doorbells

Consumer Reports has analyzed a bunch of popular Internet-connected video doorbells. Their security is terrible.

First, these doorbells expose your home IP address and WiFi network name to the internet without encryption, potentially opening your home network to online criminals.

[…]

Anyone who can physically access one of the doorbells can take over the device—no tools or fancy hacking skills needed.

Planet DebianColin Watson: Free software activity in January/February 2024

Two months into my new gig and it’s going great! Tracking my time has taken a bit of getting used to, but having something that amounts to a queryable database of everything I’ve done has also allowed some helpful introspection.

Freexian sponsors up to 20% of my time on Debian tasks of my choice. In fact I’ve been spending the bulk of my time on debusine which is itself intended to accelerate work on Debian, but more details on that later. While I contribute to Freexian’s summaries now, I’ve also decided to start writing monthly posts about my free software activity as many others do, to get into some more detail.

January 2024

  • I added Incus support to autopkgtest. Incus is a system container and virtual machine manager, forked from Canonical’s LXD. I switched my laptop over to it and then quickly found that it was inconvenient not to be able to run Debian package test suites using autopkgtest, so I tweaked autopkgtest’s existing LXD integration to support using either LXD or Incus.
  • I discovered Perl::Critic and used it to tidy up some poor practices in several of my packages, including debconf. Perl used to be my language of choice but I’ve been mostly using Python for over a decade now, so I’m not as fluent as I used to be and some mechanical assistance with spotting common errors is helpful; besides, I’m generally a big fan of applying static analysis to everything possible in the hope of reducing bug density. Of course, this did result in a couple of regressions (1, 2), but at least we caught them fairly quickly.
  • I did some overdue debconf maintenance, mainly around tidying up error message handling in several places (1, 2, 3).
  • I did some routine maintenance to move several of my upstream projects to a new Gnulib stable branch.
  • debmirror includes a useful summary of how big a Debian mirror is, but it hadn’t been updated since 2010 and the script to do so had bitrotted quite badly. I fixed that and added a recurring task for myself to refresh this every six months.

February 2024

  • Some time back I added AppArmor and seccomp confinement to man-db. This was mainly motivated by a desire to support manual pages in snaps (which is still open several years later …), but since reading manual pages involves a non-trivial text processing toolchain mostly written in C++, I thought it was reasonable to assume that some day it might have a vulnerability even though its track record has been good; so man now restricts the system calls that groff can execute and the parts of the file system that it can access. I stand by this, but it did cause some problems that have needed a succession of small fixes over the years. This month I issued DLA-3731-1, backporting some of those fixes to buster.
  • I spent some time chasing a console-setup build failure following the removal of kFreeBSD support, which was uploaded by mistake. I suggested a set of fixes for this, but the author of the change to remove kFreeBSD support decided to take a different approach (fair enough), so I’ve abandoned this.
  • I updated the Debian zope.testrunner package to 6.3.1.
  • openssh:
    • A Freexian collaborator had a problem with automating installations involving changes to /etc/ssh/sshd_config. This turned out to be resolvable without any changes, but in the process of investigating I noticed that my dodgy arrangements to avoid ucf prompts in certain cases had bitrotted slightly, which meant that some people might be prompted unnecessarily. I fixed this and arranged for it not to happen again.
    • Following a recent debian-devel discussion, I realized that some particularly awkward code in the OpenSSH packaging was now obsolete, and removed it.
  • I backported a python-channels-redis fix to bookworm. I wasn’t the first person to run into this, but I rediscovered it while working on debusine and it was confusing enough that it seemed worth fixing in stable.
  • I fixed a simple build failure in storm.
  • I dug into a very confusing cluster of celery build failures (1, 2, 3), and tracked the hardest bit down to a Python 3.12 regression, now fixed in unstable thanks to Stefano Rivera. Getting celery back into testing is blocked on the 64-bit time_t transition for now, but once that’s out of the way it should flow smoothly again.

Worse Than FailureCodeSOD: Classical Architecture

In the great olden times, when Classic ASP was just ASP, there were a surprising number of intranet applications built in it. Since ASP code ran on the server, you frequently needed JavaScript to run on the client side, and so many applications would mix the two, generating JavaScript from ASP. This led to a lot of home-grown frameworks that were wobbly at best.

Years ago, Melinda inherited one such application from a 3rd party supplier.

<script type='text/javascript' language="JavaScript">

    var NoOffFirstLineMenus=3;                      // Number of first level items
    function BeforeStart(){return;}
    function AfterBuild(){return;}
    function BeforeFirstOpen(){return;}
    function AfterCloseAll(){return;}

    // Menu tree

<% If Session("SubSystem") = "IndexSearch" Then %>

    <% If Session("ReturnURL") = "" Then %>
        Menu1=new Array("Logoff","default.asp","",0,20,100,"","","","","","",-1,-1,-1,"","Logoff");
    <% else %>
        Menu1=new Array("<%=session("returnalt")%>","returnredirect.asp","",0,20,100,"","","","","","",-1,-1,-1,"","Return to Application");
        <% end if %>
        Menu2=new Array("Menu","Menu.asp","",0,20,100,"","","","","","",-1,-1,-1,"","Menu");
        Menu3=new Array("Back","","",5,20,40,"","","","","","",-1,-1,-1,"","Back to Previous Pages");
        Menu3_1=new Array("<%= Session("sptitle") %>",<% If OWFeatureExcluded(Session("UserID"),"Web Index Search","SYSTEM","","")Then %>"","",0,20,130,"#33FFCC","#33FFCC","#C0C0C0","#C0C0C0","","","","","","",-1,-1,-1,"","<%= Session("sptitle") %>"); <% Else %>"SelectStorage.asp","",0,20,130,"","","","","","",-1,-1,-1,"","<%= Session("sptitle") %>");
    <% End If %>
    Menu3_2=new Array("Indexes","IndexRedirect.asp?<%= Session("ReturnQueryString")%>","",0,20,95,"","","","","","",-1,-1,-1,"","Enter Index Search Criteria");
    Menu3_3=new Array("Document List","DocumentList.asp?<%= Session("ReturnQueryString")%>","",0,20,130,"","","","","","",-1,-1,-1,"","Current Document List");
    Menu3_4=new Array("Document Detail",<% If OWFeatureExcluded(Session("UserID"),"Web Document Detail",Documents.Fields.Item("StoragePlace").Value,"","") Then %>"","",0,20,135,"#33FFCC","#33FFCC","#C0C0C0","#C0C0C0","","","","","","",-1,-1,-1,"","Document Details"); <% Else %>"DetailPage.asp?CounterKey=<%= Request.QueryString("CounterKey") %>","",0,20,135,"","","","","","",-1,-1,-1,"","Document Details");<% End If %>
    Menu3_5=new Array("Comments","Annotations.asp?CounterKey=<%= Request.QueryString("CounterKey") %>","",0,20,70,"","","","","","",-1,-1,-1,"","Document Comments");

<% Else %>

    <% If Session("ReturnURL") = "" Then %>
        Menu1=new Array("Logoff","default.asp","",0,20,100,"","","","","","",-1,-1,-1,"","Logoff");
    <% else %>
    Menu1=new Array("<%=session("returnalt")%>","returnredirect.asp","",0,20,100,"","","","","","",-1,-1,-1,"","Return to Application");
    <% end if %>
    Menu2=new Array("Menu","Menu.asp","",0,20,100,"","","","","","",-1,-1,-1,"","Menu");
    Menu3=new Array("Back","","",3,20,40,"","","","","","",-1,-1,-1,"","Back to Previous Pages");
    Menu3_1=new Array("Document List","SearchDocumentList.asp?<%= Session("ReturnQueryString") %>","",0,20,130,"","","","","","",-1,-1,-1,"","Current Document List");
    Menu3_2=new Array("Document Detail","DetailPage.asp?CounterKey=<%= Request.QueryString("CounterKey") %>","",0,20,135,"","","","","","",-1,-1,-1,"","Document Details");
    Menu3_3=new Array("Comments","Annotations.asp?CounterKey=<%= Request.QueryString("CounterKey") %>","",0,20,70,"","","","","","",-1,-1,-1,"","Document Comments");

<% End If %>

</script>

Here, the ASP code just provides some conditionals- we're checking session variables, and based on those we emit slightly different JavaScript. Or sometimes the same JavaScript, just to keep us on our toes.

The real magic is that this isn't the code that actually renders menu items, this is just where they get defined, and instead of using objects in JavaScript, we just use arrays- the label, the URL, the colors, and many other parameters that control the UI elements are just stuffed into an array, unlabeled. And then there are also the extra if statements, embedded right inline in the code, helping to guarantee that you can't actually debug this, because you can't understand what it's actually doing without really sitting down and spending time with it.
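For contrast, the same menu data could have been modeled with named fields instead of positional array slots. This is a hypothetical TypeScript sketch (the field names are illustrative, not from the original framework):

```typescript
// Hypothetical shape for one menu entry: what was "slot 5 of the array"
// becomes a named, self-documenting field.
interface MenuItem {
  label: string;
  url: string;
  width: number;
  tooltip: string;
  children?: MenuItem[];
}

const backMenu: MenuItem = {
  label: "Back",
  url: "",
  width: 40,
  tooltip: "Back to Previous Pages",
  children: [
    {
      label: "Document List",
      url: "SearchDocumentList.asp",
      width: 130,
      tooltip: "Current Document List",
    },
  ],
};

// Reading the data no longer requires counting commas.
console.log(backMenu.children?.[0].label); // Document List
```

The server-side conditionals could then emit one JSON blob per session state, instead of interleaving `<% If %>` blocks inside JavaScript array literals.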

Of course, this application is long dead. But for Melinda, the memory lives on.


365 TomorrowsBladesmith

Author: Julian Miles, Staff Writer Tallisandre peers at my dagger. “That’s a wicked stick you have there. I’ve never seen the like.” I hold it up so the light from the forge catches the square end of the blade, showing the third edge and double point where the single-sided long edges meet. “It’s called a […]

The post Bladesmith appeared first on 365tomorrows.


Planet DebianIustin Pop: New corydalis 2024.9.0 release!

Obligatory and misused quote: It’s not dead, Jim!

I’ve kind of dropped the ball lately on organising my own photo collection, but February was a pretty good month and I managed to write some more code for Corydalis, ending up with the aforementioned new release.

The release is not a big one, but I did manage to solve one thing that was annoying me greatly: the lack of ability to play videos inline in one of the two picture viewing modes (my preferred mode, in fact). Now, whether you’re browsing through pictures or looking at pictures one by one, you can in both cases play videos easily and, to some extent, “as it should be”. No user docs for that yet (I actually need to split the manual into user/admin/developer parts).

I did some more internal cleanups, and I’ve enabled building release zips (since that’s how GitHub actions creates artifacts), which means it should be 10% easier to test this. The remaining 90% is configuring it, pointing it to picture folders, and so on, so this is definitely not plug-and-play.

The diff summary between 2023.44.0 and 2024.9.0 is: 56 files changed, 1412 insertions(+), 700 deletions(-). Which is not bad, but also not too much. The biggest churn was, as expected, in the viewer (due to the aforementioned video playing). The “scary” part is that the TypeScript code is now at 7.9% (plus a tiny bit more JS, which I can’t convert yet due to lack of type definitions upstream). I say scary in quotes because I would actually like to know TypeScript better, but there is no time.

The new release can be seen in action on demo.corydalis.io, and as always, just after release I found two minor issues:

  • The GitHub actions don’t retrieve the tags by default (actually they didn’t use to retrieve tags at all, but that’s fixed now, it just needs configuration), so the public build just says “Corydalis fbe0088, built on Mar 3 2024.” (which is the correct hash value, at least).
  • I don’t have videos on the public web site, so the new functionality is not visible. I’m not sure I want to add real videos (size/bandwidth), hmm 🤨.

Well, there will be future releases. For now, I’ve made an open-source package release, which I hadn’t done in a while, so I’m happy. See you!

Planet DebianPetter Reinholdtsen: RAID status from LSI Megaraid controllers using free software

The last few days I have revisited RAID setup using the LSI Megaraid controller. These are a family of controllers called PERC by Dell, and are present in several old PowerEdge servers; I recently got my hands on one of these. I had forgotten how to handle this RAID controller in Debian, so I had to take a peek in the Debian wiki page "Linux and Hardware RAID: an administrator's summary" to remember what kind of software is available to configure and monitor the disks and controller. I prefer Free Software alternatives to proprietary tools, as the latter tend to fall into disarray once the manufacturer loses interest, and often do not work with newer Linux distributions. Sadly there is no free software tool to configure the RAID setup, only to monitor it. RAID can provide improved reliability and resilience in a storage solution, but only if it is being regularly checked and any broken disks are replaced in time. I thus want to ensure some automatic monitoring is available.

In the discovery process, I came across an old free software tool to monitor PERC2, PERC3, PERC4 and PERC5 controllers, which to my surprise is not present in Debian. To help change that I created a request for packaging of the megactl package, and tried to track down a usable version. The original project site is on Sourceforge, but as far as I can tell that project has been dead for more than 15 years. I managed to find a more recent fork on GitHub from user hmage, but it is unclear to me if this is still being maintained; it has not seen much improvement since 2016. A more up-to-date edition is a git fork of that GitHub fork by user namiltd, and this newer fork seems a lot more promising. The owner of this GitHub repository has replied to change proposals within hours, and had already added some improvements and support for more hardware. Sadly he is reluctant to commit to maintaining the tool, and stated in my first pull request that he thinks a new release should be made based on the git repository owned by hmage. I perfectly understand this reluctance, as I feel the same about maintaining yet another package in Debian when I barely have time to take care of the ones I already maintain, but I do not really have high hopes that hmage will have time to spend on it, and hope namiltd will change his mind.

In any case, I created a draft package based on the namiltd edition and put it under the debian group on salsa.debian.org. If you own a Dell PowerEdge server with one of the PERC controllers, or any other RAID controller using the megaraid or megaraid_sas Linux kernel modules, you might want to check it out. If enough people are interested, perhaps the package will make it into the Debian archive.

There are two tools provided, megactl for the megaraid Linux kernel module, and megasasctl for the megaraid_sas Linux kernel module. The simple output from the command on one of my machines looks like this (yes, I know some of the disks have problems :).

# megasasctl 
a0       PERC H730 Mini           encl:1 ldrv:2  batt:good
a0d0       558GiB RAID 1   1x2  optimal
a0d1      3067GiB RAID 0   1x11 optimal
a0e32s0     558GiB  a0d0  online   errs: media:0  other:19
a0e32s1     279GiB  a0d1  online  
a0e32s2     279GiB  a0d1  online  
a0e32s3     279GiB  a0d1  online  
a0e32s4     279GiB  a0d1  online  
a0e32s5     279GiB  a0d1  online  
a0e32s6     279GiB  a0d1  online  
a0e32s8     558GiB  a0d0  online   errs: media:0  other:17
a0e32s9     279GiB  a0d1  online  
a0e32s10    279GiB  a0d1  online  
a0e32s11    279GiB  a0d1  online  
a0e32s12    279GiB  a0d1  online  
a0e32s13    279GiB  a0d1  online  

#

In addition to displaying a simple status report, it can also test individual drives and print the various event logs. Perhaps you will find it useful too?

In the packaging process I provided some patches upstream to improve installation and to ensure an AppStream metainfo file is provided listing all supported hardware, allowing isenkram to propose the package on all servers with a relevant PCI card.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.12.8.1.0 on CRAN: Upstream Fix, Interface Polish

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1130 other packages on CRAN, downloaded 32.8 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 578 times according to Google Scholar.

This release brings a new upstream bugfix release, Armadillo 12.8.1, prepared by Conrad yesterday. It was delayed for a few hours as CRAN noticed an error in one package, which we all concluded was spurious as it could not be reproduced outside of the one run there. Following from the previous release, we also use the slightly faster ‘Lighter’ header in the examples. And once it got to CRAN I also updated the Debian package.

The set of changes since the last CRAN release follows.

Changes in RcppArmadillo version 0.12.8.1.0 (2024-03-02)

  • Upgraded to Armadillo release 12.8.1 (Cortisol Injector)

    • Workaround in norm() for yet another bug in macOS accelerate framework
  • Update README for RcppArmadillo usage counts

  • Update examples to use '#include <RcppArmadillo/Lighter>' for faster compilation excluding unused Rcpp features

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianBen Hutchings: FOSS activity in February 2024

  • I updated the Linux kernel packages in various Debian suites:
  • buster: Updated linux-5.10 to the latest security update from bullseye, and uploaded it, but it still needs to be approved.
    • bullseye-backports: Updated linux (6.1) to the latest security update from bullseye, and uploaded it.
    • bookworm-backports: Updated linux to the current version in testing, and uploaded it.
  • I reported a regression in documentation builds in the Linux 5.10 stable branch.

Planet DebianPaul Wise: FLOSS Activities Feb 2024

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

  • Spam: reported 1 Debian bug report
  • Debian BTS usertags: changes for the month

Administration

  • Debian BTS: unarchive/reopen/triage bugs for reintroduced packages: ovito, tahoe-lafs, tpm2-tss-engine
  • Debian wiki: produce HTML dump for a user, unblock IP addresses, approve accounts

Communication

  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

365 TomorrowsSenescence

Author: Peter Griffiths Elsie had heard some noise in the night, but hadn’t had the energy to get out of bed to see what it was. Now she could see splatters of paint on the window pane, grey on the grey of the cold morning light. The result was obvious even before she switched on […]

The post Senescence appeared first on 365tomorrows.

,

Planet DebianRavi Dwivedi: Malaysia Trip

Last month, I took a trip to Malaysia and Thailand, staying six days in each country. I selected these countries because both were granting visa-free entry to Indian tourists for a limited time window. This post covers the Malaysia part; the Thailand part will be covered in the next post. If you want to travel to either of these countries during the visa-free period, I have written up all the questions asked during immigration and at airports during this trip here, which might be of help.

I mostly stayed in Kuala Lumpur and visited places around it. Before the trip I had also planned to visit Ipoh and the Cameron Highlands, but could not fit them in. I found planning a trip to Malaysia a little difficult. The country consists of two main landmasses: Peninsular Malaysia and part of Borneo. Then there are more islands: Langkawi, Penang, and the Perhentian and Redang Islands. Reaching those islands seemed a little difficult to plan, and I hope to visit more places on my next Malaysia trip.

My hostel for the first day was booked in the Chinatown part of Kuala Lumpur, near the Pasar Seni LRT station. As soon as I checked in and entered my room, I met another Indian named Fletcher, and we accompanied each other for the rest of the trip. That day, we went to Muzium Negara and Little India. I realized that if you know the right places to buy what you want, Malaysia can be quite cheap. The Malaysian currency is the Malaysian Ringgit (MYR); 1 MYR is roughly 18 INR. For 2 MYR you can get a good masala tea in Little India, and a masala dosa costs around 4-5 MYR. Vegetarian food is widely available in Kuala Lumpur, thanks to the Tamil community. I also tried Mee Goreng, which was vegetarian, and found it fine in terms of taste. When I looked up Mee Goreng on Wikipedia, I found out that it originated with Indian immigrants in Malaysia (and neighboring countries), but you don’t get it in India!

Mee Goreng, a dish made of noodles in Malaysia.

For the next day, Fletcher had planned a trip to Genting Highlands and pre-booked everything. I planned to join him, but when we went to KL Sentral to take the bus, tickets for his bus were sold out. I could have taken a bus at a different time, but decided to visit some other place that day and cover Genting Highlands later. At the ticket counter, I met a family from Delhi who also wanted to go to Genting Highlands; not getting bus tickets for that day, they bought tickets for the next day and instead planned for Batu Caves. I joined them and went to Batu Caves.

After returning from Batu Caves, we went our separate ways. I went back and took rest at my hostel and later went to Petronas Towers at night. Petronas Towers is the icon of Kuala Lumpur. Having a photo there was a must. I was at Petronas Towers at around 9 PM. Around that time, Fletcher came back from Genting Highlands and we planned to meet at KL Sentral to head for dinner.

Me at Petronas Towers.

We went back to the same place as the day before where I had Mee Goreng. This time we had dosa and a masala tea. Their masala tea from the last day was tasty and that’s why I was looking for them in the first place. We also met a Malaysian family having Indian ancestry dining there and had a nice conversation. Then we went to a place to eat roti canai in Pasar Seni market. Roti canai is a popular non-vegetarian dish in Malaysia but I took the vegetarian version.

Photo with Malaysians.

The next day, we went to the Berjaya Times Square shopping mall, which sells pretty cheap items for daily use and souvenirs too. However, I bought my souvenirs from Petaling Street in Chinatown. At night, we explored Bukit Bintang, which is the heart of Kuala Lumpur and famous for its nightlife.

After that, Fletcher went to Bangkok and I was in Malaysia for two more days. Next day, I went to Genting Highlands and took the cable car, which had awesome views. I came back to Kuala Lumpur by the night. The remaining day I just roamed around in Bukit Bintang. Then I took a flight for Bangkok on 7th Feb, which I will cover in the next post.

In Malaysia, I met so many people from different countries - apart from people from Indian subcontinent, I met Syrians, Indonesians (Malaysia seems to be a popular destination for Indonesian tourists) and Burmese people. Meeting people from other cultures is an integral part of travel for me.

My expenses for Food + Accommodation + Travel added to 10,000 INR for a week in Malaysia, while flight costs were: 13,000 INR (Delhi to Kuala Lumpur) + 10,000 INR (Kuala Lumpur to Bangkok) + 12,000 INR (Bangkok to Delhi).
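Tallied up (just a sanity check of the figures quoted above, in INR):

```python
# Quick tally of the trip costs quoted above (all figures in INR).
ground = 10_000                      # food + accommodation + local travel, one week
flights = 13_000 + 10_000 + 12_000   # Delhi->KL, KL->Bangkok, Bangkok->Delhi
print(ground + flights)              # → 45000
```

So the week in Malaysia cost about 45,000 INR all-in, with flights making up the bulk of it.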

For OpenStreetMap users, good news is Kuala Lumpur is fairly well-mapped on OpenStreetMap.

Tips

  • I bought a local SIM from a shop at the KL Sentral station complex which had “news” in its name (I forget the exact name, and there are two shops with “news” in their names); it was the cheapest option I could find. The SIM was 10 MYR for 5 GB of data for a week. If you want to make calls too, you need to spend an extra 5 MYR.

  • 7-Eleven and KK Mart convenience stores are everywhere in the city and they are open all the time (24 hours a day). If you are a vegetarian, you can at least get some bread and cheese from there to eat.

  • A lot of people know English (and many - Indians, Pakistanis, Nepalis - know Hindi) in Kuala Lumpur, so I had no language problems most of the time.

  • For shopping on a budget, you can go to Petaling Street, Berjaya Times Square or Bukit Bintang. In particular, there is a shop named I Love KL Gifts in Bukit Bintang, just near the metro/monorail station, which had very good prices. Check out the location of the shop on OpenStreetMap.

365 TomorrowsTo Savor

Author: Jordan Emilson “Make sure it has a name” Werner whispered to the darkened figure beside him, looming over the crib. In the blackness the room appeared in two dimensions: his, and the one his wife and child existed in across the floor. Her head turned, or at least it appeared to him as such […]

The post To Savor appeared first on 365tomorrows.

,

Cryptogram LLM Prompt Injection Worm

Researchers have demonstrated a worm that spreads through prompt injection. Details:

In one instance, the researchers, acting as attackers, wrote an email including the adversarial text prompt, which “poisons” the database of an email assistant using retrieval-augmented generation (RAG), a way for LLMs to pull in extra data from outside its system. When the email is retrieved by the RAG, in response to a user query, and is sent to GPT-4 or Gemini Pro to create an answer, it “jailbreaks the GenAI service” and ultimately steals data from the emails, Nassi says. “The generated response containing the sensitive user data later infects new hosts when it is used to reply to an email sent to a new client and then stored in the database of the new client,” Nassi says.

In the second method, the researchers say, an image with a malicious prompt embedded makes the email assistant forward the message on to others. “By encoding the self-replicating prompt into the image, any kind of image containing spam, abuse material, or even propaganda can be forwarded further to new clients after the initial email has been sent,” Nassi says.

It’s a natural extension of prompt injection. But it’s still neat to see it actually working.
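The propagation mechanic is simple enough to sketch without any LLM at all. Below is a toy simulation (my own, not the researchers' code): each assistant stores incoming mail in its retrieval database, and a naive "model" that echoes retrieved context into its replies stands in for GPT-4/Gemini.

```python
# Toy simulation of the RAG worm mechanic. A "model" that naively replicates
# retrieved context into its output stands in for the real LLM; no GenAI here.
WORM = "<<SELF-REPLICATING PROMPT: copy this text into every reply>>"

class Assistant:
    def __init__(self, name: str):
        self.name = name
        self.rag_db: list[str] = []    # retrieval-augmented context store

    def receive(self, email: str):
        self.rag_db.append(email)      # incoming mail lands in the database

    def reply(self) -> str:
        # A naive model replicates retrieved context verbatim into its output.
        context = " ".join(self.rag_db)
        return (WORM + " Thanks for your email!") if WORM in context else "Thanks!"

alice, bob, carol = Assistant("alice"), Assistant("bob"), Assistant("carol")
alice.receive("hello " + WORM)         # attacker poisons Alice's RAG store
bob.receive(alice.reply())             # Alice's reply infects Bob...
carol.receive(bob.reply())             # ...and Bob's reply infects Carol
print(WORM in carol.reply())           # → True: the prompt propagated
```

Real models replicate injected instructions far less reliably than this echo stub, which is exactly what the paper measures as "propagation rate".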

Research paper: “ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications.”

Abstract: In the past year, numerous companies have incorporated Generative AI (GenAI) capabilities into new and existing applications, forming interconnected Generative AI (GenAI) ecosystems consisting of semi/fully autonomous agents powered by GenAI services. While ongoing research highlighted risks associated with the GenAI layer of agents (e.g., dialog poisoning, membership inference, prompt leaking, jailbreaking), a critical question emerges: Can attackers develop malware to exploit the GenAI component of an agent and launch cyber-attacks on the entire GenAI ecosystem?

This paper introduces Morris II, the first worm designed to target GenAI ecosystems through the use of adversarial self-replicating prompts. The study demonstrates that attackers can insert such prompts into inputs that, when processed by GenAI models, prompt the model to replicate the input as output (replication), engaging in malicious activities (payload). Additionally, these inputs compel the agent to deliver them (propagate) to new agents by exploiting the connectivity within the GenAI ecosystem. We demonstrate the application of Morris II against GenAI-powered email assistants in two use cases (spamming and exfiltrating personal data), under two settings (black-box and white-box accesses), using two types of input data (text and images). The worm is tested against three different GenAI models (Gemini Pro, ChatGPT 4.0, and LLaVA), and various factors (e.g., propagation rate, replication, malicious activity) influencing the performance of the worm are evaluated.

Charles StrossWorldcon in the news

You've probably seen news reports that the Hugo awards handed out last year at the world science fiction convention in Chengdu were rigged. For example: Science fiction awards held in China under fire for excluding authors.

The Guardian got bits of the background wrong, but what's undeniably true is that it's a huge mess. And the key point the press and most of the public miss is that they seem to think there's some sort of worldcon organization that can fix this.

Spoiler: there isn't.

(Caveat: what follows below the cut line is my brain dump, from 20km up, in lay terms, of what went wrong. I am not a convention runner and I haven't been following the Chengdu mess obsessively. If you want the inside baseball deets, read the File770 blog. If you want to see the rulebook, you can find it here (along with a bunch more stuff). I am on the outside of the fannish discourse and flame wars on this topic, and I may have misunderstood some of the details. I'm open to authoritative corrections and will update if necessary.)

SF conventions are generally fan-run (amateur) get-togethers, run on a non-profit/volunteer basis. There are some exceptions (the big Comiccons like SDCC, a couple of really large fan conventions that out-grew the scale volunteers can run them on so pay full-time staff) but generally they're very amateurish.

SF conventions arose organically out of SF fan clubs that began holding face to face meet-ups in the 1930s. Many of them are still run by local fan clubs and usually they stick to the same venue for decades: for example, the long-running Boskone series of conventions in Boston is run by NESFA, the New England SF Association; Novacon in the UK is run by the Birmingham SF Group. Both have been going for over 50 years now.

Others are less location-based. In the UK, there are the British Eastercons, held over the Easter (long) bank holiday weekend every year in a different city. It's a notionally national SF convention, although historically it's tended to be London-centric. They're loosely associated with the BSFA, which announces its own SF awards (the BSFA awards) at the Eastercon.

Because it's hard to run a convention when you live 500km from the venue, local SF societies or organizer teams talk to hotels and put together a bid for the privilege of working their butts off for a weekend. Then, a couple of years before the convention, there's a meeting and a vote at the preceding-but-one con in the series where the members vote on where to hold that year's convention.

Running a convention is not expense-free, so it's normal to charge for membership. (Nobody gets paid, but conventions host guests of honour—SF writers, actors, and so on—and they get their membership, hotel room, and travel expenses comped in the expectation that they'll stick around and give talks/sign books/shake hands with the members.)

What's less well-known outside the bubble is that it's also normal to offer "pre-supporting" memberships (to fund a bid) and "supporting" memberships (you can't make it to the convention that won the bidding war but you want to make a donation). Note that such partial memberships are upgradable later for the difference in cost if you decide to attend the event.

The world science fiction convention is the name of a long-running series of conventions (the 82nd one is in Glasgow this August) that are held annually. There is a rule book for running a worldcon. For starters, the venue is decided by a bidding war between sites (as above). For seconds, members of the convention are notionally buying membership, for one year, in the World Science Fiction Society (WSFS). The rule book for running a worldcon is the WSFS constitution, and it lays down the rules for:

  • Voting on where the next-but-one worldcon will be held ("site selection")
  • Holding a business meeting where motions to amend the WSFS constitution can be discussed and voted on (NB: to be carried a motion must be proposed and voted through at two consecutive worldcons)
  • Running the Hugo awards

The important thing to note is that the "worldcon" is *not* a permanent organization. It's more like a virus that latches onto an SF convention, infects it with worldcon-itis, runs the Hugo awards and the WSFS business meeting, then selects a new convention to parasitize the year after next.

No worldcon binds the hands of the next worldcon, it just passes the baton over in the expectation that the next baton-holder will continue the process rather than, say, selling the baton off to be turned into matchsticks.

This process worked more or less fine for eighty years, until it ran into Chengdu.

Worldcons are volunteer, fan-organized, amateur conventions. They're pretty big: the largest hit roughly 14,000 members, and they average 4000-8000. (I know of folks who used "worked on a British eastercon committee" as their dissertation topic for degrees in Hospitality Management; you don't get to run a worldcon committee until you're way past that point.) But SF fandom is a growing community thing in China. And even a small regional SF convention in China is quite gigantic by most western (trivially, US/UK) standards.

My understanding is that a bunch of Chinese fans who ran a successful regional convention in Chengdu (population 21 million; slightly more than the New York metropolitan area, about 30% more than London and suburbs) heard about the worldcon and thought "wouldn't it be great if we could call ourselves the world science fiction convention?"

They put together a bid, then got a bunch of their regulars to cough up $50 each to buy a supporting membership in the 2021 worldcon and vote in site selection. It doesn't take that many people to "buy" a worldcon—I seem to recall it's on the order of 500-700 votes—so they bought themselves the right to run the worldcon in 2023. And that's when the fun and games started.

See, Chinese fandom is relatively isolated from western fandom. And the convention committee didn't realize that there was this thing called the WSFS Constitution which set out rules for stuff they had to do. I gather they didn't even realize they were responsible for organizing the nomination and voting process for the Hugo awards, commissioning the award design, and organizing an awards ceremony, until about 12 months before the convention (which is short notice for two rounds of voting, commissioning a competition between artists to design the Hugo award base for that year, and so on). So everything ran months too late, and they had to delay the convention, and most of the students who'd pitched in to buy those bids could no longer attend because of bad timing, and worse ... they began picking up an international buzz, which in turn drew the attention of the local Communist Party, in the middle of the authoritarian clamp-down that's been intensifying for the past couple of years. (Remember, it takes a decade to organize a successful worldcon from initial team-building to running the event. And who imagined our existing world of 2023 back in 2013?)

The organizers appear to have panicked.

First they arbitrarily disqualified a couple of very popular works by authors who they thought might offend the Party if they won and turned up to give an acceptance speech (including "Babel", by R. F. Kuang, which won the Nebula and Locus awards in 2023 and was a favourite to win the Hugo as well).

Then they dragged their heels on releasing the vote counts—the WSFS Constitution requires the raw figures to be released after the awards are handed out.

Then there were discrepancies in the count of votes cast, such that the raw numbers didn't add up.

The haphazard way they released the data suggests that the 911 call is coming from inside the house: the convention committee freaked out when they realized the convention had become a political hot potato, rigged the vote badly, and are now farting smoke signals as if to say "a secret policeman hinted that it could be very unfortunate if we didn't anticipate the Party's wishes".

My take-away:

The world science fiction convention coevolved with fan-run volunteer conventions in societies where there's a general expectation of the rule of law and most people abide by social norms irrespective of enforcement. The WSFS constitution isn't enforceable except insofar as normally fans see no reason not to abide by the rules. So it works okay in the USA, the UK, Canada, the Netherlands, Japan, Australia, New Zealand, and all the other western-style democracies it's been held in ... but broke badly when a group of enthusiasts living in an authoritarian state won the bid then realized too late that by doing so they'd come to the attention of Very Important People who didn't care about their society's rulebook.

Immediate consequences:

For the first fifty or so worldcons, worldcon was exclusively a North American phenomenon except for occasional sorties to the UK. Then it began to open up as cheap air travel became a thing. In the 21st century about 50% of worldcons are held outside North America, and until 2016 there was an expectation that it would become truly international.

But the Chengdu fubar has created shockwaves. There's no immediate way to fix this, any more than you'll be able to fix Donald Trump declaring himself dictator-for-life on the Ides of March in 2025 if he gets back into the White House with a majority in the House and Senate. It needs a WSFS constitutional amendment at least (so pay attention to the motions and voting in Glasgow, and then next year, in Seattle) just to stop it happening again. And nobody has ever tried to retroactively invalidate the Hugo awards. While there's a mechanism for running Hugo voting and handing out awards for a year in which there was no worldcon (the Retrospective Hugo awards—for example, the 1945 Hugo Awards were voted on in 2020), nobody considered the need to re-run the Hugos for a year in which the vote was rigged. So there's no mechanism.

The fallout from Chengdu has probably sunk several other future worldcon bids—and it's not as if there are a lot of teams competing for the privilege of working themselves to death: Glasgow and Seattle (2024 and 2025) both won their bidding by default because they had experienced, existing worldcon teams and nobody else could be bothered turning up. So the Ugandan worldcon bid has collapsed (and good riddance, many fans would vote NO WORLDCON in preference to a worldcon in a nation that recently passed a law making homosexuality a capital offense). The Saudi Arabian bid also withered on the vine, but took longer to finally die. They shifted their venue to Cairo in a desperate attempt to overcome Prince Bone-saw's negative PR optics, but it hit the buffers when the Egyptian authorities refused to give them the necessary permits. Then there's the Tel Aviv bid. Tel Aviv fans are lovely people, but I can't see an Israeli worldcon being possible in the foreseeable future (too many genocide cooties right now). Don't ask about Kiev (before February 2022 they were considering bidding for the Eurocon). And in the USA, the prognosis for successful Texas and Florida worldcon bids is poor (book banning does not go down well with SF fans).

Beyond Seattle in 2025, the sole bid standing for 2026 (now the Saudi bid has died) is Los Angeles. Tel Aviv is still bidding for 2027, but fat chance: Uganda is/was targeting 2028, and there was some talk of a Texas bid in 2029 (all these are speculative bids and highly unlikely to happen in my opinion). I am also aware of a bid for a second Dublin worldcon (they've got a shiny new conference centre), targeting 2029 or 2030. There may be another Glasgow or London bid in the mid-30s, too. But other than that? I'm too out of touch with current worldcon politics to say, other than, watch this space (but don't buy the popcorn from the concession stand, it's burned and bitter).

UPDATE

A commenter just drew my attention to this news item on China.org.cn, dated October 23rd, 2023, right after the worldcon. It begins:

Investment deals valued at approximately $1.09 billion were signed during the 81st World Science Fiction Convention (Worldcon) held in Chengdu, Sichuan province, last week at its inaugural industrial development summit, marking significant progress in the advancement of sci-fi development in China.

The deals included 21 sci-fi industry projects involving companies that produce films, parks, and immersive sci-fi experiences ..."

That's a metric fuckton of moolah in play, and it would totally account for the fan-run convention folks being discreetly elbowed out of the way and the entire event being stage-managed as a backdrop for a major industrial event to bootstrap creative industries (film, TV, and games) in Chengdu. And—looking for the most charitable interpretation here—the hapless western WSFS people being carried along for the ride to provide a veneer of worldcon-ness to what was basically Chinese venture capital hijacking the event and then sanitizing it politically.

Follow the money.

Planet DebianGuido Günther: Free Software Activities February 2024

A short status update what happened last month. Work in progress is marked as WiP:

GNOME Calls

  • Landed support to pick emergency calls numbers based on location (until now Calls picked the numbers from the SIM card only): Merge Request
  • Bugfix: Fix dial back - the action mistakenly got disabled in some circumstances: Merge Request, Issue.

Phosh and Phoc

As this often overlaps I've put them in a common section:

Phosh Tour

Phosh Mobile Settings

Phosh OSK Stub

Livi Video Player

Phosh.mobi Website

  • Directly link to tarballs from the release page, e.g. here

If you want to support my work see donations.

Cryptogram Friday Squid Blogging: New Extinct Species of Vampire Squid Discovered

Paleontologists have discovered a 183-million-year-old species of vampire squid.

Prior research suggests that the vampyromorph lived in the shallows off an island that once existed in what is now the heart of the European mainland. The research team believes that the remarkable degree of preservation of this squid is due to unique conditions at the moment of the creature’s death. Water at the bottom of the sea where it ventured would have been poorly oxygenated, causing the creature to suffocate. In addition to killing the squid, it would have prevented other creatures from feeding on its remains, allowing it to become buried in the seafloor, wholly intact.

Research paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Planet DebianScarlett Gately Moore: Kubuntu: Week 4, Feature Freeze and what comes next.

First I would like to give a big congratulations to KDE for a superb KDE 6 mega release 🙂 While we couldn’t go with 6 on our upcoming LTS release, I do recommend KDE neon if you want to give it a try! I want to say it again: I firmly stand by the Kubuntu Council in the decision to stay with the rock-solid Plasma 5 for the 24.04 LTS release. The timing was just too close to feature freeze, and the last time we went with the shiny new stuff on an LTS release, it was a nightmare (KDE 4, anyone?). So without further ado, my weekly wrap-up.

Kubuntu:

Continuing efforts from last week Kubuntu: Week 3 wrap up, Contest! KDE snaps, Debian uploads. , it has been another wild and crazy week getting everything in before feature freeze yesterday. We will still be uploading the upcoming Plasma 5.27.11 as it is a bug fix release 🙂 and right now it is all about the finding and fixing bugs! Aside from many uploads my accomplishments this week are:

  • Kept a close eye on Excuses and fixed tests as needed. Seems riscv64 tests were turned off by default which broke several of our builds.
  • I did a complete revamp of our seed / kubuntu-desktop meta package! I have ensured we are following KDE packaging recommendations. Unfortunately, we cannot ship maliit-keyboard as we get hit by LP 2039721 which makes for an unpleasant experience.
  • I did some more work on our custom plasma-welcome which now just needs some branding, which leads to a friendly reminder the contest is still open! https://kubuntu.org/news/kubuntu-graphic-design-contest/
  • Bug triage! Oh so many bugs! From back when I worked on Kubuntu 10 years ago and plasma5 was new.. I am triaging and reducing this list to more recent bugs ( which is a much smaller list ). This reaffirms our decision to go with a rock solid stable Plasma5 for this LTS release.
  • I spent some time debugging kio-gdrive, which no longer works (it works in Jammy), so I am tracking down what is broken. I thought it was 2FA, but my non-2FA account doesn’t work either; it just repeatedly throws up the Google auth dialog. So this is still a WIP. It was suggested that I disable online accounts altogether, but I would prefer to give users the full experience.
  • Fixed our ISO builds. We are still not quite ready for testers as we have some Calamares fixes in the pipeline. Be on the lookout for a call for testers soon 🙂
  • Wrote a script to update our ( Kubuntu ) packageset to cover all the new packages accumulated over the years and remove packages that are defunct / removed.

What comes next? Testing, testing, testing! Bug fixes and of course our re-branding. My focus is on bug triage right now. I am also working on new projects in launchpad to easily track our bugs as right now they are all over the place and hard to track down.

Snaps:

I have started the MRs to fix our latest 23.08.5 snaps, I hope to get these finished in the next week or so. I have also been speaking to a prospective student with some GSOC ideas that I really like and will mentor, hopefully we are not too late.

Happy with my work? My continued employment depends on you! Please consider a donation http://kubuntu.org/donate

Thank you!

Planet DebianRavi Dwivedi: Fixing Mobile Data issue on Lineage OS

I have used Lineage OS on a couple of phones, but I noticed that the internet over mobile data was not working well on it; I am not sure why. This was the case on both the Xiaomi Mi A2 and the OnePlus 9 Pro. One day I met contrapunctus, who looked at their own phone settings, applied the same on mine, and it worked. So, I am going to write here what worked for me.

The trick is to add an access point.

Go to Settings -> Network Settings -> Your SIM settings -> Access Point Names -> Click on ‘+’ symbol.

In the Name field, you can write anything (I wrote test), and in the APN field write www, then save. Below is a screenshot demonstrating the settings you have to change.

APN settings screenshot. Notice the circled entries.

This APN will show in the list of APNs and you need to select this one.

After this, my mobile data started working well and I started getting speeds according to my data plan. This is what worked for me in Lineage OS. Hopefully, it was of help to you :D

I will meet you in the next post.

Worse Than FailureError'd: Once In A Lifetime

Not exactly once, I sincerely hope. That would be tragic.

"Apparently, today's leap day is causing a denial of service error being able to log into our Cemetery Management software due to some bad date calculations," writes Steve D. To be fair, he points out, it doesn't happen often.

ded

 

In all seriousness, unusual as that might be, I do have cemeteries on my mind this week. I recently discovered a web site that has photographs of hundreds of my relatives' graves, and a series of memorials for "Infant Spencer" and "Infant Strickland" and "Infant McHugh", along with another dozen named infants who died at age 0. Well, it's sobering. Taking a moment here in thanks to Doctors Pasteur, Salk, Jenner, et al. And now, back to our meagre ration of snark.

Regular Peter G. found a web site that thought Lorem Ipsum was too inaccessible to the modern audience, so they translate it to English. Peter muses "I've cropped out the site identity because it's a smallish company that provides good service and I don't want to embarrass them, but I'm kinda terrified at what a paleo fap pour-over is. Or maybe it's the name of an anarcho-punk fusion group?"

paleo


"Beat THAT, Kasparov!" crows Orion S.

nul


"Insert Disc 2 into your Raspberry Pi" quoth an anonymous poster. "I'm still looking for a way to acquire an official second installation disc for qt for Debian."

pi


Finally, Michael P. just couldn't completely ignore this page, could he? "I wanted to unsubscribe to this, but since my email is not placeholderEmail, I guess I should ignore the page." I'm sure he did a yeoman's job of trying.

notme


[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsThe Emissary

Author: Alastair Millar “It would be fitting,” the Sardaanian said, “if you took a new name now. A human name.” “But my name has always been T!kalma,” the woman replied. “Yes,” ze replied, “but that is one of our names. Your birth people are reaching out, as we predicted. Soon it will be time to […]

The post The Emissary appeared first on 365tomorrows.

Planet DebianReproducible Builds (diffoscope): diffoscope 259 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 259. This version includes the following changes:

[ Chris Lamb ]
* Don't error-out with a traceback if we encounter "struct.unpack"-related
  errors when parsing .pyc files. (Closes: #1064973)
* Fix compatibility with PyTest 8.0. (Closes: reproducible-builds/diffoscope#365)
* Don't try and compare rdb_expected_diff on non-GNU systems as %p formatting
  can vary. (Re: reproducible-builds/diffoscope#364)

You can find out more by visiting the project homepage.

,

Krebs on SecurityFulton County, Security Experts Call LockBit’s Bluff

The ransomware group LockBit told officials with Fulton County, Ga. they could expect to see their internal documents published online this morning unless the county paid a ransom demand. LockBit removed Fulton County’s listing from its victim shaming website this morning, claiming the county had paid. But county officials said they did not pay, nor did anyone make payment on their behalf. Security experts say LockBit was likely bluffing and probably lost most of the data when the gang’s servers were seized this month by U.S. and U.K. law enforcement.

The LockBit website included a countdown timer until the promised release of data stolen from Fulton County, Ga. LockBit would later move this deadline up to Feb. 29, 2024.

LockBit listed Fulton County as a victim on Feb. 13, saying that unless it was paid a ransom the group would publish files stolen in a breach at the county last month. That attack disrupted county phones, Internet access and even their court system. LockBit leaked a small number of the county’s files as a teaser, which appeared to include sensitive and sealed court records in current and past criminal trials.

On Feb. 16, Fulton County’s entry — along with a countdown timer until the data would be published — was removed from the LockBit website without explanation. The leader of LockBit told KrebsOnSecurity this was because Fulton County officials had engaged in last-minute negotiations with the group.

But on Feb. 19, investigators with the FBI and the U.K.’s National Crime Agency (NCA) took over LockBit’s online infrastructure, replacing the group’s homepage with a seizure notice and links to LockBit ransomware decryption tools.

In a press briefing on Feb. 20, Fulton County Commission Chairman Robb Pitts told reporters the county did not pay a ransom demand, noting that the board “could not in good conscience use Fulton County taxpayer funds to make a payment.”

Three days later, LockBit reemerged with new domains on the dark web, and with Fulton County listed among a half-dozen other victims whose data was about to be leaked if they refused to pay. As it does with all victims, LockBit assigned Fulton County a countdown timer, saying officials had until late in the evening on March 1 before their data would be published.

LockBit revised its deadline for Fulton County to Feb. 29.

LockBit soon moved up the deadline to the morning of Feb. 29. As Fulton County’s LockBit timer was counting down to zero this morning, its listing disappeared from LockBit’s site. LockBit’s leader and spokesperson, who goes by the handle “LockBitSupp,” told KrebsOnSecurity today that Fulton County’s data disappeared from their site because county officials paid a ransom.

“Fulton paid,” LockBitSupp said. When asked for evidence of payment, LockBitSupp replied, “The proof is that we deleted their data and did not publish it.”

But at a press conference today, Fulton County Chairman Robb Pitts said the county does not know why its data was removed from LockBit’s site.

“As I stand here at 4:08 p.m., we are not aware of any data being released today so far,” Pitts said. “That does not mean the threat is over. They could release whatever data they have at any time. We have no control over that. We have not paid any ransom. Nor has any ransom been paid on our behalf.”

Brett Callow, a threat analyst with the security firm Emsisoft, said LockBit likely lost all of the victim data it stole before the FBI/NCA seizure, and that it has been trying madly since then to save face within the cybercrime community.

“I think it was a case of them trying to convince their affiliates that they were still in good shape,” Callow said of LockBit’s recent activities. “I strongly suspect this will be the end of the LockBit brand.”

Others have come to a similar conclusion. The security firm RedSense posted an analysis to Twitter/X noting that, after the takedown, LockBit published several “new” victim profiles for companies it had listed weeks earlier on its victim shaming site. Those victim firms — a healthcare provider and major securities lending platform — also were unceremoniously removed from LockBit’s new shaming website, despite LockBit claiming their data would be leaked.

“We are 99% sure the rest of their ‘new victims’ are also fake claims (old data for new breaches),” RedSense posted. “So the best thing for them to do would be to delete all other entries from their blog and stop defrauding honest people.”

Callow said there certainly have been plenty of cases in the past where ransomware gangs exaggerated their plunder from a victim organization. But this time feels different, he said.

“It is a bit unusual,” Callow said. “This is about trying to still affiliates’ nerves, and saying, ‘All is well, we weren’t as badly compromised as law enforcement suggested.’ But I think you’d have to be a fool to work with an organization that has been so thoroughly hacked as LockBit has.”

Cryptogram NIST Cybersecurity Framework 2.0

NIST has released version 2.0 of the Cybersecurity Framework:

The CSF 2.0, which supports implementation of the National Cybersecurity Strategy, has an expanded scope that goes beyond protecting critical infrastructure, such as hospitals and power plants, to all organizations in any sector. It also has a new focus on governance, which encompasses how organizations make and carry out informed decisions on cybersecurity strategy. The CSF’s governance component emphasizes that cybersecurity is a major source of enterprise risk that senior leaders should consider alongside others such as finance and reputation.

[…]

The framework’s core is now organized around six key functions: Identify, Protect, Detect, Respond and Recover, along with CSF 2.0’s newly added Govern function. When considered together, these functions provide a comprehensive view of the life cycle for managing cybersecurity risk.

The updated framework anticipates that organizations will come to the CSF with varying needs and degrees of experience implementing cybersecurity tools. New adopters can learn from other users’ successes and select their topic of interest from a new set of implementation examples and quick-start guides designed for specific types of users, such as small businesses, enterprise risk managers, and organizations seeking to secure their supply chains.

This is a big deal. The CSF is widely used, and has been in need of an update. And NIST is exactly the sort of respected organization to do this correctly.

Some news articles.

Planet DebianRussell Coker: Links February 2024

In 2018 Charles Stross wrote an insightful blog post Dude You Broke the Future [1]. It covers AI in both fiction and fact and corporations (the real AIs) and the horrifying things they can do right now.

LongNow has an interesting article about the concept of the Magnum Opus [2]. As an aside I’ve been working on SE Linux for 22 years.

Cory Doctorow wrote an insightful article about the incentives for enshittification of the Internet and how economic issues and regulations shape that [3].

CCC has a lot of great talks, and the talk from the latest CCC about Operation Triangulation, an attack on Kaspersky iPhones, is particularly epic [4].

GoodCar is an online sales site for electric cars in Australia [5].

Ulrike wrote an insightful blog post about how the reliance on volunteer work in the FOSS community hurts diversity [6].

Cory Doctorow wrote an insightful article about The Internet’s Original Sin, which is misuse of copyright law [7]. He advocates for using copyright strictly for its intended purpose and creating other laws for privacy, labor rights, etc.

David Brin wrote an interesting article on neoteny and sexual selection in humans [8].

37C3 has an interesting lecture about software licensing for a circular economy which includes environmental savings from better code [9]. Now they track efficiency in KDE bug reports!

365 TomorrowsPrussian Blue

Author: David C. Nutt The newbie made his way through central supply. “Why can’t I have a Prussian Blue exosuit?” I rolled my eyes. “Because you can’t.” The kid slapped the counter, my counter. “Unacceptable. You dissin’ me because I’m a noob?” I smiled. “No. I am ‘dissin you’ because you’re an arrogant prick.” I […]

The post Prussian Blue appeared first on 365tomorrows.

Worse Than FailureCodeSOD: A Few Updates

Brian was working on landing a contract with a European news agency. Said agency had a large number of intranet applications of varying complexity, all built to support the news business.

Now, they understood that, as a news agency, they had no real internal corporate knowledge of good software development practices, so they did what came naturally: they hired a self-proclaimed "code guru" to build the system.

Said code guru was notoriously explosive. When users came to him with complaints like "your system lost all the research I've been gathering for the past three months!" the guru would shout about how users were doing it wrong, couldn't be trusted to handle the most basic tasks, and "wiping your ass isn't part of my job description."

With a stellar personality like that, what was his PHP code like?

$req000="SELECT idfiche FROM fiche WHERE idevent=".$_GET['id_evt'];
$rep000=$db4->query($req000);
$nb000=$rep000->numRows();
if ($nb000>0) {
	while ($row000=$rep000->fetchRow(DB_FETCHMODE_ASSOC)) {
		$req001="UPDATE fiche SET idevent=NULL WHERE idfiche=".$row000['idfiche'];
		$rep001=$db4->query($req001);
	}
}

It's common that the first line of a submission is bad. It's rare that the first 7 characters fill me with a sense of dread. $req000. Oh no. Oh noooo. We're talking about those kinds of variables.

We query using $req000 and store the result in $rep000, using $db4 to run the query. My skin is crawling so much from that that I feel like the obvious SQL injection vulnerability using $_GET to write the query is probably not getting enough of my attention. I really hate these variable names though.

We execute our gaping security vulnerability, and check how many rows we got (using $nb000 to store the result). While we have rows, we store each row in $row000 to populate $req001, an update query. We execute this query once for each row, storing the result in $rep001.

Now, the initial SELECT could return up to 4,000 rows. That's not a massive dataset, but as you can imagine, this whole application was running on a potato-powered server stuffed in the network closet. It was slow.

The fix was obvious: you could replace both the SELECT and the UPDATEs with a single query, UPDATE fiche SET idevent=NULL WHERE idevent=?, since that's all this code actually does.
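For the curious, here is a minimal sketch of that single parameterized query using PDO. The fiche table and idevent column come from the article's code; the in-memory SQLite database and sample rows are purely illustrative scaffolding so the snippet stands alone.

```php
<?php
// Illustrative sketch only: table/column names follow the article's code;
// the in-memory SQLite database is scaffolding so the snippet runs standalone.
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE fiche (idfiche INTEGER PRIMARY KEY, idevent INTEGER)');
$db->exec('INSERT INTO fiche (idevent) VALUES (7), (7), (9)');

$idEvt = '7'; // in the original, this came straight from $_GET['id_evt']

// One bound parameter replaces the string concatenation (closing the
// injection hole), and one UPDATE replaces the SELECT-plus-N-UPDATEs loop.
$stmt = $db->prepare('UPDATE fiche SET idevent = NULL WHERE idevent = ?');
$stmt->execute([$idEvt]);
echo $stmt->rowCount(); // number of rows cleared
```

The bound parameter means a hostile id_evt value is treated as data, never as SQL, and the database does the filtering in a single pass instead of round-tripping up to 4,000 rows through PHP.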

Fixing performance wasn't how Brian proved he was the right person for more contract work, though. Once Brian saw the SQL injection, he demonstrated to the boss how a malicious user could easily delete the entire database from the URL bar in their browser. The boss was sufficiently terrified by the prospect- the code guru was politely asked to leave, and Brian was told to please fix this quickly.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Cryptogram A Cyber Insurance Backstop

In the first week of January, the pharmaceutical giant Merck quietly settled its years-long lawsuit over whether or not its property and casualty insurers would cover a $700 million claim filed after the devastating NotPetya cyberattack in 2017. The malware ultimately infected more than 40,000 of Merck’s computers, which significantly disrupted the company’s drug and vaccine production. After Merck filed its $700 million claim, the pharmaceutical giant’s insurers argued that they were not required to cover the malware’s damage because the cyberattack was widely attributed to the Russian government and therefore was excluded from standard property and casualty insurance coverage as a “hostile or warlike act.”

At the heart of the lawsuit was a crucial question: Who should pay for massive, state-sponsored cyberattacks that cause billions of dollars’ worth of damage?

One possible solution, touted by former Department of Homeland Security Secretary Michael Chertoff on a recent podcast, would be for the federal government to step in and help pay for these sorts of attacks by providing a cyber insurance backstop. A cyber insurance backstop would provide a means for insurers to receive financial support from the federal government in the event that there was a catastrophic cyberattack that caused so much financial damage that the insurers could not afford to cover all of it.

In his discussion of a potential backstop, Chertoff specifically references the Terrorism Risk Insurance Act (TRIA) as a model. TRIA was passed in 2002 to provide financial assistance to the insurers who were reeling from covering the costs of the Sept. 11, 2001, terrorist attacks. It also created the Terrorism Risk Insurance Program (TRIP), a public-private system of compensation for some terrorism insurance claims. The 9/11 attacks cost insurers and reinsurers $47 billion. It was one of the most expensive insured events in history and prompted many insurers to stop offering terrorism coverage, while others raised the premiums for such policies significantly, making them prohibitively expensive for many businesses. The government passed TRIA to provide support for insurers in the event of another terrorist attack, so that they would be willing to offer terrorism coverage again at reasonable rates. President Biden’s 2023 National Cybersecurity Strategy tasked the Treasury and Homeland Security Departments with investigating possible ways of implementing something similar for large cyberattacks.

There is a growing (and unsurprising) consensus among insurers in favor of the creation and implementation of a federal cyber insurance backstop. Like terrorist attacks, catastrophic cyberattacks are difficult for insurers to predict or model because there is not very good historical data about them—and even if there were, it’s not clear that past patterns of cyberattacks will dictate future ones. What’s more, cyberattacks could cost insurers astronomic sums of money, especially if all of their policyholders were simultaneously affected by the same attack. However, despite this consensus and the fact that this idea of the government acting as the “insurer of last resort” was first floated more than a decade ago, actually developing a sound, thorough proposal for a backstop has proved to be much more challenging than many insurers and policymakers anticipated.

One major issue is determining a threshold for what types of cyberattacks should trigger a backstop. Specific characteristics of cyberattacks—such as who perpetrated the attack, the motive behind it, and the total damage it has caused—are often exceedingly difficult to determine. Therefore, even if policymakers could agree on what types of attacks they think the government should pay for based on these characteristics, they likely won’t be able to calculate which incursions actually qualify for assistance.

For instance, NotPetya is estimated to have caused more than $10 billion in damage worldwide, but the quantifiable amount of damage it actually did is unknown. The attack caused such a wide variety of disruptions in so many different industries, many of which likely went unreported since many companies had no incentive to publicize their security failings and were not required to do so. Observers do, however, have a pretty good idea who was behind the NotPetya attack because several governments, including the United States and the United Kingdom, issued coordinated statements blaming the Russian military. As for the motive behind NotPetya, the program was initially transmitted through Ukrainian accounting software, which suggests that it was intended to target Ukrainian critical infrastructure. But notably, this type of coordinated, consensus-based attribution to a specific government is relatively rare when it comes to cyberattacks. Future attacks are not likely to receive the same determination.

In the absence of a government backstop, the insurance industry has begun to carve out larger and larger exceptions to their standard cyber coverage. For example, in a pair of rulings against Merck’s insurers, judges in New Jersey ruled that the insurance exclusions for “hostile or warlike acts” (such as the one in Merck’s property policy that excluded coverage for “loss or damage caused by hostile or warlike action in time of peace or war … by any government or sovereign power”) were not sufficiently specific to encompass a cyberattack such as NotPetya that did not involve the use of traditional force.

Accordingly, insurers such as Lloyd’s have begun to change their policy language to explicitly exclude broad swaths of cyberattacks that are perpetrated by nation-states. In an August 2022 bulletin, Lloyd’s instructed its underwriters to exclude from all cyber insurance policies not just losses arising from war but also “losses arising from state backed cyber-attacks that (a) significantly impair the ability of a state to function or (b) that significantly impair the security capabilities of a state.” Other insurers, such as Chubb, have tried to avoid tricky questions about attribution by suggesting a government response-based exclusion for war that only applies if a government responds to a cyberattack by authorizing the use of force. Chubb has also introduced explicit definitions for cyberattacks that pose a “systemic risk” or impact multiple entities simultaneously. But most of this language has not yet been tested by insurers trying to deny claims. No one, including the companies buying the policies with these exclusions written into them, really knows exactly which types of cyberattacks they exclude. It’s not clear what types of cyberattacks courts will recognize as being state-sponsored, or posing systemic risks, or significantly impairing the ability of a state to function. And for the policyholders whose insurance exclusions feature this sort of language, it matters a great deal how that language in their exclusions will be parsed and understood by courts adjudicating claim disputes.

These types of recent exclusions leave a large hole in companies’ coverage for cyber risks, placing even more pressure on the government to help. One of the reasons Chertoff gives for why the backstop is important is to help clarify for organizations what cyber risk-related costs they are and are not responsible for. That clarity will require very specific definitions of what types of cyberattacks the government will and will not pay for. And as the insurers know, it can be quite difficult to anticipate what the next catastrophic cyberattack will look like or how to craft a policy that will enable the government to pay only for a narrow slice of cyberattacks in a varied and unpredictable threat landscape. Get this wrong, and the government will end up writing some very large checks.

And in comparison to insurers’ coverage of terrorist attacks, large-scale cyberattacks are much more common and affect far more organizations, which makes it a far more costly risk that no one wants to take on. Organizations don’t want to—that’s why they buy insurance. Insurance companies don’t want to—that’s why they look to the government for assistance. But, so far, the U.S. government doesn’t want to take on the risk, either.

It is safe to assume, however, that regardless of whether a formal backstop is established, the federal government would step in and help pay for a sufficiently catastrophic cyberattack. If the electric grid went down nationwide, for instance, the U.S. government would certainly help cover the resulting costs. It’s possible to imagine any number of catastrophic scenarios in which an ad hoc backstop would be implemented hastily to help address massive costs and catastrophic damage, but that’s not primarily what insurers and their policyholders are looking for. They want some reassurance and clarity up front about what types of incidents the government will help pay for. But to provide that kind of promise in advance, the government likely would have to pair it with some security requirements, such as implementing multifactor authentication, strong encryption, or intrusion detection systems. Otherwise, they create a moral hazard problem, where companies may decide they can invest less in security knowing that the government will bail them out if they are the victims of a really expensive attack.

The U.S. government has been looking into the issue for a while, though, even before the 2023 National Cybersecurity Strategy was released. In 2022, for instance, the Federal Insurance Office in the Treasury Department published a Request for Comment on a “Potential Federal Insurance Response to Catastrophic Cyber Incidents.” The responses recommended a variety of different possible backstop models, ranging from expanding TRIP to encompass certain catastrophic cyber incidents, to creating a new structure similar to the National Flood Insurance Program that helps underwrite flood insurance, to trying a public-private partnership backstop model similar to the United Kingdom’s Pool Re program.

Many of these responses rightly noted that while it might eventually make sense to have some federal backstop, implementing such a program immediately might be premature. University of Edinburgh Professor Daniel Woods, for example, made a compelling case for why it was too soon to institute a backstop in Lawfare last year. Woods wrote,

One might argue similarly that a cyber insurance backstop would subsidize those companies whose security posture creates the potential for cyber catastrophe, such as the NotPetya attack that caused $10 billion in damage. Infection in this instance could have been prevented by basic cyber hygiene. Why should companies that do not employ basic cyber hygiene be subsidized by industry peers? The argument is even less clear for a taxpayer-funded subsidy.

The answer is to ensure that a backstop applies only to companies that follow basic cyber hygiene guidelines, or to insurers who require those hygiene measures of their policyholders. These are the types of controls many are familiar with: complicated passwords, app-based two-factor authentication, antivirus programs, and warning labels on emails. But this is easier said than done. To a surprising extent, it is difficult to know which security controls really work to improve companies’ cybersecurity. Scholars know what they think works: strong encryption, multifactor authentication, regular software updates, and automated backups. But there is not anywhere near as much empirical evidence as there ought to be about how effective these measures are in different implementations, or how much they reduce a company’s exposure to cyber risk.

This is largely due to companies’ reluctance to share detailed, quantitative information about cybersecurity incidents because any such information may be used to criticize their security posture or, even worse, as evidence for a government investigation or class-action lawsuit. And when insurers and regulators alike try to gather that data, they often run into legal roadblocks because these investigations are often run by lawyers who claim that the results are shielded by attorney-client privilege or work product doctrine. In some cases, companies don’t write down their findings at all to avoid the possibility of its being used against them in court. Without this data, it’s difficult for insurers to be confident that what they’re requiring of their policyholders will really work to improve those policyholders’ security and decrease their claims for cybersecurity-related incidents under their policies. Similarly, it’s hard for the federal government to be confident that they can impose requirements for a backstop that will actually raise the level of cybersecurity hygiene nationwide.

The key to managing cyber risks—both large and small—and designing a cyber backstop is determining what security practices can effectively mitigate the impact of these attacks. If there were data showing which controls work, insurers could then require that their policyholders use them, in the same way they require policyholders to install smoke detectors or burglar alarms. Similarly, if the government had better data about which security tools actually work, it could establish a backstop that applied only to victims who have used those tools as safeguards. The goal of this effort, of course, is to improve organizations’ overall cybersecurity in addition to providing financial assistance.

There are a number of ways this data could be collected. Insurers could do it through their claims databases and then aggregate that data across carriers to policymakers. They did this for car safety measures starting in the 1950s, when a group of insurance associations founded the Insurance Institute for Highway Safety. The government could use its increasing reporting authorities, for instance under the Cyber Incident Reporting for Critical Infrastructure Act of 2022, to require that companies report data about cybersecurity incidents, including which countermeasures were in place and the root causes of the incidents. Or the government could establish an entirely new entity in the form of a Bureau for Cyber Statistics that would be devoted to collecting and analyzing this type of data.

Scholars and policymakers can’t design a cyber backstop until this data is collected and studied to determine what works best for cybersecurity. More broadly, organizations’ cybersecurity cannot improve until more is known about the threat landscape and the most effective tools for managing cyber risk.

If the cybersecurity community doesn’t pause to gather that data first, then it will never be able to meaningfully strengthen companies’ security postures against large-scale cyberattacks, and insurers and government officials will just keep passing the buck back and forth, while the victims are left to pay for those attacks themselves.

This essay was written with Josephine Wolff, and was originally published in Lawfare.

,

Planet DebianDirk Eddelbuettel: RcppEigen 0.3.4.0.0 on CRAN: New Upstream, At Last

We are thrilled to share that RcppEigen has now upgraded to Eigen release 3.4.0! The new release 0.3.4.0.0 arrived on CRAN earlier today, and has been shipped to Debian as well. Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.

This update has been in the works for a full two and a half years! It all started with PR #102 by Yixuan bringing the package-local changes for R integration forward to upstream release 3.4.0. We opened issue #103 to steer possible changes from reverse-dependency checking through. Lo and behold, this just … stalled because a few substantial changes were needed and not coming. But after a long wait, and like a bolt out of a perfectly blue sky, Andrew revived it in January with a reverse depends run of his own along with a set of PRs. That was the push that was needed, and I steered it along with a number of reverse dependency checks, and occasional emails to maintainers. We managed to bring it down to only three packages having a hiccup, and all three had received PRs thanks to Andrew – and even merged them. So the plan became to release today following a final fourteen day window. And CRAN was convinced by our arguments that we followed due process. So there it is! Big big thanks to all who helped it along, especially Yixuan and Andrew but also Mikael who updated another patch set he had prepared for the previous release series.

The complete NEWS file entry follows.

Changes in RcppEigen version 0.3.4.0.0 (2024-02-28)

  • The Eigen version has been upgraded to release 3.4.0 (Yixuan)

  • Extensive reverse-dependency checks ensure only three out of over 400 packages at CRAN are affected; PRs and patches helped other packages

  • The long-running branch also contains substantial contributions from Mikael Jagan (for the lme4 interface) and Andrew Johnson (revdep PRs)

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Krebs on SecurityCalendar Meeting Links Used to Spread Mac Malware

Malicious hackers are targeting people in the cryptocurrency space in attacks that start with a link added to the target’s calendar at Calendly, a popular application for scheduling appointments and meetings. The attackers impersonate established cryptocurrency investors and ask to schedule a video conference call. But clicking the meeting link provided by the scammers prompts the user to run a script that quietly installs malware on macOS systems.

KrebsOnSecurity recently heard from a reader who works at a startup that is seeking investment for building a new blockchain platform for the Web. The reader spoke on condition that their name not be used in this story, so for the sake of simplicity we’ll call him Doug.

Being in the cryptocurrency scene, Doug is also active on the instant messenger platform Telegram. Earlier this month, Doug was approached by someone on Telegram whose profile name, image and description said they were Ian Lee, from Signum Capital, a well-established investment firm based in Singapore. The profile also linked to Mr. Lee’s Twitter/X account, which features the same profile image.

The investor expressed interest in financially supporting Doug’s startup, and asked if Doug could find time for a video call to discuss investment prospects. Sure, Doug said, here’s my Calendly profile, book a time and we’ll do it then.

When the day and time of the scheduled meeting with Mr. Lee arrived, Doug clicked the meeting link in his calendar but nothing happened. Doug then messaged the Mr. Lee account on Telegram, who said there was some kind of technology issue with the video platform, and that their IT people suggested using a different meeting link.

Doug clicked the new link, but instead of opening up a videoconference app, a message appeared on his Mac saying the video service was experiencing technical difficulties.

“Some of our users are facing issues with our service,” the message read. “We are actively working on fixing these problems. Please refer to this script as a temporary solution.”

Doug said he ran the script, but nothing appeared to happen after that, and the videoconference application still wouldn’t start. Mr. Lee apologized for the inconvenience and said they would have to reschedule their meeting, but he never responded to any of Doug’s follow-up messages.

It didn’t dawn on Doug until days later that the missed meeting with Mr. Lee might have been a malware attack. Going back to his Telegram client to revisit the conversation, Doug discovered his potential investor had deleted the meeting link and other bits of conversation from their shared chat history.

In a post to its Twitter/X account last month, Signum Capital warned that a fake profile pretending to be their employee Mr. Lee was trying to scam people on Telegram.

The file that Doug ran is a simple Apple Script (file extension “.scpt”) that downloads and executes a malicious trojan made to run on macOS systems. Unfortunately for us, Doug freaked out after deciding he’d been tricked — backing up his important documents, changing his passwords, and then reinstalling macOS on his computer. While this is a perfectly sane response, it means we don’t have the actual malware that was pushed to his Mac by the script.

But Doug does still have a copy of the malicious script that was downloaded from clicking the meeting link (the online host serving that link is now offline). A search in Google for a string of text from that script turns up a December 2023 blog post from cryptocurrency security firm SlowMist about phishing attacks on Telegram from North Korean state-sponsored hackers.

“When the project team clicks the link, they encounter a region access restriction,” SlowMist wrote. “At this point, the North Korean hackers coax the team into downloading and running a ‘location-modifying’ malicious script. Once the project team complies, their computer comes under the control of the hackers, leading to the theft of funds.”

Image: SlowMist.

SlowMist says the North Korean phishing scams used the “Add Custom Link” feature of the Calendly meeting scheduling system on event pages to insert malicious links and initiate phishing attacks.

“Since Calendly integrates well with the daily work routines of most project teams, these malicious links do not easily raise suspicion,” the blog post explains. “Consequently, the project teams may inadvertently click on these malicious links, download, and execute malicious code.”

SlowMist said the malware downloaded by the malicious link in their case comes from a North Korean hacking group dubbed “BlueNoroff,” which Kaspersky Labs says is a subgroup of the Lazarus hacking group.

“A financially motivated threat actor closely connected with Lazarus that targets banks, casinos, fin-tech companies, POST software and cryptocurrency businesses, and ATMs,” Kaspersky wrote of BlueNoroff in Dec. 2023.

The North Korean regime is known to use stolen cryptocurrencies to fund its military and other state projects. A recent report from Recorded Future finds the Lazarus Group has stolen approximately $3 billion in cryptocurrency over the past six years.

While there is still far more malware out there today targeting Microsoft Windows PCs, the prevalence of information-stealing trojans aimed at macOS users is growing at a steady clip. macOS computers include XProtect, Apple’s built-in antivirus technology. But experts say attackers are constantly changing the appearance and behavior of their malware to evade XProtect.

“Recent updates to macOS’s XProtect signature database indicate that Apple are aware of the problem, but early 2024 has already seen a number of stealer families evade known signatures,” security firm SentinelOne wrote in January.

According to Chris Ueland from the threat hunting platform Hunt.io, the Internet address of the fake meeting website Doug was tricked into visiting (104.168.163[.]149) hosts or very recently hosted about 75 different domain names, many of which invoke words associated with videoconferencing or cryptocurrency. Those domains indicate this North Korean hacking group is hiding behind a number of phony crypto firms, like the six-month-old website for Cryptowave Capital (cryptowave[.]capital).

In a statement shared with KrebsOnSecurity, Calendly said it was aware of these types of social engineering attacks by cryptocurrency hackers.

“To help prevent these kinds of attacks, our security team and partners have implemented a service to automatically detect fraud and impersonations that could lead to social engineering,” the company said. “We are also actively scanning content for all our customers to catch these types of malicious links and to prevent hackers earlier on. Additionally, we intend to add an interstitial page warning users before they’re redirected away from Calendly to other websites. Along with the steps we’ve taken, we recommend users stay vigilant by keeping their software secure with running the latest updates and verifying suspicious links through tools like VirusTotal to alert them of possible malware. We are continuously strengthening the cybersecurity of our platform to protect our customers.”

The increasing frequency of new Mac malware is a good reminder that Mac users should not depend on security software and tools to flag malicious files, which are frequently bundled with or disguised as legitimate software.

As KrebsOnSecurity has advised Windows users for years, a good rule of safety to live by is this: If you didn’t go looking for it, don’t install it. Following this mantra heads off a great deal of malware attacks, regardless of the platform used. When you do decide to install a piece of software, make sure you are downloading it from the original source, and then keep it updated with any new security fixes.

On that last front, I’ve found it’s a good idea not to wait until the last minute to configure my system before joining a scheduled videoconference call. Even if the call uses software that is already on my computer, it is often the case that software updates are required before the program can be used, and I’m one of those weird people who likes to review any changes to the software maker’s privacy policies or user agreements before choosing to install updates.

Most of all, verify new contacts from strangers before accepting anything from them. In this case, had Doug simply messaged Mr. Lee’s real account on Twitter/X or contacted Signum Capital directly, he would have discovered that the real Mr. Lee never asked for a meeting.

If you’re approached in a similar scheme, the response from the would-be victim documented in the SlowMist blog post is probably the best.

Image: SlowMist.

Update: Added comment from Calendly.

Planet DebianDaniel Lange: Opencollective shutting down

Update 28.02.2024 19:45 CET: There is now a blog entry at https://blog.opencollective.com/open-collective-official-statement-ocf-dissolution/ trying to discern the legal entities in the Open Collective ecosystem and recommending potential ways forward.


Gee, there is nothing on their blog yet, but I just [28.02.2024 00:07 CET] received this email from Mike Strode, Program Officer at the Open Collective Foundation:

Dear Daniel Lange,

It is with a heavy heart that I'm writing today to inform you that the Board of Directors of the Open Collective Foundation (OCF) has made the difficult decision to dissolve OCF, effective December 31, 2024.

We are proud of the work we have been able to do together. We have been honored to build community with you and the hundreds of other collectives hosted at the Open Collective Foundation.

What you need to know:

We are beginning a staged dissolution process that will allow our over 600 collectives the time to close or transition their work. Dissolving OCF will take many months, and involves settling all liabilities while spending down all funds in a legally compliant manner.

Our priority is to support our collectives in navigating this change. We want to provide collectives the longest possible runway to wind down or transition their operations while we focus on the many legal and financial tasks associated with dissolving a nonprofit.

March 15 is the last day to accept donations. You will have until September 30 to work with us to develop and implement a plan to spend down the money in your fund. Key dates are included at the bottom of this email.

We know this is going to be difficult, and we will do everything we can to ease the transition for you.

How we will support collectives:

It remains our fiduciary responsibility to safeguard each collective's charitable assets and ensure funds are used solely for specified charitable purposes.

We will be providing assistance and support to you, whether you choose to spend out and close down your collective or continue your work through another 501(c)(3) organization or fiscal sponsor.

Unfortunately, we had to say goodbye to several of our colleagues today as we pare down our core staff to reduce costs. I will be staying on staff to support collectives through this transition, along with Wayne Kleppe, our Finance Administrator.

What led to this decision:

From day one, OCF was committed to experimentation and innovation. We were dedicated to finding new ways to open up the nonprofit space, making it easier for people to raise and access funding so they can do good in their communities.

OCF was created by Open Collective Inc. (OCI), a company formed in 2015 with the goal of "enabling groups to quickly set up a collective, raise funds and manage them transparently." Soon after being founded by OCI, OCF went through a period of rapid growth. We responded to increased demand arising from the COVID-19 pandemic without taking the time to establish the appropriate systems and infrastructure to sustain that growth.

Unfortunately, over the past year, we have learned that Open Collective Foundation's business model is not sustainable with the number of complex services we have offered and the fees we pay to the Open Collective Inc. tech platform.

In late 2023, we made the decision to pause accepting new collectives in order to create space for us to address the issues. Unfortunately, it became clear that it would not be financially feasible to make the necessary corrections, and we determined that OCF is not viable.

What's next:

We know this news will raise questions for many of our collectives. We will be making space for questions and reactions in the coming weeks.

In the meantime, we have developed this FAQ which we will keep updated as more questions come in.

What you need to do next:

  • Review the FAQ
  • Discuss your options within your collective. Your options are:
    • spend down and close out your collective
    • spend down and transfer your collective to another fiscal sponsor, or
    • transfer your collective and funds to another charitable organization.
  • Reply-all to this email with any questions, requests, or to set up a time to talk. Please make sure generalinquiries@opencollective.org is copied on your email.

Dates to know:

  • Last day to accept funds/receive donations: March 15, 2024
  • Last day collectives can have employees: June 30, 2024
  • Last day to spend or transfer funds: September 30, 2024

In Care & Accompaniment,
Mike Strode
Program Officer
Open Collective Foundation

Our mailing address has changed! We are now located at 440 N. Barranca Avenue #3717, Covina, CA 91723, USA

365 TomorrowsDouble Shot Salvation

Author: Zayan Guedim Once caught by Sheriff Jeb, criminals faced a gruesome demise. Grave offenses or petty misdemeanors, all the same, he would drive them to the abandoned silver mine. Then alone he would return, with bloody clothes and a blanched face. A judge, jury, and executioner all in one, Jeb’s reputation spread far and […]

The post Double Shot Salvation appeared first on 365tomorrows.

Worse Than FailureCodeSOD: You Need an Alert

Gabe enjoys it when clients request that he do updates on old software. For Gabe, it's exciting: you never know what you'll discover.

Public Sub AspJavaMessage(ByVal Message As String)
  System.Web.HttpContext.Current.Response.Write("<SCRIPT LANGUAGE=""JavaScript"">" & vbCrLf)
  System.Web.HttpContext.Current.Response.Write("alert(""" & Message & """)" & vbCrLf)
  System.Web.HttpContext.Current.Response.Write("</SCRIPT>")
End Sub

This is server-side ASP .Net code.

Let's start with the function name: AspJavaMessage. We already know we're using ASP, or at least I hope we are. We aren't using Java, but JavaScript. I'm not confident the developer behind this is entirely clear on the difference.

Then we do a Response.Write to output some JavaScript, but we need to talk about the Response object a bit. In ASP .Net, you mostly receive your HttpResponse as part of the event that triggered your response. The only reason you'd want to access the HttpResponse through this long System.Web.HttpContext.Current.Response accessor is because you are in a lower-level module which, for some reason, hasn't been passed an HTTP response.

That's a long-winded way of saying, "This is a code smell, and this function likely exists in a layer that shouldn't be tampering with the HTTP response."

Then, of course, we have the ALL CAPS HTML tag, followed by a JavaScript alert() call, aka, the worst way to pop up notifications in a web page.

Ugly, awful, and hinting at far worse choices in the overall application architecture. Gabe certainly unearthed a… delightful treat.
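For contrast, here's a minimal sketch of a safer shape for the same helper (in C#, with the hypothetical name ShowAlert; the original is VB.NET): pass the response in explicitly rather than reaching through HttpContext.Current, and encode the message so quotes in it can't break out of the JavaScript string literal — a script-injection hole the original leaves wide open.

```csharp
using System.Web;

public static class PageAlerts
{
    // Sketch only: the caller supplies the HttpResponse it already owns,
    // instead of this helper grabbing it from ambient static state.
    public static void ShowAlert(HttpResponse response, string message)
    {
        // JavaScriptStringEncode escapes quotes, backslashes and newlines,
        // so the message can't terminate the string literal early.
        var encoded = HttpUtility.JavaScriptStringEncode(message);
        response.Write($"<script>alert(\"{encoded}\");</script>");
    }
}
```

Even this is a last resort — in a real application you'd surface the message through the page model, not by writing raw script tags into the response stream.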

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

Worse Than FailureCodeSOD: A Split Purpose

Let's say you had input in the form of field=value, and you wanted to pick that "value" part off. In C#, you'd likely just use String.Split and call it a day. But you're not RK's co-worker.

public string FilterArg(string value)
{
    bool blAction;
    if (value.Contains('='))
        blAction = false;
    else
        blAction = true;

    string tmpValue = string.Empty;

    foreach (char t in value)
    {
        if (t == '=')
        {
            blAction = true;
        }
        else if (t != ' ' && blAction == true)
        {
            tmpValue += t;
        }
    }
    return tmpValue;
}

If the input contains an =, we set blAction to false. Then we iterate across our input, one character at a time. If the character we're on is an =, we set blAction to true. Otherwise, if the character we're on is not a space, and blAction is true, we append the current character to our output.

I opened by suggesting we were going to look at a home-grown split function, because at first glance, that's what this code looks like.

It does the job, but the mix of flags and loops and my inability to read sometimes makes it extra confusing to follow.
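For the record, a sketch of the one-liner approach, assuming the intent is "everything after the first '=', spaces removed" — which matches the loop above for the common field=value case (the loop additionally strips any later '=' characters, so inputs with multiple equals signs differ slightly):

```csharp
public static string FilterArg(string value)
{
    // Split into at most two parts, so only the first '=' acts as the delimiter.
    var parts = value.Split(new[] { '=' }, 2);
    var tail = parts.Length > 1 ? parts[1] : parts[0];
    // The original also drops every space character, so do the same here.
    return tail.Replace(" ", "");
}
```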

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsCausal Fridays

Author: Majoki As he entered the lab, no one was directly staring at Etherid, but he felt all eyes on him. No doubt because of the neon orange Hawaiian shirt and optic green shorts he was sporting. As a new hire in his first week, he’d gotten an email yesterday from HR with the subject […]

The post Causal Fridays appeared first on 365tomorrows.

,

Cryptogram China Surveillance Company Hacked

Last week, someone posted something like 570 files, images and chat logs from a Chinese company called I-Soon. I-Soon sells hacking and espionage services to Chinese national and local government.

Lots of details in the news articles.

These aren’t details about the tools or techniques, more the inner workings of the company. And they seem to primarily be hacking regionally.

Worse Than FailureCodeSOD: Climbing Optimization Mountain

"Personal Mountains" was hearing dire rumors about one of the other developers; rumors about both the quality of their work and their future prospects at the company. Fortunately for Personal Mountains, they never actually had to work with this person.

Unfortunately, that person was fired and 30,000 lines of code were now Personal Mountains' responsibility.

Fortunately, it's not really 30,000 lines of code.

list.DeleteColumn(61);
list.DeleteColumn(60);
list.DeleteColumn(59);
list.DeleteColumn(58);
list.DeleteColumn(57);
list.DeleteColumn(56);
list.DeleteColumn(55);
list.DeleteColumn(54);
list.DeleteColumn(53);
list.DeleteColumn(52);
list.DeleteColumn(51);
list.DeleteColumn(50);
list.DeleteColumn(49);
list.DeleteColumn(48);
list.DeleteColumn(47);
list.DeleteColumn(46);
list.DeleteColumn(45);
list.DeleteColumn(44);
list.DeleteColumn(43);
list.DeleteColumn(42);
list.DeleteColumn(41);
list.DeleteColumn(40);
list.DeleteColumn(39);
list.DeleteColumn(38);
list.DeleteColumn(37);
list.DeleteColumn(36);
list.DeleteColumn(35);
list.DeleteColumn(34);
list.DeleteColumn(33);
list.DeleteColumn(32);
list.DeleteColumn(31);
list.DeleteColumn(30);
list.DeleteColumn(29);
list.DeleteColumn(28);
list.DeleteColumn(27);
list.DeleteColumn(26);
list.DeleteColumn(25);
list.DeleteColumn(24);
list.DeleteColumn(23);
list.DeleteColumn(22);
list.DeleteColumn(21);
list.DeleteColumn(20);
list.DeleteColumn(19);
list.DeleteColumn(18);
list.DeleteColumn(17);
list.DeleteColumn(16);
list.DeleteColumn(15);
list.DeleteColumn(14);
list.DeleteColumn(13);
list.DeleteColumn(12);
list.DeleteColumn(11);
list.DeleteColumn(10);
list.DeleteColumn(9);
list.DeleteColumn(8);
list.DeleteColumn(7);
list.DeleteColumn(6);
list.DeleteColumn(5);
list.DeleteColumn(4);
list.DeleteColumn(3);
list.DeleteColumn(2);
list.DeleteColumn(1);
list.DeleteColumn(0);

Comments in the code indicated that this was done for "extreme optimization", which leads me to believe someone heard about loop unrolling and decided to just do that everywhere there was a loop, without regard to whether or not it actually helped performance in any specific case, whether the loop was run frequently enough to justify the optimization, or understanding if the compiler might be more capable at deciding when and where to unroll a loop.

Within a few weeks, Personal Mountains was able to shrink the program from 30,000 lines of code to 10,000, with no measurable impact on its behavior or performance.
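The entire wall of calls above collapses to a few lines, assuming the same DeleteColumn API (the list type here is whatever the original code used):

```csharp
// Delete from the highest index down, so the indices of columns
// not yet deleted never shift under us.
for (int col = 61; col >= 0; col--)
{
    list.DeleteColumn(col);
}
```

If the compiler decides unrolling this is actually profitable, it can do that itself — without anyone maintaining 62 nearly identical lines.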

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsWhere We Live

Author: Julian Miles, Staff Writer Yesterday I climbed Everest with Hillary. Tomorrow I’m travelling as a passenger on the 1888 Orient Express. Today? I’ve been asked to make a presentation to you all about what we’re doing here at the Human Existence Archive. My name is Preston Hardy, and I used to be a laboratory […]

The post Where We Live appeared first on 365tomorrows.

Planet DebianSergio Durigan Junior: Planning to orphan Pagure on Debian

I have been thinking more and more about orphaning the Pagure Debian package. I don’t have the time to maintain it properly anymore, and I have also lost interest in doing so.

What’s Pagure

Pagure is a git forge written entirely in Python using pygit2. It was almost entirely developed by one person, Pierre-Yves Chibon. He is (was?) a Red Hat employee and started working on this new git forge almost 10 years ago because the company wanted to develop something in-house for Fedora. The software is amazing and I admire Pierre-Yves quite a lot for what he was able to achieve basically alone. Unfortunately, a few years ago Fedora decided to move to Gitlab and the Pagure development pretty much stalled.

Pagure in Debian

Packaging Pagure for Debian was hard, but it was also very fun. I learned quite a bit about many things (packaging and non-packaging related), interacted with the upstream community, decided to dogfood my own work and run my Pagure instance for a while, and tried to get newcomers to help me with the package (without much success, unfortunately).

I remember that when I had started to package Pagure, Debian was also moving away from Alioth and discussing options. For a brief moment Pagure was a contender, but in the end the community decided to self-host Gitlab, and that’s why we have Salsa now. I feel like I could have tipped the scales in favour of Pagure had I finished packaging it for Debian before the decision was made, but then again, to the best of my knowledge Salsa doesn’t use our Gitlab package anyway…

Are you interested in maintaining it?

If you’re interested in maintaining the package, please get in touch with me. I will happily pass the torch to someone else who is still using the software and wants to keep it healthy in Debian. If there is nobody interested, then I will just orphan it.

Krebs on SecurityFBI’s LockBit Takedown Postponed a Ticking Time Bomb in Fulton County, Ga.

The FBI’s takedown of the LockBit ransomware group last week came as LockBit was preparing to release sensitive data stolen from government computer systems in Fulton County, Ga. But LockBit is now regrouping, and the gang says it will publish the stolen Fulton County data on March 2 unless paid a ransom. LockBit claims the cache includes documents tied to the county’s ongoing criminal prosecution of former President Trump, but court watchers say teaser documents published by the crime gang suggest a total leak of the Fulton County data could put lives at risk and jeopardize a number of other criminal trials.

A new LockBit website listing a countdown timer until the promised release of data stolen from Fulton County, Ga.

In early February, Fulton County leaders acknowledged they were responding to an intrusion that caused disruptions for its phone, email and billing systems, as well as a range of county services, including court systems.

On Feb. 13, the LockBit ransomware group posted on its victim shaming blog a new entry for Fulton County, featuring a countdown timer saying the group would publish the data on Feb. 16 unless county leaders agreed to negotiate a ransom.

“We will demonstrate how local structures negligently handled information protection,” LockBit warned. “We will reveal lists of individuals responsible for confidentiality. Documents marked as confidential will be made publicly available. We will show documents related to access to the state citizens’ personal data. We aim to give maximum publicity to this situation; the documents will be of interest to many. Conscientious residents will bring order.”

Yet on Feb. 16, the entry for Fulton County was removed from LockBit’s site without explanation. This usually only happens after the victim in question agrees to pay a ransom demand and/or enters into negotiations with their extortionists.

However, Fulton County Commission Chairman Robb Pitts said the board decided it “could not in good conscience use Fulton County taxpayer funds to make a payment.”

“We did not pay nor did anyone pay on our behalf,” Pitts said at an incident briefing on Feb. 20.

Just hours before that press conference, LockBit’s various websites were seized by the FBI and the U.K.’s National Crime Agency (NCA), which replaced the ransomware group’s homepage with a seizure notice and used the existing design of LockBit’s victim shaming blog to publish press releases about the law enforcement action.

The feds used the existing design on LockBit’s victim shaming website to feature press releases and free decryption tools.

Dubbed “Operation Cronos,” the effort involved the seizure of nearly three-dozen servers; the arrest of two alleged LockBit members; the release of a free LockBit decryption tool; and the freezing of more than 200 cryptocurrency accounts thought to be tied to the gang’s activities. The government says LockBit has claimed more than 2,000 victims worldwide and extorted over $120 million in payments.

UNFOLDING DISASTER

In a lengthy, rambling letter published on Feb. 24 and addressed to the FBI, the ransomware group’s leader LockBitSupp announced that their victim shaming websites were once again operational on the dark web, with fresh countdown timers for Fulton County and a half-dozen other recent victims.

“The FBI decided to hack now for one reason only, because they didn’t want to leak information fultoncountyga.gov,” LockBitSupp wrote. “The stolen documents contain a lot of interesting things and Donald Trump’s court cases that could affect the upcoming US election.”

A screen shot released by LockBit showing various Fulton County file shares that were exposed.

LockBit has already released roughly two dozen files allegedly stolen from Fulton County government systems, although none of them involve Mr. Trump’s criminal trial. But the documents do appear to include court records that are sealed and shielded from public viewing.

George Chidi writes The Atlanta Objective, a Substack publication on crime in Georgia’s capital city. Chidi says the leaked data so far includes a sealed record related to a child abuse case, and a sealed motion in the murder trial of Juwuan Gaston demanding the state turn over confidential informant identities.

Chidi cites reports from a Fulton County employee who said the confidential material includes the identities of jurors serving on the trial of the rapper Jeffery “Young Thug” Williams, who is charged along with five other defendants in a racketeering and gang conspiracy.

“The screenshots suggest that hackers will be able to give any attorney defending a criminal case in the county a starting place to argue that evidence has been tainted or witnesses intimidated, and that the release of confidential information has compromised cases,” Chidi wrote. “Judge Ural Glanville has, I am told by staff, been working feverishly behind the scenes over the last two weeks to manage the unfolding disaster.”

LockBitSupp also denied assertions made by the U.K.’s NCA that LockBit did not delete stolen data as promised when victims agreed to pay a ransom. The accusation is an explosive one because nobody will pay a ransom if they don’t believe the ransomware group will hold up its end of the bargain.

The ransomware group leader also confirmed information first reported here last week, that federal investigators managed to hack LockBit by exploiting a known vulnerability in PHP, a scripting language that is widely used in Web development.

“Due to my personal negligence and irresponsibility I relaxed and did not update PHP in time,” LockBitSupp wrote. “As a result of which access was gained to the two main servers where this version of PHP was installed.”

LockBitSupp’s FBI letter said the group kept copies of its stolen victim data on servers that did not use PHP, and that consequently it was able to retain copies of files stolen from victims. The letter also listed links to multiple new instances of LockBit dark net websites, including the leak page listing Fulton County’s new countdown timer.

LockBit’s new data leak site promises to release stolen Fulton County data on March 2, 2024, unless paid a ransom demand.

“Even after the FBI hack, the stolen data will be published on the blog, there is no chance of destroying the stolen data without payment,” LockBitSupp wrote. “All FBI actions are aimed at destroying the reputation of my affiliate program, my demoralization, they want me to leave and quit my job, they want to scare me because they can not find and eliminate me, I can not be stopped, you can not even hope, as long as I am alive I will continue to do pentest with postpaid.”

DOX DODGING

In January 2024, LockBitSupp told XSS forum members he was disappointed the FBI hadn’t offered a reward for his doxing and/or arrest, and that in response he was placing a bounty on his own head — offering $10 million to anyone who could discover his real name.

After the NCA and FBI seized LockBit’s site, the group’s homepage was retrofitted with a blog entry titled, “Who is LockBitSupp? The $10M question.” The teaser made use of LockBit’s own countdown timer, and suggested the real identity of LockBitSupp would soon be revealed.

However, after the countdown timer expired the page was replaced with a taunting message from the feds, but it included no new information about LockBitSupp’s identity.

On Feb. 21, the U.S. Department of State announced rewards totaling up to $15 million for information leading to the arrest and/or conviction of anyone participating in LockBit ransomware attacks. The State Department said $10 million of that is for information on LockBit’s leaders, and up to $5 million is offered for information on affiliates.

In an interview with the malware-focused Twitter/X account Vx-Underground, LockBit staff asserted that authorities had arrested a couple of small-time players in their operation, and that investigators still do not know the real-life identities of the core LockBit members, or that of their leader.

“They assert the FBI / NCA UK / EUROPOL do not know their information,” Vx-Underground wrote. “They state they are willing to double the bounty of $10,000,000. They state they will place a $20,000,000 bounty of their own head if anyone can dox them.”

TROUBLE ON THE HOMEFRONT?

In the weeks leading up to the FBI/NCA takedown, LockBitSupp became embroiled in a number of high-profile personal and business disputes on the Russian cybercrime forums.

Earlier this year, someone used LockBit ransomware to infect the networks of AN-Security, a venerated 30-year-old security and technology company based in St. Petersburg, Russia. This violated the golden rule for cybercriminals based in Russia and the former Soviet nations that make up the Commonwealth of Independent States, which is that attacking your own citizens in those countries is the surest way to get arrested and prosecuted by local authorities.

LockBitSupp later claimed the attacker had used a publicly leaked, older version of LockBit to compromise systems at AN-Security, and said the attack was an attempt to smear their reputation by a rival ransomware group known as “Clop.” But the incident no doubt prompted closer inspection of LockBitSupp’s activities by Russian authorities.

Then in early February, the administrator of the Russian-language cybercrime forum XSS said LockBitSupp had threatened to have him killed after the ransomware group leader was banned by the community. LockBitSupp was excommunicated from XSS after he refused to pay an arbitration amount ordered by the forum administrator. That dispute related to a complaint from another forum member who said LockBitSupp recently stiffed him on his promised share of an unusually large ransomware payout.

A post by the XSS administrator saying LockBitSupp wanted him dead.

INTERVIEW WITH LOCKBITSUPP

KrebsOnSecurity sought comment from LockBitSupp at the ToX instant messenger ID listed in his letter to the FBI. LockBitSupp declined to elaborate on the unreleased documents from Fulton County, saying the files will be available for everyone to see in a few days.

LockBitSupp said his team was still negotiating with Fulton County when the FBI seized their servers, which is why the county has been granted a time extension. He also denied threatening to kill the XSS administrator.

“I have not threatened to kill the XSS administrator, he is blatantly lying, this is to cause self-pity and damage my reputation,” LockBitSupp told KrebsOnSecurity. “It is not necessary to kill him to punish him, there are more humane methods and he knows what they are.”

Asked why he was so certain the FBI doesn’t know his real-life identity, LockBitSupp was more precise.

“I’m not sure the FBI doesn’t know who I am,” he said. “I just believe they will never find me.”

It seems unlikely that the FBI’s seizure of LockBit’s infrastructure was somehow an effort to stave off the disclosure of Fulton County’s data, as LockBitSupp maintains. For one thing, Europol said the takedown was the result of a months-long infiltration of the ransomware group.

Also, in reporting on the attack’s disruption to the office of Fulton County District Attorney Fani Willis on Feb. 14, CNN reported that by then the intrusion by LockBit had persisted for nearly two and a half weeks.

Finally, if the NCA and FBI really believed that LockBit never deleted victim data, they had to assume LockBit would still have at least one copy of all their stolen data hidden somewhere safe.

Fulton County is still trying to recover systems and restore services affected by the ransomware attack. “Fulton County continues to make substantial progress in restoring its systems following the recent ransomware incident resulting in service outages,” reads the latest statement from the county on Feb. 22. “Since the start of this incident, our team has been working tirelessly to bring services back up.”

Update, Feb. 29, 3:22 p.m. ET: Just hours after this story ran, LockBit changed its countdown timer for Fulton County, saying the county had until the morning of Feb. 29 (today) to pay a ransom demand. When the official deadline neared today, Fulton County’s listing was removed from LockBit’s victim shaming website. Asked about the removal of the listing, LockBit’s leader “LockBitSupp” told KrebsOnSecurity that Fulton County paid a ransom demand. County officials have scheduled a press conference on the ransomware attack at 4:15 p.m. ET today.

Planet DebianFreexian Collaborators: Long term support for Samba 4.17

Freexian is pleased to announce a partnership with Catalyst to extend the security support of Samba 4.17, which is the version packaged in Debian 12 Bookworm. Samba 4.17 will reach upstream’s end-of-support this upcoming March (2024), and the goal of this partnership is to extend it until June 2028 (i.e. the end of Debian 12’s regular security support).

One of the main aspects of this project is that it will also include support for Samba as Active Directory Domain Controller (AD-DC). Unfortunately, support for Samba as AD-DC in Debian 11 Bullseye, Debian 10 Buster and older releases was discontinued before the end of the life cycle of those Debian releases. So we really expect to improve the situation of Samba in Debian 12 Bookworm by ensuring full support during the 5 years of regular security support.

We would like to mention that this is an experiment. We will do our best to make it a success, and to try to continue it for Samba versions included in future Debian releases.

Our long term goal is to bring confidence to Samba’s upstream development community that they can mark some releases as being supported for 5 years (or more) and that the corresponding work will be funded by companies that benefit from this work (because we would have already built that community).

If your company relies on Samba and wants to help sustain LTS versions of Samba, please reach out to us. For companies using Debian, the simplest way is to subscribe to our Debian LTS offer at a gold level (or above) and let us know that you want to contribute to Samba LTS when you send your subscription form. For others, please reach out to us at sales@freexian.com and we will figure out a way to contribute.

In the meantime, this project has been made possible thanks to the current LTS sponsors and ELTS customers. We hope the whole community of Debian and Samba users will benefit from it.

For any question, don’t hesitate to contact us.

,

Planet DebianBen Hutchings: Converted from Pyblosxom to Jekyll

I’ve been using Pyblosxom here for nearly 17 years, but have become increasingly dissatisfied with having to write HTML instead of Markdown.

Today I looked at upgrading my web server and discovered that Pyblosxom was removed from Debian after Debian 10, presumably because it wasn’t updated for Python 3.

I keep hearing about Jekyll as a static site generator for blogs, so I finally investigated how to use that and how to convert my existing entries. Fortunately it supports both HTML and Markdown (and probably other) input formats, so this was mostly a matter of converting metadata.
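The metadata conversion might look roughly like this. This is my own sketch, not Ben's actual script: it assumes the classic Pyblosxom entry layout (title on the first line, optional `#key value` metadata lines, then the body) and emits a Jekyll page with YAML front matter.

```python
def pyblosxom_to_jekyll(entry: str) -> str:
    """Convert a Pyblosxom entry (title line + optional '#key value'
    metadata lines, then the body) into a Jekyll page with YAML
    front matter. Layout assumptions are illustrative only."""
    lines = entry.splitlines()
    title, rest = lines[0], lines[1:]
    meta = {}
    # Pyblosxom metadata lines start with '#' and precede the body.
    while rest and rest[0].startswith("#"):
        key, _, value = rest.pop(0).lstrip("#").partition(" ")
        meta[key] = value.strip()
    front = ["---", f"title: {title}"]
    front += [f"{k}: {v}" for k, v in sorted(meta.items())]
    front.append("---")
    return "\n".join(front) + "\n" + "\n".join(rest).lstrip("\n")
```

Since Jekyll accepts HTML files with front matter directly, the existing HTML bodies can pass through unchanged.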

I have my own crappy script for drafting, publishing, and listing blog entries, which also needed a bit of work to update, but that is now done.

If all has gone to plan, you should be seeing just one new entry in the feed but all permalinks to older entries still working.

Cory DoctorowThe Majority of Censorship is Self-Censorship

Burning of 'dirt and trash literature' at the 18th Elementary school in Berlin-Pankow (Buchholz), on the evening of International Children's Day, June 1st, 1955. It was the first of a wave of initiatives by the Parents-Teachers Association (Elternversammlungen), to legally ban 'trash and filth.'

Today for my podcast, I read The majority of censorship is self-censorship, originally published in my Pluralistic blog. It’s a breakdown of Ada Palmer’s excellent Reactor essay about the modern and historical context of censorship.

I recorded this on a day when I was home between book-tour stops (I’m out with my new techno crime-thriller, The Bezzle). Catch me tomorrow (Monday) in Seattle with Neal Stephenson at Third Place Books. Then it’s Powell’s in Portland, and then Tucson. The canonical link for the schedule is here.

States – even very powerful states – that wish to censor lack the resources to accomplish totalizing censorship of the sort depicted in Nineteen Eighty-Four. They can’t go from house to house, searching every nook and cranny for copies of forbidden literature. The only way to kill an idea is to stop people from expressing it in the first place. Convincing people to censor themselves is, “dollar for dollar and man-hour for man-hour, much cheaper and more impactful than anything else a censorious regime can do.”

Ada invokes examples modern and ancient, including from her own area of specialty, the Inquisition and its treatment of Galileo. The Inquisition didn’t set out to silence Galileo. If that had been its objective, it could have just assassinated him. This was cheap, easy and reliable! Instead, the Inquisition persecuted Galileo, in a very high-profile manner, making him and his ideas far more famous.

But this isn’t some early example of Inquisitorial Streisand Effect. The point of persecuting Galileo was to convince Descartes to self-censor, which he did. He took his manuscript back from the publisher and cut the sections the Inquisition was likely to find offensive. It wasn’t just Descartes: “thousands of other major thinkers of the time wrote differently, spoke differently, chose different projects, and passed different ideas on to the next century because they self-censored after the Galileo trial.”


MP3


Here’s that tour schedule!

26 Feb: Third Place Books, Seattle, 19h, with Neal Stephenson (!!!)
https://www.thirdplacebooks.com/event/cory-doctorow

27 Feb: Powell’s, Portland, 19h:
https://www.powells.com/book/the-bezzle-martin-hench-2-9781250865878/1-2

29 Feb: Changing Hands, Phoenix, 1830h:
https://www.changinghands.com/event/february2024/cory-doctorow

9-10 Mar: Tucson Festival of the Book:
https://tucsonfestivalofbooks.org/?action=display_author&id=15669

13 Mar: San Francisco Public Library, details coming soon!

23 or 24 Mar: Toronto, details coming soon!

25-27 Mar: NYC and DC, details coming soon!

29-31 Mar: Wondercon Anaheim:
https://www.comic-con.org/wc/

11 Apr: Boston, details coming soon!

12 Apr: RISD Debates in AI, Providence, details coming soon!

17 Apr: Anderson’s Books, Chicago, 19h:
https://www.andersonsbookshop.com/event/cory-doctorow-1

19-21 Apr: Torino Biennale Tecnologia
https://www.turismotorino.org/en/experiences/events/biennale-tecnologia

2 May, Canadian Centre for Policy Alternatives, Winnipeg
https://www.eventbrite.ca/e/cory-doctorow-tickets-798820071337

5-11 May: Tartu Prima Vista Literary Festival
https://tartu2024.ee/en/kirjandusfestival/

6-9 Jun: Media Ecology Association keynote, Amherst, NY
https://media-ecology.org/convention

365 TomorrowsStarship Lovers

Author: Matthew Miehe The large hangar was where starships sat to be scrapped or bought by new owners. It’s also where Hammer-II, a blue and grey cargo cruiser, had found love. When he flew into the dock, Argus Luxury Model (serial code 11727) was the first thing he laid his eyes on. She was a […]

The post Starship Lovers appeared first on 365tomorrows.

Planet DebianRuss Allbery: Review: The Fund

Review: The Fund, by Rob Copeland

Publisher: St. Martin's Press
Copyright: 2023
ISBN: 1-250-27694-2
Format: Kindle
Pages: 310

I first became aware of Ray Dalio when either he or his publisher plastered advertisements for The Principles all over the San Francisco 4th and King Caltrain station. If I recall correctly, there were also constant radio commercials; it was a whole thing in 2017. My brain is very good at tuning out advertisements, so my only thought at the time was "some business guy wrote a self-help book." I think I vaguely assumed he was a CEO of some traditional business, since that's usually who writes heavily marketed books like this. I did not connect him with hedge funds or Bridgewater, which I have a bad habit of confusing with Blackwater.

The Principles turns out to be more of a laundered cult manual than a self-help book. And therein lies a story.

Rob Copeland is currently with The New York Times, but for many years he was the hedge fund reporter for The Wall Street Journal. He covered, among other things, Bridgewater Associates, the enormous hedge fund founded by Ray Dalio. The Fund is a biography of Ray Dalio and a history of Bridgewater from its founding as a vehicle for Dalio's advising business until 2022 when Dalio, after multiple false starts and title shuffles, finally retired from running the company. (Maybe. Based on the history recounted here, it wouldn't surprise me if he was back at the helm by the time you read this.)

It is one of the wildest, creepiest, and most abusive business histories that I have ever read.

It's probably worth mentioning, as Copeland does explicitly, that Ray Dalio and Bridgewater hate this book and claim it's a pack of lies. Copeland includes some of their denials (and many non-denials that sound as good as confirmations to me) in footnotes that I found increasingly amusing.

A lawyer for Dalio said he "treated all employees equally, giving people at all levels the same respect and extending them the same perks."

Uh-huh.

Anyway, I personally know nothing about Bridgewater other than what I learned here and the occasional mention in Matt Levine's newsletter (which is where I got the recommendation for this book). I have no independent information whether anything Copeland describes here is true, but Copeland provides the typical extensive list of notes and sourcing one expects in a book like this, and Levine's comments indicated it's generally consistent with Bridgewater's industry reputation. I think this book is true, but since the clear implication is that the world's largest hedge fund was primarily a deranged cult whose employees mostly spied on and rated each other rather than doing any real investment work, I also have questions, not all of which Copeland answers to my satisfaction. But more on that later.

The center of this book are the Principles. These were an ever-changing list of rules and maxims for how people should conduct themselves within Bridgewater. Per Copeland, although Dalio later published a book by that name, the version of the Principles that made it into the book was sanitized and significantly edited down from the version used inside the company. Dalio was constantly adding new ones and sometimes changing them, but the common theme was radical, confrontational "honesty": never being silent about problems, confronting people directly about anything that they did wrong, and telling people all of their faults so that they could "know themselves better."

If this sounds like textbook abusive behavior, you have the right idea. This part Dalio admits to openly, describing Bridgewater as a firm that isn't for everyone but that achieves great results because of this culture. But the uncomfortably confrontational vibes are only the tip of the iceberg of dysfunction. Here are just a few of the ways this played out according to Copeland:

  • Dalio decided that everyone's opinions should be weighted by the accuracy of their previous decisions, to create a "meritocracy," and therefore hired people to build a social credit system in which people could use an app to constantly rate all of their co-workers. This almost immediately devolved into out-group bullying worthy of a high school, with employees hurriedly down-rating and ostracizing any co-worker that Dalio down-rated.

  • When an early version of the system uncovered two employees at Bridgewater with more credibility than Dalio, Dalio had the system rigged to ensure that he always had the highest ratings and was not affected by other people's ratings.

  • Dalio became so obsessed with the principle of confronting problems that he created a centralized log of problems at Bridgewater and required employees to find and report a quota of ten or twenty new issues every week or have their bonus docked. He would then regularly pick some issue out of the issue log, no matter how petty, and treat it like a referendum on the worth of the person responsible for the issue.

  • Dalio's favorite way of dealing with a problem was to put someone on trial. This involved extensive investigations followed by a meeting where Dalio would berate the person and harshly catalog their flaws, often reducing them to tears or panic attacks, while smugly insisting that having an emotional reaction to criticism was a personality flaw. These meetings were then filmed and added to a library available to all Bridgewater employees, often edited to remove Dalio's personal abuse and to make the emotional reaction of the target look disproportionate. The ones Dalio liked the best were shown to all new employees as part of their training in the Principles.

  • One of the best ways to gain institutional power in Bridgewater was to become sycophantically obsessed with the Principles and to be an eager participant in Dalio's trials. The highest levels of Bridgewater featured constant jockeying for power, often by trying to catch rivals in violations of the Principles so that they would be put on trial.

In one of the common and all-too-disturbing connections between Wall Street finance and the United States' dysfunctional government, James Comey (yes, that James Comey) ran internal security for Bridgewater for three years, meaning that he was the one who pulled evidence from surveillance cameras for Dalio to use to confront employees during his trials.

In case the cult vibes weren't strong enough already, Bridgewater developed its own idiosyncratic language worthy of Scientology. The trials were called "probings," firing someone was called "sorting" them, and rating them was called "dotting," among many other Bridgewater-specific terms. Needless to say, no one ever probed Dalio himself. You will also be completely unsurprised to learn that Copeland documents instances of sexual harassment and discrimination at Bridgewater, including some by Dalio himself, although that seems to be a relatively small part of the overall dysfunction. Dalio was happy to publicly humiliate anyone regardless of gender.

If you're like me, at this point you're probably wondering how Bridgewater continued operating for so long in this environment. (Per Copeland, since Dalio's retirement in 2022, Bridgewater has drastically reduced the cult-like behaviors, deleted its archive of probings, and de-emphasized the Principles.) It was not actually a religious cult; it was a hedge fund that has to provide investment services to huge, sophisticated clients, and by all accounts it's a very successful one. Why did this bizarre nightmare of a workplace not interfere with Bridgewater's business?

This, I think, is the weakest part of this book. Copeland makes a few gestures at answering this question, but none of them are very satisfying.

First, it's clear from Copeland's account that almost none of the employees of Bridgewater had any control over Bridgewater's investments. Nearly everyone was working on other parts of the business (sales, investor relations) or on cult-related obsessions. Investment decisions (largely incorporated into algorithms) were made by a tiny core of people and often by Dalio himself. Bridgewater also appears to not trade frequently, unlike some other hedge funds, meaning that they probably stay clear of the more labor-intensive high-frequency parts of the business.

Second, Bridgewater took off as a hedge fund just before the hedge fund boom in the 1990s. It transformed from Dalio's personal consulting business and investment newsletter to a hedge fund in 1990 (with an earlier investment from the World Bank in 1987), and the 1990s were a very good decade for hedge funds. Bridgewater, in part due to Dalio's connections and effective marketing via his newsletter, became one of the largest hedge funds in the world, which gave it a sort of institutional momentum. No one was questioned for putting money into Bridgewater even in years when it did poorly compared to its rivals.

Third, Dalio used the tried and true method of getting free publicity from the financial press: constantly predict an upcoming downturn, and aggressively take credit whenever you were right. From nearly the start of his career, Dalio predicted economic downturns year after year. Bridgewater did very well in the 2000 to 2003 downturn, and again during the 2008 financial crisis. Dalio aggressively takes credit for predicting both of those downturns and positioning Bridgewater correctly going into them. This is correct; what he avoids mentioning is that he also predicted downturns in every other year, the majority of which never happened.

These points together create a bit of an answer, but they don't feel like the whole picture and Copeland doesn't connect the pieces. It seems possible that Dalio may simply be good at investing; he reads obsessively and clearly enjoys thinking about markets, and being an abusive cult leader doesn't take up all of his time. It's also true that to some extent hedge funds are semi-free money machines, in that once you have a sufficient quantity of money and political connections you gain access to investment opportunities and mechanisms that are very likely to make money and that the typical investor simply cannot access. Dalio is clearly good at making personal connections, and invested a lot of effort into forming close ties with tricky clients such as pools of Chinese money.

Perhaps the most compelling explanation isn't mentioned directly in this book but instead comes from Matt Levine. Bridgewater touts its algorithmic trading over humans making individual trades, and there is some reason to believe that consistently applying an algorithm without regard to human emotion is a solid trading strategy in at least some investment areas. Levine has asked in his newsletter, tongue firmly in cheek, whether the bizarre cult-like behavior and constant infighting is a strategy to distract all the humans and keep them from messing with the algorithm and thus making bad decisions.

Copeland leaves this question unsettled. Instead, one comes away from this book with a clear vision of the most dysfunctional workplace I have ever heard of, and an endless litany of bizarre events each more astonishing than the last. If you like watching train wrecks, this is the book for you. The only drawback is that, unlike other entries in this genre such as Bad Blood or Billion Dollar Loser, Bridgewater is a wildly successful company, so you don't get the schadenfreude of seeing a house of cards collapse. You do, however, get a helpful mental model to apply to the next person who tries to talk to you about "radical honesty" and "idea meritocracy."

The flaw in this book is that the existence of an organization like Bridgewater is pointing to systematic flaws in how our society works, which Copeland is largely uninterested in interrogating. "How could this have happened?" is a rather large question to leave unanswered. The sheer outrageousness of Dalio's behavior also gets a bit tiring by the end of the book, when you've seen the patterns and are hearing about the fourth variation. But this is still an astonishing book, and a worthy entry in the genre of capitalism disasters.

Rating: 7 out of 10

Planet DebianJacob Adams: AAC and Debian

Currently, in a default installation of Debian with the GNOME desktop, Bluetooth headphones that require the AAC codec1 cannot be used. As the Debian wiki outlines, using the AAC codec over Bluetooth, while technically supported by PipeWire, is explicitly disabled in Debian at this time. This is because the fdk-aac library needed to enable this support is currently in the non-free component of the repository, meaning that PipeWire, which is in the main component, cannot depend on it.

How to Fix it Yourself

If what you, like me, need is simply for Bluetooth Audio to work with AAC in Debian’s default desktop environment2, then you’ll need to rebuild the pipewire package to include the AAC codec. While the current version in Debian main has been built with AAC deliberately disabled, it is trivial to enable if you can install a version of the fdk-aac library.

I preface this with the usual caveats when it comes to patent and licensing controversies: I am not a lawyer, and building this package and/or using it could get you into legal trouble.

These instructions have only been tested on an up-to-date copy of Debian 12.

  1. Install pipewire’s build dependencies
    sudo apt install build-essential devscripts
    sudo apt build-dep pipewire
    
  2. Install libfdk-aac-dev
    sudo apt install libfdk-aac-dev
    

    If the above doesn’t work you’ll likely need to enable non-free and try again

    sudo sed -i 's/main/main non-free/g' /etc/apt/sources.list
    sudo apt update
    

    Alternatively, if you wish to ensure you are maximally license-compliant and patent un-infringing3, you can instead build fdk-aac-free which includes only those components of AAC that are known to be patent-free3. This is what should eventually end up in Debian to resolve this problem (see below).

    sudo apt install git-buildpackage
    mkdir fdk-aac-source
    cd fdk-aac-source
    git clone https://salsa.debian.org/multimedia-team/fdk-aac
    cd fdk-aac
    gbp buildpackage
    sudo dpkg -i ../libfdk-aac2_*deb ../libfdk-aac-dev_*deb
    
  3. Get the pipewire source code
    mkdir pipewire-source
    cd pipewire-source
    apt source pipewire
    

    This will create a bunch of files within the pipewire-source directory, but you’ll only need the pipewire-<version> folder; it contains all the files you’ll need to build the package, with all the Debian-specific patches already applied. Note that you don’t want to run the apt source command as root, as it will then create files that your regular user cannot edit.

  4. Fix the dependencies and build options To fix up the build scripts to use the fdk-aac library, you need to save the following as pipewire-source/aac.patch
    --- debian/control.orig
    +++ debian/control
    @@ -40,8 +40,8 @@
                 modemmanager-dev,
                 pkg-config,
                 python3-docutils,
    -               systemd [linux-any]
    -Build-Conflicts: libfdk-aac-dev
    +               systemd [linux-any],
    +               libfdk-aac-dev
     Standards-Version: 4.6.2
     Vcs-Browser: https://salsa.debian.org/utopia-team/pipewire
     Vcs-Git: https://salsa.debian.org/utopia-team/pipewire.git
    --- debian/rules.orig
    +++ debian/rules
    @@ -37,7 +37,7 @@
     		-Dauto_features=enabled \
     		-Davahi=enabled \
     		-Dbluez5-backend-native-mm=enabled \
    -		-Dbluez5-codec-aac=disabled \
    +		-Dbluez5-codec-aac=enabled \
     		-Dbluez5-codec-aptx=enabled \
     		-Dbluez5-codec-lc3=enabled \
     		-Dbluez5-codec-lc3plus=disabled \
    

    Then you’ll need to run patch from within the pipewire-<version> folder created by apt source:

    patch -p0 < ../aac.patch
    
  5. Build pipewire
    cd pipewire-*
    debuild
    

    Note that you will likely see an error from debsign at the end of this process; this is harmless, as you simply don’t have a GPG key set up to sign your newly-built package4. Packages don’t need to be signed to be installed, and debsign uses a somewhat non-standard signing process that dpkg does not check anyway.

  6. Install libspa-0.2-bluetooth
    sudo dpkg -i libspa-0.2-bluetooth_*.deb
    
  7. Restart PipeWire and/or Reboot
    sudo reboot
    

    Theoretically there’s a set of services to restart here that would get pipewire to pick up the new library, probably just pipewire itself. But it’s just as easy to restart and ensure everything is using the correct library.

Why

This is a slightly unusual situation, as the fdk-aac library is licensed under what even the GNU project acknowledges is a free software license. However, this license explicitly informs the user that they need to acquire a patent license to use this software5:

3. NO PATENT LICENSE

NO EXPRESS OR IMPLIED LICENSES TO ANY PATENT CLAIMS, including without limitation the patents of Fraunhofer, ARE GRANTED BY THIS SOFTWARE LICENSE. Fraunhofer provides no warranty of patent non-infringement with respect to this software. You may use this FDK AAC Codec software or modifications thereto only for purposes that are authorized by appropriate patent licenses.

To quote the GNU project:

Because of this, and because the license author is a known patent aggressor, we encourage you to be careful about using or redistributing software under this license: you should first consider whether the licensor might aim to lure you into patent infringement.

AAC is covered by a number of patents, which expire at some point in the 2030s6. As such the current version of the library is potentially legally dubious to ship with any other software, as it could be considered patent-infringing3.

Fedora’s solution

Since 2017, Fedora has included a modified version of the library as fdk-aac-free, see the announcement and the bugzilla bug requesting review.

This version of the library includes only the AAC LC profile, which is believed to be entirely patent-free3.

Based on this, there is an open bug report in Debian requesting that the fdk-aac package be moved to the main component and that the pipewire package be updated to build against it.

The Debian NEW queue

To resolve these bugs, a version of fdk-aac-free has been uploaded to Debian by Jeremy Bicha. However, to make it into Debian proper, it must first pass through the ftpmaster’s NEW queue. The current version of fdk-aac-free has been in the NEW queue since July 2023.

Based on conversations in some of the bugs above, it’s been there since at least 20227.

I hope this helps anyone stuck with AAC to get their hardware working for them while we wait for the package to eventually make it through the NEW queue.

Discuss on Hacker News

  1. Such as, for example, any Apple AirPods, which only support AAC AFAICT. 

  2. Which, as of Debian 12 is GNOME 3 under Wayland with PipeWire. 

  3. I’m not a lawyer, I don’t know what kinds of infringement might or might not be possible here, do your own research, etc. 

  4. And if you DO have a key setup with debsign you almost certainly don’t need these instructions. 

  5. This was originally phrased as “explicitly does not grant any patent rights.” It was pointed out on Hacker News that this is not exactly what it says, as it also includes a specific note that you’ll need to acquire your own patent license. I’ve now quoted the relevant section of the license for clarity. 

  6. Wikipedia claims the “base” patents expire in 2031, with the extensions expiring in 2038, but its source for these claims is some guy’s spreadsheet in a forum. The same discussion also brings up Wikipedia’s claim and casts some doubt on it, so I’m not entirely sure what’s correct here, but I didn’t feel like doing a patent deep-dive today. If someone can provide a clear answer that would be much appreciated. 

  7. According to Jeremy Bícha: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1021370#17 

,

Planet DebianNiels Thykier: Language Server for Debian: Spellchecking

This is my third update on writing a language server for Debian packaging files, which aims at providing a better developer experience for Debian packagers.

Lets go over what have done since the last report.

Semantic token support

I have added support for what the Language Server Protocol (LSP) calls semantic tokens. These are used to give the editor insight into tokens of interest for users. Allegedly, this is what editors would use for syntax highlighting as well.
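For the curious: the protocol does not ship semantic tokens as structured objects. Each token becomes five integers in one flat array, delta-encoded against the previous token. Here is a minimal sketch of that encoding (my own illustration, not debputy's code):

```python
def encode_semantic_tokens(tokens):
    """Delta-encode (line, start, length, type_idx, modifier_bits)
    tuples into the flat integer list that an LSP
    'textDocument/semanticTokens/full' response carries."""
    data, prev_line, prev_start = [], 0, 0
    for line, start, length, ttype, mods in sorted(tokens):
        delta_line = line - prev_line
        # start is relative to the previous token only on the same line
        delta_start = start - prev_start if delta_line == 0 else start
        data += [delta_line, delta_start, length, ttype, mods]
        prev_line, prev_start = line, start
    return data
```

The first token is encoded relative to position (0, 0); each later one is relative to its predecessor, which keeps the payload compact.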

Unfortunately, eglot (emacs) does not support semantic tokens, so I was not able to test this. There is a 3-year-old PR for supporting it, with the last update from roughly 3 months ago basically saying "Please sign the Copyright Assignment". I pinged the GitHub issue in the hope that it will get unstuck.

For good measure, I also checked whether I could try it via neovim. Before installing, I read the neovim docs, which helpfully listed the supported features. Sadly, I did not spot semantic tokens among them and stopped there.

That was a bit of a bummer, but I left the feature in for now. If you have an LSP-capable editor that supports semantic tokens, let me know how it works for you! :)

Spellchecking

Finally, I implemented something Otto was missing! :)

This started with Paul Wise reminding me that there are Python bindings for the hunspell spellchecker. That enabled me to put together a quick prototype that spellchecked the Description fields in debian/control. I also added spellchecking of comments while I was at it.

The spellchecker runs with the standard en_US dictionary from hunspell-en-us, which does not have a lot of technical terms in it, much less any of the Debian-specific slang. I spent considerable time providing a "built-in" wordlist of technical and Debian-specific slang to overcome this. I also made a wordlist of known Debian people that the spellchecker did not recognise. Said wordlist is fairly short as a proof of concept, and I fully expect it to be community-maintained if the language server becomes a success.
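The layering might look roughly like this. The class, word sets, and method names below are invented for illustration; the plain base set stands in for the real hunspell dictionary, which the actual code would query through the hunspell bindings instead.

```python
# Tiny stand-in wordlists; the real ones are much larger.
DEBIAN_SLANG = {"debdiff", "dput", "lintian", "sbuild", "buildd"}
KNOWN_PEOPLE = {"thykier", "hertzog"}

class LayeredSpellchecker:
    """Check words against a base dictionary first, then the
    built-in Debian wordlists. The base set here is a placeholder
    for hunspell's spell() call."""
    def __init__(self, base_words):
        self.layers = [set(base_words), DEBIAN_SLANG, KNOWN_PEOPLE]

    def is_known(self, word: str) -> bool:
        return any(word in layer or word.lower() in layer
                   for layer in self.layers)

    def typos(self, text: str):
        # Return words not found in any layer.
        return [w for w in text.split() if not self.is_known(w)]
```

Extra layers (per-project wordlists, user dictionaries) slot in by appending another set, without touching the lookup logic.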

My second problem was performance. I had suspected that spellchecking would not be the fastest thing in the world, so I added a very small language server for debian/changelog, which only supports spellchecking the textual part. Even for a small changelog of 1000 lines, the spellchecking takes about 5 seconds, which confirmed my suspicion. With every change you make, the existing diagnostics hang around for 5 seconds before being updated. Notably, in emacs, it seems that diagnostics get translated into an absolute character offset, so all diagnostics after the change get misplaced for every character you type.

Now, there is little I could do to speed up hunspell. But I can, as always, cheat. The way diagnostics work in the LSP is that the server listens to a set of notifications like "document opened" or "document changed". In response to that, the server can start its diagnostics scanning of the document and eventually publish all the diagnostics to the editor. The spec is quite clear that the server owns the diagnostics and that diagnostics are sent as a "notification" (that is, fire-and-forget). Accordingly, there is nothing that prevents the server from publishing diagnostics multiple times for a single trigger. The only requirement is that the server publishes the accumulated diagnostics in every publish (that is, no delta updating).

Leveraging this, I had the language server for debian/changelog scan the document and publish once for approximately every 25 typos (diagnostics) spotted. This means you quickly get your first result and that clears the obsolete diagnostics. Thereafter, you get frequent updates to the remainder of the document if you do not perform any further changes. That is, up to a predefined max of typos, so we do not overload the client for longer changelogs. If you do any changes, it resets and starts over.
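The publishing trick can be sketched as a toy model (not the server's actual code; the LSP-side publish notification is abstracted into a callback, and the batch sizes just mirror the numbers in the text):

```python
def publish_in_batches(findings, publish, batch=25, max_total=200):
    """Re-publish the full accumulated diagnostics list every
    `batch` findings, as the LSP spec permits: each publish
    replaces the previous one outright (no delta updating)."""
    acc = []
    for diag in findings:
        acc.append(diag)
        if len(acc) % batch == 0:
            publish(list(acc))      # full accumulated list so far
        if len(acc) >= max_total:
            break                   # don't overload the client
    publish(list(acc))              # final publish covers the rest
```

Because every publish carries the whole list, an editor that only keeps the latest notification always shows a consistent, growing set of diagnostics.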

The only bit missing was dealing with concurrency. By default, a pygls language server is single threaded, which is not great if the language server hangs for 5 seconds every time you type anything. Fortunately, pygls has built-in support for asyncio and threaded handlers. For now, I wrote an async handler that awaits after each line and set up some manual detection to stop an obsolete diagnostics run. This means the server will fairly quickly abandon an obsolete run.
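That "manual detection" can be sketched with a generation counter (again, illustrative names, not the pygls or debputy API): each document change bumps the counter, and the scanning coroutine yields after every line and bails out as soon as it notices it is stale.

```python
import asyncio

class SpellcheckRuns:
    """Cooperative cancellation for diagnostics runs: an edit bumps the
    generation counter, making any in-flight scan obsolete."""

    def __init__(self):
        self.generation = 0

    def document_changed(self):
        self.generation += 1  # invalidates any run started earlier

    async def scan(self, lines, check_line):
        my_generation = self.generation
        results = []
        for line in lines:
            if self.generation != my_generation:
                return None  # a newer edit arrived; abandon this run
            results.extend(check_line(line))
            await asyncio.sleep(0)  # yield control after each line
        return results
```

Because the coroutine yields between lines, a 5-second scan no longer blocks the event loop, and a stale run stops within one line's worth of work.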

Also, as a side-effect of working on the spellchecking, I fixed multiple typos in the changelog of debputy. :)

Follow up on the "What next?" from my previous update

In my previous update, I mentioned I had to finish up my python-debian changes to support getting the location of a token in a deb822 file. That was done, the MR is now filed, and is pending review. Hopefully, it will be merged and uploaded soon. :)

I also submitted my proposal for a different way of handling relationship substvars to debian-devel. So far, it seems to have received only positive feedback. I hope it stays that way and we will have this feature soon. Guillem proposed to move some of this into dpkg, which might delay my plans a bit. However, it might be for the better in the long run, so I will wait a bit to see what happens on that front. :)

As noted above, I managed to add debian/changelog as a supported format for the language server. Even if it only does spellchecking and trimming of trailing newlines on save, it technically is a new format, so I can cross that item off my list. :D

Unfortunately, I did not manage to write a linter variant that does not involve using an LSP-capable editor. So that is still pending. Instead, I submitted an MR against elpa-dpkg-dev-el to have it recognize all the fields that the debian/control LSP knows about at this time to offset the lack of semantic token support in eglot.

From here...

My sprinting on this topic will soon come to an end, so I have to be a bit more careful now with what tasks I open!

I think I will narrow my focus to providing a batch linting interface. Ideally, with an auto-fix for some of the more mechanical issues, where there is little doubt about the answer.

Additionally, I think the spellchecking will need a bit more maturing. My current code still trips on naming patterns that are "clearly" verbatim or code references, like things written in CamelCase or SCREAMING_SNAKE_CASE. That gets annoying really quickly. It also trips on a lot of commands like dpkg-gencontrol, but that is harder to fix since it could have been a real word. I think those will have to be fixed by people using quotes around the commands. Maybe the most popular ones will end up in the wordlist.

Beyond that, I will play it by ear if I have any time left. :)

365 TomorrowsWhy We Banned Time Travel

Author: David Barber Even grandfathers fearful of paradox— in case squashing a butterfly alters the future—had no cause to fret, because the time engine emerged in low Earth orbit and just took pictures. What could go wrong? Instruments gazed down on a warm, pristine planet, dominated by behemoths. Sometimes herds could be glimpsed from space. […]

The post Why We Banned Time Travel appeared first on 365tomorrows.

David BrinRepublican rationalizations are unchanged, even in the face of ... facts

Down at the end, I'll offer an excerpt from an essay on my alternate site asking "Does government-funded science play a role in stimulating innovation?" Both the far-left and today's entire-right share in common a cult reflex answer to that question. An answer emblematic of the lobotomization of our time.

But do hang around for that excerpt, at least!

== Another milestone raises a serious question ==

With the passing of the "Greatest Generation" (GG) - parents of the boomers - and now Jimmy and Rosalynn Carter - perhaps it's time to re-evaluate the America... and world... that they made.

Especially the Rooseveltean social contract that transformed the USA into a world titan, science-leader and awash in wealth, while setting us down an inexorable road toward some kinds of equality: first regarding social/working class. But then (admittedly far too-slowly!) race/gender and the rest.

That social contract directly correlated with the highest rates of middle class prosperity increase, fastest startup entrepreneurship and lowest levels of wealth disparity the world had ever seen. But it has been - since the 1980s - carved-away on a range of incantatory excuses and partially demolished by a massive campaign of conservative 'reforms'...  

...economic and social theories that were supposedly aimed at enhancing creative market freedom, but that correlated exactly and always with reduction of innovation and competition, while restoring the one trait that the Greatest Generation despised most... born-class as the primary decider of a child's destiny. 

This campaign was justified by guys like Milton Friedman and Robert Bork, and think tanks such as Heritage and AEI, that continue pushing utterly-disproved notions like "Supply Side (voodoo) Economics" - that never had one successfully predicted positive outcome.  

(In science, a theory is abandoned in the face of relentless predictive failure, but cults don't do that.)

In the 90s, those pro-oligarchy economists were augmented by "neocon" imperialists who urged both Bushes and Dick Cheney etc. to plunge us into blatant traps that had been laid for us by Osama bin Laden and his ilk. Those Middle Eastern wars were supposedly in revenge for 9/11 attacks that (always remember) happened on their watch. The neocons openly brayed their ill-disguised glee at transforming an 80% benign American Pax into a thumping, gallumphing empire.


(My hero - George Marshall - held meetings in 1945 revolving around a question that no leader had ever asked, before: "We are about to become an empire. What mistakes did all other empires make and how can we make something that succeeds? That won't make us hated and eventually destroyed?" (paraphrased))

Look at the blared yowls of Wolfowitz, Nitze, Adelman and the other neocons, in those days, and tell us you see any signs of wisdom, or awareness of the traps they were falling for.  Alas for them, their orgiastic era was brief. America soon soured on imperial preenings that distilled down to $trillion dollar ripoffs. At which point those poor neocons were promptly flushed away by the Republican establishment - without even a word of thanks - as oligarchy decided to veer republicanism away from armed adventures, over to populist/isolationist/lobotomizing/nerd-hating classic confederatism... now called Trumpism or MAGA.

But don't be distracted... all along, those apparent gyrations were superficialities. The central goal has always been the same. To defend and expand "supply side" tax grifts for aristocracy while crippling the Internal Revenue Service, so that a myriad cheats and thefts should remain hidden. 

Scan from 1981 to present. That sole priority was the only consistent policy position of the GOP and the only one always enacted, whenever they got power. 

(Other than that, and recently the abortion mania, can you name any actual legislative activity by GOP Congresses, the laziest in US history?)

No other 'priority' (e.g. the border) got more than lip service. That is, until the virulently riled MAGAs ('Do you still think you can control them?' Watch Cabaret!) demanded real action on abortion and other social incitements.

A lot of dems/libs went along with Supply Side (SS) in the 80s and even 90s, until, by its 4th round, the effects grew clear: that not a single positive outcome prediction - not one - ever came true. 

Industrial investment? Nada, zip. As Adam Smith predicted, the vast waves of grifted lucre were poured by the rich into passive, parasitical 'rentier' investments like real estate and bonds and tax havens. (A third of US housing stock was snapped-up by inheritance brats in cash purchases, immune to interest rates. It's why young couples can't buy homes.)

As for investment in manufacturing? Recipients of Supply Side largess did almost none. America's current, booming re-industrialization only began with the 2021-2 Pelosi bills.


== Why does no one point this out? ==

Well, Robert Reich does. Like the pure fact that federal deficits always worsen across GOP rule and after SS tax grifts (duh?)

Any non-hypocrite, competition-friendly conservative would realize by now that ONLY democrats enact pro-competition and pro-liberty measures. (Have your attorney contact me when you have escrowed $$$ wager stakes over that assertion; but first look at things like the ICC, CAB, AT&T and the damned War on Drugs - and now on reproductive rights.)

Here, in this New York Times article - Google on Trial - a corner of this program is appraised -- whether anti-trust laws can and should be used to break up super-corporations like Google who have inherent advantages. And yeah, that's a major issue. Cory Doctorow rails about it, entertainingly.

Left out are more imaginative solutions. Like whether it's time to help mom & pop America and get needed revenue by instituting a 5% National Sales Tax on interstate internet purchases. Since you-know-who (a South American river) is no longer a baby - but now a market dominating behemoth. In fact, since we all rely on that central market, without much other choice, isn't that the very definition of a public utility? (Ponder that. Treat that unavoidable e-marketplace like electricity and water and trash pickup. If there's no competition, then regulate it to be flat-fair-for-all?)

But the core issue that I keep returning to is one of tactics - at which Democrats (the Union Side in this 8th phase of the 250 year U.S. Civil War) have proved utterly inept! 

It's the reason why I wrote Polemical Judo. A few better tactics and we could peel away just one million residually sane Republicans, leaving the Confederacy in a state of utter, demographic collapse. 

(Of course then the oligarchs will resort to more violent versions of incitement; but we have skilled defenders working on that, right now, e.g. in the much maligned heroes of the FBI.)


And this...


In this conversation, Evan Anderson, CEO of INVNT/IP, an expert on global manufacturing and supply chains, takes us on a deep dive into the power dynamics between the United States, China, and Taiwan.



== Merits and drawbacks of government-funded science ==


And finally, as promised, here's that excerpt from an essay on my more formal, WordPress site asking "Does government-funded science play a role in stimulating innovation?"

There's a deep-cult that underlies many of our familiar political cults. What do the far left and today's entire right share in common? A desperate urge to AMPUTATE our options and methods down to only the few that they prescribe. And hence I posted (on my formal WordPress site) a dissection of this shared, sanctimoniously-oversimplifying, mania.

If you are interested in this... and especially whether government-funded science has played a big role in "Making America (and civilization) Great," then drop on by.


,

Cryptogram Details of a Phone Scam

First-person account of someone who fell for a scam, that started as a fake Amazon service rep and ended with a fake CIA agent, and lost $50,000 cash. And this is not a naive or stupid person.

The details are fascinating. And if you think it couldn’t happen to you, think again. Given the right set of circumstances, it can.

It happened to Cory Doctorow.

EDITED TO ADD (2/23): More scams, these involving timeshares.

Cryptogram Apple Announces Post-Quantum Encryption Algorithms for iMessage

Apple announced PQ3, its post-quantum encryption standard based on the Kyber secure key-encapsulation protocol, one of the post-quantum algorithms selected by NIST in 2022.

There’s a lot of detail in the Apple blog post, and more in Douglas Stebila’s security analysis.

I am of two minds about this. On the one hand, it’s probably premature to switch to any particular post-quantum algorithms. The mathematics of cryptanalysis for these lattice and other systems is still rapidly evolving, and we’re likely to break more of them—and learn a lot in the process—over the coming few years. But if you’re going to make the switch, this is an excellent choice. And Apple’s ability to do this so efficiently speaks well about its algorithmic agility, which is probably more important than its particular cryptographic design. And it is probably about the right time to worry about, and defend against, attackers who are storing encrypted messages in hopes of breaking them later on future quantum computers.

Cryptogram AIs Hacking Websites

New research:

LLM Agents can Autonomously Hack Websites

Abstract: In recent years, large language models (LLMs) have become increasingly capable and can now interact with tools (i.e., call functions), read documents, and recursively call themselves. As a result, these LLMs can now function autonomously as agents. With the rise in capabilities of these agents, recent work has speculated on how LLM agents would affect cybersecurity. However, not much is known about the offensive capabilities of LLM agents.

In this work, we show that LLM agents can autonomously hack websites, performing tasks as complex as blind database schema extraction and SQL injections without human feedback. Importantly, the agent does not need to know the vulnerability beforehand. This capability is uniquely enabled by frontier models that are highly capable of tool use and leveraging extended context. Namely, we show that GPT-4 is capable of such hacks, but existing open-source models are not. Finally, we show that GPT-4 is capable of autonomously finding vulnerabilities in websites in the wild. Our findings raise questions about the widespread deployment of LLMs.

Cryptogram Microsoft Is Spying on Users of Its AI Tools

Microsoft announced that it caught Chinese, Russian, and Iranian hackers using its AI tools—presumably coding tools—to improve their hacking abilities.

From their report:

In collaboration with OpenAI, we are sharing threat intelligence showing detected state affiliated adversaries—tracked as Forest Blizzard, Emerald Sleet, Crimson Sandstorm, Charcoal Typhoon, and Salmon Typhoon—using LLMs to augment cyberoperations.

The only way Microsoft or OpenAI would know this would be to spy on chatbot sessions. I’m sure the terms of service—if I bothered to read them—give them that permission. And of course it’s no surprise that Microsoft and OpenAI (and, presumably, everyone else) are spying on our usage of AI, but this confirms it.

EDITED TO ADD (2/22): Commentary on my use of the word “spying.”

Planet DebianScarlett Gately Moore: Kubuntu: Week 3 wrap up, Contest! KDE snaps, Debian uploads.

Witch Wells AZ SunsetWitch Wells AZ Sunset

It has been a very busy 3 weeks here in Kubuntu!

Kubuntu 22.04.4 LTS has been released and can be downloaded from here: https://kubuntu.org/getkubuntu/

Work done for the upcoming 24.04 LTS release:

  • Frameworks 5.115 is in proposed waiting for the Qt transition to complete.
  • Debian merges for Plasma 5.27.10 are done, and I have confirmed there will be another bugfix release on March 6th.
  • Applications 23.08.5 is being worked on right now.
  • Added support for riscv64 hardware.
  • Bug triaging and several fixes!
  • I am working on Kubuntu branded Plasma-Welcome, Orca support and much more!
  • Aaron and the Kfocus team have been doing some amazing work getting Calamares perfected for release! Thank you!
  • Rick has been working hard on revamping kubuntu.org, stay tuned! Thank you!
  • I have added several more apparmor profiles for packages affected by https://bugs.launchpad.net/ubuntu/+source/kgeotag/+bug/2046844
  • I have aligned our meta package to adhere to https://community.kde.org/Distributions/Packaging_Recommendations and will continue to apply the rest of the fixes suggested there. Thanks for the tip Nate!

We have a branding contest! Please do enter, there are some exciting prizes https://kubuntu.org/news/kubuntu-graphic-design-contest/

Debian:

I have uploaded to NEW the following packages:

  • kde-inotify-survey
  • plank-player
  • aura-browser

I am currently working on:

  • alligator
  • xwaylandvideobridge

KDE Snaps:

KDE applications 23.08.5 have been uploaded to the Candidate channel, testing help welcome. https://snapcraft.io/search?q=KDE I have also been working on bug fixes, time allowing.

My continued employment depends on you, please consider a donation! https://kubuntu.org/donate/

Thank you for stopping by!

~Scarlett

Worse Than FailureError'd: Hard Daze Night

It was an extraordinarily busy week at Error'd HQ. The submission list had an all-time record influx, enough for a couple of special edition columns. Among the list was an unusual PEBKAC. We don't get many of these so it made me chuckle and that's really all it takes to get a submission into the mix.

Headliner Lucio Crusca perseverated "Here's what I found this morning, after late night working yesterday, sitting on my couch, with my Thinkpad on my lap. No, it was not my Debian who error'd. I'm afraid it was me."

lavor

 

Eagle-eyed Ken S. called out Wells Fargo: "I see it just fine." Ditto.

horse

 

Logical Mark W. insists "If you press 'Cancel', you are not cancelling; instead, you are cancelling the cancellation. Can we cancel this please?"

cancel

 

Peter pondered "Should I try to immediately delete, or is it safer to immediately delete?"

delete

 

No relation to Ken S., Legal Eagle lamented "Due to my poor LTAS scores and borderline illetteracy, it was very hard for me to get into user_SchoolName. There is so much reading!" Hang in there, L. Eagle. With a resume like yours, I'm sure there will be work for you in Florida after graduation. Only the best!

lexis

 

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

365 TomorrowsTime to Wake Up!

Author: Bill Cox “What’s happened?” The Captain’s voice was a harsh rasp, his throat still raw from the cryo-fluid. “The ship has experienced a failure of one of the three cold fusion engines due to a catastrophic meteor strike,” the mainframe avatar replied. “We have diverged substantially from our planned route and are now on […]

The post Time to Wake Up! appeared first on 365tomorrows.

Planet DebianGunnar Wolf: 10 things software developers should learn about learning

This post is a review for Computing Reviews for 10 things software developers should learn about learning, an article published in Communications of the ACM

As software developers, we understand the detailed workings of the different components of our computer systems. And–probably due to how computers were presented since their appearance as “digital brains” in the 1940s–we sometimes believe we can transpose that knowledge to how our biological brains work, be it as learners or as problem solvers. This article aims at making the reader understand several mechanisms related to how learning and problem solving actually work in our brains. It focuses on helping expert developers convey knowledge to new learners, as well as learners who need to get up to speed and “start coding.” The article’s narrative revolves around software developers, but much of what it presents can be applied to different problem domains.

The article takes this mission through ten points, with roughly the same space given to each of them, starting with wrong assumptions many people have about the similarities between computers and our brains. The first section, “Human Memory Is Not Made of Bits,” explains the brain processes of remembering as a way of strengthening the force of a memory (“reconsolidation”) and the role of activation in related network pathways. The second section, “Human Memory Is Composed of One Limited and One Unlimited System,” goes on to explain the organization of memories in the brain between long-term memory (functionally limitless, permanent storage) and working memory (storing little amounts of information used for solving a problem at hand). However, the focus soon shifts to how experience in knowledge leads to different ways of using the same concepts, the importance of going from abstract to concrete knowledge applications and back, and the role of skills repetition over time.

Toward the end of the article, the focus shifts from the mechanical act of learning to expertise. Section 6, “The Internet Has Not Made Learning Obsolete,” emphasizes that problem solving is not just putting together the pieces of a puzzle; searching online for solutions to a problem does not activate the neural pathways that would get fired up otherwise. The final sections tackle the differences that expertise brings to play when teaching or training a newcomer: the same tools that help the beginner’s productivity as “training wheels” will often hamper the expert user’s as their knowledge has become automated.

The article is written with a very informal and easy-to-read tone and vocabulary, and brings forward several issues that might seem like commonsense but do ring bells when it comes to my own experiences both as a software developer and as a teacher. The article closes by suggesting several books that further expand on the issues it brings forward. While I could not identify a single focus or thesis with which to characterize this article, the several points it makes will likely help readers better understand (and bring forward to consciousness) mental processes often taken for granted, and consider often-overlooked aspects when transmitting knowledge to newcomers.

Planet DebianReproducible Builds (diffoscope): diffoscope 258 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 258. This version includes the following changes:

[ Chris Lamb ]
* Use the 7zip package (over p7zip-full) after package transition.
  (Closes: #1063559)
* Update debian/tests/control.

[ Vagrant Cascadian ]
* Fix a typo in the package name field (!) within debian/changelog.

You find out more by visiting the project homepage.

,

Cryptogram New Image/Video Prompt Injection Attacks

Simon Willison has been playing with the video processing capabilities of the new Gemini Pro 1.5 model from Google, and it’s really impressive.

Which means a lot of scary new video prompt injection attacks. And remember, given the current state of technology, prompt injection attacks are impossible to prevent in general.

Krebs on SecurityNew Leak Shows Business Side of China’s APT Menace

A new data leak that appears to have come from one of China’s top private cybersecurity firms provides a rare glimpse into the commercial side of China’s many state-sponsored hacking groups. Experts say the leak illustrates how Chinese government agencies increasingly are contracting out foreign espionage campaigns to the nation’s burgeoning and highly competitive cybersecurity industry.

A marketing slide deck promoting i-SOON’s Advanced Persistent Threat (APT) capabilities.

A large cache of more than 500 documents published to GitHub last week indicate the records come from i-SOON, a technology company headquartered in Shanghai that is perhaps best known for providing cybersecurity training courses throughout China. But the leaked documents, which include candid employee chat conversations and images, show a less public side of i-SOON, one that frequently initiates and sustains cyberespionage campaigns commissioned by various Chinese government agencies.

The leaked documents suggest i-SOON employees were responsible for a raft of cyber intrusions over many years, infiltrating government systems in the United Kingdom and countries throughout Asia. Although the cache does not include raw data stolen from cyber espionage targets, it features numerous documents listing the level of access gained and the types of data exposed in each intrusion.

Security experts who reviewed the leaked data say they believe the information is legitimate, and that i-SOON works closely with China’s Ministry of Public Security and the military. In 2021, the Sichuan provincial government named i-SOON as one of “the top 30 information security companies.”

“The leak provides some of the most concrete details seen publicly to date, revealing the maturing nature of China’s cyber espionage ecosystem,” said Dakota Cary, a China-focused consultant at the security firm SentinelOne. “It shows explicitly how government targeting requirements drive a competitive marketplace of independent contractor hackers-for-hire.”

Mei Danowski is a former intelligence analyst and China expert who now writes about her research in a Substack publication called Natto Thoughts. Danowski said i-SOON has achieved the highest secrecy classification that a non-state-owned company can receive, which qualifies the company to conduct classified research and development related to state security.

i-SOON’s “business services” webpage states that the company’s offerings include public security, anti-fraud, blockchain forensics, enterprise security solutions, and training. Danowski said that in 2013, i-SOON established a department for research on developing new APT network penetration methods.

APT stands for Advanced Persistent Threat, a term that generally refers to state-sponsored hacking groups. Indeed, among the documents apparently leaked from i-SOON is a sales pitch slide boldly highlighting the hacking prowess of the company’s “APT research team” (see screenshot above).

i-SOON CEO Wu Haibo, in 2011. Image: nattothoughts.substack.com.

The leaked documents included a lengthy chat conversation between the company’s founders, who repeatedly discuss flagging sales and the need to secure more employees and government contracts. Danowski said the CEO of i-SOON, Wu Haibo (“Shutdown” in the leaked chats) is a well-known first-generation red hacker or “Honker,” and an early member of Green Army — the very first Chinese hacktivist group founded in 1997. Mr. Haibo has not yet responded to a request for comment.

In October 2023, Danowski detailed how i-SOON became embroiled in a software development contract dispute when it was sued by a competing Chinese cybersecurity company called Chengdu 404. In September 2020, the U.S. Department of Justice unsealed indictments against multiple Chengdu 404 employees, charging that the company was a facade that hid more than a decade’s worth of cyber intrusions attributed to a threat actor group known as “APT 41.”

Danowski said the existence of this legal dispute suggests that Chengdu 404 and i-SOON have or at one time had a business relationship, and that one company likely served as a subcontractor to the other.

“From what they chat about we can see this is a very competitive industry, where companies in this space are constantly poaching each others’ employees and tools,” Danowski said. “The infosec industry is always trying to distinguish [the work] of one APT group from another. But that’s getting harder to do.”

It remains unclear if i-SOON’s work has earned it a unique APT designation. But Will Thomas, a cyber threat intelligence researcher at Equinix, found an Internet address in the leaked data that corresponds to a domain flagged in a 2019 Citizen Lab report about one-click mobile phone exploits that were being used to target groups in Tibet. The 2019 report referred to the threat actor behind those attacks as an APT group called Poison Carp.

Several images and chat records in the data leak suggest i-SOON’s clients periodically gave the company a list of targets they wanted to infiltrate, but sometimes employees confused the instructions. One screenshot shows a conversation in which an employee tells his boss they’ve just hacked one of the universities on their latest list, only to be told that the victim in question was not actually listed as a desired target.

The leaked chats show i-SOON continuously tried to recruit new talent by hosting a series of hacking competitions across China. It also performed charity work, and sought to engage employees and sustain morale with various team-building events.

However, the chats include multiple conversations between employees commiserating over long hours and low pay. The overall tone of the discussions indicates employee morale was quite low and that the workplace environment was fairly toxic. In several of the conversations, i-SOON employees openly discuss with their bosses how much money they just lost gambling online with their mobile phones while at work.

Danowski believes the i-SOON data was probably leaked by one of those disgruntled employees.

“This was released the first working day after the Chinese New Year,” Danowski said. “Definitely whoever did this planned it, because you can’t get all this information all at once.”

SentinelOne’s Cary said he came to the same conclusion, noting that the Protonmail account tied to the GitHub profile that published the records was registered a month before the leak, on January 15, 2024.

China’s much vaunted Great Firewall not only lets the government control and limit what citizens can access online, but this distributed spying apparatus allows authorities to block data on Chinese citizens and companies from ever leaving the country.

As a result, China enjoys a remarkable information asymmetry vis-a-vis virtually all other industrialized nations. Which is why this apparent data leak from i-SOON is such a rare find for Western security researchers.

“I was so excited to see this,” Cary said. “Every day I hope for data leaks coming out of China.”

That information asymmetry is at the heart of the Chinese government’s cyberwarfare goals, according to a 2023 analysis by Margin Research performed on behalf of the Defense Advanced Research Projects Agency (DARPA).

“In the area of cyberwarfare, the western governments see cyberspace as a ‘fifth domain’ of warfare,” the Margin study observed. “The Chinese, however, look at cyberspace in the broader context of information space. The ultimate objective is, not ‘control’ of cyberspace, but control of information, a vision that dominates China’s cyber operations.”

The National Cybersecurity Strategy issued by the White House last year singles out China as the biggest cyber threat to U.S. interests. While the United States government does contract certain aspects of its cyber operations to companies in the private sector, it does not follow China’s example in promoting the wholesale theft of state and corporate secrets for the commercial benefit of its own private industries.

Dave Aitel, a co-author of the Margin Research report and former computer scientist at the U.S. National Security Agency, said it’s nice to see that Chinese cybersecurity firms have to deal with all of the same contracting headaches facing U.S. companies seeking work with the federal government.

“This leak just shows there’s layers of contractors all the way down,” Aitel said. “It’s pretty fun to see the Chinese version of it.”

Worse Than FailureCodeSOD: The Default Path

I've had the misfortune to inherit a VB .Net project which started life as a VB6 project, but changed halfway through. Such projects are at best confused, mixing idioms of VB6's not-quite object oriented programming with .NET's more modern OO paradigms, plus all the chaos that a mid-project language change entails. Honestly, one of the worst choices Microsoft ever made (and they have made a lot of bad choices) was trying to pretend that VB6 could easily transition into VB .Net. It was a lie that too many managers fell for, and too many developers had to try and make true.

Maurice inherited one of these projects. Even worse, the project started in a municipal IT department, then was handed off to a large consulting company. Said consulting company then subcontracted the work out to the lowest bidder, who also subcontracted out to an even lower bidder. Things spiraled out of control, and the resulting project had 5,188 GOTO statements in 1321 code files. None of the code used Option Explicit (which requires you to define variables before you use them), or Option Strict (which causes errors when you misuse implicit data-type conversions). In lieu of any error handling, it just pops up message boxes when things go wrong.

Private Function getDefaultPath(ByRef obj As Object, ByRef Categoryid As Integer) As String
    Dim sQRY As String
    Dim dtSysType As New DataTable
    Dim iMPTaxYear As Integer
    Dim lEffTaxYear As Long
    Dim dtControl As New DataTable
    Const sSDATNew As String = "NC"
    getDefaultPath = False
    sQRY = "select TAXBILLINGYEAR from t_controltable"
    dtControl = goDB.GetDataTable("Control", sQRY)
    iMPTaxYear = dtControl.Rows(0).Item("TAXBILLINGYEAR")
    'iMPTaxYear = CShort(cmbTaxYear.Text)
    If goCalendar.effTaxYearByTaxYear(iMPTaxYear, lEffTaxYear) Then

    End If

    sQRY = " "
    sQRY = "select * from T_SysType where MTYPECODE = '" & sSDATNew & "'" & _
        " and msystypecategoryid = " & Categoryid & " and meffstatus = 'A' and " & _
        lEffTaxYear & " between mbegTaxYear and mendTaxYear"
    dtSysType = goDB.GetDataTable("SysType", sQRY)

    If dtSysType.Rows.Count > 0 Then
        obj.Text = dtSysType.Rows(0).Item("MSYSTYPEVALUE1")
    Else
        obj.Text = ""
    End If

    getDefaultPath = True
End Function

obj is defined as Object, but is in fact a TextBox. The function is called getDefaultPath, which is not what it seems to do. What does it do?

Well, it looks up the TAXBILLINGYEAR from a table called t_controltable. It runs this query by using a variable called goDB which, thanks to Hungarian notation, I know is a global object. I'm not going to get too upset about reusing a single database connection as a global object, but it's definitely hinting at other issues in the code.

We check only the first row from that query, which shows a great deal of optimism about how the data is actually stored in the table. While there are many ways to ensure that tables store data in sorted order, an ORDER BY clause would go a long way to making the query clear. Also, since we only need one row, a TOP N or some equivalent would be nice.

Then we use a global calendar object to do absolutely nothing in our if statement.

That leads us to the second query. Since Categoryid is an integer and lEffTaxYear is a long, the string concatenation here is a potential SQL injection that can't actually be exploited- this time. We run that query, and then check the number of rows- a sane check which we didn't do for the last query- and then once again, only look at the first row.
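For contrast, the same two lookups in a language with parameterized queries are short and safe. A sketch in Python with sqlite3- table and column names are borrowed from the snippet above purely for illustration; this is not the fix Maurice shipped:

```python
import sqlite3

def latest_tax_billing_year(conn: sqlite3.Connection) -> int:
    # Ask the database for exactly the row we want, instead of
    # fetching everything and hoping row 0 is the right one.
    row = conn.execute(
        "SELECT TAXBILLINGYEAR FROM t_controltable "
        "ORDER BY TAXBILLINGYEAR DESC LIMIT 1"
    ).fetchone()
    return row[0]

def systype_value(conn: sqlite3.Connection, category_id: int,
                  eff_tax_year: int) -> str:
    # Placeholders keep the values out of the SQL text entirely,
    # so the injection question never even comes up.
    row = conn.execute(
        "SELECT MSYSTYPEVALUE1 FROM T_SysType "
        "WHERE MTYPECODE = 'NC' AND msystypecategoryid = ? "
        "AND meffstatus = 'A' AND ? BETWEEN mbegTaxYear AND mendTaxYear",
        (category_id, eff_tax_year),
    ).fetchone()
    return row[0] if row else ""
```

The explicit ORDER BY plus LIMIT 1 removes the guesswork about which row comes back first, and returning the string (instead of poking a text box) keeps the function reusable.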

I'm going to note that MSYSTYPEVALUE1 may or may not be a "default path", but I certainly have no idea what they're talking about and what data this function is actually getting here. The name of the function and the function of the function seem disconnected.

In any case, I especially like that it doesn't return a value, but directly mutates the text box, ensuring minimal reusability of the function. It could have returned a string, instead.

Speaking of returning strings, that gets us to our final bonus. It does return a string- a string of "True", using the "delightful" functionName = returnValue syntax. Presumably, this is meant to represent a success condition, but it only ever returns true, concealing any failures (or, more likely, just bubbling up an exception). The fact that the return value is a string hints at another code smell- loads of stringly typed data.

The "good" news is that what it took layers of subcontractors to destroy, Maurice's team is going to fix by June. Well, that's the schedule anyway.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsThe Grand Mothering

Author: Amy Lyons I meant to birth children but they slipped my mind. I should have ransom-noted a reminder with the black and white word-magnets on my aughts refrigerator, though those sudden stories trended toward pronoun erasure and my sketchy memory, even as a twenty-something, would have slotted a roommate as the directive’s addressee. My […]

The post The Grand Mothering appeared first on 365tomorrows.


Planet DebianNiels Thykier: Expanding on the Language Server (LSP) support for debian/control

I have spent some more time on improving my language server for debian/control. Today, I managed to provide the following features:

  • The X- style prefixes for field names are now understood and handled. This means the language server now considers XC-Package-Type the same as Package-Type.

  • More diagnostics:

    • Fields without values now trigger an error marker
    • Duplicated fields now trigger an error marker
    • Fields used in the wrong paragraph now trigger an error marker
    • Typos in field names or values now trigger a warning marker. For field names, X- style prefixes are stripped before typo detection is done.
    • The value of the Section field is now validated against a dataset of known sections and trigger a warning marker if not known.
  • The "on-save trim end of line whitespace" now works. I had a logic bug in the server side code that made it submit "no change" edits to the editor.

  • The language server now provides "hover" documentation for field names. There is a small screenshot of this below. Sadly, emacs either does not support markdown here or does not announce that support. For now, all the documentation is authored in markdown and the language server tags it as either markdown or plaintext depending on the announced support.

  • The language server now provides quick fixes for some of the more trivial problems such as deprecated fields or typos of fields and values.

  • Added more known fields including the XS-Autobuild field for non-free packages along with a link to the relevant devref section in its hover doc.

This covers basically all my known omissions from last update except spellchecking of the Description field.
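None of the server internals appear in the post, but the new paragraph-level diagnostics amount to simple checks over parsed fields. A toy sketch in Python- the field handling, names and messages here are my own guesses at the shape of the logic, not debputy's actual code:

```python
# Known X- style prefixes on field names (the X[CBS]- variants).
PREFIXES = ("X", "XC", "XB", "XS", "XBS", "XCBS")

def strip_x_prefix(field: str) -> str:
    """XC-Package-Type and Package-Type name the same field."""
    prefix, dash, rest = field.partition("-")
    return rest if dash and prefix in PREFIXES else field

def check_paragraph(lines):
    """Flag duplicated fields and fields without a value in one
    deb822-style paragraph; returns (line_no, message) tuples."""
    seen = set()
    problems = []
    for no, line in enumerate(lines, start=1):
        if line.startswith((" ", "\t")) or ":" not in line:
            continue  # continuation line; a real parser tracks more state
        name, _, value = line.partition(":")
        field = strip_x_prefix(name.strip())
        if field.lower() in seen:
            problems.append((no, f"duplicate field {name.strip()}"))
        seen.add(field.lower())
        if not value.strip():
            problems.append((no, f"field {name.strip()} has no value"))
    return problems
```

A language server then maps each (line, message) pair onto an LSP diagnostic with a range and severity; the checking itself is the easy part.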

An image of emacs showing documentation for the Provides field from the language server.

Spellchecking

Personally, I feel spellchecking would be a very welcome addition to the current feature set. However, reviewing my options, it seems that most of the spellchecking python libraries out there are not packaged for Debian, or at least not under the names I assumed they would have.

The alternative is to pipe the spellchecking to another program like aspell list. I did not test this fully, but aspell list does seem to do some input buffering that I cannot easily disable (at least not from the shell). Though, either way, the logic for this will not be trivial and aspell list does not seem to include the corrections either. So best case, you would get typo markers but no suggestions for what you should have typed. Not ideal.
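For what it is worth, aspell also has an ispell-compatible pipe mode (`aspell -a` / `aspell pipe`) whose output does include suggestions: each misspelling comes back on a line starting with `&`. A sketch of parsing that format in Python, using a canned output line since the subprocess plumbing is the part that remains unsolved:

```python
def parse_aspell_line(line: str):
    """Parse one response line from `aspell -a` pipe mode.

    Returns (word, offset, suggestions) for a misspelling ('&' line),
    or None for correct words ('*') and other responses.
    """
    if not line.startswith("&"):
        return None
    head, _, tail = line.partition(":")
    # head looks like: "& wrod 4 7" -> marker, word, suggestion count, offset
    _marker, word, _count, offset = head.split()
    suggestions = [s.strip() for s in tail.split(",")]
    return word, int(offset), suggestions
```

The offset gives the column of the typo, which is exactly what a diagnostic range needs; the suggestions could feed quick fixes.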

Additionally, I am also concerned with the performance for this feature. For d/control, it will be a trivial matter in practice. However, I would be reusing this for d/changelog which is 99% free text with plenty of room for typos. For a regular linter, some slowness is acceptable as it is basically a batch tool. However, for a language server, this potentially translates into latency for your edits and that gets annoying.

While it is definitely on my long term todo list, I am a bit afraid that it can easily become a time sink. Admittedly, this does annoy me, because I wanted to cross off at least one of Otto's requested features soon.

On wrap-and-sort support

The other obvious request from Otto would be to automate wrap-and-sort formatting. Here, the problem is that "we" in Debian do not agree on the one true formatting of debian/control. In fact, I am fairly certain we do not even agree on whether we should all use wrap-and-sort. This implies we need a style configuration.

However, if we have a style configuration per person, then you get style "ping-pong" for packages where the co-maintainers do not all have the same style configuration. Additionally, it is very likely that you are a member of multiple packaging teams or groups that all have their own unique style. Ergo, only having a personal config file is doomed to fail.

The only "sane" option here that I can think of is to have or support "per package" style configuration. Something that would be committed to git, so the tooling would automatically pick up the configuration. Obviously, that is not fun for large packaging teams where you have to maintain one file per package if you want a consistent style across all packages. But it beats "style ping-pong" any day of the week.

Note that I am perfectly open to having a personal configuration file as a fallback for when the "per package" configuration file is absent.

The second problem is the question of which format to use and what to name this file. Since file formats and naming has never been controversial at all, this will obviously be the easy part of this problem. But the file should be parsable by both wrap-and-sort and the language server, so you get the same result regardless of which tool you use. If we do not ensure this, then we still have the style ping-pong problem as people use different tools.
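To make the trade-off concrete, a "per package" style file might look something like this- the file name, field names and values below are entirely hypothetical; nothing parses this today:

```
# debian/style.conf (hypothetical name and format)
Formatter: wrap-and-sort
Wrap-Always: yes
Sort-Binary-Packages: yes
Keep-Trailing-Comma: yes
```

Using a deb822-style format would at least mean both wrap-and-sort and the language server could reuse parsers they already have.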

This also seems like time sink with no end. So, what next then...?

What next?

On the language server front, I will have a look at its support for providing semantic hints to the editors that might be used for syntax highlighting. While I think most common Debian editors have built-in syntax highlighting already, I would like this language server to stand on its own. I would like us to be in a situation where we do not have to implement yet another editor extension for Debian packaging files. At least not for editors that support the LSP spec.

On a different front, I have an idea for how we go about relationship related substvars. It is not directly related to this language server, except I got triggered by the language server "missing" a diagnostic for reminding people to add the magic Depends: ${misc:Depends}[, ${shlibs:Depends}] boilerplate. The magic boilerplate that you have to write even though we really should just fix this at a tooling level instead. Energy permitting, I will formulate a proposal for that and send it to debian-devel.

Beyond that, I think I might start adding support for another file. I also need to wrap up my python-debian branch, so I can get the position support into Debian soon, which would remove one papercut for using this language server.

Finally, it might be interesting to see if I can extract a "batch-linter" version of the diagnostics and related quickfix features. If nothing else, the "linter" variant would enable many of you to get a "mini-Lintian" without having to do a package build first.

365 TomorrowsSomeWare

Author: Majoki “You occupy space. Therefore you exist.” “Does that Descartes bastardization work in graveyards?” “The dead occupy space.” “Well in a diminishing returns kind of way. You might want to factor biological depreciation into your axiom.” Stenslen eyed Bihrduur icily. “You don’t want this to work.” “No. Not really,” Bihrduur replied. “Call it my […]

The post SomeWare appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Route to Success

Imagine you're building a PHP web application, and you need to display different forms on different pages. Now, for most of us, we'd likely be using some framework to solve this problem, but even if we weren't, the obvious solution of "use a different PHP file for each screen" is a fairly obvious solution.

Dare I say, too obvious a solution?

What if we could have one page handle requests for many different URLs? Think of the convenience of having ONE file to run your entire application? Think of the ifs.

   	if( substr( $_SERVER['REQUEST_URI'], strrpos($_SERVER['REQUEST_URI'], "=" ) + 1 ) == "request" ) 
   	{
   		echo "<form name=\"request\" action=\"\" method=\"post\" enctype=\"multipart/form-data\" onsubmit=\"return validrequest();\">\n";
   	}
   	else if( substr( $_SERVER['REQUEST_URI'], strrpos($_SERVER['REQUEST_URI'], "=" ) + 1 ) == "response" ) 
   	{
   		echo "<form action=\"\" method=\"post\" onsubmit=\"return validresponse()\">\n";
   	}
   	else if( substr( substr( $_SERVER['REQUEST_URI'], stripos($_SERVER['REQUEST_URI'], "=" ) + 1 ), 0, 7 ) == "respond" ) 
   	{
   		echo "<form name=\"respond\" action=\"\" method=\"post\" enctype=\"multipart/form-data\" onsubmit=\"return validresponse();\">\n";
   	}
   	else if( substr( substr( $_SERVER['REQUEST_URI'], stripos($_SERVER['REQUEST_URI'], "=" ) + 1 ), 0, 6 ) == "upload" )
   	{
   		echo "<form name=\"upload\" method=\"post\" action=\"\" enctype=\"multipart/form-data\">\n";
   	}
   	else if( substr( substr( $_SERVER['REQUEST_URI'], stripos($_SERVER['REQUEST_URI'], "=" ) + 1 ), 0, 8 ) == "showitem" ) 
   	{
   		echo "<form name=\"showitem\" action=\"\" method=\"post\" enctype=\"multipart/form-data\">\n";
   	}
   	else if( substr( substr( $_SERVER['REQUEST_URI'], stripos($_SERVER['REQUEST_URI'], "=" ) + 1 ), 0, 7 ) == "adduser" ) 
   	{
   		echo "<form name=\"adduser\" action=\"\" method=\"post\" onsubmit=\"return validadduser();\">\n";
   	}
   	else if( substr( substr( $_SERVER['REQUEST_URI'], stripos($_SERVER['REQUEST_URI'], "=" ) + 1 ), 0, 8 ) == "edituser" ) 
   	{
   		echo "<form name=\"adduser\" action=\"\" method=\"post\" onsubmit=\"return validedituser();\">\n";
   	}
   	else
   	{
   		echo "<form action=\"\" method=\"post\">\n";
   	}

Someone reinvented routing, badly. We split the requested URL on an =, so that we can compare the tail of the string against one of our defined routes. Oops, no, we don't split, we take a substring, which means we couldn't have a route upload_image or showitems, since they'd collide with upload and showitem.
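To make the collision concrete, here is a toy version in Python- the fixed-length prefix comparison from the original next to the exact-match dispatch table most frameworks use (route names come from the snippet; everything else is illustrative):

```python
def route_by_prefix(action: str) -> str:
    """Mimics the original logic: compare a fixed-length prefix."""
    if action[:6] == "upload":
        return "upload form"
    if action[:8] == "showitem":
        return "showitem form"
    return "default form"

ROUTES = {
    "upload": "upload form",
    "upload_image": "upload image form",
    "showitem": "showitem form",
    "showitems": "showitems form",
}

def route_exact(action: str) -> str:
    """Dispatch table: each route matches only itself."""
    return ROUTES.get(action, "default form")
```

With the prefix version, `upload_image` silently falls into the `upload` branch and `showitems` into `showitem`; the dispatch table has no such collisions and adds routes without another else-if.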

And yes, you can safely assume that there are a bunch more ifs that control which specific form fields get output.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.


Krebs on SecurityFeds Seize LockBit Ransomware Websites, Offer Decryption Tools, Troll Affiliates

U.S. and U.K. authorities have seized the darknet websites run by LockBit, a prolific and destructive ransomware group that has claimed more than 2,000 victims worldwide and extorted over $120 million in payments. Instead of listing data stolen from ransomware victims who didn’t pay, LockBit’s victim shaming website now offers free recovery tools, as well as news about arrests and criminal charges involving LockBit affiliates.

Investigators used the existing design on LockBit’s victim shaming website to feature press releases and free decryption tools.

Dubbed “Operation Cronos,” the law enforcement action involved the seizure of nearly three-dozen servers; the arrest of two alleged LockBit members; the unsealing of two indictments; the release of a free LockBit decryption tool; and the freezing of more than 200 cryptocurrency accounts thought to be tied to the gang’s activities.

LockBit members have executed attacks against thousands of victims in the United States and around the world, according to the U.S. Department of Justice (DOJ). First surfacing in September 2019, the gang is estimated to have made hundreds of millions of U.S. dollars in ransom demands, and extorted over $120 million in ransom payments.

LockBit operated as a ransomware-as-a-service group, wherein the ransomware gang takes care of everything from the bulletproof hosting and domains to the development and maintenance of the malware. Meanwhile, affiliates are solely responsible for finding new victims, and can reap 60 to 80 percent of any ransom amount ultimately paid to the group.

A statement on Operation Cronos from the European police agency Europol said the months-long infiltration resulted in the compromise of LockBit’s primary platform and other critical infrastructure, including the takedown of 34 servers in the Netherlands, Germany, Finland, France, Switzerland, Australia, the United States and the United Kingdom. Europol said two suspected LockBit actors were arrested in Poland and Ukraine, but no further information has been released about those detained.

The DOJ today unsealed indictments against two Russian men alleged to be active members of LockBit. The government says Russian national Artur Sungatov used LockBit ransomware against victims in manufacturing, logistics, insurance and other companies throughout the United States.

Ivan Gennadievich Kondratyev, a.k.a. “Bassterlord,” allegedly deployed LockBit against targets in the United States, Singapore, Taiwan, and Lebanon. Kondratyev is also charged (PDF) with three criminal counts arising from his alleged use of the Sodinokibi (aka “REvil“) ransomware variant to encrypt data, exfiltrate victim information, and extort a ransom payment from a corporate victim based in Alameda County, California.

With the indictments of Sungatov and Kondratyev, a total of five LockBit affiliates now have been officially charged. In May 2023, U.S. authorities unsealed indictments against two alleged LockBit affiliates, Mikhail “Wazawaka” Matveev and Mikhail Vasiliev.

Vasiliev, 35, of Bradford, Ontario, Canada, is in custody in Canada awaiting extradition to the United States (the complaint against Vasiliev is at this PDF). Matveev remains at large, presumably still in Russia. In January 2022, KrebsOnSecurity published Who is the Network Access Broker ‘Wazawaka,’ which followed clues from Wazawaka’s many pseudonyms and contact details on the Russian-language cybercrime forums back to a 31-year-old Mikhail Matveev from Abaza, RU.

An FBI wanted poster for Matveev.

In June 2023, Russian national Ruslan Magomedovich Astamirov was charged in New Jersey for his participation in the LockBit conspiracy, including the deployment of LockBit against victims in Florida, Japan, France, and Kenya. Astamirov is currently in custody in the United States awaiting trial.

LockBit was known to have recruited affiliates that worked with multiple ransomware groups simultaneously, and it’s unclear what impact this takedown may have on competing ransomware affiliate operations. The security firm ProDaft said on Twitter/X that the infiltration of LockBit by investigators provided “in-depth visibility into each affiliate’s structures, including ties with other notorious groups such as FIN7, Wizard Spider, and EvilCorp.”

In a lengthy thread about the LockBit takedown on the Russian-language cybercrime forum XSS, one of the gang’s leaders said the FBI and the U.K.’s National Crime Agency (NCA) had infiltrated its servers using a known vulnerability in PHP, a scripting language that is widely used in Web development.

Several denizens of XSS wondered aloud why the PHP flaw was not flagged by LockBit’s vaunted “Bug Bounty” program, which promised a financial reward to affiliates who could find and quietly report any security vulnerabilities threatening to undermine LockBit’s online infrastructure.

This prompted several XSS members to start posting memes taunting the group about the security failure.

“Does it mean that the FBI provided a pentesting service to the affiliate program?,” one denizen quipped. “Or did they decide to take part in the bug bounty program? :):)”

Federal investigators also appear to be trolling LockBit members with their seizure notices. LockBit’s data leak site previously featured a countdown timer for each victim organization listed, indicating the time remaining for the victim to pay a ransom demand before their stolen files would be published online. Now, the top entry on the shaming site is a countdown timer until the public doxing of “LockBitSupp,” the unofficial spokesperson or figurehead for the LockBit gang.

“Who is LockbitSupp?” the teaser reads. “The $10m question.”

In January 2024, LockBitSupp told XSS forum members he was disappointed the FBI hadn’t offered a reward for his doxing and/or arrest, and that in response he was placing a bounty on his own head — offering $10 million to anyone who could discover his real name.

“My god, who needs me?,” LockBitSupp wrote on Jan. 22, 2024. “There is not even a reward out for me on the FBI website. By the way, I want to use this chance to increase the reward amount for a person who can tell me my full name from USD 1 million to USD 10 million. The person who will find out my name, tell it to me and explain how they were able to find it out will get USD 10 million. Please take note that when looking for criminals, the FBI uses unclear wording offering a reward of UP TO USD 10 million; this means that the FBI can pay you USD 100, because technically, it’s an amount UP TO 10 million. On the other hand, I am willing to pay USD 10 million, no more and no less.”

Mark Stockley, cybersecurity evangelist at the security firm Malwarebytes, said the NCA is obviously trolling the LockBit group and LockBitSupp.

“I don’t think this is an accident—this is how ransomware groups talk to each other,” Stockley said. “This is law enforcement taking the time to enjoy its moment, and humiliate LockBit in its own vernacular, presumably so it loses face.”

In a press conference today, the FBI said Operation Cronos included investigative assistance from the Gendarmerie-C3N in France; the State Criminal Police Office L-K-A and Federal Criminal Police Office in Germany; Fedpol and Zurich Cantonal Police in Switzerland; the National Police Agency in Japan; the Australian Federal Police; the Swedish Police Authority; the National Bureau of Investigation in Finland; the Royal Canadian Mounted Police; and the National Police in the Netherlands.

The Justice Department said victims targeted by LockBit should contact the FBI at https://lockbitvictims.ic3.gov/ to determine whether affected systems can be successfully decrypted. In addition, the Japanese Police, supported by Europol, have released a recovery tool designed to recover files encrypted by the LockBit 3.0 Black Ransomware.

Planet DebianNiels Thykier: Language Server (LSP) support for debian/control

About a month ago, Otto Kekäläinen asked for editor extensions for debian related files on the debian-devel mailing list. In that thread, I concluded that what we were missing was a "Language Server" (LSP) for our packaging files.

Last week, I started a prototype for such an LSP for the debian/control file as a starting point based on the pygls library. The initial prototype worked and I could do very basic diagnostics plus completion suggestions for field names.

Current features

I got 4 basic features implemented, though I have only been able to test two of them in emacs.

  • Diagnostics or linting of basic issues.
  • Completion suggestions for all known field names that I could think of and values for some fields.
  • Folding ranges (untested). This feature enables the editor to "fold" multiple lines. It is often used with multi-line comments and that is the feature currently supported.
  • On save, trim trailing whitespace at the end of lines (untested). Might not be registered correctly on the server end.

Despite its very limited feature set, I feel editing debian/control in emacs is now a much more pleasant experience.

Coming back to the features that Otto requested, the above covers a grand total of zero. Sorry, Otto. It is not you, it is me.

Completion suggestions

For completion, all known fields are completed. Place the cursor at the start of the line or in a partially written out field name and trigger the completion in your editor. In my case, I can type R-R-R and trigger the completion and the editor will automatically replace it with Rules-Requires-Root as the only applicable match. Your mileage may vary since I delegate most of the filtering to the editor, meaning the editor has the final say about whether your input matches anything.

The only filtering done on the server side is that the server prunes out fields already used in the paragraph, so you are not presented with the option to repeat an already used field, which would be an error. Admittedly, not an error the language server detects at the moment, but other tools will.
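The server-side pruning described above is a one-liner in spirit. A toy version in Python- the field list is abbreviated and the names are mine; the real server knows far more fields:

```python
# A small sample of debian/control field names for illustration.
KNOWN_FIELDS = ["Package", "Architecture", "Depends", "Description",
                "Multi-Arch", "Rules-Requires-Root", "Section"]

def completion_candidates(used_fields):
    """Offer every known field the paragraph does not already use,
    so the editor never suggests an (erroneous) duplicate."""
    used = {f.lower() for f in used_fields}
    return [f for f in KNOWN_FIELDS if f.lower() not in used]
```

The editor then applies its own fuzzy or prefix matching (like the R-R-R expansion above) on top of whatever candidate list the server returns.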

When completing a field, if the field only has one non-default value such as Essential which can be either no (the default, but you should not use it) or yes, then the completion suggestion will complete the field along with its value.

This is mostly only applicable for "yes/no" fields such as Essential and Protected. But it does also trigger for Package-Type at the moment.

As for completing values, here the language server can complete the value for simple fields such as "yes/no" fields, Multi-Arch, Package-Type and Priority. I intend to add support for Section as well - maybe also Architecture.

Diagnostics

On the diagnostic front, I have added multiple diagnostics:

  • An error marker for syntax errors.
  • An error marker for missing a mandatory field like Package or Architecture. This also includes Standards-Version, which is admittedly mandatory by policy rather than because the tooling falls apart without it.
  • An error marker for adding Multi-Arch: same to an Architecture: all package.
  • Error marker for providing an unknown value to a field with a set of known values. As an example, writing foo in Multi-Arch would trigger this one.
  • Warning marker for using deprecated fields such as DM-Upload-Allowed, or when setting a field to its default value for fields like Essential. The latter rule only applies to selected fields and notably Multi-Arch: no does not trigger a warning.
  • Info level marker if a field like Priority duplicates the value of the Source paragraph.
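As an illustration of the Multi-Arch marker, the check itself is tiny once a paragraph is parsed into a dict- the message wording here is mine, not the server's:

```python
def multi_arch_same_error(paragraph):
    """Return an error message if a binary paragraph combines
    Multi-Arch: same with Architecture: all, else None."""
    if (paragraph.get("Multi-Arch") == "same"
            and paragraph.get("Architecture") == "all"):
        return "Multi-Arch: same is not valid for Architecture: all packages"
    return None
```

Most of the diagnostics above follow the same pattern: a small predicate over the parsed paragraph, mapped to an error, warning or info severity.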

Notable omission at this time:

  • No errors are raised if a field does not have a value.
  • No errors are raised if a field is duplicated inside a paragraph.
  • No errors are raised if a field is used in the wrong paragraph.
  • No spellchecking of the Description field.
  • No understanding that Foo and X[CBS]-Foo are related. As an example, XC-Package-Type is completely ignored despite being the old name for Package-Type.
  • Quick fixes to solve these problems... :)

Trying it out

If you want to try, it is sadly a bit more involved due to things not being uploaded or merged yet. Also, be advised that I will regularly rebase my git branches as I revise the code.

The setup:

  • Build and install the deb of the main branch of pygls from https://salsa.debian.org/debian/pygls The package is in NEW and hopefully this step will soon just be a regular apt install.
  • Build and install the deb of the rts-locatable branch of my python-debian fork from https://salsa.debian.org/nthykier/python-debian There is a draft MR of it as well on the main repo.
  • Build and install the deb of the lsp-support branch of debputy from https://salsa.debian.org/debian/debputy
  • Configure your editor to run debputy lsp debian/control as the language server for debian/control. How to do this depends on your editor. I figured out how to do it for emacs (see below). I also found a guide for neovim at https://neovim.io/doc/user/lsp. Note that debputy can be run from any directory here. The debian/control is a reference to the file format and not a concrete file in this case.

Obviously, the setup should get easier over time. The first three bullet points should eventually be resolved by merges and uploads, meaning you end up with an apt install command instead of them.

For the editor part, I would obviously love it if we can add snippets for editors to make them automatically pick up the language server when the relevant file is installed.

Using the debputy LSP in emacs

The guide I found so far relies on eglot. The guide below assumes you have the elpa-dpkg-dev-el package installed for the debian-control-mode. Though it should be a trivial matter to replace debian-control-mode with a different mode if you use a different mode for your debian/control file.

In your emacs init file (such as ~/.emacs or ~/.emacs.d/init.el), you add the following blob.

(with-eval-after-load 'eglot
    (add-to-list 'eglot-server-programs
        '(debian-control-mode . ("debputy" "lsp" "debian/control"))))

Once you open the debian/control file in emacs, you can type M-x eglot to activate the language server. Not sure why that manual step is needed and if someone knows how to automate it such that eglot activates automatically on opening debian/control, please let me know.
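One way to skip the manual M-x eglot step may be eglot's own eglot-ensure function, hung off the major mode's hook. I have not verified this against debian-control-mode specifically, but the standard recipe looks like this:

```elisp
(add-hook 'debian-control-mode-hook 'eglot-ensure)
```

With that in the init file, eglot should start (or reuse) the configured server whenever a buffer enters debian-control-mode.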

For testing completions, I often have to manually activate them (with C-M-i or M-x complete-symbol). Though, it is a bit unclear to me whether this is an emacs setting that I have not toggled or something I need to do on the language server side.

From here

As next steps, I will probably look into fixing some of the "known missing" items under diagnostics. The quick fix would be a considerable improvement to assisting users.

In the not so distant future, I will probably start to look at supporting other files such as debian/changelog or look into supporting configuration, so I can cover formatting features like wrap-and-sort.

I am also very much open to how we can provide integrations for this feature into editors by default. I will probably create a separate binary package for specifically this feature that pulls all relevant dependencies that would be able to provide editor integrations as well.

Planet DebianNiels Thykier: Language Server (LSP) support for debian/control

Work done:

  • [X] No errors are raised if a field does not have a value.
  • [X] No errors are raised if a field is duplicated inside a paragraph.
  • [X] No errors are raised if a field is used in the wrong paragraph.
  • [ ] No spellchecking of the Description field.
  • [X] No understanding that Foo and X[CBS]-Foo are related. As an example, XC-Package-Type is completely ignored despite being the old name for Package-Type.
  • [X] Fixed the on-save trim end of line whitespace. Bug in the server end.
  • [X] Hover text for field names

Planet DebianJonathan Dowland: Propaganda — A Secret Wish

How can I not have done one of these for Propaganda already?

Propaganda: A Secret Wish, and 12"s of Duel and p:Machinery

Propaganda/A Secret Wish is criminally underrated. There seem to be a zillion variants of each track, which keeps completionists busy. Of the variants of Jewel/Duel/etc., I'm fond of the 03:10, almost instrumental mix of Jewel; preferring the lyrics to be exclusive to the more radio friendly Duel (04:42); I don't need them conflating (Jewel 06:21); but there are further depths I've yet to explore (Do Well cassette mix, the 20:07 The First Cut / Duel / Jewel (Cut Rough)/ Wonder / Bejewelled mega-mix...)

I recently watched The Fall of the House of Usher which I think has Poe lodged in my brain, which is how this album popped back into my consciousness this morning, with the opening lines of Dream within a Dream.

But are they Goth?

Worse Than FailureCodeSOD: Merge the Files

XML is, arguably, an overspecified language. Every aspect of XML has a standard to interact with it or transform it or manipulate it, and that standard is also defined in XML. Each specification related to XML fits together into a soup that does all the things and solves every problem you could possibly have.

Though Owe had a problem that didn't quite map to the XML specification(s). Specifically, he needed to parse absolutely broken XML files.

bool Sorter::Work()
{
	if(this->__disposed)
		throw gcnew ObjectDisposedException("Object has been disposed");
	
	if(this->_splitFiles)
	{
		List<Document^>^ docs = gcnew List<Document^>();
		for each(FileInfo ^file in this->_sourceDir->GetFiles("*.xml"))
		{
			XElement ^xml = XElement::Load(file->FullName);
			xml->Save(file->FullName);
			long count = 0;
			for each(XElement^ rec in xml->Elements("REC"))
			{
					if(rec->Attribute("NAME")->Value == this->_mainLevel)
						count++;
			}
			if(count < 2)
				continue;
			StreamReader ^reader = gcnew StreamReader(file->OpenRead());
			StringBuilder ^sb = gcnew StringBuilder("<FILE NAME=\"blah\">");
			bool first = true;
			bool added = false;
			Regex ^isRecOrFld = gcnew Regex("^\\s+\\<[REC|FLD].*$");
			Regex ^isEndOfRecOrFld = gcnew Regex("^\\s+\\<\\/[REC|FLD].*$");
			Regex ^isMainLevelRec = gcnew Regex("^\\s+\\<REC NAME=\\\""+this->_mainLevel+"\\\".*$");
			while(!reader->EndOfStream)
			{
				String ^line = reader->ReadLine();
				if(!isRecOrFld->IsMatch(line) && !isEndOfRecOrFld->IsMatch(line))
					continue;
				if(isMainLevelRec->IsMatch(line) && !String::IsNullOrEmpty(sb->ToString()) && !first)
				{
					sb->AppendLine("</FILE>");
					XElement^ xml = XElement::Parse(sb->ToString());
					String ^key = String::Empty;
					for each(XElement ^rec in xml->Elements("REC"))
					{
						key = this->findKey(rec);
						if(!String::IsNullOrEmpty(key))
							break;
					}
					docs->Add(gcnew Document(key, gcnew XElement("container", xml)));
					sb = gcnew StringBuilder("<FILE NAME=\"blah\">");
					first = true;
					added = true;
				}
				sb->AppendLine(line);
				if(first && !added)
												first = false;
				if(added)
												added = false;
			}
			delete reader;
			file->Delete();
		}
		int i = 1;
		for each(Document ^doc in docs)
		{
										XElement ^splitted = doc->GetData()->Element("FILE");
										splitted->Save(Path::Combine(this->_sourceDir->FullName, this->_docPrefix + "_" + i++ + ".xml"));
										delete splitted;
		}
		delete docs;
	}
	List<Document^>^ docs = gcnew List<Document^>();
	for each(FileInfo ^file in this->_sourceDir->GetFiles(String::Format("{0}*.xml", this->_docPrefix)))
	{
		XElement ^xml = XElement::Load(file->FullName);
		String ^key = findKey(xml->Element("REC")); // will always be first element in document order
		Document ^doc = gcnew Document(key, gcnew XElement("data", xml));
		docs->Add(doc);
		file->Delete();
	}
	List<Document^>^ sorted = MergeSorter::MergeSort(docs);
	XElement ^sortedMergedXml = gcnew XElement("FILE", gcnew XAttribute("NAME", "MergedStuff"));
	for each(Document ^doc in sorted)
	{
		sortedMergedXml->Add(doc->GetData()->Element("FILE")->Elements("REC"));
	}
	sortedMergedXml->Save(Path::Combine(this->_sourceDir->FullName, String::Format("{0}_mergedAndSorted.xml", this->_docPrefix)));
	// returning a sane value
	return true;
}

This is in the .NET dialect of C++, so the odd ^ sigil is a handle to a garbage collected object.

There's a lot going on here. The purpose of this function is to possibly split some pre-merged XML files into separate XML files, and then take a set of XML files and merge them back together (properly sorted).

So we start by confirming that this object hasn't been disposed, and throwing an exception if it has. Then we try and split.

To do this, we search the directory for "*.xml", and then we… load the file and then save the file? The belief about this code is that it corrects the whitespace, because later on we require some whitespace- but the .NET XML writer doesn't add whitespace, only preserves it, so I suspect this line isn't necessary- or at least shouldn't be. I can envision a world where this somehow makes the code work for reasons that are best not thought about.

Owe writes, to the preceding developers: "Thanks guys, I really appreciate this!"

Now, since we're iterating across an entire directory of XML files, some of the files have been pre-merged (and need to be unmerged), and others haven't been merged at all. How do we tell them apart? We find every element named "REC", and check if its "NAME" attribute is equivalent to our _mainLevel value. If there are at least two such elements, we know that this file has been pre-merged and thus needs to be unmerged.

Owe writes: "Thanks guys, I really appreciate this!"

And then we get into the dreaded parse XML with regex phase. This is done because the XML files aren't actually valid XML. So it's a mix of string operations and regex matches to try and interpret the data. And remember that whitespace that we thought we required back when we wrote the documents out? Well here's why: our regexes are matching on whitespace.

Owe writes: "Thanks guys, I really appreciate this!"

Once we've constructed all the documents in memory, we can then dump them out to a new set of files. And then, once that's done, we can reopen those files, because now the merging happens. Here we find all the "REC" elements and build new XML documents based off of them. Then a MergeSorter::MergeSort function actually does the merging- and honestly, I dread to think about what that looks like.

The merge sorter sorts the documents, but we actually want to output one document with the elements in that sorted order, so we create one last XML document, iterate across all our sorted document fragments, and then inject the "REC" elements into the output.

Owe writes: "Thanks guys, I really appreciate this!"

While the code and the entire process here is terrible, the core WTF is the "we need to store our XML with the elements sorted in a specific order". That's not what XML is for. But obviously, they don't know what XML is for, since they're doing things in their documents that can't successfully be parsed by an XML parser. Or, perhaps more accurately, they couldn't figure out how to parse as XML, hence the regexes and string munging.

Were the documents sensible, this whole thing could probably have been solved with some fairly straightforward (by XML standards) XQuery/XSLT operations. Instead, we have this. Thanks guys, I really appreciate this.
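Were the documents actually well-formed, even without XQuery/XSLT the job shrinks to a few lines in any language with a real XML parser. A minimal Python sketch, assuming valid inputs and a hypothetical KEY attribute as the sort key (neither of which the original code could count on):

```python
import xml.etree.ElementTree as ET

def merge_and_sort(paths, out_path):
    """Gather every REC element from the input files and write a single
    merged FILE document with the records in sorted order."""
    records = []
    for path in paths:
        # Each input is a FILE root containing REC children.
        records.extend(ET.parse(path).getroot().iter("REC"))
    # Hypothetical sort key: a KEY attribute on each record.
    records.sort(key=lambda rec: rec.get("KEY", ""))
    merged = ET.Element("FILE", {"NAME": "MergedStuff"})
    merged.extend(records)
    ET.ElementTree(merged).write(out_path)
```

No regexes, no significant whitespace, no write-then-reread round trips.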

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsEvocation

Author: Steve Smith, Staff Writer I have no memory of what came before. It’s as though I didn’t exist prior to this moment and have just come into existence and apparated into this crowd, in this hall, surrounded by the ordered chaos of these several hundred people. We’re collected here for a singular purpose, all […]

The post Evocation appeared first on 365tomorrows.

,

Planet DebianMatthew Garrett: Debugging an odd inability to stream video

We have a cabin out in the forest, and when I say "out in the forest" I mean "in a national forest subject to regulation by the US Forest Service" which means there's an extremely thick book describing the things we're allowed to do and (somewhat longer) not allowed to do. It's also down in the bottom of a valley surrounded by tall trees (the whole "forest" bit). There used to be AT&T copper but all that infrastructure burned down in a big fire back in 2021 and AT&T no longer supply new copper links, and Starlink isn't viable because of the whole "bottom of a valley surrounded by tall trees" thing along with regulations that prohibit us from putting up a big pole with a dish on top. Thankfully there's LTE towers nearby, so I'm simply using cellular data. Unfortunately my provider rate limits connections to video streaming services in order to push them down to roughly SD resolution. The easy workaround is just to VPN back to somewhere else, which in my case is just a Wireguard link back to San Francisco.

This worked perfectly for most things, but some streaming services simply wouldn't work at all. Attempting to load the video would just spin forever. Running tcpdump at the local end of the VPN endpoint showed a connection being established, some packets being exchanged, and then… nothing. The remote service appeared to just stop sending packets. Tcpdumping the remote end of the VPN showed the same thing. It wasn't until I looked at the traffic on the VPN endpoint's external interface that things began to become clear.

This probably needs some background. Most network infrastructure has a maximum allowable packet size, which is referred to as the Maximum Transmission Unit or MTU. For ethernet this defaults to 1500 bytes, and these days most links are able to handle packets of at least this size, so it's pretty typical to just assume that you'll be able to send a 1500 byte packet. But what's important to remember is that that doesn't mean you have 1500 bytes of packet payload - that 1500 bytes includes whatever protocol level headers are on there. For TCP/IP you're typically looking at spending around 40 bytes on the headers, leaving somewhere around 1460 bytes of usable payload. And if you're using a VPN, things get annoying. In this case the original packet becomes the payload of a new packet, which means it needs another set of TCP (or UDP) and IP headers, and probably also some VPN header. This still all needs to fit inside the MTU of the link the VPN packet is being sent over, so if the MTU of that is 1500, the effective MTU of the VPN interface has to be lower. For Wireguard, this works out to an effective MTU of 1420 bytes. That means simply sending a 1500 byte packet over a Wireguard (or any other VPN) link won't work - adding the additional headers gives you a total packet size of over 1500 bytes, and that won't fit into the underlying link's MTU of 1500.
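The overhead arithmetic is worth making concrete. A minimal sketch, treating the standard header sizes as given; the 32-byte WireGuard figure (16-byte message header plus 16-byte authentication tag) is my assumption of the commonly cited overhead:

```python
# Back-of-the-envelope MTU accounting for a WireGuard tunnel over a
# 1500-byte path.
IPV4, IPV6, UDP, TCP = 20, 40, 8, 20
WIREGUARD = 32  # assumed: 16-byte data-message header + 16-byte auth tag

def tunnel_mtu(link_mtu, outer_ip_header):
    """MTU left for packets carried inside the tunnel."""
    return link_mtu - outer_ip_header - UDP - WIREGUARD

# WireGuard's default MTU of 1420 assumes the worst case, an IPv6 outer
# header: 1500 - 40 - 8 - 32 = 1420.
inner_mtu = tunnel_mtu(1500, IPV6)

# Largest TCP payload (MSS) that fits through the tunnel for an IPv4
# flow: 1420 - 20 - 20 = 1380.
mss = inner_mtu - IPV4 - TCP

print(inner_mtu, mss)  # 1420 1380
```

That 1380 is the sort of value you'd clamp MSS to if you wanted to sidestep the problem entirely.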

And yet, things work. But how? Faced with a packet that's too big to fit into a link, there are two choices - break the packet up into multiple smaller packets ("fragmentation") or tell whoever's sending the packet to send smaller packets. Fragmentation seems like the obvious answer, so I'd encourage you to read Valerie Aurora's article on how fragmentation is more complicated than you think. tl;dr - if you can avoid fragmentation then you're going to have a better life. You can explicitly indicate that you don't want your packets to be fragmented by setting the Don't Fragment bit in your IP header; then, when your packet hits a link whose MTU it exceeds, the router there sends an ICMP packet back to the sender saying the packet is too big and what the actual MTU is, and the sender resends a smaller packet. This avoids all the hassle of handling fragments in exchange for the cost of a retransmit the first time the MTU is exceeded. It also typically works these days, which wasn't always the case - people had a nasty habit of dropping the ICMP packets telling the sender that the packet was too big, which broke everything.

What I saw when I tcpdumped on the remote VPN endpoint's external interface was that the connection was getting established, and then a 1500 byte packet would arrive (this is kind of the behaviour you'd expect for video - the connection handshaking involves a bunch of relatively small packets, and then once you start sending the video stream itself you start sending packets that are as large as possible in order to minimise overhead). This 1500 byte packet wouldn't fit down the Wireguard link, so the endpoint sent back an ICMP packet to the remote telling it to send smaller packets. The remote should then have sent a new, smaller packet - instead, about a second after sending the first 1500 byte packet, it sent that same 1500 byte packet. This is consistent with it ignoring the ICMP notification and just behaving as if the packet had been dropped.

All the services that were failing were failing in identical ways, and all were using Fastly as their CDN. I complained about this on social media and then somehow ended up in contact with the engineering team responsible for this sort of thing - I sent them a packet dump of the failure, they were able to reproduce it, and it got fixed. Hurray!

(Between me identifying the problem and it getting fixed I was able to work around it. The TCP header includes a Maximum Segment Size (MSS) field, which indicates the maximum size of the payload for this connection. iptables allows you to rewrite this, so on the VPN endpoint I simply rewrote the MSS to be small enough that the packets would fit inside the Wireguard MTU. This isn't a complete fix since it's done at the TCP level rather than the IP level - so any large UDP packets would still end up breaking)

I've no idea what the underlying issue was, and at the client end the failure was entirely opaque: the remote simply stopped sending me packets. The only reason I was able to debug this at all was because I controlled the other end of the VPN as well, and even then I wouldn't have been able to do anything about it other than being in the fortuitous situation of someone able to do something about it seeing my post. How many people go through their lives dealing with things just being broken and having no idea why, and how do we fix that?

(Edit: thanks to this comment, it sounds like the underlying issue was a kernel bug that Fastly developed a fix for - under certain configurations, the kernel fails to associate the MTU update with the egress interface and so it continues sending overly large packets)


Cryptogram Friday Squid Blogging: Illex Squid and Climate Change

There are correlations between the population of the Argentine shortfin squid (Illex argentinus) and water temperatures.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram EU Court of Human Rights Rejects Encryption Backdoors

The European Court of Human Rights has ruled that breaking end-to-end encryption by adding backdoors violates human rights:

Seemingly most critically, the [Russian] government told the ECHR that any intrusion on private lives resulting from decrypting messages was “necessary” to combat terrorism in a democratic society. To back up this claim, the government pointed to a 2017 terrorist attack that was “coordinated from abroad through secret chats via Telegram.” The government claimed that a second terrorist attack that year was prevented after the government discovered it was being coordinated through Telegram chats.

However, privacy advocates backed up Telegram’s claims that the messaging services couldn’t technically build a backdoor for governments without impacting all its users. They also argued that the threat of mass surveillance could be enough to infringe on human rights. The European Information Society Institute (EISI) and Privacy International told the ECHR that even if governments never used required disclosures to mass surveil citizens, it could have a chilling effect on users’ speech or prompt service providers to issue radical software updates weakening encryption for all users.

In the end, the ECHR concluded that the Telegram user’s rights had been violated, partly due to privacy advocates and international reports that corroborated Telegram’s position that complying with the FSB’s disclosure order would force changes impacting all its users.

The “confidentiality of communications is an essential element of the right to respect for private life and correspondence,” the ECHR’s ruling said. Thus, requiring messages to be decrypted by law enforcement “cannot be regarded as necessary in a democratic society.”

Charles StrossThe coming storm

(I should have posted this a couple of weeks ago ...)

2024 looks set to be a somewhat disruptive year.

Never mind the Summer Olympics in Paris; the big news is politics, where close to half the world's population get to vote in elections with a strong prospect of electing outright fascists.

Taiwan was first on 13th January, and elected the Democratic Progressive Party's incumbent Vice President Lai Ching-te as President. I don't have enough understanding of Taiwanese politics to comment further, other than to note that this outcome evinced displeased noises from Beijing (and my interpretation is that pleased noises from Beijing would have been a Very Bad Sign).

Finland gets to elect a new President on January 28th; incumbent Sauli Niinistö will be leaving office, and I'm unable to find details of the current candidates. (As the presidency of Finland is a ceremonial role rather than an executive one it's probably less significant than the current Prime Minister—elected last year—but might be a signal as to whether the Finnish electorate are happy with the right-wing shift at the last election. It's also significant in that Finland is a front-line state with respect to Russia.)

Much larger nations who are voting in parliamentary elections in February include Pakistan and Indonesia: with a combined population of over 500 million these are the two most populous Muslim states today. Smaller but geopolitically significant, Belarus is electing a new parliament (probably in accordance with the preferences of the dictator Lukashenko, a client of Moscow). Of interest mostly to Americans, El Salvador is electing both a president and parliament.

March sees elections in Iran, Ireland, and Portugal; also a rubber stamp for Vladimir Putin's presidency in Russia: and Slovakia votes on a new head of state. (Irish voters also get to decide on two constitutional amendments: one that revises the definition of family to explicitly include durable relationships outside marriage, and another to remove references in the constitution to a woman's "life within the home" and "duties in the home". Both are expected to pass.)

Some time from early May onwards there will almost certainly be a general election and a change of government in the UK. A general election must take place no later than the first Thursday of 2025, but it is expected that Prime Minister Rishi Sunak will announce the date of a snap election after the budget in March (which is expected to cut taxes on likely voting demographics). A British general election takes no more than 5-6 weeks to organize. It is possible according to some pundits that he'll schedule the vote for June, hoping for a good-weather boost to government polling, but short of a miracle the Conservatives are going to go down hard. (Current polling suggests the election will return a Labour majority government, and the Conservatives will lose more than half their seats in their worst defeat since 1997. I can't wait.)

April: South Korea elects a new parliament. It's worth noting that this has global implications—North Korea is selling munitions to Russia for the Ukraine invasion, while South Korea has closed a major arms deal with Poland (to replace Poland's existing fleet of main battle tanks, which are being sold on to Ukraine) as part of Poland's re-armament. (Russian pundits have been making noises along the lines of "Kiev today, Warsaw and Helsinki tomorrow".)

May: Panama, North Macedonia, Lithuania, and the Dominican Republic all elect a parliament, a president, or both.

June: Iceland, Mexico, and Mauritania all get new Presidents; Mexico, Belgium, and Mongolia all get new parliaments.

July: Rwanda elects a new president and chamber of deputies.

October: Mozambique and Uruguay elect new presidents and parliaments.

November: Palau and Somaliland elect new presidents. Also some other place is voting on the 5th, a date traditionally associated with gunpowder, treason, and plot (or, in more familiar terms, an attempt to overthrow the head of state and replace him with a puppet in thrall to minority religious zealots).

A number of other nations have elections due some time in 2024, but like the UK they follow no set date pattern. The largest is India, where far right nationalist Hindutva leader Narendra Modi looks likely to consolidate power, but the list also includes Algeria, Austria, Botswana, Chad, Croatia, Gabon, Georgia, Ghana, Kiribati, Lithuania, Mauritius, Moldova, Namibia, Romania, San Marino, the Solomon Islands, South Africa, South Sudan, Sri Lanka, Togo, Uzbekistan, and Venezuela.

Ukraine would elect a new president this year, but it's not clear whether Volodymyr Zelenskyy will face a wartime election: he previously indicated that he would retire from politics when the war ended.

And fuck knows what's going to happen politically in Israel this year.

Here's the thing: this looks like a pivotal year for democracy around the world. Half the planet is voting in elections with various fascists and fundamentalists—there's often no discernible difference: clerico-fascism is resurgent in multiple religions—seeking control.

Some of the potential outcomes are disastrous. A return to the White House by the tangerine shitgibbon would inevitably cut off all US assistance to Ukraine, and probably lead to a US withdrawal from NATO ... just as Russia is attempting to invade and conquer a nation in the process of trying to join both the EU and NATO. This would encourage Russia to follow through with attacks on the Baltic States (Latvia, Lithuania, and Estonia), Finland, and finally Poland, all of which were part of the Russian empire either prior to 1917 or under Stalin and which Putinists see as their property. Having militarized the Russian economy, it's not clear what else Putin could do after occupying Ukraine: global demand for fossil fuels (his main export) is going to fall off a cliff over the next decade and the Russian economy is broken. Hitler's expansion after 1938 was driven by the essential failure of the German economy, leading him to embark on an asset-stripping spree: stealing Eastern Europe probably looks attractive from where the Russian dictator is sitting.

There is, as usual, a risk of conflict between India and Pakistan, potentially aggravated by election outcomes in both nations (both of whom are nuclear-armed). India under the BJP is increasingly authoritarian and aligned with Russia (they're a major oil customer). Iran ... oddly, Iran is least likely to be problematic as a result of election outcomes in 2024: meet the new mullah, just like the old mullah. The regime savagely suppressed the feminist uprising of 2022-23 but is still dealing with dissatisfaction at home, and seems unlikely to want trouble abroad (aside from the usual support for turbulent proxies such as the Houthis and Hezbollah).

It's also worth noting that Premier Xi has made no bones about seeking to regain control of Taiwan, which China views as a breakaway province. The failure of Russia to subdue Ukraine in 2022 was a major reality check, but if Ukraine collapses and NATO disintegrates, leading to Russian expansion in the west and US isolationism, then there may be nothing holding back China from invading a Taiwan stripped of US support.

At which point, by the end of 2024 we could be in Third World War territory, with catastrophically accelerating climate change on top.

On the other hand: none of this is inevitable.

Leaving aside the global fascist insurgency and the oil and climate wars, it's worth noting that we are seeing exponential growth in photovoltaic capacity worldwide: each year this decade so far we've collectively installed 50% more PV panels than existed in the previous year. 50% annual compound growth in a new energy resource will rewrite the equations that underlie economics in a very short period of time. The renewable energy sector now employs more people than fossil fuels, and the growth is still accelerating.
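Taking the 50% figure at face value, the compounding works out like this (a back-of-the-envelope sketch, not a forecast):

```python
import math

growth = 1.5  # 50% annual compound growth in installed PV capacity

# Capacity doubles roughly every 1.7 years...
doubling_years = math.log(2) / math.log(growth)

# ...and multiplies nearly 58-fold over a decade.
decade_multiple = growth ** 10

print(f"doubling time: {doubling_years:.2f} years")   # 1.71
print(f"10-year multiple: {decade_multiple:.0f}x")    # 58x
```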

Most of us have a very poor intuition for exponential growth curves, so here's a metaphor: think back to the first months of 2020 and the onset of the COVID19 pandemic. Now replace the virus with an energy economy transition, and map each week of February-to-April of 2020 onto one year of 2020-2035. We heard about this worrying new disease a few weeks ago, in China: it's now March 1st, and apparently hospitals in Italy are overflowing and health officials are telling us to wash our hands. Governments are holding crisis meetings, and the word "lockdown" is being bandied about on news broadcasts, but nobody knows quite where it's going and the virus hasn't gotten here yet. And this is 2024.

In this metaphor, next week is 2025. Your government is about to go into full-on panic mode. Curfews, empty streets, ambulance sirens a constant background noise. New York, London, and Paris are plague zones.

Now flip the metaphor: instead of curfews and empty streets we have energy crises and really bad storms and floods and food prices destabilizing. But we also have a glimmer of hope: renewables everywhere, coal-fired power stations shutting down for good, e-bikes everywhere (and traffic planning measures to accommodate them), electric cars showing up in significant numbers in those places that are dependent on automobiles. The oil-addicted export economies (think Russia, Saudi Arabia, Venezuela) are hurting.

The metaphor is inexact: but by 2026-27, if we get through 2024 without a nuclear war, it's going to be glaringly obvious that we've turned away from fossil fuel business as usual, and that the political upheavals of 2008-2024 were driven by dark money flows and disinformation campaigns funded by oligarchs who valued retention of their own privileged status above our survival as a species.

MODERATION NOTE

This is not a discussion thread for the upcoming US election in November. Comments relating to Trump/Biden and US politics will be summarily deleted. You can discuss non-American politics instead for once.

365 TomorrowsThe Tavern in the Town

Author: Julian Miles, Staff Writer There’s a tavern by the graveyard. Not one of those new servaraunts, but a real vintage place with tiny lattice windows and a big wooden door that glints in the light from the glows as it swings back and forth. Old Stanislaw told me it used to not do that, […]

The post The Tavern in the Town appeared first on 365tomorrows.

Worse Than FailureRepresentative Line: From a String Builder

Inheritance is one of those object-oriented concepts that creates a lot of conflicts. You'll hear people debating what constitutes an "is-a" versus a "has-a" relationship, you'll hear "favor composition over inheritance", you'll see languages adopt mix-in patterns which use inheritance to do composition. Used well, the feature can make your code cleaner to read and easier to maintain. Used poorly, it's a way to get your polymorphism with a side of spaghetti code.

Greg was working on a medical data application's front end. This representative line shows how they use inheritance:

  public class Patient extends JavascriptStringBuilder

Greg writes: "Unfortunately, the designers were quite unacquainted with newfangled ideas like separating model objects from the UI layer, so they gave us this gem."

This, of course, was a common pattern in the application's front end. Many data objects inherited from string builder. Not all of them, which only helped to add to the confusion.

As for why? Well, it gave these objects a "string" function, which they could override to generate output. You want to print a patient to the screen? What could be easier than calling patient.string()?


David BrinSci Fi News & roundup!

First a few items about my own works. In about a month, two of my books - Earth and Glory Season - will be re-released by Open Road Media - with gorgeous new covers, in both trade paperback and ebook versions.  Pre-order now?


Here, Mark Rayner's lovely review of my novel Earth is very flattering. 

For any of you completists out there: my first novel – Sundiver – is the only one that never had a hardcover. Now one is coming… and what an edition! Alex Berman's Phantasia Press will release “Numbered editions which will feature a full color wrap around dustjacket, frontispiece, and interior art by Jim Burns. Each copy will be printed in two colors throughout, on high quality acid free Smythe sewn paper with full color endsheets.” Phantasia previously released signed limited editions of the second and third books in the series (Startide Rising in 1983 and The Uplift War in 1987). Expected release date early 2024.

This edition includes my 2020 foreword and a terrific Robert J. Sawyer introduction specifically for this release.


Do you know YA? I am looking for a new publisher for my series of short novels for teens, featuring some of today's brightest new authors, in a consistent future setting for adventures through interstellar space and time!  The "Out of Time" (or "Yanked!") series: Only teens can teleport through time and space! Dollops of fun, adventure and something so rare, nowadays... optimism for young adults. If you think of a publisher who might be compatible, speak up in comments!


Finally, are you a fan of live theater? We were in Pasadena at our alma mater - Caltech - to attend a one-night production of my play, "The Escape: A Confrontation in Four Scenes," presented by the Caltech Playreaders. The directing/acting/performances were beyond my best hopes! You can watch the production on Youtube.



== Sci Fi Roundup! ==


Almost the entire catalogue of sci fi legend Norman Spinrad is available (cheap) online. If you don’t read it… coming generations of AI surely will!


Thomas Easton and Frank Wu have a future-tech spy series going: ESPionage: Regime Change: A Psychic CIA Novel. When the Russians start an undeclared war to bring down the West with assassinations and disinformation attacks, the CIA reactivates a psychic agent from its old Project Stargate to fight off the attacks. See Paul DiFilippo’s rave review!

Eliot Peper's latest novel Foundry is a near-future thriller featuring (among many things) two spies locked in a room with a gun, leveraging the secrets of semiconductor manufacturing to play the greatest of games, the only game that really matters: power.

 

Just finished reading a YA novel by Gideon Marcus… Kitra. A lovely, lively adventure tale of five teens – one of them a blobby alien – getting into and out of trouble when they buy a used spaceship. Fast paced and Gideon has got the skills. Hook your own teen on the series!


Amid headlines covering the demise of JFK on November 22, 1963, I long knew that an obit on the back pages told of the same-day passing of Aldous Huxley, supposedly during an acid trip (I hear) in the arms of a lady guru. (I’d rather blithely believe that hearsay than look it up.) What I never realized was that C.S. Lewis ALSO died the same day! This book - Between Heaven and Hell: A Dialog beyond death with John F. Kennedy, C.S. Lewis and Aldous Huxley - imagines JFK, CSL and Aldous meeting in the Bardo just after death. One of them on a lingering acid trip, one shouting "Wait, I was so young and powerful!" and Lewis shocked to find himself in a Buddhist waiting room.


Calling nerdy SF + NASA junkies...

As I retire from 12 years on the advisory council of NASA’s Innovative & Advanced Concepts program – (NIAC) – some high NASA officials have asked if any great science fiction was ever inspired by NIAC studies. In addition to my own, I know of a few more. Especially, Doug Van Belle’s A World Adrift - set in the skies of Venus, some 800 years after they were first colonized - incorporates elements from several NIAC studies, including Stoica (2015), Bugga (2016), and Balcerski (2018), with work by space scientist and Nebula winner Geoff Landis (2019) figuring prominently as an essential element of the plot.


==SF & Hollywood ==


Writing this on Star Trek Day! Count me in with this wave of love for Trek! I do adore it for some unusual reasons, though. Example: the ship in Trek is a vast naval vessel charged with diplomacy, science, exploration and only occasionally fighting... and the captain is no super-force demigod (the core conceit of Star Wars) but merely a way-above-average person, who needs help every time, from above average crewmates. And the Federation is aboard, a topic in almost every episode: its faults and blessings and rules and codes and dreams and possibilities. (See my essay, To Boldly Go.)

("Shoot! Shoot the Federation's starship!" screeched that nasty oven mitt, Yoda, in one of the prequels. Seriously, Lucas? Are you at all the same person who created the wonderful YIJC?)

In contrast, the ship in Star Wars is a WWI fighter plane (banking against nonexistent air) with the silk-scarf lone hero-pilot and his gunner-droid... the knight and squire going back to Achilles & Patroclus. Wars is all about demigods, demigods. Demigods all the way down. Normal folk can only choose which set of feuding gods to die for. And the poor, hapless Galactic Republic has no place on such a ship. Hence it is never really a topic. The Starwarsian Galactic Republic does nothing. Ever. At all. Name an exception. The lesson is the same as in all works by Orson Scott Card: "Hold out no hope for a decent civilization. Throw yourself at the feet of a demigod and pray he'll be a nice one, like Ender!"

All of this and more is in Vivid Tomorrows: Science Fiction and Hollywood


Oh. Also. In this podcast, a brilliant Stanford biologist cites his influences, including science fiction authors. (Especially about 9 minutes in.)


And here's a bit for those who saw and loved (I did, with minor quibbles) the recent Oppenheimer film: a CBS 1965 interview with Oppenheimer. Twenty years after Trinity. Twenty years before Gorbachev.


== Science meets art! ==


If you are interested (as I am) in the intersections of science and art, then this might be a good listen for 20 minutes during your commute: an interview with half a dozen artists and musicians who collaborate closely with scientists. Fun and inspiring! There are also links to other discussions about the societal impact of science fiction. 

 

Of course, having spent my entire post-puberty life in both realms, I have my own take on how science overlaps – and conflicts – with art. Or, more generally, on the tense but often productive interplay between pragmatic-enlightenment methods, on the one hand, and the deeper-rooted human drive for romanticism, on the other.


As I describe in another program – and in Vivid Tomorrows – we would live far poorer, even soulless lives, without the mighty talent of creative imagination. Though we have – at long last – also come to realize how dangerous – even devastatingly deadly – imagination can become, when it seizes control of politics and policy. A failure mode that made the last 6000 years a living hell for 99.99% of our ancestors, until the last few generations began emerging from that world of delusion and ghosts and ‘magic.’


I do what most of those interviewed do - craft art that can collaborate with... or challenge... or project possible outcomes of... a scientific civilization that's dedicated to the kind of progress that only comes from lively, good-natured rivalry among the widest diversity of free minds.  In other words, the diametric opposite to those 6000 dark years.


Obeying unsapient reflexes, many members of a world oligarchy think they can make things much better by restoring those 6000 years of brutal feudalism. Ingrate traitors to the one, unique civilization that gave them everything.


And finally....


North Korean science fiction? In The strange, secretive world of North Korean science fiction, A. Fiscutean reports that “Late dictator Kim Jong-il referenced science fiction books in his speeches and set guidelines for authors, encouraging them to write about optimistic futures for their country.” Of course, the father of the current dictator also had a fetish for moviemaking and (apparently) even arranged for some film creators to be kidnapped and brought to Pyongyang.  Further, “Stories often touch on topics like space travel, benevolent robots, disease-curing nanobots, and deep-sea exploration. They lack aliens and beings with superpowers. Instead, the real superheroes are the exceptional North Korean scientists and technologists who carry the weight of the world on their shoulders.”


Phew, no wonder Donald Trump 'fell in love' with Kim the younger! We must be under an alien Stoopidizer ray, that such 'leaders' are even possible.


Oh, an addendum... on travel to Panama!


Come on by comments if you have suggestions for our trip (to the Beneficial AGI Summit)...


Planet DebianValhalla's Things: Jeans, step one

Posted on February 19, 2024
Tags: madeof:atoms, craft:sewing, FreeSoftWear

CW for body size change mentions

A woman wearing a pair of tight jeans.

Just like the corset, I also needed a new pair of jeans.

Back when my body size changed drastically, of course my jeans no longer fit. While I was waiting for my size to stabilize I kept wearing them with a somewhat tight belt, but it was ugly and somewhat uncomfortable.

When I had stopped changing a lot I tried to buy new ones in the same model, and found out that I was too thin for the menswear jeans of that shop. I could have gone back to wearing women’s jeans, but I didn’t want to have to deal with the crappy fabric and short pockets, so I basically spent a few years wearing mostly skirts, and oversized jeans when I really needed trousers.

Meanwhile, I had drafted a jeans pattern for my SO, which we had planned to make in technical fabric, but which ended up being made in a cotton-wool mystery mix for winter and in linen-cotton for summer, so the technical fabric version was no longer needed (yay for natural fibres!)

It was clear what the solution to my jeans problems would be: I just had to stop getting distracted by other projects and draft a new pattern, using a womanswear block instead of a menswear one.

Which, in January 2024 I finally did, and I believe it took a bit less time than the previous one, even if it had all of the same fiddly pieces.

I already had a cut of the same cotton-linen I had used for my SO, except in black, and used it to make the pair this post is about.

The parametric pattern is of course online, as #FreeSoftWear, at the usual place. This time it was faster, since I didn’t have to write step-by-step instructions, as they are exactly the same as the other pattern.

Same as above, from the back, with the crotch seam pulling a bit. A faint decoration can be seen on the pockets, with the line art version of the logo seen on this blog.

Making also went smoothly, and the result was fitting. Very fitting. A bit too fitting, and the standard bum adjustment of the back was just enough for what apparently still qualifies as a big bum, so I adjusted the pattern to be able to add a custom amount of ease in a few places.

But at least I had a pair of jeans-shaped trousers that fit!

Except, at 200 g/m² I can’t say that fabric is the proper weight for a pair of trousers, and I may have looked around online1 for some denim, and, well, it’s 2024, so my no-fabric-buy 2023 has not been broken, right?

Let us just say that there may be other jeans-related posts in the near future.


  1. I had already asked years ago for denim at my local fabric shops, but they don’t have the proper, sturdy, type I was looking for.↩︎

,

Cory DoctorowHow I Got Scammed

A credit card. Its background is a 'code waterfall' effect from the credit-sequences of the Wachowskis' 'Matrix' movies. On the right side is a cliche'd 'hacker in a hoodie' image whose face is replaced by the hostile red eye of HAL9000 from Kubrick's '2001: A Space Odyssey.' Across the top of the card is 'Li'l Federal Credit Union.' The cardholder's name is 'I.M. Sucker.'

Today for my podcast, I read How I Got Scammed, originally published in my Pluralistic blog. It’s a story of how the attacker has to get lucky once, while the defender has to never make a single mistake.

This is my last podcast before I take off for my next book-tour, for my new novel, The Bezzle. I’m ranging far and wide: LA, San Francisco, Seattle, Vancouver, Calgary, Phoenix, Portland, Providence, Boston, New York City, Toronto, San Diego, Salt Lake City, Tucson, Chicago, Buffalo, as well as Torino and Tartu.

My first two stops are Weller Bookworks in Salt Lake City on Feb 21 and Mysterious Galaxy in San Diego on Feb 22, followed by LA (with Adam Conover!), Seattle (with Neal Stephenson!), and Portland. The canonical link for the schedule is here.


I wuz robbed.

More specifically, I was tricked by a phone-phisher pretending to be from my bank, and he convinced me to hand over my credit-card number, then did $8,000+ worth of fraud with it before I figured out what happened. And then he tried to do it again, a week later!

Here’s what happened. Over the Christmas holiday, I traveled to New Orleans. The day we landed, I hit a Chase ATM in the French Quarter for some cash, but the machine declined the transaction. Later in the day, we passed a little credit-union’s ATM and I used that one instead (I bank with a one-branch credit union and generally there’s no fee to use another CU’s ATM).

A couple days later, I got a call from my credit union. It was a weekend, during the holiday, and the guy who called was obviously working for my little CU’s after-hours fraud contractor. I’d dealt with these folks before – they service a ton of little credit unions, and generally the call quality isn’t great and the staff will often make mistakes like mispronouncing my credit union’s name.

That’s what happened here – the guy was on a terrible VOIP line and I had to ask him to readjust his mic before I could even understand him. He mispronounced my bank’s name and then asked if I’d attempted to spend $1,000 at an Apple Store in NYC that day. No, I said, and groaned inwardly. What a pain in the ass. Obviously, I’d had my ATM card skimmed – either at the Chase ATM (maybe that was why the transaction failed), or at the other credit union’s ATM (it had been a very cheap looking system).


MP3

(Image: Cryteria, CC BY 3.0, modified)


Here’s that tour schedule!

21 Feb: Weller Bookworks, Salt Lake City, 1830h:
https://www.wellerbookworks.com/event/store-cory-doctorow-feb-21-630-pm

22 Feb: Mysterious Galaxy, San Diego, 19h:
https://www.mystgalaxy.com/22224Doctorow

24 Feb: Vroman’s, Pasadena, 17h, with Adam Conover (!!)
https://www.vromansbookstore.com/Cory-Doctorow-discusses-The-Bezzle

26 Feb: Third Place Books, Seattle, 19h, with Neal Stephenson (!!!)
https://www.thirdplacebooks.com/event/cory-doctorow

27 Feb: Powell’s, Portland, 19h:
https://www.powells.com/book/the-bezzle-martin-hench-2-9781250865878/1-2

29 Feb: Changing Hands, Phoenix, 1830h:
https://www.changinghands.com/event/february2024/cory-doctorow

9-10 Mar: Tucson Festival of the Book:
https://tucsonfestivalofbooks.org/?action=display_author&amp;id=15669

13 Mar: San Francisco Public Library, details coming soon!

23 or 24 Mar: Toronto, details coming soon!

25-27 Mar: NYC and DC, details coming soon!

29-31 Mar: Wondercon Anaheim:
https://www.comic-con.org/wc/

11 Apr: Boston, details coming soon!

12 Apr: RISD Debates in AI, Providence, details coming soon!

17 Apr: Anderson’s Books, Chicago, 19h:
https://www.andersonsbookshop.com/event/cory-doctorow-1

19-21 Apr: Torino Biennale Tecnologia
https://www.turismotorino.org/en/experiences/events/biennale-tecnologia

2 May, Canadian Centre for Policy Alternatives, Winnipeg
https://www.eventbrite.ca/e/cory-doctorow-tickets-798820071337

5-11 May: Tartu Prima Vista Literary Festival
https://tartu2024.ee/en/kirjandusfestival/

6-9 Jun: Media Ecology Association keynote, Amherst, NY
https://media-ecology.org/convention

Planet DebianIustin Pop: New skis ⛷️ , new fun!

As I wrote a bit back, I had a really, really bad fourth quarter in 2023. As the new year approached, and we were getting ready to go on a ski trip, I wasn’t even sure if and how much I’d be able to ski.

And I felt so out of it that I didn’t even buy a ski pass for the whole week; I just bought one day to see if a) I still like it, and b) my knee can deal with it. And, of course, it was good.

It was good enough that I ended up skiing the entire week, and my knee got better during the week. WTH?! I don’t understand this anymore, but it was good. Good enough that this early year trip put me back on track and I started doing sports again.

But the main point is, that during this ski week, and talking to the teacher, I realised that my ski equipment is getting a bit old. I bought everything roughly ten years ago, and while they still hold up OK, my ski skills have improved since then. I said to myself, 10 years is a good run, I’ll replace this year the skis, next year the boot & helmet, etc.

I didn’t expect much from new skis - I mean, yes, better skis, but what does “better” mean? Well, once I’d read enough forum posts, apparently the skis I selected are “that good”, which to me meant they’re not bad.

Oh my, how wrong I was! Double, triple wrong! Rather than fighting with the skis, it’s enough to think what I want to do, and the skis do it. I felt OK-ish, maybe 10% limited by my previous skis, but the new skis are really good and also I know that I’m just at 30% or so of the new skis - so room to grow. For now, I am able to ski faster, longer, and I feel less tired than before. I’ve actually compared and I can do twice the distance in a day and feel slightly less tired at the end. I’ve moved from “this black is cool but a bit difficult, I’ll do another run later in the day when I’ve recovered” to “how cool, this black is quite empty of people, let’s stay here for 2-3 more rounds”.

The skis are new, and I haven’t used them on all the places I’m familiar with - but the upgrade is huge. The people on the ski forum were actually not exaggerating, I realise now. Stöckli++, and they’re also made in Switzerland. Can’t wait to get back to Saas Fee and to the couple of slopes that were killing me before, to see how they feel now.

So, very happy with this choice. I’d be even happier if my legs were less damaged, but well, you can’t win them all.

And not last, the skis are also very cool looking 😉

365 TomorrowsThe Great Bell of Tai Mountain, China

Author: Michael Edwards One night, when you are asleep, you have a dream. And in this dream, you are sitting — on a plain wooden chair — in the belltower of an ancient temple. Which you somehow know is standing at the summit of Tai Mountain, in China. Above you hangs an enormous iron bell. No […]

The post The Great Bell of Tai Mountain, China appeared first on 365tomorrows.

Planet DebianRussell Coker: Release Years

In 2008 I wrote about the idea of having a scheduled release for Debian and other distributions as Mark Shuttleworth had proposed [1]. I still believe that Mark’s original idea for synchronised release dates of Linux distributions (or at least synchronised feature sets) is a good one but unfortunately it didn’t take off.

Having been using Ubuntu a bit recently I’ve found the version numbering system to be really good. Ubuntu version 16.04 was released in April 2016; its support ended 5 years later in about April 2021, so any commonly available computers from 2020 should run it well and versions of applications released in about 2017 should run on it. If I have to support a Debian 10 (Buster) system I need to start with a web search to discover when it was released (July 2019). That suggests that applications packaged for Ubuntu 18.04 are likely to run on it.

If we had the same numbering system for Debian and Ubuntu then it would be easier to compare versions. Debian 19.06 would be obviously similar to Ubuntu 18.04, or we could plan for the future and call it Debian 2019.
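The arithmetic behind date-based version numbers is simple enough to automate, which is part of their appeal. A minimal sketch (the helper names are mine, not any distribution's tooling), assuming the Ubuntu-style YY.MM scheme and a fixed five-year LTS support window:

```python
from datetime import date

def parse_version(ver: str) -> date:
    """Turn a YY.MM version string like '16.04' into its release month."""
    yy, mm = ver.split(".")
    return date(2000 + int(yy), int(mm), 1)

def support_ends(ver: str, years: int = 5) -> date:
    """Support for an LTS release runs a fixed number of years from release."""
    released = parse_version(ver)
    return released.replace(year=released.year + years)

# Ubuntu 16.04 was released April 2016; 5-year support ends around April 2021.
print(support_ends("16.04"))  # → 2021-04-01
```

With codename-based schemes this lookup requires a table (or a web search, as noted above); with date-based versions it falls out of the name itself, and ordering two releases is just a string or date comparison.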

Then it would be ideal if hardware vendors did the same thing (as car manufacturers have been doing for a long time). Which versions of Ubuntu and Debian would run well on a Dell PowerEdge R750? It takes a little searching to discover that the R750 was released in 2021, but if they called it a PowerEdge 2021R750 then it would be quite obvious that Ubuntu 2022.04 would run well on it and that Ubuntu 2020.04 probably has a kernel update with all the hardware supported.

One of the benefits for the car industry in naming model years is that it drives the purchase of a new car. A 2015 car probably isn’t going to impress anyone and you will know that it is missing some of the features in more recent models. It would be easier to get management to sign off on replacing old computers if they had 2015 on the front, trying to estimate hidden costs of support and lost productivity of using old computers is hard but saying “it’s a 2015 model and way out of date” is easy.

There is a history of using dates as software versions. The “Reference Policy” for SE Linux [2] (which is used for Debian) has releases based on date. During the Debian development process I upload policy to Debian based on the upstream Git and use the same version numbering scheme, which is more convenient than the “append git date to last full release” system that some maintainers are forced to use. The users can get an idea of how much the Debian/Unstable policy has diverged from the last full release by looking at the dates. Also an observer might see the short difference between release dates of SE Linux policy and Debian release freeze dates as an indication that I beg the upstream maintainers to make a new release just before each Debian freeze – which is exactly what I do.

When I took over the Portslave [3] program I made releases based on date because there were several forks with different version numbering schemes so my options were to just choose a higher number (which is OK initially but doesn’t scale if there are more forks) or use a date and have people know that the recent date is the most recent version. The fact that the latest release of Portslave is version 2010.04.19 shows that I have not been maintaining it in recent years (due to lack of hardware as well as lack of interest), so if someone wants to take over the project I would be happy to help them do so!

I don’t expect people at Dell and other hardware vendors to take much notice of my ideas (I have tweeted them photographic evidence of a problem with no good response). But hopefully this will start a discussion in the free software community.

,

Planet DebianJames Valleroy: snac2: a minimalist ActivityPub server in Debian

snac2, currently available in Debian testing and unstable, is described by its upstream as “A simple, minimalistic ActivityPub instance written in portable C.” It provides an ActivityPub server with a bare-bones web interface. It does not use JavaScript or require a database.

Basic forms for creating a new post, or following someone

ActivityPub is the protocol for federated social networks that is implemented by Mastodon, Pleroma, and other similar server software. Federated social networks are most often used for “micro-blogging”, or making many small posts. You can decide to follow another user (or bot) to see their posts, even if they happen to be on a different server (as long as the server software is compatible with the ActivityPub standard).
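Following someone on another server starts with handle resolution: given an address like alice@example.social, a Mastodon-compatible client first asks that host's WebFinger endpoint (RFC 7033) where the account's ActivityPub actor lives. A minimal sketch of just the URL-building step (the function name is hypothetical, not part of snac2's code):

```python
from urllib.parse import quote

def webfinger_url(handle: str) -> str:
    """Build the WebFinger lookup URL for a fediverse handle.

    Servers answer a GET at this well-known path with a JSON document
    whose links include the account's ActivityPub actor URL.
    """
    user, host = handle.lstrip("@").split("@")
    resource = quote(f"acct:{user}@{host}")
    return f"https://{host}/.well-known/webfinger?resource={resource}"

print(webfinger_url("@alice@example.social"))
```

Fetching that URL returns the actor's location, and from there the two servers exchange Follow, Accept, and Create activities over HTTP, which is how a post made on one instance shows up in a timeline on another.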

The timeline shows posts from accounts that you follow

In addition, snac2 has preliminary support for the Mastodon Client API. This allows basic support for mobile apps that support Mastodon, but you should expect that many features are not available yet.

If you are interested in running a minimalist ActivityPub server on Debian, please try out snac2, and report any bugs that you find.

365 TomorrowsChrono Crisps

Author: M.D. Smith In 1975, drugs were available in New Orleans. Nestled among the narrow streets and vibrant markets of the French Quarter, lived a young man named Alex. He worked at a small bookstore by day and spent his evenings lost in the pages of science fiction novels. One day, as he strolled through […]

The post Chrono Crisps appeared first on 365tomorrows.

,

Cory DoctorowCome see me on tour! LA, San Francisco, Seattle, Vancouver, Calgary, Phoenix, Portland, Providence, Boston, New York City, Toronto, San Diego, Salt Lake City, Tucson, Chicago, Amherst, Torino, Tartu.

A yellow square featuring the stylized title 'The Bezzle' and the image motif from the book's cover (an Escher-esque triangle whose center is filled with bars that imprison a male figure in a suit; two more male figures run up and down the triangle's edges). Beneath this, a list of cities: 'LA, San Francisco, Seattle, Vancouver, Calgary, Phoenix, Portland, Providence, Boston, New York City, Toronto, San Diego, Salt Lake City, Tucson, Chicago, Amherst, Torino, Tartu.'

My next novel is The Bezzle, a high-tech ice-cold revenge thriller starring Marty Hench, a two-fisted forensic accountant, as he takes on the sleaziest scams of the first two decades of the 2000s, from hamburger-themed Ponzis to the unbelievably sleazy and evil prison-tech industry:

https://us.macmillan.com/books/9781250865878/thebezzle

I’m taking Marty on the road! I’ll be visiting eighteen cities between now and June, and I hope you’ll come out and say hello, visit a beloved local bookseller, and maybe get a book (or two)!

21 Feb: Weller Bookworks, Salt Lake City, 1830h:
https://www.wellerbookworks.com/event/store-cory-doctorow-feb-21-630-pm

22 Feb: Mysterious Galaxy, San Diego, 19h:
https://www.mystgalaxy.com/22224Doctorow

24 Feb: Vroman’s, Pasadena, 17h, with Adam Conover (!!)
https://www.vromansbookstore.com/Cory-Doctorow-discusses-The-Bezzle

26 Feb: Third Place Books, Seattle, 19h, with Neal Stephenson (!!!)
https://www.thirdplacebooks.com/event/cory-doctorow

27 Feb: Powell’s, Portland, 19h:
https://www.powells.com/book/the-bezzle-martin-hench-2-9781250865878/1-2

29 Feb: Changing Hands, Phoenix, 1830h:
https://www.changinghands.com/event/february2024/cory-doctorow

9-10 Mar: Tucson Festival of the Book:
https://tucsonfestivalofbooks.org/?action=display_author&amp;id=15669

13 Mar: San Francisco Public Library (with Robin Sloan):
https://sfpl.org/events/2024/03/13/author-cory-doctrow-bezzle

23 or 24 Mar: Toronto, details coming soon!

25-27 Mar: NYC and DC, details coming soon!

29-31 Mar: Wondercon Anaheim:
https://www.comic-con.org/wc/

11 Apr: Anderson’s Books, Chicago, 19h:
https://www.andersonsbookshop.com/event/cory-doctorow-1

12 Apr: RISD Debates in AI, Providence, details coming soon!

19-21 Apr: Torino Biennale Tecnologia
https://www.turismotorino.org/en/experiences/events/biennale-tecnologia

2 May, Canadian Centre for Policy Alternatives, Winnipeg
https://www.eventbrite.ca/e/cory-doctorow-tickets-798820071337

5-11 May: Tartu Prima Vista Literary Festival
https://tartu2024.ee/en/kirjandusfestival/

6-9 Jun: Media Ecology Association keynote, Amherst, NY
https://media-ecology.org/convention

Calgary and Vancouver – details coming soon!

Cryptogram Friday Squid Blogging: Vegan Squid-Ink Pasta

It uses black beans for color and seaweed for flavor.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

David BrinOh those idiotic "cycles of history," yet again!

 A 'rightward shift'? Only if our confidence is destroyed!  Which both MAGA and Hollywood seem bent on achieving.

I quote below a passage from an interesting book -- The Aftermath: The Last Days of the Baby Boom and the Future of Power in America, by Philip Bump. Indeed, as far as it goes (see below) the passage and the article/book make some supportable assertions. Certainly my own kids mutter “boomers!” pretty often, sometimes without a smile!


And yet, this obsession is also misleading. For one thing, U.S. boomers aren't French. They like to work and have politically allowed the retirement age to keep incrementing upward, in order to maintain Social Security solvency as lifespans rise. (A further increment is on the table, refused by Republicans, who dread an end to one of their top complaints.)


So far, Boomers are still paying their way.


Are there many of the generational problems that Mr. Bump cites? Sure. For one thing, a substantial minority of boomers are Trumpers, resentful toward America's ongoing, 200 year project of ever-expanding inclusiveness. That large minority of boomers is also biliously hate-drenched toward all of the fact and knowledge castes - those nerds & professionals who seem to understand a world that MAGAs find bewildering. 


(Name an exception to this near-uniform resentment, from science, teaching, journalism, law and medicine to the FBI/Intel/military officer corps who won the Cold War and the War on Terror, all now MAGA-hated. That resentment of achievers is far stronger than their racism or sexism.)


On the other hand, a larger segment of that boomer generation are members of - or associate with - the fact and knowledge castes! The greatest and most creative such clade of humans the world ever saw, leading to heaps of very good news seldom mentioned in exploitive media.


Moreover, those fact-and-progress-oriented boomers are (for the most part) eager mentors and enablers of their coming replacements. And there's another fact that’s seldom mentioned - that boomers will leave to their heirs the greatest tsunami of middle class wealth the world ever saw. Though – as described below – it could have been even bigger.


No, all this 'boomer' talk is - to some extent - deliberate distraction! Generation-fixation is a cult, especially among the rightist intelligentsia, desperate to divert attention from the real story - the real divide - which is (as both Adam Smith and Karl Marx told us) about CLASS…


… especially after 40 years of 'supply-side' scams transferred at least $10 trillion from the U.S. middle class into the open maws of a top 0.001% that never, ever invested any substantial amount of it into the promised productive uses. A lordly aristocracy whose bloated wealth disparities are now surpassing French Revolution levels!


Fully aware that those chickens may come to roost, the rightist mystical-obsession is with CYCLICAL HISTORY. Raving notions of fore-ordained generational cycles like those clutched by Nazis, Confederates and other romantics... 


...and that now propel a desperate fantasy called prepper-ism, as many of the uber-rich build luxury bunker-redoubts and mountain refuges to ride out a coming "Event," or fall of civilization. As if we spurned survivors - especially the nerds who know cyber, bio, nano, and nuclear stuff - won't trivially know where to find those who betrayed and abandoned us.


== A Quasi-Religious Cult to Justify Neo-Feudalism ==


Among the core scriptures of this cyclic-history mysticism that obsesses conservatives, there is a tome called The Fourth Turning, a pile of jibbering hogwash by authors Strauss & Howe, whose apophenia/pareidolia incantations are so easily disproved that obsessives can only double down - obsessively and religiously averting their gaze from any doubts. (I know plenty of these guys - and not one will wager over the testable falsifiability of these 'historical cycle' cult assertions.)


That is the undercurrent, beneath all this 'generational shift' blather. A desperate drive to distract from real problems and divides. Or how foolish the sycophancy-lobotomized oligarchy truly is, for rejecting the great benefits they gained from the New Deal and the Great Society. Or their plunge back down the path that old Karl ordained. Dreaming of harems and consigning themselves to tumbrels.


But fine. Now here's that promised quotation from The Aftermath. Read it both for how right it is... and how there are other, unmentioned layers. 


"The Baby Boomers... were a cohort of historically unprecedented size, whose basic need to be clothed, fed, housed and educated was a decades-long jobs creator and economy stimulator. And they were a cohort whose massive size and timing — not just in the immediate aftermath of World War II, but decades into a postindustrial era of growth, government investment and a strong middle class — meant cultural and economic dominance for much of their lives. As teenagers, boomers saw their desires met by marketers who had only just discovered that teenagers with few obligations and a little disposable cash were a huge potential consumer group. Now, as older adults and retirees with waning obligations and a lot of disposable cash, boomers continue to hold a significant chunk of American wealth and the consumer power that goes with it — and in turn, products and services for boomers have proliferated, a trend that will no doubt continue as they enter their sunset years. The government adjusted, too, smoothing boomers’ entrance into the world with significant investments in infrastructure and education, and now smoothing their exit with significant entitlement expenditures."


Statements and assertions that are true – in themselves – can also add up to distraction and bullshit.



== "Caesarism" - the latest treason ==


Let's go past all the headlines and spume. The Mad Right’s neo-feudalism fetish is its only priority. All supposed “policies” boil down to it, leading at last to this inevitable end state: “For the last three years, parts of the American right have advocated a theory called Caesarism as an authoritarian solution to the claimed collapse of the US republic.” (With Peter Thiel among the major subsidizers of dismally microcephalic propagandizers, pushing the same cult incantations under a different name: “neo-monarchy.”) Yeah, sure, guys.

 

Dig it. After 6000 years of wretchedly stupid misrule by inheritance-brat kings and lords, we broke from that feudal Divine Right romantic twaddle. And in so doing, we have accomplished vastly more - in every conceivable category - during the last 250 years of gradually improving constitutionalism – and especially during the post-Rooseveltean era – than all other human cultures… combined. And I include the neolithic.

 

Yes, I mean combined. Put up wager stakes. Anything that is consensus Good and factually verifiable. Like the percentage of human beings who have been able to raise healthy children in conditions of light and peace. Or science. Or production, economics or entrepreneurship. Or far-sighted literature. Or a burgeoning ecological movement that just might save our children's world... 


... or the thing that Adam Smith promoted, but that these hypocrites now openly hate -- flat-fair-creative competition.

 

Faced with a looming demographic collapse of U.S. conservatism (the mad Bush/Trumpian version) -– and the fact that all fact-using professions have turned against them --  these bozos surround themselves with lobotomizing flatterers who only prove my case! They have replaced politics - the high art of negotiating positive sum solutions to a wide variety of problems, ambitions and interests - with a tiresome litany of rationalizations toward one goal. A goal that is pathetically predictable, rooted in male reproduction reflexes…

 

…that of restored feudal rule by inheritance brats, yearning to restore the practice of kingly harems.  

 

Oh, it’s an insane-masturbatory goal. One chased by incel yammerers. But sure, you guys will try, and if 2024 elections aren’t going your way, you will unleash on us waves of McVeighs. 

 

(Prediction: the caesarites won’t let the ’24 election be about Donald Trump. Especially if he's either self-torching or going full brownshirt. Desperate, the oligarchs will find some way to push their beloved asset aside, either gently or (preferably) in some "Howard Beale Option" designed to incite the base. Think not? Then give me odds? 1:3 it’ll be Nikki Haley or one of her clones. And God bless the US Secret Service.)

 


== A Canticle for Rich Idiots ==


And so, let's explore the logical path they have chosen. 


Hey, feudalist cultists, have you considered: what if - just maybe - we (especially all the nerds you despise) are far more ready than you think? Exactly how do you believe this is gonna go, when you are waging war vs. all fact-using professions? From science and teaching, medicine and law and civil service to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on Terror?


Open war against the nerds who know cyber, bio, nuclear, chemistry and all the rest? Those who have the exact location and every feature of every single Prepper compound?

 

I know some of the suckers falling for this prepper/aristocracy stuff. They used to be science fiction fans. Today, they think any "Event" will follow a course portrayed in one of the classics - Walter Miller's A Canticle for Leibowitz. Alas, they put too much masturbatory faith in ego-pleasing sci fi.


Oh, it’s a great novel, written in a bygone age. But is it an accurate model of some post-Event future, when survivors will throw themselves at the feet of prepper lords, emerging from sanctums and bunkers? When the inevitable blame backlash gets turned against all nerds? 


Are you guys really so sure the blame won't fall upon those 'lords,' whose active or passive sabotage undermined our institutions of resilience?

 

Want a model for what'll actually happen, if those would-be caesars do trigger an "Event?"


Paris, 1789. 



== They know no history. And hence no future ==

 

Okay, just one more thing.

 

Yeah, bad – tho sometimes well-written – science fiction has played a role in forming caesarism. Foremost among those pushing this rejection of the American Experiment is Orson Scott Card – a writer of unquestionable persuasive genius and psychological manipulativeness. Card spent his entire career relentlessly inveighing that democracy is futile and no institution can be trusted. Certainly, the very notion of self-rule via negotiated, positive-sum politics, thrashed out openly by several hundred million citizens, is ridiculed and dismissed at every turn.

 

Instead, we all should throw ourselves at the feet of some super-uber-demigod-Caesar, hoping (in fact praying) that he’ll be as nice as Ender Wiggin. Only he should also have an all-chastising whip – and maybe cattle cars and smokestacks – to back up his unquestioned (unquestionable) wisdom.  


While this lesson is pushed in all Card tales, including the dangerously misunderstood Ender's Game, the anti-democracy propaganda gets explicit in Card’s EMPIRE, the author's wish fantasy novel that is clutched and extolled as a keystone document of this Caesarist/neo-monarchist ‘movement’ that bodes for the rest of us…


... nothing but pain. I promise, though: if that happens, you fellows will share in it.

 


365 TomorrowsNorman the Gentleman Sapien

Author: Chloe Beckett “There you go sweetheart! All fitted up. How does that feel?” Steam erupted from the gaping holes of the nurse’s nostrils like belching geysers, moistening his cheek as she tightened and clipped. The prosthetic bulged off the back of his skull, like a tumor weighing his aching head down. He stared at […]

The post Norman the Gentleman Sapien appeared first on 365tomorrows.

Worse Than FailureError'd: Mirror mirror

An abstitution, an assortment, time travel, bad language, and an error'd.

First up, Jeremy Pereira pushes the boundaries of this column by sharing something right. "Sort of an anti-WTF. It took them 44 minutes to realise they'd made a boo-boo." They probably were notified, but it's still a pretty good time to repair, especially considering the issues we know of that last for years and years.

argos

 

Snecod, Darren S. interjects "We had this bit of marketing from SmartSheet. The irony is that it was all about how to use tags to customise data or views."

smart

 

Apeman Ford Prefect of Collation hooted "Perfectly (machine-)translated, except for sorting. German is called Deutsch auf Deutsch... Spanish might be Espagnol. What may be the local name of Welsh?" I know what it is, in Welsh, and it does start with C, but I don't think the original language of this list is actually Cymraeg.

cymraeg

 

While Wordsmith Sebas whined "Don't see how this is related." It's not.

bitch

 

Finally, junge Jurgen reflects on wording we all took for granted, until just now. "When an upload process failed, it told me to check the logs for more details. This is what I found: Prove that logs can also be recursive..."

log

 

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!


Worse Than FailureCodeSOD: While Nothing

José received a bit of VB .Net UI code that left him scratching his head.

While IsNothing(Me.FfrmWait)
    If Not IsNothing(Me.FfrmWait) Then
        Exit While
    End If
End While

While a form doesn't exist, if the form does exist, break out of the loop. I suspect this was intended as a busy loop, waiting for some UI element to be ready. Because busy loops are always the best way to wait for things.

But even with the busy loop, the If is a puzzle- why break out of the loop early, when the loop condition is going to break itself? Did they just not understand how loops work?
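If you genuinely must wait for something on another thread, the less CPU-hungry pattern is to block on a synchronization primitive rather than spin. Here's a minimal sketch in Python rather than VB .Net, purely as an illustration of the general idea (the names `form_ready`, `create_form`, and `wait_for_form` are invented for the example, not from the original code):

```python
import threading

form_ready = threading.Event()  # set by whatever code creates the form

def create_form():
    # ... construct the UI element here ...
    form_ready.set()  # signal that the form now exists

def wait_for_form(timeout=5.0):
    # Blocks without burning CPU; returns False if the form
    # never appears within the timeout.
    return form_ready.wait(timeout)
```

The timeout matters: a wait that can give up and report failure is strictly better than a spin loop that pegs a core forever when the thing it's waiting for never shows up.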

Now, the last thing to note is the variable naming conventions. frmMyForm is a common convention in WinForms programming, a little bit of Hungarian notation to tell you what the UI element actually is. But what's that leading F there? Are they… doing a double Hungarian? Does it tell us that this is a form? Or maybe they're trying to tell us that it's a field of the class? Or maybe a developer was saying "F my life"?

We'd all be better off if this code were nothing.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 TomorrowsAbracadabra

Author: Sophie Carrillo My name is Leichenhund. I was not born like other rat terriers. I was created by a troubled German girl named Heidi. She was a brilliant student at Leipzig University. Her old hund, Hanso, passed away under terrible circumstances. With her science degree and big brain, she snuck into the pet cemetery […]

The post Abracadabra appeared first on 365tomorrows.

Cryptogram Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

  • I’m speaking at the Munich Security Conference (MSC) 2024 in Munich, Germany, on Friday, February 16, 2024.
  • I’m giving a keynote on “AI and Trust” at Generative AI, Free Speech, & Public Discourse. The symposium will be held at Columbia University in New York City and online, at 3 PM ET on Tuesday, February 20, 2024.
  • I’m speaking (remotely) on “AI, Trust and Democracy” at Indiana University in Bloomington, Indiana, USA, at noon ET on February 20, 2024. The talk is part of the 2023-2024 Beyond the Web Speaker Series, presented by The Ostrom Workshop and Hamilton Lugar School.

The list is maintained on this page.


Krebs on SecurityU.S. Internet Leaked Years of Internal, Customer Emails

The Minnesota-based Internet provider U.S. Internet Corp. has a business unit called Securence, which specializes in providing filtered, secure email services to businesses, educational institutions and government agencies worldwide. But until it was notified last week, U.S. Internet was publishing more than a decade’s worth of its internal email — and that of thousands of Securence clients — in plain text out on the Internet and just a click away for anyone with a Web browser.

Headquartered in Minnetonka, Minn., U.S. Internet is a regional ISP that provides fiber and wireless Internet service. The ISP’s Securence division bills itself “a leading provider of email filtering and management software that includes email protection and security services for small business, enterprise, educational and government institutions worldwide.”

U.S. Internet/Securence says your email is secure. Nothing could be further from the truth.

Roughly a week ago, KrebsOnSecurity was contacted by Hold Security, a Milwaukee-based cybersecurity firm. Hold Security founder Alex Holden said his researchers had unearthed a public link to a U.S. Internet email server listing more than 6,500 domain names, each with its own clickable link.

A tiny portion of the more than 6,500 customers who trusted U.S. Internet with their email.

Drilling down into those individual domain links revealed inboxes for each employee or user of these exposed host names. Some of the emails dated back to 2008; others were as recent as the present day.

Securence counts among its customers dozens of state and local governments, including: nc.gov — the official website of North Carolina; stillwatermn.gov, the website for the city of Stillwater, Minn.; and cityoffrederickmd.gov, the website for the government of Frederick, Md.

Incredibly, included in this giant index of U.S. Internet customer emails were the internal messages for every current and former employee of U.S. Internet and its subsidiary USI Wireless. Since that index also included the messages of U.S. Internet’s CEO Travis Carter, KrebsOnSecurity forwarded one of Mr. Carter’s own recent emails to him, along with a request to understand how exactly the company managed to screw things up so spectacularly.

Individual inboxes of U.S. Wireless employees were published in clear text on the Internet.

Within minutes of that notification, U.S. Internet pulled all of the published inboxes offline. Mr. Carter responded and said his team was investigating how it happened. In the same breath, the CEO asked if KrebsOnSecurity does security consulting for hire (I do not).

[Author’s note: Perhaps Mr. Carter was frantically casting about for any expertise he could find in a tough moment. But I found the request personally offensive, because I couldn’t shake the notion that maybe the company was hoping it could buy my silence.]

Earlier this week, Mr. Carter replied with a highly technical explanation that ultimately did little to explain why or how so many internal and customer inboxes were published in plain text on the Internet.

“The feedback from my team was a issue with the Ansible playbook that controls the Nginx configuration for our IMAP servers,” Carter said, noting that this incorrect configuration was put in place by a former employee and never caught. U.S. Internet has not shared how long these messages were exposed.

“The rest of the platform and other backend services are being audited to verify the Ansible playbooks are correct,” Carter said.

Holden said he also discovered that hackers have been abusing a Securence link scrubbing and anti-spam service called Url-Shield to create links that look benign but instead redirect visitors to hacked and malicious websites.

“The bad guys modify the malicious link reporting into redirects to their own malicious sites,” Holden said. “That’s how the bad guys drive traffic to their sites and increase search engine rankings.”

For example, clicking the Securence link shown in the screenshot directly above leads one to a website that tries to trick visitors into allowing site notifications by couching the request as a CAPTCHA request designed to separate humans from bots. After approving the deceptive CAPTCHA/notification request, the link forwards the visitor to a Russian internationalized domain name (рпроаг[.]рф).

The link to this malicious and deceptive website was created using Securence’s link-scrubbing service. Notification pop-ups were blocked when this site tried to disguise a prompt for accepting notifications as a form of CAPTCHA.

U.S. Internet has not responded to questions about how long it has been exposing all of its internal and customer emails, or when the errant configuration changes were made. The company also still has not disclosed the incident on its website. The last press release on the site dates back to March 2020.

KrebsOnSecurity has been writing about data breaches for nearly two decades, but this one easily takes the cake in terms of the level of incompetence needed for such a huge mistake to go unnoticed. I’m not sure what the proper response from authorities or regulators should be to this incident, but it’s clear that U.S. Internet should not be allowed to manage anyone’s email unless and until it can demonstrate more transparency, and prove that it has radically revamped its security.

Worse Than FailureA Stalled Upgrade

It was time to start developing version 2 of Initech's flagship software product. This meant planning meetings. So many planning meetings.

The most important one, for the actual development team, was the user story meeting. The core of these meetings was a few folks from the programming team, including Steve; Brian, the director of architecture; and a variety of product owners, responsible for different segments of the overall product.

As a group, they'd review the user stories, and approve them. Once they were approved for development, work would begin.

The meetings were difficult to schedule, because of the number of stakeholders, and they were viewed as a checkpoint- you can't start implementing features until the product owner has walked through the user story with the team, but they were also a priority.

At the meeting, product owner Renee started walking the team through some of the features she owned. "So, here's my user, Fred Flinstone." Her slide popped up an image of the animated character next to a set of bullet points describing the feature. "He needs to add an item into inventory, so that it can actually be sold. To do that…"

Renee walked them through the details of the feature. Heads nodded around the table- it was a pretty straightforward CRUD application. It was good that the product owner walked them through a few workflow things unique to Initech's product, but there wasn't anything particularly shocking.

Renee advanced to the next slide. "And now, Fred needs to run a report. This report needs to…"

Again, Renee walked them through the key fields that needed to be on the report, how the report was triggered, what the reasonable filtering options would be.

"Any questions?"

Brian, the director, looked thoughtful for a moment, and then said, "I think I do. I can't really see a reason why one person would want to both add items to inventory- a receiving job- and run reports on inventory consumption. There's no reason someone doing the task would also need to run the report. I can't accept your user stories until you change one of the users to be a different person."

"What? The name doesn't matter," Renee protested.

"We should probably just stick a pin in this and pick up when we can reschedule another meeting. Thank you everyone," Brian said. He grabbed his laptop, stood, and left the meeting. Everyone sat there for a moment, realized the meeting was actually over, and followed him out a few minutes later.

That afternoon, Steve finally managed to find Brian. "What the heck was that?"

Brian sighed. "Look, the entire development team is swamped, you know it. We don't have bandwidth to take on new work. Tracey should have that priority-one bug done in a few days, and maybe once that's done we can start putting resources on the version 2 project. For now, we need to stall, and that was the only thing I could think of."

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsInsight

Author: Alastair Millar I want the best for my wife. Of course I do. And what Doctor Singh suggested wouldn’t have been possible even a few years ago; a generation ago, it would have been utterly unthinkable. It’s expensive, but I’ve always said that I’d do anything for my darling – and curing her blindness […]

The post Insight appeared first on 365tomorrows.

Cryptogram Documents about the NSA’s Banning of Furby Toys in the 1990s

Via a FOIA request, we have documents from the NSA about their banning of Furby toys. 404 Media has the story.

EDITED TO ADD: The documents are now on Archive.org.


Krebs on SecurityFat Patch Tuesday, February 2024 Edition

Microsoft Corp. today pushed software updates to plug more than 70 security holes in its Windows operating systems and related products, including two zero-day vulnerabilities that are already being exploited in active attacks.

Top of the heap on this Fat Patch Tuesday is CVE-2024-21412, a “security feature bypass” in the way Windows handles Internet Shortcut Files that Microsoft says is being targeted in active exploits. Redmond’s advisory for this bug says an attacker would need to convince or trick a user into opening a malicious shortcut file.

Researchers at Trend Micro have tied the ongoing exploitation of CVE-2024-21412 to an advanced persistent threat group dubbed “Water Hydra,” which they say has been using the vulnerability to execute a malicious Microsoft Installer File (.msi) that in turn unloads a remote access trojan (RAT) onto infected Windows systems.

The other zero-day flaw is CVE-2024-21351, another security feature bypass — this one in the built-in Windows SmartScreen component that tries to screen out potentially malicious files downloaded from the Web. Kevin Breen at Immersive Labs says it’s important to note that this vulnerability alone is not enough for an attacker to compromise a user’s workstation, and instead would likely be used in conjunction with something like a spear phishing attack that delivers a malicious file.

Satnam Narang, senior staff research engineer at Tenable, said this is the fifth vulnerability in Windows SmartScreen patched since 2022 and all five have been exploited in the wild as zero-days. They include CVE-2022-44698 in December 2022, CVE-2023-24880 in March 2023, CVE-2023-32049 in July 2023 and CVE-2023-36025 in November 2023.

Narang called special attention to CVE-2024-21410, an “elevation of privilege” bug in Microsoft Exchange Server that Microsoft says is likely to be exploited by attackers. Attacks on this flaw would lead to the disclosure of NTLM hashes, which could be leveraged as part of an NTLM relay or “pass the hash” attack, which lets an attacker masquerade as a legitimate user without ever having to log in.

“We know that flaws that can disclose sensitive information like NTLM hashes are very valuable to attackers,” Narang said. “A Russian-based threat actor leveraged a similar vulnerability to carry out attacks – CVE-2023-23397 is an Elevation of Privilege vulnerability in Microsoft Outlook patched in March 2023.”

Microsoft notes that prior to its Exchange Server 2019 Cumulative Update 14 (CU14), a security feature called Extended Protection for Authentication (EPA), which provides NTLM credential relay protections, was not enabled by default.

“Going forward, CU14 enables this by default on Exchange servers, which is why it is important to upgrade,” Narang said.

Rapid7’s lead software engineer Adam Barnett highlighted CVE-2024-21413, a critical remote code execution bug in Microsoft Office that could be exploited just by viewing a specially-crafted message in the Outlook Preview pane.

“Microsoft Office typically shields users from a variety of attacks by opening files with Mark of the Web in Protected View, which means Office will render the document without fetching potentially malicious external resources,” Barnett said. “CVE-2024-21413 is a critical RCE vulnerability in Office which allows an attacker to cause a file to open in editing mode as though the user had agreed to trust the file.”

Barnett stressed that administrators responsible for Office 2016 installations who apply patches outside of Microsoft Update should note the advisory lists no fewer than five separate patches which must be installed to achieve remediation of CVE-2024-21413; individual update knowledge base (KB) articles further note that partially-patched Office installations will be blocked from starting until the correct combination of patches has been installed.

It’s a good idea for Windows end-users to stay current with security updates from Microsoft, which can quickly pile up otherwise. That doesn’t mean you have to install them on Patch Tuesday. Indeed, waiting a day or three before updating is a sane response, given that sometimes updates go awry and usually within a few days Microsoft has fixed any issues with its patches. It’s also smart to back up your data and/or image your Windows drive before applying new updates.

For a more detailed breakdown of the individual flaws addressed by Microsoft today, check out the SANS Internet Storm Center’s list. For those admins responsible for maintaining larger Windows environments, it often pays to keep an eye on Askwoody.com, which frequently points out when specific Microsoft updates are creating problems for a number of users.