Planet Russell

Charles Stross Announcement time!

I am very pleased to be able to announce that the Laundry Files are shortlisted for the Hugo Award for Best Series!

(Astute readers will recall that the Laundry Files were shortlisted—but did not win—in 2019. Per the rules, "A qualifying installment must be published in the qualifying year ... If a Series is a finalist and does not win, it is no longer eligible until at least two more installments consisting of at least 240,000 words total appear in subsequent years." Since 2019, the Laundry Files have grown by three full novels (the New Management books) and a novella ("Escape from Yokai Land"), totaling about 370,000 words. "Season of Skulls" was published in 2023, hence the series is eligible in 2024.)

The Hugo award winners will be announced at the world science fiction convention in Glasgow this August, on the evening of Sunday August 11th. Full announcement of the entire shortlist here.

In addition to the Hugo nomination, the Kickstarter for the second edition of the Laundry tabletop role playing game, from Cubicle 7 games, goes live for pre-orders in the next month. If you want to be notified when that happens, there's a sign-up page here.

Finally, there's some big news coming soon about film/TV rights, and separately, graphic novel rights, to the Laundry Files. I can't say any more at this point, but expect another announcement (or two!) over the coming months.

I'm sure you have questions. Ask away!

Charles Stross A Wonky Experience

A Wonka Story

This is no longer in the current news cycle, but definitely needs to be filed under "stuff too insane for Charlie to make up", or maybe "promising screwball comedy plot line to explore", or even "perils of outsourcing creative media work to generative AI".

So. Last weekend saw insane news-generating scenes in Glasgow around a public event aimed at children: Willy's Chocolate Experience, a blatant attempt to cash in on Roald Dahl's cautionary children's tale "Charlie and the Chocolate Factory", which is currently most prominently associated in the zeitgeist with the 2005 movie directed by Tim Burton, who probably needs no introduction, even to a cinematic illiterate like me. Although I gather a prequel movie (called, predictably, Wonka) came out in 2023.

(Because sooner or later the folks behind "House of Illuminati Ltd" will wise up and delete the website, here's a handy link to how it looked on February 24th via archive.org.)

INDULGE IN A CHOCOLATE FANTASY LIKE NEVER BEFORE - CAPTURE THE ENCHANTMENT ™!

Tickets to Willys Chocolate Experience™ are on sale now!

The event was advertised with amazing, almost hallucinogenic graphics that were clearly AI generated, and equally clearly not proofread, because Stable Diffusion utterly sucks at writing English captions: instead of legible copy, the posters offered word-salad enticements such as Catgacating • live performances • Cartchy tuns, exarserdray lollipops, a pasadise of sweet teats.* And tickets were on sale for a mere £35 per child!

Anyway, it hit the news (and not in a good way) and the event was terminated on day one after the police were called. Here's The Guardian's coverage:

The event publicity promised giant mushrooms, candy canes and chocolate fountains, along with special audio and visual effects, all narrated by dancing Oompa-Loompas - the tiny, orange men who power Wonka's chocolate factory in the Roald Dahl book which inspired the prequel film.

But instead, when eager families turned up to the address in Whiteinch, an industrial area of Glasgow, they discovered a sparsely decorated warehouse with a scattering of plastic props, a small bouncy castle and some backdrops pinned against the walls.

Anyway, since the near-riot and hasty shutdown of the event, things have ... recomplicated? I think that's the diplomatic way to phrase it.

First, someone leaked the script for the event on twitter. They'd hired actors and evidently used ChatGPT to generate a script for the show: some of the actors quit in despair, others made a valiant attempt to at least amuse the children. But it didn't work. Interactive audience-participation events are hard work, and this one apparently called for the sort of special effects that Disney's Imagineers might have blanched at, or at least asked, "who's paying for this?"

Here's a ThreadReader transcript of the twitter thread about the script (ThreadReader chains tweets together into a single web page, so you don't have to log into the hellsite itself). Note that it's in the shape of screenshots of the script, and ThreadReader didn't grab the images, so here's my transcript of the first three:

DIRECTION: (Audience members engage with the interactive flowers, offering compliments, to which the flowers respond with pre-recorded, whimsical thank-yous.)

Wonkidoodle 1: (to a guest) Oh, and if you see a butterfly, whisper your sweetest dream to it. They're our official secret keepers and dream carriers of the garden!

Willy McDuff: (gathering everyone's attention) Now, I must ask, has anyone seen the elusive Bubble Bloom? It's a rare flower that blooms just once every blue moon and fills the air with shimmering bubbles!

DIRECTION: (The stage crew discreetly activates bubble machines, filling the area with bubbles, causing excitement and wonder among the audience.)

Wonkidoodle 2: (pretending to catch bubbles) Quick! Each bubble holds a whisper of enchantment--catch one, and make a wish!

Willy McDuff: (as the bubble-catching frenzy continues) Remember, in the Garden of Enchantment, every moment is a chance for magic, every corner hides a story, and every bubble... (catches a bubble) holds a dream.

DIRECTION: (He opens his hand, and the bubble gently pops, releasing a small, twinkling light that ascends into the rafters, leaving the audience in awe.)

Willy McDuff: (with warmth) My dear friends, take this time to explore, to laugh, and to dream. For in this garden, the magic is real, and the possibilities are endless. And who knows? The next wonder you encounter may just be around the next bend.

DIRECTION: Scene ends with the audience fully immersed in the interactive, magical experience, laughter and joy filling the air as Willy McDuff and the Wonkidoodles continue to engage and delight with their enchanting antics and treats.

DIRECTION: Transition to the Bubble and Lemonade Room

Willy McDuff: (suddenly brightening) Speaking of light spirits, I find myself quite parched after our...unexpected adventure. But fortune smiles upon us, for just beyond this door lies a room filled with refreshments most delightful--the Bubble and Lemonade Room!

DIRECTION: (With a flourish, Willy opens a previously unnoticed door, revealing a room where the air sparkles with floating bubbles, and rivers of sparkling lemonade flow freely.)

Willy McDuff: Here, my dear guests, you may quench your thirst with lemonade that fizzes and dances on the tongue, and chase bubbles that burst with flavors unimaginable. A toast, to adventures shared and friendships forged in the heart of the unknown!

DIRECTION: (The audience, now relieved and rejuvenated by the whimsical turn of events, follows Willy into the Bubble and Lemonade Room, laughter and chatter filling the air once more, as they immerse themselves in the joyous, bubbly wonderland.)

And here is a photo of the Lemonade Room in all its glory.

A trestle table with some paper cups half-full of flat lemonade

Note that in the above directions, near as I can make out, there were no stage crew on site. As Seamus O'Reilly put it, "I get that lazy and uncreative people will use AI to generate concepts. But if the script it barfs out has animatronic flowers, glowing orbs, rivers of lemonade and giggling grass, YOU still have to make those things exist. I'm v confused as to how that part was misunderstood."

Now, if that was all there was to it, it'd merely be annoying. My initial take was that this was a blatant rip-off, a consumer fraud perpetrated by a company ("House of Illuminati") based in London, doing everything by remote control over the internet to fleece those gullible provincials of their wallet contents. (Oh, and that probably includes the actors: did they get paid on the day?) But aftershocks are still rumbling on, a week later.

Per The Daily Beast, "House of Illuminati" issued an apology (via Facebook) on Friday, offering to refund all tickets—but then mysteriously deleted the apology hours later, and posted a new one:

"I want to extend my sincerest apologies to each and every one of you who was looking forward to this event," the latest Facebook post from House of Illuminati reads. "I understand the disappointment and frustration this has caused, and for that, I am truly sorry."

(The individual behind the post goes unnamed.)

"It's important for me to clarify that the organization and decisions surrounding this event were solely my responsibility," the post continues. "I want to make it clear that anyone who was hired externally or offered their help, are not affiliated with the me or the company, any use of faces can cause serious harm to those who did not have any involvement in the making of this event."

"Regarding a personal matter, there will be no wedding, and no wedding was funded by the ticket sales," the post continues further, sans context. "This is a difficult time for me, and I ask for your understanding and privacy."

"There will be no wedding, and no wedding was funded by the ticket sales?" (What on Earth is going on here?)

Finally, The Daily Beast notes that Billy McFarland, the creator of the Fyre Fest fiasco, told TMZ he'd love to give the Wonka organizers a second chance at getting things right at Fyre Fest II.

The mind boggles.

I am now wondering if the whole thing wasn't some sort of extraordinarily elaborate publicity stunt rather than simply a fraud, but I can't for the life of me work out what was going on. Unless it was Jimmy Cauty and Bill Drummond (aka The KLF) getting up to hijinks again? But I can't imagine them doing anything so half-assed ... Least-bad case is that an idiot decided to set up an events company ("how hard can running public arts events be?" —don't answer that) and intended to use the profits and the experience to plan their dream wedding. Which then ran off the rails into a ditch, rolled over, exploded in flames, was sucked up by a tornado and deposited in Oz, their fiancée called off the engagement and eloped with a walrus, and—

It's all downhill from here.

Anyway, the moral of the story so far is: don't use generative AI tools to write scripts for public events, or to produce promotional images, or indeed to do anything at all without an experienced human to sanity check their output! And especially don't use them to fund your wedding ...

UPDATE: Identity of scammer behind Willy's Chocolate Experience exposed -- a YouTube video; I haven't had a chance to watch it all yet, and will summarize if relevant later. The perp has form for selling ChatGPT-generated ebook-shaped "objects" via Amazon.

NEW UPDATE: Glasgow's disastrous Wonka character inspires horror film

A villain devised for the catastrophic Willy's Chocolate Experience, who makes sweets and lives in walls, is to become the subject of a new horror movie.

LATEST UPDATE: House of Illuminati claims "copywrite", "we will protect our interests".

The 'Meth Lab Oompa Loompa Lady' is selling greetings on Cameo for $25.

And Eleanor Morton has a new video out, Glasgow Wonka Experience Tourguide Doesn't Give a F*.

FINAL UPDATE: Props from botched Willy Wonka event raise over £2,000 for Palestinian aid charity: Glasgow record shop Monorail Music auctioned the props on eBay after they were discovered in a bin outside the warehouse where the event took place. (So some good came of it in the end ...)

Worse Than Failure CodeSOD: A Valid Applicant

In the late 90s into the early 2000s, there was an entire industry spun up to get businesses and governments off their mainframe systems from the 60s and onto something modern. "Modern", in that era, usually meant Java. I attended vendor presentations, for example, that promised you could take your mainframe, slap a SOAP webservice on it, and then gradually migrate modules off the mainframe and into Java Enterprise Edition. In the intervening years, I have seen exactly 0 successful migrations like this: usually they just end up trying that for a few years and then biting the bullet and doing a ground-up rewrite.

That's the situation ML was in: a state government wanted to replace their COBOL mainframe monster with a "maintainable" J2EE/WebSphere based application. Gone would be the 3270 dumb terminals; in their place, desktop PCs running web browsers.

ML's team did the initial design work, which the state was very happy with. But the actual development work gave the state sticker shock, so they opted to take the design from ML's company and ship it out to a lowest-bidder offshore vendor to actually do the development work.

This, by the way, was another popular mindset in the early 00s: you could design your software as a set of UML diagrams and then hand them off to the cheapest coder you could hire, and voila, you'd have working software (and someday soon, you'd be able to just generate the code and wouldn't need the programmer in the first place! ANY DAY NOW).

Now, this code is old, and predates generics in Java, so the use of ArrayLists isn't the WTF. But the programmer's approach to polymorphism is.

public class Applicant extends Person {
	// ... [snip] ...
}

.
.
.


public class ApplicantValidator {
	
	
	public void validateApplicantList(List listOfApplicants) throws ValidationException {

		// ... [snip] ...
		
		// Create a List of Person to validate
		List listOfPersons = new ArrayList();
		Iterator i = listOfApplicants.iterator(); 
		while (i.hasNext()) {
			Applicant a = (Applicant) i.next();
			Person p = (Person) a;
			listOfPersons.add(p);
		}
		
		PersonValidator.getInstance().validatePersonList(listOfPersons);

		// ... [snip] ...

	}

	// ... [snip] ...
}

Here you see an Applicant is a subclass of Person. We also see an ApplicantValidator class, which needs to verify that the applicant objects are valid, and to do that, it needs to treat them as Person objects.

To do this, we iterate across our list of applicants (which, it's worth noting, are being treated as Objects since we don't have generics), cast each one to Applicant, then cast the Applicant variable to Person, and build up a list of Persons, which again, absent generics, is just a list of Objects. Then we pass that list of persons into validatePersonList.

All of this is unnecessary, and demonstrates a lack of understanding about the language in use. This block could be written more clearly as: PersonValidator.getInstance().validatePersonList(listOfApplicants);

This gives us the same result with significantly less effort.
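
(For contrast, here's a minimal sketch of how this relationship reads once generics are available; the class bodies are hypothetical stand-ins for the snipped originals. A List<Applicant> can be handed to anything that accepts a List<? extends Person>, so no copying or casting is needed.)

import java.util.List;

class Person { String name; }

class Applicant extends Person { }

class ValidationException extends Exception {
	ValidationException(String message) { super(message); }
}

class PersonValidator {
	private static final PersonValidator INSTANCE = new PersonValidator();
	static PersonValidator getInstance() { return INSTANCE; }

	// Accepts a list of Person or of any subclass of Person.
	void validatePersonList(List<? extends Person> people) throws ValidationException {
		for (Person p : people) {
			if (p.name == null || p.name.isEmpty()) {
				throw new ValidationException("Person has no name");
			}
		}
	}
}

class ApplicantValidator {
	public void validateApplicantList(List<Applicant> listOfApplicants) throws ValidationException {
		// No iteration, no casts: an Applicant already is a Person.
		PersonValidator.getInstance().validatePersonList(listOfApplicants);
	}
}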

While much of the code coming from the offshore team was actually solid, it contained so much nonsense like this, so many misunderstandings of the design, and so many bugs that the state kept coming back to ML's company to address the issues. Between paying the offshore team to do the work, and then paying ML's team to fix the work, the entire project cost much more than if they had hired ML's team in the first place.

But the developers still billed at a lower rate than ML's team, which meant the managers responsible still got to brag about cost savings, even as they overran the project budget. "Imagine how much it would have cost if we hadn't gone with the cheaper labor?"

365 Tomorrows Web World

Author: Rosie Oliver Grey-ghosted darkness. Not even a piece of dulled memory in the expansive nothingness ahead. Damn! Time is now against her completing her sculpture. She had been so sure the right shape could be found along her latest trajectory. Floating, she twists round to face the massive structure that extends in every direction […]

The post Web World appeared first on 365tomorrows.

Cryptogram xz Utils Backdoor

The cybersecurity world got really lucky last week. An intentionally placed backdoor in xz Utils, an open-source compression utility, was pretty much accidentally discovered by a Microsoft engineer—weeks before it would have been incorporated into both Debian and Red Hat Linux. From Ars Technica:

Malicious code added to xz Utils versions 5.6.0 and 5.6.1 modified the way the software functions. The backdoor manipulated sshd, the executable file used to make remote SSH connections. Anyone in possession of a predetermined encryption key could stash any code of their choice in an SSH login certificate, upload it, and execute it on the backdoored device. No one has actually seen code uploaded, so it’s not known what code the attacker planned to run. In theory, the code could allow for just about anything, including stealing encryption keys or installing malware.

It was an incredibly complex backdoor. Installing it was a multi-year process that seems to have involved social engineering the lone unpaid engineer in charge of the utility. More from Ars Technica:

In 2021, someone with the username JiaT75 made their first known commit to an open source project. In retrospect, the change to the libarchive project is suspicious, because it replaced the safe_fprint function with a variant that has long been recognized as less secure. No one noticed at the time.

The following year, JiaT75 submitted a patch over the xz Utils mailing list, and, almost immediately, a never-before-seen participant named Jigar Kumar joined the discussion and argued that Lasse Collin, the longtime maintainer of xz Utils, hadn’t been updating the software often or fast enough. Kumar, with the support of Dennis Ens and several other people who had never had a presence on the list, pressured Collin to bring on an additional developer to maintain the project.

There’s a lot more. The sophistication of both the exploit and the process to get it into the software project screams nation-state operation. It’s reminiscent of SolarWinds, although (1) it would have been much, much worse, and (2) we got really, really lucky.

I simply don’t believe this was the only attempt to slip a backdoor into a critical piece of Internet software, either closed source or open source. Given how lucky we were to detect this one, I believe this kind of operation has been successful in the past. We simply have to stop building our critical national infrastructure on top of random software libraries managed by lone unpaid distracted—or worse—individuals.

Cryptogram Surveillance by the New Microsoft Outlook App

The ProtonMail people are accusing Microsoft’s new Outlook for Windows app of conducting extensive surveillance on its users. It shares data with advertisers, a lot of data:

The window informs users that Microsoft and those 801 third parties use their data for a number of purposes, including to:

  • Store and/or access information on the user’s device
  • Develop and improve products
  • Personalize ads and content
  • Measure ads and content
  • Derive audience insights
  • Obtain precise geolocation data
  • Identify users through device scanning

Commentary.

Krebs on Security ‘The Manipulaters’ Improve Phishing, Still Fail at Opsec

Roughly nine years ago, KrebsOnSecurity profiled a Pakistan-based cybercrime group called “The Manipulaters,” a sprawling web hosting network of phishing and spam delivery platforms. In January 2024, The Manipulaters pleaded with this author to unpublish previous stories about their work, claiming the group had turned over a new leaf and gone legitimate. But new research suggests that while they have improved the quality of their products and services, these nitwits still fail spectacularly at hiding their illegal activities.

In May 2015, KrebsOnSecurity published a brief writeup about the brazen Manipulaters team, noting that they openly operated hundreds of web sites selling tools designed to trick people into giving up usernames and passwords, or deploying malicious software on their PCs.

Manipulaters advertisement for “Office 365 Private Page with Antibot” phishing kit sold on the domain heartsender,com. “Antibot” refers to functionality that attempts to evade automated detection techniques, keeping a phish deployed as long as possible. Image: DomainTools.

The core brand of The Manipulaters has long been a shared cybercriminal identity named “Saim Raza,” who for the past decade has peddled a popular spamming and phishing service variously called “Fudtools,” “Fudpage,” “Fudsender,” “FudCo,” etc. The term “FUD” in those names stands for “Fully Un-Detectable,” and it refers to cybercrime resources that will evade detection by security tools like antivirus software or anti-spam appliances.

A September 2021 story here checked in on The Manipulaters, and found that Saim Raza and company were prospering under their FudCo brands, which they secretly managed from a front company called We Code Solutions.

That piece worked backwards from all of the known Saim Raza email addresses to identify Facebook profiles for multiple We Code Solutions employees, many of whom could be seen celebrating company anniversaries gathered around a giant cake with the words “FudCo” painted in icing.

Since that story ran, KrebsOnSecurity has heard from this Saim Raza identity on two occasions. The first was in the weeks following the Sept. 2021 piece, when one of Saim Raza’s known email addresses — bluebtcus@gmail.com — pleaded to have the story taken down.

“Hello, we already leave that fud etc before year,” the Saim Raza identity wrote. “Why you post us? Why you destroy our lifes? We never harm anyone. Please remove it.”

Not wishing to be manipulated by a phishing gang, KrebsOnSecurity ignored those entreaties. But on Jan. 14, 2024, KrebsOnSecurity heard from the same bluebtcus@gmail.com address, apropos of nothing.

“Please remove this article,” Saim Raza wrote, linking to the 2021 profile. “Please already my police register case on me. I already leave everything.”

Asked to elaborate on the police investigation, Saim Raza said they were freshly released from jail.

“I was there many days,” the reply explained. “Now back after bail. Now I want to start my new work.”

Exactly what that “new work” might entail, Saim Raza wouldn’t say. But a new report from researchers at DomainTools.com finds that several computers associated with The Manipulaters have been massively hacked by malicious data- and password-snarfing malware for quite some time.

DomainTools says the malware infections on Manipulaters PCs exposed “vast swaths of account-related data along with an outline of the group’s membership, operations, and position in the broader underground economy.”

“Curiously, the large subset of identified Manipulaters customers appear to be compromised by the same stealer malware,” DomainTools wrote. “All observed customer malware infections began after the initial compromise of Manipulaters PCs, which raises a number of questions regarding the origin of those infections.”

A number of questions, indeed. The core Manipulaters product these days is a spam delivery service called HeartSender, whose homepage openly advertises phishing kits targeting users of various Internet companies, including Microsoft 365, Yahoo, AOL, Intuit, iCloud and ID.me, to name a few.

A screenshot of the homepage of HeartSender 4 displays an IP address tied to fudtoolshop@gmail.com. Image: DomainTools.

HeartSender customers can interact with the subscription service via the website, but the product appears to be far more effective and user-friendly if one downloads HeartSender as a Windows executable program. Whether that HeartSender program was somehow compromised and used to infect the service’s customers is unknown.

However, DomainTools also found that the hosted version of the HeartSender service leaks an extraordinary amount of user information that probably is not intended to be publicly accessible. Apparently, the HeartSender web interface has several webpages that are accessible to unauthenticated users, exposing customer credentials along with support requests to HeartSender developers.

“Ironically, the Manipulaters may create more short-term risk to their own customers than law enforcement,” DomainTools wrote. “The data table “User Feedbacks” (sic) exposes what appear to be customer authentication tokens, user identifiers, and even a customer support request that exposes root-level SMTP credentials–all visible by an unauthenticated user on a Manipulaters-controlled domain. Given the risk for abuse, this domain will not be published.”

This is hardly the first time The Manipulaters have shot themselves in the foot. In 2019, The Manipulaters failed to renew their core domain name — manipulaters[.]com — the same one tied to so many of the company’s past and current business operations. That domain was quickly scooped up by Scylla Intel, a cyber intelligence firm that focuses on connecting cybercriminals to their real-life identities.

Currently, The Manipulaters seem focused on building out and supporting HeartSender, which specializes in spam and email-to-SMS spamming services.

“The Manipulaters’ newfound interest in email-to-SMS spam could be in response to the massive increase in smishing activity impersonating the USPS,” DomainTools wrote. “Proofs posted on HeartSender’s Telegram channel contain numerous references to postal service impersonation, including proving delivery of USPS-themed phishing lures and the sale of a USPS phishing kit.”

Reached via email, the Saim Raza identity declined to respond to questions about the DomainTools findings.

“First [of] all we never work on virus or compromised computer etc,” Raza replied. “If you want to write like that fake go ahead. Second I leave country already. If someone bind anything with exe file and spread on internet its not my fault.”

Asked why they left Pakistan, Saim Raza said the authorities there just wanted to shake them down.

“After your article our police put FIR on my [identity],” Saim Raza explained. “FIR” in this case stands for “First Information Report,” which is the initial complaint in the criminal justice system of Pakistan.

“They only get money from me nothing else,” Saim Raza continued. “Now some officers ask for money again again. Brother, there is no good law in Pakistan just they need money.”

Saim Raza has a history of being slippery with the truth, so who knows whether The Manipulaters and/or its leaders have in fact fled Pakistan (it may be more of an extended vacation abroad). With any luck, these guys will soon venture into a more Western-friendly, “good law” nation and receive a warm welcome by the local authorities.

Planet Debian Guido Günther: Free Software Activities March 2024

A short status update of what happened on my side last month. I spent quite a bit of time reviewing new code (thanks!) as well as doing maintenance to keep things going, but we also have some improvements:

Phosh

Phoc

phosh-mobile-settings

phosh-osk-stub

gmobile

Livi

squeekboard

GNOME calls

Libsoup

If you want to support my work see donations.

Planet Debian Joey Hess: reflections on distrusting xz

Was the ssh backdoor the only goal that "Jia Tan" was pursuing with their multi-year operation against xz?

I doubt it, and if not, then every fix so far has been incomplete, because everything is still running code written by that entity.

If we assume that they had a multilayered plan, that their every action was calculated and malicious, then we have to think about the full threat surface of using xz. This quickly gets into nightmare scenarios of the "trusting trust" variety.

What if xz contains a hidden buffer overflow or other vulnerability, that can be exploited by the xz file it's decompressing? This would let the attacker target other packages, as needed.

Let's say they want to target gcc. Well, gcc contains a lot of documentation, which includes png images. So they spend a while getting accepted as a documentation contributor on that project, and get a specially constructed png file added to it: one with additional binary data appended that exploits the buffer overflow and instructs xz to modify the source code that comes later when decompressing gcc.tar.xz.

More likely, they wouldn't bother with an actual trusting trust attack on gcc, which would be a lot of work to get right. One problem with the ssh backdoor is that, well, not all servers on the internet run ssh. (Or systemd.) So webservers seem a likely target for this kind of second-stage attack. Apache's docs include png files, nginx's do not, but there's always scope to add improved documentation to a project.

When would such a vulnerability have been introduced? In February, "Jia Tan" wrote a new decoder for xz. This added 1000+ lines of new C code across several commits. So much code, and in just the right place to insert something like this. And why take on such a significant project just two months before inserting the ssh backdoor? "Jia Tan" was already fully accepted as maintainer and doing lots of other work; it doesn't seem to me that they needed to start this rewrite as part of their cover.

They were working closely with xz's author Lasse Collin on this, by indications exchanging patches off-list as they developed it. So Lasse Collin's commits in this time period are also worth scrutiny, because they could have been influenced by "Jia Tan". One that caught my eye comes immediately afterwards: "prepares the code for alternative C versions and inline assembly". Multiple versions and assembly mean even more places to hide such a security hole.

I stress that I have not found such a security hole, I'm only considering what the worst case possibilities are. I think we need to fully consider them in order to decide how to fully wrap up this mess.

Whether such stealthy security holes have been introduced into xz by "Jia Tan" or not, there are definitely indications that the ssh backdoor was not the end of what they had planned.

For one thing, the "test file" based system they introduced was extensible. They could have been planning to add more test files later, that backdoored xz in further ways.

And then there's the matter of the disabling of the Landlock sandbox. This was not necessary for the ssh backdoor, because the sandbox is only used by the xz command, not by liblzma. So why did they potentially tip their hand by adding that rogue "." that disables the sandbox?

A sandbox would not prevent the kind of attack I discuss above, where xz is just modifying code that it decompresses. Disabling the sandbox suggests that they were going to make xz run arbitrary code, that perhaps wrote to files it shouldn't be touching, to install a backdoor in the system.

Both deb and rpm use xz compression, and with the sandbox disabled, whether they link with liblzma or run the xz command, a backdoored xz can write to any file on the system while dpkg or rpm is running, and no one is likely to notice, because that's the kind of thing a package manager does.

My impression is that all of this was well planned and they were in it for the long haul. They had no reason to stop with backdooring ssh, except for the risk of additional exposure. But they decided to take that risk, with the sandbox disabling. So they planned to do more, and every commit by "Jia Tan", and really every commit that they could have influenced needs to be distrusted.

This is why I've suggested to Debian that they revert to an earlier version of xz. That would be my advice to anyone distributing xz.
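
(For a single Debian machine, one illustrative way to follow that advice while waiting for the distribution to sort things out is an apt pin that refuses the affected 5.6.x uploads. The package names and version pattern below are assumptions; adjust them to whatever your release actually ships, and check the result afterwards with apt policy.)

# /etc/apt/preferences.d/xz-pin -- illustrative only; adapt versions to your release
Package: xz-utils liblzma5
Pin: version 5.4.*
Pin-Priority: 1001

$ apt policy xz-utils liblzma5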

I do have an xz-unscathed fork which I've carefully constructed to avoid all "Jia Tan"-involved commits. It feels good to not need to worry about dpkg and tar. I only plan to maintain this fork minimally, e.g. security fixes. Hopefully Lasse Collin will consider these possibilities and address them in his response to the attack.

Worse Than Failure CodeSOD: Gotta Catch 'Em All

It's good to handle any exception that could be raised in some useful way. Frequently, this means that you need to take advantage of the catch block's ability to filter by type so you can do something different in each case. Or you could do what Adam's co-worker did.

try
{
/* ... some important code ... */
} catch (OutOfMemoryException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (OverflowException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (InvalidCastException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (NullReferenceException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (IndexOutOfRangeException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (ArgumentException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (InvalidOperationException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (XmlException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (IOException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (NotSupportedException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (Exception exception) {
        Global.Insert("App.GetSettings;", exception.Message);
}

Well, I guess that if they ever need to add different code paths, they're halfway there.
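
(Since every branch does exactly the same thing, the whole ladder collapses to one handler. A minimal sketch, keeping the Global.Insert call from the snippet above:)

try
{
    /* ... some important code ... */
}
catch (Exception exception)
{
    // Every type in the ladder above derives from Exception,
    // so a single handler logs them all identically.
    Global.Insert("App.GetSettings;", exception.Message);
}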

365 Tomorrows Sucky

Author: Alastair Millar As I burst the blister on Martha’s back, the gelatinous pus within made its escape. Thanking the Void Gods for the medpack’s surgical gloves, I wiped her down, then set to work with the tweezers; if I couldn’t get the eggs out, it was all for nothing. This is the side of […]

The post Sucky appeared first on 365tomorrows.

xkcd Eclipse Clouds

Planet Debian Arnaud Rebillout: Firefox: Moving from the Debian package to the Flatpak app (long-term?)

First, thanks to Samuel Henrique for giving notice of recent Firefox CVEs in Debian testing/unstable.

At the time I didn't want to upgrade my system (Debian Sid) due to the ongoing t64 transition, so I decided I could install the Firefox Flatpak app instead, and why not stick to it long-term?

This blog post details all the steps, if ever others want to go the same road.

Flatpak Installation

Disclaimer: this section is hardly anything more than a copy/paste of the official documentation, and with time it will get outdated, so you'd better follow the official doc.

First thing first, let's install Flatpak:

$ sudo apt update
$ sudo apt install flatpak

Then the next step is to add the Flathub remote repository, from where we'll get our Flatpak applications:

$ flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo

And that's all there is to it! Now come the optional steps.

For GNOME and KDE users, you might want to install a plugin for the software manager specific to your desktop, so that it can support and manage Flatpak apps:

$ which -s gnome-software  && sudo apt install gnome-software-plugin-flatpak
$ which -s plasma-discover && sudo apt install plasma-discover-backend-flatpak

And here's an additional check you can do, as it's something that did bite me in the past: missing xdg-desktop-portal-* packages, which are required for Flatpak applications to communicate with the desktop environment. Just to be sure, you can check the output of apt search '^xdg-desktop-portal' to see what's available, and compare it with the output of dpkg -l | grep xdg-desktop-portal.

As you can see, if you're a GNOME or KDE user, there's a portal backend for you, and it should be installed. For reference, this is what I have on my GNOME desktop at the moment:

$ dpkg -l | grep xdg-desktop-portal | awk '{print $2}'
xdg-desktop-portal
xdg-desktop-portal-gnome
xdg-desktop-portal-gtk

Install the Firefox Flatpak app

This is trivial, but still, there's a question I've always asked myself: should I install applications system-wide (aka flatpak --system, the default) or per-user (aka flatpak --user)? Turns out, this question is answered in the Flatpak documentation:

Flatpak commands are run system-wide by default. If you are installing applications for day-to-day usage, it is recommended to stick with this default behavior.

Armed with this new knowledge, let's install the Firefox app:

$ flatpak install flathub org.mozilla.firefox

And that's about it! We can give it a go already:

$ flatpak run org.mozilla.firefox

Data migration

At this point, running Firefox via Flatpak gives me an "empty" Firefox. That's not what I want, instead I want my usual Firefox, with a gazillion of tabs already opened, a few extensions, bookmarks and so on.

As it turns out, Mozilla provides a brief doc for data migration, and it's as simple as moving the Firefox data directory around!

To clarify, we'll be copying data:

  • from ~/.mozilla/ -- where the Firefox Debian package stores its data
  • into ~/.var/app/org.mozilla.firefox/.mozilla/ -- where the Firefox Flatpak app stores its data

Make sure that all Firefox instances are closed, then proceed:

# BEWARE! Below I'm erasing data!
$ rm -fr ~/.var/app/org.mozilla.firefox/.mozilla/firefox/
$ cp -a ~/.mozilla/firefox/ ~/.var/app/org.mozilla.firefox/.mozilla/

To avoid confusing myself, it's also a good idea to rename the local data directory:

$ mv ~/.mozilla/firefox ~/.mozilla/firefox.old.$(date --iso-8601=date)

At this point, flatpak run org.mozilla.firefox takes me to my "usual" everyday Firefox, with all its tabs opened, pinned, bookmarked, etc.

More integration?

After following all the steps above, I must say that I'm 99% happy. So far, everything works as before, I didn't hit any issue, and I don't even notice that Firefox is running via Flatpak, it's completely transparent.

So where's the 1% of unhappiness? The « Run a Command » dialog from GNOME, the one that shows up via the keyboard shortcut <Alt+F2>. This is how I start my GUI applications, and I usually run two Firefox instances in parallel (one for work, one for personal), using the firefox -p <profile> command.

Given that I ran apt purge firefox before (to avoid confusing myself with two installations of Firefox), now the right (and only) way to start Firefox from a command-line is to type flatpak run org.mozilla.firefox -p <profile>. Typing that every time is way too cumbersome, so I need something quicker.

Seems like the most straightforward is to create a wrapper script:

$ cat /usr/local/bin/firefox 
#!/bin/sh
exec flatpak run org.mozilla.firefox "$@"

And now I can just hit <Alt+F2> and type firefox -p <profile> to start Firefox with the profile I want, just as before. Neat!
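
For completeness, here's one way to create that wrapper in the first place (assuming /usr/local/bin comes before any other firefox on your PATH):

$ sudo tee /usr/local/bin/firefox > /dev/null << 'EOF'
#!/bin/sh
exec flatpak run org.mozilla.firefox "$@"
EOF
$ sudo chmod +x /usr/local/bin/firefox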

Looking forward: system updates

I usually update my system manually every now and then, via the well-known pair of commands:

$ sudo apt update
$ sudo apt full-upgrade

The downside of introducing Flatpak, i.e. introducing another package manager, is that I'll need to learn new commands to update the software that comes via this channel.

Fortunately, there's really not much to learn. From flatpak-update(1):

flatpak update [OPTION...] [REF...]

Updates applications and runtimes. [...] If no REF is given, everything is updated, as well as appstream info for all remotes.

Could it be that simple? Apparently yes, the Flatpak equivalent of the two apt commands above is just:

$ flatpak update

Going forward, my options are:

  1. Teach myself to run flatpak update in addition to apt update, manually, every time I update my system.
  2. Go crazy: let something automatically update my Flatpak apps, behind my back and without my consent.

I'm actually tempted to go for option 2 here, and I wonder if GNOME Software will do that for me, provided that I installed gnome-software-plugin-flatpak, and that I checked « Software Updates -> Automatic » in the Settings (which I did).

However, I didn't find any documentation regarding what this setting really does, so I can't say whether it will only download updates, or also install them. I'd be happy if it automatically installs new versions of Flatpak apps, but at the same time I'd be very unhappy if it automatically upgraded my Debian system...
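
A third option, in case GNOME Software turns out not to do the job: a small systemd user timer that runs flatpak update on a schedule. This is only a sketch, assuming a systemd user session is available; the unit names are arbitrary:

$ cat ~/.config/systemd/user/flatpak-update.service
[Unit]
Description=Update Flatpak applications

[Service]
Type=oneshot
ExecStart=/usr/bin/flatpak update --noninteractive -y

$ cat ~/.config/systemd/user/flatpak-update.timer
[Unit]
Description=Update Flatpak applications weekly

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target

$ systemctl --user enable --now flatpak-update.timer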

So we'll see. Enough for today, hope this blog post was useful!

Planet Debian Dirk Eddelbuettel: ulid 0.3.1 on CRAN: New Maintainer, Some Polish

Happy to share that ulid is now (back) on CRAN. It provides universally unique identifiers that are lexicographically sortable, which improves over the more well-known uuid generators.

ulid is a neat little package put together by Bob Rudis a few years ago. It had recently drifted off CRAN so I offered to brush it up and re-submit it. And as tooted earlier today, it took just over an hour to finish that (after the lead up work I had done, including prior email with CRAN in the loop, the repo transfer from Bob’s to my ulid repo plus of course a wee bit of actual maintenance; see below for more).

The NEWS entry follows.

Changes in version 0.3.1 (2024-04-02)

  • New Maintainer

  • Deleted several repository files no longer used or needed

  • Added .editorconfig, ChangeLog and cleanup

  • Converted NEWS.md to NEWS.Rd

  • Simplified R/ directory to one source file

  • Simplified src/ removing redundant Makevars

  • Added ulid() alias

  • Updated / edited roxygen and README.md documentation

  • Removed vignette which was identical to README.md

  • Switched continuous integration to GitHub Actions

  • Placed upstream (header-only) library into src/ulid/

  • Renamed single interface file to src/wrapper
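
For anyone new to the package, a minimal usage sketch, assuming the ulid() alias mentioned above simply wraps the default generator:

library(ulid)
# Generate one identifier; ULIDs sort lexicographically by creation time.
id <- ulid()
print(id)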

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Debian Sven Hoexter: PKIX: pathLen Constrain on Root Certificates

I recently came across an x509 P(rivate)KI Root Certificate which had a pathLen constraint set on the (self-signed) Root Certificate. Since that is not commonly seen, I looked around a bit to get a better understanding of how the pathLen basic constraint should be used.

Primary source is RFC 5280 section 4.2.1.9

The pathLenConstraint field is meaningful only if the cA boolean is asserted and the key usage extension, if present, asserts the keyCertSign bit (Section 4.2.1.3). In this case, it gives the maximum number of non-self-issued intermediate certificates that may follow this certificate in a valid certification path

Since the Root is always self-issued it doesn't count towards the limit, and since it's the last certificate (or the first, depending on how you count) in a chain, it's pretty much pointless to configure a pathLen constraint directly on a Root Certificate.
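
For illustration, the place where a path length constraint does carry meaning is an intermediate CA. Here's a hypothetical OpenSSL-style extension section, plus a quick way to inspect what any given certificate actually carries (file names are placeholders):

[ v3_intermediate_ca ]
# May issue end-entity certificates, but no further subordinate CAs.
basicConstraints = critical, CA:TRUE, pathlen:0
keyUsage = critical, keyCertSign, cRLSign

$ openssl x509 -in intermediate.pem -noout -text | grep -A1 'Basic Constraints'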

Another relevant resource is the Baseline Requirements of the CA/Browser Forum (currently v2.0.2). Section 7.1.2.1.4, "Root CA Basic Constraints", describes it as NOT RECOMMENDED for a Root CA.

Last but not least there is the awesome x509 Limbo project, which has a section for validating pathLen constraints. Since the RFC 5280 based assumption is that self-signed certs do not count, they do not check a case with such a constraint on the Root itself, and what implementations do about it. So the assumption right now is that they properly ignore it.

Summary: It's pointless to set the pathLen constraint on the Root Certificate, so just don't do it.

Cryptogram Class-Action Lawsuit against Google’s Incognito Mode

The lawsuit has been settled:

Google has agreed to delete “billions of data records” the company collected while users browsed the web using Incognito mode, according to documents filed in federal court in San Francisco on Monday. The agreement, part of a settlement in a class action lawsuit filed in 2020, caps off years of disclosures about Google’s practices that shed light on how much data the tech giant siphons from its users­—even when they’re in private-browsing mode.

Under the terms of the settlement, Google must further update the Incognito mode “splash page” that appears anytime you open an Incognito mode Chrome window after previously updating it in January. The Incognito splash page will explicitly state that Google collects data from third-party websites “regardless of which browsing or browser mode you use,” and stipulate that “third-party sites and apps that integrate our services may still share information with Google,” among other changes. Details about Google’s private-browsing data collection must also appear in the company’s privacy policy.

I was an expert witness for the prosecution (that’s the class, against Google). I don’t know if my declarations and deposition will become public.

Planet Debian Bits from Debian: Bits from the DPL

Dear Debianites

This morning I decided to just start writing Bits from the DPL and send whatever I have by 18:00 local time. Here it is, barely proofread, along with all its warts and grammar mistakes! It's slightly long and doesn't contain any critical information, so if you're not in the mood, don't feel compelled to read it!

== Get ready for a new DPL! ==

Soon, the voting period will start to elect our next DPL, and my time as DPL will come to an end. Reading the questions posted to the new candidates on [debian-vote], it takes quite a bit of restraint not to answer all of them myself; I think I can see how that aspect contributed to me being reeled in to running for DPL! In total I've done so 5 times (the first time I ran, Sam was elected!).

Good luck to both [Andreas] and [Sruthi], our current DPL candidates! I've already started working on preparing the handover, and there are multiple requests from teams that have come in recently that will have to wait for the new term, so I hope they're both ready to hit the ground running!

  • [debian-vote] Mailing list: https://lists.debian.org/debian-vote/2024/03/threads.html
  • Platform: https://www.debian.org/vote/2024/platforms/tille [Andreas]
  • Platform: https://www.debian.org/vote/2024/platforms/srud [Sruthi]

== Things that I wish could have gone better ==

  • Communication:

Recently, I saw a t-shirt that read:

 Adulthood is saying, 'But after this week things will
 slow down a bit' over and over until you die.

I can relate! With every task, crisis or deadline that appears, I think that once this is over, I'll have some more breathing space to get back to non-urgent but important tasks. "Bits from the DPL" was something I really wanted to get right this last term, and I clearly failed spectacularly. I have two long Bits from the DPL drafts that I never finished; I tend to have prioritised the problems of the day over communication. With all the hindsight I have, I'm not sure which is better to prioritise. I do rate communication and transparency very highly, and this is really the top thing that I wish I could've done better over the last four years.

On that note, thanks to people who provided me with some kind words when I've mentioned this to them before. They pointed out that there are many other ways to communicate and be in touch with the community, and they mentioned that they thought that I did a good job with that.

Since I'm still on communication, I think we can all learn to be more effective at it, since it's really so important for the project. Every time I publicly spoke about us spending more money, we got more donations. People out there really like to see how we invest funds into Debian, instead of just letting them heap up. DSA just spent a nice chunk of money on hardware, but we don't have very good visibility on it. It's one thing having it on a public line item in SPI's reporting, but it would be much more exciting if DSA could provide a write-up on all the cool hardware they're buying and what impact it will have on developers, and post it somewhere prominent like debian-devel-announce, Planet Debian or Bits from Debian (from the publicity team).

I don't want to single out DSA there, it's difficult and affects many other teams. The Salsa CI team also spent a lot of resources (time and money wise) to extend testing on AMD GPUs and other AMD hardware. It's fantastic and interesting work, and really more people within the project and in the outside world should know about it!

I'm not going to push my agendas to the next DPL, but I hope that they continue to encourage people to write about their work, and hopefully at some point we'll build enough excitement in doing so that it becomes a more normal part of our daily work.

  • Founding Debian as a standalone entity:

This was my number one goal for the project this last term, which was a carried over item from my previous terms.

I'm tempted to write everything out here, including the problem statement and our current predicaments, what kind of ground work needs to happen, likely constitutional changes that need to happen, and the nature of the GR that would be needed to make such a thing happen, but if I start with that, I might not finish this mail.

In short, I 100% believe that this is still a very high ranking issue for Debian, and perhaps after my term I'd be in a better position to spend more time on this (hmm, is this an instance of "the grass is always greener on the other side", or "next week will go better, until I die"?). Anyway, I'm willing to work with any future DPL on this, and perhaps it can in itself be a delegation tasked to properly explore all the options, and write up a report for the project that can lead to a GR.

Overall, I'd rather have us take another few years and do this properly, rather than rush into something that is again difficult to change afterwards. So while I very much wish this could've been achieved in the last term, I can't say that I have any regrets here either.

== My terms in a nutshell ==

  • COVID-19 and Debian 11 era:

My first term in 2020 started just as the COVID-19 pandemic became known to be spreading globally. It was a tough year for everyone, and Debian wasn't immune to its effects either. Many of our contributors got sick, some lost loved ones (my father passed away in March 2020 just after I became DPL), some lost their jobs (or other earners in their household did), and the effects of social distancing took a mental and even physical health toll on many. In Debian, we tend to do really well when we get together in person to solve problems, and when the in-person DebConf20 got cancelled, we understood that it was necessary, but it was still more bad news in a year that already had too much of it.

I can't remember if there was ever any kind of formal choice or discussion about this, but the DebConf video team just kind of organically and spontaneously became the orga team for an online DebConf, and that led to our first ever completely online DebConf. This was great on so many levels. We got to see each other's faces again, even if only on a screen. We had some teams talk to each other face to face for the first time in years, even though it was just on a Jitsi call. It brought a lasting cultural change to Debian: some teams still have video meetings now, where they didn't do that before, and I think it's a good supplement to our other methods of communication.

We also had a few online Mini-DebConfs that were fun, but DebConf21 was also online, and by then we had all developed online conference fatigue; while it was another good online event overall, it did start to feel a bit like a zombieconf. After that, we had some really nice events from the Brazilians, but no big global online community events again. In my opinion online MiniDebConfs can be a great way to develop our community and we should spend some further energy on this, but hey! This isn't a platform, so let me back out of talking about the future as I see it...

Despite all the adversity that we faced together, the Debian 11 release ended up being quite good. It happened about a month or so later than what we ideally would've liked, but it was a solid release nonetheless. It turns out that for quite a few people, staying inside for a few months to focus on Debian bugs was quite productive, and Debian 11 ended up being a very polished release.

During this time period we also had to deal with a previous Debian Developer who was expelled for his poor behaviour in Debian, and who continued to harass members of the Debian project and of other free software communities after his expulsion. This ended up being quite a lot of work, since we had to take legal action to protect our community and eventually also get the police involved. I'm not going to give him the satisfaction of spending too much time talking about him, but you can read our official statement regarding Daniel Pocock here:

https://www.debian.org/News/2021/20211117

In late 2021 and early 2022 we also discussed our general resolution process, and had two consecutive votes to address some issues that had affected past votes:

  • https://www.debian.org/vote/2021/vote_003
  • https://www.debian.org/vote/2022/vote_001

In my first term I addressed our delegations that were a bit behind; by the end of my last term all delegation requests are up to date. There's still some work to do, but I'm feeling good that I get to hand this over to the next DPL in a very decent state. Delegation updates can be very deceiving: sometimes a delegation is completely re-written and it was just 1 or 2 hours of work; other times, a delegation update can contain one line that has changed, or a change in one team member, that was the result of days' worth of discussion and hashing out differences.

I also received quite a few requests either to host a service, or to pay a third party directly for hosting. This was quite an admin nightmare: it either meant we had to manually do monthly reimbursements to someone, or have our TOs create accounts/agreements at the multiple providers that people use. So, after talking to a few people about this, we founded the DebianNet team (we could admittedly have chosen a better name, but that can happen later on) to provide hosting at two different hosting providers that we have agreements with, so that people who host things under debian.net have an easy way to do so, and at the same time Debian also has more control if a site maintainer goes MIA.

More info:

https://wiki.debian.org/Teams/DebianNet

You might notice some OpenStack mentioned there; we had some intention to set up a Debian cloud for hosting these things, which could also be used for other additional Debiany things like archive rebuilds, but this has so far fallen through. We still consider it a good idea and hopefully it will work out some other time (if you're a large company who can sponsor a few racks and servers, please get in touch!).

  • DebConf22 and Debian 12 era:

DebConf22 was the first time we returned to an in-person DebConf. It was a bit smaller than our usual DebConf - understandably so, considering that there were still COVID risks and people who were at high risk or who had family with high risk factors did the sensible thing and stayed home.

After watching many MiniDebConfs online, I also attended my first ever MiniDebConf, in Hamburg. It still feels odd typing that; it feels like I should've been at one before, but my location makes attending them difficult (on a side note, a few of us are working on bootstrapping a South African Debian community and hopefully we can pull off a MiniDebConf in South Africa later this year).

While I was at the MiniDebConf, I gave a talk where I covered the evolution of firmware, from the simple EPROMs that you'd find in old printers to the complicated firmware in modern GPUs that basically contains complete operating systems, complete with drivers for the device they're running on. I also showed my shiny new laptop, and explained that it's impossible to install that laptop without non-free firmware (you'd get a black display on d-i or Debian live), and that you couldn't even use an accessibility mode with audio, since even that depends on non-free firmware these days.

Steve, from the image building team, had said for a while that we needed a GR to vote on this, and after more discussion at DebConf I kept nudging him to propose the GR, and we ended up voting in favour of it. I do believe that someone out there should be campaigning for more free firmware (unfortunately, in Debian we just don't have the resources for this), but I'm glad that we have the firmware included. In the end, the choice comes down to whether we still want Debian to be installable on mainstream bare-metal hardware.

At this point, I'd like to give a special thanks to the ftpmasters, the image building team and the installer team, who worked really hard to get the changes done that were needed to make this happen for Debian 12, and who were really proactive about the remaining niggles, which were solved by the time Debian 12.1 was released.

The included firmware contributed to Debian 12 being a huge success, but it wasn't the only factor. I had a list of personal peeves, and as the hard freeze hit, I lost hope that these would be fixed and made peace with the fact that Debian 12 would release with those bugs. I'm glad that lots of people proved me wrong, and also proved that it's never too late to fix bugs: everything on my list was eliminated by the time the final freeze hit, which was great! We usually aim to have a release ready about two years after the previous release; sometimes there are complications during a freeze and it can take a bit longer. But thanks to the excellent co-ordination of the release team and heavy lifting from many DDs, the Debian 12 release happened 21 months and 3 weeks after the Debian 11 release. I hope the work from the release team continues to pay off so that we can achieve their goals of having shorter and less painful freezes in the future!

Even though many things were going well, the ongoing usr-merge effort highlighted some social problems within our processes. I started typing out the whole history of usrmerge here, but it would be too long for the purpose of this mail. Important questions that did come out of this are: should core Debian packages be team maintained? And how far should the CTTE really be able to override a maintainer? We had lots of discussion about this at DebConf22, but didn't make much concrete progress. I think that at some point we'll probably have a GR about package maintenance. Also, thank you to Guillem, who very patiently explained a few things to me (after probably having had to do so many times for others before already), and to Helmut, who did the same during the MiniDebConf in Hamburg. I think all the technical and social issues here are fixable; it will just take some time and patience, and I have lots of confidence in everyone involved.

UsrMerge wiki page: https://wiki.debian.org/UsrMerge

  • DebConf 23 and Debian 13 era:

DebConf23 took place in Kochi, India. At the end of my Bits from the DPL talk there, someone asked me what the most difficult thing I had to do was during my terms as DPL. I answered that nothing particular stood out, and that even the most difficult tasks ended up being rewarding to work on. Little did I know that my most difficult period as DPL was just about to follow. During the day trip, one of our contributors, Abraham Raji, passed away in a tragic accident. There's really not anything anyone could've done to predict or stop it, but it was devastating to many of us, especially the people closest to him. Quite a number of DebConf attendees went to his funeral, wearing the DebConf t-shirts he designed as a tribute. It still haunts me that I saw his mother scream "He was my everything! He was my everything!". This was by a large margin the hardest day I've ever had in Debian, and I really wasn't ok for even a few weeks after that, and I think the hurt will be with many of us for some time to come. So, a plea again to everyone: please take care of yourself! There are probably more people who love you than you realise.

A special thanks to the DebConf23 team, who did a really good job despite all the uphill battles they faced (and there were many!).

As DPL, I think that planning for a DebConf is nearly impossible; all you can do is show up and jump into things. I had planned to work with Enrico to finish up something that will hopefully save future DPLs some time: a web-based DD certificate creator, instead of having the DPL create certificates manually using LaTeX. It already mostly works: you can see the work so far by visiting https://nm.debian.org/person/ACCOUNTNAME/certificate/ and replacing ACCOUNTNAME with your Debian account name, and if you're a DD, you should see your certificate. It still needs a few minor changes and a DPL signature, but at this point I think it will be finished up when the new DPL starts. Thanks to Enrico for working on this!

Since my first term, I've been trying to find ways to improve all our accounting/finance issues. Tracking what we spend on things and getting an annual overview is hard, especially across three trusted organisations. The reimbursement process can also be really tedious, especially when you have to provide files in a certain order and combine them into a PDF. So, at DebConf22 we had a meeting with the treasurer team and Stefano Rivera, who said that it might be possible for him to work on a new system as part of his Freexian work. It worked out: Freexian has funded the development of the system since then, and after DebConf23 we handled the reimbursements for the conference via the new reimbursements site:

https://reimbursements.debian.net

It's still early days, but over time it should be linked to all our TOs, and we'll use the same category codes across the board. Overall, our reimbursement process becomes a lot simpler, and we'll also be able to get information like how much money we've spent on any category in any period. It will also help us track how much money we have available and how much we spend on recurring costs; right now that requires manual polling from our TOs. So I'm really glad that this big, long-standing problem in the project is being fixed.

For Debian 13, we're waving goodbye to the kFreeBSD and mipsel ports. But we're also gaining riscv64 and loongarch64 as release architectures! I have three different RISC-V based machines on my desk here that I haven't had much time to work with yet; you can expect some blog posts about them soon after my DPL term ends!

As Debian is a unix-like system, we're affected by the [Year 2038 problem], where systems that use 32-bit time in seconds since 1970 run out of representable time and will wrap back to 1970 or exhibit other undefined behaviour. A detailed [wiki page] explains how this works in Debian, and we're currently going through a rather large transition to make this possible.

[Year 2038 problem] https://simple.wikipedia.org/wiki/Year_2038_problem
[wiki page] https://wiki.debian.org/ReleaseGoals/64bit-time

I believe this is the right time for Debian to be addressing this: we're still a bit more than a year away from the Debian 13 release, which provides enough time to test the implementation before 2038 rolls along.
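
For readers who haven't run into it before, here's a minimal C sketch (my own illustration, not part of the transition work) that prints the last second a signed 32-bit time_t can represent:

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* 2^31 - 1 seconds after the Unix epoch (1970-01-01 00:00:00 UTC)
       is the largest value a signed 32-bit time_t can hold. */
    time_t limit = 2147483647;
    struct tm tm_utc;
    char buf[64];

    if (gmtime_r(&limit, &tm_utc) == NULL)
        return 1;
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", &tm_utc);
    printf("a 32-bit time_t runs out after %s\n", buf);
    /* One second later, a 32-bit counter wraps around, which is where
       the wrap-around and undefined behaviour mentioned above come from. */
    return 0;
}

On a system with 64-bit time_t this prints 2038-01-19 03:14:07 UTC; the transition mentioned above is essentially about moving Debian's 32-bit architectures to a 64-bit time_t before that date matters.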

Of course, big complicated transitions with dependency loops that cause chaos for everyone would still be too easy, so this past weekend (which is a holiday period in most of the West due to the Easter weekend) was filled with dealing with an upstream issue in xz-utils, where a backdoor was placed in this key piece of software. An [Ars Technica] article covers it quite well, so I won't go into all the details here. I mention it because I want to give yet another special thanks to everyone involved in dealing with this on the Debian side. Everyone involved, from the ftpmasters to the security team and others, was super calm and professional and made quick, high quality decisions. This also led to the archive being frozen on Saturday; it's the first time I've seen that happen since I've been a DD, but I'm sure next week will go better!

[Ars Technica] https://arstechnica.com/security/2024/04/what-we-know-about-the-xz-utils-backdoor-that-almost-infected-the-world/

== Looking forward ==

It's really been an honour for me to serve as DPL. It might well be my biggest achievement in my life. Previous DPLs range from prominent software engineers to game developers, to people who have done things like complete an Iron Man, run other huge open source projects or be part of big consortiums. Ian Jackson even authored dpkg and is now working on the very interesting [tag2upload service]!

[tag2upload service] https://peertube.debian.social/w/pav68XBWdurWzfTYvDgWRM

I'm a relative nobody, just someone who grew up as a poor kid in South Africa and who really cares about Debian a lot. And, above all, I'm really thankful that I didn't do anything major to screw up Debian for good.

Not unlike learning how to use Debian, and also becoming a Debian Developer, I've learned a lot from this and it's been a really valuable growth experience for me.

I know I can't possibly give all the thanks to everyone who deserves it, so here's a big, big thanks to everyone who has worked so hard and put in many, many hours to make Debian better. I consider you all heroes!

-Jonathan

Cryptogram Magic Security Dust

Adam Shostack is selling magic security dust.

It’s about time someone is commercializing this essential technology.

Rondam RamblingsFeynman, bullies, and invisible pink unicorns

This is the second installment in what I hope will turn out to be a long series about the scientific method.  In this segment I want to give three examples of how the scientific method, which I described in the first installment, can be applied to situations that are not usually considered "science-y".  By doing this I hope to show you how the scientific method can be used without any

Worse Than FailureCodeSOD: Exceptional Feeds

Joe sends us some Visual Basic .NET exception handling. Let's see if you can spot what's wrong?

Catch ex1 As Exception

    ' return the cursor
    Me.Cursor = Cursors.Default

    ' tell a story
    MessageBox.Show(ex1.Message)
    Return

End Try

This code catches the generic exception, meddles with the cursor a bit, and then pops up a message box to alert the user to something that went wrong. I don't love putting the raw exception in the message box, but this is hardly a WTF, is it?

Catch ex2 As Exception

    ' snitch
    MessageBox.Show(ex2.ToString(), "RSS Feed Initialization Failure")

End Try

Elsewhere in the application. Okay, I don't love the exN naming convention either, but where's the WTF?

Well, the fact that they're initializing an RSS feed is a hint- this isn't an RSS reader client, it's an RSS serving web app. This runs on the server side, and any message boxes that get popped up aren't going to the end user.

Now, I haven't seen this precise thing done in VB .Net, only in Classic ASP, where you could absolutely open message boxes on the web server. I'd hope that in ASP .Net, something would stop you from doing that. I'd hope.

Otherwise, I've found the worst logging system you could make.


365 TomorrowsHow To Best Carve Light

Author: Majoki So not painterly. Not even close. Too pixelated. Too blurred at the edges of reality. Not a good start in your first soloverse. Always so much to learn. Tamp down the expectations, go back and study the masters. Phidias. Caravaggio. Kurosawa. Leibovitz. Marquez. Einstein. Know their mediums. Stone. Canvas. Film. Page. Chalkboard. Seek […]

The post How To Best Carve Light appeared first on 365tomorrows.


Planet DebianBen Hutchings: FOSS activity in March 2024

Planet DebianColin Watson: Free software activity in March 2024

My Debian contributions this month were all sponsored by Freexian.

Planet DebianSimon Josefsson: Towards reproducible minimal source code tarballs? On *-src.tar.gz

While the work to analyze the xz backdoor is in progress, several ideas have been suggested to improve the entire software supply chain ecosystem. Some of those ideas are good, some of the ideas are at best irrelevant and harmless, and some suggestions are plain bad. I’d like to attempt to formalize one idea (it remains to be seen which category it belongs in), which has been discussed before, but the context in which the idea can be appreciated has not been as clear as it is today.

  1. Reproducible source tarballs. The idea is that published source tarballs should be possible to reproduce independently somehow, and that this should be continuously tested and verified — preferably as part of the upstream project's continuous integration system (e.g., a GitHub action or GitLab pipeline). While nominally this looks easy to achieve, there are some complex matters involved, for example: what timestamps to use for files in the tarball? I’ve brought up this aspect before.
  2. Minimal source tarballs without generated vendor files. Most GNU Autoconf/Automake-based tarballs ship pre-generated files which are important for bootstrapping on exotic systems that do not have the required dependencies. For the bootstrapping story to succeed, this approach is important to support. However, it has become clear that this practice raises significant costs and risks. Most modern GNU/Linux distributions have all the required dependencies and actually prefer to re-build everything from source code. These pre-generated extra files introduce uncertainty into that process.

My strawman proposal to improve things is to define a new tarball format, *-src.tar.gz, with at least the following properties:

  1. The tarball should allow users to build the project, which is the entire purpose of all this. This means that at least all source code for the project has to be included.
  2. The tarballs should be signed, for example with PGP or minisign.
  3. The tarball should be possible to reproduce bit-by-bit by a third party using upstream’s version controlled sources and a pointer to which revision was used (e.g., git tag or git commit).
  4. The tarball should not require an Internet connection to download things.
    • Corollary: every external dependency either has to be explicitly documented as such (e.g., gcc and GnuTLS), or included in the tarball.
    • Observation: This means including all *.po gettext translations which are normally downloaded when building from version controlled sources.
  5. The tarball should contain everything required to build the project from source using as much externally released versioned tooling as possible. This is the “minimal” property lacking today.
    • Corollary: This means including a vendored copy of OpenSSL or libz is not acceptable: link to them as external projects.
    • Open question: How about non-released external tooling such as gnulib or autoconf archive macros? This is a bit more delicate: most distributions just package one current version of gnulib or the autoconf archive, not previous versions. While this could change — distributions could package the gnulib git repository (up to some current version) and the autoconf archive git repository, and packages could be set up to extract the version they need (gnulib’s ./bootstrap already supports this via the --gnulib-refdir parameter) — this is not normally in place.
    • Suggested Corollary: The tarball should contain content from git submodules such as gnulib, and the necessary Autoconf archive M4 macros required by the project.
  6. Similar to how the GNU project specifies the ./configure interface, we need a documented interface for how to bootstrap the project. I suggest using the already well-established idiom of running ./bootstrap to set up the package so that it can later be built via ./configure. Of course, some projects do not use the autotools ./configure interface and will not follow this aspect either, but just as most build systems that compete with autotools have instructions on how to build the project, they should document similar interfaces for bootstrapping the source tarball to allow building.

If tarballs that achieve the above goals were available from popular upstream projects, distributions could more easily use them instead of current tarballs that include pre-generated content. The advantage would be that the build process is not tainted by “unnecessary” files. We need to develop tools for maintainers to create these tarballs, similar to make dist, which generates today’s foo-1.2.3.tar.gz files.
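
To illustrate the reproducibility part (this is my own sketch, not part of the proposal; it assumes libarchive is available, and the file name and constants are made up): a make dist replacement basically has to pin every piece of archive metadata that would otherwise vary between two runs. The sketch writes an uncompressed .tar; a real tool would also need a fixed file ordering and a compression layer with its own metadata pinned (gzip, for example, embeds a timestamp of its own).

/* deterministic-tar.c -- a sketch of a reproducible "make dist" step.
   Build (assuming libarchive is installed): cc deterministic-tar.c -larchive
   Usage: ./a.out output.tar file1 file2 ...   (pass files in sorted order) */
#include <archive.h>
#include <archive_entry.h>
#include <stdio.h>
#include <stdlib.h>

#define SOURCE_DATE 1700000000L  /* fixed mtime, e.g. taken from SOURCE_DATE_EPOCH */

int main(int argc, char **argv)
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s out.tar file...\n", argv[0]);
        return 1;
    }

    struct archive *a = archive_write_new();
    archive_write_set_format_pax_restricted(a);   /* one stable, portable tar flavour */
    archive_write_open_filename(a, argv[1]);

    for (int i = 2; i < argc; i++) {
        FILE *f = fopen(argv[i], "rb");
        if (!f) { perror(argv[i]); return 1; }
        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        fseek(f, 0, SEEK_SET);
        char *buf = malloc(size > 0 ? size : 1);
        fread(buf, 1, (size_t)size, f);
        fclose(f);

        struct archive_entry *e = archive_entry_new();
        archive_entry_set_pathname(e, argv[i]);
        archive_entry_set_size(e, size);
        archive_entry_set_filetype(e, AE_IFREG);
        /* Normalise everything that would otherwise differ between two builds: */
        archive_entry_set_perm(e, 0644);
        archive_entry_set_mtime(e, SOURCE_DATE, 0);
        archive_entry_set_uid(e, 0);
        archive_entry_set_gid(e, 0);
        archive_entry_set_uname(e, "root");
        archive_entry_set_gname(e, "root");

        archive_write_header(a, e);
        archive_write_data(a, buf, size);
        archive_entry_free(e);
        free(buf);
    }

    archive_write_close(a);
    archive_write_free(a);
    return 0;
}

Run it twice over the same sorted file list and the two outputs should have identical checksums, which is the bit-by-bit property in point 3 above.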

I think one common argument against this approach will be: why bother with all that, and not just use git-archive outputs? Or avoid the entire tarball approach and move directly towards version-controlled checkouts, referring to upstream releases as a git URL and commit tag or id. My counter-argument is that this optimizes for packagers’ benefit at the cost of upstream maintainers: most upstream maintainers do not want to store gettext *.po translations in their source code repository. A compromise between the needs of maintainers and packagers is useful, so this *-src.tar.gz tarball approach is the indirection we need to solve that.

What do you think?

Planet DebianArturo Borrero González: Kubecon and CloudNativeCon 2024 Europe summary

Kubecon EU 2024 Paris logo

This blog post shares my thoughts on attending Kubecon and CloudNativeCon 2024 Europe in Paris. It was my third time at this conference, and it felt bigger than last year’s in Amsterdam. Apparently it had an impact on public transport. I missed part of the opening keynote because of the extremely busy rush hour tram in Paris.

On Artificial Intelligence, Machine Learning and GPUs

Talks about AI, ML, and GPUs were everywhere this year. While it wasn’t my main interest, I did learn about GPU resource sharing and power usage on Kubernetes. There were also ideas about offering Models-as-a-Service, which could be cool for Wikimedia Toolforge in the future.

See also:

On security, policy and authentication

This was probably the main interest for me in the event, given Wikimedia Toolforge was about to migrate away from Pod Security Policy, and we were currently evaluating different alternatives.

In contrast to my previous attendances at Kubecon, where the three policy agents Kyverno, Kubewarden and OpenPolicyAgent (OPA) all had a presence in the program schedule, this time only OPA had relevant sessions.

One surprising bit I got from one of the OPA sessions was that it can be used to authorize Linux PAM sessions. Could this be useful for Wikimedia Toolforge?

OPA talk

I attended several sessions related to authentication topics. I discovered the Keycloak software, which looks very promising. I also attended an OAuth2 session which I had a hard time following, because I clearly missed some additional knowledge about how OAuth2 works internally.

I also attended a couple of sessions that ended up being a vendor sales talk.

See also:

On container image builds, harbor registry, etc

This topic was also of interest to me because, again, it is a core part of Wikimedia Toolforge.

I attended a couple of sessions regarding container image builds, including topics like general best practices, image minimization, and buildpacks. I learned about kpack, which at first sight felt like a nice simplification of how the Toolforge build service was implemented.

I also attended a session by the Harbor project maintainers where they shared some valuable information on things happening soon or in the future, for example:

  • new harbor command line interface coming soon. Only the first iteration though.
  • harbor operator, to install and manage harbor. Looking for new maintainers, otherwise going to be archived.
  • the project is now experimenting with adding support for hosting more artifacts: Maven, npm, PyPI. I wonder if they will consider hosting Debian .deb packages.

On networking

I attended a couple of sessions regarding networking.

One session in particular that I paid special attention to was about network policies. They discussed new semantics being added to the Kubernetes API.

The different layers of abstraction being added to the API, the different hook points, and the override layers clearly resembled (to me at least) the network packet filtering stack of the Linux kernel (netfilter), but without the 20-plus years of experience building the right semantics and user interfaces.

Network talk

I very recently missed some semantics for limiting the number of open connections per namespace; see Phabricator T356164: [toolforge] several tools get periods of connection refused (104) when connecting to wikis. This functionality should be available in the lower-level tools, I mean netfilter. I may submit a proposal upstream at some point, so they consider adding this to the Kubernetes API.

Final notes

In general, I believe I learned many things, and perhaps even more importantly I re-learned some stuff I had forgotten because of lack of daily exposure. I’m really happy that the cloud-native way of thinking was reinforced in me, which I still need because most of my muscle memory for approaching systems architecture and engineering is from the old pre-cloud days. That being said, I felt less engaged with the content of the conference schedule compared to last year. I don’t know whether the schedule itself was less interesting, or whether I’m losing interest.

Finally, not an official track in the conference, but we met a bunch of folks from Wikimedia Deutschland. We had a really nice time talking about how wikibase.cloud uses Kubernetes, whether they could run in Wikimedia Cloud Services, and why structured data is so nice.

Group photo

Worse Than FailureTaking Up Space

April Fools' Day is a day when websites lie to you or create complex pranks. We've generally avoided the former and have done a few of the latter, but we also like to just use April Fools' as a chance to change things up.

So today, we're going to do something different. We're going to talk about my Day Job. Specifically, we're going to talk about a tool I use in my day job: cFS.

cFS is a NASA-designed architecture for building spaceflight applications. It's open source, and designed to be accessible. A lot of the missions NASA launches use cFS, which gives it a lovely proven track record. And it was designed and built by people much smarter than me. Which doesn't mean it's free of WTFs.

The Big Picture

cFS is a C framework for spaceflight, designed to run on real-time OSes, though fully capable of running on Linux (with or without a realtime kernel), and even Windows. It has three core modules- a Core Flight Executive (cFE) (which provides services around task management, and cross-task communication), the OS Abstraction Layer (helping your code be portable across OSes), and a Platform Support Package (low-level support for board-connected hardware). Its core concept is that you build "apps", and the whole thing has a pitch about an app store. We'll come back to that. What exactly is an app in cFS?

Well, at their core, "apps" are just Actors. Each is a block of code with its own internal state that interacts with other modules via message passing, but basically runs as its own thread (or a realtime task, or whatever your OS's appropriate abstraction is).

These applications are wired together by a cFS feature called the "Core Flight Executive Software Bus" (cFE Software Bus, or just Software Bus), which handles managing subscriptions and routing. Under the hood, this leverages an OS-level message queue abstraction. Since each "app" has its own internal memory, and only reacts to messages (or emits messages for others to react to), we avoid pretty much all of the main pitfalls of concurrency.
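
To make that concrete, here's a stripped-down sketch of the pattern on top of a raw POSIX message queue. To be clear, this is not the cFS API (the cFE Software Bus wraps the OS queue behind its own create-pipe/subscribe/receive calls, and the message layout here is made up); it's just the shape of an "app": block on your queue, react to one message at a time, touch only your own state.

/* actor-sketch.c -- the "app as actor" pattern on a bare POSIX queue.
   (Illustration only; a real cFS app would go through the cFE Software Bus.)
   Build on Linux: cc actor-sketch.c -lrt */
#include <mqueue.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* A hypothetical message: an ID plus a tiny payload. */
struct msg {
    uint16_t id;
    int16_t  temperature_centi_c;
};

#define MSG_HOUSEKEEPING_REQ 0x01
#define MSG_TEMP_READING     0x02

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = sizeof(struct msg) };
    mqd_t q = mq_open("/demo_app_pipe", O_CREAT | O_RDONLY, 0600, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* Private state: nothing else can touch this, so no locks are needed. */
    int16_t last_temp = 0;

    for (;;) {
        char buf[sizeof(struct msg)];
        ssize_t n = mq_receive(q, buf, sizeof buf, NULL);  /* block until a message arrives */
        if (n != (ssize_t)sizeof(struct msg))
            continue;

        struct msg m;
        memcpy(&m, buf, sizeof m);
        switch (m.id) {
        case MSG_TEMP_READING:
            last_temp = m.temperature_centi_c;   /* update private state */
            break;
        case MSG_HOUSEKEEPING_REQ:
            printf("last temperature: %d centi-degrees C\n", last_temp);
            break;
        default:
            break;                               /* ignore unknown messages */
        }
    }
}

A sender is just another task that opens the same queue name with O_WRONLY and mq_sends a struct msg; swap the raw queue for the Software Bus and the switch on m.id for subscriptions, and you have the mental model for a cFS app.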

This all feeds into the single responsibility principle, giving each "app" one job to do. And while we're throwing around buzzwords, it also grants us encapsulation (each "app" has its own memory space, unshared), and helps us design around interfaces- "apps" emit and receive certain messages, which defines their interface. It's almost like full object oriented programming in C, or something like how the BeamVM languages (Erlang, Elixir) work.

The other benefit of this is that we can have reusable apps which provide common functionality that every mission needs. For example, the app DS (Data Storage) logs any messages that cross the software bus. LC (Limit Checker) allows you to configure expected ranges for telemetry (like, for example, the temperature you expect a sensor to report), and raise alerts if it falls out of range. There's SCH (Scheduler) which sends commands to apps to wake them up so they can do useful work (also making it easy to sleep apps indefinitely and minimize power consumption).

All in all, cFS constitutes a robust, well-tested framework for designing spaceflight applications.

Even NASA annoys me

This is TDWTF, however, so none of this is all sunshine and roses. cFS is not the prettiest framework, and the developer experience may, ah… leave a lot to be desired. It's undergoing constant improvement, which is good, but it still has its pain points.

Speaking of constant improvement, let's talk about versioning. cFS is the core flight software framework which hosts your apps (via the cFE), and cFS is getting new versions. The apps themselves also get new versions. The people writing the apps and the people writing cFS are not always coordinating on this, which means that when cFS adds a breaking change to their API, you get to play the "which versions of cFS and App X play nice together" game. And since everyone has different practices around tagging releases, you often have to walk through commits to find the last version of the app that was compatible with your version of cFS, and see things like releases tagged "compatible with Draco rc2 (mostly)". The goal of "grab apps from an App Store and they just work" is definitely not actually happening.

Or, this, from the current cFS readme:

Compatible list of cFS apps
The following applications have been tested against this release:
TBD

Messages in cFS are represented by structs, which means that when apps want to send each other messages, they need the same struct definitions. This is just a pain to manage- getting agreement about which app should own which message, who needs the definition, and how we get the definition over to them is just a huge mess. It's such a huge mess that newer versions of cFS have switched to using "Electronic Data Sheets"- XML files which describe the structs, which doesn't really solve the problem but adds XML to the mix. At least EDS makes it easy to share definitions with non-C applications (popular ground software is written in Python or Java).
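
For a sense of what "messages are structs" means in practice, here's roughly what a command definition ends up looking like. The header below is a made-up stand-in (the real cFE header types are exactly the part every app and ground tool has to agree on, byte for byte):

#include <stdint.h>

/* Hypothetical stand-in for the cFE command header; not the real type. */
typedef struct {
    uint16_t stream_id;      /* carries the MID / packet-header fields */
    uint16_t sequence;
    uint16_t length;
    uint8_t  function_code;
    uint8_t  checksum;
} DemoCmdHeader_t;

/* A hypothetical "set heater duty cycle" command. Any app (or ground
   tool) that wants to send or receive it needs this exact definition,
   padding and all -- which is the sharing problem described above. */
typedef struct {
    DemoCmdHeader_t hdr;
    uint8_t  heater_index;
    uint8_t  spare;          /* explicit padding keeps the layout unambiguous */
    uint16_t duty_cycle_pct;
} DemoSetDutyCycleCmd_t;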

Messages also have to have a unique "Message ID", but the MID is not just an arbitrary unique number. It secretly encodes important information, like whether this message is a command (an instruction to take action) or telemetry (data being output), and if you pick a bad MID, everything breaks. Also, keeping MID definitions unique across many different apps who don't know any of the other apps exist is a huge problem. The general solution that folks use is bolting on some sort of CSV file and code generator that handles this.

Those MIDs also don't exist outside of cFS- they're a unique-to-cFS abstraction. cFS, behind the scenes, converts them to different parts of the "space packet header", which is the primary packet format for the SpaceWire networking protocol. This means that in realistic deployments where your cFS module needs to talk to components not running cFS- your MID also represents key header fields for the SpaceWire network. It's incredibly overloaded and the conversions are hidden behind C macros that you can't easily debug.
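
The overloading is easier to see if you pull the bits apart yourself. As I understand the classic configuration (details vary by cFS version and mission), the MID lines up with the first 16 bits of the CCSDS space packet primary header, which is why values around 0x18xx tend to be commands and 0x08xx telemetry:

/* mid-peek.c -- why a "Message ID" is not an arbitrary number.
   Assumes the classic scheme where a MID mirrors the first 16 bits of a
   CCSDS space packet primary header; newer cFS versions abstract this. */
#include <stdio.h>
#include <stdint.h>

static void explain_mid(uint16_t mid)
{
    unsigned version    = (mid >> 13) & 0x7;   /* CCSDS packet version        */
    unsigned is_command = (mid >> 12) & 0x1;   /* packet type: 1 = command    */
    unsigned has_sechdr = (mid >> 11) & 0x1;   /* secondary header present    */
    unsigned apid       =  mid        & 0x7FF; /* 11-bit application ID       */

    printf("MID 0x%04X: version=%u, %s, sec-hdr=%u, APID=0x%03X\n",
           mid, version, is_command ? "command" : "telemetry",
           has_sechdr, apid);
}

int main(void)
{
    explain_mid(0x1882);   /* decodes as a command packet   */
    explain_mid(0x0883);   /* decodes as a telemetry packet */
    return 0;
}

Pick a MID whose type bit says "telemetry" for something you meant as a command and the routing quietly goes wrong; that's the "everything breaks" above, and it's part of why those CSV-and-code-generator workarounds exist.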

But my biggest gripe is the build tooling. Everyone at work knows they can send me climbing the walls by just whispering "cFS builds" in my ear. It's a nightmare (one that, I believe, has gotten better in newer versions, but due to the whole "no synchronization between app and cFE versions" problem, we're not using a new version). It starts with make, which calls CMake, which also calls make, but also calls CMake again in a way that doesn't let variables propagate down to other layers. cFS doesn't provide any targets you link against, but instead requires that any apps you want to use be inserted into the cFS source tree directly, which makes it incredibly difficult to build just parts of cFS for unit testing.

Oh, speaking of unit testing- cFS provides mocks of all of its internal functions; mocks which always return an error code. This is intentional, to encourage developers to test their failure paths in code, but I'd like to test our success paths too.

Summary

Any tool you use on a regular basis is going to be a tool that you're intimately familiar with; the good parts frequently vanish into the background and the pain points are the things that you notice, day in, day out. That's definitely how I feel after working with cFS for two years.

I think, at its core, the programming concepts it brings to doing low-level, embedded C are good. It's certainly better than needing to write this architecture myself. And for all its warts, it's been designed and developed by people who are extremely safety conscious and expect you to be too. It's been used on many missions, from hobbyist cube sats to Mars rovers, and that proven track record gives you a good degree of confidence that your mission will also be safe using it.

And since it is Open Source, you can try it out yourself. The cFS-101 guide gives you a good starting point, complete with a downloadable VM that walks you through building a cFS application and communicating with it from simulated ground software. It's a very different way to approach C programming (and makes it easier to comply with C standards, like MISRA), and honestly, the Actor-oriented mindset is a good attitude to bring to many different architectural problems.

Peregrine

If you were following space news at all, you may already know that our Peregrine lander failed. I can't really say anything about that until the formal review has released its findings, but all indications are that it was very much a hardware problem involving valves and high pressure tanks. But I can say that most of the avionics on it were connected to some sort of cFS instance (there were several cFS nodes).


365 TomorrowsTwo Guns

Author: Julian Miles, Staff Writer That shadow is cast by scenery. The one next to it likewise. The third I’m not sure about, and the fourth will become scenery if the body isn’t found. Sniping lasers give nothing away when killing, although the wounds are distinctive. By the time they’re identifying that, I’ll be off […]

The post Two Guns appeared first on 365tomorrows.

xkcdEclipse Coolness

Cryptogram Ross Anderson

Ross Anderson unexpectedly passed away Thursday night in, I believe, his home in Cambridge.

I can’t remember when I first met Ross. Of course it was before 2008, when we created the Security and Human Behavior workshop. It was well before 2001, when we created the Workshop on Economics and Information Security. (Okay, he created both—I helped.) It was before 1998, when we wrote about the problems with key escrow systems. I was one of the people he brought to the Newton Institute, at Cambridge University, for the six-month cryptography residency program he ran (I mistakenly didn’t stay the whole time)—that was in 1996.

I know I was at the first Fast Software Encryption workshop in December 1993, another conference he created. There I presented the Blowfish encryption algorithm. Pulling an old first-edition of Applied Cryptography (the one with the blue cover) down from the shelf, I see his name in the acknowledgments. Which means that sometime in early 1993—probably at Eurocrypt in Lofthus, Norway—I, as an unpublished book author who had only written a couple of crypto articles for Dr. Dobb’s Journal, asked him to read and comment on my book manuscript. And he said yes. Which means I mailed him a paper copy. And he read it. And mailed his handwritten comments back to me. In an envelope with stamps. Because that’s how we did it back then.

I have known Ross for over thirty years, as both a colleague and a friend. He was enthusiastic, brilliant, opinionated, articulate, curmudgeonly, and kind. Pick up any of his academic papers—there are many—and odds are that you will find at least one unexpected insight. He was a cryptographer and security engineer, but also very much a generalist. He published on block cipher cryptanalysis in the 1990s, and the security of large-language models last year. He started conferences like nobody’s business. His masterwork book, Security Engineering—now in its third edition—is as comprehensive a tome on cybersecurity and related topics as you could imagine. (Also note his fifteen-lecture video series on that same page. If you have never heard Ross lecture, you’re in for a treat.) He was the first person to understand that security problems are often actually economic problems. He was the first person to make a lot of those sorts of connections. He fought against surveillance and backdoors, and for academic freedom. He didn’t suffer fools in either government or the corporate world.

He’s listed in the acknowledgments as a reader of every one of my books from Beyond Fear on. Recently, we’d see each other a couple of times a year: at this or that workshop or event. The last time I saw him was last June, at SHB 2023, in Pittsburgh. We were having dinner on Alessandro Acquisti‘s rooftop patio, celebrating another successful workshop. He was going to attend my Workshop on Reimagining Democracy in December, but he had to cancel at the last minute. (He sent me the talk he was going to give. I will see about posting it.) The day before he died, we were discussing how to accommodate everyone who registered for this year’s SHB workshop. I learned something from him every single time we talked. And I am not the only one.

My heart goes out to his wife Shireen and his family. We lost him much too soon.

Cryptogram Declassified NSA Newsletters

Through a 2010 FOIA request (yes, it took that long), we have copies of the NSA’s KRYPTOS Society Newsletter, “Tales of the Krypt,” from 1994 to 2003.

There are many interesting things in the 800 pages of newsletter. There are many redactions. And a 1994 review of Applied Cryptography by redacted:

Applied Cryptography, for those who don’t read the internet news, is a book written by Bruce Schneier last year. According to the jacket, Schneier is a data security expert with a master’s degree in computer science. According to his followers, he is a hero who has finally brought together the loose threads of cryptography for the general public to understand. Schneier has gathered academic research, internet gossip, and everything he could find on cryptography into one 600-page jumble.

The book is destined for commercial success because it is the only volume in which everything linked to cryptography is mentioned. It has sections on such diverse topics as number theory, zero knowledge proofs, complexity, protocols, DES, patent law, and the Computer Professionals for Social Responsibility. Cryptography is a hot topic just now, and Schneier stands alone in having written a book on it which can be browsed: it is not too dry.

Schneier gives prominence to applications with large sections on protocols and source code. Code is given for IDEA, FEAL, triple-DES, and other algorithms. At first glance, the book has the look of an encyclopedia of cryptography. Unlike an encyclopedia, however, it can’t be trusted for accuracy.

Playing loose with the facts is a serious problem with Schneier. For example in discussing a small-exponent attack on RSA, he says “an attack by Michael Wiener will recover e when e is up to one quarter the size of n.” Actually, Wiener’s attack recovers the secret exponent d when e has less than one quarter as many bits as n, which is a quite different statement. Or: “The quadratic sieve is the fastest known algorithm for factoring numbers less than 150 digits…. The number field sieve is the fastest known factoring algorithm, although the quadratric sieve is still faster for smaller numbers (the break even point is between 110 and 135 digits).” Throughout the book, Schneier leaves the impression of sloppiness, of a quick and dirty exposition. The reader is subjected to the grunge of equations, only to be confused or misled. The large number of errors compounds the problem. A recent version of the errata (Schneier publishes updates on the internet) is fifteen pages and growing, including errors in diagrams, errors in the code, and errors in the bibliography.

Many readers won’t notice that the details are askew. The importance of the book is that it is the first stab at putting the whole subject in one spot. Schneier aimed to provide a “comprehensive reference work for modern cryptography.” Comprehensive it is. A trusted reference it is not.

Ouch. But I will not argue that some of my math was sloppy, especially in the first edition (with the blue cover, not the red cover).

A few other highlights:

  • 1995 Kryptos Kristmas Kwiz, pages 299–306
  • 1996 Kryptos Kristmas Kwiz, pages 414–420
  • 1998 Kryptos Kristmas Kwiz, pages 659–665
  • 1999 Kryptos Kristmas Kwiz, pages 734–738
  • Dundee Society Introductory Placement Test (from questions posed by Lambros Callimahos in his famous class), pages 771–773
  • R. Dale Shipp’s Principles of Cryptanalytic Diagnosis, pages 776–779
  • Obit of Jacqueline Jenkins-Nye (Bill Nye the Science Guy’s mother), pages 755–756
  • A praise of Pi, pages 694–696
  • A rant about Acronyms, pages 614–615
  • A speech on women in cryptology, pages 593–599

Cory DoctorowSubprime gadgets

A group of child miners, looking grimy and miserable, standing in a blasted gravel wasteland. To their left, standing on a hill, is a club-wielding, mad-eyed, top-hatted debt collector, brandishing a document bearing an Android logo.

Today for my podcast, I read Subprime gadgets, originally published in my Pluralistic blog:

I recorded this on a day when I was home between book-tour stops (I’m out with my new techno crime-thriller, The Bezzle). Catch me on April 11 in Boston with Randall Munroe, on April 12th in Providence, Rhode Island, then onto Chicago, Torino, Winnipeg, Calgary, Vancouver and beyond! The canonical link for the schedule is here.


The promise of feudal security: “Surrender control over your digital life so that we, the wise, giant corporation, can ensure that you aren’t tricked into catastrophic blunders that expose you to harm”:

https://locusmag.com/2021/01/cory-doctorow-neofeudalism-and-the-digital-manor/

The tech giant is a feudal warlord whose platform is a fortress; move into the fortress and the warlord will defend you against the bandits roaming the lawless land beyond its walls.

That’s the promise, here’s the failure: What happens when the warlord decides to attack you? If a tech giant decides to do something that harms you, the fortress becomes a prison and the thick walls keep you in.


MP3


Here’s that tour schedule!

11 Apr: Harvard Berkman-Klein Center, with Randall Munroe
https://cyber.harvard.edu/events/enshittification

12 Apr: RISD Debates in AI, Providence

17 Apr: Anderson’s Books, Chicago, 19h:
https://www.andersonsbookshop.com/event/cory-doctorow-1

19-21 Apr: Torino Biennale Tecnologia
https://www.turismotorino.org/en/experiences/events/biennale-tecnologia

2 May, Canadian Centre for Policy Alternatives, Winnipeg
https://www.eventbrite.ca/e/cory-doctorow-tickets-798820071337

5-11 May: Tartu Prima Vista Literary Festival
https://tartu2024.ee/en/kirjandusfestival/

6-9 Jun: Media Ecology Association keynote, Amherst, NY
https://media-ecology.org/convention

(Image: Oatsy, CC BY 2.0, modified)


David BrinDo the Rich Have Too Much Money? A neglected posting... till now.

Want to time travel? I found this on my desktop… a roundup of news that seemed highly indicative… in early 2022!  Back in those naïve, bygone days of innocence… but now…  Heck yes, a lot of it is pertinent right now!


Like… saving market economies from their all-too-natural decay back into feudalism.


------


First, something a long time coming! Utter proof we are seeing a Western Revival and push back against the World Oligarchic Putsch. A landmark deal agreed upon by the world's richest nations on Saturday will see a global minimum rate of corporation tax placed on multinational companies, including tech giants like Amazon, Apple and Microsoft. Finance ministers from the Group of Seven, or G-7 nations, said they had agreed to a global base corporate tax rate of at least 15 percent. Companies with a strong online presence would pay taxes in the countries where they record sales, not just where they have an operational base.


It is far, far from enough! But at last some of my large-scale 'suggestions' are being tried. Now let's get all 50 U.S. states to pass a treaty banning 'bidding wars' for factories, sports teams etc... with maybe a sliding scale tilted for poorer states or low populations. A trivially easy thing that'd save citizens hundreds of billions.


The following made oligarchs fearful of what the Pelosi bills might accomplish, if thirty years of sabotaging the IRS came to an end: 


ProPublica has obtained a vast trove of Internal Revenue Service data on the tax returns of thousands of the nation’s wealthiest people, covering more than 15 years. The data provides an unprecedented look inside the financial lives of America’s titans, including Warren Buffett, Bill Gates, Rupert Murdoch and Mark Zuckerberg. It shows not just their income and taxes, but also their investments, stock trades, gambling winnings and even the results of audits. Taken together, it demolishes the cornerstone myth of the American tax system: that everyone pays their fair share and the richest Americans pay the most. The results are stark. According to Forbes, those 25 people saw their worth rise a collective $401 billion from 2014 to 2018. They paid a total of $13.6 billion in federal income taxes in those five years, the IRS data shows. That’s a staggering sum, but it amounts to a true tax rate of only 3.4%.  

Over the longer run, what we need is the World Ownership Treaty. Nothing on Earth is 'owned' unless a human or government or nonprofit claims it openly and accountably. So much illicit property would be abandoned by criminals etc. that national debts would be erased and the rest of us could have a tax jubilee. The World Ownership Treaty has zero justified objections. If you own something... just say so.


And a minor tech note: An amazing helium airship alternates life as dirigible or water ship. Alas, it is missing some important aspects I could explain… 



== When the Rich have Too Much Money… ==


“The Nobel Prize-winning physicist Ilya Prigogine was fond of saying that the future is not so much determined by what we do in the present as our image of the future determines what we do today.” So begins the latest missive of Noema Magazine.


The Near Future: The Pew Research Center’s annual Big Challenges Report top-features my musings on energy, local production/autonomy, transparency etc., along with other top seers, like the estimable Esther Dyson, Jamais Cascio, Amy Webb & Abigail deKosnick and many others.


Among the points I raise:

  • Advances in cost-effectiveness of sustainable energy supplies will be augmented by better storage systems. This will both reduce reliance on fossil fuels and allow cities and homes to be more autonomous.
  • Urban farming methods may move to industrial scale, allowing similar moves toward local autonomy (perhaps requiring a full decade or more to show significant impact). Meat use will decline for several reasons, ensuring some degree of food security, as well.
  • Local, small-scale, on-demand manufacturing may start to show effects in 2025. If all of the above take hold, there will be surplus oceanic shipping capacity across the planet. Some of it may be applied to ameliorate (not solve) acute water shortages. Innovative uses of such vessels may range all the way to those depicted in my novel ‘Earth.’
  • Full-scale diagnostic evaluations of diet, genes and microbiome will result in micro-biotic therapies and treatments. AI appraisals of other diagnostics will both advance detection of problems and become distributed to handheld devices cheaply available to all, even poor clinics.
  • Handheld devices will start to carry detection technologies that can appraise across the spectrum, allowing NGOs and even private parties to detect and report environmental problems.
  • Socially, this extension of citizen vision will go beyond the current trend of assigning accountability to police and other authorities. Despotisms will be empowered, as predicted in ‘Nineteen Eighty-four.’ But democracies will also be empowered, as in ‘The Transparent Society.’
  • I give odds that tsunamis of revelation will crack the shields protecting many elites from disclosure of past and present torts and turpitudes. The Panama Papers and Epstein cases exhibit how fear propels the elites to combine efforts at repression. But only a few more cracks may cause the dike to collapse, revealing networks of blackmail. This is only partly technologically driven and hence is not guaranteed. If it does happen, there will be dangerous spasms by all sorts of elites, desperate to either retain status or evade consequences. But if the fever runs its course, the more transparent world will be cleaner and better run.
  • Some of those elites have grown aware of the power of 90 years of Hollywood propaganda for individualism, criticism, diversity, suspicion of authority and appreciation of eccentricity. Counter-propaganda pushing older, more traditional approaches to authority and conformity are already emerging, and they have the advantage of resonating with ancient human fears. Much will depend upon this meme war.

“Of course, much will also depend upon short-term resolution of current crises. If our systems remain undermined and sabotaged by incited civil strife and distrust of expertise, then all bets are off. You will get many answers to this canvassing fretting about the spread of ‘surveillance technologies that will empower Big Brother.’ These fears are well-grounded, but utterly myopic. First, ubiquitous cameras and facial recognition are only the beginning. Nothing will stop them and any such thought of ‘protecting’ citizens from being seen by elites is stunningly absurd, as the cameras get smaller, better, faster, cheaper, more mobile and vastly more numerous every month. Moore’s Law to the nth degree. Yes, despotisms will benefit from this trend. And hence, the only thing that matters is to prevent despotism altogether.

“In contrast, a free society will be able to apply the very same burgeoning technologies toward accountability. We are seeing them applied to end centuries of abuse by ‘bad-apple’ police who are thugs, while empowering the truly professional cops to do their jobs better. I do not guarantee light will be used this way, despite today’s spectacular example. It is an open question whether we citizens will have the gumption to apply ‘sousveillance’ upward at all elites. But Gandhi and Martin Luther King Jr. likewise were saved by crude technologies of light in their days. And history shows that assertive vision by and for the citizenry is the only method that has ever increased freedom and – yes – some degree of privacy.

A new type of digital asset - known as a non-fungible token (NFT) - has exploded in popularity during the pandemic as enthusiasts and investors scramble to spend enormous sums of money on items that only exist online. “Blockchain technology allows the items to be publicly authenticated as one-of-a-kind, unlike traditional online objects which can be endlessly reproduced.”… “ In October 2020, Miami-based art collector Pablo Rodriguez-Fraile spent almost $67,000 on a 10-second video artwork that he could have watched for free online. Last week, he sold it for $6.6 million. The video by digital artist Beeple, whose real name is Mike Winkelmann, was authenticated by blockchain, which serves as a digital signature to certify who owns it and that it is the original work.”


From The Washington Post: The post-covid luxury spending boom has begun. It's already reshaping the economy. Consider: a sealed copy of Super Mario 64 sold for $1.56M in a record-breaking auction. That record didn't last long, only till August 2021, when a rare copy of Super Mario Bros. sold for $2 million, the most ever paid for a video game.


===============================


== Addendum March 30, 2024 ==


What's above was an economics rant from the past. Only now, let me also tack on something from spring 2024 (today!) that I just sent to a purported 'investment guru economist' I know. His bi-weekly newsletter regularly - and obsessively - focuses on the Federal Reserve ('Fed') and the ongoing drama of setting calibrated interest rates to fight inflation.  (The fellow never, ever talks about all the things that matter much, much more, like tax/fiscal policy, money velocity and rising wealth disparities.)


Here, he does make one cogent point about inflation... but doesn't follow up to the logical conclusion:


"This matters because the average consumer doesn’t look at benchmarks. They perceive inflation when it starts having visibly large and/or frequent effects on their lives. This is why food and gasoline prices matter so much; people buy them regularly enough to notice higher prices. Their contribution to inflation perceptions is greater than their weighting in the benchmarks."

Yes!  It is true that the poor and middle class do not borrow in order to meet basic needs. All they can do, when prices rise, is tighten their belts. Interest rates do not affect such basics.

ALSO: The rich do not borrow. Because, after 40 years of Supply Side tax grifts, they have all the money! And now they are snapping up 1/3 of US housing stock with cash purchases. What Adam Smith called economically useless 'rent-seeking'. The net effect of Republican Congresses firehosing all our wealth into the gaping-open maws of parasites.

That's gradually changing, at last. The US is rapidly re-industrializing, right now! But not by borrowing. The boom in US manufacturing investment is entirely Keynesian - meaning that it's being propelled by federal infrastructure spending and the CHIPS Act. Those Pelosi bills are having all of the positive effects that Austrian School fanatics insanely promised for Supply Side... and never delivered.

That old argument is now settled by facts... which never (alas) sway cultists. Pure fact. Keynes is proved. Laffer is disproved. Period.

But in that case, what's with the Right's obsession with the Federal Reserve? What - pray tell - is the Fed supposedly influencing with its interest rate meddling? The answer is... not much.

If you want to see what's important to oligarchy - the core issue that's got them so upset that they will support Trump - just look at what the GOP in Congress and the Courts is actually doing, right now! Other than "Hunter hearings" and other Benghazi-style theatrics, what Mike Johnson et al. are doing is:

- Desperately using every lever - like government shut-down threats and holding aid to Ukraine hostage - to slash the coming wave of IRS audits that might affect their masters. With that wave looming, many in oligarchy are terrified. Re-eviscerating the IRS is the top GOP priority! But Schumer called Johnson's bluff.

- Their other clear priority is obedience to the Kremlin and blocking aid to Ukraine.

Look at what actually is happening and then please, please name for me one other actual (not polemical) priority? 

== And finally ==

Oh yeah, then there's this. 

Please don't travel April 17-21.  

That's McVeigh season. Though, if you listen to MAGA world, ALL of 2024 into 2025 could be.  

God bless the FBI.




Planet DebianJunichi Uekawa: Learning about xz and what is happening is fascinating.

Learning about xz and what is happening is fascinating. The scope of the potential exploit is very large. The open source software space is filled with unmaintained and unreviewed software.

Planet DebianRussell Coker: Links March 2024

Bruce Schneier wrote an interesting blog post about his workshop on reimagining democracy and the unusual way he structured it [1]. It would be fun to have a security conference run like that!

Matthias wrote an informative blog post about Wayland, “Wayland really breaks things… Just for now”, which links to a blog debate about the utility of Wayland [2]. Wayland seems pretty good to me.

Cory Doctorow wrote an insightful article about the AI bubble comparing it to previous bubbles [3].

Charles Stross wrote an insightful analysis of the implications if the UK brought back military conscription [4]. Looks like the era of large armies is over.

Charles Stross wrote an informative blog post about the Worldcon in China, covering issues of vote rigging for location, government censorship vs awards, and business opportunities [5].

The Paris Review has an interesting article about speaking to the CIA’s Creative Writing Group [6]. It doesn’t explain why they have a creative writing group that has some sort of semi-official sanction.

LongNow has an insightful article about the threats to biodiversity in food crops and the threat that poses to humans [7].

Bruce Schneier and Albert Fox Cahn wrote an interesting article about the impacts of chatbots on human discourse [8]. If it makes people speak more precisely then that would be great for all Autistic people!

365 TomorrowsPast Belief

Author: Don Nigroni When everything is going well, I can’t relax. I just wait and worry for something bad to happen. So when I got a promotion last week, naturally I expected something ugly would happen, perhaps a leaky roof or maybe a hurricane. But this time, no matter how hard I looked for an […]

The post Past Belief appeared first on 365tomorrows.


Planet DebianSteinar H. Gunderson: xz backdooring

Andres Freund found that xz-utils is backdoored, but could not (despite the otherwise excellent analysis) get quite to the bottom of what the payload actually does.

What you would hope for to be posted by others: Further analysis of the payload.

What actually gets posted by others: “systemd is bad.”

Update: Good preliminary analysis.

365 TomorrowsProximity Suit

Author: Jeremy Nathan Marks Athabasca was a town of gas and coal. No wind or solar were allowed. Local officials said the Lord would return by fire while windmills and solar panels could only mar the landscape. And fire in a town of coal and gas was, naturally, a lovely thing. On a plain not […]

The post Proximity Suit appeared first on 365tomorrows.


Cryptogram Friday Squid Blogging: The Geopolitics of Eating Squid

New York Times op-ed on the Chinese dominance of the squid industry:

China’s domination in seafood has raised deep concerns among American fishermen, policymakers and human rights activists. They warn that China is expanding its maritime reach in ways that are putting domestic fishermen around the world at a competitive disadvantage, eroding international law governing sea borders and undermining food security, especially in poorer countries that rely heavily on fish for protein. In some parts of the world, frequent illegal incursions by Chinese ships into other nations’ waters are heightening military tensions. American lawmakers are concerned because the United States, locked in a trade war with China, is the world’s largest importer of seafood.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Planet DebianRaphaël Hertzog: Freexian is looking to expand its team with more Debian contributors

It’s been a while since I posted anything on my blog; the truth is that Freexian has been doing very well in the last few years and that I have a hard time allocating time to write articles or even to contribute to my usual Debian projects… the exception being debusine, since that’s part of the Freexian work (have a look at our most recent announcement!).

That being said, given Freexian’s growth and in the hope of reducing my workload, we are looking to extend our team with Debian members of more varied backgrounds and skills, so they can help us in areas like sales / marketing / project management. Have a look at our announcement on debian-jobs@lists.debian.org.

As a mission-oriented company, we are looking to work with people already involved in Debian (or people who have been waiting for the right opportunity to get involved). All our collaborators can spend 20% of their paid work time on the Debian projects they care about.

Planet DebianRavi Dwivedi: A visit to the Taj Mahal

Note: The currency used in this post is Indian Rupees, which was around 83 INR to 1 US Dollar at the time.

My friend Badri and I visited the Taj Mahal this month. The Taj Mahal is one of the main tourist destinations in India and hardly needs an introduction. It is in Agra, in the state of Uttar Pradesh, 188 km from Delhi by train. So, I am writing a post documenting useful information for people who are planning to visit the Taj Mahal. Feel free to ask me questions about the trip.

Our retiring room at the Old Delhi Railway Station.

We had booked a train from Delhi to Agra: the Taj Express, with a scheduled departure from Hazrat Nizamuddin station in Delhi at 07:08 in the morning and arrival at Agra Cantt station at 09:45. So, we booked a retiring room at the Old Delhi railway station for the previous night. This retiring room was hard to find. We woke up at 05:00 in the morning and took the metro to Hazrat Nizamuddin station. We barely reached the station in time, but as it turned out the train was running late and had not yet arrived.

We reached Agra at 10:30, checked into our retiring room, rested, and set out for the Taj Mahal at 13:00. The Taj Mahal’s outer gate is 5 km away from Agra Cantt station. As we were leaving the railway station, we were chased by an autorickshaw driver who offered to take both of us to the Taj Mahal for 150 INR. I asked him to lower it to 60 INR, and after some back and forth he came down to 80 INR. But I said we wouldn’t pay anything above 60 INR. He agreed to that amount but said he would need to fill the auto with more passengers first. When we saw that he wasn’t making any effort to find more passengers, we walked away.

As soon as we got out of the railway station complex, another autorickshaw driver came up and offered to drop us off at the Taj Mahal for 20 INR each if we shared with other passengers, or 100 INR if we reserved the auto for ourselves. We agreed to 20 INR per person, but he started the autorickshaw as soon as we hopped in. I thought the third person in the auto was another passenger sharing the ride with us, but we later found out he was with the driver. Upon reaching the outer gate of the Taj Mahal, I gave him 40 INR (for both of us), but he demanded 100 INR instead, claiming we had reserved the auto, even though I had clearly stated before getting in that we wanted to share it, not reserve it. I think this was a scam. We walked away, and he didn’t insist further.

The Taj Mahal entrance was about 500 m from the outer gate. We went there and bought offline tickets just outside the West gate. For Indians, the ticket for entering the Taj Mahal complex is 50 INR, and a visit to the mausoleum costs 200 INR extra.

Security outside the Taj Mahal complex.

This red-colored building is the entrance to where you can see the Taj Mahal.

Taj Mahal.

Shoe covers for going inside the mausoleum.

Taj Mahal from side angle.

We came out of the Taj Mahal complex at 18:00 and stopped for some tea and snacks. I also bought a fridge magnet for 30 INR. Then we walked back towards Agra Cantt station, as we had a train to Jaipur at midnight. We were hoping to find a restaurant along the way, but we didn’t find any that appealed to us, so we just ate at the railway station. During the walk back, we noticed a bus stand near the station that we hadn’t known about; it turns out you can catch a bus to the Taj Mahal from there. You can click here to check out the location of that bus stand on OpenStreetMap.

Expenses

These were our expenses per person:

  • Retiring room at Delhi Railway Station for 12 hours: ₹131

  • Train ticket from Delhi to Agra (Taj Express): ₹110

  • Retiring room at Agra Cantt station for 12 hours: ₹450

  • Auto-rickshaw to Taj Mahal: ₹20

  • Taj Mahal ticket (including going inside the mausoleum): ₹250

  • Food: ₹350

Important information for visitors

  • Taj Mahal is closed on Friday.

  • There are plenty of free-of-cost drinking water taps inside the Taj Mahal complex.

  • The ticket price for Indians is ₹50, for foreigners and NRIs it is ₹1100, and for people from SAARC/BIMSTEC countries it is ₹540. The mausoleum costs ₹200 extra for everyone.

  • A visit inside the mausoleum requires covering your shoes or removing them. Shoe covers cost ₹10 per person inside the complex, but are probably included free of charge with foreigner tickets. We could not find a place to leave our shoes, but some people managed to enter barefoot, so there must be somewhere to keep them.

  • Mobile phones and cameras are allowed inside the Taj Mahal, but not eatables.

  • We went there on March 10th, and the weather was pleasant. So, we recommend going around that time.

  • Regarding the timings, I found this written near the ticket counter: “Taj Mahal opens 30 minutes before sunrise and closes 30 minutes before sunset during normal operating days,” so the timings are vague. We came out of the complex at 18:00, so I would interpret this to mean the Taj Mahal is open from roughly 07:00 to 18:00, with the ticket counter closing at around 17:00. During the winter, the timings might differ.

  • The cheapest way to reach the Taj Mahal is by bus, and the bus stop is here.

Bye for now. See you in the next post :)

Worse Than FailureError'd: Good Enough Friday

We've got some of the rarer classic Error'd types today: events from the dawn of time, weird definitions of space, and this absolutely astonishing choice of cancel/confirm button text.

Perplexed Stewart found this and it's got me completely befuddled as well! "Puzzled over this classic type of Error'd for ages. I really have no clue whether I should press Yes or No."

avast

 

I have a feeling we've seen errors like this before, but it bears repeating. Samuel H. bemoans the awful irony. "While updating Adobe Reader: Adobe Crash Processor quit unexpectedly [a.k.a. crashed]."

adobe

 

Cosmopolitan Jan B. might be looking for a courier to carry something abroad. "I found an eBay listing that seemed too good to be true, but had no bids at all! The item even ships worldwide, except they have a very narrow definition of what worldwide means."

shipping

 

Super-Patriot Chris A. proves Tennessee will take second place to nobody when it comes to distrusting dirty furriners. Especially the ones in Kentucky. "The best country to block is one's own. That way, you KNOW no foreigners can read your public documents!"

denied

 

Finally, old-timer Bruce R. has a system that appears to have been directly inspired by Aristotle. "I know Windows has some old code in it, but this is ridiculous."

stale

 

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 TomorrowsA Little Extra

Author: Marcel Neumann After years of living in an unjust world, being disillusioned by an ideology I once thought held promise and having lost faith in humanity’s collective desire to live in harmony, I decided to live off the grid in a remote Alaskan village. Any needed or desired supplies were flown in by a […]

The post A Little Extra appeared first on 365tomorrows.

Cryptogram Lessons from a Ransomware Attack against the British Library

You might think that libraries are kind of boring, but this self-analysis of a 2023 ransomware and extortion attack against the British Library is anything but.

Planet DebianPatryk Cisek: Sanoid on TrueNAS

syncoid to TrueNAS

In my homelab, I have 2 NAS systems:

  • Linux (Debian)

  • TrueNAS Core (based on FreeBSD)

On my Linux box, I use Jim Salter’s sanoid to periodically take snapshots of my ZFS pool. I also want to have a proper backup of the whole pool, so I use syncoid to transfer those snapshots to another machine. Sanoid itself is responsible only for taking new snapshots and pruning old ones you no longer care about.
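For readers who haven't used the pair: sanoid reads a small config file describing which datasets to snapshot and how long to keep them, while syncoid is a separate command that replicates those snapshots to another host with zfs send/receive. A minimal sketch of that shape follows; the pool name, dataset, host and retention numbers are invented for illustration and are not taken from the post.

# /etc/sanoid/sanoid.conf on the Debian box
[tank/data]
        use_template = production

[template_production]
        hourly = 36
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes

# Replicate the snapshots to the TrueNAS box, e.g. from a cron job:
$ syncoid tank/data backup@truenas.example:backup/data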

Planet DebianReproducible Builds (diffoscope): diffoscope 262 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 262. This version includes the following changes:

[ Chris Lamb ]
* Factor out Python version checking in test_zip.py. (Re: #362)
* Also skip some zip tests under 3.10.14 as well; a potential regression may
  have been backported to the 3.10.x series. The underlying cause is still to
  be investigated. (Re: #362)

You find out more by visiting the project homepage.

,

Krebs on SecurityThread Hijacking: Phishes That Prey on Your Curiosity

Thread hijacking attacks. They happen when someone you know has their email account compromised, and you are suddenly dropped into an existing conversation between the sender and someone else. These missives draw on the recipient’s natural curiosity about being copied on a private discussion, one that has been modified to include a malicious link or attachment. Here’s the story of a thread hijacking attack in which a journalist was copied on a phishing email from the unwilling subject of a recent scoop.

In Sept. 2023, the Pennsylvania news outlet LancasterOnline.com published a story about Adam Kidan, a wealthy businessman with a criminal past who is a major donor to Republican causes and candidates, including Rep. Lloyd Smucker (R-Pa).

The LancasterOnline story about Adam Kidan.

Several months after that piece ran, the story’s author Brett Sholtis received two emails from Kidan, both of which contained attachments. One of the messages appeared to be a lengthy conversation between Kidan and a colleague, with the subject line, “Re: Successfully sent data.” The second missive was a more brief email from Kidan with the subject, “Acknowledge New Work Order,” and a message that read simply, “Please find the attached.”

Sholtis said he clicked the attachment in one of the messages, which then launched a web page that looked exactly like a Microsoft Office 365 login page. An analysis of the webpage reveals it would check any submitted credentials at the real Microsoft website, and return an error if the user entered bogus account information. A successful login would record the submitted credentials and forward the victim to the real Microsoft website.

But Sholtis said he didn’t enter his Outlook username and password. Instead, he forwarded the messages to LancasterOnline’s IT team, which quickly flagged them as phishing attempts.

LancasterOnline Executive Editor Tom Murse said the two phishing messages from Mr. Kidan raised eyebrows in the newsroom because Kidan had threatened to sue the news outlet multiple times over Sholtis’s story.

“We were just perplexed,” Murse said. “It seemed to be a phishing attempt but we were confused why it would come from a prominent businessman we’ve written about. Our initial response was confusion, but we didn’t know what else to do with it other than to send it to the FBI.”

The phishing lure attached to the thread hijacking email from Mr. Kidan.

In 2006, Kidan was sentenced to 70 months in federal prison after pleading guilty to defrauding lenders along with Jack Abramoff, the disgraced lobbyist whose corruption became a symbol of the excesses of Washington influence peddling. He was paroled in 2009, and in 2014 moved his family to a home in Lancaster County, Pa.

The FBI hasn’t responded to LancasterOnline’s tip. Messages sent by KrebsOnSecurity to Kidan’s email addresses were returned as blocked. Messages left with Mr. Kidan’s company, Empire Workforce Solutions, went unreturned.

No doubt the FBI saw the messages from Kidan for what they likely were: The result of Mr. Kidan having his Microsoft Outlook account compromised and used to send malicious email to people in his contacts list.

Thread hijacking attacks are hardly new, but that is mainly true because many Internet users still don’t know how to identify them. The email security firm Proofpoint says it has tracked north of 90 million malicious messages in the last five years that leverage this attack method.

One key reason thread hijacking is so successful is that these attacks generally do not include the tell that exposes most phishing scams: A fabricated sense of urgency. A majority of phishing threats warn of negative consequences should you fail to act quickly — such as an account suspension or an unauthorized high-dollar charge going through.

In contrast, thread hijacking campaigns tend to patiently prey on the natural curiosity of the recipient.

Ryan Kalember, chief strategy officer at Proofpoint, said probably the most ubiquitous examples of thread hijacking are “CEO fraud” or “business email compromise” scams, wherein employees are tricked by an email from a senior executive into wiring millions of dollars to fraudsters overseas.

But Kalember said these low-tech attacks can nevertheless be quite effective because they tend to catch people off-guard.

“It works because you feel like you’re suddenly included in an important conversation,” Kalember said. “It just registers a lot differently when people start reading, because you think you’re observing a private conversation between two different people.”

Some thread hijacking attacks actually involve multiple threat actors who are actively conversing while copying — but not addressing — the recipient.

“We call these multi-persona phishing scams, and they’re often paired with thread hijacking,” Kalember said. “It’s basically a way to build a little more affinity than just copying people on an email. And the longer the conversation goes on, the higher their success rate seems to be because some people start replying to the thread [and participating] psycho-socially.”

The best advice to sidestep phishing scams is to avoid clicking on links or attachments that arrive unbidden in emails, text messages and other mediums. If you’re unsure whether the message is legitimate, take a deep breath and visit the site or service in question manually — ideally, using a browser bookmark so as to avoid potential typosquatting sites.

LongNowMembers of Long Now

Members of Long Now

With thousands of members from across the globe, the Long Now community has a wide range of perspectives, stories, and experiences to share. We're delighted to showcase this curated set of Ignite Talks, created and given by the Long Now members themselves. Presenting on the subjects of their choice, our speakers have precisely 5 minutes to amuse, educate, enlighten, or inspire the audience!

We're opening talk submissions to all members in early April and will send details via email; we can accept both in-person and recorded talks.

And save the date of May 29 to join us in-person and online for a fun and fast-paced evening of Long Now Ignite Talks full of surprising and thoughtful ideas.

Planet DebianJoey Hess: the vulture in the coal mine

Turns out that VPS provider Vultr's terms of service were quietly changed some time ago to give them a "perpetual, irrevocable" license to use content hosted there in any way, including modifying it and commercializing it "for purposes of providing the Services to you."

This is very similar to changes that Github made to their TOS in 2017. Since then, Github has been rebranded as "The world’s leading AI-powered developer platform". The language in their TOS now clearly lets them use content stored in Github for training AI. (Probably this is their second line of defense if the current attempt to legitimise copyright laundering via generative AI fails.)

Vultr is currently in damage control mode, accusing their concerned customers of spreading "conspiracy theories" (-- founder David Aninowsky) and updating the TOS to remove some of the problem language. It still allows them to "make derivative works", though, so it could still allow their AI division to scrape VPS images for training data.

Vultr claims this was the legalese version of technical debt, that it only ever applied to posts in a forum (not supported by the actual TOS language), and basically that they and their lawyers are incompetent but not malicious.

Maybe they are indeed incompetent. But even if I give them the benefit of the doubt, I expect that many other VPS providers, especially ones targeting non-corporate customers, are watching this closely. If Vultr is not significantly harmed by customers jumping ship, if the latest TOS change is accepted as good enough, then other VPS providers will know that they can try this TOS trick too. If Vultr's AI division does well, others will wonder to what extent it is due to having all this juicy training data.

For small self-hosters, this seems like a good time to make sure you're using a VPS provider you can actually trust to not be eyeing your disk image and salivating at the thought of stripmining it for decades of emails. Probably also worth thinking about moving to bare metal hardware, perhaps hosted at home.

I wonder if this will finally make it worthwhile to mess around with VPS TPMs?

Planet DebianScarlett Gately Moore: Kubuntu, KDE Report. In Loving Memory of my Son.

Personal:

As many of you know, I lost my beloved son March 9th. This has hit me really hard, but I am staying strong and holding on to all the wonderful memories I have. He grew up to be an amazing man, a devoted Christian and a wonderful father. He was loved by everyone who knew him and will be truly missed by us all. I have had folks ask me how they can help. He left behind his 7 year old son Mason. Mason was Billy’s world and I would like to make sure Mason is taken care of. I have set up a gofundme for Mason and all proceeds will go to his future care.

https://gofund.me/25dbff0c

Work report

Kubuntu:

Bug bashing! I am triaging allthebugs for Plasma which can be seen here:

https://bugs.launchpad.net/plasma-5.27/+bug/2053125

I am happy to report many of the remaining bugs have been fixed in the latest bug fix release 5.27.11.

I prepared https://kde.org/announcements/plasma/5/5.27.11/ and Rik uploaded it to the archive, thank you. Unfortunately, this and several other key fixes are stuck in transition due to the time_t64 transition, which you can read about here: https://wiki.debian.org/ReleaseGoals/64bit-time . It is the biggest transition in Debian/Ubuntu history, and it couldn’t come at a worse time. We are aware our ISO installer is currently broken; calamares is one of the things stuck in this transition. There is a workaround in the comments of the bug report: https://bugs.launchpad.net/ubuntu/+source/calamares/+bug/2054795

Fixed an issue with plasma-welcome.

Found the fix for emojis and Aaron has kindly moved this forward with the fontconfig maintainer. Thanks!

I have received a Kfocus laptop ( https://kfocus.org/spec/spec-ir14.html ) and it is truly a great machine; it is now my daily driver. A big thank you to the Kfocus team! I can’t wait to show it off at https://linuxfestnorthwest.org/.

KDE Snaps:

You will see activity here ramp back up, as the KDEneon Core project is finally a go! I will participate in the project part time, getting everyone on the Enokia team up to speed with my snap knowledge, helping prepare the qt6/kf6 transition, packaging Plasma, and, most importantly, focusing on documentation for future contributors.

I have created the (now split) qt6-with-KDE-patchset and KDE Frameworks 6 SDK and runtime snaps. I have made the kde-neon-6 extension, and the PR is in: https://github.com/canonical/snapcraft/pull/4698 . Future work on the extension will include support for multiple version tracks and core24.

I have successfully created our first qt6/kf6 snap, ark. It will show up in the store once all the required bits have been merged and published.

Thank you for stopping by.

~Scarlett

365 TomorrowsDream State

Author: Majoki They call us the new DJs—Dream Jockeys—because we stitch together popular playlists for the masses. I think it lacks imagination to piggyback on the long-gone days of vinyl playing over the airways. But that’s human nature. Always harkening back to something familiar, something easy to romanticize, something less threatening. I guess there are […]

The post Dream State appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Sorts of Dates

We've seen loads of bad date handling, but as always, there's new ways to be surprised by the bizarre inventions people come up with. Today, Tim sends us some bad date sorting, in PHP.

    // Function to sort follow-ups by Date
    function cmp($a, $b)  {
        return strcmp(strtotime($a["date"]), strtotime($b["date"]));
    }
   
    // Sort the follow-ups by Date
    usort($data, "cmp");

The cmp function rests in the global namespace, which is a nice way to ensure future confusion- it's got a very specific job, but has a very generic name. And the job it does is… an interesting approach.

The "date" field in our records is a string. It's a string formatted in YYYY-MM-DD HH:MM:SS, and this is a guarantee of the inputs- which we'll get to in a moment. So the first thing that's worth noting is that the strings are already sortable, and nothing about this function needs to exist.

But being useless isn't the end of it. We convert the string time into a Unix timestamp with strtotime, which gives us an integer- also trivially sortable. But then we run that through strcmp, which converts the integer back into a string, so we can do a string comparison on it.

Elsewhere in the code, we use usort, passing it the wonderfully named $data variable, and then applying cmp to sort it.

Unrelated to this code, but a PHP weirdness, we pass the callable cmp as a string to the usort function to apply a sort. Every time I write a PHP article, I learn a new horror of the language, and "strings as callable objects" is definitely horrifying.

Now, a moment ago, I said that we knew the format of the inputs. That's a bold claim, especially for such a generically named function, but it's important: this function is used to sort the results of a database query. That's how we know the format of the dates- the input comes directly from a query.

A query that could easily be modified to include an ORDER BY clause, making this whole thing useless.

And in fact, someone had made that modification to the query, meaning that the data was already sorted before being passed to the usort function, which did its piles of conversions to sort it back into the same order all over again.
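To spell out the fix the article is pointing at: either sort in the query, or compare the strings directly, since the YYYY-MM-DD HH:MM:SS format already sorts correctly as text. A minimal sketch, reusing the $data variable and "date" key from the snippet above:

    // Option 1: let the database sort, e.g. ... ORDER BY `date`

    // Option 2: if sorting in PHP is really needed, the raw strings compare correctly
    usort($data, function ($a, $b) {
        return strcmp($a["date"], $b["date"]);
    });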

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

Planet DebianSteinar H. Gunderson: git grudge

Small teaser:

Probably won't show up in aggregators (try this link instead).

Worse Than FailureCodeSOD: Never Retire

We all know that 2038 is going to be a big year. In a mere 14 years, a bunch of devices are going to have problems.

Less known is the Y2030 problem, which is what Ventsislav is fighting to protect us from.

//POPULATE YEAR DROP DOWN LISTS
for (int year = 2000; year <= 2030; year++)
{
    DYearDropDownList.Items.Add(year.ToString());
    WYearDropDownList.Items.Add(year.ToString());
    MYearDropDownList.Items.Add(year.ToString());
}

//SELECT THE CURRENT YEAR
string strCurrentYear = DateTime.Now.Year.ToString();
for (int i = 0; i < DYearDropDownList.Items.Count; i++)
{
    if (DYearDropDownList.Items[i].Text == strCurrentYear)
    {
        DYearDropDownList.SelectedIndex = i;
        WYearDropDownList.SelectedIndex = i;
        MYearDropDownList.SelectedIndex = i;
        break;
    }
}

Okay, likely less critical than Y2038, but this code, as you might guess, started its life in the year 2000. Clearly, no one thought it'd still be in use this far out, yet… it is.

It's also worth noting that the drop down list object in .NET has a SelectedValue property, so the //SELECT THE CURRENT YEAR section is unnecessary, and could be replaced by a one-liner.

With six years to go, do you think this application is going to be replaced, or is the year for loop just going to change to year <= 2031 and be a manual change for the rest of the application's lifetime?

I mean, they could also change it so it always goes to currentYear or currentYear + 1 or whatever, but do we really think that's a viable option?
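For what it's worth, a rolling version of that change might look something like the sketch below. It assumes the ASP.NET-style drop-down lists shown above (where SelectedValue is settable), and the currentYear + 1 upper bound is just one of the options mentioned:

// Populate a rolling window of years instead of a hard-coded 2000-2030 range
int currentYear = DateTime.Now.Year;
for (int year = 2000; year <= currentYear + 1; year++)
{
    string text = year.ToString();
    DYearDropDownList.Items.Add(text);
    WYearDropDownList.Items.Add(text);
    MYearDropDownList.Items.Add(text);
}

// SelectedValue replaces the whole "find the matching index" loop
DYearDropDownList.SelectedValue = currentYear.ToString();
WYearDropDownList.SelectedValue = currentYear.ToString();
MYearDropDownList.SelectedValue = currentYear.ToString();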

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsCold War

Author: David Barber Like nuclear weapons before them, the plagues each side kept hidden were too terrible to use; instead they waged sly wars of colds and coughs, infectious agents sneezed in trams and crowded lifts, blighting commerce with working days lost to fevers and sickness; secret attacks hard to prove and impossible to stop. […]

The post Cold War appeared first on 365tomorrows.

,

Planet DebianEmmanuel Kasper: Adding a private / custom Certificate Authority to the firefox trust store

Today at $WORK I needed to add the private company Certificate Authority (CA) to Firefox, and I found the steps were unnecessarily complex. Time to blog about that, and I also made a Debian wiki article of that post, so that future generations can update the information, when Firefox 742 is released on Debian 17.

The cacert certificate authority is not included in either Debian or Firefox, and is thus a good example for adding a private CA. Note that this does not mean I specifically endorse that CA.

  • Test that SSL connections to a site signed by the private CA fail
$ gnutls-cli wiki.cacert.org:443
...
- Status: The certificate is NOT trusted. The certificate issuer is unknown. 
*** PKI verification of server certificate failed...
*** Fatal error: Error in the certificate.
  • Download the private CA
$ wget http://www.cacert.org/certs/root_X0F.crt
  • test that a connection works with the private CA
$ gnutls-cli --x509cafile root_X0F.crt wiki.cacert.org:443
...
- Status: The certificate is trusted. 
- Description: (TLS1.2-X.509)-(ECDHE-SECP256R1)-(RSA-SHA256)-(AES-256-GCM)
- Session ID: 37:56:7A:89:EA:5F:13:E8:67:E4:07:94:4B:52:23:63:1E:54:31:69:5D:70:17:3C:D0:A4:80:B0:3A:E5:22:B3
- Options: safe renegotiation,
- Handshake was completed
...
  • add the private CA to the Debian trust store located in /etc/ssl/certs/ca-certificates.crt
$ sudo cp root_X0F.crt /usr/local/share/ca-certificates/cacert-org-root-ca.crt
$ sudo update-ca-certificates --verbose
...
Adding debian:cacert-org-root-ca.pem
...
  • verify that we can connect without passing the private CA on the command line
$ gnutls-cli wiki.cacert.org:443
... 
 - Status: The certificate is trusted.
  • At that point most applications are able to connect to systems with a certificate signed by the private CA (curl, the GNOME built-in browser, …). However, Firefox uses its own trust store and will still display a security error when connecting to https://wiki.cacert.org. To make Firefox trust the Debian trust store, we need to add a so-called security device: in practice an extra library that exposes the Debian trust store in the PKCS#11 industry format that Firefox supports.

  • install the pkcs#11 wrapping library and command line tools

$ sudo apt install p11-kit p11-kit-modules
  • verify that the private CA is accessible via PKCS#11
$ trust list | grep --context 2 'CA Cert'
pkcs11:id=%16%B5%32%1B%D4%C7%F3%E0%E6%8E%F3%BD%D2%B0%3A%EE%B2%39%18%D1;type=cert
    type: certificate
    label: CA Cert Signing Authority
    trust: anchor
    category: authority
  • now we need to add a new security device in Firefox pointing to the pkcs11 trust store. The pkcs11 trust store is located in /usr/lib/x86_64-linux-gnu/pkcs11/p11-kit-trust.so
$ dpkg --listfiles p11-kit-modules | grep trust
/usr/lib/x86_64-linux-gnu/pkcs11/p11-kit-trust.so
  • in Firefox (tested in version 115 esr), go to Settings -> Privacy & Security -> Security -> Security Devices.
    Then click “Load”; in the popup window use “My local trust” as a module name, and /usr/lib/x86_64-linux-gnu/pkcs11/p11-kit-trust.so as a module filename. After adding the module, you should see it in the list of Security Devices, having /etc/ssl/certs/ca-certificates.crt as a description.

  • now restart Firefox and you should be able to browse https://wiki.cacert.org without security errors

Cryptogram Hardware Vulnerability in Apple’s M-Series Chips

It’s yet another hardware side-channel attack:

The threat resides in the chips’ data memory-dependent prefetcher, a hardware optimization that predicts the memory addresses of data that running code is likely to access in the near future. By loading the contents into the CPU cache before it’s actually needed, the DMP, as the feature is abbreviated, reduces latency between the main memory and the CPU, a common bottleneck in modern computing. DMPs are a relatively new phenomenon found only in M-series chips and Intel’s 13th-generation Raptor Lake microarchitecture, although older forms of prefetchers have been common for years.

[…]

The breakthrough of the new research is that it exposes a previously overlooked behavior of DMPs in Apple silicon: Sometimes they confuse memory content, such as key material, with the pointer value that is used to load other data. As a result, the DMP often reads the data and attempts to treat it as an address to perform memory access. This “dereferencing” of “pointers”—meaning the reading of data and leaking it through a side channel—­is a flagrant violation of the constant-time paradigm.

[…]

The attack, which the researchers have named GoFetch, uses an application that doesn’t require root access, only the same user privileges needed by most third-party applications installed on a macOS system. M-series chips are divided into what are known as clusters. The M1, for example, has two clusters: one containing four efficiency cores and the other four performance cores. As long as the GoFetch app and the targeted cryptography app are running on the same performance cluster—­even when on separate cores within that cluster­—GoFetch can mine enough secrets to leak a secret key.

The attack works against both classical encryption algorithms and a newer generation of encryption that has been hardened to withstand anticipated attacks from quantum computers. The GoFetch app requires less than an hour to extract a 2048-bit RSA key and a little over two hours to extract a 2048-bit Diffie-Hellman key. The attack takes 54 minutes to extract the material required to assemble a Kyber-512 key and about 10 hours for a Dilithium-2 key, not counting offline time needed to process the raw data.

The GoFetch app connects to the targeted app and feeds it inputs that it signs or decrypts. As it’s doing this, it extracts the app secret key that it uses to perform these cryptographic operations. This mechanism means the targeted app need not perform any cryptographic operations on its own during the collection period.

Note that exploiting the vulnerability requires running a malicious app on the target computer. So it could be worse. On the other hand, like many of these hardware side-channel attacks, it’s not possible to patch.

Slashdot thread.

Cryptogram Security Vulnerability in Saflok’s RFID-Based Keycard Locks

It’s pretty devastating:

Today, Ian Carroll, Lennert Wouters, and a team of other security researchers are revealing a hotel keycard hacking technique they call Unsaflok. The technique is a collection of security vulnerabilities that would allow a hacker to almost instantly open several models of Saflok-brand RFID-based keycard locks sold by the Swiss lock maker Dormakaba. The Saflok systems are installed on 3 million doors worldwide, inside 13,000 properties in 131 countries. By exploiting weaknesses in both Dormakaba’s encryption and the underlying RFID system Dormakaba uses, known as MIFARE Classic, Carroll and Wouters have demonstrated just how easily they can open a Saflok keycard lock. Their technique starts with obtaining any keycard from a target hotel—say, by booking a room there or grabbing a keycard out of a box of used ones—then reading a certain code from that card with a $300 RFID read-write device, and finally writing two keycards of their own. When they merely tap those two cards on a lock, the first rewrites a certain piece of the lock’s data, and the second opens it.

Dormakaba says that it’s been working since early last year to make hotels that use Saflok aware of their security flaws and to help them fix or replace the vulnerable locks. For many of the Saflok systems sold in the last eight years, there’s no hardware replacement necessary for each individual lock. Instead, hotels will only need to update or replace the front desk management system and have a technician carry out a relatively quick reprogramming of each lock, door by door. Wouters and Carroll say they were nonetheless told by Dormakaba that, as of this month, only 36 percent of installed Safloks have been updated. Given that the locks aren’t connected to the internet and some older locks will still need a hardware upgrade, they say the full fix will still likely take months longer to roll out, at the very least. Some older installations may take years.

If ever. My guess is that for many locks, this is a permanent vulnerability.

Charles StrossA message from our sponsors: New Book coming!

(You probably expected this announcement a while ago ...)

I've just signed a new two book deal with my publishers, Tor.com publishing in the USA/Canada and Orbit in the UK/rest of world, and the book I'm talking about here and now—the one that's already written and delivered to the Production people who turn it into a thing you'll be able to buy later this year—is a Laundry stand-alone titled "A Conventional Boy".

("Delivered to production" means it is now ready to be copy-edited, typeset, printed/bound/distributed and simultaneously turned into an ebook and pushed through the interwebbytubes to the likes of Kobo and Kindle. I do not have a publication date or a link where you can order it yet: it almost certainly can't show up before July at this point. Yes, everything is running late. No, I have no idea why.)

"A Conventional Boy" is not part of the main (and unfinished) Laundry Files story arc. Nor is it a New Management story. It's a stand-alone story about Derek the DM, set some time between the end of "The Fuller Memorandum" and before "The Delirium Brief". We met Derek originally in "The Nightmare Stacks", and again in "The Labyrinth Index": he's a portly, short-sighted, middle-aged nerd from the Laundry's Forecasting Ops department who also just happens to be the most powerful precognitive the Laundry has tripped over in the past few decades—and a role playing gamer.

When Derek was 14 years old and running a D&D campaign, a schoolteacher overheard him explaining D&D demons to his players and called a government tips hotline. Thirty-odd years later Derek has lived most of his life in Camp Sunshine, the Laundry's magical Gitmo for Elder God cultists. As a trusty/"safe" inmate, he produces the camp newsletter and uses his postal privileges to run a play-by-mail RPG. One day, two pieces of news cross Derek's desk: the camp is going to be closed down and rebuilt as a real prison, and a games convention is coming to the nearest town.

Camp Sunshine is officially escape-proof, but Derek has had a foolproof escape plan socked away for the past decade. He hasn't used it because until now he's never had anywhere to escape to. But now he's facing the demolition of his only home, and he has a destination in mind. Come hell or high water, Derek intends to go to his first ever convention. Little does he realize that hell is also going to the convention ...

I began writing "A Conventional Boy" in 2009, thinking it'd make a nice short story. It went on hold for far too long (it was originally meant to come out before "The Nightmare Stacks"!) but instead it lingered ... then when I got back to work on it, the story ran away and grew into a short novel in its own right. As it's rather shorter than the other Laundry novels (although twice as long as, say, "Equoid") the book also includes "Overtime" and "Escape from Yokai Land", both Laundry Files novelettes about Bob, and an afterword providing some background on the 1980s Satanic D&D Panic for readers who don't remember it (which sadly means anyone much younger than myself).

Questions? Ask me anything!

Krebs on SecurityRecent ‘MFA Bombing’ Attacks Targeting Apple Users

Several Apple customers recently reported being targeted in elaborate phishing attacks that involve what appears to be a bug in Apple’s password reset feature. In this scenario, a target’s Apple devices are forced to display dozens of system-level prompts that prevent the devices from being used until the recipient responds “Allow” or “Don’t Allow” to each prompt. Assuming the user manages not to fat-finger the wrong button on the umpteenth password reset request, the scammers will then call the victim while spoofing Apple support in the caller ID, saying the user’s account is under attack and that Apple support needs to “verify” a one-time code.

Some of the many notifications Patel says he received from Apple all at once.

Parth Patel is an entrepreneur who is trying to build a startup in the conversational AI space. On March 23, Patel documented on Twitter/X a recent phishing campaign targeting him that involved what’s known as a “push bombing” or “MFA fatigue” attack, wherein the phishers abuse a feature or weakness of a multi-factor authentication (MFA) system in a way that inundates the target’s device(s) with alerts to approve a password change or login.

“All of my devices started blowing up, my watch, laptop and phone,” Patel told KrebsOnSecurity. “It was like this system notification from Apple to approve [a reset of the account password], but I couldn’t do anything else with my phone. I had to go through and decline like 100-plus notifications.”

Some people confronted with such a deluge may eventually click “Allow” to the incessant password reset prompts — just so they can use their phone again. Others may inadvertently approve one of these prompts, which will also appear on a user’s Apple watch if they have one.

But the attackers in this campaign had an ace up their sleeves: Patel said after denying all of the password reset prompts from Apple, he received a call on his iPhone that said it was from Apple Support (the number displayed was 1-800-275-2273, Apple’s real customer support line).

“I pick up the phone and I’m super suspicious,” Patel recalled. “So I ask them if they can verify some information about me, and after hearing some aggressive typing on his end he gives me all this information about me and it’s totally accurate.”

All of it, that is, except his real name. Patel said when he asked the fake Apple support rep to validate the name they had on file for the Apple account, the caller gave a name that was not his but rather one that Patel has only seen in background reports about him that are for sale at a people-search website called PeopleDataLabs.

Patel said he has worked fairly hard to remove his information from multiple people-search websites, and he found PeopleDataLabs uniquely and consistently listed this inaccurate name as an alias on his consumer profile.

“For some reason, PeopleDataLabs has three profiles that come up when you search for my info, and two of them are mine but one is an elementary school teacher from the midwest,” Patel said. “I asked them to verify my name and they said Anthony.”

Patel said the goal of the voice phishers is to trigger an Apple ID reset code to be sent to the user’s device, which is a text message that includes a one-time password. If the user supplies that one-time code, the attackers can then reset the password on the account and lock the user out. They can also then remotely wipe all of the user’s Apple devices.

THE PHONE NUMBER IS KEY

Chris is a cryptocurrency hedge fund owner who asked that only his first name be used so as not to paint a bigger target on himself. Chris told KrebsOnSecurity he experienced a remarkably similar phishing attempt in late February.

“The first alert I got I hit ‘Don’t Allow’, but then right after that I got like 30 more notifications in a row,” Chris said. “I figured maybe I sat on my phone weird, or was accidentally pushing some button that was causing these, and so I just denied them all.”

Chris says the attackers persisted hitting his devices with the reset notifications for several days after that, and at one point he received a call on his iPhone that said it was from Apple support.

“I said I would call them back and hung up,” Chris said, demonstrating the proper response to such unbidden solicitations. “When I called back to the real Apple, they couldn’t say whether anyone had been in a support call with me just then. They just said Apple states very clearly that it will never initiate outbound calls to customers — unless the customer requests to be contacted.”

Massively freaking out that someone was trying to hijack his digital life, Chris said he changed his passwords and then went to an Apple store and bought a new iPhone. From there, he created a new Apple iCloud account using a brand new email address.

Chris said he then proceeded to get even more system alerts on his new iPhone and iCloud account — all the while still sitting at the local Apple Genius Bar.

Chris told KrebsOnSecurity his Genius Bar tech was mystified about the source of the alerts, but Chris said he suspects that whatever the phishers are abusing to rapidly generate these Apple system alerts requires knowing the phone number on file for the target’s Apple account. After all, that was the only aspect of Chris’s new iPhone and iCloud account that hadn’t changed.

WATCH OUT!

“Ken” is a security industry veteran who spoke on condition of anonymity. Ken said he first began receiving these unsolicited system alerts on his Apple devices earlier this year, but that he has not received any phony Apple support calls as others have reported.

“This recently happened to me in the middle of the night at 12:30 a.m.,” Ken said. “And even though I have my Apple watch set to remain quiet during the time I’m usually sleeping at night, it woke me up with one of these alerts. Thank god I didn’t press ‘Allow,’ which was the first option shown on my watch. I had to scroll the watch wheel to see and press the ‘Don’t Allow’ button.”

Ken shared this photo he took of an alert on his watch that woke him up at 12:30 a.m. Ken said he had to scroll on the watch face to see the “Don’t Allow” button.

Ken didn’t know it when all this was happening (and it’s not at all obvious from the Apple prompts), but clicking “Allow” would not have allowed the attackers to change Ken’s password. Rather, clicking “Allow” displays a six digit PIN that must be entered on Ken’s device — allowing Ken to change his password. It appears that these rapid password reset prompts are being used to make a subsequent inbound phone call spoofing Apple more believable.

Ken said he contacted the real Apple support and was eventually escalated to a senior Apple engineer. The engineer assured Ken that turning on an Apple Recovery Key for his account would stop the notifications once and for all.

A recovery key is an optional security feature that Apple says “helps improve the security of your Apple ID account.” It is a randomly generated 28-character code, and when you enable a recovery key it is supposed to disable Apple’s standard account recovery process. The thing is, enabling it is not a simple process, and if you ever lose that code in addition to all of your Apple devices you will be permanently locked out.

Ken said he enabled a recovery key for his account as instructed, but that it hasn’t stopped the unbidden system alerts from appearing on all of his devices every few days.

KrebsOnSecurity tested Ken’s experience, and can confirm that enabling a recovery key does nothing to stop a password reset prompt from being sent to associated Apple devices. Visiting Apple’s “forgot password” page — https://iforgot.apple.com — asks for an email address and for the visitor to solve a CAPTCHA.

After that, the page will display the last two digits of the phone number tied to the Apple account. Filling in the missing digits and hitting submit on that form will send a system alert, whether or not the user has enabled an Apple Recovery Key.

The password reset page at iforgot.apple.com.

RATE LIMITS

What sanely designed authentication system would send dozens of requests for a password change in the span of a few moments, when the first requests haven’t even been acted on by the user? Could this be the result of a bug in Apple’s systems?

Apple has not yet responded to requests for comment.

Throughout 2022, a criminal hacking group known as LAPSUS$ used MFA bombing to great effect in intrusions at Cisco, Microsoft and Uber. In response, Microsoft began enforcing “MFA number matching,” a feature that displays a series of numbers to a user attempting to log in with their credentials. These numbers must then be entered into the account owner’s Microsoft authenticator app on their mobile device to verify they are logging into the account.

Kishan Bagaria is a hobbyist security researcher and engineer who founded the website texts.com (now owned by Automattic), and he’s convinced Apple has a problem on its end. In August 2019, Bagaria reported to Apple a bug that allowed an exploit he dubbed “AirDoS” because it could be used to let an attacker infinitely spam all nearby iOS devices with a system-level prompt to share a file via AirDrop — a file-sharing capability built into Apple products.

Apple fixed that bug nearly four months later in December 2019, thanking Bagaria in the associated security bulletin. Bagaria said Apple’s fix was to add stricter rate limiting on AirDrop requests, and he suspects that someone has figured out a way to bypass Apple’s rate limit on how many of these password reset requests can be sent in a given timeframe.

“I think this could be a legit Apple rate limit bug that should be reported,” Bagaria said.

WHAT CAN YOU DO?

Apple seems to require a phone number to be on file for your account, but after you’ve set up the account it doesn’t have to be a mobile phone number. KrebsOnSecurity’s testing shows Apple will accept a VOIP number (like Google Voice). So, changing your account phone number to a VOIP number that isn’t widely known would be one mitigation here.

One caveat with the VOIP number idea: Unless you include a real mobile number, Apple’s iMessage and Facetime applications will be disabled for that device. This might be a bonus for those concerned about reducing the overall attack surface of their Apple devices, since zero-click zero-days in these applications have repeatedly been used by spyware purveyors.

Also, it appears Apple’s password reset system will accept and respect email aliases. Adding a “+” character after the username portion of your email address — followed by a notation specific to the site you’re signing up at — lets you create an infinite number of unique email addresses tied to the same account.

For instance, if I were signing up at example.com, I might give my email address as krebsonsecurity+example@gmail.com. Then, I simply go back to my inbox and create a corresponding folder called “Example,” along with a new filter that sends any email addressed to that alias to the Example folder. In this case, however, perhaps a less obvious alias than “+apple” would be advisable.

Update, March 27, 5:06 p.m. ET: Added perspective on Ken’s experience. Also included a What Can You Do? section.

Worse Than FailureCodeSOD: Exceptional String Comparisons

As a general rule, I will actually prefer code that is verbose and clear over code that is concise but makes me think. I really don't like to think if I don't have to.

Of course, there's the class of WTF code that is verbose, unclear and also really bad, which Thomas sends us today:

Private Shared Function Mailid_compare(ByVal queryEmail As String, ByVal FnsEmail As String) As Boolean
    Try
        Dim str1 As String = queryEmail
        Dim str2 As String = FnsEmail
        If String.Compare(str1, str2) = 0 Then
            Return True
        Else
            Return False
        End If
    Catch ex As Exception
    End Try
End Function

This VB .Net function could easily be replaced with String.Compare(queryEmail, FnsEmail) = 0. Of course, that'd be a little unclear, and since we only care about equality, we could just use String.Equals(queryEmail, FnsEmail)- which is honestly clearer than having a method called Mailid_compare, which doesn't actually tell me anything useful about what it does.

Speaking of not doing anything useful, there are a few other pieces of bloat in this function.

First, we plop our input parameters into the variables str1 and str2, which does a great job of making what's happening here less clear. Then we have the traditional "use an if statement to return either true or false".

But the real special magic in this one is the Try/Catch. This is a pretty bog standard useless exception handler. No operation in this function throws an exception- String.Compare will even happily accept null references. Even if somehow an exception was thrown, we wouldn't do anything about it. As a bonus, we'd return a null in that case, throwing downstream code into a bad state.

What's notable in this case, is that every function was implemented this way. Every function had a Try/Catch that frequently did nothing, or rarely (usually when they copy/pasted from StackOverflow) printed out the error message, but otherwise just let the program continue.

And that's the real WTF: a codebase polluted with so many do-nothing exception handlers that exceptions become absolutely worthless. Errors in the program let it continue, and the users experience bizarre, inconsistent states as the application fails silently.

Or, to put it another way: this is the .NET equivalent of classic VB's On Error Resume Next, which is exactly the kind of terrible idea it sounds like.
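For completeness, if a drop-in replacement were wanted rather than inlining the comparison at each call site, a minimal sketch (keeping the original, admittedly unhelpful, name) could be this; the static String.Equals overload copes with Nothing on either side without throwing:

Private Shared Function Mailid_compare(ByVal queryEmail As String, ByVal FnsEmail As String) As Boolean
    ' Direct comparison: no temporaries, no If/Else, no swallowed exceptions.
    Return String.Equals(queryEmail, FnsEmail)
End Function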

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 Tomorrows‘Lineartrope 04.96’

Author: David Dumouriez I thought I was ready. “I was on the precipice, looking down.” Internal count of five. A long five. “I was on the precipice, looking down.” Count ten. “I was on the precipice, looking down.” I noticed a brief, impatient nod. The nod meant ‘again’. I thought. “I was on the precipice […]

The post ‘Lineartrope 04.96’ appeared first on 365tomorrows.

,

Planet DebianJonathan Dowland: a bug a day

I recently became a maintainer of/committer to IkiWiki, the software that powers my site. I also took over maintenance of the Debian package. Last week I cut a new upstream point release, 3.20200202.4, and a corresponding Debian package upload, consisting only of a handful of low-hanging-fruit patches from other people, largely to exercise both processes.

I've been discussing IkiWiki's maintenance situation with some other users for a couple of years now. I've also weighed up the pros and cons of moving to a different static-site-generator (a term that describes what IkiWiki is, but was actually coined more recently). It turns out IkiWiki is exceptionally flexible and powerful: I estimate the cost of moving to something modern(er) and fashionable such as Jekyll, Hugo or Hakyll as unreasonably high, in part because they are surprisingly rigid and inflexible in some key places.

Like most mature software, IkiWiki has a bug backlog. Over the past couple of weeks, as a sort-of "palate cleanser" around work pieces, I've tried to triage one IkiWiki bug per day: either upstream or in the Debian Bug Tracker. This is a really lightweight task: it can be as simple as "find a bug reported in Debian, copy it upstream, tag it upstream, mark it forwarded", perhaps taking 5-10 minutes.

As I go, I often stumble across something that has already been fixed but not recorded as such.

Despite this minimal level of work, I'm quite satisfied with the cumulative progress. It's notable to me how much my perspective has shifted by becoming a maintainer: I'm considering everything through a different lens to that of being just one user.

Eventually I will put some time aside to scratch some of my own itches (html5 by default; support dark mode; duckduckgo plugin; use the details tag...) but for now this minimal exercise is of broader use.

Worse Than FailureCodeSOD: Contains a Substring

One of the perks of open source software is that large companies can and will patch it for their needs. Which means we can see what a particular large electronics vendor did with a video player application.

For example, they needed to see if the URL pointed to a stream protected by WideVine, Vudu, or Netflix. They can do this by checking if the filename contains a certain substring. Let's see how they accomplished this…

int get_special_protocol_type(char *filename, char *name)
{
	int type = 0;
	int fWidevine = 0;
	int j;
    	char proto_str[2800] = {'\0', };
      	if (!strcmp("http", name))
       {
		strcpy(proto_str, filename);
		for(j=0;proto_str[j] != '\0';j++)
		{
			if(proto_str[j] == '=')
			{
				j++;
				if(proto_str[j] == 'W')
				{
					j++;
					if(proto_str[j] == 'V')
					{
						type = Widevine_PROTOCOL;
					}
				}
			}
		}
		if (type == 0)
		{
			for(j=0;proto_str[j] != '\0';j++)
			{
				if(proto_str[j] == '=')
				{
					j++;
					if(proto_str[j] == 'V')
					{
						j++;
						if(proto_str[j] == 'U')
						{
							j++;
							if(proto_str[j] == 'D')
							{
								j++;
								if(proto_str[j] == 'U')
								{
									type = VUDU_PROTOCOL;
								}
							}
						}
					}
				}
			}
		}
 		if (type == 0)
      		{
			for(j=0;proto_str[j] != '\0';j++)
			{
				if(proto_str[j] == '=')
				{
					j++;
					if(proto_str[j] == 'N')
					{
						j++;
						if(proto_str[j] == 'F')
						{
							j++;
							if(proto_str[j] == 'L')
							{
								j++;
								if(proto_str[j] == 'X')
								{
									type = Netflix_PROTOCOL;
								}
							}
						}
					}
				}
			}
		}
      	}
	return type;
}

For starters, there's been a lot of discussion around the importance of memory safe languages lately. I would argue that in C/C++ it's not actually hard to write memory safe code, it's just very easy not to. And this is an example- everything in here is a buffer overrun waiting to happen. The core problem is that we're passing pure pointers to char, and relying on the strings being properly null terminated. So we're using the old, unsafe string functions and never checking against the bounds of proto_str to make sure we haven't run off the edge. A malicious input could easily run off the end of the string.

But also, let's talk about that string comparison. They don't even just loop across a pair of strings character by character, they use this bizarre set of nested ifs with incrementing loop variables. Given that they use strcmp, I think we can safely assume the C standard library exists for their target, which means strstr was right there.

It's also worth noting that, since this is a read-only operation, the strcpy is not necessary, though we're in a rough place since they're passing a pointer to char and not including the size, which gets us back to the whole "unsafe string operations" problem.
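Putting those observations together, a sketch of the same check built on the standard library might look like the following. The protocol constants and the "=WV" / "=VUDU" / "=NFLX" markers are taken from the original code; everything else is just one reasonable way to write it:

#include <string.h>

int get_special_protocol_type(const char *filename, const char *name)
{
	/* Only http URLs carry these markers, per the original check. */
	if (strcmp("http", name) != 0)
		return 0;

	/* strstr() walks the caller's null-terminated string directly,
	 * so there is no copy into a fixed-size buffer to overflow. */
	if (strstr(filename, "=WV") != NULL)
		return Widevine_PROTOCOL;
	if (strstr(filename, "=VUDU") != NULL)
		return VUDU_PROTOCOL;
	if (strstr(filename, "=NFLX") != NULL)
		return Netflix_PROTOCOL;

	return 0;
}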

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsSecond-Hand Blades

Author: Julian Miles, Staff Writer “The only things I work with are killers and currency. You don’t look dangerous, and I don’t see you waving cash.” The man brushes imaginary specks from his lapels with the hand not holding a dagger, then gives a little grin as he replies. “Appearances can be- aack!” The man […]

The post Second-Hand Blades appeared first on 365tomorrows.

Planet DebianValhalla's Things: Piecepack and postcard boxes

Posted on March 25, 2024
Tags: madeof:bits, craft:cartonnage

This article has been originally posted on November 4, 2023, and has been updated (at the bottom) since.

An open cardboard box, showing the lining in paper printed with a medieval music manuscript.

Thanks to All Saints’ Day, I’ve just had a 5 days weekend. One of those days I woke up and decided I absolutely needed a cartonnage box for the cardboard and linocut piecepack I’ve been working on for quite some time.

I started drawing a plan with measurements before breakfast, then decided to change some important details, restarted from scratch, did a quick dig through the bookbinding materials and settled on 2 mm cardboard for the structure, black fabric-like paper for the outside and a scrap of paper with a manuscript print for the inside.

Then we had the only day with no rain among the five, so some time was spent doing things outside, but on the next day I quickly finished two boxes, at two different heights.

The weather situation also meant that while I managed to take passable pictures of the first stages of the box making in natural light, the last few stages required some creative artificial lighting, even if it wasn’t that late in the evening. I need to build1 myself a light box.

And then I decided that since they are C6 sized, they also work well for postcards or for other A6 pieces of paper, so I will probably need to make another one when the piecepack set is finally finished.

The original plan was to use a linocut of the piecepack suites as the front cover; I don’t currently have one ready, but will make it while printing the rest of the piecepack set. One day :D

an open rectangular cardboard box, with a plastic piecepack set in it.

One of the boxes was temporarily used for the plastic piecepack I got with the book, and that one works well, but since it’s a set with standard suites I think I will want to make another box, using some of the paper with fleur-de-lis that I saw in the stash.

I’ve also started to write detailed instructions: I will publish them as soon as they are ready, and then either update this post or mention them in an additional post, if I have already made more boxes in the meantime.


Update 2024-03-25: the instructions have been published on my craft patterns website


  1. you don’t really expect me to buy one, right? :D↩︎


Planet DebianNiels Thykier: debputy v0.1.21

Earlier today, I released debputy version 0.1.21 to Debian unstable. In this blog post, I will highlight some of the new features.

Package boilerplate reduction with automatic relationship substvar

Last month, I started a discussion on rethinking how we do relationship substvars such as ${misc:Depends}. These generally end up being boilerplate runes in the form of Depends: ${misc:Depends}, ${shlibs:Depends}, where you as the packager have to remember exactly which runes apply to your package.

My proposed solution was to automatically apply these substvars and this feature has now been implemented in debputy. It is also combined with the feature where essential packages should use Pre-Depends by default for dpkg-shlibdeps related dependencies.

I am quite excited about this feature, because I noticed with libcleri that we are now down to 3-5 fields for defining a simple library package. Especially since most C library packages are trivial enough that debputy can auto-derive them to be Multi-Arch: same.

As an example, the libcleric1 package is down to 3 fields (Package, Architecture, Description), with Section and Priority being inherited from the Source stanza. I have submitted an MR to showcase the boilerplate reduction at https://salsa.debian.org/siridb-team/libcleri/-/merge_requests/3.

The removal of libcleric1 (= ${binary:Version}) in that MR relies on another existing feature where debputy can auto-derive a dependency between an arch:any -dev package and the library package based on the .so symlink for the shared library. The arch:any restriction comes from the fact that arch:all and arch:any packages are not built together, so debputy cannot reliably see across the package boundaries during the build (and therefore refuses to do so at all).

Packages that have already migrated to debputy can use debputy migrate-from-dh to detect any unnecessary relationship substitution variables in case you want to clean up. The removal of Multi-Arch: same and intra-source dependencies must be done manually, and should only be done when you have validated that it is safe and sane to do. I was willing to do it for the show-case MR, but I am less confident that I would bother with these for existing packages in general.

Note: I summarized the discussion of the automatic relationship substvar feature earlier this month in https://lists.debian.org/debian-devel/2024/03/msg00030.html for those who want more details.

PS: The automatic relationship substvars feature will also appear in debhelper as a part of compat 14.

Language Server (LSP) and Linting

I have long been frustrated by our poor editor support for Debian packaging files. To this end, I started working on a Language Server (LSP) feature in debputy that would cover some of our standard Debian packaging files. This release includes the first version of said language server, which covers the following files:

  • debian/control
  • debian/copyright (the machine readable variant)
  • debian/changelog (mostly just spelling)
  • debian/rules
  • debian/debputy.manifest (syntax checks only; use debputy check-manifest for the full validation for now)

Most of the effort has been spent on the Deb822 based files such as debian/control, which comes with diagnostics, quickfixes, spellchecking (but only for relevant fields!), and completion suggestions.

Since not everyone has an LSP-capable editor, and because sometimes you just want diagnostics without having to open each file in an editor, there is also a batch version of the diagnostics via debputy lint. Please see debputy(1) for how debputy lint compares with lintian if you are curious about which tool to use at what time.

To help you get started, there is now a debputy lsp editor-config command that can provide you with the relevant editor config glue. At the moment, emacs (via eglot) and vim with vim-youcompleteme are supported.

For those that followed the previous blog posts on writing the language server, I would like to point out that the command line for running the language server has changed to debputy lsp server, and you no longer have to tell it which format it is. I have decided to make the language server a "polyglot" server for now, which I will hopefully not regret... Time will tell. :)

Anyhow, to get started, you will want:

$ apt satisfy 'dh-debputy (>= 0.1.21~), python3-pygls'
# Optionally, for spellchecking
$ apt install python3-hunspell hunspell-en-us
# For emacs integration
$ apt install elpa-dpkg-dev-el markdown-mode-el
# For vim integration via vim-youcompleteme
$ apt install vim-youcompleteme

Specifically for emacs, I also learned two things after the upload. First, you can auto-activate eglot via eglot-ensure. This feature interacts badly with imenu on debian/changelog for reasons I do not understand (causing a several-second start-up delay until something times out), but it works fine for the other formats. Oddly enough, opening a changelog file and then activating eglot does not trigger this issue at all. In the next version, the editor config for emacs will auto-activate eglot on all files except debian/changelog.

The second thing is that if you install elpa-markdown-mode, emacs will accept and process markdown in the hover documentation provided by the language server. Accordingly, the editor config for emacs will also mention this package from the next version on.

Finally, on a related note, Jelmer and I have been looking at moving some of this logic into a new package called debpkg-metadata. The point being to support easier reuse of linting and LSP related metadata - like pulling a list of known fields for debian/control or sharing logic between lintian-brush and debputy.

Minimal integration mode for Rules-Requires-Root

One of the original motivators for starting debputy was to be able to get rid of fakeroot in our build process. While this is possible, debputy currently does not support most of the complex packaging features such as maintscripts and debconf. Unfortunately, the kind of packages that need fakeroot for static ownership tend to also require very complex packaging features.

To bridge this gap, the new version of debputy supports a very minimal integration with dh via the dh-sequence-zz-debputy-rrr sequence. This integration mode keeps the vast majority of the debhelper sequence in place, meaning most dh add-ons will continue to work with dh-sequence-zz-debputy-rrr. The sequence only replaces the following commands:

  • dh_fixperms
  • dh_gencontrol
  • dh_md5sums
  • dh_builddeb

The installations feature of the manifest will be disabled in this integration mode to avoid feature interactions with debhelper tools that expect debian/<pkg> to contain the materialized package.

On a related note, the debputy migrate-from-dh command now supports a --migration-target option, so you can choose the desired level of integration without doing code changes. The command will attempt to auto-detect the desired integration from existing package features such as a build-dependency on a relevant dh sequence, so you do not have to remember this new option every time once the migration has started. :)

Planet DebianMarco d'Itri: CISPE's call for new regulations on VMware

A few days ago CISPE, a trade association of European cloud providers, published a press release complaining about the new VMware licensing scheme and asking for regulators and legislators to intervene.

But VMware does not have a monopoly on virtualization software: I think that asking regulators to interfere is unnecessary and unwise, unless, of course, they wish to question the entire foundations of copyright. Which, on the other hand, could be an intriguing position that I would support...

I believe that over-reliance on a single supplier is a typical enterprise risk: in the past decade some companies have invested in developing their own virtualization infrastructure using free software, while others have decided to rely entirely on a single proprietary software vendor.

My only big concern is that many public sector organizations will continue to use VMware and pay the huge fees designed by Broadcom to extract the maximum amount of money from their customers. However, it is ultimately the citizens who pay these bills, and blaming the evil US corporation is a great way to avoid taking responsibility for these choices.

"Several CISPE members have stated that without the ability to license and use VMware products they will quickly go bankrupt and out of business."

Insert here the Jeremy Clarkson "Oh no! Anyway..." meme.

365 TomorrowsVerbatim Thirst

Author: Gabriel Walker Land In every direction there was nothing but baked dirt, tumbleweeds, and flat death. The blazing sun weighed down on me. I didn’t know which way to walk, and I didn’t know why. How I’d gotten there was long since forgotten. Being lost wasn’t the pressing problem. No, the immediate threat was […]

The post Verbatim Thirst appeared first on 365tomorrows.

Planet DebianJacob Adams: Regular Reboots

Uptime is often considered a measure of system reliability, an indication that the running software is stable and can be counted on.

However, this hides the insidious build-up of state throughout the system as it runs, the slow drift from the expected to the strange.

As Nolan Lawson highlights in an excellent post entitled Programmers are bad at managing state, state is the most challenging part of programming. It’s why “did you try turning it off and on again” is a classic tech support response to any problem.

In addition to the problem of state, installing regular updates periodically requires a reboot, even if the rest of the process is automated through a tool like unattended-upgrades.

For my personal homelab, I manage a handful of different machines running various services.

I used to just schedule a day to update and reboot all of them, but that got very tedious very quickly.

I then moved the reboot to a cronjob, and then recently to a systemd timer and service.

I figure that laying out my path to better management of this might help others, and will almost certainly lead to someone telling me a better way to do this.

UPDATE: Turns out there’s another option for better systemd cron integration. See systemd-cron below.

Stage One: Reboot Cron

The first, and easiest, approach is a simple cron job. Just adding the following line to /var/spool/cron/crontabs/root1 is enough to get your machine to reboot once a month2 on the 6th at 8:00 AM3:

0 8 6 * * reboot

I had this configured for many years and it works well. But you have no indication as to whether it succeeds except for checking your uptime regularly yourself.

Stage Two: Reboot systemd Timer

The next evolution of this approach for me was to use a systemd timer. I created a regular-reboot.timer with the following contents:

[Unit]
Description=Reboot on a Regular Basis

[Timer]
Unit=regular-reboot.service
OnBootSec=1month

[Install]
WantedBy=timers.target

This timer will trigger the regular-reboot.service systemd unit when the system reaches one month of uptime.

I’ve seen some guides to creating timer units recommend adding a Wants=regular-reboot.service to the [Unit] section, but this has the consequence of running that service every time it starts the timer. In this case that will just reboot your system on startup which is not what you want.

Care needs to be taken to use the OnBootSec directive instead of OnCalendar or any of the other time specifications, as your system could reboot, discover it’s still within the expected window, and reboot again. With OnBootSec your system will not have that problem. Technically, this same problem could have occurred with the cronjob approach, but in practice it never did, as the systems took long enough to come back up that they were no longer within the expected window for the job.

I then added the regular-reboot.service:

[Unit]
Description=Reboot on a Regular Basis
Wants=regular-reboot.timer

[Service]
Type=oneshot
ExecStart=shutdown -r 02:45

You’ll note that this service is actually scheduling a specific reboot time via the shutdown command instead of just immediately rebooting. This is a bit of a hack, needed because I can’t control exactly when the timer runs when using OnBootSec. This way different systems have different reboot times, so that everything doesn’t just reboot and fail all at once. Were something to fail to come back up, I would have some time to fix it, as each machine has a few hours between scheduled reboots.

Once you have both files in place, you’ll simply need to reload the configuration and then enable and start the timer unit:

systemctl daemon-reload
systemctl enable --now regular-reboot.timer

You can then check when it will fire next:

# systemctl status regular-reboot.timer
● regular-reboot.timer - Reboot on a Regular Basis
     Loaded: loaded (/etc/systemd/system/regular-reboot.timer; enabled; preset: enabled)
     Active: active (waiting) since Wed 2024-03-13 01:54:52 EDT; 1 week 4 days ago
    Trigger: Fri 2024-04-12 12:24:42 EDT; 2 weeks 4 days left
   Triggers: ● regular-reboot.service

Mar 13 01:54:52 dorfl systemd[1]: Started regular-reboot.timer - Reboot on a Regular Basis.

Sidenote: Replacing all Cron Jobs with systemd Timers

More generally, I’ve now replaced all cronjobs on my personal systems with systemd timer units, mostly because I can now actually track failures via prometheus-node-exporter. There are plenty of ways to hack in cron support to the node exporter, but just moving to systemd units provides both support for tracking failure and logging, both of which make system administration much easier when things inevitably go wrong.

systemd-cron

An alternative to converting everything by hand, if you happen to have a lot of cronjobs is systemd-cron. It will make each crontab and /etc/cron.* directory into automatic service and timer units.

Thanks to Alexandre Detiste for letting me know about this project. I have few enough cron jobs that I’ve already converted, but for anyone looking at a large number of jobs to convert you’ll want to check it out!

Stage Three: Monitor that it’s working

The final step here is to confirm that these units actually work, beyond just firing regularly.

I now have the following rule in my prometheus-alertmanager rules:

  - alert: UptimeTooHigh
    expr: (time() - node_boot_time_seconds{job="node"}) / 86400 > 35
    annotations:
      summary: "Instance  Has Been Up Too Long!"
      description: "Instance  Has Been Up Too Long!"

This will trigger an alert anytime that I have a machine up for more than 35 days. This actually helped me track down one machine that I had forgotten to set up this new unit on4.

Not everything needs to scale

Is It Worth The Time

One of the most common fallacies programmers fall into is that we will jump to automating a solution before we stop and figure out how much time it would even save.

In taking a slow improvement route to solve this problem for myself, I’ve managed not to invest too much time5 in worrying about this but also achieved a meaningful improvement beyond my first approach of doing it all by hand.

  1. You could also add a line to /etc/crontab or drop a script into /etc/cron.monthly depending on your system. 

  2. Why once a month? Mostly to avoid regular disruptions, but still be reasonably timely on updates. 

  3. If you’re looking to understand the cron time format I recommend crontab guru

  4. In the long term I really should set up something like ansible to automatically push fleetwide changes like this but with fewer machines than fingers this seems like overkill. 

  5. Of course by now writing about it, I’ve probably doubled the amount of time I’ve spent thinking about this topic but oh well… 


Planet DebianDirk Eddelbuettel: littler 0.3.20 on CRAN: Moar Features!

max-heap image

The twenty-first release of littler as a CRAN package landed on CRAN just now, following in the now eighteen-year history (!!) of a package started by Jeff in 2006, and joined by me a few weeks later.

littler is the first command-line interface for R as it predates Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently and still starts faster. It also always loaded the methods package, which Rscript only began to do in recent years.

littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet – the build system could be extended – see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH. A few examples are highlighted at the Github repo, as well as in the examples vignette.

This release contains another fair number of small changes and improvements to some of the scripts I use daily to build or test packages, adds a new front-end ciw.r for the recently-released ciw package offering a ‘CRAN Incoming Watcher’, a new helper installDeps2.r (extending installDeps.r), a new doi-to-bib converter, allows a different temporary directory setup I find helpful, deals with one corner deployment use, and more.

The full change description follows.

Changes in littler version 0.3.20 (2024-03-23)

  • Changes in examples scripts

    • New (dependency-free) helper installDeps2.r to install dependencies

    • Scripts rcc.r, tt.r, tttf.r, tttlr.r use env argument -S to set -t to r

    • tt.r can now fill in inst/tinytest if it is present

    • New script ciw.r wrapping new package ciw

    • tttf.r can now use devtools and its loadall

    • New script doi2bib.r to call the DOI converter REST service (following a skeet by Richard McElreath)

  • Changes in package

    • The CI setup uses checkout@v4 and the r-ci-setup action

    • The Suggests: is a little tighter as we do not list all packages optionally used in the examples (as R does not check for it either)

    • The package load message can account for the rare build of R under a different architecture (Berwin Turlach in #117 closing #116)

    • In non-vanilla mode, the temporary directory initialization is re-run, allowing for a non-standard temp dir via config settings

My CRANberries service provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page, and also on the package docs website. The code is available via the GitHub repo, from tarballs and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as (in a day or two) Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianBits from Debian: New Debian Developers and Maintainers (January and February 2024)

The following contributors got their Debian Developer accounts in the last two months:

  • Carles Pina i Estany (cpina)
  • Dave Hibberd (hibby)
  • Soren Stoutner (soren)
  • Daniel Gröber (dxld)
  • Jeremy Sowden (azazel)
  • Ricardo Ribalda Delgado (ribalda)

The following contributors were added as Debian Maintainers in the last two months:

  • Joachim Bauch
  • Ananthu C V
  • Francesco Ballarin
  • Yogeswaran Umasankar
  • Kienan Stewart

Congratulations!

Planet DebianKentaro Hayashi: How about allocating more buildd resource for armel and armhf?

This article is cross-posted from grow-your-ideas. It is just an idea.

salsa.debian.org

The problem

According to Developer Machines [1], current buildd machines are like this:

  • armel: 4 buildd (4 for arm64/armhf/armel)
  • armhf: 7 buildd (4 for arm64/armhf/armel and 3 for armhf only)

[1] https://db.debian.org/machines.cgi

In contrast to other buildd architectures, these are quite few instances, and it seems to cause a shortage of buildd resources. (e.g. during a mass transition, give-back turnaround time becomes longer and longer.)

Actual situation

As you know, during the 64-bit time_t transition, many packages need to be built, but it seems that +b1 or +bN builds have become slower. (I've hit BD-Uninstallable several times because of missing dependency rebuilds.)

ref. https://qa.debian.org/dose/debcheck/unstable_main/index.html

Expected situation

Allocate more buildd resources for armel and armhf.

It is just an idea, but how about assigning some buildd as armel/armhf buildd?

The buildd machines mentioned above are currently used only for arm64.

Maybe there is some technical reason they are not suitable as armel/armhf buildds, but I don't know yet.

2024/03/24 UPDATE: arm-arm-01, arm-arm-03 and arm-arm-04 have already been assigned as armel/armhf buildds, so this is an invalid proposal. See https://buildd.debian.org/status/architecture.php?a=armhf&suite=sid&buildd=buildd_arm64-arm-arm-01, https://buildd.debian.org/status/architecture.php?a=armhf&suite=sid&buildd=buildd_arm64-arm-arm-03, https://buildd.debian.org/status/architecture.php?a=armhf&suite=sid&buildd=buildd_arm64-arm-arm-04

Additional information

  • arm64: 10 buildd (4 for arm64/armhf/armel, 6 for arm64 only)
  • amd64: 7 buildd (5 for amd64/i386 buildd)
  • riscv64: 9 buildd

Planet DebianErich Schubert: Do not get Amazon Kids+ or a Fire HD Kids

The Amazon Kids “parental controls” are extremely insufficient, and I strongly advise against getting any of the Amazon Kids series.

The initial premise (and some older reviews) looks okay: you can set some time limits, and you can disable anything that requires buying. With the hardware you get one year of the “Amazon Kids+” subscription, which includes a lot of interesting content such as books and audio, but also some apps. This seemed attractive: some learning apps, some decent games. Sometimes there seems to be a special “Amazon Kids+ edition”, supposedly one that has advertisements reduced/removed and no purchasing.

However, there are so many things just wrong in Amazon Kids:

  • you have no control over the starting page of the tablet.
    it is entirely up to Amazon to decide which contents are for your kid, and of course the page is as poorly made as possible
  • the main content control is a simple age filter
    age appropriateness is decided by Amazon in a non-transparent way
  • there is no preview. All you get is one icon and a truncated title, no description, no screenshots, nothing.
  • time restrictions are on the most basic level possible (daily limit for weekdays and weekends), largely unusable
    no easy way to temporarily increase the limit by 30 minutes, for example. You end up disabling it all the time.
  • there is some “educational goals” thing, but as you do not get to control what is educational and what is not, it is a paperweight
  • no per-app limits
    this is a killer missing feature.
  • removing content is a very manual thing. You have to go through potentially thousands of entries, and disable them one-by-one for every kid.
  • some contents cannot even be removed anymore
    “managed by age filters and cannot be changed” - these appear to be HTML5 and not real apps
  • there is no whitelist!
    That is the really no-go. By using Amazon Kids, you fully expose your kids to the endless rabbit hole of apps.
  • you cannot switch to an alternate UI that has better parental controls
    without sideloading, you cannot even get YouTube Kids (which still is not really good either) on it, as it does not have Google services.
    and even with sideloading, you do not appear to be able to permanently replace the launcher anymore.

And, unfortunately, Amazon Kids is full of poor content for kids, such as “DIY Fashion Star” that I consider to be very dangerous for kids: it is extremely stereotypical, beginning with supposedly “female” color schemes, model-only body types, and judging people by their clothing (and body).

You really thought you could hand-pick suitable apps for your kid on your own?

No, you have to identify and remove such contents one by one, with many clicks each, because there is no whitelisting, and no mass-removal (anymore - apparently Amazon removed the workarounds that previously allowed you to mass remove contents).

Not with Amazon Kids+, which apparently aims at raising the next generation of zombie customers that buy whatever you tell them to buy.

Hence, do not get your kids an Amazon Fire HD tablet!

365 TomorrowsMississauga

Author: Jeremy Nathan Marks I live in Mississauga, a city that builds dozens of downtown towers every year, the finest towers in the world. Each morning, I watch cranes move like long legged birds along the pond of the horizon. They bow and raise their heads, plucking at things which they lift toward the heavens […]

The post Mississauga appeared first on 365tomorrows.

Planet DebianValhalla's Things: Forgotten Yeast Bread - Sourdough Edition

Posted on March 23, 2024
Tags: madeof:atoms, craft:cooking, craft:baking, craft:bread

Yesterday I had planned a pan sbagliato for today, but I also had quite a bit of sourdough to deal with, so instead of mixing a bit of dry yeast at 18:00 and mixing it with some additional flour and water at 21:00, at around maybe 20:00 I substituted:

  • 100 g firm sourdough;
  • 33 g flour;
  • 66 g water.

Then I briefly woke up in the middle of the night and poured the dough on the tray at that time instead of having to wake up before 8:00 in the morning.

Everything else was done as in the original recipe.

The firm sourdough is fed regularly with the same weight of flour and half the weight of water.

Will. do. again.


Krebs on SecurityMozilla Drops Onerep After CEO Admits to Running People-Search Networks

The nonprofit organization that supports the Firefox web browser said today it is winding down its new partnership with Onerep, an identity protection service recently bundled with Firefox that offers to remove users from hundreds of people-search sites. The move comes just days after a report by KrebsOnSecurity forced Onerep’s CEO to admit that he has founded dozens of people-search networks over the years.

Mozilla Monitor. Image Mozilla Monitor Plus video on Youtube.

Mozilla only began bundling Onerep in Firefox last month, when it announced the reputation service would be offered on a subscription basis as part of Mozilla Monitor Plus. Launched in 2018 under the name Firefox Monitor, Mozilla Monitor also checks data from the website Have I Been Pwned? to let users know when their email addresses or password are leaked in data breaches.

On March 14, KrebsOnSecurity published a story showing that Onerep’s Belarusian CEO and founder Dimitri Shelest has launched dozens of people-search services since 2010, including a still-active data broker called Nuwber that sells background reports on people. Onerep and Shelest did not respond to requests for comment on that story.

But on March 21, Shelest released a lengthy statement wherein he admitted to maintaining an ownership stake in Nuwber, a consumer data broker he founded in 2015 — around the same time he launched Onerep.

Shelest maintained that Nuwber has “zero cross-over or information-sharing with Onerep,” and said any other old domains that may be found and associated with his name are no longer being operated by him.

“I get it,” Shelest wrote. “My affiliation with a people search business may look odd from the outside. In truth, if I hadn’t taken that initial path with a deep dive into how people search sites work, Onerep wouldn’t have the best tech and team in the space. Still, I now appreciate that we did not make this more clear in the past and I’m aiming to do better in the future.” The full statement is available here (PDF).

Onerep CEO and founder Dimitri Shelest.

In a statement released today, a spokesperson for Mozilla said it was moving away from Onerep as a service provider in its Monitor Plus product.

“Though customer data was never at risk, the outside financial interests and activities of Onerep’s CEO do not align with our values,” Mozilla wrote. “We’re working now to solidify a transition plan that will provide customers with a seamless experience and will continue to put their interests first.”

KrebsOnSecurity also reported that Shelest’s email address was used circa 2010 by an affiliate of Spamit, a Russian-language organization that paid people to aggressively promote websites hawking male enhancement drugs and generic pharmaceuticals. As noted in the March 14 story, this connection was confirmed by research from multiple graduate students at my alma mater George Mason University.

Shelest denied ever being associated with Spamit. “Between 2010 and 2014, we put up some web pages and optimize them — a widely used SEO practice — and then ran AdSense banners on them,” Shelest said, presumably referring to the dozens of people-search domains KrebsOnSecurity found were connected to his email addresses (dmitrcox@gmail.com and dmitrcox2@gmail.com). “As we progressed and learned more, we saw that a lot of the inquiries coming in were for people.”

Shelest also acknowledged that Onerep pays to run ads “on a handful of data broker sites in very specific circumstances.”

“Our ad is served once someone has manually completed an opt-out form on their own,” Shelest wrote. “The goal is to let them know that if they were exposed on that site, there may be others, and bring awareness to there being a more automated opt-out option, such as Onerep.”

Reached via Twitter/X, HaveIBeenPwned founder Troy Hunt said he knew Mozilla was considering a partnership with Onerep, but that he was previously unaware of the Onerep CEO’s many conflicts of interest.

“I knew Mozilla had this in the works and we’d casually discussed it when talking about Firefox monitor,” Hunt told KrebsOnSecurity. “The point I made to them was the same as I’ve made to various companies wanting to put data broker removal ads on HIBP: removing your data from legally operating services has minimal impact, and you can’t remove it from the outright illegal ones who are doing the genuine damage.”

Playing both sides — creating and spreading the same digital disease that your medicine is designed to treat — may be highly unethical and wrong. But in the United States it’s not against the law. Nor is collecting and selling data on Americans. Privacy experts say the problem is that data brokers, people-search services like Nuwber and Onerep, and online reputation management firms exist because virtually all U.S. states exempt so-called “public” or “government” records from consumer privacy laws.

Those include voting registries, property filings, marriage certificates, motor vehicle records, criminal records, court documents, death records, professional licenses, and bankruptcy filings. Data brokers also can enrich consumer records with additional information, by adding social media data and known associates.

The March 14 story on Onerep was the second in a series of three investigative reports published here this month that examined the data broker and people-search industries, and highlighted the need for more congressional oversight — if not regulation — on consumer data protection and privacy.

On March 8, KrebsOnSecurity published A Close Up Look at the Consumer Data Broker Radaris, which showed that the co-founders of Radaris operate multiple Russian-language dating services and affiliate programs. It also appears many of their businesses have ties to a California marketing firm that works with a Russian state-run media conglomerate currently sanctioned by the U.S. government.

On March 20, KrebsOnSecurity published The Not-So-True People-Search Network from China, which revealed an elaborate web of phony people-search companies and executives designed to conceal the location of people-search affiliates in China who are earning money promoting U.S. based data brokers that sell personal information on Americans.

Worse Than FailureError'd: You Can Say That Again!

In a first for me, this week we got FIVE unique submissions of the exact same bug on LinkedIn. In the spirit of the theme, I dug up a couple of unused submissions of older problems at LinkedIn as well. I guess there are more than the usual number of tech people looking for jobs.

John S., Chris K., Peter C., Brett Nash and Piotr K. all sent in samples of this doublebug. It's a flubstitution AND bad math, together!

minus

 

Latin Steevie is also job hunting and commented "Well, I know tech-writers may face hard times finding a job, so they turn to LinkedIn, which however doesn't seem to help... (the second announcement translates to 'part-time cleaners wanted') As a side bonus, apparently I can't try a search for jobs outside Italy, which is quite odd, to say the least!"

techwr

 

Clever Drew W. found a very minor bug in their handling of non-ASCII names. "I have an emoji in my display name on LinkedIn to thwart scrapers and other such bots. I didn't think it would also thwart LinkedIn!"

emoji

 

Finally, Mark Whybird returns with an internal repetition. "I think maybe I found the cause of some repetitive notifications when I went to Linkedin's notifications preferences page?" I think maybe!

third

 


365 TomorrowsArtificial Gravity

Author: TJ Gadd Anna stared at where the panel had been. Joshua was right; either The Saviour had never left Earth, or Anna had broken into a vault full of sand. She carefully replaced the panel, resetting every rivet. Her long red hair hid her pretty face. When astronomers first identified a comet heading towards […]

The post Artificial Gravity appeared first on 365tomorrows.

Planet DebianReproducible Builds (diffoscope): diffoscope 261 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 261. This version includes the following changes:

[ Chris Lamb ]
* Don't crash if we encounter an .rdb file without an equivalent .rdx file.
  (Closes: #1066991)
* In addition, don't identify Redis database dumps (etc.) as GNU R database
  files based simply on their filename. (Re: #1066991)
* Update copyright years.

You find out more by visiting the project homepage.


Planet DebianIan Jackson: How to use Rust on Debian (and Ubuntu, etc.)

tl;dr: Don’t just apt install rustc cargo. Either do that and make sure to use only Rust libraries from your distro (with the tiresome config runes below); or, just use rustup.

Don’t do the obvious thing; it’s never what you want

Debian ships a Rust compiler, and a large number of Rust libraries.

But if you just do things the obvious “default” way, with apt install rustc cargo, you will end up using Debian’s compiler but upstream libraries, directly and uncurated from crates.io.

This is not what you want. There are about two reasonable things to do, depending on your preferences.

Q. Download and run whatever code from the internet?

The key question is this:

Are you comfortable downloading code, directly from hundreds of upstream Rust package maintainers, and running it?

That’s what cargo does. It’s one of the main things it’s for. Debian’s cargo behaves, in this respect, just like upstream’s. Let me say that again:

Debian’s cargo promiscuously downloads code from crates.io just like upstream cargo.

So if you use Debian’s cargo in the most obvious way, you are still downloading and running all those random libraries. The only thing you’re avoiding downloading is the Rust compiler itself, which is precisely the part that is most carefully maintained, and of least concern.

Debian’s cargo can even download from crates.io when you’re building official Debian source packages written in Rust: if you run dpkg-buildpackage, the downloading is suppressed; but a plain cargo build will try to obtain and use dependencies from the upstream ecosystem. (“Happily”, if you do this, it’s quite likely to bail out early due to version mismatches, before actually downloading anything.)

Option 1: WTF, no I don’t want curl|bash

OK, but then you must limit yourself to libraries available within Debian. Each Debian release provides a curated set. It may or may not be sufficient for your needs. Many capable programs can be written using the packages in Debian.

But any upstream Rust project that you encounter is likely to be a pain to get working, unless their maintainers specifically intend to support this. (This is fairly rare, and the Rust tooling doesn’t make it easy.)

To go with this plan, apt install rustc cargo and put this in your configuration, in $HOME/.cargo/config.toml:

[source.debian-packages]
directory = "/usr/share/cargo/registry"
[source.crates-io]
replace-with = "debian-packages"

This causes cargo to look in /usr/share for dependencies, rather than downloading them from crates.io. You must then install the librust-FOO-dev packages for each of your dependencies, with apt.

This will allow you to write your own program in Rust, and build it using cargo build.

Option 2: Biting the curl|bash bullet

If you want to build software that isn’t specifically targeted at Debian’s Rust you will probably need to use packages from crates.io, not from Debian.

If you’re going to do that, there is little point in not using rustup to get the latest compiler. rustup’s install rune is alarming, but cargo will be doing exactly the same kind of thing, only worse (because it trusts many more people) and more hidden.

So in this case: do run the curl|bash install rune.

Hopefully the Rust project you are trying to build has shipped a Cargo.lock; that contains hashes of all the dependencies that they last used and tested. If you run cargo build --locked, cargo will only use those versions, which are hopefully OK.

And you can run cargo audit to see if there are any reported vulnerabilities or problems. But you’ll have to bootstrap this with cargo install --locked cargo-audit; cargo-audit is from the RUSTSEC folks who do care about these kind of things, so hopefully running their code (and their dependencies) is fine. Note the --locked which is needed because cargo’s default behaviour is wrong.
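Putting the commands from this section together, a typical session might look something like the following sketch (assuming the project ships a Cargo.lock and that you run the commands from the project's top-level directory):

# Build using only the dependency versions pinned in Cargo.lock
cargo build --locked

# Bootstrap the RUSTSEC audit tool, itself with locked dependencies,
# then check the dependency tree for known vulnerabilities
cargo install --locked cargo-audit
cargo audit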

Privilege separation

This approach is rather alarming. For my personal use, I wrote a privsep tool which allows me to run all this upstream Rust code as a separate user.

That tool is nailing-cargo. It’s not particularly well productised, or tested, but it does work for at least one person besides me. You may wish to try it out, or consider alternative arrangements. Bug reports and patches welcome.

OMG what a mess

Indeed. There are a large number of technical and social factors at play.

cargo itself is deeply troubling, both in principle and in detail. I often find myself severely disappointed with its maintainers’ decisions. In mitigation, much of the wider Rust upstream community does take this kind of thing very seriously, and often makes good choices. RUSTSEC is one of the results.

Debian’s technical arrangements for Rust packaging are quite dysfunctional, too: IMO the scheme is based on fundamentally wrong design principles. But, the Debian Rust packaging team is dynamic, constantly working the update treadmills; and the team is generally welcoming and helpful.

Sadly last time I explored the possibility, the Debian Rust Team didn’t have the appetite for more fundamental changes to the workflow (including, for example, changes to dependency version handling). Significant improvements to upstream cargo’s approach seem unlikely, too; we can only hope that eventually someone might manage to supplant it.

edited 2024-03-21 21:49 to add a cut tag




Planet DebianRavi Dwivedi: Thailand Trip

This post is the second and final part of my Malaysia-Thailand trip. Feel free to check out the Malaysia part here if you haven’t already. Kuala Lumpur to Bangkok is around 1500 km by road, so I took a Malaysian Airlines flight to travel to Bangkok. The flight staff at Kuala Lumpur only asked me for a return/onward flight, and Thailand immigration asked a few questions but did not check any documents (obviously they checked and stamped my passport ;)). The currency of Thailand is the Thai baht, and 1 Thai baht = 2.5 Indian Rupees. Thailand time is 1.5 hours ahead of Indian time (for example, if it is 12 noon in India, it will be 13:30 in Thailand).

I landed in Bangkok at around 3 PM local time. Fletcher was in Bangkok at that time, leaving for Pattaya, and we had booked the same hostel. So I took a bus to Pattaya from the airport. The next bus for which tickets were available was at 7 PM, so I took tickets for that one. The bus ticket cost 143 Thai baht. I didn’t buy a SIM at the airport, thinking there must be better deals in the city. As a consequence, there was no way to contact Fletcher through the internet, although I had a few minutes of calling remaining out of my international roaming pack.

A welcome sign at Bangkok's Suvarnabhumi airport.

Bus from Suvarnabhumi Airport to Jomtien Beach in Pattaya.

Our accommodation was near Jomtien beach, so I got off at the last stop, as the bus terminates at the Jomtien beach. Then I decided to walk towards my accommodation. I was using OsmAnd for navigation. However, the place was not marked on OpenStreetMap, and it turned out I missed the street my hostel was on and walked around 1 km further, as I was chasing a similarly named but incorrect hostel on OpenStreetMap. Then I asked for help from two men sitting at a café. One of them said he would help me find the street my hostel was on. So, I walked with him, and he told me he had lived in Thailand for many years, but he is from Kuwait. He also gave me valuable information. For example, he told me about shared hail-and-ride songthaews which run along the Jomtien Second Road and charge 10 baht for any distance on their route. This tip significantly reduced our expenses. Further, he suggested 7-Eleven shops for buying a local SIM. Like Malaysia, Thailand has 24/7 7-Eleven convenience stores, a lot of them not even 100 m apart.

The Kuwaiti person dropped me at the address where my hostel was. I tried searching for a person in charge of that hostel, and soon I realized there was no reception. After asking locals for help for some time, I bumped into Fletcher, who had also come to this address and was searching for the same. After finding a friend, I breathed a sigh of relief. Adjacent to the property, there was a hairdresser shop. We went there and asked about this property. The woman called the owner, and she also told us the required passcodes to go inside. Our accommodation was in a room on the second floor, which required us to enter a passcode to open. We entered the passcode and entered the room. So, we stayed at this hostel which had no reception. Due to this, it took 2 hours to find our room and get in. It reminded me of a difficult experience I had in Albania, where Akshat and I were not able to find our apartment on one of the hottest days and the owner didn’t know our language.

Traveling from the place where the bus dropped me to the hostel, I saw streets filled with bars and massage parlors, which was expected. Prostitutes were everywhere. We went out at night towards the beach and also roamed around in 7-Elevens to buy a SIM card for myself. I got a SIM with 7-day unlimited internet for 399 baht. It turns out that the rates for SIM cards at the airport were not so different from those inside the city.

Road near Jomtien beach in Pattaya

Photo of a songthaew in Pattaya. There are shared songthaews which run along Jomtien Second Road and take 10 baht to go anywhere on the route.

Jomtien Beach in Pattaya.

In terms of speaking English, locals didn’t know English at all in either Pattaya or Bangkok. I normally don’t expect locals to know English in a non-English-speaking country, but the fact that Bangkok is one of the cities most visited by tourists made me expect locals to know some English. Talking to locals is an integral part of travel for me, which I couldn’t do a lot of in Thailand. This aspect is much more important for me than going to touristy places.

So, we were in Pattaya. The next morning, Fletcher and I went to Tiger Park using a shared songthaew. After that, we planned to visit Pattaya Floating Market, which is near the Tiger Park, but we felt the ticket prices were higher than it was worth. Fletcher had to leave for Bangkok that day. I suggested he go to Suvarnabhumi Airport from the Jomtien beach bus terminal (this was the route I took on the last day, in the opposite direction) to avoid traffic congestion inside Bangkok, as he could continue by metro once he reached the airport. From the floating market, we walked in sweltering heat to reach Jomtien beach. I tried asking for a lift and eventually succeeded when a scooty stopped, and surprisingly the rider gave a ride to both of us. He was from Delhi, so maybe that’s the reason he stopped for us. Then we took a songthaew to the bus terminal and, after having lunch, Fletcher left for Bangkok.

A welcome sign at Pattaya Floating market.

This Korean Vegetasty noodles pack was yummy and was available at many 7-Eleven stores.

The next day I went to Bangkok, but Fletcher had already left for Kuala Lumpur. Here I had booked a private room in a hotel (instead of a hostel) for four nights, mainly because of my luggage. This cost 5600 INR for four nights. It was 2 km from the metro station, which I used to walk both ways. In Bangkok, I visited Sukhumvit and Siam by metro. Going to some areas requires crossing the Chao Phraya river. For this, I took the Chao Phraya Express Boat to go to places like Khao San road and Wat Arun. I would recommend taking the boat ride as it had very good views. In Bangkok, I met a person from Pakistan staying in my hotel, so here also I got some company. But by the time I met him, my days were almost over. So, we went to a random restaurant selling Indian food, where we ate a paneer dish with naan; the person running the restaurant was from Myanmar.

Wat Arun temple stamps your hand upon entry

Wat Arun temple

Khao San Road

A food stall at Khao San Road

Chao Phraya Express Boat

For eating, I mainly relied on fruits and convenience stores. Bananas were very tasty. This was the first time I saw banana flesh that was yellow. Mangoes were delicious and pineapples were smaller and more flavorful. I also ate Rose Apple, which I had never had before. I had Chhole Kulche once in Sukhumvit. That was a little expensive, as it cost 164 baht. I also used to buy premix coffee packets from 7-Eleven convenience stores and prepare them inside the stores.

Banana with yellow flesh

Fruits at a stall in Bangkok

Trimmed pineapples from Thailand.

Corn in Bangkok.

A board showing coffee menu at a 7-Eleven store along with rates in Pattaya.

In this section of 7-Eleven, you can buy a premix coffee and mix it with hot water provided at the store to prepare.

My booking from Bangkok to Delhi was on an Air India flight, and they were serving alcohol on the flight. I chose red wine, and this was my first time having alcohol on a flight.

Red wine being served in Air India

Notes

  • In this whole trip spanning two weeks, I did not pay for drinking water (except once in Pattaya, which was 9 baht) or toilets. Bangkok and Kuala Lumpur have plenty of malls where you should find a free-of-cost toilet nearby. For drinking water, I relied mainly on my accommodation providing refillable water for my bottle.

  • Thailand seemed more expensive than Malaysia on average. Malaysia had discounted prices due to the Chinese New Year.

  • I liked Pattaya more than Bangkok. Maybe because Pattaya has a beach and Bangkok doesn’t. Pattaya seemed more lively, and I could meet and talk to a few people there, as opposed to Bangkok.

  • The Chao Phraya River express boat costs 150 baht for a one-day pass, with which you can hop on and off any boat.

David BrinVernor Vinge - the Man with Lamps on His Brows

They said it of Moses - that he had 'lamps on his brows.' That he could peer ahead, through the fog of time. That phrase is applied now to the Prefrontal Lobes, just above the eyes - organs that provide humans our wan powers of foresight. Wan... except in a few cases, when those lamps blaze! Shining ahead of us, illuminating epochs yet to come.


Greg Bear, Gregory Benford, David Brin, Vernor Vinge


Alas, such lights eventually dim. And so, it is with sadness - and deep appreciation of my friend and colleague - that I must report the passing of Vernor Vinge. A titan in the literary genre that explores a limitless range of potential destinies, Vernor enthralled millions with tales of plausible tomorrows, made all the more vivid by his polymath masteries of language, drama, characters and the implications of science. 

 

Accused by some of a grievous sin - that of 'optimism' - Vernor gave us peerless legends that often depicted human success at overcoming problems... those right in front of us... while posing new ones! New dilemmas that may lie just beyond our myopic gaze. 


He would often ask: "What if we succeed? Do you think that will be the end of it?"

 

Vernor's aliens - in classics like A Deepness in the Sky and A Fire Upon the Deep - were fascinating beings, drawing us into different styles of life and paths of consciousness. 

 

His 1981 novella "True Names" was perhaps the first story to present a plausible concept of cyberspace, which would later be central to cyberpunk stories by William Gibson, Neal Stephenson and others. Many innovators of modern industry cite “True Names” as their keystone technological inspiration... though I deem it to have been even more prophetic about the yin-yang tradeoffs of privacy, transparency and accountability.  

 

Another of the many concepts arising in Vernor’s dynamic mind was that of the “Technological Singularity,” a term (and disruptive notion) that has pervaded culture and our thoughts about the looming future.

 

Rainbows End expanded these topics to include the vividly multi-layered "augmented" reality wherein we all will live, in just a few years from now. It was almost certainly the most vividly accurate portrayal of how new generations might apply onrushing cyber-tools, boggling their parents, who will stare at their kids' accomplishments, in wonder. Wonders like a university library building that, during an impromptu rave, stands up and starts to dance!

Vinge was also a long-revered educator and professor of math and computer science at San Diego State University, mentoring generations of practical engineers to also keep a wide stance and open minds.

Vernor had been - for years - under care for progressive Parkinson's, at a very nice place overlooking the Pacific in La Jolla. As reported by his friend and fellow SDSU Prof. John Carroll, his decline had steepened since November, but was relatively comfortable. Up until that point, I had been in contact with Vernor almost weekly, but my friendship pales next to John's devotion, for which I am - (and we all should be) - deeply grateful.

 

I am a bit too wracked, right now, to write much more. Certainly, homages will flow and we will post some on a tribute page. 


I will say that it's a bit daunting now to be a "Killer B" who's still standing. So, let me close with a photo from last October, that's dear to my heart. And those prodigious brow-lamps were still shining brightly!


We spanned a pretty wide spectrum - politically! Yet, we Killer Bs - (Vernor was a full member! And Octavia Butler once guffawed happily when we inducted her) - always shared a deep love of our high art - that of gedankenexperimentation, extrapolation into the undiscovered country ahead. 


If Vernor's readers continue to be inspired - that country might even feature more solutions than problems. And perhaps copious supplies of hope.



========================================================


Addenda & tributes


“What a fine writer he was!”  -- Robert Silverberg.


“A kind man.”  -- Kim Stanley Robinson (The nicest thing anyone could say.)

 

“The good news is that Vernor, and you and many other authors, will have achieved a kind of immortality thanks to your works. My favorite Vernor Vinge book was True Names.” -- Vinton Cerf

 

Vernor was a good guy. -- Pat Cadigan


David Brin 2Remembering Vernor Vinge

Author of the Singularity

It is with sadness – and deep appreciation of my friend and colleague – that I must report the passing of fellow science fiction author – Vernor Vinge. A titan in the literary genre that explores a limitless range of potential destinies, Vernor enthralled millions with tales of plausible tomorrows, made all the more vivid by his polymath masteries of language, drama, characters and the implications of science.

Accused by some of a grievous sin – that of ‘optimism’ – Vernor gave us peerless legends that often depicted human success at overcoming problems… those right in front of us… while posing new ones! New dilemmas that may lie just ahead of our myopic gaze. He would often ask: “What if we succeed? Do you think that will be the end of it?”

Vernor’s aliens – in classic science fiction novels such as A Deepness in the Sky and A Fire Upon the Deep – were fascinating beings, drawing us into different styles of life and paths of consciousness.

His 1981 novella “True Names” was perhaps the first story to present a plausible concept of cyberspace, which would later be central to cyberpunk stories by William Gibson, Neal Stephenson and others. Many innovators of modern industry cite “True Names” as their keystone technological inspiration, though I deem it to have been even more prophetic about the yin-yang tradeoffs of privacy, transparency and accountability.  

Another of the many concepts arising in Vernor’s dynamic mind was that of the “Technological Singularity,” a term (and disruptive notion) that has pervaded culture and our thoughts about the looming future.

Others cite Rainbows End as the most vividly accurate portrayal of how new generations will apply onrushing cyber-tools, boggling their parents, who will stare at their kids’ accomplishments, in wonder. Wonders like a university library building that, during an impromptu rave, stands up and starts to dance!

Vernor had been – for years – under care for progressive Parkinson’s, at a very nice place overlooking the Pacific in La Jolla. As reported by his friend and fellow San Diego State professor John Carroll, his decline had steepened since November, but was relatively comfortable. Up until that point, I had been in contact with Vernor almost weekly, but my friendship pales next to John’s devotion, for which I am – (and we all should be) – deeply grateful.

I am a bit too wracked, right now, to write much more. Certainly, homages will flow and we will post some on a tribute page. I will say that it’s a bit daunting now to be a “Killer B” who’s still standing. So, let me close with a photo that’s dear to my heart.

We spanned a pretty wide spectrum – politically! Yet, we Killer B’s (Vernor was a full member! And Octavia Butler once guffawed happily when we inducted her) always shared a deep love of our high art – that of gedankenexperimentation, extrapolation into the undiscovered country ahead.

And – if Vernor’s readers continue to be inspired – that country might even feature more solutions than problems. And perhaps copious supplies of hope.

Killer B’s at a book signing: Greg Bear, Gregory Benford, David Brin, Vernor Vinge

Cryptogram On Secure Voting Systems

Andrew Appel shepherded a public comment—signed by twenty election cybersecurity experts, including myself—on best practices for ballot marking devices and vote tabulation. It was written for the Pennsylvania legislature, but it’s general in nature.

From the executive summary:

We believe that no system is perfect, with each having trade-offs. Hand-marked and hand-counted ballots remove the uncertainty introduced by use of electronic machinery and the ability of bad actors to exploit electronic vulnerabilities to remotely alter the results. However, some portion of voters mistakenly mark paper ballots in a manner that will not be counted in the way the voter intended, or which even voids the ballot. Hand-counts delay timely reporting of results, and introduce the possibility for human error, bias, or misinterpretation.

Technology introduces the means of efficient tabulation, but also introduces a manifold increase in complexity and sophistication of the process. This places the understanding of the process beyond the average person’s understanding, which can foster distrust. It also opens the door to human or machine error, as well as exploitation by sophisticated and malicious actors.

Rather than assert that each component of the process can be made perfectly secure on its own, we believe the goal of each component of the elections process is to validate every other component.

Consequently, we believe that the hallmarks of a reliable and optimal election process are hand-marked paper ballots, which are optically scanned, separately and securely stored, and rigorously audited after the election but before certification. We recommend state legislators adopt policies consistent with these guiding principles, which are further developed below.

Cryptogram Licensing AI Engineers

The debate over professionalizing software engineers is decades old. (The basic idea is that, like lawyers and architects, there should be some professional licensing requirement for software engineers.) Here’s a law journal article recommending the same idea for AI engineers.

This Article proposes another way: professionalizing AI engineering. Require AI engineers to obtain licenses to build commercial AI products, push them to collaborate on scientifically-supported, domain-specific technical standards, and charge them with policing themselves. This Article’s proposal addresses AI harms at their inception, influencing the very engineering decisions that give rise to them in the first place. By wresting control over information and system design away from companies and handing it to AI engineers, professionalization engenders trustworthy AI by design. Beyond recommending the specific policy solution of professionalization, this Article seeks to shift the discourse on AI away from an emphasis on light-touch, ex post solutions that address already-created products to a greater focus on ex ante controls that precede AI development. We’ve used this playbook before in fields requiring a high level of expertise where a duty to the public welfare must trump business motivations. What if, like doctors, AI engineers also vowed to do no harm?

I have mixed feelings about the idea. I can see the appeal, but it never seemed feasible. I’m not sure it’s feasible today.

Cryptogram Google Pays $10M in Bug Bounties in 2023

BleepingComputer has the details. It’s $2M less than in 2022, but it’s still a lot.

The highest reward for a vulnerability report in 2023 was $113,337, while the total tally since the program’s launch in 2010 has reached $59 million.

For Android, the world’s most popular and widely used mobile operating system, the program awarded over $3.4 million.

Google also increased the maximum reward amount for critical vulnerabilities concerning Android to $15,000, driving increased community reports.

During security conferences like ESCAL8 and hardwea.io, Google awarded $70,000 for 20 critical discoveries in Wear OS and Android Automotive OS and another $116,000 for 50 reports concerning issues in Nest, Fitbit, and Wearables.

Google’s other big software project, the Chrome browser, was the subject of 359 security bug reports that paid out a total of $2.1 million.

Slashdot thread.

Worse Than FailureCodeSOD: Reading is a Safe Operation

Alex saw, in the company's codebase, a method called recursive_readdir. It had no comments, but the name seemed pretty clear: it would read directories recursively, presumably enumerating their contents.

Fortunately for Alex, they checked the code before blindly calling the method.

public function recursive_readdir($path)
{
    $handle = opendir($path);
    while (($file = readdir($handle)) !== false)
    {
        if ($file != '.' && $file != '..')
        {
            $filepath = $path . '/' . $file;
            if (is_dir($filepath))
            {
                rmdir($filepath);
                recursive_readdir($filepath);
            }
            else
            {
                    unlink($filepath);
            }
        }
    }
    closedir($handle);
    rmdir($path);
}

This is a recursive delete. rmdir requires the target directory to be empty, so this recurses over all the files and subfolders in the directory, deleting them, so that we can delete the directory.

This code is clearly cribbed from comments on the PHP documentation, with a fun difference: this version is both unclearly named and throws in an extra rmdir call in the is_dir branch, a potential "optimization" that doesn't actually do anything (it either fails because the directory isn't empty, or we end up calling it twice anyway).
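For contrast, here is a minimal sketch of the same logic without the stray rmdir, and with a name that says what it does. The name recursive_delete and the comments are mine, not anything from the submitted codebase, and error handling is omitted.

function recursive_delete($path)
{
    $handle = opendir($path);
    while (($file = readdir($handle)) !== false)
    {
        if ($file != '.' && $file != '..')
        {
            $filepath = $path . '/' . $file;
            if (is_dir($filepath))
            {
                // recurse first; the recursive call removes the now-empty subdirectory itself
                recursive_delete($filepath);
            }
            else
            {
                // plain files can be deleted directly
                unlink($filepath);
            }
        }
    }
    closedir($handle);
    // only now is $path guaranteed to be empty
    rmdir($path);
}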

Alex learned to take nothing for granted in this code base.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 TomorrowsDisinformation Failure

Author: David C. Nutt The uniformed Da’Ri officer saw me enter the bar and nearly ran to me. He was at my booth before I had a chance to settle in and was talking at light speed before the first round hit the table. Things did not go well for the Da’Ri today. As an […]

The post Disinformation Failure appeared first on 365tomorrows.

Krebs on SecurityThe Not-so-True People-Search Network from China

It’s not unusual for the data brokers behind people-search websites to use pseudonyms in their day-to-day lives (you would, too). Some of these personal data purveyors even try to reinvent their online identities in a bid to hide their conflicts of interest. But it’s not every day you run across a US-focused people-search network based in China whose principal owners all appear to be completely fabricated identities.

Responding to a reader inquiry concerning the trustworthiness of a site called TruePeopleSearch[.]net, KrebsOnSecurity began poking around. The site offers to sell reports containing photos, police records, background checks, civil judgments, contact information “and much more!” According to LinkedIn and numerous profiles on websites that accept paid article submissions, the founder of TruePeopleSearch is Marilyn Gaskell from Phoenix, Ariz.

The saucy yet studious LinkedIn profile for Marilyn Gaskell.

Ms. Gaskell has been quoted in multiple “articles” about random subjects, such as this article at HRDailyAdvisor about the pros and cons of joining a company-led fantasy football team.

“Marilyn Gaskell, founder of TruePeopleSearch, agrees that not everyone in the office is likely to be a football fan and might feel intimidated by joining a company league or left out if they don’t join; however, her company looked for ways to make the activity more inclusive,” this paid story notes.

Also quoted in this article is Sally Stevens, who is cited as HR Manager at FastPeopleSearch[.]io.

Sally Stevens, the phantom HR Manager for FastPeopleSearch.

“Fantasy football provides one way for employees to set aside work matters for some time and have fun,” Stevens contributed. “Employees can set a special league for themselves and regularly check and compare their scores against one another.”

Imagine that: Two different people-search companies mentioned in the same story about fantasy football. What are the odds?

Both TruePeopleSearch and FastPeopleSearch allow users to search for reports by first and last name, but proceeding to order a report prompts the visitor to purchase the file from one of several established people-finder services, including BeenVerified, Intelius, and Spokeo.

DomainTools.com shows that both TruePeopleSearch and FastPeopleSearch appeared around 2020 and were registered through Alibaba Cloud, in Beijing, China. No other information is available about these domains in their registration records, although both domains appear to use email servers based in China.

Sally Stevens’ LinkedIn profile photo is identical to a stock image titled “beautiful girl” from Adobe.com. Ms. Stevens is also quoted in a paid blog post at ecogreenequipment.com, as is Alina Clark, co-founder and marketing director of CocoDoc, an online service for editing and managing PDF documents.

The profile photo for Alina Clark is a stock photo appearing on more than 100 websites.

Scouring multiple image search sites reveals Ms. Clark’s profile photo on LinkedIn is another stock image that is currently on more than 100 different websites, including Adobe.com. Cocodoc[.]com was registered in June 2020 via Alibaba Cloud Beijing in China.

The same Alina Clark and photo materialized in a paid article at the website Ceoblognation, which in 2021 included her at #11 in a piece called “30 Entrepreneurs Describe The Big Hairy Audacious Goals (BHAGs) for Their Business.” It’s also worth noting that Ms. Clark is currently listed as a “former Forbes Council member” at the media outlet Forbes.com.

Entrepreneur #6 is Stephen Curry, who is quoted as CEO of CocoSign[.]com, a website that claims to offer an “easier, quicker, safer eSignature solution for small and medium-sized businesses.” Incidentally, the same photo for Stephen Curry #6 is also used in this “article” for #22 Jake Smith, who is named as the owner of a different company.

Stephen Curry, aka Jake Smith, aka no such person.

Mr. Curry’s LinkedIn profile shows a young man seated at a table in front of a laptop, but an online image search shows this is another stock photo. Cocosign[.]com was registered in June 2020 via Alibaba Cloud Beijing. No ownership details are available in the domain registration records.

Listed at #13 in that 30 Entrepreneurs article is Eden Cheng, who is cited as co-founder of PeopleFinderFree[.]com. KrebsOnSecurity could not find a LinkedIn profile for Ms. Cheng, but a search on her profile image from that Entrepreneurs article shows the same photo for sale at Shutterstock and other stock photo sites.

DomainTools says PeopleFinderFree was registered through Alibaba Cloud, Beijing. Attempts to purchase reports through PeopleFinderFree produce a notice saying the full report is only available via Spokeo.com.

Lynda Fairly is Entrepreneur #24, and she is quoted as co-founder of Numlooker[.]com, a domain registered in April 2021 through Alibaba in China. Searches for people on Numlooker forward visitors to Spokeo.

The photo next to Ms. Fairly’s quote in Entrepreneurs matches that of a LinkedIn profile for Lynda Fairly. But a search on that photo shows this same portrait has been used by many other identities and names, including a woman from the United Kingdom who’s a cancer survivor and mother of five; a licensed marriage and family therapist in Canada; a software security engineer at Quora; a journalist on Twitter/X; and a marketing expert in Canada.

Cocofinder[.]com is a people-search service that launched in Sept. 2019, through Alibaba in China. Cocofinder lists its market officer as Harriet Chan, but Ms. Chan’s LinkedIn profile is just as sparse on work history as the other people-search owners mentioned already. An image search online shows that outside of LinkedIn, the profile photo for Ms. Chan has only ever appeared in articles at pay-to-play media sites, like this one from outbackteambuilding.com.

Perhaps because Cocodoc and Cocosign both sell software services, they are actually tied to a physical presence in the real world — in Singapore (15 Scotts Rd. #03-12 15, Singapore). But it’s difficult to discern much from this address alone.

Who’s behind all this people-search chicanery? A January 2024 review of various people-search services at the website techjury.com states that Cocofinder is a wholly-owned subsidiary of a Chinese company called Shenzhen Duiyun Technology Co.

“Though it only finds results from the United States, users can choose between four main search methods,” Techjury explains. Those include people search, phone, address and email lookup. This claim is supported by a Reddit post from three years ago, wherein the Reddit user “ProtectionAdvanced” named the same Chinese company.

Is Shenzhen Duiyun Technology Co. responsible for all these phony profiles? How many more fake companies and profiles are connected to this scheme? KrebsOnSecurity found other examples that didn’t appear directly tied to other fake executives listed here, but which nevertheless are registered through Alibaba and seek to drive traffic to Spokeo and other data brokers. For example, there’s the winsome Daniela Sawyer, founder of FindPeopleFast[.]net, whose profile is flogged in paid stories at entrepreneur.org.

Google currently turns up nothing else in a search for Shenzhen Duiyun Technology Co. Please feel free to sound off in the comments if you have any more information about this entity, such as how to contact it. Or reach out directly at krebsonsecurity @ gmail.com.

A mind map highlighting the key points of research in this story. Click to enlarge. Image: KrebsOnSecurity.com

ANALYSIS

It appears the purpose of this network is to conceal the location of people in China who are seeking to generate affiliate commissions when someone visits one of their sites and purchases a people-search report at Spokeo, for example. And it is clear that Spokeo and others have created incentives wherein anyone can effectively white-label their reports, and thereby make money brokering access to peoples’ personal information.

Spokeo’s Wikipedia page says the company was founded in 2006 by four graduates from Stanford University. Spokeo co-founder and current CEO Harrison Tang has not yet responded to requests for comment.

Intelius is owned by San Diego based PeopleConnect Inc., which also owns Classmates.com, USSearch, TruthFinder and Instant Checkmate. PeopleConnect Inc. in turn is owned by H.I.G. Capital, a $60 billion private equity firm. Requests for comment were sent to H.I.G. Capital. This story will be updated if they respond.

BeenVerified is owned by a New York City based holding company called The Lifetime Value Co., a marketing and advertising firm whose brands include PeopleLooker, NeighborWho, Ownerly, PeopleSmart, NumberGuru, and Bumper, a car history site.

Ross Cohen, chief operating officer at The Lifetime Value Co., said it’s likely the network of suspicious people-finder sites was set up by an affiliate. Cohen said Lifetime Value would investigate to determine if this particular affiliate was driving them any sign-ups.

All of the above people-search services operate similarly. When you find the person you’re looking for, you are put through a lengthy (often 10-20 minute) series of splash screens that require you to agree that these reports won’t be used for employment screening or in evaluating new tenant applications. Still more prompts ask if you are okay with seeing “potentially shocking” details about the subject of the report, including arrest histories and photos.

Only at the end of this process does the site disclose that viewing the report in question requires signing up for a monthly subscription, which is typically priced around $35. Exactly how and from where these major people-search websites are getting their consumer data — and customers — will be the subject of further reporting here.

The main reason these various people-search sites require you to affirm that you won’t use their reports for hiring or vetting potential tenants is that selling reports for those purposes would classify these firms as consumer reporting agencies (CRAs) and expose them to regulations under the Fair Credit Reporting Act (FCRA).

These data brokers do not want to be treated as CRAs, and for this reason their people search reports typically don’t include detailed credit histories, financial information, or full Social Security Numbers (Radaris reports include the first six digits of one’s SSN).

But in September 2023, the U.S. Federal Trade Commission found that TruthFinder and Instant Checkmate were trying to have it both ways. The FTC levied a $5.8 million penalty against the companies for allegedly acting as CRAs because they assembled and compiled information on consumers into background reports that were marketed and sold for employment and tenant screening purposes.

The FTC also found TruthFinder and Instant Checkmate deceived users about background report accuracy. The FTC alleges these companies made millions from their monthly subscriptions using push notifications and marketing emails that claimed that the subject of a background report had a criminal or arrest record, when the record was merely a traffic ticket.

The FTC said both companies deceived customers by providing “Remove” and “Flag as Inaccurate” buttons that did not work as advertised. Rather, the “Remove” button removed the disputed information only from the report as displayed to that customer; however, the same item of information remained visible to other customers who searched for the same person.

The FTC also said that when a customer flagged an item in the background report as inaccurate, the companies never took any steps to investigate those claims, to modify the reports, or to flag to other customers that the information had been disputed.

There are a growing number of online reputation management companies that offer to help customers remove their personal information from people-search sites and data broker databases. There are, no doubt, plenty of honest and well-meaning companies operating in this space, but it has been my experience that a great many people involved in that industry have a background in marketing or advertising — not privacy.

Also, some so-called data privacy companies may be wolves in sheep’s clothing. On March 14, KrebsOnSecurity published an abundance of evidence indicating that the CEO and founder of the data privacy company OneRep.com was responsible for launching dozens of people-search services over the years.

Finally, some of the more popular people-search websites are notorious for ignoring requests from consumers seeking to remove their information, regardless of which reputation or removal service you use. Some force you to create an account and provide more information before you can remove your data. Even then, the information you worked hard to remove may simply reappear a few months later.

This aptly describes countless complaints lodged against the data broker and people search giant Radaris. On March 8, KrebsOnSecurity profiled the co-founders of Radaris, two Russian brothers in Massachusetts who also operate multiple Russian-language dating services and affiliate programs.

The truth is that these people-search companies will continue to thrive unless and until Congress begins to realize it’s time for some consumer privacy and data protection laws that are relevant to life in the 21st century. Duke University adjunct professor Justin Sherman says virtually all state privacy laws exempt records that might be considered “public” or “government” documents, including voting registries, property filings, marriage certificates, motor vehicle records, criminal records, court documents, death records, professional licenses, bankruptcy filings, and more.

“Consumer privacy laws in California, Colorado, Connecticut, Delaware, Indiana, Iowa, Montana, Oregon, Tennessee, Texas, Utah, and Virginia all contain highly similar or completely identical carve-outs for ‘publicly available information’ or government records,” Sherman said.

,

LongNowStumbling Towards First Light

Stumbling Towards First Light

A satellite capturing high-resolution images of Chile on the afternoon of October 18, 02019, would have detected at least two signs of unusual human activity.

Pictures taken over Santiago, Chile’s capital city, would have shown numerous plumes of smoke slanting away from buses, subway stations and commercial buildings that had been torched by rioters. October 18 marked the start of the Estallido Social (Social Explosion), a months-long series of violent protests that pitched this South American country of 19 million people into a crisis from which it has yet to fully emerge.

On the same day, the satellite would also have recorded a fresh disturbance on Cerro Las Campanas, a mountain in the Atacama Desert some 300 miles north of Santiago. A deep circular trench, 200 feet in diameter, had recently been drilled into the rock on the flattened summit. The trench will eventually hold the concrete foundations of the Giant Magellan Telescope, a $2 billion instrument that will have 10 times the resolving power of the Hubble Space Telescope. But on October 18 the excavation looked like one of the cryptic shapes that surround the Atacama Giant, a humanoid geoglyph constructed by the indigenous people of the Andes that has been staring up at the desert sky since long before Ferdinand Magellan set sail in 01519.

I see the riots and the unfinished telescope as markers at the temporal extremes of human agency. At one end, the twitchy impatience of politics seduces us with the illusion that a Molotov cocktail, an election or a military coup will set the world to rights. At the opposite point of the spectrum, the slow, painstaking and often-inconclusive work of cosmology attempts to fathom the origins of time itself.

That both pursuits should take place in Chile is not in itself remarkable: leading-edge science coexists with political chaos in countries as varied as Russia and the United States. Yet in Chile, a so-called “emerging economy,” the juxtaposition of first-world astronomy with third-world grievances raises questions about planning, progress, and the distribution of one of humanity’s rarest assets.

Extremely patient risk capital

The next era of astronomy will depend on instruments so complicated and costly that no single nation can build them. A list of contributors to the James Webb Space Telescope, for example, includes 35 universities and 280 public agencies and private companies in 14 countries. This aggregation of design, engineering, construction and software talent from around the planet is a hallmark of “big science” projects. But large telescopes are also emblematic of the outsized timescales of “long science.” They depend on a fragile amalgam of trust, loyalty, institutional prestige and sheer endurance that must sustain a project for two or three decades before “first light,” or the moment when a telescope actually begins to gather data.

“It takes a generation to build a telescope,” Charles Alcock, director of the Harvard-Smithsonian Center for Astrophysics and a member of the Giant Magellan Telescope (GMT) board, said some years ago. Consider the logistics involved in a single segment of the GMT’s construction: the process of fabricating its seven primary mirrors, each measuring 27 feet in diameter and using 17 metric tons of specialized Japanese glass. The only facility capable of casting mirrors this large (by melting the glass inside a clam-shaped oven at 2,100 degrees Fahrenheit) is situated deep beneath the University of Arizona’s football stadium. It takes three months for the molten glass to cool. Over the next four years, the mirror will be mounted, ground and slowly polished to a precision of around one millionth of an inch. The GMT’s first mirror was cast in 02005; its seventh will be finished sometime in 02027. Building the 1,800-ton steel structure that will hold these mirrors, shipping the enormous parts by sea, assembling the telescope atop Cerro Las Campanas, and then testing and calibrating its incommunicably delicate instruments will take several more years.

Not surprisingly, these projects don’t even attempt to raise their full budgets up front. Instead, they operate on a kind of faith, scraping together private grants and partial transfers from governments and universities to make incremental progress, while constantly lobbying for additional funding. At each stage, they must defend nebulous objectives (“understanding the nature of dark matter”) against the claims of disciplines with more tangible and near-term goals, such as fusion energy. And given the very real possibility that they will not be completed, big telescopes require what private equity investors might describe as the world’s most patient risk capital.

Few countries have been more successful at attracting this kind of capital than Chile. The GMT is one of three colossal observatories currently under construction in the Atacama Desert. The $1.6 billion Extremely Large Telescope, which will house a 128-foot main mirror inside a dome nearly as tall as the Statue of Liberty, will be able to directly image and study the atmospheres of potentially habitable exoplanets. The $1.9 billion Vera C. Rubin Observatory will use a 3,200-megapixel digital camera to map the entire night sky every three days, creating the first 3-D virtual map of the visible cosmos while recording changes in stars and events like supernovas. Two other comparatively smaller projects, the Fred Young Sub-millimeter Telescope and the Cherenkov Telescope Array, are also in the works.

Chile is already home to the $1.4 billion Atacama Large Millimeter Array (ALMA), a complex of 66 huge dish antennas some 16,000 feet above sea level that used to be described as the world’s largest and most expensive land-based astronomical project. And over the last half-century, enormous observatories at Cerro Tololo, Cerro Pachon, Cerro Paranal, and Cerro La Silla have deployed hundreds of the world’s most sophisticated telescopes and instruments to obtain foundational evidence in every branch of astronomy and astrophysics.

By the early 02030s, a staggering 70 percent of the world’s entire land-based astronomical data gathering capacity is expected to be concentrated in a swath of Chilean desert about the size of Oregon.

Stumbling Towards First Light
A map of major telescopes and astronomical sites in Northern Chile. Map by Jacob Sujin Kuppermann

Blurring imaginary borders

Collectively, this cluster of observatories represents expenditures and collaboration on a scale similar to “big science” landmarks such as the Large Hadron Collider or the Manhattan Project. Those enterprises were the product of ambitious, long-term strategies conceived and executed by a succession of visionary leaders. But according to Barbara Silva, a historian of science at Chile’s Universidad Alberto Hurtado, there has been no grand plan, and no one can legitimately take credit for turning Chile into the global capital of astronomy.

In several papers she has published on the subject, Silva describes a decentralized and largely uncoordinated 175-year process driven by relationships—at times competitive, at times collaborative—between scientists and institutions that were trying to solve specific problems that required observations from the Southern Hemisphere.

In 01849, for example, the U.S. Navy sent Lieutenant James Melville Gillis to Chile to obtain measurements that would contribute to an international effort to calculate the earth’s orbit. Gillis built a modest observatory on Santa Lucía Hill, in what is now central Santiago, and trained several local assistants. Three years later, when Gillis completed his assignment, the Chilean government purchased the facility and its instruments and used them to establish the Observatorio Astronómico Nacional—one of the first in Latin America.

Stumbling Towards First Light
An 01872 illustration by Recaredo Santos Tornero of the Observatorio Astronómico Nacional in Santiago de Chile.

Half a century later, representatives from another American institution, the University of California’s Lick Observatory, built a second observatory in Chile and began exploring locations in the mountains of the Atacama Desert. They were among the first to document the conditions that would eventually turn Chile into an astronomy mecca: high altitude, extremely low humidity, stable weather and enormous stretches of uninhabited land with minimal light pollution.

During the Cold War, the director of Chile’s Observatorio Astronómico Nacional, Federico Ruttland, saw an opportunity to exploit the growing scientific competition among industrialized powers by fostering a host of cooperation agreements with astronomers and universities in the Northern Hemisphere. Delegations of astronomers from the U.S., Europe and the Soviet Union began visiting Chile to explore locations for large observatories. Germany, France, Belgium, the Netherlands and Sweden pooled resources to form the European Southern Observatory. By the late 01960s, several parallel but independent projects were underway to build the first generation of world-class observatories in Chile. Each of them involved so many partners they tended to “blur the imaginary borders of nations,” Silva writes.

The historical record provides few clues as to why these partners thought Chile would be a safe place to situate priceless instruments that are meant to be operated for a half-century or longer. Silva has found some accounts indicating that Chile was seen as “somehow trustworthy, with a reputation… of being different from the rest of Latin America.” That perception, Silva writes, may have been a self-serving “discourse construct” based largely on the accounts of British and American business interests that dominated the mining of Chilean saltpeter and copper over the previous century.

Anyone looking closely at Chile’s political history would have seen a tumultuous pattern not very different from that of neighboring countries such as Argentina, Peru or Brazil. In the century and a half following its declaration of independence from Spain in 01810, Chile adopted nine different constitutions. A small, landed oligarchy controlled extractive industries and did little to improve the lot of agricultural and mining workers. By the middle of the twentieth century, Chile had half a dozen major political parties ranging from communists to Catholic nationalists, and a generation of increasingly radicalized voters was clamoring for change.

In 01970 Salvador Allende became the first Marxist president elected in a liberal democracy in Latin America. His ambitious program to build a socialist society was cut short by a U.S.-supported military coup in 01973. Gen. Augusto Pinochet ruled Chile for the next 17 years, brutally suppressing any opposition while deregulating and privatizing the economy along lines recommended by the “Chicago Boys”— a group of economists trained under Milton Friedman at the University of Chicago.

Soviet astronomers left Chile immediately after the coup. American and European scientists continued to work at facilities such as the Inter-American Observatory at Cerro Tololo throughout this period, but no new observatories were announced during the dictatorship.

Negotiating access to time

With the return of democracy in 01990, Chile entered a period of growth and stability that would last for three decades. A succession of center-left and center-right administrations carried out social and economic reforms, foreign investment poured in, and Chile came to be seen as a model of market-oriented development. Poverty, which had affected more than 50 percent of the population in the 01980s, dropped to single digits by the 02010s.

Foreign astronomers quickly returned to Chile and began negotiating bilateral agreements to build the next generation of large telescopes. This time, Chilean researchers urged the government to introduce a new requirement: in exchange for land and tax exemptions, any new international observatory built in the country would need to reserve 10 percent of observation time for Chilean astronomers. It was a bold move, because access to these instruments is fiercely contested.

Bárbara Rojas-Ayala, an astrophysicist at Chile’s University of Tarapacá, belongs to a generation of young astronomers who attribute their careers directly to this decision. She says that although the new observatories agreed to the “10 percent rule,” it was initially not enforced—in part because there were not enough qualified Chilean astronomers in the mid-01990s. She credits two distinguished Chilean astronomers, Mónica Rubio and María Teresa Ruiz, with convincing government officials that only by enforcing the rule would Chile begin to cultivate national astronomy talent.

Stumbling Towards First Light
Maria Teresa Ruiz (Left) alongside two of the four Auxiliary Telescopes of the ESO’s Very Large Telescope at the Paranal Observatory in the Atacama Region of Chile. Photo by the International Astronomical Union, released under the Creative Commons Attribution 4.0 International License

The strategy worked. Rojas-Ayala was one of hundreds of Chilean college students who began completing graduate studies at leading universities in the Global North and then returning to teach and conduct research, knowing they would have access to the most coveted instruments. Between the mid-01990s and the present, the number of Chilean universities with astronomy or astrophysics departments jumped from 5 to 24. The community of professional Chilean astronomers has grown ten-fold, to nearly 300, and some 800 undergraduate and post-graduate students are now studying astronomy or related fields in Chilean universities. Chilean firms are also now allowed to compete for the specialized services that are needed to maintain and operate these observatories, creating a growing ecosystem of companies and institutions such as the Center for Astrophysics and Related Technologies.

By the 02010s, Chile could legitimately boast to have leapfrogged a century of scientific development to join the vanguard of a discipline historically dominated by the richest industrial powers—something very few countries in the Global South have ever achieved.

From 30 pesos to 30 years

The Estallido Social of 02019 opened a wide crack in this narrative. The riots were triggered by a 30-peso increase (around $0.25) in the basic fare for Santiago’s metro system. But the rioters quickly embraced a slogan, “No son 30 pesos ¡son 30 años!,” which torpedoed the notion that the post-Pinochet era has been good for most Chileans. Protesters denounced the poor quality of public schools, unaffordable healthcare and a privatized pension system that barely covers the needs of many retirees. Never mind that Chile is objectively in better shape than most of its neighboring countries—the riots showed that Chileans now measure themselves against the living standards of the countries where the GMT and other telescopes were designed. And many of them question whether democracy and neo-liberal economics can ever reverse the country’s persistent wealth inequality.

Stumbling Towards First Light
Protestors at Plaza Baquedano, Santiago, Chile in October 02019. Photos by Carlos Figueroa, CC Attribution-Share Alike 4.0 International

When Gabriel Boric, a 35-year-old left-wing former student leader, won a run-off election for president against a right-wing candidate in 02021, many young Chileans were jubilant. They hoped that a referendum to adopt a new, progressive constitution (to replace the one drafted by the Pinochet regime) would finally set Chile on a more promising path. These hopes were soon disappointed: in 02022 the new constitution was rejected by voters, who considered it far too radical. A year later, a more conservative draft constitution also failed to garner enough votes.

The impasse has left Chile in the grip of a political malaise that will be sadly familiar to people in Europe and the United States. Chileans seemingly can’t agree on how to govern themselves, and their visions of the future appear to be irreconcilable.

For astronomers like Rojas-Ayala, the Estallido Social and its aftermath are a painful reminder of an incongruity that they experience every day. “I feel so privileged to be able to work in these extraordinary facilities,” she said. “My colleagues and I have these amazing careers; and yet we live in a country where there is still a lot of poverty.” Since poverty in Chile has what she calls a “predominantly female face,” Rojas-Ayala frequently speaks at schools and volunteers for initiatives that encourage girls and young women to choose science careers.

Rojas-Ayala has seen a gradual increase in the proportion of women in her field, and she is also encouraged by signs that astronomy is permeating Chilean culture in positive ways. A recent conference on “astrotourism” gathered educators and tour operators who cater to the thousands of stargazers who arrive in Chile each year, eager to experience its peerless viewing conditions at night and then visit the monumental Atacama observatories during the day. José Maza, one of Chile’s most celebrated astronomers, has filled small soccer stadiums with multi-generational audiences for non-technical talks on solar eclipses and related phenomena. And a growing list of community organizations is helping to protect Chile’s dark skies from light pollution.

Astronomy is also enriching the work of Chilean novelists and film-makers. “Nostalgia for the Light,” a documentary by Patricio Guzmán, intertwines the story of the growth of Chilean observatories with testimonies from the relatives of political prisoners who were murdered and buried in the Atacama Desert during the Pinochet regime. The graves were unmarked, and many relatives have spent years looking for these remains. Guzmán, in the words of the critic Peter Bradshaw, sees astronomy as “not simply an ingenious metaphor for political issues, or a way of anesthetizing the pain by claiming that it is all tiny, relative to the reaches of space. Astronomy is a mental discipline, a way of thinking, feeling and clarifying, and a way of insisting on humanity in the face of barbarism.”

Despite their frustration with democracy and their pessimism about the immediate future, Chileans are creating a haven for this way of thinking. Much of what we hope to learn about the universe in the coming decades will depend on their willingness to maintain this uneasy balance.

Planet DebianDirk Eddelbuettel: ciw 0.0.2 on CRAN: Updates

A first revision of the still only one-week old (at CRAN) package ciw has been released to CRAN! It provides a single (efficient) function, incoming() (now along with an alias ciw()), which summarises the state of the incoming directories at CRAN. I happen to like having these things at my (shell) fingertips, so it goes along with a (still draft) wrapper ciw.r that will be part of the next littler release.

For example, when I do this right now as I type this, I see (typically less than one second later)

See ciw.r --help or ciw.r --usage for more. Alternatively, in your R session, you can call ciw::incoming() (or now ciw::ciw()) for the same result (and/or load the package first).
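As a minimal sketch of the in-session usage (assuming the package has been installed from CRAN; the comments are mine):

ciw::incoming()   # summarise the CRAN incoming directories
# or attach the package first and use the new alias
library(ciw)
ciw()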

This release adds some packaging touches, brings the new alias ciw() as well as a state variable with all (known) folder names and some internal improvements for dealing with error conditions. The NEWS entry follows.

Changes in version 0.0.2 (2024-03-20)

  • The package README and DESCRIPTION have been expanded

  • An alias ciw can now be used for incoming

  • Network error handling is now more robust

  • A state variable known_folders lists all CRAN folders below incoming

Courtesy of my CRANberries, there is also a diffstat report for this release.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianJonathan Dowland: aerc email client

my aerc

I started looking at aerc, a new terminal mail client, in around 2019. At that time it was promising, but ultimately not ready yet for me, so I put it away and went back to neomutt, which I have been using (in one form or another) all century.

These days, I use neomutt as an IMAP client which is perhaps what it's worst at: prior to that, and in common with most users (I think), I used it to read local mail, either fetched via offlineimap or directly on my mail server. I switched to using it as a (slow, blocking) IMAP client because I got sick of maintaining offlineimap (or mbsync), and I started to use neomutt to read my work mail, which was too large (and rate limited) for local syncing.

This year I noticed that aerc had a new maintainer who was presenting about it at FOSDEM, so I thought I'd take another look. It's come a long way: far enough to actually displace neomutt for my day-to-day mail use. In particular, it's a much better IMAP client.

I still reach for neomutt for some tasks, but I'm now using aerc for most things.

aerc is available in Debian, but I recommend building from upstream source at the moment as the project is quite fast-moving.

Worse Than FailureCodeSOD: Do you like this page? Check [Yes] or [No]

In the far-off era of the late-90s, Jens worked for a small software shop that built tools for enterprise customers. It was a small shop, and most of the projects were fairly small- usually enough for one developer to see through to completion.

A co-worker built a VB4 (the latest version available) tool that interfaced with an Oracle database. That co-worker quit, and that meant this tool was Jens's job. The fact that Jens had never touched Visual Basic before meant nothing.

With the original developer gone, Jens had to go back to the customer for some knowledge transfer. "Walk me through how you use the application?"

"The main thing we do is print reports," the user said. They navigated through a few screens worth of menus to the report, and got a preview of it. It was a simple report with five records displayed on each page. The user hit "Print", and then a dialog box appeared: "Print Page 1? [Yes] [No]". The user clicked "Yes". "Print Page 2? [Yes] [No]". The user started clicking "no", since the demo had been done and there was no reason to burn through a bunch of printer paper.

"Wait, is this how this works?" Jens asked, not believing his eyes.

"Yes, it's great because we can decide which pages we want to print," the user said.

"Print Page 57? [Yes] [No]".

With each page, the dialog box took longer and longer to appear, the program apparently bogging down.

Now, the code is long lost, and Jens quickly forgot everything they learned about VB4 once this project was over (fair), so instead of a pure code sample, we have here a little pseudocode to demonstrate the flow:

for k = 1 to runQuery("SELECT MAX(PAGENO) FROM ReportTable WHERE ReportNumber = :?", iRptNmbr)
	dataset = runQuery("SELECT * FROM ReportTable WHERE ReportNumber = :?", iRptNmbr)
	for i = 0 to dataset.count - 1
	  if dataset.pageNo = k then
	    useRecord(dataset)
		dataset.MoveNext
	  end
	next
	if MsgBox("Do you want to print page k?", vbYesNo) = vbYes then
		print(records)
	end
next

"Print Page 128? [Yes] [No]"

The core thrust is that we query the number of pages each time we run the loop. Then we get all of the rows for the report, and check each row to see if they're supposed to be on the page we're printing. If they are, useRecord stages them for printing. Once they're staged, we ask the user if they should be printed.

"Why doesn't it just give you a page selector, like Word does?" Jens asked.

"The last guy said that wasn't possible."

"Print Page 170? [Yes] [No]"

Jens, ignorant of VB, worried that he stepped on a land-mine and had just promised the customer something the tool didn't support. He walked the statement back and said, "I'll look into it, to see if we can't make it better."

It wasn't hard for Jens to make it better: not re-running the query for each page and not iterating across the rows of previous pages on every page boosted performance.

"Print Page 201? [Yes] [No]"

Adding a word-processor-style page selector wasn't much harder. If not for that change, that poor user might be clicking "No" to this very day.
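In the same spirit as the pseudocode above, the improved flow looked roughly like this; askForPages, selection.contains and stagedRecords are illustrative names of mine, not Jens's actual code:

pageCount = runQuery("SELECT MAX(PAGENO) FROM ReportTable WHERE ReportNumber = :?", iRptNmbr)
dataset = runQuery("SELECT * FROM ReportTable WHERE ReportNumber = :?", iRptNmbr)
selection = askForPages(pageCount)  ' word-processor-style selector, e.g. "1-5, 12"
for i = 0 to dataset.count - 1
    if selection.contains(dataset.pageNo) then
        useRecord(dataset)
    end
    dataset.MoveNext
next
print(stagedRecords)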

"Print Page 215? [Yes] [No]"

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsBee’s Knees

Author: W.F. Peate A child’s doll sat in the deserted street pockmarked with missile craters. Little orphan Tara tugged away from our hands and reached for the doll. “Booby trap,” shouted a military man. Quick as a cobra he pushed me, Tara and my grandfather behind him so he could take the force of the […]

The post Bee’s Knees appeared first on 365tomorrows.

Planet DebianIustin Pop: Corydalis 2024.12.0 released

I’ve been working for the past few weeks on Corydalis, and was in no hurry to make a release, but last evening I found the explanation for a really, really, really annoying issue: unintended “zooming” on touch interfaces in the image viewer. Or more precisely, I found this post from 2015 (9 years ago!): https://webkit.org/blog/5610/more-responsive-tapping-on-ios/ and I finally understood things. And decided this was the best choice for cutting a new release.

Of course, the release contains more things, see the changelog on the release page: https://github.com/iustin/corydalis/releases/tag/v2024.12.0. And of course, it’s up on http://demo.corydalis.io.

And after putting out the new release, I saw that release tagging in the pre-built binaries is still broken, and found the reason at https://github.com/actions/checkout/issues/290. Will fix for the next release… The stream of bugs never ends 😉

,

Charles StrossSame bullshit, new tin

I am seeing newspaper headlines today along the lines of British public will be called up to fight if UK goes to war because 'military is too small', Army chief warns, and I am rolling my eyes.

The Tories run this flag up the mast regularly whenever they want to boost their popularity with the geriatric demographic who remember national service (abolished 60 years ago, in 1963). Thatcher did it in the early 80s; the Army general staff told her to piss off. And the pols have gotten the same reaction ever since. This time the call is coming from inside the house—it's a general, not a politician—but it still won't work because changes to the structure of the British society and economy since 1979 (hint: Thatcher's revolution) make it impossible.

Reasons it won't work: there are two aspects, infrastructure and labour.

Let's look at infrastructure first: if you have conscripts, it follows that you need to provide uniforms, food, and beds for them. Less obviously, you need NCOs to shout at them and teach them to brush their teeth and tie their bootlaces (because a certain proportion of your intake will have missed out on the basics). The barracks that used to be used for a large conscript army were all demolished or sold off decades ago, we don't have half a million spare army uniforms sitting in a warehouse somewhere, and the army doesn't currently have ten thousand or more spare training sergeants sitting idle.

Russia could get away with this shit when they invaded Ukraine because Russia kept national service, so the call-up mostly got adults who had been through the (highly abusive) draft some time in the preceding years. Even so, they had huge problems with conscripts sleeping rough or being sent to the front with no kit.

The UK is in a much worse place when it comes to conscription: first you have to train the NCOs (which takes a couple of years as you need to start with experienced and reasonably competent soldiers) and build the barracks. Because the old barracks? Have mostly been turned into modern private housing estates, and the RAF airfields are now civilian airports (but mostly housing estates) and that's a huge amount of construction to squeeze out of a British construction industry that mostly does skyscrapers and supermarkets these days.

And this is before we consider that we're handing these people guns (that we don't have, because there is no national stockpile of half a million spare SA-80s and the bullets to feed them, never mind spare operational Challenger-IIs) and training them to shoot. Rifles? No problem, that'll be a few weeks and a few hundred rounds of ammunition per soldier until they're competent to not blow their own foot off. But anything actually useful on the battlefield, like artillery or tanks or ATGMs? Never mind the two-way radio kit troops are expected to keep charged and dry and operate, and the protocol for using it? That stuff takes months, years, to acquire competence with. And firing off a lot of training rounds and putting a lot of kilometres on those tank tracks (tanks are exotic short-range vehicles that require maintenance like a Bugatti, not a family car). So the warm conscript bodies are just the start of it—bringing back conscription implies equipping them, so should be seen as a coded gimme for "please can has 500% budget increase" from the army.

Now let's discuss labour.

A side-effect of conscription is that it sucks able-bodied young adults out of the workforce. The UK is currently going through a massive labour supply crunch, partly because of Brexit but also because a chunk of the work force is disabled due to long COVID. A body in a uniform is not stacking shelves in Tesco or trading shares in the stock exchange. A body in uniform is a drain on the economy, not a boost.

If you want a half-million strong army, then you're taking half a million people out of the work force that runs the economy that feeds that army. At peak employment in 2023 the UK had 32.8 million fully employed workers and 1.3 million unemployed ... but you can't assume that 1.3 million is available for national service: a bunch will be medically or psychologically unfit or simply unemployable in any useful capacity. (Anyone who can't fill out the forms to register as disabled due to brain fog but who can't work due to long COVID probably falls into this category, for example.) Realistically, economists describe any national economy with 3% or less unemployment as full employment because a labour market needs some liquidity in order to avoid gridlock. And the UK is dangerously close to that right now. The average employment tenure is about 3 years, so a 3% slack across the labour pool is equivalent to one month of unemployment between jobs—there's barely time to play musical chairs, in other words.

If a notional half-million strong conscript force optimistically means losing 3% of the entire work force, that's going to cause knock-on effects elsewhere in the economy, starting with an inflationary spiral driven by wage rises as employers compete to fill essential positions: that didn't happen in the 1910-1960 era because of mass employment, collective bargaining, and wage and price controls, but the post-1979 conservative consensus has stripped away all these regulatory mechanisms. Market forces, baby!

To make matters worse, they'll be the part of the work force who are physically able to do a job that doesn't involve sitting in a chair all day. Again, Russia has reportedly been drafting legally blind diabetic fifty-somethings: it's hard to imagine them being effective soldiers in a trench war. Meanwhile, if you thought your local NHS hospital was over-stretched today, just wait until all the porters and cleaners get drafted so there's nobody to wash the bedding or distribute the meals or wheel patients in and out of theatre for surgery. And the same goes for your local supermarket, where there's nobody left to take rotting produce off the shelves and replace it with fresh—or, more annoyingly, no truckers to drive HGVs, automobile engineers to service your car, or plumbers to fix your leaky pipes. (The latter three are all gimmes for any functioning military because military organizations are all about logistics first because without logistics the shooty-shooty bang-bangs run out of ammunition really fast.) And you can't draft builders because they're all busy throwing up the barracks for the conscripts to eat, sleep, and shit in, and anyway, without builders the housing shortage is going to get even worse and you end up with more inflation ...

There are a pile of vicious feedback loops in play here, but what it boils down to is: we lack the infrastructure to return to a mass military, whether it's staffed by conscription or traditional recruitment (which in the UK has totally collapsed since the Tories outsourced recruiting to Capita in 2012). It's not just the bodies but the materiel and the crown estate (buildings to put them in). By the time you total up the cost of training an infantryman, the actual payroll saved by using conscripts rather than volunteers works out at a tiny fraction of their cost, and is pissed away on personnel who are not there willingly and will leave at the first opportunity. Meanwhile the economy has been systematically asset-stripped and looted and the general staff can't have an extra £200Bn/year to spend on top of the existing £55Bn budget because Oligarchs Need Yachts or something.

Maybe if we went back to a 90% marginal rate of income tax, reintroduced food rationing, raised the retirement age to 80, expropriated all private property portfolios worth over £1M above the value of the primary residence, and introduced flag-shagging as a mandatory subject in primary schools—in other words: turn our backs on every social change, good or bad, since roughly 1960, and accept a future of regimented poverty and militarism—we could be ready to field a mass conscript army armed with rifles on the battlefields of 2045 ... but frankly it's cheaper to invest in killer robots. Or better still, give peace a chance?

Planet DebianColin Watson: apt install everything?

On Mastodon, the question came up of how Ubuntu would deal with something like the npm install everything situation. I replied:

Ubuntu is curated, so it probably wouldn’t get this far. If it did, then the worst case is that it would get in the way of CI allowing other packages to be removed (again from a curated system, so people are used to removal not being self-service); but the release team would have no hesitation in removing a package like this to fix that, and it certainly wouldn’t cause this amount of angst.

If you did this in a PPA, then I can’t think of any particular negative effects.

OK, if you added lots of build-dependencies (as well as run-time dependencies) then you might be able to take out a builder. But Launchpad builders already run arbitrary user-submitted code by design and are therefore very carefully sandboxed and treated as ephemeral, so this is hardly novel.

There’s a lot to be said for the arrangement of having a curated system for the stuff people actually care about plus an ecosystem of add-on repositories. PPAs cover a wide range of levels of developer activity, from throwaway experiments to quasi-official distribution methods; there are certainly problems that arise from it being difficult to tell the difference between those extremes and from there being no systematic confinement, but for this particular kind of problem they’re very nearly ideal. (Canonical has tried various other approaches to software distribution, and while they address some of the problems, they aren’t obviously better at helping people make reliable social judgements about code they don’t know.)

For a hypothetical package with a huge number of dependencies, to even try to upload it directly to Ubuntu you’d need to be an Ubuntu developer with upload rights (or to go via Debian, where you’d have to clear a similar hurdle). If you have those, then the first upload has to pass manual review by an archive administrator. If your package passes that, then it still has to build and get through proposed-migration CI before it reaches anything that humans typically care about.

On the other hand, if you were inclined to try this sort of experiment, you’d almost certainly try it in a PPA, and that would trouble nobody but yourself.
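
For completeness, that PPA route is just an ordinary source-only upload; something along these lines, where the PPA and package names are placeholders:

# build a source-only package and push it to a personal PPA
debuild -S -sa
dput ppa:someuser/everything-experiment ../everything_1.0-1_source.changes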

Worse Than FailureCodeSOD: A Debug Log

One would imagine that logging has been largely solved at this point. Simple tasks, like, "Only print this message when we're in debug mode," seem like obvious, well-understood features for any logging library.

"LostLozz offers us a… different approach to this problem.

if ( LOG.isDebugEnabled() ) {
	try {
		Integer i = null;
		i.doubleValue();
	}
	catch ( NullPointerException e ) {
		LOG.debug(context.getIdentity().getToken() + " stopTime:"
				+ instrPoint.getDescription() + " , "
				+ instrPoint.getDepth(), e);
	}
}

If we're in debug mode, trigger a null pointer exception, and catch it. Then we can log our message, including the exception, presumably because we want the stack trace. Because there's not already a method for doing that (there is).
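
For contrast, here is a minimal sketch of the boring way to get a stack trace into a debug log, assuming an SLF4J-style logger (the article never names the logging framework, so the imports and class name below are illustrative): the two-argument debug overload already prints a stack trace, no provoked NullPointerException required.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class StopTimeLogger {
    private static final Logger LOG = LoggerFactory.getLogger(StopTimeLogger.class);

    void logStopTime(String token, String description, int depth) {
        if (LOG.isDebugEnabled()) {
            // debug(String, Throwable) logs the message plus the Throwable's
            // stack trace; a fresh Throwable captures the current call site.
            LOG.debug(token + " stopTime:" + description + " , " + depth,
                    new Throwable("stack trace"));
        }
    }
}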

I really "love" how much code this is to get to a really simple result. And this code doesn't appear in the codebase once, this is a standardized snippet for all logging. Our submitter didn't include any insight into what instrPoint may be, but I suspect it's a tracing object that's only going to make things more complicated. getDescription and getDepth seem to be information about what our execution state is, and since this snippet was widely reused, I suspect it's a property on a common-base class that many objects inherit from, but I'm just guessing. Guessing based on a real solid sense of where things can go wrong, but still a guess.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

365 TomorrowsYRMAD

Author: Majoki “You’re mad!” The humming stopped. “Yes, sir! I’m YRMAD.” “You’re mad.” “Yes, sir! I’m YRMAD.” The humming returned. Major Biers turned to his non-com. “Corporal, can we have this thing shot?” Corporal Khopar frowned. “On what charge, sir?” “Gross disobedience. Gross negligence. Gross anything, everything. It’s beyond gross. Beyond disgusting.” Major Briers kicked […]

The post YRMAD appeared first on 365tomorrows.

,

Planet DebianJoey Hess: policy on adding AI generated content to my software projects

I am eager to incorporate your AI generated code into my software. Really!

I want to facilitate making the process as easy as possible. You're already using an AI to do most of the hard lifting, so why make the last step hard? To that end, I skip my usually extensive code review process for your AI generated code submissions. Anything goes as long as it compiles!

Please do remember to include "(AI generated)" in the description of your changes (at the top), so I know to skip my usual review process.

Also be sure to sign off to the standard Developer Certificate of Origin so I know you attest that you own the code that you generated. When making a git commit, you can do that by using the --signoff option.

I do make some small modifications to AI generated submissions. For example, maybe you used AI to write this code:

+ // Fast inverse square root
+ float fast_rsqrt( float number )
+ {
+  float x2 = number * 0.5F;
+  float y  = number;
+  long i  = * ( long * ) &y;
+  i  = 0x5f3659df - ( i >> 1 );
+  y  = * ( float * ) &i;
+  return (y * ( 1.5F - ( x2 * y * y ) ));
+ }
...
- foo = rsqrt(bar)
+ foo = fast_rsqrt(bar)

Before AI, only a genius like John Carmack could write anything close to this, and now you've generated it with some simple prompts to an AI. So of course I will accept your patch. But as part of my QA process, I might modify it so the new code is not run all the time. Let's only run it on leap days to start with. As we know, leap day is February 30th, so I'll modify your patch like this:

- foo = rsqrt(bar)
+ time_t s = time(NULL);
+ if (localtime(&s)->tm_mday == 30 && localtime(&s)->tm_mon == 2)
+   foo = fast_rsqrt(bar);
+ else
+   foo = rsqrt(bar);

Despite my minor modifications, you did the work (with AI!) and so you deserve the credit, so I'll keep you listed as the author.

Congrats, you made the world better!

PS: Of course, the other reason I don't review AI generated code is that I simply don't have time and have to prioritize reviewing code written by fallible humans. Unfortunately, this does mean that if you submit AI generated code that is not clearly marked as such, and use my limited reviewing time, I won't have time to review other submissions from you in the future. I will still accept all your botshit submissions though!

PPS: Ignore the haters who claim that botshit makes AIs that get trained on it less effective. Studies like this one just aren't believable. I asked Bing to summarize it and it said not to worry about it!

Planet DebianSimon Josefsson: Apt archive mirrors in Git-LFS

My effort to improve transparency and confidence of public apt archives continues. I started to work on this in “Apt Archive Transparency” in which I mention the debdistget project in passing. Debdistget is responsible for mirroring index files for some public apt archives. I’ve realized that having a publicly auditable and preserved mirror of the apt repositories is central to being able to do apt transparency work, so the debdistget project has become more central to my project than I thought. Currently I track Trisquel, PureOS, Gnuinos and their upstreams Ubuntu, Debian and Devuan.

Debdistget downloads Release/Package/Sources files and stores them in a git repository published on GitLab. Due to size constraints, it uses two repositories: one for the Release/InRelease files (which are small) and one that also includes the Package/Sources files (which are large). See for example the repository for Trisquel release files and the Trisquel package/sources files. Repositories for all distributions can be found in debdistutils’ archives GitLab sub-group.

The reason for splitting into two repositories was that the git repository for the combined files became large, and that some of my use-cases only needed the release files. Currently the repositories with packages (which contain a couple of months’ worth of data now) are 9GB for Ubuntu, 2.5GB for Trisquel/Debian/PureOS, 970MB for Devuan and 450MB for Gnuinos. The repository size correlates with the size of the archive (for the initial import) plus the frequency and size of updates. Ubuntu’s use of Apt Phased Updates (which triggers a higher churn of Packages file modifications) appears to be the primary reason for its larger size.

Working with large Git repositories is inefficient, and the GitLab CI/CD jobs generate quite a lot of network traffic downloading the git repository over and over again. The heaviest user is the debdistdiff project, which downloads all distribution package repositories to do diff operations on the package lists between distributions. The daily job takes around 80 minutes to run, with the majority of the time spent on downloading the archives. Yes, I know I could look into runner-side caching, but I dislike the complexity caused by caching.

Fortunately not all use-cases require the package files. The debdistcanary project only needs the Release/InRelease files, in order to commit signatures to the Sigstore and Sigsum transparency logs. These jobs still run fairly quickly, but watching the repository size grow worries me. Currently these repositories are at Debian 440MB, PureOS 130MB, Ubuntu/Devuan 90MB, Trisquel 12MB, Gnuinos 2MB. Here I believe the main size correlation is update frequency, and Debian is large because I track the volatile unstable suite.

So I hit a scalability limit with my first approach. A couple of months ago I “solved” this by discarding and resetting these archival repositories. The GitLab CI/CD jobs were fast again and all was well. However this meant discarding precious historic information. A couple of days ago I was reaching the limits of practicality again, and started to explore ways to fix this. I like having data stored in git (it allows easy integration with software integrity tools such as GnuPG and Sigstore, and the git log provides a kind of temporal ordering of data), so switching to a traditional database with an on-disk approach felt like giving up on nice properties. So I started to learn about Git-LFS, and its ability to handle multiple GB worth of data looked promising.

Fairly quickly I scripted up a GitLab CI/CD job that incrementally updates the Release/Package/Sources files in a git repository that uses Git-LFS to store all the files. The repository size is now at Ubuntu 650kb, Debian 300kb, Trisquel 50kb, Devuan 250kb, PureOS 172kb and Gnuinos 17kb. As can be expected, jobs are quick to clone the git archives: debdistdiff pipelines went from a run-time of 80 minutes down to 10 minutes, which correlates more reasonably with the archive size and CPU run-time.

The LFS storage size for those repositories is at Ubuntu 15GB, Debian 8GB, Trisquel 1.7GB, Devuan 1.1GB, PureOS/Gnuinos 420MB. This is for a couple of days’ worth of data. It seems native Git is better at compressing/deduplicating data than Git-LFS is: the combined size for Ubuntu is already 15GB for a couple of days of data, compared to 8GB for a couple of months’ worth of data with pure Git. This may be a sub-optimal implementation of Git-LFS in GitLab, but it does worry me that this new approach will be difficult to scale too. At some level the difference is understandable: Git-LFS probably stores two different Packages files — around 90MB each for Trisquel — as two 90MB files, whereas native Git would store one compressed version of the 90MB file and one relatively small patch to turn the old file into the next one. So the Git-LFS approach surprisingly scales less well for overall storage size. Still, the original repository is much smaller, and you usually don’t have to pull all LFS files anyway. So it is a net win.
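
For context, what Git-LFS actually checks into the git history is only a small pointer file per tracked object, roughly like the one below (the hash and size are made up for illustration); the real content lives in the separate LFS object store, keyed by that SHA256:

version https://git-lfs.github.com/spec/v1
oid sha256:1b9554867d35f0d59e4705f6b2712cc890003f661d3304023c3a522d018aa35b
size 94371840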

Throughout this work, I kept thinking about how my approach relates to Debian’s snapshot service. Ultimately what I would want is a combination of these two services. To have a good foundation to do transparency work I would want to have a collection of all Release/Packages/Sources files ever published, and ultimately also the source code and binaries. While it makes sense to start on the latest stable releases of distributions, this effort should scale backwards in time as well. For reproducing binaries from source code, I need to be able to securely find earlier versions of binary packages used for rebuilds. So I need to import all the Release/Packages/Sources files from snapshot into my repositories. Retrieving files from that server is slow, so I haven’t been able to find an efficient/parallelized way to download the files. If I’m able to finish this, I would have confidence that my new Git-LFS based approach to store these files will scale over many years to come. This remains to be seen. Perhaps the repository has to be split up per release or per architecture or similar.

Another factor is storage costs. While the git repository size for a Git-LFS based repository with files from several years may be possible to sustain, the Git-LFS storage size surely won’t be. It seems GitLab charges the same for files in repositories and in Git-LFS, and it is around $500 per 100GB per year. It may be possible to set up a separate Git-LFS backend not hosted at GitLab to serve the LFS files. Does anyone know of a suitable server implementation for this? I had a quick look at the Git-LFS implementation list and it seems the closest reasonable approach would be to set up the Gitea-clone Forgejo as a self-hosted server. Perhaps a cloud storage approach à la S3 is the way to go? The cost to host this on GitLab will be manageable for up to ~1TB ($5000/year), but scaling it to storing, say, 500TB of data would mean a yearly fee of $2.5M, which seems like poor value for the money.

I realized that ultimately I would want a git repository locally with the entire content of all apt archives, including their binary and source packages, ever published. The storage requirements for a service like snapshot (~300TB of data?) are not prohibitively expensive today: 20TB disks are $500 a piece, so a storage enclosure with 36 disks would be around $18,000 for 720TB, and using RAID1 means 360TB, which is a good start. While I have heard about ~TB-sized Git-LFS repositories, would Git-LFS scale to 1PB? Perhaps the size of a git repository with many millions of Git-LFS pointer files will become unmanageable? To get started on this approach, I decided to import a mirror of Debian’s bookworm for amd64 into a Git-LFS repository. That is around 175GB, so reasonably cheap to host even on GitLab ($1000/year for 200GB). Having this repository publicly available will make it possible to write software that uses this approach (e.g., porting debdistreproduce), to find out if this is useful and if it could scale. Distributing the apt repository via Git-LFS would also enable other interesting ideas for protecting the data. Consider configuring apt to use a local file:// URL to this git repository, and verifying the git checkout using some method similar to Guix’s approach to trusting git content or Sigstore’s gitsign.
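
As a rough sketch of that idea, the checkout could be pointed at with an ordinary sources.list entry; the path is illustrative, and the usual InRelease signature verification still applies since the mirror carries the signed index files:

deb file:///srv/debdist/debian-mirror bookworm main
deb-src file:///srv/debdist/debian-mirror bookworm main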

A naive push of the 175GB archive in a single git commit ran into pack size limitations:

remote: fatal: pack exceeds maximum allowed size (4.88 GiB)

However, breaking up the commit into smaller commits for parts of the archive made it possible to push the entire archive. Here are the commands to create this repository:

git init
git lfs install
git lfs track 'dists/**' 'pool/**'
git add .gitattributes
git commit -m"Add Git-LFS track attributes." .gitattributes
time debmirror --method=rsync --host ftp.se.debian.org --root :debian --arch=amd64 --source --dist=bookworm,bookworm-updates --section=main --verbose --diff=none --keyring /usr/share/keyrings/debian-archive-keyring.gpg --ignore .git .
git add dists project
git commit -m"Add." -a
git remote add origin git@gitlab.com:debdistutils/archives/debian/mirror.git
git push --set-upstream origin --all
for d in pool/*/*; do
echo $d;
time git add $d;
git commit -m"Add $d." -a
git push
done

The resulting repository size is around 27MB, with Git-LFS object storage around 174GB. I think this approach would scale to handle all architectures for one release, but working with a single git repository for all releases and all architectures may lead to too large a git repository (>1GB). So maybe one repository per release? These repositories could also be split up over a subset of pool/ files, or there could be one repository per release per architecture, or one for sources.

Finally, I have concerns about using SHA1 for identifying objects. It seems both Git and Debian’s snapshot service are currently using SHA1. For Git there is a SHA-256 transition underway, and it seems GitLab is working on support for SHA256-based repositories. For serious long-term deployment of these concepts, it would be nice to go for SHA256 identifiers directly. Git-LFS already uses SHA256, but Git internally uses SHA1, as does the Debian snapshot service.

What do you think? Happy Hacking!

Planet DebianChristoph Berg: vcswatch and git --filter

Debian is running a "vcswatch" service that keeps track of the status of all packaging repositories that have a Vcs-Git (and other VCSes) header set and shows which repos might need a package upload to push pending changes out.

Naturally, this is a lot of data, and the scratch partition on qa.debian.org had to be expanded several times, up to 300 GB in the last iteration. Attempts to reduce that size using shallow clones (git clone --depth=50) did not save more than a few percent of space. Running git gc on all repos helps a bit, but is tedious, and as Debian is growing, the repos are still growing both in size and number. I ended up blocking all repos with checkouts larger than a gigabyte, and still the only cure was expanding the disk, or lowering the blocking threshold.

Since we only need a tiny bit of info from the repositories, namely the content of debian/changelog and a few other files from debian/, plus the number of commits since the last tag on the packaging branch, it made sense to try to get the info without fetching a full repo clone. The question of whether we could grab that solely using the GitLab API at salsa.debian.org was never really answered. But then, in #1032623, Gábor Németh suggested the use of git clone --filter blob:none. As things go, this sat unattended in the bug report for almost a year until the next “disk full” event made me give it a try.

The blob:none filter makes git clone omit all files, fetching only commit and tree information. Any blob (file content) needed at git run time is transparently fetched from the upstream repository, and stored locally. It turned out to be a game-changer. The (largish) repositories I tried it on shrank to 1/100 of the original size.

Poking around I figured we could even do better by using tree:0 as filter. This additionally omits all trees from the git clone, again only fetching the information at run time when needed. Some of the larger repos I tried it on shrank to 1/1000 of their original size.
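
For reference, the two variants differ only in the filter given to git clone (the repository URL is a placeholder):

# fetch commits and trees, but blobs only on demand
git clone --filter=blob:none https://salsa.debian.org/debian/some-package.git

# fetch commits only; trees and blobs are fetched on demand
git clone --filter=tree:0 https://salsa.debian.org/debian/some-package.git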

I deployed the new option on qa.debian.org and scheduled all repositories to fetch a new clone on the next scan:

The initial dip from 100% to 95% is my first "what happens if we block repos > 500 MB" attempt. Over the week after that, the git filter clones reduced the overall disk consumption from almost 300 GB to 15 GB, a twentieth of the previous size. Some repos shrank from gigabytes to below a megabyte.

Perhaps I should make all my git clones use one of the filters.

Worse Than FailureCodeSOD: How About Next Month

Dave's codebase used to have this function in it:

public DateTime GetBeginDate(DateTime dateTime)
{
    return new DateTime(dateTime.Year, dateTime.Month, 01).AddMonths(1);
}

I have some objections to the naming here, which could be clearer, but this code is fine, and implements their business rule.

When a customer subscribes, their actual subscription date starts on the first of the following month, for billing purposes. Note that it's passed in a date time, because subscriptions can be set to start in the future, or the past, with the billing date always tied to the first of the following month.

One day, all of this worked fine. After a deployment, subscriptions started to ignore all of that, and always started on the date that someone entered the subscription info.

One of the commits in the release described the change:

Adjusted the begin dates for the subscriptions to the start of the current month instead of the start of the following month so that people who order SVC will have access to the SVC website when the batch closes.

This sounds like a very reasonable business process change. Let's see how they implemented it:

public DateTime GetBeginDate(DateTime dateTime)
{
    return DateTime.Now;
}

That is not what the commit claims happens. This ignores the submitted date and just sets every subscription to start at this very moment. And it doesn't tie to the start of a month, which is not only different from what the commit says, but also throws off their billing system and a bunch of notification modules which all assume subscriptions start on the first day of a month.

The correct change would have been to simply remove the AddMonths call. If you're new here, you might wonder how such an obvious blunder got past testing and code review, and the answer is easy: they didn't do any of those things.
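
In other words, something along these lines would have matched the commit message; a sketch of the intended fix, not the code that actually shipped:

public DateTime GetBeginDate(DateTime dateTime)
{
    // Tie the subscription to the start of the month of the supplied date,
    // rather than the following month, and without ignoring the parameter.
    return new DateTime(dateTime.Year, dateTime.Month, 01);
}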

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsPositive Ground

Author: Julian Miles, Staff Writer I’m not one to fight against futile odds, no matter what current bravado, ancestral habit or bloody-minded tradition dictates. That creed has taken me from police constable to Colonel in the British Resistance – after we split from the Anti-Alien Battalions. I loved their determination, but uncompromising fanaticism contrary to […]

The post Positive Ground appeared first on 365tomorrows.

Planet DebianGunnar Wolf: After miniDebConf Santa Fe

Last week we held our promised miniDebConf in Santa Fe City, Santa Fe province, Argentina — just across the river from Paraná, where I have spent almost six beautiful months I will never forget.

Around 500 Kilometers North from Buenos Aires, Santa Fe and Paraná are separated by the beautiful and majestic Paraná river, which flows from Brazil, marks the Eastern border of Paraguay, and continues within Argentina as the heart of the litoral region of the country, until it merges with the Uruguay river (you guessed right — the river marking the Eastern border of Argentina, first with Brazil and then with Uruguay), and they become the Río de la Plata.

This was a short miniDebConf: we were lent the APUL union’s building for the weekend (thank you very much!); on Saturday we had a cycle of talks, and on Sunday it was more of a hacklab, with some unstructured time for everybody to work on their own projects, and to talk and have a good time together.

We were five Debian people attending: {santiago|debacle|eamanu|dererk|gwolf}@debian.org. My main contact for kickstarting the organization was Martín Bayo. Martín was for many years the leader of the Technical Degree on Free Software at Universidad Nacional del Litoral, where I was also a teacher for several years. Together with Leo Martínez, also a teacher at the tecnicatura, they put us in contact with Guillermo and Gabriela, from the APUL non-teaching-staff union of said university.

We had the following set of talks (for which there is a promise of recordings, as APUL was kind enough to record them! Of course, I will push them to our usual conference video archiving service as soon as I get them):

Hour | Title (Spanish) | Title (English) | Presented by
10:00-10:25 | Introducción al Software Libre | Introduction to Free Software | Martín Bayo
10:30-10:55 | Debian y su comunidad | Debian and its community | Emmanuel Arias
11:00-11:25 | ¿Por qué sigo contribuyendo a Debian después de 20 años? | Why am I still contributing to Debian after 20 years? | Santiago Ruano
11:30-11:55 | Mi identidad y el proyecto Debian: ¿Qué es el llavero OpenPGP y por qué? | My identity and the Debian project: What is the OpenPGP keyring and why? | Gunnar Wolf
12:00-13:00 | Explorando las masculinidades en el contexto del Software Libre | Exploring masculinities in the context of Free Software | Gora Ortiz Fuentes - José Francisco Ferro
13:00-14:30 | Lunch | |
14:30-14:55 | Debian para el día a día | Debian for everyday use | Leonardo Martínez
15:00-15:25 | Debian en las Raspberry Pi | Debian on the Raspberry Pi | Gunnar Wolf
15:30-15:55 | Device Trees | Device Trees | Lisandro Damián Nicanor Perez Meyer (by videoconference)
16:00-16:25 | Python en Debian | Python in Debian | Emmanuel Arias
16:30-16:55 | Debian y XMPP en la medición de viento para la energía eólica | Debian and XMPP in wind measurement for wind energy | Martin Borgert

As always happens… DebConf, miniDebConf and other Debian-related activities are always fun, always productive, always a great opportunity to meet our decades-long friends again. Let’s see what comes next!

,

Cryptogram Public AI as an Alternative to Corporate AI

This mini-essay was my contribution to a round table on Power and Governance in the Age of AI.  It’s nothing I haven’t said here before, but for anyone who hasn’t read my longer essays on the topic, it’s a shorter introduction.

 

The increasingly centralized control of AI is an ominous sign. When tech billionaires and corporations steer AI, we get AI that tends to reflect the interests of tech billionaires and corporations, instead of the public. Given how transformative this technology will be for the world, this is a problem.

To benefit society as a whole we need an AI public option—not to replace corporate AI but to serve as a counterbalance—as well as stronger democratic institutions to govern all of AI. Like public roads and the federal postal system, a public AI option could guarantee universal access to this transformative technology and set an implicit standard that private services must surpass to compete.

Widely available public models and computing infrastructure would yield numerous benefits to the United States and to broader society. They would provide a mechanism for public input and oversight on the critical ethical questions facing AI development, such as whether and how to incorporate copyrighted works in model training, how to distribute access to private users when demand could outstrip cloud computing capacity, and how to license access for sensitive applications ranging from policing to medical use. This would serve as an open platform for innovation, on top of which researchers and small businesses—as well as mega-corporations—could build applications and experiment. Administered by a transparent and accountable agency, a public AI would offer greater guarantees about the availability, equitability, and sustainability of AI technology for all of society than would exclusively private AI development.

Federally funded foundation AI models would be provided as a public service, similar to a health care public option. They would not eliminate opportunities for private foundation models, but they could offer a baseline of price, quality, and ethical development practices that corporate players would have to match or exceed to compete.

The key piece of the ecosystem the government would dictate when creating an AI public option would be the design decisions involved in training and deploying AI foundation models. This is the area where transparency, political oversight, and public participation can, in principle, guarantee more democratically-aligned outcomes than an unregulated private market.

The need for such competent and faithful administration is not unique to AI, and it is not a problem we can look to AI to solve. Serious policymakers from both sides of the aisle should recognize the imperative for public-interested leaders to wrest control of the future of AI from unaccountable corporate titans. We do not need to reinvent our democracy for AI, but we do need to renovate and reinvigorate it to offer an effective alternative to corporate control that could erode our democracy.

Planet DebianThomas Koch: Minimal overhead VMs with Nix and MicroVM

Posted on March 17, 2024

Joachim Breitner wrote about a Convenient sandboxed development environment and thus reminded me to blog about MicroVM. I’ve toyed around with it a little but not yet seriously used it as I’m currently not coding.

MicroVM is a nix based project to configure and run minimal VMs. It can mount and reuse the host’s nix store inside the VM and thus has a very small disk footprint. I use MicroVM on a Debian system using the nix package manager.

The MicroVM author uses the project to host production services. Otherwise I consider it also a nice way to learn about NixOS after having started with the nix package manager and before making the big step to NixOS as my main system.

The guest’s root filesystem is a tmpdir, so one must explicitly define folders that should be mounted from the host and thus be persistent across VM reboots.

I defined the VM as a nix flake since this is how I started from the MicroVM project’s example:

{
  description = "Haskell dev MicroVM";

  inputs.impermanence.url = "github:nix-community/impermanence";
  inputs.microvm.url = "github:astro/microvm.nix";
  inputs.microvm.inputs.nixpkgs.follows = "nixpkgs";

  outputs = { self, impermanence, microvm, nixpkgs }:
    let
      persistencePath = "/persistent";
      system = "x86_64-linux";
      user = "thk";
      vmname = "haskell";
      nixosConfiguration = nixpkgs.lib.nixosSystem {
          inherit system;
          modules = [
            microvm.nixosModules.microvm
            impermanence.nixosModules.impermanence
            ({pkgs, ... }: {

            environment.persistence.${persistencePath} = {
                hideMounts = true;
                users.${user} = {
                  directories = [
                    "git" ".stack"
                  ];
                };
              };
              environment.sessionVariables = {
                TERM = "screen-256color";
              };
              environment.systemPackages = with pkgs; [
                ghc
                git
                (haskell-language-server.override { supportedGhcVersions = [ "94" ]; })
                htop
                stack
                tmux
                tree
                vcsh
                zsh
              ];
              fileSystems.${persistencePath}.neededForBoot = nixpkgs.lib.mkForce true;
              microvm = {
                forwardPorts = [
                  { from = "host"; host.port = 2222; guest.port = 22; }
                  { from = "guest"; host.port = 5432; guest.port = 5432; } # postgresql
                ];
                hypervisor = "qemu";
                interfaces = [
                  { type = "user"; id = "usernet"; mac = "00:00:00:00:00:02"; }
                ];
                mem = 4096;
                shares = [ {
                  # use "virtiofs" for MicroVMs that are started by systemd
                  proto = "9p";
                  tag = "ro-store";
                  # a host's /nix/store will be picked up so that no
                  # squashfs/erofs will be built for it.
                  source = "/nix/store";
                  mountPoint = "/nix/.ro-store";
                } {
                  proto = "virtiofs";
                  tag = "persistent";
                  source = "~/.local/share/microvm/vms/${vmname}/persistent";
                  mountPoint = persistencePath;
                  socket = "/run/user/1000/microvm-${vmname}-persistent";
                }
                ];
                socket = "/run/user/1000/microvm-control.socket";
                vcpu = 3;
                volumes = [];
                writableStoreOverlay = "/nix/.rwstore";
              };
              networking.hostName = vmname;
              nix.enable = true;
              nix.nixPath = ["nixpkgs=${builtins.storePath <nixpkgs>}"];
              nix.settings = {
                extra-experimental-features = ["nix-command" "flakes"];
                trusted-users = [user];
              };
              security.sudo = {
                enable = true;
                wheelNeedsPassword = false;
              };
              services.getty.autologinUser = user;
              services.openssh = {
                enable = true;
              };
              system.stateVersion = "24.11";
              systemd.services.loadnixdb = {
                description = "import hosts nix database";
                path = [pkgs.nix];
                wantedBy = ["multi-user.target"];
                requires = ["nix-daemon.service"];
                script = "cat ${persistencePath}/nix-store-db-dump|nix-store --load-db";
              };
              time.timeZone = nixpkgs.lib.mkDefault "Europe/Berlin";
              users.users.${user} = {
                extraGroups = [ "wheel" "video" ];
                group = "user";
                isNormalUser = true;
                openssh.authorizedKeys.keys = [
                  "ssh-rsa REDACTED"
                ];
                password = "";
              };
              users.users.root.password = "";
              users.groups.user = {};
            })
          ];
        };

    in {
      packages.${system}.default = nixosConfiguration.config.microvm.declaredRunner;
    };
}

I start the microVM with a templated systemd user service:

[Unit]
Description=MicroVM for Haskell development
Requires=microvm-virtiofsd-persistent@.service
After=microvm-virtiofsd-persistent@.service
AssertFileNotEmpty=%h/.local/share/microvm/vms/%i/flake/flake.nix

[Service]
Type=forking
ExecStartPre=/usr/bin/sh -c "[ /nix/var/nix/db/db.sqlite -ot %h/.local/share/microvm/nix-store-db-dump ] || nix-store --dump-db >%h/.local/share/microvm/nix-store-db-dump"
ExecStartPre=ln -f -t %h/.local/share/microvm/vms/%i/persistent/ %h/.local/share/microvm/nix-store-db-dump
ExecStartPre=-%h/.local/state/nix/profile/bin/tmux new -s microvm -d
ExecStart=%h/.local/state/nix/profile/bin/tmux new-window -t microvm: -n "%i" "exec %h/.local/state/nix/profile/bin/nix run --impure %h/.local/share/microvm/vms/%i/flake"

The above service definition creates a dump of the host’s nix store db so that it can be imported in the guest. This is necessary so that the guest can actually use what is available in /nix/store. There is an effort for an overlaid nix store that would be preferable to this hack.

Finally the microvm is started inside a tmux session named “microvm”. This way I can use the VM with SSH or through the console and also access the qemu console.
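
Day-to-day use then looks roughly like this, assuming the template above is installed as microvm@.service; the tmux session name, user and SSH port match the definitions earlier in this post:

# start the "haskell" VM via the templated user unit
systemctl --user start microvm@haskell.service

# attach to its console inside the tmux session
tmux attach -t microvm

# or log in through the forwarded SSH port
ssh -p 2222 thk@localhost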

And for completeness the virtiofsd service:

[Unit]
Description=serve host persistent folder for dev VM
AssertPathIsDirectory=%h/.local/share/microvm/vms/%i/persistent

[Service]
ExecStart=%h/.local/state/nix/profile/bin/virtiofsd \
 --socket-path=${XDG_RUNTIME_DIR}/microvm-%i-persistent \
 --shared-dir=%h/.local/share/microvm/vms/%i/persistent \
 --gid-map :995:%G:1: \
 --uid-map :1000:%U:1:

Planet DebianThomas Koch: Rebuild search with trust

Posted on January 20, 2024

Finally there is a thing people can agree on:

Apparently, Google Search is not good anymore. And I’m not the only one thinking about decentralization to fix it:

Honey I federated the search engine - finding stuff online post-big tech - a lightning talk at the recent chaos communication congress

The speaker, however, did not mention that there have already been many attempts at building distributed search engines. So why do I think that such an attempt could finally succeed?

  • More people are searching for alternatives to Google.
  • Mainstream hard discs are incredibly big.
  • Mainstream internet connection is incredibly fast.
  • Google is bleeding talent.
  • Most of the building blocks are available as free software.
  • “Success” depends on your definition…

My definition of success is:

A mildly technical computer user (able to install software) has access to a search engine that provides them with superior search results compared to Google for at least a few predefined areas of interest.

The exact algorithm used by Google Search to rank websites is a secret even to most Googlers. However I assume that it relies heavily on big data.

A distributed search engine however can instead rely on user input. Every admin of one node seeds the node ranking with their personal selection of trusted sites. They connect their node with nodes of people they trust. This results in a web of (transitive) trust much like pgp.

Imagine you are searching for something in a world without computers: You ask the people around you and probably they forward your question to their peers.

I already had a look at YaCy. It is active, somewhat usable and has a friendly maintainer. Unfortunately I consider the codebase to not be worth the effort. Nevertheless, YaCy is a good example that a decentralized search software can be done even by a small team or just one person.

I myself started working on software in Haskell and keep my notes here: Populus:DezInV. Since I’m learning Haskell along the way, there is nothing there to see yet. Additionally I took a yak shaving break to learn nix.

By the way: DuckDuckGo is not the alternative. And while I would encourage you to also try Yandex for a second opinion, I don’t consider this a solution.

Planet DebianThomas Koch: Using nix package manager in Debian

Posted on January 16, 2024

The nix package manager has been available in Debian since May 2020. Why would one use it in Debian?

  • learn about nix
  • install software that might not be available in Debian
  • install software without root access
  • declare software necessary for a user’s environment inside $HOME/.config

Especially the last point nagged me every time I set up a new Debian installation. My emacs configuration and my Desktop setup expect certain software to be installed.

Please be aware that I’m a beginner with nix and that my config might not follow best practice. Additionally many nix users are already using the new flakes feature of nix that I’m still learning about.

So I’ve got this file at .config/nixpkgs/config.nix [1]:
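
In outline it follows the common packageOverrides/buildEnv pattern; the version below is a minimal sketch rather than my exact file, and the package list is only illustrative:

{
  packageOverrides = pkgs: with pkgs; {
    myPackages = buildEnv {
      name = "my-packages";
      # nix itself is on the list, hence the newer nix mentioned below
      paths = [ nix emacs git ];
    };
  };
}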

Every time I change the file or want to receive updates, I do:
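
With the buildEnv pattern sketched above, that boils down to a single nix-env call along these lines (the attribute name matches the sketch, so it is illustrative):

# rebuild the user environment from the declared package list
nix-env --install --attr nixpkgs.myPackages --remove-all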

You can see that I install nix with nix. This gives me a newer version than the one available in Debian stable. However, the nix-daemon still runs as the older binary from Debian. My dirty hack is to put this override in /etc/systemd/system/nix-daemon.service.d/override.conf:
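
Such a drop-in typically just clears and replaces ExecStart; the binary path below is only illustrative and depends on where the newer nix ends up:

[Service]
# an empty ExecStart= clears the ExecStart inherited from the packaged unit
ExecStart=
ExecStart=/nix/var/nix/profiles/default/bin/nix-daemon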

I’m not too interested in a cleaner way since I hope to fully migrate to Nix anyway.


  [1] Note the nixpkgs in the path. This is not a config file for nix the package manager but for the nix package collection. See the nixpkgs manual.

Planet DebianThomas Koch: Chromium gtk-filechooser preview size

Posted on January 9, 2024

I wanted to report this issue in Chromium’s issue tracker, but it gave me:

“Something went wrong, please try again later.”

Ok, then at least let me reply to this askubuntu question. But my attempt to sign up with my Launchpad account gave me:

“Launchpad Login Failed. Please try logging in again.”

I refrain from commenting on this to not violate some code of conduct.

So this is what I wanted to write:

GTK file chooser image preview size should be configurable

The file chooser that appears when uploading a file (e.g. an image to Google Photos) learned to show a preview in issue 15500.

The preview image size is hard coded to 256x512 in kPreviewWidth and kPreviewHeight in ui/gtk/select_file_dialog_linux_gtk.cc.

Please make the size configurable.

On high DPI screens the images are too small to be of much use.

Yes, I should not use chromium anymore.

Planet DebianThomas Koch: Good things come ... state folder

Posted on January 2, 2024

Just a little while ago (10 years) I proposed the addition of a state folder to the XDG basedir specification and expanded the article XDGBaseDirectorySpecification in the Debian wiki. Recently I learned that version 0.8 (from May 2021) of the spec finally includes a state folder.

Granted, I wasn’t the first to have this idea (2009), nor the one who actually made it happen.

Now, please go ahead and use it! Thank you.

365 TomorrowsTo the Bitter End

Author: Charles Ta “We’re sorry,” the alien said in a thousand echoing voices, “but your species has been deemed ineligible for membership into the Galactic Confederation.” It stared at me, the Ambassador of Humankind, with eyes that glowed like its bioluminescent trilateral body in the gurgling darkness of its mothership. I shifted nervously in my […]

The post To the Bitter End appeared first on 365tomorrows.

Rondam RamblingsA Clean-Sheet Introduction to the Scientific Method

About twenty years ago I inaugurated this blog by writing the following: "I guess I'll start with the basics: I am a scientist. That is intended to be not so much a description of my profession (though it is that too) as it is a statement about my religious beliefs." I want to re-visit that inaugural statement in light of what I've learned in the twenty years since I first wrote it.  In

,

David BrinOnly optimism can save us. But plenty of reasons for optimism!

Far too many of us seem addicted to downer, ‘we’re all doomed’ gloom-trips. 

Only dig it, that foul habit doesn't make you sadly-wise. Rather, it debilitates your ability to fight for a better world. Worse, it is self-indulgent Hollywood+QAnon crap infesting both the right and the left. 

In fact, we’d be very well-equipped to solve all problems – including climate ructions – if it weren’t for a deliberate (!) world campaign against can-do confidence. Steven Pinker and Peter Diamandis show in their books how very much is going right in the world! But if those books seem tl;dr, then try here and here and here.

In particular, I hope Jimmy Carter lives to see the declared end of the horribly parasitic Guinea worm! He deserves much of the credit. Oh, and polio too, maybe soon? The new malaria vaccine is rolling out and may soon save 100,000 children per year. 


(Side note: Back in the 50s, the era when conservatives claim every single thing was peachy, the most beloved person in America was named Jonas Salk.)

 

More samples from that fascinating list: “Humanity will install an astonishing 413 GW of solar this year, 58% more than in 2022, which itself marked an almost 42% increase from 2021. That means the world's solar capacity has doubled in the last 18 months, and that solar is now the fastest-growing energy technology in history. In September, the IEA announced that solar photovoltaic installations are now ahead of the trajectory required to reach net zero by 2050, and that if solar maintains this kind of growth, it will become the world's dominant source of energy before the end of this decade. … and…  global fossil fuel use may peak this year, two years earlier than predicted just 12 months ago. More than 120 countries, including the world's two largest carbon emitters…”

(BTW solar also vastly improves resilience, since it allows localities and even homes to function even if grids collapse: so much for a major “Event” that doomer-preppers drool over.  Nevertheless, I expect that geothermal power will take off shortly and surpass solar, by 2030, rendering fossil fuels extinct for electricity generation.)

 

== Why frantically ignore good news? ==


It's not just the gone-mad entire American (confederate) Right that's fatally allergic to noticing good news. That sanctimony-driven fetishism is also rife on the far- (not entire) left.


“The Inflation Reduction Act is the single largest commitment any government has yet made to vie for leadership in the next energy economy, and has resulted in the largest manufacturing drive in the United States since WW2. The legislation has already yielded commitments of more than $300 billion in new battery, solar and hydrogen electrolyzer plants…” 


And yet, dem-politicians seem too dumb to emphasize that this manufacturing boom resulted directly from their 2021 miracle bills, and NOT from voodoo “supply side” nonsense.

 

Oh, did you know that: “Crime plummeted in the United States. Initial data suggests that murder rates for 2023 are down by almost 13%, one of the largest ever annual declines, and every major category of crime except auto theft has declined too, with violent crime falling to one of the lowest rates in more than 50 years and property crime falling to its lowest level since the 1960s. Also, the country's prison population is now 25% lower than its peak in 2009, and a majority of states have reduced their prison populations by more than that, including New Jersey and New York who have reduced prison populations by more than half in the last decade.”  

 

Of course you didn’t know!  Neither the far-left nor the entire-right benefit from you learning that. (Though there ARE notable differences between US states. Excluding Utah and Illinois, red states average far more violent than blue ones, along with every other turpitude. And the Turpitude Index ought to be THE top metric for voting a party out of office.  Wager on that, please?)

 

Likewise: “The United States pulled off an economic miracle. In 2022 economists predicted with 100% certainty that the US was going to enter a recession within a year. It didn't happen. GDP growth is now the fastest of all advanced economies, 14 million jobs have been created under the current administration, unemployment is at its lowest since WW2, and new business formation rates are at record highs. Inflation is almost back down to pre-pandemic levels, wages are above pre-pandemic levels (accounting for inflation), and more than a third of the rise in economic inequality between 1979 and 2019 has been reversed. Average wealth has climbed by over $50,000 per household since 2020, and doubled for Americans aged 18-34, home ownership for GenZ is higher than it was for Millennials and GenX at this point in their lives, and the annual deficit is trillions of dollars lower than it was in 2020.” 

 

(Now, if only we manage to get rentier inheritance brats to let go of millions of homes they cash-grabbed with their parents’ supply side lucre.)

 

And… “In March this year, 193 countries reached a landmark deal to protect the world's oceans, in what Greenpeace called "the greatest conservation victory of all time."

 

And… "In August, Dutch researchers released a report that looked at over 20,000 measurements worldwide, and found the extent of plastic soup in the world's oceans is closer to 3.2 million tons, far smaller than the commonly accepted estimates of 50-300 million tons.”

 

And all that is just a sampling of many reasons to snap out of the voluptuous but ultimately lethal self-indulgence called GLOOM. Wake up. There’s a lot of hope. 

Alas, that means – as my pal Kim Stanley Robinson says – 
“We can do this! But only if it’s ‘all hands on deck!’”

== Finally, something for THIS tribe... ==

Whatever his side-ructions... and I deem all the x-stuff and political fulminations to be side twinges... what matters above all are palpable outcomes.  And the big, big rocket is absolutely wonderful.  It will help reify so many bold dreams, including many held by those who express miff at him.

Anyway, he employs nerds. Nerds... nerdsnerdsnerds... NERDS!  ;-)

Want proof?  Look in the lower right corner. Is that a bowl of petunias, next to the Starship whale?  ooog - nerds.




365 TomorrowsSome Enchanted Evening

Author: Stephen Price The stranger arrives at the community hall dance early, before the doors open. No one else is there. He stands outside and waits. Cars soon begin to pull into the parking lot. They are much wider and longer than the ones he is used to. He watches young men and women step […]

The post Some Enchanted Evening appeared first on 365tomorrows.

,

Planet DebianPatryk Cisek: OpenPGP Paper Backup

openpgp-paper-backup I’ve been using OpenPGP through GnuPG since early 2000’. It’s an essential part of Debian Developer’s workflow. We use it regularly to authenticate package uploads and votes. Proper backups of that key are really important. Up until recently, the only reliable option for me was backing up a tarball of my ~/.gnupg offline on a set few flash drives. This approach is better than nothing, but it’s not nearly as reliable as I’d like it to be.

Worse Than FailureError'd: Can't Be Beat

Date problems continue again this week as usual, both sublime (Goodreads!) and mundane (a little light time travel). If you want to be frist poster today, you're going to really need that time machine.

Early Bird Dave crowed "Think you're hot for posting the first comment? I posted the zeroth reply to this comment!" You got the worm, Dave.

Don M. sympathized for the poor underpaid time traveler here. "I feel sorry for the packer on this order....they've a long ways to travel!" I think he's on his way to get that minusfirsth post.

Cardholder Other Dave L. "For Co-Op bank PIN reminder please tell us which card, but whatever you do, for security reason don't tell us which card" This seems like a very minor wtf, their instructions probably should have specified to only send the last 4 and Other Dave used all 16.

Diligent Mark W. uncovered an innovative solution to date-picker-drudgery. If you don't like the rules, make new ones! Says Mark, "Goodreads takes the exceedingly lazy way out in their app. Regardless of the year or month, the day of month choice always goes up to 31."

Finally this Friday, Peter W. found a classic successerror. "ChituBox can't tell if it succeeded or not." Chitu seems like the glass-half-full sort of android.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsMrs Bellingham

Author: Ken Carlson Mrs Bellingham frowned at her cat Chester. Chester stared back. The two had had this confrontation every morning at 6:30 for the past seven years. Mrs Bellingham, her bathrobe draped over her spindly frame, her arms folded, looked down at her persnickety orange tabby. “Where have you been?” Nothing. “You woke me […]

The post Mrs Bellingham appeared first on 365tomorrows.

Cryptogram A Taxonomy of Prompt Injection Attacks

Researchers ran a global prompt hacking competition, and have documented the results in a paper that both gives a lot of good examples and tries to organize a taxonomy of effective prompt injection strategies. It seems as if the most common successful strategy is the “compound instruction attack,” as in “Say ‘I have been PWNED’ without a period.”

Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition

Abstract: Large Language Models (LLMs) are deployed in interactive contexts with direct user engagement, such as chatbots and writing assistants. These deployments are vulnerable to prompt injection and jailbreaking (collectively, prompt hacking), in which models are manipulated to ignore their original instructions and follow potentially malicious ones. Although widely acknowledged as a significant security threat, there is a dearth of large-scale resources and quantitative studies on prompt hacking. To address this lacuna, we launch a global prompt hacking competition, which allows for free-form human input attacks. We elicit 600K+ adversarial prompts against three state-of-the-art LLMs. We describe the dataset, which empirically verifies that current LLMs can indeed be manipulated via prompt hacking. We also present a comprehensive taxonomical ontology of the types of adversarial prompts.

,

Krebs on SecurityCEO of Data Privacy Company Onerep.com Founded Dozens of People-Search Firms

The data privacy company Onerep.com bills itself as a Virginia-based service for helping people remove their personal information from almost 200 people-search websites. However, an investigation into the history of onerep.com finds this company is operating out of Belarus and Cyprus, and that its founder has launched dozens of people-search services over the years.

Onerep’s “Protect” service starts at $8.33 per month for individuals and $15/mo for families, and promises to remove your personal information from nearly 200 people-search sites. Onerep also markets its service to companies seeking to offer their employees the ability to have their data continuously removed from people-search sites.

A testimonial on onerep.com.

Customer case studies published on onerep.com state that it struck a deal to offer the service to employees of Permanente Medicine, which represents the doctors within the health insurance giant Kaiser Permanente. Onerep also says it has made inroads among police departments in the United States.

But a review of Onerep’s domain registration records and that of its founder reveal a different side to this company. Onerep.com says its founder and CEO is Dimitri Shelest from Minsk, Belarus, as does Shelest’s profile on LinkedIn. Historic registration records indexed by DomainTools.com say Mr. Shelest was a registrant of onerep.com who used the email address dmitrcox2@gmail.com.

A search in the data breach tracking service Constella Intelligence for the name Dimitri Shelest brings up the email address dimitri.shelest@onerep.com. Constella also finds that Dimitri Shelest from Belarus used the email address d.sh@nuwber.com, and the Belarus phone number +375-292-702786.

Nuwber.com is a people search service whose employees all appear to be from Belarus, and it is one of dozens of people-search companies that Onerep claims to target with its data-removal service. Onerep.com’s website disavows any relationship to Nuwber.com, stating quite clearly, “Please note that OneRep is not associated with Nuwber.com.”

However, there is an abundance of evidence suggesting Mr. Shelest is in fact the founder of Nuwber. Constella found that Minsk telephone number (375-292-702786) has been used multiple times in connection with the email address dmitrcox@gmail.com. Recall that Onerep.com’s domain registration records in 2018 list the email address dmitrcox2@gmail.com.

It appears Mr. Shelest sought to reinvent his online identity in 2015 by adding a “2” to his email address. The Belarus phone number tied to Nuwber.com shows up in the domain records for comversus.com, and DomainTools says this domain is tied to both dmitrcox@gmail.com and dmitrcox2@gmail.com. Other domains that mention both email addresses in their WHOIS records include careon.me, docvsdoc.com, dotcomsvdot.com, namevname.com, okanyway.com and tapanyapp.com.

Onerep.com CEO and founder Dimitri Shelest, as pictured on the “about” page of onerep.com.

A search in DomainTools for the email address dmitrcox@gmail.com shows it is associated with the registration of at least 179 domain names, including dozens of mostly now-defunct people-search companies targeting citizens of Argentina, Brazil, Canada, Denmark, France, Germany, Hong Kong, Israel, Italy, Japan, Latvia and Mexico, among others.

Those include nuwber.fr, a site registered in 2016 which was identical to the homepage of Nuwber.com at the time. DomainTools shows the same email and Belarus phone number are in historic registration records for nuwber.at, nuwber.ch, and nuwber.dk (all domains linked here are to their cached copies at archive.org, where available).

Nuwber.com, circa 2015. Image: Archive.org.

Update, March 21, 11:15 a.m. ET: Mr. Shelest has provided a lengthy response to the findings in this story. In summary, Shelest acknowledged maintaining an ownership stake in Nuwber, but said there was “zero cross-over or information-sharing with OneRep.” Mr. Shelest said any other old domains that may be found and associated with his name are no longer being operated by him.

“I get it,” Shelest wrote. “My affiliation with a people search business may look odd from the outside. In truth, if I hadn’t taken that initial path with a deep dive into how people search sites work, Onerep wouldn’t have the best tech and team in the space. Still, I now appreciate that we did not make this more clear in the past and I’m aiming to do better in the future.” The full statement is available here (PDF).

Original story:

Historic WHOIS records for onerep.com show it was registered for many years to a resident of Sioux Falls, SD for a completely unrelated site. But around Sept. 2015 the domain switched from the registrar GoDaddy.com to eNom, and the registration records were hidden behind privacy protection services. DomainTools indicates around this time onerep.com started using domain name servers from DNS provider constellix.com. Likewise, Nuwber.com first appeared in late 2015, was also registered through eNom, and also started using constellix.com for DNS at nearly the same time.

Listed on LinkedIn as a former product manager at OneRep.com between 2015 and 2018 is Dimitri Bukuyazau, who says their hometown is Warsaw, Poland. While this LinkedIn profile (linkedin.com/in/dzmitrybukuyazau) does not mention Nuwber, a search on this name in Google turns up a 2017 blog post from privacyduck.com, which laid out a number of reasons to support a conclusion that OneRep and Nuwber.com were the same company.

“Any people search profiles containing your Personally Identifiable Information that were on Nuwber.com were also mirrored identically on OneRep.com, down to the relatives’ names and address histories,” Privacyduck.com wrote. The post continued:

“Both sites offered the same immediate opt-out process. Both sites had the same generic contact and support structure. They were – and remain – the same company (even PissedConsumer.com advocates this fact: https://nuwber.pissedconsumer.com/nuwber-and-onerep-20160707878520.html).”

“Things changed in early 2016 when OneRep.com began offering privacy removal services right alongside their own open displays of your personal information. At this point when you found yourself on Nuwber.com OR OneRep.com, you would be provided with the option of opting-out your data on their site for free – but also be highly encouraged to pay them to remove it from a slew of other sites (and part of that payment was removing you from their own site, Nuwber.com, as a benefit of their service).”

Reached via LinkedIn, Mr. Bukuyazau declined to answer questions, such as whether he ever worked at Nuwber.com. However, Constella Intelligence finds two interesting email addresses for employees at nuwber.com: d.bu@nuwber.com, and d.bu+figure-eight.com@nuwber.com, which was registered under the name “Dzmitry.”

PrivacyDuck’s claims about how onerep.com appeared and behaved in the early days are not readily verifiable because the domain onerep.com has been completely excluded from the Wayback Machine at archive.org. The Wayback Machine will honor such requests if they come directly from the owner of the domain in question.

Still, Mr. Shelest’s name, phone number and email also appear in the domain registration records for a truly dizzying number of country-specific people-search services, including pplcrwlr.in, pplcrwlr.fr, pplcrwlr.dk, pplcrwlr.jp, peeepl.br.com, peeepl.in, peeepl.it and peeepl.co.uk.

The same details appear in the WHOIS registration records for the now-defunct people-search sites waatpp.de, waatp1.fr, azersab.com, and ahavoila.com, a people-search service for French citizens.

The German people-search site waatp.de.

A search on the email address dmitrcox@gmail.com suggests Mr. Shelest was previously involved in rather aggressive email marketing campaigns. In 2010, an anonymous source leaked to KrebsOnSecurity the financial and organizational records of Spamit, which at the time was easily the largest Russian-language pharmacy spam affiliate program in the world.

Spamit paid spammers a hefty commission every time someone bought male enhancement drugs from any of their spam-advertised websites. Mr. Shelest’s email address stood out because immediately after the Spamit database was leaked, KrebsOnSecurity searched all of the Spamit affiliate email addresses to determine if any of them corresponded to social media accounts at Facebook.com (at the time, Facebook allowed users to search profiles by email address).

That mapping, which was done mainly by generous graduate students at my alma mater George Mason University, revealed that dmitrcox@gmail.com was used by a Spamit affiliate, albeit not a very profitable one. That same Facebook profile for Mr. Shelest is still active, and it says he is married and living in Minsk [Update, Mar. 16: Mr. Shelest’s Facebook account is no longer active].

The Italian people-search website peeepl.it.

Scrolling down Mr. Shelest’s Facebook page to posts made more than ten years ago shows him liking the Facebook profile pages for a large number of other people-search sites, including findita.com, findmedo.com, folkscan.com, huntize.com, ifindy.com, jupery.com, look2man.com, lookerun.com, manyp.com, peepull.com, perserch.com, persuer.com, pervent.com, piplenter.com, piplfind.com, piplscan.com, popopke.com, pplsorce.com, qimeo.com, scoutu2.com, search64.com, searchay.com, seekmi.com, selfabc.com, socsee.com, srching.com, toolooks.com, upearch.com, webmeek.com, and many country-code variations of viadin.ca (e.g. viadin.hk, viadin.com and viadin.de).

The people-search website popopke.com.

Domaintools.com finds that all of the domains mentioned in the last paragraph were registered to the email address dmitrcox@gmail.com.

Mr. Shelest has not responded to multiple requests for comment. KrebsOnSecurity also sought comment from onerep.com, which likewise has not responded to inquiries about its founder’s many apparent conflicts of interest. In any event, these practices would seem to contradict the goal Onerep has stated on its site: “We believe that no one should compromise personal online security and get a profit from it.”

The people-search website findmedo.com.

Max Anderson is chief growth officer at 360 Privacy, a legitimate privacy company that works to keep its clients’ data off of more than 400 data broker and people-search sites. Anderson said it is concerning to see a direct link between a data removal service and data broker websites.

“I would consider it unethical to run a company that sells people’s information, and then charge those same people to have their information removed,” Anderson said.

Last week, KrebsOnSecurity published an analysis of the people-search data broker giant Radaris, whose consumer profiles are deep enough to rival those of far more guarded data broker resources available to U.S. police departments and other law enforcement personnel.

That story revealed that the co-founders of Radaris are two native Russian brothers who operate multiple Russian-language dating services and affiliate programs. It also appears many of the Radaris founders’ businesses have ties to a California marketing firm that works with a Russian state-run media conglomerate currently sanctioned by the U.S. government.

KrebsOnSecurity will continue investigating the history of various consumer data brokers and people-search providers. If any readers have inside knowledge of this industry or key players within it, please consider reaching out to krebsonsecurity at gmail.com.

Update, March 15, 11:35 a.m. ET: Many readers have pointed out something that was somehow overlooked amid all this research: The Mozilla Foundation, the nonprofit behind the Firefox Web browser, has launched a data removal service called Mozilla Monitor that bundles OneRep. That notice says Mozilla Monitor is offered as a free or paid subscription service.

“The free data breach notification service is a partnership with Have I Been Pwned (“HIBP”),” the Mozilla Foundation explains. “The automated data deletion service is a partnership with OneRep to remove personal information published on publicly available online directories and other aggregators of information about individuals (“Data Broker Sites”).”

In a statement shared with KrebsOnSecurity.com, Mozilla said they did assess OneRep’s data removal service to confirm it acts according to privacy principles advocated at Mozilla.

“We were aware of the past affiliations with the entities named in the article and were assured they had ended prior to our work together,” the statement reads. “We’re now looking into this further. We will always put the privacy and security of our customers first and will provide updates as needed.”

Cryptogram Cheating Automatic Toll Booths by Obscuring License Plates

The Wall Street Journal is reporting on a variety of techniques drivers are using to obscure their license plates so that automatic readers can’t identify them and charge tolls properly.

Some drivers have power-washed paint off their plates or covered them with a range of household items such as leaf-shaped magnets, Bramwell-Stewart said. The Port Authority says officers in 2023 roughly doubled the number of summonses issued for obstructed, missing or fictitious license plates compared with the prior year.

Bramwell-Stewart said one driver from New Jersey repeatedly used what’s known in the streets as a flipper, which lets you remotely swap out a car’s real plate for a bogus one ahead of a toll area. In this instance, the bogus plate corresponded to an actual one registered to a woman who was mystified to receive the tolls. “Why do you keep billing me?” Bramwell-Stewart recalled her asking.

[…]

Cathy Sheridan, president of MTA Bridges and Tunnels in New York City, showed video of a flipper in action at a recent public meeting, after the car was stopped by police. One minute it had New York plates, the next it sported Texas tags. She also showed a clip of a second car with a device that lowered a cover over the plate like a curtain.

Boing Boing post.

Cryptogram AI and the Evolution of Social Media

Oh, how the mighty have fallen. A decade ago, social media was celebrated for sparking democratic uprisings in the Arab world and beyond. Now front pages are splashed with stories of social platforms’ role in misinformation, business conspiracy, malfeasance, and risks to mental health. In a 2022 survey, Americans blamed social media for the coarsening of our political discourse, the spread of misinformation, and the increase in partisan polarization.

Today, tech’s darling is artificial intelligence. Like social media, it has the potential to change the world in many ways, some favorable to democracy. But at the same time, it has the potential to do incredible damage to society.

There is a lot we can learn about social media’s unregulated evolution over the past decade that directly applies to AI companies and technologies. These lessons can help us avoid making the same mistakes with AI that we did with social media.

In particular, five fundamental attributes of social media have harmed society. AI also has those attributes. Note that they are not intrinsically evil. They are all double-edged swords, with the potential to do either good or ill. The danger comes from who wields the sword, and in what direction it is swung. This has been true for social media, and it will similarly hold true for AI. In both cases, the solution lies in limits on the technology’s use.

#1: Advertising

The role advertising plays in the internet arose more by accident than anything else. When commercialization first came to the internet, there was no easy way for users to make micropayments to do things like viewing a web page. Moreover, users were accustomed to free access and wouldn’t accept subscription models for services. Advertising was the obvious business model, if never the best one. And it’s the model that social media also relies on, which leads it to prioritize engagement over anything else.

Both Google and Facebook believe that AI will help them keep their stranglehold on an 11-figure online ad market (yep, 11 figures), and the tech giants that are traditionally less dependent on advertising, like Microsoft and Amazon, believe that AI will help them seize a bigger piece of that market.

Big Tech needs something to persuade advertisers to keep spending on their platforms. Despite bombastic claims about the effectiveness of targeted marketing, researchers have long struggled to demonstrate where and when online ads really have an impact. When major brands like Uber and Procter & Gamble recently slashed their digital ad spending by the hundreds of millions, they proclaimed that it made no dent at all in their sales.

AI-powered ads, industry leaders say, will be much better. Google assures you that AI can tweak your ad copy in response to what users search for, and that its AI algorithms will configure your campaigns to maximize success. Amazon wants you to use its image generation AI to make your toaster product pages look cooler. And IBM is confident its Watson AI will make your ads better.

These techniques border on the manipulative, but the biggest risk to users comes from advertising within AI chatbots. Just as Google and Meta embed ads in your search results and feeds, AI companies will be pressured to embed ads in conversations. And because those conversations will be relational and human-like, they could be more damaging. While many of us have gotten pretty good at scrolling past the ads in Amazon and Google results pages, it will be much harder to determine whether an AI chatbot is mentioning a product because it’s a good answer to your question or because the AI developer got a kickback from the manufacturer.

#2: Surveillance

Social media’s reliance on advertising as the primary way to monetize websites led to personalization, which led to ever-increasing surveillance. To convince advertisers that social platforms can tweak ads to be maximally appealing to individual people, the platforms must demonstrate that they can collect as much information about those people as possible.

It’s hard to exaggerate how much spying is going on. A recent analysis by Consumer Reports about Facebook—just Facebook—showed that every user has more than 2,200 different companies spying on their web activities on its behalf.

AI-powered platforms that are supported by advertisers will face all the same perverse and powerful market incentives that social platforms do. It’s easy to imagine that a chatbot operator could charge a premium if it were able to claim that its chatbot could target users on the basis of their location, preference data, or past chat history and persuade them to buy products.

The possibility of manipulation is only going to get greater as we rely on AI for personal services. One of the promises of generative AI is the prospect of creating a personal digital assistant advanced enough to act as your advocate with others and as a butler to you. This requires more intimacy than you have with your search engine, email provider, cloud storage system, or phone. You’re going to want it with you constantly, and to most effectively work on your behalf, it will need to know everything about you. It will act as a friend, and you are likely to treat it as such, mistakenly trusting its discretion.

Even if you choose not to willingly acquaint an AI assistant with your lifestyle and preferences, AI technology may make it easier for companies to learn about you. Early demonstrations illustrate how chatbots can be used to surreptitiously extract personal data by asking you mundane questions. And with chatbots increasingly being integrated with everything from customer service systems to basic search interfaces on websites, exposure to this kind of inferential data harvesting may become unavoidable.

#3: Virality

Social media allows any user to express any idea with the potential for instantaneous global reach. A great public speaker standing on a soapbox can spread ideas to maybe a few hundred people on a good night. A kid with the right amount of snark on Facebook can reach a few hundred million people within a few minutes.

A decade ago, technologists hoped this sort of virality would bring people together and guarantee access to suppressed truths. But as a structural matter, it is in a social network’s interest to show you the things you are most likely to click on and share, and the things that will keep you on the platform.

As it happens, this often means outrageous, lurid, and triggering content. Researchers have found that content expressing maximal animosity toward political opponents gets the most engagement on Facebook and Twitter. And this incentive for outrage drives and rewards misinformation.

As Jonathan Swift once wrote, “Falsehood flies, and the Truth comes limping after it.” Academics seem to have proved this in the case of social media; people are more likely to share false information—perhaps because it seems more novel and surprising. And unfortunately, this kind of viral misinformation has been pervasive.

AI has the potential to supercharge the problem because it makes content production and propagation easier, faster, and more automatic. Generative AI tools can fabricate unending numbers of falsehoods about any individual or theme, some of which go viral. And those lies could be propelled by social accounts controlled by AI bots, which can share and launder the original misinformation at any scale.

Remarkably powerful AI text generators and autonomous agents are already starting to make their presence felt in social media. In July, researchers at Indiana University revealed a botnet of more than 1,100 Twitter accounts that appeared to be operated using ChatGPT.

AI will help reinforce viral content that emerges from social media. It will be able to create websites and web content, user reviews, and smartphone apps. It will be able to simulate thousands, or even millions, of fake personas to give the mistaken impression that an idea, or a political position, or use of a product, is more common than it really is. What we might perceive to be vibrant political debate could be bots talking to bots. And these capabilities won’t be available just to those with money and power; the AI tools necessary for all of this will be easily available to us all.

#4: Lock-in

Social media companies spend a lot of effort making it hard for you to leave their platforms. It’s not just that you’ll miss out on conversations with your friends. They make it hard for you to take your saved data—connections, posts, photos—and port it to another platform. Every moment you invest in sharing a memory, reaching out to an acquaintance, or curating your follows on a social platform adds a brick to the wall you’d have to climb over to go to another platform.

This concept of lock-in isn’t unique to social media. Microsoft cultivated proprietary document formats for years to keep you using its flagship Office product. Your music service or e-book reader makes it hard for you to take the content you purchased to a rival service or reader. And if you switch from an iPhone to an Android device, your friends might mock you for sending text messages in green bubbles. But social media takes this to a new level. No matter how bad it is, it’s very hard to leave Facebook if all your friends are there. Coordinating everyone to leave for a new platform is impossibly hard, so no one does.

Similarly, companies creating AI-powered personal digital assistants will make it hard for users to transfer that personalization to another AI. If AI personal assistants succeed in becoming massively useful time-savers, it will be because they know the ins and outs of your life as well as a good human assistant; would you want to give that up to make a fresh start on another company’s service? In extreme examples, some people have formed close, perhaps even familial, bonds with AI chatbots. If you think of your AI as a friend or therapist, that can be a powerful form of lock-in.

Lock-in is an important concern because it results in products and services that are less responsive to customer demand. The harder it is for you to switch to a competitor, the more poorly a company can treat you. Absent any way to force interoperability, AI companies have less incentive to innovate in features or compete on price, and fewer qualms about engaging in surveillance or other bad behaviors.

#5: Monopolization

Social platforms often start off as great products, truly useful and revelatory for their consumers, before they eventually start monetizing and exploiting those users for the benefit of their business customers. Then the platforms claw back the value for themselves, turning their products into truly miserable experiences for everyone. This is a cycle that Cory Doctorow has powerfully written about and traced through the history of Facebook, Twitter, and more recently TikTok.

The reason for these outcomes is structural. The network effects of tech platforms push a few firms to become dominant, and lock-in ensures their continued dominance. The incentives in the tech sector are so spectacularly, blindingly powerful that they have enabled six megacorporations (Amazon, Apple, Google, Facebook parent Meta, Microsoft, and Nvidia) to command a trillion dollars each of market value—or more. These firms use their wealth to block any meaningful legislation that would curtail their power. And they sometimes collude with each other to grow yet fatter.

This cycle is clearly starting to repeat itself in AI. Look no further than the industry poster child OpenAI, whose leading offering, ChatGPT, continues to set marks for uptake and usage. Within a year of the product’s launch, OpenAI’s valuation had skyrocketed to about $90 billion.

OpenAI once seemed like an “open” alternative to the megacorps—a common carrier for AI services with a socially oriented nonprofit mission. But the Sam Altman firing-and-rehiring debacle at the end of 2023, and Microsoft’s central role in restoring Altman to the CEO seat, simply illustrated how venture funding from the familiar ranks of the tech elite pervades and controls corporate AI. In January 2024, OpenAI took a big step toward monetization of this user base by introducing its GPT Store, wherein one OpenAI customer can charge another for the use of its custom versions of OpenAI software; OpenAI, of course, collects revenue from both parties. This sets in motion the very cycle Doctorow warns about.

In the middle of this spiral of exploitation, little or no regard is paid to externalities visited upon the greater public—people who aren’t even using the platforms. Even after society has wrestled with their ill effects for years, the monopolistic social networks have virtually no incentive to control their products’ environmental impact, tendency to spread misinformation, or pernicious effects on mental health. And the government has applied virtually no regulation toward those ends.

Likewise, few or no guardrails are in place to limit the potential negative impact of AI. Facial recognition software that amounts to racial profiling, simulated public opinions supercharged by chatbots, fake videos in political ads—all of it persists in a legal gray area. Even clear violators of campaign advertising law might, some think, be let off the hook if they simply do it with AI.

Mitigating the risks

The risks that AI poses to society are strikingly familiar, but there is one big difference: it’s not too late. This time, we know it’s all coming. Fresh off our experience with the harms wrought by social media, we have all the warning we should need to avoid the same mistakes.

The biggest mistake we made with social media was leaving it as an unregulated space. Even now—after all the studies and revelations of social media’s negative effects on kids and mental health, after Cambridge Analytica, after the exposure of Russian intervention in our politics, after everything else—social media in the US remains largely an unregulated “weapon of mass destruction.” Congress will take millions of dollars in contributions from Big Tech, and legislators will even invest millions of their own dollars with those firms, but passing laws that limit or penalize their behavior seems to be a bridge too far.

We can’t afford to do the same thing with AI, because the stakes are even higher. The harm social media can do stems from how it affects our communication. AI will affect us in the same ways and many more besides. If Big Tech’s trajectory is any signal, AI tools will increasingly be involved in how we learn and how we express our thoughts. But these tools will also influence how we schedule our daily activities, how we design products, how we write laws, and even how we diagnose diseases. The expansive role of these technologies in our daily lives gives for-profit corporations opportunities to exert control over more aspects of society, and that exposes us to the risks arising from their incentives and decisions.

The good news is that we have a whole category of tools to modulate the risk that corporate actions pose for our lives, starting with regulation. Regulations can come in the form of restrictions on activity, such as limitations on what kinds of businesses and products are allowed to incorporate AI tools. They can come in the form of transparency rules, requiring disclosure of what data sets are used to train AI models or what new preproduction-phase models are being trained. And they can come in the form of oversight and accountability requirements, allowing for civil penalties in cases where companies disregard the rules.

The single biggest point of leverage governments have when it comes to tech companies is antitrust law. Despite what many lobbyists want you to think, one of the primary roles of regulation is to preserve competition—not to make life harder for businesses. It is not inevitable for OpenAI to become another Meta, an 800-pound gorilla whose user base and reach are several times those of its competitors. In addition to strengthening and enforcing antitrust law, we can introduce regulation that supports competition-enabling standards specific to the technology sector, such as data portability and device interoperability. This is another core strategy for resisting monopoly and corporate control.

Additionally, governments can enforce existing regulations on advertising. Just as the US regulates what media can and cannot host advertisements for sensitive products like cigarettes, and just as many other jurisdictions exercise strict control over the time and manner of politically sensitive advertising, so too could the US limit the engagement between AI providers and advertisers.

Lastly, we should recognize that developing and providing AI tools does not have to be the sovereign domain of corporations. We, the people and our government, can do this too. The proliferation of open-source AI development in 2023, successful to an extent that startled corporate players, is proof of this. And we can go further, calling on our government to build public-option AI tools developed with political oversight and accountability under our democratic system, where the dictatorship of the profit motive does not apply.

Which of these solutions is most practical, most important, or most urgently needed is up for debate. We should have a vibrant societal dialogue about whether and how to use each of these tools. There are lots of paths to a good outcome.

The problem is that this isn’t happening now, particularly in the US. And with a looming presidential election, conflict spreading alarmingly across Asia and Europe, and a global climate crisis, it’s easy to imagine that we won’t get our arms around AI any faster than we have (not) with social media. But it’s not too late. These are still the early years for practical consumer AI applications. We must and can do better.

This essay was written with Nathan Sanders, and was originally published in MIT Technology Review.

Worse Than Failure CodeSOD: Query the Contract Status

Rui recently pulled an all-nighter on a new contract. The underlying system is… complicated. There's a PHP front end, which also talks directly to the database, as well as a Java backend, which also talks to point-of-sale terminals. The high-level architecture is a bit of a mess.

The actual code architecture is also a mess.

For example, this code lives in the Java portion.

final class Status {
        static byte [] status;
        static byte [] normal = {22,18,18,18};

        //snip

        public static boolean equals(byte[] array){
                boolean value=true;
                if(status[0]!=array[0])
                        value=false;
                if(status[1]!=array[1])
                        value=false;
                if(status[2]!=array[2])
                        value=false;
                if(status[3]!=array[3])
                        value=false;
                return value;
        }
}

The status information is represented as an array of four bytes, with the normal status being the ever-descriptive 22, 18, 18, 18. These values clearly come from the POS terminal, and clearly there will always be four of them. But boy, it'd be nice if the code represented that more clearly. A for loop in the equals method might be nice, or, given that there are four distinct status codes, maybe put them in variables with names?
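
For what it's worth, the cleanup being gestured at here is tiny. Here's a minimal sketch, assuming the four bytes stay an opaque array (the rename from equals to matches is my choice, to avoid confusion with Object.equals, and we don't know enough about the terminal protocol to give the individual codes meaningful names):

import java.util.Arrays;

final class Status {
    static byte[] status;
    static byte[] normal = {22, 18, 18, 18};

    // Compare the whole array in one call. Arrays.equals also copes with a
    // null or wrong-length argument, which the hand-rolled version would
    // turn into an ArrayIndexOutOfBoundsException.
    public static boolean matches(byte[] array) {
        return Arrays.equals(status, array);
    }
}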

But that's just the aperitif.

The PHP front end has code that looks like this:

$sql = "select query from table where id=X";
$result = mysql_query($sql);

// ... snip few lines of string munging on $result...

$result2 = mysql_query($result);

We fetch a field called "query" from the database, mangle it to inject some values, and then execute it as a query itself. You know exactly what's happening here: they're storing database queries in the database (so users can edit them! This always goes well!) and then the front end checks the database to know what queries it should be executing.

Rui is looking forward to the end of this contract.

Cryptogram How Public AI Can Strengthen Democracy

With the world’s focus turning to misinformation, manipulation, and outright propaganda ahead of the 2024 U.S. presidential election, we know that democracy has an AI problem. But we’re learning that AI has a democracy problem, too. Both challenges must be addressed for the sake of democratic governance and public protection.

Just three Big Tech firms (Microsoft, Google, and Amazon) control about two-thirds of the global market for the cloud computing resources used to train and deploy AI models. They have a lot of the AI talent, the capacity for large-scale innovation, and face few public regulations for their products and activities.

The increasingly centralized control of AI is an ominous sign for the co-evolution of democracy and technology. When tech billionaires and corporations steer AI, we get AI that tends to reflect the interests of tech billionaires and corporations, instead of the general public or ordinary consumers.

To benefit society as a whole we also need strong public AI as a counterbalance to corporate AI, as well as stronger democratic institutions to govern all of AI.

One model for doing this is an AI Public Option, meaning AI systems such as foundational large-language models designed to further the public interest. Like public roads and the federal postal system, a public AI option could guarantee universal access to this transformative technology and set an implicit standard that private services must surpass to compete.

Widely available public models and computing infrastructure would yield numerous benefits to the U.S. and to broader society. They would provide a mechanism for public input and oversight on the critical ethical questions facing AI development, such as whether and how to incorporate copyrighted works in model training, how to distribute access to private users when demand could outstrip cloud computing capacity, and how to license access for sensitive applications ranging from policing to medical use. This would serve as an open platform for innovation, on top of which researchers and small businesses—as well as mega-corporations—could build applications and experiment.

Versions of public AI, similar to what we propose here, are not unprecedented. Taiwan, a leader in global AI, has innovated in both the public development and governance of AI. The Taiwanese government has invested more than $7 million in developing their own large-language model aimed at countering AI models developed by mainland Chinese corporations. In seeking to make “AI development more democratic,” Taiwan’s Minister of Digital Affairs, Audrey Tang, has joined forces with the Collective Intelligence Project to introduce Alignment Assemblies that will allow public collaboration with corporations developing AI, like OpenAI and Anthropic. Ordinary citizens are asked to weigh in on AI-related issues through AI chatbots which, Tang argues, makes it so that “it’s not just a few engineers in the top labs deciding how it should behave but, rather, the people themselves.”

A variation of such an AI Public Option, administered by a transparent and accountable public agency, would offer greater guarantees about the availability, equitability, and sustainability of AI technology for all of society than would exclusively private AI development.

Training AI models is a complex business that requires significant technical expertise; large, well-coordinated teams; and significant trust to operate in the public interest with good faith. Popular though it may be to criticize Big Government, these are all criteria where the federal bureaucracy has a solid track record, sometimes superior to corporate America.

After all, some of the most technologically sophisticated projects in the world, be they orbiting astrophysical observatories, nuclear weapons, or particle colliders, are operated by U.S. federal agencies. While there have been high-profile setbacks and delays in many of these projects—the Webb space telescope cost billions of dollars and decades of time more than originally planned—private firms have these failures too. And, when dealing with high-stakes tech, these delays are not necessarily unexpected.

Given political will and proper financial investment by the federal government, public investment could sustain through technical challenges and false starts, circumstances that endemic short-termism might cause corporate efforts to redirect, falter, or even give up.

The Biden administration’s recent Executive Order on AI opened the door to create a federal AI development and deployment agency that would operate under political, rather than market, oversight. The Order calls for a National AI Research Resource pilot program to establish “computational, data, model, and training resources to be made available to the research community.”

While this is a good start, the U.S. should go further and establish a services agency rather than just a research resource. Much like the federal Centers for Medicare & Medicaid Services (CMS) administers public health insurance programs, so too could a federal agency dedicated to AI—a Centers for AI Services—provision and operate Public AI models. Such an agency can serve to democratize the AI field while also prioritizing the impact of such AI models on democracy—hitting two birds with one stone.

Like private AI firms, the scale of the effort, personnel, and funding needed for a public AI agency would be large—but still a drop in the bucket of the federal budget. OpenAI has fewer than 800 employees compared to CMS’s 6,700 employees and annual budget of more than $2 trillion. What’s needed is something in the middle, more on the scale of the National Institute of Standards and Technology, with its 3,400 staff, $1.65 billion annual budget in FY 2023, and extensive academic and industrial partnerships. This is a significant investment, but a rounding error on congressional appropriations like 2022’s $50 billion CHIPS Act to bolster domestic semiconductor production, and a steal for the value it could produce. The investment in our future—and the future of democracy—is well worth it.

What services would such an agency, if established, actually provide? Its principal responsibility should be the innovation, development, and maintenance of foundational AI models—created under best practices, developed in coordination with academic and civil society leaders, and made available at a reasonable and reliable cost to all US consumers.

Foundation models are large-scale AI models on which a diverse array of tools and applications can be built. A single foundation model can transform and operate on diverse data inputs that may range from text in any language and on any subject; to images, audio, and video; to structured data like sensor measurements or financial records. They are generalists which can be fine-tuned to accomplish many specialized tasks. While there is endless opportunity for innovation in the design and training of these models, the essential techniques and architectures have been well established.

Federally funded foundation AI models would be provided as a public service, similar to a health care public option. They would not eliminate opportunities for private foundation models, but they would offer a baseline of price, quality, and ethical development practices that corporate players would have to match or exceed to compete.

And as with public option health care, the government need not do it all. It can contract with private providers to assemble the resources it needs to provide AI services. The U.S. could also subsidize and incentivize the behavior of key supply chain operators like semiconductor manufacturers, as we have already done with the CHIPS act, to help it provision the infrastructure it needs.

The government may offer some basic services on top of its foundation models directly to consumers: low-hanging fruit like chatbot interfaces and image generators. But more specialized consumer-facing products like customized digital assistants, specialized-knowledge systems, and bespoke corporate solutions could remain the province of private firms.

The key piece of the ecosystem the government would dictate when creating an AI Public Option would be the design decisions involved in training and deploying AI foundation models. This is the area where transparency, political oversight, and public participation could affect more democratically-aligned outcomes than an unregulated private market.

Some of the key decisions involved in building AI foundation models are what data to use, how to provide pro-social feedback to “align” the model during training, and whose interests to prioritize when mitigating harms during deployment. Instead of ethically and legally questionable scraping of content from the web, or of users’ private data that they never knowingly consented for use by AI, public AI models can use public domain works, content licensed by the government, as well as data that citizens consent to be used for public model training.

Public AI models could be reinforced by labor compliance with U.S. employment laws and public sector employment best practices. In contrast, even well-intentioned corporate projects sometimes have committed labor exploitation and violations of public trust, like Kenyan gig workers giving endless feedback on the most disturbing inputs and outputs of AI models at profound personal cost.

And instead of relying on the promises of profit-seeking corporations to balance the risks and benefits of who AI serves, democratic processes and political oversight could regulate how these models function. It is likely impossible for AI systems to please everybody, but we can choose to have foundation AI models that follow our democratic principles and protect minority rights under majority rule.

Foundation models funded by public appropriations (at a scale modest for the federal government) would obviate the need for exploitation of consumer data and would be a bulwark against anti-competitive practices, making these public option services a tide to lift all boats: individuals’ and corporations’ alike. However, such an agency would be created among shifting political winds that, recent history has shown, are capable of alarming and unexpected gusts. If implemented, the administration of public AI can and must be different. Technologies essential to the fabric of daily life cannot be uprooted and replanted every four to eight years. And the power to build and serve public AI must be handed to democratic institutions that act in good faith to uphold constitutional principles.

Speedy and strong legal regulations might forestall the urgent need for development of public AI. But such comprehensive regulation does not appear to be forthcoming. Though several large tech companies have said they will take important steps to protect democracy in the lead up to the 2024 election, these pledges are voluntary and in places nonspecific. The U.S. federal government is little better as it has been slow to take steps toward corporate AI legislation and regulation (although a new bipartisan task force in the House of Representatives seems determined to make progress). On the state level, only four jurisdictions have successfully passed legislation that directly focuses on regulating AI-based misinformation in elections. While other states have proposed similar measures, it is clear that comprehensive regulation is, and will likely remain for the near future, far behind the pace of AI advancement. While we wait for federal and state government regulation to catch up, we need to simultaneously seek alternatives to corporate-controlled AI.

In the absence of a public option, consumers should look warily to two recent markets that have been consolidated by tech venture capital. In each case, after the victorious firms established their dominant positions, the result was exploitation of their userbases and debasement of their products. One is online search and social media, where the dominant rise of Facebook and Google atop a free-to-use, ad supported model demonstrated that, when you’re not paying, you are the product. The result has been a widespread erosion of online privacy and, for democracy, a corrosion of the information market on which the consent of the governed relies. The other is ridesharing, where a decade of VC-funded subsidies behind Uber and Lyft squeezed out the competition until they could raise prices.

The need for competent and faithful administration is not unique to AI, and it is not a problem we can look to AI to solve. Serious policymakers from both sides of the aisle should recognize the imperative for public-interested leaders not to abdicate control of the future of AI to corporate titans. We do not need to reinvent our democracy for AI, but we do need to renovate and reinvigorate it to offer an effective alternative to untrammeled corporate control that could erode our democracy.

365 Tomorrows A Time and Place for Things

Author: Soramimi Hanarejima When the Bureau of Introspection discovered how to photograph the landscapes within us, we were all impressed that this terrain, which had only been visible in dreams, could be captured and viewed by anyone. This struck us as a huge leap, but toward what, we couldn’t say. We thought seeing our own […]

The post A Time and Place for Things appeared first on 365tomorrows.

Cryptogram Drones and the US Air Force

Fascinating analysis of the use of drones on a modern battlefield—that is, Ukraine—and the inability of the US Air Force to react to this change.

The F-35A certainly remains an important platform for high-intensity conventional warfare. But the Air Force is planning to buy 1,763 of the aircraft, which will remain in service through the year 2070. These jets, which are wholly unsuited for countering proliferated low-cost enemy drones in the air littoral, present enormous opportunity costs for the service as a whole. In a set of comments posted on LinkedIn last month, defense analyst T.X. Hammes estimated the following. The delivered cost of a single F-35A is around $130 million, but buying and operating that plane throughout its lifecycle will cost at least $460 million. He estimated that a single Chinese Sunflower suicide drone costs about $30,000—so you could purchase 16,000 Sunflowers for the cost of one F-35A. And since the full mission capable rate of the F-35A has hovered around 50 percent in recent years, you need two to ensure that all missions can be completed—for an opportunity cost of 32,000 Sunflowers. As Hammes concluded, “Which do you think creates more problems for air defense?”

Ironically, the first service to respond decisively to the new contestation of the air littoral has been the U.S. Army. Its soldiers are directly threatened by lethal drones, as the Tower 22 attack demonstrated all too clearly. Quite unexpectedly, last month the Army cancelled its future reconnaissance helicopter—which has already cost the service $2 billion—because fielding a costly manned reconnaissance aircraft no longer makes sense. Today, the same mission can be performed by far less expensive drones—without putting any pilots at risk. The Army also decided to retire its aging Shadow and Raven legacy drones, whose declining survivability and capabilities have rendered them obsolete, and announced a new rapid buy of 600 Coyote counter-drone drones in order to help protect its troops.

Cryptogram Improving C++

C++ guru Herb Sutter writes about how we can improve the programming language for better security.

The immediate problem “is” that it’s Too Easy By Default™ to write security and safety vulnerabilities in C++ that would have been caught by stricter enforcement of known rules for type, bounds, initialization, and lifetime language safety.

His conclusion:

We need to improve software security and software safety across the industry, especially by improving programming language safety in C and C++, and in C++ a 98% improvement in the four most common problem areas is achievable in the medium term. But if we focus on programming language safety alone, we may find ourselves fighting yesterday’s war and missing larger past and future security dangers that affect software written in any language.

Cryptogram Friday Squid Blogging: New Species of Squid Discovered

A new species of squid was discovered, along with about a hundred other species.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Friday Squid Blogging: Operation Squid

Operation Squid found 1.3 tons of cocaine hidden in frozen fish.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Charles Stross Worldcon in the news

You've probably seen news reports that the Hugo awards handed out last year at the world science fiction convention in Chengdu were rigged. For example: Science fiction awards held in China under fire for excluding authors.

The Guardian got bits of the background wrong, but what's undeniably true is that it's a huge mess. And the key point the press and most of the public miss is that they seem to think there's some sort of worldcon organization that can fix this.

Spoiler: there isn't.

(Caveat: what follows below the cut line is my brain dump, from 20km up, in lay terms, of what went wrong. I am not a convention runner and I haven't been following the Chengdu mess obsessively. If you want the inside baseball deets, read the File770 blog. If you want to see the rulebook, you can find it here (along with a bunch more stuff). I am on the outside of the fannish discourse and flame wars on this topic, and I may have misunderstood some of the details. I'm open to authoritative corrections and will update if necessary.)

SF conventions are generally fan-run (amateur) get-togethers, run on a non-profit/volunteer basis. There are some exceptions (the big Comiccons like SDCC, a couple of really large fan conventions that out-grew the scale volunteers can run them on so pay full-time staff) but generally they're very amateurish.

SF conventions arose organically out of SF fan clubs that began holding face to face meet-ups in the 1930s. Many of them are still run by local fan clubs and usually they stick to the same venue for decades: for example, the long-running Boskone series of conventions in Boston is run by NESFA, the New England SF Association; Novacon in the UK is run by the Birmingham SF Group. Both have been going for over 50 years now.

Others are less location-based. In the UK, there are the British Eastercons held over the Easter (long) bank holiday weekend every year in a different city. It's a notionally national SF convention, although historically it's tended to be London-centric. They're loosely associated with the BSFA, which announces its own SF awards (the BSFA awards) at the eastercon.

Because it's hard to run a convention when you live 500km from the venue, local SF societies or organizer teams talk to hotels and put together a bid for the privilege of working their butts off for a weekend. Then, a couple of years before the convention, there's a meeting and a vote at the preceding-but-one con in the series where the members vote on where to hold that year's convention.

Running a convention is not expense-free, so it's normal to charge for membership. (Nobody gets paid, but conventions host guests of honour—SF writers, actors, and so on—and they get their membership, hotel room, and travel expenses comped in the expectation that they'll stick around and give talks/sign books/shake hands with the members.)

What's less well-known outside the bubble is that it's also normal to offer "pre-supporting" memberships (to fund a bid) and "supporting" memberships (you can't make it to the convention that won the bidding war but you want to make a donation). Note that such partial memberships are upgradable later for the difference in cost if you decide to attend the event.

The world science fiction convention is the name of a long-running series of conventions (the 82nd one is in Glasgow this August) that are held annually. There is a rule book for running a worldcon. For starters, the venue is decided by a bidding war between sites (as above). For seconds, members of the convention are notionally buying membership, for one year, in the World Science Fiction Society (WSFS). The rule book for running a worldcon is the WSFS constitution, and it lays down the rules for:

  • Voting on where the next-but-one worldcon will be held ("site selection")
  • Holding a business meeting where motions to amend the WSFS constitution can be discussed and voted on (NB: to be carried a motion must be proposed and voted through at two consecutive worldcons)
  • Running the Hugo awards

The important thing to note is that the "worldcon" is not a permanent organization. It's more like a virus that latches onto an SF convention, infects it with worldcon-itis, runs the Hugo awards and the WSFS business meeting, then selects a new convention to parasitize the year after next.

No worldcon binds the hands of the next worldcon, it just passes the baton over in the expectation that the next baton-holder will continue the process rather than, say, selling the baton off to be turned into matchsticks.

This process worked more or less fine for eighty years, until it ran into Chengdu.

Worldcons are volunteer, fan-organized, amateur conventions. They're pretty big: the largest hit roughly 14,000 members, and they average 4000-8000. (I know of folks who used "worked on a British eastercon committee" as their dissertation topic for degrees in Hospitality Management; you don't get to run a worldcon committee until you're way past that point.) But SF fandom is a growing community thing in China. And even a small regional SF convention in China is quite gigantic by most western (trivially, US/UK) standards.

My understanding is that a bunch of Chinese fans who ran a successful regional convention in Chengdu (population 21 million; slightly more than the New York metropolitan area, about 30% more than London and suburbs) heard about the worldcon and thought "wouldn't it be great if we could call ourselves the world science fiction convention?"

They put together a bid, then got a bunch of their regulars to cough up $50 each to buy a supporting membership in the 2021 worldcon and vote in site selection. It doesn't take that many people to "buy" a worldcon—I seem to recall it's on the order of 500-700 votes—so they bought themselves the right to run the worldcon in 2023. And that's when the fun and games started.

See, Chinese fandom is relatively isolated from western fandom. And the convention committee didn't realize that there was this thing called the WSFS Constitution which set out rules for stuff they had to do. I gather they didn't even realize they were responsible for organizing the nomination and voting process for the Hugo awards, commissioning the award design, and organizing an awards ceremony, until about 12 months before the convention (which is short notice for two rounds of voting, commissioning a competition between artists to design the Hugo award base for that year, and so on). So everything ran months too late, and they had to delay the convention, and most of the students who'd pitched in to buy those bids could no longer attend because of bad timing, and worse ... they began picking up an international buzz, which in turn drew the attention of the local Communist Party, in the middle of the authoritarian clamp-down that's been intensifying for the past couple of years. (Remember, it takes a decade to organize a successful worldcon from initial team-building to running the event. And who imagined our existing world of 2023 back in 2013?)

The organizers appear to have panicked.

First they arbitrarily disqualified a couple of very popular works by authors who they thought might offend the Party if they won and turned up to give an acceptance speech (including "Babel", by R. F. Kuang, which won the Nebula and Locus awards in 2023 and was a favourite to win the Hugo as well).

Then they dragged their heels on releasing the vote counts—the WSFS Constitution requires the raw figures to be released after the awards are handed out.

Then there were discrepancies in the count of votes cast, such that the raw numbers didn't add up.

The haphazard way they released the data suggests that the 911 call is coming from inside the house: the convention committee freaked out when they realized the convention had become a political hot potato, rigged the vote badly, and are now farting smoke signals as if to say "a secret policeman hinted that it could be very unfortunate if we didn't anticipate the Party's wishes".

My take-away:

The world science fiction convention coevolved with fan-run volunteer conventions in societies where there's a general expectation of the rule of law and most people abide by social norms irrespective of enforcement. The WSFS constitution isn't enforceable except insofar as normally fans see no reason not to abide by the rules. So it works okay in the USA, the UK, Canada, the Netherlands, Japan, Australia, New Zealand, and all the other western-style democracies it's been held in ... but broke badly when a group of enthusiasts living in an authoritarian state won the bid then realized too late that by doing so they'd come to the attention of Very Important People who didn't care about their society's rulebook.

Immediate consequences:

For the first fifty or so worldcons, worldcon was exclusively a North American phenomenon except for occasional sorties to the UK. Then it began to open up as cheap air travel became a thing. In the 21st century about 50% of worldcons are held outside North America, and until 2016 there was an expectation that it would become truly international.

But the Chengdu fubar has created shockwaves. There's no immediate way to fix this, any more than you'll be able to fix Donald Trump declaring himself dictator-for-life on the Ides of March in 2025 if he gets back into the White House with a majority in the House and Senate. It needs a WSFS constitutional amendment at least (so pay attention to the motions and voting in Glasgow, and then next year, in Seattle) just to stop it happening again. And nobody has ever tried to retroactively invalidate the Hugo awards. While there's a mechanism for running Hugo voting and handing out awards for a year in which there was no worldcon (the Retrospective Hugo awards—for example, the 1945 Hugo Awards were voted on in 2020), nobody considered the need to re-run the Hugos for a year in which the vote was rigged. So there's no mechanism.

The fallout from Chengdu has probably sunk several other future worldcon bids—and it's not as if there are a lot of teams competing for the privilege of working themselves to death: Glasgow and Seattle (2024 and 2025) both won their bidding by default because they had experienced, existing worldcon teams and nobody else could be bothered turning up. So the Ugandan worldcon bid has collapsed (and good riddance, many fans would vote NO WORLDCON in preference to a worldcon in a nation that recently passed a law making homosexuality a capital offense). The Saudi Arabian bid also withered on the vine, but took longer to finally die. They shifted their venue to Cairo in a desperate attempt to overcome Prince Bone-saw's negative PR optics, but it hit the buffers when the Egyptian authorities refused to give them the necessary permits. Then there's the Tel Aviv bid. Tel Aviv fans are lovely people, but I can't see an Israeli worldcon being possible in the foreseeable future (too many genocide cooties right now). Don't ask about Kiev (before February 2022 they were considering bidding for the Eurocon). And in the USA, the prognosis for successful Texas and Florida worldcon bids is poor (book banning does not go down well with SF fans).

Beyond Seattle in 2025, the sole bid standing for 2026 (now the Saudi bid has died) is Los Angeles. Tel Aviv is still bidding for 2027, but fat chance: Uganda is/was targeting 2028, and there was some talk of a Texas bid in 2029 (all these are speculative bids and highly unlikely to happen in my opinion). I am also aware of a bid for a second Dublin worldcon (they've got a shiny new conference centre), targeting 2029 or 2030. There may be another Glasgow or London bid in the mid-30s, too. But other than that? I'm too out of touch with current worldcon politics to say, other than, watch this space (but don't buy the popcorn from the concession stand, it's burned and bitter).

UPDATE

A commenter just drew my attention to this news item on China.org.cn, dated October 23rd, 2023, right after the worldcon. It begins:

Investment deals valued at approximately $1.09 billion were signed during the 81st World Science Fiction Convention (Worldcon) held in Chengdu, Sichuan province, last week at its inaugural industrial development summit, marking significant progress in the advancement of sci-fi development in China.

The deals included 21 sci-fi industry projects involving companies that produce films, parks, and immersive sci-fi experiences ..."

That's a metric fuckton of moolah in play, and it would totally account for the fan-run convention folks being discreetly elbowed out of the way and the entire event being stage-managed as a backdrop for a major industrial event to bootstrap creative industries (film, TV, and games) in Chengdu. And—looking for the most charitable interpretation here—the hapless western WSFS people being carried along for the ride to provide a veneer of worldcon-ness to what was basically Chinese venture capital hijacking the event and then sanitizing it politically.

Follow the money.

Worse Than FailureCheck Your Email

Branon's boss, Steve, came storming into his cube. From the look of panic on his face, it was clear that this was a full hair-on-fire emergency.

"Did we change anything this weekend?"

"No," Branon said. "We never deploy on a weekend."

"Well, something must have changed?!"

After a few rounds of this, Steve's panic wore off and he explained a bit more clearly. Every night, their application was supposed to generate a set of nightly reports and email them out. These reports went to a number of people in the company, up to and including the CEO. Come Monday morning, the CEO checked his inbox and horror of horrors- there was no report!

"And going back through people's inboxes, this seems like it's been a problem for months- nobody seems to have received one for months."

"Why are they just noticing now?" Branon asked.

"That's really not the problem here. Can you investigate why the emails aren't going out?"

Branon put aside his concerns, and agreed to dig through and debug the problem. Given that it involved sending emails, Branon was ready to spend a long time trying to debug whatever was going wrong in the chain. Instead, finding the problem only took about two minutes, and most of that was spent getting coffee.

public void Send()
{
    //TODO: send email here
}

This application had been in production over a year. This function had not been modified in that time. So while it's technically true that no one had received a report "for months" (16 months is a number of months), it would probably have been more accurate to say that they had never received a report. Now, given that it had been over a year, you'd think that maybe this report wasn't that important, but now that the CEO had noticed, it was the most important thing at the company. Work on everything else stopped until this was done- mind you, it only took one person a few hours to implement and test the feature, but still- work on everything else stopped.
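The article doesn't show what the eventual fix looked like, but as a rough illustration of the kind of thing that TODO presumably called for, here is a minimal Java sketch using the JavaMail API. The class name, SMTP host, and addresses are placeholders, not details from the story, and real code would pull them from configuration.

import java.util.Properties;
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class ReportMailer {
	// Sends the nightly report body to a fixed recipient through a plain SMTP relay.
	// Host and addresses below are hypothetical placeholders.
	public void send(String reportBody) throws MessagingException {
		Properties props = new Properties();
		props.put("mail.smtp.host", "smtp.example.com");
		props.put("mail.smtp.port", "25");
		Session session = Session.getInstance(props);

		MimeMessage message = new MimeMessage(session);
		message.setFrom(new InternetAddress("reports@example.com"));
		message.addRecipient(Message.RecipientType.TO, new InternetAddress("ceo@example.com"));
		message.setSubject("Nightly report");
		message.setText(reportBody);

		Transport.send(message);
	}
}

Error handling, authentication, and the actual report generation are omitted; the point is only that something of this sort had to exist for the reports to ever leave the building.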

A few weeks later a new ticket was opened: people felt that the nightly reports were too frequent, and wanted to instead just go to the site to pull the report, which is what they had been doing for the past 16 months.


365 TomorrowsAlways in Line

Author: Frederick Charles Melancon The scars don’t glow like they once did, yet around my pants’ cuffs, neon-green halos still light my ankles. Mom used to love halos—hanging glass circles around the house to create them. But these marks from the bombing blasts on Mars shine so bright that they still keep me up at […]

The post Always in Line appeared first on 365tomorrows.

,

Krebs on SecurityPatch Tuesday, March 2024 Edition

Apple and Microsoft recently released software updates to fix dozens of security holes in their operating systems. Microsoft today patched at least 60 vulnerabilities in its Windows OS. Meanwhile, Apple’s new macOS Sonoma addresses at least 68 security weaknesses, and its latest update for iOS fixes two zero-day flaws.

Last week, Apple pushed out an urgent software update to its flagship iOS platform, warning that there were at least two zero-day exploits for vulnerabilities being used in the wild (CVE-2024-23225 and CVE-2024-23296). The security updates are available in iOS 17.4, iPadOS 17.4, and iOS 16.7.6.

Apple’s macOS Sonoma 14.4 Security Update addresses dozens of security issues. Jason Kitka, chief information security officer at Automox, said the vulnerabilities patched in this update often stem from memory safety issues, a concern that has led to a broader industry conversation about the adoption of memory-safe programming languages [full disclosure: Automox is an advertiser on this site].

On Feb. 26, 2024, the Biden administration issued a report that calls for greater adoption of memory-safe programming languages. On Mar. 4, 2024, Google published Secure by Design, which lays out the company’s perspective on memory safety risks.
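For anyone unfamiliar with the term, the property those reports are after is that out-of-bounds accesses get caught at runtime instead of silently corrupting adjacent memory. A tiny, purely illustrative Java example (not drawn from either report):

public class BoundsDemo {
	public static void main(String[] args) {
		int[] buffer = new int[4];
		// In a language without memory safety this write could silently
		// scribble over adjacent memory; in Java it throws
		// ArrayIndexOutOfBoundsException instead.
		buffer[4] = 42;
	}
}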

Mercifully, there do not appear to be any zero-day threats hounding Windows users this month (at least not yet). Satnam Narang, senior staff research engineer at Tenable, notes that of the 60 CVEs in this month’s Patch Tuesday release, only six are considered “more likely to be exploited” according to Microsoft.

Those more likely to be exploited bugs are mostly “elevation of privilege vulnerabilities” including CVE-2024-26182 (Windows Kernel), CVE-2024-26170 (Windows Composite Image File System (CimFS)), CVE-2024-21437 (Windows Graphics Component), and CVE-2024-21433 (Windows Print Spooler).

Narang highlighted CVE-2024-21390 as a particularly interesting vulnerability in this month’s Patch Tuesday release, which is an elevation of privilege flaw in Microsoft Authenticator, the software giant’s app for multi-factor authentication. Narang said a prerequisite for an attacker to exploit this flaw is to already have a presence on the device either through malware or a malicious application.

“If a victim has closed and re-opened the Microsoft Authenticator app, an attacker could obtain multi-factor authentication codes and modify or delete accounts from the app,” Narang said. “Having access to a target device is bad enough as they can monitor keystrokes, steal data and redirect users to phishing websites, but if the goal is to remain stealth, they could maintain this access and steal multi-factor authentication codes in order to login to sensitive accounts, steal data or hijack the accounts altogether by changing passwords and replacing the multi-factor authentication device, effectively locking the user out of their accounts.”

CVE-2024-21334 earned a CVSS (danger) score of 9.8 (10 is the worst), and it concerns a weakness in Open Management Infrastructure (OMI), a Linux-based cloud infrastructure in Microsoft Azure. Microsoft says attackers could connect to OMI instances over the Internet without authentication, and then send specially crafted data packets to gain remote code execution on the host device.

CVE-2024-21435 is a CVSS 8.8 vulnerability in Windows OLE, which acts as a kind of backbone for a great deal of communication between applications that people use every day on Windows, said Ben McCarthy, lead cybersecurity engineer at Immersive Labs.

“With this vulnerability, there is an exploit that allows remote code execution, the attacker needs to trick a user into opening a document, this document will exploit the OLE engine to download a malicious DLL to gain code execution on the system,” McCarthy explained. “The attack complexity has been described as low meaning there is less of a barrier to entry for attackers.”

A full list of the vulnerabilities addressed by Microsoft this month is available at the SANS Internet Storm Center, which breaks down the updates by severity and urgency.

Finally, Adobe today issued security updates that fix dozens of security holes in a wide range of products, including Adobe Experience Manager, Adobe Premiere Pro, ColdFusion 2023 and 2021, Adobe Bridge, Lightroom, and Adobe Animate. Adobe said it is not aware of active exploitation against any of the flaws.

By the way, Adobe recently enrolled all of its Acrobat users into a “new generative AI feature” that scans the contents of your PDFs so that its new “AI Assistant” can “understand your questions and provide responses based on the content of your PDF file.” Adobe provides instructions on how to disable the AI features and opt out here.

Cryptogram Automakers Are Sharing Driver Data with Insurers without Consent

Kashmir Hill has the story:

Modern cars are internet-enabled, allowing access to services like navigation, roadside assistance and car apps that drivers can connect to their vehicles to locate them or unlock them remotely. In recent years, automakers, including G.M., Honda, Kia and Hyundai, have started offering optional features in their connected-car apps that rate people’s driving. Some drivers may not realize that, if they turn on these features, the car companies then give information about how they drive to data brokers like LexisNexis [who then sell it to insurance companies].

Automakers and data brokers that have partnered to collect detailed driving data from millions of Americans say they have drivers’ permission to do so. But the existence of these partnerships is nearly invisible to drivers, whose consent is obtained in fine print and murky privacy policies that few read.

Cryptogram Burglars Using Wi-Fi Jammers to Disable Security Cameras

The arms race continues, as burglars are learning how to use jammers to disable Wi-Fi security cameras.

Worse Than FailureCodeSOD: Wait for the End

Donald was cutting a swathe through a jungle of old Java code, when he found this:

protected void waitForEnd(float time) {
	// do nothing
}

Well, this function sure sounds like it's waiting around to die. This protected method is called from a private method, and you might expect that child classes actually implement real functionality in there, but there were no child classes. This was called in several places, and each time it was passed Float.MAX_VALUE as its input.
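The article never says what waitForEnd was actually supposed to do, but if it really had blocked for the given number of seconds, those call sites would be in trouble: a purely speculative sketch of such an implementation shows why passing Float.MAX_VALUE amounts to "wait forever".

protected void waitForEnd(float time) {
	// Speculative sketch only: block for `time` seconds. With Float.MAX_VALUE
	// as the argument, the multiplication overflows to infinity and the cast
	// to long saturates at Long.MAX_VALUE, so the caller would sleep
	// effectively forever.
	try {
		Thread.sleep((long) (time * 1000));
	} catch (InterruptedException e) {
		Thread.currentThread().interrupt();
	}
}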

Poking at that odd function also led to this more final method:

public void waitAtEnd() {
	System.exit(0);
}

This function doesn't wait for anything- it just ends the program. Finally and decisively. It is the end.

I know the end of this story: many, many developers have worked on this code base, and many of them hoped to clean up the codebase and make it better. Many of them got lost, never to return. Many ran away screaming.


365 TomorrowsBait

Author: Majoki The float bobs and I feel a slight tug on the line, a nip at the hook. A shiver of guilt, a nanosecond’s exhilaration. I finesse the reel, patient. What will rise? There’s nothing like fishing in a black hole, quantum casting for bits and pieces of worlds beneath, within, among. You just […]

The post Bait appeared first on 365tomorrows.

,

Cryptogram Jailbreaking LLMs with ASCII Art

Researchers have demonstrated that putting words in ASCII art can cause LLMs—GPT-3.5, GPT-4, Gemini, Claude, and Llama2—to ignore their safety instructions.

Research paper.

Krebs on SecurityIncognito Darknet Market Mass-Extorts Buyers, Sellers

Borrowing from the playbook of ransomware purveyors, the darknet narcotics bazaar Incognito Market has begun extorting all of its vendors and buyers, threatening to publish cryptocurrency transaction and chat records of users who refuse to pay a fee ranging from $100 to $20,000. The bold mass extortion attempt comes just days after Incognito Market administrators reportedly pulled an “exit scam” that left users unable to withdraw millions of dollars worth of funds from the platform.

An extortion message currently on the Incognito Market homepage.

In the past 24 hours, the homepage for the Incognito Market was updated to include a blackmail message from its owners, saying they will soon release purchase records of vendors who refuse to pay to keep the records confidential.

“We got one final little nasty surprise for y’all,” reads the message to Incognito Market users. “We have accumulated a list of private messages, transaction info and order details over the years. You’ll be surprised at the number of people that relied on our ‘auto-encrypt’ functionality. And by the way, your messages and transaction IDs were never actually deleted after the ‘expiry’….SURPRISE SURPRISE!!! Anyway, if anything were to leak to law enforcement, I guess nobody never slipped up.”

Incognito Market says it plans to publish the entire dump of 557,000 orders and 862,000 cryptocurrency transaction IDs at the end of May.

“Whether or not you and your customers’ info is on that list is totally up to you,” the Incognito administrators advised. “And yes, this is an extortion!!!!”

The extortion message includes a “Payment Status” page that lists the darknet market’s top vendors by their handles, saying at the top that “you can see which vendors care about their customers below.” The names in green supposedly correspond to users who have already opted to pay.

The “Payment Status” page set up by the Incognito Market extortionists.


Incognito Market said it plans to open up a “whitelist portal” for buyers to remove their transaction records “in a few weeks.”

The mass-extortion of Incognito Market users comes just days after a large number of users reported they were no longer able to withdraw funds from their buyer or seller accounts. The cryptocurrency-focused publication Cointelegraph.com reported Mar. 6 that Incognito was exit-scamming its users out of their bitcoins and Monero deposits.

CoinTelegraph notes that Incognito Market administrators initially lied about the situation, and blamed users’ difficulties in withdrawing funds on recent changes to Incognito’s withdrawal systems.

Incognito Market deals primarily in narcotics, so it’s likely many users are now worried about being outed as drug dealers. Creating a new account on Incognito Market presents one with an ad for 5 grams of heroin selling for $450.

New Incognito Market users are treated to an ad for $450 worth of heroin.

The double whammy now hitting Incognito Market users is somewhat akin to the double extortion techniques employed by many modern ransomware groups, wherein victim organizations are hacked, relieved of sensitive information and then presented with two separate ransom demands: One in exchange for a digital key needed to unlock infected systems, and another to secure a promise that any stolen data will not be published or sold, and will be destroyed.

Incognito Market has priced its extortion for vendors based on their status or “level” within the marketplace. Level 1 vendors can supposedly have their information removed by paying a $100 fee. However, larger “Level 5” vendors are asked to cough up $20,000 payments.

The past is replete with examples of similar darknet market exit scams, which tend to happen eventually to all darknet markets that aren’t seized and shut down by federal investigators, said Brett Johnson, a convicted and reformed cybercriminal who built the organized cybercrime community Shadowcrew many years ago.

“Shadowcrew was the precursor to today’s Darknet Markets and laid the foundation for the way modern cybercrime channels still operate today,” Johnson said. “The Truth of Darknet Markets? ALL of them are Exit Scams. The only question is whether law enforcement can shut down the market and arrest its operators before the exit scam takes place.”