Planet Russell

Charles Stross: Announcement time!

I am very pleased to be able to announce that the Laundry Files are shortlisted for the Hugo Award for Best Series!

(Astute readers will recall that the Laundry Files were shortlisted—but did not win—in 2019. Per the rules, "A qualifying installment must be published in the qualifying year ... If a Series is a finalist and does not win, it is no longer eligible until at least two more installments consisting of at least 240,000 words total appear in subsequent years." Since 2019, the Laundry Files have grown by three full novels (the New Management books) and a novella ("Escape from Yokai Land"), totaling about 370,000 words. "Season of Skulls" was published in 2023, hence the series is eligible in 2024.)

The Hugo award winners will be announced at the world science fiction convention in Glasgow this August, on the evening of Sunday August 11th. Full announcement of the entire shortlist here.

In addition to the Hugo nomination, the Kickstarter for the second edition of the Laundry tabletop role playing game, from Cubicle 7 games, goes live for pre-orders in the next month. If you want to be notified when that happens, there's a sign-up page here.

Finally, there's some big news coming soon about film/TV rights, and separately, graphic novel rights, to the Laundry Files. I can't say any more at this point, but expect another announcement (or two!) over the coming months.

I'm sure you have questions. Ask away!

Worse Than Failure: CodeSOD: A List of Mistakes

Yesterday we talked about bad CSS. Today, we're going to talk about bad HTML.

Corey inherited a web page that, among other things, wanted to display a bulleted list of links. Now, you or I might reach for the ul element, which is for displaying bulleted lists. But we do not have the galaxy-sized brains of this individual:

<table style="font-family: Verdana;">
  <tr>
     <td valign="top">•</td>
     <td>
       Google
       <br />
       <a href="http://www.google.com" target="_blank">http://www.google.com</a>
     </td>
    </tr>
    <tr>
     <td valign="top">•</td>
     <td>
       Yahoo
       <br />
       <a href="http://www.yahoo.com" target="_blank">http://www.yahoo.com/</a>
     </td>
   </tr>
   <tr>
     <td valign="top">•</td>
     <td>
       Bing
       <br />
       <a href="http://www.bing.com" target="_blank">http://www.bing.com</a>
     </td>
   </tr>
</table>

Here, they opted to use a table, where each list item is a row, and the bullet is a literal symbol in the code.
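
For contrast, here's what the same list looks like with a semantic ul, which gets you the bullets for free. This is a minimal sketch; the class name is invented for illustration:

<!-- A sketch only; the "link-list" class is hypothetical. -->
<ul class="link-list" style="font-family: Verdana;">
  <li>Google<br />
    <a href="http://www.google.com" target="_blank">http://www.google.com</a>
  </li>
  <li>Yahoo<br />
    <a href="http://www.yahoo.com" target="_blank">http://www.yahoo.com/</a>
  </li>
  <li>Bing<br />
    <a href="http://www.bing.com" target="_blank">http://www.bing.com</a>
  </li>
</ul>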

For web developers of a certain age, we remember when laying out your entire page inside of tables was a common practice. This let you easily define distinct header, sidebar, and footer sections in an era before CSS and divs everywhere.

But they were never meant to be used like this.

365 Tomorrows: Draxas Arena Blues

Author: Thomas Godfrey I should have just pleaded guilty. I should have just gone off to some decrepit moon somewhere and put in my ten years of hard labour or whatever it was they were going to have me doing. Breaking my back in the mines of Tormen IV, or being drafted into the Imperial […]

The post Draxas Arena Blues appeared first on 365tomorrows.

Cryptogram: Other Attempts to Take Over Open Source Projects

After the XZ Utils discovery, people have been examining other open-source projects. Surprising no one, the incident is not unique:

The OpenJS Foundation Cross Project Council received a suspicious series of emails with similar messages, bearing different names and overlapping GitHub-associated emails. These emails implored OpenJS to take action to update one of its popular JavaScript projects to “address any critical vulnerabilities,” yet cited no specifics. The email author(s) wanted OpenJS to designate them as a new maintainer of the project despite having little prior involvement. This approach bears strong resemblance to the manner in which “Jia Tan” positioned themselves in the XZ/liblzma backdoor.

[…]

The OpenJS team also recognized a similar suspicious pattern in two other popular JavaScript projects not hosted by its Foundation, and immediately flagged the potential security concerns to respective OpenJS leaders, and the Cybersecurity and Infrastructure Security Agency (CISA) within the United States Department of Homeland Security (DHS).

The article includes a list of suspicious patterns, and another list of security best practices.

Planet Debian: Russ Allbery: Review: Unseen Academicals

Review: Unseen Academicals, by Terry Pratchett

Series: Discworld #37
Publisher: Harper
Copyright: October 2009
Printing: November 2014
ISBN: 0-06-233500-6
Format: Mass market
Pages: 517

Unseen Academicals is the 37th Discworld novel and includes many of the long-standing Ankh-Morpork cast, but mostly as supporting characters. The main characters are a new (and delightful) bunch with their own concerns. You arguably could start reading here if you really wanted to, although you would risk spoiling several previous books (most notably Thud!) and will miss some references that depend on familiarity with the cast.

The Unseen University is, like most institutions of its sort, funded by an endowment that allows the wizards to focus on the pure life of the mind (or the stomach). Much to their dismay, they have just discovered that an endowment that amounts to most of their food budget requires that they field a football team.

Glenda runs the night kitchen at the Unseen University. Given the deep and abiding love that wizards have for food, there is both a main kitchen and a night kitchen. The main kitchen is more prestigious, but the night kitchen is responsible for making pies, something that Glenda is quietly but exceptionally good at.

Juliet is Glenda's new employee. She is exceptionally beautiful, not very bright, and a working class girl of the Ankh-Morpork streets down to her bones. Trevor Likely is a candle dribbler, responsible for assisting the Candle Knave in refreshing the endless university candles and ensuring that their wax is properly dribbled, although he pushes most of that work off onto the infallibly polite and oddly intelligent Mr. Nutt.

Glenda, Trev, and Juliet are the sort of people who populate the great city of Ankh-Morpork. While the people everyone has heard of have political crises, adventures, and book plots, they keep institutions like the Unseen University running. They read romance novels, go to the football games, and nurse long-standing rivalries. They do not expect the high mucky-mucks to enter their world, let alone mess with their game.

I approached Unseen Academicals with trepidation because I normally don't get along as well with the Discworld wizard books. I need not have worried; Pratchett realized that the wizards would work better as supporting characters and instead turns the main plot (or at least most of it; more on that later) over to the servants. This was a brilliant decision. The setup of this book is some of the best of Discworld up to this point.

Trev is a streetwise rogue with an uncanny knack for kicking around a can that he developed after being forbidden to play football by his dear old mum. He falls for Juliet even though their families support different football teams, so you may think that a Romeo and Juliet spoof is coming. There are a few gestures toward one, but Pratchett deftly avoids the pitfalls and predictability and instead makes Juliet one of the best characters in the book by playing directly against type. She is one of the characters that Pratchett is so astonishingly good at, the ones that are so thoroughly themselves that they transcend the stories they're put into.

The heart of this book, though, is Glenda.

Glenda enjoyed her job. She didn't have a career; they were for people who could not hold down jobs.

She is the kind of person who knows where she fits in the world and likes what she does and is happy to stay there until she decides something isn't right, and then she changes the world through the power of common sense morality, righteous indignation, and sheer stubborn persistence. Discworld is full of complex and subtle characters fencing with each other, but there are few things I have enjoyed more than Glenda being a determinedly good person. Vetinari of course recognizes and respects (and uses) that inner core immediately.

Unfortunately, as great as the setup and characters are, Unseen Academicals falls apart a bit at the end. I was eagerly reading the story, wondering what Pratchett was going to weave out of the stories of these individuals, and then it partly turned into yet another wizard book. Pratchett pulled another of his deus ex machina tricks for the climax in a way that I found unsatisfying and contrary to the tone of the rest of the story, and while the characters do get reasonable endings, it lacked the oomph I was hoping for. Rincewind is as determinedly one-note as ever, the wizards do all the standard wizard things, and the plot just isn't that interesting.

I liked Mr. Nutt a great deal in the first part of the book, and I wish he could have kept that edge of enigmatic competence and unflappableness. Pratchett wanted to tell a different story that involved more angst and self-doubt, and while I appreciate that story, I found it less engaging and a bit more melodramatic than I was hoping for. Mr. Nutt's reactions in the last half of the book were believable and fit his background, but that was part of the problem: he slotted back into an archetype that I thought Pratchett was going to twist and upend.

Mr. Nutt does, at least, get a fantastic closing line, and as usual there are a lot of great asides and quotes along the way, including possibly the sharpest and most biting Vetinari speech of the entire series.

The Patrician took a sip of his beer. "I have told this to few people, gentlemen, and I suspect never will again, but one day when I was a young boy on holiday in Uberwald I was walking along the bank of a stream when I saw a mother otter with her cubs. A very endearing sight, I'm sure you will agree, and even as I watched, the mother otter dived into the water and came up with a plump salmon, which she subdued and dragged on to a half-submerged log. As she ate it, while of course it was still alive, the body split and I remember to this day the sweet pinkness of its roes as they spilled out, much to the delight of the baby otters who scrambled over themselves to feed on the delicacy. One of nature's wonders, gentlemen: mother and children dining on mother and children. And that's when I first learned about evil. It is built into the very nature of the universe. Every world spins in pain. If there is any kind of supreme being, I told myself, it is up to all of us to become his moral superior."

My dissatisfaction with the ending prevents Unseen Academicals from rising to the level of Night Watch, and it's a bit more uneven than the best books of the series. Still, though, this is great stuff; recommended to anyone who is reading the series.

Followed in publication order by I Shall Wear Midnight.

Rating: 8 out of 10

Planet Debian: Samuel Henrique: Hello World

This is my very first post, just to make sure everything is working as expected.

Made with Zola and the Abridge theme.

Planet Debian: Petter Reinholdtsen: RAID status from LSI Megaraid controllers in Debian

I am happy to report that the megactl package, useful to fetch RAID status when using the LSI Megaraid controller, now is available in Debian. It passed NEW a few days ago, and is now available in unstable, and should probably show up in testing in a week's time. The new version should provide Appstream hardware mapping and should integrate nicely with isenkram.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Charles Stross: A Wonky Experience

A Wonka Story

This is no longer in the current news cycle, but definitely needs to be filed under "stuff too insane for Charlie to make up", or maybe "promising screwball comedy plot line to explore", or even "perils of outsourcing creative media work to generative AI".

So. Last weekend saw insane news-generating scenes in Glasgow around a public event aimed at children: Willy's Chocolate Experience, a blatant attempt to cash in on Roald Dahl's cautionary children's tale, "Charlie and the Chocolate Factory". Which is currently most prominently associated in the zeitgeist with a 2005 movie directed by Tim Burton, who probably needs no introduction, even to a cinematic illiterate like me. Although I gather a prequel movie (called, predictably, Wonka) came out in 2023.

(Because sooner or later the folks behind "House of Illuminati Ltd" will wise up and delete the website, here's a handy link to how it looked on February 24th via archive.org.)

INDULGE IN A CHOCOLATE FANTASY LIKE NEVER BEFORE - CAPTURE THE ENCHANTMENT ™!

Tickets to Willys Chocolate Experience™ are on sale now!

The event was advertised with amazing, almost hallucinogenic, graphics that were clearly AI generated, and equally clearly not proofread because Stable Diffusion utterly sucks at writing English captions, as opposed to word salad offering enticements such as Catgacating • live performances • Cartchy tuns, exarserdray lollipops, a pasadise of sweet teats.* And tickets were on sale for a mere £35 per child!

Anyway, it hit the news (and not in a good way) and the event was terminated on day one after the police were called. Here's The Guardian's coverage:

The event publicity promised giant mushrooms, candy canes and chocolate fountains, along with special audio and visual effects, all narrated by dancing Oompa-Loompas - the tiny, orange men who power Wonka's chocolate factory in the Roald Dahl book which inspired the prequel film.

But instead, when eager families turned up to the address in Whiteinch, an industrial area of Glasgow, they discovered a sparsely decorated warehouse with a scattering of plastic props, a small bouncy castle and some backdrops pinned against the walls.

Anyway, since the near-riot and hasty shutdown of the event, things have ... recomplicated? I think that's the diplomatic way to phrase it.

First, someone leaked the script for the event on twitter. They'd hired actors and evidently used ChatGPT to generate a script for the show: some of the actors quit in despair, others made a valiant attempt to at least amuse the children. But it didn't work. Interactive audience-participation events are hard work and this one apparently called for the sort of special effects that Disney's Imagineers might have blanched at, or at least asked, "who's paying for this?"

Here's a ThreadReader transcript of the twitter thread about the script (ThreadReader chains tweets together into a single web page, so you don't have to log into the hellsite itself). Note it's in the shape of screenshots of the script and threadreader didn't grab the images, so here's my transcript of the first three:

DIRECTION: (Audience members engage with the interactive flowers, offering compliments, to which the flowers respond with pre-recorded, whimsical thank-yous.)

Wonkidoodle 1: (to a guest) Oh, and if you see a butterfly, whisper your sweetest dream to it. They're our official secret keepers and dream carriers of the garden!

Willy McDuff: (gathering everyone's attention) Now, I must ask, has anyone seen the elusive Bubble Bloom? It's a rare flower that blooms just once every blue moon and fills the air with shimmering bubbles!

DIRECTION: (The stage crew discreetly activates bubble machines, filling the area with bubbles, causing excitement and wonder among the audience.)

Wonkidoodle 2: (pretending to catch bubbles) Quick! Each bubble holds a whisper of enchantment--catch one, and make a wish!

Willy McDuff: (as the bubble-catching frenzy continues) Remember, in the Garden of Enchantment, every moment is a chance for magic, every corner hides a story, and every bubble... (catches a bubble) holds a dream.

DIRECTION: (He opens his hand, and the bubble gently pops, releasing a small, twinkling light that ascends into the rafters, leaving the audience in awe.)

Willy McDuff: (with warmth) My dear friends, take this time to explore, to laugh, and to dream. For in this garden, the magic is real, and the possibilities are endless. And who knows? The next wonder you encounter may just be around the next bend.

DIRECTION: Scene ends with the audience fully immersed in the interactive, magical experience, laughter and joy filling the air as Willy McDuff and the Wonkidoodles continue to engage and delight with their enchanting antics and treats.

DIRECTION: Transition to the Bubble and Lemonade Room

Willy McDuff: (suddenly brightening) Speaking of light spirits, I find myself quite parched after our...unexpected adventure. But fortune smiles upon us, for just beyond this door lies a room filled with refreshments most delightful--the Bubble and Lemonade Room!

DIRECTION: (With a flourish, Willy opens a previously unnoticed door, revealing a room where the air sparkles with floating bubbles, and rivers of sparkling lemonade flow freely.)

Willy McDuff: Here, my dear guests, you may quench your thirst with lemonade that fizzes and dances on the tongue, and chase bubbles that burst with flavors unimaginable. A toast, to adventures shared and friendships forged in the heart of the unknown!

DIRECTION: (The audience, now relieved and rejuvenated by the whimsical turn of events, follows Willy into the Bubble and Lemonade Room, laughter and chatter filling the air once more, as they immerse themselves in the joyous, bubbly wonderland.)

And here is a photo of the Lemonade Room in all its glory.

A trestle table with some paper cups half-full of flat lemonade

Note that in the above directions, near as I can make out, there were no stage crew on site. As Seamus O'Reilly put it, "I get that lazy and uncreative people will use AI to generate concepts. But if the script it barfs out has animatronic flowers, glowing orbs, rivers of lemonade and giggling grass, YOU still have to make those things exist. I'm v confused as to how that part was misunderstood."

Now, if that was all there was to it, it'd merely be annoying. My initial take was that this was a blatant rip-off, a consumer fraud perpetrated by a company ("House of Illuminati") based in London, doing everything by remote control over the internet to fleece those gullible provincials of their wallet contents. (Oh, and that probably includes the actors: did they get paid on the day?) But aftershocks are still rumbling on, a week later.

Per The Daily Beast, "House of Illuminati" issued an apology (via Facebook) on Friday, offering to refund all tickets—but then mysteriously deleted the apology hours later, and posted a new one:

"I want to extend my sincerest apologies to each and every one of you who was looking forward to this event," the latest Facebook post from House of Illuminati reads. "I understand the disappointment and frustration this has caused, and for that, I am truly sorry."

(The individual behind the post goes unnamed.)

"It's important for me to clarify that the organization and decisions surrounding this event were solely my responsibility," the post continues. "I want to make it clear that anyone who was hired externally or offered their help, are not affiliated with the me or the company, any use of faces can cause serious harm to those who did not have any involvement in the making of this event."

"Regarding a personal matter, there will be no wedding, and no wedding was funded by the ticket sales," the post continues further, sans context. "This is a difficult time for me, and I ask for your understanding and privacy."

"There will be no wedding, and no wedding was funded by the ticket sales?" (What on Earth is going on here?)

Finally, The Daily Beast notes that Billy McFarland, the creator of the Fyre Fest fiasco, told TMZ he'd love to give the Wonka organizers a second chance at getting things right at Fyre Fest II.

The mind boggles.

I am now wondering if the whole thing wasn't some sort of extraordinarily elaborate publicity stunt rather than simply a fraud, but I can't for the life of me work out what was going on. Unless it was Jimmy Cauty and Bill Drummond (aka The KLF) getting up to hijinks again? But I can't imagine them doing anything so half-assed ... Least-bad case is that an idiot decided to set up an events company ("how hard can running public arts events be?" —don't answer that) and intended to use the profits and the experience to plan their dream wedding. Which then ran off the rails into a ditch, rolled over, exploded in flames, was sucked up by a tornado and deposited in Oz, their fiancée called off the engagement and eloped with a walrus, and—

It's all downhill from here.

Anyway, the moral of the story so far is: don't use generative AI tools to write scripts for public events, or to produce promotional images, or indeed to do anything at all without an experienced human to sanity check their output! And especially don't use them to fund your wedding ...

UPDATE: Identity of scammer behind Willy's Chocolate Experience exposed -- YouTube video, I haven't had a chance to watch it all yet, will summarize if relevant later; the perp has form for selling ChatGPT-generated ebook-shaped "objects" via Amazon.

NEW UPDATE: Glasgow's disastrous Wonka character inspires horror film

A villain devised for the catastrophic Willy's Chocolate Experience, who makes sweets and lives in walls, is to become the subject of a new horror movie.

LATEST UPDATE: House of Illuminati claims "copywrite", "we will protect our interests".

The 'Meth Lab Oompa Loompa Lady' is selling greetings on Cameo for $25.

And Eleanor Morton has a new video out, Glasgow Wonka Experience Tourguide Doesn't Give a F*.

FINAL UPDATE: Props from botched Willy Wonka event raise over £2,000 for Palestinian aid charity: Glasgow record shop Monorail Music auctioned the props on eBay after they were discovered in a bin outside the warehouse where event took place. (So some good came of it in the end ...)

Worse Than Failure: CodeSOD: Classical Design

There is a surprising amount of debate about how to use CSS classes. The correct side of this debate argues that we should use classes to describe what the content is, what role it serves in our UI; i.e., a section of a page displaying employee information might be classed employee. If we want the "name" field of an employee to have a red underline, we might write a rule like:

.employee .name { text-decoration: underline red; }
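
For that rule to land, the markup needs to describe roles, not looks. A hypothetical sketch:

<div class="employee">
  <span class="name">Jane Doe</span>
</div>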

The wrong side argues that this doesn't tell you what it looks like, so we should have classes like red-underline, because they care what the UI element looks like, not what it is.
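
To make the contrast concrete, here's a minimal sketch of that presentational approach (the class and markup are invented for illustration):

/* Presentational class: names the look, not the role. */
.red-underline { text-decoration: underline red; }

<span class="red-underline">Jane Doe</span>

Now the design decision lives in the markup: restyle the site, and you're editing every page that uses the class rather than one CSS rule.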

Mark sends us some HTML that highlights how wrong things can go:


<div class="print ProfileBox floatRight" style="background-color:#ffffff;" >
	<div class="horiz">
		<div class="horiz">
			<div class="vert">
				<div class="vert">
					<div class="roundGray">
						<div class="roundPurple">
							<div class="roundGray">
								<div class="roundPurple">
									<div class="roundGray">
										<div class="roundPurple">
											<div class="roundGray">
												<div class="roundPurple">
													<div class="bold">
														Renew Profile
														<br><br>
														Change of Employer
														<br><br>
														Change of Position
														<br><br>
														Print Training Reports
														<br><br><br><br>
													</div>
												</div>
											</div>
										</div>
									</div>
								</div>
							</div>
						</div>
					</div>
				</div>
			</div>
		</div>
	</div>
</div>
<div class="floatRight">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;</div>
<div class="ProfileBox print floatRight" style="background-color:#ffffff" >
	<div class="horiz">
		<div class="horiz">
			<div class="vert">
				<div class="vert">
					<div class="roundGray">
						<div class="roundPurple">
							<div class="roundGray">
								<div class="roundPurple">
									<div class="roundGray">
										<div class="roundPurple">
											<div class="roundGray">
												<div class="roundPurple">
									<div class="bold">
										Profile Status:
										<br><br>
										Registry Code:
										<br><br>
										Career Level:
										<br><br>
										Member Since:
										<br><br>
										Renewal Date:
										<br><br>
									</div>
								</div>
							</div>
						</div>
					</div>
				</div>
			</div>
		</div>
		</div>
				</div>
			</div>
		</div>
	</div>
</div>

It's worth noting that this was not machine-generated HTML; the developer responsible wrote this code by hand. Mark didn't supply the CSS, so I have no idea how any of it actually renders, but the code itself looks horrible.

Mark adds:

A colleague of mine committed this code. This is the perfect example of why you define your CSS classes based on the content, not the styling they apply.

365 Tomorrows: Forward to “Should the Land Take Me”

Author: Thomas Desrochers It is one of the great mysteries of the late 21st century that the land of Alaska remains as nearly untrammeled as it was a hundred years before. Though its harsh climate was well-preserved by the collapse of the Atlantic Gyre, the exodus from Europe caused by that same calamity created a […]

The post Forward to “Should the Land Take Me” appeared first on 365tomorrows.

xkcd: Eclipse Path Maps

Planet Debian: Dirk Eddelbuettel: RcppArmadillo 0.12.8.2.1 on CRAN: Micro Fix

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1135 other packages on CRAN, downloaded 33.7 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 579 times according to Google Scholar.

Yesterday’s release accommodates reticulate by suspending a single test that now ‘croaks’, creating a reverse-dependency issue for that package. No other changes were made.

The set of changes since the last CRAN release follows.

Changes in RcppArmadillo version 0.12.8.2.1 (2024-04-15)

  • One-char bug fix release commenting out one test that upsets reticulate when accessing a scipy sparse matrix

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Cryptogram: Smuggling Gold by Disguising it as Machine Parts

Someone got caught trying to smuggle 322 pounds of gold (that’s about a quarter of a cubic foot) out of Hong Kong. It was disguised as machine parts:

On March 27, customs officials x-rayed two air compressors and discovered that they contained gold that had been “concealed in the integral parts” of the compressors. Those gold parts had also been painted silver to match the other components in an attempt to throw customs off the trail.

Cryptogram: Backdoor in XZ Utils That Almost Happened

Last week, the Internet dodged a major nation-state attack that would have had catastrophic cybersecurity repercussions worldwide. It’s a catastrophe that didn’t happen, so it won’t get much attention—but it should. There’s an important moral to the story of the attack and its discovery: The security of the global Internet depends on countless obscure pieces of software written and maintained by even more obscure unpaid, distractible, and sometimes vulnerable volunteers. It’s an untenable situation, and one that is being exploited by malicious actors. Yet precious little is being done to remedy it.

Programmers dislike doing extra work. If they can find already-written code that does what they want, they’re going to use it rather than recreate the functionality. These code repositories, called libraries, are hosted on sites like GitHub. There are libraries for everything: displaying objects in 3D, spell-checking, performing complex mathematics, managing an e-commerce shopping cart, moving files around the Internet—everything. Libraries are essential to modern programming; they’re the building blocks of complex software. The modularity they provide makes software projects tractable. Everything you use contains dozens of these libraries: some commercial, some open source and freely available. They are essential to the functionality of the finished software. And to its security.

You’ve likely never heard of an open-source library called XZ Utils, but it’s on hundreds of millions of computers. It’s probably on yours. It’s certainly in whatever corporate or organizational network you use. It’s a freely available library that does data compression. It’s important, in the same way that hundreds of other similar obscure libraries are important.

Many open-source libraries, like XZ Utils, are maintained by volunteers. In the case of XZ Utils, it’s one person, named Lasse Collin. He has been in charge of XZ Utils since he wrote it in 2009. And, at least in 2022, he’s had some “longterm mental health issues.” (To be clear, he is not to blame in this story. This is a systems problem.)

Beginning in at least 2021, Collin was personally targeted. We don’t know by whom, but we have account names: Jia Tan, Jigar Kumar, Dennis Ens. They’re not real names. They pressured Collin to transfer control over XZ Utils. In early 2023, they succeeded. Tan spent the year slowly incorporating a backdoor into XZ Utils: disabling systems that might discover his actions, laying the groundwork, and finally adding the complete backdoor earlier this year. On March 25, Hans Jansen—another fake name—tried to push the various Unix systems to upgrade to the new version of XZ Utils.

And everyone was poised to do so. It’s a routine update. In the span of a few weeks, it would have been part of both Debian and Red Hat Linux, which run on the vast majority of servers on the Internet. But on March 29, another unpaid volunteer, Andres Freund—a real person who works for Microsoft but who was doing this in his spare time—noticed something weird about how much processing the new version of XZ Utils was doing. It’s the sort of thing that could be easily overlooked, and even more easily ignored. But for whatever reason, Freund tracked down the weirdness and discovered the backdoor.

It’s a masterful piece of work. It affects the SSH remote login protocol, basically by adding a hidden piece of functionality that requires a specific key to enable. Someone with that key can use the backdoored SSH to upload and execute an arbitrary piece of code on the target machine. SSH runs as root, so that code could have done anything. Let your imagination run wild.

This isn’t something a hacker just whips up. This backdoor is the result of a years-long engineering effort. The ways the code evades detection in source form, how it lies dormant and undetectable until activated, and its immense power and flexibility give credence to the widely held assumption that a major nation-state is behind this.

If it hadn’t been discovered, it probably would have eventually ended up on every computer and server on the Internet. Though it’s unclear whether the backdoor would have affected Windows and macOS, it would have worked on Linux. Remember in 2020, when Russia planted a backdoor into SolarWinds that affected 14,000 networks? That seemed like a lot, but this would have been orders of magnitude more damaging. And again, the catastrophe was averted only because a volunteer stumbled on it. And it was possible in the first place only because the first unpaid volunteer, someone who turned out to be a national security single point of failure, was personally targeted and exploited by a foreign actor.

This is no way to run critical national infrastructure. And yet, here we are. This was an attack on our software supply chain. This attack subverted software dependencies. The SolarWinds attack targeted the update process. Other attacks target system design, development, and deployment. Such attacks are becoming increasingly common and effective, and also are increasingly the weapon of choice of nation-states.

It’s impossible to count how many of these single points of failure are in our computer systems. And there’s no way to know how many of the unpaid and unappreciated maintainers of critical software libraries are vulnerable to pressure. (Again, don’t blame them. Blame the industry that is happy to exploit their unpaid labor.) Or how many more have accidentally created exploitable vulnerabilities. How many other coercion attempts are ongoing? A dozen? A hundred? It seems impossible that the XZ Utils operation was a unique instance.

Solutions are hard. Banning open source won’t work; it’s precisely because XZ Utils is open source that an engineer discovered the problem in time. Banning software libraries won’t work, either; modern software can’t function without them. For years, security engineers have been pushing something called a “software bill of materials”: an ingredients list of sorts so that when one of these packages is compromised, network owners at least know if they’re vulnerable. The industry hates this idea and has been fighting it for years, but perhaps the tide is turning.

The fundamental problem is that tech companies dislike spending extra money even more than programmers dislike doing extra work. If there’s free software out there, they are going to use it—and they’re not going to do much in-house security testing. Easier software development equals lower costs equals more profits. The market economy rewards this sort of insecurity.

We need some sustainable ways to fund open-source projects that become de facto critical infrastructure. Public shaming can help here. The Open Source Security Foundation (OSSF), founded in 2022 after another critical vulnerability in an open-source library—Log4j—was discovered, addresses this problem. The big tech companies pledged $30 million in funding after the critical Log4j supply chain vulnerability, but they never delivered. And they are still happy to make use of all this free labor and free resources, as a recent Microsoft anecdote indicates. The companies benefiting from these freely available libraries need to actually step up, and the government can force them to.

There’s a lot of tech that could be applied to this problem, if corporations were willing to spend the money. Liabilities will help. The Cybersecurity and Infrastructure Security Agency’s (CISA’s) “secure by design” initiative will help, and CISA is finally partnering with OSSF on this problem. Certainly the security of these libraries needs to be part of any broad government cybersecurity initiative.

We got extraordinarily lucky this time, but maybe we can learn from the catastrophe that didn’t happen. Like the power grid, communications network, and transportation systems, the software supply chain is critical infrastructure, part of national security, and vulnerable to foreign attack. The US government needs to recognize this as a national security problem and start treating it as such.

This essay originally appeared in Lawfare.

Cryptogram: In Memoriam: Ross Anderson, 1956–2024

Last week, I posted a short memorial of Ross Anderson. The Communications of the ACM asked me to expand it. Here’s the longer version.

EDITED TO ADD (4/11): Two weeks before he passed away, Ross gave an 80-minute interview where he told his life story.

Cryptogram: US Cyber Safety Review Board on the 2023 Microsoft Exchange Hack

The US Cyber Safety Review Board released a report on the summer 2023 hack of Microsoft Exchange by China. It was a serious attack by the Chinese government that accessed the emails of senior US government officials.

From the executive summary:

The Board finds that this intrusion was preventable and should never have occurred. The Board also concludes that Microsoft’s security culture was inadequate and requires an overhaul, particularly in light of the company’s centrality in the technology ecosystem and the level of trust customers place in the company to protect their data and operations. The Board reaches this conclusion based on:

  1. the cascade of Microsoft’s avoidable errors that allowed this intrusion to succeed;
  2. Microsoft’s failure to detect the compromise of its cryptographic crown jewels on its own, relying instead on a customer to reach out to identify anomalies the customer had observed;
  3. the Board’s assessment of security practices at other cloud service providers, which maintained security controls that Microsoft did not;
  4. Microsoft’s failure to detect a compromise of an employee’s laptop from a recently acquired company prior to allowing it to connect to Microsoft’s corporate network in 2021;
  5. Microsoft’s decision not to correct, in a timely manner, its inaccurate public statements about this incident, including a corporate statement that Microsoft believed it had determined the likely root cause of the intrusion when in fact, it still has not; even though Microsoft acknowledged to the Board in November 2023 that its September 6, 2023 blog post about the root cause was inaccurate, it did not update that post until March 12, 2024, as the Board was concluding its review and only after the Board’s repeated questioning about Microsoft’s plans to issue a correction;
  6. the Board’s observation of a separate incident, disclosed by Microsoft in January 2024, the investigation of which was not in the purview of the Board’s review, which revealed a compromise that allowed a different nation-state actor to access highly-sensitive Microsoft corporate email accounts, source code repositories, and internal systems; and
  7. how Microsoft’s ubiquitous and critical products, which underpin essential services that support national security, the foundations of our economy, and public health and safety, require the company to demonstrate the highest standards of security, accountability, and transparency.

The report includes a bunch of recommendations. It’s worth reading in its entirety.

The board was established in early 2022, modeled in spirit after the National Transportation Safety Board. This is their third report.

Here are a few news articles.

EDITED TO ADD (4/15): Adam Shostack has some good commentary.

Krebs on Security: Who Stole 3.6M Tax Records from South Carolina?

For nearly a dozen years, residents of South Carolina have been kept in the dark by state and federal investigators over who was responsible for hacking into the state’s revenue department in 2012 and stealing tax and bank account information for 3.6 million people. The answer may no longer be a mystery: KrebsOnSecurity found compelling clues suggesting the intrusion was carried out by the same Russian hacking crew that stole millions of payment card records from big box retailers like Home Depot and Target in the years that followed.

Questions about who stole tax and financial data on roughly three quarters of all South Carolina residents came to the fore last week at the confirmation hearing of Mark Keel, who was appointed in 2011 by Gov. Nikki Haley to head the state’s law enforcement division. If approved, this would be Keel’s third six-year term in that role.

The Associated Press reports that Keel was careful not to release many details about the breach at his hearing, telling lawmakers he knows who did it but that he wasn’t ready to name anyone.

“I think the fact that we didn’t come up with a whole lot of people’s information that got breached is a testament to the work that people have done on this case,” Keel asserted.

A ten-year retrospective published in 2022 by The Post and Courier in Columbia, S.C. said investigators determined the breach began on Aug. 13, 2012, after a state IT contractor clicked a malicious link in an email. State officials said they found out about the hack from federal law enforcement on October 10, 2012.

KrebsOnSecurity examined posts across dozens of cybercrime forums around that time, and found only one instance of someone selling large volumes of tax data in the year surrounding the breach date.

On Oct. 7, 2012 — three days before South Carolina officials say they first learned of the intrusion — a notorious cybercriminal who goes by the handle “Rescator” advertised the sale of “a database of the tax department of one of the states.”

“Bank account information, SSN and all other information,” Rescator’s sales thread on the Russian-language crime forum Embargo read. “If you purchase the entire database, I will give you access to it.”

A week later, Rescator posted a similar offer on the exclusive Russian forum Mazafaka, saying he was selling information from a U.S. state tax database, without naming the state. Rescator said the data exposed included Social Security Number (SSN), employer, name, address, phone, taxable income, tax refund amount, and bank account number.

“There is a lot of information, I am ready to sell the entire database, with access to the database, and in parts,” Rescator told Mazafaka members. “There is also information on corporate taxpayers.”

On Oct. 26, 2012, the state announced the breach publicly. State officials said they were working with investigators from the U.S. Secret Service and digital forensics experts from Mandiant, which produced an incident report (PDF) that was later published by South Carolina Dept. of Revenue. KrebsOnSecurity sought comment from the Secret Service, South Carolina prosecutors, and Mr. Keel’s office. This story will be updated if any of them respond.

On Nov. 18, 2012, Rescator told fellow denizens of the forum Verified he was selling a database of 65,000 records with bank account information from several smaller, regional financial institutions. Rescator’s sales thread on Verified listed more than a dozen database fields, including account number, name, address, phone, tax ID, date of birth, employer and occupation.

Asked to provide more context about the database for sale, Rescator told forum members the database included financial records related to tax filings of a U.S. state. Rescator added that there was a second database of around 80,000 corporations that included social security numbers, names and addresses, but no financial information.

The AP says South Carolina paid $12 million to Experian for identity theft protection and credit monitoring for its residents after the breach.

“At the time, it was one of the largest breaches in U.S. history but has since been surpassed greatly by hacks to Equifax, Yahoo, Home Depot, Target and PlayStation,” the AP’s Jeffrey Collins wrote.

As it happens, Rescator’s criminal hacking crew was directly responsible for the 2013 breach at Target and the 2014 hack of Home Depot. The Target intrusion saw Rescator’s cybercrime shops selling roughly 40 million stolen payment cards, and 56 million cards from Home Depot customers.

Who is Rescator? On Dec. 14, 2023, KrebsOnSecurity published the results of a 10-year investigation into the identity of Rescator, a.k.a. Mikhail Borisovich Shefel, a 36-year-old who lives in Moscow and who recently changed his last name to Lenin.

Mr. Keel’s assertion that somehow the efforts of South Carolina officials following the breach may have lessened its impact on citizens seems unlikely. The stolen tax and financial data appears to have been sold openly on cybercrime forums by one of the Russian underground’s most aggressive and successful hacking crews.

While there are no indications from reviewing forum posts that Rescator ever sold the data, his sales threads came at a time when the incidence of tax refund fraud was skyrocketing.

Tax-related identity theft occurs when someone uses a stolen identity and SSN to file a tax return in that person’s name claiming a fraudulent refund. Victims usually first learn of the crime after having their returns rejected because scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually owed a refund from the U.S. Internal Revenue Service (IRS).

According to a 2013 report from the Treasury Inspector General’s office, the IRS issued nearly $4 billion in bogus tax refunds in 2012, and more than $5.8 billion in 2013. The money largely was sent to people who stole SSNs and other information on U.S. citizens, and then filed fraudulent tax returns on those individuals claiming a large refund but at a different address.

It remains unclear why Shefel has never been officially implicated in the breaches at Target, Home Depot, or in South Carolina. It may be that Shefel has been indicted, and that those indictments remain sealed for some reason. Perhaps prosecutors were hoping Shefel would decide to leave Russia, at which point it would be easier to apprehend him if he believed no one was looking for him.

But all signs are that Shefel is deeply rooted in Russia, and has no plans to leave. In January 2024, authorities in Australia, the United States and the U.K. levied financial sanctions against 33-year-old Russian man Aleksandr Ermakov for allegedly stealing data on 10 million customers of the Australian health insurance giant Medibank.

A week after those sanctions were put in place, KrebsOnSecurity published a deep dive on Ermakov, which found that he co-ran, together with Mikhail Shefel, a Moscow-based IT security consulting business called Shtazi-IT.

A Google-translated version of Shtazi dot ru. Image: Archive.org.

Worse Than Failure: CodeSOD: A Small Partition

Once upon a time, I was tuning a database performance issue. The backing database was an Oracle database, and the key problem was simply that the data needed to be partitioned. Great, easy, I wrote up a change script, applied it to a test environment, gathered some metrics to prove that it had the effects we expected, and submitted a request to apply it to production.

And the DBAs came down on me like a sledgehammer. Why? Well, according to our DBAs, the license we had with Oracle didn't let us use partitioning. The feature wasn't disabled in any way, but when an Oracle compliance check was performed, we'd get dinged and they'd charge us big bucks for having used the feature, and if we wanted to enable it, it'd cost us $10,000 a year, and no one was willing to pay that.

Now, I have no idea how true this actually was. I have no reason to disbelieve the DBAs I was working with, but perhaps they were being overly cautious. But the result is that I had to manually partition the data into different tables. The good news was that all the writes went into the most recent table, almost all of the reads went to either the current table or last month's table, and everything else was basically legacy; while it might be used in a report, if it was slower than the pitch drop experiment, that was fine.

It was stupid, and it sucked, but it wasn't the worst sin I'd ever committed.

Which is why I have at least some sympathy for this stored procedure, found by Ayende.

ALTER PROCEDURE GetDataForDate
   @date DATETIME
AS
   DECLARE @sql nvarchar(max)
   SET @sql = 'select * from data_' + convert(nvarchar(30),@date,112)
   EXEC sp_executesql @sql

Now, this is for an MS SQL database, which does not have any weird licensing around using partitions. But we can see here the manual partitioning in use.

There are a set of data_yyyymmdd tables. When we call this function, it takes the supplied date and writes a query specific to that table. This means that there is a table for every day.
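
For contrast, MS SQL Server has native table partitioning (in every edition since SQL Server 2016 SP1), which would reduce this to one table and one static query. A minimal sketch, with all object names hypothetical:

-- A sketch only; names are made up, and the boundary dates would be
-- extended by a maintenance job rather than by hand.
CREATE PARTITION FUNCTION pfDataByDay (DATETIME)
    AS RANGE RIGHT FOR VALUES ('2024-04-01', '2024-04-02', '2024-04-03');

CREATE PARTITION SCHEME psDataByDay
    AS PARTITION pfDataByDay ALL TO ([PRIMARY]);

CREATE TABLE data (
    created DATETIME NOT NULL,
    payload NVARCHAR(MAX)
) ON psDataByDay (created);

-- The procedure then needs no dynamic SQL at all:
ALTER PROCEDURE GetDataForDate
   @date DATETIME
AS
   SELECT * FROM data
   WHERE created >= @date AND created < DATEADD(DAY, 1, @date)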

Ayende got called in for this because one of the reports was running slowly. This report simply… used all of the tables. It just UNIONed them together. This, of course, removed any benefit of partitioning, and didn't exactly make the query planning engine's job easy, either. The execution paths it generated were not terribly efficient.

At the time Ayende first found it, there were 750 tables. And obviously, as each day ticked past, a new table was created. And yes, someone manually updated the view, every day.
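
The view in question was presumably something along these lines, gaining one more branch every morning (a sketch; the table names are made up):

CREATE OR ALTER VIEW all_data AS
   SELECT * FROM data_20220401
   UNION ALL
   SELECT * FROM data_20220402
   -- ...hundreds more UNION ALL branches, one per day...
   UNION ALL
   SELECT * FROM data_20240415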

Ayende sent this to us many tables ago, and I dread to think how many tables are yet to be created.

365 Tomorrows: To The Flame

Author: Majoki We’ve all heard about light pollution and how the glow from cities and towns obscures the night sky, making it difficult to view stars and planets. Maybe we’ve even learned how our luminescent nightlife affects nocturnal animals, migrating birds, and all manner of insects, confusing them and contributing to their alarming decline. But […]

The post To The Flame appeared first on 365tomorrows.

Cryptogram: X.com Automatically Changing Link Text but Not URLs

Brian Krebs reported that X (formerly known as Twitter) started automatically changing twitter.com links to x.com links. The problem is: (1) it changed any domain name that ended with “twitter.com,” and (2) it only changed the link’s appearance (the anchor text), not the underlying URL. So if you were a clever phisher and registered fedetwitter.com, people would see the link as fedex.com, but it would send people to fedetwitter.com.
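
The mechanics are mundane: a link's visible text is independent of its destination, so rewriting one without the other produces exactly this mismatch. A minimal illustration:

<!-- Renders as "fedex.com" but navigates to fedetwitter.com: -->
<a href="https://fedetwitter.com">fedex.com</a>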

Thankfully, the problem has been fixed.

Planet Debian: Andreas Rönnquist: Status update for Allegro packaging in Debian

I have written to a Debian bug on allegro4.4 describing my reasoning regarding the allegro libraries – in short, allegro4.4 is pretty much dead upstream, and my interest was basically to keep alex4 (which is cool) in Debian, but since it migrated to non-free, my interest in allegro4.4 has waned. So, if anybody would still like to see allegro4.4 in Debian, please step up now and help out. Since it is dead upstream, my reasoning is that it is better to remove it from Debian if no maintainer willing to help steps up.

Previously Tobias Hansen has helped out, but now it is 8 (!) years since his last upload of either package. (Please don’t interpret this as judgement, I am very happy for the help he has provided and all the work he has done on the packages).

Allegro5 is another matter – still active upstream, and I have kept it up to date in Debian. While I have held back the latest upload a short while because of the time_t transition, it will come sooner or later – there I am also waiting on a final decision on this bug from upstream. Other than that, allegro 5 is in a very good state, and I will keep maintaining it as long as I can. But help would of course be appreciated on allegro5 too.

Krebs on Security: Crickets from Chirp Systems in Smart Lock Key Leak

The U.S. government is warning that “smart locks” securing entry to an estimated 50,000 dwellings nationwide contain hard-coded credentials that can be used to remotely open any of the locks. The lock’s maker Chirp Systems remains unresponsive, even though it was first notified about the critical weakness in March 2021. Meanwhile, Chirp’s parent company, RealPage, Inc., is being sued by multiple U.S. states for allegedly colluding with landlords to illegally raise rents.

On March 7, 2024, the U.S. Cybersecurity & Infrastructure Security Agency (CISA) warned about a remotely exploitable vulnerability with “low attack complexity” in Chirp Systems smart locks.

“Chirp Access improperly stores credentials within its source code, potentially exposing sensitive information to unauthorized access,” CISA’s alert warned, assigning the bug a CVSS (badness) rating of 9.1 (out of a possible 10). “Chirp Systems has not responded to requests to work with CISA to mitigate this vulnerability.”

Matt Brown, the researcher CISA credits with reporting the flaw, is a senior systems development engineer at Amazon Web Services. Brown said he discovered the weakness and reported it to Chirp in March 2021, after the company that manages his apartment building started using Chirp smart locks and told everyone to install Chirp’s app to get in and out of their apartments.

“I use Android, which has a pretty simple workflow for downloading and decompiling the APK apps,” Brown told KrebsOnSecurity. “Given that I am pretty picky about what I trust on my devices, I downloaded Chirp and after decompiling, found that they were storing passwords and private key strings in a file.”

Using those hard-coded credentials, Brown found an attacker could then connect to an application programming interface (API) that Chirp uses which is managed by smart lock vendor August.com, and use that to enumerate and remotely lock or unlock any door in any building that uses the technology.

Brown said when he complained to his leasing office, they sold him a small $50 key fob that uses Near-Field Communications (NFC) to toggle the lock when he brings the fob close to his front door. But he said the fob doesn’t eliminate the ability for anyone to remotely unlock his front door using the exposed credentials and the Chirp mobile app.

A smart lock enabled with Chirp. Image: Camdenliving.com

Also, the fobs pass the credentials to his front door over the air in plain text, meaning someone could clone the fob just by bumping against him with a smartphone app made to read and write NFC tags.

Neither August nor Chirp Systems responded to requests for comment. It’s unclear exactly how many apartments and other residences are using the vulnerable Chirp locks, but multiple articles about the company from 2020 state that approximately 50,000 units use Chirp smart locks with August’s API.

Roughly a year before Brown reported the flaw to Chirp Systems, the company was bought by RealPage, a firm founded in 1998 as a developer of multifamily property management and data analytics software. In 2021, RealPage was acquired by the private equity giant Thoma Bravo.

Brown said the exposure he found in Chirp’s products is “an obvious flaw that is super easy to fix.”

“It’s just a matter of them being motivated to do it,” he said. “But they’re part of a private equity company now, so they’re not answerable to anybody. It’s too bad, because it’s not like residents of [the affected] properties have another choice. It’s either agree to use the app or move.”

In October 2022, an investigation by ProPublica examined RealPage’s dominance in the rent-setting software market, and found that the company “uses a mysterious algorithm to help landlords push the highest possible rents on tenants.”

“For tenants, the system upends the practice of negotiating with apartment building staff,” ProPublica found. “RealPage discourages bargaining with renters and has even recommended that landlords in some cases accept a lower occupancy rate in order to raise rents and make more money. One of the algorithm’s developers told ProPublica that leasing agents had ‘too much empathy’ compared to computer generated pricing.”

Last year, the U.S. Department of Justice threw its weight behind a massive lawsuit filed by dozens of tenants who are accusing the $9 billion apartment software company of helping landlords collude to inflate rents.

In February 2024, attorneys general for Arizona and the District of Columbia sued RealPage, alleging RealPage’s software helped create a rental monopoly.

Worse Than Failure: CodeSOD: A Top Level Validator

As oft stated, the specification governing email addresses is complicated, and isn't really well suited for regular expressions. You can get there, but honestly, most applications can get away with checking for something that looks vaguely email-like and calling it a day.
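
For example, a deliberately loose check along these lines (a sketch, not code from Morgan's contract) clears the "vaguely email-like" bar, and a confirmation email does the real validation:

/*
Loose check: something@something.something, with no whitespace.
Deliverability is the mail server's problem, not the regex's.
*/
function isProbablyEmail(str)
{
        return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(str);
}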

Now, as complicated as the "accurate" regex can get, we can certainly find worse regexes for validating emails. Morgan did, while on a contract.

The client side had this lovely regex for validating emails:

/*
Check if a string is in valid email format.
Returns true if valid, false otherwise.
*/
function isEmail(str)
{
        var regex = /^[-_.a-z0-9]+@(([-_a-z0-9]+\.)+(ad|ae|aero|af|ag|ai|al|am|an|ao|aq|ar|arpa|as|at|au|aw|az|ba|bb|bd|be|bf|bg|bh|bi|biz|bj|bm|bn|bo|br|bs|bt|bv|bw|by|bz|ca|cc|cd|cf|cg|ch|ci|ck|cl|cm|cn|co|com|coop|cr|cs|cu|cv|cx|cy|cz|de|dj|dk|dm|do|dz|ec|edu|ee|eg|eh|er|es|et|eu|fi|fj|fk|fm|fo|fr|ga|gb|gd|ge|gf|gh|gi|gl|gm|gn|gov|gp|gq|gr|gs|gt|gu|gw|gy|hk|hm|hn|hr|ht|hu|id|ie|il|in|info|int|io|iq|ir|is|it|jm|jo|jp|ke|kg|kh|ki|km|kn|kp|kr|kw|ky|kz|la|lb|lc|li|lk|lr|ls|lt|lu|lv|ly|ma|mc|md|mg|mh|mil|mk|ml|mm|mn|mo|mp|mq|mr|ms|mt|mu|museum|mv|mw|mx|my|mz|na|name|nc|ne|net|nf|ng|ni|nl|no|np|nr|nt|nu|nz|om|org|pa|pe|pf|pg|ph|pk|pl|pm|pn|pr|pro|ps|pt|pw|py|qa|re|ro|ru|rw|sa|sb|sc|sd|se|sg|sh|si|sj|sk|sl|sm|sn|so|sr|st|su|sv|sy|sz|tc|td|tf|tg|th|tj|tk|tm|tn|to|tp|tr|tt|tv|tw|tz|ua|ug|uk|um|us|uy|uz|va|vc|ve|vg|vi|vn|vu|wf|ws|ye|yt|yu|za|zm|zw)|(([0-9][0-9]?|[0-1][0-9][0-9]|[2][0-4][0-9]|[2][5][0-5])\.){3}([0-9][0-9]?|[0-1][0-9][0-9]|[2][0-4][0-9]|[2][5][0-5]))$/i;
        return regex.test(str);
}

They check the domain against a long list of TLDs to ensure that the email address is potentially valid, or else accept a raw IP address in place of a hostname. Is the list exhaustive? Of course not. There are loads of TLDs not on this list - perhaps not widely used ones, but it's incomplete. And also unnecessary.

But not so unnecessary that they didn't do it twice- they mirrored this code on the server side, in PHP:

function isEmail($email)
{
        return(preg_match("/^[-_.[:alnum:]]+@((([[:alnum:]]|[[:alnum:]][[:alnum:]-]*[[:alnum:]])\.)+(ad|ae|aero|af|ag|ai|al|am|an|ao|aq|ar|arpa|as|at|au|aw|az|ba|bb|bd|be|bf|bg|bh|bi|biz|bj|bm|bn|bo|br|bs|bt|bv|bw|by|bz|ca|cc|cd|cf|cg|ch|ci|ck|cl|cm|cn|co|com|coop|cr|cs|cu|cv|cx|cy|cz|de|dj|dk|dm|do|dz|ec|edu|ee|eg|eh|er|es|et|eu|fi|fj|fk|fm|fo|fr|ga|gb|gd|ge|gf|gh|gi|gl|gm|gn|gov|gp|gq|gr|gs|gt|gu|gw|gy|hk|hm|hn|hr|ht|hu|id|ie|il|in|info|int|io|iq|ir|is|it|jm|jo|jp|ke|kg|kh|ki|km|kn|kp|kr|kw|ky|kz|la|lb|lc|li|lk|lr|ls|lt|lu|lv|ly|ma|mc|md|mg|mh|mil|mk|ml|mm|mn|mo|mp|mq|mr|ms|mt|mu|museum|mv|mw|mx|my|mz|na|name|nc|ne|net|nf|ng|ni|nl|no|np|nr|nt|nu|nz|om|org|pa|pe|pf|pg|ph|pk|pl|pm|pn|pr|pro|ps|pt|pw|py|qa|re|ro|ru|rw|sa|sb|sc|sd|se|sg|sh|si|sj|sk|sl|sm|sn|so|sr|st|su|sv|sy|sz|tc|td|tf|tg|th|tj|tk|tm|tn|to|tp|tr|tt|tv|tw|tz|ua|ug|uk|um|us|uy|uz|va|vc|ve|vg|vi|vn|vu|wf|ws|ye|yt|yu|za|zm|zw)$|(([0-9][0-9]?|[0-1][0-9][0-9]|[2][0-4][0-9]|[2][5][0-5])\.){3}([0-9][0-9]?|[0-1][0-9][0-9]|[2][0-4][0-9]|[2][5][0-5]))$/i"
                        ,$email));
}

Bad code is even better when you have to maintain it in two places and in two languages. I suppose I should just be happy that they're doing some kind of server-side validation.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsVertebrating

Author: Julian Miles, Staff Writer The room is greener than my natural dermal shade in springtime, and the air conditioning is more noisy than effective. Both of which are features of another day on Earth, the quirkiest destination in Cluster 644984, catchily known as ‘The Milky Way’ among the locals. “I hate humans.” I turn […]

The post Vertebrating appeared first on 365tomorrows.

xkcdSurvey Marker

Cory DoctorowCapitalists Hate Capitalism

A scene out of an 11th century tome on demon-summoning called 'Compendium rarissimum totius Artis Magicae sistematisatae per celeberrimos Artis hujus Magistros. Anno 1057. Noli me tangere.' It depicts a demon tormenting two unlucky would-be demon-summoners who have dug up a grave in a graveyard. One summoner is held aloft by his hair, screaming; the other screams from inside the grave he is digging up. The scene has been altered to remove the demon's prominent, urinating penis, to add in a Tesla supercharger, and a red Tesla Model S nosing into the scene.

Today for my podcast, I read Capitalists Hate Capitalism, my latest column from Locus Magazine. It’s a meditation on the difference between feudalism and capitalism, and how to know which one you’re living under.

I recorded this on a day when I was home between book-tour stops (I’m out with my new techno crime-thriller, The Bezzle). Catch me this Wednesday (Apr 17) in Chicago at Anderson’s Books, then in Torino for the Biennale keynote on Apr 21, then in Marin County at Book Passage Corte Madera on Apr 27, then in Winnipeg, Calgary, Vancouver, and beyond! The canonical link for the schedule is here.


Varoufakis’s argument turns on an important distinction between two types of income: profits and rents. These terms have colloquial meanings that are widely understood, but Varoufakis is interested in the precise technical definitions used by economists.

For an economist, “profit” is income obtained by mixing capital – tools, machines, systems – with your employees’ labor. The value created by that labor is then divided between the worker, who draws a wage, and the capitalist, who takes the rest as profit.

“Rent,” meanwhile, is income derived from owning something that the capitalist needs in order to realize a profit. In feudal times, hereditary lords owned plots of land that serfs were bound to, and those serfs owed an annual rent to their lords. This wasn’t a great deal for the serfs, but it also needled the nascent capitalist class, who would have very much preferred to have those lands enclosed for sheep grazing. The sheep would produce wool, which could be woven into cloth in the “dark, Satanic mills” of the industrial revolution. The former serfs, turned off their land, could be set to work in those factories.


MP3


Here’s that tour schedule!

17 Apr: Anderson’s Books, Chicago, 19h:
https://www.andersonsbookshop.com/event/cory-doctorow-1

19-21 Apr: Torino Biennale Tecnologia
https://www.turismotorino.org/en/experiences/events/biennale-tecnologia

2 May, Canadian Centre for Policy Alternatives, Winnipeg
https://www.eventbrite.ca/e/cory-doctorow-tickets-798820071337

3 May, Wordfest, Calgary
https://wordfest.com/2024/event/wordfest-presents-cory-doctorow-2/

4 May, Massy Arts, Vancouver
https://www.eventbrite.ca/e/solo-reading-cory-doctorow-the-bezzle-tickets-876989167207

5-11 May: Tartu Prima Vista Literary Festival
https://tartu2024.ee/en/kirjandusfestival/

6-9 Jun: Media Ecology Association keynote, Amherst, NY
https://media-ecology.org/convention

(Image: Steve Jurvetson, CC BY 2.0, modified)


Cryptogram Using AI-Generated Legislative Amendments as a Delaying Technique

Canadian legislators proposed 19,600 amendments—almost certainly AI-generated—to a bill in an attempt to delay its adoption.

I wrote about many different legislative delaying tactics in A Hacker’s Mind, but this is a new one.

Cryptogram New Lattice Cryptanalytic Technique

A new paper presents a polynomial-time quantum algorithm for solving certain hard lattice problems. This could be a big deal for post-quantum cryptographic algorithms, since many of them base their security on hard lattice problems.

A few things to note. One, this paper has not yet been peer reviewed. As this comment points out: “We had already some cases where efficient quantum algorithms for lattice problems were discovered, but they turned out not being correct or only worked for simple special cases.”

Two, this is a quantum algorithm, which means that it has not been tested. There is a wide gulf between quantum algorithms in theory and in practice. And until we can actually code and test these algorithms, we should be suspicious of their speed and complexity claims.

And three, I am not surprised at all. We don’t have nearly enough analysis of lattice-based cryptosystems to be confident in their security.

Planet DebianPetter Reinholdtsen: Time to move orphaned Debian packages to git

There are several packages in Debian without an associated git repository containing the packaging history. This is unfortunate and it would be nice if more of them had one. Quite a lot of these are without a maintainer, i.e. listed as maintained by the 'Debian QA Group' placeholder. In fact, 438 packages have this property according to UDD (SELECT source FROM sources WHERE release = 'sid' AND (vcs_url ilike '%anonscm.debian.org%' OR vcs_browser ilike '%anonscm.debian.org%' or vcs_url IS NULL OR vcs_browser IS NULL) AND maintainer ilike '%packages@qa.debian.org%';). Such packages can be updated without much coordination by any Debian developer, as they are considered orphaned.

To try to improve the situation and reduce the number of packages without an associated git repository, I started a few days ago to search out candidates and provide them with a git repository under the 'debian' collaborative Salsa project. I started with the packages pointing to obsolete Alioth git repositories, and am now working my way across the ones completely without git references. In addition to updating the Vcs-* debian/control fields, I try to update Standards-Version, the debhelper compat level, simplify d/rules, switch to Rules-Requires-Root: no and fix reported lintian issues. I only implement changes that are trivial, to avoid spending too much time on each orphaned package. So far my experience is that it takes approximately 20 minutes to convert a package without any git references, and a lot more for packages with existing git repositories incompatible with git-buildpackage.

So far I have converted 10 packages, and I will keep going until I run out of steam. As should be clear from the numbers, there are enough packages remaining for more people to do the same without stepping on each other's toes. I find it useful to start by searching for a git repo already on Salsa, as sometimes a git repo has already been created but no new version has been uploaded to Debian yet. In those cases I start with the existing git repository. I convert to the git-buildpackage+pristine-tar workflow, and ensure a debian/gbp.conf file with "pristine-tar=True" is added early (see the sketch below), to avoid uploading an orig.tar.gz with the wrong checksum by mistake. I did that three times at the beginning before I caught on to my mistake.
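
For reference, the early debian/gbp.conf addition mentioned above is tiny – a sketch of the relevant stanza:

[DEFAULT]
pristine-tar = True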

So, if you are a Debian Developer with some spare time, perhaps consider migrating some orphaned packages to git?

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Cryptogram Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

  • I’m speaking twice at RSA Conference 2024 in San Francisco. I’ll be on a panel on software liability on May 6, 2024 at 8:30 AM, and I’m giving a keynote on AI and democracy on May 7, 2024 at 2:25 PM.

The list is maintained on this page.

365 TomorrowsI, Chaos Machinist

Author: Guy Lingham My job as a chaos machinist is simple: I inject failure. I’m unleashed upon a system to disrupt its dependencies and tease out its vulnerabilities. It’s all about building resiliency. Chaos exists everywhere, in everything, so better to break and fix things now, before they’re broken for you. The practice used to […]

The post I, Chaos Machinist appeared first on 365tomorrows.


David BrinAccelerationism and the recurring flaw in human civilizations

Youtuber Joe Scott runs a series of entertaining commentaries that mix a bit of gonzo with big picture perspectives on science or history or culture.  Not quite as big-picture as I’d like, but still, better than most impatient digital citizens are used to. In this above-average episode, he discusses ‘accelerationism,’ which used to stand for efforts to accelerate the pace of human progress… 

… only now its more frequent use is among cults of rightwing zillionaires and their sycophants (and there is a version on the far left!) who actively aim to tip our current civilization into upheaval and crisis. 


Why would anyone want to do such a thing?  


Well, I’ve been pointing at this trend toward elitist solipsism ever since writing The Postman, way back in the eighties, a tale that featured devastating wars provoked by ‘the Mystic of Leningrad’ and by a homegrown, all-American fascist movement led by ‘Nathan Holn.’ And yes, both forecasts struck way close to today’s marks! 


...though I confess I was partly inspired by Robert Heinlein’s projection of a USA that toppled into fundamentalist theocracy under “Nehemiah Scudder.” 


Seriously, see what Heinlein had to say about the nasty current that seeps erosively through one aspect of our national character; his perspective might surprise you.


As Joe Scott explains: under this perversely sick New Accelerationism, some current ‘prepper’ zillionaire-romantics figure ‘it’s all gonna blow inevitably anyway, so let’s force it sooner, get it over with, while there’s still some control.’  *

Sure. Control by you guys, of course, expecting to emerge from your secret castle-enclaves to cheers and rapturous obeisance by the desperate survivors, who will obediently revert to proper peasantry, while helping you crush all the remaining nerds. Nerds and fact-people who will – naturally - get all the blame, as in Walter Miller's SF classic A Canticle for Leibowitz. 

Elsewhere I show how much likelier it is that survivors will side with the nerds, joining us to crack open every luscious, luxurious prepper castle, like a fresh nut…


… because the prepper lords deliberately incited and accelerated a crisis that never had to happen, in the first place.



== Things are only 'bad' because villains want them to be ==


Seriously. Our current ‘crisis’ is almost completely made up!  If weighed according to actual facts and capabilities, we are right now in pretty good shape. And not just by superficialities, like the best economy in 25+ years.  


By almost any statistical metric, we’re making fantastic progress against world poverty, for example, and getting a helluva lot done (if not yet enough!) toward saving the planet.  While crime and other turpitudes are rising in red-run America (except Utah), they are declining in most blue areas (excluding sores like Illinois). 


In fact, nearly all of the social war taking place right now in the USA is about made-up stuff – like woke-ist symbol bullying and fact-averse Foxite diatribes. What it boils down to is deliberate incitement of an old psychological sickness that has simmered at the heart of America since its founding. 


I refer to a cultural rift between diametrically opposed elements of our national character… a majority who approve generally of modernity... vs. a large, persistent and vocal minority who are always (sometimes violently) nostalgic for one form or another of feudalism...


... leading to my own diagnosis that we are in Phase Eight of the ever-recurring, 240-year American Civil War.


Hm… I just said recurring. Does that mean I credit cyclical history?

 

Bite your tongue for suggesting it!



== A holy writ of MAGA ==


During the cited video, Joe Scott holds up a book that has become a quasi-religious tome, clutched desperately by so many modern confederates: The Fourth Turning, and its sequel The Fourth Turning Is Here. While soft-pedaling politics, Scott telegraphs his own skepticism and dislike of this maniacally insane cult. 


I’m more open (less subtle) than Scott, in despising these cult tomes of deeply-sick pareidolia. Elsewhere I weigh in, showing how romantics of “the Right” almost always clutch memes of cyclical history, as mystics and feudalists have for 6000 years. 


This noxious allure was especially strong among both Confederates and Nazis. It seems as if that kind of character draws comfort from shrugs that actual human progress inevitably fails. They desperately cling to a notion that “what goes around, comes around…” 


…despite the fact that none of it - at any level, in any way - is supportable by actual facts. 


I reiterate that! And offer major wager stakes. There are no elements of the Fourth Turning cult that aren’t at best delusional and at worst outright propaganda aimed at demoralizing the one human civilization that ever truly broke away from 6000 years of brutally stoopid feudalism. The first to not only make Progress very real, but give our children a real chance at survival.


Giving them a very real chance at the stars.


And so -- and especially in light of the Civil War flick that's right now in theaters -- do we stand a chance of preventing spoiled-rich ‘accelerationists’ and their sycophant toadies and blackmailed politicals from deliberately inciting (accelerating) a bloody Ninth Phase of the American Civil War?  


(You can help in small ways, like preventing any of your friends from joining yet another hemorrhage of frippy sanctimony-preeners, running off to the next Nader-Stein-Cornel distraction treason. And yes, we now know these lefty masturbatory distractions are often Kremlin-based.)


 But know what’s seriously needed? YOU can help by...


… becoming a volunteer poll worker during the 2024 election year. Recruit one. Be one. You’ll be doing your share.


And now... to make some of you furious with me...!



== Please Bill, Use This Word! ==


Yeah yeah, I know some of you will be triggered by this name I am about to type here. You shouldn't be.


I assert that one word would help Bill Maher immensely. 


"Tactics." 


If he said even more clearly: "We share the same general goals of tolerance, diversity, justice, a level-playing field, compassion, rights, peace... and we share an overall strategy of seeking all those good things through continuously-improving, open/transparent and calmly-debated democracy!


"Where we differ is over pragmatic methods. I assert that these fine goals and our shared strategy are NOT helped by some of the left's tactics. Especially those that bully and drive away allies! Tactics that make our coalition smaller rather than bigger and overwhelming, as blue waves needed to be in the 1770s, in the 1860s. In the 1940s and 1960s... 


"...leading to victories at Yorktown, at Appomattox and Berlin. In Selma and in the dreams that came largely - if never completely true - that were preached on the Lincoln steps by Martin Luther King.


"Another such blue wave is desperately-needed right now!  But it is hindered, rather than helped, by tactics that promote a cult of fragility - a cult of personal outrage over symbolic purities! A cult that offers sanctimony-addicts a luscious rush, but that doesn't at all help to achieve the pragmatic, incremental progress sought by Gandhi and MLK. 


"Worst of all? Refusal ever to admit that any particular, favorite tactic might be wrongheaded, counterproductive or in need of critical adjustment...


"...and that includes demonizing allies like me, who point out possible tactical errors."


Okay, I admit it's futile. Maher is too caught up in the smarmy, smartass side of his schtick ever to say that.  


But let me ask this: are YOU qualified to judge, when you can't tell the difference between an ally who's an irksome smartass...


...and a revived confederacy that's biliously allied with Nazis and the Kremlin and Putin and his slightly-relabeled, fascist+neo-commie KGB, aiming for our downfall and the end of everything that actually made America great?


Want a litmus for this phase of civil war? Anyone prioritizing sanctimony over that broad, overwhelmingly victorious blue wave is at best a sucker and useless. 


And at-worst...



====================================




AFTER-THOUGHTS:


*  Triggering a crisis early, in order to minimize the overall effects, was the theme of Donald Bensen's 1979 Campbell Award-nominated novel And Having Writ....


And yes, I openly declare again that EVERY ASPECT of the Fourth Turning cult is a lie. There are zero validities whatsoever. And anyone citing that pile of steaming donkey-doo must be confronted with... wagers.

Planet DebianSimon Josefsson: Reproducible and minimal source-only tarballs

With the release of Libntlm version 1.8, the release tarball can be reproduced on several distributions. We also publish a signed minimal source-only tarball, produced by git-archive, which is the same format used by Savannah, Codeberg, GitLab, GitHub and others. Reproducibility of both tarballs is tested continuously for regressions on GitLab through a CI/CD pipeline. If that wasn’t enough to excite you, the Debian packages of Libntlm are now built from the reproducible minimal source-only tarball. The resulting binaries are hopefully reproducible on several architectures.

What does that even mean? Why should you care? How you can do the same for your project? What are the open issues? Read on, dear reader…

This article describes my practical experiments with reproducible release artifacts, following up on my earlier thoughts that lead to discussion on Fosstodon and a patch by Janneke Nieuwenhuizen to make Guix tarballs reproducible that inspired me to some practical work.

Let’s look at how a maintainer releases some software, and how a user can reproduce the released artifacts from the source code. Libntlm provides a shared library written in C and uses GNU Make, GNU Autoconf, GNU Automake, GNU Libtool and gnulib for build management, but these ideas should apply to most projects and build systems. The following illustrates the steps a maintainer would take to prepare a release:

git clone https://gitlab.com/gsasl/libntlm.git
cd libntlm
git checkout v1.8
./bootstrap
./configure
make distcheck
gpg -b libntlm-1.8.tar.gz

The generated files libntlm-1.8.tar.gz and libntlm-1.8.tar.gz.sig are published, and users download and use them. This is how the GNU project has been doing releases since the late 1980s. That is a testament to how successful this pattern has been! These tarballs contain source code and some generated files: typically shell scripts generated by autoconf, makefile templates generated by automake, and documentation in formats like Info, HTML, or PDF. Rarely do they contain binary object code, but historically that happened.

The XZUtils incident illustrates that tarballs containing files that are not in the git archive offer an opportunity to disguise malicious backdoors. I blogged earlier about how to mitigate this risk by using signed minimal source-only tarballs.

The risk of hiding malware is not the only motivation to publish signed minimal source-only tarballs. With pre-generated content in tarballs, there is a risk that GNU/Linux distributions such as Trisquel, Guix, Debian/Ubuntu or Fedora ship generated files coming from the tarball into the binary *.deb or *.rpm package file. Typically the person packaging the upstream project never realized that some installed artifacts were not rebuilt through a typical autoreconf -fi && ./configure && make install sequence, and never wrote the code to rebuild everything. This can also happen if the build rules are written but are buggy, shipping the old artifact. When a security problem is found, this can lead to time-consuming situations, as it may be that patching the relevant source code and rebuilding the package is not sufficient: the vulnerable generated object from the tarball would be shipped into the binary package instead of a rebuilt artifact. For architecture-specific binaries this rarely happens, since object code is usually not included in tarballs — although for 10+ years I shipped the binary Java JAR file in the GNU Libidn release tarball, until I stopped shipping it. For interpreted languages, and especially for generated content such as HTML, PDF and shell scripts, this happens more often than you would like.

Publishing minimal source-only tarballs enables easier auditing of a project’s code, avoiding the need to read through all generated files looking for malicious content. I have taken care to generate the source-only minimal tarball using git-archive. This is the same format that GitLab, GitHub and others offer for the automated download links on git tags. The minimal source-only tarballs can thus serve as a way to audit GitLab and GitHub download material! Consider if/when hosting sites like GitLab or GitHub have a security incident that causes generated tarballs to include a backdoor that is not present in the git repository. If people rely on the tag download artifact without verifying the maintainer PGP signature using GnuPG, this can lead to backdoor scenarios similar to the one we had for XZUtils, but originating with the hosting provider instead of the release manager. This is even more concerning, since such an attack can be mounted for some selected IP address that you want to target, and not on everyone, thereby making it harder to discover.

With all that discussion and rationale out of the way, let’s return to the release process. I have added another step here:

make srcdist
gpg -b libntlm-1.8-src.tar.gz

Now the release is ready. I publish these four files in the Libntlm’s Savannah Download area, but they can be uploaded to a GitLab/GitHub release area as well. These are the SHA256 checksums I got after building the tarballs on my Trisquel 11 aramo laptop:

91de864224913b9493c7a6cec2890e6eded3610d34c3d983132823de348ec2ca  libntlm-1.8-src.tar.gz
ce6569a47a21173ba69c990965f73eb82d9a093eb871f935ab64ee13df47fda1  libntlm-1.8.tar.gz

So how can you reproduce my artifacts? Here is how to reproduce them in a Ubuntu 22.04 container:

podman run -it --rm ubuntu:22.04
apt-get update
apt-get install -y --no-install-recommends autoconf automake libtool make git ca-certificates
git clone https://gitlab.com/gsasl/libntlm.git
cd libntlm
git checkout v1.8
./bootstrap
./configure
make dist srcdist
sha256sum libntlm-*.tar.gz

You should see the exact same SHA256 checksum values. Hooray!

This works because Trisquel 11 and Ubuntu 22.04 use the same versions of git, autoconf, automake, and libtool. These tools do not guarantee the same output content across versions, similar to how GNU GCC does not generate the same binary output for all versions. So there is still some delicate version pairing needed.

Ideally, the artifacts should be possible to reproduce from the release artifacts themselves, and not only directly from git. It is possible to reproduce the full tarball in an AlmaLinux 8 container – replace almalinux:8 with rockylinux:8 if you prefer RockyLinux:

podman run -it --rm almalinux:8
dnf update -y
dnf install -y make wget gcc
wget https://download.savannah.nongnu.org/releases/libntlm/libntlm-1.8.tar.gz
tar xfa libntlm-1.8.tar.gz
cd libntlm-1.8
./configure
make dist
sha256sum libntlm-1.8.tar.gz

The source-only minimal tarball can be regenerated on Debian 11:

podman run -it --rm debian:11
apt-get update
apt-get install -y --no-install-recommends make git ca-certificates
git clone https://gitlab.com/gsasl/libntlm.git
cd libntlm
git checkout v1.8
make -f cfg.mk srcdist
sha256sum libntlm-1.8-src.tar.gz 

As the magnum opus or chef-d’œuvre, let’s recreate the full tarball directly from the minimal source-only tarball on Trisquel 11 – replace docker.io/kpengboy/trisquel:11.0 with ubuntu:22.04 if you prefer.

podman run -it --rm docker.io/kpengboy/trisquel:11.0
apt-get update
apt-get install -y --no-install-recommends autoconf automake libtool make wget git ca-certificates
wget https://download.savannah.nongnu.org/releases/libntlm/libntlm-1.8-src.tar.gz
tar xfa libntlm-1.8-src.tar.gz
cd libntlm-v1.8
./bootstrap
./configure
make dist
sha256sum libntlm-1.8.tar.gz

Yay! You should now have great confidence in that the release artifacts correspond to what’s in version control and also to what the maintainer intended to release. Your remaining job is to audit the source code for vulnerabilities, including the source code of the dependencies used in the build. You no longer have to worry about auditing the release artifacts.

I find it somewhat amusing that the build infrastructure for Libntlm is now in a significantly better place than the code itself. Libntlm is written in old C style with plenty of string manipulation and uses broken cryptographic algorithms such as MD4 and single-DES. Remember folks: solving supply chain security issues has no bearing on what kind of code you eventually run. A clean gun can still shoot you in the foot.

Side note on naming: GitLab exports tarballs with pathnames libntlm-v1.8/ (i.e., PROJECT-TAG/) and I’ve adopted the same pathnames, which means my libntlm-1.8-src.tar.gz tarballs are bit-by-bit identical to GitLab’s exports and you can verify this with tools like diffoscope. GitLab names the tarball libntlm-v1.8.tar.gz (i.e., PROJECT-TAG.ARCHIVE), which I find too similar to the libntlm-1.8.tar.gz that we also publish. GitHub uses the same git archive style, but unfortunately they have logic that removes the ‘v’ in the pathname, so you will get a tarball with pathname libntlm-1.8/ instead of the libntlm-v1.8/ that GitLab and I use. The content of the tarball is bit-by-bit identical, but the pathname and archive name differ. Codeberg (running Forgejo) uses another approach: the tarball is called libntlm-v1.8.tar.gz (after the tag) just like GitLab’s, but the pathname inside the archive is libntlm/; otherwise the produced archive is bit-by-bit identical, including timestamps. Savannah’s CGIT interface uses the archive name libntlm-1.8.tar.gz with pathname libntlm-1.8/, but otherwise the file content is identical. Savannah’s GitWeb interface provides snapshot links that are named after the git commit (e.g., libntlm-a812c2ca.tar.gz with libntlm-a812c2ca/) and I cannot find any tag-based download links at all. Overall, we are so close to getting the SHA256 checksums to match, but fail on the pathname within the archive. I’ve chosen to be compatible with GitLab regarding the content of tarballs but not on archive naming. From a simplicity point of view, it would be nice if everyone used PROJECT-TAG.ARCHIVE for the archive filename and PROJECT-TAG/ for the pathname within the archive. This aspect will probably need more discussion.

Side note on git archive output: It seems different versions of git archive produce different results for the same repository. The versions of git in Debian 11, Trisquel 11 and Ubuntu 22.04 behave the same. The versions of git in Debian 12, AlmaLinux/RockyLinux 8/9, Alpine, ArchLinux, macOS homebrew, and the upcoming Ubuntu 24.04 behave in another way. Hopefully this will not change that often, but a change would invalidate reproducibility of these tarballs in the future, forcing you to use an old git release to reproduce the source-only tarball. Alas, GitLab and most other sites appear to be using modern git, so the download tarballs from them will not match my tarballs – even though the content would.

Side note on ChangeLog: ChangeLog files were traditionally manually curated files with the version history for a package. In recent years, several projects moved to generating them dynamically from git history (using tools like git2cl or gitlog-to-changelog). This has consequences for the reproducibility of tarballs: you need to have the entire git history available! The gitlog-to-changelog tool also produces different output depending on the time zone of the person using it, which arguably is a simple bug that can be fixed. However, this entire approach is incompatible with rebuilding the full tarball from the minimal source-only tarball. It seems Libntlm’s ChangeLog file died on the surgery table here.

So how would a distribution build these minimal source-only tarballs? I happen to help with the libntlm package in Debian. It has historically used the generated tarballs as the source code to build from. This means that code coming from gnulib is vendored in the tarball. When a security problem is discovered in gnulib code, the security team needs to patch all packages that include that vendored code and rebuild them, instead of merely patching the gnulib package and rebuilding all packages that rely on that particular code. To change this, the Debian libntlm package needs to Build-Depend on Debian’s gnulib package. But there was one problem: similar to most projects that use gnulib, Libntlm depends on a particular git commit of gnulib, and Debian only ships one commit. There is no coordination about which commit to use. I have adopted gnulib in Debian, and added a git bundle to the *_all.deb binary package so that projects that rely on gnulib can pick whatever commit they need. This allows a no-network GNULIB_URL and GNULIB_REVISION approach when running Libntlm’s ./bootstrap with the Debian gnulib package installed. Otherwise libntlm would pick up whatever latest version of gnulib Debian happened to have in the gnulib package, which is not what the Libntlm maintainer intended to be used, and can lead to all sorts of version mismatches (and consequently security problems) over time. Libntlm in Debian is developed and tested on Salsa and there is continuous integration testing of it as well, thanks to the Salsa CI team.

Side note on git bundles: unfortunately there appears to be no reproducible way to export a git repository into one or more files. So one unfortunate consequence of all this work is that the gnulib *.orig.tar.gz tarball in Debian is not reproducible any more. I have tried to get Git bundles to be reproducible but I never got it to work — see my notes in gnulib’s debian/README.source on this aspect. Of course, source tarball reproducibility has nothing to do with binary reproducibility of gnulib in Debian itself, fortunately.

One open question is how to deal with the increased build dependencies that are triggered by this approach. Some people are surprised by this, but I don’t see how to get around it: if you depend on source code for tools in another package to build your package, it is a bad idea to hide that dependency. We’ve done it for a long time through vendored code in non-minimal tarballs. Libntlm isn’t the most critical project from a bootstrapping perspective, so adding git and gnulib as Build-Depends to it will probably be fine. However, consider if this pattern were used for other packages that use gnulib, such as coreutils, gzip, tar and bison (all are using gnulib); then they would all Build-Depend on git and gnulib. Cross-building those packages for a new architecture will therefore require git on that architecture first, which gets circular quickly. The dependency on gnulib is real, so I don’t see that going away, and gnulib is an Architecture: all package. However, the dependency on git is merely a consequence of how the Debian gnulib package chose to make all gnulib git commits available to projects: through a git bundle. There are other ways to do this that don’t require the git tool to extract the necessary files, but none that I found practical — ideas welcome!

Finally, some brief notes on how this was implemented. Enabling bootstrappable source-only minimal tarballs via gnulib’s ./bootstrap is achieved by using the GNULIB_REVISION mechanism, locking down the gnulib commit used. I have always disliked git submodules because they add extra steps and have complicated interactions with CI/CD. The reason I gave up git submodules now is that the particular commit to use is not recorded in the git archive output when git submodules are used. So the particular gnulib commit has to be mentioned explicitly in some source code that goes into the git archive tarball. Colin Watson added the GNULIB_REVISION approach to ./bootstrap back in 2018, and now it no longer made sense to continue to use a gnulib git submodule. One alternative is to use ./bootstrap with --gnulib-srcdir or --gnulib-refdir if there is some practical problem with pointing GNULIB_URL at a git bundle alongside GNULIB_REVISION in bootstrap.conf.

The srcdist make rule is simple:

git archive --prefix=libntlm-v1.8/ -o libntlm-v1.8.tar.gz HEAD

Making the make dist generated tarball reproducible can be more complicated; however, for Libntlm it was sufficient to make sure the modification times of all files were set deterministically to the timestamp of the last commit in the git repository. Interestingly, there seem to be a couple of different ways to accomplish this. Guix doesn’t support minimal source-only tarballs but relies on a .tarball-timestamp file inside the tarball. Paul Eggert explained the approach TZDB uses some time ago. The approach I’m using now is fairly similar to the one I suggested over a year ago; a sketch of the general idea follows below. If there are problems because all files in the tarball now use the same modification time, there is a solution by Bruno Haible that could be implemented.
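
As a sketch of the general idea – illustrative only, not Libntlm’s actual rule, and the variable name is made up – an automake dist-hook can clamp every distributed file’s mtime to the last commit’s timestamp:

# Hypothetical Makefile.am fragment.
TARBALL_TIMESTAMP = $(shell git log -1 --format=%cI)
dist-hook:
	find $(distdir) -exec touch --date='$(TARBALL_TIMESTAMP)' '{}' +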

Side note on git tags: Some people may wonder why not verify a signed git tag instead of verifying a signed tarball of the git archive. Currently most git repositories use SHA-1 for git commit identities, but SHA-1 is not a secure hash function. While current SHA-1 attacks can be detected and mitigated, there are fundamental doubts that a git SHA-1 commit identity uniquely refers to the same content that was intended. Verifying a git tag will never offer the same assurance, since a git tag can be moved or re-signed at any time. Verifying a git commit is better, but then we need to trust SHA-1. Migrating git to SHA-256 would resolve this aspect, but most hosting sites such as GitLab and GitHub do not support this yet. There are other advantages to using signed tarballs instead of signed git commits or git tags as well; e.g., tar.gz can be a deterministically reproducible, persistent, stable offline storage format, but .git sub-directory trees or git bundles do not offer this property.

Doing continuous testing of all this is critical to make sure things don’t regress. Libntlm’s pipeline definition now produces the generated libntlm-*.tar.gz tarballs and a checksum as a build artifact. Then I added the 000-reproducability job, which compares the checksums and fails on mismatches. You can read its delicate output in the job for the v1.8 release. Right now we insist that builds on Trisquel 11 match Ubuntu 22.04, that PureOS 10 builds match Debian 11 builds, that AlmaLinux 8 builds match RockyLinux 8 builds, and that AlmaLinux 9 builds match RockyLinux 9 builds. As you can see in the pipeline job output, not all platforms lead to the same tarballs, but hopefully this state can be improved over time. There is also partial reproducibility, where the full tarball is reproducible across two distributions but not the minimal tarball, or vice versa.

If this way of working plays out well, I hope to implement it in other projects too.

What do you think? Happy Hacking!

Planet DebianPaul Tagliamonte: Domo Arigato, Mr. debugfs

Years ago, at what I think I remember was DebConf 15, I hacked for a while on debhelper to write build-ids to debian binary control files, so that the build-id (more specifically, the ELF note .note.gnu.build-id) wound up in the Debian apt archive metadata. I’ve always thought this was super cool, and seeing as how Michael Stapelberg blogged some great pointers around the ecosystem, including the fancy new debuginfod service, and the find-dbgsym-packages helper, which uses these same headers, I don’t think I’m the only one.

At work I’ve been using a lot of rust, specifically, async rust using tokio. To try and work on my style, and to dig deeper into the how and why of the decisions made in these frameworks, I’ve decided to hack up a project that I’ve wanted to do ever since 2015 – write a debug filesystem. Let’s get to it.

Back to the Future

Time to admit something. I really love Plan 9. It’s just so good. So many ideas from Plan 9 are just so prescient, and everything just feels right. Not just right like, feels good – like, correct. The bit that I’ve always liked the most is 9p, the network protocol for serving a filesystem over a network. This leads to all sorts of fun programs, like the Plan 9 ftp client being a 9p server – you mount the ftp server and access files like any other files. It’s kinda like if fuse were more fully a part of how the operating system worked, but fuse is all running client-side. With 9p there’s a single client, and different servers that you can connect to, which may be backed by a hard drive, remote resources over something like SFTP, FTP, HTTP or even purely synthetic.

The interesting (maybe sad?) part here is that 9p wound up outliving Plan 9 in terms of adoption – 9p is in all sorts of places folks don’t usually expect. For instance, the Windows Subsystem for Linux uses the 9p protocol to share files between Windows and Linux. ChromeOS uses it to share files with Crostini, and qemu uses 9p (virtio-9p) to share files between guest and host. If you’re noticing a pattern here, you’d be right; for some reason 9p is the go-to protocol to exchange files between hypervisor and guest. Why? I have no idea, except maybe that it’s well designed, simple to implement, and makes it a lot easier to validate the data being shared and enforce security boundaries. Simplicity has its value.

As a result, there’s a lot of lingering 9p support kicking around. Turns out Linux can even handle mounting 9p filesystems out of the box. This means that I can deploy a filesystem to my LAN or my localhost by running a process on top of a computer that needs nothing special, and mount it over the network on an unmodified machine – unlike fuse, where you’d need client-specific software to run in order to mount the directory. For instance, let’s mount a 9p filesystem running on my localhost machine, serving requests on 127.0.0.1:564 (tcp) that goes by the name “mountpointname” to /mnt.

$ mount -t 9p \
-o trans=tcp,port=564,version=9p2000.u,aname=mountpointname \
127.0.0.1 \
/mnt

Linux will mount away, and attach to the filesystem as the root user, and by default, attach to that mountpoint again for each local user that attempts to use it. Nifty, right? I think so. The server is able to keep track of per-user access and authorization along with the host OS.

WHEREIN I STYX WITH IT

Since I wanted to push myself a bit more with rust and tokio specifically, I opted to implement the whole stack myself, without third party libraries on the critical path where I could avoid it. The 9p protocol (sometimes called Styx, the original name for it) is incredibly simple. It’s a series of client to server requests, which receive a server to client response. These are, respectively, “T” messages, which transmit a request to the server, and which trigger an “R” message in response (Reply messages). These messages are TLV payloads with a very straightforward structure – so straightforward, in fact, that I was able to implement a working server off nothing more than a handful of man pages.

Later on after the basics worked, I found a more complete spec page that contains more information about the unix specific variant that I opted to use (9P2000.u rather than 9P2000) due to the level of Linux specific support for the 9P2000.u variant over the 9P2000 protocol.
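
To give a sense of how simple the framing is: every message is a little-endian size[4] type[1] tag[2] header followed by a type-specific body. Here’s a minimal sketch of decoding that envelope (illustrative only – this is not arigato’s actual parser):

/// Sketch of the 9p message envelope. `size` covers the whole message,
/// including these seven header bytes; `mtype` identifies the message
/// (e.g. 100 = Tversion, 101 = Rversion); `tag` pairs an R-message with
/// the T-message that caused it.
struct Header {
    size: u32,
    mtype: u8,
    tag: u16,
}

fn parse_header(buf: &[u8]) -> Option<Header> {
    if buf.len() < 7 {
        return None;
    }
    Some(Header {
        size: u32::from_le_bytes([buf[0], buf[1], buf[2], buf[3]]),
        mtype: buf[4],
        tag: u16::from_le_bytes([buf[5], buf[6]]),
    })
}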

MR ROBOTO

The backend stack over at zoo is rust and tokio running i/o for an HTTP and WebRTC server. I figured I’d pick something fairly similar to write my filesystem with, since 9P can be implemented on basically anything with I/O. That means tokio tcp server bits, which construct and use a 9p server, which has an idiomatic Rusty API that partially abstracts the raw R and T messages, but not so much as to cause issues with hiding implementation possibilities. At each abstraction level, there’s an escape hatch – allowing someone to implement any of the layers if required. I called this framework arigato which can be found over on docs.rs and crates.io.

/// Simplified version of the arigato File trait; this isn't actually
/// the same trait; there's some small cosmetic differences. The
/// actual trait can be found at:
///
/// https://docs.rs/arigato/latest/arigato/server/trait.File.html
trait File {
    /// OpenFile is the type returned by this File via an Open call.
    type OpenFile: OpenFile;

    /// Return the 9p Qid for this file. A file is the same if the Qid is
    /// the same. A Qid contains information about the mode of the file,
    /// version of the file, and a unique 64 bit identifier.
    fn qid(&self) -> Qid;

    /// Construct the 9p Stat struct with metadata about a file.
    async fn stat(&self) -> FileResult<Stat>;

    /// Attempt to update the file metadata.
    async fn wstat(&mut self, s: &Stat) -> FileResult<()>;

    /// Traverse the filesystem tree.
    async fn walk(&self, path: &[&str]) -> FileResult<(Option<Self>, Vec<Self>)>;

    /// Request that a file's reference be removed from the file tree.
    async fn unlink(&mut self) -> FileResult<()>;

    /// Create a file at a specific location in the file tree.
    async fn create(
        &mut self,
        name: &str,
        perm: u16,
        ty: FileType,
        mode: OpenMode,
        extension: &str,
    ) -> FileResult<Self>;

    /// Open the File, returning a handle to the open file, which handles
    /// file i/o. This is split into a second type since it is genuinely
    /// unrelated -- and the fact that a file is Open or Closed can be
    /// handled by the `arigato` server for us.
    async fn open(&mut self, mode: OpenMode) -> FileResult<Self::OpenFile>;
}

/// Simplified version of the arigato OpenFile trait; this isn't actually
/// the same trait; there's some small cosmetic differences. The
/// actual trait can be found at:
///
/// https://docs.rs/arigato/latest/arigato/server/trait.OpenFile.html
trait OpenFile {
    /// iounit to report for this file. The iounit reported is used for Read
    /// or Write operations to signal, if non-zero, the maximum size that is
    /// guaranteed to be transferred atomically.
    fn iounit(&self) -> u32;

    /// Read some number of bytes up to `buf.len()` from the provided
    /// `offset` of the underlying file. The number of bytes read is
    /// returned.
    async fn read_at(
        &mut self,
        buf: &mut [u8],
        offset: u64,
    ) -> FileResult<u32>;

    /// Write some number of bytes up to `buf.len()` at the provided
    /// `offset` of the underlying file. The number of bytes written
    /// is returned.
    async fn write_at(
        &mut self,
        buf: &[u8],
        offset: u64,
    ) -> FileResult<u32>;
}

Thanks, decade ago paultag!

Let’s do it! Let’s use arigato to implement a 9p filesystem we’ll call debugfs that will serve all the debug files shipped according to the Packages metadata from the apt archive. We’ll fetch the Packages file and construct a filesystem based on the reported Build-Id entries. For those who don’t know much about how an apt repo works, here’s the 2-second crash course on what we’re doing. The first step is to fetch the Packages file, which is specific to a binary architecture (such as amd64, arm64 or riscv64). That architecture is specific to a component (such as main, contrib or non-free). That component is specific to a suite, such as stable, unstable or any of its aliases (bullseye, bookworm, etc). Let’s take a look at the Packages.xz file for the unstable-debug suite, main component, for all amd64 binaries.

$ curl \
https://deb.debian.org/debian-debug/dists/unstable-debug/main/binary-amd64/Packages.xz \
| unxz

This will return the Debian-style rfc2822-like headers, which is an export of the metadata contained inside each .deb file which apt (or other tools that can use the apt repo format) use to fetch information about debs. Let’s take a look at the debug headers for the netlabel-tools package in unstable – which is a package named netlabel-tools-dbgsym in unstable-debug.

Package: netlabel-tools-dbgsym
Source: netlabel-tools (0.30.0-1)
Version: 0.30.0-1+b1
Installed-Size: 79
Maintainer: Paul Tagliamonte <paultag@debian.org>
Architecture: amd64
Depends: netlabel-tools (= 0.30.0-1+b1)
Description: debug symbols for netlabel-tools
Auto-Built-Package: debug-symbols
Build-Ids: e59f81f6573dadd5d95a6e4474d9388ab2777e2a
Description-md5: a0e587a0cf730c88a4010f78562e6db7
Section: debug
Priority: optional
Filename: pool/main/n/netlabel-tools/netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
Size: 62776
SHA256: 0e9bdb087617f0350995a84fb9aa84541bc4df45c6cd717f2157aa83711d0c60

So here, we can parse the package headers in the Packages.xz file, and store, for each Build-Id, the Filename from which we can fetch the .deb. Each .deb contains a number of files – but we’re only really interested in the files inside the .deb located at or under /usr/lib/debug/.build-id/, which you can find in debugfs under rfc822.rs. It’s crude, and very single-purpose, but I’m feeling a bit lazy.
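
As a sketch of the idea (not the actual rfc822.rs code – the helper below is made up), the index is just a map from each Build-Id to its stanza’s Filename:

use std::collections::HashMap;

/// Build a Build-Id -> Filename map from a Packages file: stanzas are
/// separated by blank lines, fields are "Name: value" lines.
fn build_index(packages: &str) -> HashMap<String, String> {
    let mut index = HashMap::new();
    for stanza in packages.split("\n\n") {
        let field = |name: &str| {
            stanza
                .lines()
                .find_map(|line| line.strip_prefix(name))
                .map(|value| value.trim().to_string())
        };
        if let (Some(ids), Some(filename)) = (field("Build-Ids:"), field("Filename:")) {
            for id in ids.split_whitespace() {
                index.insert(id.to_string(), filename.clone());
            }
        }
    }
    index
}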

Who needs dpkg?!

For folks who haven’t seen it yet, a .deb file is a special type of .ar file, that contains (usually) three files inside – debian-binary, control.tar.xz and data.tar.xz. The core of an .ar file is a fixed size (60 byte) entry header, followed by the specified size number of bytes.

[8 byte .ar file magic]
[60 byte entry header]
[N bytes of data]
[60 byte entry header]
[N bytes of data]
[60 byte entry header]
[N bytes of data]
...
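
Each 60-byte header is plain ASCII: a 16-byte name, a 12-byte mtime, 6-byte uid and gid fields, an 8-byte mode, a 10-byte decimal size, and a closing two-byte magic (a backtick and a newline). Here’s a sketch of pulling the entry size out of a header – a hypothetical helper, not the actual ar.rs:

/// Parse the decimal size field of a 60-byte ar entry header.
/// Field offsets: name 0..16, mtime 16..28, uid 28..34, gid 34..40,
/// mode 40..48, size 48..58, magic "`\n" 58..60.
fn entry_size(header: &[u8; 60]) -> Option<u64> {
    if &header[58..60] != b"`\n" {
        return None; // corrupt or misaligned entry
    }
    std::str::from_utf8(&header[48..58])
        .ok()?
        .trim_end()
        .parse::<u64>()
        .ok()
}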

First up was to implement a basic ar parser in ar.rs. Before we get into using it to parse a deb, as a quick diversion, let’s break apart a .deb file by hand – something that is a bit of a rite of passage (or at least it used to be? I’m getting old) during the Debian nm (new member) process, to take a look at where exactly the .debug file lives inside the .deb file.

$ ar x netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
$ ls
control.tar.xz debian-binary
data.tar.xz netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
$ tar --list -f data.tar.xz | grep '.debug$'
./usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug

Since we know quite a bit about the structure of a .deb file, and I had to implement support from scratch anyway, I opted to implement a (very!) basic debfile parser using HTTP Range requests. HTTP Range requests, if supported by the server (denoted by an accept-ranges: bytes HTTP header in response to an HTTP HEAD request to that file), mean that we can add a header such as range: bytes=8-68 to specifically request that the returned GET body be the byte range provided (in the above case, the bytes starting from byte offset 8 until byte offset 68). This means we can fetch just the ar file entry from the .deb file until we get to the file inside the .deb we are interested in (in our case, the data.tar.xz file) – at which point we can request the body of that file with a final range request. I wound up writing a struct to handle a read_at-style API surface in hrange.rs, which we can pair with ar.rs above and start to find our data in the .deb remotely without downloading and unpacking the .deb at all.
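
A sketch of what such a read_at-style range fetch looks like – using the reqwest crate for brevity, so this is illustrative rather than the real hrange.rs:

/// Fetch `len` bytes starting at `offset` via an HTTP Range request.
/// Note that the Range header's end offset is inclusive.
async fn read_at(url: &str, offset: u64, len: u64) -> reqwest::Result<Vec<u8>> {
    let resp = reqwest::Client::new()
        .get(url)
        .header(
            reqwest::header::RANGE,
            format!("bytes={}-{}", offset, offset + len - 1),
        )
        .send()
        .await?;
    Ok(resp.bytes().await?.to_vec())
}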

After we have the body of the data.tar.xz coming back through the HTTP response, we get to pipe it through an xz decompressor (this kinda sucked in Rust, since a tokio AsyncRead is not the same as an http Body response is not the same as std::io::Read, is not the same as an async (or sync) Iterator is not the same as what the xz2 crate expects; leading me to read blocks of data to a buffer and stuff them through the decoder by looping over the buffer for each lzma2 packet in a loop), and tarfile parser (similarly troublesome). From there we get to iterate over all entries in the tarfile, stopping when we reach our file of interest. Since we can’t seek, but gdb needs to, we’ll pull it out of the stream into a Cursor<Vec<u8>> in-memory and pass a handle to it back to the user.

From here on out it’s a matter of gluing together a File traited struct in debugfs, and serving the filesystem over TCP using arigato. Done deal!

A quick diversion about compression

I was originally hoping to avoid transferring the whole tar file over the network (and therefore also reading the whole debug file into ram, which objectively sucks), but quickly hit issues with figuring out a way around seeking around an xz file. What’s interesting is xz has a great primitive to solve this specific problem (specifically, use a block size that allows you to seek to the block just before your desired seek position, only discarding at most block size - 1 bytes), but data.tar.xz files generated by dpkg appear to have a single mega-huge block for the whole file. I don’t know why I would have expected any different, in retrospect. That means that this now devolves into the base case of “How do I seek around an lzma2 compressed data stream”, which is a lot more complex of a question.
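
For the curious: xz can be told to emit such multi-block, seek-friendly streams – dpkg just doesn’t appear to ask for it. For instance:

$ xz --block-size=1MiB data.tar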

Thankfully, notoriously brilliant tianon was nice enough to introduce me to Jon Johnson who did something super similar – adapted a technique to seek inside a compressed gzip file, which lets his service oci.dag.dev seek through Docker container images super fast based on some prior work such as soci-snapshotter, gztool, and zran.c. He also pulled this party trick off for apk based distros over at apk.dag.dev, which seems apropos. Jon was nice enough to publish a lot of his work on this specifically in a central place under the name “targz” on his GitHub, which has been a ton of fun to read through.

The gist is that, by dumping the decompressor’s state (window of previous bytes, in-memory data derived from the last N-1 bytes) at specific “checkpoints” along with the compressed data stream offset in bytes and decompressed offset in bytes, one can seek to that checkpoint in the compressed stream and pick up where you left off – creating a similar “block” mechanism against the wishes of gzip. It means you’d need to do an O(n) run over the file, but every request after that will be sped up according to the number of checkpoints you’ve taken.
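
Conceptually, a checkpoint in this scheme is nothing more than the following (a sketch of the technique as described above, not code from any of the cited projects):

/// A seek checkpoint for a compressed stream: resume decompression at
/// `compressed_offset` with `window` preloaded as decompressor state,
/// and you are at `uncompressed_offset` in the logical file.
struct Checkpoint {
    compressed_offset: u64,
    uncompressed_offset: u64,
    window: Vec<u8>, // the last N-1 bytes of output before this point
}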

Given the complexity of xz and lzma2, I don’t think this is possible for me at the moment – especially given most of the files I’ll be requesting will not be loaded from again – especially when I can “just” cache the debug header by Build-Id. I want to implement this (because I’m generally curious and Jon has a way of getting someone excited about compression schemes, which is not a sentence I thought I’d ever say out loud), but for now I’m going to move on without this optimization. Such a shame, since it kills a lot of the work that went into seeking around the .deb file in the first place, given the debian-binary and control.tar.gz members are so small.

The Good

First, the good news right? It works! That’s pretty cool. I’m positive my younger self would be amused and happy to see this working; as is current day paultag. Let’s take debugfs out for a spin! First, we need to mount the filesystem. It even works on an entirely unmodified, stock Debian box on my LAN, which is huge. Let’s take it for a spin:

$ mount \
-t 9p \
-o trans=tcp,version=9p2000.u,aname=unstable-debug \
192.168.0.2 \
/usr/lib/debug/.build-id/

And, let’s prove to ourselves that this actually mounted before we go trying to use it:

$ mount | grep build-id
192.168.0.2 on /usr/lib/debug/.build-id type 9p (rw,relatime,aname=unstable-debug,access=user,trans=tcp,version=9p2000.u,port=564)

Slick. We’ve got an open connection to the server, where our host will keep a connection alive as root, attached to the filesystem provided in aname. Let’s take a look at it.

$ ls /usr/lib/debug/.build-id/
00 0d 1a 27 34 41 4e 5b 68 75 82 8E 9b a8 b5 c2 CE db e7 f3
01 0e 1b 28 35 42 4f 5c 69 76 83 8f 9c a9 b6 c3 cf dc E7 f4
02 0f 1c 29 36 43 50 5d 6a 77 84 90 9d aa b7 c4 d0 dd e8 f5
03 10 1d 2a 37 44 51 5e 6b 78 85 91 9e ab b8 c5 d1 de e9 f6
04 11 1e 2b 38 45 52 5f 6c 79 86 92 9f ac b9 c6 d2 df ea f7
05 12 1f 2c 39 46 53 60 6d 7a 87 93 a0 ad ba c7 d3 e0 eb f8
06 13 20 2d 3a 47 54 61 6e 7b 88 94 a1 ae bb c8 d4 e1 ec f9
07 14 21 2e 3b 48 55 62 6f 7c 89 95 a2 af bc c9 d5 e2 ed fa
08 15 22 2f 3c 49 56 63 70 7d 8a 96 a3 b0 bd ca d6 e3 ee fb
09 16 23 30 3d 4a 57 64 71 7e 8b 97 a4 b1 be cb d7 e4 ef fc
0a 17 24 31 3e 4b 58 65 72 7f 8c 98 a5 b2 bf cc d8 E4 f0 fd
0b 18 25 32 3f 4c 59 66 73 80 8d 99 a6 b3 c0 cd d9 e5 f1 fe
0c 19 26 33 40 4d 5a 67 74 81 8e 9a a7 b4 c1 ce da e6 f2 ff

Outstanding. Let’s try using gdb to debug a binary that was provided by the Debian archive, and see if it’ll load the ELF by build-id from the right .deb in the unstable-debug suite:

$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
(gdb)

Yes! Yes it will!

$ file /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
/usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter *empty*, BuildID[sha1]=e59f81f6573dadd5d95a6e4474d9388ab2777e2a, for GNU/Linux 3.2.0, with debug_info, not stripped

The Bad

Linux’s support for 9p is mainline, which is great, but it’s not robust. Network issues or server restarts will wedge the mountpoint (Linux can’t reconnect when the tcp connection breaks), and things that work fine on local filesystems get translated in a way that causes a lot of network chatter – for instance, just due to the way the syscalls are translated, doing an ls will result in a stat call for each file in the directory, even though Linux had just gotten a stat entry for every file while it was resolving directory names. On top of that, Linux will serialize all I/O with the server, so there are no concurrent requests for file information, writes, or reads pending at the same time to the server; and read and write throughput will degrade as latency increases due to increasing round-trip time, even though there are offsets included in the read and write calls. It works well enough, but is frustrating to run up against, since there’s not a lot you can do server-side to help with this beyond implementing the 9P2000.L variant (which, maybe, is worth it).

The Ugly

Unfortunately, we don’t know the file size(s) until we’ve actually opened the underlying tar file and found the correct member, so for most files we don’t know the real size to report when answering a stat. We can’t parse the tarfiles for every stat call, since that’d make ls even slower (bummer). The only hiccup is that when I report a filesize of zero, gdb throws a bit of a fit; let’s try with a size of 0 to start:

$ ls -lah /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
-r--r--r-- 1 root root 0 Dec 31 1969 /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
warning: Discarding section .note.gnu.build-id which has a section size (24) larger than the file size [in module /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug]
[...]

This obviously won’t work, since gdb throws away all our hard work because of stat’s output; and looking up the real size of the underlying file for every stat isn’t workable either. That only leaves us with hardcoding a file size and hoping nothing else breaks significantly as a result. Let’s try it again:

$ ls -lah /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
-r--r--r-- 1 root root 954M Dec 31 1969 /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
(gdb)

Much better. I mean, terrible but better. Better for now, anyway.

Kilroy was here

Do I think this is a particularly good idea? I mean, kinda. I’m probably going to make some fun 9p arigato-based filesystems for use around my LAN, but I don’t think I’ll be moving to use debugfs until I can figure out how to make the connection more resilient to changing networks and server restarts, and until the I/O performance is fixed. I think it was a useful exercise and is a pretty great hack, but I don’t think this’ll be shipping anywhere anytime soon.

Along with publishing this post, I’ve pushed up all my repos, so you should be able to play along at home! There’s a lot more work to be done on arigato, but it does handshake and successfully export a working 9P2000.u filesystem. Check it out on my github at arigato, debugfs and also on crates.io and docs.rs.

At least I can say I was here and I got it working after all these years.

Planet DebianRussell Coker: Software Needed for Work

When I first started studying computer science, setting up a programming project was easy: write source code files and a Makefile, and that was it. IRC was the only IM system, and email was the only other communications system that was used much. Writing Makefiles is difficult, but products like the Borland Turbo series of IDEs did all that for you, so you could just start typing code and press a function key to compile and run (F5, from memory).

Over the years the requirements and expectations of computer use have grown significantly. The typical office worker is now doing many more things with computers than serious programmers used to do. Running an IM system, an online document editing system, and a series of web apps is standard for companies nowadays. Developers have to do all that in addition to tools for version control, continuous integration, bug reporting, and feature tracking. The development process is also more complex with extra steps for reproducible builds, automated tests, and code coverage metrics for the tests. I wonder how many programmers who started in the 90s would have done something else if faced with Github as their introduction.

How much of this is good? Having the ability to send instant messages all around the world is great. Having dozens of different ways of doing so is awful. When a company uses multiple IM systems such as MS-Teams and Slack and forces some of its employees to use both, it’s getting ridiculous. Having different friend groups on different IM systems is anti-social networking. In the EU the Digital Markets Act [1] forces some degree of interoperability between different IM systems, and as it’s impossible to know who’s actually in the EU, that will end up applying world-wide.

In corporations document management often involves multiple ways of storing things, you have Google Docs, MS Office online, hosted Wikis like Confluence, and more. Large companies tend to use several such systems which means that people need to learn multiple systems to be able to work and they also need to know which systems are used by the various groups that they communicate with. Microsoft deserves some sort of award for the range of ways they have for managing documents, Sharepoint, OneDrive, Office Online, attachments to Teams rooms, and probably lots more.

During WW2 the predecessor to the CIA produced an excellent manual for simple sabotage [2]. If something like that was written today the section General Interference with Organisations and Production would surely have something about using as many incompatible programs and web sites as possible in the work flow. The proliferation of software required for work is a form of denial of service attack against corporations.

The efficiency of companies doesn’t really bother me. It sucks that companies are creating a demoralising workplace that is unpleasant for workers. But the upside is that the biggest companies are the ones doing the worst things and are also the most afflicted by these problems. It’s almost like the Bureau of Sabotage in some of Frank Herbert’s fiction [3].

The thing that concerns me is the effect of multiple standards on free software development. We have IRC the most traditional IM support system which is getting replaced by Matrix but we also have some projects using Telegram, and Jabber hasn’t gone away. I’m sure there are others too. There are also multiple options for version control (although github seems to dominate the market), forums, bug trackers, etc. Reporting bugs or getting support in free software often requires interacting with several of them. Developing free software usually involves dealing with the bug tracking and documentation systems of the distribution you use as well as the upstream developers of the software. If the problem you have is related to compatibility between two different pieces of free software then you can end up dealing with even more bug tracking systems.

There are real benefits to some of the newer programs for tracking bugs, writing documentation, etc. There is also going to be a cost in changing, which gives the older projects an incentive to keep using what has worked well enough for them in the past.

How can we improve things? Use only the latest tools? Prioritise ease of use? Aim more for the entry level contributors?

365 TomorrowsAudio Transmission From Storm Rider One

Author: James Flanagan From Elizabeth I to Elizabeth II this storm has raged unabated. Wars and plagues have scoured the Earth while eras of enlightenment and eras of disgrace have risen and slipped away, and always the mother of all storms has boiled and churned — the Big Red Eye of Jupiter. Annie Edson Taylor […]

The post Audio Transmission From Storm Rider One appeared first on 365tomorrows.

,

Planet DebianScarlett Gately Moore: Kubuntu: Noble Numbat Beta available! Qt6 snaps coming soon.

It has been a very busy couple of weeks as we worked against some major transitions and a security fix that required a rebuild of the $world. I am happy to report that against all odds we have a beta release! You can read all about it here: https://kubuntu.org/news/kubuntu-24-04-beta-released/ Post beta freeze I have already begun pushing our fixes for known issues today. A big one being our new branding! Very exciting times in the Kubuntu world.

In the snap world I will be using my free time to start knocking out KDE applications ( not covered by the project ). I have also recruited some help, so you should start seeing these pop up in the edge channel very soon!

Now that we are nearing the release of Noble Numbat, my contract is coming to an end with Kubuntu. If you would like to see Plasma 6 in the next release and in a PPA for Noble, please consider donating to extend my contract at https://kubuntu.org/donate !

On a personal level, I am still looking to help with my grandson and you can find that here: https://www.gofundme.com/f/in-loving-memory-of-william-billy-dean-scalf

Thanks for stopping by,

Scarlett

Planet DebianNOKUBI Takatsugu: mailman3-web error when upgrading to bookworm

I tried to upgrade a bullseye machine to bookworm, and I got the following error:

File "/usr/lib/python3/dist-packages/django/contrib/auth/mixins.py", line 5, in <module>
    from django.contrib.auth.views import redirect_to_login
File "/usr/lib/python3/dist-packages/django/contrib/auth/views.py", line 20, in <module>
    from django.utils.http import (
ImportError: cannot import name 'url_has_allowed_host_and_scheme' from 'django.utils.http' (/usr/lib/python3/dist-packages/django/utils/http.py)

During handling of the above exception, another exception occurred:

It is similar to #1000810, but it is already closed.

My solution is:

  • apt remove mailman3-web
    • keep db and config files (do not purge)
  • apt autoremove
    • remove django related packages
  • apt install mailman3-web mailman3-full
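
In other words, roughly the following shell session (a sketch of the steps above; double-check what apt autoremove wants to remove before confirming):

$ sudo apt remove mailman3-web     # keep db and config files (do not purge)
$ sudo apt autoremove              # removes the now-unneeded django packages
$ sudo apt install mailman3-web mailman3-full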

I tried to send this to the bug report, but it returns `550 Unknown or archived bug’ …

Worse Than FailureError'd: Paycheque

There are an infinite variety of ways to be wrong, but only a very small number of ways to be right.

Patient Peter W. discovers that MS Word is of two minds about English usage. "Microsoft Word just can't seem to agree with itself on how to spell paycheck/pay check." Faithful readers know it's even worse than that.

cheque

 

Slow Daniel confused me with this complaint. He writes "It seems that the eclipse has reversed the flow of time for the traffic-free travel time." I don't get it. Can readers explain? The only WTF I see here is how much faster it is to walk than to drive 2 miles. Franconia isn't Conway!

time

 

Parsimonious Adam found a surprise discount. "This album was initially listed as pay-what-you-want. I tried to pay $4 for it, but the price changed to a minimum of $5 before I was able to check out, and checkout rightfully failed. My shopping cart then reverted to saying this." I want to know what happened next.

minus

 

Some of you dislike it when I thaw a thematic item from the deep freeze, so please note that B.J. sent us this sometime last year, noting that "Time works in mysterious ways for some companies. I earned a Verizon community 2 Year badge 36 hours after earning a 4 Year badge." I did consider saving it until July 26 this year but I'm just not that patient.

wapr

 

By comparison, I only had to drag this entry from the far back bottom corner of February's fridge. I imagine chill Paul intoning in his radio voice "This next Saturday is going to be all ice ice baby."

snip

 

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

365 TomorrowsThe Sea People

Author: Alastair Millar If you’re a trillionaire, you can get powerful people to turn up when you call an informal meeting. It’s one of the perks. As the Industrialist’s guests finished their excellent meal, the Diplomat put down his glass and said, “This is all very pleasant, but why are we here?” “I’ve decided to […]

The post The Sea People appeared first on 365tomorrows.

Planet DebianFreexian Collaborators: Monthly report about Debian Long Term Support, March 2024 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian LTS contributors

In March, 19 contributors were paid to work on Debian LTS; their reports are available:

  • Abhijith PA did 0.0h (out of 10.0h assigned and 4.0h from previous period), thus carrying over 14.0h to the next month.
  • Adrian Bunk did 59.5h (out of 47.5h assigned and 52.5h from previous period), thus carrying over 40.5h to the next month.
  • Bastien Roucariès did 22.0h (out of 20.0h assigned and 2.0h from previous period).
  • Ben Hutchings did 9.0h (out of 2.0h assigned and 22.0h from previous period), thus carrying over 15.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 12.0h (out of 12.0h assigned).
  • Emilio Pozuelo Monfort did 0.0h (out of 3.0h assigned and 57.0h from previous period), thus carrying over 60.0h to the next month.
  • Guilhem Moulin did 22.5h (out of 7.25h assigned and 15.25h from previous period).
  • Holger Levsen did 0.0h (out of 0.5h assigned and 11.5h from previous period), thus carrying over 12.0h to the next month.
  • Lee Garrett did 0.0h (out of 0.0h assigned and 60.0h from previous period), thus carrying over 60.0h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 19.5h (out of 24.0h assigned), thus carrying over 4.5h to the next month.
  • Roberto C. Sánchez did 9.25h (out of 3.5h assigned and 8.5h from previous period), thus carrying over 2.75h to the next month.
  • Santiago Ruano Rincón did 19.0h (out of 16.5h assigned and 2.5h from previous period).
  • Sean Whitton did 4.5h (out of 4.5h assigned and 1.5h from previous period), thus carrying over 1.5h to the next month.
  • Sylvain Beucler did 25.0h (out of 24.5h assigned and 35.5h from previous period), thus carrying over 35.0h to the next month.
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).
  • Tobias Frost did 12.0h (out of 12.0h assigned).
  • Utkarsh Gupta did 19.5h (out of 0.0h assigned and 48.75h from previous period), thus carrying over 29.25h to the next month.

Evolution of the situation

In March, we released 31 DLAs.

Adrian Bunk was responsible for updating gtkwave not only in LTS, but also in unstable, stable, and old-stable. This update involved an upload of a new upstream release of gtkwave to each target suite to address 82 separate CVEs. Guilhem Moulin prepared a particularly notable update of libvirt, which fixed multiple vulnerabilities that would lead to denial of service or information disclosure.

In addition to the normal security updates, multiple LTS contributors worked on getting various packages updated in more recent Debian releases, including gross for bullseye/bookworm (by Adrian Bunk), imlib2 for bullseye, jetty9 and tomcat9/10 for bullseye/bookworm (by Markus Koschany), samba for bullseye, py7zr for bullseye (by Santiago Ruano Rincón), cacti for bullseye/bookworm (by Sylvain Beucler), and libmicrohttpd for bullseye (by Thorsten Alteholz). Additionally, Sylvain actively coordinated with cacti upstream concerning an incomplete fix for CVE-2024-29894.

Thanks to our sponsors

Sponsors that joined recently are in bold.

Planet DebianFreexian Collaborators: Debian Contributions: SSO Authentication for jitsi.debian.social, /usr-move updates, and more! (by Utkarsh Gupta)

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

P.S. We’ve completed over a year of writing these blogs. If you have any suggestions on how to make them better or what you’d like us to cover, or any other opinions/reviews you might have, et al, please let us know by dropping an email to us. We’d be happy to hear your thoughts. :)

SSO Authentication for jitsi.debian.social, by Stefano Rivera

Debian.social’s jitsi instance has been getting some abuse by (non-Debian) people sharing sexually explicit content on the service. After playing whack-a-mole with this for a month, and shutting the instance off for another month, we opened it up again and the abuse immediately re-started.

Stefano sat down and wrote an SSO Implementation that hooks into Jitsi’s existing JWT SSO support. This requires everyone using jitsi.debian.social to have a Salsa account.

With only a little bit of effort, we could change this in future, to only require an account to open a room, and allow guests to join the call.

/usr-move, by Helmut Grohne

The biggest task this month was sending mitigation patches for all of the /usr-move issues arising from package renames due to the 2038 transition. As a result, we can now say that every affected package in unstable can either be converted with dh-sequence-movetousr or has an open bug report. The package set relevant to debootstrap, except for the set that has to be uploaded concurrently, has been moved to /usr and is awaiting migration. The move of coreutils happened to affect piuparts, which hard-codes the location of /bin/sync and received multiple updates as a result.

Miscellaneous contributions

  • Stefano Rivera uploaded a stable release update to python3.11 for bookworm, fixing a use-after-free crash.
  • Stefano uploaded a new version of python-html2text, and updated python3-defaults to build with it.
  • In support of Python 3.12, Stefano dropped distutils as a Build-Dependency from a few packages, and uploaded a complex set of patches to python-mitogen.
  • Stefano landed some merge requests to clean up dead code in dh-python, removed the flit plugin, and uploaded it.
  • Stefano uploaded new upstream versions of twisted, hatchling, python-flexmock, python-authlib, python-mitogen, python-pipx, and xonsh.
  • Stefano requested removal of a few packages supporting the Opsis HDMI2USB hardware that DebConf Video team used to use for HDMI capture, as they are not being maintained upstream. They started to FTBFS, with recent sdcc changes.
  • DebConf 24 is getting ready to open registration, so Stefano spent some time fixing bugs in the website caused by infrastructure updates.
  • Stefano reviewed all the DebConf 23 travel reimbursements, filing requests for more information from SPI where our records mismatched.
  • Stefano spun up a Wafer website for the Berlin 2024 mini DebConf.
  • Roberto C. Sánchez worked on facilitating the transfer of upstream maintenance responsibility for the dormant Shorewall project to a new team led by the current maintainer of the Shorewall packages in Debian.
  • Colin Watson fixed build failures in celery-haystack-ng, db1-compat, jsonpickle, libsdl-perl, kali, knews, openssh-ssh1, python-json-log-formatter, python-typing-extensions, trn4, vigor, and wcwidth. Some of these were related to the 64-bit time_t transition, since that involved enabling -Werror=implicit-function-declaration.
  • Colin fixed an off-by-one error in neovim, which was already causing a build failure in Ubuntu and would eventually have caused a build failure in Debian with stricter toolchain settings.
  • Colin added an sshd@.service template to openssh to help newer systemd versions make containers and VMs SSH-accessible over AF_VSOCK sockets.
  • Following the xz-utils backdoor, Colin spent some time testing and discussing OpenSSH upstream’s proposed inline systemd notification patch, since the current implementation via libsystemd was part of the attack vector used by that backdoor.
  • Utkarsh reviewed and sponsored some Go packages for Lena Voytek and Rajudev.
  • Utkarsh also helped Mitchell Dzurick with the adoption of pyparted package.
  • Helmut sent 10 patches for cross build failures.
  • Helmut partially fixed architecture cross bootstrap tooling to deal with changes in linux-libc-dev and the recent gcc-for-host changes and also fixed a 64bit-time_t FTBFS in libtextwrap.
  • Thorsten Alteholz uploaded several packages from debian-printing: cjet, lprng, rlpr and epson-inkjet-printer-escpr were affected by the newly enabled compiler switch -Werror=implicit-function-declaration. Besides fixing these serious bugs, Thorsten also worked on other bugs and managed to fix a few of them as well.
  • Carles updated simplemonitor and python-ring-doorbell packages with new upstream versions.
  • Santiago is still working on the Salsa CI MRs to adapt the build jobs so they can rely on sbuild. Current work includes adapting the images used by the build job, implementing the basic sbuild support in the related jobs, and adjusting the support for experimental and *-backports releases.
    Additionally, Santiago reviewed some MRs, such as Make timeout action explicit in the logs and the subsequent Implement conditional timeout verbosity, and the batch of MRs included in https://salsa.debian.org/salsa-ci-team/pipeline/-/merge_requests/482.
  • Santiago also reviewed applications for the “Improving Salsa CI in Debian” GSoC 2024 project. We received applications from four very talented candidates; the selection process is currently ongoing. A huge thanks to all of them!
  • As part of the DebConf 24 organization, Santiago has taken part in the Content team discussions.

Planet DebianReproducible Builds (diffoscope): diffoscope 264 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 264. This version includes the following changes:

[ Chris Lamb ]
* Don't crash on invalid zipfiles, even if we encounter 'badness'
  throughout the file. (Re: #1068705)

[ FC (Fay) Stegerman ]
* Add note when there are duplicate entries in ZIP files.
  (Closes: reproducible-builds/diffoscope!140)

[ Vagrant Cascadian ]
* Add an external tool reference for GNU Guix for zipdetails.
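
If you have not used diffoscope before, the basic invocation is simply two files (or directories) to compare – for example, two builds of the same package (hypothetical filenames):

$ diffoscope build1/foo_1.0_amd64.deb build2/foo_1.0_amd64.deb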

You can find out more by visiting the project homepage.

,

Krebs on SecurityWhy CISA is Warning CISOs About a Breach at Sisense

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) said today it is investigating a breach at business intelligence company Sisense, whose products are designed to allow companies to view the status of multiple third-party online services in a single dashboard. CISA urged all Sisense customers to reset any credentials and secrets that may have been shared with the company, which is the same advice Sisense gave to its customers Wednesday evening.

New York City-based Sisense has more than a thousand customers across a range of industry verticals, including financial services, telecommunications, healthcare and higher education. On April 10, Sisense Chief Information Security Officer Sangram Dash told customers the company had been made aware of reports that “certain Sisense company information may have been made available on what we have been advised is a restricted access server (not generally available on the internet.)”

“We are taking this matter seriously and promptly commenced an investigation,” Dash continued. “We engaged industry-leading experts to assist us with the investigation. This matter has not resulted in an interruption to our business operations. Out of an abundance of caution, and while we continue to investigate, we urge you to promptly rotate any credentials that you use within your Sisense application.”

In its alert, CISA said it was working with private industry partners to respond to a recent compromise discovered by independent security researchers involving Sisense.

“CISA is taking an active role in collaborating with private industry partners to respond to this incident, especially as it relates to impacted critical infrastructure sector organizations,” the sparse alert reads. “We will provide updates as more information becomes available.”

Sisense declined to comment when asked about the veracity of information shared by two trusted sources with close knowledge of the breach investigation. Those sources said the breach appears to have started when the attackers somehow gained access to the company’s Gitlab code repository, and in that repository was a token or credential that gave the bad guys access to Sisense’s Amazon S3 buckets in the cloud.

Customers can use Gitlab either as a solution that is hosted in the cloud at Gitlab.com, or as a self-managed deployment. KrebsOnSecurity understands that Sisense was using the self-managed version of Gitlab.

Both sources said the attackers used the S3 access to copy and exfiltrate several terabytes worth of Sisense customer data, which apparently included millions of access tokens, email account passwords, and even SSL certificates.

The incident raises questions about whether Sisense was doing enough to protect sensitive data entrusted to it by customers, such as whether the massive volume of stolen customer data was ever encrypted while at rest in these Amazon cloud servers.

It is clear, however, that unknown attackers now have all of the credentials that Sisense customers used in their dashboards.

The breach also makes clear that Sisense is somewhat limited in the clean-up actions that it can take on behalf of customers, because access tokens are essentially text files on your computer that allow you to stay logged in for extended periods of time — sometimes indefinitely. And depending on which service we’re talking about, it may be possible for attackers to re-use those access tokens to authenticate as the victim without ever having to present valid credentials.

Beyond that, it is largely up to Sisense customers to decide if and when they change passwords to the various third-party services that they’ve previously entrusted to Sisense.

Earlier today, a public relations firm working with Sisense reached out to learn if KrebsOnSecurity planned to publish any further updates on their breach (KrebsOnSecurity posted a screenshot of the CISO’s customer email to both LinkedIn and Mastodon on Wednesday evening). The PR rep said Sisense wanted to make sure they had an opportunity to comment before the story ran.

But when confronted with the details shared by my sources, Sisense apparently changed its mind.

“After consulting with Sisense, they have told me that they don’t wish to respond,” the PR rep said in an emailed reply.

Update, 6:49 p.m., ET: Added clarification that Sisense is using a self-hosted version of Gitlab, not the cloud version managed by Gitlab.com.

Also, Sisense’s CISO Dash just sent an update to customers directly. The latest advice from the company is far more detailed, and involves resetting a potentially large number of access tokens across multiple technologies, including Microsoft Active Directory credentials, GIT credentials, web access tokens, and any single sign-on (SSO) secrets or tokens.

The full message from Dash to customers is below:

“Good Afternoon,

We are following up on our prior communication of April 10, 2024, regarding reports that certain Sisense company information may have been made available on a restricted access server. As noted, we are taking this matter seriously and our investigation remains ongoing.

Our customers must reset any keys, tokens, or other credentials in their environment used within the Sisense application.

Specifically, you should:
– Change Your Password: Change all Sisense-related passwords on http://my.sisense.com
– Non-SSO:
– Replace the Secret in the Base Configuration Security section with your GUID/UUID.
– Reset passwords for all users in the Sisense application.
– Logout all users by running GET /api/v1/authentication/logout_all under Admin user.
– Single Sign-On (SSO):
– If you use SSO JWT for the user’s authentication in Sisense, you will need to update sso.shared_secret in Sisense and then use the newly generated value on the side of the SSO handler.
– We strongly recommend rotating the x.509 certificate for your SSO SAML identity provider.
– If you utilize OpenID, it’s imperative to rotate the client secret as well.
– Following these adjustments, update the SSO settings in Sisense with the revised values.
– Logout all users by running GET /api/v1/authentication/logout_all under Admin user.
– Customer Database Credentials: Reset credentials in your database that were used in the Sisense application to ensure continuity of connection between the systems.
– Data Models: Change all usernames and passwords in the database connection string in the data models.
– User Params: If you are using the User Params feature, reset them.
– Active Directory/LDAP: Change the username and user password of users whose authorization is used for AD synchronization.
– HTTP Authentication for GIT: Rotate the credentials in every GIT project.
– B2D Customers: Use the following API PATCH api/v2/b2d-connection in the admin section to update the B2D connection.
– Infusion Apps: Rotate the associated keys.
– Web Access Token: Rotate all tokens.
– Custom Email Server: Rotate associated credentials.
– Custom Code: Reset any secrets that appear in custom code Notebooks.

If you need any assistance, please submit a customer support ticket at https://community.sisense.com/t5/support-portal/bd-p/SupportPortal and mark it as critical. We have a dedicated response team on standby to assist with your requests.

At Sisense, we give paramount importance to security and are committed to our customers’ success. Thank you for your partnership and commitment to our mutual security.

Regards,

Sangram Dash
Chief Information Security Officer”

Planet DebianJonathan McDowell: Sorting out backup internet #1: recursive DNS

I work from home these days, and my nearest office is over 100 miles away: 3 hours door to door if I travel by train (and, to be honest, probably not a lot faster given rush hour traffic if I drive). So I’m reliant on a functional internet connection in order to be able to work. I’m lucky to have access to Openreach FTTP, provided by Aquiss, but I worry about what happens if there’s a cable cut somewhere or some other long-lasting problem. Worst case, I could tether to my work phone, or try to find some local coworking space to use while things get sorted, but I felt like arranging a backup option was a wise move.

Step 1 turned out to be sorting out recursive DNS. It’s been many moons since I had to deal with running DNS in a production setting, and I’ve mostly done my best to avoid doing it at home too. dnsmasq has done a decent job of providing for my needs over the years, covering DHCP, DNS (+ tftp for my test device network). However, I just let it slave off my ISP’s nameservers, which means that if that link goes down it’ll no longer be able to resolve anything outside the house.

One option would have been to point to a different recursive DNS server (Cloudflare’s 1.1.1.1 or Google’s Public DNS being the common choices), but I’ve no desire to share my lookup information with them. Another approach would have been some sort of failover of resolv.conf when the primary network went down, but then I would have to get into moving files around based on networking status, and that felt a bit clunky.

So I decided to finally set up a proper local recursive DNS server, which is something I’ve kinda meant to do for a while but never had sufficient reason to look into. Last time I did this I did it with BIND 9, but there are more options these days, and I decided to go with unbound, which is primarily focused on recursive DNS.

One extra wrinkle, pointed out by Lars, is that having dynamic name information from DHCP hosts is exceptionally convenient. I’ve kept dnsmasq as the local DHCP server, so I wanted to be able to forward local queries there.

I’m doing all of this on my RB5009, running Debian. Installing unbound was a simple matter of apt install unbound. I needed 2 pieces of configuration over the default: one to enable recursive serving for the house networks, and one to enable forwarding of queries for the local domain to dnsmasq. I originally specified the wildcard address for listening, but this caused problems because my router has many interfaces and would sometimes respond from a different address than the one the request had come in on.

/etc/unbound/unbound.conf.d/network-resolver.conf
server:
  interface: 192.0.2.1
  interface: 2001:db8:f00d::1
  access-control: 192.0.2.0/24 allow
  access-control: 2001:db8:f00d::/56 allow


/etc/unbound/unbound.conf.d/local-to-dnsmasq.conf
server:
  domain-insecure: "example.org"
  do-not-query-localhost: no

forward-zone:
  name: "example.org"
  forward-addr: 127.0.0.1@5353


I then had to configure dnsmasq to not listen on port 53 (so unbound could), to respond to requests on the loopback interface (I have dnsmasq restricted to only explicitly listed interfaces), and to hand out unbound as the appropriate nameserver in DHCP requests - once dnsmasq is not listening on port 53 it no longer does this by default.

/etc/dnsmasq.d/behind-unbound
interface=lo
port=5353
dhcp-option=option6:dns-server,[2001:db8:f00d::1]
dhcp-option=option:dns-server,192.0.2.1


With these minor changes in place I now have local recursive DNS being handled by unbound, without losing dynamic local DNS for DHCP hosts. As an added bonus I now get 10/10 on Test IPv6 - previously I was getting dinged on the ability for my DNS server to resolve purely IPv6 reachable addresses.
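
A quick way to verify both halves of this setup is to point dig directly at unbound: one query for an external name (exercising recursion), and one for a local DHCP-registered host (forwarded to dnsmasq). The hostnames here are illustrative, matching the example configuration above:

$ dig +short @192.0.2.1 deb.debian.org
$ dig +short @192.0.2.1 mylaptop.example.org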

Next step, actually sorting out a backup link.

Planet DebianReproducible Builds: Reproducible Builds in March 2024

Welcome to the March 2024 report from the Reproducible Builds project! In our reports, we attempt to outline what we have been up to over the past month, as well as mentioning some of the important things happening more generally in software supply-chain security. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.

Table of contents:

  1. Arch Linux minimal container userland now 100% reproducible
  2. Validating Debian’s build infrastructure after the XZ backdoor
  3. Making Fedora Linux (more) reproducible
  4. Increasing Trust in the Open Source Supply Chain with Reproducible Builds and Functional Package Management
  5. Software and source code identification with GNU Guix and reproducible builds
  6. Two new Rust-based tools for post-processing determinism
  7. Distribution work
  8. Mailing list highlights
  9. Website updates
  10. Delta chat clients now reproducible
  11. diffoscope updates
  12. Upstream patches
  13. Reproducibility testing framework

Arch Linux minimal container userland now 100% reproducible

In remarkable news, Reproducible Builds developer kpcyrd reported that the Arch Linux “minimal container userland” is now 100% reproducible, after work by developers dvzv and Foxboron on the one remaining package. This represents a “real world”, widely-used Linux distribution being reproducible.

Their post, which kpcyrd suffixed with the question “now what?”, continues on to outline some potential next steps, including validating whether the container image itself can be reproduced bit-for-bit. The post, itself a followup to an Arch Linux update earlier in the month, generated a significant number of replies.


Validating Debian’s build infrastructure after the XZ backdoor

From our mailing list this month, Vagrant Cascadian wrote about being asked to perform concrete reproducibility checks for recent Debian security updates, in an attempt to gain some confidence in Debian’s build infrastructure, given that builds had been performed in environments running the high-profile vulnerable version of XZ.

Vagrant reports (with some caveats):

So far, I have not found any reproducibility issues; everything I tested I was able to get to build bit-for-bit identical with what is in the Debian archive.

That is to say, reproducibility testing permitted Vagrant and Debian to claim with some confidence that builds performed when this vulnerable version of XZ was installed were not interfered with.


Making Fedora Linux (more) reproducible

In March, Davide Cavalca gave a talk at the 2024 Southern California Linux Expo (aka SCALE 21x) about the ongoing effort to make the Fedora Linux distribution reproducible.

Documented in more detail on Fedora’s website, the talk touched on topics such as the specifics of implementing reproducible builds in Fedora, the challenges encountered, the current status and what’s coming next. (YouTube video)


Increasing Trust in the Open Source Supply Chain with Reproducible Builds and Functional Package Management

Julien Malka published a brief but interesting paper in the HAL open archive on Increasing Trust in the Open Source Supply Chain with Reproducible Builds and Functional Package Management:

Functional package managers (FPMs) and reproducible builds (R-B) are technologies and methodologies that are conceptually very different from the traditional software deployment model, and that have promising properties for software supply chain security. This thesis aims to evaluate the impact of FPMs and R-B on the security of the software supply chain and propose improvements to the FPM model to further improve trust in the open source supply chain. PDF

Julien’s paper poses a number of research questions on how the model of distributions such as GNU Guix and NixOS can “be leveraged to further improve the safety of the software supply chain”, etc.


Software and source code identification with GNU Guix and reproducible builds

In a long line of commendably detailed blog posts, Ludovic Courtès, Maxim Cournoyer, Jan Nieuwenhuizen and Simon Tournier have together published two interesting posts on the GNU Guix blog this month. In early March, the four authors wrote about software and source code identification and how that might be performed using Guix, rhetorically posing the questions: “What does it take to ‘identify software’? How can we tell what software is running on a machine to determine, for example, what security vulnerabilities might affect it?”

Later in the month, Ludovic Courtès wrote a solo post describing adventures on the quest for long-term reproducible deployment. Ludovic’s post touches on GNU Guix’s aim to support “time travel”, the ability to reliably (and reproducibly) revert to an earlier point in time, employing the iconic image of Harold Lloyd hanging off the clock in Safety Last! (1925) to poetically illustrate both the slapstick nature of current modern technology and the gymnastics required to navigate hazards of our own making.


Two new Rust-based tools for post-processing determinism

Zbigniew Jędrzejewski-Szmek announced add-determinism, a work-in-progress reimplementation of the Reproducible Builds project’s own strip-nondeterminism tool in the Rust programming language, intended to be used as a post-processor in RPM-based distributions such as Fedora.

In addition, Yossi Kreinin published a blog post titled “refix: fast, debuggable, reproducible builds” that describes a tool that post-processes binaries in such a way that they are still debuggable with gdb, etc. Yossi’s post details the motivation and techniques behind the (fast) performance of the tool.


Distribution work

In Debian this month, since the testing framework no longer varies the build path, James Addison performed a bulk downgrade of the bug severity for issues filed with a level of normal to a new level of wishlist. In addition, 28 reviews of Debian packages were added, 38 were updated and 23 were removed this month, adding to our ever-growing knowledge about identified issues. As part of this effort, a number of issue types were updated, including Chris Lamb adding a new ocaml_include_directories toolchain issue [] and James Addison adding a new filesystem_order_in_java_jar_manifest_mf_include_resource issue [] and updating the random_uuid_in_notebooks_generated_by_nbsphinx issue to reference a relevant discussion thread [].

In addition, Roland Clobus posted his 24th status update of reproducible Debian ISO images. Roland highlights that the images for Debian unstable often cannot be generated due to changes in that distribution related to the 64-bit time_t transition.

Lastly, Bernhard M. Wiedemann posted another monthly update for his reproducibility work in openSUSE.


Mailing list highlights

Elsewhere on our mailing list this month:


Website updates

A number of improvements were made to our website this month, including:

  • Pol Dellaiera noticed the frequent need to correctly cite the website itself in academic work. To facilitate easier citation across multiple formats, Pol contributed a Citation File Format (CFF) file. As a result, an export in BibTeX format is now available in the Academic Publications section. Pol encourages community contributions to further refine the CITATION.cff file. Pol also added a substantial new section to the “buy in” page documenting the role of Software Bills of Materials (SBOMs) and ephemeral development environments. [][]

  • Bernhard M. Wiedemann added a new “commandments” page to the documentation [][] and fixed some incorrect YAML elsewhere on the site [].

  • Chris Lamb added three recent academic papers to the publications page of the website. []

  • Mattia Rizzolo and Holger Levsen collaborated to add Infomaniak as a sponsor of amd64 virtual machines. [][][]

  • Roland Clobus updated the “stable outputs” page, dropping version numbers from Python documentation pages [] and noting that Python’s set data structure is also affected by the PYTHONHASHSEED functionality. []


Delta chat clients now reproducible

Delta Chat, an open source messaging application that can work over email, announced this month that the Rust-based core library underlying the Delta Chat application is now reproducible.


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb uploaded versions 259, 260 and 261 to Debian and made the following additional changes:

  • New features:

    • Add support for the zipdetails tool from the Perl distribution. Thanks to Fay Stegerman and Larry Doolittle et al. for the pointer and thread about this tool. []
  • Bug fixes:

    • Don’t identify Redis database dumps as GNU R database files based simply on their filename. []
    • Add a missing call to File.recognizes so we actually perform the filename check for GNU R data files. []
    • Don’t crash if we encounter an .rdb file without an equivalent .rdx file. (#1066991)
    • Correctly check for 7z being available—and not lz4—when testing 7z. []
    • Prevent a traceback when comparing a contentful .pyc file with an empty one. []
  • Testsuite improvements:

    • Fix .epub tests after supporting the new zipdetails tool. []
    • Don’t use parentheses within test “skipping…” messages, as PyTest adds its own parentheses. []
    • Factor out Python version checking in test_zip.py. []
    • Skip some Zip-related tests under Python 3.10.14, as a potential regression may have been backported to the 3.10.x series. []
    • Actually test 7z support in the test_7z set of tests, not the lz4 functionality. (Closes: reproducible-builds/diffoscope#359). []

In addition, Fay Stegerman updated diffoscope’s monkey patch for supporting the unusual Mozilla ZIP file format after Python’s zipfile module changed to detect potentially insecure overlapping entries within .zip files. (#362)

Chris Lamb also updated the trydiffoscope command line client, dropping a build-dependency on the deprecated python3-distutils package to fix Debian bug #1065988 [], taking a moment to also refresh the packaging to the latest Debian standards []. Finally, Vagrant Cascadian submitted an update for diffoscope version 260 in GNU Guix. []


Upstream patches

This month, we wrote a large number of patches, including:

Bernhard M. Wiedemann used reproducibility-tooling to detect and fix packages that added changes in their %check section, thus failing when built with the --no-checks option. Only half of all openSUSE packages were tested so far, but a large number of bugs were filed, including ones against caddy, exiv2, gnome-disk-utility, grisbi, gsl, itinerary, kosmindoormap, libQuotient, med-tools, plasma6-disks, pspp, python-pypuppetdb, python-urlextract, rsync, vagrant-libvirt and xsimd.

Similarly, Jean-Pierre De Jesus DIAZ employed reproducible builds techniques in order to test a proposed refactor of the ath9k-htc-firmware package. As the change produced bit-for-bit identical binaries to the previously shipped pre-built binaries:

I don’t have the hardware to test this firmware, but the build produces the same hashes for the firmware so it’s safe to say that the firmware should keep working.


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility.

In March, an enormous number of changes were made by Holger Levsen:

  • Debian-related changes:

    • Sleep less after a so-called “404” package state has occurred. []
    • Schedule package builds more often. [][]
    • Regenerate all our HTML indexes every hour, but only every 12h for the released suites. []
    • Create and update unstable and experimental base systems on armhf again. [][]
    • Don’t reschedule so many “depwait” packages due to the current size of the i386 architecture queue. []
    • Redefine our scheduling thresholds and amounts. []
    • Schedule untested packages with a higher priority, otherwise slow architectures cannot keep up with the experimental distribution growing. []
    • Only create the stats_buildinfo.png graph once per day. [][]
    • Reproducible Debian dashboard: refactoring, update several more static stats only every 12h. []
    • Document how to use systemctl with new systemd-based services. []
    • Temporarily disable armhf and i386 continuous integration tests in order to get some stability back. []
    • Use the deb.debian.org CDN everywhere. []
    • Remove the rsyslog logging facility on bookworm systems. []
    • Add zst to the list of packages which are false-positive diskspace issues. []
    • Detect failures to bootstrap Debian base systems. []
  • Arch Linux-related changes:

    • Temporarily disable builds because the pacman package manager is broken. [][]
    • Split reproducible_html_live_status and split the scheduling timing. [][][]
    • Improve handling when database is locked. [][]
  • Misc changes:

    • Show failed services that require manual cleanup. [][]
    • Integrate two new Infomaniak nodes. [][][][]
    • Improve IRC notifications for artifacts. []
    • Run diffoscope in different systemd slices. []
    • Run the node health check more often, as it can now repair some issues. [][]
    • Also include the string Bot in the userAgent for Git. (Re: #929013). []
    • Document increased tmpfs size on our OUSL nodes. []
    • Disable memory account for the reproducible_build service. [][]
    • Allow 10 times as many open files for the Jenkins service. []
    • Set OOMPolicy=continue and OOMScoreAdjust=-1000 for both the Jenkins and the reproducible_build service. []

Mattia Rizzolo also made the following changes:

  • Debian-related changes:

    • Define a systemd slice to group all relevant services. [][]
    • Add a bunch of quotes in scripts to assuage the shellcheck tool. []
    • Add stats on how many packages have been built today so far. []
    • Instruct systemd-run to handle diffoscope’s exit codes specially. []
    • Prefer the pgrep tool over grepping the output of ps. []
    • Re-enable a couple of i386 and armhf architecture builders. [][]
    • Fix some stylistic issues flagged by the Python flake8 tool. []
    • Cease scheduling Debian unstable and experimental on the armhf architecture due to the time_t transition. []
    • Start a few more i386 & armhf workers. [][][]
    • Temporarily skip pbuilder updates in the unstable distribution, but only on the armhf architecture. []
  • Other changes:

    • Perform some large-scale refactoring on how the systemd service operates. [][]
    • Move the list of workers into a separate file so it’s accessible to a number of scripts. []
    • Refactor the powercycle_x86_nodes.py script to use the new IONOS API and its new Python bindings. []
    • Also fix nph-logwatch after the worker changes. []
    • Do not install the stunnel tool anymore; it shouldn’t be needed by anything. []
    • Move temporary directories related to Arch Linux into a single directory for clarity. []
    • Update the arm64 architecture host keys. []
    • Use a common Postfix configuration. []

The following changes were also made by:

  • Jan-Benedict Glaw:

    • Initial work to clean up a messy NetBSD-related script. [][]
  • Roland Clobus:

    • Show the installer log if the installer fails to build. []
    • Avoid the minus character (i.e. -) in a variable in order to allow for tags in openQA. []
    • Update the schedule of Debian live image builds. []
  • Vagrant Cascadian:

    • Maintenance on the virt* nodes is completed so bring them back online. []
    • Use the fully qualified domain name in configuration. []

Node maintenance was also performed by Holger Levsen, Mattia Rizzolo [][] and Vagrant Cascadian [][][][].



If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Planet DebianRussell Coker: ML Training License

Last year a Debian Developer blogged about writing Haskell code to give a bad result for LLMs that were trained on it. I forgot who wrote the post and I’d appreciate the URL if anyone has it.

I respect such technical work to enforce one’s legal rights when they aren’t respected by corporations, but I have a different approach.

As an aside, the Fosdem lecture “Fortify AI against regulation, litigation and lobotomies” is interesting on this topic [1]; it’s what inspired me to write about this.

For what I write, I am at this time happy to allow it to be used as part of a large training data set (consider this blog post a licence grant that applies until such time as I edit this post to change it), but only if it is aggregated with so much other data that my content is only a tiny portion of the data set by any metric. So I don’t want someone to make a programming LLM that has my code as the only C code, or a political data set that has my blog posts as the only left-wing content. If someone wants to train an LLM on only my content to make a Russell-simulator then I don’t license my work for that purpose, but as my writing is small enough that anyone with a bit of skill could do it on a weekend, I can’t stop it. I would be really interested in seeing the results if someone from the FOSS community wanted to make a Russell-simulator, and I would probably issue them a license for such work if asked.

If my work comprises more than 0.1% of the content in a particular measure (theme, programming language, political position, etc) in a training data set then I don’t permit that without prior discussion.

Finally if someone wants to make a FOSS training data set to be used for FOSS LLM systems (maybe under the AGPL or some similar license) then I’ll allow my writing to be used as part of that.

Planet DebianWouter Verhelst: OpenSC and the Belgian eID

Getting the Belgian eID to work on Linux systems should be fairly easy, although some people do struggle with it.

For that reason, there is a lot of third-party documentation out there in the form of blog posts, wiki pages, and other kinds of things. Unfortunately, some of this documentation is simply wrong. Written by people who played around with things until they kind of worked, sometimes you get a situation where something that used to work in the past (but wasn't really necessary) has now stopped working, yet it's still repeated in a number of places as though it were gospel.

And then people follow these instructions and now things don't work anymore.

One of these revolves around OpenSC.

OpenSC is an open source smartcard library that has support for a pretty large number of smartcards, amongst which the Belgian eID. It provides a PKCS#11 module as well as a number of supporting tools.

For those not in the know, PKCS#11 is a standardized C API for offloading cryptographic operations. It is an API that can be used when talking to a hardware cryptographic module, in order to make that module perform some actions, and it is especially popular in the open source world, with support in NSS, amongst others. NSS is written and maintained by Mozilla, and is a low-level cryptographic library that is used by Firefox (on all platforms it supports) as well as by Google Chrome and other browsers based on it (but only on Linux, and as I understand it, only for talking with smartcards; their BoringSSL library is used for other things).

The official eID software that we ship through eid.belgium.be, also known as "BeID", provides a PKCS#11 module for the Belgian eID, as well as a number of support tools to make interacting with the card easier, such as the "eID viewer", which provides the ability to read data from the card, and validate their signatures. While the very first public version of this eID PKCS#11 module was originally based on OpenSC, it has since been reimplemented as a PKCS#11 module in its own right, with no lineage to OpenSC whatsoever anymore.

About five years ago, the Belgian eID card was renewed. At the time, the new physical appearance was the most obvious difference from the old card, but there were also some technical, on-chip differences that are not so apparent. The most important one here, although it is not the only one, is the fact that newer eID cards use NIST P-384 elliptic curve-based private keys, rather than the RSA-based ones that were used in the past. This change required some changes to any PKCS#11 module that supports the eID: both the BeID one, and the OpenSC card-belpic driver that is written in support of the Belgian eID.

Obviously, the required changes were implemented for the BeID module; however, the OpenSC card-belpic driver was not updated. While I did do some preliminary work on the required changes, I was unable to get them to work, and eventually other things took up my time so I never finished the implementation. If someone would like to finish the work that I started, the preliminary patch that I wrote could be a good start -- but like I said, it doesn't yet work. Also, you'll probably be interested in the official documentation of the eID card.

Unfortunately, in the mean time someone added the Applet 1.8 ATR to the card-belpic.c file, without also implementing the required changes to the driver so that the PKCS#11 driver actually supports the eID card. The result of this is that if you have OpenSC installed in NSS for either Firefox or any Chromium-based browser, and it gets picked up before the BeID PKCS#11 module, then NSS will stop looking and pass all crypto operations to the OpenSC PKCS#11 module rather than to the official eID PKCS#11 module, and things will not work at all, causing a lot of confusion.

I have therefore taken the following two steps:

  1. The official eID packages now conflict with the OpenSC PKCS#11 module. Specifically only the PKCS#11 module, not the rest of OpenSC, so you can theoretically still use its tools. This means that once we release this new version of the eID software, when you do an upgrade and you have OpenSC installed, it will remove the PKCS#11 module and anything that depends on it. This is normal and expected.
  2. I have filed a pull request against OpenSC that removes the Applet 1.8 ATR from the driver, so that OpenSC will stop claiming that it supports the 1.8 applet.

When the pull request is accepted, we will update the official eID software to make the conflict versioned, so that as soon as it works again you will again be able to install the OpenSC and BeID packages at the same time.

In the mean time, if you have the OpenSC PKCS#11 module installed on your system, and your eID authentication does not work, try removing it.
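
One way to check whether NSS has picked up the OpenSC module is to list the modules registered in the NSS database (a sketch for Chromium-based browsers, which keep their database under ~/.pki; Firefox uses an equivalent database in its profile directory):

$ modutil -dbdir sql:$HOME/.pki/nssdb -list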

Worse Than FailureCodeSOD: To Tell the Truth

So many languages eschew "truth" for "truthiness". Today, we're looking at PHP's approach.

PHP automatically coerces types to a boolean with some fairly simple rules:

  • the boolean false is false
  • the integer 0 is false, as are the floats 0.0 and -0.0.
  • empty strings and the string "0" are false
  • arrays with no elements are false
  • NULL is false
  • objects may also override the cast behavior to define their own
  • everything else is true

Honestly, for PHP, this is fairly sane and reasonable. The string "0" makes my skin itch a bit, and the fact that the documentation page needs a big disclaimer warning that -1 is true hints at some people blundering as they convert from other languages.
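
Both of those gotchas are easy to spot-check from the command line, assuming a PHP CLI is installed:

$ php -r 'var_dump((bool) "0", (bool) "0.0", (bool) [], (bool) -1);'
bool(false)
bool(true)
bool(false)
bool(true)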

But we're not doing a language critique. We're looking at a very specific block of code, which Jakub inherited. You see, someone didn't like this, so they implemented their own version:

protected static function emulate_filter_bool( $value ) {
	// @codingStandardsIgnoreStart
	static $true  = array(
		'1',
		'true', 'True', 'TRUE',
		'y', 'Y',
		'yes', 'Yes', 'YES',
		'on', 'On', 'ON',
	);
	static $false = array(
		'0',
		'false', 'False', 'FALSE',
		'n', 'N',
		'no', 'No', 'NO',
		'off', 'Off', 'OFF',
	);
	// @codingStandardsIgnoreEnd

	if ( is_bool( $value ) ) {
		return $value;
	} elseif ( is_int( $value ) && ( 0 === $value || 1 === $value ) ) {
		return (bool) $value;
	} elseif ( ( is_float( $value ) && ! is_nan( $value ) ) && ( (float) 0 === $value || (float) 1 === $value ) ) {
		return (bool) $value;
	} elseif ( is_string( $value ) ) {
		$value = trim( $value );
		if ( in_array( $value, $true, true ) ) {
			return true;
		} elseif ( in_array( $value, $false, true ) ) {
			return false;
		} else {
			return false;
		}
	}

	return false;
}

I love when a code block starts with an annotation forcing the linter to ignore it. Always great.

We start by declaring arrays of a variety of possible true and false values. Most of the variations are in capitalization, though not exhaustively so, which I'm sure will cause problems at some point. But let's see how this code works.

First, we check if the value is a boolean, and if so, return it. Fine, very reasonable.

Second, we check if it's an integer, and if it is either 0 or 1, we cast it to a boolean and return it. In pure PHP, anything non-zero is true, but here only 1 is true.

We then do a similar check for floats: if it's a float, isn't NaN, and equals either 0 or 1, we cast it to a boolean and return it. Note, however, that they're doing a simple equality comparison, which means that floating point inputs like 1.000000001 will fail this test but will often still get printed as 1 when formatted for output, leaving developers very confused (ask Jakub how he knows).

Finally, we check for strings, and do an in_array comparison to check whether the value appears in the true or the false list. If it appears in neither, we simply return false (instead of FILE_NOT_FOUND). This raises the obvious question: why even bother checking against the false values array, when anything not found in the true values array gets treated as false anyway?
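
In other words, the entire string branch reduces to a membership test against the true-list. A minimal sketch (in Python for brevity; note it also normalizes case, which the original, with its hand-enumerated capitalizations, did not):

TRUE_VALUES = {"1", "true", "y", "yes", "on"}

def emulate_filter_bool_string(value):
    # Anything not on the true-list is false, so a false-list is dead code.
    return value.strip().lower() in TRUE_VALUES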

Jakub has this to add:

This method finds application in numerous places, particularly where front-end interfaces display checkboxes. I truly dread discovering the full extent of the code responsible for that...

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

365 TomorrowsRefusal

Author: Rick Tobin Her lips were soft as marshmallows fresh out of the bag—tender yet unyielding to Aaron’s hard press against them. They’d been torn apart from their love for years, but now, suddenly renewed, he could not hold back tears as they kissed. His strong hands held her thick dark hair as he pulled […]

The post Refusal appeared first on 365tomorrows.

,

Krebs on SecurityTwitter’s Clumsy Pivot to X.com Is a Gift to Phishers

On April 9, Twitter/X began automatically modifying links that mention “twitter.com” to read “x.com” instead. But over the past 48 hours, dozens of new domain names have been registered that demonstrate how this change could be used to craft convincing phishing links — such as fedetwitter[.]com, which until very recently rendered as fedex.com in tweets.

The message displayed when one visits goodrtwitter.com, which Twitter/X displayed as goodrx.com in tweets and messages.

A search at DomainTools.com shows at least 60 domain names have been registered over the past two days for domains ending in “twitter.com,” although research so far shows the majority of these domains have been registered “defensively” by private individuals to prevent the domains from being purchased by scammers.

Those include carfatwitter.com, which Twitter/X truncated to carfax.com when the domain appeared in user messages or tweets. Visiting this domain currently displays a message that begins, “Are you serious, X Corp?”

Update: It appears Twitter/X has corrected its mistake, and no longer truncates any domain ending in “twitter.com” to “x.com.”

Original story:

The same message is on other newly registered domains, including goodrtwitter.com (goodrx.com), neobutwitter.com (neobux.com), roblotwitter.com (roblox.com), square-enitwitter.com (square-enix.com) and yandetwitter.com (yandex.com). The message left on these domains indicates they were defensively registered by a user on Mastodon whose bio says they are a systems admin/engineer. That profile has not responded to requests for comment.

A number of these new domains including “twitter.com” appear to be registered defensively by Twitter/X users in Japan. The domain netflitwitter.com (netflix.com, to Twitter/X users) now displays a message saying it was “acquired to prevent its use for malicious purposes,” along with a Twitter/X username.

The domain mentioned at the beginning of this story — fedetwitter.com — redirects users to the blog of a Japanese technology enthusiast. A user with the handle “amplest0e” appears to have registered space-twitter.com, which Twitter/X users would see as the CEO’s “space-x.com.” The domain “ametwitter.com” already redirects to the real americanexpress.com.

Some of the domains registered recently and ending in “twitter.com” currently do not resolve and contain no useful contact information in their registration records. Those include firefotwitter[.]com (firefox.com), ngintwitter[.]com (nginx.com), and webetwitter[.]com (webex.com).

The domain setwitter.com, which Twitter/X until very recently rendered as “sex.com,” redirects to this blog post warning about the recent changes and their potential use for phishing.

Sean McNee, vice president of research and data at DomainTools, told KrebsOnSecurity it appears Twitter/X did not properly limit its redirection efforts.

“Bad actors could register domains as a way to divert traffic from legitimate sites or brands given the opportunity — many such brands in the top million domains end in x, such as webex, hbomax, xerox, xbox, and more,” McNee said. “It is also notable that several other globally popular brands, such as Rolex and Linux, were also on the list of registered domains.”

The apparent oversight by Twitter/X was cause for amusement and amazement from many former users who have migrated to other social media platforms since the new CEO took over. Matthew Garrett, a lecturer at U.C. Berkeley’s School of Information, summed up the Schadenfreude thusly:

“Twitter just doing a ‘redirect links in tweets that go to x.com to twitter.com instead but accidentally do so for all domains that end x.com like eg spacex.com going to spacetwitter.com’ is not absolutely the funniest thing I could imagine but it’s high up there.”

Cryptogram XZ Utils Backdoor

The cybersecurity world got really lucky last week. An intentionally placed backdoor in XZ Utils, an open-source compression utility, was pretty much accidentally discovered by a Microsoft engineer—weeks before it would have been incorporated into both Debian and Red Hat Linux. From Ars Technica:

Malicious code added to XZ Utils versions 5.6.0 and 5.6.1 modified the way the software functions. The backdoor manipulated sshd, the executable file used to make remote SSH connections. Anyone in possession of a predetermined encryption key could stash any code of their choice in an SSH login certificate, upload it, and execute it on the backdoored device. No one has actually seen code uploaded, so it’s not known what code the attacker planned to run. In theory, the code could allow for just about anything, including stealing encryption keys or installing malware.

It was an incredibly complex backdoor. Installing it was a multi-year process that seems to have involved social engineering the lone unpaid engineer in charge of the utility. More from Ars Technica:

In 2021, someone with the username JiaT75 made their first known commit to an open source project. In retrospect, the change to the libarchive project is suspicious, because it replaced the safe_fprint function with a variant that has long been recognized as less secure. No one noticed at the time.

The following year, JiaT75 submitted a patch over the XZ Utils mailing list, and, almost immediately, a never-before-seen participant named Jigar Kumar joined the discussion and argued that Lasse Collin, the longtime maintainer of XZ Utils, hadn’t been updating the software often or fast enough. Kumar, with the support of Dennis Ens and several other people who had never had a presence on the list, pressured Collin to bring on an additional developer to maintain the project.

There’s a lot more. The sophistication of both the exploit and the process to get it into the software project scream nation-state operation. It’s reminiscent of SolarWinds, although (1) it would have been much, much worse, and (2) we got really, really lucky.

I simply don’t believe this was the only attempt to slip a backdoor into a critical piece of Internet software, either closed source or open source. Given how lucky we were to detect this one, I believe this kind of operation has been successful in the past. We simply have to stop building our critical national infrastructure on top of random software libraries managed by lone unpaid distracted—or worse—individuals.

Worse Than FailureCodeSOD: Terminated By Nulls

Strings in C are a unique collection of mistakes. The biggest one is the idea of null termination. Null termination is not without its advantages: because you're using a single byte to mark the end of the string, you can have strings of arbitrary length. No need to track the size and worry whether your size variable is big enough to hold the string's length. No complicated data structures. Just "read till you find a 0 byte, and you know you're done."
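
As a toy illustration of that convention (sketched with Python's ctypes rather than C, but the byte-scan is the same idea):

import ctypes

# A 16-byte buffer holding "hello" followed by zero padding; C string
# functions only ever look at the bytes before the first 0x00.
buf = ctypes.create_string_buffer(b"hello", 16)

# Emulate strlen: walk the buffer until the first zero byte.
length = 0
while buf.raw[length] != 0:
    length += 1

print(length)     # 5
print(buf.value)  # b'hello'; ctypes applies the same rule for us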

Of course, this is the root of a lot of evils. Malicious inputs that lack a null terminator, for example, are a common exploit. It's so dangerous that all of the str* functions have strn* versions, which allow you to pass sizes to ensure you don't overrun any buffers.

Dmitri sends us a simple example of someone not quite fully understanding this.

strcpy( buffer, string);
strcat( buffer, "\0");

The first line here copies the contents of string into buffer. It leverages the null terminator to know when the copy can stop. Then, we use strcat, which scans the destination for its null terminator and appends the new string at that point. The new string, in this case, is "\0", which strcat treats as an empty string (its first byte is already the terminator), so the call changes nothing at all.

The developer responsible for this is protecting against a string lacking its null terminator by using functions which absolutely require it to be null terminated.

C strings are hard in the best case, but they're a lot harder when you don't understand them.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

365 TomorrowsDeath by Entropy

Author: DJ Tantillo So why am I trapped in a steel box? Intelligence. Once the connection between intelligence and entropy – the latter in the form of the maximization of possible futures as a marker of the former – was shown to be valid for individuals as well as species, the sewer cover was rolled […]

The post Death by Entropy appeared first on 365tomorrows.

,

Planet DebianIan Jackson: Why we’ve voted No to CfD for Derril Water solar farm

[personal profile] ceb and I are members of the Derril Water Solar Park cooperative.

We were recently invited to vote on whether the coop should bid for a Contract for Difference, in a government green electricity auction.

We’ve voted No.

“Green electricity” from your mainstream supplier is a lie

For a while [personal profile] ceb and I have wanted to contribute directly to green energy provision. This isn’t really possible in the mainstream consumer electricity market.

Mainstream electricity suppliers’ “100% green energy” tariffs are pure greenwashing. In a capitalist boondoggle, they basically “divvy up” the electricity so that customers on the (typically more expensive) “green” tariff “get” the green electricity, and the other customers “get” whatever is left. (Of course the electricity is actually all mixed up by the National Grid.) There are fewer people signed up for these tariffs than there is green power generated, so signing up for a “green” tariff basically has no effect whatsoever, other than giving evil people more money.

Ripple

About a year ago we heard about Ripple. The structure is a little complicated, but the basic upshot is:

  • Ripple promote and manage renewable energy schemes.
  • The schemes themselves are each an individual company; the company is largely owned by a co-operative.
  • The co-op is owned by consumers of electricity in the UK.
  • To stop the co-operative being a purely financial investment scheme, share ownership is limited according to your electricity usage.
  • The electricity is sold on the open market, and the profits are used to offset members’ electricity bills.

(One gotcha from all of this is that for this to work your electricity billing provider has to be signed up with Ripple, but ours, Octopus, is.)

It seemed to us that this was a way for us to directly cause (and pay for!) the actual generation of green electricity.

So, we bought shares in one of these co-operatives: we are co-owners of the Derril Water Solar Farm. We signed up for the maximum: funding generating capacity corresponding to 120% of our current electricity usage. We paid a little over £5000 for our shares.

Contracts for Difference

The UK has a renewable energy subsidy scheme, which goes by the name of Contracts for Difference. The idea is that a renewable energy generation company bids in advance, saying that they’ll sell their electricity at Y price, for the duration of the contract (15 years in the current round). The lowest bids win. All the electricity from the participating infrastructure is sold on the open market, but if the market price is low the government makes up the difference, and if the price is high, the government takes the winnings.
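
As a sketch of the settlement arithmetic (illustrative numbers only, not Ripple's or the scheme's actual figures):

def cfd_settlement(strike, market_price, mwh):
    # Positive result: the government tops up the generator;
    # negative result: the generator pays the difference back.
    return (strike - market_price) * mwh

print(cfd_settlement(50, 40, 1000))  # market below strike: +10000 topped up
print(cfd_settlement(50, 70, 1000))  # market above strike: -20000 paid back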

This is supposedly good for giving a stable investment environment, since the price the developer is going to get no longer depends on the electricity market over the next 15 years. The CfD system is supposed to encourage development, so you can only apply before you’ve commissioned your generation infrastructure.

Ripple and CfD

Ripple recently invited us to agree that the Derril Water co-operative should bid in the current round of CfDs.

If this goes ahead, and we are one of the auction’s winners, the result would be that, instead of selling our electricity at the market price, we’ll sell it at the fixed CfD price.

This would mean that our return on our investment (which shows up as savings on our electricity bills) would be decoupled from market electricity prices, and be much more predictable.

They can’t tell us the price they’d want to bid at, and future electricity prices are rather hard to predict, but it’s clear from the accompanying projections that they think we’d be better off on average with a CfD.

The documentation is very full of financial projections and graphs; other factors aren’t really discussed in any detail.

The rules of the co-op didn’t require them to hold a vote, but very sensibly, for such a fundamental change in the model, they decided to treat it roughly the same way as for a rules change: they’re hoping to get 75% Yes votes.

Voting No

The reason we’re in this co-op at all is because we want to directly fund renewable electricity.

Participating in the CfD auction would involve us competing with capitalist energy companies for government subsidies. Subsidies which are supposed to encourage the provision of green electricity.

It seems to us that participating in this auction would remove most of the difference between what we hoped to do by investing in Derril Water, and just participating in the normal consumer electricity market.

In particular, if we do win in the auction, that’s probably directly removing the funding and investment support model for other, market-investor-funded projects.

In other words, our buying into Derril Water ceases to be an additional green energy project, changing (in its minor way) the UK’s electricity mix. It becomes a financial transaction much more tenuously connected (if connected at all) to helping mitigate the climate emergency.

So our conclusion was that we must vote against.




Krebs on SecurityApril’s Patch Tuesday Brings Record Number of Fixes

If only Patch Tuesdays came around infrequently — like total solar eclipse rare — instead of just creeping up on us each month like The Man in the Moon. Although to be fair, it would be tough for Microsoft to eclipse the number of vulnerabilities fixed in this month’s patch batch — a record 147 flaws in Windows and related software.

Yes, you read that right. Microsoft today released updates to address 147 security holes in Windows, Office, Azure, .NET Framework, Visual Studio, SQL Server, DNS Server, Windows Defender, Bitlocker, and Windows Secure Boot.

“This is the largest release from Microsoft this year and the largest since at least 2017,” said Dustin Childs, from Trend Micro’s Zero Day Initiative (ZDI). “As far as I can tell, it’s the largest Patch Tuesday release from Microsoft of all time.”

Tempering the sheer volume of this month’s patches is the middling severity of many of the bugs. Only three of April’s vulnerabilities earned Microsoft’s most-dire “critical” rating, meaning they can be abused by malware or malcontents to take remote control over unpatched systems with no help from users.

Most of the flaws that Microsoft deems “more likely to be exploited” this month are marked as “important,” which usually involve bugs that require a bit more user interaction (social engineering) but which nevertheless can result in system security bypass, compromise, and the theft of critical assets.

Ben McCarthy, lead cyber security engineer at Immersive Labs, called attention to CVE-2024-20670, an Outlook for Windows spoofing vulnerability described as being easy to exploit. It involves convincing a user to click on a malicious link in an email, which can then steal the user’s password hash and authenticate as the user in another Microsoft service.

Another interesting bug McCarthy pointed to is CVE-2024-29063, which involves hard-coded credentials in Azure’s search backend infrastructure that could be gleaned by taking advantage of Azure AI search.

“This along with many other AI attacks in recent news shows a potential new attack surface that we are just learning how to mitigate against,” McCarthy said. “Microsoft has updated their backend and notified any customers who have been affected by the credential leakage.”

CVE-2024-29988 is a weakness that allows attackers to bypass Windows SmartScreen, a technology Microsoft designed to provide additional protections for end users against phishing and malware attacks. Childs said one of ZDI’s researchers found this vulnerability being exploited in the wild, although Microsoft doesn’t currently list CVE-2024-29988 as being exploited.

“I would treat this as in the wild until Microsoft clarifies,” Childs said. “The bug itself acts much like CVE-2024-21412 – a [zero-day threat from February] that bypassed the Mark of the Web feature and allows malware to execute on a target system. Threat actors are sending exploits in a zipped file to evade EDR/NDR detection and then using this bug (and others) to bypass Mark of the Web.”

Update, 7:46 p.m. ET: A previous version of this story said there were no zero-day vulnerabilities fixed this month. BleepingComputer reports that Microsoft has since confirmed that there are actually two zero-days. One is the flaw Childs just mentioned (CVE-2024-21412), and the other is CVE-2024-26234, described as a “proxy driver spoofing” weakness.

Satnam Narang at Tenable notes that this month’s release includes fixes for two dozen flaws in Windows Secure Boot, the majority of which are considered “Exploitation Less Likely” according to Microsoft.

“However, the last time Microsoft patched a flaw in Windows Secure Boot in May 2023 had a notable impact as it was exploited in the wild and linked to the BlackLotus UEFI bootkit, which was sold on dark web forums for $5,000,” Narang said. “BlackLotus can bypass functionality called secure boot, which is designed to block malware from being able to load when booting up. While none of these Secure Boot vulnerabilities addressed this month were exploited in the wild, they serve as a reminder that flaws in Secure Boot persist, and we could see more malicious activity related to Secure Boot in the future.”

For links to individual security advisories indexed by severity, check out ZDI’s blog and the Patch Tuesday post from the SANS Internet Storm Center. Please consider backing up your data or your drive before updating, and drop a note in the comments here if you experience any issues applying these fixes.

Adobe today released nine patches tackling at least two dozen vulnerabilities in a range of software products, including Adobe After Effects, Photoshop, Commerce, InDesign, Experience Manager, Media Encoder, Bridge, Illustrator, and Adobe Animate.

KrebsOnSecurity needs to correct the record on a point mentioned at the end of March’s “Fat Patch Tuesday” post, which looked at new AI capabilities built into Adobe Acrobat that are turned on by default. Adobe has since clarified that its apps won’t use AI to auto-scan your documents, as the original language in its FAQ suggested.

“In practice, no document scanning or analysis occurs unless a user actively engages with the AI features by agreeing to the terms, opening a document, and selecting the AI Assistant or generative summary buttons for that specific document,” Adobe said earlier this month.

Planet DebianGunnar Wolf: Think outside the box • Welcome Eclipse!

Now that we are back from our six month period in Argentina, we decided to adopt a kitten, to bring more diversity into our lives. Perhaps this little girl will teach us to think outside the box!

Yesterday we witnessed a solar eclipse — Mexico City was not in the totality range (we reached ~80%), but it was a great experience to go with the kids. A couple dozen thousand people gathered for a massive picnic in las islas, the main area inside our university campus.

Afterwards, we went briefly back home, then crossed the city to fetch the little kitten. Of course, the kids were unanimous: Her name is Eclipse.

Worse Than FailureTwo Pizzas for ME

Gloria was a senior developer at IniMirage, a company that makes custom visualizations for their clients. Over a few years, IniMirage had grown to more than 100 people, but was still very much in startup mode. Because of that, Gloria tried to keep her teams sized for two pizzas. Thomas, the product manager, on the other hand, felt that the company was ready to make big moves, and could scale up the teams: more people could move products faster. And Thomas was her manager, so he was "setting direction."

Gloria's elderly dog had spent the night at the emergency vet, and the company hadn't grown up to "giving sick days" yet, so she was nursing a headache from lack of sleep when Thomas tried to initiate a Slack huddle. He had a habit of pushing the "Huddle" button any time the mood struck, without rhyme or reason.

She put on her headphones and accepted the call. "It's Gloria. Can you hear me?" She checked her mic, and repeated the check. She waited a minute before hanging up and getting back to work.

About five minutes later, Thomas called again. "Hey, did you call me like 10 minutes ago?"

"No, you called me." Gloria facepalmed and took a deep, calming breath. "I couldn't hear you."

Thomas said, "Huh, okay. Anyway, is that demo ready for today?"

Thomas loved making schedules. He usually used Excel. There was just one problem: he rarely shared them, and he rarely read them after making them. Gloria had nothing on her calendar. "What demo?"

"Oh, Dylan said he was ready for a demo. So I scheduled it with Jack."

Jack was the CEO. Dylan was one of Gloria's peers. Gloria checked Github, and said, "Well, Dylan hasn't pushed anything for… months. I haven't heard anything from him. Has he showed you this demo?"

Gloria heard crunching. Thomas was munching on some chips. She heard him typing. After a minute, she said, "Thomas?"

"Oh, sorry, I was distracted."

Clearly. "Okay, I think we should cancel this meeting. I've seen this before, and with a bad demo, we could lose buy in."

Thomas said, "No, no, it'll be fine."

Gloria said, "Okay, well, let me know how that demo goes." She left the call and went back to work, thinking that it'd be Thomas's funeral. A few minutes before the meeting, her inbox dinged. She was now invited to the demo.

She joined the meeting, only to learn that Dylan was out sick and couldn't make it. She spent the time giving project updates on her work instead of demos, which is what the CEO actually wanted anyway. The meeting ended and everyone was happy: everyone but Gloria.

Gloria wrote an email to the CEO, expressing her concerns. Thomas was inattentive, incommunicative, and had left her alone to manage the team. She felt that she was doing more of the product management work than Thomas was. Jack replied that he appreciated her concerns, but that Thomas was growing into the position.

Julia, one of the other product managers, popped by Gloria's desk a few weeks later. "You know Dylan?"

Gloria said, "Well, I know he hasn't push any code in a literal year and keeps getting sick. I think I've pushed more code to his project than he has, and I'm not on it."

Julia laughed. "Well, he's been fired, but not for that."

Thomas had been pushing for more demos. Which meant he pulled Dylan into more meetings with the CEO. Jack was a "face time" person, and required everyone to turn on their webcams during meetings. It didn't take very many meetings to discover that Dylan was an entirely different person each time. There were multiple Dylans.

"But even without that, HR was going to fire him for not showing up to work," Julia said.

"But… if there were multiple people… why couldn't someone show up?" Gloria realized she was asking the wrong question. "How did Thomas never realize it?"

And if he was multiple people, how could he never get any work done? Dylan was a two-pizza team all by himself.

After the Dylan debacle, Thomas resigned suddenly and left to work at a competitor. A new product manager, Penny, came on board, and was organized, communicative, and attentive. Gloria never heard about Dylan again, and Penny kept the team sizes small.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsFar Off Sirens

Author: Majoki It’s peaceful now. I can concentrate better. Even reflect a little. It hasn’t been like that in a long time. Living in a city that’s eating itself is a noisy place. Even on the calmest days at the lab, there was always the sound of far off sirens. Plaintive calls, as if from […]

The post Far Off Sirens appeared first on 365tomorrows.

Planet DebianMatthew Palmer: How I Tripped Over the Debian Weak Keys Vulnerability

Those of you who haven’t been in IT for far, far too long might not know that next month will be the 16th(!) anniversary of the disclosure of what was, at the time, a fairly earth-shattering revelation: that for about 18 months, the Debian OpenSSL package was generating entirely predictable private keys.

The recent xz-stential threat (thanks to @nixCraft for making me aware of that one), has got me thinking about my own serendipitous interaction with a major vulnerability. Given that the statute of limitations has (probably) run out, I thought I’d share it as a tale of how “huh, that’s weird” can be a powerful threat-hunting tool – but only if you’ve got the time to keep pulling at the thread.

Prelude to an Adventure

Our story begins back in March 2008. I was working at Engine Yard (EY), a now largely-forgotten Rails-focused hosting company, which pioneered several advances in Rails application deployment. Probably EY’s greatest claim to lasting fame is that they helped launch a little code hosting platform you might have heard of, by providing them free infrastructure when they were little more than a glimmer in the Internet’s eye.

I am, of course, talking about everyone’s favourite Microsoft product: GitHub.

Since GitHub was in the right place, at the right time, with a compelling product offering, they quickly started to gain traction, and grow their userbase. With growth comes challenges, amongst them the one we’re focusing on today: SSH login times. Then, as now, GitHub provided SSH access to the git repos they hosted, by SSHing to git@github.com with publickey authentication. They were using the standard way that everyone manages SSH keys: the ~/.ssh/authorized_keys file, and that became a problem as the number of keys started to grow.

The way that SSH uses this file is that, when a user connects and asks for publickey authentication, SSH opens the ~/.ssh/authorized_keys file and scans all of the keys listed in it, looking for a key which matches the key that the user presented. This linear search is normally not a huge problem, because nobody in their right mind puts more than a few keys in their ~/.ssh/authorized_keys, right?
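
To make the scaling problem concrete, here is a toy Python model of the two lookup strategies (not the actual patch; the real index lived in MySQL, keyed on the key fingerprint):

import hashlib

# Toy data: pretend these are millions of authorized keys.
authorized_keys = [b"ssh-rsa key-%d user-%d" % (i, i) for i in range(100000)]

def linear_lookup(presented):
    # What ~/.ssh/authorized_keys gives you: O(n) work per login attempt.
    for key in authorized_keys:
        if key == presented:
            return True
    return False

# Indexing on a fingerprint makes each lookup O(1) instead.
by_fingerprint = {hashlib.sha256(k).hexdigest(): k for k in authorized_keys}

def indexed_lookup(presented):
    return by_fingerprint.get(hashlib.sha256(presented).hexdigest()) == presented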

2008-era GitHub giving monkey puppet side-eye to the idea that nobody stores many keys in an authorized_keys file

Of course, as a popular, rapidly-growing service, GitHub was gaining users at a fair clip, to the point that the one big file that stored all the SSH keys was starting to visibly impact SSH login times. This problem was also not going to get any better by itself. Something Had To Be Done.

EY management was keen on making sure GitHub ran well, and so despite it not really being a hosting problem, they were willing to help fix this problem. For some reason, the late, great Ezra Zygmuntowitz pointed GitHub in my direction, and let me take the time to really get into the problem with the GitHub team. After examining a variety of different possible solutions, we came to the conclusion that the least-worst option was to patch OpenSSH to lookup keys in a MySQL database, indexed on the key fingerprint.

We didn’t take this decision on a whim – it wasn’t a case of “yeah, sure, let’s just hack around with OpenSSH, what could possibly go wrong?”. We knew it was potentially catastrophic if things went sideways, so you can imagine how much worse the other options available were. Ensuring that this wouldn’t compromise security was a lot of the effort that went into the change. In the end, though, we rolled it out in early April, and lo! SSH logins were fast, and we were pretty sure we wouldn’t have to worry about this problem for a long time to come.

Normally, you’d think “patching OpenSSH to make mass SSH logins super fast” would be a good story on its own. But no, this is just the opening scene.

Chekov’s Gun Makes its Appearance

Fast forward a little under a month, to the first few days of May 2008. I get a message from one of the GitHub team, saying that somehow users were able to access other users’ repos over SSH. Naturally, as we’d recently rolled out the OpenSSH patch, which touched this very thing, the code I’d written was suspect number one, so I was called in to help.

The lineup scene from The Usual Suspects: they're called The Usual Suspects for a reason, but sometimes, it really is Keyser Söze

Eventually, after more than a little debugging, we discovered that, somehow, there were two users with keys that had the same key fingerprint. This absolutely shouldn’t happen – it’s a bit like winning the lottery twice in a row [1] – unless the users had somehow shared their keys with each other, of course. Still, it was worth investigating, just in case it was a web application bug, so the GitHub team reached out to the users impacted, to try and figure out what was going on.

The users professed no knowledge of each other, neither admitted to publicising their key, and couldn’t offer any explanation as to how the other person could possibly have gotten their key.

Then things went from “weird” to “what the…?”. Because another pair of users showed up, sharing a key fingerprint – but it was a different shared key fingerprint. The odds now have gone from “winning the lottery multiple times in a row” to as close to “this literally cannot happen” as makes no difference.

Milhouse from The Simpsons says that We're Through The Looking Glass Here, People

Once we were really, really confident that the OpenSSH patch wasn’t the cause of the problem, my involvement in the problem basically ended. I wasn’t a GitHub employee, and EY had plenty of other customers who needed my help, so I wasn’t able to stay deeply involved in the on-going investigation of The Mystery of the Duplicate Keys.

However, the GitHub team did keep talking to the users involved, and managed to determine the only apparent common factor was that all the users claimed to be using Debian or Ubuntu systems, which was where their SSH keys would have been generated.

That was as far as the investigation had really gotten, when along came May 13, 2008.

Chekov’s Gun Goes Off

With the publication of DSA-1571-1, everything suddenly became clear. Through a well-meaning but ultimately disastrous cleanup of OpenSSL’s randomness generation code, the Debian maintainer had inadvertently reduced the number of possible keys that could be generated by a given user from “bazillions” to a little over 32,000. With so many people signing up to GitHub – some of them no doubt following best practice and freshly generating a separate key – it’s unsurprising that some collisions occurred.
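
The “unsurprising” is easy to check with a birthday-bound back-of-the-envelope (using the rough 32k figure; the real keyspace varied a little by key type and architecture):

from math import exp

KEYSPACE = 32768  # approximate number of distinct keys the bug allowed

def collision_probability(users, n=KEYSPACE):
    # Birthday bound: chance that at least two freshly generated keys collide.
    return 1 - exp(-users * (users - 1) / (2 * n))

print(collision_probability(300))   # ~0.75: collisions already likely
print(collision_probability(1000))  # ~0.9999998: all but certain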

You can imagine the sense of “oooooooh, so that’s what’s going on!” that rippled out once the issue was understood. I was mostly glad that we had conclusive evidence that my OpenSSH patch wasn’t at fault, little knowing how much more contact I was to have with Debian weak keys in the future, running a huge store of known-compromised keys and using them to find misbehaving Certificate Authorities, amongst other things.

Lessons Learned

While I’ve not found a description of exactly when and how Luciano Bello discovered the vulnerability that became CVE-2008-0166, I presume he first came across it some time before it was disclosed – likely before GitHub tripped over it. The stable Debian release that included the vulnerable code had been released a year earlier, so there was plenty of time for Luciano to have discovered key collisions and go “hmm, I wonder what’s going on here?”, then keep digging until the solution presented itself.

The thought “hmm, that’s odd”, followed by intense investigation, leading to the discovery of a major flaw is also what ultimately brought down the recent XZ backdoor. The critical part of that sequence is the ability to do that intense investigation, though.

When I reflect on my brush with the Debian weak keys vulnerability, what sticks out to me is the fact that I didn’t do the deep investigation. I wonder if Luciano hadn’t found it, how long it might have been before it was found. The GitHub team would have continued investigating, presumably, and perhaps they (or I) would have eventually dug deep enough to find it. But we were all super busy – myself, working support tickets at EY, and GitHub feverishly building features and fighting the fires in their rapidly-growing service.

As it was, Luciano was able to take the time to dig in and find out what was happening, but just like the XZ backdoor, I feel like we, as an industry, got a bit lucky that someone with the skills, time, and energy was on hand at the right time to make a huge difference.

It’s a luxury to be able to take the time to really dig into a problem, and it’s a luxury that most of us rarely have. Perhaps an understated takeaway is that somehow we all need to wrestle back some time to follow our hunches and really dig into the things that make us go “hmm…”.

Support My Hunches

If you’d like to help me be able to do intense investigations of mysterious software phenomena, you can shout me a refreshing beverage on ko-fi.


  1. the odds are actually probably more like winning the lottery about twenty times in a row. The numbers involved are staggeringly huge, so it’s easiest to just approximate it as “really, really unlikely”. 

,

Planet DebianBastian Blank: Python dataclasses for Deb822 format

Python includes some helpful support for classes that are designed to just hold some data and not much more: Data Classes. It uses plain Python type definitions to specify what you can have, plus some further information for every field. This then generates some useful methods for you, like __init__ and __repr__, and on request more. And given that those type definitions are available to other code, a lot more can be done.

There exist several separate packages to work on data classes. For example, you can have data validation from JSON with dacite.

But Debian likes a pretty strange format usually called Deb822, which is in fact derived from the RFC 822 format of e-mail messages. Those files contain single messages in a well-known format.

So I'd like to introduce some Deb822 format support for Python Data Classes. For now the code resides in the Debian Cloud tool.

Usage

Setup

It uses the standard data classes support and several helper functions. Also you need to enable support for postponed evaluation of annotations.

from __future__ import annotations
from dataclasses import dataclass
from typing import Optional  # needed for the Optional field below
from dataclasses_deb822 import read_deb822, field_deb822

Class definition start

Data classes are just normal classes, just with a decorator.

@dataclass
class Package:

Field definitions

You need to specify the exact key to be used for this field.

    package: str = field_deb822('Package')
    version: str = field_deb822('Version')
    arch: str = field_deb822('Architecture')

Default values are also supported.

    multi_arch: Optional[str] = field_deb822(
        'Multi-Arch',
        default=None,
    )

Reading files

for p in read_deb822(Package, sys.stdin, ignore_unknown=True):
    print(p)

Full example

from __future__ import annotations
from dataclasses import dataclass
from debian_cloud_images.utils.dataclasses_deb822 import read_deb822, field_deb822
from typing import Optional
import sys


@dataclass
class Package:
    package: str = field_deb822('Package')
    version: str = field_deb822('Version')
    arch: str = field_deb822('Architecture')
    multi_arch: Optional[str] = field_deb822(
        'Multi-Arch',
        default=None,
    )


for p in read_deb822(Package, sys.stdin, ignore_unknown=True):
    print(p)
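
As a hypothetical usage note, extending the full example above (this assumes read_deb822 accepts any text stream, not just sys.stdin):

import io

stanza = io.StringIO(
    "Package: hello\n"
    "Version: 2.10-3\n"
    "Architecture: amd64\n"
    "Multi-Arch: foreign\n"
)

for p in read_deb822(Package, stanza, ignore_unknown=True):
    print(p)
# Package(package='hello', version='2.10-3', arch='amd64', multi_arch='foreign')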

Known limitations

Worse Than FailureCodeSOD: They Key To Dictionaries

It's incredibly common to convert objects to dictionaries/maps and back, for all sorts of reasons. Jeff's co-worker was tasked with turning a dictionary containing three keys, "mail", "telephonenumber", and "facsimiletelephonenumber", into an object representing a contact. This was their solution:

foreach (string item in _ptAttributeDic.Keys)
{
    string val = _ptAttributeDic[item];
    switch (item)
    {
        case "mail":
            if (string.IsNullOrEmpty(base._email))
                base._email = val;
            break;
        case "facsimiletelephonenumber":
            base._faxNum = val;
            break;
        case "telephonenumber":
            base._phoneNumber = val;
            break;
    }
}

Yes, we iterate across all of them to find the ones that we want. The dictionary in question is actually quite large, so we spend most of our time here looking at keys we don't care about, to find the three we do. If only there were some easier way, some efficient option for finding items in a dictionary by their name. If only we could just fetch items by their key, then we wouldn't need to have this loop. If only.
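
There is, of course. Sketched in Python for brevity (C#'s Dictionary offers the same direct access through its indexer or TryGetValue):

attrs = {
    "mail": "user@example.com",
    "telephonenumber": "555-0100",
    "facsimiletelephonenumber": "555-0101",
    # ...plus all the keys we never cared about
}

# Three direct lookups instead of a walk over every key.
email = attrs.get("mail", "")
fax = attrs.get("facsimiletelephonenumber", "")
phone = attrs.get("telephonenumber", "")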

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

365 TomorrowsBorsen Rules

Author: Julian Miles, Staff Writer The bodies plummeting from the starry sky are screaming. Esteban chuckles. “Shock fields up!” Ambusan stares at him. “Shock fields? Surely you mean catch fields? Shock fields save, but it’ll hurt.” “If a Mistress saw fit to drop them from that high, she didn’t mean for them to have a […]

The post Borsen Rules appeared first on 365tomorrows.

,

Planet DebianThorsten Alteholz: My Debian Activities in March 2024

FTP master

This month I accepted 147 and rejected 12 packages. The overall number of packages that got accepted was 151.

If you file an RM bug, please do check whether there are reverse dependencies as well and file RM bugs for them. It is annoying and time-consuming when I have to do the moreinfo dance.

Debian LTS

This was my hundred-seventeenth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

During my allocated time I uploaded:

  • [DLA 3770-1] libnet-cidr-lite-perl security update for one CVE to fix IP parsing and ACLs based on the result
  • [#1067544] Bullseye PU bug for libmicrohttpd
  • Unfortunately XZ happened at the end of the month and I intentionally delayed other uploads: they will appear as DLA-3781-1 and DLA-3784-1 in April

I also continued to work on qtbase-opensource-src and last but not least did a week of FD.

Debian ELTS

This month was the sixty-eighth ELTS month. During my allocated time I uploaded:

  • [ELA-1062-1] libnet-cidr-lite-perl security update for one CVE to improve parsing of IP addresses in Jessie and Stretch
  • Due to XZ I also delayed the uploads here. They will appear as ELA-1069-1 and ELA-1070-1 in April

I also continued working on an update for qtbase-opensource-src in Stretch (and LTS and other releases as well) and did a week of FD.

Debian Printing

This month I uploaded new upstream or bugfix versions of:

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream or bugfix version of:

Debian IoT

This month I uploaded new upstream or bugfix versions of:

Debian Mobcom

This month I uploaded a new upstream or bugfix version of:

misc

This month I uploaded new upstream or bugfix versions of:

365 TomorrowsCinder Three

Author: Stephen Dougherty The smoke rose from a fire that wasn’t a fire. Dr Alvin shifted his old bones in his favorite seat while his young visitor poured him a drink at his request. “How did you end up on the rock in the first place?” The boy sat in the only other seat in […]

The post Cinder Three appeared first on 365tomorrows.

,

Planet DebianJohn Goerzen: Facebook is Censoring Stories about Climate Change and Illegal Raid in Marion, Kansas

It is, sadly, not entirely surprising that Facebook is censoring articles critical of Meta.

The Kansas Reflector published an article about Meta censoring environmental articles about climate change — deeming them “too controversial”.

Facebook then censored the article about Facebook censorship, and then after an independent site published a copy of the climate change article, Facebook censored it too.

The CNN story says Facebook apologized and said it was a mistake and was fixing it.

Color me skeptical, because today I saw this:

Yes, that’s right: today, April 6, I get a notification that they removed a post from August 12. The notification was dated April 4, but only showed up for me today.

I wonder why my post from August 12 was fine for nearly 8 months, and then all of a sudden, when the same website runs an article critical of Facebook, my 8-month-old post is a problem. Hmm.

Riiiiiight. Cybersecurity.

This isn’t even the first time they’ve done this to me.

On September 11, 2021, they removed my post about the social network Mastodon (click that link for screenshot). A post that, incidentally, had been made 10 months prior to being removed.

While they ultimately reversed themselves, I subsequently wrote Facebook’s Blocking Decisions Are Deliberate — Including Their Censorship of Mastodon.

That this same pattern has played out a second time — again with something that is a very slight challenge to Facebook — seems to validate my conclusion. Facebook lets all sorts of hateful garbage infest their site, but anything about climate change — or their own censorship — gets removed, and this pattern persists for years.

There’s a reason I prefer Mastodon these days. You can find me there as @jgoerzen@floss.social.

So. I’ve written this blog post. And then I’m going to post it to Facebook. Let’s see if they try to censor me for a third time. Bring it, Facebook.

365 TomorrowsThe Lift Rider

Author: Aubrey Williams Every Tuesday and Thursday I have business in the Kirk Tower, and take the lift to the 21st floor. I’ve done this for the past three months, and every time I take it, regardless of which floor I start from, there’s always the same man in there. He’s clean-cut, a bit like […]

The post The Lift Rider appeared first on 365tomorrows.

Planet DebianJunichi Uekawa: Trying to explain analogue clock.

Trying to explain an analogue clock. It's hard to explain. I tried adding some things for affordance, and it is still not enough: it's not obvious which hand is the hour hand and which is the minute hand.

,

Planet DebianPaul Wise: FLOSS Activities March 2024

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Administration

  • Debian wiki: approve accounts

Communication

  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.12.8.2.0 on CRAN: Upstream Fix


Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1136 other packages on CRAN, downloaded 33.5 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 578 times according to Google Scholar.

This release brings a new upstream bugfix release, Armadillo 12.8.2, prepared by Conrad two days ago. It took the usual day to noodle over 1100+ reverse dependencies and ensure two failures were independent of the upgrade (i.e., “no change to worse” in CRAN parlance). It took CRAN another day because we hit a random network outage leading to a (spurious) NOTE on a remote URL, and were then caught in the shrapnel from another large package ecosystem update that spuriously attributed to us some build failures due to a missing rebuild. All good, as human intervention came to the rescue.

The set of changes since the last CRAN release follows.

Changes in RcppArmadillo version 0.12.8.2.0 (2024-04-02)

  • Upgraded to Armadillo release 12.8.2 (Cortisol Injector)

    • Workaround for FFTW3 header clash

    • Workaround in testing framework for issue under macOS

    • Minor cleanups to reduce code bloat

    • Improved documentation

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

David BrinCynics are no help… nor are those pushing the remaking of humanity

Heading out for the eclipse... likely to be messed up by thunderstorms!  These things never work out.  Anyway, here's a little mini-rant about CYNICISM that might amuse you. BTW for the record, I really approve of the guys I am criticizing here!  They are of value to the world. I just wish they'd add a little bit of a song to their messages of gloom. Optimism is a partner of cynicism, if you wanna get things done.  It makes you more effective!



== Jeez, man. Always look (at least a bit) on the bright side of life! ==


Carumba! Bruce Sterling and Cory Doctorow are younger than me. But in this interview with Tim Ventura, Bruce goes full – “I hope I don’t live so long that I’ll see the world’s final decay into dull, incompetent worldwide oligarchy.” (Or something like that.)  


Dang, well I hope never to become a cranky geezer shouting at (literal) clouds! I mean ‘recent science is boring’? Jeepers, Bruce, you really need to get out more. The wave of great new stuff is accelerating!  


It’s not that cynicism has no place – I bought a terrifically cynical novella from Bruce some years back – about a delightful, fictionalized future Silvio Berlusconi. But there the cynicism was mixed with lovely riffs of optimistic techno-joy and faith in progress. Still, misanthropists can contribute! Indeed, beyond his relentless cynicism-chic, Bruce does note some real, worrisome trends, like looming oligarchy-dominance and their blatant War on Science. And, of course, we’re all fretful about the spasm of crises recently unleashed by Vlad Putin all over the globe, desperately hurling every trick he’s gathered, in order to stave off his own version of the movie “Downfall.” 

Still, when Bruce goes: “I thought civilization would become stodgy…”  Well, sure. Maybe? But do you have to be the poster boy?

It’s not that Bruce doesn’t say way interesting things!  About half of his cynicism riffs are clever or very informative, adding to the piles of contrarian tradeoffs that I keep on the cluttered desk of my mind, weighing how – not whether – to fight against doom and oligarchy, with all my might and to my last breath!

Example: He asserts that there’s no Russo-Ukrainian ‘war’ going on, in Belgrade? Well, maybe not with overt violence. But expatriate Russians are talking to Ukrainian refugees and learning they aren’t Nazis, or remotely interested in being Russian, and are absolutely determined to be Ukrainian. And Vlad has to fear those expatriates coming home. And talking.


What all of this reveals is the truly controlling factor that underlies all our surface struggles over ideology and perception of the world. That factor is personality. 


Want an example? WHY do almost all conservative intellectuals drift toward obsession with so-called “cyclical history?” 


Like the current fetish on the right for an insanely stoop'id book called The Fourth Turning? 


Likewise Nazi ice-moon cults and confederate Book of Revelation millennialism? Their one shared personality driver is a desperate wish for things to cycle back to changelessness, especially to counter the Left’s equally-compulsive, dreamy mania to “re-forge humanity!”


But neither of those personality-propelled obsessions hold a candle to the grumpy-dour geezer mantra: 


Fie on the future! It’s only an ever-dull muddling-through of more-of-the-same! Oh, and get off my lawn!”


Two last thoughts: 


First, Bruce opines that we’ve already lost to the oligarchies (so why bother?), but that’s okay, since oligarchies inevitably collapse through incompetence!


Well, that’s half right. Gaze across the last 6000 years, a dismal epoch when feudal rule by owner-lord families and their inheritance brats dominated every continent (those with agriculture). 


Across all those bleak centuries of relentlessly-enforced stupidity and misrule by inheritance brats, specific families and dynasties generally did ‘collapse’! 


What did NOT collapse was the overall pattern of male-run, harem-seeking rule called feudalism, which continued, unvaried, almost all that time. For that pattern to change called for both new technologies+education plus determination by savvy new generations. 


For 200 years, successively smarter innovators have struggled to overcome repeated putsch attempts by that oligarchic-feudal attractor state. And those savvy, determined, progressive-egalitarian-scientific innovators succeeded – sometimes just barely – at maintaining this enlightenment miracle.  Counting on their heirs and successors – we, the living – to do the same.


Moreover, what other Blue Americans accomplished in the 1770s and 1860s and 1960s, we can do yet again, with vastly improved tools and skills plus some of that ancestral grit! Moreover, we’ll do it with or without the help of addictive cynics.


Finally, Bruce is – or once was - a maven of techno-cyberpunk – and yet, what was the most-thrilling article of tool/hardware he chose to share with us this time? A mega Swiss Army knife?  


Okay, guy. Enjoy cynical retirement. Us young fools will keep fighting for the dream.


--- 


Posted in honor of another friend and colleague - older but perennially optimistic. My friend Vernor Vinge.


Planet DebianBits from Debian: apt install dpl-candidate: Sruthi Chandran

The Debian Project Developers will shortly vote for a new Debian Project Leader known as the DPL.

The DPL is the official representative of The Debian Project, tasked with managing the overall project, its vision, direction, and finances.

The DPL is also responsible for the selection of Delegates, defining areas of responsibility within the project, the coordination of Developers, and making decisions required for the project.

Our outgoing and present DPL Jonathan Carter served 4 terms, from 2020 through 2024. Jonathan shared his last Bits from the DPL post to Debian recently and his hopes for the future of Debian.

Recently, we sat down with the two current candidates for the DPL position, asking questions to find out who they really are, in a series of interviews about their platforms, visions for Debian, lives, and even their favorite text editors. The interviews were conducted by disaster2life (Yashraj Moghe) and made available from video and audio transcriptions:

  • Andreas Tille [Interview]
  • Sruthi Chandran [this document]

Voting for the position starts on April 6, 2024.

Editors' note: This is our official return to Debian interviews, readers should stay tuned for more upcoming interviews with Developers and other important figures in Debian as part of our "Meet your Debian Developer" series. We used the following tools and services: Turboscribe.ai for the transcription from the audio and video files, IRC: Oftc.net for communication, Jitsi meet for interviews, and Open Broadcaster Software (OBS) for editing and video. While we encountered many technical difficulties in the return to this process, we are still able and proud to present the transcripts of the interviews edited only in a few areas for readability.

2024 Debian Project Leader Candidate: Sruthi Chandran

Sruthi's interview

Hi Sruthi, so for the first question, who are you and could you tell us a little bit about yourself?

[Sruthi]:

I usually talk about me whenever I am talking about answering the question who am I, I usually say like I am a librarian turned free software enthusiast and a Debian Developer. So I had no technical background and I learned, I was introduced to free software through my husband and then I learned Debian packaging, and eventually I became a Debian Developer. So I always give my example to people who say I am not technically inclined, I don't have technical background so I can't contribute to free software.

So yeah, that's what I refer to myself.

For the next question, could you tell me what do you do in Debian, and could you mention your story up until here today?

[Sruthi]:

Okay, so let me start from my initial days in Debian. I started contributing to Debian, my first contribution was a Tibetan font. We went to a Tibetan place and they were saying they didn't have a font in Linux.

So that's how I started contributing. Then I moved on to Ruby packages, then I have some JavaScript and Go packages, all dependencies of GitLab. So I was involved with maintaining GitLab for some time, now I'm not very active there.

But yeah, so GitLab was the main package I was contributing to since I contributed since 2016 to maybe like 2020 or something. Later I have come [over to] packaging. Now I am part of some of the teams, delegated teams, like community team and outreach team, as well as the Debconf committee. And the biggest, I think, my activity in Debian, I would say is organizing Debconf 2023. So it was a great experience and yeah, so that's my story in Debian.

So what are three key terms about you and your candidacy?

[Sruthi]:

Okay, let me first think about it. For candidacy, I can start with diversity is one point I started expressing from the first time I contested for DPL. But to be honest, that's the main point I want to bring.

[Yashraj]:

So for diversity, if you could break down your thoughts on diversity and make them, [about] your three points including diversity.

[Sruthi]:

So in addition to, eventually when starting it was just diversity. Now I have like a bit more ideas, like community, like I want to be a leader for the Debian community. More than, I don't know, maybe people may not agree, but I would say I want to be a leader of Debian community rather than a Debian operating system.

I connect to community more and third point I would say.

The term of a DPL lasts for a year. What would you try to do during that term that you can't do from your position now?

[Sruthi]:

Okay. So I, like, I am very happy with the structure of Debian and how things work in Debian. Like you can do almost a lot of things, like almost all things without being a DPL.

Whatever change you want to bring about or whatever you want to do, you can do without being a DPL. Anyone, like every DD has the same rights. Only things I feel [the] DPL has hold on are mainly the budget or the funding part, which like, that's where they do the decision making part.

And then comes like, and one advantage of DPL driving some idea is that somehow people tend to listen to that with more, like, tend to give more attention to what DPL is saying rather than a normal DD. So I wanted to, like, I have answered some of the questions on how to, how I plan to do the financial budgeting part, how I want to handle, like, and the other thing is using the extra attention that I get as a DPL, I would like to obviously start with the diversity aspect in Debian. And yeah, like, I, what I want to do is not, like, be a leader and say, like, take Debian to one direction where I want to go, but I would rather take suggestions and inputs from the whole community and go about with that.

So yes, that's what I would say.

And taking a less serious question now, what is your preferred text editor?

[Sruthi]:

Vim.

[Yashraj]:

Vim, wholeheartedly team Vim?

[Sruthi]:

Yes.

[Yashraj]:

Great. Well, this was made in Vim, all the text for this.

[Sruthi]:

So, like, since you mentioned extra data, I'll give my example, like, it's just a fun note, when I started contributing to Debian, as I mentioned, I didn't have any knowledge about free software, like Debian, and I was not used to even using Linux. So, and I didn't have experience with these text editors. So, when I started contributing, I used to do the editing part using gedit.

So, that's how I started. Eventually, I moved to Nano, and once I reached Vim, I didn't move on.

Team Vim. Next question: what do you think is the importance of the Debian project in the world today? And where would you like to see it 10 years into the future?

[Sruthi]:

Okay. So, Debian, as we all know, is referred to as the universal operating system, and, like, it is said for a reason. We have hundreds and hundreds of operating systems, Linux distributions, based on Debian.

So, I believe Debian, like even now, Debian has good influence on the, at least on the Linux or Linux ecosystem. So, what we implement in Debian has, like, is going to affect quite a lot of, like, a very good percentage of people using Linux. So, yes.

So, I think Debian is one of the leading Linux distributions. And I think in 10 years, we should be able to reach a position, like, where we are not, like, even now, like, even these many years after having Linux, we face a lot of problems in newer and newer hardware coming up and installing on them is a big problem. Like, firmwares and all those things are getting more and more complicated.

Like, it should be getting simpler, but it's getting more and more complicated. So, I, one thing I would imagine, like, I don't know if we will ever reach there, but I would imagine that eventually with the Debian, we should be able to have some, at least a few of the hardware developers or hardware producers have Debian pre-installed and those kind of things. Like, not, like, become, I'm not saying it's all, it's also available right now.

What I'm saying is that it becomes prominent enough to be opted as, like, default distro.

What part of Debian has made you [smile]? And what part of the project has kept you going all through these years?

[Sruthi]:

Okay. So, I started to contribute in 2016, and I was part of the team doing GitLab packaging, and we did have a lot of training workshops and those kind of things within India. And I was, like, I had interacted with some of the Indian DDs, but I never got, like, even through chat or mail.

I didn't have a lot of interaction with the rest of the world, DDs. And the 2019 Debconf changed my whole perspective about Debian. Before that, I wasn't, like, even, I was interested in free software.

I was doing the technical stuff and all. But after DebConf, my whole idea has been, like, my focus changed to the community. Debian community is a very welcoming, very interesting community to be with.

And so, I believe that, like, 2019 DebConf was a [turning point] for me. From 2019, my focus has been on how to support, like, I moved to the community part of Debian from there. Then in 2020 I became part of the community team, and, like, I started being part of other teams.

So, these, I would say, the Debian community is the one, like, aspect of Debian that keeps me whole, keeps me held on to the Debian ecosystem as a whole.

Continuing to speak about Debian, what do you think, what is the first thing that comes to your mind when you think of Debian, like, the word, the community, what's the first thing?

[Sruthi]:

I think I may sound like a broken record or something.

[Yashraj]:

No, no.

[Sruthi]:

Again, I would say the Debian community, like, it's the people who makes Debian, that makes Debian special.

Like, apart from that, if I say, I would say I'm very, like, one part of Debian that makes me very happy is the, how the governing system of Debian works, the Debian constitution and all those things, like, it's a very unique thing for Debian. And, and it's like, when people say you can't work without a proper, like, establishment or even somebody deciding everything for you, it's difficult. When people say, like, we have been, Debian has been proving it for quite a long time now, that it's possible.

So, so that's one thing I believe, like, that's one unique point. And I am very proud about that.

What areas do you think Debian is failing in, and how can that be improved?

[Sruthi]:

So, I think where Debian is failing now is getting new people into Debian. Like, I don't remember, like, exactly the answer. But I remember hearing someone mention, like, the average age of a Debian Developer is, like, above 40 or 45 or something, like, exact age, I don't remember.

But it's like, Debian is getting old. Like, the people in Debian are getting old and we are not getting enough of new people into Debian. And that's very important to have people, like, new people coming up.

Otherwise, eventually, like, after a few years, nobody, like, we won't have enough people to take the project forward. So, yeah, I believe that is where we need to work on. We are doing some efforts, like, being part of GSOC or outreachy and having maybe other events, like, local events. Like, we used to have a lot of Debian packaging workshops in India. And those kind of, I think, in Brazil and all, they all have, like, local communities are doing. But we are not very successful in retaining the people who maybe come and try out things.

But we are not very good at retaining the people, like, retaining people who come. So, we need to work on those things. Right now, I don't have a solid answer for that.

But one thing, like, I was thinking about is, like, having a Debian specific outreach project, wherein the focus will be about the Debian, like, starting will be more on, like, usually what happens in GSOC and outreach is that people come, have the, do the contributions, and they go back. Like, they don't have that connection with the Debian, like, Debian community or Debian project. So, what I envision with these, the Debian outreach, the Debian specific outreach is that we have some part of the internship, like, even before starting the internship, we have some sessions and, like, with the people in Debian having, like, getting them introduced to the Debian philosophy and Debian community and Debian, how Debian works.

And those things, we focus on that. And then we move on to the technical internship parts. So, I believe this could do some good in having, like, when you have people you can connect to, you tend to stay back in a project mode.

When you feel something more than, like, right now, we have so many technical stuff to do, like, the choice for a college student is endless. So, if they want, if they stay back for something, like, maybe for Debian, I would say, we need to have them connected to the Debian project before we go into technical parts. Like, technical parts, like, there are other things as well, where they can go and do the technical part, but, like, they can come here, like, yeah.

So, that's what I was saying. Focused outreach projects is one thing. That's just one.

That's not enough. We need more of, like, more ideas to have more new people come up. And I'm very happy with, like, the DebConf thing. We tend to get more and more people from the places where we have a DebConf. Brazil is an example. After the Debconf, they have quite a good improvement on Debian contributors.

And I think in India also, it did give a good result. Like, we have more people contributing and staying back and those things. So, yeah.

So, these were the things I would say, like, we can do to improve.

For the final question: what field in free software generally do you think requires the most work to be put into it? And what do you think is Debian's part in that field?

[Sruthi]:

Okay. Like, right now, what comes to my mind is the free software licenses parts. Like, we have a lot of free software licenses, and there are non-free software licenses.

But currently, I feel free software is having a big problem in enforcing these licenses. Like, there are, there may be big corporations or like some people who take up the whole, the code and may not follow the whole, for example, the GPL licenses. Like, we don't know how much of those, how much of the free softwares are used in the bigger things.

Yeah, I agree. There are a lot of corporations who are afraid to touch free software. But there would be a good amount of free software, free work, that gets converted into proprietary things, violating the free software licenses and those things.

And we do not have the kind of like, we have SFLC, SFC, etc. But still, we do not have the ability to go behind and trace and implement the licenses. So, enforce those licenses and bring people who are violating the licenses forward and those kind of things is challenging because one thing is it takes time, like, and most importantly, money is required for the legal stuff.

And not always people who like people who make small software, or maybe big, but they may not have the kind of time and money to have these things enforced. So, that's a big challenge free software is facing, especially in our current scenario. I feel we are having those, like, we need to find ways how we can get it sorted.

I don't have an answer right now what to do. But this is a challenge I felt like and Debian's part in that. Yeah, as I said, I don't have a solution for that.

But the Debian, so DFSG and Debian sticking on to the free software licenses is a good support, I think.

So, that was the final question. Do you have anything else you want to mention for anyone watching this?

[Sruthi]:

Not really, like, I am happy, like, I think I was able to answer the questions. And yeah, I would say to whoever is watching: I won't say, like, I'm the best DPL candidate, you can't have a better one or something.

I stand for a reason. And if you believe in that, or the Debian community and Debian diversity, and those kinds of things, if you believe it, I hope you would be interested, like, you would want to vote for me. That's it.

Like, I'm not, I'll make it very clear. I'm not doing a technical leadership part here. So, those, I can't convince people who want technical leadership to vote for me.

But I would say people who connect with me, I hope they vote for me.

Planet DebianBits from Debian: apt install dpl-candidate: Andreas Tille

The Debian Project Developers will shortly vote for a new Debian Project Leader known as the DPL.

The Project Leader is the official representative of The Debian Project tasked with managing the overall project, its vision, direction, and finances.

The DPL is also responsible for the selection of Delegates, defining areas of responsibility within the project, the coordination of Developers, and making decisions required for the project.

Our outgoing and present DPL Jonathan Carter served 4 terms, from 2020 through 2024. Jonathan shared his last Bits from the DPL post to Debian recently and his hopes for the future of Debian.

Recently, we sat with the two present candidates for the DPL position asking questions to find out who they really are in a series of interviews about their platforms, visions for Debian, lives, and even their favorite text editors. The interviews were conducted by disaster2life (Yashraj Moghe) and made available from video and audio transcriptions:

  • Andreas Tille [this document]
  • Sruthi Chandran [Interview]

Voting for the position starts on April 6, 2024.

Editors' note: This is our official return to Debian interviews; readers should stay tuned for more upcoming interviews with Developers and other important figures in Debian as part of our "Meet your Debian Developer" series. We used the following tools and services: Turboscribe.ai for transcription of the audio and video files, IRC (oftc.net) for communication, Jitsi Meet for the interviews, and Open Broadcaster Software (OBS) for editing and video. While we encountered many technical difficulties in the return to this process, we are still able and proud to present the transcripts of the interviews, edited only in a few areas for readability.

2024 Debian Project Leader Candidate: Andreas Tille

Andreas' Interview

Who are you? Tell us a little about yourself.

[Andreas]:

How am I? Well, I'm, as I wrote in my platform, I'm a proud grandfather doing a lot of free software stuff, doing a lot of sports, have some goals in mind which I like to do and hopefully for the best of Debian.

And how are you today?

[Andreas]:

How I'm doing today? Well, actually I have some headaches but it's fine for the interview.

So, usually I feel very good. Spring was coming here and today it's raining and I plan to do a bicycle tour tomorrow and hope that I do not get really sick but yeah, for the interview it's fine.

What do you do in Debian? Could you mention your story here?

[Andreas]:

Yeah, well, I started with Debian kind of an accident because I wanted to have some package salvaged which is called WordNet. It's a monolingual dictionary and I did not really plan to do more than maybe 10 packages or so. I had some kind of training with xTeddy which is totally unimportant, a cute teddy you can put on your desktop.

So, and then well, more or less I thought how can I make Debian attractive for my employer which is a medical institute and so on. It could make sense to package bioinformatics and medicine software and it somehow evolved in a direction I did neither expect it nor wanted to do, that I'm currently the most busy uploader in Debian, created several teams around it.

Debian Med is very well known as my work. I created the Blends team to create teams and techniques around what we are doing, which was Debian GIS, Debian Edu, Debian Science and so on, and I also created the packaging team for R, for the statistics package R, which is technically based and not topic based. All these blends are covering a certain topic and R is just needed by lots of these blends.

So, yeah, and to cope with all this I have written a script which is routing an update to manage all these uploads more or less automatically. So, I think I had one day where I uploaded 21 new packages but it's just automatically generated, right? So, it's on one day more than I ever planned to do.

What is the first thing you think of when you think of Debian?

Editors' note: The question was misunderstood as the “worst thing you think of when you think of Debian”

[Andreas]:

The worst thing I think about Debian, it's complicated. I think today on Debian board I was asked about the technical progress I want to make and in my opinion we need to standardize things inside Debian. For instance, bringing all the packages to salsa, follow some common standards, some common workflow which is extremely helpful.

As I said, if I'm that productive with my own packages we can adopt this in general, at least in most cases I think. I made a lot of good experience by the support of well-formed teams. Well-formed teams are those teams where people support each other, help each other.

For instance, how to say, I'm a physicist by profession so I'm not an IT expert. I can tell apart what works and what not but I'm not an expert in those packages. I do and the amount of packages is so high that I do not even understand all the techniques they are covering like Go, Rust and something like this.

And I also don't speak Java and I had a problem once in the middle of the night and I've sent the email to the list and was a Java problem and I woke up in the morning and it was solved. This is what I call a team. I don't call a team some common repository that is used by random people for different packages also but it's working together, don't hesitate to solve other people's problems and permit people to get active.

This is what I call a team and this is also something I observed in, it's hard to give a percentage, in a lot of other teams but we have other people who do not even understand the concept of the team. Why is working together make some advantage and this is also a tough thing. I [would] like to tackle in my term if I get elected to form solid teams using the common workflow. This is one thing.

The other thing is that we have a lot of good people in our infrastructure, like FTP masters, DSA and so on. I have the feeling they have a lot of work and are working more or less at their limits, and I'd like to talk to them [to ask] what kind of change we could make to move those limits or move their personal health to the better side.

The DPL term lasts for a year. What would you do during it that you couldn't do now?

[Andreas]:

Yeah, well this is basically what I said are my main issues. I need to admit I have no really clear imagination what kind of tasks will come to me as a DPL because all these financial issues and law issues possible and issues [that] people who are not really friendly to Debian might create. I'm afraid these things might occupy a lot of time and I can't say much about this because I simply don't know.

What are three key terms about you and your candidacy?

[Andreas]:

As I said, I like to work on standards, I'd like to make Debian try [to get it right so] that people don't get overworked, and the third key point is to be inviting to newcomers, to everybody who wants to come. Yeah, I also mentioned in my platform this diversity issue, geographical and from the gender point of view. These may be the three points I consider most important.

Preferred text editor?

[Andreas]:

Yeah, my preferred one? Ah, well, I have no preferred text editor. I'm using the Midnight Commander very frequently, which has an internal editor that is convenient for small texts. For other things, I usually use vi, but I also use Emacs from time to time. So, no, I have no preferred text editor. Whatever works nicely for me.

What is the importance of the community in the Debian Project? How would you like to see it evolving over the next few years?

[Andreas]:

Yeah, I think the community is extremely important. So, I was on a lot of DebConfs. I think it's not really 20 but 17 or 18 DebConfs, and I really enjoyed these events every year because I met so many friends and met so many interesting people that it's really enriching my life, and also those who I never met in person but have read interesting things from. Yeah, the Debian community really makes a part of my life.

And how do you think it should evolve specifically?

[Andreas]:

Yeah, for instance, last year in Kochi, it became even clearer to me that the geographical diversity is a really strong point. Just discussing with a woman from India who is afraid about not being able to come next year to Busan because there's a problem with Shanghai and so on. I'm not really sure how we can solve this, but I think this is a problem at least I wish to tackle, and yeah, this is an interesting point, the geographical diversity. And I'm running the so-called mentoring of the month.

This is a small project to attract newcomers for the Debian Med team which has the focus on medical packages and I learned that we had always men applying for this and so I said, okay, I dropped the constraint of medical packages.

Any topic is fine, I teach you packaging but it must be someone who does not consider himself a man. I got only two applicants, no, actually, I got one applicant and one response which was kind of strange if I'm hunting for women or so.

I did not understand, but I got one response and, interestingly, it was for me from one of the least expected countries. It was from Iran, and I met a very nice woman, very open, very skilled and gifted, who did a good job, and we have [not] even lost contact today. Maybe we need to more actively approach groups that are underrepresented. I don't know if what I did was a good means, but at least I tried, and so I try to think about these kinds of things.

What part of Debian has made you smile? What part of the project has kept you going all through the years?

[Andreas]:

Well, the card game which is called Mao on the DebConf made me smile all the time. I admit I joined only two or three times even if I really love this kind of games but I was occupied by other stuff so this made me really smile. I also think the first online DebConf in 2020 made me smile because we had this kind of short video sequences and I tried to make a funny video sequence about every DebConf I attended before. This is really funny moments but yeah, it's not only smile but yeah.

One thing maybe it's totally unconnected to Debian but I learned personally something in Debian that we have a do-ocracy and you can do things which you think that are right if not going in between someone else, right? So respect everybody else but otherwise you can do so.

And in 2020 I also started to take trees which are growing wildly in my garden and plant them into the woods, because in our woods a lot of trees are dying, and so I just do something because I can. I have the resource to do something, take the small tree and bring it into the woods, because it does not harm anybody. I asked the forester if it is okay - yes, yes, okay. So everybody can do so, but I think the idea to do something like this came also because of the free software idea. You have the resources, you have the computer, you can do something and you do something productive, right? And when thinking about this, I think it was also my Debian work.

Meanwhile I have planted more than 3,000 trees so it's not a small number but yeah, I enjoy this.

What part of Debian would you have some criticisms for?

[Andreas]:

Yeah, it's basically the same as I said before. We need more standards to work together. I do not want to repeat this but this is what I think, yeah.

What field in Free Software generally do you think requires the most work to be put into it? What do you think is Debian's part in the field?

[Andreas]:

It's also in general, the thing is the fact that I'm maintaining packages which are usually, as modern software is, maintained in Git, which is fine, but we have some software which is at SourceForge, we have software lying around somewhere, we have software where Debian somehow became Upstream because nobody is caring anymore, and free software is very different in several things, ways, and well, I in principle like freedom of choice, which is the basis of all our work.

Sometimes this freedom goes in the way of productivity because everybody is free to re-implement. You asked me for the most favorite editor. In principle one really good working editor would be great to have and would work and we have maybe 500 in Debian or so, I don't know.

I could imagine if people would concentrate and say five instead of 500 editors, we could get more productive, right? But I know this will not happen, right? But I think this is one thing which goes in the way of making things smooth and productive and we could have more manpower to replace one person who's [having] children, doing some other stuff and can't continue working on something and maybe this is a problem I will not solve, definitely not, but which I see.

What do you think is Debian's part in the field?

[Andreas]:

Yeah, well, okay, we can bring together different Upstreams, so we are building some packages and have some general overview about similar things and can say, oh, you are doing this and some other person is doing more or less the same, do you want to join each other or so, but this is kind of a channel we have to our Upstreams which is probably not very successful.

It starts with code copies of some libraries which are changed a little bit, which is fine license-wise, but not so helpful for different things and so I've tried to convince those Upstreams to forward their patches to the original one, but for this and I think we could do some kind of, yeah, [find] someone who brings Upstream together or to make them stop their forking stuff, but it costs a lot of energy and we probably don't have this and it's also not realistic that we can really help with this problem.

Do you have any questions for me?

[Andreas]:

I enjoyed the interview, I enjoyed seeing you again after half a year or so. Yeah, actually I've seen you in the eating room or cheese and wine party or so, I do not remember we had to really talk together, but yeah, people around, yeah, for sure. Yeah.

Cryptogram Security Vulnerability of HTML Emails

This is a newly discovered email vulnerability:

The email your manager received and forwarded to you was something completely innocent, such as a potential customer asking a few questions. All that email was supposed to achieve was being forwarded to you. However, the moment the email appeared in your inbox, it changed. The innocent pretext disappeared and the real phishing email became visible. A phishing email you had to trust because you knew the sender and they even confirmed that they had forwarded it to you.

This attack is possible because most email clients allow CSS to be used to style HTML emails. When an email is forwarded, the position of the original email in the DOM usually changes, allowing for CSS rules to be selectively applied only when an email has been forwarded.

An attacker can use this to include elements in the email that appear or disappear depending on the context in which the email is viewed. Because they are usually invisible, only appear in certain circumstances, and can be used for all sorts of mischief, I’ll refer to these elements as kobold letters, after the elusive sprites of mythology.
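As a minimal sketch of the mechanism being described (the class name, the selector, and the assumption that forwarding wraps the original mail in a blockquote are all illustrative; real mail clients differ in which CSS they honor and how they nest forwarded content):

<style>
  /* Hidden in the original mail... */
  .kobold { display: none; }
  /* ...but revealed once the mail is nested inside a forwarded message. */
  blockquote .kobold { display: block; }
</style>
<div class="kobold">
  Urgent: please re-confirm your credentials at https://example.com/login
</div>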

I can certainly imagine the possibilities.

Cryptogram Friday Squid Blogging: The Awfulness of Squid Fishing Boats

It’s a pretty awful story.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Planet DebianEmanuele Rocca: PGP keys on Yubikey, with a side of Mutt

Here are my notes about copying PGP keys to external hardware devices such as Yubikeys. Let me begin by saying that the gpg tools are pretty bad at this.

MAKE A COUPLE OF BACKUPS OF ~/.gnupg/ TO DIFFERENT ENCRYPTED USB STICKS BEFORE YOU START. GPG WILL MESS UP YOUR KEYS. SERIOUSLY.

For example, would you believe me if I said that saving changes results in the removal of your private key? Well, check this out.
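One way to take such a backup before you touch anything (a sketch; the mountpoint of your encrypted USB stick is an assumption, adapt it to your setup):

# keep the tarball readable only by you; adjust the mountpoint as needed
umask 077
tar czf /media/encrypted-usb/gnupg-backup-$(date +%F).tar.gz -C ~ .gnupg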

Now that you have multiple safe, offline backups of your keys, here are my notes.

apt install yubikey-manager scdaemon

Plug the Yubikey in, see if it’s recognized properly:

ykman list
gpg --card-status

Change the default PIN (123456) and Admin PIN (12345678):

gpg --card-edit
gpg/card> admin
gpg/card> passwd

Look at the openpgp information and change the maximum number of retries, if you like. I have seen this failing a couple of times, unplugging the Yubikey and putting it back in worked.

ykman openpgp info
ykman openpgp access set-retries 7 7 7

Copy your keys. MAKE A BACKUP OF ~/.gnupg/ BEFORE YOU DO THIS.

gpg --edit-key $KEY_ID
gpg> keytocard # follow the prompts to copy the first key

Now choose the next key and copy that one too. Repeat till all subkeys are copied.

gpg> key 1
gpg> keytocard

Typing gpg --card-status you should be able to see all your keys on the Yubikey now.

Using the key on another machine

How do you use your PGP keys on the Yubikey on other systems?

Go to another system; if it already has a ~/.gnupg directory, move it somewhere else.

apt install scdaemon

Import your public key:

gpg -k
gpg --keyserver pgp.mit.edu --recv-keys $KEY_ID

Check the fingerprint and if it is indeed your key say you trust it:

gpg --edit-key $KEY_ID
> trust
> 5
> y
> save

Now try gpg --card-status and gpg --list-secret-keys; you should be able to see your keys. Try signing something; it should work.

gpg --output /tmp/x.out --sign /etc/motd
gpg --verify /tmp/x.out

Using the Yubikey with Mutt

If you’re using mutt with IMAP, there is a very simple trick to safely store your password on disk. Create an encrypted file with your IMAP password:

echo SUPERSECRET | gpg --encrypt > ~/.mutt_password.gpg
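Depending on your gpg configuration you may be prompted to choose a recipient; a variant that names the key explicitly (reusing the same $KEY_ID placeholder as above):

echo SUPERSECRET | gpg --encrypt --recipient $KEY_ID > ~/.mutt_password.gpg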

Add the following to ~/.muttrc:

set imap_pass=`gpg --decrypt ~/.mutt_password.gpg`

With the above, mutt now prompts you to insert the Yubikey and type your PIN in order to connect to the IMAP server.

Worse Than FailureError'd: Past Imperfect

A twitchy anonymous reporter pointed out that our form validation code is flaky. He's not wrong. But at least it can report time without needing emoticons! :-3

That same anon sent us the following, explaining "Folks at Twitch are very brave. So brave, they wrote their own time math."

twitch

 

Secret Agent Sjoerd reports "There was no train two minutes ago so I presume I should have caught it in an alternate universe." Denver is a key nexus in the multiverse, according to Myka.

train

 

Chilly Dallin H. is a tiny bit heated about ambiguous abbreviations - or at least about the software that interprets them with inadequate context. "With a range that short, I'd hate to take one of the older generation planes." At least it might be visible!

dallin

 

Phoney François P. "I was running a return of a phone through a big company website that shall not be named. Thankfully, they processed my order on April 1st 2024, or 2024年4月1日 in Japanese. There is a slight delay though as it shows 14月2024, which should be the 14th month of 2024. Dates are hard. Date formatting is complicated. For international date formatting, please come back later."

frank

 

At some time in the past, the original Adam R. encountered a time slip. We're just getting to see it even now. "GitHub must be operating on a different calendar than the Gregorian. Comments made just 4 weeks ago [today is 2023-02-07] are being displayed as made last year."

adam

 

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsAunty Dotty Goes to the Marathon

Author: Shannon O’Connor We all thought it was funny that Aunty Dotty got excited for events that nobody else cared about, like the 250th anniversary of the Boston Marathon. She had been in hibernation for years, nobody knew exactly how long. She would go to events, because she hadn’t seen a lot of the world, […]

The post Aunty Dotty Goes to the Marathon appeared first on 365tomorrows.

Planet DebianReproducible Builds (diffoscope): diffoscope 263 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 263. This version includes the following changes:

[ Chris Lamb ]
* Add support for the zipdetails(1) tool included in the Perl distribution.
  Thanks to Larry Doolittle et al. for the pointer to this tool.
* Don't use parenthesis within test "skipping…" messages; PyTest adds its own
  parenthesis, so we were ending up with double nested parens.
* Fix the .epub tests after supporting zipdetails(1).
* Update copyright years and debian/tests/control.

[ FC (Fay) Stegerman ]
* Fix MozillaZipContainer's monkeypatch after Python's zipfile module changed
  to detect potentially insecure overlapping entries within .zip files.
  (Closes: reproducible-builds/diffoscope#362)

You find out more by visiting the project homepage.

,

Planet DebianJohn Goerzen: The xz Issue Isn’t About Open Source

You’ve probably heard of the recent backdoor in xz. There have been a lot of takes on this, most of them boiling down to some version of:

The problem here is with Open Source Software.

I want to say not only is that view so myopic that it pushes towards the incorrect, but also it blinds us to more serious problems.

Now, I don’t pretend that there are no problems in the FLOSS community. There have been various pieces written about what this issue says about the FLOSS community (usually without actionable solutions). I’m not here to say those pieces are wrong. Just that there’s a bigger picture.

So with this xz issue, it may well be a state actor (aka “spy”) that added this malicious code to xz. We also know that proprietary software and systems can be vulnerable. For instance, a Twitter whistleblower revealed that Twitter employed Indian and Chinese spies, some knowingly. A recent report pointed to security lapses at Microsoft, including “preventable” lapses in security. According to the Wikipedia article on the SolarWinds attack, it was facilitated by various kinds of carelessness, including passwords being posted to Github and weak default passwords. They directly distributed malware-infested updates, encouraged customers to disable anti-malware tools when installing SolarWinds products, and so forth.

It would be naive indeed to assume that there aren’t black hat actors among the legions of programmers employed by companies that outsource work to low-cost countries — some of which have challenges with bribery.

So, given all this, we can’t really say the problem is Open Source. Maybe it’s more broad:

The problem here is with software.

Maybe that inches us closer, but is it really accurate? We have all heard of Boeing’s recent issues, which seem to have some element of root causes in corporate carelessness, cost-cutting, and outsourcing. That sounds rather similar to the SolarWinds issue, doesn’t it?

Well then, the problem is capitalism.

Maybe it has a role to play, but isn’t it a little too easy to just say “capitalism” and throw up our hands helplessly, just as some do with FLOSS at the start of this article? After all, capitalism also brought us plenty of products of very high quality over the years. We can point to successful, non-careless products, and I own some of them (for instance, my Framework laptop). We clearly haven’t reached the root cause yet.

And besides, what would you replace it with? All the major alternatives that have been tried have even stronger downsides. Maybe you replace it with “better regulated capitalism”, but that’s still capitalism.

Then the problem must be with consumers.

As this argument would go, it’s consumers’ buying patterns that drive problems. Buyers — individual and corporate — seek flashy features and low cost, prizing those over quality and security.

No doubt this is true in a lot of cases. Maybe greed or status-conscious societies foster it: Temu promises people to “shop like a billionaire”, and unloads on them cheap junk, which “all but guarantees that shipments from Temu containing products made with forced labor are entering the United States on a regular basis“.

But consumers are also people, and some fraction of them are quite capable of writing fantastic software, and in fact, do so.

So what we need is some way to seize control. Some way to do what is right, despite the pressures of consumers or corporations.

Ah yes, dear reader, you have been slogging through all these paragraphs and now realize I have been leading you to this:

Then the solution is Open Source.

Indeed. Faults and all, FLOSS is the most successful movement I know where people are bringing us back to the commons: working and volunteering for the common good, unleashing a thousand creative variants on a theme, iterating in every direction imaginable. We have FLOSS being vital parts of everything from $30 Raspberry Pis to space missions. It is bringing education and communication to impoverished parts of the world. It lets everyone write and release software. And, unlike the SolarWinds and Twitter issues, it exposes both clever solutions and security flaws to the world.

If an authentication process in Windows got slower, we would all shrug and mutter “Microsoft” under our breath. Because, really, what else can we do? We have no agency with Windows.

If an authentication process in Linux gets slower, anybody that’s interested — anybody at all — can dive in and ask “why” and trace it down to root causes.

Some look at this and say “FLOSS is responsible for this mess.” I look at it and say, “this would be so much worse if it wasn’t FLOSS” — and experience backs me up on this.

FLOSS doesn’t prevent security issues itself.

What it does do is give capabilities to us all. The ability to investigate. Ability to fix. Yes, even the ability to break — and its cousin, the power to learn.

And, most rewarding, the ability to contribute.

Cory DoctorowI’m coming to Los Angeles, Boston, Providence, Chicago, Turin, Marin, Winnipeg, Calgary, Vancouver, and Tartu, Estonia!

I’m about to hit the road again for a series of back-to-back public appearances as I travel with my new, nationally bestselling novel The Bezzle. I’ll be in Los Angeles, Boston, Providence, Chicago, Turin, Marin, Winnipeg, Calgary, Vancouver, and Tartu, Estonia!

I hope to see you! Bring friends!

Cryptogram Friday Squid Blogging: SqUID Bots

They’re AI warehouse robots.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Maybe the Phone System Surveillance Vulnerabilities Will Be Fixed

It seems that the FCC might be fixing the vulnerabilities in SS7 and the Diameter protocol:

On March 27 the commission asked telecommunications providers to weigh in and detail what they are doing to prevent SS7 and Diameter vulnerabilities from being misused to track consumers’ locations.

The FCC has also asked carriers to detail any exploits of the protocols since 2018. The regulator wants to know the date(s) of the incident(s), what happened, which vulnerabilities were exploited and with which techniques, where the location tracking occurred, and ­ if known ­ the attacker’s identity.

This time frame is significant because in 2018, the Communications Security, Reliability, and Interoperability Council (CSRIC), a federal advisory committee to the FCC, issued several security best practices to prevent network intrusions and unauthorized location tracking.

I have written about this over the past decade.

Planet DebianLukas Märdian: Netplan v1.0 paves the way to stable, declarative network management

New “netplan status --diff” subcommand, finding differences between configuration and system state

As the maintainer and lead developer for Netplan, I’m proud to announce the general availability of Netplan v1.0 after more than 7 years of development efforts. Over the years, we’ve so far had about 80 individual contributors from around the globe. This includes many contributions from our Netplan core-team at Canonical, but also from other big corporations such as Microsoft or Deutsche Telekom. Those contributions, along with the many we receive from our community of individual contributors, solidify Netplan as a healthy and trusted open source project. In an effort to make Netplan even more dependable, we started shipping upstream patch releases, such as 0.106.1 and 0.107.1, which make it easier to integrate fixes into our users’ custom workflows.

With the release of version 1.0 we primarily focused on stability. However, being a major version upgrade, it allowed us to drop some long-standing legacy code from the libnetplan1 library. Removing this technical debt increases the maintainability of Netplan’s codebase going forward. The upcoming Ubuntu 24.04 LTS and Debian 13 releases will ship Netplan v1.0 to millions of users worldwide.

Highlights of version 1.0

In addition to stability and maintainability improvements, it’s worth looking at some of the new features that were included in the latest release:

  • Simultaneous WPA2 & WPA3 support.
  • Introduction of a stable libnetplan1 API.
  • Mellanox VF-LAG support for high performance SR-IOV networking.
  • New hairpin and port-mac-learning settings, useful for VXLAN tunnels with FRRouting.
  • New netplan status --diff subcommand, finding differences between configuration and system state.
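For readers who have not used Netplan before, here is a flavor of the declarative YAML it consumes (a minimal sketch; the file name and interface name are illustrative assumptions):

# /etc/netplan/01-example.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: true

After editing such a file, netplan apply activates the configuration, and the new netplan status --diff subcommand shows where the running system deviates from what is declared.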

Besides those highlights of the v1.0 release, I’d also like to shed some light on new functionality that was integrated within the past two years for those upgrading from the previous Ubuntu 22.04 LTS which used Netplan v0.104:

  • We added support for the management of new network interface types, such as veth, dummy, VXLAN, VRF or InfiniBand (IPoIB). 
  • Wireless functionality was improved by integrating Netplan with NetworkManager on desktop systems, adding support for WPA3 and adding the notion of a regulatory-domain, to choose proper frequencies for specific regions. 
  • To improve maintainability, we moved to Meson as Netplan’s buildsystem, added upstream CI coverage for multiple Linux distributions and integrations (such as Debian testing, NetworkManager, snapd or cloud-init), checks for ABI compatibility, and automatic memory leak detection. 
  • We increased consistency between the supported backend renderers (systemd-networkd and NetworkManager), by matching physical network interfaces on permanent MAC address, when the match.macaddress setting is being used, and added new hardware offloading functionality for high performance networking, such as Single-Root IO Virtualisation virtual function link-aggregation (SR-IOV VF-LAG).

The much-improved Netplan documentation, now hosted on “Read the Docs”, and new command-line subcommands, such as netplan status, make Netplan a well-vetted tool for declarative network management and troubleshooting.

Integrations

Those changes pave the way to integrate Netplan in 3rd party projects, such as system installers or cloud deployment methods. By shipping the new python3-netplan Python bindings to libnetplan, it is now easier than ever to access Netplan functionality and network validation from other projects. We are proud that the Debian Cloud Team chose Netplan to be the default network management tool in their official cloud-images for Debian Bookworm and beyond. Ubuntu’s NetworkManager package now uses Netplan as its default backend on Ubuntu 23.10 Desktop systems and beyond. Further integrations happened with cloud-init and the Calamares installer.

Please check out the Netplan version 1.0 release on GitHub! If you want to learn more, follow our activities on Netplan.io, GitHub, Launchpad, IRC or our Netplan Developer Diaries blog on discourse.

Krebs on SecurityFake Lawsuit Threat Exposes Privnote Phishing Sites

A cybercrook who has been setting up websites that mimic the self-destructing message service privnote.com accidentally exposed the breadth of their operations recently when they threatened to sue a software company. The disclosure revealed a profitable network of phishing sites that behave and look like the real Privnote, except that any messages containing cryptocurrency addresses will be automatically altered to include a different payment address controlled by the scammers.

The real Privnote, at privnote.com.

Launched in 2008, privnote.com employs technology that encrypts each message so that even Privnote itself cannot read its contents. And it doesn’t send or receive messages. Creating a message merely generates a link. When that link is clicked or visited, the service warns that the message will be gone forever after it is read.

Privnote’s ease-of-use and popularity among cryptocurrency enthusiasts has made it a perennial target of phishers, who erect Privnote clones that function more or less as advertised but also quietly inject their own cryptocurrency payment addresses when a note is created that contains crypto wallets.

Last month, a new user on GitHub named fory66399 lodged a complaint on the “issues” page for MetaMask, a software cryptocurrency wallet used to interact with the Ethereum blockchain. Fory66399 insisted that their website — privnote[.]co — was being wrongly flagged by MetaMask’s “eth-phishing-detect” list as malicious.

“We filed a lawsuit with a lawyer for dishonestly adding a site to the block list, damaging reputation, as well as ignoring the moderation department and ignoring answers!” fory66399 threatened. “Provide evidence or I will demand compensation!”

MetaMask’s lead product manager Taylor Monahan replied by posting several screenshots of privnote[.]co showing the site did indeed swap out any cryptocurrency addresses.

After being told where they could send a copy of their lawsuit, Fory66399 appeared to become flustered, and proceeded to mention a number of other interesting domain names:

You sent me screenshots from some other site! It’s red!!!!
The tornote.io website has a different color altogether
The privatenote,io website also has a different color! What’s wrong?????

A search at DomainTools.com for privatenote[.]io shows it has been registered to two names over as many years, including Andrey Sokol from Moscow and Alexandr Ermakov from Kiev. There is no indication these are the real names of the phishers, but the names are useful in pointing to other sites targeting Privnote since 2020.

DomainTools says other domains registered to Alexandr Ermakov include pirvnota[.]com, privatemessage[.]net, privatenote[.]io, and tornote[.]io.

A screenshot of the phishing domain privatemessage dot net.

The registration records for pirvnota[.]com at one point were updated from Andrey Sokol to “BPW” as the registrant organization, and “Tambov district” in the registrant state/province field. Searching DomainTools for domains that include both of these terms reveals pirwnote[.]com.

Other Privnote phishing domains that also phoned home to the same Internet address as pirwnote[.]com include privnode[.]com, privnate[.]com, and prevnóte[.]com. Pirwnote[.]com is currently selling security cameras made by the Chinese manufacturer Hikvision, via an Internet address based in Hong Kong.

It appears someone has gone to great lengths to make tornote[.]io seem like a legitimate website. For example, this account at Medium has authored more than a dozen blog posts in the past year singing the praises of Tornote as a secure, self-destructing messaging service. However, testing shows tornote[.]io will also replace any cryptocurrency addresses in messages with their own payment address.

These malicious note sites attract visitors by gaming search engine results to make the phishing domains appear prominently in search results for “privnote.” A search in Google for “privnote” currently returns tornote[.]io as the fifth result. Like other phishing sites tied to this network, Tornote will use the same cryptocurrency addresses for roughly 5 days, and then rotate in new payment addresses.

Tornote changed the cryptocurrency address entered into a test note to this address controlled by the phishers.

Throughout 2023, Tornote was hosted with the Russian provider DDoS-Guard, at the Internet address 186.2.163[.]216. A review of the passive DNS records tied to this address shows that apart from subdomains dedicated to tornote[.]io, the main other domain at this address was hkleaks[.]ml.

In August 2019, a slew of websites and social media channels dubbed “HKLEAKS” began doxing the identities and personal information of pro-democracy activists in Hong Kong. According to a report (PDF) from Citizen Lab, hkleaks[.]ml was the second domain that appeared as the perpetrators began to expand the list of those doxed.

HKleaks, as indexed by The Wayback Machine.

DomainTools shows there are more than 1,000 other domains whose registration records include the organization name “BPW” and “Tambov District” as the location. Virtually all of those domains were registered through one of two registrars — Hong Kong-based Nicenic and Singapore-based WebCC — and almost all appear to be phishing or pill-spam related.

Among those is rustraitor[.]info, a website erected after Russia invaded Ukraine in early 2022 that doxed Russians perceived to have helped the Ukrainian cause.

An archive.org copy of Rustraitor.

In keeping with the overall theme, these phishing domains appear focused on stealing usernames and passwords to some of the cybercrime underground’s busiest shops, including Brian’s Club. What do all the phished sites have in common? They all accept payment via virtual currencies.

It appears MetaMask’s Monahan made the correct decision in forcing these phishers to tip their hand: Among the websites at that DDoS-Guard address are multiple MetaMask phishing domains, including metarrnask[.]com, meternask[.]com, and rnetamask[.]com.

How profitable are these private note phishing sites? Reviewing the four malicious cryptocurrency payment addresses that the attackers swapped into notes passed through privnote[.]co (as pictured in Monahan’s screenshot above) shows that between March 15 and March 19, 2024, those addresses raked in and transferred out nearly $18,000 in cryptocurrencies. And that’s just one of their phishing websites.

Worse Than FailureCodeSOD: A Valid Applicant

In the late 90s into the early 2000s, there was an entire industry spun up to get businesses and governments off their mainframe systems from the 60s and onto something modern. "Modern", in that era, usually meant Java. I attended vendor presentations, for example, that promised that you could take your mainframe, slap a SOAP webservice on it, and then gradually migrate modules off the mainframe and into Java Enterprise Edition. In the intervening years, I have seen exactly 0 successful migrations like this; usually they just end up trying that for a few years and then biting the bullet and doing a ground-up rewrite.

That's the situation ML was in: a state government wanted to replace their COBOL mainframe monster with a "maintainable" J2EE/WebSphere-based application. Gone would be the 3270 dumb terminals, and in their place would be desktop PCs running web browsers.

ML's team did the initial design work, which the state was very happy with. But the actual development work gave the state sticker shock, so they opted to take the design from ML's company and ship it out to a lowest-bidder offshore vendor to actually do the development work.

This, by the way, was another popular mindset in the early 00s: you could design your software as a set of UML diagrams and then hand them off to the cheapest coder you could hire, and voila, you'd have working software (and someday soon, you'd be able to just generate the code and wouldn't need the programmer in the first place! ANY DAY NOW).

Now, this code is old, and predates generics in Java, so the use of ArrayLists isn't the WTF. But the programmer's approach to polymorphism is.

public class Applicant extends Person {
	// ... [snip] ...
}

.
.
.


public class ApplicantValidator {
	
	
	public void validateApplicantList(List listOfApplicants) throws ValidationException {

		// ... [snip] ...
		
		// Create a List of Person to validate
		List listOfPersons = new ArrayList();
		Iterator i = listOfApplicants.iterator(); 
		while (i.hasNext()) {
			Applicant a = (Applicant) i.next();
			Person p = (Person) a;
			listOfPersons.add(p);
		}
		
		PersonValidator.getInstance().validatePersonList(listOfPersons);

		// ... [snip] ...

	}

	// ... [snip] ...
}

Here you see that Applicant is a subclass of Person. We also see an ApplicantValidator class, which needs to verify that the applicant objects are valid, and to do that, they need to be treated as valid Person objects.

To do this, we iterate across our list of applicants (which, it's worth noting, are being treated as Objects since we don't have generics), cast each to Applicant, then cast the Applicant variable to Person, then add it to a list of Persons, which again, absent generics, is just a list of Objects. Then we pass that list of Persons into validatePersonList.

All of this is unnecessary, and demonstrates a lack of understanding about the language in use. This block could be written more clearly as: PersonValidator.getInstance().validatePersonList(listOfApplicants);

This gives us the same result with significantly less effort.
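For what it's worth, post-generics Java makes the whole dance unnecessary by design: a bounded wildcard in the validator's signature accepts any list of Person subtypes directly. A sketch, not the project's actual code (the validate helper is hypothetical):

public class PersonValidator {
	// ... [snip] ...

	// Accepts List<Person>, List<Applicant>, or a list of any other
	// Person subtype, with no copying or casting at the call site.
	public void validatePersonList(List<? extends Person> people) throws ValidationException {
		for (Person p : people) {
			validate(p); // hypothetical per-person check
		}
	}
}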

While much of the code coming from the offshore team was actually solid, it contained so much nonsense like this, so many misunderstandings of the design, and so many bugs, that the state kept coming back to ML's company to address the issues. Between paying the offshore team to do the work, and then paying ML's team to fix the work, the entire project cost much more than if they had hired ML's team in the first place.

But the developers still billed at a lower rate than ML's team, which meant the managers responsible still got to brag about cost savings, even as they overran the project budget. "Imagine how much it would have cost if we hadn't gone with the cheaper labor?"

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsWeb World

Author: Rosie Oliver Grey-ghosted darkness. Not even a piece of dulled memory in the expansive nothingness ahead. Damn! Time is now against her completing her sculpture. She had been so sure the right shape could be found along her latest trajectory. Floating, she twists round to face the massive structure that extends in every direction […]

The post Web World appeared first on 365tomorrows.

Cryptogram Surveillance by the New Microsoft Outlook App

The ProtonMail people are accusing Microsoft’s new Outlook for Windows app of conducting extensive surveillance on its users. It shares data with advertisers, a lot of data:

The window informs users that Microsoft and those 801 third parties use their data for a number of purposes, including to:

  • Store and/or access information on the user’s device
  • Develop and improve products
  • Personalize ads and content
  • Measure ads and content
  • Derive audience insights
  • Obtain precise geolocation data
  • Identify users through device scanning

Commentary.

,

Planet DebianBits from Debian: Proxmox Platinum Sponsor of DebConf24

proxmoxlogo

We are pleased to announce that Proxmox has committed to sponsor DebConf24 as a Platinum Sponsor.

Proxmox provides powerful and user-friendly open-source server software. Enterprises of all sizes and industries use Proxmox solutions to deploy efficient and simplified IT infrastructures, minimize total cost of ownership, and avoid vendor lock-in. Proxmox also offers commercial support, training services, and an extensive partner ecosystem to ensure business continuity for its customers. Proxmox Server Solutions GmbH was established in 2005 and is headquartered in Vienna, Austria.

Proxmox builds its product offerings on top of the Debian operating system.

With this commitment as Platinum Sponsor, Proxmox is helping to make our annual conference possible, and directly supporting the progress of Debian and Free Software, strengthening the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much, Proxmox, for your support of DebConf24!

Become a sponsor too!

DebConf24 will take place from 28th July to 4th August 2024 in Busan, South Korea, and will be preceded by DebCamp, from 21st to 27th July 2024.

DebConf24 is accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, or visit the Become a DebConf Sponsor website.

Krebs on Security‘The Manipulaters’ Improve Phishing, Still Fail at Opsec

Roughly nine years ago, KrebsOnSecurity profiled a Pakistan-based cybercrime group called “The Manipulaters,” a sprawling web hosting network of phishing and spam delivery platforms. In January 2024, The Manipulaters pleaded with this author to unpublish previous stories about their work, claiming the group had turned over a new leaf and gone legitimate. But new research suggests that while they have improved the quality of their products and services, these nitwits still fail spectacularly at hiding their illegal activities.

In May 2015, KrebsOnSecurity published a brief writeup about the brazen Manipulaters team, noting that they openly operated hundreds of web sites selling tools designed to trick people into giving up usernames and passwords, or deploying malicious software on their PCs.

Manipulaters advertisement for “Office 365 Private Page with Antibot” phishing kit sold on the domain heartsender[.]com. “Antibot” refers to functionality that attempts to evade automated detection techniques, keeping a phish deployed as long as possible. Image: DomainTools.

The core brand of The Manipulaters has long been a shared cybercriminal identity named “Saim Raza,” who for the past decade has peddled a popular spamming and phishing service variously called “Fudtools,” “Fudpage,” “Fudsender,” “FudCo,” etc. The term “FUD” in those names stands for “Fully Un-Detectable,” and it refers to cybercrime resources that will evade detection by security tools like antivirus software or anti-spam appliances.

A September 2021 story here checked in on The Manipulaters, and found that Saim Raza and company were prospering under their FudCo brands, which they secretly managed from a front company called We Code Solutions.

That piece worked backwards from all of the known Saim Raza email addresses to identify Facebook profiles for multiple We Code Solutions employees, many of whom could be seen celebrating company anniversaries gathered around a giant cake with the words “FudCo” painted in icing.

Since that story ran, KrebsOnSecurity has heard from this Saim Raza identity on two occasions. The first was in the weeks following the Sept. 2021 piece, when one of Saim Raza’s known email addresses — bluebtcus@gmail.com — pleaded to have the story taken down.

“Hello, we already leave that fud etc before year,” the Saim Raza identity wrote. “Why you post us? Why you destroy our lifes? We never harm anyone. Please remove it.”

Not wishing to be manipulated by a phishing gang, KrebsOnSecurity ignored those entreaties. But on Jan. 14, 2024, KrebsOnSecurity heard from the same bluebtcus@gmail.com address, apropos of nothing.

“Please remove this article,” Saim Raza wrote, linking to the 2021 profile. “Please already my police register case on me. I already leave everything.”

Asked to elaborate on the police investigation, Saim Raza said they were freshly released from jail.

“I was there many days,” the reply explained. “Now back after bail. Now I want to start my new work.”

Exactly what that “new work” might entail, Saim Raza wouldn’t say. But a new report from researchers at DomainTools.com finds that several computers associated with The Manipulaters have been massively hacked by malicious data- and password-snarfing malware for quite some time.

DomainTools says the malware infections on Manipulaters PCs exposed “vast swaths of account-related data along with an outline of the group’s membership, operations, and position in the broader underground economy.”

“Curiously, the large subset of identified Manipulaters customers appear to be compromised by the same stealer malware,” DomainTools wrote. “All observed customer malware infections began after the initial compromise of Manipulaters PCs, which raises a number of questions regarding the origin of those infections.”

A number of questions, indeed. The core Manipulaters product these days is a spam delivery service called HeartSender, whose homepage openly advertises phishing kits targeting users of various Internet companies, including Microsoft 365, Yahoo, AOL, Intuit, iCloud and ID.me, to name a few.

A screenshot of the homepage of HeartSender 4 displays an IP address tied to fudtoolshop@gmail.com. Image: DomainTools.

HeartSender customers can interact with the subscription service via the website, but the product appears to be far more effective and user-friendly if one downloads HeartSender as a Windows executable program. Whether that HeartSender program was somehow compromised and used to infect the service’s customers is unknown.

However, DomainTools also found the hosted version of HeartSender service leaks an extraordinary amount of user information that probably is not intended to be publicly accessible. Apparently, the HeartSender web interface has several webpages that are accessible to unauthenticated users, exposing customer credentials along with support requests to HeartSender developers.

“Ironically, the Manipulaters may create more short-term risk to their own customers than law enforcement,” DomainTools wrote. “The data table “User Feedbacks” (sic) exposes what appear to be customer authentication tokens, user identifiers, and even a customer support request that exposes root-level SMTP credentials–all visible by an unauthenticated user on a Manipulaters-controlled domain. Given the risk for abuse, this domain will not be published.”

This is hardly the first time The Manipulaters have shot themselves in the foot. In 2019, The Manipulaters failed to renew their core domain name — manipulaters[.]com — the same one tied to so many of the company’s past and current business operations. That domain was quickly scooped up by Scylla Intel, a cyber intelligence firm that focuses on connecting cybercriminals to their real-life identities.

Currently, The Manipulaters seem focused on building out and supporting HeartSender, which specializes in spam and email-to-SMS spamming services.

“The Manipulaters’ newfound interest in email-to-SMS spam could be in response to the massive increase in smishing activity impersonating the USPS,” DomainTools wrote. “Proofs posted on HeartSender’s Telegram channel contain numerous references to postal service impersonation, including proving delivery of USPS-themed phishing lures and the sale of a USPS phishing kit.”

Reached via email, the Saim Raza identity declined to respond to questions about the DomainTools findings.

“First [of] all we never work on virus or compromised computer etc,” Raza replied. “If you want to write like that fake go ahead. Second I leave country already. If someone bind anything with exe file and spread on internet its not my fault.”

Asked why they left Pakistan, Saim Raza said the authorities there just wanted to shake them down.

“After your article our police put FIR on my [identity],” Saim Raza explained. “FIR” in this case stands for “First Information Report,” which is the initial complaint in the criminal justice system of Pakistan.

“They only get money from me nothing else,” Saim Raza continued. “Now some officers ask for money again again. Brother, there is no good law in Pakistan just they need money.”

Saim Raza has a history of being slippery with the truth, so who knows whether The Manipulaters and/or its leaders have in fact fled Pakistan (it may be more of an extended vacation abroad). With any luck, these guys will soon venture into a more Western-friendly, “good law” nation and receive a warm welcome by the local authorities.

Planet DebianGuido Günther: Free Software Activities March 2024

A short status update of what happened on my side last month. I spent quite a bit of time reviewing new code (thanks!) as well as doing maintenance to keep things going, but we also have some improvements:

Phosh

Phoc

phosh-mobile-settings

phosh-osk-stub

gmobile

Livi

squeekboard

GNOME calls

Libsoup

If you want to support my work see donations.

Planet DebianJoey Hess: reflections on distrusting xz

Was the ssh backdoor the only goal that "Jia Tan" was pursuing with their multi-year operation against xz?

I doubt it, and if not, then every fix so far has been incomplete, because everything is still running code written by that entity.

If we assume that they had a multilayered plan, that their every action was calculated and malicious, then we have to think about the full threat surface of using xz. This quickly gets into nightmare scenarios of the "trusting trust" variety.

What if xz contains a hidden buffer overflow or other vulnerability, that can be exploited by the xz file it's decompressing? This would let the attacker target other packages, as needed.

Let's say they want to target gcc. Well, gcc contains a lot of documentation, which includes png images. So they spend a while getting accepted as a documentation contributor on that project, and get a specially constructed png file added to it: one with additional binary data appended that exploits the buffer overflow and instructs xz to modify the source code that comes later when decompressing gcc.tar.xz.

More likely, they wouldn't bother with an actual trusting trust attack on gcc, which would be a lot of work to get right. One problem with the ssh backdoor is that well, not all servers on the internet run ssh. (Or systemd.) So webservers seem a likely target of this kind of second stage attack. Apache's docs include png files, nginx does not, but there's always scope to add improved documentation to a project.

When would such a vulnerability have been introduced? In February, "Jia Tan" wrote a new decoder for xz. This added 1000+ lines of new C code across several commits. So much code and in just the right place to insert something like this. And why take on such a significant project just two months before inserting the ssh backdoor? "Jia Tan" was already fully accepted as maintainer, and doing lots of other work, it doesn't seem to me that they needed to start this rewrite as part of their cover.

They were working closely with xz's author Lasse Collin in this, by indications exchanging patches offlist as they developed it. So Lasse Collin's commits in this time period are also worth scrutiny, because they could have been influenced by "Jia Tan". One that caught my eye comes immediately afterwards: "prepares the code for alternative C versions and inline assembly". Multiple versions and assembly mean even more places to hide such a security hole.

I stress that I have not found such a security hole, I'm only considering what the worst case possibilities are. I think we need to fully consider them in order to decide how to fully wrap up this mess.

Whether such stealthy security holes have been introduced into xz by "Jia Tan" or not, there are definitely indications that the ssh backdoor was not the end of what they had planned.

For one thing, the "test file" based system they introduced was extensible. They could have been planning to add more test files later, that backdoored xz in further ways.

And then there's the matter of the disabling of the Landlock sandbox. This was not necessary for the ssh backdoor, because the sandbox is only used by the xz command, not by liblzma. So why did they potentially tip their hand by adding that rogue "." that disables the sandbox?

A sandbox would not prevent the kind of attack I discuss above, where xz is just modifying code that it decompresses. Disabling the sandbox suggests that they were going to make xz run arbitrary code, that perhaps wrote to files it shouldn't be touching, to install a backdoor in the system.

Both deb and rpm use xz compression, and with the sandbox disabled, whether they link with liblzma or run the xz command, a backdoored xz can write to any file on the system while dpkg or rpm is running, and no one is likely to notice, because that's the kind of thing a package manager does.

My impression is that all of this was well planned and they were in it for the long haul. They had no reason to stop with backdooring ssh, except for the risk of additional exposure. But they decided to take that risk, with the sandbox disabling. So they planned to do more, and every commit by "Jia Tan", and really every commit that they could have influenced needs to be distrusted.

This is why I've suggested to Debian that they revert to an earlier version of xz. That would be my advice to anyone distributing xz.

I do have a xz-unscathed fork which I've carefully constructed to avoid all "Jia Tan" involved commits. It feels good to not need to worry about dpkg and tar. I only plan to maintain this fork minimally, eg security fixes. Hopefully Lasse Collin will consider these possibilities and address them in his response to the attack.

Worse Than FailureCodeSOD: Gotta Catch 'Em All

It's good to handle any exception that could be raised in some useful way. Frequently, this means that you need to take advantage of the catch block's ability to filter by type so you can do something different in each case. Or you could do what Adam's co-worker did.

try
{
/* ... some important code ... */
} catch (OutOfMemoryException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (OverflowException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (InvalidCastException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (NullReferenceException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (IndexOutOfRangeException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (ArgumentException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (InvalidOperationException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (XmlException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (IOException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (NotSupportedException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (Exception exception) {
        Global.Insert("App.GetSettings;", exception.Message);
}

Well, I guess that if they ever need to add different code paths, they're halfway there.
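
Since every handler does exactly the same thing, the entire ladder collapses to the final catch block alone (a sketch, assuming Global.Insert is their logging call and behaves the same for every exception type):

try
{
/* ... some important code ... */
} catch (Exception exception) {
        Global.Insert("App.GetSettings;", exception.Message);
}

And if they ever do need per-type behavior, C# also offers exception filters (catch ... when (...)), which would keep the distinct cases without ten copies of the same body.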

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 TomorrowsSucky

Author: Alastair Millar As I burst the blister on Martha’s back, the gelatinous pus within made its escape. Thanking the Void Gods for the medpack’s surgical gloves, I wiped her down, then set to work with the tweezers; if I couldn’t get the eggs out, it was all for nothing. This is the side of […]

The post Sucky appeared first on 365tomorrows.

Planet DebianArnaud Rebillout: Firefox: Moving from the Debian package to the Flatpak app (long-term?)

First, thanks to Samuel Henrique for giving notice of recent Firefox CVEs in Debian testing/unstable.

At the time I didn't want to upgrade my system (Debian Sid) due to the ongoing t64 transition, so I decided I could install the Firefox Flatpak app instead, and why not stick to it long-term?

This blog post details all the steps, if ever others want to go the same road.

Flatpak Installation

Disclaimer: this section is hardly anything more than a copy/paste of the official documentation, and with time it will get outdated, so you'd better follow the official doc.

First thing first, let's install Flatpak:

$ sudo apt update
$ sudo apt install flatpak

Then the next step is to add the Flathub remote repository, from where we'll get our Flatpak applications:

$ flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo

And that's all there is to it! Now come the optional steps.

For GNOME and KDE users, you might want to install a plugin for the software manager specific to your desktop, so that it can support and manage Flatpak apps:

$ which -s gnome-software  && sudo apt install gnome-software-plugin-flatpak
$ which -s plasma-discover && sudo apt install plasma-discover-backend-flatpak

And here's an additional check you can do, as it's something that did bite me in the past: missing xdg-desktop-portal-* packages, which are required for Flatpak applications to communicate with the desktop environment. Just to be sure, you can check the output of apt search '^xdg-desktop-portal' to see what's available, and compare with the output of dpkg -l | grep xdg-desktop-portal.

As you can see, if you're a GNOME or KDE user, there's a portal backend for you, and it should be installed. For reference, this is what I have on my GNOME desktop at the moment:

$ dpkg -l | grep xdg-desktop-portal | awk '{print $2}'
xdg-desktop-portal
xdg-desktop-portal-gnome
xdg-desktop-portal-gtk

Install the Firefox Flatpak app

This is trivial, but still, there's a question I've always asked myself: should I install applications system-wide (aka. flatpak --system, the default) or per-user (aka. flatpak --user)? Turns out, this question is answered in the Flatpak documentation:

Flatpak commands are run system-wide by default. If you are installing applications for day-to-day usage, it is recommended to stick with this default behavior.

Armed with this new knowledge, let's install the Firefox app:

$ flatpak install flathub org.mozilla.firefox

And that's about it! We can give it a go already:

$ flatpak run org.mozilla.firefox

Data migration

At this point, running Firefox via Flatpak gives me an "empty" Firefox. That's not what I want; instead I want my usual Firefox, with a gazillion tabs already open, a few extensions, bookmarks and so on.

As it turns out, Mozilla provides a brief doc for data migration, and it's as simple as moving the Firefox data directory around!

To clarify, we'll be copying data:

  • from ~/.mozilla/ -- where the Firefox Debian package stores its data
  • into ~/.var/app/org.mozilla.firefox/.mozilla/ -- where the Firefox Flatpak app stores its data

Make sure that all Firefox instances are closed, then proceed:

# BEWARE! Below I'm erasing data!
$ rm -fr ~/.var/app/org.mozilla.firefox/.mozilla/firefox/
$ cp -a ~/.mozilla/firefox/ ~/.var/app/org.mozilla.firefox/.mozilla/

To avoid confusing myself, it's also a good idea to rename the local data directory:

$ mv ~/.mozilla/firefox ~/.mozilla/firefox.old.$(date --iso-8601=date)

At this point, flatpak run org.mozilla.firefox takes me to my "usual" everyday Firefox, with all its tabs opened, pinned, bookmarked, etc.

More integration?

After following all the steps above, I must say that I'm 99% happy. So far, everything works as before, I didn't hit any issue, and I don't even notice that Firefox is running via Flatpak, it's completely transparent.

So where's the 1% of unhappiness? The « Run a Command » dialog from GNOME, the one that shows up via the keyboard shortcut <Alt+F2>. This is how I start my GUI applications, and I usually run two Firefox instances in parallel (one for work, one for personal), using the firefox -p <profile> command.

Given that I ran apt purge firefox before (to avoid confusing myself with two installations of Firefox), now the right (and only) way to start Firefox from a command-line is to type flatpak run org.mozilla.firefox -p <profile>. Typing that every time is way too cumbersome, so I need something quicker.

Seems like the most straightforward solution is to create a wrapper script:

$ cat /usr/local/bin/firefox 
#!/bin/sh
# Hand every argument straight through to the Flatpak app
exec flatpak run org.mozilla.firefox "$@"
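
One step the listing doesn't show: the script needs the executable bit before anything can run it.

$ sudo chmod +x /usr/local/bin/firefox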

And now I can just hit <Alt+F2> and type firefox -p <profile> to start Firefox with the profile I want, just as before. Neat!

Looking forward: system updates

I usually update my system manually every now and then, via the well-known pair of commands:

$ sudo apt update
$ sudo apt full-upgrade

The downside of introducing Flatpak, ie. introducing another package manager, is that I'll need to learn new commands to update the software that comes via this channel.

Fortunately, there's really not much to learn. From flatpak-update(1):

flatpak update [OPTION...] [REF...]

Updates applications and runtimes. [...] If no REF is given, everything is updated, as well as appstream info for all remotes.

Could it be that simple? Apparently yes, the Flatpak equivalent of the two apt commands above is just:

$ flatpak update

Going forward, my options are:

  1. Teach myself to run flatpak update in addition to apt update, manually, every time I update my system.
  2. Go crazy: let something automatically update my Flatpak apps, behind my back and without my consent.

I'm actually tempted to go for option 2 here, and I wonder if GNOME Software will do that for me, provided that I installed gnome-software-plugin-flatpak, and that I checked « Software Updates -> Automatic » in the Settings (which I did).

However, I didn't find any documentation regarding what this setting really does, so I can't say if it will only download updates, or if it will also install them. I'd be happy if it automatically installs new versions of Flatpak apps, but at the same time I'd be very unhappy if it automatically upgrades my Debian system...

So we'll see. Enough for today, hope this blog post was useful!


Planet DebianDirk Eddelbuettel: ulid 0.3.1 on CRAN: New Maintainer, Some Polish

Happy to share that ulid is now (back) on CRAN. It provides universally unique identifiers that are lexicographically sortable, which improves over the more well-known uuid generators.

ulid is a neat little package put together by Bob Rudis a few years ago. It had recently drifted off CRAN so I offered to brush it up and re-submit it. And as tooted earlier today, it took just over an hour to finish that (after the lead-up work I had done, including prior email with CRAN in the loop, the repo transfer from Bob’s to my ulid repo, plus of course a wee bit of actual maintenance; see below for more).

The NEWS entry follows.

Changes in version 0.3.1 (2024-04-02)

  • New Maintainer

  • Deleted several repository files no longer used or needed

  • Added .editorconfig, ChangeLog and cleanup

  • Converted NEWS.md to NEWS.Rd

  • Simplified R/ directory to one source file

  • Simplified src/ removing redundant Makevars

  • Added ulid() alias

  • Updated / edited roxygen and README.md documentation

  • Removed vignette which was identical to README.md

  • Switched continuous integration to GitHub Actions

  • Placed upstream (header-only) library into src/ulid/

  • Renamed single interface file to src/wrapper

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianSven Hoexter: PKIX: pathLen Constrain on Root Certificates

I recently came across an x509 P(rivate)KI Root Certificate which had a pathLen constraint set on the (self signed) Root Certificate. Since that is not commonly seen I looked around a bit to get a better understanding of how the pathLen basic constraint should be used.

Primary source is RFC 5280 section 4.2.1.9

The pathLenConstraint field is meaningful only if the cA boolean is asserted and the key usage extension, if present, asserts the keyCertSign bit (Section 4.2.1.3). In this case, it gives the maximum number of non-self-issued intermediate certificates that may follow this certificate in a valid certification path

Since the Root is always self-issued it doesn't count towards the limit, and since it's the last certificate (or the first, depending on how you count) in a chain, it's pretty much pointless to configure a pathLen constraint directly on a Root Certificate.
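
For reference, this is how such a certificate looks when you dump it with openssl (illustrative output; the pathlen value is whatever the CA operator configured):

$ openssl x509 -in root-ca.pem -noout -text | grep -A1 'Basic Constraints'
            X509v3 Basic Constraints: critical
                CA:TRUE, pathlen:0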

Another relevant resource are the Baseline Requirements of the CA/Browser Forum (currently v2.0.2). Section 7.1.2.1.4 "Root CA Basic Constraints" describes it as NOT RECOMMENDED for a Root CA.

Last but not least there is the awesome x509 Limbo project which has a section for validating pathLen constraints. Since the RFC 5280 based assumption is that self signed certs do not count, they do not check a case with such a constraint on the Root itself, and what the implementations do about it. So the assumption right now is that they properly ignore it.

Summary: It's pointless to set the pathLen constraint on the Root Certificate, so just don't do it.

Cryptogram Class-Action Lawsuit against Google’s Incognito Mode

The lawsuit has been settled:

Google has agreed to delete “billions of data records” the company collected while users browsed the web using Incognito mode, according to documents filed in federal court in San Francisco on Monday. The agreement, part of a settlement in a class action lawsuit filed in 2020, caps off years of disclosures about Google’s practices that shed light on how much data the tech giant siphons from its users­—even when they’re in private-browsing mode.

Under the terms of the settlement, Google must further update the Incognito mode “splash page” that appears anytime you open an Incognito mode Chrome window after previously updating it in January. The Incognito splash page will explicitly state that Google collects data from third-party websites “regardless of which browsing or browser mode you use,” and stipulate that “third-party sites and apps that integrate our services may still share information with Google,” among other changes. Details about Google’s private-browsing data collection must also appear in the company’s privacy policy.

I was an expert witness for the prosecution (that’s the class, against Google). I don’t know if my declarations and deposition will become public.

Planet DebianBits from Debian: Bits from the DPL

Dear Debianites

This morning I decided to just start writing Bits from DPL and send whatever I have by 18:00 local time. Here it is, barely proof read, along with all its warts and grammar mistakes! It's slightly long and doesn't contain any critical information, so if you're not in the mood, don't feel compelled to read it!

Get ready for a new DPL!

Soon, the voting period will start to elect our next DPL, and my time as DPL will come to an end. Reading the questions posted to the new candidates on debian-vote, it takes quite a bit of restraint to not answer all of them myself; I think I can see how that aspect contributed to me being reeled in to running for DPL! In total I've done so 5 times (the first time I ran, Sam was elected!).

Good luck to both Andreas and Sruthi, our current DPL candidates! I've already started working on preparing the handover, and there are multiple requests from teams that have come in recently that will have to wait for the new term, so I hope they're both ready to hit the ground running!

Things that I wish could have gone better

Communication

Recently, I saw a t-shirt that read:

Adulthood is saying, 'But after this week things will slow down a bit' over and over until you die.

I can relate! With every task, crisis or deadline that appears, I think that once this is over, I'll have some more breathing space to get back to non-urgent, but important tasks. "Bits from the DPL" was something I really wanted to get right this last term, and clearly failed spectacularly. I have two long Bits from the DPL drafts that I never finished; I tended to prioritise the problems of the day over communication. With all the hindsight I have, I'm not sure which is better to prioritise. I do rate communication and transparency very highly, and this is really the top thing that I wish I could've done better over the last four years.

On that note, thanks to people who provided me with some kind words when I've mentioned this to them before. They pointed out that there are many other ways to communicate and be in touch with the community, and they mentioned that they thought that I did a good job with that.

Since I'm still on communication, I think we can all learn to be more effective at it, since it's really so important for the project. Every time I publicly spoke about us spending more money, we got more donations. People out there really like to see how we invest funds into Debian, instead of just letting them heap up. DSA just spent a nice chunk of money on hardware, but we don't have very good visibility on it. It's one thing having it on a public line item in SPI's reporting, but it would be much more exciting if DSA could provide a write-up on all the cool hardware they're buying and what impact it would have on developers, and post it somewhere prominent like debian-devel-announce, Planet Debian or Bits from Debian (from the publicity team).

I don't want to single out DSA there, it's difficult and affects many other teams. The Salsa CI team also spent a lot of resources (time and money wise) to extend testing on AMD GPUs and other AMD hardware. It's fantastic and interesting work, and really more people within the project and in the outside world should know about it!

I'm not going to push my agendas to the next DPL, but I hope that they continue to encourage people to write about their work, and hopefully at some point we'll build enough excitement in doing so that it becomes a more normal part of our daily work.

Founding Debian as a standalone entity

This was my number one goal for the project this last term, which was a carried over item from my previous terms.

I'm tempted to write everything out here, including the problem statement and our current predicaments, what kind of ground work needs to happen, likely constitutional changes that need to happen, and the nature of the GR that would be needed to make such a thing happen, but if I start with that, I might not finish this mail.

In short, I 100% believe that this is still a very high ranking issue for Debian, and perhaps after my term I'd be in a better position to spend more time on this (hmm, is this an instance of "The grass is always greener on the other side", or "Next week will go better until I die"?). Anyway, I'm willing to work with any future DPL on this, and perhaps it can in itself be a delegation tasked to properly explore all the options, and write up a report for the project that can lead to a GR.

Overall, I'd rather have us take another few years and do this properly, rather than rush into something that is again difficult to change afterwards. So while I very much wish this could've been achieved in the last term, I can't say that I have any regrets here either.

My terms in a nutshell

COVID-19 and Debian 11 era

My first term in 2020 started just as the COVID-19 pandemic became known to spread globally. It was a tough year for everyone, and Debian wasn't immune against its effects either. Many of our contributors got sick, some have lost loved ones (my father passed away in March 2020 just after I became DPL), some have lost their jobs (or other earners in their household have) and the effects of social distancing took a mental and even physical health toll on many. In Debian, we tend to do really well when we get together in person to solve problems, and when DebConf20 got cancelled in person, we understood that that was necessary, but it was still more bad news in a year we had too much of it already.

I can't remember if there was ever any kind of formal choice or discussion about this at any time, but the DebConf video team just kind of organically and spontaneously became the orga team for an online DebConf, and that led to our first ever completely online DebConf. This was great on so many levels. We got to see each other's faces again, even though it was on screen. We had some teams talk to each other face to face for the first time in years, even though it was just on a Jitsi call. It brought a lasting cultural change in Debian; some teams still have video meetings now, where they didn't do that before, and I think it's a good supplement to our other methods of communication.

We also had a few online Mini-DebConfs that were fun, but DebConf21 was also online, and by then we had all developed online conference fatigue; while it was another good online event overall, it did start to feel a bit like a zombieconf. After that, we had some really nice events from the Brazilians, but no big global online community events again. In my opinion online MiniDebConfs can be a great way to develop our community and we should spend some further energy on this, but hey! This isn't a platform so let me back out of talking about the future as I see it...

Despite all the adversity that we faced together, the Debian 11 release ended up being quite good. It happened about a month or so later than what we ideally would've liked, but it was a solid release nonetheless. It turns out that for quite a few people, staying inside for a few months to focus on Debian bugs was quite productive, and Debian 11 ended up being a very polished release.

During this time period we also had to deal with a previous Debian Developer that was expelled for his poor behaviour in Debian, who continued to harass members of the Debian project and in other free software communities after his expulsion. This ended up being quite a lot of work since we had to take legal action to protect our community, and eventually also get the police involved. I'm not going to give him the satisfaction by spending too much time talking about him, but you can read our official statement regarding Daniel Pocock here: https://www.debian.org/News/2021/20211117

In late 2021 and early 2022 we also discussed our general resolution process, and had two subsequent votes to address some issues that had affected past votes:

In my first term I addressed our delegations that were a bit behind; by the end of my last term all delegation requests are up to date. There's still some work to do, but I'm feeling good that I get to hand this over to the next DPL in a very decent state. Delegation updates can be very deceiving: sometimes a delegation is completely re-written and it was just 1 or 2 hours of work. Other times, a delegation update can contain one line that has changed, or a change in one team member, that was the result of days' worth of discussion and hashing out differences.

I also received quite a few requests either to host a service, or to pay a third-party directly for hosting. This was quite an admin nightmare: it either meant we had to manually do monthly reimbursements to someone, or have our TOs create accounts/agreements at the multiple providers that people use. So, after talking to a few people about this, we founded the DebianNet team (we could've admittedly chosen a better name, but that can happen later on) for providing hosting at two different hosting providers that we have agreements with, so that people who host things under debian.net have an easy way to host it, and at the same time Debian also has more control if a site maintainer goes MIA.

More info: https://wiki.debian.org/Teams/DebianNet

You might notice some Openstack mentioned there; we had some intention to set up a Debian cloud for hosting these things, that could also be used for other additional Debiany things like archive rebuilds, but these have so far fallen through. We still consider it a good idea and hopefully it will work out some other time (if you're a large company who can sponsor a few racks and servers, please get in touch!)

DebConf22 and Debian 12 era

DebConf22 was the first time we returned to an in-person DebConf. It was a bit smaller than our usual DebConf, understandably so, considering that there were still COVID risks, and people who were at high risk or who had family with high risk factors did the sensible thing and stayed home.

After watching many MiniDebConfs online, I also attended my first ever MiniDebConf in Hamburg. It still feels odd typing that; it feels like I should've been at one before, but my location makes attending them difficult (on a side-note, a few of us are working on bootstrapping a South African Debian community and hopefully we can pull off a MiniDebConf in South Africa later this year).

While I was at the MiniDebConf, I gave a talk where I covered the evolution of firmware, from the simple e-proms that you'd find in old printers to the complicated firmware in modern GPUs that basically contain complete operating systems, complete with drivers for the device they're running on. I also showed my shiny new laptop, and explained that it's impossible to install that laptop without non-free firmware (you'd get a black display on d-i or Debian live). Also that you couldn't even use an accessibility mode with audio, since even that depends on non-free firmware these days.

Steve, from the image building team, has said for a while that we need to do a GR to vote for this, and after more discussion at DebConf, I kept nudging him to propose the GR, and we ended up voting in favour of it. I do believe that someone out there should be campaigning for more free firmware (unfortunately in Debian we just don't have the resources for this), but I'm glad that we have the firmware included. In the end, the choice comes down to whether we still want Debian to be installable on mainstream bare-metal hardware.

At this point, I'd like to give a special thanks to the ftpmasters, image building team and the installer team who worked really hard to get the changes done that were needed in order to make this happen for Debian 12, and for being really proactive about the remaining niggles that were solved by the time Debian 12.1 was released.

The included firmware contributed to Debian 12 being a huge success, but it wasn't the only factor. I had a list of personal peeves, and as the hard freeze hit, I lost hope that these would be fixed and made peace with the fact that Debian 12 would release with those bugs. I'm glad that lots of people proved me wrong and also proved that it's never too late to fix bugs; everything on my list got eliminated by the time the final freeze hit, which was great! We usually aim to have a release ready about 2 years after the previous release; sometimes there are complications during a freeze and it can take a bit longer. But due to the excellent co-ordination of the release team and heavy lifting from many DDs, the Debian 12 release happened 21 months and 3 weeks after the Debian 11 release. I hope the work from the release team continues to pay off so that we can achieve their goals of having shorter and less painful freezes in the future!

Even though many things were going well, the ongoing usr-merge effort highlighted some social problems within our processes. I started typing out the whole history of usrmerge here, but it's going to be too long for the purpose of this mail. Important questions that did come out of this are: should core Debian packages be team maintained? And how far should the CTTE really be able to override a maintainer? We had lots of discussion about this at DebConf22, but didn't make much concrete progress. I think that at some point we'll probably have a GR about package maintenance. Also, thank you to Guillem, who very patiently explained a few things to me (after probably having had to do so many times for others already), and to Helmut, who has done the same during the MiniDebConf in Hamburg. I think all the technical and social issues here are fixable; it will just take some time and patience, and I have lots of confidence in everyone involved.

UsrMerge wiki page: https://wiki.debian.org/UsrMerge

DebConf 23 and Debian 13 era

DebConf23 took place in Kochi, India. At the end of my Bits from the DPL talk there, someone asked me what was the most difficult thing I had to do during my terms as DPL. I answered that nothing particular stood out, and even the most difficult tasks ended up being rewarding to work on. Little did I know that my most difficult period of being DPL was just about to follow. During the day trip, one of our contributors, Abraham Raji, passed away in a tragic accident. There's really not anything anyone could've done to predict or stop it, but it was devastating to many of us, especially the people closest to him. Quite a number of DebConf attendees went to his funeral, wearing the DebConf t-shirts he designed as a tribute. It still haunts me when I remember his mother screaming "He was my everything! He was my everything!"; this was by a large margin the hardest day I've ever had in Debian, and I really wasn't ok for even a few weeks after that, and I think the hurt will be with many of us for some time to come. So, a plea again to everyone, please take care of yourself! There's probably more people that love you than you realise.

A special thanks to the DebConf23 team, who did a really good job despite all the uphills they faced (and there were many!).

As DPL, I think that planning for a DebConf is near to impossible; all you can do is show up and just jump into things. I planned to work with Enrico to finish up something that will hopefully save future DPLs some time, and that is a web-based DD certificate creator, instead of having the DPL do so manually using LaTeX. It already mostly works: you can see the work so far by visiting https://nm.debian.org/person/ACCOUNTNAME/certificate/ and replacing ACCOUNTNAME with your Debian account name, and if you're a DD, you should see your certificate. It still needs a few minor changes and a DPL signature, but at this point I think that will be finished up when the new DPL starts. Thanks to Enrico for working on this!

Since my first term, I've been trying to find ways to address all our accounting/finance issues. Tracking what we spend on things, and getting an annual overview, is hard, especially over 3 trusted organisations. The reimbursement process can also be really tedious, especially when you have to provide files in a certain order and combine them into a PDF. So, at DebConf22 we had a meeting along with the treasurer team and Stefano Rivera who said that it might be possible for him to work on a new system as part of his Freexian work. It worked out, and Freexian funded the development of the system since then, and after DebConf23 we handled the reimbursements for the conference via the new reimbursements site: https://reimbursements.debian.net/

It's still early days, but over time it should be linked to all our TOs and we'll use the same category codes across the board. So, overall, our reimbursement process becomes a lot simpler, and we'll also be able to get information like how much money we've spent on any category in any period. It will also help us to track how much money we have available or how much we spend on recurring costs. Right now that needs manual polling from our TOs. So I'm really glad that this big, long-standing problem in the project is being fixed.

For Debian 13, we're waving goodbye to the KFreeBSD and mipsel ports. But we're also gaining riscv64 and loongarch64 as release architectures! I have 3 different RISC-V based machines on my desk here that I haven't had much time to work with yet; you can expect some blog posts about them soon after my DPL term ends!

As Debian is a unix-like system, we're affected by the Year 2038 problem, where systems that use 32-bit time in seconds since 1970 run out of available time and will wrap back to 1970 or have other undefined behaviour. A detailed wiki page explains how this works in Debian, and currently we're going through a rather large transition to make this possible.

I believe this is the right time for Debian to be addressing this, we're still a bit more than a year away for the Debian 13 release, and this provides enough time to test the implementation before 2038 rolls along.

Of course, big complicated transitions with dependency loops that cause chaos for everyone would still be too easy, so this past weekend (a holiday period in most of the west due to Easter) has been filled with dealing with an upstream bug in xz-utils, where a backdoor was placed in this key piece of software. An Ars Technica article covers it quite well, so I won't go into all the details here. I mention it because I want to give yet another special thanks to everyone involved in dealing with this on the Debian side. Everyone involved, from the ftpmasters to the security team and others, was super calm and professional and made quick, high quality decisions. This also led to the archive being frozen on Saturday; this is the first time I've seen this happen since I've been a DD, but I'm sure next week will go better!

Looking forward

It's really been an honour for me to serve as DPL. It might well be my biggest achievement in my life. Previous DPLs range from prominent software engineers to game developers, or people who have done things like complete an Ironman, run other huge open source projects and be part of big consortiums. Ian Jackson even authored dpkg and is now working on the very interesting tag2upload service!

I'm a relative nobody, just someone who grew up as a poor kid in South Africa, who just really cares about Debian a lot. And, above all, I'm really thankful that I didn't do anything major to screw up Debian for good.

Not unlike learning how to use Debian, and also becoming a Debian Developer, I've learned a lot from this and it's been a really valuable growth experience for me.

I know I can't possibly give all the thanks to everyone who deserves it, so here's a big, big thanks to everyone who has worked so hard and put in many, many hours to make Debian better. I consider you all heroes!

-Jonathan

Cryptogram Magic Security Dust

Adam Shostack is selling magic security dust.

It’s about time someone is commercializing this essential technology.

Rondam RamblingsFeynman, bullies, and invisible pink unicorns

This is the second installment in what I hope will turn out to be a long series about the scientific method.  In this segment I want to give three examples of how the scientific method, which I described in the first installment, can be applied to situations that are not usually considered "science-y".  By doing this I hope to show you how the scientific method can be used without any

Worse Than FailureCodeSOD: Exceptional Feeds

Joe sends us some Visual Basic .NET exception handling. Let's see if you can spot what's wrong?

Catch ex1 As Exception

    ' return the cursor
    Me.Cursor = Cursors.Default

    ' tell a story
    MessageBox.Show(ex1.Message)
    Return

End Try

This code catches the generic exception, meddles with the cursor a bit, and then pops up a message box to alert the user to something that went wrong. I don't love putting the raw exception in the message box, but this is hardly a WTF, is it?

Catch ex2 As Exception

    ' snitch
    MessageBox.Show(ex2.ToString(), "RSS Feed Initialization Failure")

End Try

Elsewhere in the application. Okay, I also don't love the exN naming convention either, but where's the WTF?

Well, the fact that they're initializing an RSS feed is a hint- this isn't an RSS reader client, it's an RSS serving web app. This runs on the server side, and any message boxes that get popped up aren't going to the end user.

Now, I haven't seen this precise thing done in VB .Net, only in Classic ASP, where you could absolutely open message boxes on the web server. I'd hope that in ASP .Net, something would stop you from doing that. I'd hope.

Otherwise, I've found the worst logging system you could make.
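
For the record, a minimal server-side alternative (a sketch; where exactly to log depends on how the app is hosted) would hand the error to the .NET trace infrastructure instead of popping UI on the server:

Catch ex2 As Exception

    ' log where an operator can actually read it
    System.Diagnostics.Trace.TraceError("RSS Feed Initialization Failure: {0}", ex2.ToString())

End Try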

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsHow To Best Carve Light

Author: Majoki So not painterly. Not even close. Too pixelated. Too blurred at the edges of reality. Not a good start in your first soloverse. Always so much to learn. Tamp down the expectations, go back and study the masters. Phidias. Caravaggio. Kurosawa. Leibovitz. Marquez. Einstein. Know their mediums. Stone. Canvas. Film. Page. Chalkboard. Seek […]

The post How To Best Carve Light appeared first on 365tomorrows.


Planet DebianBen Hutchings: FOSS activity in March 2024

Planet DebianColin Watson: Free software activity in March 2024

My Debian contributions this month were all sponsored by Freexian.

Planet DebianSimon Josefsson: Towards reproducible minimal source code tarballs? On *-src.tar.gz

While the work to analyze the xz backdoor is in progress, several ideas have been suggested to improve the software supply chain ecosystem. Some of those ideas are good, some of the ideas are at best irrelevant and harmless, and some suggestions are plain bad. I’d like to attempt to formalize two ideas, which have been discussed before, but the context in which they can be appreciated has not been as clear as it is today.

  1. Reproducible tarballs. The idea is that published source tarballs should be possible to reproduce independently somehow, and that this should be continuously tested and verified — preferably as part of the upstream project continuous integration system (e.g., GitHub action or GitLab pipeline). While nominally this looks easy to achieve, there are some complex matters in this, for example: what timestamps to use for files in the tarball? I’ve brought up this aspect before.
  2. Minimal source tarballs without generated vendor files. Most GNU Autoconf/Automake-based tarballs include pre-generated files which are important for bootstrapping on exotic systems that do not have the required dependencies. For the bootstrapping story to succeed, this approach is important to support. However it has become clear that this practice raises significant costs and risks. Most modern GNU/Linux distributions have all the required dependencies and actually prefer to re-build everything from source code. These pre-generated extra files introduce uncertainty to that process.

My strawman proposal to improve things is to define new tarball format *-src.tar.gz with at least the following properties:

  1. The tarball should allow users to build the project, which is the entire purpose of all this. This means that at least all source code for the project has to be included.
  2. The tarballs should be signed, for example with PGP or minisign.
  3. The tarball should be possible to reproduce bit-by-bit by a third party using upstream’s version controlled sources and a pointer to which revision was used (e.g., git tag or git commit).
  4. The tarball should not require an Internet connection to download things.
    • Corollary: every external dependency either has to be explicitly documented as such (e.g., gcc and GnuTLS), or included in the tarball.
    • Observation: This means including all *.po gettext translations which are normally downloaded when building from version controlled sources.
  5. The tarball should contain everything required to build the project from source using as much externally released versioned tooling as possible. This is the “minimal” property lacking today.
    • Corollary: This means including a vendored copy of OpenSSL or libz is not acceptable: link to them as external projects.
    • Open question: How about non-released external tooling such as gnulib or autoconf archive macros? This is a bit more delicate: most distributions either just package one current version of gnulib or autoconf archive, not previous versions. While this could change, and distributions could package the gnulib git repository (up to some current version) and the autoconf archive git repository — and packages were set up to extract the version they need (gnulib’s ./bootstrap already supports this via the --gnulib-refdir parameter), this is not normally in place.
    • Suggested Corollary: The tarball should contain content from git submodule’s such as gnulib and the necessary Autoconf archive M4 macros required by the project.
  6. Similar to how the GNU project specifies the ./configure interface, we need a documented interface for how to bootstrap the project. I suggest to use the already well established idiom of running ./bootstrap to set up the package to later be able to be built via ./configure. Of course, some projects are not using the autotools ./configure interface and will not follow this aspect either, but just like most build systems that compete with autotools have instructions on how to build the project, they should document similar interfaces for bootstrapping the source tarball to allow building.

If tarballs that achieve the above goals were available from popular upstream projects, distributions could more easily use them instead of current tarballs that include pre-generated content. The advantage would be that the build process is not tainted by “unnecessary” files. We need to develop tools for maintainers to create these tarballs, similar to make dist that generate today’s foo-1.2.3.tar.gz files.
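
As a rough illustration of property 3 above, bit-by-bit reproduction from the version controlled sources (a sketch only: the project name, tag and prefix are made up, and real tooling would also need to pin the git and gzip versions involved), a third party could regenerate such a tarball from the tagged sources and compare checksums:

$ git archive --format=tar --prefix=foo-1.2.3/ v1.2.3 | gzip -n > foo-1.2.3-src.tar.gz
$ sha256sum foo-1.2.3-src.tar.gz

The -n flag keeps gzip from embedding a timestamp, and git-archive derives file timestamps from the commit, so the same tag should yield the same bytes.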

I think one common argument against this approach will be: Why bother with all that, and just use git-archive outputs? Or avoid the entire tarball approach and move directly towards version controlled check outs and referring to upstream releases as git URL and commit tag or id. One problem with this is that SHA-1 is broken, so placing trust in a SHA-1 identifier is simply not secure. Another counter-argument is that this optimizes for packagers’ benefit at the cost of upstream maintainers: most upstream maintainers do not want to store gettext *.po translations in their source code repository. A compromise between the needs of maintainers and packagers is useful, so this *-src.tar.gz tarball approach is the indirection we need to solve that. Update: In my experiment with source-only tarballs for Libntlm I actually did use git-archive output.

What do you think?

Planet DebianArturo Borrero González: Kubecon and CloudNativeCon 2024 Europe summary

Kubecon EU 2024 Paris logo

This blog post shares my thoughts on attending Kubecon and CloudNativeCon 2024 Europe in Paris. It was my third time at this conference, and it felt bigger than last year’s in Amsterdam. Apparently it had an impact on public transport. I missed part of the opening keynote because of the extremely busy rush hour tram in Paris.

On Artificial Intelligence, Machine Learning and GPUs

Talks about AI, ML, and GPUs were everywhere this year. While it wasn’t my main interest, I did learn about GPU resource sharing and power usage on Kubernetes. There were also ideas about offering Models-as-a-Service, which could be cool for Wikimedia Toolforge in the future.

See also:

On security, policy and authentication

This was probably the main interest for me in the event, given Wikimedia Toolforge was about to migrate away from Pod Security Policy, and we were currently evaluating different alternatives.

In contrast to my previous attendances at Kubecon, where three policy agents had a presence in the program schedule (Kyverno, Kubewarden and OpenPolicyAgent (OPA)), this time only OPA had relevant sessions.

One surprising bit I got from one of the OPA sessions was that it could work to authorize linux PAM sessions. Could this be useful for Wikimedia Toolforge?

OPA talk

I attended several sessions related to authentication topics. I discovered the keycloak software, which looks very promising. I also attended an OAuth2 session which I had a hard time following, because I was clearly missing some background knowledge about how OAuth2 works internally.

I also attended a couple of sessions that ended up being a vendor sales talk.

See also:

On container image builds, harbor registry, etc

This topic was also of interest to me because, again, it is a core part of Wikimedia Toolforge.

I attended a couple of sessions regarding container image builds, including topics like general best practices, image minimization, and buildpacks. I learned about kpack, which at first sight felt like a nice simplification of how the Toolforge build service was implemented.

I also attended a session by the Harbor project maintainers where they shared some valuable information on things happening soon or in the future, for example:

  • A new Harbor command-line interface is coming soon, though only as a first iteration.
  • The Harbor operator, which installs and manages Harbor, is looking for new maintainers; otherwise it is going to be archived.
  • The project is now experimenting with adding support for hosting more artifact types: Maven, NPM, PyPI. I wonder if they will consider hosting Debian .deb packages.

On networking

I attended a couple of sessions regarding networking.

I paid special attention to one session in particular, regarding network policies. They discussed new semantics being added to the Kubernetes API.

The different layers of abstraction being added to the API, the different hook points, and the override layers clearly resembled (to me at least) the network packet filtering stack of the Linux kernel (netfilter), but without the 20-plus years of experience building the right semantics and user interfaces.

Network talk

Very recently I missed some semantics for limiting the number of open connections per namespace; see Phabricator T356164: [toolforge] several tools get periods of connection refused (104) when connecting to wikis. This functionality should be available in the lower-level tooling, meaning netfilter. I may submit a proposal upstream at some point, so they consider adding this to the Kubernetes API.

Final notes

In general, I believe I learned many things, and perhaps even more importantly I re-learned some stuff I had forgotten for lack of daily exposure. I’m really happy that the cloud native way of thinking was reinforced in me, which I still need because most of my muscle memory for approaching systems architecture and engineering is from the old pre-cloud days. That being said, I felt less engaged with the content of the conference schedule compared to last year. I don’t know if the schedule itself was less interesting, or if I’m losing interest.

Finally, while not an official track of the conference, we met a bunch of folks from Wikimedia Deutschland. We had a really nice time talking about how wikibase.cloud uses Kubernetes, whether they could run in Wikimedia Cloud Services, and why structured data is so nice.

Group photo

Worse Than FailureTaking Up Space

April Fool's Day is a day when websites lie to you or stage complex pranks. We've generally avoided the former and have done a few of the latter, but we also like to use April Fool's as a chance to change things up.

So today, we're going to do something different. We're going to talk about my Day Job. Specifically, we're going to talk about a tool I use in my day job: cFS.

cFS is a NASA-designed architecture for building spaceflight applications. It's open source, and designed to be accessible. A lot of the missions NASA launches use cFS, which gives it a lovely proven track record. And it was designed and built by people much smarter than me. Which doesn't mean it's free of WTFs.

The Big Picture

cFS is a C framework for spaceflight, designed to run on real-time OSes, though it is fully capable of running on Linux (with or without a realtime kernel), and even Windows. It has three core modules: the Core Flight Executive (cFE), which provides services around task management and cross-task communication; the OS Abstraction Layer, which helps your code stay portable across OSes; and the Platform Support Package, which provides low-level support for board-connected hardware. Its core concept is that you build "apps", and the whole thing has a pitch about an app store. We'll come back to that. What exactly is an app in cFS?

Well, at their core, "apps" are just Actors. Each is a block of code with its own internal state that interacts with other modules via message passing, and basically runs as its own thread (or a realtime task, or whatever the OS-appropriate abstraction is).

These applications are wired together by a cFS feature called the "Core Flight Executive Software Bus" (cFE Software Bus, or just Software Bus), which handles managing subscriptions and routing. Under the hood, this leverages an OS-level message queue abstraction. Since each "app" has its own internal memory, and only reacts to messages (or emits messages for others to react to), we avoid pretty much all of the main pitfalls of concurrency.

This all feeds into the single responsibility principle, giving each "app" one job to do. And while we're throwing around buzzwords, it also grants us encapsulation (each "app" has its own memory space, unshared) and helps us design around interfaces: "apps" emit and receive certain messages, which defines their interface. It's almost like full object-oriented programming in C, or something like how the BEAM VM languages (Erlang, Elixir) work.
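The pattern is easy to demonstrate outside of cFS. Here's a toy sketch of the actor idea in Python (my own illustration of the pattern, not any actual cFS API): private state, a message queue as the only way in, and no shared memory to fight over.

import queue
import threading
import time

class Actor:
    """Illustration of the 'app' idea: private state, reacts only to
    messages, runs on its own thread. Not actual cFS API."""
    def __init__(self):
        self.inbox = queue.Queue()  # stands in for a software-bus route
        self._count = 0             # private state, never shared
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        self.inbox.put(msg)         # the only way to affect this actor

    def _run(self):
        while True:
            msg = self.inbox.get()  # block until a message arrives
            if msg == "WAKEUP":     # e.g. a scheduler tick
                self._count += 1
            elif msg == "REPORT":
                print(f"processed {self._count} wakeups")

a = Actor()
a.send("WAKEUP")
a.send("REPORT")
time.sleep(0.1)  # give the actor's thread time to drain its inbox

Because the only way in is the queue, there are no locks and no data races to reason about, which is the same property the Software Bus gives cFS apps.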

The other benefit of this is that we can have reusable apps which provide common functionality that every mission needs. For example, the app DS (Data Storage) logs any messages that cross the software bus. LC (Limit Checker) allows you to configure expected ranges for telemetry (like, for example, the temperature you expect a sensor to report), and raise alerts if it falls out of range. There's SCH (Scheduler) which sends commands to apps to wake them up so they can do useful work (also making it easy to sleep apps indefinitely and minimize power consumption).

All in all, cFS constitutes a robust, well-tested framework for designing spaceflight applications.

Even NASA annoys me

This is TDWTF, however, so none of this is all sunshine and roses. cFS is not the prettiest framework, and the developer experience may, ah… leave a lot to be desired. It's undergoing constant improvement, which is good, but it still has its pain points.

Speaking of constant improvement, let's talk about versioning. cFS is the core flight software framework which hosts your apps (via the cFE), and cFS is getting new versions. The apps themselves also get new versions. The people writing the apps and the people writing cFS are not always coordinating on this, which means that when cFS adds a breaking change to their API, you get to play the "which versions of cFS and App X play nice together" game. And since everyone has different practices around tagging releases, you often have to walk through commits to find the last version of the app that was compatible with your version of cFS, and see things like releases tagged "compatible with Draco rc2 (mostly)". The goal of "grab apps from an App Store and they just work" is definitely not actually happening.

Or, this, from the current cFS readme:

Compatible list of cFS apps
The following applications have been tested against this release:
TBD

Messages in cFS are represented by structs, which means that when apps want to send each other messages, they need the same struct definitions. This is just a pain to manage: getting agreement about which app should own which message, who needs the definition, and how we get the definition over to them is a huge mess. It's such a huge mess that newer versions of cFS have switched to using "Electronic Data Sheets": XML files which describe the structs. This doesn't really solve the problem, but it does add XML to the mix. At least EDS makes it easy to share definitions with non-C applications (popular ground software is written in Python or Java).

Messages also have to have a unique "Message ID", but the MID is not just an arbitrary unique number. It secretly encodes important information, like whether this message is a command (an instruction to take action) or telemetry (data being output), and if you pick a bad MID, everything breaks. Also, keeping MID definitions unique across many different apps that don't know the other apps exist is a huge problem. The general solution folks use is bolting on some sort of CSV file and a code generator that handles this.
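A minimal sketch of that kind of generator, assuming a hypothetical mids.csv with name,value columns (the real ones vary from team to team), might look like:

import csv

def generate_mid_header(csv_path, out_path):
    """Read name,value rows and emit C #defines, failing loudly if two
    apps claim the same Message ID. Hypothetical illustration."""
    seen = {}
    lines = ["/* generated file - do not edit by hand */"]
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            name, mid = row["name"], int(row["value"], 0)
            if mid in seen:
                raise ValueError(
                    f"MID 0x{mid:04X} claimed by {seen[mid]} and {name}")
            seen[mid] = name
            lines.append(f"#define {name}  (0x{mid:04X})")
    with open(out_path, "w") as f:
        f.write("\n".join(lines) + "\n")

# Example: generate_mid_header("mids.csv", "mission_msgids.h")

At least this way a duplicate MID fails the build instead of failing on orbit.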

Those MIDs also don't exist outside of cFS; they're a unique-to-cFS abstraction. Behind the scenes, cFS converts them into different parts of the "space packet header", which is the primary packet format for the SpaceWire networking protocol. This means that in realistic deployments, where your cFS module needs to talk to components not running cFS, your MID also represents key header fields for the SpaceWire network. It's incredibly overloaded, and the conversions are hidden behind C macros that you can't easily debug.
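For a sense of what's being juggled, the first 16-bit word of a CCSDS space packet primary header packs four fields together. A rough sketch of that packing (my own illustration, not the actual cFS macros):

def primary_header_word(version, is_command, has_secondary_header, apid):
    """Pack the first 16-bit word of a CCSDS space packet primary header:
    3-bit version, 1-bit type, 1-bit secondary-header flag, 11-bit APID."""
    assert 0 <= apid < 2048, "APID is only 11 bits"
    return ((version & 0x7) << 13
            | (1 if is_command else 0) << 12
            | (1 if has_secondary_header else 0) << 11
            | apid)

# A command packet with APID 0x42 and a secondary header:
print(hex(primary_header_word(0, True, True, 0x42)))  # 0x1842

Cramming all of that into the same identifier an app uses to subscribe to messages is exactly the kind of overloading that makes debugging painful.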

But my biggest gripe is the build tooling. Everyone at work knows they can send me climbing the walls by just whispering "cFS builds" in my ear. It's a nightmare (one that, I believe, has gotten better in newer versions, but due to the whole "no synchronization between app and cFE versions" problem, we're not using a new version). It starts with make, which calls CMake, which also calls make, but also calls CMake again in a way that doesn't let variables propagate down to the other layers. cFS doesn't provide any targets you link against, but instead requires that any apps you want to use be inserted into the cFS source tree directly, which makes it incredibly difficult to build just parts of cFS for unit testing.

Oh, speaking of unit testing: cFS provides mocks of all of its internal functions, mocks which always return an error code. This is intentional, to encourage developers to test their failure paths in code, but I'd like to test our success paths too.

Summary

Any tool you use on a regular basis is going to be a tool that you're intimately familiar with; the good parts frequently vanish into the background and the pain points are the things that you notice, day in, day out. That's definitely how I feel after working with cFS for two years.

I think that, at its core, the programming concepts it brings to low-level, embedded C are good. It's certainly better than needing to write this architecture myself. And for all its warts, it's been designed and developed by people who are extremely safety-conscious and expect you to be too. It's been used on many missions, from hobbyist cubesats to Mars rovers, and that proven track record gives you a good degree of confidence that your mission will also be safe using it.

And since it is Open Source, you can try it out yourself. The cFS-101 guide gives you a good starting point, complete with a downloadable VM that walks you through building a cFS application and communicating with it from simulated ground software. It's a very different way to approach C programming (and makes it easier to comply with C standards, like MISRA), and honestly, the Actor-oriented mindset is a good attitude to bring to many different architectural problems.

Peregrine

If you were following space news at all, you may already know that our Peregrine lander failed. I can't really say anything about that until the formal review has released its findings, but all indications are that it was very much a hardware problem involving valves and high pressure tanks. But I can say that most of the avionics on it were connected to some sort of cFS instance (there were several cFS nodes).

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

365 TomorrowsTwo Guns

Author: Julian Miles, Staff Writer That shadow is cast by scenery. The one next to it likewise. The third I’m not sure about, and the fourth will become scenery if the body isn’t found. Sniping lasers give nothing away when killing, although the wounds are distinctive. By the time they’re identifying that, I’ll be off […]

The post Two Guns appeared first on 365tomorrows.

Cryptogram Ross Anderson

Ross Anderson unexpectedly passed away Thursday night in, I believe, his home in Cambridge.

I can’t remember when I first met Ross. Of course it was before 2008, when we created the Security and Human Behavior workshop. It was well before 2001, when we created the Workshop on Economics and Information Security. (Okay, he created both—I helped.) It was before 1998, when we wrote about the problems with key escrow systems. I was one of the people he brought to the Newton Institute, at Cambridge University, for the six-month cryptography residency program he ran (I mistakenly didn’t stay the whole time)—that was in 1996.

I know I was at the first Fast Software Encryption workshop in December 1993, another conference he created. There I presented the Blowfish encryption algorithm. Pulling an old first-edition of Applied Cryptography (the one with the blue cover) down from the shelf, I see his name in the acknowledgments. Which means that sometime in early 1993—probably at Eurocrypt in Lofthus, Norway—I, as an unpublished book author who had only written a couple of crypto articles for Dr. Dobb’s Journal, asked him to read and comment on my book manuscript. And he said yes. Which means I mailed him a paper copy. And he read it. And mailed his handwritten comments back to me. In an envelope with stamps. Because that’s how we did it back then.

I have known Ross for over thirty years, as both a colleague and a friend. He was enthusiastic, brilliant, opinionated, articulate, curmudgeonly, and kind. Pick up any of his academic papers—there are many—and odds are that you will find at least one unexpected insight. He was a cryptographer and security engineer, but also very much a generalist. He published on block cipher cryptanalysis in the 1990s, and the security of large-language models last year. He started conferences like nobody’s business. His masterwork book, Security Engineering—now in its third edition—is as comprehensive a tome on cybersecurity and related topics as you could imagine. (Also note his fifteen-lecture video series on that same page. If you have never heard Ross lecture, you’re in for a treat.) He was the first person to understand that security problems are often actually economic problems. He was the first person to make a lot of those sorts of connections. He fought against surveillance and backdoors, and for academic freedom. He didn’t suffer fools in either government or the corporate world.

He’s listed in the acknowledgments as a reader of every one of my books from Beyond Fear on. Recently, we’d see each other a couple of times a year: at this or that workshop or event. The last time I saw him was last June, at SHB 2023, in Pittsburgh. We were having dinner on Alessandro Acquisti‘s rooftop patio, celebrating another successful workshop. He was going to attend my Workshop on Reimagining Democracy in December, but he had to cancel at the last minute. (He sent me the talk he was going to give. I will see about posting it.) The day before he died, we were discussing how to accommodate everyone who registered for this year’s SHB workshop. I learned something from him every single time we talked. And I am not the only one.

My heart goes out to his wife Shireen and his family. We lost him much too soon.

Cryptogram Declassified NSA Newsletters

Through a 2010 FOIA request (yes, it took that long), we have copies of the NSA’s KRYPTOS Society Newsletter, “Tales of the Krypt,” from 1994 to 2003.

There are many interesting things in the 800 pages of newsletter. There are many redactions. And a 1994 review of Applied Cryptography by redacted:

Applied Cryptography, for those who don’t read the internet news, is a book written by Bruce Schneier last year. According to the jacket, Schneier is a data security expert with a master’s degree in computer science. According to his followers, he is a hero who has finally brought together the loose threads of cryptography for the general public to understand. Schneier has gathered academic research, internet gossip, and everything he could find on cryptography into one 600-page jumble.

The book is destined for commercial success because it is the only volume in which everything linked to cryptography is mentioned. It has sections on such diverse topics as number theory, zero knowledge proofs, complexity, protocols, DES, patent law, and the Computer Professionals for Social Responsibility. Cryptography is a hot topic just now, and Schneier stands alone in having written a book on it which can be browsed: it is not too dry.

Schneier gives prominence to applications with large sections on protocols and source code. Code is given for IDEA, FEAL, triple-DES, and other algorithms. At first glance, the book has the look of an encyclopedia of cryptography. Unlike an encyclopedia, however, it can’t be trusted for accuracy.

Playing loose with the facts is a serious problem with Schneier. For example in discussing a small-exponent attack on RSA, he says “an attack by Michael Wiener will recover e when e is up to one quarter the size of n.” Actually, Wiener’s attack recovers the secret exponent d when e has less than one quarter as many bits as n, which is a quite different statement. Or: “The quadratic sieve is the fastest known algorithm for factoring numbers less than 150 digits…. The number field sieve is the fastest known factoring algorithm, although the quadratic sieve is still faster for smaller numbers (the break even point is between 110 and 135 digits).” Throughout the book, Schneier leaves the impression of sloppiness, of a quick and dirty exposition. The reader is subjected to the grunge of equations, only to be confused or misled. The large number of errors compounds the problem. A recent version of the errata (Schneier publishes updates on the internet) is fifteen pages and growing, including errors in diagrams, errors in the code, and errors in the bibliography.

Many readers won’t notice that the details are askew. The importance of the book is that it is the first stab at putting the whole subject in one spot. Schneier aimed to provide a “comprehensive reference work for modern cryptography.” Comprehensive it is. A trusted reference it is not.

Ouch. But I will not argue that some of my math was sloppy, especially in the first edition (with the blue cover, not the red cover).

A few other highlights:

  • 1995 Kryptos Kristmas Kwiz, pages 299–306
  • 1996 Kryptos Kristmas Kwiz, pages 414–420
  • 1998 Kryptos Kristmas Kwiz, pages 659–665
  • 1999 Kryptos Kristmas Kwiz, pages 734–738
  • Dundee Society Introductory Placement Test (from questions posed by Lambros Callimahos in his famous class), pages 771–773
  • R. Dale Shipp’s Principles of Cryptanalytic Diagnosis, pages 776–779
  • Obit of Jacqueline Jenkins-Nye (Bill Nye the Science Guy’s mother), pages 755–756
  • A praise of Pi, pages 694–696
  • A rant about Acronyms, pages 614–615
  • A speech on women in cryptology, pages 593–599

Cory DoctorowSubprime gadgets

A group of child miners, looking grimy and miserable, standing in a blasted gravel wasteland. To their left, standing on a hill, is a club-wielding, mad-eyed, top-hatted debt collector, brandishing a document bearing an Android logo.

Today for my podcast, I read Subprime gadgets, originally published in my Pluralistic blog:

I recorded this on a day when I was home between book-tour stops (I’m out with my new techno crime-thriller, The Bezzle). Catch me on April 11 in Boston with Randall Munroe, on April 12th in Providence, Rhode Island, then on to Chicago, Torino, Winnipeg, Calgary, Vancouver and beyond! The canonical link for the schedule is here.


The promise of feudal security: “Surrender control over your digital life so that we, the wise, giant corporation, can ensure that you aren’t tricked into catastrophic blunders that expose you to harm”:

https://locusmag.com/2021/01/cory-doctorow-neofeudalism-and-the-digital-manor/

The tech giant is a feudal warlord whose platform is a fortress; move into the fortress and the warlord will defend you against the bandits roaming the lawless land beyond its walls.

That’s the promise, here’s the failure: What happens when the warlord decides to attack you? If a tech giant decides to do something that harms you, the fortress becomes a prison and the thick walls keep you in.


MP3


Here’s that tour schedule!

11 Apr: Harvard Berkman-Klein Center, with Randall Munroe
https://cyber.harvard.edu/events/enshittification

12 Apr: RISD Debates in AI, Providence

17 Apr: Anderson’s Books, Chicago, 19h:
https://www.andersonsbookshop.com/event/cory-doctorow-1

19-21 Apr: Torino Biennale Tecnologia
https://www.turismotorino.org/en/experiences/events/biennale-tecnologia

2 May, Canadian Centre for Policy Alternatives, Winnipeg
https://www.eventbrite.ca/e/cory-doctorow-tickets-798820071337

5-11 May: Tartu Prima Vista Literary Festival
https://tartu2024.ee/en/kirjandusfestival/

6-9 Jun: Media Ecology Association keynote, Amherst, NY
https://media-ecology.org/convention

(Image: Oatsy, CC BY 2.0, modified)

,

David BrinDo the Rich Have Too Much Money? A neglected posting... till now.

Want to time travel? I found this on my desktop… a roundup of news that seemed highly indicative… in early 2022!  Back in those naïve, bygone days of innocence… but now…  Heck yes, a lot of it is pertinent right now!


Like… saving market economies from their all-too-natural decay back into feudalism.


------


First, something a long time coming!  Utter proof we are seeing a Western Revival and push back against the World Oligarchic Putsch. A landmark deal agreed upon by the world's richest nations on Saturday will see a global minimum rate of corporation tax placed on multinational companies, including tech giants like Amazon, Apple and Microsoft. Finance ministers from the Group of Seven, or G-7 nations, said they had agreed to have a global base corporate tax rate of at least 15 percent. Companies with a strong online presence would pay taxes in the countries where they record sales, not just where they have an operational base.


It is far, far from enough! But at last some of my large-scale 'suggestions' are being tried. Now let's get all 50 U.S. states to pass a treaty banning 'bidding wars' for factories, sports teams etc... with maybe a sliding scale tilted for poorer or low-population states. A trivially easy thing that'd save citizens hundreds of billions.


The following made oligarchs fearful of what the Pelosi bills might accomplish, if thirty years of sabotaging the IRS came to an end: 


ProPublica has obtained a vast trove of Internal Revenue Service data on the tax returns of thousands of the nation’s wealthiest people, covering more than 15 years. The data provides an unprecedented look inside the financial lives of America’s titans, including Warren Buffett, Bill Gates, Rupert Murdoch and Mark Zuckerberg. It shows not just their income and taxes, but also their investments, stock trades, gambling winnings and even the results of audits. Taken together, it demolishes the cornerstone myth of the American tax system: that everyone pays their fair share and the richest Americans pay the most. The results are stark. According to Forbes, those 25 people saw their worth rise a collective $401 billion from 2014 to 2018. They paid a total of $13.6 billion in federal income taxes in those five years, the IRS data shows. That’s a staggering sum, but it amounts to a true tax rate of only 3.4%.  

Over the longer run, what we need is the World Ownership Treaty. Nothing on Earth is 'owned' unless a human or government or nonprofit claims it openly and accountably. So much illicit property would be abandoned by criminals etc. that national debts would be erased and the rest of us could have a tax jubilee. The World Ownership Treaty has zero justified objections. If you own something... just say so.


And a minor tech note: An amazing helium airship alternates life as dirigible or water ship. Alas, it is missing some important aspects I could explain… 



== When the Rich have Too Much Money… ==


“The Nobel Prize-winning physicist Ilya Prigogine was fond of saying that the future is not so much determined by what we do in the present as our image of the future determines what we do today.” So begins the latest missive of Noema Magazine.


The Near Future: The Pew Research Center’s annual Big Challenges Report top-features my musings on energy, local production/autonomy, transparency etc., along with other top seers, like the estimable Esther Dyson, Jamais Cascio, Amy Webb & Abigail deKosnick and many others.


Among the points I raise:

  • Advances in cost-effectiveness of sustainable energy supplies will be augmented by better storage systems. This will both reduce reliance on fossil fuels and allow cities and homes to be more autonomous.
  • Urban farming methods may move to industrial scale, allowing similar moves toward local autonomy (perhaps requiring a full decade or more to show significant impact). Meat use will decline for several reasons, ensuring some degree of food security, as well.
  • Local, small-scale, on-demand manufacturing may start to show effects in 2025. If all of the above take hold, there will be surplus oceanic shipping capacity across the planet. Some of it may be applied to ameliorate (not solve) acute water shortages. Innovative uses of such vessels may range all the way to those depicted in my novel ‘Earth.’
  • Full-scale diagnostic evaluations of diet, genes and microbiome will result in micro-biotic therapies and treatments. AI appraisals of other diagnostics will both advance detection of problems and become distributed to handheld devices cheaply available to all, even poor clinics.
  • Handheld devices will start to carry detection technologies that can appraise across the spectrum, allowing NGOs and even private parties to detect and report environmental problems.
  • Socially, this extension of citizen vision will go beyond the current trend of assigning accountability to police and other authorities. Despotisms will be empowered, as predicted in ‘Nineteen Eighty-four.’ But democracies will also be empowered, as in ‘The Transparent Society.’
  • I give odds that tsunamis of revelation will crack the shields protecting many elites from disclosure of past and present torts and turpitudes. The Panama Papers and Epstein cases exhibit how fear propels the elites to combine efforts at repression. But only a few more cracks may cause the dike to collapse, revealing networks of blackmail. This is only partly technologically driven and hence is not guaranteed. If it does happen, there will be dangerous spasms by all sorts of elites, desperate to either retain status or evade consequences. But if the fever runs its course, the more transparent world will be cleaner and better run.
  • Some of those elites have grown aware of the power of 90 years of Hollywood propaganda for individualism, criticism, diversity, suspicion of authority and appreciation of eccentricity. Counter-propaganda pushing older, more traditional approaches to authority and conformity are already emerging, and they have the advantage of resonating with ancient human fears. Much will depend upon this meme war.

“Of course, much will also depend upon short-term resolution of current crises. If our systems remain undermined and sabotaged by incited civil strife and distrust of expertise, then all bets are off. You will get many answers to this canvassing fretting about the spread of ‘surveillance technologies that will empower Big Brother.’ These fears are well-grounded, but utterly myopic. First, ubiquitous cameras and facial recognition are only the beginning. Nothing will stop them and any such thought of ‘protecting’ citizens from being seen by elites is stunningly absurd, as the cameras get smaller, better, faster, cheaper, more mobile and vastly more numerous every month. Moore’s Law to the nth degree. Yes, despotisms will benefit from this trend. And hence, the only thing that matters is to prevent despotism altogether.

“In contrast, a free society will be able to apply the very same burgeoning technologies toward accountability. We are seeing them applied to end centuries of abuse by ‘bad-apple’ police who are thugs, while empowering the truly professional cops to do their jobs better. I do not guarantee light will be used this way, despite today’s spectacular example. It is an open question whether we citizens will have the gumption to apply ‘sousveillance’ upward at all elites. But Gandhi and Martin Luther King Jr. likewise were saved by crude technologies of light in their days. And history shows that assertive vision by and for the citizenry is the only method that has ever increased freedom and – yes – some degree of privacy.

A new type of digital asset - known as a non-fungible token (NFT) - has exploded in popularity during the pandemic as enthusiasts and investors scramble to spend enormous sums of money on items that only exist online. “Blockchain technology allows the items to be publicly authenticated as one-of-a-kind, unlike traditional online objects which can be endlessly reproduced.”… “ In October 2020, Miami-based art collector Pablo Rodriguez-Fraile spent almost $67,000 on a 10-second video artwork that he could have watched for free online. Last week, he sold it for $6.6 million. The video by digital artist Beeple, whose real name is Mike Winkelmann, was authenticated by blockchain, which serves as a digital signature to certify who owns it and that it is the original work.”


From The Washington Post: The post-covid luxury spending boom has begun. It’s already reshaping the economy. Consider: a sealed copy of Super Mario 64 sold for $1.56M in a record-breaking auction. That record didn’t last long; by August 2021, a rare copy of Super Mario Bros. sold for $2 million, the most ever paid for a video game.


===============================


== Addendum March 30, 2024 ==


What's above was an economics rant from the past. Only now, let me also tack on something from spring 2024 (today!) that I just sent to a purported 'investment guru economist' I know. His bi-weekly newsletter regularly - and obsessively - focuses on the Federal Reserve ('Fed') and the ongoing drama of setting calibrated interest rates to fight inflation.  (The fellow never, ever talks about all the things that matter much, much more, like tax/fiscal policy, money velocity and rising wealth disparities.)


Here, he does make one cogent point about inflation... but doesn't follow up to the logical conclusion:


"This matters because the average consumer doesn’t look at benchmarks. They perceive inflation when it starts having visibly large and/or frequent effects on their lives. This is why food and gasoline prices matter so much; people buy them regularly enough to notice higher prices. Their contribution to inflation perceptions is greater than their weighting in the benchmarks."

Yes!  It is true that the poor and middle class do not borrow in order to meet basic needs. All they can do, when prices rise, is tighten their belts. Interest rates do not affect such basics.

ALSO: the rich do not borrow. Because, after 40 years of Supply Side tax grifts, they have all the money! And now they are snapping up 1/3 of US housing stock with cash purchases. What Adam Smith called economically useless 'rent-seeking'. The net effect of Republican Congresses firehosing all our wealth into the gaping-open maws of parasites.

That's gradually changing, at last. The US is rapidly re-industrializing, right now! But not by borrowing. The boom in US manufacturing investment is entirely Keynesian, meaning that it's being propelled by federal infrastructure spending and the Chips Act. Those Pelosi bills are having all of the positive effects that Austrian School fanatics insanely promised for Supply Side... and never delivered.

That old argument is now settled by facts... which never (alas) sway cultists. Pure fact. Keynes is proved. Laffer is disproved. Period.

But in that case, what's with the obsession of the Right upon the Federal Reserve? What - pray tell - is the Fed supposedly influencing, with interest rate meddling? The answer is... not much.

If you want to see what's important to oligarchy - the core issue that's got them so upset that they will support Trump - just look at what the GOP in Congress and the Courts is actually doing, right now! Other than "Hunter hearings" and other Benghazi-style theatrics, what Mike Johnson et al. are doing is:

- Desperately using every lever - like government shut-down threats and holding aid to Ukraine hostage - to slash the coming wave of IRS audits that might affect their masters. With that wave looming, many in the oligarchy are terrified. Re-eviscerating the IRS is the top GOP priority! But Schumer called Johnson's bluff.

- Their other clear priority is obedience to the Kremlin and blocking aid to Ukraine.

Look at what actually is happening, and then please, please name for me one other actual (not polemical) priority.

== And finally ==

Oh yeah, then there's this. 

Please don't travel April 17-21.  

That's McVeigh season. Though, if you listen to MAGA world, ALL of 2024 into 2025 could be.  

God bless the FBI.




Planet DebianJunichi Uekawa: Learning about xz and what is happening is fascinating.

Learning about xz and what is happening is fascinating. The scope of the potential exploit is very large. The open source software space is filled with unmaintained and unreviewed software.

Planet DebianRussell Coker: Links March 2024

Bruce Schneier wrote an interesting blog post about his workshop on reimagining democracy and the unusual way he structured it [1]. It would be fun to have a security conference run like that!

Matthias wrote an informative blog post about Wayland, “Wayland really breaks things… Just for now”, which links to a blog debate about the utility of Wayland [2]. Wayland seems pretty good to me.

Cory Doctorow wrote an insightful article about the AI bubble comparing it to previous bubbles [3].

Charles Stross wrote an insightful analysis of the implications if the UK brought back military conscription [4]. Looks like the era of large armies is over.

Charles Stross wrote an informative blog post about the Worldcon in China, covering issues of vote rigging for location, government censorship vs awards, and business opportunities [5].

The Paris Review has an interesting article about speaking to the CIA’s Creative Writing Group [6]. It doesn’t explain why they have a creative writing group that has some sort of semi-official sanction.

LongNow has an insightful article about the threats to biodiversity in food crops and the threat that poses to humans [7].

Bruce Schneier and Albert Fox Cahn wrote an interesting article about the impacts of chatbots on human discourse [8]. If it makes people speak more precisely then that would be great for all Autistic people!

365 TomorrowsPast Belief

Author: Don Nigroni When everything is going well, I can’t relax. I just wait and worry for something bad to happen. So when I got a promotion last week, naturally I expected something ugly would happen, perhaps a leaky roof or maybe a hurricane. But this time, no matter how hard I looked for an […]

The post Past Belief appeared first on 365tomorrows.

,

Planet DebianSteinar H. Gunderson: xz backdooring

Andres Freund found that xz-utils is backdoored, but could not (despite the otherwise excellent analysis) get quite to the bottom of what the payload actually does.

What you would hope for to be posted by others: Further analysis of the payload.

What actually gets posted by others: “systemd is bad.”

Update: Good preliminary analysis.

365 TomorrowsProximity Suit

Author: Jeremy Nathan Marks Athabasca was a town of gas and coal. No wind or solar were allowed. Local officials said the Lord would return by fire while windmills and solar panels could only mar the landscape. And fire in a town of coal and gas was, naturally, a lovely thing. On a plain not […]

The post Proximity Suit appeared first on 365tomorrows.

,

Cryptogram Friday Squid Blogging: The Geopolitics of Eating Squid

New York Times op-ed on the Chinese dominance of the squid industry:

China’s domination in seafood has raised deep concerns among American fishermen, policymakers and human rights activists. They warn that China is expanding its maritime reach in ways that are putting domestic fishermen around the world at a competitive disadvantage, eroding international law governing sea borders and undermining food security, especially in poorer countries that rely heavily on fish for protein. In some parts of the world, frequent illegal incursions by Chinese ships into other nations’ waters are heightening military tensions. American lawmakers are concerned because the United States, locked in a trade war with China, is the world’s largest importer of seafood.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Planet DebianRaphaël Hertzog: Freexian is looking to expand its team with more Debian contributors

It’s been a while since I posted anything on my blog. The truth is that Freexian has been doing very well in the last few years and I have a hard time allocating time to write articles or even to contribute to my usual Debian projects… the exception being debusine, since that’s part of the Freexian work (have a look at our most recent announcement!).

That being said, given Freexian’s growth, and in the hope of reducing my workload, we are looking to extend our team with Debian members of more varied backgrounds and skills, so they can help us in areas like sales / marketing / project management. Have a look at our announcement on debian-jobs@lists.debian.org.

As a mission-oriented company, we are looking to work with people already involved in Debian (or people who were waiting for the right opportunity to get involved). All our collaborators can spend 20% of their paid work time on the Debian projects they care about.

Planet DebianRavi Dwivedi: A visit to the Taj Mahal

Note: The currency used in this post is the Indian Rupee (INR), which traded at around 83 INR to 1 US dollar at that time.

My friend Badri and I visited the Taj Mahal this month. The Taj Mahal is one of the main tourist destinations in India and does not need an introduction, I guess. It is in Agra, in the state of Uttar Pradesh, 188 km from Delhi by train. So, I am writing a post documenting useful information for people who are planning to visit the Taj Mahal. Feel free to ask me questions about visiting it.

Our retiring room at the Old Delhi Railway Station.

We had booked a train from Delhi to Agra. The name of the train was Taj Express; its scheduled departure time from Hazrat Nizamuddin station in Delhi is 07:08 in the morning, and its arrival time at Agra Cantt station is 09:45. So, we booked a retiring room at the Old Delhi railway station for the previous night. This retiring room was hard to find. We woke up at 05:00 in the morning and took the metro to Hazrat Nizamuddin station. We barely reached the station in time, but anyway, the train was not yet at the station; it was running late.

We reached Agra at 10:30 and checked into our retiring room, took rest and went out for Taj Mahal at 13:00 in the afternoon. Taj Mahal’s outer gate is 5 km away from the Agra Cantt station. As we were going out of the railway station, we were chased by an autorickshaw driver who offered to go to Taj Mahal for 150 INR for both of us. I asked him to bring it down to 60 INR, and after some back and forth, he agreed to drop us off at Taj Mahal for 80 INR. But I said we won’t pay anything above 60 INR. He agreed with that amount but said that he would need to fill up with more passengers. When we saw that he wasn’t making any effort in bringing more passengers, we walked away.

As soon as we got out of the railway station complex, an autorickshaw driver came to us and offered to drop us off at Taj Mahal for 20 INR if we are sharing with other passengers and 100 INR if we reserve the auto for us. We agreed to go with 20 INR per person, but he started the autorickshaw as soon as we hopped in. I thought that the third person in the auto was another passenger sharing a ride with us, but later we got to know he was with the driver. Upon reaching the outer gate of Taj Mahal, I gave him 40 INR (for both of us), and he asked to instead give 100 INR as he said we reserved the auto, even though I clearly stated before taking the auto that we wanted to share the auto, not reserve it. I think this was a scam. We walked away, and he didn’t insist further.

Taj Mahal entrance was like 500 m from the outer gate. We went there and bought offline tickets just outside the West gate. For Indians, the ticket for going inside the Taj Mahal complex is 50 INR, and a visit to the mausoleum costs 200 INR extra.

Security outside the Taj Mahal complex.

This red colored building is entrance to where you can see the Taj Mahal.

Taj Mahal.

Shoe covers for going inside the mausoleum.

Taj Mahal from side angle.

We came out of the Taj Mahal complex at 18:00 and stopped for some tea and snacks. I also bought a fridge magnet for 30 INR. Then we walked back towards Agra Cantt station, as we had a train for Jaipur at midnight. We were hoping to find a restaurant along the way, but we didn’t find any that we found interesting, so we just ate at the railway station. During the return trip, we noticed there was a bus stand near the station, which we didn’t know about. It turns out you can catch a bus to Taj Mahal from there. You can click here to check out the location of that bus stand on OpenStreetMap.

Expenses

These were our expenses per person:

  • Retiring room at Delhi Railway Station for 12 hours: ₹131

  • Train ticket from Delhi to Agra (Taj Express): ₹110

  • Retiring room at Agra Cantt station for 12 hours: ₹450

  • Auto-rickshaw to Taj Mahal: ₹20

  • Taj Mahal ticket (including going inside the mausoleum): ₹250

  • Food: ₹350

Important information for visitors

  • Taj Mahal is closed on Friday.

  • There are plenty of free-of-cost drinking water taps inside the Taj Mahal complex.

  • The ticket price for Indians is ₹50; for foreigners and NRIs it is ₹1100, and for people from SAARC/BIMSTEC countries it is ₹540. The mausoleum costs ₹200 extra for everyone.

  • A visit inside the mausoleum requires covering your shoes or removing them. Shoe covers cost ₹10 per person inside the complex, but are probably included free of charge with foreigner tickets. We could not find a place to keep our shoes, but some people managed to enter barefoot, indicating there must be someplace to keep them.

  • Mobile phones and cameras are allowed inside the Taj Mahal, but not eatables.

  • We went there on March 10th, and the weather was pleasant. So, we recommend going around that time.

  • Regarding the timings, I found this written near the ticket counter: “Taj Mahal opens 30 minutes before sunrise and closes 30 minutes before sunset during normal operating days,” so the timings are vague. But we came out of the complex at 18:00 hours. I would interpret that to mean the Taj Mahal is open from 07:00 to 18:00, and the ticket counter closes at around 17:00. During the winter, the timings might differ.

  • The cheapest way to reach Taj Mahal is by bus, and the bus stop is here

Bye for now. See you in the next post :)

Worse Than FailureError'd: Good Enough Friday

We've got some of the rarer classic Error'd types today: events from the dawn of time, weird definitions of space, and this absolutely astonishing choice of cancel/confirm button text.

Perplexed Stewart found this and it's got me completely befuddled as well! "Puzzled over this classic type of Error'd for ages. I really have no clue whether I should press Yes or No."

avast

 

I have a feeling we've seen errors like this before, but it bears repeating. Samuel H. bemoans the awful irony. "While updating Adobe Reader: Adobe Crash Processor quit unexpectedly [a.k.a. crashed]."

adobe

 

Cosmopolitan Jan B. might be looking for a courier to carry something abroad. "I found an eBay listing that seemed too good to be true, but had no bids at all! The item even ships worldwide, except they have a very narrow definition of what worldwide means."

shipping

 

Super-Patriot Chris A. proves Tennessee will take second place to nobody when it comes to distrusting dirty furriners. Especially the ones in Kentucky. "The best country to block is one's own. That way, you KNOW no foreigners can read your public documents!"

denied

 

Finally, old-timer Bruce R. has a system that appears to have been directly inspired by Aristotle. "I know Windows has some old code in it, but this is ridiculous."

stale

 

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 TomorrowsA Little Extra

Author: Marcel Neumann After years of living in an unjust world, being disillusioned by an ideology I once thought held promise and having lost faith in humanity’s collective desire to live in harmony, I decided to live off the grid in a remote Alaskan village. Any needed or desired supplies were flown in by a […]

The post A Little Extra appeared first on 365tomorrows.

Cryptogram Lessons from a Ransomware Attack against the British Library

You might think that libraries are kind of boring, but this self-analysis of a 2023 ransomware and extortion attack against the British Library is anything but.

Planet DebianPatryk Cisek: Sanoid on TrueNAS

syncoid to TrueNAS

In my homelab, I have 2 NAS systems:

  • Linux (Debian)

  • TrueNAS Core (based on FreeBSD)

On my Linux box, I use Jim Salter’s sanoid to periodically take snapshots of my ZFS pool. I also want to have a proper backup of the whole pool, so I use syncoid to transfer those snapshots to another machine. Sanoid itself is responsible only for taking new snapshots and pruning old ones you no longer care about.

,

Krebs on SecurityThread Hijacking: Phishes That Prey on Your Curiosity

Thread hijacking attacks. They happen when someone you know has their email account compromised, and you are suddenly dropped into an existing conversation between the sender and someone else. These missives draw on the recipient’s natural curiosity about being copied on a private discussion, which is modified to include a malicious link or attachment. Here’s the story of a thread hijacking attack in which a journalist was copied on a phishing email from the unwilling subject of a recent scoop.

In Sept. 2023, the Pennsylvania news outlet LancasterOnline.com published a story about Adam Kidan, a wealthy businessman with a criminal past who is a major donor to Republican causes and candidates, including Rep. Lloyd Smucker (R-Pa).

The LancasterOnline story about Adam Kidan.

Several months after that piece ran, the story’s author Brett Sholtis received two emails from Kidan, both of which contained attachments. One of the messages appeared to be a lengthy conversation between Kidan and a colleague, with the subject line, “Re: Successfully sent data.” The second missive was a more brief email from Kidan with the subject, “Acknowledge New Work Order,” and a message that read simply, “Please find the attached.”

Sholtis said he clicked the attachment in one of the messages, which then launched a web page that looked exactly like a Microsoft Office 365 login page. An analysis of the webpage reveals it would check any submitted credentials at the real Microsoft website, and return an error if the user entered bogus account information. A successful login would record the submitted credentials and forward the victim to the real Microsoft website.

But Sholtis said he didn’t enter his Outlook username and password. Instead, he forwarded the messages to LancasterOnline’s IT team, which quickly flagged them as phishing attempts.

LancasterOnline Executive Editor Tom Murse said the two phishing messages from Mr. Kidan raised eyebrows in the newsroom because Kidan had threatened to sue the news outlet multiple times over Sholtis’s story.

“We were just perplexed,” Murse said. “It seemed to be a phishing attempt but we were confused why it would come from a prominent businessman we’ve written about. Our initial response was confusion, but we didn’t know what else to do with it other than to send it to the FBI.”

The phishing lure attached to the thread hijacking email from Mr. Kidan.

In 2006, Kidan was sentenced to 70 months in federal prison after pleading guilty to defrauding lenders along with Jack Abramoff, the disgraced lobbyist whose corruption became a symbol of the excesses of Washington influence peddling. He was paroled in 2009, and in 2014 moved his family to a home in Lancaster County, Pa.

The FBI hasn’t responded to LancasterOnline’s tip. Messages sent by KrebsOnSecurity to Kidan’s email addresses were returned as blocked. Messages left with Mr. Kidan’s company, Empire Workforce Solutions, went unreturned.

No doubt the FBI saw the messages from Kidan for what they likely were: The result of Mr. Kidan having his Microsoft Outlook account compromised and used to send malicious email to people in his contacts list.

Thread hijacking attacks are hardly new, but that is mainly true because many Internet users still don’t know how to identify them. The email security firm Proofpoint says it has tracked north of 90 million malicious messages in the last five years that leverage this attack method.

One key reason thread hijacking is so successful is that these attacks generally do not include the tell that exposes most phishing scams: A fabricated sense of urgency. A majority of phishing threats warn of negative consequences should you fail to act quickly — such as an account suspension or an unauthorized high-dollar charge going through.

In contrast, thread hijacking campaigns tend to patiently prey on the natural curiosity of the recipient.

Ryan Kalember, chief strategy officer at Proofpoint, said probably the most ubiquitous examples of thread hijacking are “CEO fraud” or “business email compromise” scams, wherein employees are tricked by an email from a senior executive into wiring millions of dollars to fraudsters overseas.

But Kalember said these low-tech attacks can nevertheless be quite effective because they tend to catch people off-guard.

“It works because you feel like you’re suddenly included in an important conversation,” Kalember said. “It just registers a lot differently when people start reading, because you think you’re observing a private conversation between two different people.”

Some thread hijacking attacks actually involve multiple threat actors who are actively conversing while copying — but not addressing — the recipient.

“We call these multi-persona phishing scams, and they’re often paired with thread hijacking,” Kalember said. “It’s basically a way to build a little more affinity than just copying people on an email. And the longer the conversation goes on, the higher their success rate seems to be because some people start replying to the thread [and participating] psycho-socially.”

The best advice to sidestep phishing scams is to avoid clicking on links or attachments that arrive unbidden in emails, text messages and other mediums. If you’re unsure whether the message is legitimate, take a deep breath and visit the site or service in question manually — ideally, using a browser bookmark so as to avoid potential typosquatting sites.

LongNowMembers of Long Now

Members of Long Now

With thousands of members from across the globe, the Long Now community has a wide range of perspectives, stories, and experiences to share. We're delighted to showcase this curated set of Ignite Talks, created and given by the Long Now members themselves. Presenting on the subjects of their choice, our speakers have precisely 5 minutes to amuse, educate, enlighten, or inspire the audience!

We're opening talk submissions to all members in early April and will send details via email; we can accept both in-person and recorded talks.

And save the date of May 29 to join us in-person and online for a fun and fast-paced evening of Long Now Ignite Talks full of surprising and thoughtful ideas.

365 TomorrowsDream State

Author: Majoki They call us the new DJs—Dream Jockeys—because we stitch together popular playlists for the masses. I think it lacks imagination to piggyback on the long-gone days of vinyl playing over the airways. But that’s human nature. Always harkening back to something familiar, something easy to romanticize, something less threatening. I guess there are […]

The post Dream State appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Sorts of Dates

We've seen loads of bad date handling, but as always, there are new ways to be surprised by the bizarre inventions people come up with. Today, Tim sends us some bad date sorting, in PHP.

    // Function to sort follow-ups by Date
    function cmp($a, $b)  {
        return strcmp(strtotime($a["date"]), strtotime($b["date"]));
    }
   
    // Sort the follow-ups by Date
    usort($data, "cmp");

The cmp function rests in the global namespace, which is a nice way to ensure future confusion- it's got a very specific job, but has a very generic name. And the job it does is… an interesting approach.

The "date" field in our records is a string. It's a string formatted in YYYY-MM-DD HH:MM:SS, and this is a guarantee of the inputs- which we'll get to in a moment. So the first thing that's worth noting is that the strings are already sortable, and nothing about this function needs to exist.

But being useless isn't the end of it. We convert the string time into a Unix timestamp with strtotime, which gives us an integer- also trivially sortable. But then we run that through strcmp, which converts the integer back into a string, so we can do a string comparison on it.

Elsewhere in the code, we use usort, passing it the wonderfully named $data variable, and then applying cmp to sort it.

Unrelated to this code, but a PHP weirdness, we pass the callable cmp as a string to the usort function to apply a sort. Every time I write a PHP article, I learn a new horror of the language, and "strings as callable objects" is definitely horrifying.

Now, a moment ago, I said that we knew the format of the inputs. That's a bold claim, especially for such a generically named function, but it's important: this function is used to sort the results of a database query. That's how we know the format of the dates- the input comes directly from a query.

A query that could easily be modified to include an ORDER BY clause, making this whole thing useless.

And in fact, someone had made that modification to the query, meaning that the data was already sorted before being passed to the usort function, which did its piles of conversions to sort it back into the same order all over again.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

Worse Than FailureCodeSOD: Never Retire

We all know that 2038 is going to be a big year. In a mere 14 years, a bunch of devices are going to have problems.

Less known is the Y2030 problem, which is what Ventsislav is fighting to protect us from.

//POPULATE YEAR DROP DOWN LISTS
for (int year = 2000; year <= 2030; year++)
{
    DYearDropDownList.Items.Add(year.ToString());
    WYearDropDownList.Items.Add(year.ToString());
    MYearDropDownList.Items.Add(year.ToString());
}

//SELECT THE CURRENT YEAR
string strCurrentYear = DateTime.Now.Year.ToString();
for (int i = 0; i < DYearDropDownList.Items.Count; i++)
{
    if (DYearDropDownList.Items[i].Text == strCurrentYear)
    {
        DYearDropDownList.SelectedIndex = i;
        WYearDropDownList.SelectedIndex = i;
        MYearDropDownList.SelectedIndex = i;
        break;
    }
}

Okay, likely less critical than Y2038, but this code, as you might guess, started its life in the year 2000. Clearly, no one thought it'd still be in use this far out, yet… it is.

It's also worth noting that the drop down list object in .NET has a SelectedValue property, so the //SELECT THE CURRENT YEAR section is unnecessary, and could be replaced by a one-liner.

With six years to go, do you think this application is going to be replaced, or is the year for loop just going to change to year <= 2031 and be a manual change for the rest of the application's lifetime?

I mean, they could also change it so it always goes to currentYear or currentYear + 1 or whatever, but do we really think that's a viable option?
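For what it's worth, making the range track the clock is a one-line change in most languages. A sketch of the shape of it (in Python, just to show the idea; the original is .NET):

from datetime import date

current_year = date.today().year
# Populate from 2000 through next year, with no annual maintenance required:
years = [str(y) for y in range(2000, current_year + 2)]

Whether anyone dares touch code that has quietly worked since 2000 is, of course, another question.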

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsCold War

Author: David Barber Like nuclear weapons before them, the plagues each side kept hidden were too terrible to use; instead they waged sly wars of colds and coughs, infectious agents sneezed in trams and crowded lifts, blighting commerce with working days lost to fevers and sickness; secret attacks hard to prove and impossible to stop. […]

The post Cold War appeared first on 365tomorrows.

,

Cryptogram Hardware Vulnerability in Apple’s M-Series Chips

It’s yet another hardware side-channel attack:

The threat resides in the chips’ data memory-dependent prefetcher, a hardware optimization that predicts the memory addresses of data that running code is likely to access in the near future. By loading the contents into the CPU cache before it’s actually needed, the DMP, as the feature is abbreviated, reduces latency between the main memory and the CPU, a common bottleneck in modern computing. DMPs are a relatively new phenomenon found only in M-series chips and Intel’s 13th-generation Raptor Lake microarchitecture, although older forms of prefetchers have been common for years.

[…]

The breakthrough of the new research is that it exposes a previously overlooked behavior of DMPs in Apple silicon: Sometimes they confuse memory content, such as key material, with the pointer value that is used to load other data. As a result, the DMP often reads the data and attempts to treat it as an address to perform memory access. This “dereferencing” of “pointers”—meaning the reading of data and leaking it through a side channel—­is a flagrant violation of the constant-time paradigm.

[…]

The attack, which the researchers have named GoFetch, uses an application that doesn’t require root access, only the same user privileges needed by most third-party applications installed on a macOS system. M-series chips are divided into what are known as clusters. The M1, for example, has two clusters: one containing four efficiency cores and the other four performance cores. As long as the GoFetch app and the targeted cryptography app are running on the same performance cluster—­even when on separate cores within that cluster­—GoFetch can mine enough secrets to leak a secret key.

The attack works against both classical encryption algorithms and a newer generation of encryption that has been hardened to withstand anticipated attacks from quantum computers. The GoFetch app requires less than an hour to extract a 2048-bit RSA key and a little over two hours to extract a 2048-bit Diffie-Hellman key. The attack takes 54 minutes to extract the material required to assemble a Kyber-512 key and about 10 hours for a Dilithium-2 key, not counting offline time needed to process the raw data.

The GoFetch app connects to the targeted app and feeds it inputs that it signs or decrypts. As it's doing this, it extracts the app's secret key that it uses to perform these cryptographic operations. This mechanism means the targeted app need not perform any cryptographic operations on its own during the collection period.

Note that exploiting the vulnerability requires running a malicious app on the target computer. So it could be worse. On the other hand, like many of these hardware side-channel attacks, it’s not possible to patch.

Slashdot thread.

Cryptogram Security Vulnerability in Saflok’s RFID-Based Keycard Locks

It’s pretty devastating:

Today, Ian Carroll, Lennert Wouters, and a team of other security researchers are revealing a hotel keycard hacking technique they call Unsaflok. The technique is a collection of security vulnerabilities that would allow a hacker to almost instantly open several models of Saflok-brand RFID-based keycard locks sold by the Swiss lock maker Dormakaba. The Saflok systems are installed on 3 million doors worldwide, inside 13,000 properties in 131 countries. By exploiting weaknesses in both Dormakaba’s encryption and the underlying RFID system Dormakaba uses, known as MIFARE Classic, Carroll and Wouters have demonstrated just how easily they can open a Saflok keycard lock. Their technique starts with obtaining any keycard from a target hotel—say, by booking a room there or grabbing a keycard out of a box of used ones—then reading a certain code from that card with a $300 RFID read-write device, and finally writing two keycards of their own. When they merely tap those two cards on a lock, the first rewrites a certain piece of the lock’s data, and the second opens it.

Dormakaba says that it’s been working since early last year to make hotels that use Saflok aware of their security flaws and to help them fix or replace the vulnerable locks. For many of the Saflok systems sold in the last eight years, there’s no hardware replacement necessary for each individual lock. Instead, hotels will only need to update or replace the front desk management system and have a technician carry out a relatively quick reprogramming of each lock, door by door. Wouters and Carroll say they were nonetheless told by Dormakaba that, as of this month, only 36 percent of installed Safloks have been updated. Given that the locks aren’t connected to the internet and some older locks will still need a hardware upgrade, they say the full fix will still likely take months longer to roll out, at the very least. Some older installations may take years.

If ever. My guess is that for many locks, this is a permanent vulnerability.

Charles StrossA message from our sponsors: New Book coming!

(You probably expected this announcement a while ago ...)

I've just signed a new two book deal with my publishers, Tor.com publishing in the USA/Canada and Orbit in the UK/rest of world, and the book I'm talking about here and now—the one that's already written and delivered to the Production people who turn it into a thing you'll be able to buy later this year—is a Laundry stand-alone titled "A Conventional Boy".

("Delivered to production" means it is now ready to be copy-edited, typeset, printed/bound/distributed and simultaneously turned into an ebook and pushed through the interwebbytubes to the likes of Kobo and Kindle. I do not have a publication date or a link where you can order it yet: it almost certainly can't show up before July at this point. Yes, everything is running late. No, I have no idea why.)

"A Conventional Boy" is not part of the main (and unfinished) Laundry Files story arc. Nor is it a New Management story. It's a stand-alone story about Derek the DM, set some time between the end of "The Fuller Memorandum" and before "The Delirium Brief". We met Derek originally in "The Nightmare Stacks", and again in "The Labyrinth Index": he's a portly, short-sighted, middle-aged nerd from the Laundry's Forecasting Ops department who also just happens to be the most powerful precognitive the Laundry has tripped over in the past few decades—and a role playing gamer.

When Derek was 14 years old and running a D&D campaign, a schoolteacher overheard him explaining D&D demons to his players and called a government tips hotline. Thirty-odd years later Derek has lived most of his life in Camp Sunshine, the Laundry's magical Gitmo for Elder God cultists. As a trusty/"safe" inmate, he produces the camp newsletter and uses his postal privileges to run a play-by-mail RPG. One day, two pieces of news cross Derek's desk: the camp is going to be closed down and rebuilt as a real prison, and a games convention is coming to the nearest town.

Camp Sunshine is officially escape-proof, but Derek has had a foolproof escape plan socked away for the past decade. He hasn't used it because until now he's never had anywhere to escape to. But now he's facing the demolition of his only home, and he has a destination in mind. Come hell or high water, Derek intends to go to his first ever convention. Little does he realize that hell is also going to the convention ...

I began writing "A Conventional Boy" in 2009, thinking it'd make a nice short story. It went on hold for far too long (it was originally meant to come out before "The Nightmare Stacks"!), and when I finally got back to work on it, the story ran away and grew into a short novel in its own right. As it's rather shorter than the other Laundry novels (although twice as long as, say, "Equoid") the book also includes "Overtime" and "Escape from Yokai Land", both Laundry Files novelettes about Bob, and an afterword providing some background on the 1980s Satanic D&D Panic for readers who don't remember it (which sadly means anyone much younger than myself).

Questions? Ask me anything!

Krebs on SecurityRecent ‘MFA Bombing’ Attacks Targeting Apple Users

Several Apple customers recently reported being targeted in elaborate phishing attacks that involve what appears to be a bug in Apple’s password reset feature. In this scenario, a target’s Apple devices are forced to display dozens of system-level prompts that prevent the devices from being used until the recipient responds “Allow” or “Don’t Allow” to each prompt. Assuming the user manages not to fat-finger the wrong button on the umpteenth password reset request, the scammers will then call the victim while spoofing Apple support in the caller ID, saying the user’s account is under attack and that Apple support needs to “verify” a one-time code.

Some of the many notifications Patel says he received from Apple all at once.

Parth Patel is an entrepreneur who is trying to build a startup in the conversational AI space. On March 23, Patel documented on Twitter/X a recent phishing campaign targeting him that involved what’s known as a “push bombing” or “MFA fatigue” attack, wherein the phishers abuse a feature or weakness of a multi-factor authentication (MFA) system in a way that inundates the target’s device(s) with alerts to approve a password change or login.

“All of my devices started blowing up, my watch, laptop and phone,” Patel told KrebsOnSecurity. “It was like this system notification from Apple to approve [a reset of the account password], but I couldn’t do anything else with my phone. I had to go through and decline like 100-plus notifications.”

Some people confronted with such a deluge may eventually click “Allow” to the incessant password reset prompts — just so they can use their phone again. Others may inadvertently approve one of these prompts, which will also appear on a user’s Apple watch if they have one.

But the attackers in this campaign had an ace up their sleeves: Patel said after denying all of the password reset prompts from Apple, he received a call on his iPhone that said it was from Apple Support (the number displayed was 1-800-275-2273, Apple’s real customer support line).

“I pick up the phone and I’m super suspicious,” Patel recalled. “So I ask them if they can verify some information about me, and after hearing some aggressive typing on his end he gives me all this information about me and it’s totally accurate.”

All of it, that is, except his real name. Patel said when he asked the fake Apple support rep to validate the name they had on file for the Apple account, the caller gave a name that was not his but rather one that Patel has only seen in background reports about him that are for sale at a people-search website called PeopleDataLabs.

Patel said he has worked fairly hard to remove his information from multiple people-search websites, and he found PeopleDataLabs uniquely and consistently listed this inaccurate name as an alias on his consumer profile.

“For some reason, PeopleDataLabs has three profiles that come up when you search for my info, and two of them are mine but one is an elementary school teacher from the midwest,” Patel said. “I asked them to verify my name and they said Anthony.”

Patel said the goal of the voice phishers is to trigger an Apple ID reset code to be sent to the user’s device, which is a text message that includes a one-time password. If the user supplies that one-time code, the attackers can then reset the password on the account and lock the user out. They can also then remotely wipe all of the user’s Apple devices.

THE PHONE NUMBER IS KEY

Chris is a cryptocurrency hedge fund owner who asked that only his first name be used so as not to paint a bigger target on himself. Chris told KrebsOnSecurity he experienced a remarkably similar phishing attempt in late February.

“The first alert I got I hit ‘Don’t Allow’, but then right after that I got like 30 more notifications in a row,” Chris said. “I figured maybe I sat on my phone weird, or was accidentally pushing some button that was causing these, and so I just denied them all.”

Chris says the attackers kept hitting his devices with the reset notifications for several days after that, and at one point he received a call on his iPhone that said it was from Apple support.

“I said I would call them back and hung up,” Chris said, demonstrating the proper response to such unbidden solicitations. “When I called back to the real Apple, they couldn’t say whether anyone had been in a support call with me just then. They just said Apple states very clearly that it will never initiate outbound calls to customers — unless the customer requests to be contacted.”

Massively freaking out that someone was trying to hijack his digital life, Chris said he changed his passwords and then went to an Apple store and bought a new iPhone. From there, he created a new Apple iCloud account using a brand new email address.

Chris said he then proceeded to get even more system alerts on his new iPhone and iCloud account — all the while still sitting at the local Apple Genius Bar.

Chris told KrebsOnSecurity his Genius Bar tech was mystified about the source of the alerts, but Chris said he suspects that whatever the phishers are abusing to rapidly generate these Apple system alerts requires knowing the phone number on file for the target’s Apple account. After all, that was the only aspect of Chris’s new iPhone and iCloud account that hadn’t changed.

WATCH OUT!

“Ken” is a security industry veteran who spoke on condition of anonymity. Ken said he first began receiving these unsolicited system alerts on his Apple devices earlier this year, but that he has not received any phony Apple support calls as others have reported.

“This recently happened to me in the middle of the night at 12:30 a.m.,” Ken said. “And even though I have my Apple watch set to remain quiet during the time I’m usually sleeping at night, it woke me up with one of these alerts. Thank god I didn’t press ‘Allow,’ which was the first option shown on my watch. I had to scroll the watch wheel to see and press the ‘Don’t Allow’ button.”

Ken shared this photo he took of an alert on his watch that woke him up at 12:30 a.m. Ken said he had to scroll on the watch face to see the “Don’t Allow” button.

Ken didn’t know it when all this was happening (and it’s not at all obvious from the Apple prompts), but clicking “Allow” would not have allowed the attackers to change Ken’s password. Rather, clicking “Allow” displays a six digit PIN that must be entered on Ken’s device — allowing Ken to change his password. It appears that these rapid password reset prompts are being used to make a subsequent inbound phone call spoofing Apple more believable.

Ken said he contacted the real Apple support and was eventually escalated to a senior Apple engineer. The engineer assured Ken that turning on an Apple Recovery Key for his account would stop the notifications once and for all.

A recovery key is an optional security feature that Apple says “helps improve the security of your Apple ID account.” It is a randomly generated 28-character code, and when you enable a recovery key it is supposed to disable Apple’s standard account recovery process. The thing is, enabling it is not a simple process, and if you ever lose that code in addition to all of your Apple devices you will be permanently locked out.

Ken said he enabled a recovery key for his account as instructed, but that it hasn’t stopped the unbidden system alerts from appearing on all of his devices every few days.

KrebsOnSecurity tested Ken’s experience, and can confirm that enabling a recovery key does nothing to stop a password reset prompt from being sent to associated Apple devices. Visiting Apple’s “forgot password” page — https://iforgot.apple.com — asks for an email address and for the visitor to solve a CAPTCHA.

After that, the page will display the last two digits of the phone number tied to the Apple account. Filling in the missing digits and hitting submit on that form will send a system alert, whether or not the user has enabled an Apple Recovery Key.

The password reset page at iforgot.apple.com.

RATE LIMITS

What sanely designed authentication system would send dozens of requests for a password change in the span of a few moments, when the first requests haven’t even been acted on by the user? Could this be the result of a bug in Apple’s systems?

Apple has not yet responded to requests for comment.

Throughout 2022, a criminal hacking group known as LAPSUS$ used MFA bombing to great effect in intrusions at Cisco, Microsoft and Uber. In response, Microsoft began enforcing “MFA number matching,” a feature that displays a series of numbers to a user attempting to log in with their credentials. These numbers must then be entered into the account owner’s Microsoft authenticator app on their mobile device to verify they are logging into the account.

Kishan Bagaria is a hobbyist security researcher and engineer who founded the website texts.com (now owned by Automattic), and he’s convinced Apple has a problem on its end. In August 2019, Bagaria reported to Apple a bug that allowed an exploit he dubbed “AirDoS” because it could be used to let an attacker infinitely spam all nearby iOS devices with a system-level prompt to share a file via AirDrop — a file-sharing capability built into Apple products.

Apple fixed that bug nearly four months later in December 2019, thanking Bagaria in the associated security bulletin. Bagaria said Apple’s fix was to add stricter rate limiting on AirDrop requests, and he suspects that someone has figured out a way to bypass Apple’s rate limit on how many of these password reset requests can be sent in a given timeframe.

“I think this could be a legit Apple rate limit bug that should be reported,” Bagaria said.

WHAT CAN YOU DO?

Apple seems to require a phone number to be on file for your account, but after you’ve set up the account it doesn’t have to be a mobile phone number. KrebsOnSecurity’s testing shows Apple will accept a VOIP number (like Google Voice). So, changing your account phone number to a VOIP number that isn’t widely known would be one mitigation here.

One caveat with the VOIP number idea: Unless you include a real mobile number, Apple’s iMessage and Facetime applications will be disabled for that device. This might be a bonus for those concerned about reducing the overall attack surface of their Apple devices, since zero-click zero-days in these applications have repeatedly been used by spyware purveyors.

Also, it appears Apple’s password reset system will accept and respect email aliases. Adding a “+” character after the username portion of your email address — followed by a notation specific to the site you’re signing up at — lets you create an infinite number of unique email addresses tied to the same account.

For instance, if I were signing up at example.com, I might give my email address as krebsonsecurity+example@gmail.com. Then, I simply go back to my inbox and create a corresponding folder called “Example,” along with a new filter that sends any email addressed to that alias to the Example folder. In this case, however, perhaps a less obvious alias than “+apple” would be advisable.

Update, March 27, 5:06 p.m. ET: Added perspective on Ken’s experience. Also included a What Can You Do? section.

Worse Than FailureCodeSOD: Exceptional String Comparisons

As a general rule, I will actually prefer code that is verbose and clear over code that is concise but makes me think. I really don't like to think if I don't have to.

Of course, there's the class of WTF code that is verbose, unclear and also really bad, which Thomas sends us today:

Private Shared Function Mailid_compare(ByVal queryEmail As String, ByVal FnsEmail As String) As Boolean
    Try
        Dim str1 As String = queryEmail
        Dim str2 As String = FnsEmail
        If String.Compare(str1, str2) = 0 Then
            Return True
        Else
            Return False
        End If
    Catch ex As Exception
    End Try
End Function

This VB .Net function could easily be replaced with String.Compare(queryEmail, FnsEmail) = 0. Of course, that'd be a little unclear, and since we only care about equality, we could just use String.Equals(queryEmail, FnsEmail)- which is honestly clearer than having a method called Mailid_compare, which doesn't actually tell me anything useful about what it does.
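Concretely, a minimal replacement- keeping the unfortunate name so existing call sites don't change:

Private Shared Function Mailid_compare(ByVal queryEmail As String, ByVal FnsEmail As String) As Boolean
    ' String.Equals accepts null references without throwing,
    ' so there is nothing here for a Try/Catch to catch.
    Return String.Equals(queryEmail, FnsEmail)
End Function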

Speaking of not doing anything useful, there are a few other pieces of bloat in this function.

First, we plop our input parameters into the variables str1 and str2, which does a great job of making what's happening here less clear. Then we have the traditional "use an if statement to return either true or false".

But the real special magic in this one is the Try/Catch. This is a pretty bog standard useless exception handler. No operation in this function throws an exception- String.Compare will even happily accept null references. Even if somehow an exception was thrown, we wouldn't do anything about it. As a bonus, we'd fall out of the function without ever hitting a Return, handing back the Boolean default of False- quietly converting any failure into "not equal" and throwing downstream code into a bad state.

What's notable in this case is that every function was implemented this way. Every function had a Try/Catch that frequently did nothing, or rarely (usually when they copy/pasted from StackOverflow) printed out the error message, but otherwise just let the program continue.

And that's the real WTF: a codebase polluted with so many do-nothing exception handlers that exceptions become absolutely worthless. Errors in the program let it continue, and the users experience bizarre, inconsistent states as the application fails silently.

Or, to put it another way: this is the .NET equivalent of classic VB's On Error Resume Next, which is exactly the kind of terrible idea it sounds like.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 Tomorrows‘Lineartrope 04.96’

Author: David Dumouriez I thought I was ready. “I was on the precipice, looking down.” Internal count of five. A long five. “I was on the precipice, looking down.” Count ten. “I was on the precipice, looking down.” I noticed a brief, impatient nod. The nod meant ‘again’. I thought. “I was on the precipice […]

The post ‘Lineartrope 04.96’ appeared first on 365tomorrows.

,

Worse Than FailureCodeSOD: Contains a Substring

One of the perks of open source software is that it means that large companies can and will patch it for their needs. Which means we can see what a particular large electronics vendor did with a video player application.

For example, they needed to see if the URL pointed to a stream protected by WideVine, Vudu, or Netflix. They can do this by checking if the filename contains a certain substring. Let's see how they accomplished this…

int get_special_protocol_type(char *filename, char *name)
{
	int type = 0;
	int fWidevine = 0;
	int j;
    	char proto_str[2800] = {'\0', };
      	if (!strcmp("http", name))
       {
		strcpy(proto_str, filename);
		for(j=0;proto_str[j] != '\0';j++)
		{
			if(proto_str[j] == '=')
			{
				j++;
				if(proto_str[j] == 'W')
				{
					j++;
					if(proto_str[j] == 'V')
					{
						type = Widevine_PROTOCOL;
					}
				}
			}
		}
		if (type == 0)
		{
			for(j=0;proto_str[j] != '\0';j++)
			{
				if(proto_str[j] == '=')
				{
					j++;
					if(proto_str[j] == 'V')
					{
						j++;
						if(proto_str[j] == 'U')
						{
							j++;
							if(proto_str[j] == 'D')
							{
								j++;
								if(proto_str[j] == 'U')
								{
									type = VUDU_PROTOCOL;
								}
							}
						}
					}
				}
			}
		}
 		if (type == 0)
      		{
			for(j=0;proto_str[j] != '\0';j++)
			{
				if(proto_str[j] == '=')
				{
					j++;
					if(proto_str[j] == 'N')
					{
						j++;
						if(proto_str[j] == 'F')
						{
							j++;
							if(proto_str[j] == 'L')
							{
								j++;
								if(proto_str[j] == 'X')
								{
									type = Netflix_PROTOCOL;
								}
							}
						}
					}
				}
			}
		}
      	}
	return type;
}

For starters, there's been a lot of discussion around the importance of memory safe languages lately. I would argue that in C/C++ it's not actually hard to write memory safe code; it's just very easy not to. And this is an example- everything in here is a buffer overrun waiting to happen. The core problem is that we're passing pure pointers to char, and relying on the strings being properly null terminated. So we're using the old, unsafe string functions and never checking against the bounds of proto_str to make sure we haven't run off the edge. A malicious input could easily run off the end of the string.

But also, let's talk about that string comparison. They don't even just loop across a pair of strings character by character; they use this bizarre set of nested ifs with incrementing loop variables. Given that they use strcmp, I think we can safely assume the C standard library exists for their target, which means strstr was right there.

It's also worth noting that, since this is a read-only operation, the strcpy is not necessary, though we're in a rough place since they're passing a pointer to char and not including the size, which gets us back to the whole "unsafe string operations" problem.
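For contrast, a version built only on standard library calls might look like the sketch below. The "=WV", "=VUDU", and "=NFLX" tokens are lifted from the original; the constants are stand-ins for whatever header really defines them. It still trusts the caller to pass null-terminated strings- that's the contract a bare char* interface imposes- but the 2800-byte buffer, the strcpy, and the hand-rolled character walk are all gone:

#include <string.h>

/* Stand-ins for the original's protocol constants. */
#define Widevine_PROTOCOL 1
#define VUDU_PROTOCOL     2
#define Netflix_PROTOCOL  3

int get_special_protocol_type(const char *filename, const char *name)
{
    if (strcmp(name, "http") != 0)
        return 0;
    /* Each token embeds the '=', so a plain substring search matches
       exactly the same inputs as the original's nested-if walk. */
    if (strstr(filename, "=WV") != NULL)
        return Widevine_PROTOCOL;
    if (strstr(filename, "=VUDU") != NULL)
        return VUDU_PROTOCOL;
    if (strstr(filename, "=NFLX") != NULL)
        return Netflix_PROTOCOL;
    return 0;
}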

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsSecond-Hand Blades

Author: Julian Miles, Staff Writer “The only things I work with are killers and currency. You don’t look dangerous, and I don’t see you waving cash.” The man brushes imaginary specks from his lapels with the hand not holding a dagger, then gives a little grin as he replies. “Appearances can be- aack!” The man […]

The post Second-Hand Blades appeared first on 365tomorrows.

,

365 TomorrowsVerbatim Thirst

Author: Gabriel Walker Land In every direction there was nothing but baked dirt, tumbleweeds, and flat death. The blazing sun weighed down on me. I didn’t know which way to walk, and I didn’t know why. How I’d gotten there was long since forgotten. Being lost wasn’t the pressing problem. No, the immediate threat was […]

The post Verbatim Thirst appeared first on 365tomorrows.