Planet Russell

,

Charles StrossAnnouncement time!

I am very pleased to be able to admit that the Laundry Files are shortlisted for the Hugo Award for Best Series!

(Astute readers will recall that the Laundry Files were shortlisted—but did not win—in 2019. Per the rules, "A qualifying installment must be published in the qualifying year ... If a Series is a finalist and does not win, it is no longer eligible until at least two more installments consisting of at lest 240,000 words total appear in subsequent years." Since 2019, the Laundry Files have grown by three full novels (the New Management books) and a novella ("Escape from Yokai Land"), totaling about 370,000 words. "Season of Skulls" was published in 2023, hence the series is eligible in 2024.)

The Hugo award winners will be announced at the world science fiction convention in Glasgow this August, on the evening of Sunday August 11th. Full announcement of the entire shortlist here.

In addition to the Hugo nomination, the Kickstarter for the second edition of the Laundry tabletop role playing game, from Cubicle 7 games, goes live for pre-orders in the next month. If you want to be notified when that happens, there's a sign-up page here.

Finally, there's some big news coming soon about film/TV rights, and separately, graphic novel rights, to the Laundry Files. I can't say any more at this point, but expect another announcement (or two!) over the coming months.

I'm sure you have questions. Ask away!

Charles StrossA Wonky Experience

A Wonka Story

This is no longer in the current news cycle, but definitely needs to be filed under "stuff too insane for Charlie to make up", or maybe "promising screwball comedy plot line to explore", or even "perils of outsourcing creative media work to generative AI".

So. Last weekend saw insane news-generating scenes in Glasgow around a public event aimed at children: Willy's Chocolate Experience, a blatant attempt to cash in on Roald Dahl's cautionary children's tale, "Willy Wonka and the Chocolate Factory". Which is currently most prominently associated in the zeitgeist with a 2004 movie directed by Tim Burton, who probably needs no introduction, even to a cinematic illiterate like me. Although I gather a prequel movie (called, predictably, Wonka), came out in 2023.

(Because sooner or later the folks behind "House of Illuminati Ltd" will wise up and delete the website, here's a handy link to how it looked on February 24th via archive.org.)

INDULGE IN A CHOCOLATE FANTASY LIKE NEVER BEFORE - CAPTURE THE ENCHANTMENT ™!

Tickets to Willys Chocolate Experience™ are on sale now!

The event was advertised with amazing, almost hallucinogenic, graphics that were clearly AI generated, and equally clearly not proofread because Stable Diffusion utterly sucks at writing English captions, as opposed to word salad offering enticements such as Catgacating • live performances • Cartchy tuns, exarserdray lollipops, a pasadise of sweet teats.* And tickets were on sale for a mere £35 per child!

Anyway, it hit the news (and not in a good way) and the event was terminated on day one after the police were called. Here's The Guardian's coverage:

The event publicity promised giant mushrooms, candy canes and chocolate fountains, along with special audio and visual effects, all narrated by dancing Oompa-Loompas - the tiny, orange men who power Wonka's chocolate factory in the Roald Dahl book which inspired the prequel film.

But instead, when eager families turned up to the address in Whiteinch, an industrial area of Glasgow, they discovered a sparsely decorated warehouse with a scattering of plastic props, a small bouncy castle and some backdrops pinned against the walls.

Anyway, since the near-riot and hasty shutdown of the event, things have ... recomplicated? I think that's the diplomatic way to phrase it.

First, someone leaked the script for the event on twitter. They'd hired actors and evidently used ChatGPT to generate a script for the show: some of the actors quit in despair, others made a valliant attempt to at least amuse the children. But it didn't work. Interactive audience-participation events are hard work and this one apparently called for the sort of special effects that Disney's Imagineers might have blanched at, or at least asked, "who's paying for this?"

Here's a ThreadReader transcript of the twitter thread about the script (ThreadReader chains tweets together into a single web page, so you don't have to log into the hellsite itself). Note it's in the shape of screenshots of the script and threadreader didn't grab the images, so here's my transcript of the first three:

DIRECTION: (Audience members engage with the interactive flowers, offering compliments, to which the flowers respond with pre-recorded, whimsical thank-yous.)

Wonkidoodle 1: (to a guest) Oh, and if you see a butterfly, whisper your sweetest dream to it. They're our official secret keepers and dream carriers of the garden!

Willy McDuff: (gathering everyone's attention) Now, I must ask, has anyone seen the elusive Bubble Bloom? It's a rare flower that blooms just once every blue moon and fills the air with shimmering bubbles!

DIRECTION: (The stage crew discreetly activates bubble machines, filling the area with bubbles, causing excitement and wonder among the audience.)

Wonkidoodle 2: (pretending to catch bubbles) Quick! Each bubble holds a whisper of enchantment--catch one, and make a wish!

Willy McDuff: (as the bubble-catching frenzy continues) Remember, in the Garden of Enchantment, every moment is a chance for magic, every corner hides a story, and every bubble... (catches a bubble) holds a dream.

DIRECTION: (He opens his hand, and the bubble gently pops, releasing a small, twinkling light that ascends into the rafters, leaving the audience in awe.)

Willy McDuff: (with warmth) My dear friends, take this time to explore, to laugh, and to dream. For in this garden, the magic is real, and the possibilities are endless. And who knows? The next wonder you encounter may just be around the next bend.

DIRECTION: Scene ends with the audience fully immersed in the interactive, magical experience, laughter and joy filling the air as Willy McDuff and the Wonkidoodles continue to engage and delight with their enchanting antics and treats.

DIRECTION: Transition to the Bubble and Lemonade Room

Willy McDuff: (suddenly brightening) Speaking of light spirits, I find myself quite parched after our...unexpected adventure. But fortune smiles upon us, for just beyond this door lies a room filled with refreshments most delightful--the Bubble and Lemonade Room!

DIRECTION: (With a flourish, Willy opens a previously unnoticed door, revealing a room where the air sparkles with floating bubbles, and rivers of sparkling lemonade flow freely.)

Willy McDuff: Here, my dear guests, you may quench your thirst with lemonade that fizzes and dances on the tongue, and chase bubbles that burst with flavors unimaginable. A toast, to adventures shared and friendships forged in the heart of the unknown!

DIRECTION: (The audience, now relieved and rejuvenated by the whimsical turn of events, follows Willy into the Bubble and Lemonade Room, laughter and chatter filling the air once more, as they immerse themselves in the joyous, bubbly wonderland.)

DIRECTION: Transition to the Bubble and Lemonade Room

Willy McDuff: (suddenly brightening) Speaking of light spirits, I find myself quite parched after our...unexpected adventure. But fortune smiles upon us, for just beyond this door lies a room filled with refreshments most delightful-the Bubble and Lemonade Room!

DIRECTION: (With a flourish, Willy opens a previously unnoticed door, revealing a room where the air sparkles with floating bubbles, and rivers of sparkling lemonade flow freely.)

And here is a photo of the Lemonade Room in all its glory.

A trestle table with some paper cups half-full of flat lemonade

Note that in the above directions, near as I can make out, there were no stage crew on site. As Seamus O'Reilly put it, "I get that lazy and uncreative people will use AI to generate concepts. But if the script it barfs out has animatronic flowers, glowing orbs, rivers of lemonade and giggling grass, YOU still have to make those things exist. I'm v confused as to how that part was misunderstood."

Now, if that was all there was to it, it'd merely be annoying. My initial take was that this was a blatant rip-off, a consumer fraud perpetrated by a company ("House of Illuminati") based in London, doing everything by remote control over the internet to fleece those gullible provincials of their wallet contents. (Oh, and that probably includes the actors: did they get paid on the day?) But aftershocks are still rumbling on, a week later.

Per The Daily Beast, "House of Illuminati" issued an apology (via Facebook) on Friday, offering to refund all tickets—but then mysteriously deleted the apology hours later, and posted a new one:

"I want to extend my sincerest apologies to each and every one of you who was looking forward to this event," the latest Facebook post from House of Illuminati reads. "I understand the disappointment and frustration this has caused, and for that, I am truly sorry."

(The individual behind the post goes unnamed.)

"It's important for me to clarify that the organization and decisions surrounding this event were solely my responsibility," the post continues. "I want to make it clear that anyone who was hired externally or offered their help, are not affiliated with the me or the company, any use of faces can cause serious harm to those who did not have any involvement in the making of this event."

"Regarding a personal matter, there will be no wedding, and no wedding was funded by the ticket sales," the post continues further, sans context. "This is a difficult time for me, and I ask for your understanding and privacy."

"There will be no wedding, and no wedding was funded by the ticket sales?" (What on Earth is going on here?)

Finally, The Daily Beast notes that Billy McFarland, the creator of the Fyre Fest fiasco, told TMZ he'd love to give the Wonka organizers a second chance at getting things right at Fyre Fest II.

The mind boggles.

I am now wondering if the whole thing wasn't some sort of extraordinarily elaborate publicity stunt rather than simply a fraud, but I can't for the life of me work out what was going on. Unless it was Jimmy Cauty and Bill Drummond (aka The KLF) getting up to hijinks again? But I can't imagine them doing anything so half-assed ... Least-bad case is that an idiot decided to set up an events company ("how hard can running public arts events be?" —don't answer that) and intended to use the profits and the experience to plan their dream wedding. Which then ran off the rails into a ditch, rolled over, exploded in flames, was sucked up by a tornado and deposited in Oz, their fiancée called off the engagement and eloped with a walrus, and—

It's all downhill from here.

Anyway, the moral of the story so far is: don't use generative AI tools to write scripts for public events, or to produce promotional images, or indeed to do anything at all without an experienced human to sanity check their output! And especially don't use them to fund your wedding ...

UPDATE: Identity of scammer behind Willy's Chocolate Experience exposed -- Youtube video, I haven't had a chance to watch it all yet, will summarize if relevant later; the perp has form for selling ChatGPT generated ebook-shaped "objects" via Amazon.

NEW UPDATE: Glasgow's disastrous Wonka character inspires horror film

A villain devised for the catastrophic Willy's Chocolate Experience, who makes sweets and lives in walls, is to become the subject of a new horror movie.

LATEST UPDATE: House of Illuminati claims "copywrite", "we will protect our interests".

The 'Meth Lab Oompa Loompa Lady' is selling greetings on Cameo for $25.

And Eleanor Morton has a new video out, Glasgow Wonka Experience Tourguide Doesn't Give a F*.

FINAL UPDATE: Props from botched Willy Wonka event raise over £2,000 for Palestinian aid charity: Glasgow record shop Monorail Music auctioned the props on eBay after they were discovered in a bin outside the warehouse where event took place. (So some good came of it in the end ...)

Cryptogram XZ Utils Backdoor

The cybersecurity world got really lucky last week. An intentionally placed backdoor in XZ Utils, an open-source compression utility, was pretty much accidentally discovered by a Microsoft engineer—weeks before it would have been incorporated into both Debian and Red Hat Linux. From ArsTehnica:

Malicious code added to XZ Utils versions 5.6.0 and 5.6.1 modified the way the software functions. The backdoor manipulated sshd, the executable file used to make remote SSH connections. Anyone in possession of a predetermined encryption key could stash any code of their choice in an SSH login certificate, upload it, and execute it on the backdoored device. No one has actually seen code uploaded, so it’s not known what code the attacker planned to run. In theory, the code could allow for just about anything, including stealing encryption keys or installing malware.

It was an incredibly complex backdoor. Installing it was a multi-year process that seems to have involved social engineering the lone unpaid engineer in charge of the utility. More from ArsTechnica:

In 2021, someone with the username JiaT75 made their first known commit to an open source project. In retrospect, the change to the libarchive project is suspicious, because it replaced the safe_fprint function with a variant that has long been recognized as less secure. No one noticed at the time.

The following year, JiaT75 submitted a patch over the XZ Utils mailing list, and, almost immediately, a never-before-seen participant named Jigar Kumar joined the discussion and argued that Lasse Collin, the longtime maintainer of XZ Utils, hadn’t been updating the software often or fast enough. Kumar, with the support of Dennis Ens and several other people who had never had a presence on the list, pressured Collin to bring on an additional developer to maintain the project.

There’s a lot more. The sophistication of both the exploit and the process to get it into the software project scream nation-state operation. It’s reminiscent of Solar Winds, although (1) it would have been much, much worse, and (2) we got really, really lucky.

I simply don’t believe this was the only attempt to slip a backdoor into a critical piece of Internet software, either closed source or open source. Given how lucky we were to detect this one, I believe this kind of operation has been successful in the past. We simply have to stop building our critical national infrastructure on top of random software libraries managed by lone unpaid distracted—or worse—individuals.

Cryptogram In Memoriam: Ross Anderson, 1956-2024

Last week I posted a short memorial of Ross Anderson. The Communications of the ACM asked me to expand it. Here’s the longer version.

Worse Than FailureCodeSOD: Terminated By Nulls

Strings in C are a unique collection of mistakes. The biggest one is the idea of null termination. Null termination is not without its advantages: because you're using a single byte to mark the end of the string, you can have strings of arbitrary length. No need to track the size and worry if your size variable is big enough to hold the end of the string. No complicated data structures. Just "read till you find a 0 byte, and you know you're done."

Of course, this is the root of a lot of evils. Malicious inputs that lack a null terminator, for example, are a common exploit. It's so dangerous that all of the str* functions have strn* versions, which allow you to pass sizes to ensure you don't overrun any buffers.

Dmitri sends us a simple example of someone not quite fully understanding this.

strcpy( buffer, string);
strcat( buffer, "\0");

The first line here copies the contents of string into buffer. It leverages the null terminator to know when the copy can stop. Then, we use strcat, which scans the string for the null terminator, and inserts a new string at the end- the new string, in this case, being the null terminator.

The developer responsible for this is protecting against a string lacking its null terminator by using functions which absolutely require it to be null terminated.

C strings are hard in the best case, but they're a lot harder when you don't understand them.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

365 TomorrowsDeath by Entropy

Author: DJ Tantillo So why am I trapped in a steel box? Intelligence. Once the connection between intelligence and entropy – the latter in the form of the maximization of possible futures as a marker of the former – was shown to be valid for individuals as well as species, the sewer cover was rolled […]

The post Death by Entropy appeared first on 365tomorrows.

,

Planet DebianIan Jackson: Why we’ve voted No to CfD for Derril Water solar farm

[personal profile] ceb and I are members of the Derril Water Solar Park cooperative.

We were recently invited to vote on whether the coop should bid for a Contract for Difference, in a government green electricity auction.

We’ve voted No.

“Green electricity” from your mainstream supplier is a lie

For a while [personal profile] ceb and I have wanted to contribute directly to green energy provision. This isn’t really possible in the mainstream consumer electricy market.

Mainstream electricity suppliers’ “100% green energy” tariffs are pure greenwashing. In a capitalist boondoogle, they basically “divvy up” the electricity so that customers on the (typically more expensive) “green” tariff “get” the green electricity, and the other customers “get” whatever is left. (Of course the electricity is actually all mixed up by the National Grid.) There are fewer people signed up for these tariffs than there is green power generated, so this basically means signing up for a “green” tariff has no effect whatsoever, other than giving evil people more money.

Ripple

About a year ago we heard about Ripple. The structure is a little complicated, but the basic upshot is:

Ripple promote and manage renewable energy schemes. The schemes themselves are each an individual company; the company is largely owned by a co-operative. The co-op is owned by consumers of electricity in the UK., To stop the co-operative being an purely financial investment scheme, shares ownership is limited according to your electricity usage. The electricity is be sold on the open market, and the profits are used to offset members’ electricity bills. (One gotcha from all of this is that for this to work your electricity billing provider has to be signed up with Ripple, but ours, Octopus, is.)

It seemed to us that this was a way for us to directly cause (and pay for!) the actual generation of green electricity.

So, we bought shares in one these co-operatives: we are co-owners of the Derril Water Solar Farm. We signed up for the maximum: funding generating capacity corresponding to 120% of our current electricity usage. We paid a little over £5000 for our shares.

Contracts for Difference

The UK has a renewable energy subsidy scheme, which goes by the name of Contracts for Difference. The idea is that a renewable energy generation company bids in advance, saying that they’ll sell their electricity at Y price, for the duration of the contract (15 years in the current round). The lowest bids win. All the electricity from the participating infrastructure is sold on the open market, but if the market price is low the government makes up the difference, and if the price is high, the government takes the winnings.

This is supposedly good for giving a stable investment environment, since the price the developer is going to get now doesn’t depends on the electricity market over the next 15 years. The CfD system is supposed to encourage development, so you can only apply before you’ve commissioned your generation infrastructure.

Ripple and CfD

Ripple recently invited us to agree that the Derril Water co-operative should bid in the current round of CfDs.

If this goes ahead, and we are one of the auction’s winners, the result would be that, instead of selling our electricity at the market price, we’ll sell it at the fixed CfD price.

This would mean that our return on our investment (which show up as savings on our electricity bills) would be decoupled from market electricity prices, and be much more predictable.

They can’t tell us the price they’d want to bid at, and future electricity prices are rather hard to predict, but it’s clear from the accompanying projections that they think we’d be better off on average with a CfD.

The documentation is very full of financial projections and graphs; other factors aren’t really discussed in any detail.

The rules of the co-op didn’t require them to hold a vote, but very sensibly, for such a fundamental change in the model, they decided to treat it roughly the same way as for a rules change: they’re hoping to get 75% Yes votes.

Voting No

The reason we’re in this co-op at all is because we want to directly fund renewable electricity.

Participating in the CfD auction would involve us competing with capitalist energy companies for government subsidies. Subsidies which are supposed to encourage the provision of green electricity.

It seems to us that participating in this auction would remove most of the difference between what we hoped to do by investing in Derril Water, and just participating in the normal consumer electricity market.

In particular, if we do win in the auction, that’s probably directly removing the funding and investment support model for other, market-investor-funded, projects.

In other words, our buying into Derril Water ceases to be an additional green energy project, changing (in its minor way) the UK’s electricity mix. It becomes a financial transaction much more tenously connected (if connected at all) to helping mitigate the climate emergency.

So our conclusion was that we must vote against.



comment count unavailable comments

Krebs on SecurityApril’s Patch Tuesday Brings Record Number of Fixes

If only Patch Tuesdays came around infrequently — like total solar eclipse rare — instead of just creeping up on us each month like The Man in the Moon. Although to be fair, it would be tough for Microsoft to eclipse the number of vulnerabilities fixed in this month’s patch batch — a record 147 flaws in Windows and related software.

Yes, you read that right. Microsoft today released updates to address 147 security holes in Windows, Office, Azure, .NET Framework, Visual Studio, SQL Server, DNS Server, Windows Defender, Bitlocker, and Windows Secure Boot.

“This is the largest release from Microsoft this year and the largest since at least 2017,” said Dustin Childs, from Trend Micro’s Zero Day Initiative (ZDI). “As far as I can tell, it’s the largest Patch Tuesday release from Microsoft of all time.”

Tempering the sheer volume of this month’s patches is the middling severity of many of the bugs. Only three of April’s vulnerabilities earned Microsoft’s most-dire “critical” rating, meaning they can be abused by malware or malcontents to take remote control over unpatched systems with no help from users.

Most of the flaws that Microsoft deems “more likely to be exploited” this month are marked as “important,” which usually involve bugs that require a bit more user interaction (social engineering) but which nevertheless can result in system security bypass, compromise, and the theft of critical assets.

Ben McCarthy, lead cyber security engineer at Immersive Labs called attention to CVE-2024-20670, an Outlook for Windows spoofing vulnerability described as being easy to exploit. It involves convincing a user to click on a malicious link in an email, which can then steal the user’s password hash and authenticate as the user in another Microsoft service.

Another interesting bug McCarthy pointed to is CVE-2024-29063, which involves hard-coded credentials in Azure’s search backend infrastructure that could be gleaned by taking advantage of Azure AI search.

“This along with many other AI attacks in recent news shows a potential new attack surface that we are just learning how to mitigate against,” McCarthy said. “Microsoft has updated their backend and notified any customers who have been affected by the credential leakage.”

CVE-2024-29988 is a weakness that allows attackers to bypass Windows SmartScreen, a technology Microsoft designed to provide additional protections for end users against phishing and malware attacks. Childs said one ZDI’s researchers found this vulnerability being exploited in the wild, although Microsoft doesn’t currently list CVE-2024-29988 as being exploited.

“I would treat this as in the wild until Microsoft clarifies,” Childs said. “The bug itself acts much like CVE-2024-21412 – a [zero-day threat from February] that bypassed the Mark of the Web feature and allows malware to execute on a target system. Threat actors are sending exploits in a zipped file to evade EDR/NDR detection and then using this bug (and others) to bypass Mark of the Web.”

Update, 7:46 p.m. ET: A previous version of this story said there were no zero-day vulnerabilities fixed this month. BleepingComputer reports that Microsoft has since confirmed that there are actually two zero-days. One is the flaw Childs just mentioned (CVE-2024-21412), and the other is CVE-2024-26234, described as a “proxy driver spoofing” weakness.

Satnam Narang at Tenable notes that this month’s release includes fixes for two dozen flaws in Windows Secure Boot, the majority of which are considered “Exploitation Less Likely” according to Microsoft.

“However, the last time Microsoft patched a flaw in Windows Secure Boot in May 2023 had a notable impact as it was exploited in the wild and linked to the BlackLotus UEFI bootkit, which was sold on dark web forums for $5,000,” Narang said. “BlackLotus can bypass functionality called secure boot, which is designed to block malware from being able to load when booting up. While none of these Secure Boot vulnerabilities addressed this month were exploited in the wild, they serve as a reminder that flaws in Secure Boot persist, and we could see more malicious activity related to Secure Boot in the future.”

For links to individual security advisories indexed by severity, check out ZDI’s blog and the Patch Tuesday post from the SANS Internet Storm Center. Please consider backing up your data or your drive before updating, and drop a note in the comments here if you experience any issues applying these fixes.

Adobe today released nine patches tackling at least two dozen vulnerabilities in a range of software products, including Adobe After Effects, Photoshop, Commerce, InDesign, Experience Manager, Media Encoder, Bridge, Illustrator, and Adobe Animate.

KrebsOnSecurity needs to correct the record on a point mentioned at the end of March’s “Fat Patch Tuesday” post, which looked at new AI capabilities built into Adobe Acrobat that are turned on by default. Adobe has since clarified that its apps won’t use AI to auto-scan your documents, as the original language in its FAQ suggested.

“In practice, no document scanning or analysis occurs unless a user actively engages with the AI features by agreeing to the terms, opening a document, and selecting the AI Assistant or generative summary buttons for that specific document,” Adobe said earlier this month.

Cryptogram US Cyber Safety Review Board on the 2023 Microsoft Exchange Hack

US Cyber Safety Review Board released a report on the summer 2023 hack of Microsoft Exchange by China. It was a serious attack by the Chinese government that accessed the emails of senior U.S. government officials.

From the executive summary:

The Board finds that this intrusion was preventable and should never have occurred. The Board also concludes that Microsoft’s security culture was inadequate and requires an overhaul, particularly in light of the company’s centrality in the technology ecosystem and the level of trust customers place in the company to protect their data and operations. The Board reaches this conclusion based on:

  1. the cascade of Microsoft’s avoidable errors that allowed this intrusion to succeed;
  2. Microsoft’s failure to detect the compromise of its cryptographic crown jewels on its own, relying instead on a customer to reach out to identify anomalies the customer had observed;
  3. the Board’s assessment of security practices at other cloud service providers, which maintained security controls that Microsoft did not;
  4. Microsoft’s failure to detect a compromise of an employee’s laptop from a recently acquired company prior to allowing it to connect to Microsoft’s corporate network in 2021;
  5. Microsoft’s decision not to correct, in a timely manner, its inaccurate public statements about this incident, including a corporate statement that Microsoft believed it had determined the likely root cause of the intrusion when in fact, it still has not; even though Microsoft acknowledged to the Board in November 2023 that its September 6, 2023 blog post about the root cause was inaccurate, it did not update that post until March 12, 2024, as the Board was concluding its review and only after the Board’s repeated questioning about Microsoft’s plans to issue a correction;
  6. the Board’s observation of a separate incident, disclosed by Microsoft in January 2024, the investigation of which was not in the purview of the Board’s review, which revealed a compromise that allowed a different nation-state actor to access highly-sensitive Microsoft corporate email accounts, source code repositories, and internal systems; and
  7. how Microsoft’s ubiquitous and critical products, which underpin essential services that support national security, the foundations of our economy, and public health and safety, require the company to demonstrate the highest standards of security, accountability, and transparency.

The report includes a bunch of recommendations. It’s worth reading in its entirety.

The board was established in early 2022, modeled in spirit after the National Transportation Safety Board. This is their third report.

Here are a few news articles.

Worse Than FailureTwo Pizzas for ME

Gloria was a senior developer at IniMirage, a company that makes custom visualizations for their clients. Over a few years, IniMirage had grown to more than 100 people, but was still very much in startup mode. Because of that, Gloria tried to keep her teams sized for two pizzas. Thomas, the product manager, on the other hand, felt that the company was ready to make big moves, and could scale up the teams: more people could move products faster. And Thomas was her manager, so he was "setting direction."

Gloria's elderly dog had spent the night at the emergency vet, and the company hadn't grown up to "giving sick days" yet, so she was nursing a headache from lack of sleep, when Thomas tried to initiate a Slack huddle. He had a habit of pushing the "Huddle" button any time the mood struct, without rhyme or reason.

She put on her headphones and accepted the call. "It's Gloria. Can you hear me?" She checked her mic, and repeated the check. She waited a minute before hanging up and getting back to work.

About five minutes later, Thomas called again."Hey, did you call me like 10 minutes ago?"

"No, you called me." Gloria facepalmed and took a deep, calming breath. "I couldn't hear you."

Thomas said, "Huh, okay. Anyway, is that demo ready for today?"

Thomas loved making schedules. He usually used Excel. There was just one problem: he rarely shared them, and he rarely read them after making them. Gloria had nothing on her calendar. "What demo?"

"Oh, Dylan said he was ready for a demo. So I scheduled it with Jack."

Jack was the CEO. Dylan was one of Gloria's peers. Gloria checked Github, and said, "Well, Dylan hasn't pushed anything for… months. I haven't heard anything from him. Has he showed you this demo?"

Gloria heard crunching. Thomas was munching on some chips. She heard him typing. After a minute, she said, "Thomas?"

"Oh, sorry, I was distracted."

Clearly. "Okay, I think we should cancel this meeting. I've seen this before, and with a bad demo, we could lose buy in."

Thomas said, "No, no, it'll be fine."

Gloria said, "Okay, well, let me know how that demo goes." She left the call and went back to work, thinking that it'd be Thomas's funeral. A few minutes before the meeting, her inbox dinged. She was now invited to the demo.

She joined the meeting, only to learn that Dylan was out sick and couldn't make the meeting. She spent the time giving project updates on her work, instead of demos, which is what the CEO actually wanted anyway. The meeting ended and everyone was happy- everyone but Gloria.

Gloria wrote an email to the CEO, expressing her concerns. Thomas was inattentive, incommunicative, and had left her alone to manage the team. She felt that she was doing more of the product management work than Thomas was. Jack replied that he appreciated her concerns, but that Thomas was growing into the position.

Julia, one of the other product managers, popped by Gloria's desk a few weeks later. "You know Dylan?"

Gloria said, "Well, I know he hasn't push any code in a literal year and keeps getting sick. I think I've pushed more code to his project than he has, and I'm not on it."

Julia laughed. "Well, he's been fired, but not for that."

Thomas had been pushing for more demos. Which meant he pulled Dylan into more meetings with the CEO. Jack was a "face time" person, and required everyone to turn on their webcams during meetings. It didn't take very many meetings to discover that Dylan was an entirely different person each time. There were multiple Dylans.

"But even without that, HR was going to fire him for not showing up to work," Julia said.

"But… if there were multiple people… why couldn't someone show up?" Gloria realized she was asking the wrong question. "How did Thomas never realize it?"

And if he was multiple people, how could he never get any work done? Dylan was a two-pizza team all by himself.

After the Dylan debacle, Thomas resigned suddenly and left to work at a competitor. A new product manager, Penny, came on board, and was organized, communicative, and attentive. Gloria never heard about Dylan again, and Penny kept the team sizes small.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsFar Off Sirens

Author: Majoki It’s peaceful now. I can concentrate better. Even reflect a little. It hasn’t been like that in a long time. Living in a city that’s eating itself is a noisy place. Even on the calmest days at the lab, there was always the sound of far off sirens. Plaintive calls, as if from […]

The post Far Off Sirens appeared first on 365tomorrows.

Planet DebianMatthew Palmer: How I Tripped Over the Debian Weak Keys Vulnerability

Those of you who haven’t been in IT for far, far too long might not know that next month will be the 16th(!) anniversary of the disclosure of what was, at the time, a fairly earth-shattering revelation: that for about 18 months, the Debian OpenSSL package was generating entirely predictable private keys.

The recent xz-stential threat (thanks to @nixCraft for making me aware of that one), has got me thinking about my own serendipitous interaction with a major vulnerability. Given that the statute of limitations has (probably) run out, I thought I’d share it as a tale of how “huh, that’s weird” can be a powerful threat-hunting tool – but only if you’ve got the time to keep pulling at the thread.

Prelude to an Adventure

Our story begins back in March 2008. I was working at Engine Yard (EY), a now largely-forgotten Rails-focused hosting company, which pioneered several advances in Rails application deployment. Probably EY’s greatest claim to lasting fame is that they helped launch a little code hosting platform you might have heard of, by providing them free infrastructure when they were little more than a glimmer in the Internet’s eye.

I am, of course, talking about everyone’s favourite Microsoft product: GitHub.

Since GitHub was in the right place, at the right time, with a compelling product offering, they quickly started to gain traction, and grow their userbase. With growth comes challenges, amongst them the one we’re focusing on today: SSH login times. Then, as now, GitHub provided SSH access to the git repos they hosted, by SSHing to git@github.com with publickey authentication. They were using the standard way that everyone manages SSH keys: the ~/.ssh/authorized_keys file, and that became a problem as the number of keys started to grow.

The way that SSH uses this file is that, when a user connects and asks for publickey authentication, SSH opens the ~/.ssh/authorized_keys file and scans all of the keys listed in it, looking for a key which matches the key that the user presented. This linear search is normally not a huge problem, because nobody in their right mind puts more than a few keys in their ~/.ssh/authorized_keys, right?

2008-era GitHub giving monkey puppet side-eye to the idea that nobody stores many keys in an authorized_keys file

Of course, as a popular, rapidly-growing service, GitHub was gaining users at a fair clip, to the point that the one big file that stored all the SSH keys was starting to visibly impact SSH login times. This problem was also not going to get any better by itself. Something Had To Be Done.

EY management was keen on making sure GitHub ran well, and so despite it not really being a hosting problem, they were willing to help fix this problem. For some reason, the late, great, Ezra Zygmuntowitz pointed GitHub in my direction, and let me take the time to really get into the problem with the GitHub team. After examining a variety of different possible solutions, we came to the conclusion that the least-worst option was to patch OpenSSH to lookup keys in a MySQL database, indexed on the key fingerprint.

We didn’t take this decision on a whim – it wasn’t a case of “yeah, sure, let’s just hack around with OpenSSH, what could possibly go wrong?”. We knew it was potentially catastrophic if things went sideways, so you can imagine how much worse the other options available were. Ensuring that this wouldn’t compromise security was a lot of the effort that went into the change. In the end, though, we rolled it out in early April, and lo! SSH logins were fast, and we were pretty sure we wouldn’t have to worry about this problem for a long time to come.

Normally, you’d think “patching OpenSSH to make mass SSH logins super fast” would be a good story on its own. But no, this is just the opening scene.

Chekov’s Gun Makes its Appearance

Fast forward a little under a month, to the first few days of May 2008. I get a message from one of the GitHub team, saying that somehow users were able to access other users’ repos over SSH. Naturally, as we’d recently rolled out the OpenSSH patch, which touched this very thing, the code I’d written was suspect number one, so I was called in to help.

The lineup scene from the movie The Usual Suspects They're called The Usual Suspects for a reason, but sometimes, it really is Keyser Söze

Eventually, after more than a little debugging, we discovered that, somehow, there were two users with keys that had the same key fingerprint. This absolutely shouldn’t happen – it’s a bit like winning the lottery twice in a row1 – unless the users had somehow shared their keys with each other, of course. Still, it was worth investigating, just in case it was a web application bug, so the GitHub team reached out to the users impacted, to try and figure out what was going on.

The users professed no knowledge of each other, neither admitted to publicising their key, and couldn’t offer any explanation as to how the other person could possibly have gotten their key.

Then things went from “weird” to “what the…?”. Because another pair of users showed up, sharing a key fingerprint – but it was a different shared key fingerprint. The odds now have gone from “winning the lottery multiple times in a row” to as close to “this literally cannot happen” as makes no difference.

Milhouse from The Simpsons says that We're Through The Looking Glass Here, People

Once we were really, really confident that the OpenSSH patch wasn’t the cause of the problem, my involvement in the problem basically ended. I wasn’t a GitHub employee, and EY had plenty of other customers who needed my help, so I wasn’t able to stay deeply involved in the on-going investigation of The Mystery of the Duplicate Keys.

However, the GitHub team did keep talking to the users involved, and managed to determine the only apparent common factor was that all the users claimed to be using Debian or Ubuntu systems, which was where their SSH keys would have been generated.

That was as far as the investigation had really gotten, when along came May 13, 2008.

Chekov’s Gun Goes Off

With the publication of DSA-1571-1, everything suddenly became clear. Through a well-meaning but ultimately disasterous cleanup of OpenSSL’s randomness generation code, the Debian maintainer had inadvertently reduced the number of possible keys that could be generated by a given user from “bazillions” to a little over 32,000. With so many people signing up to GitHub – some of them no doubt following best practice and freshly generating a separate key – it’s unsurprising that some collisions occurred.

You can imagine the sense of “oooooooh, so that’s what’s going on!” that rippled out once the issue was understood. I was mostly glad that we had conclusive evidence that my OpenSSH patch wasn’t at fault, little knowing how much more contact I was to have with Debian weak keys in the future, running a huge store of known-compromised keys and using them to find misbehaving Certificate Authorities, amongst other things.

Lessons Learned

While I’ve not found a description of exactly when and how Luciano Bello discovered the vulnerability that became CVE-2008-0166, I presume he first came across it some time before it was disclosed – likely before GitHub tripped over it. The stable Debian release that included the vulnerable code had been released a year earlier, so there was plenty of time for Luciano to have discovered key collisions and go “hmm, I wonder what’s going on here?”, then keep digging until the solution presented itself.

The thought “hmm, that’s odd”, followed by intense investigation, leading to the discovery of a major flaw is also what ultimately brought down the recent XZ backdoor. The critical part of that sequence is the ability to do that intense investigation, though.

When I reflect on my brush with the Debian weak keys vulnerability, what sticks out to me is the fact that I didn’t do the deep investigation. I wonder if Luciano hadn’t found it, how long it might have been before it was found. The GitHub team would have continued investigating, presumably, and perhaps they (or I) would have eventually dug deep enough to find it. But we were all super busy – myself, working support tickets at EY, and GitHub feverishly building features and fighting the fires in their rapidly-growing service.

As it was, Luciano was able to take the time to dig in and find out what was happening, but just like the XZ backdoor, I feel like we, as an industry, got a bit lucky that someone with the skills, time, and energy was on hand at the right time to make a huge difference.

It’s a luxury to be able to take the time to really dig into a problem, and it’s a luxury that most of us rarely have. Perhaps an understated takeaway is that somehow we all need to wrestle back some time to follow our hunches and really dig into the things that make us go “hmm…”.

Support My Hunches

If you’d like to help me be able to do intense investigations of mysterious software phenomena, you can shout me a refreshing beverage on ko-fi.


  1. the odds are actually probably more like winning the lottery about twenty times in a row. The numbers involved are staggeringly huge, so it’s easiest to just approximate it as “really, really unlikely”. 

,

Planet DebianBastian Blank: Python dataclasses for Deb822 format

Python includes some helping support for classes that are designed to just hold some data and not much more: Data Classes. It uses plain Python type definitions to specify what you can have and some further information for every field. This will then generate you some useful methods, like __init__ and __repr__, but on request also more. But given that those type definitions are available to other code, a lot more can be done.

There exists several separate packages to work on data classes. For example you can have data validation from JSON with dacite.

But Debian likes a pretty strange format usually called Deb822, which is in fact derived from the RFC 822 format of e-mail messages. Those files includes single messages with a well known format.

So I'd like to introduce some Deb822 format support for Python Data Classes. For now the code resides in the Debian Cloud tool.

Usage

Setup

It uses the standard data classes support and several helper functions. Also you need to enable support for postponed evaluation of annotations.

from __future__ import annotations
from dataclasses import dataclass
from dataclasses_deb822 import read_deb822, field_deb822

Class definition start

Data classes are just normal classes, just with a decorator.

@dataclass
class Package:

Field definitions

You need to specify the exact key to be used for this field.

    package: str = field_deb822('Package')
    version: str = field_deb822('Version')
    arch: str = field_deb822('Architecture')

Default values are also supported.

    multi_arch: Optional[str] = field_deb822(
        'Multi-Arch',
        default=None,
    )

Reading files

for p in read_deb822(Package, sys.stdin, ignore_unknown=True):
    print(p)

Full example

from __future__ import annotations
from dataclasses import dataclass
from debian_cloud_images.utils.dataclasses_deb822 import read_deb822, field_deb822
from typing import Optional
import sys


@dataclass
class Package:
    package: str = field_deb822('Package')
    version: str = field_deb822('Version')
    arch: str = field_deb822('Architecture')
    multi_arch: Optional[str] = field_deb822(
        'Multi-Arch',
        default=None,
    )


for p in read_deb822(Package, sys.stdin, ignore_unknown=True):
    print(p)

Known limitations

Worse Than FailureCodeSOD: They Key To Dictionaries

It's incredibly common to convert objects to dictionaries/maps and back, for all sorts of reasons. Jeff's co-worker was tasked with taking a dictionary which contained three keys, "mail", "telephonenumber", and "facsimiletelephonenumber" into an object representing a contact. This was their solution:

foreach (string item in _ptAttributeDic.Keys)
{
string val = _ptAttributeDic[item];
switch (item)
{
    case "mail":
    if (string.IsNullOrEmpty(base._email))
        base._email = val;
    break;
    case "facsimiletelephonenumber":
    base._faxNum = val;
    break;
    case "telephonenumber":
    base._phoneNumber = val;
    break;
}
}

Yes, we iterate across all of them to find the ones that we want. The dictionary in question is actually quite large, so we spend most of our time here looking at keys we don't care about, to find the three we do. If only there were some easier way, some efficient option for finding items in a dictionary by their name. If only we could just fetch items by their key, then we wouldn't need to have this loop. If only.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

365 TomorrowsBorsen Rules

Author: Julian Miles, Staff Writer The bodies plummeting from the starry sky are screaming. Esteban chuckles. “Shock fields up!” Ambusan stares at him. “Shock fields? Surely you mean catch fields? Shock fields save, but it’ll hurt.” “If a Mistress saw fit to drop them from that high, she didn’t mean for them to have a […]

The post Borsen Rules appeared first on 365tomorrows.

xkcdTypes of Eclipse Photo

,

Planet DebianThorsten Alteholz: My Debian Activities in March 2024

FTP master

This month I accepted 147 and rejected 12 packages. The overall number of packages that got accepted was 151.

If you file an RM bug, please do check whether there are reverse dependencies as well and file RM bugs for them. It is annoying and time-consuming when I have to do the moreinfo dance.

Debian LTS

This was my hundred-seventeenth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

During my allocated time I uploaded:

  • [DLA 3770-1] libnet-cidr-lite-perl security update for one CVE to fix IP parsing and ACLs based on the result
  • [#1067544] Bullseye PU bug for libmicrohttpd
  • Unfortunately XZ happened at the end of month and I had to delay/intentionally delayed other uploads: they will appear as DLA-3781-1 and DLA-3784-1 in April

I also continued to work on qtbase-opensource-src and last but not least did a week of FD.

Debian ELTS

This month was the sixty-eighth ELTS month. During my allocated time I uploaded:

  • [ELA-1062-1]libnet-cidr-lite-perl security update for one CVE to improve parsing of IP addresses in Jessie and Stretch
  • Due to XZ I also delayed the uploads here. They will appear as ELA-1069-1 and DLA-1070-1 in April

I also continued on an update for qtbase-opensource-src in Stretch (and LTS and other releases as well) and did a week of FD.

Debian Printing

This month I uploaded new upstream or bugfix versions of:

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream or bugfix version of:

Debian IoT

This month I uploaded new upstream or bugfix versions of:

Debian Mobcom

This month I uploaded a new upstream or bugfix version of:

misc

This month I uploaded new upstream or bugfix versions of:

365 TomorrowsCinder Three

Author: Stephen Dougherty The smoke rose from a fire that wasn’t a fire. Dr Alvin shifted his old bones in his favorite seat while his young visitor poured him a drink at his request. “How did you end up on the rock in the first place?” The boy sat in the only other seat in […]

The post Cinder Three appeared first on 365tomorrows.

,

Planet DebianJohn Goerzen: Facebook is Censoring Stories about Climate Change and Illegal Raid in Marion, Kansas

It is, sadly, not entirely surprising that Facebook is censoring articles critical of Meta.

The Kansas Reflector published an artical about Meta censoring environmental articles about climate change — deeming them “too controversial”.

Facebook then censored the article about Facebook censorship, and then after an independent site published a copy of the climate change article, Facebook censored it too.

The CNN story says Facebook apologized and said it was a mistake and was fixing it.

Color me skeptical, because today I saw this:

Yes, that’s right: today, April 6, I get a notification that they removed a post from August 12. The notification was dated April 4, but only showed up for me today.

I wonder why my post from August 12 was fine for nearly 8 months, and then all of a sudden, when the same website runs an article critical of Facebook, my 8-month-old post is a problem. Hmm.

Riiiiiight. Cybersecurity.

This isn’t even the first time they’ve done this to me.

On September 11, 2021, they removed my post about the social network Mastodon (click that link for screenshot). A post that, incidentally, had been made 10 months prior to being removed.

While they ultimately reversed themselves, I subsequently wrote Facebook’s Blocking Decisions Are Deliberate — Including Their Censorship of Mastodon.

That this same pattern has played out a second time — again with something that is a very slight challenege to Facebook — seems to validate my conclusion. Facebook lets all sort of hateful garbage infest their site, but anything about climate change — or their own censorship — gets removed, and this pattern persists for years.

There’s a reason I prefer Mastodon these days. You can find me there as @jgoerzen@floss.social.

So. I’ve written this blog post. And then I’m going to post it to Facebook. Let’s see if they try to censor me for a third time. Bring it, Facebook.

365 TomorrowsThe Lift Rider

Author: Aubrey Williams Every Tuesday and Thursday I have business in the Kirk Tower, and take the lift to the 21st floor. I’ve done this for the past three months, and every time I take it, regardless of which floor I start from, there’s always the same man in there. He’s clean-cut, a bit like […]

The post The Lift Rider appeared first on 365tomorrows.

Planet DebianJunichi Uekawa: Trying to explain analogue clock.

Trying to explain analogue clock. It's hard to explain. Tried adding some things for affordance, and it is still not enough. So it's not obvious which arm is the hour and which arm is the minute. analog clock

,

Planet DebianPaul Wise: FLOSS Activities March 2024

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Administration

  • Debian wiki: approve accounts

Communication

  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.12.8.2.0 on CRAN: Upstream Fix

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1136 other packages on CRAN, downloaded 33.5 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 578 times according to Google Scholar.

This release brings a new upstream bugfix release Armadillo 12.8.2 prepared by Conrad two days ago. It took the usual day to noodle over 1100+ reverse dependencies and ensure two failures were independent of the upgrade (i.e., “no change to worse” in CRAN parlance). It took CRAN another because we hit a random network outage for (spurious) NOTE on a remote URL, and were then caught in the shrapnel from another large package ecosystem update spuriously pointing some build failures that were due to a missing rebuild to us. All good, as human intervention comes to the rescue.

The set of changes since the last CRAN release follows.

Changes in RcppArmadillo version 0.12.8.2.0 (2024-04-02)

  • Upgraded to Armadillo release 12.8.2 (Cortisol Injector)

    • Workaround for FFTW3 header clash

    • Workaround in testing framework for issue under macOS

    • Minor cleanups to reduce code bloat

    • Improved documentation

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

David BrinCynics are no help… nor are those pushing the remaking of humanity

Heading out for the eclipse... likely to be messed up by thunderstorms!  These things never work out.  Anyway, here's a little mini-rant about CYNICISM that might amuse you. BTW for the record, I really approve of the guys I am criticizing here!  They are of value to the world. I just wish they'd add a little bit of a song to their messages of gloom. Optimism is a partner of cynicism, if you wanna get things done.  It makes you more effective!



== Jeez, man. Always look (at least a bit) on the bright side of life! ==


Carumba! Bruce Sterling and Cory Doctorow are younger than me. But in this interview with Tim Ventura, Bruce goes full – “I hope I don’t live so long that I’ll see the world’s final decay into dull, incompetent worldwide oligarchy.” (Or something like that.)  


Dang, well I hope never to become a cranky geezer shouting at (literal) clouds! I mean ‘recent science is boring’? Jeepers, Bruce, you really need to get out more. The wave of great new stuff is accelerating!  


It’s not that cynicism has no place – I bought a terrifically cynical novella from Bruce some years back – about a delightful, fictionalized future Silvio Berlusconi. But there the cynicism was mixed with lovely riffs of optimistic techno-joy and faith in progress. Still, misanthropists can contribute! Indeed, beyond his relentless cynicism-chic, Bruce does note some real, worrisome trends, like looming oligarchy-dominance and their blatant War on Science. And, of course, we’re all fretful about the spasm of crises recently unleashed by Vlad Putin all over the globe, desperately hurling every trick he’s gathered, in order to stave off his own version of the movie “Downfall.” 

Still, when Bruce goes: “I thought civilization would become stodgy…”  Well, sure. Maybe? But do you have to be the poster boy?

It’s not that Bruce doesn’t say way interesting things!  About half of his cynicism riffs are clever or very informative, adding to the piles of contrarian tradeoffs that I keep on the cluttered desk of my mind, weighing how – not whether – to fight against doom and oligarchy, with all my might and to my last breath!

Example: He asserts that there’s no Russo-Ukrainian ‘war’ going on, in Belgrade? Well, maybe not with overt violence. But expatriate Russians are talking to Ukrainian refugees and learning they aren’t Nazis, or remotely interested in being Russian, and are absolutely determined to be Ukrainian. And Vlad has to fear those expatriates coming home. And talking.


What all of this reveals is the truly controlling factor that underlies all our surface struggles over ideology and perception of the world. That factor is personality. 


Want an example? WHY do almost all conservative intellectuals drift toward obsession with so-called “cyclical history?” 


Like the current fetish on the right for an insanely stoop'id book called The Fourth Turning? 


Likewise Nazi ice-moon cults and confederate Book of Revelation millennialism? Their one shared personality driver is a desperate wish for things to cycle back to changelessness, Especially to counter the Left’s equally-compulsive, dreamy mania to “re-forge humanity!” 


But neither of those personality-propelled obsessions hold a candle to the grumpy-dour geezer mantra: 


Fie on the future! It’s only an ever-dull muddling-through of more-of-the-same! Oh, and get off my lawn!”


Two last thoughts: 


First, Bruce opines that we’ve already lost to the oligarchies (so why bother?) But that’s okay since oligarchies inevitably collapse through incompetence!


Well, that’s half right. Gaze across the last 6000 years, a dismal epoch when feudal rule by owner-lord families and their inheritance brats dominated every continent (those with agriculture). 


Across all those bleak centuries of relentlessly-enforced stupidity and misrule by inheritance brats, specific families and dynasties generally did ‘collapse’! 


What did NOT collapse was the overall pattern of male-run, harem-seeking called feudalism, which continued, unvaried, almost all that time. For that pattern to change called for both new technologies+education plus determination by savvy new generations. 


For 200 years, successively smarter innovators have struggled to overcome repeated putsch attempts by that oligarchic-feudal attractor state. And those savvy, determined, progressive-egalitarian-scientific innovators succeeded - sometimes just barely – at maintaining this enlightenment miracle.  Counting on their heirs and successors – we, the living – to do the same.


Moreover, what other Blue Americans accomplished in the 1770s and 1860s and 1960s, we can do yet again, with vastly improved tools and skills plus some of that ancestral grit! Moreover, we’ll do it with or without the help of addictive cynics.


Finally, Bruce is – or once was - a maven of techno-cyberpunk – and yet, what was the most-thrilling article of tool/hardware he chose to share with us this time? A mega Swiss Army knife?  


Okay, guy. Enjoy cynical retirement. Us young fools will keep fighting for the dream.


--- 


Posted in honor of another friend and colleague - older but perennially optimistic. My friend Vernor Vinge.


Planet DebianBits from Debian: apt install dpl-candidate: Sruthi Chandran

The Debian Project Developers will shortly vote for a new Debian Project Leader known as the DPL.

The DPL is the official representative of representative of The Debian Project tasked with managing the overall project, its vision, direction, and finances.

The DPL is also responsible for the selection of Delegates, defining areas of responsibility within the project, the coordination of Developers, and making decisions required for the project.

Our outgoing and present DPL Jonathan Carter served 4 terms, from 2020 through 2024. Jonathan shared his last Bits from the DPL post to Debian recently and his hopes for the future of Debian.

Recently, we sat with the two present candidates for the DPL position asking questions to find out who they really are in a series of interviews about their platforms, visions for Debian, lives, and even their favorite text editors. The interviews were conducted by disaster2life (Yashraj Moghe) and made available from video and audio transcriptions:

  • Andreas Tille [Interview]
  • Sruthi Chandran [this document]

Voting for the position starts on April 6, 2024.

Editors' note: This is our official return to Debian interviews, readers should stay tuned for more upcoming interviews with Developers and other important figures in Debian as part of our "Meet your Debian Developer" series. We used the following tools and services: Turboscribe.ai for the transcription from the audio and video files, IRC: Oftc.net for communication, Jitsi meet for interviews, and Open Broadcaster Software (OBS) for editing and video. While we encountered many technical difficulties in the return to this process, we are still able and proud to present the transcripts of the interviews edited only in a few areas for readability.

2024 Debian Project Leader Candidate: Sruthi Chandran

Sruthi's interview

Hi Sruthi, so for the first question, who are you and could you tell us a little bit about yourself?

[Sruthi]:

I usually talk about me whenever I am talking about answering the question who am I, I usually say like I am a librarian turned free software enthusiast and a Debian Developer. So I had no technical background and I learned, I was introduced to free software through my husband and then I learned Debian packaging, and eventually I became a Debian Developer. So I always give my example to people who say I am not technically inclined, I don't have technical background so I can't contribute to free software.

So yeah, that's what I refer to myself.

For the next question, could you tell me what do you do in Debian, and could you mention your story up until here today?

[Sruthi]:

Okay, so let me start from my initial days in Debian. I started contributing to Debian, my first contribution was a Tibetan font. We went to a Tibetan place and they were saying they didn't have a font in Linux.

So that's how I started contributing. Then I moved on to Ruby packages, then I have some JavaScript and Go packages, all dependencies of GitLab. So I was involved with maintaining GitLab for some time, now I'm not very active there.

But yeah, so GitLab was the main package I was contributing to since I contributed since 2016 to maybe like 2020 or something. Later I have come [over to] packaging. Now I am part of some of the teams, delegated teams, like community team and outreach team, as well as the Debconf committee. And the biggest, I think, my activity in Debian, I would say is organizing Debconf 2023. So it was a great experience and yeah, so that's my story in Debian.

So what are three key terms about you and your candidacy?

[Sruthi]:

Okay, let me first think about it. For candidacy, I can start with diversity is one point I started expressing from the first time I contested for DPL. But to be honest, that's the main point I want to bring.

[Yashraj]:

So for diversity, if you could break down your thoughts on diversity and make them, [about] your three points including diversity.

[Sruthi]:

So in addition to, eventually when starting it was just diversity. Now I have like a bit more ideas, like community, like I want to be a leader for the Debian community. More than, I don't know, maybe people may not agree, but I would say I want to be a leader of Debian community rather than a Debian operating system.

I connect to community more and third point I would say.

The term of a DPL lasts for an year. So what do you think during, what would you try to do during that, that you can't do from your position now?

[Sruthi]:

Okay. So I, like, I am very happy with the structure of Debian and how things work in Debian. Like you can do almost a lot of things, like almost all things without being a DPL.

Whatever change you want to bring about or whatever you want to do, you can do without being a DPL. Anyone, like every DD has the same rights. Only things I feel [the] DPL has hold on are mainly the budget or the funding part, which like, that's where they do the decision making part.

And then comes like, and one advantage of DPL driving some idea is that somehow people tend to listen to that with more, like, tend to give more attention to what DPL is saying rather than a normal DD. So I wanted to, like, I have answered some of the questions on how to, how I plan to do the financial budgeting part, how I want to handle, like, and the other thing is using the extra attention that I get as a DPL, I would like to obviously start with the diversity aspect in Debian. And yeah, like, I, what I want to do is not, like, be a leader and say, like, take Debian to one direction where I want to go, but I would rather take suggestions and inputs from the whole community and go about with that.

So yes, that's what I would say.

And taking a less serious question now, what is your preferred text editor?

[Sruthi]:

Vim.

[Yashraj]:

Vim, wholeheartedly team Vim?

[Sruthi]:

Yes.

[Yashraj]:

Great. Well, this was made in Vim, all the text for this.

[Sruthi]:

So, like, since you mentioned extra data, I'll give my example, like, it's just a fun note, when I started contributing to Debian, as I mentioned, I didn't have any knowledge about free software, like Debian, and I was not used to even using Linux. So, and I didn't have experience with these text editors. So, when I started contributing, I used to do the editing part using gedit.

So, that's how I started. Eventually, I moved to Nano, and once I reached Vim, I didn't move on.

Team Vim. Next question. What, what do you think is the importance of the Debian project in the world today? And where would you like to see it in 10 years, like 10 years into the future?

[Sruthi]:

Okay. So, Debian, as we all know, is referred to as the universal operating system without, like, it is said for a reason. We have hundreds and hundreds of operating systems, like Linux, distributions based on Debian.

So, I believe Debian, like even now, Debian has good influence on the, at least on the Linux or Linux ecosystem. So, what we implement in Debian has, like, is going to affect quite a lot of, like, a very good percentage of people using Linux. So, yes.

So, I think Debian is one of the leading Linux distributions. And I think in 10 years, we should be able to reach a position, like, where we are not, like, even now, like, even these many years after having Linux, we face a lot of problems in newer and newer hardware coming up and installing on them is a big problem. Like, firmwares and all those things are getting more and more complicated.

Like, it should be getting simpler, but it's getting more and more complicated. So, I, one thing I would imagine, like, I don't know if we will ever reach there, but I would imagine that eventually with the Debian, we should be able to have some, at least a few of the hardware developers or hardware producers have Debian pre-installed and those kind of things. Like, not, like, become, I'm not saying it's all, it's also available right now.

What I'm saying is that it becomes prominent enough to be opted as, like, default distro.

What part of Debian has made you And what part of the project has kept you going all through these years?

[Sruthi]:

Okay. So, I started to contribute in 2016, and I was part of the team doing GitLab packaging, and we did have a lot of training workshops and those kind of things within India. And I was, like, I had interacted with some of the Indian DDs, but I never got, like, even through chat or mail.

I didn't have a lot of interaction with the rest of the world, DDs. And the 2019 Debconf changed my whole perspective about Debian. Before that, I wasn't, like, even, I was interested in free software.

I was doing the technical stuff and all. But after DebConf, my whole idea has been, like, my focus changed to the community. Debian community is a very welcoming, very interesting community to be with.

And so, I believe that, like, 2019 DebConf was a for me. And that kept, from 2019, my focus has been to how to support, like, how, I moved to the community part of Debian from there. Then in 2020 I became part of the community team, and, like, I started being part of other teams.

So, these, I would say, the Debian community is the one, like, aspect of Debian that keeps me whole, keeps me held on to the Debian ecosystem as a whole.

Continuing to speak about Debian, what do you think, what is the first thing that comes to your mind when you think of Debian, like, the word, the community, what's the first thing?

[Sruthi]:

I think I may sound like a broken record or something.

[Yashraj]:

No, no.

[Sruthi]:

Again, I would say the Debian community, like, it's the people who makes Debian, that makes Debian special.

Like, apart from that, if I say, I would say I'm very, like, one part of Debian that makes me very happy is the, how the governing system of Debian works, the Debian constitution and all those things, like, it's a very unique thing for Debian. And, and it's like, when people say you can't work without a proper, like, establishment or even somebody deciding everything for you, it's difficult. When people say, like, we have been, Debian has been proving it for quite a long time now, that it's possible.

So, so that's one thing I believe, like, that's one unique point. And I am very proud about that.

What areas do you think Debian is failing in, how can it (that standing) be improved?

[Sruthi]:

So, I think where Debian is failing now is getting new people into Debian. Like, I don't remember, like, exactly the answer. But I remember hearing someone mention, like, the average age of a Debian Developer is, like, above 40 or 45 or something, like, exact age, I don't remember.

But it's like, Debian is getting old. Like, the people in Debian are getting old and we are not getting enough of new people into Debian. And that's very important to have people, like, new people coming up.

Otherwise, eventually, like, after a few years, nobody, like, we won't have enough people to take the project forward. So, yeah, I believe that is where we need to work on. We are doing some efforts, like, being part of GSOC or outreachy and having maybe other events, like, local events. Like, we used to have a lot of Debian packaging workshops in India. And those kind of, I think, in Brazil and all, they all have, like, local communities are doing. But we are not very successful in retaining the people who maybe come and try out things.

But we are not very good at retaining the people, like, retaining people who come. So, we need to work on those things. Right now, I don't have a solid answer for that.

But one thing, like, I was thinking about is, like, having a Debian specific outreach project, wherein the focus will be about the Debian, like, starting will be more on, like, usually what happens in GSOC and outreach is that people come, have the, do the contributions, and they go back. Like, they don't have that connection with the Debian, like, Debian community or Debian project. So, what I envision with these, the Debian outreach, the Debian specific outreach is that we have some part of the internship, like, even before starting the internship, we have some sessions and, like, with the people in Debian having, like, getting them introduced to the Debian philosophy and Debian community and Debian, how Debian works.

And those things, we focus on that. And then we move on to the technical internship parts. So, I believe this could do some good in having, like, when you have people you can connect to, you tend to stay back in a project mode.

When you feel something more than, like, right now, we have so many technical stuff to do, like, the choice for a college student is endless. So, if they want, if they stay back for something, like, maybe for Debian, I would say, we need to have them connected to the Debian project before we go into technical parts. Like, technical parts, like, there are other things as well, where they can go and do the technical part, but, like, they can come here, like, yeah.

So, that's what I was saying. Focused outreach projects is one thing. That's just one.

That's not enough. We need more of, like, more ideas to have more new people come up. And I'm very happy with, like, the DebConf thing. We tend to get more and more people from the places where we have a DebConf. Brazil is an example. After the Debconf, they have quite a good improvement on Debian contributors.

And I think in India also, it did give a good result. Like, we have more people contributing and staying back and those things. So, yeah.

So, these were the things I would say, like, we can do to improve.

For the final question, what field in free software do you, what field in free software generally do you think requires the most work to be put into it? What do you think is Debian's part in that field?

[Sruthi]:

Okay. Like, right now, what comes to my mind is the free software licenses parts. Like, we have a lot of free software licenses, and there are non-free software licenses.

But currently, I feel free software is having a big problem in enforcing these licenses. Like, there are, there may be big corporations or like some people who take up the whole, the code and may not follow the whole, for example, the GPL licenses. Like, we don't know how much of those, how much of the free softwares are used in the bigger things.

Yeah, I agree. There are a lot of corporations who are afraid to touch free software. But there would be good amount of free software, free work that converts into property, things violating the free software licenses and those things.

And we do not have the kind of like, we have SFLC, SFC, etc. But still, we do not have the ability to go behind and trace and implement the licenses. So, enforce those licenses and bring people who are violating the licenses forward and those kind of things is challenging because one thing is it takes time, like, and most importantly, money is required for the legal stuff.

And not always people who like people who make small software, or maybe big, but they may not have the kind of time and money to have these things enforced. So, that's a big challenge free software is facing, especially in our current scenario. I feel we are having those, like, we need to find ways how we can get it sorted.

I don't have an answer right now what to do. But this is a challenge I felt like and Debian's part in that. Yeah, as I said, I don't have a solution for that.

But the Debian, so DFSG and Debian sticking on to the free software licenses is a good support, I think.

So, that was the final question, Do you have anything else you want to mention for anyone watching this?

[Sruthi]:

Not really, like, I am happy, like, I think I was able to answer the questions. And yeah, I would say who is watching. I won't say like, I'm the best DPL candidate, you can't have a better one or something.

I stand for a reason. And if you believe in that, or the Debian community and Debian diversity, and those kinds of things, if you believe it, I hope you would be interested, like, you would want to vote for me. That's it.

Like, I'm not, I'll make it very clear. I'm not doing a technical leadership part here. So, those, I can't convince people who want technical leadership to vote for me.

But I would say people who connect with me, I hope they vote for me.

Planet DebianBits from Debian: apt install dpl-candidate: Andreas Tille

The Debian Project Developers will shortly vote for a new Debian Project Leader known as the DPL.

The Project Leader is the official representative of The Debian Project tasked with managing the overall project, its vision, direction, and finances.

The DPL is also responsible for the selection of Delegates, defining areas of responsibility within the project, the coordination of Developers, and making decisions required for the project.

Our outgoing and present DPL Jonathan Carter served 4 terms, from 2020 through 2024. Jonathan shared his last Bits from the DPL post to Debian recently and his hopes for the future of Debian.

Recently, we sat with the two present candidates for the DPL position asking questions to find out who they really are in a series of interviews about their platforms, visions for Debian, lives, and even their favorite text editors. The interviews were conducted by disaster2life (Yashraj Moghe) and made available from video and audio transcriptions:

  • Andreas Tille [this document]
  • Sruthi Chandran [Interview]

Voting for the position starts on April 6, 2024.

Editors' note: This is our official return to Debian interviews, readers should stay tuned for more upcoming interviews with Developers and other important figures in Debian as part of our "Meet your Debian Developer" series. We used the following tools and services: Turboscribe.ai for the transcription from the audio and video files, IRC: Oftc.net for communication, Jitsi meet for interviews, and Open Broadcaster Software (OBS) for editing and video. While we encountered many technical difficulties in the return to this process, we are still able and proud to present the transcripts of the interviews edited only in a few areas for readability.

2024 Debian Project Leader Candidate: Andrea Tille

Andreas' Interview

Who are you? Tell us a little about yourself.

[Andreas]:

How am I? Well, I'm, as I wrote in my platform, I'm a proud grandfather doing a lot of free software stuff, doing a lot of sports, have some goals in mind which I like to do and hopefully for the best of Debian.

And How are you today?

[Andreas]:

How I'm doing today? Well, actually I have some headaches but it's fine for the interview.

So, usually I feel very good. Spring was coming here and today it's raining and I plan to do a bicycle tour tomorrow and hope that I do not get really sick but yeah, for the interview it's fine.

What do you do in Debian? Could you mention your story here?

[Andreas]:

Yeah, well, I started with Debian kind of an accident because I wanted to have some package salvaged which is called WordNet. It's a monolingual dictionary and I did not really plan to do more than maybe 10 packages or so. I had some kind of training with xTeddy which is totally unimportant, a cute teddy you can put on your desktop.

So, and then well, more or less I thought how can I make Debian attractive for my employer which is a medical institute and so on. It could make sense to package bioinformatics and medicine software and it somehow evolved in a direction I did neither expect it nor wanted to do, that I'm currently the most busy uploader in Debian, created several teams around it.

DebianMate is very well known from me. I created the Blends team to create teams and techniques around what we are doing which was Debian TIS, Debian Edu, Debian Science and so on and I also created the packaging team for R, for the statistics package R which is technically based and not topic based. All these blends are covering a certain topic and R is just needed by lots of these blends.

So, yeah, and to cope with all this I have written a script which is routing an update to manage all these uploads more or less automatically. So, I think I had one day where I uploaded 21 new packages but it's just automatically generated, right? So, it's on one day more than I ever planned to do.

What is the first thing you think of when you think of Debian?

Editors' note: The question was misunderstood as the “worst thing you think of when you think of Debian”

[Andreas]:

The worst thing I think about Debian, it's complicated. I think today on Debian board I was asked about the technical progress I want to make and in my opinion we need to standardize things inside Debian. For instance, bringing all the packages to salsa, follow some common standards, some common workflow which is extremely helpful.

As I said, if I'm that productive with my own packages we can adopt this in general, at least in most cases I think. I made a lot of good experience by the support of well-formed teams. Well-formed teams are those teams where people support each other, help each other.

For instance, how to say, I'm a physicist by profession so I'm not an IT expert. I can tell apart what works and what not but I'm not an expert in those packages. I do and the amount of packages is so high that I do not even understand all the techniques they are covering like Go, Rust and something like this.

And I also don't speak Java and I had a problem once in the middle of the night and I've sent the email to the list and was a Java problem and I woke up in the morning and it was solved. This is what I call a team. I don't call a team some common repository that is used by random people for different packages also but it's working together, don't hesitate to solve other people's problems and permit people to get active.

This is what I call a team and this is also something I observed in, it's hard to give a percentage, in a lot of other teams but we have other people who do not even understand the concept of the team. Why is working together make some advantage and this is also a tough thing. I [would] like to tackle in my term if I get elected to form solid teams using the common workflow. This is one thing.

The other thing is that we have a lot of good people in our infrastructure like FTP masters, DSA and so on. I have the feeling they have a lot of work and are working more or less on their limits, and I like to talk to them [to ask] what kind of change we could do to move that limits or move their personal health to the better side.

The DPL term lasts for a year, What would you do during that you couldn't do now?

[Andreas]:

Yeah, well this is basically what I said are my main issues. I need to admit I have no really clear imagination what kind of tasks will come to me as a DPL because all these financial issues and law issues possible and issues [that] people who are not really friendly to Debian might create. I'm afraid these things might occupy a lot of time and I can't say much about this because I simply don't know.

What are three key terms about you and your candidacy?

[Andreas]:

As I said, I like to work on standards, I’d like to make Debian try [to get it right so] that people don't get overworked, this third key point is be inviting to newcomers, to everybody who wants to come. Yeah, I also mentioned in my term this diversity issue, geographical and from gender point of view. This may be the three points I consider most important.

Preferred text editor?

[Andreas]:

Yeah, my preferred one? Ah, well, I have no preferred text editor. I'm using the Midnight Commander very frequently which has an internal editor which is convenient for small text. For other things, I usually use VI but I also use Emacs from time to time. So, no, I have not preferred text editor. Whatever works nicely for me.

What is the importance of the community in the Debian Project? How would like to see it evolving over the next few years?

[Andreas]:

Yeah, I think the community is extremely important. So, I was on a lot of DebConfs. I think it's not really 20 but 17 or 18 DebCons and I really enjoyed these events every year because I met so many friends and met so many interesting people that it's really enriching my life and those who I never met in person but have read interesting things and yeah, Debian community makes really a part of my life.

And how do you think it should evolve specifically?

[Andreas]:

Yeah, for instance, last year in Kochi, it became even clearer to me that the geographical diversity is a really strong point. Just discussing with some women from India who is afraid about not coming next year to Busan because there's a problem with Shanghai and so on. I'm not really sure how we can solve this but I think this is a problem at least I wish to tackle and yeah, this is an interesting point, the geographical diversity and I'm running the so-called mentoring of the month.

This is a small project to attract newcomers for the Debian Med team which has the focus on medical packages and I learned that we had always men applying for this and so I said, okay, I dropped the constraint of medical packages.

Any topic is fine, I teach you packaging but it must be someone who does not consider himself a man. I got only two applicants, no, actually, I got one applicant and one response which was kind of strange if I'm hunting for women or so.

I did not understand but I got one response and interestingly, it was for me one of the least expected counters. It was from Iran and I met a very nice woman, very open, very skilled and gifted and did a good job or have even lose contact today and maybe we need more actively approach groups that are underrepresented. I don't know if what's a good means which I did but at least I tried and so I try to think about these kind of things.

What part of Debian has made you smile? What part of the project has kept you going all through the years?

[Andreas]:

Well, the card game which is called Mao on the DebConf made me smile all the time. I admit I joined only two or three times even if I really love this kind of games but I was occupied by other stuff so this made me really smile. I also think the first online DebConf in 2020 made me smile because we had this kind of short video sequences and I tried to make a funny video sequence about every DebConf I attended before. This is really funny moments but yeah, it's not only smile but yeah.

One thing maybe it's totally unconnected to Debian but I learned personally something in Debian that we have a do-ocracy and you can do things which you think that are right if not going in between someone else, right? So respect everybody else but otherwise you can do so.

And in 2020 I also started to take trees which are growing widely in my garden and plant them into the woods because in our woods a lot of trees are dying and so I just do something because I can. I have the resource to do something, take the small tree and bring it into the woods because it does not harm anybody. I asked the forester if it is okay, yes, yes, okay. So everybody can do so but I think the idea to do something like this came also because of the free software idea. You have the resources, you have the computer, you can do something and you do something productive, right? And when thinking about this I think it was also my Debian work.

Meanwhile I have planted more than 3,000 trees so it's not a small number but yeah, I enjoy this.

What part of Debian would you have some criticisms for?

[Andreas]:

Yeah, it's basically the same as I said before. We need more standards to work together. I do not want to repeat this but this is what I think, yeah.

What field in Free Software generally do you think requires the most work to be put into it? What do you think is Debian's part in the field?

[Andreas]:

It's also in general, the thing is the fact that I'm maintaining packages which are usually as modern software is maintained in Git, which is fine but we have some software which is at Sourceport, we have software laying around somewhere, we have software where Debian somehow became Upstream because nobody is caring anymore and free software is very different in several things, ways and well, I in principle like freedom of choice which is the basic of all our work.

Sometimes this freedom goes in the way of productivity because everybody is free to re-implement. You asked me for the most favorite editor. In principle one really good working editor would be great to have and would work and we have maybe 500 in Debian or so, I don't know.

I could imagine if people would concentrate and say five instead of 500 editors, we could get more productive, right? But I know this will not happen, right? But I think this is one thing which goes in the way of making things smooth and productive and we could have more manpower to replace one person who's [having] children, doing some other stuff and can't continue working on something and maybe this is a problem I will not solve, definitely not, but which I see.

What do you think is Debian's part in the field?

[Andreas]:

Yeah, well, okay, we can bring together different Upstreams, so we are building some packages and have some general overview about similar things and can say, oh, you are doing this and some other person is doing more or less the same, do you want to join each other or so, but this is kind of a channel we have to our Upstreams which is probably not very successful.

It starts with code copies of some libraries which are changed a little bit, which is fine license-wise, but not so helpful for different things and so I've tried to convince those Upstreams to forward their patches to the original one, but for this and I think we could do some kind of, yeah, [find] someone who brings Upstream together or to make them stop their forking stuff, but it costs a lot of energy and we probably don't have this and it's also not realistic that we can really help with this problem.

Do you have any questions for me?

[Andreas]:

I enjoyed the interview, I enjoyed seeing you again after half a year or so. Yeah, actually I've seen you in the eating room or cheese and wine party or so, I do not remember we had to really talk together, but yeah, people around, yeah, for sure. Yeah.

Cryptogram Security Vulnerability of HTML Emails

This is a newly discovered email vulnerability:

The email your manager received and forwarded to you was something completely innocent, such as a potential customer asking a few questions. All that email was supposed to achieve was being forwarded to you. However, the moment the email appeared in your inbox, it changed. The innocent pretext disappeared and the real phishing email became visible. A phishing email you had to trust because you knew the sender and they even confirmed that they had forwarded it to you.

This attack is possible because most email clients allow CSS to be used to style HTML emails. When an email is forwarded, the position of the original email in the DOM usually changes, allowing for CSS rules to be selectively applied only when an email has been forwarded.

An attacker can use this to include elements in the email that appear or disappear depending on the context in which the email is viewed. Because they are usually invisible, only appear in certain circumstances, and can be used for all sorts of mischief, I’ll refer to these elements as kobold letters, after the elusive sprites of mythology.

I can certainly imagine the possibilities.

Planet DebianEmanuele Rocca: PGP keys on Yubikey, with a side of Mutt

Here are my notes about copying PGP keys to external hardware devices such as Yubikeys. Let me begin by saying that the gpg tools are pretty bad at this.

MAKE A COUPLE OF BACKUPS OF ~/.gnupg/ TO DIFFERENT ENCRYPTED USB STICKS BEFORE YOU START. GPG WILL MESS UP YOUR KEYS. SERIOUSLY.

For example, would you believe me if I said that saving changes results in the removal of your private key? Well check this out.

Now that you have multiple safe, offline backups of your keys, here are my notes.

apt install yubikey-manager scdaemon

Plug the Yubikey in, see if it’s recognized properly:

ykman list
gpg --card-status

Change the default PIN (123456) and Admin PIN (12345678):

gpg --card-edit
gpg/card> admin
gpg/card> passwd

Look at the openpgp information and change the maximum number of retries, if you like. I have seen this failing a couple of times, unplugging the Yubikey and putting it back in worked.

ykman openpgp info
ykman openpgp access set-retries 7 7 7

Copy your keys. MAKE A BACKUP OF ~/.gnupg/ BEFORE YOU DO THIS.

gpg --edit-key $KEY_ID
gpg> keytocard # follow the prompts to copy the first key

Now choose the next key and copy that one too. Repeat till all subkeys are copied.

gpg> key 1
gpg> keytocard

Typing gpg --card-status you should be able to see all your keys on the Yubikey now.

Using the key on another machine

How do you use your PGP keys on the Yubikey on other systems?

Go to another system, if it does have a ~/.gnupg directory already move it somewhere else.

apt install scdaemon

Import your public key:

gpg -k
gpg --keyserver pgp.mit.edu --recv-keys $KEY_ID

Check the fingerprint and if it is indeed your key say you trust it:

gpg --edit-key $KEY_ID
> trust
> 5
> y
> save

Now try gpg --card-status and gpg --list-secret-keys, you should be able to see your keys. Try signing something, it should work.

gpg --output /tmp/x.out --sign /etc/motd
gpg --verify /tmp/x.out

Using the Yubikey with Mutt

If you’re using mutt with IMAP, there is a very simple trick to safely store your password on disk. Create an encrypted file with your IMAP password:

echo SUPERSECRET | gpg --encrypt > ~/.mutt_password.gpg

Add the following to ~/.muttrc:

set imap_pass=`gpg --decrypt ~/.mutt_password.gpg`

With the above, mutt now prompts you to insert the Yubikey and type your PIN in order to connect to the IMAP server.

Worse Than FailureError'd: Past Imperfect

A twitchy anonymous reporter pointed out that our form validation code is flaky. He's not wrong. But at least it can report time without needing emoticons! :-3

That same anon sent us the following, explaining "Folks at Twitch are very brave. So brave, they wrote their own time math."

twitch

 

Secret Agent Sjoerd reports "There was no train two minutes ago so I presume I should have caught it in an alternate universe." Denver is a key nexus in the multiverse, according to Myka.

train

 

Chilly Dallin H. is a tiny bit heated about ambiguous abbreviations - or at least about the software that interprets them with inadequate context. "With a range that short, I'd hate to take one of the older generation planes." At least it might be visible!

dallin

 

Phoney François P. "I was running a return of a phone through a big company website that shall not be named. Thankfully, they processed my order on April 1st 2024, or 2024年4月1日 in Japanese. There is a slight delay though as it shows 14月2024, which should be the 14th month of 2024. Dates are hard. Date formatting is complicated. For international date formatting, please come back later."

frank

 

At some time in the past, the original Adam R. encountered a time slip. We're just getting to see it even now. "GitHub must be operating on a different calendar than the Gregorian. Comments made just 4 weeks ago [today is 2023-02-07] are being displayed as made last year."

adam

 

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsAunty Dotty Goes to the Marathon

Author: Shannon O’Connor We all thought it was funny that Aunty Dotty got excited for events that nobody else cared about, like the 250th anniversary of the Boston Marathon. She had been in hibernation for years, nobody knew exactly how long. She would go to events, because she hadn’t seen a lot of the world, […]

The post Aunty Dotty Goes to the Marathon appeared first on 365tomorrows.

xkcdMachine

Planet DebianReproducible Builds (diffoscope): diffoscope 263 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 263. This version includes the following changes:

[ Chris Lamb ]
* Add support for the zipdetails(1) tool included in the Perl distribution.
  Thanks to Larry Doolittle et al. for the pointer to this tool.
* Don't use parenthesis within test "skipping…" messages; PyTest adds its own
  parenthesis, so we were ending up with double nested parens.
* Fix the .epub tests after supporting zipdetails(1).
* Update copyright years and debian/tests/control.

[ FC (Fay) Stegerman ]
* Fix MozillaZipContainer's monkeypatch after Python's zipfile module changed
  to detect potentially insecure overlapping entries within .zip files.
  (Closes: reproducible-builds/diffoscope#362)

You find out more by visiting the project homepage.

,

Planet DebianJohn Goerzen: The xz Issue Isn’t About Open Source

You’ve probably heard of the recent backdoor in xz. There have been a lot of takes on this, most of them boiling down to some version of:

The problem here is with Open Source Software.

I want to say not only is that view so myopic that it pushes towards the incorrect, but also it blinds us to more serious problems.

Now, I don’t pretend that there are no problems in the FLOSS community. There have been various pieces written about what this issue says about the FLOSS community (usually without actionable solutions). I’m not here to say those pieces are wrong. Just that there’s a bigger picture.

So with this xz issue, it may well be a state actor (aka “spy”) that added this malicious code to xz. We also know that proprietary software and systems can be vulnerable. For instance, a Twitter whistleblower revealed that Twitter employed Indian and Chinese spies, some knowingly. A recent report pointed to security lapses at Microsoft, including “preventable” lapses in security. According to the Wikipedia article on the SolarWinds attack, it was facilitated by various kinds of carelessness, including passwords being posted to Github and weak default passwords. They directly distributed malware-infested updates, encouraged customers to disable anti-malware tools when installing SolarWinds products, and so forth.

It would be naive indeed to assume that there aren’t black hat actors among the legions of programmers employed by companies that outsource work to low-cost countries — some of which have challenges with bribery.

So, given all this, we can’t really say the problem is Open Source. Maybe it’s more broad:

The problem here is with software.

Maybe that inches us closer, but is it really accurate? We have all heard of Boeing’s recent issues, which seem to have some element of root causes in corporate carelessness, cost-cutting, and outsourcing. That sounds rather similar to the SolarWinds issue, doesn’t it?

Well then, the problem is capitalism.

Maybe it has a role to play, but isn’t it a little too easy to just say “capitalism” and throw up our hands helplessly, just as some do with FLOSS as at the start of this article? After all, capitalism also brought us plenty of products of very high quality over the years. When we can point to successful, non-careless products — and I own some of them (for instance, my Framework laptop). We clearly haven’t reached the root cause yet.

And besides, what would you replace it with? All the major alternatives that have been tried have even stronger downsides. Maybe you replace it with “better regulated capitalism”, but that’s still capitalism.

Then the problem must be with consumers.

As this argument would go, it’s consumers’ buying patterns that drive problems. Buyers — individual and corporate — seek flashy features and low cost, prizing those over quality and security.

No doubt this is true in a lot of cases. Maybe greed or status-conscious societies foster it: Temu promises people to “shop like a billionaire”, and unloads on them cheap junk, which “all but guarantees that shipments from Temu containing products made with forced labor are entering the United States on a regular basis“.

But consumers are also people, and some fraction of them are quite capable of writing fantastic software, and in fact, do so.

So what we need is some way to seize control. Some way to do what is right, despite the pressures of consumers or corporations.

Ah yes, dear reader, you have been slogging through all these paragraphs and now realize I have been leading you to this:

Then the solution is Open Source.

Indeed. Faults and all, FLOSS is the most successful movement I know where people are bringing us back to the commons: working and volunteering for the common good, unleashing a thousand creative variants on a theme, iterating in every direction imaginable. We have FLOSS being vital parts of everything from $30 Raspberry Pis to space missions. It is bringing education and communication to impoverished parts of the world. It lets everyone write and release software. And, unlike the SolarWinds and Twitter issues, it exposes both clever solutions and security flaws to the world.

If an authentication process in Windows got slower, we would all shrug and mutter “Microsoft” under our breath. Because, really, what else can we do? We have no agency with Windows.

If an authentication process in Linux gets slower, anybody that’s interested — anybody at all — can dive in and ask “why” and trace it down to root causes.

Some look at this and say “FLOSS is responsible for this mess.” I look at it and say, “this would be so much worse if it wasn’t FLOSS” — and experience backs me up on this.

FLOSS doesn’t prevent security issues itself.

What it does do is give capabilities to us all. The ability to investigate. Ability to fix. Yes, even the ability to break — and its cousin, the power to learn.

And, most rewarding, the ability to contribute.

Cory DoctorowI’m coming to Los Angeles, Boston, Providence, Chicago, Turin, Marin, Winnipeg, Calgary, Vancouver, and Tartu, Estonia!

I’m about to hit the road again for a series of back-to-back public appearances as I travel with my new, nationally bestselling novel The Bezzle. I’ll be in Los Angeles, Boston, Providence, Chicago, Turin, Marin, Winnipeg, Calgary, Vancouver, and Tartu, Estonia!

I hope to see you! Bring friends!

Cryptogram Friday Squid Blogging: SqUID Bots

They’re AI warehouse robots.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Maybe the Phone System Surveillance Vulnerabilities Will Be Fixed

It seems that the FCC might be fixing the vulnerabilities in SS7 and the Diameter protocol:

On March 27 the commission asked telecommunications providers to weigh in and detail what they are doing to prevent SS7 and Diameter vulnerabilities from being misused to track consumers’ locations.

The FCC has also asked carriers to detail any exploits of the protocols since 2018. The regulator wants to know the date(s) of the incident(s), what happened, which vulnerabilities were exploited and with which techniques, where the location tracking occurred, and ­ if known ­ the attacker’s identity.

This time frame is significant because in 2018, the Communications Security, Reliability, and Interoperability Council (CSRIC), a federal advisory committee to the FCC, issued several security best practices to prevent network intrusions and unauthorized location tracking.

I have written about this over the past decade.

Planet DebianLukas Märdian: Netplan v1.0 paves the way to stable, declarative network management

New “netplan status –diff” subcommand, finding differences between configuration and system state

As the maintainer and lead developer for Netplan, I’m proud to announce the general availability of Netplan v1.0 after more than 7 years of development efforts. Over the years, we’ve so far had about 80 individual contributors from around the globe. This includes many contributions from our Netplan core-team at Canonical, but also from other big corporations such as Microsoft or Deutsche Telekom. Those contributions, along with the many we receive from our community of individual contributors, solidify Netplan as a healthy and trusted open source project. In an effort to make Netplan even more dependable, we started shipping upstream patch releases, such as 0.106.1 and 0.107.1, which make it easier to integrate fixes into our users’ custom workflows.

With the release of version 1.0 we primarily focused on stability. However, being a major version upgrade, it allowed us to drop some long-standing legacy code from the libnetplan1 library. Removing this technical debt increases the maintainability of Netplan’s codebase going forward. The upcoming Ubuntu 24.04 LTS and Debian 13 releases will ship Netplan v1.0 to millions of users worldwide.

Highlights of version 1.0

In addition to stability and maintainability improvements, it’s worth looking at some of the new features that were included in the latest release:

  • Simultaneous WPA2 & WPA3 support.
  • Introduction of a stable libnetplan1 API.
  • Mellanox VF-LAG support for high performance SR-IOV networking.
  • New hairpin and port-mac-learning settings, useful for VXLAN tunnels with FRRouting.
  • New netplan status –diff subcommand, finding differences between configuration and system state.

Besides those highlights of the v1.0 release, I’d also like to shed some light on new functionality that was integrated within the past two years for those upgrading from the previous Ubuntu 22.04 LTS which used Netplan v0.104:

  • We added support for the management of new network interface types, such as veth, dummy, VXLAN, VRF or InfiniBand (IPoIB). 
  • Wireless functionality was improved by integrating Netplan with NetworkManager on desktop systems, adding support for WPA3 and adding the notion of a regulatory-domain, to choose proper frequencies for specific regions. 
  • To improve maintainability, we moved to Meson as Netplan’s buildsystem, added upstream CI coverage for multiple Linux distributions and integrations (such as Debian testing, NetworkManager, snapd or cloud-init), checks for ABI compatibility, and automatic memory leak detection. 
  • We increased consistency between the supported backend renderers (systemd-networkd and NetworkManager), by matching physical network interfaces on permanent MAC address, when the match.macaddress setting is being used, and added new hardware offloading functionality for high performance networking, such as Single-Root IO Virtualisation virtual function link-aggregation (SR-IOV VF-LAG).

The much improved Netplan documentation, that is now hosted on “Read the Docs”, and new command line subcommands, such as netplan status, make Netplan a well vested tool for declarative network management and troubleshooting.

Integrations

Those changes pave the way to integrate Netplan in 3rd party projects, such as system installers or cloud deployment methods. By shipping the new python3-netplan Python bindings to libnetplan, it is now easier than ever to access Netplan functionality and network validation from other projects. We are proud that the Debian Cloud Team chose Netplan to be the default network management tool in their official cloud-images for Debian Bookworm and beyond. Ubuntu’s NetworkManager package now uses Netplan as it’s default backend on Ubuntu 23.10 Desktop systems and beyond. Further integrations happened with cloud-init and the Calamares installer.

Please check out the Netplan version 1.0 release on GitHub! If you want to learn more, follow our activities on Netplan.io, GitHub, Launchpad, IRC or our Netplan Developer Diaries blog on discourse.

Krebs on SecurityFake Lawsuit Threat Exposes Privnote Phishing Sites

A cybercrook who has been setting up websites that mimic the self-destructing message service privnote.com accidentally exposed the breadth of their operations recently when they threatened to sue a software company. The disclosure revealed a profitable network of phishing sites that behave and look like the real Privnote, except that any messages containing cryptocurrency addresses will be automatically altered to include a different payment address controlled by the scammers.

The real Privnote, at privnote.com.

Launched in 2008, privnote.com employs technology that encrypts each message so that even Privnote itself cannot read its contents. And it doesn’t send or receive messages. Creating a message merely generates a link. When that link is clicked or visited, the service warns that the message will be gone forever after it is read.

Privnote’s ease-of-use and popularity among cryptocurrency enthusiasts has made it a perennial target of phishers, who erect Privnote clones that function more or less as advertised but also quietly inject their own cryptocurrency payment addresses when a note is created that contains crypto wallets.

Last month, a new user on GitHub named fory66399 lodged a complaint on the “issues” page for MetaMask, a software cryptocurrency wallet used to interact with the Ethereum blockchain. Fory66399 insisted that their website — privnote[.]co — was being wrongly flagged by MetaMask’s “eth-phishing-detect” list as malicious.

“We filed a lawsuit with a lawyer for dishonestly adding a site to the block list, damaging reputation, as well as ignoring the moderation department and ignoring answers!” fory66399 threatened. “Provide evidence or I will demand compensation!”

MetaMask’s lead product manager Taylor Monahan replied by posting several screenshots of privnote[.]co showing the site did indeed swap out any cryptocurrency addresses.

After being told where they could send a copy of their lawsuit, Fory66399 appeared to become flustered, and proceeded to mention a number of other interesting domain names:

You sent me screenshots from some other site! It’s red!!!!
The tornote.io website has a different color altogether
The privatenote,io website also has a different color! What’s wrong?????

A search at DomainTools.com for privatenote[.]io shows it has been registered to two names over as many years, including Andrey Sokol from Moscow and Alexandr Ermakov from Kiev. There is no indication these are the real names of the phishers, but the names are useful in pointing to other sites targeting Privnote since 2020.

DomainTools says other domains registered to Alexandr Ermakov include pirvnota[.]com, privatemessage[.]net, privatenote[.]io, and tornote[.]io.

A screenshot of the phishing domain privatemessage dot net.

The registration records for pirvnota[.]com at one point were updated from Andrey Sokol to “BPW” as the registrant organization, and “Tambov district” in the registrant state/province field. Searching DomainTools for domains that include both of these terms reveals pirwnote[.]com.

Other Privnote phishing domains that also phoned home to the same Internet address as pirwnote[.]com include privnode[.]com, privnate[.]com, and prevnóte[.]com. Pirwnote[.]com is currently selling security cameras made by the Chinese manufacturer Hikvision, via an Internet address based in Hong Kong.

It appears someone has gone to great lengths to make tornote[.]io seem like a legitimate website. For example, this account at Medium has authored more than a dozen blog posts in the past year singing the praises of Tornote as a secure, self-destructing messaging service. However, testing shows tornote[.]io will also replace any cryptocurrency addresses in messages with their own payment address.

These malicious note sites attract visitors by gaming search engine results to make the phishing domains appear prominently in search results for “privnote.” A search in Google for “privnote” currently returns tornote[.]io as the fifth result. Like other phishing sites tied to this network, Tornote will use the same cryptocurrency addresses for roughly 5 days, and then rotate in new payment addresses.

Tornote changed the cryptocurrency address entered into a test note to this address controlled by the phishers.

Throughout 2023, Tornote was hosted with the Russian provider DDoS-Guard, at the Internet address 186.2.163[.]216. A review of the passive DNS records tied to this address shows that apart from subdomains dedicated to tornote[.]io, the main other domain at this address was hkleaks[.]ml.

In August 2019, a slew of websites and social media channels dubbed “HKLEAKS” began doxing the identities and personal information of pro-democracy activists in Hong Kong. According to a report (PDF) from Citizen Lab, hkleaks[.]ml was the second domain that appeared as the perpetrators began to expand the list of those doxed.

HKleaks, as indexed by The Wayback Machine.

DomainTools shows there are more than 1,000 other domains whose registration records include the organization name “BPW” and “Tambov District” as the location. Virtually all of those domains were registered through one of two registrars — Hong Kong-based Nicenic and Singapore-based WebCC — and almost all appear to be phishing or pill-spam related.

Among those is rustraitor[.]info, a website erected after Russia invaded Ukraine in early 2022 that doxed Russians perceived to have helped the Ukrainian cause.

An archive.org copy of Rustraitor.

In keeping with the overall theme, these phishing domains appear focused on stealing usernames and passwords to some of the cybercrime underground’s busiest shops, including Brian’s Club. What do all the phished sites have in common? They all accept payment via virtual currencies.

It appears MetaMask’s Monahan made the correct decision in forcing these phishers to tip their hand: Among the websites at that DDoS-Guard address are multiple MetaMask phishing domains, including metarrnask[.]com, meternask[.]com, and rnetamask[.]com.

How profitable are these private note phishing sites? Reviewing the four malicious cryptocurrency payment addresses that the attackers swapped into notes passed through privnote[.]co (as pictured in Monahan’s screenshot above) shows that between March 15 and March 19, 2024, those address raked in and transferred out nearly $18,000 in cryptocurrencies. And that’s just one of their phishing websites.

Worse Than FailureCodeSOD: A Valid Applicant

In the late 90s into the early 2000s, there was an entire industry spun up to get businesses and governments off their mainframe systems from the 60s and onto something modern. "Modern", in that era, usually meant Java. I attended vendor presentations, for example, that promised that you could take your mainframe, slap a SOAP webservice on it, and then gradually migrate modules off the mainframe and into Java Enterprise Edition. In the intervening years, I have seen exactly 0 successful migrations like this- usually they just end up trying that for a few years and then biting the bullet and doing a ground-up rewrite.

That's is the situation ML was in: a state government wanted to replace their COBOL mainframe monster with a "maintainable" J2EE/WebSphere based application. Gone would be the 3270 dumb terminals, and here would be desktop PCs running web browsers.

ML's team did the initial design work, which the state was very happy with. But the actual development work gave the state sticker shock, so they opted to take the design from ML's company and ship it out to a lowest-bidder offshore vendor to actually do the development work.

This, by the way, was another popular mindset in the early 00s: you could design your software as a set of UML diagrams and then hand them off to the cheapest coder you could hire, and voila, you'd have working software (and someday soon, you'd be able to just generate the code and wouldn't need the programmer in the first place! ANY DAY NOW).

Now, this code is old, and predates generics in Java, so the use of ArrayLists isn't the WTF. But the programmer's approach to polymorphism is.

public class Applicant extends Person {
	// ... [snip] ...
}

.
.
.


public class ApplicantValidator {
	
	
	public void validateApplicantList(List listOfApplicants) throws ValidationException {

		// ... [snip] ...
		
		// Create a List of Person to validate
		List listOfPersons = new ArrayList();
		Iterator i = listOfApplicants.iterator(); 
		while (i.hasNext()) {
			Applicant a = (Applicant) i.next();
			Person p = (Person) a;
			listOfPersons.add(p);
		}
		
		PersonValidator.getInstance().validatePersonList(listOfPersons);

		// ... [snip] ...

	}

	// ... [snip] ...
}

Here you see an Applicant is a subclass of Person. We also see an ApplicantValidator class, which needs to verify that the applicant objects are valid- and to do that, they need to be treated as valid Person objects.

To do this, we iterate across our list of applications (which, it's worth noting, are being treated as Objects since we don't have generics), cast them to Applicants, then cast the Applicant variable to Person, then create a list of Persons- which again, absent generics, is just a list of Objects. Then we pass that list of persons into validatePersonList.

All of this is unnecessary, and demonstrates a lack of understanding about the language in use. This block could be written more clearly as: PersonValidator.getInstance().validatePersonList(listOfApplicants);

This gives us the same result with significantly less effort.

While much of the code coming from the offshore team was actually solid, it contained so much nonsense like this, so many misundertandings of the design, and so many bugs, that the state kept coming back to ML's company to address the issues. Between paying the offshore team to do the work, and then ML's team to fix the work, the entire project cost much more than if they had hired ML's team in the first place.

But the developers still billed at a lower rate than ML's team, which meant the managers responsible still got to brag about cost savings, even as they overran the project budget. "Imagine how much it would have cost if we hadn't gone with the cheaper labor?"

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsWeb World

Author: Rosie Oliver Grey-ghosted darkness. Not even a piece of dulled memory in the expansive nothingness ahead. Damn! Time is now against her completing her sculpture. She had been so sure the right shape could be found along her latest trajectory. Floating, she twists round to face the massive structure that extends in every direction […]

The post Web World appeared first on 365tomorrows.

Cryptogram Surveillance by the New Microsoft Outlook App

The ProtonMail people are accusing Microsoft’s new Outlook for Windows app of conducting extensive surveillance on its users. It shares data with advertisers, a lot of data:

The window informs users that Microsoft and those 801 third parties use their data for a number of purposes, including to:

  • Store and/or access information on the user’s device
  • Develop and improve products
  • Personalize ads and content
  • Measure ads and content
  • Derive audience insights
  • Obtain precise geolocation data
  • Identify users through device scanning

Commentary.

,

Planet DebianBits from Debian: Proxmox Platinum Sponsor of DebConf24

proxmoxlogo

We are pleased to announce that Proxmox has committed to sponsor DebConf24 as a Platinum Sponsor.

Proxmox provides powerful and user-friendly open-source server software. Enterprises of all sizes and industries use Proxmox solutions to deploy efficient and simplified IT infrastructures, minimize total cost of ownership, and avoid vendor lock-in. Proxmox also offers commercial support, training services, and an extensive partner ecosystem to ensure business continuity for its customers. Proxmox Server Solutions GmbH was established in 2005 and is headquartered in Vienna, Austria.

Proxmox builds its product offerings on top of the Debian operating system.

With this commitment as Platinum Sponsor, Proxmox is contributing to make possible our annual conference, and directly supporting the progress of Debian and Free Software, helping to strengthen the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much, Proxmox, for your support of DebConf24!

Become a sponsor too!

DebConf24 will take place from 28th July to 4th August 2024 in Busan, South Korea, and will be preceded by DebCamp, from 21st to 27th July 2024.

DebConf24 is accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, or visit the Become a DebConf Sponsor website.

Krebs on Security‘The Manipulaters’ Improve Phishing, Still Fail at Opsec

Roughly nine years ago, KrebsOnSecurity profiled a Pakistan-based cybercrime group called “The Manipulaters,” a sprawling web hosting network of phishing and spam delivery platforms. In January 2024, The Manipulaters pleaded with this author to unpublish previous stories about their work, claiming the group had turned over a new leaf and gone legitimate. But new research suggests that while they have improved the quality of their products and services, these nitwits still fail spectacularly at hiding their illegal activities.

In May 2015, KrebsOnSecurity published a brief writeup about the brazen Manipulaters team, noting that they openly operated hundreds of web sites selling tools designed to trick people into giving up usernames and passwords, or deploying malicious software on their PCs.

Manipulaters advertisement for “Office 365 Private Page with Antibot” phishing kit sold on the domain heartsender,com. “Antibot” refers to functionality that attempts to evade automated detection techniques, keeping a phish deployed as long as possible. Image: DomainTools.

The core brand of The Manipulaters has long been a shared cybercriminal identity named “Saim Raza,” who for the past decade has peddled a popular spamming and phishing service variously called “Fudtools,” “Fudpage,” “Fudsender,” “FudCo,” etc. The term “FUD” in those names stands for “Fully Un-Detectable,” and it refers to cybercrime resources that will evade detection by security tools like antivirus software or anti-spam appliances.

A September 2021 story here checked in on The Manipulaters, and found that Saim Raza and company were prospering under their FudCo brands, which they secretly managed from a front company called We Code Solutions.

That piece worked backwards from all of the known Saim Raza email addresses to identify Facebook profiles for multiple We Code Solutions employees, many of whom could be seen celebrating company anniversaries gathered around a giant cake with the words “FudCo” painted in icing.

Since that story ran, KrebsOnSecurity has heard from this Saim Raza identity on two occasions. The first was in the weeks following the Sept. 2021 piece, when one of Saim Raza’s known email addresses — bluebtcus@gmail.com — pleaded to have the story taken down.

“Hello, we already leave that fud etc before year,” the Saim Raza identity wrote. “Why you post us? Why you destroy our lifes? We never harm anyone. Please remove it.”

Not wishing to be manipulated by a phishing gang, KrebsOnSecurity ignored those entreaties. But on Jan. 14, 2024, KrebsOnSecurity heard from the same bluebtcus@gmail.com address, apropos of nothing.

“Please remove this article,” Sam Raza wrote, linking to the 2021 profile. “Please already my police register case on me. I already leave everything.”

Asked to elaborate on the police investigation, Saim Raza said they were freshly released from jail.

“I was there many days,” the reply explained. “Now back after bail. Now I want to start my new work.”

Exactly what that “new work” might entail, Saim Raza wouldn’t say. But a new report from researchers at DomainTools.com finds that several computers associated with The Manipulaters have been massively hacked by malicious data- and password-snarfing malware for quite some time.

DomainTools says the malware infections on Manipulaters PCs exposed “vast swaths of account-related data along with an outline of the group’s membership, operations, and position in the broader underground economy.”

“Curiously, the large subset of identified Manipulaters customers appear to be compromised by the same stealer malware,” DomainTools wrote. “All observed customer malware infections began after the initial compromise of Manipulaters PCs, which raises a number of questions regarding the origin of those infections.”

A number of questions, indeed. The core Manipulaters product these days is a spam delivery service called HeartSender, whose homepage openly advertises phishing kits targeting users of various Internet companies, including Microsoft 365, Yahoo, AOL, Intuit, iCloud and ID.me, to name a few.

A screenshot of the homepage of HeartSender 4 displays an IP address tied to fudtoolshop@gmail.com. Image: DomainTools.

HeartSender customers can interact with the subscription service via the website, but the product appears to be far more effective and user-friendly if one downloads HeartSender as a Windows executable program. Whether that HeartSender program was somehow compromised and used to infect the service’s customers is unknown.

However, DomainTools also found the hosted version of HeartSender service leaks an extraordinary amount of user information that probably is not intended to be publicly accessible. Apparently, the HeartSender web interface has several webpages that are accessible to unauthenticated users, exposing customer credentials along with support requests to HeartSender developers.

“Ironically, the Manipulaters may create more short-term risk to their own customers than law enforcement,” DomainTools wrote. “The data table “User Feedbacks” (sic) exposes what appear to be customer authentication tokens, user identifiers, and even a customer support request that exposes root-level SMTP credentials–all visible by an unauthenticated user on a Manipulaters-controlled domain. Given the risk for abuse, this domain will not be published.”

This is hardly the first time The Manipulaters have shot themselves in the foot. In 2019, The Manipulaters failed to renew their core domain name — manipulaters[.]com — the same one tied to so many of the company’s past and current business operations. That domain was quickly scooped up by Scylla Intel, a cyber intelligence firm that focuses on connecting cybercriminals to their real-life identities.

Currently, The Manipulaters seem focused on building out and supporting HeartSender, which specializes in spam and email-to-SMS spamming services.

“The Manipulaters’ newfound interest in email-to-SMS spam could be in response to the massive increase in smishing activity impersonating the USPS,” DomainTools wrote. “Proofs posted on HeartSender’s Telegram channel contain numerous references to postal service impersonation, including proving delivery of USPS-themed phishing lures and the sale of a USPS phishing kit.”

Reached via email, the Saim Raza identity declined to respond to questions about the DomainTools findings.

“First [of] all we never work on virus or compromised computer etc,” Raza replied. “If you want to write like that fake go ahead. Second I leave country already. If someone bind anything with exe file and spread on internet its not my fault.”

Asked why they left Pakistan, Saim Raza said the authorities there just wanted to shake them down.

“After your article our police put FIR on my [identity],” Saim Raza explained. “FIR” in this case stands for “First Information Report,” which is the initial complaint in the criminal justice system of Pakistan.

“They only get money from me nothing else,” Saim Raza continued. “Now some officers ask for money again again. Brother, there is no good law in Pakistan just they need money.”

Saim Raza has a history of being slippery with the truth, so who knows whether The Manipulaters and/or its leaders have in fact fled Pakistan (it may be more of an extended vacation abroad). With any luck, these guys will soon venture into a more Western-friendly, “good law” nation and receive a warm welcome by the local authorities.

Planet DebianGuido Günther: Free Software Activities March 2024

A short status update of what happened on my side last month. I spent quiet a bit of time reviewing new, code (thanks!) as well as maintenance to keep things going but we also have some improvements:

Phosh

Phoc

phosh-mobile-settings

phosh-osk-stub

gmobile

Livi

squeekboard

GNOME calls

Libsoup

If you want to support my work see donations.

Planet DebianJoey Hess: reflections on distrusting xz

Was the ssh backdoor the only goal that "Jia Tan" was pursuing with their multi-year operation against xz?

I doubt it, and if not, then every fix so far has been incomplete, because everything is still running code written by that entity.

If we assume that they had a multilayered plan, that their every action was calculated and malicious, then we have to think about the full threat surface of using xz. This quickly gets into nightmare scenarios of the "trusting trust" variety.

What if xz contains a hidden buffer overflow or other vulnerability, that can be exploited by the xz file it's decompressing? This would let the attacker target other packages, as needed.

Let's say they want to target gcc. Well, gcc contains a lot of documentation, which includes png images. So they spend a while getting accepted as a documentation contributor on that project, and get added to it a png file that is specially constructed, it has additional binary data appended that exploits the buffer overflow. And instructs xz to modify the source code that comes later when decompressing gcc.tar.xz.

More likely, they wouldn't bother with an actual trusting trust attack on gcc, which would be a lot of work to get right. One problem with the ssh backdoor is that well, not all servers on the internet run ssh. (Or systemd.) So webservers seem a likely target of this kind of second stage attack. Apache's docs include png files, nginx does not, but there's always scope to add improved documentation to a project.

When would such a vulnerability have been introduced? In February, "Jia Tan" wrote a new decoder for xz. This added 1000+ lines of new C code across several commits. So much code and in just the right place to insert something like this. And why take on such a significant project just two months before inserting the ssh backdoor? "Jia Tan" was already fully accepted as maintainer, and doing lots of other work, it doesn't seem to me that they needed to start this rewrite as part of their cover.

They were working closely with xz's author Lasse Collin in this, by indications exchanging patches offlist as they developed it. So Lasse Collin's commits in this time period are also worth scrutiny, because they could have been influenced by "Jia Tan". One that caught my eye comes immediately afterwards: "prepares the code for alternative C versions and inline assembly" Multiple versions and assembly mean even more places to hide such a security hole.

I stress that I have not found such a security hole, I'm only considering what the worst case possibilities are. I think we need to fully consider them in order to decide how to fully wrap up this mess.

Whether such stealthy security holes have been introduced into xz by "Jia Tan" or not, there are definitely indications that the ssh backdoor was not the end of what they had planned.

For one thing, the "test file" based system they introduced was extensible. They could have been planning to add more test files later, that backdoored xz in further ways.

And then there's the matter of the disabling of the Landlock sandbox. This was not necessary for the ssh backdoor, because the sandbox is only used by the xz command, not by liblzma. So why did they potentially tip their hand by adding that rogue "." that disables the sandbox?

A sandbox would not prevent the kind of attack I discuss above, where xz is just modifying code that it decompresses. Disabling the sandbox suggests that they were going to make xz run arbitrary code, that perhaps wrote to files it shouldn't be touching, to install a backdoor in the system.

Both deb and rpm use xz compression, and with the sandbox disabled, whether they link with liblzma or run the xz command, a backdoored xz can write to any file on the system while dpkg or rpm is running and noone is likely to notice, because that's the kind of thing a package manager does.

My impression is that all of this was well planned and they were in it for the long haul. They had no reason to stop with backdooring ssh, except for the risk of additional exposure. But they decided to take that risk, with the sandbox disabling. So they planned to do more, and every commit by "Jia Tan", and really every commit that they could have influenced needs to be distrusted.

This is why I've suggested to Debian that they revert to an earlier version of xz. That would be my advice to anyone distributing xz.

I do have a xz-unscathed fork which I've carefully constructed to avoid all "Jia Tan" involved commits. It feels good to not need to worry about dpkg and tar. I only plan to maintain this fork minimally, eg security fixes. Hopefully Lasse Collin will consider these possibilities and address them in his response to the attack.

Worse Than FailureCodeSOD: Gotta Catch 'Em All

It's good to handle any exception that could be raised in some useful way. Frequently, this means that you need to take advantage of the catch block's ability to filter by type so you can do something different in each case. Or you could do what Adam's co-worker did.

try
{
/* ... some important code ... */
} catch (OutOfMemoryException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (OverflowException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (InvalidCastException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (NullReferenceException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (IndexOutOfRangeException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (ArgumentException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (InvalidOperationException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (XmlException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (IOException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (NotSupportedException exception) {
        Global.Insert("App.GetSettings;", exception.Message);
} catch (Exception exception) {
        Global.Insert("App.GetSettings;", exception.Message);
}

Well, I guess that if they ever need to add different code paths, they're halfway there.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 TomorrowsSucky

Author: Alastair Millar As I burst the blister on Martha’s back, the gelatinous pus within made its escape. Thanking the Void Gods for the medpack’s surgical gloves, I wiped her down, then set to work with the tweezers; if I couldn’t get the eggs out, it was all for nothing. This is the side of […]

The post Sucky appeared first on 365tomorrows.

xkcdEclipse Clouds

Planet DebianArnaud Rebillout: Firefox: Moving from the Debian package to the Flatpak app (long-term?)

First, thanks to Samuel Henrique for giving notice of recent Firefox CVEs in Debian testing/unstable.

At the time I didn't want to upgrade my system (Debian Sid) due to the ongoing t64 transition transition, so I decided I could install the Firefox Flatpak app instead, and why not stick to it long-term?

This blog post details all the steps, if ever others want to go the same road.

Flatpak Installation

Disclaimer: this section is hardly anything more than a copy/paste of the official documentation, and with time it will get outdated, so you'd better follow the official doc.

First thing first, let's install Flatpak:

$ sudo apt update
$ sudo apt install flatpak

Then the next step is to add the Flathub remote repository, from where we'll get our Flatpak applications:

$ flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo

And that's all there is to it! Now come the optional steps.

For GNOME and KDE users, you might want to install a plugin for the software manager specific to your desktop, so that it can support and manage Flatpak apps:

$ which -s gnome-software  && sudo apt install gnome-software-plugin-flatpak
$ which -s plasma-discover && sudo apt install plasma-discover-backend-flatpak

And here's an additional check you can do, as it's something that did bite me in the past: missing xdg-portal-* packages, that are required for Flatpak applications to communicate with the desktop environment. Just to be sure, you can check the output of apt search '^xdg-desktop-portal' to see what's available, and compare with the output of dpkg -l | grep xdg-desktop-portal.

As you can see, if you're a GNOME or KDE user, there's a portal backend for you, and it should be installed. For reference, this is what I have on my GNOME desktop at the moment:

$ dpkg -l | grep xdg-desktop-portal | awk '{print $2}'
xdg-desktop-portal
xdg-desktop-portal-gnome
xdg-desktop-portal-gtk

Install the Firefox Flatpak app

This is trivial, but still, there's a question I've always asked myself: should I install applications system-wide (aka. flatpak --system, the default) or per-user (aka. flatpak --user)? Turns out, this questions is answered in the Flatpak documentation:

Flatpak commands are run system-wide by default. If you are installing applications for day-to-day usage, it is recommended to stick with this default behavior.

Armed with this new knowledge, let's install the Firefox app:

$ flatpak install flathub org.mozilla.firefox

And that's about it! We can give it a go already:

$ flatpak run org.mozilla.firefox

Data migration

At this point, running Firefox via Flatpak gives me an "empty" Firefox. That's not what I want, instead I want my usual Firefox, with a gazillion of tabs already opened, a few extensions, bookmarks and so on.

As it turns out, Mozilla provides a brief doc for data migration, and it's as simple as moving Firefox data directory around!

To clarify, we'll be copying data:

  • from ~/.mozilla/ -- where the Firefox Debian package stores its data
  • into ~/.var/app/org.mozilla.firefox/.mozilla/ -- where the Firefox Flatpak app stores its data

Make sure that all Firefox instances are closed, then proceed:

# BEWARE! Below I'm erasing data!
$ rm -fr ~/.var/app/org.mozilla.firefox/.mozilla/firefox/
$ cp -a ~/.mozilla/firefox/ ~/.var/app/org.mozilla.firefox/.mozilla/

To avoid confusing myself, it's also a good idea to rename the local data directory:

$ mv ~/.mozilla/firefox ~/.mozilla/firefox.old.$(date --iso-8601=date)

At this point, flatpak run org.mozilla.firefox takes me to my "usual" everyday Firefox, with all its tabs opened, pinned, bookmarked, etc.

More integration?

After following all the steps above, I must say that I'm 99% happy. So far, everything works as before, I didn't hit any issue, and I don't even notice that Firefox is running via Flatpak, it's completely transparent.

So where's the 1% of unhappiness? The « Run a Command » dialog from GNOME, the one that shows up via the keyboard shortcut <Alt+F2>. This is how I start my GUI applications, and I usually run two Firefox instances in parallel (one for work, one for personal), using the firefox -p <profile> command.

Given that I ran apt purge firefox before (to avoid confusing myself with two installations of Firefox), now the right (and only) way to start Firefox from a command-line is to type flatpak run org.mozilla.firefox -p <profile>. Typing that every time is way too cumbersome, so I need something quicker.

Seems like the most straightforward is to create a wrapper script:

$ cat /usr/local/bin/firefox 
#!/bin/sh
exec flatpak run org.mozilla.firefox "$@"

And now I can just hit <Alt+F2> and type firefox -p <profile> to start Firefox with the profile I want, just as before. Neat!

Looking forward: system updates

I usually update my system manually every now and then, via the well-known pair of commands:

$ sudo apt update
$ sudo apt full-upgrade

The downside of introducing Flatpak, ie. introducing another package manager, is that I'll need to learn new commands to update the software that comes via this channel.

Fortunately, there's really not much to learn. From flatpak-update(1):

flatpak update [OPTION...] [REF...]

Updates applications and runtimes. [...] If no REF is given, everything is updated, as well as appstream info for all remotes.

Could it be that simple? Apparently yes, the Flatpak equivalent of the two apt commands above is just:

$ flatpak update

Going forward, my options are:

  1. Teach myself to run flatpak update additionally to apt update, manually, everytime I update my system.
  2. Go crazy: let something automatically update my Flatpak apps, in my back and without my consent.

I'm actually tempted to go for option 2 here, and I wonder if GNOME Software will do that for me, provided that I installed gnome-software-plugin-flatpak, and that I checked « Software Updates -> Automatic » in the Settings (which I did).

However, I didn't find any documentation regarding what this setting really does, so I can't say if it will only download updates, or if it will also install it. I'd be happy if it automatically installs new version of Flatpak apps, but at the same time I'd be very unhappy if it automatically upgrades my Debian system...

So we'll see. Enough for today, hope this blog post was useful!

,

Planet DebianDirk Eddelbuettel: ulid 0.3.1 on CRAN: New Maintainer, Some Polish

Happy to share that ulid is now (back) on CRAN. It provides universally unique identifiers that are lexicographically sortable, which improves over the more well-known uuid generators.

ulid is a neat little package put together by Bob Rudis a few years ago. It had recently drifted off CRAN so I offered to brush it up and re-submit it. And as tooted earlier today, it took just over an hour to finish that (after the lead up work I had done, including prior email with CRAN in the loop, the repo transfer from Bob’s to my ulid repo plus of course a wee bit of actual maintenance; see below for more).

The NEWS entry follows.

Changes in version 0.3.1 (2024-04-02)

  • New Maintainer

  • Deleted several repository files no longer used or needed

  • Added .editorconfig, ChangeLog and cleanup

  • Converted NEWS.md to NEWS.Rd

  • Simplified R/ directory to one source file

  • Simplified src/ removing redundant Makevars

  • Added ulid() alias

  • Updated / edited roxygen and README.md documention

  • Removed vignette which was identical to README.md

  • Switched continuous integration to GitHub Actions

  • Placed upstream (header-only) library into src/ulid/

  • Renamed single interface file to src/wrapper

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianSven Hoexter: PKIX: pathLen Constrain on Root Certificates

I recently came a cross a x509 P(rivate)KI Root Certificate which had a pathLen constrain set on the (self signed) Root Certificate. Since that is not commonly seen I looked a bit around to get a better understanding about how the pathLen basic constrain should be used.

Primary source is RFC 5280 section 4.2.1.9

The pathLenConstraint field is meaningful only if the cA boolean is asserted and the key usage extension, if present, asserts the keyCertSign bit (Section 4.2.1.3). In this case, it gives the maximum number of non-self-issued intermediate certificates that may follow this certificate in a valid certification path

Since the Root is always self-issued it doesn't count towards the limit, and since it's the last certificate (or the first depending on how you count) in a chain, it's pretty much pointless to configure a pathLen constrain directly on a Root Certificate.

Another relevant resource are the Baseline Requirements of the CA/Browser Forum (currently v2.0.2). Section 7.1.2.1.4 "Root CA Basic Constraints" describes it as NOT RECOMMENDED for a Root CA.

Last but not least there is the awesome x509 Limbo project which has a section for validating pathLen constrains. Since the RFC 5280 based assumption is that self signed certs do not count, they do not check a case with such a constrain on the Root itself, and what the implementations do about it. So the assumption right now is that they properly ignore it.

Summary: It's pointless to set the pathLen constrain on the Root Certificate, so just don't do it.

Cryptogram Class-Action Lawsuit against Google’s Incognito Mode

The lawsuit has been settled:

Google has agreed to delete “billions of data records” the company collected while users browsed the web using Incognito mode, according to documents filed in federal court in San Francisco on Monday. The agreement, part of a settlement in a class action lawsuit filed in 2020, caps off years of disclosures about Google’s practices that shed light on how much data the tech giant siphons from its users­—even when they’re in private-browsing mode.

Under the terms of the settlement, Google must further update the Incognito mode “splash page” that appears anytime you open an Incognito mode Chrome window after previously updating it in January. The Incognito splash page will explicitly state that Google collects data from third-party websites “regardless of which browsing or browser mode you use,” and stipulate that “third-party sites and apps that integrate our services may still share information with Google,” among other changes. Details about Google’s private-browsing data collection must also appear in the company’s privacy policy.

I was an expert witness for the prosecution (that’s the class, against Google). I don’t know if my declarations and deposition will become public.

Planet DebianBits from Debian: Bits from the DPL

Dear Debianites

This morning I decided to just start writing Bits from DPL and send whatever I have by 18:00 local time. Here it is, barely proof read, along with all it's warts and grammar mistakes! It's slightly long and doesn't contain any critical information, so if you're not in the mood, don't feel compelled to read it!

Get ready for a new DPL!

Soon, the voting period will start to elect our next DPL, and my time as DPL will come to an end. Reading the questions posted to the new candidates on debian-vote, it takes quite a bit of restraint to not answer all of them myself, I think I can see how that aspect contributed to me being reeled in to running for DPL! In total I've done so 5 times (the first time I ran, Sam was elected!).

Good luck to both Andreas and Sruthi, our current DPL candidates! I've already started working on preparing handover, and there's multiple request from teams that have came in recently that will have to wait for the new term, so I hope they're both ready to hit the ground running!

Things that I wish could have gone better

Communication

Recently, I saw a t-shirt that read:

Adulthood is saying, 'But after this week things will slow down a bit' over and over until you die.

I can relate! With every task, crisis or deadline that appears, I think that once this is over, I'll have some more breathing space to get back to non-urgent, but important tasks. "Bits from the DPL" was something I really wanted to get right this last term, and clearly failed spectacularly. I have two long Bits from the DPL drafts that I never finished, I tend to have prioritised problems of the day over communication. With all the hindsight I have, I'm not sure which is better to prioritise, I do rate communication and transparency very highly and this is really the top thing that I wish I could've done better over the last four years.

On that note, thanks to people who provided me with some kind words when I've mentioned this to them before. They pointed out that there are many other ways to communicate and be in touch with the community, and they mentioned that they thought that I did a good job with that.

Since I'm still on communication, I think we can all learn to be more effective at it, since it's really so important for the project. Every time I publicly spoke about us spending more money, we got more donations. People out there really like to see how we invest funds in to Debian, instead of just making it heap up. DSA just spent a nice chunk on money on hardware, but we don't have very good visibility on it. It's one thing having it on a public line item in SPI's reporting, but it would be much more exciting if DSA could provide a write-up on all the cool hardware they're buying and what impact it would have on developers, and post it somewhere prominent like debian-devel-announce, Planet Debian or Bits from Debian (from the publicity team).

I don't want to single out DSA there, it's difficult and affects many other teams. The Salsa CI team also spent a lot of resources (time and money wise) to extend testing on AMD GPUs and other AMD hardware. It's fantastic and interesting work, and really more people within the project and in the outside world should know about it!

I'm not going to push my agendas to the next DPL, but I hope that they continue to encourage people to write about their work, and hopefully at some point we'll build enough excitement in doing so that it becomes a more normal part of our daily work.

Founding Debian as a standalone entity

This was my number one goal for the project this last term, which was a carried over item from my previous terms.

I'm tempted to write everything out here, including the problem statement and our current predicaments, what kind of ground work needs to happen, likely constitutional changes that need to happen, and the nature of the GR that would be needed to make such a thing happen, but if I start with that, I might not finish this mail.

In short, I 100% believe that this is still a very high ranking issue for Debian, and perhaps after my term I'd be in a better position to spend more time on this (hmm, is this an instance of "The grass is always better on the other side", or "Next week will go better until I die?"). Anyway, I'm willing to work with any future DPL on this, and perhaps it can in itself be a delegation tasked to properly explore all the options, and write up a report for the project that can lead to a GR.

Overall, I'd rather have us take another few years and do this properly, rather than rush into something that is again difficult to change afterwards. So while I very much wish this could've been achieved in the last term, I can't say that I have any regrets here either.

My terms in a nutshell

COVID-19 and Debian 11 era

My first term in 2020 started just as the COVID-19 pandemic became known to spread globally. It was a tough year for everyone, and Debian wasn't immune against its effects either. Many of our contributors got sick, some have lost loved ones (my father passed away in March 2020 just after I became DPL), some have lost their jobs (or other earners in their household have) and the effects of social distancing took a mental and even physical health toll on many. In Debian, we tend to do really well when we get together in person to solve problems, and when DebConf20 got cancelled in person, we understood that that was necessary, but it was still more bad news in a year we had too much of it already.

I can't remember if there was ever any kind of formal choice or discussion about this at any time, but the DebConf video team just kind of organically and spontaneously became the orga team for an online DebConf, and that lead to our first ever completely online DebConf. This was great on so many levels. We got to see each other's faces again, even though it was on screen. We had some teams talk to each other face to face for the first time in years, even though it was just on a Jitsi call. It had a lasting cultural change in Debian, some teams still have video meetings now, where they didn't do that before, and I think it's a good supplement to our other methods of communication.

We also had a few online Mini-DebConfs that was fun, but DebConf21 was also online, and by then we all developed an online conference fatigue, and while it was another good online event overall, it did start to feel a bit like a zombieconf and after that, we had some really nice events from the Brazillians, but no big global online community events again. In my opinion online MiniDebConfs can be a great way to develop our community and we should spend some further energy into this, but hey! This isn't a platform so let me back out of talking about the future as I see it...

Despite all the adversity that we faced together, the Debian 11 release ended up being quite good. It happened about a month or so later than what we ideally would've liked, but it was a solid release nonetheless. It turns out that for quite a few people, staying inside for a few months to focus on Debian bugs was quite productive, and Debian 11 ended up being a very polished release.

During this time period we also had to deal with a previous Debian Developer that was expelled for his poor behaviour in Debian, who continued to harass members of the Debian project and in other free software communities after his expulsion. This ended up being quite a lot of work since we had to take legal action to protect our community, and eventually also get the police involved. I'm not going to give him the satisfaction by spending too much time talking about him, but you can read our official statement regarding Daniel Pocock here: https://www.debian.org/News/2021/20211117

In late 2021 and early 2022 we also discussed our general resolution process, and had two consequent votes to address some issues that have affected past votes:

In my first term I addressed our delegations that were a bit behind, by the end of my last term all delegation requests are up to date. There's still some work to do, but I'm feeling good that I get to hand this over to the next DPL in a very decent state. Delegation updates can be very deceiving, sometimes a delegation is completely re-written and it was just 1 or 2 hours of work. Other times, a delegation updated can contain one line that has changed or a change in one team member that was the result of days worth of discussion and hashing out differences.

I also received quite a few requests either to host a service, or to pay a third-party directly for hosting. This was quite an admin nightmare, it either meant we had to manually do monthly reimbursements to someone, or have our TOs create accounts/agreements at the multiple providers that people use. So, after talking to a few people about this, we founded the DebianNet team (we could've admittedly chosen a better name, but that can happen later on) for providing hosting at two different hosting providers that we have agreement with so that people who host things under debian.net have an easy way to host it, and then at the same time Debian also has more control if a site maintainer goes MIA.

More info: https://wiki.debian.org/Teams/DebianNet

You might notice some Openstack mentioned there, we had some intention to set up a Debian cloud for hosting these things, that could also be used for other additional Debiany things like archive rebuilds, but these have so far fallen through. We still consider it a good idea and hopefully it will work out some other time (if you're a large company who can sponsor few racks and servers, please get in touch!)

DebConf22 and Debian 12 era

DebConf22 was the first time we returned to an in-person DebConf. It was a bit smaller than our usual DebConf - understandably so, considering that there were still COVID risks and people who were at high risk or who had family with high risk factors did the sensible thing and stayed home.

After watching many MiniDebConfs online, I also attended my first ever MiniDebConf in Hamburg. It still feels odd typing that, it feels like I should've been at one before, but my location makes attending them difficult (on a side-note, a few of us are working on bootstrapping a South African Debian community and hopefully we can pull off MiniDebConf in South Africa later this year).

While I was at the MiniDebConf, I gave a talk where I covered the evolution of firmware, from the simple e-proms that you'd find in old printers to the complicated firmware in modern GPUs that basically contain complete operating systems- complete with drivers for the device their running on. I also showed my shiny new laptop, and explained that it's impossible to install that laptop without non-free firmware (you'd get a black display on d-i or Debian live). Also that you couldn't even use an accessibility mode with audio since even that depends on non-free firmware these days.

Steve, from the image building team, has said for a while that we need to do a GR to vote for this, and after more discussion at DebConf, I kept nudging him to propose the GR, and we ended up voting in favour of it. I do believe that someone out there should be campaigning for more free firmware (unfortunately in Debian we just don't have the resources for this), but, I'm glad that we have the firmware included. In the end, the choice comes down to whether we still want Debian to be installable on mainstream bare-metal hardware.

At this point, I'd like to give a special thanks to the ftpmasters, image building team and the installer team who worked really hard to get the changes done that were needed in order to make this happen for Debian 12, and for being really proactive for remaining niggles that was solved by the time Debian 12.1 was released.

The included firmware contributed to Debian 12 being a huge success, but it wasn't the only factor. I had a list of personal peeves, and as the hard freeze hit, I lost hope that these would be fixed and made peace with the fact that Debian 12 would release with those bugs. I'm glad that lots of people proved me wrong and also proved that it's never to late to fix bugs, everything on my list got eliminated by the time final freeze hit, which was great! We usually aim to have a release ready about 2 years after the previous release, sometimes there are complications during a freeze and it can take a bit longer. But due to the excellent co-ordination of the release team and heavy lifting from many DDs, the Debian 12 release happened 21 months and 3 weeks after the Debian 11 release. I hope the work from the release team continues to pay off so that we can achieve their goals of having shorter and less painful freezes in the future!

Even though many things were going well, the ongoing usr-merge effort highlighted some social problems within our processes. I started typing out the whole history of usrmerge here, but it's going to be too long for the purpose of this mail. Important questions that did come out of this is, should core Debian packages be team maintained? And also about how far the CTTE should really be able to override a maintainer. We had lots of discussion about this at DebConf22, but didn't make much concrete progress. I think that at some point we'll probably have a GR about package maintenance. Also, thank you to Guillem who very patiently explained a few things to me (after probably having have to done so many times to others before already) and to Helmut who have done the same during the MiniDebConf in Hamburg. I think all the technical and social issues here are fixable, it will just take some time and patience and I have lots of confidence in everyone involved.

UsrMerge wiki page: https://wiki.debian.org/UsrMerge

DebConf 23 and Debian 13 era

DebConf23 took place in Kochi, India. At the end of my Bits from the DPL talk there, someone asked me what the most difficult thing I had to do was during my terms as DPL. I answered that nothing particular stood out, and even the most difficult tasks ended up being rewarding to work on. Little did I know that my most difficult period of being DPL was just about to follow. During the day trip, one of our contributors, Abraham Raji, passed away in a tragic accident. There's really not anything anyone could've done to predict or stop it, but it was devastating to many of us, especially the people closest to him. Quite a number of DebConf attendees went to his funeral, wearing the DebConf t-shirts he designed as a tribute. It still haunts me when I saw his mother scream "He was my everything! He was my everything!", this was by a large margin the hardest day I've ever had in Debian, and I really wasn't ok for even a few weeks after that and I think the hurt will be with many of us for some time to come. So, a plea again to everyone, please take care of yourself! There's probably more people that love you than you realise.

A special thanks to the DebConf23 team, who did a really good job despite all the uphills they faced (and there were many!).

As DPL, I think that planning for a DebConf is near to impossible, all you can do is show up and just jump into things. I planned to work with Enrico to finish up something that will hopefully save future DPLs some time, and that is a web-based DD certificate creator instead of having the DPL do so manually using LaTeX. It already mostly works, you can see the work so far by visiting https://nm.debian.org/person/ACCOUNTNAME/certificate/ and replacing ACCOUNTNAME with your Debian account name, and if you're a DD, you should see your certificate. It still needs a few minor changes and a DPL signature, but at this point I think that will be finished up when the new DPL start. Thanks to Enrico for working on this!

Since my first term, I've been trying to find ways to improve all our accounting/finance issues. Tracking what we spend on things, and getting an annual overview is hard, especially over 3 trusted organisations. The reimbursement process can also be really tedious, especially when you have to provide files in a certain order and combine them into a PDF. So, at DebConf22 we had a meeting along with the treasurer team and Stefano Rivera who said that it might be possible for him to work on a new system as part of his Freexian work. It worked out, and Freexian funded the development of the system since then, and after DebConf23 we handled the reimbursements for the conference via the new reimbursements site: https://reimbursements.debian.net/

It's still early days, but over time it should be linked to all our TOs and we'll use the same category codes across the board. So, overall, our reimbursement process becomes a lot simpler, and also we'll be able to get information like how much money we've spent on any category in any period. It will also help us to track how much money we have available or how much we spend on recurring costs. Right now that needs manual polling from our TOs. So I'm really glad that this is a big long-standing problem in the project that is being fixed.

For Debian 13, we're waving goodbye to the KFreeBSD and mipsel ports. But we're also gaining riscv64 and loongarch64 as release architectures! I have 3 different RISC-V based machines on my desk here that I haven't had much time to work with yet, you can expect some blog posts about them soon after my DPL term ends!

As Debian is a unix-like system, we're affected by the Year 2038 problem, where systems that uses 32 bit time in seconds since 1970 run out of available time and will wrap back to 1970 or have other undefined behaviour. A detailed wiki page explains how this works in Debian, and currently we're going through a rather large transition to make this possible.

I believe this is the right time for Debian to be addressing this, we're still a bit more than a year away for the Debian 13 release, and this provides enough time to test the implementation before 2038 rolls along.

Of course, big complicated transitions with dependency loops that causes chaos for everyone would still be too easy, so this past weekend (which is a holiday period in most of the west due to Easter weekend) has been filled with dealing with an upstream bug in xz-utils, where a backdoor was placed in this key piece of software. An Ars Technica covers it quite well, so I won't go into all the details here. I mention it because I want to give yet another special thanks to everyone involved in dealing with this on the Debian side. Everyone involved, from the ftpmasters to security team and others involved were super calm and professional and made quick, high quality decisions. This also lead to the archive being frozen on Saturday, this is the first time I've seen this happen since I've been a DD, but I'm sure next week will go better!

Looking forward

It's really been an honour for me to serve as DPL. It might well be my biggest achievement in my life. Previous DPLs range from prominent software engineers to game developers, or people who have done things like complete Iron Man, run other huge open source projects and are part of big consortiums. Ian Jackson even authored dpkg and is now working on the very interesting tag2upload service!

I'm a relative nobody, just someone who grew up as a poor kid in South Africa, who just really cares about Debian a lot. And, above all, I'm really thankful that I didn't do anything major to screw up Debian for good.

Not unlike learning how to use Debian, and also becoming a Debian Developer, I've learned a lot from this and it's been a really valuable growth experience for me.

I know I can't possible give all the thanks to everyone who deserves it, so here's a big big thanks to everyone who have worked so hard and who have put in many, many hours to making Debian better, I consider you all heroes!

-Jonathan

Cryptogram Magic Security Dust

Adam Shostack is selling magic security dust.

It’s about time someone is commercializing this essential technology.

Rondam RamblingsFeynman, bullies, and invisible pink unicorns

This is the second installment in what I hope will turn out to be a long series about the scientific method.  In this segment I want to give three examples of how the scientific method, which I described in the first installment, can be applied to situations that are not usually considered "science-y".  By doing this I hope to show you how the scientific method can be used without any

Worse Than FailureCodeSOD: Exceptional Feeds

Joe sends us some Visual Basic .NET exception handling. Let's see if you can spot what's wrong?

Catch ex1 As Exception

    ' return the cursor
    Me.Cursor = Cursors.Default

    ' tell a story
    MessageBox.Show(ex1.Message)
    Return

End Try

This code catches the generic exception, meddles with the cursor a bit, and then pops up a message box to alert the user to something that went wrong. I don't love putting the raw exception in the message box, but this is hardly a WTF, is it?

Catch ex2 As Exception

    ' snitch
    MessageBox.Show(ex2.ToString(), "RSS Feed Initialization Failure")

End Try

Elsewhere in the application. Okay, I also don't love the exN naming convention either, but where's the WTF?

Well, the fact that they're initializing an RSS feed is a hint- this isn't an RSS reader client, it's an RSS serving web app. This runs on the server side, and any message boxes that get popped up aren't going to the end user.

Now, I haven't seen this precise thing done in VB .Net, only in Classic ASP, where you could absolutely open message boxes on the web server. I'd hope that in ASP .Net, something would stop you from doing that. I'd hope.

Otherwise, I've found the worst logging system you could make.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsHow To Best Carve Light

Author: Majoki So not painterly. Not even close. Too pixelated. Too blurred at the edges of reality. Not a good start in your first soloverse. Always so much to learn. Tamp down the expectations, go back and study the masters. Phidias. Caravaggio. Kurosawa. Leibovitz. Marquez. Einstein. Know their mediums. Stone. Canvas. Film. Page. Chalkboard. Seek […]

The post How To Best Carve Light appeared first on 365tomorrows.

,

Planet DebianBen Hutchings: FOSS activity in March 2024

Planet DebianColin Watson: Free software activity in March 2024

My Debian contributions this month were all sponsored by Freexian.

Planet DebianSimon Josefsson: Towards reproducible minimal source code tarballs? On *-src.tar.gz

While the work to analyze the xz backdoor is in progress, several ideas have been suggested to improve the entire software supply chain ecosystem. Some of those ideas are good, some of the ideas are at best irrelevant and harmless, and some suggestions are plain bad. I’d like to attempt to formalize one idea (remains to be see in which category it belongs), which have been discussed before, but the context in which the idea can be appreciated have not been as clear as it is today.

  1. Reproducible source tarballs. The idea is that published source tarballs should be possible to reproduce independently somehow, and that this should be continuously tested and verified — preferrably as part of the upstream project continuous integration system (e.g., GitHub action or GitLab pipeline). While nominally this looks easy to achieve, there are some complex matters in this, for example: what timestamps to use for files in the tarball? I’ve brought up this aspect before.
  2. Minimal source tarballs without generated vendor files. Most GNU Autoconf/Automake-based tarballs pre-generated files which are important for bootstrapping on exotic systems that does not have the required dependencies. For the bootstrapping story to succeed, this approach is important to support. However it has become clear that this practice raise significant costs and risks. Most modern GNU/Linux distributions have all the required dependencies and actually prefers to re-build everything from source code. These pre-generated extra files introduce uncertainty to that process.

My strawman proposal to improve things is to define new tarball format *-src.tar.gz with at least the following properties:

  1. The tarball should allow users to build the project, which is the entire purpose of all this. This means that at least all source code for the project has to be included.
  2. The tarballs should be signed, for example with PGP or minisign.
  3. The tarball should be possible to reproduce bit-by-bit by a third party using upstream’s version controlled sources and a pointer to which revision was used (e.g., git tag or git commit).
  4. The tarball should not require an Internet connection to download things.
    • Corollary: every external dependency either has to be explicitly documented as such (e.g., gcc and GnuTLS), or included in the tarball.
    • Observation: This means including all *.po gettext translations which are normally downloaded when building from version controlled sources.
  5. The tarball should contain everything required to build the project from source using as much externally released versioned tooling as possible. This is the “minimal” property lacking today.
    • Corollary: This means including a vendored copy of OpenSSL or libz is not acceptable: link to them as external projects.
    • Open question: How about non-released external tooling such as gnulib or autoconf archive macros? This is a bit more delicate: most distributions either just package one current version of gnulib or autoconf archive, not previous versions. While this could change, and distributions could package the gnulib git repository (up to some current version) and the autoconf archive git repository — and packages were set up to extract the version they need (gnulib’s ./bootstrap already supports this via the –gnulib-refdir parameter), this is not normally in place.
    • Suggested Corollary: The tarball should contain content from git submodule’s such as gnulib and the necessary Autoconf archive M4 macros required by the project.
  6. Similar to how the GNU project specify the ./configure interface we need a documented interface for how to bootstrap the project. I suggest to use the already well established idiom of running ./bootstrap to set up the package to later be able to be built via ./configure. Of course, some projects are not using the autotool ./configure interface and will not follow this aspect either, but like most build systems that compete with autotools have instructions on how to build the project, they should document similar interfaces for bootstrapping the source tarball to allow building.

If tarballs that achieve the above goals were available from popular upstream projects, distributions could more easily use them instead of current tarballs that include pre-generated content. The advantage would be that the build process is not tainted by “unnecessary” files. We need to develop tools for maintainers to create these tarballs, similar to make dist that generate today’s foo-1.2.3.tar.gz files.

I think one common argument against this approach will be: Why bother with all that, and just use git-archive outputs? Or avoid the entire tarball approach and move directly towards version controlled check outs and referring to upstream releases as git URL and commit tag or id. My counter-argument is that this optimize for packagers’ benefits at the cost of upstream maintainers: most upstream maintainers do not want to store gettext *.po translations in their source code repository. A compromise between the needs of maintainers and packagers is useful, so this *-src.tar.gz tarball approach is the indirection we need to solve that.

What do you think?

Planet DebianArturo Borrero González: Kubecon and CloudNativeCon 2024 Europe summary

Kubecon EU 2024 Paris logo

This blog post shares my thoughts on attending Kubecon and CloudNativeCon 2024 Europe in Paris. It was my third time at this conference, and it felt bigger than last year’s in Amsterdam. Apparently it had an impact on public transport. I missed part of the opening keynote because of the extremely busy rush hour tram in Paris.

On Artificial Intelligence, Machine Learning and GPUs

Talks about AI, ML, and GPUs were everywhere this year. While it wasn’t my main interest, I did learn about GPU resource sharing and power usage on Kubernetes. There were also ideas about offering Models-as-a-Service, which could be cool for Wikimedia Toolforge in the future.

See also:

On security, policy and authentication

This was probably the main interest for me in the event, given Wikimedia Toolforge was about to migrate away from Pod Security Policy, and we were currently evaluating different alternatives.

In contrast to my previous attendances to Kubecon, where there were three policy agents with presence in the program schedule, Kyverno, Kubewarden and OpenPolicyAgent (OPA), this time only OPA had the most relevant sessions.

One surprising bit I got from one of the OPA sessions was that it could work to authorize linux PAM sessions. Could this be useful for Wikimedia Toolforge?

OPA talk

I attended several sessions related to authentication topics. I discovered the keycloak software, which looks very promising. I also attended an Oauth2 session which I had a hard time following, because I clearly missed some additional knowledge about how Oauth2 works internally.

I also attended a couple of sessions that ended up being a vendor sales talk.

See also:

On container image builds, harbor registry, etc

This topic was also of interest to me because, again, it is a core part of Wikimedia Toolforge.

I attended a couple of sessions regarding container image builds, including topics like general best practices, image minimization, and buildpacks. I learned about kpack, which at first sight felt like a nice simplification of how the Toolforge build service was implemented.

I also attended a session by the Harbor project maintainers where they shared some valuable information on things happening soon or in the future , for example:

  • new harbor command line interface coming soon. Only the first iteration though.
  • harbor operator, to install and manage harbor. Looking for new maintainers, otherwise going to be archived.
  • the project is now experimenting with adding support to hosting more artifacts: maven, NPM, pypi. I wonder if they will consider hosting Debian .deb packages.

On networking

I attended a couple of sessions regarding networking.

One session in particular I paid special attention to, ragarding on network policies. They discussed new semantics being added to the Kubernetes API.

The different layers of abstractions being added to the API, the different hook points, and override layers clearly resembled (to me at least) the network packet filtering stack of the linux kernel (netfilter), but without the 20 (plus) years of experience building the right semantics and user interfaces.

Network talk

I very recently missed some semantics for limiting the number of open connections per namespace, see Phabricator T356164: [toolforge] several tools get periods of connection refused (104) when connecting to wikis This functionality should be available in the lower level tools, I mean Netfilter. I may submit a proposal upstream at some point, so they consider adding this to the Kubernetes API.

Final notes

In general, I believe I learned many things, and perhaps even more importantly I re-learned some stuff I had forgotten because of lack of daily exposure. I’m really happy that the cloud native way of thinking was reinforced in me, which I still need because most of my muscle memory to approach systems architecture and engineering is from the old pre-cloud days. That being said, I felt less engaged with the content of the conference schedule compared to last year. I don’t know if the schedule itself was less interesting, or that I’m losing interest?

Finally, not an official track in the conference, but we met a bunch of folks from Wikimedia Deutschland. We had a really nice time talking about how wikibase.cloud uses Kubernetes, whether they could run in Wikimedia Cloud Services, and why structured data is so nice.

Group photo

Worse Than FailureTaking Up Space

April Fool's day is a day where websites lie to you or create complex pranks. We've generally avoided the former, but have done a few of the latter, but we also like to just use April Fool's as a chance to change things up.

So today, we're going to do something different. We're going to talk about my Day Job. Specifically, we're going to talk about a tool I use in my day job: cFS.

cFS is a NASA-designed architecture for designing spaceflight applications. It's open source, and designed to be accessible. A lot of the missions NASA launches use cFS, which gives it a lovely proven track record. And it was designed and built by people much smarter than me. Which doesn't mean it's free of WTFs.

The Big Picture

cFS is a C framework for spaceflight, designed to run on real-time OSes, though fully capable of running on Linux (with or without a realtime kernel), and even Windows. It has three core modules- a Core Flight Executive (cFE) (which provides services around task management, and cross-task communication), the OS Abstraction Layer (helping your code be portable across OSes), and a Platform Support Package (low-level support for board-connected hardware). Its core concept is that you build "apps", and the whole thing has a pitch about an app store. We'll come back to that. What exactly is an app in cFS?

Well, at their core, "apps" are just Actors. They're a block of code with its own internal state, that interacts with other modules via message passing, but basically runs as its own thread (or a realtime task, or whatever your OS appropriate abstraction is).

These applications are wired together by a cFS feature called the "Core Flight Executive Software Bus" (cFE Software Bus, or just Software Bus), which handles managing subscriptions and routing. Under the hood, this leverages an OS-level message queue abstraction. Since each "app" has its own internal memory, and only reacts to messages (or emits messages for others to react to), we avoid pretty much all of the main pitfalls of concurrency.

This all feeds into the single responsibility principle, giving each "app" one job to do. And while we're throwing around buzzwords, it also grants us encapsulation (each "app" has its own memory space, unshared), and helps us design around interfaces- "apps" emit and receive certain messages, which defines their interface. It's almost like full object oriented programming in C, or something like how the BeamVM languages (Erlang, Elixir) work.

The other benefit of this is that we can have reusable apps which provide common functionality that every mission needs. For example, the app DS (Data Storage) logs any messages that cross the software bus. LC (Limit Checker) allows you to configure expected ranges for telemetry (like, for example, the temperature you expect a sensor to report), and raise alerts if it falls out of range. There's SCH (Scheduler) which sends commands to apps to wake them up so they can do useful work (also making it easy to sleep apps indefinitely and minimize power consumption).

All in all, cFS constitutes a robust, well-tested framework for designing spaceflight applications.

Even NASA annoys me

This is TDWTF, however, so none of this is all sunshine and roses. cFS is not the prettiest framework, and the developer experience may ah… leave a lot to be desired. It's always undergoing constant improvement, which is good, but still has its pain points.

Speaking of constant improvement, let's talk about versioning. cFS is the core flight software framework which hosts your apps (via the cFE), and cFS is getting new versions. The apps themselves also get new versions. The people writing the apps and the people writing cFS are not always coordinating on this, which means that when cFS adds a breaking change to their API, you get to play the "which version of cFS and App X play nice together". And since everyone has different practices around tagging releases, you often have to walk through commits to find the last version of the app that was compatible with your version of cFS, and see things like releases tagged "compatible with Draco rc2 (mostly)". The goal of "grab apps from an App Store and they just work" is definitely not actually happening.

Or, this, from the current cFS readme:

Compatible list of cFS apps
The following applications have been tested against this release:
TBD

Messages in cFS are represented by structs. Which means when apps want to send each other messages, they need the same struct definitions. This is just a pain to manage- getting agreement about which app should own which message, who needs the definition, and how we get the definition over to them is just a huge mess. It's such a huge mess that newer versions of cFS have switched to using "Electronic Data Sheets"- XML files which describe the structs, which doesn't really solve the problem but adds XML to the mix. At least EDS makes it easy to share definitions with non-C applications (popular ground software is written in Python or Java).

Messages also have to have a unique "Message ID", but the MID is not just an arbitrary unique number. It secretly encodes important information, like whether this message is a command (an instruction to take action) or telemetry (data being output), and if you pick a bad MID, everything breaks. Also, keeping MID definitions unique across many different apps who don't know any of the other apps exist is a huge problem. The general solution that folks use is bolting on some sort of CSV file and code generator that handles this.

Those MIDs also don't exist outside of cFS- they're a unique-to-cFS abstraction. cFS, behind the scenes, converts them to different parts of the "space packet header", which is the primary packet format for the SpaceWire networking protocol. This means that in realistic deployments where your cFS module needs to talk to components not running cFS- your MID also represents key header fields for the SpaceWire network. It's incredibly overloaded and the conversions are hidden behind C macros that you can't easily debug.

But my biggest gripe is the build tooling. Everyone at work knows they can send me climbing the walls by just whispering "cFS builds" in my ear. It's a nightmare (that, I believe has gotten better in newer versions, but due to the whole "no synchronization between app and cFE versions" problem, we're not using a new version). It starts with make, which calls CMake, which also calls make, but also calls CMake again in a way that doesn't let variables propagate down to other layers. cFS doesn't provide any targets you link against, but instead requires that any apps you want to use be inserted into the cFS source tree directly, which makes it incredibly difficult to build just parts of cFS for unit testing.

Oh, speaking of unit testing- cFS provides mocks of all of its internal functions; mocks which always return an error code. This is intentional, to encourage developers to test their failure paths in code, but I also like to test our success path too.

Summary

Any tool you use on a regular basis is going to be a tool that you're intimately familiar with; the good parts frequently vanish into the background and the pain points are the things that you notice, day in, day out. That's definitely how I feel after working with cFS for two years.

I think, at its core, the programming concepts it brings to doing low-level, embedded C, are good. It's certainly better than needing to write this architecture myself. And for all its warts, it's been designed and developed by people who are extremely safety conscious and expect you to be too. It's been used on many missions, from hobbyist cube sats to Mars rovers, and that proven track record gives you a good degree of confidence that your mission will also be safe using it.

And since it is Open Source, you can try it out yourself. The cFS-101 guide gives you a good starting point, complete with a downloadable VM that walks you through building a cFS application and communicating with it from simulated ground software. It's a very different way to approach C programming (and makes it easier to comply with C standards, like MISRA), and honestly, the Actor-oriented mindset is a good attitude to bring to many different architectural problems.

Peregrine

If you were following space news at all, you may already know that our Peregrine lander failed. I can't really say anything about that until the formal review has released its findings, but all indications are that it was very much a hardware problem involving valves and high pressure tanks. But I can say that most of the avionics on it were connected to some sort of cFS instance (there were several cFS nodes).

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

365 TomorrowsTwo Guns

Author: Julian Miles, Staff Writer That shadow is cast by scenery. The one next to it likewise. The third I’m not sure about, and the fourth will become scenery if the body isn’t found. Sniping lasers give nothing away when killing, although the wounds are distinctive. By the time they’re identifying that, I’ll be off […]

The post Two Guns appeared first on 365tomorrows.

xkcdEclipse Coolness

Cryptogram Ross Anderson

Ross Anderson unexpectedly passed away Thursday night in, I believe, his home in Cambridge.

I can’t remember when I first met Ross. Of course it was before 2008, when we created the Security and Human Behavior workshop. It was well before 2001, when we created the Workshop on Economics and Information Security. (Okay, he created both—I helped.) It was before 1998, when we wrote about the problems with key escrow systems. I was one of the people he brought to the Newton Institute, at Cambridge University, for the six-month cryptography residency program he ran (I mistakenly didn’t stay the whole time)—that was in 1996.

I know I was at the first Fast Software Encryption workshop in December 1993, another conference he created. There I presented the Blowfish encryption algorithm. Pulling an old first-edition of Applied Cryptography (the one with the blue cover) down from the shelf, I see his name in the acknowledgments. Which means that sometime in early 1993—probably at Eurocrypt in Lofthus, Norway—I, as an unpublished book author who had only written a couple of crypto articles for Dr. Dobb’s Journal, asked him to read and comment on my book manuscript. And he said yes. Which means I mailed him a paper copy. And he read it. And mailed his handwritten comments back to me. In an envelope with stamps. Because that’s how we did it back then.

I have known Ross for over thirty years, as both a colleague and a friend. He was enthusiastic, brilliant, opinionated, articulate, curmudgeonly, and kind. Pick up any of his academic papers—there are many—and odds are that you will find a least one unexpected insight. He was a cryptographer and security engineer, but also very much a generalist. He published on block cipher cryptanalysis in the 1990s, and the security of large-language models last year. He started conferences like nobody’s business. His masterwork book, Security Engineering—now in its third edition—is as comprehensive a tome on cybersecurity and related topics as you could imagine. (Also note his fifteen-lecture video series on that same page. If you have never heard Ross lecture, you’re in for a treat.) He was the first person to understand that security problems are often actually economic problems. He was the first person to make a lot of those sorts of connections. He fought against surveillance and backdoors, and for academic freedom. He didn’t suffer fools in either government or the corporate world.

He’s listed in the acknowledgments as a reader of every one of my books from Beyond Fear on. Recently, we’d see each other a couple of times a year: at this or that workshop or event. The last time I saw him was last June, at SHB 2023, in Pittsburgh. We were having dinner on Alessandro Acquisti‘s rooftop patio, celebrating another successful workshop. He was going to attend my Workshop on Reimagining Democracy in December, but he had to cancel at the last minute. (He sent me the talk he was going to give. I will see about posting it.) The day before he died, we were discussing how to accommodate everyone who registered for this year’s SHB workshop. I learned something from him every single time we talked. And I am not the only one.

My heart goes out to his wife Shireen and his family. We lost him much too soon.

Cryptogram Declassified NSA Newsletters

Through a 2010 FOIA request (yes, it took that long), we have copies of the NSA’s KRYPTOS Society Newsletter, “Tales of the Krypt,” from 1994 to 2003.

There are many interesting things in the 800 pages of newsletter. There are many redactions. And a 1994 review of Applied Cryptography by redacted:

Applied Cryptography, for those who don’t read the internet news, is a book written by Bruce Schneier last year. According to the jacket, Schneier is a data security expert with a master’s degree in computer science. According to his followers, he is a hero who has finally brought together the loose threads of cryptography for the general public to understand. Schneier has gathered academic research, internet gossip, and everything he could find on cryptography into one 600-page jumble.

The book is destined for commercial success because it is the only volume in which everything linked to cryptography is mentioned. It has sections on such-diverse topics as number theory, zero knowledge proofs, complexity, protocols, DES, patent law, and the Computer Professionals for Social Responsibility. Cryptography is a hot topic just now, and Schneier stands alone in having written a book on it which can be browsed: it is not too dry.

Schneier gives prominence to applications with large sections.on protocols and source code. Code is given for IDEA, FEAL, triple-DES, and other algorithms. At first glance, the book has the look of an encyclopedia of cryptography. Unlike an encyclopedia, however, it can’t be trusted for accuracy.

Playing loose with the facts is a serious problem with Schneier. For example in discussing a small-exponent attack on RSA, he says “an attack by Michael Wiener will recover e when e is up to one quarter the size of n.” Actually, Wiener’s attack recovers the secret exponent d when e has less than one quarter as many bits as n, which is a quite different statement. Or: “The quadratic sieve is the fastest known algorithm for factoring numbers less than 150 digits…. The number field sieve is the fastest known factoring algorithm, although the quadratric sieve is still faster for smaller numbers (the break even point is between 110 and 135 digits).” Throughout the book, Schneier leaves the impression of sloppiness, of a quick and dirty exposition. The reader is subjected to the grunge of equations, only to be confused or misled. The large number of errors compounds the problem. A recent version of the errata (Schneier publishes updates on the internet) is fifteen pages and growing, including errors in diagrams, errors in the code, and errors in the bibliography.

Many readers won’t notice that the details are askew. The importance of the book is that it is the first stab at.putting the whole subject in one spot. Schneier aimed to provide a “comprehensive reference work for modern cryptography.” Comprehensive it is. A trusted reference it is not.

Ouch. But I will not argue that some of my math was sloppy, especially in the first edition (with the blue cover, not the red cover).

A few other highlights:

  • 1995 Kryptos Kristmas Kwiz, pages 299–306
  • 1996 Kryptos Kristmas Kwiz, pages 414–420
  • 1998 Kryptos Kristmas Kwiz, pages 659–665
  • 1999 Kryptos Kristmas Kwiz, pages 734–738
  • Dundee Society Introductory Placement Test (from questions posed by Lambros Callimahos in his famous class), pages 771–773
  • R. Dale Shipp’s Principles of Cryptanalytic Diagnosis, pages 776–779
  • Obit of Jacqueline Jenkins-Nye (Bill Nye the Science Guy’s mother), pages 755–756
  • A praise of Pi, pages 694–696
  • A rant about Acronyms, pages 614–615
  • A speech on women in cryptology, pages 593–599

Cory DoctorowSubprime gadgets

A group of child miners, looking grimy and miserable, standing in a blasted gravel wasteland. To their left, standing on a hill, is a club-wielding, mad-eyed, top-hatted debt collector, brandishing a document bearing an Android logo.

Today for my podcast, I read Subprime gadgets, originally published in my Pluralistic blog:

I recorded this on a day when I was home between book-tour stops (I’m out with my new techno crime-thriller, The Bezzle). Catch me on April 11 in Boston with Randall Munroe, on April 12th in Providence, Rhode Island, then onto Chicago, Torino, Winnipeg, Calgary, Vancouver and beyond! The canonical link for the schedule is here.


The promise of feudal security: “Surrender control over your digital life so that we, the wise, giant corporation, can ensure that you aren’t tricked into catastrophic blunders that expose you to harm”:

https://locusmag.com/2021/01/cory-doctorow-neofeudalism-and-the-digital-manor/

The tech giant is a feudal warlord whose platform is a fortress; move into the fortress and the warlord will defend you against the bandits roaming the lawless land beyond its walls.

That’s the promise, here’s the failure: What happens when the warlord decides to attack you? If a tech giant decides to do something that harms you, the fortress becomes a prison and the thick walls keep you in.


MP3


Here’s that tour schedule!

11 Apr: Harvard Berkman-Klein Center, with Randall Munroe
https://cyber.harvard.edu/events/enshittification

12 Apr: RISD Debates in AI, Providence

17 Apr: Anderson’s Books, Chicago, 19h:
https://www.andersonsbookshop.com/event/cory-doctorow-1

19-21 Apr: Torino Biennale Tecnologia
https://www.turismotorino.org/en/experiences/events/biennale-tecnologia

2 May, Canadian Centre for Policy Alternatives, Winnipeg
https://www.eventbrite.ca/e/cory-doctorow-tickets-798820071337

5-11 May: Tartu Prima Vista Literary Festival
https://tartu2024.ee/en/kirjandusfestival/

6-9 Jun: Media Ecology Association keynote, Amherst, NY
https://media-ecology.org/convention

(Image: Oatsy, CC BY 2.0, modified)

,

David BrinDo the Rich Have Too Much Money? A neglected posting... till now.

Want to time travel? I found this on my desktop… a roundup of news that seemed highly indicative… in early 2022!  Back in those naïve, bygone days of innocence… but now…  Heck yes, a lot of it is pertinent right now!


Like… saving market economies from their all-too-natural decay back into feudalism.


------


First, something a long time coming!  Utter proof we are seeing a Western Revival and push back against the World Oligarchic Putsch. A landmark deal agreed upon by the world's richest nations on Saturday will see a global minimum rate of corporation tax placed on multinational companies including tech giants like Amazon, Apple and Microsoft. Finance ministers from the Group of Seven, or G-7 nations, said they had agreed to having a global base corporate tax rate of at least 15 percent.  Companies  with a strong online presence, would pay taxes in the countries where they record sales, not just where they have an operational base.


It is far, far from enough! But at last some of my large scale 'suggestions' are being tried. Now Let’s get all 50 U.S. states to pass a treaty banning 'bidding wars' for factories, sports teams etc... with maybe a sliding scale tilted for poorer states or low populations. A trivially easy thing that'd save citizens hundreds of billions.


The following made oligarchs fearful of what the Pelosi bills might accomplish, if thirty years of sabotaging the IRS came to an end: 


ProPublica has obtained a vast trove of Internal Revenue Service data on the tax returns of thousands of the nation’s wealthiest people, covering more than 15 years. The data provides an unprecedented look inside the financial lives of America’s titans, including Warren Buffett, Bill Gates, Rupert Murdoch and Mark Zuckerberg. It shows not just their income and taxes, but also their investments, stock trades, gambling winnings and even the results of audits. Taken together, it demolishes the cornerstone myth of the American tax system: that everyone pays their fair share and the richest Americans pay the most. The results are stark. According to Forbes, those 25 people saw their worth rise a collective $401 billion from 2014 to 2018. They paid a total of $13.6 billion in federal income taxes in those five years, the IRS data shows. That’s a staggering sum, but it amounts to a true tax rate of only 3.4%.  

Over the longer run, what we need is the World Ownership Treaty. Nothing on Earth is 'owned' unless a human or government or nonprofit claims it openly and accountably. So much illicit property would be abandoned by criminals etc. that national debts would be erased and the rest of us could have a tax jubilee. The World Ownership Treaty has zero justified objections. If you own something... just say so.


And a minor tech note: An amazing helium airship alternates life as dirigible or water ship. Alas, it is missing some important aspects I could explain… 



== When the Rich have Too Much Money… ==


“The Nobel Prize-winning physicist Ilya Prigogine was fond of saying that the future is not so much determined by what we do in the present as our image of the future determines what we do today.” So begins the latest missive of Noema Magazine.


The Near Future: The Pew Research Center’s annual Big Challenges Report top-features my musings on energy, local production/autonomy, transparency etc., along with other top seers, like the estimable Esther Dyson, Jamais Cascio, Amy Webb & Abigail deKosnick and many others.


Among the points I raise:

  • Advances in cost-effectiveness of sustainable energy supplies will be augmented by better storage systems. This will both reduce reliance on fossil fuels and allow cities and homes to be more autonomous.
  • Urban farming methods may move to industrial scale, allowing similar moves toward local autonomy (perhaps requiring a full decade or more to show significant impact). Meat use will decline for several reasons, ensuring some degree of food security, as well.
  • Local, small-scale, on-demand manufacturing may start to show effects in 2025. If all of the above take hold, there will be surplus oceanic shipping capacity across the planet. Some of it may be applied to ameliorate (not solve) acute water shortages. Innovative uses of such vessels may range all the way to those depicted in my novel ‘Earth.’
  • Full-scale diagnostic evaluations of diet, genes and microbiome will result in micro-biotic therapies and treatments. AI appraisals of other diagnostics will both advance detection of problems and become distributed to handheld devices cheaply available to all, even poor clinics.
  • Handheld devices will start to carry detection technologies that can appraise across the spectrum, allowing NGOs and even private parties to detect and report environmental problems.
  • Socially, this extension of citizen vision will go beyond the current trend of assigning accountability to police and other authorities. Despotisms will be empowered, as predicted in ‘Nineteen Eighty-four.’ But democracies will also be empowered, as in ‘The Transparent Society.’
  • I give odds that tsunamis of revelation will crack the shields protecting many elites from disclosure of past and present torts and turpitudes. The Panama Papers and Epstein cases exhibit how fear propels the elites to combine efforts at repression. But only a few more cracks may cause the dike to collapse, revealing networks of blackmail. This is only partly technologically driven and hence is not guaranteed. If it does happen, there will be dangerous spasms by all sorts of elites, desperate to either retain status or evade consequences. But if the fever runs its course, the more transparent world will be cleaner and better run.
  • Some of those elites have grown aware of the power of 90 years of Hollywood propaganda for individualism, criticism, diversity, suspicion of authority and appreciation of eccentricity. Counter-propaganda pushing older, more traditional approaches to authority and conformity are already emerging, and they have the advantage of resonating with ancient human fears. Much will depend upon this meme war.

“Of course, much will also depend upon short-term resolution of current crises. If our systems remain undermined and sabotaged by incited civil strife and distrust of expertise, then all bets are off. You will get many answers to this canvassing fretting about the spread of ‘surveillance technologies that will empower Big Brother.’ These fears are well-grounded, but utterly myopic. First, ubiquitous cameras and facial recognition are only the beginning. Nothing will stop them and any such thought of ‘protecting’ citizens from being seen by elites is stunningly absurd, as the cameras get smaller, better, faster, cheaper, more mobile and vastly more numerous every month. Moore’s Law to the nth degree. Yes, despotisms will benefit from this trend. And hence, the only thing that matters is to prevent despotism altogether.

“In contrast, a free society will be able to apply the very same burgeoning technologies toward accountability. We are seeing them applied to end centuries of abuse by ‘bad-apple’ police who are thugs, while empowering the truly professional cops to do their jobs better. I do not guarantee light will be used this way, despite today’s spectacular example. It is an open question whether we citizens will have the gumption to apply ‘sousveillance’ upward at all elites. But Gandhi and Martin Luther King Jr. likewise were saved by crude technologies of light in their days. And history shows that assertive vision by and for the citizenry is the only method that has ever increased freedom and – yes – some degree of privacy.

A new type of digital asset - known as a non-fungible token (NFT) - has exploded in popularity during the pandemic as enthusiasts and investors scramble to spend enormous sums of money on items that only exist online. “Blockchain technology allows the items to be publicly authenticated as one-of-a-kind, unlike traditional online objects which can be endlessly reproduced.”… “ In October 2020, Miami-based art collector Pablo Rodriguez-Fraile spent almost $67,000 on a 10-second video artwork that he could have watched for free online. Last week, he sold it for $6.6 million. The video by digital artist Beeple, whose real name is Mike Winkelmann, was authenticated by blockchain, which serves as a digital signature to certify who owns it and that it is the original work.”


From The Washington Post: The post-covid luxury spending boom has begun. It’s already reshaping the economy. Consider a sealed copy of Super Mario 64 sells for $1.56M in record-breaking auction. That record didn’t last long, till August 2021. Rare copy of Super Mario Bros. sells for $2 million, the most ever paid for a video game. 


===============================


== Addendum March 30, 2024 ==


What's above was an economics rant from the past. Only now, let me also tack on something from spring 2024 (today!) that I just sent to a purported 'investment guru economist' I know. His bi-weekly newsletter regularly - and obsessively - focuses on the Federal Reserve ('Fed') and the ongoing drama of setting calibrated interest rates to fight inflation.  (The fellow never, ever talks about all the things that matter much, much more, like tax/fiscal policy, money velocity and rising wealth disparities.)


Here, he does make one cogent point about inflation... but doesn't follow up to the logical conclusion:


"This matters because the average consumer doesn’t look at benchmarks. They perceive inflation when it starts having visibly large and/or frequent effects on their lives. This is why food and gasoline prices matter so much; people buy them regularly enough to notice higher prices. Their contribution to inflation perceptions is greater than their weighting in the benchmarks."

Yes!  It is true that the poor and middle class do not borrow in order to meet basic needs. All they can do, when prices rise, is tighten their belts. Interest rates do not affect such basics.

ALSO The rich do not borrow. Because, after 40 years of Supply Side tax grifts, they have all the money! And now they are snapping up 1/3 of US housing stock with cash purchases. What Adam Smith called economically useless 'rent-seeking'. The net effect of Republican Congresses firehousing all our wealth into the gaping-open maws of parasites.

That's gradually changing, at last. The US is rapidly re-industrializing, right now! But not by borrowing. The boom in US manufacturing investment is entirely Keynesian - meaning that it's being propelled by federal infrastructure spending and the Chips Act.   Those Pelosi bills are having all of the positive effects that Austrian School  fanatics insanely promised for Supply Side... and never delivered.  

That old argument isnow  settled by facts... which never (alas) sway cultists. Pure fact. Keynes is proved. Laffer is disproved. Period.

But in that case, what's with the obsession of the Right upon the Federal Reserve? What - pray tell - is the Fed supposedly influencing, with interest rate meddling? The answer is... not much.

If you want to see what's important to oligarchy - the core issue that's got them so upset that the they will support Trump? Just look at what the GOP in Congress and the Courts is actually doing, right now! Other than "Hunter hearings" and other Benghazi-style theatrics, what Mike Johnson et. al are doing is:

- Desperately using every lever - like governemnt shut-down threats and holding hostage aid to Ukraine - to slash the coming wave of IRS audits that might affect their masters.  With that wave looming, many in oligarchy are terrified. Re-eviscerating the IRS is the top GOP priority!  But Schumer called Johnson's bluff.

- Their other clear priority is obedience to the Kremlin and blocking aid to Ukraine.

Look at what actually is happening and then please, please name for me one other actual (not polemical) priority? 

== And finally ==

Oh yeah, then there's this. 

Please don't travel April 17-21.  

That's McVeigh season. Though, if you listen to MAGA world, ALL of 2024 into 2025 could be.  

God bless the FBI.




Planet DebianJunichi Uekawa: Learning about xz and what is happening is fascinating.

Learning about xz and what is happening is fascinating. The scope of potential exploit is very large. The Open source software space is filled with many unmaintained and unreviewed software.

Planet DebianRussell Coker: Links March 2024

Bruce Schneier wrote an interesting blog post about his workshop on reimagining democracy and the unusual way he structured it [1]. It would be fun to have a security conference run like that!

Matthias write an informative blog post about Wayland “Wayland really breaks things… Just for now” which links to a blog debate about the utility of Wayland [2]. Wayland seems pretty good to me.

Cory Doctorow wrote an insightful article about the AI bubble comparing it to previous bubbles [3].

Charles Stross wrote an insightful analysis of the implications if the UK brought back military conscription [4]. Looks like the era of large armies is over.

Charles Stross wrote an informative blog post about the Worldcon in China, covering issues of vote rigging for location, government censorship vs awards, and business opportunities [5].

The Paris Review has an interesting article about speaking to the CIA’s Creative Writing Group [6]. It doesn’t explain why they have a creative writing group that has some sort of semi-official sanction.

LongNow has an insightful article about the threats to biodiversity in food crops and the threat that poses to humans [7].

Bruce Schneier and Albert Fox Cahn wrote an interesting article about the impacts of chatbots on human discourse [8]. If it makes people speak more precisely then that would be great for all Autistic people!

365 TomorrowsPast Belief

Author: Don Nigroni When everything is going well, I can’t relax. I just wait and worry for something bad to happen. So when I got a promotion last week, naturally I expected something ugly would happen, perhaps a leaky roof or maybe a hurricane. But this time, no matter how hard I looked for an […]

The post Past Belief appeared first on 365tomorrows.

,

Planet DebianSteinar H. Gunderson: xz backdooring

Andres Freund found that xz-utils is backdoored, but could not (despite the otherwise excellent analysis) get quite to the bottom of what the payload actually does.

What you would hope for to be posted by others: Further analysis of the payload.

What actually gets posted by others: “systemd is bad.”

Update: Good preliminary analysis.

365 TomorrowsProximity Suit

Author: Jeremy Nathan Marks Athabasca was a town of gas and coal. No wind or solar were allowed. Local officials said the Lord would return by fire while windmills and solar panels could only mar the landscape. And fire in a town of coal and gas was, naturally, a lovely thing. On a plain not […]

The post Proximity Suit appeared first on 365tomorrows.

,

Cryptogram Friday Squid Blogging: The Geopolitics of Eating Squid

New York Times op-ed on the Chinese dominance of the squid industry:

China’s domination in seafood has raised deep concerns among American fishermen, policymakers and human rights activists. They warn that China is expanding its maritime reach in ways that are putting domestic fishermen around the world at a competitive disadvantage, eroding international law governing sea borders and undermining food security, especially in poorer countries that rely heavily on fish for protein. In some parts of the world, frequent illegal incursions by Chinese ships into other nations’ waters are heightening military tensions. American lawmakers are concerned because the United States, locked in a trade war with China, is the world’s largest importer of seafood.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Planet DebianRaphaël Hertzog: Freexian is looking to expand its team with more Debian contributors

It’s been a while that I haven’t posted anything on my blog, the truth is that Freexian has been doing very well in the last years and that I have a hard time to allocate time to write articles or even to contribute to my usual Debian projects… the exception being debusine since that’s part of the Freexian work (have a look at our most recent announce!).

That being said, given Freexian’s growth and in the hope to reduce my workload, we are looking to extend our team with Debian members of more varied backgrounds and skills, so they can help us in areas like sales / marketing / project management. Have a look at our announce on debian-jobs@lists.debian.org.

As a mission-oriented company, we are looking to work with persons already involved in Debian (or persons who were waiting the right opportunity to get involved). All our collaborators can spend 20% of their paid work time on the Debian projects they care about.

Planet DebianRavi Dwivedi: A visit to the Taj Mahal

Note: The currency used in this post is Indian Rupees, which was around 83 INR for 1 US Dollar as that time.

I and my friend Badri visited the Taj Mahal this month. Taj Mahal is one of the main tourist destinations in India and does not need an introduction, I guess. It is in Agra, in the state of Uttar Pradesh, 188 km from Delhi by train. So, I am writing a post documenting useful information for people who are planning to visit Taj Mahal. Feel free to ask me questions about visiting the Taj Mahal.

Our retiring room at the Old Delhi Railway Station.

We had booked a train from Delhi to Agra. The name of the train was Taj Express, and its scheduled departure time from Hazrat Nizamuddin station in Delhi is 07:08 hours in the morning, and its arrival time at Agra Cantt station is 09:45. So, we booked a retiring room at the Old Delhi railway station for the previous night. This retiring room was hard to find. We woke up at 05:00 in the morning and took the metro to Hazrat Nizamuddin station. We barely reached the station in time, but anyway, the train was not yet at the station; it was late.

We reached Agra at 10:30 and checked into our retiring room, took rest and went out for Taj Mahal at 13:00 in the afternoon. Taj Mahal’s outer gate is 5 km away from the Agra Cantt station. As we were going out of the railway station, we were chased by an autorickshaw driver who offered to go to Taj Mahal for 150 INR for both of us. I asked him to bring it down to 60 INR, and after some back and forth, he agreed to drop us off at Taj Mahal for 80 INR. But I said we won’t pay anything above 60 INR. He agreed with that amount but said that he would need to fill up with more passengers. When we saw that he wasn’t making any effort in bringing more passengers, we walked away.

As soon as we got out of the railway station complex, an autorickshaw driver came to us and offered to drop us off at Taj Mahal for 20 INR if we are sharing with other passengers and 100 INR if we reserve the auto for us. We agreed to go with 20 INR per person, but he started the autorickshaw as soon as we hopped in. I thought that the third person in the auto was another passenger sharing a ride with us, but later we got to know he was with the driver. Upon reaching the outer gate of Taj Mahal, I gave him 40 INR (for both of us), and he asked to instead give 100 INR as he said we reserved the auto, even though I clearly stated before taking the auto that we wanted to share the auto, not reserve it. I think this was a scam. We walked away, and he didn’t insist further.

Taj Mahal entrance was like 500 m from the outer gate. We went there and bought offline tickets just outside the West gate. For Indians, the ticket for going inside the Taj Mahal complex is 50 INR, and a visit to the mausoleum costs 200 INR extra.

Security outside the Taj Mahal complex.

This red colored building is entrance to where you can see the Taj Mahal.

Taj Mahal.

Shoe covers for going inside the mausoleum.

Taj Mahal from side angle.

We came out of the Taj Mahal complex at 18:00 and stopped for some tea and snacks. I also bought a fridge magnet for 30 INR. Then we walked back towards Agra Cantt station, as we had a train for Jaipur at midnight. We were hoping to find a restaurant along the way, but we didn’t find any that we found interesting, so we just ate at the railway station. During the return trip, we noticed there was a bus stand near the station, which we didn’t know about. It turns out you can catch a bus to Taj Mahal from there. You can click here to check out the location of that bus stand on OpenStreetMap.

Expenses

These were our expenses per person

Retiring room at Delhi Railway Station for 12 hours ₹131

Train ticket from Delhi to Agra (Taj Express) ₹110

Retiring room at Agra Cantt station for 12 hours ₹450

Auto-rickshaw to Taj Mahal ₹20

Taj Mahal ticket (including going inside the mausoleum): ₹250

Food ₹350

Important information for visitors

  • Taj Mahal is closed on Friday.

  • There are plenty of free-of-cost drinking water taps inside the Taj Mahal complex.

  • Ticket price for Indians is ₹50, for foreigners and NRIs it is ₹1100, and for people from SAARC/BIMSTEC is ₹540. ₹200 extra for the mausoleum for everyone.

  • A visit inside the mausoleum requires covering your shoes or removing them. Shoe covers costs ₹10 per person inside the complex, but are probably involved free of charge in foreigner tickets. We could not find a place to keep our shoes, but some people managed to enter barefoot, indicating there must be some place to keep your shoes.

  • Mobile phones and cameras are allowed inside the Taj Mahal, but not eatables.

  • We went there on March 10th, and the weather was pleasant. So, we recommend going around that time.

  • Regarding the timings, I found this written near the ticket counter: “Taj Mahal opens 30 minutes before sunrise and closes 30 minutes before sunset during normal operating days,” so the timings are vague. But we came out of the complex at 18:00 hours. I would interpret that to mean the Taj Mahal is open from 07:00 to 18:00, and the ticket counter closes at around 17:00. During the winter, the timings might differ.

  • The cheapest way to reach Taj Mahal is by bus, and the bus stop is here

Bye for now. See you in the next post :)

Worse Than FailureError'd: Good Enough Friday

We've got some of the rarer classic Error'd types today: events from the dawn of time, weird definitions of space, and this absolutely astonishing choice of cancel/confirm button text.

Perplexed Stewart found this and it's got me completely befuddled as well! "Puzzled over this classic type of Error'd for ages. I really have no clue whether I should press Yes or No."

avast

 

I have a feeling we've seen errors like this before, but it bears repeating. Samuel H. bemoans the awful irony. "While updating Adobe Reader: Adobe Crash Processor quit unexpectedly [a.k.a. crashed]."

adobe

 

Cosmopolitan Jan B. might be looking for a courier to carry something abroad. "I found an eBay listing that seemed too good to be true, but had no bids at all! The item even ships worldwide, except they have a very narrow definition of what worldwide means."

shipping

 

Super-Patriot Chris A. proves Tennessee will take second place to nobody when it comes to distrusting dirty furriners. Especially the ones in Kentucky. "The best country to block is one's own. That way, you KNOW no foreigners can read your public documents!"

denied

 

Finally, old-timer Bruce R. has a system that appears to have been directly inspired by Aristotle. "I know Windows has some old code in it, but this is ridiculous."

stale

 

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 TomorrowsA Little Extra

Author: Marcel Neumann After years of living in an unjust world, being disillusioned by an ideology I once thought held promise and having lost faith in humanity’s collective desire to live in harmony, I decided to live off the grid in a remote Alaskan village. Any needed or desired supplies were flown in by a […]

The post A Little Extra appeared first on 365tomorrows.

Cryptogram Lessons from a Ransomware Attack against the British Library

You might think that libraries are kind of boring, but this self-analysis of a 2023 ransomware and extortion attack against the British Library is anything but.

Planet DebianPatryk Cisek: Sanoid on TrueNAS

syncoid to TrueNAS In my homelab, I have 2 NAS systems: Linux (Debian) TrueNAS Core (based on FreeBSD) On my Linux box, I use Jim Salter’s sanoid to periodically take snapshots of my ZFS pool. I also want to have a proper backup of the whole pool, so I use syncoid to transfer those snapshots to another machine. Sanoid itself is responsible only for taking new snapshots and pruning old ones you no longer care about.

Planet DebianReproducible Builds (diffoscope): diffoscope 262 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 262. This version includes the following changes:

[ Chris Lamb ]
* Factor out Python version checking in test_zip.py. (Re: #362)
* Also skip some zip tests under 3.10.14 as well; a potential regression may
  have been backported to the 3.10.x series. The underlying cause is still to
  be investigated. (Re: #362)

You find out more by visiting the project homepage.

,

Krebs on SecurityThread Hijacking: Phishes That Prey on Your Curiosity

Thread hijacking attacks. They happen when someone you know has their email account compromised, and you are suddenly dropped into an existing conversation between the sender and someone else. These missives draw on the recipient’s natural curiosity about being copied on a private discussion, which is modified to include a malicious link or attachment. Here’s the story of a thread hijacking attack in which a journalist was copied on a phishing email from the unwilling subject of a recent scoop.

In Sept. 2023, the Pennsylvania news outlet LancasterOnline.com published a story about Adam Kidan, a wealthy businessman with a criminal past who is a major donor to Republican causes and candidates, including Rep. Lloyd Smucker (R-Pa).

The LancasterOnline story about Adam Kidan.

Several months after that piece ran, the story’s author Brett Sholtis received two emails from Kidan, both of which contained attachments. One of the messages appeared to be a lengthy conversation between Kidan and a colleague, with the subject line, “Re: Successfully sent data.” The second missive was a more brief email from Kidan with the subject, “Acknowledge New Work Order,” and a message that read simply, “Please find the attached.”

Sholtis said he clicked the attachment in one of the messages, which then launched a web page that looked exactly like a Microsoft Office 365 login page. An analysis of the webpage reveals it would check any submitted credentials at the real Microsoft website, and return an error if the user entered bogus account information. A successful login would record the submitted credentials and forward the victim to the real Microsoft website.

But Sholtis said he didn’t enter his Outlook username and password. Instead, he forwarded the messages to LancasterOneline’s IT team, which quickly flagged them as phishing attempts.

LancasterOnline Executive Editor Tom Murse said the two phishing messages from Mr. Kidan raised eyebrows in the newsroom because Kidan had threatened to sue the news outlet multiple times over Sholtis’s story.

“We were just perplexed,” Murse said. “It seemed to be a phishing attempt but we were confused why it would come from a prominent businessman we’ve written about. Our initial response was confusion, but we didn’t know what else to do with it other than to send it to the FBI.”

The phishing lure attached to the thread hijacking email from Mr. Kidan.

In 2006, Kidan was sentenced to 70 months in federal prison after pleading guilty to defrauding lenders along with Jack Abramoff, the disgraced lobbyist whose corruption became a symbol of the excesses of Washington influence peddling. He was paroled in 2009, and in 2014 moved his family to a home in Lancaster County, Pa.

The FBI hasn’t responded to LancasterOnline’s tip. Messages sent by KrebsOnSecurity to Kidan’s emails addresses were returned as blocked. Messages left with Mr. Kidan’s company, Empire Workforce Solutions, went unreturned.

No doubt the FBI saw the messages from Kidan for what they likely were: The result of Mr. Kidan having his Microsoft Outlook account compromised and used to send malicious email to people in his contacts list.

Thread hijacking attacks are hardly new, but that is mainly true because many Internet users still don’t know how to identify them. The email security firm Proofpoint says it has tracked north of 90 million malicious messages in the last five years that leverage this attack method.

One key reason thread hijacking is so successful is that these attacks generally do not include the tell that exposes most phishing scams: A fabricated sense of urgency. A majority of phishing threats warn of negative consequences should you fail to act quickly — such as an account suspension or an unauthorized high-dollar charge going through.

In contrast, thread hijacking campaigns tend to patiently prey on the natural curiosity of the recipient.

Ryan Kalember, chief strategy officer at Proofpoint, said probably the most ubiquitous examples of thread hijacking are “CEO fraud” or “business email compromise” scams, wherein employees are tricked by an email from a senior executive into wiring millions of dollars to fraudsters overseas.

But Kalember said these low-tech attacks can nevertheless be quite effective because they tend to catch people off-guard.

“It works because you feel like you’re suddenly included in an important conversation,” Kalember said. “It just registers a lot differently when people start reading, because you think you’re observing a private conversation between two different people.”

Some thread hijacking attacks actually involve multiple threat actors who are actively conversing while copying — but not addressing — the recipient.

“We call these multi-persona phishing scams, and they’re often paired with thread hijacking,” Kalember said. “It’s basically a way to build a little more affinity than just copying people on an email. And the longer the conversation goes on, the higher their success rate seems to be because some people start replying to the thread [and participating] psycho-socially.”

The best advice to sidestep phishing scams is to avoid clicking on links or attachments that arrive unbidden in emails, text messages and other mediums. If you’re unsure whether the message is legitimate, take a deep breath and visit the site or service in question manually — ideally, using a browser bookmark so as to avoid potential typosquatting sites.

LongNowMembers of Long Now

Members of Long Now

With thousands of members from across the globe, the Long Now community has a wide range of perspectives, stories, and experiences to share. We're delighted to showcase this curated set of Ignite Talks, created and given by the Long Now members themselves. Presenting on the subjects of their choice, our speakers have precisely 5 minutes to amuse, educate, enlighten, or inspire the audience!

We're opening talk submissions to all members in early April and will send via email; we can accept both in-person and recorded talks.

And save the date of May 29 to join us in-person and online for a fun and fast-paced evening of Long Now Ignite Talks full of surprising and thoughtful ideas.

Planet DebianJoey Hess: the vulture in the coal mine

Turns out that VPS provider Vultr's terms of service were quietly changed some time ago to give them a "perpetual, irrevocable" license to use content hosted there in any way, including modifying it and commercializing it "for purposes of providing the Services to you."

This is very similar to changes that Github made to their TOS in 2017. Since then, Github has been rebranded as "The world’s leading AI-powered developer platform". The language in their TOS now clearly lets them use content stored in Github for training AI. (Probably this is their second line of defense if the current attempt to legitimise copyright laundering via generative AI fails.)

Vultr is currently in damage control mode, accusing their concerned customers of spreading "conspiracy theories" (-- founder David Aninowsky) and updating the TOS to remove some of the problem language. Although it still allows them to "make derivative works", so could still allow their AI division to scrape VPS images for training data.

Vultr claims this was the legalese version of technical debt, that it only ever applied to posts in a forum (not supported by the actual TOS language) and basically that they and their lawyers are incompetant but not malicious.

Maybe they are indeed incompetant. But even if I give them the benefit of the doubt, I expect that many other VPS providers, especially ones targeting non-corporate customers, are watching this closely. If Vultr is not significantly harmed by customers jumping ship, if the latest TOS change is accepted as good enough, then other VPS providers will know that they can try this TOS trick too. If Vultr's AI division does well, others will wonder to what extent it is due to having all this juicy training data.

For small self-hosters, this seems like a good time to make sure you're using a VPS provider you can actually trust to not be eyeing your disk image and salivating at the thought of stripmining it for decades of emails. Probably also worth thinking about moving to bare metal hardware, perhaps hosted at home.

I wonder if this will finally make it worthwhile to mess around with VPS TPMs?

Planet DebianScarlett Gately Moore: Kubuntu, KDE Report. In Loving Memory of my Son.

Personal:

As many of you know, I lost my beloved son March 9th. This has hit me really hard, but I am staying strong and holding on to all the wonderful memories I have. He grew up to be an amazing man, devoted christian and wonderful father. He was loved by everyone who knew him and will be truly missed by us all. I have had folks ask me how they can help. He left behind his 7 year old son Mason. Mason was Billy’s world and I would like to make sure Mason is taken care of. I have set up a gofundme for Mason and all proceeds will go to the future care of him.

https://gofund.me/25dbff0c

Work report

Kubuntu:

Bug bashing! I am triaging allthebugs for Plasma which can be seen here:

https://bugs.launchpad.net/plasma-5.27/+bug/2053125

I am happy to report many of the remaining bugs have been fixed in the latest bug fix release 5.27.11.

I prepared https://kde.org/announcements/plasma/5/5.27.11/ and Rik uploaded to archive, thank you. Unfortunately, this and several other key fixes are stuck in transition do to the time_t64 transition, which you can read about here: https://wiki.debian.org/ReleaseGoals/64bit-time . It is the biggest transition in Debian/Ubuntu history and it couldn’t come at a worst time. We are aware our ISO installer is currently broken, calamares is one of those things stuck in this transition. There is a workaround in the comments of the bug report: https://bugs.launchpad.net/ubuntu/+source/calamares/+bug/2054795

Fixed an issue with plasma-welcome.

Found the fix for emojis and Aaron has kindly moved this forward with the fontconfig maintainer. Thanks!

I have received an https://kfocus.org/spec/spec-ir14.html laptop and it is truly a great machine and is now my daily driver. A big thank you to the Kfocus team! I can’t wait to show it off at https://linuxfestnorthwest.org/.

KDE Snaps:

You will see the activity in this ramp back up as the KDEneon Core project is finally a go! I will participate in the project with part time status and get everyone in the Enokia team up to speed with my snap knowledge, help prepare the qt6/kf6 transition, package plasma, and most importantly I will focus on documentation for future contributors.

I have created the ( now split ) qt6 with KDE patchset support and KDE frameworks 6 SDK and runtime snaps. I have made the kde-neon-6 extension and the PR is in: https://github.com/canonical/snapcraft/pull/4698 . Future work on the extension will include multiple versions track support and core24 support.

I have successfully created our first qt6/kf6 snap ark. They will show showing up in the store once all the required bits have been merged and published.

Thank you for stopping by.

~Scarlett

365 TomorrowsDream State

Author: Majoki They call us the new DJs—Dream Jockeys—because we stitch together popular playlists for the masses. I think it lacks imagination to piggyback on the long-gone days of vinyl playing over the airways. But that’s human nature. Always harkening back to something familiar, something easy to romanticize, something less threatening. I guess there are […]

The post Dream State appeared first on 365tomorrows.

Worse Than FailureCodeSOD: Sorts of Dates

We've seen loads of bad date handling, but as always, there's new ways to be surprised by the bizarre inventions people come up with. Today, Tim sends us some bad date sorting, in PHP.

    // Function to sort follow-ups by Date
    function cmp($a, $b)  {
        return strcmp(strtotime($a["date"]), strtotime($b["date"]));
    }
   
    // Sort the follow-ups by Date
    usort($data, "cmp");

The cmp function rests in the global namespace, which is a nice way to ensure future confusion- it's got a very specific job, but has a very generic name. And the job it does is… an interesting approach.

The "date" field in our records is a string. It's a string formatted in YYYY-MM-DD HH:MM:SS, and this is a guarantee of the inputs- which we'll get to in a moment. So the first thing that's worth noting is that the strings are already sortable, and nothing about this function needs to exist.

But being useless isn't the end of it. We convert the string time into a Unix timestamp with strtotime, which gives us an integer- also trivially sortable. But then we run that through strcmp, which converts the integer back into a string, so we can do a string comparison on it.

Elsewhere in the code, we use usort, passing it the wonderfully named $data variable, and then applying cmp to sort it.

Unrelated to this code, but a PHP weirdness, we pass the callable cmp as a string to the usort function to apply a sort. Every time I write a PHP article, I learn a new horror of the language, and "strings as callable objects" is definitely horrifying.

Now, a moment ago, I said that we knew the format of the inputs. That's a bold claim, especially for such a generically named function, but it's important: this function is used to sort the results of a database query. That's how we know the format of the dates- the input comes directly from a query.

A query that could easily be modified to include an ORDER BY clause, making this whole thing useless.

And in fact, someone had made that modification to the query, meaning that the data was already sorted before being passed to the usort function, which did its piles of conversions to sort it back into the same order all over again.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

Planet DebianSteinar H. Gunderson: git grudge

Small teaser:

Probably won't show up in aggregators (try this link instead).

Worse Than FailureCodeSOD: Never Retire

We all know that 2038 is going to be a big year. In a mere 14 years, a bunch of devices are going to have problems.

Less known is the Y2030 problem, which is what Ventsislav fighting to protect us from.

//POPULATE YEAR DROP DOWN LISTS
for (int year = 2000; year <= 2030; year++)
{
    DYearDropDownList.Items.Add(year.ToString());
    WYearDropDownList.Items.Add(year.ToString());
    MYearDropDownList.Items.Add(year.ToString());
}

//SELECT THE CURRENT YEAR
string strCurrentYear = DateTime.Now.Year.ToString();
for (int i = 0; i < DYearDropDownList.Items.Count; i++)
{
    if (DYearDropDownList.Items[i].Text == strCurrentYear)
    {
        DYearDropDownList.SelectedIndex = i;
        WYearDropDownList.SelectedIndex = i;
        MYearDropDownList.SelectedIndex = i;
        break;
    }
}

Okay, likely less critical than Y2038, but this code, as you might guess, started its life in the year 2000. Clearly, no one thought it'd still be in use this far out, yet… it is.

It's also worth noting that the drop down list object in .NET has a SelectedValue property, so the //SELECT THE CURRENT YEAR section is unnecessary, and could be replaced by a one-liner.

With six years to go, do you think this application is going to be replaced, or is the year for loop just going to change to year <= 2031 and be a manual change for the rest of the application's lifetime?

I mean, they could also change it so it always goes to currentYear or currentYear + 1 or whatever, but do we really think that's a viable option?

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

365 TomorrowsCold War

Author: David Barber Like nuclear weapons before them, the plagues each side kept hidden were too terrible to use; instead they waged sly wars of colds and coughs, infectious agents sneezed in trams and crowded lifts, blighting commerce with working days lost to fevers and sickness; secret attacks hard to prove and impossible to stop. […]

The post Cold War appeared first on 365tomorrows.

,

Planet DebianEmmanuel Kasper: Adding a private / custom Certificate Authority to the firefox trust store

Today at $WORK I needed to add the private company Certificate Authority (CA) to Firefox, and I found the steps were unnecessarily complex. Time to blog about that, and I also made a Debian wiki article of that post, so that future generations can update the information, when Firefox 742 is released on Debian 17.

The cacert certificate authority is not included in Debian and Firefox, and is thus a good example of adding a private CA. Note that this does not mean I specifically endorse that CA.

  • Test that SSL connections to a site signed by the private CA is failing
$ gnutls-cli wiki.cacert.org:443
...
- Status: The certificate is NOT trusted. The certificate issuer is unknown. 
*** PKI verification of server certificate failed...
*** Fatal error: Error in the certificate.
  • Download the private CA
$ wget http://www.cacert.org/certs/root_X0F.crt
  • test that a connection works with the private CA
$ gnutls-cli --x509cafile root_X0F.crt wiki.cacert.org:443
...
- Status: The certificate is trusted. 
- Description: (TLS1.2-X.509)-(ECDHE-SECP256R1)-(RSA-SHA256)-(AES-256-GCM)
- Session ID: 37:56:7A:89:EA:5F:13:E8:67:E4:07:94:4B:52:23:63:1E:54:31:69:5D:70:17:3C:D0:A4:80:B0:3A:E5:22:B3
- Options: safe renegotiation,
- Handshake was completed
...
  • add the private CA to the Debian trust store located in /etc/ssl/certs/ca-certificates.crt
$ sudo cp root_X0F.crt /usr/local/share/ca-certificates/cacert-org-root-ca.crt
$ sudo update-ca-certificates --verbose
...
Adding debian:cacert-org-root-ca.pem
...
  • verify that we can connect without passing the private CA on the command line
$ gnutls-cli wiki.cacert.org:443
... 
 - Status: The certificate is trusted.
  • At that point most applications are able to connect to systems with a certificate signed by the private CA (curl, Gnome builtin Browser …). However Firefox is using its own trust store and will still display a security error if connecting to https://wiki.cacert.org. To make Firefox trust the Debian trust store, we need to add a so called security device, in fact an extra library wrapping the Debian trust store. The library will wrap the Debian trust store in the PKCS#11 industry format that Firefox supports.

  • install the pkcs#11 wrapping library and command line tools

$ sudo apt install p11-kit p11-kit-modules
  • verify that the private CA is accessible via PKCS#11
$ trust list | grep --context 2 'CA Cert'
pkcs11:id=%16%B5%32%1B%D4%C7%F3%E0%E6%8E%F3%BD%D2%B0%3A%EE%B2%39%18%D1;type=cert
    type: certificate
    label: CA Cert Signing Authority
    trust: anchor
    category: authority
  • now we need to add a new security device in Firefox pointing to the pkcs11 trust store. The pkcs11 trust store is located in /usr/lib/x86_64-linux-gnu/pkcs11/p11-kit-trust.so
$ dpkg --listfiles p11-kit-modules | grep trust
/usr/lib/x86_64-linux-gnu/pkcs11/p11-kit-trust.so
  • in Firefox (tested in version 115 esr), go to Settings -> Privacy & Security -> Security -> Security Devices.
    Then click “Load”, in the popup window use “My local trust” as a module name, and /usr/lib/x86_64-linux-gnu/pkcs11/p11-kit-trust.so as a module filename. After adding the module, you should see it in the list of Security Devices, having /etc/ssl/certs/ca-certificates.crt as a description.

  • now restart Firefox and you should be able to browse https://wiki.cacert.org without security errors

Cryptogram Hardware Vulnerability in Apple’s M-Series Chips

It’s yet another hardware side-channel attack:

The threat resides in the chips’ data memory-dependent prefetcher, a hardware optimization that predicts the memory addresses of data that running code is likely to access in the near future. By loading the contents into the CPU cache before it’s actually needed, the DMP, as the feature is abbreviated, reduces latency between the main memory and the CPU, a common bottleneck in modern computing. DMPs are a relatively new phenomenon found only in M-series chips and Intel’s 13th-generation Raptor Lake microarchitecture, although older forms of prefetchers have been common for years.

[…]

The breakthrough of the new research is that it exposes a previously overlooked behavior of DMPs in Apple silicon: Sometimes they confuse memory content, such as key material, with the pointer value that is used to load other data. As a result, the DMP often reads the data and attempts to treat it as an address to perform memory access. This “dereferencing” of “pointers”—meaning the reading of data and leaking it through a side channel—­is a flagrant violation of the constant-time paradigm.

[…]

The attack, which the researchers have named GoFetch, uses an application that doesn’t require root access, only the same user privileges needed by most third-party applications installed on a macOS system. M-series chips are divided into what are known as clusters. The M1, for example, has two clusters: one containing four efficiency cores and the other four performance cores. As long as the GoFetch app and the targeted cryptography app are running on the same performance cluster—­even when on separate cores within that cluster­—GoFetch can mine enough secrets to leak a secret key.

The attack works against both classical encryption algorithms and a newer generation of encryption that has been hardened to withstand anticipated attacks from quantum computers. The GoFetch app requires less than an hour to extract a 2048-bit RSA key and a little over two hours to extract a 2048-bit Diffie-Hellman key. The attack takes 54 minutes to extract the material required to assemble a Kyber-512 key and about 10 hours for a Dilithium-2 key, not counting offline time needed to process the raw data.

The GoFetch app connects to the targeted app and feeds it inputs that it signs or decrypts. As its doing this, it extracts the app secret key that it uses to perform these cryptographic operations. This mechanism means the targeted app need not perform any cryptographic operations on its own during the collection period.

Note that exploiting the vulnerability requires running a malicious app on the target computer. So it could be worse. On the other hand, like many of these hardware side-channel attacks, it’s not possible to patch.

Slashdot thread.

Cryptogram Security Vulnerability in Saflok’s RFID-Based Keycard Locks

It’s pretty devastating:

Today, Ian Carroll, Lennert Wouters, and a team of other security researchers are revealing a hotel keycard hacking technique they call Unsaflok. The technique is a collection of security vulnerabilities that would allow a hacker to almost instantly open several models of Saflok-brand RFID-based keycard locks sold by the Swiss lock maker Dormakaba. The Saflok systems are installed on 3 million doors worldwide, inside 13,000 properties in 131 countries. By exploiting weaknesses in both Dormakaba’s encryption and the underlying RFID system Dormakaba uses, known as MIFARE Classic, Carroll and Wouters have demonstrated just how easily they can open a Saflok keycard lock. Their technique starts with obtaining any keycard from a target hotel—say, by booking a room there or grabbing a keycard out of a box of used ones—then reading a certain code from that card with a $300 RFID read-write device, and finally writing two keycards of their own. When they merely tap those two cards on a lock, the first rewrites a certain piece of the lock’s data, and the second opens it.

Dormakaba says that it’s been working since early last year to make hotels that use Saflok aware of their security flaws and to help them fix or replace the vulnerable locks. For many of the Saflok systems sold in the last eight years, there’s no hardware replacement necessary for each individual lock. Instead, hotels will only need to update or replace the front desk management system and have a technician carry out a relatively quick reprogramming of each lock, door by door. Wouters and Carroll say they were nonetheless told by Dormakaba that, as of this month, only 36 percent of installed Safloks have been updated. Given that the locks aren’t connected to the internet and some older locks will still need a hardware upgrade, they say the full fix will still likely take months longer to roll out, at the very least. Some older installations may take years.

If ever. My guess is that for many locks, this is a permanent vulnerability.

Charles StrossA message from our sponsors: New Book coming!

(You probably expected this announcement a while ago ...)

I've just signed a new two book deal with my publishers, Tor.com publishing in the USA/Canada and Orbit in the UK/rest of world, and the book I'm talking about here and now—the one that's already written and delivered to the Production people who turn it into a thing you'll be able to buy later this year—is a Laundry stand-alone titled "A Conventional Boy".

("Delivered to production" means it is now ready to be copy-edited, typeset, printed/bound/distributed and simultaneously turned into an ebook and pushed through the interwebbytubes to the likes of Kobo and Kindle. I do not have a publication date or a link where you can order it yet: it almost certainly can't show up before July at this point. Yes, everything is running late. No, I have no idea why.)

"A Conventional Boy" is not part of the main (and unfinished) Laundry Files story arc. Nor is it a New Management story. It's a stand-alone story about Derek the DM, set some time between the end of "The Fuller Memorandum" and before "The Delirium Brief". We met Derek originally in "The Nightmare Stacks", and again in "The Labyrinth Index": he's a portly, short-sighted, middle-aged nerd from the Laundry's Forecasting Ops department who also just happens to be the most powerful precognitive the Laundry has tripped over in the past few decades—and a role playing gamer.

When Derek was 14 years old and running a D&D campaign, a schoolteacher overheard him explaining D&D demons to his players and called a government tips hotline. Thirty-odd years later Derek has lived most of his life in Camp Sunshine, the Laundry's magical Gitmo for Elder God cultists. As a trusty/"safe" inmate, he produces the camp newsletter and uses his postal privileges to run a play-by-mail RPG. One day, two pieces of news cross Derek's desk: the camp is going to be closed down and rebuilt as a real prison, and a games convention is coming to the nearest town.

Camp Sunshine is officially escape-proof, but Derek has had a foolproof escape plan socked away for the past decade. He hasn't used it because until now he's never had anywhere to escape to. But now he's facing the demolition of his only home, and he has a destination in mind. Come hell or high water, Derek intends to go to his first ever convention. Little does he realize that hell is also going to the convention ...

I began writing "A Conventional Boy" in 2009, thinking it'd make a nice short story. It went on hold for far too long (it was originally meant to come out before "The Nightmare Stacks"!) but instead it lingered ... then when I got back to work on it, the story ran away and grew into a short novel in its own right. As it's rather shorter than the other Laundry novels (although twice as long as, say, "Equoid") the book also includes "Overtime" and "Escape from Yokai Land", both Laundry Files novelettes about Bob, and an afterword providing some background on the 1980s Satanic D&D Panic for readers who don't remember it (which sadly means anyone much younger than myself).

Questions? Ask me anything!

Krebs on SecurityRecent ‘MFA Bombing’ Attacks Targeting Apple Users

Several Apple customers recently reported being targeted in elaborate phishing attacks that involve what appears to be a bug in Apple’s password reset feature. In this scenario, a target’s Apple devices are forced to display dozens of system-level prompts that prevent the devices from being used until the recipient responds “Allow” or “Don’t Allow” to each prompt. Assuming the user manages not to fat-finger the wrong button on the umpteenth password reset request, the scammers will then call the victim while spoofing Apple support in the caller ID, saying the user’s account is under attack and that Apple support needs to “verify” a one-time code.

Some of the many notifications Patel says he received from Apple all at once.

Parth Patel is an entrepreneur who is trying to build a startup in the conversational AI space. On March 23, Patel documented on Twitter/X a recent phishing campaign targeting him that involved what’s known as a “push bombing” or “MFA fatigue” attack, wherein the phishers abuse a feature or weakness of a multi-factor authentication (MFA) system in a way that inundates the target’s device(s) with alerts to approve a password change or login.

“All of my devices started blowing up, my watch, laptop and phone,” Patel told KrebsOnSecurity. “It was like this system notification from Apple to approve [a reset of the account password], but I couldn’t do anything else with my phone. I had to go through and decline like 100-plus notifications.”

Some people confronted with such a deluge may eventually click “Allow” to the incessant password reset prompts — just so they can use their phone again. Others may inadvertently approve one of these prompts, which will also appear on a user’s Apple watch if they have one.

But the attackers in this campaign had an ace up their sleeves: Patel said after denying all of the password reset prompts from Apple, he received a call on his iPhone that said it was from Apple Support (the number displayed was 1-800-275-2273, Apple’s real customer support line).

“I pick up the phone and I’m super suspicious,” Patel recalled. “So I ask them if they can verify some information about me, and after hearing some aggressive typing on his end he gives me all this information about me and it’s totally accurate.”

All of it, that is, except his real name. Patel said when he asked the fake Apple support rep to validate the name they had on file for the Apple account, the caller gave a name that was not his but rather one that Patel has only seen in background reports about him that are for sale at a people-search website called PeopleDataLabs.

Patel said he has worked fairly hard to remove his information from multiple people-search websites, and he found PeopleDataLabs uniquely and consistently listed this inaccurate name as an alias on his consumer profile.

“For some reason, PeopleDataLabs has three profiles that come up when you search for my info, and two of them are mine but one is an elementary school teacher from the midwest,” Patel said. “I asked them to verify my name and they said Anthony.”

Patel said the goal of the voice phishers is to trigger an Apple ID reset code to be sent to the user’s device, which is a text message that includes a one-time password. If the user supplies that one-time code, the attackers can then reset the password on the account and lock the user out. They can also then remotely wipe all of the user’s Apple devices.

THE PHONE NUMBER IS KEY

Chris is a cryptocurrency hedge fund owner who asked that only his first name be used so as not to paint a bigger target on himself. Chris told KrebsOnSecurity he experienced a remarkably similar phishing attempt in late February.

“The first alert I got I hit ‘Don’t Allow’, but then right after that I got like 30 more notifications in a row,” Chris said. “I figured maybe I sat on my phone weird, or was accidentally pushing some button that was causing these, and so I just denied them all.”

Chris says the attackers persisted hitting his devices with the reset notifications for several days after that, and at one point he received a call on his iPhone that said it was from Apple support.

“I said I would call them back and hung up,” Chris said, demonstrating the proper response to such unbidden solicitations. “When I called back to the real Apple, they couldn’t say whether anyone had been in a support call with me just then. They just said Apple states very clearly that it will never initiate outbound calls to customers — unless the customer requests to be contacted.”

Massively freaking out that someone was trying to hijack his digital life, Chris said he changed his passwords and then went to an Apple store and bought a new iPhone. From there, he created a new Apple iCloud account using a brand new email address.

Chris said he then proceeded to get even more system alerts on his new iPhone and iCloud account — all the while still sitting at the local Apple Genius Bar.

Chris told KrebsOnSecurity his Genius Bar tech was mystified about the source of the alerts, but Chris said he suspects that whatever the phishers are abusing to rapidly generate these Apple system alerts requires knowing the phone number on file for the target’s Apple account. After all, that was the only aspect of Chris’s new iPhone and iCloud account that hadn’t changed.

WATCH OUT!

“Ken” is a security industry veteran who spoke on condition of anonymity. Ken said he first began receiving these unsolicited system alerts on his Apple devices earlier this year, but that he has not received any phony Apple support calls as others have reported.

“This recently happened to me in the middle of the night at 12:30 a.m.,” Ken said. “And even though I have my Apple watch set to remain quiet during the time I’m usually sleeping at night, it woke me up with one of these alerts. Thank god I didn’t press ‘Allow,’ which was the first option shown on my watch. I had to scroll watch the wheel to see and press the ‘Don’t Allow’ button.”

Ken shared this photo he took of an alert on his watch that woke him up at 12:30 a.m. Ken said he had to scroll on the watch face to see the “Don’t Allow” button.

Ken didn’t know it when all this was happening (and it’s not at all obvious from the Apple prompts), but clicking “Allow” would not have allowed the attackers to change Ken’s password. Rather, clicking “Allow” displays a six digit PIN that must be entered on Ken’s device — allowing Ken to change his password. It appears that these rapid password reset prompts are being used to make a subsequent inbound phone call spoofing Apple more believable.

Ken said he contacted the real Apple support and was eventually escalated to a senior Apple engineer. The engineer assured Ken that turning on an Apple Recovery Key for his account would stop the notifications once and for all.

A recovery key is an optional security feature that Apple says “helps improve the security of your Apple ID account.” It is a randomly generated 28-character code, and when you enable a recovery key it is supposed to disable Apple’s standard account recovery process. The thing is, enabling it is not a simple process, and if you ever lose that code in addition to all of your Apple devices you will be permanently locked out.

Ken said he enabled a recovery key for his account as instructed, but that it hasn’t stopped the unbidden system alerts from appearing on all of his devices every few days.

KrebsOnSecurity tested Ken’s experience, and can confirm that enabling a recovery key does nothing to stop a password reset prompt from being sent to associated Apple devices. Visiting Apple’s “forgot password” page — https://iforgot.apple.com — asks for an email address and for the visitor to solve a CAPTCHA.

After that, the page will display the last two digits of the phone number tied to the Apple account. Filling in the missing digits and hitting submit on that form will send a system alert, whether or not the user has enabled an Apple Recovery Key.

The password reset page at iforgot.apple.com.

RATE LIMITS

What sanely designed authentication system would send dozens of requests for a password change in the span of a few moments, when the first requests haven’t even been acted on by the user? Could this be the result of a bug in Apple’s systems?

Apple has not yet responded to requests for comment.

Throughout 2022, a criminal hacking group known as LAPSUS$ used MFA bombing to great effect in intrusions at Cisco, Microsoft and Uber. In response, Microsoft began enforcing “MFA number matching,” a feature that displays a series of numbers to a user attempting to log in with their credentials. These numbers must then be entered into the account owner’s Microsoft authenticator app on their mobile device to verify they are logging into the account.

Kishan Bagaria is a hobbyist security researcher and engineer who founded the website texts.com (now owned by Automattic), and he’s convinced Apple has a problem on its end. In August 2019, Bagaria reported to Apple a bug that allowed an exploit he dubbed “AirDoS” because it could be used to let an attacker infinitely spam all nearby iOS devices with a system-level prompt to share a file via AirDrop — a file-sharing capability built into Apple products.

Apple fixed that bug nearly four months later in December 2019, thanking Bagaria in the associated security bulletin. Bagaria said Apple’s fix was to add stricter rate limiting on AirDrop requests, and he suspects that someone has figured out a way to bypass Apple’s rate limit on how many of these password reset requests can be sent in a given timeframe.

“I think this could be a legit Apple rate limit bug that should be reported,” Bagaria said.

WHAT CAN YOU DO?

Apple seems requires a phone number to be on file for your account, but after you’ve set up the account it doesn’t have to be a mobile phone number. KrebsOnSecurity’s testing shows Apple will accept a VOIP number (like Google Voice). So, changing your account phone number to a VOIP number that isn’t widely known would be one mitigation here.

One caveat with the VOIP number idea: Unless you include a real mobile number, Apple’s iMessage and Facetime applications will be disabled for that device. This might a bonus for those concerned about reducing the overall attack surface of their Apple devices, since zero-click zero-days in these applications have repeatedly been used by spyware purveyors.

Also, it appears Apple’s password reset system will accept and respect email aliases. Adding a “+” character after the username portion of your email address — followed by a notation specific to the site you’re signing up at — lets you create an infinite number of unique email addresses tied to the same account.

For instance, if I were signing up at example.com, I might give my email address as krebsonsecurity+example@gmail.com. Then, I simply go back to my inbox and create a corresponding folder called “Example,” along with a new filter that sends any email addressed to that alias to the Example folder. In this case, however, perhaps a less obvious alias than “+apple” would be advisable.

Update, March 27, 5:06 p.m. ET: Added perspective on Ken’s experience. Also included a What Can You Do? section.

Worse Than FailureCodeSOD: Exceptional String Comparisons

As a general rule, I will actually prefer code that is verbose and clear over code that is concise but makes me think. I really don't like to think if I don't have to.

Of course, there's the class of WTF code that is verbose, unclear and also really bad, which Thomas sends us today:

Private Shared Function Mailid_compare(ByVal queryEmail As String, ByVal FnsEmail As String) As Boolean
    Try
        Dim str1 As String = queryEmail
        Dim str2 As String = FnsEmail
        If String.Compare(str1, str2) = 0 Then
            Return True
        Else
            Return False
        End If
    Catch ex As Exception
    End Try
End Function

This VB .Net function could easily be replaced with String.Compare(queryEmail, FnsEmail) = 0. Of course, that'd be a little unclear, and since we only care about equality, we could just use String.Equals(queryEmail, FnsEmail)- which is honestly clearer than having a method called Mailid_compare, which doesn't actually tell me anything useful about what it does.

Speaking of not doing anything useful, there are a few other pieces of bloat in this function.

First, we plop our input parameters into the variables str1 and str2, which does a great job of making what's happening here less clear. Then we have the traditional "use an if statement to return either true or false".

But the real special magic in this one is the Try/Catch. This is a pretty bog standard useless exception handler. No operation in this function throws an exception- String.Compare will even happily accept null references. Even if somehow an exception was thrown, we wouldn't do anything about it. As a bonus, we'd return a null in that case, throwing downstream code into a bad state.

What's notable in this case, is that every function was implemented this way. Every function had a Try/Catch that frequently did nothing, or rarely (usually when they copy/pasted from StackOverflow) printed out the error message, but otherwise just let the program continue.

And that's the real WTF: a codebase polluted with so many do-nothing exception handlers that exceptions become absolutely worthless. Errors in the program let it continue, and the users experience bizarre, inconsistent states as the application fails silently.

Or, to put it another way: this is the .NET equivalent of classic VB's On Error Resume Next, which is exactly the kind of terrible idea it sounds like.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 Tomorrows‘Lineartrope 04.96’

Author: David Dumouriez I thought I was ready. “I was on the precipice, looking down.” Internal count of five. A long five. “I was on the precipice, looking down.” Count ten. “I was on the precipice, looking down.” I noticed a brief, impatient nod. The nod meant ‘again’. I thought. “I was on the precipice […]

The post ‘Lineartrope 04.96’ appeared first on 365tomorrows.

,

Planet DebianJonathan Dowland: a bug a day

I recently became a maintainer of/committer to IkiWiki, the software that powers my site. I also took over maintenance of the Debian package. Last week I cut a new upstream point release, 3.20200202.4, and a corresponding Debian package upload, consisting only of a handful of low-hanging-fruit patches from other people, largely to exercise both processes.

I've been discussing IkiWiki's maintenance situation with some other users for a couple of years now. I've also weighed up the pros and cons of moving to a different static-site-generator (a term that describes what IkiWiki is, but was actually coined more recently). It turns out IkiWiki is exceptionally flexible and powerful: I estimate the cost of moving to something modern(er) and fashionable such as Jekyll, Hugo or Hakyll as unreasonably high, in part because they are surprisingly rigid and inflexible in some key places.

Like most mature software, IkiWiki has a bug backlog. Over the past couple of weeks, as a sort-of "palate cleanser" around work pieces, I've tried to triage one IkiWiki bug per day: either upstream or in the Debian Bug Tracker. This is a really lightweight task: it can be as simple as "find a bug reported in Debian, copy it upstream, tag it upstream, mark it forwarded; perhaps taking 5-10 minutes.

Often I'll stumble across something that has already been fixed but not recorded as such as I go.

Despite this minimal level of work, I'm quite satisfied with the cumulative progress. It's notable to me how much my perspective has shifted by becoming a maintainer: I'm considering everything through a different lens to that of being just one user.

Eventually I will put some time aside to scratch some of my own itches (html5 by default; support dark mode; duckduckgo plugin; use the details tag...) but for now this minimal exercise is of broader use.

Worse Than FailureCodeSOD: Contains a Substring

One of the perks of open source software is that it means that large companies can and will patch it for their needs. Which means we can see what a particular large electronics vendor did with a video player application.

For example, they needed to see if the URL pointed to a stream protected by WideVine, Vudu, or Netflix. They can do this by checking if the filename contains a certain substring. Let's see how they accomplished this…

int get_special_protocol_type(char *filename, char *name)
{
	int type = 0;
	int fWidevine = 0;
	int j;
    	char proto_str[2800] = {'\0', };
      	if (!strcmp("http", name))
       {
		strcpy(proto_str, filename);
		for(j=0;proto_str[j] != '\0';j++)
		{
			if(proto_str[j] == '=')
			{
				j++;
				if(proto_str[j] == 'W')
				{
					j++;
					if(proto_str[j] == 'V')
					{
						type = Widevine_PROTOCOL;
					}
				}
			}
		}
		if (type == 0)
		{
			for(j=0;proto_str[j] != '\0';j++)
			{
				if(proto_str[j] == '=')
				{
					j++;
					if(proto_str[j] == 'V')
					{
						j++;
						if(proto_str[j] == 'U')
						{
							j++;
							if(proto_str[j] == 'D')
							{
								j++;
								if(proto_str[j] == 'U')
								{
									type = VUDU_PROTOCOL;
								}
							}
						}
					}
				}
			}
		}
 		if (type == 0)
      		{
			for(j=0;proto_str[j] != '\0';j++)
			{
				if(proto_str[j] == '=')
				{
					j++;
					if(proto_str[j] == 'N')
					{
						j++;
						if(proto_str[j] == 'F')
						{
							j++;
							if(proto_str[j] == 'L')
							{
								j++;
								if(proto_str[j] == 'X')
								{
									type = Netflix_PROTOCOL;
								}
							}
						}
					}
				}
			}
		}
      	}
	return type;
}

For starters, there's been a lot of discussion around the importance of memory safe languages lately. I would argue that in C/C++ it's not actually hard to write memory safe code, it's just very easy not to. And this is an example- everything in here is a buffer overrun waiting to happen. The core problem is that we're passing pure pointers to char, and relying on the strings being properly null terminated. So we're using the old, unsafe string functions to never checking against the bounds of proto_str to make sure we haven't run off the edge. A malicious input could easily run off the end of the string.

But also, let's talk about that string comparison. They don't even just loop across a pair of strings character by character, they use this bizarre set of nested ifs with incrementing loop variables. Given that they use strcmp, I think we can safely assume the C standard library exists for their target, which means strnstr was right there.

It's also worth noting that, since this is a read-only operation, the strcpy is not necessary, though we're in a rough place since they're passing a pointer to char and not including the size, which gets us back to the whole "unsafe string operations" problem.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsSecond-Hand Blades

Author: Julian Miles, Staff Writer “The only things I work with are killers and currency. You don’t look dangerous, and I don’t see you waving cash.” The man brushes imaginary specks from his lapels with the hand not holding a dagger, then gives a little grin as he replies. “Appearances can be- aack!” The man […]

The post Second-Hand Blades appeared first on 365tomorrows.

Planet DebianValhalla's Things: Piecepack and postcard boxes

Posted on March 25, 2024
Tags: madeof:bits, craft:cartonnage

This article has been originally posted on November 4, 2023, and has been updated (at the bottom) since.

An open cardboard box, showing the lining in paper printed with a medieval music manuscript.

Thanks to All Saints’ Day, I’ve just had a 5 days weekend. One of those days I woke up and decided I absolutely needed a cartonnage box for the cardboard and linocut piecepack I’ve been working on for quite some time.

I started drawing a plan with measures before breakfast, then decided to change some important details, restarted from scratch, did a quick dig through the bookbinding materials and settled on 2 mm cardboard for the structure, black fabric-like paper for the outside and a scrap of paper with a manuscript print for the inside.

Then we had the only day with no rain among the five, so some time was spent doing things outside, but on the next day I quickly finished two boxes, at two different heights.

The weather situation also meant that while I managed to take passable pictures of the first stages of the box making in natural light, the last few stages required some creative artificial lightning, even if it wasn’t that late in the evening. I need to build1 myself a light box.

And then decided that since they are C6 sized, they also work well for postcards or for other A6 pieces of paper, so I will probably need to make another one when the piecepack set will be finally finished.

The original plan was to use a linocut of the piecepack suites as the front cover; I don’t currently have one ready, but will make it while printing the rest of the piecepack set. One day :D

an open rectangular cardboard box, with a plastic piecepack set in it.

One of the boxes was temporarily used for the plastic piecepack I got with the book, and that one works well, but since it’s a set with standard suites I think I will want to make another box, using some of the paper with fleur-de-lis that I saw in the stash.

I’ve also started to write detailed instructions: I will publish them as soon as they are ready, and then either update this post, or they will be mentioned in an additional post if I will have already made more boxes in the meanwhile.


Update 2024-03-25: the instructions have been published on my craft patterns website


  1. you don’t really expect me to buy one, right? :D↩︎

,

Planet DebianNiels Thykier: debputy v0.1.21

Earlier today, I have just released debputy version 0.1.21 to Debian unstable. In the blog post, I will highlight some of the new features.

Package boilerplate reduction with automatic relationship substvar

Last month, I started a discussion on rethinking how we do relationship substvars such as the ${misc:Depends}. These generally ends up being boilerplate runes in the form of Depends: ${misc:Depends}, ${shlibs:Depends} where you as the packager has to remember exactly which runes apply to your package.

My proposed solution was to automatically apply these substvars and this feature has now been implemented in debputy. It is also combined with the feature where essential packages should use Pre-Depends by default for dpkg-shlibdeps related dependencies.

I am quite excited about this feature, because I noticed with libcleri that we are now down to 3-5 fields for defining a simple library package. Especially since most C library packages are trivial enough that debputy can auto-derive them to be Multi-Arch: same.

As an example, the libcleric1 package is down to 3 fields (Package, Architecture, Description) with Section and Priority being inherited from the Source stanza. I have submitted a MR to show case the boilerplate reduction at https://salsa.debian.org/siridb-team/libcleri/-/merge_requests/3.

The removal of libcleric1 (= ${binary:Version}) in that MR relies on another existing feature where debputy can auto-derive a dependency between an arch:any -dev package and the library package based on the .so symlink for the shared library. The arch:any restriction comes from the fact that arch:all and arch:any packages are not built together, so debputy cannot reliably see across the package boundaries during the build (and therefore refuses to do so at all).

Packages that have already migrated to debputy can use debputy migrate-from-dh to detect any unnecessary relationship substitution variables in case you want to clean up. The removal of Multi-Arch: same and intra-source dependencies must be done manually and so only be done so when you have validated that it is safe and sane to do. I was willing to do it for the show-case MR, but I am less confident that would bother with these for existing packages in general.

Note: I summarized the discussion of the automatic relationship substvar feature earlier this month in https://lists.debian.org/debian-devel/2024/03/msg00030.html for those who want more details.

PS: The automatic relationship substvars feature will also appear in debhelper as a part of compat 14.

Language Server (LSP) and Linting

I have long been frustrated by our poor editor support for Debian packaging files. To this end, I started working on a Language Server (LSP) feature in debputy that would cover some of our standard Debian packaging files. This release includes the first version of said language server, which covers the following files:

  • debian/control
  • debian/copyright (the machine readable variant)
  • debian/changelog (mostly just spelling)
  • debian/rules
  • debian/debputy.manifest (syntax checks only; use debputy check-manifest for the full validation for now)

Most of the effort has been spent on the Deb822 based files such as debian/control, which comes with diagnostics, quickfixes, spellchecking (but only for relevant fields!), and completion suggestions.

Since not everyone has a LSP capable editor and because sometimes you just want diagnostics without having to open each file in an editor, there is also a batch version for the diagnostics via debputy lint. Please see debputy(1) for how debputy lint compares with lintian if you are curious about which tool to use at what time.

To help you getting started, there is a now debputy lsp editor-config command that can provide you with the relevant editor config glue. At the moment, emacs (via eglot) and vim with vim-youcompleteme are supported.

For those that followed the previous blog posts on writing the language server, I would like to point out that the command line for running the language server has changed to debputy lsp server and you no longer have to tell which format it is. I have decided to make the language server a "polyglot" server for now, which I will hopefully not regret... Time will tell. :)

Anyhow, to get started, you will want:

$ apt satisfy 'dh-debputy (>= 0.1.21~), python3-pygls'
# Optionally, for spellchecking
$ apt install python3-hunspell hunspell-en-us
# For emacs integration
$ apt install elpa-dpkg-dev-el markdown-mode-el
# For vim integration via vim-youcompleteme
$ apt install vim-youcompleteme

Specifically for emacs, I also learned two things after the upload. First, you can auto-activate eglot via eglot-ensure. This badly feature interacts with imenu on debian/changelog for reasons I do not understand (causing a several second start up delay until something times out), but it works fine for the other formats. Oddly enough, opening a changelog file and then activating eglot does not trigger this issue at all. In the next version, editor config for emacs will auto-activate eglot on all files except debian/changelog.

The second thing is that if you install elpa-markdown-mode, emacs will accept and process markdown in the hover documentation provided by the language server. Accordingly, the editor config for emacs will also mention this package from the next version on.

Finally, on a related note, Jelmer and I have been looking at moving some of this logic into a new package called debpkg-metadata. The point being to support easier reuse of linting and LSP related metadata - like pulling a list of known fields for debian/control or sharing logic between lintian-brush and debputy.

Minimal integration mode for Rules-Requires-Root

One of the original motivators for starting debputy was to be able to get rid of fakeroot in our build process. While this is possible, debputy currently does not support most of the complex packaging features such as maintscripts and debconf. Unfortunately, the kind of packages that need fakeroot for static ownership tend to also require very complex packaging features.

To bridge this gap, the new version of debputy supports a very minimal integration with dh via the dh-sequence-zz-debputy-rrr. This integration mode keeps the vast majority of debhelper sequence in place meaning most dh add-ons will continue to work with dh-sequence-zz-debputy-rrr. The sequence only replaces the following commands:

  • dh_fixperms
  • dh_gencontrol
  • dh_md5sums
  • dh_builddeb

The installations feature of the manifest will be disabled in this integration mode to avoid feature interactions with debhelper tools that expect debian/<pkg> to contain the materialized package.

On a related note, the debputy migrate-from-dh command now supports a --migration-target option, so you can choose the desired level of integration without doing code changes. The command will attempt to auto-detect the desired integration from existing package features such as a build-dependency on a relevant dh sequence, so you do not have to remember this new option every time once the migration has started. :)

Planet DebianMarco d'Itri: CISPE's call for new regulations on VMware

A few days ago CISPE, a trade association of European cloud providers, published a press release complaining about the new VMware licensing scheme and asking for regulators and legislators to intervene.

But VMware does not have a monopoly on virtualization software: I think that asking regulators to interfere is unnecessary and unwise, unless, of course, they wish to question the entire foundations of copyright. Which, on the other hand, could be an intriguing position that I would support...

I believe that over-reliance on a single supplier is a typical enterprise risk: in the past decade some companies have invested in developing their own virtualization infrastructure using free software, while others have decided to rely entirely on a single proprietary software vendor.

My only big concern is that many public sector organizations will continue to use VMware and pay the huge fees designed by Broadcom to extract the maximum amount of money from their customers. However, it is ultimately the citizens who pay these bills, and blaming the evil US corporation is a great way to avoid taking responsibility for these choices.

"Several CISPE members have stated that without the ability to license and use VMware products they will quickly go bankrupt and out of business."

Insert here the Jeremy Clarkson "Oh no! Anyway..." meme.

365 TomorrowsVerbatim Thirst

Author: Gabriel Walker Land In every direction there was nothing but baked dirt, tumbleweeds, and flat death. The blazing sun weighed down on me. I didn’t know which way to walk, and I didn’t know why. How I’d gotten there was long since forgotten. Being lost wasn’t the pressing problem. No, the immediate threat was […]

The post Verbatim Thirst appeared first on 365tomorrows.

Planet DebianJacob Adams: Regular Reboots

Uptime is often considered a measure of system reliability, an indication that the running software is stable and can be counted on.

However, this hides the insidious build-up of state throughout the system as it runs, the slow drift from the expected to the strange.

As Nolan Lawson highlights in an excellent post entitled Programmers are bad at managing state, state is the most challenging part of programming. It’s why “did you try turning it off and on again” is a classic tech support response to any problem.

In addition to the problem of state, installing regular updates periodically requires a reboot, even if the rest of the process is automated through a tool like unattended-upgrades.

For my personal homelab, I manage a handful of different machines running various services.

I used to just schedule a day to update and reboot all of them, but that got very tedious very quickly.

I then moved the reboot to a cronjob, and then recently to a systemd timer and service.

I figure that laying out my path to better management of this might help others, and will almost certainly lead to someone telling me a better way to do this.

UPDATE: Turns out there’s another option for better systemd cron integration. See systemd-cron below.

Stage One: Reboot Cron

The first, and easiest approach, is a simple cron job. Just adding the following line to /var/spool/cron/crontabs/root1 is enough to get your machine to reboot once a month2 on the 6th at 8:00 AM3:

0 8 6 * * reboot

I had this configured for many years and it works well. But you have no indication as to whether it succeeds except for checking your uptime regularly yourself.

Stage Two: Reboot systemd Timer

The next evolution of this approach for me was to use a systemd timer. I created a regular-reboot.timer with the following contents:

[Unit]
Description=Reboot on a Regular Basis

[Timer]
Unit=regular-reboot.service
OnBootSec=1month

[Install]
WantedBy=timers.target

This timer will trigger the regular-reboot.service systemd unit when the system reaches one month of uptime.

I’ve seen some guides to creating timer units recommend adding a Wants=regular-reboot.service to the [Unit] section, but this has the consequence of running that service every time it starts the timer. In this case that will just reboot your system on startup which is not what you want.

Care needs to be taken to use the OnBootSec directive instead of OnCalendar or any of the other time specifications, as your system could reboot, discover its still within the expected window and reboot again. With OnBootSec your system will not have that problem. Technically, this same problem could have occurred with the cronjob approach, but in practice it never did, as the systems took long enough to come back up that they were no longer within the expected window for the job.

I then added the regular-reboot.service:

[Unit]
Description=Reboot on a Regular Basis
Wants=regular-reboot.timer

[Service]
Type=oneshot
ExecStart=shutdown -r 02:45

You’ll note that this service is actually scheduling a specific reboot time via the shutdown command instead of just immediately rebooting. This is a bit of a hack needed because I can’t control when the timer runs exactly when using OnBootSec. This way different systems have different reboot times so that everything doesn’t just reboot and fail all at once. Were something to fail to come back up I would have some time to fix it, as each machine has a few hours between scheduled reboots.

One you have both files in place, you’ll simply need to reload configuration and then enable and start the timer unit:

systemctl daemon-reload
systemctl enable --now regular-reboot.timer

You can then check when it will fire next:

# systemctl status regular-reboot.timer
● regular-reboot.timer - Reboot on a Regular Basis
     Loaded: loaded (/etc/systemd/system/regular-reboot.timer; enabled; preset: enabled)
     Active: active (waiting) since Wed 2024-03-13 01:54:52 EDT; 1 week 4 days ago
    Trigger: Fri 2024-04-12 12:24:42 EDT; 2 weeks 4 days left
   Triggers: ● regular-reboot.service

Mar 13 01:54:52 dorfl systemd[1]: Started regular-reboot.timer - Reboot on a Regular Basis.

Sidenote: Replacing all Cron Jobs with systemd Timers

More generally, I’ve now replaced all cronjobs on my personal systems with systemd timer units, mostly because I can now actually track failures via prometheus-node-exporter. There are plenty of ways to hack in cron support to the node exporter, but just moving to systemd units provides both support for tracking failure and logging, both of which make system administration much easier when things inevitably go wrong.

systemd-cron

An alternative to converting everything by hand, if you happen to have a lot of cronjobs is systemd-cron. It will make each crontab and /etc/cron.* directory into automatic service and timer units.

Thanks to Alexandre Detiste for letting me know about this project. I have few enough cron jobs that I’ve already converted, but for anyone looking at a large number of jobs to convert you’ll want to check it out!

Stage Three: Monitor that it’s working

The final step here is confirm that these units actually work, beyond just firing regularly.

I now have the following rule in my prometheus-alertmanager rules:

  - alert: UptimeTooHigh
    expr: (time() - node_boot_time_seconds{job="node"}) / 86400 > 35
    annotations:
      summary: "Instance  Has Been Up Too Long!"
      description: "Instance  Has Been Up Too Long!"

This will trigger an alert anytime that I have a machine up for more than 35 days. This actually helped me track down one machine that I had forgotten to set up this new unit on4.

Not everything needs to scale

Is It Worth The Time

One of the most common fallacies programmers fall into is that we will jump to automating a solution before we stop and figure out how much time it would even save.

In taking a slow improvement route to solve this problem for myself, I’ve managed not to invest too much time5 in worrying about this but also achieved a meaningful improvement beyond my first approach of doing it all by hand.

  1. You could also add a line to /etc/crontab or drop a script into /etc/cron.monthly depending on your system. 

  2. Why once a month? Mostly to avoid regular disruptions, but still be reasonably timely on updates. 

  3. If you’re looking to understand the cron time format I recommend crontab guru

  4. In the long term I really should set up something like ansible to automatically push fleetwide changes like this but with fewer machines than fingers this seems like overkill. 

  5. Of course by now writing about it, I’ve probably doubled the amount of time I’ve spent thinking about this topic but oh well… 

,

Planet DebianDirk Eddelbuettel: littler 0.3.20 on CRAN: Moar Features!

max-heap image

The twentyfirst release of littler as a CRAN package landed on CRAN just now, following in the now eighteen year history (!!) as a package started by Jeff in 2006, and joined by me a few weeks later.

littler is the first command-line interface for R as it predates Rscript. It allows for piping as well for shebang scripting via #!, uses command-line arguments more consistently and still starts faster. It also always loaded the methods package which Rscript only began to do in recent years.

littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet – the build system could be extended – see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH. A few examples are highlighted at the Github repo:, as well as in the examples vignette.

This release contains another fair number of small changes and improvements to some of the scripts I use daily to build or test packages, adds a new front-end ciw.r for the recently-released ciw package offering a ‘CRAN Incoming Watcher’, a new helper installDeps2.r (extending installDeps.r), a new doi-to-bib converter, allows a different temporary directory setup I find helpful, deals with one corner deployment use, and more.

The full change description follows.

Changes in littler version 0.3.20 (2024-03-23)

  • Changes in examples scripts

    • New (dependency-free) helper installDeps2.r to install dependencies

    • Scripts rcc.r, tt.r, tttf.r, tttlr.r use env argument -S to set -t to r

    • tt.r can now fill in inst/tinytest if it is present

    • New script ciw.r wrapping new package ciw

    • tttf.t can now use devtools and its loadall

    • New script doi2bib.r to call the DOI converter REST service (following a skeet by Richard McElreath)

  • Changes in package

    • The CI setup uses checkout@v4 and the r-ci-setup action

    • The Suggests: is a little tighter as we do not list all packages optionally used in the the examples (as R does not check for it either)

    • The package load messag can account for the rare build of R under different architecture (Berwin Turlach in #117 closing #116)

    • In non-vanilla mode, the temporary directory initialization in re-run allowing for a non-standard temp dir via config settings

My CRANberries service provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page, and also on the package docs website. The code is available via the GitHub repo, from tarballs and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as (in a day or two) Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianBits from Debian: New Debian Developers and Maintainers (January and February 2024)

The following contributors got their Debian Developer accounts in the last two months:

  • Carles Pina i Estany (cpina)
  • Dave Hibberd (hibby)
  • Soren Stoutner (soren)
  • Daniel Gröber (dxld)
  • Jeremy Sowden (azazel)
  • Ricardo Ribalda Delgado (ribalda)

The following contributors were added as Debian Maintainers in the last two months:

  • Joachim Bauch
  • Ananthu C V
  • Francesco Ballarin
  • Yogeswaran Umasankar
  • Kienan Stewart

Congratulations!

Planet DebianKentaro Hayashi: How about allocating more buildd resource for armel and armhf?

This article is cross-posting from grow-your-ideas. This is just an idea.

salsa.debian.org

The problem

According to Developer Machines [1], current buildd machines are like this:

  • armel: 4 buildd (4 for arm64/armhf/armel)
  • armhf: 7 buildd (4 for arm64/armhf/armel and 3 for armhf only)

[1] https://db.debian.org/machines.cgi

In contrast to other buildd architectures, these instances are quite a few and it seems that it causes a shortage of buildd resourses. (e.g. during mass transition, give-back turn around time becomes longer and longer.)

Actual situation

As you know, during 64bit time_t transition, many packages should be built, but it seems that +b1 or +bN build becomes slower. (I've hit BD-Uninstalled some times because of missing dependency rebuild)

ref. https://qa.debian.org/dose/debcheck/unstable_main/index.html

Expected situation

Allocate more buildd resources for armel and armhf.

It is just an idea, but how about assigning some buildd as armel/armhf buildd?

Above buildd is used only for arm64 buildd currently.

Maybe there is some technical reason not suitable for armel/armhf buildd, but I don't know yet.

2024/03/24 UPDATE: arm-arm01,arm-arm03,arm-arm-04 has already assigned to armel/armhf buildd, so it is an invalid proposal. See https://buildd.debian.org/status/architecture.php?a=armhf&suite=sid&buildd=buildd_arm64-arm-arm-01, https://buildd.debian.org/status/architecture.php?a=armhf&suite=sid&buildd=buildd_arm64-arm-arm-03, https://buildd.debian.org/status/architecture.php?a=armhf&suite=sid&buildd=buildd_arm64-arm-arm-04

Additional information

  • arm64: 10 buildd (4 for arm64/armhf/armel, 6 for arm64 only)
  • amd64: 7 buildd (5 for amd64/i386 buildd)
  • riscv64: 9 buildd

Planet DebianErich Schubert: Do not get Amazon Kids+ or a Fire HD Kids

The Amazon Kids “parental controls” are extremely insufficient, and I strongly advise against getting any of the Amazon Kids series.

The initial permise (and some older reviews) look okay: you can set some time limits, and you can disable anything that requires buying. With the hardware you get one year of the “Amazon Kids+” subscription, which includes a lot of interesting content such as books and audio, but also some apps. This seemed attractive: some learning apps, some decent games. Sometimes there seems to be a special “Amazon Kids+ edition”, supposedly one that has advertisements reduced/removed and no purchasing.

However, there are so many things just wrong in Amazon Kids:

  • you have no control over the starting page of the tablet.
    it is entirely up to Amazon to decide which contents are for your kid, and of course the page is as poorly made as possible
  • the main content control is a simple age filter
    age appropriateness is decided by Amazon in a non-transparent way
  • there is no preview. All you get is one icon and a truncated title, no description, no screenshots, nothing.
  • time restrictions are on the most basic level possible (daily limit for weekdays and weekends), largely unusable
    no easy way to temporarily increase the limit by 30 minutes, for example. You end up disabling it all the time.
  • there is some “educational goals” thing, but as you do not get to control what is educational and what not, it is paperweight
  • no per-app limits
    this is a killer missing feature.
  • removing content is a very manual thing. You have to go through potentially thousands of entries, and disable them one-by-one for every kid.
  • some contents cannot even be removed anymore
    “managed by age filters and cannot be changed” - these appear to be HTML5 and not real apps
  • there is no whitelist!
    That is the really no-go. By using Amazon Kids, you fully expose your kids to the endless rabbit hole of apps.
  • you cannot switch to an alternate UI that has better parental controls
    without sideloading, you cannot even get YouTube Kids (which still is not really good either) on it, as it does not have Google services.
    and even with sideloading, you do not appear to be able to permanently replace the launcher anymore.

And, unfortunately, Amazon Kids is full of poor content for kids, such as “DIY Fashion Star” that I consider to be very dangerous for kids: it is extremely stereotypical, beginning with supposedly “female” color schemes, model-only body types, and judging people by their clothing (and body).

You really thought you could hand-pick suitable apps for your kid on your own?

No, you have to identify and remove such contents one by one, with many clicks each, because there is no whitelisting, and no mass-removal (anymore - apparently Amazon removed the workarounds that previously allowed you to mass remove contents).

Not with Amazon Kids+, which apparently aims at raising the next generation of zombie customers that buy whatever you tell them to buy.

Hence, do not get your kids an Amazon Fire HD tablet!

365 TomorrowsMississauga

Author: Jeremy Nathan Marks I live in Mississauga, a city that builds dozens of downtown towers every year, the finest towers in the world. Each morning, I watch cranes move like long legged birds along the pond of the horizon. They bow and raise their heads, plucking at things which they lift toward the heavens […]

The post Mississauga appeared first on 365tomorrows.

Planet DebianValhalla's Things: Forgotten Yeast Bread - Sourdough Edition

Posted on March 23, 2024
Tags: madeof:atoms, craft:cooking, craft:baking, craft:bread

Yesterday I had planned a pan sbagliato for today, but I also had quite a bit of sourdough to deal with, so instead of mixing a bit of of dry yeast at 18:00 and mixing it with some additional flour and water at 21:00, at around maybe 20:00 I substituted:

  • 100 g firm sourdough;
  • 33 g flour;
  • 66 g water.

Then I briefly woke up in the middle of the night and poured the dough on the tray at that time instead of having to wake up before 8:00 in the morning.

Everything else was done as in the original recipe.

The firm sourdough is feeded regularly with the same weight of flour and half the weight of water.

Will. do. again.

,

Krebs on SecurityMozilla Drops Onerep After CEO Admits to Running People-Search Networks

The nonprofit organization that supports the Firefox web browser said today it is winding down its new partnership with Onerep, an identity protection service recently bundled with Firefox that offers to remove users from hundreds of people-search sites. The move comes just days after a report by KrebsOnSecurity forced Onerep’s CEO to admit that he has founded dozens of people-search networks over the years.

Mozilla Monitor. Image Mozilla Monitor Plus video on Youtube.

Mozilla only began bundling Onerep in Firefox last month, when it announced the reputation service would be offered on a subscription basis as part of Mozilla Monitor Plus. Launched in 2018 under the name Firefox Monitor, Mozilla Monitor also checks data from the website Have I Been Pwned? to let users know when their email addresses or password are leaked in data breaches.

On March 14, KrebsOnSecurity published a story showing that Onerep’s Belarusian CEO and founder Dimitiri Shelest launched dozens of people-search services since 2010, including a still-active data broker called Nuwber that sells background reports on people. Onerep and Shelest did not respond to requests for comment on that story.

But on March 21, Shelest released a lengthy statement wherein he admitted to maintaining an ownership stake in Nuwber, a consumer data broker he founded in 2015 — around the same time he launched Onerep.

Shelest maintained that Nuwber has “zero cross-over or information-sharing with Onerep,” and said any other old domains that may be found and associated with his name are no longer being operated by him.

“I get it,” Shelest wrote. “My affiliation with a people search business may look odd from the outside. In truth, if I hadn’t taken that initial path with a deep dive into how people search sites work, Onerep wouldn’t have the best tech and team in the space. Still, I now appreciate that we did not make this more clear in the past and I’m aiming to do better in the future.” The full statement is available here (PDF).

Onerep CEO and founder Dimitri Shelest.

In a statement released today, a spokesperson for Mozilla said it was moving away from Onerep as a service provider in its Monitor Plus product.

“Though customer data was never at risk, the outside financial interests and activities of Onerep’s CEO do not align with our values,” Mozilla wrote. “We’re working now to solidify a transition plan that will provide customers with a seamless experience and will continue to put their interests first.”

KrebsOnSecurity also reported that Shelest’s email address was used circa 2010 by an affiliate of Spamit, a Russian-language organization that paid people to aggressively promote websites hawking male enhancement drugs and generic pharmaceuticals. As noted in the March 14 story, this connection was confirmed by research from multiple graduate students at my alma mater George Mason University.

Shelest denied ever being associated with Spamit. “Between 2010 and 2014, we put up some web pages and optimize them — a widely used SEO practice — and then ran AdSense banners on them,” Shelest said, presumably referring to the dozens of people-search domains KrebsOnSecurity found were connected to his email addresses (dmitrcox@gmail.com and dmitrcox2@gmail.com). “As we progressed and learned more, we saw that a lot of the inquiries coming in were for people.”

Shelest also acknowledged that Onerep pays to run ads on “on a handful of data broker sites in very specific circumstances.”

“Our ad is served once someone has manually completed an opt-out form on their own,” Shelest wrote. “The goal is to let them know that if they were exposed on that site, there may be others, and bring awareness to there being a more automated opt-out option, such as Onerep.”

Reached via Twitter/X, HaveIBeenPwned founder Troy Hunt said he knew Mozilla was considering a partnership with Onerep, but that he was previously unaware of the Onerep CEO’s many conflicts of interest.

“I knew Mozilla had this in the works and we’d casually discussed it when talking about Firefox monitor,” Hunt told KrebsOnSecurity. “The point I made to them was the same as I’ve made to various companies wanting to put data broker removal ads on HIBP: removing your data from legally operating services has minimal impact, and you can’t remove it from the outright illegal ones who are doing the genuine damage.”

Playing both sides — creating and spreading the same digital disease that your medicine is designed to treat — may be highly unethical and wrong. But in the United States it’s not against the law. Nor is collecting and selling data on Americans. Privacy experts say the problem is that data brokers, people-search services like Nuwber and Onerep, and online reputation management firms exist because virtually all U.S. states exempt so-called “public” or “government” records from consumer privacy laws.

Those include voting registries, property filings, marriage certificates, motor vehicle records, criminal records, court documents, death records, professional licenses, and bankruptcy filings. Data brokers also can enrich consumer records with additional information, by adding social media data and known associates.

The March 14 story on Onerep was the second in a series of three investigative reports published here this month that examined the data broker and people-search industries, and highlighted the need for more congressional oversight — if not regulation — on consumer data protection and privacy.

On March 8, KrebsOnSecurity published A Close Up Look at the Consumer Data Broker Radaris, which showed that the co-founders of Radaris operate multiple Russian-language dating services and affiliate programs. It also appears many of their businesses have ties to a California marketing firm that works with a Russian state-run media conglomerate currently sanctioned by the U.S. government.

On March 20, KrebsOnSecurity published The Not-So-True People-Search Network from China, which revealed an elaborate web of phony people-search companies and executives designed to conceal the location of people-search affiliates in China who are earning money promoting U.S. based data brokers that sell personal information on Americans.

Worse Than FailureError'd: You Can Say That Again!

In a first for me, this week we got FIVE unique submissions of the exact same bug on LinkedIn. In the spirit of the theme, I dug up a couple of unused submissions of older problems at LinkedIn as well. I guess there are more than the usual number of tech people looking for jobs.

John S., Chris K., Peter C., Brett Nash and Piotr K. all sent in samples of this doublebug. It's a flubstitution AND bad math, together!

minus

 

Latin Steevie is also job hunting and commented "Well, I know tech-writers may face hard times finding a job, so they turn to LinkedIn, which however doesn't seem to help... (the second announcement translates to 'part-time cleaners wanted') As a side bonus, apparently I can't try a search for jobs outside Italy, which is quite odd, to say the least!"

techwr

 

Clever Drew W. found a very minor bug in their handling of non-ASCII names. "I have an emoji in my display name on LinkedIn to thwart scrapers and other such bots. I didn't think it would also thwart LinkedIn!"

emoji

 

Finally, Mark Whybird returns with an internal repetition. "I think maybe I found the cause of some repetitive notifications when I went to Linkedin's notifications preferences page?" I think maybe!

third

 

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

365 TomorrowsArtificial Gravity

Author: TJ Gadd Anna stared at where the panel had been. Joshua was right; either The Saviour had never left Earth, or Anna had broken into a vault full of sand. She carefully replaced the panel, resetting every rivet. Her long red hair hid her pretty face. When astronomers first identified a comet heading towards […]

The post Artificial Gravity appeared first on 365tomorrows.

Planet DebianReproducible Builds (diffoscope): diffoscope 261 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 261. This version includes the following changes:

[ Chris Lamb ]
* Don't crash if we encounter an .rdb file without an equivalent .rdx file.
  (Closes: #1066991)
* In addition, don't identify Redis database dumps (etc.) as GNU R database
  files based simply on their filename. (Re: #1066991)
* Update copyright years.

You find out more by visiting the project homepage.

,

Planet DebianIan Jackson: How to use Rust on Debian (and Ubuntu, etc.)

tl;dr: Don’t just apt install rustc cargo. Either do that and make sure to use only Rust libraries from your distro (with the tiresome config runes below); or, just use rustup.

Don’t do the obvious thing; it’s never what you want

Debian ships a Rust compiler, and a large number of Rust libraries.

But if you just do things the obvious “default” way, with apt install rustc cargo, you will end up using Debian’s compiler but upstream libraries, directly and uncurated from crates.io.

This is not what you want. There are about two reasonable things to do, depending on your preferences.

Q. Download and run whatever code from the internet?

The key question is this:

Are you comfortable downloading code, directly from hundreds of upstream Rust package maintainers, and running it ?

That’s what cargo does. It’s one of the main things it’s for. Debian’s cargo behaves, in this respect, just like upstream’s. Let me say that again:

Debian’s cargo promiscuously downloads code from crates.io just like upstream cargo.

So if you use Debian’s cargo in the most obvious way, you are still downloading and running all those random libraries. The only thing you’re avoiding downloading is the Rust compiler itself, which is precisely the part that is most carefully maintained, and of least concern.

Debian’s cargo can even download from crates.io when you’re building official Debian source packages written in Rust: if you run dpkg-buildpackage, the downloading is suppressed; but a plain cargo build will try to obtain and use dependencies from the upstream ecosystem. (“Happily”, if you do this, it’s quite likely to bail out early due to version mismatches, before actually downloading anything.)

Option 1: WTF, no I don’t want curl|bash

OK, but then you must limit yourself to libraries available within Debian. Each Debian release provides a curated set. It may or may not be sufficient for your needs. Many capable programs can be written using the packages in Debian.

But any upstream Rust project that you encounter is likely to be a pain to get working, unless their maintainers specifically intend to support this. (This is fairly rare, and the Rust tooling doesn’t make it easy.)

To go with this plan, apt install rustc cargo and put this in your configuration, in $HOME/.cargo/config.toml:

[source.debian-packages]
directory = "/usr/share/cargo/registry"
[source.crates-io]
replace-with = "debian-packages"

This causes cargo to look in /usr/share for dependencies, rather than downloading them from crates.io. You must then install the librust-FOO-dev packages for each of your dependencies, with apt.

This will allow you to write your own program in Rust, and build it using cargo build.

Option 2: Biting the curl|bash bullet

If you want to build software that isn’t specifically targeted at Debian’s Rust you will probably need to use packages from crates.io, not from Debian.

If you’re doing to do that, there is little point not using rustup to get the latest compiler. rustup’s install rune is alarming, but cargo will be doing exactly the same kind of thing, only worse (because it trusts many more people) and more hidden.

So in this case: do run the curl|bash install rune.

Hopefully the Rust project you are trying to build have shipped a Cargo.lock; that contains hashes of all the dependencies that they last used and tested. If you run cargo build --locked, cargo will only use those versions, which are hopefully OK.

And you can run cargo audit to see if there are any reported vulnerabilities or problems. But you’ll have to bootstrap this with cargo install --locked cargo-audit; cargo-audit is from the RUSTSEC folks who do care about these kind of things, so hopefully running their code (and their dependencies) is fine. Note the --locked which is needed because cargo’s default behaviour is wrong.

Privilege separation

This approach is rather alarming. For my personal use, I wrote a privsep tool which allows me to run all this upstream Rust code as a separate user.

That tool is nailing-cargo. It’s not particularly well productised, or tested, but it does work for at least one person besides me. You may wish to try it out, or consider alternative arrangements. Bug reports and patches welcome.

OMG what a mess

Indeed. There are large number of technical and social factors at play.

cargo itself is deeply troubling, both in principle, and in detail. I often find myself severely disappointed with its maintainers’ decisions. In mitigation, much of the wider Rust upstream community does takes this kind of thing very seriously, and often makes good choices. RUSTSEC is one of the results.

Debian’s technical arrangements for Rust packaging are quite dysfunctional, too: IMO the scheme is based on fundamentally wrong design principles. But, the Debian Rust packaging team is dynamic, constantly working the update treadmills; and the team is generally welcoming and helpful.

Sadly last time I explored the possibility, the Debian Rust Team didn’t have the appetite for more fundamental changes to the workflow (including, for example, changes to dependency version handling). Significant improvements to upstream cargo’s approach seem unlikely, too; we can only hope that eventually someone might manage to supplant it.

edited 2024-03-21 21:49 to add a cut tag



comment count unavailable comments

David BrinVernor Vinge - the Man with Lamps on His Brows

They said it of Moses - that he had 'lamps on his brows.' That he could peer ahead, through the fog of time. That phrase is applied now to the Prefrontal Lobes, just above the eyes - organs that provide humans our wan powers of foresight. Wan... except in a few cases, when those lamps blaze! Shining ahead of us, illuminating epochs yet to come.


Greg Bear, Gregory Benford, David Brin, Vernor Vinge


Alas, such lights eventually dim. And so, it is with sadness - and deep appreciation of my friend and colleague - that I must report the passing of Vernor Vinge. A titan in the literary genre that explores a limitless range of potential destinies, Vernor enthralled millions with tales of plausible tomorrows, made all the more vivid by his polymath masteries of language, drama, characters and the implications of science. 

 

Accused by some of a grievous sin - that of 'optimism' - Vernor gave us peerless legends that often depicted human success at overcoming problems... those right in front of us... while posing new ones! New dilemmas that may lie just beyond our myopic gaze. 


He would often ask: "What if we succeed? Do you think that will be the end of it?"

 

Vernor's aliens - in classics like A Deepness in the Sky and A Fire Upon the Deep - were fascinating beings, drawing us into different styles of life and paths of consciousness. 

 

His 1981 novella "True Names" was perhaps the first story to present a plausible concept of cyberspace, which would later be central to cyberpunk stories by William Gibson, Neal Stephenson and others. Many innovators of modern industry cite “True Names” as their keystone technological inspiration... though I deem it to have been even more prophetic about the yin-yang tradeoffs of privacy, transparency and accountability.  

 

Another of the many concepts arising in Vernor’s dynamic mind was that of the “Technological Singularity,” a term (and disruptive notion) that has pervaded culture and our thoughts about the looming future.

 

Rainbows End expanded these topics to include the vividly multi-layered "augmented' reality wherein we all will live, in just a few years from now. It was almost-certainly the most vividly accurate portrayal of how new generations might apply onrushing cyber-tools, boggling their parents, who will stare at their kids' accomplishments, in wonder. Wonders like a university library building that, during an impromptu rave, stands up and starts to dance!

Vinge was also a long-revered educator and professor of math and computer science at San Diego State University, mentoring generations of practical engineers to also keep a wide stance and open minds.

Vernor had been - for years - under care for progressive Parkinsons, at a very nice place overlooking the Pacific in La Jolla. As reported by his friend and fellow SDSU Prof. John Carroll, his decline had steepened since November, but was relatively comfortable. Up until that point, I had been in contact with Vernor almost weekly, but my friendship pales next to John's devotion, for which I am - (and we all should be) - deeply grateful.

 

I am a bit too wracked, right now, to write much more. Certainly, homages will flow and we will post some on a tribute page. 


I will say that it's a bit daunting now to be a "Killer B" who's still standing. So, let me close with a photo from last October, that's dear to my heart. And those prodigious brow-lamps were still shining brightly!


We spanned a pretty wide spectrum - politically! Yet, we Killer Bs - (Vernor was a full member! And Octavia Butler once guffawed happily when we inducted her) - always shared a deep love of our high art - that of gedankenexperimentation, extrapolation into the undiscovered country ahead. 


If Vernor's readers continue to be inspired - that country might even feature more solutions than problems. And perhaps copious supplies of hope.



========================================================


Addenda & tributes


“What a fine writer he was!”  -- Robert Silverberg.


“A kind man.”  -- Kim Stanley Robinson (The nicest thing anyone could say.)

 

The good news is that Vernor, and you and many other authors, will have achieved a kind of immortality thanks to your works. My favorite Vernor Vinge book was True Names." -- Vinton Cerf

 

Vernor was a good guy. -- Pat Cadigan


David Brin 2Remembering Vernor Vinge

Author of the Singularity

It is with sadness – and deep appreciation of my friend and colleague – that I must report the passing of fellow science fiction author – Vernor Vinge. A titan in the literary genre that explores a limitless range of potential destinies, Vernor enthralled millions with tales of plausible tomorrows, made all the more vivid by his polymath masteries of language, drama, characters and the implications of science.

Accused by some of a grievous sin – that of ‘optimism’ – Vernor gave us peerless legends that often depicted human success at overcoming problems… those right in front of us… while posing new ones! New dilemmas that may lie just ahead of our myopic gaze. He would often ask: “What if we succeed? Do you think that will be the end of it?”

Vernor’s aliens – in classic science fiction novels such as A Deepness in the Sky and A Fire Upon the Deep – were fascinating beings, drawing us into different styles of life and paths of consciousness.

His 1981 novella “True Names” was perhaps the first story to present a plausible concept of cyberspace, which would later be central to cyberpunk stories by William Gibson, Neal Stephenson and others. Many innovators of modern industry cite “True Names” as their keystone technological inspiration, though I deem it to have been even more prophetic about the yin-yang tradeoffs of privacy, transparency and accountability.  

Another of the many concepts arising in Vernor’s dynamic mind was that of the “Technological Singularity,” a term (and disruptive notion) that has pervaded culture and our thoughts about the looming future.

Others cite Rainbows End as the most vividly accurate portrayal of how new generations will apply onrushing cyber-tools, boggling their parents, who will stare at their kids’ accomplishments, in wonder. Wonders like a university library building that, during an impromptu rave, stands up and starts to dance!

Vernor had been – for years – under care for progressive Parkinsons, at a very nice place overlooking the Pacific in La Jolla. As reported by his friend and fellow San Diego State professor John Carroll, his decline had steepened since November, but was relatively comfortable. Up until that point, I had been in contact with Vernor almost weekly, but my friendship pales next to John’s devotion, for which I am – (and we all should be) – deeply grateful.

I am a bit too wracked, right now, to write much more. Certainly, homages will flow and we will post some on a tribute page. I will say that it’s a bit daunting now to be a “Killer B” who’s still standing. So, let me close with a photo that’s dear to my heart.

We spanned a pretty wide spectrum – politically! Yet, we Killer B’s (Vernor was a full member! And Octavia Butler once guffawed happily when we inducted her) always shared a deep love of our high art – that of gedankenexperimentation, extrapolation into the undiscovered country ahead.

And – if Vernor’s readers continue to be inspired – that country might even feature more solutions than problems. And perhaps copious supplies of hope.

Killer B’s at a book signing: Greg Bear, Gregory Benford, David Brin, Vernor Vinge

Cryptogram On Secure Voting Systems

Andrew Appel shepherded a public comment—signed by twenty election cybersecurity experts, including myself—on best practices for ballot marking devices and vote tabulation. It was written for the Pennsylvania legislature, but it’s general in nature.

From the executive summary:

We believe that no system is perfect, with each having trade-offs. Hand-marked and hand-counted ballots remove the uncertainty introduced by use of electronic machinery and the ability of bad actors to exploit electronic vulnerabilities to remotely alter the results. However, some portion of voters mistakenly mark paper ballots in a manner that will not be counted in the way the voter intended, or which even voids the ballot. Hand-counts delay timely reporting of results, and introduce the possibility for human error, bias, or misinterpretation.

Technology introduces the means of efficient tabulation, but also introduces a manifold increase in complexity and sophistication of the process. This places the understanding of the process beyond the average person’s understanding, which can foster distrust. It also opens the door to human or machine error, as well as exploitation by sophisticated and malicious actors.

Rather than assert that each component of the process can be made perfectly secure on its own, we believe the goal of each component of the elections process is to validate every other component.

Consequently, we believe that the hallmarks of a reliable and optimal election process are hand-marked paper ballots, which are optically scanned, separately and securely stored, and rigorously audited after the election but before certification. We recommend state legislators adopt policies consistent with these guiding principles, which are further developed below.

Cryptogram Licensing AI Engineers

The debate over professionalizing software engineers is decades old. (The basic idea is that, like lawyers and architects, there should be some professional licensing requirement for software engineers.) Here’s a law journal article recommending the same idea for AI engineers.

This Article proposes another way: professionalizing AI engineering. Require AI engineers to obtain licenses to build commercial AI products, push them to collaborate on scientifically-supported, domain-specific technical standards, and charge them with policing themselves. This Article’s proposal addresses AI harms at their inception, influencing the very engineering decisions that give rise to them in the first place. By wresting control over information and system design away from companies and handing it to AI engineers, professionalization engenders trustworthy AI by design. Beyond recommending the specific policy solution of professionalization, this Article seeks to shift the discourse on AI away from an emphasis on light-touch, ex post solutions that address already-created products to a greater focus on ex ante controls that precede AI development. We’ve used this playbook before in fields requiring a high level of expertise where a duty to the public welfare must trump business motivations. What if, like doctors, AI engineers also vowed to do no harm?

I have mixed feelings about the idea. I can see the appeal, but it never seemed feasible. I’m not sure it’s feasible today.

Cryptogram Google Pays $10M in Bug Bounties in 2023

BleepingComputer has the details. It’s $2M less than in 2022, but it’s still a lot.

The highest reward for a vulnerability report in 2023 was $113,337, while the total tally since the program’s launch in 2010 has reached $59 million.

For Android, the world’s most popular and widely used mobile operating system, the program awarded over $3.4 million.

Google also increased the maximum reward amount for critical vulnerabilities concerning Android to $15,000, driving increased community reports.

During security conferences like ESCAL8 and hardwea.io, Google awarded $70,000 for 20 critical discoveries in Wear OS and Android Automotive OS and another $116,000 for 50 reports concerning issues in Nest, Fitbit, and Wearables.

Google’s other big software project, the Chrome browser, was the subject of 359 security bug reports that paid out a total of $2.1 million.

Slashdot thread.

Worse Than FailureCodeSOD: Reading is a Safe Operation

Alex saw, in the company's codebase, a method called recursive_readdir. It had no comments, but the name seemed pretty clear: it would read directories recursively, presumably enumerating their contents.

Fortunately for Alex, they checked the code before blindly calling the method.

public function recursive_readdir($path)
{
    $handle = opendir($path);
    while (($file = readdir($handle)) !== false)
    {
        if ($file != '.' && $file != '..')
        {
            $filepath = $path . '/' . $file;
            if (is_dir($filepath))
            {
                rmdir($filepath);
                recursive_readdir($filepath);
            }
            else
            {
                    unlink($filepath);
            }
        }
    }
    closedir($handle);
    rmdir($path);
}

This is a recursive delete. rmdir requires the target directory to be empty, so this recurses over all the files and subfolders in the directory, deleting them, so that we can delete the directory.

This code is clearly cribbed from comments on the PHP documentation, with a fun difference in that this version is both unclearly named, and also throws an extra rmdir call in the is_dir branch- a potential "optimization" that doesn't actually do anything (it either fails because the directory isn't empty, or we end up calling it twice anyway).

Alex learned to take nothing for granted in this code base.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

365 TomorrowsDisinformation Failure

Author: David C. Nutt The uniformed Da’Ri officer saw me enter the bar and nearly ran to me. He was at my booth before I had a chance to settle in and was talking at light speed before the first round hit the table. Things did not go well for the Da’Ri today. As an […]

The post Disinformation Failure appeared first on 365tomorrows.

Krebs on SecurityThe Not-so-True People-Search Network from China

It’s not unusual for the data brokers behind people-search websites to use pseudonyms in their day-to-day lives (you would, too). Some of these personal data purveyors even try to reinvent their online identities in a bid to hide their conflicts of interest. But it’s not every day you run across a US-focused people-search network based in China whose principal owners all appear to be completely fabricated identities.

Responding to a reader inquiry concerning the trustworthiness of a site called TruePeopleSearch[.]net, KrebsOnSecurity began poking around. The site offers to sell reports containing photos, police records, background checks, civil judgments, contact information “and much more!” According to LinkedIn and numerous profiles on websites that accept paid article submissions, the founder of TruePeopleSearch is Marilyn Gaskell from Phoenix, Ariz.

The saucy yet studious LinkedIn profile for Marilyn Gaskell.

Ms. Gaskell has been quoted in multiple “articles” about random subjects, such as this article at HRDailyAdvisor about the pros and cons of joining a company-led fantasy football team.

“Marilyn Gaskell, founder of TruePeopleSearch, agrees that not everyone in the office is likely to be a football fan and might feel intimidated by joining a company league or left out if they don’t join; however, her company looked for ways to make the activity more inclusive,” this paid story notes.

Also quoted in this article is Sally Stevens, who is cited as HR Manager at FastPeopleSearch[.]io.

Sally Stevens, the phantom HR Manager for FastPeopleSearch.

“Fantasy football provides one way for employees to set aside work matters for some time and have fun,” Stevens contributed. “Employees can set a special league for themselves and regularly check and compare their scores against one another.”

Imagine that: Two different people-search companies mentioned in the same story about fantasy football. What are the odds?

Both TruePeopleSearch and FastPeopleSearch allow users to search for reports by first and last name, but proceeding to order a report prompts the visitor to purchase the file from one of several established people-finder services, including BeenVerified, Intelius, and Spokeo.

DomainTools.com shows that both TruePeopleSearch and FastPeopleSearch appeared around 2020 and were registered through Alibaba Cloud, in Beijing, China. No other information is available about these domains in their registration records, although both domains appear to use email servers based in China.

Sally Stevens’ LinkedIn profile photo is identical to a stock image titled “beautiful girl” from Adobe.com. Ms. Stevens is also quoted in a paid blog post at ecogreenequipment.com, as is Alina Clark, co-founder and marketing director of CocoDoc, an online service for editing and managing PDF documents.

The profile photo for Alina Clark is a stock photo appearing on more than 100 websites.

Scouring multiple image search sites reveals Ms. Clark’s profile photo on LinkedIn is another stock image that is currently on more than 100 different websites, including Adobe.com. Cocodoc[.]com was registered in June 2020 via Alibaba Cloud Beijing in China.

The same Alina Clark and photo materialized in a paid article at the website Ceoblognation, which in 2021 included her at #11 in a piece called “30 Entrepreneurs Describe The Big Hairy Audacious Goals (BHAGs) for Their Business.” It’s also worth noting that Ms. Clark is currently listed as a “former Forbes Council member” at the media outlet Forbes.com.

Entrepreneur #6 is Stephen Curry, who is quoted as CEO of CocoSign[.]com, a website that claims to offer an “easier, quicker, safer eSignature solution for small and medium-sized businesses.” Incidentally, the same photo for Stephen Curry #6 is also used in this “article” for #22 Jake Smith, who is named as the owner of a different company.

Stephen Curry, aka Jake Smith, aka no such person.

Mr. Curry’s LinkedIn profile shows a young man seated at a table in front of a laptop, but an online image search shows this is another stock photo. Cocosign[.]com was registered in June 2020 via Alibaba Cloud Beijing. No ownership details are available in the domain registration records.

Listed at #13 in that 30 Entrepreneurs article is Eden Cheng, who is cited as co-founder of PeopleFinderFree[.]com. KrebsOnSecurity could not find a LinkedIn profile for Ms. Cheng, but a search on her profile image from that Entrepreneurs article shows the same photo for sale at Shutterstock and other stock photo sites.

DomainTools says PeopleFinderFree was registered through Alibaba Cloud, Beijing. Attempts to purchase reports through PeopleFinderFree produce a notice saying the full report is only available via Spokeo.com.

Lynda Fairly is Entrepreneur #24, and she is quoted as co-founder of Numlooker[.]com, a domain registered in April 2021 through Alibaba in China. Searches for people on Numlooker forward visitors to Spokeo.

The photo next to Ms. Fairly’s quote in Entrepreneurs matches that of a LinkedIn profile for Lynda Fairly. But a search on that photo shows this same portrait has been used by many other identities and names, including a woman from the United Kingdom who’s a cancer survivor and mother of five; a licensed marriage and family therapist in Canada; a software security engineer at Quora; a journalist on Twitter/X; and a marketing expert in Canada.

Cocofinder[.]com is a people-search service that launched in Sept. 2019, through Alibaba in China. Cocofinder lists its market officer as Harriet Chan, but Ms. Chan’s LinkedIn profile is just as sparse on work history as the other people-search owners mentioned already. An image search online shows that outside of LinkedIn, the profile photo for Ms. Chan has only ever appeared in articles at pay-to-play media sites, like this one from outbackteambuilding.com.

Perhaps because Cocodoc and Cocosign both sell software services, they are actually tied to a physical presence in the real world — in Singapore (15 Scotts Rd. #03-12 15, Singapore). But it’s difficult to discern much from this address alone.

Who’s behind all this people-search chicanery? A January 2024 review of various people-search services at the website techjury.com states that Cocofinder is a wholly-owned subsidiary of a Chinese company called Shenzhen Duiyun Technology Co.

“Though it only finds results from the United States, users can choose between four main search methods,” Techjury explains. Those include people search, phone, address and email lookup. This claim is supported by a Reddit post from three years ago, wherein the Reddit user “ProtectionAdvanced” named the same Chinese company.

Is Shenzhen Duiyun Technology Co. responsible for all these phony profiles? How many more fake companies and profiles are connected to this scheme? KrebsOnSecurity found other examples that didn’t appear directly tied to other fake executives listed here, but which nevertheless are registered through Alibaba and seek to drive traffic to Spokeo and other data brokers. For example, there’s the winsome Daniela Sawyer, founder of FindPeopleFast[.]net, whose profile is flogged in paid stories at entrepreneur.org.

Google currently turns up nothing else for in a search for Shenzhen Duiyun Technology Co. Please feel free to sound off in the comments if you have any more information about this entity, such as how to contact it. Or reach out directly at krebsonsecurity @ gmail.com.

A mind map highlighting the key points of research in this story. Click to enlarge. Image: KrebsOnSecurity.com

ANALYSIS

It appears the purpose of this network is to conceal the location of people in China who are seeking to generate affiliate commissions when someone visits one of their sites and purchases a people-search report at Spokeo, for example. And it is clear that Spokeo and others have created incentives wherein anyone can effectively white-label their reports, and thereby make money brokering access to peoples’ personal information.

Spokeo’s Wikipedia page says the company was founded in 2006 by four graduates from Stanford University. Spokeo co-founder and current CEO Harrison Tang has not yet responded to requests for comment.

Intelius is owned by San Diego based PeopleConnect Inc., which also owns Classmates.com, USSearch, TruthFinder and Instant Checkmate. PeopleConnect Inc. in turn is owned by H.I.G. Capital, a $60 billion private equity firm. Requests for comment were sent to H.I.G. Capital. This story will be updated if they respond.

BeenVerified is owned by a New York City based holding company called The Lifetime Value Co., a marketing and advertising firm whose brands include PeopleLooker, NeighborWho, Ownerly, PeopleSmart, NumberGuru, and Bumper, a car history site.

Ross Cohen, chief operating officer at The Lifetime Value Co., said it’s likely the network of suspicious people-finder sites was set up by an affiliate. Cohen said Lifetime Value would investigate to determine if this particular affiliate was driving them any sign-ups.

All of the above people-search services operate similarly. When you find the person you’re looking for, you are put through a lengthy (often 10-20 minute) series of splash screens that require you to agree that these reports won’t be used for employment screening or in evaluating new tenant applications. Still more prompts ask if you are okay with seeing “potentially shocking” details about the subject of the report, including arrest histories and photos.

Only at the end of this process does the site disclose that viewing the report in question requires signing up for a monthly subscription, which is typically priced around $35. Exactly how and from where these major people-search websites are getting their consumer data — and customers — will be the subject of further reporting here.

The main reason these various people-search sites require you to affirm that you won’t use their reports for hiring or vetting potential tenants is that selling reports for those purposes would classify these firms as consumer reporting agencies (CRAs) and expose them to regulations under the Fair Credit Reporting Act (FCRA).

These data brokers do not want to be treated as CRAs, and for this reason their people search reports typically don’t include detailed credit histories, financial information, or full Social Security Numbers (Radaris reports include the first six digits of one’s SSN).

But in September 2023, the U.S. Federal Trade Commission found that TruthFinder and Instant Checkmate were trying to have it both ways. The FTC levied a $5.8 million penalty against the companies for allegedly acting as CRAs because they assembled and compiled information on consumers into background reports that were marketed and sold for employment and tenant screening purposes.

The FTC also found TruthFinder and Instant Checkmate deceived users about background report accuracy. The FTC alleges these companies made millions from their monthly subscriptions using push notifications and marketing emails that claimed that the subject of a background report had a criminal or arrest record, when the record was merely a traffic ticket.

The FTC said both companies deceived customers by providing “Remove” and “Flag as Inaccurate” buttons that did not work as advertised. Rather, the “Remove” button removed the disputed information only from the report as displayed to that customer; however, the same item of information remained visible to other customers who searched for the same person.

The FTC also said that when a customer flagged an item in the background report as inaccurate, the companies never took any steps to investigate those claims, to modify the reports, or to flag to other customers that the information had been disputed.

There are a growing number of online reputation management companies that offer to help customers remove their personal information from people-search sites and data broker databases. There are, no doubt, plenty of honest and well-meaning companies operating in this space, but it has been my experience that a great many people involved in that industry have a background in marketing or advertising — not privacy.

Also, some so-called data privacy companies may be wolves in sheep’s clothing. On March 14, KrebsOnSecurity published an abundance of evidence indicating that the CEO and founder of the data privacy company OneRep.com was responsible for launching dozens of people-search services over the years.

Finally, some of the more popular people-search websites are notorious for ignoring requests from consumers seeking to remove their information, regardless of which reputation or removal service you use. Some force you to create an account and provide more information before you can remove your data. Even then, the information you worked hard to remove may simply reappear a few months later.

This aptly describes countless complaints lodged against the data broker and people search giant Radaris. On March 8, KrebsOnSecurity profiled the co-founders of Radaris, two Russian brothers in Massachusetts who also operate multiple Russian-language dating services and affiliate programs.

The truth is that these people-search companies will continue to thrive unless and until Congress begins to realize it’s time for some consumer privacy and data protection laws that are relevant to life in the 21st century. Duke University adjunct professor Justin Sherman says virtually all state privacy laws exempt records that might be considered “public” or “government” documents, including voting registries, property filings, marriage certificates, motor vehicle records, criminal records, court documents, death records, professional licenses, bankruptcy filings, and more.

“Consumer privacy laws in California, Colorado, Connecticut, Delaware, Indiana, Iowa, Montana, Oregon, Tennessee, Texas, Utah, and Virginia all contain highly similar or completely identical carve-outs for ‘publicly available information’ or government records,” Sherman said.

,

LongNowStumbling Towards First Light

Stumbling Towards First Light

A satellite capturing high-resolution images of Chile on the afternoon of October 18, 02019, would have detected at least two signs of unusual human activity.

Pictures taken over Santiago, Chile’s capital city, would have shown numerous plumes of smoke slanting away from buses, subway stations and commercial buildings that had been torched by rioters. October 18 marked the start of the Estallido Social (Social Explosion), a months-long series of violent protests that pitched this South American country of 19 million people into a crisis from which it has yet to fully emerge.

On the same day, the satellite would also have recorded a fresh disturbance on Cerro Las Campanas, a mountain in the Atacama Desert some 300 miles north of Santiago. A deep circular trench, 200 feet in diameter, had recently been drilled into the rock on the flattened summit. The trench will eventually hold the concrete foundations of the Giant Magellan Telescope, a $2 billion instrument that will have 10 times the resolving power of the Hubble Space Telescope. But on October 18 the excavation looked like one of the cryptic shapes that surround the Atacama Giant, an humanoid geoglyph constructed by the indigenous people of the Andes that has been staring up at the desert sky since long before Ferdinand Magellan set sail in 01519.

I see the riots and the unfinished telescope as markers at the temporal extremes of human agency. At one end, the twitchy impatience of politics seduces us with the illusion that a Molotov cocktail, an election or a military coup will set the world to rights. At the opposite point of the spectrum, the slow, painstaking and often-inconclusive work of cosmology attempts to fathom the origins of time itself.

That both pursuits should take place in Chile is not in itself remarkable: leading-edge science coexists with political chaos in countries as varied as Russia and the United States. Yet in Chile, a so-called “emerging economy,” the juxtaposition of first-world astronomy with third-world grievances raises questions about planning, progress, and the distribution of one of humanity’s rarest assets.

Extremely patient risk capital

The next era of astronomy will depend on instruments so complicated and costly that no single nation can build them. A list of contributors to the James Webb Space Telescope, for example, includes 35 universities and 280 public agencies and private companies in 14 countries. This aggregation of design, engineering, construction and software talent from around the planet is a hallmark of “big science” projects. But large telescopes are also emblematic of the outsized timescales of “long science.” They depend on a fragile amalgam of trust, loyalty, institutional prestige and sheer endurance that must sustain a project for two or three decades before “first light,” or the moment when a telescope actually begins to gather data.

“It takes a generation to build a telescope,” Charles Alcock, director of the Harvard-Smithsonian Center for Astrophysics and a member of Giant Magellan Telescope (GMT) board, said some years ago. Consider the logistics involved in a single segment of the GMT’s construction: the process of fabricating its seven primary mirrors, each measuring 27 feet in diameter and using 17 metric tons of specialized Japanese glass. The only facility capable of casting mirrors this large (by melting the glass inside a clam-shaped oven at 2,100 degrees Fahrenheit) is situated deep beneath University of Arizona football stadium. It takes three months for the molten glass to cool. Over the next four years, the mirror will be mounted, ground and slowly polished to a precision of around one millionth of an inch.  The GMT’s first mirror was cast in 02005; its seventh will be finished sometime in 02027. Building the 1,800-ton steel structure that will hold these mirrors, shipping the enormous parts by sea, assembling the telescope atop Cerro Las Campanas, and then testing and calibrating its incommunicably delicate instruments will take several more years.

Not surprisingly, these projects don’t even attempt to raise their full budgets up front. Instead, they operate on a kind of faith, scraping together private grants and partial transfers from governments and universities to make incremental progress, while constantly lobbying for additional funding. At each stage, they must defend nebulous objectives (“understanding the nature of dark matter”) against the claims of disciplines with more tangible and near-term goals, such as fusion energy. And given the very real possibility that they will not be completed, big telescopes require what private equity investors might describe as the world’s most patient risk capital.

Few countries have been more successful at attracting this kind of capital than Chile. The GMT is one of three colossal observatories currently under construction in the Atacama Desert. The $1.6 billion Extremely Large Telescope, which will house a 128-foot main mirror inside a dome nearly as tall as the Statue of Liberty, will be able to directly image and study the atmospheres of potentially habitable exoplanets. The $1.9 billion Vera T. Rubin Telescope will use a 3.500 megapixel digital camera to map the entire night sky every three days, creating the first 3-D virtual map of the visible cosmos while recording changes in stars and events like supernovas. Two other comparatively smaller projects, the Fred Young Sub-millimeter Telescope and the Cherenkov Telescope Array, are also in the works.

Chile is already home to the $1.4 billion Atacama Large Millimeter Array (ALMA), a complex of 66 huge dish antennas some 16,000 feet above sea level that used to be described as the world’s largest and most expensive land-based astronomical project. And over the last half-century, enormous observatories at Cerro Tololo, Cerro Pachon, Cerro Paranal, and Cerro La Silla have deployed hundreds of the world’s most sophisticated telescopes and instruments to obtain foundational evidence in every branch of astronomy and astrophysics.

By the early 02030s, a staggering 70 percent of the world’s entire land-based astronomical data gathering capacity is expected to be concentrated in an swath of Chilean desert about the size of Oregon.

Stumbling Towards First Light
A map of major telescopes and astronomical sites in Northern Chile. Map by Jacob Sujin Kuppermann

Blurring imaginary borders

Collectively, this cluster of observatories represents expenditures and collaboration on a scale similar to “big science” landmarks such as the Large Hadron Collider or the Manhattan Project. Those enterprises were the product of ambitious, long-term strategies conceived and executed by a succession of visionary leaders. But according to Barbara Silva, a historian of science at Chile’s Universidad Alberto Hurtado, there has been no grand plan, and no one can legitimately take credit for turning Chile into the global capital of astronomy.

In several papers she has published on the subject, Silva describes a decentralized and largely uncoordinated 175-year process driven by relationships—at times competitive, at times collaborative—between scientists and institutions that were trying to solve specific problems that required observations from the Southern Hemisphere.

In 01849, for example, the U.S. Navy sent Lieutenant James Melville Gillis to Chile to obtain measurements that would contribute to an international effort to calculate the earth’s orbit. Gillis built a modest observatory on Santa Lucía Hill, in what is now central Santiago, and trained several local assistants. Three years later, when Gillis completed his assignment, the Chilean government purchased the facility and its instruments and used them to establish the Observatorio Astronómico Nacional—one of the first in Latin America.

Stumbling Towards First Light
An 01872 illustration by Recaredo Santos Tornero of the Observatorio Astronómico Nacional in Santiago de Chile.

Half a century later, representatives from another American institution, the University of California’s Lick Observatory, built a second observatory in Chile and began exploring locations in the mountains of the Atacama Desert. They were among the first to document the conditions that would eventually turn Chile into an astronomy mecca: high altitude, extremely low humidity, stable weather and enormous stretches of uninhabited land with minimal light pollution.

During the Cold War, the director of Chile’s Observatorio Astronómico Nacional, Federico Ruttland, saw an opportunity to exploit the growing scientific competition among industrialized powers by fostering a host of cooperation agreements with astronomers and universities in the Northern Hemisphere. Delegations of astronomers from the U.S., Europe and the Soviet Union began visiting Chile to explore locations for large observatories. Germany, France, Belgium, the Netherlands and Sweden pooled resources to form the European Southern Observatory. By the late 1960s, several parallel but independent projects were underway to build the first generation of world-class observatories in Chile. Each of them involved so many partners they tended to “blur the imaginary borders of nations,” Silva writes.

The historical record provides few clues as to why these partners thought Chile would be a safe place to situate priceless instruments that are meant to be operated for a half-century or longer. Silva has found some accounts indicating that Chile was seen as “somehow trustworthy, with a reputation… of being different from the rest of Latin America.” That perception, Silva writes, may have been a self-serving “discourse construct” based largely on the accounts of British and American business interests that dominated the mining of Chilean saltpeter and copper over the previous century.

Anyone looking closely at Chile’s political history would have seen a tumultuous pattern not very different from that of neighboring countries such as Argentina, Peru or Brazil. In the century and a half following its declaration of independence from Spain in 01810,  Chile adopted nine different constitutions. A small, landed oligarchy controlled extractive industries and did little to improve the lot of agricultural and mining workers. By the middle of the century, Chile had half a dozen major political parties ranging from communists to Catholic nationalists, and a generation of increasingly radicalized voters was clamoring for change.

In 01970 Salvador Allende became the first Marxist president elected in a liberal democracy in Latin America. His ambitious program to build a socialist society was cut short by a U.S.-supported military coup in 01973. Gen. Augusto Pinochet ruled Chile for next 17 years, brutally suppressing any opposition while deregulating and privatizing the economy along lines recommended by the “Chicago Boys”— a group of economists trained under Milton Friedman at the University of Chicago.

Soviet astronomers left Chile immediately after the coup. American and European scientists continued to work at facilities such as the Inter-American Observatory at Cerro Tololo throughout this period, but no new observatories were announced during the dictatorship.

Negotiating access to time

With the return of democracy in 01990, Chile entered a period of growth and stability that would last for three decades. A succession of center-left and center-right administrations carried out social and economic reforms, foreign investment poured in, and Chile came to be seen as a model of market-oriented development. Poverty, which had affected more than 50 percent of the population in 01980s, dropped to single digits by the 02010s.

Foreign astronomers quickly returned to Chile and began negotiating bilateral agreements to build the next generation of large telescopes. This time, Chilean researchers urged the government to introduce a new requirement: in exchange for land and tax exemptions, any new international observatory built in the country would need to reserve 10 percent of observation time for Chilean astronomers. It was a bold move, because access to these instruments is fiercely contested.

Bárbara Rojas-Ayala, an astrophysicist at Chile’s University of Tarapacá, belongs to a generation of young astronomers who attribute their careers directly to this decision. She says that although the new observatories agreed to the “10 percent rule,” it was initially not enforced—in part because there were not enough qualified Chilean astronomers in the mid-01990s. She credits two distinguished Chilean astronomers, Mónica Rubio and María Teresa Ruiz, with convincing government officials that only by enforcing the rule would Chile begin to cultivate national astronomy talent.

Stumbling Towards First Light
Maria Teresa Ruiz (Left) alongside two of the four Auxiliary Telescopes of the ESO’s Very Large Telescope at the Paranal Observatory in the Atacama Region of Chile. Photo by the International Astronomical Union, released under the Creative Commons Attribution 4.0 International License

The strategy worked. Rojas-Ayala was one of hundreds of Chilean college students who began completing graduate studies at leading universities in the Global North and then returning to teach and conduct research, knowing they would have access to the most coveted instruments. Between the mid-01990s and the present, the number of Chilean universities with astronomy or astrophysics departments jumped from 5 to 24. The community of professional Chilean astronomers has grown ten-fold, to nearly 300, and some 800 undergraduate and post-graduate students are now studying astronomy or related fields in Chilean universities. Chilean firms are also now allowed to compete for the specialized services that are needed to maintain and operate these observatories, creating a growing ecosystem of companies and institutions such as the Center for Astrophysics and Related Technologies.

By the 02010s, Chile could legitimately boast to have leapfrogged a century of scientific development to join the vanguard of a discipline historically dominated by the richest industrial powers—something very few countries in the Global South have ever achieved.

From 30 pesos to 30 years

The Estallido Social of 02019 opened a wide crack in this narrative. The riots were triggered by a 30-peso increase (around $0.25) in the basic fare for Santiago’s metro system. But the rioters quickly embraced a slogan, “No son 30 pesos ¡son 30 años!,” which torpedoed the notion that the post-Pinochet era has been good for most Chileans. Protesters denounced the poor quality of public schools, unaffordable healthcare and a privatized pension system that barely covers the needs of many retirees. Never mind that Chile is objectively in better shape that most of its neighboring countries—the riots showed that Chileans now measure themselves against the living standards of the countries where the GMT and other telescopes were designed. And many of them question whether democracy and neo-liberal economics can ever reverse the country’s persistent wealth inequality.

Stumbling Towards First Light
Protestors at Plaza Baquedano, Santiago, Chile in October 02019. Photos by Carlos Figueroa, CC Attribution-Share Alike 4.0 International

When Gabriel Boric, a 35-year-old left-wing former student leader, won a run-off election for president against a right-wing candidate in 2021, many young Chileans were jubilant. They hoped that a referendum to adopt a new, progressive constitution (to replace the one drafted by the Pinochet regime) would finally set Chile on a more promising path. These hopes were soon disappointed: in 02022 the new constitution was rejected by voters, who considered it far too radical. A year later, a more conservative draft constitution also failed to garner enough votes.

The impasse has left Chile in the grip of a political malaise that will be sadly familiar to people in Europe and the United States. Chileans seemingly can’t agree on how to govern themselves, and their visions of the future appear to be irreconcilable.

For astronomers like Rojas-Ayala, the Estallido Social and its aftermath are a painful reminder of an incongruity that they experience every day. “I feel so privileged to be able to work in these extraordinary facilities,” she said. “My colleagues and I have these amazing careers; and yet we live in a country where there is still a lot of poverty.” Since poverty in Chile has what she calls a “predominantly female face,” Rojas-Ayala frequently speaks at schools and volunteers for initiatives that encourage girls and young women to choose science careers.

Rojas-Ayala has seen a gradual increase in the proportion of women in her field, and she is also encouraged by signs that astronomy is permeating Chilean culture in positive ways. A recent conference on “astrotourism” gathered educators and tour operators who cater to the thousands of stargazers who arrive in Chile each year, eager to experience its peerless viewing conditions at night and then visit the monumental Atacama observatories during the day. José Masa, one Chile’s most celebrated astronomers, has filled small soccer stadiums with multi-generational audiences for non-technical talks on solar eclipses and related phenomena. And a growing list of community organizations is helping to protect Chile’s dark skies from light pollution.

Astronomy is also enriching the work of Chilean novelists and film-makers. “Nostalgia for the Light,” a documentary by Pedro Guzmán, intertwines the story of the growth of Chilean observatories with testimonies from the relatives of political prisoners who were murdered and buried in the Atacama Desert during the Pinochet regime. The graves were unmarked, and many relatives have spent years looking for these remains. Guzman, in the words of the critic Peter Bradshaw, sees astronomy “not simply an ingenious metaphor for political issues, or a way of anesthetizing the pain by claiming that it is all tiny, relative to the reaches of space. Astronomy is a mental discipline, a way of thinking, feeling and clarifying, and a way of insisting on humanity in the face of barbarism.”

Despite their frustration with democracy and their pessimism about the immediate future, Chileans are creating a haven for this way of thinking. Much of what we hope to learn about the universe in the coming decades will depend on their willingness to maintain this uneasy balance.

Worse Than FailureCodeSOD: Do you like this page? Check [Yes] or [No]

In the far-off era of the late-90s, Jens worked for a small software shop that built tools for enterprise customers. It was a small shop, and most of the projects were fairly small- usually enough for one developer to see through to completion.

A co-worker built a VB4 (the latest version available) tool that interfaced with an Oracle database. That co-worker quit, and that meant this tool was Jens's job. The fact that Jens had never touched Visual Basic before meant nothing.

With the original developer gone, Jens had to go back to the customer for some knowledge transfer. "Walk me through how you use the application?"

"The main thing we do is print reports," the user said. They navigated through a few screens worth of menus to the report, and got a preview of it. It was a simple report with five records displayed on each page. The user hit "Print", and then a dialog box appeared: "Print Page 1? [Yes] [No]". The user clicked "Yes". "Print Page 2? [Yes] [No]". The user started clicking "no", since the demo had been done and there was no reason to burn through a bunch of printer paper.

"Wait, is this how this works?" Jens asked, not believing his eyes.

"Yes, it's great because we can decide which pages we want to print," the user said.

"Print Page 57? [Yes] [No]".

With each page, the dialog box took longer and longer to appear, the program apparently bogging down.

Now, the code is long lost, and Jens quickly forgot everything they learned about VB4 once this project was over (fair), so instead of a pure code sample, we have here a little pseudocode to demonstrate the flow:

for k = 1 to runQuery("SELECT MAX(PAGENO) FROM ReportTable WHERE ReportNumber = :?", iRptNmbr)
	dataset = runQuery("SELECT * FROM ReportTable WHERE ReportNumber = :?", iRptNmbr)
	for i = 0 to dataset.count - 1
	  if dataset.pageNo = k then
	    useRecord(dataset)
		dataset.MoveNext
	  end
	next
	if MsgBox("Do you want to print page k?", vbYesNo) = vbYes then
		print(records)
	end
next

"Print Page 128? [Yes] [No]"

The core thrust is that we query the number of pages each time we run the loop. Then we get all of the rows for the report, and check each row to see if they're supposed to be on the page we're printing. If they are, useRecord stages them for printing. Once they're staged, we ask the user if they should be printed.

"Why doesn't it just give you a page selector, like Word does?" Jens asked.

"The last guy said that wasn't possible."

"Print Page 170? [Yes] [No]"

Jens, ignorant of VB, worried that he stepped on a land-mine and had just promised the customer something the tool didn't support. He walked the statement back and said, "I'll look into it, to see if we can't make it better."

It wasn't hard for Jens to make it better: not re-running the query for each page and not iterating across the rows of previous pages on every page boosted performance.

"Print Page 201? [Yes] [No]"

Adding a word-processor-style page selector wasn't much harder. If not for that change, that poor user might be clicking "No" to this very day.

"Print Page 215? [Yes] [No]"

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsBee’s Knees

Author: W.F. Peate A child’s doll sat in the deserted street pockmarked with missile craters. Little orphan Tara tugged away from our hands and reached for the doll. “Booby trap,” shouted a military man. Quick as a cobra he pushed me, Tara and my grandfather behind him so he could take the force of the […]

The post Bee’s Knees appeared first on 365tomorrows.

,

Charles StrossSame bullshit, new tin

I am seeing newspaper headlines today along the lines of British public will be called up to fight if UK goes to war because 'military is too small', Army chief warns, and I am rolling my eyes.

The Tories run this flag up the mast regularly whenever they want to boost their popularity with the geriatric demographic who remember national service (abolished 60 years ago, in 1963). Thatcher did it in the early 80s; the Army general staff told her to piss off. And the pols have gotten the same reaction ever since. This time the call is coming from inside the house—it's a general, not a politician—but it still won't work because changes to the structure of the British society and economy since 1979 (hint: Thatcher's revolution) make it impossible.

Reasons it won't work: there are two aspects, infrastructure and labour.

Let's look at infrastructure first: if you have conscripts, it follows that you need to provide uniforms, food, and beds for them. Less obviously, you need NCOs to shout at them and teach them to brush their teeth and tie their bootlaces (because a certain proportion of your intake will have missed out on the basics). The barracks that used to be used for a large conscript army were all demolished or sold off decades ago, we don't have half a million spare army uniforms sitting in a warehouse somewhere, and the army doesn't currently have ten thousand or more spare training sergeants sitting idle.

Russia could get away with this shit when they invaded Ukraine because Russia kept national service, so the call-up mostly got adults who had been through the (highly abusive) draft some time in the preceding years. Even so, they had huge problems with conscripts sleeping rough or being sent to the front with no kit.

The UK is in a much worse place where it comes to conscription: first you have to train the NCOs (which takes a couple of years as you need to start with experienced and reasonably competent soldiers) and build the barracks. Because the old barracks? Have mostly been turned into modern private housing estates, and the RAF airfields are now civilian airports (but mostly housing estates) and that's a huge amount of construction to squeeze out of a British construction industry that mostly does skyscrapers and supermarkets these days.

And this is before we consider that we're handing these people guns (that we don't have, because there is no national stockpile of half a million spare SA-80s and the bullets to feed them, never mind spare operational Challenger-IIs) and training them to shoot. Rifles? No problem, that'll be a few weeks and a few hundred rounds of ammunition per soldier until they're competent to not blow their own foot off. But anything actually useful on the battlefield, like artillery or tanks or ATGMs? Never mind the two-way radio kit troops are expected to keep charged and dry and operate, and the protocol for using it? That stuff takes months, years, to acquire competence with. And firing off a lot of training rounds and putting a lot of kilometres on those tank tracks (tanks are exotic short-range vehicles that require maintenance like a Bugatti, not a family car). So the warm conscript bodies are just the start of it—bringing back conscription implies equipping them, so should be seen as a coded gimme for "please can has 500% budget increase" from the army.

Now let's discuss labour.

A side-effect of conscription is that it sucks able-bodied young adults out of the workforce. The UK is currently going through a massive labour supply crunch, partly because of Brexit but also because a chunk of the work force is disabled due to long COVID. A body in a uniform is not stacking shelves in Tesco or trading shares in the stock exchange. A body in uniform is a drain on the economy, not a boost.

If you want a half-million strong army, then you're taking half a million people out of the work force that runs the economy that feeds that army. At peak employment in 2023 the UK had 32.8 million fully employed workers and 1.3 million unemployed ... but you can't assume that 1.3 million is available for national service: a bunch will be medically or psychologically unfit or simply unemployable in any useful capacity. (Anyone who can't fill out the forms to register as disabled due to brain fog but who can't work due to long COVID probably falls into this category, for example.) Realistically, economists describe any national economy with 3% or less unemployment as full employment because a labour market needs some liquidity in order to avoid gridlock. And the UK is dangerously close to that right now. The average employment tenure is about 3 years, so a 3% slack across the labour pool is equivalent to one month of unemployment between jobs—there's barely time to play musical chairs, in other words.

If a notional half-million strong conscript force optimistically means losing 3% of the entire work force, that's going to cause knock-on effects elsewhere in the economy, starting with an inflationary spiral driven by wage rises as employers compete to fill essential positions: that didn't happen in the 1910-1960 era because of mass employment, collective bargaining, and wage and price controls, but the post-1979 conservative consensus has stripped away all these regulatory mechanisms. Market forces, baby!

To make matters worse, they'll be the part of the work force who are physically able to do a job that doesn't involve sitting in a chair all day. Again, Russia has reportedly been drafting legally blind diabetic fifty-somethings: it's hard to imagine them being effective soldiers in a trench war. Meanwhile, if you thought your local NHS hospital was over-stretched today, just wait until all the porters and cleaners get drafted so there's nobody to wash the bedding or distribute the meals or wheel patients in and out of theatre for surgery. And the same goes for your local supermarket, where there's nobody left to take rotting produce off the shelves and replace it with fresh—or, more annoyingly, no truckers to drive HGVs, automobile engineers to service your car, or plumbers to fix your leaky pipes. (The latter three are all gimmes for any functioning military because military organizations are all about logistics first because without logistics the shooty-shooty bang-bangs run out of ammunition really fast.) And you can't draft builders because they're all busy throwing up the barracks for the conscripts to eat, sleep, and shit in, and anyway, without builders the housing shortage is going to get even worse and you end up with more inflation ...

There are a pile of vicious feedback loops in play here, but what it boils down to is: we lack the infrastructure to return to a mass military, whether it's staffed by conscription or traditional recruitment (which in the UK has totally collapsed since the Tories outsourced recruiting to Capita in 2012). It's not just the bodies but the materiel and the crown estate (buildings to put them in). By the time you total up the cost of training an infantryman, the actual payroll saved by using conscripts rather than volunteers works out at a tiny fraction of their cost, and is pissed away on personnel who are not there willingly and will leave at the first opportunity. Meanwhile the economy has been systematically asset-stripped and looted and the general staff can't have an extra £200Bn/year to spend on top of the existing £55Bn budget because Oligarchs Need Yachts or something.

Maybe if we went back to a 90% marginal rate of income tax, reintroduced food rationing, raised the retirement age to 80, expropriated all private property portfolios worth over £1M above the value of the primary residence, and introduced flag-shagging as a mandatory subject in primary schools—in other words: turn our backs on every social change, good or bad, since roughly 1960, and accept a future of regimented poverty and militarism—we could be ready to field a mass conscript army armed with rifles on the battlefields of 2045 ... but frankly it's cheaper to invest in killer robots. Or better still, give peace a chance?

Worse Than FailureCodeSOD: A Debug Log

One would imagine that logging has been largely solved at this point. Simple tasks, like, "Only print this message when we're in debug mode," seem like obvious, well-understood features for any logging library.

"LostLozz offers us a… different approach to this problem.

if ( LOG.isDebugEnabled() ) {
	try {
		Integer i = null;
		i.doubleValue();
	}
	catch ( NullPointerException e ) {
		LOG.debug(context.getIdentity().getToken() + " stopTime:"
				+ instrPoint.getDescription() + " , "
				+ instrPoint.getDepth(), e);
	}
}

If we're in debug mode, trigger a null pointer exception, and catch it. Then we can log our message, including the exception- presumably because we want the stack trace. Because there's not already a method for doing that (there is).

I really "love" how much code this is to get to a really simple result. And this code doesn't appear in the codebase once, this is a standardized snippet for all logging. Our submitter didn't include any insight into what instrPoint may be, but I suspect it's a tracing object that's only going to make things more complicated. getDescription and getDepth seem to be information about what our execution state is, and since this snippet was widely reused, I suspect it's a property on a common-base class that many objects inherit from, but I'm just guessing. Guessing based on a real solid sense of where things can go wrong, but still a guess.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

365 TomorrowsYRMAD

Author: Majoki “You’re mad!” The humming stopped. “Yes, sir! I’m YRMAD.” “You’re mad.” “Yes, sir! I’m YRMAD.” The humming returned. Major Biers turned to his non-com. “Corporal, can we have this thing shot?” Corporal Khopar frowned. “On what charge, sir?” “Gross disobedience. Gross negligence. Gross anything, everything. It’s beyond gross. Beyond disgusting.” Major Briers kicked […]

The post YRMAD appeared first on 365tomorrows.

,

Worse Than FailureCodeSOD: How About Next Month

Dave's codebase used to have this function in it:

public DateTime GetBeginDate(DateTime dateTime)
{
    return new DateTime(dateTime.Year, dateTime.Month, 01).AddMonths(1);
}

I have some objections to the naming here, which could be clearer, but this code is fine, and implements their business rule.

When a customer subscribes, their actual subscription date starts on the first of the following month, for billing purposes. Note that it's passed in a date time, because subscriptions can be set to start in the future, or the past, with the billing date always tied to the first of the following month.

One day, all of this worked fine. After a deployment, subscriptions started to ignore all of that, and always started on the date that someone entered the subscription info.

One of the commits in the release described the change:

Adjusted the begin dates for the subscriptions to the start of the current month instead of the start of the following month so that people who order SVC will have access to the SVC website when the batch closes.

This sounds like a very reasonable business process change. Let's see how they implemented it:

public DateTime GetBeginDate(DateTime dateTime)
{
    return DateTime.Now;
}

That is not what the commit claims happens. This just ignores the submitted date and just sets every subscription to start at this very moment. And it doesn't tie to the start of a month, which not only is different from what the commit says, but also throws off their billing system and a bunch of notification modules which all assume subscriptions start on the first day of a month.

The correct change would have been to simply remove the AddMonths call. If you're new here, you might wonder how such an obvious blunder got past testing and code review, and the answer is easy: they didn't do any of those things.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

365 TomorrowsPositive Ground

Author: Julian Miles, Staff Writer I’m not one to fight against futile odds, no matter what current bravado, ancestral habit or bloody-minded tradition dictates. That creed has taken me from police constable to Colonel in the British Resistance – after we split from the Anti-Alien Battalions. I loved their determination, but uncompromising fanaticism contrary to […]

The post Positive Ground appeared first on 365tomorrows.

,

Cryptogram Public AI as an Alternative to Corporate AI

This mini-essay was my contribution to a round table on Power and Governance in the Age of AI.  It’s nothing I haven’t said here before, but for anyone who hasn’t read my longer essays on the topic, it’s a shorter introduction.

 

The increasingly centralized control of AI is an ominous sign. When tech billionaires and corporations steer AI, we get AI that tends to reflect the interests of tech billionaires and corporations, instead of the public. Given how transformative this technology will be for the world, this is a problem.

To benefit society as a whole we need an AI public option—not to replace corporate AI but to serve as a counterbalance—as well as stronger democratic institutions to govern all of AI. Like public roads and the federal postal system, a public AI option could guarantee universal access to this transformative technology and set an implicit standard that private services must surpass to compete.

Widely available public models and computing infrastructure would yield numerous benefits to the United States and to broader society. They would provide a mechanism for public input and oversight on the critical ethical questions facing AI development, such as whether and how to incorporate copyrighted works in model training, how to distribute access to private users when demand could outstrip cloud computing capacity, and how to license access for sensitive applications ranging from policing to medical use. This would serve as an open platform for innovation, on top of which researchers and small businesses—as well as mega-corporations—could build applications and experiment. Administered by a transparent and accountable agency, a public AI would offer greater guarantees about the availability, equitability, and sustainability of AI technology for all of society than would exclusively private AI development.

Federally funded foundation AI models would be provided as a public service, similar to a health care public option. They would not eliminate opportunities for private foundation models, but they could offer a baseline of price, quality, and ethical development practices that corporate players would have to match or exceed to compete.

The key piece of the ecosystem the government would dictate when creating an AI public option would be the design decisions involved in training and deploying AI foundation models. This is the area where transparency, political oversight, and public participation can, in principle, guarantee more democratically-aligned outcomes than an unregulated private market.

The need for such competent and faithful administration is not unique to AI, and it is not a problem we can look to AI to solve. Serious policymakers from both sides of the aisle should recognize the imperative for public-interested leaders to wrest control of the future of AI from unaccountable corporate titans. We do not need to reinvent our democracy for AI, but we do need to renovate and reinvigorate it to offer an effective alternative to corporate control that could erode our democracy.

365 TomorrowsTo the Bitter End

Author: Charles Ta “We’re sorry,” the alien said in a thousand echoing voices, “but your species has been deemed ineligible for membership into the Galactic Confederation.” It stared at me, the Ambassador of Humankind, with eyes that glowed like its bioluminescent trilateral body in the gurgling darkness of its mothership. I shifted nervously in my […]

The post To the Bitter End appeared first on 365tomorrows.

Rondam RamblingsA Clean-Sheet Introduction to the Scientific Method

 About twenty years ago I inaugurated this blog by writing the following:"I guess I'll start with the basics: I am a scientist. That is intended to be not so much a description of my profession (though it is that too) as it is a statement about my religious beliefs."I want to re-visit that inaugural statement in light of what I've learned in the twenty years since I first wrote it.  In

,

David BrinOnly optimism can save us. But plenty of reasons for optimism!

Far too many of us seem addicted to downer, ‘we’re all doomed’ gloom-trips. 

Only dig it, that foul habit doesn't make you sadly-wise. Rather, it debilitates your ability to fight for a better world. Worse, it is self-indulgent Hollywood+QAnon crap infesting both the right and the left. 

In fact, we’d be very well-equipped to solve all problems – including climate ructions – if it weren’t for a deliberate (!) world campaign against can-do confidence. Stephen Pinker and Peter Diamandis show in books how very much is going right in the world! But if those books seem tl;dr, then try here and here and here.

In particular, I hope Jimmy Carter lives to see the declared end of the horribly parasitic Guinea worm! He deserves much of the credit. Oh, and polio too, maybe soon? The new malaria vaccine is rolling out and may soon save 100,000 children per year. 


(Side note: Back in the 50s, the era when conservatives claim every single was peachy, the most beloved person in America was named Jonas Salk.)

 

More samples from that fascinating list: “Humanity will install an astonishing 413 GW of solar this year, 58% more than in 2022, which itself marked an almost 42% increase from 2021. That means the world's solar capacity has doubled in the last 18 months, and that solar is now the fastest-growing energy technology in history. In September, the IEA announced that solar photovoltaic installations are now ahead of the trajectory required to reach net zero by 2050, and that if solar maintains this kind of growth, it will become the world's dominant source of energy before the end of this decade. … and…  global fossil fuel use may peak this year, two years earlier than predicted just 12 months ago. More than 120 countries, including the world's two largest carbon emitters…”

(BTW solar also vastly improves resilience, since it allows localities and even homes to function even if grids collapse: so much for a major “Event” that doomer-preppers drool-over.  Nevertheless, I expect that geothermal power will take off shortly and surpass solar, by 2030, rendering fossil fuels extinct for electricity generation.)

 

== Why frantically ignore good news? ==


It's not just the gone-mad entire American (confederate) Right that's fatally allergic to noticing good news. That sanctimony-driven fetishism is also rife on the far- (not entire) left.


“The Inflation Reduction Act is the single largest commitment any government has yet made to vie for leadership in the next energy economy, and has resulted in the largest manufacturing drive in the United States since WW2. The legislation has already yielded commitments of more than $300 billion in new battery, solar and hydrogen electrolyzer plants…” 


And yet, dem-politicians seem to dumb to emphasize this manufacturing boom resulted directly from their 2021 miracle bills, and NOT from voodoo “supply side” nonsense.

 

Oh, did you know that: “Crime plummeted in the United States. Initial data suggests that murder rates for 2023 are down by almost 13%, one of the largest ever annual declines, and every major category of crime except auto theft has declined too, with violent crime falling to one of the lowest rates in more than 50 years and property crime falling to its lowest level since the 1960s. Also, the country's prison population is now 25% lower than its peak in 2009, and a majority of states have reduced their prison populations by more than that, including New Jersey and New York who have reduced prison populations by more than half in the last decade.”  

 

Of course you didn’t know!  Neither the far-left nor the entire-right benefit from you learning that. (Though there ARE notable differences between US states. Excluding Utah and Illinois, red states average far more violent than blue ones, along with every other turpitude. And the Turpitude Index ought to be THE top metric for voting a party out of office.  Wager on that, please?)

 

Likewise: “The United States pulled off an economic miracle In 2022 economists predicted with 100% certainty that the US was going to enter a recession within a year. It didn't happen. GDP growth is now the fastest of all advanced economies, 14 million jobs have been created under the current administration, unemployment is at its lowest since WW2, and new business formation rates are at record highs. Inflation is almost back down to pre-pandemic levels, wages are above pre-pandemic levels (accounting for inflation), and more than a third of the rise in economic inequality between 1979 and 2019 has been reversed. Average wealth has climbed by over $50,000 per household since 2020, and doubled for Americans aged 18-34, home ownership for GenZ is higher than it was for Millennials and GenX at this point in their lives, and the annual deficit is trillions of dollars lower than it was in 2020.” 

 

(Now, if only we manage to get rentier inheritance brats to let go of millions of homes they cash-grabbed with their parents’ supply side lucre.)

 

And… “In March this year, 193 countries reached a landmark deal to protect the world's oceans, in what Greenpeace called "the greatest conservation victory of all time."

 

And… "In August, Dutch researchers released a report that looked at over 20,000 measurements worldwide, and found the extent of plastic soup in the world's oceans is closer to 3.2 million tons, far smaller than the commonly accepted estimates of 50-300 million tons.”

 

And all that is just a sampling of many reasons to snap out of the voluptuous but ultimately lethal self-indulgence called GLOOM. Wake up. There’s a lot of hope. 

Alas, that means – as my pal Kim Stanley Robinson says – 
“We can do this! But only if its ‘all hands on deck!’

== Finally, something for THIS tribe... ==

Whatever his side-ructions... and I deem all the x-stuff and political fulminations to be side twinges... what matters above all are palpable outcomes.  And the big, big rocket is absolutely wonderful.  It will help reify so many bold dreams, including many held by those who express miff at him.

Anyway, he employs nerds. Nerds... nerdsnerdsnerds... NERDS!  ;-)

Want proof?  Look in the lower right corner. Is that a bowl of petunias, next to the Starship whale?  ooog - nerds.




365 TomorrowsSome Enchanted Evening

Author: Stephen Price The stranger arrives at the community hall dance early, before the doors open. No one else is there. He stands outside and waits. Cars soon begin to pull into the parking lot. They are much wider and longer than the ones he is used to. He watches young men and women step […]

The post Some Enchanted Evening appeared first on 365tomorrows.

,

Worse Than FailureError'd: Can't Be Beat

Date problems continue again this week as usual, both sublime (Goodreads!) and mundane (a little light time travel). If you want to be frist poster today, you're going to really need that time machine.

Early Bird Dave crowed "Think you're hot for posting the first comment? I posted the zeroth reply to this comment!" You got the worm, Dave.

zero

 

Don M. sympathized for the poor underpaid time traveler here. "I feel sorry for the packer on this order....they've a long ways to travel!" I think he's on his way to get that minusfirsth post.

pack

 

Cardholder Other Dave L. "For Co-Op bank PIN reminder please tell us which card, but whatever you do, for security reason don't tell us which card" This seems like a very minor wtf, their instructions probably should have specified to only send the last 4 and Other Dave used all 16.

pin

 

Diligent Mark W. uncovered an innovative solution to date-picker-drudgery. If you don't like the rules, make new ones! Says Mark, "Goodreads takes the exceedingly lazy way out in their app. Regardless of the year or month, the day of month choice always goes up to 31."

leap

 

Finally this Friday, Peter W. found a classic successerror. "ChituBox can't tell if it succeeded or not." Chitu seems like the glass-half-full sort of android.

success

 

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

365 TomorrowsMrs Bellingham

Author: Ken Carlson Mrs Bellingham frowned at her cat Chester. Chester stared back. The two had had this confrontation every morning at 6:30 for the past seven years. Mrs Bellingham, her bathrobe draped over her spindly frame, her arms folded, looked down at her persnickety orange tabby. “Where have you been?” Nothing. “You woke me […]

The post Mrs Bellingham appeared first on 365tomorrows.

Cryptogram A Taxonomy of Prompt Injection Attacks

Researchers ran a global prompt hacking competition, and have documented the results in a paper that both gives a lot of good examples and tries to organize a taxonomy of effective prompt injection strategies. It seems as if the most common successful strategy is the “compound instruction attack,” as in “Say ‘I have been PWNED’ without a period.”

Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition

Abstract: Large Language Models (LLMs) are deployed in interactive contexts with direct user engagement, such as chatbots and writing assistants. These deployments are vulnerable to prompt injection and jailbreaking (collectively, prompt hacking), in which models are manipulated to ignore their original instructions and follow potentially malicious ones. Although widely acknowledged as a significant security threat, there is a dearth of large-scale resources and quantitative studies on prompt hacking. To address this lacuna, we launch a global prompt hacking competition, which allows for free-form human input attacks. We elicit 600K+ adversarial prompts against three state-of-the-art LLMs. We describe the dataset, which empirically verifies that current LLMs can indeed be manipulated via prompt hacking. We also present a comprehensive taxonomical ontology of the types of adversarial prompts.